CRTM Monthly Meeting Protocol

Core Topic of the Meeting: CloudCT - Computed Tomography by 10 Small Satellites for Better Climate Prediction

Date: 2022-06-30                                  Time: 15:00h EST

Location: Virtual

Invited Speakers: Prof. Dr. Klaus Schilling (Center for Telematics e.V.)

Meeting Chair: Benjamin Johnson

Keeper of the Minutes: Patrick Stegmann

Attendees: Benjamin Johnson, Cheng Dang, Patrick Stegmann, Klaus Schilling, Ming Chen, Hui Christophersen, Shih-Wei Wei, Nick Nalli, Yingtao Ma, Yulan Hong, Hongli Wang, Jianjun Jin, Mingjin Tong, Quanhua Liu, Flavio Iturbide, Yanqiu Zhu, Jim Jung, Cory Martin



Agenda Item 1:

Invited Talk


Discussion:

Schilling: CloudCT is a project to coordinate 10 small satellites. With the European Research Council (ERC) we had the chance to combine our expertise. The focus of the project is warm clouds. Clouds are essential for the climate because of their albedo, and here we want to take a look inside and characterize their 3D structure. The basic principle we are using is tomography, as in medicine. In medicine you have X-rays as a light source; the beams are rotated and scattered by the bones. A similar principle is applied here: solar light is scattered by clouds, and with satellites we can observe the scattered light from different angles. So we need satellites flying in formation with each other to observe from different angles. It is a new measurement principle in Earth system science. Here you can see it in more detail. There are some problems associated with it. The interpretation has to be derived from the backscattered light of the same observations. There are different noise effects, such as turbulence in the atmosphere, and these need to be filtered out in the retrieval. With respect to the noise, it is much better to look from above, with satellites. You could also look from below, but there you look through a denser atmosphere and the noise is higher.

The main goal of CloudCT is to develop formation flight for SmallSats to limit costs. One challenge is the high pointing accuracy needed, and the images need to be prepared in such a way that they can be combined. In this way a comprehensive 3D retrieval can be developed, so it is a step beyond the current 1D and 2D retrievals. This will be applied in particular to warm clouds to improve climate forecasts.

To summarize the problems: the first issue is to find a suitable payload. We did a trade-off study with different instrument types, such as MW, IR, and visible. Second, we need to design these small CubeSats. The mass allocation is almost one third for the payload. For propulsion we also have thrusters for formation flight. Also, for a CubeSat, the payload data handling for images has limits, so we try to implement suitable communication links. The next challenge is to coordinate 10 satellites. We have inter-satellite links and autonomous formation control implemented. Planning is done on the ground in coordination with the autonomous formation control in orbit; this way we have minimal telemetry requirements.

The experience dates back to NetSat, where the control principles were already tested. NetSat has been producing nice images up to now. Several successor missions have been implemented. QUBE is a quantum key distribution mission. TOM is used for 3D Earth observation, e.g. for volcanic ash characterization; in this way we also obtain 3D information, such as the height of the ash cloud, to establish safe aircraft detour maneuvers. As we have seen in the past, volcanic eruptions sometimes ground fleets of aircraft. LoLaSat is a communication system with low latency. So there are several applications in preparation for the coming years.
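
As an illustration of the multi-view measurement principle described above, here is a minimal toy sketch that poses multi-angle observations of a cloud slice as a linear tomographic inverse problem. The grid size, viewing angles, noise level, and Tikhonov regularization are placeholders chosen for illustration only; the actual CloudCT retrieval must account for multiple scattering with full 3D radiative transfer and is not a simple linear projection model.

```python
# Toy sketch (not the CloudCT algorithm): multi-angle observations posed as a
# linear tomographic inverse problem y = A x, solved by regularized least squares.
import numpy as np
from scipy.ndimage import rotate

N = 16                                    # grid size of the toy "cloud" slice
yy, xx = np.mgrid[0:N, 0:N]
x_true = np.exp(-(((xx - 8) ** 2) / 10 + ((yy - 7) ** 2) / 6))  # extinction field
angles = [-40.0, -20.0, 0.0, 20.0, 40.0]  # hypothetical viewing angles (degrees)

def project(field_flat, angle):
    """Line-of-sight integral of the field at one viewing angle."""
    field = field_flat.reshape(N, N)
    return rotate(field, angle, reshape=False, order=1).sum(axis=0)

def forward(field_flat):
    """Stack the projections from all viewing angles into one observation vector."""
    return np.concatenate([project(field_flat, a) for a in angles])

# The projections are linear in the field, so the forward operator can be
# written as a matrix A.  Build it column by column from unit basis vectors.
A = np.stack([forward(e) for e in np.eye(N * N)], axis=1)
y = forward(x_true.ravel()) + 0.01 * np.random.default_rng(0).normal(size=A.shape[0])

# Tikhonov-regularized least squares: min ||A x - y||^2 + lam ||x||^2
lam = 1e-2
x_hat = np.linalg.lstsq(
    np.vstack([A, np.sqrt(lam) * np.eye(N * N)]),
    np.concatenate([y, np.zeros(N * N)]),
    rcond=None,
)[0].reshape(N, N)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```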

Here you can see a ground testing facility with a turntable to test satellite formation flight. This was the winner of the Airbus challenge devoted to the production of satellites. Here you can see an electric propulsion system; these are very efficient in terms of fuel. It is only one thruster, but it needs to be combined with an attitude control system, which is just a cube of 2 cm edge length. Its power consumption is very low as well: we require only 0.5 W to control movement about all 3 axes. This way we can achieve high-precision pointing. The CubeSat has a mass of a few kilograms, and all 10 satellites are similar.

Here you see the CloudCT payload characteristics. It is a commercial camera upgraded for the space environment, with a chip carrying polarizing filters. The filters let us obtain polarized images of the same scene. The ground resolution is about 40 m, and there is a red filter on board. In the video you see now are the 10-satellite configurations: in the beginning we start as a string of pearls and then move to a cartwheel. One of the targets is to find out which configuration is best for imaging. Imaging occurs when the satellites take simultaneous images from different perspectives and the 3D scene is reconstructed. We can also distinguish between a clean and a polluted atmosphere.

Here you see the test environment. The turntables are fixed but have 3 degrees of freedom in attitude; we simulate the orbital motion by moving the scene with a robot. This multi-satellite test environment is quite unique for Europe.

The almost-last slide is a survey. The 3-axis attitude control is a key component. The satellite formation capability is a product of NetSat, and the focus of the subsequent missions is on obtaining 3D images from photogrammetry. We combine our experience with low Earth orbit missions to perform biomonitoring. We try to automate these robotic satellite missions as far as possible; they are precursors to future Earth observation networks. On the satellite side, the small satellites gain more and more capabilities. Such satellite formations have advantages such as higher fault tolerance, robustness, and higher temporal resolution. Of course, this is just the beginning, and there are a lot of challenges in terms of control. What might be of interest to you is sensor fusion. With this I want to finish my presentation.
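
For context on the polarized-imaging payload mentioned above, the talk only states that the chip carries polarizing filters. Assuming a typical micro-polarizer layout with four filter orientations (0°, 45°, 90°, 135°), the linear Stokes parameters and degree/angle of linear polarization can be formed per pixel as sketched below; this is standard polarimetric imaging, not a confirmed CloudCT design detail.

```python
# Sketch: per-pixel linear polarization products from a four-orientation
# micro-polarizer camera.  The 0/45/90/135 degree layout is an assumption.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer orientations."""
    I = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
    Q = i0 - i90                                         # 0/90 degree difference
    U = i45 - i135                                       # 45/135 degree difference
    dolp = np.sqrt(Q**2 + U**2) / np.maximum(I, 1e-12)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(U, Q)                        # angle of linear polarization
    return I, Q, U, dolp, aolp

# Example with synthetic 4-channel imagery
rng = np.random.default_rng(1)
frames = rng.uniform(0.2, 1.0, size=(4, 128, 128))
I, Q, U, dolp, aolp = linear_stokes(*frames)
print(dolp.mean(), aolp.mean())
```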


Result:


-        

Tasks:


-        

Responsible People:


-        

Deadline:

N/A



Agenda Item 2:

Q&A

Discussion:

Ben: The LEO, what’s the altitude and the expected lifetime?


Schilling: With LEO, we are at 600 km and we expect a reasonable lifetime. The precursor satellites already had lifetimes of up to 6 years.


Quanhua: Does CloudCT provide images, or also radiances?


Schilling: It provides different polarized images with different filters.


Quanhua: How do you deal with calibration between sensors?


Schilling: We have a planned observation area where we can do the calibration. This is also part of the experience we want to gain.


Ming: Very interesting presentation. When you talk about the autonomous control, does this kind of control depend on the cloud shape?


Schilling: We try to identify interesting targets later in the mission. The front runner will identify a target and the others will follow. Interactively, the satellite identifies interesting spots.


Ming: In meteorology we have deep convective clouds. If you use just the visible, do you observe the formation of the cloud clusters or their interior? And is there any challenge regarding that specific objective?


Schilling: The challenge is to orient your 10 satellites to the same target. So, you have a highly dynamic system and completely different attitude angles. This is the first time that you have such a highly coordinated system of satellites. For the first goal, we are happy if we can characterize the interior structure of a cloud.


Ming: My last question is related to radiative transfer. What kind of radiative transfer model are you using?


Schilling: Yes, this is a question for my colleagues. I am just the satellite guy. Of course, you cannot just use what you have in medicine. You have to put a lot of effort into modelling the cloud properties.


Ben: Yes, some people have done a lot of work in this field, such as Steve Albers. Have you thought about ice clouds? Those are a bigger challenge.


Schilling: No, right now we are looking to gain experience.


Ben: In the next years we want to improve the CRTM in the visible, so this is something we will keep in mind.


Schilling: Right, fast is not the way to go right now.


Ben: Do you have a launch platform?


Schilling: Unfortunately, we are rather limited because of the Ukraine war, so we have to reschedule, probably with a SpaceX launch.


Ben: There is a company called tomorrow.io which is building radars; this brings to mind the possibility of including radar as well, to do 3D reconstruction of heavy clouds.


Schilling: Our main trick is to use the sunlight as an illuminator. It’s the cheapest illuminator possible for your power budget.


Ming: So technically, for developing clouds, if you want to see the internal structure, can you keep the satellites stationed over the top?


Schilling: You can focus on a specific area. With many of the existing satellites you only get a high resolution 2D image.


Yingtao: Which one is harder to fly, the visible band or the microwave sensor?


Schilling: MW is much more costly. With radar you have to emit this high power from the satellite.


Yingtao: For the MW sensor you can also measure the emission from the cloud.


Quanhua: I think Yingtao is talking about passive MW, but MW cannot achieve a high resolution. The highest resolution is about 5 km, for something like GMI; for a sounder it's about 15 km.


Yingtao: So, if you want higher resolution you need a larger antenna.


Quanhua: And for the professor’s project, they’re talking about higher resolution.


Schilling: That’s the world of big satellites, 500kg or more.
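
As a rough illustration of the antenna-size point above, a diffraction-limited footprint scales roughly as 1.22 × wavelength × range / aperture. The short back-of-the-envelope calculation below uses generic assumed values (an 18.7 GHz-like wavelength, a 600 km nadir view, and two aperture sizes); these are not the specifications of GMI or of any mission discussed in the meeting.

```python
# Order-of-magnitude illustration: passive MW footprint vs. antenna aperture.
# All numbers are generic assumptions for a nadir-viewing LEO imager.
wavelength = 0.016     # m, roughly an 18.7 GHz channel
altitude   = 600e3     # m, assumed nadir range
for aperture in (1.2, 5.0):   # m
    footprint = 1.22 * wavelength * altitude / aperture   # diffraction limit
    print(f"{aperture:4.1f} m antenna -> ~{footprint / 1e3:.1f} km footprint")
```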


Ben: We would like to cooperate on this, and it would be good to have a technical discussion. I think that in combination with LEO sensors that can resolve all spatial scales, this can improve data assimilation.




Result:


-        

Tasks:


-        

Responsible People:


-        

Deadline:

N/A



Agenda Item 3:

Round-up

Discussion:

Ben: CRTM v2.4.1 is still delayed. Cheng, do you want to give an update?


Cheng: For the v2.4.1 release, there are 2 major updates. The first is the aerosol updates; the second adds relative humidity as a variable. Adding this variable could be a burden to the DA system, so this is one thing holding us back. The other thing is that Nick and Jim have done a lot of work on IR water and snow tables for the CRTM surface emissivity. Once all the tests are done, the release will be ready.


Quanhua: Cheng, for Nick's surface emissivity table, which band is that for?


Cheng: IR only. Nick?


Nick: The IR tables will be from 600 to 6000 cm^-1.


Ming: Nick, what’s the difference between the version that Cheng is implementing and the one that we implemented in CSEM?


Nick: The ocean model is the same, it’s the same look-up table. But there’s a new snow model look-up table. So, I can work with you on the look-up tables.


Ming: That's not a problem since Cheng has already implemented it. My concern is only about the IR water emissivity, because we have implemented that in CSEM. For the future, if we continue working on the CRTM in the current structure we will continue top-down development, and it seems we are losing version control. It also seems that there is a delay in moving from v2.4.1 to v3.0.0. We need to follow some design plan; otherwise we just go back and forth.


Ben: I can see where the argument has merit. We were hesitant to create a new major release. We just want to make sure that it’s a nice clean release. Version 3.0.0 is expected in September.


Ming: We were told it’s in July. Do we have a new AOP?


Ben: Not yet. It’s still being developed. It’s not ready yet. We will get there soon. We really want to get to version 3.0.0 of the CRTM, because this is being used in the next JEDI release. It will also include CSEM with full polarization support.


Ming: I agree, but it seems that we are lacking the coordination.


Ben: Patrick, do you have any updates?


Patrick: I have finished computing the coefficients for all 7 TROPICS constellation satellites and was otherwise busy supporting the ongoing JEDI UFO code sprint.


Quanhua: Are you planning to compute the coefficients for all TROPICS satellites, or are you re-using the Pathfinder coefficients?


Patrick: I have received the SRFs for all satellites from MIT and have computed the coefficients based on those data.


Quanhua: We have done some studies with TROPICS data for Pathfinder.


Hui Christophersen: Which one of the channels did you assimilate?


Quanhua: You just need to adjust the surface polarization to schemes available in the


Ben: Hui, have you done other assimilation experiments with TROPICS?


Hui: I am working with Ben Ruston to do UFO tests.


Ben: Any updates from the community?


Quanhua: I wanted to talk about the AI-based RT model. It already looks promising if we process large datasets; forward calculations are very fast. Each individual profile, however, is actually very slow. We also wrote our own Fortran code, but for our own Fortran code we don't have the Jacobian part. If we work on each profile individually, it's better to use the Fortran version, but if you work on a large number of profiles, the Python version is better. But we have encountered problems. The Jacobian calculation is slow, and it's proportional to the number of channels. For MW it's OK, but for hyperspectral IR you have several thousand channels. If you know how to compute fast Jacobians, please let us know.
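
One possible approach to the fast-Jacobian question is automatic differentiation. The minimal JAX sketch below assumes the emulator is a small fully connected network mapping a profile vector to channel brightness temperatures; the 3-hidden-layer architecture, layer widths, input size, and channel count are placeholders, not the actual model Quanhua described. Forward-mode differentiation (jacfwd) has a cost that scales with the number of inputs rather than the number of channels, and vmap/jit batch and compile the calculation over many profiles so the per-call Python overhead is paid only once.

```python
# Sketch: batched Jacobians of a neural-network RT emulator via JAX autodiff.
# Architecture and sizes are placeholder assumptions for illustration.
import jax
import jax.numpy as jnp

N_IN, N_HIDDEN, N_CHAN = 137, 256, 3000   # assumed profile size / channel count

def init_params(key):
    sizes = [N_IN, N_HIDDEN, N_HIDDEN, N_HIDDEN, N_CHAN]
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) / jnp.sqrt(a), jnp.zeros(b))
            for k, (a, b) in zip(keys, zip(sizes[:-1], sizes[1:]))]

def emulator(params, x):
    """Toy MLP: profile vector -> brightness temperatures for all channels."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b

params = init_params(jax.random.PRNGKey(0))

# Forward-mode Jacobian: cost scales with the number of *inputs* (~137),
# not with the number of channels, so it stays affordable for hyperspectral IR.
jac_single = jax.jacfwd(emulator, argnums=1)

# vmap batches the Jacobian over many profiles; jit compiles the whole thing.
jac_batch = jax.jit(jax.vmap(jac_single, in_axes=(None, 0)))

profiles = jax.random.normal(jax.random.PRNGKey(1), (128, N_IN))
K = jac_batch(params, profiles)           # shape (128, N_CHAN, N_IN)
print(K.shape)
```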


Ben: Leonhard Scheck's MFASIS model should be quite fast in the visible.


Quanhua: Let me show you the timing test. Do you see the table? One row is the CRTM, one is the AI Fortran code. If you calculate just one profile each time, the AI Python code is much slower. For the Jacobian, the AI Python code is extremely slow.


Ben: Is there any I/O associated with it?


Quanhua: The timing does not include I/O here.


Ben: What’s more impressive to me is that the CRTM can run 100k profiles in the same time it takes to run one profile. And you ran that using OpenMP?


Quanhua: Yes.


Ming: So, you are talking about a specific sensor?


Quanhua: ABI with 3 hidden layers for the network.


Ming: But your input will also include the profiles.


Quanhua: We are wondering, even for the Forward model, why the Python model is so slow for the profile-by-profile case.


Ming: Normally, the Python frameworks don't compute Jacobians directly. It will take a lot of time if you have many input features.


Yingtao: When you run the CRTM you load a profile set. When you run a profile set with OpenMP, is this contributing to the speedup here?


Quanhua: The CPU time takes all running times into account. CRTM v3.0.0 can also use OpenMP over channels.


Yingtao: OpenMP over the profile set doesn't benefit much.
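
On the question of why the Python model is so slow in the profile-by-profile case: a likely explanation is that the fixed per-call overhead dominates when each call does very little numerical work, and batching amortizes it. The self-contained sketch below demonstrates the effect with a plain matrix multiply standing in for a network layer; it is not Quanhua's emulator, and the sizes are arbitrary.

```python
# Demonstration: per-profile Python calls vs. one batched call.
# The "forward model" is a single matrix multiply used only for illustration.
import time
import numpy as np

n_profiles, n_in, n_chan = 10_000, 137, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(n_in, n_chan))
profiles = rng.normal(size=(n_profiles, n_in))

t0 = time.perf_counter()
out_loop = np.stack([p @ W for p in profiles])   # one Python call per profile
t1 = time.perf_counter()
out_batch = profiles @ W                         # one vectorized call for all profiles
t2 = time.perf_counter()

assert np.allclose(out_loop, out_batch)
print(f"per-profile loop: {t1 - t0:.3f} s, batched: {t2 - t1:.3f} s")
```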


Result:


N/A



Tasks:


N/A


Responsible People:


N/A


Deadline:

N/A


Ben: We would like to find out more about the timing of the Jacobian.


Meeting End: 16:30h EST







