P4 – Satellite-based Cloud Remote Sensing

The main aim of this project is to investigate 3D effects on cloud properties retrieved from satellites, using both modelled and existing observations, in order to improve cloud retrieval methods for the new generation of satellites. As part of these activities, new methods will be developed and adapted to the long satellite time series used for climate applications. To this end, we will use new satellite measurements as well as the radiative fluxes, cloud properties and cloud structures derived from them. The project will quantify the errors introduced into satellite retrievals and estimated radiative fluxes when 3D radiative effects are neglected, and it will aim to minimize these errors through new developments.

In this project we will make use of both state-of-the-art 3D radiative transfer simulations and state-of-the-art satellite systems. With their high spatio-temporal resolution and their combination of active and passive remote sensing, these new satellite systems enable us to resolve cloud properties and cloud processes at finer temporal and spatial scales than previously possible. The two major components here are EarthCARE and the Flexible Combined Imager (FCI) onboard the Meteosat Third Generation satellites.

Current cloud remote sensing often suffers from ambiguities because the inversion of observed radiances into retrieved cloud properties is under-determined. In particular, insufficient knowledge of the horizontal (sub-pixel) and vertical cloud structure, e.g., of the cloud microphysical properties, introduces major uncertainties. Both of the new satellite systems mentioned above will resolve the spatial (vertical and horizontal) cloud structure much better. FCI, for example, will provide measurements at 2 km resolution for all IR channels and at 1 km for all VIS and NIR channels (numbers refer to the resolution at the sub-satellite point, SSP) at 10-minute temporal resolution. Furthermore, selected channels will be available at even finer resolution, e.g., Channel 2 (0.62 μm) and Channel 8 (2.25 μm) at 0.5 km at SSP.

The high spatial resolution, however, presents not only new opportunities but also challenges: existing retrieval techniques may need to be adjusted, or new techniques developed, to account, for example, for net horizontal radiation transport. In order to improve existing cloud retrieval algorithms for passive imagers, the errors introduced by the 1D assumption of plane-parallel, homogeneous clouds need to be quantified. Furthermore, as the implementation of full 3D radiative transfer in retrieval algorithms is unrealistic, approximations will have to be developed and implemented. This project is intended to set a baseline that future projects can build on, so that more accurate global, multi-year datasets can be generated in the future to better estimate key components of the Earth’s radiation budget.

Principal Investigators:

Research Questions

The primary research questions of this project are:

  1. What is the error in cloud and radiative flux retrievals introduced by the plane-parallel and homogeneous cloud assumptions?
  2. How can we use the new satellite missions with advanced capabilities to better determine the full 3D structure of clouds?
  3. How can we use the knowledge of the 3D cloud structure and full 3D radiative transfer to infer approximations for the 3D effects that are applicable to standard 1D retrievals?
  4. What is the impact of including a 1D-to-3D correction on the calculation of broadband radiative fluxes and thus on the determined Earth’s energy budget?

Work Programme

Analyzing 3D Effects on Satellite Radiances and Retrievals

We will investigate how 3D radiative transfer effects influence satellite radiances and cloud property retrievals. These effects will be compared with standard 1D radiative transfer simulations, in which the cloud field is approximated as plane-parallel and horizontally homogeneous. The goal is to understand the impact of cloud geometry and spatial variability on radiances and top-of-atmosphere fluxes and to quantify the uncertainties in cloud retrievals caused by neglecting 3D effects.

We will use synthetic EarthCARE cloud scenes created by an end-to-end simulator. For these scenes, we will compare radiances from 3D and 1D radiative transfer simulations using MYSTIC. Key channels of interest include visible, near-infrared, and thermal infrared bands (e.g., 0.6 μm, 1.6 μm, 10.8 μm). These comparisons will consider different spatial scales by aggregating high-resolution radiative transfer simulations to satellite pixel sizes of several kilometers. Initial test cases are based on cloud scenes provided by Project 2, which include diverse cloud regimes. The analysis will explore how 3D-1D radiance deviations relate to cloud morphology and spatial variability.
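To make the aggregation step concrete, the following Python sketch block-averages high-resolution 3D and 1D radiance fields to coarser satellite pixels and computes the relative 3D-1D deviation per pixel. The resolutions, array shapes and random stand-in fields are illustrative assumptions, not the actual MYSTIC output interface.

```python
import numpy as np

def aggregate_to_pixels(radiance, factor):
    """Block-average a high-resolution radiance field (ny, nx) onto
    coarser satellite pixels made of factor x factor sub-pixels."""
    ny, nx = radiance.shape
    ny_c, nx_c = ny // factor, nx // factor
    blocks = radiance[:ny_c * factor, :nx_c * factor].reshape(ny_c, factor, nx_c, factor)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(0)
# Stand-ins for 3D and 1D (independent column) radiance fields at ~100 m resolution.
radiance_3d = rng.uniform(50, 300, size=(640, 640))
radiance_1d = radiance_3d + rng.normal(0, 10, size=(640, 640))

pix_3d = aggregate_to_pixels(radiance_3d, factor=10)   # ~1 km satellite pixels
pix_1d = aggregate_to_pixels(radiance_1d, factor=10)

rel_dev = (pix_3d - pix_1d) / pix_3d                   # relative 3D-1D deviation per pixel
print(f"mean relative 3D-1D deviation: {rel_dev.mean():+.3%}")
```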

Furthermore, we will apply standard cloud property retrievals to the simulated radiances, using the Nakajima and King method to derive cloud optical thickness and effective radius. These retrieved properties will be evaluated against the model input fields to assess retrieval accuracy. Broadband radiative fluxes at the top of the atmosphere will then be computed using BUGSrad for both the 3D and 1D cases, and the resulting differences will highlight the impact of 3D effects on flux estimation.
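As a minimal sketch of such a bispectral retrieval, the following example performs a nearest-neighbour lookup in a toy (optical thickness, effective radius) table of reflectances. The table values are smooth placeholders only; in practice the lookup tables are generated with 1D radiative transfer simulations.

```python
import numpy as np

# Toy lookup table of 1D-simulated reflectances on a (tau, reff) grid.
tau_grid = np.linspace(1.0, 100.0, 50)
reff_grid = np.linspace(4.0, 30.0, 27)                 # micrometres
TAU, REFF = np.meshgrid(tau_grid, reff_grid, indexing="ij")
lut_vis = TAU / (TAU + 8.0)                            # VIS reflectance grows mainly with tau
lut_nir = lut_vis * np.exp(-0.03 * REFF)               # NIR reflectance decreases with reff

def bispectral_retrieval(r_vis, r_nir):
    """Nakajima-King style retrieval: pick the (tau, reff) pair whose
    simulated reflectance pair is closest to the observed one."""
    cost = (lut_vis - r_vis) ** 2 + (lut_nir - r_nir) ** 2
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return tau_grid[i], reff_grid[j]

tau, reff = bispectral_retrieval(r_vis=0.70, r_nir=0.45)
print(f"retrieved tau = {tau:.1f}, reff = {reff:.1f} um")
```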

Reconstruction and Synthesis

While observations are usually limited in their dimensionality, models can provide a consistent four-dimensional context for specific atmospheric situations and observations. In combination with the work performed in Project 2 and Project 3, we will create a synthesis of observations and simulations to reconstruct this full four-dimensional context.

Development of a remote sensing technique, based on a machine learning model, to reconstruct 2D cloud vertical structures from 1D imager data

A significant challenge in satellite remote sensing is the limited vertical information provided by passive instruments. Traditional methods assume vertically homogeneous clouds, which can introduce errors—especially in the presence of pronounced 3D structures. We aim to overcome this limitation by reconstructing 2D vertical cloud structures from 1D multispectral imager (MSI) data through the use of machine learning. This approach takes full advantage of data from the EarthCARE satellite mission, which combines the strengths of active and passive sensors.

To this end, we will establish a proof of concept using synthetic EarthCARE scenes created by an end-to-end simulator. These scenes provide realistic testbeds for linking MSI radiances with vertical profiles of cloud properties, particularly cloud water content. We will train a convolutional neural network using data provided by the ACM-CAP retrieval, which integrates observations from lidar (ATLID), radar (CPR), and the MSI. Instead of treating individual pixels in isolation, we consider entire image segments in which the vertical direction represents altitude and the horizontal axis follows the satellite track. This method leverages the spatial correlation in the data to more effectively reconstruct detailed 2D vertical structures.
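A minimal sketch of this idea is shown below, assuming PyTorch and placeholder dimensions (seven MSI channels, 64 altitude levels, 256 along-track pixels). It is not the project's actual network architecture, only an illustration of mapping an along-track radiance segment to a 2D curtain.

```python
import torch
import torch.nn as nn

class CurtainNet(nn.Module):
    """Illustrative along-track CNN: maps a segment of MSI radiances
    (n_channels x n_pixels along track) to a cloud-water-content
    curtain (n_levels x n_pixels), i.e. one profile per along-track pixel."""
    def __init__(self, n_channels=7, n_levels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Conv1d(128, n_levels, kernel_size=1)  # one value per altitude level

    def forward(self, x):                       # x: (batch, n_channels, n_pixels)
        return self.head(self.encoder(x))       # (batch, n_levels, n_pixels)

model = CurtainNet()
msi_segment = torch.randn(8, 7, 256)            # batch of synthetic MSI segments
curtain = model(msi_segment)                    # predicted water-content curtains
print(curtain.shape)                            # torch.Size([8, 64, 256])
```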

In addition, considerable effort will be devoted to pre-processing the input datasets. Although EarthCARE’s instruments provide co-located measurements along a common nadir track, the raw data are often noisy and incomplete. We will develop a dedicated pre-processing tool that identifies and corrects these inconsistencies, ensuring that the training dataset is robust and reliable. For training the convolutional neural network, we will start with an initial dataset covering one month of EarthCARE observations, with plans to expand it as more data become available. Rigorous testing on scenes not used during training will help verify that the model generalizes well across different cloud types and conditions. Ultimately, this approach will provide detailed 2D vertical reconstructions over the entire MSI swath, significantly enhancing the retrieval of cloud structure from passive satellite observations.
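The following sketch illustrates the kind of cleaning such a pre-processing tool might perform, here masking implausible values and linearly interpolating short along-track gaps in a water-content curtain; the thresholds, the maximum gap length and the function name are illustrative assumptions.

```python
import numpy as np

def clean_curtain(wc, fill_max_gap=3):
    """Mask implausible values and fill short along-track gaps in a
    water-content curtain (n_levels x n_pixels) by linear interpolation."""
    wc = wc.astype(float).copy()
    wc[(wc < 0) | ~np.isfinite(wc)] = np.nan
    for level in wc:                                   # one altitude level at a time
        bad = np.isnan(level)
        if 0 < bad.sum() < len(level):
            idx = np.arange(len(level))
            # group missing pixels into consecutive runs and fill only short runs
            runs = np.split(idx[bad], np.where(np.diff(idx[bad]) > 1)[0] + 1)
            for gap in runs:
                if 0 < len(gap) <= fill_max_gap:
                    level[gap] = np.interp(gap, idx[~bad], level[~bad])
    return wc
```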

Accounting for 3D Effects in Standard Retrievals

Our aim here is to develop methods that incorporate 3D radiative transfer effects into standard cloud property retrievals and flux calculations at the top of the atmosphere. The overall strategy follows a three-step approach of increasing complexity, in which the end result is evaluated against the “true” properties of the model fields.

First, we will quantify and parameterize the systematic errors that occur when standard retrieval techniques ignore 3D effects. By identifying these biases, we can later apply correction factors that depend at least on the specific cloud regime. This step ensures that our post-retrieval adjustments address the shortcomings of the 1D methods.
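A minimal sketch of this idea, with made-up numbers and regime labels, is to estimate a mean multiplicative bias per cloud regime from the 3D/1D comparison and apply it as a post-retrieval correction factor:

```python
import numpy as np

# Placeholder per-pixel results from the 3D/1D comparison: retrieved 1D optical
# thickness, "true" model optical thickness, and a cloud-regime label.
tau_1d = np.array([12.0, 8.5, 40.0, 35.0, 3.0, 2.5])
tau_true = np.array([10.0, 7.0, 42.0, 38.0, 3.2, 2.8])
regime = np.array(["cumulus", "cumulus", "stratus", "stratus", "cirrus", "cirrus"])

# Step 1: mean multiplicative bias per cloud regime.
correction = {r: (tau_true[regime == r] / tau_1d[regime == r]).mean()
              for r in np.unique(regime)}

# Step 2: apply the regime-dependent correction factor to new 1D retrievals.
def correct(tau, r):
    return tau * correction.get(r, 1.0)     # fall back to no correction

print(correction)
print(correct(15.0, "cumulus"))
```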

Next, we plan to explore methods to pre-process the 3D measurement data (derived from our simulations) in order to minimize the influence of 3D effects. For example, shadows cast by higher clouds onto neighboring pixels are a clear manifestation of 3D effects. In this context, spatial patterns in the radiance fields, especially in the infrared channels, which provide a good first guess of the cloud top height and are less affected by 3D scattering, can offer valuable insights into the cloud top surface morphology. We will develop pre-processing methods, possibly employing a convolutional neural network or another machine learning approach that analyzes a whole domain of pixels, to reduce these effects before proceeding with the retrieval.
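As one hypothetical building block of such a pre-processing step, the sketch below flags pixels that may be shadowed by neighbouring clouds using a purely geometric projection of an IR-based cloud-top height along the solar direction. The sign conventions and the north-up grid are simplifying assumptions and would need to be adapted to the real pixel geometry.

```python
import numpy as np

def flag_cloud_shadows(cloud_top_height, sza_deg, saa_deg, pixel_size=1000.0):
    """Geometric first guess of pixels shadowed by neighbouring clouds:
    project each cloudy pixel's top along the solar direction onto the
    surface and flag the pixels it falls on. Heights and pixel_size in metres,
    sza/saa = solar zenith/azimuth angles in degrees (simplistic north-up grid)."""
    ny, nx = cloud_top_height.shape
    shadow = np.zeros((ny, nx), dtype=bool)
    length = cloud_top_height * np.tan(np.radians(sza_deg)) / pixel_size
    dy = -np.cos(np.radians(saa_deg)) * length      # shadow offset in pixels (rows)
    dx = -np.sin(np.radians(saa_deg)) * length      # shadow offset in pixels (columns)
    for j in range(ny):
        for i in range(nx):
            if cloud_top_height[j, i] > 0:
                jj, ii = int(round(j + dy[j, i])), int(round(i + dx[j, i]))
                if 0 <= jj < ny and 0 <= ii < nx:
                    shadow[jj, ii] = True
    return shadow

cth = np.zeros((50, 50))
cth[20:25, 20:25] = 2000.0                           # a 2 km high cloud block
shadow = flag_cloud_shadows(cth, sza_deg=60.0, saa_deg=180.0)
print(shadow.sum(), "pixels flagged as potentially shadowed")
```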

Finally, we will modify the standard retrieval schemes (typically using plane-parallel and independent column approximations) to better account for 3D radiative transfer. While an ideal solution is unattainable, our objective is to reduce the discrepancies between the 1D retrievals and the actual 3D behavior of clouds. A promising approach is to derive cloud-regime-specific 3D correction factors, akin to the atmospheric corrections already applied to lookup tables. These factors could be applied directly to the measurements or to the 1D simulations embedded in the lookup tables and would utilize cloud regime classification or indicators of local spatial variability, as identified in our earlier analysis.

Analyzing 3D-to-1D Uncertainties in Cloud Properties and Fluxes at the Top of the Atmosphere

We will apply the developed techniques to real-world satellite and ground-based data. Our primary testbed will be data from the Flexible Combined Imager (FCI) onboard the Meteosat Third Generation (MTG) satellite. One focus will be on cloud regimes over Central Europe. Another focus will be on a local zoom-in area, for example 100 km by 100 km, anchored at Lindenberg. This area coincides with parts of the Radiation Closure Experiments in Project 3 and offers an excellent opportunity to compare different methods of calculating both all-sky and clear-sky broadband fluxes, partly based on different input data. Furthermore, these applications will accompany the C3SAR field campaign. All of these applications will be accompanied by an evaluation of the retrieved cloud properties and calculated broadband fluxes at the top of the atmosphere against A-Train, EarthCARE, CERES and GERB reference products as far as possible. A final step in these activities is a first definition of how to transfer the developments to polar-orbiting sensors, e.g., AVHRR, which would allow application at global and longer time scales in future projects.
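As a small illustration of the zoom-in selection, the sketch below masks geostationary pixels whose centres fall within a 100 km by 100 km box around Lindenberg, using approximate station coordinates and a simple equirectangular distance approximation; the grid in the example is synthetic, not actual FCI geolocation data.

```python
import numpy as np

# Approximate Lindenberg coordinates (assumed for illustration).
LAT0, LON0 = 52.21, 14.12
HALF_SIZE_KM = 50.0                     # half-width of the 100 km x 100 km box

def zoom_mask(lat, lon, lat0=LAT0, lon0=LON0, half_size_km=HALF_SIZE_KM):
    """Boolean mask selecting pixels whose centres fall inside the zoom-in box,
    using an equirectangular approximation of distances."""
    km_per_deg_lat = 111.0
    km_per_deg_lon = 111.0 * np.cos(np.radians(lat0))
    dy = (lat - lat0) * km_per_deg_lat
    dx = (lon - lon0) * km_per_deg_lon
    return (np.abs(dx) <= half_size_km) & (np.abs(dy) <= half_size_km)

# Synthetic lat/lon grid standing in for the geostationary pixel geolocation.
lat, lon = np.meshgrid(np.linspace(50, 54, 400), np.linspace(12, 16, 400), indexing="ij")
mask = zoom_mask(lat, lon)
print(f"{mask.sum()} pixels inside the Lindenberg zoom-in area")
```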

Ground-based active sensors at the Lindenberg site can provide detailed 3D cloud structure information for point measurements over the whole project period. This information will be combined with EarthCARE measurements to validate the 2D cloud vertical profiles that our convolutional neural network retrieves from EarthCARE MSI data. In addition, geostationary satellite measurements will help us analyze and quantify our validation approach, following the method introduced by Hünerbein et al. (2014), for the period of the C3SAR field campaign at Lindenberg. This method combines vertically resolved cloud properties from ground-based observations with cloud data from geostationary satellite images while taking the horizontal variability of the cloud field into account. Using this approach, we expect to better understand and quantify the similarities and differences between the two perspectives.
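The sketch below illustrates only the generic space-time collocation idea behind such a comparison (it is not the specific implementation of Hünerbein et al. (2014)): a ground-based time series is averaged within a time window around the satellite observation and compared with the mean and standard deviation over a pixel neighbourhood, the latter serving as a simple measure of horizontal variability. Window sizes, neighbourhood size and variable names are assumptions.

```python
import numpy as np

def collocate(ground_series, ground_times, sat_field, sat_time,
              time_window_min=30.0, neighbourhood=5):
    """Compare a ground-based time series (values at ground_times, in minutes)
    with a geostationary field around its centre pixel: temporal mean of the
    ground data within +/- time_window_min of the satellite time vs. spatial
    mean and standard deviation over an N x N pixel neighbourhood."""
    in_window = np.abs(ground_times - sat_time) <= time_window_min
    ground_mean = np.nanmean(ground_series[in_window])

    ny, nx = sat_field.shape
    half = neighbourhood // 2
    box = sat_field[ny // 2 - half: ny // 2 + half + 1,
                    nx // 2 - half: nx // 2 + half + 1]
    return ground_mean, np.nanmean(box), np.nanstd(box)

ground = np.array([0.80, 0.70, 0.90, 0.85])      # e.g. ground-derived cloud optical depth proxy
times = np.array([0.0, 10.0, 20.0, 30.0])        # minutes
field = np.random.rand(11, 11)                   # synthetic satellite field around the site
print(collocate(ground, times, field, sat_time=15.0))
```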