Super-resolution, real challenges: EvoLand at LPS 25

23 Jun 2025

SISR is often applied to boost the resolution of satellite images from missions like Sentinel-2 by learning from high-resolution counterparts captured by sensors such as Venµs or WorldView. While this cross-sensor learning approach ensures the best performance on real end-user data, it introduces a less visible but critical issue: radiometric and geometric discrepancies between sensors. [for the 🛠️ Super-resolution toolbox 🛠️ scroll to the end 👇]

At this year’s Living Planet Symposium (LPS 2025), Julien Michel of EvoLand partners CESBIO/CNES delivered an insightful presentation on a topic with deep technical implications: how to effectively apply Single Image Super-Resolution (SISR) for quantitative Earth observation.

Presenting in the session “Super-resolution in Earth Observation: The AI change of paradigm”, Michel, alongside co-authors Ekaterina Kalinicheva and Jordi Inglada, explored why SISR, though promising, is not without pitfalls when models are trained across different satellite sensors.

Scatter plots between surface reflectances of Sentinel-2 and the HR target image for Worldstrat [1] (top row) and Sen2Venμs [2] (bottom rows)

Colour compositions of Sentinel-2 and the HR target image for Worldstrat [1] (rightmost columns) and Sen2Venμs [2] (leftmost columns)

The Hidden Complexity Behind Sharper Satellite Imagery

Discrepancies, stemming from differences in sensor characteristics and viewing conditions, can mislead deep learning models, causing them to learn distortions instead of genuinely useful patterns.
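Radiometric discrepancies of this kind are easy to check for once image pairs are co-registered. A minimal sketch, using synthetic data only (the 1.08 gain and 0.015 bias are invented for illustration): fit a least-squares line between paired reflectances of the two sensors; a slope far from 1 or an intercept far from 0 flags a discrepancy that a SISR model could wrongly learn to reproduce.

```python
import numpy as np

# Hypothetical paired surface reflectances: the HR target is modelled as
# the Sentinel-2 band with a sensor-specific gain and bias plus noise
# (synthetic numbers, for illustration only).
rng = np.random.default_rng(0)
s2_band = rng.uniform(0.0, 0.4, 10_000)                     # "Sentinel-2" reflectances
hr_band = 1.08 * s2_band + 0.015 + rng.normal(0, 0.005, s2_band.shape)

# Least-squares fit hr ≈ gain * s2 + bias; deviation from (1, 0)
# quantifies the radiometric discrepancy between the two sensors.
gain, bias = np.polyfit(s2_band, hr_band, deg=1)
print(f"gain={gain:.3f} bias={bias:.4f}")
```

In practice this is the kind of relationship visible in the scatter plots above: a cloud of points that does not hug the identity line.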

“Too often, the focus is placed on model complexity, while training and evaluation strategies in cross-sensor setups remain underexplored,” Michel explains.

A Better Way to Measure Super-Resolution Success

The CESBIO team addressed this gap by proposing a more robust framework for training and evaluating SISR models in cross-sensor contexts. Their approach includes:

  • New metrics: Moving beyond popular but limited measures like PSNR and SSIM, the team introduces frequency-domain metrics and tools for detecting radiometric and geometric distortion.
  • Smarter training: The framework includes strategies to help models avoid “learning the distortion”, and instead, generalise meaningfully across datasets.
  • Combined evaluation: Using a mix of indicators like BRISQUE (for independent image quality) and FRR (for spatial frequency recovery) allows researchers to gain a multi-dimensional understanding of performance.
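To make the frequency-domain idea concrete, here is an illustrative sketch, not the FRR metric from the paper: compare the share of spectral energy above a cutoff frequency in the super-resolved image against the high-resolution reference. The cutoff value and the toy low-pass filter are assumptions for the demo.

```python
import numpy as np

def high_freq_energy_ratio(pred, ref, cutoff=0.25):
    """Spectral energy above `cutoff` (cycles/pixel) in `pred` relative
    to `ref`. Illustrative only: a ratio well below 1 means the
    super-resolved image fails to restore high spatial frequencies."""
    def hf_energy(img):
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        return spec[np.hypot(fy, fx) > cutoff].sum()
    return hf_energy(pred) / hf_energy(ref)

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))
blurred = ref.copy()
# crude low-pass: average each pixel with its right-hand neighbour
blurred[:, :-1] = 0.5 * (ref[:, :-1] + ref[:, 1:])
print(round(high_freq_energy_ratio(blurred, ref), 3))  # < 1: high frequencies lost
```

A full-reference metric like this complements a no-reference score such as BRISQUE, which judges perceptual quality without needing the HR target at all.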

Their work is backed by openly available cross-sensor datasets and codebases, enabling reproducibility and fair benchmarking – see the links at the end of this blog.

Experiments on Worldstrat [1] using ESRGAN (×4)

Experiments on Sen2Venμs [2] using ESRGAN (×2)

What the Community Needs to Know

If you’re using cross-sensor datasets to train SISR models, there are a few critical things to keep in mind:

  • Beware of hidden distortions: Your model might unintentionally learn sensor differences, not just image content.
  • Adjust your training strategy: Frameworks like the one proposed by Michel et al. (2025) are key.
  • Rethink your metrics: Traditional metrics don’t reflect real-world performance in cross-sensor cases.
  • Use a mix of evaluation tools: Combine metrics for a more nuanced view of image quality and spatial detail.
  • Mind the application context: SISR is best suited when pairing global low-resolution coverage (like Sentinel-2) with tasked high-resolution missions (like Venµs). If both are global coverage (e.g., Sentinel-2 and Landsat), data fusion may be the better path.
  • Avoid overly large resolution jumps: bigger isn’t always better; high super-resolution ratios can look pretty but be wrong.

This work by CESBIO reinforces EvoLand’s broader mission: to improve the reliability, quality, and transparency of next-generation land monitoring services. By tackling the subtle challenges in AI-driven image enhancement, projects like this help push the boundaries of what’s possible for Copernicus and beyond.

Coming Soon: A New Era in Spatio-Temporal Fusion

Following their work on SISR, the CESBIO team is also preparing to introduce a powerful new approach to fusing satellite image time series. TAMRFSITS (Temporal Attention Multi-Resolution Fusion of Satellite Image Time-Series) aims to seamlessly combine data from missions like Sentinel-2 and Landsat-8, delivering all bands, any time, at the best possible spatial resolution.
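The TAMRFSITS architecture itself is described in the preprint linked below; as a rough intuition only, temporal attention weights each acquisition date in a series by its similarity to a query date and fuses them accordingly. Everything in this sketch (dimensions, embeddings, scaling) is an assumption for illustration, not the model’s actual design.

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 6, 8                        # 6 acquisition dates, 8-dim feature per date
series = rng.normal(size=(T, D))   # e.g. embeddings of Landsat-8 dates
query = rng.normal(size=(D,))      # e.g. embedding of a target Sentinel-2 date

scores = series @ query / np.sqrt(D)   # scaled dot-product similarity per date
weights = np.exp(scores - scores.max())
weights /= weights.sum()               # softmax over the time dimension
fused = weights @ series               # attention-weighted temporal fusion

print(weights.round(3), fused.shape)
```

The appeal of attention here is that dates closest in content to the query (e.g. cloud-free, phenologically similar) dominate the fusion instead of a fixed interpolation window.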

USEFUL LINKS: 🛠️ Super-resolution toolbox 🛠️

🛠️ Cross-Sensor Single Image Super Resolution:
🔍PAPER: https://hal.science/hal-04723225v3 (open access, accepted version)
https://ieeexplore.ieee.org/document/11010858 (IEEE edited version)
💻SOURCE CODE: https://github.com/Evoland-Land-Monitoring-Evolution/sisr4rs
🗂️DATASET: Sen2Venµs dataset: https://zenodo.org/records/6514159

🛠️ Temporal-Attention, Multi-Resolution fusion:
🔍PAPER: https://hal.science/hal-05101526v1 (preprint)
💻SOURCE CODE: https://github.com/Evoland-Land-Monitoring-Evolution/tamrfsits
🗂️DATASET: ls2s2 dataset: https://zenodo.org/records/15471890

🔍 Slides from the presentation at the Living Planet Symposium: slides
