AGENIUM Space has built up its product portfolio by serving the needs of institutional players such as ESA and CNES. We have impressed governmental and commercial customers alike with the novelty and quality of our AI-driven products.

Building on these deliveries, AGENIUM has invested in maturing initially experimental technologies into commercial products. It offers three AI product categories, elaborated below: object detection applications, camera calibration software and SSA applications for autonomy in space.

AI Apps for space

AGENIUM's ready-to-fly apps

AGENIUM Space has developed multiple high-F1-score AI apps that are available for onboard use. These are ready-to-integrate SW blocks for Earth Observation (EO) and Space Situational Awareness (SSA) satellites.

Edge-AI solutions are composed of HW that accelerates AI computations and AI SW that executes efficiently on that HW. The two critical components of the SW part are a DNN (deep neural network) trained to extract specific information from an image, and a SW framework that runs the image processing. Together, they form a reliable solution for extracting intelligence from images, such as airplanes or cloud coverage.

AGENIUM’s edge-AI SW supports CPU, GPU, VPU and SoC-FPGA HW architectures. The HW can be a fully dedicated board or an existing re-purposed component, e.g. an FPGA used for image compression repurposed to also run AI. The AI SW can be pre-installed before the launch of a mission or added post-launch via a SW update.

The possibilities for AI apps in EO image processing are virtually unlimited, ranging from finding objects like ships and trucks to characterizing air or water pollution. AGENIUM has flown on three missions in space, demonstrating proficiency across the full edge-AI app lifecycle: SVC004 by D-Orbit, OPS-SAT by ESA and YAM-3 by Loft Orbital. Below are the off-the-shelf and in-development apps for which AGENIUM has DNNs. Deployment can take from two weeks to a few months, depending on the amount of customisation necessary, e.g. adaptation to the bands of the EO camera sensor.

Introduction to the edge-AI app development process

Making a satellite “see” an object on Earth is a nontrivial task composed of multiple building blocks. It starts with obtaining representative satellite images containing the objects to be found and labelling them. It continues with preparing a DNN model: choosing a DNN architecture and training it for object detection, then optimizing the model and packaging it into an AI framework for the flight hardware. Finally, the app is uploaded and executed. A simplified workflow is shown below.

AGENIUM Space masters the full value chain of edge-AI application development, as already demonstrated by running AI apps in space on multiple missions (see slide deck for mission references).

Data sourcing
  • App definition
  • Acquisition of imagery
  • Labelling
  • Formatting
DNN preparation
  • DNN Training
  • DNN Distillation
  • DNN Quantization
App packaging
  • Choice of AI framework
  • Image tiling & streaming
  • Workflow design
  • Processing optimization
Deployment
  • Installation
  • Operation
  • Maintenance
  • Improvements
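As an illustration of the “Image tiling & streaming” step above, here is a minimal sketch of tiled inference in Python/NumPy. Note that `toy_dnn` is a hypothetical stand-in for a real trained detector, not AGENIUM's actual framework; large EO scenes are cut into overlapping tiles, the detector runs per tile, and detections are mapped back to full-scene coordinates.

```python
import numpy as np

def iter_tiles(image, tile=512, overlap=32):
    """Yield overlapping tiles with their offsets, so objects near
    tile borders are not missed."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield (y, x), image[y:y + tile, x:x + tile]

def detect(image, run_dnn):
    """Run a per-tile detector and map boxes back to full-image
    coordinates. run_dnn(tile) is assumed to return boxes as
    (y0, x0, y1, x1) in tile-local coordinates."""
    results = []
    for (y, x), t in iter_tiles(image):
        for (y0, x0, y1, x1) in run_dnn(t):
            results.append((y0 + y, x0 + x, y1 + y, x1 + x))
    return results

# Toy stand-in "DNN": flags a fixed box whenever a tile holds a bright blob.
def toy_dnn(tile):
    return [(0, 0, 4, 4)] if tile.max() > 200 else []

img = np.zeros((1024, 1024), dtype=np.uint8)
img[700:704, 900:904] = 255        # one bright object in the scene
boxes = detect(img, toy_dnn)
```

A production framework would additionally merge duplicate detections from the overlap regions and stream tiles to keep memory usage bounded on flight HW.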

Satellite camera calibrator

PRNU/DSNU calibration

The core of any satellite imagery system is a camera sensor. As with any technology, nothing is perfect: each pixel in a sensor has its own particular noise level and colour sensitivity. Usually, ahead of satellite launch, these properties are measured to create a calibration file. The file is then applied to downlinked images to compensate for the per-pixel inequalities. This approach removes the majority of inaccuracies, e.g. the obvious vertical stripes visible in push-broom camera sensor images (image below, left). Some institutional satellites, like the Sentinels, go a step further and regularly perform dark-night acquisitions to capture and compensate minor noise-level deviations originating from sensor aging, temperature differences and other operational factors. Both the major and minor calibrations adjust the Pixel Response Non-Uniformity (PRNU) and Dark Signal Non-Uniformity (DSNU) coefficients to perfect the image.
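The correction step itself is simple to illustrate. A minimal NumPy sketch, assuming the common linear sensor model raw = PRNU × signal + DSNU; the coefficient values below are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel calibration coefficients for one 2048-pixel
# line of a push-broom sensor (synthetic values for illustration).
prnu = rng.normal(1.0, 0.02, size=2048)   # per-pixel gain (ideal: 1.0)
dsnu = rng.normal(5.0, 1.0, size=2048)    # per-pixel dark offset (DN)

def correct(raw_line, prnu, dsnu):
    """Standard flat-field correction: subtract the per-pixel dark
    offset, then divide by the per-pixel gain."""
    return (raw_line - dsnu) / prnu

# Simulate a uniform 100 DN scene seen through the imperfect sensor:
scene = 100.0
raw = prnu * scene + dsnu      # striped, non-uniform raw line
flat = correct(raw, prnu, dsnu)  # every pixel back to ~100 DN
```

With accurate coefficients, the vertical striping vanishes; the hard part, which the AI-based product addresses, is estimating PRNU/DSNU from routine captures instead of dedicated calibration campaigns.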

This coefficient adjustment procedure can have a much simpler solution: using AI for calibration. With support from ESA’s FutureEO programme, AGENIUM Space has developed a solution that uses a satellite’s routine captures to determine coefficients that harmonize PRNU/DSNU values. Supported camera sensor types are matrix, push-broom and push-frame. A poster presenting this product at ESA’s VH-RODA conference in 2023 is available here: LINK. This SW product has been available for ground-segment use since 2024, with space-segment availability planned for 2025.

[Image: Sentinel-2B MSI L0 granule S2B_OPER_MSI_L0__GR_MTI__20170317T184946_S20170316T155438_D05_B04, raw (left) vs corrected (right)]
De-vibration and de-blur

After successfully nailing PRNU/DSNU calibration, AGENIUM Space is expanding its satellite imagery calibration tools further. It is currently implementing algorithms to address vibration (for push-broom sensors) and blurring problems. The de-vibration algorithms would enable satellites with basic stabilizing systems to yield perfectly stabilized image acquisitions, a true game changer for newspace players in EO. Secondly, the de-blur algorithms would help satellites with imperfectly calibrated optics to deblur images where physically possible, potentially saving or extending the lifetime of some EO satellites.

If your company is interested in partnering for development of these or other image calibration tools for satellites, especially if you can provide raw images requiring corrections, please don’t hesitate to reach out to us.

Space-based SSA

The number of satellites in space has increased exponentially over the past decade, raising demand for dedicated flight operators who can steer satellites away from probable collisions. As there are no rules like traffic lights on roads, every satellite operator needs to monitor thousands of objects, calculate their future orbit tracks and adjust its own satellites’ trajectories to avoid dangerously close passes.
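Conceptually, this screening boils down to propagating both objects forward and checking their closest approach. A toy sketch, assuming positions have already been propagated onto a shared time grid (straight-line tracks stand in for real orbits here):

```python
import numpy as np

def closest_approach(pos_a, pos_b, times):
    """Given two (N, 3) arrays of propagated positions (km) on a
    shared time grid, return the minimum separation and its epoch."""
    d = np.linalg.norm(pos_a - pos_b, axis=1)  # separation at each step
    i = int(np.argmin(d))
    return d[i], times[i]

# Toy example: two objects on crossing straight-line tracks (km, s).
t = np.linspace(0.0, 100.0, 1001)
a = np.stack([7000.0 + 7.5 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
b = np.stack([7750.0 - 7.5 * t, np.ones_like(t), np.zeros_like(t)], axis=1)

miss, when = closest_approach(a, b, t)
# The x-tracks cross at t = 50 s, leaving a 1 km miss distance; a
# typical screening threshold would flag this for manoeuvre review.
```

Real screening uses orbit propagators and covariance-based collision probabilities rather than raw miss distance, but the closest-approach search is the same idea.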

AGENIUM Space has been involved in multiple SSA-related projects, harnessing the power of AI to detect objects in space below conventional detection thresholds and at low SNR values. In 2020 AGENIUM won 1st place in ESA’s SpotGEO challenge, building the winning AI algorithm for finding objects in ground-to-space applications. Later, in 2023, in partnership with CNES and AIRBUS, AGENIUM developed an AI algorithm for space-to-space object detection. The algorithm improved sensor sensitivity, finding close to 2x more objects than a thresholding method. The novel achievement in numbers: a 92% track detection rate with a pixel probability of false alarm (PFA) of 4.16e-6 and 0.5% false tracks, and the ability to detect close to all objects with SNR > 0.5.
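For context, the classical thresholding baseline that the AI detector is compared against can be sketched as a per-pixel k·σ test; objects well below the threshold, like the SNR 0.5 targets mentioned above, are invisible to it. All numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

sigma = 2.0                                    # background noise std (DN)
frame = rng.normal(0.0, sigma, size=(256, 256))

# Place two point objects: one bright (SNR 8), one faint (SNR 0.5).
frame[100, 100] = 8.0 * sigma
frame[200, 200] = 0.5 * sigma

def threshold_detect(frame, sigma, k=5.0):
    """Classical per-pixel detector: flag anything above k * sigma."""
    ys, xs = np.nonzero(frame > k * sigma)
    return set(zip(ys.tolist(), xs.tolist()))

hits = threshold_detect(frame, sigma)
# The SNR-8 object clears the 5-sigma threshold; the SNR-0.5 object
# cannot, which is exactly the regime where a DNN detector that
# integrates spatial and temporal context adds value.
```

Lowering k would eventually catch faint objects but at the cost of an exploding false-alarm rate, which is the trade-off the reported PFA and false-track figures quantify.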

For national and strategic assets in space, like communication and military satellites, more direct threats are on the rise: RF jamming, or physical assault by an intentionally approaching satellite aiming to disable critical functionality. While AGENIUM doesn’t build robotic arms for space wars, it does build computer vision applications that could enable not just spotting objects in the vicinity, but also interpreting their actions and choosing countermeasures to approaching threats. AGENIUM is thrilled to have been selected by the EDF to lead work on this theme through the BODYGUARD project.

It can be assumed with high confidence that computer vision technology for space-to-space observation will mature in the not-so-distant future, enabling autonomous visual navigation for all satellites. This will in turn enable applications like self-defence for satellites. AGENIUM is determined to be part of this space race towards autonomy and to build onboard solutions with complementary partners.