AGENIUM Space has built up its product portfolio by serving the needs of institutional players such as ESA and CNES. We have impressed governmental and commercial customers with the novelty and quality of our AI-driven products.
Building on these deliveries, AGENIUM has invested in maturing initially experimental technologies into commercial products. It offers three AI product categories, elaborated below: object detection applications, camera calibration software and SSA applications for autonomy in space.
AI Apps for space
AGENIUM Space has developed multiple high-F1-score AI apps that are available for onboard use. These are ready-to-integrate SW blocks for Earth Observation (EO) and Space Situational Awareness (SSA) satellites.
Edge-AI solutions combine HW that accelerates AI computations with AI SW that executes efficiently on that HW. The two critical components of the SW part are a deep neural network (DNN), trained to extract specific information from an image, and a SW framework that runs the image processing. Together they offer a reliable way to extract intelligence from images, such as airplanes or cloud coverage.
AGENIUM’s edge-AI SW supports CPU, GPU, VPU and SoC-FPGA HW architectures. The HW can be a fully dedicated board or an existing re-purposed component, e.g. an FPGA used for image compression repurposed to also run AI. The AI SW can be pre-installed before the launch of a mission or added post-launch via a SW update.
The possibilities of AI apps for EO image processing are virtually infinite, ranging from finding objects like ships and trucks to characterizing air or water pollution. AGENIUM has flown on 3 missions in space, demonstrating proficiency across the full edge-AI app lifecycle: SVC004 by D-Orbit, OPS-SAT by ESA and YAM-3 by Loft Orbital. The following are the off-the-shelf and in-development apps that AGENIUM has DNNs for. Deployment can take from 2 weeks to a few months, depending on the amount of customisation necessary, e.g. adaptation to the bands of the EO camera sensor.
Making a satellite “see” an object on Earth is a nontrivial task composed of multiple building blocks. It starts with obtaining representative satellite images containing the objects to be found, and labelling them. It continues with preparing a DNN model: choosing a DNN architecture and training it for object detection, then optimizing it and packaging it into an AI framework for the flight hardware. Finally, the application is uploaded and executed. A simplified workflow is shown below.
AGENIUM Space masters the full value chain of edge-AI application development, as already demonstrated by running AI apps in space on multiple missions (see slide deck for mission references).
- App definition
- Imagery acquisition
- Labelling
- Formatting
- DNN Training
- DNN Distillation
- DNN Quantization
- Choice of AI framework
- Image tiling & streaming
- Workflow design
- Processing optimization
- Installation
- Operation
- Maintenance
- Improvements
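Steps like image tiling & streaming are easy to make concrete. Below is a minimal sketch, assuming a single-band image held as a NumPy array; the function and tile size are illustrative, not AGENIUM’s flight SW.

```python
import numpy as np

def tile_image(image, tile=256):
    """Split a 2-D image into fixed-size tiles, zero-padding the borders.

    Onboard, such tiles are typically streamed through the DNN one by
    one so that memory use stays bounded regardless of scene size.
    """
    h, w = image.shape
    pad_h = (tile - h % tile) % tile  # padding rows for the last row of tiles
    pad_w = (tile - w % tile) % tile  # padding columns for the last column
    padded = np.pad(image, ((0, pad_h), (0, pad_w)))
    tiles = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            tiles.append(padded[y:y + tile, x:x + tile])
    return tiles

# A 600 x 1000 scene becomes ceil(600/256) * ceil(1000/256) = 3 * 4 = 12 tiles
tiles = tile_image(np.zeros((600, 1000), dtype=np.uint16))
print(len(tiles))  # 12
```

In a real onboard workflow the per-tile DNN outputs are stitched back together afterwards, often with overlapping tiles to avoid edge artifacts.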
Cloud masking
Cloud masking is the most commonly used app for onboard AI capability demos. Its added value is fundamental: it enables cutting away the cloudy parts of a satellite image prior to downloading it, which saves downlink capacity and lowers latency for relevant data. This is a clear benefit to any player in the Earth Observation industry, from component suppliers to end users.
AGENIUM Space offers two options for cloud applications: cloud-only and cloud+snow detection (pixelwise segmentation). The output of the image processing is thus a mask specifying the category of each pixel. This allows convenient cloud coverage estimation per image and masking of the useless parts prior to compression and download.
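To illustrate how such a mask is consumed downstream, here is a minimal sketch with an assumed label encoding (0 = clear, 1 = cloud, 2 = snow; not necessarily AGENIUM’s actual convention):

```python
import numpy as np

CLEAR, CLOUD, SNOW = 0, 1, 2  # assumed label encoding, for illustration only

def cloud_coverage(mask):
    """Fraction of pixels classified as cloud, e.g. to decide whether
    the image is worth downlinking at all."""
    return float(np.mean(mask == CLOUD))

def blank_clouds(image, mask, fill=0):
    """Overwrite cloudy pixels with a constant so that they compress
    to almost nothing before download."""
    out = image.copy()
    out[mask == CLOUD] = fill
    return out

mask = np.array([[CLOUD, CLOUD, CLEAR, CLEAR],
                 [CLOUD, CLEAR, CLEAR, SNOW]])
print(cloud_coverage(mask))  # 0.375 (3 of 8 pixels are cloud)
```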
AGENIUM trains its DNNs with real satellite images. The cloud-only model is trained on 200k images and achieves an 83% F1 score in its edge-AI implementation, with a processing throughput of 2.5M pix/sec/watt. AGENIUM has already demonstrated an operational cloud-segmentation edge-AI app on the Xilinx FPGA of the YAM-3 satellite during a 6-month AI service demonstration, Q4.2023–Q1.2024. The project was funded by France Relance and Occitanie.
Ship detection
Water covers 71% of the Earth and is a busy place for leisure, logistics and defense activities. Due to its huge area, it cannot be blanketed with surveillance cameras, so many shipping, environmental and intelligence organizations use space to monitor the waters. While satellites can image any geo-location at a given time, continuous global tracking remains a huge challenge due to the vast amount of data it would require downloading.
AGENIUM has developed a DNN that can find ships in raw images onboard the satellite. This makes it technically feasible for satellite operators to find and monitor anything in the waters globally, without needing to retrieve terabytes of useless empty-water-surface images to the ground.
This AI app can raise an alarm when certain activity is spotted, e.g. two tankers exchanging goods or illegal fishing. AGENIUM has already demonstrated operational ship-detection and ship-segmentation edge-AI apps on the Xilinx FPGA of the YAM-3 satellite during a 6-month AI service demonstration, Q4.2023–Q1.2024. The project was funded by France Relance and Occitanie.
Airplane detection
Airplanes are an essential part of today’s transport and logistics solutions. Since 2018, ICAO SARPs have required 15-minute airplane tracking, globally.
AGENIUM Space has built a DNN that detects airplanes in satellite images. It is trained on satellite images where airplanes are on the ground at airports as well as in flight, observed against various backgrounds such as sea, deserts, islands, snowy mountains and other complex terrain.
This application can monitor how busy an airport is and estimate its peak hours and capacity limits.
AI can go beyond airplane detection to classify target objects, such as passenger transporters or fighter jets. AGENIUM has partners capable of labelling specific military airplanes by role and model, as far as image resolution and spectral bands permit. For a particular order, AGENIUM can create DNNs and onboard AI applications that monitor specific military jets in areas of interest. Both SAR and optical space imagery can be used.
Forest segmentation
Forests are the lungs of our planet and occupy a significant part of the land. Monitoring human activities and natural processes there is an important forestry-management task in every country.
The DNN developed by AGENIUM Space performs forest segmentation: classifying each pixel in a satellite image as forest or not. This allows easy accounting of forested areas, globally.
If forest-disease monitoring is of interest, AGENIUM has partners who can help label data and train specific DNNs capable of spotting particular tree diseases. The application can also be expanded to fight illegal forest felling with change-detection algorithms and to aid forest fire detection with smoke or SWIR/NIR observations.
Destroyed building detection
With natural disasters growing in count and severity, and with military operations on the rise, it has become important to detect and assess the impact of natural disasters, building structural failures and offensives as quickly as possible.
AGENIUM Space has created a DNN to detect devastated buildings and assess the severity of destruction. The algorithm uses a two-step procedure: building recognition and change detection, with the latter step using a historic image of the same area.
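The second step can be sketched as follows, assuming a hypothetical building-recognition DNN has already produced boolean building masks for a co-registered historic/current image pair (a simplification of the actual algorithm, which also grades severity):

```python
import numpy as np

def destroyed_building_mask(historic_buildings, current_buildings):
    """Change-detection step of the two-step procedure (sketch).

    Flags pixels where a building was present in the historic
    segmentation but is absent in the current one, i.e. candidate
    destroyed structures. Both inputs are boolean masks on the
    same pixel grid.
    """
    return historic_buildings & ~current_buildings

historic = np.array([[True,  True,  False],
                     [True,  False, False]])
current = np.array([[True,  False, False],
                    [False, False, False]])
print(destroyed_building_mask(historic, current).sum())  # 2 candidate pixels
```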
More details on the topic can be found in AGENIUM Space’s poster for the EGU 2023 conference here: LINK.
Marine plastic detection
Marine plastic is a global environmental issue, caused by humans’ excessive use of plastics and poor recycling. Space data can certainly help monitor the problem and highlight the sources of pollution. ESA has introduced a Sentinel-2 benchmark dataset for the detection of dense marine plastic accumulations to aid R&D on this exact topic, and universities are evaluating AI applications such as MARIDA.
AGENIUM Space is confident that AI can deliver massive help in scanning wide ocean areas and detecting plastics onboard satellites. It is coordinating the Horizon Europe 2030 Edge-SpAIce project, aimed at building a solution for marine plastic detection onboard a satellite. This will be the first application of its kind, analyzing multi-spectral data onboard to detect marine litter in close to real time. The target satellite mission is the BALKAN constellation.
Edge-SpAIce started in 2023; more details about solutions to the technical challenges can be found on the official project website, www.edgespaice.eu.
Satellite camera calibrator
The core of any satellite imagery system is the camera sensor. No technology is perfect, and each pixel in a sensor has its own noise level and color sensitivity. Usually, ahead of satellite launch, these properties are measured to create a calibration file. The file is then applied to downloaded images to compensate for per-pixel inequalities. This approach removes the majority of inaccuracies, e.g. the obvious vertical stripes visible in push-broom camera sensor images (image below, left). Some institutional satellites such as the Sentinels go a step further and regularly perform dark-night acquisitions to capture and compensate for the minor noise-level deviations originating from sensor aging, temperature differences and other operational factors. Both the major and minor calibrations adjust the Pixel Response Non-Uniformity (PRNU) and Dark Signal Non-Uniformity (DSNU) coefficients to perfect the image.
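The per-pixel correction these coefficients feed into is the classic flat-field model. A sketch (variable names are ours, and the exact model depends on the sensor):

```python
import numpy as np

def apply_calibration(raw, dsnu, prnu):
    """Classic flat-field correction: subtract the per-pixel dark
    offset (DSNU), then divide by the per-pixel gain deviation (PRNU).

    All three arguments are arrays with one entry per pixel; for a
    push-broom sensor, one entry per detector column.
    """
    return (raw.astype(np.float64) - dsnu) / prnu

raw = np.array([[110.0, 95.0],
                [210.0, 185.0]])
dsnu = np.array([[10.0, 5.0],
                 [10.0, 5.0]])   # dark offsets measured per pixel
prnu = np.array([[1.0, 0.9],
                 [1.0, 0.9]])    # relative gain per pixel (1.0 = nominal)
print(apply_calibration(raw, dsnu, prnu))  # [[100. 100.] [200. 200.]]
```

After correction, both columns report the same value for the same incident light, which is exactly what removes the vertical striping mentioned above.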
This coefficient-adjustment procedure can have a much simpler solution: using AI for calibration. With ESA’s FutureEO support, AGENIUM Space has developed a solution that uses a satellite’s routine captures to determine coefficients that harmonize PRNU/DSNU values. Supported camera sensor types are matrix, push-broom and push-frame. A poster presenting this product at ESA’s VH-RODA conference in 2023 is available here: LINK. This SW product has been available for ground-segment use since 2024; space-segment applications are planned for 2025.
After successfully nailing PRNU/DSNU calibration, AGENIUM Space is further expanding its satellite imagery calibration tools. It is currently implementing algorithms to address vibration (for push-broom sensors) and blurring. The de-vibration algorithms would enable satellites with basic stabilization systems to yield perfectly stabilized image acquisitions, a true game changer for newspace players in EO. Secondly, the de-blur algorithms would help satellites with imperfectly calibrated optics to deblur images where physically possible, and thus potentially save or extend some EO satellites’ lifetimes.
If your company is interested in partnering on the development of these or other image calibration tools for satellites, especially if you can provide raw images requiring corrections, please don’t hesitate to reach out to us.
Space based SSA
The number of satellites in space has increased exponentially over the past decade, raising demand for dedicated flight operators who can steer satellites away from probable collisions. As there are no rules in orbit like traffic lights on roads, every satellite operator needs to monitor all the thousands of objects, calculate their future orbital tracks and adjust their own satellites’ trajectories to avoid dangerously close passes.
AGENIUM Space has been involved in multiple SSA-related projects, harnessing the power of AI to detect objects in space below conventional threshold and SNR values. In 2020, AGENIUM won 1st place in ESA’s SpotGEO challenge, building the winning AI algorithm for finding objects in ground-to-space observations. Later, in 2023, in partnership with CNES and AIRBUS, AGENIUM developed an AI algorithm for space-to-space object detection. The algorithm improved the sensor’s sensitivity, finding close to 2x more objects than a thresholding method. The novel achievement put in numbers: a 92% track detection rate with a pixel probability of false alarm (PFA) of 4.16e-6 and 0.5% false tracks, and the ability to detect close to all objects with SNR > 0.5.
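For context, the thresholding baseline that the DNN roughly doubled can be sketched as a toy single-frame detector (the operational pipeline additionally links per-frame detections across frames into tracks):

```python
import numpy as np

def threshold_detect(frame, k=5.0):
    """Toy baseline: flag pixels more than k standard deviations above
    the background (median) estimate. Objects with SNR near 0.5 sit
    far below any such threshold, which is where learned detectors
    that exploit spatial and temporal context gain their sensitivity.
    """
    background = np.median(frame)
    sigma = np.std(frame)
    return frame > background + k * sigma

frame = np.full((8, 8), 100.0)
frame[0, 0], frame[7, 7] = 99.0, 101.0   # background fluctuations
frame[3, 4] = 200.0                      # one bright object
hits = threshold_detect(frame)
print(int(hits.sum()))  # 1, at pixel (3, 4)
```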
For national and strategic assets in space, such as communication and military satellites, many more direct threats are on the rise, like RF jamming or physical assault by an intentionally approaching satellite to disable a critical satellite’s functionality. While AGENIUM doesn’t build robotic arms for space wars, it does build computer vision applications that could enable not just spotting objects in the vicinity, but also interpreting their actions and choosing countermeasures to approaching threats. AGENIUM is thrilled to have been selected by the EDF to lead work on this theme in the BODYGUARD project.
It can be assumed with high confidence that computer vision technology for space-to-space observations will mature in the not-so-distant future, enabling autonomous visual navigation for all satellites. This will in turn enable applications like self-defence for satellites. AGENIUM is determined to be part of this space race towards autonomy and to build onboard solutions with complementary partners.