AI Onboard: Intelligent Algorithms in Space

Posted in Science & Technology

Storing and analyzing data in the cloud remains attractive because of cost savings, loss prevention, and security; but for applications of Artificial Intelligence (AI) and the Internet of Things (IoT) where a delay can create a risky situation, cloud computation falls woefully short. With real-time applications such as environmental sensor networks, weather forecasting, healthcare and epidemic monitoring, more smart houses being built than ever before, and more IoT devices being deployed, data centers need to respond within tight time constraints. These applications service millions of queries and instructions across thousands of machines and are sensitive to response latency, latency variation, and tail latency. Incorrect decision-making at this level could be life-threatening for end-users, making this a key research issue. That's where Edge Computing comes in.

The capacity of sensors to produce data increases by roughly a factor of 100 with every new generation of wireless communications, while our capacity to download that data grows only by a factor of three to five per generation. Using AI at the edge can therefore yield huge savings in the data that must be moved. According to Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud, and this figure is expected to reach 75% by 2025. Edge devices can analyze data locally and provide recommendations without transferring it elsewhere. Most monitoring data is standard "heartbeat" data, information indicating only that the node is alive, and edge devices can minimize data volume simply by not transmitting it.
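
To make that last point concrete, here is a minimal Python sketch of heartbeat suppression at an edge node; the sensor names, baselines, and thresholds are invented for illustration and are not from any real deployment.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float

# Illustrative baselines and tolerances; a real deployment would tune these per sensor.
BASELINE = {"temp-01": 21.5, "vib-02": 0.02}
DEVIATION_LIMIT = {"temp-01": 2.0, "vib-02": 0.01}

def should_transmit(reading: Reading) -> bool:
    """Return True only when a reading deviates enough from its baseline.

    Routine 'heartbeat' values (node alive, nothing unusual) are dropped
    locally, which is where most of the bandwidth saving comes from.
    """
    baseline = BASELINE.get(reading.sensor_id)
    if baseline is None:
        return True  # unknown sensor: be conservative and forward it
    return abs(reading.value - baseline) > DEVIATION_LIMIT[reading.sensor_id]

if __name__ == "__main__":
    readings = [Reading("temp-01", 21.6), Reading("temp-01", 27.3), Reading("vib-02", 0.021)]
    to_send = [r for r in readings if should_transmit(r)]
    print(f"Transmitting {len(to_send)} of {len(readings)} readings")
```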

The storage and transmission of data have gradually shifted from single mainframe CPUs to physical data centers and self-managed clusters of CPUs and computing servers, and are now moving closer to the point of data generation, i.e., to the "edge." New applications demand a more decentralized approach to computing and networking than a traditional, fully cloud-based model. Edge Computing, which means processing data at or close to the point where it is generated, and Artificial Intelligence, which involves creating machines that make intelligent decisions, naturally complement each other, avoiding the bottleneck of data transfer and cutting out the delay in life-saving analytics. Take fall detection, for instance. Many wearables can now detect, using dedicated hardware, whether a person has fallen suddenly. Edge AI in these devices can be trained to detect falls in an instant and even alert caregivers; in many cases this can be life-saving. One example is the fall detection feature on the Apple Watch. Another area where Edge AI can be of immense help is the monitoring of vital signs. Medical devices that record heart rate, temperature, respiration rate, blood pressure, and similar data can leverage AI to detect any abnormality in an instant. The devices can then notify hospital staff, who take it from there. For the patient, this is not only critical but also improves the overall experience.
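
As a rough illustration of how such a wearable might work (this is not Apple's actual algorithm), a common textbook approach watches the accelerometer magnitude for a free-fall dip followed shortly by an impact spike. The sketch below uses entirely illustrative thresholds:

```python
import math

# Illustrative thresholds in units of g; real devices fuse more signals
# (gyroscope, post-impact inactivity, user response) before alerting.
FREE_FALL_G = 0.4
IMPACT_G = 2.5
MAX_GAP_SAMPLES = 50  # how soon the impact must follow the free-fall dip

def detect_fall(accel_samples):
    """accel_samples: iterable of (ax, ay, az) in g. Returns True if a
    free-fall dip is followed shortly by an impact spike."""
    free_fall_at = None
    for i, (ax, ay, az) in enumerate(accel_samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude < FREE_FALL_G:
            free_fall_at = i
        elif magnitude > IMPACT_G and free_fall_at is not None:
            if i - free_fall_at <= MAX_GAP_SAMPLES:
                return True
    return False

if __name__ == "__main__":
    samples = [(0.0, 0.0, 1.0)] * 10 + [(0.1, 0.1, 0.2)] + [(0.0, 0.0, 1.0)] * 5 + [(1.8, 1.5, 1.2)]
    print("fall detected:", detect_fall(samples))
```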

As both Artificial Intelligence and Edge Computing mature, the "perfect" combination of the two, termed "Edge AI," will become table stakes for enterprises. Edge AI is not a new concept; it has quietly made a place for itself in our lives already. Cases in point are facial recognition to unlock our devices, or Google Maps alerting us about traffic conditions in real time. Out of the necessity to scale with the demands of a data-driven world, future Machine Learning (ML) will increasingly take place at the edge rather than in the cloud or a data center, enabling continuous training of the ML solution. Edge Computing serves as a connecting point between information and operational technologies. Its ability to handle devices in a cohesive manner removes barriers to integrating compatible systems and technologies, and constant upgrades ensure a steady flow of innovations. This will not only reduce the time-to-market of AI/ML solutions but also power decision-making based on real-time data.

The AI-at-the-edge concept was designed for ground-based services, but it suits space applications equally well. Machine learning is widely used by technology companies in everyday applications, yet some scientists see uses far beyond Earth. These technologies matter especially for large data sets, such as those in the exoplanet field, because the data we get from space observations is sparse and noisy and is therefore hard to interpret. ML also plays an important role in analyzing the massive amounts of Earth observation data returned by spacecraft. Remotely sensed data are often degraded by different noise sources; noise reduction and data restoration can improve the signal-to-noise ratio of the acquired data and consequently benefit remote sensing applications, so tools of this kind have enormous potential to help us. Every day, satellites send millions of images from space to Earth, and the process of transmitting these images is too slow to keep up with image acquisition. In the near future, images are expected to be produced at even higher rates, on the order of gigabytes per second, far exceeding the sensor data generated by all satellites today. Current technologies cannot handle this situation: it would take hours to send the pictures down to Earth stations and process them, and that latency would severely undermine the purpose for which the satellites were deployed.

Current space exploration missions use AI and machine learning in many areas of space operations, including mission planning and operations, data collection, autonomous navigation and maneuvering, and spacecraft maintenance. Future missions will need to rely on the same capabilities; moreover, the evolution of sensors and decision-making systems will give us far more information to handle. Over the past decade, the volume of data collected from Earth-observing spacecraft, deep-space probes, and planetary rovers has increased significantly, straining the ground infrastructure used to distribute and deliver the data to multiple end-users. The ability to optimize the vast amount of data collected from scientific missions, and to analyze it using AI automation, positively impacts how data is processed and transmitted to the end-user.

AI-based techniques can be applied onboard Earth Observation (EO) satellites for applications such as fire detection or oil-spill detection, which require minimal processing and transmission latency in order to limit the resulting damage. Deploying AI onboard spacecraft can also mitigate the problem of the growing volume of sensor data that must be downlinked to the ground: less useful data, such as cloud-covered images, can be identified, tagged, pre-filtered, discarded, or selectively compressed. Onboard AI could likewise extend image acquisition to areas of the planet such as deserts or oceans (which are usually scanned at lower priority to save bandwidth), enabling the detection of specific targets, e.g., assets, vessels, or sparse and random events such as oil leaks. AI aboard spacecraft can autonomously detect and characterize nominal features, such as typical weather patterns, and differentiate them from unusual patterns like smoke plumes from volcanic activity, to create maps and data sets. We can use AI to determine which data sets are important to send to the ground segment for processing, and to remove data that provides little to no value, easing the burden that large-volume transmissions place on space-to-ground networks.
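
To make the triage idea concrete, here is a minimal, hypothetical sketch of an onboard filtering step. A simple brightness threshold stands in for a real cloud classifier, and all thresholds are invented for illustration:

```python
import numpy as np

CLOUD_BRIGHTNESS = 0.8   # illustrative stand-in for a real cloud classifier
DISCARD_ABOVE = 0.9      # fraction of cloudy pixels above which the image is dropped
COMPRESS_ABOVE = 0.5     # partially cloudy images get aggressive compression

def triage(image: np.ndarray) -> str:
    """image: 2-D array of reflectance values in [0, 1].
    Returns one of 'discard', 'compress', 'downlink'."""
    cloud_fraction = float((image > CLOUD_BRIGHTNESS).mean())
    if cloud_fraction > DISCARD_ABOVE:
        return "discard"
    if cloud_fraction > COMPRESS_ABOVE:
        return "compress"
    return "downlink"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(0.0, 1.0, size=(512, 512))  # synthetic stand-in for a sensor frame
    print(triage(scene))
```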

The image data we receive from space presents the challenge of decoding those images and extracting the needed information. The NASA Frontier Development Lab and tech giants such as IBM and Microsoft have come together to leverage machine learning for solar storm damage detection, atmosphere measurement, and determining the 'space weather' of a given planet through magnetosphere and atmosphere measurements. The same techniques can also be used for resource discovery in space and for identifying suitable planetary landing sites. We obtained our first black-hole image using the CHIRP (Continuous High-Resolution Image Reconstruction using Patch Priors) algorithm, a Bayesian algorithm used to perform deconvolution on images created in radio astronomy. The development of CHIRP involved a large team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory. CHIRP used image data from the Event Horizon Telescope (EHT), which was far too large to handle directly, and this is where image processing came in: scientists used NumPy, pandas, and other Python libraries to scale down the data and to perform data correlation and calibration, and ML was also used in the image analysis.

Definition of “AI at the Edge”

Artificial Intelligence (AI) is a branch of computer science that focuses on providing human-like intelligence to devices and systems, characterized by behaviors such as sensing, learning, understanding, decision-making, and acting. Owing to the availability of powerful computing hardware (GPUs and specialist architectures) and of large amounts of data, AI solutions, especially Machine Learning (ML) and more specifically Deep Learning (DL), have found numerous and widespread applications over the past two decades (such as image recognition, fault detection, and automated driving functions). Because of their reliance on large amounts of data, most current AI solutions require large-scale cloud data centers for computationally demanding processing tasks. Nevertheless, we are now in an information-centric era in which computing is becoming pervasive and ubiquitous: billions of IoT devices are connected to the Internet, and increasing digitalization generates a massive amount of data every year. Consequently, edge computing is emerging as a strong alternative to traditional cloud computing, supporting new types of applications (such as connected health, autonomous driving, and Industry 4.0) with the advantage of implementing the required AI solutions as close as possible to the end-users and the data sources.

Technical Advantages and Opportunities of AI at the Edge

AI solutions that run autonomously, are distributed, and implemented at the Edge offer the following advantages:

  • Increased real-time performance (low latency): Edge applications process data and generate results locally on the sensing device, so the device does not need to stay continuously connected to a cloud data center. Because it can process data and make decisions independently, decision-making gains real-time performance, data-transmission delays shrink, and response speed improves (see the latency sketch after this list).
  • Reliable low-bandwidth communication: Distributed devices can handle a large number of computational tasks themselves, reducing the need to send data to the cloud for storage and further processing. Overall, this minimizes the traffic load on the network and supports low-bandwidth communication.
  • Enhanced power efficiency: As the amount and rate of data exchanged with the cloud is minimized, the power consumption of the device is reduced, improving battery lifetime, which is critical for many edge devices.
  • Improved data security and privacy: Data processed locally does not have to be sent over a network to remote servers. This improves security and privacy because the data is never exposed externally.
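
As a back-of-the-envelope illustration of the first bullet, the numbers below are assumed rather than measured; the point is only that the network round trip, not the model, tends to dominate a cloud-based decision:

```python
# All figures are illustrative assumptions, in milliseconds.
edge_inference_ms = 30             # small model on an embedded accelerator
cloud_inference_ms = 5             # large model on a data-center GPU
uplink_ms, downlink_ms = 120, 120  # network round trip to a remote data center

edge_total = edge_inference_ms
cloud_total = uplink_ms + cloud_inference_ms + downlink_ms

print(f"edge decision:  {edge_total} ms")
print(f"cloud decision: {cloud_total} ms")  # dominated by the network, not the model
```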

Challenges in AI Edge Implementation

While the convergence of the two is still in its infancy, together they hold the potential to revolutionize the lives of consumers and businesses alike. But it is not a marriage without its challenges, especially when it comes to practical implementation. It is worth recapping that several open challenges in realizing edge intelligence remain unsolved, and it is crucial to identify and analyze them and to seek novel theoretical and technical solutions. These challenges include data scarcity at the edge, data consistency across edge devices, the poor adaptability of statically trained models, privacy and security issues, and the lack of incentive mechanisms.

  • Without trust and a comprehensive set of security measures, AI at the edge will never truly take off. The challenge is that privacy remains a double-edged sword.
  • On the one hand, processing data locally offers inherent benefits because the data remains in the desired sovereign area and does not traverse the network to the core. In other words, the data is physically domiciled at all times.
  • On the flip side, keeping data local means more locations to protect and secure simultaneously, and increased physical access opens the door to different kinds of threats. A greater physical presence at the edge could, for example, increase the likelihood of Denial of Service (DoS) attacks, leaving individual machines or networks compromised.
  • To combat this threat, backup solutions that circumvent local edge failures may be needed. However, by removing the constant back and forth of data between the cloud and edge, privacy will be enhanced beyond its current capacity; especially where individual consumers are concerned, because personal information remains in the hands of the user at the edge. And when privacy combines with flexible infrastructure, AI at the edge will deliver innovation at a much greater scale.
  • For most machine learning algorithms, especially supervised ones, high performance depends on a sufficient amount of high-quality training instances. This assumption often does not hold in edge intelligence scenarios, where the collected data is sparse and unlabelled, e.g., in Human Activity Recognition (HAR) and speech recognition applications.
  • In most edge intelligence-based AI applications, the model is first trained on a central server and then deployed on edge devices; once training is finished, the model is never retrained. Such statically trained models cannot effectively deal with unknown new data and tasks in unfamiliar environments, which results in low performance and a poor user experience. Models trained in a decentralized manner, on the other hand, use only their local experience and may become experts only in their own small areas, so the quality of service degrades as the serving area broadens (a minimal federated-averaging sketch follows this list).
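
One widely studied way to address that last limitation is federated averaging (FedAvg), in which devices train locally and periodically merge their weights. The NumPy sketch below uses a toy logistic-regression model and synthetic per-device data; it illustrates only the core aggregation step, not a production federated-learning stack:

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One device's local training: a few epochs of logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's model by the size of its local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(4)
for round_id in range(3):                      # a few communication rounds
    updates, sizes = [], []
    for _ in range(5):                         # five simulated edge devices
        X = rng.normal(size=(64, 4))           # synthetic local data
        y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    global_w = federated_average(updates, sizes)
print("global weights after 3 rounds:", np.round(global_w, 2))
```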

Deploying AI on the Spacecraft

From analyzing the terrain on Mars to enhancing communications between satellites and ground stations, artificial intelligence (AI) is playing an increasingly important role in space operations and exploration. It is a capability with numerous applications and vast promise for the data-rich and complex environment of space.

Space missions go through several stages, from initial analysis, identification, feasibility, and definition, through design, and finally operations. Spacecraft operations refer precisely to this last stage, where the mission goals and requirements are carried out. There are many different missions, each with its own goal, ranging from simple tasks such as weather forecasting and telecommunications to imaging the Earth to study environmental change, such as deforestation or the melting of the polar caps. Other spacecraft carry more elaborate science missions, such as experiments on Earth's magnetosphere (ESA's Cluster mission) or on other planets (NASA's Mars rovers), all supported by ground systems and data processing centers.

Machine learning algorithms can extract information from data and find patterns, allowing the algorithm's designer to understand relationships between features within the dataset. The most exciting ML applications use deep learning, which incorporates artificial neural networks inspired by information processing in biological systems and allows vast amounts of data to be processed. For spacecraft operations, we aim for full autonomy, which will require a thorough understanding of the relationships between all the operational details. Since spacecraft operations collect extensive amounts of data, machine learning fits very well: models trained on that data can make autonomous decisions on various problems quickly. Manual processes, by contrast, take significantly more effort to reach comparable results than processes combined with machine learning, or than a fully independent machine learning agent.

The primary benefit of machine learning and deep learning for spacecraft operations is that you can automate almost anything you want. A constant problem in spacecraft operations is the vast distance between the ground station and the satellite: transferring extensive data across it costs a significant amount of energy and time. If machine learning agents on board do the work that would otherwise be done on the ground, we can drop a significant percentage of the data-bandwidth requirement. The ML agent affects not only bandwidth but also the spacecraft's design, since less hardware is needed to store and transfer data. Because they do not need to be as powerful, spacecraft can be smaller, lighter, and less complex, making them cheaper to design, build, and operate.
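
A rough, purely illustrative calculation shows the scale of the saving; none of the figures below come from a real mission:

```python
# Assumed figures for illustration only.
images_per_orbit = 200
raw_image_mb = 250.0                 # multispectral scene before any processing
detection_report_kb = 4.0            # onboard ML output: bounding boxes + labels
useful_fraction = 0.3                # scenes worth downlinking in full after triage

raw_downlink_mb = images_per_orbit * raw_image_mb
smart_downlink_mb = (images_per_orbit * useful_fraction * raw_image_mb
                     + images_per_orbit * detection_report_kb / 1024)

print(f"raw:   {raw_downlink_mb:,.0f} MB per orbit")
print(f"smart: {smart_downlink_mb:,.0f} MB per orbit "
      f"({100 * (1 - smart_downlink_mb / raw_downlink_mb):.0f}% less)")
```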

In recent years, research in the space community has shown a growing interest in the application of Artificial Intelligence (AI), and in particular Deep Learning (DL), onboard spacecraft, in view of its potential advantages. One main reason is the high potential demonstrated by Deep Neural Network (DNN) models for many different space applications, such as object detection and recognition, image scene classification, super-resolution, agricultural-crop detection, and change detection, outperforming classical approaches in both performance and design time. Thanks to this capability, DNNs might be applied onboard Earth Observation (EO) satellites for applications such as fire detection or oil-spill detection, which require minimal processing and transmission latency in order to limit the resulting damage.

The deployment of DNNs onboard spacecraft provides an opportunity to reduce the amount of sensor data that needs to be transmitted to the ground. Less useful data, such as cloud-covered images, can be identified, tagged, pre-filtered, discarded, or selectively compressed. In this regard, onboard DNNs might extend the acquisition of images to areas of the planet such as deserts or oceans (which are usually scanned at lower priority to save bandwidth), enabling the detection of specific targets, e.g., assets, vessels, or sparse and random events such as oil leaks.

Conventional spacecraft Guidance, Navigation, and Control (GNC) architectures have been designed to receive and execute commands from ground control, with minimal automation and autonomy onboard the spacecraft. In contrast, Artificial Intelligence (AI)-based systems allow real-time decision-making by taking into account system information that is difficult to model and incorporate into the conventional decision-making process involving ground control or human operators. With growing interest in on-orbit servicing involving manipulation, conventional GNC faces numerous challenges in adapting to a wide range of possible scenarios, such as removing unknown debris, that could potentially be addressed with emerging AI-enabled robotic technologies.

Using Artificial Intelligence in GNC systems provides the opportunity to carry out real-time onboard decision-making in unforeseen and time-critical situations, with autonomous optimization of on-orbit manipulation for both cooperative and non-cooperative targets. Machine learning, and especially deep learning, has shown tremendous progress in solving complicated optimization problems.

Hardware for Edge AI

Several satellites with AI/edge capability (iSats) form a space edge computing system, which can acquire, store, process, and forward data from end-users such as vehicles, airplanes, ships, buoys, and sensors. For small sensors, data can be relayed to the satellite via a base station. Facing different end-users, the space edge computing system can flexibly change the applications running on each satellite to satisfy different mission requirements.

The space edge computing system consists of several iSats. An iSat, as an edge computing node, uses a powerful standardized hardware platform and a fault-tolerant, expandable satellite operating system. It can load different applications according to task requirements, providing customized services. The capabilities of an iSat include, but are not limited to, the following:

  • Provide a consistent operating paradigm across multiple satellite infrastructures,
  • Support large-scale, distributed space network environments,
  • Support application integration, orchestration, and migration,
  • Meet hardware resource limits and cost constraints,
  • Capable of running on confined and unstable networks,
  • Meet the needs of ultra-low latency applications,
  • Flexibly share onboard resources among multiple users or applications.

In terms of hardware structure, traditional satellites usually consist of two parts: the platform and the payload. The platform is responsible for satellite management, including attitude determination and control (ADCS), communication (Comm), electrical power (EPS), telemetry and telecommand (TM and TC), thermal control (TCS), and so on, while the payload carries out the actual mission.

Choosing the ideal hardware for a particular application requires careful consideration of all the requirements, and this becomes even more complicated when the application has to move onto a spacecraft or satellite. A successful system design finds a balance between the different aspects of the system architecture, such as memory footprint, execution time, model accuracy, power consumption, scalability, cost, and maintainability. While data centers allow engineers to scale the available computational power to the current demand (via GPUs or TPUs), an application running on edge devices needs to keep sufficient power reserves. An increasing number of vendors are now moving from producing simple resource-scarce microcontrollers (e.g., ARM Cortex-M MCUs) to pairing general-purpose processors with specialized units tailored to the computational tasks required by AI solutions. As embedded systems typically use AI in the form of machine learning (ML) to interpret incoming sensor data, these specialized sub-processors aim to speed up classification or prediction tasks while maintaining a low power draw, which is especially important in applications running on battery power or with little capacity for cooling.
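
A first-pass sanity check of those trade-offs can be done before any hardware is chosen. The sketch below, in which every budget and model figure is invented for illustration, simply tests a candidate model against the memory, latency, and energy envelope of a hypothetical edge board:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    parameters: int          # number of weights
    bytes_per_param: int     # 4 for float32, 1 after int8 quantization
    inference_ms: float      # measured or estimated latency per inference
    board_power_w: float     # average draw of the accelerator while busy

@dataclass
class Budget:
    memory_mb: float
    latency_ms: float
    energy_mj_per_inference: float

def fits(model: ModelProfile, budget: Budget) -> bool:
    memory_mb = model.parameters * model.bytes_per_param / 1e6
    energy_mj = model.board_power_w * model.inference_ms  # mJ = W * ms
    return (memory_mb <= budget.memory_mb
            and model.inference_ms <= budget.latency_ms
            and energy_mj <= budget.energy_mj_per_inference)

# Hypothetical 2M-parameter int8 model on a hypothetical 2 W accelerator.
print(fits(ModelProfile(2_000_000, 1, 40.0, 2.0),
           Budget(memory_mb=8.0, latency_ms=100.0, energy_mj_per_inference=200.0)))
```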

One of the open points in research involves the choice of hardware accelerators for DNNs, which are computationally intensive and memory-hungry algorithms. Their deployment onboard spacecraft requires finding acceptable design trade-offs for the reduced power and memory budget of SmallSats like CubeSat, where power generation and heat dissipation are stringent design issues.

The first DNN implementations were software-based, running on Graphics Processing Units (GPUs) or Central Processing Units (CPUs). The high power consumption of these GPUs and CPUs is a problem even for data centers, meaning that their use on board would be feasible only for very small networks (e.g., single-input networks for sensor failure detection). The broad use of DNNs in commercial applications led to Commercial-Off-The-Shelf (COTS) hardware accelerators for these algorithms, such as the Myriad 2 Vision Processing Unit (VPU), the Coral Tensor Processing Unit (TPU) Dev Board, the NVIDIA Jetson Nano GPU, and Xilinx Zynq FPGA development boards (Zynq-7000-series SoCs such as the 7020 have flown extensively). These devices feature high energy efficiency and remarkable performance, cost, and mass trade-offs. Furthermore, they come with open-source tools that greatly speed up model deployment, reducing development time and cost with an acceptable level of reliability, thanks to their wide diffusion in various fields (automotive, health, etc.) and their large open developer communities. Owing to this, and thanks to their reconfigurability, the use of COTS hardware accelerators might lead to a significant reduction of mission cost and design time in the future.

Currently, however, the usability of COTS devices in space is limited because none of them is fully suited to the space environment, mainly due to radiation-tolerance concerns. In particular, Single Event Effects (SEEs: probabilistic, non-cumulative events, i.e., any measurable effect on a circuit due to an ion strike) are caused by charged particles impacting electronic devices, leading to soft errors such as Single Event Upsets (SEUs: upsets of one or more memory elements) and Single Event Transients (SETs: glitches in combinational logic), or to permanent damage in the case of Single Event Latch-ups (SELs). Furthermore, a limited Total Ionizing Dose (TID) figure bounds a device's reliability in space for long-term missions, confining its use to short-term Low Earth Orbit (LEO) applications. For Geostationary Earth Orbit (GEO) and Medium Earth Orbit (MEO) missions, or long-lasting LEO missions, space-qualified devices are generally preferred because of their high reliability and long-term traceability and support.

Space-qualified devices generally lag behind their terrestrial counterparts because of their longer lifespans, older technology nodes, and designs oriented more toward high dependability than performance. As a result, they generally offer far worse performance/mass/cost trade-offs than COTS parts.

Machine Learning Models for Edge AI

Current AI models for the edge are far more limited in performance than cloud-based models because of the comparatively limited computation and storage available. Since edge devices have limited computing power and energy consumption is a critical factor, computations too intense for the device are sent over to more powerful remote servers. Model training and inference on resource-scarce devices are still debated problems throughout academia and industry, and a number of novel libraries and algorithms have been developed in recent years to adapt standard ML models to resource-constrained devices. A well-known example is ProtoNN (a prototype-based k-nearest neighbors (kNN) classifier), which adapts kNN to memory-limited microcontrollers via sparse projection and joint optimization. In low-memory scenarios (<2 kB), ProtoNN outperformed state-of-the-art compressed models; with 16-32 kB of memory it matched them; and compared to the best uncompressed models it was only 1-2% less accurate while consuming 1-2 orders of magnitude less memory. Bonsai, another novel algorithm, is instead based on decision trees and reduces model size by learning a single, sparse, shallow tree. Deployed on an Arduino Uno, Bonsai required only 70 bytes for a binary classification model and 500 bytes for a 62-class model; its prediction accuracy was up to 30% higher than that of other resource-constrained models and even comparable with unconstrained models, with better prediction times and energy usage. The development of neural networks and deep neural networks with lighter and faster architectures (e.g., small model size, few trainable parameters, few computations) for edge platforms has also gained massive traction among researchers. One example is CMSIS-NN, a library of optimized neural-network kernels for Cortex-M processor cores, with which networks can achieve roughly a fourfold improvement in performance and energy efficiency while minimizing memory footprint. Even recurrent neural networks (RNNs) have been implemented on tiny IoT devices (FastGRNN and FastRNN); FastGRNN models can fit in 1-6 kilobytes, which makes the algorithm suitable for devices such as the Arduino Uno. Some of the well-known techniques considered for model size reduction include:

  • knowledge distillation, whereby a small, easy-to-deploy model (the student) is trained to mimic a larger trained neural network (the teacher) while trying to preserve the teacher's accuracy, enabling the deployment of such models on small devices,
  • quantization, dimensionality reduction, pruning, component sharing, and similar steps, which exploit the inherent sparsity structure of gradients and weights to reduce memory and channel occupation as much as possible (a minimal quantization sketch follows this list),
  • conditional computation, which reduces the amount of calculation by selectively turning off unimportant computations (for example via component shutoff, input filtering, early exit, or result caching).
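
To give a concrete feel for the quantization item above, the NumPy sketch below applies simple post-training affine quantization to a weight matrix, cutting its memory footprint by roughly 4x at the cost of a small reconstruction error. Real frameworks add per-channel scales, calibration, and quantization-aware training on top of this basic idea:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine (asymmetric) quantization of float32 weights to int8."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # dummy weight matrix
q, scale, zp = quantize_int8(w)

print(f"float32: {w.nbytes / 1024:.0f} KiB, int8: {q.nbytes / 1024:.0f} KiB")
print(f"max reconstruction error: {np.abs(w - dequantize(q, scale, zp)).max():.5f}")
```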

AI on the Edge: Recent Space Projects

Spaceborne Computer-2

NASA and Hewlett Packard Enterprise (HPE) announced that they will test the limits of the term "edge computing" with a new computer designed to deliver artificial intelligence in space. The new Spaceborne Computer-2 (SBC-2) will become the first high-performance commercial computer to operate aboard the International Space Station. HPE says Spaceborne Computer-2 will allow astronauts to process data that used to take months in mere minutes. Once it has been launched and assembled in space, NASA will use it for at least the next two years, giving astronauts the power to use AI and other advanced computing capabilities that were once out of reach in space.

The most important benefit of delivering reliable in-space computing with Spaceborne Computer-2 is making real-time insights a reality. Space explorers can now transform how they conduct research based on readily available data and improve their decision-making.

Getting and using computers in space is not an easy task. First, just putting the hardware into orbit means launching it on a rocket, rattling, shaking, and jolting through the atmosphere for minutes on end. Once in space, if the computer's complex circuits still work, the zero-gravity environment and constant exposure to the sun's radiation present further challenges.

For the purpose of testing and validating new techniques in mission control and onboard satellite systems, ESA has launched a flying laboratory named OPS-SAT. It is devoted to demonstrating the drastically improved mission control capabilities that will arise when satellites can fly more powerful onboard computers. The satellite is only 30 cm high, yet it contains an experimental computer ten times more powerful than that of any current ESA spacecraft. Live testing of mission control systems is very difficult: no one wants to take risks with an existing, valuable satellite, so testing new procedures, techniques, or systems in orbit is rarely possible. The OPS-SAT solution is a low-cost satellite that is rock-solid safe and robust, even if a malfunction occurs during testing. The robustness of the basic satellite gives ESA flight control teams the confidence they need to upload and try out new, innovative control software submitted by experimenters; the satellite can be pushed to its limits but can always be recovered if something goes wrong. Achieving this level of performance and safety at low cost is a challenge. To do so, OPS-SAT combines off-the-shelf subsystems typically used in CubeSats, the latest terrestrial microelectronics for the onboard computer, and the experience in keeping missions safe that ESA has gained over 40 years of operating satellites. Spaceborne Computer-2, by contrast, was built from a prototype launched into orbit in 2017, and HPE specifically designed it, along with software coded for space-based work, to sustain operations in space.

Astronauts will use the computer to process data from the space station, satellites, cameras, and other sensors. Loaded with the necessary graphics processing units (GPUs), Spaceborne Computer-2 will process everything from photos of polar ice caps to astronauts' medical images, according to the news release. The GPUs' processing power will be enough to fuel AI and machine learning capabilities, eliminating the need to send data back to Earth for ground-based processing.

Edge computing provides core capabilities for sites that have limited or no connectivity, giving them the power to process and analyze data locally and make critical decisions quickly. Sending data to and from Earth is time-consuming and requires a great deal of bandwidth, but with its sophisticated features and server, SBC-2 will be able to tap into information on the spot from an array of sensors, satellites, cameras, and other devices. The system is equipped with technologies HPE already incorporates in the computers it builds for ATEX (ATmosphere EXplosible) environments, such as oil and gas refineries and manufacturing plants.

Its components also include graphics processing units to efficiently process image-intensive data that requires higher image resolution. Potential early applications include monitoring astronauts' physiological conditions in real time and making more sense of the hundreds of sensors that space agencies have strategically placed on the ISS. With in-space edge computing, researchers can process onboard images, signals, and other data related to a range of events, such as:

  • Traffic trends, by taking a wider look at the number of cars on roads and even in parking lots,
  • Air quality, by measuring the level of emissions and other pollutants in the atmosphere,
  • Objects moving in space and through the atmosphere, from planes to missile launches.

The Edgeline EL4000 servers will use NVIDIA T4 GPUs for AI and machine learning, image processing, video processing, and other tasks. The first Spaceborne Computer (SBC-1) used only CPUs for those tasks; SBC-2 includes both CPUs and GPUs, allowing comparative performance experiments in space.

The 1U boxes slot into standard 19-inch data-center racks, which are then placed into lockers aboard the ISS to hold them securely. An enterprise-class compute node, HPE's ProLiant DL360, is also provided for intense compute requirements.

SBC-2 will also tap Microsoft’s Azure Space service to connect users on the space station to the Earth, and vice versa, through the cloud. The computer will be installed for the next 2 to 3 years and is backed by a sponsorship from the ISS U.S. National Laboratory.

NASA’s Joint Research with Intel, IBM, and Google

Could the same computer algorithms that teach autonomous cars to drive safely help identify nearby asteroids or discover life in the universe? NASA scientists are trying to figure that out by partnering with pioneers in artificial intelligence (AI) — companies such as Intel, IBM, and Google — to apply advanced computer algorithms to problems in space science. 

Space exploration, the final frontier, still captures the imagination of young and old alike. Yet satellite technologies also offer very practical solutions to mitigate the effects of the grand challenges facing humankind today, such as forest fires caused by changing weather patterns.

La Jument Nanosatellites by Lockheed Martin Corporation

One noteworthy small-satellite project currently underway is being run by the Space Engineering Research Center at the University of Southern California's Information Sciences Institute. The goal of its four La Jument nanosatellites is to advance AI and ML space technologies. Lockheed Martin Corporation is building mission payloads for the nanosats, which will use the company's SmartSat software-defined satellite architecture for both the payload and the bus. SmartSat is designed to let satellite operators quickly change missions while in orbit, with the simplicity of starting, stopping, or uploading new applications.

The La Jument nanosats will run AI/ML algorithms in orbit, using advanced multicore processing and onboard graphics-processing units. One app being tested is an algorithm known as SuperRes, developed by Lockheed Martin, which can automatically enhance the quality of an image in much the same way a smartphone does. SuperRes enables the exploitation of, and detection within, imagery produced by lower-cost, lower-quality image sensors.

SmartSat also provides cyber threat detection, while the software-defined payload houses advanced optical and infrared cameras used by Lockheed Martin’s Advanced Technology Center to qualify AI and ML technologies for space.

These systems are powered by the NVIDIA Jetson platform, built on the CUDA-X software stack and supported by the NVIDIA JetPack software development kit. This configuration provides powerful AI-at-the-edge computing capabilities that unlock advanced onboard processing, including digital-signal processing.

The PhiSat-1 Satellite

Intel, Ubotica, and the European Space Agency (ESA) have launched the first AI satellite into Earth’s orbit. The PhiSat-1 satellite is about the size of a cereal box and was ejected from a rocket’s dispenser alongside 45 other satellites. The rocket launched from Guiana Space Centre on September 2nd, 2020. These CubeSats are built around standard 10×10 cm units.

Intel has integrated its Movidius Myriad 2 Vision Processing Unit (VPU) into PhiSat-1 – enabling large amounts of data to be processed on the device. This helps to prevent useless data from being sent back to Earth and consuming precious bandwidth.

NASA is also making continuous progress in AI applications for space exploration: automating image analysis for galaxy, planet, and star classification; developing autonomous spacecraft that could avoid space debris without human intervention; and making communication networks more efficient and distortion-free by using AI-based cognitive radio.

The successful launch and commissioning of the satellite are the result of almost three years of work. This included testing the COTS (Commercial Off The Shelf) Myriad 2 device at the CERN Large Hadron Collider (LHC) / Super Proton Synchrotron (SPS) / linear accelerator (LINAC) complex and at other radiation testing facilities around Europe to determine its suitability for space applications and its susceptibility to soft errors. The launch itself was subsequently delayed by over a year by a failed rocket, two natural disasters, and a global pandemic.

But while many satellites carry custom-built spaceworthy chips, PhiSat-1 uses Intel's Myriad 2, a commercially available chip found in DJI drones, the discontinued Google Clips camera, inspection and surveillance cameras, and even Magic Leap's AR goggles. Space processors typically lag their commercial counterparts in performance; the Myriad 2 provides a sudden performance 'step' for in-flight avionics of more than 100x with respect to comparable systems.

Approximately two weeks after launch, the satellite was commissioned and began transmitting its first images back to Earth. The initial application is to filter out clouds, which obscure Earth-observation imagery and which can account for 68% of EO imagery on average. Filtering out clouds at the source allows precious downlink bandwidth and power to be conserved.

The front end of the PhiSat-1 imaging system is called HyperScout-2, and, unlike an RGB camera which processes 3 spectral channels, it processes 48 distinct spectral bands from visible light to infrared.

Selecting particular spectral bands to analyze for particular applications makes it possible to see things that no ordinary image would show, for instance the red carotene of drying leaves, which indicates areas at elevated risk of forest fires. Having AI inference onboard opens the prospect of identifying forest fires on the satellite itself by examining the IR bands from the sensor, with consequent reductions in the time to generate alerts for forestry managers.
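
As a toy example of this kind of band selection (the band indices and threshold below are invented; HyperScout-2's actual band layout differs), a normalized difference between a near-infrared and a red band can flag dried-out vegetation:

```python
import numpy as np

RED_BAND, NIR_BAND = 20, 35   # hypothetical indices into the 48-band cube
DRY_VEGETATION_NDVI = 0.3     # illustrative threshold for stressed vegetation

def dryness_mask(cube: np.ndarray) -> np.ndarray:
    """cube: (bands, height, width) reflectance array.
    Returns a boolean mask of pixels that look like dry vegetation."""
    red = cube[RED_BAND].astype(np.float32)
    nir = cube[NIR_BAND].astype(np.float32)
    ndvi = (nir - red) / (nir + red + 1e-6)   # normalized difference vegetation index
    return (ndvi > 0.0) & (ndvi < DRY_VEGETATION_NDVI)

if __name__ == "__main__":
    cube = np.random.default_rng(0).uniform(0.0, 1.0, size=(48, 128, 128))  # synthetic cube
    mask = dryness_mask(cube)
    print(f"{mask.mean() * 100:.1f}% of pixels flagged for fire-risk review")
```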

The imager and associated AI engine are not limited to observing dryland areas but can automatically detect algae, oil spills, and ships in the oceans and inland waterways. The fact that the platform is programmable allows AI inference networks to be uploaded over the satellite communications channel in a manner familiar to app store users, allowing existing satellites to support a myriad of new applications while in orbit.

This could pave the way for future space-based innovations, from fighting fires and oil spills on Earth to piloting spacecraft and landers and driving rovers on Mars. Looking ahead, Ubotica plans to launch a Myriad X-based next-generation platform with a space partner in early 2022 and has a mission to the ISS planned for 2021.

Emirates Lunar Mission (ELM)

Mission Control Space Services Inc. will demonstrate a cutting-edge AI-integrated flight computer on the Emirates Lunar Mission (ELM), an international micro-rover mission led by the Mohammed Bin Rashid Space Centre in the United Arab Emirates, launched on a SpaceX rocket and delivered to the Moon by ispace of Japan in 2022.

Mission Control and subcontractor Xiphos Technologies will fly a modern flight computer as a payload on the ispace lander. This payload will host an Artificial Intelligence (AI) application that will classify types of lunar geological features visible in images from Rashid, the Emirates Lunar Mission rover, as it drives around the lunar surface. Both the Xiphos hardware and ESA's OPS-SAT use dual-core ARM processors.

As a demonstration of Deep Learning beyond Low Earth Orbit (LEO), this will be a historic milestone in space exploration. In traditional missions, such analysis would be performed by powerful computers on Earth; however, this limits a rover's ability to perform actions such as navigation autonomously. Introducing this AI technology in an edge-computing architecture on a lunar mission will unlock new capabilities in science-driven robotic exploration.

The outputs of this AI will be transferred back to Earth, where Mission Control's cloud-based Mission Control Software platform will distribute mission data to the science team in real time. This will enable the mission science team to seamlessly and securely interface with the spacecraft to support a variety of experiments.

Conclusion

AI and machine learning capabilities are making significant impacts in the space industry by creating efficiencies in mission planning and operations and by giving scientists the ability to explore the far reaches of space. Automation of tasks paves the way for AI, and spacecraft that become fully cognitive machines, capable of making critical decisions based on their current environment without relying on ground systems for essential functions, will free up human time for more valuable and complex research activities. Edge computing, for its part, provides core capabilities for sites with limited or no connectivity, giving them the power to process and analyze data locally and make critical decisions quickly. Edge AI is an emergent field that will increasingly be explored and applied in space missions. Edge computing needs to be brought closer together with AI/ML methods: edge computing is by design distributed and leans towards decentralization, whereas modern AI/ML methods are only just beginning to allow distributed and decentralized computation. This is where Edge AI comes in. High-performance edge computers bring data-center capabilities closer to the source of the data to enable AI-powered analytics, deep learning, and other data-intensive applications at the edge. Some of the benefits can be summarized as follows:

  • Assets and machines at the edge can be trained to perform autonomous tasks,
  • Digital data enables real-time and remote management of devices in the field,
  • Real-time analytics trigger immediate and automated decision-making,
  • Trained models are applied, and inference happens, directly on edge devices.

AI is a general-purpose technology that is already transforming the global economy, but its potential for deep space exploration and Earth Observation (EO) technology remains largely untapped. AI research is the "new electricity" fueling the fourth industrial revolution. In the past, the main innovation in EO came from applying AI to data on the ground, leveraging large-scale computing capability such as cloud computing or GPU architectures. Now, thanks to advances in the microelectronics of space-grade AI hardware accelerators, AI can be exploited directly onboard, opening a new era for EO satellites in which feature extraction and decision-making are performed onboard, reducing the unnecessary data exchanged between satellite sensors and the ground.

The opportunities presented by the use of AI at the edge are multi-faceted; benefiting from them will require a deeply interdisciplinary approach. Collaborative research efforts focused on specific application classes or domains should be encouraged, with an end-to-end systems view. The tensions and trade-offs between safety, security, privacy, performance, and cost need deep exploration; they cannot be explored in a piecemeal manner. While the ultimate goal should be to build solutions that generalize as far as possible, it is imperative, as the field expands, to demonstrate successful solutions even in somewhat narrow contexts. We therefore advocate government-level funding to seed programs that explicitly encourage collaborative, multi-disciplinary research between AI, systems, applications, and human-factors researchers, with a focus on complete edge AI solutions.