Space Situational Awareness

Space Object (SO) identification, classification, and characterization is a problem of paramount importance in Space Domain Awareness (SDA). The Joint Space Operations Center Mission System is currently tracking upwards of 29,000 resident SOs larger than 10 centimeters in size, and it is estimated that more than 500,000 objects larger than 1 cm are orbiting Earth across the LEO, MEO, and GEO regimes. As a result, new methods are needed to effectively identify, classify, characterize, and concurrently track such a large number of objects. Recent advances in deep learning, such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have shown tremendous results in many practical and theoretical fields, including computer vision, robotics, and speech recognition. Although deep learning methods are becoming ubiquitous, they have barely been explored in SDA applications.

SSEL aims to develop, test, and validate a new class of deep learning algorithms that can successfully classify the nature of SOs using light curve measurements. Recently, our team demonstrated that deep CNNs and RNNs trained on real and simulated light curves can effectively discriminate between active satellites, debris, and rocket bodies. However, these networks are computationally expensive to train and require large amounts of data, which can be extensive and time-consuming to collect. Recently, a new class of machine learning techniques has emerged in which networks are designed to “learn to learn.” Named meta-learning, this approach relies on the assumption that a deep learning system can mimic the human ability to efficiently learn to recognize objects from only a few examples. Here, we explore the use of a meta-learning algorithm named Model-Agnostic Meta-Learning (MAML) and demonstrate that this class of algorithms can effectively solve the light curve SO classification problem. More specifically, we employ a combination of simulated and real data to show that meta-learning-based deep networks can efficiently and quickly learn to discriminate between different classes of space objects under a variety of observational conditions. We test and validate the proposed methodology using simulated (model-based) light curves and real light curve data collected by our University of Arizona (UA) RAPTORS EO telescopes.
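To make the meta-learning idea concrete, the inner/outer loop structure of MAML can be sketched on a toy problem. The sketch below is purely illustrative and is not the networks, data, or hyperparameters used in this work: it applies the first-order variant of MAML to a hypothetical family of scalar linear-regression tasks, where the outer loop learns an initialization from which one inner gradient step adapts well to each new task.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # Mean squared error and its gradient for a scalar linear model y_hat = w * x
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def sample_task():
    # Toy task family: y = a * x, with a random slope per task
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, a * x

def maml_first_order(meta_iters=2000, inner_lr=0.1, outer_lr=0.01, tasks_per_batch=5):
    w = 0.0  # meta-initialization to be learned
    for _ in range(meta_iters):
        meta_grad = 0.0
        for _ in range(tasks_per_batch):
            x, y = sample_task()
            # Inner loop: one gradient step adapts w to this task
            _, g = loss_grad(w, x, y)
            w_adapted = w - inner_lr * g
            # First-order MAML: the outer gradient is simply the task
            # gradient evaluated at the adapted parameters
            _, g_adapted = loss_grad(w_adapted, x, y)
            meta_grad += g_adapted
        # Outer loop: update the shared initialization across tasks
        w -= outer_lr * meta_grad / tasks_per_batch
    return w
```

In the full method, the scalar `w` is replaced by the weights of a deep light curve classifier and each "task" is a small labeled set of light curves, but the two-level optimization is the same: the outer loop trains an initialization that adapts to a new SO class from only a few examples.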