
A Novel Neural Network-based Approach for the Remote Sensing of Dust Storms via Martian Satellite Observation

Presentation #203.08 in the session “Mars: Dust, Dynamics, and Dunes”.

Published on Oct 03, 2021

The development of small interplanetary satellites that capture high-resolution imagery of planets, moons, and other celestial objects is a fascinating area of recent research. This technology enables a vast array of new opportunities for monitoring objects and events on planets across the solar system. Specifically, the future deployment of capable small satellites to Martian orbit is exciting because collecting multi-temporal imagery of the planet allows events such as dust storms to be studied using machine learning approaches. On Earth, recent advances in deep learning, together with the collection of high-resolution multi-temporal satellite imagery, have led to the development of sophisticated natural disaster response systems. Artificial neural networks are used to analyze data in real time to assess infrastructure damage after earthquakes, extreme storms, wildfires, and other disasters, and to inform policy makers and local communities about the varying levels of impact and the timely, targeted allocation of resources and personnel. While the study of Martian dust storms does not carry the same humanitarian ramifications, it is important to understand these sometimes elusive events in a novel manner. For instance, in 2018, a dust storm ended the mission of NASA’s Opportunity rover by coating its solar panels with dust. Additionally, there is emerging evidence suggesting that these storms contribute to the loss of water on Mars. In this work, once the appropriate multi-temporal high-resolution data are collected, we seek to train an automated classifier. Labels for each image, including bounding boxes and storm severity, are collected through crowdsourcing. We propose the use of a VGG-16 model, a deep stack of convolutional layers with small (3×3) receptive fields. The model segments the storm in a pair of images captured at the same location but at different times and outputs a classification representing the severity of the detected change.
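The abstract does not specify a framework or the exact head design, so the following is only a minimal sketch of how the proposed pairwise change-detection setup could be wired up: it assumes PyTorch/torchvision, a shared-weight (Siamese) arrangement of the VGG-16 convolutional backbone, a hypothetical four-level severity scale, and a simple fully convolutional segmentation head, none of which are stated in the abstract beyond the choice of VGG-16.

```python
# Hypothetical sketch of a VGG-16-based change-detection model for Martian
# dust storms: both the severity scale (4 classes) and the head designs are
# assumptions, not details given in the abstract.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class DustStormChangeNet(nn.Module):
    """Compares a 'before' and 'after' image of the same location, predicts a
    coarse storm segmentation mask and a severity class for the detected change."""

    def __init__(self, num_severity_classes: int = 4):
        super().__init__()
        # Shared VGG-16 convolutional backbone (small 3x3 receptive fields),
        # applied to both time steps with the same weights.
        self.backbone = vgg16(weights=None).features
        # Segmentation head: fuse the two feature maps, predict a storm mask,
        # and upsample back to the input resolution (stride of VGG-16 is 32).
        self.seg_head = nn.Sequential(
            nn.Conv2d(2 * 512, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )
        # Severity head: pool the fused features and classify the change.
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 512 * 7 * 7, 1024),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(1024, num_severity_classes),
        )

    def forward(self, img_t0: torch.Tensor, img_t1: torch.Tensor):
        feat_t0 = self.backbone(img_t0)              # (N, 512, H/32, W/32)
        feat_t1 = self.backbone(img_t1)
        fused = torch.cat([feat_t0, feat_t1], dim=1)  # (N, 1024, H/32, W/32)
        mask = self.seg_head(fused)                   # coarse storm mask logits
        severity = self.classifier(self.pool(fused))  # severity class logits
        return mask, severity


# Example usage: a pair of co-located images captured at different times.
model = DustStormChangeNet()
before = torch.randn(1, 3, 224, 224)
after = torch.randn(1, 3, 224, 224)
mask, severity = model(before, after)  # mask: (1, 1, 224, 224), severity: (1, 4)
```

In this sketch the two images share one backbone so that corresponding surface features are embedded consistently, and the concatenated feature maps drive both the segmentation and the severity classification; the crowdsourced bounding boxes and severity labels described above would supply the training targets for the two heads.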
