
A Novel Explainability Framework for Transit Signal Machine Classifiers

Presentation #418.02 in the session Exoplanet Transits III.

Published on Jun 29, 2022

The application of Machine Learning (ML) to the classification of transit signals in the search for exoplanets is becoming increasingly common. One barrier to using ML for this task is the black-box nature of effective ML algorithms such as deep neural networks. We have developed a novel explainability framework that explains the classification result of a transit signal machine classifier in technical natural language. For example, after classifying a transit signal as a False Positive (FP), the framework provides an explanation such as “This signal is an FP because there is a centroid shift” or “This signal is classified as FP due to the shape of the signal.” Given the lack of gold-standard annotations recording why a domain expert classifies a signal as a planet candidate (PC) or an FP, we used the minor flags output by Robovetter for Q1–Q17 DR25 to study the performance of this new framework and verify that it is highly effective at explaining why the model classifies a signal as a PC or an FP. We will also discuss the knowledge discovery enabled by this framework. Some of the findings resulting from this framework are counterintuitive but verifiably correct.
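To illustrate the general idea of flag-based explanations, the sketch below maps Robovetter-style minor flags to natural-language sentences like those quoted above. This is a minimal illustration only: the flag names and the mapping are hypothetical, and the abstract does not specify the framework's actual explanation-generation logic.

```python
# Hypothetical mapping from minor flags to explanation fragments.
# These flag names are illustrative, not the framework's actual vocabulary.
FLAG_EXPLANATIONS = {
    "CENTROID_SHIFT": "there is a centroid shift",
    "NOT_TRANSIT_LIKE": "the shape of the signal is not transit-like",
    "SIGNIFICANT_SECONDARY": "a significant secondary eclipse is present",
}

def explain(label: str, flags: list) -> str:
    """Render a classification and its triggered flags as a sentence."""
    if label == "PC" or not flags:
        return "This signal is classified as a planet candidate (PC)."
    reasons = "; ".join(FLAG_EXPLANATIONS.get(f, f.lower()) for f in flags)
    return "This signal is classified as FP because %s." % reasons

print(explain("FP", ["CENTROID_SHIFT"]))
# -> This signal is classified as FP because there is a centroid shift.
```

A real framework would derive the triggering conditions from the classifier itself (e.g., from learned feature attributions) rather than from a hand-written lookup table.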
