Supernova spectral time series contain a wealth of information about the progenitor and the explosion process of these energetic events. However, transforming a measured spectral time series into an explosion model requires an understanding of statistical and systematic uncertainties as well as of the correlations between different parameters. Ultimately, modeling these data requires exploring very high-dimensional (tens to hundreds of parameters) posterior probability distributions with expensive radiative transfer codes. Physically realistic models require at least tens of CPU minutes per evaluation, putting a detailed reconstruction of the explosion out of reach of traditional methodology.
The advent of widely available libraries for training neural networks, combined with their ability to approximate almost arbitrary functions with high precision, allows for a new approach to this problem. Instead of evaluating the radiative transfer model itself, one can build a neural-network proxy that is trained on the simulations but evaluates orders of magnitude faster. We have developed such a framework, the Dalek emulator, which accelerates spectral synthesis by approximating the physically realistic TARDIS supernova radiative transfer code. In this talk, I will show that we can train an emulator for this problem given a modest training set of a few tens of thousands of spectra (easily computable on modern supercomputers). The results show percent-level accuracy (dominated by the Monte Carlo nature of TARDIS rather than by the emulator) together with a speed-up of several orders of magnitude. I will give an overview of the problems that my group is currently pursuing with this methodology. I will conclude by discussing the much broader set of applications enabled by this method and give an outlook on new emulators that can tackle larger and more complex problems.
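The emulator idea can be illustrated with a minimal, self-contained sketch. The simulator, network size, and training parameters below are all hypothetical toys, not the actual Dalek/TARDIS setup: a one-parameter stand-in for an expensive radiative transfer code produces a 20-bin "spectrum", and a small one-hidden-layer network is trained on (parameter, spectrum) pairs so that, after training, spectra can be predicted without calling the simulator at all.

```python
import math
import random

def slow_simulator(t):
    """Toy stand-in for an expensive radiative transfer code:
    maps one 'temperature-like' parameter in [0, 1] to a 20-bin
    synthetic spectrum (a Gaussian feature whose position shifts
    with t). The real problem has tens to hundreds of parameters."""
    return [math.exp(-((i / 19 - t) ** 2) / 0.08) for i in range(20)]

# Build a modest training set of (parameter, spectrum) pairs,
# mimicking a grid of precomputed simulations.
random.seed(0)
train_x = [random.random() for _ in range(100)]
train_y = [slow_simulator(t) for t in train_x]

# Minimal one-hidden-layer network (tanh hidden units, linear output).
H, NBINS = 16, 20
w1 = [random.gauss(0.0, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [[random.gauss(0.0, 0.5) for _ in range(H)] for _ in range(NBINS)]
b2 = [0.0] * NBINS

def forward(t):
    """Emulator evaluation: parameter -> (hidden activations, spectrum)."""
    h = [math.tanh(w1[j] * t + b1[j]) for j in range(H)]
    out = [sum(w2[k][j] * h[j] for j in range(H)) + b2[k] for k in range(NBINS)]
    return h, out

def mse(t, y):
    """Mean squared error between emulated and simulated spectrum."""
    _, out = forward(t)
    return sum((o - yk) ** 2 for o, yk in zip(out, y)) / NBINS

# Train with plain stochastic gradient descent on the squared error.
lr = 0.05
for epoch in range(200):
    for t, y in zip(train_x, train_y):
        h, out = forward(t)
        d_out = [2.0 * (out[k] - y[k]) / NBINS for k in range(NBINS)]
        # Backpropagate to the hidden layer before touching w2.
        d_h = [sum(d_out[k] * w2[k][j] for k in range(NBINS)) * (1 - h[j] ** 2)
               for j in range(H)]
        for k in range(NBINS):
            b2[k] -= lr * d_out[k]
            for j in range(H):
                w2[k][j] -= lr * d_out[k] * h[j]
        for j in range(H):
            b1[j] -= lr * d_h[j]
            w1[j] -= lr * d_h[j] * t

# After training, forward(t) approximates slow_simulator(t) at a tiny
# fraction of the cost, so posterior exploration can call the emulator.
```

In practice the emulator is a deeper network trained on tens of thousands of precomputed TARDIS spectra, but the workflow is the same: pay the simulation cost once up front, then amortize it over millions of cheap emulator evaluations during inference.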