Presentation #132.09 in the session “Computation, Data Handling, Image Analysis”.
The core data product of radio interferometers like the Atacama Large Millimeter/submillimeter Array (ALMA) is a set of visibilities: samples of the Fourier transform of the astronomical sky brightness distribution. To study an astronomical source in the image plane, we must invert this transformation to identify the image most consistent with the data. Traditional methods like CLEAN (implemented in the popular CASA facility software package) accomplish this by first performing an inverse Fourier transform of the visibility samples and then removing the effects of the suboptimal point spread function (PSF) through image-plane deconvolution. Regularized Maximum Likelihood (RML) imaging techniques, by contrast, like those used to produce the recent Event Horizon Telescope (EHT) images of the M87 supermassive black hole shadow, work by forward-modeling the visibility data.

Though CLEAN techniques have historically been more popular than RML workflows, recent developments in computing architectures (GPUs) and software (machine-learning autodifferentiation frameworks) have made rapid RML imaging possible, including efficient exploration and cross-validation of hyperparameter settings. Excitingly, the composable and probabilistic nature of RML imaging opens up many opportunities for advanced imaging workflows, including velocity-space regularization and simultaneous data self-calibration. In this short presentation, we will introduce the concept of Regularized Maximum Likelihood imaging and present our “Million Points of Light” (MPoL) package (https://mpol-dev.github.io/MPoL), a user-friendly Python package for RML imaging with sub-mm interferometers.
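To make the forward-modeling idea concrete, the following is a minimal toy sketch (not MPoL's actual API) of an RML objective: a chi-squared data term comparing model visibilities against sparsely sampled, noisy "observed" visibilities, plus a simple L1 regularizer. The grid size, noise level, and sparsity regularizer are illustrative assumptions; in practice, packages like MPoL build this objective in PyTorch so that gradients for optimization come from autodifferentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "sky": a single point source on a small pixel grid.
npix = 16
sky = np.zeros((npix, npix))
sky[5, 9] = 1.0

# Forward model: an interferometer samples the Fourier transform of the sky.
vis_true = np.fft.fft2(sky)
mask = rng.random((npix, npix)) < 0.3  # sparse (u, v) coverage
sigma = 0.01                            # per-visibility noise (assumed)
noise = sigma * (rng.standard_normal(mask.sum())
                 + 1j * rng.standard_normal(mask.sum()))
data = vis_true[mask] + noise

def rml_objective(image, lam=1e-3):
    """Data term (chi-squared / 2) plus an L1 sparsity regularizer.

    RML imaging minimizes this over the image pixels; here we only
    evaluate it for trial images to show the forward-modeling direction.
    """
    model_vis = np.fft.fft2(image)[mask]
    chi2 = np.sum(np.abs(model_vis - data) ** 2 / sigma ** 2)
    return 0.5 * chi2 + lam * np.sum(np.abs(image))

# The true sky fits the visibility data far better than a blank image.
print(rml_objective(sky) < rml_objective(np.zeros((npix, npix))))  # True
```

Note the direction of the computation: the trial image is pushed *forward* through the Fourier transform and scored against the data, rather than the data being inverse-transformed and deconvolved as in CLEAN.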