Presentation #102.392, Poster Session.
Over the past decade, hundreds of nights on the world’s largest ground-based telescopes have been spent searching for and directly detecting new exoplanets with high-contrast imaging (HCI) techniques. The ultimate goal of this work is to study the characteristics and statistics of the underlying planet population and to distinguish between different planet formation and evolution theories. Further, HCI aims to find and characterize planets in our immediate solar neighborhood. Both the search for nearby planets and studies of planet occurrence rates rely heavily on the metric used to estimate the achieved contrast relative to the bright target stars: any error or inaccuracy in the contrast estimation directly affects all subsequent scientific reasoning. Currently used standards often rely on several explicit or implicit assumptions about the noise in the final images; for example, the noise is commonly assumed to be independent and identically distributed Gaussian. Although such assumptions are an inseparable part of the standard, they are rarely verified. Computing the final achievable contrast and deriving detection limits under the assumption of Gaussian noise, while the actual noise is non-Gaussian, strongly biases the results, with the severity of the bias depending on the extent to which the assumption is violated. This makes it hard, if not impossible, to compare results across datasets or instruments. With the recent launch of the James Webb Space Telescope, this problem will become even more severe, as space-based observations have different noise statistics than observations from the ground. In this contribution, we revisit the fundamental question of how to robustly quantify detection limits in HCI. We focus our analysis on the error budget of currently used standards with respect to violated assumptions about the noise. For this purpose, we propose a new metric based on bootstrapping that generalizes current standards to non-Gaussian noise.
We apply our method to HCI data from the Very Large Telescope and derive detection limits for different types of noise. Our analysis shows that, due to inaccurate assumptions about the noise, current standards tend to overestimate the achieved detection limits by more than one magnitude. In other words, past surveys may have ruled out planets that could in fact still exist.
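As a toy illustration of the core idea (not the actual implementation described above), the following sketch contrasts a detection threshold derived under a Gaussian noise assumption with one obtained by bootstrapping an empirical noise sample. The heavy-tailed Laplace noise model, the sample sizes, and the false-positive fraction are all hypothetical choices made for this example.

```python
# Sketch: Gaussian vs. bootstrap detection thresholds at a fixed
# false-positive fraction (FPF). Assumes hypothetical heavy-tailed
# residual noise; all numbers here are illustrative only.
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(0)

# Toy "residual noise" sample, e.g. aperture photometry values at the
# separation of interest. Laplace noise stands in for the heavier-than-
# Gaussian tails often seen close to the star.
noise = rng.laplace(loc=0.0, scale=1.0, size=10_000)

fpf = 1e-3  # desired false-positive fraction

# Gaussian assumption: threshold = mean + z * std, with z the normal
# quantile at (1 - fpf).
z = NormalDist().inv_cdf(1.0 - fpf)
thresh_gauss = noise.mean() + z * noise.std(ddof=1)

# Bootstrap: resample the observed noise with replacement, take the
# empirical (1 - fpf) quantile of each resample, and average.
n_boot = 2_000
boot_q = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(noise, size=noise.size, replace=True)
    boot_q[b] = np.quantile(resample, 1.0 - fpf)
thresh_boot = boot_q.mean()

print(f"Gaussian threshold:  {thresh_gauss:.2f}")
print(f"Bootstrap threshold: {thresh_boot:.2f}")
# For heavy-tailed noise the bootstrap threshold lies above the Gaussian
# one, i.e. the Gaussian assumption overstates the achieved contrast.
```

Because the bootstrap draws its quantiles from the observed noise distribution itself, it requires no parametric assumption and so carries over unchanged between ground- and space-based noise statistics.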