Our universe is homogeneous and isotropic, and its perturbations are translationally and rotationally invariant. In this talk we introduce a generative Normalizing Flow (NF) architecture that explicitly incorporates these symmetries and defines the data likelihood via a sequence of Fourier-space nonlinear transforms, evaluating the Jacobian at each layer. This makes it possible to train the NF by maximizing the data likelihood p(x|y) as a function of the labels y, such as cosmological or nuisance parameters. In contrast to other generative models, the NF approach loses no information, since it preserves the full dimensionality of the data and gives direct access to the data likelihood p(x|y). We apply this architecture to the outputs of cosmological N-body simulations and show that it generates samples whose summary statistics are indistinguishable from those of the training data. We also show that the NF-transformed data with the correct y are visually indistinguishable from Gaussian white noise: when this mapping is perfect, the resulting p(x|y) analysis becomes optimal. On N-body simulation outputs, we show that this leads to significant improvements in constraining power over standard summary statistics such as the power spectrum.
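As a rough illustration only (not the authors' implementation), the idea of a translation- and rotation-equivariant Fourier-space flow layer with a tractable Jacobian can be sketched as below. The sketch uses a *linear* isotropic filter for simplicity, whereas the architecture described above interleaves nonlinear transforms; the parameterization `log_filter` is a hypothetical stand-in for the learned per-layer functions of |k|.

```python
import numpy as np

def fourier_layer(x, log_filter):
    """Apply an isotropic linear transform in Fourier space.

    Multiplying each Fourier mode by a filter that depends only on |k|
    keeps the map translationally and rotationally equivariant.
    Returns the transformed field and the log-Jacobian of the map.
    """
    n = x.shape[0]
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    filt = np.exp(log_filter(k))          # hypothetical learned filter of |k|
    xk = np.fft.fft2(x)
    z = np.real(np.fft.ifft2(filt * xk))  # real field: filt is symmetric in k
    # A diagonal map in Fourier space has det = product of filter values,
    # so the log-Jacobian is simply the sum of log-filter values.
    log_det = np.sum(np.log(filt))
    return z, log_det

def log_likelihood(x, layers):
    """log p(x) = standard-normal log density of the whitened field
    plus the accumulated per-layer log-Jacobians."""
    log_det_total = 0.0
    z = x
    for log_filter in layers:
        z, ld = fourier_layer(z, log_filter)
        log_det_total += ld
    log_base = -0.5 * np.sum(z**2) - 0.5 * z.size * np.log(2.0 * np.pi)
    return log_base + log_det_total
```

Training would then amount to maximizing `log_likelihood` over the filter parameters (and, for p(x|y), making the filters functions of the labels y); when the flow whitens the field perfectly, z is Gaussian white noise, matching the optimality statement in the abstract.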