## Publications

226 results found

Abdalla E, Abellán GF, Aboubrahim A,
et al., 2022, Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies, *Journal of High Energy Astrophysics*, Vol: 34, Pages: 49-211, ISSN: 2214-4048

The standard Λ Cold Dark Matter (ΛCDM) cosmological model provides a good description of a wide range of astrophysical and cosmological data. However, there are a few big open questions that make the standard model look like an approximation to a more realistic scenario yet to be found. In this paper, we list a few important goals that need to be addressed in the next decade, taking into account the current discordances between the different cosmological probes, such as the disagreement in the value of the Hubble constant H0, the σ8–S8 tension, and other less statistically significant anomalies. While these discordances can still be in part the result of systematic errors, their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model. In this paper, we focus on the 5.0σ tension between the Planck CMB estimate of the Hubble constant H0 and the SH0ES collaboration measurements. After showing the H0 evaluations made from different teams using different methods and geometric calibrations, we list a few interesting new physics models that could alleviate this tension and discuss how the next decade's experiments will be crucial. Moreover, we focus on the tension of the Planck CMB data with weak lensing measurements and redshift surveys, about the value of the matter energy density Ωm, and the amplitude or rate of the growth of structure (σ8, fσ8). We list a few interesting models proposed for alleviating this tension, and we discuss the importance of trying to fit a full array of data with a single model and not just one parameter at a time. Additionally, we present a wide range of other less discussed anomalies at a statistical significance level lower than the H0–S8 tensions which may also constitute hints towards new physics, and we discuss possible generic theoretical approaches that can collectively explain …

Percival WJ, Friedrich O, Sellentin E,
et al., 2022, Matching Bayesian and frequentist coverage probabilities when using an approximate data covariance matrix, *Monthly Notices of the Royal Astronomical Society*, Vol: 510, Pages: 3207-3221, ISSN: 0035-8711

Observational astrophysics consists of making inferences about the Universe by comparing data and models. The credible intervals placed on model parameters are often as important as the maximum a posteriori probability values, as the intervals indicate concordance or discordance between models and with measurements from other data. Intermediate statistics (e.g. the power spectrum) are usually measured and inferences are made by fitting models to these rather than the raw data, assuming that the likelihood for these statistics has multivariate Gaussian form. The covariance matrix used to calculate the likelihood is often estimated from simulations, such that it is itself a random variable. This is a standard problem in Bayesian statistics, which requires a prior to be placed on the true model parameters and covariance matrix, influencing the joint posterior distribution. As an alternative to the commonly used independence Jeffreys prior, we introduce a prior that leads to a posterior that has approximately frequentist matching coverage. This is achieved by matching the covariance of the posterior to that of the distribution of true values of the parameters around the maximum likelihood values in repeated trials, under certain assumptions. Using this prior, credible intervals derived from a Bayesian analysis can be interpreted approximately as confidence intervals, containing the truth a certain proportion of the time for repeated trials. Linking frequentist and Bayesian approaches that have previously appeared in the astronomical literature, this offers a consistent and conservative approach for credible intervals quoted on model parameters for problems where the covariance matrix is itself an estimate.
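
The core setting here (a Gaussian likelihood whose covariance matrix is itself estimated from a finite number of simulations, and is therefore a random matrix) can be sketched as follows. This is a minimal numpy illustration of the problem, not the paper's matching prior; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 5, 50                     # data-vector length, number of simulations

# "True" covariance of the summary statistic (hypothetical)
true_cov = 0.5 * np.eye(p) + 0.5

# Estimate the covariance from independent simulated data vectors; the
# estimate is itself a random matrix, which propagates into the posterior
sims = rng.multivariate_normal(np.zeros(p), true_cov, size=n_sims)
cov_hat = np.cov(sims, rowvar=False)

def gauss_loglike(resid, cov):
    """Multivariate Gaussian log-likelihood of a residual data vector."""
    _, logdet = np.linalg.slogdet(cov)
    chi2 = resid @ np.linalg.solve(cov, resid)
    return -0.5 * (chi2 + logdet + len(resid) * np.log(2.0 * np.pi))

d = rng.multivariate_normal(np.zeros(p), true_cov)
print(gauss_loglike(d, cov_hat), gauss_loglike(d, true_cov))
```

Because `cov_hat` fluctuates around `true_cov`, credible intervals computed naively from it do not have frequentist coverage; the paper's prior is designed to restore that matching approximately.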

Hu L, Heavens A, Bacon D, 2022, Light bending by the cosmological constant, Publisher: arXiv

We revisit the question of whether the cosmological constant $\Lambda$ affects the cosmological gravitational bending of light, by numerical integration of the geodesic equations for a Swiss cheese model consisting of a point mass and a compensated vacuole, in a Friedmann-Robertson-Walker background. We find that there is virtually no dependence of the light bending on the cosmological constant that is not already accounted for in the angular diameter distances of the standard lensing equations, plus small modifications that arise because the bending is restricted to a finite region covered by the hole. The residual $\Lambda$ dependence for a $10^{13}\,M_{\odot}$ lens is at the level of 1 part in $10^7$, and even this might be accounted for by small changes in the hole size evolution as the photon crosses. We therefore conclude that there is no need for modification of the standard cosmological lensing equations in the presence of a cosmological constant.

Mootoovaloo A, Jaffe AH, Heavens AF,
et al., 2022, Kernel-based emulator for the 3D matter power spectrum from CLASS, *Astronomy and Computing*, Vol: 38, Pages: 100508-100508, ISSN: 2213-1337

The 3D matter power spectrum is a fundamental quantity in the analysis of cosmological data such as large-scale structure, 21 cm observations, and weak lensing. Existing computer models (Boltzmann codes) such as CLASS can provide it at the expense of immoderate computational cost. In this paper, we propose a fast Bayesian method to generate the 3D matter power spectrum for a given set of wavenumbers, k, and redshifts, z. Our code allows one to calculate the following quantities: the linear matter power spectrum at a given redshift (the default is set to 0); the non-linear 3D matter power spectrum with/without baryon feedback; the weak lensing power spectrum. The gradient of the 3D matter power spectrum with respect to the input cosmological parameters is also returned, which is useful for Hamiltonian Monte Carlo samplers and for Fisher matrix calculations. In our application, the emulator is accurate when evaluated at a set of cosmological parameters drawn from the prior, with the fractional uncertainty centred on 0. It is also substantially faster than CLASS, hence making the emulator amenable to sampling cosmological and nuisance parameters in a Monte Carlo routine. In addition, once the 3D matter power spectrum is calculated, it can be used with a specific redshift distribution to calculate the weak lensing and intrinsic alignment power spectra, which can then be used to derive constraints on cosmological parameters in a weak lensing data analysis problem. The software (emuPK) can be trained with any set of points and is distributed on GitHub, and comes with a pre-trained set of Gaussian Process (GP) models, based on 1000 Latin Hypercube (LH) samples, which follow roughly the current priors for weak lensing analyses.
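
A Gaussian Process emulator of this kind interpolates an expensive function from precomputed training points. The sketch below is a minimal pure-numpy GP mean predictor, not emuPK itself; the kernel hyperparameters, training grid, and target function are hypothetical stand-ins for log P(k).

```python
import numpy as np

def rbf(x1, x2, ell=0.3, amp=1.0):
    """Squared-exponential (RBF) kernel."""
    d = x1[:, None] - x2[None, :]
    return amp * np.exp(-0.5 * (d / ell) ** 2)

# Toy "training" set: a smooth function of log10 k, standing in for log P(k)
x_train = np.linspace(-2, 1, 12)
y_train = np.sin(2 * x_train) - x_train

# Precompute the kernel inverse applied to the training targets
K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))   # jitter for stability
alpha = np.linalg.solve(K, y_train)

def gp_predict(x_new):
    """GP mean prediction; gradients w.r.t. inputs follow analytically
    from differentiating the kernel, which is what enables HMC use."""
    return rbf(x_new, x_train) @ alpha

print(gp_predict(np.array([0.0])))
```

Once trained, each prediction costs only a kernel evaluation against the training set, which is the source of the speed-up over a full Boltzmann-code call.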

Porqueres N, Heavens A, Mortlock D,
et al., 2021, Lifting weak lensing degeneracies with a field-based likelihood, *Monthly Notices of the Royal Astronomical Society*, Vol: 509, Pages: 3194-3202, ISSN: 0035-8711

We present a field-based approach to the analysis of cosmic shear data to infer jointly cosmological parameters and the dark matter distribution. This forward modelling approach samples the cosmological parameters and the initial matter fluctuations, using a physical gravity model to link the primordial fluctuations to the non-linear matter distribution. Cosmological parameters are sampled and updated consistently through the forward model, varying (1) the initial matter power spectrum, (2) the geometry through the distance-redshift relationship, and (3) the growth of structure and light-cone effects. Our approach extracts more information from the data than methods based on two-point statistics. We find that this field-based approach lifts the strong degeneracy between the cosmological matter density, Ωm, and the fluctuation amplitude, σ8, providing tight constraints on these parameters from weak lensing data alone. In the simulated four-bin tomographic experiment we consider, the field-based likelihood yields marginal uncertainties on σ8 and Ωm that are, respectively, a factor of 3 and 5 smaller than those from a two-point power spectrum analysis applied to the same underlying data.

Bahr-Kalus B, Bertacca D, Verde L,
et al., 2021, The Kaiser-Rocket effect: three decades and counting, *Journal of Cosmology and Astroparticle Physics*, Vol: 2021, Pages: 1-41, ISSN: 1475-7516

The peculiar motion of the observer, if not accurately accounted for, is bound to induce a well-defined clustering signal in the distribution of galaxies. This signal is related to the Kaiser rocket effect. Here we examine the amplitude and form of this effect, both analytically and numerically, and discuss possible implications for the analysis and interpretation of forthcoming cosmological surveys. For an idealised, cosmic-variance-dominated full-sky survey with a Gaussian selection function peaked at z ∼ 1.5 it is a > 5σ effect and it can in principle bias very significantly the inference of cosmological parameters, especially for primordial non-Gaussianity. For forthcoming surveys, with realistic masks and selection functions, the Kaiser rocket is not a significant concern for cosmological parameter inference except perhaps for primordial non-Gaussianity studies. However, it is a systematic effect, whose origin, nature and imprint on galaxy maps are well known and thus should be subtracted or mitigated. We present several approaches to do so.

Berera A, Brahma S, Brandenberger R,
et al., 2021, Quantum coherence of photons to cosmological distances, *Physical Review D: Particles, Fields, Gravitation and Cosmology*, Vol: 104, ISSN: 1550-2368

We identify potential sources of decoherence for U(1) gauge bosons from a cosmological standpoint. Besides interactions with different species in the cosmological medium, we also consider effects due to the expansion of the Universe, which can produce particles (especially scalars) that can potentially interact with the photon in a quantum state. We look in particular at the case of axionlike particles and their predicted decay channels in our analysis. These interactions are shown to have a negligible effect as far as decoherence goes. Interaction rates with cosmic microwave background radiation or through Thomson scattering are small, so that the interstellar medium remains the biggest decoherence factor. Thus, quantum teleportation experiments with photon energies in the range 1–10 keV should be feasible at cosmological distances up to the galaxy formation epoch or beyond (z∼100).

Leclercq F, Heavens A, 2021, On the accuracy and precision of correlation functions and field-level inference in cosmology, *Monthly Notices of the Royal Astronomical Society*, Vol: 506, Pages: L85-L90, ISSN: 0035-8711

We present a comparative study of the accuracy and precision of correlation function methods and full-field inference in cosmological data analysis. To do so, we examine a Bayesian hierarchical model that predicts lognormal (LN) fields and their two-point correlation function. Although a simplified analytic model, the LN model produces fields that share many of the essential characteristics of the present-day non-Gaussian cosmological density fields. We use three different statistical techniques: (i) a standard likelihood-based analysis of the two-point correlation function; (ii) a likelihood-free (simulation-based) analysis of the two-point correlation function; (iii) a field-level analysis, made possible by the more sophisticated data assimilation technique. We find that (a) standard assumptions made to write down a likelihood for correlation functions can cause significant biases, a problem that is alleviated with simulation-based inference; and (b) analysing the entire field offers considerable advantages over correlation functions, through higher accuracy, higher precision, or both. The gains depend on the degree of non-Gaussianity, but in all cases, including for weak non-Gaussianity, the advantage of analysing the full field is substantial.
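
The lognormal model used here maps a Gaussian field to a positive-definite density contrast by exponentiation. A minimal 1D sketch of generating such a field (the power-law spectrum amplitude and slope are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256

# Gaussian field g with a power-law spectrum (hypothetical amplitude/slope)
kfreq = np.fft.rfftfreq(n)
amp = np.where(kfreq > 0, np.maximum(kfreq, 1e-12) ** -0.5, 0.0)
g_k = amp * (rng.standard_normal(kfreq.size) + 1j * rng.standard_normal(kfreq.size))
g = np.fft.irfft(g_k, n)
g -= g.mean()

# Lognormal transform: delta = exp(g - var/2) - 1 has near-zero mean and is
# bounded below by -1, like a physical density contrast
delta = np.exp(g - g.var() / 2.0) - 1.0
print(delta.min(), delta.mean())
```

The skewed one-point distribution of `delta` is what makes the correlation-function likelihood assumptions tested in the paper non-trivial.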

Jung G, Namikawa T, Liguori M,
et al., 2021, The integrated angular bispectrum of weak lensing, *Journal of Cosmology and Astroparticle Physics*, Vol: 2021, Pages: 1-22, ISSN: 1475-7516

We investigate three-point statistics in the weak lensing convergence, through the integrated bispectrum. This statistic involves measuring power spectra in patches, and is thus easy to measure and avoids the complexity of estimating the very large number of possible bispectrum configurations. The integrated bispectrum principally probes the squeezed limit of the bispectrum. To be useful as a set of summary statistics, accurate theoretical predictions are required for the signal and, assuming Gaussian sampling distributions, for the covariance matrix. In this paper, we investigate through simulations how accurate the theoretical formulae are for both the integrated bispectrum and its covariance, finding that there are small inaccuracies in the theoretical signal, and more serious deviations in the covariance matrix, which may need to be estimated using simulations.
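
The measurement itself is simple: split the field into patches, and correlate each patch's mean (the long-wavelength mode) with its local power spectrum. A minimal 1D illustration; the non-Gaussian toy field below is hypothetical, not a lensing simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_patches = 4096, 32

# Toy non-Gaussian field: a local quadratic transform of Gaussian noise
g = rng.standard_normal(n)
field = g + 0.3 * (g ** 2 - 1.0)

patches = field.reshape(n_patches, -1)
means = patches.mean(axis=1)

# Position-dependent power spectrum: power measured patch by patch
power = np.abs(np.fft.rfft(patches - means[:, None], axis=1)) ** 2

# Integrated bispectrum estimate: correlation of each patch's mean with its
# small-scale power, probing the squeezed limit of the bispectrum
ib = np.mean(means[:, None] * power, axis=0)
print(ib[1:5])
```

For a Gaussian field the long mode and the small-scale power are uncorrelated, so `ib` averages to zero; a non-zero signal is the squeezed-limit coupling.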

Di Valentino E, Anchordoqui LA, Akarsu O,
et al., 2021, Snowmass2021-Letter of interest cosmology intertwined I: Perspectives for the next decade, *Astroparticle Physics*, Vol: 131, ISSN: 0927-6505

Di Valentino E, Anchordoqui LA, Akarsu O,
et al., 2021, Snowmass2021-Letter of interest cosmology intertwined II: The Hubble constant tension, *Astroparticle Physics*, Vol: 131, ISSN: 0927-6505

Di Valentino E, Anchordoqui LA, Akarsu O,
et al., 2021, Cosmology intertwined III: fσ8 and S8, *Astroparticle Physics*, Vol: 131, ISSN: 0927-6505

Di Valentino E, Anchordoqui LA, Akarsu O,
et al., 2021, Snowmass2021-Letter of interest cosmology intertwined IV: The age of the universe and its curvature, *Astroparticle Physics*, Vol: 131, ISSN: 0927-6505

Porqueres N, Heavens A, Mortlock D,
et al., 2021, Bayesian forward modelling of cosmic shear data, *Monthly Notices of the Royal Astronomical Society*, Vol: 502, Pages: 3035-3044, ISSN: 0035-8711

We present a Bayesian hierarchical modelling approach to infer the cosmic matter density field, and the lensing and the matter power spectra, from cosmic shear data. This method uses a physical model of cosmic structure formation to infer physically plausible cosmic structures, which accounts for the non-Gaussian features of the gravitationally evolved matter distribution and light-cone effects. We test and validate our framework with realistic simulated shear data, demonstrating that the method recovers the unbiased matter distribution and the correct lensing and matter power spectrum. While the cosmology is fixed in this test, and the method employs a prior power spectrum, we demonstrate that the lensing results are sensitive to the true power spectrum when this differs from the prior. In this case, the density field samples are generated with a power spectrum that deviates from the prior, and the method recovers the true lensing power spectrum. The method also recovers the matter power spectrum across the sky, but as currently implemented, it cannot determine the radial power since isotropy is not imposed. In summary, our method provides physically plausible inference of the dark matter distribution from cosmic shear data, allowing us to extract information beyond the two-point statistics and exploiting the full information content of the cosmological fields.

Heavens A, Sellentin E, Jaffe A, 2020, Extreme data compression while searching for new physics, *Monthly Notices of the Royal Astronomical Society*, Vol: 498, Pages: 3440-3451, ISSN: 0035-8711

Bringing a high-dimensional dataset into science-ready shape is a formidable challenge that often necessitates data compression. Compression has accordingly become a key consideration for contemporary cosmology, affecting public data releases, and reanalyses searching for new physics. However, data compression optimized for a particular model can suppress signs of new physics, or even remove them altogether. We therefore provide a solution for exploring new physics *during* data compression. In particular, we store additional agnostic compressed data points, selected to enable precise constraints of non-standard physics at a later date. Our procedure is based on the maximal compression of the MOPED algorithm, which optimally filters the data with respect to a baseline model. We select additional filters, based on a generalised principal component analysis, which are carefully constructed to scout for new physics at high precision and speed. We refer to the augmented set of filters as MOPED-PC. They enable an analytic computation of Bayesian evidences that may indicate the presence of new physics, and fast analytic estimates of best-fitting parameters when adopting a specific non-standard theory, without further expensive MCMC analysis. As there may be large numbers of non-standard theories, the speed of the method becomes essential. Should no new physics be found, then our approach preserves the precision of the standard parameters. As a result, we achieve very rapid and maximally precise constraints of standard and non-standard physics, with a technique that scales well to large dimensional datasets.
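
The MOPED compression underlying this method reduces the data to one number per parameter while preserving the Fisher information. A single-parameter sketch (the covariance and mean derivative below are hypothetical):

```python
import numpy as np

def moped_vector(dmu_dtheta, cov):
    """MOPED filter for one parameter: b = C^{-1} mu' / sqrt(mu'^T C^{-1} mu').
    The compressed statistic y = b^T d then has unit variance."""
    w = np.linalg.solve(cov, dmu_dtheta)
    return w / np.sqrt(dmu_dtheta @ w)

rng = np.random.default_rng(4)
ndata = 100
cov = np.diag(rng.uniform(0.5, 2.0, ndata))      # hypothetical data covariance
dmu = np.sin(np.linspace(0, 3, ndata))           # hypothetical d(mean)/d(theta)

b = moped_vector(dmu, cov)
# Fisher information carried by the single compressed number y = b^T d
fisher = (b @ dmu) ** 2 / (b @ cov @ b)
print(fisher)
```

The compressed statistic carries the same Fisher information as the full 100-point data vector, which is why storing a few extra, carefully chosen filters (as in MOPED-PC) is enough to keep sensitivity to non-standard physics.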

Jimenez R, Heavens AF, 2020, The distribution of dark galaxies and spin bias, *Monthly Notices of the Royal Astronomical Society*, Vol: 498, Pages: L93-L97, ISSN: 0035-8711

In the light of the discovery of numerous (almost) dark galaxies from the ALFALFA and LITTLE THINGS surveys, we revisit the predictions of Jimenez et al. (1997), based on the Toomre stability of rapidly spinning gas disks. We have updated the predictions for ΛCDM with parameters given by Planck18, computing the expected number densities of dark objects, and their spin parameter and mass distributions. Comparing with the data is more challenging, but where the spins are more reliably determined, they lie close to the threshold for disks to be stable according to the Toomre criterion, where the expected number density is highest; this reinforces the concept that there is a bias in the formation of luminous galaxies based on the spin of their parent halo.
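
The Toomre criterion referred to here compares pressure and rotational support against self-gravity: for a gas disk, Q = c_s κ / (π G Σ), with Q > 1 indicating stability against local axisymmetric collapse. A minimal sketch with illustrative (hypothetical) disk numbers:

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def toomre_q_gas(c_s, kappa, sigma):
    """Toomre stability parameter for a gas disk: Q = c_s * kappa / (pi * G * Sigma).
    Q > 1 means stable (no local gravitational collapse, so no star formation)."""
    return c_s * kappa / (np.pi * G * sigma)

# Hypothetical values: sound speed 10 km/s, epicyclic frequency 30 km/s/kpc,
# gas surface density 10 M_sun/pc^2 = 1e7 M_sun/kpc^2
q = toomre_q_gas(10.0, 30.0, 1e7)
print(q)
```

In this picture, haloes whose high spin keeps the disk surface density low enough that Q stays above unity never form stars, and remain dark.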

Mootoovaloo A, Heavens AF, Jaffe AH,
et al., 2020, Parameter Inference for Weak Lensing using Gaussian Processes and MOPED, *Monthly Notices of the Royal Astronomical Society*, Vol: 497, Pages: 2213-2226, ISSN: 0035-8711

In this paper, we propose a Gaussian Process (GP) emulator for the calculation both of tomographic weak lensing band powers, and of coefficients of summary data massively compressed with the MOPED algorithm. In the former case cosmological parameter inference is accelerated by a factor of ∼10–30 compared with the Boltzmann solver CLASS applied to KiDS-450 weak lensing data. Much larger gains of order 10³ will come with future data, and MOPED with GPs will be fast enough to permit the Limber approximation to be dropped, with acceleration in this case of ∼10⁵. A potential advantage of GPs is that an error on the emulated function can be computed and this uncertainty incorporated into the likelihood. However, it is known that the GP error can be unreliable when applied to deterministic functions, and we find, using the Kullback–Leibler divergence between the emulator and CLASS likelihoods, and from the uncertainties on the parameters, that agreement is better when the GP uncertainty is not used. In future, weak lensing surveys such as Euclid and the Legacy Survey of Space and Time will have up to ∼10⁴ summary statistics, and inference will be correspondingly more challenging. However, since the speed of MOPED is determined not by the number of summary data, but by the number of parameters, MOPED analysis scales almost perfectly, provided that a fast way to compute the theoretical MOPED coefficients is available. The GP provides such a fast mechanism.

Leclercq F, Faure B, Lavaux G,
et al., 2020, Perfectly parallel cosmological simulations using spatial comoving Lagrangian acceleration, *Astronomy and Astrophysics: a European journal*, Vol: 639, ISSN: 0004-6361

Context. Existing cosmological simulation methods lack a high degree of parallelism due to the long-range nature of the gravitational force, which limits the size of simulations that can be run at high resolution. Aims. To solve this problem, we propose a new, perfectly parallel approach to simulate cosmic structure formation, which is based on the spatial COmoving Lagrangian Acceleration (sCOLA) framework. Methods. Building upon a hybrid analytical and numerical description of particles' trajectories, our algorithm allows for an efficient tiling of a cosmological volume, where the dynamics within each tile is computed independently. As a consequence, the degree of parallelism is equal to the number of tiles. We optimised the accuracy of sCOLA through the use of a buffer region around tiles and of appropriate Dirichlet boundary conditions around sCOLA boxes. Results. We show that cosmological simulations at the degree of accuracy required for the analysis of the next generation of surveys can be run in drastically reduced wall-clock times and with very low memory requirements. Conclusions. The perfect scalability of our algorithm unlocks profoundly new possibilities for computing larger cosmological simulations at high resolution, taking advantage of a variety of hardware architectures.
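
The buffered-tile idea can be illustrated with any operation of finite spatial reach: process each tile independently together with a buffer, keep only the tile interiors, and the result matches the global computation when the buffer exceeds the interaction range. A toy 1D sketch, where a moving average stands in for the short-range tile dynamics:

```python
import numpy as np

def smooth(x, r=3):
    """A local operation with finite reach r (moving average), standing in
    for the dynamics evolved inside each tile."""
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(5)
n, tile, buf, r = 240, 60, 8, 3
field = rng.standard_normal(n)
reference = smooth(field, r)          # the "global" computation

# Process each buffered tile independently (perfectly parallel), then
# discard the buffers and keep only the tile interiors
out = np.empty(n)
for start in range(0, n, tile):
    lo, hi = max(start - buf, 0), min(start + tile + buf, n)
    local = smooth(field[lo:hi], r)
    out[start:start + tile] = local[start - lo:start - lo + tile]

print(np.max(np.abs(out - reference)))
```

Since each tile touches only its own buffered slice, the loop body can run on separate processes or machines with no communication, which is the sense in which the scheme is "perfectly parallel".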

Leclercq F, Enzi W, Jasche J,
et al., 2019, Primordial power spectrum and cosmology from black-box galaxy surveys, *Monthly Notices of the Royal Astronomical Society*, Vol: 490, Pages: 4237-4253, ISSN: 0035-8711

We propose a new, likelihood-free approach to inferring the primordial matter power spectrum and cosmological parameters from arbitrarily complex forward models of galaxy surveys where all relevant statistics can be determined from numerical simulations, i.e. black-boxes. Our approach builds upon approximate Bayesian computation using a novel effective likelihood, and upon the linearisation of black-box models around an expansion point. Consequently, we obtain simple "filter equations" for an effective posterior of the primordial power spectrum, and a straightforward scheme for cosmological parameter inference. We demonstrate that the workload is computationally tractable, fixed a priori, and perfectly parallel. As a proof of concept, we apply our framework to a realistic synthetic galaxy survey, with a data model accounting for physical structure formation and incomplete and noisy galaxy observations. In doing so, we show that the use of non-linear numerical models allows the galaxy power spectrum to be safely fitted up to at least $k_\mathrm{max} = 0.5$ $h$/Mpc, outperforming state-of-the-art backward-modelling techniques by a factor of $\sim 5$ in the number of modes used. The result is an unbiased inference of the primordial matter power spectrum across the entire range of scales considered, including a high-fidelity reconstruction of baryon acoustic oscillations. It translates into an unbiased and robust inference of cosmological parameters. Our results pave the path towards easy applications of likelihood-free simulation-based inference in cosmology.
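
The simplest likelihood-free scheme in this family is ABC rejection sampling: draw parameters from the prior, run the black-box simulator, and keep draws whose summary statistic lands near the observed one. A minimal sketch (the simulator, summary statistic, and tolerance are hypothetical, not the paper's effective-likelihood filter equations):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulator(theta, n=200):
    """Black-box forward model: here just Gaussian noise with std theta."""
    return rng.normal(0.0, theta, n)

def summary(data):
    """Summary statistic computed from the (simulated or observed) data."""
    return data.std()

# Observed data generated with a fiducial theta (hypothetical)
theta_true = 1.5
s_obs = summary(simulator(theta_true))

# ABC rejection: keep prior draws whose simulated summary is within epsilon
prior_draws = rng.uniform(0.1, 5.0, 20000)
accepted = [t for t in prior_draws if abs(summary(simulator(t)) - s_obs) < 0.05]
posterior_mean = np.mean(accepted)
print(posterior_mean, len(accepted))
```

The paper's contribution is to avoid this brute-force rejection step by linearising the black box around an expansion point, making the cost fixed a priori rather than growing with the required posterior accuracy.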

Jones DM, Heavens AF, 2019, Gaussian mixture models for blended photometric redshifts, *Monthly Notices of the Royal Astronomical Society*, Vol: 490, Pages: 3966-3986, ISSN: 0035-8711

Future cosmological galaxy surveys such as the Large Synoptic Survey Telescope (LSST) will photometrically observe very large numbers of galaxies. Without spectroscopy, the redshifts required for the analysis of these data will need to be inferred using photometric redshift techniques that are scalable to large sample sizes. The high number density of sources will also mean that around half are blended. We present a Bayesian photometric redshift method for blended sources that uses Gaussian mixture models to learn the joint flux–redshift distribution from a set of unblended training galaxies, and Bayesian model comparison to infer the number of galaxies comprising a blended source. The use of Gaussian mixture models renders both of these applications computationally efficient and therefore suitable for upcoming galaxy surveys.
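
The key operation is turning a mixture model fitted to the joint flux–redshift distribution into a redshift posterior for an observed flux, via Bayes' theorem on the joint density. A minimal numpy sketch: the training data are a hypothetical toy, and the mixture is built by crude redshift-slicing rather than the EM fit a real Gaussian mixture would use.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy training set: one "flux" feature correlated with redshift (hypothetical)
z = rng.uniform(0, 2, 5000)
flux = 2.0 - 0.8 * z + 0.1 * rng.standard_normal(5000)
X = np.column_stack([flux, z])

# Crude mixture: one Gaussian component per redshift slice, with the
# empirical weight, mean, and covariance of the slice
edges = np.linspace(0, 2, 6)
comps = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (z >= lo) & (z < hi)
    comps.append((sel.mean(), X[sel].mean(0), np.cov(X[sel].T)))

def posterior_z(f_obs, zgrid):
    """p(z | flux): evaluate the joint mixture density along a z grid and
    normalise, which is Bayes' theorem with the flux held fixed."""
    p = np.zeros_like(zgrid)
    for w, mu, cov in comps:
        icov = np.linalg.inv(cov)
        d = np.column_stack([np.full_like(zgrid, f_obs) - mu[0], zgrid - mu[1]])
        p += w * np.exp(-0.5 * np.einsum("ni,ij,nj->n", d, icov, d)) \
             / np.sqrt(np.linalg.det(cov))
    return p / p.sum()

zgrid = np.linspace(0, 2, 200)
post = posterior_z(1.2, zgrid)
print(zgrid[np.argmax(post)])
```

Because every density evaluation is a closed-form sum over Gaussian components, the same machinery extends cheaply to joint posteriors over the redshifts of several blended components.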

Jimenez R, Maartens R, Khalifeh AR,
et al., 2019, Measuring the homogeneity of the universe using polarization drift, *Journal of Cosmology and Astroparticle Physics*, Vol: 2019, ISSN: 1475-7516

We propose a method to probe the homogeneity of a general universe, without assuming symmetry. We show that isotropy can be tested at remote locations on the past lightcone by comparing the line-of-sight and transverse expansion rates, using the time dependence of the polarization of Cosmic Microwave Background photons that have been inverse-Compton scattered by the hot gas in massive clusters of galaxies. This probes a combination of remote transverse and parallel components of the expansion rate of the metric, and we may use radial baryon acoustic oscillations or cosmic clocks to measure the parallel expansion rate. Thus we can test remote isotropy, which is a key requirement of a homogeneous universe. We provide explicit formulas that connect observables and properties of the metric.

Schmit CJ, Heavens AF, Pritchard JR, 2019, The gravitational and lensing-ISW bispectrum of 21 cm radiation, *Monthly Notices of the Royal Astronomical Society*, Vol: 483, Pages: 4259-4275, ISSN: 0035-8711

Cosmic microwave background experiments from COBE to Planck have launched cosmology into an era of precision science, where many cosmological parameters are now determined to the per cent level. Next-generation telescopes, focusing on the cosmological 21 cm signal from neutral hydrogen, will probe enormous volumes in the low-redshift Universe, and have the potential to determine dark energy properties and test modifications of Einstein’s gravity. We study the 21 cm bispectrum due to gravitational collapse as well as the contribution by line-of-sight perturbations in the form of the lensing-ISW bispectrum at low redshifts (z ∼ 0.35−3), targeted by upcoming neutral hydrogen intensity mapping experiments. We compute the expected bispectrum amplitudes and use a Fisher forecast model to compare power spectrum and bispectrum observations of intensity mapping surveys by Canadian Hydrogen Intensity Mapping Experiment (CHIME), MeerKAT, and SKA-mid. We find that combined power spectrum and bispectrum observations have the potential to decrease errors on the cosmological parameters by an order of magnitude compared to Planck. Finally, we compute the contribution of the lensing-ISW bispectrum, and find that, unlike for the cosmic microwave background analyses, it can safely be ignored for 21 cm bispectrum observations.
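
A Fisher forecast of the kind used here propagates per-mode variances into parameter uncertainties via F_ij = Σ_ℓ (∂C_ℓ/∂θ_i)(∂C_ℓ/∂θ_j)/Var(C_ℓ). A one-parameter sketch with a hypothetical power spectrum and full-sky cosmic-variance errors:

```python
import numpy as np

# One parameter A scaling a power spectrum template, C_ell = A * template
ell = np.arange(2, 500)
c_ell = 1e-9 * (ell / 100.0) ** -2           # hypothetical fiducial spectrum
var = 2.0 / (2 * ell + 1) * c_ell ** 2       # full-sky cosmic variance

dC_dA = c_ell                                 # derivative at the fiducial A = 1
fisher = np.sum(dC_dA ** 2 / var)
sigma_A = 1.0 / np.sqrt(fisher)               # forecast 1-sigma error on A
print(sigma_A)
```

For several parameters the same sum yields a matrix, and marginal errors come from the diagonal of its inverse; adding the bispectrum contributes a second, independent Fisher term that tightens the constraints.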

Jones DM, Heavens AF, 2019, Bayesian photometric redshifts of blended sources, *Monthly Notices of the Royal Astronomical Society*, Vol: 483, Pages: 2487-2505, ISSN: 0035-8711

Photometric redshifts are necessary for enabling large-scale multicolour galaxy surveys to interpret their data and constrain cosmological parameters. While the increased depth of future surveys such as the Large Synoptic Survey Telescope (LSST) will produce higher precision constraints, it will also increase the fraction of sources that are blended. In this paper, we present a Bayesian photometric redshift (BPZ) method for blended sources with an arbitrary number of intrinsic components. This method generalizes existing template-based BPZ methods, and produces joint posterior distributions for the component redshifts that allow uncertainties to be propagated in a principled way. Using Bayesian model comparison, we infer the probability that a source is blended and the number of components that it contains. We extend our formalism to the case where sources are blended in some bands and resolved in others. Applying this to the combination of LSST- and Euclid-like surveys, we find that the addition of resolved photometry results in a significant improvement in the reduction of outliers over the fully blended case. We make available blendz, a Python implementation of our method.

Amendola L, Appleby S, Avgoustidis A,
et al., 2018, Cosmology and fundamental physics with the Euclid satellite, *Living Reviews in Relativity*, Vol: 21, Pages: 1-345, ISSN: 1433-8351

Euclid is a European Space Agency medium-class mission selected for launch in 2020 within the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the origin of the accelerated expansion of the universe. Euclid will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky. Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis. This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission.

Jeffrey N, Heavens AF, Fortio PD, 2018, Fast sampling from Wiener posteriors for image data with dataflow engines, *Astronomy and Computing*, Vol: 25, Pages: 230-237, ISSN: 2213-1337

We use Dataflow Engines (DFE) to construct an efficient Wiener filter of noisy and incomplete image data, and to quickly draw probabilistic samples of the compatible true underlying images from the Wiener posterior. Dataflow computing is a powerful approach using reconfigurable hardware, which can be deeply pipelined and is intrinsically parallel. The unique Wiener-filtered image is the minimum-variance linear estimate of the true image (if the signal and noise covariances are known) and the most probable true image (if the signal and noise are Gaussian distributed). However, many images are compatible with the data with different probabilities, given by the analytic posterior probability distribution referred to as the Wiener posterior. The DFE code also draws large numbers of samples of true images from this posterior, which allows for further statistical analysis. Naive computation of the Wiener-filtered image is impractical for large datasets, as it scales as O(N³), where N is the number of pixels. We use a messenger field algorithm, which is well suited to a DFE implementation, to draw samples from the Wiener posterior; that is, with the correct probability we draw samples of noiseless images that are compatible with the observed noisy image. The Wiener-filtered image can be obtained by a trivial modification of the algorithm. We demonstrate a lower bound on the speed-up, from drawing [Formula presented] samples of a [Formula presented] image, of 11.3 ± 0.8 with 8 DFEs in a 1U MPC-X box when compared with a 1U server presenting 32 CPU threads. We also discuss a potential application in astronomy, to provide better dark matter maps and improved determination of the parameters of the Universe.
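
The messenger-field trick introduces an auxiliary field with covariance τI, which is diagonal in both the pixel and Fourier bases, so each update is a cheap elementwise operation in its natural basis. A minimal 1D sketch (the signal spectrum and noise levels are hypothetical), checked against the dense-matrix Wiener solution:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 64

# Circulant signal covariance with Fourier eigenvalues S_k (hypothetical red spectrum)
k = np.fft.fftfreq(n) * n
S = 50.0 * np.maximum(np.abs(k), 1.0) ** -1.5

# Heteroscedastic noise, diagonal in pixel space
noise_var = rng.uniform(1.0, 3.0, n)

# Mock data: Gaussian signal with covariance F^{-1} diag(S) F, plus noise
s_true = np.real(np.fft.ifft(np.sqrt(S) * np.fft.fft(rng.standard_normal(n))))
d = s_true + rng.normal(0.0, np.sqrt(noise_var))

# Messenger-field iteration: t update in pixel space, s update in Fourier space
tau = noise_var.min()
nbar = noise_var - tau
s = np.zeros(n)
for _ in range(200):
    t = (tau * d + nbar * s) / (tau + nbar)                   # pixel space
    s = np.real(np.fft.ifft(S / (S + tau) * np.fft.fft(t)))   # Fourier space

# Direct Wiener filter via dense matrices, for comparison
F = np.fft.fft(np.eye(n), axis=0)
C_s = np.real(np.linalg.inv(F) @ np.diag(S) @ F)
s_direct = C_s @ np.linalg.solve(C_s + np.diag(noise_var), d)
print(np.max(np.abs(s - s_direct)))
```

No large matrix is ever formed or inverted in the iteration, only FFTs and elementwise arithmetic, which is what makes the algorithm map well onto deeply pipelined DFE hardware.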

Heavens AF, Di Valentino E, Melchiorri A,
et al., 2018, Bayesian Evidence against Harrison-Zel'dovich spectrum in tension cosmology, *Physical Review D - Particles, Fields, Gravitation and Cosmology*, Vol: 98, ISSN: 1550-2368

Current cosmological constraints on the scalar spectral index of primordial fluctuations ns in the Λ cold dark matter (ΛCDM) model have excluded the minimal scale-invariant Harrison-Zel’dovich model (ns=1; hereafter HZ) at high significance, providing support for inflation. In recent years, however, some tensions have emerged between different cosmological data sets that, if not due to systematics, could indicate the presence of new physics beyond the ΛCDM model. In light of these developments, we evaluate the Bayesian evidence against HZ in different data combinations and model extensions. Considering only the Planck temperature data, we find inconclusive evidence against HZ when including variations in the neutrino number Neff and/or the helium abundance YHe. Adding the Planck polarization data, on the other hand, yields strong evidence against HZ in the extensions we considered. Perhaps most interestingly, Planck temperature data combined with local measurements of the Hubble parameter [A. G. Riess et al., Astrophys. J. 826, 56 (2016); A. G. Riess et al., Astrophys. J. 861, 126 (2018)] give as the most probable model an HZ spectrum, with additional neutrinos. However, with the inclusion of polarization, standard ΛCDM is once again preferred, but the HZ model with extra neutrinos is not strongly disfavored. The possibility of fully ruling out the HZ spectrum is therefore ultimately connected with the solution to current tensions between cosmological data sets. If these tensions are confirmed by future data, then new physical mechanisms could be at work and a HZ spectrum could still offer a valid alternative.
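Because HZ (ns = 1) is nested inside ΛCDM, a Bayes factor of the kind evaluated here can be estimated from an MCMC chain with the Savage–Dickey density ratio: the posterior density at ns = 1 divided by the prior density there. The sketch below is illustrative only, using a synthetic Gaussian "chain" and an assumed flat prior rather than Planck numbers, and is not necessarily the method the paper used:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical posterior samples for n_s; a real analysis would use the
# MCMC chain from the data. Mean and width here are illustrative only.
samples = rng.normal(0.97, 0.01, 100_000)

# Assumed flat prior on n_s over a hypothetical range [0.9, 1.1].
prior_density = 1.0 / (1.1 - 0.9)

# Savage-Dickey ratio for the nested model: B = p(n_s = 1 | d) / pi(n_s = 1).
posterior_at_one = gaussian_kde(samples)(1.0)[0]
ln_bayes_factor = np.log(posterior_at_one / prior_density)
print(ln_bayes_factor)   # negative ln B means evidence against HZ
```

On the usual Jeffreys-type scales, |ln B| above roughly 5 counts as strong evidence, which connects the qualitative statements in the abstract ("inconclusive", "strong") to a single computed number.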

Heavens AF, Sellentin E, 2018, Objective Bayesian analysis of neutrino masses and hierarchy, *Journal of Cosmology and Astroparticle Physics*, Vol: 2018, ISSN: 1475-7516

Given the precision of current neutrino data, priors still noticeably impact the constraints on neutrino masses and their hierarchy. To avoid our understanding of neutrinos being driven by prior assumptions, we construct a prior that is mathematically minimally informative. Using the constructed uninformative prior, we find that the normal hierarchy is favoured but with inconclusive posterior odds of 5.1:1. Better data are hence needed before the neutrino masses and their hierarchy can be well constrained. We find that the next decade of cosmological data should provide conclusive evidence if the normal hierarchy with negligible minimum mass is correct, and if the uncertainty in the sum of neutrino masses drops below 0.025 eV. On the other hand, if neutrinos obey the inverted hierarchy, achieving strong evidence will be difficult with the same uncertainties. Our uninformative prior was constructed from principles of the Objective Bayesian approach. The prior is called a reference prior and is minimally informative in the specific sense that the information gain after collection of data is maximised. The prior is computed for the combination of neutrino oscillation data and cosmological data and still applies if the data improve.
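The posterior odds quoted above are a ratio of marginal likelihoods for the two hierarchies. The sketch below is a deliberately crude version of that calculation: it replaces the paper's reference prior with a flat prior on the lightest mass, fixes the mass-squared splittings to approximate oscillation values, and uses a hypothetical Gaussian cosmological constraint on the summed mass. None of the numbers are the paper's.

```python
import numpy as np

# Approximate mass-squared splittings from oscillation experiments (eV^2).
dm21, dm31 = 7.5e-5, 2.5e-3

def sum_masses(m0, inverted):
    """Summed neutrino mass as a function of the lightest mass m0 (eV)."""
    if inverted:
        m1 = np.sqrt(m0**2 + dm31)
        m2 = np.sqrt(m1**2 + dm21)
        return m1 + m2 + m0
    return m0 + np.sqrt(m0**2 + dm21) + np.sqrt(m0**2 + dm31)

# Hypothetical Gaussian cosmological constraint on the summed mass (eV).
sum_mean, sum_err = 0.06, 0.03

# Flat prior on the lightest mass: a crude stand-in for the reference prior.
m0 = np.linspace(0.0, 0.2, 4001)

def evidence(inverted):
    like = np.exp(-0.5 * ((sum_masses(m0, inverted) - sum_mean) / sum_err) ** 2)
    return like.sum()   # same grid for both models, so normalization cancels

odds = evidence(False) / evidence(True)   # posterior odds, normal : inverted
print(odds)
```

The inverted hierarchy forces the summed mass above roughly 0.1 eV, so a constraint peaking below that favours the normal hierarchy; the strength of that preference depends on the prior, which is exactly the sensitivity the reference-prior construction is designed to control.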

Heavens A, Alsing J, Jaffe A, et al., 2017, Bayesian hierarchical modelling of weak lensing - the golden goal, MG14 Meeting on General Relativity, Publisher: World Scientific, Pages: 3005-3010

To accomplish correct Bayesian inference from weak lensing shear data requires a complete statistical description of the data. The natural framework to do this is a Bayesian Hierarchical Model, which divides the chain of reasoning into component steps. Starting with a catalogue of shear estimates in tomographic bins, we build a model that allows us to sample simultaneously from the underlying tomographic shear fields and the relevant power spectra (E-mode, B-mode, and E-B, for auto- and cross-power spectra). The procedure deals easily with masked data and intrinsic alignments. Using Gibbs sampling and messenger fields, we show with simulated data that the large (over 67000-dimensional) parameter space can be efficiently sampled and the full joint posterior probability density function for the parameters can feasibly be obtained. The method correctly recovers the underlying shear fields and all of the power spectra, including at levels well below the shot noise.

Heavens AF, Sellentin E, 2017, On the insufficiency of arbitrarily precise covariance matrices: non-Gaussian weak lensing likelihoods, *Monthly Notices of the Royal Astronomical Society*, Vol: 473, Pages: 2355-2363, ISSN: 0035-8711

We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.
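The two-step idea, destroy the Gaussian (linear) correlations, then see what dependence survives, can be illustrated with a simple stand-in statistic rather than the paper's actual test: whiten the simulated data vectors with the Cholesky factor of their covariance, then look for correlations among the squared whitened components, which vanish for genuinely Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(2)

def squared_residual_corr(sims):
    """Whiten simulated data vectors (destroying Gaussian correlations), then
    return the largest off-diagonal correlation among the squared whitened
    components; for a Gaussian likelihood this is consistent with zero."""
    centred = sims - sims.mean(axis=0)
    L = np.linalg.cholesky(np.cov(sims, rowvar=False))
    white = np.linalg.solve(L, centred.T).T
    corr = np.corrcoef(white**2, rowvar=False)
    return np.abs(corr - np.eye(corr.shape[0])).max()

nsim, ndim = 20_000, 4
gauss = rng.standard_normal((nsim, ndim))
common = np.abs(rng.standard_normal(nsim))[:, None]
coupled = gauss * common    # shared scatter: non-Gaussian pairwise coupling

stat_gauss = squared_residual_corr(gauss)
stat_coupled = squared_residual_corr(coupled)
print(stat_gauss, stat_coupled)   # near zero vs. clearly non-zero
```

The coupled case is uncorrelated at the two-point level (whitening barely changes it), yet the shared multiplicative scatter shows up strongly in the squared components, which is precisely the kind of hidden pairwise dependence the diagnostic is meant to flag.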

Heavens AF, Sellentin E, de Mijolla D,
et al., 2017, Massive data compression for parameter-dependent covariance matrices, *Monthly Notices of the Royal Astronomical Society*, Vol: 472, Pages: 4244-4250, ISSN: 0035-8711

We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins which the data will be divided into. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov Chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.
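MOPED's core move is to project the full data vector onto one weight vector per parameter, b ∝ C⁻¹ ∂μ/∂θ, so covariance estimation happens in the tiny compressed space. The sketch below (with a hypothetical sinusoidal template and diagonal noise model, for a single amplitude parameter) shows the compression is lossless for a linear model: one compressed number reproduces the full-data generalised-least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
nd = 500                                   # length of the full data vector

# Hypothetical linear model: mean = A * template, known diagonal noise C.
x = np.linspace(0.0, 1.0, nd)
template = np.sin(6.0 * np.pi * x)         # d(mean)/dA
Cdiag = 0.5 + x                            # heteroscedastic noise variances

w = template / Cdiag                       # C^{-1} dmu/dA for diagonal C
b = w / np.sqrt(template @ w)              # normalized MOPED weight vector

A_true = 2.0
d = A_true * template + rng.normal(0.0, np.sqrt(Cdiag))

# Full-data GLS estimate vs. the estimate from the single compressed number.
A_full = (w @ d) / (w @ template)
t = b @ d                                  # the one MOPED summary
A_comp = t / (b @ template)
print(A_full, A_comp)                      # identical for this linear model
```

With p parameters one keeps p such numbers instead of ∼10⁴ summaries, so the covariance of the compressed statistics can be estimated from a correspondingly tiny number of simulations, which is the source of the factor-of-1000 (or ∼10⁶) saving quoted above.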

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.