## Publications

85 results found

Hill R, Shariff H, Trotta R,
et al., 2018, Projected distances to host galaxy reduce SNIa dispersion, *Monthly Notices of the Royal Astronomical Society*, Vol: 481, Pages: 2766-2777, ISSN: 0035-8711

We use multi-band imagery data from the Sloan Digital Sky Survey (SDSS) to measure projected distances of 302 type Ia supernovae (SNIa) from the centre of their host galaxies, normalized to the galaxy's brightness scale length, with a Bayesian approach. We test the hypothesis that SNIas further away from the centre of their host galaxy are less subject to dust contamination (as the dust column density in their environment is smaller) and/or come from a more homogeneous environment. Using the Mann-Whitney U test, we find a statistically significant difference in the observed colour correction distribution between SNIas that are near and those that are far from the centre of their host. The local p-value is 3 × 10⁻³, which is significant at the 5 per cent level after look-elsewhere effect correction. We estimate the residual scatter of the two subgroups to be 0.073 ± 0.018 for the far SNIas, compared to 0.114 ± 0.009 for the near SNIas -- an improvement of 30 per cent, albeit with a low statistical significance of 2σ. This confirms the importance of host galaxy properties in correctly interpreting SNIa observations for cosmological inference.
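
The comparison above rests on the Mann-Whitney U test, a rank-based test that requires no Gaussianity assumption. A minimal pure-Python sketch (the colour-correction values below are hypothetical, and the normal approximation without tie correction is used, which is adequate for continuous data) illustrates the idea:

```python
import math
import random

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Assumes continuous data (no tie correction)."""
    n1, n2 = len(x), len(y)
    # Rank the pooled sample from 1 to n1 + n2
    rank = {v: i + 1 for i, v in enumerate(sorted(x + y))}
    r1 = sum(rank[v] for v in x)
    u1 = r1 - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean_u) / sd_u
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p

random.seed(1)
# Hypothetical colour corrections: the "near" subgroup is drawn with an
# offset relative to the "far" subgroup (purely illustrative numbers)
near = [random.gauss(0.06, 0.10) for _ in range(150)]
far = [random.gauss(0.00, 0.07) for _ in range(152)]
u, p = mann_whitney_u(near, far)
```

In practice one would feed in the measured colour corrections of the near and far SNIa subgroups rather than simulated draws.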

Amendola L, Appleby S, Avgoustidis A,
et al., 2018, Cosmology and fundamental physics with the Euclid satellite, *Living Reviews in Relativity*, Vol: 21, Pages: 1-345, ISSN: 1433-8351

Euclid is a European Space Agency medium-class mission selected for launch in 2020 within the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the origin of the accelerated expansion of the universe. Euclid will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky. Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis. This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission.

Clark HA, Scott P, Trotta R,
et al., 2018, Dark matter substructure cannot explain properties of the Fermi Galactic Centre excess, *Journal of Cosmology and Astroparticle Physics*, Vol: 2018, ISSN: 1475-7516

An excess of gamma rays has been identified at the centre of the Milky Way, and annihilation of dark matter has been posited as a potential source. This hypothesis faces significant challenges: difficulty characterizing astrophysical backgrounds, the need for a non-trivial adiabatic contraction of the inner part of the Milky Way's dark matter halo, and recent observations of photon clustering, which suggest that the majority of the excess is due to unresolved point sources. Here we point out that the apparent point-like nature of the emission rules out the dark matter interpretation of the excess entirely. Attempting to model the emission with dark matter point sources either worsens the problem with the inner slope, requires an unrealistically large minihalo fraction toward the Galactic Centre, or overproduces the observed emission at higher latitudes.

Trotta R, 2018, The Hands-On Universe: Making Sense of the Universe with All Your Senses, *CAPJOURNAL*, Pages: 20-25, ISSN: 1996-5621

For the past four years, the Hands-On Universe public engagement programme has explored unconventional, interactive and multi-sensorial ways of communicating complex ideas in cosmology and astrophysics to a wide variety of audiences. The programme lead, Roberto Trotta, has reached thousands of people through food-based workshops, art and science collaborations and a book written using only the 1000 most common words in the English language. In this article, Roberto reflects in first person on what has worked well in the programme, and what has not.

Revsbech EA, Trotta R, van Dyk DA, 2017, STACCATO: a novel solution to supernova photometric classification with biased training sets, *Monthly Notices of the Royal Astronomical Society*, Vol: 473, ISSN: 0035-8711

We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. – a diffusion map combined with a random forest classifier – to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the ‘gold standard’ of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
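
The performance figures quoted above are areas under the ROC curve (AUC). A small self-contained sketch of the rank-based AUC estimator (this is only the metric, not the STACCATO pipeline itself; the scores and labels are made up for illustration) clarifies what is being measured:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank identity: the probability
    that a randomly chosen positive outscores a randomly chosen negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for four SNe (label 1 = true SNIa):
# every SNIa outranks every non-Ia, so the AUC is 1.0
perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation of SNIa from contaminants.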

Oreshenko M, Lavie B, Grimm SL,
et al., 2017, Retrieval analysis of the emission spectrum of WASP-12b: sensitivity of outcomes to prior assumptions and implications for formation history, *Astrophysical Journal Letters*, Vol: 847, ISSN: 2041-8205

We analyze the emission spectrum of the hot Jupiter WASP-12b using our HELIOS-R retrieval code and HELIOS-K opacity calculator. When interpreting Hubble and Spitzer data, the retrieval outcomes are found to be prior-dominated. When the prior distributions of the molecular abundances are assumed to be log-uniform, the volume mixing ratio of HCN is found to be implausibly high. A VULCAN chemical kinetics model of WASP-12b suggests that chemical equilibrium is a reasonable assumption even when atmospheric mixing is implausibly vigorous. Guided by (exo)planet formation theory, we set Gaussian priors on the elemental abundances of carbon, oxygen, and nitrogen with the Gaussian peaks being centered on the measured C/H, O/H, and N/H values of the star. By enforcing chemical equilibrium, we find substellar O/H and stellar to slightly superstellar C/H for the dayside atmosphere of WASP-12b. The superstellar carbon-to-oxygen ratio is just above unity, regardless of whether clouds are included in the retrieval analysis, consistent with Madhusudhan et al. Furthermore, whether a temperature inversion exists in the atmosphere depends on one's assumption for the Gaussian width of the priors. Our retrieved posterior distributions are consistent with the formation of WASP-12b in a solar-composition protoplanetary disk, beyond the water iceline, via gravitational instability or pebble accretion (without core erosion) and migration inward to its present orbital location via a disk-free mechanism, and are inconsistent with both in situ formation and core accretion with disk migration, as predicted by Madhusudhan et al. We predict that the interpretation of James Webb Space Telescope WASP-12b data will not be prior-dominated.

Aalbers J, Agostini F, Alfonsi M,
et al., 2016, DARWIN: towards the ultimate dark matter detector, *Journal of Cosmology and Astroparticle Physics*, Vol: 2016, ISSN: 1475-7516

DARk matter WImp search with liquid xenoN (DARWIN) will be an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core. Its primary goal will be to explore the experimentally accessible parameter space for Weakly Interacting Massive Particles (WIMPs) in a wide mass-range, until neutrino interactions with the target become an irreducible background. The prompt scintillation light and the charge signals induced by particle interactions in the xenon will be observed by VUV sensitive, ultra-low background photosensors. Besides its excellent sensitivity to WIMPs above a mass of 5 GeV/c², such a detector with its large mass, low-energy threshold and ultra-low background level will also be sensitive to other rare interactions. It will search for solar axions, galactic axion-like particles and the neutrinoless double-beta decay of ¹³⁶Xe, as well as measure the low-energy solar neutrino flux with < 1% precision, observe coherent neutrino-nucleus interactions, and detect galactic supernovae. We present the concept of the DARWIN detector and discuss its physics reach, the main sources of backgrounds and the ongoing detector design and R&D efforts.

Aaboud M, Aad G, Abbott B,
et al., 2016, Dark matter interpretations of ATLAS searches for the electroweak production of supersymmetric particles in √s = 8 TeV proton-proton collisions, *Journal of High Energy Physics*, Vol: 2016, ISSN: 1126-6708

A selection of searches by the ATLAS experiment at the LHC for the electroweak production of SUSY particles is used to study their impact on the constraints on dark matter candidates. The searches use 20 fb⁻¹ of proton-proton collision data at √s = 8 TeV. A likelihood-driven scan of a five-dimensional effective model focusing on the gaugino-higgsino and Higgs sector of the phenomenological minimal supersymmetric Standard Model is performed. This scan uses data from direct dark matter detection experiments, the relic dark matter density and precision flavour physics results. Further constraints from the ATLAS Higgs mass measurement and SUSY searches at LEP are also applied. A subset of models selected from this scan are used to assess the impact of the selected ATLAS searches in this five-dimensional parameter space. These ATLAS searches substantially impact those models for which the mass m(χ̃₁⁰) of the lightest neutralino is less than 65 GeV, excluding 86% of such models. The searches have limited impact on models with larger m(χ̃₁⁰) due to either heavy electroweakinos or compressed mass spectra where the mass splittings between the produced particles and the lightest supersymmetric particle are small.

Liem S, Bertone G, Calore F,
et al., 2016, Effective field theory of dark matter: a global analysis, *The Journal of High Energy Physics*, Vol: 2016, ISSN: 1029-8479

We present global fits of an effective field theory description of real and complex scalar dark matter candidates. We simultaneously take into account all possible dimension-six operators consisting of dark matter bilinears and gauge-invariant combinations of quark and gluon fields. We derive constraints on the free model parameters for both the real (five parameters) and complex (seven parameters) scalar dark matter models, obtained by combining Planck data on the cosmic microwave background, direct detection limits from LUX, and indirect detection limits from the Fermi Large Area Telescope. We find that for real scalars indirect dark matter searches disfavour a dark matter particle mass below 100 GeV. For the complex scalar dark matter particle, current data have a limited impact due to the presence of operators that lead to p-wave annihilation and do not contribute to the spin-independent scattering cross-section. Although current data are not informative enough to strongly constrain the theory parameter space, we demonstrate the power of our formalism to reconstruct the theoretical parameters compatible with an actual dark matter detection, by assuming that the excess of gamma rays observed by the Fermi Large Area Telescope towards the Galactic centre is entirely due to dark matter annihilations. Note that the excess can very well be due to astrophysical sources such as millisecond pulsars. We find that scalar dark matter interacting via effective field theory operators can in principle explain the Galactic centre excess, but that such an interpretation is in strong tension with the non-detection of gamma rays from dwarf galaxies in the real scalar case. In the complex scalar case there is enough freedom to relieve the tension.

Shariff H, Dhawan S, Jiao X,
et al., 2016, Standardizing type Ia supernovae optical brightness using near infrared rebrightening time, *Monthly Notices of the Royal Astronomical Society*, Vol: 463, Pages: 4311-4316, ISSN: 1365-2966

Accurate standardization of Type Ia supernovae (SNIa) is instrumental to the usage of SNIa as distance indicators. We analyse a homogeneous sample of 22 low-z SNIa, observed by the Carnegie Supernova Project (CSP) in the optical and near infra-red (NIR). We study the time of the second peak in the J-band, t2, as an alternative standardization parameter of SNIa peak optical brightness, as measured by the standard SALT2 parameter mB. We use BAHAMAS, a Bayesian hierarchical model for SNIa cosmology, to estimate the residual scatter in the Hubble diagram. We find that in the absence of a colour correction, t2 is a better standardization parameter compared to stretch: t2 has a 1σ posterior interval for the Hubble residual scatter of σ_Δμ = {0.250, 0.257} mag, compared to σ_Δμ = {0.280, 0.287} mag when stretch (x1) alone is used. We demonstrate that when employed together with a colour correction, t2 and stretch lead to similar residual scatter. Using colour, stretch and t2 jointly as standardization parameters does not result in any further reduction in scatter, suggesting that t2 carries redundant information with respect to stretch and colour. With a much larger SNIa NIR sample at higher redshift in the future, t2 could be a useful quantity to perform robustness checks of the standardization procedure.

Shariff H, Jiao X, Trotta R,
et al., 2016, Bahamas: new analysis of type Ia supernovae reveals inconsistencies with standard cosmology, *The Astrophysical Journal*, Vol: 827, ISSN: 1538-4357

We present results obtained by applying our BAyesian HierArchical Modeling for the Analysis of Supernova cosmology (BAHAMAS) software package to the 740 spectroscopically confirmed supernovae of type Ia (SNe Ia) from the "Joint Light-curve Analysis" (JLA) data set. We simultaneously determine cosmological parameters and standardization parameters, including corrections for host galaxy mass, residual scatter, and object-by-object intrinsic magnitudes. Combining JLA and Planck data on the cosmic microwave background, we find significant discrepancies in cosmological parameter constraints with respect to the standard analysis: we find Ωm = 0.399 ± 0.027, 2.8σ higher than previously reported, and w = −0.910 ± 0.045, 1.6σ higher than the standard analysis. We determine the residual scatter to be σ_res = 0.104 ± 0.005. We confirm (at the 95% probability level) the existence of two subpopulations segregated by host galaxy mass, separated at log₁₀(M/M⊙) = 10, differing in mean intrinsic magnitude by 0.055 ± 0.022 mag, lower than previously reported. Cosmological parameter constraints, however, are unaffected by the inclusion of corrections for host galaxy mass. We find ∼4σ evidence for a sharp drop in the value of the color correction parameter, β(z), at a redshift z_t = 0.662 ± 0.055. We rule out some possible explanations for this behavior, which remains unexplained.

Johannesson G, de Austri RR, Vincent AC,
et al., 2016, Bayesian analysis of cosmic ray propagation: evidence against homogeneous diffusion, *Astrophysical Journal*, Vol: 824, ISSN: 1538-4357

Bertone G, Calore F, Caron S,
et al., 2016, Global analysis of the pMSSM in light of the Fermi GeV excess: prospects for the LHC Run-II and astroparticle experiments, *Journal of Cosmology and Astroparticle Physics*, Vol: 2016, ISSN: 1475-7516

Strege C, Bertone G, Besjes GJ,
et al., 2014, Profile likelihood maps of a 15-dimensional MSSM, *Journal of High Energy Physics*, Vol: 2014, ISSN: 1126-6708

We present statistically convergent profile likelihood maps obtained via global fits of a phenomenological Minimal Supersymmetric Standard Model with 15 free parameters (the MSSM-15), based on over 250M points. We derive constraints on the model parameters from direct detection limits on dark matter, the Planck relic density measurement and data from accelerator searches. We provide a detailed analysis of the rich phenomenology of this model, and determine the SUSY mass spectrum and dark matter properties that are preferred by current experimental constraints. We evaluate the impact of the measurement of the anomalous magnetic moment of the muon (g − 2) on our results, and provide an analysis of scenarios in which the lightest neutralino is a subdominant component of the dark matter. The MSSM-15 parameters are relatively weakly constrained by current data sets, with the exception of the parameters related to dark matter phenomenology (M1, M2, μ), which are restricted to the sub-TeV regime, mainly due to the relic density constraint. The mass of the lightest neutralino is found to be < 1.5 TeV at 99% C.L., but can extend up to 3 TeV when excluding the g − 2 constraint from the analysis. Low-mass bino-like neutralinos are strongly favoured, with spin-independent scattering cross-sections extending to very small values, ∼ 10⁻²⁰ pb. ATLAS SUSY null searches strongly impact on this mass range, and thus rule out a region of parameter space that is outside the reach of any current or future direct detection experiment. The best-fit point obtained after inclusion of all data corresponds to a squark mass of 2.3 TeV, a gluino mass of 2.1 TeV and a 130 GeV neutralino with a spin-independent cross-section of 2.4 × 10⁻¹⁰ pb, which is within the reach of future multi-ton scale direct detection experiments and of the upcoming LHC run at increased centre-of-mass energy.

Martin J, Ringeval C, Trotta R,
et al., 2014, Compatibility of Planck and BICEP2 results in light of inflation, *PHYSICAL REVIEW D*, Vol: 90, ISSN: 2470-0010

Martin J, Ringeval C, Trotta R,
et al., 2014, The best inflationary models after Planck, *Journal of Cosmology and Astroparticle Physics*, Vol: 2014, ISSN: 1475-7516

We compute the Bayesian evidence and complexity of 193 slow-roll single-field models of inflation using the Planck 2013 Cosmic Microwave Background data, with the aim of establishing which models are favoured from a Bayesian perspective. Our calculations employ a new numerical pipeline interfacing an inflationary effective likelihood with the slow-roll library ASPIC and the nested sampling algorithm MultiNest. The models considered represent a complete and systematic scan of the entire landscape of inflationary scenarios proposed so far. Our analysis singles out the most probable models (from an Occam's razor point of view) that are compatible with Planck data, while ruling out with very strong evidence 34% of the models considered. We identify 26% of the models that are favoured by the Bayesian evidence, corresponding to 15 different potential shapes. If the Bayesian complexity is included in the analysis, only 9% of the models are preferred, corresponding to only 9 different potential shapes. These shapes are all of the plateau type.
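
The model comparison above relies on the Bayesian evidence, the integral of the likelihood over the prior, which automatically penalizes models that spread their prior over regions the data disfavour (Occam's razor). The toy sketch below (a one-parameter Gaussian likelihood with two zero-centred Gaussian priors of different widths; all numbers are hypothetical and this is not the paper's ASPIC/MultiNest pipeline) makes the mechanism concrete:

```python
import math

def evidence(data_mean, se, prior_width, step=0.001, half_range=5.0):
    """Bayesian evidence Z = integral of L(mu) * p(mu) dmu on a grid,
    for a Gaussian likelihood of width se and a zero-centred Gaussian
    prior of width prior_width on the parameter mu."""
    n = int(half_range / step)
    z = 0.0
    for i in range(-n, n + 1):
        mu = i * step
        like = math.exp(-0.5 * ((data_mean - mu) / se) ** 2) \
            / (se * math.sqrt(2 * math.pi))
        prior = math.exp(-0.5 * (mu / prior_width) ** 2) \
            / (prior_width * math.sqrt(2 * math.pi))
        z += like * prior * step
    return z

# Two hypothetical models for the same datum: a sharply predictive prior
# versus a diffuse one.  Both fit the data equally well, but the diffuse
# prior pays an Occam penalty for the unused parameter space.
z_narrow = evidence(data_mean=0.02, se=0.1, prior_width=0.1)
z_wide = evidence(data_mean=0.02, se=0.1, prior_width=2.0)
```

When the datum sits where both models predicted it could, the narrower, more predictive prior attains the higher evidence; this is the effect that drives the ranking of the 193 inflationary potentials.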

Amendola L, Appleby S, Bacon D,
et al., 2013, Cosmology and Fundamental Physics with the Euclid Satellite, *Living Reviews in Relativity*, Vol: 16, ISSN: 1433-8351

Euclid is a European Space Agency medium-class mission selected for launch in 2019 within the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the origin of the accelerated expansion of the universe. Euclid will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky. Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis. This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission.

Strege C, Bertone G, Feroz F,
et al., 2013, Global fits of the cMSSM and NUHM including the LHC Higgs discovery and new XENON100 constraints, *JOURNAL OF COSMOLOGY AND ASTROPARTICLE PHYSICS*, Vol: 2013, ISSN: 1475-7516

We present global fits of the constrained Minimal Supersymmetric Standard Model (cMSSM) and the Non-Universal Higgs Model (NUHM), including the most recent CMS constraint on the Higgs boson mass, 5.8 fb⁻¹ integrated luminosity null Supersymmetry searches by ATLAS, the new LHCb measurement of BR(B̄s → μ⁺μ⁻) and the 7-year WMAP dark matter relic abundance determination. We include the latest dark matter constraints from the XENON100 experiment, marginalising over astrophysical and particle physics uncertainties. We present Bayesian posterior and profile likelihood maps of the highest resolution available today, obtained from up to 350M points. We find that the new constraint on the Higgs boson mass has a dramatic impact, ruling out large regions of previously favoured cMSSM and NUHM parameter space. In the cMSSM, light sparticles and predominantly gaugino-like dark matter with a mass of a few hundred GeV are favoured. The NUHM exhibits a strong preference for heavier sparticle masses and a Higgsino-like neutralino with a mass of 1 TeV. The future ton-scale XENON1T direct detection experiment will probe large portions of the currently favoured cMSSM and NUHM parameter space. The LHC operating at 14 TeV collision energy will explore the favoured regions in the cMSSM, while most of the regions favoured in the NUHM will remain inaccessible. Our best-fit points achieve a satisfactory quality-of-fit, with p-values ranging from 0.21 to 0.35, so that neither of the two models studied can presently be excluded at any meaningful significance level.

Pato M, Strigari LE, Trotta R,
et al., 2013, Taming astrophysical bias in direct dark matter searches, *Journal of Cosmology and Astroparticle Physics*, Vol: 2013, ISSN: 1475-7516

We explore systematic biases in the identification of dark matter in future direct detection experiments and compare the reconstructed dark matter properties when assuming a self-consistent dark matter distribution function and the standard Maxwellian velocity distribution. We find that the systematic bias on the dark matter mass and cross-section determination arising from wrong assumptions for its distribution function is of order ~ 1σ. A much larger systematic bias can arise if wrong assumptions are made on the underlying Milky Way mass model. However, in both cases the bias is substantially mitigated by marginalizing over galactic model parameters. We additionally show that the velocity distribution can be reconstructed in an unbiased manner for typical dark matter parameters. Our results highlight both the robustness of the dark matter mass and cross-section determination using the standard Maxwellian velocity distribution and the importance of accounting for astrophysical uncertainties in a statistically consistent fashion.

Gandy A, Trotta R, 2013, Special Issue on Astrostatistics, *STATISTICAL ANALYSIS AND DATA MINING*, Vol: 6, Pages: 1+, ISSN: 1932-1864

Strege C, Trotta R, Bertone G,
et al., 2012, Fundamental statistical limitations of future dark matter direct detection experiments, *Physical Review D*, Vol: 86, ISSN: 1550-7998

We discuss irreducible statistical limitations of future ton-scale dark matter direct detection experiments. We focus in particular on the coverage of confidence intervals, which quantifies the reliability of the statistical method used to reconstruct the dark matter parameters and the bias of the reconstructed parameters. We study 36 benchmark dark matter models within the reach of upcoming ton-scale experiments. We find that approximate confidence intervals from a profile-likelihood analysis exactly cover or overcover the true values of the weakly interacting massive particle (WIMP) parameters, and hence are conservative. We evaluate the probability that unavoidable statistical fluctuations in the data might lead to a biased reconstruction of the dark matter parameters, or large uncertainties on the reconstructed parameter values. We show that this probability can be surprisingly large, even for benchmark models leading to a large event rate of order a hundred counts. We find that combining data sets from two different targets leads to improved coverage properties, as well as a substantial reduction of statistical bias and uncertainty on the dark matter parameters.
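
Coverage, as used above, is the long-run fraction of repeated experiments in which a confidence interval contains the true parameter value. A minimal Monte Carlo sketch (a toy Gaussian model with known σ, not the WIMP likelihood of the paper) shows how coverage is checked empirically:

```python
import math
import random

def coverage(true_mu=0.0, sigma=1.0, n=50, trials=2000, z=1.96):
    """Fraction of simulated data sets whose 95% confidence interval
    for the mean contains the true value (toy Gaussian, known sigma)."""
    random.seed(0)  # reproducible toy experiment
    hits = 0
    for _ in range(trials):
        data = [random.gauss(true_mu, sigma) for _ in range(n)]
        mean = sum(data) / n
        half_width = z * sigma / math.sqrt(n)
        hits += mean - half_width <= true_mu <= mean + half_width
    return hits / trials

cov = coverage()  # should land close to the nominal 0.95
```

A 95% interval should contain the truth in about 95% of simulated data sets; exact or over-coverage is conservative, while under-coverage signals an unreliable interval construction.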

Strege C, Bertone G, Cerdeno DG,
et al., 2012, Updated global fits of the cMSSM including the latest LHC SUSY and Higgs searches and XENON100 data, *Journal of Cosmology and Astroparticle Physics*, Vol: 2012, ISSN: 1475-7516

We present new global fits of the constrained Minimal Supersymmetric Standard Model (cMSSM), including LHC 1/fb integrated luminosity SUSY exclusion limits, recent LHC 5/fb constraints on the mass of the Higgs boson and XENON100 direct detection data. Our analysis fully takes into account astrophysical and hadronic uncertainties that enter the analysis when translating direct detection limits into constraints on the cMSSM parameter space. We provide results for both a Bayesian and a Frequentist statistical analysis. We find that LHC 2011 constraints in combination with XENON100 data can rule out a significant portion of the cMSSM parameter space. Our results further emphasise the complementarity of collider experiments and direct detection searches in constraining extensions of Standard Model physics. The LHC 2011 exclusion limit strongly impacts on low-mass regions of cMSSM parameter space, such as the stau co-annihilation region, while direct detection data can rule out regions of high SUSY masses, such as the Focus-Point region, which is unreachable for the LHC in the near future. We show that, in addition to XENON100 data, the experimental constraint on the anomalous magnetic moment of the muon plays a dominant role in disfavouring large scalar and gaugino masses. We find that, should the LHC 2011 excess hinting towards a Higgs boson at 126 GeV be confirmed, currently favoured regions of the cMSSM parameter space will be robustly ruled out from both a Bayesian and a profile likelihood statistical perspective.

Bertone G, Cerdeno DG, Fornasa M,
et al., 2012, Complementarity of indirect and accelerator dark matter searches, *Physical Review D*, Vol: 85, ISSN: 1550-7998

Even if supersymmetric particles are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the dark matter (DM) in the Universe using LHC data alone. We study the complementarity of LHC and DM indirect searches, working out explicitly the reconstruction of the DM properties for a specific benchmark model in the coannihilation region of a 24-parameters supersymmetric model. Combining mock high-luminosity LHC data with presentday null searches for gamma rays from dwarf galaxies with the Fermi Large Area Telescope, we show that current Fermi Large Area Telescope limits already have the capability of ruling out a spurious wino-like solution which would survive using LHC data only, thus leading to the correct identification of the cosmological solution. We also demonstrate that upcoming Planck constraints on the reionization history will have a similar constraining power and discuss the impact of a possible detection of gamma rays from DM annihilation in the Draco dwarf galaxy with a Cherenkov-Telescope-Array-like experiment. Our results indicate that indirect searches can be strongly complementary to the LHC in identifying the DM particles, even when astrophysical uncertainties are taken into account.

Arina C, Hamann J, Trotta R,
et al., 2012, Evidence for dark matter modulation in CoGeNT?, *Journal of Cosmology and Astroparticle Physics*, Vol: 2012, ISSN: 1475-7516

We investigate the question of whether the recent modulation signal claimed by CoGeNT is best explained by the dark matter (DM) hypothesis from a Bayesian model comparison perspective. We consider five phenomenological explanations for the data: no modulation signal, modulation due to DM, modulation due to DM compatible with the total CoGeNT rate, and a signal coming from other physics with a free phase but annual period, or with a free phase and a free period. In each scenario, we assign to the free parameters physically motivated priors. We find that the DM models are only weakly preferred over the no-modulation model, but when compared to models where the modulation is due to other physics, the DM hypothesis is favoured with odds ranging from 185:1 to 560:1. This result is robust even when astrophysical uncertainties are taken into account and the impact of priors assessed. Interestingly, the odds for the DM model in which the modulation signal is compatible with the total rate against a DM model in which this prior is not implemented are only 5:8, in spite of the former's prediction of a modulation amplitude in the energy range 0.9 → 3.0 keVee that is significantly smaller than the value observed by CoGeNT. Classical hypothesis testing also rules out the null hypothesis of no modulation at the 1.6σ to 2.3σ level, depending on the details of the alternative. Lastly, we investigate whether anisotropic velocity distributions can help to mitigate the tension between the CoGeNT total and modulated rates, and find encouraging results.

Bertone G, Cerdeno DG, Fornasa M,
et al., 2012, Global fits of the cMSSM including the first LHC and XENON100 data, *Journal of Cosmology and Astroparticle Physics*, Vol: 2012, ISSN: 1475-7516

We present updated global fits of the constrained Minimal Supersymmetric Standard Model (cMSSM), including the most recent constraints from the ATLAS and CMS detectors at the LHC, as well as the most recent results of the XENON100 experiment. Our robust analysis takes into account both astrophysical and hadronic uncertainties that enter in the calculation of the rate of WIMP-induced recoils in direct detection experiments. We study the consequences for neutralino Dark Matter, and show that current direct detection data already allow us to robustly rule out the so-called Focus Point region, thereby demonstrating the importance of particle astrophysics experiments in constraining extensions of the Standard Model of Particle Physics. We also observe an increased compatibility between results obtained from a Bayesian and a Frequentist statistical perspective. We find that upcoming ton-scale direct detection experiments will probe essentially the entire currently favoured region (at the 99% level), almost independently of the statistical approach used. Prospects for indirect detection of the cMSSM are further reduced.

Bertone G, Cumberbatch D, Ruiz de Austri R,
et al., 2012, Dark Matter searches: the nightmare scenario, *Journal of Cosmology and Astroparticle Physics*, Vol: 2012, ISSN: 1475-7516

The unfortunate case where the Large Hadron Collider (LHC) fails to discover physics Beyond the Standard Model (BSM) is sometimes referred to as the "Nightmare scenario" of particle physics. We study the consequences of this hypothetical scenario for Dark Matter (DM), in the framework of the constrained Minimal Supersymmetric Standard Model (cMSSM). We evaluate the surviving regions of the cMSSM parameter space after null searches at the LHC, using several different LHC configurations, and study the consequences for DM searches with ton-scale direct detectors and the IceCube neutrino telescope. We demonstrate that ton-scale direct detection experiments will be able to conclusively probe the cMSSM parameter space that would survive null searches at the LHC with 100 fb−1 of integrated luminosity at 14 TeV. We also demonstrate that IceCube (80 strings plus DeepCore) will be able to probe as much as ≃ 17% of the currently favoured parameter space after 5 years of observation.

March MC, Trotta R, Berkes P,
et al., 2011, Improved constraints on cosmological parameters from Type Ia supernova data, *Monthly Notices of the Royal Astronomical Society*, Vol: 418, Pages: 2308-2329, ISSN: 1365-2966

We present a new method based on a Bayesian hierarchical model to extract constraints on cosmological parameters from Type Ia supernova (SNIa) data obtained with the SALT-II light-curve fitter. We demonstrate with simulated data sets that our method delivers tighter statistical constraints on the cosmological parameters over 90 per cent of the time, that it reduces statistical bias typically by a factor of ∼2–3 and that it has better coverage properties than the usual χ2 approach. As a further benefit, a full posterior probability distribution for the dispersion of the intrinsic magnitude of SNe is obtained. We apply this method to recent SNIa data, and by combining them with cosmic microwave background and baryonic acoustic oscillations data, we obtain Ωm= 0.28 ± 0.02, ΩΛ= 0.73 ± 0.01 (assuming w=−1) and Ωm= 0.28 ± 0.01, w=−0.90 ± 0.05 (assuming flatness; statistical uncertainties only). We constrain the intrinsic dispersion of the B-band magnitude of the SNIa population, obtaining σμint= 0.13 ± 0.01 mag. Applications to systematic uncertainties will be discussed in a forthcoming paper.
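The hierarchical ingredient here is that each supernova's true magnitude is a latent variable drawn from a population with unknown intrinsic scatter, which is inferred jointly rather than fixed by hand. A minimal toy sketch of that idea (illustrative numbers, not the paper's SALT-II pipeline), where integrating out the latent magnitudes gives a Gaussian marginal likelihood with variance σ_int² + σ_obs²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SNIa-like sample: each standardized magnitude scatters about a common
# absolute magnitude M with intrinsic dispersion sigma_int, then is observed
# with known measurement noise.  All numbers are illustrative.
n = 200
M_true, sig_int_true, sig_obs = -19.3, 0.13, 0.10
m = (M_true
     + rng.normal(0.0, sig_int_true, n)    # intrinsic population scatter
     + rng.normal(0.0, sig_obs, n))        # measurement noise

# Marginalizing the latent true magnitudes analytically gives
# m_i ~ N(M, sigma_int^2 + sigma_obs^2).  Posterior on a (M, sigma_int)
# grid with flat priors.
M_grid = np.linspace(-19.5, -19.1, 121)
s_grid = np.linspace(0.01, 0.40, 121)
Mg, Sg = np.meshgrid(M_grid, s_grid, indexing="ij")
var = Sg**2 + sig_obs**2
chi2 = np.sum((m[None, None, :] - Mg[..., None]) ** 2, axis=-1) / var
logpost = -0.5 * chi2 - 0.5 * n * np.log(var)
post = np.exp(logpost - logpost.max())
post /= post.sum()

s_marg = post.sum(axis=0)              # marginal posterior for sigma_int
s_mean = float(np.sum(s_grid * s_marg))
print(f"sigma_int posterior mean = {s_mean:.3f}")
```

The full marginal posterior for σ_int, rather than a plug-in estimate, is exactly the "further benefit" the abstract refers to.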

Eugenia Cabrera M, Alberto Casas J, Ruiz de Austri R,
et al., 2011, Quantifying the tension between the Higgs mass and (g-2)(mu) in the constrained MSSM, *PHYSICAL REVIEW D*, Vol: 84, ISSN: 2470-0010


March MC, Trotta R, Amendola L,
et al., 2011, Robustness to systematics for future dark energy probes, *Monthly Notices of the Royal Astronomical Society*, Vol: 415, Pages: 143-152, ISSN: 1365-2966

We extend the figure of merit formalism usually adopted to quantify the statistical performance of future dark energy probes to assess the robustness of a future mission to plausible systematic bias. We introduce a new robustness figure of merit which can be computed in the Fisher matrix formalism given arbitrary systematic biases in the observable quantities. We argue that robustness to systematics is an important new quantity that should be taken into account when optimizing future surveys. We illustrate our formalism with toy examples, and apply it to future Type Ia supernova (SN Ia) and baryonic acoustic oscillation (BAO) surveys. For the simplified systematic biases that we consider, we find that SNe Ia are a somewhat more robust probe of dark energy parameters than the BAO. We trace this back to a geometrical alignment of systematic bias direction with statistical degeneracy directions in the dark energy parameter space.
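The Fisher-matrix mechanics behind this can be sketched in a few lines: a systematic offset in the observables propagates linearly into a parameter bias, which can then be measured against the statistical errors. A toy two-parameter example (illustrative model and numbers, not the paper's dark energy observables):

```python
import numpy as np

# Toy linear model d_i = theta1 * x_i + theta2 * x_i**2 with Gaussian errors;
# the Fisher algebra is the same as for real dark energy observables.
x = np.linspace(0.1, 1.0, 20)
sigma = 0.05
J = np.column_stack([x, x**2])               # Jacobian d(model)/d(theta)
F = J.T @ J / sigma**2                       # Fisher matrix
C = np.linalg.inv(F)                         # statistical covariance
fom_stat = 1.0 / np.sqrt(np.linalg.det(C))   # DETF-style statistical FoM

# A systematic offset delta_d in the observables shifts the best fit by
# delta_theta = C J^T delta_d / sigma^2 (linear response).
delta_d = np.full_like(x, 0.01)              # illustrative constant offset
delta_theta = C @ (J.T @ delta_d) / sigma**2

# Bias in units of the statistical error, via the Fisher metric; a probe is
# robust when this stays small for plausible systematics.
bias_in_sigmas = float(np.sqrt(delta_theta @ F @ delta_theta))
print(f"statistical FoM = {fom_stat:.1f}, bias = {bias_in_sigmas:.2f} sigma")
```

The geometrical point in the abstract corresponds to how the bias vector δθ projects onto the statistical degeneracy directions encoded in C.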

Trotta R, Kunz M, Liddle AR, 2011, Designing decisive detections, *Monthly Notices of the Royal Astronomical Society*, Vol: 414, Pages: 2337-2344, ISSN: 1365-2966

We present a general Bayesian formalism for the definition of figures of merit (FoMs) quantifying the scientific return of a future experiment. We introduce two new FoMs for future experiments based on their model selection capabilities, called the decisiveness of the experiment and the expected strength of evidence. We illustrate these by considering dark energy probes and compare the relative merits of stages II, III and IV dark energy probes. We find that probes based on supernovae and on weak lensing perform rather better on model selection tasks than is indicated by their Fisher matrix FoM as defined by the Dark Energy Task Force. We argue that our ability to optimize future experiments for dark energy model selection goals is limited by our current uncertainty over the models and their parameters, which is ignored in the usual Fisher matrix forecasts. Our approach gives a more realistic assessment of the capabilities of future probes and can be applied in a variety of situations.
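The two FoMs can be illustrated with a toy nested-model forecast: simulate many future data realizations at an assumed fiducial model, compute the Bayes factor for each, and summarize the distribution. A minimal sketch (Gaussian toy problem with an analytic Bayes factor; all numbers illustrative, and the odds threshold of 100:1 is an assumption for this example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Nested toy models for one parameter mu measured with n Gaussian data points:
# M0 fixes mu = 0, M1 gives mu a N(0, tau^2) prior.  For this setup the Bayes
# factor depends only on the sample mean (a sufficient statistic).
n, sigma, tau = 50, 1.0, 1.0
mu_fid = 0.6                                # assumed fiducial "true" value

def ln_b01(xbar):
    """ln Bayes factor B01 (M0 vs M1) given the sample mean."""
    s0 = sigma**2 / n                       # sampling variance under M0
    s1 = s0 + tau**2                        # marginal variance under M1
    return (-0.5 * xbar**2 / s0 - 0.5 * np.log(s0)) - \
           (-0.5 * xbar**2 / s1 - 0.5 * np.log(s1))

# Forecast: distribution of the evidence over future data realizations,
# rather than a single Fisher-style point estimate.
xbar = mu_fid + rng.normal(0.0, sigma / np.sqrt(n), 10_000)
lnB = ln_b01(xbar)
expected_lnB = float(np.mean(lnB))                  # expected strength of evidence
decisiveness = float(np.mean(lnB < -np.log(100)))   # P(odds exceed 100:1 for M1)
print(f"<ln B01> = {expected_lnB:.1f}, decisiveness = {decisiveness:.2f}")
```

Averaging over realizations is what distinguishes these model-selection FoMs from a Fisher forecast: the same experiment can be decisive for some plausible fiducial parameters and inconclusive for others.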

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.