## Publications

86 results found

Trotta R, Kunz M, Liddle AR, 2011, Designing decisive detections, *Monthly Notices of the Royal Astronomical Society*, Vol: 414, Pages: 2337-2344, ISSN: 1365-2966

We present a general Bayesian formalism for the definition of figures of merit (FoMs) quantifying the scientific return of a future experiment. We introduce two new FoMs for future experiments based on their model selection capabilities, called the decisiveness of the experiment and the expected strength of evidence. We illustrate these by considering dark energy probes and compare the relative merits of stages II, III and IV dark energy probes. We find that probes based on supernovae and on weak lensing perform rather better on model selection tasks than is indicated by their Fisher matrix FoM as defined by the Dark Energy Task Force. We argue that our ability to optimize future experiments for dark energy model selection goals is limited by our current uncertainty over the models and their parameters, which is ignored in the usual Fisher matrix forecasts. Our approach gives a more realistic assessment of the capabilities of future probes and can be applied in a variety of situations.

Feroz F, Cranmer K, Hobson M, et al., 2011, Challenges of profile likelihood evaluation in multi-dimensional SUSY scans, *Journal of High Energy Physics*, Vol: 2011, ISSN: 1126-6708

Statistical inference of the fundamental parameters of supersymmetric theories is a challenging and active endeavor. Several sophisticated algorithms have been employed to this end. While Markov-Chain Monte Carlo (MCMC) and nested sampling techniques are geared towards Bayesian inference, they have also been used to estimate frequentist confidence intervals based on the profile likelihood ratio. We investigate the performance and appropriate configuration of MultiNest, a nested sampling based algorithm, when used for profile likelihood-based analyses, both on toy models and on the parameter space of the Constrained MSSM. We find that while the standard configuration previously used in the literature is appropriate for an accurate reconstruction of the Bayesian posterior, the profile likelihood is poorly approximated. We identify a more appropriate MultiNest configuration for profile likelihood analyses, which gives an excellent exploration of the profile likelihood (albeit at a larger computational cost), including the identification of the global maximum likelihood value. We conclude that with the appropriate configuration MultiNest is a suitable tool for profile likelihood studies, indicating previous claims to the contrary are not well founded.

Trotta R, Cranmer K, 2011, Statistical Challenges of Global SUSY Fits

We present recent results aiming at assessing the coverage properties of Bayesian and frequentist inference methods, as applied to the reconstruction of supersymmetric parameters from simulated LHC data. We discuss the statistical challenges of the reconstruction procedure, and highlight the algorithmic difficulties of obtaining accurate profile likelihood estimates.

Vardanyan M, Trotta R, Silk J, 2011, Applications of Bayesian model averaging to the curvature and size of the Universe, *Monthly Notices of the Royal Astronomical Society*, Vol: 413, Pages: L91-L95, ISSN: 1365-2966

Bayesian model averaging is a procedure to obtain parameter constraints that account for the uncertainty about the correct cosmological model. We use recent cosmological observations and Bayesian model averaging to derive tight limits on the curvature parameter, as well as robust lower bounds on the curvature radius of the Universe and its minimum size, while allowing for the possibility of an evolving dark energy component. Because flat models are favoured by Bayesian model selection, we find that model-averaged constraints on the curvature and size of the Universe can be considerably stronger than non-model-averaged ones. For the most conservative prior choice (based on inflationary considerations), our procedure improves on non-model-averaged constraints on the curvature by a factor of ∼2. The curvature scale of the Universe is conservatively constrained to be Rc > 42 Gpc (99 per cent), corresponding to a lower limit to the number of Hubble spheres in the Universe NU > 251 (99 per cent).

Pato M, Baudis L, Bertone G, et al., 2011, Complementarity of dark matter direct detection targets, *Physical Review D*, Vol: 83, ISSN: 1550-7998

We investigate the reconstruction capabilities of the dark matter mass and spin-independent cross section from future ton-scale direct detection experiments using germanium, xenon, or argon as targets. Adopting realistic values for the exposure, energy threshold, and resolution of dark matter experiments which will come online within 5 to 10 years, the degree of complementarity between different targets is quantified. We investigate how the uncertainty in the astrophysical parameters controlling the local dark matter density and velocity distribution affects the reconstruction. For a 50 GeV WIMP, astrophysical uncertainties degrade the accuracy in the mass reconstruction by up to a factor of ∼4 for xenon and germanium, compared to the case when astrophysical quantities are fixed. However, the combination of argon, germanium, and xenon data increases the constraining power by a factor of ∼2 compared to germanium or xenon alone. We show that future direct detection experiments can achieve self-calibration of some astrophysical parameters, and they will be able to constrain the WIMP mass with only very weak external astrophysical constraints.

Martin J, Ringeval C, Trotta R, 2011, Hunting down the best model of inflation with Bayesian evidence, *Physical Review D*, Vol: 83, ISSN: 1550-7998

We present the first calculation of the Bayesian evidence for different prototypical single field inflationary scenarios, including representative classes of small field and large field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current “best model of inflation.” The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small field models are currently preferred, while large field models having a self-interacting potential of power p>4 are strongly disfavored. The class of small field models as a whole has posterior odds of approximately 3∶1 when compared with the large field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data (such as the Planck data) and the techniques introduced here should allow us to identify conclusively the best inflationary model.

Bridges M, Cranmer K, Feroz F, et al., 2011, A coverage study of the CMSSM based on ATLAS sensitivity using fast neural networks techniques, *Journal of High Energy Physics*, Vol: 2011, ISSN: 1126-6708

We assess the coverage properties of confidence and credible intervals on the CMSSM parameter space inferred from a Bayesian posterior and the profile likelihood based on an ATLAS sensitivity study. In order to make those calculations feasible, we introduce a new method based on neural networks to approximate the mapping between CMSSM parameters and weak-scale particle masses. Our method reduces the computational effort needed to sample the CMSSM parameter space by a factor of ∼10⁴ with respect to conventional techniques. We find that both the Bayesian posterior and the profile likelihood intervals can significantly over-cover, and trace the origin of this effect to physical boundaries in the parameter space. Finally, we point out that the effects intrinsic to the statistical procedure are conflated with simplifications to the likelihood functions from the experiments themselves.

Bertone G, Kong K, Ruiz de Austri R, et al., 2011, Global fits of the minimal universal extra dimensions scenario, *Physical Review D*, Vol: 83, ISSN: 1550-7998

In theories with universal extra dimensions (UED), the γ1 particle, the first excited state of the hypercharge gauge boson, provides an excellent dark matter (DM) candidate. Here, we use a modified version of the SuperBayeS code to perform a Bayesian analysis of the minimal UED scenario, in order to assess its detectability at accelerators and with DM experiments. We derive, in particular, the most probable range of mass and scattering cross sections off nucleons, taking into account cosmological and electroweak precision constraints. The consequences for the detectability of the γ1 with direct and indirect experiments are dramatic. The spin-independent cross section probability distribution peaks at ∼10⁻¹¹ pb, i.e. below the sensitivity of ton-scale experiments. The spin-dependent cross section drives the predicted neutrino flux from the center of the Sun below the reach of present and upcoming experiments. The only strategy that remains open appears to be direct detection with ton-scale experiments sensitive to spin-dependent cross sections. On the other hand, the LHC with 1 fb⁻¹ of data should be able to probe the current best-fit UED parameters.

Trotta R, Johannesson G, Moskalenko IV, et al., 2011, Constraints on cosmic-ray propagation models from a global Bayesian analysis, *Astrophysical Journal*, Vol: 729, ISSN: 1538-4357

Research in many areas of modern physics, such as indirect searches for dark matter and particle acceleration in supernova remnant shocks, relies heavily on studies of cosmic rays (CRs) and associated diffuse emissions (radio, microwave, X-rays, γ-rays). While very detailed numerical models of CR propagation exist, a quantitative statistical analysis of such models has been so far hampered by the large computational effort that those models require. Although statistical analyses have been carried out before using semi-analytical models (where the computation is much faster), the evaluation of the results obtained from such models is difficult, as they necessarily suffer from many simplifying assumptions. The main objective of this paper is to present a working method for a full Bayesian parameter estimation for a numerical CR propagation model. For this study, we use the GALPROP code, the most advanced of its kind, which uses astrophysical information, and nuclear and particle data as inputs to self-consistently predict CRs, γ-rays, synchrotron, and other observables. We demonstrate that a full Bayesian analysis is possible using nested sampling and Markov Chain Monte Carlo methods (implemented in the SuperBayeS code) despite the heavy computational demands of a numerical propagation code. The best-fit values of parameters found in this analysis are in agreement with previous, significantly simpler, studies also based on GALPROP.

March MC, Starkman GD, Trotta R, et al., 2011, Should we doubt the cosmological constant?, *Monthly Notices of the Royal Astronomical Society*, Vol: 410, Pages: 2488-2496, ISSN: 1365-2966

While Bayesian model selection is a useful tool to discriminate between competing cosmological models, it only gives a relative rather than an absolute measure of how good a model is. Bayesian doubt introduces an unknown benchmark model against which the known models are compared, thereby obtaining an absolute measure of model performance in a Bayesian framework. We apply this new methodology to the problem of the dark energy equation of state, comparing an absolute upper bound on the Bayesian evidence for a presently unknown dark energy model against a collection of known models including a flat Lambda cold dark matter (ΛCDM) scenario. We find a strong absolute upper bound to the Bayes factor B between the unknown model and ΛCDM, giving B≲ 5. The posterior probability for doubt is found to be less than 13 per cent (with a 1 per cent prior doubt) while the probability for ΛCDM rises from an initial 25 per cent to almost 70 per cent in light of the data. We conclude that ΛCDM remains a sufficient phenomenological description of currently available observations and that there is little statistical room for model improvement.

Roszkowski L, Ruiz de Austri R, Trotta R, et al., 2011, Global fits of the nonuniversal Higgs model, *Physical Review D*, Vol: 83, ISSN: 1550-7998

We carry out global fits to the nonuniversal Higgs model (NUHM), applying all relevant present-day constraints. We present global probability maps for the NUHM parameters and observables (including collider signatures and direct and indirect detection quantities), both in terms of posterior probabilities and in terms of profile likelihood maps. We identify regions of the parameter space where the neutralino dark matter in the model is either binolike, or else higgsinolike with mass close to 1 TeV and a spin-independent scattering cross section of ∼10⁻⁹–10⁻⁸ pb. We trace the occurrence of the higgsinolike region to a mild focusing effect in the running of one of the Higgs masses, whose existence in the NUHM we identify in our analysis. Although the usual binolike neutralino is more prominent, higgsinolike dark matter cannot be excluded; however, its significance depends strongly on the prior and statistics used to assess it. We note that, despite experimental constraints often favoring different regions of parameter space than in the constrained minimal supersymmetric standard model, most observational consequences appear fairly similar, which will make it challenging to distinguish the two models experimentally.

Bertone G, Cerdeno DG, Fornasa M,
et al., 2010, Identification of dark matter particles with LHC and direct detection data, *Physical Review D*, Vol: 82, ISSN: 1550-7998

Dark matter (DM) is currently searched for with a variety of detection strategies. Accelerator searches are particularly promising, but even if weakly interacting massive particles are found at the Large Hadron Collider (LHC), it will be difficult to prove that they constitute the bulk of the DM in the Universe, ΩDM. We show that a significantly better reconstruction of the DM properties can be obtained with a combined analysis of LHC and direct detection data, by making a simple Ansatz for the weakly interacting massive particle local density ρχ̃₁⁰, i.e., by assuming that the local density scales with the cosmological relic abundance, (ρχ̃₁⁰/ρDM) = (Ωχ̃₁⁰/ΩDM). We demonstrate this method with an explicit example in the context of a 24-parameter supersymmetric model, with a neutralino lightest supersymmetric particle in the stau coannihilation region. Our results show that future ton-scale direct detection experiments will make it possible to break degeneracies in the supersymmetric parameter space and achieve a significantly better reconstruction of the neutralino composition and its relic density than with LHC data alone.

Roszkowski L, Ruiz de Austri R, Trotta R, 2010, Efficient reconstruction of constrained MSSM parameters from LHC data: A case study, *Physical Review D*, Vol: 82, ISSN: 1550-7998

We present an efficient method of reconstructing the parameters of the constrained MSSM from assumed future LHC data, applied both in its own right and in combination with the cosmological determination of the relic dark matter abundance. Focusing on the ATLAS SU3 benchmark point, we demonstrate that our simple Gaussian approximation can recover the values of its parameters remarkably well. We examine two popular noninformative priors and obtain very similar results, although when we use an informative, naturalness-motivated prior, we find some sizeable differences. We show that a further strong improvement in reconstructing the SU3 parameters can be achieved by applying additional information about the relic abundance at the level of WMAP accuracy, although the expected data from Planck will have only a very limited additional impact. Further external data may be required to break some remaining degeneracies. We argue that the method presented here is applicable to a wide class of low-energy effective supersymmetric models, as it does not require one to deal with purely experimental issues, e.g., detector performance, and has the additional advantage of computational efficiency. Furthermore, our approach allows one to distinguish the effects of the model's internal structure and of the external data on the final parameter constraints.

Starkman GD, Trotta R, Vaudrevange PM, 2010, The virtues of frugality - why cosmological observers should release their data slowly, *Monthly Notices of the Royal Astronomical Society*, Vol: 401, Pages: L15-L18, ISSN: 1365-2966

Cosmologists will soon be in a unique position. Observational noise will gradually be replaced by cosmic variance as the dominant source of uncertainty in an increasing number of observations. We reflect on the ramifications for the discovery and verification of new models. If there are features in the full data set that call for a new model, there will be no subsequent observations to test that model's predictions. We give specific examples of the problem by discussing the pitfalls of model discovery by prior adjustment in the context of dark energy models and inflationary theories. We show how the gradual release of data can mitigate this difficulty, allowing anomalies to be identified and new models to be proposed and tested. We advocate that observers plan for the frugal release of data from future cosmic-variance-limited observations.

Trotta R, Parkinson DR, Kunz M, et al., 2009, Bayesian Experimental Design and Model Selection Forecasting, Bayesian Methods in Cosmology, Editors: Hobson, Jaffe, Liddle, Mukherjee, Parkinson, Cambridge, Publisher: Cambridge University Press, ISBN: 9780521887946

Strigari LE, Trotta R, 2009, Reconstructing WIMP properties in direct detection experiments including galactic dark matter distribution uncertainties, *Journal of Cosmology and Astroparticle Physics*, Vol: 2009, ISSN: 1475-7516

We present a new method for determining Weakly Interacting Massive Particle (WIMP) properties in future tonne-scale direct detection experiments which accounts for uncertainties in the Milky Way (MW) smooth dark matter distribution. Using synthetic data on the kinematics of MW halo stars matching present samples from the Sloan Digital Sky Survey, complemented by local escape velocity constraints, we demonstrate that the local dark matter density can be constrained to ∼20% accuracy. For low-mass WIMPs, we find that a factor of two error in the assumed local dark matter density leads to a severely biased reconstruction of the WIMP spin-independent cross section that is incorrect at the 15σ level. We show that this bias may be overcome by marginalizing over parameters that describe the MW potential, and use this formalism to project the accuracy attainable on WIMP properties in future 1-tonne xenon detectors. Our method can be readily applied to different detector technologies and extended to more detailed MW halo models.

Trotta R, Ruiz de Austri R, de los Heros CP, 2009, Prospects for dark matter detection with IceCube in the context of the CMSSM, *Journal of Cosmology and Astroparticle Physics*, Vol: 2009, ISSN: 1475-7516

We study in detail the ability of the nominal configuration of the IceCube neutrino telescope (with 80 strings) to probe the parameter space of the Constrained MSSM (CMSSM) favoured by current collider and cosmological data. Adopting conservative assumptions about the galactic halo model and the expected experiment performance, we find that IceCube has a probability between 2% and 12% of achieving a 5σ detection of dark matter annihilation in the Sun, depending on the choice of priors for the scalar and gaugino masses and on the astrophysical assumptions. We identify the most important annihilation channels in the CMSSM parameter space favoured by current constraints, and we demonstrate that assuming that the signal is dominated by a single annihilation channel can lead to large systematic errors in the inferred WIMP annihilation cross section. We demonstrate that ∼66% of the CMSSM parameter space violates the equilibrium condition between capture and annihilation in the center of the Sun. By cross-correlating our predictions with direct detection methods, we conclude that if IceCube does detect a neutrino flux from the Sun at high significance while direct detection experiments do not find a signal above a spin-independent cross section σSIp ≳ 7×10⁻⁹ pb, the CMSSM will be strongly disfavoured, given standard astrophysical assumptions for the WIMP distribution. This result is robust with respect to a change of priors. We argue that the proposed low-energy DeepCore extension of IceCube will be an ideal instrument to focus on relevant CMSSM areas of parameter space.

Vardanyan M, Trotta R, Silk J, 2009, How flat can you get? A model comparison perspective on the curvature of the Universe, *Monthly Notices of the Royal Astronomical Society*, Vol: 397, Pages: 431-444, ISSN: 1365-2966

Martinez GD, Bullock JS, Kaplinghat M, et al., 2009, Indirect dark matter detection from dwarf satellites: joint expectations from astrophysics and supersymmetry, *Journal of Cosmology and Astroparticle Physics*, Vol: 2009, ISSN: 1475-7516

We present a general methodology for determining the gamma-ray flux from annihilation of dark matter particles in Milky Way satellite galaxies, focusing on two promising satellites as examples: Segue 1 and Draco. We use the SuperBayeS code to explore the best-fitting regions of the Constrained Minimal Supersymmetric Standard Model (CMSSM) parameter space, and an independent MCMC analysis of the dark matter halo properties of the satellites using published radial velocities. We present a formalism for determining the boost from halo substructure in these galaxies and show that its value depends strongly on the extrapolation of the concentration-mass (c(M)) relation for CDM subhalos down to the minimum possible mass. We show that the preferred region for this minimum halo mass within the CMSSM with neutralino dark matter is ∼10⁻⁹–10⁻⁶ M⊙. For the boost model where the observed power-law c(M) relation is extrapolated down to the minimum halo mass we find average boosts of about 20, while the Bullock et al (2001) c(M) model results in boosts of order unity. We estimate that for the power-law c(M) boost model and photon energies greater than a GeV, the Fermi space telescope has about a 20% chance of detecting a dark matter annihilation signal from Draco with signal-to-noise greater than 3 after about 5 years of observation.

Hobson MP, Jaffe AH, Liddle AR, et al., 2009, Bayesian methods in cosmology, ISBN: 9780521887946

© Cambridge University Press, 2010. In recent years cosmologists have advanced from largely qualitative models of the Universe to precision modelling using Bayesian methods, in order to determine the properties of the Universe to high accuracy. This timely book is the only comprehensive introduction to the use of Bayesian methods in cosmological studies, and is an essential reference for graduate students and researchers in cosmology, astrophysics and applied statistics. The first part of the book focuses on methodology, setting the basic foundations and giving a detailed description of techniques. It covers topics including the estimation of parameters, Bayesian model comparison, and separation of signals. The second part explores a diverse range of applications, from the detection of astronomical sources (including through gravitational waves), to cosmic microwave background analysis and the quantification and classification of galaxy properties. Contributions from 24 highly regarded cosmologists and statisticians make this an authoritative guide to the subject.

Trotta R, Feroz F, Hobson M, et al., 2008, The impact of priors and observables on parameter inferences in the constrained MSSM, *Journal of High Energy Physics*, Vol: 2008, ISSN: 1126-6708

We use a newly released version of the SuperBayeS code to analyze the impact of the choice of priors and the influence of various constraints on the statistical conclusions for the preferred values of the parameters of the Constrained MSSM. We assess the effect in a Bayesian framework and compare it with an alternative likelihood-based measure of a profile likelihood. We employ a new scanning algorithm (MultiNest) which increases the computational efficiency by a factor of ∼200 with respect to previously used techniques. We demonstrate that the currently available data are not yet sufficiently constraining to allow one to determine the preferred values of CMSSM parameters in a way that is completely independent of the choice of priors and statistical measures. While BR(B̄ → Xsγ) generally favors large m0, this is in some contrast with the preference for low values of m0 and m1/2 that is almost entirely a consequence of a combination of prior effects and a single constraint coming from the anomalous magnetic moment of the muon, which remains somewhat controversial. Using an information-theoretical measure, we find that the cosmological dark matter abundance determination provides at least 80% of the total constraining power of all available observables. Despite the remaining uncertainties, prospects for direct detection in the CMSSM remain excellent, with the spin-independent neutralino-proton cross section σSIp almost guaranteed to lie above 10⁻¹⁰ pb, independently of the choice of priors or statistics. Likewise, gluino and lightest Higgs discovery at the LHC remain highly encouraging. While in this work we have used the CMSSM as the particle physics model, our formalism and scanning technique can be readily applied to a wider class of models with several free parameters.

Roszkowski L, Austri RRD, Silk J, et al., 2008, On prospects for dark matter indirect detection in the Constrained MSSM, *Physics Letters B*, Vol: 671, Pages: 10-14, ISSN: 0370-2693

In the framework of the Constrained MSSM we derive the most probable ranges of the diffuse gamma radiation flux from the direction of the Galactic center and of the positron flux from the Galactic halo due to neutralino dark matter annihilation. We find that, for a given halo model, and assuming flat priors, the 68% probability range of the integrated gamma-ray flux spans about one order of magnitude, while the 95% probability range can be much larger and extend over four orders of magnitude (even exceeding five for a tiny region at small neutralino mass). The detectability of the signal by GLAST depends primarily on the cuspiness of the halo profile. The positron flux, on the other hand, appears to be too small to be detectable by PAMELA, unless the boost factor is at least of order ten and/or the halo profile is extremely cuspy. We also briefly discuss the sensitivity of our results to the choice of priors.

Starkman GD, Trotta R, Vaudrevange PM, 2008, Introducing doubt in Bayesian model comparison

There are things we know, things we know we don't know, and then there are things we don't know we don't know. In this paper we address the latter two issues in a Bayesian framework, introducing the notion of doubt to quantify the degree of (dis)belief in a model given observational data in the absence of explicit alternative models. We demonstrate how a properly calibrated doubt can lead to model discovery when the true model is unknown.

Feroz F, Allanach BC, Hobson M, et al., 2008, Bayesian selection of sign(μ) within mSUGRA in global fits including WMAP5 results, *Journal of High Energy Physics*, ISSN: 1029-8479

Trotta R, 2008, Bayes in the sky: Bayesian inference and model selection in cosmology, *Contemporary Physics*, Vol: 49, Pages: 71-104, ISSN: 1366-5812

The application of Bayesian methods in cosmology and astrophysics has flourished over the past decade, spurred by data sets of increasing size and complexity. In many respects, Bayesian methods have proven to be vastly superior to more traditional statistical tools, offering the advantage of higher efficiency and of a consistent conceptual basis for dealing with the problem of induction in the presence of uncertainty. This trend is likely to continue in the future, when the way we collect, manipulate and analyse observations and compare them with theoretical models will assume an even more central role in cosmology. This review is an introduction to Bayesian methods in cosmology and astrophysics and recent results in the field. I first present Bayesian probability theory and its conceptual underpinnings, Bayes' Theorem and the role of priors. I discuss the problem of parameter inference and its general solution, along with numerical techniques such as Monte Carlo Markov Chain methods. I then review the theory and application of Bayesian model comparison, discussing the notions of Bayesian evidence and effective model complexity, and how to compute and interpret those quantities. Recent developments in cosmological parameter extraction and Bayesian cosmological model building are summarised, highlighting the challenges that lie ahead.

Kampakoglou M, Trotta R, Silk J, 2008, Monolithic or hierarchical star formation? A new statistical analysis, *Monthly Notices of the Royal Astronomical Society*, Vol: 384, Pages: 1414-1426, ISSN: 1365-2966

We consider an analytic model of cosmic star formation which incorporates supernova feedback, gas accretion and enriched outflows, reproducing the history of cosmic star formation, metallicity, Type II supernova rates and the fraction of baryons allocated to structures. We present a new statistical treatment of the available observational data on the star formation rate and metallicity that accounts for the presence of possible systematics. We then employ a Bayesian Markov Chain Monte Carlo method to compare the predictions of our model with observations and derive constraints on the seven free parameters of the model. We find that the dust-correction scheme one chooses to adopt for the star formation data is critical in determining which scenario is favoured between a hierarchical star formation model, where star formation is prolonged by accretion, infall and merging, and a monolithic scenario, where star formation is rapid and efficient. We distinguish between these modes by defining a characteristic minimum mass, M ≳ 10¹¹ M⊙, in our fiducial model, for early-type galaxies where star formation occurs efficiently. Our results indicate that the hierarchical star formation model can achieve better agreement with the data, but that this requires a high efficiency of supernova-driven outflows. In a monolithic model, our analysis points to the need for a mechanism that drives metal-poor winds, perhaps in the form of supermassive black hole induced outflows. Furthermore, the relative absence of star formation beyond z ∼ 5 in the monolithic scenario requires an alternative mechanism to dwarf galaxies for re-ionizing the universe at z ∼ 11, as required by observations of the microwave background. While the monolithic scenario is less favoured in terms of its quality-of-fit, it cannot yet be excluded.

Ballesteros G, Casas JA, Espinosa JR, et al., 2008, Flat tree-level inflationary potentials in the light of cosmic microwave background and large scale structure data, *Journal of Cosmology and Astroparticle Physics*, ISSN: 1475-7516

Gordon C, Trotta R, 2007, Bayesian calibrated significance levels applied to the spectral tilt and hemispherical asymmetry, *Monthly Notices of the Royal Astronomical Society*, Vol: 382, Pages: 1859-1863, ISSN: 0035-8711

Zunckel C, Trotta R, 2007, Reconstructing the history of dark energy using maximum entropy, *Monthly Notices of the Royal Astronomical Society*, Vol: 380, Pages: 865-876, ISSN: 0035-8711

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.