Imperial College London

Professor David van Dyk

Faculty of Natural Sciences, Department of Mathematics

Chair in Statistics

Contact

 

+44 (0)20 7594 8574 | d.van-dyk


Assistant

 

Mr David Whittaker +44 (0)20 7594 8481


Location

 

539 Huxley Building, South Kensington Campus


Publications


128 results found

Autenrieth M, van Dyk D, Trotta R, Stenning D, et al., 2024, Stratified learning: a general-purpose statistical method for improved learning under covariate shift, Statistical Analysis and Data Mining, Vol: 17, ISSN: 1932-1864

We propose a simple, statistically principled, and theoretically justified method to improve supervised learning when the training set is not representative, a situation known as covariate shift. We build upon a well-established methodology in causal inference and show that the effects of covariate shift can be reduced or eliminated by conditioning on propensity scores. In practice, this is achieved by fitting learners within strata constructed by partitioning the data based on the estimated propensity scores, leading to approximately balanced covariates and much-improved target prediction. We refer to the overall method as Stratified Learning, or StratLearn. We demonstrate the effectiveness of this general-purpose method on two contemporary research questions in cosmology, outperforming state-of-the-art importance weighting methods. We obtain the best-reported AUC (0.958) on the updated “Supernovae photometric classification challenge,” and we improve upon existing conditional density estimation of galaxy redshift from Sloan Digital Sky Survey (SDSS) data.
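
As a concrete illustration of the stratification idea described above, the following Python sketch estimates propensity scores for membership in the training set, partitions both sets into propensity strata, and fits a separate learner per stratum. All names (stratlearn_fit_predict, learner_factory, n_strata) are illustrative, not the paper's code:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stratlearn_fit_predict(X_train, y_train, X_target, learner_factory, n_strata=5):
        # Label provenance: 1 = training set, 0 = target set.
        X_all = np.vstack([X_train, X_target])
        s = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_target))])

        # Estimated propensity score e(x) = P(training set | x).
        e_all = LogisticRegression(max_iter=1000).fit(X_all, s).predict_proba(X_all)[:, 1]
        e_tr, e_tg = e_all[:len(X_train)], e_all[len(X_train):]

        # Strata from propensity-score quantiles: covariates are approximately
        # balanced between training and target data within each stratum.
        edges = np.quantile(e_all, np.linspace(0, 1, n_strata + 1))
        edges[0], edges[-1] = -np.inf, np.inf
        y_pred = np.empty(len(X_target))
        for k in range(n_strata):
            tr = (e_tr >= edges[k]) & (e_tr < edges[k + 1])
            tg = (e_tg >= edges[k]) & (e_tg < edges[k + 1])
            if tg.any() and tr.any():
                model = learner_factory().fit(X_train[tr], y_train[tr])
                y_pred[tg] = model.predict(X_target[tg])
        return y_pred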

Journal article

Meyer A, van Dyk D, Tak H, Siemiginowska A, et al., 2023, TD-CARMA: Painless, accurate, and scalable estimates of gravitational-lens time delays with flexible CARMA processes, The Astrophysical Journal: an international review of astronomy and astronomical physics, Vol: 950, ISSN: 0004-637X

Cosmological parameters encoding our understanding of the expansion history of the universe can be constrained by the accurate estimation of time delays arising in gravitationally lensed systems. We propose TD-CARMA, a Bayesian method to estimate cosmological time delays by modeling observed and irregularly sampled light curves as realizations of a continuous auto-regressive moving average (CARMA) process. Our model accounts for heteroskedastic measurement errors and microlensing, an additional source of independent extrinsic long-term variability in the source brightness. The semiseparable structure of the CARMA covariance matrix allows for fast and scalable likelihood computation using Gaussian process modeling. We obtain a sample from the joint posterior distribution of the model parameters using a nested sampling approach. This allows for "painless" Bayesian computation, dealing with the expected multimodality of the posterior distribution in a straightforward manner and not requiring the specification of starting values or an initial guess for the time delay, unlike existing methods. In addition, the proposed sampling procedure automatically evaluates the Bayesian evidence, allowing us to perform principled Bayesian model selection. TD-CARMA is parsimonious, and typically includes no more than a dozen unknown parameters. We apply TD-CARMA to six doubly lensed quasars HS2209+1914, SDSS J1001+5027, SDSS J1206+4332, SDSS J1515+1511, SDSS J1455+1447, and SDSS J1349+1227, estimating their time delays as −21.96 ± 1.448, 120.93 ± 1.015, 111.51 ± 1.452, 210.80 ± 2.18, 45.36 ± 1.93, and 432.05 ± 1.950, respectively. These estimates are consistent with those derived in the relevant literature, but are typically two to four times more precise.
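
A toy Python sketch of the data model underlying this approach, using CARMA(1,0), i.e. an Ornstein-Uhlenbeck process, rather than the higher-order CARMA models the paper fits: the delayed image is the same latent process evaluated delta days earlier, plus a slow polynomial for microlensing. All parameter values are arbitrary:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_ou(t, tau=100.0, sigma=0.1, mu=18.0):
        # Exact OU transitions between irregularly spaced, sorted times t.
        x = np.empty(len(t))
        x[0] = mu + sigma * np.sqrt(tau / 2) * rng.normal()
        for i in range(1, len(t)):
            a = np.exp(-(t[i] - t[i - 1]) / tau)
            sd = sigma * np.sqrt(tau / 2 * (1 - a ** 2))
            x[i] = mu + a * (x[i - 1] - mu) + sd * rng.normal()
        return x

    delta = 50.0                                  # true time delay (days)
    t_obs = np.sort(rng.uniform(0, 1000, 150))    # irregular sampling times

    # One latent source realization, evaluated at both t and t - delta.
    t_all = np.concatenate([t_obs, t_obs - delta])
    order = np.argsort(t_all)
    x_all = np.empty_like(t_all)
    x_all[order] = simulate_ou(t_all[order])

    image_a = x_all[:len(t_obs)]
    # The delayed image sees the source delta days earlier; a low-order
    # polynomial mimics slowly varying microlensing magnification.
    image_b = x_all[len(t_obs):] + np.polyval([1e-7, -1e-4, 0.3], t_obs)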

Journal article

Fan M, Wang J, Kashyap VL, Lee TCM, Dyk DAV, Zezas A, et al., 2023, Identifying diffuse spatial structures in high-energy photon lists, The Astronomical Journal, Vol: 165, ISSN: 0004-6256

Data from high-energy observations are usually obtained as lists of photon events. A common analysis task for such data is to identify whether diffuse emission exists, and to estimate its surface brightness, even in the presence of point sources that may be superposed. We have developed a novel nonparametric event list segmentation algorithm to divide up the field of view into distinct emission components. We use photon location data directly, without binning them into an image. We first construct a graph from the Voronoi tessellation of the observed photon locations and then grow segments using a new adaptation of seeded region growing that we call Seeded Region Growing on Graph, after which the overall method is named SRGonG. Starting with a set of seed locations, this results in an oversegmented data set, which SRGonG then coalesces using a greedy algorithm where adjacent segments are merged to minimize a model comparison statistic; we use the Bayesian Information Criterion. Using SRGonG we are able to identify point-like and diffuse extended sources in the data with equal facility. We validate SRGonG using simulations, demonstrating that it is capable of discerning irregularly shaped low-surface-brightness emission structures as well as point-like sources with strengths comparable to that seen in typical X-ray data. We demonstrate SRGonG's use on the Chandra data of the Antennae galaxies and show that it segments the complex structures appropriately.
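
The first step of SRGonG, building a graph directly from photon positions, can be sketched in Python as below. This is a toy illustration relying on the standard fact that Voronoi cells are adjacent exactly when their generating points share a Delaunay edge; the seeded growing and BIC merging steps are omitted:

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay, Voronoi

    photons = np.random.default_rng(2).uniform(0, 1, size=(500, 2))  # (x, y) events

    # Graph edges: pairs of photons whose Voronoi cells touch.
    tri = Delaunay(photons)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))

    # Local surface-brightness proxy: inverse area of each photon's Voronoi cell.
    vor = Voronoi(photons)

    def cell_area(idx):
        region = vor.regions[vor.point_region[idx]]
        if -1 in region or len(region) < 3:   # unbounded cell on the field edge
            return np.inf
        return ConvexHull(vor.vertices[region]).volume  # in 2-D, volume = area

    brightness = np.array([1.0 / cell_area(i) for i in range(len(photons))])
    # Seeded region growing, and the greedy BIC-based merging, then operate on
    # (edges, brightness) to segment the field into emission components.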

Journal article

Rahman W, Trotta R, Boruah SS, Hudson MJ, Dyk DAV, et al., 2022, New constraints on anisotropic expansion from supernovae type Ia, Monthly Notices of the Royal Astronomical Society, Vol: 514, Pages: 139-163, ISSN: 0035-8711

We re-examine the contentious question of constraints on anisotropic expansion from Type Ia supernovae (SNIa) in the light of a novel determination of peculiar velocities, which are crucial to test isotropy with SNe out to distances ≲ 200 h⁻¹ Mpc. We re-analyse the Joint Light-Curve Analysis (JLA) supernovae (SNe) data, improving on previous treatments of peculiar velocity corrections and their uncertainties (both statistical and systematic) by adopting state-of-the-art flow models constrained independently via the 2M++ galaxy redshift compilation. We also introduce a novel procedure to account for colour-based selection effects, and adjust the redshift of low-z SNe self-consistently in the light of our improved peculiar velocity model. We adopt the Bayesian hierarchical model BAHAMAS to constrain a dipole in the distance modulus in the context of the Lambda cold dark matter (ΛCDM) model and the deceleration parameter in a phenomenological Cosmographic expansion. We do not find any evidence for anisotropic expansion, and place a tight upper bound on the amplitude of a dipole, |D_μ| < 5.93 × 10^{-4} (95 per cent credible interval) in a ΛCDM setting, and |D_{q0}| < 6.29 × 10^{-2} in the Cosmographic expansion approach. Using Bayesian model comparison, we obtain posterior odds in excess of 900:1 (640:1) against a constant-in-redshift dipole for ΛCDM (the Cosmographic expansion). In the isotropic case, an accelerating universe is favoured with odds of ∼1100:1 with respect to a decelerating one.
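
Schematically, the dipole model constrained here modulates the distance modulus with direction; the notation below is illustrative, and the paper should be consulted for the exact parameterization:

    % \bar\mu(z): isotropic distance modulus; \hat d: dipole direction;
    % F(z) = 1 recovers the constant-in-redshift dipole tested above.
    \mu(z, \hat n) = \bar\mu(z)\left[1 + D_\mu\,(\hat n \cdot \hat d)\,F(z)\right]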

Journal article

Moss A, von Hippel T, Robinson E, El-Badry K, Stenning DC, van Dyk D, Fouesneau M, Bailer-Jones CAL, Jeffery E, Sargent J, Kloc I, Moticska N, et al., 2022, Improving white dwarfs as chronometers with Gaia parallaxes and spectroscopic metallicities, The Astrophysical Journal, Vol: 929, Pages: 26-26, ISSN: 0004-637X

White dwarfs (WDs) offer unrealized potential in solving two problems in astrophysics: stellar age accuracy and precision. WD cooling ages can be inferred from surface temperatures and radii, which can be constrained with precision by high-quality photometry and parallaxes. Accurate and precise Gaia parallaxes along with photometric surveys provide information to derive cooling and total ages for vast numbers of WDs. Here we analyze 1372 WDs found in wide binaries with main-sequence (MS) companions and report on the cooling and total age precision attainable in these WD+MS systems. The total age of a WD can be further constrained if its original metallicity is known because the MS lifetime depends on metallicity at fixed mass, yet metallicity is unavailable via spectroscopy of the WD. We show that incorporating spectroscopic metallicity constraints from 38 wide binary MS companions substantially decreases internal uncertainties in WD total ages compared to a uniform constraint. Averaged over the 38 stars in our sample, the total (internal) age uncertainty improves from 21.04% to 16.77% when incorporating the spectroscopic constraint. Higher mass WDs yield better total age precision; for eight WDs with zero-age MS masses ≥2.0 M⊙, the mean uncertainty in total ages improves from 8.61% to 4.54% when incorporating spectroscopic metallicities. We find that it is often possible to achieve 5% total age precision for WDs with progenitor masses above 2.0 M⊙ if parallaxes with ≤1% precision and Pan-STARRS g, r, and i photometry with ≤0.01 mag precision are available.

Journal article

Jeong S, Park T, Dyk DAV, 2021, Bayesian model selection in additive partial linear models via locally adaptive splines, Journal of Computational and Graphical Statistics, Vol: 31, Pages: 324-336, ISSN: 1061-8600

We provide a flexible framework for selecting among a class of additive partial linear models that allows both linear and nonlinear additive components. In practice, it is challenging to determine which additive components should be excluded from the model while simultaneously determining whether nonzero additive components should be represented as linear or non-linear components in the final model. In this paper, we propose a Bayesian model selection method that is facilitated by a carefully specified class of models, including the choice of a prior distribution and the nonparametric model used for the nonlinear additive components. We employ a series of latent variables that determine the effect of each variable among the three possibilities (no effect, linear effect, and nonlinear effect) and that simultaneously determine the knots of each spline for a suitable penalization of smooth functions. The use of a pseudo-prior distribution along with a collapsing scheme enables us to deploy well-behaved Markov chain Monte Carlo samplers, both for model selection and for fitting the preferred model. Our method and algorithm are deployed on a suite of numerical studies and are applied to a nutritional epidemiology study. The numerical results show that the proposed methodology outperforms previously available methods in terms of effective sample sizes of the Markov chain samplers and the overall misclassification rates.
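
The latent-variable structure described above can be written schematically as follows (the notation is ours, not the paper's):

    % \gamma_j \in \{0, 1, 2\} flags no, linear, or nonlinear effect of x_j:
    f_j(x_j) =
      \begin{cases}
        0,                              & \gamma_j = 0,\\
        \beta_j x_j,                    & \gamma_j = 1,\\
        \sum_k b_{jk} B_{jk}(x_j),      & \gamma_j = 2 \text{ (spline basis)},
      \end{cases}
    % with further latent variables selecting the spline knots, and
    % pseudo-priors on the parameters of excluded components enabling the
    % collapsed MCMC sampler.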

Journal article

Marshall HL, Chen Y, Drake JJ, Matteo G, Kashyap V, Meng X-L, Plucinsky PP, Ratzlaff P, Van Dyk D, Wang X, et al., 2021, Concordance: in-flight calibration of X-ray telescopes without absolute references, The Astronomical Journal, Vol: 162, ISSN: 0004-6256

We describe a process for cross-calibrating the effective areas of X-ray telescopes that observe common targets. The targets are not assumed to be “standard candles” in the classic sense, in that we assume that the source fluxes have well-defined, but a priori unknown values. Using a technique developed by Chen et al. (2019) that involves a statistical method called shrinkage estimation, we determine effective area correction factors for each instrument that bring estimated fluxes into the best agreement, consistent with prior knowledge of their effective areas. We expand the technique to allow unique priors on systematic uncertainties in effective areas for each X-ray astronomy instrument and to allow correlations between effective areas in different energy bands. We demonstrate the method with several data sets from various X-ray telescopes.

Journal article

Meyer A, van Dyk D, Kashyap VL, Campos LF, Jones DE, Siemiginowska A, Zezas A, et al., 2021, eBASCS: disentangling overlapping astronomical sources II, using spatial, spectral, and temporal information, Monthly Notices of the Royal Astronomical Society, Vol: 506, Pages: 6160-6180, ISSN: 0035-8711

The analysis of individual X-ray sources that appear in a crowded field can easily be compromised by the misallocation of recorded events to their originating sources. Even with a small number of sources, which nonetheless have overlapping point spread functions, the allocation of events to sources is a complex task that is subject to uncertainty. We develop a Bayesian method designed to sift high-energy photon events from multiple sources with overlapping point spread functions, leveraging the differences in their spatial, spectral, and temporal signatures. The method probabilistically assigns each event to a given source. Such a disentanglement allows more detailed spectral or temporal analysis to focus on the individual component in isolation, free of contamination from other sources or the background. We are also able to compute source parameters of interest such as their locations, relative brightness, and background contamination, while accounting for the uncertainty in event assignments. Simulation studies that include event arrival time information demonstrate that the temporal component improves event disambiguation beyond using only spatial and spectral information. The proposed methods correctly allocate up to 65 per cent more events than the corresponding algorithms that ignore event arrival time information. We apply our methods to two stellar X-ray binaries, UV Cet and HBC 515 A, observed with Chandra. We demonstrate that our methods are capable of removing the contamination from a strong flare on UV Cet B in its companion, which is ≈40× weaker during that event, and that evidence for spectral variability at time-scales of a few ks can be determined in HBC 515 Aa and HBC 515 Ab.
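
At its core, the event-allocation step resembles computing mixture-model responsibilities from the product of spatial, spectral, and temporal densities. A schematic Python sketch follows; all function and field names are illustrative, and the paper samples these allocations within a full posterior rather than fixing them:

    import numpy as np

    def allocation_probabilities(events, sources, background_density):
        """events: structured array with fields x, y, energy, time.
        sources: list of dicts with keys weight, psf, spectrum, lightcurve."""
        comps = []
        for s in sources:
            comps.append(s["weight"]
                         * s["psf"](events["x"], events["y"])      # spatial
                         * s["spectrum"](events["energy"])         # spectral
                         * s["lightcurve"](events["time"]))        # temporal
        comps.append(background_density(events))                   # background
        comps = np.asarray(comps)                                  # (K+1, n)
        return comps / comps.sum(axis=0)  # per-event allocation probabilities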

Journal article

Algeri S, van Dyk D, 2021, Testing one hypothesis multiple times, Statistica Sinica, Vol: 31, Pages: 959-979, ISSN: 1017-0405

In applied settings, tests of hypothesis where a nuisance parameter is only identifiable under the alternative often reduce to one of Testing One Hypothesis Multiple times (TOHM). Specifically, a fine discretization of the space of the non-identifiable parameter is specified, and the null hypothesis is tested against a set of sub-alternative hypotheses, one for each point of the discretization. The resulting sub-test statistics are then combined to obtain a global p-value. In this paper, we discuss a computationally efficient inferential tool to perform TOHM under stringent significance requirements, such as those typically required in the physical sciences (e.g., p-value < 10^{-7}). The resulting procedure leads to a generalized approach to perform inference under non-standard conditions, including non-nested model comparisons.

Journal article

Zhao S, van Dyk D, Imai K, 2020, Propensity-score based methods for causal inference in observational studies with non-binary treatments, Statistical Methods in Medical Research, Vol: 29, Pages: 709-727, ISSN: 0962-2802

Propensity score methods are a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. Such extensions have widened the applicability of propensity score methods and are indeed becoming increasingly popular themselves. In this article, we closely examine two methods that generalize propensity scores in this direction, namely, the propensity function (PF) and the generalized propensity score (GPS), along with two extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. On a theoretical level, the GPS and its extensions are advantageous in that they are designed to estimate the full dose response function rather than the average treatment effect that is estimated with the PF. We compare the GPS with a new PF method, both of which estimate the dose response function. We illustrate our findings and proposals through simulation studies, including one based on an empirical study about the effect of smoking on healthcare costs. While our proposed PF-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation.
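
For a continuous treatment, a GPS of the kind discussed above is commonly estimated by modelling the treatment given covariates and evaluating the fitted density at each unit's observed dose. A minimal Python sketch under a Gaussian linear model (names are illustrative):

    import numpy as np
    from scipy.stats import norm
    from sklearn.linear_model import LinearRegression

    def generalized_propensity_score(X, t):
        # Model t | X ~ N(X beta, sigma^2) and evaluate the density at each
        # unit's observed treatment level.
        model = LinearRegression().fit(X, t)
        fitted = model.predict(X)
        sigma = (t - fitted).std(ddof=X.shape[1] + 1)
        return norm.pdf(t, loc=fitted, scale=sigma)

    # The GPS then enters a model for the dose-response surface, e.g. by
    # regressing the outcome on (t, gps) and averaging over units at each dose.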

Journal article

von Hippel T, Moss A, Kloc I, Moticska N, Sargent J, Robinson E, Stenning D, van Dyk D, Jeffery E, Fouesneau M, Bailer-Jones C, et al., 2020, A catalog of 159,238 white dwarf ages, Proceedings of the International Astronomical Union, Pages: 188-191, ISSN: 1743-9213

We employ Pan-STARRS photometry, Gaia trigonometric parallaxes, modern stellar evolution and atmosphere models, and our Bayesian fitting approach to determine cooling and total ages for 159,238 white dwarfs. In many cases we are able to derive precise ages (better than 5%) for individual white dwarfs. These results are meant for broad use within the white dwarf and stellar astrophysics communities and we plan to make available on-line the posterior distributions for cooling age, total age, initial stellar mass, and other parameters.

Journal article

Jeffery EJ, Von Hippel T, Robinson E, Van Dyk D, Stenning D, et al., 2020, A Bayesian analysis of white dwarfs in open clusters observed with Gaia, Proceedings of the International Astronomical Union, Pages: 192-196, ISSN: 1743-9213

We analyze individual white dwarfs in open clusters observed by Gaia. In particular, we determine ages when different model ingredients are used. We also explore fundamental properties of the white dwarfs, including temperature and mass, when using different filter combinations. Such tests are important to understanding any systematic effects when applying similar techniques to field stars.

Journal article

Algeri S, van Dyk D, 2019, Testing one hypothesis multiple times: the multidimensional case, Journal of Computational and Graphical Statistics, Vol: 29, Pages: 358-371, ISSN: 1061-8600

The identification of new rare signals in data, the detection of a sudden change in a trend, and the selection of competing models are among the most challenging problems in statistical practice. These challenges can be tackled using a test of hypothesis where a nuisance parameter is present only under the alternative, and a computationally efficient solution can be obtained by the "Testing One Hypothesis Multiple times" (TOHM) method. In the one-dimensional setting, a fine discretization of the space of the non-identifiable parameter is specified, and a global p-value is obtained by approximating the distribution of the supremum of the resulting stochastic process. In this paper, we propose a computationally efficient inferential tool to perform TOHM in the multidimensional setting. Here, the approximations of interest typically involve the expected Euler characteristic (EC) of the excursion set of the underlying random field. We introduce a simple algorithm to compute the EC in multiple dimensions and for arbitrarily large significance levels. This leads to a highly generalizable computational tool to perform inference under non-standard regularity conditions.
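
The expected-EC approximation at the heart of such methods has the schematic form below (our notation; in practice the geometric coefficients are estimated from a small set of Monte Carlo simulations at low thresholds):

    % A_c: excursion set of the field above level c; \phi: Euler characteristic.
    P\left(\sup_{\theta} T(\theta) > c\right) \;\le\; E\left[\phi(A_c)\right]
      \;=\; \sum_{d=0}^{D} \mathcal{L}_d\, \rho_d(c),
    % with known EC densities \rho_d and geometric coefficients \mathcal{L}_d;
    % since the right-hand side is analytic in c, it extends cheaply to
    % arbitrarily stringent significance levels.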

Journal article

Stampoulis V, van Dyk D, Kashyap VL, Zezas A, et al., 2019, Multidimensional data driven classification of emission-line galaxies, Monthly Notices of the Royal Astronomical Society, Vol: 485, Pages: 1085-1102, ISSN: 0035-8711

We propose a new soft clustering scheme for classifying galaxies in different activity classes using simultaneously four emission-line ratios: log ([NII]/Hα), log ([SII]/Hα), log ([OI]/Hα), and log ([OIII]/Hβ). We fit 20 multivariate Gaussian distributions to the four-dimensional distribution of these lines obtained from the Sloan Digital Sky Survey in order to capture local structures and subsequently group the multivariate Gaussian distributions to represent the complex multidimensional structure of the joint distribution of galaxy spectra in the four-dimensional line ratio space. The main advantages of this method are the use of all four optical-line ratios simultaneously and the adoption of a clustering scheme. This maximizes the use of the available information, avoids contradicting classifications, and treats each class as a distribution resulting in soft classification boundaries and providing the probability for an object to belong to each class. We also introduce linear multidimensional decision surfaces using support vector machines based on the classification of our soft clustering scheme. This linear multidimensional hard clustering technique shows high classification accuracy with respect to our soft clustering scheme.
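
A minimal Python sketch of the soft clustering scheme; here the grouping of the 20 fitted components into activity classes is supplied by hand, whereas the paper derives it from the data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def soft_classify(ratios, groups):
        """ratios: (n_galaxies, 4) array of [log([NII]/Ha), log([SII]/Ha),
        log([OI]/Ha), log([OIII]/Hb)]; groups: dict class -> component indices."""
        gmm = GaussianMixture(n_components=20, covariance_type="full",
                              random_state=0).fit(ratios)
        resp = gmm.predict_proba(ratios)        # (n, 20) soft memberships
        # Probability of each activity class = summed component memberships.
        return {cls: resp[:, idx].sum(axis=1) for cls, idx in groups.items()}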

Journal article

Chen Y, Meng XL, Wang X, Van Dyk DA, Marshall H, Kashyap V, et al., 2019, Calibration concordance for astronomical instruments via multiplicative shrinkage, Journal of the American Statistical Association, Vol: 114, Pages: 1018-1037, ISSN: 0162-1459

Calibration data are often obtained by observing several well-understood objects simultaneously with multiple instruments, such as satellites for measuring astronomical sources. Analyzing such data and obtaining proper concordance among the instruments is challenging when the physical source models are not well understood, when there are uncertainties in “known” physical quantities, or when data quality varies in ways that cannot be fully quantified. Furthermore, the number of model parameters increases with both the number of instruments and the number of sources. Thus, concordance of the instruments requires careful modeling of the mean signals, the intrinsic source differences, and measurement errors. In this article, we propose a log-Normal model and a more general log-t model that respect the multiplicative nature of the mean signals via a half-variance adjustment, yet permit imperfections in the mean modeling to be absorbed by residual variances. We present analytical solutions in the form of power shrinkage in special cases and develop reliable Markov chain Monte Carlo algorithms for general cases, both of which are available in the Python module CalConcordance. We apply our method to several datasets including a combination of observations of active galactic nuclei (AGN) and spectral line emission from the supernova remnant E0102, obtained with a variety of X-ray telescopes such as Chandra, XMM-Newton, Suzaku, and Swift. The data are compiled by the International Astronomical Consortium for High Energy Calibration. We demonstrate that our method provides helpful and practical guidance for astrophysicists when adjusting for disagreements among instruments. Supplementary materials for this article, including a standardized description of the materials available for reproducing the work, are available as an online supplement.
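
The log-Normal component of the model can be written schematically as follows, in our notation: c_{ij} is the flux of source j measured by instrument i, B_i the log effective-area adjustment, and G_j the log of the true flux:

    \log c_{ij} \mid B_i, G_j \;\sim\;
      \mathcal{N}\!\left(B_i + G_j - \tfrac{1}{2}\sigma_{ij}^2,\; \sigma_{ij}^2\right),
      \qquad B_i \sim \mathcal{N}(0, \tau_i^2),
    % The half-variance term keeps the mean multiplicative,
    % E[c_{ij}] = e^{B_i} e^{G_j}, while \tau_i encodes the prior systematic
    % uncertainty in instrument i's effective area.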

Journal article

Hill R, Shariff H, Trotta R, Ali-Khan S, Jiao X, Liu Y, Moon SK, Parker W, Paulus M, van Dyk D, Lucy LB, et al., 2018, Projected distances to host galaxy reduce SNIa dispersion, Monthly Notices of the Royal Astronomical Society, Vol: 481, Pages: 2766-2777, ISSN: 0035-8711

We use multi-band imagery data from the Sloan Digital Sky Survey (SDSS) to measure projected distances of 302 type Ia supernovae (SNIa) from the centre of their host galaxies, normalized to the galaxy's brightness scale length, with a Bayesian approach. We test the hypothesis that SNIas further away from the centre of their host galaxy are less subject to dust contamination (as the dust column density in their environment is smaller) and/or come from a more homogeneous environment. Using the Mann-Whitney U test, we find a statistically significant difference in the observed colour correction distribution between SNIas that are near and those that are far from the centre of their host. The local p-value is 3 × 10^{-3}, which is significant at the 5 per cent level after look-elsewhere effect correction. We estimate the residual scatter of the two subgroups to be 0.073 ± 0.018 for the far SNIas, compared to 0.114 ± 0.009 for the near SNIas -- an improvement of 30 per cent, albeit with a low statistical significance of 2σ. This confirms the importance of host galaxy properties in correctly interpreting SNIa observations for cosmological inference.
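
The two-sample comparison reported above amounts to the following test (a sketch; the placeholder draws stand in for the fitted colour corrections of the near and far subgroups):

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    colour_near = rng.normal(0.0, 0.114, 150)   # placeholders for the fitted
    colour_far = rng.normal(0.0, 0.073, 152)    # colour corrections
    stat, p_value = mannwhitneyu(colour_near, colour_far,
                                 alternative="two-sided")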

Journal article

Yu X, Del Zanna G, Stenning D, Cisewski-Kehe J, Kashyap V, Stein N, van Dyk D, Warren H, Weber MA, et al., 2018, Incorporating uncertainties in atomic data into the analysis of solar and stellar observations: a case study in Fe XIII, The Astrophysical Journal: an international review of astronomy and astronomical physics, Vol: 866, ISSN: 0004-637X

Information about the physical properties of astrophysical objects cannot be measured directly but is inferred by interpreting spectroscopic observations in the context of atomic physics calculations. Ratios of emission lines, for example, can be used to infer the electron density of the emitting plasma. Similarly, the relative intensities of emission lines formed over a wide range of temperatures yield information on the temperature structure. A critical component of this analysis is understanding how uncertainties in the underlying atomic physics propagate to the uncertainties in the inferred plasma parameters. At present, however, atomic physics databases do not include uncertainties on the atomic parameters and there is no established methodology for using them even if they did. In this paper we develop simple models for uncertainties in the collision strengths and decay rates for Fe XIII and apply them to the interpretation of density-sensitive lines observed with the EUV (extreme ultraviolet) Imaging Spectrometer (EIS) on Hinode. We incorporate these uncertainties in a Bayesian framework. We consider both a pragmatic Bayesian method where the atomic physics information is unaffected by the observed data, and a fully Bayesian method where the data can be used to probe the physics. The former generally increases the uncertainty in the inferred density by about a factor of 5 compared with models that incorporate only statistical uncertainties. The latter reduces the uncertainties on the inferred densities, but identifies areas of possible systematic problems with either the atomic physics or the observed intensities.

Journal article

Si S, van Dyk D, von Hippel T, Robinson E, Jeffery E, Stenning DC, et al., 2018, Bayesian hierarchical modelling of initial-final mass relations across star clusters, Monthly Notices of the Royal Astronomical Society, Vol: 480, Pages: 1300-1321, ISSN: 0035-8711

The initial–final mass relation (IFMR) of white dwarfs (WDs) plays an important role in stellar evolution. To derive precise estimates of IFMRs and explore how they may vary among star clusters, we propose a Bayesian hierarchical model that pools photometric data from multiple star clusters. After performing a simulation study to show the benefits of the Bayesian hierarchical model, we apply this model to five star clusters: the Hyades, M67, NGC 188, NGC 2168, and NGC 2477, leading to reasonable and consistent estimates of IFMRs for these clusters. We illustrate how a cluster-specific analysis of NGC 188 using its own photometric data can produce an unreasonable IFMR since its WDs have a narrow range of zero-age main sequence (ZAMS) masses. However, the Bayesian hierarchical model corrects the cluster-specific analysis by borrowing strength from other clusters, thus generating more reliable estimates of IFMR parameters. The data analysis presents the benefits of Bayesian hierarchical modelling over conventional cluster-specific methods, which motivates us to elaborate the powerful statistical techniques in this paper.

Journal article

Chen Y, Meng X-L, Wang X, van Dyk D, Marshall HL, Kashyap VL, et al., 2018, Calibration concordance for astronomical instruments via multiplicative shrinkage, Publisher: arXiv

Calibration data are often obtained by observing several well-understood objects simultaneously with multiple instruments, such as satellites for measuring astronomical sources. Analyzing such data and obtaining proper concordance among the instruments is challenging when the physical source models are not well understood, when there are uncertainties in "known" physical quantities, or when data quality varies in ways that cannot be fully quantified. Furthermore, the number of model parameters increases with both the number of instruments and the number of sources. Thus, concordance of the instruments requires careful modeling of the mean signals, the intrinsic source differences, and measurement errors. In this paper, we propose a log-Normal hierarchical model and a more general log-t model that respect the multiplicative nature of the mean signals via a half-variance adjustment, yet permit imperfections in the mean modeling to be absorbed by residual variances. We present analytical solutions in the form of power shrinkage in special cases and develop reliable MCMC algorithms for general cases. We apply our method to several data sets obtained with a variety of X-ray telescopes such as Chandra. We demonstrate that our method provides helpful and practical guidance for astrophysicists when adjusting for disagreements among instruments.

Working paper

Tak H, Meng X-L, van Dyk D, 2018, A repelling-attracting Metropolis algorithm for multimodality, Journal of Computational and Graphical Statistics, Vol: 27, Pages: 479-490, ISSN: 1061-8600

Although the Metropolis algorithm is simple to implement, it often has difficulties exploring multimodal distributions. We propose the repelling–attracting Metropolis (RAM) algorithm that maintains the simple-to-implement nature of the Metropolis algorithm, but is more likely to jump between modes. The RAM algorithm is a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio so that the algorithm prefers downward movement. The uphill move does the opposite using the standard Metropolis ratio which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode. Because the acceptance probability of the proposal involves a ratio of intractable integrals, we introduce an auxiliary variable which creates a term in the acceptance probability that cancels with the intractable ratio. Using several examples, we demonstrate the potential for the RAM algorithm to explore a multimodal distribution more efficiently than a Metropolis algorithm and with less tuning than is commonly required by tempering-based methods. Supplementary materials are available online.
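
A compact Python sketch of one RAM iteration as we read the abstract (pi is an unnormalized target density, eps a small positive constant; consult the paper for the exact statement of the acceptance correction):

    import numpy as np

    rng = np.random.default_rng(0)

    def forced_move(x, pi, scale, eps, downhill=True):
        # Repeat proposals until one is accepted; the downhill move prefers
        # lower density (reciprocal Metropolis ratio), the uphill move prefers
        # higher density (standard Metropolis ratio).
        while True:
            z = x + scale * rng.normal(size=np.shape(x))
            ratio = (pi(x) + eps) / (pi(z) + eps)
            if not downhill:
                ratio = 1.0 / ratio
            if rng.uniform() < min(1.0, ratio):
                return z

    def ram_step(x, pi, scale=1.0, eps=1e-9):
        x_down = forced_move(x, pi, scale, eps, downhill=True)        # repel
        x_star = forced_move(x_down, pi, scale, eps, downhill=False)  # attract
        z = forced_move(x_star, pi, scale, eps, downhill=True)        # auxiliary
        # The auxiliary draw z cancels the intractable proposal normalizers.
        num = pi(x_star) * min(1.0, (pi(x) + eps) / (pi(z) + eps))
        den = pi(x) * min(1.0, (pi(x_star) + eps) / (pi(z) + eps))
        return x_star if rng.uniform() < min(1.0, num / den) else x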

Journal article

Revsbech EA, Trotta R, van Dyk D, 2017, STACCATO: a novel solution to supernova photometric classification with biased training sets, Monthly Notices of the Royal Astronomical Society, Vol: 473, ISSN: 0035-8711

We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SNe's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs' ability to accurately classify SNe. We extend and improve the approach of Richards et al. – a diffusion map combined with a random forest classifier – to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the ‘gold standard’ of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.
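
The augmentation step can be sketched in Python as below: fit a GP to each light curve and draw synthetic curves from the fit to enlarge under-represented propensity strata. The kernel choice and names are illustrative; the paper's pipeline works per band and per stratum:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def augment_light_curve(t, flux, n_synthetic=5):
        gp = GaussianProcessRegressor(kernel=1.0 * RBF(20.0) + WhiteKernel(1e-2),
                                      normalize_y=True).fit(t[:, None], flux)
        t_grid = np.linspace(t.min(), t.max(), 100)
        # Each sampled path is a plausible realization of the same supernova,
        # usable as an extra training light curve.
        return t_grid, gp.sample_y(t_grid[:, None], n_samples=n_synthetic,
                                   random_state=0)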

Journal article

Revsbech EA, Trotta R, van Dyk D, 2017, STACCATO: a novel solution to supernova photometric classification with biased training sets

R implementation of the STACCATO method for supernova classification with a biased training set. Accompanying paper: STACCATO: A Novel Solution to Supernova Photometric Classification with Biased Training Sets, E.A. Revsbech, R. Trotta & D.A. van Dyk, MNRAS 473, 3, 3969-3986 (2018), e-print archive: 1706.03811

Software

Tak H, Mandel K, Van Dyk DA, Kashyap V, Meng XL, Siemiginowska A, et al., 2017, Bayesian estimates of astronomical time delays between gravitationally lensed stochastic light curves, Annals of Applied Statistics, Vol: 11, Pages: 1309-1348, ISSN: 1932-6157

The gravitational field of a galaxy can act as a lens and deflect the light emitted by a more distant object such as a quasar. Strong gravitational lensing causes multiple images of the same quasar to appear in the sky. Since the light in each gravitationally lensed image traverses a different path length from the quasar to the Earth, fluctuations in the source brightness are observed in the several images at different times. The time delay between these fluctuations can be used to constrain cosmological parameters and can be inferred from the time series of brightness data or light curves of each image. To estimate the time delay, we construct a model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. We account for microlensing, an additional source of independent long-term extrinsic variability, via a polynomial regression. Our Bayesian strategy adopts a Metropolis-Hastings within Gibbs sampler. We improve the sampler by using an ancillarity-sufficiency interweaving strategy and adaptive Markov chain Monte Carlo. We introduce a profile likelihood of the time delay as an approximation of its marginal posterior distribution. The Bayesian and profile likelihood approaches complement each other, producing almost identical results; the Bayesian method is more principled but the profile likelihood is simpler to implement. We demonstrate our estimation strategy using simulated data of doubly- and quadruply-lensed quasars, and observed data from quasars Q0957+561 and J1029+2623.

Journal article

Si S, van Dyk D, von Hippel T, Robinson E, Webster A, Stenning D, et al., 2017, A hierarchical model for the ages of Galactic halo white dwarfs, Monthly Notices of the Royal Astronomical Society, Vol: 468, Pages: 4374-4388, ISSN: 1365-2966

In astrophysics, we often aim to estimate one or more parameters for each member object in a population and study the distribution of the fitted parameters across the population. In this paper, we develop novel methods that allow us to take advantage of existing software designed for such case-by-case analyses to simultaneously fit parameters of both the individual objects and the parameters that quantify their distribution across the population. Our methods are based on Bayesian hierarchical modelling that is known to produce parameter estimators for the individual objects that are on average closer to their true values than estimators based on case-by-case analyses. We verify this in the context of estimating ages of Galactic halo white dwarfs (WDs) via a series of simulation studies. Finally, we deploy our new techniques on optical and near-infrared photometry of 10 candidate halo WDs to obtain estimates of their ages along with an estimate of the mean age of Galactic halo WDs of 12.11 (+0.85/−0.86) Gyr. Although this sample is small, our technique lays the groundwork for large-scale studies using data from the Gaia mission.

Journal article

Wagner-Kaiser R, Sarajedini A, von Hippel T, Stenning DC, van Dyk D, Jeffrey E, Robinson E, Stein N, Anderson J, Jefferys WH, et al., 2017, The ACS Survey of Galactic Globular Clusters XIV: Bayesian Single-Population Analysis of 69 Globular Clusters, Monthly Notices of the Royal Astronomical Society, Vol: 468, Pages: 1038-1055, ISSN: 0035-8711

We use Hubble Space Telescope (HST) imaging from the ACS Treasury Survey to determine fits for single population isochrones of 69 Galactic globular clusters. Using robust Bayesian analysis techniques, we simultaneously determine ages, distances, absorptions and helium values for each cluster under the scenario of a ‘single’ stellar population on model grids with solar ratio heavy element abundances. The set of cluster parameters is determined in a consistent and reproducible manner for all clusters using the Bayesian analysis suite BASE-9. Our results are used to re-visit the age–metallicity relation. We find correlations with helium and several other parameters such as metallicity, binary fraction and proxies for cluster mass. The helium abundances of the clusters are also considered in the context of carbon, nitrogen, and oxygen abundances and the multiple population scenario.

Journal article

Kashyap VL, van Dyk D, McKeough K, Primini F, Jerius D, Gowrishankar A, Siemiginowska A, Zezas A, et al., 2017, X-raying the evolution of SN 1987A, 331st Symposium of the International Astronomical Union (IAU), Publisher: Cambridge University Press, Pages: 284-289, ISSN: 1743-9213

Conference paper

Algeri S, van Dyk D, Conrad J, Anderson B, et al., 2016, On methods for correcting for the look-elsewhere effect in searches for new physics, Journal of Instrumentation, Vol: 11, Pages: P12010-P12010, ISSN: 1748-0221

The search for new significant peaks over an energy spectrum often involves a statistical multiple hypothesis testing problem. Separate tests of hypothesis are conducted at different locations over a fine grid, producing an ensemble of local p-values, the smallest of which is reported as evidence for the new resonance. Unfortunately, controlling the false detection rate (type I error rate) of such procedures may lead to excessively stringent acceptance criteria. In the recent physics literature, two promising statistical tools have been proposed to overcome these limitations. In 2005, a method to "find needles in haystacks" was introduced by Pilla et al. [1], and a second method was later proposed by Gross and Vitells [2] in the context of the "look-elsewhere effect" and trial factors. We show that, although the two methods exhibit similar performance for large sample sizes, for relatively small sample sizes the method of Pilla et al. leads to an artificial inflation of statistical power that stems from an increase in the false detection rate. The method of Pilla et al., on the other hand, becomes particularly useful in multidimensional searches, where the Monte Carlo simulations required by Gross and Vitells are often unfeasible. We apply the methods to realistic simulations of the Fermi Large Area Telescope data, in particular the search for dark matter annihilation lines. Further, we discuss the counter-intuitive scenario where the look-elsewhere corrections are more conservative than much more computationally efficient corrections for multiple hypothesis testing. Finally, we provide general guidelines for navigating the tradeoffs between statistical and computational efficiency when selecting a statistical procedure for signal detection.

Journal article

McKeough K, Siemiginowska A, Cheung CC, Stawarz L, Kashyap V, Stein N, Stampoulis V, van Dyk D, Wardle JFC, Lee NP, Harris DE, Schwartz DA, Donato D, Maraschi L, Tavecchio F, et al., 2016, Detecting relativistic X-ray jets in high-redshift quasars, Astrophysical Journal, Vol: 833, ISSN: 0004-637X

We analyze Chandra X-ray images of a sample of 11 quasars that are known to contain kiloparsec-scale radio jets. The sample consists of five high-redshift (z ≥ 3.6) flat-spectrum radio quasars and six intermediate-redshift (2.1 < z < 2.9) quasars. The dataset includes four sources with integrated steep radio spectra and three with flat radio spectra. A total of 25 radio jet features are present in this sample. We apply a Bayesian multi-scale image reconstruction method to detect and measure the X-ray emission from the jets. We compute deviations from a baseline model that does not include the jet, and compare observed X-ray images with simulated images in which no jet features exist. This allows us to compute p-value upper bounds on the significance that an X-ray jet is detected in a pre-determined region of interest. We detected 12 of the features unambiguously, and an additional 6 marginally. We also find residual emission in the cores of 3 quasars and in the background of 1 quasar that suggests the existence of unresolved X-ray jets. The dependence of the X-ray to radio luminosity ratio on redshift is a potential diagnostic of the emission mechanism, since the inverse Compton scattering of cosmic microwave background photons (IC/CMB) is thought to be redshift dependent, whereas in synchrotron models no clear redshift dependence is expected. We find that the high-redshift jets have X-ray to radio flux ratios that are marginally inconsistent with those from lower redshifts, suggesting either that the X-ray emission is due to the IC/CMB rather than the synchrotron process, or that high-redshift jets are qualitatively different.
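
The detection logic, comparing an observed statistic in a region of interest against jet-free simulations, reduces to a standard Monte Carlo p-value (a sketch; the paper's statistic comes from the multi-scale reconstruction):

    import numpy as np

    def monte_carlo_p_value(t_obs, t_sim):
        # Add-one correction keeps the estimate a valid p-value (upper bound).
        t_sim = np.asarray(t_sim)
        return (1 + np.sum(t_sim >= t_obs)) / (1 + len(t_sim))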

Journal article

Si S, van Dyk D, von Hippel T, 2016, Sensitivity Analysis of Hierarchical Models for the Ages of Galactic Halo White Dwarfs, 20th European White Dwarf Workshop, Proceedings of a conference

Conference paper

