Imperial College London

Dr Andrew Duncan

Faculty of Natural Sciences, Department of Mathematics

Senior Lecturer in Statistics and Data-Centric Engineering

Contact


a.duncan


Location


6M35, Huxley Building, South Kensington Campus


Publications


25 results found

Yatsyshin P, Kalliadasis S, Duncan AB, 2022, Physics-constrained Bayesian inference of state functions in classical density-functional theory, Journal of Chemical Physics, Vol: 156, Pages: 074105-1-074105-10, ISSN: 0021-9606

We develop a novel data-driven approach to the inverse problem of classical statistical mechanics: given experimental data on the collective motion of a classical many-body system, how does one characterise the free energy landscape of that system? By combining non-parametric Bayesian inference with physically-motivated constraints, we develop an efficient learning algorithm which automates the construction of approximate free energy functionals. In contrast to optimisation-based machine learning approaches, which seek to minimise a cost function, the central idea of the proposed Bayesian inference is to propagate a set of prior assumptions through the model, derived from physical principles. The experimental data is used to probabilistically weigh the possible model predictions. This naturally leads to humanly interpretable algorithms with full uncertainty quantification of predictions. In our case, the output of the learning algorithm is a probability distribution over a family of free energy functionals, consistent with the observed particle data. We find that surprisingly small data samples contain sufficient information for inferring highly accurate analytic expressions of the underlying free energy functionals, making our algorithm highly data efficient. We consider excluded volume particle interactions, which are ubiquitous in nature, whilst being highly challenging to model in terms of free energy. To validate our approach we consider the paradigmatic case of a one-dimensional fluid and develop inference algorithms for the canonical and grand-canonical statistical-mechanical ensembles. Extensions to higher-dimensional systems are conceptually straightforward, whilst standard coarse-graining techniques allow one to easily incorporate attractive interactions.

Journal article
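The inferential pattern described above, a prior over functions that the data reweights probabilistically into a posterior with full uncertainty quantification, can be illustrated with a minimal Gaussian-process regression sketch. This is a generic stand-in, not the paper's free-energy model; the kernel, lengthscale and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel encoding smoothness assumptions."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

# Toy observations standing in for particle data.
xtr = np.linspace(-1.0, 1.0, 10)
ytr = np.sin(3 * xtr) + rng.normal(0.0, 0.05, size=10)
xte = np.linspace(-1.0, 1.0, 50)

noise = 0.05 ** 2
K = rbf(xtr, xtr) + noise * np.eye(10)
Ks = rbf(xte, xtr)

# Posterior over functions: the data reweights the prior rather than
# minimising a cost, and the variance quantifies remaining uncertainty.
post_mean = Ks @ np.linalg.solve(K, ytr)
post_var = np.diag(rbf(xte, xte) - Ks @ np.linalg.solve(K, Ks.T))
```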

Briol F-X, Barp A, Duncan AB, Girolami M et al., 2022, Statistical inference for generative models with maximum mean discrepancy, Publisher: ArXiv

While likelihood-based inference and its variants provide a statistically efficient and widely applicable approach to parametric inference, their application to models involving intractable likelihoods poses challenges. In this work, we study a class of minimum distance estimators for intractable generative models, that is, statistical models for which the likelihood is intractable, but simulation is cheap. The distance considered, maximum mean discrepancy (MMD), is defined through the embedding of probability measures into a reproducing kernel Hilbert space. We study the theoretical properties of these estimators, showing that they are consistent, asymptotically normal and robust to model misspecification. A main advantage of these estimators is the flexibility offered by the choice of kernel, which can be used to trade off statistical efficiency and robustness. On the algorithmic side, we study the geometry induced by MMD on the parameter space and use this to introduce a novel natural gradient descent-like algorithm for efficient implementation of these estimators. We illustrate the relevance of our theoretical results on several classes of models including a discrete-time latent Markov process and two multivariate stochastic differential equation models.

Working paper
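A minimal unbiased estimator of the squared MMD between two samples makes the distance concrete. The Gaussian kernel and bandwidth here are illustrative assumptions (the paper studies MMD estimators with general kernels), and this is only the distance computation, not the full minimum distance estimation procedure.

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD between 1D
    samples x and y, using a Gaussian RBF kernel."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2 * bandwidth ** 2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # Drop diagonal terms so within-sample averages are unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=500), rng.normal(size=500))
diff = mmd2_unbiased(rng.normal(size=500), rng.normal(3.0, 1.0, size=500))
```

As expected, `same` is close to zero while `diff`, comparing N(0,1) against N(3,1) samples, is clearly positive.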

Scillitoe A, Seshadri P, Wong CY, Duncan A et al., 2021, Polynomial ridge flowfield estimation, Physics of Fluids, Vol: 33, ISSN: 1070-6631

Journal article

Cockayne J, Duncan A, 2021, Probabilistic gradients for fast calibration of differential equation models, SIAM/ASA Journal on Uncertainty Quantification, Vol: 9, ISSN: 2166-2525

Calibration of large-scale differential equation models to observational or experimental data is a widespread challenge throughout applied sciences and engineering. A crucial bottleneck in state-of-the-art calibration methods is the calculation of local sensitivities, i.e. derivatives of the loss function with respect to the estimated parameters, which often necessitates several numerical solves of the underlying system of partial or ordinary differential equations. In this paper we present a new probabilistic approach to computing local sensitivities. The proposed method has several advantages over classical methods. Firstly, it operates within a constrained computational budget and provides a probabilistic quantification of uncertainty incurred in the sensitivities from this constraint. Secondly, information from previous sensitivity estimates can be recycled in subsequent computations, reducing the overall computational effort for iterative gradient-based calibration methods. The methodology presented is applied to two challenging test problems and compared against classical methods.

Journal article
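The bottleneck described above can be seen in the classical baseline: a finite-difference sensitivity costs extra ODE solves per parameter, which is exactly the expense the probabilistic approach aims to control. The sketch below shows that baseline on an invented toy ODE (it is not the paper's probabilistic method; the model, solver and step sizes are assumptions).

```python
import numpy as np

def solve_ode(theta, x0=1.0, T=1.0, n=1000):
    """Forward-Euler solve of the toy model dx/dt = -theta * x."""
    x, dt = x0, T / n
    for _ in range(n):
        x += dt * (-theta * x)
    return x

def loss(theta, target=0.5):
    return (solve_ode(theta) - target) ** 2

def fd_gradient(theta, h=1e-5):
    """Classical central-difference sensitivity: two extra full ODE
    solves per parameter, the cost a probabilistic gradient targets."""
    return (loss(theta + h) - loss(theta - h)) / (2 * h)

g = fd_gradient(0.7)
# Analytic check: x(T) = exp(-theta), so dL/dtheta = -2 (x - 0.5) exp(-theta).
analytic = -2 * (np.exp(-0.7) - 0.5) * np.exp(-0.7)
```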

Wilde H, Mellan T, Hawryluk I, Dennis JM, Denaxas S, Pagel C, Duncan A, Bhatt S, Flaxman S, Mateen BA, Vollmer SJ et al., 2021, The association between mechanical ventilator compatible bed occupancy and mortality risk in intensive care patients with COVID-19: a national retrospective cohort study, BMC Medicine, Vol: 19, Pages: 1-12, ISSN: 1741-7015

BACKGROUND: The literature paints a complex picture of the association between mortality risk and ICU strain. In this study, we sought to determine if there is an association between mortality risk in intensive care units (ICU) and occupancy of beds compatible with mechanical ventilation, as a proxy for strain. METHODS: A national retrospective observational cohort study of 89 English hospital trusts (i.e. groups of hospitals functioning as single operational units). Seven thousand one hundred thirty-three adults admitted to an ICU in England between 2 April and 1 December, 2020 (inclusive), with presumed or confirmed COVID-19, for whom data was submitted to the national surveillance programme and met study inclusion criteria. A Bayesian hierarchical approach was used to model the association between hospital trust level (mechanical ventilation compatible) bed occupancy and in-hospital all-cause mortality. Results were adjusted for unit characteristics (pre-pandemic size), individual patient-level demographic characteristics (age, sex, ethnicity, deprivation index, time-to-ICU admission), and recorded chronic comorbidities (obesity, diabetes, respiratory disease, liver disease, heart disease, hypertension, immunosuppression, neurological disease, renal disease). RESULTS: One hundred thirty-five thousand six hundred patient days were observed, with a mortality rate of 19.4 per 1000 patient days. Adjusting for patient-level factors, mortality was higher for admissions during periods of high occupancy (> 85% occupancy versus the baseline of 45 to 85%) [OR 1.23 (95% posterior credible interval (PCI): 1.08 to 1.39)]. In contrast, mortality was decreased for admissions during periods of low occupancy (< 45% relative to the baseline) [OR 0.83 (95% PCI 0.75 to 0.94)]. CONCLUSION: Increasing occupancy of beds compatible with mechanical ventilation, a proxy for operational strain, is associated with a higher mortality risk for individuals admitted to ICU.

Journal article

Mateen BA, Wilde H, Dennis JM, Duncan A, Thomas N, McGovern A, Denaxas S, Keeling M, Vollmer S et al., 2021, Hospital bed capacity and usage across secondary healthcare providers in England during the first wave of the COVID-19 pandemic: a descriptive analysis, BMJ Open, Vol: 11, Pages: 1-9, ISSN: 2044-6055

Objective In this study, we describe the pattern of bed occupancy across England during the peak of the first wave of the COVID-19 pandemic. Design Descriptive survey. Setting All non-specialist secondary care providers in England from 27 March to 5 June 2020. Participants Acute (non-specialist) trusts with a type 1 (ie, 24 hours/day, consultant-led) accident and emergency department (n=125), Nightingale (field) hospitals (n=7) and independent sector secondary care providers (n=195). Main outcome measures Two thresholds for ‘safe occupancy’ were used: 85% as per the Royal College of Emergency Medicine and 92% as per NHS Improvement. Results At peak availability, there were 2711 additional beds compatible with mechanical ventilation across England, reflecting a 53% increase in capacity, and occupancy never exceeded 62%. A consequence of the repurposing of beds meant that at the trough there were 8.7% (8508) fewer general and acute beds across England, but occupancy never exceeded 72%. The closest to full occupancy of general and acute bed (surge) capacity that any trust in England reached was 99.8%. For beds compatible with mechanical ventilation there were 326 trust-days (3.7%) spent above 85% of surge capacity and 154 trust-days (1.8%) spent above 92%. 23 trusts spent a cumulative 81 days at 100% saturation of their surge ventilator bed capacity (median number of days per trust=1, range: 1–17). However, only three sustainability and transformation partnerships (aggregates of geographically co-located trusts) reached 100% saturation of their mechanical ventilation beds. Conclusions Throughout the first wave of the pandemic, an adequate supply of all bed types existed at a national level. However, due to an unequal distribution of bed utilisation, many trusts spent a significant period operating above ‘safe-occupancy’ thresholds despite substantial capacity in geographically co-located trusts, a key operational issue to address in pre

Journal article

Pozharskiy D, Wichrowski NJ, Duncan AB, Pavliotis GA, Kevrekidis IG et al., 2020, Manifold learning for accelerating coarse-grained optimization, Journal of Computational Dynamics, Vol: 7, Pages: 511-536, ISSN: 2158-2505

Algorithms proposed for solving high-dimensional optimization problems with no derivative information frequently encounter the "curse of dimensionality," becoming ineffective as the dimension of the parameter space grows. One feature of a subclass of such problems that are effectively low-dimensional is that only a few parameters (or combinations thereof) are important for the optimization and must be explored in detail. Knowing these parameters/combinations in advance would greatly simplify the problem and its solution. We propose the data-driven construction of an effective (coarse-grained, "trend") optimizer, based on data obtained from ensembles of brief simulation bursts with an "inner" optimization algorithm, that has the potential to accelerate the exploration of the parameter space. The trajectories of this "effective optimizer" quickly become attracted onto a slow manifold parameterized by the few relevant parameter combinations. We obtain the parameterization of this low-dimensional, effective optimization manifold on the fly using data mining/manifold learning techniques on the results of simulation (inner optimizer iteration) burst ensembles and exploit it locally to "jump" forward along this manifold. As a result, we can bias the exploration of the parameter space towards the few, important directions and, through this "wrapper algorithm," speed up the convergence of traditional optimization algorithms.

Journal article
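The core idea, mining an ensemble of short inner-optimizer bursts to discover the few directions that actually matter, can be sketched as follows. The quadratic toy objective and the use of plain SVD as the manifold-learning step are illustrative assumptions, far simpler than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def gd_burst(x, steps=5, lr=0.1):
    """Short gradient-descent burst on an objective that is effectively
    one-dimensional: stiff along x0 + x1, nearly flat elsewhere."""
    for _ in range(steps):
        g = np.zeros_like(x)
        s = x[0] + x[1]
        g[0] = g[1] = 2 * s        # important direction
        g[2:] = 0.01 * x[2:]       # nearly flat directions
        x = x - lr * g
    return x

# Ensemble of bursts from random initial points.
starts = rng.normal(size=(200, 5))
ends = np.array([gd_burst(x) for x in starts])
moves = ends - starts

# SVD (i.e. PCA) of the burst displacements exposes the few directions
# the inner optimizer actually explores.
moves_c = moves - moves.mean(axis=0)
_, svals, vecs = np.linalg.svd(moves_c, full_matrices=False)
top = vecs[0]   # dominant direction, expected close to (1, 1, 0, 0, 0)/sqrt(2)
```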

Yates CA, George A, Jordana A, Smith CA, Duncan AB, Zygalakis KC et al., 2020, The blending region hybrid framework for the simulation of stochastic reaction–diffusion processes, Journal of The Royal Society Interface, Vol: 17, Pages: 1-19, ISSN: 1742-5689

The simulation of stochastic reaction–diffusion systems using fine-grained representations can become computationally prohibitive when particle numbers become large. If particle numbers are sufficiently high then it may be possible to ignore stochastic fluctuations and use a more efficient coarse-grained simulation approach. Nevertheless, for multiscale systems which exhibit significant spatial variation in concentration, a coarse-grained approach may not be appropriate throughout the simulation domain. Such scenarios suggest a hybrid paradigm in which a computationally cheap, coarse-grained model is coupled to a more expensive, but more detailed fine-grained model, enabling the accurate simulation of the fine-scale dynamics at a reasonable computational cost. In this paper, in order to couple two representations of reaction–diffusion at distinct spatial scales, we allow them to overlap in a ‘blending region’. Both modelling paradigms provide a valid representation of the particle density in this region. From one end of the blending region to the other, control of the implementation of diffusion is passed from one modelling paradigm to another through the use of complementary ‘blending functions’ which scale up or down the contribution of each model to the overall diffusion. We establish the reliability of our novel hybrid paradigm by demonstrating its simulation on four exemplar reaction–diffusion scenarios.

Journal article
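The complementary 'blending functions' can be sketched with a smoothstep ramp across the blending region. The smoothstep form and interval endpoints are one hypothetical choice; the framework only requires that the two weights be complementary, so the models' diffusion contributions always sum to one.

```python
import numpy as np

def blend(x, a, b):
    """A smooth blending function rising from 0 at x = a to 1 at x = b
    (smoothstep); a hypothetical choice of blending function."""
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    return 3 * t**2 - 2 * t**3

x = np.linspace(0.0, 1.0, 101)
w_coarse = blend(x, 0.4, 0.6)       # coarse-grained model's weight
w_fine = 1.0 - w_coarse             # fine-grained model's weight
# Complementary by construction: the total diffusion contribution is 1
# everywhere, while control passes from one paradigm to the other.
total = w_fine + w_coarse
```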

Seshadri P, Duncan A, Simpson D, Thorne G, Parks GT et al., 2020, Spatial flow-field approximation using few thermodynamic measurements Part II: Uncertainty assessments, Journal of Turbomachinery

Journal article

Seshadri P, Simpson D, Thorne G, Duncan A, Parks GT et al., 2020, Spatial flow-field approximation using few thermodynamic measurements Part I: formulation and area averaging, Journal of Turbomachinery, ISSN: 0889-504X

Our investigation raises an important question that is of relevance to the wider turbomachinery community: how do we estimate the spatial average of a flow quantity given finite (and sparse) measurements? This paper seeks to advance efforts to answer this question rigorously. In this paper, we develop a regularized multivariate linear regression framework for studying engine temperature measurements. As part of this investigation, we study the temperature measurements obtained from the same axial plane across five different engines yielding a total of 82 data-sets. The five different engines have similar architectures and therefore similar temperature spatial harmonics are expected. Our problem is to estimate the spatial field in engine temperature given a few measurements obtained from thermocouples positioned on a set of rakes. Our motivation for doing so is to understand key engine temperature modes that cannot be captured in a rig or in computational simulations, as the cause of these modes may not be replicated in these simpler environments. To this end, we develop a multivariate linear least squares model with Tikhonov regularization to estimate the 2D temperature spatial field. Our model uses a Fourier expansion in the circumferential direction and a quadratic polynomial expansion in the radial direction. One important component of our modeling framework is the selection of model parameters, i.e. the harmonics in the circumferential direction. A training-testing paradigm is proposed and applied to quantify the harmonics.

Journal article
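The circumferential part of this regression, a Fourier basis fitted by Tikhonov-regularised least squares, can be sketched in one dimension. The thermocouple angles, harmonics and noise level below are invented toy values, and the paper's full model also includes the quadratic radial expansion.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical thermocouple angles and a temperature field containing
# two circumferential harmonics, observed with noise.
theta = rng.uniform(0.0, 2 * np.pi, size=60)
y = 500 + 10 * np.cos(2 * theta) + 4 * np.sin(5 * theta)
y = y + rng.normal(0.0, 0.5, size=60)

# Design matrix: constant term plus a cos/sin pair per harmonic.
harmonics = [1, 2, 3, 4, 5]
cols = [np.ones_like(theta)]
for k in harmonics:
    cols += [np.cos(k * theta), np.sin(k * theta)]
A = np.stack(cols, axis=1)

# Tikhonov-regularised least squares: (A^T A + lam I) w = A^T y.
lam = 1e-3
w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
```

The fitted coefficients recover the mean level and the two active harmonics while the remaining harmonics stay near zero.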

Barp A, Briol FX, Duncan A, Girolami M, Mackey L et al., 2019, Minimum Stein discrepancy estimators, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Publisher: Neural Information Processing Systems Foundation, Inc.

When maximum likelihood estimation is infeasible, one often turns to score matching, contrastive divergence, or minimum probability flow to obtain tractable parameter estimates. We provide a unifying perspective of these techniques as minimum Stein discrepancy estimators, and use this lens to design new diffusion kernel Stein discrepancy (DKSD) and diffusion score matching (DSM) estimators with complementary strengths. We establish the consistency, asymptotic normality, and robustness of DKSD and DSM estimators, then derive stochastic Riemannian gradient descent algorithms for their efficient optimisation. The main strength of our methodology is its flexibility, which allows us to design estimators with desirable properties for specific models at hand by carefully selecting a Stein discrepancy. We illustrate this advantage for several challenging problems for score matching, such as non-smooth, heavy-tailed or light-tailed densities.

Conference paper
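A minimal kernel Stein discrepancy computation in the spirit of these estimators: a V-statistic against a standard normal target, whose score enters through the Stein kernel. The inverse multiquadric kernel, its exponent and the one-dimensional setting are illustrative assumptions, not the specific DKSD/DSM constructions of the paper.

```python
import numpy as np

def ksd2(x, score, beta=0.5):
    """V-statistic estimate of the squared kernel Stein discrepancy
    between samples x and the target with the given score function,
    using the inverse multiquadric kernel (1 + (x - y)^2)^(-beta)."""
    d = x[:, None] - x[None, :]
    q = 1 + d ** 2
    k = q ** (-beta)
    dkx = -2 * beta * d * q ** (-beta - 1)     # dk/dx
    dky = -dkx                                 # dk/dy
    dkxy = (2 * beta * q ** (-beta - 1)
            - 4 * beta * (beta + 1) * d ** 2 * q ** (-beta - 2))
    s = score(x)
    # Stein kernel: s(x) s(y) k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy.
    u = s[:, None] * s[None, :] * k + s[:, None] * dky + s[None, :] * dkx + dkxy
    return u.mean()

rng = np.random.default_rng(3)
score = lambda x: -x                  # score of the N(0,1) target
good = ksd2(rng.normal(size=400), score)          # matched samples
bad = ksd2(rng.normal(2.0, 1.0, size=400), score) # shifted samples
```

Samples from the target yield a discrepancy near zero, while the shifted sample is clearly flagged, the behaviour that makes such discrepancies usable as estimation objectives.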

Gorham J, Duncan A, Vollmer S, Mackey L et al., 2019, Measuring sample quality with diffusions, Annals of Applied Probability, Vol: 29, Pages: 2884-2928, ISSN: 1050-5164

Stein’s method for measuring convergence to a continuous target distribution relies on an operator characterizing the target and Stein factor bounds on the solutions of an associated differential equation. While such operators and bounds are readily available for a diversity of univariate targets, few multivariate targets have been analyzed. We introduce a new class of characterizing operators based on Itô diffusions and develop explicit multivariate Stein factor bounds for any target with a fast-coupling Itô diffusion. As example applications, we develop computable and convergence-determining diffusion Stein discrepancies for log-concave, heavy-tailed, and multimodal targets and use these quality measures to select the hyperparameters of biased Markov chain Monte Carlo (MCMC) samplers, compare random and deterministic quadrature rules, and quantify bias-variance tradeoffs in approximate MCMC. Our results establish a near-linear relationship between diffusion Stein discrepancies and Wasserstein distances, improving upon past work even for strongly log-concave targets. The exposed relationship between Stein factors and Markov process coupling may be of independent interest.

Journal article

Duncan A, Zygalakis K, Pavliotis G, 2018, Nonreversible Langevin Samplers: Splitting Schemes, Analysis and Implementation

For a given target density, there exist an infinite number of diffusion processes which are ergodic with respect to this density. As observed in a number of papers, samplers based on nonreversible diffusion processes can significantly outperform their reversible counterparts both in terms of asymptotic variance and rate of convergence to equilibrium. In this paper, we take advantage of this in order to construct efficient sampling algorithms based on the Lie-Trotter decomposition of a nonreversible diffusion process into reversible and nonreversible components. We show that samplers based on this scheme can significantly outperform standard MCMC methods, at the cost of introducing some controlled bias. In particular, we prove that numerical integrators constructed according to this decomposition are geometrically ergodic and characterise fully their asymptotic bias and variance, showing that the sampler inherits the good mixing properties of the underlying nonreversible diffusion. This is illustrated further with a number of numerical examples ranging from highly correlated low dimensional distributions, to logistic regression problems in high dimensions as well as inference for spatial models with many latent variables.

Working paper
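A minimal sketch of such a splitting for a 2D standard Gaussian target: the sampler alternates an exact reversible Ornstein-Uhlenbeck sub-step with an exact nonreversible rotation sub-step (a skew-symmetric drift that preserves the target). The step size and rotation strength are toy choices, and for this Gaussian case both sub-steps are exact, so the controlled bias discussed in the paper does not appear here.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, gamma, n = 0.1, 2.0, 200_000

# Lie-Trotter splitting for the N(0, I) target in 2D:
#   reversible part:    dX = -X dt + sqrt(2) dW   (exact OU step)
#   nonreversible part: dX = gamma J X dt, J skew-symmetric (rotation)
c, s = np.cos(gamma * dt), np.sin(gamma * dt)
rot = np.array([[c, -s], [s, c]])
a = np.exp(-dt)
b = np.sqrt(1 - a ** 2)

x = np.zeros(2)
samples = np.empty((n, 2))
for i in range(n):
    x = a * x + b * rng.normal(size=2)   # reversible OU sub-step
    x = rot @ x                          # nonreversible sub-step
    samples[i] = x

mean = samples.mean(axis=0)
cov = np.cov(samples.T)
```

The chain leaves the standard Gaussian invariant, so the empirical mean and covariance match 0 and the identity.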

Bierkens J, Bouchard-Côté A, Doucet A, Duncan AB, Fearnhead P, Lienart T, Roberts G, Vollmer SJ et al., 2018, Piecewise deterministic Markov processes for scalable Monte Carlo on restricted domains, Statistics & Probability Letters, Vol: 136, Pages: 148-154, ISSN: 0167-7152

Journal article

Duncan AB, Nusken N, Pavliotis GA, 2017, Using perturbed underdamped Langevin dynamics to efficiently sample from probability distributions, Journal of Statistical Physics, Vol: 169, Pages: 1098-1131, ISSN: 1572-9613

In this paper we introduce and analyse Langevin samplers that consist of perturbations of the standard underdamped Langevin dynamics. The perturbed dynamics is such that its invariant measure is the same as that of the unperturbed dynamics. We show that appropriate choices of the perturbations can lead to samplers that have improved properties, at least in terms of reducing the asymptotic variance. We present a detailed analysis of the new Langevin sampler for Gaussian target distributions. Our theoretical results are supported by numerical experiments with non-Gaussian target measures.

Journal article

Bierkens J, Duncan A, 2017, Limit theorems for the zig-zag process, Advances in Applied Probability, Vol: 49, Pages: 791-825, ISSN: 0001-8678

Markov chain Monte Carlo (MCMC) methods provide an essential tool in statistics for sampling from complex probability distributions. While the standard approach to MCMC involves constructing discrete-time reversible Markov chains whose transition kernel is obtained via the Metropolis–Hastings algorithm, there has been recent interest in alternative schemes based on piecewise deterministic Markov processes (PDMPs). One such approach is based on the zig-zag process, introduced in Bierkens and Roberts (2016), which proved to provide a highly scalable sampling scheme for sampling in the big data regime; see Bierkens et al. (2016). In this paper we study the performance of the zig-zag sampler, focusing on the one-dimensional case. In particular, we identify conditions under which a central limit theorem holds and characterise the asymptotic variance. Moreover, we study the influence of the switching rate on the diffusivity of the zig-zag process by identifying a diffusion limit as the switching rate tends to ∞. Based on our results we compare the performance of the zig-zag sampler to existing Monte Carlo methods, both analytically and through simulations.

Journal article
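For the standard normal target the one-dimensional zig-zag process can be simulated exactly, since the integrated switching rate inverts in closed form. The sketch below uses switching rate max(0, θ U'(x)) with U'(x) = x; the event count and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def zigzag(n_events=100_000):
    """1D zig-zag sampler for N(0,1): velocity theta in {-1, +1} flips
    at events of rate max(0, theta * x); switch times sampled exactly."""
    x, theta, t = 0.0, 1.0, 0.0
    xs, ts = [x], [t]
    for _ in range(n_events):
        a = theta * x
        e = rng.exponential()
        # Invert the integrated rate of max(0, a + t) to get the switch time.
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2 * e)
        x += theta * tau
        t += tau
        theta = -theta                 # deterministic velocity flip
        xs.append(x)
        ts.append(t)
    return np.array(xs), np.array(ts)

xs, ts = zigzag()
# Time-average of x^2 along the piecewise-linear path: on each segment
# x moves linearly x0 -> x1, so the integral of x^2 over the segment is
# (x0^2 + x0*x1 + x1^2)/3 times the segment length.
seg = np.diff(ts)
x0, x1 = xs[:-1], xs[1:]
second_moment = np.sum(seg * (x0**2 + x0 * x1 + x1**2) / 3) / ts[-1]
```

The time-averaged second moment approaches 1, the variance of the N(0,1) target, illustrating the ergodic averages whose CLT the paper studies.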

Kasprzak MJ, Duncan AB, Vollmer SJ, 2017, Note on A. Barbour’s paper on Stein’s method for diffusion approximations, Electronic Communications in Probability, Vol: 22, ISSN: 1083-589X

In [2] foundations for diffusion approximation via Stein’s method are laid. This paper has been cited more than 130 times and is a cornerstone in the area of Stein’s method (see, for example, its use in [1] or [7]). A semigroup argument is used in [2] to solve a Stein equation for Gaussian diffusion approximation. We prove that, contrary to the claim in [2], the semigroup considered therein is not strongly continuous on the Banach space of continuous, real-valued functions on D[0,1] growing slower than a cubic, equipped with an appropriate norm. We also provide a proof of the exact formulation of the solution to the Stein equation of interest, which does not require the aforementioned strong continuity. This shows that the main results of [2] hold true.

Journal article

Duncan A, Erban R, Zygalakis K, 2016, Hybrid framework for the simulation of stochastic chemical kinetics, Journal of Computational Physics, Vol: 326, Pages: 398-419, ISSN: 0021-9991

Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For reactions involving few reactants the underlying behaviour is purely discrete, while it is purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

Journal article
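The Gillespie SSA that this hybrid scheme bridges away from fits in a few lines for a toy birth-death process. The rate constants, time horizon and time-average diagnostic below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def gillespie_birth_death(k_birth=10.0, k_death=1.0, t_end=2000.0):
    """Gillespie SSA for 0 -> X (rate k_birth) and X -> 0 (rate
    k_death * X); the stationary mean count is k_birth / k_death.
    Returns the time-average of X over [0, t_end]."""
    t, x = 0.0, 0
    acc = 0.0                       # time-weighted sum of x
    while True:
        a1, a2 = k_birth, k_death * x
        a0 = a1 + a2
        dt = rng.exponential(1.0 / a0)   # time to next reaction
        if t + dt > t_end:
            acc += x * (t_end - t)
            break
        acc += x * dt
        t += dt
        if rng.uniform() * a0 < a1:      # pick which reaction fired
            x += 1
        else:
            x -= 1
    return acc / t_end

mean_x = gillespie_birth_death()
```

The long-run time average sits near the stationary mean of 10; as the abstract notes, the cost of this exact scheme grows with the reaction event frequency, which is what motivates the hybrid jump-diffusion model.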

Duncan AB, Kalliadasis S, Pavliotis GA, Pradas Met al., 2016, Noise-induced transitions in rugged energy landscapes, Physical Review E, Vol: 94, ISSN: 1539-3755

We consider the problem of an overdamped Brownian particle moving in a multiscale potential with N+1 characteristic length scales: the macroscale and N separated microscales. We show that the coarse-grained dynamics is given by an overdamped Langevin equation with respect to the free energy and with a space-dependent diffusion tensor, the calculation of which requires the solution of N fully coupled Poisson equations. We study in detail the structure of the bifurcation diagram for one-dimensional problems, and we show that the multiscale structure in the potential leads to hysteresis effects and to noise-induced transitions. Furthermore, we obtain an explicit formula for the effective diffusion coefficient for a self-similar separable potential, and we investigate the limit of infinitely many small scales.

Journal article
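In the classical one-scale periodic setting, the effective diffusion coefficient has the explicit Lifson-Jackson form, which already shows how microscale oscillations in the potential suppress macroscale diffusion. This standard single-scale formula is an illustrative special case, not the paper's N-scale result.

```python
import numpy as np

def d_eff(V, L=2 * np.pi, n=20_000):
    """Lifson-Jackson effective diffusion coefficient (with kT = D = 1)
    for an overdamped particle in a periodic potential V of period L:
        D_eff = L^2 / ( int_0^L e^{V} dx * int_0^L e^{-V} dx ).
    The integrals are computed by a simple Riemann sum."""
    x = np.linspace(0.0, L, n, endpoint=False)
    dx = L / n
    zp = np.sum(np.exp(V(x))) * dx
    zm = np.sum(np.exp(-V(x))) * dx
    return L ** 2 / (zp * zm)

flat = d_eff(lambda x: np.zeros_like(x))   # no potential: free diffusion
rough = d_eff(lambda x: np.cos(x))         # oscillations slow diffusion
```

A flat potential gives D_eff = 1, while V(x) = cos(x) reduces it to 1/I0(1)^2, about 0.62, with the suppression growing rapidly for deeper wells.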

Duncan AB, Pavliotis GA, Lelievre T, 2016, Variance reduction using nonreversible Langevin samplers, Journal of Statistical Physics, Vol: 163, Pages: 457-491, ISSN: 1572-9613

A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers, introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.

Journal article

Duncan A, Liao S, Vejchodský T, Erban R, Grima R et al., 2015, Noise-induced multistability in chemical systems: Discrete versus continuum modeling, Physical Review E, Vol: 91, ISSN: 1539-3755

Journal article

Duncan AB, Elliott CM, Pavliotis GA, Stuart AM et al., 2015, A Multiscale Analysis of Diffusions on Rapidly Varying Surfaces, Journal of Nonlinear Science, Vol: 25, Pages: 389-449, ISSN: 0938-8974

Journal article

Duncan AB, 2015, Homogenization of Lateral Diffusion on a Random Surface, Multiscale Modeling & Simulation, Vol: 13, Pages: 1478-1506, ISSN: 1540-3459

Journal article

Papandreou Y, Cockayne J, Girolami M, Duncan AB et al., Theoretical Guarantees for the Statistical Finite Element Method

The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesised with the numerical solution of a PDE intended to describe it in a coherent statistical framework, to compensate for model error. This work presents a new theoretical analysis of the statistical finite element method demonstrating that it has similar convergence properties to the finite element method on which it is based. Our results constitute a bound on the Wasserstein-2 distance between the ideal prior and posterior and the StatFEM approximation thereof, and show that this distance converges at the same mesh-dependent rate as finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate our theory, including an example which tests the robustness of StatFEM when extended to nonlinear quantities of interest.

Journal article

Seshadri P, Duncan A, Thorne G, Parks G, Diaz RV, Girolami M et al., Bayesian Assessments of Aeroengine Performance with Transfer Learning

Aeroengine performance is determined by temperature and pressure profiles along various axial stations within an engine. Given limited sensor measurements both along and between axial stations, we require a statistically principled approach to inferring these profiles. In this paper we detail a Bayesian methodology for interpolating the spatial temperature or pressure profile at axial stations within an aeroengine. The profile at any given axial station is represented as a spatial Gaussian random field on an annulus, with circumferential variations modelled using a Fourier basis and radial variations modelled with a squared exponential kernel. This Gaussian random field is extended to ingest data from multiple axial measurement planes, with the aim of transferring information across the planes. To facilitate this type of transfer learning, a novel planar covariance kernel is proposed, with hyperparameters that characterise the correlation between any two measurement planes. In the scenario where precise frequencies comprising the temperature field are unknown, we utilise a sparsity-promoting prior on the frequencies to encourage sparse representations. This easily extends to cases with multiple engine planes whilst accommodating frequency variations between the planes. The main quantity of interest, the spatial area average, is readily obtained in closed form. We term this the Bayesian area average and demonstrate how this metric offers far more precise averages than a sector area average, a widely used area averaging approach. Furthermore, the Bayesian area average naturally decomposes the posterior uncertainty into terms characterising insufficient sampling and sensor measurement error respectively. This too provides a significant improvement over prior standard-deviation-based uncertainty breakdowns.

Working paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
