Results
- Journal article: Aaij R, Abdelmotteleb ASW, Beteta CA, et al., 2024, A measurement of ΔΓ_s, Journal of High Energy Physics, ISSN: 1029-8479
- Journal article: Armano M, Audley H, Baird J, et al., 2024, Nano-Newton electrostatic force actuators for femto-Newton-sensitive measurements: System performance test in the LISA Pathfinder mission, Physical Review D, Vol: 109, ISSN: 2470-0010
- Journal article: Henry S, Su H, Akhter S, et al., 2024, Measurement of electron neutrino and antineutrino cross sections at low momentum transfer, Physical Review D, Vol: 109, ISSN: 2470-0010
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Measurement of the primary Lund jet plane density in proton-proton collisions at √s = 13 TeV, Journal of High Energy Physics, Vol: 2024, ISSN: 1029-8479
  A measurement is presented of the primary Lund jet plane (LJP) density in inclusive jet production in proton-proton collisions. The analysis uses 138 fb−1 of data collected by the CMS experiment at √s = 13 TeV. The LJP, a representation of the phase space of emissions inside jets, is constructed using iterative jet declustering. The transverse momentum kT and the splitting angle ΔR of an emission relative to its emitter are measured at each step of the jet declustering process. The average density of emissions as a function of ln(kT/GeV) and ln(R/ΔR) is measured for jets with distance parameters R = 0.4 or 0.8, transverse momentum pT > 700 GeV, and rapidity |y| < 1.7. The jet substructure is measured using the charged-particle tracks of the jet. The measured distributions, unfolded to the level of stable charged particles, are compared with theoretical predictions from simulations and with perturbative quantum chromodynamics calculations. Due to the ability of the LJP to factorize physical effects, these measurements can be used to improve different aspects of the physics modeling in event generators.
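For readers unfamiliar with the Lund jet plane construction described in the entry above, the following is a minimal illustrative sketch of how emissions recorded during primary declustering could be binned into an LJP density. It assumes a hypothetical list of (kT, ΔR) pairs per jet and NumPy histogram arrays; the actual CMS measurement uses charged-particle tracks and unfolds to stable-particle level, which this sketch does not attempt.

```python
import math
import numpy as np

def fill_lund_plane(emissions, R, hist, kt_edges, dr_edges):
    """Add one jet's primary-declustering emissions to an LJP histogram.

    `emissions` is assumed to be a list of (kt, delta_r) pairs, one per step of
    an iterative declustering of the jet, following the harder branch each time.
    Axes follow the paper's convention: ln(kt/GeV) versus ln(R/delta_r).
    """
    for kt, delta_r in emissions:
        x = math.log(kt)            # ln(kt / GeV), kt given in GeV
        y = math.log(R / delta_r)   # ln(R / delta_r)
        ix = np.searchsorted(kt_edges, x, side="right") - 1
        iy = np.searchsorted(dr_edges, y, side="right") - 1
        if 0 <= ix < hist.shape[0] and 0 <= iy < hist.shape[1]:
            hist[ix, iy] += 1

def lund_plane_density(hist, n_jets, kt_edges, dr_edges):
    """Average emission density per jet, per unit area of the Lund plane."""
    bin_area = np.diff(kt_edges)[:, None] * np.diff(dr_edges)[None, :]
    return hist / (n_jets * bin_area)

# Toy usage with made-up emissions from two R = 0.4 jets.
kt_edges = np.linspace(-1.0, 6.0, 15)
dr_edges = np.linspace(0.0, 5.0, 11)
hist = np.zeros((len(kt_edges) - 1, len(dr_edges) - 1))
fill_lund_plane([(1.2, 0.30), (5.0, 0.10), (0.8, 0.02)], 0.4, hist, kt_edges, dr_edges)
fill_lund_plane([(2.5, 0.25), (0.6, 0.05)], 0.4, hist, kt_edges, dr_edges)
print(lund_plane_density(hist, n_jets=2, kt_edges=kt_edges, dr_edges=dr_edges).sum())
```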
- Journal article: Aalbers J, Akerib DS, Al Musalhi AK, et al., 2024, First constraints on WIMP-nucleon effective field theory couplings in an extended energy region from LUX-ZEPLIN, Physical Review D, Vol: 109, ISSN: 2470-0010
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Search for W′ bosons decaying to a top and a bottom quark in leptonic final states in proton-proton collisions at √s = 13 TeV, Journal of High Energy Physics, Vol: 2024, ISSN: 1029-8479
  A search for W′ bosons decaying to a top and a bottom quark in final states including an electron or a muon is performed with the CMS detector at the LHC. The analyzed data correspond to an integrated luminosity of 138 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV. Good agreement with the standard model expectation is observed and no evidence for the existence of the W′ boson is found over the mass range examined. The largest observed deviation from the standard model expectation is found for a W′ boson mass (m_W′) hypothesis of 3.8 TeV with a relative decay width of 1%, with a local (global) significance of 2.6 (2.0) standard deviations. Upper limits on the production cross sections of W′ bosons decaying to a top and a bottom quark are set. Left- and right-handed W′ bosons with m_W′ below 3.9 and 4.3 TeV, respectively, are excluded at the 95% confidence level, under the assumption that the new particle has a narrow decay width. Limits are also set for relative decay widths up to 30%.
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Search for long-lived particles decaying to final states with a pair of muons in proton-proton collisions at √s = 13.6 TeV, Journal of High Energy Physics, ISSN: 1029-8479
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Inclusive and differential cross section measurements of tt̄bb̄ production in the lepton plus jets channel at √s = 13 TeV, Journal of High Energy Physics, ISSN: 1029-8479
- Conference paper: Stagni F, Boyer A, Tsaregorodtsev A, et al., 2024, DIRAC current, upcoming and planned capabilities and technologies, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP), Publisher: EDP Sciences, ISSN: 2100-014X
  DIRAC is the interware for building and operating large-scale distributed computing systems. It is adopted by multiple collaborations from various scientific domains for implementing their computing models. DIRAC provides a framework and a rich set of ready-to-use services for the workload, data and production management tasks of small, medium and large scientific communities with different computing requirements. The base functionality can be easily extended by custom components supporting community-specific workflows. DIRAC is at the same time an aging project, and a new DiracX project is taking shape to replace DIRAC in the long term. This contribution highlights DIRAC's current, upcoming and planned capabilities and technologies, and how the transition to DiracX will take place. Examples include, but are not limited to, the adoption of security tokens and interactions with Identity Provider services, the integration of clouds and high-performance computers, the interface with Rucio, and improved monitoring and deployment procedures.
- Conference paper: Bauer D, Fayer S, Whitehouse D, 2024, Estimating the environmental impact of a large Tier 2, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP), Publisher: EDP Sciences, Pages: 1-8, ISSN: 2100-014X
  Recent years have seen increasing interest in the environmental impact, especially the carbon footprint, generated by the often large-scale computing facilities used by the communities represented at CHEP. As this is a fairly new requirement, this information is not always readily available, especially at universities and similar institutions which do not necessarily see large-scale computing provision as their core competency. We present the results of a survey of a large WLCG Tier 2 with respect to power usage and carbon footprint, leveraging all sources of information currently available to us. We show that it is possible to estimate the environmental impact with respect to power usage without having to invest in dedicated monitoring equipment. Manufacturers, however, do not yet provide sufficient information to allow for a detailed analysis of the carbon footprint of equipment manufacture, but even with the available information it is clear that this cannot be ignored.
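As a rough illustration of the kind of power-based estimate the survey above describes, the sketch below converts a node count and average power draw into an annual energy and operational-carbon figure. Every number in it (node count, power draw, PUE, grid carbon intensity) is an illustrative placeholder rather than a value from the paper, and embodied/manufacturing emissions are deliberately left out, matching the limitation the authors note.

```python
# Rough estimate of a compute site's operational carbon footprint from power
# usage alone. All numbers below are illustrative placeholders, not values
# taken from the CHEP paper.

NODES = 500                 # worker nodes at the site
AVG_NODE_POWER_W = 350.0    # average draw per node, including load variation
PUE = 1.4                   # power usage effectiveness of the machine room
GRID_INTENSITY = 0.21       # kg CO2e per kWh of grid electricity
HOURS_PER_YEAR = 24 * 365

it_energy_kwh = NODES * AVG_NODE_POWER_W * HOURS_PER_YEAR / 1000.0
total_energy_kwh = it_energy_kwh * PUE          # include cooling and overheads
carbon_kg = total_energy_kwh * GRID_INTENSITY   # operational emissions only

print(f"IT energy:    {it_energy_kwh:,.0f} kWh/year")
print(f"Total energy: {total_energy_kwh:,.0f} kWh/year")
print(f"Footprint:    {carbon_kg / 1000:,.1f} t CO2e/year (excl. manufacture)")
```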
- Conference paper: Bauer D, Fayer S, 2024, Standardizing DIRAC's cloud interfaces, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP), Publisher: EDP Sciences, ISSN: 2100-014X
  DIRAC is a widely used framework for distributed computing. It provides a layer between users and computing resources by offering a common interface to a number of heterogeneous resource providers. In these proceedings we describe a new implementation of the DIRAC to Cloud interface.
- Conference paper: Barbone M, Brown C, Radburn-Smith B, et al., 2024, Deployment of ML in changing environments, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023), Publisher: EDP Sciences, ISSN: 2100-014X
  The High-Luminosity LHC upgrade of the CMS experiment will use a large number of machine learning (ML) based algorithms in its hardware-based trigger. These ML algorithms will facilitate the selection of potentially interesting events for storage and offline analysis. Strict latency and resource requirements limit the size and complexity of these models because of their use in a high-speed trigger setting and their deployment on FPGA hardware. It is envisaged that these ML models will be trained on large, carefully tuned Monte Carlo datasets and subsequently deployed in a real-world detector environment. Not only is there a potentially large difference between the MC training data and real-world conditions, but these detector conditions could also change over time, leading to a shift in model output that could degrade trigger performance. The studies presented explore different techniques to reduce the impact of this effect, using the CMS track finding and vertex trigger algorithms as a test case. The studies compare a baseline retraining and redeployment of the model with episodic training of a model as new data arrive in a continual learning context. The results show that a continually learning algorithm outperforms a simply retrained model when degradation in detector performance is applied to the training data, and is a viable option for maintaining performance in an evolving environment such as the High-Luminosity LHC.
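To make the continual-learning idea in the entry above concrete, here is a minimal, illustrative experience-replay loop for a toy linear model on drifting data. It is not the CMS track-finding or vertexing model; the replay-buffer scheme, the model, and the synthetic data are stand-ins chosen only to show episodic updates as new batches arrive.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_step(w, X, y, lr=0.05):
    """One least-squares SGD step for a linear model y ~ X @ w."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def continual_update(w, X_new, y_new, replay_X, replay_y, buffer_size=256):
    """Update the model on a new batch while rehearsing stored examples.

    A naive experience-replay scheme: mix the incoming batch with a small
    buffer of past data so the model adapts to drifting conditions without
    forgetting earlier ones.
    """
    X = np.vstack([X_new, replay_X]) if len(replay_X) else X_new
    y = np.concatenate([y_new, replay_y]) if len(replay_y) else y_new
    w = sgd_step(w, X, y)
    # Keep at most `buffer_size` past examples for future rehearsal.
    replay_X = np.vstack([replay_X, X_new])[-buffer_size:] if len(replay_X) else X_new[-buffer_size:]
    replay_y = np.concatenate([replay_y, y_new])[-buffer_size:] if len(replay_y) else y_new[-buffer_size:]
    return w, replay_X, replay_y

# Streaming toy data whose input scale drifts slowly over time (a stand-in
# for changing detector conditions).
w = np.zeros(3)
replay_X, replay_y = np.empty((0, 3)), np.empty(0)
true_w = np.array([1.0, -2.0, 0.5])
for step in range(200):
    scale = 1.0 + 0.01 * step
    X_new = rng.normal(size=(32, 3)) * scale
    y_new = X_new @ true_w + rng.normal(scale=0.1, size=32)
    w, replay_X, replay_y = continual_update(w, X_new, y_new, replay_X, replay_y)
print("fitted weights:", np.round(w, 2))
```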
- Conference paper: Barbone M, Brown C, Gaydadjiev G, et al., 2024, Embedded continual learning for high-energy physics, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023), Publisher: EDP Sciences, ISSN: 2100-014X
  Neural networks (NNs) are often trained offline on large datasets and deployed on specialised hardware for inference, with a strict separation between training and inference. However, in many realistic applications the training environment differs from the real world, or data arrive in a streaming fashion and are continuously changing. In these scenarios, the ability to continuously train and update NN models is desirable. Continual learning (CL) algorithms allow training of models on a stream of data. CL algorithms are often designed to work in constrained settings, such as limited memory and computational power, or limitations on the ability to store past data (e.g., due to privacy concerns or memory requirements). High-energy physics experiments are developing intelligent detectors, with algorithms running on computer systems located close to the detector to meet the challenges of increased data rates and occupancies. The use of NN algorithms in this context is limited by changing detector conditions, such as degradation over time or failure of an input signal, which might cause the NNs to lose accuracy, leading in the worst case to the loss of interesting events. CL has the potential to solve this issue, using large amounts of continuously streaming data to allow the network to recognise changes and to learn and adapt to detector conditions. It has the potential to outperform traditional NN training techniques, as not all possible scenarios can be predicted and modelled in static training data samples. However, NN training is computationally expensive and, when combined with the strict timing requirements of embedded processors deployed close to the detector, current state-of-the-art offline approaches cannot be directly applied to real-time systems. Alternatives to typical backpropagation-based training that can be deployed on FPGAs for real-time data processing are presented, and their computational and accuracy characteristics are discussed in the context of
- Conference paper: Barbone M, Gaydadjiev G, Howard A, et al., 2024, Fast, high-quality pseudo random number generators for heterogeneous computing, 26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023), Publisher: EDP Sciences, ISSN: 2100-014X
  Random number generation is key to many applications in a wide variety of disciplines. Depending on the application, the quality of the random numbers from a particular generator can directly impact both computational performance and, critically, the outcome of the calculation. High-energy physics applications make wide use of Monte Carlo simulations and machine learning, both of which require high-quality random numbers. In recent years, to meet increasing performance requirements, many high-energy physics workloads leverage GPU acceleration. While on a CPU there exists a wide variety of generators with different performance and quality characteristics, the same cannot be said for GPU and FPGA accelerators. On GPUs, the most common implementation is provided by cuRAND, an NVIDIA library that is not open source or peer reviewed by the scientific community. The highest-quality generator implemented in cuRAND is a version of the Mersenne Twister. Given the availability of better and faster random number generators, high-energy physics moved away from the Mersenne Twister several years ago, and nowadays MIXMAX is the standard generator in Geant4 via CLHEP. The original MIXMAX design supports parallel streams with a seeding algorithm that makes it especially suited to GPUs and FPGAs, where extreme parallelism is a key factor. In this study we implement the MIXMAX generator on both architectures and analyse its suitability and applicability for accelerator implementations. We evaluated the results against "Mersenne Twister for a Graphic Processor" (MTGP32) on GPUs, which resulted in 5, 13 and 14 times higher throughput when a vector space of size 240, 17 and 8 was used, respectively. The MIXMAX generator, coded in VHDL and implemented on Xilinx UltraScale+ FPGAs, requires 50% fewer total look-up tables (LUTs) compared to a 32-bit Mersenne Twister (MT19937), or 75% fewer LUTs per output bit. In summary, the state-of-the-art MIXMAX pseudo random number generator has been implemen
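The throughput comparison described above is carried out on GPUs and FPGAs; as a purely illustrative sketch of the benchmarking methodology, the snippet below times interchangeable bit generators in NumPy on a CPU. MIXMAX itself is not shipped with NumPy, so MT19937 and PCG64 stand in solely to show how a numbers-per-second comparison can be set up.

```python
import time
import numpy as np

def throughput(bit_generator, n=10_000_000, reps=3):
    """Best-of-`reps` uniform-double generation rate (numbers/second)."""
    gen = np.random.Generator(bit_generator)
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        gen.random(n)                  # n uniform doubles in [0, 1)
        best = min(best, time.perf_counter() - t0)
    return n / best

# Stand-in generators only; the paper benchmarks MIXMAX against MTGP32.
for name, bg in [("MT19937", np.random.MT19937(1)), ("PCG64", np.random.PCG64(1))]:
    print(f"{name:8s} {throughput(bg) / 1e6:7.1f} M numbers/s")
```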
- Conference paper: Ourida T, Luk W, Tapper A, et al., 2024, Acceleration of a deep neural network for the Compact Muon Solenoid, 26th International Conference on Computing in High Energy and Nuclear Physics, Publisher: EDP Sciences, ISSN: 2100-014X
  There are ongoing efforts to investigate theories that aim to explain the current shortcomings of the Standard Model of particle physics. One such effort is the long-lived particle jet tagging algorithm, based on a deep neural network (DNN), which is used to search for exotic new particles. This paper describes two novel optimisations in the design of this DNN, suitable for implementation on an FPGA-based accelerator. The first involves the adoption of cyclic random access memories and the reuse of multiply-accumulate operations. The second involves storing matrices distributed over many RAM memories with elements grouped by index. An evaluation of the proposed methods and hardware architectures is also included. The proposed optimisations can yield performance enhancements of more than an order of magnitude compared to software implementations. The innovations can also lead to smaller FPGA footprints and accordingly reduce power consumption, allowing, for instance, duplication of compute units to achieve increases in effective throughput.
- Journal article: Aaij R, Abdelmotteleb ASW, Beteta CA, et al., 2024, Measurement of the Λc+ to D0 production ratio in peripheral PbPb collisions at √s_NN = 5.02 TeV (vol 2024, 21, 2024), Journal of High Energy Physics, ISSN: 1029-8479
- Journal article: Abe K, Bronner C, Hayato Y, et al., 2024, Solar neutrino measurements using the full data period of Super-Kamiokande-IV, Physical Review D, Vol: 109, ISSN: 2470-0010
- Journal article: Aaij R, Abdelmotteleb ASW, Beteta CA, et al., 2024, Study of CP violation in B0 → DK*(892)0 decays with D → Kπ(ππ), ππ(ππ), and KK final states, Journal of High Energy Physics, ISSN: 1029-8479
- Journal article: Aaij R, Abdelmotteleb ASW, Abellan Beteta C, et al., 2024, Measurement of the branching fraction of B0 → J/ψπ0 decays, Journal of High Energy Physics, Vol: 2024
  The ratio of branching fractions between B0 → J/ψπ0 and B+ → J/ψK*+ decays is measured with proton-proton collision data collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The measured value is (Formula presented.) where the first uncertainty is statistical and the second is systematic. The branching fraction for B0 → J/ψπ0 decays is determined using the branching fraction of the normalisation channel, resulting in (Formula presented.) where the last uncertainty corresponds to that of the external input. This result is consistent with the current world average value and competitive with the most precise single measurement to date.
- Journal article: Ayres Rocha D, Baptista de Souza Leite J, Bediaga IB, et al., 2024, The LHCb Upgrade I, Journal of Instrumentation, Vol: 19
  The LHCb upgrade represents a major change of the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods. Readout of all detectors into an all-software trigger is central to the new design, facilitating the reconstruction of events at the maximum LHC interaction rate, and their selection in real time. The experiment's tracking system has been completely upgraded with a new pixel vertex detector, a silicon tracker upstream of the dipole magnet and three scintillating fibre tracking stations downstream of the magnet. The whole photon detection system of the RICH detectors has been renewed and the readout electronics of the calorimeter and muon systems have been fully overhauled. The first stage of the all-software trigger is implemented on a GPU farm. The output of the trigger provides a combination of totally reconstructed physics objects, such as tracks and vertices, ready for final analysis, and of entire events which need further offline reprocessing. This scheme required a complete revision of the computing model and rewriting of the experiment's software.
- Journal article: Aaij R, Abdelmotteleb ASW, Beteta CA, et al., 2024, Search for Bc+ → π+μ+μ− decays and measurement of the branching fraction ratio B(Bc+ → ψ(2S)π+)/B(Bc+ → J/ψπ+), European Physical Journal C, Vol: 84, ISSN: 1434-6044
  The first search for nonresonant Bc+ → π+μ+μ− decays is reported. The analysis uses proton-proton collision data collected with the LHCb detector between 2011 and 2018, corresponding to an integrated luminosity of 9 fb−1. No evidence for an excess of signal events over background is observed and an upper limit is set on the branching fraction ratio B(Bc+ → π+μ+μ−)/B(Bc+ → J/ψπ+) < 2.1×10−4 at 90% confidence level. Additionally, an updated measurement of the ratio of the Bc+ → ψ(2S)π+ and Bc+ → J/ψπ+ branching fractions is reported. The ratio B(Bc+ → ψ(2S)π+)/B(Bc+ → J/ψπ+) is measured to be 0.254 ± 0.018 ± 0.003 ± 0.005, where the first uncertainty is statistical, the second systematic, and the third is due to the uncertainties on the branching fractions of the leptonic J/ψ and ψ(2S) decays. This measurement is the most precise to date and is consistent with previous LHCb results.
- Journal article: Tumasyan A, Adam W, Andrejkovic JW, et al., 2024, Measurement of simplified template cross sections of the Higgs boson produced in association with W or Z bosons in the H → bb̄ decay channel in proton-proton collisions at √s = 13 TeV, Physical Review D: Particles, Fields, Gravitation and Cosmology, Vol: 109, ISSN: 1550-2368
  Differential cross sections are measured for the standard model Higgs boson produced in association with vector bosons (W, Z) and decaying to a pair of b quarks. Measurements are performed within the framework of the simplified template cross sections. The analysis relies on the leptonic decays of the W and Z bosons, resulting in final states with 0, 1, or 2 electrons or muons. The Higgs boson candidates are either reconstructed from pairs of resolved b-tagged jets, or from single large-radius jets containing the particles arising from two b quarks. Proton-proton collision data at √s = 13 TeV, collected by the CMS experiment in 2016–2018 and corresponding to a total integrated luminosity of 138 fb−1, are analyzed. The inclusive signal strength, defined as the product of the observed production cross section and branching fraction relative to the standard model expectation, combining all analysis categories, is found to be μ = 1.15 +0.22/−0.20. This corresponds to an observed (expected) significance of 6.3 (5.6) standard deviations.
- Journal article: Abratenko P, Alterkait O, Andrade Aldana D, et al., 2024, Measurement of nuclear effects in neutrino-argon interactions using generalized kinematic imbalance variables with the MicroBooNE detector, Physical Review D, Vol: 109, ISSN: 2470-0010
  We present a set of new generalized kinematic imbalance variables that can be measured in neutrino scattering. These variables extend previous measurements of kinematic imbalance on the transverse plane and are more sensitive to modeling of nuclear effects. We demonstrate the enhanced power of these variables using simulation and then use the MicroBooNE detector to measure them for the first time. We report flux-integrated single- and double-differential measurements of charged-current muon neutrino scattering on argon, using a topology with one muon and one proton in the final state, as a function of these novel kinematic imbalance variables. These measurements allow us to demonstrate that the treatment of charged-current quasielastic interactions in GENIE version 2 is inadequate to describe the data. Further, they reveal tensions with more modern generator predictions, particularly in regions of phase space where final-state interactions are important.
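The generalized kinematic imbalance variables measured above are defined in the paper itself and are not reproduced here. As background, this sketch computes the standard transverse kinematic imbalance variables (δpT, δαT, δφT) that they extend, from assumed muon and proton 3-momenta; the function name and example numbers are illustrative.

```python
import numpy as np

def transverse_kinematic_imbalance(p_mu, p_p, nu_dir=(0.0, 0.0, 1.0)):
    """Standard transverse kinematic imbalance (delta pT, delta alphaT, delta phiT).

    p_mu, p_p : 3-momenta (GeV/c) of the outgoing muon and proton.
    nu_dir    : unit vector along the incoming neutrino (beam) direction.
    The paper's generalized variables extend this transverse-plane baseline.
    """
    nu = np.asarray(nu_dir, dtype=float)
    nu /= np.linalg.norm(nu)
    # Project each momentum onto the plane transverse to the neutrino direction.
    pt_mu = np.asarray(p_mu, dtype=float) - np.dot(p_mu, nu) * nu
    pt_p = np.asarray(p_p, dtype=float) - np.dot(p_p, nu) * nu
    dpt_vec = pt_mu + pt_p                       # transverse momentum imbalance
    dpt = np.linalg.norm(dpt_vec)
    cos_alpha = np.clip(np.dot(-pt_mu, dpt_vec) /
                        (np.linalg.norm(pt_mu) * dpt), -1.0, 1.0)
    cos_phi = np.clip(np.dot(-pt_mu, pt_p) /
                      (np.linalg.norm(pt_mu) * np.linalg.norm(pt_p)), -1.0, 1.0)
    return dpt, np.degrees(np.arccos(cos_alpha)), np.degrees(np.arccos(cos_phi))

# Illustrative charged-current 1-muon-1-proton event (numbers are made up).
print(transverse_kinematic_imbalance([0.30, 0.10, 0.80], [-0.28, -0.12, 0.50]))
```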
- Journal article: Aaij R, Abdelmotteleb ASW, Abellan Beteta C, et al., 2024, Measurement of forward charged hadron flow harmonics in peripheral PbPb collisions at √s_NN = 5.02 TeV with the LHCb detector, Physical Review C, Vol: 109, ISSN: 2469-9985
  Flow harmonic coefficients, vn, which are the key to studying the hydrodynamics of the quark-gluon plasma (QGP) created in heavy-ion collisions, have been measured in various collision systems and kinematic regions and using various particle species. The study of flow harmonics over a wide pseudorapidity range is particularly valuable for understanding the temperature dependence of the shear viscosity to entropy density ratio of the QGP. This paper presents the first LHCb results for the second- and third-order flow harmonic coefficients of charged hadrons as a function of transverse momentum in the forward region, corresponding to pseudorapidities between 2.0 and 4.9, using data collected from PbPb collisions in 2018 at a center-of-mass energy of 5.02 TeV. The coefficients measured using the two-particle angular correlation analysis method are smaller than the central-pseudorapidity measurements by ALICE and ATLAS in the same collision system but share similar features.
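As an illustration of the two-particle correlation method named in the entry above, the sketch below extracts v2 and v3 from a toy azimuthal-angle sample using the factorization relation vn{2} = sqrt(⟨cos nΔφ⟩) over distinct pairs. The toy event generation is illustrative, and the pseudorapidity-gap and non-flow treatments used in the real analysis are omitted.

```python
import numpy as np

def vn_two_particle(phis, n_max=3):
    """Estimate v_n{2} from particle azimuthal angles via pair correlations.

    For factorizing flow, <cos n(phi_i - phi_j)> over distinct pairs equals
    v_n^2, so v_n = sqrt of that pair average. The pair sum is obtained from
    the flow vector Q_n via |Q_n|^2 = M + sum_{i != j} exp(i n dphi).
    """
    results = {}
    m = len(phis)
    for n in range(2, n_max + 1):
        qn = np.sum(np.exp(1j * n * phis))
        vn_delta = (np.abs(qn) ** 2 - m) / (m * (m - 1))
        results[n] = np.sqrt(max(vn_delta, 0.0))
    return results

# Toy sample: angles drawn from dN/dphi proportional to 1 + 2*v2*cos(2*phi),
# with v2 = 0.06, via accept-reject.
rng = np.random.default_rng(42)
phi = rng.uniform(-np.pi, np.pi, 200_000)
keep = rng.uniform(0, 1.12, phi.size) < 1 + 2 * 0.06 * np.cos(2 * phi)
print(vn_two_particle(phi[keep]))   # expect v2 close to 0.06, v3 close to 0
```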
- Journal article: Aaij R, Abdelmotteleb ASW, Abellan Beteta C, et al., 2024, Multiplicity dependence of σψ(2S)/σJ/ψ in pp collisions at √s = 13 TeV, Journal of High Energy Physics, Vol: 2024
  The ratio of production cross-sections of ψ(2S) over J/ψ mesons as a function of charged-particle multiplicity in proton-proton collisions at a centre-of-mass energy √s = 13 TeV is measured with a data sample collected by the LHCb detector, corresponding to an integrated luminosity of 658 pb−1. The ratio is measured for both prompt and non-prompt ψ(2S) and J/ψ mesons. When there is an overlap between the rapidity ranges over which multiplicity and charmonium production are measured, a multiplicity-dependent modification of the ratio is observed for prompt mesons. No significant multiplicity dependence is found when the ranges do not overlap. For non-prompt production, the ψ(2S)-to-J/ψ production ratio is roughly independent of multiplicity, irrespective of the rapidity range over which the multiplicity is measured. The results are compared to predictions of the co-mover model and agree well, except in the low-multiplicity region. The ratios of production cross-sections of ψ(2S) over J/ψ mesons are cross-checked with other measurements in di-lepton channels and found to be compatible.
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Development of the CMS detector for the CERN LHC Run 3, Journal of Instrumentation, Vol: 19, ISSN: 1748-0221
  Since the initial data taking of the CERN LHC, the CMS experiment has undergone substantial upgrades and improvements. This paper discusses the CMS detector as it is configured for the third data-taking period of the CERN LHC, Run 3, which started in 2022. The entire silicon pixel tracking detector was replaced. A new powering system for the superconducting solenoid was installed. The electronics of the hadron calorimeter was upgraded. All the muon electronic systems were upgraded, and new muon detector stations were added, including a gas electron multiplier detector. The precision proton spectrometer was upgraded. The dedicated luminosity detectors and the beam loss monitor were refurbished. Substantial improvements to the trigger, data acquisition, software, and computing systems were also implemented, including a new hybrid CPU/GPU farm for the high-level trigger.
- Journal article: Hayrapetyan A, Tumasyan A, Adam W, et al., 2024, Search for an exotic decay of the Higgs boson into a Z boson and a pseudoscalar particle in proton-proton collisions at √s = 13 TeV, Physics Letters B: Nuclear Physics and Particle Physics, Vol: 852, ISSN: 0370-2693
  A search for an exotic decay of the Higgs boson to a Z boson and a light pseudoscalar particle (a), decaying to a pair of leptons and a pair of photons, respectively, is presented. The search is based on proton-proton collision data at a center-of-mass energy of √s = 13 TeV, collected with the CMS detector at the LHC and corresponding to an integrated luminosity of 138 fb−1. The analysis probes pseudoscalar masses m_a between 1 and 30 GeV, leading to two pairs of well-isolated leptons and photons. Upper limits at 95% confidence level are set on the Higgs boson production cross section times its branching fraction to two leptons and two photons. The observed (expected) limits are in the range of 1.1–17.8 (1.7–17.9) fb within the probed m_a interval. An excess of data above the expected standard model background with a local (global) significance of 2.6 (1.3) standard deviations is observed for a mass hypothesis of m_a = 3 GeV. Limits on models involving axion-like particles, formulated as an effective field theory, are also reported.
- Journal article: Campbell JM, Diefenthaler M, Hobbs TJ, et al., 2024, Event generators for high-energy physics experiments, SciPost Physics, Vol: 16, ISSN: 2542-4653
- Journal article: Acampora G, Ahdida C, Albanese R, et al., 2024, SND@LHC: the scattering and neutrino detector at the LHC, Journal of Instrumentation, Vol: 19, ISSN: 1748-0221
- Journal article: Pec V, Kudryavtsev VA, Araujo HM, et al., 2024, Muon-induced background in a next-generation dark matter experiment based on liquid xenon, European Physical Journal C: Particles and Fields, Vol: 84, ISSN: 1124-1861
  Muon-induced neutrons can lead to potentially irreducible backgrounds in rare event search experiments. We have investigated the implications of laboratory depth for the muon-induced background in a future dark matter experiment capable of reaching the so-called neutrino floor. Our simulation study focused on a xenon-based detector with 70 tonnes of active mass, surrounded by additional veto systems plus a water shield. Two locations at the Boulby Underground Laboratory (UK) were analysed as examples: an experimental cavern in salt at a depth of 2850 m w.e. (similar to the location of the existing laboratory), and a deeper laboratory located in polyhalite rock at a depth of 3575 m w.e. Our results show that no cosmogenic background events are likely to survive standard analysis cuts for 10 years of operation at either location. The largest background component we identified comes from beta-delayed neutron emission from
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.