In 1948 Shannon laid the foundations of information theory, which revolutionized statistics, physics, engineering, and computer science. A strong limitation, however, is that the semantic content of data is not taken into account. This research will produce a novel framework for characterizing semantic information content in multimodal data. It will combine non-convex optimization with advanced statistical methods, leading to new representation learning algorithms, measures of uncertainty, and sampling methods. Given data from a scene, the algorithms will be able to find the most informative representations of the data for a specific task. These methods will be applied to complex datasets from real situations, including text, images, videos, sensor signals, and camera feeds, resulting in algorithms for intelligent decision-making. (EPSRC grant ref: EP/R018413/1).
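As a minimal illustration of the classical, non-semantic measure the paragraph refers to, the sketch below computes Shannon entropy, which quantifies information content purely from a probability distribution, irrespective of what the outcomes mean:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits.

    Note: this depends only on the probabilities, not on what the
    outcomes represent -- the 'semantic gap' the project targets.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit per toss; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # < 1.0
print(shannon_entropy([1.0]))        # 0.0 -- a certain outcome is uninformative
```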
ASSET is a 14-partner collaborative European FP7-HEALTH-2010 project under the theme "tackling human diseases through systems biology approaches". Using a combination of state-of-the-art genomics, proteomics and mathematical modelling, ASSET's major goal is to identify mechanistically understood network vulnerabilities that can be exploited in new approaches to the diagnosis and treatment of highly aggressive and devastating major paediatric tumours, including neuroblastoma (NB), medulloblastoma (MB) and Ewing sarcoma family tumours (ESFT).
Interactive Machine Learning: Accelerating Progress in Science, An Emerging Theme of ICT Research (ENGAGE)
Our vision is to establish and lead a new theme in ICT research based on Interactive Machine Learning (IML). Our expansion of IML will give scientists and non-ICT specialists unprecedented access to cutting-edge Machine Learning algorithms by providing a human-computer interface through which they can directly interact with large-scale data and computing resources in an intuitive visual environment. In addition, the outcome of this particular project will have a direct transformative impact on the sciences by making it possible for non-programming individuals (scientists) to create systems that semi-automatically detect objects and events in vast quantities of (a) audio and (b) visual data. By working together across two parallel, highly interconnected streams of ICT research, we will develop the foundations of statistical methodology, algorithms and systems for IML. As an exemplar, this project partners with world-leading scientists grappling with the challenge of analysing the enormous quantities of heterogeneous data being generated in Biodiversity Science. (EPSRC grant ref: EP/K015664/2).
A mathematical model for a physical experiment is a set of equations which relate inputs to outputs. Inputs represent physical variables which can be adjusted before the experiment takes place; outputs represent quantities which can be measured as a result of the experiment. The forward problem refers to using the mathematical model to predict the output of an experiment from a given input. The inverse problem refers to using the mathematical model to make inferences about the input(s) that would result in a given measured output. An example concerns a mathematical model for oil reservoir simulation. An important input to the model is the permeability of the subsurface rock. A natural output would be measurements of oil and/or water flow out of production wells. Since the subsurface is not directly observable, the problem of inferring its properties from measurements at production wells is particularly important. Accurate inference enables decisions to be made about the economic viability of drilling a well, and about well placement. In many inverse problems the measured data is subject to noise, and the mathematical model may be imperfect. It is then very important to quantify the uncertainty inherent in any inferences made as part of the solution to the inverse problem. The work brings together a team of mathematical scientists with expertise in applied mathematics, computer science and statistics, working alongside engineering applications, to develop new methods for solving inverse problems, including the quantification of uncertainty. The work will be driven by applications in the determination of subsurface properties, but will have application to a range of problems in the biological, physical and social sciences. (EPSRC grant ref: EP/K034154/1).
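The forward/inverse distinction above can be made concrete with a minimal Bayesian sketch. The forward model here is a hypothetical scalar function standing in for a real simulator (which would solve flow equations), and the posterior over a grid of candidate inputs captures both the inferred input and its uncertainty:

```python
import numpy as np

# Hypothetical forward model: maps an input (e.g. a permeability
# parameter) to a predicted measurement. A real reservoir simulator
# would solve a PDE here.
def forward(u):
    return np.exp(-u)

rng = np.random.default_rng(0)
u_true = 1.5
sigma = 0.05                                   # measurement noise level
y_obs = forward(u_true) + sigma * rng.normal() # noisy "measured output"

# Inverse problem via Bayes: posterior ∝ likelihood × prior,
# evaluated on a grid of candidate inputs.
u_grid = np.linspace(0.0, 3.0, 601)
du = u_grid[1] - u_grid[0]
prior = np.exp(-0.5 * (u_grid - 1.0) ** 2)                          # N(1, 1)
likelihood = np.exp(-0.5 * ((y_obs - forward(u_grid)) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * du                                   # normalise

# The spread of the posterior, not just its peak, quantifies the
# uncertainty in the inferred input.
u_map = u_grid[np.argmax(posterior)]
```

Because the data are noisy, the posterior has nonzero width: the inversion returns a distribution over plausible inputs rather than a single answer, which is precisely the uncertainty quantification the project concerns.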
Advancing the Geometric Framework for Computational Statistics: Theory, Methodology and Modern Day Applications (Geo)
The vision of this research is to formalise the geometric foundations of computational statistics and provide the tools and analytic results required to realise the ambition of developing the advanced statistical methodology that is essential to address emerging inference problems of major importance across the sciences and industry. As ever more demanding and ambitious applications of existing statistical inference methods are being considered, the capabilities of computational statistics tools are constantly being stretched, often beyond what is practically feasible. For example, gaining insight into the mechanisms of cellular function, elucidating ecological dynamics, improving neurological diagnostics, and uncovering the deep mysteries of the cosmos are only some of the ongoing scientific studies that rely heavily on statistical inference methods and are placing unparalleled demands on the current capabilities of available statistical methodology. This situation motivates continual innovation in the development of statistical methods for the quantification of uncertainty. The aim of this proposed research is to be more ambitious and go much further, establishing a novel paradigm that underpins the advancement of next-generation computational statistical methods by formalising and developing advanced Monte Carlo methods. The geometric foundations of computational statistics will be formalised within this proposed research in a way that reaches beyond traditional interfaces between the statistical and mathematical sciences. (EPSRC grant ref: EP/J016934/2).
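The advanced Monte Carlo methods referred to above build on baseline samplers such as random-walk Metropolis. The sketch below shows that baseline (with a standard normal as a stand-in target); geometric approaches refine it by adapting the proposal to the local geometry of the target distribution:

```python
import math
import random

def log_target(x):
    """Unnormalised log-density of a standard normal -- a stand-in target."""
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=1):
    """Random-walk Metropolis sampling.

    This is the baseline scheme; geometric (e.g. Riemannian-manifold)
    methods improve on the fixed isotropic proposal used here by
    exploiting the geometry of the target.
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < math.exp(min(0.0, log_target(proposal) - log_target(x))):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
```

With enough samples, the empirical mean and variance approach those of the standard normal target (0 and 1), which is how such samplers deliver the uncertainty quantification discussed above.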
There are many interesting open questions at the interface between applied mathematics, scientific computing and applied statistics. Mathematics is the language of science: we use it to describe the laws of motion that govern natural and technological systems. We use statistics to make sense of data. We develop and test computer algorithms that make these ideas concrete. By bringing these concepts together in a systematic way we can validate and sharpen our hypotheses about the underlying science, and make predictions about future behaviour. This general field of Uncertainty Quantification is a very active area of research, with many challenges; from intellectual questions about how to define and measure uncertainty to very practical issues concerning the need to perform intensive computational experiments as efficiently as possible. ICONIC brings together a team of high-profile researchers with the appropriate combination of skills in modelling, numerical analysis, statistics and high-performance computing. To give a concrete target for impact, the ICONIC project will focus initially on Uncertainty Quantification for mathematical models relating to crime, security and resilience in urban environments. Then, acknowledging that urban analytics is a very fast-moving field where new technologies and data sources emerge rapidly, and exploiting the flexibility built into an EPSRC programme grant, we will apply the new tools to related city topics concerning human mobility, transport and infrastructure. In this way, the project will enhance the UK's research capabilities in the fast-moving and globally significant Future Cities field. (EPSRC grant ref: EP/P020720/1).
There is a growing need for clinicians to be able to diagnose and prescribe therapy according to an individual's healthcare needs and potential responses. For this personalised medicine approach to be fulfilled, new technologies allowing rapid and accurate detection of biomarkers indicative of specific diseases are needed, and must be made available to clinicians to aid in their management of disease. This proposal aims to bring together physical scientists working on nanoparticles capable of detecting biomarkers at ultralow concentrations with information technologists capable of interpreting and presenting data from these complex assays, and with the clinical partners who are interested in how best to utilise this new information in improved healthcare practice. The basis of the proposal is to first create an in vitro diagnostic assay capable of detecting multiple biomarkers in a patient's sample, allowing the clinician to produce a risk profile of the patient. A second aspect of the research is to investigate in vivo imaging by SERS for specific biomarkers in a multiplexed manner. The disease we are targeting is cardiovascular disease, specifically atherosclerotic plaques. Risk of atherosclerosis is identified by increased levels of specific biomarkers; however, atherosclerosis is characterised by a localised rather than a systemic immune response. Therefore the measurement of biomarkers for in vitro prediction will be investigated in parallel with quantification of vascular inflammation and the development of a therapeutic approach to convey treatments directly to the affected vessel. The assays will be based on surface enhanced Raman scattering (SERS) and use metallic nanoparticles. The output will be in the form of a vibrational spectrum containing a high degree of information relating to the relative quantitation of each of the specific biomarkers being investigated.
Two types of in vitro assay will be investigated, with one carried forward for in vivo imaging. The in vivo assay will recognise the target and, through interpretation of the signal, allow a decision to be made on whether to induce a therapeutic action. The action we are proposing is a photothermal response from an assembly of the nanoparticles triggered by the specific biomarker being interrogated. This makes the response highly specific to that biomarker and will offer a new way to manage atherosclerosis. (EPSRC grant ref: EP/L014165/1).
End-user adaptation of software structure offers a potentially major benefit: giving people the means to customise a program to suit individual needs, interests and contexts. However, in their different ways, users, analysts and developers are all challenged by complexities and costs when handling structural variation. Developers and analysts struggle to understand and work with what is happening in deployments. Each user is often unsupported in understanding what changes to software structure and its use might work well in terms of either objective functionality or subjective experience. We are building tools and infrastructure that will let people in these roles handle such systems, understand relevant design niches and use contexts, and be more innovative. This work is woven together with advances in inference methods, formal models and conceptual frameworks. Collectively, this enables our large-scale real-world deployments of software applications. These deployments serve both as useful everyday applications for large numbers of users and as testbeds for our new approach: user experiences that both ground and drive our technological, methodological and conceptual advances. Treating a software class as a varied and changing population of software instances is the overarching concept that makes this work coherent and interconnected. Drawing metaphorically from biological concepts of species and evolution, the population concept is based on accepting, or even taking advantage of, the scale, variety and dynamism possible in contemporary software. (EPSRC grant ref: EP/J007617/1).