Imperial College London

Dr Luis Muñoz-González

Faculty of Engineering, Department of Computing

Research Associate (Machine Learning & Probabilistic App.)
 
 
 

Contact

 

l.munoz-gonzalez Website

 
 

Location

 

502 Huxley Building, South Kensington Campus



 

Publications

32 results found

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2021, Regularization can help mitigate poisoning attacks... with the right hyperparameters, Publisher: arXiv

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a \emph{minimax bilevel optimization problem}. This allows us to formulate optimal attacks, select hyperparameters and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using $L_2$ regularization, empirically show the limitations of previous strategies and evidence the benefits of using $L_2$ regularization to dampen the effect of poisoning attacks.

Working paper
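
The effect described above can be illustrated with a toy experiment. The sketch below is not the paper's minimax bilevel formulation (which jointly optimizes the poisoning points and the hyperparameters); it simply flips the labels of a fraction of the training set and compares L2-regularized logistic regression under weak and strong regularization. The dataset, the 15% poisoning rate and the values of C are illustrative assumptions.

```python
# Toy illustration: how L2 regularization dampens a simple label-flip poisoning attack.
# This is a minimal sketch, NOT the minimax bilevel attack from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Poison 15% of the training set by flipping labels (a crude stand-in for optimal poisoning).
n_poison = int(0.15 * len(y_tr))
idx = rng.choice(len(y_tr), size=n_poison, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

for C in [100.0, 0.1]:  # C is the inverse of the L2 regularization strength
    clean = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    dirty = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"C={C:6.1f}  clean acc={clean:.3f}  poisoned acc={dirty:.3f}")
```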

Co KT, Muñoz-González L, Kanthan L, Glocker B, Lupu EC et al., 2021, Universal Adversarial Robustness of Texture and Shape-Biased Models, IEEE International Conference on Image Processing (ICIP)

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.

Conference paper

Matachana A, Co KT, Munoz Gonzalez L, Martinez D, Lupu E et al., 2021, Robustness and transferability of universal attacks on compressed models, AAAI 2021 Workshop, Publisher: AAAI

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.

Conference paper

Co KT, Muñoz-González L, Kanthan L, Lupu EC et al., 2021, Real-time Detection of Practical Universal Adversarial Perturbations

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many different inputs; this leads to realistic and effective attacks that can be applied at scale. In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs by identifying suspicious neuron hyper-activations. Our results show the effectiveness of HyperNeuron on multiple tasks (image classification, object detection), against a wide variety of universal attacks, and in realistic scenarios, like perceptual ad-blocking and adversarial patches. HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real-time, which shows promise for the robust deployment of machine learning systems.

Journal article
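
As a rough illustration of detection based on neuron hyper-activations, the sketch below estimates per-neuron activation statistics on clean inputs and flags inputs whose activations deviate strongly. This is a much-simplified stand-in for HyperNeuron, not the paper's algorithm; the thresholds and the random stand-in activations are assumptions.

```python
# Minimal sketch of activation-statistics UAP detection (in the spirit of HyperNeuron).
import numpy as np

def fit_baseline(clean_activations: np.ndarray):
    """Estimate per-neuron mean/std from activations on clean inputs (shape: [n, d])."""
    mu = clean_activations.mean(axis=0)
    sigma = clean_activations.std(axis=0) + 1e-8
    return mu, sigma

def is_suspicious(act: np.ndarray, mu, sigma, z_thresh=6.0, frac_thresh=0.05):
    """Flag an input if an unusually large fraction of neurons hyper-activate."""
    z = np.abs(act - mu) / sigma
    return np.mean(z > z_thresh) > frac_thresh

# Usage sketch with random stand-in activations:
clean = np.random.default_rng(0).normal(size=(500, 256))
mu, sigma = fit_baseline(clean)
test_act = clean[0] + 10.0 * (np.arange(256) < 40)    # simulate hyper-activated neurons
print(is_suspicious(test_act, mu, sigma))             # -> True
```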

Matachana AG, Co KT, Muñoz-González L, Martinez D, Lupu EC et al., 2020, Robustness and transferability of universal attacks on compressed models, Publisher: arXiv

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.

Working paper

Grama M, Musat M, Muñoz-González L, Passerat-Palmbach J, Rueckert D, Alansary A et al., 2020, Robust aggregation for adaptive privacy preserving federated learning in healthcare, Publisher: arXiv

Federated learning (FL) has enabled training models collaboratively from multiple data-owning parties without sharing their data. Given the privacy regulations around patients' healthcare data, learning-based systems in healthcare can greatly benefit from privacy-preserving FL approaches. However, typical model aggregation methods in FL are sensitive to local model updates, which may lead to failure in learning a robust and accurate global model. In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data. Furthermore, we show that such methods can detect and discard faulty or malicious local clients during training. We run two sets of experiments using two real-world healthcare datasets for training medical diagnosis classification tasks. Each dataset is used to simulate the performance of three different robust FL aggregation strategies when facing different poisoning attacks. The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques. We observed in particular that using differential privacy (DP) did not significantly impact the final learning convergence of the different aggregation strategies.

Working paper
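
As a concrete example of the kind of Byzantine-robust aggregation rule evaluated in this work, the sketch below contrasts plain federated averaging with coordinate-wise median aggregation over flattened client updates; the client counts and update values are illustrative.

```python
# Sketch: coordinate-wise median vs. plain averaging of flattened client updates.
import numpy as np

def fedavg(updates: np.ndarray) -> np.ndarray:
    """Plain federated averaging over client updates (shape: [n_clients, n_params])."""
    return updates.mean(axis=0)

def coordinate_wise_median(updates: np.ndarray) -> np.ndarray:
    """Byzantine-robust aggregation: take the median of each parameter across clients."""
    return np.median(updates, axis=0)

# Usage: 9 honest clients around the true update, 3 malicious clients sending large values.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 5))
malicious = np.full((3, 5), 50.0)
updates = np.vstack([honest, malicious])
print("average:", fedavg(updates))                   # dragged towards 50
print("median :", coordinate_wise_median(updates))   # stays near 1.0
```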

Hau Z, Demetriou S, Muñoz-González L, Lupu EC et al., 2020, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting "ghost" objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof but harder to defend, and attacks on them can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong "invalidation" attacks targeting the defense system. Shadow-Catcher can achieve real-time detection, requiring only 0.003s-0.021s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.

Journal article

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Other

Co KT, Munoz Gonzalez L, de Maupeou S, Lupu E et al., 2019, Procedural noise adversarial examples for black-box attacks on deep neural networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM, Pages: 275-289

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.

Conference paper
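
The procedural-noise idea can be sketched with a few lines of NumPy: random sparse impulses are convolved with an oriented Gabor kernel and the result is scaled to an L-infinity budget. The kernel parameters, impulse density and epsilon below are illustrative choices, not the paper's settings, and the Bayesian-optimization step is omitted.

```python
# Sketch: a Gabor-noise perturbation as a cheap, few-parameter universal perturbation.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=23, sigma=4.0, freq=0.15, theta=np.pi / 4):
    """2-D Gabor kernel: Gaussian envelope modulated by an oriented cosine."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(shape=(224, 224), density=0.01, seed=0, **kernel_kwargs):
    """Sparse random impulses convolved with a Gabor kernel, normalized to [-1, 1]."""
    rng = np.random.default_rng(seed)
    impulses = rng.random(shape) < density
    signs = rng.choice([-1.0, 1.0], size=shape)
    noise = fftconvolve(impulses * signs, gabor_kernel(**kernel_kwargs), mode="same")
    return noise / (np.abs(noise).max() + 1e-12)

# Apply as an input-agnostic perturbation under an L-infinity budget epsilon.
epsilon = 8.0 / 255.0
image = np.random.default_rng(1).random((224, 224))   # stand-in for a real image
adversarial = np.clip(image + epsilon * gabor_noise(), 0.0, 1.0)
```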

Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Publisher: Springer, Pages: 47-79

Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality, improved quality of life for users, e.g., through personalization, optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms’ weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.

Book chapter

Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press

The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.

Conference paper
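
The core selection step can be illustrated with a minimal sketch: each candidate response is scored by its one-off action cost plus the expected service losses it leaves in place over a fixed horizon, and the cheapest total is chosen. The candidate actions, rates and costs below are made-up numbers, and the full model in the paper also accounts for service dependencies and recovery dynamics.

```python
# Sketch: cost-based countermeasure selection - pick the response that minimizes
# expected availability losses plus action cost over a fixed horizon.
HORIZON_HOURS = 24

# (action name, one-off action cost, expected service loss per hour while it is in effect)
candidates = [
    ("do nothing",            0.0, 120.0),   # attack keeps spreading
    ("isolate infected host", 40.0, 35.0),   # containment degrades a dependent service
    ("failover + recover",    300.0, 10.0),  # costly, but most functionality restored
]

def expected_total_cost(action_cost: float, loss_per_hour: float, horizon: float) -> float:
    return action_cost + loss_per_hour * horizon

best = min(candidates, key=lambda c: expected_total_cost(c[1], c[2], HORIZON_HOURS))
print("selected countermeasure:", best[0])
```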

Co KT, Munoz Gonzalez L, Lupu E, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Conference paper

Collinge G, Lupu E, Munoz Gonzalez L, 2019, Defending against Poisoning Attacks in Online Learning Settings, European Symposium on Artificial Neural Networks, Publisher: ESANN

Machine learning systems are vulnerable to data poisoning, a coordinated attack where a fraction of the training dataset is manipulated by an attacker to subvert learning. In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios. We also propose two defence mechanisms to mitigate the effect of online poisoning attacks, by analysing the impact of the data points on the classifier and by means of an adaptive combination of machine learning classifiers with different learning rates. Our experimental evaluation supports the usefulness of our proposed defences to mitigate the effect of poisoning attacks in online learning settings.

Conference paper
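
A simplified version of the second defence (an adaptive combination of learners with different learning rates) is sketched below using a Hedge-style weighting of two online logistic-regression models on a partially poisoned stream. The learning rates, poisoning rate and weighting temperature are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: adaptive (Hedge-style) combination of two online logistic-regression
# learners with different learning rates on a partially poisoned data stream.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogReg:
    def __init__(self, dim, lr):
        self.w, self.lr = np.zeros(dim), lr
    def predict_proba(self, x):
        return sigmoid(self.w @ x)
    def update(self, x, y):
        self.w -= self.lr * (self.predict_proba(x) - y) * x

rng = np.random.default_rng(0)
dim, eta = 10, 2.0                       # eta: Hedge weight-update temperature
learners = [OnlineLogReg(dim, 0.5), OnlineLogReg(dim, 0.01)]
weights = np.ones(len(learners)) / len(learners)
w_true = rng.normal(size=dim)

for t in range(2000):
    x = rng.normal(size=dim)
    y = float(sigmoid(w_true @ x) > 0.5)
    if rng.random() < 0.1:               # 10% of the stream is poisoned (label-flipped)
        y = 1.0 - y
    probs = np.array([m.predict_proba(x) for m in learners])
    losses = -(y * np.log(probs + 1e-12) + (1 - y) * np.log(1 - probs + 1e-12))
    weights *= np.exp(-eta * losses)     # down-weight learners hurt most by this point
    weights /= weights.sum()
    for m in learners:
        m.update(x, y)

print("final combination weights:", weights)
```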

Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, Lupu EC et al., 2019, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, Vol: 16, Pages: 231-244, ISSN: 1941-0018

Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling static and dynamic network risk assessment. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.

Journal article
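
The kind of static and dynamic analysis described above can be illustrated on a toy three-node attack graph using exact inference by enumeration; the conditional probabilities below are invented for illustration, and real attack graphs require the more efficient exact techniques studied in the paper.

```python
# Sketch: exact inference by enumeration on a toy 3-node Bayesian attack graph (A -> B -> C).
import itertools

p_A = 0.3                                # attacker exploits the entry-point vulnerability A
p_B_given_A = {True: 0.8, False: 0.0}    # B reachable only through A
p_C_given_B = {True: 0.6, False: 0.0}    # C (the target asset) reachable only through B

def joint(a, b, c):
    pa = p_A if a else 1 - p_A
    pb = p_B_given_A[a] if b else 1 - p_B_given_A[a]
    pc = p_C_given_B[b] if c else 1 - p_C_given_B[b]
    return pa * pb * pc

# Static analysis: prior probability that the target C is compromised.
p_C = sum(joint(a, b, True) for a, b in itertools.product([True, False], repeat=2))

# Dynamic analysis: posterior of C given evidence that B was compromised (e.g. a SIEM alert).
num = sum(joint(a, True, True) for a in [True, False])
den = sum(joint(a, True, c) for a, c in itertools.product([True, False], repeat=2))
print(f"P(C) = {p_C:.3f},  P(C | B compromised) = {num / den:.3f}")
```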

Paudice A, Muñoz-González L, Lupu EC, 2019, Label sanitization against label flipping poisoning attacks, Nemesis'18. Workshop in Recent Advances in Adversarial Machine Learning, Publisher: Springer Verlag, Pages: 5-15, ISSN: 0302-9743

Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data in the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective in significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.

Conference paper
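
A minimal sketch of the relabelling idea is given below: points whose label is contradicted by a strong majority of their nearest neighbours are flipped to the neighbourhood majority. The choice of k, the agreement threshold and the synthetic data are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch: kNN-based label sanitization - relabel training points whose label disagrees
# with a strong majority of their neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, agreement=0.8):
    """Return a relabelled copy of y: points contradicted by >= `agreement` of their
    k nearest neighbours are flipped to the neighbourhood majority label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    neigh_labels = y[idx[:, 1:]]
    majority_frac = neigh_labels.mean(axis=1)  # fraction of neighbours labelled 1
    y_clean = y.copy()
    y_clean[(y == 0) & (majority_frac >= agreement)] = 1
    y_clean[(y == 1) & (majority_frac <= 1 - agreement)] = 0
    return y_clean

# Usage with synthetic data and 10% flipped labels:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
flip = rng.choice(400, size=40, replace=False)
y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
y_sanitized = sanitize_labels(X, y_poisoned)
print("flipped labels corrected:", np.sum(y_sanitized[flip] == y[flip]))
```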

Muñoz-González L, Co KT, Lupu EC, 2019, Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median.

Journal article
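
A much-simplified sketch of a robust aggregation round in this spirit is shown below: updates far from the coordinate-wise median are discarded and clients that repeatedly misbehave are blocked. This is not the paper's Hidden Markov Model formulation; the distance threshold and strike limit are illustrative.

```python
# Sketch: simplified robust aggregation - discard client updates far from the
# coordinate-wise median and block repeat offenders.
import numpy as np

def robust_aggregate(updates, strikes, threshold=3.0, max_strikes=3):
    """updates: [n_clients, n_params]; strikes: per-client misbehaviour counters."""
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    scale = np.median(dists) + 1e-12
    good = dists <= threshold * scale
    strikes[~good] += 1
    active = strikes < max_strikes            # permanently block repeat offenders
    keep = good & active
    return updates[keep].mean(axis=0), strikes

# Usage over one round with 8 honest clients and 2 attackers:
rng = np.random.default_rng(0)
updates = np.vstack([rng.normal(1.0, 0.1, (8, 4)), np.full((2, 4), 30.0)])
strikes = np.zeros(10, dtype=int)
agg, strikes = robust_aggregate(updates, strikes)
print("aggregated update:", agg, " strikes:", strikes)
```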

Co KT, Muñoz-González L, Lupu EC, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Journal article

Munoz Gonzalez L, Lupu E, 2019, The Security of Machine Learning Systems, AI in Cybersecurity, Editors: Sikos

Book chapter

Muñoz-González L, Pfitzner B, Russo M, Carnerero-Cano J, Lupu EC et al., 2019, Poisoning Attacks with Generative Adversarial Nets

Other

Kott A, Blakely B, Henshel D, Wehner G, Rowell J, Evans N, Muñoz-González L, Leslie N, French DW, Woodard D, Krutilla K, Joyce A, Linkov I, Mas-Machuca C, Sztipanovits J, Harney H, Kergl D, Nejib P, Yakabovicz E, Noel S, Dudman T, Trepagnier P, Badesha S, Møller A et al., 2018, Approaches to Enhancing Cyber Resilience: Report of the North Atlantic Treaty Organization (NATO) Workshop IST-153, Approaches to Enhancing Cyber Resilience: Report of the North Atlantic Treaty Organization (NATO) Workshop IST-153, AD1050894

This report summarizes the discussions and findings of the 2017 North Atlantic Treaty Organization (NATO) Workshop, IST-153, on Cyber Resilience, held in Munich, Germany, on 23-25 October 2017, at the University of Bundeswehr. Despite continual progress in managing risks in the cyber domain, anticipation and prevention of all possible attacks and malfunctions are not feasible for the current or future systems comprising the cyber infrastructure. Therefore, interest in cyber resilience (as opposed to merely risk-based approaches) is increasing rapidly, in literature and in practice. Unlike concepts of risk or robustness, which are often and incorrectly conflated with resilience, resiliency refers to the system's ability to recover or regenerate its performance to a sufficient level after an unexpected impact produces a degradation of its performance. The exact relation among resilience, risk, and robustness has not been well articulated technically. The presentations and discussions at the workshop yielded this report. It focuses on the following topics that the participants of the workshop saw as particularly important: fundamental properties of cyber resilience; approaches to measuring and modeling cyber resilience; mission modeling for cyber resilience; systems engineering for cyber resilience; and dynamic defense as a path toward cyber resilience.

Report

Munoz Gonzalez L, Lupu E, 2018, The secret of machine learning, ITNOW, Vol: 60, Pages: 38-39, ISSN: 1746-5702

Luis Muñoz-González and Emil C. Lupu, from Imperial College London, explore the vulnerabilities of machine learning algorithms.

Journal article

Illiano V, Lupu E, Muñoz-González L, Paudice AP et al., 2018, Determining Resilience Gains from Anomaly Detection for Event Integrity in Wireless Sensor Networks, ACM Transactions on Sensor Networks, Vol: 14, ISSN: 1550-4859

Measurements collected in a wireless sensor network (WSN) can be maliciously compromised through several attacks, but anomaly detection algorithms may provide resilience by detecting inconsistencies in the data. Anomaly detection can identify severe threats to WSN applications, provided that there is a sufficient amount of genuine information. This article presents a novel method to calculate an assurance measure for the network by estimating the maximum number of malicious measurements that can be tolerated. In previous work, the resilience of anomaly detection to malicious measurements has been tested only against arbitrary attacks, which are not necessarily sophisticated. The novel method presented here is based on an optimization algorithm, which maximizes the attack’s chance of staying undetected while causing damage to the application, thus seeking the worst-case scenario for the anomaly detection algorithm. The algorithm is tested on a wildfire monitoring WSN to estimate the benefits of anomaly detection on the system’s resilience. The algorithm also returns the measurements that the attacker needs to synthesize, which are studied to highlight the weak spots of anomaly detection. Finally, this article presents a novel methodology that takes as input the degree of resilience required and automatically designs the deployment that satisfies such a requirement.

Journal article

Paudice A, Muñoz-González L, Gyorgy A, Lupu EC et al., 2018, Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Machine learning has become an important component of many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples in the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect of these optimal poisoning attacks based on outlier detection. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.

Working paper
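
The pre-filtering defence can be sketched as a per-class distance test calibrated on a small trusted subset: points lying far from their class centroid are discarded before training. The centroid-based detector and the percentile threshold below are simplifying assumptions; the paper's outlier detectors are more general.

```python
# Sketch: pre-filtering suspected poisoning points by per-class distance to a trusted subset.
import numpy as np

def fit_filter(X_trusted, y_trusted, percentile=95):
    """Per-class centroid and distance threshold estimated on trusted data."""
    params = {}
    for c in np.unique(y_trusted):
        Xc = X_trusted[y_trusted == c]
        centroid = Xc.mean(axis=0)
        dists = np.linalg.norm(Xc - centroid, axis=1)
        params[c] = (centroid, np.percentile(dists, percentile))
    return params

def filter_training_set(X, y, params):
    """Keep only points lying within the per-class distance threshold."""
    keep = np.array([np.linalg.norm(x - params[c][0]) <= params[c][1]
                     for x, c in zip(X, y)])
    return X[keep], y[keep]

# Usage: the trusted subset here doubles as the clean training data for brevity.
rng = np.random.default_rng(0)
X_tr = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_tr = np.array([0] * 50 + [1] * 50)
params = fit_filter(X_tr, y_tr)
X_poisoned = np.vstack([X_tr, np.full((5, 2), 20.0)])
y_poisoned = np.append(y_tr, [0] * 5)
X_clean, y_clean = filter_training_set(X_poisoned, y_poisoned, params)
print(len(X_poisoned), "->", len(X_clean))  # injected points (and a few borderline ones) removed
```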

Muñoz-González L, Biggio B, Demontis A, Paudice A, Wongrassamee V, Lupu EC, Roli F et al., 2017, Towards poisoning of deep learning algorithms with back-gradient optimization, Pages: 27-38

A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples). In this work, we first extend the definition of poisoning attacks to multiclass problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.

Conference paper

Munoz Gonzalez L, Lupu E, 2017, Bayesian Attack Graphs for Security Risk Assessment, IST-153 NATO Workshop on Cyber Resilience

Conference paper

Muñoz-González L, Sgandurra D, Paudice A, Lupu EC et al., 2017, Efficient Attack Graph Analysis through Approximate Inference, ACM Transactions on Privacy and Security, Vol: 20, ISSN: 2471-2566

Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system's components given their vulnerabilities and interconnections, and accounts for multi-step attacks spreading through the system. Whilst static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, e.g. from SIEM software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this paper we show how Loopy Belief Propagation - an approximate inference technique - can be applied to attack graphs, and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.

Journal article

Illiano V, Muñoz-González L, Lupu E, 2016, Don't fool me!: Detection, Characterisation and Diagnosis of Spoofed and Masked Events in Wireless Sensor Networks, IEEE Transactions on Dependable and Secure Computing, Vol: 14, Pages: 279-293, ISSN: 1545-5971

Wireless Sensor Networks carry a high risk of being compromised, as their deployments are often unattended, physically accessible, and the wireless medium is difficult to secure. Malicious data injections take place when the sensed measurements are maliciously altered to trigger wrong and potentially dangerous responses. When many sensors are compromised, they can collude with each other to alter the measurements, making such changes difficult to detect. Distinguishing between genuine and malicious measurements is even more difficult when significant variations may be introduced because of events, especially if several events occur simultaneously. We propose a novel methodology based on the wavelet transform to detect malicious data injections, to characterise the responsible sensors, and to distinguish malicious interference from faulty behaviours. The results, both with simulated and real measurements, show that our approach is able to counteract sophisticated attacks, achieving a significant improvement over state-of-the-art approaches.

Journal article
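
A minimal wavelet-flavoured illustration is given below: each sensor's series is scored by the energy of its level-1 Haar detail coefficients, since abrupt injected changes concentrate energy in the detail band. This Haar-only scoring is a stand-in, not the paper's methodology.

```python
# Sketch: scoring sensors by level-1 Haar wavelet detail energy to spot injected spikes.
import numpy as np

def haar_detail_energy(signal: np.ndarray) -> float:
    """Energy of level-1 Haar detail coefficients of a 1-D signal."""
    pairs = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return float(np.sum(detail**2))

# Usage: 9 genuine sensors measuring a smooth trend, 1 sensor with injected spikes.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
sensors = [np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=t.size) for _ in range(10)]
sensors[7][::16] += 2.0                      # malicious injections on sensor 7
scores = np.array([haar_detail_energy(s) for s in sensors])
print("most suspicious sensor:", int(np.argmax(scores)))   # -> 7
```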

Muñoz-González L, Lázaro-Gredilla M, Figueiras-Vidal AR, 2015, Laplace Approximation for Divisive Gaussian Processes for Nonstationary Regression, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 38, Pages: 618-624, ISSN: 2160-9292

The standard Gaussian Process regression (GP) is usually formulated under stationary hypotheses: the noise power is considered constant throughout the input space, and the covariance of the prior distribution is typically modeled as depending only on the difference between input samples. These assumptions can be too restrictive and unrealistic for many real-world problems. Although nonstationarity can be achieved using specific covariance functions, they require prior knowledge of the kind of nonstationarity, which is not available for most applications. In this paper we propose to use the Laplace approximation to make inference in a divisive GP model to perform nonstationary regression, including heteroscedastic noise cases. The log-concavity of the likelihood ensures a unimodal posterior and guarantees that the Laplace approximation converges to a unique maximum. The characteristics of the likelihood also allow us to obtain accurate posterior approximations when compared to Expectation Propagation (EP) approximations and the asymptotically exact posterior provided by a Markov Chain Monte Carlo implementation with Elliptical Slice Sampling (ESS), but at a reduced computational load with respect to both EP and ESS.

Journal article
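
For context, the generic Laplace approximation for GP models takes the standard form below (the paper specializes it to its divisive, log-concave likelihood, whose exact expression is not reproduced here):

```latex
% Generic GP Laplace approximation (standard form; the paper applies it to a
% divisive, log-concave likelihood).
\begin{align}
  \hat{\mathbf{f}} &= \arg\max_{\mathbf{f}} \;
      \log p(\mathbf{y}\mid\mathbf{f}) - \tfrac{1}{2}\,\mathbf{f}^{\top}K^{-1}\mathbf{f}, \\
  p(\mathbf{f}\mid\mathbf{y}) &\approx
      \mathcal{N}\!\left(\mathbf{f}\mid\hat{\mathbf{f}},\,(K^{-1}+W)^{-1}\right),
  \qquad
  W = -\nabla\nabla_{\mathbf{f}}\,\log p(\mathbf{y}\mid\mathbf{f})\big|_{\mathbf{f}=\hat{\mathbf{f}}}.
\end{align}
```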

Munoz-Gonzalez L, Lazaro-Gredilla M, Figueiras-Vidal AR, 2014, Divisive Gaussian Processes for Nonstationary Regression, IEEE Transactions on Neural Networks and Learning Systems, Vol: 25, Pages: 1991-2003, ISSN: 2162-237X

Journal article

Muñoz Gonzalez L, Lázaro-Gredilla M, Figueiras-Vidal AR, 2014, Laplace Approximation with Gaussian Processes for Volatility Forecasting, International Workshop on Cognitive Information Processing, Publisher: IEEE, Pages: 1-6

Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models are ad hoc methods widely used to predict volatility in financial time series. On the other hand, Gaussian Processes (GPs) offer very good performance for regression and prediction tasks, giving estimates of the average and dispersion of the predicted values, and showing resilience to overfitting. In this paper, a GP model is proposed to predict volatility using a reparametrized form of the Ornstein-Uhlenbeck covariance function, which reduces the underlying latent function to an AR(1) process, suitable for the Brownian motion typical of financial time series. The tridiagonal character of the inverse of this covariance matrix and the Laplace method proposed to perform inference allow accurate predictions at a reduced cost compared to standard GP approaches. The experimental results confirm the usefulness of the proposed method to predict volatility, outperforming GARCH models with more accurate forecasts and a lower computational burden.

Conference paper
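
The covariance reparametrization mentioned above can be illustrated with the standard Ornstein-Uhlenbeck (exponential) kernel, whose draws on a regular grid behave as an AR(1) process and whose precision matrix is tridiagonal; the hyperparameter values below are illustrative.

```python
# Sketch: the Ornstein-Uhlenbeck (exponential) covariance and its AR(1) character
# on a regular time grid.
import numpy as np

def ou_covariance(t1, t2, variance=1.0, lengthscale=10.0):
    """k(t, t') = variance * exp(-|t - t'| / lengthscale)."""
    return variance * np.exp(-np.abs(t1[:, None] - t2[None, :]) / lengthscale)

t = np.arange(100, dtype=float)          # regularly spaced time stamps
K = ou_covariance(t, t)

# On a regular grid the OU prior is a first-order Markov (AR(1)) process:
# f_t = phi * f_{t-1} + noise, with phi = exp(-delta / lengthscale).
phi = np.exp(-1.0 / 10.0)
print("lag-1 autocorrelation from K:", K[0, 1] / K[0, 0], " AR(1) phi:", phi)

# The Markov structure makes the precision matrix K^{-1} tridiagonal,
# which is what enables the cheap inference exploited in the paper.
precision = np.linalg.inv(K)
off_band = np.abs(np.triu(precision, k=2)).max()
print("largest element beyond the first off-diagonal:", off_band)  # ~0 (numerically)
```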

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
