Imperial College London

Mr Patrick Henriksen

Faculty of Engineering, Department of Computing

Student demonstrator (casual)

Contact

 

patrick.henriksen18


Location

 

CDT space, Room 402, Sherfield Building, South Kensington Campus



Publications


6 results found

Leofante F, Henriksen P, Lomuscio A, 2023, Verification-friendly networks: the case for parametric ReLUs, International Joint Conference on Neural Networks (IJCNN 2023), Publisher: IEEE, Pages: 1-9

It has increasingly been recognised that verification can contribute to the validation and debugging of neural networks before deployment, particularly in safety-critical areas. While progress has been made in the area of neural network verification, present techniques still do not scale to the large ReLU-based neural networks used in many applications. In this paper we show that considerable progress can be made by employing Parametric ReLU activation functions in lieu of plain ReLU functions. We give training procedures that produce networks achieving an order-of-magnitude reduction in verification overheads and 30-100% fewer timeouts with VeriNet, a state-of-the-art (SoA) Symbolic Interval Propagation-based verification toolkit, while not compromising accuracy. Furthermore, we show that adversarial training combined with our approach improves certified robustness by up to 36% compared to adversarial training performed on baseline ReLU networks.
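As a rough illustration of the paper's central idea, the sketch below swaps plain ReLUs for PyTorch's learnable PReLU activations in a small feed-forward classifier. It is a minimal sketch under assumed layer sizes, not the networks or training procedures evaluated in the paper.

import torch
import torch.nn as nn

class PReLUNet(nn.Module):
    """Feed-forward classifier with Parametric ReLU activations."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.PReLU(num_parameters=hidden),  # one learnable negative slope per unit
            nn.Linear(hidden, hidden),
            nn.PReLU(num_parameters=hidden),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        # Flatten e.g. image inputs to vectors before the linear stack.
        return self.net(x.flatten(1))

nn.PReLU itself is standard PyTorch; the paper's contribution lies in training procedures that exploit such activations to make the resulting networks cheaper to verify.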

Conference paper

Henriksen P, Lomuscio A, 2023, Robust training of neural networks against bias field perturbations, AAAI Conference on Artificial Intelligence (AAAI23), Publisher: AAAI, Pages: 14865-14873, ISSN: 2374-3468

Robust training of neural networks has so far been developed in the context of white-noise perturbations to limit their susceptibility to adversarial attacks. In applications, however, neural networks need to be robust to a wider range of input perturbations, including changes in contrast, brightness, and beyond. Here we introduce the problem of training neural networks so that they are robust against a class of smooth intensity perturbations modelled by bias fields. We first develop an approach towards this goal based on a state-of-the-art (SoA) robust training method utilising Interval Bound Propagation (IBP). We analyse the resulting algorithm and observe that IBP often produces very loose bounds for bias field perturbations, which may be detrimental to training. We then propose an alternative approach based on Symbolic Interval Propagation (SIP), which usually results in significantly tighter bounds than IBP. We present ROBNET, a tool implementing both approaches for bias field robust training. In experiments, networks trained with the SIP-based approach achieved up to 31% higher certified robustness while also maintaining higher accuracy than networks trained with the IBP approach.
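For context, the sketch below shows the standard interval bound propagation step through a single affine layer, the kind of bound computation the abstract contrasts with symbolic intervals. It is a generic textbook version, not the ROBNET implementation, and the tensor shapes are assumptions.

import torch

def ibp_linear(l, u, W, b):
    """Propagate elementwise bounds l <= x <= u through y = W x + b."""
    W_pos = W.clamp(min=0)
    W_neg = W.clamp(max=0)
    # Worst case takes l where the weight is non-negative and u where it is negative.
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

Composing such steps layer by layer yields sound but often loose output bounds; symbolic interval propagation instead carries linear expressions over the inputs through the network, which is what tightens the bounds for structured perturbations such as bias fields.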

Conference paper

Henriksen P, Leofante F, Lomuscio A, 2022, Repairing misclassifications in neural networks using limited data, SAC '22, Pages: 1031-1038

We present a novel and computationally efficient method for repairing a feed-forward neural network with respect to a finite set of inputs that are misclassified. The method assumes no access to the training set. We present a formal characterisation for repairing the neural network and study its resulting properties in terms of soundness and minimality. We introduce a gradient-based algorithm that performs localised modifications to the network's weights such that misclassifications are repaired while marginally affecting network accuracy on correctly classified inputs. We introduce an implementation, I-REPAIR, and show it is able to repair neural networks while reducing accuracy drops by up to 90% when compared to other state-of-the-art approaches for repair.
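As a loose illustration of the localised, gradient-based idea described above, the sketch below updates only one layer's weights on the misclassified inputs, with a proximity penalty discouraging drift from the original weights. The layer choice, loss, and penalty coefficient are illustrative assumptions; this is not the I-REPAIR algorithm itself.

import torch
import torch.nn.functional as F

def repair_layer(model, layer, x_mis, y_true, steps=200, lr=1e-3, reg=1.0):
    """Nudge one layer to fix misclassified inputs x_mis with labels y_true."""
    original = layer.weight.detach().clone()
    opt = torch.optim.Adam([layer.weight], lr=lr)  # only this layer is updated
    for _ in range(steps):
        opt.zero_grad()
        # Repair objective plus a penalty keeping the weights near the originals,
        # a crude stand-in for the paper's minimality requirement.
        loss = F.cross_entropy(model(x_mis), y_true)
        loss = loss + reg * (layer.weight - original).pow(2).sum()
        loss.backward()
        opt.step()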

Conference paper

Henriksen P, Hammernik K, Rueckert D, Lomuscio A et al., 2021, Bias Field Robustness Verification of Large Neural Image Classifiers, British Machine Vision Conference (BMVC21)

Conference paper

Henriksen P, Lomuscio A, 2021, DEEPSPLIT: An Efficient Splitting Method for Neural Network Verification via Indirect Effect Analysis, International Joint Conference on Artificial Intelligence (IJCAI21), Pages: 2549-2555, ISSN: 1045-0823

We propose a novel, complete algorithm for the verification and analysis of feed-forward, ReLU-based neural networks. The algorithm, based on symbolic interval propagation, introduces a new method for determining split-nodes which evaluates the indirect effect that splitting has on the relaxations of successor nodes. We combine this with a new efficient linear-programming encoding of the splitting constraints to further improve the algorithm's performance. The resulting implementation, DEEPSPLIT, achieved speedups of around 1-2 orders of magnitude and 21-34% fewer timeouts when compared to the current SoA toolkits.
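To make the branching step concrete, the sketch below picks a split node among the unstable ReLUs of a network during branch-and-bound verification. The scoring function used here is the common triangle-relaxation-gap heuristic; it is a stand-in for the paper's indirect-effect analysis, and the data layout is an assumption.

def pick_split_node(bounds):
    """Choose the unstable ReLU (l < 0 < u) with the loosest relaxation."""
    best, best_score = None, float("-inf")
    for node, (l, u) in bounds.items():
        if l < 0 < u:                 # unstable: a convex relaxation is needed
            # Maximum vertical gap of the triangle relaxation, attained at x = 0.
            score = -l * u / (u - l)
            if score > best_score:
                best, best_score = node, score
    return best  # None if every ReLU is already stable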

Conference paper

Henriksen P, Lomuscio A, 2020, Efficient Neural Network Verification via Adaptive Refinement and Adversarial Search, European Conference on Artificial Intelligence (ECAI20)

Conference paper

