Perez-Nieves N, Leung VCH, Dragotti PL, et al., 2021, Neural heterogeneity promotes robust learning, Nature Communications, ISSN: 2041-1723
The brain has a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and it has been relatively little explored in models, which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distribution of neuronal parameters in the trained networks closely matches those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, and may instead serve an active and important role in allowing animals to learn in changing environments. Summary: Neural heterogeneity is metabolically efficient for learning, and the optimal parameter distribution matches experimental data.
Wang X, Jiang L, Li L, et al., 2021, Joint learning of 3D lesion segmentation and classification for explainable COVID-19 diagnosis, IEEE Trans Med Imaging, Vol: PP
Given the outbreak of the COVID-19 pandemic and the shortage of medical resources, numerous deep learning models have been proposed for automatic COVID-19 diagnosis based on 3D computed tomography (CT) scans. However, the existing models process 3D lesion segmentation and disease classification independently, ignoring the inherent correlation between these two tasks. In this paper, we propose a joint deep learning model of 3D lesion segmentation and classification for diagnosing COVID-19, called DeepSC-COVID, as the first attempt in this direction. Specifically, we establish a large-scale CT database containing 1,805 3D CT scans with fine-grained lesion annotations, and reveal 4 findings about lesion differences between COVID-19 and community-acquired pneumonia (CAP). Inspired by our findings, DeepSC-COVID is designed with 3 subnets: a cross-task feature subnet for feature extraction, a 3D lesion subnet for lesion segmentation, and a classification subnet for disease diagnosis. In addition, a task-aware loss is proposed for learning the task interaction across the 3D lesion and classification subnets. Different from all existing models for COVID-19 diagnosis, our model is interpretable with fine-grained 3D lesion distribution. Finally, extensive experimental results show that the joint learning framework in our model significantly improves both the efficiency and efficacy of 3D lesion segmentation and disease classification.
Verinaz-Jadan H, Song P, Howe CL, et al., 2021, Deep learning for light field microscopy using physics-based models, Pages: 1091-1094, ISSN: 1945-7928
Light Field Microscopy (LFM) is an imaging technique that captures 3D spatial information in a single 2D image. LFM is attractive because of its relatively simple implementation and fast acquisition rate. However, classic 3D reconstruction typically suffers from high computational cost, low lateral resolution, and reconstruction artifacts. In this work, we propose a new physics-based learning approach to improve the performance of the reconstruction under realistic conditions, namely a lack of training data, background noise, and high data dimensionality. First, we propose a novel description of the system using a linear convolutional neural network. This description is complemented by a method that compacts the number of views of the acquired light field. Then, this model is used to solve the inverse problem under two scenarios. If labelled data is available, we train an end-to-end network that uses the Learned Iterative Shrinkage and Thresholding Algorithm (LISTA). If no labelled data is available, we propose an unsupervised technique that trains LISTA on unlabelled data alone, by making use of Wasserstein Generative Adversarial Networks (WGANs). We experimentally show that our approach performs better than classic strategies in terms of artifact reduction and image quality.
Yu Q, Huang J-J, Zhu J, et al., 2021, Deep phase retrieval: Analyzing over-parameterization in phase retrieval, SIGNAL PROCESSING, Vol: 180, ISSN: 0165-1684
Hilton M, Alexandru R, Dragotti PL, 2021, Time Encoding Using the Hyperbolic Secant Kernel, 28th European Signal Processing Conference (EUSIPCO), Publisher: IEEE, Pages: 2304-2308, ISSN: 2076-1465
Huang J-J, Dragotti PL, 2020, Learning deep analysis dictionaries for image super-resolution, IEEE Transactions on Signal Processing, Vol: 68, Pages: 6633-6648, ISSN: 1053-587X
Inspired by the recent success of deep neural networks and the recent efforts to develop multi-layer dictionary models, we propose a Deep Analysis dictionary Model (DeepAM) which is optimized to address a specific regression task known as single image super-resolution. Contrary to other multi-layer dictionary models, our architecture contains L layers of analysis dictionaries and soft-thresholding operators to gradually extract high-level features, and a layer of synthesis dictionary which is designed to optimize the regression task at hand. In our approach, each analysis dictionary is partitioned into two sub-dictionaries: an Information Preserving Analysis Dictionary (IPAD) and a Clustering Analysis Dictionary (CAD). The IPAD together with the corresponding soft-thresholds is designed to pass the key information from the previous layer to the next layer, while the CAD together with the corresponding soft-thresholding operator is designed to produce a sparse feature representation of its input data that facilitates discrimination of key features. DeepAM combines both supervised and unsupervised setups. Simulation results show that the proposed deep analysis dictionary model achieves better performance than a deep neural network that has the same structure and is optimized using back-propagation when training datasets are small. On noisy image super-resolution, DeepAM can be well adapted to unseen testing noise levels by rescaling the IPAD and CAD thresholds of the first layer.
Howe CL, Quicke P, Song P, et al., 2020, Comparing volumetric reconstruction algorithms for light field imaging of high signal-to-noise ratio neuronal calcium transients
Light field microscopy (LFM) enables fast, light-efficient, volumetric imaging of neuronal activity with functional fluorescence indicators. Here we apply LFM to single-cell and bulk-labeled imaging of the red calcium dye CaSiR-1 in acute mouse brain slices. We compare two common light field volume reconstruction algorithms: synthetic refocusing and Richardson-Lucy 3D deconvolution. We compare temporal signal-to-noise ratio (SNR) and spatial signal confinement between the two LFM algorithms and conventional widefield image series. Both algorithms can resolve calcium signals from neuronal processes in three dimensions. Increasing the deconvolution iteration number improves spatial signal confinement but reduces SNR compared to synthetic refocusing.
Parsi M, Crossley P, Dragotti PL, et al., 2020, Wavelet based fault location on power transmission lines using real-world travelling wave data, ELECTRIC POWER SYSTEMS RESEARCH, Vol: 186, ISSN: 0378-7796
Quicke P, Howe CL, Song P, et al., 2020, Subcellular resolution three-dimensional light-field imaging with genetically encoded voltage indicators, Neurophotonics, Vol: 7, ISSN: 2329-4248
Significance: Light-field microscopy (LFM) enables high signal-to-noise ratio (SNR) and light efficient volume imaging at fast frame rates. Voltage imaging with genetically encoded voltage indicators (GEVIs) stands to particularly benefit from LFM's volumetric imaging capability due to high required sampling rates and limited probe brightness and functional sensitivity. Aim: We demonstrate subcellular resolution GEVI light-field imaging in acute mouse brain slices resolving dendritic voltage signals in three spatial dimensions. Approach: We imaged action potential-induced fluorescence transients in mouse brain slices sparsely expressing the GEVI VSFP-Butterfly 1.2 in wide-field microscopy (WFM) and LFM modes. We compared functional signal SNR and localization between different LFM reconstruction approaches and between LFM and WFM. Results: LFM enabled three-dimensional (3-D) localization of action potential-induced fluorescence transients in neuronal somata and dendrites. Nonregularized deconvolution decreased SNR with increased iteration number compared to synthetic refocusing but increased axial and lateral signal localization. SNR was unaffected for LFM compared to WFM. Conclusions: LFM enables 3-D localization of fluorescence transients, therefore eliminating the need for structures to lie in a single focal plane. These results demonstrate LFM's potential for studying dendritic integration and action potential propagation in three spatial dimensions.
Erdemir E, Dragotti PL, Gunduz D, 2020, Privacy-aware time-series data sharing with deep reinforcement learning, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, Vol: 16, Pages: 389-401, ISSN: 1556-6013
Internet of things (IoT) devices are becoming increasingly popular thanks to the many new services and applications they offer. However, in addition to their many benefits, they raise privacy concerns, since they share fine-grained time-series user data with untrusted third parties. In this work, we study the privacy-utility trade-off (PUT) in time-series data sharing. Existing approaches to the PUT mainly focus on a single data point; however, temporal correlations in time-series data introduce new challenges. Methods that preserve the privacy for the current time may leak a significant amount of information at the trace level, as the adversary can exploit temporal correlations in a trace. We consider sharing a distorted version of a user's true data sequence with an untrusted third party. We measure the privacy leakage by the mutual information between the user's true data sequence and the shared version. We consider both the instantaneous and the average distortion between the two sequences, under a given distortion measure, as the utility loss metric. To tackle the history-dependent mutual information minimization, we reformulate the problem as a Markov decision process (MDP) and solve it using asynchronous actor-critic deep reinforcement learning (RL). We evaluate the performance of the proposed solution on location trace privacy, on both synthetic and GeoLife GPS trajectory datasets. For the latter, we show the validity of our solution by testing the privacy of the released location trajectory against an adversary network.
Song P, Verinaz Jadan H, Howe C, et al., 2020, 3D localization for light-field microscopy via convolutional sparse coding on epipolar images, IEEE transactions on computational imaging, Vol: 6, Pages: 1017-1032, ISSN: 2333-9403
Light-field microscopy (LFM) is a type of all-optical imaging system that is able to capture 4D geometric information of light rays and can reconstruct a 3D model from a single snapshot. In this paper, we propose a new 3D localization approach to effectively detect 3D positions of neuronal cells from a single light-field image with high accuracy and outstanding robustness to light scattering. This is achieved by constructing a depth-aware dictionary and by combining it with convolutional sparse coding. Specifically, our approach includes 3 key parts: light-field calibration, depth-aware dictionary construction, and localization based on convolutional sparse coding (CSC). In the first part, an observed raw light-field image is calibrated and then decoded into a two-plane parameterized 4D format which leads to the epipolar plane image (EPI). The second part involves simulating a set of light-fields using a wave-optics forward model for a ball-shaped volume that is located at different depths. Then, a depth-aware dictionary is constructed where each element is a synthetic EPI associated with a specific depth. Finally, by taking full advantage of the sparsity prior and the shift-invariance property of the EPI, 3D localization is achieved via convolutional sparse coding on an observed EPI with respect to the depth-aware EPI dictionary. We evaluate our approach on both a non-scattering specimen (fluorescent beads suspended in agarose gel) and scattering media (brain tissues of genetically encoded mice). Extensive experiments demonstrate that our approach can reliably detect the 3D positions of granular targets with small Root Mean Square Error (RMSE) and high robustness to optical aberration and light scattering in mammalian brain tissues.
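The shift-invariant matching that convolutional sparse coding exploits can be illustrated in 1D: each dictionary atom stands in for a synthetic EPI at one candidate depth, and the estimated depth is the atom whose best shift correlates most strongly with the observation. A toy numpy sketch, not the paper's implementation: the box-shaped templates and depth indices below are invented purely for illustration.

```python
import numpy as np

def best_depth(observation, dictionary):
    """Pick the dictionary atom (one per candidate depth) whose best
    circular shift correlates most strongly with the observation --
    a toy, 1D stand-in for convolutional sparse coding on an EPI."""
    scores = []
    for atom in dictionary:
        # Cross-correlate over all circular shifts via the FFT.
        corr = np.fft.ifft(np.fft.fft(observation) *
                           np.conj(np.fft.fft(atom))).real
        scores.append(corr.max())
    return int(np.argmax(scores))

n = 64
widths = [2, 5, 9]  # hypothetical depth-dependent template widths
dictionary = []
for w in widths:
    atom = np.zeros(n)
    atom[:w] = 1.0 / np.sqrt(w)  # unit-norm box template
    dictionary.append(atom)

# Observation: the second template, shifted by an unknown amount.
obs = np.roll(dictionary[1], 17)
est = best_depth(obs, dictionary)
```

Because the matching is done over all shifts, the unknown lateral position of the target does not affect which depth wins, which is the point of using a convolutional (shift-invariant) formulation.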
Verinaz-Jadan H, Song P, Howe CL, et al., 2020, Volume reconstruction for light field microscopy, ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE, Pages: 1459-1463
Light Field Microscopy (LFM) is a 3D imaging technique that captures volumetric information in a single snapshot. It is appealing in microscopy because of its simple implementation and because it is much faster than methods involving scanning. However, volume reconstruction for LFM suffers from low lateral resolution, high computational cost, and reconstruction artifacts near the native object plane. In this work, we make two contributions. First, we propose a simplification of the forward model based on a novel discretization approach that allows us to accelerate the computation without drastically increasing memory consumption. Second, we experimentally show that by including regularization priors and an appropriate initialization strategy, it is possible to remove the artifacts near the native object plane; the algorithm we use for this is ADMM. Finally, the combination of the two techniques leads to a method that outperforms classic volume reconstruction approaches (variants of Richardson-Lucy) in terms of average computational time and image quality (PSNR).
Leung VCH, Huang J-J, Dragotti PL, 2020, Reconstruction of FRI signals using deep neural network approaches, 2020 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020), Publisher: IEEE
Finite Rate of Innovation (FRI) theory considers the sampling and reconstruction of classes of non-bandlimited continuous signals that have a small number of free parameters, such as a stream of Diracs. The task of reconstructing FRI signals from discrete samples is often transformed into a spectral estimation problem and solved using Prony's method or the matrix pencil method, both of which involve estimating signal subspaces. They achieve an optimal performance given by the Cramér-Rao bound, yet break down at a certain peak signal-to-noise ratio (PSNR), probably due to the so-called subspace swap event. In this paper, we aim to alleviate the subspace swap problem and investigate alternative approaches, including directly estimating FRI parameters using deep neural networks and utilising deep neural networks as denoisers to reduce the noise in the samples. Simulations show significant improvements in the breakdown PSNR over existing FRI methods, although the existing methods still outperform learning-based approaches in the medium to high PSNR regimes.
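The annihilating-filter step at the heart of Prony's method can be sketched in a few lines. This is an illustrative numpy example under the usual noiseless assumption that the available moments have the form s[m] = Σ_k a_k u_k^m with u_k = exp(-j2πt_k/τ); the function and variable names are made up for the sketch.

```python
import numpy as np

def annihilating_filter_recovery(s, K):
    """Recover the K signal 'poles' u_k from moments
    s[m] = sum_k a_k * u_k**m via Prony's annihilating filter."""
    M = len(s)
    # Build the Toeplitz system S h = 0, with filter h of length K+1:
    # row i enforces sum_j h[j] * s[i + K - j] = 0.
    S = np.array([[s[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    # The annihilating filter is the right singular vector associated
    # with the smallest singular value (robust to mild noise).
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()
    # The roots of the filter polynomial are the poles u_k.
    return np.roots(h)

# Synthetic example: two Diracs at t = 0.2 and 0.7 on a unit interval.
tau, K = 1.0, 2
t_true = np.array([0.2, 0.7])
a_true = np.array([1.0, 0.5])
u_true = np.exp(-2j * np.pi * t_true / tau)
m = np.arange(2 * K)
s = (a_true[None, :] * u_true[None, :] ** m[:, None]).sum(axis=1)

u_est = annihilating_filter_recovery(s, K)
t_est = np.sort(np.mod(-np.angle(u_est) * tau / (2 * np.pi), tau))
```

In the noiseless case the locations are recovered exactly from just 2K moments; the breakdown discussed in the abstract occurs when noise makes the estimated signal subspace swap with the noise subspace.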
Deng X, Dragotti PL, 2020, Deep convolutional neural network for multi-modal image restoration and fusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, Pages: 1-17, ISSN: 0162-8828
In this paper, we propose a novel deep convolutional neural network to solve the general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems. Different from other methods based on deep learning, our network architecture is designed by drawing inspiration from a newly proposed multi-modal convolutional sparse coding (MCSC) model. The key feature of the proposed network is that it can automatically split the common information shared among different modalities from the unique information that belongs to each single modality, and it is therefore denoted CU-Net, i.e., the Common and Unique information splitting network. Specifically, the CU-Net is composed of three modules, i.e., the unique feature extraction module (UFEM), the common feature preservation module (CFPM), and the image reconstruction module (IRM). The architecture of each module is derived from the corresponding part in the MCSC model, which consists of several learned convolutional sparse coding (LCSC) blocks. Extensive numerical results verify the effectiveness of our method on a variety of MIR and MIF tasks, including RGB guided depth image super-resolution, flash guided non-flash image denoising, multi-focus and multi-exposure image fusion.
Deng X, Yang R, Xu M, et al., 2020, Wavelet domain style transfer for an effective perception-distortion tradeoff in single image super-resolution, IEEE/CVF International Conference on Computer Vision (ICCV), Publisher: IEEE COMPUTER SOC, Pages: 3076-3085, ISSN: 1550-5499
In single image super-resolution (SISR), given a low-resolution (LR) image, one wishes to find a high-resolution (HR) version of it which is both accurate and photorealistic. Recently, it has been shown that there exists a fundamental tradeoff between low distortion and high perceptual quality, and the generative adversarial network (GAN) has been demonstrated to approach the perception-distortion (PD) bound effectively. In this paper, we propose a novel method based on wavelet domain style transfer (WDST), which achieves a better PD tradeoff than the GAN based methods. Specifically, we propose to use the 2D stationary wavelet transform (SWT) to decompose an image into low-frequency and high-frequency sub-bands. For the low-frequency sub-band, we improve its objective quality through an enhancement network. For the high-frequency sub-band, we propose to use WDST to effectively improve its perceptual quality. Owing to the perfect reconstruction property of wavelets, these sub-bands can be re-combined to obtain an image which has simultaneously high objective and perceptual quality. The numerical results on various datasets show that our method achieves the best trade-off between distortion and perceptual quality among the existing state-of-the-art SISR methods.
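The split-process-recombine strategy rests on the perfect reconstruction property mentioned above. A minimal 1D numpy sketch of the idea (the paper uses a 2D SWT; this toy one-level undecimated Haar version only illustrates that sub-bands can be processed separately and then summed back exactly):

```python
import numpy as np

def haar_swt_level1(x):
    """One level of an undecimated (stationary) Haar transform:
    split a 1D signal into low- and high-frequency sub-bands
    without downsampling, using a circular shift."""
    xs = np.roll(x, -1)
    low = (x + xs) / 2.0   # approximation (low-frequency) sub-band
    high = (x - xs) / 2.0  # detail (high-frequency) sub-band
    return low, high

def haar_swt_inverse(low, high):
    """Perfect reconstruction: the two sub-bands sum back exactly
    to the original signal."""
    return low + high

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
low, high = haar_swt_level1(x)
x_rec = haar_swt_inverse(low, high)
```

Because reconstruction is exact, any enhancement applied to one sub-band changes only that frequency band of the recombined image, which is what lets the method trade off objective and perceptual quality per band.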
Yan S, Huang J-J, Daly N, et al., 2020, REVEALING HIDDEN DRAWINGS IN LEONARDO'S 'THE VIRGIN OF THE ROCKS' FROM MACRO X-RAY FLUORESCENCE SCANNING DATA THROUGH ELEMENT LINE LOCALISATION, IEEE International Conference on Acoustics, Speech, and Signal Processing, Publisher: IEEE, Pages: 1444-1448, ISSN: 1520-6149
Alexandru R, Blu T, Dragotti PL, 2020, D-SLAM: DIFFUSION SOURCE LOCALIZATION AND TRAJECTORY MAPPING, IEEE International Conference on Acoustics, Speech, and Signal Processing, Publisher: IEEE, Pages: 5600-5604, ISSN: 1520-6149
Howe CL, Quicke P, Song P, et al., 2020, Comparing wide-field to 3D light field for imaging red calcium transients in mammalian brain
We apply light field (LF) microscopy to single-cell and bulk-loaded imaging of the red calcium dye CaSiR-1 in mouse brain slices. We characterize the signal-to-noise ratio of images reconstructed from LF data relative to wide-field time series.
Alexandru R, Dragotti PL, 2020, Reconstructing classes of non-bandlimited signals from time encoded information, IEEE Transactions on Signal Processing, Vol: 68, Pages: 747-763, ISSN: 1053-587X
We investigate time encoding as an alternative method to classical sampling, and address the problem of reconstructing classes of non-bandlimited signals from time-based samples. We consider a sampling mechanism based on first filtering the input, before obtaining the timing information using a time encoding machine. Within this framework, we show that sampling by timing is equivalent to a non-uniform sampling problem, where the reconstruction of the input depends on the characteristics of the filter and on its non-uniform shifts. The classes of filters we focus on are exponential and polynomial splines, and we show that their fundamental properties are locally preserved in the context of non-uniform sampling. Leveraging these properties, we then derive sufficient conditions and propose novel algorithms for perfect reconstruction of classes of non-bandlimited signals such as: streams of Diracs, sequences of pulses and piecewise constant signals. Next, we extend these methods to operate with arbitrary filters, and also present simulation results on synthetic noisy data.
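A time encoding machine of the integrate-and-fire type can be sketched very simply. This toy numpy version is illustrative only (the threshold, time step, and test input are made-up values): it shows how the timing of threshold crossings, rather than uniform samples, carries the amplitude information of the input.

```python
import numpy as np

def integrate_and_fire_times(x, dt, threshold):
    """Toy time encoding machine: integrate the (filtered) input and
    emit a spike time whenever the running integral crosses the
    threshold, then reset by subtraction. Returns the crossing times."""
    times, integral = [], 0.0
    for n, sample in enumerate(x):
        integral += sample * dt
        if integral >= threshold:
            times.append(n * dt)
            integral -= threshold
    return times

dt = 1e-3
t = np.arange(0, 1, dt)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)  # strictly positive input
spike_times = integrate_and_fire_times(x, dt, threshold=0.05)
```

The spike times cluster where the input is large and spread out where it is small; treating them as non-uniform sample locations is exactly the equivalence the paper exploits for reconstruction.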
Alexandru R, Thao NT, Rzepka D, et al., 2020, SAMPLING CLASSES OF NON-BANDLIMITED SIGNALS USING INTEGRATE-AND-FIRE DEVICES: AVERAGE CASE ANALYSIS, IEEE International Conference on Acoustics, Speech, and Signal Processing, Publisher: IEEE, Pages: 9279-9283, ISSN: 1520-6149
Song P, Deng X, Mota JFC, et al., 2020, Multimodal image super-resolution via joint sparse representations induced by coupled dictionaries, IEEE transactions on computational imaging, Vol: 6, Pages: 57-72, ISSN: 2333-9403
Real-world data processing problems often involve various image modalities associated with a certain scene, including RGB images, infrared images, or multispectral images. The fact that different image modalities often share certain attributes, such as edges, textures, and other structure primitives, represents an opportunity to enhance various image processing tasks. This paper proposes a new approach to constructing a high-resolution (HR) version of a low-resolution (LR) image, given another HR image modality as guidance, based on joint sparse representations induced by coupled dictionaries. The proposed approach captures complex dependency correlations, including similarities and disparities, between different image modalities in a learned sparse feature domain in lieu of the original image domain. It consists of two phases: a coupled dictionary learning phase and a coupled super-resolution phase. The learning phase learns a set of dictionaries from the training dataset to couple different image modalities together in the sparse feature domain. In turn, the super-resolution phase leverages such dictionaries to construct an HR version of the LR target image with another related image modality for guidance. In the advanced version of our approach, a multistage strategy and a neighbourhood regression concept are introduced to further improve the model capacity and performance. Extensive guided image super-resolution experiments on real multimodal images demonstrate that the proposed approach admits distinctive advantages with respect to the state-of-the-art approaches, for example, overcoming the texture-copying artifacts commonly resulting from inconsistency between the guidance and target images. Of particular relevance, the proposed model demonstrates much better robustness than competing deep models in a range of noisy scenarios.
Lawson M, Brookes M, Dragotti PL, 2019, Scene estimation from a swiped image, IEEE Transactions on Computational Imaging, Vol: 5, Pages: 540-555, ISSN: 2333-9403
The image blurring that results from moving a camera with the shutter open is normally regarded as undesirable. However, the blurring of the images encapsulates information which can be extracted to recover the light rays present within the scene. Given the correct recovery of the light rays that resulted in a blurred image, it is possible to reconstruct images of the scene from different camera locations. Therefore, rather than resharpening an image with motion blur, the goal of this paper is to recover the information needed to resynthesise images of the scene from different viewpoints. Estimation of the light rays within a scene is achieved by using a layer-based model to represent objects in the scene as layers, and by using an extended level set method to segment the blurred image into planes at different depths. The algorithm described in this paper has been evaluated on real and synthetic images to produce an estimate of the underlying Epipolar Plane Image.
Kotzagiannidis MS, Dragotti PL, 2019, Sampling and reconstruction of sparse signals on circulant graphs – an introduction to graph-FRI, Applied and Computational Harmonic Analysis, Vol: 47, Pages: 539-565, ISSN: 1096-603X
With the objective of employing graphs toward a more generalized theory of signal processing, we present a novel sampling framework for (wavelet-)sparse signals defined on circulant graphs which extends basic properties of Finite Rate of Innovation (FRI) theory to the graph domain, and can be applied to arbitrary graphs via suitable approximation schemes. At its core, the introduced Graph-FRI framework states that any K-sparse signal on the vertices of a circulant graph can be perfectly reconstructed from its dimensionality-reduced representation in the graph spectral domain, the Graph Fourier Transform (GFT), of minimum size 2K. By leveraging the recently developed theory of e-splines and e-spline wavelets on graphs, one can decompose this graph spectral transformation into a multiresolution low-pass filtering operation with a graph e-spline filter, with subsequent transformation to the spectral graph domain; this allows one to infer a distinct sampling pattern and, ultimately, the structure of an associated coarsened graph, which preserves essential properties of the original, including circularity and, where applicable, the graph generating set.
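The reason the GFT on a circulant graph behaves so much like the classical Fourier transform is the standard fact that circulant matrices are diagonalized by the DFT. A small numpy check of this fact, using the ring graph as the simplest circulant example (the construction below is illustrative, not taken from the paper):

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix from its first row."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

n = 8
# Laplacian of a ring graph (a simple circulant graph): degree 2 on
# the diagonal, -1 for each of the two neighbours.
L = circulant(np.array([2.0, -1.0, 0, 0, 0, 0, 0, -1.0]))

# The unitary DFT matrix diagonalizes any circulant matrix, so the
# GFT of a circulant graph reduces to the classical DFT.
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)
D = F @ L @ F.conj().T
off_diag = D - np.diag(np.diag(D))
eigvals = np.diag(D).real  # Laplacian spectrum: 2 - 2 cos(2*pi*m/n)
```

This is what allows a K-sparse vertex signal to be treated, in the GFT domain, exactly like an FRI problem with classical Fourier coefficients.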
Deng X, Dragotti PL, 2019, Deep coupled ISTA network for multi-modal image super-resolution, IEEE Transactions on Image Processing, Vol: 29, Pages: 1683-1698, ISSN: 1057-7149
Given a low-resolution (LR) image, multi-modal image super-resolution (MISR) aims to find the high-resolution (HR) version of this image with the guidance of an HR image from another modality. In this paper, we use a model-based approach to design a new deep network architecture for MISR. We first introduce a novel joint multi-modal dictionary learning (JMDL) algorithm to model cross-modality dependency. In JMDL, we simultaneously learn three dictionaries and two transform matrices to combine the modalities. Then, by unfolding the iterative shrinkage and thresholding algorithm (ISTA), we turn the JMDL model into a deep neural network, called the deep coupled ISTA network. Since the network initialization plays an important role in deep network training, we further propose a layer-wise optimization algorithm (LOA) to initialize the parameters of the network before running the back-propagation strategy. Specifically, we model the network initialization as a multi-layer dictionary learning problem, and solve it through convex optimization. The proposed LOA is demonstrated to effectively decrease the training loss and increase the reconstruction accuracy. Finally, we compare our method with other state-of-the-art methods on the MISR task. The numerical results show that our method consistently outperforms the others both quantitatively and qualitatively at different upscaling factors for various multi-modal scenarios.
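The unfolding of ISTA into network layers starts from the plain iteration, whose fixed matrices become learnable weights. Below is a hedged numpy sketch of classical (non-learned) ISTA on a toy sparse recovery problem, not the paper's coupled network; the dimensions, λ, and iteration count are arbitrary illustration choices.

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft-thresholding: the nonlinearity each
    layer applies when ISTA is unfolded into a network."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, lam, n_iter=500):
    """Plain ISTA for min_z 0.5*||y - A z||^2 + lam*||z||_1.
    Each iteration corresponds to one layer of an unfolded network,
    where step*A.T and (I - step*A.T A) become learnable weights."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const.
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z + step * A.T @ (y - A @ z), step * lam)
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
z_true = np.zeros(50)
z_true[[3, 17, 40]] = [1.5, -2.0, 1.0]
y = A @ z_true  # noiseless measurements of a 3-sparse vector
z_hat = ista(A, y, lam=0.01)
```

Truncating the iteration to a fixed small number of layers and training the matrices and thresholds end-to-end is the standard unfolding recipe that turns a model like this into a deep network.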
Perez-Nieves N, Leung VCH, Dragotti PL, et al., 2019, Advantages of heterogeneity of parameters in spiking neural network training, 2019 Conference on Cognitive Computational Neuroscience, Publisher: Cognitive Computational Neuroscience
It is very common in studies of the learning capabilities of spiking neural networks (SNNs) to use homogeneous neural and synaptic parameters (time constants, thresholds, etc.). Even in studies in which these parameters are distributed heterogeneously, the advantages or disadvantages of the heterogeneity have rarely been studied in depth. By contrast, in the brain, neurons and synapses are highly diverse, leading naturally to the hypothesis that this heterogeneity may be advantageous for learning. Starting from two state-of-the-art methods for training spiking neural networks (Nicola & Clopath, 2017; Shrestha & Orchard, 2018), we found that adding parameter heterogeneity reduced errors when the network had to learn more complex patterns, increased robustness to hyperparameter mistuning, and reduced the number of training iterations required. We propose that neural heterogeneity may be an important principle for brains to learn robustly in real-world environments with highly complex structure, where task-specific hyperparameter tuning may be impossible. Consequently, heterogeneity may also be a good candidate design principle for artificial neural networks, to reduce the need for expensive hyperparameter tuning as well as to reduce training time.
Kotzagiannidis MS, Dragotti PL, 2019, Splines and Wavelets on Circulant Graphs, Applied and Computational Harmonic Analysis, Vol: 47, Pages: 481-515, ISSN: 1096-603X
We present novel families of wavelets and associated filterbanks for the analysis and representation of functions defined on circulant graphs. In this work, we leverage the inherent vanishing moment property of the circulant graph Laplacian operator, and by extension, the e-graph Laplacian, which is established as a parameterization of the former with respect to the degree per node, for the design of vertex-localized and critically-sampled higher-order graph (e-)spline wavelet filterbanks, which can reproduce and annihilate classes of (exponential) polynomial signals on circulant graphs. In addition, we discuss similarities and analogies of the detected properties and resulting constructions with splines and spline wavelets in the Euclidean domain. Ultimately, we consider generalizations to arbitrary graphs in the form of graph approximations, with focus on graph product decompositions. In particular, we proceed to show how the use of graph products facilitates a multi-dimensional extension of the proposed constructions and properties.
Leung VCH, Huang J-J, Dragotti PL, 2019, Reconstruction of FRI Signals using Deep Neural Networks, Signal Processing with Adaptive Sparse Structured Representations (SPARS 2019)
Finite Rate of Innovation (FRI) theory considers sampling and reconstruction of classes of non-bandlimited signals, such as streams of Diracs. Widely used FRI reconstruction methods including Prony's method and matrix pencil method involve Singular Value Decomposition (SVD). When samples are corrupted with noise, they achieve an optimal performance given by the Cramér-Rao bound yet break down at a certain Signal-to-Noise Ratio (SNR) due to the so-called subspace swap problem. In this paper, we investigate a deep neural network approach for FRI signal reconstruction that directly learns a transformation from signal samples to FRI parameters. Simulations show significant improvement on the breakdown SNR over existing FRI methods.
Deng X, Song P, Rodrigues MRD, et al., 2019, RADAR: robust algorithm for depth image super resolution based on FRI theory and multimodal dictionary learning, IEEE Transactions on Circuits and Systems for Video Technology, Pages: 1-1, ISSN: 1051-8215
Depth image super-resolution is a challenging problem, since normally high upscaling factors are required (e.g., 16×), and depth images are often noisy. In order to achieve large upscaling factors and resilience to noise, we propose a Robust Algorithm for Depth imAge super Resolution (RADAR) that combines the power of finite rate of innovation (FRI) theory with multimodal dictionary learning. Given a low-resolution (LR) depth image, we first model its rows and columns as piece-wise polynomials and propose a FRI-based depth upscaling (FDU) algorithm to super-resolve the image. Then, the upscaled moderate quality (MQ) depth image is further enhanced with the guidance of a registered high-resolution (HR) intensity image. This is achieved by learning multimodal mappings from the joint MQ depth and HR intensity pairs to the HR depth, through a recently proposed triple dictionary learning (TDL) algorithm. Moreover, to speed up the super-resolution process, we introduce a new projection-based rapid upscaling (PRU) technique that pre-calculates the projections from the joint MQ depth and HR intensity pairs to the HR depth. Compared with state-of-the-art deep learning based methods, our approach has two distinct advantages: we need a fraction of training data but can achieve the best performance, and we are resilient to mismatches between training and testing datasets. Extensive numerical results show that the proposed method outperforms other state-of-the-art methods on either noise-free or noisy datasets with large upscaling factors up to 16× and can handle unknown blurring kernels well.
Alexandru R, Dragotti PL, 2019, Rumour source detection in social networks using partial observations, IEEE Global Conference on Signal and Information Processing 2018, Publisher: IEEE
The spread of information on graphs has been extensively studied in engineering, biology, and economics. Recently, however, several authors have started to address the more challenging inverse problem of localizing the origin of an epidemic, given observed traces of infection. In this paper, we introduce a novel technique to estimate the location of a source of multiple epidemics on a general graph, assuming knowledge of the start times of rumours, and using observations from a small number of monitors.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.