Garnelo M, Shanahan M, 2019, Reconciling deep learning with symbolic artificial intelligence: representing objects and relations, Current Opinion in Behavioral Sciences, Vol: 29, Pages: 17-23
In the history of the quest for human-level artificial intelligence, a number of rival paradigms have vied for supremacy. Symbolic artificial intelligence was dominant for much of the 20th century, but currently a connectionist paradigm is in the ascendant, namely machine learning with deep neural networks. However, both paradigms have strengths and weaknesses, and a significant challenge for the field today is to effect a reconciliation. A central tenet of the symbolic paradigm is that intelligence results from the manipulation of abstract compositional representations whose elements stand for objects and relations. If this is correct, then a key objective for deep learning is to develop architectures capable of discovering objects and relations in raw data, and learning how to represent them in ways that are useful for downstream processing. This short review highlights recent progress in this direction.
Roseboom W, Fountas Z, Nikiforou K, et al., 2019, Activity in perceptual classification networks as a basis for human subjective time perception, NATURE COMMUNICATIONS, Vol: 10, ISSN: 2041-1723
Kaplanis C, Shanahan M, Clopath C, 2018, Continual reinforcement learning with complex synapses, Pages: 3893-3902
Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
Garnelo M, Rosenbaum D, Maddison CJ, et al., 2018, Conditional neural processes, Pages: 2738-2747
Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. On the other hand, Bayesian methods, such as Gaussian Processes (GPs), exploit prior knowledge to quickly infer the shape of a new function at test time. Yet GPs are computationally expensive, and it can be hard to design appropriate priors. In this paper we propose a family of neural models, Conditional Neural Processes (CNPs), that combine the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.
Dilokthanakul N, Shanahan M, 2018, Deep Reinforcement Learning with Risk-Seeking Exploration, Pages: 201-211, ISSN: 0302-9743
In most contemporary work in deep reinforcement learning (DRL), agents are trained in simulated environments. Not only are simulated environments fast and inexpensive, they are also ‘safe’. By contrast, training in a real world environment (using robots, for example) is not only slow and costly, but actions can also result in irreversible damage, either to the environment or to the agent (robot) itself. In this paper, we consider taking advantage of the inherent safety in computer simulation by extending the Deep Q-Network (DQN) algorithm with an ability to measure and take risk. In essence, we propose a novel DRL algorithm that encourages risk-seeking behaviour to enhance information acquisition during training. We demonstrate the merit of the exploration heuristic by (i) arguing that our risk estimator implicitly contains both parametric uncertainty and inherent uncertainty of the environment which are propagated back through Temporal Difference error across many time steps and (ii) evaluating our method on three games in the Atari domain and showing that the technique works well on Montezuma’s Revenge, a game that epitomises the challenge of sparse reward.
Fountas Z, Shanahan M, 2017, The role of cortical oscillations in a spiking neural network model of the basal ganglia, PLOS ONE, Vol: 12, ISSN: 1932-6203
Fountas Z, Shanahan M, 2017, Assessing Selectivity in the Basal Ganglia: The "Gearbox" Hypothesis
Despite experimental evidence, the literature so far contains no systematic attempt to address the impact of cortical oscillations on the ability of the basal ganglia (BG) to select. In this study, we employed a state-of-the-art spiking neural model of the BG circuitry and investigated the effectiveness of this circuitry as an action selection device. We found that cortical frequency, phase, dopamine and the examined time scale all have a significant impact on this process. Our simulations resulted in a canonical profile of selectivity, termed selectivity portraits, which suggests that the cortex is the structure that determines whether selection will be performed in the BG and what strategy will be utilized. Some frequency ranges promote the exploitation of highly salient actions, others promote the exploration of alternative options, while the remaining frequencies halt the selection process. Based on this behaviour, we propose that the BG circuitry can be viewed as the "gearbox" of action selection. Coalitions of rhythmic cortical areas are able to switch between a repertoire of available BG modes which, in turn, change the course of information flow within the cortico-BG-thalamo-cortical loop. Dopamine, akin to "control pedals", either stops or initiates a decision, while cortical frequencies, as a "gear lever", determine whether a decision can be triggered and what type of decision this will be. Finally, we identified a selection cycle with a period of around 200 ms, which we used to assess the biological plausibility of popular cognitive architectures.
Tax TMS, Mediano PAM, Shanahan M, 2017, The Partial Information Decomposition of Generative Neural Network Models, ENTROPY, Vol: 19, ISSN: 1099-4300
Nikiforou K, Mediano PAM, Shanahan M, 2017, An Investigation of the Dynamical Transitions in Harmonically Driven Random Networks of Firing-Rate Neurons, COGNITIVE COMPUTATION, Vol: 9, Pages: 351-363, ISSN: 1866-9956
Dilokthanakul N, Mediano PAM, Garnelo M, et al., 2016, Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders
We study a variant of the variational autoencoder model (VAE) with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the known problem of over-regularisation that has been shown to arise in regular VAEs also manifests itself in our model and leads to cluster degeneracy. We show that a heuristic called minimum information constraint that has been shown to mitigate this effect in VAEs can also be applied to improve unsupervised clustering performance with our model. Furthermore we analyse the effect of this heuristic and provide an intuition of the various processes with the help of visualizations. Finally, we demonstrate the performance of our model on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable, and achieve performance on unsupervised clustering competitive with the state of the art.
Bhowmik D, Nikiforou K, Shanahan M, et al., 2016, A RESERVOIR COMPUTING MODEL OF EPISODIC MEMORY, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 5202-5209, ISSN: 2161-4393
Arulkumaran K, Dilokthanakul N, Shanahan M, et al., 2016, Classifying Options for Deep Reinforcement Learning
Vasa F, Shanahan M, Hellyer PJ, et al., 2015, Effects of lesions on synchrony and metastability in cortical networks, NEUROIMAGE, Vol: 118, Pages: 456-467, ISSN: 1053-8119
Hellyer PJ, Scott G, Shanahan M, et al., 2015, Cognitive Flexibility through Metastable Neural Dynamics Is Disrupted by Damage to the Structural Connectome, JOURNAL OF NEUROSCIENCE, Vol: 35, Pages: 9050-9063, ISSN: 0270-6474
Fountas Z, Shanahan M, 2015, GPU-based Fast Parameter Optimization for Phenomenological Spiking Neural Models, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, ISSN: 2161-4393
Bhowmik D, Shanahan M, 2015, STDP Produces Well Behaved Oscillations and Synchrony, 4th International Conference on Cognitive Neurodynamics (ICCN), Publisher: SPRINGER, Pages: 241-252
Teixeira FPP, Shanahan M, 2015, Local and Global Criticality within Oscillating Networks of Spiking Neurons, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, ISSN: 2161-4393
Carhart-Harris RL, Leech R, Hellyer PJ, et al., 2014, The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs, FRONTIERS IN HUMAN NEUROSCIENCE, Vol: 8, ISSN: 1662-5161
Hellyer PJ, Shanahan M, Scott G, et al., 2014, The Control of Global Brain Dynamics: Opposing Actions of Frontoparietal Control and Default Mode Networks on Attention, JOURNAL OF NEUROSCIENCE, Vol: 34, Pages: 451-461, ISSN: 0270-6474
Fountas Z, Shanahan M, 2014, Phase Offset Between Slow Oscillatory Cortical Inputs Influences Competition in a Model of the Basal Ganglia, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 2407-2414, ISSN: 2161-4393
Teixeira FPP, Shanahan M, 2014, Does Plasticity Promote Criticality?, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 2383-2390, ISSN: 2161-4393
Shanahan M, 2014, Review of "Consciousness and Robot Sentience" by Pentti Haikonen, International Journal of Machine Consciousness, Vol: 6, Pages: 63-65, ISSN: 1793-8430
Fountas Z, Shanahan M, 2013, A cognitive neural architecture as a robot controller, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 8064 LNAI, Pages: 371-373, ISSN: 0302-9743
This work proposes a biologically plausible cognitive architecture implemented in spiking neurons, which is based on well-established models of the neuronal global workspace, action selection in the basal ganglia, and corticothalamic circuits, and can be used to control agents in virtual or physical environments. The aim of this system is the investigation of a number of aspects of cognition using real embodied systems, such as the ability of the brain to globally access and process information concurrently, as well as the ability to simulate potential future scenarios and use these predictions to drive action selection.
Shanahan M, Bingman VP, Shimizu T, et al., 2013, Large-scale network organization in the avian forebrain: a connectivity matrix and theoretical analysis, FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, Vol: 7, ISSN: 1662-5188
Fidjeland AK, Gamez D, Shanahan MP, et al., 2013, Three Tools for the Real-Time Simulation of Embodied Spiking Neural Networks Using GPUs, NEUROINFORMATICS, Vol: 11, Pages: 267-290, ISSN: 1539-2791
Bhowmik D, Shanahan M, 2013, Metastability and Inter-Band Frequency Modulation in Networks of Oscillating Spiking Neuron Populations, PLOS ONE, Vol: 8, ISSN: 1932-6203
Bhowmik D, Shanahan M, 2013, STDP Produces Robust Oscillatory Architectures That Exhibit Precise Collective Synchronization, 2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), ISSN: 2161-4393
Wildie M, Shanahan M, 2012, Metastability and chimera states in modular delay and pulse-coupled oscillator networks, CHAOS, Vol: 22, ISSN: 1054-1500
Shanahan M, 2012, The brain's connective core and its role in animal cognition, PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, Vol: 367, Pages: 2704-2714, ISSN: 0962-8436
Shanahan M, 2012, Satori Before Singularity, JOURNAL OF CONSCIOUSNESS STUDIES, Vol: 19, Pages: 87-102, ISSN: 1355-8250
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.