Imperial College London

Professor Murray Shanahan

Faculty of Engineering, Department of Computing

Professor in Cognitive Robotics
 
 
 

Contact

 

+44 (0)20 7594 8262 · m.shanahan · Website

 
 

Location

 

573 Huxley Building, South Kensington Campus



Publications


111 results found

Shanahan M, Crosby M, Beyret B, Cheke Let al., 2020, Artificial Intelligence and the Common Sense of Animals, TRENDS IN COGNITIVE SCIENCES, Vol: 24, Pages: 862-872, ISSN: 1364-6613

Journal article

Shanahan M, Nikiforou K, Creswell A, Kaplanis C, Barrett D, Garnelo Met al., 2020, An explicitly relational neural network architecture

With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity. We show that the proposed architecture, when pre-trained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures. The workings of a successfully trained model are visualised to shed some light on how the architecture functions.

Working paper
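The mechanism this abstract describes — forming propositional representations over entities and relations — is in the spirit of relation networks: a shared function applied to every pair of entity vectors, pooled order-invariantly. A minimal numpy sketch under that assumption (the layer sizes and random weights below are illustrative toys, not the paper's architecture):

```python
import numpy as np

def relational_layer(entities, w_rel, w_out):
    """Apply a shared relation function to every ordered pair of entity
    vectors and aggregate the results -- the generic mechanism behind
    explicitly relational architectures (weights here are random toys).

    entities: (n, d) array of entity feature vectors (e.g. from a CNN).
    """
    n, d = entities.shape
    # Build all ordered pairs (i, j), i != j, as concatenated vectors.
    pairs = [np.concatenate([entities[i], entities[j]])
             for i in range(n) for j in range(n) if i != j]
    pair_feats = np.tanh(np.stack(pairs) @ w_rel)  # shared relation layer
    aggregated = pair_feats.sum(axis=0)            # order-invariant pooling
    return aggregated @ w_out                      # task readout

rng = np.random.default_rng(0)
entities = rng.normal(size=(4, 8))
w_rel = rng.normal(scale=0.1, size=(16, 32))
w_out = rng.normal(scale=0.1, size=(32, 5))
logits = relational_layer(entities, w_rel, w_out)
```

Because the pooling sums over all ordered pairs, the output is invariant to the order in which entities are listed — one property that makes such representations reusable across tasks.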

Dilokthanakul N, Kaplanis C, Pawlowski N, Shanahan Met al., 2019, Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, Vol: 30, Pages: 3409-3418, ISSN: 2162-237X

Journal article

Garnelo M, Shanahan M, 2019, Reconciling deep learning with symbolic artificial intelligence: representing objects and relations, Current Opinion in Behavioral Sciences, Vol: 29, Pages: 17-23, ISSN: 2352-1546

In the history of the quest for human-level artificial intelligence, a number of rival paradigms have vied for supremacy. Symbolic artificial intelligence was dominant for much of the 20th century, but currently a connectionist paradigm is in the ascendant, namely machine learning with deep neural networks. However, both paradigms have strengths and weaknesses, and a significant challenge for the field today is to effect a reconciliation. A central tenet of the symbolic paradigm is that intelligence results from the manipulation of abstract compositional representations whose elements stand for objects and relations. If this is correct, then a key objective for deep learning is to develop architectures capable of discovering objects and relations in raw data, and learning how to represent them in ways that are useful for downstream processing. This short review highlights recent progress in this direction.

Journal article

Roseboom W, Fountas Z, Nikiforou K, Bhowmik D, Shanahan M, Seth AKet al., 2019, Activity in perceptual classification networks as a basis for human subjective time perception, Nature Communications, Vol: 10, ISSN: 2041-1723

Despite being a fundamental dimension of experience, how the human brain generates the perception of time remains unknown. Here, we provide a novel explanation for how human time perception might be accomplished, based on non-temporal perceptual classification processes. To demonstrate this proposal, we build an artificial neural system centred on a feed-forward image classification network, functionally similar to human visual processing. In this system, input videos of natural scenes drive changes in network activation, and accumulation of salient changes in activation are used to estimate duration. Estimates produced by this system match human reports made about the same videos, replicating key qualitative biases, including differentiating between scenes of walking around a busy city or sitting in a cafe or office. Our approach provides a working model of duration perception from stimulus to estimation and presents a new direction for examining the foundations of this central aspect of human experience.

Journal article
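The accumulation mechanism the abstract describes — salient changes in classifier activation driving a duration estimate — can be sketched as follows. The threshold and calibration constant are arbitrary stand-ins, not values from the paper:

```python
import numpy as np

def estimate_duration(activations, threshold=0.1, seconds_per_unit=0.05):
    """Accumulate salient changes in network activation as a clock.

    activations: (frames, features) array, e.g. pooled feature maps from
    an image classifier run on successive video frames.  A frame counts
    as 'salient' when the activation change from the previous frame
    exceeds a threshold (fixed here; adaptive in richer variants).
    """
    changes = np.linalg.norm(np.diff(activations, axis=0), axis=1)
    salient = changes > threshold
    # Each salient event advances the estimated clock by a fixed amount,
    # calibrated (here arbitrarily) against real durations.
    return salient.sum() * seconds_per_unit
```

A busy scene drives larger activation changes per frame than a static one, so this accumulator reproduces the qualitative bias the paper reports: busier stimuli yield longer duration estimates.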

Kaplanis C, Shanahan M, Clopath C, 2018, Continual Reinforcement Learning with Complex Synapses, 35th International Conference on Machine Learning (ICML), Publisher: JMLR-JOURNAL MACHINE LEARNING RESEARCH, ISSN: 2640-3498

Conference paper

Garnelo M, Rosenbaum D, Maddison CJ, Ramalho T, Saxton D, Shanahan M, Teh YW, Rezende DJ, Eslami SMAet al., 2018, Conditional Neural Processes, 35th International Conference on Machine Learning (ICML), Publisher: JMLR-JOURNAL MACHINE LEARNING RESEARCH, ISSN: 2640-3498

Conference paper

Dilokthanakul N, Shanahan M, 2018, Deep Reinforcement Learning with Risk-Seeking Exploration, 15th International Conference on the Simulation of Adaptive Behavior (SAB), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 201-211, ISSN: 0302-9743

Conference paper

Fountas Z, Shanahan M, 2017, The role of cortical oscillations in a spiking neural network model of the basal ganglia., PLoS ONE, Vol: 12, ISSN: 1932-6203

Although brain oscillations involving the basal ganglia (BG) have been the target of extensive research, the main focus lies disproportionally on oscillations generated within the BG circuit rather than other sources, such as cortical areas. We remedy this here by investigating the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. To do this, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. As a measure of effective connectivity, we estimate information transfer between nuclei by means of transfer entropy. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and Parkinsonian BG. We found that, indeed, effective connectivity changes dramatically for different cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. In particular, alpha (8-12Hz) and beta (13-30Hz) oscillations activate the direct BG pathway, and favour the modulation of the indirect and hyper-direct pathways via the subthalamic nucleus-globus pallidus loop. In contrast, gamma (30-90Hz) frequencies block the information flow from the cortex completely through activation of the indirect pathway. Finally, below alpha, all pathways decay gradually and the system gives rise to spontaneous activity generated in the globus pallidus. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. These two findings suggest new insights into the pathophysiology of specific BG disorders.

Journal article
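Transfer entropy, the effective-connectivity measure used here, can be estimated for binarised spike trains with a plug-in estimator over short histories. This is a generic history-length-1 sketch, not the authors' implementation:

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target):
    """Plug-in estimate of transfer entropy TE(source -> target), in bits,
    for binary time series with history length 1:
    TE = sum p(t1, t0, s0) * log2[ p(t1 | t0, s0) / p(t1 | t0) ],
    i.e. how much the source's past improves prediction of the target
    beyond the target's own past.
    """
    s0, t0, t1 = source[:-1], target[:-1], target[1:]
    n = len(t1)
    p_tts = Counter(zip(t1, t0, s0))
    p_ts = Counter(zip(t0, s0))
    p_tt = Counter(zip(t1, t0))
    p_t = Counter(t0)
    te = 0.0
    for (a, b, c), count in p_tts.items():
        p_joint = count / n
        p_cond_full = count / p_ts[(b, c)]   # p(t1 | t0, s0)
        p_cond_self = p_tt[(a, b)] / p_t[b]  # p(t1 | t0)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te
```

If the source drives the target with a one-step delay, TE in the driving direction is large while TE in the reverse direction stays near zero — the asymmetry that lets the model distinguish directions of information flow between nuclei.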

Fountas Z, Shanahan M, 2017, Assessing Selectivity in the Basal Ganglia: The “Gearbox” Hypothesis, Publisher: Cold Spring Harbor Laboratory

Abstract: Despite experimental evidence, the literature so far contains no systematic attempt to address the impact of cortical oscillations on the ability of the basal ganglia (BG) to select. In this study, we employed a state-of-the-art spiking neural model of the BG circuitry and investigated the effectiveness of this circuitry as an action selection device. We found that cortical frequency, phase, dopamine and the examined time scale, all have a very important impact on this process. Our simulations resulted in a canonical profile of selectivity, termed selectivity portraits, which suggests that the cortex is the structure that determines whether selection will be performed in the BG and what strategy will be utilized. Some frequency ranges promote the exploitation of highly salient actions, others promote the exploration of alternative options, while the remaining frequencies halt the selection process. Based on this behaviour, we propose that the BG circuitry can be viewed as the “gearbox” of action selection. Coalitions of rhythmic cortical areas are able to switch between a repertoire of available BG modes which, in turn, change the course of information flow within the cortico-BG-thalamo-cortical loop. Dopamine, akin to “control pedals”, either stops or initiates a decision, while cortical frequencies, as a “gear lever”, determine whether a decision can be triggered and what type of decision this will be. Finally, we identified a selection cycle with a period of around 200ms, which was used to assess the biological plausibility of the popular cognitive architectures. Author summary: Our brains are continuously called to select the most appropriate action between alternative competing choices. A plethora of evidence and theoretical work indicates that a fundamental brain region called the basal ga

Working paper

Tax T, Martinez Mediano PA, Shanahan M, 2017, The Partial Information Decomposition of Generative Neural Network Models, Entropy, Vol: 19, ISSN: 1099-4300

In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.

Journal article

Nikiforou K, Mediano PAM, Shanahan M, 2017, An Investigation of the Dynamical Transitions in Harmonically Driven Random Networks of Firing-Rate Neurons, Cognitive Computation, Vol: 9, Pages: 351-363, ISSN: 1866-9956

Continuous-time recurrent neural networks are widely used as models of neural dynamics and also have applications in machine learning. But their dynamics are not yet well understood, especially when they are driven by external stimuli. In this article, we study the response of stable and unstable networks to different harmonically oscillating stimuli by varying a parameter ρ, the ratio between the timescale of the network and the stimulus, and use the dimensionality of the network’s attractor as an estimate of the complexity of this response. Additionally, we propose a novel technique for exploring the stationary points and locally linear dynamics of these networks in order to understand the origin of input-dependent dynamical transitions. Attractors in both stable and unstable networks show a peak in dimensionality for intermediate values of ρ, with the latter consistently showing a higher dimensionality than the former, which exhibit a resonance-like phenomenon. We explain changes in the dimensionality of a network’s dynamics in terms of changes in the underlying structure of its vector field by analysing stationary points. Furthermore, we uncover the coexistence of underlying attractors with various geometric forms in unstable networks. As ρ is increased, our visualisation technique shows the network passing through a series of phase transitions with its trajectory taking on a sequence of qualitatively distinct figure-of-eight, cylinder, and spiral shapes. These findings bring us one step closer to a comprehensive theory of this important class of neural networks by revealing the subtle structure of their dynamics under different conditions.

Journal article
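The driven continuous-time recurrent network studied here can be simulated by Euler-integrating the standard CTRNN equation with a harmonic drive. The mapping from the ratio ρ to a stimulus frequency, and all parameter values below, are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def simulate_ctrnn(weights, rho=1.0, tau=1.0, steps=2000, dt=0.01,
                   drive_gain=1.0, seed=0):
    """Euler-integrate  tau * dx/dt = -x + W tanh(x) + I(t)
    with a harmonic drive I(t) = drive_gain * sin(omega * t).

    rho is the ratio between the network timescale and that of the
    stimulus -- the control parameter varied in the paper.  Setting
    omega from rho this way is an assumption for illustration.
    """
    n = weights.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 0.1, n)
    omega = 2 * np.pi * rho / tau  # stimulus frequency set via rho
    trajectory = np.empty((steps, n))
    for k in range(steps):
        drive = drive_gain * np.sin(omega * k * dt)
        x = x + dt / tau * (-x + weights @ np.tanh(x) + drive)
        trajectory[k] = x
    return trajectory
```

Sweeping ρ over such simulations and measuring the effective dimensionality of the resulting trajectory (e.g. via PCA of the attractor) reproduces the kind of analysis the paper performs.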

Dilokthanakul N, Mediano PAM, Garnelo M, Lee MCH, Salimbeni H, Arulkumaran K, Shanahan Met al., 2016, Deep unsupervised clustering with Gaussian mixture variational autoencoders

We study a variant of the variational autoencoder model with a Gaussian mixture as a prior distribution, with the goal of performing unsupervised clustering through deep generative models. We observe that the standard variational approach in these models is unsuited for unsupervised clustering, and mitigate this problem by leveraging a principled information-theoretic regularisation term known as consistency violation. Adding this term to the standard variational optimisation objective yields networks with both meaningful internal representations and well-defined clusters. We demonstrate the performance of this scheme on synthetic data, MNIST and SVHN, showing that the obtained clusters are distinct, interpretable and result in achieving higher performance on unsupervised clustering classification than previous approaches.

Working paper

Arulkumaran K, Dilokthanakul N, Shanahan M, Bharath AAet al., 2016, Classifying options for deep reinforcement learning, Publisher: IJCAI

Deep reinforcement learning is the learning of multiple levels of hierarchical representations for reinforcement learning. Hierarchical reinforcement learning focuses on temporal abstractions in planning and learning, allowing temporally-extended actions to be transferred between tasks. In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We show that in a domain where we have prior knowledge of the mapping between states and options, our augmented DQN achieves a policy competitive with that of a standard DQN, but with much lower sample complexity. This is achieved through a straightforward architectural adjustment to the DQN, as well as an additional supervised neural network.

Working paper
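The "option heads" idea — a shared trunk, one Q-value head per option, and a supervisory head that picks which option is active — can be sketched as a toy forward pass. The class name, layer sizes and random weights are illustrative, not the paper's network:

```python
import numpy as np

class OptionHeadDQN:
    """Toy forward pass of a DQN with multiple option heads.

    A shared trunk embeds the state; each option head outputs Q-values
    over the primitive actions, and a supervisory head scores the
    options.  Only the head picked by the supervisor drives action
    selection, so options can specialise on different state regions.
    """

    def __init__(self, state_dim, n_actions, n_options, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.trunk = rng.normal(0, 0.1, (state_dim, hidden))
        self.heads = rng.normal(0, 0.1, (n_options, hidden, n_actions))
        self.supervisor = rng.normal(0, 0.1, (hidden, n_options))

    def act(self, state):
        h = np.tanh(state @ self.trunk)               # shared features
        option = int(np.argmax(h @ self.supervisor))  # supervisory choice
        q = h @ self.heads[option]                    # Q-values of that head
        return option, int(np.argmax(q))
```

In the paper's setting the supervisory network is trained with supervision from the known state-to-option mapping, while each head is trained with ordinary Q-learning on the transitions gathered while its option is active.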

Bhowmik D, Nikiforou K, Shanahan M, Maniadakis M, Trahanias Pet al., 2016, A Reservoir Computing Model of Episodic Memory, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 5202-5209, ISSN: 2161-4393

Conference paper

Shanahan MP, Hellyer P, Sharp DJ, Scott G, Leech Ret al., 2015, Cognitive flexibility through metastable neural dynamics is disrupted by damage to the structural connectome, Journal of Neuroscience, Vol: 35, Pages: 9050-9063, ISSN: 0270-6474

Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome.

Journal article

Váša F, Shanahan M, Hellyer P, Scott G, Cabral J, Leech Ret al., 2015, Effects of lesions on synchrony and metastability in cortical networks, Neuroimage, Vol: 118, Pages: 456-467, ISSN: 1095-9572

At the macroscopic scale, the human brain can be described as a complex network of white matter tracts integrating grey matter assemblies — the human connectome. The structure of the connectome, which is often described using graph theoretic approaches, can be used to model macroscopic brain function at low computational cost. Here, we use the Kuramoto model of coupled oscillators with time-delays, calibrated with respect to empirical functional MRI data, to study the relation between the structure of the connectome and two aspects of functional brain dynamics — synchrony, a measure of general coherence, and metastability, a measure of dynamical flexibility. Specifically, we investigate the relationship between the local structure of the connectome, quantified using graph theory, and the synchrony and metastability of the model's dynamics. By removing individual nodes and all of their connections from the model, we study the effect of lesions on both global and local dynamics. Of the nine nodal graph-theoretical properties tested, two were able to predict effects of node lesion on the global dynamics. The removal of nodes with high eigenvector centrality leads to decreases in global synchrony and increases in global metastability, as does the removal of hub nodes joining topologically segregated network modules. At the level of local dynamics in the neighbourhood of the lesioned node, structural properties of the lesioned nodes hold more predictive power, as five nodal graph theoretical measures are related to changes in local dynamics following node lesions. We discuss these results in the context of empirical studies of stroke and functional brain dynamics.

Journal article
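The paper's basic computational setup — Kuramoto oscillators coupled through a structural graph, with synchrony and metastability read off the order parameter — can be sketched in a few lines. This version omits the time delays used in the paper, and all parameter values are illustrative:

```python
import numpy as np

def kuramoto_synchrony_metastability(adjacency, coupling=0.5,
                                     steps=4000, dt=0.01, seed=0):
    """Simulate Kuramoto oscillators on a connectome graph (no delays, a
    simplification of the delayed model in the paper) and return
    (synchrony, metastability): the mean and standard deviation over
    time of the order parameter R(t) = |mean(exp(i * theta))|.
    """
    n = adjacency.shape[0]
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    omega = rng.normal(1.0, 0.1, n)  # natural frequencies
    r_trace = np.empty(steps)
    for k in range(steps):
        # Pairwise sine coupling weighted by the structural adjacency.
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + coupling
                              * (adjacency * np.sin(diff)).sum(axis=1))
        r_trace[k] = np.abs(np.exp(1j * theta).mean())
    return r_trace.mean(), r_trace.std()
```

A node lesion of the kind studied in the paper corresponds to zeroing that node's row and column of the adjacency matrix and re-running the simulation, then comparing the global and local synchrony and metastability before and after.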

Teixeira FPP, Shanahan M, 2015, Local and Global Criticality within Oscillating Networks of Spiking Neurons, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, ISSN: 2161-4393

Conference paper

Bhowmik D, Shanahan M, 2015, STDP Produces Well Behaved Oscillations and Synchrony, 4th International Conference on Cognitive Neurodynamics (ICCN), Publisher: SPRINGER, Pages: 241-252

Conference paper

Fountas Z, Shanahan M, 2015, GPU-based Fast Parameter Optimization for Phenomenological Spiking Neural Models, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, ISSN: 2161-4393

Conference paper

Carhart-Harris RL, Leech R, Hellyer PJ, Shanahan M, Feilding A, Tagliazucchi E, Chialvo DR, Nutt Det al., 2014, The entropic brain: a theory of conscious states informed by neuroimaging research with psychedelic drugs, Frontiers in Human Neuroscience, Vol: 8, Pages: 1-22, ISSN: 1662-5161

Entropy is a dimensionless quantity that is used for measuring uncertainty about the state of a system but it can also imply physical qualities, where high entropy is synonymous with high disorder. Entropy is applied here in the context of states of consciousness and their associated neurodynamics, with a particular focus on the psychedelic state. The psychedelic state is considered an exemplar of a primitive or primary state of consciousness that preceded the development of modern, adult, human, normal waking consciousness. Based on neuroimaging data with psilocybin, a classic psychedelic drug, it is argued that the defining feature of “primary states” is elevated entropy in certain aspects of brain function, such as the repertoire of functional connectivity motifs that form and fragment across time. Indeed, since there is a greater repertoire of connectivity motifs in the psychedelic state than in normal waking consciousness, this implies that primary states may exhibit “criticality,” i.e., the property of being poised at a “critical” point in a transition zone between order and disorder where certain phenomena such as power-law scaling appear. Moreover, if primary states are critical, then this suggests that entropy is suppressed in normal waking consciousness, meaning that the brain operates just below criticality. It is argued that this entropy suppression furnishes normal waking consciousness with a constrained quality and associated metacognitive functions, including reality-testing and self-awareness. It is also proposed that entry into primary states depends on a collapse of the normally highly organized activity within the default-mode network (DMN) and a decoupling between the DMN and the medial temporal lobes (which are normally significantly coupled). These hypotheses can be tested by examining brain activity and associated cognition in other candidate primary states such as rapid eye movement (REM) sleep and early ps

Journal article
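The abstract's central quantity — the entropy of the repertoire of functional connectivity motifs across time — reduces to Shannon entropy over the observed motif distribution. A minimal sketch, assuming motifs have already been identified and labelled per time window:

```python
import numpy as np
from collections import Counter

def motif_repertoire_entropy(labels):
    """Shannon entropy (bits) of the distribution of functional
    connectivity motifs observed across time windows; `labels` is a
    sequence of motif identifiers, one per window.  A richer motif
    repertoire -- the paper's signature of 'primary states' -- yields
    higher entropy.
    """
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A state that visits eight motifs uniformly scores 3 bits, while one locked into a single motif scores 0, capturing the contrast the paper draws between psychedelic and normal waking consciousness.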

Hellyer PJ, Shanahan MP, Scott G, Wise RJS, Sharp DJ, Leech Ret al., 2014, The control of global brain dynamics: opposing actions of frontoparietal control and default mode networks on attention, Journal of Neuroscience, Vol: 34, Pages: 451-461, ISSN: 1529-2401

Understanding how dynamic changes in brain activity control behavior is a major challenge of cognitive neuroscience. Here, we consider the brain as a complex dynamic system and define two measures of brain dynamics: the synchrony of brain activity, measured by the spatial coherence of the BOLD signal across regions of the brain; and metastability, which we define as the extent to which synchrony varies over time. We investigate the relationship among brain network activity, metastability, and cognitive state in humans, testing the hypothesis that global metastability is “tuned” by network interactions. We study the following two conditions: (1) an attentionally demanding choice reaction time task (CRT); and (2) an unconstrained “rest” state. Functional MRI demonstrated that increased synchrony and decreased metastability were associated with increased frontoparietal control/dorsal attention network (FPCN/DAN) activity and decreased default mode network (DMN) activity during the CRT compared with rest. Using a computational model of neural dynamics that is constrained by white matter structure to test whether simulated changes in FPCN/DAN and DMN activity produce similar effects, we demonstrate that activation of the FPCN/DAN increases global synchrony and decreases metastability. DMN activation had the opposite effects. These results suggest that the balance of activity in the FPCN/DAN and DMN might control global metastability, providing a mechanistic explanation of how attentional state is shifted between an unfocused/exploratory mode characterized by high metastability, and a focused/constrained mode characterized by low metastability.

Journal article
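The two measures this abstract defines — synchrony as spatial phase coherence and metastability as its variation over time — can be computed directly from a (time, regions) signal matrix. The sketch below builds instantaneous phases with an FFT-based Hilbert transform; it assumes the signals are already narrowband (e.g. band-passed BOLD):

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based Hilbert transform
    (axis 0 = time)."""
    n = x.shape[0]
    f = np.fft.fft(x, axis=0)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.angle(np.fft.ifft(f * h[:, None], axis=0))

def synchrony_and_metastability(signals):
    """signals: (time, regions) array, e.g. band-passed BOLD.
    Synchrony = time-average of the phase order parameter R(t);
    metastability = its standard deviation over time, as in the paper.
    """
    phases = analytic_phase(signals)
    r = np.abs(np.exp(1j * phases).mean(axis=1))
    return r.mean(), r.std()
```

Identical signals across regions give synchrony near 1 with metastability near 0; signals with independent random phases give lower synchrony, matching the contrast between the focused (CRT) and unconstrained (rest) regimes described above.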

Fountas Z, Shanahan M, 2014, Phase Offset Between Slow Oscillatory Cortical Inputs Influences Competition in a Model of the Basal Ganglia, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 2407-2414, ISSN: 2161-4393

Conference paper

Teixeira FPP, Shanahan M, 2014, Does Plasticity Promote Criticality?, International Joint Conference on Neural Networks (IJCNN), Publisher: IEEE, Pages: 2383-2390, ISSN: 2161-4393

Conference paper

Shanahan M, 2014, Review of "Consciousness and Robot Sentience" by Pentti Haikonen, International Journal of Machine Consciousness, Vol: 6, Pages: 63-65, ISSN: 1793-8430

Journal article

Shanahan M, Bingman VP, Shimizu T, Wild M, Guentuerkuen Oet al., 2013, Large-scale network organization in the avian forebrain: a connectivity matrix and theoretical analysis, FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, Vol: 7

Journal article

Fidjeland AK, Gamez D, Shanahan MP, Lazdins Eet al., 2013, Three Tools for the Real-Time Simulation of Embodied Spiking Neural Networks Using GPUs, NEUROINFORMATICS, Vol: 11, Pages: 267-290, ISSN: 1539-2791

Journal article

Bhowmik D, Shanahan M, 2013, Metastability and Inter-Band Frequency Modulation in Networks of Oscillating Spiking Neuron Populations, PLOS ONE, Vol: 8, ISSN: 1932-6203

Journal article

Fountas Z, Shanahan M, 2013, A cognitive neural architecture as a robot controller, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 8064 LNAI, Pages: 371-373, ISSN: 0302-9743

This work proposes a biologically plausible cognitive architecture implemented in spiking neurons, which is based on well-established models of neuronal global workspace, action selection in the basal ganglia and corticothalamic circuits and can be used to control agents in virtual or physical environments. The aim of this system is the investigation of a number of aspects of cognition using real embodied systems, such as the ability of the brain to globally access and process information concurrently, as well as the ability to simulate potential future scenarios and use these predictions to drive action selection.

Journal article

Bhowmik D, Shanahan M, 2013, STDP Produces Robust Oscillatory Architectures That Exhibit Precise Collective Synchronization, 2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), ISSN: 2161-4393

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
