Imperial College London


Faculty of Engineering, Department of Bioengineering




marie.tolkiehn




RSM 428, Royal School of Mines, South Kensington Campus






5 results found

Tolkiehn M, Schultz SR, 2019, Neural ensemble activity depends on stimulus type in mouse primary visual cortex

Abstract: Early cortical processing of visual information has long been investigated by describing the response properties, such as receptive fields or orientation selectivity, of individual neurons to moving gratings. However, thanks to recent technological advances, it has become easier to record from larger neuronal populations, allowing us to probe visual information processing at the population level. Ultimately, sensory processing is unlikely to be a single-neuron effort but rather that of an entire population. Here we show how different stimulus types evoke distinct binary activity patterns (words) of simultaneous events on different sites in the anaesthetised mouse. Spontaneous activity and natural scenes showed lower word-distribution divergences from each other than either did from drifting gratings. Accounting for firing rate differences, spontaneous activity was linked to more unique patterns than stimulus-driven responses. Multidimensional scaling conveyed that pattern probability distributions clustered for spatial frequencies but not for directions. Further, drifting gratings modulated the Shannon entropy estimated on spatial patterns in a similar fashion to classical directional and spatial frequency tuning functions of neurons. This was supported by a distinct sublinear relationship between Shannon entropy and mean population firing rate.

Journal article
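The entropy analysis described in the abstract above can be sketched as follows. This is a minimal illustration of estimating Shannon entropy over binary population "words" (simultaneous-event patterns across recording sites); the data here are hypothetical toy values, not the study's recordings.

```python
from collections import Counter
import math

def word_entropy(words):
    """Shannon entropy (bits) of the empirical distribution of binary
    activity patterns. Each element of `words` is a tuple: one binary
    "word" of simultaneous events across recording sites."""
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy example: 4 recording sites, 4 observed population words
words = [(0, 1, 0, 0), (0, 1, 0, 0), (1, 1, 0, 0), (0, 0, 0, 0)]
print(word_entropy(words))  # → 1.5
```

With more unique patterns at a given firing rate, the entropy rises, which is the quantity the abstract relates to mean population firing rate.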

Tolkiehn M, Schultz SR, 2019, Temporo-nasally biased moving grating selectivity in mouse primary visual cortex

Abstract: Orientation tuning in mouse primary visual cortex (V1) has long been reported to have a random or "salt-and-pepper" organisation, lacking the structure found in cats and primates. Laminar in-vivo multi-electrode array recordings here reveal previously elusive structure in the representation of visual patterns in the mouse visual cortex, with temporo-nasally drifting gratings eliciting consistently the highest neuronal responses across cortical layers and columns, whilst upward moving gratings reliably evoked the lowest activities. We suggest this bias in direction selectivity is behaviourally relevant, as objects moving into the visual field from the side or behind may pose a predatory threat to the mouse, whereas upward moving objects do not. We found furthermore that direction preference and selectivity were affected by stimulus spatial frequency, and that spatial and directional tuning curves showed high signal correlations, decreasing with distance between recording sites. In addition, we show that despite this bias in direction selectivity it is possible to decode stimulus identity, and that spatiotemporal features achieve higher accuracy in the decoding task, whereas spike counts or population counts are sufficient to decode spatial frequencies, implying different encoding strategies.

Significance statement: We show that temporo-nasally drifting gratings (i.e. opposite the normal visual flow during forward movement) reliably elicit the highest neural activity in mouse primary visual cortex, whereas upward moving gratings reliably evoke the lowest responses. This encoding may be highly behaviourally relevant, as objects approaching from the periphery may pose a threat (e.g. predators), whereas upward moving objects do not. This result is at odds with the belief that mouse primary visual cortex is randomly organised.

Journal article

Tolkiehn M, Schultz SR, 2015, Multi-Unit Activity contains information about spatial stimulus structure in mouse primary visual cortex, 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Publisher: IEEE, Pages: 3771-3774, ISSN: 1557-170X

This study investigates the spatial and directional tuning of Multi-Unit Activity (MUA) in mouse primary visual cortex and how MUA can reflect spatiotemporal structures contained in moving gratings. Analysis of multi-shank laminar electrophysiological recordings from mouse primary visual cortex indicates a directional preference for moving gratings around 180°, while preferred spatial frequency peaks around 0.02 cycles per degree, similar to that reported in single-unit studies. Using only features from MUA, we further achieved significant performance in decoding the spatial frequency or direction of moving gratings, with average decoding performances of up to 58.54% for 8 directions, and 44% correctly identified spatial frequencies against a chance level of 16.7%.

Conference paper
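The decoding idea in the abstract above can be illustrated with a simple nearest-centroid decoder over MUA-like feature vectors. The classifier, features, and toy data below are illustrative assumptions for the sketch, not the study's actual pipeline.

```python
import math
from collections import defaultdict

def train_centroids(features, labels):
    """Nearest-centroid decoder: average the feature vectors observed
    for each stimulus label (e.g. grating direction in degrees)."""
    sums, counts = {}, defaultdict(int)
    for x, y in zip(features, labels):
        if y not in sums:
            sums[y] = list(x)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def decode(centroids, x):
    """Return the stimulus label whose centroid is closest to x."""
    return min(centroids, key=lambda y: math.dist(centroids[y], x))

# Toy MUA spike-count vectors (2 channels) for two grating directions
X = [[10, 2], [12, 1], [2, 11], [1, 9]]
y = [0, 0, 180, 180]
model = train_centroids(X, y)
print(decode(model, [11, 2]))  # → 0
```

Decoding accuracy is then the fraction of held-out trials whose predicted label matches the presented stimulus, compared against chance level (1/number of stimulus classes).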

Srivastava A, Rastogi A, Rao A, Shoeb AAM, Abid A, Fisch A, Brown AR, Santoro A, Gupta A, Garriga-Alonso A, Kluska A, Lewkowycz A, Agarwal A, Power A, Ray A, Warstadt A, Kocurek AW, Safaya A, Tazarv A, Xiang A, Parrish A, Nie A, Hussain A, Askell A, Dsouza A, Slone A, Rahane A, Iyer AS, Andreassen A, Madotto A, Santilli A, Stuhlmüller A, Dai A, La A, Lampinen A, Zou A, Jiang A, Chen A, Vuong A, Gupta A, Gottardi A, Norelli A, Venkatesh A, Gholamidavoodi A, Tabassum A, Menezes A, Kirubarajan A, Mullokandov A, Sabharwal A, Herrick A, Efrat A, Erdem A, Karakaş A, Roberts BR, Loe BS, Zoph B, Bojanowski B, Özyurt B, Hedayatnia B, Neyshabur B, Inden B, Stein B, Ekmekci B, Lin BY, Howald B, Diao C, Dour C, Stinson C, Argueta C, Ramírez CF, Singh C, Rathkopf C, Meng C, Baral C, Wu C, Callison-Burch C, Waites C, Voigt C, Manning CD, Potts C, Ramirez C, Rivera CE, Siro C, Raffel C, Ashcraft C, Garbacea C, Sileo D, Garrette D, Hendrycks D, Kilman D, Roth D, Freeman D, Khashabi D, Levy D, González DM, Perszyk D, Hernandez D, Chen D, Ippolito D, Gilboa D, Dohan D, Drakard D, Jurgens D, Datta D, Ganguli D, Emelin D, Kleyko D, Yuret D, Chen D, Tam D, Hupkes D, Misra D, Buzan D, Mollo DC, Yang D, Lee D-H, Shutova E, Cubuk ED, Segal E, Hagerman E, Barnes E, Donoway E, Pavlick E, Rodola E, Lam E, Chu E, Tang E, Erdem E, Chang E, Chi EA, Dyer E, Jerzak E, Kim E, Manyasi EE, Zheltonozhskii E, Xia F, Siar F, Martínez-Plumed F, Happé F, Chollet F, Rong F, Mishra G, Winata GI, Melo GD, Kruszewski G, Parascandolo G, Mariani G, Wang G, Jaimovitch-López G, Betz G, Gur-Ari G, Galijasevic H, Kim H, Rashkin H, Hajishirzi H, Mehta H, Bogar H, Shevlin H, Schütze H, Yakura H, Zhang H, Wong HM, Ng I, Noble I, Jumelet J, Geissinger J, Kernion J, Hilton J, Lee J, Fisac JF, Simon JB, Koppel J, Zheng J, Zou J, Kocoń J, Thompson J, Kaplan J, Radom J, Sohl-Dickstein J, Phang J, Wei J, Yosinski J, Novikova J, Bosscher J, Marsh J, Kim J, Taal J, Engel J, Alabi J, Xu J, Song J, Tang J, Waweru J, Burden J 
et al., Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.

Journal article

Dhole KD, Gangal V, Gehrmann S, Gupta A, Li Z, Mahamood S, Mahendiran A, Mille S, Shrivastava A, Tan S, Wu T, Sohl-Dickstein J, Choi JD, Hovy E, Dusek O, Ruder S, Anand S, Aneja N, Banjade R, Barthe L, Behnke H, Berlot-Attwell I, Boyle C, Brun C, Cabezudo MAS, Cahyawijaya S, Chapuis E, Che W, Choudhary M, Clauss C, Colombo P, Cornell F, Dagan G, Das M, Dixit T, Dopierre T, Dray P-A, Dubey S, Ekeinhor T, Giovanni MD, Goyal T, Gupta R, Gupta R, Hamla L, Han S, Harel-Canada F, Honore A, Jindal I, Joniak PK, Kleyko D, Kovatchev V, Krishna K, Kumar A, Langer S, Lee SR, Levinson CJ, Liang H, Liang K, Liu Z, Lukyanenko A, Marivate V, Melo GD, Meoni S, Meyer M, Mir A, Moosavi NS, Muennighoff N, Mun TSH, Murray K, Namysl M, Obedkova M, Oli P, Pasricha N, Pfister J, Plant R, Prabhu V, Pais V, Qin L, Raji S, Rajpoot PK, Raunak V, Rinberg R, Roberts N, Rodriguez JD, Roux C, Vasconcellos PHS, Sai AB, Schmidt RM, Scialom T, Sefara T, Shamsi SN, Shen X, Shi H, Shi Y, Shvets A, Siegel N, Sileo D, Simon J, Singh C, Sitelew R, Soni P, Sorensen T, Soto W, Srivastava A, Srivatsa KVA, Sun T, Mukund VT, Tabassum A, Tan FA, Teehan R, Tiwari M, Tolkiehn M, Wang A, Wang Z, Wang G, Wang ZJ, Wei F, Wilie B, Winata GI, Wu X, Wydmański W, Xie T, Yaseen U, Yee MA, Zhang J, Zhang Y et al., NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards and robustness analysis results are available publicly on the NL-Augmenter repository.

Journal article
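The transformation idea behind a framework like NL-Augmenter can be sketched as below: a transformation takes input text and returns a perturbed variant for robustness testing. The class name and interface here are illustrative only and do not reproduce the actual NL-Augmenter API.

```python
import random

class TypoTransformation:
    """Illustrative text transformation in the spirit of NL-Augmenter:
    replace characters with keyboard neighbours to simulate typos.
    Hypothetical interface, not the real framework's class."""

    NEIGHBOURS = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}

    def __init__(self, prob=0.1, seed=0):
        self.prob = prob                 # per-character typo probability
        self.rng = random.Random(seed)   # seeded for reproducibility

    def generate(self, text):
        out = []
        for ch in text:
            if ch in self.NEIGHBOURS and self.rng.random() < self.prob:
                out.append(self.rng.choice(self.NEIGHBOURS[ch]))
            else:
                out.append(ch)
        return "".join(out)

t = TypoTransformation(prob=0.3, seed=1)
print(t.generate("the robustness of language models"))
```

A model's robustness can then be probed by comparing its predictions on the original and transformed inputs; a filter would instead return a boolean used to select a data split.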

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
