Perceptually relevant remapping of human somatotopy in 24 hours
Abstract
Experience-dependent reorganisation of functional maps in the cerebral cortex is well described in the primary sensory cortices. However, there is relatively little evidence for such cortical reorganisation over the short term. Using human somatosensory cortex as a model, we investigated the effects of a 24-hour gluing manipulation in which the right index and right middle fingers (digits 2 & 3) were adjoined with surgical glue. Somatotopic representations, assessed with two 7 tesla fMRI protocols, revealed rapid off-target reorganisation in the non-manipulated fingers following gluing, with the representation of the ring finger (digit 4) shifted towards the little finger (digit 5) and away from the middle finger (digit 3). These shifts were also evident in two behavioural tasks conducted in an independent cohort, showing reduced sensitivity in discriminating the temporal order of stimuli delivered to the ring and little fingers, and increased substitution errors across this pair in a speeded reaction time task.
Article and author information
Author details
Funding
University College, Oxford
- James Kolasinski
Wellcome (104128/Z/14/Z)
- Tamar R Makin
Wellcome (102584/Z/13/Z)
- Charlotte J Stagg
Medical Research Council (MR/L009013/1)
- Saad Jbabdi
Wellcome (110027/Z/15/Z)
- Heidi Johansen-Berg
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Richard Ivry, University of California, Berkeley, United States
Ethics
Human subjects: All data were acquired in accordance with local central university research ethics committee approval (University of Oxford MSD-IDREC-C2-2013-05). Eighteen participants were recruited, each providing written informed consent to take part in this study, and for the results of this study to be published.
Version history
- Received: April 26, 2016
- Accepted: December 29, 2016
- Accepted Manuscript published: December 30, 2016 (version 1)
- Version of Record published: January 17, 2017 (version 2)
Copyright
© 2016, Kolasinski et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 2,638 views
- 491 downloads
- 36 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.