Long-term implicit memory for sequential auditory patterns in humans

  1. Roberta Bianco (corresponding author)
  2. Peter M C Harrison
  3. Mingyue Hu
  4. Cora Bolger
  5. Samantha Picken
  6. Marcus T Pearce
  7. Maria Chait
  1. University College London, United Kingdom
  2. Max-Planck-Institut für empirische Ästhetik, Germany
  3. Queen Mary University of London, United Kingdom

Abstract

Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings, and efficiently interact with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 minutes. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted up to 7 weeks. The results implicate an interplay between short (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.

Data availability

The datasets for this study can be found in the OSF repository: https://osf.io/dtzs3/ (DOI: 10.17605/OSF.IO/DTZS3)


Article and author information

Author details

  1. Roberta Bianco

    UCL Ear Institute, University College London, London, United Kingdom
    For correspondence: r.bianco@ucl.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-9613-8933
  2. Peter M C Harrison

    Computational Auditory Perception Research Group, Max-Planck-Institut für empirische Ästhetik, Frankfurt am Main, Germany
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-9851-9462
  3. Mingyue Hu

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Cora Bolger

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  5. Samantha Picken

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  6. Marcus T Pearce

    School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  7. Maria Chait

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-7808-3593

Funding

Biotechnology and Biological Sciences Research Council (BB/P003745/1)

  • Maria Chait

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Jonas Obleser, University of Lübeck, Germany

Ethics

Human subjects: The research ethics committee of University College London approved the experiment, and written informed consent was obtained from each participant. Project ID number: 1490/009

Version history

  1. Received: February 16, 2020
  2. Accepted: May 18, 2020
  3. Accepted Manuscript published: May 18, 2020 (version 1)
  4. Version of Record published: July 6, 2020 (version 2)

Copyright

© 2020, Bianco et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,093 views
  • 392 downloads
  • 25 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

  1. Roberta Bianco
  2. Peter M C Harrison
  3. Mingyue Hu
  4. Cora Bolger
  5. Samantha Picken
  6. Marcus T Pearce
  7. Maria Chait
(2020)
Long-term implicit memory for sequential auditory patterns in humans
eLife 9:e56073.
https://doi.org/10.7554/eLife.56073

