Memory: When neurons split the load

Various aspects of olfactory memory are represented as modulated responses across different classes of neurons in C. elegans.
  Itamar Lev (1)
  Manuel Zimmer (1,2) – corresponding author
  1. Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna, Austria
  2. Research Institute of Molecular Pathology, Vienna Biocenter, Austria

Memories are composed of the molecular and cellular traces that an event leaves in the nervous system. In turn, these neuronal changes enable the brain to weave together different features of the experience – for example, its outcome – with certain properties of the environment at the time.

Even the small worm Caenorhabditis elegans, a tractable and well-studied model organism with 302 neurons, can form such associations. Through conditioning, these animals can ‘learn’ to prefer a stimulus – for instance a smell – that is associated with food being present. This requires neurons to encode information so that an experience (e.g. smelling a specific odor) is correctly linked to valence (whether the situation was positive or negative, depending on the presence or absence of food).

Previous studies have already implicated specific genes and neurons in these processes (see, for example, Jin et al., 2016; Tomioka et al., 2006). However, this reductionist framework cannot fully capture how different aspects of a memory, such as experience and valence, are represented across an entire network of neurons. In C. elegans, it is possible to identify many of the neurons in these networks, and to record their activity simultaneously at single-cell resolution. This offers a unique opportunity to directly measure the features of memory traces during perception. Now, in eLife, Alon Zaslaver and colleagues at the Hebrew University of Jerusalem – including Christian Pritz as first author – report the results of an extensive series of experiments which examined how olfactory memory modulates neuronal responses in this model organism (Pritz et al., 2023).

First, the team trained groups of worms to associate a conditioning odor, butanone (diluted in a solvent), with the presence or the absence of food (appetitive vs. aversive conditioning; Colbert and Bargmann, 1995; Kauffman et al., 2010). The protocol was adapted for the animals to form either short- or long-term memories of these associations.

A choice assay experiment then confirmed that in both short- and long-term conditions, appetitive and aversive conditioning respectively increased and decreased the worms’ preference for butanone over another smell (diacetyl). Two control groups were also tested: naive animals that had not been experimented on, and worms that had been through a ‘mock’ training identical to the one received during conditioning, but in the absence of butanone (only the solvent was present).

This experimental design allowed Pritz et al. to systematically isolate and investigate the different factors that influence behavior and neuronal activity. For instance, comparing mock-treated and naive individuals helped to capture the impact of experimental parameters other than smell and valence, such as the worms experiencing starvation.

Next, Pritz et al. used calcium imaging to record the activity of the same set of 24 sensory neurons in conditioned, naive and mock-treated animals exposed to butanone or diacetyl. This revealed that, for these classes of cells, the modulation of neuronal activity in response to the odors occurred mainly for short-term rather than long-term memories. Overall, a large proportion of the sensory neurons studied showed fine changes in activity following conditioning, with a few neuron classes exhibiting a stronger response. Detailed analyses highlighted that each class could encode one or several features of the memories, such as the presence of the odor, valence or a specific aspect of the training process. Mock treatment also impacted the activity of a large proportion of sensory neurons, shedding light on how parameters such as starvation can affect neuronal responses. Overall, these results suggest that the neuronal changes associated with short-term memories are distributed across multiple types of sensory neurons, rather than one class being solely dedicated to capturing a specific element of the response.

To further explore this possibility, Pritz et al. developed machine learning algorithms that could predict the type of conditioning the worms received based on their neuronal responses. The models made better predictions if information from more neuron types (up to five) was provided. Principal components analysis, which helps to pinpoint patterns in large datasets, further supported the idea that different task parameters (conditioning odor, valence, and starvation experience) create distinct activity profiles across the sensory circuit.
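The decoding logic described above can be sketched with a toy simulation. This is purely illustrative: the simulated "neuron classes", the nearest-centroid decoder, and all numbers below are assumptions for the sketch, not the authors' actual models or data.

```python
# Toy decoding sketch: predict conditioning type from multi-neuron responses.
# All data are simulated; the point is only that pooling more neuron classes
# improves decoding when information is distributed across the circuit.
import random

random.seed(0)

N_TRIALS = 200  # simulated worms per condition

def simulate_responses(condition, n_neurons):
    # Invented toy data: each neuron class shifts its mean activity slightly
    # depending on the conditioning type, buried in unit-variance noise.
    shift = 0.4 if condition == "appetitive" else -0.4
    return [random.gauss(shift, 1.0) for _ in range(n_neurons)]

def centroid(vectors):
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

def decoding_accuracy(n_neurons):
    conditions = ("appetitive", "aversive")
    # "Train": estimate one mean response pattern (centroid) per condition.
    cents = {c: centroid([simulate_responses(c, n_neurons)
                          for _ in range(N_TRIALS)])
             for c in conditions}
    # "Test": classify fresh trials by the nearest centroid.
    correct = total = 0
    for c in conditions:
        for _ in range(N_TRIALS):
            x = simulate_responses(c, n_neurons)
            pred = min(cents, key=lambda k: sum((a - b) ** 2
                                                for a, b in zip(x, cents[k])))
            correct += (pred == c)
            total += 1
    return correct / total

acc1 = decoding_accuracy(1)
acc5 = decoding_accuracy(5)
print(f"1 neuron class: {acc1:.2f}; 5 neuron classes: {acc5:.2f}")
```

In this toy setting, accuracy rises as more neuron classes are pooled, mirroring the paper's observation; the real analyses used calcium imaging data and more sophisticated classifiers.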

Next, Pritz et al. demonstrated that modulation of the sensory neurons also impacted the interneurons that they projected onto, and which relay sensory information to the rest of the nervous system (Figure 1). Three classes of interneurons were examined: while one of them mainly responded to the mock training, the others showed conditioning-specific responses. Unlike sensory neurons, however, all three interneurons could encode both short- and long-term memories. While these initial findings are intriguing, additional work on larger datasets is probably needed to confirm whether short- versus long-term memory processes are generally allocated to specific classes of cells.

Figure 1. Conditioning induces a distributed modulation of neuronal responses to odor.

Worms were exposed to the odor butanone while food was absent (aversive conditioning; left) or present (appetitive conditioning; right); the animals formed memories of these associations, which led them to show decreased or increased preference, respectively, for butanone over an alternative odor, diacetyl. Both types of conditioning modulate a wide range of sensory neurons (circles on outer edge; ASI, AWC and other three-letter labels refer to various neuronal classes), which increase (pink) or decrease (green) their activity to varying degrees. The sensory neurons, in turn, alter the activity of three classes of interneurons (AIY, AIA and RIA) that they connect to via gap junctions or chemical synapses. The interneurons then feed information to other parts of the nervous system.

Finally, Pritz et al. used statistical modelling to examine how various sensory neurons shaped the activity of the AIY interneuron, which receives most of its inputs from these cells. The results suggest that AIY modulation was provided by different combinations of sensory neurons depending on the type of training: changes in AIY activity were driven by a single class of neurons after appetitive conditioning, but by a complex circuit of several neuronal classes after aversive conditioning. The memory of different treatment experiences is therefore retained in variable degrees of distribution (the number of classes of neurons involved); whether this could be underpinned by complex changes in the strength of the connections between sensory neurons and interneurons is an exciting hypothesis for future studies.
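As a hypothetical sketch of this kind of analysis (not Pritz et al.'s actual model): fit the interneuron's activity as a weighted sum of sensory-neuron traces, then count how many sensory classes receive appreciable weight. All traces, weights and thresholds below are simulated assumptions.

```python
# Toy linear-modelling sketch: how many sensory classes drive an interneuron?
# Simulated data only; real analyses work with calcium traces and must
# contend with correlated sensory inputs.
import random

random.seed(1)
T = 500  # time points in a simulated recording

def fit_weights(X, y, lr=0.1, steps=500):
    # Ordinary least squares via batch gradient descent (pure-Python sketch).
    n = len(X)
    w = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for t in range(T):
            err = sum(w[i] * X[i][t] for i in range(n)) - y[t]
            for i in range(n):
                grad[i] += err * X[i][t]
        for i in range(n):
            w[i] -= lr * grad[i] / T
    return w

# Three simulated sensory-neuron traces (independent noise, for illustration).
S = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(3)]

# "Appetitive-like" scenario: a single sensory class drives the interneuron.
y_app = [0.8 * S[0][t] + random.gauss(0.0, 0.1) for t in range(T)]
# "Aversive-like" scenario: several classes contribute smaller weights.
y_ave = [0.4 * S[0][t] + 0.3 * S[1][t] + 0.3 * S[2][t] + random.gauss(0.0, 0.1)
         for t in range(T)]

def n_contributors(w, thresh=0.15):
    # Count sensory classes whose fitted weight is non-negligible.
    return sum(abs(wi) > thresh for wi in w)

w_app = fit_weights(S, y_app)
w_ave = fit_weights(S, y_ave)
print("appetitive-like scenario, contributing classes:", n_contributors(w_app))
print("aversive-like scenario, contributing classes:", n_contributors(w_ave))
```

In this toy setting the fitted weights recover the two architectures: one dominant sensory input in the "appetitive-like" case, several in the "aversive-like" case, echoing the different degrees of distribution described above.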

In conclusion, the work by Pritz et al. adds to existing evidence showing that neuronal signals are distributed across the nervous system in a wide range of organisms – from reactions to stimuli and movement control in C. elegans, to memory in animals with larger brains (Lin et al., 2023; Kato et al., 2015; Owald and Waddell, 2015; Tonegawa et al., 2015). As research on C. elegans can now also examine larger neuronal networks, it should provide new insights into how the nervous system computes and yields behavior in this and other animals. Advanced machine learning algorithms could help in this effort, as they are uniquely placed to ‘decode’ the signals embedded in large neural activity datasets – for example, which odor a worm is smelling. However, the way that algorithms process that information does not necessarily match the underlying neuronal mechanisms and biological processes accurately. Addressing these problems will require further developing computational and experimental approaches alongside one another.


Article and author information

Author details

  1. Itamar Lev

    Itamar Lev is in the Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna, Vienna, Austria

    Competing interests
    No competing interests declared
    ORCID: 0000-0002-9100-5100
  2. Manuel Zimmer

    Manuel Zimmer is in the Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna and the Research Institute of Molecular Pathology, Vienna Biocenter, Vienna, Austria

    For correspondence
    manuel.zimmer@univie.ac.at
    Competing interests
    No competing interests declared
    ORCID: 0000-0002-8072-787X

Publication history

  1. Version of Record published: May 4, 2023 (version 1)

Copyright

© 2023, Lev and Zimmer

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Itamar Lev, Manuel Zimmer (2023) Memory: When neurons split the load. eLife 12:e87861. https://doi.org/10.7554/eLife.87861