Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires

  1. Jack Goffinet
  2. Samuel Brudner
  3. Richard Mooney
  4. John Pearson (corresponding author)
  1. Duke University, United States

Abstract

The growing scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
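
To make the learned-feature idea concrete, the sketch below (in PyTorch) shows a minimal VAE that maps fixed-size syllable spectrograms to a low-dimensional latent space. The architecture, layer sizes, latent dimensionality, and loss weighting here are illustrative assumptions only and do not reproduce the authors' published implementation.

```python
# Minimal spectrogram-VAE sketch (illustrative assumptions, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyllableVAE(nn.Module):
    """Map fixed-size syllable spectrograms to a low-dimensional latent space."""

    def __init__(self, spec_dim=128 * 128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(spec_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 256), nn.ReLU(),
        )
        self.to_mu = nn.Linear(256, latent_dim)      # posterior mean
        self.to_logvar = nn.Linear(256, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, spec_dim),
        )

    def forward(self, x):
        # x: (batch, freq_bins, time_bins) spectrograms, flattened internally.
        h = self.encoder(x.flatten(start_dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard-normal prior (negative ELBO).
    recon_err = F.mse_loss(recon, x.flatten(start_dim=1), reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

After training, the posterior mean for each syllable serves as its learned feature vector, playing the role that handpicked acoustic features play in conventional analyses.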

Data availability

  • Dataset 1: Online, publicly available MUPET dataset (~5 GB). Available at: https://github.com/mvansegbroeck/mupet/wiki/MUPET-wiki. Figs: 2, 3, 4d-e
  • Dataset 2: Single zebra finch data (~200-400 MB of audio) generated as part of work in progress in the Mooney Lab. Figs: 2e-f, 4a-c, 5a, 5d, 6b-e
  • Dataset 3: Mouse USV dataset (~30-40 GB of audio) generated as part of work in progress in the Mooney Lab. Figs: 4f
  • Dataset 5: A subset of Dataset 3, taken from a single mouse (~1 GB of audio). Figs: 5b-e, 6a
  • Dataset 6: 10 zebra finch pupil/tutor pairs (~60 GB of audio) generated as part of work in progress in the Mooney Lab. Figs: 7

Upon acceptance, Datasets 2-6 will be archived in the Duke Digital Repository (https://research.repository.duke.edu). DOI in process.

The previously published, publicly available MUPET dataset (Dataset 1 above) was also used.

Article and author information

Author details

  1. Jack Goffinet

    Computer Science, Duke University, Durham, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-6729-0848
  2. Samuel Brudner

    Neurobiology, Duke University, Durham, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-6043-9328
  3. Richard Mooney

    Department of Neurobiology, Duke University, Durham, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-3308-1367
  4. John Pearson

    Biostatistics & Bioinformatics, Neurobiology, Center for Cognitive Neuroscience, Psychology and Neuroscience, Electrical and Computer Engineering, Duke University, Durham, United States
    For correspondence
    john.pearson@duke.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-9876-7837

Funding

National Institute of Mental Health (R01-MH117778)

  • Richard Mooney

National Institute of Neurological Disorders and Stroke (R01-NS118424)

  • Richard Mooney
  • John Pearson

National Institute on Deafness and Other Communication Disorders (R01-DC013826)

  • Richard Mooney
  • John Pearson

National Institute of Neurological Disorders and Stroke (R01-NS099288)

  • Richard Mooney

Eunice Kennedy Shriver National Institute of Child Health and Human Development (F31-HD098772)

  • Samuel Brudner

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Jesse H Goldberg, Cornell University, United States

Ethics

Animal experimentation: All data generated for this study were obtained from experiments performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All animals were handled according to approved institutional animal care and use committee (IACUC) protocols of Duke University, protocol numbers A171-20-08 and A172-20-08.

Version history

  1. Received: February 24, 2021
  2. Accepted: May 12, 2021
  3. Accepted Manuscript published: May 14, 2021 (version 1)
  4. Version of Record published: June 18, 2021 (version 2)

Copyright

© 2021, Goffinet et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,708 views
  • 462 downloads
  • 57 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

Jack Goffinet, Samuel Brudner, Richard Mooney, John Pearson (2021) Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 10:e67855. https://doi.org/10.7554/eLife.67855

