Obtaining a consistent taxonomy of neuron types is challenging mainly because of the high dimensionality of the datasets. Coupled autoencoders are a step forward in achieving this goal.
Neurons in the mammalian brain are incredibly diverse: they exhibit a wide range of morphologies, connectivity patterns, molecular profiles, electrophysiological properties, plasticity mechanisms, and so forth. Classifying this huge neuronal diversity into consistent discrete subtypes has been one of the greatest challenges of neuroscience over the past few decades. A consistent taxonomy (or classification) of neuronal types is crucial to provide a framework for studying their function, which will help us understand how neuronal circuits participate in behavior. As an analogy, it would be very challenging to study or understand how the different constituents of a machine work and how they participate in its function without first knowing what these constituents are. In this issue of Nature Computational Science, Gala et al.1 provide a new way of achieving consistent cell type classification by combining different neuronal features (or modalities). Their proposed method, referred to as coupled autoencoders, takes advantage of deep neural networks to align datasets obtained with paired modalities.
Fig. 1: Aligning different modalities to discover cell types.
Using high-resolution gene expression data and electrophysiological properties for each neuron, the authors demonstrate that coupled autoencoders can combine these two modalities into a faithful low-dimensional latent space representation, which, in turn, can be used to determine consistent cell types across modalities by means of unsupervised clustering.
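The idea can be sketched in a minimal form: each modality gets its own encoder and decoder, and a coupling penalty pulls the two latent codes for the same cell toward each other, so that both modalities land in a shared low-dimensional space. The sketch below uses linear maps with random (untrained) weights purely for illustration; the sizes, variable names, and the simple mean-squared coupling term are assumptions standing in for the deep networks and the exact objective of Gala et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: n cells, each measured in two modalities
# (e.g. gene expression and electrophysiological features).
n, d_gene, d_ephys, d_latent = 100, 50, 20, 3
x_gene = rng.normal(size=(n, d_gene))
x_ephys = rng.normal(size=(n, d_ephys))

# Linear encoders/decoders stand in for the deep networks
# (hypothetical untrained parameters, for illustration only).
W_enc_g = rng.normal(size=(d_gene, d_latent)) / np.sqrt(d_gene)
W_dec_g = rng.normal(size=(d_latent, d_gene)) / np.sqrt(d_latent)
W_enc_e = rng.normal(size=(d_ephys, d_latent)) / np.sqrt(d_ephys)
W_dec_e = rng.normal(size=(d_latent, d_ephys)) / np.sqrt(d_latent)

def coupled_loss(x_g, x_e, lam=1.0):
    """Per-modality reconstruction losses plus a coupling penalty
    that aligns the two latent representations of the same cell."""
    z_g = x_g @ W_enc_g              # latent code from gene expression
    z_e = x_e @ W_enc_e              # latent code from electrophysiology
    recon_g = np.mean((x_g - z_g @ W_dec_g) ** 2)
    recon_e = np.mean((x_e - z_e @ W_dec_e) ** 2)
    coupling = np.mean((z_g - z_e) ** 2)   # alignment term
    return recon_g + recon_e + lam * coupling

loss = coupled_loss(x_gene, x_ephys)
```

Minimizing this objective over the encoder and decoder weights yields aligned latent codes; clustering those codes (for example with a Gaussian mixture) then gives cell type labels that are, by construction, consistent across the two modalities.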
Bugeon, S. Towards a consistent neuron classification. Nat Comput Sci 1, 100–101 (2021). https://doi.org/10.1038/s43588-021-00027-w