Is Musical Instinct Innate? AI Model Suggests So

Summary: Using an artificial neural network model, researchers found evidence that musical instinct may emerge naturally from the human brain. After training the network on a wide range of natural sounds from Google’s AudioSet, the team found that certain neurons in the network responded selectively to music, mimicking the behavior of the auditory cortex in real brains.

This spontaneous generation of music-selective neurons indicates that our ability to process music may be an innate cognitive function, formed as an evolutionary adaptation to better process sounds from nature.

Key Facts:

  1. The study used an artificial neural network to demonstrate that music-selective neurons can develop spontaneously without being taught music.
  2. These neurons showed similar behavior to those in the human auditory cortex, responding selectively to various forms of music across different genres.
  3. The research implies that musical ability may be an instinctive brain function, evolved to enhance the processing of natural sounds.

Source: KAIST

Music, often called the universal language, is a common component of all cultures. Could ‘musical instinct’, then, be something shared to some degree despite the extensive environmental differences among cultures?

On January 16, a KAIST research team led by Professor Hawoong Jeong of the Department of Physics announced that it had identified, using an artificial neural network model, the principle by which musical instinct emerges from the human brain without special learning.

The neurons in the artificial neural network model showed similar reactive behaviours to those in the auditory cortex of a real brain. Credit: Neuroscience News

Many researchers have previously attempted to identify the similarities and differences between the music of various cultures and to understand the origin of its universality.

A paper published in Science in 2019 revealed that music is produced in all ethnographically distinct cultures and that similar forms of beats and tunes are used. Neuroscientists have also found that a specific part of the human brain, the auditory cortex, is responsible for processing musical information.

Professor Jeong’s team used an artificial neural network model to show that a cognitive function for music forms spontaneously as a result of processing auditory information received from nature, without the network ever being taught music.

The research team utilized AudioSet, a large-scale collection of sound data provided by Google, and trained the artificial neural network to recognize the various sounds. Interestingly, the research team discovered that certain neurons within the network model responded selectively to music.
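The article gives no implementation details, but the general setup can be sketched. In the rough PyTorch illustration below, the architecture, input sizes, and data are all stand-ins (random tensors replace real AudioSet clips); the point is only the shape of the task: multi-label detection of natural sound events, with no music-specific training signal.

```python
# Illustrative sketch only: a small CNN trained on multi-label sound-event
# detection, the AudioSet-style task from which music-selectivity emerged.
# Random tensors stand in for real AudioSet log-mel spectrograms and labels.
import torch
import torch.nn as nn

N_CLASSES = 527              # size of the AudioSet ontology
N_MELS, N_FRAMES = 64, 96    # assumed spectrogram dimensions

class SoundEventNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):
        h = self.features(x).flatten(1)   # per-clip embedding: the "units"
        return self.classifier(h)

model = SoundEventNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()          # multi-label: clips carry many tags

for step in range(3):                     # stand-in training loop
    x = torch.randn(8, 1, N_MELS, N_FRAMES)        # fake spectrogram batch
    y = (torch.rand(8, N_CLASSES) < 0.02).float()  # fake sparse label sets
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"step {step}: loss {loss.item():.3f}")
```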

Specifically, the team observed the spontaneous generation of neurons that reacted minimally to other sounds, such as those of animals, nature, or machines, but responded strongly to various forms of music, both instrumental and vocal.
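One simple way to quantify such selectivity, offered here only as a sketch (the study’s actual statistic may differ), is a d-prime-style separability score comparing each unit’s responses to music against its responses to all other sounds:

```python
# Sketch: flag "music-selective" units by how separable their responses to
# music are from their responses to other natural sounds. Activations here
# are random stand-ins, not outputs of a trained network.
import numpy as np

rng = np.random.default_rng(0)
n_units = 64
resp_music = rng.normal(1.0, 0.5, size=(500, n_units))   # responses to music clips
resp_other = rng.normal(0.2, 0.5, size=(1500, n_units))  # animal/nature/machine sounds

mu_m, mu_o = resp_music.mean(0), resp_other.mean(0)
var_m, var_o = resp_music.var(0), resp_other.var(0)
d_prime = (mu_m - mu_o) / np.sqrt(0.5 * (var_m + var_o))

selective = np.flatnonzero(d_prime > 1.0)   # threshold chosen arbitrarily
print(f"{selective.size} of {n_units} units flagged as music-selective")
```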

The neurons in the artificial neural network model showed similar reactive behaviours to those in the auditory cortex of a real brain. For example, the artificial neurons responded less to music that had been cut into short segments and rearranged.

This indicates that the spontaneously generated music-selective neurons encode the temporal structure of music. This property was not limited to a specific genre of music but emerged across 25 different genres, including classical, pop, rock, jazz, and electronic.
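The scrambling probe can be pictured with a toy example. Below, a hypothetical “unit” that responds to frame-to-frame smoothness (a crude stand-in for sensitivity to temporal structure) loses response as a spectrogram is cut into ever shorter segments and shuffled:

```python
# Toy illustration of the temporal-scrambling test: cut a spectrogram into
# segments, shuffle them, and watch a structure-sensitive response drop.
# The "unit" here is a stand-in that measures frame-to-frame correlation.
import numpy as np

rng = np.random.default_rng(1)

def scramble(spec, seg_frames):
    """Split a (mels x frames) spectrogram into segments and shuffle them."""
    n_seg = spec.shape[1] // seg_frames
    segs = [spec[:, i * seg_frames:(i + 1) * seg_frames] for i in range(n_seg)]
    order = rng.permutation(n_seg)
    return np.concatenate([segs[i] for i in order], axis=1)

def toy_unit_response(spec):
    """Correlation of adjacent frames: high for smooth temporal structure."""
    a, b = spec[:, :-1].ravel(), spec[:, 1:].ravel()
    return float(np.corrcoef(a, b)[0, 1])

# A smooth stand-in "music" spectrogram: slow temporal envelope plus noise.
t = np.linspace(0, 4 * np.pi, 96)
spec = np.outer(rng.random(64), np.sin(t)) + 0.05 * rng.normal(size=(64, 96))

print(f"intact: {toy_unit_response(spec):.2f}")
for seg in (48, 24, 12, 6):   # coarser to finer scrambling, in frames
    print(f"scrambled (seg={seg:2d} frames): {toy_unit_response(scramble(spec, seg)):.2f}")
```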

Furthermore, suppressing the activity of the music-selective neurons was found to greatly impair recognition accuracy for other natural sounds. That is to say, the neural function that processes musical information helps with processing other sounds, and ‘musical ability’ may be an instinct formed through evolutionary adaptation to better process sounds from nature.
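This ablation result can likewise be sketched with a toy model: silence a subset of embedding units in a small trained classifier and compare accuracy on a stand-in sound-classification task before and after. Everything here, including which units count as “music-selective”, is hypothetical:

```python
# Toy illustration of the ablation test: silence some embedding units in a
# trained classifier and measure the drop in classification accuracy.
# Data, model, and the "music-selective" unit indices are all stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
emb, n_classes = 64, 10
net = nn.Sequential(nn.Linear(128, emb), nn.ReLU(), nn.Linear(emb, n_classes))

x = torch.randn(256, 128)                  # stand-in sound features
y = torch.randint(0, n_classes, (256,))    # stand-in sound labels

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):                       # fit the toy task
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

def accuracy(mask):
    with torch.no_grad():
        h = torch.relu(net[0](x)) * mask   # the mask silences chosen units
        return (net[2](h).argmax(1) == y).float().mean().item()

intact = torch.ones(emb)
ablated = intact.clone()
ablated[torch.randperm(emb)[:16]] = 0.0    # hypothetical music-selective units
print(f"intact: {accuracy(intact):.2f}  ablated: {accuracy(ablated):.2f}")
```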

Professor Hawoong Jeong, who advised the research, said, “The results of our study imply that evolutionary pressure has contributed to forming the universal basis for processing musical information in various cultures.”

As for the significance of the research, he explained, “We look forward to this artificially built model with human-like musicality becoming an original model for various applications, including AI music generation, music therapy, and research in musical cognition.”

He also commented on its limitations, adding, “This research, however, does not take into consideration the developmental process that follows the learning of music; it must be noted that this is a study on the foundation of musical information processing in early development.”

This research, conducted by first author Dr. Gwangsu Kim of the KAIST Department of Physics (current affiliation: MIT Department of Brain and Cognitive Sciences) and Dr. Dong-Kyum Kim (current affiliation: IBS), was published in Nature Communications under the title “Spontaneous emergence of rudimentary music detectors in deep neural networks.”

Funding: This research was supported by the National Research Foundation of Korea.

About this AI and music research news

Author: Yoonju Hong
Source: KAIST
Contact: Yoonju Hong – KAIST
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Spontaneous emergence of rudimentary music detectors in deep neural networks” by Hawoong Jeong et al. Nature Communications


Abstract

Spontaneous emergence of rudimentary music detectors in deep neural networks

Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training.

However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music.

The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain.

We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin.

These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.
