
The sounds of silence.

Text: Yvonne Vahlensieck

Much remains to be understood about how sounds are processed in the brain. In the University of Basel’s Brain & Sound Lab, researchers are slowly coming closer to solving the puzzle.

The University of Basel’s Brain & Sound Lab. (Photo: Christian Flierl)

“Complete silence is something we experience only very, very rarely in our lives,” says neuroscientist Tania Barkat. Even if we’re not aware of it, we are constantly surrounded by a soundscape of traffic noise, fragments of conversation, birdsong, humming refrigerators, beeping smartphones, and much more besides. Whatever the situation, however, our brains manage to filter out the information that matters to us.

How they do this is what Barkat and her team at the University of Basel’s Brain & Sound Lab are interested in. “We know much less about hearing than we do about vision,” she explains. “Eyesight can be corrected with a pair of glasses, but hearing is a much more complex affair.” Barkat argues that it is important to find out more about this topic – not least in light of the problems we can expect as a result of the increasingly widespread use of earphones, often at excessive volumes, and because abnormalities in the hearing process are potentially related to attention deficit disorders.

A challenging field

The complexity of auditory perception probably helps explain why only a small number of research groups have dared to tackle the problem so far. Moreover, experiments on how sound is processed in the brain pose considerable technical challenges: In mice, which feature in the majority of experiments, an important area of the brain devoted to hearing – the auditory cortex – is barely more than a cubic millimeter in size, and difficult to access.

Nevertheless, over the last few years Barkat and her team have succeeded in overcoming these difficulties, gaining a trove of new insights into the world of hearing. Among their findings is the fact that when a sound comes to an end, it is by no means followed by radio silence inside the brain. On the contrary: When a sound ends, the auditory cortex and other areas of the brain respond with heightened activity. This phenomenon, known as the offset response, was examined in detail for the first time by neuroscientist Magdalena Solyga in her doctoral dissertation, and her findings led her to conclude that it plays an important role in the hearing process.

For her experiments, Solyga taught mice to indicate that a sound was over by licking a reward tube. This was done over the course of a two-week training program in which the hungry animals were rewarded for the right reaction with a drop of soya milk. The experiments themselves consisted of placing the mice inside a soundproof box and playing back sounds of different frequencies and durations to them, while measuring the ensuing neural activity in various parts of the brain by means of electrodes.
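To give a rough idea of what such a stimulus set looks like, here is a minimal sketch in Python that synthesizes pure tones of varying frequency and duration. All parameters (sample rate, frequencies, durations, ramp length) are invented for illustration; the article does not specify the values used in the lab.

```python
import numpy as np

SAMPLE_RATE = 192_000  # Hz (hypothetical; mice hear well above the human range)

def pure_tone(frequency_hz: float, duration_s: float, amplitude: float = 0.5) -> np.ndarray:
    """Synthesize a pure tone with short amplitude ramps at both ends."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    tone = amplitude * np.sin(2 * np.pi * frequency_hz * t)
    # 5 ms cosine ramps so that the tone does not start or stop with a click
    ramp_len = int(0.005 * SAMPLE_RATE)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(ramp_len) / ramp_len))
    tone[:ramp_len] *= ramp
    tone[-ramp_len:] *= ramp[::-1]
    return tone

# A hypothetical stimulus set: every combination of frequency and duration
stimuli = [pure_tone(f, d) for f in (4_000, 8_000, 16_000) for d in (0.5, 1.0, 2.0)]
```

The short ramps matter in this kind of experiment: without them, the abrupt transitions produce audible clicks that could themselves act as onset and offset cues.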

Activity indicates silence

Solyga observed that neural activity shot up at the start of a sound before quickly dropping back to a low level of background activity. Only after the end of the sound did activity once again increase, for around 50 to 100 milliseconds. “In other words, neurons only signal the start and end of a constant sound,” Solyga explains. Over time, this is likely to be more energy-efficient than if the neurons were to remain in a state of constant activity.
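The energy argument can be made concrete with a toy model. The sketch below is purely illustrative (the firing rates, burst duration, and time step are invented, not taken from the study) and simply compares the total number of spikes produced by a neuron that fires brief onset and offset bursts with one that fires for the entire duration of a one-second tone:

```python
import numpy as np

DT = 0.001  # simulation time step: 1 ms

def firing_rate(sound_on: np.ndarray, mode: str) -> np.ndarray:
    """Very simplified firing-rate traces (rates in spikes/s; values are invented)."""
    rate = np.full(sound_on.shape, 5.0)  # spontaneous background activity
    onsets = np.flatnonzero(np.diff(sound_on.astype(int)) == 1) + 1
    offsets = np.flatnonzero(np.diff(sound_on.astype(int)) == -1) + 1
    if mode == "transient":
        # Brief ~75 ms bursts at sound onset and offset, as described in the article
        for i in np.concatenate([onsets, offsets]):
            rate[i:i + 75] = 80.0
    elif mode == "sustained":
        rate[sound_on] = 80.0  # fire for the whole duration of the sound
    return rate

# A one-second constant tone embedded in two seconds of silence
t = np.arange(0, 3, DT)
sound_on = (t >= 1.0) & (t < 2.0)

transient = firing_rate(sound_on, "transient")
sustained = firing_rate(sound_on, "sustained")

# Expected spike counts as a crude proxy for metabolic cost
print("transient coding:", transient.sum() * DT, "spikes")
print("sustained coding:", sustained.sum() * DT, "spikes")
```

With these made-up numbers, the transient scheme produces less than a third of the spikes of the sustained scheme while still marking both the start and the end of the sound, which is the intuition behind Solyga’s observation.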

To show that this offset response is in fact necessary, she selectively deactivated the neurons involved using a technique known as optogenetics, in which genetic manipulation makes it possible to switch particular neurons off with a pulse of light. The resulting series of tests demonstrated that mice without the offset signal found it difficult to correctly identify the end of a sound. “This shows that the offset response is not just an artifact, but has a specific function,” says Barkat, who believes that in humans it could play a significant role in understanding speech. “Spoken language also includes brief pauses, which have meaning – so we have to be able to recognize exactly when a sound is over.” Likewise, in order to properly appreciate music, we need to be able to perceive even the tiniest of pauses.

Barkat hopes that these findings could one day play a part in improving the effectiveness of cochlear implants. These inner-ear prostheses enable deaf people to understand spoken language – but are less effective in noisy environments or when listening to music, for example. “Perhaps the offset signal is lost with cochlear implants, and we could restore it with additional external stimulation.”

Ultimate goal: the bigger picture

Applying research findings to humans is the ultimate goal – even if there is still a long way to go. “After all, we are not performing these experiments to find out how hearing works in mice,” says Barkat. “But the questions we are interested in cannot be examined directly in the human brain.” Mice, however, are ideally suited to this purpose: even though they do not possess language and hear in a different frequency range from humans, their brain structures and the signaling pathways they use for hearing work in fundamentally the same way. Moreover, experiments with mice allow brain research to draw on a broad range of established techniques such as optogenetics.

Besides offset activity, Tania Barkat and her team use these methods to explore numerous other facets of hearing: another study looked at how the brain switches from passive hearing to active listening, for instance, while a current project is examining why loud sounds are perceived as lasting longer than quiet ones.

Meanwhile, experiments involving mice fitted with cochlear implants are testing the effect of these devices on processes in the brain. “As different as all these research avenues may appear, they are ultimately all about the brain’s plasticity,” Barkat explains. “We examine all these processes individually under controlled conditions in the lab in an effort to understand them. Only then can we raise the complexity level and look at how the brain is able to continually adapt its hearing to different environments and tasks under more realistic conditions.”
