Antsy About A.I. ANC

If you’ve traveled or commuted lately, you’ve probably noticed the over-ear headphones worn by the people traveling with you. You might even own a pair yourself. One of the bigger benefits of over-ear headphones is the immersive experience they create, both through higher sound quality and by blocking out the sounds of your environment.
ANC, or active noise canceling, is the ability of headphones to cancel out background noise for a better listening experience. The technology works particularly well on constant noises like the hum of a plane or train engine. What ANC is not great at is unexpected, irregular noise, such as a loud conversation in the row behind you or a car horn. It would be nice if headphones could learn from their environment on the fly and quiet those noises as well. That is where artificial intelligence makes an unsurprising entrance.
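To see why constant noise is the easy case, it helps to know the core trick behind ANC: the headphones play a phase-inverted copy of the incoming sound, and the two waves cancel each other out. Here is a minimal toy sketch of that idea in Python; it is an illustration of the physics, not how any real headphone firmware works:

```python
# Minimal illustration of the core ANC idea: play an inverted
# ("anti-phase") copy of a steady noise and the two cancel out.
# Real ANC does this adaptively with microphones and filters.

import numpy as np

t = np.linspace(0, 1, 48_000, endpoint=False)  # 1 second at 48 kHz
engine_hum = np.sin(2 * np.pi * 120 * t)       # steady 120 Hz hum
anti_noise = -engine_hum                       # phase-inverted copy

residual = engine_hum + anti_noise
print(np.max(np.abs(residual)))  # 0.0: the hum is fully canceled
```

A steady hum is predictable, so the anti-noise can be generated continuously. A sudden bark or horn gives the system no time to predict and invert the waveform, which is why conventional ANC struggles with it.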
At the University of Washington’s Mobile Lab, ANC is being taken to the next level. Not only are they working to isolate sounds that have historically been difficult to cancel out, largely because of their sudden and unexpected nature; think of dogs barking or conversations nearby. They are also working to identify sounds that you might actually want to hear or be interrupted by; think of a knock at the door while you’re vacuuming or the horn of a car while you’re on a run in the city. Better to let them explain it themselves in this video:
The Mobile Lab at the University of Washington has developed classes of sounds, and the headphones can be paired with a smart device, say a phone, so that specific sound classes can be allowed or disallowed. You can probably start to imagine building a home profile from these classes, one that lets certain sounds through, like a doorbell or door knock, while cutting out the voices from a TV or other streaming device someone else is watching. You might also want to allow some voices, like a child who needs to get your attention. Well, the team working on these headphones is imagining those scenarios too.
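To make that idea concrete, here is a minimal sketch of what such a profile could look like in software. Everything here, including the class names and the `SoundProfile` structure, is hypothetical and invented for illustration; the UW team has not published an API like this:

```python
# Hypothetical sketch of a per-environment sound-class profile.
# Class names and structure are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class SoundProfile:
    """Which detected sound classes pass through to the listener."""
    name: str
    allowed: set[str] = field(default_factory=set)
    blocked: set[str] = field(default_factory=set)

    def should_pass(self, sound_class: str) -> bool:
        # Explicitly allowed classes are played; everything else
        # (blocked or unknown) is canceled with the background noise.
        return sound_class in self.allowed

# A "home" profile: let in the doorbell and a child's voice,
# cancel the TV in the next room.
home = SoundProfile(
    name="home",
    allowed={"doorbell", "door_knock", "child_voice"},
    blocked={"tv_speech", "vacuum"},
)

for detected in ["doorbell", "tv_speech", "dog_bark"]:
    action = "pass through" if home.should_pass(detected) else "cancel"
    print(f"{detected}: {action}")
```

The appeal of a design like this is that the headphones only need to recognize which class a sound belongs to; the decision about what you hear lives in a profile you control from your phone.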
Currently, these sound classes are built manually by feeding the software tons of similar sound variants within a specific class. Think of an example class as different types of dog barks: no two sound exactly the same, but they are close enough that we can generally tell a dog from, say, a car horn.
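As a rough illustration of what “feeding in tons of variants” means in practice, here is a toy sketch of training a classifier on labeled examples. The fake features and the model choice are assumptions made for illustration; this is not the UW team’s actual pipeline:

```python
# Toy sketch of building a sound class from many labeled examples.
# A real system would extract spectrogram features from recordings;
# here we fake the features with random numbers just to show the
# shape of the workflow. Not the UW team's actual pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_features(center: float, n: int, dim: int = 16) -> np.ndarray:
    # Stand-in for per-clip audio features: each class clusters
    # around a different center, with variation between examples.
    return rng.normal(loc=center, scale=1.0, size=(n, dim))

# Many varied examples per class: no two "barks" are identical,
# but they cluster together relative to other classes.
X = np.vstack([fake_features(0.0, 200), fake_features(3.0, 200)])
y = np.array(["dog_bark"] * 200 + ["car_horn"] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A new, unseen "bark-like" clip should land in the dog_bark class.
new_clip = fake_features(0.2, 1)
print(clf.predict(new_clip))  # ['dog_bark']
```

The point of the variety is generalization: the classifier learns what barks have in common, so a bark it has never heard before still lands in the right class.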
Alright, so we have a repetitive task: find differing dog barks, feed them into a learning process, and build a class from those sounds. That sounds like a job for A.I., and while we’re not there yet, we may be closer to that future than we think. I'm excited to check out some of the latest and greatest in audio technology this June at InfoComm 2024.