February 26, 2017

A journey into the human ear



Blog post: Team Hertz

Hey dear reader! Have you ever wondered how finely humans can differentiate between two pitches? Or whether they can actually differentiate better than the microphone of a phone? That's what our team investigated during this second Biosensors week!


But first, what is sound and how do we perceive it?

Actually, the perception of sound by humans and by machines is not as different as we might think. Sound is a perturbation of the air: a wave that makes air move.
Our ears, like the inner part of a microphone, pick up those changes in the air thanks to a membrane that moves as "sound" hits it. The human ear sends this sound to the brain thanks to the cochlea, which converts those movements into neural signals. For their part, phones process the movements as electric impulses and convert them into an .mp3 (or other format) file.
Sound can be characterised by its frequency, which we could define as the number of back-and-forth movements of the air per second (higher pitches make the air oscillate faster than lower ones).
If you want more information on what sound and frequency are, take a look at this video!
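To make the idea of frequency concrete, here is a minimal sketch (our own illustration, not part of the experiment) that generates the samples of a pure tone as a sine wave; the 44100 Hz sample rate is an assumption, chosen because it is the usual CD-quality value:

```python
import math

def pure_tone(freq_hz, duration_s=1.0, sample_rate=44100):
    """Samples of a sine wave: the air oscillates freq_hz times per second."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

low = pure_tone(60)     # low pitch: 60 oscillations per second
high = pure_tone(2000)  # high pitch: 2000 oscillations per second
```

The only thing that differs between the low and the high tone is how fast the sine wave oscillates; that single number is the frequency.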


Man vs. Machine (you know, I, Robot and that kind of stuff) is the myth that inspired us. We are three twenty-something students, steeped in modern geek culture, so the myth of human-machine competition fascinates us.


We also wanted to see whether the resolution of the ear was better at the frequencies of the human voice, so we tested 3 frequencies: one lower than the human voice (60 Hz), one higher (2000 Hz), and one in the range of the voice (250 Hz).


But how did we measure the resolution of the human ear?

Well, in humans it is very difficult to measure things directly, especially if you don't want to be too invasive. So we designed an experiment in which people themselves would tell us whether they heard one sound or two. For each of the 3 frequencies listed above, we created 4 soundtracks, each composed of 2 sounds: the first being what we called the "basal frequency" (60, 250 or 2000 Hz), and the second being a frequency very close to the first (with a difference of 2 Hz, 4 Hz, 8 Hz or 20 Hz). That way, we just had to ask people whether they heard one sound or two on each of the soundtracks, and then check whether their response was right. We also made them listen to the basal frequencies alone, as a control to verify that they really heard no difference in those cases.
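The stimulus design above can be sketched in code. This is our own reconstruction, not the actual scripts we used: each trial soundtrack is one second of the basal tone followed by one second of a slightly offset tone, and the controls repeat the basal tone alone. The sample rate is an assumed CD-quality value:

```python
import math

SAMPLE_RATE = 44100  # assumed CD-quality sampling

def tone(freq_hz, duration_s=1.0):
    """One pure sine tone at freq_hz."""
    n = int(duration_s * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def soundtrack(basal_hz, offset_hz):
    """One second of the basal tone, then one second of basal + offset."""
    return tone(basal_hz) + tone(basal_hz + offset_hz)

# 3 basal frequencies x 4 offsets = 12 trial soundtracks, plus 3 controls
trials = {(b, d): soundtrack(b, d)
          for b in (60, 250, 2000)
          for d in (2, 4, 8, 20)}
controls = {b: tone(b, 2.0) for b in (60, 250, 2000)}
```

A listener who can resolve a given offset should answer "two sounds" for that trial and "one sound" for the matching control.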


The microphone would be the one on a smartphone, both for its good quality and for the easy transfer to an .mp3 file. If you wonder how a microphone works, here is an informational picture that sums the subject up really quickly. We would then process the file with Audacity and measure the frequency, to check how close it was to the real value (i.e. its accuracy).
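The kind of frequency measurement Audacity performs can be sketched with a Fourier transform: find the tallest peak in the spectrum of the recording. This is a simplified stand-in for Audacity's analysis, written with NumPy:

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Frequency (Hz) of the tallest peak in the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    peak_bin = int(np.argmax(spectrum[1:])) + 1  # skip the 0 Hz (DC) bin
    return peak_bin * sample_rate / len(samples)

# Sanity check on a synthetic 250 Hz tone
sr = 8000
t = np.arange(sr) / sr
tone_250 = np.sin(2 * np.pi * 250 * t)
print(dominant_frequency(tone_250, sr))  # → 250.0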


On the biological side, we asked people to listen to a sound and tell us whether they heard one pitch or two. If you are not sure how the human ear works, first watch this video about human ears for an overview of its functioning. Some of the sounds were single-pitched, some were double-pitched. Once we had the data, we wanted to see from which frequency difference (between 0 Hz and 20 Hz) humans could perceive the two different sounds, and at which basal frequency they did so best.
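The analysis step can be illustrated with a small sketch. The response data below is entirely hypothetical, invented only to show the shape of the computation (the real answers are in Figure 1): for each basal frequency we look for the smallest offset at which a listener reported hearing two sounds.

```python
# Hypothetical responses: (basal_hz, offset_hz, heard_two_sounds)
responses = [
    (60, 2, False),   (60, 4, True),    (60, 8, True),    (60, 20, True),
    (250, 2, False),  (250, 4, False),  (250, 8, True),   (250, 20, True),
    (2000, 2, False), (2000, 4, False), (2000, 8, False), (2000, 20, True),
]

def threshold(responses, basal_hz):
    """Smallest tested offset at which two sounds were reported, or None."""
    heard = sorted(off for b, off, two in responses if b == basal_hz and two)
    return heard[0] if heard else None

for basal in (60, 250, 2000):
    print(basal, "Hz ->", threshold(responses, basal), "Hz difference")
```

The lower this threshold, the better the resolution of the ear at that basal frequency.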



Results and conclusions


Figure 1. Graph showing from which frequency difference (0 Hz, 2 Hz, 4 Hz, 6 Hz or 20 Hz) from the basal one (60 Hz, 250 Hz and 2000 Hz) the human ear (little ear drawing) or the microphone (little microphone drawing) differentiates two sounds. Axes are not to scale.


As we can see on this graph, the microphone is very consistent and precise in its resolution (that is, its capacity to perceive small differences between measurements). By contrast, the human ear does not have the same resolution for a sound of 2000 Hz as for a sound of 60 Hz: it resolves smaller differences at lower frequencies.
Thus we can say that a smartphone microphone has a better resolution than the human ear.
Even so, we have to be careful with these results, as the statistical tests were not perfectly suited to our experiment.


To fully understand our project, contact us by...

Email: lara.narbona@cri-paris.org
Twitter: @AyuuPool


Blog disclaimer

The content created by the Learning thru research Student Bloggers and those providing comments are theirs alone, and do not reflect the opinions of Centre de recherche interdisciplinaire, University Paris Descartes or any employee thereof. The authors of posts and comments are responsible for the accuracy of any of the information supplied on this blog, as well as for any content copyright issues.