February 1, 2016

EchoFind: where does the sound come from?

EchoFind is an interdisciplinary project that combines concepts from informatics, biology and physics. Our project aims to identify factors that can affect our senses by deceiving them. For example, reading aloud words written in different colors is difficult because our eyes also see the color of the word and our brains get confused: if we want to say the color of the following words (red, blue, white, pink, green, black), we will be influenced by the fact that the words are names of colors that don't match the color they are written in. During our project we focused on a specific sense: hearing. Even our hearing can be confused by external factors. We wanted to compare the accuracy of a person's hearing and of a microphone at identifying the direction a sound is coming from. We also wanted to see whether the content of the message ("I am on your left/right") can alter the perception of the sound. Which of the two sensors will locate the sound better?

The biological sensor that we decided to test is human hearing. For the experiment we asked 19 people to complete a positive and a negative control before starting the test. These controls consisted of listening to a video to check whether the person had hearing problems. Once the controls were over, the person was blindfolded and placed in a room where noise was minimized. The person held a clock face with 12 markers that allowed them to point to where they thought the sound was coming from. For example, if the sound came from the left, the person would put their hand on the marker corresponding to 9, and if the sound came from behind them, they would touch marker 6.
Picture of the “clock” used as an indicator by test subjects
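As a small illustration (not code we actually wrote for the experiment), here is how a marker on this clock can be converted into a direction angle, with 12 meaning straight ahead, 3 the right, 6 behind and 9 the left:

#include <iostream>

// Each marker covers 30 degrees of the circle (360 / 12).
// Marker 12 maps to 0 degrees (straight ahead), 3 to 90, 6 to 180, 9 to 270.
int markerToBearing(int marker) {
    return (marker % 12) * 30;
}

int main() {
    int markers[] = {12, 3, 6, 9};
    for (int marker : markers) {
        std::cout << "Marker " << marker << " -> "
                  << markerToBearing(marker) << " degrees\n";
    }
    return 0;
}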

Around the room there are four speakers that play a sound message one at a time.  The messages transmitted ("I'm on your left", "I'm on your right", "I'm in front of you", "I'm behind you") don't always correspond to the position of the speaker in the room, in order to confuse the test subject.  Here is the setup of the room:


Diagram of experimental room setup for both biological and electronic sensor.
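To give an idea of how the trials can be mixed, here is a purely illustrative sketch (not the procedure we actually used to prepare the playlist) that pairs the four speaker positions with shuffled messages, so that a message does not necessarily match the speaker that plays it:

#include <algorithm>
#include <array>
#include <iostream>
#include <random>
#include <string>

int main() {
    std::array<std::string, 4> positions = {"front", "right", "behind", "left"};
    std::array<std::string, 4> messages  = {"I'm in front of you", "I'm on your right",
                                            "I'm behind you", "I'm on your left"};

    // Shuffle the messages independently of the speaker positions, so that some
    // trials have a matching message and others a contradictory one.
    std::mt19937 rng(std::random_device{}());
    std::shuffle(messages.begin(), messages.end(), rng);

    for (std::size_t i = 0; i < positions.size(); ++i) {
        std::cout << "Speaker at " << positions[i]
                  << " plays: \"" << messages[i] << "\"\n";
    }
    return 0;
}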

 


For the electronic sensor we created a device that we rotated around the room to see where the sound intensity was highest.  We attached the microphone amplifier to a breadboard and then to an Arduino.  We moved the microphone in front of the different positions, as you can see in the following gif:

Gif of electronic sensor protocol
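Below is a minimal Arduino sketch of the kind we ran on the device, assuming the microphone amplifier's output is wired to analog pin A0 (the exact pin and the length of the sampling window are assumptions for illustration). It samples the signal for a short window and prints the peak-to-peak amplitude, which can then be compared across the different orientations of the sensor:

const int micPin = A0;              // analog input from the microphone amplifier (assumed wiring)
const unsigned long windowMs = 250; // sampling window in milliseconds (illustrative value)

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long start = millis();
  int signalMax = 0;
  int signalMin = 1023;

  // Record the highest and lowest readings during the window.
  while (millis() - start < windowMs) {
    int sample = analogRead(micPin);
    if (sample > signalMax) signalMax = sample;
    if (sample < signalMin) signalMin = sample;
  }

  // A louder sound gives a larger peak-to-peak value.
  Serial.println(signalMax - signalMin);
}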

We obtained very interesting results from both the biological and the electronic sensor.  In the following figure you can see that the 19 people did not all give the exact position the sound was coming from.  The results are dispersed, meaning that they are distributed over a more or less wide area.  For the biological sensor, there is a relatively acceptable dispersion of results.  For example, for sound B, coming from the front right, almost everyone identified that the sound was coming from the arbitrary position 2, so the dispersion of the results is very small.  For the electronic sensor, the dispersion is inconsistent.  For most sounds, the intensity of the signal measured by the sensor varies too little to be able to choose the highest value and deduce the exact direction the sound is coming from.

Graph of human sensor results
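To make the idea of dispersion more concrete, here is an illustrative snippet (with made-up answers, not our real data) that converts each subject's marker into an angle and measures how concentrated the answers are around a single direction:

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Made-up answers for one sound: the markers pointed to by the subjects.
    std::vector<int> markers = {2, 2, 3, 2, 1, 2, 2, 3, 2, 2};

    const double PI = 3.14159265358979;

    // Average the unit vectors of the answers; the length R of the mean vector
    // is close to 1 when everyone agrees and close to 0 when answers are scattered.
    double sumX = 0.0, sumY = 0.0;
    for (int m : markers) {
        double angle = (m % 12) * 30.0 * PI / 180.0;  // marker -> angle in radians
        sumX += std::cos(angle);
        sumY += std::sin(angle);
    }
    double R = std::sqrt(sumX * sumX + sumY * sumY) / markers.size();

    std::cout << "Mean resultant length R = " << R
              << " (1 = full agreement, 0 = fully dispersed answers)\n";
    return 0;
}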

From these results we can draw several conclusions.  First, humans can detect the general direction a sound is coming from but not always its precise position.  We also noticed that for the three sounds with contradictory messages, the dispersion was larger, possibly indicating that the message distorted perception.  Since our ears are complex sensors, composed of many small bones and organs, and since the signals are processed by our brains, the sentence's message is also taken into account when determining the direction a sound is coming from.  From the results of the electronic sensor, we found that our microphone amplifier was unreliable even when the sound was louder and closer than it was for the biological sensor.  We can conclude that to detect where a sound is coming from, it is better to be a human with two ears than a robot with a microphone amplifier pointed in each direction.  A human ear is more efficient and accurate than our microphone amplifier.  We would like to redo this experiment with more test subjects, in a soundproof room and with another electronic sensor to confirm our findings.

If you want to find out more, you can check out some of the research that has been done on this topic:

