June 2007

One of the purposes of TweetyBot is to be able to know where sounds come from. This isn’t actually required to teach birds to sing correctly, but it’s so much fun. And ultimately, the robot would be able to localize which bird is responding and turn its “head” toward it…

Anyway. Here’s the problem: having two microphones, how do we know where the sound comes from? What’s the angle?

The problem looks like this: if we can measure the delay between the moment the sound wave hits mic A and the moment it hits mic B, then we can compute the corresponding distance (the speed of sound being constant), and that distance directly depends on the angle/direction of the sound. Measuring this delay is actually feasible, so that’s the good news.

Considering the borderline cases, we have:

  • if the sound comes from the far left or far right, the “delay distance” is the same as the distance between the mics
  • if the sound source is equidistant from the two mics (straight ahead), the delay distance is null (no delay)

This smells like sine or cosine… After a few hours (days?) of dusting off the maths and geometry from when I was a little child, here’s a figure summarizing the problem, with the solution.
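For reference, here is the relation in formula form, under the plane-wave (far-field) approximation discussed in the note below. D is the distance between the mics, Δt the measured delay, and c the speed of sound (about 343 m/s at room temperature); the mic spacing isn’t fixed anywhere in this post, so treat this as a sketch of the geometry rather than the final implementation:

\[
\Delta d = c \cdot \Delta t, \qquad \cos\theta = \frac{\Delta d}{D} \;\Rightarrow\; \theta = \arccos\left(\frac{c \cdot \Delta t}{D}\right)
\]

with θ measured from the axis joining the two microphones. The borderline cases check out: Δd = D gives θ = 0° (sound coming from the far left or right, along the mic axis), and Δd = 0 gives θ = 90° (sound coming from straight ahead).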

Note: I’ve submitted the problem to Master Fenyo. As usual, he said:

“- You’re dumb… There are multiple solutions to your problem since there’s one equation and two unknown variables.
- Ah…
- Yes. Considering your problem in a Euclidean space, you can see vectors blabla, blabla…
- Ah… But…
- No, it won’t work!
- But look at this figure.
- OK. This is only valid if you consider you have a flat/plane sound wave.
- Ah…
- This is only valid if the distance between your microphones is greater than…
- OK. Anyway, whatever the distance is, I assume I can use any sound wave form I want to get my result the way I want…”

That being said, he’s right, and the results will only be an approximation…

After trying a simple peak detector (failed) and a bar-graph LM3916-based sound sensor (almost a success), it’s time to reach for the best, the amazing but scary analog-to-digital conversion… It should have been the most difficult part; it’s actually the simplest… Yes, thanks to a not-so-indigestible 16F88 datasheet, some nice Jal libraries and precious information from Great Bert’s website, I can now get this kind of graph (rapping near the electret microphone):

How does this work?

It uses the LM386-based electret preamp from the peak detector (refer to the mainboard for the whole base schematic). Since it converts sound into a voltage, it can be wired directly to the PIC 16F88 (I hope/think so…).

The Jal code is quite simple (use SirBot’s trunk):

include sb_config
include sb_protocol
include sb_mainboard

-- configure ADC
const ADC_hardware_Nchan      = 3         ;number of selected channels
const ADC_hardware_NVref      = 0         ;number of external references
const ADC_hardware_Rsource    = 10_000    ;maximum source resistance
const ADC_hardware_high_resolution = false ;true = high resolution = 10 bits
include adc_hardware
ADC_init

pin_a0_direction = input    ; electret mic is connected to...
forever loop
    var byte res = ADC_read_low_res(0)   -- 8-bit sample (0..255) from channel AN0
    echo(res)                            -- send the sample back over the serial link
end loop

For now, only one ADC channel is used, but soon there will be at least two, to localize sound in space (see later). No external Vref is used, so the +5V/0V supply rails are used as references. That’s OK, since the electret preamp output stays within that range.
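As a quick sketch of what the two-channel version could look like, assuming the second preamp is wired to AN1 (the pin and channel used here are just an illustration, not the final wiring; the ADC config above already selects 3 channels, so nothing else needs to change):

pin_a0_direction = input    ; first electret preamp on AN0
pin_a1_direction = input    ; second electret preamp, assumed on AN1
forever loop
    var byte mic_a = ADC_read_low_res(0)   ; sample mic A
    var byte mic_b = ADC_read_low_res(1)   ; sample mic B
    echo(mic_a)
    echo(mic_b)
end loop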

I’ve tested the whole thing in “real” conditions, that is, recording my birds. The result is quite nice: the sound sensor is able to detect when the birds sing “like a big fat pig”. There may be problems detecting when they just “twitter in the fresh air of the morning”, though…

My first idea was to set two thresholds: one above which the birds are considered to be twittering, and a higher one above which they sing like a big fat pig… This way, when the bot plays its simulated songs, I’d be able to know whether the birds are responding the right way or not. This will probably require better amplification.
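A minimal sketch of that two-threshold idea, in the same Jal style as above (the threshold values and the report codes are made-up placeholders, to be tuned against real recordings):

-- hypothetical levels, to be tuned with real recordings
const byte TWITTER_LEVEL = 40
const byte BIG_FAT_LEVEL = 120

var byte level = ADC_read_low_res(0)
if level > BIG_FAT_LEVEL then
    echo(2)      -- singing "like a big fat pig"
elsif level > TWITTER_LEVEL then
    echo(1)      -- just twittering
else
    echo(0)      -- silence / background noise
end if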

Anyway, this sound sensor seems to be the most usable one so far:

  • few components are required
  • no need to adjust sensitivity: everything can be configured in software
  • the result is far richer than a binary response (sound detected or not)
  • this is a first step toward actually recording sounds and playing them back from the PC
The next step is to determine whether the acquisition time (analog-to-digital conversion) is short enough to use two (or three) sensors to localize where sounds come from (see graph here). That time has to be short compared to the time it takes the sound wave to hit one sensor, then the other. This will require the use of timers…
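A rough order-of-magnitude check (the mic spacing isn’t fixed anywhere in this post, so the 20 cm below is just an assumed example, and the ADC timing is a ballpark figure from the 16F88 datasheet):

\[
\Delta t_{max} = \frac{D}{c} \approx \frac{0.20\ \text{m}}{343\ \text{m/s}} \approx 580\ \mu\text{s}
\]

One acquisition plus conversion on the 16F88 is on the order of a few tens of microseconds, so sampling both channels within that window looks feasible on paper; the timers will tell.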