handy to know a little about sound in general. So, here
goes: Sound consists of waves that travel through some
type of medium, like air or water. The size of the waves
determines their amplitude — or volume — and how close the
waves are to one another determines their frequency. The
farther apart the waves, the lower the frequency, and vice
versa. Take a look at Figure 2.
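As a rough illustration (plain Python, with an assumed sample rate chosen just for this example), a tone can be described by nothing more than its frequency and its amplitude:

```python
import math

SAMPLE_RATE = 8000  # samples per second (an assumed value for illustration)

def sine_wave(freq_hz, amplitude, n_samples):
    """Generate n_samples of a sine wave: a higher freq_hz means a
    higher pitch; a larger amplitude means a louder volume."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n_samples)]

# One second of a 440 Hz tone at half volume; its peaks never
# exceed the amplitude value.
tone = sine_wave(440, 0.5, SAMPLE_RATE)
```

Raising `freq_hz` packs the wave crests closer together (higher pitch); raising `amplitude` makes the crests taller (louder volume), exactly as Figure 2 depicts.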
When humans speak, folds of tissue in the throat — the
vocal cords inside the larynx — vibrate at different speeds.
Air from the lungs passes over these folds, and their
vibration turns the airflow into rapidly changing sound waves. A
60 SERVO 06.2012
similar process is involved in making sound from a guitar
string (the string vibrates, disturbing the air around it) or a
saxophone (a wooden reed vibrates as air is blown over it).
Electronic sound generators don’t use a larynx, string,
or reed, but they do use the same general principles of
waves. Sound production begins with an oscillator circuit —
in the case of a microcontroller, the oscillator is synthesized
by using software to control a timer or counter. In its very
basic form, the output of the oscillator is routed to a pin on
the microcontroller which, in turn, is attached to a speaker.
As the oscillator changes speed, the frequency of the tone
heard through the speaker changes, and with it the pitch.
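On most microcontrollers, the timer toggles the output pin once per half cycle, so the count between toggles follows directly from the system clock and the desired pitch. A minimal sketch in Python, assuming an 80 MHz clock (a typical setting for the Propeller):

```python
CLOCK_HZ = 80_000_000  # assumed system clock rate (80 MHz)

def timer_half_period(tone_hz):
    """Clock ticks between pin toggles. Toggling the pin twice per
    cycle produces a square wave at tone_hz on the output pin."""
    return CLOCK_HZ // (2 * tone_hz)

# Middle A (440 Hz) requires a toggle roughly every 90,909 ticks;
# a higher tone_hz gives a smaller count, i.e., a faster oscillator.
```

The actual register names and counter modes depend on the chip, but the arithmetic is the same: pitch is set by how many clock ticks elapse between toggles.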
Most microcontrollers have more than one
timer/counter, so it’s technically possible to produce more
than one tone at a time, with each tone at a different
pitch. Each separate tone is called a voice. Any voice can be
used by itself, though it is common to combine them to
produce more elaborate sounds.
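Combining voices amounts to summing their samples and scaling the result so it stays in range. A small sketch (plain Python, with an illustrative sample rate) of two sine voices mixed into one signal:

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for illustration

def sine(freq_hz, i):
    """Sample i of a unit-amplitude sine voice at freq_hz."""
    return math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)

def mix(freqs, n_samples):
    """Sum one voice per frequency, then divide by the voice count
    so the mixed signal stays within -1.0 .. 1.0."""
    return [sum(sine(f, i) for f in freqs) / len(freqs)
            for i in range(n_samples)]

# Two voices a fifth apart (440 Hz and 660 Hz) mixed together.
chord = mix([440, 660], SAMPLE_RATE)
```

Dividing by the number of voices is the simplest way to prevent clipping; a real synthesizer might weight the voices unequally instead.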
Sound is often characterized by the shape of the waves
that produce it, not just the frequency of those waves. The
shape of the wave is one thing that affects the timbre of
the sound. Four common wave shapes are sine, sawtooth,
triangle, and square, as depicted in Figure 3.
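Each of the four shapes is easy to express as a function of phase — the position within one cycle, here running from 0.0 to 1.0 — normalized to the range -1 to 1. A quick sketch:

```python
import math

def sine(phase):
    return math.sin(2 * math.pi * phase)

def sawtooth(phase):
    return 2.0 * phase - 1.0              # ramps steadily up, then snaps back

def triangle(phase):
    return 1.0 - 4.0 * abs(phase - 0.5)   # ramps up, then back down

def square(phase):
    return 1.0 if phase < 0.5 else -1.0   # jumps between two fixed levels
```

The abrupt edges of the square and sawtooth waves are what give them their buzzy, harmonically rich timbre compared with the smooth sine.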
More sophisticated sound synthesis uses a concept
called the envelope generator. It’s a kind of dynamic
volume control that varies over time. The repeating rising
and falling sound of a siren is a simple example of an
envelope. For more complex sounds — like synthesizing
musical instruments — the envelope involves more elaborate
and often rapidly occurring changes.
The basic sound envelope is characterized by four
distinct phases (see Figure 4): attack, decay, sustain, and
release, or ADSR. These phases are graphically shown as a
series of ramps.
FIGURE 2. Sound waves and the electrical signals that
produce them vary in frequency and intensity. The higher
the frequency, the higher the pitch of the sound. Greater
intensity produces louder volume.
• Attack indicates how fast the sound comes to full
volume, such as when a piano key is first pressed.
• Decay is how quickly the sound drops from its initial peak.
• Sustain is the constant volume the sound holds after the
decay, for as long as the key remains pressed.
• Release is how quickly the sound finally fades out once
the key is let go.
These parameters (along with the wave shape) help to
make the oscillation sound like something familiar to us — a
piano or a violin, for instance. Each of these instruments
has a characteristic ADSR envelope. Note that on some sounds,
one or more of the ADSR phases may be very short or nonexistent.
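The four phases can be sketched as a piecewise gain function; the times and levels below are arbitrary example values, not taken from any particular instrument:

```python
def adsr_gain(t, attack=0.05, decay=0.10, sustain_level=0.7,
              note_len=0.50, release=0.20):
    """Volume multiplier (0.0 to 1.0) at t seconds after the key is
    pressed: ramp up, fall to the sustain level, hold until the key
    is released at note_len, then fade out."""
    if t < attack:                        # attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                # decay: fall 1 -> sustain_level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_len:                      # sustain: hold steady
        return sustain_level
    if t < note_len + release:            # release: fade to silence
        return sustain_level * (1.0 - (t - note_len) / release)
    return 0.0
```

Multiplying each oscillator sample by `adsr_gain(t)` shapes the raw tone into something with a recognizable beginning, middle, and end. Shrinking a phase's duration toward zero effectively removes it, matching the note above about very short or nonexistent phases.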
Using the Propeller to
Generate Basic Tones
An easy task for the Propeller is to generate tones. The
FIGURE 3. Four primary waveforms -- sine, sawtooth, triangle,
and square -- produce different types of timbres that affect
how the sound is perceived.