If you’ve read my previous SERVO Magazine articles — Experimenting with Machine Intelligence (October 2013) or Teaching a Robot to Play Catch with Minimal Mathematics (August 2014) — you know my approach
to such topics is certainly unconventional. This article
continues that irreverence with a non-standard look at neural networks.
Organic Neural Networks
An animal’s brain (at least in a simplified model) is
composed of neurons connected together by a network of
electrically conductive pathways and synapses. When a
neuron in the network fires, it generates electrical signals
that travel down pathways to synapses and their associated
neurons. When the synapses receive a signal, they
chemically encourage or inhibit the firing of their neurons.
The receiving neurons that are encouraged more than
inhibited will then fire and send signals to even more
neurons. The resulting network pattern can be thought of
as a simple memory. An organic network can learn through
a trial-and-error process. Babies, for example, try random
movements in order to learn to control their limbs. Organic
learning is accomplished when an animal repeats actions
that work, because the repeated firing of a synapse actually
makes that synapse easier to fire in the future. We often use
repetition to memorize things; we repeat a phone number
or someone’s name several times, for example, in order to
commit it to memory.
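This "repetition strengthens a synapse" idea can be sketched in a few lines of code. All of the numbers below (the learning rate, threshold, and starting weight) are invented for illustration; the point is only that repeated firing raises a weight until it clears a firing threshold, in the spirit of Hebbian learning:

```python
learning_rate = 0.1  # strengthening applied per repetition (arbitrary)
threshold = 1.0      # receiving neuron fires if its input exceeds this
weight = 0.2         # initial synaptic strength: too weak to cause firing

print(weight > threshold)   # False: one firing can't yet trigger the neuron

# Trial and error: each repetition of the action strengthens the synapse,
# echoing the Hebbian "fire together, wire together" idea.
for repetition in range(10):
    weight += learning_rate

print(weight > threshold)   # True: repetition has made firing easier
```

After ten repetitions the weight has grown from 0.2 to 1.2, so a signal that once died at the synapse now propagates onward.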
An animal’s external senses (touch, vision, smell, etc.)
can serve as the inputs to its neural network. These
signals propagate forward through the network, creating
various patterns based on the organism’s current memories.
Eventually, these signals produce an output pattern that
can activate muscles and trigger other activity in the organism.
Artificial Neural Networks
Artificial neural networks (ANNs) created with software
allow us to emulate the parallel processing of an
organic brain. ANNs have been shown to be valuable in
many situations, especially those where the relationships
between the input variables are difficult to describe, thus
making it problematic (if not impossible) to develop an
algorithm for solving the problem.
Figure 1 shows a small ANN composed of 10 neurons
labeled A through J. The first layer provides the inputs for
the network, and the final layer provides the outputs. In between
is the hidden layer (multiple hidden layers are possible). It
has been shown that a network without a hidden layer can only
solve linearly separable problems, so at least one hidden layer
is essential for anything beyond the most trivial cases; even
the classic XOR function requires one.
Notice that every neuron in each layer connects to
every neuron in the next layer. A weight will be assigned to
each of these connections. This weight is analogous to the
propensity to fire of an organic synapse. Notice that one of
the connections in Figure 1 is emphasized by being bolder
than the others. Since it connects neuron H to neuron J, we
can refer to the weight of this connection as
Weight(H-J). Let’s see how these weights control the propagation of
signals through the network.
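A concrete sketch of that propagation follows. The way the ten neurons A–J are split into layers here (inputs A–C, hidden neurons D–H, outputs I and J) is an assumption made for illustration, as are the random weights and the sigmoid activation; only the fully connected layout and the Weight(H-J)-style naming come from the discussion above:

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# A hypothetical split of Figure 1's ten neurons into three layers.
layers = [["A", "B", "C"], ["D", "E", "F", "G", "H"], ["I", "J"]]

# Every neuron connects to every neuron in the next layer, and each
# connection gets a weight; weight[("H", "J")] plays the role of
# Weight(H-J) from the text. Random starting values stand in for an
# untrained network.
weight = {}
for src_layer, dst_layer in zip(layers, layers[1:]):
    for src in src_layer:
        for dst in dst_layer:
            weight[(src, dst)] = random.uniform(-1.0, 1.0)

def sigmoid(x):
    # Squashes a weighted sum into (0, 1), a smooth stand-in for the
    # fire/don't-fire behavior of an organic neuron.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    # Propagate values layer by layer: each neuron's value is the
    # activation of the weighted sum of the previous layer's values.
    values = dict(zip(layers[0], inputs))
    for src_layer, dst_layer in zip(layers, layers[1:]):
        for dst in dst_layer:
            total = sum(values[src] * weight[(src, dst)] for src in src_layer)
            values[dst] = sigmoid(total)
    return [values[n] for n in layers[-1]]

outputs = forward([1.0, 0.0, 1.0])  # example values for inputs A, B, C
print(outputs)  # one value per output neuron, each between 0 and 1
```

Changing any single weight, such as weight[("H", "J")], shifts the output pattern, which is exactly the lever a learning procedure has to work with.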
The following is a simplified discussion of the topic. An
Internet search will provide far more details for readers with
a deeper interest in the mathematics. Let’s assume that the
input neurons A, B, and C in Figure 1 have the values of 1,
Animals have a parallel processing brain that is programmed generally
through interactions with their environment. This article shows how an
artificial neural network can control a robot, but more importantly, it explores
ways of letting the robot learn by applying the techniques used by animals.
48 SERVO 04.2016