The idea of building machines that reflect biological structures is certainly not new. From Leonardo da Vinci's dreams of winged flying machines to clockwork automata, humans have been trying to build machines that can perform tasks as successfully as the myriad creatures of the natural world.
In my view, the term BIOMORPHIC engineering
refers to engineering or scientific efforts to copy and exploit what we understand
about the functional benefits of the physical bodies of creatures, such as
the utility of arms, hands, fingers, legs, and feet.
I think that this is different from
BIOMIMETIC engineering (which I believe to be a superset of these endeavors),
which aims to mimic biological function: walking, flying, seeing, flinching,
screaming, etc.
NEUROMORPHIC engineering, then, is
the construction of computational devices
that utilize the physical structures and/or representations
found in biological nervous systems.
1) First, I believe
that 'neuromorphic' should not imply
aVLSI (analog VLSI).
2) Now, I should point
out that, in my opinion, this label is really just a mental state, not a
definition that distinguishes something that is, or is not, "neuromorphic".
Unfortunately, my feeling is that this label is just like "artificial intelligence"
in that, once we understand why biology uses a particular representation
or structure, the idea gets adopted by engineers who say, "we're not mimicking
biology, just using the best representation for the job." Kind of
like using lateral excitation/inhibition networks in the retina/imager...
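The retina point above can be made concrete. The sketch below is a minimal, illustrative 1-D lateral-inhibition network (a crude center-surround stage of the kind engineers have borrowed for silicon imagers); the kernel weights, the helper name, and the test signal are all my own assumptions, not anything from a particular chip or paper.

```python
def lateral_inhibition(signal, center=1.0, surround=0.25):
    """Each unit is excited by its own input (center) and inhibited by
    its two neighbors (surround) - a toy difference-of-Gaussians."""
    out = []
    n = len(signal)
    for i in range(n):
        # Clamp at the borders by reusing the edge sample.
        left = signal[i - 1] if i > 0 else signal[i]
        right = signal[i + 1] if i < n - 1 else signal[i]
        out.append(center * signal[i] - surround * (left + right))
    return out

# A step edge: flat regions are suppressed uniformly, while the
# response undershoots then overshoots right at the edge.
step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(lateral_inhibition(step))
# -> [0.0, 0.0, -0.25, 0.75, 0.5, 0.5]
```

The point of the toy example is exactly the one in the text: once you see why the retina does this (edge enhancement, local gain control), the same kernel looks like plain signal processing, and nothing about the code announces its biological origin.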
There are various goals:
(1) Goal: engineering (meaning implementation) - use neurobiology as an
inspiration for making stuff. It might be analog, might be digital, might
be hybrid, but the goal is to build something that fits a particular need;
it will, by nature, be abstracted biology. Within this realm, we hope to
build something novel for the world to use to advantage in some way.
(2) Goal: science (meaning exploration, to understand) - to learn something
new concept-wise. Within this arena, distinctions between areas are murky.
I think that the main thrusts are:
- a) to use our implementations as a modeling tool to predict biological
     experiments;
- b) to explore/understand new computational architectures;
- c) to understand the role of implementation constraints in neural
     computation.
I'm still working on fleshing this out. It's all hopelessly intertwined!
The way I see it is this: I
am an ENGINEER while I design the chips and build the robots, so
that I can be in a position to be a SCIENTIST in understanding how
biology solves problems with similar hardware. In my mind, it's like
developing the voltage clamp so that you can study neurons.