On “Becoming Human” – Part 2

Posted on February 24, 2010


LINK TO: Part 1

WARNING: This post is jargontastic.  I’ve included lots of Wikipedia links, but it still may be difficult for readers unfamiliar with certain concepts or unaccustomed to my blather.  If this is a greater challenge than you find enjoyable, then do skip it.  I hope to return to these concepts in a more cumulative fashion, one day, when I explore cognitive science in its proper time period.

So … I watched part 2 of Becoming Human.  It’s been a while since I watched it, and to be honest, I haven’t retained many of the details.  But what I do remember is that it got me thinking about human intelligence.  So here are some of my musings…

Individual organisms frequently use specific behaviors to benefit themselves and sometimes their families / communities / etc….  These behaviors can range from very simple to vastly complex.  At what point do we start calling the behaviors “intelligent” and at what point does the intelligence become uniquely human (if ever)?  I’ll start by trying to break down advantageous behaviors into levels of complexity.  If I’m lucky, that may also suggest an evolutionary progression.

Starting with the simplest, we have unconditional behaviors.  Consider the beating of a heart.  This is an oversimplification, of course, because the beating of a heart is often modulated in fairly sophisticated ways, but it’s basically set up to keep on beating.  The behavior is unconditional.  It’s easy to imagine how natural selection could result in the prevalence of many such behaviors.

Now consider the euglena – a type of single-celled organism with a flagellum it uses for locomotion.  It also has an eyespot that it uses to see how much sunlight it’s getting.  If it’s getting plenty of sunlight for photosynthesis, it doesn’t bother wagging its flagellum.  Why waste energy and risk propelling itself into shadow?  But if it’s dark, what has it got to lose?  It will die without a source of energy.  I’ll call this a sensory-dependent behavior.  The behavior always depends on what the organism is sensing right now.  Lights go out: flagellum goes to town.  It doesn’t have to be that simple, though.  The behavior can be determined by many sensory inputs.  In this way, sensory-dependent behavior is analogous to combinational logic circuitry in digital electronics.
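To make the combinational-logic analogy concrete, here's a minimal sketch in Python.  The threshold value is made up for illustration — the point is just that the output is a pure function of the current sensory input, with no memory involved:

```python
def flagellum_active(light_level: float, threshold: float = 0.5) -> bool:
    """Sensory-dependent behavior: the output depends only on the
    current sensory input, like a combinational logic circuit.
    The 0.5 threshold is a hypothetical number, not biology."""
    # Lights go out (below threshold): flagellum goes to town.
    return light_level < threshold
```

Same input, same output, every time — there is no internal state for the behavior to depend on.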

To extend the electronics analogy to sequential logic, we progress to what I’m calling sensory-state behavior.  This type of behavior depends on the current sensory input as well as some “remembered” internal state which can be altered by changes in the sensory input, by the passage of time, and sometimes by a stochastic (i.e. unpredictable or “random”) component.  The “randomness” is likely to be actively utilized only where predictable behavior is disadvantageous.  Suppose our euglena friends live in an environment in which sunlight is often blocked transiently by passing objects overhead.  It may be that most periods of darkness are so short that actuating the flagellum immediately costs more energy than it yields.  Perhaps, then, a calculated delay would help.  If it gets dark, you wait a certain amount of time before you seek sunnier pastures – that way, you don’t waste energy on all those brief periods of darkness.

The sensory-state paradigm can also be much more sophisticated.  To make our example slightly more robust, what if the euglena also remembers (in its internal state) the recent frequency and average duration of the periods of darkness?  Now it can optimize its delay to suit the characteristics of a dynamic environment.
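Extending the sketch to the sensory-state level, here's what that might look like: the behavior now depends on remembered state — a timer for the current dark spell plus a running estimate of how long dark spells tend to last.  All the numbers (initial delay, averaging weights) are hypothetical:

```python
class EuglenaFSM:
    """Sensory-state sketch: behavior depends on the current input plus
    remembered internal state, like a sequential logic circuit.
    Constants here are illustrative, not a model of real euglena."""

    def __init__(self, initial_delay: float = 3.0):
        self.dark_time = 0.0                  # how long it has been dark
        self.avg_dark_spell = initial_delay   # running estimate of spell length
        self.swimming = False

    def step(self, is_dark: bool, dt: float = 1.0) -> bool:
        if is_dark:
            self.dark_time += dt
            # Swim only once this dark spell outlasts the typical one.
            self.swimming = self.dark_time > self.avg_dark_spell
        else:
            if self.dark_time > 0:
                # A dark spell just ended; fold its duration into the
                # estimate (exponential moving average).
                self.avg_dark_spell = (0.8 * self.avg_dark_spell
                                       + 0.2 * self.dark_time)
            self.dark_time = 0.0
            self.swimming = False
        return self.swimming
```

The same machinery covers both the fixed-delay version (never update the estimate) and the adaptive one that tunes itself to the environment's rhythm.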

If adaptability to changing environments is what an organism needs, then it might make sense to move on to my next “level of complexity”: sensory-state-learning behavior.  This employs some set of feedback signals that reinforce behaviors leading to primally positive outcomes (e.g. food!  yum!  good!) and discourage behaviors leading to primally negative outcomes (e.g. hunger!  starvation!  BAD!).  The feedback is used to “reprogram” the behavior to adapt to environments that change rapidly in unpredictable ways.  The feedback signals can be equated with the basic sensations of pleasure and pain.
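A toy sketch of what that reprogramming might look like: the organism tweaks its delay parameter, and a feedback signal (“pleasure” when net energy goes up, “pain” when it goes down) decides whether the tweak sticks.  The payoff function and all constants are hypothetical stand-ins for the environment — this is simple stochastic hill climbing, not a claim about any specific biological mechanism:

```python
import random

def learn_delay(energy_payoff, trials: int = 200, delay: float = 1.0,
                step: float = 0.5, seed: int = 0) -> float:
    """Sensory-state-learning sketch: feedback signals 'reprogram'
    the behavior.  energy_payoff(delay) is a made-up stand-in for
    how well a given delay works in the current environment."""
    rng = random.Random(seed)
    best = energy_payoff(delay)
    for _ in range(trials):
        candidate = delay + rng.choice([-step, step])  # try a tweak
        payoff = energy_payoff(candidate)
        if payoff > best:            # "pleasure": the change sticks
            delay, best = candidate, payoff
        # otherwise "pain": revert to the old behavior
    return delay

# Hypothetical environment where a delay of 4 time units is optimal.
learned = learn_delay(lambda d: -(d - 4.0) ** 2)
```

The key difference from the previous level is that the behavior itself is rewritten by experience, rather than merely parameterized by remembered state.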

Computational theory buffs will be quick to point out that sensory-state behavior is capable of doing anything that sensory-state-learning behavior can do.  And they’re right!  From a theory of computation standpoint, anyway.  Sensory-state behavior is Turing complete – it can emulate sensory-state-learning behavior.  The distinction between the two levels of complexity assumes that the implementation substrate requires more energy, and achieves less stability, in maintaining its internal state than in maintaining its underlying behavioral programming.  For behavior implemented using neurons, as in the human brain, that seems to be the difference between memory using neural feedback (analogous to flip-flops in digital logic or RAM in a computer) and long-term potentiation (more analogous to non-volatile memory, such as a hard drive).  It may be too energy-expensive to implement adaptive learning through sensory-state behavior, but if the underlying circuitry can be changed instead, it becomes worth it.

This is great for avoiding that thing that burns you and not pushing your limbs outside their safe ranges of motion.  It works to teach you the way home that has fewer scary run-ins with predators.  It could even help you refine any beneficial tool use you stumble into.  But it doesn’t explain where abstract thinking comes from – it doesn’t tell us why humans can create mental models of their world that allow them to predict which courses of action will likely yield the most favorable results.  For that, I suspect, you need sensory-state-learning-self-training behavior.  I’ve created a Hyphen-Monster.  The added ingredient is that the feedback signals that train the behavioral circuitry can come not only from the senses, but also from the outputs of the behavioral circuitry themselves.  Feedback sensations like “pleasure” and “pain” take on internal analogs not directly linked to the senses.  Thus, for instance, humans could experience either “physical pain” or “emotional pain”.

These internal feedback signals would allow an organism to learn not only behaviors, but also new ways of learning behaviors and ways of systematically modifying behaviors.  These “meta-behaviors”, I believe, could be the basis for rational thought, complex emotion, and even language.
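One way to sketch an internally generated feedback signal: a circuit that predicts something about its world and treats its own prediction error as “discomfort” – a training signal that comes from the circuitry’s output, not directly from any primal sense.  Everything here (the learning rate, the scalar model) is illustrative only:

```python
class SelfTrainingPredictor:
    """Sketch of self-training behavior: the feedback signal is the
    circuit's own prediction error, an internal analog of 'pain'
    not tied directly to the senses.  Purely illustrative."""

    def __init__(self, lr: float = 0.2):
        self.estimate = 0.0   # internal model of some quantity
        self.lr = lr

    def observe(self, actual: float) -> float:
        error = actual - self.estimate    # internally generated feedback
        self.estimate += self.lr * error  # retrain on the error signal
        return abs(error)                 # "discomfort" from surprise

p = SelfTrainingPredictor()
discomfort = [p.observe(1.0) for _ in range(20)]
# Surprise shrinks as the internal model improves.
```

The surprise signal here is both the product of the circuit and the thing that trains it – a minimal cartoon of feedback loops folding back on themselves.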

I am not an expert in biology, psychology, cognitive science, or neuroscience.  But many of my friends are!  I hope they, and others, will comment on this post to let me know how this fits with ideas in those scientific communities and with their own ideas.

Thank you very much for reading!