Re: [agi] Intelligence by definition

Boris Kazachenko
Fri, 03 Jan 2003 16:34:03 -0800

Thanks for your comments!
> I like to distinguish between *functional specialization* and *integrated cognition*.
> Novamente (my own AI system) has a mix of cognitive algorithms, which work together to provide overall cognitive functionality. The exact mixture of algorithms is determined by a bunch of parameters. This is one example of "integrated cognition".
> Functional specialization has to do with there being modules of an intelligent system devoted to particular areas like language processing, vision processing, social interaction, etc.
It seems to me that the conceptual difference between vision & language lies in the level of generalization, aside from the different sensor/actuator orientation.
Social Interaction? Once you start coding things that are learnable, where do you stop before ending up with just another expert system?
Isn't this all about scalable learning, which should develop environmentally specific functional specialization on its own?
> In the Novamente design, each functionally specialized lobe has its own parameter values, which determine the specific mix of cognitive algorithms operating within it. (We haven't gotten to experimenting with this yet; for now we're just experimenting with mixing cognitive algorithms.)
> Generally, a mixture of cognitive algorithms is just as capable of dealing with the unknown as a single cognitive algorithm. Sometimes more so....
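[Editor's note: a minimal sketch of the per-lobe parameter idea described above. All names and the two toy "cognitive algorithms" are invented for illustration; this is not Novamente's actual code. The point is only that specialization can reduce to a per-lobe parameter vector weighting a shared pool of algorithms.]

```python
# Hypothetical sketch: each specialized "lobe" carries its own parameter
# vector that weights a shared pool of cognitive algorithms, so functional
# specialization is just a different parameter setting over the same mix.

def pattern_mining(x):
    # stand-in cognitive algorithm: score input by repetition of its first element
    return x.count(x[0]) / len(x)

def inference(x):
    # stand-in cognitive algorithm: score input inversely by its length
    return 1.0 / len(x)

ALGORITHMS = [pattern_mining, inference]

class Lobe:
    def __init__(self, name, weights):
        assert len(weights) == len(ALGORITHMS)
        self.name = name
        self.weights = weights  # this lobe's mix over the shared algorithms

    def evaluate(self, x):
        # blend algorithm outputs according to this lobe's parameter vector
        return sum(w * f(x) for w, f in zip(self.weights, ALGORITHMS))

vision = Lobe("vision", [0.9, 0.1])      # biased toward pattern mining
language = Lobe("language", [0.3, 0.7])  # biased toward inference

print(vision.evaluate("aab"))    # same input, different score per lobe
print(language.evaluate("aab"))
```

The same input is scored differently by each lobe purely because of its weights, which is the sense in which "the exact mixture of algorithms is determined by a bunch of parameters."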
What single algorithm? How do you evaluate 'dealing'? How do you derive/select your algorithms for unknown inputs without first quantitatively defining your objectives? Your definition of intelligence doesn't seem to be functional to me; goals can't be defined solely by their complexity.
Without deductive derivation we are stuck with trial & error, which can take millennia.
> On the other hand, functional specialization biases one's system to deal with some parts of the space of the unknown better than others.
> This is a plus and a minus, obviously. Human cognition deals with the truly unknown very slowly and awkwardly.
I mean 'unknown' not to the cognitive system but to its designer. Also, the reason human learning is so slow is 'hardware'-specific: it takes a lot longer to build new connections than to access them. That's not the case for computer hardware.
> The human brain is specialized not only based on its sensors and actuators, but also for linguistic processing, social interaction, temporal event processing, etc. etc. etc. This means that it would not work as well taken outside of its ordinary social and physical situations. But it means that its limited resources are generally well deployed within its usual environments.
That's true, but the human brain is an accident of incremental & obviously unfinished evolution, not some grand design. Besides, I think these different areas are specialized to some extent not so much by genetic design as by the impact of the input types they receive. In any case, you must admit this Stone Age 'design' doesn't perform very well now, & it will get worse as the changes accelerate.