
RE: [agi] Intelligence by definition

Ben Goertzel
Fri, 03 Jan 2003 20:26:44 -0800

 *******
 Well, in Novamente we are not coding *specific knowledge* that is learnable... but we are coding implicit knowledge as to what sorts of learning processes are most useful in which specialized subdomains... 
---
 
 I don't know; from where I sit, this distinction is artificial. Learning is generally defined as projected compression. The complexity of the methods used to achieve it can be increased sequentially, as long as each increase yields additional compression net of its expense, until it matches the complexity of the inputs. In other words, the optimal methods should themselves be learned.
*******
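The incremental criterion described above can be made concrete. Below is a minimal Python sketch, assuming context-model order as the "complexity" axis and a flat per-order description cost; both choices, and all names, are illustrative inventions for this example, not anything proposed in the thread. Model complexity is raised only while the extra compression outweighs the added model cost.

```python
import math
from collections import Counter, defaultdict

def code_length(seq, order):
    """Bits to encode seq with an adaptive order-k context model
    (add-one smoothing over the symbol alphabet)."""
    alphabet = set(seq)
    counts = defaultdict(Counter)
    bits = 0.0
    for i, sym in enumerate(seq):
        ctx = tuple(seq[max(0, i - order):i])
        c = counts[ctx]
        p = (c[sym] + 1) / (sum(c.values()) + len(alphabet))
        bits -= math.log2(p)
        c[sym] += 1
    return bits

def select_order(seq, model_cost_per_order=16.0):
    """Raise model complexity (context order) only while the extra
    compression exceeds the added description cost."""
    best_order, best_bits = 0, code_length(seq, 0)
    order = 1
    while True:
        bits = code_length(seq, order) + order * model_cost_per_order
        if bits >= best_bits:   # no net gain: stop escalating
            return best_order
        best_order, best_bits = order, bits
        order += 1

print(select_order(list("abc" * 10)))  # order 1 captures the period-3 pattern
```

On the toy string the search settles at order 1: the first-order model pays for itself by predicting the repeating pattern, while higher orders no longer cover their description cost.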
 
Yes, if you have a huge amount of space and time resources available, you can start your system with a blank slate -- nothing but a very simple learning algorithm, and let it learn how to learn, learn how to structure its memory, etc. etc. etc.
 
This is pretty much what OOPS does, and what is suggested in Marcus Hutter's related work.
 
It is not a practical approach, in my view.  My belief is that, given realistic resource constraints, you can't take such a general approach: you have to start the system off with specific learning methods, and beyond that, with a collection of functionally specialized combinations of learning algorithms. 
 
I could be wrong of course but I have seen no evidence to the contrary, so far...
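As a toy illustration of the resource issue: the sketch below enumerates programs in order of length over an invented three-instruction language, in the spirit of Levin-style search. It is not OOPS itself, which adds bias-optimal time allocation and reuse of earlier solutions; the instruction set and all names here are made up for the example.

```python
from itertools import product

# An invented toy language: unary ops on an integer accumulator.
OPS = {"i": lambda x: x + 1,   # increment
       "d": lambda x: x * 2,   # double
       "s": lambda x: x * x}   # square

def run(prog, x=0):
    for op in prog:
        x = OPS[op](x)
    return x

def shortest_program(target, max_len=8):
    """Enumerate programs in length order; return the first one that
    computes `target` from 0, plus the number of candidates examined."""
    tried = 0
    for n in range(1, max_len + 1):
        for prog in product(OPS, repeat=n):   # 3**n candidates at length n
            tried += 1
            if run(prog) == target:
                return "".join(prog), tried
    return None, tried

prog, tried = shortest_program(36)   # finds "iiids" (6 squared) after 126 tries
```

Even with three instructions the enumeration examines 3**n candidates at length n; with a realistic instruction set and targets of nontrivial depth, the blank-slate search becomes hopeless without built-in bias of the kind argued for above.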
 
 
 ***  The Novamente design is mathematically formulated, but not mathematically derived.  That is, individual formulas used in the system are mathematically derived, but the system as a whole has been designed by intuition (based on integrating a lot of different ideas from a lot of different domains) rather than by formal derivation. 
 
 In my view, we are nowhere near possessing the right kind of math to derive a realistic AI design from definitions in a rigorous way. 
---
 
 To select formulas you must have an implicit criterion; why not try to make it explicit? I don't believe we need complex math for AI; complex methods can't be universal, since generalization is a reduction. What we need is an autonomously scalable method.  
***
 
Well, if you know some simple math that is adequate for deriving a practical AI design, please speak up.  Point me to the URL where you've posted the paper containing this math!  I'll be very curious to read it!!!! ;-)
 
 ****
Juergen Schmidhuber's OOPS system is an attempt in this direction, but though I like Juergen's work, I think this design is too simplistic to be a functional AGI.
---
Thanks, I am looking at it. I noticed that he starts with a known probability distribution; to me that suggests that the problem is already solved ;-)
****
 
He starts with a known pdf for theoretical purposes.  He is proving that his system can work effectively for ANY given probability distribution.  He is not assuming that his system is somehow fed the pdf in advance.
 
 
-- Ben