Fri, 03 Jan 2003 20:26:44 -0800
Well, in Novamente we are not coding *specific knowledge* that is learnable... but we are coding implicit knowledge as to what sorts of learning processes are most useful in which specialized subdomains...
I don't know, from where I sit this distinction is artificial. Learning is generally defined as projected compression; the complexity of the methods used to achieve it can be increased sequentially, as long as each increase yields positive additional compression minus the expense, until it matches the complexity of the inputs. In other words, the optimal methods themselves should be learned.
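[The criterion above — adopt a more complex method only while the additional compression it buys outweighs its expense — can be sketched as a toy minimum-description-length check. This is an illustration only, not Novamente or OOPS code; the `method_cost` charge standing in for "the expense" of a more complex method is an invented placeholder.]

```python
import zlib

def description_length(data: bytes, method_cost: int, compress=None) -> int:
    """Total cost of describing the data under a given method:
    the (possibly compressed) body plus the cost of describing
    the method itself."""
    body = compress(data) if compress else data
    return len(body) + method_cost

# Highly regular input: a short pattern repeated many times.
data = b"abcabcabc" * 200

# Method 0: store the data verbatim (no method to describe).
baseline = description_length(data, method_cost=0)

# Method 1: a more complex method (here zlib), charged a nominal
# cost for describing the compressor itself.
candidate = description_length(data, method_cost=50,
                               compress=zlib.compress)

# Adopt the more complex method only if the net gain is positive:
# additional compression minus the expense of the method.
if candidate < baseline:
    print("adopt: net gain =", baseline - candidate, "bytes")
else:
    print("reject: method not worth its expense")
```

On regular input like the above the gain is large and the complex method wins; on incompressible input the same check rejects it, which is the stopping point the criterion describes.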
Yes, if you have a huge amount of space and time resources available, you can start your system with a blank slate -- nothing but a very simple learning algorithm, and let it learn how to learn, learn how to structure its memory, etc. etc. etc.
This is pretty much what OOPS does, and what is suggested in Marcus Hutter's related work.
It is not a practical approach, in my view. My belief is that, given realistic resource constraints, you can't take such a general approach; you have to start the system off with specific learning methods, and even further than that, with a collection of functionally-specialized combinations of learning algorithms.
I could be wrong, of course, but I have seen no evidence to the contrary so far...
*** The Novamente design is mathematically formulated, but not mathematically derived. That is, individual formulas used in the system are mathematically derived, but the system as a whole has been designed by intuition (based on integrating a lot of different ideas from a lot of different domains) rather than by formal derivation.
In my view, we are nowhere near possessing the right kind of math to derive a realistic AI design from definitions in a rigorous way.