Metavalent Stigmergy



How New Default Consensus Realities Instantiate

19 July 2008

Why Pay Attention to Ben Goertzel?

by metavalent

While it is absurd to reduce such questions to a single proof statement, it is also fairly common for humans to light upon representative moments that both evoke such questions and justify oversimplified responses to them. Even if, in strictly empirical terms, we present a single anecdotal observation (an oversimplified slice of data offered as justification for a sweeping generalization), such micro-samples can sometimes condense and convey a great deal of useful and accurate information. Or so I contend in this case, without venturing into an exhaustive philosophical defense, so as to actually arrive at the point of this post within the precious 1 minute and 48 seconds that, according to present site analytics, the average visitor gives me today.

In the opening keynote to AGI-08 (embedded video, below) Ben plainly and dispassionately debunks some of what might be considered central tenets of his own tribe's true religion. However, I speculate that the reasons the status quo AI tribe does not vote him off the island for such potential sacrilege include, but are certainly not limited to:

- While it can be ignored, the unvarnished truth is very hard to overtly attack, even when it runs counter to current tribal custom or convention.
- In my experience, authentic problem solvers and intellectuals seem to be motivated by dispassionate dissonance rather than offended or demoralized by it.

Following is the portion of the talk that I'm specifically referring to, with my emphasis added to highlight portions that might be taken within the conventional AI tribe as "fightin' words," insofar as they carry the potential to impact funding or reputation. If one had been fooling both others and oneself for years or decades, even coming to believe that the ability to conjure such "tricks" might somehow justify an inflated sense of self or mission, some of the following statements could indeed come across as a significant dressing down, and rightly so. Ben Goertzel's matter-of-fact approach to such things is one primary reason I personally believe he is not only worth paying attention to, but one of the very few voices of reason offering an authentic opportunity to coach us through the recent disciplinary malaise and on to some genuine new progress.

So, why pay attention to Ben Goertzel? In microcosm, because this is what you get:

"Generally speaking, the question is, 'Can narrow AI incrementally lead to general AI?' This is something that we do not really know the answer to right now. My own intuition is that it's not going to. I do not think that just working on narrow AI applications and incrementally improving them is going to lead to general AI. I think that is actually a non-trivial lesson that we can draw from the history of the AI field. I don't think it was obvious in the 50's and 60's. Back then, it seemed like doing narrow AI and just gradually broadening the scope could lead to a human, and something better. I think that what we have learned now is that narrow problems are susceptible to clever, tricky computer science approaches, which do not necessarily help you very much at approaching the problem of general AI.

"You can use a bunch of analogies for that. One that has occurred to me is locomotion: the problem of moving on flat surfaces is solved quite well by wheels, but generalizing the wheel might not be the best solution to moving around on general surfaces. I think the same kind of principle holds over and over. Once you narrow the scope, you can use a trick. We in the AI field have become very good at making up clever tricks. It's fun: you can think about something for months or years and get a solution that does something really cool. It's kind of seductive because it is easier than making a thinking machine and you can have the satisfaction of achieving something quickly. On the other hand, I have a suspicion that it is not the right path toward making a thinking machine. You can transfer some insight from narrow AI to AGI, but it requires a lot of creativity. It is not direct or obvious. I think there are key aspects of AGI that do not arise from narrow AI whatsoever.

"As I have already said, I feel like narrow AI is dominating the scene in AI research with a lot of practical successes and what I would describe as also a lot of bad theoretical failures in terms of the capability of narrow AI paradigms to really help toward AGI."

If I may risk an interpretive restatement: "So yeah, we're really impressed with your Smart Bombs and Advanced Flight Data Systems, UAVs, yada-yada-yada, but those accomplishments probably have next to NOTHING to do with achieving AGI." Please don't be demoralized or dissed by this assessment; rather, realize that AGI is a completely different problem, one that, I would personally contend, is going to be illumined as much by neuroanatomy and psychopharmacology as by mathematical and probabilistic algorithms.

[Keynote slides: Explosive Combinatorial Mitigation; Difficulty #1; Difficulty #2]
