AGI and the intelligence scale


The AI-Safety-as-an-effective-cause hypothesis has already been soundly refuted by others, but I think it never hurts to show more disdain for this self-recursive Bayesian superintelligence nonsense. With that in mind, let’s talk about Bostrom’s awful book. Specifically, this graph:

We see a line starting at the left of the image and continuing on to the right. At 8/60 of the way from the left, a marking says "Mouse". At 11/60, "Chimp". At 14/60, "Village idiot". At 14.3/60, "Einstein". The line continues a long way to the right, indicated to go on for a long time past the end of the image. The caption of the image reads: "Figure 8. A less anthropomorphic scale? The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.[9] It will almost certainly prove harder and take longer to build a machine intelligence that has a general level of smartness comparable to that of a village idiot than to improve such a system so that it becomes much smarter than any human." The "vill" in "village" is highlighted because I found the right spot in the book by searching for "vill" with Ctrl+F.

There are two claims embedded here. One is "once you are past village idiot level, Einstein level is very close". The other is "there is a lot of room beyond Einstein level". I would argue that both are preposterous, particularly if you define intelligence as some kind of problem-solving ability.

For this post I want to focus on the empirical prediction made by these two claims. Take any task at which "AI" eventually becomes better than unskilled humans, for example a game. Look at the growth of its rating over time. The claims predict that there will be very little (development/training) time between an AI becoming competitive with unskilled humans and surpassing the very best humans. Proof by picture:

Plot with time on x-axis, skill on y-axis, and horizontal lines marking "best humans" and "worst humans" close together. A squiggly diagonal line, increasing, is marked "AI", with vertical lines denoting where it crosses the "best humans" and "worst humans" mark. The space between the vertical lines is small and marked as "very short time".
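To make the prediction concrete, here is a minimal sketch of the test I have in mind, with entirely made-up numbers: take a rating curve over development time, find when it first crosses the "worst humans" level and when it first crosses the "best humans" level, and look at the gap between the two. The curve, the thresholds, and the units below are hypothetical placeholders, not data from any real system.

```python
import numpy as np

# Hypothetical rating curve for an AI system over development time
# (months on the x-axis, Elo-like rating on the y-axis).
# The numbers are invented purely for illustration.
months = np.arange(0, 60)
rating = 800 + 40 * months + 30 * np.sin(months / 3.0)

WORST_HUMANS = 1200   # assumed rating of unskilled humans
BEST_HUMANS = 2800    # assumed rating of the very best humans

def first_crossing(ratings, threshold):
    """Index of the first time the rating reaches the threshold, or None."""
    above = np.nonzero(ratings >= threshold)[0]
    return int(above[0]) if above.size else None

t_worst = first_crossing(rating, WORST_HUMANS)
t_best = first_crossing(rating, BEST_HUMANS)

if t_worst is not None and t_best is not None:
    # Bostrom's picture predicts this gap is tiny.
    print(f"Reached unskilled-human level at month {t_worst}")
    print(f"Reached best-human level at month {t_best}")
    print(f"Gap: {t_best - t_worst} months")
```

If the Bostrom picture were right, that gap would be negligible compared to the time it took to reach unskilled-human level in the first place.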

Does that prediction hold up? Let's look at everything DeepMind and OpenAI did. Go? Nope. StarCraft? Nope. Rubik's cube? Nope. Any other shitty thing? Nope. The prediction failed just as badly before the neural net hype, considering chess and Go and Age of Empires and any other game you could play against the computer.

Conclusion

Bostrom is talking out of his ass.
