AGI and the intelligence scale

223 words

The AI-Safety-as-an-effective-cause hypothesis has already been soundly refuted by others, but I think it never hurts to show more disdain for this recursively self-improving Bayesian superintelligence nonsense. With that in mind, let’s talk about Bostrom’s awful book. Specifically, this graph:

We see a line starting at the left of the image and continuing on to the right. At 8/60 of the way from the left, a marking says "Mouse". At 11/60, "Chimp". At 14/60, "Village idiot". At 14.3/60, "Einstein". The line continues a long way to the right, indicated to go on for a long time past the end of the image. The caption of the image reads: "Figure 8. A less anthropomorphic scale? The gap between a dumb and a clever person may appear large from an anthropocentric perspective, yet in a less parochial view the two have nearly indistinguishable minds.[9] It will almost certainly prove harder and take longer to build a machine intelligence that has a general level of smartness comparable to that of a village idiot than to improve such a system so that it becomes much smarter than any human." (The "vill" in "village" is highlighted because I found the right spot in the book by searching for "vill" with ctrl+F.)

There are two claims embedded here. One is “once you are past village idiot level, Einstein level is very close”. The other is “there is a lot of room beyond Einstein level”. I would argue that both are preposterous, in particular if you define intelligence by some kind of problem-solving ability.

For this post I want to focus on the empirical prediction made by these two claims. Take any task for which “AI” is better than unskilled humans, for example a game. Look at the growth of the rating over time. The claims predict that there will be very little (development/training) time between an AI becoming competitive with unskilled humans and surpassing the very best humans. Proof by picture:

Plot with time on x-axis, skill on y-axis, and horizontal lines marking "best humans" and "worst humans" close together. A squiggly diagonal line, increasing, is marked "AI", with vertical lines denoting where it crosses the "best humans" and "worst humans" mark. The space between the vertical lines is small and marked as "very short time".
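To make the test concrete, here is a minimal sketch of what it amounts to, with entirely made-up numbers: the ratings, the two thresholds, and the years are hypothetical placeholders, not data.

```python
# Hypothetical (year, Elo-like rating) pairs for some game-playing AI.
ratings = [
    (2010, 800), (2012, 1400), (2014, 1900),
    (2016, 2300), (2018, 2650), (2020, 2900),
]
WORST_HUMANS = 1200  # assumed rating of unskilled human players
BEST_HUMANS = 2800   # assumed rating of the very best human players

def first_year_above(threshold):
    """Return the first year the AI's rating reaches the threshold, or None."""
    for year, rating in ratings:
        if rating >= threshold:
            return year
    return None

t_worst = first_year_above(WORST_HUMANS)
t_best = first_year_above(BEST_HUMANS)
# Bostrom's picture predicts this gap is tiny for essentially every task.
print(f"crossed worst humans in {t_worst}, best humans in {t_best}, "
      f"gap = {t_best - t_worst} years")
```

The next paragraph is what you get when you run this test, informally, against the actual history.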

Does that prediction hold up? Let’s look at everything Deepmind and OpenAI did. Go? Nope. Starcraft? Nope. Rubik’s cube? Nope. Any other shitty thing? Nope. The prediction failed just as badly before the neural net hype, considering chess and go and Age of Empires and any other game you could play against the computer.

Conclusion

Bostrom is talking out of his ass.

Bayes’ rule is full of shit

273 words

We all know the type who think Bayes’ rule is the key to rational thought. They are talking out of their ass. This short note intends to explain why I say that, but without any trite arguments and just focusing on the computer science. In short, because Bayes’ rule is an information-theoretic concept without computational content.

For one, conditional probability distributions are not computable. Of course, you say, no sane person would limit themselves to only computable endeavors? And moreover, that result only applies when there is no noise, i.e., in the realm of spherical cows! Your objection is valid, but this theoretical impossibility result deserves a mention for the sake of completeness.

The real issue occurs in real-world complexity classes. Random SAT/UNSAT is hard in the appropriate density range (I have trouble finding a specific reference, but anyone working on SAT solvers will tell you this). So if I pick a random SAT instance from a probability distribution known to both of us, and I tell you the bits encoding the instance one by one, your Bayesian updates on the probability that my instance is satisfiable will be hard to compute as well. Both in theory (this is a slightly stronger claim than just $\mathcal{P} \neq \mathcal{NP}$; if I recall correctly, ACM SIGACT News Volume 50 Issue 1 or 2 will tell you that everyone believes this hardness statement) and in practice. No matter how much computing power you have, I can easily sample SAT instances that make your Bayes’ rule calculations suffer.
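To make “I can easily sample” concrete, here is a minimal sketch of such a sampler. The clause-to-variable ratio of roughly 4.26 is the commonly quoted hardness peak for random 3-SAT; treat that number, and everything else in the snippet, as an illustrative assumption rather than a recipe.

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=None):
    """Sample a random 3-SAT instance near the hard density range.

    Clauses use the DIMACS convention: the literal 5 means x5, -5 means NOT x5.
    """
    rng = random.Random(seed)
    instance = []
    for _ in range(int(ratio * n_vars)):
        chosen = rng.sample(range(1, n_vars + 1), 3)                # three distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in chosen]  # random polarity
        instance.append(clause)
    return instance

# Revealing such an instance bit by bit forces your posterior on
# "is this satisfiable?" to track a computation that SAT solvers choke on.
print(random_3sat(5, seed=0))
```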

This hardness carries over to everyday problems. Bayes’ rule is typically insufficient to tell you the answers you are looking for.

We can’t simulate nematodes with only 302 neurons

148 words

C. elegans is a wonderful creature. It’s a nematode, so it looks like a tiny worm. It is one of biologists’ favorite model organisms, and we know a lot about it.

Bob Goldstein, 2007, CC-BY-SA, via Wikimedia Commons.

It has roughly 1000 cells, and its cell lineage is almost identical for every individual. This creature has been mapped out more thoroughly than any other species on the planet. You can browse a map of its cells through the OpenWorm Browser. The hermaphrodite has 302 neurons and its connectome is completely known.

And faithfully simulating a nematode remains far out of reach. The state of the art is concerned with modeling basic aspects of its locomotion, like its different gaits on different surfaces.

It is silly to think we’ll be simulating human minds any time soon, when people have been trying to simulate C. elegans for decades now.

Deepmind will start playing fair?

195 words

In case you missed it, last week brought the most exciting piece of AI news of the past couple of years. Deepmind’s Alphastar will be playing real, non-NDA’d humans for multiple games (via Ars Technica).

This is cool news. Partly because they finally seem intent on fixing their cheating with superhuman micro, but mostly because repeat games could allow humans to develop counter-strategies. The exhibition matches earlier this year only had the humans play one match against any given bot, giving the bot a huge informational advantage (the neural nets were trained through supervised learning on human matches before being pitted against each other). These new matches will hopefully end up being fairer. However, Blizzard’s press release is vague enough that Deepmind could still decide to play only a handful of games, which would prevent humans from gaining enough knowledge to devise counter-strategies. Worse, the Alphastar team could decide to stop playing once they observe online message boards sharing good tactics between human players, or do any other nasty p-hacking.

The hype labs haven’t been doing well in fair play or reproducibility so far, but I’m willing to hope they’ll improve.

12. Why Kolmogorov complexity is useless

377 words

Mathematical hobbyists tend to be fascinated with information theory, Kolmogorov complexity and Solomonoff induction. This sentiment is very understandable: when I first learned of them, these subjects felt like they touched on some fundamental truth of life that you don’t normally hear about. But for all this talk of fundamental properties of life and understanding, mathematicians treat Kolmogorov complexity as a cute mathematical curiosity at most. In this post I will explain some of the reasons why so few mathematicians and computer scientists have cared about it over the past 50 years.

The zeroth reason is that it depends on your choice of encoding. You cannot cover this up by saying that any universal Turing machine can simulate any other with constant overhead, because a constant of, say, 2000 bits is not something you can compensate for in data-constrained settings.

The first reason is obvious and dates back to ancient times. Kolmogorov complexity is not computable. Properties that we can literally never know are pretty useless in everyday life.

The second reason is related: we cannot compute Kolmogorov complexity in practice. Even time-constrained variants are hellishly expensive to compute for large data sets.

The third reason is more typical of modern thinking in computer science theory: any theory of information needs a theory of computation to be useful in practice. This is directly related to the difference between computational and statistical indistinguishability, as well as to the myth that your computer’s entropy pool could run out. Cryptography is safe not because it is information-theoretically impossible to retrieve the plaintext, but because it is computationally infeasible to retrieve the plaintext. The Kolmogorov complexity of a typical encrypted data stream is low, but it would be a mistake to think that anyone could compute a short description of it. Along another route: once I have given you an instance of an NP-complete problem (with a unique solution), telling you the answer adds essentially no new information in the Kolmogorov sense. Yet you would still learn something by getting the answer from me, because you could not have computed it yourself even though you already knew everything required.
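Here is a minimal sketch of that gap between description length and what anyone can actually find, using nothing beyond the Python standard library. The megabyte below is fully determined by a few lines of code plus a nine-byte seed, so its Kolmogorov complexity is tiny, yet a practical compressor treats it as pure noise. The toy PRG and the specific sizes are arbitrary choices for illustration.

```python
import hashlib
import zlib

def pseudorandom_bytes(seed: bytes, n: int) -> bytes:
    """Expand a short seed into n bytes by iterated hashing (a toy PRG)."""
    out, block = bytearray(), seed
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:n])

data = pseudorandom_bytes(b"tiny seed", 1_000_000)
# A short description of `data` exists (this very program), but zlib cannot
# find anything like it: the "compressed" output is slightly larger than the input.
print(len(zlib.compress(data, 9)))
```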

Kolmogorov complexity is useless based on classical CS theory, practice and modern CS theory. This is how you know that anyone who proposes that it is an integral part of rational thought is full of shit.

11. Did MIRI cause a good thing to happen?

1134 words

The Future Perfect podcast from Vox did an episode proclaiming that AI Risk used to be a fringe concern but is now mainstream. That is why OpenAI did not open up GPT-2 to the general public (or to academic peer review, for that matter). This was good. Everything thanks to Jaan Tallinn and Eliezer Yudkowsky. I cannot let this go uncontested.

Today: why are the people who made our writing bot so worried about what it could do? The short answer is they think that artificial intelligence models like this one can have major unintended consequences. And that’s an idea that’s moved from the fringe to the mainstream with the help of philanthropy.

[0:02:21-0:02:43]

This here is the central claim. The money of Tallinn is responsible for people thinking critically about artificial intelligence. Along the way we hear that Tallinn acquired his AI worries from Yudkowsky. Hence, Yudkowsky did something good.

AI might have unintended consequences, like taking our jobs or messing with our privacy. Or worse. There are serious researchers who think that AI could lead to people dying: lots of people. Today, this is a pretty mainstream idea. It gets a lot of mentions in any round-up by AI experts of their thinking on AI, and so it’s easy to forget that a decade ago this was a pretty fringe position. If you hear this kind of thing and your reaction is like “Come on, killer robots? Really? That sounds like science fiction”, don’t worry, you are part of a long tradition of dismissing the real-world dangers of AI. The founders of the field wrote papers in which they said, as an aside, “Yes, this will probably, like, transform human civilization and maybe kill us.” But in the last decade or so, something has started to change. AI Risk stopped being a footnote in papers because a small group of people, and a small group of donors, started to believe that the risks were real. Some people started saying: wait, if this is true, it should be our highest priority and we should be working on it. And those were mostly fringe people in the beginning. A significant driver of the focus on AI was Eliezer Yudkowsky.

[0:04:17-0:05:50]

So the driving force behind all worries about AI is said to be Yudkowsky. Because of his valiant essay-writing, Tallinn got convinced and put his money towards funding MIRI and OpenAI. Because of course his real fears center around Mickey Mouse and the Magic Broomstick, not on algorithms being biased against minorities or facial recognition software being used to put the Uyghur people in China in concentration camps. Because rational white men only focus on important problems.

Yes, so here are a couple of examples: every year the Pentagon discovers some bugs in their systems that make them vulnerable to cybersecurity attacks. Usually they discover those before any outsiders do and they’re therefore able to handle them. But if an AI system were sufficiently sophisticated, it could maybe identify the bugs that the Pentagon wouldn’t discover for years to come and therefore be able to do things like make it look to the US government like we’re being attacked by a foreign nuclear power.

[0:13:03-0:13:34]

This doesn’t have anything to do with my point; I just think it’s cute how people from America, the country whose army of cryptographers and hackers (all human) developed and then lost the weapons responsible for some of the most devastating cyberattacks in history, worry that other countries might be able to do the same things if only they obtain the magical object that is speculated to exist in the future.

[GPT-2] is a very good example of how philanthropic donations from people like Jaan Tallinn have reshaped our approach to AI. The organization that made GPT-2 is called OpenAI. OpenAI got funding from Jaan Tallinn among many others, and their mission is not just to create Artificial Intelligence, but also to make sure that the Artificial Intelligence it creates doesn’t make things worse for Humanity. They’re thinking about: as we make progress in AI, as we develop these systems with new capabilities, as we’re able to do all these new things, what’s a responsible process for letting our inventions into the world? What does being safe and responsible here look like? And that’s just not something anybody thought about very much, you know; they haven’t really asked what the safe and responsible approach to this is. And when OpenAI started thinking about being responsible, they realized, “Oh man, that means we should hold off on releasing GPT-2”.

[0:17:53-0:19:05]

This is bootlicking journalism, completely going along with the narrative that OpenAI’s PR department is spinning, just like Vox’s original coverage of that puff piece and all of the Effective Altruism community’s reactions. There is something profoundly absurd about taking a corporate lab’s press release at face value and believing that those people live in a vacuum: a vacuum where nobody had previously made unsupervised language models, and where nobody had previously thought about what responsible release of ML models entails. OpenAI is fundamentally in the business of hype, to stroke their funders’ egos, and providing compute-heavy incremental progress in ML is just the means to that end.

It’s kind of reassuring that this organization is a voice at the table saying, hey, let’s take this just a little slower. And the contributions from donors like Jaan Tallinn, they helped put that cautionary voice at the table, and they put it there early. You know, I think it mattered. I think that the conversation we’re having now is probably more sophisticated, more careful, a little more aware of some of the risks than it would have been if there hadn’t been these groups starting 10-15 years ago to start this conversation. I think this was one of those cases where something was always going to be funded only from the fringe and where it really didn’t matter that it got that funding from the fringe.

[0:20:18-0:20:53]

The writer makes a clear statement here: the people on the fringe (Yudkowsky et al.) are a significant part of the reason why people are thinking about this. I can hardly imagine how a journalist could say this after having done any research on the topic outside of their own cult bubble, so I think they didn’t do any.

People in EA, people in ML, and the staff at Vox seem almost willfully ignorant of all previous academic debate on dual-use technology, none of which derives from MIRI’s fairy tales about evil genies. I blame this phenomenon on the rationalists’ contempt for the social sciences. If Yudkowsky contributed anything here, it might mainly be in making socio-political worries about technology seem marginally more exciting to his tech-bro audience. But the counterfactual is unclear to me.

10. Compute does not scale like you think it does

520 words

One argument for why AGI might be unimaginably smarter than humans is that the physical limits of computation are so large: if humans are some amount of intelligent with some amount of compute, then an AGI with many times more compute would be many times more intelligent. This line of thought does not match modern thinking about computation.

The first obvious obstacle is that not every problem is solvable in linear time. If intelligence scales as log(compute), then adding more compute will hardly affect the amount of intelligence of a system (whatever ‘intelligence’ might mean, let alone representing it by a single number; principal component analysis is bullshit). But if you believe in AI Risk then this likely won’t convince you.
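To see how punishing a logarithmic relationship would be, here is a toy calculation. The functional form is of course an assumption, since nobody knows how to put a number on intelligence in the first place.

```python
import math

# Toy model: intelligence = log2(compute). Purely illustrative.
human_compute = 1.0
for factor in (10, 1_000, 1_000_000):
    gain = math.log2(factor * human_compute) - math.log2(human_compute)
    print(f"{factor:>9,}x the compute -> only {gain:.1f} extra 'units' of intelligence")
```

A million times the compute buys you about twenty doublings, under this (made-up) scaling.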

The second, more concrete, obstacle is architecture. Let’s compare two computing devices. Device A is a cluster consisting of one billion first-generation Raspberry Pis, for a total of 41 PFLOPS. Device B is a single PlayStation 4, coming in at 1.84 TFLOPS. Although the cluster has 22,000 times more FLOPS, there are plenty of problems that we can solve faster on the single PlayStation 4. Not all problems can be solved more quickly through parallelization. (In theory, this is the open problem of P vs. NC. In practice, you can easily see it to be true by imagining that the different Raspberry Pis are all on different planets across the galaxy, which wouldn’t change their collective FLOPS but would wreck their communication delays and hence their ability to compute anything together.)
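Here is a crude sketch of the point, with every number an assumption picked only to illustrate the shape of the problem: for a chain of operations where each step needs the previous result, only single-step latency matters, and the cluster’s extra FLOPS sit idle.

```python
# A chain of dependent steps: step i+1 cannot start before step i finishes.
CHAIN_LENGTH = 1_000_000_000

# Device B: one fast core, assumed to do 1e9 dependent operations per second.
time_b = CHAIN_LENGTH / 1e9

# Device A: a billion slow cores, each assumed to do 5e8 dependent operations
# per second. The other 999,999,999 cores cannot help with a sequential chain.
time_a = CHAIN_LENGTH / 5e8

print(f"single fast machine: {time_b:.1f} s, billion-core cluster: {time_a:.1f} s")
```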

Modern computers are only as fast as they are because of very specific properties of existing software. Locality of reference is probably the biggest one. There is spatial locality of reference: if a processor accesses memory location x, it is likely to access location x+1 soon after. Modern RAM exploits this fact by optimizing for sequential access, and slows down considerably when you do truly random access. There is also temporal locality of reference: if a processor accesses value x now, it is likely to access value x again in a short while. This is why a processor cache provides a speedup over just having RAM, and why having RAM provides a speedup over just having flash memory. (There has been some nice theory on this in the past decades; I quite like Albers, Favrholdt and Giel’s “On paging with locality of reference”, Journal of Computer and System Sciences, 2005.)
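A quick way to feel this effect, sketched under the assumption that you run it on an ordinary machine: the exact ratio will vary wildly with hardware and with Python’s object layout, but the ordering of the two timings should not.

```python
import random
import time

N = 10_000_000
data = list(range(N))

in_order = list(range(N))   # visit indices sequentially: good spatial locality
shuffled = in_order[:]
random.shuffle(shuffled)    # same indices, same total work, no locality

def total(indices):
    s = 0
    for i in indices:
        s += data[i]
    return s

for name, idx in (("sequential", in_order), ("random", shuffled)):
    start = time.perf_counter()
    total(idx)
    print(name, f"{time.perf_counter() - start:.2f} s")
# The random-order pass touches exactly the same elements,
# yet typically takes noticeably longer because of cache misses.
```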

Brains don’t exhibit such locality nearly as much. As a result, it is much easier to simulate a small “brain” than a large “brain”: adding neurons increases the practical difficulty of simulation much more than linearly. (One caveat is that this does not apply so much to artificial neural networks, which can be optimized quickly partly because they are so structured; that is due to specific features of GPUs that are outside the scope of this post.) It might be possible that this would not be an obstacle for AGI, but it might also be possible for the ocean to explode, so that doesn’t tell us anything. (New cause area: funding a Fluid Intelligence Research Institute to prevent the dangers of superintelligent bodies of water.)

9. Don’t work on long-term AGI x-risk now

194 words

Suppose you believe AGI will be invented in 200 years, and, if it is invented before the alignment problem is solved, everyone will be dead forever. Then you probably shouldn’t work on AGI Safety right now.

On the one hand, our ability to work on AGI Safety will increase as we get closer to making AGI. It is preposterous to think such a problem can be solved by purely reasoning from first principles. No science makes progress without observation, not even pure mathematics. Trying to solve AGI risk now is as absurd as trying to solve aging before the invention of the microscope.

On the other hand, spending resources now is much more expensive than spending resources in 100 years. Assuming a 4% annual growth rate of the economy, it would be around 50 times as expensive. (In all honesty, I don’t actually believe in unlimited exponential economic growth, but my job here is to attack the AI Safety premise, not to accurately represent my own beliefs.)
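For completeness, the arithmetic behind that factor of 50, under the 4% assumption above:

\[ 1.04^{100} \approx 50.5 \]

so a unit of resources deferred for a century buys roughly fifty times as much work.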

Solving AGI Safety becomes easier over time, and relatively cheaper on top of that. Hence you should not work on AGI Safety if you think it can wait.

8. Links #3: the real AI was inside us all along

134 words

Olivia Solon: The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

Brian X. Chen and Cade Metz: Google’s Duplex Uses A.I. to Mimic Humans (Sometimes)

In other words, Duplex, which Google first showed off last year as a technological marvel using A.I., is still largely operated by humans. While A.I. services like Google’s are meant to help us, their part-machine, part-human approach could contribute to a mounting problem: the struggle to decipher the real from the fake, from bogus reviews and online disinformation to bots posing as people.

7. Links #2: AI Safety-critical reading list

792 words

So, I started the anti-AI Safety blogging series because I would be a good fit for the cause area as described by, e.g., 80,000 Hours, and it seemed reasonable to think through the arguments myself. As it turns out, they don’t stand up to scrutiny. I decided to keep on writing for a bit anyway, as all AI Risk enthusiasts seem to be unaware of the counterarguments. I thought there was nothing out there in writing. Boy, was I wrong.

This is a non-exhaustive list of links relating to AI Safety skepticism. For more, check out the similar reading lists by Magnus Vindig and by Alexander Kruel. Overlap between these lists is minimal and restricted to a couple of particularly good resources.

Rodney Brooks writes in MIT Technology Review about the seven deadly sins of predicting the future of AI. If you hit a paywall, either clear your cookies or view a less edited version on Brooks’ website. His other essays on Super Intelligence are also well worth checking out.

Wolfgang Schwarz published his referee report of Yudkowsky (MIRI) and Soares’ (MIRI) Functional Decision Theory. I’ll quote a single paragraph, which I think accurately illustrates the whole review: “The standards for deserving publication in academic philosophy are relatively simple and self-explanatory. A paper should make a significant point, it should be clearly written, it should correctly position itself in the existing literature, and it should support its main claims by coherent arguments. The paper I read sadly fell short on all these points, except the first. (It does make a significant point.)”

Ben Garfinkel gave a talk at EA Global 2018 titled “How sure are we about this AI stuff?”, calling for EAs to be more critical about AI Safety as a cause area. Garfinkel knows his audience well: everything is phrased so as to make EAs think without ruffling feathers.

Oren Etzioni writes in MIT Technology Review about the survey data Bostrom cites in Superintelligence and offers alternative data that suggest a very different picture.

Maciej Cegłowski‘s talks are always excellent and “Superintelligence: The Idea That Eats Smart People” is no exception. (via)

EA Forum user Fods12 wrote a five-part critique of Superintelligence. They hit on a number of good objections. The posts sadly got little quality engagement, indicative of both the writing quality and of the rest of the EA Forum’s userbase.

Even transhumanists can be reasonable, like Monica Anderson who writes Problem Solved: Unfriendly AI.

Ernest Davis wrote a review of Superintelligence, touching on some of the key weaknesses in Bostrom’s arguments but not elaborating enough on any of them. MIRI published a response to the review which I think mostly nitpicks Davis’ phrasing instead of actually engaging with his objections, which, to be fair, might be the best you can do if you don’t have any better source of exposition on these arguments than Davis’ review. In short, Davis’ review isn’t super good, but MIRI’s response is much worse.

Neil Lawrence critiques Bostrom’s Superintelligence. If I had to excerpt a single representative line, it would be “I welcome the entry of philosophers to this debate, but I don’t think Superintelligence is contributing as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.”

Magnus Vindig writes Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique, in which he tackles most of the standard EA arguments and points out their hidden assumptions. Topics include, but are not limited to: the incessantly cited AI researcher survey predictions, bad Moore’s law-type arguments, sleight-of-hand changes in the definition of intelligence, the difficulty of alignment rising for future systems compared to current ones, the enormous experience we have with present-day systems, Instrumental Convergence being under-argued, and the practical value of being superintelligent. He does not rigorously take down every argument to the full extent possible, but that is probably for the best, because the blog post is 22k words as it is. Vindig also wrote Is AI Alignment Possible?, in which he argues that the answer is no, both in principle and in practice.

Richard Loosemoore brings the amount of derision that AI Risk deserves, which is different from the amount of derision required to convince the worriers that they’re wrong. One person who was not convinced is Rob Bensiger of MIRI.

Bill Hibbard has an email exchange with Yudkowsky in which he argues that a Superintelligence would not conflate smiling human faces with nano-scale depictions of such. The whole exchange is kind of predictable and not too informative.

On a related note, Nicholas Agar wrote a paper titled “Don’t Worry about Superintelligence” in which he argues that the first AIs with sophisticated agency are inherently likely to be friendly.