Links #2: AI Safety-critical reading list

So, I started the anti-AI Safety blogging series because I would be a good fit for the cause area as described by, e.g., 80,000 Hours, and it seemed reasonable to think through the arguments myself. As it turns out, they don’t stand up to scrutiny. I decided to keep on writing for a bit anyway, as AI Risk enthusiasts all seem to be unaware of the counterarguments. I thought there was nothing out there in writing. Boy, was I wrong.

This is a non-exhaustive list of links relating to AI Safety skepticism. For more, check out the similar reading lists by Magnus Vinding and by Alexander Kruel. Overlap between these lists is minimal and restricted to a couple of particularly good resources.

Rodney Brooks writes in MIT Technology Review about the seven deadly sins of predicting the future of AI. If you hit a paywall, either clear your cookies or read a less edited version on Brooks’ website. His other essays on Super Intelligence are also well worth checking out.

Wolfgang Schwarz published his referee report of Yudkowsky (MIRI) and Soares’ (MIRI) Functional Decision Theory. I’ll quote a single paragraph, which I think accurately illustrates the whole review: “The standards for deserving publication in academic philosophy are relatively simple and self-explanatory. A paper should make a significant point, it should be clearly written, it should correctly position itself in the existing literature, and it should support its main claims by coherent arguments. The paper I read sadly fell short on all these points, except the first. (It does make a significant point.)”

Ben Garfinkel gave a talk at EA Global 2018 titled “How sure are we about this AI stuff?”, calling for EAs to be more critical about AI Safety as a cause area. Garfinkel knows his audience well, as everything is phrased so as to make EAs think without ruffling feathers.

Oren Etzioni writes in MIT Technology Review about the survey data Bostrom talks about in Superintelligence and offers alternative data that suggest a very different picture.

Maciej Cegłowski’s talks are always excellent and “Superintelligence: The Idea That Eats Smart People” is no exception. (via)

EA Forum user Fods12 wrote a five-part critique of Superintelligence. They hit on a number of good objections. The posts sadly got little quality engagement, which reflects both on the writing quality and on the rest of the EA Forum’s userbase.

Even transhumanists can be reasonable, like Monica Anderson, who writes Problem Solved: Unfriendly AI.

Ernest Davis wrote a review of Superintelligence, touching on some of the key weaknesses in Bostrom’s argument but elaborating insufficiently on each of them. MIRI published a response to the review, which I think mostly nitpicks Davis’ phrasing instead of actually engaging with his objections; to be fair, that might be the best you can do if you don’t have any better source of exposition on these arguments than Davis’ review. In short, Davis’ review isn’t super good, but MIRI’s response is much worse.

Neil Lawrence critiques Bostrom’s Superintelligence. If I had to excerpt a single representative line, it would be “I welcome the entry of philosophers to this debate, but I don’t think Superintelligence is contributing as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.”

Magnus Vinding writes Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique, in which he tackles most of the standard EA arguments and points out their hidden assumptions. Topics include, but are not limited to: the incessantly cited AI researcher survey predictions, bad Moore’s-law-type arguments, sleight-of-hand changes in the definition of intelligence, whether alignment really gets harder for future systems than for current ones, the enormous experience we already have with present-day systems, Instrumental Convergence being under-argued, and the practical value of being superintelligent. He does not rigorously take down every argument to the full extent possible, but that is probably for the best, because the blog post is 22k words as it is. Vinding also wrote Is AI Alignment Possible?, in which he argues that the answer is no, both in principle and in practice.

Richard Loosemore treats AI Risk with the amount of derision it deserves, which is different from the amount of derision that would convince the worriers they’re wrong. One person who was not convinced is Rob Bensinger of MIRI.

Bill Hibbard has an email exchange with Yudkowsky in which he argues that a Superintelligence would not conflate smiling human faces with nano-scale depictions of such. The whole exchange is kind of predictable and not too informative.

On a related note, Nicholas Agar wrote a paper titled “Don’t Worry about Superintelligence” in which he argues that the first AIs with sophisticated agency are inherently likely to be friendly.

Best Things of 2018

Not (best (things of 2018)) but ((best things) of 2018), because recommendations get more interesting if they are rate-limited and less interesting if a recency constraint is imposed.

Best interactive web essay

Parable of the Polygons, by internet creators Vi Hart and Nicky Case. Cute little triangles and squares get segregated in ways none of them ever intended, against their best wishes.

Best portrait article

Portraying one of the most important trans people of the past few years, Vice Broadly’s piece on Caitlyn Jenner was a nice read.

Best economist’s story

On why setting maximum prices is bad: They Clapped, by Michael Munger. Very salient, go read it.

Best academic talk

I see a lot of talks from computer science researchers, and CS people are surprisingly good at giving captivating talks. But, quoting Virginia Woolf,

[..] one must read [any book] as if it were the last volume in a fairly long series, continuing all those other books that I have been glancing at. For books continue each other, in spite of our habit of judging them separately.

Virginia Woolf, A Room of One’s Own, or page 52 in Penguin’s Vintage Mini “Liberty”

And so a talk must be considered in its social context. Based on this principle, the clear winner for this category is this keynote speech by James Mickens of Harvard University at USENIX Security 2018: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible? Mickens is a captivating orator, and the talk is funny and informative and gives a critical view of an important present-day issue.

Best internet rabbit-hole

An old one, for nostalgia: how to spot photo manipulation, in Body By Victoria. Do click the links to the follow-up posts, and the rest of the website is worth checking out as well.

Best description of psychologists

This text fragment reflects every interaction I’ve had with psychologists anywhere, both my gatekeepers and psychologists I visited for other reasons.

My anorexic patients sometimes complain of being forced into this mold. They’ll try to go to therapy for their inability to eat a reasonable amount of food, and their therapist will want to spend the whole time talking about their body image issues. When they complain they don’t really have body image issues, they’ll get accused of repressing it. Eventually they’ll just say “Yeah, whatever, I secretly wanted to be a ballerina” in order to make the therapist shut up and get to the part where maybe treatment happens.

Scott Alexander, Del Giudice On The Self-Starvation Cycle

Best video essay

This is not really a contest: ContraPoints’ The Aesthetic is the most beautiful piece of film I’ve seen in years. It is an honest expression of the feelings, internal dialogue, and conflict that trans women experience. It touches on so many uncomfortable issues without having any single clear message. ContraPoints raises the video essay to an art form. There is so much going on, on so many levels, that I can just keep on watching the thing over and over again. Highly recommended watching for both trans and cis people.

The creator got quite a bit of social media backlash over the video. There is exactly one reaction video that I felt was worth watching: Nobody Wins: ContraPoints, The Aesthetic, and Negative Representation by let’s talk about stuff. [This text essay is also pretty good: How Contrapoints Misunderstands Gender.]

Best book

My choice of best book for 2018 is Aphro-ism by Aph Ko and Syl Ko. It is a blog-turned-book, with a number of brilliant essays on, among other things, veganism and social justice. I cannot overstate how much I like this book. I learned a lot from reading it, and not just about its subject matter.

The writings of the Ko sisters are very far from every thought I’ve ever had. This fact is reflected in how much I learned from the book, as well as in how difficult it was to understand. I’ve re-listened to this book five times by now. The first time, I understood literally nothing. Each time after that I understood a bit more, and I feel I understand most parts now. Not yet at the level of being able to explain the ideas, but at the level of seeing good use value in them.

Links #1: Modern computers are complicated

David Chisnall: C is not a low-level language

A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches. This, again, adds complexity; it also means that an incorrect guess results in work being done and then discarded, which is not ideal for power consumption. This discarded work has visible side effects, which the Spectre and Meltdown attacks could exploit.
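
The cost of a wrong guess is easy to make visible from plain C. Below is a minimal sketch of my own (not taken from Chisnall’s article): the same loop runs noticeably slower on randomly ordered data than on sorted data, because the branch inside it keeps being mispredicted. An optimizing compiler may replace the branch with a conditional move and hide the effect, so compile without optimizations or inspect the generated code.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 20)

/* Sum the elements that are >= 128; the data-dependent branch is the point. */
static int64_t sum_big(const uint8_t *data, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        if (data[i] >= 128)          /* ~50% taken on random data: hard to predict */
            sum += data[i];
    return sum;
}

static int cmp_u8(const void *a, const void *b) {
    return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
}

int main(void) {
    static uint8_t data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = (uint8_t)(rand() & 0xff);

    clock_t t0 = clock();
    int64_t s1 = sum_big(data, N);   /* random order: many mispredictions */
    clock_t t1 = clock();

    qsort(data, N, 1, cmp_u8);       /* sorted: the branch becomes predictable */
    clock_t t2 = clock();
    int64_t s2 = sum_big(data, N);
    clock_t t3 = clock();

    printf("random: %ld ticks, sorted: %ld ticks (sums %lld, %lld)\n",
           (long)(t1 - t0), (long)(t3 - t2), (long long)s1, (long long)s2);
    return 0;
}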

Matt Klein: Meltdown and Spectre, explained

Each cache miss adds a substantial amount of delay time to program execution. In order to mitigate this, processors are capable of executing ahead and out of order while waiting for memory loads. This is known as speculative execution. The following code snippet demonstrates this.

if (x < array1_size) {
  y = array2[array1[x] * 256];
}

In the previous snippet, imagine that array1_size is not available in cache, but the address of array1 is. The CPU might guess (speculate) that x is less than array1_size and go ahead and perform the calculations inside the if statement. Once array1_size is read from memory, the CPU can determine if it guessed correctly. If it did, it can continue having saved a bunch of time. If it didn’t, it can throw away the speculative calculations and start over. This is no worse than if it had waited in the first place.
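
Where the Spectre attacks come in (and what the snippet above does not show) is that the speculative load from array2 pulls a cache line in even though the result is thrown away, and cache state is observable from ordinary code by timing loads. Below is a minimal flush-and-time sketch of that measurement primitive; it is my own illustration using x86 intrinsics, not code from Klein’s post.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe[4096];

/* Time one load of *addr in cycles; a small value means the line was cached. */
static uint64_t time_load(volatile uint8_t *addr) {
    unsigned int aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                       /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void) {
    _mm_clflush(probe);                /* evict the line, as if it was never touched */
    uint64_t cold = time_load(probe);  /* slow: comes from main memory */
    uint64_t warm = time_load(probe);  /* fast: the first load cached the line */
    printf("cold: %llu cycles, warm: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}

In a full Spectre proof of concept, the attacker flushes every cache line of array2, triggers the branch with an out-of-bounds x so that array2[array1[x] * 256] is loaded speculatively, and then times a load from each line of array2; the one access that comes back fast reveals the secret byte array1[x].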