2. How do we talk about AI?


[This badly written blog post has been superseded by a slightly better written forum post over on the EA forum.]

I am skeptical of AI Safety (AIS) as an effective cause area, at least in the way AIS is talked about by people in the effective altruism community. However, it is also the cause area that my skills and knowledge are best suited to contribute to, so it seems worthwhile for me to think my opposition through.

Previously: [1][latest].

All sentences are wrong, but some are useful. I think that a certain emotional salience makes us talk about AI in a way that is more wrong than necessary.

A self-driving car and a pre-driven car are the same thing, but I can feel myself thinking about the two in completely different ways.

Self-driving cars are easy to imagine: they are autonomous and you can trust the car like you trust a cab driver; it can make mistakes but probably has good intent; when it encounters an unfamiliar situation it can think about the correct way to proceed; and if something goes wrong, the car is at fault.

A pre-driven car is hard to imagine: it has to have a bunch of rules coded into it by the manufacturer, and you can trust the car like you trust a bridge; it does exactly what it was built to do, but if it was built without proper testing or calculations, things will at some point go wrong. When they do, the company and its engineers are at fault.

You can make these substitutions on any sentence in which a computer is ascribed agency. In the best case, “The neural network learned to recognize objects in images” becomes “The fitted model classifies images in close correspondence with the human-given labels”. In reality, that description might be too generous.

It helps to keep the human component in mind. “The YouTube algorithm shows you exactly those videos that make you spend more time on the platform” is accurate in some sense, but it completely glosses over the ways in which the algorithm does not do that. When you listen to music using YouTube’s autoplay, it isn’t hard to notice that suggestions tend to point backwards in time relative to the upload date of the video you’re watching right now, and that, apart from preventing repeats, autoplay is pretty Markovian (mathspeak for the algorithm not doing anything clever with your viewing history, just “this video is best followed by that video”). Both of those properties are clearly a result of the way YouTube’s engineers modelled the problem they were trying to solve. I would describe YouTube’s suggestions as “the autoplay algorithm was made to link you to videos that most people watched and liked after watching the current one”.
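To make the “Markovian” point concrete, here is a toy sketch of what such an autoplay rule amounts to. The names and numbers are made up, and this is not how YouTube actually works; it only illustrates a recommender that looks at the current video plus a repeat filter and nothing else:

```python
# Hypothetical co-watch statistics: for each video, the videos most people
# watched and liked next, in descending order of popularity.
NEXT_VIDEO = {
    "video_a": ["video_b", "video_c", "video_d"],
    "video_b": ["video_a", "video_e"],
    "video_c": ["video_b"],
}

def autoplay_next(current, history):
    """Pick the most popular follow-up to `current` that hasn't been played yet."""
    for candidate in NEXT_VIDEO.get(current, []):
        if candidate not in history:  # the only use of history: prevent repeats
            return candidate
    return None

# Starting from video_a after having already seen video_b,
# the suggestion skips video_b and falls through to video_c.
print(autoplay_next("video_a", history={"video_a", "video_b"}))  # video_c
```

Nothing in this sketch remembers what you watched an hour ago or tries to model you as a person; the cleverness, such as it is, lives entirely in how the table of follow-ups was built.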

When you rewrite AI-related statements like this, they tend to become wordier. That is exactly what you would expect, but it does make it unwieldy to have accurate conversations. I leave the search for catchy-but-more-accurate buzzwords as an open problem. I am particularly interested in how to translate the term “artificial general intelligence” (AGI).

Post-scripts

  1. And don’t forget the term AI itself. Nobody is making artificial intelligence right now, because the things people make aren’t intelligent.
    Everyone in academia and industry has an incentive to oversell their own product and make it seem more impressive than it is, and AI and ML folks are particularly successful at it.
