4. Who is worried about AI Risk?


I am skeptical of AI Safety (AIS) as an effective cause area, at least as AIS is talked about by people in the effective altruism community. However, it is also the cause area that my skills and knowledge are best suited to contributing to, so it seems worthwhile for me to think my opposition through.

Previously: [1] [2] [3] [latest].

There are many people talking about the risks of artificial intelligence. I want to roughly split them into three groups for now, because they worry about very different issues and tend to talk past each other, confusing outsiders.

The LessWrong-aligned view seems most popular in the EA community. Exemplified by the paperclip maximizer argument, LW-aligned worriers are concerned that an Artificial General Intelligence (AGI) would accomplish its objectives in unforeseen ways, and should therefore be treated the way you would treat an evil genie, except worse, because it would have even less understanding of basic words than philosophers have. The principles that AI should satisfy are listed by the Future of Humanity Institute. [Though I suspect at least some of the signatories to have the FATML-aligned view in mind.] A popular book on this is Superintelligence by Nick Bostrom.

Fairness, Accountability and Transparency in Machine Learning (FATML) is a subfield of machine learning concerned with making algorithmic decision-making fair, accountable and transparent. Exemplified by Amazon’s recent recruiting debacle, FATML-aligned worriers are concerned that modern algorithmic decision-making will exacerbate existing social, economic and legal inequalities. The principles that AI should satisfy are listed by The Public Voice, and these Google ML guidelines fit as well. [Though I suspect at least some of the signatories to have the LW-aligned view in mind.] Popular books include Weapons of Math Destruction by Cathy O’Neil, Algorithms of Oppression by Safiya Noble and Automating Inequality by Virginia Eubanks.

The third group covers other AI-related worries commonly heard in the media. I want to separate these from the previous two categories because they are more about politics and less of a technical problem. Worries include killer drones, people losing their jobs because AI replaced them, and whom the self-driving car should run over given the choice.

In the next couple of posts on AI-related topics, I will focus on the first two categories. My aim is to use the FATML-aligned view to compare and contrast the LW-aligned view, hopefully gaining some insight in the process. I separate the views this way because I agree with the FATML-aligned worries and disagree with the LW-aligned ones.
