The AI Compass
Opinions on AI generally fall into four camps, much like the quadrants of the Political Compass or Paul Graham’s Four Quadrants of Conformism.
You can sort people into one of these four camps depending on how they fill in the following blanks:
- Overall, AI is { Good | Bad }
- AGI is { Close | Far }
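The two blanks above define a simple 2×2 grid. As a rough sketch (the quadrant names are the ones used in this post; Good / Far has no nickname here, so its label is just the quadrant itself), the sorting could be written as:

```python
def ai_compass(overall: str, agi: str) -> str:
    """Map the two answers to a quadrant name used in this post.

    overall: "Good" or "Bad"; agi: "Close" or "Far".
    """
    quadrants = {
        ("Bad", "Close"): "doomers",
        ("Bad", "Far"): "pessimists",
        ("Good", "Close"): "optimists",
        ("Good", "Far"): "Good / Far",  # the author's own, unnamed camp
    }
    return quadrants[(overall, agi)]

print(ai_compass("Bad", "Close"))  # doomers
```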
You can also take a short quiz to find out where you are on the compass.
Those who answer Bad / Close are the doomers. They believe that general-purpose AI is coming within the next few years, but will wipe out humanity if it is created. Because of this, they are virulently, sometimes violently, opposed to new AI breakthroughs. They want to stop or slow down AI development, seeing it as the only way to save humanity. However, they are usually pessimistic about our chances because of the "arms race" between AI labs like OpenAI, DeepMind, and Anthropic. They dislike open source AI models, worrying about the harm those models could cause in the wrong hands. Many of this group are Effective Altruists.
Those who answer Bad / Far are the pessimists. They believe that AGI is several once-in-a-generation breakthroughs away, and might not even be possible in the form doomers imagine. However, they are worried about bias, misinformation, and job replacement caused by current AI systems. Politically, many of them lean left. They dismiss fears about AI apocalypse as "hype" or "marketing" while pointing out more immediate harms caused by corporations and law enforcement misusing AI tools. They also point to the low quality of output from image and language models. Many of this group are academics or researchers. People in Bad / Far often disapprove of open source models because of concerns about automation or intellectual property rights.
Those who answer Good / Close are the optimists. They believe that imminent AGI is likely to lead to a utopian future, where humans don't need to work, there are cures for all diseases, and humanity has expanded to other planets. Many of them work in the industry as engineers. They like open source models because they accelerate innovation and democratize access to the technology. Generally, they believe that current AI systems are a stepping stone to general intelligence, which is likely to happen sooner rather than later. Politically, they lean libertarian, and are concerned about overregulation or regulatory capture of AI. Americans who answer Good / Close may see AI research as an arms race between the USA and China, worrying about the possibility of "AGI with Chinese characteristics".
I am in Good / Far. While I understand that current AI systems are nowhere close to human intelligence, I do believe they are revolutionary tools. Their limitations are manageable and vastly outweighed by their positive use cases. I'm also comfortable enough with economics to know that automation doesn't cause long-term unemployment. I agree with Good / Close about open source models. But I think that AGI is far off — solving intelligence is really, really hard. If and when AGI is developed, it won't destroy the world but will instead be another powerful tool. I'm fairly sure that everyone hyping, dismissing, or catastrophizing about AI today will look hopelessly naïve in fifty years' time. And I'm willing to bet that we won't have AGI anytime soon, although I hope I lose. When the AI market cools down, I expect more will join me in Good / Far, and indeed claim that they were here all along.
—
If you were to map people’s opinions as points on a continuous Cartesian plane, OpenAI CEO Sam Altman would sit much closer to the Good / Bad dividing line than a more traditional optimist. AI CEOs like Altman tend to be Center / Close. They are usually open to both strongly positive and strongly negative outcomes — indeed, steering AGI towards a positive outcome is usually the reason they founded their companies in the first place. Elon Musk, who recently founded his own lab xAI, is also Center / Close. Because they’ve chosen to work on improving AI as well as aligning it, I’d classify Musk and Altman as moderate Good / Close. The concrete difference between Center / Close and the more optimistic Good / Close is that Center / Close is less hostile to regulation and more sceptical of open source models. Other types of centrist voices are less common.
Take the test to see where you fall. If you want to be added to the list below, or you think someone else should be, reach out and let me know who I should add and in which category.
List of people by AI compass quadrant
Bad / Close
- Eliezer Yudkowsky (MIRI)
- Geoff Hinton (retired, formerly Google)
Bad / Far
- Timnit Gebru (DAIR)
- Arvind Narayanan (Princeton)
Center / Close
- Sam Altman (OpenAI)
- Elon Musk (xAI, Tesla, etc.)
Good / Close
- Roon (OpenAI)
Good / Far
- Yann LeCun (Meta)