How sure are we about this AI stuff?

Ben Garfinkel, Research Fellow
Future of Humanity Institute
Effective Altruism Global: London 2018

  • AI is not the main focus of the Effective Altruism community: it receives only about 5% of the grants given out by the Open Philanthropy Project, and EA community members do not rank it as their highest concern.

  • It is a growing area, however: people are making career changes into AI and into AI safety, policy, and governance.

  • Technology has driven, and will continue to drive, great changes in the world. AI is a (perhaps the) key vehicle for future technological change.

  • Intelligence in artificial form seems achievable, since we have the human mind as a working example. Artificial intelligence could augment human intelligence in ways that lead to a future completely different from the one we expect.

  • Even if AI has a high impact, it does not follow that deploying resources to AI governance will have an impact, positive or otherwise. This is the question of leverage.

  • The three great risks of AI: instability, lock-in, and accidents.

  • Instability includes events that do permanent harm, for example AI fueling great-power competition and war.

  • Lock-in results from decisions that are hard to reverse, for example an international treaty or governance policy that turns out to have a negative effect.

  • The classic accident scenario is an AI with a poorly specified goal, for example the paperclip maximizer.