Lessons learned from modeling AI takeoffs, and a call for collaboration

I am now able to think about concepts like AI risk and feel that they are real, not just verbal. This was the point of my modeling-the-world project. I’ve generated a few intuitions about what’s important in AI risk, including some considerations that I think are being neglected. There are several directions in which this line of research could be extended, and I’m looking for collaborators to pick it up and run with it.

I intend to write about much of this in more depth, but with EA Global coming up I want a simple description I can point to here. These are loose sketches, so I'm aiming to describe rather than persuade.

New intuitions around AI risk

  • Better cybersecurity norms would probably reduce the chance of an accidental singleton.
  • Transparency about AI projects’ level of progress reduces the pressure toward an AI arms race.
  • Safety is convergent: widespread improvements in AI value alignment improve our chances of a benevolent singleton, even in an otherwise multipolar dynamic.
