There are lots of arguments out there about AI risk and likely AI takeoff scenarios, and it’s often hard to compare them, because they tacitly make very different assumptions about how the world works. This is an attempt to bridge those gaps by constructing a hierarchical conceptual framework that:
- Articulates disjunctions that commonly underlie disagreements around likely AI takeoff scenarios.
- Contextualizes differing arguments as the result of differing world models.
- Provides an underlying world model, for which those differing models are special cases.
Over the past several weeks I’ve been working on a document laying out my current thinking around likely AI takeoff scenarios. I’ve been calling it a hierarchical conceptual framework, for lack of a pithier or clearer term. In the process of getting feedback on my drafts, it’s become clear to me that it’s nonobvious what sort of thing I’m writing, and why I’d write it.
Thinking about AI risk, it’s become increasingly clear to me why I don’t feel I have a firm understanding of the discourse around it.
My friend Satvik recently told me about an important project management intuition he’d acquired: it’s a very bad sign to have a lot of projects that are “90% complete”. This is bad for a few reasons, including:
- Inventory: For any process that makes things, a smaller inventory is a substantial savings. A manufacturer buys raw inputs, does work on them, and ships them to a customer. Every moment between the purchase of inputs and the delivery of finished goods is a cost to the manufacturer, because of the time value of money. Smaller inventories are what a faster turnaround looks like. If a lot of your projects are 90% complete, you’ve invested a lot of work in them over a long period, but realized none of the value of the finished products.
- Power law: Some projects might be much more important than others. If you’re allocating time and effort evenly among many projects, you may be wasting a lot of it.
- Quality of time estimates: Things may be sitting at “90%” because they keep seeming “almost done” even as you put a lot of additional work into them. If you’re using faulty estimates of time to completion, this may make your cost-benefit calculations wrong.
- Mental overhead: Even if it were somehow optimal for an ideal worker to handle a lot of projects simultaneously, in practice humans can’t perform very well like that. Conscious attention isn’t the only constraint - there are also only so many things you can productively fit on the “back burner”.
I decided to use this insight to prioritize my work. Things that are “90% done” should be frontloaded, and idea generation should be deprioritized until I’ve “closed out” more things and cleared up mental space.
Focus and antifocus
As part of my motivational switch from diligence to obsession, I’ve been talking with people working on AI risk or related things about their model of the problem. I’ve found that I tend to ask two classes of question:
- What is your model of the situation?
- What are you choosing not to work on, and why?
When I asked the first question, people tended to tell me their model of the thing they were focusing on. I found this surprising, since it seemed like the most important part of their model ought to be the part that indicates why their project is more promising to focus on than the alternatives. Because of this, I began asking people what they were ignoring.