Yesterday was an errand day, so I didn’t anticipate getting much done, but I ended up running into a few people who’d read this blog and talking with them about it, and I made an appointment to talk with a couple of other people about AI timelines. Today I intended to put my economic model of AI takeoff into a sufficiently polished form that I could ask some friends for feedback pre-publication. Unfortunately, I was unsure what I wanted the finished thing to be, which made it hard to stay motivated. I’ll be spending the rest of the workday, and some of tomorrow morning, thinking that through.
- Daily summaries will now be separated from major content.
- I switched URLs to give me more control over this blog.
- I’m experimenting with the feel and pacing of my research.
- An update on what I’ve been doing
This was an abbreviated research day, partly because I spent the first couple of hours writing the prior two posts.
Shortest path to human-level AGI
The timelines research could spiral into unmanageable complexity very quickly, so I thought about ways to make it more tractable. We don’t really care about estimating all the timelines, just the shortest ones, or at least the shortest ones likely to lead to an intelligence explosion. So one approach is to look at the paths to AI that researchers are most excited about. Continue reading
I took a bunch of days off after my first day for various reasons, during which I came up with my plan to summarize each day’s work, so this retrospective has a longer lag than I hope will be usual.
I began by tracing some of my uncertainty on what to do back to uncertainty about how the world works. I decided to focus on the likely timing and speed of an intelligence explosion, because if I end up with a strong answer about this, it could narrow down my plausible options a lot.
I focused mostly on the timing of human-level artificial general intelligence, leaving the question of whether a takeoff is likely to be fast or slow for later. I also decided to leave aside the question of existential risk from AI that isn’t even reliably superhuman, although I suspect that this is a substantial risk as well.
I enumerated a few plausible paths to human-level intelligence, and began looking into how long each might take. I was not able to get a final estimate for any path, but got as far as determining that the cost and availability of computing hardware is not likely to be the primary constraining factor after about ten years, so I can’t just extrapolate using Moore’s law. Predicting these timelines is going to require a model of how long the relevant theoretical or non-computing technical insights will take to generate. This will be messy. Continue reading
I’m pivoting from working on myself to trying to understand the world, and in particular some things related to AI risk. I’ll be blogging about it here. Continue reading