Update 2: Shortest paths, Arrogance

This was an abbreviated research day, partly because I spent the first couple of hours writing the prior two posts.

Shortest path to human-level AGI

The timelines research might spiral out to an unmanageable level of complexity real fast, so I thought about ways to make it more tractable. We don’t really care about estimating all the timelines, just the shortest ones, or at least the shortest ones likely to lead to an intelligence explosion. So you could look at which paths to AI the researchers in the field are most excited about.

Disadvantages of this approach include the fact that the field is not sharply delimited - for example, some approaches might only become viable after further advances in brain scanning or cognitive science. Another is that there may be no extant survey that’s suitable.

I took a look at the AI Impacts survey summary page, and it looks like the surveys they found mostly just ask about timelines, not about which approaches are promising - though they may have only been looking for the former.

I looked at the Wikipedia page on artificial intelligence, and it gives a much more detailed list of approaches to AI, though not roadmaps to human-level AI. One possible next step would be to go through that list and try to organize it somehow. I may also try to skim the textbooks listed there for context.

Should I keep going?

When I looked at the AI Impacts survey summary page, I noticed that just about every survey of experts reports consistently very wide confidence intervals. On the face of it, it seems improbable that I, an amateur, can outpredict experts in the field in a short amount of time. Should I even be trying this?

It’s plausible that I should switch to looking directly into interventions where I think I could do a lot under some plausible outcome.

On the other hand, there are some reasons to keep going:

  • Worrying about AI risk the way I do pretty much requires that I think I can see some things better than the experts.
  • Even if I end up with confidence intervals just as wide, knowing what drives that uncertainty might help me interpret future information.
  • I would expect researchers to be fairly good at predicting what problems they’ll need to solve to get to some outcome. But humans not specially trained at forecasting are notoriously bad at predicting timelines. Researchers have lots of practice picking the next problem to work on, but nearly none at predicting where they’ll be in their research in ten years.

This seems pretty inconclusive and I will probably revisit it.

This entry was posted in Updates.