Update 7: conceptual vocabulary, writing, feedback, lit review

  1. I’m redefining my current research project: limiting its scope to creating a conceptual framework, not getting empirical results.
  2. I’m dropping my commitment to regular updates.
  3. I got some good feedback on my draft research hierarchy writeup. Core issues I’m going to emphasize are:
    • Conflict vs cooperation as drivers of AI progress during a takeoff scenario.
    • Creating disjunctive arguments that put considerations in relation to each other, making clear which scenarios are mutually exclusive alternatives, which are elaborations on a basic dynamic, and which might overlap.
  4. I plan to review more of the existing literature on AI risk, to see whether anyone else has already made substantial progress I don’t know about on a conceptual framework for AI risk.


Update 6: working on a research hierarchy

Each time I talk through my research with someone, I seem to get another "click" moment as some part of the implied structure falls into place. This in turn has helped me figure out what function I want my "big picture" model to serve.

An example of one of these "click" moments happened yesterday: while trying to explain why the considerations I was thinking about formed a natural group and why others should be excluded, I realized that I was trying to model an equilibrium where none of the agents models the others, as a simpler case to build more complex models on top of.
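
To make that concrete, here is a toy sketch in Python of the kind of fixed point I mean. Every name, functional form, and number below is a placeholder assumption for illustration, not the model from my research: each agent best-responds to an aggregate price that it treats as fixed, never reasoning about any other agent, and we iterate until the price stops moving.

```python
# Toy sketch: an equilibrium where no agent models any other agent.
# Each agent best-responds to an aggregate price it treats as fixed;
# we iterate until the price reaches a fixed point. All functional
# forms and numbers are placeholder assumptions for illustration.

def best_response(wealth, price):
    # Price-taking demand for compute: spend a fixed share of wealth.
    # (Chosen only for simplicity; nothing hinges on this form.)
    return 0.5 * wealth / price

def equilibrium(wealths, supply=100.0, tol=1e-9, max_iter=1000):
    price = 1.0
    for _ in range(max_iter):
        demands = [best_response(w, price) for w in wealths]
        total = sum(demands)
        new_price = price * total / supply  # raise price if over-demanded
        if abs(new_price - price) < tol:
            return new_price, demands
        price = new_price
    return price, demands

price, demands = equilibrium([10.0, 20.0, 70.0])
print(price, demands)  # at the fixed point, total demand equals supply
```

The simplification is what makes the fixed point easy to find: since no agent's choice depends on a model of anyone else's reasoning, each agent's problem can be solved independently, iterating only on the shared aggregate. More complex models, where agents do model each other, can then be built on top of this base case.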

Update 5: writing, organizing my research

The last few workdays have been tricky because the working style that seemed to serve me well for doing new research (running across things that catch my eye and then leaning into my desire to understand them better) didn't work very well for writing. In addition, it seems that I have to repeatedly expend executive effort just to allocate large uninterrupted blocks of time to work, well-rested, on this project. This week I've put in place some accountability measures that should help, such as committing to examine my social engagements in advance and clarify whether I genuinely expect them to be as valuable as time spent on this project.

On the object level, while writing up my initial model of how AI takeoff might be affected by the existence of a broader economy, I realized that I was really trying to do two different things:

  1. Mathematize my intuitions about AI takeoff scenarios where each AI can either work on itself or participate in the economy (a toy sketch follows this list).
  2. Lay out a higher-level argument about how the above fits into the bigger picture.
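
To gesture at what item 1 could look like, here is a toy sketch in Python. The growth rate, wage, compute price, and the rule for turning earnings into more capability are all made-up placeholder assumptions, not the actual model I'm writing up: each AI splits its capacity between improving itself and working in the economy, and earnings are spent on additional capability.

```python
# Toy sketch of the allocation tradeoff in item 1: each AI splits its
# capacity between self-improvement and economic work. All parameters
# and functional forms are placeholder assumptions for illustration.

def step(capability, wealth, s, r=0.10, wage=1.0, compute_price=5.0):
    """One time step for an AI devoting share s of capacity to itself."""
    wealth += wage * (1.0 - s) * capability  # income from economy work
    bought = wealth / compute_price          # spend savings on capability
    wealth = 0.0
    capability *= 1.0 + r * s                # returns to self-improvement
    capability += bought
    return capability, wealth

# Compare a pure self-improver, a pure market participant, and a mix.
for s in (1.0, 0.0, 0.5):
    capability, wealth = 1.0, 0.0
    for _ in range(50):
        capability, wealth = step(capability, wealth, s)
    print(f"s={s}: capability after 50 steps = {capability:.1f}")
```

With these made-up numbers, pure market participation happens to compound fastest; that is an artifact of the parameters, but it illustrates the comparison the mathematized model needs to make precise: how returns to self-improvement stack up against returns to trading with the broader economy.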

My plan is to finish writing these two things up, and then go back and review my research priorities, with an eye towards creating a single view where I can see the overall structure of my current model and the changes I've made to it. (Then I'll go back to doing more object-level research.)