Each time I talk through my research with someone, I seem to get another "click" moment as another piece of the implied structure falls into place. This in turn has helped me figure out what function I want my "big picture" model to serve.
One of these "click" moments happened yesterday: while trying to explain why the considerations I was thinking about formed a natural group, and why others should be excluded, I realized that I was trying to model an equilibrium in which none of the agents model each other, as a simpler case to build more complex models on top of.
Right now, my project looks like taking things that had seemed like distinct, hard-to-compare considerations, linked in my mind only as "related to AI", and organizing them into a more hierarchical structure, where some arguments or considerations are parts of others, or alternatives to others that can be compared via some deeper underlying model.
For instance, you can treat a "recursive self-improvement" takeoff as a special case of an AI choosing at each moment how much of its intelligence to allocate to improving its software, and how much to spend working for money to buy more hardware. (Or, more generally, working to improve itself vs. working for money to buy improvements from the broader market.) That, in turn, is a special case of an AI deciding how much of its resources to allocate among self-improvement, trade, and directly seizing resources.
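To make the software-vs-hardware version of this concrete, here is a minimal toy sketch. Everything in it is my own illustrative assumption: the multiplicative intelligence function, the linear wage, and all the parameter values. Recursive self-improvement falls out as the special case where the allocation is 1.0.

```python
def takeoff(alloc, steps=10, software=1.0, hardware=1.0,
            improve_rate=0.1, wage=1.0, hw_price=1.0):
    """Toy two-factor takeoff: at each step the agent spends a fraction
    `alloc` of its effort improving its software and the rest earning
    money to buy hardware. All functional forms are assumptions for
    illustration, not a claim about real dynamics."""
    for _ in range(steps):
        intelligence = software * hardware  # assumed multiplicative
        # Self-improvement: software gains scale with effort allocated.
        software *= 1 + improve_rate * alloc
        # Trade: remaining effort earns wages, converted into hardware.
        hardware += wage * intelligence * (1 - alloc) / hw_price
    return software * hardware

# Pure recursive self-improvement (alloc=1.0) never buys hardware;
# which strategy ends up ahead depends entirely on the parameters.
pure_rsi = takeoff(1.0)
mixed = takeoff(0.5)
```

With these particular parameters the mixed strategy beats pure self-improvement, but that is an artifact of the assumed numbers; the point of the framing is that RSI is one corner of a larger allocation problem, not that any corner is optimal.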
Today, for instance, I realized that in an economic takeoff model, the amount of resources an AI project devotes to AI development might depend more on the profitability of further investment and on the cost of capital than on the amount of resources the project currently possesses.
So right now I'm still writing up my conceptual hierarchy, refining and elaborating on it in the process of writing. After I publish that, and my writeup on the two-factor AI takeoff model described above, I'll circle back and reevaluate my broader research plan.