An OpenAI board seat is surprisingly expensive
The Open Philanthropy Project recently bought a seat on the board of the billion-dollar nonprofit AI research organization OpenAI for $30 million. Some people have said that this was surprisingly cheap, because the price in dollars was such a low share of OpenAI's eventual endowment: 3%.
To the contrary, this seat on OpenAI's board is very expensive, not because the nominal price is high, but precisely because it is so low.
If OpenAI hasn’t extracted a meaningful-to-it amount of money, then it follows that it is getting something other than money out of the deal. The obvious thing it is getting is buy-in for OpenAI as an AI safety and capabilities venture. In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI by taking the position of a material supporter of the project. The important thing is the mutual validation, and a nominal donation just large enough to neg the other AI safety organizations supported by the Open Philanthropy Project is simply a customary part of the ritual.
By my count, the grant is larger than all the Open Philanthropy Project's other AI safety grants combined.
(Cross-posted at LessWrong.)
19 Comments
Is there any evidence OpenAI needed legitimization by OPP? They're fully funded, have famous founders, and are getting all the talent they want.
I especially take exception to the idea that supporting one org implicitly devalues others. That is a thing that can happen, but the attitude that this is Occam's Razor and needs no further justification is really toxic and retards progress. Note that the accusation doesn't have these negative externalities if you provide evidence, even if the evidence isn't convincing to all people. The point is to treat it as an aberration that requires evidence, not as a default.
I know putting the burden of proof on the accuser can be inhibitory, and that can be bad. In this case what I care about is not that you criticized OpenAI or OPP, but that you implicitly endorsed the social reality without reference to object reality.
If it turned out that OpenAI had a good $30 million AI safety program that was hard to fund due to internal politics, and that would have been much harder outside OpenAI, and this grant got it funded, I'd be very pleasantly surprised, except for the part where the grant writeup would have been a gross misrepresentation.
It's striking to me that most alternative hypotheses I've seen so far (not just yours, but Kelsey's as well, and others from private conversations) assume that someone is substantively misleading the public about this deal, whether actively, by omission, or by implication. I think it's worth considering hypotheses involving deception, but it's also important to track what people are actually saying.
* OpenAI has a billion in funding *pledged*, but that doesn't necessarily translate to anywhere near that amount in their bank account at present, and $30mil could easily be a more significant share of the money concretely available in the short term.
* OpenAI has several priorities internally, and a grant plus a board seat is a compromise that allows people with different priorities in AI risk to all be on board with the partnership.
* The money is understood to be a trial of the value of this collaboration; if it looks valuable in three years, OPP will continue funding at the $10mil/year level or possibly higher, making the expected value of the partnership greater than $30mil.
You're saying this like it disproves Kelsey's point. If OPP is supporting an organization that increases catastrophic risk it's a bad idea whether they got a good price on it or not.
And I still haven't seen an answer to my and Kelsey's points that this post treats social reality as the only reality, and that doing so without justification is harmful even if you are correct.
I've thought a bit about your social reality objection, and I think I understand what you mean now. I very much agree that social reality isn't the only thing. For instance, if OpenAI wanted Holden's advice, so they asked him to come over and talk to them, and Holden said OK and did, that would just be knowledge transfer and I wouldn't presume that this sort of influence has large hidden costs to Holden.
But that's very different from becoming a donor and getting a board seat, which looks much more like a transaction in which, in exchange for some sort of buy-in, one gets to do some of the steering. In particular, even a nominal donation suggests that a substantial part of the value-add for OpenAI is the visible support of the Open Philanthropy Project, not just Holden's advice. It seems like the Open Philanthropy Project's model is that OpenAI is undervaluing Holden's advice, so by doing the buy-in transaction in social reality, the Open Philanthropy Project can use its steering power to move objective reality in a better direction.
Consider OPP's observation that its grant to MIRI came with the cost of OPP having to deal with communication difficulties. It's unclear what this actually means, but one interpretation is that OPP views socially aligning itself with MIRI as costly because of Eliezer's verbally aggressive behavior in both personal and professional contexts. (One notices that it would be hard for OPP to say so if this were, in fact, the case.) My best guess at this point is that OPP finds this sort of behavior harmful via the mechanism that verbally aggressive figures who support AI safety research tend to drive capable researchers away from working usefully on the problem.
If this is the case, the OpenAI grant starts to look like a better idea from OPP's perspective; even in worlds where they think MIRI is doing much more valuable work than anyone else, they might make a medium-sized grant to MIRI for its direct impact, and then larger grants to other ostensibly AI-safety-oriented research groups for the social value of dis-endorsing MIRI (and, consequently, Eliezer's social actions) in order to promote AI safety as a field actually worth taking seriously.
Regardless of whether or not one of OPP's main motives is to condemn Eliezer's verbally aggressive social moves while affirming the importance of AI safety research, OPP's actions are themselves socially aggressive towards MIRI. However, I intuit that this is less of a problem than it might sound, since in some sense OPP is "defecting" with social aggression towards MIRI only after MIRI has, in OPP's view, already "defected" with social aggression towards something OPP cares about (the extent to which AI safety research is taken seriously).
I need to run and didn't have time to proofread, but will be checking back in on this thread later.
That social pain is almost entirely divorced from whether this is the best action to prevent dangerous AI. It is entirely possible that the socially charming latecomers will be more effective, either because they are socially charming or because being charming is associated with other good traits.
(This doesn't address your belief that OpenAI is bad for AI risk. But if that's true, then it stands on its own without picking apart the social reality.)
1) Whether funding MIRI is a good idea.
2) How much social pain MIRI and its proponents are feeling.
Ben mentions that those two are probably highly correlated. I'd actually not meant to bring up 2) at all, but rather to guess at OPP's evaluation of 1) as a function of 3):
3) How much EY and others have acted, and will continue to act, in verbally dominant ways.
It's not clear to me whether OPP thinks 1) is strongly dependent on 3), weakly dependent, or not dependent at all. It's also not clear to me how much 1) is *actually* dependent on 3), but my social intuitions say that 1) (or at least OPP's judgement of 1)) is dependent enough on 3), relative to how easy it is to change 3), that it's worth seriously considering whether changing 3) is in fact worthwhile.