Investigative journalist in the AI safety space?

By Benevolent_Rain @ 2024-11-15T08:48 (+5)

Apologies if this is clearly laid out somewhere else: Is there someone I could donate to who independently investigates the AI safety space for conflicts of interest?

It has been mentioned that several large donors in the AI safety space have personal investments in AI. While I have no proof that this is going on, and really hope it is not, it seems smart to have at least one person funded at least half-time to look across the AI safety space for possible conflicts of interest.

I think a large, diverse group of small donors could actually have a unique opportunity here. The funded person should refuse grants from any large donors, should not accept grants that constitute more than e.g. 5% of their total funding, and all of this should be extremely transparent.

This does not need to be an investigative journalist; it could be anyone with a scout mindset, the ability to connect with people, and a knack for "where to look".


Habryka @ 2024-11-15T17:41 (+6)

It is trivially available public information that what you are saying here is true. This isn't something for which we need an investigative journalist; it's something for which you just need basic Google skills:

Ulrik Horn @ 2024-11-15T20:47 (+3)

Thanks, that is super helpful, although some downvotes could have come from what might be perceived as a slightly infantilizing tone - haha! (No offense taken, as you are right that the information is really accessible, but I guess I am just a bit surprised that this is not mentioned more often on the podcasts I listen to - or perhaps I have just missed several EAF posts on this.)

Ok so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I guess the good thing, then, is that as AI grows they will have more money to put towards making it safe - it might not be all bad.

MichaelDickens @ 2024-11-15T22:21 (+4)

Ok so all major funders of AI safety are personally, and probably quite significantly, going to profit from the large AI companies making AI powerful and pervasive.

I know of only two major funders in AI safety—Jaan Tallinn and Good Ventures—and both have investments in frontier AI companies. Do you know of any others?

Ulrik Horn @ 2024-11-16T20:45 (+5)

No, my comments are those of a complete novice, and naïve. I think I am just baffled that all of the funding of AI safety is done by individuals who will profit massively from accelerating AI. Or rather, what baffles me most is how little focus there is on this peculiar combination of incentives - I listen to a few AI podcasts and browse the forum now and then, so why am I only hearing about it now, after a couple of years? I am not sure what to think of it - my main feeling is just that the relative silence about this is somehow strange, especially in an environment that places importance on epistemics and biases.

MichaelDickens @ 2024-11-16T21:08 (+3)

I think most people don't talk about it because they don't think it's a big deal. FWIW I don't think it's a huge deal either, but it's still concerning.

Benevolent_Rain @ 2024-11-25T12:22 (+2)

FYI, weirdly timely podcast episode just out from FLI.

Ulrik Horn @ 2024-11-15T10:22 (+2)

Not sure why this is tagged Community? Ticking one of these criteria makes it EA Community:


Neel Nanda @ 2024-11-15T11:39 (+5)

Community seems the right categorisation to me - the main reason to care about this is understanding the existing funding landscape in AI safety, and how much to defer to them/trust their decisions. And I would consider basically all the large funders in AI Safety to also be in the EA space, even if they wouldn't technically identify as EA.

More abstractly, a post about conflicts of interest and other personal factors in a specific community of interest seems to fit this category.

Being categorised as community doesn't mean the post is bad, of course!

harfe @ 2024-11-15T11:25 (+1)

edit: the issue raised in this comment has been fixed