Peter Wildeford's Quick takes

By Peter Wildeford @ 2021-10-10T19:21 (+8)

Peter Wildeford @ 2024-08-31T16:22 (+123)

I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.

I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high-impact work, often higher impact than research (especially on the margin).

I see many EAs erroneously go into research and stick with it despite having very clear strengths on the operational side, insisting that they shouldn't do operations work unless they clearly fail at research first.

I've personally felt this: I started my career very oriented towards research, was honestly only average or even below-average at it, and then switched into management, which I think has been much higher impact (and has likely counterfactually generated at least a dozen researchers).

Will Aldred @ 2024-09-01T23:24 (+42)

For operations roles, and focusing on impact (rather than status), I notice that your view contrasts markedly with @abrahamrowe’s in his recent ‘Reflections on a decade of trying to have an impact’ post:

Impact Through Operations

  • I don’t really think my ops work is particularly impactful, because I think ops staff are relatively easy to hire for compared to other roles. However I have spent a lot of my time in EA doing ops work.
    • I was RP’s COO for 4 years, overseeing its non-research work (fiscal sponsorship, finance, HR, communications, fundraising, etc), and helping the organization grow from around 10 to over 100 staff within its legal umbrella.
    • Worked on several advising and consulting projects for animal welfare and AI organizations
      • I think the advising work is likely the most impactful ops work I’ve done, though I overall don’t know if I think ops is particularly impactful.

I see both Abraham and yourself as strong thinkers with expertise in this area, which makes me curious about the apparent disagreement. Meanwhile, the ‘correct’ answer to the question of an ops role’s impact relative to that of a research role should presumably inform many EAs’ career decisions, which makes the disagreement here pretty consequential. I wonder if getting to the ground truth of the matter is tractable? (I’m not sure how best to operationalize the disagreement / one’s starting point on the matter, but maybe something like “On the current margin, I believe that the ratio of early-career EAs aiming for operations vs. research roles should be [number]:1.”)

(I understand that you and Abraham overlapped for multiple years at the same org—Rethink Priorities—which makes me all the more curious about how you appear to have reached fairly opposite conclusions.)

abrahamrowe @ 2024-09-04T23:04 (+14)

Two caveats on my view:

  • I think I'm skeptical of my own impact in ops roles, but it seems likely that senior roles are generally harder to hire for, which might mean taking one could be more impactful (if you're good at it).
  • I think many other "doer" careers that aren't ops are very impactful in expectation — in particular founding new organizations (if done well or in an important and neglected area). I also think work like being a programs staff member at a non-research org is very much in the "doer" direction, and could be higher impact than ops or many research roles.

Also, I think our views as expressed here aren't exactly opposite — I think my work in ops has had relatively little impact ex post, but that's slightly different than thinking ops careers won't have impact in expectation (though I think I lean fairly heavily in that direction too, just due to the number of qualified candidates for many ops roles).

Overall, I suspect Peter and I don't disagree a ton (though I haven't talked with him about it) on any of this, and I agree with his overall assertion (more people should consider "doer" careers over research careers); I just also think that more people should consider earning to give over any direct work.

Also, Peter hires for tons of research roles, and I hire for tons of ops roles, so maybe this is also just us having siloed perspectives on the spaces we work in?

Rebecca @ 2024-09-05T09:13 (+2)

How does a programs staff role differ from an ops role?

abrahamrowe @ 2024-09-08T16:41 (+2)

I mean something like directly implementing an intervention vs finance/HR/legal/back office roles, so ops just in the nonprofit sense.

Rebecca @ 2024-09-08T18:37 (+3)

In that case I suspect there’s not disagreement, and you’re just each using ops to mean somewhat different things?

Péter Drótos @ 2024-09-06T10:40 (+1)

Is there a proposed/proven way of coordinating on the prioritization?

Without a good feedback loop, I can imagine the majority of people just jumping onto the same path, which could then run into diminishing returns if there isn't sufficient capacity.

It would be interesting to see at least the number of people at different career stages on a given path. I assume some data should be available from regular surveys. And maybe also some estimates of the capacity of different paths.

And I assume the career coaching services likely have an even more detailed picture, including missing talent/skills/experience, that they can utilize for more personalized advice.

Joseph Lemien @ 2024-09-02T14:10 (+11)

I don't know the true answer to this confusion, but I have some rough (untested, and possibly untestable) hypotheses I can share:

  • It is really hard to estimate counterfactual scenarios. If you are the project manager (or head of people, or finance lead, or COO), it is really hard to have a good sense of how much better you are than the next-best candidate. Performance in general is hard to measure, but trying to estimate the performance of a hypothetical other individual whom you have never met strikes me as very challenging. Even if we were to survey 100 people in similar roles at other orgs, the context-specific nature of performance implies that we shouldn't be too confident about predicting how a person would perform at Org A simply from knowing their performance at Org B.
  • I'm not quite sure how to phrase this, but it might be something like "the impact of operations work has high variance," or maybe "good operations limits the downside a lot but does relatively little to increase the upside." Taking a very simplistic example of accounting: if our org has bad accounting, then we don't know how much money we have, we don't keep track of accounts payable, and we have general administrative sloppiness relating to money, which makes decision-making hard. If we have very good accounting, then we have clarity about where our funds are flowing, what we own, and what we owe. Those upsides are nice, but they aren't as impactful (in a positive way) as the downsides are impactful (in a negative way). Phrased differently: many operations roles are a cost center rather than a profit center (although this will certainly vary depending on the role and the organization).
  • It might just be a thing of marginal value, with non-operations roles being more impactful (overall, in general), but we still need more good operations people than we currently have.

I have a lot of uncertainty as to the reality of this, but I'm always interested in reading thoughts from people about these issues.

PeterSlattery @ 2024-09-04T19:59 (+2)

Quick response - the way that I reconcile this is that these differences were probably just due to context and competence interactions. Maybe you could call it comparative advantage fluctuations over time?

There's probably no reasonable claim that advising is generally higher impact than ops, or vice versa. It will depend on the individual and the context. At some times, some people are going to be able to have much higher impact doing ops than advising, and vice versa.

From a personal perspective, my advising opportunities vary greatly. There are times when most of my impact comes from helping somebody else because I have been put in contact with them and I happen to have useful things to offer. There are also times when the most obviously counterfactually impactful thing for me to do is research, or some sort of operations work to enable other researchers. Both of these activities have somewhat lumpy impact distributions because they only occur when certain rare criteria are collectively met.

In this case, Abraham may have had much better advising opportunities relative to operations opportunities, while this was not true for Peter.

Brad West @ 2024-08-31T22:25 (+38)

One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider whether:

  1. They are providing specialized and scarce labor in a high-impact area where their contribution is genuinely advancing the field. This seems more applicable in specialized research than in general management or operations.
  2. They are exceptionally competent, yet the market might not compensate them adequately, thus allowing highly effective organizations to benefit from their undercompensated talent.

I tend to agree more with you on the "doer" aspect—EAs who independently seek out opportunities to improve the world and act on these insights often have a significant impact.

Chris Leong @ 2024-09-01T03:22 (+3)

This (a strong default towards earning to give) neglects the importance of value alignment for many EA-aligned orgs.

Having an org that is focused, rather than pulled in five different directions, is invaluable.

Brad West @ 2024-09-01T04:00 (+10)

I didn't neglect it - I specifically raised the question of under what conditions having EAs rather than non-EAs in roles within orgs adds substantial value. You assume that having EAs in (all?) roles is critical to having a "focused" org. I think this assumption warrants scrutiny; there may be many roles in orgs for which "identifying as an EA" is not important, and using it as a requirement could mean neglecting a valuable talent pool.

Additionally, a much wider pool of people who don't identify as EA could align with the specific mission of an org.

Joseph Lemien @ 2024-09-01T12:58 (+14)

Do you have any ideas or suggestions (even rough thoughts) regarding how to make this change, or for interventions that would nudge people's behavior?

Off the top of my head: A subsidized bootcamp on core operations skills? Getting more EAG speakers/sessions focused on operations-type topics? Various respected and well-known EAs publicly stating that Operations is important and valuable? A syllabus (readings, MOOCs, tutorials) that people can work their way through independently?

Ulrik Horn @ 2024-09-03T07:09 (+18)

I have previously suggested a new podcast that features much more "in the trenches" people than is currently the case with e.g. the 80k podcast, FLI podcast, etc. While listening to edge researchers is more fun than listening to how someone implemented Asana in an efficient way, I think one can make an "in the trenches" podcast equally, if not more, interesting by telling personal stories of challenges, perseverance and mental health. One example is Joey at AIM - from the few times I've heard him talk, he seems to live an unusually interesting life. I also think a lot of ops people have really cool stories to tell, like people into fish welfare poking around at Greek aquaculture installations trying to get to know their "target market". There must be a ton of good "stories from the field" out there.

Mjreard @ 2024-09-05T05:32 (+1)

Have you listened to 80k Actually After Hours/Off the Clock? This is close to what I was aiming for, though I think we still skew a bit more abstract. 

Ulrik Horn @ 2024-09-05T07:20 (+2)

Yes, it is good, but I feel like it is more of an unstructured conversation, and more about ideas than lived experiences. So I am thinking of something a bit more prepared, perhaps trying to get some narrative arcs with the struggle, the battle, the victory (or defeat!) and then the epilogue. What was super interesting (and shocking in a negative way!) to listen to was "Going Infinite" - it is essentially an EA story. So rich, so gripping and compelling, and so dramatic. I think something only 10% as dramatic would be interesting to listen to, and there must be stories out there. I think the challenge will be finding the overlap between "juicy stories" and people willing to tell them - often the most interesting stuff is stuff people are concerned about being public! But it also needs to be something that makes people think ops work sounds interesting, though this could also include examples of how gravely things can go wrong without ops - the FTX scandal can be viewed through that lens, for example.

ZY @ 2024-09-01T18:29 (+1)

I once saw a post (https://www.alignmentforum.org/posts/ho63vCb2MNFijinzY/agi-safety-career-advice) that is specific to AI and details the directions within both research and governance, and I found it useful. Maybe a general education post like this (but on more general EA topics) would be very helpful.

SiebeRozendal @ 2024-09-03T12:50 (+7)

I guess this is the same dynamic as why movie and sports stars are high status in society: they are highly visible compared to more valuable members of society (and more entertaining to watch). We don't really see much of highly skilled operations people compared to researchers.

Joseph Lemien @ 2024-09-04T17:39 (+4)

I'm reminded of The Innovation Delusion (which I've mentioned a bit previously on the EA Forum: 1, 2), and its ideas of credit, visibility, absence blindness, and maintenance work. Its example of Thomas Edison is good enough that I will copy and paste it here:

Edison—widely celebrated as the inventor of the lightbulb, among many other things—is a good example. Edison did not toil alone in his Menlo Park laboratory; rather, he employed a staff of several dozen men who worked as machinists, ran experiments, researched patents, sketched designs, and kept careful records in notebooks. Teams of Irish and African American servants maintained their homes and boardinghouses. Menlo Park also had a boardinghouse for the workers, where Mrs. Sarah Jordan, her daughter Ida, and a domestic servant named Kate Williams cooked for the inventors and provided a clean and comfortable dwelling. But you won’t see any of those people in the iconic images of Edison posing with his lightbulb.

If I imagine being in a hypothetical role analogous to Mrs. Sarah Jordan's, in which I support other people to accomplish things, am I okay with not getting any credit? Well, like everyone else I have an ego, and I would like the respect and approval of others. But I guess if I am well-compensated and my colleagues understand how my work contributes to our team's success, I would be okay with somebody else being the public face, getting the book deals, and getting the majority of the credit. How did senior people at Apple feel about Steve Jobs being so idolized in the public eye? I don't care too much if people in general don't acknowledge my work, as long as the people I care about most acknowledge it.

Of course it would be a lot nicer to be acknowledged widely, but that is generally not how we function. Most of us (unless we specifically investigate how people accomplished things) don't know who Michael Phelps's nutritionist was, nor who taught Bill Gates about computers, nor who Magnus Carlsen's training partners are, nor who Oscar Wilde bounced ideas around with and got feedback from. I think there might be something about replaceability as well. Maybe there are hundreds of different people who could be (for example) a very good nutritionist for Michael Phelps or who could help Magnus Carlsen train, but there are only a handful of people who could be a world-class swimmer or world-class chess player at that level?

Brad West @ 2024-09-04T23:36 (+4)

The issue with support roles is that it's often difficult to assess when someone in that position truly makes a counterfactual difference. These roles can be essential but not always obviously irreplaceable. In contrast, it's much easier to argue that without the initiator or visionary, the program might never have succeeded in the first place (or at least might have been delayed significantly). Similarly, funders who provide critical resources—especially when alternative funding isn't available—may also be in a position where their absence would mean failure.

This perspective challenges a more egalitarian view of credit distribution. It suggests that while support roles are crucial, it's often the key figures—initiators, visionaries, and funders—who are more irreplaceable, and thus more deserving of disproportionate recognition. This may be controversial, but it reflects the reality that some contributions, particularly at the outset, might make all the difference in whether a project can succeed at all.

Dylan Richardson @ 2024-09-01T12:26 (+5)

I do think that the marginal good of additional researchers, journalists, content creators, etc. isn't exactly as high as it is thought to be. But there's an obvious rational-actor (collective action problem?) explanation: other people may not be needed, but me, with my idiosyncratic ideologies? Yep!

This also entails that the less representative an individual is of the general movement, the higher the marginal value for him in particular to choose a research role.

Joseph Lemien @ 2024-09-04T17:18 (+4)

Which of these two things do you mean?

  • operations/management/doer careers should be higher status than they currently are within EA
  • operations/management/doer careers should be higher status than research careers within EA
Chris Leong @ 2024-09-03T15:20 (+3)

I suspect it varies by cause area. In AI Safety, the pool of people who can do useful research is smaller than the pool of people who could do good ops work (which is more likely to include EAs who prefer a different cause area but are happy just to have an EA ops job).

PeterSlattery @ 2024-09-04T19:48 (+2)

Just wanted to quickly say that I hold a similar opinion to the top paragraph and have had similar experiences in terms of where I felt I had the most impact.

I think that the choice of whether to be a researcher or do operations is very context-dependent.

If there are no other researchers doing something important, your comparative advantage may be to do some research, because that will probably outperform the counterfactual (no research) and may also catalyze interest and action within that research domain.

However, if there are a lot of established organizations and experienced researchers (or just researchers who are more naturally skilled than you) already involved in the research domain, then you can often have a more significant impact by helping to support those researchers or attract new ones.

One way to navigate this is to have what I call a hybrid research role, where you work as a researcher but allocate some flexible amount of time to more operations/field-building activities, depending on what seems most valuable.

Phib @ 2024-09-01T19:14 (+2)

Did the research experience help you be a better manager and operator from within research organizations?

I feel like getting an understanding by doing some research could be helpful, and you could probably gain generalizable/transferable skills, but I'm just speculating here.

ZY @ 2024-09-01T18:26 (+1)

I think it might be fine if people have a genuine interest in research (though it has to be intrinsic motivation), which will make their learning fast with more devoted energy. But overall I see a lot of value in operations/management/application work, as it gives people opportunities to learn how to turn research into real impact, and how tricky the real world and applications can sometimes be.

Peter Wildeford @ 2024-05-10T22:12 (+35)

This could be a long slog, but I think it could be valuable to identify the top ~100 open-source (OS) libraries and their level of resourcing, to avoid future attacks like the XZ attack. In general, I think work on hardening systems is an underrated aspect of defending against future highly capable autonomous AI agents.
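
As a toy sketch of the first step (assuming "top" roughly means "most depended upon"), here's how one might rank Python packages by how often they appear across a local corpus of requirements.txt files. This is a minimal illustration, not the method proposed above: a real version would need registry- or GitHub-scale dependency data, and the ./repo_corpus path below is a hypothetical placeholder.

```python
# Toy sketch: rank Python packages by how often they appear as
# dependencies across a local corpus of requirements.txt files.
# (A real version would need registry-wide data; ./repo_corpus is
# a hypothetical placeholder directory of checked-out repos.)
import collections
import pathlib
import re

def count_dependencies(corpus_dir: str) -> collections.Counter:
    counts = collections.Counter()
    for req_file in pathlib.Path(corpus_dir).rglob("requirements.txt"):
        for line in req_file.read_text(errors="ignore").splitlines():
            line = line.strip()
            if not line or line.startswith(("#", "-")):
                continue  # skip comments and pip options like -r/--hash
            # Keep the bare package name, dropping extras/version specifiers.
            name = re.split(r"[\[<>=!~;\s]", line, maxsplit=1)[0].lower()
            if name:
                counts[name] += 1
    return counts

if __name__ == "__main__":
    for name, n in count_dependencies("./repo_corpus").most_common(100):
        print(f"{n:6d}  {name}")
```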

Ben Millwood @ 2024-05-12T17:24 (+8)

I'm not sure if such a study would also be helpful to potential attackers (perhaps even more helpful to attackers than defenders), so you might need to be careful about whether/how you disseminate the information.

NunoSempere @ 2024-05-12T15:57 (+7)

My sense is that 100 is an underestimate of the number of OS libraries as important as that one. But I'm not sure if the correct number is 1k, 10k, or 100k.

NunoSempere @ 2024-05-12T15:59 (+4)

That said, this is a nice project; if you have a budget, it shouldn't be hard to find one or a few OS enthusiasts to delegate this to.

Joseph_Chu @ 2024-05-15T19:18 (+5)

Relevant XKCD comic.

To comment further: this seems like it might be an intractable task, as the term "dependency hell" kind of implies. You'd likely have to scrape all of GitHub and calculate which libraries are used most frequently across all projects to get an accurate assessment. Then it's not clear to me how you'd identify their level of resourcing. Number of contributors? Frequency of commits?

Also, with your example of the XZ attack, it's not even clear who made the attack. If you suspect it was, say, the NSA, would you want to thwart them if their purpose was to protect American interests? (I'm assuming you're pro-American) Things like zero-days are frequently used by various state actors, and it's a morally grey question whether or not those uses are justified.

I also, as a computer scientist and programmer, have doubts you'd ever be able to 100% prevent the risk of zero-days or something like the XZ attack happening in open-source code. Given how common zero-days seem to be, I suspect there are many in existing open-source work that still haven't been discovered, and that XZ was just a rare case where someone was caught.

Yes, hardening these systems might somewhat mitigate the risk, but I wouldn't know how to evaluate how effective such an intervention would be, or even how you'd harden them exactly. Even if you identify the at-risk projects, you'd need to do something about them. Would you hire software engineers to shore up the weaker projects? Given the cost of competent SWEs these days, that seems potentially expensive, and it could compete for funding with actual AI safety work.

Matt_Lerner @ 2024-05-14T16:21 (+5)

I'd be interested in exploring funding this and the broader question of ensuring funding stability and security robustness for critical OS infrastructure. @Peter Wildeford is this something you guys are considering looking at?

Pat Myron @ 2024-07-19T22:22 (+5)

@Peter Wildeford @Matt_Lerner interested in similar. This in-depth analysis was a bit strict, in my opinion, in looking at file-level criteria:
https://www.metabase.com/blog/bus-factor

These massive projects were mostly maintained by one person when I last checked, a year ago:
https://github.com/curl/curl/graphs/contributors
https://github.com/vuejs/vue/graphs/contributors
https://github.com/twbs/bootstrap/graphs/contributors
https://github.com/laravel/laravel/graphs/contributors
https://github.com/pallets/flask/graphs/contributors
https://github.com/expressjs/express/graphs/contributors
https://github.com/redis/redis/graphs/contributors
https://github.com/tiangolo/fastapi/graphs/contributors
https://github.com/lodash/lodash/graphs/contributors
https://github.com/psf/requests/graphs/contributors
https://github.com/babel/babel/graphs/contributors
https://github.com/mastodon/mastodon/graphs/contributors (seemingly improved since)
https://github.com/BurntSushi/ripgrep/graphs/contributors
https://github.com/FFmpeg/FFmpeg/graphs/contributors
https://github.com/gorhill/uBlock/graphs/contributors
https://github.com/evanw/esbuild/graphs/contributors

I'd love to be able to maintain more polished, current data.
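
A minimal sketch of how one might keep such data current, using the GitHub REST API's contributors endpoint. The repo list and the 50% threshold are illustrative choices, a token is assumed in the GITHUB_TOKEN environment variable, and the share is computed only over the first page of (up to 100) contributors, so it's a rough concentration signal rather than a true bus factor:

```python
# Rough sketch: flag repos where one person accounts for most of the
# recorded commits (a crude "bus factor of one" signal).
# Unauthenticated GitHub API requests are heavily rate-limited, so a
# token in GITHUB_TOKEN is assumed. Repo list is illustrative.
import os
import requests

REPOS = ["curl/curl", "psf/requests", "FFmpeg/FFmpeg"]  # extend as needed

def top_contributor_share(repo: str) -> float:
    """Share of commits by the top contributor, among the first page of
    (up to 100) contributors; GitHub sorts them by commit count."""
    token = os.environ.get("GITHUB_TOKEN")
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/contributors",
        params={"per_page": 100},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    counts = [c["contributions"] for c in resp.json()]
    return counts[0] / sum(counts) if counts else 0.0

for repo in REPOS:
    share = top_contributor_share(repo)
    flag = "  <-- heavily concentrated" if share > 0.5 else ""
    print(f"{repo}: top contributor = {share:.0%} of commits{flag}")
```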

Peter Wildeford @ 2024-04-13T18:26 (+15)

The TV show Loot, in Season 2 Episode 1, introduces an SBF-type character named Noah Hope DeVore, a billionaire wunderkind who invents "analytic altruism", which uses an algorithm to determine "the most statistically optimal ways" of saving lives and naturally comes up with malaria nets. However, Noah is later arrested by the FBI for wire fraud and various other financial offenses.

Jason @ 2024-04-13T21:56 (+3)

I wonder if anyone else will get a thinly veiled counterpart -- given that the lead character of the show seems somewhat based on MacKenzie Scott, this seems to maybe be a thing for the show.

Peter Wildeford @ 2021-10-10T19:21 (+11)

If we are taking Transformative AI (TAI) to be creating a transformation at the scale of the industrial revolution ... has anyone thought about what "aligning" the actual 1760-1820 industrial revolution might've looked like or what it could've meant for someone living in 1720 to work to ensure that the 1760-1820 industrial revolution was beneficial instead of harmful to humanity?

I guess the analogy might break down though given that the industrial revolution was still well within human control but TAI might easily not be, or that TAI might involve more discrete/fast/discontinuous takeoffs whereas the industrial revolution was rather slow/continuous, or at least slow/continuous enough that we'd expect humans born in 1740 to reasonably adapt to the new change in progress without being too bewildered.

This is similar to, but I think still a bit distinct from, asking the question of "what would a longtermist EA in the 1600s have done?" ...A question I still think is interesting, but one many EAs I know are not all that interested in, probably because our time periods are just too disanalogous.

Daniel_Eth @ 2021-10-15T22:13 (+1)

Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.