Are there diseconomies of scale in the reputation of communities?

By Lizka, Ben_West🔸 @ 2023-07-27T18:43 (+52)

Summary: in what we think is a mostly reasonable model, the amount of impact a group has increases as the group gets larger, but so do the risks of reputational harm. Unless we believe that, as a group grows, the likelihood of scandals grows slowly (at most as quickly as a logarithmic function), this model implies that groups have an optimal size beyond which further growth is actively counterproductive — although this size is highly sensitive to uncertain parameters. Our best guesses for the model’s parameters suggest that it’s unlikely that EA has hit or passed this optimal size, so we reject this argument for limiting EA’s growth.[1] (And our prior, setting the model aside, is that growth for EA continues to be good.) 

You can play with the model (insert parameters that you think are reasonable) here.

Epistemic status: reasonable-seeming but highly simplified model built by non-professionals. We expect that there are errors and missed considerations, and would be excited for comments pointing these out.

Overview of the model

  1. Any group engaged in social change is likely to face reputational issues from wrongdoing[2] by members, even if it avoids actively promoting harmful practices, simply because its members will commit wrongdoing at rates in the ballpark of the broader population. 
    1. Wrongdoing becomes a scandal for the group if it becomes widely known among people inside and outside the group, for instance if it’s covered in the news (this is more likely if the person who committed the wrongdoing is prominent themselves).
    2. Let’s pretend that “scandals” are all alike (and that this is the primary way by which a group accrues reputational harm).
  2. Reputational harm from scandals diminishes the group’s overall effectiveness (via things like it being harder to raise money).
  3. Conclusion of the model: If the reputational harm accrued by the group grows more quickly than the benefits (impact not accounting for reputational harm), then at some point, growth of the group would be counterproductive. If that’s the case, the exact point past which growth is counterproductive would depend on things like how likely and how harmful scandals are, and how big coordination benefits are.
    1. To understand whether a point like this exists, we should compare the rates at which reputational harm and impact grow with the size of the group. Both might grow superlinearly.
      1. Reputational harm accrued by the group in a given period of time might grow superlinearly with the size of the group, because:
        1. The total reputational harm done by each scandal probably grows with the size of the group (because more people are harmed).
        2. The number of scandals per year probably grows[3] roughly linearly with the size of the group, because there are simply more people who each might do something wrong. 
        3. These things add up to superlinear growth in expected reputational damage per year as the number of people involved grows. 
      2. The impact accomplished by the group (not accounting for reputational damage) might also grow superlinearly with the size of the group (because more people are doing what the group thinks is impactful, and because something like network effects might help larger groups more).
  4. Implications for EA
    1. If costs grow more quickly than benefits, then at some point, EA should stop growing (or should shrink); additional people in the community will decrease EA’s positive impact.
    2. The answer to the question “when should EA stop growing?” is very sensitive to parameters in the model; you get pretty different answers based on plausible parameters (even if you buy the setup of the model). 
    3. However, it seems hard to choose parameters that imply that EA has surpassed its optimal point, and much easier to choose parameters that imply that EA should grow more (at least from this narrow reputational harm perspective).
    4. Note that we’re not focusing on the question “how good is it for EA to grow” here, which would matter for things like the cost-effectiveness of outreach efforts.

Here are two plots showing how the net impact (per year, with arbitrary units of impact) would change as a group grows — the plots are very different because the parameters are different and the model is very sensitive to that:

Plot generated for the model with arbitrary parameters demonstrating an optimal size around ~6,000 members. Squiggle code linked in the post.
Alternative parameter choice: plot generated for different arbitrary parameters. These would imply that growth continues to be useful forever. Squiggle code linked in the post.
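
For concreteness, here is a minimal sketch of the model’s structure in Squiggle, with placeholder parameters and illustrative functional forms (these are our assumptions for this sketch, not copies of the linked Squiggle code):

// Minimal sketch; all numbers are placeholders
scandal_rate_per_person = 1/20000 to 1/10000 // expected scandals per member per year, so f(n) = rate * n (linear here for simplicity)
harm_per_scandal = 0.02 to 0.1 // fractional effectiveness loss per scandal (K)
net_impact = {|n| n * log(n) * (1 - harm_per_scandal) ^ (scandal_rate_per_person * n)}
net_impact(10k) // net impact per year of a 10,000-person group, in arbitrary units

With numbers in this ballpark, the n * log(n) term dominates until the group is far larger than EA’s current size, in line with the parameter-sensitivity discussion below.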

Technical model description

A more formal description of the model

Getting to the implications of the model 

(Note: this section uses asymptotic (Big O) notation.)

Discussion of parameter choices and the model setup

How does the frequency of scandals (in expectation) grow with the size of the group? (f(N))

Remember that we defined “scandal” as “wrongdoing that becomes prominent.” Given this, our best guess here is that frequency grows sublinearly with the size of the group.

  1. A naïve first-order approximation is that frequency should be linear in the size of the group because each new person is similarly likely to commit wrongdoing and cause a scandal.
  2. However, wrongdoing might only become a scandal that affects the reputation of the group if it involves a prominent[4] member of the group. And it seems likely that the size of the “prominent people” group in a broader community grows more slowly than the size of the community overall (i.e. you’re less likely to be a prominent member of a group if the group is huge than you are if the group is small). 
    1. E.g. sexual misconduct seems to be more of a scandal for a university if done by a famous professor than by a random staff member.
  3. So there should be fewer “scandals” per person in larger groups, meaning that the frequency of scandals should grow sublinearly with the size of the group.

Note also that we can try to account for variation in the significance of real-world scandals when setting parameters by saying that a less significant incident simply has a smaller chance of causing a scandal. In other words, if you think that someone will definitely cause 3 scandals in the following year, but they’re all very small, you can model this as 1 scandal in the sense we’re using here. (Whereas something unusually significant might be equivalent to two scandals.) 
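
For illustration, here are two hypothetical shapes that f(N) could take (the base rate and the exponent are placeholders, not estimates):

// Hypothetical shapes for f(n), the expected number of scandals per year
base_rate = 1/10000
f_linear = {|n| base_rate * n} // every member equally likely to trigger a scandal
f_sublinear = {|n| base_rate * n^0.75} // scandals driven mostly by prominent members, whose share shrinks as n grows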

How should we define and model “reputational harm”? 

The harms we would expect to accrue from scandals are things like:

  1. It’s harder for people in the group to do things the group considers high-impact:
    1. It’s harder to raise money for impactful projects
    2. It’s harder to attract employees and collaborators
    3. It’s harder to convince people to take action on your ideas
    4. People in the group are generally stressed and demoralized
  2. Some people outside of the group might no longer want to do things the group considers high-impact:
    1. They’re less likely to join the group in the future
    2. They don’t want to do things the group endorses because the group endorses them

Our guess is that reputational harm is best modeled as a percentage decrease in impact. This fits the first point above better than it fits the second, but even for 2 (a), harms might accrue in a similar pattern: the first scandal drives off the least interested x% of people, then the second scandal drives off the x% least interested of the remainder, etc. (See evaporative cooling.)
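
As a small worked example of the percentage-decrease model (with a hypothetical 10% hit per scandal):

// Repeated percentage hits compound multiplicatively
x = 0.1 // hypothetical fractional effectiveness loss per scandal
remaining_effectiveness = {|num_scandals| (1 - x) ^ num_scandals}
remaining_effectiveness(3) // 0.9^3, i.e. about 73% of baseline effectiveness after three scandals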

There are some costs which arguably do not fit this model. For example, the negative perception of early cryonicists may have deterred cryobiologists (who weren’t cryonicists) from doing cryonics-related research that they otherwise would have done independently. It seems plausible that from the point of view of the group, this is better modeled as an additive cost — flat negative impact — due to the reputational issues as opposed to a multiplier penalty on the positive impact of the members of the group. (Additionally, the “effectiveness penalty multiplier” model doesn’t allow for scandals to cause someone’s work to become negatively impactful, which doesn’t seem universally true.)

Another complication might be something like splintering; it’s possible that you can’t model group size as independent of scandal rate and reputational harm, because when scandals have certain effects, the group splinters into smaller groups or simply loses members. 

Still, we think the percentage decrease model is the best that we have come up with.

How much does a scandal affect each person, for a group of a given size? (K)

We want to understand: given a fixed scandal, is it more harmful per person if the group the scandal is attached to is bigger? Our best guess is that per person in the group, harm per scandal decreases with the size of the group, but we’re modeling K as a constant for simplicity. 

It seems like there are some counterbalancing factors:

  1. Worse for bigger groups: 
    1. Bigger groups have more people affiliated with the scandal, and therefore more people who can be harmed
    2. Bigger groups are more likely to be known and might be considered more newsworthy
      1. A random individual committing a crime is usually not worth reporting on, but if they are affiliated with a well-known group, it becomes more newsworthy
  2. Better for bigger groups
    1. The effects are also more diffuse in bigger groups
      1. It seems less reasonable to blame each member of a larger group as much as you would for smaller groups
    2. Bigger groups are more likely to have an existing reputation in people’s minds, which means that individual scandals are less likely to affect their overall view
      1. Many people know at least one Harry Potter fan. If a Harry Potter fan causes a scandal, that’s not that likely to affect your view of all Harry Potter fans, in part because you have a stronger prior about the group. But if a fan of some fairly niche book series causes a scandal, you might have a weaker prior and update more strongly. 
  3. Some examples to inform intuition: 
    1. Public perception of academia overall probably doesn't change much when a Princeton prof is accused of harassment or the like. But the perception of Princeton might change more, and if an even smaller group is well enough known (and that prof is in that group), then maybe the other people in the group are even more affected. 
    2. But even very large groups (major religions, political parties) still seem to suffer some amount of reputational harm after scandals.
  4. Our best guess (fairly weak) is that, per person in the group, harm per scandal decreases with the size of the group; this would mean we should model K as something like 1/log(N) instead of as a constant. But we’re going with a constant here for simplicity and because we’re quite unsure. 
  5. Note also that, because the number of people affected goes up linearly as the group grows, the total reputational harm per scandal should grow with the size of the group, but sublinearly. 
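
In code, the two options look something like this (the 5% figure and the normalization point are arbitrary):

// Per-person harm per scandal: constant (what we use) vs shrinking like 1/log(n)
k_constant = {|n| 0.05}
k_shrinking = {|n| 0.05 * log(1000) / log(n)} // normalized so the two agree at n = 1000
total_harm_per_scandal = {|n| n * k_shrinking(n)} // grows with n, but sublinearly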

How do benefits, ignoring reputational harm, grow with the size of the group?

Our best guess is that benefits grow slightly superlinearly because of coordination benefits (but you can easily remove coordination benefits from the model). 

  1. A naïve first-order approximation is that benefits (not accounting for reputational issues) are linear in the size of the group.
    1. If everyone in EA donated a constant amount of money, then getting more people into EA would linearly increase the amount of money being donated (which, for simplicity, we can say is a linear increase in impact)
  2. At least in some cases though, it seems like benefits are superlinear.
    1. Standard models of networks state that the value of groups tends to grow quadratically or exponentially
    2. When Ben asks people why they write for the EA Forum they often say something like “because everyone reads the Forum”; N people each writing because N people will read each thing — that’s quadratic value
    3. Brand recognition can help get things done, and larger groups have more brand recognition
    4. Other coordination benefits (e.g. a member of the group can identify and get access to people who’d be useful to coordinate with)
  3. On the other hand, there are also (non-reputation-related) costs of larger groups, like coordination costs
  4. Our tentative guess is that the benefits of groups like EA tend to grow slightly superlinearly
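
The candidate benefit curves discussed above, written out in code (arbitrary units):

benefits_linear = {|n| n} // everyone contributes a fixed amount
benefits_nlogn = {|n| n * log(n)} // the slightly superlinear form used in the linked Squiggle model
benefits_quadratic = {|n| n ^ 2} // a naive network-effects upper bound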

Parameter sensitivity

My (Ben’s) subjective experience of playing around with this model is that for reasonable parameter values, it seems pretty clear that groups of more than 500 people are better than smaller groups, but it's harder to get outputs showing that larger groups (of any reasonable size) are noticeably worse than smaller ones. I have to intentionally choose weird parameters to get a graph like the first one above, where there is a clear peak and larger groups are worse – unless I do this intentionally, it usually seems like growth is neutral or good (although confidence intervals are often very wide). (Lizka agrees with this.)

When I try to think of scandals that plausibly decreased the effectiveness of people in EA by >5%, the list feels pretty short: FTX is probably on it, but even disturbing news or incidents like the TIME article on sexual harassment seem unlikely to have caused one in 20 people to leave EA (or to have otherwise decreased effectiveness by >5%). And we have had 10-20k person-years in which scandals could have occurred (suggesting that the base rate of scandals per person per year is 1/20000 to 1/10000); plugging in those numbers here indicates that EA should grow vastly beyond its current size.
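
As a rough check on that arithmetic (assuming a community of about 10,000 people):

members = 10k
scandal_rate_per_person_year = 1/20000 to 1/10000
expected_scandals_per_year = scandal_rate_per_person_year * members // roughly 0.5 to 1 per year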

More importantly: when I try to argue backwards from the claim that EA is already too big, I have to put in numbers that seem absurd, like here.

So my guess is that if growth is bad, it's because this model is flawed (which, to be clear, is pretty likely, although the flaws might not necessarily point in the direction of making it more likely that growth is bad). 

Other considerations about the parameters & model setup

  1. What happens if impact per person has long tails in a way that is predictably related to parameters in the model? 
    1. I could imagine alternative models which have different results, e.g. it could be that the most impactful members are disproportionately benefited by larger groups (e.g. the best researchers disproportionately benefit from having more people to read their research)
    2. It’s not clear to me how this would shake out
    3. See also the next bullet point
  2. What if how much reputational harm affects someone is not independent of their impactfulness?
    1. Maybe the extremely committed members don’t care so much about reputational harm because they are diehards, and they are also the more impactful ones, so this model could overstate total damage
    2. Alternatively, we speculated above that reputation harms might disproportionately accrue to prominent members of the group, and it seems plausible that prominent members of the group are also disproportionately impactful (or are connected to disproportionately impactful people, who are affected more), meaning that this model understates total damage
  3. Maybe how much each new scandal affects the group’s effectiveness depends on the number of scandals that have hit the group in the past (in a way that’s hard to capture via f(N))  
    1. E.g. maybe after 3 scandals that each harmed effectiveness by 10% (perhaps by driving off the 10% least interested people each time), the group is in a newly vulnerable position, and a 4th scandal would cause significantly more damage, meaning that we can’t keep K constant. 
    2. See also Social behavior curves, equilibria, and radicalism
  4. How does this model work for multiple groups, especially when they’re overlapping? 
  5. “Scandal” is poorly defined: scandals have a complicated relationship with brand recognition, and we’re modeling “scandal” in an extremely simplified way
    1. Complicated relationship with brand recognition
      1. At the extreme: “all press is positive press”
      2. This extreme seems unlikely to be true, but there are likely complex ways in which different types of scandals have different types of impact
      3. E.g. possibly radical tactics increase support for more moderate groups
    2. Not all scandals are equal
      1. For instance, a scandal probably has a bigger impact if it is related to the group’s focus (e.g. a Walmart employee embezzling from Walmart, rather than doing something wrong in their personal life) or if it involves prominent members of the group.
      2. We can try to account for this by defining an (expected) “scandal” in a way that forces scandals to have a similar impact, but this affects how we should estimate the base rates of scandals at different group sizes, and might be hard to do.
  6. Many other properties of groups can be at least somewhat related to group size and affect how much reputational harm a group accrues
    1. Some are listed below for EA.
  7. Does this argument prove too much? The existence of large groups seems like compelling evidence that the reputational costs of large groups can’t be that high
    1. But maybe there are reasons to think that existing large institutions have special circumstances that EA lacks. In particular: some institutions started before the Internet era and are seen as “part of the furniture” (e.g. major religions, political parties), and others aren’t actually trying to do anything terribly controversial (e.g. sports teams). It’s hard to think of a large group that is trying to change the world, was started in the last 20 years, and hasn’t taken substantial reputational hits from the actions of a minority of its members.
  8. This model assumes that the rate of scandals is impossible to change, but that of course is not true. Projects like CEA’s Community Health and Special Projects team, the EA reform project, the EA Good Governance Project, and others may reduce the incidence of issues.
  9. This post is intended to address the narrow question of how reputational harm scales. There are a large number of other reasons why growth might be good/bad (e.g. difficulty maintaining norms), and those are not addressed here.

EA-specific factors that this model ignores

Factor | Possible implications
Probably attracts people who are more conscientious and nice than average | Decreases per-member frequency of scandals
The desire to make things work (rather than just compete) encourages most participants to try to get along and resolve conflicts amicably | Decreases per-member frequency of scandals
Some / many of the things we do are broadly regarded as good (e.g. GiveWell) | Decreases per-member frequency of scandals
The group isn’t super defined / is pretty decentralized; it’s not one massive organization. So e.g. someone donating to GWWC or effective charities can continue to do that as much as they could before (except maybe they’re demotivated) if someone prominent in a big animal advocacy organization is involved in a scandal | Decreases cost of scandal
Could be seen as a nonprofit | Unclear; nonprofits are sometimes held to higher standards (e.g. around compensation) but also have some default assumption of goodwill
Members tend to be from privileged demographics | Unclear; makes EA more “punchable” but also members have larger safety nets and more resources to push back
Is identified as powerful and allied with powerful groups (by some) | Unclear; makes EA more “punchable” but also members have larger safety nets and more resources to push back
A large set of different organizations with different practices, any of which might be objectionable to someone | Increases per-member frequency of scandals, decreases cost of scandal
A large set of different geographic subcultures with different practices | Increases per-member frequency of scandals, decreases cost of scandals
Tells people to take big actions which can frequently go badly and are expected to backfire at some decent rate | Increases per-member frequency of scandals
A high level of overlap between people's professional and friend networks means that almost all aspects of someone's life can be regarded as relevant for criticism, rather than just what they do in the course of their work | Increases per-member frequency of scandals
The desire to make things go well gives people a reason to stick around even if dissatisfied, and to feel a moral responsibility to fight other people if they think what they're doing is harmful | Increases per-member frequency of scandals
Nobody has the authority to impose universal rules | Increases per-member frequency of scandals; possibly decreases cost of scandals (because scandals are legitimately not caused by the group)
Nobody can control who identifies themselves with EA, at least for the purposes of a critical journalist | Increases per-member frequency and cost of scandals
A large fraction of our communication, especially by new and less professional folks, is public and able to be used against us indefinitely (cf. corporations or government agencies) | Possibly increases per-member frequency of scandals
Is engaged in activity contrary to the views of some existing political alliances and interests, so has accumulated active and motivated haters (and also some motivated by the FTX association) | Increases per-member frequency of scandals
Disproportionately attracts young people, who tend to be harder to screen, behave more erratically, and develop new mental health problems at higher rates | Increases per-member frequency of scandals
Could be seen as a political movement trying to influence society, which makes it seem particularly fair game for attacks | Increases per-member frequency of scandals

Related work and work we’d be excited to see

Conclusion

As with many such models, you can choose parameters to get basically any possible outcome. But the settings that seem most plausible to us result in growth being good.

One of the few takeaways from this exercise that can be said with confidence is that bigger groups are likely to have more scandals, so if EA grows, that’s something we should prepare for and try to mitigate. 

Contributions

The original idea for a related model was developed by a person who wishes to remain anonymous. Ben and Lizka made this more nuanced and wrote this post as well as the Squiggle code. The resulting model is rough and doesn’t have fully conclusive results, but we thought it was worth sharing.

  1. ^

    Though reputation is not the only relevant consideration for thinking about whether it would be better for EA to be small.

  2. ^

    Or stigmatized or unpopular behavior

  3. ^

    We are not actually sure about this. See the linked section.

  4. ^

    Note that “prominence” here is complicated. Arguably, the thing that matters is the prominence of someone as a member of the group. For instance, if a really famous actor happens to shop at Walmart and is involved in a widely covered scandal, it probably won’t affect Walmart’s reputation. However, if the person was also a spokesperson for Walmart, it probably would, at least a bit. 

    Moreover, prominence in the group might make someone’s wrongdoing newsworthy (and via news coverage, a scandal) even if they weren’t prominent outside of the group before that happened. (Imagine a relatively unknown spokesperson for Walmart committing wrongdoing.) I’m not sure how much this actually happens. 

    It probably also matters whether the wrongdoing in question was somehow related to the group; e.g. the group already has a reputation for something related, or the wrongdoing highlights hypocrisy from the group’s perspective, etc. 


MichaelStJules @ 2023-07-28T09:30 (+11)
  1. At least in some cases though, it seems like benefits are superlinear.
    1. Standard models of networks state that the value of groups tends to grow quadratically or exponentially
    2. When Ben asks people why they write for the EA Forum they often say something like “because everyone reads the Forum”; N people each writing because N people will read each thing — that’s quadratic value

 

I think both exponential and quadratic are too fast, although it's still plausibly superlinear. You used N * log(N), which seems more reasonable.

Exponential seems pretty crazy (btw, that link is broken; looks like you double-pasted it). Surely we don't have the number of (impactful) subgroups growing this quickly.

Quadratic also seems unlikely. The number of people or things a person can and is willing to interact with (much) is capped, and the average EA will try to prioritize somewhat. So, when at their limit and unwilling to increase their limit, the marginal value is what they got out of the marginal stuff minus the value of their additional attention on what they would have attended to otherwise.

As an example, consider the case of hiring. Suppose you're looking to fill exactly one position. Unless the marginal applicant is better than the average in expectation, you should expect decreasing marginal returns to increasing your applicant pool size. If you're looking to hire someone with some set of qualities (passing some thresholds, say), with the extra applicant as likely to have them as the average applicant, with independent probability p and n applicants, then the probability of finding someone with those qualities is 1 - (1 - p)^n, which is bounded above by 1 and so grows even more slowly than log(n) for large enough n. Of course, the quality of your hire could also increase with a larger pool, so you could instead model this with the expected value of the maximum of n iid random variables. The expected value of the max of bounded random variables will also be bounded above by the max of each. The expected value of the max of n iid uniform random variables over [0, 1] is n/(n+1) (source), so pretty close to constant. For the normal distribution, it's roughly proportional to sqrt(log(n)) (source).

It should be similar for connections and posts, if you're limiting the number of people/posts you substantially interact with and don't increase that limit with the size of the community.

 

Furthermore, I expect the marginal post to be worse than the average, because people prioritize what they write. Also, I think some EA Forum users have had the impression that the quality of the posts and discussion has decreased as the number of active EA Forum members has increased. This could mean the value of the EA Forum for the average user decreases with the size of the community.

Similarly, extra community members from marginal outreach work could be decreasingly dedicated to EA work (potentially causing value drift and making things worse for the average EA, and at the extreme, grifters and bad actors) or generally lower priority targets for outreach on the basis of their expected contributions or the costs to bring them in.

 

Brand recognition or reputation could be a reason to expect the extra applicants to EA jobs to be better than the average ones, though.

 

Brand recognition can help get things done, and larger groups have more brand recognition

Is growing the EA community a good way to increase useful brand recognition? The EA brand seems less important than the brands of specific organizations if you're trying to do things like influence policy or attract talent.

Ben_West @ 2023-07-29T20:00 (+6)

Thanks Michael! This is a great comment. (And I fixed the link, thanks for noting that.)

My anecdotal experience with hiring is that you are right asymptotically, but not practically. E.g. if you want to hire for some skill that only one in 10,000 people have, you get approximately linear returns to growth for the size of community that EA is considering.
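
A rough sketch of that kind of calculation (hypothetical numbers; here the combined requirements are assumed to be met by 1 in 100,000 people):

p = 1 / 100k // hypothetical chance that a random community member meets all the requirements
p_at_least_one = {|n| 1 - (1 - p) ^ n}
[p_at_least_one(1k), p_at_least_one(10k), p_at_least_one(30k)] // roughly 0.01, 0.10, 0.26; close to n * p while n * p is small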

And you can get to very low probabilities easily: most jobs are looking for candidates with a combination of: a somewhat rare skill, willingness to work in an unusual cause area, willingness to work in a specific geographic location, etc. and multiplying these all together gets small quickly.

It does feel intuitively right that there are diminishing returns to scale here though.

MichaelStJules @ 2023-07-30T08:17 (+2)

I would guess that for the biggest EA causes (other than EA meta/community), you can often hire people who aren't part of the EA community. For animal welfare, there's a much larger animal advocacy movement and far more veg*ns, although it's probably harder to find people to work on invertebrate welfare, and there may be few economists. For technical AI safety, there are many ML, CS (and math) PhDs, although the most promising ones may not be cheap. Global health and biorisk are not unusual causes at all. Invertebrate welfare is pretty unusual, though.

However, for more senior/management roles, you'd want some value alignment to ensure they prioritize well and avoid causing harm (e.g. significantly advancing AI capabilities).

Ozzie Gooen @ 2023-08-01T13:51 (+7)

Excited to see this!

Really sorry, but recently we released Squiggle 0.8, which added a few features, but took away some things (that we thought were kind of footguns) that you used. So the model is now broken, but can easily be fixed.

I fixed it here, with some changes to the plots.

  1. I replaced "#" comments with "//" comments. 

  2. Instead of Plot.fn(), it's now Plot.distFn() or Plot.numericFn(), depending on whether it's returning a number or a distribution. Note that these are now more powerful - you can adjust the axes scales, including adding custom symlog scales. I added a symlog yScale.

Also, in the newer version, you can specify function ranges. Like, 

expected_number_of_scandals = {|n: [1e-3, 30k]|(1 / 5000 to 1 / 500) * n}

You have the code,

growth_benefits = {|n: range|n * log(n)}

This is invalid where n=0, so I changed the range to start slightly above that. (You can also imagine other ways of doing that)

In the future, we really only plan to have breaking changes in main version numbers, and we'll watch them on Squiggle Hub. I didn't see that there were recent active Squiggle models. Sorry for the confusion, again!

 

Lizka @ 2023-08-03T22:41 (+4)

Thanks for this! I think I've replaced the relevant links. (And no need to apologize.)

Ozzie Gooen @ 2023-08-02T21:07 (+2)

Also, do feel free to post the model on Squiggle Hub. I haven't formally announced it here yet, but people are welcome to begin using it. 

MichaelStJules @ 2023-07-28T07:07 (+4)

Thanks for writing this! This is a cool model.

Our best guess is that benefits grow slightly superlinearly because of coordination benefits (but you can easily remove coordination benefits from the model). 

  1. A naïve first-order approximation is that benefits (not accounting for reputational issues) are linear in the size of the group.
    1. If everyone in EA donated a constant amount of money, then getting more people into EA would linearly increase the amount of money being donated (which, for simplicity, we can say is a linear increase in impact)

Is linear a good approximation here? Conventional wisdom suggests decreasing marginal returns to additional funding and people, because we'll try to prioritize the best opportunities.

I can see this being tricky, though. Of course doubling the community size all at once would hit capacities for hiring, management, similarly good projects, and room for more funding generally, but EA community growth isn't usually abrupt like this (FTX funding aside).

In the animal space, I could imagine that doing a lot of corporate chicken (hen and broiler) welfare work first is/was important for potentially much bigger wins like:

  1. legislation/policy change, due to less corporate pushback or even corporate support, and stronger org reputations,
  2. getting the biggest and worst companies like McDonald's to commit to welfare reforms, and
  3. moving on to less relatable animals exploited in larger numbers that we can potentially help much more cost-effectively going forward, like fish, shrimp, and insects.

But I also imagine that marginal corporate campaigns are less cost-effective when considering only the effects on the targeted companies and the animals they use, because the targets are prioritized and resources spent on a given campaign will have decreasing marginal returns in expectation.

 

GiveWell charities tend to have a lot of room for funding at given cost-effectiveness bars, so linear is probably close enough, unless it's easy to get more billionaires.

 

For research, the most promising projects will tend to be prioritized first, too, but with more funding and a more established reputation, you can attract people who are better fits, and can do those projects better, do projects you couldn't do without them, or identify better projects, and possibly managers who can handle more reports.

 

Maybe there's some good writing on this topic elsewhere?

Ben_West @ 2023-07-29T20:16 (+2)

My impression is that, while corporate spinoffs are common, mergers are also common, and it seems fairly normal for investors to believe that corporations substantially larger than EA are more valuable as a single entity than as independent pieces, giving some evidence for superlinear returns.

But my guess is that this is extremely contingent on specific facts about how the corporation is structured, and it's unclear to me whether EA has this kind of structure. I too would be interested in research on when you can expect increasing versus decreasing marginal returns.

Oliver Sourbut @ 2023-08-05T22:01 (+3)

Great read, and interesting analysis. I like encountering models for complex systems (like community dynamics)!

One factor I don't think was discussed (maybe the gesture at the possible inadequacy of K encompasses this) is the duration of scandal effects. E.g. imagine if some group claiming to be the Spanish Inquisition, the Mongol Horde, or the Illuminati tried to get stuff done. I think (assuming they were taken seriously) they'd encounter lingering reputational damage more than one year after the original scandals! Not sure how this models out; I'm not planning to dive into it, but this stands out to me as the 'next marginal fidelity gain' for a model like this.

Ben_West @ 2023-08-08T21:11 (+2)

Thanks Oliver! It seems basically right to me that this is a limitation of the model, in particular K, like you say.

NunoSempere @ 2023-07-27T19:31 (+3)

Neat!

Benjamin_Todd @ 2024-08-24T08:14 (+2)

Thanks for the analysis! I think it makes sense to me, but I'm wondering if you've missed an important parameter: diminishing returns to resources.

If there are 100 community members they can take the 100 most impactful opportunities (e.g. writing DGB, publicising that AI safety is even a thing), while if there are 1000 people, they will need to expand into opportunities 101-1000, which will probably be lower impact than the first 100 (e.g. becoming the 50th person working on AI safety).

I'd guess a 10x increase to labour or funding working on EA things (even setting aside coordination and reputation issues) only increases impact by ~3x.

It seems like that might make a significant difference to the model - if I've understood correctly, currently the impact of marginal members in the model is actually increasing due to coordination benefits, whereas this could mean it's decreasing. 

I'd still guess marginal growth is net positive, but I feel less confident than the post suggests.

Benjamin_Todd @ 2024-08-24T08:34 (+4)

Aside: A more compelling argument against growth in this area to me is something like "EA should focus on improving its brand and comms skills, and on making reforms & changing its messaging to significantly reduce the chance of something like FTX happening again, before trying to grow aggressively again"; rather than "the possibility of scandals means it should never grow".

Another one is "it's even more high priority to grow other movements than EA" rather than "EA is net negative to grow".

Ben_West🔸 @ 2024-08-27T01:12 (+2)

Thanks! See my response to Michael for some thoughts on diminishing returns.

10x increase in labor leading to 3x increase in impact feels surprising to me. At least in the regime of ~2xing supply I doubt returns diminish that quickly. But I haven't thought about this deeply and I agree that there is some rate of diminishing marginal returns which would make marginal growth net negative.

Benjamin_Todd @ 2024-09-05T11:46 (+2)

The response to Michael is an interesting point, but it only concerns diminishing returns in individual capabilities of new members. 

Diminishing returns are mainly driven by the quality of opportunities being used up, rather than the capabilities.

IIRC a 10x in resources to get a 3x in impact was a typical response in the old coordination forum survey responses.

In the past at 80k I'd often assume a 3x increase in inputs (e.g. advising calls) to get a 2x increase in outputs (impact-adjusted plan changes), and that seemed to be roughly consistent with the data (though the data don't tell us that much). In some cases, returns seem to diminish a lot faster than that. And you often face diminishing returns at several levels (e.g. 3x as much marketing to get 2x as many applicants to advising).
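
For reference, those ratios correspond to roughly power-law returns (impact proportional to resources^alpha):

alpha_10x_to_3x = log(3) / log(10) // 10x inputs -> 3x outputs implies alpha of about 0.48
alpha_3x_to_2x = log(2) / log(3) // 3x inputs -> 2x outputs implies alpha of about 0.63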

I agree returns are more linear in areas where EA resources are a small fraction of the total, like global health, but that's not the case in areas like AI safety, GCBRs, new causes like digital sentience, or promoting EA.

And even in global health, if GiveWell only had $100m to allocate, average cost-effectiveness would be a lot higher (maybe 3-10x higher?) than where the marginal dollar goes today. If GiveWell had to allocate $10bn, I'd guess returns would be at least several fold lower again on the marginal spending.

Benjamin_Todd @ 2024-08-24T08:27 (+2)

Less importantly, I also feel less confident that coordination benefits would mean impact per member goes up with the number of members.

I understand that the value of a social network like Facebook grows with the number of members. But many forms of coordination become much harder with the number of members.

As an analogy, it's significantly easier for 2 people to decide where to go to dinner than for 3 people to decide. And 10 people in a group discussion can take ages to come to consensus.

Or, it's much harder to get a new policy adopted in an organisation of 100 than an organisation of 10, because there are more stakeholders to consult and compromise with, and then more people to train in the new policy etc. And large organisations are generally way more bureaucratic than smaller ones.

I think these analogies might be closer than the analogy of Facebook.

You also get effects like in a movement of under 1000, it's possible to have met in person most of the people, and know many of them well; while in a movement of 10,000, coordination has to be based on institutional mechanisms, which tend to involve a lot of overhead and not be as good.

Overall it seems to me that movement growth means more resources and skills, more shared knowledge, infrastructure and brand effects, but also many ways that it becomes harder to work together, and the movement becoming less nimble. I feel unsure which effect wins, but I put a fair bit of credence on the coordination term decreasing rather than increasing.

If it were decreasing, and you also add in diminishing returns, then impact per member could be going down quite fast.