Much EA value comes from being a Schelling point
By L Rudolf L @ 2022-09-10T07:26 (+132)
This is a linkpost to https://www.strataoftheworld.com/2022/09/ea-as-schelling-point.html
TL;DR: A significant way in which the EA community creates value is by acting as a Schelling point where talented, ambitious, and altruistic people tend to gather and can meet each other (in addition to more direct sources of EA value like identifying the most important problems and directly pushing people to work on them). It might be useful to think about what optimising for being a Schelling point looks like, and I list some vague thoughts on that.
A Schelling point, also known as a focal point, is what people decide on in the absence of communication, especially when it's important to coordinate by coming to the same answer.
The classic example: you are arranging a meeting with a stranger in New York City by telephone, but you use up the last minute of your phone credit and the line cuts off after you have agreed on the date but not the location or time - where do you meet? "Grand Central Station at noon" is an answer that other people may be especially likely to converge on.
(Schelling points can be thought of as a type of acausal negotiation.)
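For readers who like the game-theoretic framing, here is the structure of the meeting problem as a standard two-player coordination game (a minimal sketch; the payoffs are made-up numbers for illustration):

```latex
% Two strangers independently choose where to meet; entries are
% (your payoff, their payoff). Both "matching" outcomes are equilibria;
% the Schelling point is whichever one salience singles out.
\begin{array}{c|cc}
 & \text{Grand Central} & \text{Times Square} \\ \hline
\text{Grand Central} & (1,1) & (0,0) \\
\text{Times Square}  & (0,0) & (1,1) \\
\end{array}
```

Both "meet at Grand Central" and "meet at Times Square" are stable once expected; the payoffs alone cannot break the tie, which is exactly why the non-payoff fact of one option being more obvious does all the work.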
When the Schelling point is the selling point
Schelling points are often extremely powerful and valuable. A key function of top universities is to be Schelling points for talented people. (Personally, I'd call it their most important function.) There are other valuable things too: courses that go deeper, the signalling value to employers, and so on. However, talented people generally have a preference for hanging out with other talented people, both for social reasons and to find collaborators for ambitious projects and future colleagues. At the same time, talented people are also generally spread out and present only at low densities. Top universities select hard on (some measures of) talent, and through this create environments with high talent density. A big chunk of the reason people apply to top universities is that other people do so too, and I'd guess that even if the academic standards of Stanford, MIT, or Cambridge eroded significantly, the fact that they've established themselves as congregating points for smart people would keep people applying and visiting for a long time.
(Note that this is related to, but not the same as, the prestige and status of these places. It is possible to imagine Schelling points that are not prestigious. For example, my impression is that this described MIT at one point - it became a congregating point for uniquely ambitious STEM students and defence research before it achieved high academic status. It is also possible to imagine prestigious places that are not Schelling points, though this is a bit harder, since anything with prestige becomes a Schelling point for those seeking high social status (though prestige Schelling points and talent Schelling points need not co-occur). More generally, since prestige is a thing many people care a lot about, there is a high correlation between a place being prestigious or high-status and being a Schelling point for at least some type of person. However, the mechanisms are distinct - a person selecting their university based on status is selecting based on what they get to write on their CV, while a person selecting their university based on it being a Schelling point for smart people is selecting based on the fact that many other smart people, whom they can't coordinate with but would like to meet, will also choose to go there.)
Another example is Silicon Valley. Sure, the area has many strengths - being rich and inside a large, stable free market - but by far the greatest argument for living in Silicon Valley is that others also choose it. This leads to a (for now) unique combination of entrepreneurial people, great programmers, venture capitalists, and all the other types of people you need for a thriving tech business ecosystem, all there primarily because all the others are there too (how touching!). There's a lot of value in having everything in one place, and it would be very hard for all the different people who make up the value of Silicon Valley to coordinate to move somewhere else. That's why the Schelling point value of Silicon Valley is so enduring that people continue to tolerate large numbers of homeless drug addicts and sell kidneys to pay rent for years on end.
Note that a big part of the mechanism isn't that specific people you want to find are there, but that the types of person you'd want to find are likely to also be there, because both those people and yourself are likely to converge on the strategy of going there.
Schelling EA
The Effective Altruism (EA) community provides a lot of value, for example:
- research into figuring out which problems are most important to solve to maximise human flourishing;
- research and concrete efforts into how to solve the most important problems discovered by the above;
- high epistemic standards and truth-seeking discussion norms;
- a uniquely wide-ranging and well-reasoned set of resources to help people pursue high-impact careers;
- tens of billions of dollars in funding.
However, in addition to these, a critical part of the value that EA provides is being a Schelling point for talented, ambitious, and altruistically-motivated people.
Even without EA, there would be researchers studying existential risks, animal welfare, and global poverty; people trying to assess charities; communities with high epistemic norms; and billionaires trying to use their fortunes for effective good. However, thanks to EA, people in each of these categories can go to the same Effective Altruism Global conference or quickly find people in local groups, and meet collaborators, co-founders, funders, and so on. A lot of the reason why this can happen is that if you hang out with a certain group of people or on the right websites, EA looms large.
The biggest personal source of value I've gotten from EA has been having a shortcut to meeting people very high in all of talent, ambition, and altruistic motivation.
Much of this is obvious - breaking news: communities bring people together and foster connections, more at 11 - but I think taking seriously just how much of counterfactual EA community impact comes from being a Schelling point leads to some less-obvious points about possible implications.
Implications
The Schelling-point-based (and therefore necessarily incomplete) answer to "what is the EA community for?" might be something like "be an obvious Schelling point where relevant people gather and where the chance of interactions that lead to useful work is maximised, with a community and infrastructure that pushes work in the most useful direction possible". (This is in contrast to answers that emphasise e.g. directly increasing the number of people working on the most pressing problems.) (I will not argue that this is the best possible answer; my point is just that it is one possible answer, and an interesting one to examine further.)
If I were a Big Tech marketing consultant, I might call this "EA-as-a-platform".
What might optimising for such a Schelling point strategy look like?
Being obvious
A Schelling point is not a Schelling point unless it's obvious enough. For EA to be an effective Schelling point for talented/ambitious/altruistic people, those people must hear about it. Silicon Valley is obvious enough that entrepreneurial people from South Africa to Russia hear about it and decide it's where they want to be. To maximise its Schelling point value, EA should have world-spanning levels of recognition.
Note that recognition does not equal prestige or likeability. We don't care (for Schelling point reasons at least) if most people hear about EA and go "eh, sounds weird and unappealing"; what matters is that the core target demographic is excited enough to put effort into pursuing EA. Consider how Silicon Valley was not particularly high-prestige in the public eye even when it was already attracting tech entrepreneurs, or how many people hear about the academic intensity of top universities and (very reasonably) think "no thanks".
Providing value
Though most of a Schelling point's value typically comes from the other people who congregate at it, a Schelling point is easier to create if it is obviously valuable. Even though the smart people they meet might be most of the benefit of university, high schoolers are still more likely to go to top universities if they provide good education, good facilities, and unambiguous social status.
Some obvious ways in which EA provides value are through funding sufficiently promising projects, and by having a very high concentration of intellectually interesting ideas.
There are risks to communicating loudly about the value-add, since this brings in people who are in it purely for personal gain ("the vultures are circling", as one Forum post put it). Attracting the purely self-interested works fine for Schelling points like Silicon Valley, but not for one built around altruism.
Optimising for matchmaking
A specific way that Schelling points provide value is by making it easy to meet other people in the specific ways that lead to productive teams forming. An existing example of this is that everyone says one-on-one meetings are the main point of conferences, and there is (of course) a lot of thinking about how to make these effective. On the more informal end of the scale, Reciprocity exists.
However, the scope and value of EA matchmaking could be expanded. I'm not aware of many ways to match together entrepreneurial teams (the Charity Entrepreneurship incubation program is the only one that comes to mind). I recently took part in an informally-organised co-founder matching process and found it extremely helpful to quickly get a lot of information on what it's like to work together with several promising people.
I'd advise someone to think more about how to make the EA environment even more effective at matching people who should know about each other. However, I expect someone is already designing a 53-parameter one-on-one matching system with Calendly, Slack, and Matplotlib integration for the next conference, and therefore I will hold off on adding any more fuel to this fire.
Being legit
One of the specific ways in which a Schelling point becomes one is if things associated with it seem uniquely competent, successful, or otherwise good, in a clearly unfakeable way. It is helpful for Cambridge's Schelling point status that it can brag about having 121 Nobel laureates. That so many successful tech companies emerged from Silicon Valley specifically is an unfakeable signal. Any government or city can afford to throw some millions at putting up posters advertising its startup-friendliness; few can consistently produce multi-billion dollar tech companies.
No amount of community-building or image-crafting is likely to replicate the Schelling point power of obviously being the place where things happen. In some areas, I think EA already has such power: much of the research and work on existential risks happens within EA, and it might be hard to be a researcher on those topics without running into the large body of EA-originating work. However, EA goals require more than just research; note how being a project/organisation founder or working in an operations role has been creeping up the 80,000 Hours list of recommended career paths.
It would be extremely powerful, not just for direct impact reasons but also for building up EA's Schelling point status, if the EA community clearly spawned very obviously successful real-world projects. Alvea succeeding, or working Nucleic Acid Observatories being built, would be powerful examples. Likewise if Charity Entrepreneurship-incubated charities became clear stars of the non-profit world.
Meritocracy and impartial judgement
Right now, I think that if a person somewhere in the world has a well-thought-out idea for how to make the world a better place, their best bet for getting a fair hearing, useful feedback, and - if it is competitive with the most valuable existing projects - funding and support is likely to be posting it on the EA Forum. I don't think this is very obvious outside the EA community. However, this fact, and awareness of it, could make EA a more useful Schelling point, in the same way that the impression that Silicon Valley doesn't frown on weird ideas as long as they're important enough makes it a better Schelling point.
That EA endorses cause neutrality, holds high and transparent epistemic standards, and takes a quantitative mindset are key parts of this. However, to use this to increase EA's Schelling point power, these properties need to be clearly visible to outsiders.
The most likely way for this to become more obvious might be if specific EA organisations achieved such a reputation widely within their field (and then there was some path by which knowing of these organisations pointed people towards knowing about EA).
GiveWell might be an example of a clearly-EA-linked organisation with visibly high epistemics and judgement quality, though I don't know what their image or recognition level is outside the EA community. Another example would be if someone created successful and famous organisations along the lines of FTX Future Fund's proposed epistemic appeals process or widespread expert polling projects.
Openness and approachability
Good Schelling points are easy to enter, and don't select on attributes that they don't have to.
Every human sub-group, even if loose and purpose-driven, tends to develop a distinctive culture that is much more specific than strictly implied by its purpose. Sometimes this is useful, since it makes it easy for humans in even a loose group to bond with each other. However, a strong and distinct internal culture is also a barrier to entry. EA is already high-risk for having a strong barrier to entry, because
- many arguments and concepts in EA require background knowledge to understand, sometimes dense philosophical or technical background knowledge (and this is not the case just for more formal things like Forum posts; I've frequently heard "EV [expected value]", "QALY [quality-adjusted life year]", and "Pascal's mugging" assumed as obvious common terminology in casual conversation);
- EA (quite obviously, given what it's about) has a high concentration of non-obvious arguments that are obscure in public discussion but have huge implications; and
- perhaps the main route into EA is caring very strongly about intellectual arguments about abstract moral principles, which tends not to be a natural way for humans to join communities.
These largely unavoidable factors already make EA somewhat unapproachable, and make it seem like a tightly-knit, weird in-group/subculture (anecdotally, this seems to be the most common complaint about EA among Cambridge students). Weird cultural norms or quirks are (among other things!) barriers to entry. Therefore, they should be minimised - to the extent that they can be without impinging on what EA is about - if the goal is to maximise Schelling point value.
(Mostly implicit) selectivity for the right things
Some selection is usually part of a Schelling point's value. Top universities select for academic merit (though perhaps less so in the US). Silicon Valley selects for openness and interest/talent in tech/business. EA selects for openness, altruistic orientation (especially if consequentialist-leaning), good epistemics, and quantitative thinking.
I think it is counterproductive to view openness and selectivity as two ends of one scale that apply to everything. You want to select on important features and be open otherwise (note that, when creating a Schelling point, most of the selection is usually implicit - what types of people you attract - rather than explicit filtering). The key choice is not "open or selective overall?" but rather "for which X do we want to appeal only to people who have a value of X in some specific range?"
Here's a heuristic for when selectivity for X is useful: when the way X provides value is through its concentration rather than its amount. If you're at a party where you can only talk to a subset of the people during its course, you're going to care a lot about what fraction of people there are interesting - 10 interesting people in a party of 20 is better than 50 in a party of 5000.
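As a toy calculation of the party example (a minimal sketch in Python; the numbers are the hypothetical ones from the paragraph above, and it assumes you mingle uniformly at random):

```python
def expected_interesting(interesting: int, party_size: int,
                         conversations: int = 10) -> float:
    """Expected number of interesting people you meet, given a fixed
    conversation budget and random mingling. By linearity of expectation
    this is budget * fraction, with or without replacement."""
    return conversations * (interesting / party_size)

print(expected_interesting(10, 20))    # small, dense party  -> 5.0
print(expected_interesting(50, 5000))  # huge, dilute party  -> 0.1
```

Because the conversation budget is fixed, only the fraction of interesting people enters the calculation; the absolute count drops out entirely.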
Some cases are ambiguous. For example, if there exists a way for the good and important research to bubble to the top regardless of how much other research exists, it seems like total amount of (infohazard-free) research is the thing to maximise. However, a research area where the average paper is very high quality might help newcomers to the field, or might help lift the prestige of the field, so concentration matters at least somewhat.
To take another example, there was a recent debate over whether EA Global should be open access. Many of the arguments against boil down to thinking that the path to impact runs through a uniquely high concentration of EA engagement (or other variables) among the participants; arguments in favour often claim either that concentration matters less than the sheer number of interactions, or that the choice of selection variable(s) is wrong, or that CEA fails to select on its chosen variable(s), so even if the intention is right, what gets selected for in practice is wrong.
Hubs, and hub-related infrastructure
Finally, a key point of a Schelling point is that it is a point somewhere. Here, EA is doing increasingly well. Berkeley, Cambridge, Oxford, London, and Berlin all have large groups, and offices that you can apply to in order to work on EA-relevant things in the company of other EAs.
In Schelling point terms, there's also an argument that it might be better to have one really obvious and strong hub than many weaker ones (I've heard some Bay Area EAs in particular endorsing this view; invariably, their hub of choice is the Bay Area, though there is pushback). In practice, it seems that many physical hubs but one virtual/intellectual hub may be best. Both airplanes and people's desire not to uproot their lives are real and relevant things.
The organisers at each EA hub might benefit from applying Schelling point thinking to the context of their local scene.
Being one thing
Finally, a Schelling point needs to be one thing, at least in some loose sense. If New York had two Grand Central Stations, the classic Schelling point game would become a lot harder to solve.
One way to increase the One Thingness of the EA Schelling point is to merge it with other things. In Schelling point land, "merging" does not mean making them the same cluster, but rather creating an obvious and visible path from one thing to another. My understanding is that increasing the obviousness of EA in somewhat-adjacent communities (tech, longevity, space, and Emergent Ventures grantees) was a large part of what Future Forum tried to achieve.
Thanks to Hugo Eberhard for feedback and discussion on drafts and the general topic.
titotal @ 2022-09-10T11:02 (+20)
An interesting concept, but you're missing a section. Namely, you need to ask, what are the downsides to optimizing for being a Schelling point?
For example, if we make EA famous as the place where smart people hang out, make connections, and make high profile jobs and money... then people who don't care at all about doing good are gonna come in for the connections and jobs, at the expense of actual truth-seeking. One advantage of being a niche weirdo group is that you can be pretty sure everyone is a true believer.
You've also gotta ask, which group of smart people are we selecting for here? The demographics of EA do not match those of smart people in general; they are closer to those of successful people in STEM. Should we be de-emphasising aspects of STEM such as mathematical reasoning to appeal to other groups, or are we happy being just the STEM Schelling point? Everything has trade-offs here.
LRudL @ 2022-09-10T15:06 (+4)
I mentioned the danger of bringing in people mostly driven by personal gain (though very briefly). I think your point about niche weirdo groups finding some types of coordination and trust very easy is underrated. As other posts point out, the transition to positive personal incentives to do EA stuff is a new thing that will cause some problems, and it's unclear what to do about it (though, as that post also says, "EA purity" tests are probably a bad idea).
I think the maximally-ambitious view of the EA Schelling point is one that attracts anyone who fits into the intersection of altruistic, ambitious / quantitative (in the sense of caring about the quantity of good done and wanting to make that big), and talented/competent in relevant ways. I think hardcore STEM weirdness becoming a defining EA feature (rather than just a hard-to-avoid incidental feature of a lot of it) would prevent achieving this.
In general, the wider the net you want to cast, the harder it is to become a clear Schelling point, both for cultural reasons (subgroup cultures tend to be more specific than their purpose strictly implies, and broad cultures tend to split), and for capacity reasons (it's harder to get many people to hear about something than a few, and also simple practical things like big conferences costing more money and effort).
There is definitely an entire different post (or more) that could be written about how much and which parts of EA should be a Schelling-point- or platform-type thing, comparing the pros and cons. In this post I don't even attempt to weigh this kind of choice.
AllAmericanBreakfast @ 2022-09-10T16:38 (+7)
I'm interested in whether or not modern prestigious universities were founded with similar goals in mind. Is this theory of how to optimize for being a Schelling point really a good way to create one?
To add value to this post, I'm going to add a few summaries of how various prestigious US universities and research institutions were founded. I'll focus on institutions founded in the last 150 years or so, away from the East Coast.
- University of Washington (1876): Establishment of a university recommended by the Washington Territory governor, boosted by prominent residents as a Seattle prestige- and potential-enhancing measure. Influential people were persuaded that a university had greater prestige-enhancing potential than moving the state's capital. The school initially struggled, closing three times, but took off as Seattle grew. As a result of its president Charles Odegaard persuading the state legislature to increase funding, it grew massively from 1958-1973, and benefitted from the presence of major tech companies in Seattle.
- University of Chicago (1856): A state Senator who was a big booster of Chicago, and may have wanted to increase the value of his adjoining lots, donated the land for what would become the Old University of Chicago. The Old U never became financially successful, and was actually called the University of Chicago until changing its name to "Old UoC" to allow a completely separate and much better-funded institution to take over the UoC name in 1890. For a while, UoC tried to affiliate with smaller regional universities on condition they improve their quality, but UoC professors disliked the program, feeling it undermined UoC's reputation, and it stopped by 1910.
- California Institute of Technology (1891): Started and disbanded as a vocational school, then became the Polytechnic School. George Ellery Hale worked to develop the school into a "major scientific and cultural destination." President Roosevelt and the California Legislature latched onto these efforts, pouring political support and funding into the project. This was in the context of a national effort to improve the United States' standing as a scientific leader, in an era when Germany was perhaps the world's leader in scientific accomplishment.
- Stanford (1891): Founded by Leland Stanford, a powerful and wealthy man. From Wikipedia, Stanford's rise to high prestige occurred about 50 years after its founding, under the oversight of its president Wallace Sterling, at a time when the university had financial troubles. He built sources of income, started a massive fundraising program, better integrated the medical school, increased financial aid, grew the student body by 35%, tripled the size of the faculty, built a huge number of buildings, and started an overseas campus program.
From looking at these four universities (which are the only four I examined), I get the impression that:
- Universities were often founded as a way to make their city more prestigious.
- Universities achieved that goal in turn by explicitly optimizing for prestige.
It seems like one of the underlying factors here is that as the USA grew, it had a lot of territory and people with just as much academic potential as anyplace else, but lacked the institutions to identify and develop that potential. At the same time, other places in the world had underutilized leadership - great professors and administrators who were perhaps jockeying for position in the Ivy Leagues of the east coast. In the run-up to WWII, a lot of great German scientists fled the country and many of them came to America. My impression is that this was the turning point when the USA became the world leader in science and technology.
I'll put my interpretations of this in a separate reply to this comment.
AllAmericanBreakfast @ 2022-09-10T17:06 (+9)
My takeaways:
- One of the functions of institution-building is to triangulate underutilized funding, leadership, and talent to create value and opportunity for the local community. Successful efforts at the city and state level were seen as creating value for the country as a whole. These agglomerations are now also seen as creating value for the entire planet.
- Many of these founders were explicitly optimizing for prestige.
- Founders were working cooperatively with cities, states, and funders to mutually enhance their collective prestige.
- A lot of care was given to trying to craft the right culture, and balance growth with protecting the reputation of the institution. Not all of the decisions were correct, but it was a constant concern.
This analysis is flawed in all kinds of ways: small sample size, qualitative, susceptible to selection bias.
Universities may not be the right reference class for EA movement growth. This is particularly because EA often seeks to fill in gaps in altruistic work that are not adequately filled by current institutions, including universities.
There are forms of altruistic work that are neglected precisely because they are not very prestige-enhancing. There is not a lot of money or power to be had in alleviating insect suffering, for example, and by the ITN framework, that's what makes it an ideal EA cause.
On the other hand, I think this points to the operating strategy for EA, which is to engineer prestige, funding, and visibility in order to better align them with the altruistic work that we think is most pressing. In this, we may be most successful if we figure out how to benefit from, and to enhance, the prestige of other supporters.
These universities focused on mutual prestige-enhancement with their cities and states. We seem to be focusing on our relationships with media outlets (e.g. Vox, and perhaps now the New Yorker and similar), and also with universities themselves. I suspect that succeeding in establishing the nucleic acid observatory system would be a coup - we'd then be enhancing the prestige of the US government, and the US government may then see EA as a source of prestige.
I see a realignment of prestige being one of Eliezer Yudkowsky's major contributions to the field of AI safety. While there are plenty of AI researchers who dismiss his concerns out of hand, it's increasingly difficult to do so, and he's made it higher prestige to at least act as if you care about AI safety, and to legitimize AI safety as a field of research.
We also have failures to account for. I think that the Carrick Flynn campaign has to count as prestige-degrading. Not only did we fail in a rather humiliating way to get our candidate elected, "crypto-funded shadow group tries to buy election" is going to be something critics can say about us in the future. I think one concern about our current reliance on billionaire dollars is that we're pretty susceptible to our broader perception in society being colored by their whims and judgment and aesthetics.
OllieBase @ 2022-09-10T10:06 (+7)
I really like this framing. It captures an intuition I've had before about EA discussion spaces that's hard to articulate. Thanks for writing it up!
Gavin Bishop @ 2022-09-15T05:17 (+2)
Just adding my comment to +1 this one. Great articulation of an intuition I've had as well.
Lin BL @ 2022-09-11T21:24 (+1)
Very interesting read, thanks for writing.
I remember when I first joined EA the thing that I found the most different/beneficial was the community. I've met various people who care about having an impact, and about maximising their impact, and about rationality. But it is the community within EA and the concentration of these people all in one place that would be very difficult to replicate.
JaimeRV @ 2022-09-10T14:20 (+1)
Very good post! I agree with most of the points, and the framing helps to see where there is room for improvement.
Regarding this sentence: "In practice, it seems that many physical hubs but one virtual/intellectual hub may be best."
Do you have any particular thoughts on how to optimize a virtual hub or Schelling point?
For example, EAGx Virtual will take place in October and there might be some things that could make it a better Schelling point.
LRudL @ 2022-09-10T15:21 (+4)
For "virtual/intellectual hub", the central example in my mind was the EA Forum, and more generally the way in which there's a web of links (both literal hyperlinks and vaguer things) between the Forum, EA-relevant blogs, work put out by EA orgs, etc. Specifically in the sense that if you stumble across and properly engage with one bit of it, e.g. an EA blog post on wild animal suffering, then there's a high (I'd guess?) chance you'll soon see a lot of other stuff too, like being aware of centralised infrastructure like the Forum and 80k advising, and becoming aware of the central ideas like cause prio and x-risk. Therefore maybe the virtual/physical distinction was a bit misleading, and the real distinction is more like "Schelling point for intellectual output / ideas" vs "Schelling point for meeting people".
That being said, a point that comes to mind is that geographic dispersion is one of the most annoying things for real-world Schelling points and totally absent* if you do it virtually, so maybe there's some perspective like "don't think about EAGx Virtual as recreating an EAG but virtually, but rather as a chance to create a meeting-people-Schelling-point without the traditional constraints, and maybe this ends up looking more ambitious"?
(*minus timezones, but you can mail people melatonin beforehand :) )