MichaelA's Quick takes
By MichaelA @ 2019-12-22T05:35 (+10)
MichaelA @ 2021-05-16T14:26 (+40)
Notes from a call with someone who's a research assistant to a great researcher
(See also Matthew van der Merwe's thoughts. I'm sharing this because I think it might be useful to some people by itself, and so I can link to it from parts of my sequence on Improving the EA-Aligned Research Pipeline.)
- This RA says they definitely learned more from this RA role than they would've if doing a PhD
- Mainly due to tight feedback loops
- And strong incentives for the senior researcher to give good feedback
- The RA is producing "intermediate products" for the senior researcher. So the senior researcher needs and uses what the RA produces. So the feedback is better and different.
- In contrast, if the RA was working on their own, separate projects, it would be more like the senior researcher just looks at it and grades it.
- The RA has mostly just had to do literature reviews of all sorts of stuff related to the broad topic the senior researcher focuses on
- So the RA person was incentivised more than pretty much anyone else to just get familiar with all the stuff under this umbrella
- They wouldn't be able or encouraged to do that in a PhD
- The thing the RA hasn't liked is that he hasn't been producing his own work
- He hasn't had to deal with figuring out project choices for himself, facing a blank page, etc., so he hasn't learned to handle that better
- Also, he doesn't have publicly viewable stuff he's worked on that has his name on it
- So this has been a bit of an ego-style difficulty for him
- I said: Matthew VDM is also very positive about this sort of RA role. So now I think 2 out of the 2 people who I've heard from who've done such roles have been very positive about it. Should that update me a lot? Should this be like one of the five career paths we often mention to EAs, roughly as often as PhDs? Or maybe it's like there's a certain kind of person who'll gravitate towards such roles, and this'll work well for them and seem great to them, but that wouldn't generalise?
- This RA wouldn't have guessed that this would be a good role for him
- E.g. he likes having his name on stuff
- So that means this is more evidence, and more of an update, rather than it just being that he's predictably the sort of person for whom this kind of role fits.
- I said: How should this update me on things like the level of guidance vs self-direction that should occur in researcher training programs?
- Hard to say
- Main tradeoff is building skill doing self-directed projects vs building skills for each of the intermediate products
- (As noted above, feedback loops for intermediate products might tend to be better)
- Maybe the theoretically ideal situation would be to have some people specialise in being RAs indefinitely and others specialise in being principal investigators indefinitely
- This is maybe sort of like what the lab model looks like in the hard sciences
- Could perhaps be a good way to go in the social sciences.
- But it'd be quite a radical departure.
- And there are ego constraints.
- Perhaps there could be even more specialisation, e.g. into:
- PI
- They'd be generalists too, to some extent
- Brainstorming ideas guy
- Data analysis people
- Etc.
- (Though this is then an even more "out there" idea)
HStencil @ 2021-05-24T04:53 (+18)
For the last few years, I've been an RA in the general domain of ~economics at a major research university, and I think that while a lot of what you're saying makes sense, it's important to note that the quality of one's experience as an RA will always depend to a very significant extent on one's supervising researcher. In fact, I think this dependency might be just about the only thing every RA role has in common. Your data points/testimonials reasonably represent what it's like to RA for a good supervisor, but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish. Furthermore, it's harder than you'd think to screen for this in advance of taking an RA job. I feel particularly lucky to be working for a great supervisor, but/because I am quite familiar with how much the alternative sucks.
On a separate note, regarding your comment about people potentially specializing in RAing as a career, I don't really think this would yield much in the way of productivity gains relative to the current state of affairs in academia (where postdocs often already fill the role that I think you envision for career RAs). I do, however, think that it makes a lot of sense for some RAs to go into careers in research management. Though most RAs probably lack the requisite management aptitude, the ones who can effectively manage people, I think, can substantially increase the productivity of mid-to-large academic labs/research groups by working in management roles (I know J-PAL has employed former RAs in this capacity). A lot of academic research is severely management-constrained, in large part because management duties are often foisted upon PIs (and no one goes into academia because they want to be a manager, nor do PIs typically receive any management training, so the people responsible for management often enough lack both relevant interest and relevant skill). Moreover, productivity losses to bad management often go unrecognized because how well their research group is being managed is, like, literally at the very bottom of most PIs' lists of things to think about (not just because they're not interested in it, but also because they're often very busy and have many different things competing for their attention). Finally, one consequence of this is that bad RAs (at least in the social sciences) can unproductively consume a research group's resources for extended periods of time without anyone taking much notice. On the other hand, even if the group tries to avoid this by employing a more active management approach, in that case a bad RA can meaningfully impede the group's productivity by requiring more of their supervisor's time to manage them than they save through their work. My sense is that fear of this situation pervades RA hiring processes in many corners of academia.
MichaelA @ 2021-05-24T18:59 (+3)
it's important to note that the quality of one's experience as an RA will always depend to a very significant extent on one's supervising researcher. [...] but bad supervisors abound (at least/especially in academia), and RAing for a bad supervisor can be positively nightmarish.
Thanks, I think this provides a useful counterpoint/nuance that should help people make informed decisions about whether to try to get RA roles, how to choose which roles to aim for/accept, and whether and how to facilitate/encourage other people to offer or seek RA roles.
Your second paragraph is also interesting. I hadn't previously thought about how there may be overlap between the skills/mindsets that are useful for RAs and those useful for research management, and that seems like a useful point to raise.
regarding your comment about people potentially specializing in RAing as a career, I don't really think this would yield much in the way of productivity gains relative to the current state of affairs in academia
Minor point: That point was from the RA I spoke to, not from me. (But I do endorse the idea that such specialisation might be a good thing.)
More substantive point: It's worth noting that, while a lot of the research and research training I particularly care about happens in traditional academia, a lot also happens in EA parts of academia (e.g., FHI, GPI), in EA orgs, in think tanks, among independent researchers, and maybe elsewhere. So even if this specialisation wouldn't yield much in the way of productivity gains compared to the current state of affairs in one of those "sectors", it could perhaps do so in others. (I don't know if it actually would, though - I haven't looked into it enough, and am just making the relatively weak claim that it might.)
HStencil @ 2021-05-25T18:39 (+3)
Yeah, I think it's very plausible that career RAs could yield meaningful productivity gains in organizations that differ structurally from "traditional" academic research groups, including, importantly, many EA research institutions. I think this depends a lot on the kinds of research that these organizations are conducting (in particular, the methods being employed and the intended audiences of published work), how the senior researchers' jobs are designed, what the talent pipeline looks like, etc., but it's certainly at least plausible that this could be the case.
On the parallels/overlap between what makes for a good RA and what makes for a good research manager, my view is actually probably weaker than I may have suggested in my initial comment. The reason why RAs are sometimes promoted into research management positions, as I understand it, is that effective research management is believed to require an understanding of what the research process, workflow, etc. look like in the relevant discipline and academic setting, and RAs are typically the only people without PhDs who have that context-specific understanding. Plus, they'll also have relevant domain knowledge about the substance of the research, which is quite useful in a research manager, too. I think these are pretty much all of the reasons why RAs may make for good research managers. I don't really think it's a matter of skills or of mindset anywhere near as much as it's about knowledge (both tacit and not). In fact, I think one difficulty with promoting RAs to research management roles is that often, being a successful RA seems to select for traits associated with not having good management skills (e.g., being happy spending one's days reading academic papers alone with very limited opportunities for interpersonal contact). This is why I limited my original comment on this to RAs who can effectively manage people, who, as I suggested, I think are probably a small minority. Because good research managers are so rare, though, and because research is so management-constrained without them, if someone is such an RA and they have the opportunity, I would think that moving into research management could be quite an impactful path for them.
MichaelA @ 2021-05-25T18:46 (+2)
Ah, thanks for that clarification! Your comments here continue to be interesting food for thought :)
EdoArad @ 2021-05-22T06:12 (+11)
One idea that comes to mind is to set up an organization that hires out RAs as a service. Say, a nonprofit that works with multiple EA orgs and employs several RAs, some full-time and others part-time (think, a student job). This org can then handle recruiting, basic training, employment and some of the management. RAs could work on multiple projects with perhaps multiple different people, and tasks could be delegated to the organization as a whole, which would find the right RA for them.
A financial model could be something like EA orgs pay 25-50% of the relevant salaries for projects they recruit RAs for, and the rest is complemented by donations to the non-profit itself.
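To make that cost-sharing model concrete, here's a minimal sketch of how the numbers could work out. All figures (the RA's annual cost, the org's share, and the fraction of the RA's time used) are hypothetical placeholders rather than estimates:

```python
# Minimal sketch of the proposed RAs-as-a-service cost-sharing model.
# All figures are hypothetical placeholders, not estimates.

RA_ANNUAL_COST = 70_000  # hypothetical fully-loaded annual cost of one full-time RA (USD)
ORG_SHARE = 0.4          # fraction of project-time cost paid by the org (within the suggested 25-50% range)


def org_payment(fraction_of_ra_time: float) -> float:
    """Amount an org pays for using a given fraction of one RA's time for a year."""
    return RA_ANNUAL_COST * fraction_of_ra_time * ORG_SHARE


def donation_needed(fraction_of_ra_time: float) -> float:
    """Remainder for that slice of RA time, to be covered by donations to the nonprofit."""
    return RA_ANNUAL_COST * fraction_of_ra_time * (1 - ORG_SHARE)


if __name__ == "__main__":
    # An org that uses half an RA's time for a year:
    print(org_payment(0.5))      # 14000.0 paid by the org
    print(donation_needed(0.5))  # 21000.0 covered by donations
```

Under these made-up numbers, an org using half an RA's time for a year would pay $14,000, and the nonprofit would need to raise $21,000 in donations to cover the rest.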
MichaelA @ 2021-05-22T07:36 (+5)
Yeah, I definitely think this is worth someone spending at least a couple hours seriously thinking about doing, including maybe sending out a survey to or conducting interviews with non-junior researchers[1] to gauge interest in having an RA if it was arranged via this service.
I previously suggested a somewhat similar idea as a project to improve the long-term future:
- Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI
- This might allow them to complete additional valuable projects
- This also might help the research or writing assistants build career capital and test fit for valuable roles
- Maybe BERI can already provide this?
- It's possible it's not worth being proactive about this, and instead waiting for people to decide they want an assistant and create a job ad for one. But I'd guess that some proactiveness would be useful (i.e., that there are cases where someone would benefit from such an assistant but hasn't thought of it, or doesn't think the overhead of a long search for one is worthwhile)
- See also this comment from someone who did this sort of role for Toby Ord
And Daniel Eth replied there:
As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.
I'm going to now flag this idea to someone who I think might be able to actually make it happen.
MichaelA @ 2021-05-22T10:13 (+5)
Someone pointed out to me that BERI already do some amount of this. E.g., they recently hired, or are hiring, RAs for Anders Sandberg and Nick Bostrom at FHI.
It seems plausible that they're doing all the stuff that's worth doing, but also seems plausible (probable?) that there's room for more, or for trying out different models. I think anyone interested in potentially actually starting an initiative like this should probably touch base with BERI before investing lots of time into it.
EdoArad @ 2021-05-22T14:49 (+5)
Ah, right! There still might be a need outside of longtermist research, but I definitely agree that it'd be very useful to reach out to them to learn more.
For further context for people who might potentially go ahead with this, BERI is a nonprofit that supports researchers working on existential risk. I guess that Sawyer is the person to reach out to.
MichaelA @ 2021-05-22T18:42 (+3)
Btw, the other person I suggested this idea to today is apparently already considering doing this. So if someone else is interested, maybe contact both Sawyer and me, and I can put you in touch with this person.
And this person would do it for longtermist researchers, so yeah, it seems plausible/likely to me that there's more room for this for researchers focused on other cause areas.
Jamie_Harris @ 2021-05-19T18:47 (+5)
These feel like they should be obvious points and yet I hadn't thought about them before. So this was also an update for me! I've been considering PhDs, and your stated downsides don't seem like big downsides for me personally, so it could be relevant to me too.
Ok, so imagine you/we (the EA community) successfully make the case and encourage demand for RA positions. Is there supply?
- I don't recall ever seeing an RA position formally advertised (though I haven't been looking out for them per se, don't check the 80k job board very regularly, etc)
- If I imagine myself or my colleagues at Sentience Institute with an RA, I can imagine that we'd periodically find an RA helpful, but not enough for a full-time role.
- Might be different at other EA/longtermist nonprofits, but we're primarily funding-constrained. Apart from the sense that they might accept a slightly lower salary, why would we hire an RA when we could hire a full blown researcher (who might sometimes have to do the lit reviews and grunt-work themselves)?
HStencil @ 2021-05-24T04:54 (+6)
I actually think full-time RA roles are very commonly (probably more often than not?) publicly advertised. Some fields even have centralized job boards that aggregate RA roles across the discipline, and on top of that, there are a growing number of formalized predoctoral RA programs at major research universities in the U.S. I am actually currently working as an RA in an academic research group that has had roles posted on the 80,000 Hours job board. While I think it is common for students to approach professors in their academic program and request RA work, my sense is that non-students seeking full-time RA positions very rarely have success cold-emailing professors and asking if they need any help. Most professors do not have both ongoing need for an (additional) RA and the funding to hire one (whereas in the case of their own students, universities often have special funding set aside for students' research training, and professors face an expectation that they help interested students to develop as researchers).
Separately, regarding the second bullet point, I think it is extremely common for even full-time RAs to only periodically be meaningfully useful and to spend the rest of their time working on relatively low-priority "back burner" projects. In general, my sense is that work for academic RAs often comes in waves; some weeks, your PI will hand you loads of things to do, and you'll be working late, but some weeks, there will be very little for you to do at all. In many cases, I think RAs are hired at least to some extent for the value of having them effectively on call.
EdoArad @ 2021-05-22T06:04 (+6)
In regards to the third bullet point, there might be a nontrivial boost to the senior researchers' productivity and well-being.
Doing grunt-work can be disproportionately (relative to the time it takes) tiring and demotivating, and most people have some type of work that they dislike or just aren't good at, which could perhaps be delegated. Additionally, having a (strong and motivated) RA might just be more fun and help with making personal research projects more social and meaningful.
Regarding the salary, I've quickly checked GiveWell's salaries at Glassdoor. From that, I'd guess that an RA could cost about 60% as much as a senior researcher. (I'm sure that there is better and more relevant information out there.)
MichaelA @ 2021-05-20T07:26 (+2)
Ok, so imagine you/we (the EA community) successfully make the case and encourage demand for RA positions. Is there supply?
I think you're asking "...encourage that people seek RA positions. Would there be enough demand for those aspiring RAs?"? Is that right? (I ask because I think I'm more used to thinking of demand for a type of worker, and supply of candidates for those positions.)
I don't have confident answers to those questions, but here are some quick, tentative thoughts:
- I've seen some RA positions formally advertised (e.g., on the 80k job board)
- I remember one for Nick Bostrom and I think one for an economics professor, and I think I've seen others
- I also know of at least two cases where an RA position was opened but not widely advertised, including one case where the researcher was only a couple years into their research career
- I have a vague memory of someone saying that proactively reaching out to researchers to ask if they'd want you to be an RA might work surprisingly often
- I also have a vague impression that this is common with university students and professors
- But I think this person was saying it in relation to EA researchers
- (Of course, a vague memory of someone saying this is not very strong evidence that it's true)
- I do think there are a decent number of EA/longtermist orgs which have or could get more funding than they are currently able or willing to spend on their research efforts, e.g. due to how much time from senior people would be consumed for hiring rounds or managing and training new employees
- Some of these constraints would also constrain the org from taking on RAs
- But maybe there are cases where the constraint is smaller for RAs than for more independent researchers?
- One could think of this in terms of the org having already identified a full researcher whose judgement, choices, output, etc. the org is happy with, and they've then done further work to get that researcher on the same page with the org, more trained up, etc. The RA can slot in under that researcher and help them do their work better. So there may be less need to carefully screen them up front, and they may take up less management time from the most senior staff (instead being managed by the researcher themselves).
- I think this can also help answer "Apart from the sense that they might accept a slightly lower salary, why would we hire an RA when we could hire a full blown researcher"? Sometimes it may be easier to find someone who would be a fit for an RA role than someone who'd be a fit for a full researcher role.
- (It's worth noting that this is partly about the extent to which the person already has credible signals of fit, already has developed good judgement and research taste, etc. So some of those RAs may then later be great fits for full researcher roles. Though also some could perhaps remain as RAs and just keep providing more and more value in such roles.)
But again, these are quick, tentative thoughts. I've neither worked as nor had an RA, haven't been closely involved with any RA hiring decisions, haven't done research into how RAing works and what value it provides in non-EA academia, etc.
MichaelA @ 2021-05-29T09:41 (+3)
See also 80k on the career idea "Be research manager or a PA for someone doing really valuable work".
MichaelA @ 2021-05-19T15:29 (+30)
Readings and notes on how to do high-impact research
This shortform contains some links and notes related to various aspects of how to do high-impact research, including how to:
- come up with important research questions
- pick which ones to pursue
- come up with a "theory of change" for your research
- assess your impact
- be and stay motivated and productive
- manage an organisation, staff, or mentees to help them with the above
I've also delivered a workshop on the same topics, the slides from which can be found here.
The document has less of an emphasis on object-level things to do with just doing research well (as opposed to doing impactful research), though that's of course important too. On that, see also Effective Thesis's collection of Resources, Advice for New Researchers - A collaborative EA doc, Resources to learn how to do research, and various non-EA resources (some are linked to from those links).
Epistemic status
This began as a Google Doc of notes to self. It's still pretty close to that status - i.e., I don't explain why each thing is relevant, haven't spent a long time thinking about the ideal way to organise this, and expect this shortform omits many great readings and tips. But several people indicated finding the doc useful, so I'm now sharing it more widely.
I've done ~6 FTE months of academic research (producing one paper) and ~1 FTE year of longtermist research at EA orgs.
I do not have excellent, one-size-fits-all, easy-win answers to how to do high-impact research; I just have various scraps and ideas, and this shortform is merely intended to collect those.
This shortform expresses my personal views only (and is based on a doc created before I started either of my current jobs).
Readings - misc
* Asterisks indicate sources I haven't yet properly read myself.
Posts tagged Research methods
Posts tagged Org strategy
Ingredients for creating disruptive research teams (Forum post)
Ingredients for building disruptive research teams (EAG talk by the author of the post)
Can we intentionally improve the world? Planners vs. Hayekians
Building collaborative research teams ā Jess Whittlestone
https://www.charityentrepreneurship.com/research.html (and the pages linked to under āOUR RESEARCH PROCESSā)
What can someone do to become a stronger fit for future Open Philanthropy generalist RA openings?
Tips On Doing Impactful Research - Effective Thesis, 2020
Hard problem? Hack away at the edges.
Some of Nuño Sempere's recent work
Advice from 80,000 Hours: How to do high impact research
Rethink Priorities 2020 Impact and 2021 Strategy - EA Forum
Center on Long-Term Risk: 2021 Plans & 2020 Review - EA Forum
EAG talk on Aggregating Knowledge in EA
Literature Review for Academic Outsiders - LessWrong
Scholarship & Learning tag - LessWrong *
Readings - Primarily relevant to generating and picking questions
How to generate research proposals - EA Forum
Advice from Charity Entrepreneurship: How to do research that matters
Transcript: Karolina Sarek: How to do research that matters - EA Forum
Research as a stochastic decision process ā Jacob Steinhardt (see also Should marginal longtermist donations support fundamental or intervention research?)
Potential benefits & downsides of making and/or sharing a research agenda [upcoming post by me, link will be added later]
Should marginal longtermist donations support fundamental or intervention research?
A case for strategy research: what it is and why we need more of it
Why EAs researching mainstream topics can be useful
Research project planning templates/resources [shared]
Readings - primarily relevant to theories of change
Theory of Change in Research [slides] (see also the accompanying worksheet)
Do research organisations make theory of change diagrams? Should they?
https://longtermrisk.org/identifying-plausible-paths-to-impact/
Modeling the impact of safety agendas
Readings - primarily relevant to assessing impact
Rethink Priorities Impact Survey - EA Forum
Should surveys about the quality/impact of research outputs be more common?
Posts tagged Impact assessment
Readings - primarily relevant to managing an organisation, staff, or mentees
Collection of collections of resources relevant to (research) management, mentorship, training, etc.
Notes
I'd guess that the best approaches to the first four points - coming up with important research questions, picking among them, coming up with a ToC, and assessing impact - will differ considerably for different topics/areas. They might be hardest for longtermism, as in that cause area the goals are far away and sometimes unclear, and we get limited feedback loops.
On point 6 especially (regarding managing others), but also 1-5, I find it useful to think about the following interrelated points:
- Should we be more like planners or Hayekians?
- Should we be more top-down or bottom-up?
- How much control/guidance vs free rein should researchers be given?
- Should we value cohesion or not?
- Steve Jobs analogy:
- A biography of Jobs suggested that, instead of making the product the market wants, he leaned towards making the product he had strong inside-view reasons to think the market would want (even if they didn't know it yet)
- Analogously, should research answer the questions decision-makers know they want answered, or questions we expect they would value answers to but that they haven't yet even noticed exist or understood the significance of?
- The latter might tend to have a lower probability of success (including because you might just overestimate the importance), but might tend to lead to larger successes when it does succeed (because it causes a more fundamental shift and/or was less likely to have been done soon by someone else anyway).
Again, this was originally written like notes to self - let me know if I should clarify anything.
I'm grateful to Aleksandr Berezhnoi and Edo Arad for making useful comments on the Doc version of this shortform, and to Kat Woods for encouraging me to make a public post out of the Doc.
Kat Woods @ 2021-05-19T15:43 (+5)
Thanks for posting this! This is a gold mine of resources. This will save the Nonlinear team so much time.
Ramiro @ 2021-05-19T16:24 (+2)
Did you consider if this could get more views if it was a normal "longform" post? Maybe it's not up to your usual standards, but I think it's pretty good.
MichaelA @ 2021-05-19T17:03 (+2)
Nice to hear you think so!
I did consider that, but felt like maybe it's too much of just a rough, random grab-bag of things for a top-level post. But if the shortform or your comment gets unexpectedly many upvotes, or other people express similar views in comments, I may "promote" it.
MichaelA @ 2021-05-19T15:59 (+2)
More concretely, regarding generating and prioritising research questions, one place to start is these lists of question ideas:
- Research questions that could have a big social impact, organised by discipline
- A central directory for open research questions
- Crucial questions for longtermists
- Some history topics it might be very valuable to investigate
- This is somewhat less noteworthy than the other links
And for concrete tips on things like how to get started, see Notes on EA-related research, writing, testing fit, learning, and the Forum.
MichaelA @ 2020-03-26T10:58 (+30)
Collection of EA analyses of how social movements rise, fall, can be influential, etc.
Movement collapse scenarios - Rebecca Baron
Why do social movements fail: Two concrete examples. - NunoSempere
What the EA community can learn from the rise of the neoliberals - Kerry Vaughan
How valuable is movement growth? - Owen Cotton-Barratt (and I think this is sort-of a summary of that article)
Long-Term Influence and Movement Growth: Two Historical Case Studies - Aron Vallinder, 2018
Some of the Sentience Institute's research, such as its "social movement case studies"* and the post How tractable is changing the course of history?
A Framework for Assessing the Potential of EA Development in Emerging Locations* - jahying
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Hard-to-reverse decisions destroy option value - Schubert & Garfinkel, 2017
These aren't quite "EA analyses", but Slate Star Codex has several relevant book reviews and other posts, such as:
- https://slatestarcodex.com/2019/03/18/book-review-inventing-the-future/
- https://slatestarcodex.com/2018/04/30/book-review-history-of-the-fabian-society/
It appears Animal Charity Evaluators did relevant research, but I haven't read it, they described it as having been "of variable quality", and they've discontinued it.
In this comment, Pablo Stafforini refers to some relevant work that sounds like it's non-public.
See also my collection of work on value drift, and my list of some history topics it might be very valuable to investigate.
*Asterisks indicate I haven't read that source myself, and thus that the source might not actually be a good fit for this list.
Notes
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there are a lot of non-EA analyses of these topics. The reasons I'm collecting only EA analyses here are that:
- their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
- links to non-EA work can be found in most of the things I list here
- I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)
vaidehi_agarwalla @ 2020-07-14T01:19 (+7)
I have a list here that has some overlap but also some new things: https://docs.google.com/document/d/1KyVgBuq_X95Hn6LrgCVj2DTiNHQXrPUJse-tlo8-CEM/edit#
MichaelA @ 2020-07-14T02:50 (+2)
That looks very helpful - thanks for sharing it here!
rosehadshar @ 2021-12-10T09:19 (+4)
Some more recent things:
- Mauricio, What Helped the Voiceless? Historical Case Studies (and a shorter version here)
- James Ozden, A case for the effectiveness of protest
Also fwiw, I have read the ACE case studies, and I think that the one on environmentalism is pretty high quality, more so than some of the other things listed here. I'd recommend people interested in working on this stuff to read the environmentalism one.
rosehadshar @ 2021-12-20T12:53 (+3)
Another one: Alex Hill and Jaime Sevilla, Attempt at understanding the role of moral philosophy in moral progress (on women's suffrage and animal rights)
evakat @ 2023-04-04T21:39 (+3)
One to add to the list: More Than Just Good Causes. A Framework For Understanding How Social Movements Contribute To Change by Eugenia Lafforgue and Brett Mills (Future Matters Project)
Shri_Samson @ 2020-08-31T01:02 (+3)
This is probably too broad but here's Open Philanthropy's list of case studies on the History of Philanthropy which includes ones they have commissioned, though most are not done by EAs with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.
Edit: fixed links
MichaelA @ 2020-08-31T06:00 (+2)
Yeah, I think those are relevant, thanks for mentioning them!
It looks like the links lead back to your comment for some reason (I think I've done similar in the past). So, for other readers, here are the links I think you mean: 1, 2.
(Also, FWIW, I think if an analysis is by a non-EA but commissioned by an EA, I'd say that essentially counts as an "EA analysis" for my purposes. This is because I expect that such work's "precise focuses or methodologies may be more relevant to other EAs than would be the case with [most] non-EA analyses".)
MichaelA @ 2020-02-24T17:51 (+29)
Collection of sources that seem very relevant to the topic of civilizational collapse and/or recovery
Civilization Re-Emerging After a Catastrophe - Karim Jebari, 2019 (see also my commentary on that talk)
Civilizational Collapse: Scenarios, Prevention, Responses - Denkenberger & Ladish, 2019
Update on civilizational collapse research - Ladish, 2020 (personally, I found Ladish's talk more useful; see the above link)
Modelling the odds of recovery from civilizational collapse - Michael Aird (i.e., me), 2020
The long-term significance of reducing global catastrophic risks - Nick Beckstead, 2015 (Beckstead never actually writes "collapse", but has very relevant discussion of probability of "recovery" and trajectory changes following non-extinction catastrophes)
How much could refuges help us recover from a global catastrophe? - Nick Beckstead, 2015 (he also wrote a related EA Forum post)
Various EA Forum posts by Dave Denkenberger (see also ALLFED's site)
Aftermath of Global Catastrophe - GCRI, no date (this page has links to other relevant articles)
A (Very) Short History of the Collapse of Civilizations, and Why it Matters - David Manheim, 2020
A grant application from Ladish, and Oliver Habryka's thoughts on it - 2019
Civilisational collapse has a bright past – but a dark future - Luke Kemp, 2019
Are we on the road to civilisation collapse? - Luke Kemp, 2019
Civilization: Institutions, Knowledge and the Future - Samo Burja, 2018
Secret of Our Success - Henrich, 2015 (not about collapse, but it has many relevant insights, in my opinion) (see also the Slate Star Codex review)
Is there a subfield of economics devoted to "fragility vs resilience"? (and the answers there) - steve6320 and various commenters, 2020
I also have some as-yet unpublished work on collapse & recovery that I'm happy to share upon request.
Things about existential risk or GCRs more broadly, but with relevant parts
Toby Ord on the precipice and humanity’s potential futures - 2020 (the first directly relevant part is in the section on nuclear war)
The Precipice - Ord, 2020
Long-Term Trajectories of Human Civilization - Baum et al., 2019 (the authors never actually write "collapse", but their section 4 is very relevant to the topic)
Towards Comprehensive Existential Risk Assessment: A Bayesian Network Model And Proposal For Assessment - Rozendal, 2019, working paper
Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter - Cotton-Barratt, Daniel, Sandberg, 2020
Causal diagrams of the paths to existential catastrophe - Michael Aird, 2020
Stuart Armstrong interview - 2014 (the relevant section is 7:45-14:30)
Existential Risk Prevention as Global Priority - Bostrom, 2012
The Future of Humanity - Bostrom, 2007 (covers similar points to the above paper)
How Would Catastrophic Risks Affect Prospects for Compromise? - Tomasik, 2013/2017
Crucial questions for longtermists - Michael Aird, 2020
Things that sound relevant, but which I haven't read/watched/listened to yet
Catastrophe, Social Collapse, and Human Extinction - Robin Hanson, 2007
The Fragile World Hypothesis: Complexity, Fragility, and Systemic Existential Risk - David Manheim
Existential Risks: Exploring a Robust Risk Reduction Strategy - Karim Jebari, 2015
Islands as refuges for surviving global catastrophes - Turchin & Green, 2018
Videos and slides from a Princeton Workshop on Historical Systemic Collapse - 2019
Feeding Everyone No Matter What - Denkenberger & Pearce, 2014
Why and how civilisations collapse - Kemp [CSER]
https://en.wikipedia.org/wiki/Societal_collapse
https://en.wikipedia.org/wiki/Collapse:_How_Societies_Choose_to_Fail_or_Succeed [book]
https://en.wikipedia.org/wiki/The_Knowledge:_How_to_Rebuild_Our_World_from_Scratch - Dartnell [book] (there's also this TEDx Talk by the author, but I didn't find that very useful from a civilizational collapse perspective)
The Collapse of Complex Societies - Joseph Tainter, 1988
1177 B.C.: The Year Civilization Collapsed - Eric Cline, 2014
On Collapse Risk (C-Risk) - Pawntoe4, 2020
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
gavintaylor @ 2020-06-28T19:20 (+5)
Guns, Germs, and Steel - I felt this provided a good perspective on the ultimate factors leading up to agriculture and industry.
MichaelA @ 2020-06-28T22:56 (+2)
Great, thanks for adding that to the collection!
MichaelA @ 2020-09-18T07:01 (+3)
Suggested by a member of the History and Effective Altruism Facebook group:
- https://scholars-stage.blogspot.com/2019/07/a-study-guide-for-human-society-part-i.html
- Disputers of the Tao, by A. C. Graham
MichaelA @ 2020-10-13T17:07 (+2)
See also the book recommendations here.
MichaelA @ 2020-09-23T08:34 (+25)
Note: This shortform is now superseded by a top-level post I adapted it into. There is no longer any reason to read the shortform version.
Book sort-of-recommendations
Here I list all the EA-relevant books I've read or listened to as audiobooks since learning about EA, in roughly descending order of how useful I perceive/remember them being to me.
I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because:
- some of the factors making these books more/less useful to me won't generalise to most other people
- I'm including all relevant books I've read (not just the top picks)
Let me know if you want more info on why I found something useful or not so useful.
(See also this list of EA-related podcasts and this list of sources of EA-related videos.)
- The Precipice, by Ord, 2020
- See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
- I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good content and aren't included in the audiobook
- The book Superintelligence may have influenced me more, but that's just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I'd now recommend The Precipice first.
- Superforecasting, by Tetlock & Gardner, 2015
- How to Measure Anything, by Hubbard, 2011
- Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
- I.e., "the sequences"
- Superintelligence, by Bostrom, 2014
- Maybe this would've been a little further down the list if I'd already read The Precipice
- Expert Political Judgement, by Tetlock, 2005
- I read this after having already read Superforecasting, yet still found it very useful
- Normative Uncertainty, by MacAskill, 2014
- This is actually a thesis, rather than a book
- I assume it's now a better idea to read MacAskill, Bykvist, and Ord's book on the same subject, which is available as a free PDF
- Though I haven't read the book version myself
- Secret of Our Success, by Henrich, 2015
- The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous, by Henrich, 2020
- See also the Wikipedia page on the book, this review on LessWrong, and my notes on the book.
- I rank Secret of Our Success as more useful to me, but that may be partly because I read it first; if I only read either this book or Secret of Our Success, I'm not sure which I'd find more useful.
- The Strategy of Conflict, by Schelling, 1960
- See here for my notes on this book, and here for some more thoughts on this and other nuclear-risk-related books.
- This and other nuclear-war-related books are more useful for me than they would be for most people, since I'm currently doing research related to nuclear war
- This is available as an audiobook, but a few Audible reviewers suggest using the physical book due to the book's use of equations and graphs. So I downloaded this free PDF into my iPad's Kindle app.
- Human-Compatible, by Russell, 2019
- The Book of Why, by Pearl, 2018
- I found an online PDF rather than listening to the audiobook version, as the book makes substantial use of diagrams
- Blueprint, by Plomin, 2018
- This is useful primarily in relation to some specific research I was doing, rather than more generically.
- Moral Tribes, by Greene, 2013
- Algorithms to Live By, by Christian & Griffiths, 2016
- The Better Angels of Our Nature, by Pinker, 2011
- See here for some thoughts on this and other nuclear-risk-related books.
- Command and Control, by Schlosser, 2013
- See here for some thoughts on this and other nuclear-risk-related books.
- The Doomsday Machine, by Ellsberg, 2017
- See here for some thoughts on this and other nuclear-risk-related books.
- The Bomb: Presidents, Generals, and the Secret History of Nuclear War, by Kaplan, 2020
- The Alignment Problem, by Christian, 2020
- This might be better than Superintelligence and Human-Compatible as an introduction to the topic of AI risk. It also seemed to me to be a surprisingly good introduction to the history of AI, how AI works, etc.
- But I'm not sure this'll be very useful for people who've already read/listened to a decent amount (e.g., the equivalent of 4 books) about those topics.
- That's why it's ranked as low as it is for me.
- But maybe I'm underestimating how useful it'd be to many other people in a similar position.
- Evidence for that is that someone told me that an AI safety researcher friend of theirs found the book helpful.
- The Sense of Style, by Pinker, 2019
- One thing to note is that I think a lot of chapter 6 (which accounts for roughly a third of the book) can be summed up as "Don't worry too much about a bunch of alleged 'rules' about grammar, word choice, etc. that prescriptivist purists sometimes criticise people for breaking."
- And I already wasn't worried about most of those alleged rules, and hadn't even heard of some of them.
- And I think one could get the basic point without seeing all the examples and discussion.
- So a busy reader might want to skip or skim most of that chapter.
- Though I think many people would benefit from the part on commas.
- I read an ebook rather than listening to the audiobook, because I thought that might be a better way to absorb the lessons about writing style
- The Dead Hand, by Hoffman, 2009
- See here for some thoughts on this and other nuclear-risk-related books.
- Thinking, Fast and Slow, by Kahneman, 2011
- This might be the most useful of all these books for people who have little prior familiarity with the ideas, but I happened to already know a decent portion of what was covered.
- Against the Grain, by Scott, 2017
- I read this after Sapiens and thought the content would overlap a lot, but in the end I actually thought it provided a lot of independent value.
- See also this interesting Slate Star Codex review
- Sapiens, by Harari, 2015
- Destined for War, by Allison, 2017
- See here for some thoughts on this and other nuclear-risk-related books.
- The Dictator's Handbook, by de Mesquita & Smith, 2012
- Age of Ambition, by Osnos, 2014
- Moral Mazes, by Jackall, 1989
- The Myth of the Rational Voter, by Caplan, 2007
- The Hungry Brain, by Guyenet, 2017
- If I recall correctly, I found this surprisingly useful for purposes unrelated to the topics of weight, hunger, etc.
- E.g., it gave me a better understanding of the liking-wanting distinction
- See also this Slate Star Codex review (which I can't remember whether I read)
- The Quest: Energy, Security, and the Remaking of the Modern World, by Yergin, 2011
- Harry Potter and the Methods of Rationality, by Yudkowsky, 2010-2015
- Fiction
- I found this both surprisingly useful and very surprisingly enjoyable
- To be honest, I was somewhat amused and embarrassed to find what is ultimately Harry Potter fan fiction as enjoyable and thought-provoking as I found this
- This overlaps in many ways with Rationality: AI to Zombies, so it would be more valuable to someone who hadn't already read those sequences
- But I'd recommend such a person read those sequences before reading this; I think they're more useful (though less enjoyable)
- Within the 2 hours before I go to sleep, I try not to stimulate my brain too much - e.g., I try to avoid listening to most nonfiction audiobooks during that time. But I found that I could listen to this during that time without it keeping my brain too active. This is a perk, as that period of my day is less crowded with other things to do.
- Same goes for the books Steve Jobs, Power Broker, Animal Farm, and Consider the Lobster.
- Steve Jobs, by Walter Isaacson, 2011
- Surprisingly useful, considering the facts that I don't plan to emulate Jobs' life at all and that I don't work in a relevant industry
- Enlightenment Now, by Pinker, 2018
- The Undercover Economist Strikes Back, by Harford, 2014
- Against Empathy, by Bloom, 2016
- Inadequate Equilibria, by Yudkowksy, 2017
- Radical Markets, by Posner & Weyl, 2018
- How to Be a Dictator: The Cult of Personality in the Twentieth Century, by Dikötter, 2019
- On Tyranny: 20 Lessons for the 20th Century, by Snyder, 2017
- It seemed to me that most of what Snyder said was either stuff I already knew, stuff that seemed kind-of obvious or platitude-like, or stuff I was skeptical of
- This might be partly due to the book being under 2 hours, and thus giving just a quick overview of the "basics" of certain things
- So I do think it might be fairly useful per minute for someone who knew quite little about things like Hitler and the Soviet Union
- Climate Matters: Ethics in a Warming World, by John Broome, 2012
- The Power Broker, by Caro, 1975
- Very interesting and engaging, but also very long and probably not super useful.
- Science in the Twentieth Century: A Social-Intellectual Survey, by Goldman, 2004
- This is actually a series of audio recordings of lectures, rather than a book
- Animal Farm, by Orwell, 1945
- Fiction
- Brave New World, by Huxley, 1932
- Fiction
- Consider the Lobster, by Wallace, 2005
- To be honest, I'm not sure why Wiblin recommended this. But I benefitted from many of Wiblin's other recommendations. And I did find this book somewhat interesting.
Honorable mention: 1984, by Orwell, 1949. I haven't included that in the above list because I read it before I learned about EA. But I think the book, despite being a novel, is actually the most detailed exploration I've seen of how a stable, global totalitarian system could arise and sustain itself. (I think this is a sign that there needs to be more actual research on that topic - a novel published more than 70 years ago shouldn't be one of the best sources on an important topic!)
(Hat tip to Aaron Gertler for sort-of prompting me to post this list.)
Aaron Gertler @ 2021-02-17T09:04 (+4)
I recommend making this a top-level post. I think it should be one of the most-upvoted posts on the "EA Books" tag, but I can't tag it as a Shortform post.
MichaelA @ 2021-02-17T10:23 (+2)
I had actually been thinking I should probably do that sometime, so your message inspired me to pull the trigger and do it now. Thanks!
(I also made a few small improvements/additions while I was at it.)
MichaelA @ 2021-04-14T06:43 (+24)
Independent impressions
Your independent impression about X is essentially what you'd believe about X if you weren't updating your beliefs in light of peer disagreement - i.e., if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic relative to yours. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Armed with this concept, I try to stick to the following epistemic/discussion norms, and think it's good for other people to do so as well:
- Trying to keep track of my own independent impressions separately from my all-things-considered beliefs (which also takes into account peer disagreement)
- Trying to be clear about whether I'm reporting my independent impression or my all-things-considered belief
- Feeling comfortable reporting my own independent impression, even when I know it differs from the impressions of people with more expertise in a topic
One rationale for that bundle of norms is to avoid information cascades.
In contrast, when I actually make decisions, I try to make them based on my all-things-considered beliefs.
For example, my independent impression is that it's plausible that a stable, global authoritarian regime, or some other unrecoverable dystopia, is more likely than extinction, and that we should prioritise those risks more than we currently do. But I think that this opinion is probably uncommon among people who've thought a lot about existential risks. And that makes me somewhat less confident in this opinion and somewhat less likely to actually act on it. But I think it's still useful for me to keep track of my independent impression and report it sometimes, or else the community might end up with overly certain and overly homogenous beliefs.
This term and concept and these suggested norms aren't at all original to me - see in particular Naming beliefs and several of the posts tagged Epistemic humility (especially this one). But I wanted a clear, concise description of this specific set of terms and norms so that I could link to it whenever I say I'm reporting my independent impression, ask someone for theirs, or ask someone whether an opinion they've given is their independent impression or their all-things-considered belief.
Lukas_Finnveden @ 2021-09-26T17:55 (+6)
Thanks, I appreciate having something to link to! My independent impression is that it would be even easier to link to and easier to find as a top-level post.
MichaelA @ 2021-09-26T18:46 (+2)
Thanks for the suggestion - I've now gone ahead and made that top-level post :)
MichaelA @ 2021-04-20T07:41 (+2)
I just re-read this comment by Claire Zabel, which is also good and is probably where I originally encountered the "impressions" vs "beliefs" distinction.
(Though I still think that this shortform serves a somewhat distinct purpose, in that it jumps right to discussing that distinction, uses terms I think are a bit clearer - albeit clunkier - than just "impressions" vs "beliefs", and explicitly proposes some discussion norms that Claire doesn't quite explicitly propose.)
MichaelA @ 2020-06-26T07:17 (+18)
Collection of EA analyses of political polarisation
Book Review: Why We're Polarized - Astral Codex Ten, 2021
EA considerations regarding increasing political polarization - Alfred Dreyfus, 2020
Adapting the ITN framework for political interventions & analysis of political polarisation - OlafvdVeen, 2020
Thoughts on electoral reform - Tobias Baumann, 2020
Risk factors for s-risks - Tobias Baumann, 2019
Other EA Forum posts tagged Political Polarization
(Perhaps some older Slate Star Codex posts? I can't remember for sure.)
Notes
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Also, I'm aware that there has also been a vast amount of non-EA analysis of this topic. The reasons I'm collecting only analyses by EAs/EA-adjacent people here are that:
- their precise focuses or methodologies may be more relevant to other EAs than would be the case with non-EA analyses
- links to non-EA work can be found in most of the things I list here
- I'd guess that many collections of non-EA analyses of these topics already exist (e.g., in reference lists)
Stefan_Schubert @ 2020-06-26T14:32 (+20)
I've written some posts on related themes.
https://www.lesswrong.com/posts/k54agm83CLt3Sb85t/clearerthinking-s-fact-checking-2-0
MichaelA @ 2020-06-26T23:18 (+4)
Great, thanks for adding these to the collection!
MichaelA @ 2020-05-05T04:54 (+18)
To provide us with more empirical data on value drift, would it be worthwhile for someone to work out how many EA Forum users each year have stopped being users the next year? E.g., how many users in 2015 haven't used it since?
Would there be an easy way to do that? Could CEA do it easily? Has anyone already done it?
One obvious issue is that it's not necessary to read the EA Forum in order to be "part of the EA movement". And this applies more strongly for reading the EA Forum while logged in, for commenting, and for posting, which are presumably the things there'd be data on.
But it still seems like this could provide useful evidence. And it seems like this evidence would have a different pattern of limitations to some other evidence we have (e.g., from the EA Survey), such that combining these lines of evidence could help us get a clearer picture of the things we really care about.
MichaelA @ 2020-02-28T17:23 (+18)
Collection of some definitions of global catastrophic risks (GCRs)
See also Venn diagrams of existential, global, and suffering catastrophes
Bostrom & Ćirković (pages 1 and 2):
The term 'global catastrophic risk' lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]
Open Philanthropy Project/GiveWell:
risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction).
Global Challenges Foundation:
threats that can eliminate at least 10% of the global population.
Wikipedia (drawing on Bostrom's works):
a hypothetical future event which could damage human well-being on a global scale, even endangering or destroying modern civilization. [...]
any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity.
Yassif (appearing to be writing for the Open Philanthropy Project):
By our working definition, a GCR is something that could permanently alter the trajectory of human civilization in a way that would undermine its long-term potential or, in the most extreme case, threaten its survival. This prompts the question: How severe would a pandemic need to be to create such a catastrophic outcome? [This is followed by interesting discussion of that question.]
Beckstead (writing for Open Philanthropy Project/GiveWell):
the Open Philanthropy Project’s work on global catastrophic risks focuses on both potential outright extinction events and global catastrophes that, while not threatening direct extinction, could have deaths amounting to a significant fraction of the world’s population or cause global disruptions far outside the range of historical experience.
(Note that Beckstead might not be saying that global catastrophes are defined as those that "could have deaths amounting to a significant fraction of the world’s population or cause global disruptions far outside the range of historical experience". He might instead mean that Open Phil is focused on the relatively extreme subset of global catastrophes which fit that description. It may be worth noting that he later quotes Open Phil's other, earlier definition of GCRs, which I listed above.)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
My half-baked commentary
My impression is that, at least in EA-type circles, the term "global catastrophic risk" is typically used for events substantially larger than things which cause "10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic)".
E.g., the Global Challenges Foundation's definition implies that the catastrophe would have to be able to eliminate at least ~750 million people, which is 75 times higher than the number Bostrom & Ćirković give. And I'm aware of at least some existential-risk-focused EAs whose impression is that the rough cutoff would be 100 million fatalities.
With that in mind, I also find it interesting to note that Bostrom & Ćirković gave the "10 million fatalities" figure as indicating something clearly is a GCR, rather than as the lower threshold that a risk must clear in order to be a GCR. From their loose definition, it seems entirely plausible that, for example, a risk with 1 million fatalities might be a GCR.
That said, I do agree that "The stipulation of a precise cut-off does not appear needful at this stage." Personally, I plan to continue to use the term in a quite loose way, but probably primarily for risks that could cause much more than 10 million fatalities.
MichaelA @ 2020-05-30T01:40 (+7)
There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:
a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction. [emphasis added]
And they write:
What is a Global Catastrophic Risk?
We think of global catastrophic risks (GCRs) as risks that could cause the collapse of human civilization or even the extinction of the human species.
That is much closer to a definition of an existential risk (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that fact, and the clash between the term the initiative uses in its name and the term it uses when describing what it will focus on, it appears this initiative is conflating these two terms/concepts.
This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse, or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom).)
For further discussion, see Clarifying existential risks and existential catastrophes.
(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)
MichaelA @ 2020-03-19T06:50 (+4)
Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks
Gregory Lewis, in that profile itself:
Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy. Existential risks are the most extreme members of this class.
[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilising enough to permanently worsen humanity’s future or lead to human extinction.
Schoch-Spana et al. (2017), on GCBRs, rather than GCRs as a whole:
The Johns Hopkins Center for Health Security's working definition of global catastrophic biological risks (GCBRs): those events in which biological agents—whether naturally emerging or reemerging, deliberately created and released, or laboratory engineered and escaped—could lead to sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control. If unchecked, GCBRs would lead to great suffering, loss of life, and sustained damage to national governments, international relationships, economies, societal stability, or global security.
MichaelA @ 2020-12-12T11:01 (+2)
Metaculus features a series of questions on global catastrophic risks. The author of these questions operationalises a global catastrophe as an event in which "the human population decrease[s] by at least 10% during any period of 5 years or less".
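For concreteness, here's a minimal sketch (in Python, with made-up population numbers) of how that operationalisation could be checked against a yearly world-population series:

```python
# A rough sketch, assuming a hypothetical yearly world-population series (in billions),
# of the Metaculus criterion: does the population fall by at least 10% over any
# period of 5 years or less?
def is_global_catastrophe(population_by_year: dict[int, float],
                          threshold: float = 0.10,
                          max_window: int = 5) -> bool:
    years = sorted(population_by_year)
    for i, start in enumerate(years):
        for end in years[i + 1:]:
            if end - start > max_window:
                break
            decline = 1 - population_by_year[end] / population_by_year[start]
            if decline >= threshold:
                return True
    return False

# Toy example: a drop from 8.0bn to 7.1bn within 3 years (~11%) would qualify.
series = {2030: 8.0, 2031: 7.9, 2032: 7.4, 2033: 7.1, 2034: 7.2}
print(is_global_catastrophe(series))  # True
```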
MichaelA @ 2020-12-07T01:45 (+2)
Baum and Barrett (2018) gesture at some additional definitions/conceptualisations of global catastrophic risk that have apparently been used by other authors:
In general terms, a global catastrophe is generally understood to be a major harm to global human civilization. Some studies have focused on catastrophes resulting in human extinction, including early discussions of nuclear winter (Sagan 1983). Several studies posit minimum damage thresholds such as the death of 10% of the human population (Cotton-Barratt et al. 2016), the death of 25% of the human population (Atkinson 1999), or 10^4 to 10^7 deaths or $10^9 to $10^12 in damages (Bostrom and Ćirković 2008). Other studies define global catastrophe as an event that exceeds the resilience of global human civilization, resulting in its collapse (Maher and Baum 2013; Baum and Handoh 2014).
MichaelA @ 2020-04-28T01:59 (+1)
From an FLI podcast interview with two researchers from CSER:
"Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there’s any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change."
"Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome — which is something unexpectedly, unprecedentedly bad. And, actually, once you’ve got your head around that, different groups have slightly different understandings of what the differences between these three terms are.
So, for some groups, it’s all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; A catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: Maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.
Most of the systems — be this physiological systems, the world’s ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on — they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that’s consistent with and supporting our health and our continued survival, and that the institutions that we’ve developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that we’ll basically, we’ll be able to get on with our lives.
If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that’s really hard for us to respond to.
And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can’t get them back or it’s going to be really hard. And life as we know it cannot be resumed; We’re going to have to live in a very different and very inferior world, at least from our current way of thinking." (emphasis added)
MichaelA @ 2020-04-23T06:51 (+1)
Sears writes:
The term ‘global catastrophic risk’ (GCR) is increasingly used in the scholarly community to refer to a category of threats that are global in scope, catastrophic in intensity, and non-zero in probability (Bostrom and Cirkovic, 2008). [...] The GCR framework is concerned with low-probability, high-consequence scenarios that threaten humankind as a whole (Avin et al., 2018; Beck, 2009; Kuhlemann, 2018; Liu, 2018)
(Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)
MichaelA @ 2020-09-08T09:06 (+17)
Reflections on data from a survey about things I've written
I recently requested people take a survey on the quality/impact of things I've written. So far, 22 people have generously taken the survey. (Please add yourself to that tally!)
Here I'll display summaries of the first 21 responses (I may update this later), and reflect on what I learned from this.[1]
I had also made predictions about what the survey results would be, to give myself some sort of ramshackle baseline to compare results against. I was going to share these predictions, then felt no one would be interested; but let me know if you'd like me to add them in a comment.
For my thoughts on how worthwhile this was and whether other researchers/organisations should run similar surveys, see Should surveys about the quality/impact of research outputs be more common?
(Note that many of the things I've written were related to my work with Convergence Analysis, but my comments here reflect only my own opinions.)
The data
Q1: [chart of responses not shown here]
Q2: [chart of responses not shown here]
Q3: [chart of responses not shown here]
Q4: [chart of responses not shown here]
Q5: "If you think anything I've written has affected your beliefs, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected your beliefs."
(I didn't ask for permission to share people's comments, so, for this and the other comment questions, I'll just highlight some recurring themes or seemingly noteworthy specifics.)
- 9/21 respondents answered this
- The writings people mentioned specifically were my collections and summaries of existing ideas/work (e.g., A central directory for open research questions), Database of existential risk estimates, Improving the future by influencing actors' benevolence, intelligence, and power, and my comments on the Google doc of another person who wanted feedback.
- Most responses seemed to indicate the shift in beliefs caused by my work was fairly small.
Q6: [chart of responses not shown here]
Q7: "If you think anything I've written has affected your decisions or plans, please say what that thing was (either titles or roughly what the topic was), and/or say how it affected your decisions or plans."
- 5/21 respondents answered this
- One respondent mentioned a way in which something I wrote contributed meaningfully to an output of theirs which I think is quite valuable
- One respondent indicated Some history topics it might be very valuable to investigate influenced them somewhat
- Another indicated Improving the future by influencing actors' benevolence, intelligence, and power might inform an important decision
- There was one other small influence
Q8: [chart of responses not shown here]
Q8, text box: "If you answered 'Yes' to either of the above, could you say a bit about why?"
- 15/21 respondents filled in this text box
- Some respondents indicated things "on their end" (e.g., busyness, attention span), or that they'd have said yes to one or both of those questions for most authors rather than just for me in particular
- Some respondents mentioned topics just not seeming relevant to their interests
- Some respondents mentioned my posts being long, being rambly, or failing to have a summary
- Some respondents mentioned they were already well-versed in the areas I was writing about and didn't feel my posts were necessary for them
Q9: "Do you have any other feedback on specific things I've written, my general writing style, my topic choices, or anything else?"
- 10/21 respondents answered this
- Several non-specific positive comments/encouragements
- Several positive or neutral comments on me having a lot of output
- Several comments suggesting I should be more concise, use summaries more consistently, and/or be clearer about what the point of what I'm writing is
- Some comments indicating appreciation of my summaries, collections, and efforts to make ideas accessible
- Some comments on my writing style and clarity being good
- Some comments that my original research wasn't very impressive
- One comment that I seem too hung up on defining things precisely/prescriptively
- (I don't actually endorse linguistic prescriptivism, and remember occasionally trying to make that explicit. But I'll take this as useful data that I've sometimes accidentally given that impression, and try to adjust accordingly.)
Q10: "If you would like to share your name, please do so below. But this is 100% voluntary - you're not at all obliged to do so :)"
- 6/21 respondents gave their name/username
- 2 gave their email for if I wanted to follow-up
Some takeaways from all this
- Responses were notably more positive than expected for some questions, and notably less positive for others
- I don't think this should notably change my bottom-line view of the overall quality and impact of my work to date
- But it does make me a little less uncertain about that all-things-considered view, as I now have slightly more data that roughly supports it
- In turn, this updates me towards being a little more confident that it makes sense for me to focus on pursuing an EA research career for now (rather than, e.g., switching to operations or civil service roles)
- This is because I'm now slightly less worried that I'm being strongly influenced by overconfidence or motivated reasoning. (I already wanted to do research or writing before learning about EA.)
- I should definitely more consistently include summaries, and/or in other ways signal early and clearly what the point of a post is
- I was already aiming to move in this direction, and had predicted responses would often mention this, but this has still given me an extra push
- I should look out for ways in which I might appear linguistically prescriptive or overly focused on definitions/precision
- I should more seriously consider moving more towards concision, even at the cost of precision, clarity, or comprehensiveness
- Though I'm still not totally sold on that
- I'm also aware that this shortform comment is not a great first step!
- I should consider moving more towards concision, even at the cost of quantity/speed of output
- With extra time on a given post, I could perhaps find ways to be more concise without sacrificing other valuable things
- I should feel less like I "have to" produce writings rapidly
- This point is harder to explain briefly, so I'll just scratch the surface here
- I don't actually expect this to substantially change my behaviours, as that feeling wasn't the main reason for my large amount of output
- But if my output slows for some other reason, I think I'll now not feel (as) bad about that
- People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive
- The "direction" of this effect is in line with my expectations, but the strength was surprising
- I've updated towards more confidence that my summaries and (especially) my collections were valuable and worth making, and this may slightly increase the already-high chance that I'll continue creating that sort of thing
- But this is also slightly confusing, as my original research/ideas and/or aptitude for future original research seems to have put me in good stead for various job and grant selection processes
- And I don't have indications that my summaries or collections helped there, though they may have
- Much of my work to date may be less useful for more experienced/engaged EAs than less experienced/engaged EAs
- This is in line with my sense that I was often trying to make ideas more accessible, make getting up to speed easier, etc.
- There seemed to be a weak correlation between how recently something was posted and how often it was positively mentioned
- This broadly aligns with trends from other data sources (e.g., researchers reaching out to me, upvotes)
- This could suggest that:
- my work is getting better
- people are paying more attention to things written by me, regardless of their quality
- people just remember the recent stuff more
- Iād guess all three of those factors play some role
(I also have additional thoughts that are fuzzier or even less likely to be of interest to anyone other than me.)
[1] There are of course myriad reasons to not read into this data too much, including that:
- it's from a sample of only 21 people
- the sample was non-representative, and indeed self-selecting (so it may, for example, disproportionately represent people who like my work)
- the responses may be biased towards not hurting my feelings
That said, I think I can still learn something from this data, especially given flaws in other data sources I have. (E.g., comments from people who choose to randomly and non-anonymously reach out to me may be even more positively biased.)
If you've made it this far, you may also be interested in the above-mentioned Should surveys about the quality/impact of research outputs be more common?
HowieL @ 2020-09-11T14:42 (+11)
"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"
I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and for a relatively broad summary to be biased against them).
That said, as you know, I think your summaries/collections are useful and underprovided.
MichaelA @ 2020-09-11T17:50 (+2)
Good point.
Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.)
But I guess this seems less likely in cases where:
- the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
- the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")
In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.
HowieL @ 2020-09-11T18:38 (+1)
Seems reasonable
MichaelA @ 2022-02-11T14:48 (+16)
I've now turned this into a top-level post.
Collection of work on whether/how much people should focus on the EU if they're interested in AI governance for longtermist/x-risk reasons
I made this quickly. Please let me know if you know of things I missed. I list things in reverse chronological order.
- Is the European Union Relevant for AGI Governance? - Nicolas, 2022
- Reasons why EU laws/policies might be important for AI outcomes - Michael Aird, 2022
- European Union AI Development and Governance Partnerships - EU AI Governance, 2022
- Argument Against Impact: EU Is Not an AI Superpower - EU AI Governance, 2022
- Will the EU regulations on AI matter to the rest of the world? - Nicolas, 2022
- EU's importance for AI governance is conditional on AI trajectories - a case study - MathiasKB, 2022
- What is the EU AI Act and why should you care about it? - MathiasKB, 2021
- AI Governance Career Paths for Europeans - careersthrowaway, 2020
- How Europe might matter for AI governance - Stefan Torges, 2019
- AI policy careers in the EU - Lauro Langosco, 2019
There may be some posts I missed with the European Union tag, and there are also posts with that tag that aren't about AI governance but which address a similar question for other cause areas and so might have some applicable insights. There are also presumably relevant things I missed that aren't on the Forum.
MichaelA @ 2021-04-25T18:14 (+16)
Quick thoughts on Kelsey Piper's article Is climate change an "existential threat" - or just a catastrophic one?
- The article was far better than I expect most reporting on climate change as a potential existential risk to be
- This is in line with Kelsey Piper generally seeming to do great work
- I particularly appreciated that it (a) emphasised how the concepts of catastrophes in general and extinction in particular are distinct and why that matters, but (b) did this in a way that I suspect has a relatively low risk of seeming callous, nit-picky, or otherwise annoying to people who care about climate change
- But I also had some substantive issues with the article, which I'll discuss below
- The article conflated "existential threat"/"existential risk" with "extinction risk", thereby ignoring two other types of existential catastrophe: unrecoverable collapse and unrecoverable dystopia
- See also Venn diagrams of existential, global, and suffering catastrophes
- Some quotes from the article that demonstrate the conflation I'm referring to:
- "But there's a standard meaning of that phrase [existential threat]: that it's going to wipe out humanity - or even, as Warren implied Wednesday night, all life on our planet."
- "To academics in philosophy and public policy who study the future of humankind, an existential risk is a very specific thing: a disaster that destroys all future human potential and ensures that no generations of humans will ever leave Earth and explore our universe."
- I think this also means the article kind-of ignores or overconfidently dismisses the possibility that climate change might cause unrecoverable collapse or unrecoverable dystopia
- (The article does mention collapse a few times, but not something that corresponds to unrecoverable collapse or unrecoverable dystopia)
- I do think it is very unlikely (e.g., below 1 in 1000 chance) that climate change would relatively directly cause those things, but that's only a tentative view, and I think uncertainty and further research are warranted
- I expect that the fact that the article kind-of ignores unrecoverable collapse would cause some non-EA-types to disagree with the article, and that they'd be right to do so
- (Though I also expect that many of these people would be overconfident that climate change would cause a major collapse, that they'd pay insufficient attention to the question of whether this collapse is "unrecoverable", and that they also wouldn't consider reasonable scenarios of dystopias)
- The article also failed to mention the term "existential risk factor" or allude to that basic idea
- It thus ignores or overconfidently dismisses the possibility that climate change might make x-risks more likely, even if it doesn't directly cause them
- Finally, the article failed to mention one other, important reason why the distinction between catastrophes in general and extinction in particular matters: This distinction is relevant to prioritisation, in a world where other things may indeed be plausible extinction risks
- If people care about climate change because they think it's fairly likely to cause extinction, but in reality it's less likely to cause extinction and something else is more likely to do so, then (in theory!), by their own values, it should be really important for them to learn that and adjust their priorities
- But this is more just something I think could've maybe made the article even better, rather than something that I felt was problematic about it
- I think it's totally understandable that Kelsey Piper didn't discuss the above points in detail, but I think she could've:
- Been more careful in her phrasing to at least avoid being actively misleading
- Maybe briefly touched on (some) of these points and provided links to where they're discussed more thoroughly
(Disclaimer-ish thing: I haven't sent this to Kelsey because it doesn't seem super important, the article is from 2019, I assume she's quite busy, and I'm posting this as a shortform rather than something super prominent.)
MichaelA @ 2021-06-30T13:17 (+15)
The x-risk policy pipeline & interventions for improving it: A quick mapping
I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. Here's the result.
(This is of course imperfect in oodles of ways, probably overlaps with and ignores a bunch of existing work on policymaking*, presents things as more one-way and simplistic than they really are, etc. But maybe it'll be somewhat interesting/useful to some people.)
(If the images are too small for you, you can open each in a new tab.)
Feel free to ask me to explain anything that seems unclear. I could also probably give you an editable copy if you'd find that useful.
*One of many examples of the relevant stuff I haven't myself read is CSER's report on Pathways to Linking Science and Policy in the Field of Global Risk.
MichaelA @ 2021-01-02T04:21 (+14)
Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
This is a lightly edited version of some quick thoughts I wrote in May 2020. These thoughts are just my reaction to some specific claims in The Precipice, intended in a spirit of updating incrementally. This is not a substantive post containing my full views on nuclear war or collapse & recovery.
In The Precipice, Ord writes:
[If a nuclear winter occurs,] Existential catastrophe via a global unrecoverable collapse of civilisation also seems unlikely, especially if we consider somewhere like New Zealand (or the south-east of Australia) which is unlikely to be directly targeted and will avoid the worst effects of nuclear winter by being coastal. It is hard to see why they wouldn't make it through with most of their technology (and institutions) intact.
(See also the relevant section of Ord's 80,000 Hours interview.)
I share the view that itās unlikely that New Zealand would be directly targeted by nuclear war, or that nuclear winter would cause New Zealand to suffer extreme agricultural losses or lose its technology. (That said, I haven't looked into that closely myself.) However, it seems to me relatively easy to see why New Zealand might suffer a collapse - whether immediately following the nuclear war or after months, years, or decades. For example, I think collapse in New Zealand could plausibly be caused by:
- Some massive emotional, social, and political reactions within New Zealand to a global nuclear war and nuclear winter
- Nuclear winter might kill billions and cause many countries to collapse, and it seems hard to predict how people elsewhere would react to that
- Huge numbers of people (perhaps over a billion?) trying to get into New Zealand if agriculture and/or civilization in most other places collapses
- Further military actions by panicking governments or starving populaces
- Sudden collapse of global trade[1]
But what particularly stood out to me in the above passage was Ord's suggestion that it's "hard to see" why New Zealand's institutions wouldn't remain intact. For the above reasons, I would see it as likely that there'd be major shifts in New Zealand's institutions in a scenario where nuclear winter caused collapse in most of the rest of the world. And I'd see it as plausible that these shifts would be for the worse, and would cause NZ's institutions to no longer be "intact". (I'm not sure whether this is really a strong disagreement with Ord, as I'm not sure precisely what he meant by "hard to see".)
The more generalised version of the ideas I'm expressing is that I'm quite concerned about what "recovery" from collapse might look like - I think in a lot of scenarios, recovery along technological and economic dimensions seems fairly likely, but it seems far harder to say what our morals, norms, social institutions, political systems, etc. would be like. It's quite unclear to me how inevitable the apparent global trends towards something like capitalism (rather than something like feudalism), democracy, moral circle expansion, the abolition of slavery, etc. were, and whether any inevitability there was would remain in place following the "scarring" and upheaval of a collapse.
This view is related to the following statements from Beckstead (2015):
If a global catastrophe occurs, I believe there is some (highly uncertain) probability that civilization would not fully recover (though I would also guess that recovery is significantly more likely than not). This seems possible to me for the general and non-specific reason that the mechanisms of civilizational progress are not understood and there is essentially no historical precedent for events severe enough to kill a substantial fraction of the world's population. I also think that there are more specific reasons to believe that an extreme catastrophe could degrade the culture and institutions necessary for scientific and social progress, and/or upset a relatively favorable geopolitical situation. This could result in increased and extended exposure to other global catastrophic risks, an advanced civilization with a flawed realization of human values, failure to realize other "global upside possibilities," and/or other issues.
[...]
In this way, our situation seems analogous to the situation of someone who is caring for a sapling, has very limited experience with saplings, has no mechanistic understanding of how saplings work, and wants to ensure that nothing stops the sapling from becoming a great redwood. It would be hard for them to be confident that the saplingās eventual long-term growth would be unaffected by unprecedented shocksāsuch as cutting off 40% of its branches or letting it go without water for 20% longer than it ever had beforeāeven taken as given that such shocks wouldnāt directly/immediately result in its death. For similar reasons, it seems hard to be confident that humanityās eventual long-term progress would be unaffected by a catastrophe that resulted in hundreds of millions of deaths.
[1] I'm not sure precisely what any of those things would look like, how they could lead to collapse, how likely they are, or how likely recovery from such a collapse might be in any case. Perhaps Ord has looked into such possibilities in depth, and concluded they don't pose a major concern. But to me it at least seems plausible that they could cause a major collapse even in places such as New Zealand. And if collapse does occur, I see recovery as not guaranteed (although probably >50% likely, at least for economic and technological recovery).
You can see a list of all the things I've written that summarise, comment on, or take inspiration from parts of The Precipice here.
MichaelA @ 2020-03-30T15:04 (+14)
Collection of sources related to dystopias and "robust totalitarianism"
(See also Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?)
The Precipice - Toby Ord (Chapter 5 has a section on Dystopian Scenarios)
The Totalitarian Threat - Bryan Caplan (if that link stops working, a link to a Word doc version can be found on this page) (some related discussion on the 80k podcast here; use the "find" function)
Reducing long-term risks from malevolent actors - David Althaus and Tobias Baumann, 2020
The Centre for the Governance of AI’s research agenda - Allan Dafoe (this contains discussion of "robust totalitarianism", and related matters)
A shift in arguments for AI risk - Tom Sittler (this has a brief but valuable section on robust totalitarianism) (discussion of the overall piece here)
Existential Risk Prevention as Global Priority - Nick Bostrom (this discusses the concepts of "permanent stagnation" and "flawed realisation", and very briefly touches on their relevance to e.g. lasting totalitarianism)
The Future of Human Evolution - Bostrom, 2009 (I think some scenarios covered there might count as dystopias, depending on definitions)
The Vulnerable World Hypothesis - Bostrom, 2019
80,000 Hours interview with Tyler Cowen - 2018
Various works of fiction, most notably Orwell's 1984
Some sources on dictatorships/totalitarianism in general (without a focus on long-term future consequences)
Dikötter, F. (2019). How to Be a Dictator: The Cult of Personality in the Twentieth Century. Bloomsbury Publishing.
Glad, B. (2002). Why tyrants go too far: Malignant narcissism and absolute power. Political Psychology, 23(1), 1-2.*
Chang, J., & Halliday, J. (2007). Mao: The unknown story. Vintage.*
*Asterisks indicate I haven't read that source myself, and thus that the source might not actually be a good fit for this list.
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
MichaelA @ 2020-02-24T08:31 (+14)
Collection of all prior work I found that seemed substantially relevant to information hazards
Information hazards: a very simple typology - Will Bradshaw, 2020
Information hazards and downside risks - Michael Aird (me), 2020
Information hazards - EA concepts
Information Hazards in Biotechnology - Lewis et al., 2019
Bioinfohazards - Crawford, Adamson, Ladish, 2019
Information Hazards - Bostrom, 2011 (I believe this is the paper that introduced the term)
Terrorism, Tylenol, and dangerous information - Davis_Kingsley, 2018
Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical - Gentzel, 2018
Horsepox synthesis: A case of the unilateralist's curse? - Lewis, 2018
Mitigating catastrophic biorisks - Esvelt, 2020
The Precipice (particularly pages 135-137) - Ord, 2020
Information hazard - LW Wiki
Thoughts on The Weapon of Openness - Will Bradshaw, 2020
Exploring the Streisand Effect - Will Bradshaw, 2020
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks - Alexey Turchin, 2018
A point of clarification on infohazard terminology - eukaryote, 2020
Somewhat less directly relevant
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? - Shevlane & Dafoe, 2020 (commentary here)
The Vulnerable World Hypothesis - Bostrom, 2019 (footnotes 39 and 41 in particular)
Managing risk in the EA policy space - weeatquince, 2019 (touches briefly on information hazards)
Strategic Implications of Openness in AI Development - Bostrom, 2017 (sort-of relevant, though not explicitly about information hazards)
[Review] On the Chatham House Rule (Ben Pace, Dec 2019) - Pace, 2019
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
MichaelA @ 2020-03-20T15:18 (+1)
Interesting example: Leo Szilard and cobalt bombs
In The Precipice, Toby Ord mentions the possibility of "a deliberate attempt to destroy humanity by maximising fallout (the hypothetical cobalt bomb)" (though he notes such a bomb may be beyond our current abilities). In a footnote, he writes that "Such a 'doomsday device' was first suggested by Leo Szilard in 1950". Wikipedia similarly says:
The concept of a cobalt bomb was originally described in a radio program by physicist Leó Szilárd on February 26, 1950. His intent was not to propose that such a weapon be built, but to show that nuclear weapon technology would soon reach the point where it could end human life on Earth, a doomsday device. Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, but not deployed.[citation needed] [...]
The Russian Federation has allegedly developed cobalt warheads for use with their Status-6 Oceanic Multipurpose System nuclear torpedoes. However many commentators doubt that this is a real project, and see it as more likely to be a staged leak to intimidate the United States.
That's the extent of my knowledge of cobalt bombs, so I'm poorly placed to evaluate that action by Szilard. But this at least looks like it could be an unusually clear-cut case of one of Bostrom's subtypes of information hazards:
Attention hazard: The mere drawing of attention to some particularly potent or relevant ideas or data increases risk, even when these ideas or data are already “known”.
Because there are countless avenues for doing harm, an adversary faces a vast search task in finding out which avenue is most likely to achieve his goals. Drawing the adversary’s attention to a subset of especially potent avenues can greatly facilitate the search. For example, if we focus our concern and our discourse on the challenge of defending against viral attacks, this may signal to an adversary that viral weapons—as distinct from, say, conventional explosives or chemical weapons—constitute an especially promising domain in which to search for destructive applications. The better we manage to focus our defensive deliberations on our greatest vulnerabilities, the more useful our conclusions may be to a potential adversary.
It seems that Szilard wanted to highlight how bad cobalt bombs would be, that no one had recognised - or at least not acted on - the possibility of such bombs until he tried to raise awareness of them, and that since he did so there may have been multiple government attempts to develop such bombs.
I was a little surprised that Ord didn't discuss the potential information hazards angle of this example, especially as he discusses a similar example with regards to Japanese bioweapons in WWII elsewhere in the book.
I was also surprised by the fact that it was Szilard who took this action. This is because one of the main things I know Szilard for is being arguably one of the earliest (the earliest?) examples of a scientist bucking standard openness norms due to, basically, concerns of information hazards potentially severe enough to pose global catastrophic risks. E.g., a report by MIRI/Katja Grace states:
Leó Szilárd patented the nuclear chain reaction in 1934. He then asked the British War Office to hold the patent in secret, to prevent the Germans from creating nuclear weapons (Section 2.1). After the discovery of fission in 1938, Szilárd tried to convince other physicists to keep their discoveries secret, with limited success.
MichaelA @ 2022-01-23T08:57 (+13)
EDIT: This is now superseded by a top-level post so you should read that instead.
tl;dr: Value large impacts rather than large inputs, but be excited about megaprojects anyway because they're a new & useful tool we've unlocked
A lot of people are excited about megaprojects, and I agree that they should be. But we should remember that megaprojects are basically defined by the size of their inputs (e.g., "productively" using >$100 million per year), and that we don't intrinsically value the capacity to absorb those inputs. What we really care about is huge positive impact, and megaprojects are just one means to that end, and actually (ceteris paribus) we should be even more excited about achieving the same impacts using less inputs & smaller projects. How can we reconcile these thoughts, and why should we still be excited about megaprojects?
I suggest we think about this as follows:
- Imagine a Venn diagram with a circle for megaprojects and another circle for projects with great expected value (EV)
- Projects with great EV are really the focus and always have been
- Projects like 80,000 Hours, FHI, and Superintelligence were each far smaller than megaprojects, but in my view probably had and still have great EV, and an EV high enough to potentially justify megaproject-level spending. That's great (even better than megaprojects!), and we'd still love more projects that can punch so far above their weight.
- But there's also a large area of overlap between the two circles of the Venn diagram - a large overlap between megaprojects and projects with great EV - since it will usually take a lot of inputs to achieve a lot of good outcomes.
- And we haven't yet explored that area of overlap much - haven't found and executed on the obvious and best ideas. This is partly because it obviously generally makes sense to start with the smaller, cheaper, easier options, and partly because EA's stock of financial resources and relevant human capital has grown fairly rapidly so just a few years ago we had far less ability to execute on megaprojects. It's probably also partly because a lot of people aren't naturally sufficiently ambitious or lack sufficient self-confidence.
- Now that those stocks of financial and human capital resources have grown so much, we've sort-of "unlocked" that additional area of high-EV project options. And we're likely continuing to unlock it more each year.
- So we should now be explicitly focusing attention on that area of options; if we don't make an explicit effort to do that, we'll continue neglecting it via inertia. This doesn't make smaller projects that also have great EV any less valuable, but we don't want to be only focusing on those.
- But we should remember that really we should first and foremost be extremely ambitious in terms of impacts, and just willing to also - as a means to that end - be extremely ambitious in terms of inputs absorbed.
- One caveat: We should also to some extent value doing larger projects for the sake of doing larger projects - like sometimes choose that over doing smaller projects with similar/great impact - since that upgrades the career capital of the project's leaders/employees and also provides community-level lessons learned, helping further unlock the option of future megaprojects. But this still isn't a matter of valuing the size of projects as an end in itself.
Caveats: I wrote this fairly quickly and didn't run it by people to check that it clearly conveys my full views here. I imagine some people could take this as "be less excited about megaprojects", which is definitely not a message I want to convey to EA in general.
My thanks to Linch Zhang for conversations that informed my thinking here, though that doesn't imply his endorsement of my thinking or of this shortform.
Linch @ 2022-01-23T09:40 (+10)
I think the general thrust of your argument is clearly right, and it's weird/frustrating that this is not the default assumption when people talk about megaprojects (though maybe I'm not reading the existing discussions of megaprojects sufficiently charitably).
2 moderately-sized caveats:
- Re 2) "Projects with great EV are really the focus and always have been", I think in the early days of EA, and to a lesser degree still today, a lot of focus of EA isn't on great EV so much as high cost-effectiveness. To some degree the megaprojects discourse was set to push back against this.
- Re: 5, "It's probably also partly because a lot of people aren't naturally sufficiently ambitious or lack sufficient self-confidence" I think this is definitely true, but maybe I'd like to push back a bit on the individual framing of this lack of ambition, as I think it's partially cultural/institutional. That is, until very recently, we (EA broadly, or the largest funders etc), haven't made it as clear that EA supports and encourages extreme ambition in outputs in a way that means we (collectively) are potentially willing to pay large per-project costs in inputs.
MichaelA @ 2022-01-24T13:43 (+6)
Thanks - I think those are both really good points! I've now made a top-level post version of this shortform, with the main modifications being adjustments in light of your points (plus, unrelatedly, adding a colourful diagram because colourful diagrams are fun).
MichaelA @ 2021-01-04T11:08 (+13)
tl;dr: Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I think we should actually be more uncertain about that; I think it's plausible that economic stagnation would actually decrease existential risk, at least given certain types of stagnation and certain starting conditions.
(This is basically a nitpick I wrote in May 2020, and then lightly edited recently.)
---
In The Precipice, Toby Ord discusses the concept of existential risk factors: factors which increase existential risk, whether or not they themselves could "directly" cause existential catastrophe. He writes:
An easy way to find existential risk factors is to consider stressors for humanity or for our ability to make good decisions. These include global economic stagnation… (emphasis added)
This seems to me to imply that global economic stagnation is clearly and almost certainly an existential risk factor.
He also discusses the inverse concept, existential security factors: factors which reduce existential risk. He writes:
Many of the things we commonly think of as social goods may turn out to also be existential security factors. Things such as education, peace or prosperity may help protect us. (emphasis added)
It does seem to me quite plausible - indeed, probably >50% likely - that global economic stagnation is an existential risk factor, and that prosperity is a security factor (or at least that they tend to be these things). And in the case of prosperity, Ord merely says that prosperity may help protect us, which seems an entirely fair statement. (In the case of global economic stagnation, he seems to be making a stronger claim.)
But it also seems like how economic growth affects existential risk is still a fairly open and important question. (This is related to the idea of differential progress.)
And it also seems plausible that increasing growth from unusually low levels could be protective, while increasing it further from already high levels could increase risk, or something like that.
In fact, Ord himself separately - not in the context of economic growth - provides an interesting discussion of "the question of variables that both increase and decrease existential risk over different parts of their domains (i.e. where existential risk is non-monotonic in that variable)." He says that, in certain cases, we will need to consider such variables not as simply risk or security factors, but "as a more complex kind of factor instead".
Altogether, I think that, if I had been the person writing The Precipice:
- The book would've been much less excellent
- ...But also, I would've tried to make it clearer that global economic stagnation is just plausibly or probably an existential risk factor, rather than definitely one.
- I think I would've highlighted economic growth as a potential example of one of the "more complex kind[s] of factor[s]", for which the relationship is non-monotonic.
(See also this paper, this summary of it, and posts tagged differential progress. Based on a skim, that paper seems to suggest that economic growth reduces total existential risk, but also that it might increase annual risk in the short-run. I think that that'd roughly support Ord's statements. But given that that's just one paper on a complex topic, I still think we shouldn't be highly confident that economic growth is (always) an existential security factor.)
You can see a list of all the things I've written that summarise, comment on, or take inspiration from parts of The Precipice here.
MichaelA @ 2020-04-18T08:55 (+13)
Epistemic status: Unimportant hot take on a paper I've only skimmed.
Watson and Watson write:
Conditions capable of supporting multicellular life are predicted to continue for another billion years, but humans will inevitably become extinct within several million years. We explore the paradox of a habitable planet devoid of people, and consider how to prioritise our actions to maximise life after we are gone.
I react: Wait, inevitably? Wait, why don't we just try to not go extinct? Wait, what about places other than Earth?
They go on to say:
Finally, we offer a personal challenge to everyone concerned about the Earth’s future: choose a lineage or a place that you care about and prioritise your actions to maximise the likelihood that it will outlive us. For us, the lineages we have dedicated our scientific and personal efforts towards are mistletoes (Santalales) and gulls and terns (Laridae), two widespread groups frequently regarded as pests that need to be controlled. The place we care most about is south-eastern Australia – a region where we raise a family, manage a property, restore habitats, and teach the next generations of conservation scientists. Playing favourites is just as much about maintaining wellbeing and connecting with the wider community via people with shared values as it is about maximising future biodiversity.
I react: Wait, seriously? Your recipe for wellbeing is declaring the only culture-creating life we know of (ourselves) irreversibly doomed, and focusing your efforts instead on ensuring that mistletoe survives the ravages of deep time?
Even if your focus is on maximising future biodiversity, I'd say it still makes sense to set your aim a little higher - try to keep us afloat to keep more biodiversity afloat. (And it seems very unclear to me why we'd value biodiversity intrinsically, rather than individual nonhuman animal wellbeing, even if we cared more about nature than humans, but that's a separate story.)
This was a reminder to me of how wide the gulf can be between different people's ways of looking at the world.
It also reminded me of this quote from Dave Denkenberger:
In 2011, I was reading this paper called Fungi and Sustainability, and the premise was that after the dinosaur killing asteroid, there would not have been sunlight and there were lots of dead trees and so mushrooms could grow really well. But its conclusion was that maybe when humans go extinct, the world will be ruled by mushrooms again. I thought, why don’t we just eat the mushrooms and not go extinct?
MichaelA @ 2021-08-14T08:48 (+12)
I've recently collected readings and notes on the following topics:
- how to write/communicate well
- how to do high-quality, efficient research
- how to get useful input from busy people
- how to do high-impact research
Just sharing here in case people would find them useful. Further info on purposes, epistemic status, etc. can be found at those links.
MichaelA @ 2021-06-05T14:41 (+12)
Notes on Galef's "Scout Mindset" (2021)
Overall thoughts
- Scout Mindset was engaging, easy to read, and had interesting stories and examples
- Galef covered a lot of important points in a clear way
- She provided good, concrete advice on how to put things into practice
- So I'm very likely to recommend this book to people who aren't in the EA community, are relatively new to it, or aren't super engaged with it
- I also liked how she mentioned effective altruism itself several times and highlighted its genuinely good features in an accurate way, but without making this the central focus or seeming preachy
- (At least, I'm guessing people wouldn't find it preachy - it's hard to say given that I'm already a convert...)
- Conversely, I think I was already aware of and had internalised almost all the basic ideas and actions suggested in the book, and mostly act on these things
- This is mostly due to the various things I've read or listened to since learning about EA
- So I've put this 45th on my rough list of the 53 books I've read since learning about EA, in descending order of their perceived usefulness to me specifically
- And I wouldn't necessarily recommend this to long-time, highly engaged members of the EA community
- Though some may still find it useful, and many may still find it enjoyable
My Anki cards based on the book
Galef argues that the way many people see death is an example of a ___ ___.
Sweet lemon
[Opposite of sour grapes.]
Galef says impressions of how confident, capable, etc., a person is have more to do with the person's apparent ___ confidence than with the person's apparent ___ confidence
Social
Epistemic
What's a concrete way to adopt scout mindset when talking to a friend about an argument/disagreement you had with someone else?
Don't say which side you were on
See also
- Galef discussing ideas from the book on the Clearer Thinking podcast
- Rationality: From AI to Zombies (also the podcast version)
Misc notes
- My reasoning for making posts like this is explained in Suggestion: Make Anki cards, share them as posts, and share key updates
- But since not much in this book was new to me, I didn't make many Anki cards or have many key takeaways, and am thus doing a shortform rather than a top-level post
MichaelA @ 2023-01-02T14:07 (+11)
I've now turned this into a top-level post, and anyone who wants to read this should now read that version rather than this shortform.
Adding important nuances to "preserve option value" arguments
Summary
I fairly commonly hear (and make) arguments like "This action would be irreversible. And if we don't take the action now, we can still do so later. So, to preserve option value, we shouldn't take that action, even if it would be good to do the action now if now was our only chance."[1]
This is relevant to actions such as:
- doing field-building to a new target audience for some important cause area
- publicly discussing some important issue in cases where that discussion could involve infohazards, cause polarization, or make our community seem wacky
I think this sort of argument is often getting at something important, but in my experience such arguments are usually oversimplified in some important ways. This shortform is a quickly written[2] attempt to provide a more nuanced picture of that kind of argument. My key points are:
- "(Ir)reversibility" is a matter of degree (not a binary), and a matter of the expected extent to which the counterfactual effects we're considering causing would (a) fade by default if we stop fuelling them, and/or (b) could be reversed by us if we actively tried to reverse them.
- Sometimes we may be surprised to find that something does seem decently reversible.
- The "option value" we retain is also a matter of degree, and we should bear in mind that delays often gradually reduce total benefits and sometimes mean missing key windows of opportunity.
- Delaying can only be better than acting now if we expect we'll be able to make a better-informed decision later and/or we expect the action to become more net-positive later.
- If we don't expect our knowledge to improve in relevant ways or the act to become more valuable/less harmful, or we expect only minor improvements that are outweighed by the downsides of delay, we should probably just act now if the action does seem good.
But again, I still think "option value" arguments are often getting at something important; I just think we may often make better decisions if we also consider the above three nuances when making "option value" arguments. And, to be clear, I definitely still think it's often worth avoiding, delaying, or consulting people about risky-seeming actions rather than just taking them right now.
I'd welcome feedback on these ideas. Also please let me know if you think this should be a top-level post.
1. On "irreversibility"
In some sense, all actions are themselves irreversible - if you do that action, you can never make it literally the case that you didn't do that action. But, of course, that doesn't matter. The important question is instead something like "If we cause this variable to move from x to y, to what extent would our counterfactual impact remain even if we later start to wish we hadn't had that impact and we adjust our behaviors accordingly?" E.g., if we make a given issue something that's known by and salient to a lot of politicians and policymakers, to what extent, in expectation, will that continue to be true even if we later realise we wish it wasn't true?
And this is really a question of degree, not a binary.
There are two key reasons why something may be fairly reversible:
- Our counterfactual effects may naturally wash out
- The variable may gradually drift back to the setting it was at before our intervention
- Or it may remain at the setting we put it to, but with it becoming increasingly likely over time that that would've happened even in the absence of our intervention, such that our counterfactual impact declines
- For example, let's say we raise the salience of some issue to politicians and policymakers because it seems ~60% likely that that's a good idea, ~20% likely it's ~neutral, and ~20% likely it's a bad idea. Then we later come to think it was a bad idea after all, so we stop taking any actions to keep salience high. In that case:
- The issue may gradually fall off these people's radars again, as other priorities force themselves higher up the agenda
- Even if the issue remains salient or increases in salience, it could be that this or some fraction of it would've happened anyway, just on a delay
- This is likely for issues that gradually become obviously real and important and where we notice the issues sooner than other key communities do
- We could imagine a graph with one line showing how salience of the issue would've risen by default without us, another line showing how salience rises earlier or higher if we make that happen, and a third line for if we take the action but then stop. That third line would start the same as the "we make that happen" line, then gradually revert toward the "what would've happened by default" line.
- We may be able to actively (partially) reverse our effects
- I expect this effect would usually be less important than the "naturally wash out" effect.
- Basically because when I tried to think of some examples, they all seemed either hard to achieve big results from or like they'd require "weird" or "common sense bad" actions like misleading people.
- But perhaps sometimes decently large effects could be achieved from this?
- For example, we could try to actively reduce the salience of an issue we previously increased the salience of, such as by contacting the people who we convinced and who most started to increase the issue's salience themselves (e.g., academics who started publishing relevant papers), and explaining to them our reasoning for now thinking it's counterproductive to make this issue more salient.
2. On "we can still do it later"
In some sense, it's always the case that if you don't take an action at a given time, you can't later do exactly that same action or achieve exactly the same effects anymore. Sometimes this hardly matters, but sometimes it's important. The important question is something like "If we don't take this action now, to what extent could we still achieve similar expected benefits with similarly low expected harms via taking a similar action later on?"
I think very often significant value is lost by delaying net-positive actions. E.g., in general and all other factors held constant:
- delaying field-building will reduce the number of full-time-equivalent years spent on key issues before it's "too late anyway" (e.g., because an existential catastrophe has happened or the problem has already been solved)
- delaying efforts to improve prioritization & understanding of some issue will reduce the number of "policy windows" that occur between those efforts & the time when it's too late anyway
I also think that sometimes delay could mean we miss a "window of opportunity" for taking an action with a similar type and balance of benefits to harms of the action we have in mind. That is, there may not just be a decay in the benefits, but rather a somewhat "qualitative" shift in whether "something like this action" is even on the table. For example, we may miss the one key policy window we were aiming to affect.
(Somewhat relevant: Crucial questions about optimal timing of work and donations.)
3. Will we plausibly have more reason to do it later than we do now?
Delaying can only be better than acting now if at least one of the following is true:
- We expect we'll be able to make a better-informed decision later
- e.g., because our relevant knowledge will improve
- We expect the action to become more net-positive later
- e.g., because we expect favorable changes in background variables - the time will become "more ripe"
The more we expect those effects, the stronger the case for delay. The less we expect those effects, the weaker the case for delay. (A simplified way of saying this is "Why bother delaying your decision if you'd just later be facing the same or worse decision with the same or worse info?")
This can be weighed up against the degree to which we should worry about irreversibility and the degree to which we should worry about the costs of delay, in order to decide whether to act now. (Assuming the act does seem net positive & worth prioritizing, according to our current all-things-considered best guess.)
I think it's usually true that we'll (in expectation) be able to make a better-informed decision later, but how true that is can vary a lot between cases, and that magnitude matters if there are costs to delay.
I think it's sometimes true that the action will become more net-positive later, but probably usually the opposite is true (as discussed in the prior section).
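To make the weighing-up in sections 2 and 3 a bit more concrete, here's a tiny toy model. This is my own illustrative sketch rather than anything from the sources I cite; the parameter names (p_good, decay, info_gain) and all the numbers are assumptions made up for illustration.
```python
# Toy comparison of "act now" vs "delay one period, then decide".
# All parameters and numbers are illustrative assumptions, not from the post.

def ev_act_now(p_good: float, benefit: float, harm: float) -> float:
    """Expected value of taking the action now, under current uncertainty."""
    return p_good * benefit - (1 - p_good) * harm

def ev_delay(p_good: float, benefit: float, harm: float,
             decay: float, info_gain: float) -> float:
    """Expected value of delaying one period before deciding.

    decay:     fraction of the benefit lost by delaying (the cost of delay, section 2)
    info_gain: probability that by waiting we learn whether the action is good,
               and so only act in the worlds where it is (section 3)
    """
    informed = p_good * (1 - decay) * benefit  # we act only if the action turns out good
    uninformed = max(0.0, ev_act_now(p_good, (1 - decay) * benefit, harm))  # same decision, decayed benefit
    return info_gain * informed + (1 - info_gain) * uninformed

print(ev_act_now(0.6, 100, 80))                           # 28.0
print(ev_delay(0.6, 100, 80, decay=0.2, info_gain=0.3))   # 25.6 - slow learning: act now
print(ev_delay(0.6, 100, 80, decay=0.2, info_gain=0.8))   # 41.6 - fast learning: delay
```
The point of the sketch is just that delay wins only when the expected gain in information (or in how good the action becomes) outweighs the expected decay in benefits - which is the informal claim above, restated.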
- ^
See for example this post: Hard-to-reverse decisions destroy option value
I read that post ~4 years ago and remember thinking it made good points and is valuable. I expect if I re-read it I'd still agree with it. I don't think I'd explicitly noticed the nuances this shortform expresses when I read that post, and I didn't check today whether that post already accounts for these nuances well.
- ^
I expect that some of my points are obvious and that some readers might find it arrogant/naive/weird that I wrote this without citing x y z literatures. It also seems plausible some of my points or uses of terminology are mistaken. Please feel free to mention relevant literatures and feel encouraged to highlight potential mistakes!
MichaelA @ 2020-05-07T07:55 (+11)
Collection of sources relevant to moral circles, moral boundaries, or their expansion
Works by the EA community or related communities
Moral circles: Degrees, dimensions, visuals - Michael Aird (i.e., me), 2020
Why I prioritize moral circle expansion over artificial intelligence alignment - Jacy Reese, 2018
The Moral Circle is not a Circle - Grue_Slinky, 2019
The Narrowing Circle - Gwern, 2019 (see here for Aaron Gertler’s summary and commentary)
Radical Empathy - Holden Karnofsky, 2017
Various works from the Sentience Institute, including:
- "Our Perspective"
- a presentation by Jamie Harris
- a presentation by Jacy Reese (the table shown at 10:15 is perhaps especially relevant)
- another video by Reese
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence - Aird, work in progress
-Less relevant, or with only a small section that’s directly relevant-
Why do effective altruists support the causes we do? - Michelle Hutchinson, 2015
Finding more effective causes - Michelle Hutchinson, 2015
Cosmopolitanism - Topher Hallquist, 2014
Three Heuristics for Finding Cause X - Kerry Vaughan, 2016
The Drowning Child and the Expanding Circle - Peter Singer, 1997
The expected value of extinction risk reduction is positive - Brauner and Grosse-Holz, 2018
Crucial questions for longtermists: Overview - Michael Aird (me), work in progress
Mass media
Should animals, plants, and robots have the same rights as you? - Sigal Samuel (for Vox’s Future Perfect), 2019
Academic works
(There appears to be a substantial and continuing amount of psychological work on this topic; the papers I list here are just a fairly random subset to get you started.)
Toward a Psychology of Moral Expansiveness - Crimston et al., 2018
Moral expansiveness: Examining variability in the extension of the moral world - Crimston et al., 2016 (my unpolished commentary on this is here) (brief summary here)
Centripetal and centrifugal forces in the moral circle: Competing constraints on moral learning - Graham et al., 2017
Expanding the moral circle: Inclusion and exclusion mindsets and the circle of moral regard - Laham, 2009
Ideological differences in the expanse of the moral circle - Waytz et al., 2019
The Expanding Circle - Peter Singer, 1981
-Less relevant, or with only a small section that’s directly relevant-
The Better Angels of Our Nature - Steven Pinker, 2011
The moral standing of animals: Towards a psychology of speciesism - Caviola, Everett, & Faber, 2019
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
See also this comment, my collection of sources relevant to the idea of “moral weight”, and my collection of evidence about views on longtermism, time discounting, population ethics, etc. among non-EAs.
Jamie_Harris @ 2020-05-24T18:15 (+8)
The only other very directly related resource I can think of is my own presentation on moral circle expansion, and various other short content on Sentience Institute's website, e.g. our FAQ, some of the talks or videos. But I think that the academic psychology literature you refer to is very relevant here. Good starting point articles are the "moral expansiveness" article you link to above and "Toward a Psychology of Moral Expansiveness."
Of course, depending on definitions, a far wider literature could be relevant, e.g. almost anything related to animal advocacy, robot rights, consideration of future beings, consideration of people on the other side of the planet etc.
There's some wider content on "moral advocacy" or "values spreading," of which work on moral circle expansion is a part:
Arguments for and against moral advocacy - Tobias Baumann, 2017
Values Spreading is Often More Important than Extinction Risk - Brian Tomasik, 2013
Against moral advocacy - Paul Christiano, 2013
Also relevant: "Should Longtermists Mostly Think About Animals?"
MichaelA @ 2020-05-24T23:38 (+1)
Thanks for adding those links, Jamie!
I've now added the first few into my lists above.
Aaron Gertler @ 2020-05-12T07:43 (+3)
I continue to appreciate all the collections you've been posting! I expect to find reasons to link to many of these in the years to come.
MichaelA @ 2020-05-12T08:06 (+2)
Good to hear!
Yeah, I hope they'll be mildly useful to random people at random times over a long period :D
Although I also expect that most people they'd be mildly useful for would probably never be aware they exist, so there may be a better way to do this.
Also, if and when EA coordinates on one central wiki, these could hopefully be folded into or drawn on for that, in some way.
MichaelA @ 2020-02-24T08:45 (+11)
Collection of everything I know of that explicitly uses the terms differential progress / intellectual progress / technological development (except Forum posts)
This originally collected Forum posts as well, but now that is collected by the Differential progress tag.
Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?" - Michael Aird (i.e., me), 2021
Differential Intellectual Progress as a Positive-Sum Project - Tomasik, 2013/2015
Differential technological development: Some early thinking - Beckstead (for GiveWell), 2015/2016
Differential progress - EA Concepts
Differential technological development - Wikipedia
Existential Risk and Economic Growth - Aschenbrenner, 2019 (summary by Alex HT here)
On Progress and Prosperity - Christiano, 2014
How useful is “progress”? - Christiano, ~2013
Differential intellectual progress - LW Wiki
Existential Risks: Analyzing Human Extinction Scenarios - Bostrom, 2002 (section 9.4) (introduced the term differential technological development, I think)
Intelligence Explosion: Evidence and Import - Muehlhauser & Salamon (for MIRI) (section 4.2) (introduced the term differential intellectual development, I think)
The Precipice - Ord, 2020 (page 206)
Superintelligence - Bostrom, 2014
Some sources that are quite relevant but that don't explicitly use those terms
Strategic Implications of Openness in AI Development - Bostrom, 2017
Related concepts
The growth of our "power" (or "science and technology") vs our "wisdom" (see, e.g., page 34 of The Precipice)
The "pacing problem" (see, e.g., footnote 57 in Chapter 1 of The Precipice)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
MichaelA @ 2022-02-11T13:32 (+10)
Reasons why EU laws/policies might be important for AI outcomes
Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons.
But note that:
- I wrote this quite quickly, and this isn't my area of expertise.
- My aim is not to make people focus more on the EU, just to make it clearer what some possible reasons for that focus are. (Overall I actually think the EU is probably already getting enough attention from AI governance people.)
Please comment if you know of relevant prior work, if you have disagreements or think something should be added, and/or if you think I should make this a top-level post.
Note: I drafted this quickly, then wanted to improve it based on feedback & on things I read/remembered since writing it. But I then realised I'll never make the time to do that, so I'm just posting this ~as-is anyway since maybe it'll be a bit useful to some people. See also Collection of work on whether/how much people should focus on the EU if they're interested in AI governance for longtermist/x-risk reasons.
Summary of the reasons
- EU laws/policies might influence AI development/deployment elsewhere (especially in the US, China, the UK), via one of the following:[3]
- The Brussels effect
- Copying (for other reasons)
- Soft power / shifting norms
- Providing a testing ground
- EU laws/policies might influence AI development/deployment in the EU itself, which could matter if one of the following happens:
- EU might lead: An EU-based actor might become the/a leader in AI development.
- EU might be close behind a leader: An EU-based actor might become one of the main "laggards" in the pursuit of highly advanced AI development, such that its behaviour could affect the behaviour of the leader(s).
- There may be many important advanced AI developers/deployers, including EU actors:
- We could find ourselves in a scenario with highly multipolar development/deployment, slow/continuous takeoff, and/or more misuse/structural risk than accident risk.
- If so, there might be (say) 3-10 quite important AI developers/deployers, rather than the only important actors being the "leader" and the main 1-2 "laggards".
- EU actors seem decently likely to be among those 3-10 actors, and more likely than most states, regions, or companies elsewhere are.
Some impressions and hot takes
- I think longtermists tend to care about the EU mostly for the first rather than the second set of reasons. And that does seem like the correct focus to me.
- Longtermists who are only a bit familiar with the topic of the EU's importance for AI tend to focus mostly on the Brussels effect, but actually people more familiar with the topic tend to also place significant weight on copying, soft power / shifting norms, and providing a testing ground. I think we should place significant weight on all four of those reasons.
- Longtermists tend to think it's very unlikely that an EU-based actor might become the/a leader in AI development. But I'm not sure I've seen careful analysis of that question or careful consideration of the second and third reasons in that second category. I'd appreciate someone pointing me to or creating such analyses.
What do I mean by copying?
Copying would be policymakers/regulators or policy influencers (e.g., advocates) elsewhere copying, adapting, or taking inspiration from EU laws/policies when creating or pushing for laws/policies in their own jurisdictions. I imagine there are several reasons this might happen (this probably isn't comprehensive, and I don't know if each of these are actually noteworthy):
- Busyness on the part of the policymakers/regulators or policy influencers
- Lack of expertise on the part of the policymakers/regulators or policy influencers
- The EU laws/policies would've been tested and (maybe) shown to work at least decently well
- Copying may be more defensible than creating something new, or may make it easier to deflect blame to the EU policymakers if something goes wrong
What do I mean by soft power / shifting norms?
This would be things like:
- Making it seem like the sort of thing the EU is doing is common, standard, what sensible people do, etc.
- Shifting the Overton window
- EU actors using diplomacy, advocacy, etc. to influence other actors to do some things similarly to how the EU is doing them
What do I mean by providing a testing ground?
I primarily mean actually providing real lessons on what works, what doesn't, how best to craft policies, what unanticipated effects occur, what actors get angry about what, etc., such that these lessons can then actually inform policymakers/regulators or policy influencers elsewhere. I.e., not just making something seem more defensible or easier to convince people of, but actually informing what laws/policies are pursued and how they're crafted.
This could for example occur via longtermist actors pushing in the EU for the sort of things they think would be good in the US, UK, and China, then using lessons from the EU to inform what they push for in those other jurisdictions.
Reasons the EU could be good for this include that it's "lower stakes" (since it seems less likely to lead in AI development) and it seems "ahead" on and more receptive to substantial AI regulations.
Some additional thoughts
- There may well be important reasons I'm missing. Some possibilities:
- Affecting geopolitics
- Facilitating international treaties, standards, etc.[4]
- This post is just about reasons EU laws/policies might be important for AI outcomes, which doesn't include all reasons why working on EU laws/policies might be important for AI outcomes. The latter would also include:
- Gaining career capital (knowledge, skills, connections, credibility) that can be used for other work (including but not limited to AI-related law/policy work elsewhere).
- Doing background research with some relevance to risks, laws, and policies outside of the EU.
- I think it could be valuable to similarly explore why regions other than the US, China, EU, and UK might matter for AI development/deployment. I expect similar reasons would apply elsewhere as well, though with different strength and maybe with some reasons added or removed.
- For example, I've heard it argued that Singapore could be surprisingly important for reducing AI risk in part because China often copies Singaporean laws/policies.
- Again, my aim with this post is not to make people focus more on the EU, and overall I think the EU is probably getting enough attention from AI governance people.
My thanks to Lukas Finnveden, Mathias Bonde, and Neil Dullaghan for helpful comments on an earlier draft. This does not imply their endorsement of this post's claims.
- ^
A reviewer wrote: "Don't forget directives and decisions!
Regulation implies a very specific thing in EU policy, if you say (including regulations) that to some extent implies that this article does not concern other EU measures such as directives, decisions, or opinions.
https://europa.eu/european-union/law/legal-acts_en"
- ^
I know this topic has been written about in multiple existing posts and papers (e.g., many of the posts tagged European Union). But I seem to recall that (a) those I read mostly focused just on the Brussels effect and (b) those I read contained especially little mention of the second category of reasons the EU might matter for AI risk. The post How Europe might matter for AI governance is largely an exception to that and is also worth reading; I see its breakdown and my breakdown as complementary.
- ^
A reviewer noted "In worlds where you think EU policy can have an effect abroad, the absence of EU policies could also have an effect too right?
The absence of a united EU position on AI internationally might allow room for worse policies to advance, for actors with good policies to lack allies with enough clout. Something like acts of omission or "ally vacuum" (not sure if the latter is already a concept somewhere)"
- ^
A reviewer wrote "I agree this one might be a big deal, and would include it in your list"
Will Aldred @ 2023-03-28T18:05 (+4)
I've heard it argued that Singapore could be surprisingly important for reducing AI risk in part because China often copies Singaporean laws/policies.
Interesting!
(And for others who might be interested and who are based in Singapore, there's this Singapore AI Policy Career Guide.)
MichaelA @ 2021-06-04T09:59 (+10)
Collection of EA-associated historical case study research
This collection is in reverse chronological order of publication date. I think I'm forgetting lots of relevant things, and I intend to add more things in future - please let me know if you know of something I'm missing.
- Zaidi and Dafoe (2021), International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons
- Some of the Sentience Institute's research, such as its "social movement case studies"* and perhaps the post How tractable is changing the course of history?
- Open Philanthropy's list of case studies on the History of Philanthropy
- This includes work they commissioned, work done by Luke Muehlhauser, and previous non-EA-associated work which Open Philanthropy found
- Grace (2015), Leó Szilárd and the Danger of Nuclear Weapons: A Case Study in Risk Mitigation
- Muehlhauser (2013) How well will policy-makers handle AGI? (initial findings)
Possibly relevant things:
- Some book reviews by Scott Alexander, such as:
- It appears Animal Charity Evaluators did relevant research, but I haven't read it, they described it as having been "of variable quality", and they've discontinued it.
Motivation for this collection
I think it would be good for more EAs to learn about, research, and/or draw insights from history. See also Some history topics it might be very valuable to investigate and 80k's thoughts on the career idea of becoming a historian.
One relevant type of research is investigation of historical case studies to draw insights for specific other topics or questions.
I expect we might get more of that type of research, and at a higher quality level, if it was easier for people to find previous examples of such research conducted or commissioned by people in or associated with the EA community. For example, that could help people:
- Think about what sorts of questions this sort of research could shed light on
- Think about how to conduct that research
- Decide whether to fund, encourage, hire for, or support such research
Hence I'm making this collection.
Scope of this collection
I currently intend to:
- Not include work that's already on the EA Forum, since that can be found via the History tag (though that tag's scope is broader than the scope of this collection).
- Not include types of history research other than case studies, such as quantitative macrohistory
- Though often a single project or writeup includes elements of multiple types of history research
- Not include any of the huge amount of historical case study research that's not associated with EA (even research on topics EAs care about).
- People could of course learn a lot from that research as well.
- The reasons I'm collecting only EA-associated work are that:
- Otherwise this collection would just be insanely large
- I'd guess that many collections of non-EA-associated historical case study research already exist?
- The precise focuses or methodologies of EA-associated work may be more relevant to other EAs
- Links to non-EA-associated work can be found in most of the things I list here
See also
Collection of EA analyses of how social movements rise, fall, can be influential, etc.
---
Let me know if you think it'd be useful to change the scope of this (e.g., also including Forum posts) or to make other related collections (e.g., historical case study analyses focused on drawing insights for reducing AI risk, whether or not those case studies are EA-associated and whether or not they're on the Forum).
MichaelA @ 2021-05-27T08:33 (+10)
Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?
I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on.
Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.
I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.
So do such write-ups exist for EA orgs? (Maybe some posts in the Working at EA organizations series serve this function?) Should someone make some (or make more)?
One way to make them would be for people thinking about career options to have the calls they would've had anyway, but ask if they can take more detailed conversation notes and then post them to the Forum. (Perhaps anonymising the notes, or synthesising a few conversations into one post, if that seems best.) That might allow these people to quickly provide a handy public service. (See e.g. the surprising-to-me number of upvotes and comments from me just posting these conversation notes I'd made for my own purposes anyway.)
I think ideally these write-ups would be findable from the Working at EA vs Non-EA Orgs tag.
Jamie_Harris @ 2021-05-31T22:11 (+4)
Animal Advocacy Careers skills profiles are a bit like this for various effective animal advocacy nonprofit roles. You can also just read my notes on the interviews I did (linked within each profile) -- they usually just start with the question "what's a typical day?" https://www.animaladvocacycareers.org/skills-profiles
MichaelA @ 2021-02-10T02:31 (+10)
Notes on The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous (2020)
Cross-posted to LessWrong as a top-level post.
I recently finished reading Henrich's 2020 book The WEIRDest People in the World. I would highly recommend it, along with Henrich's 2015 book The Secret of Our Success; I've roughly ranked them the 8th and 9th most useful-to-me of the 47 EA-related books I've read since learning about EA.
In this shortform, I'll:
- Summarise my "four main updates" from this book
- Share the Anki cards I made for myself when reading the book[1]
- I intend this as a lower-effort alternative to writing notes specifically for public consumption or writing a proper book review
- If you want to download the cards themselves to import them into your own deck, follow this link.
My hope is that this will be a low-effort way for me to help some EAs to quickly:
- Gain some key insights from the book
- Work out whether reading/listening to the book is worth their time
You may find it also/more useful to read
- This review of the book on LessWrong (which I haven't read myself)
- The Wikipedia page on the book
- The Slate Star Codex review of Secret of Our Success
My four main updates
I wrote this quickly and only after finishing the book; take it all with a grain of salt.
Here are what I think are the four main ways in which WEIRDest People shifted my beliefs on relatively high-level points that seem potentially decision-relevant, as distinct from specific facts I learned:
- The book made me a bit less concerned about unrecoverable collapse and unrecoverable dystopia (i.e., the two types of existential catastrophe other than extinction, in Toby Ord's breakdown)
- This is because a big part of my concern was based on the idea that the current state and trend for things like values, institutions, and political systems seems unusually good by historical standards, and we don't fully understand how that state and trend came about, so we should worry that any "major disruption" could somehow throw us off course and that we wouldn't be able to get back on course (see Beckstead, 2015).
- E.g., perhaps a major war could knock us from a stable equilibrium with many liberal democracies to a stable equilibrium with many authoritarian regimes.
- But WEIRDest People made me a bit more confident that our current values, institutions, and political systems would stick around or re-emerge even after a "major disruption", because they or the things driving them are "fit" in a cultural evolutionary sense.
- The book made me less confident that the Industrial Revolution involved a stark change in a number of key trends, and/or made me more open to the idea that the drivers of the changes in those trends began long before the Industrial Revolution
- My previous belief was quite influenced by a post by Luke Muehlhauser
- Henrich seems to provide strong evidence that some key trends started long before 1750 (some starting in the first millennium CE, most starting by 1200-1500)
- But I'm not sure how much Henrich's book and Muehlhauser's post actually conflict with each other
- E.g., perhaps Henrich would agree (a) that there were discontinuities in all the metrics Muehlhauser looked at, and (b) that those metrics are more directly important than the metrics Henrich looked at; perhaps Henrich would say that the earlier discontinuities in the metrics he looked at were just the things that laid the foundations, not what directly mattered
- The book made me less confident that economic growth/prosperity is one of the main drivers of various ways in which the world seems to have gotten better over time (e.g., more democracy, more science, more concern for all of humanity rather than just one's ingroup)
- The book made me more open to the idea that other factors (WEIRD psychology and institutions) caused both economic growth/prosperity and those other positive trends
- E.g., I felt that the book pushed somewhat against an attitude expressed in this GiveWell post on flow-through effects
- This is related in some ways to my above-mentioned update about the industrial revolution
- The book made me more inclined to think that it's really hard to design institutions/systems based on explicit ideas about how they'll succeed in achieving desired objectives, or at least that humans tend to be bad at that, and that success more often results from a process of random variation followed by competition.
- In reality, this update was mainly caused by Henrich's previous book, Secret of Our Success, but WEIRDest People drummed it in a bit more, and it seemed worth mentioning here.
Note that:
- Each of those updates was more like a partial shift than a total reversal of my previous views
- See also Update Yourself Incrementally
- E.g., I still tentatively think longtermists should devote more resources/attention to risks of unrecoverable dystopia than they currently do, but I'm now a bit less confident about that.
- I made this list only after finishing the book, and hadn't been taking notes with this in mind along the way
- So I might be distorting these updates or forgetting other important updates
My Anki cards
See the bottom of this shortform for caveats about my Anki cards.[2]
The indented parts are the questions, the answers are in "spoiler blocks" (hover over them to reveal the text), and the parts in square brackets are my notes-to-self.
Henrich's team found that people from more market-integrated societies made ___ offers in the ultimatum game (compared to people from less market-integrated societies)
Higher, more equal
---
Credence goods are...
those that buyers can't easily assess for quality (e.g. a steel sword, whose carbon content is hard to determine)
---
Henrich discusses strategies to allow trade to happen in absence of market norms. Three I found interesting were...
Silent trade; divine oaths; and a single, widely scattered clan or ethnic group handling all aspects of moving goods through a vast trade network
---
Four things Henrich said KII and prevalence of cousin marriage were positively correlated with were...
- Psychological "tightness"
- Asch Conformity
- High claims (dishonesty) in the Impersonal Honesty Game
- Unpaid parking tickets per diplomat
---
Seven things Henrich said KII, prevalence of cousin marriage, and/or contemporary KII were negatively correlated with were...
- Individualism
- Universalism
- Analytical thinking
- Impersonal trust
- Importance of intentionality in judging a "theft"
- Contributions in the Public Good Game [there were two proxies for this]
- Voluntary blood donations per 1,000 people
[Some of these things were measured by proxies I'm somewhat skeptical of the relevance/significance of.]
---
In India and China, analytic thinking (as measured using the triad task) is negatively correlated with...
Percentage of land under rice paddy cultivation
---
What are three effects Henrich suggests that exposure to war tends to have?
- Tightening of interdependent network bonds
- Strengthening of commitments to important social norms
- Deepening of people's religious devotion
---
What 2 things does Henrich suggest have some similar effects to exposure to war?
Exposure to natural disasters
Nonviolent intergroup competition (e.g. between firms) [though he suggests this'll likely have smaller or no effects on religious devotion]
---
Henrich argues that at least 2 things (a) arose in part due to the emerging WEIRD psychology in the second millennium CE [and maybe the first as well?], and (b) then further contributed to the emergence of that WEIRD psychology. What are those 2 things?
- Democracy and/or participatory governance
- Protestantism
[He may have also mentioned other things. E.g., I think maybe he sees scientific thinking, universities, and more rational legal systems as also fitting that bill.]
---
What were the two key findings of Gurven et al. (2013)? [This has to do with personality.]
- In the first test of the five-factor model of personality variation in a largely illiterate, indigenous society, Gurven et al. failed to find support for the model
- That society's personality variation seemed to display 2 principal factors that may reflect socioecological characteristics common to small-scale societies
[I learned of this study via Henrich's WEIRDest People.]
---
What does Henrich say increases suicide rates?
Rates of Protestants relative to Catholics in an area
[He says historical Protestantism rates increased suicide rates at that time. I can't remember if he also says historical P rates increase present suicide rates, or that present P rates increase present suicide rates. But I'm guessing he believes those things.]
---
Does Henrich seem to think Protestants tend to basically have more extreme versions of WEIRD tendencies than Catholics do?
Yes
---
Muthukrishna and Henrich argue that rates of innovation are heavily influenced by what 3 factors?
- sociality (seemingly meaning both size and interconnectedness of a population)
- transmission fidelity
- cultural variance (analogous to genetic variance)
---
Henrich says that 4 voluntary associations (particularly) contributed to broadening the flow of knowledge and technology around Europe. These were:
Charter cities, monasteries, apprenticeships, universities
---
Henrich says that, historically, kings and other elites have tended to crack down on people with new ideas, inventions, or techniques that might shake up the existing power structure. He says this problem was mitigated in Europe [maybe just in the second millennium CE?] by 2 factors:
- Political disunity (there were many competing states)
- Relative cultural unity (due to transnational networks like the church, guilds, and the republic of letters)
[So people and groups could escape oppression by moving to other places.]
---
Henrich says it seems like banking deregulation increased ___, which in turn increased ___.
Interfirm competition; impersonal trust
---
What was the main way Henrich updated me away from the impression I'd gotten from Muehlhauser's industrial revolution post?
Henrich seems to provide strong evidence that key trends started long before 1750 (some starting in the first millennium CE, most starting by 1200-1500)
[See caveats in the "My four main updates" section.]
---
The emergence of sedentary agriculture drove a(n) ____ in/of kin-based institutions.
Intensification
[This led to norms related to things like cousin marriage, corporate ownership, patrilocal residence, segmentary lineages, and ancestor worship.]
---
Diamond argues that continents that are spread out in an ___ direction, such as ___, had a developmental advantage because of ___.
East-West;
Eurasia;
the ease with which crops, animals, ideas and technologies could spread between areas of similar latitude
[Quoting a PBS webpage on Guns, Germs and Steel.]
---
What does Henrich say is the basic relationship between his arguments and Diamond's arguments in Guns, Germs and Steel?
Henrich's arguments essentially pick up where Diamond's arguments leave off
[I.e. Diamond's arguments explain global inequality up to ~1000CE well, but don't explain things like why the Industrial Revolution happened in Britain, whereas Henrich's arguments can explain those later events.]
---
Henrich says that one reason why democracy hasn't been taken up as effectively/thoroughly in Islamic countries is that Islam...
Says daughters should inherit half of what sons inherit (rather than nothing/very little), which likely drove the spread of and/or sustained a custom in which daughters marry their father's brother's sons, or more broadly a custom of marrying within clans. [This is to keep wealth within a family/clan.]
This encourages intensive forms of kinship, which favours certain ways of thinking and institutions that don't mesh well with democracy.
[I may be slightly misrepresenting the ideas.]
---
Japan, South Korea, and China have been able to adapt relatively rapidly to the economic configurations and global opportunities created by WEIRD societies. Henrich says that one factor that was likely important in that was that these societies had experienced long histories of ___, which had ___.
agriculture and state-level governance;
fostered the evolution of cultural values, customs, and norms encouraging formal education, industriousness, and a willingness to defer gratification.
[These can be seen as pre-existing cultural institutions that happened to dovetail nicely with the new institutions acquired from WEIRD societies.]
---
Japan, South Korea, and China have been able to adapt relatively rapidly to the economic configurations and global opportunities created by WEIRD societies. Henrich says that one factor that was likely important in that was that these societies had powerful ___, which ___.
top-down orientations;
helped them rapidly adopt and implement key kin-based institutions acquired from WEIRD societies (e.g. abolishing polygamy, clans, arranged marriages).
---
Henrich says studies on the effects of evolution by natural selection (not cultural selection) on length of time people spend in school indicate that...
Evolution by natural selection reduced that time by about 8 months over the 20th century
[And by about 1.5 months per generation - maybe just more recently.
But this was very much offset by cultural evolution increasing the length of time in school by a larger amount.]
-----
[1] See here for the article that inspired me to actually start using Anki properly. Hat tip to Michelle Hutchinson for linking to that article and thus prompting me to read it. Note that some of the Anki cards that I made and include in this post violate some of the advice in that article - in particular, the advice to try to ensure that questions and answers each express only one idea.
[2] Caveats about these Anki cards:
- It's possible that some of these cards include mistakes, or will be confusing or misleading out of context.
- I haven't fact-checked Henrich on any of these points.
- I only started making the cards after I was more than halfway through the book
- I of course only made cards for some of the interesting insights in the remaining chapters
- Some of these cards include direct quotes without having quote marks.
- Some other cards are just my own interpretations - rather than definitely 100% parroting what the book is saying - but don't note that fact.
- A lot of the value of the book is not for the specific facts it collects, but rather its overarching theories and ways of looking at things. I think Anki cards could directly focus on those things, but I was making the cards for myself, so I mostly made them about specific facts that I thought would keep my memory of the theories and frameworks fresh.
Ramiro @ 2021-02-10T12:25 (+8)
oh, please, do post this type of stuff, especially in shortform... but, unfortunately, you can't expect a lot of karma - attention is a scarce resource, right?
I'd totally like to see you blog or send a newsletter with this.
MichaelA @ 2021-02-10T02:38 (+2)
Meta: I recently made two similar posts as top-level posts rather than as shortforms. Both got relatively little karma, especially the second. So I feel unsure whether posts/shortforms like this are worth putting in the time to make, and are worth posting as top-level posts vs as shortforms. If any readers have thoughts on that, let me know.
(Though it's worth noting that making these posts takes me far less time than making regular posts does - e.g., this shortform took me 45 minutes total. So even just being mildly useful to a few people might be sufficient to justify that time cost.)
[Edited to add: I added the "My four main updates" section to this shortform 4 days after I originally posted it and made this comment.]
Habryka @ 2021-02-10T02:40 (+5)
I really like these types of posts. I have some vague sense that these both would get more engagement and excitement on LW than the EA Forum, so maybe worth also posting them to there.
MichaelA @ 2021-02-10T02:59 (+4)
Thanks for that info and that suggestion. Given that, I've tried cross-posting my Schelling notes, as an initial experiment.
MichaelA @ 2020-05-10T04:39 (+10)
Collection of evidence about views on longtermism, time discounting, population ethics, significance of suffering vs happiness, etc. among non-EAs
Appendix A of The Precipice - Ord, 2020 (see also the footnotes, and the sources referenced)
The Long-Term Future: An Attitude Survey - Vallinder, 2019
Older people may place less moral value on the far future - Sanjay, 2019
Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy - Spears, 2017
The Psychology of Existential Risk: Moral Judgments about Human Extinction - Schubert, Caviola & Faber, 2019
Psychology of Existential Risk and Long-Termism - Schubert, 2018 (space for discussion here)
Descriptive Ethics – Methodology and Literature Review - Althaus, ~2018 (this is something like an unpolished appendix to Descriptive Population Ethics and Its Relevance for Cause Prioritization, and it would make sense to read the latter post first)
A Small Mechanical Turk Survey on Ethics and Animal Welfare - Brian Tomasik, 2015
Work on "future self continuity" might be relevant (I haven't looked into it)
Some evidence about the views of EA-aligned/EA-adjacent groups
Survey results: Suffering vs oblivion - Slate Star Codex, 2016
Survey about preferences for the future of AI - FLI, ~2017
Some evidence about the views of EAs
Facebook poll relevant to preferences for one's own suffering vs bliss - Jay Quigley, 2016
See also my collection of sources relevant to moral circles, moral boundaries, or their expansion, and my collection of sources relevant to the idea of “moral weight”.
Stefan_Schubert @ 2021-06-04T15:58 (+4)
Aron Vallinder has put together a comprehensive bibliography on the psychology of the future.
MichaelA @ 2021-06-04T16:47 (+2)
Nice, thanks.
I've now also added that to the Bibliography section of the Psychology of effective altruism entry.
MichaelA @ 2020-04-02T15:56 (+10)
If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
tl;dr I think it's "another million years", or slightly longer, but I'm not sure.
In The Precipice, Toby Ord writes:
How much of this future might we live to see? The fossil record provides some useful guidance. Mammalian species typically survive for around one million years before they go extinct; our close relative, Homo erectus, survived for almost two million.[38] If we think of one million years in terms of a single, eighty-year life, then today humanity would be in its adolescence - sixteen years old, just coming into our power; just old enough to get ourselves into serious trouble.
(There are various extra details and caveats about these estimates in the footnotes.)
Ord also makes similar statements on the FLI Podcast, including the following:
If you think about the expected lifespan of humanity, a typical species lives for about a million years [I think Ord meant "mammalian species"]. Humanity is about 200,000 years old. We have something like 800,000 or a million or more years ahead of us if we play our cards right and we don’t lead to our own destruction. The analogy would be 20% of the way through our life[...]
I think this is a strong analogy from a poetic perspective. And I think that highlighting the typical species' lifespan is a good starting point for thinking about how long we might have left. (Although of course we could also draw on many other facts for that analysis, as Ord discusses in the book.)
But I also think that there's a way in which the lifespan analogy might be a bit misleading. If a human is 70, we expect they have less time left to live than if they were 20. But I'm not sure whether, if a species is 700,000 years old, we should expect that species to go extinct sooner than a species that is 200,000 years old will.
My guess would be that a ~1 million year lifespan for a typical mammalian species would translate into a roughly 1 in a million chance of extinction each year, which doesn't rise or fall very much in a predictable way over most of the species' lifespan. Specific events, like changes in climate or another species arriving/evolving, could easily change the annual extinction rate. But I'm not aware of an analogy here to how ageing increases the annual risk of humans dying from various causes.
I would imagine that, even if a species has been around for almost or more than a million years, we should still perhaps expect a roughly 1 in a million chance of extinction each year. Or perhaps we should even expect them to have a somewhat lower annual chance of extinction, and thus a higher expected lifespan going forwards, based on how long they've survived so far?
(But I'm also not an expert on the relevant fields - not even certain what they would be - and I didn't do extra research to inform this shortform comment.)
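As a minimal sketch of the underlying reasoning (my own framing, not something from Ord or the book): if extinction risk were a constant rate λ per year, the species' survival time T would be roughly exponentially distributed, and the exponential distribution is memoryless - the expected remaining lifespan doesn't shrink as the species ages:
$$
P(T > s + t \mid T > s) = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t},
\qquad
\mathbb{E}[T - s \mid T > s] = \frac{1}{\lambda} \approx 1 \text{ million years for } \lambda = 10^{-6} \text{ per year}.
$$
Under that assumption, a 200,000-year-old species and an 800,000-year-old species both expect roughly another million years; any shift towards expecting longer would come from treating the long survival so far as evidence that the species' particular λ is lower than average.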
I don't think that Ord actually intends to imply that species' "lifespans" work like humans' lifespans do. But the analogy does seem to imply it. And in the FLI interview, he does seem to briefly imply that, though of course there he was speaking off the cuff.
I'm also not sure how important this point is, given that humans are very atypical anyway. But I thought it was worth noting in a shortform comment, especially as I expect that, in the wake of The Precipice being great, statements along these lines may be quoted regularly over the coming months.
MichaelA @ 2020-03-26T06:01 (+10)
My review of Tom Chivers' review of Toby Ord's The Precipice
I thought The Precipice was a fantastic book; I'd highly recommend it. And I agree with a lot about Chivers' review of it for The Spectator. I think Chivers captures a lot of the important points and nuances of the book, often with impressive brevity and accessibility for a general audience. (I've also heard good things about Chivers' own book.)
But there are three parts of Chivers' review that seem to me like they're somewhat un-nuanced, or overstate/oversimplify the case for certain things, or could come across as overly alarmist.
I think Ord is very careful to avoid such pitfalls in The Precipice, and I'd guess that falling into such pitfalls is an easy and common way for existential risk related outreach efforts to have less positive impacts than they otherwise could, or perhaps even backfire. I understand that a review gives one far less space to work with than a book, so I don't expect anywhere near the level of nuance and detail. But I think that overconfident or overdramatic statements of uncertain matters (for example) can still be avoided.
I'll now quote and comment on the specific parts of Chivers' review that led to that view of mine.
An alleged nuclear close call
Firstly, in my view, there are three flaws with the opening passage of the review:
Humanity has come startlingly close to destroying itself in the 75 or so years in which it has had the technological power to do so. Some of the stories are less well known than others. One, buried in Appendix D of Toby Ord’s splendid The Precipice, I had not heard, despite having written a book on a similar topic myself. During the Cuban Missile Crisis, a USAF captain in Okinawa received orders to launch nuclear missiles; he refused to do so, reasoning that the move to DEFCON 1, a war state, would have arrived first.
Not only that: he sent two men down the corridor to the next launch control centre with orders to shoot the lieutenant in charge there if he moved to launch without confirmation. If he had not, I probably would not be writing this — unless with a charred stick on a rock.
First issue: Toby Ord makes it clear that "the incident I shall describe has been disputed, so we cannot yet be sure whether it occurred." Ord notes that "others who claimed to have been present in the Okinawa missile bases at the time" have since challenged this account, although there is also "some circumstantial evidence" supporting the account. Ultimately, Ord concludes "In my view this alleged incident should be taken seriously, but until there is further confirmation, no one should rely on it in their thinking about close calls." I therefore think Chivers should've made it clear that this is a disputed story.
Second issue: My impression from the book is that, even in the account of the person claiming this story is true, the two men sent down the corridor did not turn out to be necessary to avert the launch. (That said, the book isn't explicit on the point, so I'm unsure.) Ord writes that Bassett "telephoned the Missile Operations Centre, asking the person who radioed the order to either give the DEFCON 1 order or issue a stand-down order. A stand-down order was quickly given and the danger was over." That is the end of Ord's retelling of the account itself (rather than discussion of the evidence for or against it).
Third issue: I think it's true that, if a nuclear launch had occurred in that scenario, a large-scale nuclear war probably would've occurred (though it's not guaranteed, and it's hard to say). And if that happened, it seems technically true that Chivers probably wouldn't have written this review. But I think that's primarily because history would've just unfolded very, very differently. Chivers seems to imply this is because civilization probably would've collapsed, and done so so severely that even technologies such as pencils would be lost and that they'd still be lost all these decades on (such that, if he was writing this review, he'd do so with "a charred stick on a rock").
This may seem like me taking a bit of throwaway rhetoric or hyperbole too seriously, and that may be so. But I think among the key takeaways of the book were vast uncertainties around whether certain events would actually lead to major catastrophes (e.g., would a launch lead to a full-scale nuclear war?), whether catastrophes would lead to civilizational collapse (e.g., how severe and long-lasting would the nuclear winter be, and how well would we adapt?), how severe collapses would be (e.g., to pre-industrial or pre-agricultural levels?), and how long-lasting collapses would be (from memory, Ord seems to think recovery is in fact fairly likely).
So I worry that a sentence like that one makes the book sound somewhat alarmist, doomsaying, and naive/simplistic, whereas in reality it seems to me quite nuanced and open about the arguments for why existential risk from certain sources may be "quite low" - and yet still extremely worth attending to, given the stakes.
To be fair, or to make things slightly stranger, Chivers does later say:
Perhaps surprisingly, [Ord] doesn’t think that nuclear war would have been an existential catastrophe. It might have been — a nuclear winter could have led to sufficiently dreadful collapse in agriculture to kill everyone — but it seems unlikely, given our understanding of physics and biology.
(Also, as an incredibly minor point, I think the relevant appendix was Appendix C rather than D. But maybe that was different in different editions or in an early version Chivers saw.)
"Numerically small"
Secondly, Chivers writes:
[Ord] points out that although the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small, the outcome of the latter scenario would be vastly worse, because it shuts down humanity’s future.
I don't recall Ord ever saying something like that the death of 1 percent of the population would be "numerically small". Ord very repeatedly emphasises and reminds the reader that something really can count as deeply or even unprecedentedly awful, and well worth expending resources to avoid, even if it's not an existential catastrophe. This seems to me a valuable thing to do, otherwise the x-risk community could easily be seen as coldly dismissive of any sub-existential catastrophes. (Plus, such catastrophes really are very bad and well worth expending resources to avoid - this is something I would've said anyway, but seems especially pertinent in the current pandemic.)
I think saying "the difference between a disaster that kills 99 per cent of us and one that kills 100 per cent would be numerically small" cuts against that goal, and again could paint Ord as more simplistic or extremist than he really is.
"Blowing ourselves up"
Finally (for the purpose of my critiques), Chivers writes:
We could live for a billion years on this planet, or billions more on millions of other planets, if we manage to avoid blowing ourselves up in the next century or so.
To me, "avoid blowing ourselves up" again sounds quite informal or naive or something like that. It doesn't leave me with the impression that the book will be a rigorous and nuanced treatment of the topic. Plus, Ord isn't primarily concerned with us "blowing ourselves up" - the specific risks he sees as the largest are unaligned AI, engineered pandemics, and "unforeseen anthropogenic risk".
And even in the case of nuclear war, Ord is quite clear that it's the nuclear winter that's the largest source of existential risk, rather than the explosions themselves (though of course the explosions are necessary for causing such a winter). In fact, Ord writes "While one often hears the claim that we have enough nuclear weapons to destroy the world many times over, this is loose talk." (And he explains why this is loose talk.)
So again, this seems like a case where Ord actively separates his clear-headed analysis of the risks from various naive, simplistic, alarmist ideas that are somewhat common among some segments of the public, but where Chivers' review makes it sound (at least to me) like the book will match those sorts of ideas.
All that said, I should again note that I thought the review did a lot right. In fact, I have no quibbles at all with anything from that last quote onwards.
Aaron Gertler @ 2020-03-27T01:11 (+5)
This was an excellent meta-review! Thanks for sharing it.
I agree that these little slips of language are important; they can easily compound into very stubborn memes. (I don't know whether the first person to propose a paperclip AI regrets it, but picking a different example seems like it could have had a meaningful impact on the field's progress.)
MichaelA @ 2020-03-30T02:07 (+1)
Agreed.
These seem to often be examples of hedge drift, and their potential consequences seem like examples of memetic downside risks.
MichaelA @ 2021-11-21T10:00 (+9)
I've made a database of AI safety/governance surveys & survey ideas. I'll copy the "READ ME" page below. Let me know if you'd like access to the database, if you'd suggest I make a more public version, or if you'd like to suggest things be added.
"This spreadsheet lists surveys & ideas for surveys that are very relevant to AI safety/governance, including surveys which are in progress, ideas for surveys, and published surveys. The intention is to make it easier for people to:
1. Find out about outputs or works-in-progress they might want to read (perhaps contacting the authors)
2. Find out about projects/ideas they might want to lead, collaborate on, or provide input to
3. Find out about projects/ideas that might fill a gap the person was otherwise considering trying to fill (i.e., reduce duplication of work)
I (Michael Aird) made this spreadsheet quite quickly. For now I'm only sharing it with people at Rethink Priorities, people at GovAI, and a couple of members of the EA community who I've spoken to and who are potentially interested in doing AI-related survey work.
I expect this spreadsheet misses many relevant things and that its structure/content could be improved (e.g., maybe it should be a Doc or an Airtable? Maybe some columns should be added/removed?). It might also make sense to have one version that's more private and another that's more public.
Please feel free to leave comments/suggestions about anything and to suggest I share this with particular people.
If you'd like access to a link shown in this spreadsheet that you don't have access to, let me know."
MichaelA @ 2021-09-22T09:27 (+9)
Collection of collections of resources relevant to (research) management, mentorship, training, etc.
(See the linked doc for the most up-to-date version of this.)
The scope of this doc is fairly broad and nebulous. This is not The Definitive Collection of collections of resources on these topics - it's just the relevant things that I (Michael Aird) happen to have made or know of.
- Management & mentoring - EA Forum
- Management-related books [shared] - me
- Meeting templates for mentors/managers [shared] - me
- Goal-setting templates or processes [shared] - me
- Readings and notes on how to do high-impact research - me
- Readings and notes on how to do high-quality, efficient research - me
- Readings and notes on how to write/communicate well - me
- Resources - Effective Thesis
- Tips On Doing Impactful Research - Effective Thesis
- SERI Research Proposal Generation Tips 2021 - SERI
- Research Training Programs - EA Forum Wiki
- Review of SRF 2019 - Max Daniel and/or Rose Hadshar of FHI
- Research project planning templates/resources [shared]
- Resource library of the Management Center
- This is a non-EA nonprofit that provides management training and resources to other nonprofits
- I and several other Rethink Priorities staff did their management training. I thought it was fairly useful (though with some flaws).
- Some quick reflections on mentoring from Max Daniel
Here are some things that are "internal" or that I don't have permission to share, but that I might be able to make shareable versions of, share after asking permission, or talk to people about, if someone is interested:
- Summary and commentary on the book Managing to Change the World - someone at RP
- "Management / org structure things RP could consider" - me
- "Aspects of CLR's SRF which RP's internship could draw on" - me
- My Anki cards on Getting Things Done - me
- "RSP Personal Review: Meta doc" - someone who was at FHI
- Performance review templates collected at the bottom of the FHI doc "Performance feedback - Meta doc" - someone who was at FHI
- Quick notes on management - someone at RP
Finally, here are my notes on relevant books (though each of these links isn't really a "collection"):
- Notes on Mochary's "The Great CEO Within" (2019)
- [more to be added later]
MichaelA @ 2021-01-14T02:51 (+9)
Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing to the Bulletin of the Atomic Scientists? Should more EAs consider that?
I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of:
- disseminating important ideas to key decision-makers and thereby improving their decisions
- either through the Bulletin articles themselves or through them allowing one to then talk individually with such decision-makers
- gaining good career capital for certain career paths
- e.g., later working in security-related roles in think tanks, NGOs, or governments
That said, I haven't thought about those claims much, and I'm definitely not sure that this is a better option than other options the relevant EAs have available.
I raise this in part because I might consider writing something to submit to the Bulletin myself at a later stage of my nuclear risk research.
RyanCarey @ 2021-01-14T04:46 (+6)
https://thebulletin.org/biography/andrew-snyder-beattie/
https://thebulletin.org/biography/gregory-lewis/
https://thebulletin.org/biography/max-tegmark/
MichaelA @ 2021-01-14T05:47 (+2)
Thanks for those links!
(I also realise now that I'd already seen and found useful Gregory Lewis's piece for the Bulletin, and had just forgotten that that's the publication it was in.)
MichaelA @ 2021-01-14T02:52 (+4)
Here's the Bulletin's page on writing for them. Some key excerpts:
Readers of the Bulletin of the Atomic Scientists are informed and intelligent; they include top policymakers, researchers, and opinion makers from more than 150 countries and a large contingent of smart non-experts who are interested in the Bulletin's mission. The Bulletin publishes articles written by the world's leading science and security experts, who explore the potential for terrible damage to societies from manmade technologies. We focus on ways to prevent catastrophe from the malign or accidental misuse of technology. Our primary coverage areas are nuclear risk, climate change, and other disruptive technologies that could pose an existential threat to humanity.
[...] The Bulletin is committed to serving our readers with a diverse array of perspectives from writers of all sorts of backgrounds. We especially welcome submissions from writers of historically underrepresented groups, including those who are Black, Latinx, Indigenous, people of color, and women. We also encourage the work of younger authors through the Voices of Tomorrow program.
[...] Magazine. The bimonthly magazine features long form articles that generally run from 2,000 to 4,000 words; it is not the word count but the voice and the angle of the pieces that make the magazine distinctive. Read it to understand what the distinction is - we want you to tackle tough topics, make strong arguments, and offer strong takeaways.
[...] Website. We accept opinion (800-1,300 words) and analysis pieces (1,000-3,000 words). Please do use the navigation on our home page to read a few of each of these types of pieces. They will be your best guide to Bulletin style and tone. Have a multimedia idea? Contact the editors directly and pitch them.
[...] Include your bio. The Bulletin is known for publishing the top experts in their respective fields. Please submit your professional biography so that we understand your expertise and what makes you the perfect author to write the piece you are pitching.
Peer review. The Bulletin is not a peer-reviewed journal; however, we do send unsolicited articles to colleagues for outside review. Be prepared to answer questions and to document your points - by way of hyperlinks for web pieces or in the form of footnotes for journal pieces.
[...] Do not submit a research paper. The Bulletin publishes high-concept, high-quality journalism, which is a different form than the research paper. One is not a better form than the other; a research paper is perfectly appropriate to a research journal. It just won't work with the Bulletin's format or audience. The Bulletin is its own publication, with long-established parameters, and the best way to gauge what will work for the Bulletin is to read the Bulletin. [Though I've been reading the Nuclear Notebook articles, and I'd say they're closer to research papers or white papers than to journalism. Maybe Nuclear Notebook is unusual in that respect?]
And here's the page on the Voices of Tomorrow feature:
In its Voices of Tomorrow feature, the Bulletin of the Atomic Scientists invites emerging scholars to submit essays, opinion pieces, and multimedia presentations addressing at least one of the Bulletin's core issues: nuclear risk, climate change, and threats from emerging technologies.
Beginning in 2015, editors will select one Voices of Tomorrow feature as winner of the Leonard M. Rieser Award; the author of that article will receive a $1,000 check plus a one-year subscription to the Bulletin's journal, in addition to the publication of their submissions.
[...] Submission process. Current students as well as recent graduates are encouraged to submit work. Essays and opinion pieces should not be longer than 2,000 words; video presentations should not exceed 5 minutes in playing time. Each entry must contain: the author's email address, phone number, short biography, and school affiliation. Submissions should not have been previously published.
Submissions should be sent to Bulletin Contributing Editor Dawn Stover at dstover@thebulletin.org; only one contribution at a time will be accepted per author.
MichaelA @ 2020-08-11T23:41 (+9)
The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:
- focus only on how the debate over patient philanthropy applies to longtermists
- generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)
They contrast this against the term "urgent longtermism", to describe the view that favours doing more donations and work sooner.
I think the terms "patient longtermism" and "urgent longtermism" are both useful. One reason I think "urgent longtermism" is useful is that it doesn't sound pejorative, whereas "impatient longtermism" would.
I suggest we also use three additional terms:
- Patient altruism
- Like "patient philanthropy" and unlike "patient longtermism", this term is cause-neutral.
- But like "patient longtermism" and unlike "patient philanthropy", this term clearly relates to both work and donations, not merely to donations.
- Discussions about "patient philanthropy" do often make some reference to optimal timing of work, but it's not usually central. Also, the term "philanthropy" is typically used just for donations.
- Urgent altruism
- Again, this is partly to avoid negative connotations, as is my next suggestion.
- Urgent philanthropy
MichaelDickens @ 2020-08-12T20:08 (+5)
I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having a pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)
MichaelA @ 2020-08-13T01:40 (+2)
Yes, Trammell writes:
We will call someone "patient" if he has low (including zero) pure time preference with respect to the welfare he creates by providing a good.
And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.
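To make that distinction a bit more concrete, here's a toy framing (this is just my illustration, not Trammell's actual model, and the additive decomposition is only a rough approximation):

```latex
% r_eff = effective rate at which a funder discounts future impact
% rho   = pure time preference ("patient" in Trammell's sense means rho is ~0)
% x     = annual probability the resources are lost or expropriated before use
% v     = annual rate of value drift
r_{\mathrm{eff}} \approx \rho + x + v
```

On this framing, being "patient" only pins down rho; someone with rho = 0 can still favour giving/working now if, say, r_eff is high or the present looks unusually "hingey" relative to investment returns.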
You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did, whether they should've used "patient philanthropy" as they arguably did*, and whether I should've proposed the term "patient altruism" for the position that we should give/work later rather than now (roughly speaking).
On the other hand, if we ignore Trammell's definition of the term, I think "patient X" does seem like a natural fit for the position that we should do X later, rather than now.
Do you have other ideas for terms to use in place of "patient"? Maybe "delayed"? (I'm definitely open to renaming the tag. Other people can as well.)
If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.
He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. [...]
And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.
Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse?
Or might it not drift from its original goals, eventually just serving the interest of its distant future trustees, rather than the noble pursuits you originally intended?
Or perhaps it could fail for the reverse reason, by staying true to your original vision - if that vision turns out to be as deeply morally mistaken as the Rhodes' Scholarships initial charter, which limited it to "white Christian men".
Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.
Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? [...]
- Should we have a mixed strategy, where some altruists are patient and others impatient?
This suggests to me that 80k is, at least in that post, taking "patient philanthropy" to refer not just to a low or zero pure time preference, but instead to a low or zero rate of discounting overall, or to a favouring of giving/working later rather than now.
MichaelA @ 2022-01-03T18:04 (+8)
UPDATE: This is now fully superseded by my 2022 Interested in EA/longtermist research careers? Here are my top recommended resources, and there's no reason to read this one.
Some resources I think might be useful to the kinds of people who apply for research roles at Rethink Priorities
This shortform expresses my personal opinions only.
These resources are taken from an email I sent to AI Governance & Strategy researcher/fellowship candidates who Rethink Priorities didn't make offers to but who got pretty far through our application process. These resources followed this text: "we would also like to mention some resources that we think might assist some of the people who applied for our roles in finding other positions that might be a good fit for them or in helping them boost their skills, plan their careers, and/or pick and pursue important research projects independently. We acknowledge that you likely already know about many of these things and that this list of resources isn't tailored to you specifically, but we hope some of it will be helpful anyway."
- Our [RP's] newsletter - keep informed about our work and learn about future job opportunities we will open up
- The Effective Altruism Newsletter - stay up to date with the effective altruism community and get updates about new job opportunities
- 80,000 Hours - lots of resources for having more social impact with your career. In particular, see their in-depth process and template for career planning.
- The 80,000 Hours Job Board - find listings of jobs relevant to effective altruism and the idea of doing good better, including jobs that could help in building skills and testing fit.
- One example of a role that we're excited about and think may be a fit for people who made it as far in our process as you did is the Centre for the Governance of AI's Fellowships.
- I recently put together a List of EA funding opportunities, and noted: "I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren't very high. (I think the same basic logic applies to job applications.)" These funding opportunities could be used to support a very wide range of activities, such as research, career exploration and planning, community building, and entrepreneurship.
- You could apply to EA-aligned research training programs. You can find a list of such programs here.
- One thing applicants interested in roles at organizations like ours can do to test, improve, and demonstrate their fit for such roles is to read and write independent research for the Effective Altruism Forum and get feedback from the community. If you're struggling to think of a good research topic, you could browse through this directory for open research questions.
- Another way to test, improve, and demonstrate fit for roles at organizations like ours may be to work toward becoming a top forecaster on Metaculus or Good Judgement Open
- I also compiled a lot of additional advice and resources along these lines here.
MichaelA @ 2022-03-26T09:54 (+2)
I'd now also suggest most people who are interested in AI governance and/or technical AI safety roles participate in the relevant track of the AGI Safety Fundamentals course (or read through the curriculum content if you see this at a time when you wouldn't be able to join the course for a while).
MichaelA @ 2021-02-22T01:20 (+8)
Some ideas for projects to improve the long-term future
In January, I spent ~1 hour trying to brainstorm relatively concrete ideas for projects that might help improve the long-term future. I later spent another ~1 hour editing what I came up with for this shortform. This shortform includes basically everything I came up with, not just a top selection, so not all of these ideas will be great. I'm also sure that my commentary misses some important points. But I thought it was worth sharing this list anyway.
The ideas vary in the extent to which the bottleneck(s) to executing them are the right person/people, buy-in from the right existing organisation, or funding.
I'm not expecting to execute these ideas in the near-term future myself, so if you think one of these ideas sounds promising and relevant to your skills, interests, etc., please feel very free to explore the idea further, to comment here, and/or to reach out to me to discuss it!
- Something along the lines of compiling a large set of potentially promising cause areas and interventions; doing rough Fermi estimates, cost-effectiveness analyses, and/or forecasts; thereby narrowing the list down; and then maybe gradually doing more extensive Fermi estimates, cost-effectiveness analyses, and/or forecasts
- This is somewhat similar to things that Ozzie Gooen, Nuño Sempere, and Charity Entrepreneurship have done or are doing
- Ozzie also discusses some similar ideas here
- So it'd probably be worth talking to them about this
- Something like a team of part-time paid forecasters, both to forecast on various important questions and to be "on-call" when it looks like a catastrophe or window of opportunity might be looming
- I think I got this idea from Linch Zhang, and it might be worth talking to him about it
- 80,000 Hours-style career reviews on things like diplomacy, arms control, international organisations, becoming a Russia/India/etc specialist
- Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI
- This might allow them to complete additional valuable projects
- This also might help the research or writing assistants build career capital and test fit for valuable roles
- Maybe BERI can already provide this?
- It's possible it's not worth being proactive about this, and instead waiting for people to decide they want an assistant and create a job ad for one. But I'd guess that some proactiveness would be useful (i.e., that there are cases where someone would benefit from such an assistant but hasn't thought of it, or doesn't think the overhead of a long search for one is worthwhile)
- See also this comment from someone who did this sort of role for Toby Ord
- Research or writing assistance for certain independent researchers?
- Ops assistance for orgs like FHI?
- But I think orgs like BERI and the Future of Humanity Foundation are already in this space
- Additional "Research Training Programs" like summer research fellowships, "Early Career Conference Programmes", internships, or similar
- Probably best if this is at existing orgs
- Could perhaps find an org that isn't doing this yet but has researchers who would be capable of providing valuable mentorship, suggest the idea to them, and be or find someone who can handle the organisational aspects
- Something like the Open Phil AI fellowship, but for another topic
- In particular, something that captures the good effects a "fellowship" can have, beyond the provision of funding (since there are already some sources of funding alone, such as the Long-Term Future Fund)
- A hub for longtermism-relevant research (or a narrower area, e.g. AI) outside of US and UK
- Found an organization/community similar to HIPE and/or APPGFG, but in countries other than the UK
- I'd guess it'd probably be easiest in countries where there is a substantial EA presence, and perhaps easier in smaller countries like Switzerland rather than in the US
- Why this might/might not be good:
- I don't know a huge amount about HIPE or APPGFG, but from my limited info on those orgs they seem valuable
- I'd guess that there's no major reason something similar to HIPE couldn't be successfully replicated in other countries, if we could find the right person/people
- In contrast, I'd guess that there might be more barriers to successfully replicating something like APPGFG
- E.g., most countries probably don't have an institution very similar to APPGs
- But I imagine something broadly similar could be replicated elsewhere
- Potential next steps:
- Talk to people involved in HIPE and APPGFG about whether they think these things could be replicated, how valuable they think that'd be, how they'd suggest it be done, what countries they'd suggest, and who they'd suggest talking to
- Talk to other EAs, especially outside of the UK, who are involved in politics, policy, and improving institutional decision-making
- Ask them for their thoughts, who they'd suggest reaching out to, and (in some cases) whether they might be interested in collaborating on this
- I also had some ideas for specific research or writing projects, but I'm not including them in this list
- That's partly because I might publish something more polished on that later
- It's mostly because people can check out A central directory for open research questions for a broader set of research project ideas
- See also Why you (yes, you) should post on the EA Forum
The views I expressed here are my own, and do not necessarily reflect the views of my employers.
Daniel_Eth @ 2021-02-23T07:02 (+5)
"Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI"
As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.
MichaelA @ 2021-08-04T09:53 (+7)
Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"
(This is related to the general topic of differential progress.)
(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)
- I think the ultimate answer to that question is really something like "Whichever option has better outcomes, given the specifics of the situation."
- I don't think it's almost always best either to stop the development or to shape how it's used.
- And I think we should view it in terms of consequences, not in terms of something like deontology or a doing vs allowing harm distinction.
- It might be the case that it's (say) 55-90% of the time better to do one approach or the other. But I don't know which way around that'd be, and I think it'd be better to focus on the details of the case.
- For this reason, I think it's sort of understandable and appropriate that the EA/longtermist community doesn't have a principled overall stance on this sort of thing.
- OTOH, it'd be nice to have something like a collection of considerations, heuristics, etc. that can then be applied, perhaps in a checklist-like manner, to the case at hand. And I'm not aware of such a thing. And that does seem like a failing of the EA/longtermist community.
- [Person] is writing a paper on differential technological development, and it probably makes a step in this direction, but mostly doesn't aim to do this (if I recall correctly from the draft).
- Some quick thoughts on things that could be included in that collection of considerations, heuristics, etc.:
- How much (if at all) will your action actually make it more likely that the tech is developed?
- (Or "developed before society is radically transformed for some other reason", to account for Bostrom's technological completion conjecture.)
- How much (if at all) will your action actually speed up when the tech is developed?
- How (if at all) will your action change the exact shape/nature of the resulting tech?
- E.g., maybe the same basic thing is developed, but with more safety features or in a way more conducive to guiding welfare interventions
- E.g., maybe your action highlights the potential military benefits of an AI thing and so leads to more development of militarily relevant features
- How (if at all) will your action change important aspects of the process by which the tech is developed?
- This can be relevant to e.g. AI safety
- E.g., we don't only care what the AI system is like, but also whether the development process has a race-like dynamic, or whether the development process is such that along the way powerful and dangerous AI may be released upon the world accidentally
- E.g., is a biotech thing being developed in such a way that makes lab leaks more likely?
- How (if at all) will your action change how the tech is deployed?
- How (if at all) will your action let you influence all the above things to the better via giving you "a seat at the table", or something like that, rather than via the action directly?
Small case study:
- Let's say an EA-aligned funder donates to an AI lab, and thereby gets some level of influence over them or an advisory role or something.
- And let's say it seems about equally likely that this lab's existence/work increases x-risk as that it decreases it.
- It might still be good for the world that the funder funds that lab, if:
- that doesn't really much change the lab's likelihood of existing or the speed of their work or whatever
- but it does give a very thoughtful EA a position of notable influence over them (which could then lead to more safety-conscious development, deployment, messaging, etc.)
MichaelA @ 2021-07-15T07:41 (+7)
Maybe someone should make ~1 Anki card each for lots of EA Wiki entries, then share that Anki deck on the Forum so others can use it?
Specifically, I suggest that someone:
- Read/skim many/most/all of the EA Wiki entries in the "Cause Areas" and "Other Concepts" sections
- Anki cards based on entries in the other sections (e.g., Organisations) would probably be less useful
- Make 1 or more Anki cards for many/most of those entries
- In many cases, these cards might take forms like "The long reflection refers to... [answer]"
- In many other cases, the cards could cover other insights, concepts, questions, etc. raised in the body of the entry
- Making such cards seems less worthwhile for cases in which either:
- the entry mainly exists as a tag (without itself having much content)
- the entry is about a quite well-known thing and doesn't really say much that's not well-known in its body (e.g., the International relations tag)
- Export the file for the resulting deck and share it on the Forum and maybe elsewhere
- Other people can then either use the whole deck or pick and choose which parts of the deck to use (e.g., deleting cards when they come up, if the person feels those cards aren't relevant to their interests and plans)
I think this could also be done gradually and/or by multiple people, rather than in one big batch by one person. It could also be done for the LessWrong Wiki.
If someone does make this deck, I would very likely use some/many of the cards myself and also promote the deck to a bunch of other people.
(I also currently feel that this would be a sufficiently useful action to have taken that I'd be inclined to reward the person with some token amount of my own money to signal my appreciation / to compensate them for their time / because me saying that now might incentivise them. I'd only do this if the cards the person makes actually seems good. Feel free to contact me if you want to discuss that.)
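(In case it's useful to whoever takes this on: the "make a shareable deck file" step could be semi-automated. Below is a minimal sketch using the genanki Python library; the example entry and its summary are just hypothetical placeholders, and the actual card text would of course need human judgement rather than being copied verbatim from the wiki.)

```python
import genanki

# Note type defining the card layout. The IDs are arbitrary fixed integers,
# as genanki recommends, so re-running the script updates the same deck/model.
ea_wiki_model = genanki.Model(
    1607392319,
    'EA Wiki concept',
    fields=[{'name': 'Entry'}, {'name': 'Summary'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': 'What does "{{Entry}}" refer to?',
        'afmt': '{{FrontSide}}<hr id="answer">{{Summary}}',
    }],
)

deck = genanki.Deck(2059400110, 'EA Wiki concepts')

# Hypothetical placeholder content; in practice this would be a hand-written
# list of (entry, summary) pairs for the wiki entries judged worth carding.
entries = [
    ('The long reflection',
     'A proposed period in which humanity tries to work out what is truly of '
     'value before taking potentially irreversible steps.'),
]

for name, summary in entries:
    deck.add_note(genanki.Note(model=ea_wiki_model, fields=[name, summary]))

# Writes a .apkg file that can be attached to a Forum post and imported into Anki.
genanki.Package(deck).write_to_file('ea_wiki_concepts.apkg')
```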
Pablo @ 2021-07-15T12:28 (+6)
Turning the EA Wiki into a (huge) Anki deck is on my list of "Someday/Maybe" tasks. I think it might be worth waiting a bit until the Wiki is in a more settled state, but otherwise I'm very much in favor of this idea.
There is an Anki deck for the old LW wiki. It's poorly formatted and too coarse-grained (one note per article), and some of the content is outdated, but I still find it useful, which suggests to me that a better deck of the EA Wiki would provide considerable value.
MichaelA @ 2021-07-15T07:41 (+2)
Why this might be worthwhile:
- The EA community has collected and developed a very large set of ideas that aren't widely known outside of EA, such that "getting up to speed" can take a similar amount of effort to a decent fraction of a bachelor's degree
- But the community is relatively small and new (compared to e.g. most academic fields), so we have relatively little in the way of textbooks, courses, summaries, etc.
- This means it can take a lot of effort and time to get up to speed, lots of EAs have substantial "gaps" in their "EA knowledge", lots of concepts are misinterpreted or conflated or misapplied, etc.
- The EA Wiki is a good step towards having good resources to help people get up to speed
- A bunch of research indicates retrieval practice, especially when spaced and interleaved, can improve long-term retention and can also help with things like application of concepts (not just memory)
- And Anki provides such spaced, interleaved retrieval practice
- I'm being lazy in not explaining the jargon or citing my sources, but you can find some explanation and sources here: Augmenting Long-term Memory
- If one person makes an Anki deck based on the EA Wiki entries, it can then be used and/or built on by other people, can be shared with participants in EA Fellowships, etc.
Possible reasons not to do this:
- "There's a lot of stuff it'd be useful for people to know that isn't on EA Wiki entries. Why not make Anki cards on those things instead? Isn't this a bit insular?"
- I think we can and should do both, rather than one or the other
- Same goes for having Anki cards based on EA sources vs Anki cards based on non-EA sources
- Personally, I'd guess ~25% of my Anki cards are based on EA sources, ~70% are based on non-EA sources but are about topics I see as important for EA reasons, and 5% are random personal life stuff
- "This seems pretty time-consuming"
- I think there are a lot of people in the EA community for whom engaging with the EA Wiki entries to the extent required to make this deck would be worthwhile just for themselves
- I also think there are even more people in the EA community for whom using all or a subset of these cards will be worthwhile
- (Though there are also of course people for whom these things aren't true)
- "Many of the entries probably won't actually be that well-suited to Anki cards, or aren't on very important things"
- Agreed
- But many will be
- The card-maker(s) can skip entries, and the card-users can delete some cards from their own copy of the deck
- "This seems like rote learning / indoctrination / stifling creativity / rah rah"
- I quite strongly feel that these sorts of concerns about things like Anki cards are often misguided, including in this case
- I can expand on that if anyone actually does feel worried about this idea for this reason
MichaelA @ 2021-01-05T09:29 (+7)
Why I think The Precipice might understate the significance of population ethics
tl;dr: In The Precipice, Toby Ord argues that some disagreements about population ethics don't substantially affect the case for prioritising existential risk reduction. I essentially agree with his conclusion, but I think one part of his argument is shaky/overstated.
This is a lightly edited version of some notes I wrote in early 2020. It's less polished, substantive, and important than most top-level posts I write. This does not capture my full views on population ethics or The Precipice. (I really liked the book overall.)
---
Ord writes:
Some of the more extreme approaches to this relatively new field of "population ethics" imply that there is no reason to avoid extinction stemming from consideration of future generations - it just doesn't matter whether these future people come into being or not.
[But] all but the most implausible of these views agree with the immense importance of saving future generations from other kinds of existential catastrophe, such as the irrevocable collapse of civilization. Since most things that threaten extinction threaten such a collapse too, there is not much practical difference.
I agree that even many views on population ethics which would say it doesn't matter whether future people get to come into being would agree that it's at least somewhat important to save future generations from at least some kinds of non-extinction existential catastrophe. (It's also the case that my preferred views on population ethics very strongly support prioritising existential risk reduction.)
But I think Ord overstates things here, perhaps considerably. There are three reasons I say this.
Reason 1: The size of the stakes matters. And even in person-affecting views where avoiding irrevocable collapse matters, it matters far less than in some non-person-affecting views.
People like Ord and I believe that existential risk reduction is not just important, but rather extremely important, and thus worth prioritising despite reasonable concerns about predictability and tractability. These beliefs are substantially influenced by the future's potential scale, duration, and quality, if we manage to avoid catastrophe (see, e.g., Ord's note 37 in chapter 8).
Ord deliberately moves away from relatively extreme / contrarian / counterintuitive versions of that sort of argument. For example, he argues that the probability of existential catastrophe in the coming century is not miniscule, and that there are a variety of reasons to believe particular interventions could reduce the risks.
But it would seem hard to argue that it's just as easy to predictably cause a significant reduction in existential risks as to predictably cause a substantial improvement in near-term global health and development or animal welfare. And I don't believe Ord tries to make that argument. So the potentially extreme stakes involved in existential risks still seem like an important part of his claims.
Let's say we accept some view on population ethics in which we don't care about the loss of value from things like extinction or not colonising the stars, but do care about the reduced quality of life of people who would exist in an irrevocable collapse scenario. Thus, as Ord suggests, we still acknowledge that there are some future-people-related reasons to reduce existential risks (rather than just other types of reasons, such as preventing death and suffering in the present generation or fulfilling duties to the past).
But those reasons would be about something like "the difference between the total/average quality of life that those people would have given irrevocable collapse and the total/average quality of life that the same people - or the same number of people, or something like that - would've had if not for the irrevocable collapse". That will entail far smaller stakes than "the difference in the total amount of value (e.g., aggregate wellbeing, or achievement, or whatever) given irrevocable collapse and the total amount of value given no existential catastrophe (so we colonise the stars, or fulfil our potential in some other way)".
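To put that very crudely (this is just a toy formalisation with obvious simplifications, not anything from the book):

```latex
% Rough future-people stakes of irrevocable collapse under a person-affecting framing:
V_{\text{person-affecting}} \approx N_{\text{collapse}} \cdot \left(\bar{w}_{\text{no collapse}} - \bar{w}_{\text{collapse}}\right)
% Rough stakes of existential catastrophe under a total-type framing:
V_{\text{total}} \approx N_{\text{all future people}} \cdot \bar{w}_{\text{future}}
```

Since the number of people who could ever live plausibly exceeds the number who would live under collapse by many orders of magnitude, the second quantity can dwarf the first even if the per-person welfare differences are comparable.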
So I think that adopting that sort of view on population ethics would make a major practical difference. It wouldn't render existential risk reduction valueless, but would substantially reduce its value, perhaps making it a lower priority than seemingly more predictable and tractable priorities such as near-term animal welfare.
Reason 2: In views which include the asymmetry principle, avoiding irrevocable collapse may not matter, as people in collapse scenarios may have net-positive lives.
In Ord's appendix on population ethics, he notes that some people have argued for:
an asymmetry principle: that adding new lives of positive wellbeing doesn't make an outcome better, but adding new lives with negative wellbeing does make it worse.
Views which include that sort of asymmetry principle would think it matters to prevent futures with large numbers of lives of negative wellbeing. Such views may thus indeed support existential risk reduction, but with a focus on dystopian futures and/or s-risks rather than extinction risk. (I think that that's what I'd support if my views on population ethics included that sort of asymmetry principle.)
But recall that Ord focuses on collapse rather than dystopia:
all but the most implausible of these views agree with the immense importance of saving future generations from other kinds of existential catastrophe, such as the irrevocable collapse of civilization. Since most things that threaten extinction threaten such a collapse too, there is not much practical difference.
I'd guess most of the lives in an irrevocable collapse scenario would be somewhere around neutral or somewhat positive wellbeing. (It does seem plausible that they'd tend to be of negative wellbeing, but also plausible that they'd be of similar or greater wellbeing levels than we currently have.)
Maybe Ord considers views which include the asymmetry principle to be among "the most implausible" of views on population ethics. But if so, that seems fairly contestable. And if not, then these views might actually not see preventing a sizeable portion of the possible irrevocable collapse scenarios as mattering at all. That would further reduce the extent to which those views would, overall, be inclined to prioritise existential risk reduction.
One could respond by saying "But couldn't many things that threaten extinction also threaten the sort of scenarios these views would care about preventing, such as s-risks?" I think that that's plausible, but the matter is a lot more complicated than in the case of irrevocable collapse. Here are a couple somewhat relevant posts:
- How Would Catastrophic Risks Affect Prospects for Compromise?
- The long-term significance of reducing global catastrophic risks
Reason 3: I'm very unsure whether most things which threaten extinction pose a similar risk of irrevocable collapse.
Irrevocable collapse would involve a very long period of neither going extinct nor fully recovering. But it seems plausible to me that, given a collapse, it's extremely likely that we'd relatively quickly - e.g., within thousands of years - either go extinct or fully recover. (My views on this are fuzzy and confused. See also Bostrom, 2013, section 2.2.)
If that is the case, that would substantially reduce the harm the collapse represented from the perspective of views on population ethics which don't care about extinction but would care about some collapse scenarios.
Two disclaimers
- Ord does surround the passage quoted above with caveats, and he dedicates an appendix to the topic.
- But I don't think the caveats or appendix really address this specific point I'm making.
- I'm merely critiquing this specific argument for why population ethics may not cast doubt on whether to prioritise existential risk reduction. I personally prioritise existential risk reduction, and think there are other strong arguments for doing so despite population ethics concerns.
- E.g., I see something like a "total view" as very plausible, and I see greater issues with person-affecting views than with a "total view".
- E.g., certain approaches to moral uncertainty will suggest the total view should be pretty dominant if it's at least seen as plausible (although some see this as problematic fanaticism).
You can see a list of all the things I've written that summarise, comment on, or take inspiration from parts of The Precipice here.
MichaelA @ 2020-04-07T02:06 (+7)
List of things I've written or may write that are relevant to The Precipice
Things I’ve written
- Some thoughts on Toby Ord’s existential risk estimates
- Database of existential risk estimates
- Clarifying existential risks and existential catastrophes
- Existential risks are not just about humanity
- Failures in technology forecasting? A reply to Ord and Yudkowsky
- What is existential security?
- Why I'm less optimistic than Toby Ord about New Zealand in nuclear winter, and maybe about collapse more generally
- Thoughts on Toby Ord’s policy & research recommendations
- "Toby Ord seems to imply that economic stagnation is clearly an existential risk factor. But I that we should actually be more uncertain about that"
- Why I think The Precipice might understate the significance of population ethics
- My Google Play review
- My review of Tom Chivers' review of Toby Ord's The Precipice
- If a typical mammalian species survives for ~1 million years, should a 200,000 year old species expect another 800,000 years, or another million years?
Upcoming posts
- What would it mean for humanity to protect its potential, but use it poorly?
- Arguments for and against Toby Ord's "grand strategy for humanity"
- Does protecting humanity's potential guarantee its fulfilment?
- A typology of strategies for influencing the future
Working titles of things I plan/vaguely hope to write
Note: If you might be interested in writing about similar ideas, feel very free to reach out to me. It’s very unlikely I’ll be able to write all of these posts by myself, so potentially we could collaborate, or I could just share my thoughts and notes with you and let you take it from there.
Update: It's now very unlikely that I'll get around to writing any of these things.
- The Terrible Funnel: Estimating odds of each step on the x-risk causal path (working title)
- The idea here would be to adapt something like the "Great Filter" or "Drake Equation" reasoning to estimating the probability of existential catastrophe, using how humanity has fared in prior events that passed or could've passed certain "steps" on certain causal chains to catastrophe. (A rough illustrative sketch of the kind of decomposition I mean is below, after this list.)
- E.g., even though we've never faced a pandemic involving a bioengineered pathogen, perhaps our experience with how many natural pathogens have moved from each "step" to the next one can inform what would likely happen if we did face a bioengineered pathogen, or if it did get to a pandemic level.
- This idea seems sort of implicit in the Precipice, but isn't really spelled out there. Also, as is probably obvious, I need to do more to organise my thoughts on it myself.
- This may include discussion of how Ord distinguishes natural and anthropogenic risks, and why the standard arguments for an upper bound for natural extinction risks don’t apply to natural pandemics. Or that might be a separate post.
- Developing - but not deploying - drastic backup plans (see my comment here)
- “Macrostrategy”: Attempted definitions and related concepts
- This would relate in part to Ord’s concept of “grand strategy for humanity”
- Collection of notes
- A post summarising the ideas of existential risk factors and existential security factors?
- I suspect I won’t end up writing this, but I think someone should. For one thing, it’d be good to have something people can reference/link to that explains that idea (sort of like the role EA Concepts serves).
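Here's the rough illustrative sketch of the "Terrible Funnel" style decomposition mentioned above (the risk pathway and the specific steps are just placeholders):

```latex
% Illustrative decomposition for one risk pathway; the steps and conditioning are placeholders.
P(\text{existential catastrophe via engineered pandemic})
  \approx P(\text{pathogen engineered})
  \times P(\text{release} \mid \text{engineered})
  \times P(\text{pandemic} \mid \text{release})
  \times P(\text{collapse or extinction} \mid \text{pandemic})
```

The hope would be that base rates from natural pathogens and past near misses could inform estimates for the later conditional steps, even though the first step is unprecedented.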
Some selected Precipice-related works by others
MichaelA @ 2020-02-27T07:59 (+7)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on Accidental harm. There is no longer any reason to read this shortform instead of that.
Collection of sources I've found that seem very relevant to the topic of downside risks/accidental harm
Information hazards and downside risks - Michael Aird (me), 2020
Ways people trying to do good accidentally make things worse, and how to avoid them - Rob Wiblin and Howie Lempel (for 80,000 Hours), 2018
How to Avoid Accidentally Having a Negative Impact with your Project - Max Dalton and Jonas Vollmer, 2018
Sources that seem somewhat relevant
https://en.wikipedia.org/wiki/Unintended_consequences (in particular, "Unexpected drawbacks" and "Perverse results", not "Unintended benefits")
(See also my lists of sources related to information hazards, differential progress, and the unilateralist's curse.)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
MichaelA @ 2019-12-22T05:35 (+7)
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is adapted from this comment, and I may develop it into a proper post later. I welcome feedback on whether it'd be worth doing so, as well as feedback more generally.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on topics related to the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017, and I haven't reviewed the literature since then).
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect of misinformation (and related areas) that (speculatively) might suggest downsides to some of EA's epistemic norms (e.g., just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong).
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
From this paper's abstract:
Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning--giving detailed information about the continued influence effect (CIE)--succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning--reminding people that facts are not always properly checked before information is disseminated--was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether. (emphasis added)
This seems to me to suggest some value in including "epistemic status" messages up front, but that this doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.
Here's a couple other seemingly relevant quotes from papers I read back then:
- "retractions [of misinformation] are less effective if the misinformation is congruent with a person’s relevant attitudes, in which case the retractions can even backfire [i.e., increase belief in the misinformation]." (source) (see also this source)
- "we randomly assigned 320 undergraduate participants to read a news article presenting either claims both for/against an autism-vaccine link [a "false balance"], link claims only, no-link claims only or non-health-related information. Participants who read the balanced article were less certain that vaccines are safe, more likely to believe experts were less certain that vaccines are safe and less likely to have their future children vaccinated. Results suggest that balancing conflicting views of the autism-vaccine controversy may lead readers to erroneously infer the state of expert knowledge regarding vaccine safety and negatively impact vaccine intentions." (emphasis added) (source)
- This seems relevant to norms around "steelmanning" and explaining reasons why one's own view may be inaccurate. Those overall seem like very good norms to me, especially given that EAs typically write about issues where there truly is far less consensus than there is around things like the autism-vaccine "controversy" or climate change. But it does seem those norms could perhaps lead to overweighting of the counterarguments when they're actually very weak, perhaps especially when communicating to wider publics who might read and consider posts less carefully than self-identifying EAs/rationalists would. But those are all my own speculative generalisations of the findings on "falsely balanced" coverage.
MichaelA @ 2022-03-16T01:26 (+6)
Project ideas / active grantmaking ideas I collected
Context: What follows is a copy of a doc I made quickly in June/July 2021. Someone suggested I make it into a Forum post. But I think there are other better project idea lists, and more coming soon. And these ideas aren't especially creative, ambitious, or valuable, and I don't want people to think that they should set their sights as low as I accidentally did here. And this is now somewhat outdated in some ways. So I'm making it just a shortform rather than a top-level post, and I'm not sure whether you should bother reading it.
There are some interesting comment threads in the doc version.
I'm using this doc to collect "active grantmaking" ideas (i.e., things I'd maybe want EA funders to proactively find a way to fund, rather than waiting for an application). I'm approaching this in a brainstorming spirit; I expect that some of these ideas are bad, and that most of the good ones aren't great and/or won't happen anyway (because any given idea is hard to set up). I mostly have the EAIF and LTFF in mind, but these ideas could also be relevant to AWF or other EA funders.
EDITED TO ADD: In retrospect, I wish I'd been more creative & ambitious when making this, and maybe fleshed things out more.
I've put the ideas in descending order by how excited I currently feel about them. Feel free to skim or skip around.
I'd appreciate comments on the ideas, especially on:
- How good or bad does the idea seem to you?
- Do you know of someone who might be able to do the things in the first section if EA Funds gave the person money?
- Do you know of something useful the orgs/people in the second section might be able to do with more money?
- Any thoughts on the best way to approach these things, downside risks to consider, people who might have leads or give good advice?
- Are there any ideas you'd particularly like / not like me to write about on the EA Forum? (By default, I might turn some subset of these ideas into one or more posts/shortforms later.)
(I also previously collected ideas here, and might integrate them into this doc or the active grantmaking ideas spreadsheet later.)
Projects it might be good for some org/person to do
Offering prizes for things we think should be done
Ideas from Intervention options for improving the EA-aligned research pipeline
Supporting "student journalism" that's EA-relevant and/or is by EAs
Giving EA researchers/orgs money to pay for external expert review of their work
Red teaming papers as an EA training exercise
New things kind-of like Our World in Data
Forecasting tournaments amplifying evaluation research
Subsidise creators of EA-aligned podcasts, videos, etc. to outsource some tasks (e.g., editing)
More expert elicitation, surveys, double cruxes, etc. on important topics
Ideas related to IGM-style expert panels
"Intro to EA Research Hackathon"
Subsidise/cover useful apps, software subscriptions, or similar
Orgs/people that might be able to turn money into impact somehow
EA research training program participants
Projects it might be good for some org/person to do
Offering prizes for things we think should be done
- I.e., saying what we want to be done, then paying people if and when they show us they've now done it, rather than paying people in advance for things they propose to do
- I'm guessing this has been discussed before and there are good reasons this hasn't been done, aside from just no one having thought about it much or committed to trying it?
- Since I have that guess, I haven't bothered thinking & writing more about why this might be good, might be bad, how to do it, etc. But I'll probably do that if it turns out my guess was probably wrong.
- See also Prize - EA Forum and Certificate of impact - EA Forum
Ideas from Intervention options for improving the EA-aligned research pipeline
- See also Buck's reactions
- None of these ideas are "shovel-ready" (e.g., I didn't list in that post people who could spearhead them), and some are fairly high-level/under-specified, but they could be starting points
- Some of the ideas seem more promising than others; I put them in descending order of promisingness
- Though I wasn't specifically thinking from the perspective of a grantmaker, and if I had been, my ordering might have differed a little
- Maybe I'll later generate more specific ideas from that list and add them to this doc
Covering the costs for EA people/orgs to go through non-EA management training courses, get books on management, or similar
(Michelle's post Training Bottlenecks in EA (professional skills) is relevant, but I haven't read it in a few months and so am probably reinventing/ignoring some wheels here.)
- Description:
- Who could the recipients of the courses/books be?
- EA people/orgs that are already doing management or may do so in future
- Organisers of research training programs, like SERI, CHERI, CERI
- They could then draw on the training when advising the external mentors they pair program participants with
- Currently it seems to me that fairly little guidance is given to the mentors, and the organisers are often ~uni students themselves (so they don't necessarily have any management experience)
- On the other hand, I'm not sure how well this "trickle-down training" approach would work, and management and mentorship are somewhat different anyway
- What course(s) should be paid for?
- I think The Management Center training would be fine
- This is what RP used
- It seemed good but flawed
- Saulius and Linch give their thoughts here
- I could share my notes if that'd be helpful
- But I haven't looked into options at all, and it's very plausible something else would be better
- Might be best to tell orgs they can choose whatever course they think is best and we'll likely pay for it
- What could covering the costs of relevant books look like?
- Could pay for hard copies, ebooks, or audiobooks, and just give them to orgs, without asking them first
- Could tell orgs we'd like to pay them to get this stuff, then let them apply for whatever form of it and whatever books they want
- What books should be used?
- There would be many reasonable book choices
- See e.g. my list of Management-related books [shared]
- Could also do this for other "work skills" or "org strategy" or whatever books, not just management books
- E.g., Deep Work
- Theory of change:
- EA is to some extent constrained by management, mentorship, and organisational capacity
- Non-EAs have already developed lots of useful resources on these topics
- People (including EAs) often don't access these resources by default
- They might simply not think to do so
- They may think to do so but then not do so due to inertia, whereas they would if the cost was reduced to 0 and someone had clearly signalled that this is worth doing or it was "already paid for"
- They may think to do so but then see the cost as prohibitive
- Courses do seem to me kind of "too expensive", even though if I really think about it I realise that the time cost of attending is probably a substantially bigger deal than the dollar cost, such that if it's worth the time it's probably worth the money too
- (Some courses aren't worth the time, though, of course)
- Possible downsides:
- Opportunity cost of the time spent in the courses or reading the books
- At least some parts of the advice provided by these courses, books, etc. are bad, so people would be learning at least some bad things
- Part of what I have in mind is that the epistemics of the sort of people who produce these resources seem typically worse than the epistemics of the EAs who would be recipients of this stuff (e.g., managers at orgs we think are worth supporting)
- But if people spend at least a little time looking into which courses/books to go with, and seek recommendations from other EAs, it seems very hard to believe that people would be left with worse beliefs than they'd otherwise have
- Maybe something like "Causing too much homogeneity in management practices"
- But I'm not actually sure how the homogeneity itself would be bad
- And this seems in any case avoidable by simply covering the costs of a range of courses, books, etc. and letting people pick for themselves
- Who could give advice?
- Michelle
- She wrote a relevant post that I forgot about till I'd mostly written this idea...
- People at RP
- Other EAs who run orgs or do management
- Probably a bunch of other people
- Who could be the project lead?
- People's thoughts on this:
- Misc:
- An alternative idea would be to get some EAs to produce some resources like this
- I think doing small versions of that alongside this main idea would be good
- E.g., encouraging somewhat more EA org leaders and managers to write up their learnings and tips in docs/posts and giving workshops now and then
- But I doubt that we should aim to have EAs actually produce courses and books
- The capable EAs' opportunity cost seems very high
- This is something a lot of non-EAs work on and have pretty good incentives to do a good job of
Covering the costs for EA people/orgs to go through non-EA courses on things like work skills and running orgs, get books on those things, or similar
- This is basically the same sort of idea as "Covering the costs for EA people/orgs to go through non-EA management training courses, get books on management, or similar", just with different topics focused on
- Also, since there's overlap between the topics and between the delivery mechanisms, a single project could cover both things at once
- This idea has basically the same description, theory of change, etc. as that one
- I think Ozzie has written in some places about somewhat similar things regarding being a good member of a board of advisors or running an org well
Supporting "student journalism" that's EA-relevant and/or is by EAs
- Description:
- Somehow using funding to make it easier, more common, or more successful for EA types to try out "student journalism".
- I'm not sure exactly what "student journalism" typically means, nor what I'd see as the best focus.
- What I have in mind is definitely not necessarily just "news"; it could also include things analogous to Guardian Long Reads, cultural essays, reviews, and listicles
- So maybe it's also "student-produced magazines", or "student-produced written content for a wide audience"
- I mostly have in mind university students, but it could make sense to do this for high school students too
- I think I'd want the writing to be done by people who are at least somewhat engaged with EA
- But it's possible it could be good to have it done by non-EAs who seem like the sort of people who'd become EAs if they think and write about EA-related topics for a while
- Could be getting students to contribute to existing publications or getting new publications to be created
- Avital: "do you mean you want students to contribute to their schools' existing student papers, or found new ones? Founding new ones can be rough because they don't come with natural readerships, so it is potentially a lot of work for low payoff"
- Me: "I think I'm open to either approach. I think maybe actually I wouldn't mind almost no readers, since I think most of the benefit is as a pipeline for future proper journalists? I want readers mostly inasmuch as that keeps the writers motivated, allows them more feedback, etc. Readership is also more directly useful, but I think the direct impacts are less important than the pipeline stuff."
- I'm not sure what the best way to use money for this is
- Could suggest that community builders try to encourage and support students in doing this, and provide funding for community builders who can do that
- Could fund some students to pay for their time and other expenses when trying this out themselves
- Could fund students to go through whatever training would be helpful
- Could grant to an existing student newspaper/magazine so that they start a new "vertical" or section or whatever focused on relevant topics? But I don't think they pay staff, so I don't think this makes sense?
- Could structure the process in a "contest" kind of way to promote the kind of writing you would like to see created
- (Angela's idea)
- There are already some other "prizes" for EA type writing, like the Forethought Institute's one, but I think they're usually more research focused, and maybe a more public-facing-writing version could be cool too.
- Owen: "Yeah I quite like that. Also if there's one which is specifically aimed at student journalists, they might feel more like they have a shot and therefore pull to enter"
- Theory of change:
- Possible direct impacts:
- EA movement building and more generally spreading useful ideas/info.
- Could help the relevant uni (or high school) group attract people, since they now seem more active and interesting, people are getting exposed to sympathetic treatments of relevant ideas, or people are just more likely to hear about them
- Could improve retention by "giving people something to do" (see also "Task Y")
- Possibly larger indirect impact: Serve as a pipeline for future Future Perfects, BBC Futures, etc.
- Help more young EAs test fit for journalism
- Help them build career capital
- Then they could start new verticals or whatever in established media outlets, or start new outlets, or just try to cover important topics well and with good angles in regular journalism jobs
- Possible downsides:
- Fin: "I guess the (most obvious) risk is that this dilutes the quality of the overall EA journo-sphere in a potentially harmful way. Worst case is that silly or wrong things are said and associated with EA / longtermism and cause harm / put people off?"
- Luca: "+1 on Fin's point. I think a large part of what makes Future Perfect and OWID great is that they are really careful and accurate -- not something I'd personally associate with student journalism"
- Me: Agreed. I'd probably want it to not be explicitly EA/longtermism branded, and instead just cover the same sort of ideas. Like how Future Perfect is.
- Also what I have in mind also includes things like student-produced magazines that have things more like Guardian Long Reads or essays on culture stuff, which I think have less of this downside than more news-style student journalism
- Target audience:
- Who could give advice?
- Sky
- Nicole
- Future Perfect people
- BBC Futures people
- Who could be the project lead?
- Some community builder?
- Someone who organises and advises community builders?
- E.g., Emma Abele?
- People's thoughts on this:
- Owen: "I think it could be a cool thing, but it's not obvious to me how to use money to cause it to happen"
- (Though this was before Angela's prize suggestion, which Owen liked)
Giving EA researchers/orgs money to pay for external expert review of their work
- Description:
- Providing money to pay for the sort of external expert review OP already gets for a bunch of their own work
- I think RP will be doing this in future
- The experts could be academics but donāt necessarily have to be
- It's probably best if the experts are non-EAs
- Reasons:
- Usually the people with the most expertise on a topic (even if not those with the best judgement etc.) are non-EAs
- Non-EAs' opportunity cost from our perspective is usually lower
- EAs often give review for free already
- Non-EAs bring a more distinct perspective and body of knowledge to bear, increasing the marginal value of their input compared to just what the author and maybe other reviewers thought
- But it could also make sense to make it easier to pay for EAs to review things in detail
- Partly based on the general principle that it often makes sense to pay for services that are valuable
- Partly because that could increase the chance that things are actually reviewed in detail, rather than there being lots of superficial reviews that felt to the reviewer like just supererogatory acts unrelated to their actual work
- Theory of change:
- Increase the quality of EA research outputs
- Increase the quality of EA researchers via these paid reviews working like high-quality feedback to them on what they got right and wrong and how they could change their approach in future
- Less important / more speculative:
- Field-building via causing non-EA experts to be repeatedly exposed to important EA research outputs; they may then become interested in engaging with such topics/work more
- Increasing the reputation/perceived quality of EA research outputs via the mere info that it was reviewed, in addition to any increase that occurs via actual increase in quality
- This seems most relevant for non-EA audiences
- This seems bad if the increase is to a higher level than the work warrants
- This seems good if the increase is up to the appropriate level, whereas otherwise the reputation of the work wouldāve been overly penalised for not having an impressive-sounding reviewer
- E.g., maybe economists would pay too little attention to a report about TAI and the economy unless it says it was reviewed by an economist, even if the methodology and conclusions were already sound
- Possible downsides:
- Slow down research outputs
- Especially in calendar time, due to waiting for the feedback
- Also in number of hours required, due to reading and reacting to the feedback
- Decrease the quality of research outputs
- E.g. via pushing outputs too far away from "speculations" or "weirdness" that were actually sound
- E.g., via making people put less effort into seeking or providing reviews from EAs than they otherwise would've
- Increase the reputation of some work to too high a level
- Open questions:
- How much would this cost per output?
- How much would these reviews improve output?
- At what stage in the research process should such reviews occur?
- Should EA Funds just provide unrestricted funding that can be used for this, provide funding restricted to this but with no more specific restrictions, provide funding for this for specific pieces of work or reviewers, or provide funding for specific reviewers to do this for whatever orgs ask them to do it?
- The last idea seems bad
- The rest seem reasonable
- How many orgs have work that's important enough to warrant this but aren't already paying for it?
- Who could give advice?
- Open Phil
- RP
- People in academia?
- Probably a bunch of other people?
- Who could be the project lead?
- N/A
- People's thoughts on this:
- Misc:
- I think the way to make this happen would be one or both of:
- Publicly communicate that EA Funds is in general open to paying for such things
- Actively encourage specific orgs/people to apply for funding for such things
- I don't think "making this happen" needs to be a project with a project lead
Red teaming papers as an EA training exercise
Buck's book review idea
New things kind-of like Our World in Data
See also the section on Our World in Data below.
- Description:
- Basically suggested by TJ in this thread:
- Theory of change:
- Possible downsides:
- Open questions:
- Is this better than just funding Our World in Data (in general or for specific activities)?
- Who could give advice?
- Who could be the project lead?
- People's thoughts on this:
- Misc:
Forecasting tournaments amplifying evaluation research
- Description:
- Someone wrote the following on the Submit grant suggestions to EA Funds form:
- "Pay Metaculus and Givewell to run a forecasting competition where Metaculus forecast GiveWell evaluations. Forecasters would guess the final value for the cost per life saved number that GiveWell would reach if they were to evaluate.
Slightly shakier, but GiveWell would then evaluate any which are more effective than GiveDirectly." - (That was the basis of me adding this idea to this doc.)
- This could be done for other evaluators too (e.g., ACE, HLI, maybe Nuno/QURI)
- This would probably require that GiveWell commit to evaluating a random subset of the interventions/orgs included (in addition to the ones that are forecasted to be promising)
- Theory of change:
- The idea-suggester wrote:
- "1) Cheap search. It would cheaply test if there is a way to cheaply recommend good candidates for GiveWell evaluation.
2) Wide search. It might find charities which make their way onto Givewell's top charities which would not have otherwise been seen.
3) Forecasting and evaluation. We would better understand if forecasting can predict charity evaluation. This might open up cheaper or wider evaluation opportunities in future."
- I think this would be an example of the more general idea of amplifying generalist research via forecasting
- Possible downsides:
- Open questions:
- Who could give advice?
- Metaculus
- Ozzie
- Linch
- Other forecasting people
- Who could be the project lead?
- Metaculus
- QURI?
- Other forecasting people?
- People's thoughts on this:
- Misc:
Subsidise creators of EA-aligned podcasts, videos, etc. to outsource some tasks (e.g., editing)
- Description:
- Types of tasks that might be outsourceable:
- Editing
- Transcript-making
- Animations
- Producing?
- Marketing?
- Description-writing?
- Who could be outsourced to:
- EAs who have more of a comparative advantage for these tasks than the creators of the EA-aligned content do
- This could be due to these people being more junior, less skilled at other activities, or more skilled at these up-for-outsourcing activities
- Non-EAs
- Creators this might be relevant to:
- (It could be worth looking at lists of EA-related podcasts and EA-related video sources to think about which ones should have some tasks outsourced but probably don't already. What I've listed is just off the top of my head.)
- Hear This Idea
- Rational Animation?
- Happier World?
- Theory of change:
- Free the creators up to create more
- Free the creators up to do more stuff on the side
- E.g., it would suck if Spencer Greenberg did his own podcast editing, even if that didn't reduce how rapidly he produced podcast eps
- E.g., Fin and Luca of Hear This Idea do their own editing, which presumably leaves them less time for the other RSP-related things they do (e.g., assisting Toby Ord, building AI policy career capital)
- Lead to higher quality content
- One could outsource to specialists
- Possible downsides:
- Open questions:
- How many things can be outsourced easily? How much time do they take up by default?
- How many creators are creating useful stuff or are on track to do so, arenāt yet outsourcing some tasks, but would do so if given more money?
- Who could give advice?
- Creators
- Who could be the project lead?
- N/A
- People's thoughts on this:
- Misc:
- I think the way to make this happen would be one or both of:
- Publicly communicate that EA Funds is in general open to paying for such things
- Actively encourage specific creators to apply for funding for such things
- I don't think "making this happen" needs to be a project with a project lead
More expert elicitation, surveys, double cruxes, etc. on important topics
- Description:
- This is a pretty vague/broad idea
- It was inspired by me liking Carlier et al.'s AI risk survey and thinking I might be keen to see more such things
- Could be diving deeper on some AI stuff
- Could be other x-risks
- Could be other topics
- Other examples of the kind of thing I mean:
- Database of existential risk estimates (or similar)
- Crucial questions for longtermists - EA Forum
- Clarifying some key hypotheses in AI alignment
- The in-progress AI project with many authors that's kind-of related to the above post
- Some work by Garfinkel
- Some work by Ngo
- Conversation on forecasting with Vaniver and Ozzie Gooen - EA Forum
- Theory of change:
- The things linked to/mentioned above provide some thoughts on why this could be useful
- Possible downsides:
- Some of these ideas require using the time of people with high opportunity cost (e.g., AI alignment researchers filling in surveys)
- Could lead to more anchoring / over-deference
- But I think actually this'd mostly push in the opposite direction by making it more obvious how much disagreement and uncertainty there is
- And when there really is a wide degree of agreement, e.g. on AI risk vs asteroid risk, this does seem like something I'd like more people to know about and defer to
- And some of these ideas involve getting at underlying rationales, cruxes, etc., not just bottom-line beliefs
- Open questions:
- How best to use money to create this?
- Prizes?
- Unrestricted funding to people who've done useful work like this in the past?
- Request for proposals that are along these lines?
- Who could give advice?
- Me
- Other people who did things like the above-mentioned projects
- Who could be the project lead?
- People's thoughts on this:
- Misc:
Ideas related to IGM-style expert panels
- I have a separate, short doc on this from ~March: Ideas related to IGM-style expert panels
- Linch's forecasting ideas doc contains a somewhat similar idea, so I'm deprioritising thinking more about this myself for now, but I might return to it later
"Intro to EA Research Hackathon"
- Peter's idea
- Original Slack message:
- "Random idea I haven't thought out but seems like something you two [Michael and Linch] would both like -- hosting an "Intro to EA Research Hackathon" (or "Intro to EA Research Festival" or another name) perhaps over four Saturdays or something, where feedback is given between each day, with the goal of making an EA Forum post. e.g.,
Day 1: Make a research agenda
Day 2: Refine your research agenda based on feedback
Day 3: Make some progress on your research
Day 4: Make a post on the EA Forum
We'd pair each person with a mentor and there would be a 1-2 week gap between the days to allow time for feedback to be given. People could still work on the project outside of the Hackathon days.
Perhaps we could select people through a mix of (a) inviting our top intern applicants that don't make it to the internship, (b) inviting some people who narrowly didn't make it to Stage 2 to do Stage 2, and (c) using our Stage 1 and 2 applications... we could also have a lottery component or something.
This would help new researchers practice making progress on important research and actually build them a precious credential to use for future research hiring. We'd also get great feedback on the quality of researchers that we could use for future hiring.
The idea is to open up something lower cost and higher volume to add even more than an internship, since even the internship is too competitive."
- I see pros and cons
- Discussed a bit in this thread: https://rethinkpriorities.slack.com/archives/G01EEQ179LP/p1619038497019100
Subsidise/cover useful apps, software subscriptions, or similar
- Description:
- What are some things we might want to subsidise/cover?
- Roam
- Asana
- Audible
- SavvyCal/Calendly
- Guesstimate
- Paid Slack accounts?
- Paid Airtable accounts?
- For whom might we subsidise/cover such things?
- "EAs in general"? How to define?
- Attendees of EA events?
- Members of core EA orgs?
- Participants in EA-aligned research training programs?
- Some other group?
- How would we do this?
- Pay the company and get a promo code
- How to distribute the promo code such that it's not overly exclusive but also doesn't end up e.g. on Reddit and then being used by lots of non-EAs?
- Pay EAs and trust they'll use the money this way
- Pay orgs, research training programs, etc. to get group plans for their members
- (I know Remmelt did this with Asana, so could find out what he did)
- How much of a subsidy might we want to provide?
- Partial or full?
- For how long?
- Theory of change:
- Boost people's productivity
- Make them more effective, intelligent, etc.
- E.g., Roam and Audible might do this
- Save people time they'd otherwise spend finding deals etc.
- Why would those impacts occur?
- It seems to me, and I've often heard it remarked, that people are weirdly averse to paying for app-like things or software subscriptions, relative to their willingness to pay for other things and to the amount of value these things provide
- Seems like this is partly just that people are used to the idea of these things being free or super cheap
- I think this leads to some/many EAs not using these things even though they'd be useful, using inferior alternatives/versions, or spending time trying to find ways to not pay or pay less
- E.g., until earlier this year, I was regularly spending a little time stopping and starting a few Audible subscriptions to save something like $15/month
- (That said, I wasn't spending much time)
- E.g., an RP intern spent a while thinking they should use Roam but not using it because they were trying to figure out if they could get a subsidised version
- Possible downsides:
- Open questions:
- Who could give advice?
- Remmelt?
- Ozzie?
- The Roam guy?
- Who could be the project lead?
- People's thoughts on this:
- Misc:
Template
- Description:
- Theory of change:
- Possible downsides:
- Open questions:
- Who could give advice?
- Who could be the project lead?
- People's thoughts on this:
- Misc:
Orgs/people that might be able to turn money into impact somehow
- This section focuses on ideas where I started with a thought like "This org/person has done useful stuff in the past / seems on track to do so in future. Maybe if we give them more money they'll do more useful stuff?"
- I've come up with some specific ideas for what I might want to suggest some of the orgs/people do, but really I might want interactions with most of them to start with asking for their thoughts on whether and how they could use more money to create more impact
- In some cases, the best move is probably simply to contact the people to tell them about EAIF/LTFF and suggest they apply, or to post such a message in a relevant Slack workspace or Facebook group or whatever
- In some cases, the best move might actually be to try to find some other person/org to try to replicate something like what this person/org did, or a variant of that
- E.g., finding someone else who can start another thing like Our World in Data, but with a different focus
Our World in Data
See also New things kind-of like Our World in Data above.
- What sort of things might I want them to use money for?
- Just expand/scale in general?
- Do something analogous to how Vox made a new "vertical" for Future Perfect?
- Like a new department or focus area
- Sketch of what this could look like:
- One of the buttons on the bar at the top of the OWID site says the name of some broad topic area relevant to EA, or something vaguer like Future Perfect
- The stuff in that area is more EA-relevant than average, and has a similar theme or angle or something. e.g., maybe it's all focused on things relevant to x-risks
- At least one OWID staff member is primarily focused on producing that sort of content.
- It's still the same sort of content as OWID's regular stuff.
- E.g., they don't have a finished page on nuclear weapons, and I don't think they have ones on bioweapons or AI. I want them to have that.
- We could either ask them to make those things specifically, or ask them to set up something like how Future Perfect works within Vox that will regularly produce that sort of thing.
- Do work on specific topics?
- Examples:
- AI
- Nuclear weapons
- They only have "a preliminary collection of materials" on nuclear weapons
- Why do I think they might be able to do useful things with money?
- Possible downsides:
- Open questions:
- What sorts of restricted funding, advice, or encouragement would they be open to?
- On the 80k podcast, Roser indicated they much preferred people to give OWID unrestricted funding and let OWID use their own judgement
- And I got the impression that maybe in general they might not be open to restricted funding
- But maybe they'd be more open to it from EA sources when we do have a really good rationale and it roughly aligns with OWID's own vision
- Is this better than trying to facilitate the creation of new things kind-of like Our World in Data?
- Who at this org might be good to contact about this?
- Luca mentioned edouard@ourworldindata.org
- https://edomt.github.io/about/
- Luca: "I talked with Edouard Mathieu (from OWID) about this, and know he's thinking about what data is relevant for longtermism (my impression from him is that it seems quite hard for certain EA topics since lots of the worries are unprecedented and thus don't really have data on them)"
- Who else could give advice on this?
- People's thoughts on this:
- Misc:
EA research training program participants
- E.g., SERI fellows, RP interns, CHERI fellows, LPP fellows
- I'll post an encouragement in the RP Slack in July suggesting the RP interns consider applying for funding
Template
- What sort of things might I want them to use money for?
- Why do I think they might be able to do useful things with money?
- Theory of change:
- Possible downsides:
- Open questions:
- Who at this org might be good to contact about this?
- Who else could give advice on this?
- People's thoughts on this:
- Misc:
MichaelA @ 2021-04-12T09:25 (+6)
Bottom line up front: I think it'd be best for longtermists to default to using the more inclusive term "authoritarianism" rather than "totalitarianism", except when a person has a specific reason to focus on totalitarianism in particular.
I have the impression that EAs/longtermists have often focused more on "totalitarianism" than on "authoritarianism", or have used the terms as if they were somewhat interchangeable. (E.g., I think I did both of those things myself in the past.)
But my understanding is that political scientists typically consider totalitarianism to be a relatively extreme subtype of authoritarianism (see, e.g., Wikipedia). And it's not obvious to me that, from a longtermist perspective, totalitarianism is a bigger issue than other types of authoritarian regime. (Essentially, I'd guess that totalitarianism would have worse effects than other types of authoritarianism, but that it's less likely to arise in the first place.)
To provide a bit more of a sense of what I mean and why I say this, here's a relevant section of a research agenda I recently drafted:
- Longtermism-relevant typology and harms of authoritarianism
- What is the most useful way for longtermists to carve up the space of possible types of authoritarian political systems (or perhaps political systems more broadly, or political systems other than full liberal democracies)? What terms should we be using?
- Which types of authoritarian political system should we be most concerned about?
- What are the main ways in which each type of authoritarian political system could reduce (or increase) the expected value of the long-term future?
- What are the main pathways by which each type of authoritarian political system could reduce (or increase) the expected value of the long-term future?
- E.g., increasing the rate or severity of armed conflict; reducing the chance that humanity has a successful long reflection; increasing the chances of an unrecoverable dystopia.
- All things considered, how large does the existential risk from global, stable authoritarianism seem to be?
- All things considered, how large of an existential risk factor does authoritarianism seem to be?
MichaelA @ 2020-11-15T12:17 (+6)
Collection of sources relevant to impact certificates/impact purchases/similar
Certificates of impact - Paul Christiano, 2014
The impact purchase - Paul Christiano and Katja Grace, ~2015 (the whole site is relevant, not just the home page)
The Case for Impact Purchase | Part 1 - Linda Linsefors, 2020
Making Impact Purchases Viable - casebash, 2020
Plan for Impact Certificate MVP - lifelonglearner, 2020
Impact Prizes as an alternative to Certificates of Impact - Ozzie Gooen, 2019
Altruistic equity allocation - Paul Christiano, 2019
Social impact bond - Wikipedia (highlighted as relevant by Toby Ord)
Health Impact Fund - Wikipedia (highlighted as relevant by Toby Ord)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment. I also may create a tag for relevant posts.
schethik @ 2020-12-07T21:55 (+3)
The Health Impact Fund (cited above by MichaelA) is an implementation of a broader idea outlined by Dr. Aidan Hollis here: An Efficient Reward System for Pharmaceutical Innovation. Hollis' paper, as I understand it, proposes reforming the patent system such that innovations would be rewarded by government payouts (based on impact metrics, e.g. QALYs) rather than monopoly profit/rent. The Health Impact Fund, an NGO, is meant to work alongside patents (for now) and is intended to prove that the broader concept outlined in the paper can work.
A friend and I are working on further broadening this proposal outlined by Dr. Hollis. Essentially, I believe this type of innovation incentive could be applied to other areas with easily measurable impact (e.g. energy, clean protein and agricultural innovations via a "carbon emissions saved" metric).
We'd love to collaborate with anyone else interested (feel free to message me).
EdoArad @ 2021-06-13T07:22 (+2)
Hey schethik, did you make progess with this?
schethik @ 2022-04-17T21:52 (+1)
@EdoArad
Summary: The broad concept that Hollis' paper proposes ("outcome-based financing") has already been applied to several other areas such as reducing homelessness, improving specific health outcomes, etc. Recently, McKinsey, Meta, and a few others agreed to spend $925m to fund a similar mechanism to incentivize carbon capture technology innovation. Seems like there's lots of interest in expanding this type of financing model from big funders. Maybe something for the EA community to become more engaged with since there seems to be an appetite.
More details: As I understand it, Hollis' paper's proposal fits into a broader concept known as "outcome-based financing". The space is much more developed than I had thought when I wrote this previous comment. Two primary outcome-based financing models exist -- pay-for-success ("PFS") contracts (also known as social impact bonds) and advanced market commitments ("AMCs"). Hollis' paper (from 2004) describes an application of PFS contracts. Both, PFS contracts and AMCs, are already applied to several industries including health and clean energy.
Definitions:
- PFS rewards innovators based on some per unit metric (e.g., QALYs per drug sold in Hollis' example).
- AMCs reward innovators in a pre-specified lump-sum fashion (e.g., the WHO, World Bank, a few countries, and the Bill and Melinda Gates Foundation funded a $1.5 billion AMC for entities that could create a vaccine for pneumococcal diseases).
Real-world Examples:
- PFS
- Here's a link to Oxford's PFS database (~200 projects / ~$500m since the concept was formalized in 2010). PFS contracts are used most commonly for reducing prison rates, improving health outcomes (in developed and developing countries), reducing homelessness, and upskilling labor. Check out the database for more details.
- Hollis' org is trying to set up a clean energy PFS fund -- seems promising, but I think doing this in cleantech is extra tricky.
- I've been engaged with a group that's trying to get funding to do this for a specific pharmaceutical application (see Crowd Funded Cures).
- AMCs are less common. However, last week McKinsey, Stripe, Meta, and a few others decided to finance a $925m carbon capture utilization and sequestration AMC.
Seems like there's a lot of momentum for outcome-based financing. Perhaps, the EA community should become more directly engaged in promoting this since it seems tractable.
EdoArad @ 2022-04-18T12:08 (+3)
Thank you!! It'd be great if you want to write it as a top-level post, to get more visibility and to be more easily indexable, or maybe add something to this wiki page.
Crowd Funded Cures seems like an amazing initiative, wish you all the best!
MichaelA @ 2020-09-04T08:48 (+6)
If anyone reading this has read anything I've written on the EA Forum or LessWrong, I'd really appreciate you taking this brief, anonymous survey. Your feedback is useful whether your opinion of my work is positive, mixed, lukewarm, meh, or negative.
And remember what mama always said: If you've got nothing nice to say, self-selecting out of the sample for that reason will just totally bias Michael's impact survey.
(If you're interested in more info on why I'm running this survey and some thoughts on whether other people should do similar, I give that here.)
MichaelA @ 2021-02-17T07:29 (+5)
Preferences for the long-term future [an abandoned research idea]
Note: This is a slightly edited excerpt from my 2019 application to the FHI Research Scholars Program.[1] I'm unsure how useful this idea is. But twice this week I felt it'd be slightly useful to share this idea with a particular person, so I figured I may as well make a shortform of it.
Efforts to benefit the long-term future would likely gain from better understanding what we should steer towards, not merely what we should steer away from. This could allow more targeted actions with better chances of securing highly positive futures (not just avoiding existential catastrophes). It could also help us avoid negative futures that may not appear negative when superficially considered in advance. Finally, such positive visions of the future could facilitate cooperation and mitigate potential risks from competition (see Dafoe, 2018 on "AI Ideal Governance"). Researchers have begun outlining particular possible futures, arguing for or against them, and surveying people's preferences for them. It'd be valuable to conduct similar projects (via online surveys) that address several limitations of prior efforts.
First, these projects should provide relatively detailed portrayals of the potential futures under consideration. This could be done using summaries of scenarios richly imagined in existing sources (e.g., Tegmark's Life 3.0, Hanson's Age of Em) or generated during the "world-building" efforts to be conducted at the Augmented Intelligence Summit. This could address people's apparent tendency to be repelled by descriptions of futures that simplistically maximise things they claim to intrinsically value while stripping away things they don't. It could also allow for quantitative and qualitative feedback on these scenarios and various elements of them. People may find it easier to critique and build upon presented scenarios than to imagine ideal scenarios from scratch.
Second, these projects should include large, representative, cross-national samples. Existing research has typically included only small samples which often differ greatly from the general population. This doesn't fully capture the three above-mentioned benefits of efforts to understand what futures we actually want.
Third, experimental manipulations could be embedded within the surveys to explore the impact of different framings, different information, and different arguments, partly to reveal how fragile peopleās preferences are.
It would be useful to also similarly survey medium-term-relevant preferences (e.g., regarding institutions for managing adaptations to increasing AI capabilities; Dafoe, 2018).
One concern with this idea is that the long-term future may be so radically unfamiliar and unpredictable that any information regarding people's present preferences for it would be irrelevant to scenarios that are actually plausible. Another concern is that present preferences may not be worth following anyway, as they may reflect intuitions that make sense in our current environment but wouldn't in radically different future environments. They may also not be worth following if issues like framing effects and scope neglect become particularly impactful when evaluating such unfamiliar and astronomical options.
[1] I wrote this application when I was very new to EA and I was somewhat grasping at straws to come up with longtermism-relevant research ideas that would make use of my psychology degree.
MichaelA @ 2020-04-08T08:51 (+5)
Collection of ways of classifying existential risk pathways/mechanisms
Each of the following works show or can be read as showing a different model/classification scheme/taxonomy:
- Defence in Depth Against Human Extinction:Prevention, Response, Resilience, and Why They All Matter - Cotton-Barratt, Daniel, and Sandberg, 2020
- The same model is also discussed in Toby Ord's The Precipice.
- Cotton-Barratt also discusses this model, and rationales for building such models, on the 80,000 Hours podcast.
- Classifying global catastrophic risks - Avin et al., 2018
- Causal diagrams of the paths to existential catastrophe - Michael Aird, 2020
- Conflict of interest statement: I am the aforementioned human.
- This might not quite "belong" in this list. But one could classify risks by which of the different "paths" they might follow (e.g., those that would vs wouldn't "pass through" a distinct collapse stage).
- Typology of human extinction risks - Alexey Turchin, ~2015
Personally, I think the model/classification scheme in Defence in Depth is probably the most useful. But I think at least a quick skim of the above sources is useful; I think they each provide an additional useful angle or tool for thought.
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
Wait, exactly what are you actually collecting here?
The scope of this collection is probably best revealed by checking out the above sources.
But to further clarify, here are two things I don't mean, which aren't included in the scope:
- Classifications into things like "AI risk vs biorisk", or "natural vs anthropogenic"
- Such categorisation schemes are clearly very important, but they're also well-established and you probably don't need a list of sources that show them.
- Classifications into different "types of catastrophe", such as Ord's distinction between extinction, unrecoverable collapse, and unrecoverable dystopia
- This is also very important, and maybe I should make such a collection at some point, but it's a separate matter to this.
MichaelA @ 2020-03-29T06:48 (+5)
What are the implications of the offence-defence balance for trajectories of violence?
Questions: Is a change in the offence-defence balance part of why interstate (and intrastate?) conflict appears to have become less common? Does this have implications for the likelihood and trajectories of conflict in future (and perhaps by extension x-risks)?
Epistemic status: This post is unpolished, un-researched, and quickly written. I haven't looked into whether existing work has already explored questions like these; if you know of any such work, please comment to point me to it.
Background/elaboration: Pinker argues in The Better Angels of Our Nature that many types of violence have declined considerably over history. I'm pretty sure he notes that these trends are neither obviously ephemeral nor inevitable. But the book, and other research pointing in similar directions, seems to me (and I believe others?) to at least weakly support the ideas that:
- if we avoid an existential catastrophe, things will generally continue to get better
- apart from the potential destabilising effects of technology, conflict seems to be trending downwards, somewhat reducing the risks of e.g. great power war, and by extension e.g. malicious use of AI (though of course a partial reduction in risks wouldn't necessarily mean we should ignore the risks)
But How Does the Offense-Defense Balance Scale? (by Garfinkel and Dafoe, of the Center for the Governance of AI; summary here) says:
It is well-understood that technological progress can impact offense-defense balances. In fact, perhaps the primary motivation for developing the concept has been to understand the distinctions between different eras of military technology.
For instance, European powers’ failure to predict the grueling attrition warfare that would characterize much of the First World War is often attributed to their failure to recognize that new technologies, such as machine guns and barbed wire, had shifted the European offense-defense balance for conquest significantly toward defense.
And:
holding force sizes fixed, the conventional wisdom holds that a conflict with mid-nineteenth century technology could be expected to produce a better outcome for the attacker than a conflict with early twentieth century technology. See, for instance, Van Evera, ‘Offense, Defense, and the Causes of War’.
The paper tries to use these sorts of ideas to explore how emerging technologies will affect trajectories, likelihood, etc. of conflict. E.g., the very first sentence is: "The offense-defense balance is a central concept for understanding the international security implications of new technologies."
But it occurs to me that one could also do historical analysis of just how much these effects have played a role in the sort of trends Pinker notes. From memory, I don't think Pinker discusses this possible factor in those trends. If this factor played a major role, then perhaps those trends are substantially dependent on something "we" haven't been thinking about as much - perhaps we've wondered about whether the factors Pinker discusses will continue, whereas they're less necessary and less sufficient than we thought for the overall trend (decline in violence/interstate conflict) that we really care about.
And at a guess, that might mean that that trend is more fragile or "conditional" than we might've thought. It might mean that we really really can't rely on that "background trend" continuing, or at least somewhat offsetting the potentially destabilising effects of new tech - perhaps a lot of the trend, or the last century or two of it, was largely about how tech changed things, so if the way tech changes things changes, the trend could very easily reverse entirely.
I'm not at all sure about any of that, but it seems it would be important and interesting to explore. Hopefully someone already has, in which case I'd appreciate someone pointing me to that exploration.
(Also note that what the implications of a given offence-defence balance even are is apparently somewhat complicated/debatable matter. Eg., Garfinkel and Dafoe write: "While some hold that shifts toward offense-dominance obviously favor conflict and arms racing, this position has been challenged on a number of grounds. It has even been suggested that shifts toward offense-dominance can increase stability in a number of cases.")
MichaelA @ 2020-02-24T08:53 (+5)
Update in April 2021: This shortform is now superseded by the EA Wiki entry on the Unilateralist's curse. There is no longer any reason to read this shortform instead of that.
Collection of all prior work I've found that seemed substantially relevant to the unilateralist’s curse
Unilateralist's curse [EA Concepts]
Horsepox synthesis: A case of the unilateralist's curse? [Lewis] (usefully connects the curse to other factors)
The Unilateralist's Curse and the Case for a Principle of Conformity [Bostrom et al.’s original paper]
Hard-to-reverse decisions destroy option value [CEA]
Framing issues with the unilateralist's curse - Linch, 2020
Somewhat less directly relevant
Managing risk in the EA policy space [EA Forum] (touches briefly on the curse)
Ways people trying to do good accidentally make things worse, and how to avoid them [80k] (only one section on the curse)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
MichaelA @ 2023-01-03T08:43 (+4)
Types of downside risks of longtermism-relevant policy, field-building, and comms work [quick notes]
I wrote this quickly, as part of a set of quickly written things I wanted to share with a few Cambridge Existential Risk Initiative fellows. This is mostly aggregating ideas that are already floating around. The doc version of this shortform is here, and I'll probably occasionally update that but not this.
"Hereās my quick list of what seem to me like the main downside risks of longtermism-relevant policy work, field-building (esp. in new areas), and large-scale communications.
- Locking in bad policies
- Information hazards (primarily attention hazards)
- Advancing some risky R&D areas (e.g., some AI hardware things, some biotech) via things other than infohazards
- e.g., via providing better resources for upskilling in some areas, or via making some areas seem more exciting
- Polarizing / making partisan some important policies, ideas, or communities
- Making a bad first impression in some communities / poisoning the well
- Causing some sticky yet suboptimal framings or memes to become prominent
- Ways they could be suboptimal: inaccurate, misleading, focusing attention on the wrong things, non-appealing
- By "sticky" I mean that, once these framings/memes are prominent, it's hard to change that
- Drawing more attention/players to some topics, and thereby making it less the case that we're operating in a niche field and can have an outsized influence
- See also https://www.overcomingbias.com/2019/03/tug-sideways.html
- This is partly about actors with unusually bad/selfish intentions or high recklessness, but also about anyone without unusually good intentions, epistemics, etc.
Feel free to let me know if you're not sure what I mean by any of these or if you think you and me chatting more about these things seems worthwhile.
Also bear in mind the unilateralist's curse.
None of this means people shouldn't do policy stuff or large-scale communications. Definitely some policy stuff should happen already, and over time more should happen. These are just things to be aware of so you can avoid doing bad things and so you can tweak net positive things to be even more net positive by patching the downsides.
See also Hard-to-reverse decisions destroy option value and Adding important nuances to "preserve option value" arguments"
MichaelA @ 2023-01-03T08:46 (+3)
Sometime after writing this, I saw Asya Bergal wrote an overlapping list of downsides here:
"I do think projects interacting with policymakers have substantial room for downside, including:
- Pushing policies that are harmful
- Making key issues partisan
- Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
- "Taking up the space" such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project"
MichaelA @ 2022-12-29T19:11 (+4)
Often proceed gradually toward soliciting forecasts and/or doing expert surveys
tl;dr: I think it's often good to have a pipeline from untargeted thinking/discussion that stumbles upon important topics, to targeted thinking/discussion of a given important topic, to expert interviews on that topic, to soliciting quantitive forecasts / doing large expert surveys.
I wrote this quickly. I think the core ideas are useful but I imagine they're already familiar to e.g. many people with experience making surveys.[1] I'm not personally aware of an existing writeup on this and didn't bother searching for one, but please comment if you know of one!
Introduction
Let's say you wanna get a better understanding of something. If you know exactly and in detail what it is that you want to get a better understanding of, two tools that can be very useful are forecasts and expert surveys. More specifically, it can be very useful to generate well-operationalized quantitative or fixed-choice questions and then get those questions answered by a large number of people with relevant expertise and/or good forecasting track records.
But it's probably best to see that as an end point, rather than jumping to it too soon, for two reasons:
- Getting responses is costly
- Getting a lot of people with relevant expertise and/or good forecasting track records to answer your questions probably requires significant effort or money.
- It may also involve substantial opportunity cost, if those people are working on important, net-positive things.
- Generating questions is hard
- I've both done and observed a decent amount of forecasting question writing and survey design. It seems to me that it's harder to actually do well than most people would probably expect, and that people often don't realise they haven't done it well until after they get some feedback or some answers.
- One difficulty is having even a rough sense of what it's best to ask about.
- Another difficulty is figuring out precisely what to ask about and phrasing that very clearly, such that respondents can easily understand your question, they interpret it how you wanted, and the question covers and captures all of the relevant & useful thoughts they have to share.
- This often requires/warrants a lot of thought, multiple rounds of feedback, and multiple rounds of testing on an initial batch of respondents.
So if you jump to making forecasting questions or surveys too early, you may:
- waste a lot of your or other people's time/money on unimportant topics/questions
- get responses that are confusing or misleading since the question phrasings were unclear
- fail to hear a lot of the most interesting things people had to share, since you didn't ask about those things and your questions had precise scopes
...especially because forecasting questions and surveys are typically "launched" to lots of people at once, so you may not be able to or think to adapt your questions/approach in light of the first few responses, even if the first few give you reason to do so.
The pipeline I propose for mitigating those issues
(Note: The boundaries between these "steps" are fuzzy. It probably often makes sense to jump back and forth to some extent. It probably also often makes sense to be at different stages at the same time for different subtopics/questions within a broad topic.)
- Untargeted thinking/discussion
- I.e., thinking/discussions/writing/research/whatever that either roams through many topics, or is fairly focused but not focused on the topic that this instance of "the pipeline" will end up focused on
- Sometimes this stumbles upon a new (to you) topic, or seems to suggest an already-noticed topic seems worth prioritizing further thought on
- Advantage: Very unconstrained; could stumble upon many things that you haven't already realised are worth prioritizing.
- Targeted thinking/discussion of a given important-seeming topic
- This is still unconstrained in its precise focus or its method, but now constrained to a particular broad topic.
- Advantage: Can go deeper on that topic, while still retaining flexibility regarding what the best scope, most important subquestions, etc. are
- Expert interviews on that topic
- Similar to the above "step", but now with a clearer sense of what questions you're asking and with more active effort to talk to experts specifically.
- Within this step, you might want to move through a pipeline with a similar rationale to the overall pipeline, moving from (a) talking in a fairly unstructured way to people with only moderate expertise and opportunity cost to (b) following a specified and carefully considered interview protocol in interviews with the very best experts to talk to on this topic.
- (b) could even essentially be a survey delivered verbally but with occasional unplanned follow-up questions based on what respondents said.
- Advantage: Get well-founded thoughts on well-considered questions with relatively low cost to these people's fairly scarce time.
- Soliciting quantitative forecasts and/or running expert surveys
- It may often be worth doing both.
- Probably often with some but not complete overlap in the questions and participants.
- Some questions are better suited to people with strong forecasting track records and others better suited to people with relevant expertise.
- Within this step, you might want to move through a pipeline with a similar rationale to the overall pipeline, with multiple waves of forecast-soliciting / surveying that each have more questions, more precise operationalizations of questions (for a rough illustration of what a well-operationalized question might look like, see the sketch just after this list), and/or more respondents.
- Advantage: Get a large volume of well-founded, easily interpretable thoughts on well-considered questions, with relatively low cost to each person's fairly scarce time (even if high cost in aggregate).
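To make the end point of this pipeline a bit more concrete, here's a minimal sketch of the kind of structure a late-stage, well-operationalized question might have, contrasted with the vaguer framing an earlier stage might start from. The field names and the example question are hypothetical illustrations of mine, not drawn from any of the sources discussed here.

```python
# Illustrative only: a vague early-stage question vs. a sketch of what a
# "well-operationalized" late-stage forecasting/survey question might look like.
# All field names and the example question are hypothetical.
vague_question = "Will AI governance get more attention soon?"

operationalized_question = {
    "text": (
        "By 2030-12-31, will at least 3 OECD governments have an office whose "
        "primary stated remit is the governance of advanced AI?"
    ),
    "answer_format": "probability (0-100%)",
    "resolution_criteria": (
        "Resolves YES if credible public documentation (e.g. official government "
        "websites) shows at least 3 such offices existing by the resolution date."
    ),
    "resolution_date": "2030-12-31",
    # A catch-all free-text field, for the reasons given under "Misc thoughts" below:
    # it lets respondents share important things the fixed questions didn't anticipate.
    "any_other_thoughts": "free text, optional",
}
```

Most of the work in the later pipeline stages is in pinning down things like the resolution criteria and date, which is exactly the work that's easy to get wrong if you jump to this stage too soon.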
Misc thoughts
- For similar reasons, I think it's probably usually good for interview protocols and surveys to include at least one "Any other thoughts?" type question and perhaps multiple (e.g., one after each set of questions on a similar theme).
- Also for similar reasons, I think it's probably usually good to allow/encourage forecasters to share whatever thoughts they have that they think are worth sharing, rather than solely soliciting their forecasts on the questions asked.
- ^
The specific trigger for me writing this was that I mentioned the core idea of this shortform to a colleague it was relevant to, and they said it seemed useful to them.
Another reason I bothered to write it is that in my experience this basic idea has seemed valid and useful, and I think it would've been a little useful for me to have read this a couple years ago.
MichaelA @ 2022-04-30T20:01 (+4)
Someone shared a project idea with me and, after I indicated I didn't feel very enthusiastic about it at first glance, asked me what reservations I had. Their project idea was focused on reducing political polarization and was framed as motivated by longtermism. I wrote the following and thought maybe it'd be useful for other people too, since I have similar thoughts in reaction to a large fraction of project ideas.
- "My main 'reservations' at first glance aren't so much specific concerns or downside risks as just 'I tentatively think that this doesn't sound at first glance like the kind of thing that will be very well-targeted or high-leverage for affecting the very most important things that happen this century and shape the course of the long-term future.'
- Or to come at it from different angle: I tentatively think that this doesn't sound like something that was arrived at or would be arrived at by a search process that:
- Started with the long-term future and our best understanding of the key risks, risk factors, security factors, points for intervention, etc. in mind,
- Worked backwards from there, thinking about what a given person or set of people can most impactfully do to affect the most important stories, and
- Considered many options.
- It sounds more like a project that was arrived at either:
- before becoming focused on improving the long-term future, or
- without forcing oneself to come up with and red-team a theory of change for how this makes a major difference to the long-term future, or
- without considering at least 10 other options.
- A similar but more concrete framing: If I imagine an existential catastrophe has occurred by 2100, and I ask myself what were the top 20 things that contributed to that and the top 20 things that could've tractably been done to prevent it, is the level of polarization in liberal democracies on those lists? How high up?
- Possibly you'd find these slides from a workshop I gave recently on theory of change useful, though that's just sort of overviewing the topic as a whole (rather than being at all tailored to your project) and is somewhat focused on research projects.
- But these views really are just 'tentative' and 'at first glance'.
- I do in fact think there's a plausible case for reducing polarization being on that 'top 20 list', and being one of the things I'd land on if 'backchaining' from what matters most.
- And things that aren't at the very top of the list can still be worth doing if there's a team who'd be damn good at them, and better at them than at other things.
- So I think if I was evaluating this for a grant, I wouldn't just quickly reject, but rather try to:
- (a) assess to what extent you really are motivated by longtermism/x-risk-reduction and hence will make lots of strategic and tactical decisions in ways subtly tailored to that
- (b) hear your case for this being a top longtermist priority, and think further about that myself
- (c) see whether you seem to have a great team and plan
- And based on the tiny amount I currently know about your project, it's probably 10-40% likely that after doing (a), (b), and (c), I'd ultimately recommend funding of ~$10-80k (if you thought that was an amount you could usefully use)."
Notes to readers:
- This might sound either weirdly blunt or weirdly vague, but please bear in mind I'm lifting it out of context here!
- If you can think of a good title to give this shortform to make it clearer who and what it'd be useful for, please let me know!
- Feel free to share this with people or suggest I do something else with it, if that seems useful.
MichaelA @ 2021-12-28T15:49 (+4)
I've made a small "Collection of collections of AI policy ideas" doc. Please let me know if you know of a collection of relatively concrete policy ideas relevant to improving long-term/extreme outcomes from AI. Please also let me know if you think I should share the doc / more info with you.
MichaelA @ 2021-12-28T15:50 (+2)
Here's the introductory section of the doc, but feel free to not read this:
A bunch of people are separately working or have worked on collecting policy ideas that might be relevant to long-term/extreme outcomes from AI. I'm not sure if these people all actually sharing their collections with each other would be good (e.g., maybe a given collection is too sensitive, or maybe it'd be better to have more independent thinking first). But probably some such sharing would be good, and it seems at least useful for these people to be aware of the fact that they're all working on this sort of thing. So I quickly made this doc to list the collections I'm aware of.
I've put these in alphabetical order. Please let me know if there are other collections that you're aware of. Also let me know if you have any other thoughts on whether this doc should exist at all, whether a different approach should be taken, etc.
Currently this doc is accessible only by the people who made the collections listed below, by other Rethink Priorities longtermism staff, and by a couple other people. I expect to share it with a few other people soon. I also currently intend to, at some later point, share it fairly liberally within the AI governance community, and perhaps to e.g. copy its contents into an EA Forum shortform, but I'll check with the people whose collections are mentioned before doing so. Please let me know if you are vs aren't happy for your collection to be listed in this doc and for the doc to be shared more widely.
MichaelA @ 2020-04-10T06:20 (+4)
Collection of work on value drift that isn't on the EA Forum
Value Drift & How to Not Be Evil Part I & Part II - Daniel Gambacorta, 2019
Value drift in effective altruism - Effective Thesis, no date
Will Future Civilization Eventually Achieve Goal Preservation? - Brian Tomasik, 2017/2020
Let Values Drift - G Gordon Worley III, 2019 (note: I haven't read this)
On Value Drift - Robin Hanson, 2018 (note: I haven't read this)
Somewhat relevant, but less so
Value uncertainty - Michael Aird (me), 2020
An idea for getting evidence on value drift in EA - Michael Aird, 2020 [this actually is on the EA Forum, but doesn't have the value drift tag because it's a shortform, so it still seems worth including here]
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment. See also my collection of EA analyses of how social movements rise, fall, can be influential, etc.
This list originally also contained sources on the EA Forum, but when a value drift tag was created I just gave those sources that tag instead, removed them from here, and changed the heading here.
MichaelA @ 2021-09-22T09:20 (+3)
Collection of AI governance reading lists, syllabi, etc.
This is a doc I made, and I suggest reading the doc rather than the shortform version (assuming you want to read this at all). But here it is copied out anyway:
What is this doc, and why did I make it?
AI governance is a large, complex, important area that intersects with a vast array of other fields. Unfortunately, it's only fairly recently that this area started receiving substantial attention, especially from specialists with a focus on existential risks and/or the long-term future. And as far as I'm aware there aren't yet any canonical, high-quality textbooks or online courses on the topic.[1] It seems to me that this means this is an area where well-curated and well-structured reading lists, syllabi, or similar can be especially useful, helping to fill the role that textbooks otherwise could.[2]
Fortunately, when I started looking for relevant reading lists and syllabi, I was surprised by how many there were. So I decided to try to collect them all in one place. I also tried to put them in very roughly descending order of how useful I'd guess they'd be to a randomly chosen EA-aligned person interested in learning about AI governance.
I think this might help myself, my colleagues, and others who are trying to "get up to speed", for the reasons given in the following footnote.[3]
I might later turn this doc into a proper post on the EA Forum.
See also EA syllabi and teaching materials and Courses on longtermism.
How can you help?
- Please comment if you know of anything potentially relevant which I havenāt included!
- Please comment if you have opinions on anything listed!
The actual collection
- September AGI safety fundamentals curriculum - Richard Ngo
- Alignment Newsletter Database - Rohin Shah
- This is more relevant to technical AI safety than to AI governance, but some categories are pretty relevant to AI governance, especially "AI strategy and policy", "Forecasting", and "Field building"
- AI Governance Reading List - SERI 2021 Summer - Mauricio Baker
- Mauricio had also previously made a syllabus on the same topics: AI Governance Syllabus '21.docx
- Governance of AI Reading List – Oxford Spring 2020 - Markus Anderljung
- Reading Guide for the Global Politics of Artificial Intelligence - Allan Dafoe
- I'm guessing other lists made by people associated with GovAI have already drawn on and superseded this, but I don't know
- "Resources" section from Guide to working in artificial intelligence policy and strategy - 80,000 Hours
- Note: I think the only book from there that's available on Audible UK is The Second Machine Age.
- But the description of the book sounds to me kind-of basic and not especially longtermism-relevant.
- AI policy introductory reading list - Niel Bowerman (I think)
- Governance of AI - Some suggested readings [v0.5, shared] - Ashwin Acharya
- Drawn on for SERI's reading list
- Artificial Intelligence and International Security Syllabus [public] - Remco Zwetsloot, 2018 (I think)
- Books and lecture series relevant to AI governance - me and commenters
- Section on "Unaligned artificial intelligence" from Syllabus – The Precipice
- Tangential critique: I personally think that it's problematic and misleading that both The Precipice and this syllabus use the heading "unaligned artificial intelligence" while seeming to imply that this covers all key aspects of AI risk, since I think this obscures some risk pathways.
- AI Policy Readings Draft.docx - EA Oxford
- Drawn on for SERI's reading list
- My post Crucial questions for longtermists includes a structured list of questions related to the "Value of, and best approaches to, work related to AI", and this associated doc contains readings related to each of those questions
- I haven't updated this much since 2020
- Questions listed there include:
- Is it possible to build an artificial general intelligence (AGI) and/or transformative AI (TAI) system? Is humanity likely to do so?
- What form(s) is TAI likely to take? What are the implications of that? (E.g., AGI agents vs comprehensive AI services)
- What will the timeline of AI developments be?
- How much should longtermists prioritise AI?
- What forms might an AI catastrophe take? How likely is each?
- What are the best approaches to reducing AI risk or increasing AI benefits?
- Good resources for getting a high-level understanding of AI risk - Michael Aird
- AI governance intro readings - Felipe Calero
- Luke Muehlhauser's 2013 and 2014 lists of books he'd listened to recently
- I think many/most of these books were chosen for "seem[ing] likely to have passages relevant to the question of how well policy-makers will deal with AGI"
- Many/most of these aren't available as audiobooks; Luke turned them into audiobooks himself
- A Contra AI FOOM Reading List – Magnus Vinding
- Described in SERI's reading list as a "List of arguments (of varied quality) against 'fast takeoff'"
- List of resources on AI and agency - Ben Pace
- You could also use research agendas related to AI governance as reading lists, by following the sources they cite on various topics. Relevant agendas include:
- (Note that I haven't checked how well each of these agendas would work for this purpose. This list is taken from my central directory for open research questions.)
- The Centre for the Governance of AI's research agenda - 2018
- Some AI Governance Research Ideas - the Centre for the Governance of AI, 2021
- Promising research projects - AI Impacts, 2018
- They also made a list in 2015; I haven't checked how much they overlap
- Cooperation, Conflict, and Transformative Artificial Intelligence (the Center on Long-Term Risk's research agenda) - Jesse Clifton, 2019
- Open Problems in Cooperative AI - Dafoe et al., 2020
- Problems in AI Alignment that philosophers could potentially contribute to - Wei Dai, 2019
- Problems in AI risk that economists could potentially contribute to - Michael Aird, 2021
- Technical AGI safety research outside AI - Richard Ngo, 2019
- Artificial Intelligence and Global Security Initiative Research Agenda - Centre for a New American Security, no date
- A survey of research questions for robust and beneficial AI - Future of Life Institute, no date
- "studies which could illuminate our strategic situation with regard to superintelligence" - Luke Muehlhauser, 2014 (he also made a list in 2012)
- A shift in arguments for AI risk - Tom Sittler, 2019
- Longtermist AI policy projects for economists - Risto Uuk (this doc was originally just made for Risto's own use, so the ideas shouldn't be taken as high-confidence recommendations to anyone else)
- Annotated Bibliography of Recommended Materials - CHAI
- I think this is much more focused on technical AI safety than AI governance
- Some Rethink Priorities staff may soon make a long, tiered reading list tailored to the AI governance project ideas we may work on. If it seems to me that this would be useful to other people, I might add a link to a version of it here.
- There may be additional relevant reading lists / syllabi / sections in the links given here: EA syllabi and teaching materials - EA Forum
- I think there was also a short reading list associated with the EA In-Depth Fellowship
- A related category to reading lists is newsletters that provide summaries and commentary of a bunch of research outputs. E.g.:
- Rohin's
- Jack Clark's
- CSET's
- …
- Mauricio Baker suggested that I or people reading this doc might also be interested in "syllabi aimed at aspiring AI technical safety researchers, such as this one: Technical AI Safety Reading List. I have a vague sense that engaging with some of this content has been helpful for my having a better broad sense of what's going on with AI safety, which seems helpful for governance."
- Some parts of Krakovna's AI safety resources and Maini's AI Reading List may be quite useful for AI governance people, though I think they're more relevant for technical AI safety people
My thanks to everyone who made these lists, as well as to Mauricio Baker for pointing me to some of the lists.
Footnotes
[1] Though there are various presumably high-quality textbooks or courses with some relevance, some high-quality non-textbook books on the topic, some in-person courses that might be high-quality (I haven't participated in them), and some things that fill somewhat similar roles (like EA seminar series, reading groups, or fellowships).
[2] See also Research Debt and Suggestion: EAs should post more summaries and collections.
[3]
- This collection should make it easier to find additional reading lists, syllabi, etc., and thus easier to find additional readings that have been evaluated as especially worth reading in general, especially worth reading on a given topic, and/or especially good as introductory resources.
- This collection should make it easier to find and focus on reading lists, syllabi, etc. that are better and/or more relevant to one's specific needs.
- To help with this, please comment on this doc if you have opinions about anything listed.
- Even before or without engaging with the actual items included in a given reading list, syllabus, or similar, engaging with the structure and commentary in that document itself could help one understand what the important components, divisions, concepts, etc. within AI governance are. And this collection should help people find more, better, and/or more relevant such documents.
MichaelA @ 2021-01-03T06:00 (+3)
Thoughts on Toby Ord's policy & research recommendations
In Appendix F of The Precipice, Ord provides a list of policy and research recommendations related to existential risk (reproduced here). This post contains lightly edited versions of some quick, tentative thoughts I wrote regarding those recommendations in April 2020 (but which I didn't post at the time).
Overall, I very much like Ord's list, and I don't think any of his recommendations seem bad to me. So most of my commentary is on things I feel are arguably missing.
Regarding "other anthropogenic risks"
Ord's list includes no recommendations specifically related to any of what he calls "other anthropogenic risks", meaning:
- "dystopian scenarios"
- nanotechnology
- "back contamination" from microbes from planets we explore
- aliens
- "our most radical scientific experiments"
(Some of his "General" recommendations would be useful for those risks, but there are no recommendations specifically targeted at those risks.)
This is despite the fact that Ord estimates a ~1 in 50 chance that "other anthropogenic risks" will cause existential catastrophe in the next 100 years. That's ~20 times as high as his estimate for each of nuclear war and climate change (~1 in 1,000), and ~200 times as high as his estimate for all "natural risks" put together (~1 in 10,000). (Note that Ord's "natural risks" includes supervolcanic eruption, asteroid or comet impact, and stellar explosion, but does not include "'naturally' arising pandemics". See here for Ord's estimates and some commentary on them.)
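As a quick sanity check of those ratios, here's the arithmetic using Ord's rounded estimates as stated above (this is just my own restatement of figures already cited, not anything additional from the book):

```python
# Ord's rounded existential-risk estimates for the next 100 years, as cited above.
other_anthropogenic = 1 / 50        # "other anthropogenic risks"
nuclear_or_climate = 1 / 1000       # nuclear war, and separately climate change
natural_risks_total = 1 / 10_000    # all "natural risks" combined

print(other_anthropogenic / nuclear_or_climate)    # 20.0  -> "~20 times as high"
print(other_anthropogenic / natural_risks_total)   # 200.0 -> "~200 times as high"
```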
Meanwhile, Ord includes 10 recommendations specifically related to "natural risks", 7 related to nuclear war, and 8 related to climate change. Those recommendations do all look to me like good recommendations, and like things "someone" should do. But it seems odd to me that there are that many recommendations for those risks, yet none specifically related to a category Ord seems to think poses many times more existential risk.
Perhaps it's just far less clear to Ord what, concretely, should be done about "other anthropogenic risks". And perhaps he wanted his list to only include relatively concrete, currently actionable recommendations. But I expect that, if we tried, we could find or generate such recommendations related to dystopian scenarios and nanotechnology (the two risks from this category I'm most concerned about).
So one thing I'd recommend is someone indeed having a go at finding or generating such recommendations! (I might have a go at that myself for dystopias, but probably not for at least another 6 months.)
(See also posts tagged global dystopia, atomically precise manufacturing, or space.)
Regarding naturally arising pandemics
Similarly, Ord has no recommendations specifically related to what he called "'naturally' arising pandemics" (as opposed to "engineered pandemics"), which he estimates as posing as much existential risk over the next 100 years as all "natural risks" put together (~1 in 10,000). (Again, note that he doesn't include "'naturally' arising pandemics" as a "natural risk".)
This is despite the fact that, as noted above, he has 10 recommendations related to "natural risks". This also seems somewhat strange to me.
That said, one of Ord's recommendations for "Emerging Pandemics" would also help with "'naturally' arising pandemics". (This is the recommendation to "Strengthen the WHO's ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.") But the other five recommendations for "Emerging Pandemics" do seem fairly specific to emerging rather than "naturally" arising pandemics.
Regarding engineered pandemics
Ord recommends "Increas[ing] transparency around accidents in BSL-3 and BSL-4 laboratories." "BSL" refers to "biosafety level", and 4 is the highest it gets.
In Chapter 5, Ord provides some jaw-dropping/hilarious/horrifying tales of accidents even among labs following the BSL-4 standards (including two accidents in a row for one lab). So I'm very much on board with the recommendation to increase transparency around those accidents.
But I was a little surprised to see that Ord didn't also call for things like:
- introducing more stringent standards (to prevent rather than be transparent about accidents),
- introducing more monitoring and enforcement of compliance with those standards, and/or
- restricting some kinds of research as too dangerous for even labs following the highest standards
Some possible reasons why he may not have called for such things:
- He may have worried there'd be too much pushback, e.g. from the bioengineering community
- He may have thought those things just actually would be net-negative, even if not for pushback
- He may have felt that his other recommendations would effectively accomplish similar results
But I'd guess (with low confidence) that at least something along the lines of the three "missing recommendations" mentioned above - and beyond what Ord already recommends - would probably help reduce biorisk, if done as collaboratively with the relevant communities as is practical.
Regarding existential risk communication
One of Ord's recommendations is to:
Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
I think this is a great recommendation. (See also Database of existential risk estimates.) That recommendation also made me think that another strong recommendation might be something like:
Develop better approaches, incentives, and norms for communicating about risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
That sounds a bit vague, and I'm not sure exactly what form such approaches, incentives, or norms should take or how one would implement them. (Though I think that the same is true of the recommendation of Ord's which inspired this one.)
That proposed recommendation of mine was in part inspired by the COVID-19 situation, and more specifically by the following part of an 80,000 Hours Podcast episode (which also gestures in the direction of concrete implications of my proposed recommendation).
Rob Wiblin: The alarm [about COVID-19] could have been sounded a lot sooner and we could have had five extra weeks to prepare. Five extra weeks to stockpile food. Five extra weeks to manufacture more hand sanitizer. Five extra weeks to make more ventilators. Five extra weeks to train people to use the ventilators. Five extra weeks to figure out what the policy should be if things got to where they are now.
Work was done in that time, but I think a lot less than could have been done if we had had just the forecasting ability to think a month or two ahead, and to think about probabilities and expected value. And this is another area where I think we could improve a great deal.
I suppose we probably won't fall for this exact mistake again. Probably the next time this happens, the world will completely freak out everywhere simultaneously. But we need better ability to sound the alarm, potentially greater willingness actually on the part of experts to say, "I'm very concerned about this and people should start taking action, not panic, but measured action now to prepare," because otherwise it'll be a different disaster next time and we'll have sat on our hands for weeks wasting time that could have saved lives. Do you have anything to add to that?
Howie Lempel: I think one thing that we need as a society, although I don't know how to get there, is an ability to see an expert say that they are really concerned about some risk. They think it likely won't materialize, but it is absolutely worth putting a whole bunch of resources into preparing, and seeing that happen and then seeing the risk not materialize and not just cracking down on and shaming that expert, because that's just going to be what happens most of the time if you want to prepare for things that don't occur that often.
Regarding AI risk
Here are Ord's four policy and research recommendations under the heading "Unaligned Artificial Intelligence":
Foster international collaboration on safety and risk management.
Explore options for the governance of advanced AI.
Perform technical research on aligning advanced artificial intelligence with human values.
Perform technical research on other aspects of AGI safety, such as secure containment or tripwires.
These all seem to me like excellent suggestions, and I'm glad Ord has lent additional credibility and force to such recommendations by including them in such a compelling and not-wacky-seeming book. (I think Human Compatible and The Alignment Problem were also useful in a similar way.)
But I was also slightly surprised to not see explicit mention of, for example:
- Work to understand what human values actually are, how they're structured, which aspects of them we do/should care about, etc.
- E.g., much of Stuart Armstrong's research, or some work that's more towards the philosophical rather than technical end
- "Agent foundations"/"deconfusion"/MIRI-style research
- Further formalisation and critique of the various arguments and models about AI risk
But this isnāt really a criticism, because:
- Perhaps the first two of the "missing recommendations" I mentioned were actually meant to be implicit in Ord's third and fourth recommendations
- Perhaps Ord has good reasons to not see these recommendations as especially worth mentioning
- Perhaps Ord thought he'd be unable to concisely state such recommendations (or just the MIRI-style research one) in a way that would sound concrete and clearly actionable to policymakers
- Any shortlist of a person's top recommendations will inevitably fail to 100% please all readers
You can see a list of all the things I've written that summarise, comment on, or take inspiration from parts of The Precipice here.
MichaelA @ 2020-02-20T19:14 (+3)
Some concepts/posts/papers I find myself often wanting to direct people to
https://www.lesswrong.com/posts/oMYeJrQmCeoY5sEzg/hedge-drift-and-advanced-motte-and-bailey
http://gcrinstitute.org/papers/trajectories.pdf
(Will likely be expanded as I find and remember more)
MichaelA @ 2021-08-14T18:44 (+2)
Notes on Victor's Understanding the US Government (2020)
Why I read this
- I'm interested in learning more about a wide variety of topics relevant to "longtermism-motivated AI governance/strategy/policy research, practice, advocacy, and talent-building"
- I decided that one strategy I should try for that purpose is listening to relevant Great Courses lecture series via Audible
- This decision was loosely informed by advice at the end of the post The Neglected Virtue of Scholarship
- I felt that the Understanding the US Government lecture series would be useful because a detailed, fluent understanding of how the US government works and can be influenced seems useful for AI governance work
Should you read this?
- I did find the lecture series useful
- Probably especially the first half
- But I expect that there are better resources on this topic
- Though I don't know what they are
- Things I saw as problems with the lecture series:
- Victor often covered info that seemed basic to me (e.g., what public goods or the Cold War are) as if it'd be new to the listener
- Though at least I could then just up the playback speed to 2.7-3.3
- She sometimes made claims that were unclear and/or that I'm skeptical of
- I'd guess if I fact-checked the lecture series thoroughly, I'd find several errors
- A decent fraction of that content didn't seem very relevant to my interests
- E.g., a chapter focused on things like social security benefits
My Anki cards
For why I'm sharing these, see Suggestion: Make Anki cards, share them as posts, and share key updates.
Victor says the 3 main factors that historically have the highest predictive power for presidential election outcomes are:
1. Incumbency status
[A party that's held the presidency for 1 term and nominates the incumbent has an advantage. A party that's held the presidency for 2 terms has a disadvantage.]
2. Incumbency approval rating
3. Status of the economy
How many cabinet departments does the US have?
15
How many civilian employees does the US bureaucracy have?
2.1 million
Victor describes the US executive branch as being organised into 5 buckets:
The White House
The Executive Office of the President [though she then says this tends to be considered part of the White House]
The 15 cabinet departments
Independent agencies (both regulatory and non-regulatory)
Government corporations
The Supreme Court can hear cases from a federal court of appeal or a state supreme court if it satisfies three rules of access:
Controversy
Standing
Mootness
[These rules are necessary but not sufficient.]
Victor highlights 3 deep root sources of partisan polarisation in the US:
1. Worsening economic inequality
2. Realignment of political parties over issues of race
3. To some extent, changes in campaign finance laws [in particular, changes that mean politicians have to rely more on small donors relative to large donors, since small donors tend to be more ideologically driven]
Victor lists 2 things people often believe contribute to polarisation but for which the evidence either doesn't clearly support such a causal link or contradicts it:
Gerrymandering [she notes that polarisation is similarly strong in the Senate]
The media [but then she indicates that polarised or fake news is indeed important?]
Victor says there are 7 types of "organised interests" (in US politics):
Businesses/corporations
Trade associations
Professional associations
Citizen groups
Issue groups
Labour unions
Think tanks, foundations, and institutes
What percentage of US government spending is mandatory spending (rather than discretionary spending)?
60%
[This is money the gov is committed by law to spending, and is unaffected by the appropriations process.]
What percentage of US gov discretionary spending is for defense spending?
About 50%
How much does the US gov spend per year on non-defense discretionary spending?
$880 billion
How much did the US gov spend in 2019?
$4.4 trillion
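As a rough check, these four figures hang together arithmetically. This is just my own back-of-the-envelope check using the rounded numbers in the cards above:

```python
# Back-of-the-envelope consistency check of the (rounded) figures in the cards above.
total_spending = 4.4e12                  # total US federal spending in 2019
mandatory_share = 0.60                   # share of total that is mandatory spending
discretionary = total_spending * (1 - mandatory_share)        # ~$1.76 trillion
defense_share_of_discretionary = 0.50
non_defense_discretionary = discretionary * (1 - defense_share_of_discretionary)

print(f"${non_defense_discretionary / 1e9:.0f} billion")      # ~$880 billion, matching the card
```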
MichaelA @ 2020-06-10T01:29 (+2)
On a 2018 episode of the FLI podcast about the probability of nuclear war and the history of incidents that could've escalated to nuclear war, Seth Baum said:
a lot of the incidents were earlier within, say, the ’40s, ’50s, ’60s, and less within the recent decades. That gave me some hope that maybe things are moving in the right direction.
I think we could flesh out this idea as the following argument:
- Premise 1. We know of fewer incidents that could've escalated to nuclear war from the 70s onwards than from the 40s-60s.
- Premise 2. If we know of fewer such incidents from the 70s onwards than from the 40s-60s, this is evidence that there really were fewer incidents from the 70s onwards than from the 40s-60s.
- Premise 3. If there were fewer such incidents from the 70s onwards than from the 40s-60s, the odds of nuclear war are lower than they were in the 40s-60s.
- Conclusion. The odds of nuclear war are (probably) lower than they were in the 40s-60s.
I don't really have much independent knowledge regarding the first premise, but I'll take Baum's word for it. And the third premise seems to make sense.
But I wonder about the second premise, which Baum's statements seem to sort-of take for granted (which is fair enough, as this was just one quick, verbal statement from him). In particular, I wonder whether the observation "I know about fewer recent than older incidents" is actually what we'd expect to see even if the rate hadn't changed, just because security-relevant secrets only gradually get released/filter into the public record? If so, should we avoid updating our beliefs about the rate based on that observation?
These are genuine rather than rhetorical questions. I don't know much about how we come to know about these sorts of incidents; if someone knows more, I'd appreciate their views on what we can make of knowing about fewer recent incidents.
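To illustrate why that second premise might fail, here's a toy simulation: if incidents become publicly known only after a long, variable declassification/reporting lag, then recent decades will show fewer known incidents even if the true rate never changed. The rate and lag parameters below are purely illustrative assumptions of mine, not empirical estimates.

```python
import random

random.seed(0)

# Toy model: the true incident rate is constant, but each incident only becomes
# publicly known after a random declassification/reporting lag. Even so, recent
# decades end up with fewer *known* incidents.
TRUE_INCIDENTS_PER_DECADE = 5    # illustrative assumption, not an empirical estimate
MEAN_LAG_YEARS = 25              # illustrative assumption about how long secrets take to surface
OBSERVATION_YEAR = 2018

known_by_decade = {}
for decade in range(1940, 2020, 10):
    known = 0
    for _ in range(TRUE_INCIDENTS_PER_DECADE):
        year = decade + random.uniform(0, 10)
        lag = random.expovariate(1 / MEAN_LAG_YEARS)
        if year + lag <= OBSERVATION_YEAR:   # incident is public by the observation year
            known += 1
    known_by_decade[decade] = known

# Earlier decades tend to show more known incidents than recent ones,
# despite the constant underlying rate.
print(known_by_decade)
```

If something like this lag structure is realistic, then observing fewer known recent incidents is only weak evidence that the underlying rate has fallen, which is exactly the worry raised above.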
This also seems relevant to some points made earlier on that podcast. In particular, Robert de Neufville said:
We don’t have incidents from China’s nuclear program, but that doesn’t mean there weren’t any, it just means it’s hard to figure out, and that scenario would be really interesting to do more research on.
(Note: This was just one of many things Baum said, and was a quick, verbal comment. He may in reality already have thought in depth about the questions I raised. And in any case, he definitely seems to think the risk of nuclear war is significant enough to warrant a lot of attention.)
MichaelA @ 2020-05-08T07:07 (+2)
Collection of sources relevant to the idea of “moral weight”
Comparisons of Capacity for Welfare and Moral Status Across Species - Jason Schukraft, 2020
Preliminary thoughts on moral weight - Luke Muehlhauser, 2018
Should Longtermists Mostly Think About Animals? - Abraham Rowe, 2020
2017 Report on Consciousness and Moral Patienthood - Luke Muehlhauser, 2017 (the idea of “moral weights” is addressed briefly in a few places)
Notes
As I’m sure you’ve noticed, this is a very small collection. I intend to add to it over time. If you know of other relevant work, please mention it in a comment.
(ETA: The following speculation appears false; see comments below.) It also appears possible this term was coined, for this particular usage, by Muehlhauser, and that in other communities other labels are used to discuss similar concepts. Please let me know if you have any information about either of those speculations of mine.
See also my collection of sources relevant to moral circles, moral boundaries, or their expansion and my collection of evidence about views on longtermism, time discounting, population ethics, etc. among non-EAs.
Jason Schukraft @ 2020-05-08T13:17 (+15)
A few months ago I compiled a bibliography of academic publications about comparative moral status. It's not exhaustive and I don't plan to update it, but it might be a good place for folks to start if they're interested in the topic.
MichaelA @ 2020-05-08T23:40 (+2)
Ah great, thanks!
Do you happen to recall if you encountered the term "moral weight" outside of EA/rationality circles? The term isn't in the titles in the bibliography (though it may be in the full papers), and I see one that says "Moral status as a matter of degree?", which would seem to refer to a similar idea. So this seems like it might be additional weak evidence that "moral weight" might be an idiosyncratic term in the EA/rationality community (whereas when I first saw Muehlhauser use it, I assumed he took it from the philosophical literature).
Jason Schukraft @ 2020-05-09T01:36 (+13)
The term 'moral weight' is occasionally used in philosophy (David DeGrazia uses it from time to time, for instance) but not super often. There are a number of closely related but conceptually distinct issues that often get lumped together under the heading moral weight:
- Capacity for welfare, which is how well or poorly a given animal's life can go
- Average realized welfare, which is how well or poorly the life of a typical member of a given species actually goes
- Moral status, which is how much the welfare of a given animal matters morally
Differences in any of those three things might generate differences in how we prioritize interventions that target different species.
Rethink Priorities is going to release a report on this subject in a couple of weeks. Stay tuned for more details!
MichaelA @ 2020-05-09T09:35 (+2)
Thanks, that's really helpful! I'd been thinking there's an important distinction between that "capacity for welfare" idea and that "moral status" idea, so it's handy to know the standard terms for that.
Looking forward to reading that!