What posts do you want someone to write?

By Aaron Gertler 🔸 @ 2020-03-24T06:41 (+71)

I really enjoyed "What posts are you planning on writing?"

This is the lazy version, for people who want a post to exist but want someone else to write it. Given that we're all stuck inside anyway, I'm hoping we can use this opportunity to get a lot of writing done (like Isaac Newton!).

So: What are some unwritten posts that you think ought to exist?

If you want something to exist badly enough that you'd pay for it, consider sharing that information! There's nothing wrong with offering money; some really good work has come from small-scale contracting of this kind.


Aaron Gertler @ 2020-03-24T07:03 (+38)

More journalistic articles about EA projects.

I don't necessarily mean "written by journalists", though there's been a lot of good journalistic coverage of EA.

I mean "in the style of long-form journalism": Telling an interesting story about the work of a person/organization, while mixing in the origin story, interesting details about the people involved, photos, etc.

Examples of projects I think could get the journalistic treatment:

Aidan O'Gara @ 2020-03-24T07:42 (+2)

That’s a super cool idea.

  • What writing currently exists like this? Vox’s Future Perfect, maybe a few one-off articles in other major publications?
  • Where’s best to publish this? Feels like a lot of work for a blog post, but I doubt the NYT is looking for unsolicited submissions. Are there publishing platforms that would be interested in this?

Aaron Gertler @ 2020-03-31T11:17 (+3)

What writing currently exists like this?

Future Perfect and a few one-off articles, mostly. Tom Chivers is a journalist with strong EA leanings who routinely writes from that perspective.

Where's best to publish this?

I wasn't thinking that these stories would have to be published by a large media outlet; I just want them to exist somewhere so that I can share them with people who are new to the movement. 

Getting published on a wider platform could be great for certain orgs (e.g. Wave is just a business, I imagine they wouldn't mind the attention), but bad for others (CSET generally keeps its work fairly private). I'd hope that anyone writing one of these hypothetical stories would check the org's publicity preferences before submitting a story anywhere!

MichaelA @ 2020-03-24T08:02 (+1)

I read in an as-yet-unpublished post that the best approach for getting published in a major outlet without being on their staff is not to just write something and then send it to various publications, but rather to pick an outlet and optimise the piece (or versions of it) for that outlet's style, topic choices, readership, etc. (I'm not sure what the evidence base for that claim was, and I have no relevant knowledge of my own.)

If that is a good approach, one could still potentially pick a few outlets and write somewhat different versions for each, rather than putting all one's eggs in one basket. Or write one optimised version at a time, and not invest additional effort until that one is rejected. A version could also be posted to the EA Forum, Medium, or similar places in the meantime. (Unless that would reduce the odds of publication by a major outlet?)

Aidan O'Gara @ 2020-03-25T05:54 (+1)

Makes a lot of sense. I'm sure Vox and the New York Times are interested in very different kinds of submissions; writing with a particular style in mind probably dramatically increases the odds of publication.

I still wonder what the success rate here is - closer to 1% or to 10%? If the latter, I could see this being pretty impactful and possibly scalable.

bmg @ 2020-03-26T21:34 (+36)

I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.

There still isn't a lot of writing explaining the case for existential misalignment risk. And a significant fraction of what's been produced since Superintelligence is either: (a) roughly summarizing arguments in Superintelligence, (b) pretty cursory, or (c) written by people who are relative optimists and are in large part trying to explain their relative optimism.

Since I have the (possibly mistaken) impression that a decent number of people in the EA community are quite pessimistic regarding existential misalignment risk, on the basis of reasoning that goes significantly beyond what's in Superintelligence, I'd really like to understand this position a lot better and be in a position to evaluate the arguments for it.

(My ideal version of this post would probably assume some degree of familiarity with contemporary machine learning, and contemporary safety/robustness issues, but no previous familiarity with arguments that AI poses an existential risk.)

mike_mclaren @ 2020-04-04T12:12 (+3)

I'd be really interested in reading an updated post that makes the case for there being an especially high (e.g. >10%) probability that AI alignment problems will lead to existentially bad outcomes.

My understanding is that Toby Ord does just this in his new book The Precipice (his new AI x-risk estimate is also discussed in his recent 80K podcast interview about the book), though it would still be good to have others weigh in.

bmg @ 2020-04-04T23:55 (+8)

I think that chapter in The Precipice is really good, but it's not exactly the sort of thing I have in mind.

Although Toby's less optimistic than I am, he's still only arguing for a 10% probability of existentially bad outcomes from misalignment.* The argument in the chapter is also, by necessity, relatively cursory. It's aiming to introduce the field of artificial intelligence and the concept of AGI to readers who might be unfamiliar with them, explain what misalignment risk is, make the idea vivid to readers, clarify misconceptions, describe the state of expert opinion, and add in various other nuances, all within the span of about fifteen pages. I think that it succeeds very well in what it's aiming to do, but I would say that it's aiming for something fairly different.

*Technically, if I remember correctly, it's a 10% probability within the next century. So the implied overall probability is at least somewhat higher.

mike_mclaren @ 2020-04-05T11:58 (+2)

I see, thanks for the explanation!

JP Addison @ 2020-11-19T16:44 (+34)

A post making the case for donating now rather than later.

Patient philanthropy (donating later) has been gaining ground within the EA community. While there's been some critical discussion, there hasn't been a post making the positive case for why to donate now since the very early days of EA.

At the suggestion of this question post, I'll offer $200 for a good post in this direction, with the caveat that I think most of the value comes from a very good post, so my bar will be pretty high.

Harry_Taussig @ 2021-04-22T22:18 (+4)

Has anyone done this yet? If so, I'd be interested in the article; otherwise, I'd be interested in giving it a go.

JP Addison @ 2021-04-23T09:16 (+4)

This post probably qualifies, but I didn't love it. I'd pay out if you wrote a good one. But see my note above about my bar being high; I definitely don't want to make promises.

Aaron Gertler @ 2021-04-23T08:57 (+3)

I'm not aware of anything recent that was explicitly pro-"give now". There are some semi-recent posts that weigh both sides of the debate but draw "it depends"-type conclusions. I'd be interested to see your take!

You can see posts on this topic collected in the "timing of philanthropy" tag.

Aaron Gertler @ 2020-03-24T06:56 (+26)

I want a post on how to be a good donor.

Context: I work with a small foundation that asks a lot of questions when we investigate charities. We sometimes worry that we're annoying the charities we work with without providing much value for them or for ourselves, especially since we don't make grants on the same scale as larger foundations. Even when they tell us our questions are helpful/reasonable, they obviously have a strong incentive to make us feel happy and valued. 

Ideal version of this post: Someone goes to a lot of EA orgs, asks them questions related to the above dilemma, and reports the results. 

Other general questions about "what donors should know" would also be neat: How should someone with no special preferences time their donations? How much more valuable is unrestricted than restricted funding? And so on.

Sanjay @ 2020-03-24T09:58 (+7)

This comment was pointed out to me by someone who thought I might be extremely well qualified to answer it.

I have been on the trustee boards of about half a dozen charities and performed short-term consulting stints (about a month at a time) for another 10(ish) charities globally, and have seen how each of those engages with its donors.

I also know people and organisations surveying charities about questions like how much more valuable unrestricted funding is than restricted funding.

I would be happy to put something together on this topic. I'm snowed under with other things for the time being, but I could add it to the list and tackle it later?

MichaelA @ 2020-03-24T08:48 (+3)

I also think this'd be useful.

Though I wonder why you suggest that someone should put these questions to a lot of EA orgs in particular? Did you also mean orgs that aren't explicitly "EA orgs" but that many EAs see as high-value donation opportunities? And is it possible it'd also be valuable to ask non-EA foundations about their practices and thoughts on this matter, at least as an interesting and quite different point of comparison?

Aaron Gertler @ 2020-03-24T08:50 (+3)

If people want to ask other charities, that also seems fine! I suppose I was assuming that EA charities probably do more engagement with small donors (in the sense of "answering lots of questions about their work") than most other charities, and that they might be easier to contact for someone who reads the Forum and sees my post. But I'd guess there would be more value in having a wider sample of organizations.

MichaelA @ 2020-03-24T08:57 (+1)

That all seems to make sense.

tae @ 2020-11-19T21:23 (+22)

As someone dubiously planning a career affiliated with the U.S. Department of Defense, I would really appreciate an analysis of working inside and outside of The System. Historically, have altruists been able to do good from within harmful governments (fascist dictatorships, military juntas, genocidal governments, etc.)? How? Which qualities do altruism-friendly systems have?

Jessie @ 2021-01-05T14:37 (+3)

YES! Someone do this one!

Sofiabuh @ 2021-12-11T20:18 (+1)

I'm interested in this too. 

Benjamin_Todd @ 2021-04-23T13:34 (+20)

In-depth stories of people who had a lot of impact, and the rules of thumb they used / how they navigated key decision points, with the intention of drawing lessons from them.

E.g. interview Holden or Bostrom about each key moment in their careers, the challenges and decisions they faced, and how they navigated them.

They wouldn't need to be within EA. It would also be great to have more examples of people like Norman Borlaug, Viktor Zhdanov, and Petrov, but ideally focusing on (i) new examples, (ii) people who were deliberately trying to have a big impact, and (iii) more interrogation of the strategies they used and how things might have gone differently.

You could write it up as a case study, podcast interview, or journalist-style story.

It would be like Open Phil's history of philanthropy project, but focused on individual actors.

Benjamin_Todd @ 2021-04-29T20:15 (+6)

Here's an example of something in the genre: https://www.vox.com/22397833/dexamethasone-coronavirus-uk-recovery-trial

Though ideally it would contain a bunch more detail about the specific decisions they faced, what rules of thumb they used, how they'd ended up in a position to do this kind of thing, etc. More critical analysis of their impact vs. the counterfactual would also be good.

EdoArad @ 2020-03-24T16:12 (+20)

Governance innovation as a cause area

Many people are working on new governance mechanisms from an altruistic perspective. There are many sub-categories, such as charter cities, space governance, decentralized governance, and the RadicalXChange agenda.

I'm uncertain as to the marginal value in such projects, and I'd like to see a broad analysis that can serve as a good prior and analysis framework for specific projects.

Halffull @ 2020-03-25T15:18 (+4)

Here's an analysis by 80k: https://80000hours.org/problem-profiles/improving-institutional-decision-making/

EdoArad @ 2020-03-25T19:44 (+1)

This is not quite what I was going for, even though it is relevant. This problem profile focuses on existing institutions and on methods for collective decision making. I was thinking more in the spirit of market design, where the goal is to generate new institutions with new structures and rules so that people are selfishly incentivised to act in a way which maximizes welfare (or something else).

Halffull @ 2020-03-25T20:58 (+3)

I think the framing is weird because of EA's allergy to systemic change, but I think in practice all of the points in that cause profile apply to broader change.

EdoArad @ 2020-03-26T06:17 (+1)

No, the analysis does not seem to contain what I was going for. 

Curious about what you think is weird in the framing?

Halffull @ 2020-03-26T15:51 (+1)

Curious about what you think is weird in the framing?

The problem framing is basically spot on, talking about how our institutions drive our lives. Like I said, basically all the points get it right and apply to broader systemic change like RadX, DAOs, etc.

Then, even though the problem is framed perfectly, the solution section almost universally talks about narrow interventions related to individual decision-making, like improving calibration.

Alexis Carlier @ 2020-04-03T17:35 (+1)

I doubt that there is any one answer re the marginal value of such projects, because the value depends on what is being governed. For instance, I think a successful implementation of regulatory markets for AI safety would be very valuable, but regulatory markets for corporate law wouldn't be; yet the same basic framework is being implemented.

For this reason, I'd be more interested in analysis of governance innovation for a particular cause area.

technicalities @ 2020-03-24T08:35 (+18)

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of history's most surprising facts: the huge gap, roughly 1.5 centuries, between the Scientific and Industrial Revolutions. It could also shed light on the old marginal-vs-systemic argument: a synthesis is "do politics, to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

EdoArad @ 2020-03-24T15:59 (+15)

An analysis of how knowledge is constructed in the EA community, and how much weight we should assign to ideas "supported by EA". 

The recent question on reviews by non-EA researchers is an example of that. There might be great opportunities to improve EA intellectual progress.

Ikaxas @ 2020-03-24T20:18 (+1)

Ooh, I would also very much like to see this post.

Aaron Gertler @ 2020-03-24T06:51 (+14)

An AMA from someone who works at a really big foundation that leans EA but isn't quite "EA-aligned" in the same way as Open Philanthropy (e.g. Gates, Rockefeller, Chan/Zuckerberg, Skoll).

I'm interested to hear how those organizations compare different causes, distribute resources between areas/divisions, evaluate the impact of their grantmaking, etc.

Aidan O'Gara @ 2020-03-24T07:47 (+9)

Similarly, an AMA from someone working at an EA org who otherwise isn’t personally very engaged with EA. Maybe they really disagree with EA, or more likely, they’re new to EA ideas and haven’t identified with EA in the past.

They’ll be deeply engaged on the substantive issues but will bring different identities and biases, maybe offering important new perspectives.

Ben_West @ 2021-12-26T19:28 (+12)

I think it would be cool if someone wrote a post about Bob Peurifoy. He's mentioned several times in Command and Control; briefly, he was an engineer and later a manager at Sandia National Laboratories who was influential in nuclear security basically by just being extremely stubborn and motivated by safety. He gave a huge number of briefings (I want to say the number was in the thousands, but I can't find the reference right now) to policymakers, and occasionally stretched the rules to make nuclear weapons technology more secure.

I think it might provide a helpful model for how people can promote safety within large bureaucracies, even if they are not a top executive.

(I thought at one point I had found a eulogy which gave more information about his work, but I can't find it now. Possibly someone could reach out to Eric Schlosser, the author of Command and Control, to see if he has more information.)

MakoYass @ 2021-02-04T06:23 (+12)

A post re-examining the suffering impact of veganism in countries with good average livestock welfare across many product categories. In New Zealand, for instance, grass-fed cows are the norm; egg hens are usually required to have decent amounts of space and don't appear to be especially stressed; and the main supermarket chain, Countdown, just switched to providing mostly "free farmed" pork (birthing sows seem entirely free, while pigs destined for market are moved to barns that are only somewhat free; this excludes non-store-brand pork products, but the store-brand bacon looks good enough that it might be popular).

I get the impression that we're unlikely to receive this kind of analysis through most channels promoting animal welfare; they might not want to tell you about the good parts. I tend to encounter a lot of Copenhagen-ethics and consent arguments (which can't be addressed by improving conditions, no matter how much you improve them; that's a bit of a reductio ad absurdum of consent arguments).

It may help to draw attention to good policies, focus pressure on the worst offenders, and occasionally improve EA nutrition. Promoting animal welfare within the industry is also likely to accelerate incremental change from within: stockpeople who are doing especially well at limiting animal suffering will tend to be proud of their way of doing things and want to promote it to legislators, for both moral and economic reasons.

Having resources like this may also help for being able to come across as balanced and informed when discussing local animal welfare.

MakoYass @ 2021-09-25T20:15 (+3)

Regarding "change from within", I have since found confirmation from the excellent growth economist Mushtaq Kahn https://80000hours.org/podcast/episodes/mushtaq-khan-institutional-economics/ people within an industry are generally the best at policing others in the industry, they have the most energy for it, they know how to measure adherence, and they often have inside access. Without them, policing corruption often fails to happen.

Ben_West @ 2020-03-24T16:54 (+12)

Defining "management constraints" better.

Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).

It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.

Aaron Gertler @ 2020-03-31T11:10 (+11)

A detailed study of hyper-competent ops people. 

What makes these people so competent? What tools and processes do they use to manage information and set priorities? What does the flow of their workday look like: mostly flitting between tasks, or mostly focused blocks of time? (And so on.)

Ben_West @ 2020-03-24T15:31 (+11)

More accessible summaries of technical work. Some things I would like summarized:

1. Existential risk and economic growth
2. Utilitarianism with and without expected utility

(You can see my own attempt to summarize something similar to #2 here, as one example.)

MichaelStJules @ 2020-07-25T06:59 (+6)

On 2, see this post (a link post for this).

I also left some comments on the EA Forum post pulling out the first two theorems and the definitions needed to state them, in a way that's hopefully a bit more accessible: skipping some unnecessary jargon and introducing notation only just before it's used, rather than at the start so that you have to jump back. They're still pretty technical, though. Upon reflection, it probably took me more time to write the comments than it'll save people to read my comments instead of the relevant parts of the paper. :/

There are also several other theorems in that paper.

Ben_West @ 2020-07-29T21:55 (+2)

Thanks! Yeah, I should have posted that both of these have now been published. So if anyone else reading this has a request for posts that they haven't stated publicly, consider doing so!

Alex HT @ 2020-03-31T14:26 (+6)

I'm thinking of doing (1). Is there a particular way you think this should look?

How technical do you think the summary should be? The thing that would be easiest for me to write would require some maths understanding (e.g. basic calculus and limits) but no economics understanding. E.g. about as technical as your summary, but with more maths and less philosophy.

Also, do you have thoughts on length? E.g. do you think a five-page summary is substantially more accessible than the paper, or would the summary have to be much shorter than that?

(I'm also interested in what others would find useful)

Ben_West @ 2020-04-01T20:42 (+2)

Awesome!

I personally would suggest a format of:
1. A one-paragraph summary that any educated layperson can easily understand
2. A one-page summary that a layperson with college-level math skills can understand
3. 2-5 pages of detail that someone with college-level math and Econ 101 skills can understand

This is just a suggestion though, I don't have a lot of confidence that it's correct.

Alex HT @ 2020-04-30T15:06 (+12)

Now done here. It's a ~10-page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).

Ben_West @ 2020-04-30T17:34 (+2)

You rock, thanks so much!

Milan_Griffes @ 2020-03-28T18:01 (+9)

"American UBI: for and against"

"A brief history of Rosicrucianism & the Invisible College"

"Were almost all the signers of the Declaration of Independence high-degree Freemasons?"

"Have malaria case rates gone down in areas where AMF did big bednet distributions?"

"What is the relationship between economic development and mental health? Is there a margin at which further development decreases mental health?"

"Literature review: Dunbar's number"

"Why is Rwanda outperforming other African nations?"

"The longtermist case for animal welfare"

"Philosopher-Kings: why wise governance is important for the longterm future"

"Case studies: when has democracy outperform technocracy? (and vice versa)"

"Examining the tradeoff between coordination and coercion"

"Spiritual practice as an EA cause area"

"Tools for thought as an EA cause area"

"Is strong, ubiquitous encryption a net positive?"

"How important are coral reefs to ocean health? How can they be protected?"

"What role does the Amazon rainforest play in regulating the North American biosphere?"

"What can the US do to protect the Amazon from Bolsonaro?"

"Can the Singaporean governance model scale?"

"Is EA complacent?"

"Flow-through effects of widespread addiction"

Derek @ 2020-04-03T16:30 (+4)

"The longtermist case for animal welfare"

Have you seen this? https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals

Milan_Griffes @ 2020-04-03T18:58 (+3)

I hadn't, thanks!

Linch @ 2020-11-21T01:31 (+8)

I'd be interested in a post by a historian (or very serious amateur historian) on what EAs can learn from the rise and fall of Mohism, the earliest proto-consequentialist school of philosophy/social movement that I'm aware of*.

*I'd also be interested in a more general summary post detailing other proto-consequentialist schools of philosophy and social movements.

RomeoStevens @ 2020-03-28T02:26 (+8)

"Type errors in the middle of arguments explain many philosophical gotchas: 10 examples"

"CNS imaging: a review and research agenda" (high decision relevance for moral uncertainty about suffering in humans and non humans)

"Matching problems: a literature review"

"Entropy for intentional content: a formal model" (AI related)

"Graph traversal using negative and positive information, proof of divergent outcomes" (neuroscience relevant potentially)

"One weird trick that made my note taking 10x more useful"

EdoArad @ 2020-03-28T08:14 (+3)

Do you mind expanding a bit on CNS Imaging, Entropy for Intentional content, and Graph Traversal?

Benjamin_Todd @ 2021-04-27T14:22 (+6)

Investigations into promising new cause areas:

For instance, take one of the issues listed here.

Then interview 2-3 people in the area about (i) what the best interventions are and (ii) who's currently working on it. Write up a summary, and add any of your own thoughts on how promising more work in the area seems.

You could use Open Phil's shallow cause reports as a starting template: https://www.openphilanthropy.org/research/cause-reports

evelynciara @ 2020-08-06T06:03 (+6)

I'd appreciate a forum post or workshop about how to interpret empirical evidence. Jennifer Doleac gives a lot of good pointers in the recent 80,000 Hours podcast, but I think the EA and public policy communities would benefit from a more thorough treatment.

Derek @ 2020-04-03T16:33 (+6)

Should Covid-19 be a priority for EAs?

A scale-neglectedness-tractability assessment, or even a full cost-effectiveness analysis, of Covid as a cause area (compared to other EA causes) could be useful. I'm starting to look into this now – please let me know if it's already been done.

El-Nino9 @ 2021-04-26T17:03 (+1)

Was asking myself the same question.

Ben_West @ 2021-12-07T22:43 (+5)

  1. What an overdetermined EA is, what the evidence is for them existing, and what it implies for community building strategy.
  2. Evidence that the next ~10 years might be especially influential in terms of community building.

Mati_Roy @ 2020-03-25T10:03 (+5)

Negative income taxes > UBI?

A short mathematical demonstration of how negative income taxes compare to UBI in terms of economics 101.

Here's a thread in an EA group about the topic.
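
For anyone picking this up, here's a minimal sketch of the textbook equivalence, assuming both schemes are financed by the same flat tax rate (the notation is mine: $y$ is pre-tax income, $B$ the UBI amount with flat tax rate $t$, $G$ the NIT guarantee with phase-out rate $r$):

\[
c_{\text{UBI}}(y) = B + (1 - t)\,y, \qquad c_{\text{NIT}}(y) = G + (1 - r)\,y .
\]

With $G = B$ and $r = t$, the two net-income schedules coincide at every income level; the NIT's net transfer $G - r\,y$ simply turns negative (i.e., becomes ordinary tax) above the breakeven income $y^* = G/r$. So the demonstration would likely show the economics-101 difference is mostly framing, and a real comparison turns on administration, take-up, and financing details rather than on the transfer formula itself.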

technicalities @ 2020-03-24T08:40 (+5)

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; what's his average, though? (He's a bad example, admittedly, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
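
As a concrete sketch of the core computation (the data and function name here are hypothetical; this assumes each prediction has already been resolved to a stated probability and a binary outcome):

```python
# Minimal sketch: computing a calibration curve from a pundit's track record.
# `predictions` is a list of (stated_probability, outcome) pairs, with
# outcome = 1 if the predicted event happened and 0 otherwise.

def calibration_curve(predictions, n_bins=10):
    """Bucket forecasts by stated probability, then compare the mean
    stated probability in each bucket to the observed frequency."""
    buckets = [[] for _ in range(n_bins)]
    for prob, outcome in predictions:
        idx = min(int(prob * n_bins), n_bins - 1)  # prob == 1.0 goes in the last bucket
        buckets[idx].append((prob, outcome))
    curve = []
    for bucket in buckets:
        if bucket:
            mean_stated = sum(p for p, _ in bucket) / len(bucket)
            observed = sum(o for _, o in bucket) / len(bucket)
            curve.append((mean_stated, observed, len(bucket)))
    return curve  # a well-calibrated pundit has mean_stated ≈ observed in each bucket

# Hypothetical track record: three 90% calls, two 60% calls, one 20% call.
example = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0), (0.2, 0)]
print(calibration_curve(example, n_bins=5))
```

The hard part, of course, is the collation: resolving vague punditry into explicit probabilities and outcomes, which is where most of the judgment calls would live.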

Aaron Gertler @ 2020-03-24T06:48 (+5)

A post about when we should and should not use "lives saved" language in describing EA work.

I find that telling people they can save a life for $5000 often leads to a lot of confusion: Whose life is being saved? What if they die of something else a few months later? Explaining QALYs isn't too hard if you have a couple of minutes, but you often have a lot less time than that.

Is there some shorthand we can use for "giving 50 healthy years, in expectation, across a population" that makes it sound anywhere near as good as simply "saving a life"? How important is it to be accurate in this dimension, vs. simply allowing people to conflate QALY/VSL with "saving a specific person"?

Ben_Harack @ 2021-08-08T00:49 (+4)

Credible qualitative and/or quantitative evidence on the effectiveness of habits, tools, and techniques for knowledge work.

evelynciara @ 2021-06-27T21:44 (+4)

I think it would be really interesting for someone to write about the intellectual history of environmental ethics and animal ethics, and probably environmentalism more broadly. The rift between them dates back at least to the 1980s, and I think it's important for EAs interested in environmentalism or (wild) animal welfare to understand how they're building on/situated in this discourse.

(Inspired by the recent 80K episode on the intellectual history of x-risk.)

jackmalde @ 2021-01-04T07:31 (+4)

The implications of Brexit for the potential to do good when located in the UK.

The potential is plausibly lessened if the UK has less influence on the world stage. I appreciate this may be seen as a somewhat political post, but I think it may be possible to write it without actually passing judgement on whether Brexit was a good or bad thing on the whole.

alexrjl @ 2021-01-04T08:48 (+1)

I'd be excited to read this.

Aaron Gertler @ 2021-07-19T02:19 (+3)

I want people to write posts about their jobs, and how they got those jobs. I think this will help a lot of people, both with object-level information about getting particular jobs, and by making a meta-level statement that it's not impossible or unrealistic to get a job in EA.

sky @ 2020-03-29T13:18 (+3)

Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.

I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [ABC] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/When do you want to prioritize that? How would you explore that?"

I think personal stories here would be illustrative.

technicalities @ 2020-03-29T14:08 (+2)

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?

  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people’s happiness and don’t care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
  • You believe, in principle, that we should let people make their own decisions about their lives.
  • You want an intervention that definitely has at least a small positive effect.
  • You have just looked at GDLive and are no longer responsible for your actions.

evelynciara @ 2020-03-26T15:05 (+3)

I care about a lot of different U.S. policy issues and would like to get a sense of their neglectedness and tractability. So I'd love it if someone could do a survey to find out how many people in the U.S. work full time on various issues and how hard it is to get bills passed on them.

EA-sy @ 2022-03-09T23:29 (+2)

An interesting post (although perhaps skewing too negative in premise) would be an article on how best to reach the ‘Reluctant Effective Altruists’ (REAs) of society, and what options can be advertised or made available to them… other than giving what they can, which is an obvious starting point.

For the purposes of the post, the REAs would be those who agree with EA in principle but fall into any of the following categories, and likely more besides:

Of course, time and attention are very precious. Perhaps reaching REAs is best done simply by expounding the logic of effective altruism and associated concepts as far and wide as possible… naturally, some REAs will shift to become EAs, and that's the simplest route.

However, an assessment of the different categories of REA, and of how to speak directly to each and match options suited to them, may create a marginal gain that compounds over time… one that wouldn't otherwise be realised.

Alexis Carlier @ 2020-04-03T18:15 (+2)

When to use quantitative vs qualitative research

Without a framework for thinking about this, I'm often unsure what I should be learning from qualitative studies, and I don't always know when it makes sense to conduct them. (This seems related to the debate between cliometricians and counterfactual narrative historians; some discussion here, page 18.)

Khorton @ 2020-04-03T20:36 (+3)

I don't think this can be taught in one post, because you have to be able to actually use the research methods before you can decide which one to use.

Venkatesh @ 2022-04-26T05:38 (+1)

Write about the replication crisis in the style of an 80,000 Hours problem profile. Basically: write about the problem, apply the SNT framework to it, mention orgs currently working on it, mention potential career options for someone who wants to address it, etc.

This suggestion came after reading this post.

MichaelA @ 2020-03-27T06:10 (+1)

Posts investigating/discussing any of the questions listed here. These are questions which would be "valuable for someone to research, or at least theorise about, that the current pandemic in some way 'opens up' or will provide new evidence about, and that could inform EAs’ future efforts and priorities".

If anyone has thought of such questions, please add them as answers to that post.

An example of such a question which I added: "What lessons can be drawn from [events related to COVID-19] for how much to trust governments, mainstream experts, news sources, EAs, rationalists, mathematical modelling by people without domain-specific expertise, etc.? What lessons can be drawn for debates about inside vs outside views, epistemic modesty, etc.?"