Announcing a contest: EA Criticism and Red Teaming

By Lizka, finm, jtm @ 2022-06-01T18:58 (+276)

Update from October 2022: This contest has wrapped up. You can see the winners of the contest here.

Introduction

tl;dr: We're running a writing contest for critically engaging with theory or work in effective altruism (EA). 

Submissions can be in a range of formats (from fact-checking to philosophical critiques or major project evaluations); and can focus on a range of subject matters (from assessing empirical or normative claims to evaluating organizations and practices).  

We plan on distributing $100,000, and we may end up awarding more than this amount if we get many excellent submissions. 

The deadline is September 1, 2022. You can find the submission instructions below. Neither formal nor significant affiliation with effective altruism is required to enter the contest.

We are: Lizka Vaintrob (the Content Specialist at the Centre for Effective Altruism), Fin Moorhouse (researcher at the Future of Humanity Institute), and Joshua Teperowski Monrad (biosecurity program associate at Effective Giving). The contest is funded via the FTX Future Fund Regranting Program, with organizational support from the Centre for Effective Altruism.

We ‘pre-announced’ this contest in March.

The rest of this post gives more details, outlines the kinds of critical work we think are especially valuable, and explains our rationale. We’re also sharing a companion resource for criticisms and red teams.

How to apply

Submit by posting on the EA Forum[1] and tagging the post[2] with the contest’s tag, or by filling out this form.

If you post on the Forum, you don't need to do anything except tag your post[2] with the “Criticism and Red Teaming Contest” topic, and we’ll consider your post for the contest. If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post it to the Forum (although please be mindful of copyright issues). 

We also encourage you to refer other people’s work to the contest if you think more people should know about it. To refer someone else’s work, please submit it via this form. If it wins, we may reward you for this — please see an explanation below.

The deadline is September 1, 2022.

Please contact us with any questions. You can also comment here.

Prizes

We have $100,000 currently set aside for prizes, which we plan on fully distributing.

Prizes will fall under three main tiers:

In addition, we may award a prize of $100,000 for outstanding work that looks likely to cause a very significant course adjustment in effective altruism.

Therefore, we’re prepared to award (perhaps significantly) more than $100,000 if we’re impressed by the quality and volume of submissions. 

We’re also offering a bounty for referring winning submissions: if you refer a winning submission (if you’re the first person to refer it, and the author never entered the contest themselves), you’ll get a referral bounty of 5% of the award.

We will also consider helping you find proactive funding for your work if you require the security of guaranteed financial support to enable a large project (though we may deduct proactive funding from any prize money you are awarded). See the FAQ for more details.

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.

Criteria

Overall, we want to reward critical work according to a question like: “to what extent did this cause me to change my mind about something important?” — where “change my mind” can mean “change my best guess about whether some claim is true”, or just “become significantly more or less confident in this important thing.”

Below are some virtues of the kind of work we expect to be most valuable. We’ll look out for these features in the judging process, but we’re aware it can be difficult or impossible to live up to all of them:

We don't expect that every winning piece needs to do well at every one of these criteria, but we do think each of these criteria can help you most effectively change people’s minds with your work.

We also want to reward clarity of writing, avoiding ‘punching down’, awareness of context, and a scout mindset. We don’t want to encourage personal attacks, or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this, and share insights we haven’t yet heard.

What to submit

We’re looking for critical work that you think is important or useful for EA. That’s a broad remit, so we’ve suggested some topics and kinds of critiques below.

If you’re looking for more detail, we’ve collaborated on a separate post that collects resources for red teaming and criticisms, including guides to different kinds of criticisms, and examples. If you’re interested in participating in this contest, we highly recommend that you take a look. (We’d also love help updating and improving it.)

It’s helpful — but not required — to also suggest 1–3 people you think most need to heed your critique. For many topics, this nomination is better done privately (contact us, or submit through the form). We’ll send it their way where possible. (If you don’t know who needs to see it most, we’ll work it out.)

Formats

You might consider framing your submission as one of the following:

Again, for more detail on topic ideas, kinds of critiques, and examples: visit our longer post with resources for critiques and red teams.

We don’t want to give an analogous list for topic ideas, because any list is necessarily going to leave things out. However, you might take a look at Joshua’s post outlining four categories of effective altruism critiques: normative and moral questions, empirical questions, institutions & organizations, and social norms & practices.

Browsing this Forum (especially curated lists like the Decade Review prizewinners, the EA Wiki, and the EA Handbook) could be a good way to get ideas if you are new to effective altruism.

If you’re unsure whether something you plan on writing could count for this contest, feel free to ask us.

Additional resources

We’ve compiled a companion post, in which we’ve collected some resources for criticisms and red teaming. 

We’re also tentatively planning on running (or helping with) several workshops on criticisms and red teaming, which will be open to anyone who is interested, including people who are new to effective altruism. We hope that the first two will be in June. If you’d like to hear about dates when they’re decided, you can fill out this form.

The judging panel

The judging panel is:

No one on the judging panel will be able to “veto” winners, and every submission will be read by at least two people. If submissions are technical and outside of the panelists’ fields of expertise, we will consult domain experts. 

If we get many submissions or if we find that the current panel doesn’t have enough bandwidth, we may invite more people to the panel. 

Rationale

Why do we think this matters? In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.

Encouraging criticism is also a way to encourage a culture of independent thinking, and openness to criticism and scrutiny within the EA community. Part of what made and continues to make EA so special is its epistemic culture: a willingness to question and be questioned, and freedom to take contrarian or unusual ideas seriously. As EA continues to grow, one failure mode we anticipate is that this culture may give way to a culture of over-deference.

We also really care about raising the average quality of criticism. Perhaps you can recall some criticisms of effective altruism that you think were made in bad faith, or otherwise misrepresented their target in a mostly unhelpful and frustrating way. If we don’t make an effort to encourage more careful, well-informed critical work, then we may have less reason to complain about the harms that poor-quality work can cause, such as by misinforming people who are learning about effective altruism. Crucially, we’d also miss out on the real benefits of higher-quality, good-faith criticism.

In his opening talk for EA Global this year, Will MacAskill considered how a major risk to the success of effective altruism is the risk of degrading its quality of thinking: “if you look at other social movements, you get this club where there are certain beliefs that everyone holds, and it becomes an indicator of in-group mentality; and that can get strengthened if it’s the case that if you want to get funding and achieve very big things you have to believe certain things — I think that would be very bad indeed. Looking at other social movements should make us worried about that as a failure mode for us as well.”

It’s also possible that some of the most useful critical work goes relatively unrewarded because it might be less attention-grabbing or narrow in its conclusions. Conducting really high-quality criticism is sometimes thankless work: as the blogger Dynomight points out, there’s rarely much glory in fact-checking someone else’s work. We want to set up some incentives to attract this kind of work, as well as more broadly attention-grabbing work.

Ultimately, critiques have an impact by bringing about actual changes. The ultimate goal of this contest is to facilitate those positive changes, not just to spot what we’re currently getting wrong.

In sum, we think and hope: 

  1. Criticism will help us form truer beliefs, and that will help people with the project of doing good effectively. People and institutions in effective altruism might be wrong in significant ways — we want to catch that and correct our course.
    1. This is especially important in the non-profit context, since it lacks many of the signals in the for-profit world (like prices). For-profit companies have a strong signal of success: if they fail to make a profit, they eventually fail. One insight of effective altruism is that there are weaker pressures for nonprofits to be effective — to achieve the goals that really matter — because their ability to fundraise isn’t necessarily tied to their effectiveness. Charity evaluators like GiveWell do an excellent job at evaluating nonprofits, but we should also try to be comparably rigorous and impartial in assessing EA organizations and projects, including in areas where outputs are harder to measure. Where natural feedback loops don’t exist, it’s our responsibility to try to create them!
    2. It’s also especially important for effective altruism, given that so many of the ideas are relatively new and untested. We think this is especially true of longtermist work.
  2. Stress-testing important ideas is crucial even when the result is that the ideas are confirmed; this allows us to rely more freely on the ideas.
  3. We want to sustain a culture of intellectual openness, open disagreement, and critical thinking. We hope that this contest will contribute to reinforcing that culture.
  4. Highlighting especially good examples of criticism may create more templates for future critical work, and may make the broader community more appreciative of critical work.
  5. We also think that people in the effective altruism network tend to hear more from other people in the network, and hope that this contest might bring in outside experts and voices. (You can see more discussion of this phenomenon in "The motivated reasoning critique of effective altruism".)
  6. We want to break patterns of pluralistic ignorance where people underrate how sceptical or uncertain others (including ‘experts’) are about some claim.

Finally, we want to frame this contest as one step towards generating high-quality criticism, and not the final one. For instance, we’re interested in following up with winning submissions, such as by meeting with winning entrants to discuss ways to translate their work into concrete changes and communicate it to the relevant stakeholders.

What this is not about

Note that critical work is not automatically valuable just by virtue of being critical: it can be attention-grabbing in a negative way. It can be stressful and time-consuming to engage with bad-faith or ill-considered criticism. We have a responsibility to be especially careful here.

This contest isn’t about making EA look open-minded or self-scrutinizing in a performative way: we want to award work that actually strikes us as useful, even if it isn’t likely to be especially popular or legible for a general audience.

We’re not going to privilege arguments for more caution about projects over arguments for urgency or haste. Scrutinizing projects in their early stages is a good way to avoid errors of commission; but errors of omission (not going ahead with an ambitious project because of an unjustified amount of risk aversion, or oversensitivity to downsides over upsides) can be just as bad.

Similarly, we don’t want this initiative to only result in writing that one-directionally worries about EA ideas or projects being too ‘weird’ or too different from some consensus or intuitions. We’re just as interested to hear why some aspect of EA is being insufficiently weird — perhaps not taking certain ideas seriously enough. Relatedly, this isn’t just about being more epistemically modest: we are likely being both overconfident in some spots, and overly modest in others. What matters is being well calibrated in our beliefs!

We would also caution against criticizing the actions or questioning the motivations of a specific individual, especially without first asking them. We urge you to focus on the ideas or ‘artefacts’ individuals produce, without speculating about personal motivations or character — this is rarely helpful.

Contact us

Email criticism-contest@effectivealtruism.com, message any of the authors of this post via the Forum, or leave a comment on this post. 

Q&A

Submissions and how they’ll be judged

About the contest

Other

We're extremely grateful to everyone who helped us kick this off, including the many people who gave feedback following our pre-announcement of the contest. 

  1. ^
  2. ^ Instructions for how to tag a post are here.


QixiSail @ 2022-06-02T06:23 (+46)

A few questions, suggestions and concerns.

Firstly, I expect the people whose criticisms I'd most want to hear to be very busy. I hope the contest will consider lower effort but insightful or impactful submissions to account for this?

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find blindspots in the particular way of thinking, arguing and knowing EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?

Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.

Fourthly, I would also expect criticisms from people who have been heavily involved in EA over the years to be valuable but, if drawing on their experience, hard to write fully anonymously. What reassurances can you offer and safeguards do you have in place beyond trusting the panelists and administrators that pieces would be fairly assessed? What plans do you have in place to help prevent and mitigate backlash, especially given that many decisions within EA are network based and thus even with the best of intentions criticism is likely to have some costs to relationships.

finm @ 2022-06-03T23:12 (+17)

Replying in personal capacity:

I hope the contest will consider lower effort but insightful or impactful submissions to account for this?

Yes, very short submissions count. And so should "low effort" posts, in the sense of "I have a criticism I've thought through, but I don't have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all." I'd much rather see unpolished ideas than nothing at all.

Secondly, I'd expect people with the most valuable critiques to be more outside EA since I would expect to find blindspots in the particular way of thinking, arguing and knowing EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?

Thanks, I think this is important.

  • We (co-posters) are proactively sharing this contest with non-EA circles (e.g.), and others should feel welcome and encouraged to do the same.
  • Note the incentives for referring posts from outside the Forum. This can and should include writing that was not written with this contest in mind. It could also include writing aimed at some idea associated with EA that doesn't itself mention "effective altruism".
  • It obviously shouldn't be a requirement that submissions use EA jargon.
  • I do think writing a post roughly in line with the Forum guidelines (e.g. trying to be clear and transparent in your reasoning) means the post will be more likely to get understood and acted on. As such, I do think it makes sense to encourage this manner of writing where possible, but it's not a hard requirement.
  • To this end, one idea might be to speak to someone who is more 'fluent' in modes of thinking associated with effective altruism, and to frame the submission as a dialogue or collaboration.
  • But that shouldn't be a requirement either. In cases where the style of argument is unfamiliar, but the argument itself seems potentially really good, we'll make the effort — such as by reaching out to the author for clarifications or a call. I hope there are few really important points that cannot be communicated through just having a conversation!
  • I'm curious which non-EA judges you would have liked to see! We went with EA judges (i) to credibly show that representatives for big EA stakeholders are invested in this, and (ii) because people with a lot of context on specific parts of EA seem best placed to spot which critiques are most underrated. I'm also not confident that every member of the panel would strongly identify as an "effective altruist", though I appreciate connection to EA comes in degrees.

Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.

Yes. We'll try to be charitable in looking for important insights, and forgiving of inaccuracies from missing context where they don't affect the main argument.

That said, it does seem straightforwardly useful to avoid factual errors that can easily be resolved with public information, because that's good practice in general.

What plans do you have in place to help prevent and mitigate backlash[?]

My guess is that the best plan is going to be very context specific. If you have concerns in this direction, you can email criticism-contest@effectivealtruism.com, and we will consider steps to help, such as by liaising with the community health team at CEA. I can also imagine cases where you just want to communicate a criticism privately and directly to someone. Let us know, and we can arrange for that to happen also ("we" meaning myself, Lizka, or Joshua).

AppliedDivinityStudies @ 2022-06-03T19:23 (+9)

I can't speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread if it was good enough. But I'm hesitant to say "low effort is fine", because I'm not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I'm excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On "style of argument" I'll just point to my own backlog of very non-EA writing on mostly non-EA topics.
3. I wouldn't hold it against a submission as a personal matter, and wouldn't dismiss it out of hand, but it's definitely a negative if there are substantive mistakes that could have been avoided using only public information.

Gavin @ 2022-06-02T08:56 (+45)

A big part of my getting into EA was this debate between Oxford lefties and the baby 80k staff. The socialist/deontological case was weaker. But the points that Mills makes about systemic change and the streetlight fallacy describe the two biggest ways EA practice has changed in the last decade. We moved in his direction, despite him.

Gavin @ 2022-06-02T09:02 (+26)

Maybe the lesson is: "even if you don't win, you might shape the movement"

Pablo @ 2022-06-02T23:25 (+14)

I feel that external criticism of EA was generally stronger back then. Perhaps this is just a reflection of broader recent cultural trends, which have degraded the quality of public discourse.

Here is a useful steelman of Mills' critique, courtesy of 'pragmatist' (note that "earning to give" used to be known as "professional philanthropy"):

I'm not endorsing this argument (although there are parts of it with which I sympathize), but I think it is a lot better than the case for Mills as you present it in your post:

If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would encourage significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.

80k strongly encourages professional philanthropism over political activism, based on an individualist analysis. Any individual's chance of making a difference as an activist is small, much smaller than his chance of making a difference as a professional philanthropist. Directed at individuals, this might be sound ethical advice. But the message has pernicious consequences when directed at the aggregate, as 80k intends.

It is possible for political activism to move society towards a fundamental systemic change that would massively reduce global injustice and suffering. However, this requires a cadre of dedicated activists. Replaceability does not hold of political activism; if one morally serious and engaged activist is lured away from activism, it depletes the cadre. Now any single activist leaving (or not joining) the cadre will not significantly affect the chances of revolution succeeding. But if there is a message in the zeitgeist that discourages political participation, instead encouraging potential revolutionaries to participate in the capitalist system, this can significantly impact the chance of revolutionary success. So 80k's message is dangerous if enough motivated and passionate young people are convinced by their argument.

It's sort of like an n-person prisoner's dilemma, where each individual's (ethically) dominant strategy is to defect (conform with the capitalist system and be a philanthropist), but the Nash equilibrium is not the Pareto optimum. This kind of analysis is not uncommon in the Marxist literature. Analytic Marxists (like Jon Elster) interpret class consciousness as a stage of development at which individuals regard their strategy in a game as representative of the strategy of everyone in their socio-economic class. This changes the game so that certain strategies which would otherwise be individually attractive but which lead to unfortunate consequences if adopted in the aggregate are rendered individually unattractive. [It's been a while since I've read this stuff, so I may be misremembering, but this is what I recall.]

RyanCarey @ 2022-06-02T23:33 (+17)

I feel that external criticism of EA was generally stronger back then. Perhaps this is just a reflection of broader recent cultural trends.

Maybe because EA was tiny and elite then, so only a true intellectual would bother to criticise.

Gavin @ 2022-06-03T08:10 (+5)

Back in my day my enemies did instrumental harm like a rational person.

particlemania @ 2022-06-02T18:40 (+8)

At the same time, if the shift in EA practice as claimed by you is indeed real (which I think it is), then it would also seem that EA has failed to do adequate mistake acknowledgement with respect to past critiques. This might hold some insights as to why certain forms of criticisms are by-default disincentivized.

(I do hope that this contest will make a genuine attempt to correct that disincentive landscape.)

Gavin @ 2022-06-02T19:52 (+5)

Sounds right

The problem is, we're not an agent and so no one makes The decision to shift and so no one is noticeably responsible for acknowledging credit and blame. But it's still fair to want it.

Luke Freeman @ 2022-06-03T00:10 (+33)

I also suspect that making a big deal about the winners would be a good thing. For example, if the winner of the prize was awarded on the main stage at an EA Global and given a fireside chat that'd further encourage good faith criticism and demonstrate that we really care about it.

Max_Daniel @ 2022-06-02T11:56 (+23)

Thank you so much for your work on this, I'm excited to see what comes out of it. 

One-time pad @ 2022-06-03T10:01 (+19)

What percentage of the people on the panel are longtermists? It seems, at first glance, that almost everyone is, or at least working in a field/org that strongly implies they are. If so, isn't this a problem for the impartiality of the results? Even if not, how is an independent outsider (like the people making submissions) supposed to believe that? 

This is likely to have the opposite effect; it will reinforce the current thinking in EA rather than challenge it, while monetarily rewarding people for parroting back the status quo. 

Gavin @ 2022-06-03T20:28 (+13)

I sympathise with this and generally think that EA should take conflicts of interest more seriously.

That said, I think this is subtly the wrong question: what we really want is, "how rational are the judges?" How often did they change their mind in response to arguments of various kinds from various places of various tones?

Can we say anything to convince you of that? Maybe.

Anyway: Most days I feel like more of a "holy shit x-risk" guy than a strong longtermist. I briefly worked in international development, was a socialist, a feminist, a vegan, an e2g, etc, etc. I took and liked a bunch of classes on weird things like Nietzsche, Derrida, Bourdieu. My comments on here are a good sample of me on my best behaviour.

AppliedDivinityStudies @ 2022-06-03T19:11 (+10)

The crucial complementary question is "what percentage of people on the panel are neartermists?"

FWIW, I have previously written about animal ethics, interviewed Open Phil's neartermist co-CEO, and am personally donating to neartermist causes.

Lizka @ 2022-09-07T19:28 (+14)

Just a quick update: we got more submissions than we were expecting, and a number of the panelists are low-capacity right now. We're still targeting the end-of-September deadline, but there's a chance that we'll get delayed by a week or two.  

I apologize in advance if that ends up happening. 

Locke @ 2022-09-04T16:22 (+13)

How do EA anchor institutions plan to operationalize changes based on these critiques? There seems to be a bit of a pattern in some that I've read where people point out problems and then nothing changes. 

AnonymousEAForumAccount @ 2022-09-15T01:45 (+5)

This is a really good question, and I’m curious whether the contest organizers have anything planned. I’d love to see some sort of after the fact analysis of whether this contest led to meaningful changes or whether it looks more like kabuki theater with hindsight. I’d be interested in looking at this question from multiple perspectives, e.g. having the largest EA organizations self-report whether they’ve updated in any way, and asking authors of contest contributions (or a subset of prize winners and/or posts that cleared a certain karma threshold) whether they think their concerns have been addressed.

I would think some sort of retrospective evaluation would be an important part of deciding whether or not to run another Red Teaming contest in the future.

vaniver @ 2022-06-03T16:51 (+11)

I'm interested in fleshing out "what you're looking for"; do you have some examples of things written in the past which changed your minds, which you would have awarded prizes to?

For example, I thought about my old comment on patient long-termism, which observes that in order to say "I'm waiting to give later" as a complete strategy you need to identify the conditions under which you would stop waiting (as otherwise, your strategy is to give never). On the one hand, it feels "too short" to be considered, but on the other hand, it seems long enough to convey its point (at least, embedded in context as it was), and so any additional length would be 'more cost without benefit'.

Gavin @ 2022-06-03T20:00 (+11)

Random personal examples:

  • This won the community's award for post of the decade. Its disagreement with EA feels half-fundamental; a sweeping change to implementation details and some methods. 
  • This was much-needed and pretty damning. About twice as long as it needed to be though.
  • This old debate looks good in hindsight
  • The initial patient longtermist posts shook me up a lot.
  • Robbie's anons were really good
  • This is on the small end of important, but still rich and additive.
  • This added momentum to the great intangibles vibe shift of 2016-8 
  • This was influential, bizarrely necessary to correct a community bubble which burned a lot of time and mental health. But hardly fundamental.
  • Can't remember where it was, a Progress Studies bit about how basic science looks bad on a naive cost-benefit view but has to date clearly been the fount of utility
  • EA is (was?) ignoring criticism


I like your comment and would've taken it seriously, but this contest is only accepting things written after March 2022. Here's a form for older stuff (no cash yet sorry).

David Gretzschel @ 2022-08-25T15:10 (+6)

When is the exact deadline? Like... the 1st September by UTC? Or the 1st of September by some American timezone?

Lizka @ 2022-08-26T12:19 (+5)

We didn't specify when we posted the announcement, so let's be as generous as possible and say "Anywhere on Earth." (Here's a live clock for AoE time.) 

clem @ 2022-08-28T09:46 (+1)

Sorry to be a nit pick, but what time on 1st September AoE? 00:00 or 23:59? "As generous as possible" would suggest 23:59, but granted it feels like that might be taking the piss a little. 

Lizka @ 2022-08-29T10:18 (+3)

11:59 pm AoE on September 1st

It's BST in the announcement post, but I've messed up in this comment thread (I missed the time while skimming) and now commit to AoE. Apologies for the confusion, folks!

From a different comment thread

I'd be personally grateful (and grateful in my Forum role) if people didn't wait until the last minute to post their submissions (but last-minute submissions won't be penalized in the scoring). Besides other problems, posting last-minute doesn't allow wiggle room for things to go wrong. 

And as an FYI, we're not going to be accepting any late submissions. 

Yitz @ 2022-06-02T10:25 (+6)

I’m really excited about this, and look forward to participating! Some questions: how will you determine which submissions count as “Winners” vs. “runners up” vs. “honorable mentions”? I’m confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make each category?

Ardenlk @ 2022-06-07T12:51 (+5)

Just an appreciation comment: I think this post was very well written and handled tricky questions well, especially the Q&A section.

And this seems great to highlight:

We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions.

Martin (Huge) Vlach @ 2022-10-18T14:17 (+4)

As this page comes up first on a Google search for the contest, I'd like to suggest linking the results at the beginning or end of this article now.

Lizka @ 2022-10-18T14:33 (+3)

Thanks for the suggestion! I just added a note. 

Ryan Beck @ 2022-06-07T11:25 (+3)

It's possible I missed it but I didn't see anything stating whether multiple submissions from one author are allowed, I assume they are though?

Gavin @ 2022-06-07T13:09 (+3)

Don't see why not, as long as it's not salami sliced.

Ryan Beck @ 2022-06-07T14:39 (+1)

Makes sense, thanks!

ibatra171 @ 2022-06-05T14:33 (+3)

Is co-authorship permitted? Apologies if I missed this in the post! 

Lizka @ 2022-06-06T11:03 (+3)

It's permitted, yes! 

The team of coauthors who write a winning submission will get the prize, and can share it as the members see fit. A good default might be to split the prize evenly; if you're collaborating on something that might win a prize you think should be distributed differently, I'd recommend agreeing on that in advance. 

(No need to apologize. I don't think we discussed co-authorship anywhere in the post. I'm now thinking we should consider adding it to the Q&A section, so thank you for bringing it up!)

Locke @ 2022-06-03T14:47 (+3)

Thanks for putting this contest together! Is there a comprehensive list of major EA projects? 

Gavin @ 2022-06-03T16:13 (+3)

Best I can think of is looking for the announcement posts inside each of these tags

https://forum.effectivealtruism.org/topics/all

Dem0sthenes @ 2022-11-17T16:12 (+2)

Do the new SBF revelations cause any reconsideration of this contest? 

Nathan Young @ 2022-08-27T18:40 (+2)

The end of September 1st, right?

Brendon_Wong @ 2022-08-29T08:33 (+1)

There is a section in the article that says:

Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.

(I nearly missed this as well)

Lizka @ 2022-08-29T10:16 (+3)

When re-skimming the announcement post that I myself co-wrote, I missed this too, and have now committed to being "as generous as possible," i.e. Anywhere on Earth (here's a live clock for AoE time). So it's 11:59 pm AoE on September 1st.

I'd be personally grateful (and grateful in my Forum role) if people didn't wait until the last minute to post their submissions (but last-minute submissions won't be penalized in the scoring). Besides other problems, posting last-minute doesn't allow wiggle room for things to go wrong. 

And as an FYI, we're not going to be accepting any late submissions. 

Tyner @ 2022-08-05T21:14 (+2)

Question - how did you select judges for your contest?  How did you balance expertise with diversity?

Thanks!

quinn @ 2022-06-20T03:01 (+2)

One issue is that networked and well-connected people may have greater access to pre-publication criticism in the form of Google Doc comments, and getting Google Doc comments seems like a fairly robust strategy for improving the quality of an essay. If we simply award the best essays, we may ossify some dynamics around being networked and well connected, or fail to recognize people from outside our ingroup. 

Dem0sthenes @ 2022-09-19T21:47 (+1)

Can we run a formal critique of the criticism contest after I find out that my submissions didn't win? I don't have a pile of cash though I do have a lot of extra special bonus points for people. 

Refined Insights @ 2022-08-19T13:25 (+1)

Hello. Would length be an issue? For instance, would a highly focused criticism of, say, 7,000 to 10,000 words count?

tyleralterman @ 2022-07-23T12:47 (+1)

I'm curious to hear more about how critiques have been processed historically by the EA movement. Shortform post here: https://forum.effectivealtruism.org/posts/boYH7XH4xE9iugxWi/tyleralterman-s-shortform?commentId=RJYzym2mwrnXP9amn

Sophia @ 2022-06-12T08:13 (+1)

Can someone post something and then re-post a better version that takes into account all of the feedback they got in the comments? (or should early versions not be tagged with the contest tag?)

Motivation for this question: trying to work out a low-effort way for my smart[1] non-EA friends to

 1) post their thoughts in a way that feels relatively low-stakes but still has a clear upside; and

 2) give them the option to iterate on their ideas in the coming months based on anything that they find thought-provoking in the initial response.

  1. ^

    I have the good fortune of often being the least intelligent person in the room and I feel I should be making better use of this superpower 💪🏼

Sophia @ 2022-06-12T08:19 (+1)

It's probably extremely hard to critique people who have spent 10 years steel-manning their assumptions[1] without being able to go back and forth to build up any butterfly ideas, even if there is a great critique out there.

  1. ^
Sophia @ 2022-06-12T08:21 (+1)

(and I also am obviously not going to be nearly as good an intellectual sparring partner as the entire EA community collectively would be so it seems better to develop ideas in public than in private)

Lizka @ 2022-06-15T09:44 (+5)

I'd be happy to see this kind of process, and don't think it's against the rules of the contest. You might not want to tag early versions with the contest tag if you don't expect them to win and don't think panelists should bother voting on them, but tagging the early versions wouldn't count against you for the final version. 

On a different note (taking off my contest-organizer hat, putting on my Forum hat): I think people should feel free to post butterfly ideas with the intention of developing them further. The Forum exists in part for this kind of communal idea development. (Of course, this isn't the best approach for certain kinds of idea development. In particular, it might make sense to do some basic research on the Forum before posting certain questions or starting to write something long on a topic you're very unsure about.)

Barracuda @ 2022-06-12T00:33 (+1)

Hello, I have written a post in response to this contest but it doesn't appear to be visible for whatever reason - net downvotes perhaps? Here is a link in case anyone is interested: https://forum.effectivealtruism.org/posts/bep6LhLcKqtEj3eLs/belonging

Charles He @ 2022-06-12T01:07 (+2)

It’s visible but well off the front page without scrolling or pressing “more posts”.

Basically, there’s limited space and posts with low interest or “low quality” will fall off (I haven’t read your post, this isn’t judgement).

Even without positive votes, your post would have been visible for a few hours to a day. Usually, forum members will upvote posts they think deserve to be on the front page. You might not have gotten any votes.

I guess this is unfair or path dependent but basically there’s limited space and no better scheme has been clearly proposed (keeping new posts higher comes at the expense of older highly voted posts for example).

Teddyboy123 @ 2022-06-08T12:05 (+1)

Will you consider all submissions together after 1 September, or on an ad hoc basis as and when they are received? Is there any advantage or disadvantage to posting early? I am working on something currently but am wary of submitting it early and having it fall to the back of people’s minds by the time the decisions are made in September.

TyQ @ 2022-06-07T14:03 (+1)

Are people encouraged to share this opportunity with non-EA friends and in non-EA circles? If so, maybe consider making this clear in the post?

John Bridge @ 2022-06-03T10:53 (+1)

I'm currently writing a sequence exploring the legal viability of the Windfall Clause in key jurisdictions for AI development. It isn't strictly a red team or a fact-checking exercise, but one of my aims in writing the sequence is to critically evaluate the Clause as a piece of longtermist policy.

If I'd like to participate, would this sort of thing be eligible? And should I submit the sequence as a whole or just the most critical posts?

finm @ 2022-06-06T10:54 (+2)

Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.