Rational Animations' Script Writing Contest
By Writer @ 2022-09-15T16:56 (+63)
Cross-posted to LW.
Edit: The deadline has been removed! You can continue to send scripts.
Introduction
I'm announcing a script writing contest for the Rational Animations YouTube channel. Write scripts of 2500 words maximum about a topic of longtermist relevance, and win:
- 5000 USD.
- We will make an animated video based on your script to be published on the YouTube channel with due credits. It will be animated by a team of 9 animators or an animation studio and will be of much higher quality than anything we've done so far.
- A potential job offer as a scriptwriter for the channel.
Bonus: you may suggest already-written work. If we bring your suggestion to the channel as it is (without having to adapt it), you will get $500, but nothing if I am already considering the work.
Soft deadline: October 15th.
Topics
The topic of your script should be of longtermist relevance. Examples: existential risks of any kind, the future of humanity, philosophy related to longtermism, technologies of longtermist relevance. I don't mean to be too strict: I'll accept scripts about a broad range of topics, and I might even make exceptions and offer a prize to scripts not relevant to longtermism if I find them a lot better than the competition. As a reference, the videos already on the channel that I would classify as longtermism-relevant are:
- Can we make the future a million years from now go better?
- Everything might change forever this century (or we’ll go extinct)
- Will we grab the universe? Grabby aliens predictions.
- Humanity was born way ahead of its time. The reason is grabby aliens.
- We are failing to see how much better off humanity could be
- Longtermism: an idea that could save 100 billion trillion lives
Length and number of scripts
The maximum length of the script should be 2500 words. Of course, you may split a longer script into two or more parts. You may send an unlimited number of scripts.
Are there any more restrictions?
Not really.
So far, we've only published explainers on the YT channel, but you don't have to limit yourself to explainers. Do you want to write a Fable of the Dragon-Tyrant style short story? OK. Do you want to write an anime episode? OK. You can do whatever you want.
You may also adapt or send older work you've written that you think would make a good video for the YouTube channel if narrated as it is or with some tweaks. In that case you'll win the $500 prize, unless you've made a significant adaptation effort.
Assorted things that make a good script, in order of importance:
1. For any claim, you should include sources. The script should be as epistemically legible as possible for the audience.
2. Focus on what really matters. If you write about complex topic X, there will be a few core points that are necessary and sufficient to build a solid understanding of topic X. Focus on those points. Yes, I understand this is a bit vague as a suggestion.
3. Don't simplify, but explain. Write for a really smart audience that is ignorant about the topic. Some background explanations can be skipped, and some others are required. Recognizing the difference is a bit of an art.
4. Be concise, and avoid useless words and filler phrases.
5. Telling stories and using concrete details help with engagement and work better for teaching.
6. How you start matters: if the beginning of the script is boring, that's especially bad for engagement on YouTube.
7. Challenge the preconceived assumptions a viewer might have about a topic, if the topic lends itself to it.
8. See old topics from new, surprising angles, but as with point 7, don't force it.
How to send scripts
Send your scripts to rationalanimations@gmail.com (beware of the spelling: it's "rationalanimations", not "rationalanimation").
In addition to sending the script to the YouTube channel's e-mail, you may also publish it as a comment under this post or as a top-level post in the EA Forum or Less Wrong. If you do, link your post in the e-mail.
Deadline
October 15th, but you may continue to send scripts after the deadline, and I'll see if I can still offer prizes.
Prizes and winners
Prizes include:
- 5000 USD. Number of winners: 0-4, perhaps more. You can’t win this prize more than once.
- Animated YouTube video on Rational Animations based on your script, with due credits, unless you don't want to be credited. Number of winners: 0-4, perhaps more. If you haven't won a money prize, I'll consider you for the video prize only once I've already chosen at least four money prizes. We might bring the scripts to the channel as they are, or edit them beforehand.
- Potentially, a job offer as a scriptwriter for Rational Animations. Number of winners: ???
Video quality
The animation, music, and sound design quality will be much higher than in our previous videos. We just hired a team of 9 animators. The videos currently uploaded on the channel were animated by a single person. We also have the budget to hire external animation studios.
Bonus
You may suggest already-written work. If we bring your suggestion to the channel as it is (without having to adapt it), you will get $500, but nothing if I am already considering the work.
Topics that we'll bring to the channel soon enough and stuff I'm already considering
To avoid wasting your time: I recommend not writing scripts about the same exact topics as our next two videos.
I'm considering adapting the top results of the Fiction category on the EA Forum and Less Wrong, so you won't win the $500 prize by suggesting them.
Questions?
Anything unclear or that I've missed? Ask away in the comments!
WilliamKiely @ 2022-09-15T22:03 (+9)
I suggest On Caring by Nate Soares. It is ~2880 words, so slightly long, but many people have strongly recommended it over the years (myself included), such as jackva:
For me, and I have heard this from many other people in EA, this has been a deeply touching essay and is among the best short statements of the core of EA.
WilliamKiely @ 2022-09-16T18:08 (+4)
And FWIW I think a lot of the essay would work well paired with an animation, such as the discussion of scope insensitivity, the story of Daniel the college student with the birds, and the mountains of problems everywhere later on.
Jessy W @ 2022-09-16T21:44 (+6)
will add this opportunity to the EA opportunity board!
Writer @ 2022-09-17T07:19 (+1)
Thanks a lot!
WilliamKiely @ 2022-09-15T21:52 (+5)
I'm really happy to see this contest and hope it will produce high quality scripts!
I've watched all the longtermism-relevant videos on your channel and thought they were very well done overall. To be more specific, I thought the video you released promoting WWOTF was significantly better than Kurzgesagt's video promoting WWOTF, and I was disappointed Kurzgesagt hadn't used a script like yours (given their very large audience).
While I'm sure you've already thought of this, I want to highlight one concern I have about the contest, namely that your $5,000 prize may provide a much smaller incentive than a prize 2-3 times as large:
Given you're hiring a team of 9 animators to work on the next video, I'd guess that $5,000 is not a large fraction of the budget (though I could be mistaken). And in my opinion, the script matters more than the animation (e.g. see my claim that your WWOTF video was better than Kurzgesagt's despite them presumably having a much larger / more expensive animation team). So I'd question the decision to spend a lot more on animators than the script (if you are in fact doing that).
Additionally, contest participants know they are not guaranteed to win the top prize. To assess the expected hourly earnings from entering the contest, they need to discount the prize by the probability that they win. All things considered, I'm not sure that many people who could write great scripts for you would be justified in believing they'd earn a reasonable wage in expectation by participating in the contest.
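To make this concrete with purely illustrative numbers (both the win probability and the time investment are my assumptions, not figures from the post): a writer who estimates a 5% chance of winning and expects to spend 20 hours on a script faces

$$
\mathbb{E}[\text{earnings}] = \underbrace{0.05}_{\text{assumed}} \times \$5{,}000 = \$250, \qquad \frac{\$250}{20\ \text{hours}} = \$12.50\ \text{per hour}.
$$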
Anyway, I'm sure you picked the $5,000 amount carefully and that you've already thought of the relative value of higher prize amounts, but just wanted to provide this quick feedback in case it's helpful.
The second related point of feedback is that committing to "0-4" prizes means that someone might think "even if I write the best script, they still might not choose me and I might not win any money" leading people to discount their expected earnings even more. Perhaps commit to offering some prize for the best script regardless of whether you create a video out of it?
Writer @ 2022-09-16T07:55 (+4)
Thanks a lot for the feedback!
I have the same concern about the fact that the expected income from participating in the contest might be small. I think the other two prizes somewhat mitigate this, but I'm not sure how people value those prizes.
I'm indeed spending a lot less on scriptwriting than on animation. This hasn't always been true, but it is true now and will continue to be true as the team becomes larger because animation is just way costlier. That said, the proportion of the budget devoted to scriptwriting will increase again in the near future, but not drastically. More specifically, as I say in the post, I'd like to hire more scriptwriters, and, later on, I'd like to bring in at least one fact-checker.
For now, I'll leave the prizes unchanged. I'll wait at least a couple of weeks to see how the contest goes. Depending on how many scripts I'm getting and their quality, I might decide to change the prizes.
WilliamKiely @ 2022-09-16T18:09 (+3)
Terrific, I'm excited to see how things turn out!
Richard Y Chappell @ 2022-11-08T12:13 (+4)
Here's a script submission on the topic of Utilitarianism and Moral Opportunity. If an animation ends up being created based on this, I'd be keen to add the video to the front page of utilitarianism.net.
***
Imagine that a killer asteroid is heading straight for Earth. With sufficient effort and ingenuity, humanity could work to deflect it. But no-one bothers. Everybody dies.
This is clearly not a great outcome, even if no-one has done anything morally wrong (since no-one has done anything at all).
This scenario poses a challenge to the adequacy of traditional morality, with its focus on moral prohibitions, or "thou shalt nots". While it's certainly important not to mistreat others, prohibitive morality — or what philosophers call deontology — isn't sufficient to address today's global challenges.
Prohibitions aren't enough. We need a positive moral vision that guides us towards securing a better future. Ideally, our actions should be guided by what's truly important.
[Utilitarianism]
This general idea, that we should be guided by considerations of overall value (or what's important), is called consequentialism. Different consequentialists may have different theories of value. One simple but appealing theory of value is welfarism, the view that what ultimately matters is the well-being of sentient beings like ourselves. When we combine consequentialism and welfarism, and count each individual's interests equally and without bias, the resulting moral theory is called utilitarianism.
Utilitarianism is a controversial, but commonly misunderstood, moral theory. In what follows, we'll set out the basic case for utilitarianism, and then address some common misconceptions.
Utilitarianism draws on three basic principles:
(1) Welfarism: what ultimately matters is the well-being of sentient beings
(2) Impartiality: everyone matters equally
and
(3) Consequentialism: it's better to do more good than less.
[Welfarism]
You probably agree that your own interests matter (as do the interests of those you love). But why? Is it because you aim at things? Well, so does a homing missile, but inanimate weapons surely lack intrinsic value. Is it because you're alive? That seems more plausible, until you realize that bacteria are also alive, but don't seem to matter morally.
The most plausible answer that philosophers have come up with is that what grounds our moral status is sentience: our ability to suffer or enjoy conscious experiences. It seems clear that sentient creatures matter in a way that viruses and inanimate rocks do not.
Might things beyond sentient creatures — such as flourishing natural ecosystems — also matter? Environmental preservation can of course have great instrumental value, to protect the well-being of existing and future sentient beings. So one can continue to support environmentalism whichever way one answers this theoretical question. But it's at least harder to see how an ecosystem by itself could have value, without anybody there to value it.
Finally, even if one thinks that some things do matter besides well-being, it's important not to be too extreme about it. A world where all the sentient beings were in constant agony would clearly be a terrible world, no matter what else it had going for it. So even if we allow some modest weight to other values, we should probably all agree that welfarism is at least approximately correct, in that promoting overall well-being is the most important thing that contributes to making the world better.
[Impartiality]
On to the second principle: Everyone matters equally. The greatest moral atrocities in history—from slavery to the Holocaust—stem from denying moral equality, and holding that certain groups of people don't matter and can rightly be oppressed, their interests and well-being disregarded by those with greater power.
Utilitarianism rejects the source of this evil at its root. It opposes not just racism, sexism, and homophobia, but also nationalism, speciesism, presentism, and any other bias or "ism" that would lead us to disregard the suffering of any sentient being.
Utilitarians believe that if someone can suffer, then they matter morally, and we ought in principle to care as much about preventing their suffering (and promoting their well-being) as we would anyone else's. Just as we recognize that people in the past were wrong to disregard the interests of those they oppressed, so we should expect that disregarding others' interests could lead us into moral error today.
Today, many people systematically disregard the urgent needs of the global poor, of non-human animals, and of future generations. Utilitarians urge us to rectify this error, and do what we can to help all of those in need, so that others might get to lead the sorts of flourishing lives that we would wish for ourselves and our loved ones.
Some hold that strict impartiality is too extreme. Surely, you might think, it's justifiable to prioritize your friends and family over total strangers, at least to some extent? Maybe so. But even if we can give some extra weight to our nearest and dearest, we may still agree that utilitarianism is at least approximately correct, in that it's important to still give significant weight to the interests of others. It would be a serious moral error to disregard them completely, or to come close to doing so.
[Consequentialism]
Our final principle holds that it's better to do more good than less. This sounds obvious, but is often neglected. For example, when donating to charity, very few people put effort into finding the best cause possible. But some organizations can do hundreds or even thousands of times more good than others, so the choice of where to give can be even more important than how much you give. $100 to a highly effective charity will be much more worthwhile than even $100,000 to an ineffective (let alone counterproductive) charity. For this reason, utilitarianism encourages people to find and put into practice the very best ways of doing good.
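To make the arithmetic concrete (with a purely illustrative cost-effectiveness ratio, though one within the range just mentioned): if charity A were 2,000 times as cost-effective as charity B, then a $100 donation to A would do as much good as $200,000 given to B,

$$
\$100 \times \underbrace{2{,}000}_{\text{assumed ratio}} = \$200{,}000\ \text{of B-equivalent giving} > \$100{,}000,
$$

which is twice the good done by the $100,000 donation to the ineffective charity.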
Utilitarianism gets controversial when there are tradeoffs between different people's interests. It seems wrong to kill an innocent person as a means to saving several other lives. And in practice, utilitarians will agree: anyone who thinks that violating individual rights will lead to better results in the long run is almost certainly mistaken. High social trust is incredibly valuable, and real-world utilitarians know they can do better by behaving cooperatively, rather than villainously, in pursuit of the greater good.
Critics may insist that this isn't good enough: that utilitarians are here getting the right result for the wrong reasons, and that it would be wrong to kill one as a means even if it would truly do more good. But why? If we delve into deeper moral explanations, consequentialist reasons — that this would lead to a better world than any alternative action — seem hard to beat. Non-consequentialist prohibitions, by contrast, run into the paradox of deontology: that it seems downright irrational to insist that killing is so bad that it ought not to be done, even to prevent more killings. If killing is so bad, shouldn't we wish to minimize its occurrence? Deontology looks like it cares more about clean hands than it does about people's lives, and it's hard to see how that could be an accurate view of what ultimately matters.
So, while it's clear that you shouldn't go around killing people for the so-called "greater good", utilitarians agree with this practical claim. Intuitively monstrous acts are likely to be horrendously counterproductive. Critics disagree that this is why those acts are wrong, but this makes little difference in practice. Even if you side with the critics on this explanatory question, you might still agree that utilitarianism is at least approximately correct, as it not only tells us to avoid monstrous acts, but additionally reminds us to pursue positively good ones.
[The Veil of Ignorance]
An important argument for utilitarianism invokes a thought experiment known as the veil of ignorance. The basic idea is that our judgments are often biased in our own favor. It's not a coincidence that white supremacists are overwhelmingly white themselves, for example. To avoid such biases, it's worth asking what it would be rational to want if you didn't know who in the world you were. Imagine looking down on the world from behind a "veil of ignorance": a God's-eye view of everything that occurs, but one that leaves you ignorant of which person down there is you. It would clearly be irrational to endorse white supremacy from behind a veil of ignorance, given the odds that you could end up suffering the consequences as a non-white person. This test provides a simple proof that white supremacy is morally unjustifiable, since even the white supremacist himself could no longer endorse it from the "neutral" position behind the veil.
But the veil of ignorance can be applied more broadly than this. If you assume that you're equally likely to end up as anyone, standard decision theory implies that the rational choice is whatever option maximizes well-being on average. This is worth bearing in mind when presented with a supposed counterexample to utilitarianism. Imagine it maximizes well-being to push someone in front of a trolley, activating the emergency brakes in time to save five others. Critics claim that pushing the one in front of the trolley is wrong. But note that this act is what all six people involved would agree to from behind the veil of ignorance! (After all, it gives each a 5/6 chance of survival, instead of just a 1/6 chance.) And how could it be wrong to do what everyone involved would have agreed to, if only they'd been freed of the biasing information of which of them is in the more or less privileged positions?
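Spelling out the arithmetic: behind the veil, each of the six people is equally likely to occupy any of the six positions, so the survival probabilities under the two choices are

$$
P(\text{survive} \mid \text{push}) = \frac{5}{6} \approx 0.83, \qquad P(\text{survive} \mid \text{don't push}) = \frac{1}{6} \approx 0.17.
$$

Not yet knowing which position they occupy, every one of the six maximizes their own chance of survival by endorsing the push.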
By violating what would be unanimously agreed upon behind the veil of ignorance, non-utilitarian views implicitly act as reactionary forces, protecting the privileges of the status quo against the greater needs of those in less safe or fortunate positions. If the one was already on the track, few would think it okay (let alone required) to lift him to safety and thereby cause the deaths of the other five. This shows that the alleged counterexample depends upon status quo bias. If you reject the idea that the default state of the world is morally privileged, you should likewise reject the distinction between 'killing' and 'letting die' that this counterexample relies upon. Whether we should prefer the outcome in which the trolley hits five people, or just hits the one, should not depend upon which we think of as being the "default". And if the choice is between consistently preferring more deaths or fewer, the moral answer is surely clear.
But again, that's just to talk about what matters in principle. We should ultimately want what's overall best for everyone. You can imagine weird hypotheticals where this yields verdicts of a sort that you wouldn't want people to act upon in real life. But the world is not a trolley problem. In practice, utilitarians agree, the best way to achieve moral goals is to respect people's rights. That doesn't require building rights into the very goal to be achieved, however. Rights are just a means — though a robustly useful one, not to be neglected — for averting harms and securing better outcomes.
[Demandingness]
We saw that status-quo privilege is implicit in non-consequentialist explanations of why killing is wrong. Privilege also shapes the other main objection to utilitarianism, namely, that it is too demanding. Consider: those who are wealthy by global standards could do a lot of good by transferring much of their wealth to the global poor. GiveWell estimates that their top charities can save a life for under $5,000, which is extraordinary. To put this number in perspective: Americans spend over $250 billion each year on alcohol. Utilitarianism plainly implies that it would be morally better for us to spend less on ourselves, and more to help those in need. This can be uncomfortable to hear, but it also seems hard to deny. Most moral views will agree that it would be better to do more to help others. (It's surely what you would choose from behind a global veil of ignorance.)
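Putting those two figures together as a back-of-the-envelope division (illustrative only: no set of charities could actually absorb funding at this scale at constant cost-effectiveness):

$$
\frac{\$250\ \text{billion per year}}{\$5{,}000\ \text{per life}} = 50\ \text{million lives per year}.
$$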
Utilitarianism is sometimes represented as claiming that we ought to maximize well-being. But this is to use 'ought' in an ideal sense, as picking out which action would be best. It has nothing to do with the ordinary notion of obligation, according to which falling short renders you liable for blame, guilt, and other negative reactions. Doing more good is always better. But that's not to say that anything short of moral perfection is categorically bad, as opposed to simply less than perfectly good. So it's misleading to claim that utilitarianism demands moral perfection. It simply recognizes that better is better, as is surely undeniable.
[Conclusion]
We began with the problem that prohibition-based moralities are insufficient to guide us towards a better future. We should, of course, respect others' rights, as doing otherwise would almost certainly result in more harm than good. But we can't stop there. We should also look for positive opportunities to make the world a better place. We need to think carefully about what is truly important, how to protect it, and how to safely promote it.
Utilitarianism is one moral theory that might help guide us here. It claims, plausibly enough, that what ultimately matters is the well-being of sentient creatures like ourselves. It warns us not to disregard the interests of those who are distant or different from ourselves. And it reminds us that it's better to do more good than less.
Philosophers have focused a lot of attention on objections to utilitarianism. For example: whether it offers an adequate explanation of the wrongness of killing, and whether it is excessively demanding. And we've seen how utilitarians can respond to these objections. (You can learn much more on utilitarianism.net.) But to exclusively focus on these debates risks neglecting the most important insight of utilitarian moral theory, which is that avoiding wrongness isn't what ultimately matters. After all, you could avoid wrongness by simply not existing at all. But hopefully you aspire to more than that.
What matters, according to utilitarianism, is that sentient beings' lives go well. And yes, this ultimate concern can motivate us to avoid wrongdoing — as wrong actions risk making the world much worse. But that's just a small part of the overall picture. For apt moral goals may also motivate us to positively make the world better. And that's important too!
Our lives are filled with moral opportunity, not just peril. We need a moral theory that reflects this fact. There's more to ethical life than just berating each other. If we can refocus our moral attention on the question of what's truly important, we may be better-positioned to work together, and achieve great things, when the opportunity arises.
Anton Rodenhauser @ 2024-05-04T19:04 (+3)
Maybe some of Richard Ngo's fiction writing? I like "Succession" best: Succession - by Richard Ngo - Narrative Ark
Or some of his non-fiction. E.g. Techno-humanism is techno-optimism for the 21st century (mindthefuture.info)
rileyharris @ 2022-10-13T13:29 (+3)
Here are some articles I think would make good scripts (I'll also be submitting one script of my own).
Summaries of the following papers:
- The Epistemic Challenge to Longtermism
- A Paradox for Tiny Probabilities and Enormous Values
- The Case for Strong Longtermism
- Forecasting transformative AI: the "biological anchors" method in a nutshell
- Are we living at the hinge of History?[1]
- In defence of fanaticism[1]
- Longtermist Institutional Reform[1]
- Doomsday Rings Twice[1]
- Asymmetry, Uncertainty, and the Longterm[1]
- Simulation in Expectation[1]
- Moral Uncertainty about population axiology[1]
- Existential risk pessimism and the time of perils[1]
I'd also suggest the following papers which I haven't seen a summary of:
- The Potato's Contribution to Population and Urbanization: Evidence from a Historical Experiment
- A Bayesian Truth Serum for Subjective Data
- Improving Judgments of Existential Risk: Better Forecasts, Questions, Explanations, Policies
- The Parliamentary Approach to Moral Uncertainty
- Is Power-Seeking AI an Existential Risk?
- All of WWOTF's supplementary materials, especially Significance, Persistence and Contingency.
(Edit: Spacing)
[1] I am writing these 8 summaries; message me if you want to see them early.
Michael Noetel @ 2022-12-05T06:14 (+2)
I popped this script in via email a few weeks ago but didn't get confirmation of receipt. I know it's been a crazy few weeks, so no need to review it right away. Still, do you mind confirming whether it's 'in' or whether the contest is closed?
Writer @ 2022-12-05T09:08 (+2)
Yes, I've received and read the script, and the contest is still open.
Michael Noetel @ 2022-12-05T19:46 (+2)
Thank you!
Jordan Arel @ 2022-11-25T22:02 (+2)
Is this contest still active after the FTX fiasco?
Writer @ 2022-11-25T22:11 (+4)
Yes, it is still active.
Kirsten @ 2022-10-25T14:17 (+2)
What are your plans if you don't get a script you're excited about using?
Writer @ 2022-10-25T14:21 (+1)
I'm not sure currently, but I might increase the prize and remove the deadline.
Edson Reistad @ 2022-09-30T16:13 (+2)
I have always thought that there is a lot of unpicked low-hanging fruit for animation among the most popular blog posts. Examples might be "I only believe in the paranormal" and "Great minds might not think alike" from LessWrong.
rileyharris @ 2023-10-29T13:56 (+1)
Was the result of this competition ever announced? I can't seem to locate it.
Writer @ 2023-10-29T14:00 (+2)
There is a single winner so far, and the winner will be announced with the corresponding video release. The contest is still open, though!
Edit: another person claimed a bonus prize, too.
stelmckay @ 2022-11-28T01:26 (+1)
Hey! I sent a submission about a week ago and I haven't heard anything back. Are you reaching out to people after reading through their scripts to give feedback and/or let them know their standing? Thanks!
stelmckay @ 2022-11-28T01:28 (+1)
Also, have there been any winners or has anybody been offered a hired position yet? Thanks :)
Writer @ 2022-11-28T12:21 (+1)
No winners or job offers yet, but I still have to decide on some, although I've read every submission. I reached out to a small minority of people with feedback when (but not every time) I thought their script could be improved easily enough to pass my bar. If you want feedback or are curious about your standing in the contest, please send an e-mail.
Alexander Gopoian @ 2022-10-25T12:56 (+1)
Is there an issue with writing in our own personal voice, such as starting with a short personal story as an example of something, or should it be written without us as a character/known narrator?
Writer @ 2022-10-25T13:25 (+1)
There could be an issue since Rob Miles is the narrator. But if the story is really good and significantly benefits from using the first person, we can figure something out, like putting a disclaimer at the beginning.