2023: highlights from the year, from the EA Newsletter
By Lizka @ 2024-01-05T21:57 (+68)
There was a lot of great EA-related content and research in 2023; I’ve highlighted content that seems particularly notable. Please add content you think should be in a list like this! I'm also sharing excerpts from every month’s edition of the EA Newsletter, in case it’s helpful for some people who might want to look through a low-resolution timeline of 2023.
A companion post reviews 2023 news on AI safety, animal welfare, global health, and more. (That post summarizes news; the one you're reading features content.)
Skip to:
- Content loosely organized by cause area
- Cross-cause discussions & content about causes less prominent in EA
- Global health and development: historical lessons and research on potentially under-appreciated causes
- Animal welfare: policy-related discussions, moral weights, estimates, and neglected animals
- Non-AI global catastrophic risks (including pandemic preparedness, climate change, nuclear safety, etc.): government investments, developing and deploying key technologies, and more
- AI safety & risk: strategic shifts in AI safety as a field, forecasts of AI trends, distillations of governance and other research, and more
- Month-by-month review of the year: excerpts from January, February, March, April, May, June, July, August, September, October, November, and December
- Concluding thoughts & notes on the EA Newsletter
Requests & other notes:
- Consider adding other content or research that you appreciated in the comments. (I haven't really tried to make this an exhaustive list, and I know the sample I started with has big blind spots.)
- I’d really appreciate feedback on the EA Newsletter if you’re subscribed to it (see archives and subscribe here). We get very little constructive feedback on it, and feedback could help us improve a newsletter that goes out to 60K subscribers.
- More context on how and why I made this: I wanted to collect “important stuff from 2023” to reflect on the year, and realized that one of the resources I have is one I run — the monthly EA Newsletter. So I started compiling what was meant to be a quick doc-turned-post (by pulling out good links from the 2023 emails, occasionally updating them and remembering related content that I also thought was good). Things kind of ballooned as I worked on this post and added non-Newsletter links. Long story short, there are now two posts; see the companion post, which is less focused on content/research and more focused on "news."
- For more cool EA-related content from 2023, see: curated Forum posts, talks from EA conferences in 2023, and your EA Forum Wrapped recommendations (we’ll also try to compile a post from what people mark as “most valuable”).
Highlights: content by causes
Links that stand out, loosely organized into cause areas (although a lot of the content is useful for more than one cause area/category). Links that I appreciate an unreasonable amount are starred (not very consistently). ⭐
Cross-cause discussions & content about causes less prominent in EA
- ⭐ How long do policy changes matter? (longer than you might expect!)
- ⭐ Principles for AI welfare research and a podcast with the author, Jeff Sebo, on how to avoid sleepwalking into a moral catastrophe (also, how humans might fail to identify complex thought in AI models)
- ⭐ Christopher Brown on why slavery abolition wasn’t inevitable (80,000 Hours Podcast)
More: ⭐ The Capability Approach to Human Welfare and There is little (good) evidence that aid systematically harms political institutions from Ryan C Briggs, GWWC's evaluations of evaluators, TED Talk on effective philanthropy from Natalie Cargill, Radical tactics can increase support for more moderate groups (Social Change Lab), Why should ethical anti-realists do ethics? (Joe Carlsmith), Rethink Priorities’ Cross-Cause Cost-Effectiveness Model, EA is three radical ideas I want to protect and EA Strategy Fortnight posts, Wisdom of the Crowd vs. "the Best of the Best of the Best", and Bringing about animal-inclusive AI.
Global health and development: historical lessons and research on potentially under-appreciated causes
- ⭐ Why we didn't get a malaria vaccine sooner (from Works in Progress)
- ⭐ Salt, Sugar, Water, Zinc: how oral rehydration therapy was developed (by Matt Reynolds in Asterisk)
- How economists got Africa’s AIDS epidemic wrong (CGDev)
- Air pollution is responsible for ~12% of deaths: what might help? (Santosh Harish on the 80,000 Hours Podcast)
More: Cause area report: Antimicrobial Resistance (Akhil), ⭐ Are education interventions as cost effective as the top health interventions? (Founders Pledge report), The first results from the world’s biggest basic income experiment in Kenya are in (covered by Vox — see also a study on mortality reductions and a podcast on cash transfers and economic growth), What you can do to help stop violence against women and girls (Akhil), Lucia Coulter on preventing lead poisoning for $1.66 per child (80,000 Hours), Sectoral transformation & what we really know about growth in LMICs (Karthik Tadepalli), and Clean Water - the incredible 30% mortality reducer we can’t explain (Nick Laing).
Animal welfare: policy-related discussions, moral weights, estimates, and neglected animals
- ⭐ A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives (Laura Duffy)
- Rethink Priorities’ Welfare Range Estimates and the whole Moral Weights Project sequence (particularly this one (“Don’t balk…”) and, personally, the post against neuron counts, plus posts from the CURVE sequence like this one)
- 230 billion shrimp are being farmed at any moment — more than any other farmed animal estimate, including insects (Rethink Priorities again)
More: Open Phil Should Allocate Most Neartermist Funding to Animal Welfare (Ariel Simnegar), Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat (Jacob Peacock), ⭐ EA’s success no one cares about (Jakub Stencel), Why I No Longer Prioritize Wild Animal Welfare (Saulius), Net global welfare may be negative and declining (see Vox coverage), Change my mind: Veganism entails trade-offs, and health is one of the axes (Elizabeth), Animal Advocacy Strategy Forum 2023 Summary, OWID’s new page about animals, and claims that only mammals and birds are sentient.
Note: This section consists entirely of links to the Forum, which makes me think I’m more systematically missing awesome content on animal welfare than I am for other causes. But also: huge Rethink Priorities representation.
Non-AI global catastrophic risks (including pandemic preparedness, climate change, nuclear safety, etc.): government investments, developing and deploying key technologies, and more
- How much should governments pay to prevent catastrophes? (EJT, Carl Shulman)
- Clean air and far-UVC: Thoughts on far-UVC after working in the field for 8 months and “First clean water, now clean air” (and IFP on the topic)
- Climate-related content: ⭐ Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t, and a TED Talk and 80,000 Hours Podcast episode from Hannah Ritchie
More: Reviewing nuclear winter (Michael Hinge — see also Nuclear winter scepticism and Philanthropy to the Right of Boom from Founders Pledge), 20 concrete projects for reducing existential risk (Buhl), Kevin Esvelt on cults, stealth vs wildfire pandemics, and how he felt inventing gene drives (80,000 Hours Podcast), Advice on communicating in and around the biosecurity policy community (ES), and Alison Young on how top labs have jeopardised public health with repeated biosafety failures (80,000 Hours Podcast).
AI safety & risk: strategic shifts in AI safety as a field, forecasts of AI trends, distillations of governance and other research, and more
This section is mostly oriented towards strategy and analysis-of-the-field discussions, as well as advocacy- and governance-oriented work (i.e. I don’t feature links to great technical research here).
- ⭐ Let’s think about slowing down AI (Katja Grace) and other discussions about pausing and slowing down AI development, like the AI Pause Debate (see a wrap-up post), and The costs of caution from Kelsey Piper (see more on Planned Obsolescence)
- Trends and forecasts predicting how AI will develop (and how to approach forecasting this)
- ⭐ A literature review of transformative artificial intelligence timelines from Epoch and What experts in artificial intelligence expect for the future (Our World in Data).
- Predictably updating about AI risk (Joe Carlsmith)
- Tom Davidson: What a compute-centric framework says about AI takeoff speeds, podcast on how quickly AI could transform the world, and “continuous doesn’t mean slow”
- ⭐ AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
- Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite
- AI Impacts: 2023 Expert Survey on Progress in AI (this was actually published in 2024, but I’m sneaking it in).
- Research, distillations, and proposals for policy/governance approaches (see 12 tentative ideas for US AI policy)
- ⭐ GovAI: A survey of expert opinion on best practices in AGI safety and governance
- ⭐ Success without dignity: a nearcasting story of avoiding catastrophe by luck (Holden Karnofsky), as well as a playbook for AI risk reduction, and what governments, AI companies, people, and writing can do to help
- ⭐ Lennart Heim on the compute governance era and what has to come after (80,000 Hours Podcast)
- An early warning system for novel AI risks (Google DeepMind)
- Open-Sourcing Highly Capable Foundation Models (GovAI, see also Vox on a related topic)
- What Do We Mean When We Talk About “AI Democratisation”? (GovAI)
- A freshman year during the AI midgame: my approach to the next year and The current alignment plan, and how we might improve it (Buck)
- Other related topics
- Seán Ó hÉigeartaigh shares a note of caution about recent AI risk coverage.
- Discussions on “responsible scaling policies”: ARC Evals (now METR), Paul Christiano, Holden Karnofsky.
- Nobody’s on the ball on AGI alignment.
- Carl Shulman on AI takeover (Dwarkesh Patel podcast)
- Carl Robichaud argues that nuclear non-proliferation has lessons for AI governance — like the importance of understanding choke points.
There’s a bunch more on AI safety — see some content that was linked in the relevant section in the companion post. [LINK]
Month-by-month: highlights and featured news from 2023
The actual newsletters included a large amount of other content (like announcements/opportunities). See full newsletters in the archive.
January
EA Global, an update on the ozone layer, and staring into the abyss
Featured content
Why Anima International suspended the campaign to end live fish sales in Poland
Effectiveness requires noticing mistakes and correcting them. Unfortunately, people often don’t do this (we tend to flinch away from uncomfortable ideas), and even when we do, we rarely discuss it publicly. [...] Anima International had been running a campaign against live fish sales in Poland. [...] Both the farming and transportation of the fish cause a lot of suffering — so progress seemed exciting. Unfortunately, it turned out that the campaign was causing unexpected harm; some people were switching from carp to salmon, and farming salmon requires farming more fish to feed the salmon (which are carnivorous). Anima International’s models and research showed that the campaign was worse than they’d hoped, and even tentatively implied that the program was harmful overall. They decided to stop the campaign. [...]
Ben Kuhn calls this kind of thinking “staring into the abyss” and identifies it as a core life skill, key to doing great work.
Why the ozone hole is on track to be healed by mid-century (Kelsey Piper in Vox)
Good news: a panel commissioned by the United Nations reports that “the Earth’s ozone layer is on track to recover within four decades.” [...] This success shows us that the world can come together to work on big problems — we should learn from it.
Let’s think about slowing down AI (Katja Grace)
…A recent post suggests that slowing down AI progress is unreasonably discarded by people interested in AI safety in part because of a “can’t-do” attitude. The post makes the case that this approach is viable, not radical, and can be cooperative, and shares other thoughts and models.
January news & other links
- What you can do to help stop violence against women and girls.
- There’s been some progress on a potential universal flu vaccine. (Metaculus.)
- An EA Forum post argues that StrongMinds should not be a top-rated charity (yet).
- According to a report by the Social Change Lab, radical tactics can increase support for more moderate groups.
- A Twitter thread lists “good things that happened in EA this year.”
- Including airborne pathogen levels in indoor air quality standards might reduce risks from catastrophic pandemics.
Classic: ITN: A framework for comparing global problems in terms of expected impact (and links to The ITN framework, cost-effectiveness, and cause prioritisation and Most problems fall within a 100x tractability range (under certain assumptions)).
February
New charity ideas, the abolition of slavery, and research on animal welfare
Featured content
Christopher Brown on why slavery abolition wasn’t inevitable (80,000 Hours Podcast)
…He rebuts one prominent theory — that the practice of slavery was bound to end as it was no longer profitable for slaveholders and traders — by noting that people involved in the slave trade still viewed the system as profitable. [...] One change is described as especially significant: the shift from feelings of unease about the morality of slavery to organized action in slaveholding societies. Brown notes that it might be comforting to think that “once [people] understood the cruelty of slavery, then of course they would organize and do something about it,” but he stresses that “not only did it not happen that way, but it almost never happens that way.” […]
H5N1 bird flu (Pandemic Prediction Checklist: H5N1 and What Are the Odds H5N1 Is Worse Than COVID-19?)
Worries about bird flu have been around for a while, but they’ve recently been ramping up again; in October, a strain of the flu (H5N1) infected minks at a fur farm and probably started spreading from one mink to another (which hadn’t happened with mammals before). The development is particularly concerning because minks are well suited to transmitting the disease to humans. You can read more here.
It currently seems unlikely that H5N1 will spread widely among humans — as of right now, Metaculus predicts a 2% chance that H5N1 will cause at least 10,000 human deaths. [...]
Rethink Priorities’ Welfare Range Estimates
How do you decide [between improving the lives of farmed chickens or saving some pigs from factory farms]? One of the difficulties here is that it’s not clear how to compare the experiences of different animals, and this gets harder when the animals in question are less similar to humans.
A report from Rethink Priorities estimates the “welfare ranges” of different species. These ranges track the difference between the most intense pleasures and the most intense pains that the animal can experience. […]
February news & other links
- Vox: Farm animals starve and drown while shipped overseas for slaughter. Europe is considering a ban on the trade.
- A recent post summarizes 6 months of animal welfare in 6 minutes.
- Giving What We Can discusses: Aren't the best charities those with the lowest overhead costs?
- Risks from engineered viruses increase as synthetic biology gets cheaper. A report discusses what we can do to mitigate risks from DNA synthesis.
- New content on AI and AI safety:
- Are we racing toward AI catastrophe?
- Our World in Data collects and explains what experts in artificial intelligence expect for the future (and there’s a separate literature review of transformative artificial intelligence timelines from Epoch).
- Holden Karnofsky writes about jobs that can help with AI safety.
Classic: The Moral Imperative Towards Cost-Effectiveness, with additional links to Differences in impact and Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness.
March
GPT-4, the history of a simple cure, and more
Featured content
GPT-4 and the road to out-of-control AIs (“This Changes Everything” by Ezra Klein in the NYT (paywalled) and other links about the news)
On Tuesday, OpenAI unveiled the AI model GPT-4, an even-more-capable successor to the system that powered the popular chatbot ChatGPT. That same day, Google made one of its most powerful AI models accessible to developers, Anthropic opened access to the AI chatbot Claude, and more — the news continued throughout the week.
Two days before those announcements, Ezra Klein published a column in The New York Times, “This Changes Everything” (paywalled), in which he wrote: “[developing AI] is an act of summoning. The coders casting these spells have no idea what will stumble through the portal… They are calling anyway.” If this “summoning” continues unchecked, humans might find themselves at the mercy of deeply alien and uncontrollable AI systems. (Out-of-control AI might sound like science fiction, but experts are increasingly afraid of this possibility, and in a 2022 survey of machine learning researchers, nearly half said there is at least a 1 in 10 chance that the effects of AI would be “extremely bad (e.g. human extinction).”)
GPT-4 itself probably won't be disastrous, but it is a step towards AI systems that might be. [...]
Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children (Matt Reynolds in Asterisk)
…A recent article by Matt Reynolds focuses on [why it took so long to discover ORS, a simple cure that has saved millions]. One theme highlighted in the piece is […] a lack of understanding of what was happening on a biological level. […] But while the first part of the problem was developing any effective treatment that doctors could theoretically administer, a crucial hurdle was finding a simpler, more practical treatment. By the mid-20th century, intravenous salines were often used to treat cholera. This “high-tech” treatment was effective and popular in richer areas, but inaccessible in others. The development of oral rehydration solution was a major breakthrough precisely because of its simplicity.
Can Policymakers Trust Forecasters? (Gavin Leech and Misha Yagudin in the Institute for Progress)
[…] A recent article argues that generalist forecasters — people who make and track predictions across a wide range of topics — slightly outperform domain experts and statistical models at predicting future events. Moreover, combining different approaches might be more promising, as it can help policymakers avoid biases and weaknesses of any particular group. […]
March news & other links
- A new issue of Asterisk focuses on food, but covers a wide range of topics, from whether we’re morally obligated to help wild animals (and whether we’re equipped to do that) to how we might feed the world without sunlight in the aftermath of a catastrophe.
- The EU Food Agency recommends banning cages, a proposal that forecasters at Metaculus currently think has a 61% chance of passing in the next year and a half.
- In “No Silver Bullet Solutions for the Werewolf Crisis,” Lars Doucet satirizes the way societies often approach their problems.
- An analysis scored predictions made by experts in a 2016 survey on AI progress, finding that experts were decently well calibrated.
- The Library of Economic Possibility summarizes the current evidence on the impact of unconditional basic income.
- Our World in Data visualizes how dramatically the world can change within a lifetime.
Classic: Most* small probabilities aren't pascalian by Greg Lewis.
April
AI, why unconventional climate change approaches can be better, and more
Featured content
Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t (80,000 Hours Podcast)
Two highlights:
- Interventions can look a lot more — or less — effective when you evaluate them on a global scale. For example, some groups in Switzerland are advocating for thorough insulation of all homes to make them more energy-efficient. This would reduce emissions in Switzerland, but it would have a small impact globally because most emission growth is in countries where insulation isn’t the problem. […]
- We should accelerate the development of new clean energy technologies. […]
What should we do about risks from AI? (“We must slow down the race to God-like AI” (paywalled) in FT, and the launch of Planned Obsolescence)
[…] One post on slowing AI progress raises some key questions:
- Is it better to ask for evaluations — ongoing audits on whether systems are dangerous — instead of a pause?
- Is a 6-month pause too short? Is it even the right thing to ask for? A more continuous and iterative approach might be better. (See more.)
- Will a moratorium like this backfire by worsening competitive dynamics?
Given the uncertainty, what can all of us do? Stay informed, advocate for alignment, support people working on safety approaches, and use our skills and resources to work on the problem if we can (explore resources for upskilling).
Child and Infant Mortality (Our World in Data)
…until very recently in human history, almost half of all children died before the end of puberty. Today, global child mortality is around 4%. This still means that thousands of children die every day — far too many, but so much better than it used to be.
April news & other links
- The BBC shares news of a plan for the world's first octopus farm and explains why farming could be especially bad for the animals. (Metaculus on this industry’s growth.)
- Talks from the EA Global: Bay Area conference have been published.
- The United Kingdom and the United States (paywalled) governments are launching ambitious plans for vaccine manufacturing.
- Parfit: A Philosopher and His Mission to Save Morality by David Edmonds is coming out in a couple of days.
- How much should governments pay to prevent catastrophes? (See also news about some relevant updates to US regulatory analysis methods.)
- Researchers at Rethink Priorities report that eradicating rodenticides from U.S. pest management is less practical than they thought.
Classic: “The timing of labour aimed at reducing existential risk” by Toby Ord (and a more recent Twitter thread).
May
Good news for pig welfare, releasing billions of mosquitos, and regulating AI
Featured content
An unexpected win for animal welfare (Vox)
In an unexpected ruling against the pork industry, the US Supreme Court upheld an important animal welfare law. California’s “Proposition 12” bans the sale of some pork products that come from farms where sows are kept in extremely small “gestation crates.” The law was passed via a 2018 ballot measure that was approved by over 62% of Californian voters. […] The outcome was far from guaranteed; the Supreme Court passed the ruling by a narrow majority. […]
How releasing billions of modified mosquitos might help fight dengue fever (WMP)
[…] The World Mosquito Program (WMP) is coming at the mosquito problem from another angle; they plan to build a mosquito farm in Brazil to start releasing modified mosquitos that can’t spread certain viruses. The mosquitos will carry the bacterium Wolbachia, which should prevent the insects from transmitting viruses like dengue, Zika, and yellow fever. The farmed mosquitos will then spread Wolbachia into the wild mosquito population. WMP has run trials; they report that one project led to a 77% reduction in confirmed dengue cases in the affected area.
I don’t know if this program is especially cost-effective. Though dengue fever is probably more neglected than diseases like malaria, it also affects fewer people. But it’s still inspiring to see the range of ways that diseases can be fought. […]
How should we regulate artificial intelligence? (12 tentative ideas)
A year ago, the idea of out-of-control AI might have sounded a bit like science fiction, and people worried about catastrophic risks from AI thought that getting public interest in regulation would be difficult. But things have changed. Awareness and interest in regulation are growing, and governments are responding. In the US, Sam Altman (CEO of OpenAI) testified before the Senate today and pushed for safety-oriented regulation, and earlier the White House met with Altman and other AI CEOs to talk about potential dangers. And in the EU, a proposed AI Act would classify and regulate AI systems based on their risk levels.
Understanding what regulations are most effective is probably harder. Luke Muehlhauser, a senior program officer at Open Philanthropy, recently suggested 12 tentative ideas for US AI policy. These include tracking and licensing big clusters of cutting-edge chips, requiring that frontier AI models follow stringent information security protections, and subjecting powerful models to testing and evaluation by independent auditors. It’s helpful to understand what strategies can look like, but more research and work are required before the ideas can be implemented. [...]
May news & other links
- Ghana and Nigeria have approved a promising malaria vaccine that could be a game-changer.
- There’s a new podcast about effective animal advocacy called “How I Learned to Love Shrimp.”
- Vox discusses whether the 50-year-old Biological Weapons Convention can keep up with rapid progress in biotech.
- Tom Davidson on how quickly AI could transform the world — see also “continuous doesn’t mean slow”
- Fin Moorhouse argues that clean air interventions (such as ventilation or germ-killing lights) could be to modern societies what the shift towards clean water and sanitation was in the 19th century.
- Joe Carlsmith shared an essay on predictably updating about AI risk and what to do when your gut disagrees with your brain.
- A leaked EU legislative draft proposes substantial animal welfare improvements.
- Nikos Bosse analyzes how the wisdom of the crowd performs vs. “the best of the best” forecasters (for predictions on Metaculus).
Classic: How much should you research your career? (80,000 Hours), applying “Terminate deliberation based on resilience, not certainty.”
June
Proposals for AI governance, lessons from charity evaluation, and many opportunities
Featured content
Elie Hassenfeld on two big-picture critiques of GiveWell's approach, and six lessons from their recent work (80,000 Hours)
[...] A recent episode of the 80,000 Hours Podcast with Elie Hassenfeld, the CEO and co-founder of GiveWell, focuses on difficult questions [for GiveWell, like]:
- How can you compare interventions that have different types of benefits? […]
- Should GiveWell fund more interventions that speed up economic growth in poor countries? […]
Cause area report: Antimicrobial Resistance (Akhil)
Antibiotics and other life-saving medicines are becoming less effective due to antimicrobial resistance (AMR), which occurs when bacteria, viruses, fungi, and parasites adapt to the methods commonly used to combat them. A new report suggests that AMR is responsible for millions of deaths each year (particularly in sub-Saharan Africa) and has serious economic costs — by one estimate, $55 billion every year in the US alone. The problem is neglected and getting worse as the overuse of antibiotics in healthcare and food production continues and more drug-resistant bacteria evolve.
There are promising approaches for working on AMR. These include creating incentives to accelerate the development of new antimicrobial medicines, contributing to quantitative research on the causes of AMR, running fellowships for policymakers, improving diagnostics (and their accessibility) to prevent misuse of antimicrobials, and raising the profile of AMR. [...]
AI governance: CAIS statement on AI risk and GovAI: survey of expert opinion
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” [Statement on AI Risk]
[...] A recent survey of experts found a lot of agreement on best practices in AI safety and governance. The survey (which was run by the Centre for the Governance of AI and got 51 responses out of 93 experts contacted) asked participants how much they agreed with 50 statements about what AGI labs should do; participants, on average, agreed with all of them.
Proposals based on evaluating models for risky qualities to prevent the deployment of dangerous models got the most approval. This strategy is also the subject of a new paper co-authored by researchers from DeepMind, OpenAI, Anthropic, and more.
June news & other links
- “How economists got Africa’s AIDS epidemic wrong” discusses why a now highly celebrated American foreign aid program initially seemed like an ineffective use of money.
- Open Philanthropy shares an overview of Far-UVC and a request for information.
- How does AI progress affect other EA cause areas?
- Richard Y Chappell reviews Peter Singer’s Animal Liberation Now.
- A new large study finds that cash transfer programs significantly reduce population-level mortality (especially among women and children).
- $900,000 was stolen from GiveDirectly in the Democratic Republic of the Congo.
- Rethink Priorities has shared an analysis of historical global health R&D hits.
AI-safety-related:
- The UK government will host the first global summit on AI safety.
- Seán Ó hÉigeartaigh shares a note of caution about recent AI risk coverage.
- Matthew Barnett proposes a compute-based framework for thinking about the future of AI.
Classic: Why did renewables become so cheap so fast? (Our World in Data)
July
Featured content
The Puzzle of Non-Proliferation (Carl Robichaud in Asterisk) and Lennart Heim on compute governance (80,000 Hours)
Building nuclear weapons requires enriched uranium and plutonium, which is hard to produce; this is a key “choke point” for limiting nuclear proliferation. Training powerful AI models requires a lot of advanced chips and computing resources — “compute.” While regulating algorithmic research and other resources could be especially difficult, advanced chips could be tracked, licensed, and controlled. (Compute governance is discussed at length in a recent 80,000 Hours Podcast episode.)
In an Asterisk article, Carl Robichaud argues that nuclear non-proliferation has lessons for AI governance — like the importance of understanding choke points. Another parallel with nuclear weapons is the worry that fear of competition (for instance between the US and China) could drive countries to rush AI development. […]
A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives
[…] A new report by Laura Duffy at Rethink Priorities analyzes historical US ballot initiatives aimed at improving the lives of farmed animals, focusing on four initiatives for which relevant data were available. Some insights:
- The vast majority (~99%) of reduced suffering came from restricting cage use for chickens. Ballot initiatives that targeted egg-laying hens were over a hundred times more cost-effective than those that targeted only veal calves and sows.
- Ballot initiatives averted about a year of extreme suffering for an animal per $10, and were about 21% to 81% as cost-effective as corporate cage-free campaigns, which are unusually successful.
- There are a lot of uncertainties, but many reasons to be optimistic about future ballot initiatives. […]
Are education interventions as cost effective as the top health interventions?
[…] A new post from Founders Pledge [describes] a way to estimate how [improved] education affects someone’s future income. The post introduces five separate lines of evidence and argues that taken together, they show that improving students' test scores leads to significant and sustained increases in their future earnings. This framework suggests that a program that furnishes software for teaching numeracy and literacy in Malawi is 11 times as cost-effective as GiveDirectly (a charity often used as a high baseline of cost-effectiveness), meaning that it is as promising as top GiveWell grants. […]
July news & other links
- A new, potentially highly effective tuberculosis vaccine has entered late-phase trials thanks to $550 million in funding. The vaccine could save millions of lives and have other benefits.
- Cell-cultivated chicken was just approved for sale in the US; a Vox article explains what this means and what hurdles remain.
- The Forecasting Research Institute released results from an existential risk forecasting tournament, noting differences between the odds of a global catastrophe given by experts (1 in 5) and superforecasters (1 in 11).
- “Indoor air quality is the next great public health challenge,” argues a report by the Institute for Progress.
- AI welfare research might become increasingly important; two recent posts on the topic covered principles for AI welfare research and how humans might fail to identify complex thought in AI models.
- GiveDirectly outlines the impact of mobile phones and mobile money for people in poverty.
- 80,000 Hours covers why great power conflict might be a pressing problem and what you can do to help (just in time for the blockbuster Oppenheimer).
AI-safety-related:
- OpenAI has a new “superalignment” team (they’re hiring for different roles).
- In a TED Talk, Eliezer Yudkowsky asks whether superintelligent AI will end the world (see also a more in-depth interview with Yudkowsky).
- Dwarkesh Patel interviews Carl Shulman on AI takeover.
- Scott Alexander discusses the complicated track record of predicting when we will have human-level AI — and why the forecasts are still important.
- “Godfather of AI” Yoshua Bengio outlines how rogue AIs may arise.
Classic: On "fringe" ideas (Kelsey Piper)
August
Featured content
Thoughts on far-UVC after working in the field for 8 months
Imagine that installing special lights in places like schools and hospitals dramatically reduced indoor disease transmission (the primary driver of many epidemics). This isn’t fantasy; “germicidal” ultraviolet radiation can neutralize airborne pathogens — either in special zones removed from humans or throughout rooms via skin-safe “far-UVC” light. Far-UVC is particularly exciting because of two major advantages: the lights would cut transmission of many different diseases (including novel diseases), and installing far-UVC in essential spaces would reduce pandemic risk without relying on individuals to take specific actions.
But we’re not yet ready to use far-UVC to prevent pandemics. In a recent overview, Max Görlitz argues that more work is needed before we can widely deploy the technology. For instance, it seems that far-UVC doesn’t penetrate human skin, but its effects on eyesight should be studied more carefully. And it might not be enough to make sure that far-UVC is safe and effective; getting the most out of far-UVC might mean supporting deployment, as purely commercially driven adoption might lead to less useful installations that underinvest in protection against less frequent but more extreme situations. […]
Improving agricultural yields: Hannah Ritchie on why it makes sense to be optimistic about the environment (80,000 Hours)
[The average farmer in Tanzania has to work for a year to get as much output as the average U.S. farmer does in three to four days; some] regions have much higher crop yields (per unit of land) than others. A [key] reason for this disparity is the fact that agricultural productivity has grown much slower in sub-Saharan Africa than in other regions, and the reduced productivity causes serious issues.
In a recent podcast episode on a range of environmental topics, Hannah Ritchie argues that improving agricultural productivity in sub-Saharan Africa would significantly help global living standards. Many farmers (who account for the majority of the region’s poorest people) don’t have extra time or resources to invest in productivity improvements like better irrigation systems. They also lack access to richer markets (in part due to the EU’s agricultural policies). As a result, many are trapped in poverty. But the problem might be tractable; we know it's possible to increase agricultural productivity on a large scale as we've done it before. In the case of sub-Saharan Africa, influencing policy and supporting things like irrigation systems and high-yield crops could help launch an agritech market that helps with these problems.
These interventions would also significantly mitigate the environmental impacts of farming by reducing the need for more farmland. About half of the world’s habitable land is used for agriculture, leading to problems like biodiversity loss. Projections imply that without agricultural productivity improvements, we’d need 26% more cropland by 2050 — an area the size of India and Germany combined. […]
August news & other links
AI-safety-related:
- Podcast interviews with Jan Leike (who leads OpenAI’s “Superalignment” team), Holden Karnofsky (the co-founder of GiveWell and Open Philanthropy, discussing information security being underrated and his playbook for AI risk), and Dario Amodei (the CEO of Anthropic, discussing his expectations about AI development). (See also an article about Anthropic in Vox: “The $1 billion gamble to ensure AI doesn't destroy humanity.”)
- Vox explains the AI rules that US policymakers are considering and how “windfall profits” from AI companies could fund a universal basic income.
- Two researchers push back on the perceived dichotomy between mitigating near-term and existential risks from AI in Time (paywalled).
- OpenAI, Google, Anthropic, and Microsoft have announced the Frontier Model Forum to make progress on AI safety.
- John Wentworth argues that AI alignment work is bottlenecked on funding. Relatedly, you can see what projects the Long-term Future Fund (LTFF) would support depending on how much they raise.
Everything else:
- Important U.S. animal welfare bills might be threatened by the EATS Act, which prohibits state governments from setting standards on the production of agricultural products imported from other states. (U.S. citizens can get in touch with their legislators about this.)
- Two experts discuss the worrying lack of funding in nuclear risk reduction and how philanthropy could help.
- A report from Rethink Priorities pushes back against the idea that matching meat’s price, taste, and convenience is enough for plant-based alternatives to replace animal-based meat.
- GiveDirectly outlines what they got right and wrong about sending cash to flood survivors.
- A Time article argues that we need to start thinking about AI rights (paywalled).
- Representatives of key groups in the effective animal advocacy space were polled on priorities in the space; a report summarizes the results.
- The world is on track to eat almost a trillion chickens in the next decade, explains Vox.
Classic: Radical Empathy.
September
TED: What could we accomplish if the global 1% gave 10%? (and more)
Featured content
What if the global 1% gave 10%? (TED Talk)
Thoughtful philanthropy can achieve a lot of good. The Rockefeller Foundation funded Norman Borlaug’s research on improving crop yields, which is estimated to have saved hundreds of millions of lives. The Pugwash Conferences helped limit the proliferation of nuclear weapons. The March of Dimes Foundation funded the development of the polio vaccine.
A recent TED talk by Natalie Cargill, the founder and co-CEO of Longview Philanthropy, discusses what we could achieve with 10% of the income (or 2.5% of the net worth) of the world’s richest 1% — $3.5 trillion in one year. […]
Biological risks and why we should be cautious about irreversibly sharing powerful AI models (shutting down DEEP VZN and open-sourcing AI models)
After two years, USAID has shut down DEEP VZN, a controversial virus-hunting program aimed at stopping the next pandemic before it starts. The plan was to collect potentially dangerous virus samples in the wild, analyze the samples in labs to identify the viruses capable of causing a pandemic, and publish a ranked list of dangerous viruses and their genomes. Some, like MIT biologist Kevin Esvelt, expressed concerns that sharing a list like that could significantly lower the barrier for terrorists or others who might want to start a pandemic by providing them with instructions on which viruses to use; “gene synthesis,” which can print a virus’s DNA given its genome, would help them do the rest. The program wound down this summer. (To lower the chances of human-made pandemics, governments could require that gene synthesis companies screen orders and customers.)
Related concerns have been raised about openly sharing powerful AI models. “Open-sourcing” is often presented in a positive light, but giving total access to an AI model is a risky and irreversible choice; there’s no way to take an open-sourced model down or set up protections if we later discover that the model is too dangerous. AI models are becoming more powerful and some worrying possibilities have already been demonstrated, like when a group of researchers discovered that a model they were using to screen new drugs for toxicity could be repurposed to suggest 40,000 new possible chemical weapons in six hours.
Instead of being shared without safeguards, powerful AI models could be evaluated for extreme risks and then released via a structured access model that lets researchers study copies without sharing the models irreversibly.
Why we didn't get a malaria vaccine sooner (Works in Progress)
It’s worth celebrating the progress of two promising malaria vaccines and the rollout of the RTS,S vaccine, which was endorsed by the WHO in 2021. But, given that malaria kills around half a million children every year (or more than 1000 children every day) and the fact that the RTS,S vaccine alone spent 23 years in trials and pilot studies before it was licensed, it’s also worth asking why it took so long to get a malaria vaccine — and what we can do to speed things up next time.
A recent article on why we didn’t get a malaria vaccine sooner explains that while malaria poses some unique problems, one obstacle was broader: lack of funding. Malaria affects some of the poorest countries (which can't afford expensive vaccines), and developing a vaccine costs a lot of money and time (especially given the high chance of failure). The expected payoff wasn't large enough to incentivize individual firms to invest the resources needed.
The authors make the case for “advance market commitments” (AMCs), where governments or philanthropists promise to subsidize a new vaccine in large quantities if it’s developed and countries actually want to use it. […]
September news & other links
- Prevalence of lead in turmeric dropped significantly after researchers collaborated with charities and the Bangladesh Food Safety Authority on interventions like monitoring and education campaigns. This work seems extremely cost-effective and was supported by GiveWell.
- 230 billion shrimp are being farmed at any moment according to recent estimates by Rethink Priorities — more than any other farmed animal estimate, including insects. An older report by the Shrimp Welfare Project analyzed what affects shrimp welfare.
- Risks from AI were discussed in a number of podcasts — Ajeya Cotra on Freakonomics, Michael Webb and Mustafa Suleyman on 80,000 Hours, and Jason Matheny on the Rachman Review.
- The EU is considering dropping its plans for stricter animal welfare measures, including the ban on caged farming.
- The EA Forum is hosting a debate on whether we should push for a pause on AI development.
- A new post on nuclear winter reviews the evidence and complexities of the topic.
- There is little (good) evidence that aid systematically harms political institutions, argues a post on the EA Forum.
- A new paper on AI consciousness suggests a framework for evaluating the consciousness of future AI models.
- Family Empowerment Media has published the results of their pilot campaign, which looks very promising.
- A recent poll shows that most Americans want to slow down AI.
Classic: Advice on how to read our advice (80,000 Hours) — related to the release of their new career guide.
October
An upcoming virtual event, biotech risks and opportunities, and many open roles
Featured content
Responsible biotech: Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives (80,000 Hours Podcast)
The day after Kevin Esvelt discovered a way to spread a modified gene through an entire population of plants or animals, he “woke up in a cold sweat.” […]
In a recent appearance on the 80,000 Hours Podcast, Esvelt explains why he now thinks CRISPR-based gene drive technology is relatively safe (it’s slow, clearly detectable, and easily countered). Accidents or carelessness could still lead to catastrophic delays and other issues, so Esvelt argues that we should start small and local (via “daisy drives”) and only proceed with proper regulation and community buy-in.
The podcast episode also covers why and how we should protect society from those who might want to start a civilization-ending pandemic. The number of people who have the ability to identify and release a dangerous virus is growing [...]. But some things could help. For instance:
- Monitoring wastewater for suspicious patterns or signs of edited DNA to detect pandemics early
- Investing in developing better personal protective equipment (and stockpiling it)
- Installing virus-neutralizing far-UVC lights in workplaces and labs [...]
What does democratic oversight of AI look like? (GovAI: What Do We Mean When We Talk About “AI Democratisation”?)
Recent polls suggest that most U.S. voters would approve of regulation that prevents or slows down AI superintelligence, favor restricting the release of AI models we don’t fully understand, and prefer federal regulation of AI development over self-regulation by tech companies. It’s hard to accurately interpret poll results like these, but they point to unease and a disconnect between the general public and some major AI labs in the U.S. So it’s worth exploring how the public should be involved in steering AI development.
People sometimes talk about “AI democratization,” but the phrase is vague. An overview from the Centre for the Governance of AI outlines different things people mean by “AI democratization” and how the stances diverge. The post (and accompanying paper) explains that democratizing AI use (making AI more accessible), development (helping a wider range of people contribute to AI design), benefits (more equitably distributing the benefits of AI), and governance (distributing influence over how AI should be used, developed, and shared to a wider community of stakeholders) have significant differences. Democratizing AI governance doesn’t necessarily mean making it possible for everyone to use or build AI models however they want, but rather introducing democratic processes like citizen assemblies to give people input on key decisions about AI. […]
Animals in data: a new resource on animal welfare from Our World in Data
[It highlights] information like the number of animals that are farmed and slaughtered (hundreds of millions every day) and why adopting slower-growing breeds of chicken would reduce animal suffering. […]
October news & other links
- We need to treat malaria as the emergency it is; more than 1000 children die from malaria every day, but the newly approved R21/Matrix-M vaccine is only scheduled to be rolled out next year.
- Prospect Magazine commemorates the 40th anniversary of Petrov Day by highlighting work that aims to reduce the risks of human extinction.
- Open Philanthropy shares an update on their planned allocation to GiveWell’s recommendations for the next few years. GiveWell also discusses the update.
- A TED Talk by Hannah Ritchie: “Are we the last generation — or the first sustainable one?"
- “From Warp Speed to 100 Days” explains that most of the delay in rolling out COVID vaccines was spent on clinical trials. Innovation in clinical trial design could save a lot of lives next time.
- A post discusses what AI could mean for animal welfare.
- “The Risks and Rewards of Prioritizing Animals of Uncertain Sentience” describes how different philosophical worldviews lead to different conclusions about which animals someone should prioritize helping.
- A post argues for giving money directly to end extreme poverty (see also a related animation).
- To find effective animal advocacy interventions, a new report identifies developing countries that buck the trend of growing meat production.
AI-safety-related:
- The Effective Altruism Forum recently hosted a debate on whether a pause on frontier AI development would be helpful. A wrap-up post from Scott Alexander summarizes some key disagreements that surfaced in the debate. (In related news, organizers of the PauseAI protest outline some crucial considerations for AI pause advocacy.)
- The Chinese government has issued a new law governing generative AI. Meanwhile, the U.S. has tightened its export controls on advanced AI chips and semiconductor manufacturing equipment, which make it harder for China to access advanced chips (see a summary thread on the revised controls). A related report reviews the prospect of large-scale smuggling of AI chips into China and suggests some countermeasures (also covered in The Times, paywalled).
- ARC Evals proposes “responsible scaling policies,” which would specify what level of capabilities an AI developer should be prepared to handle with their current safety measures before they can continue scaling.
Classic: Don’t think, just apply! (usually) — EA Forum
November
AI (safety) news, the invisible toll of air pollution, and more
Featured content
How long do policy changes matter?
Is advocating for policy changes effective? One of the things you'd need to evaluate to answer this question is how long a new policy would persist before it would probably be repealed. […]
A recent paper that analyzes historical data from the US finds that policy changes are surprisingly persistent. A narrowly passed referendum will probably (80%) still be in place a century later. Moreover, referendums that narrowly fail will probably (60%) not pass at all in the next century. This suggests that advocacy for policy change might be much more cost-effective than is often assumed. […]
Air pollution is responsible for ~12% of deaths: what might help?
Air pollution accounts for the deaths of close to 6.7 million people per year (including half a million infants). Pollution is particularly bad in countries like India, where the average person might be losing 3 to 6 years of life expectancy due to bad air. And the problem is incredibly neglected.
In a recent podcast, Santosh Harish discusses the main causes of air pollution and some potentially cost-effective interventions. Indoor pollution can result from people burning solid fuels for cooking. (Unfortunately, a lack of access to cleaner fuels like liquid petroleum gas or reliable electricity means many people have no choice except to use fuels like firewood.) Outdoor air pollution is caused by waste burning, illegal industrial gas dumping, vehicle emissions, and more. And policies meant to prevent air pollution are often outdated or left unenforced.
Research, policy outreach, and technical assistance to governments could be effective ways for philanthropy to support work on this problem. Anything that improves energy efficiency would also help, as would subsidies that help people switch from solid fuels. [...]
November news & other links
AI-safety-related:
- The UK’s AI Safety Summit gathered political and tech leaders to discuss risks from advances in AI and how to manage them, and produced the Bletchley Declaration, signed by representatives of 28 countries (including the US, UK, and China, as well as the EU).
- U.S. President Biden issued an executive order on “safe, secure, and trustworthy” AI (brief summary and analysis, fact sheet, full order), requiring reporting systems, safety precautions at bio labs, and more.
- Liv Boeree talks about the dark side of competition in AI in a recent TED Talk.
- Key scientists share a short “consensus paper”, and prominent Chinese, US, UK, and European scientists sign a statement on a joint strategy for AI risk mitigation.
- In TIME (paywalled), Yoshua Bengio and Daniel Privitera outline policy goals that could help achieve AI progress, safety, and democratic participation.
- AI safety researcher Paul Christiano discusses responsible scaling policies and more on the Dwarkesh Podcast.
- Industry updates: OpenAI's CEO Sam Altman was fired. This news is important but still developing. Meta has disbanded its “Responsible AI” team.
Everything else:
- Giving Season has started! Animal Charity Evaluators shared recommendations. Charities shared what they would do with extra funding. A long-time donor answers questions about earning to give.
- Millions of farmed birds are being killed in an extremely inhumane way after a flu outbreak in the US.
- Investigative journalist Alison Young discusses how top labs have jeopardized public health with repeated biosafety failures on a podcast. (See also episodes on cash transfers and economic growth and historical perspectives on an intelligence explosion.)
- Rethink Priorities has published a series of reports on cause prioritization and uncertainty, including thinking through different types of risk aversion, a cross-cause cost-effectiveness model, and more.
- Chlorinating water can lead to a 30% reduction in child mortality, and we don’t know why.
- The world doesn’t agree on what a “humanely raised fish” is. Changing that could affect billions of aquatic animals.
- Alvea, a company aiming to speed up vaccine development, has wound down. An unofficial post shares some reflections.
Classic: What happens on the average day? (Rose Hadshar)
December
Charity spotlights, reflections on 2023, and opportunities for 2024
This was an unusual newsletter. Many people donate right at the end of the year, so the December EA Newsletter focused on featuring exciting charities (in case people want to donate to them) and giving readers a sense of what people who are working on EA-related projects (or projects that look pretty good from an EA perspective) actually do.
Featured content
Doing good via new charities (LEEP and Charity Entrepreneurship as a case study)
Exposure to tiny amounts of lead can lower a child’s IQ by 1-6 points, shorten their lifespan, and more. People know that lead is harmful, but few are aware of just how widespread the problem is; 1 out of every 3 children worldwide has unsafe blood lead levels. And the global toll is catastrophic. Can charities help?
In a recent podcast episode, Lucia Coulter discusses the Lead Exposure Elimination Project (LEEP), which she co-founded in 2020. LEEP identifies and alerts communities that are unwittingly being exposed to lead paint, and provides support to local governments and producers as they transition to lead-free paint. [...] LEEP seems extremely cost-effective [...].
Coulter didn't stumble into working on lead exposure — she applied to a 2020 charity incubation program run by Charity Entrepreneurship (CE), for which CE researchers had pre-selected eight promising charity ideas. As a participant, she picked one of the eight ideas, paired up with a cofounder, refined the plan, and got LEEP off the ground with training and $60K in seed funding from CE.
Charity Entrepreneurship incubates 5-8 charities every year, and the charities they launch have a strong track record. If you want to support new charities that are likely to achieve a lot of good, consider donating to CE’s Incubated Charities Fund (see their 2024 charity ideas) or to individual charities they’ve launched.
Farmed animal welfare is neglected, but some charities are making important progress (The Humane League and the Animal Welfare Fund as examples)
In 2023, farmed animal welfare advocates got 130 companies to agree to stop using battery cages in the coming years. Corporate pledges like these are a tested approach that is expected to help a huge number of animals; if companies implement the changes they promised, the ~3000 pledges secured to date would reduce the suffering of around 800 million chickens alive at any time. (Around 90% of the pledges that had come due by last year have been fully implemented.)
This progress and some other wins were achieved by a handful of animal advocacy organizations that could accomplish more if they were less funding-constrained. (The total annual budget of all organizations that try to promote farmed animal welfare is estimated to be a bit over $200 million. For context, over $1 billion annually is spent on animal shelters in the US, supporting significantly fewer animals. As a different reference, the Metropolitan Museum of Art had a budget of over $300 million last year.) […]
Some donation options are less direct but might achieve more (highlighting charitable funds, and effective giving initiatives like Giving What We Can)
Researchers have argued that giving to expert-led charitable funds is often more effective than supporting individual charities. Fund managers’ expertise and resources mean that donations tend to be used more effectively, and recipient organizations might benefit from the consistency and additional support that funds can provide.
Three new cause-area funds were recently announced by Giving What We Can: the Global Health and Wellbeing Fund, the Effective Animal Advocacy Fund, and the Risks and Resilience Fund. They’ve also highlighted some older funds that have a strong track record.
Supporting “effective giving initiatives,” which focus on raising money for high-impact opportunities, might be another way to get more out of your donations. Giving What We Can (GWWC), for instance, seems to generate around $30 for highly effective charities per $1 spent on its operations, has encouraged thousands to pledge to give 10% of their lifetime income, and supports a variety of other projects. Some other effective giving initiatives operate primarily outside of English-speaking countries, like Effektiv Spenden, Ayuda Efectiva, and Doneer Effectief. [...]
Other exciting organizations
[…] For more comprehensive lists, explore projects featured on GWWC's site, charity recommendations from staff at Open Philanthropy and the charity evaluator GiveWell, or read posts about donation choice on the EA Forum.
December news & other links
- PEPFAR (an HIV/AIDS program that has saved millions of people) is at risk.
- The malaria R21 vaccine has been prequalified by the WHO, earlier than expected. See also this reflection from an advocate.
- The European Commission backed out of important animal welfare commitments.
- The first results from the world’s biggest basic income experiment in Kenya are in.
- Jacob Steinhardt analyzes the historical rates of catastrophes and their causes.
- Jeff Sebo discusses digital minds and how to avoid sleepwalking into a moral catastrophe.
- A post explains the role of “sectoral transformation” for economic growth in poorer countries.
- Talks from EA conferences in 2023 are live.
- Richard Y Chappell argues that doing good effectively is unusual.
AI-safety-related
- There's been a lot of media coverage of and confusion about the firing and reinstatement of OpenAI’s CEO Sam Altman. To recap, the decision to remove Altman seems to have been precipitated by some board members’ beliefs that Altman had misrepresented them during his attempt to remove a different board member (who’d co-authored a paper that was critical of OpenAI). After serious backlash, the board negotiated and agreed that Altman would return as CEO (but not as a board member) and on a new board (which consists of 2 new appointees and 1 of the board members who’d originally voted to fire Altman). You can also read some discussion on the events’ outcomes, as well as a discussion of how to interpret leaks in the media.
- Yoshua Bengio's statement for the US Senate Forum on AI Risk outlines risks from AI, explains why he isn’t reassured by counterarguments, and shares his recommendations.
- EU policymakers reach an agreement on the AI Act.
- A report discusses pitfalls that might cause AI alignment research to be counterproductive (and how to address them).
Concluding thoughts + notes on the Newsletter
I don’t think this is a great list (and I think a shorter list would be valuable), but I’m hoping that it’s a start. Please add or comment on things you want to highlight!
I also want to take this chance to solicit feedback on the monthly EA Newsletter (which goes out to ~60K subscribers at varying levels of engagement with EA).
I’m hoping to write more reflections, but I might deprioritize them and/or not turn them into a publishable form. Topics might include: things I wish we discussed more on the Forum, posts that changed my mind, a shorter list of my really-truly favorites, and links that seemed most actionable/useful.
Here’s a link to the companion post, which focuses on 2023 "news."
Vasco Grilo @ 2024-01-06T17:31 (+2)
Thanks for the summary, Lizka!
Here’s a link to the companion post, which focuses on 2023 "news."
The link here is not right, although you provide the correct one at the start of the post.
Lin BL @ 2024-01-06T16:00 (+2)
Thanks for compiling this! I skimmed this, and it was a good way of getting an overview of what is happening in parts of EA that I know less about. I found having it separated by cause then by month useful so the reader can choose which overview they prefer, although some non-AI causes could have had their own section rather than being clumped together (I slowly scrolled through month by month and clicked on some of the more interesting looking articles).