From feelings to action: spreadsheets as an act of compassion
By Zachary Robinson @ 2025-06-13T14:28 (+162)
This is a transcript of my opening talk at EA Global: London 2025. In my talk, I challenge the misconception that EA is populated by "cold, uncaring, spreadsheet-obsessed robots" and explain how EA principles serve as tools for putting compassion into practice, translating our feelings about the world's problems into effective action.
Key points:
- Most people involved in EA are here because of their feelings, not despite them. Many of us are driven by emotions like anger about neglected global health needs, sadness about animal suffering, or fear about AI risks. What distinguishes us as a community isn't that we don't feel; it's that we don't stop at feeling – we act. Two examples:
  - When USAID cuts threatened critical health programs, GiveWell mobilized $24 million in emergency funding within weeks.
  - People from the EA ecosystem spotted AI risks years ahead of the mainstream and pioneered funding for the field starting in 2015, helping transform AI safety from a fringe concern into a thriving research field.
- We don't make spreadsheets because we lack care. We make them because we care deeply. In the face of tremendous suffering, prioritization helps us take decisive, thoughtful action instead of freezing or leaving impact on the table.
- Surveys show that personal connections are the most common way that people first discover EA. When we share our own stories – explaining not just what we do but why it matters to us emotionally – we help others see that EA offers a concrete way to turn their compassion into meaningful impact.
You can also watch my full talk on YouTube.
One year ago, I stood on this stage as the new CEO of the Centre for Effective Altruism to talk about the journey effective altruism is on. Among other key messages, my talk made this point: if we want to get to where we want to go, we need to be better at telling our own stories rather than leaving that to critics and commentators. Since then, we've seen many signs of progress, especially in recent months.
In May, TIME magazine featured EA not just once, but twice in its "Philanthropy 100," celebrating Peter Singer's influence, as well as Cari Tuna and Dustin Moskovitz's use of EA frameworks in their philanthropic giving through Open Philanthropy.
I have also been glad to see EA in the New York Times, which turned to us for input on topics ranging from donations in response to USAID cuts to the potential implications of digital sentience.
But maybe the most surprising – and, let's be honest, definitely the most entertaining – recent coverage came from the late-night Daily Show. Many of you will have already seen it, but for those of you who haven't, here's the video.
There's a lot to love here. Shrimp welfare getting a primetime platform, and a comedy show making space to portray the horror of inhumane slaughter methods, overcrowding, and eyestalk ablation. It's playful and funny, showing that treating our work as deadly serious doesn't mean we have to take ourselves too seriously. It's a segment I hope many more people see.
But through the laughter, there's a line that sticks out: "Fuck. Your. Feelings."
Not the most polite message to use to kick off EAG, but one worth talking about. This caricature – that EA is populated by cold, uncaring, spreadsheet-obsessed robots – would be easy to laugh off if it were a one-off, but it isn't. This misconception follows us around, in many contexts, in polite and impolite forms... despite the fact that it's importantly wrong.
It's a mischaracterization that doesn't represent who we are, but has important implications nonetheless. It affects how we relate to our work, how we tell stories about our movement, and how well we are able to inspire other people to join our efforts to make the world a better place.
But most of us aren't here without feelings or despite our feelings. In many cases, we're here because of them.
Part of why I think this caricature doesn't reflect EA as a whole is that I know it doesn't describe me. When I think of why I want to do good, I end up thinking a lot about the feelings I have. And when I think about how feelings can motivate action, I come back to a cause that hasn't historically received lots of airtime at EA Global: potholes.
Now, given this is EAG, you might think I'm talking about potholes in LMICs and their counterfactual impact on traffic fatalities. But, no. I actually want to talk about potholes in my hometown of Omaha, Nebraska.
For those of you who aren't familiar with Nebraska, it's a landlocked state in the middle of the US, and when I once told a European that it's where I'm from, he remarked, "Ooh! I know Nebraska! Famously the most boring place in America!"
It's also where my father became famous for filling potholes.
My father had a day job, and being a vigilante construction worker was not it. He ran a small online travel agency from our home in Omaha, where the city had neglected its decaying infrastructure for years. Every day, my dad and everyone else in the city were stuck driving over crumbling roads to work, or to the grocery store, or to drop kids like me off at school. And many of them felt mad about it.
And one day, when my father drove over one pothole too many, he got mad too. But unlike everyone else, he didn't just get mad. He decided he would fill the potholes himself.
So he went out, bought his own asphalt patching material, and went around the neighborhood filling in the holes. He spent countless hours and thousands of dollars just to make the roads a little less broken.
Now, I have no desire to advocate for potholes as one of the world's most pressing problems. But I want to point out what unites a person who feels angry about potholes that their local government doesn't fill, a person who feels dismayed by gaps in global aid, a person who feels sad that animals are tortured en masse to feed us, and a person who feels fearful about the possibility of human extinction from transformative technology.
All of these people have something important in common when they say: This is a problem I feel strongly about, and I want to take action.
A sad fact about the world is that potholes aren't the only problem that people encounter and choose to do nothing about.
Take the recent cuts to USAID. The majority of Americans support maintaining or increasing the amount of American aid that existed prior to Trump taking office, which suggests that there are at least 150 million Americans displeased with the administration's recent decisions. Yet only a fraction have taken action.
Take AI risk. On this side of the pond, about a quarter of Brits report believing there's at least a 10% chance AI causes human extinction within the next century, which makes for about 15 million adults feeling some degree of fear for the fate of humanity. Yet again, only a fraction have taken action.
There are many people who feel something is wrong and choose to look away.
When forced to drive over potholes day after day, it can feel better to turn off the part of our brains that tells us something is wrong than to sit with our building anger or sadness or fear and confront the reality that the problem has gotten really bad, and nobody is coming to fix it.
But we know that tuning out isn't the only option.
There's a fraction of the world that doesn't just keep driving! There are people who look at holes much bigger than potholes and say: If no one else will fix this, I will. And a startlingly large proportion of those people are in this room[1] right now.
Many of us in the EA community have dedicated ourselves to staying with our feelings and not looking away. We stare at eye-watering problems day after day. What distinguishes this community isn't that we don't feel. It's that we don't stop at feeling. We act.
Effective altruism provides tools for us to translate our feelings into action.
When the new US administration issued stop-work orders that froze USAID's global health programs, GiveWell didn't wait around for someone else to step in. They moved quickly to plug the most critical gaps. Within weeks, they mobilized $24 million in emergency funding to provide life-saving malaria prevention and HIV testing services. They're currently investigating more than $100 million in additional grants as they continue to manage the fallout from these cuts.
When it comes to AI safety, the effective altruism community was one of the earliest groups taking action. Open Philanthropy identified "risks from artificial intelligence" as a top priority cause way back in 2015. Since then, our community has grown AI safety from a niche concern into a thriving field with its own institutions, training programs, and research agendas. Programs like MATS – Machine Learning Alignment and Theory Scholars – have trained a new generation of AI safety researchers pioneering novel research agendas like externalized reasoning oversight, situational awareness evaluations, and gradient routing – which all sound extremely important and which I will confidently pretend to understand if you ask me about them.
From conversations with many of you, and with others across our community, it's clear that these actions were preceded by strong feelings. Sadness. Anger. Hope.
This is what compassionate action looks like: translating our feelings into concrete steps that help. EA gives us the tools to do that. Not to feel less, but to act more.
Why is it, then, that sometimes we're portrayed as walking, talking spreadsheet robots?
Part of it, I think, is that when our stories are being told, by us or by others, the focus tends to be on what we're doing or not doing. It's our actions that count, after all. In comparison, it can be difficult to make space for the how and the why. It's not newsworthy that someone felt upset by a pothole. It is newsworthy what someone did in response to it.
In EA, our "whats" tend to attract attention – shrimp welfare and digital sentience are counterintuitive and controversial, which makes them good for clicks. Whereas our "whys" – our feelings – are more intuitive, more widely shared, and more recognizably human. In other words, they seem boring.
But that doesn't mean we should cut them from our story.
There's a long track record of people involved in EA naming feelings as our foundations. Back in 2017, Holden Karnofsky wrote about radical empathy as a motivating reason to care about animals and people on the other side of the planet, in the same way it was once radical to recognize the moral worth of a neighbour of a different race.
In his book The Precipice, Toby Ord talks about why he cares about humanity's future:
Indeed, when I think of the unbroken chain of generations leading to our time and of everything they have built for us, I am humbled. I am overwhelmed with gratitude; shocked by the enormity of the inheritance and at the impossibility of returning even the smallest fraction of the favor. Because a hundred billion of the people to whom I owe everything are gone forever, and because what they created is so much larger than my life, than my entire generation.
But these motivating feelings rarely make it into stories about our work. It's too easy to focus on what's different. And the thing that separates the majority of the people inside this room from the majority of people outside of it is our actions, not our feelings.
Even when it's not newsworthy, it can be worth reminding people that, hey, like just about everybody else, we'd find it really sad if we all died.
Sometimes, it's okay to tell people what they already know and feel. Sometimes, it's okay not to tell people about all of the ways they could be acting differently, and instead validate all of the ways their feelings make sense. Sometimes, it's okay not to emphasize what's original and unique about EA, but rather the ways in which we're exactly the same as everyone else.
The lack of attention to what makes us the same can intensify the focus on what makes us different. And that means that oftentimes, discussions surrounding EA center less on what we do and more on what we don't do.
You aren't focused on homelessness in your own backyard. You aren't trying to fix your country's broken healthcare. You aren't focused on the discrimination you see day in and day out. And because there's so much concern with what we aren't doing, that can lead people to believe we don't care.
I don't think that's true, though. I'm sad every time I see someone living and suffering on the streets where I live in San Francisco. I'm filled with love for my parents and feel afraid when I think of their health declining. I'm angry and upset about the racism and homophobia that I witnessed growing up in Nebraska.
But I also feel angry at a food system that plucks the eyestalks of shrimp, at governments that leave children to die of malaria, and at corporations that recklessly race to develop technologies that could endanger us all.
I care about every tortured animal, every dying child, and every future individual that could have a fulfilling life. But I can't help them all. And if we don't want to deny the world's problems, and if we don't want to disassociate from them, then we need a way to decide which feelings to act on, and which actions to take.
That is compassion. Compassion is what we call converting our feelings into actions that help others. It is the bridge between our awareness that other people are hurting and our actions to make it better.
Effective altruism is compassionate. This community is compassionate. We recognize that the world doesn't need us to take on its suffering; it needs us to take action to alleviate its suffering.
Taking ideas seriously is a common trait within the EA community. I think there's a related EA trait: taking feelings seriously.
When we feel sad or angry or afraid about the state of the world, we ask ourselves: What can we do about it, with the resources available to us? What can I do about it, with the time and money available to me?
EA principles are still the best tools I know of to answer these questions, which is to say they're the best tools I know of to put compassion into practice. Our feelings tell us that action is required, but in the face of so many complex and seemingly insurmountable problems, our feelings alone can't tell us what action to take, or which problem to prioritize.
That's where our principles come in.
Impartiality, scope sensitivity, truthseeking, and tradeoffs may seem cold or uncaring at first glance, but they're not the problem – they're an essential part of the solution.
Without impartiality, we'd be blind to the needs of those far away or unfamiliar.
Organizations like Rethink Priorities have done pioneering work on sentience and how to weigh animal suffering, extending our moral circle.
Without scope sensitivity, we'd fail to help as many as we can by as much as we could. Scope-sensitive members of the community haven't flinched while working their way down in body size and up in population size, from cows to chickens to shrimp to insects, ultimately improving the lives of billions of animals.
Without truthseeking, we'd be beholden to familiar ideas even when they lead us astray. This community has funded and used research from the Welfare Footprint Institute to change its strategies and better understand how to help chickens, fish, and pigs.
Whether we like it or not, everyone who wants to make a difference has to make hard tradeoffs. Recognizing tradeoffs isn't a failure of compassion. It is an expression of it. Our resources are finite, and much smaller than the full scale of all the problems and promise in our world. So we have to choose.
Between just the US and UK, there are about two million registered non-profits – over 100 times the number of McDonald's. That means every time we dedicate our resources to one charity, we're choosing not to dedicate them to millions of others.
One way to choose is to stick to what we know, what's close to our heart or close to home.
The other way is to make spreadsheets. We know it's unconventional (and for many it's counterintuitive) to express deep care by doing a cost-effectiveness calculation. It's understandable that, without knowing why we do this, people might watch us intentionally and calculatedly choosing not to help those close to home, and assume that we are cold and uncaring.
But I think that gets EA the wrong way round. We don't make spreadsheets because we don't care. We make spreadsheets because we care so much.
Why does this misconception matter?
If compassionate people don't see us as compassionate, and they don't hear us communicating that EA is a meaningful way to be compassionate, we risk alienating like-minded people who might otherwise dedicate their resources to tackling the world's most pressing problems. In every survey Rethink Priorities has run on how people first find out about EA, personal contacts come out on top.
How we show up matters. It mattered to me in my EA journey.
When I first discovered EA, I was deeply skeptical of AI safety. It took me a long time to be convinced that it was something I should be concerned about. And the primary reason it took so long wasn't that I was presented with bad arguments, and it wasn't that I was stupid. It was that I associated AI safety with cold and faceless people on the internet, so I didn't warm up to them or their ideas.
Now, this was not my most truthseeking reaction. But it was a human one. The people I mentally associated with AI didn't seem particularly compassionate to me. Instead, they seemed self-centered: inflating the importance of their work and ignoring very real diseases killing very real people on the other side of the world because the only thing that could threaten them and other wealthy Westerners was a sci-fi fantasy.
But what eventually changed my mind was spending time with them. Because only then did I learn not just about their ideas, but also about their feelings. And importantly, I learned not just about the very real terror they felt about the potential for all humanity to die, but also the compassion they demonstrated for the suffering and the poor, as people who were vegans, kidney donors, and 10% pledgers.
These people clearly felt strongly about the suffering of others and sincerely believed the best way they could help others was by shaping the development of transformative technology. Once I understood that we had common motivations, I heard their arguments differently.
And while specific diets or donations are not, and should not be, barriers to participating in this community, I think the people in this room are here because we care. I want to make it easier for others to see that caring.
However you feel, there's power in showing up as your full, feeling self. Our stories should convey both sides of our compassion – both the strength of our feelings and the impact of our actions. Those feelings and actions will vary from person to person, from org to org, and from story to story, which gives each of us the opportunity to reach different audiences with different emotional registers.
The EA brand won't be controlled or contained by any one EA organization, CEA or otherwise. Ultimately, it will be a composite function of every EA actor and every EA ambassador and advocate, which is to say that it will be a composite of all of you. It will be a function of the stories told about you, and the stories you tell yourselves. We are more than our spreadsheets. The diversity of our stories can strengthen our outreach if we make EA's public persona more reflective of the humans who built it.
So, with that in mind, I'm going to ask that you don't fuck your feelings. Because sometimes, in order to make people think differently and act differently, we first need to make them feel.
So, I hope you spend time learning, but I also hope you spend time feeling. You are surrounded by so many compassionate people who feel the holes in the road and take heroic action to fill them in for others. I want to continue building this community because I believe in our compassion and because I believe in our principles. I feel inspired to advocate for EA because I believe we've already achieved so much and because I believe we're only just getting started.
[1] Or reading this post!
NickLaing @ 2025-06-13T15:44 (+17)
Loved seeing this talk and am looking forward to showing it to a few others as well!
Side note: it would be great if all the talks were on YouTube publicly ASAP. Good to be able to ride on that post-conference buzz before it dies down ;)
SiobhanBall @ 2025-06-15T19:47 (+6)
Was great to hear this speech live! I thought it was very well-structured, with just the right balance of levity and seriousness. I'll be showing my family shortly so that they get a better idea of what 'all this EA stuff' is about!
Sarah Tyers @ 2025-06-16T10:41 (+3)
Really appreciated this talk. It did a great job of holding both feeling and strategy – not shying away from anger, sadness, or hope, but also not getting stuck there. The story about your dad filling potholes really stayed with me – such a simple, clear example of what it means to notice harm and just decide to help. I think sometimes we act like caring and action need to be either emotional or analytical, but this reminded me that they can (and should) be both. Spreadsheets and systems can be powerful tools because we care – not instead of it. Thanks for naming that so clearly!
karaman @ 2025-06-15T19:02 (+2)
Thank you for this, Zachary. This talk hit me harder than I expected.
It's easy to slip into "optimizer autopilot" mode. To think that the job is just about doing the math right, picking the right interventions, and maximizing QALYs...