Who would you like to see speak at EA Global?
By Jordan Pieters 🔸 @ 2024-08-08T10:55 (+46)
I’m Jordan, I recently joined as the Content Coordinator on the EA Global team at CEA, and I’d love to hear from the community about what content you’d like to see at future conferences. You can see upcoming conference dates here.
How do we usually select content?
Traditionally, our content selection focuses on:
- Informing attendees about important developments in relevant fields (e.g. founders discussing new organisations or projects, researchers sharing their findings)
- Diving deeper into key ideas with experts
- Teaching new skills relevant to EA work
Some recent sessions that were well-received included:
- Panel – When to shut down: Lessons from implementers on winding down projects
- Talk – Neela Saldanha: Improving policy take-up and implementation in scaling programs
- Workshop – Zac Hatfield Dodds: AI Safety under uncertainty
However, we recognise that conference content can (and perhaps should) fulfil many other roles, so your suggestions shouldn’t be constrained by how things have been done in the past.
What kinds of suggestions are we looking for?
We welcome suggestions in various forms:
- Specific speakers: Nominate people who you think would make great speakers (this can be yourself!).
- Topic proposals: Suggest topics that you believe deserve more attention.
- Session format ideas: Propose unique formats that could make sessions more engaging (e.g., discussion roundtables, workshops, debates).
To get an idea of what types of content we’ve had in the past, check out recordings from previous EA Global conferences.
We have limited content slots at our conferences, which means we can't promise to follow up on every suggestion. However, every suggestion helps us better understand what our attendees want to see and can provide jumping-off points for new ideas.
How to submit your suggestions:
- Comment on this post and discuss your ideas with other forum users.
- Fill out this form or email speakers@eaglobal.org if you’d prefer not to post publicly.
Your input can help shape future EAGs to be even more impactful. I look forward to hearing your suggestions!
Ozzie Gooen @ 2024-08-09T22:32 (+49)
I really would like to see more communication with the Global Catastrophic Risks Capacity Building Team at Open Philanthropy, given that they're the ones in charge of funding much of the EA space. Ideally there would be a lot of capacity for Q&A here.
Kirsten @ 2024-08-08T20:03 (+36)
Personally I'd be very interested in more content from policymakers or people who regularly influence policymakers! I don't normally go to EAGs because they don't really speak to my career but I would be much more interested in this kind of content.
Karthik Tadepalli @ 2024-08-11T19:03 (+33)
Personally, I think EA's global health and development wing has become stale. There are very few new ideas these days, very little dynamism or experimentation with things beyond the typical GW/OP grant. In that spirit, I think we should invite health and development researchers and policymakers who work on important development questions that EAs have not historically engaged much with. Here are my suggestions:
- Doug Gollin is the foremost expert on agriculture in developing countries. Agriculture is the largest sector in most developing economies, so improving agricultural productivity could dramatically increase people's incomes. At the same time, the agricultural sector is a constraint on growth, limiting people's movement into higher-value sectors that can power economic growth. How do we improve agriculture? Doug Gollin can tell us.
- David McKenzie is the foremost expert on businesses in developing countries. If we want people to be able to earn more money, the most important constraint is that businesses have to expand to create more jobs. How do we make businesses more productive? David McKenzie can tell us.
- Pinelopi Goldberg is the foremost expert on globalization and development. The biggest tectonic shift in development in the past 50 years is the rapid globalization that followed the end of the Cold War; India and China arguably gathered a lot of steam in their growth paths from globalization that they otherwise wouldn't have had. But today we are in a de-globalizing world where the EU puts up "carbon tariffs" that mostly affect developing countries, and the US wants to "friend-shore" its supply chains. What can developing countries do to advance growth and alleviate poverty in a de-globalizing world? Pinelopi Goldberg can tell us.
In addition, I see some really promising agendas from younger scholars, who might also be more willing to talk at EAG:
- Lauren Bergquist is at the vanguard of research on market-level interventions in developing countries. Traditionally, EAs and development economists focused on interventions that directly deliver some health commodity or income-generating support to households. But you can get an unparalleled amount of leverage by intervening at the market level, addressing market-level inefficiencies that reduce people's incomes. What are these inefficiencies and can we find cost-effective ways to address them at scale? Lauren Bergquist can tell us.
- Jacob Moscona is at the vanguard of research on technology in developing countries. The rich world has invented technologies that improve life beyond the wildest imaginations of people who lived a hundred years ago, but most of these technologies are absent from developing countries. Why don't technologies flow from rich countries to poor countries? Jacob Moscona can tell us.
- Ben Faber is at the vanguard of research on economic integration within developing countries. Countries cannot prosper as a collection of isolated villages. The flow of goods and people between rural and urban areas is essential for both of them to become richer. How do we facilitate this flow? Ben Faber can tell us. (Disclosure: Ben is one of my advisors.)
I've named development economists since those are the people whose work I am aware of. But I am sure that global health also has more exciting areas than EA is aware of, and I encourage people with expertise in global health to recommend global health experts in the same vein.
BrownHairedEevee @ 2024-08-17T00:13 (+3)
How about someone from the Institute for Transportation and Development Policy?
One idea for a cause area that I have is investing in public transportation in developing countries that have inadequate infrastructure for it - many of which are in Africa. Public transit can promote sustainable, equitable economic growth. Many African governments are mostly building roads even though the majority of their citizens don't own cars, so their transportation investments are not really benefiting the public.[1] And as I've written on this forum, car-centric cities are prone to congestion so they can't support large populations and high economic growth like transit-oriented cities can.
Jason @ 2024-08-11T20:44 (+2)
> Personally, I think EA's global health and development wing has become stale. There are very few new ideas these days, very little dynamism or experimentation with things beyond the typical GW/OP grant.
What do you think the cause for this stagnation is? I can envision some stagnation causes for which inviting speakers who work on EA-neglected questions could have an attractive theory of impact, and other stagnation causes for which the potential pathways to impact would be murkier.
Karthik Tadepalli @ 2024-08-11T21:18 (+18)
I don't know frankly. Spitballing:
- People defer to OP/GW/CE to shine the light forward for us.
- New ideas come from new people, and EA GHD has far fewer new people entering each year than it used to because community building focuses on AI.
- The appeal of EA GHD is certain impact, which makes people much more reluctant to deviate from the obviously good opportunities we have already found.
- The point above, but for funding.
I started using the EA Forum shortly before OP ran their Cause Exploration Prize contest, and it really felt like the Forum was the most exciting place for new ideas on how to do good in global development. I used to regularly send Forum posts to my non-EA development friends. I've had zero cause to do so recently.
Arepo @ 2024-08-12T13:13 (+7)
Another explanation is just that we've basically found the best global health interventions, and so there isn't much to do in the space - or at least not with the current budget.
Karthik Tadepalli @ 2024-08-12T17:59 (+5)
If this were true, GW/OP wouldn't be funding new areas like lead poisoning or air pollution. You could argue those areas are speculative and may not beat GW top charities, but then there's still tremendous value of information from funding them to see if they do beat GW top charities. Either way, there's no argument for resting on our laurels.
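To make the value-of-information point concrete, here is a toy calculation with entirely made-up numbers (none of these figures come from GiveWell or OP):

```python
# Toy value-of-information calculation. All numbers are made up for
# illustration; none of them come from GiveWell or Open Philanthropy.

p_better = 0.3         # assumed chance the new area beats GW top charities
multiplier = 2.0       # assumed cost-effectiveness if it does (top charities = 1.0)
future_budget_m = 100  # $100M of future funding that could be redirected

# If the pilot shows the new area wins, we redirect the budget to something
# twice as good; otherwise we keep funding top charities and lose nothing.
expected_gain_m = p_better * (multiplier - 1.0) * future_budget_m
print(f"Expected gain from running the pilot: ${expected_gain_m:.0f}M "
      "of top-charity-equivalent value")
# Prints $30M - so a pilot costing a few million can be worth it even if
# the new area probably fails to beat the top charities.
```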
Arepo @ 2024-08-13T13:36 (+2)
It seems consistent for it to be true and for us not to know that it's true. All GW can ultimately do is keep trying and assessing new stuff, and if it fails to beat AMF & co, gradually increase their credence that they've found the best areas.
I'm somewhat unsure what you mean when you describe these things as having 'tremendous value of information' while also thinking they represent 'very little dynamism or experimentation' btw (not claiming you're inconsistent, just that I find them a confusing pair of statements as contextualised so far and would be interested for you to clarify).
Karthik Tadepalli @ 2024-08-13T15:04 (+6)
> It seems consistent for it to be true and for us not to know that it's true.
That's what I mean by value of information. My point is that there is high value of information in testing new interventions, and that OP/GW/CE are definitely doing this exploration, but that the community is adding very little to this exploration process. What little innovation there has been in EA GHD has been mostly top down and led by these organizations rather than based on collective research.
Arepo @ 2024-08-13T16:21 (+6)
Gotcha. My guess is that's funding- and culture-driven - my sense is EA community orgs have been put under substantial pressure to prioritise longtermist/AI stuff to a much greater degree than they used to.
Jason @ 2024-08-12T13:36 (+2)
Maybe, although this conclusion would likely be dependent on applying GiveWell-like moral weights that heavily favor saving lives. I'm not saying those weights are wrong, just that they are not beyond questioning.
NickLaing @ 2024-08-12T16:46 (+3)
Yeah, the last 6 months especially have been lean pickings on the GHD front on the forum. I came on much later than you (20 months ago) and have seen a noticeable decline even in that time. Love your suggestions for speakers.
Perhaps a lot of EA-ish GHD people are also getting stuck into their work rather than looking for new opportunities.
I would also like to see more GHD stuff on the forum from Open Phil and RP, but I doubt I'll get much joy there.
Karthik Tadepalli @ 2024-08-12T18:01 (+5)
This is a good opportunity to say, your posts on clean water, nurse emigration and helping individuals cost-effectively have been the best GHD contributions I've read on this Forum in years!
Jason @ 2024-08-13T14:44 (+2)
(Adding another possible structural issue: if someone is all-in on GHD and open/flexible to a lot of options of how to maximize their GHD impact, they are going to compare the EV of exploring/trialing new ideas against earning to give. Even post-FTX, that calculus is much more conducive to new-idea development in GCR than in GHD or animal welfare.)
Jason @ 2024-08-13T02:14 (+2)
Thanks -- those are similar to the causes I had in mind, although I would probably ground them even more explicitly in funding issues. For instance, it seems plausible that perceived "deference" to OP/GW/AIM (CE) is actually more like -- people don't go investigating theories of impact that don't seem to fit within established funding streams, and there are a lot of potential ideas that don't fit those funding streams very well.
It seems that AIM looks for interventions that can launch for ~$100-$250K and then produce enough results to attract continuing funding. There's a lot that will work with that model, but the ideas your answer hinted at may not be among them.
As for GW, its business processes seem to favor interventions that are more iteratively testable. By that I mean roughly those interventions for which you can get pretty decent evidence of a specific charity's effectiveness at a fairly low cost, and then fund an eight-figure RCT to promote the charity to top charity status.
Also -- and I say this lovingly as a committed GW donor! -- there's some truth to the idea that GW's top charities put band-aids on deep problems. One can think that band-aids are the best approach to these problems right now while recognizing that one will need just as many band-aids for next year's newborns. When you combine that with GW top charities having a lot of room for more funding with only a modest decrease in marginal effectiveness, you don't have much churn of established programs to make more room for the new ones.
That's not to criticize either organization! I am skeptical that any single organization could do something as broad as "global health and development" at a consistently high level, and there's a lot to be said for the Unix philosophy of doing one thing and doing it well.
Ozzie Gooen @ 2024-08-09T22:38 (+33)
Some ideas:
1. What are the biggest mistakes that EAs are making? Maybe have a few people give 30-minute talks or something.
2. A summary of the funding ecosystem and key strategic considerations around EA. Who are the most powerful actors, how competent are they, what are our main bottlenecks at the moment?
3. I'd like frank discussions about how to grow funding in the EA ecosystem, outside of the current donors. I think this is pretty key.
4. It would be neat to have a debate or similar on AI policy legislation. We're facing a lot of resistance here, and some of it is uncertain.
5. Is there any decent 5-10 year plan of what EA itself should be? Right now most of the funding ultimately comes from OP, and there's very little non-OP community funding or power. Are there ideas/plans to change this?
I generally think that EA Globals have had far too little disagreeable content. It feels like they've been very focused on making things seem positive for new people, rather than on candid, raw disagreements and ideas for improvement.
Arepo @ 2024-08-10T04:19 (+11)
A critical interview format would be interesting. E.g. some highly engaged intra-EA critic like Oliver Habryka, Nuno Sempere or titotal interviewing some key player at OP or CEA with the advance understanding that it might be uncomfortable and at least somewhat adversarial (though maybe with some kind of structure agreed on in advance so no-one comes away feeling like they were unfairly caught off guard).
AnonymousEAForumAccount @ 2024-08-09T03:12 (+27)
I'd love to see Oliver Habryka get a forum to discuss some of his criticisms of EA, as has been suggested on Facebook.
Arepo @ 2024-08-09T03:45 (+23)
David Goldberg - he started one of the most successful EA organisations both in terms of money moved and research quality, but as far as I know has never presented at an EAG.
Arepo @ 2024-08-16T12:08 (+3)
Someone seems to have strong-downvoted this for -9 karma since I looked at it earlier this evening. I wish that person would say why.
Arepo @ 2024-08-16T12:44 (+12)
Sidenote: I have a general wish for the forum a) that you could choose any amount of karma up to your max entitlement to allocate with an up/downvote, and b) that downvotes stronger than some value (maybe 4 or 5) would be restricted unless the downvoter commented.
I find it really quite hostile when someone has so much influence over the apparent popularity of a post/comment and chooses to wield it without trying to help fix whatever they think the problem is.
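To make the proposal concrete, here's a minimal sketch of the two rules (purely hypothetical names and thresholds, not the Forum's actual code):

```python
# Hypothetical sketch of the proposed voting rules - not the Forum's
# actual implementation. Names and thresholds are made up.

STRONG_DOWNVOTE_CUTOFF = 4  # downvotes stronger than this require a comment

def vote_allowed(strength: int, max_entitlement: int, has_commented: bool) -> bool:
    """Check a vote against the two proposed rules.

    strength: signed vote value, e.g. -9 for a strong downvote
    max_entitlement: the most karma this user may allocate to one vote
    has_commented: whether the voter left an explanatory comment
    """
    # (a) voters may choose any magnitude up to their entitlement
    if abs(strength) > max_entitlement:
        return False
    # (b) strong downvotes are restricted unless the voter comments
    if strength < -STRONG_DOWNVOTE_CUTOFF and not has_commented:
        return False
    return True

# e.g. a silent -9 downvote would be rejected:
assert not vote_allowed(strength=-9, max_entitlement=16, has_commented=False)
```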
Jason @ 2024-08-16T18:50 (+2)
(was not the voter, nor have I voted on your comments)
(a) would also be beneficial to the voter; as things stand, knowing the magnitude of a vote sharply limits the pool of people who may have cast it
Incidentally, I'd prefer that people generally say something like "+/-7 or stronger" rather than a specific number, to limit the risk someone could de-anonymize a specific vote
Arturo Macias @ 2024-08-09T19:43 (+18)
Vaclav Smil. I would ask him what is the best intervention to improve global welfare, what are the largest risks, etc. A guy who truly has the biosphere in his brain.
NickLaing @ 2024-08-11T11:59 (+17)
Speakers who are top academics and policymakers in Global Health, even if they aren't very EA-affiliated. I've been to one EAG, and the most impactful connection was with a professor who I think isn't super EA-affiliated. I think she was one of perhaps 2 in that category - I think at least 5 could have been great.
This has the benefit of connecting EA people with powerful people in the Global Health world, drawing powerful people perhaps a little closer to the EA community and way of thinking, and giving us a bit of perspective outside the echo chamber we might be in.
Yelnats T.J. @ 2024-08-15T22:59 (+13)
Ezra Klein
Jordan Pieters 🔸 @ 2024-08-16T11:00 (+3)
In case you haven't seen it, here's a fireside chat we hosted with Ezra Klein in 2021. It might be cool to have him back at EAG though!
Vincent van der Holst @ 2024-08-14T16:41 (+12)
I work in this space, but I would like to see some Profit for Good (companies that donate the vast majority of their profits to (effective) charities) company executives. Most of these companies don't donate in ways that EAs would call effective, but that makes it more interesting imo to invite them.
With billions in market value in this space (Bosch, Patagonia, Carl Zeiss, and Newman's Own alone are worth around 10B I believe, all pledged to charity), these are some of the biggest future donors and an interesting alternative to classic philanthropy. There are a bunch of smaller PFGs as well, and Humanitix is a well-known EA example that is quickly growing its donations and market value.
There are some critics of PFG in EA as well, and I would enjoy a panel with both sides too.
Yelnats T.J. @ 2024-08-15T22:59 (+11)
I'd like to see any speaker who will talk about democracy protection and political system reform, as these topics seem neglected at EAGs given the number of EAs who actually engage with them.
Arepo @ 2024-08-09T09:44 (+11)
I think an ongoing challenge for EAGxes is figuring out who and what they're optimising for. The talks can be interesting, but very few of them have practical value.
As far as I can tell, the format was approximately lifted from TED talks without much experimentation with alternatives. Also, the 'connections' metric seems to have been developed more in response to the existence of the format (measuring shallow, short-term interactions rather than longer-term effects or deeper development of intra-community trust).
So I would suggest thinking more broadly about
- who EAGxes are for
- what those people need (if you want to develop careers)
- what those people want (if you want to build a welcoming community)
- what alternative approaches to 'formal 1-hour talks, 30-min one-on-ones, generic social areas' might be worth trying
- how to measure whether those approaches are doing something valuable
Maybe you'll find that the current format is the optimal approach. But I don't feel like there's currently sufficient evidence to justify that assumption, especially given this recent analysis.
emre kaplan @ 2024-08-08T20:09 (+8)
Brigitte Gothière, Sébastian Arsac and Marek Voršilka
ASuchy @ 2024-08-14T13:46 (+1)
Agree! All exceptional activists.
NickLaing @ 2024-08-16T06:08 (+6)
Was also thinking it would be great to invite and even fund one or two good-faith, respected critics of EA. I mean serious critics who don't like EA, not just people on the fringes with lots of criticisms.
I would love to see their best presentation and be able to meet with them one on one.
Understanding what turns people off and what people don't like helps me think about how best to frame both my content and style when I present EA ideas.
leillustrations🔸 @ 2024-08-16T07:36 (+5)
- Presentations from any of the individuals who work on evaluation, getting "into the weeds" of how decisions are made, and their recent work
- Presentations from GiveWell grantees on what they're currently working on
- Bill / Melinda Gates, or otherwise someone from the Gates Foundation
- Elon Musk, or people from Tesla, Neuralink, and SpaceX
- People from pharmaceutical companies
- Board members of EVF
- Sal Khan
- A talk from successful edutainment/social media people who discuss EA-adjacent ideas like CGP Grey, Kurzgesagt, etc. (who did not necessarily start out EA-funded)
- Podcast interviewers who discuss EA-relevant content, e.g. Ezra Klein (as already mentioned), Lex Fridman, Joe Rogan.
- People running non-cause area EA interest groups, eg. SEADS, High Impact [Engineers, Law, Professionals, Medicine, etc], Religious EA groups, on what they're working on/how EA is different in their communities
Jim Buhler @ 2024-08-14T13:34 (+5)
I don't think Andreas Mogensen ever gave a talk on his (imo underrated) work on maximal cluelessness, which has staggering implications for longtermists. And I find all the arguments that have been given against his conclusions (see e.g. the comments under the above-linked post or under this LW question from Anthony DiGiovanni) quite unconvincing.
Vasco Grilo🔸 @ 2024-10-20T15:44 (+2)
Hi Jim,
I had already shared the below with you, but I am reposting it here in case others find it relevant.
Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?
Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall. Do we have examples where the posterior counterfactual effect was positive at first, but then became negative instead of decaying to 0?
Jim Buhler @ 2024-10-21T08:59 (+3)
Nice, thanks for sharing, I'll actually give you a different answer than last time after thinking about this a bit more (and maybe understanding your questions better). :)
> Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?
Not sure that's what you meant, but I don't think the effects of these decay in the sense that they have big short-term impact and negligible longterm impact (this is known as the "ripple in a pond" objection to cluelessness [1]). I think their longterm impact is substantial but that we just have no clue if it's good or bad because that depends on so many longterm factors the people carrying out these short-term interventions ignore and/or can't possibly estimate in an informative non-arbitrary way.
So I don't know how to respond to your first question, because it seems to implicitly assume something I find impossible that goes against how causality works in our complex World (?)
> Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall.
Answering the second question:
1. Yes, one could argue that.
2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "there have been more people with cluster headaches with the population increasing and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.
3. Maybe most importantly, in the real World outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless because I could ignore humans' impact on aliens and other non-humans.
And to develop on 1:
> Do we have examples where the posterior counterfactual effect was positive at first, but then became negative instead of decaying to 0?
- Some AI behaved very well at first and did great things and then there's some distributional shift and it does bad things.
- Technological development arguably improved everyone's life at first, and then it caused things like the manufacture of torture instruments and widespread animal farming.
- Humans were incidentally reducing wild animal suffering by deforesting but then they started becoming environmentalists and rewilding.
- Alice's life seemed wonderful at first but she eventually came down with severe chronic mental illness.
- Some pill helped people like Alice at first but then made their lives worse.
- The Smokey Bear campaign reduced wildfires at first and then it turned out it increased them.
[1] See e.g. James Lenman's and Hilary Greaves' work on cluelessness for rejections of this argument.
Vasco Grilo🔸 @ 2024-10-21T11:01 (+2)
Thanks for following up, Jim.
> big short-term impact and negligible longterm impact
If this were not so for global health and development interventions, I would expect to see interventions whose posterior effect size increases as time goes by, whereas this is not observed as far as I know.
> 2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "there have been more people with cluster headaches with the population increasing and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.
I think welfare per human-year has increased in the last few hundred years. However, even if one is clueless about that, one could still conclude human welfare has increased due to population growth, as long as one agrees humans have positive lives?
> 3. Maybe most importantly, in the real World outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless because I could ignore humans' impact on aliens and other non-humans.
I agree there is lots of uncertainty about whether wild and farmed animals have positive or negative lives, and about the impact of humans on animal and alien welfare. However, I think there are still robustly positive interventions, like Shrimp Welfare Project's Humane Slaughter Initiative, which I estimate is way more cost-effective than GiveWell's top charities, and arguably barely changes the number of farmed and wild animals. I understand improved slaughter will tend to increase the cost of shrimp, and therefore decrease the consumption of shrimp, which could be bad if shrimp have positive lives, but I think the increase in welfare from the less painful slaughter is the driver of the overall effect.
> - Some AI behaved very well at first and did great things and then there's some distributional shift and it does bad things.
Not all AI development is good, but I would say it has generally been good, at least so far and for humans.
> - Technological development arguably improved everyone's life at first, and then it caused things like the manufacture of torture instruments and widespread animal farming.
Fair. However, cluelessness about whether technological development has been good/bad does not imply cluelessness about what to do, which is what matters. For example, one could abstain from supporting technological development more closely linked to wars and factory-farming if one does not think it has generally been beneficial in those areas.
> - Humans were incidentally reducing wild animal suffering by deforesting but then they started becoming environmentalists and rewilding.
I think it is very unclear whether wild animals have positive/negative lives, so I would focus on efforts trying to improve their lives instead of increasing/decreasing the number of lives.
> - Alice's life seemed wonderful at first but she eventually came down with severe chronic mental illness.
I agree there are many examples where the welfare of a human decreases. However, we are far from clueless about improving human welfare. Even if welfare per human-year has not been increasing, welfare per human life has been increasing due to increases in life expectancy.
> - Some pill helped people like Alice at first but then made their lives worse.
There are always counterexamples, but I suppose taking pills recommended by doctors still improves welfare in expectation (although I guess less than people imagine).
> - The Smokey Bear campaign reduced wildfires at first and then it turned out it increased them.
It is unclear to me whether this intervention was positive at first, because I do not know whether wild animals have positive or negative lives, and I expect the effects on these are the major driver of the overall effect.
MWStory @ 2024-08-15T03:52 (+4)
I think EAG would benefit from more sharing of expertise in management and organisational effectiveness. More and more EA organisations are switching from generating interesting ideas and steering decisions by publishing analysis to actually carrying out plans and trying to have a direct impact on the world. This requires a different set of skills and organisational norms.
Mads Vesterholt @ 2024-08-19T09:27 (+3)
I think Jon Stewart would be a great speaker.
It can be hard to discuss EA ideas with non-EA people or just introduce them to the community. I have not yet found anyone (in the EA community) who can effectively communicate its purpose, philosophy, or why it's worth joining to people who aren't already in the mindset or have advanced degrees etc.
I know Jon Stewart is aligned with EA in many ways (animal welfare, AI safety, and climate change action, to mention a few). He has a unique ability to take somewhat boring, complex, and polarizing issues and distil them through a comedic filter to present, educate, and inspire others. I believe the community could benefit greatly from someone like this.
DeborahB @ 2024-08-14T17:08 (+3)
Lewis Bollard! I would like to hear (and I think others would find it useful to hear) about especially impactful initiatives in animal welfare/rights (and maybe also things that people thought would make a difference, but didn't). He works as a program officer at Open Philanthropy, but I am not suggesting him in an official capacity, just as someone who has a good overview of this area and has researched it (Lewis - sorry if you see this and you aren't keen!). Or someone else who can speak about how to gauge what is effective in improving animal welfare/rights and perhaps has a point of view about what has been effective.
MMathur @ 2024-10-29T20:24 (+2)
Have you considered having an open, competitive process for submitting talk abstracts, as academic conferences do?
Jonas Søvik @ 2024-12-15T14:00 (+1)
Daniel Schmachtenberger! He is more EA than most of us. Freakin' leagues above most other intellectuals I've encountered anywhere.
John Vervaeke, talking about solutions to AI through distributed cognition or modern approaches to wisdom (as an extension of rationality)
Cornelis Dirk Haupt @ 2024-10-11T19:51 (+1)
Peter Turchin. He was the first guest on Julia Galef's Rationally Speaking podcast, and Scott Alexander did an article on his work. But outside of that, I doubt he even knows EA as a movement exists. Would love to see him understand AI timelines and see how that influences his thinking and his models, and vice versa, how respected members of our community make updates (or don't) to their timelines based on Turchin's models (and why).
Alix Pham @ 2024-08-19T07:23 (+1)
- Damon Centola (Change: How to Make Big Things Happen)
- Policy speakers (US Congress, European Commission, ...)
- More about biosecurity (projects and funders)
- Maybe a few more people not focused on the US or UK? (not sure if the current balance is actually suboptimal)