Crucial considerations in the field of Wild Animal Welfare (WAW)

By Holly_Elmore @ 2022-04-10T19:43 (+63)

Cross-posted from my blog, hollyelmore.substack.com

Wild animal welfare (WAW) is the field concerned with understanding and improving the wellbeing of animals living in the wild.

A crucial consideration is a consideration that warrants a major reassessment of a cause area or an intervention.

WAW is clueless about, or divided on, a bevy of foundational and strategic crucial considerations.

The WAW account of nature

There are A LOT of wild animals

(Meme: H/T Sami Mubarak via Dank EA Memes)

In contrast, roughly 24 billion animals are alive and being raised for meat at any given time.

WAW: Nature is not already optimized for welfare

Because nature is not already optimized for welfare, the welfare of wild animals is expected to be lower than it could be, and nature could therefore, in theory, be changed to improve it.


Foundational Crucial Considerations

Should we try to affect WAW at all?

What constitutes “welfare” for wild animals? 

What are acceptable levels of abstraction?

How much confidence do we need to intervene?


Strategic Crucial Considerations

Emphasis on direct or indirect impact?

Is WAW competitive with other EA cause areas?

What is the risk of acting early vs. risk of acting late?

How will artificial general intelligence (AGI) affect WAW? How should AI affect WAW?

Convergence?

(Meme: H/T Nathan Young via Dank EA Memes)

Most views converge… in the short term

The long-term future of WAW is at stake!


Acknowledgments

Thanks to the rest of the WAW team at Rethink Priorities, Will McAuliffe and Kim Cuddington, for help with brainstorming the talk this post was based on, to my practice audience at Rethink Priorities, and to subsequent audiences at University College London and the FTX Fellows office.


I practice post-publication editing and updating.


Denkenberger @ 2022-04-12T00:54 (+4)

Each year, there are 30 trillion wild-caught shrimp alone! (Rethink Priorities)

I'm not seeing the 30 trillion number in that reference - is there a direct link to the analysis? 4000 shrimp caught per person per year seems high.
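A rough sanity check, assuming a world population of about 8 billion: 30 trillion / 8 billion ≈ 3,750 shrimp per person per year, so the ~4,000 figure is roughly what the 30 trillion estimate implies.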

Holly_Elmore @ 2022-04-13T01:33 (+2)

Okay, so it turns out the details of how that number was estimated are still unpublished, and I'll cite them as such along with that meme Peter shared.

Good catch, once again!

Denkenberger @ 2022-04-13T03:26 (+3)

Thanks - good pun!

Holly_Elmore @ 2022-04-13T19:39 (+3)

I almost preemptively disavowed it lol

Linch @ 2022-04-14T01:51 (+2)

I still don't know what pun you guys are talking about.

Holly_Elmore @ 2022-04-14T17:31 (+4)

Good "catch"

Holly_Elmore @ 2022-04-12T16:50 (+2)

Oh shoot, you seem to be right. I must have left a link out. This is the fastest link I could find that makes reference to the Rethink Priorities findings just to give you guys some assurance: https://www.facebook.com/groups/OMfCT/posts/3060710004243897

I'll get a real one!

Charles He @ 2022-04-10T23:07 (+2)

This is an awesome post!

I want to learn more!

How will artificial general intelligence (AGI) affect WAW? How should AI affect WAW?

AGI could be the only way we could implement complex solutions to WAW 

How do we hedge against different takeoff scenarios?

I guess one potential premise of this point is that AGI may have enormous perceptual and physical, real-world capabilities. These capabilities would include a deep understanding of complex natural systems and the ability to edit them, which could be used to reduce wild animal suffering.

 

Does the below seem like a useful comment?

If the above is true, then maybe in some deep sense, safety work on AGI/ASI might be disjoint from work on WAW. So an approach to AGI that was focused on WAW might look very different?

 

I don’t know much about these areas though. I would like to be corrected to learn more.

Holly_Elmore @ 2022-04-11T02:58 (+2)

It seems like AI could just overpower or lock in humans without obtaining these competencies (it doesn't even need to be AGI to be extremely dangerous).

Ideally, I think WAW would consider all the different AI timelines. Transformative AI (TAI) that just increases our industrial capacity might be enough to seriously threaten wild animals if it makes us even more capable of shaping their lives and we don't have well-considered values about how to look out for them.
 


So it’s possible that relatively simple tools are sufficient to improve WAW, or at least the sophistication is orthogonal to AGI?

I agree! Personally, I don't think it's lack of intelligence per se holding us back from complex WAW interventions (by which I mean interventions that have to compensate for ripple effects on the ecosystem or require lots of active monitoring). I think we're more limited by the number of monitoring measurements we can take and our ability to deliver specific, measured intervention at specific places and times. I think we could conceivably gain this ability with hardware upgrades alone and no further improvement in algorithms.

Black Box @ 2022-06-16T05:10 (+1)

I think we're more limited by the number of monitoring measurements we can take and our ability to deliver specific, measured intervention at specific places and times.

This seems a bit surprising to me, as currently we don't even have a good understanding of biology/ecology in general, or of welfare biology in particular (which suggests that we need intelligence to solve these problems).

So, did you mean that engineering capabilities (e.g. the monitoring measurements that you mentioned) are more of a bottleneck to WAW than theoretical understanding (of welfare biology) is? If yes, could you explain the reason?

One plausible reason I can think of: when developing WAW interventions, we could use a SpaceX-style approach, i.e. doing many small-scale experiments, iterating rapidly, and learning from tight feedback loops in a trial-and-error manner. Is that what you had in mind?