The Case for Reducing EA Jargon & How to Do It

By Akash @ 2021-11-22T18:10 (+27)

TLDR: Jargon often worsens the communication of EA ideas and makes it harder for us to update our models of the world. I think EAs should apply strategies to notice/reduce jargon, and I offer a few examples of jargon-reducing techniques.

A few weeks ago, I attended EA Global and a retreat about community building. At both events, we discussed EA-related ideas and how to most effectively communicate them.

One idea I’ve been reflecting on: It can be tempting to use jargon when explaining EA ideas, even in contexts in which jargon is not helpful.

In this post, I describe some drawbacks of EA jargon & offer some suggestions for EAs who want to reduce their use of jargon. (Note: Some of these ideas are inspired by Rob Wiblin's talk and forum post about EA jargon. I have tried to avoid covering the same points he raises, but I strongly suggest checking out those resources.)

What is jargon?

Jargon is “special words or expressions that are used by a particular profession or group and are difficult for others to understand.” It can refer to any terms/phrases that are used in shorthand to communicate broader ideas. Examples include: “Epistemic humility,” “Shapley values,” “Population ethics,” “The INT framework,” and “longtermism.”

Why should we reduce jargon? 

The first two benefits focus largely on how others react to jargon. The next two focus on how reducing jargon can directly benefit the person who is communicating:

How can we reduce jargon?

Conclusion

Optimizing metacommunication techniques in EA ideas is difficult, especially when trying to communicate highly nuanced ideas while maintaining high-fidelity communication and strong epistemics.

In other words: Communicating about EA is hard. We discuss complicated ideas, and we want them to be discussed clearly and rigorously. 

To do this better, I suggest that we proactively notice and challenge jargon. 

This is my current model of the world, but it could very well be wrong. I welcome disagreements and feedback in the comments!

I’m grateful to Aaron Gertler, Chana Messinger, Jack Goldberg, and Liam Alexander for feedback on this post.


Linch @ 2021-11-22T22:10 (+11)

I suspect people overestimate the harm of jargon for hypothetical "other people" and underestimate its value. In particular, in polls I've run on social media, people have consistently expressed a preference for more jargon rather than less.

Now, of course, these results are biased by the audience I have, rather than my "target audience," who may have different jargon preferences than the people who bother to listen to me on social media.

But if anything, I think my own target audience is more familiar with EA jargon, rather than less, compared to my actual audience. 

I think my points are less true for people in an outreach-focused position, like organizers of university groups.

Lizka @ 2021-11-22T21:28 (+8)

Jargon glossaries sound like a great idea! (I'd be very excited to see them integrated with the wiki.)

A post I quite like on the topic of jargon: 3 suggestions about jargon in EA. The tl;dr is that jargon is relatively often misused, that it's great to explain or hyperlink a particular piece of jargon the first time it's used in a post/piece of writing (if it's being used), and that we should avoid incorrectly implying that things originated in EA. 

(I especially like the second point; I love hyperlinks and appreciate it when people give me a term to Google.) 

Also, you linked Rob Wiblin's presentation (thank you!); the corresponding post has a bunch of comments.
 

Pablo @ 2021-11-23T14:03 (+3)

> I'd be very excited to see them integrated with the wiki.

This is an idea I've considered and I'd be interested in making it happen if I continue working on the Wiki. If anyone has suggestions, feel free to leave them below or contact me privately.

Charles He @ 2021-11-22T23:09 (+3)

Like Lizka said, glossaries seem like a great idea!

Drawing on the posts and projects for software here, here, here, and here, there seems to be a concrete, accessible software project for creating a glossary procedurally. 

(Somewhat technical stuff below, I wrote this quickly and it's sort of long.)

Sketch of project

You can programmatically create an EA jargon glossary that complements, rather than replaces, a human-curated glossary. It could continuously refresh itself, capturing new terms as they emerge.

This would involve writing a Python script or module that finds distinctive EA Forum terms and associates them with definitions.

To be concrete, here is one sketch of how to build this:

Because the core work is essentially word counting, while the later steps can be made very sophisticated, this project would be accessible to people newer to NLP and could also interest more advanced practitioners.
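To illustrate the word-counting core, here is a minimal sketch: flag terms that appear much more frequently in a Forum corpus than in a general-English baseline, and treat those as glossary candidates. The toy corpora, function names, and thresholds below are all placeholder assumptions; a real version would scrape actual Forum posts and use a large baseline corpus.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into word tokens (hyphenated words kept whole)."""
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

def find_jargon(forum_texts, baseline_texts, min_ratio=2.0, min_count=2):
    """Return terms whose relative frequency in forum_texts is at least
    min_ratio times their (smoothed) frequency in baseline_texts."""
    forum = Counter(t for text in forum_texts for t in tokenize(text))
    base = Counter(t for text in baseline_texts for t in tokenize(text))
    forum_total = sum(forum.values()) or 1
    base_total = sum(base.values()) or 1
    candidates = {}
    for term, count in forum.items():
        if count < min_count:
            continue  # ignore one-off terms
        forum_freq = count / forum_total
        # Add-one smoothing so terms absent from the baseline don't divide by zero
        base_freq = (base.get(term, 0) + 1) / (base_total + 1)
        ratio = forum_freq / base_freq
        if ratio >= min_ratio:
            candidates[term] = ratio
    return sorted(candidates, key=candidates.get, reverse=True)

# Toy corpora (placeholders -- a real version would use scraped Forum posts
# and a large general-English corpus):
forum_posts = [
    "longtermism and counterfactual impact matter for cause prioritisation",
    "epistemic humility improves counterfactual reasoning about longtermism",
]
general_english = [
    "the weather today matters for the picnic we planned",
    "reasoning about the news improves our conversations",
]

jargon_terms = find_jargon(forum_posts, general_english)
print(jargon_terms)
```

The flagged terms could then be paired with definitions (pulled from the wiki, or written by hand) to form the glossary; more advanced versions might use multi-word phrase extraction or TF-IDF instead of raw frequency ratios.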

 

By the way, it seems like this could totally get funded with an infrastructure grant. If you wanted to go in this direction, optionally:

Maybe there are reasons to get an EA infrastructure grant to do this:

 

Anyway, apologies for the length. I sometimes get excited and like to write about ideas like this. Feel free to ignore me and just do it!