[Research Request] “Prestige Tax”: Quantifying Epistemic Silence in AI Safety Teams

By James-Hartree @ 2026-04-17T19:31 (+5)

Summary

Empirical research in organizational psychology suggests that hierarchical rank is the strongest predictor of whether employees speak up or self-censor. In the high-stakes, high-prestige context of AI safety, this "silence" likely functions as a significant information bottleneck. I am conducting independent research to quantify this effect within the safety community.

If you are a junior researcher or intern, or have fewer than 5 years of experience at a safety org, please consider taking this 4-minute survey.


The Problem: Epistemic Bottlenecks

The expected value of an AI safety organization is a function of its internal information flow. However, truth-seeking is often at odds with social signaling. In labs with steep prestige hierarchies—where a junior researcher may be sitting across from someone whose seminal papers they studied in university—the social cost of "looking naive" or "being wrong" can lead to significant signal loss.

The literature suggests this isn't just minor friction; it's a systemic failure mode.

Why this matters for AI Safety

In this field, the counterfactual impact of a missed clarifying question, technical criticism, or a "crazy" but correct alignment theory is disproportionately high. If junior staff feel they must "filter" their thoughts to match the Overton window of their leads, the organization loses the "long tail" of creative insights.

Managers often talk about "speak-up culture," but the empirical evidence suggests that culture-messaging (e.g., "our doors are always open") is largely ineffective at overriding the structural reality of prestige. 

The Research Goal

I am currently investigating whether these findings from broader organizational psychology hold true within the AI safety community. Specifically, I want to identify:

  1. What is being withheld? (e.g., technical disagreements, "basic" questions, or organizational critiques).
  2. Why is it being withheld? (e.g., reputational risk, fear of wasting senior time, or lack of confidence).
  3. How large is the effect? Is this a marginal issue or a primary bottleneck for lab productivity?

Call to Action: 4-Minute Survey

To ground this research in actual data rather than theory, I need input from the front lines.

If you currently work at an AI safety/eval organization (e.g., METR, ARC, Apollo, Redwood, CAIS, etc.) in a junior or mid-level capacity, please fill out this survey:

[4-Minute Survey]

The Institute for Classical Dialogue (ICD) is an independent organization exploring protocol-based interventions to improve epistemic quality in high-stakes research teams. Feel free to reach out in the comments or via PM.


James-Hartree @ 2026-04-17T20:14 (+1)

Please forward this to anyone working in a junior role at AI safety/governance orgs.