AI for Good's "Emperor's New Clothes": Does the Grand Narrative Mask Individual Despair?
By Hiyagann @ 2025-06-28T08:05 (+1)
I recently participated in a hackathon with the theme "AI for Good." However, throughout the event, looking at the organizers' promotional materials and the other competing projects, I felt that something was amiss—a certain kind of voice was missing.
Our lives are filled with countless "overlooked details," and I saw many projects aimed at addressing these details, such as those designed to help the elderly, children, and people with disabilities. These included intelligent recipe assistants for patients, voice models for autistic children, diagnostic systems for rural doctors, as well as projects supporting agriculture, psychological interventions, and more. These are undoubtedly wonderful designs, and they certainly improve the lives of some. Yet, seeing them, I couldn't shake the feeling that another kind of voice was absent—a voice that isn't so comfortable to hear.
When we talk about "AI for Good" and how to use artificial intelligence to make the world a better place, we tend to think in patterns, falling into a kind of mental inertia and focusing only on things that have already been labeled. Beneath this grand narrative, I can't help but wonder: are we being selective in what we "see"?
When we think about what to do, we always start from a grand perspective. We are always eager to help those who have been labeled as "vulnerable groups"—the elderly who need companionship, the children who need better education. But what about the Swing Kids defying the Third Reich, the Zoot Suiters in 1940s America, the hippies of the counter-culture, or the Shamate, the ostracized migrant-worker youth of China's factory towns? These groups are often stigmatized, simplified, misunderstood, and relegated to the "margins," forgotten, or even shunned. Their existence, their culture, their struggles, and their rebellion against or alienation from the mainstream order seem to have never been illuminated by the light of "AI for Good." Are they really "problem youth," or are they just seeking a shred of dignity amidst loneliness, childhood trauma, and a world that doesn't understand them?
For example:
When we talk about "caring for the elderly," a man who has just come from the gym, muscular and with a generous pension, is undoubtedly "elderly." But is he the "vulnerable" person we need to prioritize helping the most?
Conversely, when we see a fifteen- or sixteen-year-old with colorful hair and a body full of tattoos, our first reaction might be "delinquent." But do we ever stop to think that he might come from a poor farming family with critically ill parents, and that he works alone in a factory, using this external "armor" to protect himself? Can a learning gadget that combines play and study solve his problems?
And what about the silent, struggling majority? The office workers living in old apartments, worrying daily about tuition and their parents' medical bills? The young people crushed by structural pressure, forced to choose between caring for their aging parents and their own children? Who is the "vulnerable" group here? Is their exhaustion and despair drowned out by the daily hustle, untouched by the grand narrative of "AI for Good"?
Beneath the glossy surface of society, who is truly bearing the structural pressure? Whose dignity is precariously eroding day by day, yet struggles to receive effective support and attention? Does the "respect" and "care" we take for granted sometimes become superficial, or even a new form of moral high ground that conceals deeper injustices?
The pain of individuals trapped in systemic predicaments, deprived of a voice, for whom even "being seen" is a luxury, is often diffuse, difficult to attribute to a single cause, and may even challenge the existing social order and our comfortable perceptions. Consequently, their needs and plights can, paradoxically, be marginalized in the mainstream "ethical agenda," or simplified into individual problems requiring "psychological counseling," rather than systemic issues demanding fundamental changes to the social structure.
In our current systems of evaluation and resource allocation, whose difficulties and needs are most often underestimated or even ignored? How can we empower those who are truly crushed at the bottom, systematically stripped of their dignity and hope, and struggling to survive outside the mainstream view, to regain control of their own destinies?
That extreme individual suffering, that despair born from being pushed to the brink of survival by poverty, discrimination, oppression, and institutional injustice, and the fundamental questioning of life's meaning that follows—do these most direct and heart-wrenching predicaments always become the core, priority issues in discussions of AI ethics? Do they receive an equal and urgent level of attention and response?
I see my friend giving up on himself. When encouraged to study, his only response is "I'm lazy." He finds his sense of existence by attacking others, lives in constant anxiety, complains about politics and reality to a chatbot every day, and wallows in self-abandonment, thinking he'll just end it all when he can't go on anymore.
I see my friend in daily agony because of her marriage. I see my friend lost, with no idea what to do with his future. I see so many girls whose only goal is to marry a rich man. I see so many people who constantly distract themselves with all kinds of entertainment but dare not face reality.
When an individual feels that no amount of effort will allow them to meet societal expectations or improve their situation, "giving up on oneself" can become a form of... "self-preservation" or "silent protest." And the thought that "I'll just die when I can't take it anymore" is the most extreme manifestation of this despair—a signal that must be taken with the utmost seriousness and vigilance.
These are not isolated cases. To varying degrees, they reflect the pressure, confusion, and struggles that many people, especially the young, may be experiencing in modern society. When individuals feel immense pressure, injustice, powerlessness, and a lack of hope in their real lives, they may adopt negative coping mechanisms—whether it's disillusionment with relationships, confusion about the future, fantasies of "shortcuts," or immersion in the virtual world. Behind these behaviors often lies a profound longing for dignity, a sense of worth, security, understanding, love, and a "meaningful life," coupled with the immense disappointment that these desires cannot be met in reality.
When the pressures of reality are too great, the sense of frustration too strong, or the future feels hopeless, the instant gratification, sense of control, and temporary oblivion offered by the virtual world become an incredibly tempting "sanctuary." However, while this escapism can temporarily alleviate anxiety, in the long run, it often exacerbates the individual's disconnect from reality, eroding their will and ability to change their situation, thus creating a vicious cycle.
This is the true picture of those "beaten down by life"—the real, widespread, individual pain and collective anxiety that is overlooked or simplified by the mainstream narrative, hidden beneath the daily clamor. Merely providing "treatment" solutions like "early education machines" or "psychotherapy" may not touch the fundamental predicaments arising from one's "fate"—that is, the deeper social structures, economic pressures, unequal opportunities, and the resulting loss of hope.
AI for Good may not be able to cure poverty, discrimination, oppression, or institutional injustice, or solve problems of justice, survival, dignity, and a future without hope. But it is precisely because we "see" all of this... this real pain and struggle hidden beneath the daily clamor... that perhaps we should try, in a... different way, to touch and heal these "wounded souls," rather than choosing to ignore them.
MildAbandon @ 2025-06-28T12:19 (+1)
I'm not sure I agree with the premise of this argument: that the concept of AI for good is faulty because it can't solve all the problems.
I don't think "AI for good" claims to solve all the problems. Absolutely let's take issue with the idea that AI is going to resolve everything, but that doesn't mean it can't help with anything.
But I'm not worried that AI won't touch the fundamental problems of "social structures, economic pressures, and unequal opportunities". I'm worried that it already is, and is moving the dial in the wrong direction. Automation moves wealth and power away from individuals and towards companies. The concentration of wealth and power in the hands of an ever smaller number of individuals and companies is exactly what drives economic and social problems and inequality.
Unless AI is governed and managed appropriately, it's going to be part of the problem, more than part of the solution.
I think this op-ed sets out some of these issues really well: https://nathanlawkc.substack.com/p/its-time-to-build-a-democracy-ai
Hiyagann @ 2025-06-28T15:30 (+1)
You've absolutely nailed it. Thank you for this incredibly insightful comment.
I want to wholeheartedly agree with your core point: my deepest fear isn't just that 'AI for Good' won't solve these fundamental problems, but that mainstream AI development, as it currently stands, is actively exacerbating them. You've perfectly articulated the mechanism behind this: the automation-driven concentration of wealth and power.
To clarify the premise of my original post: I don't believe the concept of 'AI for Good' is inherently flawed, nor is my critique that 'AI for Good is deficient because it can't solve every problem.' My critique is aimed at the narrative's focus. I am concerned that the "AI for Good" movement often directs our attention and resources towards more palatable, surface-level issues. Meanwhile, the far more powerful, fundamental engine of commercial AI development relentlessly fuels the very structural inequalities we claim to be fighting.
This is exactly what I see in some of the projects I've encountered. For instance:
- An AI project that assists with agriculture by solving pest and disease problems is a benefit to humanity. Logically, however, this doesn't necessarily benefit the small farmer. Large corporations have natural advantages of scale, while individual farmers have limited resources. Agricultural AI might not lead to more income for farmers, but could instead accelerate land consolidation by large enterprises.
- Another project advocates for developing play-and-learn hardware for children in impoverished families, supposedly giving them better resources. This is certainly helpful to some extent, but such hardware is often unaffordable for the very families it aims to help. These families typically must prioritize immediate subsistence over long-term educational investments.
- Medical AI developed for doctors in remote areas might never reach them. Furthermore, such AI doesn't necessarily lower healthcare costs for the average person and could instead risk becoming a tool for profit and exploitation by certain institutions.
Your point and mine are two sides of the same coin, and together they paint a grim picture:
My argument is that the "good" side of AI often has a focus that is too narrow, neglecting the deepest forms of suffering.
Your argument is that the dominant, commercial side of AI is actively making the root causes of this suffering worse.
This leads to a terrifying conclusion: our "AI for Good" efforts, however well-intentioned, risk becoming a rounding error—a fig leaf hiding a much larger, systemic trend towards greater inequality.
This brings me to a follow-up question that I'd love to hear your (and others') thoughts on:
Given this reality, what is the most effective role for the "AI for Good" community? Should we continue to focus on niche applications? Or should our primary focus shift towards advocacy, governance, and creating "counter-power" AI systems—tools designed specifically to challenge the concentration of wealth and power you described? How do we stop applying bandages and start treating the disease itself?
Beyond Singularity @ 2025-06-28T12:02 (+1)
I really appreciate how this post highlights the real, tangible suffering that often remains invisible beneath grand narratives like "AI for Good." It's crucial that we recognize the everyday struggles of people who are exhausted, economically strained, and emotionally burned out—struggles that tech-focused solutions frequently overlook.
Your critique resonates deeply with my recent work on the Time × Scope framework, where I suggest explicitly structuring ethics around two core parameters: how far into the future we look (Time, δ) and how broadly we extend our moral concern (Scope, w). One of the strengths of this framework is precisely its flexibility—it can prioritize both systemic, long-term challenges and deeply personal, immediate suffering.
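To make the two parameters concrete, here is one minimal way they could combine into a single objective. This is my own simplified sketch: everything beyond δ and w (the utility terms, the summation structure) is an illustrative assumption, not a full statement of the framework:

$$U = \sum_{t=0}^{T} \delta^{t} \sum_{i \in S(w)} w_{i} \, u_{i,t}$$

where $u_{i,t}$ is the well-being of individual $i$ at time $t$, $\delta \in (0, 1]$ sets how steeply we discount the future, and the weights $w_i$ (together with the set $S(w)$ of who counts at all) encode how broadly moral concern extends. Narrowing the Scope amounts to silently zeroing out the $w_i$ of everyone outside the neatly labeled groups.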
It would be insightful to explore how this structured ethical approach might help ensure AI interventions truly reflect and address both the broad systemic goals and the immediate, tangible needs you highlight. For instance, how might we use such frameworks to ensure AI-driven initiatives genuinely ease the chronic burnout of individuals worrying about rent or basic well-being today, rather than merely amplifying abstract ideals?
I’d be very interested to hear your thoughts on balancing these two scales—macro-level visions and micro-level realities—without losing sight of either.
Hiyagann @ 2025-06-28T16:23 (+1)
Thank you so much for this insightful comment and for introducing me to your work. The "Time × Scope" framework is a powerful lens for analysis, and it gives me a new, structured language to articulate the core problems I was trying to describe.
If I'm understanding it correctly, your framework provides a crucial map for ethical deliberation. My essay, in essence, is a real-world exploration of what happens when we get the parameters on that map wrong. I would argue that the "AI for Good" narrative I critiqued often sets its Scope (w) far too narrowly, precisely because it relies on a limited, intuitive empathy that only extends to neatly labeled, "palatable" groups, while ignoring the stigmatized and the structurally oppressed.
This brings me to what I believe is the core psychological variable that your framework can help us address: empathy. It feels like the fundamental engine that drives the Scope (w) parameter. The true power of your framework might lie not just in setting these parameters top-down, but in inspiring us to ask how AI itself could be used to cultivate and expand the very empathy we need.
This could become a new, constructive direction for "AI for Good." For instance:
- Could research from cognitive psychology on our innate biases (like 'in-group favoritism') help us design AI-driven experiences that challenge and broaden our empathetic circles?
- Could we define the ultimate goal (U) not just as the reduction of suffering, but as the promotion of human flourishing—a concept from positive psychology rooted in dignity, agency, and meaningful connection, which are all fundamentally tied to empathy?
This connects directly to your excellent question about balancing macro-level visions and micro-level realities.
I believe the answer lies in using the micro to constantly ground and validate the macro. The tangible well-being of the individual—which we can only truly appreciate through empathy—must be the ultimate "ground truth" for any grand, systemic AI initiative.
In the context of your framework, the balance can be achieved by stipulating that no matter how far the Time (δ) horizon is, its implementation must demonstrably improve the "flourishing" of individuals within our immediate Scope (w). If a grand vision for the future is built upon a failure of empathy for the silent suffering of the present, the framework would tell us that our ethical equation is fundamentally flawed. The micro-reality isn't something to be balanced against the macro-vision; it's the foundation upon which that vision must be built.
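Put in terms of the simple objective you sketched above (with the symbols again being shorthand rather than part of the framework itself), this stipulation works less like a term to be traded off and more like a hard constraint:

$$\max_{\text{plan}} \; U \quad \text{subject to} \quad \sum_{i \in S(w_{\text{now}})} \left( u^{\text{plan}}_{i,0} - u^{\text{baseline}}_{i,0} \right) > 0$$

That is, no matter how long the Time (δ) horizon, a plan is admissible only if it demonstrably improves the well-being of the individuals within our immediate Scope relative to doing nothing. A grand vision that fails this present-tense test is ruled out, not merely discounted.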
Thank you again for providing such a clarifying and productive framework. It's a perfect bridge between a humanistic critique and a structured, actionable ethical approach.