I'm NOT against Artificial Intelligence
By Victoria Dias @ 2025-04-24T18:02 (+6)
I decided to translate and share here on the Forum the text I originally wrote in Portuguese for LinkedIn, aimed at people who aren't familiar with the topic. It's quite long—sorry about that! I could have split it up, but posting it all together gives me a sense of completeness.
I was taking a break from reading, studying, and writing when I came across a headline: "Every time you say 'Please' to ChatGPT, billionaire Sam Altman gets poorer." Many interpreted this as a reason to be polite to AI, as a form of protest, or even a way to safeguard against a potential machine uprising. This prompted me to reconsider my hiatus and compile the fragments I've been jotting down over the past few months about the environmental and psychological impacts of unchecked AI usage.
I'm not an AI expert, nor am I opposed to its use—I utilize it myself to simplify daily tasks, much like many others. However, I believe we need to scrutinize how we're employing AI and, more importantly, understand the tangible consequences of its usage, which extend beyond billionaire finances or speculative sci-fi scenarios. My aim here is to share accessible information that encourages reflection on responsible and conscious engagement with this technology.
You might disagree with everything I've written, and that's perfectly okay. Predictions about AI's future hold significance but remain uncertain, even among leading experts. In my view, our focus should be on the challenges we're currently facing—issues that are already affecting our lives and will continue to impact future generations more than any hypothetical scenario that may or may not materialize.
A simple starting point
Let’s begin with a simple explanation of what Artificial Intelligence is. At its core, AI is a system that collects and processes information provided by humans. Since it's powered by computers, it can access and analyze this information much faster than a human could—allowing it to solve problems, suggest solutions, or make decisions based on that data.
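To make that concrete, here's a deliberately tiny toy example in Python—my own illustration, nothing like the scale or sophistication of real systems such as ChatGPT. It "learns" which word tends to follow which, purely from the example text a human gives it, and it can say nothing about words it was never shown:

```python
from collections import Counter, defaultdict

# A toy "model" that learns, from human-provided text, which word
# most often follows which. Real AI systems are vastly larger and
# more sophisticated, but the core idea is the same: patterns
# extracted from data that humans supplied.

training_text = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    if word not in follows:
        return "?"  # the model knows nothing it wasn't shown
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))   # 'on'  -- 'on' always followed 'sat'
print(predict_next("the"))   # 'cat' -- tied with 'dog'; both appeared twice
print(predict_next("moon"))  # '?'   -- 'moon' never appeared in the training text
```

That, in a nutshell, is the point to keep in mind throughout this post: the system reflects the data it was fed—nothing more.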
Personally, I use AI quite a bit—to speed up my internet searches, check my code for bugs, or fix typos in my writing. Still, I have to admit that the way people are using AI these days is starting to feel a bit frustrating. It seems like AI is being called upon for everything—from picking wall paint colors, to creating memes, to settling WhatsApp group arguments. And honestly, it makes me wonder: Are we using this tool in the right way?
I’ve been putting off writing this for weeks—maybe even months. Even though I’m in my final semester of a Software Engineering degree, I actually know a lot more about animal welfare than I do about AI. My understanding of AI is pretty basic. And yet, what I’ve been witnessing makes me appreciate the flaws and imperfections of things made by human hands more than ever.
There are so many things I want to say that I don’t even know where to begin. I can’t seem to organize my thoughts or rank the importance of each topic I want to discuss. I’ve got one draft saved on my computer, another in my phone notes, and yet another in a Google Doc. Every time I try to write about how people are using AI, I end up abandoning the draft halfway through because I struggle to do it non-violently. I always aim to write in a way that informs and raises awareness, especially for those unfamiliar with the topic—using simple language, avoiding too much technical jargon. I draw inspiration from Marshall Rosenberg’s nonviolent communication framework, trying not to push anyone away. But this time, it’s hard—because what I’m seeing is, at the very least, concerning. And sometimes, frankly, even a little pathetic.
It’s a fact that Artificial Intelligence is evolving fast. Really fast. New updates, new versions, new tools—every single day. It’s difficult to predict how long it might take for an AI to become more capable than the human mind—or whether it ever will. In fact, even the experts can’t agree on whether that’s a real possibility.
Will AI turn against us?
That question might sound like something straight out of a sci-fi movie—but it gained surprising momentum in 2024 and 2025. A number of high-profile figures in tech—former OpenAI employees, Elon Musk, Mark Zuckerberg—began making increasingly bold predictions about the future of AI.
But the truth is: no expert can say for sure if or when that might happen. And there’s a simple reason for that. In order to create an AI that truly surpasses the human mind, we’d need to fully understand how our own minds work—and we’re nowhere near that level of understanding. The human brain isn’t just a collection of running algorithms. It includes consciousness, emotion, intuition—that last-minute “gut feeling” that makes us change course, improvise, or adapt to situations we've never experienced before. No AI can genuinely do that today.
Some are placing big bets. Daniel Kokotajlo, a former researcher at OpenAI, claimed that AI could surpass human intelligence by 2027, and by 2030, might be designing even more intelligent versions of itself. Elon Musk revised his estimate too, predicting that by the end of 2025, AI would outsmart any human. Meanwhile, in 2024, OpenAI itself acknowledged that even their most advanced models were still operating at “Level 1” on a five-stage scale toward so-called superintelligence. In other words, we’re still at the beginning.
And even when AI appears to perform well on “theory of mind” tasks—the ability to infer others’ mental states—research has shown that these systems are simply echoing patterns, like parrots, rather than truly grasping human emotions or intentions.
In short: AI is not a monster on the verge of rebellion, nor is it a superior being. For now, the greatest risk isn’t that AI will choose to destroy humanity, but rather how humans are already using it today. Until we better understand our own minds, building a machine that genuinely surpasses us remains out of reach.
As AI researcher Yann LeCun put it:
“The risk lies in who controls the technology—not in the technology itself.”
The real problem behind AI
Honestly? I think we’re spending way too much time worrying about a hypothetical “Robots vs. Humans” war, while we’re already dealing with real-world consequences—right now—caused by bad actors using AI that already exists.
And if you're someone who says “Hi, Chat! How are you?” or “Thank you, Chat—you’re amazing!” to AI out of fear of a so-called machine rebellion, I beg you: please stop.
Direct your kindness and courtesy toward real people—those around you, not just physically, but in your communities and online networks. They’re the ones with emotions. And they’re also the ones behind the world’s greatest crises—whether pandemics, nuclear conflicts, or the misuse of artificial intelligence. Consider this:
In recent elections, it was already difficult to distinguish what candidates had actually said or done, thanks to the flood of misinformation on social media—even from sources we once trusted. Now, with AI, it has become easier than ever to produce fake videos and audio (deepfakes) that appear entirely real. This isn’t theoretical—there have already been serious incidents.
For example, in early 2024, thousands of U.S. voters received robocalls using an AI-generated clone of President Joe Biden's voice, urging them not to vote in the New Hampshire primary. It was a complete fabrication—an audio deepfake designed to confuse and demotivate voters. In another case, in Slovakia, a fake audio recording of a candidate supposedly plotting to rig the election went viral just two days before voting—possibly swinging the result against the frontrunner.
It doesn’t stop there. Bot farms and fake accounts powered by AI are now used in coordinated disinformation campaigns—to smear candidates, flood the web with fabricated news, and manipulate public opinion. In Bangladesh, a deepfake video falsely claimed that a political candidate had dropped out of the race.
The use of AI to spread falsehoods, shape opinions, and destabilize democracies is already happening across the globe. Powerful systems are now readily available that can generate text, audio, and video so convincing that it’s nearly impossible to tell what’s real and what’s not. This fuels what some call a “passion for ignorance,” where people prefer falsehoods because they’re more convenient, more exciting—or just because they fit better with what someone already wants to believe.
Not long ago, our biggest fear was downloading a Trojan from a sketchy email attachment. Today, the threat landscape is far more chilling. With the help of AI, digital attacks have become smarter, stealthier, and more dangerous—especially for people who don't deeply understand how the technology works.
A striking example was the 2022 cyberattack on Brazilian retailer Americanas, which took their website offline for five days and resulted in nearly R$1 billion (approx. USD $200 million) in lost sales. Hackers accessed huge databases with names, tax IDs, home addresses, and credit card information for millions of customers. In the past, an attack like this might’ve taken weeks or months to prepare. Today, with AI, large-scale breaches can happen in minutes.
Credit card fraud is no longer a manual operation. AI algorithms can test thousands of card numbers in seconds, bypassing security systems at a speed no human could match. In 2024 alone, over 339 million credit and debit cards were exposed online—26 times more than the year before—driven in large part by AI-powered automation.
And it doesn’t stop at data breaches. Phishing scams—those fake emails and texts trying to steal your personal info—have become nearly impossible to detect. Where we once looked for broken grammar or odd phrasing, AI now crafts flawless messages, mimics writing styles, and even clones voices to impersonate your boss, a family member, or a coworker. Some researchers predicted that by 2025, half of all phishing attacks would involve voice deepfakes, making them even more convincing—and dangerous.
Ransomware attacks—where hackers lock you out of your data and demand payment—continue to rise, targeting companies, hospitals, and even public agencies. In Brazil alone, 25% of businesses reported financial losses due to cyberattacks in 2022, and 78% reported attempted data theft by email. There are already reports of AI being used to build malware that adapts in real time, making it harder for cybersecurity teams to detect or stop it.
The landscape has shifted drastically. Today, AI allows hackers to combine phishing, SMS, WhatsApp scams, email traps, and fake online ads—all automated and scaled globally. What was once a clunky virus is now a silent, seamless threat.
And then there’s a risk that often goes unnoticed: AI being used to develop chemical, biological, or cyberweapons. In 2022, one chilling experiment showed that an AI, originally designed to help discover new medications, was able to generate more than 40,000 toxic compounds in under six hours—including formulas similar to VX, one of the deadliest nerve agents ever created. All it took was tweaking the model’s goal.
AIs are already being used to speed up the creation of genetically modified viruses and bacteria—accelerating research that once took years. In 2024, scientists raised the alarm: generative AI tools might soon assist in designing new virus variants, more resistant mutations, and even synthetic drugs.
All of this is happening now, as you read this. We don’t need a robot uprising to be concerned. The real threat today lies in the irresponsible or malicious use of AI by people—by individuals, companies, and governments.
Is AI going to take all our jobs?
Another concern that, to me, feels as unproductive as worrying about a potential Robot vs. Human war is the fear that AI is going to take all of our jobs. History shows that every major new technology causes anxiety at first—many people assume it will spell the end of work as we know it. But in reality, these technologies tend to make life easier and create entirely new opportunities.
Take ATMs, for example. When they were first introduced, people thought human bank tellers would disappear. But what actually happened is that ATMs freed up staff to handle more complex tasks, and banks began offering new kinds of services. The same thing occurred when telephone operators were replaced by automated systems, and when typewriters gave way to computers and printers. New roles emerged—typists became data entry clerks, developers, and graphic designers.
The internet, now essential to daily life, was also once seen as a threat to traditional careers. And yet it spawned thousands of new jobs that didn’t exist two decades ago: digital influencers, data analysts, software engineers, content creators, digital marketers, and many more.
AI is following the same pattern. It automates repetitive tasks, giving people more time to focus on creative, strategic, and human-centered work. We're already seeing new roles emerge: prompt engineers, data curators, AI ethics specialists, cybersecurity analysts, algorithm trainers. Technology doesn’t eliminate the need for humans—it transforms how we work.
If you're still worried about AI wiping out jobs, consider what’s already happening in 2025. AI isn’t just replacing tasks—it’s enhancing human work, helping professionals become faster, more efficient, and more focused on what really matters.
In healthcare, for example, doctors now use AI to analyze scans, generate reports in seconds, detect diseases before symptoms even appear, and personalize treatment plans. This doesn’t replace the doctor—it saves lives and gives them more time to care for patients.
In customer service and retail, virtual assistants and chatbots are making long wait times a thing of the past. Companies can resolve issues, answer questions, and even anticipate customer needs 24/7—with efficiency no human team could match alone.
In finance, AI is helping detect fraud in real time, process massive volumes of data, and automate tedious tasks. Banks and fintech firms use it to flag suspicious transactions and protect customers—while also speeding up and securing access to financial services.
Legal professionals are also reaping the benefits. AI-powered tools like Luminance, LawGeex, eBrevia, and Lexion can review, draft, and analyze contracts in seconds—ensuring legal accuracy, identifying risks, and suggesting improvements. This frees lawyers to focus on strategy, negotiation, and client relationships—and makes legal services more accessible overall.
In real estate, agents are automating everything from contract generation to credit checks, lead scoring, and personalized property recommendations. AI tools act like 24/7 assistants, speeding up transactions, reducing errors, and allowing agents to focus on building trust and closing deals.
In industry, smart robots monitor machines, adjust processes in real time, and even predict when equipment will need maintenance—preventing accidents and losses. In agriculture, drones and AI sensors analyze soil, track crops, and boost productivity with less waste.
All of this shows that AI isn’t here to take away jobs—it’s here to assist, streamline, and open the door to new roles. Just as computers didn’t end work but transformed it, AI is repeating that cycle. It automates the repetitive, frees us up for the creative and strategic, and gives rise to entire new fields we hadn’t imagined until recently.
At the end of the day, AI is a powerful tool—but humans are still the ones who make the difference. The future of work isn’t about being replaced by machines. It’s about learning to work alongside them and making the most of what each can offer. Technology evolves, and we evolve with it—new opportunities will always arise for those willing to learn, adapt, and innovate.
The ecological impact of AI
I’m not writing this to scare anyone or cause panic. What truly bothers me—and motivated me to write this—is seeing so many people use artificial intelligence for completely trivial tasks: making memes, answering obvious questions, crafting generic messages, or even deciding what outfit to wear. Honestly, it feels like we’re outsourcing even the act of thinking.
Just the other day, I saw a well-known influencer post a screenshot of a ChatGPT conversation where she asked whether her daughter would get sleepy during a party, because the child usually naps at that time. What kind of answer was she expecting? There’s no magic here—only the parent knows the child’s routine, and even she can’t predict it with certainty. That’s the kind of question anyone could answer with basic common sense—no need for a supercomputer.
But the problem isn’t just the banality of these questions. Every time someone interacts with AI—even just to say “thank you,” ask for a joke, or get help picking a paint color—they trigger massive data centers that consume electricity and water to run and stay cool. It may seem small, but when multiplied by millions of people asking pointless questions every day, the result is staggering: massive consumption of natural resources, CO₂ emissions, and environmental strain.
To put things into perspective (there's a quick back-of-envelope check in code right after these numbers):
- Energy: Each AI interaction uses about 0.3 watt-hours (Wh). Multiply that by the 378 million daily AI users in 2025 (counting just one interaction each), and you get 113.4 MWh (113,400 kWh) per day—enough electricity to power roughly 22,680 Brazilian homes for an entire day (assuming 150 kWh/month per home).
- Water: Each prompt can consume up to 500 ml of water for cooling. Globally, that’s 189 million liters of clean water used every day, just to answer questions like “which color should I pick for my project?” That’s equivalent to the daily water consumption of a metropolitan area like Joinville (Brazil), home to about 1.26 million people.
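If you want to verify the arithmetic yourself, here's a minimal Python sketch that reproduces the figures above. Every input is one of the working assumptions stated in the list (0.3 Wh and 500 ml per interaction, one interaction per user per day, 150 kWh/month per home)—rough estimates, not measured values:

```python
# Back-of-envelope check of the energy and water figures above.
# Every number here is an assumption from the text, not a measurement.

WH_PER_QUERY = 0.3            # assumed energy per AI interaction (Wh)
DAILY_USERS = 378_000_000     # assumed daily AI users in 2025
QUERIES_PER_USER = 1          # the text counts one interaction per user
HOME_KWH_PER_MONTH = 150      # assumed Brazilian household consumption
LITERS_PER_QUERY = 0.5        # assumed cooling water per prompt (500 ml)

daily_kwh = WH_PER_QUERY * DAILY_USERS * QUERIES_PER_USER / 1_000
home_kwh_per_day = HOME_KWH_PER_MONTH / 30
homes_powered = daily_kwh / home_kwh_per_day

daily_liters = LITERS_PER_QUERY * DAILY_USERS * QUERIES_PER_USER

print(f"Energy: {daily_kwh:,.0f} kWh/day -> ~{homes_powered:,.0f} homes")
print(f"Water:  {daily_liters:,.0f} liters/day")
# Energy: 113,400 kWh/day -> ~22,680 homes
# Water:  189,000,000 liters/day
```

Change any assumption and the totals scale linearly—which is exactly why per-query costs matter so much at this scale.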
Recently, OpenAI's CEO, Sam Altman, said that messages containing simple politeness—“please,” “thank you”—cost the company tens of millions of dollars a year in electricity. But let’s be honest: that’s not going to make the owners any less wealthy or dismantle capitalism. And frankly, I’m not overly concerned with their finances. What really matters here is the ecological cost. The lost money pales in comparison to the waste of drinking water, the energy that could power thousands of homes, and the CO₂ emissions that accelerate climate change. The planet is the one footing the bill for every pointless query or unnecessary courtesy we throw at a machine.
It gets worse when we talk about AI-generated visuals. Popular trends like Ghibli-style avatars, meme templates, or “perfect profile pictures” consume roughly ten times the resources of a simple text prompt. In April 2025 alone, over 500 million images were generated—by some estimates consuming as much energy as 3,000 São Paulo–New York flights and enough water to fill 125 Olympic-sized swimming pools.
So that meme you thought was “harmless,” or that impulse to ask AI a lazy question instead of thinking for two seconds—they have a real environmental cost. Before delegating a basic thought to artificial intelligence, ask yourself:
Does this question truly need to be asked?
Are we, without noticing, trading away our autonomy and creativity for convenience—while harming the planet in the process?
The mental toll of outsourcing our thinking
Have you felt like thinking is becoming… exhausting? It’s not just a feeling. In 2025, a study from Elon University found that 61% of experts view growing dependence on AI as a serious threat to our ability to solve complex problems and think critically. And that’s not an exaggeration. We’re outsourcing even the simplest decisions to machines—and it's quietly eroding our cognitive skills.
Just scroll through social media: it’s become trendy to post screenshots saying “ChatGPT, give me a caption for my photo,” “Gemini, what recipe can I make with what's in my fridge?” or even “Which emoji fits this message best?” People are asking AI to build playlists, write excuses to skip work, or craft flirty responses for dating apps. It might sound like a joke, but every time we hand off a basic decision to AI, we lose an opportunity to exercise fundamental neural pathways. Worse still: a 2024 study from the University Medical Center Hamburg-Eppendorf found that half of AI users trust an automated response more than their own judgment—even when they know AI can be seriously wrong.
Trend forecasters like WGSN have already dubbed 2025 “the year of therapeutic laziness.” Not thinking has become a lifestyle. TikTok and Instagram are packed with tutorials on how to use AI for birthday messages, meal planning, gift ideas, shopping lists… Everything pre-packaged, everything easy, no effort required. Thinking has become almost… optional.
But there’s a price. Studies from Brazil’s Unifesp and McKinsey warn that overuse of AI in education is leading to a generation with “mental laziness.” Children raised with virtual tutors show 30% less problem-solving ability without tech support. And adults aren’t faring much better: a European report found that people who rely on AI for daily tasks (like planning routes, picking movies, or checking grammar) show 20% less activity in the prefrontal cortex—the brain region responsible for logical reasoning.
The result? People increasingly struggle to tell real from fake. According to Brazilian news outlet G1, 78% of young people aged 18–24 can’t reliably distinguish fact from fiction without using verification tools (which, ironically, also rely on AI). And so the cycle deepens: the more we outsource, the more dependent we become.
But the brain doesn’t get stronger by resting—like a muscle, it thrives on challenge. A study from Universidade Presbiteriana Mackenzie found that students who used AI to summarize or write their assignments had 40% lower content retention compared to those who took notes by hand.
Even relationships are shifting. In 2025, 45% of dating app users said they’d rather chat with algorithms than engage in tough conversations with real people, according to Forbes. Virtual partners, therapy chatbots, automated advice—it’s all fast, frictionless, and conflict-free. But it’s also empathy-free and emotionally stagnant.
Meanwhile, AI keeps getting smarter. As of April 2025, OpenAI’s o1 pro model had reached an estimated IQ of 145 on some tests—genius-level. But that intelligence doesn’t transfer to us. Quite the opposite: the more AI does for us, the less we’re pushed to stretch our own minds.
So how do we escape this trap? It’s not about abandoning AI, but using it more consciously:
- Before you ask an AI, try thinking through the answer yourself—for two minutes.
- Encourage kids to solve problems without virtual tutors and to play offline.
- Question ready-made answers—AI gets things wrong, replicates biases, and spreads misinformation too.
In the end, the cost of lazy thinking isn’t just environmental—it’s cognitive. We’re trading something far more precious: our ability to reason, create, and adapt. As philosopher Daniel Dennett said in 2024:
“AI is the next step in evolution—but only if we don’t let it be the last step in ours.”
Thinking isn’t a flaw. It’s what makes us human. And giving that up for convenience? That’s too high a price to pay.
Is everything AI says true?
There’s a trend going around among professionals on social media—“I asked ChatGPT, and here’s what it said.” It gives the impression that many people are starting to see AI as a kind of ultimate source of truth, almost as if it were superior to the human mind. But is AI really the holder of all truth?
The short answer is: No.
What AI does is process and organize information that has been fed to it—by humans. And that comes with several important implications.
First, AI does not guarantee that everything it says is 100% correct. It relies entirely on the data it’s trained on—and that data can be wrong, outdated, or outright misinformation. Recent studies have shown that even with access to thousands of academic papers and news articles, large language models like ChatGPT, Copilot, Gemini, and Perplexity AI still make frequent mistakes. According to a BBC analysis, 51% of summaries generated by these chatbots contained significant issues, and 19% included factually incorrect information, such as wrong dates, inaccurate figures, or serious contextual distortions.
Even more concerning, research suggests that using AI to verify news can increase belief in false headlines, especially when the AI fails to clearly distinguish fact from fiction. In some cases, it labels true stories as fake—or fake ones as true—further confusing users and fueling the spread of misinformation. This becomes even more alarming when we remember that AI can be trained on fake or manipulated content, either unintentionally or deliberately.
Another key point: AI has no critical thinking or awareness. It can’t tell the difference between opinion and fact, and it doesn’t grasp the broader context of complex situations. As a result, it sometimes hallucinates—invents answers that sound plausible but are completely wrong, just to avoid leaving the user without a response. And because AI tends to answer quickly and confidently, many people believe it without questioning.
The danger of treating AI as an absolute truth-teller goes beyond misinformation. It can erode trust in our own human capacity to think critically, do research, and doubt. And, as we’ve discussed, every single question asked of an AI—even those made out of idle curiosity—has an environmental cost: it uses energy, consumes water, and contributes to ecological impact.
So, it’s worth repeating: AI is a powerful tool, but not an infallible one. It reflects what we put into it—along with all our strengths, biases, and mistakes. Using AI responsibly means going beyond just asking questions. It means having the critical thinking skills to interpret the answers, evaluate their quality, and know when not to rely on the machine.
Artificial Intelligence is elitist — and it's widening social inequality
Artificial intelligence isn’t neutral, and it’s definitely not democratic. The real issue isn’t AI itself, but how it’s used: large corporations and those already in power are leveraging this technology to reinforce capitalism and deepen their advantages. The result? AI ends up exposing—and exacerbating—existing inequalities. In practice, it often serves the interests of a privileged few, while the majority are left without access or voice in this so-called “technological revolution”.
If you’re from a low-income background, live in the global South, or rely on public services, chances are you’re getting left behind. And this isn’t speculation—the data backs it up.
To start, 78% of global AI investment is concentrated in the United States, China, and the European Union (World Bank, 2024), while regions like Africa and much of Latin America receive less than 1% of global funding. This means most AI tools are built for those who already have plenty, while millions still lack access to basics like healthcare, education, or even the internet.
And it’s not just a question of geography. Even within countries like Brazil, the gap is stark. Private hospitals in major cities now use AI to detect cancer early—while community clinics on the outskirts can’t even keep an X-ray machine running. Online education platforms like Coursera and Khan Academy use AI for personalized learning—but require fast internet, something 67% of rural Brazilian schools lack (School Census 2024).
In reality, AI has become a tool of the elite. Startups in poorer nations are forced to rely on expensive, foreign tools—priced in dollars, governed by rules that ignore local needs. This creates true “AI deserts”—places where people watch the revolution from the sidelines, unable to participate.
And the future? It’s not promising—unless something changes. The IMF warns that AI will affect nearly 40% of jobs worldwide and, left unchecked, is likely to deepen inequality both between and within countries. Why? Because AI is automating the simplest jobs—those most common in low-income regions—while increasing the value of jobs that require advanced technical skills and access to technology. Meanwhile, companies like Google, Microsoft, and OpenAI charge up to $50,000 a month for access to cutting-edge models. That means only the already-powerful get to play.
To make matters worse, many of the best tech professionals from poorer countries are headhunted by foreign companies, draining local talent and leaving their home countries even more dependent on imported solutions.
In agriculture, wealthy farmers in India use AI to monitor crops and set prices—while smallholders still rely on hand tools and predatory loans. In finance, AI decides who gets credit—and, guess what? If you live in a poor neighborhood, you’re more likely to be denied, even with a stable income. Facial recognition algorithms have been found, in some studies, to be up to 35% less accurate on Black and Asian faces, leading to false arrests and digital exclusion. And if you speak a “non-standard” language like Quechua or Yoruba, AI acts like you don’t exist.
At the end of the day, AI has become a privilege for the few, creating digital castes: those who control the algorithms, and those who are controlled by them. If we don’t change course, this technology will only deepen the gap between rich and poor—both within countries and across borders.
What needs to change?
- Global regulations to ensure fair access
- Investment in local solutions and regional AI research
- Penalties for companies that reinforce tech elitism
- Public initiatives for open, free AI—like India’s BHASHINI project
AI can’t be just another rich person’s toy. If we don’t regulate and democratize it, we’ll soon be watching inequality grow at algorithmic speed.
As Thomas Piketty warned in 2025:
“AI is the new frontier of capital accumulation. If left unregulated, it will create digital castes: those who control algorithms, and those who are controlled by them.”
It’s time to choose which side we’re on.
We need laws to make AI use safe
As I’ve shown throughout this text, artificial intelligence is already deeply embedded in our lives: it helps doctors diagnose illnesses, streamlines company workflows, optimizes urban traffic systems, and even supports climate change mitigation. But like any powerful technology, when used without limits or oversight, AI carries serious risks—from personal data leaks to environmental degradation. That’s why we can’t afford to wait any longer: we need real laws and international treaties that ensure ethics, transparency, and accountability in the use of AI.
The urgency for global regulation is simple: without clear rules respected across countries, this technology can easily become a weapon in the wrong hands. The environmental impact is already immense—billions of daily AI queries consume water and electricity on the scale of entire cities. Meanwhile, deepfakes and fake news threaten elections and democratic stability, and criminals are using AI to clone credit cards, hack into banks, and even design chemical weapons in record time. Without true international cooperation, investigating, tracing, and punishing these abuses becomes nearly impossible.
Some regions are taking action. In 2024, the European Union passed the AI Act, which classifies AI systems by risk level and bans high-risk uses like mass manipulation. In Brazil, a regulatory framework is under discussion, including a proposed agency to oversee ethical AI use, transparency, and compliance with data protection laws (LGPD). The OECD has brought 42 countries—including Brazil and Argentina—into agreement on principles for trustworthy AI, prioritizing human rights and safety.
But we’re still far from a truly global safety net. UNESCO has identified nine different approaches to AI regulation worldwide, and without a universal standard, there will always be loopholes for bad actors to exploit. Digital crimes will only be effectively tackled through binding international treaties that enable joint investigations and cross-border enforcement. And of course, we need environmental standards: data centers should be required to meet energy efficiency targets and compensate for water use, as outlined in the UN’s 2030 Agenda.
What we need is the creation of global forums for regulatory harmonization, international arbitration mechanisms to resolve AI-related disputes, and safeguards to prevent developing countries from being trapped under the influence of a few tech giants.
And here’s my main point: it’s not enough to create laws and treaties if only a handful of countries comply. It doesn’t matter whether a nation is a global superpower or a small state—ethics and accountability must come before ego, power, or profit. If AI is global, then its rules must be too.
AI is not neutral—it reflects who we are and what we value. That’s why regulation must be collective, transparent, and truly global. Only then can we ensure that AI serves humanity, and not the other way around.
As Alan Turing put it—in a line UNESCO and many others have since echoed:
“We can only see a short distance ahead, but we can see plenty there that needs to be done.”
The time to act is now—and we must act together.
Final thoughts
I hope the purpose of this text is clear: I’m not saying you should stop using artificial intelligence. It’s here, it’s part of our present, and it can be an incredible tool.
What I do hope is that you’ll use AI with greater awareness, critical thinking, and responsibility. Before delegating any thought or decision to a machine, take a moment to reflect: Does this really need to be automated?
After all, technology is meant to support our lives—not to replace our ability to think, create, and care for the planet.
And just to be clear: I’m not trying to diminish the value of serious research on preventing AI-related existential risks or avoiding a future “robot uprising”. These discussions matter, and the people involved in them are doing vital work. My point is simply that this narrative has come to dominate the conversation—while many of the urgent, tangible problems caused by AI are unfolding right now, with real impacts on people’s lives today and on the lives of future generations yet to be born.
Let’s not lose sight of what’s already at stake.
For anyone interested in all the references and links to the sources I used while researching for this piece, they’re available in this Google Docs link: https://docs.google.com/document/d/1fisH-3vfoiRE5jbTUkZ_5Lp_WbeYAz-MfIAlHIMlTd4/edit?usp=sharing
SummaryBot @ 2025-04-25T15:35 (+2)
Executive summary: In this personal reflection and evidence-based analysis, the author argues that while AI offers practical benefits, we urgently need to confront the immediate environmental, psychological, social, and political harms caused by its widespread and often trivialized use, rather than focusing solely on speculative future risks like a robot uprising.
Key points:
- AI is not inherently dangerous, but human misuse is: The real threats come from how people are already using AI irresponsibly today—including spreading misinformation, escalating cybercrime, and enabling social manipulation—not from a hypothetical future AI rebellion.
- Environmental costs of AI are significant and overlooked: Each trivial interaction with AI (e.g., asking for jokes or advice) consumes energy and water at scale, contributing to CO₂ emissions and straining natural resources.
- AI dependence erodes human cognitive abilities: Growing reliance on AI for simple decisions weakens critical thinking, creativity, and problem-solving skills, potentially leading to a generation less capable of independent thought.
- AI is deepening social inequality: Access to AI tools is concentrated among wealthy countries and individuals, exacerbating global and domestic inequalities, and creating "digital castes" where the powerful benefit while others are excluded.
- Global regulation of AI is essential: To prevent abuses and ensure AI serves humanity as a whole, binding international laws and environmental standards must be developed and enforced across all countries—not just a few leaders.
- Call for conscious, critical AI use: Rather than abandoning AI, the author encourages users to engage with it thoughtfully, preserving human autonomy, creativity, and responsibility in an AI-integrated future.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.