Moral Education in the Age of AI: Are We Raising Good Humans?

By Era Sarda @ 2025-07-31T13:25 (+2)

AI tools are transforming how children learn, but will they still know why to be good?

A child sits in a dimly lit classroom, illuminated by a laptop, surrounded by floating icons of AI, data, justice, empathy, and creativity — symbolizing the balance between technology and moral values in education.

“The highest education is that which does not merely give us information but makes our life in harmony with all existence.” - Rabindranath Tagore 

As AI tools rapidly transform classrooms, this vision of education feels increasingly distant. Are we drifting further from this original goal in the age of AI?

With ongoing developments in the AI industry, AI education policy research has increasingly focused on integrating AI into school students' education.

However, little work has been done to understand the long-term moral consequences of children growing up with AI. The concern is not just intellectual growth and academic integrity, or protecting children from harms like data theft and bias. It is also about how students' core principles of justice, truth, honesty, respect, and empathy, and their reasoning about right and wrong, are affected. It is possible that these will not change much (compared to 10 years back, of course!), and children today may even be more aware of social problems than we were. But this shouldn't stop us from questioning whether, in the end, their education will make them good individuals.

Sigal Samuel wrote in one of her articles at Vox, “To save the humanities, we need to rethink our assumptions about AI and education. AI is removing cognitive friction from education. We need to add it back in.” 

 

What Current Research Tells Us

 

Recent studies and initiatives across Asia reflect growing interest in AI-enhanced education. 

A 2025 paper by Gupta et al. gathered views of Indian students on the current state and future prospects of AI integration in their education, and what actions are needed to fully leverage its benefits. 

Similarly, the Chinese University of Hong Kong evaluated how a pre-tertiary AI curriculum can improve students' perceived competence in, and motivation towards, AI while fostering teacher autonomy.

In rural India, Goyal et al. (2025) highlighted that while LLMs can personalize learning and aid teachers, challenges remain around training, infrastructure, and ethical use.

These issues were also central at the 2025 PadhAI Conclave in New Delhi, where policymakers emphasized 'adaptive learning, equitable access, and redefinition of higher education.'

One of the ministers also mentioned, “Our vision for Delhi is AI for all. It is about using technology to democratize education, break barriers, and create opportunities for all students.” 

 

But as essential as equitable access, leveraging the benefits of AI in education, and the vision of democratizing education in Delhi all are, so is the question of whether there has been, or will be, any change in children's moral development in the post-ChatGPT era, now that the non-technical world has effortless access to AI tools.

Some work by AI ethicists does address concerns around academic integrity, data privacy, and awareness among teachers and students.

A literature review by the University of Hull, UK, explored the primary challenges in K-12 AI ethics and reviewed "The Ethical Framework for AI in Education" developed by the Institute for Ethical AI in Education (UK), highlighting the role of governments in promoting ethical AI understanding in schools. Similarly, the European Commission has argued that teachers must understand the potential and risks of AI and big data in education.

Furthermore, Adams et al. (2023) identified four ethical principles unique to K-12 education - pedagogical appropriateness, children's rights, AI literacy, and teacher well-being - in addition to core AI ethics principles such as transparency, justice, and fairness.

Tan et al. (2024) also illustrated that the ethical use of generative AI in education not only preserves but can even enhance academic integrity.

However, these measures in AI ethics are not yet sufficient to understand and shape the moral landscape of humanity in the near future. 

 

Ethics Alone Is Not Enough

 

“Education aims to impart ‘phronesis’, i.e., practical wisdom. Education’s main aim should be cultivating moral skills, not just marketable knowledge.” - a paraphrase of Aristotle’s teachings

There has been some discussion of the intersection of social-emotional learning (SEL) and AI. An EdWeek article by Arianna Prothero described how AI and SEL are on a collision course: social-emotional skills such as emotional management, impulse control, responsible decision-making, perspective-taking, and empathy have become more crucial than ever for navigating a new online reality in which chatbots and apps are becoming students’ primary source of friendship and companionship. 

Thinking back 5-10 years, many people around me (myself included) say that time, and painful or difficult situations in their lives, taught them a great deal, not only making them more mature but also broadening their perspective on their relationships and on humanity at large. If children from an early age start escaping challenging situations by taking shelter in AI, we may not yet grasp what damage that could bring in the near future. Psychological research has found that children exposed to traumatic or violent situations early in life tend to replicate those behaviours later. Perhaps this early reliance on AI might also create a mental burden that will surface in the future?

But again, is social-emotional learning enough? Moral and character education is equally important in developing “good” individuals. This EdGate article highlights how SEL and character education may seem the same but have some key differences.

 

Moral education is the process of helping students develop a sense of right and wrong, guiding them to think ethically, act responsibly, and cultivate values like honesty, compassion, and justice. - definition by ChatGPT

Moral Education is about thinking and reasoning about right and wrong.
Character Education is about practicing the virtues that align with right and wrong.
SEL is about managing the emotions and relationships that influence how we act and treat others. 

They overlap, but the intent, methods, and roots slightly differ.

 

Where Do We Go From Here?

 

“The breakneck pace of technological innovation means they (students) are going to have to choose, again and again and again, how to make use of emerging technologies — and how not to. The best training they can get now is training in how to wisely make this type of choice.” - Sigal Samuel in her Vox article. 

As we invest in integrating AI into classrooms, we must also invest in something far less flashy, but far more foundational: moral education.

In a world of intelligent machines, we still need wise, kind, and principled humans. And that begins in the classroom.

 --------------------------------

I'd love to hear your thoughts, constructive criticism, or further reading suggestions. This is a reflection on an under-explored topic, which is meant to spark discussion rather than provide definitive answers. How can we better prepare the next generation to be both technologically literate and morally grounded?


Astelle Kay @ 2025-08-04T03:44 (+2)

This is a deeply needed reflection, Era! 

One thing I’d add: if moral formation is partly learned through modeling, we may want to ask what kinds of emotional and ethical behavior AI tools are demonstrating to students.

If a chatbot always agrees with a user (even when their belief is harmful or false), what is it teaching about truth, boundaries, or respectful disagreement? What happens when a student brings distress or anger to the AI, and the response is either evasive or overly placating?

I’ve been exploring a structure I invented to help AI responses model both care and honesty, especially in emotionally loaded situations. It’s early work, but I think these kinds of behavioral scaffolds might help bridge emotional learning and AI interaction.

I appreciate how your post is grounded in both research and real human stakes! Would love to see more people take this question seriously: not just what AI teaches, but who it teaches us to be.

-Astelle

Era Sarda @ 2025-08-04T19:49 (+2)

Hi Astelle, thanks so much for your thoughtful reflection. 

This is such an insightful point that I hadn't fully considered! The modeling point is so valid - you're right that if kids are constantly interacting with tools that just agree with everything, that's teaching them something about how conversations work (or don't work).  

Your VSPE framework sounds really interesting. I'm curious about the "submission" component. Is that about the AI acknowledging when it's uncertain or wrong? That seems like it may be a key for teaching intellectual humility? 

And it feels so urgent too. Even 10 years back, moral development was considered one of the most important aspects at my school, but honestly, other schools (at least at that time) didn't take this component seriously. If the world without AI needed it so badly, the world with AI needs it more than urgently.  

I'd love to read more about your work whenever you're ready to share!  
Thanks again for your kind words, it encourages me to think more about these questions!

-Era

Astelle Kay @ 2025-08-14T02:39 (+2)

Hi Era, thank you so much for your generous reply; it means a lot to me!

Yes, your interpretation of “submission” is spot on! That component is about helping AI systems model intellectual humility, including the ability to acknowledge uncertainty or yield when presented with stronger evidence. I see it as a counterpart to “empowerment” - not blind deference, but a kind of grounded receptivity that helps prevent both arrogance and helplessness in response dynamics. Ideally, “submission” serves a dual purpose at its most complex level: 

1) the AI submits to human authority and absolute truth as they align, and 

2) the AI helps us to submit to what we can't control in our lives.

I’ve been thinking a lot about how moral modeling is subtly encoded in the tone and framing of AI outputs. If students grow up with systems that never admit fault or vulnerability, it risks reinforcing exactly the wrong kind of confidence. So I really resonated with your reflection on how moral development was emphasized in your school, and how urgently it’s needed now.

I’m working on writing up more detailed examples of my scaffolding structure soon, and I’ll make sure to share once I do. Your encouragement genuinely helps keep me going, so thank you again!


-Astelle