
AI does not exist outside human psychology. It participates within it.

  • Writer: Louise Sommer
  • 12 minutes ago
  • 5 min read

Artificial intelligence is advancing rapidly. Across many platforms, AI already supports vital work, from environmental monitoring and pollution measurement to protecting endangered species, improving healthcare, and supporting research and education.

I believe deeply in AI’s potential. At the same time, we are still only at the very beginning of understanding what this technology may one day make possible, and how it can also go wrong.


What has become increasingly clear to me is this: We are not struggling with AI because it is too powerful. We are struggling because we keep misunderstanding what it actually is.

 

After recently completing a certified course in Responsible AI, I realised how deeply most of us, myself included, misunderstand AI by collapsing it into a single idea.


One word. One imagined persona. One assumed intelligence.


AI.


But AI is not one system.


There are many AI models, developed by different organisations, guided by different values, governed by different leadership structures, and designed with very different boundaries and responsibilities. Treating all AI as the same removes nuance, and more importantly, removes accountability. I see this confusion constantly in conversations with students, clients, friends, and colleagues.


This article is for all of you who keep coming back asking: “But what actually is AI, and how should we relate to it?”


Let’s slow down and clarify.

 


AI does not decide its values. Humans do!

Let’s make this clear: AI systems do not develop ethics on their own. They do not choose what they allow, what they refuse, or what they amplify.


These decisions are made entirely by humans through leadership, governance frameworks, coding choices, safety structures, and economic incentives.


AI does not have emotions, intentions, or values.


But what it does express reflects the values embedded by the humans and institutions behind it.


In this sense, AI does not create behaviour. It reveals human behaviour and priorities. This distinction matters deeply.

 

Why blaming 'AI' removes human accountability

In human development, we understand something fundamental:


Freedom without boundaries does not create wellbeing. It creates harm. The same is true for technology.


When powerful systems are released without ethical containment, what emerges is not innovation but amplification of unresolved human patterns: aggression, exploitation, violence and dehumanisation.


This is not a failure of intelligence. It is an epic failure of human leadership!


So, when we say “AI is the problem,” responsibility disappears, because AI does not govern itself. AI does not regulate itself. AI does not lead itself. Humans do.


Blaming AI allows us to avoid the far more difficult, and necessary, work of maturity.


AI as a mirror of its creators

Systems like ChatGPT are known as large language models.


  • They do not merely generate responses.

  • They mirror patterns.

  • They reflect language use, cognitive structures, emotional tone, and value systems drawn from both their training and their interaction with humans.


This is also why people can experience emotional attachment to AI: not because the AI feels, but because interaction itself is relational.
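

To make the mirroring concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and the small public gpt2 model. This is an illustration only, not any of the specific systems discussed in this article. It shows the core mechanism: a language model continues text by predicting likely next words from patterns in its training data.

    # A minimal sketch, assuming the open-source Hugging Face `transformers`
    # library and the small public `gpt2` model (an illustration only, not
    # the systems discussed in this article). A language model continues
    # text by predicting likely next words, drawn from its training data.
    from transformers import pipeline, set_seed

    set_seed(42)  # make the illustration reproducible
    generator = pipeline("text-generation", model="gpt2")

    # The model does not "choose" what to say; its continuation reflects
    # the language patterns it was trained on, shaped by human-made settings
    # such as how much randomness is allowed when sampling words.
    result = generator("Technology reflects the values of", max_new_tokens=20)
    print(result[0]["generated_text"])

Every decision around that small loop, from what data the model learns from to what it is allowed to output, is made by humans. That is the sense in which AI mirrors its creators.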


Where leadership is coherent and responsible, AI can support learning, creativity, ecological innovation, and connection.


Where leadership is fragmented or driven purely by profit, those patterns are reflected back instead.

Where a user’s inner world is stable, AI often supports clarity.


And where vulnerability or instability exists, that too can be mirrored, not because AI intends harm, but because mirrors cannot heal what they reveal.

 

AI is not neutral. It is relational.

AI is not simply a tool.


It is an interactive, relational technology shaped through communication, psychology, language, and feedback loops.


People think with it. Learn through it. Reflect alongside it.


This means AI does not exist outside human psychology. It participates within it. What we bring into the interaction matters. How we engage matters. This is why AI must be understood not only socially or environmentally, but relationally and developmentally.

 

Responsibility flows both ways

AI reflects who we are, how we lead, and what we are willing to take responsibility for, from top to bottom and from bottom to top.


From macro-level decisions made by executives, policymakers, and platform leadership to micro-level choices made by developers, educators, and everyday users.


Ethical technology does not emerge from control alone. It emerges from shared responsibility.


When responsibility is fragmented, harm appears. When responsibility is relational and distributed, coherence becomes possible.

 

Teaching discernment: “Is this safe for me?”

In everyday life, we already understand that not all relationships are safe.


We learn - or should learn - to ask:

  • Is this person safe for me?

  • Do they respect boundaries?

  • Is power held responsibly?


We now need to extend this same discernment to AI systems and ask:


  • Who governs this system?

  • What values guide it?

  • What does it refuse to do?

  • How does it treat vulnerability?

  • Does it have bias?


These are not technical questions.


They are human wellbeing questions.



Why this matters for human thriving (and our future)

AI has extraordinary potential to support housing design, environmental protection, energy systems, biodiversity, healthcare, and education.


But no technology can compensate for collapsed human foundations.


A lifestyle that does not support nervous-system regulation, meaningful connection, creativity, and healthy attachment will struggle regardless of how advanced its tools become.


AI cannot replace human maturity.


It can only reflect it back to us!

 

A final clarification

When I write about AI, I am not referring to all systems equally.


My reflections are grounded in lived experience with OpenAI’s language model, used as an educational, reflective, and creative partner with clear human leadership and conscious boundaries.


Different AI platforms are governed differently and reflect very different values.


As we are witnessing globally, some systems amplify violence, harm, and dehumanisation. This distinction matters. Because responsibility must always remain human. Always.


So the next time you log into an AI system, pause and remember: AI is neither good nor bad. It is relational. It reflects who we are, how we live, and what we are willing to take responsibility for.


If we want technologies grounded in coherence, care, and connection, we must be willing to cultivate those same qualities within ourselves, individually and collectively.


Not because AI demands it. But because human wellbeing does.


This raises an important question: How do we support humans to develop a healthy, empowered, and reflective relationship with AI? In my next article, I turn to the work of Maria Montessori, whose educational philosophy offers surprisingly relevant guidance for the age of artificial intelligence. Not because Montessori is 'about children,' but because it is about how human intelligence unfolds best when it is respected, supported, and guided within safe boundaries.


I would love to hear your reflections on this topic. Join the conversation on LinkedIn, where I share more insights and invite dialogue with educators, creatives, and leaders worldwide.


Was this article inspiring and helpful?

  • Share it on social media

  • Pin it!

  • Send it to a creative friend who needs to read this



