
Why AI Needs Psychological Containment, Not Just Ethical Guidelines

  • Writer: Louise Sommer
  • 11 hours ago
  • 2 min read

We tend to talk about artificial intelligence as if it were primarily a technical or ethical problem.


How should it be regulated?

What rules should govern its use?

What is permitted, prohibited, or encouraged?


These questions matter. But they are not sufficient, because AI does not enter human life as a neutral tool. It enters psychological systems: learning environments, organisations, leadership structures, cultures, and nervous systems already shaped by anxiety, aspiration, power, and projection.


Without psychological containment, even the best ethical guidelines remain fragile.



The limits of ethical frameworks

Ethical frameworks assume rational actors who can interpret rules, weigh consequences, and act with reflective judgment. However, human systems do not operate purely on rational cognition.


They operate through:

  • emotion and affect

  • identity and belonging

  • unconscious expectations

  • anxiety under uncertainty

  • the delegation and displacement of responsibility


And AI intensifies all of these dynamics.


When a system accelerates decision-making, externalises cognition, or promises optimisation, it does not simply assist human judgment. It reshapes where judgment lives. Ethics can tell us what should not be done. They cannot, on their own, ensure that power is held responsibly. That is a psychological task.


What is psychological containment?

Psychological containment is the human capacity to hold complexity, uncertainty, and power without collapse, avoidance, or projection.


The concept appears across three fields:

  1. in education, as the ability to create learning environments where uncertainty can be explored rather than shut down

  2. in leadership, as the capacity to hold collective anxiety without displacing it onto scapegoats, tools, or quick solutions

  3. in therapeutic traditions, as the ability to receive intense emotional material without reacting defensively or prematurely


At its core, containment is not control. It is relational stability in the presence of complexity.



Why AI specifically requires containment

AI systems amplify cognitive reach, and they also amplify psychological dynamics. Together, these effects invite:

  • projection (“the system knows best”)

  • displacement of responsibility (“the model decided”)

  • anxiety masking (“the output looks confident, so it must be right”)

  • authority confusion (“is this judgment human, technical, or organisational?”)


In moments of pressure, acceleration, or institutional uncertainty, AI can begin to function as a psychological container by default. Not because it is designed to do so, but because no human structure is adequately holding that role. From a psychological perspective, this is where significant risk emerges, often at considerable cost.


This is not because AI is inherently malicious. As noted in earlier articles, AI systems become what human contexts shape them to be. The risk arises because uncontained power does not disappear; it inevitably leaks into unintended places.


Education, learning, and professional formation

In educational and professional contexts, AI is often introduced as a skills issue: learn the tool, understand the risks, follow the guidelines.


But learning is not only cognitive. It is formative.


In the next piece, I explore why leadership itself must be understood as a containment function in the age of AI, and why this matters for education, institutions, and professional formation.


I would love to hear your reflections on this topic. Join the conversation on LinkedIn, where I share more insights and invite dialogue with educators, creatives, and leaders worldwide.


Was this article inspiring and helpful?

  • Share it on social media

  • Pin it!

  • Send it to a creative friend who needs to read this

