Why Leadership Is a Containment Function in the Age of AI
- Louise Sommer

Editor’s note
This article is the second in a short reflective series on artificial intelligence, psychological containment, and leadership. The first piece, Why AI Needs Psychological Containment. Not Just Ethical Guidelines, explores why ethical and technical frameworks alone are insufficient for holding the psychological and relational impacts of AI in human systems. This article continues that inquiry by examining leadership itself as a containment function, particularly within educational and institutional contexts where AI is increasingly embedded.
Much of the current conversation about leadership in the age of AI focuses on speed, vision, and decision-making — particularly within organisations and institutions under pressure to adapt.
Leaders are expected to move faster.
To implement new technologies decisively.
To keep pace with accelerating change.
These expectations are understandable. But they overlook something essential. In times of increasing complexity, leadership is not only about direction. It is about containment.
The myth of decisive leadership
Contemporary leadership culture often rewards confidence, clarity, and rapid action. These qualities can be valuable. But they are not sufficient for the conditions institutions are now facing.
AI intensifies uncertainty. It redistributes authority. It blurs the boundary between human judgment and technical output.
Under such conditions, the greatest risk is not hesitation. It is premature certainty.
When leaders collapse ambiguity too quickly, anxiety does not disappear. It is displaced — often onto tools, systems, or simplified narratives that promise control without understanding.
This is not primarily a failure of competence. It is a failure of containment.
Leadership as psychological containment
Psychological containment refers to the capacity to hold uncertainty, anxiety, and power without deflecting them onto others or external systems.
In leadership contexts, containment involves the ability to:
hold collective anxiety without rushing to false resolution
tolerate ambiguity long enough for understanding to emerge
resist delegating judgment prematurely to systems or procedures
maintain relational stability when institutions are under pressure
Containment is not passivity. Nor is it control.
It is the capacity to remain psychologically present when complexity intensifies — so that thinking, learning, and responsible action remain possible.
Why AI intensifies the need for containment
AI systems externalise thinking. They generate outputs that appear coherent, confident, and authoritative — even when uncertainty remains unresolved.
Within organisations and educational institutions, this can lead to subtle but significant shifts:
leaders defer judgment to systems
responsibility becomes diffused
decision-making appears rational while becoming psychologically detached
authority migrates from human relationships toward technical artefacts
In moments of pressure, acceleration, or institutional uncertainty, AI can begin to function as a psychological container by default — not because it is designed to do so, but because no human structure is adequately holding that role.
As explored in the earlier article on psychological containment and AI ethics, this is where risk begins to accumulate.
AI does not create this risk. It reveals it.
Containment, education, and institutional life
Leadership does not occur in isolation. It is formed — through institutional cultures, educational practices, and everyday decision-making environments.
In higher-education and professional settings, leaders shape more than strategy. They shape psychological climate. They model how uncertainty is held, how authority is exercised, and how responsibility is shared.
In the age of AI, this modelling becomes formative:
Do leaders demonstrate reflective judgment, or rapid delegation?
Do they remain in relationship with complexity, or outsource it?
Do learning environments support sense-making, or mere compliance?
When leadership lacks containment, institutions become reactive. When containment is present, institutions can adapt without fragmenting.
A different understanding of leadership
Leadership, at its core, is not the elimination of uncertainty.
It is the capacity to hold uncertainty in ways that allow others to think, learn, and act responsibly.
In the age of AI, this function becomes more — not less — important.
Technology can accelerate processes. It cannot contain meaning, anxiety, or ethical weight.
That responsibility remains human.
Why this matters now
As AI becomes increasingly embedded in education, governance, and organisational decision-making, the question is not whether leaders will use these systems.
They will.
The question is whether leadership itself will mature fast enough to hold what AI amplifies.
Without psychological containment, leadership collapses into either control or abdication. With it, AI can remain a powerful tool — embedded within human judgment rather than replacing it.
Leadership, in this sense, is not about standing in front, but about sustaining the psychological and relational conditions under which responsible judgment can emerge.
I would love to hear your reflections on this topic. Join the conversation on LinkedIn, where I share more insights and invite dialogue with educators, creatives, and leaders worldwide.