AI Isn’t the Problem: Your Culture Is
Why Getting People “On Board” with AI Starts with Trust, Not Tools
During a recent webinar I was facilitating, a participant asked questions I’m hearing more frequently across campuses and organizations:
“How do we get employees on board with using AI?” “How do I manage the resistance from my team about AI use?”
Fair questions, and revealing ones.
Because what most people are actually asking is not about adoption.
They’re asking about fear, trust, relevance, and control.
Ready or not, artificial intelligence is quickly becoming embedded in our work. But resistance to AI isn’t rooted in stubbornness or technophobia.
It’s rooted in uncertainty:
Will this replace me?
Will this be used to monitor or evaluate me unfairly?
Who decides what’s acceptable?
What happens if I get it wrong?
If leaders treat AI as a purely technical rollout (another platform, another training, another expectation), we miss the real work of change.
AI isn’t just a technology shift.
It’s a cultural moment.
Rather than simply a technical upgrade or platform rollout, the introduction of AI into the workplace is a pivotal cultural moment. It is less about the tools themselves and more about profound shifts in organizational dynamics: how trust is built, how leadership is exercised, and how people relate to their work, relevance, and control. Those deeper uncertainties expose an organization’s existing culture around trust, transparency, and change.
“Companies expect productivity gains of up to 40%, yet the readiness gap remains enormous. A Harvard Business Review Analytic Services study found that while 91% of respondents agreed having the right talent is essential to AI success, 72% said their focus on AI had exposed skill gaps they had not anticipated.” (Stanton Chase, January 29, 2026).
A Moment That Changed the Conversation
Last summer I was sitting in a conference room with a group of professionals who had weathered more change in the last five years than many experience in an entire career. New systems. New leaders. New responsibilities. New expectations. Morale was thin, and trust was fragile.
AI had just entered the conversation.
One person finally said what others had been circling around quietly: “I’m not afraid of learning something new. I’m afraid of how this will be used against me.”
The room went quiet, not because people disagreed, but because they felt seen. Instead of moving quickly to reassurance or explanation, we stayed there. We talked openly about past experiences where technology had been introduced without context, without choice, and without clarity. We named the fear, not to amplify it, but to acknowledge it. Only then did the conversation shift.
People began asking different questions:
What would responsible use look like here?
Where could this actually help us do better work?
What do we want to protect as humans in this process?
That moment reinforced something I see again and again in change work: people don’t resist technology; they resist being excluded from the conversation about how it will shape their work and worth.
Talking About AI Openly Is a Leadership Move
One of the most effective (and underused) strategies leaders have right now is simple, but not easy:
Talk about AI openly.
Not as a directive.
Not as a sales pitch.
But as an ongoing dialogue.
When leaders name what people are already thinking (the uncertainty, the ethical concerns, the discomfort), it builds credibility. Silence, on the other hand, creates space for worst-case assumptions.
Open conversations might include questions like:
What excites you about AI, and what worries you?
Where do you see it supporting your work?
Where does it feel intrusive?
What lines should we not cross?
These conversations signal something powerful:
You are not being forced to change; you are being invited to explore how this can change how we work and impact the systems we have created.
Shared Values Matter More Than Policies
Here’s where culture does the heavy lifting.
Instead of starting with rules about AI, start with shared values about how it will be used.
Policies answer what’s allowed.
Values answer who we are.
In organizations navigating AI well, I’m seeing leaders facilitate conversations that result in shared agreements such as:
AI is a support tool, not a replacement for human judgment.
Transparency matters: we name when and how AI is being used.
Learning is expected (and we have people to teach you); perfection is not.
Ethical use outweighs speed or efficiency.
When people help shape these values, fear decreases. Not because uncertainty disappears, but because agency increases.
And agency is one of the most powerful antidotes to resistance.
AI Reveals Existing Culture - It Doesn’t Create It
Here’s the hard truth: AI doesn’t introduce trust issues into an organization; it exposes the ones that already exist.
In cultures where:
decisions feel opaque,
evaluation feels punitive, or
change has historically been done to people,
AI becomes another symbol of loss of control.
In cultures where:
learning is valued,
dialogue is encouraged, and
leaders model curiosity and humility,
AI becomes a shared experiment rather than a looming threat.
This is why AI adoption cannot be separated from broader culture work. If trust is thin, no amount of training will fix it.
The Role of Leaders: From Experts to Stewards
Many leaders feel pressure to have answers about AI.
The reality?
What organizations need most right now is not AI experts but stewards.

Stewardship looks like:
Asking better questions instead of rushing to solutions
Modeling curiosity instead of certainty
Creating space for ethical reflection, not just efficiency gains
Acknowledging, “We’re learning this together.”
This is especially true in higher education, where values, mission, and human development are central, not incidental, to the work.
Stewards of Inclusion
A recent Gartner HR survey found that 65% of employees are excited to use AI at work. Yet, as the accompanying article notes: “One challenge is that the people responsible for workforce readiness are often missing from the room where decisions around AI strategy or investments get made. Gartner research found that AI deployment decisions are frequently made without any involvement from the CHRO, which leads to poor workforce adoption, misaligned expectations between employees and executives, and organizations failing to realize business value from their AI investments.”
What this suggests is that those who are skilled, knowledgeable, and trusted to develop and implement training are not at the table when decisions are being made. When planning a discussion or hosting a meeting, ask instead: Who in our organization can offer insight, help with coordination, and build trust in the process? You may find yourself designing a “working group” (I suggest this over adding another “committee”) that includes HR, faculty who teach in the area, IT staff, risk management, and finance representation. For campuses, maybe even a student or two. (We all know they are much quicker at adapting to new technologies and tools.)
Change Happens One Conversation at a Time
If there’s one takeaway I’d offer leaders navigating AI right now, it’s this:
You don’t build buy-in through tools. You build it through trust.
Trust is built when people feel heard.
When expectations are clear.
When values guide decisions.
When learning is normalized.

AI will continue to evolve rapidly. Our job as leaders is to ensure our culture evolves intentionally. Because in the end, the most important question isn’t whether your organization is using AI. It’s whether your people feel safe, supported, and seen as they learn to use it.