Leadership

Enabling Your Team, Not Mandating Them, to Use AI


Author: Austin McDaniel

Date: Feb 3, 2026


I've watched the "you must use AI or find another job" mandates roll through the tech industry. CEOs issuing ultimatums. Employees scrambling to look productive with tools they don't understand. The whole thing feels backwards.

My background is engineering, not sales. I joke that I just play a CEO during the day—I'm actually a software engineer. So when I recently talked to my team about AI adoption, I took a different approach. No mandates. No threats. Instead, we had an honest conversation about what's changing and how we figure it out together.

The response surprised me. One team member said it was "framed as exploration rather than mandate, positioned as a superpower instead of a threat." Most importantly, "you put the definition and ownership in the hands of the people who will actually be using it day to day."

That's the way to do it.

For CEOs: What actually works

If you're a CEO reading this, here's the honest truth: you don't need to understand how AI works to help your team adopt it.

What you need to do:

  1. Acknowledge the overwhelm. Your team is feeling it even if they're not saying it.

  2. Frame AI as a superpower, not a threat. The language matters.

  3. Put ownership in your team's hands. Let them pick the tools. Let them define the workflows.

  4. Invest in the tools. Absorb the license costs. Remove the friction.

  5. Create space for experimentation. Not every experiment will work. That's the point.

  6. Share what works. The compound benefit comes from teams sharing prompts and patterns.

Why the mandate approach fails

My dad used to say you catch more flies with honey than vinegar. Force me to do something and I'll instinctively do the opposite. A lot of people feel this way.

When you mandate AI adoption, you get compliance theater. People open ChatGPT to look busy. They paste in work and paste out results without understanding what happened. Worse, they become defensive—treating AI as a threat rather than a tool.

The alternative? Meet people where they are. Acknowledge the overwhelm. Give them permission to experiment.

Even technical people are overwhelmed

Here's something executives don't say out loud enough: AI is moving at a pace that's overwhelming even for people in the thick of it.

I'm technical. I watch this space every day. And it's a LOT. Major developments every few months. New tools weekly. Autonomous agents creating their own religions on AI-only forums (yes, really). If you're someone writing code or designing products eight hours a day, keeping up feels impossible.

When I talked to my team, I acknowledged that. I said: "I can imagine how overwhelming this can feel. And I can probably imagine how you all feel when I occasionally share something new, like 'hey, have you seen this?'"

Naming the overwhelm makes it easier to move past it.

The "personal assistant" framing

People get stuck on where to start. Here's the framing I use: If you had a personal assistant, what would you give them?

You wouldn't hand a new assistant a legal document and say "handle this" without reviewing their work. You wouldn't give them your credit card on day one. You'd start small, check their output, build trust over time.

AI works the same way. Start with low-stakes tasks. Review everything. Build guardrails as you learn.

One thing I do: after sales calls, I ask AI to review the recording and roast me. What did I do poorly? How could I improve? It's a five-minute habit that makes me better. No one's job is at risk. I just get faster feedback.

"Slop" is a prompting problem

The biggest complaint about AI-generated work is that it produces generic slop. This is true—when you don't know how to instruct it.

Think about hiring an employee and telling them to "go do something" without any direction. What would they produce? You'd have no idea. AI is no different.

When I started using AI for contract review, it flagged every possible concern, most of which nobody cares about. Once I told it what actually mattered to my business (what puts us at risk, what we should care about in five years), the output got dramatically better. I went from spending hours reviewing contracts to finishing them in a couple of quick passes.

The skill isn't in using the tool. It's in giving it the right context.
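To make the difference concrete, here's a minimal sketch in Python. Everything in it is illustrative: the priorities and the prompt wording are hypothetical stand-ins, not the actual instructions I use for contracts.

```python
# Illustrative only: a "go do something" prompt vs. one that carries
# business context. The priorities below are hypothetical examples.

def generic_prompt(contract_text: str) -> str:
    # No direction, so the model flags every conceivable concern.
    return f"Review this contract and list any issues:\n\n{contract_text}"

def contextual_prompt(contract_text: str, priorities: list[str]) -> str:
    # Same task, but scoped to what the business actually cares about.
    bullet_list = "\n".join(f"- {p}" for p in priorities)
    return (
        "Review this contract. Only flag clauses that touch the concerns "
        "below; ignore boilerplate that doesn't.\n\n"
        f"Our priorities:\n{bullet_list}\n\n"
        f"Contract:\n{contract_text}"
    )

priorities = [
    "anything that puts us at financial or legal risk",
    "obligations that still bind us five years from now",
]
print(contextual_prompt("Sample contract text...", priorities))
```

The second prompt is barely longer to write, but it tells the model what "an issue" means for your business, which is the whole game.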

Let your team pick the tools

Here's where most executives get it wrong: they pick the AI tools and push them down.

We're doing the opposite. I told my team: "I will empower you all to pick them. I will tell you what I think is good, but it's going to be on you all to pick them and adopt them."

Why? Because the people doing the work know what they need. Designers need image generation. Developers need code completion. PMs need summarization. One tool doesn't fit all.

What we do insist on is consistency within teams. I'd rather things be consistently wrong than inconsistently right. If everyone's using different tools, you can't share learnings, prompts, or workflows. The compound benefits disappear.

Start small. Build from there.

The worst approach is all-or-nothing. "I'm going to have AI write everything and not even review it." We've seen that fail. Spectacularly.

The right approach is incremental. Take a little time out of your day. Do something small. See what it can achieve. Then build from that.

One developer on my team was getting repetitive code review feedback from me. I asked Claude to scan all my GitHub comments and generate a document of my patterns and preferences. Five minutes of work. Now the AI catches those issues before I ever see the code.

That's not replacing anyone. That's removing friction.
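A rough sketch of that kind of workflow, assuming you've exported your review comments to a file first (for example with GitHub's CLI); the sample comments and prompt wording here are hypothetical, and the inline list stands in for the exported file.

```python
from collections import Counter

# Hypothetical input: review comments exported beforehand, e.g. with
#   gh api repos/OWNER/REPO/pulls/comments --paginate > review_comments.json
# A tiny inline sample stands in for that file here.
sample_comments = [
    {"body": "Please add error handling around this network call."},
    {"body": "Nit: prefer early returns over nested ifs."},
    {"body": "Please add error handling around this network call."},
]

def build_style_guide_prompt(comments: list[dict]) -> str:
    # Collapse exact duplicates and note how often each point recurs,
    # so the model sees which feedback is repetitive.
    counts = Counter(c["body"].strip() for c in comments)
    lines = [f"({n}x) {body}" for body, n in counts.most_common()]
    feedback = "\n".join(lines)
    return (
        "These are my past code review comments. Distill the recurring "
        "ones into a short document of my review patterns and preferences, "
        "so they can be checked before I ever see the code:\n\n" + feedback
    )

print(build_style_guide_prompt(sample_comments))
```

Feed the resulting prompt to whatever model your team has standardized on, and you get a living document of reviewer preferences out of feedback you'd already written.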

The only people AI will replace

When I hear team members worry about AI eliminating their jobs, my response is direct: the only people AI will eliminate are the ones who refuse to use it.

Someone told me recently they can't imagine not using AI anymore. I feel the same way. It would be like saying "I don't know what Google is." Not using it isn't some principled stand—it's just leaving value on the table.

The people who thrive will be the ones who learn to direct AI effectively. That's a skill. It takes practice. And it's worth developing now—together, as a team—while you still have time to experiment.

INTERESTED?

Design. Code. Results.

We are ready whenever you are


Goodcode

The End

010-010

(C)2025 GoodCode
