Adding AI to the Mix is a Whole New Level of Chaos
I’ve been leading remote teams for over a decade. Back when everyone thought you needed to be in the same room to brainstorm effectively, I was already figuring out how to hit targets across far-flung regions where no one had an office and everyone was working from home (or their car). Then the lockdowns hit, I shifted careers, and remote became the new way of working. Some would say my history put me at an advantage. And yes, during the pandemic I was able to seamlessly manage teams across three continents. But then AI joined the chat.
When AI tools started flooding the market, I thought I had this. I mean, how hard could it be to add some new tools to an already complex remote workflow?
Turns out, very hard. But not for the reasons everyone talks about.
The AI Paranoia Triangle
I’m going to reiterate that I love AI. The issue is that everyone expected AI integration to make things easier; it hasn’t. Here’s what I dealt with (and still deal with) when it comes to AI tension.
First, half my team is terrified AI will replace them. They’re second-guessing every creative decision that used to come to them naturally. This fear is paralyzing their best instincts, and this means more nurturing to help them through a new type of creative block.
Second, the other half of my team is secretly using AI tools and not telling me. They’re worried I’ll think they’re cheating or that their work isn’t “authentically” theirs anymore. So I’m getting inconsistent output and have no idea what tools are actually being used. Some have decided to hop onto the “20% of the effort is 80% good enough” bus, and that means my quality-control backlogs are ever-growing.
Third, clients and founders sometimes think that AI means everything should be 10x faster and 50% cheaper. They’re comparing my team’s thoughtful, strategic work to whatever ChatGPT spits out in 30 seconds, and that simply doesn’t cut it. Realistically, the margins for saving time are obliterated by the increased time spent checking AI’s hallucinations.
This triangle is completely missing the point…
We Already Know How to Standardize (That’s Not the Problem)
The “AI will standardize creativity” statement (and ensuing panic) drives me a little crazy because, realistically, professional creatives have always worked within systematic frameworks.
Teams use brand guidelines. Design systems. Editorial standards. Content methodologies. Sales forecasts. Scripts. Precise metrics. We’ve been standardizing output for decades. The difference is that human standardization serves strategy, not efficiency.
When someone says AI makes everything look the same, they’re comparing professional creative work to standard AI outputs. A trained designer using MidJourney within established brand parameters isn’t going to produce generic work. They’re going to produce brand-aligned work faster. And this is where the real advantage of AI comes into play. People who know how to combine creative methodologies and AI can integrate the use of AI tools without losing their creative identity.
My writers can use AI to research faster and explore different angles, but they’re still applying our human writing standards, brand voice guidelines, and strategic messaging frameworks. In other words, their systematic frameworks have evolved, but the work remains human! The issue isn’t the AI; it’s how we’re using AI and what we expect from it.
When Everyone’s an AI Expert (But Nobody Is)
Remote creative teams already lack hallway conversations that build creative cohesion. Now, everyone is experimenting with different AI tools in isolation, and I’m getting wild inconsistency in both quality and approach. One writer is using Claude for research, another is using ChatGPT for headlines, and someone else discovered that Grok is great for deep article work.
Nobody is talking to each other. Few are disclosing their AI use. And even fewer are being honest about whether their deadlines are realistic. That left me simultaneously running quality assurance on both human creativity and AI output. At one point, it felt like I was leading a team where half the people were learning new software every week and the other half refused to update from 2019. The solution was to create systems that let authentic voices work alongside AI tools.
The Useful Chaos
Every leader and organization needs to find what works for them when it comes to AI use.
First, and most importantly, transparency about AI use. Make it clear that using AI tools isn’t cheating, it’s evolution, but do let people know what you use AI for. If you’re using it for research and then checking after, say so. Leaders need to know what tools are being used so that we can maintain quality standards and client expectations.
Next, treat AI literacy like any other professional skill. Your team should train on new AI tools the same way they would train on new design software. It’s part of staying current, not a threat to creativity.
Finally, make sure that your methodologies provide the guardrails for AI experimentation. Teams with solid foundations adapt to AI tools faster because they know what good work looks like, regardless of how it’s produced.
Right now, I’m on the fence about the future of AI (not the future of human creatives). What I do know is that, for now, the real divide is between humans who know how to use AI strategically and humans who don’t. In the same way remote work didn’t kill creativity (it just required different leadership skills), AI isn’t killing creative teams; it’s requiring different creative skills. That said, you still need people who can think strategically, understand brand implications, and make decisions that align with business objectives. AI can help with execution, but it can’t replace judgment.