Our Take on AI: Skeptical But Paying Close Attention

Last week, in one of our daily standup calls, two of our team members raised some pointed concerns about AI tools: security issues, environmental impact, and the questionable ethics of how training data was acquired. It was a good conversation. The kind of conversation I suspect a lot of agencies aren’t having, because it’s easier to either go all-in on the hype or pretend the whole thing will blow over.

We landed somewhere more uncomfortable: acknowledging that these tools have real problems and that our clients still need us to understand how they work.

The Uncomfortable Truth About How We Got Here

Here’s a personal belief I shared with the team: the way AI training data was acquired was, at best, unethical, and at worst, straight-up illegal. Authors and creators deserve compensation for their work, and the major AI companies just… took it. Ed Zitron’s been documenting this kind of corporate behavior brilliantly on his podcast Better Offline. If you haven’t listened, consider it homework. His concept of the “Rot Economy,” alongside Cory Doctorow’s “enshittification,” captures something important about where much of mainstream tech’s been heading over the past decade.

So why would we use these tools at all?

Reality Doesn’t Care About Our Principles

Here’s the thing: people are using LLMs as search engines. They weren’t designed for that purpose, but that’s what’s happening. Journalists, analysts, investors—they’re asking ChatGPT and Claude questions about companies instead of going to the source.

For our clients, especially mission-driven organizations that depend on accurate representation, this matters. If an AI tool tells someone the wrong information about your organization, that’s a problem. If an AI tool skips over your organization’s solutions or impact because it can’t find that information in its training data, that’s an existential crisis.

It doesn’t matter that a large language model shouldn’t be trusted for factual claims. People trust it anyway.

We can’t fix the AI industry’s ethics. But we can help our clients make sure accurate information appears when these tools are queried. That’s a practical (and increasingly essential) part of our Content Operations service, not an endorsement of the technology.
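
To make that concrete, here’s a rough sketch of the kind of audit this involves: ask a model how it would describe an organization, then have a human compare the answer against the facts. This is purely illustrative; the OpenAI Node SDK, the model name, and the prompt wording here are assumptions for demonstration, not our production tooling.

```typescript
// Illustrative sketch: ask a model how it describes a client organization,
// so a human can compare the answer against reality.
// Assumes the official OpenAI Node SDK (npm "openai") and an OPENAI_API_KEY
// in the environment; the model name and prompt are placeholders.
import OpenAI from "openai";

const client = new OpenAI();

async function auditDescription(orgName: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "user",
        content: `In two or three sentences, what does ${orgName} do, and what is it known for?`,
      },
    ],
  });
  // The answer goes to a human reviewer, never straight to the client.
  return response.choices[0]?.message?.content ?? "(no answer)";
}

auditDescription("Example Mission-Driven Org").then(console.log);
```

Run on a schedule, something like this surfaces stale or missing facts before a journalist or funder stumbles onto them.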

Where AI Helps Us Serve Clients Better

Despite our skepticism about the industry, we’ve found genuine value in specific applications—places where AI makes our work faster or more accessible without compromising quality.

For our Complete Website Transformation engagements, we now use AI to generate fully interactive wireframes that clients can play with inside their own web browsers. This replaces the mid-project step where we used to move from scribbled boxes and lines to flat mockups of real content in design tools like Photoshop.

This has meaningfully reduced the time it takes to get from kickoff to client review, which means we can iterate faster. Clients see tangible progress sooner. All of our deliverables still get refined by humans who understand the client’s specific audience and goals. But AI handles that initial scaffolding well.

We’ve also started using text-to-voice AI generators inside some clients’ content projects. This one matters to me personally, because it’s about accessibility. A blog post that only exists as text excludes people who can’t read it—whether that’s due to vision impairment, learning differences, or just being stuck in a car.

Yes, most browsers now have built-in text-to-speech. But have you listened to it? The experience is rough. AI-generated voice content sounds far more natural, which means people actually use it. More accessible content means a wider audience, which is exactly what mission-driven organizations need.
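
If you’re curious what “built-in text-to-speech” refers to, here’s a minimal sketch using the browser’s Web Speech API, the baseline we’re comparing AI-generated audio against. It runs in most modern browsers, though the available voices (and their quality) vary by platform, which is exactly the problem.

```typescript
// Minimal sketch of the browser's built-in text-to-speech (Web Speech API).
// Voice selection and quality depend entirely on the user's browser and OS.
function readAloud(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = "en-US"; // language hint for voice selection
  utterance.rate = 1.0;     // normal speaking speed
  window.speechSynthesis.speak(utterance);
}

readAloud("This is what the built-in voice sounds like on your device.");
```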

We’ve also added an AI-enhanced proposal generator tool to our agency website. Instead of waiting days or weeks to connect with a member of our team about an upcoming project, our system can assess the request and even generate an initial proposal. In internal testing, it matched the anonymized proposals and projects we benchmarked it against about 98% of the time. It means you can get a real price from us at one in the morning, and we can schedule a kickoff meeting within days instead of ping-ponging on scope memos. (We’re still going to have our humans validate the whole thing before sending an actual statement of work on a new project.)

What Our Internal Policy Says

We spent time this month developing an AI ethics policy for our agency. Not because we’re trying to look responsible—but because we needed clarity for ourselves and our team. Here’s the core of it:

AI is a tool, not a teammate. It can help with initial drafts, research summaries, code reviews. But it never replaces human judgment, and it’s never the final word on anything we deliver to clients.

Every factual claim gets verified. AI outputs are starting points, not finished products. If we can’t trace something back to a real source, it doesn’t go out the door.

We respect team members who have concerns. Not everyone in our organization is comfortable using these tools. Some have legitimate ethical objections—environmental impact, labor implications, the whole training data mess. When someone doesn’t want to personally use a specific AI tool, we either find an alternative solution or we route that work to someone else on our team.

Client data always stays private. We don’t upload sensitive information to AI systems without explicit consent. Period.

The “Neutral Stance” Paradox

We’re in an interesting spot right now. Some of our clients are firmly anti-AI—they don’t want us using it, they don’t want to talk about it, they want nothing to do with the whole thing. Others are asking us to help them go all-in.

One client’s senior leadership went from prohibiting AI usage to demanding weekly AI strategy memos in less than a year. One of our frequent agency partners pivoted their entire workflow from one-on-one consulting to “vibe coding” based on their library of intellectual property.

I don’t think I’ve seen this much rapid change since companies started thinking about how to use the Web in the late 1990s.

We’re taking what I’d call a publicly neutral stance while collectively staying informed. We’re not cheerleaders for this technology. We’re also not pretending it doesn’t exist. Our job is to help clients navigate their specific situations, not to push an ideology.

What We Won’t Do

We won’t tell you AI is going to transform your business. We won’t generate your strategy with a chatbot. We won’t produce content that could mislead your audience. And we definitely won’t upload your proprietary information to systems that might train on it.

We also won’t pretend the ethical concerns don’t exist. The environmental impact is real. The labor implications are real. The questions about intellectual property are unresolved and serious. When clients ask, we’re honest about all of that.

What We Will Do

We’ll help you understand how your organization appears in AI-powered search results—and whether that’s accurate. We’ll use AI tools where they genuinely help—faster wireframes, more accessible content, smoother research—while keeping humans in charge of everything that matters. We’ll verify every claim, maintain your voice, and never pass off machine output as human expertise.

We’ll keep having uncomfortable conversations internally—about what’s ethical, what’s practical, and how to navigate the gap between them. And we hope that everything we’re learning about the new business landscape informs and improves the results we get for our clients.