AI Can Build You a Website. It Can’t Build You Trust.


Anthropic just announced Claude Opus 4.7 can generate complete websites and presentations from a prompt. The press is focused on whether this disrupts Adobe and Figma. Venture capitalists valued the company at $800 billion — more than double its February valuation. Here’s the question nobody’s pressing on: what happens to the organizations using these tools?

I’ve spent 30 years watching mission-driven organizations — nonprofits, community health centers, public media — struggle with the gap between what they build online and what their audiences actually need. AI-generated design tools promise to close that gap by making website creation frictionless. The promise is speed. The risk is invisible.

When design becomes something you prompt for instead of something you build capability around, organizations lose more than they gain. And the communities they serve pay the price.

The Accessibility Crisis Hiding in Plain Sight

Most business owners building a website with AI don’t know to ask about accessibility. They don’t even know what they don’t know. A 2024 study found that both ChatGPT and Microsoft Copilot often produce HTML with accessibility violations — missing form labels, no focus indicators, and other failures under WCAG 2.1. The tools generate code that looks polished but excludes people who rely on screen readers or keyboard navigation.

AI models learn from existing websites. They’re generating inaccessible code because the web is still too often inaccessible.

When I audit sites for clients, I see the same patterns repeating: gorgeous designs that fail basic accessibility standards. Now we’re training AI on those same failures and asking it to generate new sites at scale.

Here’s what that looks like in practice. You run a nonprofit serving vulnerable populations. A board member who’s excited about AI pressures you to use it for your website redesign. The tool generates something beautiful in minutes. You launch. Three months later, a demand letter arrives from an attorney. That’s how you find out your site violates ADA standards.

By the time you know it’s a problem, it’s too late.

The best way to make a website accessible is to get it right in the code the first time. No overlay plug-in bolts it on afterward. No AI prompt currently guarantees it.
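Some of these failures are machine-detectable before launch. As a minimal sketch of what "checking the code first" can mean, here is a stdlib-only Python scan that flags two of the violations named above: images without alt text and form inputs without an associated label. This is an illustration, not a substitute for a real audit with tools like axe or WAVE, and the sample markup is hypothetical.

```python
from html.parser import HTMLParser

class A11yLint(HTMLParser):
    """Flag two common machine-detectable WCAG failures:
    images missing alt text, and form inputs with no
    associated <label for=...>. A real audit covers far more
    (focus indicators, contrast, keyboard traps, headings)."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.input_ids = []      # ids of form inputs seen
        self.label_targets = []  # ids referenced by <label for=...>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img missing alt text")
        if tag == "input" and attrs.get("type") not in ("hidden", "submit", "button"):
            self.input_ids.append(attrs.get("id"))
        if tag == "label" and "for" in attrs:
            self.label_targets.append(attrs["for"])

    def report(self):
        # Any input whose id is absent or never referenced by a label
        # has no programmatic label a screen reader can announce.
        for input_id in self.input_ids:
            if input_id is None or input_id not in self.label_targets:
                self.issues.append("input without an associated label")
        return self.issues

# The kind of markup an AI tool might emit: looks polished in a
# browser, but the image has no alt text and the input no label.
generated = '<img src="hero.jpg"><form><input type="email" placeholder="Email"></form>'

lint = A11yLint()
lint.feed(generated)
print(lint.report())
# → ['img missing alt text', 'input without an associated label']
```

The point isn't this particular script; it's that accessibility checks belong in the build pipeline, where failures surface before launch rather than in a demand letter.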

None Pizza, Left Beef

I’ve worked with clients who can take abstract information and visualize a website from a low-resolution mockup. I’ve also worked with clients who need to see exactly how every animation and transition will work before they can make a decision.

AI tools act like assistants who are eager to please. They don’t push back. They don’t impose the constraints a user experience specialist would impose. They deliver exactly what you ask for.

That’s the problem.

Think of ordering pizza through an app that takes every request literally. You work the interface to ask for "no sauce, easy on the toppings," and you get back a bare crust with a few pieces of beef on one side. (The "none pizza, left beef" meme.)

The system gave you what you asked for, which wasn’t what you needed.

Over decades of usability research from Nielsen Norman Group and the Baymard Institute, the UX community has identified specific markers of trust and functionality that drive users toward completing their goals. Decision makers unfamiliar with these patterns bring biases from their own industries.

I see this especially with clients who worked on or with legacy print publications. They expect the website to look like a magazine. You can build something gorgeous, and users still won’t figure out what to do next. They rage-click on elements that don’t respond. They land on a page and don’t realize they can scroll down. They back out.

An AI tool trained on websites that are elegant but ineffective will generate more elegant, ineffective websites.

The Capability You’re About to Lose

Every few years, a new tool promises to democratize web design. Dreamweaver, RapidWeaver, Wix, GoDaddy, WordPress — each one was supposed to destroy the web design industry. None of them did. AI design tools are another iteration of that pattern.

But there’s something different this time.

Those other tools had visible constraints. You could see the template options. You could see the limitations. With AI, the interface is a conversation that says yes to everything. When someone realizes six months later that what they built isn’t working, the typical response is to scrap it and rebuild with the next model. AI tools evolve every three to six months. The models get better. Why not start over?

Here’s why. Websites are relationship infrastructure.

If you’re doing real relationship-building on a website — transactions, account management, recurring visits — changing the site too often fractures those relationships. When customers feel like they can’t find anything every time they visit, they stop visiting. They get their needs met elsewhere. You end up maintaining parallel versions of information instead of having one canonical place where people can count on finding what they need.

For mission-driven organizations, this matters even more. Your website carries credibility to funders, donors, and the communities you serve. When you can generate a website quickly but can’t sustain or evolve it strategically, you’re building on sand.

The Trust Problem You Can Sense

Most people who don’t work in technology don’t have a favorable view of AI right now. I hear this in user research studies. When regular people get a whiff of AI involvement, they express decreased trust and decreased feelings of safety with the brand. Research shows that AI chatbots using abstract message framing come across as less credible than human agents. For nonprofits, that weakens trust and lowers donation intention.

If you turn over your customer service to an AI chatbot that hasn’t been trained to meet your customers’ exact needs, don’t be surprised when repeat sales fall off. People drift to organizations still using humans in obvious ways. Your community can tell. And once that trust is gone, the path back requires real honesty.

You can recover from a misstep. It requires being consistent across every way you show up. It requires admitting mistakes and communicating that you’ve changed course. But you’re starting from a deficit you didn’t have to create.

The Split That’s Coming

Twenty years ago, the folks behind Borders invested heavily in rapid delivery services for groceries and other products beyond books. They were pioneering services we now take for granted with Instacart and Uber Eats. But they were 20 years ahead of the infrastructure needed to make it work — reliable cell networks, powerful smartphones, networks of independent delivery people. Borders’ founder tried to pivot too hard and too quickly. The bookstore chain, under new leadership, leaned hard into expanding its physical footprint, just as publishing and entertainment moved most of their offerings online. Amazon, the upstart, benefited from their overreach.

We’re about to see the same pattern with AI design tools.

Some organizations will go too hard, too fast. They’ll embrace AI so completely that the wheels come off. They’ll generate websites that look professional but fail accessibility standards. They’ll deploy chatbots that erode trust with their communities. They’ll move faster in the wrong direction.

Other organizations will find AI in the mix — it’s impossible to avoid — but they’ll have strong governance and leadership. They’ll let their people do what they’re truly best at, aided by AI. They’ll use these tools to prototype and test, not to replace strategic thinking.

Those organizations will remain steady. They’ll continue to own trust with their communities.

The difference between these two paths comes down to one question most organizations aren’t asking yet:

Are you using this tool to push further toward your goal and overdeliver for your audience, or are you using it because it’s easier than doing the hard work?

If you’re committed to asking the right questions — about accessibility, about security, about what your community actually needs — you’re positioned to take advantage of the technology. If you’re just speaking your dream into a prompt and hoping the output serves people you’ve never met, you’re about to do a lot of fooling around and finding out.

What This Means for You

AI design tools will be useful for strategists and product owners who want to develop sophisticated prototypes to refine. They’ll speed up the design and development phases. They’ll help visualize ideas faster. But they won’t replace the human who understands the gap between organizational intention and audience reality. They won’t ensure what gets built actually serves the people who need to use it.

The question isn’t whether AI will change web design. It will.

The question is whether organizations will use it to accelerate toward better decisions, or hustle toward decisions that look good in the boardroom but fail in the field.

If you’re running a mission-driven organization, your website isn’t a sand castle you rebuild every six months. It’s the foundation of how you show up for your community. It’s how people access services, find resources, and decide whether to trust you.

Before you hand that over to a tool that says yes to everything, ask yourself: who’s responsible for making sure this actually works for the people who need it? Because by the time you find out it doesn’t, it’s already too late.


If you’re trying to close the gap between what your website is and what your audience needs it to be, Website Reality Check walks you through the exact audit process I use with clients.