AI-ready design system: how to build one that AI agents can actually use

May 15, 2026 · Dianne Alter

Your AI keeps making up components. You ask Claude to design a screen, and it ships a button that looks nothing like the one your teammate merged last week. Different padding, different radius, different naming. It's not a Claude problem. It's a design system problem, and the fix is structural.

We've been migrating client design systems to be AI-ready, and the same five symptoms keep showing up. This post is the playbook: how to spot them, why shadcn/ui is the right foundation, and the exact steps to migrate without breaking your production codebase.

What is an AI-ready design system?

An AI-ready design system is a component library structured so an AI agent can read it, understand when to use each component, and reproduce your brand without inventing new ones. It has one source of truth (the code), structured tokens, and machine-readable metadata that explains relationships between components.

The old definition (Figma library plus Storybook plus a Notion page plus a Slack thread of "use this button") doesn't survive contact with Claude Code. When you ask AI to build a destructive modal, it needs to know your destructive button exists and when to reach for it. If that knowledge only lives in your senior designer's head, the AI will invent something.

In practice, this looks like:

  • One source of truth in code, not three sources across Figma, Storybook, and Notion
  • Tokens as named variables (color-bg-destructive), not hex codes scattered across CSS
  • Component metadata that explains role, props, relationships, and AI hints
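
To make the token bullet concrete, here's a minimal sketch in TypeScript (names and values are illustrative, not a prescribed schema):

// tokens.ts: every color gets exactly one named home
export const tokens = {
  'color-bg-primary': '#2563eb',
  'color-bg-destructive': '#dc2626',
} as const;

// Components reference the name, never the raw hex:
// background: tokens['color-bg-destructive']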

The takeaway: an AI-ready design system isn't a different design system. It's the same idea, structured so a non-human reader can use it correctly on the first try.

AI-ready design system vs. traditional design system

A traditional design system is built for humans. Documentation lives in long-form pages. Component usage is implicit, captured in team conventions and pull request reviews. The AI-ready version moves those conventions into the code itself, as structured metadata that any agent can read before it generates anything.

AI-ready design system vs. a Figma library

A Figma library is a designer-facing artifact. It's great for designers. It does nothing for Claude Code, because Claude doesn't read Figma, it reads your repo. An AI-ready design system treats the code as the source of truth and the Figma library as a downstream view of it, not the other way around.


Five signs yours is past its sell-by date

If you hit three of these, it's time to rebuild.

  1. Your components live in four different places. Figma says one thing, the codebase says another, Storybook is a year behind, and the real rules are pinned in Notion.
  2. AI generates components you've never seen before. You ask for a page and Claude ships custom inputs and bespoke cards because it can't find yours.
  3. Your product looks dated. You're avoiding screenshots in pitch decks. The components feel like 2021.
  4. Tokens are scattered. Hex codes hardcoded in CSS, no naming convention, no relationship between bg-primary and the brand palette.
  5. New team members keep asking "which button do I use?" And the answer changes every week.

Three or more means the system isn't doing the job it was built to do. The cost of ignoring this compounds. Every new feature gets a slightly different button, a slightly different card, a slightly different empty state. You spend the next year refactoring instead of shipping.

Rebuilding sounds expensive. Done right, with AI doing the heavy lifting and a sibling package keeping production safe, it's the fastest design system project you've ever run.

If you do it right, you'll see benefits like:

  • AI gets it on the first try, not the tenth. Claude reaches for the right component because the rules are in the code, not in a Slack thread.
  • One source of truth, finally. Figma, Storybook, and the codebase agree because they all point at the same tokens and metadata.
  • Faster onboarding. New designers and engineers stop asking "which button do I use" because the answer is in meta.ts.

What it looks like in practice

Three small examples of the same idea, from work we've done with TDP clients.

Example 1: the destructive modal that wasn't

The problem: A B2B SaaS client asked Claude Code to build a delete-account confirmation modal. Claude shipped a red button styled inline with a hex code. The product already had a Button with a destructive variant. Claude didn't know.

The solution: We added a meta.ts to the destructive button variant with an AI hint: "Use for destructive actions in confirmation dialogs, never as a primary CTA on a marketing page." We re-ran the prompt without changing anything else.

The results:

  • Claude picked the correct component on the next attempt
  • Zero new hex codes introduced across the next ten generated screens
  • The same metadata pattern propagated to inputs, cards, and toasts
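
The hint itself was a few lines of metadata. A minimal sketch, assuming a file layout like ours (field names are a convention, not a published schema):

// packages/ui/src/button/meta.ts (sketch)
export const destructiveButtonMeta = {
  component: 'Button',
  variant: 'destructive',
  aiHints: [
    'Use for destructive actions in confirmation dialogs.',
    'Never use as a primary CTA on a marketing page.',
  ],
} as const;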

Example 2: the button audit

The problem: Another client had seven buttons. Nobody could explain why. New screens kept inventing an eighth.

The solution: We ran a one-line audit prompt: "Audit the current product, list every button variant in use, rank by frequency, and flag duplicates." Claude returned a ranked list and three clear duplicates. We collapsed seven into three semantic variants (primary, secondary, destructive) with documented use cases in meta.ts.

The results:

  • 7 button variants reduced to 3
  • One sentence per variant in metadata, enough for AI to pick the right one
  • The team stopped debating button naming in PR reviews
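
The per-variant sentences live in meta.ts, roughly like this (wording illustrative):

// meta.ts excerpt: one sentence per surviving variant
export const buttonVariantHints = {
  primary: 'Main call to action; at most one per view.',
  secondary: 'Supporting actions alongside a primary.',
  destructive: 'Irreversible actions inside confirmation dialogs.',
} as const;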

Example 3: tokens, not hex codes

The problem: A client's CSS had 40+ shades of grey, all hardcoded, all slightly different. AI-generated screens introduced a 41st on every run.

The solution: We generated a design.json from the existing styles, snapped the 40 greys to a 9-step scale, and added a rule to AGENTS.md: "Never write raw hex codes. Always reference tokens from design.json."

The results:

  • 40 greys collapsed to 9 semantic tokens
  • Zero new hex codes in the next 50 generated screens
  • The eng team adopted the same tokens for non-AI work because they were just better
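
For reference, the nine-step scale in design.json ended up looking roughly like this (values are illustrative, not the client's actual palette):

{
  "color": {
    "grey-100": "#f5f5f5",
    "grey-200": "#e5e5e5",
    "grey-300": "#d4d4d4",
    "grey-400": "#a3a3a3",
    "grey-500": "#737373",
    "grey-600": "#525252",
    "grey-700": "#404040",
    "grey-800": "#262626",
    "grey-900": "#171717"
  }
}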

How to migrate without breaking anything

You don't need a six-month project plan or a re-platform to start. You need a sibling package, a design.json, and a metadata convention. Here's the playbook we use with TDP clients.

1. Create a sibling package, not a replacement

Don't touch your current component library. Create a new branch and a sibling package next to the existing one. If your current library lives at packages/ui, create packages/ui-next. The name signals intent: this is what's coming next, and nothing in production depends on it yet.

git checkout -b shadcn-design-system
mkdir packages/ui-next

Then prompt Claude:

Create a sibling package at packages/ui-next. This is a new component library that we want to eventually switch to. Install shadcn/ui here with all base components. Don't touch packages/ui.

The sibling approach means you can iterate fast without risking the current product. When you're ready to migrate consumers, you do it component by component.
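
In practice, that migration is an import swap per consumer. A sketch, assuming hypothetical package names:

// Before: a consumer imports from the legacy package
import { Button } from '@acme/ui';

// After: the migrated consumer points at the sibling package
import { Button } from '@acme/ui-next';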

2. Give Claude your brand context with a design.json

Before generating a single component, point Claude at your brand. The cleanest way is a design.json file at the root of your package. The schema came out of Google's work on machine-readable design tokens. It gives the AI a structured rubric: colors, typography, spacing, radii, and how they relate.

If you don't have one yet, prompt Claude to infer it from your existing system: "Read our current component library, infer the tokens, brand colors, font stack, spacing scale, and corner radii. Output a design.json at packages/ui-next/design.json." Then commit it and tell every agent (Claude, plus anything else in your AGENTS.md) to read it before building anything.
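
There's no single canonical schema, but a trimmed design.json tends to look something like this (every name and value below is illustrative):

{
  "color": {
    "bg-primary": "#2563eb",
    "bg-destructive": "#dc2626",
    "text-muted": "#737373"
  },
  "typography": {
    "font-base": "Inter, system-ui, sans-serif",
    "size-body": "16px"
  },
  "spacing": { "sm": "8px", "md": "16px", "lg": "24px" },
  "radius": { "sm": "4px", "md": "8px" }
}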

3. Pull live docs with Context7

Claude's training data has a cutoff. shadcn/ui ships updates constantly. To make sure you're generating components against current APIs, install Context7, an MCP server that fetches live documentation for open source libraries on demand.
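
The Context7 README documents a one-line install for Claude Code (check the current docs before running; the package name may change):

claude mcp add context7 -- npx -y @upstash/context7-mcp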

Then prompt:

Use Context7 to pull current shadcn/ui and Storybook docs. Scaffold Storybook in packages/ui-next and start with a Button as the first story. Read design.json and apply our brand tokens.

This one prompt does a week of setup. Context7 keeps the docs current. The brand tokens load automatically. Storybook scaffolds itself.

4. Ship every component with a four-pillar meta.ts

Generating a button is easy. Making the button readable to AI is the part that matters. Every component in your new library should ship with a meta.ts file covering four pillars.

  • Component: atom, molecule, or organism classification, plus a one-line description. AI needs this to know the role the component plays.
  • Props and variants: boolean flags, size options, semantic variants. AI needs this to know what knobs to turn.
  • Relationships: which components this one pairs with (destructive button to destructive modal). AI needs this to pick the right partner component.
  • Tokens and AI hints: which tokens it consumes, plus written rules like "use for primary CTAs, never inside tables." AI needs this to pick the right component for the right context.

That last pillar is the magic. When you ask Claude for a destructive modal and your destructive button's meta.ts says "use for destructive actions in confirmation dialogs," Claude reaches for the right component on the first try instead of inventing one on the tenth.
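
A minimal sketch of a four-pillar meta.ts for the button (field names are our convention, not a published schema):

// packages/ui-next/src/button/meta.ts (sketch)
export const buttonMeta = {
  // Pillar 1: component role
  component: {
    name: 'Button',
    level: 'atom',
    description: 'Clickable action trigger for forms, dialogs, and CTAs.',
  },
  // Pillar 2: props and variants
  props: {
    variant: ['primary', 'secondary', 'destructive'],
    size: ['sm', 'md', 'lg'],
    disabled: 'boolean',
  },
  // Pillar 3: relationships
  relationships: {
    pairsWith: ['Dialog', 'Form', 'Toast'],
    notes: 'The destructive variant pairs with confirmation dialogs.',
  },
  // Pillar 4: tokens and AI hints
  tokens: ['color-bg-primary', 'color-bg-destructive', 'radius-md'],
  aiHints: [
    'Use for primary CTAs; never inside tables.',
    'Use variant="destructive" only in confirmation dialogs.',
  ],
} as const;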

To enforce the pattern, add a rule to AGENTS.md:

Every new component must ship a meta.ts matching the four-pillar schema (component, props/variants, relationships, tokens/AI hints). Run on every change.

5. Start with the most-used component

Don't start with whatever's easiest. Start with whatever your product uses most. Prompt Claude:

Audit the current product. Which components are used most often? Rank by frequency.

Buttons, inputs, and cards will almost always be at the top. Build those first, validate that the metadata pattern works end-to-end (Storybook story, meta.ts, AGENTS rule), then expand to the next ten components.

Done well, this is a one- or two-week project, not a quarter-long re-platform. The sibling package means production keeps shipping. The metadata convention means AI gets better with every component you add.


Tools and resources

Three tools we use on every AI-ready design system migration:

  • shadcn/ui as the baseline. Open source, unopinionated about brand, the most-documented component library Claude has ever seen.
  • Context7 as an MCP server, so Claude pulls live shadcn and Storybook docs instead of stale training data.
  • Storybook as the review surface, so every component ships with a story the team (and the AI) can inspect.

One more thing that matters more than the tools: talk to your engineers before you build anything. Schedule a 30-minute meeting, bring three things, and don't skip any of them.

  1. The audit. Five components you want to migrate first and why.
  2. A working demo. Storybook with a meta-driven button, brand tokens applied. Show, don't pitch.
  3. One question for them: what's the lowest-risk way to introduce these components into production?

That third question is the one that determines whether anything you build actually ships. Your engineers know your deploy pipeline, your codeowners, and the migration paths that won't break consumers. They'll give you a plan. Without it, your sibling package lives in a branch forever.

The pattern we've seen work at TDP: a designer or design engineer builds the new library in ui-next, prototypes the migration in Storybook for review, and the eng team handles the codemod on the consumer side. Designers own structure and tokens. Engineers own the migration. Nobody is blocked on the other.

If your team is spending more time fighting AI-generated drift than shipping features, that's exactly what we help with at TDP. We design with code, so your design system is AI-ready by default.


When a team spends more time fixing what the agent generated than shipping the next thing, the problem is almost never the agent. It's that the design system isn't talking back to it.

If you want the metadata schema, file structure, and skill setup we use with our clients, I'm sharing all of it inside the TDP community — plus monthly live working sessions where we build through this stuff together. Join the TDP community!


Dianne Alter
