# Figma vs AI-Native Canvas: Design Ideation
*Moved from the root IDEATION_CHAT.md; historical ideation discussion.*
## Core Question
Before GenAI coding assistants, Figma became very popular among UX teams as a design tool, but its output didn't translate directly into code. Now that AI coding assistants like Codex CLI exist, is it better to provide a Figma MCP server to the assistant, or to create a new kind of canvas that is generated from the live state of a web or mobile app, similar to a Jupyter Notebook, so that the canvas is produced by the actual application?
## Analysis
You're basically asking whether the AI development loop should be:
- Design-first → Figma → AI converts to code, or
- State-first → app-generated UI canvas → AI edits the real UI directly
This is a key architectural question for AI-assisted development tools.
Short answer: the second model (state-driven canvas generated by the app) is likely the stronger long-term approach, while Figma integration is a transitional bridge.
## Why Figma Became Dominant
Tools like Figma exploded because they solved problems for humans collaborating on UI, not for machines generating code.
Advantages for UX teams:

- Multiplayer design collaboration
- Component libraries and design systems
- Rapid prototyping
- Non-engineer participation
But Figma's core abstraction is visual layout, not application state or behavior.
That's why the pipeline of Figma design → developer handoff → hand-written re-implementation always had friction. Even with plugins, Figma never became a source of truth for production UI.
## Why AI Coding Assistants Change the Equation
With tools like OpenAI Codex or GitHub Copilot, the AI is directly modifying the codebase.
The AI works best when it has:

- actual code
- runtime state
- component hierarchy
- user interactions
- application logic
A static design artifact like a Figma file provides none of these.
So if you attach a Figma MCP server to the AI, it can read:

- frames
- layout
- tokens
- assets
…but it still lacks the actual runtime model of the app.
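To make the gap concrete, compare two illustrative shapes (assumptions for this sketch, not the actual Figma MCP schema): what a static design artifact can describe, versus the runtime context the AI also needs.

```ts
// Illustrative only: not the actual Figma MCP schema.
// What a static design artifact can describe:
interface DesignNode {
  name: string;                // e.g. "Checkout / Primary Button"
  frame: { x: number; y: number; width: number; height: number };
  fills: string[];             // e.g. ["#1E88E5"]
  children: DesignNode[];
}

// What the AI also needs, and what a static artifact cannot provide:
interface RuntimeContext {
  componentSource: string;          // the actual component code
  props: Record<string, unknown>;   // resolved props at render time
  state: Record<string, unknown>;   // live application state
  handlers: string[];               // e.g. ["onClick -> submitOrder()"]
}
```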
## The "State-Driven Canvas" Model
What you described—a notebook-like canvas generated by application state—is much closer to how AI agents work best.
Think something like this hypothetical state cell paired with a real component (the shape and names are illustrative):
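```ts
// Hypothetical state cell: the canvas is generated from this object and
// from real component code, not from a static mockup. Names are illustrative.
interface CartState {
  items: { sku: string; qty: number; priceCents: number }[];
  coupon?: string;
}

const stateCell: CartState = {
  items: [{ sku: "shirt-m", qty: 2, priceCents: 2400 }],
  coupon: "SAVE10",
};

// The canvas re-renders this component whenever the state cell changes.
function CartSummary({ items, coupon }: CartState): string {
  const totalCents = items.reduce((sum, i) => sum + i.qty * i.priceCents, 0);
  return `${items.length} line item(s), total ${totalCents}¢` +
    (coupon ? `, coupon ${coupon}` : "");
}
```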
Rendered as a live UI canvas.
This is conceptually similar to:

- Jupyter Notebook (stateful cells)
- Observable notebooks
- Storybook component playgrounds
But extended so the AI edits the state and code simultaneously.
The pipeline becomes: the AI edits code or state → the app re-renders the canvas → the AI inspects the rendered result → repeat. This closes the loop.
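A minimal sketch of that loop, assuming hypothetical editCode, renderCanvas, and inspect helpers (none of these are a real API):

```ts
// Hypothetical helpers; the signatures are assumptions for illustration.
declare function editCode(goal: string): Promise<void>;
declare function renderCanvas(): Promise<unknown>;
declare function inspect(snapshot: unknown, goal: string): Promise<boolean>;

async function iterate(goal: string): Promise<unknown> {
  for (let attempt = 0; attempt < 5; attempt++) {
    await editCode(goal);                  // AI edits code or state cells
    const snapshot = await renderCanvas(); // app re-renders the live canvas
    if (await inspect(snapshot, goal)) {   // AI checks the rendered result
      return snapshot;                     // goal met on the real UI
    }
  }
  throw new Error("goal not reached");
}
```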
## Why the Canvas Model is Powerful for AI
A runtime canvas gives the AI information Figma never had:

- Real component graph: the actual tree of components as rendered
- Data flow: which state feeds which component
- Interaction traces: what the app does in response to clicks, input, and navigation
- Layout from actual code: not a mock layout, but real rendering
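One way the canvas could expose all of this is a render snapshot. The shape below is a hypothetical illustration, not a real protocol:

```ts
// A hypothetical snapshot the runtime canvas could hand to the AI.
// Every field name here is illustrative.
interface ComponentNode {
  component: string;                // e.g. "CartSummary"
  props: Record<string, unknown>;   // resolved props at render time
  stateSources: string[];           // e.g. ["useCart", "useCoupon"]
  events: string[];                 // e.g. ["onRemoveItem"]
  layout: { x: number; y: number; width: number; height: number }; // real rendered geometry
  children: ComponentNode[];
}
```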
## What Figma MCP is Still Good For
A Figma MCP server still has value for:
- Legacy design pipelines: UX teams still work in Figma
- Design system extraction: AI can read tokens, colors, typography, spacing, and components (see the sketch after this list)
- Asset ingestion: icons, images, vector shapes
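For design system extraction in particular, a minimal sketch of turning a token payload into CSS custom properties might look like this (the payload shape is a guess, not the real Figma MCP output):

```ts
// Hypothetical token payload; the real Figma MCP output may differ.
interface TokenPayload {
  colors: Record<string, string>;   // e.g. { "brand/primary": "#1E88E5" }
  spacing: Record<string, number>;  // e.g. { sm: 8, md: 16 }
}

// Emit CSS custom properties that the codebase can consume directly.
function tokensToCss(tokens: TokenPayload): string {
  const lines: string[] = [];
  for (const [name, value] of Object.entries(tokens.colors)) {
    lines.push(`  --color-${name.replace(/\//g, "-")}: ${value};`);
  }
  for (const [name, px] of Object.entries(tokens.spacing)) {
    lines.push(`  --space-${name}: ${px}px;`);
  }
  return `:root {\n${lines.join("\n")}\n}`;
}
```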
So the architecture might look like this: the Figma MCP server supplies tokens, components, and assets; the AI assistant edits the codebase; and the runtime canvas renders the result and feeds it back to the AI.

Figma becomes input, not control plane.
## The AI-Native UI Dev Environment
The next generation of tooling might combine ideas from:

- Figma (visual canvas)
- Jupyter Notebook (cell/state execution)
- Visual Studio Code (code editing)
Example structure:
```
Page
├─ Cell 1: Data query
├─ Cell 2: Component definition
├─ Cell 3: UI render
└─ Cell 4: Interaction test
```
The AI edits any cell. Canvas updates instantly.
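One possible cell model for such an environment, assuming notebook-style dependencies between cells (all names illustrative):

```ts
// Hypothetical cell model; kinds mirror the example structure above.
type CellKind = "data" | "component" | "render" | "test";

interface Cell {
  id: string;
  kind: CellKind;
  source: string;      // code that the AI (or a human) edits
  dependsOn: string[]; // upstream cell ids, like a notebook DAG
}

// A page is an ordered list of cells; re-running one cell re-renders
// the canvas for every cell downstream of it.
interface Page {
  title: string;
  cells: Cell[];
}
```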
## Why Figma May Decline in the AI Era
The core issue:
- Figma describes pixels
- AI development prefers describing state and components
AI works better with declarative state and component definitions than with absolute pixel coordinates and frames, as the contrast below illustrates.
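As a rough contrast (both snippets are hypothetical):

```ts
// State- and component-centric (illustrative): data and behavior are explicit.
const saveButton = {
  component: "Button",
  props: { label: "Save", disabled: false, onClick: "submitForm" },
};

// Pixel-centric, Figma-style (illustrative): geometry without behavior.
const saveButtonFrame = {
  x: 312, y: 480, width: 96, height: 32,
  fill: "#1E88E5", cornerRadius: 6, text: "Save",
};
```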
## Recommendation
Do both, but prioritize the canvas.
Architecture:
- Figma MCP → read tokens/components/assets
- Codebase → React / SwiftUI / Flutter / etc.
- Runtime Canvas → generated UI from state (ProductGraph's focus)