The Builder's Curse: Making AI-Built Systems Understandable
The Idea
When an AI builds a personalized system for a user (a knowledge base, say), a gap opens up: the design logic is self-evident to the AI that produced it, but the user has to spend real cognitive effort to internalize it - even though the system is theirs.
This is the "builder's curse" applied to AI-assisted tools.
The Problem
- Multiple concepts introduced at once (structure, conventions, workflows)
- Conventions aren't self-evident (why underscores? why this folder structure?)
- The "why" behind decisions isn't visible in the structure itself
- The user has to reverse-engineer the AI's thinking
Possible Solutions
Progressive disclosure - Start minimal, introduce complexity only when needed. System grows with usage.
Guided onboarding - Ask what user wants to capture, build only what's relevant. Not everyone needs every feature.
Self-documenting structure - Each component explains itself in plain language. The system teaches itself (see the sketch after this list).
Template marketplace - Different starting points for different user types ("Creator KB", "Founder KB", "Researcher KB").
Conversation-first, structure-hidden - User just talks naturally, AI organizes behind the scenes. Surface structure only when asked.
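To make the self-documenting idea concrete, here is a minimal Python sketch (the folder plan, file names, and `_about.md` convention are all hypothetical, not a prescribed implementation): the builder drops a plain-language explainer into every folder it creates, so the "why" travels with the structure instead of living only in the AI's head.

```python
from pathlib import Path

# Hypothetical folder plan: each entry pairs a path with the plain-language
# rationale that would otherwise stay invisible to the user.
KB_PLAN = {
    "inbox": "Unsorted captures land here first, so nothing is lost while you decide where it belongs.",
    "notes/permanent": "Distilled ideas in your own words, kept separate from raw captures.",
    "projects": "One folder per active effort; finished projects move out so this stays scannable.",
}

def scaffold(root: str) -> None:
    """Create the knowledge base and leave an _about.md explainer in every folder."""
    for rel_path, rationale in KB_PLAN.items():
        folder = Path(root) / rel_path
        folder.mkdir(parents=True, exist_ok=True)
        # The structure documents itself: each folder states its own purpose.
        (folder / "_about.md").write_text(f"# {folder.name}\n\n{rationale}\n")

if __name__ == "__main__":
    scaffold("my-kb")
```

The design choice being illustrated: the rationale is stored next to the thing it explains, so the user never has to reverse-engineer the convention.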
The Deeper Question
Should the user understand the system, or should the system just work invisibly?
- Visible structure: the user has control and can extend or modify it, but has to learn it first
- Invisible structure: zero friction, but the user depends on the AI and has less agency
Maybe the answer is: invisible by default, visible on demand.
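One way to read "invisible by default, visible on demand" is as an API contract. A toy Python sketch under that assumption (the class, the keyword rule standing in for AI routing, and the method names are all hypothetical): `capture()` never exposes structure, while `explain()` surfaces it only when asked.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Hypothetical interface: frictionless capture, structure revealed only on request."""
    _entries: dict[str, list[str]] = field(default_factory=dict)

    def capture(self, text: str) -> None:
        # Invisible by default: the user just talks. A real system would have the
        # AI choose the category; a trivial keyword rule stands in for that here.
        topic = "projects" if "project" in text.lower() else "inbox"
        self._entries.setdefault(topic, []).append(text)

    def explain(self) -> str:
        # Visible on demand: surface the structure and how items were routed.
        lines = [f"{topic}: {len(items)} item(s)" for topic, items in self._entries.items()]
        return "Current structure:\n" + "\n".join(lines)

kb = KnowledgeBase()
kb.capture("Kick off the dashboard project")
kb.capture("Interesting quote about the builder's curse")
print(kb.explain())
```

The point of the contract: the user keeps agency (the structure is always inspectable) without paying the learning cost up front.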
Why This Matters
This applies to any AI-built personalized system:
- Knowledge bases
- Workflow automations
- Code scaffolding
- Personal dashboards
The question of "how do I help users catch up with my thought process?" is a core product design problem for AI tools.
Related
- LLMs Struggle with Importance Detection and Nuance - AI calibrating output to user needs