Claude Certified Architect Exam: Domain Breakdown, Study Strategy & Free Resources
Anthropic's Claude Certified Architect (Foundations) certification is one of the few vendor credentials that directly test production-grade AI system design — not just prompt writing. It validates your ability to architect, configure, and debug Claude-based systems across five technical domains. This guide breaks down everything you need to know before sitting the exam.
What the Exam Actually Certifies
The certification is aimed at developers and architects who build real Claude integrations — not end users. The exam tests design decisions: when to use multi-agent orchestration versus a single agent, how to structure CLAUDE.md files and hooks, which MCP transport to choose for a given deployment, and how to handle context window limits in long-running tasks. Passing requires understanding tradeoffs, not just memorizing API parameters.
Exam Format at a Glance
- Scenario-based multiple-choice questions (typically 60–75 questions)
- Five real-world scenario types that mirror production edge cases
- No live coding — questions test architectural decisions and configuration knowledge
- Passing requires understanding tradeoffs between similar-looking design choices
- All questions map to one of the five exam domains
Domain 1 — Agentic Architecture & Orchestration (27%)
The heaviest domain by weight. This section tests your understanding of the full Claude agentic loop: how stop_reason values (end_turn, tool_use, max_tokens, stop_sequence) signal different agent states, how to design multi-agent coordination patterns, how to decompose tasks across context window boundaries, and how the Agent SDK's event hooks (pre-tool-use, stop, subagent-stop) give harnesses control over agent behavior.
- Agentic loop mechanics and stop_reason handling
- Multi-agent patterns: orchestrator-worker, peer delegation, parallel subagents
- Session and context continuity across window boundaries
- Agent SDK hooks: pre-tool-use, stop, and subagent-stop
- Task decomposition and incremental progress strategies
- Escalation patterns and human-in-the-loop checkpoints
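The agentic loop and stop_reason branching above can be sketched in a few lines. This is a minimal illustration, not the Agent SDK itself: `fake_model` and `run_tool` are invented stand-ins so the control flow runs offline, where a real harness would call the Anthropic Messages API.

```python
def fake_model(messages):
    """Stub model: requests a tool once, then finishes with end_turn."""
    if not any(m["role"] == "tool_result" for m in messages):
        return {"stop_reason": "tool_use",
                "tool_call": {"name": "lookup", "input": {"q": "ttl"}}}
    return {"stop_reason": "end_turn", "text": "done"}

def run_tool(call):
    """Stub tool executor."""
    return f"result for {call['input']['q']}"

def agent_loop(user_prompt, max_turns=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        resp = fake_model(messages)
        if resp["stop_reason"] == "tool_use":
            # Execute the requested tool and feed the result back.
            messages.append({"role": "tool_result",
                             "content": run_tool(resp["tool_call"])})
        elif resp["stop_reason"] == "max_tokens":
            # Output was truncated mid-thought: continue, don't treat as done.
            messages.append({"role": "user", "content": "Continue."})
        else:
            # end_turn or stop_sequence: the model considers the task complete.
            return resp.get("text", "")
    raise RuntimeError("turn budget exhausted without end_turn")
```

The key exam-relevant point the sketch captures: tool_use and max_tokens are signals to keep the loop running with new input, while end_turn and stop_sequence are terminal states.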
Domain 2 — Tool Design & MCP Integration (20%)
This domain covers the full MCP ecosystem — from primitive types to transport mechanisms. The three MCP primitives — Tools (model-controlled), Resources (app-controlled), and Prompts (user-controlled) — each represents a different ownership model, and the exam distinguishes between them carefully. You'll also need to know when to use stdio transport versus StreamableHTTP, how MCP sampling works, and how to design tool schemas and error responses that guide the model effectively.
- JSON Schema tool design: required fields, descriptions, enum constraints
- Error response design that helps the model self-correct
- MCP Tools vs Resources vs Prompts — primitive ownership model
- stdio vs StreamableHTTP transport selection
- MCP sampling, roots, and the MCP Inspector for debugging
- Tool interface design for agentic reliability
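The schema and error-design bullets above can be made concrete with a short sketch. The tool name, fields, and error helper here are invented for illustration; the shape follows the JSON Schema conventions the Messages API uses for tool definitions.

```python
# A tool schema with required fields, descriptions, and enum constraints.
search_tool = {
    "name": "search_tickets",
    "description": "Search support tickets by keyword. Returns at most "
                   "`limit` matches, newest first.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Keywords to match."},
            "status": {"type": "string",
                       "enum": ["open", "closed", "pending"],
                       "description": "Restrict results to one status."},
            "limit": {"type": "integer", "minimum": 1, "maximum": 50},
        },
        "required": ["query"],
    },
}

def error_result(tool_use_id, message, hint):
    # An error payload that tells the model *how* to fix its call,
    # not just that it failed -- this is what lets an agent self-correct.
    return {
        "type": "tool_result",
        "tool_use_id": tool_use_id,
        "is_error": True,
        "content": f"{message} Hint: {hint}",
    }

err = error_result("toolu_123", "Unknown status 'archived'.",
                   "status must be one of: open, closed, pending.")
```

Note the design choice in `error_result`: an actionable hint ("must be one of: …") gives the model enough information to retry correctly, whereas a bare failure message tends to produce repeated identical calls.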
Domain 3 — Claude Code Configuration & Workflows (20%)
Claude Code has a rich configuration system that goes beyond basic usage, and this domain tests all of it. CLAUDE.md files can exist at the project level, at the user level, and as path-specific rules — the exam tests your understanding of which takes precedence. Hooks (pre-tool-use, post-tool-use, stop, subagent-stop) allow external automation to intercept agent actions. Skills are reusable markdown instruction files triggered by description matching. The exam also covers plan mode, CI/CD integration, and the allowed-tools configuration.
- CLAUDE.md hierarchy: project, user, path-specific rules
- Hook types and their execution timing
- Skills: SKILL.md frontmatter, description-based triggering, allowed-tools
- Plan mode and its role in reducing irreversible actions
- CI/CD integration patterns for automated Claude Code workflows
- Subagent configuration and isolation
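A hooks configuration might look something like the sketch below — a settings.json-style fragment, with event names in the PascalCase form the settings schema uses. The script paths are hypothetical, and the exact file location and schema should be checked against the current Claude Code documentation.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/check-command.sh" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "./scripts/verify-done.sh" }
        ]
      }
    ]
  }
}
```

Here the pre-tool-use hook runs a validation script before any Bash tool call (a failing hook can block the action), and the stop hook runs a verification script when the agent believes it has finished — the two interception points this domain tests most heavily.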
Domain 4 — Prompt Engineering & Structured Output (18%)
This domain bridges prompting technique with output reliability. Key topics include XML tag usage for separating instructions from context, chain-of-thought prompting to improve reasoning accuracy, role assignment in system prompts, prefill for steering response format, and the tool_choice parameter for forcing specific tool invocations. The structured output section covers JSON Schema enforcement, validation loops, and self-critique patterns.
- XML tags for context delimitation and multi-section prompts
- Chain-of-thought prompting and when to use it
- Role assignment and persona framing in system prompts
- Prefill technique for format and style steering
- tool_choice: auto, any, specific tool forcing
- Validation loops and self-critique for output reliability
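Two of the steering techniques above — prefill and tool_choice — can be sketched as plain request dicts. No API call is made, and the model id is a placeholder; the two techniques are shown separately because they steer the model in different ways.

```python
# 1. Prefill: seeding the assistant turn with "{" pushes the model to
#    continue emitting JSON rather than conversational prose.
prefill_request = {
    "model": "claude-model-placeholder",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Extract the invoice fields as JSON."},
        {"role": "assistant", "content": "{"},
    ],
}

# 2. tool_choice: "auto" lets the model decide whether to call a tool,
#    "any" requires some tool call, and {"type": "tool", "name": ...}
#    forces one specific tool.
forced_tool_request = {
    "model": "claude-model-placeholder",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Record invoice #42, total 19.99."},
    ],
    "tools": [{
        "name": "record_invoice",
        "description": "Store extracted invoice fields.",
        "input_schema": {
            "type": "object",
            "properties": {"total": {"type": "number"}},
            "required": ["total"],
        },
    }],
    "tool_choice": {"type": "tool", "name": "record_invoice"},
}
```

The exam tends to probe exactly this distinction: prefill shapes free-text output, while a forced tool_choice guarantees a schema-conforming tool call — similar-looking levers with different reliability guarantees.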
Domain 5 — Context Management & Reliability (15%)
The smallest domain by weight, but often the deciding factor in production reliability. Topics include token counting, the two prompt caching TTL tiers (the default 5-minute TTL and the extended 1-hour TTL), RAG architecture fundamentals (chunking strategies, embedding retrieval), rolling window context management, and summarization strategies for long sessions. The exam also tests your awareness of position effects — how recency bias affects model attention in very long contexts.
- Token counting APIs and context budget management
- Prompt caching: cache_control placement, 5-minute vs 1-hour TTL
- RAG: chunking, embedding, retrieval, and context injection
- Rolling window and summarization strategies
- Position effects in long contexts (primacy vs recency)
- Reliability patterns: graceful degradation and fallback design
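Prompt caching placement — the subject of the cache_control bullet above — can be sketched as a system-block list, built as a plain dict with no API call. The reference text is a placeholder, and the exact TTL syntax for the 1-hour tier should be checked against current documentation.

```python
# Imagine a large, stable reference document here (placeholder content).
LONG_REFERENCE = "Section 1: glossary... Section 2: API parameters..." * 10

system_blocks = [
    # Small, stable instructions come first.
    {"type": "text", "text": "You answer questions about the reference."},
    {
        "type": "text",
        "text": LONG_REFERENCE,
        # Marks the end of the cacheable prefix. The default TTL is
        # 5 minutes; a 1-hour TTL can be requested where supported.
        "cache_control": {"type": "ephemeral"},
    },
]
```

Caching matches on an exact prefix up to the cache_control marker, so any per-user or per-request content placed *before* the marked block defeats cache hits — variable content belongs after the cached prefix, typically in the messages themselves.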
What the Questions Actually Look Like
The exam uses scenario-based questions that describe a realistic production situation and ask you to identify the best design decision or configuration. Examples include:
- An agent is looping after receiving a rate-limit error from a tool. Which stop_reason should the harness check, and what fallback should it apply?
- A CLAUDE.md file at the project root and a path-specific rule at /src/ define different tool permissions. Which takes precedence?
- An MCP server needs to stream large log files to multiple clients without maintaining per-client session state. Which transport should you use?
- A prompt caching setup is not achieving cache hits across requests from different users. What is the most likely configuration error?
Recommended Study Path
- Complete the 8 official Anthropic Skilljar courses in sequence — especially 'Building with the Claude API', the two MCP courses, and 'Claude Code in Action'
- Work through the concept library on this platform (150+ concepts organized by domain with spaced repetition)
- Review the 91-term glossary to ensure you know the precise definitions of MCP primitives, Agent SDK hooks, and caching parameters
- Practice with scenario-based questions, prioritizing your weakest domains first
- Take full mock exams under timed conditions and review every incorrect answer
Free Resources Available Now
Every resource on this platform is free to access. The concept library at /learn covers all 150+ exam concepts across the five domains with spaced repetition tracking. The /glossary has 91 terms with precise definitions. The exam simulator at /exam offers full mock exams with scenario-based questions. The /courses page links to all 8 official Anthropic Skilljar courses. If you are starting from scratch, begin with the study plan at /study-plan which provides a structured day-by-day roadmap.
Preparing for the Claude Certified Architect Exam?
Explore 150+ exam concepts, 91 glossary terms, and full mock exams — all free.