Authoring Methodology
What Build It Guides Are
Build It Guides are beginner-first, verification-driven instructional guides. Each one transforms a working application into a step-structured learning system where every step:
- Introduces exactly one concept/change
- Explains intent and constraints
- Includes a checkpoint and failure recovery path
- Preserves architectural boundaries (e.g., Configuration vs Runtime vs later Persistence)
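The step contract above can be sketched as a small validator. The field names here are illustrative, not the project's actual schema; the point is that a step missing any contracted element is mechanically detectable.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One guide step: a single unit of change plus its instructional scaffolding.
    Field names are hypothetical; the real contract may differ."""
    title: str
    concept: str                # exactly one concept/change
    intent: str                 # why this step exists
    constraints: list[str]      # architectural boundaries it must respect
    checkpoint: str             # how the reader verifies success
    recovery: str               # what to do if the checkpoint fails
    prerequisites: list[str] = field(default_factory=list)

def validate_step(step: Step) -> list[str]:
    """Return a list of contract violations (empty means the step is complete)."""
    problems = []
    for name in ("title", "concept", "intent", "checkpoint", "recovery"):
        if not getattr(step, name).strip():
            problems.append(f"missing {name}")
    return problems

# A hypothetical step from the Configuration stage of the timer app:
step = Step(
    title="Add the Start button",
    concept="wiring a Tkinter Button to a callback",
    intent="let the user begin a timed session",
    constraints=["Runtime layer only", "no persistence"],
    checkpoint="clicking Start begins the countdown",
    recovery="re-check the command= binding on the Button",
)
print(validate_step(step))  # an empty list means the contract is satisfied
```

A validator like this is what makes the contract enforceable rather than aspirational: an incomplete step fails loudly instead of slipping through review.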
Why This Approach Is Distinctive
This is not "AI-generated code presented as-is." It is a structured production workflow that treats code as the executable reference and the guide as a validated instructional layer, with safeguards against drift, missing steps, and vague explanations.
Inputs
- A defined product goal (e.g., Academic Study Timer)
- A constrained scope (e.g., Configuration stage: no persistence, single window, Tkinter-only)
- A verification contract: each feature must be testable via explicit checklists and run behavior
Roles in the Collaboration
0ne29 (author/producer): Owns system design, constraints, stage boundaries, QA expectations, and the "step contract" (what every step must include and prove). Also runs the early-stage stress testing and iteration loop to intentionally converge on a reliable format and workflow.
ChatGPT (GPT-5.2): Contributes early-stage structuring—template logic, decomposition strategy, boundary enforcement, and "explain + verify + recover" scaffolding.
Claude (Opus 4.5): Executes high-volume synthesis under constraints—consistent step-by-step prose, aligned code snippets, and internal consistency across the guide.
Method (Pipeline)
1. Stage freeze & constraints: Define what is in scope and what is explicitly out of scope.
2. Atomic decomposition: Break the system into "one step = one unit of change," including dependencies and architectural placement.
3. Instruction-as-verification: Embed checkpoints and "if something breaks" recovery notes so the guide doubles as a QA harness.
4. End-to-end validation: Consolidate into a runnable reference and validate that runtime behavior matches the guide's claims.
5. Feature verification suite: Add feature-by-feature verification checklists to confirm expected behavior and guard against regression.
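The final stage's checklist could be driven by a tiny runner like the one below. The check name and tolerance are invented for illustration; a real suite would cover each feature the guide claims (single window, settings apply logic, and so on).

```python
import time

def check_timer_accuracy() -> bool:
    """Hypothetical check: a 0.2 s countdown should finish within a loose tolerance."""
    start = time.monotonic()
    deadline = start + 0.2
    while time.monotonic() < deadline:
        pass  # a real guide step would use Tkinter's after(), not a busy wait
    elapsed = time.monotonic() - start
    return abs(elapsed - 0.2) < 0.05

def run_suite(checks) -> dict[str, bool]:
    """Run each named check and collect pass/fail results."""
    return {name: fn() for name, fn in checks.items()}

results = run_suite({"timer accuracy": check_timer_accuracy})
for name, passed in results.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Keeping the suite as plain callables means a failed feature points directly at the guide step that introduced it.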
Quality Controls
These controls prevent "LLM fluff" and ensure consistency:
- Contracted step schema: Each step includes purpose, fit, code, explanation, why-it-matters, checkpoint, and recovery.
- Dependency tracking: Each step declares prerequisites, preventing silent leaps.
- Run-based validation: The final code is executed to confirm key behaviors (single window, timer accuracy, settings apply logic).
- Scope enforcement: Explicit "does not include" constraints are asserted and re-verified near completion.
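Two of these controls lend themselves to mechanical checks. A minimal sketch, assuming steps declare prerequisites by title and the Configuration stage forbids persistence imports (the step titles and forbidden-module list are hypothetical):

```python
def check_dependencies(steps):
    """Flag any step whose prerequisite has not appeared earlier in the guide.
    `steps` is an ordered list of (title, prerequisites) pairs."""
    seen, errors = set(), []
    for title, prereqs in steps:
        for p in prereqs:
            if p not in seen:
                errors.append(f"{title!r} depends on undefined step {p!r}")
        seen.add(title)
    return errors

# Persistence is out of scope in the Configuration stage, so these
# modules must not appear in the reference code yet.
FORBIDDEN = ("sqlite3", "shelve", "pickle")

def check_scope(source: str):
    """Flag imports of out-of-scope persistence modules in the reference code."""
    return [m for m in FORBIDDEN if f"import {m}" in source]

steps = [("Create the window", []), ("Add the timer label", ["Create the window"])]
print(check_dependencies(steps))        # [] when every prerequisite precedes its step
print(check_scope("import tkinter\n"))  # [] when no forbidden persistence import appears
```

Checks like these turn "no silent leaps" and "does not include" from review guidelines into assertions that can be re-run near completion.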
AI Collaboration Notes
Division of Labor
0ne29 (Human):
- Product definition, stage boundaries, instructional contract, acceptance criteria, verification requirements, final editorial decisions
- Early-stage stress testing: Aggressively tested candidate page formats and decomposition strategies against failure modes (drift, missing dependencies, vague explanations, scope leakage, inconsistent naming)
- Feedback loops: Used repeated critique cycles to refine constraints until the format became deterministic—i.e., the same rules consistently produce the same quality of output
- Prompt and constraint engineering: Translated observed failures into explicit rules (schema, dependency declaration, "stop and verify" checkpoints, recovery steps) to prevent recurrence
- Integration QA: Validated that guide text, code snippets, and runtime behavior all agree, and that the guide remains coherent when read linearly or referenced out of order
ChatGPT (GPT-5.2):
- Guide format design
- Decomposition strategy
- Guardrails against scope drift
- Verification framing
Claude (Opus 4.5):
- Consistent step-by-step narrative generation
- Alignment of prose to code units
- Uniform checkpoint/recovery formatting
Verification Standard
Guide assertions are backed by explicit verification checklists and execution of the reference code to confirm runtime behavior.
Note on Authorship
The core contribution is a repeatable pipeline that converts requirements into a verified, beginner-readable system with traceable rationale per change and per feature.
What This Demonstrates
- Systems thinking: Converting an application into a staged curriculum without leaking scope
- Production discipline: Specification, decomposition, QA checklists, and reproducibility
- Modern workflow: Using AI as a constrained co-author while maintaining human-owned standards and acceptance criteria