XIOPro Production Blueprint v5.0¶
Part 9 — Project Templates¶
1. Purpose¶
This part defines the standard project templates that XIOPro uses to bootstrap new initiatives. Each template encodes a lifecycle pipeline, contextual agent definitions, stage gates, and resource allocation rules.
Status: Draft — Template definitions specified for all 6 project types (the five in Section 3 plus the Composite structure in Section 5B).
2. Template Lifecycle Pipeline¶
Every project in XIOPro follows a lifecycle pipeline. Templates define which stages apply and what each stage requires:
- Idea — Capture raw thought, tag domain, assign priority
- Research — Gather context, prior art, constraints
- Brainstorm — Explore options, generate alternatives, evaluate trade-offs
- Manifest — Crystallize scope, success criteria, resource estimate
- Blueprint — Full technical design (architecture, ODM, agents, infra)
- Work Plan — Milestones, sprints, ticket breakdown
- Test Plan — Verification strategy, acceptance criteria, coverage targets
- Review — System Review gate (see Part 11)
- Execute — Implementation, monitoring, iteration
Not all templates use all stages. Lightweight templates (e.g., Content Creation) may skip Blueprint and collapse Research + Brainstorm.
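A template's stage selection can be validated against the canonical pipeline order. A minimal sketch, assuming stage identifiers derived from the names above (the actual registry identifiers are not defined here):

```python
# Canonical lifecycle pipeline order (Section 2).
PIPELINE = ["idea", "research", "brainstorm", "manifest", "blueprint",
            "work_plan", "test_plan", "review", "execute"]

def validate_stage_subset(stages):
    """Check that a template's stages all exist in the pipeline and
    appear in pipeline order (skipping stages is allowed)."""
    indices = []
    for s in stages:
        if s not in PIPELINE:
            raise ValueError(f"unknown stage: {s}")
        indices.append(PIPELINE.index(s))
    return indices == sorted(indices)

# Content Creation keeps only four stages, in order:
assert validate_stage_subset(["idea", "research", "brainstorm", "execute"])
# An out-of-order selection is rejected:
assert not validate_stage_subset(["brainstorm", "idea", "execute"])
```

This keeps lightweight templates honest: they may skip stages, but never reorder them.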
3. Template Types¶
3.1 IT Project¶
Full lifecycle. All 9 stages. Requires a Blueprint (equivalent in depth to Parts 2-8), a formal System Review, and a Work Plan.
3.2 Marketing¶
Stages: Idea, Research, Brainstorm, Manifest, Work Plan, Execute. No Blueprint or formal System Review.
3.3 Content Creation¶
Stages: Idea, Research, Brainstorm, Execute. Lightweight. Minimal stage gates.
3.4 Knowledge Expert¶
Stages: Idea, Research, Manifest, Review, Execute. Focused on knowledge capture, curation, and publishing.
3.5 Research Evaluation¶
Stages: Idea Capture, Specification, Applicability Assessment, Design (conditional), Review. Lightweight evaluation pipeline for incoming research about new technologies, techniques, or external innovations. Produces a verdict: integrate, watch, or dismiss.
4. Contextual Agent Definitions per Template¶
Each template defines which agents are available at each stage and what roles they play. This section will specify:
- Agent roster per template type
- Role assignments per stage (e.g., Researcher at Research stage, Architect at Blueprint stage)
- Escalation paths
- Swarm topology preferences (see Part 10)
5. Stage Gates and Approval Flows¶
Stage gates define the conditions that must be met before a project can advance to the next stage:
- Automated gates: Schema validation, cost threshold checks, test coverage minimums
- Human gates: Shai approval for budget commits, scope changes, go/no-go decisions
- Agent gates: System Review (Part 11) for IT Projects
5.1 Step-Level Review and Verification¶
Every stage AND every step within a stage requires review and verification/evaluation before progression.
This is not optional. It applies at two levels:
Stage level: Before advancing from one stage to the next (e.g., Research → Brainstorm), the stage output must be reviewed and evaluated. The evaluation record is written to the Bus before the gate can be cleared.
Step level: Within each stage, individual steps (sub-tasks, research outputs, draft artifacts) must also be reviewed and verified before the next step begins. No step is trusted by default.
Verification is the responsibility of:
- The PO for stage-level gates
- The executing agent for step-level verification (with escalation to PO if uncertain)
- A human reviewer for L4/L5-flagged decisions
This principle aligns with the "Review and Testing Everywhere" constraint in Part 1, Section 4.14.
Approval flows will be defined per template, including:
- Who can approve
- Timeout and escalation rules
- Override conditions
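Timeout and escalation behavior could be modeled as a walk up an escalation chain. A sketch under stated assumptions: the chain order and the 4-hour window are illustrative placeholders, not values defined by this document:

```python
from datetime import datetime, timedelta

# Illustrative escalation chain; actual approvers are defined per template.
ESCALATION_CHAIN = ["executing_agent", "po", "human"]

def current_approver(requested_at, now, timeout=timedelta(hours=4)):
    """Each approver gets one timeout window before the pending
    approval escalates to the next level in the chain."""
    elapsed = now - requested_at
    level = min(int(elapsed / timeout), len(ESCALATION_CHAIN) - 1)
    return ESCALATION_CHAIN[level]

t0 = datetime(2025, 1, 1, 9, 0)
assert current_approver(t0, t0 + timedelta(hours=1)) == "executing_agent"
assert current_approver(t0, t0 + timedelta(hours=5)) == "po"
assert current_approver(t0, t0 + timedelta(hours=20)) == "human"
```

Override conditions would sit on top of this: an authorized role short-circuits the chain regardless of elapsed time.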
5A. Template Builder¶
The Template Builder is a researcher agent responsible for constructing new project templates when no existing template fits the target domain.
Role¶
The Template Builder is not a standing agent — it is spawned by the PO when a new template type is required. It operates as a specialist researcher with deep domain focus.
T1P Standard¶
The Template Builder follows T1P standards throughout. This means:
- Every template it produces is calibrated to the top 1% of practitioners in the target domain
- Stage definitions, step sequences, review criteria, and agent roles are drawn from the best known practices in that domain — not generic defaults
- The builder researches domain-specific lifecycle models, certification frameworks, regulatory requirements, and professional standards before defining stages
- Templates are not improvised — they are evidence-backed and peer-reviewable
Process¶
- PO identifies that a new domain requires a template (e.g., Legal Contract, Hardware Design, Clinical Trial)
- PO spawns Template Builder agent with domain specification
- Template Builder researches domain lifecycle models, standards, and practitioner workflows
- Template Builder produces a draft template: stages, steps, review criteria, agent roster, resource defaults
- Template Builder submits draft to PO for review
- PO reviews against T1P standard, escalates to human if domain expertise verification is needed
- Approved template is registered in the template registry and becomes available for future projects
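The registration step could be as simple as a keyed registry with a domain lookup. A minimal sketch; the function names and the `domains` tag are illustrative, not the registry's actual API:

```python
# Minimal template registry: approved templates keyed by id.
_registry = {}

def register_template(template):
    """Register an approved template; reject duplicate ids so an
    existing template is never silently overwritten."""
    tid = template["id"]
    if tid in _registry:
        raise ValueError(f"template already registered: {tid}")
    _registry[tid] = template
    return tid

def find_by_domain(domain):
    """Return templates tagged for a domain. An empty result is the
    trigger condition for the PO to spawn a Template Builder."""
    return [t for t in _registry.values() if domain in t.get("domains", [])]

register_template({"id": "tpl_legal_contract", "domains": ["legal"]})
assert find_by_domain("legal")[0]["id"] == "tpl_legal_contract"
assert find_by_domain("hardware") == []   # no fit: spawn a Template Builder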
Outputs¶
- Template YAML: stages, steps, gates, agent roles, resource defaults
- Research memo: sources, domain references, rationale for design choices
- T1P calibration note: how the template reflects top 1% standards for the domain
5B. Composite Projects and Sub-Projects¶
A project in XIOPro can be either a standalone project or a composite project containing sub-projects. This enables large initiatives to be decomposed into independently manageable units while maintaining coordination at the parent level.
Structure¶
Composite Project (parent)
|-- Sub-Project A (own template, own PO, own sprint plan)
|-- Sub-Project B (own template, own PO, own sprint plan)
|-- Sub-Project C (own template, own PO, own sprint plan)
|-- Sub-Project D (own template, own PO, own sprint plan)
Rules¶
- A sub-project is a full project entity with its own template, PO, sprint plan, and ticket queue
- Each sub-project can use a different template type (e.g., IT Project, Marketing, Content Creation, Knowledge Expert)
- The parent project has a Master PO that coordinates across all sub-projects: resolves cross-sub-project dependencies, manages shared resources, and reports aggregate status to GO
- Sub-projects inherit the parent project's budget ceiling but have their own allocation within it
- A sub-project cannot have its own sub-projects (one level of nesting only, to prevent complexity explosion)
- The parent project's lifecycle_phase reflects the furthest-behind sub-project (the chain moves at the speed of the slowest link)
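The furthest-behind rule reduces to taking the minimum stage index across sub-projects. A sketch using the lifecycle stage names from the dependency rules later in this section:

```python
STAGES = ["idea_research", "brainstorm", "manifest", "blueprint",
          "work_plan", "test_plan", "review", "execute"]

def parent_phase(sub_phases):
    """Parent lifecycle_phase = stage of the furthest-behind sub-project."""
    return min(sub_phases, key=STAGES.index)

# Three sub-projects; the parent reports the slowest one's stage:
assert parent_phase(["execute", "blueprint", "review"]) == "blueprint"
```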
ODM Change¶
The projects table requires a parent_project_id column (nullable UUID FK to projects.id). See Part 3, Section 4.2 for the schema addition.
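Assuming a SQL-backed projects table, the addition could look like the following sqlite3 sketch. The authoritative schema lives in Part 3, Section 4.2; the table and column names here simply mirror the text, and the TEXT ids stand in for UUIDs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE projects (id TEXT PRIMARY KEY, name TEXT)")

# Nullable self-referencing FK: NULL for standalone projects,
# set to the parent's id for sub-projects.
conn.execute(
    "ALTER TABLE projects ADD COLUMN parent_project_id TEXT "
    "REFERENCES projects(id)"
)

conn.execute("INSERT INTO projects (id, name) VALUES ('p1', 'MVP1')")
conn.execute(
    "INSERT INTO projects (id, name, parent_project_id) "
    "VALUES ('p2', 'MVP1-Platform', 'p1')"
)
parent = conn.execute(
    "SELECT parent_project_id FROM projects WHERE id = 'p2'"
).fetchone()[0]
assert parent == "p1"
```

The one-level-of-nesting rule from above would be enforced in application logic (reject inserts whose parent already has a parent), since a plain FK cannot express it.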
Example: MVP1 as Composite Project¶
project: MVP1 (Paperclip)
template: Composite
master_po: PO-MVP1
sub_projects:
  - name: MVP1-Platform
    template: IT Project
    po: PO-MVP1-Platform
    scope: "ISO 19650 engine, Stripe integration, API, deployment"
  - name: MVP1-Marketing
    template: Marketing
    po: PO-MVP1-Marketing
    scope: "Landing page, positioning, launch campaign, analytics"
  - name: MVP1-Knowledge
    template: Knowledge Expert
    po: PO-MVP1-Knowledge
    scope: "ISO 19650 domain research, compliance rules, BIM standards"
  - name: MVP1-Content
    template: Content Creation
    po: PO-MVP1-Content
    scope: "Documentation, tutorials, onboarding guides, help center"
Sub-Project Dependencies¶
Sub-projects can run in parallel or in sequence. Each sub-project has an optional depends_on constraint:
depends_on:
  project_id: "MVP1-Platform"   # which project/sub-project to wait for
  stage: "execute"              # which lifecycle stage must complete first
- If depends_on is null/empty → the sub-project runs in parallel from day 1
- If set → the sub-project waits until the referenced project reaches the specified stage
- Stage = lifecycle stage: idea_research, brainstorm, manifest, blueprint, work_plan, test_plan, review, execute
- Step = item within a stage (sub-project dependencies are at the stage level, not step level)
Example for MVP1:
sub_projects:
  - name: MVP1-Platform
    depends_on: null            # starts immediately
  - name: MVP1-Knowledge
    depends_on: null            # parallel with Platform
  - name: MVP1-Marketing
    depends_on:
      project_id: MVP1-Platform
      stage: execute            # Marketing starts when Platform reaches execute
  - name: MVP1-Content
    depends_on:
      project_id: MVP1-Knowledge
      stage: blueprint          # Content starts when Knowledge has a blueprint
The Master PO monitors these dependencies and auto-triggers sub-project activation when gates are met.
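The gate check the Master PO performs reduces to comparing each dependency's required stage against the referenced project's current progress. A sketch using the MVP1 example (function names are illustrative):

```python
STAGES = ["idea_research", "brainstorm", "manifest", "blueprint",
          "work_plan", "test_plan", "review", "execute"]

def reached(current, required):
    """True once the current stage is at or past the required stage."""
    return STAGES.index(current) >= STAGES.index(required)

def activatable(sub_projects, progress):
    """Return names of sub-projects whose depends_on gate is met."""
    ready = []
    for sp in sub_projects:
        dep = sp.get("depends_on")
        if dep is None or reached(progress[dep["project_id"]], dep["stage"]):
            ready.append(sp["name"])
    return ready

subs = [
    {"name": "MVP1-Platform", "depends_on": None},
    {"name": "MVP1-Marketing",
     "depends_on": {"project_id": "MVP1-Platform", "stage": "execute"}},
    {"name": "MVP1-Content",
     "depends_on": {"project_id": "MVP1-Knowledge", "stage": "blueprint"}},
]
progress = {"MVP1-Platform": "review", "MVP1-Knowledge": "blueprint"}
# Platform has no dependency; Content's gate is met; Marketing still waits:
assert activatable(subs, progress) == ["MVP1-Platform", "MVP1-Content"]
```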
Master PO Responsibilities¶
The Master PO does not execute work directly. It coordinates:
- Dependency resolution: Monitors stage completion across sub-projects, triggers dependent sub-projects when gates are met
- Resource arbitration: If two sub-projects need the same specialist, the Master PO decides the allocation between them
- Aggregate reporting: Master PO produces a composite sprint summary for GO and IO
- Gate coordination: Some stage gates require all sub-projects to reach a checkpoint before the parent can advance
6. Resource Allocation per Stage¶
Each template defines default resource budgets per stage:
- Token budget (LLM cost ceiling)
- Time budget (wall-clock expectation)
- Agent count limits
- Compute constraints (container limits, storage quotas)
These are defaults that can be overridden at the project level.
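The override semantics can be a simple per-stage merge where project-level values win field by field over template defaults. A sketch; the budget field names are illustrative:

```python
def effective_budget(template_defaults, project_overrides):
    """Template defaults overridden field-by-field at the project level.
    Stages absent from the overrides keep their defaults untouched."""
    merged = {}
    for stage, defaults in template_defaults.items():
        merged[stage] = {**defaults, **project_overrides.get(stage, {})}
    return merged

defaults = {"blueprint": {"tokens": 500_000, "hours": 16, "max_agents": 3}}
overrides = {"blueprint": {"tokens": 750_000}}
result = effective_budget(defaults, overrides)
# Only the token ceiling was overridden; the rest stays at defaults:
assert result["blueprint"] == {"tokens": 750_000, "hours": 16, "max_agents": 3}
```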
7. Template Definitions¶
This section contains the complete, structured definition of each project template. Every template follows the lifecycle pipeline (Section 2) with template-specific stage selections, steps, gates, agent roles, contextual agents, and duration estimates.
Conventions¶
- Entry gate: Condition that must be true before a stage can begin.
- Exit gate: Condition that must be verified (by PO or designated reviewer) before advancing to the next stage.
- Verification: Step-level check performed by the executing agent, with escalation to PO if uncertain (per Section 5.1).
- Agent roles: Reference the unified role bundle model from Part 4. Specialists are spawned by the PO; Workers are spawned by Specialists.
- Contextual agents: Long-lived agents that persist across sprints. They maintain domain memory and are not terminated between stages.
- Durations: Wall-clock estimates assuming a single active agent per step. Parallelization reduces elapsed time but not total compute.
7.1 Template 1: IT Project¶
The IT Project template is the most comprehensive template in XIOPro. It covers the full lifecycle from idea to production monitoring and uses all 9 pipeline stages (with Idea and Research combined into a single Idea Research stage), plus the 6 Execute sub-stages.
template:
name: "IT Project"
id: "tpl_it_project"
version: "1.0.0"
description: >
Full-lifecycle software/infrastructure project. Covers ideation through
production deployment and monitoring. All 9 pipeline stages active.
Execute stage decomposes into 6 sub-stages.
total_estimated_duration: "40-120 hours"
contextual_agents:
- name: "Architecture Agent"
role: architect
persistence: cross_sprint
description: >
Maintains architectural decisions, dependency maps, and system diagrams.
Consulted at Blueprint, Work Plan, and Execute stages. Owns ADR registry.
- name: "Domain Expert Agent"
role: domain_expert
persistence: cross_sprint
description: >
Holds domain-specific knowledge relevant to the project (e.g., ISO 19650,
payment processing, BIM standards). Feeds Research, Brainstorm, and Blueprint.
- name: "DevOps Agent"
role: devops
persistence: cross_sprint
description: >
Manages CI/CD pipelines, container orchestration, deployment runbooks,
and monitoring configuration. Active from Work Plan through Execute.
stages:
- name: "idea_research"
display_name: "Idea Research"
description: >
Gather context on the project idea: market landscape, prior art,
technical feasibility, competitive analysis, and constraint mapping.
entry_gate: "Project idea captured in ticket with domain tag and priority"
exit_gate: "Research report reviewed and approved by PO"
responsible: PO
estimated_duration: "2-6 hours"
t1p_formal_documents:
required:
- "Business Case Brief (1-page opportunity assessment)"
- "Competitive Analysis Summary"
recommended:
- "Opportunity Canvas (Lean UX format)"
- "Pitch document (Basecamp Shape Up format)"
t1p_standards:
- "ISO 12207 §6.4.1 — Business/Mission Analysis"
- "TOGAF Preliminary Phase — Architecture Vision"
- "SAFe Portfolio Kanban — Funnel stage"
- "Lean Startup — Problem/Solution Fit"
t1p_verification:
- "Idea aligns with at least one company OKR or strategic theme"
- "Competitive landscape assessed (even briefly)"
- "Named sponsor willing to invest discovery time identified"
- "Technical feasibility sanity-checked by an engineering lead"
steps:
- name: "market_analysis"
description: "Analyze market landscape, competitors, and positioning opportunities"
verification: "Report covers competitors, market size, target segment, and positioning"
agent_role: researcher
- name: "prior_art_survey"
description: "Survey existing solutions, open-source projects, and academic references"
verification: "Minimum 5 prior art references documented with relevance assessment"
agent_role: researcher
- name: "feasibility_assessment"
description: "Evaluate technical feasibility within current infrastructure and budget"
verification: "Feasibility score (high/medium/low) with justification and risk list"
agent_role: researcher
- name: "constraint_mapping"
description: "Identify legal, regulatory, budget, timeline, and infrastructure constraints"
verification: "Constraint register created with severity and mitigation options"
agent_role: researcher
- name: "research_synthesis"
description: "Consolidate findings into a structured research report"
verification: "PO reviews and approves research report; report published to Bus"
agent_role: PO
- name: "brainstorm"
display_name: "Brainstorm"
description: >
Explore solution options, generate alternatives, evaluate trade-offs.
Produce a shortlist of viable approaches with pros/cons.
entry_gate: "Research report approved"
exit_gate: "Solution shortlist reviewed and top approach selected by PO"
responsible: PO
estimated_duration: "2-4 hours"
t1p_prototyping_stage: "Level 0 (Napkin Sketch) — communicate the idea; Level 1 (Wireframe) — validate information architecture"
t1p_formal_documents:
required:
- "PRD draft (initial problem statement and goals)"
- "Trade-off matrix with weighted scoring"
recommended:
- "User Personas (3-5 primary)"
- "Jobs-to-be-Done (JTBD) analysis"
- "BRD (Business Requirements Document) for enterprise projects"
t1p_standards:
- "IEEE 29148:2018 — Requirements engineering"
- "ISO 12207 §6.4.2 — Stakeholder Needs and Requirements Definition"
- "Lean UX — Hypothesis-driven requirements"
- "T1P prioritization: Kano (discovery) → RICE (scoring) → MoSCoW (MVP scoping)"
t1p_verification:
- "At least 3 distinct solution approaches generated"
- "Each option has architecture sketch, cost estimate, and timeline"
- "Selected approach documented with rationale and alternatives rejected"
steps:
- name: "option_generation"
description: "Generate at least 3 distinct solution approaches"
verification: "Each option has description, architecture sketch, cost estimate, and timeline"
agent_role: architect
- name: "trade_off_analysis"
description: "Compare options across cost, complexity, time-to-market, scalability, and risk"
verification: "Trade-off matrix completed with weighted scoring"
agent_role: analyst
- name: "stakeholder_alignment"
description: "Present shortlist to PO (and human if L4+), gather feedback, select approach"
verification: "Selected approach documented with rationale; decision logged to Bus"
agent_role: PO
- name: "manifest"
display_name: "Manifest"
description: >
Crystallize scope, success criteria, resource estimate, and acceptance
criteria for the selected approach.
entry_gate: "Solution approach selected and documented"
exit_gate: "Manifest document approved by PO; budget ceiling confirmed"
responsible: PO
estimated_duration: "2-4 hours"
t1p_formal_documents:
required:
- "PRD (Product Requirements Document) — signed off by PM, Engineering Lead, UX Lead"
- "User Stories with acceptance criteria (INVEST criteria)"
- "Success Metrics Document (measurable, time-bound KPIs/OKRs)"
- "Scope Boundary Document (explicit in-scope, out-of-scope, deferred)"
optional:
- "SRS (Software Requirements Specification) — for regulated/complex systems"
- "Market Requirements Document (MRD) — for new market entry"
t1p_standards:
- "IEEE 29148:2018 — Requirements engineering"
- "ISO 12207 §6.4.2 — Stakeholder Needs and Requirements Definition"
- "SAFe PI Planning inputs"
t1p_verification:
- "All user stories have acceptance criteria and meet Definition of Ready"
- "PRD reviewed and signed off by PM, Engineering Lead, and UX Lead"
- "Prioritization framework applied (MoSCoW or RICE)"
- "Success metrics are measurable and time-bound"
- "At least 5 user interviews or equivalent research conducted"
- "Scope boundary is explicit — what is NOT being built is documented"
steps:
- name: "scope_definition"
description: "Define in-scope and out-of-scope boundaries precisely"
verification: "Scope statement with explicit exclusions; no ambiguous items remaining"
agent_role: PO
- name: "success_criteria"
description: "Define measurable success criteria and acceptance thresholds"
verification: "Each criterion is testable, measurable, and has a target value"
agent_role: PO
- name: "resource_estimation"
description: "Estimate token budget, compute hours, agent count, and calendar time"
verification: "Estimates broken down per stage; total within project budget ceiling"
agent_role: analyst
- name: "risk_register"
description: "Document top risks with probability, impact, and mitigation plan"
verification: "Minimum 5 risks identified; each has owner and mitigation action"
agent_role: analyst
- name: "blueprint"
display_name: "Blueprint"
description: >
Full technical design: architecture, data model, API contracts,
agent topology, infrastructure requirements, and security model.
entry_gate: "Manifest approved with confirmed budget"
exit_gate: "Blueprint reviewed by Architecture Agent and approved by PO"
responsible: architect
estimated_duration: "4-16 hours"
t1p_prototyping_stage: "Level 2 (Mockup) + Level 3 (Interactive Prototype) — validate design before coding; Level 4 (Technical Spike) if feasibility is uncertain"
t1p_formal_documents:
required:
- "Design Document / RFC (Google-style: Context, Goals/Non-Goals, Design, Alternatives, Cross-cutting Concerns)"
- "Architecture Decision Records (ADR) — one per significant decision, format: Status/Context/Decision/Consequences/Alternatives"
- "API Specification (OpenAPI 3.1 or GraphQL SDL)"
- "C4 Architecture Diagrams (minimum: Context + Container levels)"
required_for_complex:
- "Data Model / ERD"
- "Threat Model (STRIDE: Spoofing, Tampering, Repudiation, Info Disclosure, DoS, Elevation of Privilege)"
- "Infrastructure Architecture Diagram (Terraform/Pulumi/CDK templates)"
ux_deliverables:
- "Wireframes (low-fidelity) — Level 1 prototype"
- "Mockups (high-fidelity with real content) — Level 2 prototype"
- "Interactive Prototype (Figma/equivalent) — Level 3 prototype"
- "Design system tokens / component spec"
recommended:
- "arc42 documentation (for complex systems)"
- "Sequence diagrams for key flows"
- "Capacity planning document"
- "Dependency analysis"
t1p_standards:
architecture:
- "C4 Model (Simon Brown) — Context, Container, Component, Code diagrams"
- "arc42 — 12-section architecture documentation template"
- "TOGAF ADM — Architecture Development Method (enterprise-scale)"
- "ISO/IEC/IEEE 42010:2022 — Architecture description"
api:
- "OpenAPI Specification 3.1"
- "Google API Design Guide"
- "Microsoft REST API Guidelines"
security:
- "OWASP Threat Modeling Process"
- "STRIDE threat model"
- "OWASP ASVS (Application Security Verification Standard)"
infrastructure:
- "AWS Well-Architected Framework (or GCP/Azure equivalent)"
- "12-Factor App methodology"
t1p_verification:
- "Design doc reviewed by at least 2 senior engineers outside immediate team"
- "Cross-cutting concerns addressed: security, privacy, observability, cost"
- "ADR for every 'why not X?' decision (alternatives considered and rejected)"
- "API design reviewed separately from implementation (Stripe model)"
- "Threat model identifies top 5 risks with mitigations"
- "Infrastructure cost estimated and approved by budget owner"
- "UX prototype tested with at least 3-5 representative users"
t1p_ai_assistance:
- "AI-assisted: Design doc drafting, ADR generation, API spec scaffolding"
- "Human-led: Architecture decisions, security review, trade-off judgments"
- "Human-verified: All AI-generated architecture artifacts reviewed before approval"
steps:
- name: "architecture_design"
description: "Design system architecture: components, boundaries, communication patterns"
verification: "Architecture diagram exists; all components have defined interfaces"
agent_role: architect
- name: "data_model_design"
description: "Design ODM extensions, database schema, migration plan"
verification: "Schema validated against ODM (Part 3); migration scripts drafted"
agent_role: architect
- name: "api_contract_design"
description: "Define API endpoints, request/response schemas, auth requirements"
verification: "OpenAPI spec or equivalent contract document complete"
agent_role: architect
- name: "agent_topology"
description: "Define which agents are needed, their roles, communication patterns"
verification: "Agent roster with role assignments per stage; swarm topology selected"
agent_role: architect
- name: "infrastructure_plan"
description: "Define container requirements, networking, storage, monitoring"
verification: "Infra plan reviewed by DevOps Agent; cost projection within budget"
agent_role: devops
- name: "security_review"
description: "Identify attack surfaces, auth flows, secrets management, data protection"
verification: "Security checklist completed; no critical findings unmitigated"
agent_role: security_specialist
- name: "blueprint_approval"
description: "Architecture Agent and PO review complete blueprint"
verification: "Sign-off recorded on Bus; blueprint version tagged"
agent_role: PO
- name: "work_plan"
display_name: "Work Plan"
description: >
Break blueprint into milestones, sprints, and tickets.
Define dependencies, parallelization opportunities, and critical path.
entry_gate: "Blueprint approved"
exit_gate: "Work plan with all tickets created and prioritized; PO approved"
responsible: PO
estimated_duration: "2-4 hours"
t1p_formal_documents:
required:
- "Sprint Backlog (epics → stories → tasks)"
- "Definition of Done (DoD) — code reviewed, tests green, coverage threshold met, no P0/P1 bugs, observability configured"
- "Definition of Ready (DoR) — INVEST criteria, acceptance criteria clear, estimates complete"
recommended:
- "Risk Register (top 5 risks with probability, impact, mitigation, owner)"
- "Release Plan (milestones with target dates)"
- "Dependency Map (cross-team, external)"
- "RACI Matrix — for cross-team projects"
optional:
- "Work Breakdown Structure (formal WBS)"
- "Gantt chart — for fixed-deadline projects"
t1p_standards:
- "Scrum Guide (2020) — Sprint Planning event"
- "SAFe PI Planning — for scaled environments"
- "Basecamp Shape Up — Shaping + Betting + 6-week cycles"
- "PMI PMBOK — WBS, Risk Register, Communications Plan"
t1p_verification:
- "Every story meets Definition of Ready before entering a sprint"
- "All stories estimated; no story exceeds 1 sprint (decompose if so)"
- "Top 5 risks have identified mitigations"
- "Dependencies acknowledged by dependent teams"
- "Team commits to sprint (not assigned by management)"
steps:
- name: "milestone_definition"
description: "Define major milestones with deliverables and target dates"
verification: "Milestones are sequential, each has clear deliverable and date"
agent_role: PO
- name: "sprint_planning"
description: "Decompose milestones into sprints; assign capacity per sprint"
verification: "Sprint backlog created; no sprint exceeds capacity ceiling"
agent_role: PO
- name: "ticket_breakdown"
description: "Create individual tickets with acceptance criteria, estimates, dependencies"
verification: "Every ticket has: description, AC, estimate, dependency list, assignable role"
agent_role: PO
- name: "critical_path_analysis"
description: "Identify critical path and parallelization opportunities"
verification: "Critical path documented; parallel tracks identified with resource needs"
agent_role: analyst
- name: "test_plan"
display_name: "Test Plan"
description: >
Define verification strategy: unit tests, integration tests, E2E tests,
performance benchmarks, security scans, and acceptance test procedures.
entry_gate: "Work plan approved with tickets created"
exit_gate: "Test plan reviewed and approved by PO; coverage targets set"
responsible: PO
estimated_duration: "2-6 hours"
t1p_testing_pyramid:
distribution: "70% unit / 20% integration / 10% E2E"
unit_tests:
target_ratio: "~70% of total test suite"
characteristics: "Pure logic, fast, isolated; run on every commit"
coverage_target: "≥80% line coverage for new code"
integration_tests:
target_ratio: "~20% of total test suite"
characteristics: "API contract tests, service interaction tests, database integration"
run_frequency: "Every PR (target: minutes)"
e2e_tests:
target_ratio: "~10% of total test suite"
characteristics: "Critical user journeys only; run on staging/pre-prod"
run_frequency: "Every merge to main (smoke: < 10 min); full suite nightly or on RC"
performance_tests:
run_frequency: "Weekly or on-demand"
gate: "Performance within defined SLOs (response time, throughput)"
security_tests:
sast: "Continuous (every push)"
dast: "Periodic (per release candidate)"
pen_test: "Major releases and regulated contexts"
t1p_formal_documents:
required:
- "Test Strategy document (levels, tools, environments, coverage targets)"
- "Traceability Matrix (every manifest success criterion mapped to test cases)"
recommended:
- "Test Plan (per release)"
- "Performance Test Report (SLO baseline)"
- "Security Test Report (SAST/DAST results)"
- "Exploratory Testing Charter"
optional:
- "Penetration Test Report — for major releases"
- "Accessibility Audit Report (WCAG 2.1 AA)"
t1p_standards:
- "ISO/IEC 25010:2023 — Software quality models"
- "ISO/IEC/IEEE 29119 — Software testing"
- "OWASP Testing Guide v4.2"
- "OWASP ASVS"
- "ISTQB Foundation Level Syllabus"
t1p_verification:
- "Testing pyramid ratios approximately maintained (70/20/10)"
- "Code coverage meets threshold (≥80% line coverage for new code)"
- "All critical user journeys covered by E2E tests"
- "Zero P0 bugs in release candidate"
- "Performance within defined SLOs"
- "SAST scan clean (no critical/high findings unaddressed)"
- "DAST scan clean (no OWASP Top 10 vulnerabilities)"
- "Accessibility audit passed (WCAG 2.1 AA)"
steps:
- name: "test_strategy"
description: "Define testing levels, tools, environments, and coverage targets"
verification: "Strategy document covers unit, integration, E2E, performance, security"
agent_role: tester
- name: "test_case_design"
description: "Design test cases for critical paths and edge cases"
verification: "Test cases cover all acceptance criteria from manifest; edge cases documented"
agent_role: tester
- name: "test_environment_spec"
description: "Specify test environment requirements, data fixtures, mock services"
verification: "Environment spec reviewed by DevOps Agent; reproducible setup documented"
agent_role: devops
- name: "acceptance_criteria_mapping"
description: "Map every manifest success criterion to specific test cases"
verification: "Traceability matrix complete; no criterion unmapped"
agent_role: tester
- name: "review"
display_name: "System Review"
description: >
Formal review gate per Part 11. All artifacts reviewed:
blueprint, work plan, test plan, risk register, budget.
entry_gate: "Blueprint, work plan, and test plan all approved individually"
exit_gate: "System Review passed; go/no-go decision recorded on Bus"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "artifact_completeness_check"
description: "Verify all required artifacts exist and are current"
verification: "Checklist of artifacts (blueprint, work plan, test plan, risk register) all present"
agent_role: PO
- name: "cross_reference_validation"
description: "Verify consistency across blueprint, work plan, and test plan"
verification: "No contradictions between artifacts; all references valid"
agent_role: analyst
- name: "budget_final_review"
description: "Confirm total estimated cost is within project budget ceiling"
verification: "Cost projection documented and within ceiling; contingency identified"
agent_role: PO
- name: "go_no_go_decision"
description: "PO (and human for L4+ projects) makes final go/no-go decision"
verification: "Decision recorded on Bus with rationale; if no-go, remediation plan created"
agent_role: PO
- name: "execute"
display_name: "Execute"
description: >
Implementation phase. Decomposes into 6 ordered sub-stages.
Each sub-stage has its own entry/exit gates.
entry_gate: "System Review passed with go decision"
exit_gate: "All sub-stages complete; production deployment verified and stable"
responsible: PO
estimated_duration: "20-80 hours"
t1p_prototyping_stage: "Level 5 (MVP) — real code, minimal feature set; Level 6 (Beta) — feature-complete, limited audience; Level 7 (GA) — SLOs met, documentation complete"
t1p_dora_metrics:
deployment_frequency:
elite_target: "Multiple per day (on-demand)"
measurement: "How often code is deployed to production"
lead_time_for_changes:
elite_target: "< 1 hour"
measurement: "Time from commit to production"
change_failure_rate:
elite_target: "< 5%"
measurement: "Percentage of deployments causing production failure"
mttr:
elite_target: "< 1 hour"
measurement: "Mean Time to Restore after production incident"
note: "Monitor DORA metrics from first deployment; report in monitoring sub-stage"
t1p_ai_assistance:
coding: "AI-assisted: code generation, test writing, boilerplate, refactoring (20-30% productivity gain)"
review: "AI-assisted: code review suggestions, bug triage, flaky test detection"
deployment: "AI-assisted: runbook generation, incident response triage, log analysis"
guardrails:
- "All AI-generated code reviewed with same rigor as human code"
- "AI security blind spots: auth and crypto code requires extra human review"
- "License contamination check on AI-generated code"
- "Human-led: architecture decisions, security review, product judgments"
t1p_formal_documents:
coding:
required:
- "Commit messages following Conventional Commits specification"
- "Pull Request descriptions (What, Why, How to test, Screenshots for UI)"
- "Sprint demo notes / recording"
- "Sprint retrospective notes with action items"
recommended:
- "Technical Debt Register (new debt items logged)"
- "Architecture Decision Records (for mid-sprint decisions)"
- "Changelog entries (SemVer 2.0)"
deployment:
required:
- "Deployment Record (who, what, when, outcome)"
- "Rollback Plan (tested, not just documented)"
- "Release Notes (internal and external)"
recommended:
- "Runbook (operational playbook for the service)"
- "SLO/SLI Definitions"
- "On-call Rotation Schedule"
monitoring:
required:
- "Incident Post-Mortem (for every significant incident — within 72 hours)"
- "SLO Report (monthly)"
recommended:
- "Feature Adoption Report"
- "Lessons Learned Document"
t1p_standards:
development:
- "DORA Metrics — Deployment Frequency, Lead Time, Change Failure Rate, MTTR"
- "ISO 12207 §6.4.7 — Software Construction"
- "Scrum Guide (2020) — Sprint events"
- "Conventional Commits specification"
- "Semantic Versioning (SemVer 2.0)"
- "Google Engineering Practices — Code Review guidelines"
deployment:
- "Google SRE Book — Error Budgets, SLOs, SLIs"
- "DORA Metrics — Deployment Frequency, Lead Time, Change Failure Rate, MTTR"
- "ISO 12207 §6.4.9 — Software Transition"
- "ITIL 4 — Change Enablement"
- "Deployment strategies: Canary (default T1P), Blue-Green, Rolling, Feature Flags"
post_launch:
- "Google SRE — Blameless Post-Mortems, Error Budgets"
- "ISO 12207 §6.4.10 — Software Operation"
- "ISO 12207 §6.4.11 — Software Maintenance"
- "ITIL 4 — Incident Management, Problem Management, Continual Improvement"
sub_stages:
- name: "coding"
display_name: "Coding"
description: "Implement features per work plan tickets"
entry_gate: "System Review passed"
exit_gate: "All coding tickets complete; code reviewed; unit tests passing"
estimated_duration: "10-40 hours"
steps:
- name: "environment_setup"
description: "Set up development environment, branches, CI pipeline"
verification: "Dev environment functional; CI pipeline runs on push"
agent_role: devops
- name: "feature_implementation"
description: "Implement features per sprint tickets"
verification: "Each ticket's acceptance criteria met; code reviewed by peer agent"
agent_role: developer
- name: "unit_test_implementation"
description: "Write unit tests per test plan coverage targets"
verification: "Coverage target met; all unit tests green"
agent_role: developer
- name: "code_review"
description: "Peer review all implementation code"
verification: "All review comments resolved; no critical issues open"
agent_role: developer
- name: "configuration"
display_name: "Configuration"
description: "Configure application settings, feature flags, environment variables"
entry_gate: "Core coding complete"
exit_gate: "All configuration verified in staging environment"
estimated_duration: "1-4 hours"
steps:
- name: "app_configuration"
description: "Set application config, feature flags, environment-specific values"
verification: "Config validated per environment; no hardcoded secrets"
agent_role: devops
- name: "secrets_management"
description: "Configure secrets via SOPS/vault; verify rotation policy"
verification: "All secrets encrypted; rotation schedule documented"
agent_role: devops
- name: "installation"
display_name: "Installation"
description: "Provision infrastructure, deploy dependencies, set up databases"
entry_gate: "Configuration verified"
exit_gate: "Infrastructure provisioned and health-checked"
estimated_duration: "1-4 hours"
steps:
- name: "infrastructure_provisioning"
description: "Provision containers, networking, storage per infrastructure plan"
verification: "All resources provisioned; health checks passing"
agent_role: devops
- name: "dependency_installation"
description: "Install runtime dependencies, database migrations, external service connections"
verification: "All dependencies resolved; database migrated; external services reachable"
agent_role: devops
- name: "testing"
display_name: "Testing"
description: "Execute full test suite: integration, E2E, performance, security"
entry_gate: "Infrastructure provisioned and healthy"
exit_gate: "All test suites green; coverage targets met; performance within SLA"
estimated_duration: "4-12 hours"
steps:
- name: "integration_testing"
description: "Run integration tests across service boundaries"
verification: "All integration tests passing; failure reports reviewed"
agent_role: tester
- name: "e2e_testing"
description: "Run end-to-end test scenarios per test plan"
verification: "All E2E scenarios passing; screenshots/logs captured"
agent_role: tester
- name: "performance_testing"
description: "Run load tests and benchmark against SLA targets"
verification: "Performance within defined SLA; bottlenecks documented if any"
agent_role: tester
- name: "security_scanning"
description: "Run security scans: dependency audit, SAST, secrets detection"
verification: "No critical/high vulnerabilities; findings documented with remediation"
agent_role: security_specialist
- name: "deployment"
display_name: "Deployment"
description: "Deploy to production with rollback plan"
entry_gate: "All test suites green; PO approval for production release"
exit_gate: "Production deployment successful; smoke tests passing"
estimated_duration: "1-4 hours"
steps:
- name: "pre_deployment_checklist"
description: "Verify deployment prerequisites: backups, rollback plan, runbook"
verification: "Checklist complete; backup verified; rollback tested in staging"
agent_role: devops
- name: "production_deployment"
description: "Execute deployment via CI/CD pipeline or runbook"
verification: "Deployment successful; no errors in deployment logs"
agent_role: devops
- name: "smoke_testing"
description: "Run smoke tests against production"
verification: "All smoke tests passing; core user flows functional"
agent_role: tester
- name: "rollback_readiness"
description: "Verify rollback can be executed within SLA if needed"
verification: "Rollback procedure documented and tested; recovery time confirmed"
agent_role: devops
- name: "monitoring"
display_name: "Monitoring"
description: "Establish production monitoring and confirm stability"
entry_gate: "Production deployment successful; smoke tests passing"
exit_gate: "Monitoring active; 24h stability window passed; project marked complete"
estimated_duration: "2-8 hours"
steps:
- name: "monitoring_setup"
description: "Configure alerting, dashboards, log aggregation"
verification: "Alerts firing correctly on test triggers; dashboard shows key metrics"
agent_role: devops
- name: "stability_observation"
description: "Monitor production for 24h stability window"
verification: "No critical alerts during stability window; error rates within SLA"
agent_role: devops
- name: "handoff_documentation"
description: "Document runbook, known issues, operational procedures"
verification: "Runbook complete; on-call procedures documented; PO signs off"
agent_role: devops
t1p_enhancements:
source: "T1P IT Project Lifecycle Research (2026-03-30)"
research_file: "struxio-knowledge/vault/research/t1p_it_project_lifecycle.md"
progressive_prototyping_ladder:
description: >
T1P companies de-risk development by building confidence incrementally.
Each level is a decision point: Continue, Pivot, Kill, or Park.
Cost of killing at Level 2 is orders of magnitude less than at Level 6.
levels:
- level: 0
name: "Napkin Sketch"
purpose: "Communicate the idea"
investment: "Minutes"
gate: "Does this make sense?"
xiopro_stage: "idea_research / brainstorm"
- level: 1
name: "Wireframe"
purpose: "Information architecture and flow (low-fidelity)"
investment: "Hours"
gate: "Is the flow right?"
xiopro_stage: "brainstorm"
- level: 2
name: "Mockup"
purpose: "Visual design validation (high-fidelity, real content)"
investment: "Days"
gate: "Does this look and feel right?"
xiopro_stage: "blueprint"
- level: 3
name: "Interactive Prototype"
purpose: "Usability testing with real users (Figma/InVision)"
investment: "Days to 1 week"
gate: "Can users accomplish their goals?"
xiopro_stage: "blueprint"
- level: 4
name: "Technical Spike / Proof of Concept"
purpose: "Validate technical approach (throwaway code)"
investment: "1-3 days"
gate: "Can we actually build this?"
xiopro_stage: "blueprint (if technical uncertainty remains) or execute/coding"
- level: 5
name: "MVP (Minimum Viable Product)"
purpose: "Market validation (real code, minimal feature set, real users)"
investment: "2-6 weeks"
gate: "Do users actually want and use this?"
xiopro_stage: "execute (first sprint)"
- level: 6
name: "Beta"
purpose: "Production hardening, edge case discovery (feature-complete, limited audience)"
investment: "2-4 weeks beyond MVP"
gate: "Is it reliable enough for GA?"
xiopro_stage: "execute (testing + deployment sub-stages)"
- level: 7
name: "General Availability (GA)"
purpose: "Value delivery (full production deployment)"
investment: "Continuous"
gate: "SLOs met, documentation complete, support ready"
xiopro_stage: "execute/monitoring"
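The ladder's economics can be sketched in code. The investment figures below are illustrative assumptions (the spec gives only order-of-magnitude ranges), but they show why the gate at each level matters: sunk cost compounds as you climb.

```python
# Hypothetical sketch of the prototyping ladder's kill-cost asymmetry.
# Hour figures are illustrative assumptions, not spec values.
LADDER = [
    # (level, name, assumed investment in hours, gate question)
    (0, "Napkin Sketch", 0.25, "Does this make sense?"),
    (1, "Wireframe", 4, "Is the flow right?"),
    (2, "Mockup", 24, "Does this look and feel right?"),
    (3, "Interactive Prototype", 60, "Can users accomplish their goals?"),
    (4, "Technical Spike", 24, "Can we actually build this?"),
    (5, "MVP", 160, "Do users actually want and use this?"),
    (6, "Beta", 120, "Is it reliable enough for GA?"),
]

def sunk_cost(level: int) -> float:
    """Total hours invested if the project is killed after `level`."""
    return sum(hours for lvl, _, hours, _ in LADDER if lvl <= level)

# Killing at Level 2 costs an order of magnitude less than killing at Level 6,
# which is the rationale for treating every level as a Continue/Pivot/Kill/Park gate.
assert sunk_cost(2) < sunk_cost(6) / 10
```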
dora_metrics:
description: >
DORA Four Key Metrics measure engineering performance. Elite (T1P) targets
are the verification benchmark for the execute stage.
metrics:
deployment_frequency:
elite: "On-demand (multiple per day)"
high: "Weekly-monthly"
medium: "Monthly to once every six months"
low: "Less than once every six months"
lead_time_for_changes:
elite: "< 1 hour"
high: "1 day - 1 week"
medium: "1 week - 1 month"
low: "1-6 months"
change_failure_rate:
elite: "< 5%"
high: "5-10%"
medium: "10-15%"
low: "> 15%"
mttr:
elite: "< 1 hour"
high: "< 1 day"
medium: "< 1 week"
low: "> 1 week"
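Since the elite band is the execute-stage verification benchmark, the thresholds above can be applied mechanically. A minimal sketch, assuming the boundary handling shown (e.g. exactly 5% change failure rate counts as "high"):

```python
# Classify two of the DORA Four Key Metrics into performance tiers,
# following the threshold table above. Boundary handling is an assumption
# of this sketch, not part of the spec.

def classify_change_failure_rate(pct: float) -> str:
    if pct < 5:
        return "elite"
    if pct <= 10:
        return "high"
    if pct <= 15:
        return "medium"
    return "low"

def classify_mttr(hours: float) -> str:
    if hours < 1:
        return "elite"
    if hours < 24:
        return "high"
    if hours < 24 * 7:
        return "medium"
    return "low"

# The execute-stage exit benchmark is the elite (T1P) band on every metric.
assert classify_change_failure_rate(3.2) == "elite"
assert classify_mttr(0.5) == "elite"
assert classify_mttr(30) == "medium"
```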
testing_pyramid:
description: >
T1P companies use the testing pyramid as organizing principle for QA.
QA is continuous, automated, and shift-left — not a phase gate.
distribution:
unit: "~70% — Pure logic, fast, isolated; run on every commit; target ≥80% coverage"
integration: "~20% — API contract tests, service interaction tests, database integration; every PR"
e2e: "~10% — Critical user journeys only; staging/pre-prod; smoke suite < 10 min"
iteration_pattern:
- "Unit tests: every commit (seconds)"
- "Integration tests: every PR (minutes)"
- "E2E smoke suite: every merge to main (< 10 minutes)"
- "Full E2E suite: nightly or on release candidate"
- "Performance tests: weekly or on-demand"
- "Security scans: SAST continuous + DAST periodic"
- "Exploratory testing: per sprint"
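The ~70/20/10 distribution can serve as an automated health check on a test suite. A minimal sketch, where the ±10-percentage-point tolerance is an assumption rather than a spec value:

```python
# Check a test suite's shape against the ~70/20/10 pyramid target above.
# The tolerance is an illustrative assumption.

def pyramid_shares(unit: int, integration: int, e2e: int) -> dict:
    """Return each layer's share of the total suite."""
    total = unit + integration + e2e
    return {"unit": unit / total, "integration": integration / total, "e2e": e2e / total}

def is_pyramid_shaped(shares: dict, tolerance: float = 0.10) -> bool:
    """True when the suite roughly matches the ~70/20/10 target."""
    targets = {"unit": 0.70, "integration": 0.20, "e2e": 0.10}
    return all(abs(shares[k] - t) <= tolerance for k, t in targets.items())

assert is_pyramid_shaped(pyramid_shares(unit=700, integration=220, e2e=80))
# An inverted distribution (E2E-heavy) fails the check.
assert not is_pyramid_shaped(pyramid_shares(unit=100, integration=100, e2e=300))
```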
ai_assisted_development:
description: >
AI-assisted development (2025-2026): 85% of developers use AI tools;
22% of merged code is AI-authored; ~20-30% productivity gain in specific workflows.
integration_pattern:
human_led: "Problem definition, architecture decisions, security review, product decisions"
ai_assisted: "Code generation, test writing, documentation, boilerplate, refactoring"
ai_automated: "Linting, formatting, dependency updates, simple bug fixes, changelog generation"
human_verified: "All AI output reviewed before merge"
per_stage:
idea_research: "Market research synthesis, competitive analysis"
brainstorm: "User interview synthesis, persona generation"
blueprint: "Design doc drafting, ADR generation, API spec scaffolding"
work_plan: "Story generation from PRD, estimation assistance"
execute_coding: "Code generation, test writing, code review assistance"
execute_testing: "Test generation, bug triage, flaky test detection"
execute_deployment: "Runbook generation, incident response assistance"
execute_monitoring: "Log analysis, anomaly detection, post-mortem drafting"
risks_to_manage:
- "Hallucinated code — review AI-generated code with same rigor as human code"
- "License contamination — AI may reproduce copyrighted code"
- "Security blind spots — AI may generate insecure patterns (especially auth/crypto)"
- "Over-reliance — team skills atrophy if AI does all the thinking"
- "Context drift — AI context windows may miss cross-cutting concerns"
master_document_registry:
description: "Complete formal document list per T1P stage with XIOPro stage mapping"
documents:
- doc: "Business Case Brief"
t1p_stage: "0 — Pre-Project"
xiopro_stage: "idea_research"
required: true
- doc: "Competitive Analysis"
t1p_stage: "0 — Pre-Project"
xiopro_stage: "idea_research"
required: false
- doc: "PRD (Product Requirements Document)"
t1p_stage: "1 — Discovery"
xiopro_stage: "manifest"
required: true
- doc: "User Stories with Acceptance Criteria"
t1p_stage: "1 — Discovery"
xiopro_stage: "manifest"
required: true
- doc: "Success Metrics Document"
t1p_stage: "1 — Discovery"
xiopro_stage: "manifest"
required: true
- doc: "Design Document / RFC"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: true
- doc: "Architecture Decision Records (ADR)"
t1p_stage: "2+ — Architecture onwards"
xiopro_stage: "blueprint / execute"
required: true
- doc: "C4 Architecture Diagrams (Context + Container minimum)"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: true
- doc: "API Specification (OpenAPI 3.1 or GraphQL SDL)"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: "if API exists"
- doc: "Data Model / ERD"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: "if data model"
- doc: "Threat Model (STRIDE)"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: "if auth/data"
- doc: "UX Wireframes"
t1p_stage: "2 — Architecture"
xiopro_stage: "blueprint"
required: "if UI"
- doc: "Sprint Backlog"
t1p_stage: "3 — Planning"
xiopro_stage: "work_plan"
required: true
- doc: "Definition of Done"
t1p_stage: "3 — Planning"
xiopro_stage: "work_plan"
required: true
- doc: "Definition of Ready"
t1p_stage: "3 — Planning"
xiopro_stage: "work_plan"
required: true
- doc: "Risk Register"
t1p_stage: "3 — Planning"
xiopro_stage: "manifest / work_plan"
required: false
- doc: "Test Strategy"
t1p_stage: "3-5 — Planning/QA"
xiopro_stage: "test_plan"
required: true
- doc: "PR Descriptions (Conventional Commits)"
t1p_stage: "4 — Development"
xiopro_stage: "execute/coding"
required: true
- doc: "Sprint Demo Notes"
t1p_stage: "4 — Development"
xiopro_stage: "execute/coding"
required: true
- doc: "Test Execution Report"
t1p_stage: "5 — QA"
xiopro_stage: "execute/testing"
required: true
- doc: "QA Sign-off"
t1p_stage: "5 — QA"
xiopro_stage: "execute/testing"
required: true
- doc: "Deployment Record"
t1p_stage: "6 — Deployment"
xiopro_stage: "execute/deployment"
required: true
- doc: "Rollback Plan (tested)"
t1p_stage: "6 — Deployment"
xiopro_stage: "execute/deployment"
required: true
- doc: "Release Notes"
t1p_stage: "6 — Deployment"
xiopro_stage: "execute/deployment"
required: true
- doc: "Runbook"
t1p_stage: "6 — Deployment"
xiopro_stage: "execute/monitoring"
required: true
- doc: "SLO/SLI Definitions"
t1p_stage: "6 — Deployment"
xiopro_stage: "execute/monitoring"
required: true
- doc: "Incident Post-Mortem"
t1p_stage: "7 — Post-Launch"
xiopro_stage: "execute/monitoring"
required: "per incident"
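The registry above mixes boolean requirements with conditional ones ("if API exists", "if auth/data"), which a stage gate can evaluate mechanically. A minimal sketch, assuming illustrative project-attribute names (`has_api`, `has_auth`, `has_data`) that are not part of the spec:

```python
# Evaluate the master document registry for a stage gate. The registry
# excerpt and the project-attribute names are illustrative assumptions.
REGISTRY = [
    # (document, xiopro_stage, required)
    ("PRD (Product Requirements Document)", "manifest", True),
    ("Design Document / RFC", "blueprint", True),
    ("API Specification (OpenAPI 3.1 or GraphQL SDL)", "blueprint", "if API exists"),
    ("Threat Model (STRIDE)", "blueprint", "if auth/data"),
    ("Rollback Plan (tested)", "execute/deployment", True),
]

CONDITIONS = {
    "if API exists": lambda p: p.get("has_api", False),
    "if auth/data": lambda p: p.get("has_auth", False) or p.get("has_data", False),
}

def required_docs(stage: str, project: dict) -> list:
    """Documents that must exist before the given XIOPro stage may exit."""
    out = []
    for doc, doc_stage, req in REGISTRY:
        if doc_stage != stage:
            continue
        if req is True or (isinstance(req, str) and CONDITIONS[req](project)):
            out.append(doc)
    return out

# A data-bearing CLI tool with no API: the API spec drops out, the threat model stays.
cli_tool = {"has_api": False, "has_auth": False, "has_data": True}
assert required_docs("blueprint", cli_tool) == [
    "Design Document / RFC",
    "Threat Model (STRIDE)",
]
```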
dependencies:
external_inputs:
- "Project idea with domain tag and priority (from IO or human)"
- "Budget ceiling (from project or parent composite)"
- "Infrastructure access (Hetzner containers, Tailscale VPN)"
- "Domain knowledge (from Domain Expert Agent or human)"
- "Security policies (from Part 7 Governance)"
cross_references:
- "Part 2: Architecture patterns"
- "Part 3: ODM schema extensions"
- "Part 4: Agent role bundles"
- "Part 7: Governance and compliance rules"
- "Part 8: Infrastructure constraints"
- "Part 10: Swarm topology selection"
- "Part 11: System Review gate procedure"
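The coding sub-stage above requires Conventional Commits headers and SemVer 2.0 version strings; both are simple enough to enforce in CI. A minimal sketch whose regexes cover the common cases, not every corner of either specification:

```python
import re

# Validators for the coding-stage document standards: Conventional Commits
# headers and SemVer 2.0 version strings. The commit-type list matches the
# conventional set; project-specific types would extend it.
COMMIT_RE = re.compile(
    r"^(build|chore|ci|docs|feat|fix|perf|refactor|style|test)"
    r"(\([\w\-]+\))?(!)?: .+"
)
SEMVER_RE = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(-[0-9A-Za-z\-.]+)?(\+[0-9A-Za-z\-.]+)?$"
)

def valid_commit(header: str) -> bool:
    return COMMIT_RE.match(header) is not None

def valid_semver(version: str) -> bool:
    return SEMVER_RE.match(version) is not None

assert valid_commit("feat(api): add pagination to list endpoints")
assert valid_commit("fix!: drop support for legacy auth tokens")  # "!" marks a breaking change
assert not valid_commit("updated stuff")
assert valid_semver("1.4.0-rc.1+build.7")
assert not valid_semver("1.04.0")  # leading zeros are forbidden by SemVer
```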
7.2 Template 2: Marketing¶
The Marketing template covers the full campaign lifecycle, from market research through optimization. It omits the formal Blueprint and System Review stages but compensates with rigorous analytics and iterative optimization loops.
template:
name: "Marketing"
id: "tpl_marketing"
version: "1.0.0"
description: >
Marketing campaign lifecycle from research through optimization.
No formal Blueprint or System Review stages. Emphasis on data-driven
iteration and channel-specific execution.
total_estimated_duration: "20-60 hours"
contextual_agents:
- name: "Brand Agent"
role: brand_specialist
persistence: cross_sprint
description: >
Maintains brand guidelines, voice/tone standards, visual identity rules,
and messaging hierarchy. Consulted at every content-producing step to
ensure brand consistency. Persists brand memory across campaigns.
- name: "Analytics Agent"
role: analytics_specialist
persistence: cross_sprint
description: >
Owns analytics instrumentation, KPI tracking, attribution modeling,
and performance dashboards. Active from Channel Setup through Optimize.
Maintains historical campaign performance data for benchmarking.
stages:
- name: "market_research"
display_name: "Market Research"
description: >
Understand target audience, competitive landscape, market trends,
and positioning opportunities.
entry_gate: "Marketing objective defined with target audience segment"
exit_gate: "Market research report approved by PO"
responsible: PO
estimated_duration: "2-6 hours"
steps:
- name: "audience_analysis"
description: "Define target audience: demographics, psychographics, pain points, channels"
verification: "Audience persona document with at least 2 personas; validated against data"
agent_role: researcher
- name: "competitive_analysis"
description: "Analyze competitor marketing: messaging, channels, positioning, spend estimates"
verification: "Competitor matrix with minimum 3 competitors; strengths/weaknesses documented"
agent_role: researcher
- name: "trend_analysis"
description: "Identify relevant market trends, seasonal patterns, and emerging channels"
verification: "Trend report with actionable insights and timing recommendations"
agent_role: researcher
- name: "positioning_assessment"
description: "Define differentiation strategy and value proposition"
verification: "Positioning statement drafted; competitive advantage articulated"
agent_role: researcher
- name: "strategy"
display_name: "Strategy"
description: >
Define campaign strategy: goals, KPIs, budget allocation, channel mix,
timeline, and messaging framework.
entry_gate: "Market research report approved"
exit_gate: "Strategy document approved by PO; budget allocated per channel"
responsible: PO
estimated_duration: "2-4 hours"
steps:
- name: "goal_setting"
description: "Define SMART goals and KPIs for the campaign"
verification: "Each goal is specific, measurable, achievable, relevant, time-bound"
agent_role: PO
- name: "channel_selection"
description: "Select marketing channels based on audience analysis and budget"
verification: "Channel list with rationale, expected reach, and cost per channel"
agent_role: analyst
- name: "budget_allocation"
description: "Allocate budget across channels, content production, and tools"
verification: "Budget breakdown documented; total within project ceiling; contingency reserved"
agent_role: PO
- name: "messaging_framework"
description: "Define core messages, value props, CTAs per audience segment"
verification: "Messaging matrix reviewed by Brand Agent; consistent with brand guidelines"
agent_role: brand_specialist
- name: "timeline_creation"
description: "Build campaign timeline with milestones and dependencies"
verification: "Timeline accounts for content production lead times and channel requirements"
agent_role: PO
- name: "campaign_design"
display_name: "Campaign Design"
description: >
Design the campaign: creative concepts, content formats,
landing pages, email sequences, ad creatives.
entry_gate: "Strategy document approved"
exit_gate: "Campaign design deck reviewed by PO and Brand Agent"
responsible: PO
estimated_duration: "4-10 hours"
steps:
- name: "creative_concept"
description: "Develop creative concepts aligned with messaging framework"
verification: "Minimum 2 creative concepts presented with mockups; Brand Agent approved"
agent_role: creative_specialist
- name: "landing_page_design"
description: "Design landing page structure, copy, and conversion flow"
verification: "Wireframe and copy complete; CTA placement validated; mobile-responsive"
agent_role: creative_specialist
- name: "email_sequence_design"
description: "Design email nurture sequence: triggers, content, timing"
verification: "Email sequence mapped with subject lines, body copy, and send timing"
agent_role: content_writer
- name: "ad_creative_design"
description: "Design ad creatives per channel (display, social, search)"
verification: "Creatives meet channel specs; copy within character limits; Brand Agent reviewed"
agent_role: creative_specialist
- name: "content_creation"
display_name: "Content Creation"
description: >
Produce all campaign content: copy, visuals, landing pages, emails, ads.
entry_gate: "Campaign design approved"
exit_gate: "All content assets produced, reviewed, and approved"
responsible: PO
estimated_duration: "4-12 hours"
steps:
- name: "copywriting"
description: "Write all campaign copy: landing page, emails, ads, social posts"
verification: "Copy reviewed by Brand Agent for voice/tone; proofread for errors"
agent_role: content_writer
- name: "visual_production"
description: "Produce visual assets: images, graphics, video if applicable"
verification: "Visuals meet brand guidelines; channel size requirements met"
agent_role: creative_specialist
- name: "landing_page_build"
description: "Build landing page with tracking pixels, forms, and analytics"
verification: "Page loads correctly; forms submit; tracking fires; mobile tested"
agent_role: developer
- name: "content_review"
description: "Final review of all content assets against brand and strategy"
verification: "PO and Brand Agent sign off; all assets tagged and organized"
agent_role: PO
- name: "channel_setup"
display_name: "Channel Setup"
description: >
Configure marketing channels: ad accounts, email platforms,
social scheduling, tracking, and attribution.
entry_gate: "All content assets approved"
exit_gate: "All channels configured, tested, and ready for launch"
responsible: PO
estimated_duration: "2-4 hours"
steps:
- name: "ad_platform_setup"
description: "Configure ad campaigns in platform (targeting, budgets, schedules)"
verification: "Campaigns in draft/review state; targeting verified; budgets set"
agent_role: analyst
- name: "email_platform_setup"
description: "Load email sequences, configure triggers, test deliverability"
verification: "Test emails sent and received; formatting correct across clients"
agent_role: analyst
- name: "tracking_setup"
description: "Configure UTM parameters, conversion tracking, analytics dashboards"
verification: "Analytics Agent verifies tracking fires correctly on test clicks"
agent_role: analytics_specialist
- name: "pre_launch_checklist"
description: "Final pre-launch verification of all channels and assets"
verification: "Checklist complete; all links working; all tracking verified"
agent_role: PO
- name: "launch"
display_name: "Launch"
description: >
Activate the campaign across all channels simultaneously or in
planned sequence.
entry_gate: "All channels configured and pre-launch checklist passed"
exit_gate: "Campaign live on all channels; first 24h metrics baseline captured"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "campaign_activation"
description: "Activate ads, send first emails, publish social posts"
verification: "All channels confirmed live; no error states"
agent_role: PO
- name: "launch_monitoring"
description: "Monitor first hours for technical issues, delivery problems, anomalies"
verification: "No critical issues in first 4 hours; delivery rates within expected range"
agent_role: analytics_specialist
- name: "baseline_capture"
description: "Capture 24h baseline metrics for future comparison"
verification: "Baseline metrics recorded: impressions, clicks, conversions, spend"
agent_role: analytics_specialist
- name: "analytics"
display_name: "Analytics"
description: >
Ongoing performance measurement, attribution analysis,
and reporting.
entry_gate: "Campaign live with 24h baseline captured"
exit_gate: "Performance report delivered to PO with optimization recommendations"
responsible: analytics_specialist
estimated_duration: "2-6 hours"
steps:
- name: "performance_tracking"
description: "Monitor KPIs against goals: CAC, ROAS, conversion rate, engagement"
verification: "KPI dashboard updated; performance vs target documented"
agent_role: analytics_specialist
- name: "attribution_analysis"
description: "Analyze which channels and touchpoints drive conversions"
verification: "Attribution model applied; channel contribution documented"
agent_role: analytics_specialist
- name: "cohort_analysis"
description: "Analyze performance by audience segment, geography, device"
verification: "Cohort breakdown with actionable insights per segment"
agent_role: analytics_specialist
- name: "performance_report"
description: "Compile comprehensive performance report with recommendations"
verification: "Report delivered to PO; includes data, insights, and recommended actions"
agent_role: analytics_specialist
- name: "optimize"
display_name: "Optimize"
description: >
Iterative optimization based on analytics. A/B testing, budget
reallocation, creative refresh, and audience refinement.
entry_gate: "Performance report delivered with optimization recommendations"
exit_gate: "Optimization cycle complete; improved metrics documented; campaign concluded or next cycle initiated"
responsible: PO
estimated_duration: "2-8 hours (per cycle)"
steps:
- name: "hypothesis_formation"
description: "Form testable hypotheses based on analytics insights"
verification: "Each hypothesis has expected impact, test method, and success threshold"
agent_role: analyst
- name: "ab_testing"
description: "Design and run A/B tests on copy, creative, targeting, or landing pages"
verification: "Tests reach statistical significance; winner identified"
agent_role: analyst
- name: "budget_reallocation"
description: "Shift budget to higher-performing channels and audiences"
verification: "Reallocation justified by data; updated budget documented"
agent_role: PO
- name: "creative_refresh"
description: "Update underperforming creatives based on test results"
verification: "New creatives reviewed by Brand Agent; deployed to channels"
agent_role: creative_specialist
- name: "cycle_report"
description: "Document optimization results and decide: continue, iterate, or conclude"
verification: "PO decision recorded; if concluding, final campaign report produced"
agent_role: PO
dependencies:
external_inputs:
- "Marketing objective with target audience (from IO or human)"
- "Brand guidelines and visual identity (from Brand Agent memory)"
- "Budget ceiling (from project or parent composite)"
- "Channel access credentials (ad platforms, email tools, analytics)"
cross_references:
- "Part 4: Agent role bundles"
- "Part 7: Governance (budget controls)"
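The Optimize stage's `ab_testing` step requires tests to reach statistical significance before a winner is declared. A minimal sketch, assuming a two-proportion z-test and a 95% significance threshold (the template itself does not mandate a specific test or alpha):

```python
from math import sqrt, erf

def z_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: variants A and B convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def winner(conv_a, n_a, conv_b, n_b, alpha=0.05):
    if z_test_p_value(conv_a, n_a, conv_b, n_b) >= alpha:
        return None  # not significant: keep the test running or redesign it
    return "B" if conv_b / n_b > conv_a / n_a else "A"

assert winner(200, 10_000, 260, 10_000) == "B"   # 2.0% vs 2.6%: real lift
assert winner(200, 10_000, 205, 10_000) is None  # 2.0% vs 2.05%: noise
```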
7.3 Template 3: Content Creation¶
The Content Creation template covers documentation, tutorials, blog posts, video scripts, and podcast scripts. It is lighter-weight than the IT Project and Marketing templates, placing its emphasis on editorial quality and distribution.
template:
name: "Content Creation"
id: "tpl_content_creation"
version: "1.0.0"
description: >
Lightweight content lifecycle: research through analytics. Covers
documentation, tutorials, blog posts, video scripts, podcast scripts.
No Blueprint or formal System Review.
total_estimated_duration: "8-30 hours"
contextual_agents:
- name: "Style Guide Agent"
role: style_specialist
persistence: cross_sprint
description: >
Maintains editorial style guide, voice/tone rules, formatting
standards, and terminology glossary. Reviews all content before
publish. Ensures consistency across content types and authors.
- name: "SEO Agent"
role: seo_specialist
persistence: cross_sprint
description: >
Manages keyword research, on-page SEO optimization, metadata,
internal linking strategy, and search performance tracking.
Active from Topic Research through Analytics.
stages:
- name: "topic_research"
display_name: "Topic Research"
description: >
Identify content opportunities: keyword gaps, audience questions,
trending topics, and competitive content analysis.
entry_gate: "Content need identified with target audience and content type"
exit_gate: "Topic brief approved by PO with target keywords and angle"
responsible: PO
estimated_duration: "1-3 hours"
steps:
- name: "keyword_research"
description: "Identify target keywords, search volume, difficulty, and intent"
verification: "Keyword list with volume, difficulty, and intent classification"
agent_role: seo_specialist
- name: "audience_question_mining"
description: "Identify questions the target audience is asking (forums, social, support)"
verification: "Question list prioritized by frequency and relevance"
agent_role: researcher
- name: "competitive_content_analysis"
description: "Analyze top-ranking content for target keywords"
verification: "Content gap analysis: what exists, what's missing, differentiation opportunity"
agent_role: researcher
- name: "topic_brief"
description: "Create structured topic brief: angle, keywords, target length, format, audience"
verification: "PO reviews and approves brief; SEO Agent validates keyword targeting"
agent_role: PO
- name: "content_strategy"
display_name: "Content Strategy"
description: >
Plan the content piece: outline, structure, research needs,
visual requirements, and distribution plan.
entry_gate: "Topic brief approved"
exit_gate: "Content outline and distribution plan approved by PO"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "outline_creation"
description: "Create detailed content outline with section headers and key points"
verification: "Outline covers topic comprehensively; logical flow; SEO structure validated"
agent_role: content_writer
- name: "research_planning"
description: "Identify sources, data needs, expert references, and citations required"
verification: "Source list compiled; data gaps identified with acquisition plan"
agent_role: researcher
- name: "visual_planning"
description: "Plan diagrams, screenshots, images, or video segments needed"
verification: "Visual requirements list with descriptions and placement in outline"
agent_role: content_writer
- name: "distribution_planning"
description: "Plan channels, timing, and promotion strategy for the content"
verification: "Distribution plan with channels, dates, and cross-promotion tactics"
agent_role: PO
- name: "draft"
display_name: "Draft"
description: >
Write the first complete draft following the approved outline.
entry_gate: "Content outline approved"
exit_gate: "Complete first draft submitted for review"
responsible: content_writer
estimated_duration: "2-8 hours"
steps:
- name: "research_execution"
description: "Gather all required data, sources, quotes, and reference materials"
verification: "All research items from plan collected; sources documented"
agent_role: researcher
- name: "first_draft"
description: "Write complete first draft following outline and style guide"
verification: "Draft covers all outline sections; meets target length; follows style guide"
agent_role: content_writer
- name: "seo_integration"
description: "Integrate target keywords naturally; optimize headings, meta description"
verification: "SEO Agent validates keyword density, heading structure, and metadata"
agent_role: seo_specialist
- name: "visual_creation"
description: "Create or source all planned visual elements"
verification: "All visuals created; alt text written; properly formatted for target platform"
agent_role: creative_specialist
- name: "review"
display_name: "Review"
description: >
Expert and editorial review of the draft for accuracy,
completeness, and quality.
entry_gate: "Complete first draft submitted"
exit_gate: "Review feedback consolidated; revision priorities agreed"
responsible: PO
estimated_duration: "1-3 hours"
steps:
- name: "technical_review"
description: "Domain expert reviews for factual accuracy and completeness"
verification: "All technical claims verified; inaccuracies flagged with corrections"
agent_role: domain_expert
- name: "editorial_review"
description: "Style Guide Agent reviews for voice, tone, grammar, and formatting"
verification: "Style guide compliance confirmed; grammar issues flagged"
agent_role: style_specialist
- name: "seo_review"
description: "SEO Agent reviews final keyword integration and structure"
verification: "SEO checklist passed: keywords, headings, links, metadata"
agent_role: seo_specialist
- name: "feedback_consolidation"
description: "PO consolidates all review feedback and prioritizes revisions"
verification: "Consolidated feedback document with action items and priorities"
agent_role: PO
- name: "edit"
display_name: "Edit"
description: >
Revise draft based on review feedback. Final polish and
quality assurance.
entry_gate: "Review feedback consolidated"
exit_gate: "Final version approved by PO; ready for publish"
responsible: content_writer
estimated_duration: "1-4 hours"
steps:
- name: "revision"
description: "Address all review feedback items"
verification: "Every feedback item addressed or justified as not applicable"
agent_role: content_writer
- name: "proofreading"
description: "Final proofread for typos, broken links, formatting issues"
verification: "Zero typos; all links verified; formatting consistent"
agent_role: content_writer
- name: "final_approval"
description: "PO gives final approval for publication"
verification: "PO sign-off recorded; version tagged as publication-ready"
agent_role: PO
- name: "publish"
display_name: "Publish"
description: >
Publish content to target platform(s) with proper metadata,
formatting, and tracking.
entry_gate: "Final version approved by PO"
exit_gate: "Content live on target platform(s); tracking verified"
responsible: PO
estimated_duration: "0.5-1 hour"
steps:
- name: "platform_formatting"
description: "Format content for target platform (CMS, docs site, social, etc.)"
verification: "Content renders correctly on target platform; mobile-responsive"
agent_role: content_writer
- name: "metadata_configuration"
description: "Set title tags, meta descriptions, Open Graph, canonical URLs"
verification: "SEO Agent verifies all metadata correctly configured"
agent_role: seo_specialist
- name: "publication"
description: "Publish content and verify it is publicly accessible"
verification: "Content accessible at target URL; no 404s; rendering correct"
agent_role: PO
- name: "distribute"
display_name: "Distribute"
description: >
Promote and distribute published content across channels.
entry_gate: "Content published and verified live"
exit_gate: "Distribution complete across all planned channels"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "social_distribution"
description: "Share on social media channels with platform-specific formatting"
verification: "Posts published on all planned social channels; links verified"
agent_role: content_writer
- name: "email_distribution"
description: "Include in newsletter or dedicated email blast if applicable"
verification: "Email sent; delivery rate normal; links working"
agent_role: content_writer
- name: "cross_linking"
description: "Add internal links from related existing content"
verification: "Minimum 3 internal links added; backlinks from relevant pages"
agent_role: seo_specialist
- name: "community_distribution"
description: "Share in relevant communities, forums, or partner channels"
verification: "Posted to planned communities; engagement monitored"
agent_role: content_writer
- name: "analytics"
display_name: "Analytics"
description: >
Track content performance and extract insights for future content.
entry_gate: "Content distributed; minimum 7 days elapsed since publication"
exit_gate: "Performance report delivered with insights for future content"
responsible: analytics_specialist
estimated_duration: "1-2 hours"
steps:
- name: "traffic_analysis"
description: "Analyze page views, unique visitors, time on page, bounce rate"
verification: "Traffic metrics documented with comparison to baseline/goals"
agent_role: seo_specialist
- name: "engagement_analysis"
description: "Analyze social shares, comments, backlinks, email click-throughs"
verification: "Engagement metrics documented per channel"
agent_role: seo_specialist
- name: "conversion_analysis"
description: "Track any conversion goals: signups, downloads, inquiries"
verification: "Conversion data documented; attribution clear"
agent_role: seo_specialist
- name: "lessons_learned"
description: "Extract insights for future content: what worked, what to improve"
verification: "Lessons documented and added to content knowledge base"
agent_role: PO
dependencies:
external_inputs:
- "Content need with target audience and type (from IO or human)"
- "Brand and style guidelines (from Style Guide Agent memory)"
- "Platform access (CMS, social accounts, email tool)"
- "Domain expertise (from Domain Expert if technical content)"
cross_references:
- "Part 4: Agent role bundles"
- "Part 5: Knowledge System (content as knowledge artifact)"
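The stage pipelines above are gate-driven: a stage may start only once its entry gate holds, and may hand off only once its exit gate is verified. A minimal enforcement sketch (the `Stage` and `Pipeline` names are illustrative, not part of the XIOPro schema):

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    entry_gate: str
    exit_gate: str
    entry_met: bool = False  # set when the entry condition is verified
    exit_met: bool = False   # set when the exit condition is verified

class Pipeline:
    """Walks stages in order, refusing ungated transitions."""
    def __init__(self, stages: list[Stage]):
        self.stages = stages
        self.current = 0

    def start_stage(self) -> Stage:
        stage = self.stages[self.current]
        if not stage.entry_met:
            raise RuntimeError(f"entry gate not met for '{stage.name}': {stage.entry_gate}")
        return stage

    def advance(self) -> None:
        stage = self.stages[self.current]
        if not stage.exit_met:
            raise RuntimeError(f"exit gate not met for '{stage.name}': {stage.exit_gate}")
        self.current += 1
```

For example, a `publish` stage cannot advance to `distribute` until its exit gate ("Content live on target platform(s)") has been verified.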
7.4 Template 4: Knowledge Expert¶
The Knowledge Expert template is designed for building deep domain expertise, creating structured knowledge bases, and designing training/certification programs. It is used for projects such as the ISO 19650 knowledge base in MVP1.
template:
name: "Knowledge Expert"
id: "tpl_knowledge_expert"
version: "1.0.0"
description: >
Domain expertise development lifecycle. Covers research, knowledge
structuring, curriculum design, and certification. Used for building
organizational expertise in specific domains (e.g., ISO 19650, BIM,
regulatory compliance).
total_estimated_duration: "30-80 hours"
contextual_agents:
- name: "Domain Expert Agent"
role: domain_expert
persistence: cross_sprint
description: >
Accumulates and maintains domain knowledge throughout the project.
Becomes the persistent knowledge authority for the domain. Can be
queried by other projects and agents after the knowledge project
completes.
- name: "Curriculum Agent"
role: curriculum_specialist
persistence: cross_sprint
description: >
Designs learning pathways, assessment structures, and knowledge
delivery sequences. Ensures pedagogical soundness of all training
materials and certification criteria.
stages:
- name: "domain_research"
display_name: "Domain Research"
description: >
Deep research into the target domain: standards, regulations,
best practices, key concepts, terminology, and expert sources.
entry_gate: "Domain identified with learning objectives and scope"
exit_gate: "Domain research report approved by PO; source library established"
responsible: PO
estimated_duration: "4-12 hours"
steps:
- name: "standards_survey"
description: "Identify and catalog all relevant standards, regulations, and specifications"
verification: "Standards list with version, applicability, and compliance requirements"
agent_role: researcher
- name: "literature_review"
description: "Review academic papers, industry reports, and practitioner publications"
verification: "Annotated bibliography with minimum 20 sources; key findings summarized"
agent_role: researcher
- name: "expert_identification"
description: "Identify domain experts, institutions, and authoritative references"
verification: "Expert directory with areas of expertise and reference works"
agent_role: researcher
- name: "terminology_extraction"
description: "Extract and define domain-specific terminology and concepts"
verification: "Glossary with minimum 50 terms; definitions reviewed for accuracy"
agent_role: researcher
- name: "gap_analysis"
description: "Identify knowledge gaps: what is well-documented vs what requires synthesis"
verification: "Gap report with prioritized areas requiring deeper research"
agent_role: domain_expert
- name: "knowledge_mapping"
display_name: "Knowledge Mapping"
description: >
Structure the domain knowledge into a coherent map: concepts,
relationships, dependencies, and prerequisite chains.
entry_gate: "Domain research report approved"
exit_gate: "Knowledge map reviewed and approved by PO"
responsible: domain_expert
estimated_duration: "3-8 hours"
steps:
- name: "concept_hierarchy"
description: "Organize concepts into a hierarchical taxonomy"
verification: "Taxonomy tree with clear parent-child relationships; no orphan concepts"
agent_role: domain_expert
- name: "dependency_mapping"
description: "Map prerequisite relationships between concepts"
verification: "Dependency graph with no circular dependencies; learning order derivable"
agent_role: domain_expert
- name: "complexity_classification"
description: "Classify each concept by complexity level (foundational, intermediate, advanced, expert)"
verification: "All concepts classified; distribution across levels is reasonable"
agent_role: curriculum_specialist
- name: "cross_domain_links"
description: "Identify connections to other knowledge domains in the organization"
verification: "Cross-reference map showing how this domain connects to existing knowledge"
agent_role: domain_expert
- name: "curriculum_design"
display_name: "Curriculum Design"
description: >
Design learning pathways, module structure, and progression
sequences based on the knowledge map.
entry_gate: "Knowledge map approved"
exit_gate: "Curriculum design approved by PO; module list finalized"
responsible: curriculum_specialist
estimated_duration: "3-6 hours"
steps:
- name: "learning_path_design"
description: "Design learning paths for different audience levels and goals"
verification: "Learning paths defined for at least beginner, practitioner, and expert tracks"
agent_role: curriculum_specialist
- name: "module_structure"
description: "Define learning modules with objectives, topics, and prerequisites"
verification: "Each module has: title, objectives, topics, prerequisites, estimated duration"
agent_role: curriculum_specialist
- name: "assessment_strategy"
description: "Design assessment approach per module: quizzes, exercises, practical tasks"
verification: "Assessment type defined per module; pass criteria specified"
agent_role: curriculum_specialist
- name: "delivery_format_selection"
description: "Select delivery format per module: text, interactive, video, hands-on lab"
verification: "Format selected per module with justification; resource requirements estimated"
agent_role: curriculum_specialist
- name: "content_development"
display_name: "Content Development"
description: >
Develop the actual knowledge content: articles, guides,
reference documents, examples, exercises.
entry_gate: "Curriculum design approved"
exit_gate: "All module content drafted and internally reviewed"
responsible: PO
estimated_duration: "10-30 hours"
steps:
- name: "reference_content"
description: "Write authoritative reference documentation per module"
verification: "Reference docs cover all module topics; technically accurate; well-sourced"
agent_role: domain_expert
- name: "practical_examples"
description: "Create real-world examples, case studies, and worked problems"
verification: "Minimum 2 examples per module; realistic and domain-appropriate"
agent_role: domain_expert
- name: "exercise_creation"
description: "Create practice exercises and hands-on activities"
verification: "Exercises align with module objectives; solutions provided; difficulty appropriate"
agent_role: curriculum_specialist
- name: "quick_reference_guides"
description: "Create cheat sheets, decision trees, and quick-reference materials"
verification: "Quick references are concise, accurate, and usable standalone"
agent_role: content_writer
- name: "internal_review"
description: "Domain Expert Agent reviews all content for accuracy and completeness"
verification: "All factual claims verified; no gaps in coverage; quality consistent"
agent_role: domain_expert
- name: "peer_review"
display_name: "Peer Review"
description: >
External or cross-team review of knowledge content for accuracy,
completeness, and T1P standard compliance.
entry_gate: "All module content internally reviewed"
exit_gate: "Peer review feedback addressed; PO approves final content"
responsible: PO
estimated_duration: "2-6 hours"
steps:
- name: "accuracy_review"
description: "Subject matter expert (human if available) reviews for factual accuracy"
verification: "All flagged inaccuracies corrected; reviewer sign-off obtained"
agent_role: domain_expert
- name: "completeness_review"
description: "Verify coverage against original domain research and knowledge map"
verification: "Coverage matrix complete; all knowledge map nodes have content"
agent_role: PO
- name: "t1p_calibration_review"
description: "Verify content reflects top 1% practitioner standards, not generic knowledge"
verification: "T1P calibration note written; sources include practitioner-level references"
agent_role: domain_expert
- name: "revision_cycle"
description: "Address all peer review feedback and produce final versions"
verification: "All feedback items resolved; final versions tagged"
agent_role: content_writer
- name: "certification_design"
display_name: "Certification Design"
description: >
Design certification or competency validation system for the domain.
entry_gate: "Content peer-reviewed and finalized"
exit_gate: "Certification framework approved by PO"
responsible: curriculum_specialist
estimated_duration: "2-4 hours"
steps:
- name: "competency_framework"
description: "Define competency levels and what each level requires"
verification: "Competency levels defined (e.g., awareness, practitioner, expert); criteria per level"
agent_role: curriculum_specialist
- name: "assessment_design"
description: "Design certification assessments: knowledge tests, practical evaluations"
verification: "Assessment per competency level; question bank with minimum 30 questions"
agent_role: curriculum_specialist
- name: "pass_criteria"
description: "Define passing thresholds and retake policies"
verification: "Pass thresholds justified; retake rules documented; appeal process defined"
agent_role: curriculum_specialist
- name: "certification_metadata"
description: "Define certification naming, validity period, renewal requirements"
verification: "Certification metadata complete; aligned with industry conventions"
agent_role: PO
- name: "training_delivery"
display_name: "Training Delivery"
description: >
Deploy knowledge content and make it accessible to target audiences
(agents, humans, or both).
entry_gate: "Certification framework approved; content finalized"
exit_gate: "Knowledge base deployed and accessible; initial users onboarded"
responsible: PO
estimated_duration: "2-4 hours"
steps:
- name: "knowledge_base_deployment"
description: "Deploy content to knowledge system (Part 5) with proper indexing"
verification: "All modules accessible; search returns relevant results; navigation works"
agent_role: developer
- name: "agent_integration"
description: "Configure Domain Expert Agent with the new knowledge for runtime queries"
verification: "Agent can answer domain questions accurately using the new knowledge"
agent_role: developer
- name: "onboarding_guide"
description: "Create onboarding guide for users of the knowledge base"
verification: "Guide covers: how to navigate, how to search, how to take assessments"
agent_role: content_writer
- name: "initial_onboarding"
description: "Onboard first cohort of users and gather feedback"
verification: "First users successfully navigating content; feedback collected"
agent_role: PO
- name: "assessment"
display_name: "Assessment"
description: >
Evaluate effectiveness of the knowledge base and training program.
Iterate based on results.
entry_gate: "Knowledge base deployed; initial users onboarded"
exit_gate: "Assessment report delivered; improvement plan created or project concluded"
responsible: PO
estimated_duration: "2-4 hours"
steps:
- name: "usage_analytics"
description: "Analyze knowledge base usage: access patterns, completion rates, time spent"
verification: "Usage report with metrics per module; drop-off points identified"
agent_role: analyst
- name: "assessment_results_analysis"
description: "Analyze certification/assessment results: pass rates, common failures"
verification: "Results analysis with insights on knowledge gaps and content quality"
agent_role: curriculum_specialist
- name: "feedback_collection"
description: "Collect and analyze user feedback on content quality and usefulness"
verification: "Feedback synthesized; actionable improvements identified"
agent_role: PO
- name: "improvement_plan"
description: "Create plan for content updates, new modules, or curriculum changes"
verification: "Improvement plan with prioritized actions; timeline for next iteration"
agent_role: PO
dependencies:
external_inputs:
- "Domain specification with learning objectives (from IO or human)"
- "Access to domain standards and reference materials"
- "Subject matter expert availability for peer review (human if possible)"
- "Knowledge system (Part 5) access for deployment"
cross_references:
- "Part 4: Agent role bundles"
- "Part 5: Knowledge System (deployment target)"
- "Part 7: Governance (knowledge quality standards)"
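The dependency_mapping step in the Knowledge Mapping stage requires a prerequisite graph with no circular dependencies from which a learning order is derivable. A minimal sketch of that derivation, assuming the map is held as a dict of concept → prerequisites (the function name is illustrative):

```python
from graphlib import TopologicalSorter, CycleError

def learning_order(prereqs: dict[str, list[str]]) -> list[str]:
    """Return concepts ordered so every prerequisite precedes its dependents.

    prereqs maps each concept to the concepts that must be learned first.
    Raises ValueError if the knowledge map contains a circular dependency.
    """
    try:
        return list(TopologicalSorter(prereqs).static_order())
    except CycleError as e:
        raise ValueError(f"circular dependency in knowledge map: {e.args[1]}") from e
```

Running this over the concept hierarchy yields a default learning sequence; the Curriculum Agent can then layer complexity classifications and audience tracks on top of it.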
7.5 Template 5: Template Builder (Meta-Template)¶
The Template Builder is the meta-template: the template that creates templates. It uses T1P standards to research a target domain and produce a production-quality project template that can be registered in the template registry for future use.
template:
name: "Template Builder"
id: "tpl_template_builder"
version: "1.0.0"
description: >
Meta-template for creating new project templates. Uses T1P (Top 1%
Practitioner) standards to research target domains and produce
evidence-backed, peer-reviewable templates. This template creates
templates.
total_estimated_duration: "8-20 hours"
contextual_agents:
- name: "T1P Research Agent"
role: t1p_researcher
persistence: cross_sprint
description: >
Specialist researcher calibrated to top 1% practitioner standards.
Researches domain-specific lifecycle models, certification frameworks,
regulatory requirements, and professional standards. Maintains a
library of domain research across template-building projects.
stages:
- name: "domain_analysis"
display_name: "Domain Analysis"
description: >
Understand the target domain: what type of work it involves,
who the practitioners are, what lifecycle models exist.
entry_gate: "Domain specification provided: name, description, example projects, goals"
exit_gate: "Domain analysis report approved by PO"
responsible: PO
estimated_duration: "1-3 hours"
steps:
- name: "domain_scoping"
description: "Define the boundaries of the target domain and types of projects it covers"
verification: "Domain scope document with inclusions, exclusions, and example project types"
agent_role: t1p_researcher
- name: "practitioner_profiling"
description: "Profile who works in this domain: roles, skills, typical workflows"
verification: "Practitioner profiles for key roles; workflow descriptions documented"
agent_role: t1p_researcher
- name: "lifecycle_model_survey"
description: "Survey existing lifecycle models used in this domain (industry standards, frameworks)"
verification: "Minimum 3 lifecycle models identified with source references"
agent_role: t1p_researcher
- name: "domain_analysis_report"
description: "Synthesize findings into a domain analysis report"
verification: "Report reviewed and approved by PO"
agent_role: t1p_researcher
- name: "best_practices_research"
display_name: "Best Practices Research"
description: >
Deep dive into T1P standards for the domain: what do the best
practitioners do, what processes do they follow, what tools do they use.
entry_gate: "Domain analysis report approved"
exit_gate: "Best practices report with T1P calibration note approved by PO"
responsible: t1p_researcher
estimated_duration: "2-6 hours"
steps:
- name: "standards_research"
description: "Research formal standards, certifications, and regulatory frameworks"
verification: "Standards catalog with applicability assessment and compliance requirements"
agent_role: t1p_researcher
- name: "expert_workflow_analysis"
description: "Analyze how top practitioners structure their work: phases, reviews, quality gates"
verification: "Expert workflow documented with stage descriptions and decision points"
agent_role: t1p_researcher
- name: "common_failures_research"
description: "Research common failure modes and anti-patterns in the domain"
verification: "Failure catalog with prevention strategies and detection methods"
agent_role: t1p_researcher
- name: "tool_ecosystem_survey"
description: "Survey tools, platforms, and technologies used by practitioners"
verification: "Tool landscape documented; relevance to XIOPro agent capabilities assessed"
agent_role: t1p_researcher
- name: "t1p_calibration"
description: "Document what distinguishes top 1% practitioners from average in this domain"
verification: "T1P calibration note written; differentiators specific and evidence-backed"
agent_role: t1p_researcher
- name: "stage_design"
display_name: "Stage Design"
description: >
Design the lifecycle stages for the new template based on
domain research and best practices.
entry_gate: "Best practices report with T1P calibration approved"
exit_gate: "Stage design reviewed and approved by PO"
responsible: PO
estimated_duration: "1-3 hours"
steps:
- name: "stage_selection"
description: "Select which lifecycle stages apply; determine stage ordering and dependencies"
verification: "Stage list with ordering rationale; omitted stages explicitly justified"
agent_role: t1p_researcher
- name: "stage_description"
description: "Write detailed description for each stage: purpose, inputs, outputs"
verification: "Each stage has description, purpose statement, expected inputs and outputs"
agent_role: t1p_researcher
- name: "stage_duration_estimation"
description: "Estimate wall-clock duration per stage based on domain benchmarks"
verification: "Duration estimates with reference to domain benchmarks or practitioner data"
agent_role: t1p_researcher
- name: "step_design"
description: "Design steps within each stage: name, description, verification criteria"
verification: "Every stage has minimum 2 steps; each step has verification criteria"
agent_role: t1p_researcher
- name: "gate_definition"
display_name: "Gate Definition"
description: >
Define entry and exit gates for every stage and verification
criteria for every step.
entry_gate: "Stage design approved"
exit_gate: "Gate definitions reviewed and approved by PO"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "entry_gate_definition"
description: "Define entry conditions for each stage: what must be true before starting"
verification: "Every stage has an entry gate; gates are specific and verifiable"
agent_role: t1p_researcher
- name: "exit_gate_definition"
description: "Define exit conditions for each stage: what must be verified before advancing"
verification: "Every stage has an exit gate; gates reference concrete artifacts or metrics"
agent_role: t1p_researcher
- name: "verification_criteria"
description: "Define step-level verification criteria per Section 5.1 requirements"
verification: "Every step has verification criteria; criteria are testable"
agent_role: t1p_researcher
- name: "approval_flow_design"
description: "Define who approves at each gate: PO, agent, human, or combination"
verification: "Approval flow per stage; escalation paths for L4+ decisions defined"
agent_role: PO
- name: "agent_role_assignment"
display_name: "Agent Role Assignment"
description: >
Define which agent roles are needed for the template and assign
them to stages and steps.
entry_gate: "Gate definitions approved"
exit_gate: "Agent role assignments reviewed and approved by PO"
responsible: PO
estimated_duration: "1-2 hours"
steps:
- name: "role_identification"
description: "Identify all agent roles needed for the template"
verification: "Role list with descriptions; each role maps to a Part 4 role bundle or is defined as new"
agent_role: t1p_researcher
- name: "contextual_agent_definition"
description: "Define contextual agents that persist across sprints"
verification: "Each contextual agent has: name, role, persistence policy, description"
agent_role: t1p_researcher
- name: "role_stage_mapping"
description: "Assign roles to stages and steps"
verification: "Every step has an assigned agent role; no step has 'unassigned'"
agent_role: PO
- name: "resource_defaults"
description: "Define default resource budgets per stage: tokens, time, agent count"
verification: "Resource defaults specified per stage; totals within reasonable bounds"
agent_role: PO
- name: "template_testing"
display_name: "Template Testing"
description: >
Validate the template by dry-running it against a real or
hypothetical project scenario.
entry_gate: "Agent role assignments approved; complete draft template exists"
exit_gate: "Template validated; all issues from dry run resolved"
responsible: PO
estimated_duration: "1-3 hours"
steps:
- name: "dry_run_scenario"
description: "Select a real or hypothetical project scenario to test the template against"
verification: "Scenario documented with enough detail to exercise all stages"
agent_role: PO
- name: "walkthrough_execution"
description: "Walk through the template stage by stage with the test scenario"
verification: "Every stage and step exercised; gaps, ambiguities, and blockers identified"
agent_role: t1p_researcher
- name: "gap_resolution"
description: "Fix identified gaps, ambiguities, and issues from the walkthrough"
verification: "All identified issues resolved; template updated"
agent_role: t1p_researcher
- name: "comparison_validation"
description: "Compare template against existing XIOPro templates for consistency"
verification: "Naming conventions, gate patterns, and role assignments consistent with other templates"
agent_role: PO
- name: "template_publication"
display_name: "Template Publication"
description: >
Finalize, document, and register the template for use in
future projects.
entry_gate: "Template validated and all dry run issues resolved"
exit_gate: "Template registered in template registry; documentation complete"
responsible: PO
estimated_duration: "0.5-1 hour"
steps:
- name: "template_yaml_finalization"
description: "Produce final template YAML with all stages, steps, gates, roles, resources"
verification: "YAML validates against template schema; all required fields present"
agent_role: t1p_researcher
- name: "research_memo"
description: "Write research memo: sources, domain references, rationale for design choices"
verification: "Memo references sources from best practices research; rationale per stage"
agent_role: t1p_researcher
- name: "t1p_calibration_note"
description: "Write T1P calibration note: how the template reflects top 1% standards"
verification: "Calibration note is specific to domain; not generic boilerplate"
agent_role: t1p_researcher
- name: "registry_registration"
description: "Register template in the XIOPro template registry (ODM)"
verification: "Template ID assigned; template queryable from registry; PO notified"
agent_role: PO
dependencies:
external_inputs:
- "Domain specification: name, description, example projects, goals (from PO or human)"
- "Access to domain literature, standards, and practitioner references"
- "Existing XIOPro template registry for consistency comparison"
cross_references:
- "Part 4: Agent role bundles (role alignment)"
- "Part 3: ODM template registry schema"
- "Section 5A: Template Builder process (this template implements that process)"
- "Section 5.1: Step-level review requirements (all templates must comply)"
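The step_design and template_yaml_finalization verifications above imply a checkable schema: every stage carries entry and exit gates plus at least two steps, and every step carries verification criteria and an assigned agent role. A minimal validator sketch under those assumed rules (the dict layout mirrors the templates in this part; the function name is hypothetical):

```python
def validate_template(template: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the template passes."""
    errors = []
    for stage in template.get("stages", []):
        name = stage.get("name", "<unnamed>")
        # Every stage needs both gates (gate_definition stage requirements).
        for key in ("entry_gate", "exit_gate"):
            if not stage.get(key):
                errors.append(f"stage '{name}' missing {key}")
        # step_design requires a minimum of 2 steps per stage.
        steps = stage.get("steps", [])
        if len(steps) < 2:
            errors.append(f"stage '{name}' has fewer than 2 steps")
        for step in steps:
            if not step.get("verification"):
                errors.append(f"step '{step.get('name')}' in '{name}' missing verification")
            # role_stage_mapping: no step may be unassigned.
            if not step.get("agent_role"):
                errors.append(f"step '{step.get('name')}' in '{name}' is unassigned")
    return errors
```

A check of this shape could run at registry_registration time, rejecting any template draft that fails before an ID is assigned.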
7.6 Template 6: Research Evaluation¶
The Research Evaluation template is a lightweight pipeline for evaluating incoming research, technologies, or external innovations against the current XIOPro stack. It is not a build template — it produces a verdict (integrate, watch, or dismiss) and, if applicable, a watch list entry with re-evaluation triggers.
template:
name: "Research Evaluation"
id: "tpl_research_evaluation"
version: "1.0.0"
description: >
Evaluation pipeline for incoming research about new technologies, techniques,
or external innovations. Assesses applicability to the current stack and
produces a verdict: integrate (proceed to build), watch (monitor with triggers),
or dismiss (not relevant). Does not include build/execute stages.
total_estimated_duration: "1-4 hours"
exit_states:
- integrate: "Research is directly applicable now. Proceed to IT Project or other build template."
- watch: "Not applicable now but strategically relevant. Monitor with defined triggers."
- dismiss: "Not relevant to current or foreseeable stack. Archive and close."
contextual_agents:
- name: "Research Agent"
role: researcher
persistence: project_scoped
description: >
Gathers and structures information about the research subject. Produces
the specification document and applicability assessment. Terminated on
project close.
- name: "Stack Analyst Agent"
role: analyst
persistence: project_scoped
description: >
Evaluates research against the current infrastructure, dependencies,
and strategic roadmap. Produces the applicability score card.
stages:
- name: "idea_capture"
display_name: "Idea Capture"
description: >
Research arrives via agent discovery, human idea, external feed, or
conversation. Log the raw input with source attribution and timestamp.
entry_gate: "Research topic identified (human question, agent alert, or feed item)"
exit_gate: "Raw input logged with source, date, and initial domain tag"
responsible: PO
estimated_duration: "5-15 minutes"
steps:
- name: "log_input"
description: "Record the raw research input: what was asked, who asked, when, and initial context"
verification: "Input logged with source attribution, timestamp, and domain tag"
agent_role: PO
- name: "assign_researcher"
description: "Spawn or assign a Research Agent to investigate the topic"
verification: "Research Agent assigned; investigation ticket created"
agent_role: PO
- name: "specification"
display_name: "Specification"
description: >
Structure the research into a standardized format: what is it, who made it,
what does it do, what is its maturity level, and what is the community reception.
entry_gate: "Idea captured and Research Agent assigned"
exit_gate: "Specification document completed and reviewed by PO"
responsible: researcher
estimated_duration: "30-90 minutes"
steps:
- name: "identify_source"
description: "Identify the primary source: paper, repo, announcement, blog post, or patent"
verification: "Primary source URL/DOI documented; authors and affiliations identified"
agent_role: researcher
- name: "classify_technology"
description: "Classify the technology: type (algorithm, framework, hardware, protocol), domain, and layer"
verification: "Technology classified with type, domain, and infrastructure layer"
agent_role: researcher
- name: "assess_maturity"
description: "Evaluate maturity: publication date, code availability, community adoption metrics, production deployments"
verification: "Maturity assessment includes age, code status, repo count, star count, and known deployments"
agent_role: researcher
- name: "write_specification"
description: "Consolidate findings into a structured specification document"
verification: "Specification covers: what, who, how, maturity, community reception; PO reviews"
agent_role: researcher
- name: "applicability_assessment"
display_name: "Applicability Assessment"
description: >
Evaluate the research against the current XIOPro stack, infrastructure, and
strategic roadmap. Produce a scored assessment across three dimensions.
entry_gate: "Specification document approved"
exit_gate: "Applicability score card completed with justification for each dimension"
responsible: analyst
estimated_duration: "30-60 minutes"
scoring_dimensions:
- name: "direct_applicability"
description: "Can this be used in the current stack without major changes?"
scale: "HIGH / MEDIUM / LOW / N/A"
- name: "effort_to_integrate"
description: "If applicable, how much work to integrate?"
scale: "LOW (drop-in) / MEDIUM (days) / HIGH (weeks+) / N/A"
- name: "strategic_value"
description: "Even if not applicable now, does this matter for the roadmap?"
scale: "HIGH / MEDIUM / LOW"
steps:
- name: "stack_comparison"
description: "Map the technology against current stack components, dependencies, and constraints"
verification: "Every relevant stack component assessed for compatibility/overlap"
agent_role: analyst
- name: "effort_estimate"
description: "Estimate integration effort if the technology were adopted (or mark N/A)"
verification: "Effort estimate includes scope, prerequisites, and risk factors"
agent_role: analyst
- name: "strategic_alignment"
description: "Evaluate strategic value: roadmap alignment, cost impact, competitive advantage"
verification: "Strategic value justified with reference to roadmap or market position"
agent_role: analyst
- name: "score_card"
description: "Produce the final applicability score card with all three dimensions"
verification: "Score card complete; each dimension scored with written justification"
agent_role: analyst
- name: "design"
display_name: "Design (Conditional)"
description: >
Conditional stage. If direct applicability is HIGH → produce an integration
design spec. If strategic value is MEDIUM or higher → produce a watch list
entry with re-evaluation triggers. Otherwise → record a dismiss memo and
proceed to Review.
entry_gate: "Applicability score card approved by PO"
exit_gate: "Integration spec OR watch list entry completed and reviewed"
responsible: PO
estimated_duration: "15-60 minutes"
conditional_paths:
- condition: "direct_applicability == HIGH"
action: "Produce integration design spec; recommend build template (IT Project, etc.)"
- condition: "strategic_value >= MEDIUM AND direct_applicability < HIGH"
action: "Produce watch list entry with re-evaluation triggers and next-check date"
- condition: "strategic_value == LOW AND direct_applicability == LOW"
action: "Skip to Review with dismiss recommendation"
steps:
- name: "determine_path"
description: "Based on score card, determine which conditional path to follow"
verification: "Path selected with explicit reference to score card values"
agent_role: PO
- name: "integration_spec"
description: "(If integrate path) Produce integration design: affected components, migration plan, test strategy"
verification: "Integration spec covers scope, affected systems, migration steps, and risks"
agent_role: architect
condition: "integrate path"
- name: "watch_list_entry"
description: "(If watch path) Define re-evaluation triggers, monitoring sources, and next-check date"
verification: "Watch list entry has at least 2 triggers, monitoring plan, and next-check date"
agent_role: analyst
condition: "watch path"
- name: "dismiss_memo"
description: "(If dismiss path) Write brief dismissal rationale for the record"
verification: "Dismiss memo explains why technology is not relevant; archived"
agent_role: PO
condition: "dismiss path"
- name: "review"
display_name: "Review"
description: >
PO or human reviews the assessment and approves the verdict. The verdict
determines the project exit state.
entry_gate: "Design stage output (integration spec, watch list entry, or dismiss memo) completed"
exit_gate: "Verdict approved by PO or human reviewer; project closed with exit state"
responsible: PO
estimated_duration: "10-30 minutes"
steps:
- name: "review_assessment"
description: "PO reviews the full evaluation chain: specification, score card, and design output"
verification: "PO confirms assessment is complete, consistent, and well-justified"
agent_role: PO
- name: "approve_verdict"
description: "Approve the verdict: integrate, watch, or dismiss"
verification: "Verdict logged to Bus; exit state recorded on project entity"
agent_role: PO
- name: "follow_up_actions"
description: "Create follow-up actions based on verdict"
verification: >
If integrate: IT Project ticket created with link to this evaluation.
If watch: Calendar reminder set for next-check date; triggers documented.
If dismiss: Project archived; no follow-up required.
agent_role: PO
dependencies:
external_inputs:
- "Research topic (from human, agent, or feed)"
- "Current stack inventory and roadmap (from Part 2-3)"
cross_references:
- "Part 4: Agent role bundles (researcher, analyst, architect)"
- "Part 3: ODM project entity (exit_state field)"
- "Section 5.1: Step-level review requirements"
- "Section 8: Template Registry (registration)"
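The Design stage's conditional_paths can be read as a small decision function. The sketch below is illustrative only, assuming the HIGH/MEDIUM/LOW ranking from the score card; the function name and the choice to default uncovered combinations (e.g., MEDIUM applicability with LOW strategic value) to dismiss are assumptions, not part of the template definition:

```python
# Illustrative mapping from an applicability score card to a Design-stage
# path. RANK and select_design_path are hypothetical names for this sketch.
RANK = {"N/A": -1, "LOW": 0, "MEDIUM": 1, "HIGH": 2}

def select_design_path(direct_applicability: str, strategic_value: str) -> str:
    """Map score-card values to integrate / watch / dismiss."""
    if direct_applicability == "HIGH":
        return "integrate"   # produce integration design spec
    if RANK[strategic_value] >= RANK["MEDIUM"]:
        return "watch"       # watch list entry with re-evaluation triggers
    return "dismiss"         # skip to Review with dismiss recommendation
```

Any combination not covered by the three listed conditions falls through to dismiss in this sketch; the template itself leaves that case to the PO.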
8. Template Registry and Selection¶
8.1 Registry¶
All templates are registered in the ODM project_templates table with:
| Field | Type | Description |
|---|---|---|
| id | UUID | Template identifier |
| name | string | Human-readable name |
| version | semver | Template version |
| stages | JSONB | Complete stage/step/gate definitions |
| contextual_agents | JSONB | Long-lived agent definitions |
| resource_defaults | JSONB | Default budgets per stage |
| created_by | UUID | Agent or human that created the template |
| created_at | timestamp | Creation time |
| status | enum | draft, active, deprecated |
8.2 Template Selection¶
When a new project is created, the PO selects a template based on:
- Project type — IT, marketing, content, knowledge, or composite
- Scope — If the project fits a standard template, use it. If not, spawn the Template Builder.
- Overrides — Any template can be customized at the project level: stages can be skipped (with PO justification), durations adjusted, and resource defaults overridden.
The selected template is recorded on the project entity and cannot be changed after the Execute stage begins (to prevent mid-execution structural changes).
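The lock rule above is simple enough to express as a guard. This is a minimal sketch under assumptions: the project record shape, the state names, and the function name are all hypothetical, not the actual ODM model:

```python
# Hypothetical guard enforcing "template cannot change after Execute begins".
EXECUTE_STARTED_STATES = {"execute", "complete"}

def change_template(project: dict, new_template_id: str) -> dict:
    """Return an updated project record, or refuse if Execute has begun."""
    if project["stage"] in EXECUTE_STARTED_STATES:
        raise ValueError(
            "Template is locked once Execute begins; "
            "mid-execution structural changes are not allowed."
        )
    return {**project, "template_id": new_template_id}
```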
9. Review Engine¶
The Review Engine is a reusable T1P review pipeline that can review any artifact — blueprints, code, designs, content, security posture, or domain-specific work products. It orchestrates internal pre-review, external multi-model AI reviews, comparative analysis, and human consultation into a structured 11-stage pipeline.
9.1 Pipeline¶
review_engine:
  name: "Review Engine"
  version: "1.0.0"
  description: >
    T1P multi-reviewer pipeline for any artifact. Supports internal pre-review,
    external AI reviews (ChatGPT, Gemini, Claude, NotebookLM), and human
    consultation. 11 stages from entry criteria check through optional re-review.
  stages:
    - name: "entry_criteria_check"
      stage_number: 0
      description: "Validate the artifact is ready for review. Present in ATAM, Fagan, IEEE 1028, and all NASA/DoD gate reviews. Prevents wasting reviewer time on incomplete work."
      steps:
        - name: "artifact_completeness"
          description: "Verify the artifact is complete — no TODO placeholders, no missing sections, no broken references"
          verification: "Completeness checklist passed (0 placeholders, 0 broken refs)"
          agent_role: PO
        - name: "scope_definition"
          description: "Define review scope — what is being reviewed, what is explicitly out of scope, what changed since last review"
          verification: "Scope document written with in/out boundaries"
          agent_role: PO
        - name: "stakeholder_readiness"
          description: "Confirm reviewers are available, APIs are accessible, budget is approved"
          verification: "All selected reviewers confirmed reachable"
          agent_role: specialist
        - name: "quality_attribute_map"
          description: "Map key quality attributes to evaluate (inspired by ATAM's Utility Tree). What matters most for this artifact?"
          verification: "Quality attributes ranked by priority with rationale"
          agent_role: PO
      entry_gate: "Review request exists"
      exit_gate: "All entry criteria met — artifact is review-worthy"
      reject_action: "Return to author with specific gaps to address before re-submitting"
    - name: "material_preparation"
      description: >
        Gather and organize the artifact for review. Concatenate files,
        resolve references, ensure completeness.
      steps:
        - name: "identify_artifact"
          description: "Define what is being reviewed (BP, code module, design doc, content piece)"
          verification: "Artifact path, type, and scope are documented"
          agent_role: PO
        - name: "gather_materials"
          description: "Collect all files, dependencies, and context needed for a complete review"
          verification: "All referenced files exist and are accessible"
          agent_role: specialist
        - name: "prepare_review_package"
          description: "Concatenate, format, and package materials into a reviewable format"
          verification: "Package is under API token limits, readable, self-contained"
          agent_role: specialist
      entry_gate: "Artifact exists and is in a reviewable state"
      exit_gate: "Review package prepared and verified complete"
    - name: "internal_pre_review"
      description: >
        Run an internal review to verify the artifact is review-worthy.
        Catch obvious issues before external reviewers.
      steps:
        - name: "self_review"
          description: "Author or PO reviews for completeness, consistency, and obvious errors"
          verification: "No broken references, no placeholder content, no stale data"
          agent_role: PO
        - name: "automated_checks"
          description: "Run automated validation (schema checks, link validation, spell check, format compliance)"
          verification: "All automated checks pass"
          agent_role: specialist
        - name: "t1p_calibration"
          description: "Verify the artifact meets T1P standards for its domain — would a top 1% practitioner consider this review-ready?"
          verification: "T1P calibration note written with confidence assessment"
          agent_role: specialist
      entry_gate: "Review package prepared"
      exit_gate: "All pre-review issues resolved OR escalated"
      loop: "If issues found -> fix -> return to internal_pre_review (max 3 iterations)"
    - name: "prompt_engineering"
      description: >
        Craft the review prompt. The prompt quality determines the review quality.
      steps:
        - name: "define_review_criteria"
          description: "Specify evaluation dimensions, scoring system, and what specifically to examine. Can use template defaults or custom criteria."
          verification: "Criteria are specific, measurable, and relevant to the artifact type"
          agent_role: PO
        - name: "craft_review_prompt"
          description: "Write the full prompt including: context, artifact summary, criteria, scoring format, output structure, comparison requirements"
          verification: "Prompt tested with a dry run or reviewed by a second agent"
          agent_role: specialist
        - name: "select_reviewers"
          description: "Choose which reviewers to use based on artifact type, budget, and available APIs"
          verification: "At least 2 reviewers selected, API keys verified, budget approved"
          agent_role: PO
      entry_gate: "Artifact passed internal pre-review"
      exit_gate: "Prompt finalized, reviewers selected, APIs verified"
    - name: "external_review_submission"
      description: >
        Submit the artifact + prompt to selected external reviewers in parallel.
      steps:
        - name: "submit_to_reviewers"
          description: "Send to each reviewer via their API. Capture raw responses."
          verification: "Each reviewer returns a response (or error is logged)"
          agent_role: specialist
      reviewers:
        - id: claude_opus
          method: "internal agent spawn (model: opus)"
          cost: "API tokens"
          strengths: "deep architecture analysis, implementation realism"
        - id: chatgpt
          method: "POST api.openai.com/v1/chat/completions"
          cost: "$0.01-0.10 per review"
          strengths: "broad knowledge, clear scoring"
        - id: gemini
          method: "POST generativelanguage.googleapis.com"
          cost: "free tier or $0.01 per review"
          strengths: "large context, design quality focus"
        - id: notebooklm
          method: "notebooklm-py (create notebook, upload, generate)"
          cost: "free"
          strengths: "audio format, synthesis, unique perspective"
          formats: ["audio_podcast", "briefing_doc", "mind_map", "deep_research"]
      entry_gate: "Prompt finalized and APIs verified"
      exit_gate: "All selected reviewers have responded (or timed out)"
    - name: "response_collection"
      description: >
        Save each review response as a document and to the database.
      steps:
        - name: "save_review_documents"
          description: "Write each review to resources/reviews/ with standard naming: REVIEW_{artifact}_{reviewer}.md"
          verification: "All review files exist and contain substantive content"
          agent_role: specialist
        - name: "save_to_database"
          description: "Store review metadata in Bus DB: reviewer, scores, timestamp, prompt used, artifact version"
          verification: "DB records created for each review"
          agent_role: specialist
        - name: "save_prompt_record"
          description: "Archive the exact prompt used for reproducibility"
          verification: "Prompt saved alongside review results"
          agent_role: specialist
      entry_gate: "At least 2 reviewer responses received"
      exit_gate: "All responses saved to files and DB"
    - name: "comparative_analysis"
      description: >
        Analyze all reviews together — find agreement, disagreement, and synthesis.
      steps:
        - name: "build_comparison_matrix"
          description: "Side-by-side scoring table across all reviewers and all dimensions"
          verification: "Matrix is complete with no missing cells"
          agent_role: specialist
        - name: "identify_consensus"
          description: "Find areas where all reviewers agree (both strengths and weaknesses)"
          verification: "Consensus items documented with confidence level"
          agent_role: specialist
        - name: "identify_divergence"
          description: "Find areas of significant disagreement, explain why different reviewers may have different perspectives"
          verification: "Divergences explained, not just listed"
          agent_role: specialist
        - name: "synthesize_recommendations"
          description: "Merge all reviewer recommendations into a prioritized action list"
          verification: "Action list is deduplicated, prioritized, and actionable"
          agent_role: specialist
      entry_gate: "All reviews collected and saved"
      exit_gate: "Comparison matrix + synthesis document written"
    - name: "fix_list_generation"
      description: >
        Create a concrete, ticketable list of all required fixes.
      steps:
        - name: "extract_fixes"
          description: "Extract every specific fix recommendation from all reviews"
          verification: "Each fix has: description, severity, source reviewer(s), estimated effort"
          agent_role: specialist
        - name: "prioritize_fixes"
          description: "Order by severity x impact. Group by theme."
          verification: "Priority list is clear and actionable"
          agent_role: PO
        - name: "estimate_effort"
          description: "Estimate implementation time for each fix"
          verification: "Estimates are realistic (T1P calibrated)"
          agent_role: specialist
      entry_gate: "Comparative analysis complete"
      exit_gate: "Fix list with priorities and estimates ready for human review"
    - name: "human_consultation"
      description: >
        Present findings to human via IO. Create L4 alert for RC session.
      steps:
        - name: "create_consultation_alert"
          description: "IO creates an L4 alert with: summary of findings, fix list, GO's recommendation"
          verification: "Alert created in Bus with full context"
          agent_role: IO
        - name: "present_findings"
          description: "In RC session: walk human through comparison matrix, key findings, recommended fixes"
          verification: "Human has reviewed all materials"
          agent_role: IO
        - name: "capture_decisions"
          description: "Record human's decisions: approved fixes, rejected fixes, modified scope, new priorities"
          verification: "All decisions recorded as Bus events"
          agent_role: IO
      entry_gate: "Fix list ready"
      exit_gate: "Human has reviewed and provided decisions"
    - name: "fix_execution"
      description: >
        Execute approved fixes.
      steps:
        - name: "create_fix_tickets"
          description: "Create tickets for each approved fix"
          verification: "Tickets created in Bus with correct priority and assignment"
          agent_role: PO
        - name: "execute_fixes"
          description: "Specialists execute the fixes per tickets"
          verification: "Each fix implemented and verified"
          agent_role: specialist
      entry_gate: "Human approval received"
      exit_gate: "All approved fixes implemented"
    - name: "re_review"
      description: >
        Optional — run the review cycle again on the fixed artifact to verify improvements.
      steps:
        - name: "decide_re_review"
          description: "PO decides if a re-review is warranted based on fix magnitude"
          verification: "Decision documented with rationale"
          agent_role: PO
        - name: "execute_re_review"
          description: "If warranted: return to stage 2 with the fixed artifact"
          verification: "Re-review scores improved over original"
          agent_role: specialist
      entry_gate: "Fixes executed"
      exit_gate: "Re-review complete OR waived with documented rationale"
      optional: true
9.2 Review Types¶
The Review Engine supports different review configurations. Each type has default criteria but can be customized per review instance.
| Review Type | Artifact | Default Focus Areas |
|---|---|---|
| Architecture Review | Blueprint, system design, technical decisions | Completeness, consistency, scalability, implementability, cost realism, security |
| Code Review | Module, PR, or full codebase | Correctness, performance, readability, test coverage, security, maintainability |
| Content Review | Documentation, marketing copy, tutorials | Accuracy, clarity, completeness, audience fit, SEO, style guide compliance |
| Design Review | UI/UX, branding, visual identity | Usability, accessibility, brand consistency, visual hierarchy, responsiveness |
| Security Audit | Any artifact with security implications | Vulnerability assessment, compliance check, threat modeling, secrets exposure |
| Domain Expert Review | Domain-specific work products | Domain-specific validation (e.g., ISO 19650 compliance, BIM standards, regulatory adherence) |
9.3 Review Engine ODM Schema¶
The following tables support the Review Engine and are added to the Bus database:
review_sessions:
  id: uuid
  artifact_type: text       # bp, code, content, design, security, domain
  artifact_path: text
  artifact_version: text
  prompt_used: text
  status: review_status     # preparing, reviewing, analyzing, consulting, executing, complete
  created_by: text          # agent ID
  created_at: timestamptz

review_results:
  id: uuid
  session_id: uuid          # FK to review_sessions
  reviewer_id: text         # claude_opus, chatgpt, gemini, notebooklm
  reviewer_model: text      # specific model version
  scores: jsonb             # { dimension: score }
  content: text             # full review text
  cost: numeric
  response_time_ms: integer
  created_at: timestamptz
The review_status enum values track pipeline progression:
| Status | Meaning |
|---|---|
| preparing | Stages 1-2: material preparation and internal pre-review |
| reviewing | Stages 3-5: prompt engineering, external submission, response collection |
| analyzing | Stages 6-7: comparative analysis and fix list generation |
| consulting | Stage 8: human consultation |
| executing | Stage 9: fix execution |
| complete | Stage 10: re-review complete or waived; session closed |
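The stage-to-status mapping can be sketched as a small function. This is illustrative only: the function name is hypothetical, and it assumes stage 0 (entry criteria check), which the table does not mention, also maps to preparing:

```python
# Hypothetical helper mapping a Review Engine stage number (0-10) to the
# review_status enum per the table above. Stage 0 -> "preparing" is an
# assumption; the table only lists stages 1-10.
def status_for_stage(stage_number: int) -> str:
    if stage_number <= 2:
        return "preparing"    # entry check, material prep, internal pre-review
    if stage_number <= 5:
        return "reviewing"    # prompt engineering through response collection
    if stage_number <= 7:
        return "analyzing"    # comparative analysis, fix list generation
    if stage_number == 8:
        return "consulting"   # human consultation
    if stage_number == 9:
        return "executing"    # fix execution
    return "complete"         # re-review complete or waived
```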
9.4 Integration Points¶
- Project Templates (this Part): Any template can invoke the Review Engine at its Review stage or at any custom review checkpoint.
- Part 4 (Agent Hierarchy): The PO owns the review session. Specialists execute material preparation, prompt engineering, and analysis. IO handles human consultation.
- Part 5 (Knowledge System): Review results feed back into the knowledge base as lessons learned.
- Part 7 (Governance): L4 alert for human consultation follows the standard alert taxonomy.
- Part 11 (System Review): The System Review gate can use the Review Engine as its implementation mechanism.
9.5 Review Metrics (T1P Process Improvement)¶
Every review session collects metrics for continuous process improvement, as called for by IEEE 1028 and Fagan-style inspection practice.
Per-Review Metrics:
| Metric | Description | Collected At |
|---|---|---|
| defect_count | Total issues found by each reviewer | Stage 5 (Response Collection) |
| defect_overlap_rate | % of issues found by 2+ reviewers (measures reviewer independence) | Stage 6 (Comparative Analysis) |
| unique_defect_rate | % of issues found by only 1 reviewer (measures reviewer diversity value) | Stage 6 |
| fix_acceptance_rate | % of recommended fixes approved by human | Stage 8 (Human Consultation) |
| fix_implementation_rate | % of approved fixes actually implemented | Stage 9 (Fix Execution) |
| score_improvement | Delta between original and re-review scores | Stage 10 (Re-Review) |
| review_cost | Total API cost across all reviewers | Stage 5 |
| review_duration | Wall-clock time from Stage 0 to Stage 8 | Stage 8 |
| prompt_effectiveness | Correlation between prompt specificity and defect discovery rate | Stage 6 |
Process Improvement Loop: After every 5 reviews, the Review Engine analyzes accumulated metrics to:
1. Identify which reviewers find the most unique defects (optimize reviewer selection)
2. Identify which prompt patterns yield the highest defect discovery (optimize prompts)
3. Track fix acceptance trends (calibrate severity ratings)
4. Compute ROI per reviewer (cost vs unique defects found)
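The two diversity metrics (defect_overlap_rate and unique_defect_rate) follow directly from their definitions. A minimal sketch, assuming each issue has been normalized to a comparable fingerprint string; the input shape and function name are assumptions for illustration:

```python
# Sketch of the reviewer-diversity metrics defined above.
# issues_by_reviewer maps reviewer id -> set of normalized issue fingerprints.
def diversity_metrics(issues_by_reviewer: dict) -> dict:
    counts = {}                              # issue -> number of reviewers who found it
    for found in issues_by_reviewer.values():
        for issue in found:
            counts[issue] = counts.get(issue, 0) + 1
    total = len(counts)
    if total == 0:
        return {"defect_overlap_rate": 0.0, "unique_defect_rate": 0.0}
    overlap = sum(1 for n in counts.values() if n >= 2)
    return {
        "defect_overlap_rate": overlap / total,          # found by 2+ reviewers
        "unique_defect_rate": (total - overlap) / total, # found by only 1
    }
```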
Stored in the review_metrics table (added to the ODM schema):
review_metrics:
  id: uuid
  session_id: uuid          # FK to review_sessions
  metric_name: text
  metric_value: numeric
  metadata: jsonb
  recorded_at: timestamptz
10. Template Engines¶
This section defines three engines that turn XIOPro into a project factory — capable of accepting any domain, producing a template skeleton, and instantiating executable project frames from it.
The engines build on the Template Builder (Section 5A), the template definitions (Section 7), and the Template Registry (Section 8). They add automation, cross-reference intelligence, and a universal base skeleton extracted from our four production templates.
10.0 Core Design Principle: Interactive Prompt-Driven Wizards¶
Template building and project building are NOT batch jobs. They are interactive, step-by-step prompting sessions driven by IO (Interaction Orchestrator):
- System shows what it knows — from T1P research, existing templates, knowledge vault
- System asks contextual questions — about what it doesn't know or needs human judgment on
- System proposes — stages, steps, gates, agent roles based on accumulated context
- Human approves/modifies — each proposal individually, not all at once
- System creates tickets — for approved stages only
- Tickets execute — produce artifacts that feed context into the next prompting round
- Progressive disclosure — don't overwhelm with all stages at once; reveal as context grows
This makes each engine a conversational project assembly line. The human co-creates the template/project with the system, guided by T1P research but shaped by domain-specific human knowledge.
IO owns the wizard session. PO executes the resulting tickets. The cycle continues until the template or project frame is complete.
10.1 Engine 1: Template Skeleton Builder¶
Purpose: Accept an unknown domain and produce a complete template skeleton ready for human review and registration.
Trigger: PO identifies a project idea that does not fit any existing template in the registry.
Process (7 phases):
Phase 1: Existing Knowledge Check¶
Before any external research, the builder checks what XIOPro already knows:
- Query the Knowledge Vault (struxio-knowledge/vault/) for domain-related documents
- Query the Template Registry for templates with overlapping characteristics
- Query the Review Engine's lesson history for relevant insights
- Check if any completed project has partial coverage of the target domain
If sufficient material exists, skip or reduce Phase 2. This prevents redundant research and respects budget constraints.
Phase 2: T1P Domain Research¶
Using the T1P Research Agent (contextual, cross-sprint):
1. Industry standards scan — Identify ISO standards, regulatory frameworks, professional certifications relevant to the domain
2. Practitioner workflow mapping — Document how top 1% practitioners structure their work: phases, deliverables, review points, quality gates
3. Stakeholder identification — Map roles (who commissions, who executes, who reviews, who consumes)
4. Lifecycle model survey — Find at least 3 established lifecycle models used in the domain (e.g., RIBA Plan of Work for architecture, SDLC for software, AIDA for marketing)
5. Common failure catalog — Research the top failure modes and anti-patterns
6. Tool ecosystem scan — Identify domain-specific tools and how they map to XIOPro agent capabilities
Output: Domain Research Report with T1P calibration note.
Phase 3: Cross-Reference with Base Skeleton¶
Compare the domain research against the Universal Base Skeleton (see Section 10.3):
- Which universal stages apply as-is?
- Which universal stages need domain-specific customization?
- Which domain-specific stages must be added?
- Which universal stages can be skipped (with justification)?
This produces a stage delta — the difference between the universal skeleton and the target domain.
Phase 4: Skeleton Assembly¶
Assemble the template skeleton:
skeleton:
  domain: "<domain name>"
  domain_id: "<slug>"
  version: "0.1.0-draft"
  t1p_calibration: "<summary of how this reflects top 1%>"
  contextual_agents:
    - name: "<domain-specific agent>"
      role: "<role>"
      persistence: cross_sprint
      description: "<what it maintains across sprints>"
  stages:
    - name: "<stage_name>"
      display_name: "<Stage Name>"
      source: "universal | domain_specific | adapted"
      description: "<purpose>"
      entry_gate: "<condition>"
      exit_gate: "<condition>"
      responsible: "<role>"
      estimated_duration: "<range>"
      steps:
        - name: "<step_name>"
          description: "<what>"
          verification: "<how to verify>"
          agent_role: "<role>"
  iteration_rules:
    max_iterations_per_stage: <n>
    rework_trigger: "<condition that sends back to previous stage>"
    escalation_threshold: "<when to involve human>"
  verification_patterns:
    stage_level: "<who reviews, how>"
    step_level: "<who verifies, how>"
    cross_stage: "<consistency checks between stages>"
Phase 5: Similarity Scoring¶
Score the new skeleton against all existing templates (IT, Marketing, Content, Knowledge) using:
- Stage overlap percentage
- Role overlap percentage
- Gate pattern similarity
- Estimated complexity ratio
If similarity > 80% to an existing template, recommend extending that template instead of creating a new one.
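One way the four dimensions could be combined is an unweighted average of per-dimension scores. This is a minimal sketch under assumptions: the blueprint names the dimensions but not the formula, so the Jaccard overlap, equal weighting, and the min/max complexity ratio are all illustrative choices:

```python
# Illustrative similarity score between a new skeleton and an existing
# template. Input dicts carry sets of stage names, role names, and gate
# patterns plus a numeric complexity estimate; all names are hypothetical.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

def template_similarity(new: dict, existing: dict) -> float:
    stage_overlap = jaccard(set(new["stages"]), set(existing["stages"]))
    role_overlap = jaccard(set(new["roles"]), set(existing["roles"]))
    gate_similarity = jaccard(set(new["gate_patterns"]), set(existing["gate_patterns"]))
    complexity_ratio = (min(new["complexity"], existing["complexity"])
                        / max(new["complexity"], existing["complexity"]))
    # Unweighted average; a score above 0.80 would trigger the
    # "extend existing template instead" recommendation.
    return (stage_overlap + role_overlap + gate_similarity + complexity_ratio) / 4
```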
Phase 6: Human Review Gate¶
Present the skeleton to the IO (human) for approval, including:
- Domain Research Report summary
- Stage list with rationale per stage
- Comparison to closest existing template
- T1P calibration note
- Estimated total duration range
- Resource requirements
The IO can approve, request revisions, or reject (with the reason recorded to the Bus).
Phase 7: Registration¶
On approval:
1. Assign template ID (tpl_<domain_slug>)
2. Write template YAML to struxio-logic/templates/
3. Register in TEMPLATE_REGISTRY.yaml
4. Register in ODM project_templates table
5. Publish research memo to Knowledge Vault
6. Notify PO that the template is available
10.2 Engine 2: Project Frame Builder¶
Purpose: Take an approved template skeleton + a specific project idea + constraints, and produce a fully executable project frame in the Bus DB.
Trigger: PO creates a new project and selects a template.
Process (4 phases):
Phase 1: Input Collection¶
Accept:
- template_id — registered template from the registry
- project_description — what specifically needs to be done
- constraints:
- budget_ceiling — maximum token/compute cost
- timeline — target completion date or duration
- team — available agent roles, human reviewer availability
- priority — L1-L5
- parent_project_id — if this is a sub-project (Section 5B)
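The Phase 1 inputs map naturally to a record type. A sketch of that record as a Python dataclass, with light validation; the class name and the validation rules beyond the L1-L5 range stated above are assumptions:

```python
# Hypothetical record for Project Frame Builder Phase 1 inputs.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProjectFrameInput:
    template_id: str                      # registered template from the registry
    project_description: str              # what specifically needs to be done
    budget_ceiling: float                 # maximum token/compute cost
    timeline: str                         # target completion date or duration
    team: list = field(default_factory=list)   # available agent roles, reviewers
    priority: str = "L3"                  # L1-L5
    parent_project_id: Optional[str] = None    # set for sub-projects (Section 5B)

    def __post_init__(self):
        if self.priority not in {f"L{i}" for i in range(1, 6)}:
            raise ValueError(f"priority must be L1-L5, got {self.priority}")
        if self.budget_ceiling <= 0:
            raise ValueError("budget_ceiling must be positive")
```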
Phase 2: Template Instantiation¶
The builder instantiates the template as follows:
1. Create the project record in the Bus DB (projects table) with:
   - Project ID, name, description, template_id, parent_project_id
   - Budget ceiling, timeline, priority
   - Status: idea_research (first stage)
2. Create the sprint plan — map template stages to sprints:
   - Each stage becomes one or more sprints depending on estimated duration vs sprint cadence
   - Set sprint start/end dates based on the timeline constraint
   - Assign capacity per sprint based on the team constraint
3. Generate tickets — for each step in each stage:
   - Create a Paperclip ticket with:
     - Title: [Stage] Step Name
     - Description: step description from the template
     - Acceptance criteria: step verification criteria from the template
     - Estimate: derived from stage duration / step count
     - Dependencies: previous step in the same stage, plus cross-stage dependencies
     - Assignable role: agent_role from the template
   - Tag tickets with stage name, template_id, and project_id
4. Assign agent roles:
   - Match template agent_role requirements to available specialist types
   - If the template requires a role not currently available, flag for PO resolution
5. Spawn contextual agents defined in the template (cross-sprint persistence)
6. Configure the sub-project structure (if composite):
   - Create sub-project records with their own templates
   - Set depends_on constraints per Section 5B rules
   - Assign a Master PO
7. Set resource budgets per stage:
   - Distribute the budget ceiling across stages using the template's resource_defaults ratios
   - Validate the total does not exceed the ceiling
   - Reserve contingency (default: 10% of the ceiling)
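The budget step above can be sketched as a small allocation function. The 10% contingency comes from the text; the function name, input shape, and pro-rata distribution are illustrative assumptions:

```python
# Sketch of per-stage budget distribution with a reserved contingency.
# ratios maps stage name -> resource_defaults ratio from the template.
def allocate_stage_budgets(ceiling: float, ratios: dict,
                           contingency: float = 0.10) -> dict:
    total_ratio = sum(ratios.values())
    available = ceiling * (1 - contingency)   # reserve contingency first
    budgets = {stage: available * r / total_ratio for stage, r in ratios.items()}
    assert sum(budgets.values()) <= ceiling   # validate against the ceiling
    return budgets
```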
Phase 3: Verification Setup¶
For each stage gate:
1. Create a gate checklist in the Bus DB (stage_gates table):
- Entry conditions (from template entry_gate)
- Exit conditions (from template exit_gate)
- Approval flow (from template or default: PO for stage gates, agent for step verification)
2. Configure the Review Engine (Section 9) for any stage that requires formal review
3. Set up automated checks where applicable:
- Budget threshold alerts (75%, 90%, 100% of stage budget)
- Duration alerts (if stage exceeds estimated duration by > 50%)
- Ticket completion tracking (percentage of steps complete per stage)
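The automated checks above reduce to threshold comparisons. A minimal sketch, assuming a simple alert-name convention (the 75/90/100% budget thresholds and the 50% duration overrun rule come from the text; everything else is illustrative):

```python
# Hypothetical automated stage check: budget threshold alerts plus a
# duration alert when actual time exceeds the estimate by more than 50%.
def stage_alerts(spent: float, budget: float,
                 actual_hours: float, estimated_hours: float) -> list:
    alerts = []
    for threshold in (75, 90, 100):
        if spent >= budget * threshold / 100:
            alerts.append(f"budget_{threshold}pct")
    if actual_hours > estimated_hours * 1.5:
        alerts.append("duration_overrun")
    return alerts
```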
Phase 4: Output¶
Produce a fully configured project:
project_frame:
  project_id: "<uuid>"
  project_name: "<name>"
  template_id: "<template_id>"
  status: "ready_to_execute"
  sprints:
    - sprint_id: "<uuid>"
      stage: "<stage_name>"
      start_date: "<date>"
      end_date: "<date>"
      tickets: ["<ticket_ids>"]
      budget_allocation: <amount>
  agents:
    contextual: ["<agent_ids>"]       # long-lived
    specialist_slots: ["<roles>"]     # to be spawned per sprint
  gates:
    - stage: "<stage_name>"
      entry_checklist: ["<conditions>"]
      exit_checklist: ["<conditions>"]
      approval_flow: "<who>"
  sub_projects: []                    # if composite
  monitoring:
    budget_alerts: [75, 90, 100]
    duration_alerts: true
    completion_tracking: true
The PO receives a Bus notification that the project frame is ready. Execution can begin immediately — the PO picks up the first sprint's tickets.
10.3 Engine 3: Process Skeleton Extractor (Universal Base Skeleton)¶
Purpose: Analyze the four production templates (IT Project, Marketing, Content Creation, Knowledge Expert) and extract the common skeleton that underlies all project types regardless of domain. This skeleton becomes the base that the Template Skeleton Builder extends.
Extraction Analysis¶
Universal stages (appear in all 4 templates, in order):
| Universal Stage | IT Project | Marketing | Content Creation | Knowledge Expert |
|---|---|---|---|---|
| Research | idea_research | market_research | topic_research | domain_research |
| Strategy/Planning | brainstorm + manifest | strategy | content_strategy | knowledge_mapping |
| Design | blueprint | campaign_design | outline (in draft) | curriculum_design |
| Production | execute:coding | content_creation | draft | content_development |
| Review | review (System Review) | content_review (in content_creation) | review | peer_review |
| Delivery | execute:deployment | launch | publish | training_delivery |
| Measurement | execute:monitoring | analytics + optimize | analytics | assessment |
Key findings:
- Every template has a Research phase — always first, always produces a report, always gated by PO approval. Universal.
- Every template has a Strategy/Planning phase — follows Research, synthesizes options, produces a plan. In IT this is split (Brainstorm + Manifest); in others it is combined. Universal.
- Every template has a Production phase — the core work. Format varies (code, copy, content, knowledge docs) but the pattern is identical: create artifacts per plan, verify each artifact, internal review. Universal.
- Every template has a Review phase — formal quality check before delivery. Scope varies (System Review for IT, peer review for Knowledge, editorial review for Content). Universal.
- Every template has a Delivery phase — deploy/publish/launch the produced artifacts. Universal.
- Every template has a Measurement phase — assess outcomes, collect metrics, feed back. Universal.
- Design is near-universal — IT has Blueprint, Marketing has Campaign Design, Knowledge has Curriculum Design. Content Creation folds design into the draft stage. Present explicitly in 3 of 4 templates; implicit in the 4th.
Domain-specific stages (unique to one template):
| Stage | Template | Why Domain-Specific |
|---|---|---|
| test_plan | IT Project | Software-specific: test strategy, coverage targets, test environments |
| execute:configuration | IT Project | Software-specific: env vars, feature flags, secrets |
| execute:installation | IT Project | Infrastructure-specific: provisioning, migrations |
| channel_setup | Marketing | Marketing-specific: ad platforms, email tools, tracking |
| optimize | Marketing | Marketing-specific: A/B testing, budget reallocation |
| certification_design | Knowledge Expert | Education-specific: competency frameworks, assessments |
| distribute | Content Creation | Publishing-specific: multi-channel distribution |
Universal Base Skeleton¶
universal_base_skeleton:
  version: "1.0.0"
  description: >
    The common process skeleton underlying all XIOPro project templates.
    Every new template MUST include these stages (adapted to the domain)
    unless a skip is explicitly justified and recorded.
  universal_stages:
    - name: "research"
      display_name: "Research"
      purpose: "Gather context, prior art, constraints, and domain knowledge"
      universal_pattern:
        entry_gate: "Project idea captured with domain tag and priority"
        exit_gate: "Research report reviewed and approved by PO"
        minimum_steps:
          - "Context gathering (landscape, prior art, constraints)"
          - "Stakeholder/audience analysis"
          - "Feasibility or gap assessment"
          - "Research synthesis report"
        responsible: PO
        verification: "PO reviews research report; report published to Bus"
    - name: "strategy"
      display_name: "Strategy & Planning"
      purpose: "Evaluate options, select approach, define scope and success criteria"
      universal_pattern:
        entry_gate: "Research report approved"
        exit_gate: "Strategy/plan document approved by PO with resource allocation"
        minimum_steps:
          - "Option generation (minimum 2 alternatives)"
          - "Trade-off analysis or prioritization"
          - "Scope and success criteria definition"
          - "Resource estimation and budget allocation"
        responsible: PO
        verification: "Selected approach documented with rationale; decision logged to Bus"
    - name: "design"
      display_name: "Design"
      purpose: "Produce the detailed design for the work to be done"
      universal_pattern:
        entry_gate: "Strategy approved"
        exit_gate: "Design reviewed and approved by PO (and domain specialist if applicable)"
        minimum_steps:
          - "Structure/architecture definition"
          - "Detailed component/section design"
          - "Resource and dependency mapping"
          - "Design approval"
        responsible: "domain specialist or PO"
        verification: "Design document complete; reviewed by relevant specialist agent"
      note: "May be collapsed into Production for lightweight templates (justify skip)"
    - name: "production"
      display_name: "Production"
      purpose: "Create the core deliverables per the approved design"
      universal_pattern:
        entry_gate: "Design approved (or Strategy approved if Design skipped)"
        exit_gate: "All deliverables created and internally reviewed"
        minimum_steps:
          - "Artifact creation (code, content, documents, etc.)"
          - "Internal quality check per artifact"
          - "Integration/assembly of components"
- "Internal review before formal review"
responsible: "domain specialist"
verification: "Every artifact verified against its acceptance criteria"
- name: "review"
display_name: "Review"
purpose: "Formal quality assessment of produced deliverables"
universal_pattern:
entry_gate: "All deliverables internally reviewed"
exit_gate: "Review feedback addressed; PO approves for delivery"
minimum_steps:
- "Quality/accuracy review by specialist or external reviewer"
- "Completeness check against plan/design"
- "Feedback consolidation and revision"
- "Final approval"
responsible: PO
verification: "Review Engine (Section 9) invoked if applicable; all feedback items resolved"
- name: "delivery"
display_name: "Delivery"
purpose: "Deploy, publish, or deliver the produced artifacts to their target"
universal_pattern:
entry_gate: "Review approved"
exit_gate: "Deliverables accessible/operational at target; verified working"
minimum_steps:
- "Delivery preparation (formatting, packaging, configuration)"
- "Delivery execution (deploy, publish, launch)"
- "Delivery verification (smoke test, access check)"
responsible: PO
verification: "Deliverables confirmed accessible and functional at destination"
- name: "measurement"
display_name: "Measurement"
purpose: "Assess outcomes, collect metrics, extract lessons"
universal_pattern:
entry_gate: "Delivery verified"
exit_gate: "Performance report delivered; lessons recorded; project concluded or next cycle triggered"
minimum_steps:
- "Performance/usage metrics collection"
- "Outcome assessment against success criteria"
- "Lessons learned extraction"
- "Improvement plan or project closure"
responsible: PO
verification: "Metrics recorded to Bus; lessons added to Knowledge Vault"
universal_properties:
verification_at_every_level: true
step_level_verification: "Executing agent verifies; escalates to PO if uncertain"
stage_level_verification: "PO reviews stage output; evaluation record written to Bus"
bus_logging: "Every gate transition logged to Bus with timestamp and approver"
budget_tracking: "Per-stage budget tracked; alerts at 75%, 90%, 100%"
iteration_support: "Any stage can loop back to a previous stage with PO approval"
t1p_standard: "All templates must reflect top 1% practitioner standards for their domain"
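The MUST-include rule in the skeleton ("every new template MUST include these stages unless a skip is explicitly justified and recorded") lends itself to a mechanical check at template registration time. A minimal sketch, assuming a template is a dict with a `stages` list and an optional `justified_skips` list — both field names are hypothetical, not the registry schema:

```python
# The seven universal stage names from universal_base_skeleton above.
UNIVERSAL_STAGES = {
    "research", "strategy", "design", "production",
    "review", "delivery", "measurement",
}

def validate_template(template):
    """Reject a template that omits a universal stage without recording
    a justified skip, per the universal_base_skeleton contract."""
    present = {s["name"] for s in template["stages"]}
    skips = set(template.get("justified_skips", []))
    missing = UNIVERSAL_STAGES - present - skips
    if missing:
        raise ValueError(f"Unjustified missing universal stages: {sorted(missing)}")
    return True

# Content Creation collapses Design into Production; the skip is recorded.
content_creation = {
    "stages": [{"name": n} for n in
               ("research", "strategy", "production", "review",
                "delivery", "measurement")],
    "justified_skips": ["design"],
}
validate_template(content_creation)   # passes: the skip is justified
```

A template missing a universal stage with no recorded skip would raise, which is the point of the gate.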
How the Engines Connect¶
┌──────────────────────────────┐
│ Universal Base Skeleton │
│ (Engine 3 output) │
└──────────┬───────────────────┘
│
│ extends
▼
┌──────────────────────────────┐
Domain ──────>│ Template Skeleton Builder │──────> Template
Description │ (Engine 1) │ Skeleton
│ T1P research + base extend │
└──────────┬───────────────────┘
│
│ registered
▼
┌──────────────────────────────┐
Project │ Project Frame Builder │──────> Executable
Idea + ──────>│ (Engine 2) │ Project Frame
Constraints │ Instantiate + tickets │ (Bus DB)
└──────────────────────────────┘
Flow:
1. Engine 3 runs once (already complete — the skeleton above); it is updated when new templates are added.
2. Engine 1 runs when a new domain is needed, using the base skeleton as its starting point.
3. Engine 2 runs for every new project, taking a registered template and producing a ready-to-execute project.
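Engine 2's instantiation step can be sketched as follows. The frame and ticket shapes below are illustrative assumptions, not the actual Bus DB schema; the only point shown is the mapping from a registered template's minimum steps to ready-to-execute tickets:

```python
import uuid

def build_project_frame(template, idea, constraints):
    """Hypothetical Engine 2 sketch: instantiate a registered template
    into an executable project frame with one ticket per minimum step."""
    frame = {
        "project_id": str(uuid.uuid4()),
        "idea": idea,
        "constraints": constraints,
        "template": template["name"],
        "tickets": [],
    }
    for stage in template["stages"]:
        for step in stage.get("minimum_steps", []):
            frame["tickets"].append({
                "ticket_id": str(uuid.uuid4()),
                "stage": stage["name"],
                "title": step,
                "status": "open",
            })
    return frame
```

In the real engine the frame would also carry sprints, gates, and agent assignments, and would be persisted to the Bus DB rather than returned in memory.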
11. Project-Level Template Customization¶
A project instance can override template defaults: add/remove steps, adjust gate criteria, modify agent roles. Overrides are stored in the project's config, not in the template itself. Template integrity is preserved.
This means:
- The registered template is never mutated by a project instance
- All project-specific changes live in the project record in the Bus DB
- Template upgrades do not overwrite project-level overrides
- The PO applies overrides at project frame build time (Engine 2)
Override schema fields on the project record:
template_overrides:
added_steps: [] # step definitions not in the base template
removed_steps: [] # step IDs to skip
gate_criteria_delta: {} # per-gate criteria adjustments
agent_role_overrides: {} # role substitutions or additions
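The integrity rule (the registered template is never mutated) can be illustrated by deep-copying the template before applying the override fields above. A minimal sketch — the template shape and merge semantics here are assumptions for illustration, not the actual Engine 2 logic:

```python
import copy

def apply_overrides(template, overrides):
    """Build a per-project frame from a registered template plus the
    project's template_overrides, leaving the template itself untouched."""
    frame = copy.deepcopy(template)              # template integrity preserved
    removed = set(overrides.get("removed_steps", []))
    frame["steps"] = [s for s in frame["steps"] if s["id"] not in removed]
    frame["steps"].extend(overrides.get("added_steps", []))
    for gate, delta in overrides.get("gate_criteria_delta", {}).items():
        frame.setdefault("gates", {}).setdefault(gate, {}).update(delta)
    frame.setdefault("agents", {}).update(
        overrides.get("agent_role_overrides", {}))
    return frame
```

Because the function operates on a deep copy, a later template upgrade or a second project instantiating the same template sees the original, unmodified definition.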
Changelog¶
| Version | Date | Author | Change |
|---|---|---|---|
| 5.0.5 | 2026-03-30 | GO | N12: Added Section 11 — Project-Level Template Customization. Override schema for add/remove steps, gate criteria delta, agent role overrides. Template integrity preservation rule. |
| 5.0.4 | 2026-03-30 | GO | Section 10 added: Template Engines — three engines that turn XIOPro into a project factory. Engine 1 (Template Skeleton Builder): T1P research pipeline that produces template skeletons for unknown domains. Engine 2 (Project Frame Builder): instantiates templates into executable project frames with tickets, sprints, gates, and agents in Bus DB. Engine 3 (Process Skeleton Extractor): analyzes 4 production templates to extract the Universal Base Skeleton — 7 universal stages that all templates share. |
| 5.0.3 | 2026-03-30 | GO | Review Engine T1P upgrade — Stage 0 (Entry Criteria Check) added per ATAM/Fagan/IEEE 1028 standards. Metrics instrumentation added (Section 9.5): defect rate, overlap, fix acceptance, ROI. Pipeline now 11 stages (0-10). ODM schema extended with review_metrics table. |
| 5.0.2 | 2026-03-30 | GO | Section 9 added: Review Engine — T1P multi-reviewer pipeline (10 stages) for any artifact. Supports Claude, ChatGPT, Gemini, NotebookLM reviewers. Review types table, ODM schema (review_sessions, review_results), integration points. |
| 5.0.1 | 2026-03-30 | GO | Sections 7-8 added: Complete template definitions for all 5 project types (IT Project, Marketing, Content Creation, Knowledge Expert, Template Builder). Each with full stage/step/gate/agent/duration specifications. Template Registry and Selection rules added. Status upgraded from placeholder to draft. |
| 5.0.0 | 2026-03-30 | GO | Section 5B added: Composite Projects and Sub-Projects — parent/child project structure with Master PO coordination, ODM parent_project_id reference. |
| 4.2.15 | 2026-03-29 | GO | Section 5.1 added: step-level review and verification required at every stage and step before progression. Section 5A added: Template Builder — researcher agent that constructs domain-specific templates using T1P standards. |
| 4.2.14 | 2026-03-29 | GO | Part 9 created as placeholder during BP reorganization (DEVXIO renamed to XIOPro, parts renumbered). |