
POG Diagrams

This page contains all the visual diagrams for the Prompt Orchestration Governance framework.


Diagram 1: POG Dual Architecture Overview

This diagram illustrates the two core functions of POG: Prompt Warehouse Management and SDLC Integration.

graph TD
    subgraph PW["Prompt Warehouse Management"]
        PW1["Discover Prompts"]
        PW2["Normalize & Structure"]
        PW3["Validate & Test"]
        PW4["Version & Store"]
        PW5["Prompt Repository"]
    end

    subgraph SDLC["SDLC Integration Layer"]
        S1["Requirements Phase"]
        S2["Design Phase"]
        S3["Development Phase"]
        S4["Testing Phase"]
        S5["Deployment Phase"]
        S6["Maintenance Phase"]
    end

    PW1 --> PW2
    PW2 --> PW3
    PW3 --> PW4
    PW4 --> PW5

    PW5 -.provides prompts.-> S1
    PW5 -.provides prompts.-> S2
    PW5 -.provides prompts.-> S3
    PW5 -.provides prompts.-> S4
    PW5 -.provides prompts.-> S5
    PW5 -.provides prompts.-> S6

    S1 -.feedback & new prompts.-> PW1
    S2 -.feedback & new prompts.-> PW1
    S3 -.feedback & new prompts.-> PW1
    S4 -.feedback & new prompts.-> PW1
    S5 -.feedback & new prompts.-> PW1
    S6 -.feedback & new prompts.-> PW1

Description: The Prompt Warehouse Management function discovers, normalizes, validates, and stores prompts. The SDLC Integration Layer provides phase-specific prompts to each stage of software development. Feedback from SDLC usage continuously improves the prompt repository.
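The warehouse pipeline above (discover, normalize, validate, version, store) can be sketched in code. This is a minimal illustration only, not part of any POG specification; the `Prompt` record, the stage functions, and the in-memory `repo` dict are all hypothetical names invented for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    """A hypothetical prompt record moving through the warehouse."""
    name: str
    text: str
    version: int = 0
    validated: bool = False
    metadata: dict = field(default_factory=dict)

def normalize(p: Prompt) -> Prompt:
    # Collapse whitespace in the body and slugify the name.
    p.text = " ".join(p.text.split())
    p.name = "-".join(p.name.split()).lower()
    return p

def validate(p: Prompt) -> Prompt:
    # Stand-in check: a real gate would run evaluation cases.
    p.validated = bool(p.text)
    return p

def version_and_store(p: Prompt, repo: dict) -> Prompt:
    # Bump the version past whatever is already stored, then store.
    p.version = repo.get(p.name, Prompt(p.name, "")).version + 1
    repo[p.name] = p
    return p

repo: dict = {}
raw = Prompt("User Story  Elicitation", "  As a <role>, I want <goal>...  ")
stored = version_and_store(validate(normalize(raw)), repo)
print(stored.name, stored.version, stored.validated)
```

Chaining the stage functions mirrors the PW1 → PW5 flow in the diagram; each stage returns the record so stages can be reordered or extended independently.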


Diagram 2: SDLC Phase Prompt Flow

This diagram shows how different prompts are invoked at each SDLC phase with specific examples.

flowchart TD
    subgraph Requirements["Requirements Phase"]
        R1[User Story<br/>Elicitation]
        R2[Acceptance<br/>Criteria Generator]
        R3[Risk Analysis]
    end

    subgraph Design["Design Phase"]
        D1[Architecture<br/>Pattern Advisor]
        D2[API Design<br/>Assistant]
        D3[Data Model<br/>Designer]
    end

    subgraph Development["Development Phase"]
        DV1[Code<br/>Generator]
        DV2[Code Review<br/>Assistant]
        DV3[Documentation<br/>Generator]
    end

    subgraph Testing["Testing Phase"]
        T1[Test Case<br/>Generator]
        T2[Bug Analysis<br/>Assistant]
        T3[Coverage<br/>Analyzer]
    end

    subgraph Deployment["Deployment Phase"]
        DP1[Release Notes<br/>Generator]
        DP2[Configuration<br/>Validator]
        DP3[Rollback<br/>Planner]
    end

    subgraph Maintenance["Maintenance Phase"]
        M1[Incident<br/>Analyzer]
        M2[Performance<br/>Optimizer]
        M3[Tech Debt<br/>Assessor]
    end

    Requirements ==> Design
    Design ==> Development
    Development ==> Testing
    Testing ==> Deployment
    Deployment ==> Maintenance
    Maintenance -.continuous improvement.-> Requirements

Description: Each SDLC phase has dedicated prompt categories that address specific needs. The cycle is continuous, with maintenance phase insights feeding back into requirements for the next iteration.
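The phase-to-prompt mapping in the diagram can be expressed as a simple registry. The dictionary below copies the categories straight from the diagram; the `prompts_for` helper and the slugified names are illustrative, not a defined POG API.

```python
# Registry mapping each SDLC phase to its prompt categories,
# taken directly from the diagram above.
SDLC_PROMPTS = {
    "requirements": ["user-story-elicitation", "acceptance-criteria-generator", "risk-analysis"],
    "design": ["architecture-pattern-advisor", "api-design-assistant", "data-model-designer"],
    "development": ["code-generator", "code-review-assistant", "documentation-generator"],
    "testing": ["test-case-generator", "bug-analysis-assistant", "coverage-analyzer"],
    "deployment": ["release-notes-generator", "configuration-validator", "rollback-planner"],
    "maintenance": ["incident-analyzer", "performance-optimizer", "tech-debt-assessor"],
}

def prompts_for(phase: str) -> list:
    """Return the prompt categories registered for an SDLC phase."""
    return SDLC_PROMPTS.get(phase.lower(), [])

print(prompts_for("Testing"))
```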


Diagram 3: Prompt Lifecycle State Machine

This diagram illustrates how prompts evolve from ad-hoc interactions to production-ready skill prompts.

stateDiagram-v2
    [*] --> Interaction: Ad-hoc chat usage

    Interaction --> Discovery: Identify value
    Discovery --> Normalization: Extract & structure
    Normalization --> Validation: Test & verify
    Validation --> Repository: Approve & store
    Repository --> Active: Deploy to production

    Active --> Monitoring: Track usage
    Monitoring --> Refinement: Collect feedback
    Refinement --> Normalization: Improve

    Monitoring --> Deprecated: Low usage/outdated
    Deprecated --> [*]: Archive

    Active --> [*]: End of lifecycle

    note right of Interaction
        Informal prompts used
        in chat or exploration
    end note

    note right of Repository
        Versioned, governed
        production-ready prompts
    end note

    note right of Monitoring
        Usage metrics, feedback,
        effectiveness measurement
    end note

Description: Prompts begin as informal interactions, progress through discovery and normalization, are validated, and become production-ready skill prompts. Continuous monitoring enables refinement, and prompts can be deprecated when no longer useful.
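The state machine above can be enforced with an allowed-transition table, so that, for example, a prompt can never reach Active without passing Validation. The table mirrors the diagram's states; the `PromptLifecycle` class and the `end` terminal name are assumptions made for this sketch.

```python
# Allowed transitions, mirroring the lifecycle diagram above.
TRANSITIONS = {
    "interaction": {"discovery"},
    "discovery": {"normalization"},
    "normalization": {"validation"},
    "validation": {"repository"},
    "repository": {"active"},
    "active": {"monitoring", "end"},
    "monitoring": {"refinement", "deprecated"},
    "refinement": {"normalization"},
    "deprecated": {"end"},
}

class PromptLifecycle:
    def __init__(self):
        self.state = "interaction"

    def advance(self, target: str) -> str:
        # Reject any step the diagram does not permit.
        if target not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {target}")
        self.state = target
        return self.state

p = PromptLifecycle()
for step in ["discovery", "normalization", "validation", "repository", "active"]:
    p.advance(step)
print(p.state)
```

Guarding transitions this way keeps the governance rules (only validated prompts are stored, only stored prompts go active) in one auditable place.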


Diagram 4: Meta-Loop - POG Self-Improvement

This diagram shows how POG continuously improves its own prompt library through feedback and usage analysis.

graph TD
    A[POG Prompt Repository] --> B[Projects Use Prompts]
    B --> C[Collect Usage Data]
    C --> D[Analyze Metrics & Feedback]

    D --> E{Evaluation}

    E -->|High value,<br/>frequently used| F[Promote to<br/>Core Library]
    E -->|Effective but<br/>needs refinement| G[Improve<br/>Existing Prompt]
    E -->|Gap identified| H[Create<br/>New Prompt]
    E -->|Low usage,<br/>outdated| I[Deprecate<br/>or Archive]

    F --> J[Update Repository]
    G --> J
    H --> J
    I --> K[Archive]

    J --> A

    B -.new prompts from projects.-> L[Discovery Queue]
    L --> M[Review & Normalize]
    M --> J

Description: POG implements a meta-loop where prompt usage is continuously monitored, analyzed, and fed back into the repository. High-value prompts are promoted, existing ones are refined, gaps trigger new prompt creation, and low-value prompts are deprecated. This ensures the prompt library evolves with organizational needs.
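The evaluation branch of the meta-loop can be sketched as a routing function over usage metrics. The thresholds and the metric names (`uses_per_month`, `satisfaction`) are invented for illustration; gap detection, which triggers new-prompt creation in the diagram, needs broader signals than per-prompt metrics and is not modeled here.

```python
def evaluate(uses_per_month: int, satisfaction: float) -> str:
    """Route a prompt's metrics to one of the meta-loop outcomes.

    Thresholds are illustrative placeholders, not POG-defined values.
    """
    if uses_per_month < 5:
        return "deprecate"           # low usage / outdated
    if satisfaction >= 0.8 and uses_per_month >= 50:
        return "promote"             # high value, frequently used
    if satisfaction < 0.8:
        return "refine"              # effective but needs refinement
    return "keep"                    # healthy, keep monitoring

print(evaluate(120, 0.9))  # heavily used and well-rated
print(evaluate(2, 0.9))    # barely used
```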


Diagram 5: Orchestration Layers Hierarchy

This diagram shows the hierarchical layers that organize prompts by scope and context.

graph TB
    A[Runtime Context Layer]
    B[Task Layer]
    C[Domain Layer]
    D[Foundation Layer]

    A --> B
    B --> C
    C --> D

    A1["`**User Input**
    Project context
    Environment variables
    Session state`"]

    B1["`**Specific Tasks**
    Generate unit tests
    Review API design
    Create user stories
    Analyze incident`"]

    C1["`**Domain Rules**
    Healthcare regulations
    Financial compliance
    E-commerce patterns
    IoT protocols`"]

    D1["`**Core Capabilities**
    Safety constraints
    Output formatting
    Error handling
    Security policies`"]

    A1 -.describes.-> A
    B1 -.describes.-> B
    C1 -.describes.-> C
    D1 -.describes.-> D

Description: Prompts are organized in hierarchical layers, from the foundational system capabilities up to runtime-specific context. Each layer builds upon the layers below it:

  • Foundation Layer: Universal system policies and constraints
  • Domain Layer: Business-specific rules and patterns
  • Task Layer: Specific development tasks and intents
  • Runtime Context Layer: Dynamic, session-specific information

This layering enables prompt composition and context management.
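The layered composition described above can be sketched as a join over layer fragments in bottom-up order: foundation first, runtime context last. The layer contents and the `compose` helper are placeholders assumed for this sketch, not a prescribed POG interface.

```python
# Fixed composition order, bottom layer first, per the hierarchy above.
LAYERS = ["foundation", "domain", "task", "runtime"]

def compose(fragments: dict) -> str:
    """Join layer fragments in foundation-to-runtime order.

    Missing layers are simply skipped, so partial stacks still compose.
    """
    return "\n\n".join(fragments[layer] for layer in LAYERS if layer in fragments)

prompt = compose({
    "foundation": "Follow the security and output-format policies.",
    "domain": "Apply healthcare data-handling rules.",
    "task": "Generate unit tests for the given module.",
    "runtime": "Module under test: billing.py",
})
print(prompt)
```

Because each layer is a separate fragment, the domain or runtime layer can be swapped without touching the foundation, which is the context-management benefit the layering is meant to provide.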


Diagram 6: Prompt Lifecycle Flow (Detailed)

This diagram provides a detailed view of the prompt management lifecycle with decision points.

flowchart TD
    Start([New Prompt Need]) --> Source{Source?}

    Source -->|Chat interaction| Chat[Interaction Prompt]
    Source -->|Project discovery| Project[Project Prompt]
    Source -->|Team contribution| Contrib[Contributed Prompt]

    Chat --> Capture[Capture Prompt]
    Project --> Capture
    Contrib --> Capture

    Capture --> Review{Worth<br/>formalizing?}

    Review -->|No| Discard[Discard]
    Review -->|Yes| Normalize[Normalize<br/>& Structure]

    Normalize --> Param[Parameterize<br/>Variables]
    Param --> Meta[Add Metadata]
    Meta --> Eval[Create<br/>Evaluation Cases]

    Eval --> Test[Test Prompt]
    Test --> Pass{Tests<br/>pass?}

    Pass -->|No| Fix[Refine Prompt]
    Fix --> Test

    Pass -->|Yes| Version[Assign Version]
    Version --> Cat[Categorize by<br/>SDLC Phase]
    Cat --> Store[Store in<br/>Repository]

    Store --> Deploy[Deploy to<br/>Production]
    Deploy --> Monitor[Monitor Usage]

    Monitor --> Feedback{Feedback?}
    Feedback -->|Needs improvement| Fix
    Feedback -->|Working well| Continue[Continue<br/>Monitoring]
    Feedback -->|Not used| Archive[Archive]

    Continue --> Monitor

    Discard --> End([End])
    Archive --> End

Description: This detailed flow shows the complete journey of a prompt from identification through deployment and monitoring, including quality gates and decision points that ensure only valuable, tested prompts reach production.
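The "Create Evaluation Cases" / "Tests pass?" gate in the flow can be sketched as a checker over a parameterized template. A real gate would send the rendered prompt to a model and score the response; here a plain `str.format` renderer stands in so the example stays self-contained, and the case format is an assumption of this sketch.

```python
def render(template: str, **params) -> str:
    # Stand-in for real prompt execution: just fill the parameters.
    return template.format(**params)

def gate(template: str, cases: list) -> bool:
    """Quality gate: every case is (params, substring that must appear)."""
    return all(expected in render(template, **params) for params, expected in cases)

template = "Write {count} unit tests for {module}."
cases = [
    ({"count": 3, "module": "auth.py"}, "auth.py"),
    ({"count": 1, "module": "billing.py"}, "1 unit test"),
]
print(gate(template, cases))
```

A failing gate sends the prompt back to the Refine step in the flow; only a passing gate lets it proceed to versioning and storage.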


For more information, see the main whitepaper.