
AI Workflow Primitives

A minimal, composable primitive set for building AI workflows
combining deterministic computation, streaming LLM execution,
and policy-driven control.


Why this repository exists

Most AI systems fail not because of model quality, but because execution, control, and contracts are poorly defined.

This repository extracts a domain-agnostic primitive set from a real-world AI system, focusing on:

  • determinism vs probabilistic inference
  • streaming execution vs transactional control
  • policy-driven interruption and recovery
  • explicit failure semantics

The goal is not to provide an implementation, but to define the smallest correct architectural units.


What is a “Primitive” here?

A primitive in this repository is defined as a unit that is:

  • Controllable — has explicit control points
  • Composable — can be combined without hidden coupling
  • Verifiable — exposes clear failure semantics

If a concept cannot meet all three criteria, it is intentionally excluded.

(See: PRINCIPLES.md)


Architecture at a glance

Below is the complete primitive map of the system.

flowchart LR
  P1["P1 Session<br/>Lifecycle"]
  P20["P2.0 Input<br/>Canonicalization"]
  P2["P2 Deterministic<br/>Compute"]
  P3["P3 Prompt<br/>Assembly"]
  P4["P4 Streaming<br/>Execution"]
  P5["P5 Billing &<br/>Entitlement"]
  C1["C1 Client–Server<br/>Contract"]

  P1 --> P20 --> P2 --> P3 --> P4
  P5 -.policy.-> P4
  C1 --- P1
  C1 --- P4
  C1 --- P5

(See: primitive_map.mmd)

This diagram intentionally separates:

  • Execution Plane (P1–P4)
  • Control Plane (P5)
  • Contract Layer (C1)

(Full explanation in: ARCHITECTURE.md)
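The plane separation can be sketched as a function pipeline with a policy hook. Every name and data shape below is illustrative, not part of the contract; the point is only that P5 gates visibility without being wired into the execution path.

```python
# Illustrative wiring: P1–P4 as a linear execution pipeline, P5 as a visibility gate.
def run_pipeline(raw_input: str, entitled: bool) -> dict:
    session = {"id": "s1", "state": "active"}                  # P1 Session Lifecycle
    canonical = " ".join(raw_input.strip().lower().split())    # P2.0 Input Canonicalization
    features = {"token_count": len(canonical.split())}         # P2 Deterministic Compute
    prompt = {"version": "v1", "features": features}           # P3 Prompt Assembly
    chunks = [{"type": "analysisChunk", "seq": i} for i in range(3)]  # P4 Streaming Execution
    if not entitled:                                           # P5: gates visibility, not execution
        return {"session": session, "prompt": prompt,
                "events": chunks[:1] + [{"type": "paywall"}]}
    return {"session": session, "prompt": prompt,
            "events": chunks + [{"type": "analysisStatus", "status": "completed"}]}
```

Note that the gated and ungated paths run the same execution plane; only the emitted events differ.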


The Primitive Set (v1.0)

ID     Primitive                 Responsibility (one line)
P1     Session Lifecycle         Ephemeral session state and lifetime control
P2.0   Input Canonicalization    Normalize raw inputs into machine-ready form
P2     Deterministic Compute     Produce replayable structured features
P3     Prompt Assembly           Compile features into versioned prompt objects
P4     Streaming Execution       Emit ordered stream events with explicit, visible termination and gating (no cursor)
P5     Billing & Entitlement     Policy-based visibility gating and continuation control
C1     Client–Server Contract    REST + stream protocol with explicit failure semantics

Detailed definitions live in /primitives and /contract.
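P2.0 and P2 can be made concrete with a minimal sketch (function names here are assumptions, not repository APIs): canonicalization normalizes raw input, and deterministic compute is a pure function of it, so replaying the same input yields byte-identical features.

```python
import hashlib


def canonicalize(raw: str) -> str:
    # P2.0: collapse whitespace and case into a machine-ready form
    return " ".join(raw.strip().lower().split())


def compute_features(canonical: str) -> dict:
    # P2: pure function of its input, hence replayable; the digest
    # lets a replay be verified without comparing full feature sets.
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {"tokens": canonical.split(), "digest": digest}
```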


Contracts (where integrators should start)

The contract layer is split into a REST API contract (the control plane: sessions, checkout, entitlement) and a stream protocol (ordered events and explicit termination rules).

(Entry point: contract/README.md)
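One load-bearing property of the stream protocol is that a stream ends visibly, never by silence. The event names below are taken from the sequence diagram later in this README; the exact schemas and termination rules live in contract/stream_protocol.md, so treat this as a hedged sketch.

```python
# Terminal event types per the sequence diagram: every stream must end
# with one of these, so a missing terminal event is itself a detectable failure.
TERMINAL_TYPES = {"paywall", "analysisStatus", "response"}


def is_terminated(events: list[dict]) -> bool:
    return bool(events) and events[-1]["type"] in TERMINAL_TYPES
```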


End-to-end execution flow

The following sequence shows how the primitives compose at runtime (MVP-faithful). Canonical definition: contract/stream_protocol.md (termination rules) + contract/rest_api_contract.md (control plane).

sequenceDiagram
  participant Client
  participant API
  participant Stream
  participant Policy

  Client->>API: Create Session
  API-->>Client: sessionId

  Client->>Stream: Start Stream(sessionId, topic)
  Stream->>Policy: Check Entitlement
  Policy-->>Stream: allow / gate

  Stream-->>Client: analysisChunk (0..n)

  alt visibility gated
    Stream-->>Client: paywall (terminal visibility)
    Client->>API: Checkout
    API->>Policy: Grant Entitlement
    Client->>Stream: Reconnect / Re-request (sessionId, topic)
    Stream-->>Client: unlock(content) or unlock(waiting=true)
    Note over Client: If waiting=true, the client may need follow-up retrieval via a non-stream path
  else completed without gate
    Stream-->>Client: analysisStatus(completed) OR response
  end

(See: e2e_sequence.mmd)

This flow highlights a core design principle: Streaming visibility must be interruptible and replayable (reconnect + result reuse), without relying on cursor/resume protocols.
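From the client side, cursor-free replay can be sketched as follows (the callback names are hypothetical): on a paywall event the client completes checkout, then re-requests the result in full rather than resuming from an offset.

```python
def consume(events, checkout, reconnect):
    # Hypothetical client loop: a paywall triggers checkout and a full re-request.
    out = []
    for ev in events:
        if ev["type"] == "analysisChunk":
            out.append(ev["text"])
        elif ev["type"] == "paywall":
            checkout()                                        # REST control plane
            return consume(reconnect(), checkout, reconnect)  # no cursor: replay from the start
        elif ev["type"] in ("analysisStatus", "response"):
            break
    return out
```

Because the server reuses the computed result on reconnect, replaying from the start costs a re-transmission, not a re-computation, which is what makes the cursor-free choice viable.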


What this repository intentionally does NOT include

This is an architectural artifact, not a product repository.

It intentionally excludes:

  • business-specific rules
  • domain semantics
  • model providers or prompts
  • persistence and long-term analytics
  • UI / UX implementations

Rationale: See appendix/what_is_intentionally_missing.md


Evidence without implementation

To keep the primitives concrete without leaking business logic, this repository provides schema-level examples only:

  • session records
  • canonicalized inputs
  • structured features
  • prompt objects
  • stream event logs
  • entitlement decisions

See: /examples
These examples are sufficient to verify composability without exposing proprietary logic.
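A schema-level check over a stream event log such as examples/stream_events.example.ndjson might look like this (the field names are assumptions about the example format): each line must parse as JSON, chunk ordering must hold, and the log must end with a visible terminal event.

```python
import json


def check_log(ndjson: str) -> bool:
    events = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
    seqs = [e["seq"] for e in events if e["type"] == "analysisChunk"]
    ordered = seqs == sorted(seqs)                      # ordered streamed output
    terminated = bool(events) and events[-1]["type"] in {
        "analysisStatus", "paywall", "response"}        # explicit visible termination
    return ordered and terminated


log = "\n".join([
    '{"type": "analysisChunk", "seq": 0}',
    '{"type": "analysisChunk", "seq": 1}',
    '{"type": "analysisStatus", "status": "completed"}',
])
```

Checks of this kind are what "verify composability without exposing proprietary logic" means in practice: they validate shape and ordering, never content.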


Who this is for

This repository is useful if you are:

  • designing AI systems with streaming outputs
  • combining deterministic logic with LLM inference
  • introducing policy gates (billing, quota, approval)
  • struggling with replay / reconnect / partial failure (no cursor)

It is not a tutorial and not a framework.


Status

  • Primitive Set: v1.0 (stable)
  • Scope: single-session, ephemeral execution
  • Extensions (governance, persistence) are intentionally deferred

Changes are tracked in CHANGELOG.md.


Explicit Authoring Constraints

This repository does not claim that multi-AI collaboration is novel. Human–AI and multi-AI workflows are already common in practice.

The distinguishing choice here is the decision to explicitly externalize authoring structure, constraints, and control points required to keep the output MVP-faithful and semantically consistent.

Constraints that are often implicit in an experienced practitioner’s head were made explicit, including:

  • ordering dependencies between authoring steps,
  • separation between factual baselines and generated narrative,
  • semantic drift and context loss treated as failure modes,
  • human arbitration as the final authority on implementation reality.

Making these constraints explicit:

  • reduces rework caused by uncontrolled iteration,
  • enables cross-document consistency to be audited,
  • and allows the production model to be reused across MVPs and domains.

These constraints are treated as part of the system’s integrity, not merely as a tooling preference.

This authoring model is presented as one observed, sufficient configuration, not a universal prescription.

Reading guide (recommended)

If you are new to this repository:

  1. Read this README
  2. Open ARCHITECTURE.md (primitive relationships)
  3. Scan primitives/README.md (one-line definitions)
  4. Read contract/stream_protocol.md
  5. Inspect examples/stream_events.example.ndjson

You should be able to understand the system without reading any code.

License Notice: This repository is published as a portfolio artifact and architectural reference. Commercial use or derivative redistribution is not permitted without explicit permission.

About

A minimal, contract-first AI workflow architecture that decouples execution from visibility. Focused on streaming LLM execution, policy-gated visibility, explicit failure semantics, and cursor-free replay, extracted from a real-world MVP.
