From 53113ff026a4186b3ad1d753561389fd7870822c Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Wed, 15 Apr 2026 20:14:38 +0000 Subject: [PATCH 1/8] docs: add plan for workflow chaining and allow_resize removal Proposes replacing the in-place allow_resize mechanism with a Pipeline class that chains multiple generation stages. Each stage gets a fresh fixed-size tracker, and resize becomes a between-stage concern. --- plans/workflow-chaining/workflow-chaining.md | 204 +++++++++++++++++++ 1 file changed, 204 insertions(+) create mode 100644 plans/workflow-chaining/workflow-chaining.md diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md new file mode 100644 index 000000000..9a498a6f8 --- /dev/null +++ b/plans/workflow-chaining/workflow-chaining.md @@ -0,0 +1,204 @@ +--- +date: 2026-04-15 +authors: + - amanoel +--- + +# Plan: Workflow chaining and `allow_resize` removal + +## Problem + +DataDesigner workflows are self-contained: one config, one `create()` call, one output. There is no first-class way to combine workflows in sequence, where the output of one feeds the input of the next. Users who need this must manually wire `DataFrameSeedSource` between calls. + +Separately, the `allow_resize` flag on column configs lets a generator change the row count mid-generation. This works in the sync engine via in-place buffer replacement, but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid. The async engine currently rejects `allow_resize=True` with a validation error. Pre-batch processors that resize have a similar problem: the async path handles shrinking accidentally (via drop-marking), but expansion is silently ignored. + +These are the same problem viewed from different angles: the need to change row counts between generation steps. 
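For concreteness, the manual workaround today looks roughly like this (a sketch, not the actual API - the seed-source wiring on the config builder and the `build_config_with_seed` helper are assumptions):

```python
# Stage 1: generate the base dataset
result_1 = dd.create(config_personas, num_records=100)

# Manual handoff: load stage 1's output back into memory and re-wrap it
# as a seed for the next config (hypothetical wiring)
seed = DataFrameSeedSource(result_1.load_dataset())
config_convos = build_config_with_seed(seed)  # hypothetical helper

# Stage 2: generate from the seeded config
result_2 = dd.create(config_convos, num_records=1000)
```

Nothing tracks that these two runs are related: there is no shared artifact layout, no provenance, and no way to resume if the second run fails.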
+ +## Proposed solution + +Replace the in-place resize mechanism with **workflow chaining**: a thin orchestration layer that sequences multiple generation stages, passing each stage's output as the next stage's seed dataset. + +This is a three-part change: + +1. **Remove `allow_resize`** from the column config and all engine code that supports it. +2. **Disallow row-count changes in pre-batch processors** (fail-fast if the processor returns a different number of rows). +3. **Add a `Pipeline` class** in the interface layer that auto-chains stages, with support for explicit multi-stage configs. + +### Why chaining instead of fixing async resize + +The async scheduler's `CompletionTracker` pre-allocates a (row_group x row_index x column) task grid. Supporting mid-run resize requires either rebuilding the tracker (complex, error-prone) or pausing execution at resize boundaries (loses parallelism). Chaining sidesteps this entirely: each stage gets a fresh tracker sized to its actual input. The engine stays simple - always fixed-size - and resize becomes a between-stage concern. + +## Design + +### Part 1: Remove `allow_resize` + +**Config changes** (`data-designer-config`): + +- Remove `allow_resize: bool = False` from `SingleColumnConfig` (or its base class `ColumnConfigBase`). +- Deprecation: keep the field for one release cycle with a deprecation warning, then remove. + +**Engine changes** (`data-designer-engine`): + +- Remove `_cell_resize_mode`, `_cell_resize_results`, and the resize branch in `_finalize_fan_out()` from `DatasetBuilder`. +- Remove `allow_resize` parameter from `DatasetBatchManager.replace_buffer()`. +- Remove `_validate_async_compatibility()` (no longer needed - nothing to reject). +- Simplify `_run_full_column_generator()` to always enforce row-count invariance. + +**Migration path**: Users with `allow_resize=True` columns split their config into a pipeline with a stage boundary at the resize column. 
The resize column becomes the last column of its stage, and downstream columns move to the next stage. + +### Part 2: Fail-fast on pre-batch processor resize + +In `ProcessorRunner.run_pre_batch()` and `run_pre_batch_on_df()`, raise `DatasetProcessingError` if the returned DataFrame has a different row count than the input. + +This applies to both sync and async paths. Users who need to filter or expand between seeds and generation use the pipeline's between-stage callback instead. + +For users who need programmatic filtering at the seed boundary, a seed reader plugin is the escape hatch (the seed reader can filter/transform before the engine ever sees the data). + +### Part 3: Pipeline class + +A new `Pipeline` class in `data_designer.interface` that orchestrates multi-stage generation. + +#### User-facing API + +**Explicit multi-stage pipeline:** + +```python +pipeline = dd.pipeline() +pipeline.add_stage("personas", config_personas, num_records=100) +pipeline.add_stage("conversations", config_convos, num_records=1000) # explode: 100 -> 1000 +pipeline.add_stage("judged", config_judge) # defaults to previous stage's output size + +results = pipeline.run() + +results["personas"].load_dataset() # stage 1 output +results["conversations"].load_dataset() # stage 2 output +results["judged"].load_dataset() # final output +``` + +**Auto-chaining from a single config (future):** + +The engine detects columns that were previously `allow_resize=True` (or a new marker like `stage_boundary=True`) and auto-splits the DAG into stages. This is a convenience layer on top of the explicit API - not required for v1. + +#### Between-stage callbacks + +Users may need to transform data between stages. 
The pipeline supports an optional callback: + +```python +def filter_high_quality(stage_output_path: Path) -> Path: + df = pd.read_parquet(stage_output_path / "data") + df = df[df["quality_score"] > 0.8] + out = stage_output_path.parent / "filtered" + df.to_parquet(out / "data.parquet") + return out + +pipeline.add_stage("generated", config_gen, num_records=1000) +pipeline.add_stage( + "enriched", + config_enrich, + after=filter_high_quality, # runs on stage output before next stage seeds from it +) +``` + +The callback receives the path to the completed stage's artifacts and returns a path to the (possibly modified) artifacts. This keeps large DataFrames on disk and gives users full control. + +The callback signature is `(Path) -> Path`. If the user returns the same path, no copy is made. If they return a new path, the next stage seeds from that. + +#### `num_records` behavior + +- If `num_records` is explicitly set on a stage, that value is used. +- If omitted, defaults to the previous stage's output row count (after any between-stage callback). +- The seed reader's existing cycling behavior handles the explode case: requesting 1000 records from a 100-row seed cycles through the seed 10 times. + +#### Artifact management + +Each stage writes to its own subdirectory under the pipeline's artifact path: + +``` +artifacts/ + pipeline-name/ + stage-1-personas/ + parquet-files/ + metadata.json + stage-2-conversations/ + parquet-files/ + metadata.json + stage-3-judged/ + parquet-files/ + metadata.json + pipeline-metadata.json # stage order, configs, lineage +``` + +#### Checkpointing and resume + +Each stage produces durable parquet output before the next stage starts. This provides natural checkpoint boundaries: + +- If stage 3 of 4 fails, stages 1 and 2 are already on disk. +- A `resume=True` flag on `pipeline.run()` skips completed stages (detected via `pipeline-metadata.json`). +- Within a stage, batch-level resume (#525) can further reduce re-work. 
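As an illustration, the stage-skip check can be sketched as follows (the `stages` and `status` fields are assumptions, not a final schema for `pipeline-metadata.json`):

```python
import json
from pathlib import Path

def completed_stages(pipeline_dir: Path) -> list[str]:
    """Return names of stages that finished in a previous run, in order.

    Stops at the first incomplete stage: stages run sequentially, so a
    later stage's output cannot be valid if an earlier one did not finish.
    """
    meta_path = pipeline_dir / "pipeline-metadata.json"
    if not meta_path.exists():
        return []  # fresh run: nothing to skip
    meta = json.loads(meta_path.read_text())
    done: list[str] = []
    for stage in meta.get("stages", []):
        if stage.get("status") != "completed":
            break
        done.append(stage["name"])
    return done
```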
+ +The connection to #525: chaining gives coarse (stage-level) checkpointing for free. #525 gives fine (batch-level) checkpointing within a stage. They are complementary. + +#### Provenance + +`pipeline-metadata.json` records: +- Stage order, names, and configs used +- `num_records` requested vs actual per stage +- Which stage's output seeded the next +- Timestamp and duration per stage + +### Where it fits in the architecture + +| Layer | Changes | +|-------|---------| +| `data-designer-config` | Remove `allow_resize` field. No new config models needed for v1 (pipeline is imperative, not declarative). | +| `data-designer-engine` | Remove resize code paths. Add fail-fast guard in `ProcessorRunner`. No new engine features. | +| `data-designer` (interface) | New `Pipeline` class. Thin orchestration: calls `DataDesigner.create()` per stage, wires `DataFrameSeedSource` between stages for in-memory handoff or `LocalFileSeedSource` for on-disk handoff. | + +The engine does not know about pipelines. Each stage is a regular `DatasetBuilder.build()` call. + +## Implementation phases + +### Phase 1: Pipeline class (can ship independently) + +- Add `Pipeline` class with `add_stage()`, `run()`, between-stage callbacks. +- Add `pipeline-metadata.json` writing. +- Add `dd.pipeline()` factory method on `DataDesigner`. +- Tests: multi-stage runs, explode/filter via callbacks, num_records defaulting, artifact layout. + +### Phase 2: Remove `allow_resize` + +- Deprecate `allow_resize` with a warning pointing to pipelines. +- Remove resize code from sync engine (`_cell_resize_mode`, `_finalize_fan_out` resize branch, `replace_buffer` `allow_resize` param). +- Remove `_validate_async_compatibility()` from async engine. +- Add fail-fast guard in `ProcessorRunner` for pre-batch row-count changes. +- Tests: verify rejection, migration path examples. + +### Phase 3: Stage-level resume + +- Add `resume=True` to `pipeline.run()`. +- Read `pipeline-metadata.json` to detect completed stages. 
+- Skip completed stages, seed next stage from last completed output. +- Depends on artifact layout from phase 1. + +### Phase 4 (future): Auto-chaining from single config + +- Detect stage boundaries in the DAG (via a new config marker or heuristic). +- Auto-split into pipeline stages internally. +- User sees a single `dd.create(config)` call but gets multi-stage execution. + +## Open questions + +1. **In-memory vs on-disk handoff between stages**: For small datasets, `DataFrameSeedSource` avoids disk I/O. For large datasets, writing parquet between stages is safer. Should the pipeline auto-detect based on row count, or always go through disk for consistency? + +2. **Preview support**: Should `pipeline.preview()` run all stages with small `num_records`? Or just preview the last stage seeded from a prior full run? + +3. **Config serialization**: A pipeline config can't be serialized to YAML if stages use `DataFrameSeedSource`. For persistence, stages would need symbolic references ("seed from stage X's output"). This is needed for auto-chaining (phase 4) but not for the explicit API (phases 1-3). + +4. **Naming**: `Pipeline` vs `Chain` vs `WorkflowChain`. `Pipeline` is the most intuitive and aligns with ML pipeline terminology. 
+ +## Related issues + +- #447 - AsyncRunController refactor (partially superseded: pre-batch resize handling moves to pipeline level instead of controller level) +- #525 - Resume interrupted runs (complementary: stage-level resume from pipeline, batch-level resume from #525) +- #462 - Progress bar and scheduler polish (independent) +- #464 - Custom column retryable errors (independent) From d1c6c64e200d9675b6639ef9d59f521d97e73707 Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Wed, 15 Apr 2026 20:21:22 +0000 Subject: [PATCH 2/8] docs: reframe plan - chaining is the primary goal, allow_resize removal is secondary --- plans/workflow-chaining/workflow-chaining.md | 73 +++++++++++--------- 1 file changed, 40 insertions(+), 33 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index 9a498a6f8..9394e8af1 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -4,25 +4,30 @@ authors: - amanoel --- -# Plan: Workflow chaining and `allow_resize` removal +# Plan: Workflow chaining ## Problem DataDesigner workflows are self-contained: one config, one `create()` call, one output. There is no first-class way to combine workflows in sequence, where the output of one feeds the input of the next. Users who need this must manually wire `DataFrameSeedSource` between calls. -Separately, the `allow_resize` flag on column configs lets a generator change the row count mid-generation. This works in the sync engine via in-place buffer replacement, but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid. The async engine currently rejects `allow_resize=True` with a validation error. Pre-batch processors that resize have a similar problem: the async path handles shrinking accidentally (via drop-marking), but expansion is silently ignored. 
+This matters for several use cases: -These are the same problem viewed from different angles: the need to change row counts between generation steps. +- **Filter-then-enrich**: Generate candidates, filter to high-quality rows, then generate detailed content from survivors. The second stage's row count depends on the first stage's filter output. +- **Explode**: Generate a small set of seed entities (e.g., 100 personas), then generate many records from each (e.g., 1000 conversations). The seed reader's cycling handles the expansion, but the user must manually wire stages. +- **Generate-then-judge**: Generate a dataset, then run a separate LLM-as-judge pass with different models or stricter prompts. Iterating on the judging config shouldn't require re-generating the base data. +- **Multi-turn construction**: Each conversation turn has a different prompt structure and possibly a different model. Composing these as sequential stages is more natural than a single flat config. ## Proposed solution -Replace the in-place resize mechanism with **workflow chaining**: a thin orchestration layer that sequences multiple generation stages, passing each stage's output as the next stage's seed dataset. +Add **workflow chaining**: a thin orchestration layer that sequences multiple generation stages, passing each stage's output as the next stage's seed dataset. This is the primary deliverable. -This is a three-part change: +As a secondary benefit, chaining also enables the removal of `allow_resize` and simplification of the engine's resize handling. -1. **Remove `allow_resize`** from the column config and all engine code that supports it. -2. **Disallow row-count changes in pre-batch processors** (fail-fast if the processor returns a different number of rows). -3. **Add a `Pipeline` class** in the interface layer that auto-chains stages, with support for explicit multi-stage configs. 
+### Secondary benefit: `allow_resize` removal + +The `allow_resize` flag on column configs lets a generator change the row count mid-generation. This works in the sync engine but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid (currently rejected with a validation error). Pre-batch processors that resize have a similar problem. + +With chaining in place, resize becomes a between-stage concern rather than a mid-generation concern. This lets us remove `allow_resize` and the associated engine complexity, and disallow row-count changes in pre-batch processors. Users who need resize use a pipeline with a stage boundary at the resize point. ### Why chaining instead of fixing async resize @@ -30,31 +35,7 @@ The async scheduler's `CompletionTracker` pre-allocates a (row_group x row_index ## Design -### Part 1: Remove `allow_resize` - -**Config changes** (`data-designer-config`): - -- Remove `allow_resize: bool = False` from `SingleColumnConfig` (or its base class `ColumnConfigBase`). -- Deprecation: keep the field for one release cycle with a deprecation warning, then remove. - -**Engine changes** (`data-designer-engine`): - -- Remove `_cell_resize_mode`, `_cell_resize_results`, and the resize branch in `_finalize_fan_out()` from `DatasetBuilder`. -- Remove `allow_resize` parameter from `DatasetBatchManager.replace_buffer()`. -- Remove `_validate_async_compatibility()` (no longer needed - nothing to reject). -- Simplify `_run_full_column_generator()` to always enforce row-count invariance. - -**Migration path**: Users with `allow_resize=True` columns split their config into a pipeline with a stage boundary at the resize column. The resize column becomes the last column of its stage, and downstream columns move to the next stage. 
- -### Part 2: Fail-fast on pre-batch processor resize - -In `ProcessorRunner.run_pre_batch()` and `run_pre_batch_on_df()`, raise `DatasetProcessingError` if the returned DataFrame has a different row count than the input. - -This applies to both sync and async paths. Users who need to filter or expand between seeds and generation use the pipeline's between-stage callback instead. - -For users who need programmatic filtering at the seed boundary, a seed reader plugin is the escape hatch (the seed reader can filter/transform before the engine ever sees the data). - -### Part 3: Pipeline class +### Part 1: Pipeline class A new `Pipeline` class in `data_designer.interface` that orchestrates multi-stage generation. @@ -146,6 +127,32 @@ The connection to #525: chaining gives coarse (stage-level) checkpointing for fr - Which stage's output seeded the next - Timestamp and duration per stage +### Part 2: Remove `allow_resize` + +With the pipeline in place, `allow_resize` is no longer needed as an engine-internal mechanism. Resize becomes a between-stage concern. + +**Config changes** (`data-designer-config`): + +- Remove `allow_resize: bool = False` from `SingleColumnConfig` (or its base class `ColumnConfigBase`). +- Deprecation: keep the field for one release cycle with a deprecation warning, then remove. + +**Engine changes** (`data-designer-engine`): + +- Remove `_cell_resize_mode`, `_cell_resize_results`, and the resize branch in `_finalize_fan_out()` from `DatasetBuilder`. +- Remove `allow_resize` parameter from `DatasetBatchManager.replace_buffer()`. +- Remove `_validate_async_compatibility()` (no longer needed - nothing to reject). +- Simplify `_run_full_column_generator()` to always enforce row-count invariance. + +**Migration path**: Users with `allow_resize=True` columns split their config into a pipeline with a stage boundary at the resize column. The resize column becomes the last column of its stage, and downstream columns move to the next stage. 
+ +### Part 3: Fail-fast on pre-batch processor resize + +In `ProcessorRunner.run_pre_batch()` and `run_pre_batch_on_df()`, raise `DatasetProcessingError` if the returned DataFrame has a different row count than the input. + +This applies to both sync and async paths. Users who need to filter or expand between seeds and generation use the pipeline's between-stage callback instead. + +For users who need programmatic filtering at the seed boundary, a seed reader plugin is the escape hatch (the seed reader can filter/transform before the engine ever sees the data). + ### Where it fits in the architecture | Layer | Changes | From 9f640ca8f8a69b86e38bcb90d8dffba1d7f59455 Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Wed, 15 Apr 2026 20:31:42 +0000 Subject: [PATCH 3/8] docs: add to_config_builder convenience method and concrete use cases --- plans/workflow-chaining/workflow-chaining.md | 112 ++++++++++++++++++- 1 file changed, 111 insertions(+), 1 deletion(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index 9394e8af1..f5a8b618e 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -56,6 +56,24 @@ results["conversations"].load_dataset() # stage 2 output results["judged"].load_dataset() # final output ``` +**Convenience method on results (lightweight, for notebooks):** + +For interactive use where a full pipeline is overkill, a `to_config_builder()` method on `DatasetCreationResults` returns a pre-seeded `DataDesignerConfigBuilder`: + +```python +# Stage 1 +result = dd.create(config_personas, num_records=100) + +# Stage 2 - just grab the result and keep going +config_convos = ( + result.to_config_builder(columns=["name", "age", "background"]) # optional column selection + .add_column(name="conversation", column_type="llm_text", prompt="...") +) +result_2 = dd.create(config_convos, num_records=1000) +``` + +This is a thin wrapper: loads the dataset, 
optionally filters columns, wraps in `DataFrameSeedSource`, returns a new config builder. No tracking, no provenance, no callbacks - just a quick bridge for iteration. + **Auto-chaining from a single config (future):** The engine detects columns that were previously `allow_resize=True` (or a new marker like `stage_boundary=True`) and auto-splits the DAG into stages. This is a convenience layer on top of the explicit API - not required for v1. @@ -163,10 +181,102 @@ For users who need programmatic filtering at the seed boundary, a seed reader pl The engine does not know about pipelines. Each stage is a regular `DatasetBuilder.build()` call. +## Use cases for implementation and testing + +These should guide the implementation and serve as the basis for tutorial notebooks. + +### 1. Explode: personas to conversations + +Generate a small, high-quality set of personas, then produce many conversations from each. + +```python +# Stage 1: 100 diverse personas +config_personas = ( + DataDesignerConfigBuilder() + .add_column(name="name", column_type="sampler", sampler_type="person_name") + .add_column(name="age", column_type="sampler", sampler_type="uniform_int", params=...) + .add_column(name="background", column_type="llm_text", prompt="Write a short background for {{ name }}, age {{ age }}.") +) + +# Stage 2: 1000 conversations (each persona used ~10 times via seed cycling) +config_convos = ( + DataDesignerConfigBuilder() + .add_column(name="topic", column_type="llm_text", prompt="Generate a conversation topic for {{ name }}...") + .add_column(name="conversation", column_type="llm_text", prompt="Write a conversation between {{ name }} and an assistant about {{ topic }}...") +) + +pipeline = dd.pipeline() +pipeline.add_stage("personas", config_personas, num_records=100) +pipeline.add_stage("conversations", config_convos, num_records=1000) +results = pipeline.run() +``` + +### 2. 
Filter-then-enrich + +Generate candidates, use a between-stage callback to filter, then enrich survivors. + +```python +config_gen = ... # generates rows with a quality_score column +config_enrich = ... # adds detailed analysis columns + +def keep_high_quality(stage_output_path: Path) -> Path: + df = pd.read_parquet(stage_output_path / "parquet-files") + df = df[df["quality_score"] > 0.8] + out = stage_output_path.parent / "filtered" + out.mkdir(exist_ok=True) + df.to_parquet(out / "data.parquet") + return out + +pipeline = dd.pipeline() +pipeline.add_stage("candidates", config_gen, num_records=5000) +pipeline.add_stage("enriched", config_enrich, after=keep_high_quality) +results = pipeline.run() +``` + +### 3. Generate-then-judge with different models + +Iterate on the judging config without re-generating the base data. + +```python +# Stage 1: generate with a fast model +config_gen = DataDesignerConfigBuilder(model_configs=[fast_model])... + +# Stage 2: judge with a stronger model +config_judge = DataDesignerConfigBuilder(model_configs=[strong_model])... + +pipeline = dd.pipeline() +pipeline.add_stage("generated", config_gen, num_records=1000) +pipeline.add_stage("judged", config_judge) +results = pipeline.run() + +# Later: tweak judging config, resume from stage 1 output +pipeline_v2 = dd.pipeline() +pipeline_v2.add_stage("generated", config_gen, num_records=1000) +pipeline_v2.add_stage("judged", config_judge_v2) +results_v2 = pipeline_v2.run(resume=True) # skips stage 1 +``` + +### 4. 
Interactive notebook chaining (lightweight, no pipeline) + +Quick iteration using `to_config_builder()`: + +```python +result = dd.create(config_personas, num_records=50) +result.load_dataset() # inspect, looks good + +# Chain into next step +config_2 = ( + result.to_config_builder(columns=["name", "background"]) + .add_column(name="question", column_type="llm_text", prompt="...") +) +result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 +``` + ## Implementation phases -### Phase 1: Pipeline class (can ship independently) +### Phase 1: Pipeline class and `to_config_builder()` (can ship independently) +- Add `to_config_builder()` on `DatasetCreationResults` and `PreviewResults`. - Add `Pipeline` class with `add_stage()`, `run()`, between-stage callbacks. - Add `pipeline-metadata.json` writing. - Add `dd.pipeline()` factory method on `DataDesigner`. From 1457cc96db8fed6310400957bc2f92638d9b8c95 Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Wed, 15 Apr 2026 20:50:10 +0000 Subject: [PATCH 4/8] docs: address review feedback - data contract, resume safety, seed controls, edge cases --- plans/workflow-chaining/workflow-chaining.md | 55 +++++++++++++------- 1 file changed, 37 insertions(+), 18 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index f5a8b618e..1b7e797ae 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -23,11 +23,13 @@ Add **workflow chaining**: a thin orchestration layer that sequences multiple ge As a secondary benefit, chaining also enables the removal of `allow_resize` and simplification of the engine's resize handling. -### Secondary benefit: `allow_resize` removal +### Secondary benefit: `allow_resize` removal and sync/async convergence The `allow_resize` flag on column configs lets a generator change the row count mid-generation. 
This works in the sync engine but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid (currently rejected with a validation error). Pre-batch processors that resize have a similar problem. -With chaining in place, resize becomes a between-stage concern rather than a mid-generation concern. This lets us remove `allow_resize` and the associated engine complexity, and disallow row-count changes in pre-batch processors. Users who need resize use a pipeline with a stage boundary at the resize point. +`allow_resize` is one of the remaining divergences between sync and async. Since the long-term direction is to remove the sync engine entirely, maintaining a sync-only feature is counterproductive. With chaining in place, resize becomes a between-stage concern rather than a mid-generation concern. This lets us remove `allow_resize` and the associated engine complexity, and disallow row-count changes in pre-batch processors. Users who need resize use a pipeline with a stage boundary at the resize point. + +Note: `allow_resize` is currently documented in custom columns, plugin examples, and agent rollout ingestion docs. Removal requires a deprecation cycle and doc updates. ### Why chaining instead of fixing async resize @@ -72,21 +74,28 @@ config_convos = ( result_2 = dd.create(config_convos, num_records=1000) ``` -This is a thin wrapper: loads the dataset, optionally filters columns, wraps in `DataFrameSeedSource`, returns a new config builder. No tracking, no provenance, no callbacks - just a quick bridge for iteration. +This is a thin wrapper: loads the dataset into memory, optionally filters columns, wraps in `DataFrameSeedSource`, returns a new config builder. No tracking, no provenance, no callbacks - just a quick bridge for iteration. Not suitable for large datasets (loads full DataFrame into memory) or serializable configs (`DataFrameSeedSource` can't be written to YAML). For production pipelines, use the `Pipeline` class. 
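As a sketch of how thin this wrapper is (the method body and the `seed_source` constructor wiring are assumptions, not the implemented API):

```python
def to_config_builder(self, columns: list[str] | None = None) -> DataDesignerConfigBuilder:
    df = self.load_dataset()      # entire DataFrame loaded into memory
    if columns is not None:
        df = df[columns]          # optional column selection
    # hypothetical wiring: a builder pre-seeded from the in-memory frame
    return DataDesignerConfigBuilder(seed_source=DataFrameSeedSource(df))
```

Everything round-trips through memory and the resulting config cannot be serialized, which is why this stays a notebook convenience rather than a pipeline building block.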
**Auto-chaining from a single config (future):** The engine detects columns that were previously `allow_resize=True` (or a new marker like `stage_boundary=True`) and auto-splits the DAG into stages. This is a convenience layer on top of the explicit API - not required for v1. +#### Stage data contract + +Each stage seeds from the **previous stage's final dataset** - the post-processor output with dropped columns excluded. This is the same DataFrame returned by `DatasetCreationResults.load_dataset()`. + +Processor outputs (named processor artifacts) and media assets (images stored on disk with relative paths in the DataFrame) are NOT automatically forwarded. If a downstream stage needs image columns from an upstream stage, the pipeline must resolve image paths relative to the upstream stage's artifact directory. This needs explicit handling - TBD in implementation. + #### Between-stage callbacks Users may need to transform data between stages. The pipeline supports an optional callback: ```python def filter_high_quality(stage_output_path: Path) -> Path: - df = pd.read_parquet(stage_output_path / "data") + df = pd.read_parquet(stage_output_path / "parquet-files") df = df[df["quality_score"] > 0.8] out = stage_output_path.parent / "filtered" + out.mkdir(exist_ok=True) df.to_parquet(out / "data.parquet") return out @@ -98,52 +107,58 @@ pipeline.add_stage( ) ``` -The callback receives the path to the completed stage's artifacts and returns a path to the (possibly modified) artifacts. This keeps large DataFrames on disk and gives users full control. +The callback receives the path to the completed stage's artifact directory (containing `parquet-files/`, `metadata.json`, etc.) and returns a path that the next stage will seed from. This keeps large DataFrames on disk and gives users full control. -The callback signature is `(Path) -> Path`. If the user returns the same path, no copy is made. If they return a new path, the next stage seeds from that. 
+**Empty stage policy**: If a callback filters all rows (or a stage produces zero rows), the pipeline raises `DataDesignerPipelineError` by default. Stages can opt in to empty output with `allow_empty=True` on `add_stage()`, in which case the pipeline short-circuits and skips subsequent stages. -#### `num_records` behavior +#### `num_records` and seed behavior - If `num_records` is explicitly set on a stage, that value is used. - If omitted, defaults to the previous stage's output row count (after any between-stage callback). - The seed reader's existing cycling behavior handles the explode case: requesting 1000 records from a 100-row seed cycles through the seed 10 times. +- `add_stage()` accepts optional `sampling_strategy` (ordered/shuffle) and `selection_strategy` (IndexRange/PartitionBlock) to control how the previous stage's output is sampled. Defaults to ordered. #### Artifact management -Each stage writes to its own subdirectory under the pipeline's artifact path: +The pipeline owns its directory layout directly, bypassing `ArtifactStorage`'s default auto-rename behavior (which appends timestamps to non-empty directories). Stage directories use stable, deterministic names based on stage index and name: ``` artifacts/ pipeline-name/ - stage-1-personas/ + stage-0-personas/ parquet-files/ metadata.json - stage-2-conversations/ + stage-1-conversations/ parquet-files/ metadata.json - stage-3-judged/ + stage-2-judged/ parquet-files/ metadata.json - pipeline-metadata.json # stage order, configs, lineage + pipeline-metadata.json ``` +The pipeline creates each stage's `ArtifactStorage` with the stage directory as `dataset_name`, ensuring stable paths across reruns. + #### Checkpointing and resume Each stage produces durable parquet output before the next stage starts. This provides natural checkpoint boundaries: - If stage 3 of 4 fails, stages 1 and 2 are already on disk. 
-- A `resume=True` flag on `pipeline.run()` skips completed stages (detected via `pipeline-metadata.json`). +- A `resume=True` flag on `pipeline.run()` skips completed stages. - Within a stage, batch-level resume (#525) can further reduce re-work. +**Resume safety**: Naive "skip if directory exists" is not sufficient. Configs, model settings, callbacks, or DD version may have changed between runs. Resume must compare a fingerprint of each stage's inputs (config hash, num_records, DD version, upstream stage fingerprint) against what's recorded in `pipeline-metadata.json`. If any input changed, that stage and all downstream stages must re-run. This is a phase 3 concern but the metadata format in phase 1 should record enough information to support it. + The connection to #525: chaining gives coarse (stage-level) checkpointing for free. #525 gives fine (batch-level) checkpointing within a stage. They are complementary. #### Provenance `pipeline-metadata.json` records: - Stage order, names, and configs used +- Config fingerprint (hash) per stage for resume invalidation - `num_records` requested vs actual per stage - Which stage's output seeded the next -- Timestamp and duration per stage +- Timestamp, duration, and DD version per stage ### Part 2: Remove `allow_resize` @@ -167,9 +182,7 @@ With the pipeline in place, `allow_resize` is no longer needed as an engine-inte In `ProcessorRunner.run_pre_batch()` and `run_pre_batch_on_df()`, raise `DatasetProcessingError` if the returned DataFrame has a different row count than the input. -This applies to both sync and async paths. Users who need to filter or expand between seeds and generation use the pipeline's between-stage callback instead. - -For users who need programmatic filtering at the seed boundary, a seed reader plugin is the escape hatch (the seed reader can filter/transform before the engine ever sees the data). +This applies to both sync and async paths. 
Users who need to filter or expand between seeds and generation use the pipeline's between-stage callback instead. Note that a seed reader plugin is NOT an equivalent escape hatch: seed readers run before any columns are generated (including samplers), so they can't filter on generated column values. ### Where it fits in the architecture @@ -305,7 +318,7 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 ## Open questions -1. **In-memory vs on-disk handoff between stages**: For small datasets, `DataFrameSeedSource` avoids disk I/O. For large datasets, writing parquet between stages is safer. Should the pipeline auto-detect based on row count, or always go through disk for consistency? +1. **In-memory vs on-disk handoff between stages**: For small datasets, `DataFrameSeedSource` avoids disk I/O. For large datasets, writing parquet between stages is safer. Should the pipeline auto-detect based on row count, or always go through disk for consistency? (Leaning toward always-on-disk for simplicity and resume support.) 2. **Preview support**: Should `pipeline.preview()` run all stages with small `num_records`? Or just preview the last stage seeded from a prior full run? @@ -313,6 +326,12 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 4. **Naming**: `Pipeline` vs `Chain` vs `WorkflowChain`. `Pipeline` is the most intuitive and aligns with ML pipeline terminology. +5. **Image/media column forwarding**: Images in create mode are stored as relative file paths. If a downstream stage seeds from an upstream stage that produced images, the relative paths break. Options: (a) resolve to absolute paths at stage boundary, (b) copy media assets into downstream stage's directory, (c) document as unsupported in v1. + +6. **Branch/fan-out semantics**: Linear chaining covers the common cases. But "generate once, judge several ways" (fan-out) currently requires building multiple pipelines that repeat stage 1. 
Should the pipeline support DAG-shaped stage graphs, or is that future work? + +7. **Downstream seeding scope**: Should downstream stages only seed from the final dataset, or should they also be able to access dropped columns or named processor outputs from upstream stages? + ## Related issues - #447 - AsyncRunController refactor (partially superseded: pre-batch resize handling moves to pipeline level instead of controller level) From af44492fe15dfb27b80d1a0ca38792898c2aeb29 Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Thu, 7 May 2026 20:05:10 +0000 Subject: [PATCH 5/8] docs: refresh plan against current main - deprecation already shipped, fingerprint feature available - Update allow_resize framing: now logs DeprecationWarning and falls back to sync (#553), no longer hard-rejected. Async is default as of #592. - Reference DataDesignerConfig.fingerprint() (#587) as the per-stage hash for resume invalidation. - Rename _validate_async_compatibility() to _resolve_async_compatibility() to match current code. - Mark Phase 2 step 1 as done; list the concrete docs that still need updates. --- plans/workflow-chaining/workflow-chaining.md | 30 +++++++++++++------- 1 file changed, 20 insertions(+), 10 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index 1b7e797ae..f97a043b8 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -25,11 +25,11 @@ As a secondary benefit, chaining also enables the removal of `allow_resize` and ### Secondary benefit: `allow_resize` removal and sync/async convergence -The `allow_resize` flag on column configs lets a generator change the row count mid-generation. This works in the sync engine but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid (currently rejected with a validation error). Pre-batch processors that resize have a similar problem. 
+The `allow_resize` flag on column configs lets a generator change the row count mid-generation. This works in the sync engine but is fundamentally incompatible with the async engine's fixed-size `CompletionTracker` grid. As of #553, an `allow_resize=True` config in async mode logs a `DeprecationWarning` and silently falls back to the sync engine for that run; it is no longer hard-rejected. -`allow_resize` is one of the remaining divergences between sync and async. Since the long-term direction is to remove the sync engine entirely, maintaining a sync-only feature is counterproductive. With chaining in place, resize becomes a between-stage concern rather than a mid-generation concern. This lets us remove `allow_resize` and the associated engine complexity, and disallow row-count changes in pre-batch processors. Users who need resize use a pipeline with a stage boundary at the resize point. +`allow_resize` is one of the remaining divergences between sync and async. The async engine is the default execution path as of #592; sync remains only as a fallback for `allow_resize` runs. Maintaining a sync-only feature to keep one fallback path alive is counterproductive. With chaining in place, resize becomes a between-stage concern rather than a mid-generation concern. This lets us remove `allow_resize` and the associated engine complexity, and disallow row-count changes in pre-batch processors. Users who need resize use a pipeline with a stage boundary at the resize point. -Note: `allow_resize` is currently documented in custom columns, plugin examples, and agent rollout ingestion docs. Removal requires a deprecation cycle and doc updates. +Note: `allow_resize` is documented in custom columns, plugin examples, and agent rollout ingestion docs (verified post-Fern migration in #581). The deprecation warning has shipped in #553; full removal still requires doc updates and the migration of any in-tree usage. 
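The #553 fallback described above can be sketched in a few lines. This is a hedged illustration only: the real logic lives in the engine's `_resolve_async_compatibility()`, and the function name, config shape, and string return convention here are assumptions, not the actual API.

```python
import warnings
from types import SimpleNamespace

def resolve_engine(column_configs, use_async: bool) -> str:
    # Illustrative sketch of the deprecation fallback: an allow_resize
    # column in async mode warns and forces the sync engine for the run.
    wants_resize = any(getattr(c, "allow_resize", False) for c in column_configs)
    if use_async and wants_resize:
        warnings.warn(
            "allow_resize is deprecated and forces the sync engine; "
            "use a pipeline stage boundary at the resize point instead.",
            DeprecationWarning,
        )
        return "sync"
    return "async" if use_async else "sync"

# SimpleNamespace stands in for a real column config object.
cols = [SimpleNamespace(name="qa", allow_resize=True)]
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    engine = resolve_engine(cols, use_async=True)
```

The point of the sketch is the shape of the contract: the run still succeeds, but silently loses the async scheduler, which is why removal (rather than indefinite fallback) is the goal.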
### Why chaining instead of fixing async resize @@ -147,7 +147,14 @@ Each stage produces durable parquet output before the next stage starts. This pr - A `resume=True` flag on `pipeline.run()` skips completed stages. - Within a stage, batch-level resume (#525) can further reduce re-work. -**Resume safety**: Naive "skip if directory exists" is not sufficient. Configs, model settings, callbacks, or DD version may have changed between runs. Resume must compare a fingerprint of each stage's inputs (config hash, num_records, DD version, upstream stage fingerprint) against what's recorded in `pipeline-metadata.json`. If any input changed, that stage and all downstream stages must re-run. This is a phase 3 concern but the metadata format in phase 1 should record enough information to support it. +**Resume safety**: Naive "skip if directory exists" is not sufficient. Configs, model settings, callbacks, or DD version may have changed between runs. Resume must compare a fingerprint of each stage's inputs against what's recorded in `pipeline-metadata.json`. The per-stage fingerprint composes: + +- `DataDesignerConfig.fingerprint()` (introduced in #587) — content-addressable sha256 over the data-relevant portion of the config +- `num_records` (requested) +- DD version +- Upstream stage fingerprint (the directly preceding stage's recorded fingerprint, so a change anywhere in the chain invalidates downstream stages) + +If any component changed, that stage and all downstream stages must re-run. This is a phase 3 concern but the metadata format in phase 1 should record enough information to support it. The connection to #525: chaining gives coarse (stage-level) checkpointing for free. #525 gives fine (batch-level) checkpointing within a stage. They are complementary. 
@@ -155,7 +162,7 @@ The connection to #525: chaining gives coarse (stage-level) checkpointing for fr `pipeline-metadata.json` records: - Stage order, names, and configs used -- Config fingerprint (hash) per stage for resume invalidation +- Per-stage fingerprint for resume invalidation: `DataDesignerConfig.fingerprint()` (#587) combined with `num_records`, DD version, and the upstream stage fingerprint - `num_records` requested vs actual per stage - Which stage's output seeded the next - Timestamp, duration, and DD version per stage @@ -167,13 +174,13 @@ With the pipeline in place, `allow_resize` is no longer needed as an engine-inte **Config changes** (`data-designer-config`): - Remove `allow_resize: bool = False` from `SingleColumnConfig` (or its base class `ColumnConfigBase`). -- Deprecation: keep the field for one release cycle with a deprecation warning, then remove. +- The deprecation warning has already shipped in #553. After one release cycle from that point, remove the field. **Engine changes** (`data-designer-engine`): - Remove `_cell_resize_mode`, `_cell_resize_results`, and the resize branch in `_finalize_fan_out()` from `DatasetBuilder`. - Remove `allow_resize` parameter from `DatasetBatchManager.replace_buffer()`. -- Remove `_validate_async_compatibility()` (no longer needed - nothing to reject). +- Remove `_resolve_async_compatibility()` and the sync-fallback branch in `_build_async()` (no longer needed - nothing to fall back for). - Simplify `_run_full_column_generator()` to always enforce row-count invariance. **Migration path**: Users with `allow_resize=True` columns split their config into a pipeline with a stage boundary at the resize column. The resize column becomes the last column of its stage, and downstream columns move to the next stage. @@ -297,9 +304,11 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 ### Phase 2: Remove `allow_resize` -- Deprecate `allow_resize` with a warning pointing to pipelines. 
+- (Done in #553) `allow_resize=True` in async mode emits a `DeprecationWarning` and falls back to sync. +- Update docs that still reference `allow_resize` (`docs/concepts/custom_columns.md`, `docs/plugins/example.md`, `docs/concepts/agent-rollout-ingestion.md`) to point at pipelines. - Remove resize code from sync engine (`_cell_resize_mode`, `_finalize_fan_out` resize branch, `replace_buffer` `allow_resize` param). -- Remove `_validate_async_compatibility()` from async engine. +- Remove `_resolve_async_compatibility()` and its sync-fallback branch from `_build_async()`. +- Remove the `allow_resize` field from the config schema. - Add fail-fast guard in `ProcessorRunner` for pre-batch row-count changes. - Tests: verify rejection, migration path examples. @@ -307,7 +316,8 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Add `resume=True` to `pipeline.run()`. - Read `pipeline-metadata.json` to detect completed stages. -- Skip completed stages, seed next stage from last completed output. +- Compute each stage's fingerprint via `DataDesignerConfig.fingerprint()` (#587) combined with `num_records`, DD version, and upstream stage fingerprint; invalidate the stage and everything downstream on any mismatch. +- Skip stages whose fingerprints match, seed next stage from last completed output. - Depends on artifact layout from phase 1. ### Phase 4 (future): Auto-chaining from single config From 85dba6879f79a5646c95268d1afa51c7b3b1e213 Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Thu, 7 May 2026 21:03:17 +0000 Subject: [PATCH 6/8] docs: bake parallel-async carefulness into the plan - throttle invariant, on-disk handoffs, DAG-ready, acreate sidecar - Resolve in-memory vs on-disk handoff to always-on-disk inside Pipeline; reserve in-memory for to_config_builder() notebook ergonomic. - Add Composability section: parent DataDesigner reuse is a load-bearing API contract for throttle coordination across stages and parallel branches. 
- Add Engine API surface section: acreate() as a small additive sidecar, independent of chaining v1 but a hard dependency for Phase 4. - Promote DAG semantics from "future work" to "designed-in"; add Phase 4 (parallel branches via asyncio.gather over acreate); demote auto-chaining to Phase 5. - New Resolved decisions section captures the three load-bearing API decisions; trim the Open questions list accordingly. - Mention possible future external orchestration only as a vague composability constraint, no commitment. --- plans/workflow-chaining/workflow-chaining.md | 77 ++++++++++++++++---- 1 file changed, 64 insertions(+), 13 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index f97a043b8..3ea040bd8 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -74,7 +74,9 @@ config_convos = ( result_2 = dd.create(config_convos, num_records=1000) ``` -This is a thin wrapper: loads the dataset into memory, optionally filters columns, wraps in `DataFrameSeedSource`, returns a new config builder. No tracking, no provenance, no callbacks - just a quick bridge for iteration. Not suitable for large datasets (loads full DataFrame into memory) or serializable configs (`DataFrameSeedSource` can't be written to YAML). For production pipelines, use the `Pipeline` class. +This is a thin wrapper: loads the dataset into memory, optionally filters columns, wraps in `DataFrameSeedSource`, returns a new config builder. No tracking, no provenance, no callbacks - just a quick bridge for iteration. Not suitable for large datasets (loads full DataFrame into memory) or serializable configs (`DataFrameSeedSource` can't be written to YAML). + +This is the *only* place in the chaining surface that uses an in-memory handoff. `Pipeline` itself always hands off between stages on disk - see "Composability and the throttle invariant" below. 
For production pipelines, use the `Pipeline` class. **Auto-chaining from a single config (future):** @@ -167,6 +169,31 @@ The connection to #525: chaining gives coarse (stage-level) checkpointing for fr - Which stage's output seeded the next - Timestamp, duration, and DD version per stage +#### Composability and the throttle invariant + +The `Pipeline` is constructed via `dd.pipeline()` and holds a reference to the parent `DataDesigner`. Every stage runs `dd.create()` (or `dd.acreate()` once available - see Engine API surface below) on that same instance. This is a load-bearing API contract for two reasons. + +**Throttle coordination across stages.** A `DataDesigner` owns one `ModelRegistry`, which owns one `ThrottleManager`. AIMD rate-limit state is per-instance. If the pipeline constructed a fresh `DataDesigner` per stage, each stage would adapt independently and the aggregate request rate against a provider could exceed the configured cap by a multiple of the stage count. The same hazard applies to parallel branches in Phase 4: branches sharing one `DataDesigner` automatically share throttling; branches each holding their own `DataDesigner` silently fragment it. Reusing one instance is the simple, correct default. + +**Door open for external orchestration.** If cross-process or distributed execution is ever introduced, the natural seam is the throttle backend (today an in-memory `ThrottleManager`; potentially a coordinator-backed implementation later). By keeping ownership of throttling *explicit* on the parent `DataDesigner` rather than *implicit* per stage, the pipeline's shape does not preclude swapping in such a backend. v1 does not need this and will not implement it; v1 only needs to avoid encoding assumptions that would prevent it. + +**On-disk handoffs for the same reason.** Stage handoffs go through parquet on disk via `LocalFileSeedSource`, never through an in-memory `DataFrameSeedSource`. 
This composes with any future orchestration model (in-process, cross-process, distributed) without per-environment branching. The cost is one parquet round-trip per stage boundary, which is negligible compared to LLM call time at any realistic scale. The notebook ergonomic `to_config_builder()` is the in-memory escape hatch and is explicitly not a Pipeline. + +**Internal stage model is a graph, not a list.** v1 exposes a linear `add_stage()` API and runs stages sequentially. Internally the pipeline represents stages as a DAG with the linear case being the default chain. This lets Phase 4 add parallel branches as an additive API change without restructuring orchestration. + +#### Engine API surface: `acreate()` + +`Pipeline` v1 calls `DataDesigner.create()` synchronously per stage and runs them in order. Sequential execution doesn't need an async API. Parallel execution does, and the engine doesn't expose one today. + +Adding `async def acreate(...)` on `DataDesigner` is a small, additive change. The underlying `_build_async` already runs on a singleton background event loop and submits work via a `concurrent.futures.Future`; `acreate()` bridges it into the caller's loop via `asyncio.wrap_future`. The sync `create()` becomes a one-line wrapper. No breaking changes. + +`acreate()` enables two things without touching `Pipeline`: + +- **Parallel-independent workflows.** Users can `asyncio.gather(dd.acreate(c1), dd.acreate(c2))` for unrelated configs and get coordinated throttling automatically through the shared `ThrottleManager`. +- **Pipeline DAG branches (Phase 4).** When the pipeline graduates to a DAG, parallel branches are a pure orchestration change - `asyncio.gather` over `acreate()` calls inside `pipeline.run()` - with no further engine work required. + +`acreate()` is *not* part of chaining v1. It ships as its own small piece of work that can land before, alongside, or after Phase 1; the dependency only becomes hard for Phase 4. 
Listed as a sidecar under Implementation phases. + ### Part 2: Remove `allow_resize` With the pipeline in place, `allow_resize` is no longer needed as an engine-internal mechanism. Resize becomes a between-stage concern. @@ -197,7 +224,7 @@ This applies to both sync and async paths. Users who need to filter or expand be |-------|---------| | `data-designer-config` | Remove `allow_resize` field. No new config models needed for v1 (pipeline is imperative, not declarative). | | `data-designer-engine` | Remove resize code paths. Add fail-fast guard in `ProcessorRunner`. No new engine features. | -| `data-designer` (interface) | New `Pipeline` class. Thin orchestration: calls `DataDesigner.create()` per stage, wires `DataFrameSeedSource` between stages for in-memory handoff or `LocalFileSeedSource` for on-disk handoff. | +| `data-designer` (interface) | New `Pipeline` class. Thin orchestration: holds a reference to the parent `DataDesigner`, calls `DataDesigner.create()` per stage, hands off between stages on disk via `LocalFileSeedSource`. All stages share the same `ModelRegistry` and `ThrottleManager`. Optionally consumes `DataDesigner.acreate()` (sidecar) once available, for Phase 4 parallel branches. | The engine does not know about pipelines. Each stage is a regular `DatasetBuilder.build()` call. @@ -297,10 +324,19 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 ### Phase 1: Pipeline class and `to_config_builder()` (can ship independently) - Add `to_config_builder()` on `DatasetCreationResults` and `PreviewResults`. -- Add `Pipeline` class with `add_stage()`, `run()`, between-stage callbacks. +- Add `Pipeline` class with `add_stage()`, `run()`, between-stage callbacks. Pipeline holds a reference to the parent `DataDesigner` and reuses it across stages. +- Stage handoff is always on disk via `LocalFileSeedSource`; no in-memory handoff path inside `Pipeline`. +- Internal stage representation is a DAG (linear-only inputs in v1). 
- Add `pipeline-metadata.json` writing. - Add `dd.pipeline()` factory method on `DataDesigner`. -- Tests: multi-stage runs, explode/filter via callbacks, num_records defaulting, artifact layout. +- Tests: multi-stage runs, explode/filter via callbacks, num_records defaulting, artifact layout, throttle reuse across stages. + +### Sidecar: `acreate()` on `DataDesigner` (independent of chaining v1) + +- Add `async def acreate(...)` mirroring `create()` but returning the awaitable instead of blocking. +- `create()` becomes a one-line wrapper around `acreate()` (or both share a common builder helper). +- Tests: parallel-independent workflows via `asyncio.gather`; verify shared `ThrottleManager` keeps aggregate request rate within configured caps. +- Can ship before, alongside, or after Phase 1. Hard dependency for Phase 4. ### Phase 2: Remove `allow_resize` @@ -320,27 +356,42 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Skip stages whose fingerprints match, seed next stage from last completed output. - Depends on artifact layout from phase 1. -### Phase 4 (future): Auto-chaining from single config +### Phase 4: DAG-shaped stages with parallel branches + +- Extend `add_stage()` with an optional `depends_on=[stage_name, ...]` argument; default keeps the linear behavior. +- `pipeline.run()` walks the resulting DAG, gathering independent branches via `asyncio.gather` over `dd.acreate()` calls. +- Per-stage fingerprint composition (Phase 3) generalizes naturally: a stage's upstream fingerprint becomes the hash of all its parents' fingerprints. +- Throttle coordination relies on the existing invariant: all branches run on the same parent `DataDesigner`, so `ThrottleManager` is shared. +- Hard dependency on the `acreate()` sidecar. +- Tests: fan-out (one upstream, multiple parallel children); join (multiple upstreams, one child); resume invalidation when one branch's fingerprint changes; throttle behavior under N parallel branches. 
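The `acreate()` bridge and the Phase 4 branch gather can be sketched end to end. A thread pool stands in here for the engine's singleton background event loop; the class, method bodies, and result shape are illustrative assumptions, not the shipped implementation.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def _build(config, num_records):
    # Placeholder for a full engine build; returns a result marker.
    return {"config": config, "rows": num_records}

class DataDesigner:
    """Toy model of the create()/acreate() split described above."""

    def __init__(self):
        # Stand-in for the background loop: submissions come back as
        # concurrent.futures.Future, as described for _build_async.
        self._pool = ThreadPoolExecutor(max_workers=4)

    def _submit_build(self, config, num_records):
        return self._pool.submit(_build, config, num_records)

    async def acreate(self, config, num_records):
        # Bridge the engine future into the caller's asyncio loop.
        return await asyncio.wrap_future(self._submit_build(config, num_records))

    def create(self, config, num_records):
        # Sync path stays a thin blocking wrapper over the same submission.
        return self._submit_build(config, num_records).result()

async def run_branches(dd, configs, num_records):
    # Phase 4 branch parallelism: gather independent branches on ONE
    # DataDesigner instance so throttle state stays shared.
    return await asyncio.gather(*(dd.acreate(c, num_records) for c in configs))

dd = DataDesigner()
single = dd.create("personas", 5)
branches = asyncio.run(run_branches(dd, ["judge-a", "judge-b"], 10))
```

Note that `run_branches` takes the shared `dd` as an argument rather than constructing its own instance; that is the throttle-coordination invariant made concrete.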
+ +### Phase 5 (future): Auto-chaining from single config - Detect stage boundaries in the DAG (via a new config marker or heuristic). - Auto-split into pipeline stages internally. - User sees a single `dd.create(config)` call but gets multi-stage execution. -## Open questions +## Resolved decisions + +These were open in earlier drafts; recording the resolutions here so the design is unambiguous. + +1. **In-memory vs on-disk handoff between stages** -> Always on-disk inside `Pipeline`. The in-memory `DataFrameSeedSource` mode is reserved for the lightweight `to_config_builder()` notebook ergonomic, which is explicitly *not* a `Pipeline`. Reasons: single execution model, simpler resume story, and composability with any future external orchestration that can't share an in-memory DataFrame across process boundaries. Cost is one parquet round-trip per stage, negligible relative to LLM call time. -1. **In-memory vs on-disk handoff between stages**: For small datasets, `DataFrameSeedSource` avoids disk I/O. For large datasets, writing parquet between stages is safer. Should the pipeline auto-detect based on row count, or always go through disk for consistency? (Leaning toward always-on-disk for simplicity and resume support.) +2. **Branch/fan-out semantics (DAG)** -> Designed-in but not v1. The internal stage representation is a DAG; v1 only accepts linear inputs through `add_stage()`. Phase 4 ships parallel branches via `asyncio.gather` over `acreate()`. v1 stays sequential. -2. **Preview support**: Should `pipeline.preview()` run all stages with small `num_records`? Or just preview the last stage seeded from a prior full run? +3. **Pipeline construction** -> `Pipeline` is created via `dd.pipeline()` and reuses the parent `DataDesigner`'s `ModelRegistry` and `ThrottleManager` across all stages. The pipeline does not construct its own `DataDesigner` instances. This is the throttle-coordination invariant (see Composability section). + +## Open questions -3. 
**Config serialization**: A pipeline config can't be serialized to YAML if stages use `DataFrameSeedSource`. For persistence, stages would need symbolic references ("seed from stage X's output"). This is needed for auto-chaining (phase 4) but not for the explicit API (phases 1-3). +1. **Preview support**: Should `pipeline.preview()` run all stages with small `num_records`? Or just preview the last stage seeded from a prior full run? -4. **Naming**: `Pipeline` vs `Chain` vs `WorkflowChain`. `Pipeline` is the most intuitive and aligns with ML pipeline terminology. +2. **Config serialization**: For persistence, pipeline configs would need symbolic stage references ("seed from stage X's output"). With the on-disk handoff decision above, the `DataFrameSeedSource` blocker is no longer relevant; the remaining question is how to encode stage dependencies in YAML. Needed for auto-chaining (Phase 5) but not for the explicit API (phases 1-4). -5. **Image/media column forwarding**: Images in create mode are stored as relative file paths. If a downstream stage seeds from an upstream stage that produced images, the relative paths break. Options: (a) resolve to absolute paths at stage boundary, (b) copy media assets into downstream stage's directory, (c) document as unsupported in v1. +3. **Naming**: `Pipeline` vs `Chain` vs `WorkflowChain`. `Pipeline` is the most intuitive and aligns with ML pipeline terminology. -6. **Branch/fan-out semantics**: Linear chaining covers the common cases. But "generate once, judge several ways" (fan-out) currently requires building multiple pipelines that repeat stage 1. Should the pipeline support DAG-shaped stage graphs, or is that future work? +4. **Image/media column forwarding**: Images in create mode are stored as relative file paths. If a downstream stage seeds from an upstream stage that produced images, the relative paths break. 
Options: (a) resolve to absolute paths at stage boundary, (b) copy media assets into downstream stage's directory, (c) document as unsupported in v1. -7. **Downstream seeding scope**: Should downstream stages only seed from the final dataset, or should they also be able to access dropped columns or named processor outputs from upstream stages? +5. **Downstream seeding scope**: Should downstream stages only seed from the final dataset, or should they also be able to access dropped columns or named processor outputs from upstream stages? ## Related issues From 17d534362c6c41f264845fb4c1a4df4e24b116cf Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Thu, 7 May 2026 22:05:03 +0000 Subject: [PATCH 7/8] docs: align plan framing with cross-process orchestration discussion - Soften "Door open for external orchestration" - drop throttle-backend-as-seam framing; cross-reference Future considerations. - Make acreate() scope explicit (in-process); cross-process orchestration is not the same problem. - Add Phase 4 scope clarifier - branch parallelism, not stage pipelining. - New Future considerations section: external orchestration (vague, uncommitted) and pipelined execution of dependent stages. --- plans/workflow-chaining/workflow-chaining.md | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index 3ea040bd8..791884aaf 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -175,7 +175,7 @@ The `Pipeline` is constructed via `dd.pipeline()` and holds a reference to the p **Throttle coordination across stages.** A `DataDesigner` owns one `ModelRegistry`, which owns one `ThrottleManager`. AIMD rate-limit state is per-instance. 
If the pipeline constructed a fresh `DataDesigner` per stage, each stage would adapt independently and the aggregate request rate against a provider could exceed the configured cap by a multiple of the stage count. The same hazard applies to parallel branches in Phase 4: branches sharing one `DataDesigner` automatically share throttling; branches each holding their own `DataDesigner` silently fragment it. Reusing one instance is the simple, correct default. -**Door open for external orchestration.** If cross-process or distributed execution is ever introduced, the natural seam is the throttle backend (today an in-memory `ThrottleManager`; potentially a coordinator-backed implementation later). By keeping ownership of throttling *explicit* on the parent `DataDesigner` rather than *implicit* per stage, the pipeline's shape does not preclude swapping in such a backend. v1 does not need this and will not implement it; v1 only needs to avoid encoding assumptions that would prevent it. +**Door open for external orchestration.** The pipeline's choice to reuse one `DataDesigner` is the in-process strategy: shared throttling across stages, branches gathered in the orchestrator process. A cross-process strategy is a separate but compatible model - see Future considerations. v1 only needs to avoid encoding assumptions that would prevent it. **On-disk handoffs for the same reason.** Stage handoffs go through parquet on disk via `LocalFileSeedSource`, never through an in-memory `DataFrameSeedSource`. This composes with any future orchestration model (in-process, cross-process, distributed) without per-environment branching. The cost is one parquet round-trip per stage boundary, which is negligible compared to LLM call time at any realistic scale. The notebook ergonomic `to_config_builder()` is the in-memory escape hatch and is explicitly not a Pipeline. 
@@ -183,7 +183,7 @@ The `Pipeline` is constructed via `dd.pipeline()` and holds a reference to the p #### Engine API surface: `acreate()` -`Pipeline` v1 calls `DataDesigner.create()` synchronously per stage and runs them in order. Sequential execution doesn't need an async API. Parallel execution does, and the engine doesn't expose one today. +`Pipeline` v1 calls `DataDesigner.create()` synchronously per stage and runs them in order. Sequential execution doesn't need an async API. *In-process* parallel execution does, and the engine doesn't expose one today. Cross-process orchestration is not the same problem: each worker runs sync `create()` in its own process and doesn't need an async surface. Adding `async def acreate(...)` on `DataDesigner` is a small, additive change. The underlying `_build_async` already runs on a singleton background event loop and submits work via a `concurrent.futures.Future`; `acreate()` bridges it into the caller's loop via `asyncio.wrap_future`. The sync `create()` becomes a one-line wrapper. No breaking changes. @@ -363,6 +363,7 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Per-stage fingerprint composition (Phase 3) generalizes naturally: a stage's upstream fingerprint becomes the hash of all its parents' fingerprints. - Throttle coordination relies on the existing invariant: all branches run on the same parent `DataDesigner`, so `ThrottleManager` is shared. - Hard dependency on the `acreate()` sidecar. +- **Scope: branch parallelism, not stage pipelining.** Stages still wait for their dependencies to fully complete before starting; pipelined execution of dependent stages is a separate direction sketched in Future considerations. - Tests: fan-out (one upstream, multiple parallel children); join (multiple upstreams, one child); resume invalidation when one branch's fingerprint changes; throttle behavior under N parallel branches. 
### Phase 5 (future): Auto-chaining from single config @@ -371,6 +372,14 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Auto-split into pipeline stages internally. - User sees a single `dd.create(config)` call but gets multi-stage execution. +## Future considerations + +Items not on the current roadmap but worth flagging so they don't get accidentally precluded by v1-v5 design choices. + +**External orchestration for cross-process / distributed execution.** There is interest in eventually running DataDesigner workloads across processes or nodes - self-hosted serving, multi-host fan-out, scheduling against external clusters. The specific shape of that orchestration is still under discussion and is not committed to here. The chaining plan's design choices (parent `DataDesigner` reuse, on-disk handoffs, no new engine surface) compose naturally with such a system: an external orchestrator could dispatch independent `DataDesigner.create()` calls against partitioned slices and per-replica endpoints without the pipeline class needing to change. v1-v5 do not depend on this materializing. + +**Pipelined execution of dependent stages.** Today the stage data contract is "final dataset" - a downstream stage waits for its upstream to fully complete. A future direction is to let downstream stages consume upstream batches as they're produced, overlapping execution across the dependency edge. Required changes: streaming seed sources, an explicit "stage done" sentinel rather than file-completion checks, and resume semantics for partially-consumed upstreams. Most useful when stage bottlenecks are heterogeneous (LLM-bound stage feeding a CPU-bound validator); little gain when both stages are LLM-bound since they share provider capacity. Not designed here; flagged so the stage contract isn't quietly closed off. + ## Resolved decisions These were open in earlier drafts; recording the resolutions here so the design is unambiguous. 
From 3de41007ec3147d4625179d95e93dc1922d01d6b Mon Sep 17 00:00:00 2001 From: Andre Manoel Date: Fri, 8 May 2026 20:20:50 +0000 Subject: [PATCH 8/8] docs: address workflow chaining review comments --- plans/workflow-chaining/workflow-chaining.md | 29 ++++++++++++-------- 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/plans/workflow-chaining/workflow-chaining.md b/plans/workflow-chaining/workflow-chaining.md index 791884aaf..cb204e5ec 100644 --- a/plans/workflow-chaining/workflow-chaining.md +++ b/plans/workflow-chaining/workflow-chaining.md @@ -46,7 +46,7 @@ A new `Pipeline` class in `data_designer.interface` that orchestrates multi-stag **Explicit multi-stage pipeline:** ```python -pipeline = dd.pipeline() +pipeline = dd.pipeline(name="persona-conversations") pipeline.add_stage("personas", config_personas, num_records=100) pipeline.add_stage("conversations", config_convos, num_records=1000) # explode: 100 -> 1000 pipeline.add_stage("judged", config_judge) # defaults to previous stage's output size @@ -58,6 +58,8 @@ results["conversations"].load_dataset() # stage 2 output results["judged"].load_dataset() # final output ``` +`name` is required and is the durable identity for artifact lookup and resume. Reusing the same name across Python sessions lets `pipeline.run(resume=True)` find the previous `pipeline-metadata.json`. 
+ +**Convenience method on results (lightweight, for notebooks):** For interactive use where a full pipeline is overkill, a `to_config_builder()` method on `DatasetCreationResults` returns a pre-seeded `DataDesignerConfigBuilder`: @@ -101,6 +103,7 @@ def filter_high_quality(stage_output_path: Path) -> Path: df.to_parquet(out / "data.parquet") return out +pipeline = dd.pipeline(name="filter-enrich") pipeline.add_stage("generated", config_gen, num_records=1000) pipeline.add_stage( "enriched", @@ -122,11 +125,11 @@ The callback receives the path to the completed stage's artifact directory (cont #### Artifact management -The pipeline owns its directory layout directly, bypassing `ArtifactStorage`'s default auto-rename behavior (which appends timestamps to non-empty directories). Stage directories use stable, deterministic names based on stage index and name: +The pipeline owns its directory layout directly, bypassing `ArtifactStorage`'s default auto-rename behavior (which appends timestamps to non-empty directories). `dd.pipeline(name=...)` maps to `artifacts/<name>/`; no timestamp, UUID, or object-derived default is used for resumable pipelines. Stage directories use stable, deterministic names based on stage index and name: ``` artifacts/ - pipeline-name/ + <name>/ stage-0-personas/ parquet-files/ metadata.json @@ -139,7 +142,7 @@ artifacts/ pipeline-metadata.json ``` -The pipeline creates each stage's `ArtifactStorage` with the stage directory as `dataset_name`, ensuring stable paths across reruns. +The pipeline creates each stage's `ArtifactStorage` with the stage directory as `dataset_name`, ensuring stable paths across reruns. A fresh `dd.pipeline(name="gen-judge")` finds the same `artifacts/gen-judge/pipeline-metadata.json` path as the original run.
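The deterministic-layout rule above can be sketched as pure path arithmetic. The helper names below are illustrative, not the real `Pipeline` internals; the point is that every path is a function of the explicit pipeline name and stage position, with no timestamp or UUID component:

```python
from pathlib import Path

def pipeline_root(artifacts_root: Path, name: str) -> Path:
    # dd.pipeline(name=...) -> artifacts/<name>/, stable across sessions.
    return artifacts_root / name

def stage_dir(artifacts_root: Path, name: str, index: int, stage: str) -> Path:
    # Deterministic stage directory from stage index and stage name.
    return pipeline_root(artifacts_root, name) / f"stage-{index}-{stage}"

def metadata_path(artifacts_root: Path, name: str) -> Path:
    # Same path every run, so resume can always find it.
    return pipeline_root(artifacts_root, name) / "pipeline-metadata.json"

root = Path("artifacts")
gen_dir = stage_dir(root, "gen-judge", 0, "generated")
meta = metadata_path(root, "gen-judge")
```

Because the mapping is a pure function of `(name, index, stage)`, two processes constructing the pipeline independently resolve identical paths, which is what makes name-based resume possible.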
#### Checkpointing and resume @@ -163,6 +166,7 @@ The connection to #525: chaining gives coarse (stage-level) checkpointing for fr #### Provenance `pipeline-metadata.json` records: +- Pipeline name - Stage order, names, and configs used - Per-stage fingerprint for resume invalidation: `DataDesignerConfig.fingerprint()` (#587) combined with `num_records`, DD version, and the upstream stage fingerprint - `num_records` requested vs actual per stage @@ -171,7 +175,7 @@ The connection to #525: chaining gives coarse (stage-level) checkpointing for fr #### Composability and the throttle invariant -The `Pipeline` is constructed via `dd.pipeline()` and holds a reference to the parent `DataDesigner`. Every stage runs `dd.create()` (or `dd.acreate()` once available - see Engine API surface below) on that same instance. This is a load-bearing API contract for two reasons. +The `Pipeline` is constructed via `dd.pipeline(name=...)` and holds a reference to the parent `DataDesigner`. Every stage runs `dd.create()` (or `dd.acreate()` once available - see Engine API surface below) on that same instance. This is a load-bearing API contract for two reasons. **Throttle coordination across stages.** A `DataDesigner` owns one `ModelRegistry`, which owns one `ThrottleManager`. AIMD rate-limit state is per-instance. If the pipeline constructed a fresh `DataDesigner` per stage, each stage would adapt independently and the aggregate request rate against a provider could exceed the configured cap by a multiple of the stage count. The same hazard applies to parallel branches in Phase 4: branches sharing one `DataDesigner` automatically share throttling; branches each holding their own `DataDesigner` silently fragment it. Reusing one instance is the simple, correct default. 
@@ -252,7 +256,7 @@ config_convos = ( .add_column(name="conversation", column_type="llm_text", prompt="Write a conversation between {{ name }} and an assistant about {{ topic }}...") ) -pipeline = dd.pipeline() +pipeline = dd.pipeline(name="persona-conversations") pipeline.add_stage("personas", config_personas, num_records=100) pipeline.add_stage("conversations", config_convos, num_records=1000) results = pipeline.run() @@ -274,7 +278,7 @@ def keep_high_quality(stage_output_path: Path) -> Path: df.to_parquet(out / "data.parquet") return out -pipeline = dd.pipeline() +pipeline = dd.pipeline(name="filter-enrich") pipeline.add_stage("candidates", config_gen, num_records=5000) pipeline.add_stage("enriched", config_enrich, after=keep_high_quality) results = pipeline.run() @@ -291,13 +295,13 @@ config_gen = DataDesignerConfigBuilder(model_configs=[fast_model])... # Stage 2: judge with a stronger model config_judge = DataDesignerConfigBuilder(model_configs=[strong_model])... -pipeline = dd.pipeline() +pipeline = dd.pipeline(name="gen-judge") pipeline.add_stage("generated", config_gen, num_records=1000) pipeline.add_stage("judged", config_judge) results = pipeline.run() # Later: tweak judging config, resume from stage 1 output -pipeline_v2 = dd.pipeline() +pipeline_v2 = dd.pipeline(name="gen-judge") pipeline_v2.add_stage("generated", config_gen, num_records=1000) pipeline_v2.add_stage("judged", config_judge_v2) results_v2 = pipeline_v2.run(resume=True) # skips stage 1 @@ -328,7 +332,7 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Stage handoff is always on disk via `LocalFileSeedSource`; no in-memory handoff path inside `Pipeline`. - Internal stage representation is a DAG (linear-only inputs in v1). - Add `pipeline-metadata.json` writing. -- Add `dd.pipeline()` factory method on `DataDesigner`. +- Add `dd.pipeline(name: str)` factory method on `DataDesigner`. 
- Tests: multi-stage runs, explode/filter via callbacks, num_records defaulting, artifact layout, throttle reuse across stages. ### Sidecar: `acreate()` on `DataDesigner` (independent of chaining v1) @@ -352,6 +356,7 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Add `resume=True` to `pipeline.run()`. - Read `pipeline-metadata.json` to detect completed stages. +- Resolve the metadata path from the explicit pipeline name. - Compute each stage's fingerprint via `DataDesignerConfig.fingerprint()` (#587) combined with `num_records`, DD version, and upstream stage fingerprint; invalidate the stage and everything downstream on any mismatch. - Skip stages whose fingerprints match, seed next stage from last completed output. - Depends on artifact layout from phase 1. @@ -360,7 +365,7 @@ result_2 = dd.create(config_2, num_records=200) # explode: 50 -> 200 - Extend `add_stage()` with an optional `depends_on=[stage_name, ...]` argument; default keeps the linear behavior. - `pipeline.run()` walks the resulting DAG, gathering independent branches via `asyncio.gather` over `dd.acreate()` calls. -- Per-stage fingerprint composition (Phase 3) generalizes naturally: a stage's upstream fingerprint becomes the hash of all its parents' fingerprints. +- Per-stage fingerprint composition (Phase 3) generalizes naturally: a stage's upstream fingerprint becomes the hash of all parent fingerprints sorted by stage name, making joins stable regardless of `depends_on` declaration order. - Throttle coordination relies on the existing invariant: all branches run on the same parent `DataDesigner`, so `ThrottleManager` is shared. - Hard dependency on the `acreate()` sidecar. - **Scope: branch parallelism, not stage pipelining.** Stages still wait for their dependencies to fully complete before starting; pipelined execution of dependent stages is a separate direction sketched in Future considerations. 
@@ -388,7 +393,7 @@ These were open in earlier drafts; recording the resolutions here so the design 2. **Branch/fan-out semantics (DAG)** -> Designed-in but not v1. The internal stage representation is a DAG; v1 only accepts linear inputs through `add_stage()`. Phase 4 ships parallel branches via `asyncio.gather` over `acreate()`. v1 stays sequential. -3. **Pipeline construction** -> `Pipeline` is created via `dd.pipeline()` and reuses the parent `DataDesigner`'s `ModelRegistry` and `ThrottleManager` across all stages. The pipeline does not construct its own `DataDesigner` instances. This is the throttle-coordination invariant (see Composability section). +3. **Pipeline construction** -> `Pipeline` is created via `dd.pipeline(name=...)` and reuses the parent `DataDesigner`'s `ModelRegistry` and `ThrottleManager` across all stages. The explicit name is the durable artifact identity used for resume, and the pipeline does not construct its own `DataDesigner` instances. This is the throttle-coordination invariant (see Composability section). ## Open questions