
api/conversion: implement structural DGDR conversion#9262

Open
sttts wants to merge 5 commits into ai-dynamo:main from sttts:sttts-dgdr-structural

Conversation

@sttts
Contributor

@sttts sttts commented May 7, 2026

Summary

This PR aligns DGDR conversion with the structural DGD/DCD conversion rules from api/CONVERSION.md after #9257. DGDR has been in production use since Dynamo 1.0, so the conversion keeps backward-compatible read behavior and continues writing the legacy downgrade annotations for as long as the 1.2 -> 1.1 downgrade path must remain supported.

  • Implement structural DGDR conversion with live source fields authoritative.
  • Preserve source-version-only values sparsely through nvidia.com/dgdr-spec and nvidia.com/dgdr-status annotations.
  • Read old legacy DGDR annotations and new sparse annotations in parallel.
  • Continue writing old legacy DGDR annotations for downgrade compatibility.
  • Keep the pre-structural DGDR converter in tests as a legacy compatibility oracle.

Compatibility and Fuzz Scope

  • Main DGDR roundtrip fuzz tests have the fixed-field TODO workarounds disabled, so the newly fixed fields are exercised directly.
  • Legacy-vs-structural fuzz tests intentionally normalize inputs to the old converter's representable behavior, then compare legacy and structural conversion modulo only the new sparse annotations.
  • This keeps the legacy oracle useful for downgrade compatibility without hiding fixed structural roundtrip bugs in the main fuzz tests.

Fixed Bugs

  • Preserve v1alpha1 spec.profilingConfig.nodeSelector through v1beta1.spec.overrides.profilingJob.template.spec.nodeSelector.
  • Preserve v1alpha1 status state=Initializing and state=DeploymentDeleted through v1beta1 using sparse annotations.
  • Preserve v1beta1 spec.hardware through v1alpha1 using sparse annotations.
  • Preserve v1beta1 spec.workload.concurrency and spec.workload.requestRate through v1alpha1 using sparse annotations.
  • Preserve v1beta1 spec.sla.e2eLatency through v1alpha1 using sparse annotations.
  • Preserve v1beta1 spec.overrides.dgd and hub-only spec.overrides.profilingJob leaves through v1alpha1 using sparse annotations.
  • Preserve v1beta1 spec.searchStrategy through v1alpha1 using sparse annotations.
  • Preserve explicit disabled spec.features.mocker.enabled=false through v1alpha1 without overriding live useMocker=true.
  • Preserve v1beta1 status phase=Deployed through v1alpha1 status conversion.
  • Preserve v1beta1 status.profilingPhase through v1alpha1 using sparse annotations.
  • Preserve v1beta1 status.profilingResults.pareto through v1alpha1 using sparse annotations.
  • Preserve v1beta1 status.deploymentInfo through v1alpha1 using sparse annotations.
  • Stop stale legacy profiling config annotations from restoring old live hub fields such as SLA, Workload, ModelCache, and Planner.

Prompt History Trace

  • Base the DGDR work on the DGD/DCD structural conversion cleanup from PR #9257 (refactor(operator): structure DCD and DGD converters) and follow api/CONVERSION.md.
  • Preserve existing DGDR v1alpha1/v1beta1 read behavior because DGDR conversion has been in use since Dynamo 1.0.
  • Continue writing legacy DGDR annotations until 1.2 downgrade compatibility no longer matters.
  • Keep the old DGDR converter around in tests so fuzzing and compatibility tests can compare against legacy behavior.
  • Rename legacy annotation constants to legacyAnn* and mark them with TODO(sttts) for removal after 1.2.
  • Use CONVERSION.md naming for structural converters, including ConvertFromDynamoGraphDeploymentRequest* and ConvertToDynamoGraphDeploymentRequest*.
  • Keep main DGDR roundtrip fuzz tests unmasked for fixed fields; keep legacy-vs-structural fuzz tests normalized to the old converter's representable behavior.
  • Split the PR into one structural conversion commit and one legacy-oracle commit.

Testing

  • GOCACHE=/tmp/dynamo-go-cache go test ./api -run "TestFuzzRoundTrip_DGDR|TestFuzzRoundTripMutability/DGDR" -roundtrip-fuzz-iters=3000 -count=1 -v
  • GOCACHE=/tmp/dynamo-go-cache go test ./api/v1alpha1 -run TestDGDRFuzzLegacyAndStructural -dgdr-legacy-fuzz-iters=3000 -count=1 -v
  • GOCACHE=/tmp/dynamo-go-cache go test ./api/... -count=1
  • git diff --check

sttts added 3 commits May 7, 2026 16:03
Signed-off-by: Dr. Stefan Schimanski <sschimanski@nvidia.com>
Signed-off-by: Dr. Stefan Schimanski <sschimanski@nvidia.com>
Signed-off-by: Dr. Stefan Schimanski <sschimanski@nvidia.com>
@sttts sttts requested a review from a team as a code owner May 7, 2026 16:55
@copy-pr-bot

copy-pr-bot Bot commented May 7, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@github-actions
Contributor

github-actions Bot commented May 7, 2026

👋 Hi sttts! Thank you for contributing to ai-dynamo/dynamo.

Just a reminder: The NVIDIA Test Github Validation CI runs an essential subset of the testing framework to quickly catch errors. Your PR reviewers may elect to test the changes comprehensively before approving your changes.

🚀

@github-actions github-actions Bot added the labels external-contribution (Pull request is from an external contributor), documentation (Improvements or additions to documentation), and deployment::k8s (Relates to dynamo deployment in kubernetes) May 7, 2026
@sttts sttts force-pushed the sttts-dgdr-structural branch from c2654ed to 8642f67 Compare May 7, 2026 16:56
@sttts sttts changed the title api/conversion: preserve DGDR structurally api/conversion: implement structural DGDR conversion May 7, 2026
@sttts sttts force-pushed the sttts-dgdr-structural branch from 8642f67 to e264634 Compare May 7, 2026 16:59
@coderabbitai
Contributor

coderabbitai Bot commented May 7, 2026


Walkthrough

This PR refactors API v1alpha1↔v1beta1 conversion pipelines across DynamoComponentDeployment, DynamoGraphDeployment, and DynamoGraphDeploymentRequest by exporting previously internal converter functions, introducing typed conversion contexts to control preservation behavior, formalizing sparse annotation-based preservation/restoration logic with blob projections, and ensuring backward compatibility with legacy annotations.

Changes

Unified Conversion Refactoring

Layer / File(s) Summary
Conversion Policy Documentation
deploy/operator/api/CONVERSION.md
Formalized rules requiring conversion policy in exported v1alpha1 helpers only, fixed converter parameter order (src, dst, restored, save, ctx), explicit restrictions on non-conversion code reading annotations, and mutability requirements for converters.
Conversion Context Type Definitions
deploy/operator/api/v1alpha1/dynamocomponentdeployment_conversion.go, deploy/operator/api/v1alpha1/dynamographdeployment_conversion.go, deploy/operator/api/v1alpha1/shared_spec_conversion.go
Introduced exported context types carrying conversion configuration: DynamoComponentDeploymentConversionContext (ObjectName, IncludeOriginSplits), DynamoGraphDeploymentConversionContext (IncludeOriginSplits, SaveHubOrigin), DynamoComponentDeploymentSharedSpecConversionContext (IncludeOriginSplits, PodTemplateOrigin).
Shared Spec Conversion with Leaf Converter Functions
deploy/operator/api/v1alpha1/shared_spec_conversion.go
Exported shared-spec bidirectional converters for simple types (Multinode, ModelReference, TopologyConstraint, EPP, SharedMemory), scaling adapters, and per-experimental-feature converters (GPUMemoryService, Failover, Checkpoint); refactored pod-template build/decompose with context-driven field inclusion.
DynamoComponentDeployment Spec and Status Converters
deploy/operator/api/v1alpha1/dynamocomponentdeployment_conversion.go
Exported v1alpha1↔v1beta1 spec converters (ConvertFromDynamoComponentDeploymentSpec, ConvertToDynamoComponentDeploymentSpec) with new context parameter and shared-spec wiring; exported status converters (ConvertFromDynamoComponentDeploymentStatus, ConvertToDynamoComponentDeploymentStatus).
DynamoGraphDeployment Spec, Status, and Leaf Type Converters
deploy/operator/api/v1alpha1/dynamographdeployment_conversion.go
Exported v1alpha1↔v1beta1 spec converters with component list handling; exported status converters and leaf type converters for Restart, RestartStrategy, SpecTopologyConstraint, CheckpointStatus, RollingUpdateStatus, and ServiceReplicaStatus in both directions.
DGDR Preservation and Restoration Framework
deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion.go
Refactored ConvertTo and ConvertFrom to restore hub/spoke annotations from preservation keys, scrub internal conversion annotations, convert spec/status, then save only relevant preserved subsets back; introduced constants for hub/spoke/legacy annotation keys.
DGDR Spec Conversion with Blob Projection
deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion.go
Implemented spec conversion by parsing ProfilingConfig.Config JSON blob, projecting SLA/workload/model-cache/planner into typed v1beta1 fields, saving alpha-only remainder sparsely, handling hub-only overlays, and reconstructing blob during reverse conversion with blob utilities (strip typed keys, clone maps).
DGDR Status Conversion with Alpha/Hub Separation
deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion.go
Split status conversion into alpha-only (backend, profiling results, deployment status) and hub-only (phase, profiling phase, job name, pareto, deploymentInfo) subsets with phase-consistency helpers (dgdrAlphaStateHasHubPhase, dgdrAlphaStatusMatchesHubPhase) gating deployment-status restoration.
Fuzz Test Normalization for Round-Trip Validation
deploy/operator/api/roundtrip_fuzz_test.go
Added compatibility annotation ignorers; normalized alpha spec (profiling resources nil when empty, SLA shell for certain Workload configs, features nil when both Planner/Mocker nil); enforced beta status phase via fuzzing and tightened ProfilingResults pruning.
Shared Spec and DGDR Conversion Unit Tests
deploy/operator/api/v1alpha1/shared_spec_conversion_bugs_test.go, deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion_test.go, deploy/operator/api/v1alpha1/dynamographdeployment_conversion_test.go
Added edge-case tests for shared-memory quantity mapping and checkpoint identity nil-ness; updated DGDR tests to verify legacy annotation reading/writing, profiling-config blob preservation, hub-only round-tripping, and phase consistency; added comment updates for renamed functions.
Legacy Conversion Test Helpers for Backward Compatibility
deploy/operator/api/v1alpha1/dynamographdeploymentrequest_legacy_conversion_test.go
Provided legacy converter functions (legacyDGDRConvertToHubForTest, legacyDGDRConvertFromHubForTest) simulating pre-structural annotation-based conversions, plus per-blob-section helpers (SLA/workload, model cache, planner, profiling resources) enabling validation of backward compatibility between old and new converters.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 50.34%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check (✅ Passed): The title 'api/conversion: implement structural DGDR conversion' directly describes the main objective of the PR: implementing structural conversion for DynamoGraphDeploymentRequest (DGDR) using invariants from api/CONVERSION.md.
  • Linked Issues check (✅ Passed): Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Skipped because no linked issues were found for this pull request.
  • Description check (✅ Passed): The PR description is comprehensive and well-structured, covering all required sections with clear details about changes, compatibility, and testing.


@sttts sttts force-pushed the sttts-dgdr-structural branch from e264634 to 912d1ef Compare May 7, 2026 17:07
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deploy/operator/api/roundtrip_fuzz_test.go (1)

103-128: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Only ignore the legacy DGDR compatibility keys here.

This transformer drops annDGDRSpec and annDGDRStatus too, so the fuzz suite will no longer fail if a no-op round trip starts emitting structural preservation blobs. That weakens the sparse-save invariant this PR is trying to enforce.

Suggested tightening
 var ignoreDGDRCompatibilityAnnotations = cmpopts.AcyclicTransformer(
 	"ignoreDGDRCompatibilityAnnotations",
 	func(m metav1.ObjectMeta) metav1.ObjectMeta {
 		if len(m.Annotations) == 0 {
 			return m
 		}
 		annotations := make(map[string]string, len(m.Annotations))
 		for k, v := range m.Annotations {
 			annotations[k] = v
 		}
-		for k := range annotations {
-			if strings.HasPrefix(k, "nvidia.com/dgdr-") {
-				delete(annotations, k)
-			}
-		}
+		for _, k := range []string{
+			"nvidia.com/dgdr-config-map-ref",
+			"nvidia.com/dgdr-output-pvc",
+			"nvidia.com/dgdr-enable-gpu-discovery",
+			"nvidia.com/dgdr-deployment-overrides",
+			"nvidia.com/dgdr-profiling-config",
+			"nvidia.com/dgdr-status-backend",
+			"nvidia.com/dgdr-profiling-results",
+			"nvidia.com/dgdr-deployment-status",
+			"nvidia.com/dgdr-profiling-job-name",
+		} {
+			delete(annotations, k)
+		}
 		if len(annotations) == 0 {
 			m.Annotations = nil
 		} else {
 			m.Annotations = annotations
 		}

Also applies to: 494-501

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@deploy/operator/api/roundtrip_fuzz_test.go` around lines 103 - 128, The
transformer ignoreDGDRCompatibilityAnnotations is currently deleting more keys
than intended (it removes annDGDRSpec and annDGDRStatus), so update the deletion
logic inside the function that iterates annotations to only remove legacy keys
that strictly match the DGDR compatibility prefix (e.g., strings.HasPrefix(k,
"nvidia.com/dgdr-")) while explicitly preserving keys named annDGDRSpec and
annDGDRStatus; adjust the conditional that calls delete(annotations, k) to skip
those two identifier keys so the structural preservation blobs remain intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 336be1fd-cca2-4c05-8387-2d9320185d2b

📥 Commits

Reviewing files that changed from the base of the PR and between f5a9463 and e264634.

📒 Files selected for processing (10)
  • deploy/operator/api/CONVERSION.md
  • deploy/operator/api/roundtrip_fuzz_test.go
  • deploy/operator/api/v1alpha1/dynamocomponentdeployment_conversion.go
  • deploy/operator/api/v1alpha1/dynamographdeployment_conversion.go
  • deploy/operator/api/v1alpha1/dynamographdeployment_conversion_test.go
  • deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion.go
  • deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion_test.go
  • deploy/operator/api/v1alpha1/dynamographdeploymentrequest_legacy_conversion_test.go
  • deploy/operator/api/v1alpha1/shared_spec_conversion.go
  • deploy/operator/api/v1alpha1/shared_spec_conversion_bugs_test.go

Comment on lines +695 to +727
func saveDGDRHubOnlySpec(src *v1beta1.DynamoGraphDeploymentRequestSpec, save *v1beta1.DynamoGraphDeploymentRequestSpec) {
	if src == nil || save == nil {
		return
	}
	if src.Hardware != nil {
		save.Hardware = src.Hardware.DeepCopy()
	}
	if src.Workload != nil {
		save.Workload = &v1beta1.WorkloadSpec{}
		if src.Workload.Concurrency != nil {
			v := *src.Workload.Concurrency
			save.Workload.Concurrency = &v
		}
		if src.Workload.RequestRate != nil {
			v := *src.Workload.RequestRate
			save.Workload.RequestRate = &v
		}
	}
	if src.SLA != nil {
		save.SLA = &v1beta1.SLASpec{}
		if src.SLA.E2ELatency != nil {
			v := *src.SLA.E2ELatency
			save.SLA.E2ELatency = &v
		}
	}
	if src.Overrides != nil {
		saveDGDRHubOnlyOverrides(src.Overrides, save)
	}
	if src.Features != nil && src.Features.Mocker != nil && !src.Features.Mocker.Enabled {
		save.Features = &v1beta1.FeaturesSpec{Mocker: &v1beta1.MockerSpec{Enabled: false}}
	}
	save.SearchStrategy = src.SearchStrategy
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Keep the hub-only save sparse for Workload and SLA.

saveDGDRHubOnlySpec allocates save.Workload/save.SLA whenever the live hub object has those structs, even if Concurrency, RequestRate, and E2ELatency are all nil. That makes dgdrHubSpecSaveIsZero fail and writes annDGDRSpec with no hub-only payload.

Suggested tightening
 func saveDGDRHubOnlySpec(src *v1beta1.DynamoGraphDeploymentRequestSpec, save *v1beta1.DynamoGraphDeploymentRequestSpec) {
 	if src == nil || save == nil {
 		return
 	}
 	if src.Hardware != nil {
 		save.Hardware = src.Hardware.DeepCopy()
 	}
-	if src.Workload != nil {
-		save.Workload = &v1beta1.WorkloadSpec{}
-		if src.Workload.Concurrency != nil {
-			v := *src.Workload.Concurrency
-			save.Workload.Concurrency = &v
-		}
-		if src.Workload.RequestRate != nil {
-			v := *src.Workload.RequestRate
-			save.Workload.RequestRate = &v
-		}
+	if src.Workload != nil && (src.Workload.Concurrency != nil || src.Workload.RequestRate != nil) {
+		save.Workload = &v1beta1.WorkloadSpec{}
+		if src.Workload.Concurrency != nil {
+			v := *src.Workload.Concurrency
+			save.Workload.Concurrency = &v
+		}
+		if src.Workload.RequestRate != nil {
+			v := *src.Workload.RequestRate
+			save.Workload.RequestRate = &v
+		}
 	}
-	if src.SLA != nil {
+	if src.SLA != nil && src.SLA.E2ELatency != nil {
 		save.SLA = &v1beta1.SLASpec{}
-		if src.SLA.E2ELatency != nil {
-			v := *src.SLA.E2ELatency
-			save.SLA.E2ELatency = &v
-		}
+		v := *src.SLA.E2ELatency
+		save.SLA.E2ELatency = &v
 	}
 	if src.Overrides != nil {
 		saveDGDRHubOnlyOverrides(src.Overrides, save)
 	}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@deploy/operator/api/v1alpha1/dynamographdeploymentrequest_conversion.go`
around lines 695 - 727, saveDGDRHubOnlySpec currently allocates save.Workload
and save.SLA whenever src.Workload or src.SLA exist, even if their inner
pointers (Concurrency, RequestRate, E2ELatency) are all nil, causing
dgdrHubSpecSaveIsZero to misreport non-empty and write an empty hub payload;
change saveDGDRHubOnlySpec so it only creates and assigns save.Workload when at
least one of src.Workload.Concurrency or src.Workload.RequestRate is non-nil
(copy those fields as now), and only creates and assigns save.SLA when
src.SLA.E2ELatency is non-nil, leaving save.Workload/save.SLA nil otherwise;
keep other behavior (Hardware, Overrides, Features, SearchStrategy) unchanged.

sttts added 2 commits May 7, 2026 19:15
Use sparse value preservation annotations for DGDR fields that cannot be represented directly by the target API version.

Signed-off-by: Dr. Stefan Schimanski <sschimanski@nvidia.com>
Keep the pre-structural DGDR converter in tests so the legacy annotation format stays readable while downgrade compatibility is required.

Add fuzz coverage that compares legacy and structural DGDR conversion on legacy-compatible visible shapes after stripping only the new sparse preservation annotations.

Signed-off-by: Dr. Stefan Schimanski <sschimanski@nvidia.com>
@sttts sttts force-pushed the sttts-dgdr-structural branch from 912d1ef to 5b7b005 Compare May 7, 2026 17:15
	if src.Hardware != nil {
		save.Hardware = src.Hardware.DeepCopy()
	}
	if src.Workload != nil {

This allocates Workload/SLA whenever the source has those structs, even when only alpha-representable fields are present. We should only save and restore Workload when Concurrency or RequestRate is present, and SLA when E2ELatency is present.


func dgdrAlphaStatusMatchesHubPhase(state DGDRState, deployment *DeploymentStatus, phase v1beta1.DGDRPhase) bool {
	alphaPhase := dgdrStateToPhase(string(state), deployment)
	return alphaPhase == phase || phase == v1beta1.DGDRPhaseDeployed && alphaPhase == v1beta1.DGDRPhaseReady

I think this is too broad. It should be:

	if alphaPhase == phase {
		return true
	}
	return phase == v1beta1.DGDRPhaseDeployed && state == DGDRStateReady

That still preserves the intended lossy round trip:
hub Deployed -> alpha Ready -> hub Deployed
But it stops this bad stale restore:
hub Deployed -> alpha DeploymentDeleted -> hub Deployed

