diff --git a/content/en/docs/glossary-abbreviations.md b/content/en/docs/glossary-abbreviations.md index 86829c5b..3db20c34 100644 --- a/content/en/docs/glossary-abbreviations.md +++ b/content/en/docs/glossary-abbreviations.md @@ -45,7 +45,7 @@ services, applications, etc.) which: * abstracts configuration file structure and storage from operations that act upon the configuration data; clients manipulating configuration data don’t need to directly interact with storage (git, container images) -Source of definition and more information about Configuration as Data can be found in the [kpt documentation]({{% relref "/docs/porch/config-as-data.md" %}}). +Source of definition and more information about Configuration as Data can be found in the [porch documentation](https://docs.porch.nephio.org/docs/2_concepts/theory/#configuration-as-data-cad). ## Controller This term comes from Kubernetes where diff --git a/content/en/docs/guides/install-guides/install-on-byoc.md b/content/en/docs/guides/install-guides/install-on-byoc.md index db349282..68eb63a3 100644 --- a/content/en/docs/guides/install-guides/install-on-byoc.md +++ b/content/en/docs/guides/install-guides/install-on-byoc.md @@ -21,7 +21,7 @@ your environment and choices. - *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)on your workstation - *kpt* [installed](https://kpt.dev/installation/kpt-cli) on your workstation (version v1.0.0-beta.43 or later) - - *porchctl* [installed]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md" %}}) on your workstation + - *porchctl* [installed](https://docs.porch.nephio.org/docs/3_getting_started/installing-porchctl/) on your workstation - Sudo-less *docker*, *Podman*, or *nerdctl*. If using *Podman* or *nerdctl*, you must set the [`KPT_FN_RUNTIME`](https://kpt.dev/reference/cli/fn/render/?id=environment-variables) diff --git a/content/en/docs/guides/install-guides/install-on-multiple-vm.md b/content/en/docs/guides/install-guides/install-on-multiple-vm.md index ff619c6e..4ed0fd4b 100644 --- a/content/en/docs/guides/install-guides/install-on-multiple-vm.md +++ b/content/en/docs/guides/install-guides/install-on-multiple-vm.md @@ -19,7 +19,7 @@ weight: 7 * Kubernetes version 1.26+ * *kubectl* [installed ](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/) * *kpt* [installed](https://kpt.dev/installation/kpt-cli) (version v1.0.0-beta.43 or later) -* *porchctl* [installed]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md" %}}) on your workstation +* *porchctl* [installed](https://docs.porch.nephio.org/docs/3_getting_started/installing-porchctl/) on your workstation ## Installation of the management cluster diff --git a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md index 69950ae4..aa1543e2 100644 --- a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md +++ b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-3-fluxcd-wl.md @@ -64,7 +64,7 @@ oai-core-packages git Package false True https://github Once *Ready*, we can utilize blueprint packages from these upstream repositories. 
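 
 Before moving on, you can optionally confirm readiness from the command line. This is a minimal sketch; it assumes *kubectl* is pointed at the Management cluster and that the Repository resources expose a *Ready* condition, which the *READY* column above suggests:
 
 ```bash
 # Show the registered repositories and their READY column
 kubectl get repositories
 
 # Block until every registered repository reports Ready
 kubectl wait --for=condition=Ready repositories --all --timeout=300s
 ```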
-In this example, we will use the [Porch package variant controller]({{% relref "/docs/porch/package-variant.md#core-concepts" %}}) +In this example, we will use the [Porch package variant controller](https://docs.porch.nephio.org/docs/5_architecture_and_components/relevant_old_docs/package-variant/#core-concepts) to deploy the new Workload Cluster. This fully automates the onboarding process, including the auto approval and publishing of the new package. diff --git a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md index 033735dd..59ddc2a3 100644 --- a/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md +++ b/content/en/docs/guides/user-guides/usecase-user-guides/exercise-5-argocd-wl.md @@ -161,7 +161,7 @@ oai-core-packages git Package false True https://github Once *Ready*, we can utilize blueprint packages from these upstream repositories. -In this example, we will use the [Porch package variant controller]({{% relref "/docs/porch/package-variant.md#core-concepts" %}}) +In this example, we will use the [Porch package variant controller](https://docs.porch.nephio.org/docs/5_architecture_and_components/relevant_old_docs/package-variant/#core-concepts) to deploy the new Workload Cluster. This fully automates the onboarding process, including the auto approval and publishing of the new package. diff --git a/content/en/docs/neo-porch/10_security_and_compliance/_index.md b/content/en/docs/neo-porch/10_security_and_compliance/_index.md deleted file mode 100644 index dc9db4ae..00000000 --- a/content/en/docs/neo-porch/10_security_and_compliance/_index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Security & Compliance" -type: docs -weight: 10 -description: Security & Compliance Description ---- - -## Lorem Ipsum - -Example: - -- Authentication & authorization: how Porch ensures secure access -- Secrets / credentials handling (for Git, registries, etc.) -- Security considerations for function runner / templates / untrusted code -- TLS, encryption in transit / at rest if applicable diff --git a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/_index.md b/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/_index.md deleted file mode 100644 index 85d770ba..00000000 --- a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/extracted_from_old_porch_concepts.md b/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/extracted_from_old_porch_concepts.md deleted file mode 100644 index 518e31fb..00000000 --- a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/extracted_from_old_porch_concepts.md +++ /dev/null @@ -1,8 +0,0 @@ -## Open Issues/Questions - -### Deployment Rollouts & Orchestration - -**Not Yet Resolved** - -Cross-cluster rollouts and orchestration of deployment activity. For example, a package deployed by Config Sync in cluster -A, and only on success, the same (or a different) package deployed by Config Sync in cluster B. 
\ No newline at end of file diff --git a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/old.md b/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/old.md deleted file mode 100644 index b8a7cb88..00000000 --- a/content/en/docs/neo-porch/10_security_and_compliance/relevant_old_docs/old.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Old Content" -type: docs -weight: 2 -description: old content here ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Old Content Template
-
-old content here
diff --git a/content/en/docs/neo-porch/11_glossary/_index.md b/content/en/docs/neo-porch/11_glossary/_index.md
deleted file mode 100644
index 599f5f66..00000000
--- a/content/en/docs/neo-porch/11_glossary/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Glossary"
-type: docs
-weight: 11
-description: Glossary Description
----
-
-## Lorem Ipsum
-
-We need to refurbish the [old-glossary-guide](./relevant_old_docs/glossary-abbreviations.md)
\ No newline at end of file
diff --git a/content/en/docs/neo-porch/11_glossary/relevant_old_docs/_index.md b/content/en/docs/neo-porch/11_glossary/relevant_old_docs/_index.md
deleted file mode 100644
index 85d770ba..00000000
--- a/content/en/docs/neo-porch/11_glossary/relevant_old_docs/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "[### Old Docs ###]"
-type: docs
-weight: 2
-description:
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/11_glossary/relevant_old_docs/glossary-abbreviations.md b/content/en/docs/neo-porch/11_glossary/relevant_old_docs/glossary-abbreviations.md
deleted file mode 100644
index 4274b353..00000000
--- a/content/en/docs/neo-porch/11_glossary/relevant_old_docs/glossary-abbreviations.md
+++ /dev/null
@@ -1,397 +0,0 @@
----
-title: Glossary and Abbreviations
-description:
-weight: 10
----
-
-
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -We use many terms in our Nephio discussions, coming from different domains -including telecommunications, Kubernetes, configuration management, and our own -[Nephio-specific](#nephio-related-abbreviations) terms. This glossary is intended to help clarify our usage of -these terms. - - -## Config -See [Configuration](#configuration). - -## Config Injection -See [Injector](#injector). - -## Configuration -In Nephio, this *usually* refers to the Kubernetes resources used to provision -and manage network functions, their underlying infrastructure, and their -internal operation. Unfortunately this is a very general term and often is -overloaded with multiple meanings. - -Sometimes, folks will say *network config* or *workload config* to refer to the -internal configuration of the network functions. Consider that most network -functions today cannot be directly configured via Kubernetes resources. Instead, -they are configured via a proprietary configuration file, netconf, or even an -API. In that case, those terms usually refer to this proprietary configuration -language rather than Kubernetes resources. It is a goal for Nephio to help -vendors enable KRM-based management of this internal configuration, to allow -leveraging all the techniques we are building for KRM-based configuration (this -is part of the "Kubernetes Everywhere" principle). - -As a community, we should try to use a common set of terminology for different types of configuration. See -[docs#4](https://github.com/nephio-project/nephio/issues/266). - -## Configuration as Data -Configuration as Data is an approach to management of configuration (incl. configuration of infrastructure, policy, -services, applications, etc.) which: - -* makes configuration data the source of truth, stored separately from the live state -* uses a uniform, serializable data model to represent configuration -* separates code that acts on the configuration from the data and from packages / bundles of the data -* abstracts configuration file structure and storage from operations that act upon the configuration data; clients - manipulating configuration data don’t need to directly interact with storage (git, container images) - -Source of definition and more information about Configuration as Data can be found in the [kpt documentation]({{% relref "/docs/porch/config-as-data.md" %}}). - -## Controller -This term comes from Kubernetes where -[controller](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-controller) is defined as a control -loop that watches the intended and actual state of the cluster, and attempts to make changes as needed to make the -actual state match the intended state. More specifically, this typically refers to software that processes Kubernetes -Resources residing in the Kubernetes API server, and either transforms them into new resources, or calls to other APIs -that change the state of some entity external to the API server. For example, `kubelet` itself is a controller that -processes Pod resources to create and manage containers on a Node. - -*See also*: [Operator](#operator), [Injector](#injector), [KRM -function](#krm-function), [Specializer](#specializer) - -## Controller Manager -This term comes from Kubernetes and refers to an executable that bundles many -[controllers](#controller) into one binary. - -*See also*: [Controller](#controller), [Operator](#operator) - -## CR -See [Custom Resource](#custom-resource). - -## CRD -See [Custom Resource Definition](#custom-resource-definition). 
-
-## Custom Resource
-A Custom Resource (CR) is a resource in a Kubernetes API server that has a
-Group/Version/Kind. It was added to the API server via a
-[Custom Resource Definition](#custom-resource-definition). The
-relationship between a CR and a CRD is analogous to that of an object and a
-class in Object-Oriented Programming; the CRD defines the schema, and the CR is
-a particular instance.
-
-Note that it is common for people to say "CRD" when in fact they mean "CR", so
-be sure to ask for clarification if necessary.
-
-*See also*: [Custom Resource Definition](#custom-resource-definition)
-
-## Custom Resource Definition
-A [Custom Resource
-Definition (CRD)](https://kubernetes.io/docs/reference/glossary/?fundamental=true#term-CustomResourceDefinition)
-is a built-in Kubernetes resource used to define custom resources
-within a Kubernetes API server. It is used to extend the functionality
-of a Kubernetes API server by adding new resource types. The CRD, identified by
-its Group/Version/Kind, defines the schema associated with the resource, as well
-as the resource API endpoints.
-
-Note that it is common for people to say "CRD" when in fact they mean "CR", so
-be sure to ask for clarification if necessary.
-
-*See also*: [Custom Resource](#custom-resource)
-
-## Dehydration
-See [Hydration](#hydration).
-
-## DRY
-This is a common software engineering term that stands for [Don't Repeat
-Yourself](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself). DRY attempts
-to reduce repetition in software development. In the Kubernetes configuration
-management context, a good example is a Helm chart, which attempts to abstract
-the particular manifests for a given workload. A kpt package that is not yet
-ready to deploy is also an example of a DRY artifact. In general, any sort of
-"template" or "blueprint" is usually an attempt to capture some repeatable
-pattern, following this principle.
-
-*See also*: [Hydration](#hydration), [WET](#wet)
-
-## Fanout
-This term refers to the process of taking a package and customizing it across a
-series of targets. It is a type of [Variant Generation](#variant-generation) but
-more specific than that term. It is also an application of the [DRY](#dry)
-principle.
-
-Some examples:
- * A script that loops through an array, feeding values into Helm and rendering
-   individually specialized manifests for each entry in the array.
- * The PackageDeployment controller from the ONE Summit 2022 Workshop uses a
-   label selector to identify target clusters, then clones a kpt package for
-   each, creating one package revision per cluster.
- * The PackageVariantSet controller in Porch can be used to clone a package
-   across a set of repositories, or can create multiple clones of the same
-   package with different names in a single repository, based on arbitrary
-   object selectors.
-
-*See also*: [Hydration](#hydration), [Variant](#variant), [Variant Generation](#variant-generation)
-
-## Hydration
-A play on [DRY](#dry) and [WET](#wet), this is the process by which a DRY
-artifact becomes ready for deployment. A familiar example is rendering a Helm
-chart. A lot of the effort in the configuration management aspects of Nephio is
-spent on making the hydration process scalable, collaborative, and manageable in
-Day 2 and beyond, all of which are challenges with current techniques.
-
-Hydration may be *out-of-place*, where the source material (e.g., the Helm
-chart) is separate from the output of the hydration process (the manifests). 
-This is probably the most familiar type of hydration, used by Helm and -Kustomize, for example. Think of it as a pipeline with an input artifact, input -values, and output artifacts. - -Hydration may also be *in-place*, where modifications are directly written to -the manifests in question. There is no separate input artifact and output -artifact. Rather, you may have a starting artifact, some operations you perform -on that artifact to achieve your goal, but you store the results of those -operations directly in the same artifact. Utilization of a version control -system such as Git is critical in this case. This is the kind of hydration we -typically use when operating on kpt packages. - -With out-of-place hydration, the author of the template has to figure out, -upfront, all the possible outcomes of the hydration process. Then, they have to -make available inputs to the pipeline in order to make all of those different -outcomes achievable. This leads to "over-parameterization" - where effectively -every option possible in the outputs becomes an option in the input. At that -point, you have mostly *moved* complexity rather than *reduced* complexity. -In-place hydration can help with the over-parameterization, as values that are -rarely changed by users can be edited in-place. - -While related, *DRY* and *WET* are not exactly the same concepts as *in-place* and -*out-of-place* hydration. The former two refer to principles, whereas the latter -two are more about the operational pipeline. - -Note that occasionally people say "dehydration" when they mean "hydration", -likely due to the fact that "dehydration" is a more familiar word in common -language. Please offer folks some leeway in this, especially since we have many -non-native English speakers. - -*See also*: [DRY](#dry), [WET](#wet) - -## Injection -See [Injector](#injector). - -## Injector -We introduced this term during the Nephio [ONE Summit 2022 -Workshop](https://github.com/nephio-project/one-summit-22-workshop#packages). -However, it has been renamed to [specializer](#specializer). - -There is still the concept of an injector, but it is limited to the -PackageVariant and PackageVariantSet controllers. This process allows the author -of the PackageVariant(Set) to configure the controller to pull in a resource -from the management cluster, and copy it into the package. This allows us to -combine upstream ([DRY](#dry)) configuration with cluster-specific configuration -based upon the target cluster. - -## kpt -[Kpt](https://kpt.dev) is an open source tool for managing bundles of Kubernetes -resource configurations, called kpt [packages](#package), using the -[Configuration as Data](#configuration-as-data) methodology. - -The `kpt` command-line tool allows pulling, pushing, cloning and otherwise -managing packages stored in version control repositories (Git or OCI), as well -as execution of [KRM functions](#krm-function) to perform consistent and -repeatable modifications to package resources. - -[Porch](#porch) provides these package management, manipulation, and lifecycle -operations in a Kubernetes-based API, allowing automation of these operations -using standard Kubernetes controller techniques. - -## kpt Function -See [KRM Function](#krm-function). - -## KRM -See [Kubernetes Resource Model](#kubernetes-resource-model). 
- -## KRM Function -A [KRM -Function](https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md) -is an executable that takes Kubernetes resources as inputs, and produces -Kubernetes resources as outputs. The function may add, remove, or modify the -input resources to produce the outputs. This is similar to a Unix pipeline, but -with KRM on the input and output, rather than simple streams. - -Generally, best practices suggest KRM functions be hermetic (that is, they do -not access the outside world). - -In terms of the specification linked above, Kustomize, kpt, and Porch are all -*orchestrators*. - -*See also*: [Controller](#controller), [kpt](#kpt), [Porch](#porch) - -## Kubernetes Resource Model -The [Kubernetes Resource Model -(KRM)](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md) -is the underlying declarative, intent-based API model and machinery for -Kubernetes. It is the general name for what you likely think of when you hear -"Kubernetes API". Additional background: -* [Kubernetes API Overview](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) -* [Kubernetes API - Concepts](https://kubernetes.io/docs/reference/using-api/api-concepts/) - -## Manifest -A file (or files) containing a representation of resources. Typically YAML -files, but it could also be JSON or some other format. - -## Mutation -The act of changing the configuration. There are different processes that can be -used for mutation, including controllers, specializers, KRM functions, web -hooks, and manual in-place edits. - -*See also*: [Validation](#validation) - -## Operator -An operator is a software component - usually a collection of one or more -[controller managers](#controller-manager) - that manages a particular type of -workload. For example, a set of Kubernetes controllers to manage MySQL instances -would be an operator. - -Speaking loosely, [controller](#controller) and operator are often used -interchangeably, though an operator always refers to code managing CRs rather -than Kubernetes built-in types. - -See [CNFs and -Operators](https://docs.google.com/document/d/1Le8TUgr0dXix7fvq7BqMSY3rgeEwaxW7mEf9G72itBI/edit?usp=sharing) -for a thorough discussion. - -## Package -Generically, a logical grouping of Kubernetes resources or templated resources, -for example representing a particular workload or network function installation. - -For kpt packages, this specifically means well-formed Kubernetes resources along -with a Kptfile. See the kpt [package -documentation](https://kpt.dev/book/02-concepts/01-packages). - -This could also refer to a Helm chart, though generally we mean "kpt package" -when we say "package". - -## Package Revision -This specifically refers to the Porch `PackageRevision` resource. Porch adds -opinionated versioning and lifecycle management to packages, beyond what the -baseline `kpt` CLI expects. See the [Porch -documentation](https://kpt.dev/book/08-package-orchestration/04-package-authoring) -for more information. - -## Porch - -[Porch](https://kpt.dev/book/08-package-orchestration/) is "kpt-as-a-service", -providing opinionated package management, manipulation, and lifecycle -operations in a Kubernetes-based API. This allows automation of these -operations using standard Kubernetes controller techniques. - -Short for **P**ackage **Orch**estration. 
-
-*See also*: [kpt](#kpt)
-
-## Resource
-
-A [Kubernetes
-term](https://kubernetes.io/docs/reference/using-api/api-concepts/#standard-api-terminology)
-referring to a specific object stored in the API server,
-although we also use it to refer to the external representation of that object
-(for example, text in a YAML file).
-
-Also see [REST](https://en.wikipedia.org/wiki/Representational_state_transfer).
-
-## Specializer
-This refers to a software component that runs in the Nephio Management cluster,
-and could be considered a type of [controller](#controller). However, it
-specifically watches for `PackageRevision` resources in a Draft state, and
-checks for the conditions on those resources. When it finds
-unsatisfied conditions of the type it handles, the specializer will
-[mutate](#mutation) (modify) the Draft package by adding or
-changing resources.
-
-For example, the IPAM specializer monitors package revision drafts for unresolved
-IP address claims. When it sees one, it takes information from the claim and
-uses it to allocate an IP address from the IP address management system. It
-writes the result back into the draft package, where a KRM function can process
-the result and copy ([propagate](#value-propagation)) it to the correct
-resources in the package.
-
-## Validation
-The act of verifying that the configuration is syntactically correct, and that it
-matches a set of rules (or policies). Those rules or policies may be for
-internal consistency (e.g., matching Deployment and Service label selectors),
-or they may be organizationally related (e.g., all Deployments must contain a
-label indicating the cost allocation center).
-
-## Value Propagation
-The same value in a configuration is often used in more than one place. *Value
-propagation* is the technique of setting or generating the value once, and then
-copying (or propagating) it to different places in the configuration. For
-example, setting a Helm value in the *values.yaml* file, and then having it used
-in multiple places across different resources.
-
-## Variant
-A *variant* is a modified version of a package. Sometimes it is the output of
-the hydration process, particularly when using out-of-place hydration. For
-example, if you use the same Helm chart with different inputs to create
-per-cluster workloads, you are generating variants.
-
-In Nephio, we use kpt packages to help keep an association between a package and
-the variants of that package. When you clone a kpt package, an association is
-maintained with the upstream package. Every deployable variant of a package is a
-clone of the original, upstream package. This assists greatly in Day 2
-operations; when you update the original package, you can identify all variants
-and merge the updates from the upstream into the downstream. This behavior is
-automated via the PackageVariant controller.
-
-## Variant Generation
-The process of creating [variants](#variant), typically in an automated way.
-Variants could be created across different dimensions - for example, you could
-create a package per cluster. Alternatively, you may create a variant per
-environment - for example, development, staging, and production variants.
-
-Different methods may be warranted depending on the reason for your variants. In
-the ONE Summit 2022 Workshop, the PackageDeployment controller generated
-variants based upon the target clusters. 
The Porch PackageVariantSet allows more
-general-purpose generation of variants, based upon an explicit list, a label
-selector on repositories, or an arbitrary object selector. As we develop Nephio,
-we may build new types of variant generators, and may even compose them (for
-example, to produce variants that are affected by both environment and cluster).
-
-## WET
-
-This term, which we use as an acronym for "Write Every Time", comes from
-[software engineering](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself), and is a somewhat pejorative term in
-contrast to [DRY](#dry). However, in the context of *configuration-as-data*, rather than *code*, the idea of storing the
-configuration as fully-formed data enables automation and the use of data-management techniques to manage the
-configuration at scale.
-
-*See also*: [DRY](#dry), [Hydration](#hydration)
-
-## Workload
-
-A workload is any application running on Kubernetes, including network
-functions.
-
-## Nephio Related Abbreviations
-
-* PV: Package Variant
-* PVS: Package Variant Set
-
-## 5G 3GPP Related Abbreviations
-
-* NF: Network Function
-* AMF: Access and Mobility Management Function
-* SMF: Session Management Function
-* UPF: User Plane Function
-* AUSF: Authentication Server Function
-* NRF: Network Repository Function
-* UDR: Unified Data Repository
-* UDM: Unified Data Management
-* DNN: Data Network Name
-
-## Kubernetes Networking
-
-* NAD: Network Attachment Definition
diff --git a/content/en/docs/neo-porch/12_contributing/_index.md b/content/en/docs/neo-porch/12_contributing/_index.md
deleted file mode 100644
index 8a567b6f..00000000
--- a/content/en/docs/neo-porch/12_contributing/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Contributing"
-type: docs
-weight: 12
-description: Contributing Description
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/_index.md b/content/en/docs/neo-porch/12_contributing/relevant_old_docs/_index.md
deleted file mode 100644
index 85d770ba..00000000
--- a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "[### Old Docs ###]"
-type: docs
-weight: 2
-description:
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/dev-process.md b/content/en/docs/neo-porch/12_contributing/relevant_old_docs/dev-process.md
deleted file mode 100644
index 71343107..00000000
--- a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/dev-process.md
+++ /dev/null
@@ -1,285 +0,0 @@
----
-title: "Development process"
-type: docs
-weight: 3
-description:
----
-
-
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-After you have run the setup script as explained in the [environment setup]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}}) you are ready to start the actual development of porch. That process involves, among other things, a combination of the tasks explained below.
-
-## Build and deploy all of porch
-
-The following command will rebuild all of porch and deploy all of its components into your porch-test kind cluster (created in the [environment setup]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}})):
-
-```bash
-make run-in-kind
-```
-
-## Troubleshoot the porch API server
-
-There are several ways to develop, test and troubleshoot the porch API server. In this chapter we describe an option where all other parts of porch are running in the porch-test kind cluster, but the porch API server is running locally on your machine, typically in an IDE.
-
-The following command will rebuild and deploy porch, except for the porch API server component, and also prepare your environment for connecting the local API server with the in-cluster components.
-
-```bash
-make run-in-kind-no-server
-```
-
-After issuing this command you are expected to start the porch API server locally on your machine (outside of the kind cluster), probably in your IDE, potentially in a debugger.
-
-### Configure VS Code to run the Porch (API) server
-
-The simplest way to run the porch API server is to launch it in a VS Code IDE, as described in the following process:
-
-1. Open the *porch.code-workspace* file in the root of the porch git repository.
-
-1. Edit your local *.vscode/launch.json* file as follows: Change the `--kubeconfig` argument of the *Launch Server*
-   configuration to point to a *KUBECONFIG* file that is set to the kind cluster as the current context.
-
-{{% alert title="Note" color="primary" %}}
-
-  If your current *KUBECONFIG* environment variable already points to the porch-test kind cluster, then you don't have to touch anything.

-  {{% /alert %}}
-
-1. Launch the Porch server locally in VS Code by selecting the *Launch Server* configuration on the VS Code
-   *Run and Debug* window. For more information please refer to the
-   [VS Code debugging documentation](https://code.visualstudio.com/docs/editor/debugging).
-
-### Check that the API server is serving requests
-
-```bash
-curl https://localhost:4443/apis/porch.kpt.dev/v1alpha1 -k
-```
-
-
-Sample output - -```json -{ - "kind": "APIResourceList", - "apiVersion": "v1", - "groupVersion": "porch.kpt.dev/v1alpha1", - "resources": [ - { - "name": "packagerevisionresources", - "singularName": "", - "namespaced": true, - "kind": "PackageRevisionResources", - "verbs": [ - "get", - "list", - "patch", - "update" - ] - }, - { - "name": "packagerevisions", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update", - "watch" - ] - }, - { - "name": "packagerevisions/approval", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "get", - "patch", - "update" - ] - }, - { - "name": "packages", - "singularName": "", - "namespaced": true, - "kind": "Package", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update" - ] - } - ] -} -``` - -
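-
-If your *KUBECONFIG* points at the porch-test kind cluster, you can also cross-check from the cluster side that the aggregated Porch API is registered and served. This is a minimal sketch, assuming the make target has wired the in-cluster APIService to your locally running server:
-
-```bash
-# Verify that the aggregated APIService for Porch is registered and available
-kubectl get apiservice v1alpha1.porch.kpt.dev
-
-# List the resource kinds served under the porch.kpt.dev group
-kubectl api-resources --api-group=porch.kpt.dev
-```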
-
-## Troubleshoot the porch controllers
-
-There are several ways to develop, test and troubleshoot the porch controllers (i.e. *PackageVariant*, *PackageVariantSet*). In this chapter we describe an option where all other parts of porch are running in the porch-test kind cluster, but the process hosting all porch controllers is running locally on your machine.
-
-The following command will rebuild and deploy porch, except for the porch-controllers component:
-
-```bash
-make run-in-kind-no-controllers
-```
-
-After issuing this command you are expected to start the porch controllers process locally on your machine (outside of
-the kind cluster), probably in your IDE, potentially in a debugger. If you are using VS Code you can use the
-**Launch Controllers** configuration that is defined in the
-[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json) file of the porch git repository.
-
-## Run the unit tests
-
-```bash
-make test
-```
-
-## Run the end-to-end tests
-
-To run the end-to-end tests against the Kubernetes API server that *KUBECONFIG* points to, simply issue:
-
-```bash
-make test-e2e
-```
-
-To run the end-to-end tests against a clean deployment, issue:
-
-```bash
-make test-e2e-clean
-```
-
-This will:
-- create a brand new kind cluster,
-- rebuild porch,
-- deploy the newly built porch into the new cluster,
-- run the end-to-end tests against it,
-- delete the kind cluster if all tests passed.
-
-This process closely mimics the end-to-end tests that are run against your PR on GitHub.
-
-In order to run just one particular test case you can execute something similar to this:
-
-```bash
-E2E=1 go test -v ./test/e2e -run TestE2E/PorchSuite/TestPackageRevisionInMultipleNamespaces
-```
-
-or this:
-
-```bash
-E2E=1 go test -v ./test/e2e/cli -run TestPorch/rpkg-lifecycle
-```
-
-To run the end-to-end tests on your local machine towards a Porch server running in VS Code, be aware of the following if the tests fail to run:
-- Set the actual load balancer IP address for the function runner in your *launch.json*, for example
-  "--function-runner=172.18.255.201:9445"
-- Clear the git cache of your Porch workspace before every test run, for example
-  `rm -fr /.cache/git/*`
-
-## Run the load test
-
-A script is provided to run a Porch load test against the Kubernetes API server that *KUBECONFIG* points to.
-
-```bash
-porch % scripts/run-load-test.sh -h
-
-run-load-test.sh - runs a load test on porch
-
- usage: run-load-test.sh [-options]
-
- options
-  -h                        - this help message
-  -s hostname               - the host name of the git server for porch git repositories
-  -r repo-count             - the number of repositories to create during the test, a positive integer
-  -p package-count          - the number of packages to create in each repo during the test, a positive integer
-  -e package-revision-count - the number of packagerevisions to create on each package during the test, a positive integer
-  -f result-file            - the file where the raw results will be stored, defaults to load_test_results.txt
-  -o repo-result-file       - the file where the results by reop will be stored, defaults to load_test_repo_results.csv
-  -l log-file               - the file where the test log will be stored, defaults to load_test.log
-  -y                        - dirty mode, do not clean up after tests
-```
-
-The load test creates, copies, proposes and approves `repo-count` repositories, each with `package-count` packages
-with `package-revision-count` package revisions created for each package. The script initializes or copies each
-package revision in turn. 
It adds a pipeline with two "apply-replacements" kpt functions to the Kptfile of each
-package revision. It updates the package revision, and then proposes and approves it.
-
-The load test script creates repositories on the git server at `hostname`, so its URL will be `http://nephio:secret@hostname:3000/nephio/`.
-The script expects a git server to be running at that URL.
-
-The `result-file` is a text file containing the time it takes for a package to move from being initialized or
-copied to being approved. It also records the time it takes to propose-delete and delete each package revision.
-
-The `repo-result-file` is a CSV file that tabulates the results from `result-file` into columns for each repository created.
-
-For example:
-
-```bash
-porch % scripts/run-load-test.sh -s 172.18.255.200 -r 4 -p 2 -e 3
-running load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/
- 4 repositories will be created
- 2 packages in each repo
- 3 pacakge revisions in each package
- results will be stored in "load_test_results.txt"
- repo results will be stored in "load_test_repo_results.csv"
- the log will be stored in "load_test.log"
-load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/ completed
-```
-
-In the load test above, a total of 24 package revisions were created and deleted.
-
-|REPO-1-TEST|REPO-1-TIME|REPO-2-TEST|REPO-2-TIME|REPO-3-TEST|REPO-3-TIME|REPO-4-TEST|REPO-4-TIME|
-|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
-|1:1|1.951846|1:1|1.922723|1:1|2.019615|1:1|1.992746|
-|1:2|1.762657|1:2|1.864306|1:2|1.873962|1:2|1.846436|
-|1:3|1.807281|1:3|1.930068|1:3|1.860375|1:3|1.881649|
-|2:1|1.829227|2:1|1.904997|2:1|1.956160|2:1|1.988209|
-|2:2|1.803494|2:2|1.912169|2:2|1.915905|2:2|1.902103|
-|2:3|1.816716|2:3|1.948171|2:3|1.931904|2:3|1.952902|
-|del-6a0b3…|.918442|del-e757b…|.904881|del-d39cd…|.944850|del-6222f…|.911060|
-|del-378a4…|.831815|del-9211c…|.866386|del-316a5…|.898638|del-31d9f…|.895919|
-|del-89073…|.874867|del-97d45…|.876450|del-830e0…|.905896|del-7d411…|.866947|
-|del-4756f…|.850528|del-c95db…|.903599|del-4c450…|.884997|del-587f8…|.842529|
-|del-9860a…|.887118|del-9c1b9…|1.018930|del-66ae…|.929470|del-6ae3d…|.905359|
-|del-a11e5…|.845834|del-71540…|.899935|del-8d1e8…|.891296|del-9e2bb…|.864382|
-|del-1d789…|.851242|del-ffdc3…|.897862|del-75e45…|.852323|del-82eef…|.916630|
-|del-8ae7e…|.872696|del-58097…|.894618|del-d164f…|.852093|del-9da24…|.849919|
-
-## Switching between tasks
-
-The `make run-in-kind`, `make run-in-kind-no-server` and `make run-in-kind-no-controller` commands can be executed right after each other. No clean-up or restart is required between them. The make scripts will intelligently make the necessary changes in your current porch deployment in kind (e.g. removing or re-adding the porch API server).
-
-You can always find the configuration of your current deployment in *.build/deploy*.
-
-You can always use `make test` and `make test-e2e` to test your current setup, no matter which of the above detailed configurations it is.
-
-## Getting to know the make targets
-
-Try: `make help`
-
-## Restart with a clean slate
-
-Sometimes the development kind cluster gets cluttered and you may experience weird behavior from porch. 
-In this case you might want to restart with a clean slate.
-First, delete the development kind cluster with the following command:
-
-```bash
-kind delete cluster --name porch-test
-```
-
-Then re-run the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh):
-
-```bash
-./scripts/setup-dev-env.sh
-```
-
-Finally, deploy porch into the kind cluster by any of the methods explained above.
-
diff --git a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/environment-setup-vm.md b/content/en/docs/neo-porch/12_contributing/relevant_old_docs/environment-setup-vm.md
deleted file mode 100644
index 30cc7224..00000000
--- a/content/en/docs/neo-porch/12_contributing/relevant_old_docs/environment-setup-vm.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: "Setting up a VM environment"
-type: docs
-weight: 2
-description:
----
-
-
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-This tutorial gives short instructions on how to set up a development environment for Porch on a Nephio VM. It outlines the steps to
-get a [kind](https://kind.sigs.k8s.io/) cluster up and running to which a Porch instance running in Visual Studio Code
-can connect and interact. If you are not familiar with how porch works, it is highly recommended that you go
-through the [Starting with Porch tutorial]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) before going through this one.
-
-## Setting up the environment
-
-1. The first step is to install the Nephio sandbox environment on your VM using the procedure described in
-[Installation on a single VM]({{% relref "/docs/guides/install-guides/install-on-single-vm.md" %}}). In short, log onto your VM and give the command
-below:
-
-```bash
-wget -O - https://raw.githubusercontent.com/nephio-project/test-infra/main/e2e/provision/init.sh | \
-sudo NEPHIO_DEBUG=false \
-     NEPHIO_BRANCH=main \
-     NEPHIO_USER=ubuntu \
-     bash
-```
-
-2. Set up your VM for development (optional but recommended step).
-
-```bash
-echo '' >> ~/.bashrc
-echo 'source <(kubectl completion bash)' >> ~/.bashrc
-echo 'source <(kpt completion bash)' >> ~/.bashrc
-echo 'source <(porchctl completion bash)' >> ~/.bashrc
-echo '' >> ~/.bashrc
-echo 'alias h=history' >> ~/.bashrc
-echo 'alias k=kubectl' >> ~/.bashrc
-echo '' >> ~/.bashrc
-echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
-
-sudo usermod -a -G syslog ubuntu
-sudo usermod -a -G docker ubuntu
-```
-
-3. Log out of your VM and log in again so that the group changes on the *ubuntu* user are picked up.
-
-```bash
-> exit
-
-> ssh ubuntu@thevmhostname
-> groups
-ubuntu adm dialout cdrom floppy sudo audio dip video plugdev syslog netdev lxd docker
-```
-
-4. Install *go* so that you can build Porch on the VM:
-
-```bash
-wget -O - https://go.dev/dl/go1.22.5.linux-amd64.tar.gz | sudo tar -C /usr/local -zxvf -
-
-echo '' >> ~/.profile
-echo '# set PATH for go' >> ~/.profile
-echo 'if [ -d "/usr/local/go" ]' >> ~/.profile
-echo 'then' >> ~/.profile
-echo '  PATH="/usr/local/go/bin:$PATH"' >> ~/.profile
-echo 'fi' >> ~/.profile
-```
-
-5. Log out of your VM and log in again so that *go* is added to your path. Verify that *go* is in the path:
-
-```bash
-> exit
-
-> ssh ubuntu@thevmhostname
-
-> go version
-go version go1.22.5 linux/amd64
-```
-
-6. Install *go delve* for debugging on the VM:
-
-```bash
-go install -v github.com/go-delve/delve/cmd/dlv@latest
-```
-
-7. Clone Porch onto the VM:
-
-```bash
-mkdir -p git/github/nephio-project
-cd ~/git/github/nephio-project
-
-# Clone porch
-git clone https://github.com/nephio-project/porch.git
-cd porch
-```
-
-8. Change the Kind cluster name in the Porch Makefile to match the Kind cluster name on the VM:
-
-```bash
-sed -i "s/^KIND_CONTEXT_NAME ?= porch-test$/KIND_CONTEXT_NAME ?= "$(kind get clusters)"/" Makefile
-```
-
-9. Expose the Porch function runner so that the Porch server running in VS Code can access it:
-
-```bash
-kubectl expose svc -n porch-system function-runner --name=xfunction-runner --type=LoadBalancer --load-balancer-ip='172.18.0.202'
-```
-
-10. Set the *KUBECONFIG* and *FUNCTION_RUNNER_IP* environment variables in the *.profile* file.
    You **must** do this step before connecting with VS Code because VS Code caches the environment on the server. If you
    want to change the values of these variables subsequently, you must restart the VM server. 
-
-    ```bash
-    echo '' >> ~/.profile
-    echo 'export KUBECONFIG="/home/ubuntu/.kube/config"' >> ~/.profile
-    echo 'export FUNCTION_RUNNER_IP="172.18.0.202"' >> ~/.profile
-    ```
-
-You have now set up the VM so that it can be used for remote debugging of Porch.
-
-## Setting up VS Code
-
-Use the [VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
-plugin to debug from VS Code running on your local machine towards the VM. Detailed documentation
-on the plugin and its use is available on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page in the VS Code
-documentation.
-
-1. Use the **Connect to a remote host** instructions on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page to connect to your VM.
-
-2. Click **Open Folder** and browse to the Porch code on the VM, */home/ubuntu/git/github/nephio-project/porch* in this
-   case:
-
-![Browse to Porch code](/static/images/porch/contributor/01_VSCodeOpenPorchFolder.png)
-
-3. VS Code now opens the Porch project on the VM.
-
-![Porch code is open](/static/images/porch/contributor/02_VSCodeConnectedPorch.png)
-
-4. We now need to install support for *go* debugging in VS Code. Trigger this by launching a debug configuration in
-   VS Code.
-   Here we use the **Launch Override Server** configuration.
-
-![Launch the Override Server VS Code debug configuration](/static/images/porch/contributor/03_LaunchOverrideServer.png)
-
-5. VS Code complains that *go* debugging is not supported. Click the **Install go Extension** button.
-
-![VS Code go debugging not supported message](/static/images/porch/contributor/04_GoDebugNotSupportedPopup.png)
-
-6. VS Code automatically presents the Go debug plugin for installation. Click the **Install** button.
-
-![VS Code Go debugging plugin selected](/static/images/porch/contributor/05_GoExtensionAutoSelected.png)
-
-7. VS Code installs the plugin.
-
-![VS Code Go debugging plugin installed](/static/images/porch/contributor/06_GoExtensionInstalled.png)
-
-You have now set up VS Code so that it can be used for remote debugging of Porch.
-
-## Getting started with actual development
-
-You can find a detailed description of the actual development process [here]({{% relref "/docs/porch/contributors-guide/dev-process.md" %}}).
diff --git a/content/en/docs/neo-porch/1_overview/_index.md b/content/en/docs/neo-porch/1_overview/_index.md
deleted file mode 100644
index d08a77bb..00000000
--- a/content/en/docs/neo-porch/1_overview/_index.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: "Overview"
-type: docs
-weight: -1
-description: Overview of Porch
----
-
-## What is Porch
-
-Porch is a specialization and orchestration tool for managing distributed systems. It helps GitOps engineers,
-developers, integrators and telecom operators to manage complex systems in a cloud native environment. Porch runs
-[kpt](https://kpt.dev/) at scale for package specialization. It provides collaboration and governance enablers and
-integration with GitOps for the packages.
-
-## Goals and scope of Porch
-
-The goal of Porch is to orchestrate kpt packages in a configuration-as-code and GitOps context. It provides an API and a CLI as enablers to build lifecycle management, package repository management, package discovery, and package authoring.
-
-Porch does not itself run mutation pipelines, package specialization, or basic package manipulation in a GitOps context; it uses kpt for these operations.
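-
-As a sketch of what these enablers look like in practice (the repository and package names below are hypothetical), a package lifecycle driven through the porchctl CLI might look like this:
-
-```bash
-# List package revisions in the registered repositories
-porchctl rpkg get
-
-# Clone an upstream package into a new draft revision in the
-# (hypothetical) deployments repository
-porchctl rpkg clone blueprints-example-v1 my-package --repository deployments
-
-# Walk the draft through its lifecycle: Draft -> Proposed -> Published
-porchctl rpkg propose deployments-my-package-v1
-porchctl rpkg approve deployments-my-package-v1
-```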
- diff --git a/content/en/docs/neo-porch/1_overview/new.md b/content/en/docs/neo-porch/1_overview/new.md deleted file mode 100644 index ec65d3ea..00000000 --- a/content/en/docs/neo-porch/1_overview/new.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "New Content Template" -type: docs -weight: 2 -description: new content here ---- - -## New Content Template - -new content here diff --git a/content/en/docs/neo-porch/1_overview/relevant_old_docs/_index.md b/content/en/docs/neo-porch/1_overview/relevant_old_docs/_index.md deleted file mode 100644 index ca21f73f..00000000 --- a/content/en/docs/neo-porch/1_overview/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: -1 -description: ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/1_overview/relevant_old_docs/config-as-data.md b/content/en/docs/neo-porch/1_overview/relevant_old_docs/config-as-data.md deleted file mode 100644 index f40c7855..00000000 --- a/content/en/docs/neo-porch/1_overview/relevant_old_docs/config-as-data.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: "Configuration as Data" -type: docs -weight: 2 -description: ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-This document provides the background context for Package Orchestration, which is further
-elaborated in a dedicated [document]({{% relref "/docs/porch/package-orchestration.md" %}}).
-
-## Configuration as data (CaD)
-
-CaD is an approach to the management of configuration. It includes the configuration of
-infrastructure, policy, services, applications, and so on. CaD involves the following:
-
-* Making configuration data the source of truth, stored separately from the live state.
-* Using a uniform, serializable data model to represent the configuration.
-* Separating the code that acts on the configuration from the data and from packages/bundles of
-  data.
-* Abstracting the configuration file structure and storage from the operations that act on the
-  configuration data. Clients manipulating the configuration data do not need to interact directly
-  with the storage (such as git, container images, and so on).
-
-![CaD Overview](/static/images/porch/CaD-Overview.svg)
-
-### Key principles
-
-A system based on CaD should observe the following key principles:
-
-* Separate *handling of secret data* (credentials, certificates, etc.) out to a secret-focused storage system, such as
-  [cert-manager](https://cert-manager.io/).
-* Maintain a *versioned history of configuration changes* to bundles of related configuration data.
-* Maintain *uniformity and consistency of the configuration format*, including type metadata, and rely on this to enable
-  pattern-based operations on the configuration data (along the lines of [duck typing](https://en.wikipedia.org/wiki/Duck_typing)).
-* *Separate the configuration data from its schemas*.
-  * Rely on the schema information to:
-    * define strongly-typed operations.
-    * disambiguate data structures and other variations within the schema.
-* *Decouple configuration abstractions* from collections of configuration data.
-* Represent *abstractions of configuration generators* as data with schemas, as with other configuration data.
-* Implement *get/list functionality* to find, filter, query, select, and/or validate:
-  * configuration data.
-  * code (functions) that can operate on the resource types that make up configuration data.
-* Separate the *actuation* (reconciliation of configuration data with live state) from the *intermediate processing*
-  (validation and transformation) of the configuration data.
-  * Actuation should be conducted according to the declarative data model.
-* Prefer *transforming configuration data* to generating it wholesale, especially for value propagation,
-  * except in the case of dramatic changes (for example, an expansion >10x).
-* *Decouple generation of transformation input data* from propagation.
-* Obtain deployment context inputs from *well-defined "provider context" objects*.
-* *Identifiers and references* should be declarative.
-* *Link the live state back to the configuration as source of truth*.
-
-## Kubernetes Resource Model configuration as data (KRM CaD)
-
-The kpt implementation of the Configuration as Data approach ([kpt][kpt], [Config Sync][Config Sync], and [Package Orchestration][Porch])
-is built on the foundation of the [Kubernetes Resource Model][krm] (KRM).
-
-{{% alert title="Note" color="primary" %}}
-
-KRM is not a hard requirement of CaD, just as Python or Go templates, or Jinja, are not specifically requirements for
-[IaC](https://en.wikipedia.org/wiki/Infrastructure_as_code). 
However, the choice of a different fundamental format for -configuration data would necessitate the implementation of adapters for all types of infrastructure and applications -configured, including Kubernetes, CRDs, and GCP resources. Likewise, choosing another configuration format would require -the redesign of several of the configuration management mechanisms that have already been designed for KRM, such as three-way -merge, structural merge patch, schema descriptions, resource metadata, references, status conventions, and so on. - -{{% /alert %}} - - -**KRM CaD**, then, is a specific approach to implementing *Configuration as Data*. It uses and builds on the following -existing concepts: - -* [KRM][krm] as the configuration serialization data model. -* [Kptfile](https://kpt.dev/reference/schema/kptfile/) to store kpt package metadata. -* [ResourceList](https://kpt.dev/reference/schema/resource-list/) as a serialized package wire format. -* A kpt function with input → output in the form `ResourceList → ResultList` as the foundational, composable unit of code - with which to conduct package manipulation. - - {{% alert title="Note" color="primary" %}} - - Other forms of code can also manipulate packages, such as UIs and custom algorithms not necessarily packaged and used - as kpt functions. - - {{% /alert %}} - - -KRM CaD provides the following basic use cases: - -* Create a new (empty) kpt package. -* Load a serialized package from a repository (as a ResourceList). Examples of a repository may be one or more of the - following: - * Local HDD - * Git repository - * OCI - * Cloud storage -* Save a serialized package (as a ResourceList) to a package repository. -* Evaluate a function on a serialized package (ResourceList). -* [Render](https://kpt.dev/book/04-using-functions/#declarative-function-execution) a package (in the process evaluating - the functions declared within the package itself). -* Create a new (empty) package. -* Fork (or clone) an existing package from one package repository (called *upstream*) to another (called *downstream*). -* Delete a package from a repository. -* Associate a version with a package's condition at a particular point in time. - * Publish a package with an assigned version, guaranteeing the immutability of the package at that version. -* Incorporate changes from a newly-published version of an upstream package into a new version of a downstream package - (three-way merge). -* Revert a package to a prior version. - -### Configuration values - -The CaD approach enables the following key values, which other configuration management approaches provide to a lesser -extent or not at all: - -* Capabilities enabled by version-control (unlike imperative tools, such as UI and CLI, that directly modify the live - state via APIs): - * Versioning - * Undo - * Configuration history audits - * Review/approval flows - * Previewing before deployment - * Validation and safety checks - * Constraint-based policy enforcement - * Disaster recovery -* *Detecting and remedying drift* in the live state via continuous reconciliation, whether by direct re-application or - targeted mutations of the sources of truth. -* *Exporting state* to reusable blueprints without needing to create and manage templates manually. -* *Bulk-changing* configuration data across multiple sources of truth. -* *Configuration injection* to address horizontally-applied variations. 
-  * Maintaining the resulting configuration variants without needing to invest effort either in parameterisation frameworks
-    or in manually constructing and maintaining patches.
-* *Merging* of multiple sources of truth.
-
-* *Simplified configuration authoring* using a variety of sources and editing methods.
-* *What-you-see-is-what-you-get (WYSIWYG) interaction* with the configuration using a simple data serialization format,
-  rather than a code-like format.
-* *Layering of interoperable interface surfaces* (notably GUIs) over the declarative configuration mechanisms, rather than
-  forcing choices between exclusive alternatives (exclusively, UI/CLI or IaC initially, followed by exclusively UI/CLI or
-  exclusively IaC).
-  * The ability to apply UX techniques to simplify configuration authoring and viewing.
-* *Cooperative configuration editing* by human and automated agents (for example, for security remediation, which is usually
-  implemented against live-state APIs).
-
-* *Combination of multiple* independent configuration transformations.
-* *Reusability of configuration transformation functions* and function code across multiple bodies of configuration data
-  containing the same resource types.
-  * Reducing the frequency of changes to the existing transformation code.
-* *Language-agnostic implementation* of configuration transformations, including both programming and scripting approaches.
-* *Separation of roles* - for example, between developer (configuration author) and non-developer (configuration user).
-* *Improved security on sources of truth*, including access control, validation, and invariant enforcement.
-
-#### Related articles
-
-For more information about Configuration as Data and the Kubernetes Resource Model, visit the following links:
-
-* [Rationale for kpt](https://kpt.dev/guides/rationale)
-* [Understanding Configuration as Data](https://cloud.google.com/blog/products/containers-kubernetes/understanding-configuration-as-data-in-kubernetes)
-  blog post
-* [Kubernetes Resource Model](https://cloud.google.com/blog/topics/developers-practitioners/build-platform-krm-part-1-whats-platform)
-  blog post series
-
-
-## Core Components of Configuration as Data Implementation
-
-The Package Orchestration CaD implementation consists of a set of components and APIs enabling the following broad use
-cases:
-
-* Register repositories (Git, OCI) containing kpt packages.
-* Automatically discover existing packages in registered repositories.
-* Manage package revision lifecycle, including:
-  * Authoring and versioning of a package through creation, mutation, and deletion of package revision drafts.
-  * A 2-step approval process where a draft package revision is first proposed for publishing, and only published on a
-    second (approval) operation.
-* Manage package lifecycle - operations such as:
-  * Package upgrade - assisted or automated rollout of a downstream (cloned) package when a new revision of the upstream
-    package is published.
-  * Package rollback to a previous package revision.
-* Deploy packages from deployment repositories and observe their deployment status.
-* Role-based access control to Porch APIs via Kubernetes standard roles.
-
-### Deployment mechanism
-
-The deployment mechanism is responsible for deploying packages from a repository and affecting the live state. 
The "default" -deployment mechanism tested with the CaD implementation is [Config Sync][Config Sync], but since the configuration is stored -in repositories of standard types, the exact software used for deployment is less of a concern. - -Here we highlight some key attributes of the deployment mechanism and its integration within the CaD paradigm: - -* _Published_ packages in a deployment repository are considered ready to be deployed. -* _Draft_ packages need to be identified in such a way that Config Sync can easily avoid deploying them. -* Config Sync supports deploying individual packages and whole repositories. For Git specifically, this translates to a - requirement to be able to specify repository, branch/tag/ref, and directory when instructing Config Sync to deploy a - package. -* Config Sync needs to be able to pin to specific versions of deployable packages in order to orchestrate rollouts and - rollbacks. This means it must be possible to *get* a specific package revision. -* Config Sync needs to be able to discover when new package versions are available for deployment. diff --git a/content/en/docs/neo-porch/1_overview/relevant_old_docs/old-porch-overview.md b/content/en/docs/neo-porch/1_overview/relevant_old_docs/old-porch-overview.md deleted file mode 100644 index fe0d3b0d..00000000 --- a/content/en/docs/neo-porch/1_overview/relevant_old_docs/old-porch-overview.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: "Old Porch Overview" -type: docs -weight: 3 -description: Old Porch Overview ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -## Old Porch Overview - -Porch is “kpt-as-a-service”, providing opinionated package management, manipulation, and lifecycle operations in a -Kubernetes-based API. This allows automation of these operations using standard Kubernetes controller techniques. - -"Porch" is short for "Package Orchestration". - -## Porch in the Nephio architecture, history and outlook - -Porch is a key component of the Nephio architecture. It was originally developed in the -[kpt](https://github.com/kptdev/kpt) project. When kpt was donated to the [CNCF](https://www.cncf.io/projects/kpt/) it -was decided that Porch would not be part of the kpt project and the code was donated to Nephio. - -Porch is maintained by the Nephio community. Porch will evolve with Nephio and -its architecture and implementation will be updated to meet the functional and non-functional requirements on it -and on Nephio as a whole. diff --git a/content/en/docs/neo-porch/2_concepts/_index.md b/content/en/docs/neo-porch/2_concepts/_index.md deleted file mode 100644 index 83989cfd..00000000 --- a/content/en/docs/neo-porch/2_concepts/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Concepts" -type: docs -weight: 1 ---- - -The Concepts section helps you learn about the abstractions Porch uses to store and orchestrate your kpt packages, with -high-level descriptions of the repository management, package orchestration, and package revision lifecycle management -use cases. diff --git a/content/en/docs/neo-porch/2_concepts/architectural.md b/content/en/docs/neo-porch/2_concepts/architectural.md deleted file mode 100644 index 415a9335..00000000 --- a/content/en/docs/neo-porch/2_concepts/architectural.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: "Architectural Concepts" -type: docs -weight: 4 -description: | - The architectual concepts behind Porch; Porch microservices and their primary components. ---- - -### High-Level CaD Architecture - -At the high level, the CaD functionality comprises: - -* A generic (i.e. not task-specific) package orchestration service implementing: - * package revision authoring and lifecycle management. - * package repository management. - -* [porchctl]({{% relref "/docs/neo-porch/7_cli_api/porchctl.md" %}}) - a Git-native, schema-aware, extensible client-side - tool for managing KRM packages in Porch. -* A GitOps-based deployment mechanism (for example, [Config Sync](https://cloud.google.com/anthos-config-management/docs/config-sync-overview) - or [FluxCD](https://fluxcd.io/)), which distributes and deploys configuration, and provides observability of the status - of deployed resources. -* A task-specific UI supporting repository management, package discovery, authoring, and lifecycle. - -![CaD Core Architecture](/static/images/porch/CaD-Core-Architecture.svg) - -### Porch Architecture - -Porch consists of several microservices, designed to be hosted in a [Kubernetes](https://kubernetes.io/) cluster. - -The overall architecture is shown below, including additional components external to Porch (Kubernetes API server and -deployment mechanism). - -![Porch Architecture](/static/images/porch/Porch-Architecture.drawio.svg) - -In addition to satisfying requirements highlighted above, the focus of the architecture is to: - -* establish clear components and interfaces. -* support low latency in package authoring operations. 
- -The primary Porch components are: - -#### Porch Server - -The Porch server is implemented as a [Kubernetes extension API server](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) -which works with the Kubernetes API aggregation layer. The benefits of this approach are: - -* seamless integration with the well-defined Kubernetes API style -* availability of generated clients for use in code -* integration with existing Kubernetes ecosystem and tools such as `kubectl` CLI, - [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) -* avoids requirement to open another network port to access a separate endpoint running inside k8s cluster - * this is a distinct advantage over GRPC which was initially considered as an alternative approach - -The Porch server serves the primary Kubernetes required for basic package authoring and lifeycle management, including: - -* A `Repository` [custom resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), - which supports repository registration. -* For each package revision (see [Package Revisions]({{% relref "/docs/neo-porch/2_concepts/fundamentals.md#package-revisions" %}})): - * `PackageRevision` - represents the *metadata* of the package revision stored in a repository. - * `PackageRevisionResources` - represents the *file contents* of the package revision. - -{{% alert color="primary" %}} -Note that each package revision is represented by both a `PackageRevision` and a `PackageRevisionResources` - each presents -a different view (or [representation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#differing-representations)) -of the same underlying package revision. -{{% /alert %}} - -The **Porch server** itself includes the following key components: - -* The *aggregated API server*, which implements the integration into the main Kubernetes API server and serves API requests - for the `PackageRevision` and `PackageRevisionResources` resources. -* Package orchestration *engine*, which implements the package lifecycle operations and package mutation workflows. -* *CaD Library*, which implements specific package manipulation algorithms such as package rendering (evaluation of package's - function *pipeline*), initialization of a new package, etc. The CaD Library is a fork of `kpt` to allow Porch to reuse - the `kpt` algorithms and fulfil its overarching use case to be "kpt as a service". -* *Package cache*, which enables: - * local caching to allow package lifecycle and content manipulation operations to be executed within the Porch server - with minimal latency. - * abstracting package operations upward so they can be used without having to take account of the underlying storage - repository software mechanism (Git or OCI). -* *Repository adapters* for Git and OCI, which implement the specific logic of interacting with each repository type. -* *Function Runner runtime*, which evaluates individual [KRM functions][functions] (or delegates to the dedicated - [function runner](#function-runner)), incorporating a multi-tier cache of functions to support low-latency evaluation. - -#### Function Runner - -The **Function Runner** is a separate microservice responsible for evaluating [KRM functions][functions]. It exposes a -[GRPC](https://grpc.io/) endpoint which enables evaluating a specified kpt function on a provided configuration package. 
- -GRPC was chosen for the function runner service because the [benefits of an API server](#porch-server) that prompted its -use for the Porch server do not apply in this case. The function runner is an internal microservice, an implementation -detail not exposed to external callers, which makes GRPC perfectly suitable. - -The function runner maintains a cache of functions to support low-latency function evaluation. It achieves this through -two mechanisms available to it for evaluation of a function: - -* The **Executable Evaluation** mechanism executes the function directly inside the `function-runner` pod through shell-based - invocation of a function's binary executable. This applies only to a selected subset of popular functions, whose binaries - are baked into the `function-runner` image itself at compile-time to form a sort of pre-cache. -* The **Pod Evaluation** mechanism is the fallback when the invoked function is not one of those packaged in the `function-runner` - image for the Executable Evaluation approach. The `function-runner` pod spawns a separate *function pod*, based on the - image of the invoked function, along with a corresponding front-end service. Once the pod and service are ready, the - exposed GRPC endpoint is invoked to evaluate the function with the package contents as input. Once a function pod completes - evaluation and returns the result to the `function-runner` pod, the function pod is kept in existence temporarily so it - can be re-used quickly as a cache hit. After a pre-configured period of disuse (default 30 minutes), the function runner - terminates the function pod and its service, to recreate them from the start on the next invocation of that function. - -#### CaD (kpt) Operations - -The [kpt CLI](https://kpt.dev/installation/kpt-cli) already implements the fundamental package manipulation algorithms -(explained [in kpt documentation](https://kpt.dev/book/03-packages/)) in order to provide its command line user experience. - -The same set of primitive operations form the foundational building blocks of the package orchestration service. Further, -Porch combines these blocks into higher-level operations (for example, Porch renders packages automatically on changes; -future versions will support bulk operations such as upgrade of multiple packages, etc.). - - - -[functions]: https://kpt.dev/book/02-concepts/#functions \ No newline at end of file diff --git a/content/en/docs/neo-porch/2_concepts/fundamentals.md b/content/en/docs/neo-porch/2_concepts/fundamentals.md deleted file mode 100644 index 1be57b9d..00000000 --- a/content/en/docs/neo-porch/2_concepts/fundamentals.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -title: "Porch Fundamentals" -type: docs -weight: 2 -description: | - The fundamental topics necessary to understand Porch as "package orchestration" on a conceptual level. ---- - -## Core Concepts - -This section introduces some core concepts of Porch's package orchestration: - -* ***Package***: A package, in Porch, is specifically a [kpt package](https://kpt.dev/book/02-concepts/#packages) - a - collection of related YAML files including one or more **[KRM resources](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)** - and a [Kptfile](https://kpt.dev/reference/schema/kptfile). - {{% alert title="N.B." color="warning" %}} - - There is no such thing as a "Porch Package" - rather, **Porch stores and orchestrates kpt packages**. 
- - {{% /alert %}} - -* ***Repository***: This is a version-control [repository](#repositories) used to store packages. For example, a - [Git](https://git-scm.org/) or (experimentally) [OCI](https://github.com/opencontainers/image-spec/blob/main/spec.md) - repository. - -* ***Package Revision***: This refers to the state of a package as of a specific version. Packages are sequentially -[versioned](#package-revisions) such that multiple versions of the same package may exist in a repository. Each successive -version is considered a *package revision*. - -* ***Lifecycle***: This refers to a package revision's current stage in the process of its orchestration by Porch. A package -revision may be in one of several lifecycle stages: - * ***Draft*** - the package is being authored (created or edited). The package contents can be modified but the package - revision is not ready to be used/deployed. Previously-published package revisions, reflecting earlier states of the - package files, can still be deployed. - * ***Proposed*** - intermediate state. The package's author has proposed that the package revision be published as a new - version of the package with its files in the current state. - * ***Published*** - the changes to the package have been approved and the package is ready to be used. Published packages - may be deployed, cloned to a new package, or edited to continue development. - * ***DeletionProposed*** - intermediate state. A user has proposed that this package revision be deleted from the - repository. - -* ***Functions***: specifically, [KRM functions](https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md). - Functions can be added to a package's kptfile [pipeline](https://kpt.dev/book/04-using-functions/#declarative-function-execution) - in the course of modifying a package revision in *Draft* state. Porch runs the pipeline on the package contents, mutating - or validating the KRM resource files. - -* ***Package Variant*** and ***Package Variant Set***: these Kubernetes objects represent higher levels of package revision - automation. Package variants can be used to automatically track an upstream package (at a specific revision) and manage - cloning it to one or several downstream packages, as well as preparing new downstream package revisions when a new revision - of the upstream package is published. Package variant sets enable the same behaviour for package variants themselves. - Use of package variants involves advanced concepts worthy of their own separate document: - [Package Variants]({{% relref "/docs/neo-porch/5_architecture_and_components/controllers/pkg-variant-controllers.md" %}}) - - -In addition, some terms may be used with specific qualifiers, frequently enough to count them as sub-concepts: - -* ***Upstream package revision***: a package revision of an ***upstream package*** may be *cloned*, producing a new, -***downstream package*** and associated package revision. The downstream package maintains a link (URL) to the upstream -package revision from which it was cloned. [More details]({{% relref "/docs/neo-porch/5_architecture_and_components/relevant_old_docs/extracted_from_old_porch_concepts.md#package-relationships---upstream-and-downstream" %}}) - -* ***Deployment repository***: a repository can be designated as a deployment repository. 
Package revisions in *Published* -state in a deployment repository are considered [deployment-ready]({{% relref "/docs/neo-porch/2_concepts/theory.md#deployment-mechanism" %}}). - -* ***Package revision workspace***, or `workspaceName`: a user-defined string and element of package revision names automatically -assembled by Porch. Used to uniquely identify a package revision while in *Draft* state, especially to distinguish between -multiple drafts undergoing concurrent development. **N.B.**: a package revision workspace does not refer to any distinct -"folder" or "space", but only to the in-development draft. The same workspace name may be assigned to multiple package -revisions **of different packages** and **does not of itself indicate any connection between the packages**. - -## Core Concepts Elaborated - -Some of the concepts, briefly introduced above, bear examination in greater detail. - -### Repositories - -{{% alert title="Note" color="primary" %}} - -Currently, Porch primarily integrates with Git repositories. OCI support is available, but it is experimental and possibly -unstable. - -{{% /alert %}} - -A *Porch repository* represents Porch's connection to a Git repository containing kpt packages. It allows a Porch user to -read existing packages from Git, and to author new or existing packages with Porch's create, clone, and modify operations. -Once a repository is *registered* (created in Porch), Porch performs an initial read operation against it, scanning its -file structure to discover kpt packages, and building a local cache to improve the performance of subsequent operations. - -Any repositories registered must be capable of storing the following minimum data and metadata: -* kpt packages' file contents. -* Package versions. -* Sufficient metadata associated with the package to capture: - * Package dependency relationships (upstream - downstream). - * Package lifecycle state (draft, proposed, published). - * Package purpose (base package). - * Customer-defined attributes (optionally). - -### Package Revisions - -In a manner similar to Git commits, Porch allows the user to modify packages (including creating or cloning new ones) on -a basis of incremental releases. A new version of a package is not released immediately, but starts out as a draft, allowing -the user to develop it in safety before it is proposed and published as a standalone version of the package. Porch enables -this by modelling each successive version (whether published or still under development) as a *package revision*. - -Package revisions are sequentially versioned using a simple integer sequence. This enables the following important capabilities: - -* Compare any two versions of a package to establish "newer than", equal, or "older than" relationships. -* Automatically assign new version numbers on publication. -* [Optimistic concurrency](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) of package revision development, - by comparing version numbers. -* Identify the latest (most recently published) package revision. -* A simple versioning model which easily supports automation. - -Porch's get/list operations provide these versions to the user in a package revision's `revision` field. - -#### Latest Package Revision - -The "latest" package revision is the one most recently published, corresponding to the numerically-greatest revision number. 
-For additional ease of use, the PackageRevision resource type applies a Kubernetes label to the latest package revision -when read using the `porchctl` or `kubectl` CLI: `kpt.dev/latest-revision: "true"` diff --git a/content/en/docs/neo-porch/2_concepts/package-lifecycle.md b/content/en/docs/neo-porch/2_concepts/package-lifecycle.md deleted file mode 100644 index 7c0ff271..00000000 --- a/content/en/docs/neo-porch/2_concepts/package-lifecycle.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: "Package Lifecycle" -type: docs -weight: 3 -description: ---- - -## Package Lifecycle Workflow - -Packages managed by Porch progress through several states, from creation to final publication. This workflow ensures that packages are reviewed and approved before they are published and consumed. - -The typical lifecycle of a package is as follows: - -1. **Draft:** A user initializes a new package or clones an existing one. The package is in a `Draft` state, allowing the user to make changes freely in their local workspace. -2. **Proposed:** Once the changes are ready for review, the user pushes the package, which transitions it to the `Proposed` state. In this stage, the package is available for review by other team members. -3. **Review and Approval:** - * **Approved:** If the package is approved, it is ready to be published. - * **Rejected:** If changes are required, the package is rejected. The user must pull the package, make the necessary modifications, and re-propose it for another review. -4. **Published:** After approval, the package is published. Published packages are considered stable and are available for deployment and consumption by other systems or clusters. They typically become the "latest" version of a package. - -![Flowchart](/static/images/porch/flowchart.drawio.svg) diff --git a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/_index.md b/content/en/docs/neo-porch/2_concepts/relevant_old_docs/_index.md deleted file mode 100644 index e8db45d7..00000000 --- a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: -1 -description: Porch Concepts relevant old docs ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/config-as-data.md b/content/en/docs/neo-porch/2_concepts/relevant_old_docs/config-as-data.md deleted file mode 100644 index f40c7855..00000000 --- a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/config-as-data.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: "Configuration as Data" -type: docs -weight: 2 -description: ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -This document provides the background context for Package Orchestration, which is further -elaborated in a dedicated [document]({{% relref "/docs/porch/package-orchestration.md" %}}). - -## Configuration as data (CaD) - -CaD is an approach to the management of configuration. It includes the configuration of -infrastructure, policy, services, applications, and so on. CaD performs the following actions: - -* Making configuration data the source of truth, stored separately from the live state. -* Using a uniform, serializable data model to represent the configuration. -* Separating the code that acts on the configuration from the data and from packages/bundles of - data. -* Abstracting the configuration file structure and storage from the operations that act on the - configuration data. Clients manipulating the configuration data do not need to interact directly - with the storage (such as git, container images, and so on). - -![CaD Overview](/static/images/porch/CaD-Overview.svg) - -### Key principles - -A system based on CaD should observe the following key principles: - -* Separate *handling of secret data* (credentials, certificates, etc.) out to a secret-focused storage system, such as - ([cert-manager](https://cert-manager.io/)). -* Maintain a *versioned history of configuration changes* to bundles of related configuration data. -* Maintain *uniformity and consistency of the configuration format*, including type metadata, and rely on this to enable - pattern-based operations on the configuration data (along the lines of [duck typing](https://en.wikipedia.org/wiki/Duck_typing)). -* *Separate the configuration data from its schemas*. - * Rely on the schema information to: - * define strongly-typed operations. - * disambiguate data structures and other variations within the schema. -* *Decouple configuration abstractions* from collections of configuration data. -* Represent *abstractions of configuration generators* as data with schemas, as with other configuration data. -* Implement *get/list functionality* to find, filter, query, select, and/or validate: - * configuration data. - * code (functions) that can operate on the resource types that make up configuration data. -* Separate the *actuation* (reconciliation of configuration data with live state) from the *intermediate processing* - (validation and transformation) of the configuration data. - * Actuation should be conducted according to the declarative data model. -* Prefer *transforming configuration data* to generating it wholesale, especially for value propagation - * except in the case of dramatic changes (for example, an expansion >10x). -* *Decouple generation of transformation input data* from propagation. -* Obtain deployment context inputs from *well-defined "provider context" objects*. -* *Identifiers and references* should be declarative. -* *Link the live state back to the configuration as source of truth*. - -## Kubernetes Resouce Model configuration as data (KRM CaD) - -The kpt implementation of the Configuration as Data approach ([kpt][kpt], [Config Sync][Config Sync], and [Package Orchestration][Porch]) -is built on the foundation of the [Kubernetes Resource Model][krm] (KRM). - -{{% alert title="Note" color="primary" %}} - -KRM is not a hard requirement of CaD, just as Python or Go templates, or Jinja, are not specifically requirements for -[IaC](https://en.wikipedia.org/wiki/Infrastructure_as_code). 
However, the choice of a different fundamental format for -configuration data would necessitate the implementation of adapters for all types of infrastructure and applications -configured, including Kubernetes, CRDs, and GCP resources. Likewise, choosing another configuration format would require -the redesign of several of the configuration management mechanisms that have already been designed for KRM, such as three-way -merge, structural merge patch, schema descriptions, resource metadata, references, status conventions, and so on. - -{{% /alert %}} - - -**KRM CaD**, then, is a specific approach to implementing *Configuration as Data*. It uses and builds on the following -existing concepts: - -* [KRM][krm] as the configuration serialization data model. -* [Kptfile](https://kpt.dev/reference/schema/kptfile/) to store kpt package metadata. -* [ResourceList](https://kpt.dev/reference/schema/resource-list/) as a serialized package wire format. -* A kpt function with input → output in the form `ResourceList → ResultList` as the foundational, composable unit of code - with which to conduct package manipulation. - - {{% alert title="Note" color="primary" %}} - - Other forms of code can also manipulate packages, such as UIs and custom algorithms not necessarily packaged and used - as kpt functions. - - {{% /alert %}} - - -KRM CaD provides the following basic use cases: - -* Create a new (empty) kpt package. -* Load a serialized package from a repository (as a ResourceList). Examples of a repository may be one or more of the - following: - * Local HDD - * Git repository - * OCI - * Cloud storage -* Save a serialized package (as a ResourceList) to a package repository. -* Evaluate a function on a serialized package (ResourceList). -* [Render](https://kpt.dev/book/04-using-functions/#declarative-function-execution) a package (in the process evaluating - the functions declared within the package itself). -* Create a new (empty) package. -* Fork (or clone) an existing package from one package repository (called *upstream*) to another (called *downstream*). -* Delete a package from a repository. -* Associate a version with a package's condition at a particular point in time. - * Publish a package with an assigned version, guaranteeing the immutability of the package at that version. -* Incorporate changes from a newly-published version of an upstream package into a new version of a downstream package - (three-way merge). -* Revert a package to a prior version. - -### Configuration values - -The CaD approach enables the following key values, which other configuration management approaches provide to a lesser -extent or not at all: - -* Capabilities enabled by version-control (unlike imperative tools, such as UI and CLI, that directly modify the live - state via APIs): - * Versioning - * Undo - * Configuration history audits - * Review/approval flows - * Previewing before deployment - * Validation and safety checks - * Constraint-based policy enforcement - * Disaster recovery -* *Detecting and remedying drift* in the live state via continuous reconciliation, whether by direct re-application or - targeted mutations of the sources of truth. -* *Exporting state* to reusable blueprints without needing to create and manage templates manually. -* *Bulk-changing* configuration data across multiple sources of truth. -* *Configuration injection* to address horizontally-applied variations. 
- * Maintaining the resulting configuration variants without needing to invest effort wither in parameterisation frameworks - or in manually constructing and maintaining patches. -* *Merging* of multiple sources of truth. - -* *Simplified configuration authoring* using a variety of sources and editing methods. -* *What-you-see-is-what-you-get (WYSIWYG) interaction* with the configuration using a simple data serialization formation, - rather than a code-like format. -* *Layering of interoperable interface surfaces* (notably GUIs) over the declarative configuration mechanisms, rather than - forcing choices between exclusive alternatives (exclusively, UI/CLI or IaC initially, followed by exclusively UI/CLI or - exclusively IaC). - * The ability to apply UX techniques to simplify configuration authoring and viewing. -* *Cooperative configuration editing* by human and automated agents (for example, for security remediation, which is usually - implemented against live-state APIs). - -* *Combination of multiple* independent configuration transformations. -* *Reusability of configuration transformation functions* and function code across multiple bodies of configuration data - containing the same resource types. - * Reducing the frequency of changes to the existing transformation code. -* *Language-agnostic implementation* of configuration transformations, including both programming and scripting approaches. -* *Separation of roles* - for example, between developer (configuration author) and non-developer (configuration user). -* *Improved security on sources of truth*, including access control, validation, and invariant enforcement. - -#### Related articles - -For more information about Configuration as Data and the Kubernetes Resource Model, visit the following links: - -* [Rationale for kpt](https://kpt.dev/guides/rationale) -* [Understanding Configuration as Data](https://cloud.google.com/blog/products/containers-kubernetes/understanding-configuration-as-data-in-kubernetes) - blog post -* [Kubernetes Resource Model](https://cloud.google.com/blog/topics/developers-practitioners/build-platform-krm-part-1-whats-platform) - blog post series - - -## Core Components of Configuration as Data Implementation - -The Package Orchestration CaD implementation consists of a set of components and APIs enabling the following broad use -cases: - -* Register repositories (Git, OCI) containing kpt packages. -* Automatically discover existing packages in registered repositories. -* Manage package revision lifecycle, including: - * Authoring and versioning of a package through creation, mutation, and deletion of package revision drafts. - * A 2-step approval process where a draft package revision is first proposed for publishing, and only published on a - second (approval) operation. -* Manage package lifecycle - operations such as: - * Package upgrade - assisted or automated rollout of a downstream (cloned) package when a new revision of the upstream - package is published. - * Package rollback to a previous package revision. -* Deploy packages from deployment repositories and observe their deployment status. -* Role-based access control to Porch APIs via Kubernetes standard roles. - -### Deployment mechanism - -The deployment mechanism is responsible for deploying packages from a repository and affecting the live state. 
The "default" -deployment mechanism tested with the CaD implementation is [Config Sync][Config Sync], but since the configuration is stored -in repositories of standard types, the exact software used for deployment is less of a concern. - -Here we highlight some key attributes of the deployment mechanism and its integration within the CaD paradigm: - -* _Published_ packages in a deployment repository are considered ready to be deployed. -* _Draft_ packages need to be identified in such a way that Config Sync can easily avoid deploying them. -* Config Sync supports deploying individual packages and whole repositories. For Git specifically, this translates to a - requirement to be able to specify repository, branch/tag/ref, and directory when instructing Config Sync to deploy a - package. -* Config Sync needs to be able to pin to specific versions of deployable packages in order to orchestrate rollouts and - rollbacks. This means it must be possible to *get* a specific package revision. -* Config Sync needs to be able to discover when new package versions are available for deployment. diff --git a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/package-orchestration.md b/content/en/docs/neo-porch/2_concepts/relevant_old_docs/package-orchestration.md deleted file mode 100644 index a8a42656..00000000 --- a/content/en/docs/neo-porch/2_concepts/relevant_old_docs/package-orchestration.md +++ /dev/null @@ -1,468 +0,0 @@ ---- -title: "Package Orchestration" -type: docs -weight: 2 -description: ---- - -Customers who want to take advantage of the benefits of [Configuration as Data]({{% relref "/docs/porch/config-as-data.md" %}}) -can do so today using the [kpt](https://kpt.dev) CLI and the kpt function ecosystem, including its -[functions catalog](https://catalog.kpt.dev/). Package authoring is possible using a variety of -editors with [YAML](https://yaml.org/) support. That said, a UI experience of -what-you-see-is-what-you-get (WYSIWYG) package authoring which supports a broader package lifecycle, -including package authoring with *guardrails*, approval workflows, package deployment, and more, is -not yet available. - -The *Package Orchestration* (Porch) service is a part of the Nephio implementation of the -Configuration as Data approach. It offers an API and a CLI that enable you to build the UI -experience for supporting the configuration lifecycle. - -## Core concepts - -This section briefly describes core concepts of package orchestration: - -***Package***: A package is a collection of related configuration files containing configurations -of [KRM][krm] **resources**. Specifically, configuration packages are [kpt packages](https://kpt.dev/book/02-concepts/#packages). -Packages are sequentially ***versioned***. Multiple versions of the same package may exist in a -([repository](#package-versioning)). A package may have a link (URL) to an -***upstream package*** (a specific version) ([from which it was cloned](#package-relationships)) . Packages go through three lifecycle stages: ***Draft***, ***Proposed***, and ***Published***: - - * ***Draft***: The package is being created or edited. The contents of the package can be - modified; however, the package is not ready to be used (or deployed). - * ***Proposed***: The author of the package has proposed that the package be published. - * ***Published***: The changes to the package have been approved and the package is ready to be - used. Published packages can be deployed or cloned. - -***Repository***: The repository stores packages. 
[git][] and [OCI][oci] are two examples of a -([repository](#repositories)). A repository can be designated as a -***deployment repository***. *Published* packages in a deployment repository are considered to be -([deployment-ready](#deployment)). -***Functions***: Functions (specifically, [KRM functions][krm functions]) can be applied to -packages to mutate or validate the resources within them. Functions can be applied to a -package to create specific package mutations while editing a package draft. Functions can be added -to a package's Kptfile [pipeline][]. - -## Core components of the Configuration as Data (CAD) implementation - -The core implementation of Configuration as Data, or *CaD Core*, is a set of components and APIs -which collectively enable the following: - -* Registration of the repositories (Git, OCI) containing kpt packages and the discovery of packages. -* Management of package lifecycles. This includes the authoring, versioning, deletion, creation, -and mutations of a package draft, the process of proposing the package draft, and the publishing of -the approved package. -* Package lifecycle operations, such as the following: - - * The assisted or automated rollout of a package upgrade when a new version of the upstream - package version becomes available (the three-way merge). - * The rollback of a package to its previous version. - -* The deployment of the packages from the deployment repositories, and the observability of their -deployment status. -* A permission model that allows role-based access control (RBAC). - -### High-level architecture - -At the high level, the Core CaD functionality consists of the following components: - -* A generic (that is, not task-specific) package orchestration service implementing the following: - - * package repository management - * package discovery, authoring, and lifecycle management - -* The Porch CLI tool [porchctl]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md" %}}): this is a Git-native, -schema-aware, extensible client-side tool for managing KRM packages. -* A GitOps-based deployment mechanism (for example [configsync][]), which distributes and deploys -configurations, and provides observability of the status of the deployed resources. -* A task-specific UI supporting repository management, package discovery, authoring, and lifecycle. - -![CaD Core Architecture](/static/images/porch/CaD-Core-Architecture.svg) - -## CaD concepts elaborated - -The concepts that were briefly introduced in **High-level architecture** are elaborated in more -detail in this section. - -### Repositories - -Porch and [configsync][] currently integrate with [git][] repositories. There is an existing design -that adds OCI support to kpt. Initially, the Package Orchestration service will prioritize -integration with [git][]. Support for additional repository types may be added in the future, as -required. - -Requirements applicable to all repositories include the ability to store the packages and their -versions, and sufficient metadata associated with the packages to capture the following: - -* package dependency relationships (upstream - downstream) -* package lifecycle state (draft, proposed, published) -* package purpose (base package) -* customer-defined attributes (optional) - -At repository registration, the customers must be able to specify the details needed to store the -packages in appropriate locations in the repository. For example, registration of a Git repository -must accept a branch and a directory. 
- -{{% alert title="Note" color="primary" %}} - -A user role with sufficient permissions can register a package or a function repository, including -repositories containing functions authored by the customer, or by other providers. Since the -functions in the registered repositories become discoverable, customers must be aware of the -implications of registering function repositories and trust the contents thereof. - -{{% /alert %}} - -### Package versioning - -Packages are versioned sequentially. The requirements are as follows: - -* The ability to compare any two versions of a package as "newer than", "equal to", or "older than" - the other. -* The ability to support the automatic assignment of versions. -* The ability to support the [optimistic concurrency][optimistic-concurrency] of package changes - via version numbers. -* A simple model that easily supports automation. - -A simple integer sequence is used to represent the package versions. - -### Package relationships - -The Kpt packages support the concept of ***upstream***. When one package is cloned from another, -the new package, known as the ***downstream*** package, maintains an upstream link to the version -of the package from which it was cloned. If a new version of the upstream package becomes available, -then the upstream link can be used to update the downstream package. - -### Deployment - -The deployment mechanism is responsible for deploying the configuration packages from a repository -and affecting the live state. Because the configuration is stored in standard repositories (Git, - -and in the future OCI), the deployment component is pluggable. By default, [Config Sync](https://cloud.google.com/kubernetes-engine/enterprise/config-sync/docs/overview) is the -deployment mechanism used by CaD Core implementation. However, other deployment mechanisms can be -also used. - -Some of the key attributes of the deployment mechanism and its integration within the CaD Core are -highlighted here: - -* _Published_ packages in a deployment repository are considered to be ready to be deployed. -* configsync supports the deployment of individual packages and whole repositories. For Git - specifically, that translates to a requirement to be able to specify the repository, - branch/tag/ref, and directory when instructing configsync to deploy a package. -* _Draft_ packages need to be identified in such a way that configsync can easily avoid deploying - them. -* configsync needs to be able to pin to specific versions of deployable packages, in order to - orchestrate rollouts and rollbacks. This means it must be possible to get a specific version of a - package. -* configsync needs to be able to discover when new versions are available for deployment. - -## Package Orchestration (Porch) - -Having established the context of the CaD Core components and the overall architecture, the -remainder of the document will focus on the Package Orchestration service, or **Porch** for short. - -The role of the Package Orchestration service among the CaD Core components covers the following -areas: - -* [Repository Management](#repository-management) -* [Package Discovery](#package-discovery) -* [Package Authoring](#package-authoring) and Lifecycle - -In the next sections we will expand on each of these areas. The term _client_ used in these -sections can be either a person interacting with the user interface, such as a web application or a -command-line tool, or an automated agent or process. 
- -### Repository management - -The repository management functionality of the Package Orchestration service enables the client to -do the following: - -* Register, unregister, and update the registration of the repositories, and discover registered - repositories. Git repository integration will be available first, with OCI and possibly more - delivered in the subsequent releases. -* Manage repository-wide upstream/downstream relationships, that is, designate the default upstream - repositories from which the packages will be cloned. -* Annotate the repositories with metadata, such as whether or not each repository contains - deployment-ready packages. Metadata can be application- or customer-specific. - -### Package discovery - -The package discovery functionality of the Package Orchestration service enables the client to do -the following: - -* Browse the packages in a repository. -* Discover the configuration packages in the registered repositories, and sort and/or filter them - based on the repository containing the package, package metadata, version, and package lifecycle - stage (draft, proposed, and published). -* Retrieve the resources and metadata of an individual package, including the latest version, or - any specific version or draft of a package, for the purpose of introspection of a single package, - or for comparison of the contents of multiple versions of a package or related packages. -* Enumerate the _upstream_ packages that are available for creating (cloning) a _downstream_ - package. -* Identify the downstream packages that need to be upgraded after a change has been made to an - upstream package. -* Identify all the deployment-ready packages in a deployment repository that are ready to be synced - to a deployment target by configsync. -* Identify new versions of packages in a deployment repository that can be rolled out to a - deployment target by configsync. - -### Package authoring - -The package authoring and lifecycle functionality of the package Orchestration service enables the -client to do the following: - -* Create a package _draft_ via one of the following means: - - * An empty draft from scratch (`porchctl rpkg init`). - * A clone of an upstream package (`porchctl rpkg clone`) from a registered upstream repository or - from another accessible, unregistered repository. - * Editing an existing package (`porchctl rpkg pull`). - * Rolling back or restoring a package to any of its previous versions - (`porchctl rpkg pull` of a previous version). - -* Push changes to a package _draft_. In general, mutations include adding, modifying, and deleting - any part of the package's contents. Specific examples include the following: - - * Adding, changing, or deleting package metadata (that is, some properties in the `Kptfile`). - * Adding, changing, or deleting resources in the package. - * Adding function mutators/validators to the package's pipeline. - * Adding, changing, or deleting sub-packages. - * Retrieving the contents of the package for arbitrary client-side mutations - (`porchctl rpkg pull`). - * Updating or replacing the package contents with new contents, for example, the results of - client-side mutations by a UI (`porchctl rpkg push`). - -* Rebase a package onto another upstream base package or onto a newer version of the same package - (to assist with conflict resolution during the process of publishing a draft package). 
- -* Get feedback during package authoring, and assistance in recovery from merge conflicts, invalid - package changes, or guardrail violations. - -* Propose that a _draft_ package be _published_. -* Apply arbitrary decision criteria, and by a manual or an automated action, approve or reject a - proposal for _draft_ package to be _published_. -* Perform bulk operations, such as the following: - - * Assisted/automated updates (upgrades and rollbacks) of groups of packages matching specific - criteria (for example, if a base package has new version or a specific base package version has - a vulnerability and needs to be rolled back). - * Proposed change validation (prevalidating changes that add a validator function to a base - package). - -* Delete an existing package. - -#### Authoring and latency - -An important aim of the Package Orchestration service is to support the building of task-specific -UIs. To deliver a low-latency user experience that is acceptable to UI interactions, the innermost -authoring loop depicted below requires the following: - -* high-performance access to the package store (loading or saving a package) with caching -* low-latency execution of mutations and transformations of the package contents -* low-latency [KRM function][krm functions] evaluation and package rendering (evaluation of a - package's function pipelines) - -![Inner Loop](/static/images/porch/Porch-Inner-Loop.svg) - -#### Authoring and access control - -A client can assign actors (for example, persons, service accounts, and so on) to roles that -determine which operations they are allowed to perform, in order to satisfy the requirements of the -basic roles. For example, only permitted roles can do the following: - -* Manipulate repository registration, and enforcement of repository-wide invariants and guardrails. -* Create a draft of a package and propose that the draft be published. -* Approve or reject a proposal to publish a draft package. -* Clone a package from a specific upstream repository. -* Perform bulk operations, such as rollout upgrade of downstream packages, including rollouts - across multiple downstream repositories. - -### Porch architecture - -The Package Orchestration (**Porch**) service is designed to be hosted in a -[Kubernetes](https://kubernetes.io/) cluster. - -The overall architecture is shown in the following figure. It also includes existing components, -such as the k8s apiserver and configsync. - -![Porch Architecture](/static/images/porch/Porch-Architecture.svg) - -In addition to satisfying the requirements highlighted above, the focus of the architecture was to -do the following: - -* Establish clear components and interfaces. -* Support a low-latency package authoring experience required by the UIs. - -The Porch architecture comprises three components: - -* the Porch server -* the function runner -* the CaD Library - -#### Porch server - -The Porch server is implemented as a [Kubernetes extension API server][apiserver]. The benefits of -using the Kubernetes extension API server are as follows: - -* A well-defined and familiar API style. -* The availability of generated clients. -* Integration with the existing Kubernetes ecosystem and tools, such as the `kubectl` CLI, - [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). -* The Kubernetes extension API server removes the need to open another network port to access a - separate endpoint running inside the k8s cluster. 
This is a clear advantage over Google Remote - Procedure Calls (GRPC), which was considered as an alternative approach. - -The resources implemented by Porch include the following: - -* `PackageRevision`: This represents the _metadata_ of the configuration package revision stored in - a _package_ repository. -* `PackageRevisionResources`: This represents the _contents_ of the package revision. - -{{% alert title="Note" color="primary"%}} - -Each configuration package revision is represented by a _pair_ of resources, each of which presents -a different view, or a [representation][] of the same underlying package revision. - -{{% /alert %}} - -Repository registration is supported by a `Repository` [custom resource][crds]. - -The **Porch server** itself comprises several key components, including the following: - -* The *Porch aggregated apiserver* - The *Porch aggregated apiserver* implements the integration into the main Kubernetes apiserver, - and directly serves the API requests for the `PackageRevision`, `PackageRevisionResources` - resources. -* The Package Orchestration *engine* - The Package Orchestration *engine* implements the package lifecycle operations, and the package - mutation workflows. -* The *CaD Library* - The *CaD Library* implements specific package manipulation algorithms, such as package rendering - (the evaluation of a package's function *pipeline*), the initialization of a new package, and so - on. The CaD Library is shared with `kpt`, where it likewise provides the core package - manipulation algorithms. -* The *package cache* - The *package cache* enables both local caching, as well as the abstract manipulation of packages - and their contents, irrespective of the underlying storage mechanism, such as Git, or OCI. -* The *repository adapters* for Git and OCI - The *repository adapters* for Git and OCI implement the specific logic of interacting with those types of package - repositories. -* The *function runtime* - The *function runtime* implements support for evaluating the [kpt functions][functions] and the - multitier cache of functions to support low-latency function evaluation. - -#### Function runner - -The **function runner** is a separate service that is responsible for evaluating the -[kpt functions][functions]. The function runner exposes a Google Remote Procedure Calls -([GRPC](https://grpc.io/)) endpoint, which enables the evaluation of a kpt function on the provided -configuration package. - -The GRPC technology was chosen for the function runner service because the -[requirements](#grpc-api) that informed the choice of the KRM API for the Package Orchestration -service do not apply. The function runner is an internal microservice, an implementation detail not -exposed to external callers. This makes GRPC particularly suitable. - -The function runner also maintains a cache of functions to support low-latency function evaluation. -It achieves this through two mechanisms that are available for the evaluation of a function. - -The **Executable Evaluation** approach executes the function within the pod runtime through a -shell-based invocation of the function binary, for which the function binaries are bundled inside -the function runner image itself. - -The **Pod Evaluation** approach is used when the invoked function is not available via the -Executable Evaluation approach, wherein the function runner pod starts the function pod that -corresponds to the invoked function, along with a front-end service. 
Once the pod and the service -are up and running, its exposed GRPC endpoint is invoked for function evaluation, passing the input -package. For this mechanism, the function runner reads the list of functions and their images -supplied via a configuration file at startup, and spawns function pods, along with a corresponding -front-end service for each configured function. These function pods and services are terminated -after a preconfigured period of inactivity (the default is 30 minutes) by the function runner and -are recreated on the next invocation. - -#### CaD Library - -The [kpt](https://kpt.dev/) CLI already implements foundational package manipulation algorithms, in -order to provide the command line user experience, including the following: - -* [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/): this creates an empty, valid KRM package. -* [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/): this creates a downstream package by - cloning an upstream package. It sets up the upstream reference of the downstream package. -* [kpt pkg update](https://kpt.dev/reference/cli/pkg/update/): this updates the downstream package - with changes from the new version of the upstream, three-way merge. -* [kpt fn eval](https://kpt.dev/reference/cli/fn/eval/): this evaluates a kpt function on a package. -* [kpt fn render](https://kpt.dev/reference/cli/fn/render/): this renders the package by executing - the function pipeline of the package and its nested packages. -* [kpt fn source](https://kpt.dev/reference/cli/fn/source/) and - [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/): these read packages from a local disk as - a `ResourceList` and write the packages represented as a `ResourcesList` into the local disk. - -The same set of primitives form the building blocks of the package orchestration service. Further, -the Package Orchestration service combines these primitives into higher-level operations (for -example, package orchestrator renders the packages automatically on changes. Future versions will -support bulk operations, such as the upgrade of multiple packages, and so on). - -The implementation of the package manipulation primitives in the kpt was refactored (with the -initial refactoring completed, and more to be performed as needed), in order to do the following: - -* Create a reusable CaD library, usable by both the kpt CLI and the Package Orchestration service. -* Create abstractions for dependencies which differ between the CLI and Porch. Most notable are - the dependency on Docker for function evaluation, and the dependency on the local file system for - package rendering. - -Over time, the CaD Library will provide the package manipulation primitives, to perform the -following tasks: - -* Create a valid empty package (init). -* Update the package upstream pointers (get). -* Perform three-way merges (update). -* Render: using a core package rendering algorithm that uses a pluggable function evaluator, to - support the following: - - * Function evaluation via Docker (used by kpt CLI). - * Function evaluation via an RPC to a service or an appropriate function sandbox. - * High-performance evaluation of trusted, built-in functions without a sandbox. - -* Heal the configuration (restore comments after lossy transformation). - -Both the kpt CLI and Porch will consume the library. 
This approach will allow the leveraging of the -investment already made into the high-quality package manipulation primitives, and enable -functional parity between the kpt CLI and the Package Orchestration service. - -## User Guide - -The Porch User Guide can be found in a dedicated document, via this link: -[document]({{% relref "/docs/porch/user-guides/" %}}). - -## Open issues and questions - -### Deployment rollouts and orchestration - -__Not Yet Resolved__ - -Cross-cluster rollouts and orchestration of deployment activity. For example, a package deployed by -configsync in cluster A, and only on success, the same (or a different) package deployed by -configsync in cluster B. - -## Alternatives considered - -### GRPC API - -The use of Google Remote Procedure Calls ([GRPC]()) was considered for the Porch API. The primary -advantages of implementing Porch as an extension of the Kubernetes apiserver are as follows: - -* Customers would not have to open another port to their Kubernetes cluster and would be able to - reuse their existing infrastructure. -* Customers could likewise reuse the existing Kubernetes tooling ecosystem. - - -[krm]: https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md -[functions]: https://kpt.dev/book/02-concepts/#functions -[krm functions]: https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md -[pipeline]: https://kpt.dev/book/04-using-functions/#declarative-function-execution -[Config Sync]: https://cloud.google.com/anthos-config-management/docs/config-sync-overview -[kpt]: https://kpt.dev/ -[git]: https://git-scm.org/ -[optimistic-concurrency]: https://en.wikipedia.org/wiki/Optimistic_concurrency_control -[apiserver]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/ -[representation]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#differing-representations -[crds]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ -[oci]: https://github.com/opencontainers/image-spec/blob/main/spec.md diff --git a/content/en/docs/neo-porch/2_concepts/theory.md b/content/en/docs/neo-porch/2_concepts/theory.md deleted file mode 100644 index 1ae27fa4..00000000 --- a/content/en/docs/neo-porch/2_concepts/theory.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: "Theoretical Concepts" -type: docs -weight: 3 -description: | - The principles and theories behind Porch; Porch as a service which provides kpt workflows, implementing a "configuration - as data" approach to management of kpt packages. ---- - -## Configuration as Data (CaD) - -CaD is an approach to the management of configuration: namely, configuration data for infrastructure, policy, services, -applications, etc. To treat configuration as data means implementing the following principles: - -* Making configuration data the source of truth, stored separately from the live state. -* Using a uniform, serializable data model to represent the configuration. -* Separating the data (including, if applicable, packages or bundles of data), from code that applies or acts on the - configuration. -* Abstracting the configuration file structure and storage from operations that act on the configuration data. Clients - manipulating the configuration data do not need to interact directly with the storage (such as git, container images, - etc.). 
-
-![CaD Overview](/static/images/porch/CaD-Overview.svg)
-
-### Key principles
-
-A system based on CaD should observe the following key principles:
-
-* *Decouple configuration abstractions* from collections of configuration data.
-* Represent *abstractions of configuration generators* as data with schemas, as with other configuration data.
-* *Separate the configuration data from its schemas*.
-  * Rely on the schema information to distinguish between data structures and other versions/variations within the schema.
-* Separate the *actuation* (reconciliation of configuration data with live state) from the *intermediate processing*
-  (validation and transformation) of the configuration data.
-  * Actuation should be conducted according to the declarative data model.
-* Prefer *transforming configuration data* to generating it wholesale, especially for value propagation,
-  * except in the case of dramatic changes (for example, an expansion by 10x or more).
-* *Decouple generation of transformation input data* from propagation.
-* *Link the live state back to the configuration as source of truth*.
-
-## Package Orchestration - Porch
-
-Having established the basics of a very generic CaD architecture, the remainder of the document will focus
-on **Porch** - the Package Orchestration service.
-
-Package Orchestration - "Porch" for short - is "[kpt](https://kpt.dev/)-as-a-service". It provides opinionated,
-Kubernetes-based interfaces to manage and orchestrate kpt configuration packages, allowing the use of standard Kubernetes
-controller techniques to automate:
-* package management, including [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations, content
-  manipulation, and lifecycle operations
-* connection to package repositories, with automatic discovery of packages contained in them
-* [WYSIWYG](https://en.wikipedia.org/wiki/WYSIWYG) package authoring
-* evaluation of [KRM functions](https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md)
-  on package contents
-* package version control, with a proposal/approval workflow around publishing new package versions
-* package customization with guardrails
-
-As part of the Nephio CaD implementation, Porch covers:
-
-* [Repository Management](#repository-management)
-* [Package Discovery](#package-discovery)
-* [Package Authoring](#package-authoring) and Lifecycle
-
-The following sections expand on each of these areas. The term *client* used in these sections can be either a person
-interacting with the API (e.g., through a web application or a command-line tool), or an automated agent or process.
-
-### Rationale behind Porch
-
-The benefits of Configuration as Data are already available in CLI form, using kpt and the KRM function ecosystem, which
-includes a kpt-hosted [function catalog](https://catalog.kpt.dev/function-catalog/). YAML files can be created and
-organised into packages using any editor with YAML support. However, a WYSIWYG user experience for package management,
-one that supports broader package lifecycle management and the necessary development guardrails, is not yet available.
-
-Porch enables the development of such a user experience. It enables workflows similar to those supported by the kpt CLI,
-offering them as a service over an API and a CLI that provide lifecycle management of kpt packages, package authoring with
-guardrails, a proposal/approval workflow, package deployment, and more.
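-
-Because Porch exposes these workflows through the Kubernetes API, a client can drive them with
-ordinary Kubernetes tooling; the *porch-demo* namespace below is illustrative:
-
-```bash
-# Porch resources are regular Kubernetes API objects, so kubectl works directly...
-kubectl get repositories -n porch-demo
-kubectl get packagerevision -n porch-demo
-# ...and the porchctl CLI offers the same operations as package-oriented commands.
-porchctl -n porch-demo rpkg get
-```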
-
-### Repository Management
-
-Porch's repository management functionality enables the client to manage Porch repositories:
-
-* Register (create) and unregister (delete) repositories.
-  * A repository may be registered as a *deployment* repository to indicate that it contains deployment-ready packages.
-* Discover (read) and update registered repositories.
-  * Since Porch repositories are Kubernetes objects, the update operation may be used to add arbitrary metadata, in the
-    form of annotations or labels, for the benefit of applications or customers.
-
-Git repository integration is available, with limited experimental support for OCI.
-
-### Package Discovery
-
-Porch's package discovery functionality enables the client to read package data:
-
-* List package revisions in registered repositories.
-  * Sort and filter based on package metadata (labels) or a selection of field values.
-  * To improve performance and latency, package revisions are automatically discovered and cached in Porch upon repository
-    registration.
-  * Porch's repository synchronisation then polls the repository at a user-customizable interval to keep the cache up to
-    date.
-* Retrieve details of an individual package revision.
-* Discover upstream packages that have newer revisions to which their downstream packages can be upgraded.
-* Identify deployment-ready packages that are available to be deployed by the chosen deployment software.
-
-### Package Authoring
-
-Porch's package lifecycle management enables the client to orchestrate packages and package revisions (a condensed
-command-line sketch follows the list):
-
-* Create a *draft* package revision in any of the following ways:
-  * Create an empty draft 'from scratch' (`porchctl rpkg init`).
-  * Clone an upstream package (`porchctl rpkg clone`) from either a registered upstream repository or from an unregistered
-    repository accessible by URL.
-  * Edit an existing package (`porchctl rpkg edit`).
-
-* Retrieve the contents of a package's files for local review or editing (`porchctl rpkg pull`).
-
-* Manage the approval status of a package revision:
-  * Propose a *Draft* package for publication, moving it to *Proposed* status.
-  * Reject a *Proposed* package, setting it back to *Draft* status.
-  * Approve a *Proposed* package, releasing it to *Published* status.
-
-* Update the package contents of a draft package revision by pushing an edited local copy to the draft (`porchctl rpkg push`).
-  Example edits:
-  * Add, modify, or delete resources in the package.
-  * Add, modify, or delete the KRM functions in the pipeline in the package's `Kptfile`,
-    e.g. mutator functions to transform the [KRM resources](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)
-    in the package contents, or validator functions to enforce validation rules.
-  * Add, modify, or delete a sub-package.
-
-* Guard against pushing invalid package changes:
-  * As part of the `porchctl rpkg push` operation, Porch renders the kpt package, running the pipeline.
-  * If the pipeline encounters a failure, error, or validation violation, Porch refuses to update the package contents.
-
-* Perform bulk operations using package variants, such as:
-  * Assisted/automated update (upgrade, rollback) of groups of packages matching specific criteria (e.g. the base package
    has a new version, or a specific base package version has a vulnerability and should be rolled back).
-  * Proposed change validation (pre-validating a change that adds a validator function to a base package).
-
-* Delete an existing package or package revision.
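-
-Condensed into commands, a typical authoring round-trip looks like this; the repository name, package
-name, and generated package revision name are placeholders:
-
-```bash
-# Create a draft package revision from scratch.
-porchctl -n porch-demo rpkg init my-package --repository=management --workspace=v1
-# Pull the draft, edit it locally, and push the contents back; the push re-renders
-# the package and is rejected if the function pipeline fails.
-porchctl -n porch-demo rpkg pull <PACKAGE-REVISION-NAME> ./my-package
-porchctl -n porch-demo rpkg push <PACKAGE-REVISION-NAME> ./my-package
-# Walk the revision through the approval workflow: Draft -> Proposed -> Published.
-porchctl -n porch-demo rpkg propose <PACKAGE-REVISION-NAME>
-porchctl -n porch-demo rpkg approve <PACKAGE-REVISION-NAME>
-```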
-
-#### Authoring & Latency
-
-An important goal of Porch is to support the building of task-specific UIs. In order for Porch to sustain a quick turnaround
-of operations, package authors must ensure that their packages allow the innermost authoring loop (depicted below) to execute
-quickly in the following areas:
-* Low-latency execution of mutations and transformations on the package contents.
-* Low-latency rendering of the package's KRM function pipeline.
-
-![Inner Loop](/static/images/porch/Porch-Inner-Loop.svg)
-
-#### Authoring & Access Control
-
-Using Kubernetes Roles and RoleBindings, a user can apply role-based access control to limit the operations an actor (another
-user, a service account) can perform. For example, access can be segregated to restrict who can:
-
-* register and unregister repositories.
-* create a new draft package revision and propose it for publication.
-* approve (or reject) a proposed package revision.
-* clone packages from a specific upstream repository.
-* perform bulk operations (using package variants, scripts, a user-developed client, etc.) such as rolling out upgrades of
-  downstream packages, including rollouts across multiple downstream repositories.
diff --git a/content/en/docs/neo-porch/3_getting_started/_index.md b/content/en/docs/neo-porch/3_getting_started/_index.md
deleted file mode 100644
index b34cb782..00000000
--- a/content/en/docs/neo-porch/3_getting_started/_index.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: "Getting Started"
-type: docs
-weight: 3
-description: "A set of guides for installing Porch prerequisites, the porchctl CLI, and deploying Porch components on a Kubernetes cluster."
----
-
-## Prerequisites
-
-1. A supported OS (Linux/MacOS)
-2. [git](https://git-scm.com/) ({{< params "version_git" >}})
-3. [Docker](https://www.docker.com/get-started/) - either Docker Desktop or Docker Engine ({{< params "version_docker" >}})
-4. [kubectl](https://kubernetes.io/docs/reference/kubectl/) - make sure that your [kubectl context](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is configured with your cluster ({{< params "version_kube" >}})
-5. [kpt](https://kpt.dev/installation/kpt-cli/) ({{< params "version_kpt" >}})
-6. [The Go programming language](https://go.dev/) ({{< params "version_go" >}})
-7. A Kubernetes cluster
-
-{{% alert color="primary" title="Note:" %}}
-The versions above are the latest tested versions confirmed to work; they are **NOT** the only compatible versions.
-{{% /alert %}}
diff --git a/content/en/docs/neo-porch/3_getting_started/installing-porch.md b/content/en/docs/neo-porch/3_getting_started/installing-porch.md
deleted file mode 100644
index 4dae7912..00000000
--- a/content/en/docs/neo-porch/3_getting_started/installing-porch.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: "Installing Porch"
-type: docs
-weight: 2
-description: Install guide for the Porch system on a Kubernetes cluster.
----
-
-## Deploying Porch on a cluster
-
-Create a new directory for the kpt package and change into it:
-
-```bash
-mkdir porch-{{% params "latestTag" %}} && cd porch-{{% params "latestTag" %}}
-```
-
-Download the latest Porch kpt package blueprint:
-
-```bash
-curl -LO "https://github.com/nephio-project/porch/releases/download/v{{% params "latestTag" %}}/porch_blueprint.tar.gz"
-```
-
-Extract the Porch kpt package contents:
-
-```bash
-tar -xzf porch_blueprint.tar.gz
-```
-
-Initialize and apply the Porch kpt package:
-
-```bash
-kpt live init && kpt live apply
-```
-
-You can check that Porch is up and running with the following command:
-
-```bash
-kubectl get all -n porch-system
-```
-
-A healthy Porch installation should look like the following:
-
-```bash
-NAME READY STATUS RESTARTS AGE
-pod/function-runner-567ddc76d-7k8sj 1/1 Running 0 4m3s
-pod/function-runner-567ddc76d-x75lv 1/1 Running 0 4m3s
-pod/porch-controllers-d8dfccb4-8lc6j 1/1 Running 0 4m3s
-pod/porch-server-7dc5d7cd4f-smhf5 1/1 Running 0 4m3s
-
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-service/api ClusterIP 10.96.108.221 <none> 443/TCP,8443/TCP 4m3s
-service/function-runner ClusterIP 10.96.237.108 <none> 9445/TCP 4m3s
-
-NAME READY UP-TO-DATE AVAILABLE AGE
-deployment.apps/function-runner 2/2 2 2 4m3s
-deployment.apps/porch-controllers 1/1 1 1 4m3s
-deployment.apps/porch-server 1/1 1 1 4m3s
-
-NAME DESIRED CURRENT READY AGE
-replicaset.apps/function-runner-567ddc76d 2 2 2 4m3s
-replicaset.apps/porch-controllers-d8dfccb4 1 1 1 4m3s
-replicaset.apps/porch-server-7dc5d7cd4f 1 1 1 4m3s
-```
diff --git a/content/en/docs/neo-porch/3_getting_started/installing-porchctl.md b/content/en/docs/neo-porch/3_getting_started/installing-porchctl.md
deleted file mode 100644
index 17abbaca..00000000
--- a/content/en/docs/neo-porch/3_getting_started/installing-porchctl.md
+++ /dev/null
@@ -1,110 +0,0 @@
----
-title: "Installing Porchctl CLI"
-type: docs
-weight: 1
-description: Install guide for the Porchctl CLI.
----
-
-## Download the latest porchctl binary
-
-{{< tabpane lang="bash" >}}
-{{< tab header="Linux AMD64" >}}
-curl -LO "https://github.com/nephio-project/porch/releases/download/v{{% params "latestTag" %}}/porchctl_{{% params "latestTag" %}}_linux_amd64.tar.gz"
-{{< /tab >}}
-{{< tab header="Linux ARM64" >}}
-curl -LO "https://github.com/nephio-project/porch/releases/download/v{{% params "latestTag" %}}/porchctl_{{% params "latestTag" %}}_linux_arm64.tar.gz"
-{{< /tab >}}
-{{< tab header="macOS AMD64" >}}
-curl -LO "https://github.com/nephio-project/porch/releases/download/v{{% params "latestTag" %}}/porchctl_{{% params "latestTag" %}}_darwin_amd64.tar.gz"
-{{< /tab >}}
-{{< tab header="macOS ARM64" >}}
-curl -LO "https://github.com/nephio-project/porch/releases/download/v{{% params "latestTag" %}}/porchctl_{{% params "latestTag" %}}_darwin_arm64.tar.gz"
-{{< /tab >}}
-{{< /tabpane >}}
-
-{{% alert color="primary" title="Note:" %}}
-To download a specific version of the porchctl binary, replace the version number and machine type in the curl URL above.
-
-For example, to download the porchctl binary of the **[1.5.0](https://github.com/nephio-project/porch/releases/tag/v1.5.0)** release of Porch for **macOS AMD64**, the URL would be:
-
-```bash
-curl -LO "https://github.com/nephio-project/porch/releases/download/v1.5.0/porchctl_1.5.0_darwin_amd64.tar.gz"
-```
-
-{{% /alert %}}
-
-## Install the porchctl binary
-
-The following extracts the tar file containing the binary executable and installs it into the system binary directory */usr/local/bin*.
-
-{{% alert color="primary" title="Note:" %}}
-This requires **root** permissions on the host machine.
-{{% /alert %}}
-
-```bash
-tar -xzf porchctl_{{% params "latestTag" %}}_linux_amd64.tar.gz && sudo install -o root -g root -m 0755 porchctl /usr/local/bin/
-```
-
-{{% alert color="primary" title="Note:" %}}
-If you do not have root access on the target system, you can still install porchctl into the `~/.local/bin` directory:
-{{% /alert %}}
-
-```bash
-tar -xzf porchctl_{{% params "latestTag" %}}_linux_amd64.tar.gz
-chmod +x ./porchctl
-mkdir -p ~/.local/bin
-mv ./porchctl ~/.local/bin/porchctl
-# and then append (or prepend) ~/.local/bin to $PATH
-```
-
-You can test that the CLI has been installed correctly with the `porchctl version` command. The output should look similar to this:
-
-```bash
-Version: {{% params "latestTag" %}}
-Git commit: cddc13bdcd569141142e2b632f09eb7a3e4988c9 (dirty)
-```
-
-## Enable porchctl autocompletion (optional)
-
-Create the completions directory (if it doesn't already exist):
-
-```bash
-mkdir -p ~/.local/share/bash-completion/completions
-```
-
-{{% alert color="primary" title="Note:" %}}
-This is the auto-completion directory for Ubuntu 24.04 LTS and a few other distributions.
-Please do your due diligence and use/create the appropriate directory for your OS/distribution.
-{{% /alert %}}
-
-Generate and install the completion script:
-
-```bash
-porchctl completion bash > ~/.local/share/bash-completion/completions/porchctl
-```
-
-Reload your shell:
-
-```bash
-exec bash
-```
-
-{{% alert color="primary" title="Note:" %}}
-Alternatively, you can simply close the terminal and start a new one; either approach works.
-{{% /alert %}}
-
-Test that auto-completion works by typing the following command and then pressing the auto-complete key, usually `Tab`, twice.
-
-```bash
-porchctl 
-```
-
-If auto-completion is working as intended, this should return output similar to the one below:
-
-```bash
-completion (Generate the autocompletion script for the specified shell)
-help (Help about any command)
-repo (Manage package repositories.)
-rpkg (Manage packages.)
-version (Print the version number of porchctl)
-```
diff --git a/content/en/docs/neo-porch/3_getting_started/relevant_old_docs/_index.md b/content/en/docs/neo-porch/3_getting_started/relevant_old_docs/_index.md
deleted file mode 100644
index ca21f73f..00000000
--- a/content/en/docs/neo-porch/3_getting_started/relevant_old_docs/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "[### Old Docs ###]"
-type: docs
-weight: -1
-description:
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/_index.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/_index.md
deleted file mode 100644
index 01e845ff..00000000
--- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/_index.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: "Guides"
-type: docs
-weight: 4
-description: Guides detailing how to use Porch
----
-
-## Overview
-
-{{% alert title="Note" color="primary" %}}
-The tutorials in this section assume you have a local development environment running (Porch + Gitea in kind). If you plan to follow the walkthroughs locally, please set up the Local Dev Environment first. For more information, see [Local Development Environment Setup]({{% relref "/docs/neo-porch/6_configuration_and_deployments/deployments/local-dev-env-deployment.md" %}}).
-{{% /alert %}}
diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/_index.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/_index.md
deleted file mode 100644
index ca21f73f..00000000
--- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "[### Old Docs ###]"
-type: docs
-weight: -1
-description:
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/preparing-the-environment.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/preparing-the-environment.md
deleted file mode 100644
index e6bba7d9..00000000
--- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/relevant_old_docs/preparing-the-environment.md
+++ /dev/null
@@ -1,1682 +0,0 @@
----
-title: "Preparing the Environment"
-type: docs
-weight: 2
-description: "A tutorial on preparing the environment for Porch"
----
-
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Exploring the Porch resources
-
-We have configured three repositories in Porch:
-
-```bash
-kubectl get repositories -n porch-demo
-NAME TYPE CONTENT DEPLOYMENT READY ADDRESS
-edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git
-external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git
-management git Package false True http://172.18.255.200:3000/nephio/management.git
-```
-
-A repository is a CR of the Porch Repository CRD. You can examine the *repositories.config.porch.kpt.dev* CRD with
-either of the following commands (both of which are rather verbose):
-
-```bash
-kubectl get crd -n porch-system repositories.config.porch.kpt.dev -o yaml
-kubectl describe crd -n porch-system repositories.config.porch.kpt.dev
-```
-
-You can examine any other CRD by using the commands above and changing the CRD name/namespace.
-
-The full list of Porch CRDs is as below:
-
-```bash
-kubectl api-resources --api-group=porch.kpt.dev
-NAME SHORTNAMES APIVERSION NAMESPACED KIND
-packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources
-packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision
-packages porch.kpt.dev/v1alpha1 true Package
-```
-
-The PackageRevision CRD is used to keep track of the revisions (or versions) of each package found in the repositories.
-
-```bash
-kubectl get packagerevision -n porch-demo
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main false Published external-blueprints
-external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 true Published external-blueprints
-external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main false Published external-blueprints
-external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 false Published external-blueprints
-external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 false Published external-blueprints
-external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 false Published external-blueprints
-external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 false Published external-blueprints
-external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 true Published external-blueprints
-external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 free5gc-upf main main false Published external-blueprints
-external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 true Published external-blueprints
-external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main false Published external-blueprints
-external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 false Published external-blueprints
-external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 false Published external-blueprints
-external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 false Published external-blueprints
-external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 false Published external-blueprints
-external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 true Published external-blueprints
-external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main false Published external-blueprints
-external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 false Published external-blueprints -external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 false Published external-blueprints -external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 false Published external-blueprints -external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 false Published external-blueprints -external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 true Published external-blueprints -external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main false Published external-blueprints -external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 false Published external-blueprints -external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 false Published external-blueprints -external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 false Published external-blueprints -external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 false Published external-blueprints -external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 true Published external-blueprints -``` - -The PackageRevisionResources resource is an API Aggregation resource that Porch uses to wrap the GET URL for the package -on its repository. - -```bash -kubectl get packagerevisionresources -n porch-demo -NAME PACKAGE WORKSPACENAME REVISION REPOSITORY FILES -external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main external-blueprints 28 -external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 external-blueprints 28 -external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main external-blueprints 14 -external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 external-blueprints 11 -external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 external-blueprints 11 -external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 external-blueprints 14 -external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 external-blueprints 14 -external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 external-blueprints 14 -external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 free5gc-upf main main external-blueprints 6 -external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 external-blueprints 6 -external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main external-blueprints 16 -external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 external-blueprints 7 -external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 external-blueprints 8 -external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 external-blueprints 16 -external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 external-blueprints 16 -external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 external-blueprints 16 -external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main external-blueprints 17 -external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 
external-blueprints 8
-external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 external-blueprints 9
-external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 external-blueprints 17
-external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 external-blueprints 17
-external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 external-blueprints 17
-external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main external-blueprints 17
-external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 external-blueprints 8
-external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 external-blueprints 8
-external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 external-blueprints 17
-external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 external-blueprints 17
-external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 external-blueprints 17
-```
-
-Let's examine the *free5gc-cp v1* package.
-
-The PackageRevision CR name for *free5gc-cp v1* is external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9.
-
-```bash
-kubectl get packagerevision -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml
-```
-
-```yaml
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  creationTimestamp: "2023-06-13T13:35:34Z"
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9
-  namespace: porch-demo
-  resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61
-  uid: uid:free5gc-cp:v1
-spec:
-  lifecycle: Published
-  packageName: free5gc-cp
-  repository: external-blueprints
-  revision: v1
-  workspaceName: v1
-status:
-  publishTimestamp: "2023-06-13T13:35:34Z"
-  publishedBy: dnaleksandrov@gmail.com
-  upstreamLock: {}
-```
-
-Getting the *PackageRevisionResources* pulls the package from its repository, with each file serialized into a name-value
-map of resources in its *spec*.
-
-Open this to see the command and the result - -```bash -kubectl get packagerevisionresources -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml -``` -```yaml -apiVersion: porch.kpt.dev/v1alpha1 -kind: PackageRevisionResources -metadata: - creationTimestamp: "2023-06-13T13:35:34Z" - name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 - namespace: porch-demo - resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61 - uid: uid:free5gc-cp:v1 -spec: - packageName: free5gc-cp - repository: external-blueprints - resources: - Kptfile: | - apiVersion: kpt.dev/v1 - kind: Kptfile - metadata: - name: free5gc-cp - annotations: - config.kubernetes.io/local-config: "true" - info: - description: this package represents free5gc NFs, which are required to perform E2E conn testing - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configPath: package-context.yaml - README.md: "# free5gc-cp\n\n## Description\nPackage representing free5gc control - plane NFs.\n\nPackage definition is based on [Towards5gs helm charts](https://github.com/Orange-OpenSource/towards5gs-helm), - \nand service level configuration is preserved as defined there.\n\n### Network - Functions (NFs)\n\nfree5gc project implements following NFs:\n\n\n| NF | Description - | local-config |\n| --- | --- | --- |\n| AMF | Access and Mobility Management - Function | true |\n| AUSF | Authentication Server Function | false |\n| NRF - | Network Repository Function | false |\n| NSSF | Network Slice Selection Function - | false |\n| PCF | Policy Control Function | false |\n| SMF | Session Management - Function | true |\n| UDM | Unified Data Management | false |\n| UDR | Unified - Data Repository | false |\n\nalso Database and Web UI is defined:\n\n| Service - | Description | local-config |\n| --- | --- | --- |\n| mongodb | Database to - store free5gc data | false |\n| webui | UI used to register UE | false |\n\nNote: - `local-config: true` indicates that this resources won't be deployed to the - workload cluster\n\n### Dependencies\n\n- `mongodb` requires `Persistent Volume`. 
- We need to assure that dynamic PV provisioning will be available on the cluster\n- - `NRF` should be running before other NFs will be instantiated\n - all NFs - packages contain `wait-nrf` init-container\n- `NRF` and `WEBUI` require DB\n - \ - packages contain `wait-mongodb` init-container\n- `WEBUI` service is exposed - as `NodePort` \n - will be used to register UE on the free5gc side\n- Communication - via `SBI` between NFs and communication with `mongodb` is defined using K8s - `ClusterIP` services\n - it forces you to deploy all NFs on a single cluster - or consider including `service mesh` in a multi-cluster scenario\n\n## Usage\n\n### - Fetch the package\n`kpt pkg get REPO_URI[.git]/PKG_PATH[@VERSION] free5gc-cp`\n\nDetails: - https://kpt.dev/reference/cli/pkg/get/\n\n### View package content\n`kpt pkg - tree free5gc-cp`\n\nDetails: https://kpt.dev/reference/cli/pkg/tree/\n\n### - Apply the package\n```\nkpt live init free5gc-cp\nkpt live apply free5gc-cp - --reconcile-timeout=2m --output=table\n```\n\nDetails: https://kpt.dev/reference/cli/live/\n\n" - ausf/ausf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - ausf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n ausfcfg.yaml: |\n info:\n version: 1.0.2\n description: - AUSF initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nausf-auth\n \n sbi:\n scheme: http\n registerIPv4: - ausf-nausf # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # - IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/ausf.key\n pem: config/TLS/ausf.pem\n \n nrfUri: - http://nrf-nnrf:8000\n plmnSupportList:\n - mcc: 208\n mnc: - 93\n - mcc: 123\n mnc: 45\n groupId: ausfGroup001\n eapAkaSupiImsiPrefix: - false\n\n logger:\n AUSF:\n ReportCaller: false\n debugLevel: - info\n" - ausf/ausf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-ausf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: ausf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: ausf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: ausf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n \n containers:\n - \ - name: ausf\n image: towards5gs/free5gc-ausf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./ausf\"]\n args: [\"-c\", \"../config/ausfcfg.yaml\"]\n - \ env:\n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: ausf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: ausf-volume\n projected:\n sources:\n - - configMap:\n name: ausf-configmap\n" - ausf/ausf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: ausf-nausf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: ausf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: ausf - mongodb/dep-sts.yaml: "---\napiVersion: apps/v1\nkind: 
StatefulSet\nmetadata:\n - \ name: mongodb\n namespace: default\n labels:\n app.kubernetes.io/name: - mongodb\n app.kubernetes.io/instance: free5gc\n app.kubernetes.io/component: - mongodb\nspec:\n serviceName: mongodb\n updateStrategy:\n type: RollingUpdate\n - \ selector:\n matchLabels:\n app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n template:\n metadata:\n - \ labels:\n app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n spec:\n \n serviceAccountName: - mongodb\n affinity:\n podAffinity:\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - \ - podAffinityTerm:\n labelSelector:\n matchLabels:\n - \ app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n namespaces:\n - \ - \"default\"\n topologyKey: kubernetes.io/hostname\n - \ weight: 1\n nodeAffinity:\n \n securityContext:\n - \ fsGroup: 1001\n sysctls: []\n containers:\n - name: - mongodb\n image: docker.io/bitnami/mongodb:4.4.4-debian-10-r0\n imagePullPolicy: - \"IfNotPresent\"\n securityContext:\n runAsNonRoot: true\n - \ runAsUser: 1001\n env:\n - name: BITNAMI_DEBUG\n - \ value: \"false\"\n - name: ALLOW_EMPTY_PASSWORD\n value: - \"yes\"\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: - \"0\"\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: - \"no\"\n - name: MONGODB_ENABLE_IPV6\n value: \"no\"\n - \ - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: \"no\"\n - \ ports:\n - name: mongodb\n containerPort: - 27017\n livenessProbe:\n exec:\n command:\n - \ - mongo\n - --disableImplicitSessions\n - - --eval\n - \"db.adminCommand('ping')\"\n initialDelaySeconds: - 30\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n readinessProbe:\n exec:\n - \ command:\n - bash\n - -ec\n - - |\n mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary - || db.hello().secondary' | grep -q 'true'\n initialDelaySeconds: - 5\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n resources:\n limits: - {}\n requests: {}\n volumeMounts:\n - name: datadir\n - \ mountPath: /bitnami/mongodb/data/db/\n subPath: \n - \ volumes:\n volumeClaimTemplates:\n - metadata:\n name: datadir\n - \ spec:\n accessModes:\n - \"ReadWriteOnce\"\n resources:\n - \ requests:\n storage: \"6Gi\"\n" - mongodb/serviceaccount.yaml: | - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - secrets: - - name: mongodb - mongodb/svc.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - spec: - type: ClusterIP - ports: - - name: mongodb - port: 27017 - targetPort: mongodb - nodePort: null - selector: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - namespace.yaml: | - apiVersion: v1 - kind: Namespace - metadata: - name: example - labels: - pod-security.kubernetes.io/warn: "privileged" - pod-security.kubernetes.io/audit: "privileged" - pod-security.kubernetes.io/enforce: "privileged" - nrf/nrf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nrf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - 
free5gc\ndata:\n nrfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NRF initial local configuration\n \n configuration:\n MongoDBName: - free5gc\n MongoDBUrl: mongodb://mongodb:27017\n\n serviceNameList:\n - \ - nnrf-nfm\n - nnrf-disc\n\n sbi:\n scheme: http\n - \ registerIPv4: nrf-nnrf # IP used to serve NFs or register to another - NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 8000\n tls:\n key: config/TLS/nrf.key\n pem: config/TLS/nrf.pem\n - \ DefaultPlmnId:\n mcc: 208\n mnc: 93\n\n logger:\n NRF:\n - \ ReportCaller: false\n debugLevel: info\n" - nrf/nrf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nrf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: nrf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nrf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nrf\n spec:\n initContainers:\n - \ - name: wait-mongo\n image: busybox:1.32.0\n env:\n - - name: DEPENDENCIES\n value: mongodb:27017\n command: [\"sh\", - \"-c\", \"until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep - 2; done;\"]\n containers:\n - name: nrf\n image: towards5gs/free5gc-nrf:v3.1.1\n - \ imagePullPolicy: IfNotPresent\n securityContext:\n {}\n - \ ports:\n - containerPort: 8000\n command: [\"./nrf\"]\n - \ args: [\"-c\", \"../config/nrfcfg.yaml\"]\n env: \n - - name: DB_URI\n value: mongodb://mongodb/free5gc\n - name: - GIN_MODE\n value: release\n volumeMounts:\n - mountPath: - /free5gc/config/\n name: nrf-volume\n resources:\n limits:\n - \ cpu: 100m\n memory: 128Mi\n requests:\n - \ cpu: 100m\n memory: 128Mi\n readinessProbe:\n - \ initialDelaySeconds: 0\n periodSeconds: 1\n timeoutSeconds: - 1\n failureThreshold: 40\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n livenessProbe:\n - \ initialDelaySeconds: 120\n periodSeconds: 10\n timeoutSeconds: - 10\n failureThreshold: 3\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n dnsPolicy: ClusterFirst\n - \ restartPolicy: Always\n\n volumes:\n - name: nrf-volume\n projected:\n - \ sources:\n - configMap:\n name: nrf-configmap\n" - nrf/nrf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nrf-nnrf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nrf - spec: - type: ClusterIP - ports: - - port: 8000 - targetPort: 8000 - protocol: TCP - name: http - selector: - project: free5gc - nf: nrf - nssf/nssf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nssf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n nssfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NSSF initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nnssf-nsselection\n - nnssf-nssaiavailability\n\n sbi:\n - \ scheme: http\n registerIPv4: nssf-nnssf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/nssf.key\n pem: config/TLS/nssf.pem\n - \ \n nrfUri: http://nrf-nnrf:8000\n \n nsiList:\n - - snssai:\n sst: 1\n nsiInformationList:\n - nrfId: - http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 10\n - - snssai:\n sst: 1\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 11\n - snssai:\n sst: 1\n sd: 2\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 12\n - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 
- 12\n - snssai:\n sst: 1\n sd: 3\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 13\n - snssai:\n sst: 2\n nsiInformationList:\n - - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 20\n - \ - snssai:\n sst: 2\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 21\n - snssai:\n sst: 1\n sd: 010203\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 22\n amfSetList:\n - amfSetId: 1\n amfList:\n - - ffa2e8d7-3275-49c7-8631-6af1df1d9d26\n - 0e8831c3-6286-4689-ab27-1e2161e15cb1\n - \ - a1fba9ba-2e39-4e22-9c74-f749da571d0d\n nrfAmfSet: http://nrf-nnrf:8081/nnrf-nfm/v1/nf-instances\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33456\n supportedSnssaiList:\n - sst: 1\n sd: - 1\n - sst: 1\n sd: 2\n - sst: - 2\n sd: 1\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 1\n sd: 2\n - amfSetId: 2\n nrfAmfSet: - http://nrf-nnrf:8084/nnrf-nfm/v1/nf-instances\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - tai:\n plmnId:\n mcc: 466\n - \ mnc: 92\n tac: 33458\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n nssfName: NSSF\n supportedPlmnList:\n - - mcc: 208\n mnc: 93\n supportedNssaiInPlmnList:\n - plmnId:\n - \ mcc: 208\n mnc: 93\n supportedSnssaiList:\n - \ - sst: 1\n sd: 010203\n - sst: 1\n sd: - 112233\n - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - sst: 2\n sd: 2\n amfList:\n - nfId: - 469de254-2fe5-4ca0-8381-af3f500af77c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n sd: 1\n - sst: 1\n - \ sd: 2\n - nfId: fbe604a8-27b2-417e-bd7c-8a7be2691f8d\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33458\n supportedSnssaiList:\n - sst: 1\n - - sst: 1\n sd: 1\n - sst: 1\n sd: - 3\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33459\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n - sst: 2\n sd: 1\n - \ - nfId: b9e6e2cb-5ce8-4cb6-9173-a266dd9a2f0c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 2\n - sst: 2\n - tai:\n - \ plmnId:\n mcc: 466\n mnc: - 92\n tac: 33458\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n taList:\n - - tai:\n plmnId:\n mcc: 466\n mnc: 92\n tac: - 33456\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - sst: 1\n sd: - 2\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n accessType: 3GPP_ACCESS\n - \ supportedSnssaiList:\n - sst: 1\n - sst: 1\n - \ sd: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33458\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 1\n sd: 3\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33459\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - 
sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 2\n sd: 1\n - \ mappingListFromPlmn:\n - operatorName: NTT Docomo\n homePlmnId:\n - \ mcc: 440\n mnc: 10\n mappingOfSnssai:\n - - servingSnssai:\n sst: 1\n sd: 1\n homeSnssai:\n - \ sst: 1\n sd: 1\n - servingSnssai:\n - \ sst: 1\n sd: 2\n homeSnssai:\n sst: - 1\n sd: 3\n - servingSnssai:\n sst: - 1\n sd: 3\n homeSnssai:\n sst: 1\n - \ sd: 4\n - servingSnssai:\n sst: 2\n - \ sd: 1\n homeSnssai:\n sst: 2\n sd: - 2\n - operatorName: AT&T Mobility\n homePlmnId:\n mcc: - 310\n mnc: 560\n mappingOfSnssai:\n - servingSnssai:\n - \ sst: 1\n sd: 1\n homeSnssai:\n sst: - 1\n sd: 2\n - servingSnssai:\n sst: - 1\n sd: 2\n homeSnssai:\n sst: 1\n - \ sd: 3 \n\n logger:\n NSSF:\n ReportCaller: - false\n debugLevel: info\n" - nssf/nssf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nssf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: nssf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nssf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nssf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: nssf\n image: towards5gs/free5gc-nssf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./nssf\"]\n args: [\"-c\", \"../config/nssfcfg.yaml\"]\n - \ env: \n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: nssf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: nssf-volume\n projected:\n sources:\n - - configMap:\n name: nssf-configmap\n" - nssf/nssf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nssf-nnssf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nssf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: nssf - package-context.yaml: | - apiVersion: v1 - kind: ConfigMap - metadata: - name: kptfile.kpt.dev - annotations: - config.kubernetes.io/local-config: "true" - data: - name: free5gc - namespace: free5gc - pcf/pcf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - pcf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n pcfcfg.yaml: |\n info:\n version: 1.0.1\n description: - PCF initial local configuration\n\n configuration:\n serviceList:\n - \ - serviceName: npcf-am-policy-control\n - serviceName: npcf-smpolicycontrol\n - \ suppFeat: 3fff\n - serviceName: npcf-bdtpolicycontrol\n - - serviceName: npcf-policyauthorization\n suppFeat: 3\n - serviceName: - npcf-eventexposure\n - serviceName: npcf-ue-policy-control\n\n sbi:\n - \ scheme: http\n registerIPv4: pcf-npcf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/pcf.key\n pem: config/TLS/pcf.pem\n - \ \n mongodb: # the mongodb connected by 
this PCF\n name: - free5gc # name of the mongodb\n url: mongodb://mongodb:27017 - # a valid URL of the mongodb\n \n nrfUri: http://nrf-nnrf:8000\n pcfName: - PCF\n timeFormat: 2019-01-02 15:04:05\n defaultBdtRefId: BdtPolicyId-\n - \ locality: area1\n\n logger:\n PCF:\n ReportCaller: false\n - \ debugLevel: info\n" - pcf/pcf-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-pcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: pcf - template: - metadata: - labels: - project: free5gc - nf: pcf - spec: - initContainers: - - name: wait-nrf - image: towards5gs/initcurl:1.0.0 - env: - - name: DEPENDENCIES - value: http://nrf-nnrf:8000 - command: ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure --connect-timeout 1 -s -o /dev/null -w "%{http_code}" $dependency) -ne 200 ]; do echo waiting for dependencies; sleep 1; done; done;'] - - containers: - - name: pcf - image: towards5gs/free5gc-pcf:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 80 - command: ["./pcf"] - args: ["-c", "../config/pcfcfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: pcf-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: pcf-volume - projected: - sources: - - configMap: - name: pcf-configmap - pcf/pcf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: pcf-npcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: pcf - udm/udm-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udm-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udmcfg.yaml: |\n info:\n version: 1.0.2\n description: - UDM initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nudm-sdm\n - nudm-uecm\n - nudm-ueau\n - nudm-ee\n - \ - nudm-pp\n \n sbi:\n scheme: http\n registerIPv4: - udm-nudm # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # IP used - to bind the service\n port: 80\n tls:\n key: config/TLS/udm.key\n - \ pem: config/TLS/udm.pem\n \n nrfUri: http://nrf-nnrf:8000\n - \ # test data set from TS33501-f60 Annex C.4\n SuciProfile:\n - - ProtectionScheme: 1 # Protect Scheme: Profile A\n PrivateKey: c53c22208b61860b06c62e5406a7b330c2b577aa5558981510d128247d38bd1d\n - \ PublicKey: 5a8d38864820197c3394b92613b20b91633cbd897119273bf8e4a6f4eec0a650\n - \ - ProtectionScheme: 2 # Protect Scheme: Profile B\n PrivateKey: - F1AB1074477EBCC7F554EA1C5FC368B1616730155E0041AC447D6301975FECDA\n PublicKey: - 0472DA71976234CE833A6907425867B82E074D44EF907DFB4B3E21C1C2256EBCD15A7DED52FCBB097A4ED250E036C7B9C8C7004C4EEDC4F068CD7BF8D3F900E3B4\n\n - \ logger:\n UDM:\n ReportCaller: false\n debugLevel: info\n" - udm/udm-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udm\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udm\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udm\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udm\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: 
DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udm\n image: towards5gs/free5gc-udm:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udm\"]\n args: [\"-c\", \"../config/udmcfg.yaml\"]\n env: - \n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: udm-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udm-volume\n projected:\n sources:\n - - configMap:\n name: udm-configmap\n" - udm/udm-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udm-nudm - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udm - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udm - udr/udr-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udr-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udrcfg.yaml: |\n info:\n version: 1.0.1\n description: - UDR initial local configuration\n\n configuration:\n sbi:\n scheme: - http\n registerIPv4: udr-nudr # IP used to register to NRF\n bindingIPv4: - 0.0.0.0 # IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/udr.key\n pem: config/TLS/udr.pem\n\n mongodb:\n name: - free5gc\n url: mongodb://mongodb:27017 \n \n nrfUri: - http://nrf-nnrf:8000\n\n logger:\n MongoDBLibrary:\n ReportCaller: - false\n debugLevel: info\n OpenApi:\n ReportCaller: false\n - \ debugLevel: info\n PathUtil:\n ReportCaller: false\n debugLevel: - info\n UDR:\n ReportCaller: false\n debugLevel: info\n" - udr/udr-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udr\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udr\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udr\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udr\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udr\n image: towards5gs/free5gc-udr:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udr\"]\n args: [\"-c\", \"../config/udrcfg.yaml\"]\n env: - \n - name: DB_URI\n value: mongodb://mongodb/free5gc\n - - name: GIN_MODE\n value: release\n volumeMounts:\n - - mountPath: /free5gc/config/\n name: udr-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udr-volume\n projected:\n sources:\n - - configMap:\n name: udr-configmap\n" - udr/udr-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udr-nudr - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udr - spec: - type: ClusterIP - ports: - 
- port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udr - webui/webui-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n - \ name: webui-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ app: free5gc\ndata:\n webuicfg.yaml: |\n info:\n version: 1.0.0\n - \ description: WEBUI initial local configuration\n\n configuration:\n - \ mongodb:\n name: free5gc\n url: mongodb://mongodb:27017\n - \ \n logger:\n WEBUI:\n ReportCaller: false\n debugLevel: - info\n" - webui/webui-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-webui - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: webui - template: - metadata: - labels: - project: free5gc - nf: webui - spec: - initContainers: - - name: wait-mongo - image: busybox:1.32.0 - env: - - name: DEPENDENCIES - value: mongodb:27017 - command: ["sh", "-c", "until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep 2; done;"] - containers: - - name: webui - image: towards5gs/free5gc-webui:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 5000 - command: ["./webconsole"] - args: ["-c", "../config/webuicfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: webui-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - readinessProbe: - initialDelaySeconds: 0 - periodSeconds: 1 - timeoutSeconds: 1 - failureThreshold: 40 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - livenessProbe: - initialDelaySeconds: 120 - periodSeconds: 10 - timeoutSeconds: 10 - failureThreshold: 3 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: webui-volume - projected: - sources: - - configMap: - name: webui-configmap - webui/webui-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: webui-service - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - type: NodePort - ports: - - port: 5000 - targetPort: 5000 - nodePort: 30500 - protocol: TCP - name: http - selector: - project: free5gc - nf: webui - revision: v1 - workspaceName: v1 -status: - renderStatus: - error: "" - result: - exitCode: 0 - metadata: - creationTimestamp: null -``` -
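-
-Rather than reading the whole object, you can extract a single file from the *resources* map with a
-jsonpath query; for example, the package's *Kptfile* (a sketch, reusing the package revision from
-above):
-
-```bash
-kubectl get packagerevisionresources -n porch-demo \
-  external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 \
-  -o jsonpath='{.spec.resources.Kptfile}'
-```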
-
-## The porchctl command
-
-The `porchctl` command is an administration command for acting on Porch `Repository` (repo) and `PackageRevision` (rpkg)
-CRs. See its [documentation]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md" %}}) for usage information.
-
-Check that porchctl lists our repositories:
-
-```bash
-porchctl repo -n porch-demo get
-NAME TYPE CONTENT DEPLOYMENT READY ADDRESS
-edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git
-external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git
-management git Package false True http://172.18.255.200:3000/nephio/management.git
-```
-
-Check that porchctl lists our remote packages (PackageRevisions): - -``` -porchctl rpkg -n porch-demo get -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.network-function.great-outdoors free5gc-cp main main false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-cp v1 v1 true Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator main main false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator v1 v1 false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator v2 v2 false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator v3 v3 false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator v4 v4 false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-operator v5 v5 true Published external-blueprints -porch-test.network-function.great-outdoors free5gc-upf main main false Published external-blueprints -porch-test.network-function.great-outdoors free5gc-upf v1 v1 true Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp main main false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp v1 v1 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp v2 v2 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp v3 v3 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp v4 v4 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-amf-bp v5 v5 true Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp main main false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp v1 v1 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp v2 v2 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp v3 v3 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp v4 v4 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-smf-bp v5 v5 true Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp main main false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp v1 v1 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp v2 v2 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp v3 v3 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp v4 v4 false Published external-blueprints -porch-test.network-function.great-outdoors pkg-example-upf-bp v5 v5 true Published external-blueprints -``` -
-
-The output above is similar to the output of `kubectl get packagerevision -n porch-demo` above.
-
-## Creating a blueprint in Porch
-
-### Blueprint with no Kpt pipelines
-
-Create a new package in our *management* repository using the sample *network-function* package provided. This network
-function package is a demo Kpt package that installs [Nginx](https://github.com/nginx).
-
-```
-porchctl -n porch-demo rpkg init network-function --repository=management --workspace=v1
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 created
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Draft       management
-```
-
-This command creates a new *PackageRevision* CR in Porch and also creates a branch called *drafts/network-function/v1* in our
-Gitea *management* repository. Use the Gitea web UI to confirm that the branch has been created and note that it only has
-default content as yet.
-
-We now pull the package we have initialized from Porch.
-
-```
-porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function
-```
-
-We update the initialized package and add our local changes.
-
-```
-cp blueprints/local-changes/network-function/* blueprints/initialized/network-function
-```
-
-Now, we push the package contents to Porch:
-
-```
-porchctl -n porch-demo rpkg push management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function
-```
-
-Check the Gitea web UI to confirm that the actual package contents have been pushed.
-
-Now we propose and approve the package.
-
-```
-porchctl -n porch-demo rpkg propose management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 proposed
-
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Proposed    management
-
-porchctl -n porch-demo rpkg approve management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 approved
-
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                  PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1              v1         true     Published   management
-```
-
-Once we approve the package, the package is merged into the main branch in the *management* repository and the branch called
-*drafts/network-function/v1* in that repository is removed. Use the Gitea UI to verify this. We now have our blueprint package
-in our *management* repository and we can deploy this package into workload clusters.
-
-### Blueprint with a Kpt pipeline
-
-The second blueprint in the *blueprint* directory is called *network-function-auto-namespace*. This network
-function is exactly the same as the *network-function* package except that it has a Kpt function that automatically
-creates a namespace with the namespace configured in the name field in the *package-context.yaml* file. Note that no
-namespace is defined in the metadata of the *deployment.yaml* file of this Kpt package.
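-
-That automatic namespacing comes from the package's Kptfile pipeline. Although the actual Kptfile is not reproduced
-here, the behavior described above corresponds to a pipeline of roughly the following shape (a hypothetical sketch:
-the set-namespace function can read the namespace from the *data.name* field of *package-context.yaml* when that file
-is passed as its *configPath*):
-
-```yaml
-apiVersion: kpt.dev/v1
-kind: Kptfile
-metadata:
-  name: network-function-auto-namespace
-pipeline:
-  mutators:
-    # set-namespace reads data.name from package-context.yaml and applies it
-    # as metadata.namespace on the package's resources at render time
-    - image: gcr.io/kpt-fn/set-namespace:v0.4.1
-      configPath: package-context.yaml
-```
-
-We use the same sequence of commands again to publish our blueprint package for *network-function-auto-namespace*.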
- -``` -porchctl -n porch-demo rpkg init network-function-auto-namespace --repository=management --workspace=v1 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 created - -porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace - -cp blueprints/local-changes/network-function-auto-namespace/* blueprints/initialized/network-function-auto-namespace - -porchctl -n porch-demo rpkg push management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace -``` - -Examine the *drafts/network-function-auto-namespace/v1* branch in Gitea. Notice that the set-namespace Kpt function in -the pipeline in the *Kptfile* has set the namespace in the *deployment.yaml* file to the value default-namespace-name, -which it read from the *package-context.yaml* file. - -Now we propose and approve the package. - -``` -porchctl -n porch-demo rpkg propose management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 proposed - -porchctl -n porch-demo rpkg approve management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 approved - -porchctl -n porch-demo rpkg get --name network-function-auto-namespace -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -management-f9a6f2802111b9e81c296422c03aae279725f6df network-function-auto-namespace v1 main false Published management -management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 network-function-auto-namespace v1 v1 true Published management - -``` - -## Deploying a blueprint onto a workload cluster - -### Blueprint with no Kpt pipelines - -The process of deploying a blueprint package from our *management* repository clones the package, then modifies it for use on -the workload cluster. The cloned package is then initialized, pushed, proposed, and approved onto the *edge1* repository. -Remember that the *edge1* repository is being monitored by configsync from the edge1 cluster, so once the package appears in -the *edge1* repository on the management cluster, it will be pulled by configsync and applied to the edge1 cluster. - -``` -porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp - -find tmp_packages_for_deployment/edge1-network-function-a.clone.tmp - -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/deployment.yaml -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/README.md -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/Kptfile -tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/package-context.yaml -``` -The package we created in the last section is cloned. We now remove the original metadata from the package. -``` -rm tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata -``` - -We use a *kpt* function to change the namespace that will be used for the deployment of the network function. 
-
-```
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -- namespace=edge1-network-function-a
-
-[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 300ms
-  Results:
-    [info]: namespace "" updated to "edge1-network-function-a", 1 value(s) changed
-```
-
-We now initialize and push the package to the *edge1* repository:
-
-```
-porchctl -n porch-demo rpkg init edge1-network-function-a --repository=edge1 --workspace=v1
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 created
-
-porchctl -n porch-demo rpkg pull edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a
-
-cp tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-a
-rm -fr tmp_packages_for_deployment/edge1-network-function-a.clone.tmp
-
-porchctl -n porch-demo rpkg push edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-a
-NAME                                             PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   edge1-network-function-a   v1                         false    Draft       edge1
-```
-
-You can verify that the package is in the *drafts/edge1-network-function-a/v1* branch of the deployment repository using the
-Gitea web UI.
-
-Check that the *edge1-network-function-a* package is not deployed on the edge1 cluster yet:
-
-```
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-a
-No resources found in edge1-network-function-a namespace.
-```
-
-We now propose and approve the deployment package, which merges the package to the *edge1* repository and further triggers
-configsync to apply the package to the edge1 cluster.
-
-```
-export KUBECONFIG=~/.kube/kind-management-config
-
-porchctl -n porch-demo rpkg propose edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 proposed
-
-porchctl -n porch-demo rpkg approve edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 approved
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-a
-NAME                                             PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   edge1-network-function-a   v1              v1         true     Published   edge1
-```
-
-We can now check that the *edge1-network-function-a* package is deployed on the edge1 cluster and that the pod is running:
-
-```
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-a
-No resources found in edge1-network-function-a namespace.
-
-kubectl get pod -n edge1-network-function-a
-NAME                               READY   STATUS              RESTARTS   AGE
-network-function-9779fc9f5-4rqp2   0/1     ContainerCreating   0          9s
-
-kubectl get pod -n edge1-network-function-a
-NAME                               READY   STATUS    RESTARTS   AGE
-network-function-9779fc9f5-4rqp2   1/1     Running   0          44s
-```
-
-### Blueprint with a Kpt pipeline
-
-The process for deploying a blueprint with a *Kpt* pipeline runs the Kpt pipeline automatically with whatever configuration
-we give it. Rather than explicitly running a *Kpt* function to change the namespace, we will specify the namespace as
-configuration and the pipeline will apply it to the deployment.
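-
-Concretely, the configuration in question lives in the package's *package-context.yaml* file. A sketch of its usual
-shape (the file is generated by `rpkg init`; the *kptfile.kpt.dev* ConfigMap name and the default value shown here
-follow the standard kpt package-context convention):
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: kptfile.kpt.dev
-  annotations:
-    config.kubernetes.io/local-config: "true"
-data:
-  # The auto-namespace pipeline reads this value as the target namespace
-  name: default-namespace-name
-```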
-
-```
-porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-
-find tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/deployment.yaml
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/README.md
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/Kptfile
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/package-context.yaml
-```
-
-We now remove the original metadata from the package.
-
-```
-rm tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata
-```
-
-With the package from the last section now cloned, we initialize and push the package to the *edge1* repository:
-
-```
-porchctl -n porch-demo rpkg init edge1-network-function-auto-namespace-a --repository=edge1 --workspace=v1
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 created
-
-porchctl -n porch-demo rpkg pull edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-
-cp tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-rm -fr tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-```
-
-We now simply configure the namespace we want to apply. Edit the
-*tmp_packages_for_deployment/edge1-network-function-auto-namespace-a/package-context.yaml* file and set the namespace to use:
-
-```
-8c8
-<   name: default-namespace-name
----
->   name: edge1-network-function-auto-namespace-a
-```
-
-We now push the package to the *edge1* repository:
-
-```
-porchctl -n porch-demo rpkg push edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-  Results:
-    [info]: namespace "default-namespace-name" updated to "edge1-network-function-auto-namespace-a", 1 value(s) changed
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a
-```
-
-You can verify that the package is in the *drafts/edge1-network-function-auto-namespace-a/v1* branch of the deployment
-repository using the Gitea web UI. You can see that the Kpt pipeline fired and set the edge1-network-function-auto-namespace-a
-namespace in the *deployment.yaml* file on the *drafts/edge1-network-function-auto-namespace-a/v1* branch on the *edge1*
-repository in Gitea.
-
-Check that the *edge1-network-function-auto-namespace-a* package is not deployed on the edge1 cluster yet:
-
-```
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-No resources found in edge1-network-function-auto-namespace-a namespace.
-```
-
-We now propose and approve the deployment package, which merges the package to the *edge1* repository and further triggers
-configsync to apply the package to the edge1 cluster.
-
-```
-export KUBECONFIG=~/.kube/kind-management-config
-
-porchctl -n porch-demo rpkg propose edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 proposed
-
-porchctl -n porch-demo rpkg approve edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 approved
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a
-NAME                                             PACKAGE                                   WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180   edge1-network-function-auto-namespace-a   v1              v1         true     Published   edge1
-```
-
-We can now check that the *edge1-network-function-auto-namespace-a* package is deployed on the edge1 cluster and that the
-pod is running:
-
-```
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-No resources found in edge1-network-function-auto-namespace-a namespace.
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS              RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   0/1     ContainerCreating   0          3s
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS    RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   0          10s
-```
-
-## Deploying using Package Variant Sets
-
-### Simple PackageVariantSet
-
-The PackageVariantSet CR is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-
-metadata:
-  name: network-function
-  namespace: porch-demo
-
-spec:
-  upstream:
-    repo: management
-    package: network-function
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-b
-      - network-function-c
-```
-
-In this very simple PackageVariantSet, the *network-function* package in the *management* repository is cloned into the *edge1*
-repository as the *network-function-b* and *network-function-c* package variants.
-
-{{% alert title="Note" color="primary" %}}
-
-This simple PackageVariantSet does not specify any configuration changes. Normally, as well as cloning and renaming,
-configuration changes would be applied to each package variant.
-
-Use `kubectl explain PackageVariantSet` to get help on the structure of the PackageVariantSet CRD.
-
-{{% /alert %}}
-
-Applying the PackageVariantSet creates the new packages as draft packages:
-
-```bash
-kubectl apply -f simple-variant.yaml
-
-kubectl get PackageRevisions -n porch-demo | grep -v 'external-blueprints'
-NAME                                                  PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-bc8294d121360ad305c9a826a8734adcf5f1b9c0        network-function-a   v1                 main       false    Published   edge1
-edge1-9b4b4d99c43b5c5c8489a47bbce9a61f79904946        network-function-a   v1                 v1         true     Published   edge1
-edge1-a31b56c7db509652f00724dd49746660757cd98a        network-function-b   packagevariant-1              false    Draft       edge1
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4        network-function-c   packagevariant-1              false    Draft       edge1
-management-49580fc22bcf3bf51d334a00b6baa41df597219e   network-function     v1                 main       false    Published   management
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function     v1                 v1         true     Published   management
-
-porchctl -n porch-demo rpkg get --name network-function-b
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-a31b56c7db509652f00724dd49746660757cd98a   network-function-b   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-c
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4   network-function-c   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the *edge1* repository. We can also examine the
-*PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                        AGE
-network-function-edge1-network-function-b   76s
-network-function-edge1-network-function-c   41s
-```
-
-It is also interesting to examine the YAML of the *PackageVariant*:
-
-```yaml
-kubectl get PackageVariant -n porch-demo -o yaml
-apiVersion: v1
-items:
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-b
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237053"
-    uid: 7a81099c-5a0b-49d8-b73c-48e33cd134e5
-  spec:
-    downstream:
-      package: network-function-b
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:00Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-a31b56c7db509652f00724dd49746660757cd98a
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-c
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237056"
-    uid: da037d0a-9a7a-4e85-842c-1324e9da819a
-  spec:
-    downstream:
-      package: network-function-c
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:01Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4
-kind: List
-metadata:
-  resourceVersion: ""
-```
-
-We now want to customize and deploy our two packages. To do this we must pull the packages locally, run the *kpt*
-functions, and then push the rendered packages back up to the *edge1* repository.
-
-```bash
-porchctl rpkg pull edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-b -- namespace=network-function-b
-porchctl rpkg push edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-
-porchctl rpkg pull edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-c -- namespace=network-function-c
-porchctl rpkg push edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-```
-
-Check that the namespace has been updated on the two packages in the *edge1* repository using the Gitea web UI.
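-
-If you prefer the command line to the web UI, an equivalent spot check is to pull one of the drafts again and grep for
-the namespace (a sketch using a throwaway directory; the package revision name is taken from the listings above):
-
-```bash
-# Pull the rendered draft of network-function-b into a scratch directory
-porchctl rpkg pull edge1-a31b56c7db509652f00724dd49746660757cd98a /tmp/nf-b-check --namespace=porch-demo
-
-# The set-namespace function should have stamped the new namespace into the package's resources
-grep -R "namespace: network-function-b" /tmp/nf-b-check
-```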
-
-Now our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a proposed
-
-porchctl rpkg approve edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a approved
-
-porchctl rpkg propose edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 proposed
-
-porchctl rpkg approve edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 approved
-```
-
-We can now check that the *network-function-b* and *network-function-c* packages are deployed on the edge1 cluster and
-that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE            NAME                               READY   STATUS    RESTARTS   AGE
-network-function-a   network-function-9779fc9f5-2tswc   1/1     Running   0          19h
-network-function-b   network-function-9779fc9f5-6zwhh   1/1     Running   0          76s
-network-function-c   network-function-9779fc9f5-h7nsb   1/1     Running   0          41s
-```
-
-### Using a PackageVariantSet to automatically set the package name and package namespace
-
-The *PackageVariantSet* CR is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  name: network-function-auto-namespace
-  namespace: porch-demo
-spec:
-  upstream:
-    repo: management
-    package: network-function-auto-namespace
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-auto-namespace-x
-      - network-function-auto-namespace-y
-    template:
-      downstream:
-        packageExpr: "target.package + '-cumulonimbus'"
-```
-
-In this *PackageVariantSet*, the *network-function-auto-namespace* package in the *management* repository is cloned into the
-*edge1* repository as the *network-function-auto-namespace-x* and *network-function-auto-namespace-y* package variants,
-similar to the *PackageVariantSet* in *simple-variant.yaml*.
-
-An extra *template* section is provided for the repositories in the *PackageVariantSet*:
-
-```yaml
-template:
-  downstream:
-    packageExpr: "target.package + '-cumulonimbus'"
-```
-
-This template means that each package in the *spec.targets.repositories.packageNames* list will have the suffix
-*-cumulonimbus* added to its name. This allows us to automatically generate unique package names. Applying the
-*PackageVariantSet* also automatically sets a unique namespace for each network function because applying the
-*PackageVariantSet* automatically triggers the Kpt pipeline in the *network-function-auto-namespace* *Kpt* package to
-generate unique namespaces for each deployed package.
-
-{{% alert title="Note" color="primary" %}}
-
-Many other mutations can be performed using a *PackageVariantSet*. Use `kubectl explain PackageVariantSet` to get help on
-the structure of the *PackageVariantSet* CRD to see the various mutations that are possible.
-
-{{% /alert %}}
-
-Applying the *PackageVariantSet* creates the new packages as draft packages:
-
-```bash
-kubectl apply -f name-namespace-variant.yaml
-packagevariantset.config.porch.kpt.dev/network-function-auto-namespace created
-
-kubectl get -n porch-demo PackageVariantSet network-function-auto-namespace
-NAME                              AGE
-network-function-auto-namespace   38s
-
-kubectl get PackageRevisions -n porch-demo | grep auto-namespace
-edge1-1f521f05a684adfa8562bf330f7bc72b50e21cc5        edge1-network-function-auto-namespace-a          v1                 main   false   Published   edge1
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180        edge1-network-function-auto-namespace-a          v1                 v1     true    Published   edge1
-edge1-009659a8532552b86263434f68618554e12f4f7c        network-function-auto-namespace-x-cumulonimbus   packagevariant-1          false   Draft       edge1
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e        network-function-auto-namespace-y-cumulonimbus   packagevariant-1          false   Draft       edge1
-management-f9a6f2802111b9e81c296422c03aae279725f6df   network-function-auto-namespace                  v1                 main   false   Published   management
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3   network-function-auto-namespace                  v1                 v1     true    Published   management
-```
-
-{{% alert title="Note" color="primary" %}}
-
-The suffix `-cumulonimbus` has been appended to the package names `network-function-auto-namespace-x` and
-`network-function-auto-namespace-y`.
-
-{{% /alert %}}
-
-Examine the *edge1* repository on Gitea and you should see two new draft branches.
-
-- drafts/network-function-auto-namespace-x-cumulonimbus/packagevariant-1
-- drafts/network-function-auto-namespace-y-cumulonimbus/packagevariant-1
-
-In these packages, you will see that:
-
-1. The package name has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus in all files in the packages
-2. The namespace has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *deployment.yaml* files
-3. The PackageVariant has set the data.name field to network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *package-context.yaml* files
-
-This has all been performed automatically; we have not had to perform the
-`porchctl rpkg pull`/`kpt fn eval`/`porchctl rpkg push` combination of commands to make these changes as we had to in the
-*simple-variant.yaml* case above.
-
-Now, let us explore the packages further:
-
-```bash
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-x-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-009659a8532552b86263434f68618554e12f4f7c   network-function-auto-namespace-x-cumulonimbus   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-y-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e   network-function-auto-namespace-y-cumulonimbus   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the edge1 repository.
-We can also examine the
-*PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                                               AGE
-network-function-auto-namespace-edge1-network-function-35079f9f   3m41s
-network-function-auto-namespace-edge1-network-function-d521d2c0   3m41s
-network-function-edge1-network-function-b                         38m
-network-function-edge1-network-function-c                         38m
-```
-
-It is also interesting to examine the YAML of a *PackageVariant*:
-
-```yaml
-kubectl get PackageVariant -n porch-demo network-function-auto-namespace-edge1-network-function-35079f9f -o yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  creationTimestamp: "2024-01-24T15:10:19Z"
-  finalizers:
-  - config.porch.kpt.dev/packagevariants
-  generation: 1
-  labels:
-    config.porch.kpt.dev/packagevariantset: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  name: network-function-auto-namespace-edge1-network-function-35079f9f
-  namespace: porch-demo
-  ownerReferences:
-  - apiVersion: config.porch.kpt.dev/v1alpha2
-    controller: true
-    kind: PackageVariantSet
-    name: network-function-auto-namespace
-    uid: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  resourceVersion: "404083"
-  uid: 5ae69c2d-6aac-4942-b717-918325650190
-spec:
-  downstream:
-    package: network-function-auto-namespace-x-cumulonimbus
-    repo: edge1
-  upstream:
-    package: network-function-auto-namespace
-    repo: management
-    revision: v1
-status:
-  conditions:
-  - lastTransitionTime: "2024-01-24T15:10:19Z"
-    message: all validation checks passed
-    reason: Valid
-    status: "False"
-    type: Stalled
-  - lastTransitionTime: "2024-01-24T15:10:49Z"
-    message: successfully ensured downstream package variant
-    reason: NoErrors
-    status: "True"
-    type: Ready
-  downstreamTargets:
-  - name: edge1-009659a8532552b86263434f68618554e12f4f7c
-```
-
-Our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c proposed
-
-porchctl rpkg approve edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c approved
-
-porchctl rpkg propose edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e proposed
-
-porchctl rpkg approve edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e approved
-```
-
-We can now check that the packages are deployed on the edge1 cluster and that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                 NAME                                               READY   STATUS    RESTARTS       AGE
-edge1-network-function-a                  network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a   network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)   4d22h
-network-function-b                        network-function-9779fc9f5-twh2g                   1/1     Running   0              45m
-network-function-c                        network-function-9779fc9f5-whhr8                   1/1     Running   0              44m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS              RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running             1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running             1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   0/1     ContainerCreating   0              1s
-network-function-b
network-function-9779fc9f5-twh2g 1/1 Running 0 45m -network-function-c network-function-9779fc9f5-whhr8 1/1 Running 0 44m - -kubectl get pod -A | egrep '(NAMESPACE|network-function)' -NAMESPACE NAME READY STATUS RESTARTS AGE -edge1-network-function-a network-function-9779fc9f5-87scj 1/1 Running 1 (2d1h ago) 4d22h -edge1-network-function-auto-namespace-a network-function-auto-namespace-85bc658d67-rbzt6 1/1 Running 1 (2d1h ago) 4d22h -network-function-auto-namespace-x-cumulonimbus network-function-auto-namespace-85bc658d67-86gml 1/1 Running 0 10s -network-function-b network-function-9779fc9f5-twh2g 1/1 Running 0 45m -network-function-c network-function-9779fc9f5-whhr8 1/1 Running 0 45m - -kubectl get pod -A | egrep '(NAMESPACE|network-function)' -NAMESPACE NAME READY STATUS RESTARTS AGE -edge1-network-function-a network-function-9779fc9f5-87scj 1/1 Running 1 (2d1h ago) 4d22h -edge1-network-function-auto-namespace-a network-function-auto-namespace-85bc658d67-rbzt6 1/1 Running 1 (2d1h ago) 4d22h -network-function-auto-namespace-x-cumulonimbus network-function-auto-namespace-85bc658d67-86gml 1/1 Running 0 50s -network-function-b network-function-9779fc9f5-twh2g 1/1 Running 0 46m -network-function-c network-function-9779fc9f5-whhr8 1/1 Running 0 45m - -kubectl get pod -A | egrep '(NAMESPACE|network-function)' -NAMESPACE NAME READY STATUS RESTARTS AGE -edge1-network-function-a network-function-9779fc9f5-87scj 1/1 Running 1 (2d1h ago) 4d22h -edge1-network-function-auto-namespace-a network-function-auto-namespace-85bc658d67-rbzt6 1/1 Running 1 (2d1h ago) 4d22h -network-function-auto-namespace-x-cumulonimbus network-function-auto-namespace-85bc658d67-86gml 1/1 Running 0 51s -network-function-auto-namespace-y-cumulonimbus network-function-auto-namespace-85bc658d67-tp5m8 0/1 ContainerCreating 0 1s -network-function-b network-function-9779fc9f5-twh2g 1/1 Running 0 46m -network-function-c network-function-9779fc9f5-whhr8 1/1 Running 0 45m - -kubectl get pod -A | egrep '(NAMESPACE|network-function)' -NAMESPACE NAME READY STATUS RESTARTS AGE -edge1-network-function-a network-function-9779fc9f5-87scj 1/1 Running 1 (2d1h ago) 4d22h -edge1-network-function-auto-namespace-a network-function-auto-namespace-85bc658d67-rbzt6 1/1 Running 1 (2d1h ago) 4d22h -network-function-auto-namespace-x-cumulonimbus network-function-auto-namespace-85bc658d67-86gml 1/1 Running 0 54s -network-function-auto-namespace-y-cumulonimbus network-function-auto-namespace-85bc658d67-tp5m8 1/1 Running 0 4s -network-function-b network-function-9779fc9f5-twh2g 1/1 Running 0 46m -network-function-c network-function-9779fc9f5-whhr8 1/1 Running 0 45m -``` diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/_index.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/_index.md deleted file mode 100644 index 03138433..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/_index.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: "Working with Package Revisions" -type: docs -weight: 3 -description: A group of guides outlining how to interact with Package Revisions in Porch ---- - -## Prerequisites - -- Porch deployed on a Kubernetes cluster [Setup Porch Guide]({{% relref "/docs/neo-porch/3_getting_started/installing-porch.md" %}}). -- **Porchctl** CLI tool installed [Setup Porchctl Guide]({{% relref "/docs/neo-porch/3_getting_started/installing-porchctl.md" %}}). 
-- A Git repository registered with Porch [Setup Repositories Guide]({{% relref "/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-registration.md" %}}). -- **Kubectl** configured to access your cluster. - ---- - -## Understanding Package Revisions - -In Porch, you work with **PackageRevisions** - there is no separate "Package" resource. When we say "package" colloquially, we're referring to a PackageRevision. The `rpkg` command stands for "revision package". - -PackageRevisions are Kubernetes resources that represent versioned collections of configuration files stored in Git repositories. Each PackageRevision contains: - -- **KRM Resources**: Kubernetes Resource Model files (YAML configurations) -- **Kptfile**: Package metadata and pipeline configuration -- **Pipeline Functions**: KRM functions that transform package resources -- **Lifecycle State**: Current state in the package workflow - -**PackageRevision Operations:** - -- **Creation**: `init`, `clone`, `copy` - Create new PackageRevisions from scratch, existing packages, or new revisions -- **Inspection**: `get` - List and view PackageRevision information and metadata -- **Content Management**: `pull`, `push` - Move PackageRevision content between Git repositories and local filesystem -- **Lifecycle Management**: `propose`, `approve`, `reject` - Control PackageRevision workflow states -- **Upgrading**: `upgrade` - Create new revision upgrading downstream to more recent upstream package -- **Deletion**: `propose-delete`, `del` - Propose deletion of published PackageRevisions, then delete them - ---- - -## PackageRevision Lifecycle - -PackageRevisions follow a structured lifecycle with three main states: - -- **Draft**: Work in progress, fully editable. Revision number is 0. -- **Proposed**: Ready for review, still editable. Revision number remains 0. -- **Published**: Approved and immutable. Revision number increments to 1+. - -**Lifecycle Transitions:** - -1. **Draft → Proposed**: `porchctl rpkg propose` - Signal readiness for review -2. **Proposed → Published**: `porchctl rpkg approve` - Approve and make immutable -3. **Proposed → Draft**: `porchctl rpkg reject` - Return for more work - -**Additional States:** - -- **DeletionProposed**: PackageRevision marked for deletion, pending approval - ---- - -## PackageRevision Naming - -Porch generates PackageRevision names automatically using a consistent format: - -- **Format**: `{repositoryName}.{packageName}.{workspaceName}` -- **Example**: `porch-test.my-first-package.v1` - -**Name Components:** - -- **Repository Name**: Name of the registered Git repository -- **Package Name**: Logical name for the package (can have multiple revisions) -- **Workspace Name**: Unique identifier within the package (maps to Git branch/tag) - -**Important Notes:** - -- Workspace names must be unique within a package -- Multiple PackageRevisions can share the same package name with different workspaces -- Published PackageRevisions get tagged in Git using the workspace name - ---- - -## Working with PackageRevision Content - -PackageRevisions contain structured configuration files that can be modified through various operations: - -**Local Operations:** - -1. **Pull**: Download PackageRevision contents to local filesystem -2. **Modify**: Edit files locally using standard tools -3. 
-**Push**: Upload changes back to Porch (triggers pipeline rendering)
-
-**Pipeline Processing:**
-
-- KRM functions defined in the Kptfile automatically transform resources
-- Functions run when PackageRevisions are pushed to Porch
-- Common functions: set-namespace, apply-replacements, search-replace
-
-**Content Structure:**
-
-```bash
-package-revision/
-├── Kptfile               # Package metadata and pipeline
-├── .KptRevisionMetadata  # Porch-managed metadata
-├── package-context.yaml  # Package context information
-├── README.md             # Package documentation
-└── *.yaml                # KRM resources
-```
-
----
-
-## Repository Integration
-
-PackageRevisions are stored in Git repositories registered with Porch:
-
-**Git Branch Mapping:**
-
-- **Draft**: Stored in `drafts/{package}/{workspace}` branch
-- **Proposed**: Stored in `proposed/{package}/{workspace}` branch
-- **Published**: Tagged as `{workspace}` and stored in main branch
-
-**Repository Types:**
-
-- **Blueprint Repositories**: Contain upstream package templates for cloning
-- **Deployment Repositories**: Store deployment-ready packages (marked with `--deployment` flag)
-
-**Synchronization:**
-
-- Porch automatically syncs with Git repositories
-- Manual sync: `porchctl repo sync <repository-name>`
-- Periodic sync can be configured with cron expressions
-
----
-
-## Troubleshooting
-
-Common issues when working with PackageRevisions and their solutions:
-
-**PackageRevision stuck in Draft?**
-
-- Check readiness conditions: `porchctl rpkg get <package-revision-name> -o yaml | grep -A 5 conditions`
-- Verify all required fields are populated in the Kptfile
-- Check for pipeline function errors in Porch server logs
-
-**Push fails with conflict?**
-
-- Pull the latest version first: `porchctl rpkg pull <package-revision-name> ./dir`
-- The PackageRevision may have been modified by someone else
-- Resolve conflicts locally and push again
-
-**Cannot modify Published PackageRevision?**
-
-- Published PackageRevisions are immutable by design
-- Create a new revision using `porchctl rpkg copy`
-- Use the copying workflow to create editable versions
-
-**PackageRevision not found?**
-
-- Verify the exact PackageRevision name: `porchctl rpkg get --namespace default`
-- Check you're using the correct namespace
-- Ensure the repository is registered and synchronized
-
-**Permission denied errors?**
-
-- Check RBAC permissions: `kubectl auth can-i get packagerevisions -n default`
-- Verify service account has proper roles for PackageRevision operations
-- Ensure repository authentication is configured correctly
-
-**Pipeline functions failing?**
-
-- Check function image availability and version
-- Verify function configuration in Kptfile
-- Review function logs in Porch server output during push operations
-
----
-
-## Key Concepts
-
-Important terms and concepts for working with PackageRevisions:
-
-- **PackageRevision**: The Kubernetes resource managed by Porch (there is no separate "Package" resource)
-- **Workspace**: Unique identifier for a PackageRevision within a package (maps to Git branch/tag)
-- **Lifecycle**: Current state of the PackageRevision (Draft, Proposed, Published, DeletionProposed)
-- **Revision Number**: 0 for Draft/Proposed, 1+ for Published (increments with each publication)
-- **Latest**: Flag indicating the most recent published PackageRevision of a package
-- **Pipeline**: KRM functions defined in Kptfile that transform PackageRevision resources
-- **Upstream/Downstream**: Relationship between source PackageRevisions (upstream) and their clones (downstream)
-- **Repository**: Git repository where PackageRevisions are stored and managed
-- **Namespace Scope**: PackageRevisions exist within a Kubernetes namespace and inherit repository namespace
-- **Rendering**: Process of executing pipeline functions to transform PackageRevision resources
-- **Kptfile**: YAML file containing PackageRevision metadata, pipeline configuration, and dependency information
-
----
-
diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/cloning-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/cloning-packages.md
deleted file mode 100644
index 50955bb0..00000000
--- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/cloning-packages.md
+++ /dev/null
@@ -1,301 +0,0 @@
----
-title: "Cloning Package Revisions"
-type: docs
-weight: 5
-description: "A step by step guide to cloning package revisions in Porch"
----
-
-## Tutorial Overview
-
-You will learn how to:
-
-1. Find a PackageRevision to clone
-2. Clone a PackageRevision to a different repository
-3. Modify the cloned PackageRevision
-4. Propose and approve the new revision
-
-{{% alert title="Note" color="primary" %}}
-Please note that this tutorial assumes repositories are initialized with the names "blueprints" and "deployments".
-We recommend using these names for simpler copy-pasting of commands; otherwise, replace these values with your repository names in the commands below.
-{{% /alert %}}
-
----
-
-## Understanding Clone Operations
-
-Cloning creates a new PackageRevision based on an existing one and works **across different repositories**. The cloned PackageRevision maintains an **upstream reference** to its source, allowing it to receive updates.
-
----
-
-## When to Use Clone
-
-**Use `porchctl rpkg clone` when:**
-
-- You need to import a package from a **different repository** (cross-repository operation)
-- You want to maintain an **upstream relationship** for future updates
-- You're importing blueprints from a central repository to deployment repositories
-- You need to pull packages from external Git or OCI repositories
-- You want to keep deployment packages synchronized with their upstream sources
-- You're following the blueprint → deployment pattern
-
-**Do NOT use clone when:**
-
-- Source and target are in the **same repository** - use `porchctl rpkg copy` instead
-- You want a completely independent copy with no upstream link - use `porchctl rpkg copy` instead
-- You're just creating a new version within the same repository - use `porchctl rpkg copy` instead
-
-{{% alert title="Note" color="primary" %}}
-For same-repository operations without upstream relationships, see [Copying Package Revisions Guide]({{% relref "/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/copying-packages.md" %}}).
-{{% /alert %}} - ---- - -## Step 1: Find a PackageRevision to Clone - -First, list available PackageRevisions to find one to clone: - -```bash -porchctl rpkg get --namespace default -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -blueprints.nginx.main nginx main 5 true Published blueprints -blueprints.wordpress.v1 wordpress v1 3 true Published blueprints -deployments.my-app.v1 my-app v1 1 true Published deployments -``` - -**What to look for:** - -- Published PackageRevisions from blueprint repositories are good candidates for cloning -- Note the full NAME (e.g., `blueprints.nginx.main`) -- Check the REPOSITORY column to identify the source repository - ---- - -## Step 2: Clone the PackageRevision - -Clone an existing PackageRevision to a different repository: - -```bash -porchctl rpkg clone \ - blueprints.nginx.main \ - my-nginx \ - --namespace default \ - --repository deployments \ - --workspace v1 -``` - -**What this does:** - -- Creates a new PackageRevision based on `blueprints.nginx.main` -- Names the new PackageRevision `my-nginx` (package name) -- Places it in the `deployments` repository (different from source) -- Uses `v1` as the workspace name -- Starts in `Draft` lifecycle state -- Maintains an upstream reference to `blueprints.nginx.main` - -**Verify the clone was created:** - -```bash -porchctl rpkg get --namespace default --name my-nginx -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -deployments.my-nginx.v1 my-nginx v1 0 false Draft deployments -``` - ---- - -## Step 3: Modify the Cloned PackageRevision - -After cloning, you can modify the new PackageRevision. Pull it locally: - -```bash -porchctl rpkg pull deployments.my-nginx.v1 ./my-nginx --namespace default -``` - -**Make your changes:** - -```bash -vim ./my-nginx/Kptfile -``` - -For example, customize the namespace: - -```yaml -apiVersion: kpt.dev/v1 -kind: Kptfile -metadata: - name: my-nginx - annotations: - config.kubernetes.io/local-config: "true" -info: - description: Nginx deployment for production -upstream: - type: git - git: - repo: blueprints - directory: nginx - ref: main -pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configMap: - namespace: production -``` - -**Push the changes back:** - -```bash -porchctl rpkg push deployments.my-nginx.v1 ./my-nginx --namespace default -``` - ---- - -## Step 4: Propose and Approve - -Once you're satisfied with the changes, propose the PackageRevision: - -```bash -porchctl rpkg propose deployments.my-nginx.v1 --namespace default -``` - -**Verify the state change:** - -```bash -porchctl rpkg get deployments.my-nginx.v1 --namespace default -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -deployments.my-nginx.v1 my-nginx v1 0 false Proposed deployments -``` - -**Approve to publish:** - -```bash -porchctl rpkg approve deployments.my-nginx.v1 --namespace default -``` - -**Verify publication:** - -```bash -porchctl rpkg get deployments.my-nginx.v1 --namespace default -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -deployments.my-nginx.v1 my-nginx v1 1 true Published deployments -``` - ---- - -{{% alert title="Note" color="primary" %}} -For complete details on the `porchctl rpkg clone` command options and flags, see the [Porch CLI Guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}}). 
{{% /alert %}}
-
----
-
-## Common Use Cases
-
-Here are practical scenarios where cloning PackageRevisions is useful.
-
-### Importing from Blueprint Repository
-
-Clone a blueprint package to your deployment repository:
-
-```bash
-# Clone from blueprints to deployments
-porchctl rpkg clone \
-  blueprints.base-app.main \
-  my-app \
-  --namespace default \
-  --repository deployments \
-  --workspace v1
-
-# Customize and publish
-porchctl rpkg pull deployments.my-app.v1 ./my-app --namespace default
-# ... customize ...
-porchctl rpkg push deployments.my-app.v1 ./my-app --namespace default
-porchctl rpkg propose deployments.my-app.v1 --namespace default
-porchctl rpkg approve deployments.my-app.v1 --namespace default
-```
-
-### Cloning from External Git Repository
-
-Clone directly from a Git repository URL:
-
-```bash
-# Clone from external Git repo
-porchctl rpkg clone \
-  https://github.com/example/blueprints.git \
-  external-app \
-  --namespace default \
-  --repository deployments \
-  --workspace v1 \
-  --ref main \
-  --directory packages/app
-
-# Publish
-porchctl rpkg propose deployments.external-app.v1 --namespace default
-porchctl rpkg approve deployments.external-app.v1 --namespace default
-```
-
----
-
-## Troubleshooting
-
-Common issues when cloning PackageRevisions and how to resolve them.
-
-**Clone fails with "repository not found"?**
-
-- Verify the target repository exists: `porchctl repo get --namespace default`
-- Check the repository name is correct
-- Ensure you have permission to write to the target repository
-
-**Clone fails with "source not found"?**
-
-- Verify the source PackageRevision exists: `porchctl rpkg get --namespace default`
-- Check the exact name including repository, package, and workspace
-- Ensure you have permission to read the source PackageRevision
-
-**Clone fails with "workspace already exists"?**
-
-- The workspace name must be unique within the package in the target repository
-- Choose a different workspace name: `--workspace v2` or `--workspace prod`
-- List existing workspaces: `porchctl rpkg get --namespace default --name <package-name>`
-
-**Cloned PackageRevision has unexpected content?**
-
-- The clone includes all resources from the source at the time of cloning
-- Pull and inspect: `porchctl rpkg pull <package-revision-name> ./dir --namespace default`
-- Make corrections and push back
-
-**Need to clone within the same repository?**
-
-- Use `porchctl rpkg copy` instead of `clone` for same-repository operations
-- The `copy` command is simpler and doesn't maintain upstream references
-- See [Copying Package Revisions Guide]({{% relref "/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/copying-packages.md" %}})
-
----
-
-## Key Concepts
-
-- **Clone**: Creates a new PackageRevision that can be in a different repository
-- **Upstream Reference**: Cloned packages maintain a link to their source for updates
-- **Cross-repository**: Clone works across different repositories, unlike copy
-- **Source Types**: Can clone from Porch packages, Git URLs, or OCI repositories
-- **Workspace**: Must be unique within the package in the target repository
-- **Blueprint Pattern**: Common pattern is blueprints repository → deployment repositories
-
----
-
diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/copying-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/copying-packages.md
deleted file mode 100644
index 2b7e1fe6..00000000
--- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/copying-packages.md
+++ /dev/null
@@ -1,281 +0,0 @@
----
-title: "Copying Package Revisions"
-type: docs
-weight: 4
-description: "A step by step guide to copying package revisions in Porch"
----
-
-## Tutorial Overview
-
-You will learn how to:
-
-1. Find a PackageRevision to copy
-2. Copy a PackageRevision to create a new revision
-3. Modify the copied PackageRevision
-4. Propose and approve the new revision
-
-{{% alert title="Note" color="primary" %}}
-The tutorial assumes a Porch repository is initialized with the name "porch-test".
-We recommend using this name for simpler copy-pasting of commands; otherwise, replace any "porch-test" value with your repository's name in the commands below.
-{{% /alert %}}
-
----
-
-## Key Concepts
-
-- **Copy**: Creates a new independent PackageRevision within the same repository
-- **Source PackageRevision**: The original PackageRevision being copied
-- **Target PackageRevision**: The new PackageRevision created by the copy operation
-- **Workspace**: Must be unique within the package for the target
-- **Same-repository operation**: Copy only works within a single repository
-- **Immutability**: Published PackageRevisions cannot be modified, only copied
-- **Clone vs Copy**: Use clone for cross-repository operations, copy for same-repository versions
-
----
-
-## Understanding Copy Operations
-
-Copying creates a new PackageRevision based on an existing one **within the same repository**. The copied PackageRevision is completely **independent with no upstream link** to the source.
-
----
-
-## When to Use Copy
-
-**Use `porchctl rpkg copy` when:**
-
-- You need to create a new version of a published PackageRevision (published revisions are immutable)
-- You want to create variations of a package within the same repository
-- You need an independent copy with no upstream relationship
-- You're iterating on a package and need a new workspace
-- Source and target are in the **same repository**
-
-**Do NOT use copy when:**
-
-- You need to move a package to a **different repository** - use `porchctl rpkg clone` instead
-- You want to maintain an upstream relationship for updates - use `porchctl rpkg clone` instead
-- You're importing blueprints from a central repository - use `porchctl rpkg clone` instead
-
-{{% alert title="Note" color="primary" %}}
-For cross-repository operations or maintaining upstream relationships, see the [Cloning Package Revisions Guide]({{% relref "/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/cloning-packages.md" %}}).
-{{% /alert %}} - ---- - -## Step 1: Find a PackageRevision to Copy - -First, list available PackageRevisions to find one to copy: - -```bash -porchctl rpkg get --namespace default -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v1 my-app v1 1 true Published porch-test -blueprints.nginx.main nginx main 5 true Published blueprints -``` - -**What to look for:** - -- Published PackageRevisions are good candidates for copying -- Note the full NAME (e.g., `porch-test.my-app.v1`) -- Check the LATEST column to find the most recent version - ---- - -## Step 2: Copy the PackageRevision - -Copy an existing PackageRevision to create a new one: - -```bash -porchctl rpkg copy \ - porch-test.my-app.v1 \ - my-app \ - --namespace default \ - --workspace v2 -``` - -**What this does:** - -- Creates a new PackageRevision based on `porch-test.my-app.v1` -- Names the new PackageRevision `my-app` (package name) -- Uses `v2` as the workspace name (must be unique within the package) -- Starts in `Draft` lifecycle state -- Copies all resources from the source PackageRevision - -**Verify the copy was created:** - -```bash -porchctl rpkg get --namespace default --name my-app -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v1 my-app v1 1 true Published porch-test -porch-test.my-app.v2 my-app v2 0 false Draft porch-test -``` - ---- - -## Step 3: Modify the Copied PackageRevision - -After copying, you can modify the new PackageRevision. Pull it locally: - -```bash -porchctl rpkg pull porch-test.my-app.v2 ./my-app-v2 --namespace default -``` - -**Make your changes:** - -```bash -vim ./my-app-v2/Kptfile -``` - -For example, you can update the description: - -```yaml -apiVersion: kpt.dev/v1 -kind: Kptfile -metadata: - name: my-app - annotations: - config.kubernetes.io/local-config: "true" -info: - description: My app version 2 with improvements -pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configMap: - namespace: production -``` - -**Push the changes back:** - -```bash -porchctl rpkg push porch-test.my-app.v2 ./my-app-v2 --namespace default -``` - ---- - -## Step 4: Propose and Approve - -Once you're satisfied with the changes, propose the PackageRevision: - -```bash -porchctl rpkg propose porch-test.my-app.v2 --namespace default -``` - -**Verify the state change:** - -```bash -porchctl rpkg get porch-test.my-app.v2 --namespace default -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v2 my-app v2 0 false Proposed porch-test -``` - -**Approve to publish:** - -```bash -porchctl rpkg approve porch-test.my-app.v2 --namespace default -``` - -**Verify the publication:** - -```bash -porchctl rpkg get --namespace default --name my-app -``` - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v1 my-app v1 1 false Published porch-test -porch-test.my-app.v2 my-app v2 2 true Published porch-test -``` - -Notice the following changes: - -- `v2` now has revision number `2` -- `v2` is marked as `LATEST` -- `v1` is no longer the latest - ---- - -{{% alert title="Note" color="primary" %}} -For complete details on the `porchctl rpkg copy` command options and flags, see the [Porch CLI Guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}}). 
-{{% /alert %}} - ---- - -## Common Use Cases - -Here are practical scenarios where copying PackageRevisions is useful. - -### Creating a New Version - -When you need to update a published PackageRevision in the same Repository: - -```bash -# Copy the latest published version -porchctl rpkg copy porch-test.my-app.v2 my-app --namespace default --workspace v3 - -# Make changes -porchctl rpkg pull porch-test.my-app.v3 ./my-app-v3 --namespace default -# ... edit files ... -porchctl rpkg push porch-test.my-app.v3 ./my-app-v3 --namespace default - -# Publish -porchctl rpkg propose porch-test.my-app.v3 --namespace default -porchctl rpkg approve porch-test.my-app.v3 --namespace default -``` - -### Creating Environment-Specific Workspaces - -Create different workspace variations of the same base PackageRevision: - -```bash -# Copy for development environment -porchctl rpkg copy porch-test.my-app.v1 my-app --namespace default --workspace dev - -# Copy for staging environment -porchctl rpkg copy porch-test.my-app.v1 my-app --namespace default --workspace staging - -# Copy for production environment -porchctl rpkg copy porch-test.my-app.v1 my-app --namespace default --workspace prod -``` - ---- - -## Troubleshooting - -Common issues when copying PackageRevisions and how to resolve them. - -**Copy fails with "workspace already exists":** - -- The workspace name must be unique within the package -- Choose a different workspace name: `--workspace v3` or `--workspace dev-2` -- List existing workspaces with the `porchctl rpkg get --namespace default --name ` command - -**Copy fails with "source not found":** - -- Verify that the source PackageRevision exists with the `porchctl rpkg get --namespace default` command -- Check the exact name including repository, package, and workspace -- Ensure you have permission to read the source PackageRevision -- Ensure the source is in the same repository (copy only works within the same repository) - -**Copied PackageRevision has unexpected content:** - -- The copy includes all resources from the source at the time of copying -- Pull and inspect with the `porchctl rpkg pull ./dir --namespace default` command -- Make corrections and push back - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/creating-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/creating-packages.md deleted file mode 100644 index f9f9eaf3..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/creating-packages.md +++ /dev/null @@ -1,309 +0,0 @@ ---- -title: "Creating Package Revisions" -type: docs -weight: 3 -description: "A step by step guide to creating a package revision in Porch" ---- - -## Tutorial Overview - -You will learn how to: - -1. Initialize a new package revision -2. Pull the package revision locally -3. Modify the package revision contents -4. Push changes back to Porch -5. Propose the package revision for review -6. Approve or reject the package revision - ---- - -{{% alert title="Note" color="primary" %}} -Please note the tutorial assumes a porch repository is initialized with the "porch-test" name. -We recommended to use this for simpler copy pasting of commands otherwise replace any "porch-test" value with your repository's name in the below commands. 
-{{% /alert %}} - -## Step 1: Initialize Your First Package Revision - -Create a new package revision in Porch using the `init` command: - -```bash -porchctl rpkg init my-first-package \ - --namespace=default \ - --repository=porch-test \ - --workspace=v1 \ - --description="My first Porch package" -``` - -**What this does:** - -- Creates a new PackageRevision named `my-first-package` -- Places it in the `porch-test` repository -- Uses `v1` as the workspace name (must be unique within this package) -- Starts in `Draft` lifecycle state - -![Diagram](/static/images/porch/guides/init-workflow.drawio.svg) - -**Verify the package revision was created:** - -```bash -porchctl rpkg get --namespace default -``` - -You should see your package revision listed with lifecycle `Draft`: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-first-package.v1 my-first-package v1 0 false Draft porch-test -``` - ---- - -## Step 2: Pull the Package Revision Locally - -Download the package revision contents to your local filesystem: - -```bash -porchctl rpkg pull porch-test.my-first-package.v1 ./my-first-package --namespace default -``` - -**What this does:** - -- Fetches all resources from the PackageRevision -- Saves them to the `./my-first-package` directory -- Includes the Kptfile and any other resources - -![Diagram](/static/images/porch/guides/pull-workflow.drawio.svg) - -**Explore the package revision contents:** - -```bash -ls -al ./my-first-package -``` - -You'll see: - -- The `Kptfile` - PackageRevision metadata and pipeline configuration -- Other YAML files (if any were created) - -```bash -total 24 -drwxr-x--- 2 user user 4096 Nov 24 13:27 . -drwxr-xr-x 4 user user 4096 Nov 24 13:27 .. --rw-r--r-- 1 user user 259 Nov 24 13:27 .KptRevisionMetadata --rw-r--r-- 1 user user 177 Nov 24 13:27 Kptfile --rw-r--r-- 1 user user 488 Nov 24 13:27 README.md --rw-r--r-- 1 user user 148 Nov 24 13:27 package-context.yaml -``` - -**Alternatively:** - -If you have the tree command installed on your system you can use it to view the hierarchy of the package - -```bash -tree ./my-first-package/ -``` - -Should return the following output: - -```bash -my-first-package/ -├── Kptfile -├── README.md -└── package-context.yaml - -1 directory, 3 files -``` - ---- - -## Step 3: Modify the Package Revision - -Let's add a simple KRM function to the pipeline. 
- -Open the `Kptfile` in your editor of choice: - -```bash -vim ./my-first-package/Kptfile -``` - -Add a mutator function to the pipeline section so that your `Kptfile` looks like so: - -```yaml -apiVersion: kpt.dev/v1 -kind: Kptfile -metadata: - name: my-first-package - annotations: - config.kubernetes.io/local-config: "true" -info: - description: My first Porch package -pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configMap: - namespace: production -``` - -**What this does:** - -- Adds a `set-namespace` function to the pipeline -- This function will set the namespace to `production` for all resources -- These Functions are not rendered until the package is "pushed" to porch - -**Add new resource:** - -Create a new configmap: - -```bash -vim ./my-first-package/test-config.yaml -``` - -Now add the following content to this new configmap - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: test-config -data: - key: "value" -``` - -**Save and close the file.** - -{{% alert title="Note" color="primary" %}} -Changes are LOCAL ONLY (Porch doesn't know about them yet) at this stage -{{% /alert %}} - ---- - -## Step 4: Push Changes Back to Porch - -Upload your modified package revision back to Porch: - -```bash -porchctl rpkg push porch-test.my-first-package.v1 ./my-first-package --namespace default -``` - -**What this does:** - -- Updates the PackageRevision in Porch -- Triggers rendering (executes pipeline functions) -- PackageRevision remains in `Draft` state - -![Diagram](/static/images/porch/guides/push-workflow.drawio.svg) - -**Successful output:** - -This describes how the KRM function was run by porch and has updated the namespace in our new configmap. - -```bash -[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1" -[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" - Results: - [info]: namespace "" updated to "production", 1 value(s) changed -porch-test.my-first-package.v1 pushed -``` - ---- - -## Step 5: Propose the Package Revision - -Move the package revision to `Proposed` state for review: - -```bash -porchctl rpkg propose porch-test.my-first-package.v1 --namespace default -``` - -**What this does:** - -- Changes lifecycle from `Draft` to `Proposed` -- Signals the package revision is ready for review -- PackageRevision can still be modified if needed - -![Diagram](/static/images/porch/guides/propose-workflow.drawio.svg) - -{{% alert title="Note" color="primary" %}} -A lifecycle state change from `Draft` to `Proposed` means that in Git the package revision has moved from the `draft` branch to the `proposed` branch -{{% /alert %}} - -**Verify the state change:** - -```bash -porchctl rpkg get porch-test.my-first-package.v1 --namespace default -``` - -The lifecycle should now show `Proposed`. 
- -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-first-package.v1 my-first-package v1 0 false Proposed porch-test -``` - ---- - -## Step 6a: Approve the Package Revision - -If the package revision looks good, approve it to publish: - -```bash -porchctl rpkg approve porch-test.my-first-package.v1 --namespace default -``` - -**What this does:** - -- Changes PackageRevision lifecycle from `Proposed` (revision 0) to `Published` (revision 1) -- PackageRevision becomes **immutable** (content cannot be changed) -- Records who approved and when -- PackageRevision is now available for cloning/deployment - -![Diagram](/static/images/porch/guides/approve-workflow.drawio.svg) - -**Verify publication:** - -```bash -porchctl rpkg get porch-test.my-first-package.v1 --namespace default -o yaml | grep -E "lifecycle|publishedBy|publishTimestamp" -``` - -**Verify the state change:** - -```bash -porchctl rpkg get porch-test.my-first-package.v1 --namespace default -``` - -The lifecycle should now show `Published`. - -```bash -NAME                               PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY -porch-test.my-first-package.main   my-first-package   main            -1         true     Published   porch-test -porch-test.my-first-package.v1     my-first-package   v1              1          false    Published   porch-test -``` - -{{% alert title="Note" color="primary" %}} -Porch automatically creates a main branch-tracking PackageRevision (with workspace `main` and revision `-1`) to track the latest published version of this package. -{{% /alert %}} - ---- - -## Step 6b: Reject the Package Revision (Alternative) - -If the package revision needs more work, reject it to return to `Draft`: - -```bash -porchctl rpkg reject porch-test.my-first-package.v1 --namespace default -``` - -**What this does:** - -- Changes lifecycle from `Proposed` back to `Draft` -- Allows further modifications -- You can then make changes and re-propose - -![Diagram](/static/images/porch/guides/reject-workflow.drawio.svg) - -If the package revision is rejected, the process begins again from Step 2 until the desired state is achieved. - -![Diagram](/static/images/porch/guides/lifecycle-workflow.drawio.svg) - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/deleting-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/deleting-packages.md deleted file mode 100644 index 17dedf31..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/deleting-packages.md +++ /dev/null @@ -1,401 +0,0 @@ ---- -title: "Deleting Package Revisions" -type: docs -weight: 7 -description: "A step by step guide to deleting package revisions in Porch" ---- - -## Tutorial Overview - -You will learn how to: - -1. Delete Draft and Proposed PackageRevisions directly -2. Propose Published PackageRevisions for deletion -3. Approve or reject deletion proposals -4. Understand the deletion workflow and safety mechanisms - -{{% alert title="Note" color="primary" %}} -This tutorial assumes a porch repository is initialized with the "porch-test" name. -Replace any "porch-test" value with your repository's name in the commands below. 
-{{% /alert %}} - ---- - -## Understanding PackageRevision Deletion - -PackageRevision deletion in Porch follows different workflows depending on the lifecycle state: - -**Direct Deletion:** - -- **Draft** and **Proposed** PackageRevisions can be deleted immediately -- No approval process required -- Permanently removes the PackageRevision and its Git branch - -**Deletion Proposal Workflow:** - -- **Published** PackageRevisions require a two-step deletion process -- First propose deletion, then approve the proposal -- Provides safety mechanism to prevent accidental deletion of production packages - -**Branch-Tracking PackageRevisions:** - -- When you publish a PackageRevision, Porch automatically creates a "main" branch-tracking PackageRevision -- These have revision `-1` and workspace name `main` -- They track the current state of the package on the main Git branch -- **Important**: These are managed automatically by Porch and should not be directly modified -- The only user interaction should be deletion after all regular PackageRevisions of **that specific package** are deleted - ---- - -## Step 1: Create Test PackageRevisions - -Let's create some test PackageRevisions to demonstrate the deletion workflows: - -```bash -# Create a Draft PackageRevision -porchctl rpkg init test-draft-package \ - --namespace=default \ - --repository=porch-test \ - --workspace=draft-v1 \ - --description="Test package for deletion" - -# Create a Proposed PackageRevision -porchctl rpkg init test-proposed-package \ - --namespace=default \ - --repository=porch-test \ - --workspace=proposed-v1 \ - --description="Test package for deletion" - -porchctl rpkg propose porch-test.test-proposed-package.proposed-v1 --namespace=default - -# Create a Published PackageRevision -porchctl rpkg init test-published-package \ - --namespace=default \ - --repository=porch-test \ - --workspace=published-v1 \ - --description="Test package for deletion" - -porchctl rpkg propose porch-test.test-published-package.published-v1 --namespace=default -porchctl rpkg approve porch-test.test-published-package.published-v1 --namespace=default -``` - -**Verify the PackageRevisions were created:** - -```bash -porchctl rpkg get --namespace=default -``` - -You should see an output similar to: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-draft-package.draft-v1 test-draft-package draft-v1 0 false Draft porch-test -porch-test.test-proposed-package.proposed-v1 test-proposed-package proposed-v1 0 false Proposed porch-test -porch-test.test-published-package.main test-published-package main -1 false Published porch-test -porch-test.test-published-package.published-v1 test-published-package published-v1 1 true Published porch-test -``` - ---- - -## Step 2: Delete Draft PackageRevisions - -Draft PackageRevisions can be deleted immediately without any approval process: - -```bash -porchctl rpkg del porch-test.test-draft-package.draft-v1 --namespace=default -``` - -**What this does:** - -- Immediately removes the Draft PackageRevision -- Deletes the corresponding Git branch (`draft/draft-v1`) -- No approval or confirmation required - -**Verify the deletion:** - -```bash -porchctl rpkg get --namespace=default -``` - -The Draft PackageRevision should no longer appear in the list: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-proposed-package.proposed-v1 test-proposed-package proposed-v1 0 false Proposed porch-test -porch-test.test-published-package.main 
test-published-package main -1 false Published porch-test -porch-test.test-published-package.published-v1 test-published-package published-v1 1 true Published porch-test -``` - ---- - -## Step 3: Delete Proposed PackageRevisions - -Proposed PackageRevisions can also be deleted directly: - -```bash -porchctl rpkg del porch-test.test-proposed-package.proposed-v1 --namespace=default -``` - -**What this does:** - -- Immediately removes the Proposed PackageRevision -- Deletes the corresponding Git branch (`proposed/proposed-v1`) -- No approval process required - -**Verify the deletion:** - -```bash -porchctl rpkg get --namespace=default -``` - -The Proposed PackageRevision should no longer appear in the list: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-published-package.main test-published-package main -1 false Published porch-test -porch-test.test-published-package.published-v1 test-published-package published-v1 1 true Published porch-test -``` - ---- - -## Step 4: Propose Published PackageRevision for Deletion - -Published PackageRevisions cannot be deleted directly. You must first propose them for deletion: - -```bash -porchctl rpkg propose-delete porch-test.test-published-package.published-v1 --namespace=default -``` - -**What this does:** - -- Changes the PackageRevision lifecycle from `Published` to `DeletionProposed` -- Signals that the PackageRevision should be deleted -- Requires approval before actual deletion occurs - -**Verify the state change:** - -```bash -porchctl rpkg get porch-test.test-published-package.published-v1 --namespace=default -``` - -The lifecycle should now show `DeletionProposed`: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-published-package.main test-published-package main -1 false Published porch-test -porch-test.test-published-package.published-v1 test-published-package published-v1 1 true DeletionProposed porch-test -``` - ---- - -## Step 5a: Approve Deletion Proposal - -If you want to proceed with the deletion, approve the deletion proposal: - -```bash -porchctl rpkg del porch-test.test-published-package.published-v1 --namespace=default -``` - -**What this does:** - -- Permanently deletes the PackageRevision -- Removes the Git tag and any associated branches -- **Important**: This cannot be undone once completed - -**Verify the deletion:** - -```bash -porchctl rpkg get --namespace=default -``` - -The published PackageRevision should no longer exist, but the main branch-tracking PackageRevision remains: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-published-package.main test-published-package main -1 false Published porch-test -``` - ---- - -## Step 5b: Reject Deletion Proposal (Alternative) - -If you decide not to delete the PackageRevision, you can reject the deletion proposal: - -```bash -# First, let's create another Published PackageRevision for this example -porchctl rpkg init test-reject-delete \ - --namespace=default \ - --repository=porch-test \ - --workspace=reject-v1 \ - --description="Test package for rejection" - -porchctl rpkg propose porch-test.test-reject-delete.reject-v1 --namespace=default -porchctl rpkg approve porch-test.test-reject-delete.reject-v1 --namespace=default - -# Propose it for deletion -porchctl rpkg propose-delete porch-test.test-reject-delete.reject-v1 --namespace=default - -# Now reject the deletion proposal -porchctl rpkg reject porch-test.test-reject-delete.reject-v1 
--namespace=default -``` - -**What this does:** - -- Changes lifecycle from `DeletionProposed` back to `Published` -- PackageRevision returns to normal published state -- The package can be used again normally - -**Verify the state change:** - -```bash -porchctl rpkg get porch-test.test-reject-delete.reject-v1 --namespace=default -``` - -The lifecycle should be back to `Published`: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-reject-delete.reject-v1 test-reject-delete reject-v1 1 true Published porch-test -``` - ---- - -## Batch Deletion Operations - -You can delete multiple PackageRevisions with a single command: - -**Delete multiple Draft/Proposed PackageRevisions:** - -```bash -porchctl rpkg del package1 package2 package3 --namespace=default -``` - -**Propose multiple Published PackageRevisions for deletion:** - -```bash -porchctl rpkg propose-delete package1 package2 package3 --namespace=default -``` - -**Approve multiple deletion proposals:** - -```bash -porchctl rpkg del package1 package2 package3 --namespace=default -``` - ---- - -## Deletion Workflow Summary - -The complete deletion workflow depends on the PackageRevision lifecycle state: - -```mermaid -graph TD - A[PackageRevision] --> B{Lifecycle State?} - B -->|Draft| C[porchctl rpkg del] - B -->|Proposed| C - B -->|Published| D[porchctl rpkg propose-delete] - C --> E[Immediately Deleted] - D --> F[DeletionProposed State] - F --> G{Decision?} - G -->|Approve| H[porchctl rpkg del] - G -->|Reject| I[porchctl rpkg reject] - H --> E - I --> J[Back to Published] -``` - ---- - -## Safety Considerations - -**Published PackageRevision Protection:** - -- Two-step deletion process prevents accidental removal -- Deletion proposals can be reviewed before approval -- Rejected proposals restore the PackageRevision to Published state - -**Git Repository Impact:** - -- Draft/Proposed deletions remove Git branches -- Published deletions remove Git tags and references -- Deletion is permanent and cannot be undone - -**Dependency Considerations:** - -- Check if other PackageRevisions depend on the one being deleted -- Deleting upstream packages may affect downstream clones -- Consider the impact on deployed workloads - ---- - -## Troubleshooting - -**Cannot delete Published PackageRevision directly:** - -```bash -Error: cannot delete published package revision directly, use propose-delete first -``` - -- Use `porchctl rpkg propose-delete` first, then `porchctl rpkg del` - -**PackageRevision not found:** - -- Verify the exact PackageRevision name with the `porchctl rpkg get --namespace=default` command -- Check you're using the correct namespace -- Ensure the PackageRevision hasn't already been deleted - -**Permission denied:** - -- Check RBAC permissions with the `kubectl auth can-i delete packagerevisions -n default` command -- Verify your service account has proper deletion roles - -**Deletion proposal stuck:** - -- Check the PackageRevision status with the `porchctl rpkg get -o yaml` command -- Look for conditions that might prevent deletion -- Ensure no other processes are modifying the PackageRevision - ---- - -## Complete Cleanup - -After deleting PackageRevisions, you may notice "main" branch-tracking PackageRevisions still exist. These are automatically created by Porch when packages are published and must be deleted separately. 
- -{{% alert title="Important" color="warning" %}} -Main branch-tracking PackageRevisions (with workspace name "main" and revision "-1") are managed automatically by Porch. Do not modify, propose, approve, or otherwise interact with them except for deletion after all regular PackageRevisions of that specific package have been removed. -{{% /alert %}} - -**Check for remaining PackageRevisions:** - -```bash -porchctl rpkg get --namespace=default -``` - -You might see an output like: - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.test-published-package.main test-published-package main -1 false Published porch-test -porch-test.test-reject-delete.main test-reject-delete main -1 false Published porch-test -``` - -**Delete the main branch-tracking PackageRevisions:** - -```bash -# Propose deletion of main branch PackageRevisions -porchctl rpkg propose-delete porch-test.test-published-package.main --namespace=default -porchctl rpkg propose-delete porch-test.test-reject-delete.main --namespace=default - -# Approve the deletions -porchctl rpkg del porch-test.test-published-package.main --namespace=default -porchctl rpkg del porch-test.test-reject-delete.main --namespace=default -``` - -**Verify complete cleanup:** - -```bash -porchctl rpkg get --namespace=default -``` - -All test PackageRevisions should now be removed. - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/inspecting-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/inspecting-packages.md deleted file mode 100644 index 81243573..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/inspecting-packages.md +++ /dev/null @@ -1,361 +0,0 @@ ---- -title: "Getting Package Revisions" -type: docs -weight: 3 -description: "A guide to getting/listing, reading, querying, and inspecting package revisions in Porch" ---- - - - ---- - -## Basic Operations - -These operations cover the fundamental commands for viewing and inspecting package revisions in Porch. 
- -### Getting All Package Revisions - -Get all package revisions across all repositories in a namespace: - -```bash -porchctl rpkg get --namespace default -``` - -**What this does:** - -- Queries Porch for all PackageRevisions in the specified namespace -- Displays a summary table with key information -- Shows PackageRevisions from all registered repositories - -{{% alert title="Note" color="primary" %}} -`porchctl rpkg list` is an alias for `porchctl rpkg get` and can be used interchangeably: - -```bash -porchctl rpkg list --namespace default -``` - -{{% /alert %}} - -**Example output:** - -```bash -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v1 my-app v1 1 true Published porch-test -porch-test.my-app.v2 my-app v2 0 false Draft porch-test -blueprints.nginx.main nginx main 5 true Published blueprints -blueprints.postgres.v1 postgres v1 0 false Proposed blueprints -``` - -**Understanding the output:** - -- **NAME**: Full package revision identifier following the pattern `repository.([pathnode.]*)package.workspace` - - Format: `.[.].` - - Example: `porch-test.basedir.subdir.edge-function.v1` - - Repository: `porch-test` - - Path: `basedir/subdir` (directory structure) - - Package: `edge-function` - - Workspace: `v1` - - Simple example: `blueprints.nginx.main` (no path nodes) - - Repository: `blueprints` - - Package: `nginx` - - Workspace: `main` - -- **PACKAGE**: Package name with directory path if not in repository root - - Example: `basedir/subdir/network-function` shows location in repository - -- **WORKSPACENAME**: User-selected identifier for this PackageRevision - - Scoped to the package - `v1` in package A is independent from `v1` in package B - - Maps to Git branch or tag name - -- **REVISION**: Version number indicating publication status - - `1+`: Published PackageRevisions (increments with each publish: 1, 2, 3...) 
- - `0`: Unpublished PackageRevisions (Draft or Proposed) - - `-1`: Placeholder PackageRevisions pointing to Git branch/tag head - -- **LATEST**: Whether this is the latest published PackageRevision - - Only one PackageRevision per package marked as latest - - Based on highest revision number - -- **LIFECYCLE**: Current state of the PackageRevision - - `Draft`: Work-in-progress, freely editable, visible to authors - - `Proposed`: Read-only, awaiting approval, can be approved or rejected - - `Published`: Immutable, production-ready, assigned revision numbers - - `DeletionProposed`: Marked for removal, awaiting deletion approval - -- **REPOSITORY**: Source repository name - ---- - -### Get Detailed PackageRevision Information - -Get complete details about a specific PackageRevision: - -```bash -porchctl rpkg get porch-test.my-app.v1 --namespace default -o yaml -``` - -**What this does:** - -- Retrieves the full PackageRevision resource -- Shows all metadata, spec, and status fields -- Displays in YAML format for easy reading - -**Example output:** - -```yaml -apiVersion: porch.kpt.dev/v1alpha1 -kind: PackageRevision -metadata: - creationTimestamp: "2025-11-24T13:00:14Z" - labels: - kpt.dev/latest-revision: "true" - name: porch-test.my-first-package.v1 - namespace: default - resourceVersion: 5778e0e3e9a92d248fec770cef5baf142958aa54 - uid: f9f6507d-20fc-5319-97b2-6b8050c4f9cc -spec: - lifecycle: Published - packageName: my-first-package - repository: porch-test - revision: 1 - tasks: - - init: - description: My first Porch package - type: init - workspaceName: v1 -status: - publishTimestamp: "2025-11-24T16:38:41Z" - upstreamLock: {} -``` - -**Key fields to inspect:** - -- **spec.lifecycle**: Current PackageRevision state -- **spec.tasks**: History of operations performed on this PackageRevision -- **status.publishTimestamp**: When the PackageRevision was published - -{{% alert title="Tip" color="primary" %}} -Use `jq` to extract specific fields: `porchctl rpkg get -n default -o json | jq '.metadata'` -{{% /alert %}} - ---- - -### Reading PackageRevision Resources - -Read the actual contents of a PackageRevision: - -```bash -porchctl rpkg read porch-test.my-first-package.v1 --namespace default -``` - -**What this does:** - -- Fetches PackageRevision resources and outputs to stdout -- Shows all KRM resources in ResourceList format -- Displays the complete PackageRevision contents - -**Example output:** - -```yaml -apiVersion: config.kubernetes.io/v1 -kind: ResourceList -items: -- apiVersion: "" - kind: KptRevisionMetadata - metadata: - name: porch-test.my-first-package.v1 - namespace: default - creationTimestamp: "2025-11-24T13:00:14Z" - resourceVersion: 5778e0e3e9a92d248fec770cef5baf142958aa54 - uid: f9f6507d-20fc-5319-97b2-6b8050c4f9cc - annotations: - config.kubernetes.io/path: '.KptRevisionMetadata' -- apiVersion: kpt.dev/v1 - kind: Kptfile - metadata: - name: my-first-package - annotations: - config.kubernetes.io/local-config: "true" - config.kubernetes.io/path: 'Kptfile' - info: - description: My first Porch package - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configMap: - namespace: production -- apiVersion: v1 - kind: ConfigMap - metadata: - name: kptfile.kpt.dev - annotations: - config.kubernetes.io/local-config: "true" - config.kubernetes.io/path: 'package-context.yaml' - data: - name: example -- apiVersion: v1 - kind: ConfigMap - metadata: - name: test-config - namespace: production - annotations: - config.kubernetes.io/path: 'test-config.yaml' - data: - 
Key: "Value" -``` - ---- - -## Advanced Filtering - -Porch provides multiple ways to filter PackageRevisions. You can either use `porchctl`'s built-in flags, Kubernetes label selectors, or field selectors depending on your needs. - -### Using Porchctl Flags - -Filter PackageRevisions using built-in porchctl flags: - -**Filter by package name (substring match):** - -```bash -porchctl rpkg get --namespace default --name my-app -``` - -**Filter by revision number (exact match):** - -```bash -porchctl rpkg get --namespace default --revision 1 -``` - -**Filter by workspace name:** - -```bash -porchctl rpkg get --namespace default --workspace v1 -``` - -**Example output:** - -```bash -$ porchctl rpkg get --namespace default --name network-function -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.network-function.v1 network-function v1 1 false Published porch-test -porch-test.network-function.v2 network-function v2 2 true Published porch-test -porch-test.network-function.main network-function main 0 false Draft porch-test -``` - ---- - -### Using Kubectl Label Selectors - -Filter using Kubernetes [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering) with the `--selector` flag: - -**Get all "latest" published PackageRevisions:** - -```bash -kubectl get packagerevisions -n default --selector 'kpt.dev/latest-revision=true' -``` - -**Example output:** - -```bash -$ kubectl get packagerevisions -n default --show-labels --selector 'kpt.dev/latest-revision=true' -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY LABELS -porch-test.my-app.v2 my-app v2 2 true Published porch-test kpt.dev/latest-revision=true -blueprints.nginx.main nginx main 5 true Published blueprints kpt.dev/latest-revision=true -``` - -{{% alert title="Note" color="primary" %}} -PackageRevision resources have limited labels. To filter by repository, package name, or other attributes, use `--field-selector` instead (see next section). -{{% /alert %}} - ---- - -### Using Kubectl Field Selectors - -Filter using PackageRevision [fields](https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/) with the `--field-selector` flag: - -**Supported fields:** - -- `metadata.name` -- `metadata.namespace` -- `spec.revision` -- `spec.packageName` -- `spec.repository` -- `spec.workspaceName` -- `spec.lifecycle` - -**Filter by repository:** - -```bash -kubectl get packagerevisions -n default --field-selector 'spec.repository==porch-test' -``` - -**Filter by lifecycle:** - -```bash -kubectl get packagerevisions -n default --field-selector 'spec.lifecycle==Published' -``` - -**Filter by package name:** - -```bash -kubectl get packagerevisions -n default --field-selector 'spec.packageName==my-app' -``` - -**Combine multiple filters:** - -```bash -kubectl get packagerevisions -n default \ - --field-selector 'spec.repository==porch-test,spec.lifecycle==Published' -``` - -**Example output:** - -```bash -$ kubectl get packagerevisions -n default --field-selector 'spec.repository==porch-test' -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.my-app.v1 my-app v1 1 false Published porch-test -porch-test.my-app.v2 my-app v2 2 true Published porch-test -porch-test.my-service.main my-service main 3 true Published porch-test -``` - -{{% alert title="Note" color="primary" %}} -The `--field-selector` flag supports only the `=` and `==` operators. 
**The `!=` operator is not supported** due to Porch's internal caching behavior. -{{% /alert %}} - ---- - -## Additional Operations - -Beyond basic listing and filtering, these operations help you monitor changes and format output. - -### Watch for PackageRevision Changes - -Monitor PackageRevisions in real-time: - -```bash -kubectl get packagerevisions -n default --watch -``` - -### Sort by Creation Time - -Find recently created PackageRevisions: - -```bash -kubectl get packagerevisions -n default --sort-by=.metadata.creationTimestamp -``` - -### Output Formatting - -Both `porchctl` and `kubectl` support standard Kubernetes [output formatting flags](https://kubernetes.io/docs/reference/kubectl/#output-options): - -- `-o yaml` - YAML format -- `-o json` - JSON format -- `-o wide` - Additional columns -- `-o name` - Resource names only -- `-o custom-columns=...` - Custom column output - -{{% alert title="Note" color="primary" %}} -For a complete reference of all available command options and flags, see the [Porch CLI Guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}}). -{{% /alert %}} - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/upgrading-packages.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/upgrading-packages.md deleted file mode 100644 index 34480421..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_package_revisions/upgrading-packages.md +++ /dev/null @@ -1,348 +0,0 @@ ---- -title: "Upgrading Package Revisions" -type: docs -weight: 6 -description: "A guide to upgrade package revisions using Porch and porchctl" ---- - -The package upgrade feature in Porch is a powerful mechanism for keeping deployed packages (downstream) up-to-date with their source blueprints (upstream). This guide walks through the entire workflow, from creating packages to performing an upgrade, with a special focus on the different upgrade scenarios and merge strategies. - -For detailed command reference, see the [porchctl CLI guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide/#package-upgrade" %}}). - - - -## Key Concepts - -To understand the upgrade process, it's essential to be familiar with the three states of a package during a merge operation: - -* **Original:** The state of the package when it was first cloned from the blueprint (e.g., `Blueprint v1`). This serves as the common ancestor for the merge. -* **Upstream:** The new, updated version of the source blueprint (e.g., `Blueprint v2`). This contains the changes you want to incorporate. -* **Local:** The current state of your deployment package, including any customizations you have applied since it was cloned. - -The upgrade process combines changes from the **Upstream** blueprint with your **Local** customizations, using the **Original** version as a base to resolve differences. - -## End-to-End Upgrade Example - -This example demonstrates the complete process of creating, customizing, and upgrading a package. - -### Step 1: Create a Base Blueprint Package (revision 1) - -Create the initial revision of our blueprint. This will be the "upstream" source for our deployment package. 
- -```bash -# Initialize a new package draft named 'blueprint' in the 'porch-test' repository -porchctl rpkg init blueprint --namespace=porch-demo --repository=porch-test --workspace=1 - -# Propose the draft for review -porchctl rpkg propose porch-test.blueprint.1 --namespace=porch-demo - -# Approve and publish the package, making it available as revision 1 -porchctl rpkg approve porch-test.blueprint.1 --namespace=porch-demo -``` - -![Step 1: Create Base Blueprint](/static/images/porch/upgrade-step1.drawio.svg) - -**PackageRevisions State After Step 1:** -```bash -$ porchctl rpkg get --namespace=porch-demo -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.blueprint.main blueprint main -1 false Published porch-test -porch-test.blueprint.1 blueprint 1 1 true Published porch-test -``` - -### Step 2: Create a New Blueprint Package Revision (revision 2) - -Create a new revision of the blueprint to simulate an update. In this case, we add a new ConfigMap. - -```bash -# Create a new draft (v2) by copying v1 -porchctl rpkg copy porch-test.blueprint.1 --namespace=porch-demo --workspace=2 - -# Pull the contents of the new draft locally to make changes -porchctl rpkg pull porch-test.blueprint.2 --namespace=porch-demo ./tmp/blueprint-v2 - -# Add a new resource file to the package -kubectl create configmap test-cm --dry-run=client -o yaml > ./tmp/blueprint-v2/new-configmap.yaml - -# Push the local changes back to the Porch draft -porchctl rpkg push porch-test.blueprint.2 --namespace=porch-demo ./tmp/blueprint-v2 - -# Propose and approve the new version -porchctl rpkg propose porch-test.blueprint.2 --namespace=porch-demo -porchctl rpkg approve porch-test.blueprint.2 --namespace=porch-demo -``` -At this point, we have two published blueprint versions: `v1` (the original) and `v2` (with the new ConfigMap). - -![Step 2: Create New Blueprint Revision](/static/images/porch/upgrade-step2.drawio.svg) - -**PackageRevisions State After Step 2:** -```bash -$ porchctl rpkg get --namespace=porch-demo -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.blueprint.main blueprint main -1 false Published porch-test -porch-test.blueprint.1 blueprint 1 1 false Published porch-test -porch-test.blueprint.2 blueprint 2 2 true Published porch-test -``` - -### Step 3: Clone Blueprint revision 1 into a Deployment Package - -Clone the blueprint to create a "downstream" deployment package. 
- -```bash -# Clone blueprint v1 to create a new deployment package -porchctl rpkg clone porch-test.blueprint.1 --namespace=porch-demo --repository=porch-test --workspace=1 deployment - -# Pull the new deployment package locally to apply customizations -porchctl rpkg pull porch-test.deployment.1 --namespace=porch-demo ./tmp/deployment-v1 - -# Apply a local customization (e.g., add an annotation to the Kptfile) -kpt fn eval --image gcr.io/kpt-fn/set-annotations:v0.1.4 ./tmp/deployment-v1/Kptfile -- kpt.dev/annotation=true - -# Push the local changes back to Porch -porchctl rpkg push porch-test.deployment.1 --namespace=porch-demo ./tmp/deployment-v1 - -# Propose and approve the deployment package -porchctl rpkg propose porch-test.deployment.1 --namespace=porch-demo -porchctl rpkg approve porch-test.deployment.1 --namespace=porch-demo -``` - -![Step 3: Clone Blueprint into Deployment Package](/static/images/porch/upgrade-step3.drawio.svg) - -**PackageRevisions State After Step 3:** -```bash -$ porchctl rpkg get --namespace=porch-demo -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.blueprint.main blueprint main -1 false Published porch-test -porch-test.blueprint.1 blueprint 1 1 false Published porch-test -porch-test.blueprint.2 blueprint 2 2 true Published porch-test -porch-test.deployment.main deployment main -1 false Published porch-test -porch-test.deployment.1 deployment 1 1 true Published porch-test -``` - -### Step 4: Discover and Perform the Upgrade - -Our deployment package is based on `blueprint.1`, but we know `blueprint.2` is available. We can discover and apply this upgrade. - -```bash -# Discover available upgrades for packages cloned from 'upstream' repositories -porchctl rpkg upgrade --discover=upstream -# This will list 'porch-test.deployment.1' as having an available upgrade to revision 2. - -# Upgrade the deployment package to revision 2 of its upstream blueprint -# This creates a new draft package: 'porch-test.deployment.2' -porchctl rpkg upgrade porch-test.deployment.1 --namespace=porch-demo --revision=2 --workspace=2 - -# Propose and approve the upgraded package -porchctl rpkg propose porch-test.deployment.2 --namespace=porch-demo -porchctl rpkg approve porch-test.deployment.2 --namespace=porch-demo -``` - -After approval, `porch-test.deployment.2` is the new, published deployment package. It now contains: -1. The `new-configmap.yaml` from the upstream `blueprint.2`. -2. The local `kpt.dev/annotation=true` customization applied in Step 3. - -![Step 4: Discover and Perform Upgrade](/static/images/porch/upgrade-step4.drawio.svg) - -**PackageRevisions State After Step 4:** -```bash -$ porchctl rpkg get --namespace=porch-demo -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.blueprint.main blueprint main -1 false Published porch-test -porch-test.blueprint.1 blueprint 1 1 false Published porch-test -porch-test.blueprint.2 blueprint 2 2 true Published porch-test -porch-test.deployment.main deployment main -1 false Published porch-test -porch-test.deployment.1 deployment 1 1 false Published porch-test -porch-test.deployment.2 deployment 2 2 true Published porch-test -``` - -## Understanding Merge Strategies - -![Package Upgrade Flow](/static/images/porch/upgrade.drawio.svg) - -**Schema Explanation:** -The diagram above illustrates the package upgrade workflow in Porch: - -1. **CLONE**: A deployment package (Deployment.v1) is initially cloned from a blueprint (Blueprint.v1) in the blueprints repository -2. 
**COPY**: The blueprint evolves to a new version (Blueprint.v2) with additional features or fixes -3. **UPGRADE**: The deployment package is upgraded to incorporate changes from the new blueprint version, creating Deployment.v2 - -The dashed line shows the relationship between the new blueprint version and the upgrade process, indicating that the upgrade "uses the new blueprint" as its source for changes. - -The outcome of an upgrade depends on the changes made in the upstream blueprint and the local deployment package, combined with the chosen merge strategy. You can specify a strategy using the `--strategy` flag (e.g., `porchctl rpkg upgrade ... --strategy=copy-merge`). - -### Merge Strategy Comparison - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Scenarioresource-merge (Default)copy-mergeforce-delete-replacefast-forward
File added in UpstreamFile is added to Local.File is added to Local.File is added to Local.Fails (Local must be unchanged).
File modified in Upstream onlyChanges are applied to Local.Upstream file overwrites Local file.Upstream file overwrites Local file.Fails (Local must be unchanged).
File modified in Local onlyLocal changes are kept.Local changes are kept.Local changes are discarded; Upstream version is used.Fails (Local must be unchanged).
File modified in both (no conflict)Both changes are merged.Upstream file overwrites Local file.Upstream file overwrites Local file.Fails (Local must be unchanged).
File modified in both (conflict)Merge autoconflic resolution: always choose the new upstream version.Upstream file overwrites Local file.Upstream file overwrites Local file.Fails (Local must be unchanged).
File deleted in UpstreamFile is deleted from Local.File is deleted from Local.File is deleted from Local.Fails (Local must be unchanged).
Local package is unmodifiedUpgrade succeeds.Upgrade succeeds.Upgrade succeeds.Upgrade succeeds.
- -### Detailed Strategy Explanations - -#### **resource-merge (Default)** -This is a structural 3-way merge designed for Kubernetes resources. It understands the structure of YAML files and attempts to intelligently merge changes from the upstream and local packages. - -* **Use case:** This is the **recommended default strategy** for managing Kubernetes configuration. Use it when you want to preserve local customizations while incorporating upstream updates. - -#### **copy-merge** -A file-level replacement strategy. For any file present in both local and upstream, the upstream version is used, overwriting local changes. Files that only exist locally are kept. - -* **Use case:** When you trust the upstream source more than local changes or when Porch cannot parse the file contents (e.g., non-KRM files). - -#### **force-delete-replace** -The most aggressive strategy. It completely discards the local package's contents and replaces them with the contents of the new upstream package. - -* **Use case:** To completely reset a deployment package to a new blueprint version, abandoning all previous customizations. - -#### **fast-forward** -A fail-fast safety check. The upgrade only succeeds if the local package has **zero modifications** compared to the original blueprint version it was cloned from. - -* **Use case:** To guarantee that you are only upgrading unmodified packages, preventing accidental overwrites of important local customizations. - -## Practical examples: upgrade strategies in action - -This section contains short, focused examples showing how each merge strategy behaves in realistic scenarios. Each example assumes you have a deployment package `porch-test.deployment.1` cloned from `porch-test.blueprint.1` and that `porch-test.blueprint.2` is available upstream. - -### Example A — resource-merge (default) - -Scenario: Upstream adds a new ConfigMap and local changes added an annotation to Kptfile. `resource-merge` should apply the upstream addition while preserving the local annotation. - -Commands: - -```bash -# discover available upgrades -porchctl rpkg upgrade --discover=upstream -``` - -```bash -# perform upgrade using the default strategy -porchctl rpkg upgrade porch-test.deployment.1 --namespace=porch-demo --revision=2 --workspace=2 -``` - -Outcome: A new draft `porch-test.deployment.2` is created containing both the new `ConfigMap` and the local annotation. - -### Example B — copy-merge - -Scenario: Upstream changes a file that the local package also modified, but you want the upstream version to win (file-level overwrite). - -Commands: - -```bash -porchctl rpkg upgrade porch-test.deployment.1 --namespace=porch-demo --revision=2 --workspace=2 --strategy=copy-merge -``` - -Outcome: Files present in both upstream and local are replaced with the upstream copy. Files only present locally are preserved. - -### Example C — force-delete-replace - -Scenario: The blueprint has diverged substantially; you want to reset the deployment package to exactly match upstream v2. - -Commands: - -```bash -porchctl rpkg upgrade porch-test.deployment.1 --namespace=porch-demo --revision=2 --workspace=2 --strategy=force-delete-replace -``` - -Outcome: The new draft contains only the upstream contents; local customizations are discarded. - -### Example D — fast-forward - -Scenario: You want to ensure upgrades are only applied to unmodified, pristine clones. 
- -Commands: - -```bash -porchctl rpkg upgrade porch-test.deployment.1 --namespace=porch-demo --revision=2 --workspace=2 --strategy=fast-forward -``` - -Outcome: The upgrade succeeds only if `porch-test.deployment.1` has no local modifications compared to the original clone. If local changes exist, the command fails and reports the modifications that prevented a fast-forward. - -## Reference - -### Command Flags - -The `porchctl rpkg upgrade` command has several key flags: - -* `--workspace=`: (Mandatory) The name for the new workspace where the upgraded package draft will be created. -* `--revision=`: (Optional) The specific revision number of the upstream package to upgrade to. If not specified, Porch will automatically use the latest published revision. -* `--strategy=`: (Optional) The merge strategy to use. Defaults to `resource-merge`. Options are `resource-merge`, `copy-merge`, `force-delete-replace`, `fast-forward`. - -For more details, run `porchctl rpkg upgrade --help`. - -### Best Practices - -* **Separate Repositories:** For better organization and access control, keep blueprint packages and deployment packages in separate Git repositories. -* **Understand Your Strategy:** Before upgrading, be certain which merge strategy fits your use case to avoid accidentally losing important local customizations. When in doubt, the default `resource-merge` is the safest and most intelligent option. - -### Cleanup - -To remove the packages created in this guide, you must first propose them for deletion and then perform the final deletion. - -```bash -# Clean up local temporary directory used in these examples -rm -rf ./tmp - -# Propose all packages for deletion -porchctl rpkg propose-delete porch-test.blueprint.1 porch-test.blueprint.2 porch-test.deployment.1 porch-test.deployment.2 porch-test.blueprint.main porch-test.deployment.main --namespace=porch-demo - -# Delete the packages -porchctl rpkg delete porch-test.blueprint.1 porch-test.blueprint.2 porch-test.deployment.1 porch-test.deployment.2 porch-test.blueprint.main porch-test.deployment.main --namespace=porch-demo -``` diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/_index.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/_index.md deleted file mode 100644 index b938aa3e..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/_index.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: "Working with Porch Repositories" -type: docs -weight: 2 -description: A group of guides outlining how to interact with Porch repositories ---- - -## Prerequisites - -- Porch deployed on a Kubernetes cluster [Setup Porch Guide]({{% relref "/docs/neo-porch/3_getting_started/installing-porch.md" %}}). -- **Porchctl** CLI tool installed [Setup Porchctl Guide]({{% relref "/docs/neo-porch/3_getting_started/installing-porchctl.md" %}}). -- **Kubectl** configured to access your cluster. -- A Git repository to register with Porch. If you need to create one, see [GitHub's Repository Guide](https://docs.github.com/en/repositories/creating-and-managing-repositories/quickstart-for-repositories). - ---- - -## Understanding Repositories - -Before Porch can manage packages, you must register repositories where those packages are stored. 
Repositories tell Porch: - -- Where to find package blueprints -- Where to store deployment packages -- How to authenticate with the repository - -Porch primarily supports **Git repositories** from providers like GitHub, GitLab, Gitea, Bitbucket, and other Git-compatible services. - -**Repository Types by Purpose:** - -- **Blueprint Repositories**: Contain upstream package templates and blueprints that can be cloned and customized. These are typically read-only sources of reusable configurations. -- **Deployment Repositories**: Store deployment-ready packages that are actively managed and deployed to clusters. Mark repositories as deployment repositories using the `--deployment` flag during registration. - ---- - -## Repository Types - -Porch primarily supports Git repositories for storing and managing packages. Git is the recommended and production-ready storage backend. - -### Git Repositories - -Git repositories are the primary and recommended type for use with Porch. - -**Requirements:** - -- Git repository with an initial commit (to establish main branch) -- For private repos: Personal Access Token or Basic Auth credentials - -**Supported Git hosting services:** - -- GitHub -- GitLab -- Gitea -- Bitbucket -- Any Git-compatible service - ---- - -### OCI Repositories (Experimental) - -{{% alert title="Warning" color="warning" %}} -OCI repository support is **experimental** and not actively maintained. Use at your own risk. This feature may have limitations, bugs, or breaking changes. For production deployments, use Git repositories. -{{% /alert %}} - -Porch has experimental support for OCI (Open Container Initiative) repositories that store packages as container images. This feature is not recommended for production use. - ---- - -## Troubleshooting - -Common issues when working with repositories and their solutions: - -**Repository shows READY: False?** - -- Check repository URL is accessible -- Verify authentication credentials are correct -- Inspect repository conditions: `porchctl repo get -n -o yaml` -- Check Porch server logs for detailed errors - -**Packages not appearing after registration?** - -- Ensure repository has been synchronized (check SYNC SCHEDULE or trigger manual sync) -- Verify packages have valid Kptfile in repository -- Check repository directory configuration matches package location -- If re-registering a previously unregistered repository, packages in Git will reappear after sync - -**Authentication failures?** - -- For GitHub: Ensure Personal Access Token has `repo` scope -- For private repos: Verify credentials are correctly configured -- Check secret exists: `kubectl get secret -n ` - -**Need to change repository configuration?** - -- Repository settings (branch, directory, credentials) cannot be updated via porchctl -- Use `kubectl edit repository -n ` to modify the Repository resource -- Alternatively, unregister and re-register the repository with new settings - -**Sync not working?** - -- Verify cron expression syntax is correct -- Check minimum 1-minute delay for manual syncs -- Inspect repository status for sync errors - ---- - -## Key Concepts - -Important terms and concepts for working with Porch repositories: - -- **Repository**: A Git repository registered with Porch for package management. Repositories are namespace-scoped Kubernetes resources. -- **Blueprint Repository**: Contains upstream package templates that can be cloned and customized. Typically used as read-only sources. 
-- **Deployment Repository**: Repository marked with `--deployment` flag containing deployment-ready packages that are actively managed. -- **Sync Schedule**: Cron expression defining periodic repository synchronization (e.g., `*/10 * * * *` for every 10 minutes). -- **Content Type**: Defines what the repository stores. `Package` is the standard type for KRM configuration packages. Other types like `Function` exist for storing KRM functions. -- **Branch**: Git branch Porch monitors for packages (defaults to `main`). Each repository tracks a single branch. -- **Directory**: Subdirectory within repository where packages are located. Use `/` for root or specify a path like `/blueprints`. -- **Namespace Scope**: Repositories exist within a Kubernetes namespace. Repository names must be unique per namespace, and packages inherit the repository's namespace. The same Git repository can be registered in multiple namespaces with different names, creating isolated package views per namespace. - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-basic-usage.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-basic-usage.md deleted file mode 100644 index 5d711488..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-basic-usage.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -title: "Repositories Basic Usage" -type: docs -weight: 4 -description: "A basic usage of repositories guide in Porch" ---- - -## Basic Operations - -These operations cover the fundamental commands for viewing and managing registered repositories. - -### List Registered Repositories - -View all repositories registered with Porch: - -```bash -porchctl repo get --namespace default -``` - -**What this does:** - -- Queries Porch for all registered repositories in the specified namespace -- Displays repository type, content, sync schedule, and status -- Shows the repository address - -{{% alert title="Note" color="primary" %}} -`porchctl repo list` is an alias for `porchctl repo get` and can be used interchangeably: - -```bash -porchctl repo list --namespace default -``` - -{{% /alert %}} - -**Using kubectl:** - -You can also use kubectl to list repositories: - -```bash -kubectl get repositories -n default -``` - -List repositories across all namespaces: - -```bash -kubectl get repositories --all-namespaces -``` - -**Example output:** - -```bash -NAME TYPE CONTENT SYNC SCHEDULE DEPLOYMENT READY ADDRESS -porch-test git Package True https://github.com/example-org/test-packages.git -blueprints git Package */10 * * * * True https://github.com/example/blueprints.git -infra git Package */10 * * * * true True https://github.com/nephio-project/catalog -``` - -**Understanding the output:** - -- **NAME**: Repository name in Kubernetes -- **TYPE**: Repository type (`git` or `oci`) -- **CONTENT**: Content type (typically `Package`) -- **SYNC SCHEDULE**: Cron expression for periodic synchronization (if configured). 
-- **DEPLOYMENT**: Whether this is a deployment repository
-- **READY**: Repository health status
-- **ADDRESS**: Repository URL
-
----
-
-### Get Detailed Repository Information
-
-View complete details about a specific repository:
-
-```bash
-porchctl repo get porch-test --namespace default -o yaml
-```
-
-**What this does:**
-
-- Retrieves the full Repository resource
-- Shows configuration, authentication, and status information
-- Displays the result in YAML format for easy reading
-
-**Example output:**
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: porch-test
-  namespace: default
-  creationTimestamp: "2025-11-21T16:27:27Z"
-spec:
-  content: Package
-  type: git
-  git:
-    repo: https://github.com/example-org/test-packages.git
-    branch: main
-    directory: /
-    secretRef:
-      name: porch-test-auth
-status:
-  conditions:
-  - type: Ready
-    status: "True"
-    reason: Ready
-    message: 'Repository Ready (next sync scheduled at: 2025-11-26T09:48:03Z)'
-    lastTransitionTime: "2025-11-26T09:45:03Z"
-```
-
-**Key fields to inspect:**
-
-- **spec.type**: Repository type (typically `git`)
-- **spec.git**: Git-specific configuration (repo URL, branch, directory, credentials)
-- **spec.content**: Content type stored in the repository
-- **status.conditions**: Repository health and sync status
-  - **status**: `"True"` (healthy) or `"False"` (error)
-  - **reason**: `Ready` or `Error`
-  - **message**: Detailed error message when the status is False
-
----
-
-### Update Repository Configuration
-
-The recommended way to change repository settings is to unregister the repository and re-register it with the new configuration.
-
-{{% alert title="Note" color="primary" %}}
-There is no `porchctl repo update` command. The standard approach is to unregister and re-register.
-{{% /alert %}} - -**Recommended approach - Unregister and re-register:** - -```bash -# Unregister the repository -porchctl repo unregister porch-test --namespace default - -# Re-register with new configuration -porchctl repo register https://github.com/example/porch-test.git \ - --namespace default \ - --name=porch-test \ - --branch=develop \ - --directory=/new-path -``` - -**Alternative - Direct kubectl editing (NOT RECOMMENDED):** - -While you can edit the repository resource directly with kubectl, this is highly discouraged: - -```bash -kubectl edit repository porch-test -n default -``` - -{{% alert title="Warning" color="warning" %}} -Direct editing with kubectl is not recommended because: - -- Some values like `url`, `branch`, and `directory` are immutable or not designed to be changed this way -- Changes to authentication secrets (like `secretRef`) are cached by Porch and won't take effect immediately -- Secret changes only apply when authentication fails and Porch refreshes the cached credentials -- This can lead to unpredictable behavior -{{% /alert %}} - -**If you must use kubectl editing:** - -Only modify fields that are safe to change, such as: - -- `secretRef.name`: Change authentication credentials (with caching caveats above) - -Avoid changing: - -- `url`: Repository URL -- `branch`: Git branch -- `directory`: Package directory path - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-registration.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-registration.md deleted file mode 100644 index 936c798b..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-registration.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: "Registering Repositories" -type: docs -weight: 3 -description: "Registering Repositories guide in Porch" ---- - -Registering a repository connects Porch to your Git storage backend, allowing it to discover and manage packages. You can register repositories with various authentication methods and configuration options. - -### Register a Git Repository - -Register a Git repository with Porch: - -```bash -porchctl repo register https://github.com/example/porch-test.git \ - --namespace default \ - --name=porch-test \ - --description="Blueprint packages" \ - --branch=main -``` - -**What this does:** - -- Registers the Git repository with Porch -- Creates a Repository resource in Kubernetes -- Begins synchronizing packages from the repository - -**Example output:** - -```bash -porch-test created -``` - -**Verify registration:** - -Check that the repository was registered successfully: - -```bash -porchctl repo get porch-test --namespace default -``` - -Look for `READY: True` in the output to confirm the repository is accessible and synchronized. - -```bash -NAME TYPE CONTENT SYNC SCHEDULE DEPLOYMENT READY ADDRESS -porch-test git Package True https://github.com/example-repo/porch-test.git -``` - -**If READY shows False:** - -Inspect the detailed status to see the error message: - -```bash -porchctl repo get porch-test --namespace default -o yaml -``` - -Check the `status.conditions` section for the error details: - -```yaml -status: - conditions: - - type: Ready - status: "False" - reason: Error - message: 'failed to list remote refs: repository not found: Repository not found.' 
-    lastTransitionTime: "2025-11-27T14:32:20Z"
-```
-
-**Common error messages:**
-
-- `failed to list remote refs: repository not found` - Repository URL is incorrect or the repository doesn't exist
-- `failed to resolve credentials: cannot resolve credentials in a secret <namespace>/<secret-name>: secrets "<secret-name>" not found` - Authentication secret doesn't exist or its name is misspelled
-- `failed to resolve credentials: resolved credentials are invalid` - Credentials in the secret are invalid or malformed
-- `branch "<branch-name>" not found in repository` - Specified branch doesn't exist in the repository
-- `repository URL is empty` - Repository URL not specified in configuration
-- `target branch is empty` - Branch name not specified in configuration
-
----
-
-### Register with Authentication
-
-For private repositories, provide authentication credentials:
-
-**Using Basic Auth:**
-
-```bash
-porchctl repo register https://github.com/example/private-repo.git \
-  --namespace default \
-  --name=private-repo \
-  --repo-basic-username=myusername \
-  --repo-basic-password=mytoken
-```
-
-**Using Workload Identity (GCP):**
-
-```bash
-porchctl repo register https://github.com/example/private-repo.git \
-  --namespace default \
-  --name=private-repo \
-  --repo-workload-identity
-```
-
-{{% alert title="Note" color="primary" %}}
-For production environments, use secret management solutions (external secret stores, sealed-secrets) rather than embedding credentials in commands.
-
-See [Authenticating to Remote Git Repositories]({{% relref "/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/git-authentication-config.md" %}}) for detailed authentication configuration.
-{{% /alert %}}
-
----
-
-### Register with Advanced Options
-
-Configure additional repository settings:
-
-```bash
-porchctl repo register https://github.com/nephio-project/catalog \
-  --namespace default \
-  --name=infra \
-  --directory=infra \
-  --deployment=true \
-  --sync-schedule="*/10 * * * *" \
-  --description="Infrastructure packages"
-```
-
-**Common flags:**
-
-- `--name`: Repository name in Kubernetes (defaults to the last segment of the URL)
-- `--description`: Brief description of the repository
-- `--branch`: Git branch to use (defaults to `main`)
-- `--directory`: Subdirectory within the repository containing packages. Use `/` for the repository root, or specify a path like `/blueprints` or `infra/packages`. The leading slash is optional.
-- `--deployment`: Mark as a deployment repository (packages are deployment-ready)
-- `--sync-schedule`: Cron expression for periodic sync (e.g., `*/10 * * * *` for every 10 minutes). Format: `minute hour day month weekday`.
-- `--repo-basic-username`: Username for basic authentication
-- `--repo-basic-password`: Password/token for basic authentication
-- `--repo-workload-identity`: Use workload identity for authentication
-
-{{% alert title="Note" color="primary" %}}
-For complete command syntax and all available flags, see the [Porchctl CLI Guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}}).
-{{% /alert %}} - ---- diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-synchronization.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-synchronization.md deleted file mode 100644 index a60c0d56..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-synchronization.md +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: "Synchronizing Repositories" -type: docs -weight: 5 -description: "Synchronizing repositories guide in Porch" ---- - -## Repository Synchronization - -Porch periodically synchronizes with registered repositories to discover new packages and updates. You can also trigger manual synchronization when you need immediate updates. - -{{% alert title="Note" color="primary" %}} -**Sync Schedule Format:** Cron expressions follow the format `minute hour day month weekday`. For example, `*/10 * * * *` means "every 10 minutes". -{{% /alert %}} - -### Trigger Manual Sync - -Force an immediate synchronization of a repository: - -```bash -porchctl repo sync porch-test --namespace default -``` - -**What this does:** - -- Schedules a one-time sync (minimum 1-minute delay) -- Updates packages from the repository -- Independent of periodic sync schedule - -**Example output:** - -```bash -Repository porch-test sync scheduled -``` - ---- - -### Sync Multiple Repositories - -Sync several repositories at once: - -```bash -porchctl repo sync repo1 repo2 repo3 --namespace default -``` - ---- - -### Sync All Repositories - -Sync all repositories in a namespace: - -```bash -porchctl repo sync --all --namespace default -``` - -Sync across all namespaces: - -```bash -porchctl repo sync --all --all-namespaces -``` - ---- - -### Schedule Delayed Sync - -Schedule sync with custom delay: - -```bash -# Sync in 5 minutes -porchctl repo sync porch-test --namespace default --run-once 5m - -# Sync in 2 hours 30 minutes -porchctl repo sync porch-test --namespace default --run-once 2h30m - -# Sync at specific time -porchctl repo sync porch-test --namespace default --run-once "2024-01-15T14:30:00Z" -``` - -{{% alert title="Note" color="primary" %}} -**Sync behavior:** - -- Minimum delay is 1 minute from command execution -- Updates `spec.sync.runOnceAt` field in Repository CR -- Independent of existing periodic sync schedule -- Past timestamps are automatically adjusted to minimum delay -{{% /alert %}} - ---- \ No newline at end of file diff --git a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-unregistration.md b/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-unregistration.md deleted file mode 100644 index b1fd5b1e..00000000 --- a/content/en/docs/neo-porch/4_tutorials_and_how-tos/working_with_porch_repositories/repository-unregistration.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "Unregistering Repositories" -type: docs -weight: 6 -description: "Unregistering repositories guide in Porch" ---- - -## Unregistering Repositories - -When you no longer need Porch to manage packages from a repository, you can unregister it. This removes Porch's connection to the repository without affecting the underlying Git storage. 
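-
-Before unregistering, it can be worth checking which package revisions Porch currently serves from the repository, since unpushed draft work is removed along with it (see below). A quick sketch, assuming the repository from the earlier examples is named *porch-test* (package revision names typically start with the repository name):
-
-```bash
-# List package revisions in the namespace and filter for the repository being removed
-porchctl rpkg get --namespace default | grep porch-test
-```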
- -### Unregister a Repository - -Remove a repository from Porch: - -```bash -porchctl repo unregister porch-test --namespace default -``` - -**What this does:** - -- Removes the Repository resource from Kubernetes -- Stops synchronizing packages from the repository -- Removes Porch's cached metadata for the repository -- Does not delete the underlying Git repository or its contents - -{{% alert title="Warning" color="warning" %}} -Unregistering a repository does not delete the underlying Git repository or its contents. It only removes Porch's connection to it. -{{% /alert %}} - -**Example output:** - -```bash -porch-test unregistered -``` - -**What happens to packages:** - -- **Published packages in Git**: Remain in the Git repository and are preserved. If you re-register the same repository later, these packages will reappear when Porch synchronizes. -- **Draft/Proposed packages pushed to Git**: Also remain in Git and will reappear upon re-registration. -- **Unpushed work-in-progress packages**: Cached packages that were never pushed to Git (draft packages being edited) are removed and cannot be recovered. - ---- diff --git a/content/en/docs/neo-porch/5_architecture_and_components/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/_index.md deleted file mode 100644 index bfa1ae54..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Architecture & Components" -type: docs -weight: 5 -description: Porch Architecture and its underlying components ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/5_architecture_and_components/controllers/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/controllers/_index.md deleted file mode 100644 index e5c7b50a..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/controllers/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Controllers" -type: docs -weight: 3 -description: controllers ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/5_architecture_and_components/controllers/pkg-variant-controllers.md b/content/en/docs/neo-porch/5_architecture_and_components/controllers/pkg-variant-controllers.md deleted file mode 100644 index d8f204f5..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/controllers/pkg-variant-controllers.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Package Variant Controller" -type: docs -weight: 1 -description: package variant controller ---- - -## Lorem Ipsum - -Lorem Ipsum [relevant old content]({{% relref "/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-variant.md" %}}) diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/_index.md deleted file mode 100644 index 376a1f82..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Custom Resources" -type: docs -weight: 5 -description: Custom Resources ---- - -## Repositories - -explain porch repositories diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-rev.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-rev.md deleted file mode 100644 index 61966fab..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-rev.md +++ /dev/null @@ -1,10 
+0,0 @@
----
-title: "Package Rev"
-type: docs
-weight: 4
-description: Package Rev
----
-
-## Package Rev
-
-low level explanation of Package Revs
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision-resources.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision-resources.md
deleted file mode 100644
index b5c744cf..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision-resources.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Package Revision Resources"
-type: docs
-weight: 2
-description: Package Revision Resources
----
-
-## Package Revision Resources
-
-low level explanation of Package Revision Resources
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision.md
deleted file mode 100644
index 222e0bab..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-revision.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Package Revision"
-type: docs
-weight: 1
-description: Package Revision
----
-
-## Package Revision
-
-low level explanation of Package Revision
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-variant-and-set.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-variant-and-set.md
deleted file mode 100644
index bb0f40d5..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/package-variant-and-set.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Package Variant & Package Variant Set"
-type: docs
-weight: 4
-description: Package Variant & Package Variant Set
----
-
-## Package Variant & Package Variant Set
-
-low level explanation of Package Variant & Package Variant Set; relevant old content can be found at [package-variant-old-content]({{% relref "/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-variant.md" %}})
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/_index.md
deleted file mode 100644
index 607e66e6..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/_index.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: "Repositories"
-type: docs
-weight: 4
----
-
-# Porch Repository Overview
-
-## What is a Repository CR?
-
-The Porch Repository is a Kubernetes [custom resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) that represents an external repository containing KPT packages. It serves as Porch's interface to Git repositories and OCI* registries that store package content.
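-
-Because a Repository is an ordinary namespaced resource, it can also be inspected with standard Kubernetes tooling. A minimal sketch, using the `config.porch.kpt.dev` API group shown in the manifests on this page:
-
-```bash
-# List Repository resources across all namespaces
-kubectl get repositories.config.porch.kpt.dev --all-namespaces
-
-# Show the full spec and status of a single repository
-kubectl get repository blueprints -n default -o yaml
-```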
- -## Purpose and Use Cases - -### Primary Functions -- **Package Discovery**: Automatically discovers and catalogs packages from external repositories -- **Lifecycle Management**: Manages the complete lifecycle of configuration packages -- **Synchronization**: Keeps Porch's internal cache synchronized with external repository changes -- **Access Control**: Provides authentication and authorization for repository access - -### Use Cases -- **Blueprint Repositories**: Store reusable configuration templates and blueprints -- **Deployment Repositories**: Store deployment-ready configurations for specific environments -- **Package Catalogs**: Centralized repositories of shareable configuration packages -- **Multi-Environment Management**: Separate repositories for dev, staging, and production configurations - -## Repository Types - -### Git Repositories -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: Repository -metadata: - name: blueprints - namespace: default -spec: - type: git - git: - repo: https://github.com/example/blueprints.git - branch: main - directory: packages -``` - -## Key Specifications - -### Repository Spec Fields - -For detailed Repository CR specification fields, see the [API documentation](https://doc.crds.dev/github.com/nephio-project/porch/config.porch.kpt.dev/Repository/v1alpha1@v1.5.3#spec): - -{{< iframe src="https://doc.crds.dev/github.com/nephio-project/porch/config.porch.kpt.dev/Repository/v1alpha1@v1.5.3#spec" sub="https://doc.crds.dev/github.com/nephio-project/porch/config.porch.kpt.dev/Repository/v1alpha1@v1.5.3#spec">}} - -### Deployment vs Non-Deployment Repositories -- **Non-Deployment**: Contains blueprint packages for reuse and customization -- **Deployment**: Contains finalized, environment-specific configurations ready for deployment - -## Package Structure Requirements - -### Git Repository Structure -
-```
-repository-root/
-├── package-a/
-│   ├── Kptfile
-│   └── resources.yaml
-├── package-b/
-│   ├── Kptfile
-│   └── manifests/
-└── nested/
-    └── package-c/
-        ├── Kptfile
-        └── config.yaml
-```
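-
-Each package directory above is identified by the `Kptfile` at its root (see *Package Identification* below). A minimal sketch of such a file, with illustrative values:
-
-```yaml
-apiVersion: kpt.dev/v1
-kind: Kptfile
-metadata:
-  name: package-a
-info:
-  description: Example blueprint package
-```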
- -### Package Identification -- Each package must contain a `Kptfile` at its root -- Packages can be nested within subdirectories -- The `directory` field in git spec defines the search root - -## Authentication - -### Basic Authentication - -For basic authentication configuration and repository registration examples, see the [Basic Auth]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md#basic-auth" %}}) documentation. - -### Workload Identity - -For workload identity configuration and repository registration examples, see the [Workload Identity]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md#workload-identity" %}}) documentation. - -## Repository Lifecycle - -### Registration -1. Create Repository CR (manually or via `porchctl repo reg`) -2. Porch validates repository accessibility -3. Initial package discovery and caching -4. Repository marked as `Ready` - -### Synchronization -1. Periodic sync based on `spec.sync.schedule` or default frequency -2. One-time sync using `spec.sync.runOnceAt` for immediate synchronization -3. Package discovery and cache updates -4. Status condition updates -5. Package change detection and notification - -**Note**: One-time syncs should only be used when discrepancies are found between the external repository and Porch cache. Under normal conditions, rely on periodic syncs for regular synchronization. - -### Package Operations -- **Discovery**: Automatic detection of new packages -- **Caching**: Local storage of package metadata and content -- **Revision Tracking**: Tracking of package revisions and changes -- **Access**: API access to package content through Porch - -## Status and Conditions - -### Repository Status -```yaml -status: - conditions: - - type: Ready - status: "True" - reason: Ready - message: 'Repository Ready (next sync scheduled at: 2025-11-05T11:55:38Z)' - lastTransitionTime: "2024-01-15T10:30:00Z" -``` - -### Condition Types -- **Ready**: Repository is accessible and synchronized -- **Error**: Authentication, network, or configuration issues -- **Reconciling**: Package reconciliation(sync) in progress - -## Integration with Porch APIs - -### PackageRevision Resources -- Repository CR enables creation of PackageRevision resources -- Each package in the repository becomes available as PackageRevision -- Package operations (clone, edit, propose, approve) work through PackageRevision API - -### Function Evaluation -- Packages may contain KRM functions for validation and transformation -- Functions are executed during package operations (render, clone, etc.) 
-
-- Repository CR provides access to package content that may contain function configurations
-
-## Best Practices
-
-### Repository Organization
-- Use clear, descriptive repository names
-- Organize packages in logical directory structures
-- Separate blueprint and deployment repositories
-- Use consistent naming conventions
-
-### Synchronization
-- Set appropriate sync schedules based on change frequency
-- Use one-time sync for immediate updates after changes
-- Monitor repository conditions for sync issues
-
-### Security
-- Use least-privilege authentication credentials
-- Regularly rotate authentication tokens
-- Separate repositories by access requirements
-
-### Performance
-- Avoid overly large repositories
-- Use directory filtering to limit package scope
-- Monitor sync performance and adjust schedules accordingly
-
----
-
-{{% alert title="Note" color="primary" %}}
-OCI repository support is experimental and may not have full feature parity with Git repositories.
-{{% /alert %}}
\ No newline at end of file
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/upstream-vs-downstream.md b/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/upstream-vs-downstream.md
deleted file mode 100644
index 62c9419b..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/custom-resources/repositories/upstream-vs-downstream.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: "Upstream vs Downstream"
-type: docs
-weight: 2
-description: Upstream vs Downstream description
----
-
-## Repositories
-
-explain porch repositories
-
-## [UPSTREAM]
-
-EXPLAIN PORCH INTERACTION WITH UPSTREAM REPOS
-
-## [DOWNSTREAM]
-
-EXPLAIN PORCH INTERACTION WITH DOWNSTREAM REPOS
-
-## [DEPLOYMENT VS NON DEPLOYMENT REPO]
-
-EXPLAIN WHAT THAT MEANS
-
-### [4 WAYS PKG REV COMES INTO EXISTENCE]
-
-[UPSTREAM IS THE SOURCE OF THE CLONE]
-
-### [CREATED USING RPKG INIT/API]
-
-[IN THE CASE THERE IS NO UPSTREAM]
-
-### [COPY FROM ANOTHER REV IN THE SAME PKG]
-
-[NO UPSTREAM?]
-
-### [CAN BE CLONED FROM ANOTHER PKG REV ANEW]
-
-[HAS UPSTREAM]
-
-### [CAN BE LOADED FROM GIT]
-
-[DEPENDS ON WHETHER IT HAD A CLONE SOURCE OR NOT AT THE TIME]
\ No newline at end of file
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/function-runner/_index.md
deleted file mode 100644
index e252d9fe..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Function Runner"
-type: docs
-weight: 2
-description: function runner
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/pod-templating.md b/content/en/docs/neo-porch/5_architecture_and_components/function-runner/pod-templating.md
deleted file mode 100644
index 241c5b59..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/pod-templating.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Pod Templating"
-type: docs
-weight: 4
-description: pod templating
----
-
-## Lorem Ipsum
-
-Lorem Ipsum [example relevant content]({{% relref "/docs/neo-porch/5_architecture_and_components/relevant_old_docs/function-runner-pod-templates.md" %}})
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/rendering.md b/content/en/docs/neo-porch/5_architecture_and_components/function-runner/rendering.md
deleted file mode 100644
index 323b4fa9..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/rendering.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Rendering"
-type: docs
-weight: 3
-description: Render Runtime
----
-
-## Render logic
-
-render content
-
-## Built-in/Native Runtime
-
-render content
-
-## Executable Runtime
-
-render content
-
-## Function Pod Runtime
-
-render content
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/task-pipeline.md b/content/en/docs/neo-porch/5_architecture_and_components/function-runner/task-pipeline.md
deleted file mode 100644
index 49f9a84e..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/function-runner/task-pipeline.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: "Task Pipeline"
-type: docs
-weight: 2
-description: Task pipeline
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
-
-## Pipeline Ordering
-
-render content e.g.
[render order old content]({{% relref "/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-mutation-pipeline-order.md" %}}) diff --git a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/porch-server/_index.md deleted file mode 100644 index 7853cbcb..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Porch Server" -type: docs -weight: 1 -description: porch server ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/aggregated-api-server.md b/content/en/docs/neo-porch/5_architecture_and_components/porch-server/aggregated-api-server.md deleted file mode 100644 index 2bb9ea0e..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/aggregated-api-server.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Aggregated Api Server" -type: docs -weight: 2 -description: Aggregated Api Server ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/_index.md deleted file mode 100644 index 54d51c2b..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/_index.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Cache" -type: docs -weight: 5 -description: Caching ---- - -## CR cache explanation - -Lorem Ipsum low level explanations - -## DB cache explanation - -Lorem Ipsum low level explanations diff --git a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/repo-sync/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/repo-sync/_index.md deleted file mode 100644 index 453751a9..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/cache/repo-sync/_index.md +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: "Repository Sync" -type: docs -weight: 4 -description: "Porch repository synchronization architecture with SyncManager, cache handlers, and background processes for Git/OCI repositories." ---- - -## Overview - -The Porch sync system manages the synchronization of package repositories between external sources (Git/OCI*) and the internal cache. It consists of two main cache implementations that both utilize a common sync manager to handle periodic and one-time synchronization operations. The architecture consists of two main flows: **SyncManager-driven synchronization** for package content and **Background process** for Repository CR lifecycle management. - -### High-Level Architecture - -![Repository Sync Architecture](/static/images/porch/repository-sync.svg) - -{{< rawhtml >}} -📊 Interactive Architecture Diagram -{{< /rawhtml >}} - -## Core Components - -### 1. SyncManager - -**Purpose**: Central orchestrator for repository synchronization operations. - -**Components**: -- **Handler**: Interface for cache-specific sync operations -- **Core Client**: Kubernetes API client for cluster communication -- **Next Sync Time**: Tracks when the next synchronization should occur -- **Last Sync Error**: Records any errors from previous sync attempts - -**Goroutines**: - -1. 
**Periodic Sync Goroutine** - Handles recurring synchronization - - Performs initial sync at startup, then uses timer to track intervals - - Supports both cron expressions from repository configuration and default frequency fallback - - Recalculates next sync time when cron expression changes - - Updates repository status conditions after each sync - -2. **One-time Sync Goroutine** - Manages scheduled single synchronizations - - Monitors repository configuration for one-time sync requests - - Creates and cancels timers when the scheduled time changes - - Skips past timestamps and handles timer cleanup - - Operates independently of periodic sync schedule - - -### 2. Cache Handlers (Implements SyncHandler) - -Both cache implementations follow the same interface pattern: - -#### Database Cache Handler -- Persistent storage-backed repository cache -- Synchronizes with external Git/OCI* repositories -- Thread-safe operations using mutex locks -- Tracks synchronization statistics and metrics - -#### Custom Resource Cache Handler -- Memory-based repository cache for faster access -- Synchronizes with external Git/OCI* repositories -- Thread-safe operations using mutex locks -- Integrates with Kubernetes metadata storage - -### 3. Background Process - -**Purpose**: Manages Repository CR lifecycle and cache updates. - -**Components**: -- **K8S API** - Source of Repository CRs -- **Repository CRs** - Custom resources defining repositories -- **Watch Events** - Real-time CR change notifications -- **Periodic Ticker** - RepoSyncFrequency-based updates - -## Architecture Flows - -### Package Content Synchronization - -
-```
-SyncManager → Goroutines   →   Cache Handlers   → Condition Management
-     ↓              ↓              ↓                  ↓
-  Start()     syncForever()     SyncOnce()      Set/Build/Apply
-             handleRunOnceAt()                  RepositoryCondition
-```
- -**Process**: -1. SyncManager starts two goroutines -2. Goroutines call handler.SyncOnce() on cache implementations -3. Cache handlers perform sync operations -4. All components update repository conditions - -### Repository Lifecycle Management - -
-```
-K8S API   →  Repository CRs   →  Watch Events   →  Background.go    →  Cache Spec Update
-    ↓             ↓                 ↓                    ↓                  ↓
-Kubernetes    CR Changes        Added/Modified/      Event Handler      OpenRepository/
- Cluster                           Deleted          cacheRepository     CloseRepository
-```
- -**Process**: -1. Repository CRs created/modified/deleted in Kubernetes -2. Watch events generated for CR changes -3. Background.go receives and processes events -4. Cache updated via OpenRepository/CloseRepository calls -5. Periodic ticker ensures consistency - -### Event-Driven Status Updates - -
-```
-Repository CRs  →  Watch Events  →  Background Process
-        ↑                                        ↓
-        |                                 Cache Updates
-        |                                        ↓
-Status Updates  ←  Condition Mgmt  ←  Sync Operations
-        ↑                                        ↑
-        └─────────── Sync Triggers ──────────────┘
-```
- -**Flow**: -- **Repository CRs** generate watch events when created/modified/deleted -- **Background Process** receives events and triggers cache updates -- **Cache Updates** initiate sync operations through SyncManagers -- **Sync Operations** update conditions, which flow back to Repository CR status - -## Sync Process Details - -### Common Sync Process (Both Caches) - -
-```
-Start Sync
-    ↓
-Acquire Mutex Lock
-    ↓
-Set "sync-in-progress"
-    ↓
-Fetch Cached Packages ←→ Fetch External Packages
-    ↓                           ↓
-    └─── Compare & Identify Differences ───┘
-                    ↓
-            Update Cache
-         (Add/Remove Packages)
-                    ↓
-            Release Mutex
-                    ↓
-          Update Final Condition
-                    ↓
-                Complete
-```
- -**Process Steps**: -1. **Acquire mutex lock** (if applicable) - Ensures thread-safe access to cache -2. **Set condition to "sync-in-progress"** - Updates repository status for visibility -3. **Fetch cached package revisions** - Retrieves current cache state -4. **Fetch external package revisions** - Queries external repository for latest packages -5. **Compare and identify differences** - Determines what packages need to be added/removed -6. **Update cache (add/remove packages)** - Applies changes to internal cache -7. **Release mutex and update final condition** - Completes sync and updates status - -### Background Event Handling -1. **Added/Modified Events**: Initialize or update repository cache when repositories are created or changed -2. **Deleted Events**: Clean up and remove repository cache when repositories are deleted -3. **Bookmark Events**: Update resource version tracking to maintain watch continuity -4. **Status Updates**: Refresh Repository Custom Resource status conditions - -## Condition Management - -### Condition States -- **sync-in-progress**: Repository synchronization actively running - - ⚠️ **Important**: Do not perform API operations (create, update, delete packages) on the repository while this condition is active. Wait for the sync to complete and the repository to return to "ready" state to avoid conflicts and data inconsistencies. -- **ready**: Repository synchronized and ready for use -- **error**: Synchronization failed with error details - - ⚠️ **Important**: Do not perform API operations on the repository while in error state. Check the error message in the condition details, debug and resolve the underlying issue (e.g., network connectivity, authentication, repository access), then wait for the repository to return to "ready" state before running API calls. See the [troubleshooting guide]({{% relref "/docs/neo-porch/9_troubleshooting_and_faq/repository-sync.md" %}}) for common sync issues and solutions. - -### Condition Functions -- **Set Repository Condition**: Updates the status of a repository with new condition information -- **Build Repository Condition**: Creates condition objects with appropriate status, reason, and message -- **Apply Repository Condition**: Writes condition updates to Repository Custom Resources in Kubernetes - -## Interface Contracts - -### SyncHandler Interface - -The SyncHandler interface defines the contract for repository synchronization operations: - -- **SyncOnce**: Performs a single synchronization operation with the external repository -- **Key**: Returns the unique identifier for the repository being synchronized -- **GetSpec**: Retrieves the repository configuration specification - -This interface is implemented by two cache types: - -- **Database Cache**: Persistent storage implementation for repository synchronization -- **Custom Resource Cache**: In-memory implementation optimized for Kubernetes Custom Resource operations - -## Configuration - -For repository sync configuration options, see the [Repository Sync Configuration]({{% relref "/docs/neo-porch/6_configuration_and_deployments/configurations/repository-sync.md" %}}) documentation. 
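-
-As an illustration of the fields referenced above, a Repository with both a periodic schedule and a pending one-time sync might look roughly like the sketch below (the nesting of `spec.sync.schedule` and `spec.sync.runOnceAt` is inferred from the field names used on this page):
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: blueprints
-  namespace: default
-spec:
-  type: git
-  git:
-    repo: https://github.com/example/blueprints.git
-    branch: main
-  sync:
-    # Periodic sync (cron format: minute hour day month weekday)
-    schedule: "*/10 * * * *"
-    # Optional one-time sync at a specific UTC timestamp
-    runOnceAt: "2025-11-27T15:00:00Z"
-```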
- -### Background Process Configuration -- **RepoSyncFrequency**: Periodic sync interval -- **Watch Reconnection**: Exponential backoff (1s - 30s) - -## Error Handling & Resilience - -### SyncManager Errors -- Captured in the last sync error field for tracking -- Reflected in repository status conditions for visibility -- Automatically retried on the next scheduled sync cycle - -### Background Process Errors -- Watch connection failures → Exponential backoff reconnection -- Repository validation errors → Status condition with error message -- API conflicts on status updates → Retry with backoff - -### Condition Update Errors -- Logged as warnings -- Don't block sync operations -- Include retry logic with conflict resolution - -## Concurrency & Safety - -### Thread Safety -- **Database Cache**: Uses mutex locks to ensure safe concurrent access during sync operations -- **Custom Resource Cache**: Uses mutex locks to protect cache data during concurrent access -- **Background Process**: Serializes watch events to prevent race conditions - -### Context Management -- Cancellable contexts for graceful shutdown -- Separate contexts for sync operations -- Timeout handling for long-running operations - -## Monitoring & Observability - -### Logging -- Sync start/completion times with duration -- Package revision statistics (cached/external/both) -- Error conditions and warnings -- Schedule changes and next sync times -- Background event processing -- Watch connection status - -### Key Metrics (via logging) -- Sync duration and frequency -- Package counts and changes -- Success/failure rates -- Condition transition events -- Background event processing rates - ---- - -{{% alert title="Note" color="primary" %}} -OCI repository support is experimental and may not have full feature parity with Git repositories. -{{% /alert %}} \ No newline at end of file diff --git a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/engine.md b/content/en/docs/neo-porch/5_architecture_and_components/porch-server/engine.md deleted file mode 100644 index b82181f3..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/porch-server/engine.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Engine" -type: docs -weight: 3 -description: Engine ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/_index.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/_index.md deleted file mode 100644 index b808200e..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: -1 -description: ---- - -## Lorem Ipsum Heading - -Lorem Ipsum content diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/architecture.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/architecture.md deleted file mode 100644 index 3b7e89d7..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/architecture.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "Nephio Architecture" -type: docs -weight: 5 -description: Reference for the Nephio Architecture ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -Some experiments on working with [C4 model](https://c4model.com/) to document Nephio. - -## Prerequisites -1. [Graphviz](https://graphviz.org/download/) is required to render some of the diagrams in this document. - -## System Context View - - -![System Context](/static/images/architecture/level1-nephio-system.png) - -The system context view gives a high level perspective of the Nephio software system and the external entities that it interacts with. There are no deployment considerations in this view - the main purpose of the picture is to depict what is the responsibility and scope of Nephio, and the key interfaces and capabilities it exposes to deliver on that responsibility. - -## System Landscape View - -![System Landscape](/static/images/architecture/level2-nephio-container.png) - -Nephio is an amalgamation of software systems, so a system landscape provides a high-level view of how those software systems operate together. - -## Component Views - -### Nephio Core - -![Nephio Core Component View](/static/images/architecture/level3-nephio-core-component.png) - -Nephio core is a collection of operators and functions that perform the fundamental aspects of Nephio use cases, independent of the specifics of vendor implementations. - -The controllers for OAI and Free5GC are represented here. Although they are vendor extensions to Nephio, they are for now part of the Nephio system. - - -### Porch - -![Nephio Porch Component View](/static/images/architecture/nephio-porch-component-view.png) diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/extracted_from_old_porch_concepts.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/extracted_from_old_porch_concepts.md deleted file mode 100644 index 2dc2eaa8..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/extracted_from_old_porch_concepts.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: "[### Extracted from Old Porch Concepts Document ###]" -type: docs -weight: 4 ---- - -### Package Relationships - Upstream and Downstream - -kpt packages support the concept of ***upstream*** and ***downstream*** relationships. When a package is cloned from another, -the new package (the downstream package) maintains an upstream link to the specific package revision from which it was cloned. -If a new revision of the upstream package is published, the upstream link can be used to upgrade the downstream package. - -### High-Level CaD Architecture - -At the high level, the CaD functionality comprises: - -* A generic (i.e. not task-specific) package orchestration service implementing: - * package revision authoring and lifecycle management. - * package repository management. - -* [porchctl]({{% relref "/docs/neo-porch/7_cli_api/porchctl.md" %}}) - a Git-native, schema-aware, extensible client-side - tool for managing KRM packages in Porch. -* A GitOps-based deployment mechanism (for example, [Config Sync](https://cloud.google.com/anthos-config-management/docs/config-sync-overview) - or [FluxCD](https://fluxcd.io/)), which distributes and deploys configuration, and provides observability of the status - of deployed resources. -* A task-specific UI supporting repository management, package discovery, authoring, and lifecycle. - -![CaD Core Architecture](/static/images/porch/CaD-Core-Architecture.svg) - -### Porch Architecture - -Porch consists of several microservices, designed to be hosted in a [Kubernetes](https://kubernetes.io/) cluster. 
-
-The overall architecture is shown below, including additional components external to Porch (the Kubernetes API server and the deployment mechanism).
-
-![Porch Architecture](/static/images/porch/Porch-Architecture.drawio.svg)
-
-In addition to satisfying the requirements highlighted above, the focus of the architecture is to:
-
-* establish clear components and interfaces.
-* support low latency in package authoring operations.
-
-The primary Porch components are:
-
-#### Porch Server
-
-The Porch server is implemented as a [Kubernetes extension API server](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) which works with the Kubernetes API aggregation layer. The benefits of this approach are:
-
-* seamless integration with the well-defined Kubernetes API style
-* availability of generated clients for use in code
-* integration with the existing Kubernetes ecosystem and tools such as the `kubectl` CLI and [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
-* no requirement to open another network port to access a separate endpoint running inside the k8s cluster
-  * this is a distinct advantage over GRPC, which was initially considered as an alternative approach
-
-The Porch server serves the primary Kubernetes resources required for basic package authoring and lifecycle management, including:
-
-* For each package revision (see [Package Revisions]({{% relref "/docs/neo-porch/2_concepts/fundamentals.md#package-revisions" %}})):
-  * `PackageRevision` - represents the *metadata* of the package revision stored in a repository.
-  * `PackageRevisionResources` - represents the *file contents* of the package revision.
-  {{% alert color="primary" %}}
-  Note that each package revision is represented by a *pair* of resources, each presenting a different view
-  (or [representation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#differing-representations))
-  of the same underlying package revision.
-  {{% /alert %}}
-* A `Repository` [custom resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), which supports repository registration.
-
-The **Porch server** itself includes the following key components:
-
-* The *aggregated API server*, which implements the integration into the main Kubernetes API server and serves API requests for the `PackageRevision` and `PackageRevisionResources` resources.
-* The package orchestration *engine*, which implements the package lifecycle operations and package mutation workflows.
-* The *CaD Library*, which implements specific package manipulation algorithms such as package rendering (evaluation of the package's function *pipeline*), initialization of a new package, etc. The CaD Library is a fork of `kpt` that allows Porch to reuse the `kpt` algorithms and fulfil its overarching use case of being "kpt as a service".
-* The *package cache*, which enables:
-  * local caching, to allow package lifecycle and content manipulation operations to be executed within the Porch server with minimal latency.
-  * abstracting package operations upward, so they can be used without having to take account of the underlying storage repository software mechanism (Git or OCI).
-* *Repository adapters* for Git and OCI, which implement the specific logic of interacting with each repository type.
-* The *Function Runner runtime*, which evaluates individual [KRM functions][functions] (or delegates to the dedicated [function runner](#function-runner)), incorporating a multi-tier cache of functions to support low-latency evaluation.
-
-#### Function Runner
-
-The **Function Runner** is a separate microservice responsible for evaluating [KRM functions][functions]. It exposes a [GRPC](https://grpc.io/) endpoint which enables evaluating a specified kpt function on a provided configuration package.
-
-GRPC was chosen for the function runner service because the [benefits of an API server](#porch-server) that prompted its use for the Porch server do not apply in this case. The function runner is an internal microservice, an implementation detail not exposed to external callers, which makes GRPC perfectly suitable.
-
-The function runner maintains a cache of functions to support low-latency function evaluation. It achieves this through two mechanisms available to it for the evaluation of a function:
-
-* The **Executable Evaluation** mechanism executes the function directly inside the `function-runner` pod through shell-based invocation of the function's binary executable. This applies only to a selected subset of popular functions, whose binaries are baked into the `function-runner` image itself at compile time to form a sort of pre-cache.
-* The **Pod Evaluation** mechanism is the fallback when the invoked function is not one of those packaged in the `function-runner` image for the Executable Evaluation approach. The `function-runner` pod spawns a separate *function pod*, based on the image of the invoked function, along with a corresponding front-end service. Once the pod and service are ready, the exposed GRPC endpoint is invoked to evaluate the function with the package contents as input. Once a function pod completes evaluation and returns the result to the `function-runner` pod, the function pod is kept in existence temporarily so it can be re-used quickly as a cache hit. After a pre-configured period of disuse (default 30 minutes), the function runner terminates the function pod and its service, to recreate them from scratch on the next invocation of that function.
-
-#### Repository registration
-
-At repository registration, customers must be able to specify the details needed to store packages in an appropriate location in the repository. For example, registration of a Git repository must accept a URL or directory path to locate the repository, a branch and a directory to narrow down the location of packages, and any credentials needed to read from and/or write to the repository.
-
-A successful repository registration results in the creation of a Repository custom resource, a *Repository object*. This is not to be confused with, for example, the remote Git repository - the Porch repository only stores the details Porch uses to interact with the Git repository.
-
-{{% alert title="Note" color="primary" %}}
-
-A user role with sufficient permissions can register a repository at practically any URL, including repositories containing packages authored by third parties. Since the contents of the registered repositories become discoverable, a customer registering a third-party repository must be aware of the implications and trust the contents thereof.
-
-{{% /alert %}}
-
-
-#### CaD Library
-
-The [kpt](https://kpt.dev/) CLI already implements the fundamental package manipulation algorithms in order to provide its command line user experience:
-
-* [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/) - create a bare-bones, valid, kpt package.
-* [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/) - create a downstream package by cloning an upstream package; set up the upstream reference of the downstream package.
-* [kpt pkg update](https://kpt.dev/reference/cli/pkg/update/) - update the downstream package with changes from a new version of the upstream package, using a 3-way merge.
-* [kpt fn eval](https://kpt.dev/reference/cli/fn/eval/) - evaluate a KRM function on a package.
-* [kpt fn render](https://kpt.dev/reference/cli/fn/render/) - render the package by executing the function pipeline of the package and its nested packages.
-* [kpt fn source](https://kpt.dev/reference/cli/fn/source/) and [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/) - read a package from local disk as a `ResourceList` and write a package represented as a `ResourceList` to local disk.
-
-The same set of primitive operations forms the foundational building blocks of the package orchestration service. Further, Porch combines these blocks into higher-level operations (for example, Porch renders packages automatically on changes; future versions will support bulk operations such as the upgrade of multiple packages, etc.).
-
-A longer-term goal is to refactor kpt and Porch to extract the package manipulation operations into a reusable CaD Library, which will be consumed by both the kpt CLI and Porch to allow them equal reuse of the same operations:
-
-* create a valid empty package (init).
-* clone a package and add upstream pointers (get).
-* perform 3-way merge (upgrade).
-* render - core package rendering algorithm using a pluggable function evaluator to support:
-  * function evaluation via Docker (as used by the kpt CLI).
-  * function evaluation via an RPC to a service or appropriate function sandbox.
-  * high-performance evaluation of trusted, built-in, functions without a sandbox.
-* heal configuration (restore comments after lossy transformation).
-
-This approach will allow leveraging the investment already made into the high-quality package manipulation operations, maintain functional parity between the kpt CLI and Porch, and allow the dependencies that differ between the CLI and Porch to be abstracted away (most notably the dependency on Docker for function evaluation and on the local file system for package rendering).
-
-
-
-[functions]: https://kpt.dev/book/02-concepts/#functions
\ No newline at end of file
diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/function-runner-pod-templates.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/function-runner-pod-templates.md
deleted file mode 100644
index 1c8c07b7..00000000
--- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/function-runner-pod-templates.md
+++ /dev/null
@@ -1,145 +0,0 @@
----
-title: "Function runner pod templating"
-type: docs
-weight: 4
-description:
----
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Overview
-
-The `porch-fn-runner` implements a simple function-as-a-service for executing kpt functions, running each of the necessary kpt functions wrapped in a GRPC server. The function runner starts up a number of function evaluator pods, one per kpt function, each with a front-end service pointing to its respective pod. As with any operator that manages pods, it is good to provide some templating and parameterization capabilities for the pods that will be managed by the function runner.
-
-## Contract for writing pod templates
-
-The following contract needs to be fulfilled by any function evaluator pod template:
-
-1. There is a container named "function".
-2. The entry point of the "function" container will start the wrapper GRPC server.
-3. The image of the "function" container can be set to the image of the kpt function without impacting the starting of the entry point.
-4. The arguments of the "function" container can be appended with the entries from the Dockerfile ENTRYPOINT of the kpt function image.
-
-## Enabling pod templating on function runner
-
-A ConfigMap with the pod template should be created in the namespace where the porch-fn-runner pod is running. The name of the ConfigMap should be passed via the `--function-pod-template` command line argument in the pod specification of the function runner.
-
-```yaml
-...
-spec:
-  serviceAccountName: porch-fn-runner
-  containers:
-    - name: function-runner
-      image: gcr.io/example-google-project-id/porch-function-runner:latest
-      imagePullPolicy: IfNotPresent
-      command:
-        - /server
-        - --config=/config.yaml
-        - --functions=/functions
-        - --pod-namespace=porch-fn-system
-        - --function-pod-template=kpt-function-eval-pod-template
-      env:
-        - name: WRAPPER_SERVER_IMAGE
-          value: gcr.io/example-google-project-id/porch-wrapper-server:latest
-      ports:
-        - containerPort: 9445
-      # Add grpc readiness probe to ensure the cache is ready
-      readinessProbe:
-        exec:
-          command:
-            - /grpc-health-probe
-            - -addr
-            - localhost:9445
-...
-```
-
-Additionally, the porch-fn-runner pod requires `read` access to the pod template ConfigMap. Assuming the porch-fn-runner pod is running in the porch-system namespace, the following Role and RoleBinding need to be added to the Porch deployment manifests.
-
-```yaml
-kind: Role
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: porch-fn-runner
-  namespace: porch-system
-rules:
-  - apiGroups: [""]
-    resources: ["configmaps"]
-    verbs: ["get", "list"]
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: porch-fn-runner
-  namespace: porch-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: porch-fn-runner
-subjects:
-  - kind: ServiceAccount
-    name: porch-fn-runner
-```
-
-## Example pod template
-
-The pod template ConfigMap below matches the default behavior:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: kpt-function-eval-pod-template
-data:
-  template: |
-    apiVersion: v1
-    kind: Pod
-    metadata:
-      annotations:
-        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
-    spec:
-      initContainers:
-        - name: copy-wrapper-server
-          image: docker.io/nephio/porch-wrapper-server:latest
-          command:
-            - cp
-            - -a
-            - /wrapper-server/.
- - /wrapper-server-tools - volumeMounts: - - name: wrapper-server-tools - mountPath: /wrapper-server-tools - containers: - - name: function - image: image-replaced-by-kpt-func-image - command: - - /wrapper-server-tools/wrapper-server - volumeMounts: - - name: wrapper-server-tools - mountPath: /wrapper-server-tools - volumes: - - name: wrapper-server-tools - emptyDir: {} - serviceTemplate: | - apiVersion: v1 - kind: Service - spec: - ports: - - port: 9446 - protocol: TCP - targetPort: 9446 - selector: - fn.kpt.dev/image: to-be-replaced - type: ClusterIP -``` diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-mutation-pipeline-order.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-mutation-pipeline-order.md deleted file mode 100644 index 382c9b20..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-mutation-pipeline-order.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -title: "Package Mutation Pipeline Order" -type: docs -weight: 1 -description: ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -## Why - -This document explains the two different traversal strategies for package hydration in Porch's rendering pipeline: **Depth-First Search (DFS)** and **Breadth-First Search (BFS)**. These strategies determine the order in which kpt packages and their subpackages are processed during mutation and validation. - -## Background - -Porch uses a hydration process to transform kpt packages by running functions (mutators and validators) defined in Kptfiles. The order in which packages are processed can significantly impact the final output, especially when parent and child packages have interdependent transformations. - -## Traversal Strategies - -### Terminology - -For a package structure like: -``` -ROOT/ -├── A/ -├── B/ -└───└─ C/ -``` -Let's define the key terms used throughout this documentation. - -- Root: The top-level package that initiates the hydration process (e.g., ROOT) -- Child: A direct subpackage of another package (e.g., A, B, C are children of ROOT) -- Sibling: Packages that share the same parent (e.g., A and B are siblings) -- Descendant: Any package in the subtree below a given package, including children, grandchildren, etc. - -### Default: Depth-First Search (DFS) - -**Function**: `hydrate()` - -The default hydration strategy processes packages using depth-first traversal in post-order. This means: -- All subpackages are processed **before** their parent packages -- Recursion naturally handles the traversal order -- Resources flow **bottom-up** through the package hierarchy - -#### Processing Order -For the package structure shown before: - -The execution order is: **C → A → B → ROOT** (alphabetical order within each level, then parent) - -#### Implementation Details -- Uses recursive function calls to traverse the package tree -- Each package's pipeline receives: - - All resources from its processed subpackages - - Its own local resources -- Subpackage resources are appended to the parent's input before running the parent's pipeline - -### Optional: Breadth-First Search (BFS) - -**Function**: `hydrateBfsOrder()` - -The BFS strategy processes packages in a top-down approach using explicit queues: -- Parent packages are processed **before** their subpackages -- Uses two-phase execution: discovery and pipeline execution -- Resources flow **top-down** through the package hierarchy - -#### Processing Order -For the package structure shown before: - -The execution order is: **ROOT → A → B → C** (parent first, then children in alphabetical order) - -#### Implementation Details -- **Phase 1**: Breadth-first discovery of all packages and loading of local resources -- **Phase 2**: Sequential pipeline execution with scoped visibility -- Each package's pipeline receives: - - Its own local resources - - All resources from its descendants (children, grandchildren, etc.) 
- -## Enabling BFS Mode - -To use the BFS traversal strategy, add the following annotation to your root package's Kptfile: - -```yaml -apiVersion: kpt.dev/v1 -kind: Kptfile -metadata: - name: root-package - annotations: - kpt.dev/bfs-rendering: "true" -``` - -**Important**: -- The annotation must be set to exactly `"true"` (case-sensitive) -- Any other value or missing annotation defaults to DFS mode -- The annotation is only checked on the root package's Kptfile - -## Key Differences and Use Cases - -| Aspect | DFS (Default) | BFS (Optional) | -|--------|---------------|----------------| -| **Traversal Pattern** | Depth-first, post-order | Breadth-first, level-order | -| **Processing Direction** | Bottom-up (children → parent) | Top-down (parent → children) | -| **Resource Flow** | Subpackages feed into parent | Parent influences subpackages | -| **Queue Implementation** | Implicit (recursion) | Explicit (two queues) | -| **Resource Visibility** | Parent sees all subpackage outputs | Package sees self + all descendants | -| **Cycle Detection** | During traversal | During discovery phase | - -### When to Use DFS (Default) -- **Aggregation scenarios**: When parent packages need to collect and process outputs from subpackages -- **Bottom-up customization**: When specializations at lower levels should inform higher-level decisions -- **Traditional kpt workflows**: Most existing kpt packages expect this behavior - -### When to Use BFS -- **Template expansion**: When a root package serves as a template that configures subpackages -- **Top-down configuration**: When parent-level settings should cascade to children -- **Consistent base customization**: When you want to apply base transformations before specialized ones - -## Practical Examples - -### DFS Scenario: Configuration Aggregation -``` -ROOT/ # Collects all service configs -├── service-a/ # Defines service-a configuration -├── service-b/ # Defines service-b configuration -└── monitoring/ # Defines monitoring for both services -``` - -With DFS, the ROOT package can aggregate configurations from all services and create a unified monitoring dashboard. - -### BFS Scenario: Template-Based Deployment -``` -ROOT/ # Contains base templates and global config -├── staging/ # Staging-specific overrides -├── production/ # Production-specific overrides -└── development/ # Development-specific overrides -``` - -With BFS, the ROOT package can set up base templates and global configurations that are then specialized by each environment-specific subpackage. - -## Implementation Architecture - -### Core Components - -1. **hydrationContext**: Maintains global state during hydration including: - - Package registry with hydration states (Dry, Hydrating, Wet) - - Input/output file tracking for pruning - - Function execution counters and results - -2. **pkgNode**: Represents individual packages in the hydration graph: - - Package metadata and file system access - - Hydration state tracking - - Accumulated resources after processing - -3. 
**Pipeline Execution**: Both strategies share the same pipeline execution logic: - - Mutator functions transform resources - - Validator functions verify resources without modification - - Function selection and exclusion based on selectors - -### Resource Scoping - -**DFS Resource Scope**: -- Input = subpackage outputs + own local resources -- Processes transitively accumulated resources - -**BFS Resource Scope**: -- Input = own local resources + all descendant local resources -- Each package sees its complete subtree - -## Error Handling and Validation - -Both strategies include: -- **Cycle Detection**: Prevents infinite loops in package dependencies -- **State Validation**: Ensures packages are processed in correct order -- **Resource Validation**: Verifies KRM resource format compliance -- **Pipeline Validation**: Checks function configurations before execution - -## Related Resources - -- [Tree Traversal Algorithms](https://en.wikipedia.org/wiki/Tree_traversal) - -## See Also - -- **Source Code**: https://github.com/nephio-project/porch -- **File**: `internal/kpt/util/render/executor.go` -- **Key Functions**: `hydrate()` and `hydrateBfsOrder()` -- **Configuration**: `kpt.dev/bfs-rendering` annotation in `pkg/kpt/api/kptfile/v1/types.go` \ No newline at end of file diff --git a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-variant.md b/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-variant.md deleted file mode 100644 index 133e155b..00000000 --- a/content/en/docs/neo-porch/5_architecture_and_components/relevant_old_docs/package-variant.md +++ /dev/null @@ -1,1343 +0,0 @@ ---- -title: "Package Variant Controller" -type: docs -weight: 3 -description: ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -## Overview - -When deploying workloads across large fleets of clusters, it is often necessary to modify the -workload configuration for a specific cluster. Additionally, these workloads may evolve over time -with security or other patches that require updates. [Configuration as Data]({{% relref "/docs/porch/config-as-data.md" %}}) in -general, and [Package Orchestration]({{% relref "/docs/porch/package-orchestration.md" %}}) in particular, can assist in this. -However, they are still centered around a manual, one-by-one hydration and configuration of a -workload. - -This proposal introduces a number of concepts and a set of resources for automating the creation -and lifecycle management of the package variants. These are designed to address several different -dimensions of scalability: - -- the number of different workloads for a given cluster -- the number of clusters across which the workloads are deployed -- the different types or characteristics of the clusters -- the complexity of the organizations deploying the workloads -- changes to those workloads over time - -For further information, see the following links: - -- [Package Orchestration]({{% relref "/docs/porch/package-orchestration.md" %}}) -- [#3347](https://github.com/GoogleContainerTools/kpt/issues/3347) Bulk package creation -- [#3243](https://github.com/GoogleContainerTools/kpt/issues/3243) Support bulk package upgrades -- [#3488](https://github.com/GoogleContainerTools/kpt/issues/3488) Porch: BaseRevision controller aka Fan Out - controller - but more -- [Managing Package - Revisions](https://docs.google.com/document/d/1EzUUDxLm5jlEG9d47AQOxA2W6HmSWVjL1zqyIFkqV1I/edit?usp=sharing) -- [Porch UpstreamPolicy Resource - API](https://docs.google.com/document/d/1OxNon_1ri4YOqNtEQivBgeRzIPuX9sOyu-nYukjwN1Q/edit?usp=sharing&resourcekey=0-2nDYYH5Kw58IwCatA4uDQw) - -## Core concepts - -For this solution, the workloads are represented by packages. A package is a more general concept, -being an arbitrary bundle of resources, and is therefore sufficient to solve the problem that was -stated originally. - -The idea here is to introduce a *PackageVariant* resource that manages the derivation of a variant -of a package from the original source package, and to manage the evolution of that variant over -time. This effectively automates the human-centered process for variant creation that might be used -with *kpt*, and allows you to do the following: - -- Clone an upstream package locally. -- Make changes to the local package, setting values in the resources and executing KRM functions. -- Push the package to a new repository and tag it as a new version. - -Similarly, the *PackageVariant* can manage the process of updating a package when a new version of -the upstream package is published. In the human-centered workflow, a user uses the `kpt pkg update` -to pull in changes to their derivative package. When using a *PackageVariant* resource, the change -is made to the upstream specification in the resource, and the controller proposes a new draft -package reflecting the outcome of the `kpt pkg update`. - -Automating this process opens up the possibility of performing systematic changes that tie back to -the different dimensions of scalability. We can use data about the specific variant we are creating -to look up an additional context in the Porch cluster, and copy that information into the variant. -That context is a well-structured resource, not simply a set of key/value pairs. 
The KRM functions -within the package can interpret the resource, modifying other resources in the package accordingly. -The context can come from multiple sources that vary differently along those dimensions of -scalability. For example, one piece of information may vary by region, another by individual site, -another by cloud provider, and another based on whether we are deploying to development, staging, -or production. By using the resources in the Porch cluster as our input model, we can represent this -complexity in a manageable model that is reused across many packages, rather than scattered in -package-specific templates or key/value pairs without any structure. The KRM functions, also reused -across packages, but configured as needed for the specific package, are used to interpret the -resources within the package. This decouples the authoring of the packages, the creation of the -input model, and the deploy time use of that input model within the packages, thereby allowing -those activities to be performed by different teams or organizations. - -The mechanism described above is referred to as configuration injection. Configuration injection -enables the dynamic, context-aware creation of variants. Another way to think about it is as a -continuous reconciliation, much like other Kubernetes controllers. In this case, the inputs are a -parent package *P* and a context *C* (which may be a collection of many independent resources), -with the output being the derived package *D*. When a new version of *C* is created by updates to -in-cluster resources, we get a new revision of *D*, customized according to the updated context. -Similarly, the user (or an automation) can monitor for new versions of *P*. When a new version -arrives, the PackageVariant can be updated to point to that new version. This results in a newly -proposed draft *D*, updated to reflect the upstream changes. This will be explained in more -detail below. - -This proposal also introduces a way of “fanning out”, or creating multiple PackageVariant resources -declaratively based on a list or selector with the PackageVariantSet resource. This is combined with -the injection mechanism to enable the generation of large sets of variants that are specialized for -a particular target repository, cluster, or other resource. - -## Basic package cloning - -The *PackageVariant* resource controls the creation and lifecycle of a variant of a package. That -is, it defines the original (upstream) package, the new (downstream) package, and the changes, or -mutations, that need to be made to transform the upstream package into the downstream package. It -also allows the user to specify the policies around the adoption, deletion, and update of package -revisions that are under the control of the package variant controller. - -The clone operation is shown in *Figure 1*. - -| ![Figure 1: Basic package cloning](/static/images/porch/packagevariant-clone.png) | ![Legend](/static/images/porch/packagevariant-legend.png) | -| :---: | :---: | -| *Figure 1: Basic package cloning* | *Legend* | - -{{% alert title="Note" color="primary" %}} - -*Proposals* and *approvals* are not handled by the package variant controller. They are left to -other types of controller. The exception to this is the proposal to delete (there is no such thing -as a draft deletion). This is performed by the package variant controller, depending on the -specified deletion policy. 
- -{{% /alert %}} - -### PackageRevision metadata - -The package variant controller utilizes Porch APIs. This means that it is not just performing a -clone operation, but is also creating a Porch *PackageRevision* resource. In particular, this -resource can contain Kubernetes metadata that is not a part of the package, as stored in the -repository. - -Some of this metadata is necessary for the management of the *PackageRevision* by the package -variant controller, for example, the owner reference that indicates which *PackageVariant* created -the *PackageRevision*. This metadata is not under the user's control. However, the *PackageVariant* -resource does make the annotations and labels of the *PackageRevision* available as -values that the user may control during the creation of the *PackageRevision*. This can assist in -additional automation workflows. - -## Introducing variance - -Since cloning by itself is not particularly interesting, the *PackageVariant* resource also allows -you to control the various ways of mutating the original package to create the variant. - -### Package context[^porch17] - -Every *kpt* package that is fetched with `--for-deployment` contains a ConfigMap called -*kptfile.kpt.dev*. Analogously, when Porch creates a package in a deployment repository, it creates -a ConfigMap, if it does not already exist. *Kpt* (or Porch) automatically adds a key name to the -ConfigMap data, with the value of the package name. This ConfigMap can then be used as input to the -functions in the *kpt* function pipeline. - -This process also holds true for the package revisions created via the package variant controller. -Additionally, the author of the *PackageVariant* resource can specify additional key-value pairs to -insert into the package context, as shown in *Figure 2*. - -| ![Figure 2: Package context mutation](/static/images/porch/packagevariant-context.png) | -| :---: | -| *Figure 2: Package context mutation* | - -While this is convenient, it can easily be misused, leading to over-parameterization. The preferred -approach is configuration injection, as described below, since it allows inputs to adhere to a -well-defined, reusable schema, rather than simple key/value pairs. - -### Kptfile function pipeline editing[^porch18] - -In the manual workflow, one of the ways in which packages are edited is by running KRM functions -imperatively. The *PackageVariant* offers a similar capability, by allowing the user to add -functions to the beginning of the downstream package *Kptfile* mutators pipeline. These functions -then execute before the functions present in the upstream pipeline. This method is not exactly the -same as running functions imperatively, because they are also run in every subsequent execution of -the downstream package function pipeline. However, it can achieve the same goals. - -Consider, for example, an upstream package that includes a Namespace resource. In many -organizations, the deployer of the workload may not have the permissions to provision cluster-scoped -resources such as namespaces. This means that they would not be able to use this upstream package -without removing the Namespace resource (assuming that they only have access to a pipeline that -deploys with constrained permissions). By adding a function that removes Namespace resources, and -a call to set-namespace, they can take advantage of the upstream package. 
- -Similarly, the *Kptfile* pipeline editing feature provides an easy mechanism for the deployer to -create and set the namespace, if their downstream package application pipeline allows it, as seen in -*Figure 3*.[^setns] - -| ![Figure 3: KRM function pipeline editing](/static/images/porch/packagevariant-function.png) | -| :---: | -| *Figure 3: Kptfile function pipeline editing* | - -### Configuration injection[^porch18] - -Adding values to the package context or functions to the pipeline works for configurations that are -under the control of the creator of the *PackageVariant* resource. However, in more advanced use -cases, it may be necessary to specialize the package based on other contextual information. This -comes into play in particular when the user deploying the workload does not have direct control -over the context in which it is being deployed. For example, one part of the organization may manage -the infrastructure - that is, the cluster in which the workload is being deployed - and another part -the actual workload. It would be desirable to be able to pull in the inputs specified by the -infrastructure team automatically, based on the cluster to which the workload is deployed, or -possibly the region in which the cluster is deployed. - -To facilitate this, the package variant controller can "inject" configuration directly into the -package. This means it uses information specific to this instance of the package to look up a -resource in the Porch cluster and copy that information into the package. The package has to be -ready to receive this information. Therefore, there is a protocol that is used to facilitate this: - -- Packages may contain resources annotated with *kpt.dev/config-injection* -- These resources are often also *config.kubernetes.io/local-config* resources, as they are likely - to be used only by the local functions as input. However, this is not mandatory. -- The package variant controller looks for any resource in the Kubernetes cluster that matches the - Group, Version, and Kind of the package resource, and satisfies the injection selector. -- The package variant controller copies the specification field from the matching in-cluster - resource to the in-package resource, or the data field, in the case of a ConfigMap. - -| ![Figure 4: Configuration injection](/static/images/porch/packagevariant-config-injection.png) | -| :---: | -| *Figure 4: Configuration injection* | - -{{% alert title="Note" color="primary" %}} - -Because the data is being injected from the Kubernetes cluster, this data can also be monitored for -changes. For each resource that is injected, the package variant controller establishes a -Kubernetes “watch” on the resource (or on the collection of such resources). A change to that -resource results in a new draft package with the updated configuration injected. - -{{% /alert %}} - -There are a number of additional details that will be described in the detailed design below, along -with the specific API definition. - -## Lifecycle management - -### Upstream changes - -The package variant controller allows you to specify an upstream package revision to clone. -Alternatively, you can specify a floating tag[^notimplemented]. - -If you specify an upstream revision, then the downstream will not be changed unless the -*PackageVariant* resource itself is modified to point to a new revision. That is, the user must -edit the *PackageVariant* and change the upstream package reference. 
When that is done, the package -variant controller updates any existing draft package under its ownership by performing the -equivalent of a `kpt pkg update`. This updates the downstream so that it is based on the new -upstream revision. If a draft does not exist, then the package variant controller creates a new -draft based on the current published downstream, and applies the `kpt pkg update`. This updated -draft must then be proposed and approved, as with other package changes. - -If a floating tag is used, then explicit modification of the *PackageVariant* is not required. -Rather, when the floating tag is moved to a new tagged revision of the upstream package, the package -revision controller will notice and automatically propose an update to that revision. For example, -the upstream package author may designate three floating tags: stable, beta, and alpha. The upstream -package author can move these tags to specific revisions, and any *PackageVariant* resource tracking -them will propose updates to their downstream packages. - -### Adoption and deletion policies - -When a *PackageVariant* resource is created, it has a particular repository and package name as the -downstream. The adoption policy determines whether or not the package variant controller takes over -an existing package with that name, in that repository. - -Analogously, when a *PackageVariant* resource is deleted, a decision must be made about whether or -not to delete the downstream package. This is controlled by the deletion policy. - -## Fanning out of variant generation[^pvsimpl] - -When used with a single package, the package variant controller mostly helps to handle the time -dimension: that is, producing new versions of a package as the upstream changes, or as injected -resources are updated. It can also be useful for automating common, systematic changes that are -made when bringing an external package into an organization, or an organizational package into a -team repository. - -This is useful, but not particularly compelling by itself. More interesting is when we use the -*PackageVariant* as a primitive for automations that act on other dimensions of scale. This means -writing controllers that emit *PackageVariant* resources. For example, we can create a controller -that instantiates a *PackageVariant* for each developer in our organization, or we can create a -controller to manage the *PackageVariant*s across environments. The ability not only to clone a -package, but also to make systematic changes to that package, enables flexible automation. - -The workload controllers in Kubernetes are a useful analogy. In Kubernetes, there are different -workload controllers, such as Deployment, StatefulSet, and DaemonSet. These all ultimately result -in pods. However, the decisions as to what kind of pods to create, how to schedule them across the -nodes, how to configure the pods, and how to manage them as changes take place, differ with each -workload controller. Similarly, we can build different controllers to handle the different ways in -which we want to generate the *PackageRevisions*. The *PackageVariant* resource provides a -convenient primitive for all of these controllers, allowing them to leverage a range of well-defined -operations to mutate the packages as needed. - -A common requirement is the ability to generate multiple variants of a package based on a simple -list of an entity. 
Examples include the following: - -- Generating package variants to spin up development environments for each developer in an - organization. -- Instantiating the same package, with minor configuration changes, across a fleet of clusters. -- Instantiating the packages for each customer. - -The package variant set controller is designed to meet this common need. The controller consumes -and outputs the *PackageVariant* resources. The *PackageVariantSet* defines the following: - -- the upstream package -- the targeting criteria -- a template for generating one *PackageVariant* per target - -Three types of targeting are supported: - -- an explicit list of repositories and package names -- a label selector for the repository objects -- an arbitrary object selector - -The rules for generating a *PackageVariant* are associated with a list of targets using a template. -This template can have explicit values for various *PackageVariant* fields, or it can use -[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions to specify the -field values. - -*Figure 5* shows an example of the creation of *PackageVariant* resources based on the explicit -list of repositories. In this example, for the *cluster-01* and *cluster-02* repositories, no -template is defined for the resulting *PackageVariant*s. It simply takes the defaults. However, for -*cluster-03*, a template is defined to change the downstream package name to *bar*. - -| ![Figure 5: PackageVariantSet with the repository list](/static/images/porch/packagevariantset-target-list.png) | -| :---: | -| *Figure 5: PackageVariantSet with the repository list* | - -It is also possible to target the same package to a repository more than once, using different -names. This is useful if, for example, the package is used for provisioning namespaces and you -would like to provision multiple namespaces in the same cluster. It is also useful if a repository -is shared across multiple clusters. In *Figure 6*, two *PackageVariant* resources for creating the -*foo* package in the *cluster-01* repository are generated, one for each listed package name. Since -no *packageNames* field is listed for *cluster-02*, only one instance is created for that -repository. - -| ![Figure 6: PackageVariantSet with the package list](/static/images/porch/packagevariantset-target-list-with-packages.png) | -| :---: | -| *Figure 6: PackageVariantSet with the package list* | - -*Figure 7* shows an example that combines a repository label selector with configuration injectors -that differ according to the target. The template for the *PackageVariant* includes a CEL expression -for one of the injectors, so that the injection varies systematically according to the attributes of -the target. - -| ![Figure 7: PackageVariantSet with the repository selector](/static/images/porch/packagevariantset-target-repo-selector.png) | -| :---: | -| *Figure 7: PackageVariantSet with the repository selector* | - -## Detailed design - -### PackageVariant API - -The Go types below define the *PackageVariantSpec*. 
- -```go -type PackageVariantSpec struct { - Upstream *Upstream `json:"upstream,omitempty"` - Downstream *Downstream `json:"downstream,omitempty"` - - AdoptionPolicy AdoptionPolicy `json:"adoptionPolicy,omitempty"` - DeletionPolicy DeletionPolicy `json:"deletionPolicy,omitempty"` - - Labels map[string]string `json:"labels,omitempty"` - Annotations map[string]string `json:"annotations,omitempty"` - - PackageContext *PackageContext `json:"packageContext,omitempty"` - Pipeline *kptfilev1.Pipeline `json:"pipeline,omitempty"` - Injectors []InjectionSelector `json:"injectors,omitempty"` -} - -type Upstream struct { - Repo string `json:"repo,omitempty"` - Package string `json:"package,omitempty"` - Revision string `json:"revision,omitempty"` -} - -type Downstream struct { - Repo string `json:"repo,omitempty"` - Package string `json:"package,omitempty"` -} - -type PackageContext struct { - Data map[string]string `json:"data,omitempty"` - RemoveKeys []string `json:"removeKeys,omitempty"` -} - -type InjectionSelector struct { - Group *string `json:"group,omitempty"` - Version *string `json:"version,omitempty"` - Kind *string `json:"kind,omitempty"` - Name string `json:"name"` -} - -``` - -#### Basic specification fields - -The Upstream and Downstream fields specify the source package, and the destination repository and -package name. The Repo fields refer to the names of the Porch Repository resources in the same -namespace as the *PackageVariant* resource. The Downstream field does not contain a revision, -because the package variant controller only creates the draft packages. The revision of the eventual *PackageRevision* resource is determined by Porch at the time of approval. - -The Labels and Annotations fields list the metadata to include in the created *PackageRevision*. -These values are set only at the time a draft package is created. They are ignored for subsequent -operations, even if the *PackageVariant* itself has been modified. This means users are free to -change these values on the *PackageRevision*. The package variant controller will not touch them -again. - -The AdoptionPolicy controls how the package variant controller behaves if it finds an existing -*PackageRevision* draft matching the Downstream field. If the status of the AdoptionPolicy is -*adoptExisting*, then the package variant controller takes ownership of the draft, associating it -with this *PackageVariant*. This means that it will begin to reconcile the draft, as if it had -created it in the first place. If the status of the AdoptionPolicy is *adoptNone* (this is the -default setting), then the package variant controller simply ignores any matching drafts that were -not created by the controller. - -The DeletionPolicy controls how the package variant controller behaves with respect to the -*PackageRevisions* that package variant controller created when the *PackageVariant* resource itself -was deleted. The *delete* value (the default value) deletes the *PackageRevision*, potentially -removing it from a running cluster, if the downstream package has been deployed. The *orphan* value -removes the owner references and leaves the *PackageRevisions* in place. - -#### Package context injection - -*PackageVariant* resource authors may specify key-value pairs in the spec.packageContext.data field -of the resource. These key-value pairs are automatically added to the data of the *kptfile.kpt.dev* -ConfigMap, if it exists. - -Specifying the key name is invalid and must fail the validation of the *PackageVariant*. 
This key -is reserved for *kpt* or Porch to set to the package name. Similarly, the package-path is reserved -and will result in an error. - -The spec.packageContext.removeKeys field can also be used to specify a list of keys that the package -variant controller should remove from the data field of the *kptfile.kpt.dev* ConfigMap. - -When creating or updating a package, the package variant controller ensures the following: - -- The *kptfile.kpt.dev* ConfigMap exists. If it does not exist, then the package variant controller - will fail the ConfigMap. -- All of the key-value pairs in the spec.packageContext.data exist in the data field of the - ConfigMap. -- None of the keys listed in spec.packageContext.removeKeys exists in the ConfigMap. - -{{% alert title="Note" color="primary" %}} - -If a user adds a key via the *PackageVariant*, then changes the *PackageVariant* to not add that key -anymore, then it will not be removed automatically, unless the user also lists the key in the -removeKeys list. This avoids the need to track which keys were added by the *PackageVariant*. - -Similarly, if a user manually adds a key in the downstream that is also listed in the removeKeys -field, then the package variant controller will remove that key the next time it needs to update -the downstream package. There will be no attempt to coordinate “ownership” of these keys. - -{{% /alert %}} - -If, for some reason, the controller cannot modify the ConfigMap, then this is considered to be an -error and will prevent the generation of the draft. This will result in the Ready condition being -set to *False*. - -#### Editing the Kptfile function pipeline - -The *PackageVariant* resource creators may specify a list of KRM functions to add to the beginning -of the *Kptfile's* pipeline. These functions are listed in the spec.pipeline field, which is a -[Pipeline](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L236), just as in the *Kptfile*. The user can therefore prepend both validators -and mutators. - -Functions added in this way are always added to the *beginning* of the *Kptfile* pipeline. To enable -the management of the list on subsequent reconciliations, functions added by the package variant -controller use the Name field of the -[Function](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L283). In the *Kptfile*, each function is named as the dot-delimited -concatenation of the *PackageVariant*, the name of the *PackageVariant* resource, the function name -as specified in the pipeline of the *PackageVariant* resource (if present), and the positional -location of the function in the array. - -For example, if the *PackageVariant* resource contains the following: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: my-pv -spec: - ... 
- pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: my-func - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo -``` - -then the resulting *Kptfile* will have the following two entries prepended to its mutators list: - -```yaml - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: PackageVariant.my-pv.my-func.0 - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo - name: PackageVariant.my-pv..1 -``` - -This allows the controller, during subsequent reconciliations, to identify the functions within its -control, remove them all, and add them again, based on its updated content. Including the -*PackageVariant* name enables chains of *PackageVariants* to add functions, as long as the user is -careful about their choice of resource names and avoids conflicts. - -If, for some reason, the controller cannot modify the pipeline, then this is considered to be an -error and should prevent the generation of the draft. This will result in the Ready condition being -set to *False*. - -#### Configuration injection details - -As described [above](#configuration-injection), configuration injection is a process whereby -in-package resources are matched to in-cluster resources, and the specifications of the in-cluster -resources are copied to the in-package resource. - -Configuration injection is controlled by a combination of in-package resources with annotations, and -injectors (also known as *injection selectors*) defined on the *PackageVariant* resource. Package -authors control the injection points they allow in their packages, by flagging specific resources as -*injection points* with an annotation. Creators of the *PackageVariant* resource specify how to map -in-cluster resources to those injection points using the injection selectors. Injection selectors -are defined in the spec.injectors field of the *PackageVariant*. This field is an ordered array of -structs containing a group, version, kind (GVK) tuple as separate fields, and a name. Only the name -is required. To identify a match, all fields present must match the in-cluster object, and all *GVK* -fields present must match the in-package resource. In general, the name will not match the -in-package resource. This is discussed in more detail below. - -The annotations, along with the GVK of the annotated resource, allow a package to “advertise” the -injections it can accept and understand. These injection points effectively form a configuration API -for the package. The injection selectors provide a way for the *PackageVariant* author to specify -the inputs for those APIs from the possible values in the management cluster. If we define the APIs -carefully, they can be used across many packages. Since they are KRM resources, we can apply -versioning and schema validation to them as well. This creates a more maintainable, automatable set -of APIs for package customization than simple key/value pairs. - -As an example, we can define a GVK that contains service endpoints that many applications use. In -each application package, we then include an instance of the resource. We can call this resource, -for example, *service-endpoints*. We then configure a function to propagate the values from this -resource to other resources within our package. 
As those endpoints may vary by region, we can create -in our Porch cluster an instance of this GVK for each region: *useast1-service-endpoints*, *useast2-service-endpoints*, *uswest1-service-endpoints*, and so on. When we instantiate the -*PackageVariant* for a cluster, we want to inject the resource corresponding to the region in which -the cluster exists. Therefore, for each cluster we will create a *PackageVariant* resource pointing -to the upstream package, but with injection selector name values that are specific to the region for -that cluster. - -It is important to understand that the name of the in-package resource and that of the in-cluster -resource need not match. In fact, it would be an unusual coincidence if they did match. The names in -the package are the same across the *PackageVariants* using that upstream, but we want to inject -different resources for each *PackageVariant*. In addition, we do not want to change the name in the -package, because it likely has meaning within the package and will be used by the functions in the -package. Also, different owners control the names of the in-package and in-cluster resources. The -names in the package are in the control of the package author. The names in the cluster are in the -control of whomever populates the cluster (for example, an infrastructure team). The selector is the -glue between them, and is in control of the *PackageVariant* resource creator. - -The GVK, however, has to be the same for the in-package resource and the in-cluster resource. This -is because the GVK tells us the API schema for the resource. Also, the namespace of the in-cluster -object needs to be the same as that of the *PackageVariant* resource. Otherwise, we could leak -resources from those namespaces to which our *PackageVariant* user does not have access. - -With this in mind, the injection process works as follows: - -1. The controller examines all the in-package resources, looking for those that have an annotation - named *kpt.dev/config-injection*, with either of the following values: - - *required* - - *optional* - These are called injection points. It is the responsibility of the package author to define these - injection points, and to specify which are required and which are optional. Optional injection - points are a way of specifying default values. -2. For each injection point, a condition is created *in the downstream PackageRevision*, with the - ConditionType set to the dot-delimited concatenation of the config.injection, with the in-package - resource kind and name, and the value set to *False*. - - {{% alert title="Note" color="primary" %}} - - Since the package author controls the name of the resource, the kind and the name are sufficient - to identify the injection point. This ConditionType is called the "injection point - ConditionType". - - {{% /alert %}} - -3. For each required injection point, the injection point ConditionType is added to the - *PackageRevision* readinessGates by the package variant controller. The ConditionTypes of the - optional injection points must not be added to the readinessGates by the package variant - controller. However, other actors may do so at a later date, and the package variant controller - should not remove them on subsequent reconciliations. Also, this relies on the readinessGates - gating publishing the package to a *deployment* repository, but not gating publishing to a - blueprint repository. -4. The injection processing proceeds as follows. 
For each injection point, the following is the - case: - - - The controller identifies all in-cluster objects in the same namespace as the *PackageVariant* - resource, with the GVK matching the injection point (the in-package resource). If the - controller is unable to load these objects (for example, there are none and the CRD is not - installed), then the injection point ConditionType will be set to *False*, with a message - indicating the error. Processing then proceeds to the next injection point. - - {{% alert title="Note" color="primary" %}} - - For optional injection, this may be an acceptable outcome. Therefore, it does not interfere - with the overall generation of the draft. - - {{% /alert %}} - - - The controller looks through the list of injection selectors in order and checks if any of the - in-cluster objects match the selector. If there is an in-cluster object that matches, then that - in-cluster object is selected and processing of the list of injection selectors ceases. - - {{% alert title="Note" color="primary" %}} - - The namespace is set according to the *PackageVariant* resource. The GVK is set according to - the in-package resource. Each selector requires a name. Therefore, one match at most is - possible for any given selector. - - Additionally, *all fields present in the selector* must match the in-cluster resource. Only - the *GVK fields present in the selector* must match the in-package resource. - - {{% /alert %}} - - - If no in-cluster object is selected, then the injection point ConditionType is set to *False*, - with a message that no matching in-cluster resource was found. Processing proceeds to the next - injection point. - - - If a matching in-cluster object is selected, then it is injected as follows: - - - For the ConfigMap resources, the data field from the in-cluster resource is copied to the - data field of the in-package resource (the injection point), overwriting it. - - For the other resource types, the specification field from the in-cluster resource is copied - to the specification field of the in-package resource (the injection point), overwriting it. - - An annotation with the name *kpt.dev/injected-resource-name* and the value set to the name - of the in-cluster resource is added (or overwritten) in the in-package resource. - -If, for some reason, the overall injection cannot be completed, or if either of the problems set -out below exists in the upstream package, then it is considered to be an error and should prevent -the generation of the draft. The two possible problems are the following: - - - There is a resource annotated as an injection point which, however, has an invalid annotation - value (that is, a value other than *required* or *optional*). - - There are ambiguous condition types, due to conflicting GVK and name values. If this is the - case, then these must be disambiguated in the upstream package. - -This results in the Ready condition being set to *False*. - -{{% alert title="Note" color="primary" %}} - -Whether or not all the required injection points are fulfilled does not affect the *PackageVariant* -conditions. It only affects the *PackageRevision* conditions. - -{{% /alert %}} - -**A Further note on selectors** - -By allowing the use, and not just name, of the GVK in the selector, more precision in the selection -is enabled. This is a way to constrain the injections that are performed. 
That is, if the package -has 10 different objects with a config-injection annotation, then the *PackageVariant* could say it -only wants to replace certain GVKs, thereby allowing better control. - -Consider, for example, if the cluster contains the following resources: - -- GVK1 foo -- GVK1 bar -- GVK2 foo -- GVK2 bar - -If we could define injection selectors based only on their names, it would be impossible to ever -inject one GVK with *foo* and another with *bar*. Instead, by using the GVK, we can accomplish this -with a list of selectors, such as the following: - - - GVK1 foo - - GVK2 bar - -That said, often a name is sufficiently unique when combined with the in-package resource GVK. -Therefore, making the selector GVK optional is more convenient. This allows a single injector to -apply to multiple injection points with different GVKs. - -#### Order of mutations - -During creation, the first step the controller takes is to clone the upstream package to create the -downstream package. - -For the update, first note that changes to the downstream *PackageRevision* can be triggered for the -following reasons: - -1. The *PackageVariant* resource is updated. This could change any of the options for introducing - variance, or could also change the upstream package revision referenced. -2. A new revision of the upstream package has been selected, due to a floating tag change, or due - to a force retagging of the upstream. -3. An injected in-cluster object has been updated. - -The downstream *PackageRevision* may have been updated by humans or other automation actors since -creation. Therefore, we cannot simply recreate the downstream *PackageRevision* from scratch when a -change occurs. Instead, the controller must maintain the later edits by performing the equivalent -of a `kpt pkg update`, in the case of changes to the upstream, for any reason. Any other changes -require a reapplication of the *PackageVariant* functionality. With this in mind, we can see that -the controller performs mutations on the downstream package in the following order, for both -creation and update: - -1. Create (via clone) or update (via `kpt pkg update` equivalent): - - - This is carried out by the Porch server, not directly by the package variant controller. - - This means that Porch runs the *Kptfile* pipeline after clone or update. - -2. The package variant controller applies configured mutations: - - - Package context injections - - *Kptfile* KRM function pipeline additions/changes - - Config injection - -3. The package variant controller saves the *PackageRevision* and the *PackageRevisionResources*: - - - The Porch server executes the *Kptfile* pipeline. - -The package variant controller mutations edit the resources (including the *Kptfile*) according to -the contents of the *PackageVariant* and the injected in-cluster resources. However, they cannot -affect one another. The results of these mutations throughout the rest of the package are manifested -by the execution of the *Kptfile* pipeline during the *save* operation. - -#### PackageVariant status - -The PackageVariant sets the following status conditions: - - - **Stalled** - The PackageVariant sets this condition to *True* if there has been a failure that likely requires - intervention by the user. - - **Ready** - The PackageVariant sets this condition to *True* if the last reconciliation has successfully - produced an up-to-date draft. - -The *PackageVariant* resource also contains a DownstreamTargets field. 
This field contains a list of -downstream *Draft* and *Proposed* *PackageRevisions* owned by this *PackageVariant* resource, or the -latest published *PackageRevision*, if there are none in the *Draft* or *Proposed* state. Typically, -there is only a single draft, but the use of the *adopt* value for the AdoptionPolicy could result -in multiple drafts being owned by the same *PackageVariant*. - -### PackageVariantSet API[^pvsimpl] - -The Go types below define the `PackageVariantSetSpec`. - -```go -// PackageVariantSetSpec defines the desired state of PackageVariantSet -type PackageVariantSetSpec struct { - Upstream *pkgvarapi.Upstream `json:"upstream,omitempty"` - Targets []Target `json:"targets,omitempty"` -} - -type Target struct { - // Exactly one of Repositories, RepositorySeletor, and ObjectSelector must be - // populated - // option 1: an explicit repositories and package names - Repositories []RepositoryTarget `json:"repositories,omitempty"` - - // option 2: a label selector against a set of repositories - RepositorySelector *metav1.LabelSelector `json:"repositorySelector,omitempty"` - - // option 3: a selector against a set of arbitrary objects - ObjectSelector *ObjectSelector `json:"objectSelector,omitempty"` - - // Template specifies how to generate a PackageVariant from a target - Template *PackageVariantTemplate `json:"template,omitempty"` -} -``` - -At the highest level, a *PackageVariantSet* is just an upstream and a list of targets. For each -target, there is a set of criteria for generating a list, and a set of rules (a template) for -creating a *PackageVariant* from each list entry. - -Since the template is optional, let us start with describing the different types of targets, and how -the criteria in each target is used to generate a list that seeds the *PackageVariant* resources. - -The target structure must include one of three different ways of generating the list. The first is -a simple list of repositories and package names for each of these repositories[^repo-pkg-expr]. The -package name list is required for uses cases in which you want to repeatedly instantiate the same -package in a single repository. For example, if a repository represents the contents of a cluster, -you may want to instantiate a namespace package once for each namespace, with a name matching the -namespace. - -The following example shows how to use the repositories field: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositories: - - name: cluster-01 - - name: cluster-02 - - name: cluster-03 - packageNames: - - foo-a - - foo-b - - foo-c - - name: cluster-04 - packageNames: - - foo-a - - foo-b -``` - -In the following case, the *PackageVariant* resources are created for each of the pairs of -downstream repositories and package names: - -| Repository | Package name | -| ---------- | ------------ | -| cluster-01 | foo | -| cluster-02 | foo | -| cluster-03 | foo-a | -| cluster-03 | foo-b | -| cluster-03 | foo-c | -| cluster-04 | foo-a | -| cluster-04 | foo-b | - -All of the *PackageVariants* in the above list have the same upstream. - -The second criteria targeting is via a label selector against the Porch repository objects, along -with a list of package names. These packages are instantiated in each matching repository. 
As in the -first example, not listing a package name defaults to one package, with the same name as the -upstream package. Suppose, for example, we have the following four repositories defined in our Porch -cluster: - -| Repository | Labels | -| ---------- | ------------------------------------- | -| cluster-01 | region=useast1, env=prod, org=hr | -| cluster-02 | region=uswest1, env=prod, org=finance | -| cluster-03 | region=useast2, env=prod, org=hr | -| cluster-04 | region=uswest1, env=prod, org=hr | - -If we create a *PackageVariantSet* with the following specificattion: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositorySelector: - matchLabels: - env: prod - org: hr - - repositorySelector: - matchLabels: - region: uswest1 - packageNames: - - foo-a - - foo-b - - foo-c -``` - -then the *PackageVariant* resources will be created with the following repository and package names: - -| Repository | Package name | -| ---------- | ------------ | -| cluster-01 | foo | -| cluster-03 | foo | -| cluster-04 | foo | -| cluster-02 | foo-a | -| cluster-02 | foo-b | -| cluster-02 | foo-c | -| cluster-04 | foo-a | -| cluster-04 | foo-b | -| cluster-04 | foo-c | - -The third possibility allows the use of *arbitrary* resources in the Porch cluster as targeting -criteria. The objectSelector looks like this: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - objectSelector: - apiVersion: krm-platform.bigco.com/v1 - kind: Team - matchLabels: - org: hr - role: dev -``` - -The object selector works in the same way as the repository selector - in fact, the repository -selector is equivalent to the object selector, with the apiVersion and kind values set to point to -the Porch repository resources. That is, the repository name comes from the object name, and the -package names come from the listed package names. In the description of the template, we will see -how to derive different repository names from the objects. - -#### PackageVariant template - -As discussed earlier, the list entries generated by the target criteria result in *PackageVariant* -entries. If no template is specified, then the *PackageVariant* default is used, along with the -downstream repository name and the package name, as described in the previous section. The template -allows the user to have control over all the values in the resulting *PackageVariant*. The template -API is shown below. - -```go -type PackageVariantTemplate struct { - // Downstream allows overriding the default downstream package and repository name - // +optional - Downstream *DownstreamTemplate `json:"downstream,omitempty"` - - // AdoptionPolicy allows overriding the PackageVariant adoption policy - // +optional - AdoptionPolicy *pkgvarapi.AdoptionPolicy `json:"adoptionPolicy,omitempty"` - - // DeletionPolicy allows overriding the PackageVariant deletion policy - // +optional - DeletionPolicy *pkgvarapi.DeletionPolicy `json:"deletionPolicy,omitempty"` - - // Labels allows specifying the spec.Labels field of the generated PackageVariant - // +optional - Labels map[string]string `json:"labels,omitempty"` - - // LabelsExprs allows specifying the spec.Labels field of the generated PackageVariant - // using CEL to dynamically create the keys and values. Entries in this field take precedent over - // those with the same keys that are present in Labels. 
- // +optional - LabelExprs []MapExpr `json:"labelExprs,omitempty"` - - // Annotations allows specifying the spec.Annotations field of the generated PackageVariant - // +optional - Annotations map[string]string `json:"annotations,omitempty"` - - // AnnotationsExprs allows specifying the spec.Annotations field of the generated PackageVariant - // using CEL to dynamically create the keys and values. Entries in this field take precedent over - // those with the same keys that are present in Annotations. - // +optional - AnnotationExprs []MapExpr `json:"annotationExprs,omitempty"` - - // PackageContext allows specifying the spec.PackageContext field of the generated PackageVariant - // +optional - PackageContext *PackageContextTemplate `json:"packageContext,omitempty"` - - // Pipeline allows specifying the spec.Pipeline field of the generated PackageVariant - // +optional - Pipeline *PipelineTemplate `json:"pipeline,omitempty"` - - // Injectors allows specifying the spec.Injectors field of the generated PackageVariant - // +optional - Injectors []InjectionSelectorTemplate `json:"injectors,omitempty"` -} - -// DownstreamTemplate is used to calculate the downstream field of the resulting -// package variants. Only one of Repo and RepoExpr may be specified; -// similarly only one of Package and PackageExpr may be specified. -type DownstreamTemplate struct { - Repo *string `json:"repo,omitempty"` - Package *string `json:"package,omitempty"` - RepoExpr *string `json:"repoExpr,omitempty"` - PackageExpr *string `json:"packageExpr,omitempty"` -} - -// PackageContextTemplate is used to calculate the packageContext field of the -// resulting package variants. The plain fields and Exprs fields will be -// merged, with the Exprs fields taking precedence. -type PackageContextTemplate struct { - Data map[string]string `json:"data,omitempty"` - RemoveKeys []string `json:"removeKeys,omitempty"` - DataExprs []MapExpr `json:"dataExprs,omitempty"` - RemoveKeyExprs []string `json:"removeKeyExprs,omitempty"` -} - -// InjectionSelectorTemplate is used to calculate the injectors field of the -// resulting package variants. Exactly one of the Name and NameExpr fields must -// be specified. The other fields are optional. -type InjectionSelectorTemplate struct { - Group *string `json:"group,omitempty"` - Version *string `json:"version,omitempty"` - Kind *string `json:"kind,omitempty"` - Name *string `json:"name,omitempty"` - - NameExpr *string `json:"nameExpr,omitempty"` -} - -// MapExpr is used for various fields to calculate map entries. Only one of -// Key and KeyExpr may be specified; similarly only on of Value and ValueExpr -// may be specified. -type MapExpr struct { - Key *string `json:"key,omitempty"` - Value *string `json:"value,omitempty"` - KeyExpr *string `json:"keyExpr,omitempty"` - ValueExpr *string `json:"valueExpr,omitempty"` -} - -// PipelineTemplate is used to calculate the pipeline field of the resulting -// package variants. -type PipelineTemplate struct { - // Validators is used to caculate the pipeline.validators field of the - // resulting package variants. - // +optional - Validators []FunctionTemplate `json:"validators,omitempty"` - - // Mutators is used to caculate the pipeline.mutators field of the - // resulting package variants. - // +optional - Mutators []FunctionTemplate `json:"mutators,omitempty"` -} - -// FunctionTemplate is used in generating KRM function pipeline entries; that -// is, it is used to generate Kptfile Function objects. 
-
-To make this complex structure more comprehensible, the first thing to notice is that many fields
-have a plain version and an Expr version. The plain version is used when the value is static across
-all the *PackageVariants*. The Expr version is used when the value needs to vary across the
-*PackageVariants*.
-
-Let us consider a simple example. Suppose we have a package for provisioning namespaces that is
-called *base-ns*. We would like to instantiate this several times in the *cluster-01* repository.
-We could do this with the following *PackageVariantSet*:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  namespace: default
-  name: example
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  targets:
-  - repositories:
-    - name: cluster-01
-      packageNames:
-      - ns-1
-      - ns-2
-      - ns-3
-```
-
-This will produce three *PackageVariant* resources with the same upstream, all with the same
-downstream repository, and each with a different downstream package name. If we also want to set
-some labels identically across the packages, we can do this with the template.labels field:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  namespace: default
-  name: example
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  targets:
-  - repositories:
-    - name: cluster-01
-      packageNames:
-      - ns-1
-      - ns-2
-      - ns-3
-    template:
-      labels:
-        package-type: namespace
-        org: hr
-```
-
-The resulting *PackageVariant* resources include the labels in their specifications, and are
-identical apart from their names and the downstream.package:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaaa
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: ns-1
-  labels:
-    package-type: namespace
-    org: hr
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaab
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: ns-2
-  labels:
-    package-type: namespace
-    org: hr
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaac
-spec:
-  upstream:
-    repo: platform-catalog
-    package: base-ns
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: ns-3
-  labels:
-    package-type: namespace
-    org: hr
-```
-
-When using other targeting means, the use of the Expr fields becomes more likely, since we have
-more possible sources for the different field values. The Expr values are all
-[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions, rather than static
-values. This allows the user to construct values based on the various fields of the targets.
-Consider again the RepositorySelector example, where we have these repositories in the cluster.
-
-| Repository | Labels                                |
-| ---------- | ------------------------------------- |
-| cluster-01 | region=useast1, env=prod, org=hr      |
-| cluster-02 | region=uswest1, env=prod, org=finance |
-| cluster-03 | region=useast2, env=prod, org=hr      |
-| cluster-04 | region=uswest1, env=prod, org=hr      |
-
-If we create a *PackageVariantSet* with the following specification, then we can use the Expr fields
-to add labels to the *PackageVariantSpecs* (and therefore to the resulting *PackageRevisions* later)
-that vary according to the cluster. We can also use this to diversify the injectors defined for each
-*PackageVariant*, resulting in each *PackageRevision* having different resources injected. The
-following specification results in three *PackageVariant* resources, one for each repository
-matching the *env=prod* and *org=hr* labels.
-
-```yaml
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  targets:
-  - repositorySelector:
-      matchLabels:
-        env: prod
-        org: hr
-    template:
-      labelExprs:
-      - key: org
-        valueExpr: "repository.labels['org']"
-      injectorExprs:
-      - nameExpr: "repository.labels['region'] + '-endpoints'"
-```
-
-The labels and injectors fields of the *PackageVariantSpec* are different for each of the
-*PackageVariants*, as determined by the use of the Expr fields in the template, as shown here:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaaa
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-01
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: useast1-endpoints
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaab
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-03
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: useast2-endpoints
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  namespace: default
-  name: example-aaac
-spec:
-  upstream:
-    repo: example-repo
-    package: foo
-    revision: v1
-  downstream:
-    repo: cluster-04
-    package: foo
-  labels:
-    org: hr
-  injectors:
-  - name: uswest1-endpoints
-```
-
-Since the injectors are different for each *PackageVariant*, each of the resulting
-*PackageRevisions* has different resources injected.
-
-When CEL expressions are evaluated, they have an environment associated with them. That is, there
-are certain objects that are accessible within the CEL expression. For CEL expressions used in the
-*PackageVariantSet* template field, the following variables are available:
-
-| CEL variable   | Variable contents                                             |
-| -------------- | ------------------------------------------------------------- |
-| repoDefault    | The default repository name based on the targeting criteria.  |
-| packageDefault | The default package name based on the targeting criteria.     |
-| upstream       | The upstream *PackageRevision*.                               |
-| repository     | The downstream repository.                                    |
-| target         | The target object (details vary; see below).                  |
-
-There is one expression that is an exception to the above table. Since the repository value
-corresponds to the downstream repository, we must first evaluate the downstream.repoExpr expression
-to find that repository. Therefore, for this expression only, *repository* is not a valid variable.
-
-The *target* variable, which is available across all the CEL expressions, has a meaning that varies
-depending on the type of target, as follows:
-
-| Target type         | Target variable contents                                                                                                                                                               |
-| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Repo/package list   | A struct with two fields, repo and package, containing the same values as repoDefault and packageDefault.                                                                               |
-| Repository selector | The repository selected by the selector. Although not recommended, this can be different from the repository value, which can be altered with the downstream.repo or the downstream.repoExpr. |
-| Object selector     | The object selected by the selector.                                                                                                                                                     |
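-
-As a purely illustrative sketch (the *env* label and the naming scheme are hypothetical), a
-template could use the target variable to derive the downstream names, remembering that
-*repository* is not yet in scope inside downstream.repoExpr:
-
-```yaml
-  template:
-    downstream:
-      # repoExpr is evaluated first; only repoDefault, packageDefault, upstream,
-      # and target are available at this point
-      repoExpr: "target.labels['env'] + '-' + repoDefault"
-      # by the time packageExpr is evaluated, the downstream repository has been loaded
-      packageExpr: "packageDefault + '-' + target.name"
-```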
-
-For the various resource variables - upstream, repository, and target - arbitrary access to all the
-fields of the object could lead to security concerns. Therefore, only a subset of the data is
-available for use in CEL expressions, specifically, the following fields: name, namespace, labels,
-and annotations.
-
-Given the minor quirk with the repoExpr, it may be helpful to state the processing flow for the
-template evaluation:
-
-1. The upstream *PackageRevision* is loaded. It must be in the same namespace as the
-   *PackageVariantSet*[^multi-ns-reg].
-2. The targets are determined.
-3. For each target, the following steps are performed:
-
-   1. The CEL environment is prepared with the repoDefault, packageDefault, upstream, and target
-      variables.
-   2. The downstream repository is determined and loaded, as follows:
-
-      - If present, the downstream.repoExpr is evaluated using the CEL environment. The result is
-        used as the downstream repository name.
-      - If the downstream.repo is set, then this is used as the downstream repository name.
-      - If neither the downstream.repoExpr nor the downstream.repo is present, then the default
-        repository name, based on the target, is used (that is, the same value as the repoDefault
-        variable).
-      - The resulting downstream repository name is used to load the corresponding repository
-        object in the same namespace as the *PackageVariantSet*.
-
-   3. The downstream repository is added to the CEL environment.
-   4. All other CEL expressions are evaluated.
-
-4. If any of the resources, such as the upstream *PackageRevision* or the downstream repository,
-   are not found or otherwise fail to load, then the processing stops and a failure condition is
-   raised. Similarly, if a CEL expression cannot be properly evaluated, due to syntax or other
-   issues, then the processing stops and a failure condition is raised.
-
-#### Other considerations
-
-It seems convenient to automatically inject the *PackageVariantSet* targeting resource. However, it
-is better to require the package to advertise the ways in which it accepts injections (that is, the
-GVKs that it understands), and only inject those. This keeps the separation of concerns cleaner: the
-package does not need to build in an awareness of the context in which it expects to be deployed. For
-example, a package should not accept a Porch repository resource just because that happens to be the
-targeting mechanism. That would make the package unusable in other contexts.
-
-#### PackageVariantSet status
-
-The *PackageVariantSet* status uses the following conditions:
-
- - Stalled is set to *True* if there has been a failure that likely requires user intervention.
- - Ready is set to *True* if the last reconciliation has successfully reconciled all the targeted
-   *PackageVariant* resources.
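-
-A minimal sketch of how these conditions might appear on a healthy resource (the reason and
-message strings below are illustrative, not taken from the implementation):
-
-```yaml
-status:
-  conditions:
-  - type: Stalled
-    status: "False"
-    reason: NoErrors          # hypothetical reason string
-    message: no user intervention required
-  - type: Ready
-    status: "True"
-    reason: Reconciled        # hypothetical reason string
-    message: all targeted PackageVariants have been reconciled
-```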
-
-## Future considerations
-- As an alternative to the floating tag proposal, it may instead be desirable to have a separate tag
-  tracking controller that can update the PV and PVS resources, to tweak their upstream as the tag
-  moves.
-- Installing a collection of packages across a set of clusters, or performing the same mutations to
-  each package in a collection, is only supported by creating multiple *PackageVariant*/
-  *PackageVariantSet* resources. The following options could be considered to support these use
-  cases:
-
-  - An upstreams field listing multiple packages.
-  - A label selector against *PackageRevisions*. This does not seem particularly useful, as
-    *PackageRevisions* are highly reusable and would probably be composed in many different ways.
-  - A *PackageRevisionSet* resource that simply contains a list of upstream structures and could be
-    used as an upstream. This is functionally equivalent to the upstreams option, except that the
-    list is reusable across resources.
-  - Listing multiple *PackageRevisionSets* in the upstream is also desirable.
-  - Any or all of the above options could be implemented in the *PackageVariant* or
-    *PackageVariantSet*, or both.
-
-## Footnotes
-
-[^porch17]: Implemented in Porch v0.0.17.
-[^porch18]: Available in Porch v0.0.18.
-[^notimplemented]: Proposed here, but not yet implemented in Porch v0.0.18.
-[^setns]: As of writing, the set-namespace function does not have a *create* option. This should be
-  added, in order to avoid the user also needing to use the `upsert-resource` function. Such common
-  operations should be simple for users.
-[^pvsimpl]: This document describes *PackageVariantSet* v1alpha2, which will be available from
-  Porch v0.0.18 onwards. In Porch v0.0.16 and 17, the v1alpha1 implementation is available, but it
-  is a somewhat different API, which does not support CEL or any injection. It is focused only on
-  fan-out targeting, and uses a [slightly different targeting API](https://github.com/nephio-project/porch/blob/main/controllers/packagevariants/api/v1alpha1/packagevariant_types.go).
-[^repo-pkg-expr]: This is not exactly correct. As we will see later in the template discussion, the
-  repository and package names listed are just defaults for the template. They can be further
-  manipulated in the template to reference different downstream repositories and package names. The
-  same is true for the repositories selected via the `repositorySelector` option. However, this can
-  be ignored for now.
-[^multi-ns-reg]: Note that the same upstream repository can be registered in multiple namespaces
-  without any problems. This simplifies access controls, avoiding the need for cross-namespace
-  relationships between the repositories and other Porch resources.
diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/_index.md b/content/en/docs/neo-porch/6_configuration_and_deployments/_index.md deleted file mode 100644 index 96250b16..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Configuration & Deployments" -type: docs -weight: 6 -description: Configuring porch deployments ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/_index.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/_index.md deleted file mode 100644 index 24dc6727..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Configurations" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -this section should explain the different configuration options that porch has e.g. cr/db cache or using private registries or cert manager for webhooks etc diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cache.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cache.md deleted file mode 100644 index bd22417c..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cache.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Cache Configuration" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -explain that by default we automatically set up the CR cache, db cache can be set up by doing xyz (not the place for explaining its inner workings just its setup/config) diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cert-manager-webhooks.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cert-manager-webhooks.md deleted file mode 100644 index a04367e3..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/cert-manager-webhooks.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Cert Manager Webhooks" -type: docs -weight: 5 -description: ---- - -## Lorem Ipsum - -explain how to configure cert manager to automatically create and sign tls certificates for webhook management. 
[deployment-catalog-config](https://github.com/nephio-project/catalog/tree/main/nephio/optional/porch-cert-manager-webhook)
diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/jeager-tracing.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/jeager-tracing.md
deleted file mode 100644
index 3c3c8dc7..00000000
--- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/jeager-tracing.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Jaeger Tracing"
-type: docs
-weight: 4
-description:
----
-
-## Lorem Ipsum
-
-explain how to configure Jaeger to trace porch [old-jaeger-setup]({{% relref "/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/environment-setup/#enabling-open-telemetryjaeger-tracing" %}})
diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/private-registries.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/private-registries.md
deleted file mode 100644
index ba722e09..00000000
--- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/private-registries.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Private Registries"
-type: docs
-weight: 3
-description:
----
-
-## Lorem Ipsum
-
-PUBLIC VS PRIVATE REPOS FOR KPT FUNCTIONS USED BY PORCH, NOT REPOS WHERE PACKAGES ARE STORED!!!
-
-## (Default) Public Image Registries
-
-by default we have [PUBLIC IMAGE REPOSITORIES] GCR OR KPT/DEV
-
-## Setting up Private Registries
-
-old guide can be found here [old-private-registry-setup-guide]({{% relref "/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/git-authentication-config.md" %}})
-
-## Setting up TLS Authentication for Private Registries
-
-old guide can be found here [old-private-registry-tls-setup-guide]({{% relref "/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/using-authenticated-private-registries.md" %}})
diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/repository-sync.md b/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/repository-sync.md
deleted file mode 100644
index 7cb6df4a..00000000
--- a/content/en/docs/neo-porch/6_configuration_and_deployments/configurations/repository-sync.md
+++ /dev/null
@@ -1,162 +0,0 @@
----
-title: "Repository Sync Configuration"
-type: docs
-weight: 1
-description: Configure repository synchronization for Porch Repositories
----
-
-## Sync Configuration Fields
-
-The `spec.sync` field in a Repository CR controls synchronization behavior with the external repository. Repositories without sync configuration use the system default for periodic synchronization (10 minutes, which can be overridden via the RepoSyncFrequency parameter of the porch-server deployment).
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: example-repo
-  namespace: default
-spec:
-  sync:
-    schedule: "*/10 * * * *" # Periodic sync using cron expression
-    runOnceAt: "2024-01-15T10:30:00Z" # One-time sync at specific time
-```
-
-### Schedule Field
-
-The `schedule` field accepts standard [cron expressions](https://en.wikipedia.org/wiki/Cron) for periodic synchronization:
-
-- **Format**: Standard 5-field cron expression (`minute hour day month weekday`)
-- **Examples**:
-  - `"*/10 * * * *"` - Every 10 minutes
-  - `"0 */2 * * *"` - Every 2 hours
-  - `"0 9 * * 1-5"` - 9 AM on weekdays
-  - `"0 0 * * 0"` - Weekly on Sunday at midnight
-
-### RunOnceAt Field
-
-The `runOnceAt` field schedules a one-time sync at a specific timestamp:
-
-- **Format**: RFC3339 timestamp (`metav1.Time`)
-- **Examples**:
-  - `"2025-01-15T14:30:00Z"` - Sync at 2:30 PM UTC on January 15, 2025
-  - `"2025-12-25T00:00:00Z"` - Sync at midnight UTC on Christmas Day
-  - `"2025-06-01T09:15:30Z"` - Sync at 9:15:30 AM UTC on June 1st
-  - `"2025-12-10T15:45:00-05:00"` - Sync at 3:45 PM EST (UTC-5) on December 10th
-- **Behavior**:
-  - Executes once at the specified time
-  - Ignored if the timestamp is in the past
-  - Independent of the periodic schedule
-  - Can be updated to reschedule
-
-**Note**: One-time syncs should only be used when discrepancies are found between the external repository and the Porch cache. Under normal conditions, rely on periodic syncs for regular synchronization.
-
-## Complete Examples
-
-### Git Repository with Periodic Sync
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: blueprints
-  namespace: default
-spec:
-  description: Blueprints with hourly sync
-  type: git
-  sync:
-    schedule: "0 * * * *" # Every hour
-  git:
-    repo: https://github.com/example/blueprints.git
-    branch: main
-    directory: packages
-```
-
-### Combined Periodic and One-time Sync
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: combined-sync
-  namespace: default
-spec:
-  type: git
-  sync:
-    schedule: "0 */6 * * *" # Every 6 hours
-    runOnceAt: "2024-01-15T09:00:00Z" # Sync once
-  git:
-    repo: https://github.com/example/repo.git
-    branch: main
-```
-
-## Sync Behavior
-
-### Default Behavior
-- Without `spec.sync`: Uses the system default sync frequency
-- Empty `schedule`: Falls back to the default frequency
-- Invalid cron expression: Falls back to the default frequency
-
-**Default frequency**: 10 minutes (can be overridden via the RepoSyncFrequency parameter of the porch-server deployment)
-
-### Sync Manager Operation
-Each repository runs two independent goroutines:
-
-1. **Periodic Sync** (`syncForever`):
-   - Syncs once at startup
-   - Follows the cron schedule or default frequency
-   - Updates repository conditions after each sync
-
-2. **One-time Sync** (`handleRunOnceAt`):
-   - Monitors `runOnceAt` field changes
-   - Creates/cancels timers as needed
-   - Executes independently of the periodic sync
-
-### Status Updates
-Repository sync status is reflected in the Repository CR conditions:
-
-```yaml
-status:
-  conditions:
-  - type: Ready
-    status: "True"
-    reason: Ready
-    message: 'Repository Ready (next sync scheduled at: 2025-11-05T11:55:38Z)'
-    lastTransitionTime: "2024-01-15T10:30:00Z"
-```
-
-## Troubleshooting
-
-### Common Issues
-
-1. **Invalid Cron Expression**:
-   - Check porch-server logs for parsing errors
-   - Verify the 5-field format
-   - The repository falls back to the default frequency
-
-2. 
**Past RunOnceAt Time**: - - One-time sync is skipped - - Update to future timestamp - - Check porch-server logs for details - -3. **Sync Failures**: - - Check repository conditions - - Verify authentication credentials - - Review repository accessibility - - Check porch-server logs for detailed error information - -### Monitoring -- Repository conditions show sync status -- Porch-server logs contain detailed sync information, next sync times, and any errors - - -## CLI Commands - -For repository registration and sync commands, see the [porchctl CLI guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}}): -- [Repository Registration]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md#repository-registration" %}}) - Register repositories with sync configuration -- [Repository Sync Command]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md#repository-sync-command" %}}) - Trigger immediate repository synchronization - ---- - -{{% alert title="Note" color="primary" %}} -OCI repository support is experimental and may not have full feature parity with Git repositories. -{{% /alert %}} \ No newline at end of file diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/_index.md b/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/_index.md deleted file mode 100644 index 3f305d67..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "Deployments" -type: docs -weight: 1 -description: ---- - -## Lorem Ipsum - -this section should explain deploying porch in its different environments e.g. catalog standard deployment or the development environment (not explaining launch.json and config etc) \ No newline at end of file diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/catalog-deployment.md b/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/catalog-deployment.md deleted file mode 100644 index d2c46555..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/catalog-deployment.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Catalog Deployment" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -installing porch from the [catalog](https://github.com/nephio-project/catalog/tree/main/nephio/core/porch). -how to deploy guide can be found [old-install-guide]({{% relref "/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/install-porch.md" %}}) diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/local-dev-env-deployment.md b/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/local-dev-env-deployment.md deleted file mode 100644 index e1ede2bc..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/deployments/local-dev-env-deployment.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: "Local Development Environment Setup" -type: docs -weight: 3 -description: "A guide to setting up a local environment for developing and testing with Porch." ---- - -# Local Development Environment Setup - -This guide provides instructions for setting up a local development environment using `kind` (Kubernetes in Docker). This setup is ideal for developing, testing, and exploring Porch functionalities. 
- -## Table of Contents - -- [Prerequisites](#prerequisites) -- [Local Environment Setup](#local-environment-setup) -- [Verifying the Setup](#verifying-the-setup) - -## Prerequisites - -Before you begin, ensure you have the following tools installed on your system: - -* **[Docker](https://docs.docker.com/get-docker/):** For running containers, including the `kind` cluster. -* **[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/):** The Kubernetes command-line tool for interacting with your cluster. -* **[kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation):** A tool for running local Kubernetes clusters using Docker container "nodes". - -The setup scripts provided in the Porch repository will handle the installation of Porch itself and its CLI, `porchctl`. - -## Local Environment Setup - -Follow these steps from the root directory of your cloned Porch repository to set up your local environment. - -1. **Bring up the `kind` cluster:** - - This script creates a local Kubernetes cluster with the necessary configuration for Porch. - - ```bash - ./scripts/setup-dev-env.sh - ``` - -2. **Build and load Porch images:** - - **Choose one of the following options** to build the Porch container images and load them into your `kind` cluster. - - * **CR-CACHE (Default):** Uses a cache backed by a Custom Resource (CR). - ```bash - make run-in-kind - ``` - - * **DB-CACHE:** Uses a PostgreSQL database as the cache backend. - ```bash - make run-in-kind-db-cache - ``` - -## Verifying the Setup - -After the setup scripts complete, verify that all components are running correctly. - -1. **Check Pod Status:** - - Ensure all pods in the `porch-system` namespace are in the `READY` state. - - ```bash - kubectl get pods -n porch-system - ``` - -2. **Verify CRD Availability:** - - Confirm that the `PackageRevision` Custom Resource Definition (CRD) has been successfully registered. - - ```bash - kubectl api-resources | grep packagerevisions - ``` - -3. **Configure `porchctl` (Optional):** - - The `porchctl` binary is built into the `.build/` directory. For convenient access, add it to your system's `PATH`. - - ```bash - # You can copy the binary to a directory in your PATH, for example: - sudo cp ./.build/porchctl /usr/local/bin/porchctl - - # Alternatively, you can add the build directory to your PATH: - export PATH="$(pwd)/.build:$PATH" - ``` - -4. **Access Gitea UI (Optional):** - - The local environment includes a Gitea instance for Git repository hosting. You can access it at [http://localhost:3000](http://localhost:3000). 
- - * **Username:** `nephio` - * **Password:** `secret` diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/_index.md b/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/_index.md deleted file mode 100644 index ca21f73f..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: -1 -description: ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/environment-setup.md b/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/environment-setup.md deleted file mode 100644 index a9b1a882..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/environment-setup.md +++ /dev/null @@ -1,255 +0,0 @@ ---- -title: "Setting up a local environment" -type: docs -weight: 2 -description: ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-This tutorial gives short instructions on how to set up a development environment for Porch on your local machine. It
-outlines the steps to get a [kind](https://kind.sigs.k8s.io/) cluster up and running to which a Porch instance running
-in Visual Studio Code can connect and interact. If you are not familiar with how porch works, it is highly recommended
-that you go through the [Starting with Porch tutorial]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) before going through this one.
-
-{{% alert title="Note" color="primary" %}}
-
-As your development environment, you can run the code on a remote VM and use the
-[VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
-plugin to connect to it.
-
-{{% /alert %}}
-
-## Extra steps for MacOS users
-
-The setup script uses the make deployment-config target to generate the deployment files for porch. The scripts called
-by this make target use recent *bash* additions, while MacOS comes with *bash* 3.x.x:
-
-1. Install *bash* 4.x.x or later using homebrew, see
-   [this post for details](https://apple.stackexchange.com/questions/193411/update-bash-to-version-4-0-on-osx)
-2. Ensure that */opt/homebrew/bin* is earlier in your path than */bin* and */usr/bin*
-
-{{% alert title="Note" color="primary" %}}
-
-The changes above **permanently** change the *bash* version for **all** applications and may cause side
-effects.
-
-{{% /alert %}}
-
-
-## Set up the environment automatically
-
-The [*./scripts/setup-dev-env.sh*](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) setup
-script automatically builds a porch development environment.
-
-{{% alert title="Note" color="primary" %}}
-
-This is only one of many possible ways of building a working porch development environment, so feel free
-to customize it to suit your needs.
-
-{{% /alert %}}
-
-The setup script will perform the following steps:
-
-1. Install a kind cluster. The name of the cluster is read from the PORCH_TEST_CLUSTER environment variable; otherwise
-   it defaults to porch-test. The configuration of the cluster is taken from
-   [here](https://github.com/nephio-project/porch/blob/main/deployments/local/kind_porch_test_cluster.yaml).
-1. Install the MetalLB load balancer into the cluster, in order to allow LoadBalancer typed Services to work properly.
-1. Install the Gitea git server into the cluster. This can be used to test porch during development, but it is not used
-   in automated end-to-end tests. Gitea is exposed to the host via port 3000, and the GUI is accessible via that port
-   (username: nephio, password: secret).
-   {{% alert title="Note" color="primary" %}}
-
-   If you are using WSL2 (Windows Subsystem for Linux), then Gitea is also accessible from the Windows host via the
-   same URL.
-
-   {{% /alert %}}
-1. Generate the PKI resources (key pairs and certificates) required for end-to-end tests.
-1. Build the porch CLI binary. The result will be generated as *.build/porchctl*.
-
-That's it! If you want to run the steps manually, use the code of the script as a detailed description.
-
-The setup script is idempotent in the sense that you can rerun it without cleaning up first. This also means that if the
-script is interrupted for any reason and you run it again, it should effectively continue the process where it left off.
-
-## Extra manual steps
-
-Copy the *.build/porchctl* binary (that was built by the setup script) to somewhere in your $PATH, or add the *.build*
-directory to your PATH.
- -## Build and deploy porch - -You can build all of porch, and also deploy it into your newly created kind cluster with this command. - -```bash -make run-in-kind -``` - -See more advanced variants of this command in the [detailed description of the development process]({{% relref "/docs/porch/contributors-guide/dev-process.md" %}}). - -## Check that everything works as expected - -At this point you are basically ready to start developing porch, but before you start it is worth checking that -everything works as expected. - -### Check that the APIservice is ready - -```bash -kubectl get apiservice v1alpha1.porch.kpt.dev -``` - -Sample output: - -```bash -NAME SERVICE AVAILABLE AGE -v1alpha1.porch.kpt.dev porch-system/api True 18m -``` - -### Check the porch api-resources - -```bash -kubectl api-resources | grep porch -``` - -Sample output: - -```bash -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant -packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true PorchPackage -``` - -## Create Repositories using your local Porch server - -To connect Porch to Gitea, follow [step 7 in the Starting with Porch]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) -tutorial to create the repositories in Porch. - -You will notice logging messages in VS Code when you run the `kubectl apply -f porch-repositories.yaml` command. - -You can check that your locally running Porch server has created the repositories by running the `porchctl` command: - -```bash -porchctl repo get -A -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You can also check the repositories using *kubectl*. - -```bash -kubectl get repositories -n porch-demo -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You now have a locally running Porch (API)server. Happy developing! - -## Restart from scratch - -Sometimes the development cluster gets cluttered and you may experience weird behavior from porch. -In this case you might want to restart from scratch, by deleting the development cluster with the following -command: - -```bash -kind delete cluster --name porch-test -``` - -and running the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) again: - -```bash -./scripts/setup-dev-env.sh -``` - -## Getting started with actual development - -You can find a detailed description of the actual development process [here]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}}). - -## Enabling Open Telemetry/Jaeger tracing - -### Enabling tracing on a Porch deployment - -Follow the steps below to enable Open Telemetry/Jaeger tracing on your Porch deployment. - -1. Apply the Porch *deployment.yaml* manifest for Jaeger. 
- -```bash -kubectl apply -f https://raw.githubusercontent.com/nephio-project/porch/refs/heads/main/deployments/tracing/deployment.yaml -``` - -2. Add the environment variable *OTEL* to the porch-server manifest: - -```bash -kubectl edit deployment -n porch-system porch-server -``` - -```bash -env: -- name: OTEL - value: otel://jaeger-oltp:4317 -``` - -3. Set up port forwarding of the Jaeger HTTP port to your local machine: - -```bash -kubectl port-forward -n porch-system service/jaeger-http 16686 -``` - -4. Open the Jaeger UI in your browser at *http://localhost:16686* - -### Enable tracing on a local Porch server - -Follow the steps below to enable Open Telemetry/Jaeger tracing on a porch server running locally on your machine, such as in VS Code. - -1. Download the Jaeger binary tarball for your local machine architecture from [the Jaeger download page](https://www.jaegertracing.io/download/#binaries) and untar the tarball in some suitable directory. - -2. Run Jaeger: - -```bash -cd jaeger -./jaeger-all-in-one -``` - -3. Configure the Porch server to output Open Telemetry traces: - - Set the *OTEL* environment variable to point at the Jaeger server - - In *.vscode/launch.json*: - -```bash -"env": { - ... - ... -"OTEL": "otel://localhost:4317", - ... - ... -} -``` - - In a shell: - -```bash -export OTEL="otel://localhost:4317" -``` - -4. Open the Jaeger UI in your browser at *http://localhost:16686* - -5. Run the Porch Server. - diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/git-authentication-config.md b/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/git-authentication-config.md deleted file mode 100644 index 170eb397..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/git-authentication-config.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: "Authenticating to Remote Git Repositories" -type: docs -weight: 2 -description: "" ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Porch Server to Git Interaction
-
-The Porch server handles interaction with associated git repositories through Porch Repository CRs (Custom Resources), which act as the link between the Porch server and the git repositories the server is meant to interact with and store packages on.
-
-More information on porch repositories can be found [here]({{% relref "/docs/porch/package-orchestration.md#repositories" %}}).
-
-There are two main methods of authenticating to a git repository, plus an additional TLS configuration:
-
-1. Basic Authentication
-2. Bearer Token Authentication
-3. HTTPS/TLS Configuration
-
-### Basic Authentication
-
-A porch repository object can be created with the `porchctl repo reg porch-test-repository -n porch-test http://example-ip:example-port/repo.git --repo-basic-password=password --repo-basic-username=username` command, which creates both a secret and a repository object.
-
-The basic authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have data keys named *username* and *password* containing the relevant information.
-- Be of type *kubernetes.io/basic-auth*.
-
-The value used in the *password* field can be substituted with a base64-encoded Personal Access Token (PAT) from the git instance being used. An example of this can be found [here]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md#repository-registration" %}})
-
-This is equivalent to doing a `kubectl apply -f` on a YAML file with the following content (assuming the porch-test namespace exists on the cluster):
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  username: base-64-encoded-username
-  password: base-64-encoded-password # or base64-encoded-PAT
-type: kubernetes.io/basic-auth
-
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-
-metadata:
-  name: porch-test-repository
-  namespace: porch-test
-
-spec:
-  description: porch test repository
-  content: Package
-  deployment: false
-  type: git
-  git:
-    repo: http://example-ip:example-port/repo.git
-    directory: /
-    branch: main
-    secretRef:
-      name: git-auth-secret
-```
-
-When the Porch server interacts with a Git instance through this http-basic-auth configuration, it does so over HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT
-https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-    User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-    Authorization: Basic bmVwaGlvOnNlY3JldA==
-    Accept: application/json, */*
-    Content-Type: application/json
-```
-
-where *bmVwaGlvOnNlY3JldA==* is base64 encoded in the format *username:password* and after base64 decoding becomes *nephio:secret*. For simple personal access token login, the password section can be substituted with the PAT token.
-
-### Bearer Token Authentication
-
-The authentication to the git repository can be configured to use the bearer token format by altering the secret used in the porch repository object.
-
-The bearer token authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have a data key named *bearerToken* containing the relevant git token information.
-- Be of type *Opaque*.
-
-For example:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  bearerToken: base-64-encoded-bearer-token
-type: Opaque
-```
-
-When the Porch server interacts with a Git instance through this http-token-auth configuration, it does so over HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-    User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-    Authorization: Bearer 4764aacf8cc6d72cab58e96ad6fd3e3746648655
-    Accept: application/json, */*
-    Content-Type: application/json
-```
-
-where *4764aacf8cc6d72cab58e96ad6fd3e3746648655* in the Authorization header is a PAT token, but can be whichever type of bearer token is accepted by the user's git instance.
-
-{{% alert title="Note" color="primary" %}}
-Note that the Porch server caches the authentication credentials from the secret; if the secret's contents are updated, the cached credentials may therefore not be the ones used for authentication.
-
-When the cached old secret credentials are no longer valid, the Porch server queries the secret again and uses the new credentials.
-
-If these new credentials are valid, they become the new cached authentication credentials.
-{{% /alert %}}
-
-### HTTPS/TLS Configuration
-
-To enable the porch server to communicate with a custom git deployment over HTTPS, we must:
-
-1. Provide an additional argument flag *use-git-cabundle=true* to the porch-server deployment.
-2. Provide an additional Kubernetes secret containing the relevant certificate chain in the form of a CA bundle.
-
-The secret itself must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR that requires it.
-- Be named specifically *\<namespace\>-ca-bundle*.
-- Have a data key named *ca.crt* containing the relevant CA certificate (chain).
-
-For example, a Git Repository is hosted over HTTPS at the URL: `https://my-gitlab.com/joe.bloggs/blueprints.git`
-
-Before creating the new Repository in the **GitLab** namespace, we must create a secret that fulfils the criteria above.
- -`kubectl create secret generic gitlab-ca-bundle --namespace=gitlab --from-file=ca.crt` - -Which would produce the following: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: gitlab-ca-bundle - namespace: gitlab -type: Opaque -data: - ca.crt: FAKE1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNuakNDQWdHZ0F3SUJBZ0lRTEdmUytUK3YyRDZDczh1MVBlUUlKREFLQmdncWhrak9QUVFEQkRBZE1Sc3cKR1FZRFZRUURFeEpqWlhKMExXMWhibUZuWlhJdWJHOWpZV3d3SGhjTk1qUXdOVE14TVRFeU5qTXlXaGNOTWpRdwpPREk1TVRFeU5qTXlXakFWTVJNd0VRWURWUVFGRXdveE1qTTBOVFkzT0Rrd01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhCUUtWMEVzQ1JOOGxuV3lQR1ZWNXJwam5QZkI2emszK0N4cEp2NVMKUWhpMG1KbDI0elV1WWZjRzNxdFUva1NuREdjK3NQRUY0RmlOcUlsSTByWHBQSXBPazhKbjEvZU1VT3RkZUUyNgpSWEZBWktjeDVvdUJyZVNja3hsN2RPVkJnOE1EM1h5RU1PQU5nM0hJZ1J4ZWx2U2p1dy8vMURhSlRnK0lBS0dUCkgrOVlRVFcrZDIwSk5wQlR3NkdnQlRsYmdqL2FMRWEwOXVYSVBjK0JUSkpXRThIeDhkVjFNbEtHRFlDU29qZFgKbG9TN1FIa0dsSVk3M0NPZVVGWEVnTlFVVmZaZHdreXNsT3F4WmdXUTNZTFZHcEFyRitjOVdyUGpQQU5NQWtORQpPdHRvaG8zTlRxQ3FST3JEa0RMYWdsU1BKSUd1K25TcU5veVVxSUlWWkV5R1dRSURBUUFCbzJBd1hqQU9CZ05WCkhROEJBZjhFQkFNQ0JhQXdEQVlEVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCUitFZTVDTnVJSkcwZjkKV3J3VzdqYUZFeVdzb1RBZEJnTlZIUkVFRmpBVWdoSm5hWFJzWVdJdVpYaGhiWEJzWlM1amIyMHdDZ1lJS29aSQp6ajBFQXdRRGdZb0FNSUdHQWtGLzRyNUM4bnkwdGVIMVJlRzdDdXJHYk02SzMzdTFDZ29GTkthajIva2ovYzlhCnZwODY0eFJKM2ZVSXZGMEtzL1dNUHNad2w2bjMxUWtXT2VpM01aYWtBUUpCREw0Kyt4UUxkMS9uVWdqOW1zN2MKUUx3NXVEMGxqU0xrUS9mOTJGYy91WHc4QWVDck5XcVRqcDEycDJ6MkUzOXRyWWc1a2UvY2VTaWFPUm16eUJuTwpTUTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0= -``` diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/install-porch.md b/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/install-porch.md deleted file mode 100644 index 675ce46d..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/install-porch.md +++ /dev/null @@ -1,431 +0,0 @@ ---- -title: "Installing Porch" -type: docs -weight: 1 -description: "A tutorial to install Porch" ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -This tutorial is a guide to installing Porch. It is based on the -[Porch demo produced by Tal Liron of Google](https://github.com/tliron/klab/tree/main/environments/porch-demo). Users -should be comfortable using *git*, *docker*, and *kubernetes*. - -See also [the Nephio Learning Resource](https://github.com/nephio-project/docs/blob/main/learning.md) page for -background help and information. - -## Prerequisites - -The tutorial can be executed on a Linux VM or directly on a laptop. It has been verified to execute on a MacBook Pro M1 -machine and an Ubuntu 20.04 VM. - -The following software should be installed prior to running through the tutorial: - -1. [git](https://git-scm.com/) -2. [Docker](https://www.docker.com/get-started/) -3. [kubectl](https://kubernetes.io/docs/reference/kubectl/) - make sure that [kubectl context](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) configured with your cluster -4. [kind](https://kind.sigs.k8s.io/) -5. [kpt](https://github.com/kptdev/kpt) -6. [The go programming language](https://go.dev/) -7. [Visual Studio Code](https://code.visualstudio.com/download) -8. [VS Code extensions for go](https://code.visualstudio.com/docs/languages/go) - -## Clone the repository and cd into the tutorial - -```bash -git clone https://github.com/nephio-project/porch.git - -cd porch/examples/tutorials/starting-with-porch/ -``` - -## Create the Kind clusters for management and edge1 - -Create the clusters: - -```bash -kind create cluster --config=kind_management_cluster.yaml -kind create cluster --config=kind_edge1_cluster.yaml -``` - -Output the *kubectl* configuration for the clusters: - -```bash -kind get kubeconfig --name=management > ~/.kube/kind-management-config -kind get kubeconfig --name=edge1 > ~/.kube/kind-edge1-config -``` - -Toggling *kubectl* between the clusters: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -export KUBECONFIG=~/.kube/kind-edge1-config -``` - -## Install MetalLB on the management cluster - -Install the MetalLB load balancer on the management cluster to expose services: - -```bash -export KUBECONFIG=~/.kube/kind-management-config -kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml -kubectl wait --namespace metallb-system \ - --for=condition=ready pod \ - --selector=component=controller \ - --timeout=90s -``` - -Check the subnet that is being used by the kind network in docker - -```bash -docker network inspect kind | grep Subnet -``` - -Sample output: - -```yaml -"Subnet": "172.18.0.0/16", -"Subnet": "fc00:f853:ccd:e793::/64" -``` - -Edit the *metallb-conf.yaml* file and ensure the spec.addresses range is in the IPv4 subnet being used by the kind network in docker. - -```yaml -... -spec: - addresses: - - 172.18.255.200-172.18.255.250 -... 
-``` - -Apply the MetalLB configuration: - -```bash -kubectl apply -f metallb-conf.yaml -``` - -## Deploy and set up Gitea on the management cluster using kpt - -Get the *gitea kpt* package: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/distros/sandbox/gitea -``` - -Comment out the preconfigured IP address from the *gitea/service-gitea.yaml* file in the *gitea kpt* package: - -```bash -11c11 -< metallb.universe.tf/loadBalancerIPs: 172.18.0.200 ---- -> # metallb.universe.tf/loadBalancerIPs: 172.18.0.200 -``` - -Now render, init and apply the *gitea kpt* package: - -```bash -kpt fn render gitea -kpt live init gitea # You only need to do this command once -kpt live apply gitea -``` - -Once the package is applied, all the Gitea pods should come up and you should be able to reach the Gitea UI on the -exposed IP Address/port of the Gitea service. - -```bash -kubectl get svc -n gitea gitea - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -gitea LoadBalancer 10.96.243.120 172.18.255.200 22:31305/TCP,3000:31102/TCP 10m -``` - -The UI is available at http://172.18.255.200:3000 in the example above. - -To login to Gitea, use the credentials nephio:secret. - -## Create repositories on Gitea for management and edge1 - -On the Gitea UI, click the **+** opposite **Repositories** and fill in the form for both the *management* and *edge1* -repositories. Use default values except for the following fields: - -- Repository Name: "Management" or "edge1" -- Description: Something appropriate - -Alternatively, we can create the repositories via curl: - -```bash -curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"management"}' - -curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"edge1"}' -``` - -Check the repositories: - -```bash - curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" | grep -Po '"name": *\K"[^"]*"' -``` - -Now initialize both repositories with an initial commit. - -Initialize the *management* repository: - -```bash -cd ../repos -git clone http://172.18.255.200:3000/nephio/management -cd management - -touch README.md -git init -git checkout -b main -git config user.name nephio -git add README.md - -git commit -m "first commit" -git remote remove origin -git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/management.git -git remote -v -git push -u origin main -cd .. - ``` - -Initialize the *edge1* repository: - -```bash -git clone http://172.18.255.200:3000/nephio/edge1 -cd edge1 - -touch README.md -git init -git checkout -b main -git config user.name nephio -git add README.md - -git commit -m "first commit" -git remote remove origin -git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/edge1.git -git remote -v -git push -u origin main -cd ../../ -``` - -## Install Porch - -We will use the *Porch Kpt* package from Nephio catalog repository. - -```bash -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/porch -``` - -Now we can install porch. We render the *kpt* package and then init and apply it. 
- -```bash -kpt fn render porch -kpt live init porch # You only need to do this command once -kpt live apply porch -``` - -Check that the Porch PODs are running on the management cluster: - -```bash -kubectl get pod -n porch-system -NAME READY STATUS RESTARTS AGE -function-runner-7994f65554-nrzdh 1/1 Running 0 81s -function-runner-7994f65554-txh9l 1/1 Running 0 81s -porch-controllers-7fb4497b77-2r2r6 1/1 Running 0 81s -porch-server-68bfdddbbf-pfqsm 1/1 Running 0 81s -``` - -Check that the Porch CRDs and other resources have been created: - -```bash -kubectl api-resources | grep porch -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant -packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true Package -``` - -## Connect the Gitea repositories to Porch - -Create a demo namespace: - -```bash -kubectl create namespace porch-demo -``` - -Create a secret for the Gitea credentials in the demo namespace: - -```bash -kubectl create secret generic gitea \ - --namespace=porch-demo \ - --type=kubernetes.io/basic-auth \ - --from-literal=username=nephio \ - --from-literal=password=secret -``` - -Now, define the Gitea repositories in Porch: - -```bash -kubectl apply -f porch-repositories.yaml -``` - -Check that the repositories have been correctly created: - -```bash -kubectl get repositories -n porch-demo -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -## Configure configsync on the workload cluster - -configsync is installed on the edge1 cluster so that it syncs the contents of the *edge1* repository onto the edge1 -workload cluster. We will use the configsync package from Nephio. - -```bash -export KUBECONFIG=~/.kube/kind-edge1-config - -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/configsync -kpt fn render configsync -kpt live init configsync -kpt live apply configsync -``` - -Check that the configsync PODs are up and running: - -```bash -kubectl get pod -n config-management-system -NAME READY STATUS RESTARTS AGE -config-management-operator-6946b77565-f45pc 1/1 Running 0 118m -reconciler-manager-5b5d8557-gnhb2 2/2 Running 0 118m -``` - -Now, we need to set up a RootSync CR to synchronize the *edge1* repository: - -```bash -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/optional/rootsync -``` - -Edit the *rootsync/package-context.yaml* file to set the name of the cluster/repo we are syncing from/to: - -```bash -9c9 -< name: example-rootsync ---- -> name: edge1 -``` - -Render the package. 
This configures the *rootsync/rootsync.yaml* file in the Kpt package: - -```bash -kpt fn render rootsync -``` - -Edit the *rootsync/rootsync.yaml* file to set the IP address of Gitea and to turn off authentication for accessing -Gitea: - -```bash -11c11 -< repo: http://172.18.0.200:3000/nephio/example-cluster-name.git ---- -> repo: http://172.18.255.200:3000/nephio/edge1.git -13,15c13,16 -< auth: token -< secretRef: -< name: example-cluster-name-access-token-configsync ---- -> auth: none -> # auth: token -> # secretRef: -> # name: edge1-access-token-configsync -``` - -Initialize and apply RootSync: - -```bash -export KUBECONFIG=~/.kube/kind-edge1-config - -kpt live init rootsync # This command is only needed once -kpt live apply rootsync -``` - -Check that the RootSync CR is created: - -```bash -kubectl get rootsync -n config-management-system -NAME RENDERINGCOMMIT RENDERINGERRORCOUNT SOURCECOMMIT SOURCEERRORCOUNT SYNCCOMMIT SYNCERRORCOUNT -edge1 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 -``` - -Check that configsync is synchronized with the repository on the management cluster: - -```bash -kubectl get pod -n config-management-system -l app=reconciler -NAME READY STATUS RESTARTS AGE -root-reconciler-edge1-68576f878c-92k54 4/4 Running 0 2d17h - -kubectl logs -n config-management-system root-reconciler-edge1-68576f878c-92k54 -c git-sync -f - -``` - -The result should be similar to: - -```bash -INFO: detected pid 1, running init handler -I0105 17:50:11.472934 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.autoDetach false" -I0105 17:50:11.493046 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.pruneExpire now" -I0105 17:50:11.513487 15 main.go:473] "level"=0 "msg"="starting up" "pid"=15 "args"=["/git-sync","--root=/repo/source","--dest=rev","--max-sync-failures=30","--error-file=error.json","--v=5"] -I0105 17:50:11.514044 15 main.go:923] "level"=0 "msg"="cloning repo" "origin"="http://172.18.255.200:3000/nephio/edge1.git" "path"="/repo/source" -I0105 17:50:11.514061 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git clone -v --no-checkout -b main --depth 1 http://172.18.255.200:3000/nephio/edge1.git /repo/source" -I0105 17:50:11.706506 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:11.729292 15 main.go:737] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.729332 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git fetch -f --tags --depth 1 http://172.18.255.200:3000/nephio/edge1.git main" -I0105 17:50:11.920110 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.945545 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967150 15 main.go:726] "level"=1 "msg"="removing worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967359 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree prune" -I0105 17:50:11.987522 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree add --detach /repo/source/385295a2143f10a6cda0cf4609c45d7499185e01 385295a2143f10a6cda0cf4609c45d7499185e01 
--no-checkout" -I0105 17:50:12.057698 15 main.go:772] "level"=0 "msg"="adding worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "branch"="origin/main" -I0105 17:50:12.057988 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git reset --hard 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099783 15 main.go:833] "level"=0 "msg"="reset worktree to hash" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099805 15 main.go:838] "level"=0 "msg"="updating submodules" -I0105 17:50:12.099976 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git submodule update --init --recursive --depth 1" -I0105 17:50:12.442466 15 main.go:694] "level"=1 "msg"="creating tmp symlink" "root"="/repo/source/" "dst"="385295a2143f10a6cda0cf4609c45d7499185e01" "src"="tmp-link" -I0105 17:50:12.442494 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="ln -snf 385295a2143f10a6cda0cf4609c45d7499185e01 tmp-link" -I0105 17:50:12.453694 15 main.go:699] "level"=1 "msg"="renaming symlink" "root"="/repo/source/" "old_name"="tmp-link" "new_name"="rev" -I0105 17:50:12.453718 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="mv -T tmp-link rev" -I0105 17:50:12.467904 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git gc --auto" -I0105 17:50:12.492329 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t HEAD" -I0105 17:50:12.518878 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:12.540979 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -I0105 17:50:27.553609 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:27.600401 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:27.694035 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:27.694159 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -I0105 17:50:42.695482 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:42.733276 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:42.826422 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:42.826611 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 - -....... 
- -I0108 11:04:05.935586 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0108 11:04:05.981750 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0108 11:04:06.079536 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0108 11:04:06.079599 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -``` diff --git a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/using-authenticated-private-registries.md b/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/using-authenticated-private-registries.md deleted file mode 100644 index c0180aa3..00000000 --- a/content/en/docs/neo-porch/6_configuration_and_deployments/relevant_old_docs/using-authenticated-private-registries.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: "Using authenticated private registries with the Porch function runner" -type: docs -weight: 3 -description: "" ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-The Porch function runner pulls kpt function images from registries and uses them for rendering kpt packages in Porch. The function runner is set up by default to fetch kpt function images from public container registries such as [GCR](https://gcr.io/kpt-fn/), and the configuration options described here are not required for such public registries.
-
-## 1. Configuring function runner to operate with private container registries
-
-This section describes how to set up authentication for a private container registry containing kpt functions, hosted online (e.g. GitHub's GHCR) or locally (e.g. Harbor or JFrog), that requires authentication (username/password or token).
-
-To enable the Porch function runner to pull kpt function images from authenticated private registries, the system requires:
-
-1. Creating a Kubernetes secret using a JSON file according to the Docker configuration schema, containing valid credentials for each authenticated registry.
-2. Mounting this new secret as a volume on the function runner.
-3. Configuring private registry functionality in the function runner's arguments:
-    1. Enabling the functionality using the argument *--enable-private-registries*.
-    2. Providing the path and name of the mounted secret using the arguments *--registry-auth-secret-path* and *--registry-auth-secret-name* respectively.
-
-### 1.1 Kubernetes secret setup for private registry using docker configuration
-
-An example template of a docker *config.json* file is as follows. The base64-encoded value *bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=* of the *auth* key decodes to *my_username:my_password*, which is the format used by the configuration when authenticating.
-
-```json
-{
-    "auths": {
-        "https://index.docker.io/v1/": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        },
-        "ghcr.io": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        }
-    }
-}
-```
-
-A quick way to generate this secret from your docker *config.json* is to run the following command, where *<secret-name>* is the name you choose for the secret:
-
-```bash
-kubectl create secret generic <secret-name> --from-file=.dockerconfigjson=/path/to/your/config.json --type=kubernetes.io/dockerconfigjson --dry-run=client -o yaml -n porch-system
-```
-
-{{% alert title="Note" color="primary" %}}
-The secret must be in the same namespace as the function runner deployment. By default, this is the *porch-system* namespace.
-{{% /alert %}}
-
-This should generate a secret template, similar to the one below, which you can add to the *2-function-runner.yaml* file in the Porch catalog package found [here](https://github.com/nephio-project/catalog/tree/main/nephio/core/porch):
-
-```yaml
-apiVersion: v1
-data:
-  .dockerconfigjson: <base64-encoded-docker-config>
-kind: Secret
-metadata:
-  creationTimestamp: null
-  name: <secret-name>
-  namespace: porch-system
-type: kubernetes.io/dockerconfigjson
-```
-
-### 1.2 Mounting docker configuration secret to the function runner
-
-Next you must mount the secret as a volume on the function runner deployment. Add the following sections to the Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/auth-secret
-        name: docker-config
-        readOnly: true
-volumes:
-  - name: docker-config
-    secret:
-      secretName: <secret-name>
-```
-
-You may specify your desired paths for each `mountPath:` so long as the function runner can access them.
-
-{{% alert title="Note" color="primary" %}}
-The chosen `mountPath:` should use its own, dedicated sub-directory, so that it does not overwrite access permissions of the existing directory. For example, if you wish to mount under `/var/tmp`, use a dedicated sub-directory such as `mountPath: /var/tmp/auth-secret`.
-{{% /alert %}}
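-
-To sanity-check the mount before moving on, you can inspect the rendered Deployment (an optional check; the deployment name *function-runner* and the paths match the defaults used above):
-
-```bash
-# Print the volume mounts of the function runner's first container
-kubectl -n porch-system get deployment function-runner \
-  -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'
-```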
-
-### 1.3 Configuring function runner arguments for private registries
-
-Lastly, you must enable the private registry functionality and provide the path and name of the secret. Add the `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments to the function-runner Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-command:
-  - --enable-private-registries=true
-  - --registry-auth-secret-path=/var/tmp/auth-secret/.dockerconfigjson
-  - --registry-auth-secret-name=<secret-name>
-```
-
-The `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments have default values of *false*, */var/tmp/auth-secret/.dockerconfigjson* and *auth-secret* respectively; however, these should be overridden to enable the functionality and match user specifications.
-
-With this last step, if your Porch package uses kpt function images stored in a private registry (for example `- image: ghcr.io/private-registry/set-namespace:customv2`), the function runner will use the secret information to replicate your secret in the `porch-fn-system` namespace and specify it as an `imagePullSecret` for the function pods, as documented [here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
-
-## 2. Configuring function runner to use custom TLS for private container registries
-
-If your private container registry uses a custom certificate for TLS authentication, then extra configuration is required for the function runner to integrate with it:
-
-1. Creating a Kubernetes secret containing TLS information valid for all private registries you wish to use.
-2. Mounting the secret containing the registries' TLS information on the function runner, similarly to section 1.2.
-3. Enabling TLS functionality and providing the path of the mounted secret to the function runner using the arguments *--enable-private-registries-tls* and *--tls-secret-path* respectively.
-
-### 2.1 Kubernetes secret layout for TLS certificate
-
-A typical secret containing TLS information takes a format similar to the following:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: <tls-secret-name>
-  namespace: porch-system
-data:
-  <certificate-file-name>: <base64-encoded-PEM-certificate>
-type: kubernetes.io/tls
-```
-
-{{% alert title="Note" color="primary" %}}
-The certificate content must be in PEM (Privacy Enhanced Mail) format, and the *<certificate-file-name>* key must be *ca.crt* or *ca.pem*. No other values are accepted.
-{{% /alert %}}
-
-### 2.2 Mounting TLS certificate secret to the function runner
-
-The TLS secret must then be mounted onto the function runner, similarly to how the docker configuration secret was mounted in section 1.2:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/tls-secret/
-        name: tls-registry-config
-        readOnly: true
-volumes:
-  - name: tls-registry-config
-    secret:
-      secretName: <tls-secret-name>
-```
-
-### 2.3 Configuring function runner arguments for TLS on private registries
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments are only required if a private registry has TLS enabled. They indicate to the function runner that it should attempt authentication to the registry using TLS, and that it should use the TLS certificate information found on the path provided in *--tls-secret-path*.
-
-```yaml
-command:
-  - --enable-private-registries-tls=true
-  - --tls-secret-path=/var/tmp/tls-secret/
-```
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments have default values of *false* and */var/tmp/tls-secret/* respectively; however, these should be configured by the user and are only necessary when using a private registry secured with TLS.
-
-### Function runner logic flow when TLS registries are enabled
-
-It is important to note that enabling TLS registry functionality makes the function runner attempt a connection to the registry referenced in the kpt package using the mounted TLS certificate. If this certificate is invalid for the provided registry, it will try again using the intermediate certificates stored on the machine for use in TLS with "well-known websites" (e.g. GitHub). If this also fails, it will attempt to connect without TLS; if this last resort fails, it will return an error to the user.
-
-{{% alert title="Note" color="primary" %}}
-It is vital that the user has pre-configured the Kubernetes node on which the function runner is operating with the same TLS certificate information as is used in the *<tls-secret-name>* secret. If this is not configured correctly, then even if the certificate is correctly configured in the function runner, the kpt function will not run - the function runner will be able to pull the image, but the KRM function pod created to run the function will fail with the error *x509 certificate signed by unknown authority*.
-This pre-configuration setup is heavily cluster/implementation-dependent - consult your cluster's specific documentation about adding self-signed certificates or private/internal CA certs to your cluster.
-{{% /alert %}}
diff --git a/content/en/docs/neo-porch/7_cli_api/_index.md b/content/en/docs/neo-porch/7_cli_api/_index.md
deleted file mode 100644
index 055356e4..00000000
--- a/content/en/docs/neo-porch/7_cli_api/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "CLI / API Reference"
-type: docs
-weight: 7
-description: Reference to the underlying porch cli and api
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/7_cli_api/porchctl.md b/content/en/docs/neo-porch/7_cli_api/porchctl.md
deleted file mode 100644
index ae06d055..00000000
--- a/content/en/docs/neo-porch/7_cli_api/porchctl.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Porchctl Guide"
-type: docs
-weight: 2
-description: Usage guide of the porchctl CLI
----
-
-## Lorem Ipsum
-
-Most of the old CLI guide can be proofread and likely adapted as the new guide: [old-porchctl-cli-guide]({{% relref "/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md" %}})
diff --git a/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/_index.md b/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/_index.md
deleted file mode 100644
index 85d770ba..00000000
--- a/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/_index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "[### Old Docs ###]"
-type: docs
-weight: 2
-description:
----
-
-## Lorem Ipsum
-
-Lorem Ipsum
diff --git a/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md b/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md
deleted file mode 100644
index cc90703d..00000000
--- a/content/en/docs/neo-porch/7_cli_api/relevant_old_docs/porchctl-cli-guide.md
+++ /dev/null
@@ -1,895 +0,0 @@
----
-title: "Using the Porch CLI tool"
-type: docs
-weight: 3
-description:
----
-
-
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Setting up the porchctl CLI
-
-The Porch CLI uses the `porchctl` command.
-To use it locally, [download](https://github.com/nephio-project/porch/releases/tag/dev) it, unpack it, and add it to your PATH.
-
-{{% alert title="Note" color="primary" %}}
-
-Installation of Porch, including its prerequisites, is covered in a [dedicated document]({{% relref "/docs/porch/user-guides/install-porch.md" %}}).
-
-{{% /alert %}}
-
-*Optional*: Generate the autocompletion script for your shell and add it to your shell profile.
-
-```bash
-porchctl completion bash
-```
-
-The `porchctl` command is an administration command for acting on Porch *Repository* (repo) and *PackageRevision* (rpkg)
-CRs.
-
-The commands for administering repositories are:
-
-| Command | Description |
-| --------------------- | ------------------------------------ |
-| `porchctl repo get` | List registered repositories. |
-| `porchctl repo reg` | Register a package repository. |
-| `porchctl repo unreg` | Unregister a repository. |
-| `porchctl repo sync` | Schedule a one-time synchronization. |
-
-The commands for administering package revisions are:
-
-| Command | Description |
-| ------------------------------ | ------------------------------------------------------------------------------------------------ |
-| `porchctl rpkg approve` | Approve a proposal to publish a package revision. |
-| `porchctl rpkg clone` | Create a clone of an existing package revision. |
-| `porchctl rpkg copy` | Create a new package revision from an existing one. |
-| `porchctl rpkg del` | Delete a package revision. |
-| `porchctl rpkg get` | List package revisions in registered repositories. |
-| `porchctl rpkg init` | Initialize a new package revision in a repository. |
-| `porchctl rpkg propose` | Propose that a package revision should be published. |
-| `porchctl rpkg propose-delete` | Propose deletion of a published package revision. |
-| `porchctl rpkg pull` | Pull the content of a package revision. |
-| `porchctl rpkg push` | Push resources to a package revision. |
-| `porchctl rpkg reject` | Reject a proposal to publish or delete a package revision. |
-| `porchctl rpkg upgrade` | Update a downstream package revision to a more recent revision of its upstream package. |
-
-## Using the porchctl CLI
-
-### Guide prerequisites
-* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
-
-Make sure that your `kubectl` context is set up for `kubectl` to interact with the correct Kubernetes instance (see the
-[installation instructions]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) guide for details).
-
-To check whether `kubectl` is configured with your Porch cluster (or local instance), run:
-
-```bash
-kubectl api-resources | grep porch
-```
-
-You should see the following API resources listed:
-
-```bash
-repositories config.porch.kpt.dev/v1alpha1 true Repository
-packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources
-packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision
-```
-
-## Porch Resources
-
-The Porch server manages the following resources:
-
-1. `repositories`: a repository (Git or OCI) can be registered with Porch to support discovery or management of KRM
-   configuration packages in those repositories.
-2. `packagerevisions`: a specific revision of a KRM configuration package managed by Porch in one of the registered
-   repositories. This resource represents a _metadata view_ of the KRM configuration package.
-3. `packagerevisionresources`: this resource represents the contents of the configuration package (the KRM resources
-   contained in the package).
-
-{{% alert title="Note" color="primary" %}}
-
-`packagerevisions` and `packagerevisionresources` represent different _views_ of the same underlying KRM
-configuration package. `packagerevisions` represents the package metadata, and `packagerevisionresources` represents the
-package content. The matching resources share the same `name` (as well as API group and version:
-`porch.kpt.dev/v1alpha1`) and differ in resource kind (`PackageRevision` and `PackageRevisionResources` respectively).
-
-{{% /alert %}}
-
-
-## Repository Registration
-
-To use Porch with a Git repository, you will need:
-
-* A Git repository for your blueprints. An otherwise empty repository with an
-  initial commit works best. The initial commit is required to establish the
-  `main` branch.
-* If the repository requires authentication, you will need either:
-  - A [Personal Access Token](https://github.com/settings/tokens) (when using a GitHub repository) for Porch to authenticate
-    with the repository. Porch requires the 'repo' scope.
-  - Basic Auth credentials for Porch to authenticate with the repository.
-
-To use Porch with an OCI repository ([Artifact Registry](https://console.cloud.google.com/artifacts) or
-[Google Container Registry](https://cloud.google.com/container-registry)), first make sure to:
-
-* Enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) for Porch
-* Assign appropriate roles to the Porch workload identity service account
-  (`iam.gke.io/gcp-service-account=porch-server@$(GCP_PROJECT_ID).iam.gserviceaccount.com`)
-  to have the appropriate level of access to your OCI repository.
-
-Use the `porchctl repo register` command to register your repository with Porch.
-
-```bash
-# Unauthenticated Repositories
-porchctl repo register --namespace default https://github.com/platkrm/test-blueprints.git --name=test-blueprints --sync-schedule="*/10 * * * *"
-porchctl repo register --namespace default https://github.com/nephio-project/catalog --name=oai --directory=workloads/oai
-porchctl repo register --namespace default https://github.com/nephio-project/catalog --name=infra --directory=infra --deployment=true --sync-schedule="*/10 * * * *"
-```
-
-### Authenticated Repositories
-
-#### Basic Auth
-
-```bash
-GITHUB_USERNAME=<your-username>
-GITHUB_TOKEN=<your-personal-access-token>
-
-$ porchctl repo register \
-  --namespace default \
-  --repo-basic-username=${GITHUB_USERNAME} \
-  --repo-basic-password=${GITHUB_TOKEN} \
-  https://github.com/${GITHUB_USERNAME}/blueprints.git \
-  --sync-schedule="*/10 * * * *"
-```
-
-#### Workload Identity
-
-```bash
-$ porchctl repo register \
-  --namespace default \
-  --repo-workload-identity \
-  https://github.com/example/private-blueprints.git \
-  --sync-schedule="0 */2 * * *"
-```
-
-For more details on configuring authenticated repositories see [Authenticating to Remote Git Repositories]({{% relref "/docs/porch/user-guides/git-authentication-config.md" %}}).
-
-The command line flags supported by `porchctl repo register` are:
-
-* `--directory` - Directory within the repository where to look for packages.
-* `--branch` - Branch in the repository where finalized packages are committed (defaults to `main`).
-* `--name` - Name of the package repository Kubernetes resource.
If unspecified, this defaults to the last
  segment of the repository URL (`blueprints` in the example above).
-* `--description` - Brief description of the package repository.
-* `--deployment` - Boolean value; if specified, the repository is a deployment repository; published packages in a
-  deployment repository are considered deployment-ready.
-* `--repo-basic-username` - Username for repository authentication using basic auth.
-* `--repo-basic-password` - Password for repository authentication using basic auth.
-* `--repo-workload-identity` - Use workload identity for authentication.
-* `--sync-schedule` - Cron expression for periodic repository synchronization (e.g., "*/10 * * * *" for every 10 minutes).
-
-Additionally, the common `kubectl` command line flags for controlling aspects of
-interaction with the Kubernetes apiserver, logging, and more are supported (this is true for
-all `porchctl` CLI commands which interact with Porch).
-
-Use the `porchctl repo get` command to query registered repositories:
-
-```bash
-$ porchctl repo get -A
-NAMESPACE NAME TYPE CONTENT SYNC SCHEDULE DEPLOYMENT READY ADDRESS
-default oai git Package True https://github.com/nephio-project/catalog
-default test-blueprints git Package */10 * * * * True https://github.com/platkrm/test-blueprints.git
-default infra git Package */10 * * * * true True https://github.com/nephio-project/catalog
-```
-
-The `porchctl` *get* commands support the common `kubectl`
-[flags](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output) to format output, for example
-`porchctl repo get --output=yaml`.
-
-The command `porchctl repo unregister` can be used to unregister a repository:
-
-```bash
-$ porchctl repo unregister test-blueprints --namespace default
-```
-## Repository Sync Command
-
-The command `porchctl repo sync` can be used to schedule a one-time synchronization of repositories:
-
-```bash
-# Sync specific repository (schedules 1-minute delayed sync)
-$ porchctl repo sync test-blueprints --namespace default
-
-# Sync multiple repositories
-$ porchctl repo sync repo1 repo2 repo3 --namespace default
-
-# Sync all repositories in namespace
-$ porchctl repo sync --all --namespace default
-
-# Sync all repositories across all namespaces
-$ porchctl repo sync --all --all-namespaces
-
-# Schedule sync with custom delay
-$ porchctl repo sync my-repo --run-once 5m
-$ porchctl repo sync my-repo --run-once 2h30m
-
-# Schedule sync at specific time
-$ porchctl repo sync my-repo --run-once "2024-01-15T14:30:00Z"
-```
-
-### Sync Command Flags
-- `--all`: Sync all repositories in namespace
-- `--all-namespaces`: Include all namespaces
-- `--run-once`: Schedule one-time sync (duration or RFC3339 timestamp)
-- `--namespace`: Target namespace
-
-### Sync Behavior
-- Minimum delay: 1 minute from command execution
-- Updates the `spec.sync.runOnceAt` field in the Repository CR (see the sketch below)
-- Independent of existing periodic sync schedule
-- Past timestamps automatically adjusted to minimum delay
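-
-For orientation, the following is a rough sketch of how these sync fields can appear on a registered Repository resource. Treat the layout of `spec.sync` and the surrounding fields as illustrative rather than an authoritative schema:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: test-blueprints
-  namespace: default
-spec:
-  type: git
-  git:
-    repo: https://github.com/platkrm/test-blueprints.git
-    branch: main
-  sync:
-    schedule: "*/10 * * * *"            # periodic sync, set by --sync-schedule
-    runOnceAt: "2024-01-15T14:30:00Z"   # one-time sync, set by `porchctl repo sync --run-once`
-```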
-
-## Package Discovery And Introspection
-
-The `porchctl rpkg` command group contains commands for interacting with package revisions managed by the Package Orchestration
-service. The `r` prefix used in the command group name stands for 'remote'.
-
-The `porchctl rpkg get` command lists the package revisions in registered repositories:
-
-```bash
-$ porchctl rpkg get -A
-NAMESPACE NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-default infra.infra.baremetal.bmh-template.main infra/baremetal/bmh-template main -1 false Published infra
-default infra.infra.capi.cluster-capi.main infra/capi/cluster-capi main -1 false Published infra
-default infra.infra.capi.cluster-capi.v2.0.0 infra/capi/cluster-capi v2.0.0 -1 false Published infra
-default infra.infra.capi.cluster-capi.v3.0.0 infra/capi/cluster-capi v3.0.0 -1 false Published infra
-default infra.infra.capi.vlanindex.main infra/capi/vlanindex main -1 false Published infra
-default infra.infra.capi.vlanindex.v2.0.0 infra/capi/vlanindex v2.0.0 -1 false Published infra
-default infra.infra.capi.vlanindex.v3.0.0 infra/capi/vlanindex v3.0.0 -1 false Published infra
-default infra.infra.gcp.nephio-blueprint-repo.main infra/gcp/nephio-blueprint-repo main -1 false Published infra
-default infra.infra.gcp.nephio-blueprint-repo.v1 infra/gcp/nephio-blueprint-repo v1 1 true Published infra
-default infra.infra.gcp.nephio-blueprint-repo.v2.0.0 infra/gcp/nephio-blueprint-repo v2.0.0 -1 false Published infra
-default infra.infra.gcp.nephio-blueprint-repo.v3.0.0 infra/gcp/nephio-blueprint-repo v3.0.0 -1 false Published infra
-default oai.workloads.oai.oai-ran-operator.main workloads/oai/oai-ran-operator main -1 false Published oai
-default oai.workloads.oai.oai-ran-operator.v1 workloads/oai/oai-ran-operator v1 1 true Published oai
-default oai.workloads.oai.oai-ran-operator.v2.0.0 workloads/oai/oai-ran-operator v2.0.0 -1 false Published oai
-default oai.workloads.oai.oai-ran-operator.v3.0.0 workloads/oai/oai-ran-operator v3.0.0 -1 false Published oai
-default oai.workloads.oai.pkg-example-cucp-bp.main workloads/oai/pkg-example-cucp-bp main -1 false Published oai
-default oai.workloads.oai.pkg-example-cucp-bp.v1 workloads/oai/pkg-example-cucp-bp v1 1 true Published oai
-default oai.workloads.oai.pkg-example-cucp-bp.v2.0.0 workloads/oai/pkg-example-cucp-bp v2.0.0 -1 false Published oai
-default oai.workloads.oai.pkg-example-cucp-bp.v3.0.0 workloads/oai/pkg-example-cucp-bp v3.0.0 -1 false Published oai
-default oai.workloads.oai.pkg-example-cuup-bp.main workloads/oai/pkg-example-cuup-bp main -1 false Published oai
-default test-blueprints.basens.main basens main -1 false Published test-blueprints
-default test-blueprints.basens.v1 basens v1 1 false Published test-blueprints
-default test-blueprints.basens.v2 basens v2 2 false Published test-blueprints
-default test-blueprints.basens.v3 basens v3 3 true Published test-blueprints
-default test-blueprints.empty.main empty main -1 false Published test-blueprints
-default test-blueprints.empty.v1 empty v1 1 true Published test-blueprints
-porch-demo porch-test.basedir.subdir.subsubdir.edge-function.inadir basedir/subdir/subsubdir/edge-function inadir 0 false Draft porch-test
-porch-demo porch-test.basedir.subdir.subsubdir.network-function.dirdemo basedir/subdir/subsubdir/network-function dirdemo 0 false Draft porch-test
-porch-demo porch-test.network-function.innerhome network-function innerhome 2 true Published porch-test
-porch-demo porch-test.network-function.innerhome3 network-function innerhome3 0 false Proposed porch-test
-porch-demo porch-test.network-function.innerhome4 network-function innerhome4 0 false Draft porch-test
-porch-demo porch-test.network-function.main network-function main -1 false Published porch-test
-porch-demo porch-test.network-function.outerspace network-function outerspace 1 false DeletionProposed porch-test
-```
-
-The `NAME` column gives the Kubernetes name of the package revision resource. Names are of the form:
-
-**repository.([pathnode.]*)package.workspace**
-
-1. The first part (up to the first dot) is the **repository** that the package revision is in.
-1. The second (optional) part is zero or more **pathnode** nodes, identifying the path of the package.
-1. The second-last part (between the second-last and last dots) is the **package** that the package revision is in.
-1. The last part (after the last dot) is the **workspace** of the package revision, which uniquely identifies the package revision in the package.
-
-From the listing above, the package revision with the name `test-blueprints.basens.v3` is in a repository called `test-blueprints`. It is in the root of that
-repository because there are no **pathnode** entries in its name. It is in a package called `basens` and its workspace name is `v3`.
-
-The package revision with the name `porch-test.basedir.subdir.subsubdir.edge-function.inadir` is in the repo `porch-test`. It has a path of
-`basedir/subdir/subsubdir`. The package name is `edge-function` and its workspace name is `inadir`.
-
-The entire name must comply with the constraints on DNS Subdomain Names
-specified in the [Kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-The name must:
-
-- contain no more than 253 characters
-- contain only lowercase alphanumeric characters, '-' or '.'
-- start with an alphanumeric character
-- end with an alphanumeric character
-
-Each part of the name must comply with the constraints on RFC 1123 label names
-specified in the [Kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-Each part of the name must:
-
-- contain at most 63 characters
-- contain only lowercase alphanumeric characters or '-'
-- start with an alphanumeric character
-- end with an alphanumeric character
-
-The `PACKAGE` column contains the package name of a package revision. Of course, all package revisions in a package have the same package name. The
-package name includes the path to the directory containing the package if the package is not in the root directory of the repo. For example, in the listing above
-the packages `basedir/subdir/subsubdir/edge-function` and `basedir/subdir/subsubdir/network-function` are in the directory `basedir/subdir/subsubdir`. The
-`basedir/subdir/subsubdir/network-function` and `network-function` packages are different packages because they are in different directories.
-
-The `REVISION` column indicates the revision of the package.
-- Revisions of `1` or greater indicate released package revisions. When a package revision is `Published` it is assigned the next
-  available revision number, starting at `1`. In the listing above, the `porch-test.network-function.innerhome` revision of package `network-function`
-  has a revision of `2` and is the latest revision of the package. The `porch-test.network-function.outerspace` revision of the package has a
-  revision of `1`. If the `porch-test.network-function.innerhome3` revision is published, it will be assigned a revision of `3` and will become
-  the latest package revision.
-- Package revisions that are not published (package revisions with a lifecycle status of `Draft` or `Proposed`) have a revision number of `0`.
There can be many
  revisions of a package with revision `0`, as is shown with revisions `porch-test.network-function.innerhome3` and `porch-test.network-function.innerhome4`
  of package `network-function` above.
-- Placeholder package revisions that point at the head of a git branch or tag have a revision number of `-1`.
-
-The `LATEST` column indicates whether the package revision is the latest among the revisions of the same package. In the
-output above, `3` is the latest revision of the `basens` package and `1` is the latest revision of the `empty` package.
-
-The `LIFECYCLE` column indicates the lifecycle stage of the package revision, one of: `Draft`, `Proposed`, `Published` or `DeletionProposed`.
-
-The `WORKSPACENAME` column indicates the workspace name of a package revision. The workspace name is selected by a user when a draft
-package revision is created. The workspace name must be unique among package revisions in the same package. A user is free to
-select any workspace name that complies with the constraints on DNS Subdomain Names specified in the
-[Kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-
-{{% alert title="Scope of WORKSPACENAME" color="primary" %}}
-The scope of a workspace name is restricted to its package and it is merely a string that identifies a package revision within a package.
-The workspace name `v1` on the `empty` package has no relation to the workspace name `v1` on the `basens` package listed above.
-A user has simply decided to use the same workspace name on two separate package revisions.
-{{% /alert %}}
-
-{{% alert title="Setting WORKSPACENAME and REVISION from repositories" color="primary" %}}
-When Porch connects to a repository, it scans the branches and tags of the Git repository for package revisions. It descends the directory tree of the repo
-looking for files called `Kptfile`. When it finds a Kptfile in a directory, Porch knows that it has found a kpt package and it does not search any child directories
-of this directory. Porch then examines all branches and tags that have references to that package and finds package revisions using the following rules:
-1. Look for a commit message of the form `kpt:{"package":"<package-path>","workspaceName":"<workspace>","revision":"<revision>"}` at the tip of the branch/tag and
-   set the workspace name and revision from the commit message. The commit message `kpt:{"package":"network-function","workspaceName":"outerspace","revision":"1"}`
-   is used to set the workspace name to `outerspace` and the revision to `1` in the case of the `porch-test.network-function.outerspace`
-   package revision in the listing above.
-2. If 1. fails, and if the reference is of the form `<package-path>/v1`, set the workspace name to `v1` and the revision to `1`, as is the case for the
-   `oai.workloads.oai.oai-ran-operator.v1` package revision in the listing above.
-3. If 2. fails, set the workspace name to the branch or tag name, and the revision to `-1`, as is the case for the `infra.infra.gcp.nephio-blueprint-repo.v3.0.0`
-   package revision in the listing above. The workspace name is set to the branch name `v3.0.0`, and the revision is set to `-1`.
-{{% /alert %}}
-
-## Package Revision Filtering
-
-Simple filtering of package revisions by name (substring match), and by revision or workspace (exact match), is supported by the CLI using the
-`--name`, `--revision` and `--workspace` flags:
-
-```bash
-$ porchctl -n porch-demo rpkg get --name network-function
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-test.network-function.dirdemo network-function dirdemo 1 false Published porch-test
-porch-test.network-function.innerhome network-function innerhome 2 true Published porch-test
-porch-test.network-function.main network-function main -1 false Published porch-test
-
-$ porchctl -n porch-demo rpkg get --revision 1
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-test.basedir.subdir.subsubdir.edge-function2.diredge basedir/subdir/subsubdir/edge-function2 diredge 1 true Published porch-test
-porch-test.edge-function2.diredgeab edge-function2 diredgeab 1 true Published porch-test
-porch-test.edge-function.diredge edge-function diredge 1 true Published porch-test
-porch-test.network-function3.outerspace network-function3 outerspace 1 true Published porch-test
-porch-test.network-function.dirdemo network-function dirdemo 1 false Published porch-test
-
-$ porchctl -n porch-demo rpkg get --workspace outerspace
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.outerspace network-function3 outerspace 1 true Published porch-test
-```
-
-You can also filter package revisions using the `kubectl` CLI with the `--selector` and `--field-selector` flags, under the same conventions as for other KRM objects.
-
-The `--selector` flag can be used to filter on one or more [metadata labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering):
-```bash
-$ kubectl get packagerevisions --show-labels --selector 'kpt.dev/latest-revision=true'
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY LABELS
-test-blueprints.basens.v3 basens v3 3 true Published test-blueprints kpt.dev/latest-revision=true
-test-blueprints.empty.v1 empty v1 1 true Published test-blueprints kpt.dev/latest-revision=true
-```
-
-The `--field-selector` flag can be used to filter on one or more package revision [fields](https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/).
-
-### Supported Fields
-
-As per Kubernetes convention, the `--field-selector` flag supports a subset of the PackageRevision resource type's fields:
-- `metadata.name`
-- `metadata.namespace`
-- `spec.revision`
-- `spec.packageName`
-- `spec.repository`
-- `spec.workspaceName`
-- `spec.lifecycle`
-
-{{% alert title="Note" color="primary" %}}
-
-The `spec.versions[*].selectableFields` field is not available for the PackageRevision resource type. Changing the fields supported by `--field-selector` requires editing Porch's source code and rebuilding the porch-server microservice.
-
-{{% /alert %}}
-
-For example:
-```bash
-$ kubectl get packagerevisions --show-labels --field-selector 'spec.repository==oai'
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY LABELS
-oai.database.main database main -1 false Published oai
-oai.oai-amf.main oai-amf main -1 false Published oai
-oai.oai-ausf.main oai-ausf main -1 false Published oai
-oai.oai-cp-operators.main oai-cp-operators main -1 false Published oai
-oai.oai-nrf.main oai-nrf main -1 false Published oai
-oai.oai-repository.main oai-repository main -1 false Published oai
-oai.oai-smf.main oai-smf main -1 false Published oai
-oai.oai-udm.main oai-udm main -1 false Published oai
-oai.oai-udr.main oai-udr main -1 false Published oai
-oai.oai-upf-edge.main oai-upf-edge main -1 false Published oai
-oai.oai-up-operators.main oai-up-operators main -1 false Published oai
-```
-
-{{% alert title="Note" color="primary" %}}
-
-Due to the restrictions of Porch's internal caching behavior, the `--field-selector` flag supports only the `=` and `==` operators. **The `!=` operator is not supported.**
-
-{{% /alert %}}
-
-The common `kubectl` [flags that control output format](https://kubernetes.io/docs/reference/kubectl/#output-options) are available as well:
-
-```bash
-$ porchctl rpkg get -n porch-demo porch-test.network-function.innerhome -o yaml
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: porch-test.network-function.innerhome
-  namespace: porch-demo
-spec:
-  lifecycle: Published
-  packageName: network-function
-  repository: porch-test
-  revision: 2
-  workspaceName: innerhome
-...
-```
-
-The `porchctl rpkg pull` command can be used to read the package revision resources.
-
-The command can be used to print the package revision resources as a `ResourceList` to `stdout`, which enables
-[chaining](https://kpt.dev/book/04-using-functions/#chaining-functions-using-the-unix-pipe)
-evaluation of functions on the package revision pulled from the Package Orchestration server (a piped example appears later in this section).
-
-```bash
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-- apiVersion: ""
-  kind: KptRevisionMetadata
-  metadata:
-    name: porch-test.network-function.innerhome
-    namespace: porch-demo
-...
-```
-
-One of the driving motivations for the Package Orchestration service is enabling
-WYSIWYG authoring of packages, including their contents, in highly usable UIs.
-Porch therefore supports reading and updating package *contents*.
-
-In addition to using a [UI](https://kpt.dev/guides/namespace-provisioning-ui/) with Porch, we
-can change the package contents by pulling the package from Porch onto the local
-disk, making any desired changes, and then pushing the updated contents to Porch.
-
-```bash
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome ./innerhome
-
-$ find innerhome
-
-./innerhome
-./innerhome/.KptRevisionMetadata
-./innerhome/README.md
-./innerhome/Kptfile
-./innerhome/package-context.yaml
-```
-
-The command downloaded the contents of the `porch-test.network-function.innerhome` package revision and saved
-them in the `./innerhome` directory.
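-
-As an aside, because `rpkg pull` emits a `ResourceList` on `stdout`, you can also chain function evaluation directly on the pulled package without saving it to disk first. A minimal sketch, reusing the *set-labels* function that appears later in this guide:
-
-```bash
-# Pull the package revision as a ResourceList and evaluate a function on it.
-# The `-` tells kpt fn eval to read the ResourceList from stdin.
-porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome \
-  | kpt fn eval - --image gcr.io/kpt-fn/set-labels:v0.1.5 -- color=orange
-```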
-
-Now you will make some changes.
-
-First, note that even though Porch updated the namespace name (in
-`namespace.yaml`) to `innerhome` when the package was cloned, the `README.md`
-was not updated. Let's fix it first.
-
-Open the `README.md` in your favorite editor and update its contents, for
-example:
-
-```
-# innerhome
-
-## Description
-kpt package for provisioning Innerhome namespace
-```
-
-In the second change, add a new mutator to the `Kptfile` pipeline. Use the
-[set-labels](https://catalog.kpt.dev/function-catalog/set-labels/v0.1/) function, which will add
-labels to all resources in the package. Add the following mutator to the
-`Kptfile` `pipeline` section:
-
-```yaml
-  - image: gcr.io/kpt-fn/set-labels:v0.1.5
-    configMap:
-      color: orange
-      fruit: apple
-```
-
-The whole `pipeline` section now looks like this:
-
-```yaml
-pipeline:
-  mutators:
-    - image: gcr.io/kpt-fn/set-namespace:v0.4.1
-      configPath: package-context.yaml
-    - image: gcr.io/kpt-fn/apply-replacements:v0.1.1
-      configPath: update-rolebinding.yaml
-    - image: gcr.io/kpt-fn/set-labels:v0.1.5
-      configMap:
-        color: orange
-        fruit: apple
-```
-
-Save the changes and push the package contents back to the server:
-
-```sh
-# Push updated package contents to the server
-$ porchctl rpkg push -n porch-demo porch-test.network-function.innerhome ./innerhome
-```
-
-Now, pull the contents of the package revision again, and inspect one of the
-configuration files.
-
-```sh
-# Pull the updated package contents to local drive for inspection:
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome ./updated-innerhome
-
-# Inspect updated-innerhome/namespace.yaml
-$ cat updated-innerhome/namespace.yaml
-
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: innerhome
-  labels:
-    color: orange
-    fruit: apple
-spec: {}
-```
-
-The updated namespace now has new labels! What happened?
-
-Whenever a package is updated during the authoring process (for example, when existing
-functions in the pipeline are changed or a new function is added to the pipeline list),
-Porch automatically re-renders the package to make sure that all mutators and validators are
-executed. So when we added the new `set-labels` mutator, as soon as we pushed
-the updated package contents to Porch, Porch re-rendered the package and
-the `set-labels` function applied the labels we requested (`color: orange` and
-`fruit: apple`).
-
-## Authoring Packages
-
-Several commands in the `porchctl rpkg` group support package authoring:
-
-* `init` - Initializes a new package revision in the target repository.
-* `clone` - Creates a clone of a source package revision in the target repository.
-* `copy` - Creates a new package revision from an existing one.
-* `push` - Pushes package revision resources into a remote package.
-* `del` - Deletes one or more package revisions in registered repositories.
-
-The `porchctl rpkg init` command can be used to initialize a new package revision. The Porch server will create and
-initialize a new package revision (as a draft) and save it in the specified repository.
-
-```bash
-$ porchctl rpkg init new-package --repository=porch-test --workspace=my-workspace -n porch-demo
-porch-test.new-package.my-workspace created
-
-$ porchctl rpkg get -n porch-demo porch-test.new-package.my-workspace
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-The new package revision is created in the `Draft` lifecycle stage. This is also true for all commands that create a new package
-revision (`init`, `clone` and `copy`).
-
-Additional flags supported by the `porchctl rpkg init` command are:
-
-* `--repository` - Repository in which the package revision will be created.
-* `--workspace` - Workspace of the new package revision.
-* `--description` - Short description of the package revision.
-* `--keywords` - List of keywords for the package revision.
-* `--site` - Link to a page with information about the package revision.
-
-
-Use the `porchctl rpkg clone` command to create a _downstream_ package revision by cloning an _upstream_ package revision. You can find out more about the _upstream_ and _downstream_ sections of the `Kptfile` in [Getting a Package](https://kpt.dev/book/03-packages/#getting-a-package).
-
-```bash
-$ porchctl rpkg clone porch-test.new-package.my-workspace new-package-clone --repository=porch-deployment -n porch-demo
-porch-deployment.new-package-clone.v1 created
-
-# Confirm the package revision was created
-porchctl rpkg get porch-deployment.new-package-clone.v1 -n porch-demo
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-deployment.new-package-clone.v1 new-package-clone v1 0 false Draft porch-deployment
-```
-
-{{% alert title="Note" color="primary" %}}
- A cloned package must be created in a repository in the same namespace as
- the source package. Cloning a package with the Package Orchestration Service
- retains a reference to the upstream package revision in the clone (see the
- sketch after the flag list below), and cross-namespace references are not
- allowed. Package revisions in repositories in other namespaces can be cloned
- using a reference directly to the underlying OCI or git repository, as
- described below.
-{{% /alert %}}
-
-`porchctl rpkg clone` can also be used to clone package revisions that are in repositories not registered with Porch, for
-example:
-
-```bash
-$ porchctl rpkg clone \
-    https://github.com/nephio-project/catalog.git cloned-pkg-example-ue-bp \
-    --directory=workloads/oai/pkg-example-ue-bp \
-    --ref=main \
-    --repository=porch-deployment \
-    --namespace=porch-demo
-porch-deployment.cloned-pkg-example-ue-bp.v1 created
-
-# Confirm the package revision was created
-$ porchctl rpkg get -n porch-demo porch-deployment.cloned-pkg-example-ue-bp.v1
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-```
-
-The flags supported by the `porchctl rpkg clone` command are:
-
-* `--directory` - Directory within the upstream repository where the upstream
-  package revision is located.
-* `--ref` - Ref in the upstream repository where the upstream package revision is
-  located. This can be a branch, tag, or SHA.
-* `--repository` - Repository to which the package revision will be cloned (downstream
-  repository).
-* `--workspace` - Workspace to assign to the downstream package revision.
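-
-After a clone, the downstream package's `Kptfile` records where it came from. A rough sketch of what the recorded *upstream* reference can look like for the clone above (treat the exact field layout as illustrative; the kpt documentation linked above holds the authoritative schema):
-
-```yaml
-upstream:
-  type: git
-  git:
-    repo: https://github.com/nephio-project/catalog.git
-    directory: /workloads/oai/pkg-example-ue-bp
-    ref: main
-  updateStrategy: resource-merge
-```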
-
-The `porchctl rpkg copy` command can be used to create a new revision of an existing package. It is a means of
-modifying an already published package revision.
-
-```bash
-$ porchctl rpkg copy porch-test.network-function.innerhome --workspace=great-outdoors -n porch-demo
-porch-test.network-function.great-outdoors created
-
-# Confirm the package revision was created
-$ porchctl rpkg get porch-test.network-function.great-outdoors -n porch-demo
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-porch-test.network-function.great-outdoors network-function great-outdoors 0 false Draft porch-test
-```
-Unlike a `clone` of a package, which establishes the upstream-downstream
-relationship between the respective packages and updates the `Kptfile`
-to reflect the relationship, the `copy` command does *not* change the
-upstream-downstream relationships. The copy of a package shares the same
-upstream package as the package from which it was copied. Specifically,
-in this case both packages have identical contents,
-including upstream information, and differ in revision only.
-
-The `porchctl rpkg pull` and `porchctl rpkg push` commands can be used to update the resources (package revision contents) of a package _draft_:
-
-```bash
-$ porchctl rpkg pull porch-test.network-function.great-outdoors ./great-outdoors -n porch-demo
-
-# Make edits using your favorite YAML editor, for example adding a new resource
-$ cat <<EOF > ./great-outdoors/config-map.yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: example-config-map
-data:
-  color: green
-EOF
-
-# Push the updated contents to the Package Orchestration server, updating the
-# package revision contents.
-$ porchctl rpkg push porch-test.network-function.great-outdoors ./great-outdoors -n porch-demo
-
-# Confirm that the remote package revision now includes the new ConfigMap resource
-$ porchctl rpkg pull porch-test.network-function.great-outdoors -n porch-demo
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-...
-- apiVersion: v1
-  kind: ConfigMap
-  metadata:
-    name: example-config-map
-    annotations:
-      config.kubernetes.io/index: '0'
-      internal.config.kubernetes.io/index: '0'
-      internal.config.kubernetes.io/path: 'config-map.yaml'
-      config.kubernetes.io/path: 'config-map.yaml'
-  data:
-    color: green
-...
-```
-A package revision can be deleted using the `porchctl rpkg del` command:
-
-```bash
-# Delete package revision
-$ porchctl rpkg del porch-test.network-function.great-outdoors -n porch-demo
-porch-test.network-function.great-outdoors deleted
-```
-
-## Package Lifecycle and Approval Flow
-
-Authoring is performed on package revisions in the _Draft_ lifecycle stage. Before a package revision can be deployed, copied or
-cloned, it must be _Published_. The approval flow is the process by which the package revision is advanced from the _Draft_ state,
-through the _Proposed_ state, and finally to the _Published_ lifecycle stage.
-
-The commands used to manage package revision lifecycle stages include:
-
-* `propose` - Proposes to finalize a package revision draft.
-* `approve` - Approves a proposal to finalize a package revision.
-* `reject` - Rejects a proposal to finalize a package revision.
-
-In the [Authoring Packages](#authoring-packages) section above we created several _draft_ package revisions, and in this section
-we will create proposals for publishing some of them.
-
-```bash
-# List package revisions to identify relevant drafts:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 0 false Draft porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-...
-
-# Propose two package revisions to be published
-$ porchctl rpkg propose \
-  porch-deployment.new-package-clone.v1 \
-  porch-test.network-function3.innerhome6 \
-  -n porch-demo
-
-porch-deployment.new-package-clone.v1 proposed
-porch-test.network-function3.innerhome6 proposed
-
-# Confirm the package revisions are now Proposed
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 0 false Proposed porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Proposed porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, will review and either approve
-or reject the proposals. To aid with the decision, the platform administrator may inspect the package revision contents using the
-commands above, such as `porchctl rpkg pull`.
-
-```bash
-# Approve a proposal to publish a package revision
-$ porchctl rpkg approve porch-deployment.new-package-clone.v1 -n porch-demo
-porch-deployment.new-package-clone.v1 approved
-
-# Reject a proposal to publish a package revision
-$ porchctl rpkg reject porch-test.network-function3.innerhome6 -n porch-demo
-porch-test.network-function3.innerhome6 no longer proposed for approval
-```
-
-Now the user can confirm the lifecycle stages of the package revisions:
-
-```bash
-# Confirm package revision lifecycle stages after approvals:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 1 true Published porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-Observe that the rejected proposal returned the package revision back to the _Draft_ lifecycle stage. The package revision whose
-proposal was approved is now in the _Published_ state.
-
-An approved package revision cannot be directly deleted; it must first be proposed for deletion.
-
-```bash
-porchctl rpkg propose-delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision lifecycle stages after deletion proposed:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 1 true DeletionProposed porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, will review and either approve
-or reject the deletion.
-
-```bash
-porchctl rpkg reject -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision deletion has been rejected:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 1 true Published porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-The package revision can again be proposed for deletion.
-
-```bash
-porchctl rpkg propose-delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision lifecycle stages after deletion proposed:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-deployment.new-package-clone.v1 new-package-clone v1 1 true DeletionProposed porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, decides to proceed with the deletion.
-
-```bash
-porchctl rpkg delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision is deleted:
-$ porchctl rpkg get
-NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1 cloned-pkg-example-ue-bp v1 0 false Draft porch-deployment
-porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test
-porch-test.network-function3.innerhome5 network-function3 innerhome5 0 false Draft porch-test
-porch-test.network-function3.innerhome6 network-function3 innerhome6 0 false Draft porch-test
-porch-test.new-package.my-workspace new-package my-workspace 0 false Draft porch-test
-```
-
-## Package Upgrade
-
-The `porchctl rpkg upgrade` command can be used to create a new revision that upgrades a published downstream package
-revision to a more recent published revision of its upstream package.
-
-The flags supported by the `porchctl rpkg upgrade` command are:
-
-* `--revision` - (*Optional*) The revision number of the upstream package that the target
-  downstream package revision should be upgraded to.
-  The corresponding revision must be published. If not set, the latest published revision will be chosen.
-* `--workspace` - The workspace name of the newly created package revision.
-* `--strategy` - The strategy to use for the upgrade.
-  Options: `resource-merge` (*default*), `fast-forward`, `force-delete-replace`, `copy-merge`.
-
-```bash
-# upgrade the repository.package.1 package revision to the latest published revision of its upstream, using the resource-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2
-
-# upgrade the repository.package.1 package revision to revision 3 of its upstream, using the resource-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2 --revision=3
-
-# upgrade the repository.package.1 package revision to revision 3 of its upstream, using the copy-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2 --revision=3 --strategy=copy-merge
-```
diff --git a/content/en/docs/neo-porch/8_best_practices/_index.md b/content/en/docs/neo-porch/8_best_practices/_index.md
deleted file mode 100644
index cdf9396a..00000000
--- a/content/en/docs/neo-porch/8_best_practices/_index.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Best Practices"
-type: docs
-weight: 8
-description: Best practices to follow when using porch
----
-
-## Lorem Ipsum
-
-This is the section where we describe the "right way to use porch". If users actively go against what we state are best practices, support cannot be provided.
-
-Examples:
-
-- Recommendations for structuring packages, versions & variants
-- How to design reusable templates/functions
-- Performance / scaling tips (e.g. for large numbers of packages or functions)
-- Operational guidance: monitoring, logging, health checks
-
-For repositories:
-
-- Using multiple Porch repositories on a single Git repository is not recommended for repositories that Porch writes package revisions to; it should only be used for read-only upstream repositories, as it is slower and takes a performance hit.
-- The optimal setup, for efficiency's sake, is a single Porch repository per Git repository (see the sketch after this list).
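-
-As a minimal sketch of the recommended one-to-one layout (the repository names and URLs below are hypothetical), each
-Git repository is registered as exactly one Porch repository:
-
-```bash
-# One Porch repository per Git repository (recommended)
-porchctl repo reg https://github.com/example/blueprints.git --name blueprints -n porch-demo
-porchctl repo reg https://github.com/example/deployments.git --name deployments -n porch-demo
-
-# Avoid registering a second writable Porch repository against the same Git
-# repository; reserve that pattern for read-only upstream repositories.
-```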
diff --git a/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/_index.md b/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/_index.md deleted file mode 100644 index cb126027..00000000 --- a/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -This section is where we describe diff --git a/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/old.md b/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/old.md deleted file mode 100644 index b8a7cb88..00000000 --- a/content/en/docs/neo-porch/8_best_practices/relevant_old_docs/old.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Old Content" -type: docs -weight: 2 -description: old content here ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
- -## Old Content Template - -old content here diff --git a/content/en/docs/neo-porch/9_troubleshooting_and_faq/_index.md b/content/en/docs/neo-porch/9_troubleshooting_and_faq/_index.md deleted file mode 100644 index e7d4c0df..00000000 --- a/content/en/docs/neo-porch/9_troubleshooting_and_faq/_index.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Troubleshooting & FAQ" -type: docs -weight: 9 -description: Trouble & FAQ Description ---- - -## Lorem Ipsum - -Examples: - -- Common problems & their solutions -- Error messages & diagnostic steps -- Debugging tips / tools -- FAQ: questions new users often ask diff --git a/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/_index.md b/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/_index.md deleted file mode 100644 index 85d770ba..00000000 --- a/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/_index.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: "[### Old Docs ###]" -type: docs -weight: 2 -description: ---- - -## Lorem Ipsum - -Lorem Ipsum diff --git a/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/old.md b/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/old.md deleted file mode 100644 index b8a7cb88..00000000 --- a/content/en/docs/neo-porch/9_troubleshooting_and_faq/relevant_old_docs/old.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Old Content" -type: docs -weight: 2 -description: old content here ---- - -
- ⚠️ Outdated Notice: This page refers to an older version of the documentation. This content has simply been moved into its relevant new section here and must be checked, modified, rewritten, updated, or removed entirely. -
-
-## Old Content Template
-
-old content here
diff --git a/content/en/docs/neo-porch/9_troubleshooting_and_faq/repository-sync/_index.md b/content/en/docs/neo-porch/9_troubleshooting_and_faq/repository-sync/_index.md
deleted file mode 100644
index 0aa503d1..00000000
--- a/content/en/docs/neo-porch/9_troubleshooting_and_faq/repository-sync/_index.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-title: "Repository Sync Troubleshooting"
-type: docs
-weight: 9
-description: Common repository sync issues and their solutions
----
-
-## Common Problems & Solutions
-
-### Repository Not Syncing
-
-**Problem**: Repository shows as Ready but packages aren't updating
-
-**Solutions**:
-```bash
-# Check repository status
-kubectl get repositories -n <namespace>
-
-# Check repository conditions
-kubectl describe repository <repo-name> -n <namespace>
-
-# Verify sync configuration
-kubectl get repository <repo-name> -n <namespace> -o yaml | grep -A5 sync
-
-# Check repository synchronization logs
-kubectl logs -n porch-system deployment/porch-server | grep "repositorySync.*"
-
-# Check for sync errors
-kubectl logs -n porch-system deployment/porch-server | grep ".*error"
-```
-
-**Common causes**:
-- Invalid cron expression falls back to default frequency
-- Repository authentication issues
-- Network connectivity problems
-
-### Authentication Failures
-
-**Problem**: Repository status shows authentication errors
-
-**Error messages**:
-- `"authentication required"`
-- `"invalid credentials"`
-- `"permission denied"`
-
-**Solutions**:
-```bash
-# Check the secret exists (the secret name is referenced in the repository spec)
-kubectl get repository <repo-name> -n <namespace> -o jsonpath='{.spec.git.secretRef.name}'
-kubectl get secret <secret-name> -n <namespace>
-
-# Verify secret data
-kubectl get secret <secret-name> -n <namespace> -o yaml
-
-# Recreate the secret with correct credentials
-kubectl delete secret <secret-name> -n <namespace>
-kubectl create secret generic <secret-name> \
-  --namespace=<namespace> \
-  --type=kubernetes.io/basic-auth \
-  --from-literal=username=<username> \
-  --from-literal=password=<password>
-
-# Porch will automatically retry authentication at every repo-sync-frequency set in porch-server (default 10m)
-
-# For immediate retry, re-register the repository with correct credentials:
-kubectl delete repository <repo-name> -n <namespace>
-kubectl delete secret <secret-name> -n <namespace> # repo reg will create a new secret
-porchctl repo reg <repository-url> --name <repo-name> --repo-basic-username <username> --repo-basic-password <password>
-```
-
-### Invalid Cron Expression
-
-**Problem**: CLI registration fails with cron validation error
-
-**Error message**: `"invalid sync-schedule cron expression"`
-
-**Solutions**:
-```bash
-# Valid cron formats (5 fields: minute hour day month weekday)
-porchctl repo reg --sync-schedule "*/10 * * * *" # Every 10 minutes
-porchctl repo reg --sync-schedule "0 */2 * * *" # Every 2 hours
-porchctl repo reg --sync-schedule "0 9 * * 1-5" # 9 AM weekdays
-
-# Invalid examples to avoid:
-# "10 * * *" # Too few fields (4 instead of 5)
-# "* * * * * *" # Too many fields (6 instead of 5)
-```
-
-### Repository Stuck in Reconciling
-
-**Problem**: Repository condition shows `Reason: Reconciling` indefinitely
-
-**Diagnostic steps**:
-```bash
-# Check Porch logs
-kubectl logs -n porch-system deployment/porch-server
-
-# Look for repository synchronization errors
-kubectl logs -n porch-system deployment/porch-server | grep "repositorySync.*"
-
-# Check repository accessibility
-git ls-remote <repository-url> # For Git repos
-```
-
-**Common causes**:
-- Large repository taking time to clone/sync
-- Network timeouts
-- Repository structure issues
-
-### One-time Sync Not Triggering
-
-**Problem**: `porchctl repo sync` command succeeds but sync doesn't happen
-
-**Diagnostic steps**:
-```bash
-# Check the runOnceAt field was set
-kubectl get repository <repo-name> -n <namespace> -o jsonpath='{.spec.sync.runOnceAt}'
-
-# Verify the timestamp is in the future
-date -u # Compare with the runOnceAt value
-
-# Check one-time synchronization logs
-kubectl logs -n porch-system deployment/porch-server | grep "one-time sync"
-```
-
-**Solutions**:
-- Ensure the timestamp is at least 1 minute in the future
-- Verify the namespace is correct
-
-## Error Messages & Diagnostic Steps
-
-### "repository is required positional argument"
-**Command**: `porchctl repo reg`
-**Solution**: Provide the repository URL as an argument
-```bash
-porchctl repo reg https://github.com/example/repo.git
-```
-
-### "both username/password and workload identity specified"
-**Command**: `porchctl repo reg`
-**Solution**: Use only one authentication method
-```bash
-# Use only one authentication method during registration
-# Either basic auth OR workload identity, not both
-```
-
-### "no repositories found in namespace"
-**Command**: `porchctl repo sync --all`
-**Solution**: Check the namespace and that the repositories exist
-```bash
-kubectl get repositories -n <namespace>
-kubectl get repositories --all-namespaces
-```
-
-### "Scheduled time is within 1 minute or in the past"
-**Command**: `porchctl repo sync --run-once`
-**Solution**: Use a future timestamp or a longer duration
-```bash
-porchctl repo sync --run-once 5m
-porchctl repo sync --run-once "2024-12-01T15:00:00Z"
-```
-
-## Debugging Tips & Tools
-
-### Enable Verbose Logging
-```bash
-# Increase the Porch server log level
-kubectl patch deployment porch-server -n porch-system -p '{"spec":{"template":{"spec":{"containers":[{"name":"porch-server","args":["--v=2"]}]}}}}'
-```
-
-### Monitor Repository Events
-```bash
-# Watch repository changes
-kubectl get repositories -w -n <namespace>
-
-# Monitor events
-kubectl get events -n <namespace> --field-selector involvedObject.kind=Repository
-```
-
-### Check Repository Synchronization Status
-```bash
-# Repository sync logs
-kubectl logs -n porch-system deployment/porch-server | grep "repositorySync.*"
-
-# Next sync time
-kubectl logs -n porch-system deployment/porch-server | grep "next scheduled time"
-```
-
-### Validate Repository Structure
-```bash
-# For Git repositories
-git clone <repository-url>
-find . -name "Kptfile" -type f # Should find package directories
-
-# Check the branch exists
-git branch -r | grep <branch-name>
-```
-
-## FAQ
-
-### Q: How often do repositories sync by default?
-**A**: Without a custom sync schedule, repositories use the system default frequency of 10 minutes. This default can be customized by setting the `repo-sync-frequency` parameter in the Porch server deployment.
-
-### Q: Can I have both periodic and one-time sync?
-**A**: Yes, periodic scheduling and one-time sync work independently. One-time synchronization executes regardless of the periodic schedule.
-
-### Q: Why is my cron expression not working?
-**A**: Porch uses the standard 5-field cron format. Common mistakes:
-- Using 6 fields (seconds are not supported)
-- Missing fields
-- Invalid ranges or values
-
-### Q: How do I stop repository syncing?
-**A**: Repository synchronization cannot be completely stopped. Porch continuously monitors repositories for changes. You can only modify the sync frequency by updating the sync schedule configuration, or remove custom schedules to use the default frequency.
-
-### Q: Can I sync repositories across namespaces?
-**A**: Use the `--all-namespaces` flag:
-```bash
-porchctl repo sync --all --all-namespaces
-```
-
-### Q: What happens if a repository is deleted during sync?
-**A**: The synchronization system gracefully handles repository deletion and stops sync operations for that repository. - -### Q: How do I check if authentication is working? -**A**: Repository condition will show `Ready: True` if authentication succeeds. Check `kubectl describe repository` for detailed status. - ---- - -{{% alert title="Note" color="primary" %}} -OCI repository support is experimental and may not have full feature parity with Git repositories. -{{% /alert %}} \ No newline at end of file diff --git a/content/en/docs/neo-porch/_index.md b/content/en/docs/neo-porch/_index.md deleted file mode 100644 index ae57d497..00000000 --- a/content/en/docs/neo-porch/_index.md +++ /dev/null @@ -1,470 +0,0 @@ ---- -title: "Porch Documentation Restructure" -type: docs -weight: 1 -description: ---- - -
- ⚠️ Outdated Notice: The most up to date version of this restructure guide can be found - here. -
-
-The Kubernetes documentation follows a table of contents detailed in the section below. Taking this template and adapting it to the porch code base/documentation, we get the following:
-
-1. The different sections required to be covered, following the Kubernetes documentation.
-2. The documentation relating to this section that is currently available.
-3. The gaps/missing sections not found in the current documentation but required in the rework.
-
-Sections marked [Section in Green] have been reviewed and marked as necessary for inclusion in the new documentation.
-
-Sections marked [Section in Red] have not been reviewed, but could still be placed in the docs at the given location once they are. TL;DR: it is a topic of interest to be looked at, just not yet signed off on as a mandatory addition.
-
----
-
-## Table of Contents
-
-1. Overview
-2. Concepts
-3. Getting Started
-4. Tutorials & How‑tos
-5. Architecture & Components
-6. Configuration & Deployment
-7. CLI / API / Reference
-8. Best Practices & Patterns
-9. Troubleshooting & FAQ
-10. Security & Compliance
-11. Glossary
-12. Contributing
-13. Release Notes / Changelog
-
----
-
-## 1. Overview
-
-**Section Must Contain:**
-
-* What is **Porch**
-  * Short description (“Porch = package orchestration, opinionated package management …”) (Porch already has this under Overview). ([https://docs.nephio.org/docs/porch/](https://docs.nephio.org/docs/porch/))
-* Goals & scope (what Porch intends to do, what it does *not* do)
-* Audiences: which users should care (operators, GitOps engineers, developers, integrators, etc.)
-* Prerequisites / compatibility (Kubernetes versions, environments, permissions, dependencies)
-
-**Currently Available Resources:**
-
-* **Porch documentation** ([https://docs.nephio.org/docs/porch/](https://docs.nephio.org/docs/porch/))
-* *Overview* — what Porch is. Exists but requires a refresh.
-* *Porch in the Nephio architecture, history and outlook* (exists but we should do away with it)
-
-**Gaps / additions Required:**
-
-* Statement of *goals & scope* (what Porch intends to do / NOT do).
-* Target audiences (operators, developers, integrators).
-* Supported environments / prerequisites summary.
-
-More Detail on concepts requiring explanation - -* Stuff here - -
- ---- - -## 2. Concepts - -**Section Must Contain:** - -* Key terminology (package, variant, mutation pipeline, function runner, etc.) -* Core models/entities: what Porch works with (packages, modules, variants, pipelines). -* High‑level flow / lifecycle: how a package moves through Porch (creation → mutation → deployment or consumption) -* Relationships to other Nephio components / external systems (Git repos, registries, etc.) - -**Currently Available Resources:** - -* [Porch Concepts](https://docs.nephio.org/docs/porch/package-orchestration/) -* [Configuration as Data](https://docs.nephio.org/docs/porch/config-as-data/) -* [Package Mutation Pipeline Order](https://docs.nephio.org/docs/porch/package-mutation-pipeline-order/) -* [Function Runner Pod Templating](https://docs.nephio.org/docs/porch/function-runner-pod-templates/) -* [Package Variant Controller](https://docs.nephio.org/docs/porch/package-variant/) - -**Gaps / additions Required:** - -* Central glossary of key terms -* Visual lifecycle diagram of a package -* Mapping of Porch functions vs Nephio components - -
-More Detail on concepts requiring explanation
-
-* [PACKAGE-ORCHESTRATION] ← A LOT OF GREAT REUSABLE CONTENT HERE FOR THIS SECTION. THIS SECTION CAN BE OUR HIGH LEVEL INTRODUCTION OF THE ENTIRE PROJECT BEFORE DIVING DEEPER INTO THE DETAILS IN THE LATER SECTIONS BELOW.
-* [MODIFYING-PORCH-PACKAGES] MUTATORS VS VALIDATORS
-* [REVISION/VERSION] Explain what the revision of a package revision is.
-* [LIFECYCLE] DRAFT, PROPOSED, PUBLISHED, PROPOSED-DELETE. Explain what the lifecycle of a package is (see the sketch after this list).
-* [LATEST] Explain how the "latest" field works, and how it relates to the "latest" annotation.
-* [REPOSITORIES] GIT repo shorthand explanation.
-* [WORKSPACE/(REVISION IDENTIFIER)] Explain what a workspace is, see
-* [PACKAGE-RELATIONSHIPS] UPSTREAM VS DOWNSTREAM. Explain the behavior of packages which contain embedded packages.
-* [EXPLAIN UPSTREAM AT A HIGH LEVEL HERE BEFORE GOING IN DETAIL]
-
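-
-As a concrete illustration of the naming and lifecycle concepts listed above (the repository, package and workspace
-names are taken from the lifecycle examples elsewhere in this changeset), a package revision is named
-`<repository>.<package>.<workspace>` and moves from Draft to Proposed to Published:
-
-```bash
-# A Draft revision in repository "porch-test", package "network-function3", workspace "innerhome6"
-porchctl rpkg get porch-test.network-function3.innerhome6 -n porch-demo
-
-# Move it through the lifecycle
-porchctl rpkg propose porch-test.network-function3.innerhome6 -n porch-demo
-porchctl rpkg approve porch-test.network-function3.innerhome6 -n porch-demo   # now Published and latest
-```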
-
----
-
-## 3. Getting Started
-
-**Section Must Contain:**
-
-* Install Porch: requirements, supported platforms/environments, step‑by‑step install. Porch already has *Installing Porch*.
-* Environment preparation: what users need locally, or on a cluster. (Porch has *Preparing the Environment*.)
-* First example / quick start: minimal working example (e.g. a package, mutate, deploy)
-* Using the Porch CLI / basic commands. Porch has *Using the Porch CLI tool*.
-
-**Currently Available Resources:**
-
-* [Installing Porch](https://docs.nephio.org/docs/porch/user-guides/install-porch/) needs a quick version, e.g. a script containing (./scripts/setup-dev-env.sh + make run-in-kind)
-* Preparing the Environment
-* Using the Porch CLI tool
-
-**Gaps / additions Required:**
-
-* End-to-end quickstart walkthrough
-* Output examples (logs/screenshots)
-* Supported platforms/environment matrix
-
----
-
-## 4. Tutorials & How‑tos
-
-**Section Must Contain:**
-
-* Common tasks with step‑by‑step instructions:
-  * Authenticating with remote Git repositories.
-  * Using private registries.
-  * Running Porch in different environments (cloud, on‑prem, VMs). E.g. *Running Porch on GKE*.
-* Advanced how‑tos: customizing the mutation pipeline, variant selection, function runner templating, etc.
-
-**Currently Available Resources:**
-
-* Authenticating with remote Git
-* Using authenticated private registries
-* Running Porch on GKE
-* Mutation pipeline & function runner content (under Concepts)
-
-**Gaps / additions Required:**
-
-* Complete end-to-end sample with full mutation + deployment
-* Real-world examples (multi-repo setups)
-* CI/CD testing integration
-
-More Detail on concepts requiring explanation
-
-* [EXPECT EXAMPLE SCRIPT HAVING DEPLOYED PORCH FOR THEM AS A FIRST TIME USER]
-* [STEP 1: SETUP PORCH REPOSITORIES RESOURCE] LIKELY FIRST STEP FROM A DEPLOYMENT OF PORCH TO USE IT
-* [FLOWCHART EXPLAINING FLOW E2E] init → pull → locally do changes → push → proposed → approved/rejected → if rejected, changes are required and it is re-proposed → if approved, it becomes published/latest (see the sketch after this list)
-* [CREATING FIRST PACKAGE] INIT HOLLOW PKG -> PULL PKG LOCALLY FROM REPO -> MODIFY LOCALLY -> PUSH TO UPSTREAM -> PROPOSE FOR APPROVAL -> APPROVE TO UPSTREAM REPO E.G. SAMPLE
-* [UPGRADE EXAMPLES] [ALL THE DIFF SCENARIOS] [THIS IS THE MOST COMPLEX PART] [IT NEEDS TO BE VERY SPECIFIC ON WHAT WE DO/DON'T SUPPORT]
-* [CREATE A GENERIC PACKAGE AND RUN IT THROUGH THE DIFFERENT UPGRADES TO SHOW HOW THEY WORK AND CHANGE]
-* in the upgrade scenario we expect that A NEW BLUEPRINT IS PUBLISHED → THE DEPLOYMENT PACKAGE CAN BE UPGRADED IF IT WAS BASED ON THAT BLUEPRINT (AKA THE UPSTREAM OF THIS PACKAGE POINTS AT THAT BLUEPRINT), assuming 2 repositories
-* [RESOURCE MERGE] IS A STRUCTURAL 3-WAY MERGE → HAS CONTEXT OF THE STRUCTURE OF THE FILES
-* [COPY MERGE] IS A FILE REPLACEMENT STRATEGY → USEFUL WHEN YOU DON'T NEED PORCH TO BE AWARE OF THE CONTENT OF THE FILES, ESPECIALLY IF THERE IS CONTENT INSIDE THE FILES THAT DOES NOT COMPLY WITH KUSTOMIZE.
-  * [OTHER STRATEGIES] …
-
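-
-A minimal sketch of the end-to-end flow described in the list above (the repository and package names are
-hypothetical):
-
-```bash
-# init: create a new, empty draft package revision
-porchctl rpkg init my-package --repository=deployments --workspace=v1 -n porch-demo
-
-# pull: fetch the package contents for local editing
-porchctl rpkg pull deployments.my-package.v1 ./my-package -n porch-demo
-
-# ...edit ./my-package locally, then push the changes back to the draft...
-porchctl rpkg push deployments.my-package.v1 ./my-package -n porch-demo
-
-# propose, then approve (or reject): on approval the revision becomes published/latest
-porchctl rpkg propose deployments.my-package.v1 -n porch-demo
-porchctl rpkg approve deployments.my-package.v1 -n porch-demo
-```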
- ---- - -## 5. Architecture & Components - -**Section Must Contain:** - -* Overall architecture diagram -* Main components/modules of Porch (controllers, function runner, variant controller, etc.) -* Data flow and interaction: how packages move through system, lifecycle events, error paths, etc. -* Dependencies: e.g. what external services Porch relies on (Git, registry, Kubernetes APIs) - -**Currently Available Resources:** - -* Porch in the Nephio Architecture -* Individual component pages: Function Runner, Variant Controller, etc. - -**Gaps / additions Required:** - -* Single consolidated diagram of Porch system -* Component interaction maps -* Package lifecycle description and flow diagram - -
-More Detail on concepts requiring explanation
-
-* [PORCH-SERVER] PORCH SERVER SPECIFIC MAIN CHUNK FOR DETAIL HERE
-  * [AGGREGATED API SERVER] HOW CERTAIN PORCH RESOURCES ARE SERVED AND HANDLED, AKA NOT THROUGH CRDS BUT THE AGGREGATED API
-  * [REPO SYNC] ENSURES THE LOCAL DB/CR CACHE AND UPSTREAM REPOS ARE KEPT IN SYNC
-* [ENGINE] MAIN BRAIN/LOGIC USED IN PROCESSING PACKAGES
-  * [CACHE SYSTEM]
-    * [DB-CACHE] EXPLAIN DIFFERENCE IN OPERATION COMPARED TO THE OTHER (E.G. DB DOESN'T PUSH TO REPO UNTIL APPROVED)
-    * [CR-CACHE] EXPLAIN DIFFERENCE IN OPERATION COMPARED TO THE OTHER
-* [FUNCTION-RUNNER] MAIN CHUNK OF DETAIL REGARDING THIS HERE
-  * [TASK PIPELINE] MUTATORS VS VALIDATORS + IMAGES ETC. Explain how the package mutation pipelines work and are triggered
-  * [PIPELINE ORDER]
-  * [POD TEMPLATING]
-* [CONTROLLERS] MAIN CHUNK OF DETAIL REGARDING THIS HERE
-  * [PKG VARIANT CONTROLLER]
-* [GIT-REPO] MAIN CHUNK OF DETAIL REGARDING THIS HERE
-  * [DEPLOYMENT VS NON-DEPLOYMENT REPO] EXPLAIN
-  * [4 WAYS A PKG REV COMES INTO EXISTENCE], [UPSTREAM IS THE SOURCE OF THE CLONE]
-    * [CREATED USING RPKG INIT/API], [IN THE CASE THERE IS NO UPSTREAM]
-    * [COPY FROM ANOTHER REV IN THE SAME PKG], [NO UPSTREAM?]
-    * [CAN BE CLONED FROM ANOTHER PKG REV AS A NEW ONE], [HAS UPSTREAM]
-    * [CAN BE LOADED FROM GIT], [DEPENDS ON WHETHER IT HAD A CLONE SOURCE OR NOT AT THE TIME]
-  * [UPSTREAM] EXPLAIN PORCH INTERACTION WITH UPSTREAM REPOS
-  * [DOWNSTREAM] EXPLAIN PORCH INTERACTION WITH DOWNSTREAM REPOS
-* [PORCH-SPECIFIC-RESOURCES] SUMMARY OF MAIN RESOURCES, E.G. PACKAGE-REVISIONS
-  * [PACKAGE-REVISION] MORE DETAIL HERE
-  * [PACKAGE-REVISION-RESOURCES] MORE DETAIL HERE
-  * [PACKAGE-REV] MORE DETAIL HERE
-  * [REPOSITORIES] MORE DETAIL HERE (SEE THE SKETCH AFTER THIS LIST)
-    * [GIT VS OCI] PORCH SUPPORTS THE CONCEPT OF MULTIPLE REPO TYPES; OCI IS EXPERIMENTAL. AN EXTERNAL REPO IS AN IMPLEMENTATION OF THE PORCH REPOSITORY INTERFACE WHICH STORES PKG REVISIONS ON AN EXTERNAL SYSTEM. TODAY THERE ARE 2 EXTERNAL REPO IMPLEMENTATIONS: GIT (FULLY SUPPORTED) & OCI (EXPERIMENTAL). DEVELOPERS ARE FREE TO DESIGN AND IMPLEMENT NEW EXTERNAL REPO TYPES IF THEY WISH, E.G. A DB INTERFACE
-* [PACKAGE-VARIANTS/-SETS] MORE DETAIL HERE
-* [PACKAGES] UNSURE IF THIS IS STILL USED?
-
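-
-A minimal sketch of a Git-backed Repository resource for the [REPOSITORIES] item above (the names and URL are
-hypothetical, and the `config.porch.kpt.dev/v1alpha1` group and *spec.git.secretRef* field are assumptions based on
-the kubectl commands shown in the troubleshooting section earlier in this changeset):
-
-```bash
-cat <<'EOF' | kubectl apply -f -
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-metadata:
-  name: deployments
-  namespace: porch-demo
-spec:
-  type: git                # git is fully supported; oci is experimental
-  deployment: true         # deployment vs non-deployment repo
-  git:
-    repo: https://github.com/example/deployments.git
-    branch: main
-    secretRef:
-      name: deployments-auth   # a kubernetes.io/basic-auth secret
-EOF
-```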
- ---- - -## 6. Configuration & Deployment - -**Section Must Contain:** - -* Configuration options (config as data, configuration schema) — key settings, environment variables, flags. Porch has *Configuration as Data*. -* Deployment modes: how Porch can be deployed (cluster, single VM, etc.) -* Versioning and upgrades -* Authentication, authorization configuration (connecting to Git, registries) - -**Currently Available Resources:** - -* Configuration as Data -* Git & Registry Auth (under How-Tos) -* GKE Deployment Guide - -**Gaps / additions Required:** - -* Config file schema and field definitions -* Supported deployment topologies -* Upgrade instructions / versioning policy - -
-More Detail on concepts requiring explanation
-
-* [DEPLOYMENTS] HOW TO DEPLOY/INSTALL PORCH ON DIFFERENT ENVS
-  * [OFFICIAL DEPLOYMENT]
-    * [INSTALLING PORCH]
-  * [LOCAL DEV ENV DEPLOYMENT]
-    * [DEV PROCESS]
-* [CONFIGURATION] DIFFERENT WAYS TO CONFIGURE PORCH (SEE THE SKETCH AFTER THIS LIST)
-  * [DB/CR CACHE SETUPS] HOW TO CONFIGURE PORCH TO RUN WITH A DB CACHE VS THE DEFAULT CR CACHE
-  * [REPOSITORY TYPES] PUBLIC VS PRIVATE REPOS FOR KPT FUNCTIONS USED BY PORCH, NOT REPOS WHERE PACKAGES ARE STORED!!!
-    * [PUBLIC IMAGE REPOSITORIES] GCR OR KPT/DEV
-    * [PRIVATE IMAGE REPOSITORIES]
-      * [PRIVATE REPOSITORY TLS AUTH]
-  * [CERT MANAGER] CONFIGURING PORCH TO USE CERT MANAGER FOR WEBHOOK HANDLING
-
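-
-As a small sketch for the [CONFIGURATION] item above: the troubleshooting FAQ earlier in this changeset notes that the
-default sync frequency is controlled by the `repo-sync-frequency` parameter of the porch-server deployment. Assuming
-it is passed as a command-line argument (an assumption), it could be tuned like this:
-
-```bash
-# Inspect the current porch-server arguments
-kubectl get deployment porch-server -n porch-system \
-  -o jsonpath='{.spec.template.spec.containers[0].args}'
-
-# Add a 5-minute sync frequency (assumed flag form)
-kubectl patch deployment porch-server -n porch-system --type=json \
-  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--repo-sync-frequency=5m"}]'
-```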
- ---- - -## 7. CLI / API / Reference - -**Section Must Contain:** - -* CLI tool reference: all commands, flags, examples -* APIs / CRDs / Resources: full spec for Porch‑specific Kubernetes resources, with fields, validation, defaulting -* Schema definitions or API versioning -* Configuration schema reference, file formats etc. - -**Currently Available Resources:** - -* CLI usage guide (basic) - -**Gaps / additions Required:** - -* Full CLI command reference (flags, subcommands) -* CRD reference (e.g., PackageVariant, Repository) -* YAML schema definitions and validation docs - -
-More Detail on concepts requiring explanation - -* [CLI] largely already completed here - -
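-
-Until a full CRD/API reference exists, the Porch API surface can be explored from a running cluster with standard
-kubectl discovery; a sketch (the group names are assumptions based on API output and commands shown elsewhere in this
-changeset):
-
-```bash
-# List the resources served by the Porch aggregated API and related CRDs
-kubectl api-resources | grep -E 'porch'
-
-# Show the schema of a resource, e.g. the Repository spec
-kubectl explain repository.spec
-```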
- ---- - -## 8. Best Practices & Patterns - -**Section Must Contain:** - -* Recommendations for structuring packages, versions & variants -* How to design reusable templates/functions -* Performance / scaling tips (e.g. for large numbers of packages or functions) -* Operational guidance: monitoring, logging, health checks - -**Currently Available Resources:** - -* Not directly addressed - -**Gaps / additions Required:** - -* Package/variant organization patterns -* Best practices for reusable mutations/functions -* Monitoring/logging guides - -
-More Detail on concepts requiring explanation - -
- ---- - -## 9. Troubleshooting & FAQ - -**Section Must Contain:** - -* Common problems & their solutions -* Error messages & diagnostic steps -* Debugging tips / tools -* FAQ: questions new users often ask - -**Currently Available Resources:** - -* None found - -**Gaps / additions Required:** - -* FAQ page -* Error resolution page -* CLI diagnostic/debugging guide - -
-More Detail on concepts requiring explanation - -
- ---- - -## 10. Security & Compliance - -**Section Must Contain:** - -* Authentication & authorization: how Porch ensures secure access -* Secrets / credentials handling (for Git, registries, etc.) -* Security considerations for function runner / templates / untrusted code -* TLS, encryption in transit / at rest if applicable - -**Currently Available Resources:** - -* Git & Registry authentication (under How-Tos) - -**Gaps / additions Required:** - -* Security model for untrusted functions -* Secrets handling / rotation model -* RBAC requirements and guidance - -
-More Detail on concepts requiring explanation
-
-* [TLS IN CONTAINER REGS]
-* [GIT REG AUTH IN PORCH REPOSITORIES RESOURCES]
-* [SELF-SIGNED TLS IN PORCH SERVER]
-* [WEBHOOKS AND RBAC] (SEE THE SKETCH AFTER THIS LIST)
-
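-
-A minimal RBAC sketch for the [WEBHOOKS AND RBAC] item above, granting read-only access to package revisions (the
-`porch.kpt.dev` group and resource names are taken from the API discovery output later in this changeset; the role
-and namespace names are hypothetical):
-
-```bash
-cat <<'EOF' | kubectl apply -f -
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: porch-package-reader
-  namespace: porch-demo
-rules:
-  - apiGroups: ["porch.kpt.dev"]
-    resources: ["packagerevisions", "packagerevisionresources"]
-    verbs: ["get", "list", "watch"]
-EOF
-```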
-
----
-
-## 11. Glossary
-
-**Section Must Contain:**
-
-* Define domain‑specific or technical terms used throughout the docs (variant, package orchestration, mutation, etc.)
-
-**Currently Available Resources:**
-
-* An old glossary page, likely in need of reconstruction, was found here
-
-**Gaps / additions Required:**
-
-* Term definitions + cross-links
-
-More Detail on concepts requiring explanation - -
- ---- - -## 12. Contributing - -**Section Must Contain:** - -* How to contribute (code, documentation) -* Developer setup (how to build and run Porch locally) — Porch has *Setting up a local environment*. -* Process for submitting changes, code review, governance - -**Currently Available Resources:** - -* Developer setup guide - -**Gaps / additions Required:** - -* CONTRIBUTING.md page with PR process -* Code style conventions -* Maintainer guide / governance model - -
-More Detail on concepts requiring explanation
-
-* [SIGNING CLA GUIDE/OTHER REQUIREMENTS ETC]
-* [DEPLOY DEV ENV GUIDE]
-  * [LOCAL PORCH SERVER]
-    * [OPTIONS] CAN BASICALLY DESCRIBE THE settings in launch.json & settings.json
-      * [DBCACHE] ...
-      * [CRCACHE] ...
-  * [IN POD DEPLOYMENT]
-* [RUN TESTS LOCALLY] (SEE THE SKETCH AFTER THIS LIST)
-* [CREATE PR PROCEDURE]
-  * [COMMON PR GOTCHAS] COULD BE COVERED BY A TEMPLATE
-
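-
-A sketch of the local development loop behind the [DEPLOY DEV ENV GUIDE] and [RUN TESTS LOCALLY] items above, using
-the setup script and make targets documented later in this changeset:
-
-```bash
-# One-time: create the kind-based dev environment (cluster, MetalLB, Gitea)
-./scripts/setup-dev-env.sh
-
-# Build and deploy all of porch into the kind cluster
-make run-in-kind
-
-# Run the unit tests, then the end-to-end tests
-make test
-make test-e2e
-```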
- ---- - -## 13. Release Notes / Changelog - -**Section Must Contain:** - -* What’s new in each release -* Breaking changes, deprecations -* Migration guides if necessary - -**Currently Available Resources:** - -* None found in public docs - -**Gaps / additions Required:** - -* Changelog per version -* Migration instructions -* Release tagging structure - -
-More Detail on concepts requiring explanation - -* [CAN POINT IN SOME WAY TO THE RELEASE IN PORCH HERE] - -
-
----
diff --git a/content/en/docs/neo-porch/release.md b/content/en/docs/neo-porch/release.md
deleted file mode 100644
index 095be459..00000000
--- a/content/en/docs/neo-porch/release.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Release"
-type: docs
-weight: 0
-description: Release Description
----
-
-## Lorem Ipsum
-
-The release notes can be found here
diff --git a/content/en/docs/porch/_index.md b/content/en/docs/porch/_index.md
index 8425fd58..c9130a83 100644
--- a/content/en/docs/porch/_index.md
+++ b/content/en/docs/porch/_index.md
@@ -1,12 +1,16 @@
 ---
-title: "Porch documentation"
+title: "Porch"
 type: docs
 weight: 6
-description: Documentation of Porch
 ---
-
 ## Overview
+{{% alert title="Note" color="primary" %}}
+
+**The Porch documentation has been moved to [https://docs.porch.nephio.org/](https://docs.porch.nephio.org/).**
+
+{{% /alert %}}
+
 Porch is “kpt-as-a-service”, providing opinionated package management, manipulation, and lifecycle operations
 in a Kubernetes-based API. This allows automation of these operations using standard Kubernetes controller
 techniques.
@@ -20,4 +24,4 @@ was decided that Porch would not be part of the kpt project and the code was don
 Porch is maintained by the Nephio community. Porch will evolve with Nephio and its architecture and implementation
 will be updated to meet the functional and non-functional requirements on it
-and on Nephio as a whole.
\ No newline at end of file
+and on Nephio as a whole.
diff --git a/content/en/docs/porch/config-as-data.md b/content/en/docs/porch/config-as-data.md
deleted file mode 100644
index 65c781a3..00000000
--- a/content/en/docs/porch/config-as-data.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: "Configuration as Data (CaD)"
-type: docs
-weight: 1
-description:
----
-
-This document provides the background context for Package Orchestration, which is further
-elaborated in a dedicated [document]({{% relref "/docs/porch/package-orchestration.md" %}}).
-
-## Configuration as data (CaD)
-
-CaD is an approach to the management of configuration. It includes the configuration of
-infrastructure, policy, services, applications, and so on. CaD involves the following:
-
-* Making configuration data the source of truth, stored separately from the live state.
-* Using a uniform, serializable data model to represent the configuration.
-* Separating the code that acts on the configuration from the data and from packages/bundles of
-  data.
-* Abstracting the configuration file structure and storage from the operations that act on the
-  configuration data. Clients manipulating the configuration data do not need to interact directly
-  with the storage (such as git, container images, and so on).
-
-![CaD Overview](/static/images/porch/CaD-Overview.svg)
-
-## Key principles
-
-A system based on CaD should observe the following key principles:
-
-* Separate handling of secrets in a dedicated, secret-focused storage system
-  ([example](https://cert-manager.io/)).
-* Storage of a versioned history of configuration changes by change sets to bundles of related
-  configuration data.
-* Reliance on the uniformity and consistency of the configuration format, including type metadata,
-  to enable pattern-based operations on the configuration data, along the lines of duck typing.
-* Separation of the configuration data from its schemas, and reliance on the schema information for
-  strongly typed operations and disambiguation of data structures and other variations within the
-  model.
-* Decoupling of abstractions of configuration from collections of configuration data.
-* Representation of abstractions of configuration generators as data with schemas, as with other
-  configuration data.
-* Finding, filtering, querying, selecting, and/or validating of configuration data that can be
-  operated on by given code (functions).
-* Finding and/or filtering, querying, and selecting of code (functions) that can operate on
-  resource types contained within a body of configuration data.
-* Actuation (reconciliation of configuration data with live state) that is separate from the
-  transformation of the configuration data, and is driven by the declarative data model.
-* Transformations. Transformations, particularly value propagation, are preferable to wholesale
-  configuration generation, except when the expansion is dramatic (for example, >10x).
-* Transformation input generation: this should usually be decoupled from propagation.
-* Deployment context inputs: these should be taken from well-defined “provider context” objects.
-* Identifiers and references: these should be declarative.
-* Live state: this should be linked back to sources of truth (configuration).
-
-## Kubernetes Resource Model configuration as data (KRM CaD)
-
-Our implementation of the Configuration as Data approach (
-[kpt](https://kpt.dev),
-[Config Sync](https://cloud.google.com/anthos-config-management/docs/config-sync-overview),
-and [Package Orchestration](https://github.com/nephio-project/porch))
-is built on the foundation of the
-[Kubernetes Resource Model](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)
-(KRM).
-
-{{% alert title="Note" color="primary" %}}
-
-Even though KRM is not a requirement of CaD (just as Python or Go templates, or Jinja, are not
-specifically requirements for [IaC](https://en.wikipedia.org/wiki/Infrastructure_as_code)), the
-choice of another foundational configuration representation format would necessitate the
-implementation of adapters for all types of infrastructure and applications configured, including
-Kubernetes, CRDs, GCP resources, and more. Likewise, choosing another configuration format would
-require the redesign of several of the configuration management mechanisms that have already been
-designed for KRM, such as three-way merge, structural merge patch, schema descriptions, resource
-metadata, references, status conventions, and so on.
-
-{{% /alert %}}
-
-
-**KRM CaD** is, therefore, a specific approach to implementing *Configuration as Data* which uses
-the following:
-
-* [KRM](https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md)
-  as the configuration serialization data model.
-* [Kptfile](https://kpt.dev/reference/schema/kptfile/) to store package metadata.
-* [ResourceList](https://kpt.dev/reference/schema/resource-list/) as a serialized package wire
-  format (see the sketch near the end of this page).
-* A function `ResourceList → ResultList` (*kpt* function) as the foundational, composable unit of
-  package manipulation code.
-
-  {{% alert title="Note" color="primary" %}}
-
-  Other forms of code can also manipulate packages, such as UIs and custom algorithms not
-  necessarily packaged and used as kpt functions.
-
-  {{% /alert %}}
-
-
-**KRM CaD** provides the following basic functionalities:
-
-* Loading a serialized package from a repository (as a ResourceList). Examples of repositories
-  include the following:
-  * Local HDD
-  * Git repository
-  * OCI
-  * Cloud storage
-* Saving a serialized package (as a ResourceList) to a package repository.
-* Evaluating a function on a serialized package (ResourceList).
-* [Rendering](https://kpt.dev/book/04-using-functions/#declarative-function-execution) a package
-  (evaluating the functions declared within the package itself).
-* Creating a new (empty) package.
-* Forking (or cloning) an existing package from one package repository (called upstream) to another
-  (called downstream).
-* Deleting a package from a repository.
-* Associating a version with the package and guaranteeing the immutability of packages with an
-  assigned version.
-* Incorporating changes from the new version of an upstream package into a new version of a
-  downstream package (three-way merge).
-* Reverting to a prior version of a package.
-
-## Configuration values
-
-The configuration as data approach enables some key values which are available in other
-configuration management approaches to a lesser extent or not at all.
-
-The values enabled by the configuration as data approach are as follows:
-
-* Simplified authoring of the configuration using a variety of methods and sources.
-* What-you-see-is-what-you-get (WYSIWYG) interaction with the configuration using a simple data
-  serialization format, rather than a code-like format.
-* Layering of interoperable interface surfaces (notably GUIs) over declarative configuration
-  mechanisms, rather than forcing choices between exclusive alternatives (exclusively, UI/CLI or
-  IaC initially, followed by exclusively UI/CLI or exclusively IaC).
-* The ability to apply UX techniques to simplify configuration authoring and viewing.
-* Compared to imperative tools, such as UI and CLI, that directly modify the live state via APIs,
-  CaD enables versioning, undo, audits of configuration history, review/approval, predeployment
-  preview, validation, safety checks, constraint-based policy enforcement, and disaster recovery.
-* Bulk changes to configuration data in their sources of truth.
-* Injection of configuration to address horizontal concerns.
-* Merging of multiple sources of truth.
-* State export to reusable blueprints without manual templatization.
-* Cooperative editing of configurations by humans and automation, such as for security remediation,
-  which is usually implemented against live-state APIs.
-* Reusability of the configuration transformation code across multiple bodies of configuration data
-  containing the same resource types, amortizing the effort of writing, testing, and documenting
-  the code.
-* A combination of independent configuration transformations.
-* Implementation of configuration transformations using the languages of choice, including both
-  programming and scripting approaches.
-* Reducing the frequency of changes to the existing transformation code.
-* Separation of roles between developer and non-developer configuration users.
-* Defragmenting the configuration transformation ecosystem.
-* Admission control and invariant enforcement on sources of truth.
-* Maintaining variants of configuration blueprints without one-size-fits-all full
-  struct-constructor-style parameterization and without manually constructing and maintaining
-  patches.
-* Drift detection and remediation for most of the desired state via continuous reconciliation using
-  apply, and/or, for specific attributes, via targeted mutation of the sources of truth.
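-
-To make the ResourceList wire format described above concrete, here is a minimal sketch (the resource and field
-values are hypothetical; the top-level structure follows the kpt ResourceList schema linked above):
-
-```bash
-cat <<'EOF' > resourcelist.yaml
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:                      # the package contents, as KRM resources
-  - apiVersion: v1
-    kind: ConfigMap
-    metadata:
-      name: example-config
-    data:
-      key: value
-functionConfig:             # configuration passed to a kpt function
-  apiVersion: v1
-  kind: ConfigMap
-  metadata:
-    name: set-labels-config
-  data:
-    app: example
-EOF
-```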
-
-## Related articles
-
-For more information about configuration as data and the Kubernetes Resource Model, visit the
-following links:
-
-* [Rationale for kpt](https://kpt.dev/guides/rationale)
-* [Understanding Configuration as Data](https://cloud.google.com/blog/products/containers-kubernetes/understanding-configuration-as-data-in-kubernetes)
-  blog post
-* [Kubernetes Resource Model](https://cloud.google.com/blog/topics/developers-practitioners/build-platform-krm-part-1-whats-platform)
-  blog post series
diff --git a/content/en/docs/porch/contributors-guide/_index.md b/content/en/docs/porch/contributors-guide/_index.md
deleted file mode 100644
index f2e8e275..00000000
--- a/content/en/docs/porch/contributors-guide/_index.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title: "Porch Contributor Guide"
-type: docs
-weight: 7
-description:
----
-
-## Changing Porch API
-
-If you change the API resources in `api/porch/.../*.go`, update the generated code by running:
-
-```sh
-make generate
-```
-
-## Components
-
-Porch comprises several software components:
-
-* [api](https://github.com/nephio-project/porch/tree/main/api): Definition of the KRM API supported by the Porch
-  extension apiserver
-* [porchctl](https://github.com/nephio-project/porch/tree/main/cmd/porchctl): CLI command tool for administration of
-  Porch `Repository` and `PackageRevision` custom resources.
-* [apiserver](https://github.com/nephio-project/porch/tree/main/pkg/apiserver): The Porch apiserver implementation, REST
-  handlers, Porch `main` function
-* [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine): Core logic of Package Orchestration -
-  operations on package contents
-* [func](https://github.com/nephio-project/porch/tree/main/func): KRM function evaluator microservice; exposes GRPC API
-* [repository](https://github.com/nephio-project/porch/blob/main/pkg/repository): Repository integration package
-* [git](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/git): Integration with Git repository.
-* [oci](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/oci): Integration with OCI repository.
-* [cache](https://github.com/nephio-project/porch/tree/main/pkg/cache): Package caching.
-* [controllers](https://github.com/nephio-project/porch/tree/main/controllers): `Repository` CRD. No controller;
-  Porch apiserver watches these resources for changes as repositories are (un-)registered.
-* [test](https://github.com/nephio-project/porch/tree/main/test): Test Git Server for Porch e2e testing, and
-  [e2e](https://github.com/nephio-project/porch/tree/main/test/e2e) tests.
-
-## Running Porch
-
-See dedicated documentation on running Porch:
-
-* [locally]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}})
-* [on GKE]({{% relref "/docs/porch/running-porch/running-on-GKE.md" %}})
-
-## Build the Container Images
-
-Build Docker images of Porch components:
-
-```sh
-# Build Images
-make build-images
-
-# Push Images to Docker Registry
-make push-images
-
-# Supported make variables:
-# IMAGE_TAG - image tag, e.g. 'latest' (defaults to 'latest')
-# GCP_PROJECT_ID - GCP project hosting gcr.io repository (will translate to gcr.io/${GCP_PROJECT_ID})
-# IMAGE_REPO - overwrites the default image repository
-
-# Example:
-IMAGE_TAG=$(git rev-parse --short HEAD) make push-images
-```
-
-## Debugging
-
-To debug Porch, run Porch [locally]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}}), exit the porch server running
-in the shell, and launch Porch under the debugger. A VS Code debug session is pre-configured in
-[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json).
-
-Update the launch arguments to your needs.
-
-## Code Pointers
-
-Some useful code pointers:
-
-* Porch REST API handlers in [registry/porch](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch),
-  for example [packagerevision.go](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch/packagerevision.go)
-* Background task handling cache updates in [background.go](https://github.com/nephio-project/porch/tree/main/pkg/registry/porch/background.go)
-* Git repository integration in [pkg/git](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/git)
-* OCI repository integration in [pkg/oci](https://github.com/nephio-project/porch/tree/main/pkg/externalrepo/oci)
-* CaD Engine in [engine](https://github.com/nephio-project/porch/tree/main/pkg/engine)
-* e2e tests in [e2e](https://github.com/nephio-project/porch/tree/main/test/e2e). See below for more on testing.
-
-## Running Tests
-
-All tests can be run using `make test`. Individual tests can be run using `go test`.
-End-to-end tests assume that a Porch instance is running and that `KUBECONFIG` is configured
-with the instance. The tests will automatically detect whether they are running against
-Porch on the local machine or in a k8s cluster, start the Git server appropriately,
-and then run the test suite against the Porch instance.
-
-## Makefile Targets
-
-* `make generate`: generate code based on Porch API definitions (runs k8s code generators)
-* `make tidy`: tidies all Porch modules
-* `make fmt`: formats golang sources
-* `make build-images`: builds Porch Docker images
-* `make push-images`: builds and pushes Porch Docker images
-* `make deployment-config`: customizes the configuration which installs Porch
-  in a k8s cluster with correct image names, annotations, and service accounts.
-  The deployment-ready configuration is copied into `./.build/deploy`
-* `make deploy`: deploys Porch in the k8s cluster configured with the current kubectl context
-* `make push-and-deploy`: builds, pushes Porch Docker images, creates deployment configuration, and deploys Porch
-* `make` or `make all`: builds and runs Porch [locally]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}})
-* `make test`: runs tests
-
-## VS Code
-
-[VS Code](https://code.visualstudio.com/) works really well for editing and debugging.
-Just open VS Code from the root folder of the Porch repository and it will work fine. The folder contains the
-configuration needed to launch the different functions of Porch.
diff --git a/content/en/docs/porch/contributors-guide/dev-process.md b/content/en/docs/porch/contributors-guide/dev-process.md
deleted file mode 100644
index 25d192cd..00000000
--- a/content/en/docs/porch/contributors-guide/dev-process.md
+++ /dev/null
@@ -1,281 +0,0 @@
----
-title: "Development process"
-type: docs
-weight: 3
-description:
----
-
-After you have run the setup script as explained in the [environment setup]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}}), you are ready to start the actual development of porch. That process involves, among other things, a combination of the tasks explained below.
-
-## Build and deploy all of porch
-
-The following command will rebuild all of porch and deploy all of its components into your porch-test kind cluster (created in the [environment setup]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}})):
-
-```bash
-make run-in-kind
-```
-
-## Troubleshoot the porch API server
-
-There are several ways to develop, test and troubleshoot the porch API server. In this chapter we describe an option where every other part of porch is running in the porch-test kind cluster, but the porch API server is running locally on your machine, typically in an IDE.
-
-The following command rebuilds and deploys porch, except the porch API server component, and also prepares your environment for connecting the local API server with the in-cluster components.
-
-```bash
-make run-in-kind-no-server
-```
-
-After issuing this command you are expected to start the porch API server locally on your machine (outside of the kind cluster); probably in your IDE, potentially in a debugger.
-
-### Configure VS Code to run the Porch (API)server
-
-The simplest way to run the porch API server is to launch it in a VS Code IDE, as described by the following process:
-
-1. Open the *porch.code-workspace* file in the root of the porch git repository.
-
-1. Edit your local *.vscode/launch.json* file as follows: Change the `--kubeconfig` argument of the Launch Server
-   configuration to point to a *KUBECONFIG* file that is set to the kind cluster as the current context.
-
-{{% alert title="Note" color="primary" %}}
-
-  If your current *KUBECONFIG* environment variable already points to the porch-test kind cluster, then you don't have to touch anything.
-
-  {{% /alert %}}
-
-1. Launch the Porch server locally in VS Code by selecting the *Launch Server* configuration on the VS Code
-   *Run and Debug* window. For more information please refer to the
-   [VS Code debugging documentation](https://code.visualstudio.com/docs/editor/debugging).
-
-### Check that the API server is serving requests
-
-```bash
-curl https://localhost:4443/apis/porch.kpt.dev/v1alpha1 -k
-```
-
-Sample output - -```json -{ - "kind": "APIResourceList", - "apiVersion": "v1", - "groupVersion": "porch.kpt.dev/v1alpha1", - "resources": [ - { - "name": "packagerevisionresources", - "singularName": "", - "namespaced": true, - "kind": "PackageRevisionResources", - "verbs": [ - "get", - "list", - "patch", - "update" - ] - }, - { - "name": "packagerevisions", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update", - "watch" - ] - }, - { - "name": "packagerevisions/approval", - "singularName": "", - "namespaced": true, - "kind": "PackageRevision", - "verbs": [ - "get", - "patch", - "update" - ] - }, - { - "name": "packages", - "singularName": "", - "namespaced": true, - "kind": "Package", - "verbs": [ - "create", - "delete", - "get", - "list", - "patch", - "update" - ] - } - ] -} -``` - -
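-
-If the locally running server is wired up correctly, the same API should also be reachable through the cluster via
-kubectl (the APIService name below is an assumption based on the `porch.kpt.dev/v1alpha1` group version shown above):
-
-```bash
-# Verify the aggregated API is registered and available
-kubectl get apiservice v1alpha1.porch.kpt.dev
-
-# List package revisions through the aggregated API
-kubectl get packagerevisions --all-namespaces
-```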
-
-
-## Troubleshoot the porch controllers
-
-There are several ways to develop, test and troubleshoot the porch controllers (i.e. *PackageVariant*, *PackageVariantSet*). In this chapter we describe an option where every other part of porch is running in the porch-test kind cluster, but the process hosting all porch controllers is running locally on your machine.
-
-The following command will rebuild and deploy porch, except the porch-controllers component:
-
-```bash
-make run-in-kind-no-controllers
-```
-
-After issuing this command you are expected to start the porch controllers process locally on your machine (outside of
-the kind cluster); probably in your IDE, potentially in a debugger. If you are using VS Code you can use the
-**Launch Controllers** configuration that is defined in the
-[launch.json](https://github.com/nephio-project/porch/blob/main/.vscode/launch.json) file of the porch git repository.
-
-## Run the unit tests
-
-```bash
-make test
-```
-
-## Run the end-to-end tests
-
-To run the end-to-end tests against the Kubernetes API server that *KUBECONFIG* points to, simply issue:
-
-```bash
-make test-e2e
-```
-
-To run the end-to-end tests against a clean deployment, issue:
-
-```bash
-make test-e2e-clean
-```
-This will
-- create a brand new kind cluster,
-- rebuild porch,
-- deploy the newly built porch into the new cluster,
-- run the end-to-end tests against that, and
-- delete the kind cluster if all tests passed.
-
-This process closely mimics the end-to-end tests that are run against your PR on GitHub.
-
-In order to run just one particular test case you can execute something similar to this:
-
-```bash
-E2E=1 go test -v ./test/e2e -run TestE2E/PorchSuite/TestPackageRevisionInMultipleNamespaces
-```
-or this:
-```bash
-E2E=1 go test -v ./test/e2e/cli -run TestPorch/rpkg-lifecycle
-
-```
-
-To run the end-to-end tests on your local machine towards a Porch server running in VS Code, be aware of the following if the tests are not running:
-- Set the actual load balancer IP address for the function runner in your *launch.json*, for example
-  "--function-runner=172.18.255.201:9445"
-- Clear the git cache of your Porch workspace before every test run, for example
-  `rm -fr /.cache/git/*`
-
-## Run the load test
-
-A script is provided to run a Porch load test against the Kubernetes API server that *KUBECONFIG* points to.
-
-```bash
-porch % scripts/run-load-test.sh -h
-
-run-load-test.sh - runs a load test on porch
-
- usage: run-load-test.sh [-options]
-
- options
-  -h - this help message
-  -s hostname - the host name of the git server for porch git repositories
-  -r repo-count - the number of repositories to create during the test, a positive integer
-  -p package-count - the number of packages to create in each repo during the test, a positive integer
-  -e package-revision-count - the number of packagerevisions to create on each package during the test, a positive integer
-  -f result-file - the file where the raw results will be stored, defaults to load_test_results.txt
-  -o repo-result-file - the file where the results by repo will be stored, defaults to load_test_repo_results.csv
-  -l log-file - the file where the test log will be stored, defaults to load_test.log
-  -y - dirty mode, do not clean up after tests
-```
-
-The load test creates, copies, proposes and approves `repo-count` repositories, each with `package-count` packages
-with `package-revision-count` package revisions created for each package. The script initializes or copies each
-package revision in turn. It adds a pipeline with two "apply-replacements" kpt functions to the Kptfile of each
-package revision. It updates the package revision, and then proposes and approves it.
-
-The load test script creates repositories on the git server at `hostname`, so its URL will be `http://nephio:secret@hostname:3000/nephio/`.
-The script expects a git server to be running at that URL.
-
-The `result-file` is a text file containing the time it takes for a package to move from being initialized or
-copied to being approved. It also records the time it takes to propose-delete and delete each package revision.
-
-The `repo-result-file` is a CSV file that tabulates the results from `result-file` into columns for each repository created.
-
-For example:
-
-```bash
-porch % scripts/run-load-test.sh -s 172.18.255.200 -r 4 -p 2 -e 3
-running load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/
- 4 repositories will be created
- 2 packages in each repo
- 3 pacakge revisions in each package
- results will be stored in "load_test_results.txt"
- repo results will be stored in "load_test_repo_results.csv"
- the log will be stored in "load_test.log"
-load test towards git server http://nephio:secret@172.18.255.200:3000/nephio/ completed
-```
-
-In the load test above, a total of 24 package revisions were created and deleted.
-
-|REPO-1-TEST|REPO-1-TIME|REPO-2-TEST|REPO-2-TIME|REPO-3-TEST|REPO-3-TIME|REPO-4-TEST|REPO-4-TIME|
-|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
-1:1|1.951846|1:1|1.922723|1:1|2.019615|1:1|1.992746
-1:2|1.762657|1:2|1.864306|1:2|1.873962|1:2|1.846436
-1:3|1.807281|1:3|1.930068|1:3|1.860375|1:3|1.881649
-2:1|1.829227|2:1|1.904997|2:1|1.956160|2:1|1.988209
-2:2|1.803494|2:2|1.912169|2:2|1.915905|2:2|1.902103
-2:3|1.816716|2:3|1.948171|2:3|1.931904|2:3|1.952902
-del-6a0b3…|.918442|del-e757b…|.904881|del-d39cd…|.944850|del-6222f…|.911060
-del-378a4…|.831815|del-9211c…|.866386|del-316a5…|.898638|del-31d9f…|.895919
-del-89073…|.874867|del-97d45…|.876450|del-830e0…|.905896|del-7d411…|.866947
-del-4756f…|.850528|del-c95db…|.903599|del-4c450…|.884997|del-587f8…|.842529
-del-9860a…|.887118|del-9c1b9…|1.018930|del-66ae…|.929470|del-6ae3d…|.905359
-del-a11e5…|.845834|del-71540…|.899935|del-8d1e8…|.891296|del-9e2bb…|.864382
-del-1d789…|.851242|del-ffdc3…|.897862|del-75e45…|.852323|del-82eef…|.916630
-del-8ae7e…|.872696|del-58097…|.894618|del-d164f…|.852093|del-9da24…|.849919
-
-## Switching between tasks
-
-The `make run-in-kind`, `make run-in-kind-no-server` and `make run-in-kind-no-controllers` commands can be executed right after each other. No clean-up or restart is required between them. The make scripts will intelligently do the necessary changes in your current porch deployment in kind (e.g. removing or re-adding the porch API server).
-
-You can always find the configuration of your current deployment in *.build/deploy*.
-
-You can always use `make test` and `make test-e2e` to test your current setup, no matter which of the above detailed configurations it is.
-
-## Getting to know the make targets
-
-Try: `make help`
-
-## Restart with a clean-slate
-
-Sometimes the development kind cluster gets cluttered and you may experience weird behavior from porch.
-In this case you might want to restart with a clean slate:
-First, delete the development kind cluster with the following command:
-
-```bash
-kind delete cluster --name porch-test
-```
-
-Then re-run the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh):
-
-```bash
-./scripts/setup-dev-env.sh
-```
-
-Finally, deploy porch into the kind cluster by any of the methods explained above.
-
diff --git a/content/en/docs/porch/contributors-guide/environment-setup-vm.md b/content/en/docs/porch/contributors-guide/environment-setup-vm.md
deleted file mode 100644
index fe1e4edd..00000000
--- a/content/en/docs/porch/contributors-guide/environment-setup-vm.md
+++ /dev/null
@@ -1,162 +0,0 @@
----
-title: "Setting up a VM environment"
-type: docs
-weight: 2
-description:
----
-
-This tutorial gives short instructions on how to set up a development environment for Porch on a Nephio VM. It outlines the steps to
-get a [kind](https://kind.sigs.k8s.io/) cluster up and running, to which a Porch instance running in Visual Studio Code
-can connect and interact. If you are not familiar with how porch works, it is highly recommended that you go
-through the [Starting with Porch tutorial]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) before going through this one.
-
-## Setting up the environment
-
-1. The first step is to install the Nephio sandbox environment on your VM using the procedure described in
-[Installation on a single VM]({{% relref "/docs/guides/install-guides/install-on-single-vm.md" %}}). In short, log onto your VM and give the command
-below:
-
-```bash
-wget -O - https://raw.githubusercontent.com/nephio-project/test-infra/main/e2e/provision/init.sh | \
-sudo NEPHIO_DEBUG=false \
-     NEPHIO_BRANCH=main \
-     NEPHIO_USER=ubuntu \
-     bash
-```
-
-2. Set up your VM for development (optional but recommended step).
-
-```bash
-echo '' >> ~/.bashrc
-echo 'source <(kubectl completion bash)' >> ~/.bashrc
-echo 'source <(kpt completion bash)' >> ~/.bashrc
-echo 'source <(porchctl completion bash)' >> ~/.bashrc
-echo '' >> ~/.bashrc
-echo 'alias h=history' >> ~/.bashrc
-echo 'alias k=kubectl' >> ~/.bashrc
-echo '' >> ~/.bashrc
-echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
-
-sudo usermod -a -G syslog ubuntu
-sudo usermod -a -G docker ubuntu
-```
-
-3. Log out of your VM and log in again so that the group changes on the *ubuntu* user are picked up.
-
-```bash
-> exit
-
-> ssh ubuntu@thevmhostname
-> groups
-ubuntu adm dialout cdrom floppy sudo audio dip video plugdev syslog netdev lxd docker
-```
-
-4. Install *go* so that you can build Porch on the VM:
-
-```bash
-wget -O - https://go.dev/dl/go1.22.5.linux-amd64.tar.gz | sudo tar -C /usr/local -zxvf -
-
-echo '' >> ~/.profile
-echo '# set PATH for go' >> ~/.profile
-echo 'if [ -d "/usr/local/go" ]' >> ~/.profile
-echo 'then' >> ~/.profile
-echo '    PATH="/usr/local/go/bin:$PATH"' >> ~/.profile
-echo 'fi' >> ~/.profile
-```
-
-5. Log out of your VM and log in again so that *go* is added to your path. Verify that *go* is in the path:
-
-```bash
-> exit
-
-> ssh ubuntu@thevmhostname
-
-> go version
-go version go1.22.5 linux/amd64
-```
-
-6. Install *go delve* for debugging on the VM:
-
-```bash
-go install -v github.com/go-delve/delve/cmd/dlv@latest
-```
-
-7. Clone Porch onto the VM
-
-```bash
-mkdir -p git/github/nephio-project
-cd ~/git/github/nephio-project

-# Clone porch
-git clone https://github.com/nephio-project/porch.git
-cd porch
-```
-
-8. Change the Kind cluster name in the Porch Makefile to match the Kind cluster name on the VM:
-
-```bash
-sed -i "s/^KIND_CONTEXT_NAME ?= porch-test$/KIND_CONTEXT_NAME ?= "$(kind get clusters)"/" Makefile
-```
-
-9. Expose the Porch function runner so that the Porch server running in VS Code can access it
-
-```bash
-kubectl expose svc -n porch-system function-runner --name=xfunction-runner --type=LoadBalancer --load-balancer-ip='172.18.0.202'
-```
-
-10. Set the *KUBECONFIG* and *FUNCTION_RUNNER_IP* environment variables in the *.profile* file.
-    You **must** do this step before connecting with VS Code because VS Code caches the environment on the server. If you
-    want to change the values of these variables subsequently, you must restart the VM server.
-
-    ```bash
-    echo '' >> ~/.profile
-    echo 'export KUBECONFIG="/home/ubuntu/.kube/config"' >> ~/.profile
-    echo 'export FUNCTION_RUNNER_IP="172.18.0.202"' >> ~/.profile
-    ```
-
-You have now set up the VM so that it can be used for remote debugging of Porch.
-
-## Setting up VS Code
-
-Use the [VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
-plugin to debug from VS Code running on your local machine towards a VM. Detailed documentation
-on the plugin and its use is available on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page in the VS Code
-documentation.

-1. Use the **Connect to a remote host** instructions on the
-[Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) page to connect to your VM.
-
-2. Click **Open Folder** and browse to the Porch code on the VM, */home/ubuntu/git/github/nephio-project/porch* in this
-   case:
-
-![Browse to Porch code](/static/images/porch/contributor/01_VSCodeOpenPorchFolder.png)
-
-3. VS Code now opens the Porch project on the VM.
-
-![Porch code is open](/static/images/porch/contributor/02_VSCodeConnectedPorch.png)
-
-4. We now need to install support for *go* debugging in VS Code. Trigger this by launching a debug configuration in
-   VS Code. Here we use the **Launch Override Server** configuration.
-
-![Launch the Override Server VS Code debug configuration](/static/images/porch/contributor/03_LaunchOverrideServer.png)
-
-5. VS Code complains that *go* debugging is not supported; click the **Install go Extension** button.
-
-![VS Code go debugging not supported message](/static/images/porch/contributor/04_GoDebugNotSupportedPopup.png)
-
-6. VS Code automatically presents the Go debug plugin for installation. Click the **Install** button.
-
-![VS Code Go debugging plugin selected](/static/images/porch/contributor/05_GoExtensionAutoSelected.png)
-
-7. VS Code installs the plugin.
-
-![VS Code Go debugging plugin installed](/static/images/porch/contributor/06_GoExtensionInstalled.png)
-
-You have now set up VS Code so that it can be used for remote debugging of Porch.
-
-## Getting started with actual development
-
-You can find a detailed description of the actual development process [here]({{% relref "/docs/porch/contributors-guide/dev-process.md" %}}).
diff --git a/content/en/docs/porch/contributors-guide/environment-setup.md b/content/en/docs/porch/contributors-guide/environment-setup.md deleted file mode 100644 index a4eef984..00000000 --- a/content/en/docs/porch/contributors-guide/environment-setup.md +++ /dev/null @@ -1,251 +0,0 @@ ---- -title: "Setting up a local environment" -type: docs -weight: 2 -description: ---- - -This tutorial gives short instructions on how to set up a development environment for Porch on your local machine. It outlines the steps to -get a [kind](https://kind.sigs.k8s.io/) cluster up and running to which a Porch instance running in Visual Studio Code -can connect to and interact with. If you are not familiar with how porch works, it is highly recommended that you go -through the [Starting with Porch tutorial]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) before going through this one. - -{{% alert title="Note" color="primary" %}} - -As your development environment, you can run the code on a remote VM and use the -[VS Code Remote SSH](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) -plugin to connect to it. - -{{% /alert %}} - -## Extra steps for MacOS users - -The script the make deployment-config target to generate the deployment files for porch. The scripts called by this -make target use recent *bash* additions. MacOS comes with *bash* 3.x.x - -1. Install *bash* 4.x.x or better of *bash* using homebrew, see - [this post for details](https://apple.stackexchange.com/questions/193411/update-bash-to-version-4-0-on-osx) -2. Ensure that */opt/homebrew/bin* is earlier in your path than */bin* and */usr/bin* - -{{% alert title="Note" color="primary" %}} - -The changes above **permanently** change the *bash* version for **all** applications and may cause side -effects. - -{{% /alert %}} - - -## Setup the environment automatically - -The [*./scripts/setup-dev-env.sh*](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) setup -script automatically builds a porch development environment. - -{{% alert title="Note" color="primary" %}} - -This is only one of many possible ways of building a working porch development environment so feel free -to customize it to suit your needs. - -{{% /alert %}} - -The setup script will perform the following steps: - -1. Install a kind cluster. The name of the cluster is read from the PORCH_TEST_CLUSTER environment variable, otherwise - it defaults to porch-test. The configuration of the cluster is taken from - [here](https://github.com/nephio-project/porch/blob/main/deployments/local/kind_porch_test_cluster.yaml). -1. Install the MetalLB load balancer into the cluster, in order to allow LoadBalancer typed Services to work properly. -1. Install the Gitea git server into the cluster. This can be used to test porch during development, but it is not used - in automated end-to-end tests. Gitea is exposed to the host via port 3000. The GUI is accessible via - , or (username: nephio, password: secret). - {{% alert title="Note" color="primary" %}} - - If you are using WSL2 (Windows Subsystem for Linux), then Gitea is also accessible from the Windows host via the - URL. - - {{% /alert %}} -1. Generate the PKI resources (key pairs and certificates) required for end-to-end tests. -1. Build the porch CLI binary. The result will be generated as *.build/porchctl*. - -That's it! If you want to run the steps manually, please use the code of the script as a detailed description. 
- -The setup script is idempotent in the sense that you can rerun it without cleaning up first. This also means that if the -script is interrupted for any reason, and you run it again it should effectively continue the process where it left off. - -## Extra manual steps - -Copy the *.build/porchctl* binary (that was built by the setup script) to somewhere in your $PATH, or add the *.build* -directory to your PATH. - -## Build and deploy porch - -You can build all of porch, and also deploy it into your newly created kind cluster with this command. - -```bash -make run-in-kind -``` - -See more advanced variants of this command in the [detailed description of the development process]({{% relref "/docs/porch/contributors-guide/dev-process.md" %}}). - -## Check that everything works as expected - -At this point you are basically ready to start developing porch, but before you start it is worth checking that -everything works as expected. - -### Check that the APIservice is ready - -```bash -kubectl get apiservice v1alpha1.porch.kpt.dev -``` - -Sample output: - -```bash -NAME SERVICE AVAILABLE AGE -v1alpha1.porch.kpt.dev porch-system/api True 18m -``` - -### Check the porch api-resources - -```bash -kubectl api-resources | grep porch -``` - -Sample output: - -```bash -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant -packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true PorchPackage -``` - -## Create Repositories using your local Porch server - -To connect Porch to Gitea, follow [step 7 in the Starting with Porch]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) -tutorial to create the repositories in Porch. - -You will notice logging messages in VS Code when you run the `kubectl apply -f porch-repositories.yaml` command. - -You can check that your locally running Porch server has created the repositories by running the `porchctl` command: - -```bash -porchctl repo get -A -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You can also check the repositories using *kubectl*. - -```bash -kubectl get repositories -n porch-demo -``` - -Sample output: - -```bash -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -You now have a locally running Porch (API)server. Happy developing! - -## Restart from scratch - -Sometimes the development cluster gets cluttered and you may experience weird behavior from porch. 
-In this case you might want to restart from scratch, by deleting the development cluster with the following -command: - -```bash -kind delete cluster --name porch-test -``` - -and running the [setup script](https://github.com/nephio-project/porch/blob/main/scripts/setup-dev-env.sh) again: - -```bash -./scripts/setup-dev-env.sh -``` - -## Getting started with actual development - -You can find a detailed description of the actual development process [here]({{% relref "/docs/porch/contributors-guide/environment-setup.md" %}}). - -## Enabling Open Telemetry/Jaeger tracing - -### Enabling tracing on a Porch deployment - -Follow the steps below to enable Open Telemetry/Jaeger tracing on your Porch deployment. - -1. Apply the Porch *deployment.yaml* manifest for Jaeger. - -```bash -kubectl apply -f https://raw.githubusercontent.com/nephio-project/porch/refs/heads/main/deployments/tracing/deployment.yaml -``` - -2. Add the environment variable *OTEL* to the porch-server manifest: - -```bash -kubectl edit deployment -n porch-system porch-server -``` - -```bash -env: -- name: OTEL - value: otel://jaeger-oltp:4317 -``` - -3. Set up port forwarding of the Jaeger HTTP port to your local machine: - -```bash -kubectl port-forward -n porch-system service/jaeger-http 16686 -``` - -4. Open the Jaeger UI in your browser at *http://localhost:16686* - -### Enable tracing on a local Porch server - -Follow the steps below to enable Open Telemetry/Jaeger tracing on a porch server running locally on your machine, such as in VS Code. - -1. Download the Jaeger binary tarball for your local machine architecture from [the Jaeger download page](https://www.jaegertracing.io/download/#binaries) and untar the tarball in some suitable directory. - -2. Run Jaeger: - -```bash -cd jaeger -./jaeger-all-in-one -``` - -3. Configure the Porch server to output Open Telemetry traces: - - Set the *OTEL* environment variable to point at the Jaeger server - - In *.vscode/launch.json*: - -```bash -"env": { - ... - ... -"OTEL": "otel://localhost:4317", - ... - ... -} -``` - - In a shell: - -```bash -export OTEL="otel://localhost:4317" -``` - -4. Open the Jaeger UI in your browser at *http://localhost:16686* - -5. Run the Porch Server. - diff --git a/content/en/docs/porch/function-runner-pod-templates.md b/content/en/docs/porch/function-runner-pod-templates.md deleted file mode 100644 index 4c205daf..00000000 --- a/content/en/docs/porch/function-runner-pod-templates.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: "Function runner pod templating" -type: docs -weight: 4 -description: ---- - -## Overview - -The `porch-fn-runner` implements a simple function-as-a-service for executing kpt functions, running -the necessary kpt functions wrapped in a GRPC server. The function of the `porch-fn-runner` is to -start up a number of function evaluator pods for each of the kpt functions, along with a front-end -service, pointing to its respective pod. As with any operator that manages pods, it is good to -provide some templating and parameterization capabilities of the pods that will be managed by the -function runner. - -## Contract for writing pod templates - -The following contract needs to be fulfilled by any function evaluator pod template: - -1. There is a container. The container is named "function". -2. The entry point of the “function” container will start the wrapper GRPC server. -3. The image of the “function” container can be set to the image of the kpt function without - impacting the starting of the entry point. -4. 
The arguments of the “function” container can be appended with the entries from the Dockerfile - ENTRYPOINT of the kpt function image. - -## Enabling pod templating on function runner - -A ConfigMap with the pod template should be created in the namespace where the porch-fn-runner pod -is running. The name of the ConfigMap should be included as `--function-pod-template`, in the -command line arguments in the pod specification of the function runner. - -```yaml -... -spec: - serviceAccountName: porch-fn-runner - containers: - - name: function-runner - image: gcr.io/example-google-project-id/porch-function-runner:latest - imagePullPolicy: IfNotPresent - command: - - /server - - --config=/config.yaml - - --functions=/functions - - --pod-namespace=porch-fn-system - - --function-pod-template=kpt-function-eval-pod-template - env: - - name: WRAPPER_SERVER_IMAGE - value: gcr.io/example-google-project-id/porch-wrapper-server:latest - ports: - - containerPort: 9445 - # Add grpc readiness probe to ensure the cache is ready - readinessProbe: - exec: - command: - - /grpc-health-probe - - -addr - - localhost:9445 -... -``` - -Additionally, the porch-fn-runner pod requires `read` access to the pod template ConfigMap. Assuming -the porch-fn-runner pod is running in the porch-system namespace, the following Role and -RoleBindings need to be added to the Porch deployment manifests. - -```yaml -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: porch-fn-runner - namespace: porch-system -rules: - - apiGroups: [""] - resources: ["configmaps"] - verbs: ["get", "list"] ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: porch-fn-runner - namespace: porch-system -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: porch-fn-runner -subjects: - - kind: ServiceAccount - name: porch-fn-runner -``` - -## Example pod template - -The pod template ConfigMap below matches the default behavior: - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: kpt-function-eval-pod-template -data: - template: | - apiVersion: v1 - kind: Pod - annotations: - cluster-autoscaler.kubernetes.io/safe-to-evict: true - spec: - initContainers: - - name: copy-wrapper-server - image: docker.io/nephio/porch-wrapper-server:latest - command: - - cp - - -a - - /wrapper-server/. - - /wrapper-server-tools - volumeMounts: - - name: wrapper-server-tools - mountPath: /wrapper-server-tools - containers: - - name: function - image: image-replaced-by-kpt-func-image - command: - - /wrapper-server-tools/wrapper-server - volumeMounts: - - name: wrapper-server-tools - mountPath: /wrapper-server-tools - volumes: - - name: wrapper-server-tools - emptyDir: {} - serviceTemplate: | - apiVersion: v1 - kind: Service - spec: - ports: - - port: 9446 - protocol: TCP - targetPort: 9446 - selector: - fn.kpt.dev/image: to-be-replaced - type: ClusterIP -``` diff --git a/content/en/docs/porch/package-mutation-pipeline-order.md b/content/en/docs/porch/package-mutation-pipeline-order.md deleted file mode 100644 index 7b22e562..00000000 --- a/content/en/docs/porch/package-mutation-pipeline-order.md +++ /dev/null @@ -1,183 +0,0 @@ ---- -title: "Package Mutation Pipeline Order" -type: docs -weight: 1 -description: ---- - -## Why - -This document explains the two different traversal strategies for package hydration in Porch's rendering pipeline: **Depth-First Search (DFS)** and **Breadth-First Search (BFS)**. 
These strategies determine the order in which kpt packages and their subpackages are processed during mutation and validation. - -## Background - -Porch uses a hydration process to transform kpt packages by running functions (mutators and validators) defined in Kptfiles. The order in which packages are processed can significantly impact the final output, especially when parent and child packages have interdependent transformations. - -## Traversal Strategies - -### Terminology - -For a package structure like: -``` -ROOT/ -├── A/ -├── B/ -└───└─ C/ -``` -Let's define the key terms used throughout this documentation. - -- Root: The top-level package that initiates the hydration process (e.g., ROOT) -- Child: A direct subpackage of another package (e.g., A, B, C are children of ROOT) -- Sibling: Packages that share the same parent (e.g., A and B are siblings) -- Descendant: Any package in the subtree below a given package, including children, grandchildren, etc. - -### Default: Depth-First Search (DFS) - -**Function**: `hydrate()` - -The default hydration strategy processes packages using depth-first traversal in post-order. This means: -- All subpackages are processed **before** their parent packages -- Recursion naturally handles the traversal order -- Resources flow **bottom-up** through the package hierarchy - -#### Processing Order -For the package structure shown before: - -The execution order is: **C → A → B → ROOT** (alphabetical order within each level, then parent) - -#### Implementation Details -- Uses recursive function calls to traverse the package tree -- Each package's pipeline receives: - - All resources from its processed subpackages - - Its own local resources -- Subpackage resources are appended to the parent's input before running the parent's pipeline - -### Optional: Breadth-First Search (BFS) - -**Function**: `hydrateBfsOrder()` - -The BFS strategy processes packages in a top-down approach using explicit queues: -- Parent packages are processed **before** their subpackages -- Uses two-phase execution: discovery and pipeline execution -- Resources flow **top-down** through the package hierarchy - -#### Processing Order -For the package structure shown before: - -The execution order is: **ROOT → A → B → C** (parent first, then children in alphabetical order) - -#### Implementation Details -- **Phase 1**: Breadth-first discovery of all packages and loading of local resources -- **Phase 2**: Sequential pipeline execution with scoped visibility -- Each package's pipeline receives: - - Its own local resources - - All resources from its descendants (children, grandchildren, etc.) 
- -## Enabling BFS Mode - -To use the BFS traversal strategy, add the following annotation to your root package's Kptfile: - -```yaml -apiVersion: kpt.dev/v1 -kind: Kptfile -metadata: - name: root-package - annotations: - kpt.dev/bfs-rendering: "true" -``` - -**Important**: -- The annotation must be set to exactly `"true"` (case-sensitive) -- Any other value or missing annotation defaults to DFS mode -- The annotation is only checked on the root package's Kptfile - -## Key Differences and Use Cases - -| Aspect | DFS (Default) | BFS (Optional) | -|--------|---------------|----------------| -| **Traversal Pattern** | Depth-first, post-order | Breadth-first, level-order | -| **Processing Direction** | Bottom-up (children → parent) | Top-down (parent → children) | -| **Resource Flow** | Subpackages feed into parent | Parent influences subpackages | -| **Queue Implementation** | Implicit (recursion) | Explicit (two queues) | -| **Resource Visibility** | Parent sees all subpackage outputs | Package sees self + all descendants | -| **Cycle Detection** | During traversal | During discovery phase | - -### When to Use DFS (Default) -- **Aggregation scenarios**: When parent packages need to collect and process outputs from subpackages -- **Bottom-up customization**: When specializations at lower levels should inform higher-level decisions -- **Traditional kpt workflows**: Most existing kpt packages expect this behavior - -### When to Use BFS -- **Template expansion**: When a root package serves as a template that configures subpackages -- **Top-down configuration**: When parent-level settings should cascade to children -- **Consistent base customization**: When you want to apply base transformations before specialized ones - -## Practical Examples - -### DFS Scenario: Configuration Aggregation -``` -ROOT/ # Collects all service configs -├── service-a/ # Defines service-a configuration -├── service-b/ # Defines service-b configuration -└── monitoring/ # Defines monitoring for both services -``` - -With DFS, the ROOT package can aggregate configurations from all services and create a unified monitoring dashboard. - -### BFS Scenario: Template-Based Deployment -``` -ROOT/ # Contains base templates and global config -├── staging/ # Staging-specific overrides -├── production/ # Production-specific overrides -└── development/ # Development-specific overrides -``` - -With BFS, the ROOT package can set up base templates and global configurations that are then specialized by each environment-specific subpackage. - -## Implementation Architecture - -### Core Components - -1. **hydrationContext**: Maintains global state during hydration including: - - Package registry with hydration states (Dry, Hydrating, Wet) - - Input/output file tracking for pruning - - Function execution counters and results - -2. **pkgNode**: Represents individual packages in the hydration graph: - - Package metadata and file system access - - Hydration state tracking - - Accumulated resources after processing - -3. 
**Pipeline Execution**: Both strategies share the same pipeline execution logic: - - Mutator functions transform resources - - Validator functions verify resources without modification - - Function selection and exclusion based on selectors - -### Resource Scoping - -**DFS Resource Scope**: -- Input = subpackage outputs + own local resources -- Processes transitively accumulated resources - -**BFS Resource Scope**: -- Input = own local resources + all descendant local resources -- Each package sees its complete subtree - -## Error Handling and Validation - -Both strategies include: -- **Cycle Detection**: Prevents infinite loops in package dependencies -- **State Validation**: Ensures packages are processed in correct order -- **Resource Validation**: Verifies KRM resource format compliance -- **Pipeline Validation**: Checks function configurations before execution - -## Related Resources - -- [Tree Traversal Algorithms](https://en.wikipedia.org/wiki/Tree_traversal) - -## See Also - -- **Source Code**: https://github.com/nephio-project/porch -- **File**: `internal/kpt/util/render/executor.go` -- **Key Functions**: `hydrate()` and `hydrateBfsOrder()` -- **Configuration**: `kpt.dev/bfs-rendering` annotation in `pkg/kpt/api/kptfile/v1/types.go` \ No newline at end of file diff --git a/content/en/docs/porch/package-orchestration.md b/content/en/docs/porch/package-orchestration.md deleted file mode 100644 index 1a70ee53..00000000 --- a/content/en/docs/porch/package-orchestration.md +++ /dev/null @@ -1,467 +0,0 @@ ---- -title: "Package Orchestration" -type: docs -weight: 2 -description: ---- - -Customers who want to take advantage of the benefits of [Configuration as Data]({{% relref "/docs/porch/config-as-data.md" %}}) -can do so today using the [kpt](https://kpt.dev) CLI and the kpt function ecosystem, including its -[functions catalog](https://catalog.kpt.dev/). Package authoring is possible using a variety of -editors with [YAML](https://yaml.org/) support. That said, a UI experience of -what-you-see-is-what-you-get (WYSIWYG) package authoring which supports a broader package lifecycle, -including package authoring with *guardrails*, approval workflows, package deployment, and more, is -not yet available. - -The *Package Orchestration* (Porch) service is a part of the Nephio implementation of the -Configuration as Data approach. It offers an API and a CLI that enable you to build the UI -experience for supporting the configuration lifecycle. - -## Core concepts - -This section briefly describes core concepts of package orchestration: - -***Package***: A package is a collection of related configuration files containing configurations -of [KRM][krm] **resources**. Specifically, configuration packages are [kpt packages](https://kpt.dev/book/02-concepts/#packages). -Packages are sequentially ***versioned***. Multiple versions of the same package may exist in a -([repository](#package-versioning)). A package may have a link (URL) to an -***upstream package*** (a specific version) ([from which it was cloned](#package-relationships)) . Packages go through three lifecycle stages: ***Draft***, ***Proposed***, and ***Published***: - - * ***Draft***: The package is being created or edited. The contents of the package can be - modified; however, the package is not ready to be used (or deployed). - * ***Proposed***: The author of the package has proposed that the package be published. - * ***Published***: The changes to the package have been approved and the package is ready to be - used. 
Published packages can be deployed or cloned. - -***Repository***: The repository stores packages. [git][] and [OCI][oci] are two examples of a -([repository](#repositories)). A repository can be designated as a -***deployment repository***. *Published* packages in a deployment repository are considered to be -([deployment-ready](#deployment)). -***Functions***: Functions (specifically, [KRM functions][krm functions]) can be applied to -packages to mutate or validate the resources within them. Functions can be applied to a -package to create specific package mutations while editing a package draft. Functions can be added -to a package's Kptfile [pipeline][]. - -## Core components of the Configuration as Data (CAD) implementation - -The core implementation of Configuration as Data, or *CaD Core*, is a set of components and APIs -which collectively enable the following: - -* Registration of the repositories (Git, OCI) containing kpt packages and the discovery of packages. -* Management of package lifecycles. This includes the authoring, versioning, deletion, creation, -and mutations of a package draft, the process of proposing the package draft, and the publishing of -the approved package. -* Package lifecycle operations, such as the following: - - * The assisted or automated rollout of a package upgrade when a new version of the upstream - package version becomes available (the three-way merge). - * The rollback of a package to its previous version. - -* The deployment of the packages from the deployment repositories, and the observability of their -deployment status. -* A permission model that allows role-based access control (RBAC). - -### High-level architecture - -At the high level, the Core CaD functionality consists of the following components: - -* A generic (that is, not task-specific) package orchestration service implementing the following: - - * package repository management - * package discovery, authoring, and lifecycle management - -* The Porch CLI tool [porchctl]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md" %}}): this is a Git-native, -schema-aware, extensible client-side tool for managing KRM packages. -* A GitOps-based deployment mechanism (for example [configsync][]), which distributes and deploys -configurations, and provides observability of the status of the deployed resources. -* A task-specific UI supporting repository management, package discovery, authoring, and lifecycle. - -![CaD Core Architecture](/static/images/porch/CaD-Core-Architecture.svg) - -## CaD concepts elaborated - -The concepts that were briefly introduced in **High-level architecture** are elaborated in more -detail in this section. - -### Repositories - -Porch and [configsync][] currently integrate with [git][] repositories. There is an existing design -that adds OCI support to kpt. Initially, the Package Orchestration service will prioritize -integration with [git][]. Support for additional repository types may be added in the future, as -required. - -Requirements applicable to all repositories include the ability to store the packages and their -versions, and sufficient metadata associated with the packages to capture the following: - -* package dependency relationships (upstream - downstream) -* package lifecycle state (draft, proposed, published) -* package purpose (base package) -* customer-defined attributes (optional) - -At repository registration, the customers must be able to specify the details needed to store the -packages in appropriate locations in the repository. 
For example, registration of a Git repository -must accept a branch and a directory. - -{{% alert title="Note" color="primary" %}} - -A user role with sufficient permissions can register a package or a function repository, including -repositories containing functions authored by the customer, or by other providers. Since the -functions in the registered repositories become discoverable, customers must be aware of the -implications of registering function repositories and trust the contents thereof. - -{{% /alert %}} - -### Package versioning - -Packages are versioned sequentially. The requirements are as follows: - -* The ability to compare any two versions of a package as "newer than", "equal to", or "older than" - the other. -* The ability to support the automatic assignment of versions. -* The ability to support the [optimistic concurrency][optimistic-concurrency] of package changes - via version numbers. -* A simple model that easily supports automation. - -A simple integer sequence is used to represent the package versions. - -### Package relationships - -The Kpt packages support the concept of ***upstream***. When one package is cloned from another, -the new package, known as the ***downstream*** package, maintains an upstream link to the version -of the package from which it was cloned. If a new version of the upstream package becomes available, -then the upstream link can be used to update the downstream package. - -### Deployment - -The deployment mechanism is responsible for deploying the configuration packages from a repository -and affecting the live state. Because the configuration is stored in standard repositories (Git, -and in the future OCI), the deployment component is pluggable. By default, [Config Sync](https://cloud.google.com/kubernetes-engine/enterprise/config-sync/docs/overview) is the -deployment mechanism used by CaD Core implementation. However, other deployment mechanisms can be -also used. - -Some of the key attributes of the deployment mechanism and its integration within the CaD Core are -highlighted here: - -* _Published_ packages in a deployment repository are considered to be ready to be deployed. -* configsync supports the deployment of individual packages and whole repositories. For Git - specifically, that translates to a requirement to be able to specify the repository, - branch/tag/ref, and directory when instructing configsync to deploy a package. -* _Draft_ packages need to be identified in such a way that configsync can easily avoid deploying - them. -* configsync needs to be able to pin to specific versions of deployable packages, in order to - orchestrate rollouts and rollbacks. This means it must be possible to get a specific version of a - package. -* configsync needs to be able to discover when new versions are available for deployment. - -## Package Orchestration (Porch) - -Having established the context of the CaD Core components and the overall architecture, the -remainder of the document will focus on the Package Orchestration service, or **Porch** for short. - -The role of the Package Orchestration service among the CaD Core components covers the following -areas: - -* [Repository Management](#repository-management) -* [Package Discovery](#package-discovery) -* [Package Authoring](#package-authoring) and Lifecycle - -In the next sections we will expand on each of these areas. 
The term _client_ used in these -sections can be either a person interacting with the user interface, such as a web application or a -command-line tool, or an automated agent or process. - -### Repository management - -The repository management functionality of the Package Orchestration service enables the client to -do the following: - -* Register, unregister, and update the registration of the repositories, and discover registered - repositories. Git repository integration will be available first, with OCI and possibly more - delivered in the subsequent releases. -* Manage repository-wide upstream/downstream relationships, that is, designate the default upstream - repositories from which the packages will be cloned. -* Annotate the repositories with metadata, such as whether or not each repository contains - deployment-ready packages. Metadata can be application- or customer-specific. - -### Package discovery - -The package discovery functionality of the Package Orchestration service enables the client to do -the following: - -* Browse the packages in a repository. -* Discover the configuration packages in the registered repositories, and sort and/or filter them - based on the repository containing the package, package metadata, version, and package lifecycle - stage (draft, proposed, and published). -* Retrieve the resources and metadata of an individual package, including the latest version, or - any specific version or draft of a package, for the purpose of introspection of a single package, - or for comparison of the contents of multiple versions of a package or related packages. -* Enumerate the _upstream_ packages that are available for creating (cloning) a _downstream_ - package. -* Identify the downstream packages that need to be upgraded after a change has been made to an - upstream package. -* Identify all the deployment-ready packages in a deployment repository that are ready to be synced - to a deployment target by configsync. -* Identify new versions of packages in a deployment repository that can be rolled out to a - deployment target by configsync. - -### Package authoring - -The package authoring and lifecycle functionality of the package Orchestration service enables the -client to do the following: - -* Create a package _draft_ via one of the following means: - - * An empty draft from scratch (`porchctl rpkg init`). - * A clone of an upstream package (`porchctl rpkg clone`) from a registered upstream repository or - from another accessible, unregistered repository. - * Editing an existing package (`porchctl rpkg pull`). - * Rolling back or restoring a package to any of its previous versions - (`porchctl rpkg pull` of a previous version). - -* Push changes to a package _draft_. In general, mutations include adding, modifying, and deleting - any part of the package's contents. Specific examples include the following: - - * Adding, changing, or deleting package metadata (that is, some properties in the `Kptfile`). - * Adding, changing, or deleting resources in the package. - * Adding function mutators/validators to the package's pipeline. - * Adding, changing, or deleting sub-packages. - * Retrieving the contents of the package for arbitrary client-side mutations - (`porchctl rpkg pull`). - * Updating or replacing the package contents with new contents, for example, the results of - client-side mutations by a UI (`porchctl rpkg push`). 
- -* Rebase a package onto another upstream base package or onto a newer version of the same package - (to assist with conflict resolution during the process of publishing a draft package). - -* Get feedback during package authoring, and assistance in recovery from merge conflicts, invalid - package changes, or guardrail violations. - -* Propose that a _draft_ package be _published_. -* Apply arbitrary decision criteria, and by a manual or an automated action, approve or reject a - proposal for _draft_ package to be _published_. -* Perform bulk operations, such as the following: - - * Assisted/automated updates (upgrades and rollbacks) of groups of packages matching specific - criteria (for example, if a base package has new version or a specific base package version has - a vulnerability and needs to be rolled back). - * Proposed change validation (prevalidating changes that add a validator function to a base - package). - -* Delete an existing package. - -#### Authoring and latency - -An important aim of the Package Orchestration service is to support the building of task-specific -UIs. To deliver a low-latency user experience that is acceptable to UI interactions, the innermost -authoring loop depicted below requires the following: - -* high-performance access to the package store (loading or saving a package) with caching -* low-latency execution of mutations and transformations of the package contents -* low-latency [KRM function][krm functions] evaluation and package rendering (evaluation of a - package's function pipelines) - -![Inner Loop](/static/images/porch/Porch-Inner-Loop.svg) - -#### Authoring and access control - -A client can assign actors (for example, persons, service accounts, and so on) to roles that -determine which operations they are allowed to perform, in order to satisfy the requirements of the -basic roles. For example, only permitted roles can do the following: - -* Manipulate repository registration, and enforcement of repository-wide invariants and guardrails. -* Create a draft of a package and propose that the draft be published. -* Approve or reject a proposal to publish a draft package. -* Clone a package from a specific upstream repository. -* Perform bulk operations, such as rollout upgrade of downstream packages, including rollouts - across multiple downstream repositories. - -### Porch architecture - -The Package Orchestration (**Porch**) service is designed to be hosted in a -[Kubernetes](https://kubernetes.io/) cluster. - -The overall architecture is shown in the following figure. It also includes existing components, -such as the k8s apiserver and configsync. - -![Porch Architecture](/static/images/porch/Porch-Architecture.svg) - -In addition to satisfying the requirements highlighted above, the focus of the architecture was to -do the following: - -* Establish clear components and interfaces. -* Support a low-latency package authoring experience required by the UIs. - -The Porch architecture comprises three components: - -* the Porch server -* the function runner -* the CaD Library - -#### Porch server - -The Porch server is implemented as a [Kubernetes extension API server][apiserver]. The benefits of -using the Kubernetes extension API server are as follows: - -* A well-defined and familiar API style. -* The availability of generated clients. -* Integration with the existing Kubernetes ecosystem and tools, such as the `kubectl` CLI, - [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). 
-* The Kubernetes extension API server removes the need to open another network port to access a - separate endpoint running inside the k8s cluster. This is a clear advantage over Google Remote - Procedure Calls (GRPC), which was considered as an alternative approach. - -The resources implemented by Porch include the following: - -* `PackageRevision`: This represents the _metadata_ of the configuration package revision stored in - a _package_ repository. -* `PackageRevisionResources`: This represents the _contents_ of the package revision. - -{{% alert title="Note" color="primary"%}} - -Each configuration package revision is represented by a _pair_ of resources, each of which presents -a different view, or a [representation][] of the same underlying package revision. - -{{% /alert %}} - -Repository registration is supported by a `Repository` [custom resource][crds]. - -The **Porch server** itself comprises several key components, including the following: - -* The *Porch aggregated apiserver* - The *Porch aggregated apiserver* implements the integration into the main Kubernetes apiserver, - and directly serves the API requests for the `PackageRevision`, `PackageRevisionResources` - resources. -* The Package Orchestration *engine* - The Package Orchestration *engine* implements the package lifecycle operations, and the package - mutation workflows. -* The *CaD Library* - The *CaD Library* implements specific package manipulation algorithms, such as package rendering - (the evaluation of a package's function *pipeline*), the initialization of a new package, and so - on. The CaD Library is shared with `kpt`, where it likewise provides the core package - manipulation algorithms. -* The *package cache* - The *package cache* enables both local caching, as well as the abstract manipulation of packages - and their contents, irrespective of the underlying storage mechanism, such as Git, or OCI. -* The *repository adapters* for Git and OCI - The *repository adapters* for Git and OCI implement the specific logic of interacting with those types of package - repositories. -* The *function runtime* - The *function runtime* implements support for evaluating the [kpt functions][functions] and the - multitier cache of functions to support low-latency function evaluation. - -#### Function runner - -The **function runner** is a separate service that is responsible for evaluating the -[kpt functions][functions]. The function runner exposes a Google Remote Procedure Calls -([GRPC](https://grpc.io/)) endpoint, which enables the evaluation of a kpt function on the provided -configuration package. - -The GRPC technology was chosen for the function runner service because the -[requirements](#grpc-api) that informed the choice of the KRM API for the Package Orchestration -service do not apply. The function runner is an internal microservice, an implementation detail not -exposed to external callers. This makes GRPC particularly suitable. - -The function runner also maintains a cache of functions to support low-latency function evaluation. -It achieves this through two mechanisms that are available for the evaluation of a function. - -The **Executable Evaluation** approach executes the function within the pod runtime through a -shell-based invocation of the function binary, for which the function binaries are bundled inside -the function runner image itself. 
- -The **Pod Evaluation** approach is used when the invoked function is not available via the -Executable Evaluation approach, wherein the function runner pod starts the function pod that -corresponds to the invoked function, along with a front-end service. Once the pod and the service -are up and running, its exposed GRPC endpoint is invoked for function evaluation, passing the input -package. For this mechanism, the function runner reads the list of functions and their images -supplied via a configuration file at startup, and spawns function pods, along with a corresponding -front-end service for each configured function. These function pods and services are terminated -after a preconfigured period of inactivity (the default is 30 minutes) by the function runner and -are recreated on the next invocation. - -#### CaD Library - -The [kpt](https://kpt.dev/) CLI already implements foundational package manipulation algorithms, in -order to provide the command line user experience, including the following: - -* [kpt pkg init](https://kpt.dev/reference/cli/pkg/init/): this creates an empty, valid KRM package. -* [kpt pkg get](https://kpt.dev/reference/cli/pkg/get/): this creates a downstream package by - cloning an upstream package. It sets up the upstream reference of the downstream package. -* [kpt pkg update](https://kpt.dev/reference/cli/pkg/update/): this updates the downstream package - with changes from the new version of the upstream, three-way merge. -* [kpt fn eval](https://kpt.dev/reference/cli/fn/eval/): this evaluates a kpt function on a package. -* [kpt fn render](https://kpt.dev/reference/cli/fn/render/): this renders the package by executing - the function pipeline of the package and its nested packages. -* [kpt fn source](https://kpt.dev/reference/cli/fn/source/) and - [kpt fn sink](https://kpt.dev/reference/cli/fn/sink/): these read packages from a local disk as - a `ResourceList` and write the packages represented as a `ResourcesList` into the local disk. - -The same set of primitives form the building blocks of the package orchestration service. Further, -the Package Orchestration service combines these primitives into higher-level operations (for -example, package orchestrator renders the packages automatically on changes. Future versions will -support bulk operations, such as the upgrade of multiple packages, and so on). - -The implementation of the package manipulation primitives in the kpt was refactored (with the -initial refactoring completed, and more to be performed as needed), in order to do the following: - -* Create a reusable CaD library, usable by both the kpt CLI and the Package Orchestration service. -* Create abstractions for dependencies which differ between the CLI and Porch. Most notable are - the dependency on Docker for function evaluation, and the dependency on the local file system for - package rendering. - -Over time, the CaD Library will provide the package manipulation primitives, to perform the -following tasks: - -* Create a valid empty package (init). -* Update the package upstream pointers (get). -* Perform three-way merges (update). -* Render: using a core package rendering algorithm that uses a pluggable function evaluator, to - support the following: - - * Function evaluation via Docker (used by kpt CLI). - * Function evaluation via an RPC to a service or an appropriate function sandbox. - * High-performance evaluation of trusted, built-in functions without a sandbox. - -* Heal the configuration (restore comments after lossy transformation). 
- -Both the kpt CLI and Porch will consume the library. This approach will allow the leveraging of the -investment already made into the high-quality package manipulation primitives, and enable -functional parity between the kpt CLI and the Package Orchestration service. - -## User Guide - -The Porch User Guide can be found in a dedicated document, via this link: -[document](https://github.com/kptdev/kpt/blob/main/site/guides/porch-user-guide.md). - -## Open issues and questions - -### Deployment rollouts and orchestration - -__Not Yet Resolved__ - -Cross-cluster rollouts and orchestration of deployment activity. For example, a package deployed by -configsync in cluster A, and only on success, the same (or a different) package deployed by -configsync in cluster B. - -## Alternatives considered - -### GRPC API - -The use of Google Remote Procedure Calls ([GRPC]()) was considered for the Porch API. The primary -advantages of implementing Porch as an extension of the Kubernetes apiserver are as follows: - -* Customers would not have to open another port to their Kubernetes cluster and would be able to - reuse their existing infrastructure. -* Customers could likewise reuse the existing Kubernetes tooling ecosystem. - - -[krm]: https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/resource-management.md -[functions]: https://kpt.dev/book/02-concepts/03-functions -[krm functions]: https://github.com/kubernetes-sigs/kustomize/blob/master/cmd/config/docs/api-conventions/functions-spec.md -[pipeline]: https://kpt.dev/book/04-using-functions/01-declarative-function-execution -[Config Sync]: https://cloud.google.com/anthos-config-management/docs/config-sync-overview -[kpt]: https://kpt.dev/ -[git]: https://git-scm.org/ -[optimistic-concurrency]: https://en.wikipedia.org/wiki/Optimistic_concurrency_control -[apiserver]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/ -[representation]: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#differing-representations -[crds]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/ -[oci]: https://github.com/opencontainers/image-spec/blob/main/spec.md diff --git a/content/en/docs/porch/package-variant.md b/content/en/docs/porch/package-variant.md deleted file mode 100644 index 208e04df..00000000 --- a/content/en/docs/porch/package-variant.md +++ /dev/null @@ -1,1339 +0,0 @@ ---- -title: "Package Variant Controller" -type: docs -weight: 3 -description: ---- - -## Overview - -When deploying workloads across large fleets of clusters, it is often necessary to modify the -workload configuration for a specific cluster. Additionally, these workloads may evolve over time -with security or other patches that require updates. [Configuration as Data]({{% relref "/docs/porch/config-as-data.md" %}}) in -general, and [Package Orchestration]({{% relref "/docs/porch/package-orchestration.md" %}}) in particular, can assist in this. -However, they are still centered around a manual, one-by-one hydration and configuration of a -workload. - -This proposal introduces a number of concepts and a set of resources for automating the creation -and lifecycle management of the package variants. 
These are designed to address several different -dimensions of scalability: - -- the number of different workloads for a given cluster -- the number of clusters across which the workloads are deployed -- the different types or characteristics of the clusters -- the complexity of the organizations deploying the workloads -- changes to those workloads over time - -For further information, see the following links: - -- [Package Orchestration]({{% relref "/docs/porch/package-orchestration.md" %}}) -- [#3347](https://github.com/GoogleContainerTools/kpt/issues/3347) Bulk package creation -- [#3243](https://github.com/GoogleContainerTools/kpt/issues/3243) Support bulk package upgrades -- [#3488](https://github.com/GoogleContainerTools/kpt/issues/3488) Porch: BaseRevision controller aka Fan Out - controller - but more -- [Managing Package - Revisions](https://docs.google.com/document/d/1EzUUDxLm5jlEG9d47AQOxA2W6HmSWVjL1zqyIFkqV1I/edit?usp=sharing) -- [Porch UpstreamPolicy Resource - API](https://docs.google.com/document/d/1OxNon_1ri4YOqNtEQivBgeRzIPuX9sOyu-nYukjwN1Q/edit?usp=sharing&resourcekey=0-2nDYYH5Kw58IwCatA4uDQw) - -## Core concepts - -For this solution, the workloads are represented by packages. A package is a more general concept, -being an arbitrary bundle of resources, and is therefore sufficient to solve the problem that was -stated originally. - -The idea here is to introduce a *PackageVariant* resource that manages the derivation of a variant -of a package from the original source package, and to manage the evolution of that variant over -time. This effectively automates the human-centered process for variant creation that might be used -with *kpt*, and allows you to do the following: - -- Clone an upstream package locally. -- Make changes to the local package, setting values in the resources and executing KRM functions. -- Push the package to a new repository and tag it as a new version. - -Similarly, the *PackageVariant* can manage the process of updating a package when a new version of -the upstream package is published. In the human-centered workflow, a user uses the `kpt pkg update` -to pull in changes to their derivative package. When using a *PackageVariant* resource, the change -is made to the upstream specification in the resource, and the controller proposes a new draft -package reflecting the outcome of the `kpt pkg update`. - -Automating this process opens up the possibility of performing systematic changes that tie back to -the different dimensions of scalability. We can use data about the specific variant we are creating -to look up an additional context in the Porch cluster, and copy that information into the variant. -That context is a well-structured resource, not simply a set of key/value pairs. The KRM functions -within the package can interpret the resource, modifying other resources in the package accordingly. -The context can come from multiple sources that vary differently along those dimensions of -scalability. For example, one piece of information may vary by region, another by individual site, -another by cloud provider, and another based on whether we are deploying to development, staging, -or production. By using the resources in the Porch cluster as our input model, we can represent this -complexity in a manageable model that is reused across many packages, rather than scattered in -package-specific templates or key/value pairs without any structure. 
The KRM functions, also reused -across packages, but configured as needed for the specific package, are used to interpret the -resources within the package. This decouples the authoring of the packages, the creation of the -input model, and the deploy time use of that input model within the packages, thereby allowing -those activities to be performed by different teams or organizations. - -The mechanism described above is referred to as configuration injection. Configuration injection -enables the dynamic, context-aware creation of variants. Another way to think about it is as a -continuous reconciliation, much like other Kubernetes controllers. In this case, the inputs are a -parent package *P* and a context *C* (which may be a collection of many independent resources), -with the output being the derived package *D*. When a new version of *C* is created by updates to -in-cluster resources, we get a new revision of *D*, customized according to the updated context. -Similarly, the user (or an automation) can monitor for new versions of *P*. When a new version -arrives, the PackageVariant can be updated to point to that new version. This results in a newly -proposed draft *D*, updated to reflect the upstream changes. This will be explained in more -detail below. - -This proposal also introduces a way of “fanning out”, or creating multiple PackageVariant resources -declaratively based on a list or selector with the PackageVariantSet resource. This is combined with -the injection mechanism to enable the generation of large sets of variants that are specialized for -a particular target repository, cluster, or other resource. - -## Basic package cloning - -The *PackageVariant* resource controls the creation and lifecycle of a variant of a package. That -is, it defines the original (upstream) package, the new (downstream) package, and the changes, or -mutations, that need to be made to transform the upstream package into the downstream package. It -also allows the user to specify the policies around the adoption, deletion, and update of package -revisions that are under the control of the package variant controller. - -The clone operation is shown in *Figure 1*. - -| ![Figure 1: Basic package cloning](/static/images/porch/packagevariant-clone.png) | ![Legend](/static/images/porch/packagevariant-legend.png) | -| :---: | :---: | -| *Figure 1: Basic package cloning* | *Legend* | - -{{% alert title="Note" color="primary" %}} - -*Proposals* and *approvals* are not handled by the package variant controller. They are left to -other types of controller. The exception to this is the proposal to delete (there is no such thing -as a draft deletion). This is performed by the package variant controller, depending on the -specified deletion policy. - -{{% /alert %}} - -### PackageRevision metadata - -The package variant controller utilizes Porch APIs. This means that it is not just performing a -clone operation, but is also creating a Porch *PackageRevision* resource. In particular, this -resource can contain Kubernetes metadata that is not a part of the package, as stored in the -repository. - -Some of this metadata is necessary for the management of the *PackageRevision* by the package -variant controller, for example, the owner reference that indicates which *PackageVariant* created -the *PackageRevision*. This metadata is not under the user's control. 
However, the *PackageVariant* -resource does make the annotations and labels of the *PackageRevision* available as -values that the user may control during the creation of the *PackageRevision*. This can assist in -additional automation workflows. - -## Introducing variance - -Since cloning by itself is not particularly interesting, the *PackageVariant* resource also allows -you to control the various ways of mutating the original package to create the variant. - -### Package context[^porch17] - -Every *kpt* package that is fetched with `--for-deployment` contains a ConfigMap called -*kptfile.kpt.dev*. Analogously, when Porch creates a package in a deployment repository, it creates -a ConfigMap, if it does not already exist. *Kpt* (or Porch) automatically adds a key name to the -ConfigMap data, with the value of the package name. This ConfigMap can then be used as input to the -functions in the *kpt* function pipeline. - -This process also holds true for the package revisions created via the package variant controller. -Additionally, the author of the *PackageVariant* resource can specify additional key-value pairs to -insert into the package context, as shown in *Figure 2*. - -| ![Figure 2: Package context mutation](/static/images/porch/packagevariant-context.png) | -| :---: | -| *Figure 2: Package context mutation* | - -While this is convenient, it can easily be misused, leading to over-parameterization. The preferred -approach is configuration injection, as described below, since it allows inputs to adhere to a -well-defined, reusable schema, rather than simple key/value pairs. - -### Kptfile function pipeline editing[^porch18] - -In the manual workflow, one of the ways in which packages are edited is by running KRM functions -imperatively. The *PackageVariant* offers a similar capability, by allowing the user to add -functions to the beginning of the downstream package *Kptfile* mutators pipeline. These functions -then execute before the functions present in the upstream pipeline. This method is not exactly the -same as running functions imperatively, because they are also run in every subsequent execution of -the downstream package function pipeline. However, it can achieve the same goals. - -Consider, for example, an upstream package that includes a Namespace resource. In many -organizations, the deployer of the workload may not have the permissions to provision cluster-scoped -resources such as namespaces. This means that they would not be able to use this upstream package -without removing the Namespace resource (assuming that they only have access to a pipeline that -deploys with constrained permissions). By adding a function that removes Namespace resources, and -a call to set-namespace, they can take advantage of the upstream package. - -Similarly, the *Kptfile* pipeline editing feature provides an easy mechanism for the deployer to -create and set the namespace, if their downstream package application pipeline allows it, as seen in -*Figure 3*.[^setns] - -| ![Figure 3: KRM function pipeline editing](/static/images/porch/packagevariant-function.png) | -| :---: | -| *Figure 3: Kptfile function pipeline editing* | - -### Configuration injection[^porch18] - -Adding values to the package context or functions to the pipeline works for configurations that are -under the control of the creator of the *PackageVariant* resource. However, in more advanced use -cases, it may be necessary to specialize the package based on other contextual information. 
This -comes into play in particular when the user deploying the workload does not have direct control -over the context in which it is being deployed. For example, one part of the organization may manage -the infrastructure - that is, the cluster in which the workload is being deployed - and another part -the actual workload. It would be desirable to be able to pull in the inputs specified by the -infrastructure team automatically, based on the cluster to which the workload is deployed, or -possibly the region in which the cluster is deployed. - -To facilitate this, the package variant controller can "inject" configuration directly into the -package. This means it uses information specific to this instance of the package to look up a -resource in the Porch cluster and copy that information into the package. The package has to be -ready to receive this information. Therefore, there is a protocol that is used to facilitate this: - -- Packages may contain resources annotated with *kpt.dev/config-injection* -- These resources are often also *config.kubernetes.io/local-config* resources, as they are likely - to be used only by the local functions as input. However, this is not mandatory. -- The package variant controller looks for any resource in the Kubernetes cluster that matches the - Group, Version, and Kind of the package resource, and satisfies the injection selector. -- The package variant controller copies the specification field from the matching in-cluster - resource to the in-package resource, or the data field, in the case of a ConfigMap. - -| ![Figure 4: Configuration injection](/static/images/porch/packagevariant-config-injection.png) | -| :---: | -| *Figure 4: Configuration injection* | - -{{% alert title="Note" color="primary" %}} - -Because the data is being injected from the Kubernetes cluster, this data can also be monitored for -changes. For each resource that is injected, the package variant controller establishes a -Kubernetes “watch” on the resource (or on the collection of such resources). A change to that -resource results in a new draft package with the updated configuration injected. - -{{% /alert %}} - -There are a number of additional details that will be described in the detailed design below, along -with the specific API definition. - -## Lifecycle management - -### Upstream changes - -The package variant controller allows you to specify an upstream package revision to clone. -Alternatively, you can specify a floating tag[^notimplemented]. - -If you specify an upstream revision, then the downstream will not be changed unless the -*PackageVariant* resource itself is modified to point to a new revision. That is, the user must -edit the *PackageVariant* and change the upstream package reference. When that is done, the package -variant controller updates any existing draft package under its ownership by performing the -equivalent of a `kpt pkg update`. This updates the downstream so that it is based on the new -upstream revision. If a draft does not exist, then the package variant controller creates a new -draft based on the current published downstream, and applies the `kpt pkg update`. This updated -draft must then be proposed and approved, as with other package changes. - -If a floating tag is used, then explicit modification of the *PackageVariant* is not required. -Rather, when the floating tag is moved to a new tagged revision of the upstream package, the package -revision controller will notice and automatically propose an update to that revision. 
For example, -the upstream package author may designate three floating tags: stable, beta, and alpha. The upstream -package author can move these tags to specific revisions, and any *PackageVariant* resource tracking -them will propose updates to their downstream packages. - -### Adoption and deletion policies - -When a *PackageVariant* resource is created, it has a particular repository and package name as the -downstream. The adoption policy determines whether or not the package variant controller takes over -an existing package with that name, in that repository. - -Analogously, when a *PackageVariant* resource is deleted, a decision must be made about whether or -not to delete the downstream package. This is controlled by the deletion policy. - -## Fanning out of variant generation[^pvsimpl] - -When used with a single package, the package variant controller mostly helps to handle the time -dimension: that is, producing new versions of a package as the upstream changes, or as injected -resources are updated. It can also be useful for automating common, systematic changes that are -made when bringing an external package into an organization, or an organizational package into a -team repository. - -This is useful, but not particularly compelling by itself. More interesting is when we use the -*PackageVariant* as a primitive for automations that act on other dimensions of scale. This means -writing controllers that emit *PackageVariant* resources. For example, we can create a controller -that instantiates a *PackageVariant* for each developer in our organization, or we can create a -controller to manage the *PackageVariant*s across environments. The ability not only to clone a -package, but also to make systematic changes to that package, enables flexible automation. - -The workload controllers in Kubernetes are a useful analogy. In Kubernetes, there are different -workload controllers, such as Deployment, StatefulSet, and DaemonSet. These all ultimately result -in pods. However, the decisions as to what kind of pods to create, how to schedule them across the -nodes, how to configure the pods, and how to manage them as changes take place, differ with each -workload controller. Similarly, we can build different controllers to handle the different ways in -which we want to generate the *PackageRevisions*. The *PackageVariant* resource provides a -convenient primitive for all of these controllers, allowing them to leverage a range of well-defined -operations to mutate the packages as needed. - -A common requirement is the ability to generate multiple variants of a package based on a simple -list of an entity. Examples include the following: - -- Generating package variants to spin up development environments for each developer in an - organization. -- Instantiating the same package, with minor configuration changes, across a fleet of clusters. -- Instantiating the packages for each customer. - -The package variant set controller is designed to meet this common need. The controller consumes -and outputs the *PackageVariant* resources. The *PackageVariantSet* defines the following: - -- the upstream package -- the targeting criteria -- a template for generating one *PackageVariant* per target - -Three types of targeting are supported: - -- an explicit list of repositories and package names -- a label selector for the repository objects -- an arbitrary object selector - -The rules for generating a *PackageVariant* are associated with a list of targets using a template. 
-This template can have explicit values for various *PackageVariant* fields, or it can use -[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions to specify the -field values. - -*Figure 5* shows an example of the creation of *PackageVariant* resources based on the explicit -list of repositories. In this example, for the *cluster-01* and *cluster-02* repositories, no -template is defined for the resulting *PackageVariant*s. It simply takes the defaults. However, for -*cluster-03*, a template is defined to change the downstream package name to *bar*. - -| ![Figure 5: PackageVariantSet with the repository list](/static/images/porch/packagevariantset-target-list.png) | -| :---: | -| *Figure 5: PackageVariantSet with the repository list* | - -It is also possible to target the same package to a repository more than once, using different -names. This is useful if, for example, the package is used for provisioning namespaces and you -would like to provision multiple namespaces in the same cluster. It is also useful if a repository -is shared across multiple clusters. In *Figure 6*, two *PackageVariant* resources for creating the -*foo* package in the *cluster-01* repository are generated, one for each listed package name. Since -no *packageNames* field is listed for *cluster-02*, only one instance is created for that -repository. - -| ![Figure 6: PackageVariantSet with the package list](/static/images/porch/packagevariantset-target-list-with-packages.png) | -| :---: | -| *Figure 6: PackageVariantSet with the package list* | - -*Figure 7* shows an example that combines a repository label selector with configuration injectors -that differ according to the target. The template for the *PackageVariant* includes a CEL expression -for one of the injectors, so that the injection varies systematically according to the attributes of -the target. - -| ![Figure 7: PackageVariantSet with the repository selector](/static/images/porch/packagevariantset-target-repo-selector.png) | -| :---: | -| *Figure 7: PackageVariantSet with the repository selector* | - -## Detailed design - -### PackageVariant API - -The Go types below define the *PackageVariantSpec*. - -```go -type PackageVariantSpec struct { - Upstream *Upstream `json:"upstream,omitempty"` - Downstream *Downstream `json:"downstream,omitempty"` - - AdoptionPolicy AdoptionPolicy `json:"adoptionPolicy,omitempty"` - DeletionPolicy DeletionPolicy `json:"deletionPolicy,omitempty"` - - Labels map[string]string `json:"labels,omitempty"` - Annotations map[string]string `json:"annotations,omitempty"` - - PackageContext *PackageContext `json:"packageContext,omitempty"` - Pipeline *kptfilev1.Pipeline `json:"pipeline,omitempty"` - Injectors []InjectionSelector `json:"injectors,omitempty"` -} - -type Upstream struct { - Repo string `json:"repo,omitempty"` - Package string `json:"package,omitempty"` - Revision string `json:"revision,omitempty"` -} - -type Downstream struct { - Repo string `json:"repo,omitempty"` - Package string `json:"package,omitempty"` -} - -type PackageContext struct { - Data map[string]string `json:"data,omitempty"` - RemoveKeys []string `json:"removeKeys,omitempty"` -} - -type InjectionSelector struct { - Group *string `json:"group,omitempty"` - Version *string `json:"version,omitempty"` - Kind *string `json:"kind,omitempty"` - Name string `json:"name"` -} - -``` - -#### Basic specification fields - -The Upstream and Downstream fields specify the source package, and the destination repository and -package name. 
The Repo fields refer to the names of the Porch Repository resources in the same -namespace as the *PackageVariant* resource. The Downstream field does not contain a revision, -because the package variant controller only creates the draft packages. The revision of the eventual *PackageRevision* resource is determined by Porch at the time of approval. - -The Labels and Annotations fields list the metadata to include in the created *PackageRevision*. -These values are set only at the time a draft package is created. They are ignored for subsequent -operations, even if the *PackageVariant* itself has been modified. This means users are free to -change these values on the *PackageRevision*. The package variant controller will not touch them -again. - -The AdoptionPolicy controls how the package variant controller behaves if it finds an existing -*PackageRevision* draft matching the Downstream field. If the status of the AdoptionPolicy is -*adoptExisting*, then the package variant controller takes ownership of the draft, associating it -with this *PackageVariant*. This means that it will begin to reconcile the draft, as if it had -created it in the first place. If the status of the AdoptionPolicy is *adoptNone* (this is the -default setting), then the package variant controller simply ignores any matching drafts that were -not created by the controller. - -The DeletionPolicy controls how the package variant controller behaves with respect to the -*PackageRevisions* that package variant controller created when the *PackageVariant* resource itself -was deleted. The *delete* value (the default value) deletes the *PackageRevision*, potentially -removing it from a running cluster, if the downstream package has been deployed. The *orphan* value -removes the owner references and leaves the *PackageRevisions* in place. - -#### Package context injection - -*PackageVariant* resource authors may specify key-value pairs in the spec.packageContext.data field -of the resource. These key-value pairs are automatically added to the data of the *kptfile.kpt.dev* -ConfigMap, if it exists. - -Specifying the key name is invalid and must fail the validation of the *PackageVariant*. This key -is reserved for *kpt* or Porch to set to the package name. Similarly, the package-path is reserved -and will result in an error. - -The spec.packageContext.removeKeys field can also be used to specify a list of keys that the package -variant controller should remove from the data field of the *kptfile.kpt.dev* ConfigMap. - -When creating or updating a package, the package variant controller ensures the following: - -- The *kptfile.kpt.dev* ConfigMap exists. If it does not exist, then the package variant controller - will fail the ConfigMap. -- All of the key-value pairs in the spec.packageContext.data exist in the data field of the - ConfigMap. -- None of the keys listed in spec.packageContext.removeKeys exists in the ConfigMap. - -{{% alert title="Note" color="primary" %}} - -If a user adds a key via the *PackageVariant*, then changes the *PackageVariant* to not add that key -anymore, then it will not be removed automatically, unless the user also lists the key in the -removeKeys list. This avoids the need to track which keys were added by the *PackageVariant*. - -Similarly, if a user manually adds a key in the downstream that is also listed in the removeKeys -field, then the package variant controller will remove that key the next time it needs to update -the downstream package. 
There will be no attempt to coordinate “ownership” of these keys. - -{{% /alert %}} - -If, for some reason, the controller cannot modify the ConfigMap, then this is considered to be an -error and will prevent the generation of the draft. This will result in the Ready condition being -set to *False*. - -#### Editing the Kptfile function pipeline - -The *PackageVariant* resource creators may specify a list of KRM functions to add to the beginning -of the *Kptfile's* pipeline. These functions are listed in the spec.pipeline field, which is a -[Pipeline](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L236), just as in the *Kptfile*. The user can therefore prepend both validators -and mutators. - -Functions added in this way are always added to the *beginning* of the *Kptfile* pipeline. To enable -the management of the list on subsequent reconciliations, functions added by the package variant -controller use the Name field of the -[Function](https://github.com/GoogleContainerTools/kpt/blob/cf1f326486214f6b4469d8432287a2fa705b48f5/pkg/api/kptfile/v1/types.go#L283). In the *Kptfile*, each function is named as the dot-delimited -concatenation of the *PackageVariant*, the name of the *PackageVariant* resource, the function name -as specified in the pipeline of the *PackageVariant* resource (if present), and the positional -location of the function in the array. - -For example, if the *PackageVariant* resource contains the following: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: my-pv -spec: - ... - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: my-func - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo -``` - -then the resulting *Kptfile* will have the following two entries prepended to its mutators list: - -```yaml - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.1 - configMap: - namespace: my-ns - name: PackageVariant.my-pv.my-func.0 - - image: gcr.io/kpt-fn/set-labels:v0.1 - configMap: - app: foo - name: PackageVariant.my-pv..1 -``` - -This allows the controller, during subsequent reconciliations, to identify the functions within its -control, remove them all, and add them again, based on its updated content. Including the -*PackageVariant* name enables chains of *PackageVariants* to add functions, as long as the user is -careful about their choice of resource names and avoids conflicts. - -If, for some reason, the controller cannot modify the pipeline, then this is considered to be an -error and should prevent the generation of the draft. This will result in the Ready condition being -set to *False*. - -#### Configuration injection details - -As described [above](#configuration-injection), configuration injection is a process whereby -in-package resources are matched to in-cluster resources, and the specifications of the in-cluster -resources are copied to the in-package resource. - -Configuration injection is controlled by a combination of in-package resources with annotations, and -injectors (also known as *injection selectors*) defined on the *PackageVariant* resource. Package -authors control the injection points they allow in their packages, by flagging specific resources as -*injection points* with an annotation. Creators of the *PackageVariant* resource specify how to map -in-cluster resources to those injection points using the injection selectors. 
Injection selectors -are defined in the spec.injectors field of the *PackageVariant*. This field is an ordered array of -structs containing a group, version, kind (GVK) tuple as separate fields, and a name. Only the name -is required. To identify a match, all fields present must match the in-cluster object, and all *GVK* -fields present must match the in-package resource. In general, the name will not match the -in-package resource. This is discussed in more detail below. - -The annotations, along with the GVK of the annotated resource, allow a package to “advertise” the -injections it can accept and understand. These injection points effectively form a configuration API -for the package. The injection selectors provide a way for the *PackageVariant* author to specify -the inputs for those APIs from the possible values in the management cluster. If we define the APIs -carefully, they can be used across many packages. Since they are KRM resources, we can apply -versioning and schema validation to them as well. This creates a more maintainable, automatable set -of APIs for package customization than simple key/value pairs. - -As an example, we can define a GVK that contains service endpoints that many applications use. In -each application package, we then include an instance of the resource. We can call this resource, -for example, *service-endpoints*. We then configure a function to propagate the values from this -resource to other resources within our package. As those endpoints may vary by region, we can create -in our Porch cluster an instance of this GVK for each region: *useast1-service-endpoints*, *useast2-service-endpoints*, *uswest1-service-endpoints*, and so on. When we instantiate the -*PackageVariant* for a cluster, we want to inject the resource corresponding to the region in which -the cluster exists. Therefore, for each cluster we will create a *PackageVariant* resource pointing -to the upstream package, but with injection selector name values that are specific to the region for -that cluster. - -It is important to understand that the name of the in-package resource and that of the in-cluster -resource need not match. In fact, it would be an unusual coincidence if they did match. The names in -the package are the same across the *PackageVariants* using that upstream, but we want to inject -different resources for each *PackageVariant*. In addition, we do not want to change the name in the -package, because it likely has meaning within the package and will be used by the functions in the -package. Also, different owners control the names of the in-package and in-cluster resources. The -names in the package are in the control of the package author. The names in the cluster are in the -control of whomever populates the cluster (for example, an infrastructure team). The selector is the -glue between them, and is in control of the *PackageVariant* resource creator. - -The GVK, however, has to be the same for the in-package resource and the in-cluster resource. This -is because the GVK tells us the API schema for the resource. Also, the namespace of the in-cluster -object needs to be the same as that of the *PackageVariant* resource. Otherwise, we could leak -resources from those namespaces to which our *PackageVariant* user does not have access. - -With this in mind, the injection process works as follows: - -1. 
The controller examines all the in-package resources, looking for those that have an annotation - named *kpt.dev/config-injection*, with either of the following values: - - *required* - - *optional* - These are called injection points. It is the responsibility of the package author to define these - injection points, and to specify which are required and which are optional. Optional injection - points are a way of specifying default values. -2. For each injection point, a condition is created *in the downstream PackageRevision*, with the - ConditionType set to the dot-delimited concatenation of the config.injection, with the in-package - resource kind and name, and the value set to *False*. - - {{% alert title="Note" color="primary" %}} - - Since the package author controls the name of the resource, the kind and the name are sufficient - to identify the injection point. This ConditionType is called the "injection point - ConditionType". - - {{% /alert %}} - -3. For each required injection point, the injection point ConditionType is added to the - *PackageRevision* readinessGates by the package variant controller. The ConditionTypes of the - optional injection points must not be added to the readinessGates by the package variant - controller. However, other actors may do so at a later date, and the package variant controller - should not remove them on subsequent reconciliations. Also, this relies on the readinessGates - gating publishing the package to a *deployment* repository, but not gating publishing to a - blueprint repository. -4. The injection processing proceeds as follows. For each injection point, the following is the - case: - - - The controller identifies all in-cluster objects in the same namespace as the *PackageVariant* - resource, with the GVK matching the injection point (the in-package resource). If the - controller is unable to load these objects (for example, there are none and the CRD is not - installed), then the injection point ConditionType will be set to *False*, with a message - indicating the error. Processing then proceeds to the next injection point. - - {{% alert title="Note" color="primary" %}} - - For optional injection, this may be an acceptable outcome. Therefore, it does not interfere - with the overall generation of the draft. - - {{% /alert %}} - - - The controller looks through the list of injection selectors in order and checks if any of the - in-cluster objects match the selector. If there is an in-cluster object that matches, then that - in-cluster object is selected and processing of the list of injection selectors ceases. - - {{% alert title="Note" color="primary" %}} - - The namespace is set according to the *PackageVariant* resource. The GVK is set according to - the in-package resource. Each selector requires a name. Therefore, one match at most is - possible for any given selector. - - Additionally, *all fields present in the selector* must match the in-cluster resource. Only - the *GVK fields present in the selector* must match the in-package resource. - - {{% /alert %}} - - - If no in-cluster object is selected, then the injection point ConditionType is set to *False*, - with a message that no matching in-cluster resource was found. Processing proceeds to the next - injection point. - - - If a matching in-cluster object is selected, then it is injected as follows: - - - For the ConfigMap resources, the data field from the in-cluster resource is copied to the - data field of the in-package resource (the injection point), overwriting it. 
- - For the other resource types, the specification field from the in-cluster resource is copied - to the specification field of the in-package resource (the injection point), overwriting it. - - An annotation with the name *kpt.dev/injected-resource-name* and the value set to the name - of the in-cluster resource is added (or overwritten) in the in-package resource. - -If, for some reason, the overall injection cannot be completed, or if either of the problems set -out below exists in the upstream package, then it is considered to be an error and should prevent -the generation of the draft. The two possible problems are the following: - - - There is a resource annotated as an injection point which, however, has an invalid annotation - value (that is, a value other than *required* or *optional*). - - There are ambiguous condition types, due to conflicting GVK and name values. If this is the - case, then these must be disambiguated in the upstream package. - -This results in the Ready condition being set to *False*. - -{{% alert title="Note" color="primary" %}} - -Whether or not all the required injection points are fulfilled does not affect the *PackageVariant* -conditions. It only affects the *PackageRevision* conditions. - -{{% /alert %}} - -**A Further note on selectors** - -By allowing the use, and not just name, of the GVK in the selector, more precision in the selection -is enabled. This is a way to constrain the injections that are performed. That is, if the package -has 10 different objects with a config-injection annotation, then the *PackageVariant* could say it -only wants to replace certain GVKs, thereby allowing better control. - -Consider, for example, if the cluster contains the following resources: - -- GVK1 foo -- GVK1 bar -- GVK2 foo -- GVK2 bar - -If we could define injection selectors based only on their names, it would be impossible to ever -inject one GVK with *foo* and another with *bar*. Instead, by using the GVK, we can accomplish this -with a list of selectors, such as the following: - - - GVK1 foo - - GVK2 bar - -That said, often a name is sufficiently unique when combined with the in-package resource GVK. -Therefore, making the selector GVK optional is more convenient. This allows a single injector to -apply to multiple injection points with different GVKs. - -#### Order of mutations - -During creation, the first step the controller takes is to clone the upstream package to create the -downstream package. - -For the update, first note that changes to the downstream *PackageRevision* can be triggered for the -following reasons: - -1. The *PackageVariant* resource is updated. This could change any of the options for introducing - variance, or could also change the upstream package revision referenced. -2. A new revision of the upstream package has been selected, due to a floating tag change, or due - to a force retagging of the upstream. -3. An injected in-cluster object has been updated. - -The downstream *PackageRevision* may have been updated by humans or other automation actors since -creation. Therefore, we cannot simply recreate the downstream *PackageRevision* from scratch when a -change occurs. Instead, the controller must maintain the later edits by performing the equivalent -of a `kpt pkg update`, in the case of changes to the upstream, for any reason. Any other changes -require a reapplication of the *PackageVariant* functionality. 
With this in mind, we can see that -the controller performs mutations on the downstream package in the following order, for both -creation and update: - -1. Create (via clone) or update (via `kpt pkg update` equivalent): - - - This is carried out by the Porch server, not directly by the package variant controller. - - This means that Porch runs the *Kptfile* pipeline after clone or update. - -2. The package variant controller applies configured mutations: - - - Package context injections - - *Kptfile* KRM function pipeline additions/changes - - Config injection - -3. The package variant controller saves the *PackageRevision* and the *PackageRevisionResources*: - - - The Porch server executes the *Kptfile* pipeline. - -The package variant controller mutations edit the resources (including the *Kptfile*) according to -the contents of the *PackageVariant* and the injected in-cluster resources. However, they cannot -affect one another. The results of these mutations throughout the rest of the package are manifested -by the execution of the *Kptfile* pipeline during the *save* operation. - -#### PackageVariant status - -The PackageVariant sets the following status conditions: - - - **Stalled** - The PackageVariant sets this condition to *True* if there has been a failure that likely requires - intervention by the user. - - **Ready** - The PackageVariant sets this condition to *True* if the last reconciliation has successfully - produced an up-to-date draft. - -The *PackageVariant* resource also contains a DownstreamTargets field. This field contains a list of -downstream *Draft* and *Proposed* *PackageRevisions* owned by this *PackageVariant* resource, or the -latest published *PackageRevision*, if there are none in the *Draft* or *Proposed* state. Typically, -there is only a single draft, but the use of the *adopt* value for the AdoptionPolicy could result -in multiple drafts being owned by the same *PackageVariant*. - -### PackageVariantSet API[^pvsimpl] - -The Go types below define the `PackageVariantSetSpec`. - -```go -// PackageVariantSetSpec defines the desired state of PackageVariantSet -type PackageVariantSetSpec struct { - Upstream *pkgvarapi.Upstream `json:"upstream,omitempty"` - Targets []Target `json:"targets,omitempty"` -} - -type Target struct { - // Exactly one of Repositories, RepositorySeletor, and ObjectSelector must be - // populated - // option 1: an explicit repositories and package names - Repositories []RepositoryTarget `json:"repositories,omitempty"` - - // option 2: a label selector against a set of repositories - RepositorySelector *metav1.LabelSelector `json:"repositorySelector,omitempty"` - - // option 3: a selector against a set of arbitrary objects - ObjectSelector *ObjectSelector `json:"objectSelector,omitempty"` - - // Template specifies how to generate a PackageVariant from a target - Template *PackageVariantTemplate `json:"template,omitempty"` -} -``` - -At the highest level, a *PackageVariantSet* is just an upstream and a list of targets. For each -target, there is a set of criteria for generating a list, and a set of rules (a template) for -creating a *PackageVariant* from each list entry. - -Since the template is optional, let us start with describing the different types of targets, and how -the criteria in each target is used to generate a list that seeds the *PackageVariant* resources. - -The target structure must include one of three different ways of generating the list. 
The first is -a simple list of repositories and package names for each of these repositories[^repo-pkg-expr]. The -package name list is required for uses cases in which you want to repeatedly instantiate the same -package in a single repository. For example, if a repository represents the contents of a cluster, -you may want to instantiate a namespace package once for each namespace, with a name matching the -namespace. - -The following example shows how to use the repositories field: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositories: - - name: cluster-01 - - name: cluster-02 - - name: cluster-03 - packageNames: - - foo-a - - foo-b - - foo-c - - name: cluster-04 - packageNames: - - foo-a - - foo-b -``` - -In the following case, the *PackageVariant* resources are created for each of the pairs of -downstream repositories and package names: - -| Repository | Package name | -| ---------- | ------------ | -| cluster-01 | foo | -| cluster-02 | foo | -| cluster-03 | foo-a | -| cluster-03 | foo-b | -| cluster-03 | foo-c | -| cluster-04 | foo-a | -| cluster-04 | foo-b | - -All of the *PackageVariants* in the above list have the same upstream. - -The second criteria targeting is via a label selector against the Porch repository objects, along -with a list of package names. These packages are instantiated in each matching repository. As in the -first example, not listing a package name defaults to one package, with the same name as the -upstream package. Suppose, for example, we have the following four repositories defined in our Porch -cluster: - -| Repository | Labels | -| ---------- | ------------------------------------- | -| cluster-01 | region=useast1, env=prod, org=hr | -| cluster-02 | region=uswest1, env=prod, org=finance | -| cluster-03 | region=useast2, env=prod, org=hr | -| cluster-04 | region=uswest1, env=prod, org=hr | - -If we create a *PackageVariantSet* with the following specificattion: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositorySelector: - matchLabels: - env: prod - org: hr - - repositorySelector: - matchLabels: - region: uswest1 - packageNames: - - foo-a - - foo-b - - foo-c -``` - -then the *PackageVariant* resources will be created with the following repository and package names: - -| Repository | Package name | -| ---------- | ------------ | -| cluster-01 | foo | -| cluster-03 | foo | -| cluster-04 | foo | -| cluster-02 | foo-a | -| cluster-02 | foo-b | -| cluster-02 | foo-c | -| cluster-04 | foo-a | -| cluster-04 | foo-b | -| cluster-04 | foo-c | - -The third possibility allows the use of *arbitrary* resources in the Porch cluster as targeting -criteria. The objectSelector looks like this: - -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - objectSelector: - apiVersion: krm-platform.bigco.com/v1 - kind: Team - matchLabels: - org: hr - role: dev -``` - -The object selector works in the same way as the repository selector - in fact, the repository -selector is equivalent to the object selector, with the apiVersion and kind values set to point to -the Porch repository resources. That is, the repository name comes from the object name, and the -package names come from the listed package names. In the description of the template, we will see -how to derive different repository names from the objects. 
- -#### PackageVariant template - -As discussed earlier, the list entries generated by the target criteria result in *PackageVariant* -entries. If no template is specified, then the *PackageVariant* default is used, along with the -downstream repository name and the package name, as described in the previous section. The template -allows the user to have control over all the values in the resulting *PackageVariant*. The template -API is shown below. - -```go -type PackageVariantTemplate struct { - // Downstream allows overriding the default downstream package and repository name - // +optional - Downstream *DownstreamTemplate `json:"downstream,omitempty"` - - // AdoptionPolicy allows overriding the PackageVariant adoption policy - // +optional - AdoptionPolicy *pkgvarapi.AdoptionPolicy `json:"adoptionPolicy,omitempty"` - - // DeletionPolicy allows overriding the PackageVariant deletion policy - // +optional - DeletionPolicy *pkgvarapi.DeletionPolicy `json:"deletionPolicy,omitempty"` - - // Labels allows specifying the spec.Labels field of the generated PackageVariant - // +optional - Labels map[string]string `json:"labels,omitempty"` - - // LabelsExprs allows specifying the spec.Labels field of the generated PackageVariant - // using CEL to dynamically create the keys and values. Entries in this field take precedent over - // those with the same keys that are present in Labels. - // +optional - LabelExprs []MapExpr `json:"labelExprs,omitempty"` - - // Annotations allows specifying the spec.Annotations field of the generated PackageVariant - // +optional - Annotations map[string]string `json:"annotations,omitempty"` - - // AnnotationsExprs allows specifying the spec.Annotations field of the generated PackageVariant - // using CEL to dynamically create the keys and values. Entries in this field take precedent over - // those with the same keys that are present in Annotations. - // +optional - AnnotationExprs []MapExpr `json:"annotationExprs,omitempty"` - - // PackageContext allows specifying the spec.PackageContext field of the generated PackageVariant - // +optional - PackageContext *PackageContextTemplate `json:"packageContext,omitempty"` - - // Pipeline allows specifying the spec.Pipeline field of the generated PackageVariant - // +optional - Pipeline *PipelineTemplate `json:"pipeline,omitempty"` - - // Injectors allows specifying the spec.Injectors field of the generated PackageVariant - // +optional - Injectors []InjectionSelectorTemplate `json:"injectors,omitempty"` -} - -// DownstreamTemplate is used to calculate the downstream field of the resulting -// package variants. Only one of Repo and RepoExpr may be specified; -// similarly only one of Package and PackageExpr may be specified. -type DownstreamTemplate struct { - Repo *string `json:"repo,omitempty"` - Package *string `json:"package,omitempty"` - RepoExpr *string `json:"repoExpr,omitempty"` - PackageExpr *string `json:"packageExpr,omitempty"` -} - -// PackageContextTemplate is used to calculate the packageContext field of the -// resulting package variants. The plain fields and Exprs fields will be -// merged, with the Exprs fields taking precedence. -type PackageContextTemplate struct { - Data map[string]string `json:"data,omitempty"` - RemoveKeys []string `json:"removeKeys,omitempty"` - DataExprs []MapExpr `json:"dataExprs,omitempty"` - RemoveKeyExprs []string `json:"removeKeyExprs,omitempty"` -} - -// InjectionSelectorTemplate is used to calculate the injectors field of the -// resulting package variants. 
Exactly one of the Name and NameExpr fields must -// be specified. The other fields are optional. -type InjectionSelectorTemplate struct { - Group *string `json:"group,omitempty"` - Version *string `json:"version,omitempty"` - Kind *string `json:"kind,omitempty"` - Name *string `json:"name,omitempty"` - - NameExpr *string `json:"nameExpr,omitempty"` -} - -// MapExpr is used for various fields to calculate map entries. Only one of -// Key and KeyExpr may be specified; similarly only on of Value and ValueExpr -// may be specified. -type MapExpr struct { - Key *string `json:"key,omitempty"` - Value *string `json:"value,omitempty"` - KeyExpr *string `json:"keyExpr,omitempty"` - ValueExpr *string `json:"valueExpr,omitempty"` -} - -// PipelineTemplate is used to calculate the pipeline field of the resulting -// package variants. -type PipelineTemplate struct { - // Validators is used to caculate the pipeline.validators field of the - // resulting package variants. - // +optional - Validators []FunctionTemplate `json:"validators,omitempty"` - - // Mutators is used to caculate the pipeline.mutators field of the - // resulting package variants. - // +optional - Mutators []FunctionTemplate `json:"mutators,omitempty"` -} - -// FunctionTemplate is used in generating KRM function pipeline entries; that -// is, it is used to generate Kptfile Function objects. -type FunctionTemplate struct { - kptfilev1.Function `json:",inline"` - - // ConfigMapExprs allows use of CEL to dynamically create the keys and values in the - // function config ConfigMap. Entries in this field take precedent over those with - // the same keys that are present in ConfigMap. - // +optional - ConfigMapExprs []MapExpr `json:"configMapExprs,omitempty"` -} -``` - -To make this complex structure more comprehensible, the first thing to notice is that many fields -have a plain version and an Expr version. The plain version is used when the value is static across -all the *PackageVariants*. The Expr version is used when the value needs to vary across the -*PackageVariants*. - -Let us consider a simple example. Suppose we have a package for provisioning namespaces that is -called *base-ns*. We would like to instantiate this several times in the *cluster-01* repository. -We could do this with the following *PackageVariantSet*: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - targets: - - repositories: - - name: cluster-01 - packageNames: - - ns-1 - - ns-2 - - ns-3 -``` - -This will produce three *PackageVariant* resources with the same upstream, all with the same -downstream repository, and each with a different downstream package name. 
If we also want to set -some labels identically across the packages, we can do this with the template.labels field: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha2 -kind: PackageVariantSet -metadata: - namespace: default - name: example -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - targets: - - repositories: - - name: cluster-01 - packageNames: - - ns-1 - - ns-2 - - ns-3 - template: - labels: - package-type: namespace - org: hr -``` - -The resulting *PackageVariant* resources include labels in their specification, and are identical, -apart from their names and the downstream.package: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaaa -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - downstream: - repo: cluster-01 - package: ns-1 - labels: - package-type: namespace - org: hr ---- -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaab -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - downstream: - repo: cluster-01 - package: ns-2 - labels: - package-type: namespace - org: hr ---- - -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaac -spec: - upstream: - repo: platform-catalog - package: base-ns - revision: v1 - downstream: - repo: cluster-01 - package: ns-3 - labels: - package-type: namespace - org: hr -``` - -When using other targeting means, the use of the Expr fields becomes more probable, since we have -more possible sources for the different field values. The Expr values are all -[Common Expression Language (CEL)](https://github.com/google/cel-go) expressions, rather than static -values. This allows the user to construct values based on the various fields of the targets. -Consider again the RepositorySelector example, where we have these repositories in the cluster. - -| Repository | Labels | -| ---------- | ------------------------------------- | -| cluster-01 | region=useast1, env=prod, org=hr | -| cluster-02 | region=uswest1, env=prod, org=finance | -| cluster-03 | region=useast2, env=prod, org=hr | -| cluster-04 | region=uswest1, env=prod, org=hr | - -If we create a *PackageVariantSet* with the following specification, then we can use the Expr fields -to add labels to the *PackageVariantSpecs* (and therefore to the resulting *PackageRevisions* later) -that vary according to the cluster. We can also use this to diversify the injectors defined for each -*PackageVariant*, resulting in each *PackageRevision* having different resources injected. The -following specification results in three *PackageVariant* resources, one for each repository, with -the *env=prod* and *org=hr* labels. 
- -```yaml -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - targets: - - repositorySelector: - matchLabels: - env: prod - org: hr - template: - labelExprs: - key: org - valueExpr: "repository.labels['org']" - injectorExprs: - - nameExpr: "repository.labels['region'] + '-endpoints'" -``` - -The labels and injectors fields of the *PackageVariantSpec* are different for each of the -*PackageVariants*, as determined by the use of the Expr fields in the template, as shown here: - -```yaml -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaaa -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - downstream: - repo: cluster-01 - package: foo - labels: - org: hr - injectors: - name: useast1-endpoints ---- -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaab -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - downstream: - repo: cluster-03 - package: foo - labels: - org: hr - injectors: - name: useast2-endpoints ---- -apiVersion: config.porch.kpt.dev/v1alpha1 -kind: PackageVariant -metadata: - namespace: default - name: example-aaac -spec: - upstream: - repo: example-repo - package: foo - revision: v1 - downstream: - repo: cluster-04 - package: foo - labels: - org: hr - injectors: - name: uswest1-endpoints -``` - -Since the injectors are different for each *PackageVariant*, each of the resulting -*PackageRevisions* has different resources injected. - -When CEL expressions are evaluated, they have an environment associated with them. That is, there -are certain objects that are accessible within the CEL expression. For CEL expressions used in the *PackageVariantSet* template field, the following variables are available: - -| CEL variable | Variable contents | -| -------------- | ------------------------------------------------------------ | -| repoDefault | The default repository name based on the targeting criteria. | -| packageDefault | The default package name based on the targeting criteria. | -| upstream | The upstream *PackageRevision*. | -| repository | The downstream repository. | -| target | The target object (details vary. See below). | - -There is one expression that is an exception to the above table. Since the repository value -corresponds to the downstream repository, we must first evaluate the downstream.repoExpr expression -to find that repository. Therefore, for this expression only, *repository* is not a valid variable. - -There is one other variable that is available across all the CEL expressions: the target variable. -This variable has a meaning that varies depending on the type of target, as follows: - -| Target type | Target variable contents contents | -| ----------------------------------------------------------------------------- | -| Repo/package list | A struct that has two fields: repo and package, as with | -| | the repoDefault and the packageDefault values. | -| Repository selector | The repository selected by the selector. Although not | -| | recommended, this can be different from the repository | -| | value, which can be altered with the downstream.repo or | -| | the downstream.repoExpr. | -| Object selector | The Object selected by the selector. | - -For the various resource variables - upstream, repository, and target - arbitrary access to all the -fields of the object could lead to security concerns. 
Therefore, only a subset of the data is -available for use in CEL expressions, specifically, the following fields: name, namespace, labels, -and annotations. - -Given the minor quirk with the repoExpr, it may be helpful to state the processing flow for the -template evaluation: - -1. The upstream *PackageRevision* is loaded. It must be in the same namespace as the - *PackageVariantSet*[^multi-ns-reg]. -2. The targets are determined. -3. For each target, the following is the case: - - 1. The CEL environment is prepared with repoDefault, packageDefault, upstream, and the target - variables. - 2. The downstream repository is determined and loaded, as follows: - - - If present, the downstream.repoExpr is evaluated using the CEL environment. The result is - used as the downstream repository name. - - If the downstream.repo is set, then this is used as the downstream repository name. - - If neither the downstream.repoExpr nor the downstream.repo is present, then the default - repository name, based on the target, is used (that is, the same value as the repoDefault - variable). - - The resulting downstream repository name is used to load the corresponding repository - object in the same namespace as the *PackageVariantSet*. - - 3. The downstream repository is added to the CEL environment. - 4. All other CEL expressions are evaluated. - -4. If any of the resources, such as the upstream *PackageRevision* or the downstream repository, - are not found or otherwise fail to load, then the processing stops and a failure condition is - raised. Similarly, if a CEL expression cannot be properly evaluated, due to syntax or other - issues, then the processing stops and a failure condition is raised. - -#### Other considerations - -It seems convenient to automatically inject the *PackageVariantSet* targeting resource. However, it -is better to require the package to advertise the ways in which it accepts injections (that is, the -GVKs that it understands), and only inject those. This keeps the separation of concerns cleaner. The -package does not build in an awareness of the context in which it expects to be deployed. For -example, a package should not accept a Porch repository resource just because that happens to be the -targeting mechanism. That would make the package unusable in other contexts. - -#### PackageVariantSet status - -The *PackageVariantSet* status uses the following conditions: - - - Stalled is set to *True*, if there has been a failure that likely requires user intervention. - - Ready is set to *True*, if the last reconciliation has successfully reconciled all the targeted - *PackageVariant* resources. - -## Future considerations -- As an alternative to the floating tag proposal, it may instead be desirable to have a separate tag - tracking controller that can update the PV and PVS resources, to tweak their upstream as the tag - moves. -- Installing a collection of packages across a set of clusters, or performing the same mutations to - each package in a collection, is only supported by creating multiple *PackageVariant*/ - *PackageVariantSet* resources. These are options to consider for the following use cases: - - - Upstreams listing multiple packages. - - Label the selector against *PackageRevisions*. This does not seem particularly useful, as - *PackageRevisions* are highly reusable and would probably be composed in many different ways. - - A *PackageRevisionSet* resource that simply contains a list of upstream structures and could be - used as an upstream. 
This is functionally equivalent to the upstreams option, except this list - is reusable across resources. - - Listing multiple *PackageRevisionSets* in the upstream is also desirable. - - Any or all of the above use cases could be implemented in the *PackageVariant* or - *PackageVariantSet*, or both. - -## Footnotes - -[^porch17]: Implemented in Porch v0.0.17. -[^porch18]: Available in Porch v0.0.18. -[^notimplemented]: Proposed here, but not yet implemented in Porch v0.0.18. -[^setns]: As of writing, the set-namespace function does not have a *create* option. This should be - added, in order to avoid the user needing also to use the `upsert-resource` function. Such common - operations should be simple for users. -[^pvsimpl]: This document describes *PackageVariantSet* v1alpha2, which will be available from - Porch v0.0.18 onwards. In Porch v0.0.16 and 17, the v1alpha1 implementation is available, but it - is a somewhat different API, which does not support CEL or any injection. It is focused only on - fan-out targeting, and uses a [slightly different targeting API](https://github.com/nephio-project/porch/blob/main/controllers/packagevariants/api/v1alpha1/packagevariant_types.go). -[^repo-pkg-expr]: This is not exactly correct. As we will see later in the template discussion, the - repository and package names listed are just defaults for the template. They can be further - manipulated in the template to reference different downstream repositories and package names. The - same is true for the repositories selected via the `repositorySelector` option. However, this can - be ignored for now. -[^multi-ns-reg]: Note that the same upstream repository can be registered in multiple namespaces - without any problems. This simplifies access controls, avoiding the need for cross-namespace - relationships between the repositories and other Porch resources. diff --git a/content/en/docs/porch/running-porch/_index.md b/content/en/docs/porch/running-porch/_index.md deleted file mode 100644 index 4c68b980..00000000 --- a/content/en/docs/porch/running-porch/_index.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Running Porch" -type: docs -weight: 6 -description: ---- - diff --git a/content/en/docs/porch/running-porch/running-on-GKE.md b/content/en/docs/porch/running-porch/running-on-GKE.md deleted file mode 100644 index 26c49e4b..00000000 --- a/content/en/docs/porch/running-porch/running-on-GKE.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: "Running Porch on GKE" -type: docs -weight: 2 -description: ---- - -You can install Porch by either using one of the -[released versions](https://github.com/nephio-project/porch/releases), or building Porch from sources. - -## Prerequisites - -{{% alert title="Note" color="primary" %}} - -Porch should run on any Kubernetes cluster and should work on any cloud. We have just started by documenting one -known-good configuration: GCP and GKE. We would welcome comparable installation instructions or feedback from people -that try it out on other clouds / configurations. 
- -{{% /alert %}} - -To run one of the [released versions](https://github.com/nephio-project/porch/releases) of Porch on GKE, you will -need: - -* A [GCP Project](https://console.cloud.google.com/projectcreate) -* [gcloud](https://cloud.google.com/sdk/docs/install) -* [kubectl](https://kubernetes.io/docs/tasks/tools/); you can install it via `gcloud components install kubectl` -* [porchctl](https://github.com/nephio-project/porch/releases/download/dev/porchctl.tgz) -* Command line utilities such as *curl*, *tar* - -To build and run Porch on GKE, you will also need: - -* A container registry which will work with your GKE cluster. - [Artifact Registry](https://console.cloud.google.com/artifacts) or - [Container Registry](https://console.cloud.google.com/gcr) work well though you can use others too. -* [go 1.21](https://go.dev/dl/) or newer -* [docker](https://docs.docker.com/get-docker/) -* [Configured docker credential helper](https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker) -* [git](https://git-scm.com/) -* [make](https://www.gnu.org/software/make/) - -## Getting Started - -Make sure your gcloud is configured with your project (alternatively, you can augment all following gcloud -commands below with --project flag): - -```bash -gcloud config set project YOUR_GCP_PROJECT -``` - -Select a GKE cluster or create a new one: - -```bash -gcloud services enable container.googleapis.com -gcloud container clusters create-auto --region us-central1 porch-dev -``` -{{% alert title="Note" color="primary" %}} - -For development of Porch, in particular for running Porch tests, Standard GKE cluster is currently preferable. Select a -[GCP region](https://cloud.google.com/compute/docs/regions-zones#available) that works best for your needs: - - ```bash -gcloud services enable container.googleapis.com -gcloud container clusters create --region us-central1 porch-dev -``` - -And ensure *kubectl* is targeting your GKE cluster: - -```bash -gcloud container clusters get-credentials --region us-central1 porch-dev -``` -{{% /alert %}} - -## Run Released Version of Porch - -To run a released version of Porch, download the release configuration bundle from -[Porch release page](https://github.com/nephio-project/porch/releases). - -Untar and apply the *porch_blueprint.tar.gz* configuration bundle. This will install: - -* Porch server -* [configsync](https://kpt.dev/gitops/configsync/) - -```bash -mkdir porch-install -tar xzf ~/Downloads/porch_blueprint.tar.gz -C porch-install -kubectl apply -f porch-install -kubectl wait deployment --for=condition=Available porch-server -n porch-system -``` - -You can verify that Porch is running by querying the api-resources: - -```bash -kubectl api-resources | grep porch -``` -Expected output will include: - -```bash -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -``` - -To install configsync: - -```bash -echo " -apiVersion: configmanagement.gke.io/v1 -kind: ConfigManagement -metadata: - name: config-management -spec: - enableMultiRepo: true -" | kubectl apply -f - -``` - -## Run Custom Build of Porch - -To run custom build of Porch, you will need additional [prerequisites](#prerequisites). The commands below use -[Google Container Registry](https://console.cloud.google.com/gcr). - -Clone this repository into *${GOPATH}/src/github.com/GoogleContainerTools/kpt*. 
-
-```bash
-git clone https://github.com/GoogleContainerTools/kpt.git "${GOPATH}/src/github.com/GoogleContainerTools/kpt"
-```
-
-[Configure](https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker) the docker credential helper for your
-repository.
-
-If your use case doesn't require Porch to interact with GCP container registries, you can build and deploy Porch by
-running the command below. It will build and push the Porch Docker images into (by default) Google Container Registry,
-with names such as `gcr.io/YOUR-PROJECT-ID/porch-server:SHORT-COMMIT-SHA` (the example shown is the Porch server
-image).
-
-```bash
-IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy-no-sa
-```
-
-If you want to use a different repository, you can set the IMAGE_REPO variable
-(see the [Makefile](https://github.com/nephio-project/porch/blob/main/Makefile#L33) for details).
-
-The `make push-and-deploy-no-sa` target will install Porch but not configsync. You can install configsync in your k8s
-cluster manually following the
-[documentation](https://github.com/GoogleContainerTools/kpt-config-sync/blob/main/docs/installation.md).
-
-{{% alert title="Note" color="primary" %}}
-
-The -no-sa (no service account) targets create a Porch deployment
-configuration which does not associate Kubernetes service accounts with GCP
-service accounts. This is sufficient for Porch to integrate with Git repositories
-using Basic Auth, for example GitHub.
-
-As above, you can verify that Porch is running by querying the api-resources:
-
-```bash
-kubectl api-resources | grep porch
-```
-{{% /alert %}}
-
-### Workload Identity
-
-[Workload Identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) is a simple way to
-access Google Cloud services from Porch.
-
-#### Google Cloud Source Repositories
-
-[Cloud Source Repositories](https://cloud.google.com/source-repositories) can be accessed using workload identity,
-removing the need to store credentials in the cluster.
-
-To set this up, create the necessary service account and grant it the required roles:
-
-```bash
-GCP_PROJECT_ID=$(gcloud config get-value project)
-
-# Create GCP service account (GSA) for Porch server.
-gcloud iam service-accounts create porch-server
-
-# We want to create and delete images. Assign IAM roles to allow repository
-# administration.
-gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
-    --member "serviceAccount:porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
-    --role "roles/source.admin"
-
-gcloud iam service-accounts add-iam-policy-binding porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
-    --role roles/iam.workloadIdentityUser \
-    --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-server]"
-
-# We need to associate the Kubernetes Service Account (KSA)
-# with the GSA by annotating the KSA.
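-# (The Workload Identity member string above has the form
-# serviceAccount:<project>.svc.id.goog[<k8s-namespace>/<k8s-service-account>],
-# and the annotation below tells GKE which GSA the porch-server KSA should impersonate.)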
-kubectl annotate serviceaccount porch-server -n porch-system \
-    iam.gke.io/gcp-service-account=porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com
-```
-
-Build Porch, push images, and deploy the Porch server and controllers using the `make` target that adds the workload
-identity service account annotations:
-
-```bash
-IMAGE_TAG=$(git rev-parse --short HEAD) make push-and-deploy
-```
-
-As above, you can verify that Porch is running by querying the api-resources:
-
-```bash
-kubectl api-resources | grep porch
-```
-
-To register a repository, use the following command:
-
-```bash
-porchctl repo register --repo-workload-identity --namespace=default https://source.developers.google.com/p//r/
-```
-
-#### OCI
-
-To integrate with OCI repositories such as
-[Artifact Registry](https://console.cloud.google.com/artifacts) or
-[Container Registry](https://console.cloud.google.com/gcr), Porch relies on
-[workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity).
-
-For that use case, create the service accounts and assign the roles:
-
-```bash
-GCP_PROJECT_ID=$(gcloud config get-value project)
-
-# Create GCP service account for Porch server.
-gcloud iam service-accounts create porch-server
-# Create GCP service account for Porch sync controller.
-gcloud iam service-accounts create porch-sync
-
-# We want to create and delete images. Assign IAM roles to allow repository
-# administration.
-gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
-    --member "serviceAccount:porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
-    --role "roles/artifactregistry.repoAdmin"
-
-gcloud iam service-accounts add-iam-policy-binding porch-server@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
-    --role roles/iam.workloadIdentityUser \
-    --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-server]"
-
-gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
-    --member "serviceAccount:porch-sync@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
-    --role "roles/artifactregistry.reader"
-
-gcloud iam service-accounts add-iam-policy-binding porch-sync@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
-    --role roles/iam.workloadIdentityUser \
-    --member "serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[porch-system/porch-controllers]"
-```
diff --git a/content/en/docs/porch/user-guides/_index.md b/content/en/docs/porch/user-guides/_index.md
deleted file mode 100644
index c8a18209..00000000
--- a/content/en/docs/porch/user-guides/_index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: "Porch user guides"
-type: docs
-weight: 6
-description:
----
-
diff --git a/content/en/docs/porch/user-guides/git-authentication-config.md b/content/en/docs/porch/user-guides/git-authentication-config.md
deleted file mode 100644
index 293f17fe..00000000
--- a/content/en/docs/porch/user-guides/git-authentication-config.md
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title: "Authenticating to Remote Git Repositories"
-type: docs
-weight: 4
-description: ""
----
-
-## Porch Server to Git Interaction
-
-The Porch server handles interaction with its associated git repositories through Porch Repository CRs (Custom
-Resources), which act as the link between the Porch server and the git repositories that the server interacts with
-and stores packages in.
-
-More information on porch repositories can be found [here]({{% relref "/docs/porch/package-orchestration.md#repositories" %}}).
-
-There are two main methods of authenticating to a git repository, plus an additional TLS-related configuration:
-
-1. Basic Authentication
-2. Bearer Token Authentication
-3. HTTPS/TLS Configuration
-
-### Basic Authentication
-
-A Porch repository object can be created using the
-`porchctl repo reg porch-test-repository -n porch-test http://example-ip:example-port/repo.git --repo-basic-password=password --repo-basic-username=username`
-command, which creates both the secret and the repository object.
-
-The basic authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have data keys named *username* and *password* containing the relevant information.
-- Be of type *basic-auth*.
-
-The value used in the *password* field can be substituted with a base64-encoded Personal Access Token (PAT) from the
-git instance being used. An example of this can be found
-[here]({{% relref "/docs/porch/user-guides/porchctl-cli-guide.md#repository-registration" %}}).
-
-This is the equivalent of doing a `kubectl apply -f` on a YAML file with the following content (assuming the
-*porch-test* namespace exists on the cluster):
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  username: base-64-encoded-username
-  password: base-64-encoded-password # or base64-encoded-PAT
-type: kubernetes.io/basic-auth
-
----
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: Repository
-
-metadata:
-  name: porch-test-repository
-  namespace: porch-test
-
-spec:
-  description: porch test repository
-  content: Package
-  deployment: false
-  type: git
-  git:
-    repo: http://example-ip:example-port/repo.git
-    directory: /
-    branch: main
-    secretRef:
-      name: git-auth-secret
-```
-
-When the Porch server is interacting with a git instance through this http-basic-auth configuration, it does so over
-HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT
-https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-    User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-    Authorization: Basic bmVwaGlvOnNlY3JldA==
-    Accept: application/json, */*
-    Content-Type: application/json
-```
-
-where *bmVwaGlvOnNlY3JldA==* is base64-encoded in the format *username:password* and, after base64 decoding, becomes
-*nephio:secret*. For simple personal access token login, the password section can be substituted with the PAT.
-
-### Bearer Token Authentication
-
-The authentication to the git repository can be configured to use the bearer token format by altering the secret used
-in the Porch repository object.
-
-The bearer token authentication secret must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR (Custom Resource) that requires it.
-- Have a data key named *bearerToken* containing the relevant git token information.
-- Be of type *Opaque*.
-
-For example:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: git-auth-secret
-  namespace: porch-test
-data:
-  bearerToken: base-64-encoded-bearer-token
-type: Opaque
-```
-
-When the Porch server is interacting with a git instance through this http-token-auth configuration, it does so over
-HTTP. An example HTTP request using this configuration can be seen below.
-
-```logs
-PUT https://example-ip/apis/config.porch.kpt.dev/v1alpha1/namespaces/porch-test/repositories/porch-test-repo/status
-Request Headers:
-    User-Agent: __debug_bin1520795790/v0.0.0 (linux/amd64) kubernetes/$Format
-    Authorization: Bearer 4764aacf8cc6d72cab58e96ad6fd3e3746648655
-    Accept: application/json, */*
-    Content-Type: application/json
-```
-
-where *4764aacf8cc6d72cab58e96ad6fd3e3746648655* in the Authorization header is a PAT, but it can be whatever type of
-bearer token is accepted by the user's git instance.
-
-{{% alert title="Note" color="primary" %}}
-Note that the Porch server caches the authentication credentials from the secret. If the secret's contents are
-updated, the cached credentials may therefore no longer match the secret's contents.
-
-When the cached old secret credentials are no longer valid, the Porch server will query the secret again and use the
-new credentials.
-
-If these new credentials are valid, they become the new cached authentication credentials.
-{{% /alert %}}
-
-### HTTPS/TLS Configuration
-
-To enable the Porch server to communicate with a custom git deployment over HTTPS, we must:
-
-1. Provide an additional argument flag, *use-git-cabundle=true*, to the porch-server deployment.
-2. Provide an additional Kubernetes secret containing the relevant certificate chain in the form of a CA bundle.
-
-The secret itself must meet the following criteria:
-
-- Exist in the same namespace as the Repository CR that requires it.
-- Be named specifically *\<namespace\>-ca-bundle*.
-- Have a data key named *ca.crt* containing the relevant CA certificate (chain).
-
-For example, a git repository is hosted over HTTPS at the URL `https://my-gitlab.com/joe.bloggs/blueprints.git`.
-
-Before creating the new Repository in the *gitlab* namespace, we must create a secret that fulfils the criteria
-above, assuming the certificate chain is stored locally as *ca.crt*:
-
-`kubectl create secret generic gitlab-ca-bundle --namespace=gitlab --from-file=ca.crt`
-
-This produces the following secret:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: gitlab-ca-bundle
-  namespace: gitlab
-type: Opaque
-data:
-  ca.crt: FAKE1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNuakNDQWdHZ0F3SUJBZ0lRTEdmUytUK3YyRDZDczh1MVBlUUlKREFLQmdncWhrak9QUVFEQkRBZE1Sc3cKR1FZRFZRUURFeEpqWlhKMExXMWhibUZuWlhJdWJHOWpZV3d3SGhjTk1qUXdOVE14TVRFeU5qTXlXaGNOTWpRdwpPREk1TVRFeU5qTXlXakFWTVJNd0VRWURWUVFGRXdveE1qTTBOVFkzT0Rrd01JSUJJakFOQmdrcWhraUc5dzBCCkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhCUUtWMEVzQ1JOOGxuV3lQR1ZWNXJwam5QZkI2emszK0N4cEp2NVMKUWhpMG1KbDI0elV1WWZjRzNxdFUva1NuREdjK3NQRUY0RmlOcUlsSTByWHBQSXBPazhKbjEvZU1VT3RkZUUyNgpSWEZBWktjeDVvdUJyZVNja3hsN2RPVkJnOE1EM1h5RU1PQU5nM0hJZ1J4ZWx2U2p1dy8vMURhSlRnK0lBS0dUCkgrOVlRVFcrZDIwSk5wQlR3NkdnQlRsYmdqL2FMRWEwOXVYSVBjK0JUSkpXRThIeDhkVjFNbEtHRFlDU29qZFgKbG9TN1FIa0dsSVk3M0NPZVVGWEVnTlFVVmZaZHdreXNsT3F4WmdXUTNZTFZHcEFyRitjOVdyUGpQQU5NQWtORQpPdHRvaG8zTlRxQ3FST3JEa0RMYWdsU1BKSUd1K25TcU5veVVxSUlWWkV5R1dRSURBUUFCbzJBd1hqQU9CZ05WCkhROEJBZjhFQkFNQ0JhQXdEQVlEVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCUitFZTVDTnVJSkcwZjkKV3J3VzdqYUZFeVdzb1RBZEJnTlZIUkVFRmpBVWdoSm5hWFJzWVdJdVpYaGhiWEJzWlM1amIyMHdDZ1lJS29aSQp6ajBFQXdRRGdZb0FNSUdHQWtGLzRyNUM4bnkwdGVIMVJlRzdDdXJHYk02SzMzdTFDZ29GTkthajIva2ovYzlhCnZwODY0eFJKM2ZVSXZGMEtzL1dNUHNad2w2bjMxUWtXT2VpM01aYWtBUUpCREw0Kyt4UUxkMS9uVWdqOW1zN2MKUUx3NXVEMGxqU0xrUS9mOTJGYy91WHc4QWVDck5XcVRqcDEycDJ6MkUzOXRyWWc1a2UvY2VTaWFPUm16eUJuTwpTUTg9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
-```
diff --git a/content/en/docs/porch/user-guides/install-porch.md b/content/en/docs/porch/user-guides/install-porch.md
deleted file mode 100644
index fd7985ed..00000000
--- a/content/en/docs/porch/user-guides/install-porch.md
+++ /dev/null
@@ -1,427 +0,0 @@
----
-title: "Installing Porch"
-type: docs
-weight: 1
-description: "A tutorial to install Porch"
----
-
-This tutorial is a guide to installing Porch. It is based on the
-[Porch demo produced by Tal Liron of Google](https://github.com/tliron/klab/tree/main/environments/porch-demo). Users
-should be comfortable using *git*, *docker*, and *kubernetes*.
-
-See also [the Nephio Learning Resource](https://github.com/nephio-project/docs/blob/main/learning.md) page for
-background help and information.
-
-## Prerequisites
-
-The tutorial can be executed on a Linux VM or directly on a laptop. It has been verified to execute on a MacBook Pro M1
-machine and an Ubuntu 20.04 VM.
-
-The following software should be installed prior to running through the tutorial:
-
-1. [git](https://git-scm.com/)
-2. [Docker](https://www.docker.com/get-started/)
-3. [kubectl](https://kubernetes.io/docs/reference/kubectl/) - make sure that your
-   [kubectl context](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is
-   configured with your cluster
-4. [kind](https://kind.sigs.k8s.io/)
-5. [kpt](https://github.com/kptdev/kpt)
-6. [The go programming language](https://go.dev/)
-7. [Visual Studio Code](https://code.visualstudio.com/download)
-8. 
[VS Code extensions for go](https://code.visualstudio.com/docs/languages/go) - -## Clone the repository and cd into the tutorial - -```bash -git clone https://github.com/nephio-project/porch.git - -cd porch/examples/tutorials/starting-with-porch/ -``` - -## Create the Kind clusters for management and edge1 - -Create the clusters: - -```bash -kind create cluster --config=kind_management_cluster.yaml -kind create cluster --config=kind_edge1_cluster.yaml -``` - -Output the *kubectl* configuration for the clusters: - -```bash -kind get kubeconfig --name=management > ~/.kube/kind-management-config -kind get kubeconfig --name=edge1 > ~/.kube/kind-edge1-config -``` - -Toggling *kubectl* between the clusters: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -export KUBECONFIG=~/.kube/kind-edge1-config -``` - -## Install MetalLB on the management cluster - -Install the MetalLB load balancer on the management cluster to expose services: - -```bash -export KUBECONFIG=~/.kube/kind-management-config -kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml -kubectl wait --namespace metallb-system \ - --for=condition=ready pod \ - --selector=component=controller \ - --timeout=90s -``` - -Check the subnet that is being used by the kind network in docker - -```bash -docker network inspect kind | grep Subnet -``` - -Sample output: - -```yaml -"Subnet": "172.18.0.0/16", -"Subnet": "fc00:f853:ccd:e793::/64" -``` - -Edit the *metallb-conf.yaml* file and ensure the spec.addresses range is in the IPv4 subnet being used by the kind network in docker. - -```yaml -... -spec: - addresses: - - 172.18.255.200-172.18.255.250 -... -``` - -Apply the MetalLB configuration: - -```bash -kubectl apply -f metallb-conf.yaml -``` - -## Deploy and set up Gitea on the management cluster using kpt - -Get the *gitea kpt* package: - -```bash -export KUBECONFIG=~/.kube/kind-management-config - -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/distros/sandbox/gitea -``` - -Comment out the preconfigured IP address from the *gitea/service-gitea.yaml* file in the *gitea kpt* package: - -```bash -11c11 -< metallb.universe.tf/loadBalancerIPs: 172.18.0.200 ---- -> # metallb.universe.tf/loadBalancerIPs: 172.18.0.200 -``` - -Now render, init and apply the *gitea kpt* package: - -```bash -kpt fn render gitea -kpt live init gitea # You only need to do this command once -kpt live apply gitea -``` - -Once the package is applied, all the Gitea pods should come up and you should be able to reach the Gitea UI on the -exposed IP Address/port of the Gitea service. - -```bash -kubectl get svc -n gitea gitea - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -gitea LoadBalancer 10.96.243.120 172.18.255.200 22:31305/TCP,3000:31102/TCP 10m -``` - -The UI is available at http://172.18.255.200:3000 in the example above. - -To login to Gitea, use the credentials nephio:secret. - -## Create repositories on Gitea for management and edge1 - -On the Gitea UI, click the **+** opposite **Repositories** and fill in the form for both the *management* and *edge1* -repositories. 
Use default values except for the following fields: - -- Repository Name: "Management" or "edge1" -- Description: Something appropriate - -Alternatively, we can create the repositories via curl: - -```bash -curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"management"}' - -curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" --data '{"name":"edge1"}' -``` - -Check the repositories: - -```bash - curl -k -H "content-type: application/json" "http://nephio:secret@172.18.255.200:3000/api/v1/user/repos" | grep -Po '"name": *\K"[^"]*"' -``` - -Now initialize both repositories with an initial commit. - -Initialize the *management* repository: - -```bash -cd ../repos -git clone http://172.18.255.200:3000/nephio/management -cd management - -touch README.md -git init -git checkout -b main -git config user.name nephio -git add README.md - -git commit -m "first commit" -git remote remove origin -git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/management.git -git remote -v -git push -u origin main -cd .. - ``` - -Initialize the *edge1* repository: - -```bash -git clone http://172.18.255.200:3000/nephio/edge1 -cd edge1 - -touch README.md -git init -git checkout -b main -git config user.name nephio -git add README.md - -git commit -m "first commit" -git remote remove origin -git remote add origin http://nephio:secret@172.18.255.200:3000/nephio/edge1.git -git remote -v -git push -u origin main -cd ../../ -``` - -## Install Porch - -We will use the *Porch Kpt* package from Nephio catalog repository. - -```bash -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/porch -``` - -Now we can install porch. We render the *kpt* package and then init and apply it. 
- -```bash -kpt fn render porch -kpt live init porch # You only need to do this command once -kpt live apply porch -``` - -Check that the Porch PODs are running on the management cluster: - -```bash -kubectl get pod -n porch-system -NAME READY STATUS RESTARTS AGE -function-runner-7994f65554-nrzdh 1/1 Running 0 81s -function-runner-7994f65554-txh9l 1/1 Running 0 81s -porch-controllers-7fb4497b77-2r2r6 1/1 Running 0 81s -porch-server-68bfdddbbf-pfqsm 1/1 Running 0 81s -``` - -Check that the Porch CRDs and other resources have been created: - -```bash -kubectl api-resources | grep porch -packagerevs config.porch.kpt.dev/v1alpha1 true PackageRev -packagevariants config.porch.kpt.dev/v1alpha1 true PackageVariant -packagevariantsets config.porch.kpt.dev/v1alpha2 true PackageVariantSet -repositories config.porch.kpt.dev/v1alpha1 true Repository -packagerevisionresources porch.kpt.dev/v1alpha1 true PackageRevisionResources -packagerevisions porch.kpt.dev/v1alpha1 true PackageRevision -packages porch.kpt.dev/v1alpha1 true Package -``` - -## Connect the Gitea repositories to Porch - -Create a demo namespace: - -```bash -kubectl create namespace porch-demo -``` - -Create a secret for the Gitea credentials in the demo namespace: - -```bash -kubectl create secret generic gitea \ - --namespace=porch-demo \ - --type=kubernetes.io/basic-auth \ - --from-literal=username=nephio \ - --from-literal=password=secret -``` - -Now, define the Gitea repositories in Porch: - -```bash -kubectl apply -f porch-repositories.yaml -``` - -Check that the repositories have been correctly created: - -```bash -kubectl get repositories -n porch-demo -NAME TYPE CONTENT DEPLOYMENT READY ADDRESS -edge1 git Package true True http://172.18.255.200:3000/nephio/edge1.git -external-blueprints git Package false True https://github.com/nephio-project/free5gc-packages.git -management git Package false True http://172.18.255.200:3000/nephio/management.git -``` - -## Configure configsync on the workload cluster - -configsync is installed on the edge1 cluster so that it syncs the contents of the *edge1* repository onto the edge1 -workload cluster. We will use the configsync package from Nephio. - -```bash -export KUBECONFIG=~/.kube/kind-edge1-config - -cd kpt_packages - -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/core/configsync -kpt fn render configsync -kpt live init configsync -kpt live apply configsync -``` - -Check that the configsync PODs are up and running: - -```bash -kubectl get pod -n config-management-system -NAME READY STATUS RESTARTS AGE -config-management-operator-6946b77565-f45pc 1/1 Running 0 118m -reconciler-manager-5b5d8557-gnhb2 2/2 Running 0 118m -``` - -Now, we need to set up a RootSync CR to synchronize the *edge1* repository: - -```bash -kpt pkg get https://github.com/nephio-project/catalog/tree/main/nephio/optional/rootsync -``` - -Edit the *rootsync/package-context.yaml* file to set the name of the cluster/repo we are syncing from/to: - -```bash -9c9 -< name: example-rootsync ---- -> name: edge1 -``` - -Render the package. 
This configures the *rootsync/rootsync.yaml* file in the Kpt package: - -```bash -kpt fn render rootsync -``` - -Edit the *rootsync/rootsync.yaml* file to set the IP address of Gitea and to turn off authentication for accessing -Gitea: - -```bash -11c11 -< repo: http://172.18.0.200:3000/nephio/example-cluster-name.git ---- -> repo: http://172.18.255.200:3000/nephio/edge1.git -13,15c13,16 -< auth: token -< secretRef: -< name: example-cluster-name-access-token-configsync ---- -> auth: none -> # auth: token -> # secretRef: -> # name: edge1-access-token-configsync -``` - -Initialize and apply RootSync: - -```bash -export KUBECONFIG=~/.kube/kind-edge1-config - -kpt live init rootsync # This command is only needed once -kpt live apply rootsync -``` - -Check that the RootSync CR is created: - -```bash -kubectl get rootsync -n config-management-system -NAME RENDERINGCOMMIT RENDERINGERRORCOUNT SOURCECOMMIT SOURCEERRORCOUNT SYNCCOMMIT SYNCERRORCOUNT -edge1 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 613eb1ad5632d95c4336894f8a128cc871fb3266 -``` - -Check that configsync is synchronized with the repository on the management cluster: - -```bash -kubectl get pod -n config-management-system -l app=reconciler -NAME READY STATUS RESTARTS AGE -root-reconciler-edge1-68576f878c-92k54 4/4 Running 0 2d17h - -kubectl logs -n config-management-system root-reconciler-edge1-68576f878c-92k54 -c git-sync -f - -``` - -The result should be similar to: - -```bash -INFO: detected pid 1, running init handler -I0105 17:50:11.472934 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.autoDetach false" -I0105 17:50:11.493046 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git config --global gc.pruneExpire now" -I0105 17:50:11.513487 15 main.go:473] "level"=0 "msg"="starting up" "pid"=15 "args"=["/git-sync","--root=/repo/source","--dest=rev","--max-sync-failures=30","--error-file=error.json","--v=5"] -I0105 17:50:11.514044 15 main.go:923] "level"=0 "msg"="cloning repo" "origin"="http://172.18.255.200:3000/nephio/edge1.git" "path"="/repo/source" -I0105 17:50:11.514061 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="" "cmd"="git clone -v --no-checkout -b main --depth 1 http://172.18.255.200:3000/nephio/edge1.git /repo/source" -I0105 17:50:11.706506 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:11.729292 15 main.go:737] "level"=0 "msg"="syncing git" "rev"="HEAD" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.729332 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git fetch -f --tags --depth 1 http://172.18.255.200:3000/nephio/edge1.git main" -I0105 17:50:11.920110 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.945545 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967150 15 main.go:726] "level"=1 "msg"="removing worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:11.967359 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree prune" -I0105 17:50:11.987522 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git worktree add --detach /repo/source/385295a2143f10a6cda0cf4609c45d7499185e01 385295a2143f10a6cda0cf4609c45d7499185e01 
--no-checkout" -I0105 17:50:12.057698 15 main.go:772] "level"=0 "msg"="adding worktree" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "branch"="origin/main" -I0105 17:50:12.057988 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git reset --hard 385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099783 15 main.go:833] "level"=0 "msg"="reset worktree to hash" "path"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "hash"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:12.099805 15 main.go:838] "level"=0 "msg"="updating submodules" -I0105 17:50:12.099976 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/385295a2143f10a6cda0cf4609c45d7499185e01" "cmd"="git submodule update --init --recursive --depth 1" -I0105 17:50:12.442466 15 main.go:694] "level"=1 "msg"="creating tmp symlink" "root"="/repo/source/" "dst"="385295a2143f10a6cda0cf4609c45d7499185e01" "src"="tmp-link" -I0105 17:50:12.442494 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="ln -snf 385295a2143f10a6cda0cf4609c45d7499185e01 tmp-link" -I0105 17:50:12.453694 15 main.go:699] "level"=1 "msg"="renaming symlink" "root"="/repo/source/" "old_name"="tmp-link" "new_name"="rev" -I0105 17:50:12.453718 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/" "cmd"="mv -T tmp-link rev" -I0105 17:50:12.467904 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git gc --auto" -I0105 17:50:12.492329 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git cat-file -t HEAD" -I0105 17:50:12.518878 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source" "cmd"="git rev-parse HEAD" -I0105 17:50:12.540979 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -I0105 17:50:27.553609 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:27.600401 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:27.694035 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:27.694159 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -I0105 17:50:42.695482 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0105 17:50:42.733276 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0105 17:50:42.826422 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0105 17:50:42.826611 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 - -....... 
- -I0108 11:04:05.935586 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git rev-parse HEAD" -I0108 11:04:05.981750 15 cmd.go:48] "level"=5 "msg"="running command" "cwd"="/repo/source/rev" "cmd"="git ls-remote -q http://172.18.255.200:3000/nephio/edge1.git refs/heads/main" -I0108 11:04:06.079536 15 main.go:1065] "level"=1 "msg"="no update required" "rev"="HEAD" "local"="385295a2143f10a6cda0cf4609c45d7499185e01" "remote"="385295a2143f10a6cda0cf4609c45d7499185e01" -I0108 11:04:06.079599 15 main.go:585] "level"=1 "msg"="next sync" "wait_time"=15000000000 -``` diff --git a/content/en/docs/porch/user-guides/porchctl-cli-guide.md b/content/en/docs/porch/user-guides/porchctl-cli-guide.md deleted file mode 100644 index 557ad769..00000000 --- a/content/en/docs/porch/user-guides/porchctl-cli-guide.md +++ /dev/null @@ -1,838 +0,0 @@ ---- -title: "Using the Porch CLI tool" -type: docs -weight: 3 -description: ---- - -## Setting up the porchctl CLI - -The Porch CLI uses the `porchctl` command. -To use it locally, [download](https://github.com/nephio-project/porch/releases/tag/dev), unpack and add it to your PATH. - -{{% alert title="Note" color="primary" %}} - -Installation of Porch, including its prerequisites, is covered in a [dedicated document]({{% relref "/docs/porch/user-guides/install-porch.md" %}}). - -{{% /alert %}} - -*Optional*: Generate the autocompletion script for the specified shell to add to your sh profile. - -``` -porchctl completion bash -``` - -The `porchctl` command is an administration command for acting on Porch *Repository* (repo) and *PackageRevision* (rpkg) -CRs. - -The commands for administering repositories are: - -| Command | Description | -| --------------------- | ------------------------------ | -| `porchctl repo get` | List registered repositories. | -| `porchctl repo reg` | Register a package repository. | -| `porchctl repo unreg` | Unregister a repository. | - -The commands for administering package revisions are: - -| Command | Description | -| ------------------------------ | ------------------------------------------------------------------------------------------------ | -| `porchctl rpkg approve` | Approve a proposal to publish a package revision. | -| `porchctl rpkg clone` | Create a clone of an existing package revision. | -| `porchctl rpkg copy` | Create a new package revision from an existing one. | -| `porchctl rpkg del` | Delete a package revision. | -| `porchctl rpkg get` | List package revisions in registered repositories. | -| `porchctl rpkg init` | Initializes a new package revision in a repository. | -| `porchctl rpkg propose` | Propose that a package revision should be published. | -| `porchctl rpkg propose-delete` | Propose deletion of a published package revision. | -| `porchctl rpkg pull` | Pull the content of the package revision. | -| `porchctl rpkg push` | Push resources to a package revision. | -| `porchctl rpkg reject` | Reject a proposal to publish or delete a package revision. | -| `porchctl rpkg upgrade` | Update a downstream package revision to a more recent revision of its upstream package revision. | - -## Using the porchctl CLI - -### Guide prerequisites -* [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) - -Make sure that your `kubectl` context is set up for `kubectl` to interact with the correct Kubernetes instance (see -[installation instructions]({{% relref "/docs/porch/user-guides/install-porch.md" %}}) guide for details). 
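-
-For example, you can check which context is active and switch to the right one (a sketch; the context name
-*kind-management* is illustrative and depends on how your cluster was created):
-
-```bash
-# List all configured contexts and show which one is active
-kubectl config get-contexts
-
-# Switch to the context for the cluster running Porch
-kubectl config use-context kind-management
-```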
-
-To check whether `kubectl` is configured with your Porch cluster (or local instance), run:
-
-```bash
-kubectl api-resources | grep porch
-```
-
-You should see the following API resources listed:
-
-```bash
-repositories                config.porch.kpt.dev/v1alpha1   true   Repository
-packagerevisionresources    porch.kpt.dev/v1alpha1          true   PackageRevisionResources
-packagerevisions            porch.kpt.dev/v1alpha1          true   PackageRevision
-```
-
-## Porch Resources
-
-The Porch server manages the following resources:
-
-1. `repositories`: a repository (Git or OCI) can be registered with Porch to support discovery or management of KRM
-   configuration packages in those repositories.
-2. `packagerevisions`: a specific revision of a KRM configuration package managed by Porch in one of the registered
-   repositories. This resource represents a _metadata view_ of the KRM configuration package.
-3. `packagerevisionresources`: this resource represents the contents of the configuration package (the KRM resources
-   contained in the package).
-
-{{% alert title="Note" color="primary" %}}
-
-`packagerevisions` and `packagerevisionresources` represent different _views_ of the same underlying KRM
-configuration package. `packagerevisions` represents the package metadata, and `packagerevisionresources` represents the
-package content. The matching resources share the same `name` (as well as API group and version:
-`porch.kpt.dev/v1alpha1`) and differ in resource kind (`PackageRevision` and `PackageRevisionResources` respectively).

-{{% /alert %}}
-
-
-## Repository Registration
-
-To use Porch with a Git repository, you will need:
-
-* A Git repository for your blueprints. An otherwise empty repository with an
-  initial commit works best. The initial commit is required to establish the
-  `main` branch.
-* If the repository requires authentication, you will need either:
-  - A [Personal Access Token](https://github.com/settings/tokens) (when using a GitHub repository) for Porch to
-    authenticate with the repository. Porch requires the *repo* scope.
-  - Basic Auth credentials for Porch to authenticate with the repository.
-
-To use Porch with an OCI repository ([Artifact Registry](https://console.cloud.google.com/artifacts) or
-[Google Container Registry](https://cloud.google.com/container-registry)), first make sure to:
-
-* Enable [workload identity](https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity) for Porch
-* Assign appropriate roles to the Porch workload identity service account
-  (`iam.gke.io/gcp-service-account=porch-server@$(GCP_PROJECT_ID).iam.gserviceaccount.com`)
-  so that it has the appropriate level of access to your OCI repository.
-
-Use the `porchctl repo register` command to register your repository with Porch.
-
-```bash
-# Unauthenticated Repositories
-porchctl repo register --namespace default https://github.com/platkrm/test-blueprints.git
-porchctl repo register --namespace default https://github.com/nephio-project/catalog --name=oai --directory=workloads/oai
-porchctl repo register --namespace default https://github.com/nephio-project/catalog --name=infra --directory=infra
-```
-
-```bash
-# Authenticated Repositories
-GITHUB_USERNAME=<your-github-username>
-GITHUB_TOKEN=<your-github-token>
-
-$ porchctl repo register \
-    --namespace default \
-    --repo-basic-username=${GITHUB_USERNAME} \
-    --repo-basic-password=${GITHUB_TOKEN} \
-    https://github.com/${GITHUB_USERNAME}/blueprints.git
-```
-
-For more details on configuring authenticated repositories, see
-[Authenticating to Remote Git Repositories]({{% relref "/docs/porch/user-guides/git-authentication-config.md" %}}).
-
-The command line flags supported by `porchctl repo register` are:
-
-* `--directory` - Directory within the repository in which to look for packages.
-* `--branch` - Branch in the repository where finalized packages are committed (defaults to `main`).
-* `--name` - Name of the package repository Kubernetes resource. If unspecified, it will default to the name portion
-  (last segment) of the repository URL (`blueprints` in the example above).
-* `--description` - Brief description of the package repository.
-* `--deployment` - Boolean value; if specified, the repository is a deployment repository; published packages in a
-  deployment repository are considered deployment-ready.
-* `--repo-basic-username` - Username for repository authentication using basic auth.
-* `--repo-basic-password` - Password for repository authentication using basic auth.
-
-Additionally, `porchctl repo register` accepts the common `kubectl` command line flags for controlling aspects of the
-interaction with the Kubernetes apiserver, logging, and more (this is true for all `porchctl` CLI commands which
-interact with Porch).
-
-Use the `porchctl repo get` command to query registered repositories:
-
-```bash
-$ porchctl repo get -A
-NAMESPACE    NAME              TYPE   CONTENT   DEPLOYMENT   READY   ADDRESS
-default      oai               git    Package                True    https://github.com/nephio-project/catalog
-default      test-blueprints   git    Package                True    https://github.com/platkrm/test-blueprints.git
-porch-demo   porch-test        git    Package   true         True    http://localhost:3000/nephio/porch-test.git
-```
-
-The `porchctl get` commands support the common `kubectl`
-[flags](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output) to format output, for example
-`porchctl repo get --output=yaml`.
-
-The `porchctl repo unregister` command can be used to unregister a repository:
-
-```bash
-$ porchctl repo unregister test-blueprints --namespace default
-```
-
-## Package Discovery And Introspection
-
-The `porchctl rpkg` command group contains commands for interacting with package revisions managed by the Package
-Orchestration service. The `r` prefix used in the command group name stands for 'remote'.
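-
-The subcommands and their flags are also documented by the CLI itself; as with any kubectl-style command, you can
-append `--help` to explore them:
-
-```bash
-porchctl rpkg --help
-porchctl rpkg get --help
-```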
-
-The `porchctl rpkg get` command lists the package revisions in registered repositories:
-
-```bash
-$ porchctl rpkg get -A
-NAMESPACE    NAME                                                            PACKAGE                                      WORKSPACENAME   REVISION   LATEST   LIFECYCLE          REPOSITORY
-default      infra.infra.baremetal.bmh-template.main                         infra/baremetal/bmh-template                 main            -1         false    Published          infra
-default      infra.infra.capi.cluster-capi.main                              infra/capi/cluster-capi                      main            -1         false    Published          infra
-default      infra.infra.capi.cluster-capi.v2.0.0                            infra/capi/cluster-capi                      v2.0.0          -1         false    Published          infra
-default      infra.infra.capi.cluster-capi.v3.0.0                            infra/capi/cluster-capi                      v3.0.0          -1         false    Published          infra
-default      infra.infra.capi.vlanindex.main                                 infra/capi/vlanindex                         main            -1         false    Published          infra
-default      infra.infra.capi.vlanindex.v2.0.0                               infra/capi/vlanindex                         v2.0.0          -1         false    Published          infra
-default      infra.infra.capi.vlanindex.v3.0.0                               infra/capi/vlanindex                         v3.0.0          -1         false    Published          infra
-default      infra.infra.gcp.nephio-blueprint-repo.main                      infra/gcp/nephio-blueprint-repo              main            -1         false    Published          infra
-default      infra.infra.gcp.nephio-blueprint-repo.v1                        infra/gcp/nephio-blueprint-repo              v1              1          true     Published          infra
-default      infra.infra.gcp.nephio-blueprint-repo.v2.0.0                    infra/gcp/nephio-blueprint-repo              v2.0.0          -1         false    Published          infra
-default      infra.infra.gcp.nephio-blueprint-repo.v3.0.0                    infra/gcp/nephio-blueprint-repo              v3.0.0          -1         false    Published          infra
-default      oai.workloads.oai.oai-ran-operator.main                         workloads/oai/oai-ran-operator               main            -1         false    Published          oai
-default      oai.workloads.oai.oai-ran-operator.v1                           workloads/oai/oai-ran-operator               v1              1          true     Published          oai
-default      oai.workloads.oai.oai-ran-operator.v2.0.0                       workloads/oai/oai-ran-operator               v2.0.0          -1         false    Published          oai
-default      oai.workloads.oai.oai-ran-operator.v3.0.0                       workloads/oai/oai-ran-operator               v3.0.0          -1         false    Published          oai
-default      oai.workloads.oai.pkg-example-cucp-bp.main                      workloads/oai/pkg-example-cucp-bp            main            -1         false    Published          oai
-default      oai.workloads.oai.pkg-example-cucp-bp.v1                        workloads/oai/pkg-example-cucp-bp            v1              1          true     Published          oai
-default      oai.workloads.oai.pkg-example-cucp-bp.v2.0.0                    workloads/oai/pkg-example-cucp-bp            v2.0.0          -1         false    Published          oai
-default      oai.workloads.oai.pkg-example-cucp-bp.v3.0.0                    workloads/oai/pkg-example-cucp-bp            v3.0.0          -1         false    Published          oai
-default      oai.workloads.oai.pkg-example-cuup-bp.main                      workloads/oai/pkg-example-cuup-bp            main            -1         false    Published          oai
-default      test-blueprints.basens.main                                     basens                                       main            -1         false    Published          test-blueprints
-default      test-blueprints.basens.v1                                       basens                                       v1              1          false    Published          test-blueprints
-default      test-blueprints.basens.v2                                       basens                                       v2              2          false    Published          test-blueprints
-default      test-blueprints.basens.v3                                       basens                                       v3              3          true     Published          test-blueprints
-default      test-blueprints.empty.main                                      empty                                        main            -1         false    Published          test-blueprints
-default      test-blueprints.empty.v1                                        empty                                        v1              1          true     Published          test-blueprints
-porch-demo   porch-test.basedir.subdir.subsubdir.edge-function.inadir        basedir/subdir/subsubdir/edge-function       inadir          0          false    Draft              porch-test
-porch-demo   porch-test.basedir.subdir.subsubdir.network-function.dirdemo    basedir/subdir/subsubdir/network-function    dirdemo         0          false    Draft              porch-test
-porch-demo   porch-test.network-function.innerhome                           network-function                             innerhome       2          true     Published          porch-test
-porch-demo   porch-test.network-function.innerhome3                          network-function                             innerhome3      0          false    Proposed           porch-test
-porch-demo   porch-test.network-function.innerhome4                          network-function                             innerhome4      0          false    Draft              porch-test
-porch-demo   porch-test.network-function.main                                network-function                             main            -1         false    Published          porch-test
-porch-demo   porch-test.network-function.outerspace                          network-function                             outerspace      1          false    DeletionProposed   porch-test
-```
-
-The `NAME` column gives the Kubernetes name of the package revision resource. Names are of the form:
-
-**repository.([pathnode.]*)package.workspace**
-
-1. The first part (up to the first dot) is the **repository** that the package revision is in.
-1. The second (optional) part is zero or more **pathnode** nodes, identifying the path of the package.
-1. The second last part (between the second last and last dots) is the **package** that the package revision is in.
-1. The last part (after the last dot) is the **workspace** of the package revision, which uniquely identifies the package revision in the package.
-
-From the listing above, the package revision with the name `test-blueprints.basens.v3` is in a repository called `test-blueprints`. It is in the root of that
-repository because there are no **pathnode** entries in its name. It is in a package called `basens` and its workspace name is `v3`.
-
-The package revision with the name `porch-test.basedir.subdir.subsubdir.edge-function.inadir` is in the repo `porch-test`. It has a path of
-`basedir/subdir/subsubdir`. The package name is `edge-function` and its workspace name is `inadir`.
-
-The entire name must comply with the constraints on DNS Subdomain Names
-specified in the [kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-The name must:
-
-- contain no more than 253 characters
-- contain only lowercase alphanumeric characters, '-' or '.'
-- start with an alphanumeric character
-- end with an alphanumeric character
-
-Each part of the name must comply with the constraints on RFC 1123 label names
-specified in the [kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-Each part of the name must:
-
-- contain at most 63 characters
-- contain only lowercase alphanumeric characters or '-'
-- start with an alphanumeric character
-- end with an alphanumeric character
-
-The `PACKAGE` column contains the package name of a package revision. Of course, all package revisions in a package have the same package name. The
-package name includes the path to the directory containing the package if the package is not in the root directory of the repo. For example, in the listing above
-the packages `basedir/subdir/subsubdir/edge-function` and `basedir/subdir/subsubdir/network-function` are in the directory `basedir/subdir/subsubdir`. The
-`basedir/subdir/subsubdir/network-function` and `network-function` packages are different packages because they are in different directories.
-
-The `REVISION` column indicates the revision of the package.
-
-- Revisions of `1` or greater indicate released package revisions. When a package revision is `Published`, it is assigned the next
-  available revision number, starting at `1`. In the listing above, the `porch-test.network-function.innerhome` revision of package `network-function`
-  has a revision of `2` and is the latest revision of the package. The `porch-test.network-function.outerspace` revision of the package has a
-  revision of `1`. If the `porch-test.network-function.innerhome3` revision is published, it will be assigned a revision of `3` and will become
-  the latest package revision.
-- Package revisions that are not published (package revisions with a lifecycle status of `Draft` or `Proposed`) have a revision number of `0`.
-  There can be many revisions of a package with revision `0`, as is shown with revisions `porch-test.network-function.innerhome3` and
-  `porch-test.network-function.innerhome4` of package `network-function` above.
-- Placeholder package revisions that point at the head of a git branch or tag have a revision number of `-1`.
-
-The `LATEST` column indicates whether the package revision is the latest among the revisions of the same package. In the
-output above, `3` is the latest revision of the `basens` package and `1` is the latest revision of the `empty` package.
-
-The `LIFECYCLE` column indicates the lifecycle stage of the package revision, one of: `Draft`, `Proposed`, `Published` or `DeletionProposed`.
-
-The `WORKSPACENAME` column indicates the workspace name of a package revision. The workspace name is selected by a user when a draft
-package revision is created. The workspace name must be unique among package revisions in the same package. A user is free to
-select any workspace name that complies with the constraints on DNS Subdomain Names specified in the
-[kubernetes rules for naming objects and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/).
-
-{{% alert title="Scope of WORKSPACENAME" color="primary" %}}
-The scope of a workspace name is restricted to its package; it is merely a string that identifies a package revision within a package.
-The workspace name `v1` on the `empty` package has no relation to the workspace name `v1` on the `basens` package listed above.
-A user has simply decided to use the same workspace name on two separate package revisions.
-{{% /alert %}}
-
-{{% alert title="Setting WORKSPACENAME and REVISION from repositories" color="primary" %}}
-When Porch connects to a repository, it scans the branches and tags of the Git repository for package revisions. It descends the directory tree of the repo
-looking for files called `Kptfile`. When it finds a Kptfile in a directory, Porch knows that it has found a kpt package and it does not search any child directories
-of this directory. Porch then examines all branches and tags that have references to that package and finds package revisions using the following rules:
-1. Look for a commit message of the form `kpt:{"package":"<package-name>","workspaceName":"<workspace-name>","revision":"<revision>"}` at the tip of the branch/tag and
-   set the workspace name and revision from the commit message. The commit message `kpt:{"package":"network-function","workspaceName":"outerspace","revision":"1"}`
-   is used to set the workspace name to `outerspace` and the revision to `1` in the case of the `porch-test.network-function.outerspace`
-   package revision in the listing above.
-2. If 1. fails, and if the reference is of the form `.v1`, set the workspace name to `v1` and the revision to `1`, as is the case for the
-   `oai.workloads.oai.oai-ran-operator.v1` package revision in the listing above.
-3. If 2. fails, set the workspace name to the branch or tag name, and the revision to `-1`, as is the case for the `infra.infra.gcp.nephio-blueprint-repo.v3.0.0`
-   package revision in the listing above. The workspace name is set to the branch name `v3.0.0`, and the revision is set to `-1`.
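-
-You can inspect the tip commit message of a branch or tag yourself (a sketch; the ref name is a placeholder, and the
-output shown is the rule 1 example message from above):
-
-```bash
-$ git log -1 --format=%B <branch-or-tag>
-kpt:{"package":"network-function","workspaceName":"outerspace","revision":"1"}
-```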
-{{% /alert %}} - -## Package Revision Filtering - -Simple filtering of package revisions by name (substring) and revision (exact match) is supported by the CLI using -`--name`, `--revision` and `--workspace` flags: - -```bash -$ porchctl -n porch-demo rpkg get --name network-function -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.network-function.dirdemo network-function dirdemo 1 false Published porch-test -porch-test.network-function.innerhome network-function innerhome 2 true Published porch-test -porch-test.network-function.main network-function main -1 false Published porch-test - -$ porchctl -n porch-demo rpkg get --revision 1 -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.basedir.subdir.subsubdir.edge-function2.diredge basedir/subdir/subsubdir/edge-function2 diredge 1 true Published porch-test -porch-test.edge-function2.diredgeab edge-function2 diredgeab 1 true Published porch-test -porch-test.edge-function.diredge edge-function diredge 1 true Published porch-test -porch-test.network-function3.outerspace network-function3 outerspace 1 true Published porch-test -porch-test.network-function.dirdemo network-function dirdemo 1 false Published porch-test - -$ porchctl -n porch-demo rpkg get --workspace outerspace -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -porch-test.network-function2.outerspace network-function2 outerspace 0 false Draft porch-test -porch-test.network-function3.outerspace network-function3 outerspace 1 true Published porch-test -``` - -You can also filter package revisions using the `kubectl` CLI with the `--selector` and `--field-selector` flags under the same conventions as for other KRM objects. - -The `--selector` flag can be used to filter on one or more [metadata labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering): -```bash -$ kubectl get packagerevisions --show-labels --selector 'kpt.dev/latest-revision=true' -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY LABELS -test-blueprints.basens.v3 basens v3 3 true Published test-blueprints kpt.dev/latest-revision=true -test-blueprints.empty.v1 empty v1 1 true Published test-blueprints kpt.dev/latest-revision=true -``` - -The `--field-selector` flag can be used to filter on one or more package revision [fields](https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/). - -### Supported Fields - -As per Kubernetes convention, the `--field-selector` flag supports a subset of the PackageRevision resource type's fields: -- `metadata.name` -- `metadata.namespace` -- `spec.revision` -- `spec.packageName` -- `spec.repository` -- `spec.workspaceName` -- `spec.lifecycle` - -{{% alert title="Note" color="primary" %}} - - The `spec.versions[*].selectableFields` field is not available for the PackageRevision resource type. Changing the fields supported by `--field-selector` requires editing Porch's source code and rebuilding the porch-server microservice. 
-
-{{% /alert %}}
-
-For example:
-```bash
-$ kubectl get packagerevisions --show-labels --field-selector 'spec.repository==oai'
-NAME                        PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY   LABELS
-oai.database.main           database           main            -1         false    Published   oai
-oai.oai-amf.main            oai-amf            main            -1         false    Published   oai
-oai.oai-ausf.main           oai-ausf           main            -1         false    Published   oai
-oai.oai-cp-operators.main   oai-cp-operators   main            -1         false    Published   oai
-oai.oai-nrf.main            oai-nrf            main            -1         false    Published   oai
-oai.oai-repository.main     oai-repository     main            -1         false    Published   oai
-oai.oai-smf.main            oai-smf            main            -1         false    Published   oai
-oai.oai-udm.main            oai-udm            main            -1         false    Published   oai
-oai.oai-udr.main            oai-udr            main            -1         false    Published   oai
-oai.oai-upf-edge.main       oai-upf-edge       main            -1         false    Published   oai
-oai.oai-up-operators.main   oai-up-operators   main            -1         false    Published   oai
-```
-
-{{% alert title="Note" color="primary" %}}
-
-Due to the restrictions of Porch's internal caching behavior, the `--field-selector` flag supports only the `=` and `==` operators. **The `!=` operator is not supported.**
-
-{{% /alert %}}
-
-The common `kubectl` [flags that control output format](https://kubernetes.io/docs/reference/kubectl/#output-options) are available as well:
-
-```bash
-$ porchctl rpkg get -n porch-demo porch-test.network-function.innerhome -o yaml
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: porch-test.network-function.innerhome
-  namespace: porch-demo
-spec:
-  lifecycle: Published
-  packageName: network-function
-  repository: porch-test
-  revision: 2
-  workspaceName: innerhome
-...
-```
-
-The `porchctl rpkg pull` command can be used to read the package revision resources.
-
-The command can be used to print the package revision resources as a `ResourceList` to `stdout`, which enables
-[chaining](https://kpt.dev/book/04-using-functions/#chaining-functions-using-the-unix-pipe)
-evaluation of functions on the package revision pulled from the Package Orchestration server.
-
-```bash
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-- apiVersion: ""
-  kind: KptRevisionMetadata
-  metadata:
-    name: porch-test.network-function.innerhome
-    namespace: porch-demo
-...
-```
-
-One of the driving motivations for the Package Orchestration service is enabling
-WYSIWYG authoring of packages, including their contents, in highly usable UIs.
-Porch therefore supports reading and updating package *contents*.
-
-In addition to using a [UI](https://kpt.dev/guides/namespace-provisioning-ui/) with Porch, we
-can change the package contents by pulling the package from Porch onto the local
-disk, making any desired changes, and then pushing the updated contents to Porch.
-
-```bash
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome ./innerhome
-
-$ find innerhome
-
-./innerhome
-./innerhome/.KptRevisionMetadata
-./innerhome/README.md
-./innerhome/Kptfile
-./innerhome/package-context.yaml
-```
-
-The command downloaded the `porch-test.network-function.innerhome` package revision contents and saved
-them in the `./innerhome` directory. Now you will make some changes.
-
-First, note that even though Porch updated the namespace name (in
-`namespace.yaml`) to `innerhome` when the package was cloned, the `README.md`
-was not updated. Let's fix it first.
-
-Open the `README.md` in your favorite editor and update its contents, for
-example:
-
-```
-# innerhome
-
-## Description
-kpt package for provisioning the innerhome namespace
-```
-
-In the second change, add a new mutator to the `Kptfile` pipeline. Use the
-[set-labels](https://catalog.kpt.dev/function-catalog/set-labels/v0.1/) function, which will add
-labels to all resources in the package. Add the following mutator to the
-`Kptfile` `pipeline` section:
-
-```yaml
-  - image: gcr.io/kpt-fn/set-labels:v0.1.5
-    configMap:
-      color: orange
-      fruit: apple
-```
-
-The whole `pipeline` section now looks like this:
-
-```yaml
-pipeline:
-  mutators:
-    - image: gcr.io/kpt-fn/set-namespace:v0.4.1
-      configPath: package-context.yaml
-    - image: gcr.io/kpt-fn/apply-replacements:v0.1.1
-      configPath: update-rolebinding.yaml
-    - image: gcr.io/kpt-fn/set-labels:v0.1.5
-      configMap:
-        color: orange
-        fruit: apple
-```
-
-Save the changes and push the package contents back to the server:
-
-```sh
-# Push updated package contents to the server
-$ porchctl rpkg push -n porch-demo porch-test.network-function.innerhome ./innerhome
-```
-
-Now, pull the contents of the package revision again, and inspect one of the
-configuration files.
-
-```sh
-# Pull the updated package contents to local drive for inspection:
-$ porchctl rpkg pull -n porch-demo porch-test.network-function.innerhome ./updated-innerhome
-
-# Inspect updated-innerhome/namespace.yaml
-$ cat updated-innerhome/namespace.yaml
-
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: innerhome
-  labels:
-    color: orange
-    fruit: apple
-spec: {}
-```
-
-The updated namespace now has new labels! What happened?
-
-Whenever a package is updated during the authoring process, Porch automatically re-renders it to make sure that all
-mutators and validators are executed; this covers the cases where existing functions in the pipeline were changed or
-a new function was added to the pipeline list. So when we added the new `set-labels` mutator, as soon as we pushed
-the updated package contents to Porch, Porch re-rendered the package and
-the `set-labels` function applied the labels we requested (`color: orange` and
-`fruit: apple`).
-
-## Authoring Packages
-
-Several commands in the `porchctl rpkg` group support package authoring:
-
-* `init` - Initializes a new package revision in the target repository.
-* `clone` - Creates a clone of a source package revision in the target repository.
-* `copy` - Creates a new package revision from an existing one.
-* `push` - Pushes package revision resources into a remote package.
-* `del` - Deletes one or more package revisions in registered repositories.
-
-The `porchctl rpkg init` command can be used to initialize a new package revision. The Porch server will create and
-initialize a new package revision (as a draft) and save it in the specified repository.
-
-```bash
-$ porchctl rpkg init new-package --repository=porch-test --workspace=my-workspace -n porch-demo
-porch-test.new-package.my-workspace created
-
-$ porchctl rpkg get -n porch-demo porch-test.new-package.my-workspace
-NAME                                  PACKAGE       WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-porch-test.new-package.my-workspace   new-package   my-workspace    0          false    Draft       porch-test
-```
-
-The new package revision is created in the `Draft` lifecycle stage. This is true also for all commands that create a
-new package revision (`init`, `clone` and `copy`).
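-
-Because the lifecycle stage is an ordinary field on the *PackageRevision* resource, one way to list all of the
-current drafts is a field selector (a sketch, reusing the *porch-demo* namespace and the `spec.lifecycle` field from
-the *Supported Fields* list above):
-
-```bash
-kubectl get packagerevisions -n porch-demo --field-selector 'spec.lifecycle=Draft'
-```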
-
-Additional flags supported by the `porchctl rpkg init` command are:
-
-* `--repository` - Repository in which the package revision will be created.
-* `--workspace` - Workspace of the new package revision.
-* `--description` - Short description of the package revision.
-* `--keywords` - List of keywords for the package revision.
-* `--site` - Link to a page with information about the package revision.
-
-Use the `porchctl rpkg clone` command to create a _downstream_ package revision by cloning an _upstream_ package revision. You can find out more about the _upstream_ and _downstream_ sections of the `Kptfile` in [Getting a Package](https://kpt.dev/book/03-packages/#getting-a-package).
-
-```bash
-$ porchctl rpkg clone porch-test.new-package.my-workspace new-package-clone --repository=porch-deployment -n porch-demo
-porch-deployment.new-package-clone.v1 created
-
-# Confirm the package revision was created
-$ porchctl rpkg get porch-deployment.new-package-clone.v1 -n porch-demo
-NAME                                    PACKAGE             WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-porch-deployment.new-package-clone.v1   new-package-clone   v1              0          false    Draft       porch-deployment
-```
-
-{{% alert title="Note" color="primary" %}}
-  A cloned package must be created in a repository in the same namespace as
-  the source package. Cloning a package with the Package Orchestration Service
-  retains a reference to the upstream package revision in the clone, and
-  cross-namespace references are not allowed. Package revisions in repositories
-  in other namespaces can be cloned using a reference directly to the underlying
-  OCI or Git repository, as described below.
-{{% /alert %}}
-
-`porchctl rpkg clone` can also be used to clone package revisions that are in repositories not registered with Porch, for
-example:
-
-```bash
-$ porchctl rpkg clone \
-    https://github.com/nephio-project/catalog.git cloned-pkg-example-ue-bp \
-    --directory=workloads/oai/pkg-example-ue-bp \
-    --ref=main \
-    --repository=porch-deployment \
-    --namespace=porch-demo
-porch-deployment.cloned-pkg-example-ue-bp.v1 created
-
-# Confirm the package revision was created
-$ porchctl rpkg get -n porch-demo porch-deployment.cloned-pkg-example-ue-bp.v1
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-```
-
-The flags supported by the `porchctl rpkg clone` command are:
-
-* `--directory` - Directory within the upstream repository where the upstream
-  package revision is located.
-* `--ref` - Ref in the upstream repository where the upstream package revision is
-  located. This can be a branch, tag, or SHA.
-* `--repository` - Repository to which the package revision will be cloned (the downstream
-  repository).
-* `--workspace` - Workspace to assign to the downstream package revision.
-
-The `porchctl rpkg copy` command can be used to create a new revision of an existing package. It is the means of
-modifying an already published package revision.
-
-```bash
-$ porchctl rpkg copy porch-test.network-function.innerhome --workspace=great-outdoors -n porch-demo
-porch-test.network-function.great-outdoors created
-
-# Confirm the package revision was created
-$ porchctl rpkg get porch-test.network-function.great-outdoors -n porch-demo
-NAME                                          PACKAGE            WORKSPACENAME    REVISION   LATEST   LIFECYCLE   REPOSITORY
-porch-test.network-function.great-outdoors   network-function   great-outdoors   0          false    Draft       porch-test
-```
-
-Unlike `clone`, which establishes the upstream-downstream relationship
-between the respective packages and updates the `Kptfile`
-to reflect that relationship, the `copy` command does *not* change the
-upstream-downstream relationships. The copy of a package shares the same
-upstream package as the package from which it was copied. Specifically,
-in this case both packages have identical contents,
-including upstream information, and differ in revision only.
-
-The `porchctl rpkg pull` and `porchctl rpkg push` commands can be used to update the resources (package revision contents) of a package _draft_:
-
-```bash
-$ porchctl rpkg pull porch-test.network-function.great-outdoors ./great-outdoors -n porch-demo
-
-# Make edits using your favorite YAML editor, for example adding a new resource
-$ cat <<EOF > ./great-outdoors/config-map.yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: example-config-map
-data:
-  color: green
-EOF
-
-# Push the updated contents to the Package Orchestration server, updating the
-# package revision contents.
-$ porchctl rpkg push porch-test.network-function.great-outdoors ./great-outdoors -n porch-demo
-
-# Confirm that the remote package revision now includes the new ConfigMap resource
-$ porchctl rpkg pull porch-test.network-function.great-outdoors -n porch-demo
-apiVersion: config.kubernetes.io/v1
-kind: ResourceList
-items:
-...
-- apiVersion: v1
-  kind: ConfigMap
-  metadata:
-    name: example-config-map
-    annotations:
-      config.kubernetes.io/index: '0'
-      internal.config.kubernetes.io/index: '0'
-      internal.config.kubernetes.io/path: 'config-map.yaml'
-      config.kubernetes.io/path: 'config-map.yaml'
-  data:
-    color: green
-...
-```
-
-A package revision can be deleted using the `porchctl rpkg del` command:
-
-```bash
-# Delete the package revision
-$ porchctl rpkg del porch-test.network-function.great-outdoors -n porch-demo
-porch-test.network-function.great-outdoors deleted
-```
-
-## Package Lifecycle and Approval Flow
-
-Authoring is performed on package revisions in the _Draft_ lifecycle stage. Before a package revision can be deployed, copied, or
-cloned, it must be _Published_. The approval flow is the process by which a package revision is advanced from the _Draft_ stage
-through the _Proposed_ stage and finally to the _Published_ lifecycle stage.
-
-The commands used to manage package revision lifecycle stages include:
-
-* `propose` - Proposes to finalize a package revision draft.
-* `approve` - Approves a proposal to finalize a package revision.
-* `reject` - Rejects a proposal to finalize a package revision.
-
-In the [Authoring Packages](#authoring-packages) section above we created several _draft_ package revisions and in this section
-we will create proposals for publishing some of them.
-
-```bash
-# List package revisions to identify relevant drafts:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              0          false    Draft       porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft       porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft       porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft       porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft       porch-test
-...
-
-# Propose two package revisions to be published
-$ porchctl rpkg propose \
-    porch-deployment.new-package-clone.v1 \
-    porch-test.network-function3.innerhome6 \
-    -n porch-demo
-
-porch-deployment.new-package-clone.v1 proposed
-porch-test.network-function3.innerhome6 proposed
-
-# Confirm the package revisions are now Proposed
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              0          false    Proposed    porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft       porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft       porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Proposed    porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft       porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, will review and either approve
-or reject the proposals. To aid the decision, the platform administrator may inspect the package revision contents using the
-commands above, such as `porchctl rpkg pull`.
-
-```bash
-# Approve a proposal to publish a package revision
-$ porchctl rpkg approve porch-deployment.new-package-clone.v1 -n porch-demo
-porch-deployment.new-package-clone.v1 approved
-
-# Reject a proposal to publish a package revision
-$ porchctl rpkg reject porch-test.network-function3.innerhome6 -n porch-demo
-porch-test.network-function3.innerhome6 no longer proposed for approval
-```
-
-Now the user can confirm the lifecycle stages of the package revisions:
-
-```bash
-# Confirm package revision lifecycle stages after the approvals:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              1          true     Published   porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft       porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft       porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft       porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft       porch-test
-```
-
-Observe that the rejected proposal returned the package revision to the _Draft_ lifecycle stage. The package revision whose
-proposal was approved is now in the _Published_ state.
-
-An approved package revision cannot be deleted directly; it must first be proposed for deletion.
-
-```bash
-$ porchctl rpkg propose-delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision lifecycle stages after the deletion is proposed:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE          REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft              porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              1          true     DeletionProposed   porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft              porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft              porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft              porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft              porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, will review and either approve
-or reject the deletion.
-
-```bash
-$ porchctl rpkg reject -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm the package revision deletion has been rejected:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              1          true     Published   porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft       porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft       porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft       porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft       porch-test
-```
-
-The package revision can again be proposed for deletion.
-
-```bash
-$ porchctl rpkg propose-delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm package revision lifecycle stages after the deletion is proposed:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE          REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft              porch-deployment
-porch-deployment.new-package-clone.v1          new-package-clone          v1              1          true     DeletionProposed   porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft              porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft              porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft              porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft              porch-test
-```
-
-At this point, a person in the _platform administrator_ role, or even an automated process, decides to proceed with the deletion.
-
-```bash
-$ porchctl rpkg delete -n porch-demo porch-deployment.new-package-clone.v1
-
-# Confirm the package revision is deleted:
-$ porchctl rpkg get
-NAME                                           PACKAGE                    WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-...
-porch-deployment.cloned-pkg-example-ue-bp.v1   cloned-pkg-example-ue-bp   v1              0          false    Draft       porch-deployment
-porch-test.network-function2.outerspace        network-function2          outerspace      0          false    Draft       porch-test
-porch-test.network-function3.innerhome5        network-function3          innerhome5      0          false    Draft       porch-test
-porch-test.network-function3.innerhome6        network-function3          innerhome6      0          false    Draft       porch-test
-porch-test.new-package.my-workspace            new-package                my-workspace    0          false    Draft       porch-test
-```
-
-## Package Upgrade
-
-The `porchctl rpkg upgrade` command can be used to create a new package revision that upgrades a published downstream
-package revision to a more recent published revision of its upstream package.
-
-The flags supported by the `porchctl rpkg upgrade` command are:
-
-* `--revision` - (*Optional*) The revision number of the upstream package that the target
-  downstream package revision should be upgraded to.
-  The corresponding revision must be published. If not set, the latest published revision is chosen.
-* `--workspace` - The workspace name of the newly created package revision.
-* `--strategy` - The strategy to use for the upgrade.
-  Options: `resource-merge` (*default*), `fast-forward`, `force-delete-replace`, `copy-merge`.
-
-```bash
-# Upgrade the repository.package.1 package revision to the latest published revision of its upstream, using the resource-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2
-
-# Upgrade the repository.package.1 package revision to revision 3 of its upstream, using the resource-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2 --revision=3
-
-# Upgrade the repository.package.1 package revision to revision 3 of its upstream, using the copy-merge strategy
-$ porchctl rpkg upgrade repository.package.1 --workspace=2 --revision=3 --strategy=copy-merge
-```
diff --git a/content/en/docs/porch/user-guides/preparing-the-environment.md b/content/en/docs/porch/user-guides/preparing-the-environment.md
deleted file mode 100644
index 890b82ac..00000000
--- a/content/en/docs/porch/user-guides/preparing-the-environment.md
+++ /dev/null
@@ -1,1678 +0,0 @@
----
-title: "Preparing the Environment"
-type: docs
-weight: 2
-description: "A tutorial on preparing the environment for Porch"
----
-
-## Exploring the Porch resources
-
-We have configured three repositories in Porch:
-
-```bash
-kubectl get repositories -n porch-demo
-NAME                  TYPE   CONTENT   DEPLOYMENT   READY   ADDRESS
-edge1                 git    Package   true         True    http://172.18.255.200:3000/nephio/edge1.git
-external-blueprints   git    Package   false        True    https://github.com/nephio-project/free5gc-packages.git
-management            git    Package   false        True    http://172.18.255.200:3000/nephio/management.git
-```
-
-A repository is a CR of the Porch Repository CRD. You can examine the *repositories.config.porch.kpt.dev* CRD with
-either of the following commands (both of which are rather verbose):
-
-```bash
-kubectl get crd -n porch-system repositories.config.porch.kpt.dev -o yaml
-kubectl describe crd -n porch-system repositories.config.porch.kpt.dev
-```
-
-You can examine any other CRD by using the commands above and changing the CRD name/namespace.
-
-The full list of Porch CRDs is as follows:
-
-```bash
-kubectl api-resources --api-group=porch.kpt.dev
-NAME                       SHORTNAMES   APIVERSION               NAMESPACED   KIND
-packagerevisionresources                porch.kpt.dev/v1alpha1   true         PackageRevisionResources
-packagerevisions                        porch.kpt.dev/v1alpha1   true         PackageRevision
-packages                                porch.kpt.dev/v1alpha1   true         Package
-```
-
-The PackageRevision CRD is used to keep track of the revisions (versions) of each package found in the repositories.
- -```bash -kubectl get packagerevision -n porch-demo -NAME PACKAGE WORKSPACENAME REVISION LATEST LIFECYCLE REPOSITORY -external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0 free5gc-cp main main false Published external-blueprints -external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 free5gc-cp v1 v1 true Published external-blueprints -external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5 free5gc-operator main main false Published external-blueprints -external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6 free5gc-operator v1 v1 false Published external-blueprints -external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54 free5gc-operator v2 v2 false Published external-blueprints -external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3 free5gc-operator v3 v3 false Published external-blueprints -external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036 free5gc-operator v4 v4 false Published external-blueprints -external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d free5gc-operator v5 v5 true Published external-blueprints -external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939 free5gc-upf main main false Published external-blueprints -external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd free5gc-upf v1 v1 true Published external-blueprints -external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f pkg-example-amf-bp main main false Published external-blueprints -external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180 pkg-example-amf-bp v1 v1 false Published external-blueprints -external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c pkg-example-amf-bp v2 v2 false Published external-blueprints -external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107 pkg-example-amf-bp v3 v3 false Published external-blueprints -external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae pkg-example-amf-bp v4 v4 false Published external-blueprints -external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3 pkg-example-amf-bp v5 v5 true Published external-blueprints -external-blueprints-00b6673c438909975548b2b9f20c2e1663161815 pkg-example-smf-bp main main false Published external-blueprints -external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2 pkg-example-smf-bp v1 v1 false Published external-blueprints -external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328 pkg-example-smf-bp v2 v2 false Published external-blueprints -external-blueprints-2006501702e105501784c78be9e7d57e426d85e8 pkg-example-smf-bp v3 v3 false Published external-blueprints -external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253 pkg-example-smf-bp v4 v4 false Published external-blueprints -external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569 pkg-example-smf-bp v5 v5 true Published external-blueprints -external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84 pkg-example-upf-bp main main false Published external-blueprints -external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940 pkg-example-upf-bp v1 v1 false Published external-blueprints -external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9 pkg-example-upf-bp v2 v2 false Published external-blueprints -external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5 pkg-example-upf-bp v3 v3 false Published external-blueprints -external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece pkg-example-upf-bp v4 v4 false Published external-blueprints -external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40 pkg-example-upf-bp v5 v5 true Published external-blueprints -``` - -The 
PackageRevisionResources resource is an aggregated API resource that Porch uses to expose the full contents of a
-package revision as stored in its repository.
-
-```bash
-kubectl get packagerevisionresources -n porch-demo
-NAME                                                           PACKAGE              WORKSPACENAME   REVISION   REPOSITORY            FILES
-external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0   free5gc-cp           main            main       external-blueprints   28
-external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9   free5gc-cp           v1              v1         external-blueprints   28
-external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5   free5gc-operator     main            main       external-blueprints   14
-external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6   free5gc-operator     v1              v1         external-blueprints   11
-external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54   free5gc-operator     v2              v2         external-blueprints   11
-external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3   free5gc-operator     v3              v3         external-blueprints   14
-external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036   free5gc-operator     v4              v4         external-blueprints   14
-external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d   free5gc-operator     v5              v5         external-blueprints   14
-external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939   free5gc-upf          main            main       external-blueprints   6
-external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd   free5gc-upf          v1              v1         external-blueprints   6
-external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f   pkg-example-amf-bp   main            main       external-blueprints   16
-external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180   pkg-example-amf-bp   v1              v1         external-blueprints   7
-external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c   pkg-example-amf-bp   v2              v2         external-blueprints   8
-external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107   pkg-example-amf-bp   v3              v3         external-blueprints   16
-external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae   pkg-example-amf-bp   v4              v4         external-blueprints   16
-external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3   pkg-example-amf-bp   v5              v5         external-blueprints   16
-external-blueprints-00b6673c438909975548b2b9f20c2e1663161815   pkg-example-smf-bp   main            main       external-blueprints   17
-external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2   pkg-example-smf-bp   v1              v1         external-blueprints   8
-external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328   pkg-example-smf-bp   v2              v2         external-blueprints   9
-external-blueprints-2006501702e105501784c78be9e7d57e426d85e8   pkg-example-smf-bp   v3              v3         external-blueprints   17
-external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253   pkg-example-smf-bp   v4              v4         external-blueprints   17
-external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569   pkg-example-smf-bp   v5              v5         external-blueprints   17
-external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84   pkg-example-upf-bp   main            main       external-blueprints   17
-external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940   pkg-example-upf-bp   v1              v1         external-blueprints   8
-external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9   pkg-example-upf-bp   v2              v2         external-blueprints   8
-external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5   pkg-example-upf-bp   v3              v3         external-blueprints   17
-external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece   pkg-example-upf-bp   v4              v4         external-blueprints   17
-external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40   pkg-example-upf-bp   v5              v5         external-blueprints   17
-```
-
-Let's examine the *free5gc-cp v1* package.
-
-The PackageRevision CR name for *free5gc-cp v1* is external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9.
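-
-Rather than scanning the full listing for that name, the field selector support noted earlier in this document can be
-used to narrow the search (remember that only the `=` and `==` operators are supported); the `grep` filter here is
-just illustrative:
-
-```bash
-kubectl get packagerevisions -n porch-demo \
-    --field-selector 'spec.repository==external-blueprints' | grep free5gc-cp
-```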
-
-```bash
-kubectl get packagerevision -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml
-```
-
-```yaml
-apiVersion: porch.kpt.dev/v1alpha1
-kind: PackageRevision
-metadata:
-  creationTimestamp: "2023-06-13T13:35:34Z"
-  labels:
-    kpt.dev/latest-revision: "true"
-  name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9
-  namespace: porch-demo
-  resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61
-  uid: uid:free5gc-cp:v1
-spec:
-  lifecycle: Published
-  packageName: free5gc-cp
-  repository: external-blueprints
-  revision: v1
-  workspaceName: v1
-status:
-  publishTimestamp: "2023-06-13T13:35:34Z"
-  publishedBy: dnaleksandrov@gmail.com
-  upstreamLock: {}
-```
-
-Getting the *PackageRevisionResources* pulls the package from its repository, with each file serialized into a name-value
-map of resources in its spec.
-Open this to see the command and the result - -```bash -kubectl get packagerevisionresources -n porch-demo external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 -o yaml -``` -```yaml -apiVersion: porch.kpt.dev/v1alpha1 -kind: PackageRevisionResources -metadata: - creationTimestamp: "2023-06-13T13:35:34Z" - name: external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 - namespace: porch-demo - resourceVersion: 5fc9561dcd4b2630704c192e89887490e2ff3c61 - uid: uid:free5gc-cp:v1 -spec: - packageName: free5gc-cp - repository: external-blueprints - resources: - Kptfile: | - apiVersion: kpt.dev/v1 - kind: Kptfile - metadata: - name: free5gc-cp - annotations: - config.kubernetes.io/local-config: "true" - info: - description: this package represents free5gc NFs, which are required to perform E2E conn testing - pipeline: - mutators: - - image: gcr.io/kpt-fn/set-namespace:v0.4.1 - configPath: package-context.yaml - README.md: "# free5gc-cp\n\n## Description\nPackage representing free5gc control - plane NFs.\n\nPackage definition is based on [Towards5gs helm charts](https://github.com/Orange-OpenSource/towards5gs-helm), - \nand service level configuration is preserved as defined there.\n\n### Network - Functions (NFs)\n\nfree5gc project implements following NFs:\n\n\n| NF | Description - | local-config |\n| --- | --- | --- |\n| AMF | Access and Mobility Management - Function | true |\n| AUSF | Authentication Server Function | false |\n| NRF - | Network Repository Function | false |\n| NSSF | Network Slice Selection Function - | false |\n| PCF | Policy Control Function | false |\n| SMF | Session Management - Function | true |\n| UDM | Unified Data Management | false |\n| UDR | Unified - Data Repository | false |\n\nalso Database and Web UI is defined:\n\n| Service - | Description | local-config |\n| --- | --- | --- |\n| mongodb | Database to - store free5gc data | false |\n| webui | UI used to register UE | false |\n\nNote: - `local-config: true` indicates that this resources won't be deployed to the - workload cluster\n\n### Dependencies\n\n- `mongodb` requires `Persistent Volume`. 
- We need to assure that dynamic PV provisioning will be available on the cluster\n- - `NRF` should be running before other NFs will be instantiated\n - all NFs - packages contain `wait-nrf` init-container\n- `NRF` and `WEBUI` require DB\n - \ - packages contain `wait-mongodb` init-container\n- `WEBUI` service is exposed - as `NodePort` \n - will be used to register UE on the free5gc side\n- Communication - via `SBI` between NFs and communication with `mongodb` is defined using K8s - `ClusterIP` services\n - it forces you to deploy all NFs on a single cluster - or consider including `service mesh` in a multi-cluster scenario\n\n## Usage\n\n### - Fetch the package\n`kpt pkg get REPO_URI[.git]/PKG_PATH[@VERSION] free5gc-cp`\n\nDetails: - https://kpt.dev/reference/cli/pkg/get/\n\n### View package content\n`kpt pkg - tree free5gc-cp`\n\nDetails: https://kpt.dev/reference/cli/pkg/tree/\n\n### - Apply the package\n```\nkpt live init free5gc-cp\nkpt live apply free5gc-cp - --reconcile-timeout=2m --output=table\n```\n\nDetails: https://kpt.dev/reference/cli/live/\n\n" - ausf/ausf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - ausf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n ausfcfg.yaml: |\n info:\n version: 1.0.2\n description: - AUSF initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nausf-auth\n \n sbi:\n scheme: http\n registerIPv4: - ausf-nausf # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # - IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/ausf.key\n pem: config/TLS/ausf.pem\n \n nrfUri: - http://nrf-nnrf:8000\n plmnSupportList:\n - mcc: 208\n mnc: - 93\n - mcc: 123\n mnc: 45\n groupId: ausfGroup001\n eapAkaSupiImsiPrefix: - false\n\n logger:\n AUSF:\n ReportCaller: false\n debugLevel: - info\n" - ausf/ausf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-ausf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: ausf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: ausf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: ausf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n \n containers:\n - \ - name: ausf\n image: towards5gs/free5gc-ausf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./ausf\"]\n args: [\"-c\", \"../config/ausfcfg.yaml\"]\n - \ env:\n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: ausf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: ausf-volume\n projected:\n sources:\n - - configMap:\n name: ausf-configmap\n" - ausf/ausf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: ausf-nausf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: ausf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: ausf - mongodb/dep-sts.yaml: "---\napiVersion: apps/v1\nkind: 
StatefulSet\nmetadata:\n - \ name: mongodb\n namespace: default\n labels:\n app.kubernetes.io/name: - mongodb\n app.kubernetes.io/instance: free5gc\n app.kubernetes.io/component: - mongodb\nspec:\n serviceName: mongodb\n updateStrategy:\n type: RollingUpdate\n - \ selector:\n matchLabels:\n app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n template:\n metadata:\n - \ labels:\n app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n spec:\n \n serviceAccountName: - mongodb\n affinity:\n podAffinity:\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - \ - podAffinityTerm:\n labelSelector:\n matchLabels:\n - \ app.kubernetes.io/name: mongodb\n app.kubernetes.io/instance: - free5gc\n app.kubernetes.io/component: mongodb\n namespaces:\n - \ - \"default\"\n topologyKey: kubernetes.io/hostname\n - \ weight: 1\n nodeAffinity:\n \n securityContext:\n - \ fsGroup: 1001\n sysctls: []\n containers:\n - name: - mongodb\n image: docker.io/bitnami/mongodb:4.4.4-debian-10-r0\n imagePullPolicy: - \"IfNotPresent\"\n securityContext:\n runAsNonRoot: true\n - \ runAsUser: 1001\n env:\n - name: BITNAMI_DEBUG\n - \ value: \"false\"\n - name: ALLOW_EMPTY_PASSWORD\n value: - \"yes\"\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: - \"0\"\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: - \"no\"\n - name: MONGODB_ENABLE_IPV6\n value: \"no\"\n - \ - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: \"no\"\n - \ ports:\n - name: mongodb\n containerPort: - 27017\n livenessProbe:\n exec:\n command:\n - \ - mongo\n - --disableImplicitSessions\n - - --eval\n - \"db.adminCommand('ping')\"\n initialDelaySeconds: - 30\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n readinessProbe:\n exec:\n - \ command:\n - bash\n - -ec\n - - |\n mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary - || db.hello().secondary' | grep -q 'true'\n initialDelaySeconds: - 5\n periodSeconds: 10\n timeoutSeconds: 5\n successThreshold: - 1\n failureThreshold: 6\n resources:\n limits: - {}\n requests: {}\n volumeMounts:\n - name: datadir\n - \ mountPath: /bitnami/mongodb/data/db/\n subPath: \n - \ volumes:\n volumeClaimTemplates:\n - metadata:\n name: datadir\n - \ spec:\n accessModes:\n - \"ReadWriteOnce\"\n resources:\n - \ requests:\n storage: \"6Gi\"\n" - mongodb/serviceaccount.yaml: | - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - secrets: - - name: mongodb - mongodb/svc.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: mongodb - namespace: default - labels: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - spec: - type: ClusterIP - ports: - - name: mongodb - port: 27017 - targetPort: mongodb - nodePort: null - selector: - app.kubernetes.io/name: mongodb - app.kubernetes.io/instance: free5gc - app.kubernetes.io/component: mongodb - namespace.yaml: | - apiVersion: v1 - kind: Namespace - metadata: - name: example - labels: - pod-security.kubernetes.io/warn: "privileged" - pod-security.kubernetes.io/audit: "privileged" - pod-security.kubernetes.io/enforce: "privileged" - nrf/nrf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nrf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - 
free5gc\ndata:\n nrfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NRF initial local configuration\n \n configuration:\n MongoDBName: - free5gc\n MongoDBUrl: mongodb://mongodb:27017\n\n serviceNameList:\n - \ - nnrf-nfm\n - nnrf-disc\n\n sbi:\n scheme: http\n - \ registerIPv4: nrf-nnrf # IP used to serve NFs or register to another - NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 8000\n tls:\n key: config/TLS/nrf.key\n pem: config/TLS/nrf.pem\n - \ DefaultPlmnId:\n mcc: 208\n mnc: 93\n\n logger:\n NRF:\n - \ ReportCaller: false\n debugLevel: info\n" - nrf/nrf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nrf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: nrf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nrf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nrf\n spec:\n initContainers:\n - \ - name: wait-mongo\n image: busybox:1.32.0\n env:\n - - name: DEPENDENCIES\n value: mongodb:27017\n command: [\"sh\", - \"-c\", \"until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep - 2; done;\"]\n containers:\n - name: nrf\n image: towards5gs/free5gc-nrf:v3.1.1\n - \ imagePullPolicy: IfNotPresent\n securityContext:\n {}\n - \ ports:\n - containerPort: 8000\n command: [\"./nrf\"]\n - \ args: [\"-c\", \"../config/nrfcfg.yaml\"]\n env: \n - - name: DB_URI\n value: mongodb://mongodb/free5gc\n - name: - GIN_MODE\n value: release\n volumeMounts:\n - mountPath: - /free5gc/config/\n name: nrf-volume\n resources:\n limits:\n - \ cpu: 100m\n memory: 128Mi\n requests:\n - \ cpu: 100m\n memory: 128Mi\n readinessProbe:\n - \ initialDelaySeconds: 0\n periodSeconds: 1\n timeoutSeconds: - 1\n failureThreshold: 40\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n livenessProbe:\n - \ initialDelaySeconds: 120\n periodSeconds: 10\n timeoutSeconds: - 10\n failureThreshold: 3\n successThreshold: 1\n httpGet:\n - \ scheme: \"HTTP\"\n port: 8000\n dnsPolicy: ClusterFirst\n - \ restartPolicy: Always\n\n volumes:\n - name: nrf-volume\n projected:\n - \ sources:\n - configMap:\n name: nrf-configmap\n" - nrf/nrf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nrf-nnrf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nrf - spec: - type: ClusterIP - ports: - - port: 8000 - targetPort: 8000 - protocol: TCP - name: http - selector: - project: free5gc - nf: nrf - nssf/nssf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - nssf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n nssfcfg.yaml: |\n info:\n version: 1.0.1\n description: - NSSF initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nnssf-nsselection\n - nnssf-nssaiavailability\n\n sbi:\n - \ scheme: http\n registerIPv4: nssf-nnssf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/nssf.key\n pem: config/TLS/nssf.pem\n - \ \n nrfUri: http://nrf-nnrf:8000\n \n nsiList:\n - - snssai:\n sst: 1\n nsiInformationList:\n - nrfId: - http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 10\n - - snssai:\n sst: 1\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 11\n - snssai:\n sst: 1\n sd: 2\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 12\n - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 
- 12\n - snssai:\n sst: 1\n sd: 3\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 13\n - snssai:\n sst: 2\n nsiInformationList:\n - - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: 20\n - \ - snssai:\n sst: 2\n sd: 1\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 21\n - snssai:\n sst: 1\n sd: 010203\n nsiInformationList:\n - \ - nrfId: http://nrf-nnrf:8000/nnrf-nfm/v1/nf-instances\n nsiId: - 22\n amfSetList:\n - amfSetId: 1\n amfList:\n - - ffa2e8d7-3275-49c7-8631-6af1df1d9d26\n - 0e8831c3-6286-4689-ab27-1e2161e15cb1\n - \ - a1fba9ba-2e39-4e22-9c74-f749da571d0d\n nrfAmfSet: http://nrf-nnrf:8081/nnrf-nfm/v1/nf-instances\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33456\n supportedSnssaiList:\n - sst: 1\n sd: - 1\n - sst: 1\n sd: 2\n - sst: - 2\n sd: 1\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 1\n sd: 2\n - amfSetId: 2\n nrfAmfSet: - http://nrf-nnrf:8084/nnrf-nfm/v1/nf-instances\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - tai:\n plmnId:\n mcc: 466\n - \ mnc: 92\n tac: 33458\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n nssfName: NSSF\n supportedPlmnList:\n - - mcc: 208\n mnc: 93\n supportedNssaiInPlmnList:\n - plmnId:\n - \ mcc: 208\n mnc: 93\n supportedSnssaiList:\n - \ - sst: 1\n sd: 010203\n - sst: 1\n sd: - 112233\n - sst: 1\n sd: 3\n - sst: 2\n sd: - 1\n - sst: 2\n sd: 2\n amfList:\n - nfId: - 469de254-2fe5-4ca0-8381-af3f500af77c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n supportedSnssaiList:\n - \ - sst: 1\n sd: 1\n - sst: 1\n - \ sd: 2\n - nfId: fbe604a8-27b2-417e-bd7c-8a7be2691f8d\n - \ supportedNssaiAvailabilityData:\n - tai:\n plmnId:\n - \ mcc: 466\n mnc: 92\n tac: - 33458\n supportedSnssaiList:\n - sst: 1\n - - sst: 1\n sd: 1\n - sst: 1\n sd: - 3\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33459\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - \ - sst: 2\n - sst: 2\n sd: 1\n - \ - nfId: b9e6e2cb-5ce8-4cb6-9173-a266dd9a2f0c\n supportedNssaiAvailabilityData:\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33456\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 2\n - sst: 2\n - tai:\n - \ plmnId:\n mcc: 466\n mnc: - 92\n tac: 33458\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n taList:\n - - tai:\n plmnId:\n mcc: 466\n mnc: 92\n tac: - 33456\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - - sst: 1\n - sst: 1\n sd: 1\n - sst: 1\n sd: - 2\n - sst: 2\n - tai:\n plmnId:\n mcc: - 466\n mnc: 92\n tac: 33457\n accessType: 3GPP_ACCESS\n - \ supportedSnssaiList:\n - sst: 1\n - sst: 1\n - \ sd: 1\n - sst: 1\n sd: 2\n - - sst: 2\n - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33458\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - sst: 1\n - sst: 1\n sd: 1\n - - sst: 1\n sd: 3\n - sst: 2\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 1\n sd: 3\n - \ - tai:\n plmnId:\n mcc: 466\n mnc: - 92\n tac: 33459\n accessType: 3GPP_ACCESS\n supportedSnssaiList:\n - \ - 
sst: 1\n - sst: 1\n sd: 1\n - - sst: 2\n - sst: 2\n sd: 1\n restrictedSnssaiList:\n - \ - homePlmnId:\n mcc: 310\n mnc: 560\n - \ sNssaiList:\n - sst: 2\n sd: 1\n - \ mappingListFromPlmn:\n - operatorName: NTT Docomo\n homePlmnId:\n - \ mcc: 440\n mnc: 10\n mappingOfSnssai:\n - - servingSnssai:\n sst: 1\n sd: 1\n homeSnssai:\n - \ sst: 1\n sd: 1\n - servingSnssai:\n - \ sst: 1\n sd: 2\n homeSnssai:\n sst: - 1\n sd: 3\n - servingSnssai:\n sst: - 1\n sd: 3\n homeSnssai:\n sst: 1\n - \ sd: 4\n - servingSnssai:\n sst: 2\n - \ sd: 1\n homeSnssai:\n sst: 2\n sd: - 2\n - operatorName: AT&T Mobility\n homePlmnId:\n mcc: - 310\n mnc: 560\n mappingOfSnssai:\n - servingSnssai:\n - \ sst: 1\n sd: 1\n homeSnssai:\n sst: - 1\n sd: 2\n - servingSnssai:\n sst: - 1\n sd: 2\n homeSnssai:\n sst: 1\n - \ sd: 3 \n\n logger:\n NSSF:\n ReportCaller: - false\n debugLevel: info\n" - nssf/nssf-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-nssf\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: nssf\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: nssf\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: nssf\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: nssf\n image: towards5gs/free5gc-nssf:v3.1.1\n imagePullPolicy: - IfNotPresent\n securityContext:\n {}\n ports:\n - - containerPort: 80\n command: [\"./nssf\"]\n args: [\"-c\", \"../config/nssfcfg.yaml\"]\n - \ env: \n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: nssf-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: nssf-volume\n projected:\n sources:\n - - configMap:\n name: nssf-configmap\n" - nssf/nssf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: nssf-nnssf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: nssf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: nssf - package-context.yaml: | - apiVersion: v1 - kind: ConfigMap - metadata: - name: kptfile.kpt.dev - annotations: - config.kubernetes.io/local-config: "true" - data: - name: free5gc - namespace: free5gc - pcf/pcf-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - pcf-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n pcfcfg.yaml: |\n info:\n version: 1.0.1\n description: - PCF initial local configuration\n\n configuration:\n serviceList:\n - \ - serviceName: npcf-am-policy-control\n - serviceName: npcf-smpolicycontrol\n - \ suppFeat: 3fff\n - serviceName: npcf-bdtpolicycontrol\n - - serviceName: npcf-policyauthorization\n suppFeat: 3\n - serviceName: - npcf-eventexposure\n - serviceName: npcf-ue-policy-control\n\n sbi:\n - \ scheme: http\n registerIPv4: pcf-npcf # IP used to register - to NRF\n bindingIPv4: 0.0.0.0 # IP used to bind the service\n port: - 80\n tls:\n key: config/TLS/pcf.key\n pem: config/TLS/pcf.pem\n - \ \n mongodb: # the mongodb connected by 
this PCF\n name: - free5gc # name of the mongodb\n url: mongodb://mongodb:27017 - # a valid URL of the mongodb\n \n nrfUri: http://nrf-nnrf:8000\n pcfName: - PCF\n timeFormat: 2019-01-02 15:04:05\n defaultBdtRefId: BdtPolicyId-\n - \ locality: area1\n\n logger:\n PCF:\n ReportCaller: false\n - \ debugLevel: info\n" - pcf/pcf-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-pcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: pcf - template: - metadata: - labels: - project: free5gc - nf: pcf - spec: - initContainers: - - name: wait-nrf - image: towards5gs/initcurl:1.0.0 - env: - - name: DEPENDENCIES - value: http://nrf-nnrf:8000 - command: ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure --connect-timeout 1 -s -o /dev/null -w "%{http_code}" $dependency) -ne 200 ]; do echo waiting for dependencies; sleep 1; done; done;'] - - containers: - - name: pcf - image: towards5gs/free5gc-pcf:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 80 - command: ["./pcf"] - args: ["-c", "../config/pcfcfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: pcf-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: pcf-volume - projected: - sources: - - configMap: - name: pcf-configmap - pcf/pcf-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: pcf-npcf - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: pcf - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: pcf - udm/udm-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udm-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udmcfg.yaml: |\n info:\n version: 1.0.2\n description: - UDM initial local configuration\n\n configuration:\n serviceNameList:\n - \ - nudm-sdm\n - nudm-uecm\n - nudm-ueau\n - nudm-ee\n - \ - nudm-pp\n \n sbi:\n scheme: http\n registerIPv4: - udm-nudm # IP used to register to NRF\n bindingIPv4: 0.0.0.0 # IP used - to bind the service\n port: 80\n tls:\n key: config/TLS/udm.key\n - \ pem: config/TLS/udm.pem\n \n nrfUri: http://nrf-nnrf:8000\n - \ # test data set from TS33501-f60 Annex C.4\n SuciProfile:\n - - ProtectionScheme: 1 # Protect Scheme: Profile A\n PrivateKey: c53c22208b61860b06c62e5406a7b330c2b577aa5558981510d128247d38bd1d\n - \ PublicKey: 5a8d38864820197c3394b92613b20b91633cbd897119273bf8e4a6f4eec0a650\n - \ - ProtectionScheme: 2 # Protect Scheme: Profile B\n PrivateKey: - F1AB1074477EBCC7F554EA1C5FC368B1616730155E0041AC447D6301975FECDA\n PublicKey: - 0472DA71976234CE833A6907425867B82E074D44EF907DFB4B3E21C1C2256EBCD15A7DED52FCBB097A4ED250E036C7B9C8C7004C4EEDC4F068CD7BF8D3F900E3B4\n\n - \ logger:\n UDM:\n ReportCaller: false\n debugLevel: info\n" - udm/udm-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udm\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udm\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udm\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udm\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: 
DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udm\n image: towards5gs/free5gc-udm:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udm\"]\n args: [\"-c\", \"../config/udmcfg.yaml\"]\n env: - \n - name: GIN_MODE\n value: release\n volumeMounts:\n - \ - mountPath: /free5gc/config/\n name: udm-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udm-volume\n projected:\n sources:\n - - configMap:\n name: udm-configmap\n" - udm/udm-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udm-nudm - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udm - spec: - type: ClusterIP - ports: - - port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udm - udr/udr-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: - udr-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n app: - free5gc\ndata:\n udrcfg.yaml: |\n info:\n version: 1.0.1\n description: - UDR initial local configuration\n\n configuration:\n sbi:\n scheme: - http\n registerIPv4: udr-nudr # IP used to register to NRF\n bindingIPv4: - 0.0.0.0 # IP used to bind the service\n port: 80\n tls:\n key: - config/TLS/udr.key\n pem: config/TLS/udr.pem\n\n mongodb:\n name: - free5gc\n url: mongodb://mongodb:27017 \n \n nrfUri: - http://nrf-nnrf:8000\n\n logger:\n MongoDBLibrary:\n ReportCaller: - false\n debugLevel: info\n OpenApi:\n ReportCaller: false\n - \ debugLevel: info\n PathUtil:\n ReportCaller: false\n debugLevel: - info\n UDR:\n ReportCaller: false\n debugLevel: info\n" - udr/udr-deployment.yaml: "---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n - \ name: free5gc-udr\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ project: free5gc\n nf: udr\nspec:\n replicas: 1\n selector:\n matchLabels:\n - \ project: free5gc\n nf: udr\n template:\n metadata:\n labels:\n - \ project: free5gc\n nf: udr\n spec:\n initContainers:\n - \ - name: wait-nrf\n image: towards5gs/initcurl:1.0.0\n env:\n - \ - name: DEPENDENCIES\n value: http://nrf-nnrf:8000\n command: - ['sh', '-c', 'set -x; for dependency in $DEPENDENCIES; do while [ $(curl --insecure - --connect-timeout 1 -s -o /dev/null -w \"%{http_code}\" $dependency) -ne 200 - ]; do echo waiting for dependencies; sleep 1; done; done;']\n\n containers:\n - \ - name: udr\n image: towards5gs/free5gc-udr:v3.1.1\n imagePullPolicy: - IfNotPresent\n ports:\n - containerPort: 80\n command: - [\"./udr\"]\n args: [\"-c\", \"../config/udrcfg.yaml\"]\n env: - \n - name: DB_URI\n value: mongodb://mongodb/free5gc\n - - name: GIN_MODE\n value: release\n volumeMounts:\n - - mountPath: /free5gc/config/\n name: udr-volume\n resources:\n - \ limits:\n cpu: 100m\n memory: 128Mi\n - \ requests:\n cpu: 100m\n memory: 128Mi\n - \ dnsPolicy: ClusterFirst\n restartPolicy: Always\n\n volumes:\n - \ - name: udr-volume\n projected:\n sources:\n - - configMap:\n name: udr-configmap\n" - udr/udr-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: udr-nudr - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: udr - spec: - type: ClusterIP - ports: - 
- port: 80 - targetPort: 80 - protocol: TCP - name: http - selector: - project: free5gc - nf: udr - webui/webui-configmap.yaml: "---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n - \ name: webui-configmap\n labels:\n app.kubernetes.io/version: \"v3.1.1\"\n - \ app: free5gc\ndata:\n webuicfg.yaml: |\n info:\n version: 1.0.0\n - \ description: WEBUI initial local configuration\n\n configuration:\n - \ mongodb:\n name: free5gc\n url: mongodb://mongodb:27017\n - \ \n logger:\n WEBUI:\n ReportCaller: false\n debugLevel: - info\n" - webui/webui-deployment.yaml: | - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: free5gc-webui - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - replicas: 1 - selector: - matchLabels: - project: free5gc - nf: webui - template: - metadata: - labels: - project: free5gc - nf: webui - spec: - initContainers: - - name: wait-mongo - image: busybox:1.32.0 - env: - - name: DEPENDENCIES - value: mongodb:27017 - command: ["sh", "-c", "until nc -z $DEPENDENCIES; do echo waiting for the MongoDB; sleep 2; done;"] - containers: - - name: webui - image: towards5gs/free5gc-webui:v3.1.1 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 5000 - command: ["./webconsole"] - args: ["-c", "../config/webuicfg.yaml"] - env: - - name: GIN_MODE - value: release - volumeMounts: - - mountPath: /free5gc/config/ - name: webui-volume - resources: - limits: - cpu: 100m - memory: 128Mi - requests: - cpu: 100m - memory: 128Mi - readinessProbe: - initialDelaySeconds: 0 - periodSeconds: 1 - timeoutSeconds: 1 - failureThreshold: 40 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - livenessProbe: - initialDelaySeconds: 120 - periodSeconds: 10 - timeoutSeconds: 10 - failureThreshold: 3 - successThreshold: 1 - httpGet: - scheme: HTTP - port: 5000 - dnsPolicy: ClusterFirst - restartPolicy: Always - - volumes: - - name: webui-volume - projected: - sources: - - configMap: - name: webui-configmap - webui/webui-service.yaml: | - --- - apiVersion: v1 - kind: Service - metadata: - name: webui-service - labels: - app.kubernetes.io/version: "v3.1.1" - project: free5gc - nf: webui - spec: - type: NodePort - ports: - - port: 5000 - targetPort: 5000 - nodePort: 30500 - protocol: TCP - name: http - selector: - project: free5gc - nf: webui - revision: v1 - workspaceName: v1 -status: - renderStatus: - error: "" - result: - exitCode: 0 - metadata: - creationTimestamp: null -``` -
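-
-Because each file is simply a key in the `.spec.resources` map, a single file can be extracted with a JSONPath query
-instead of dumping the whole object. A minimal sketch using the same package revision (the *Kptfile* key is convenient
-here because it contains no characters that would need escaping in JSONPath):
-
-```bash
-kubectl get packagerevisionresources -n porch-demo \
-    external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9 \
-    -o jsonpath='{.spec.resources.Kptfile}'
-```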
-
-## The porchctl command
-
-The `porchctl` command is an administration command for acting on Porch `Repository` (repo) and `PackageRevision` (rpkg)
-CRs. See its [documentation for usage information](porchctl-cli-guide.md).
-
-Check that porchctl lists our repositories:
-
-```bash
-porchctl repo -n porch-demo get
-NAME                  TYPE   CONTENT   DEPLOYMENT   READY   ADDRESS
-edge1                 git    Package   true         True    http://172.18.255.200:3000/nephio/edge1.git
-external-blueprints   git    Package   false        True    https://github.com/nephio-project/free5gc-packages.git
-management            git    Package   false        True    http://172.18.255.200:3000/nephio/management.git
-```
-
-Check that porchctl lists our remote packages (PackageRevisions):
-
-```
-porchctl rpkg -n porch-demo get
-NAME                                                           PACKAGE              WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-external-blueprints-922121d0bcdd56bfa8cae6c375720e2b5f358ab0   free5gc-cp           main            main       false    Published   external-blueprints
-external-blueprints-dabbc422fdf0b8e5942e767d929b524e25f7eef9   free5gc-cp           v1              v1         true     Published   external-blueprints
-external-blueprints-716aae722092dbbb9470e56079b90ad76ec8f0d5   free5gc-operator     main            main       false    Published   external-blueprints
-external-blueprints-d65dc89f7a2472650651e9aea90edfcc81a9afc6   free5gc-operator     v1              v1         false    Published   external-blueprints
-external-blueprints-9fee880e8fa52066f052c9cae7aac2e2bc1b5a54   free5gc-operator     v2              v2         false    Published   external-blueprints
-external-blueprints-91d60ee31d2d0a1a6d5f1807593d5419434accd3   free5gc-operator     v3              v3         false    Published   external-blueprints
-external-blueprints-21f19a0641cf520e7dc6268e64c58c2c30c27036   free5gc-operator     v4              v4         false    Published   external-blueprints
-external-blueprints-bf2e7522ee92680bd49571ab309e3f61320cf36d   free5gc-operator     v5              v5         true     Published   external-blueprints
-external-blueprints-c1b9ecb73118e001ab1d1213e6a2c94ab67a0939   free5gc-upf          main            main       false    Published   external-blueprints
-external-blueprints-5d48b1516e7b1ea15830ffd76b230862119981bd   free5gc-upf          v1              v1         true     Published   external-blueprints
-external-blueprints-ed97798b46b36d135cf23d813eccad4857dff90f   pkg-example-amf-bp   main            main       false    Published   external-blueprints
-external-blueprints-ed744bfdf4a4d15d4fcf3c46fde27fd6ac32d180   pkg-example-amf-bp   v1              v1         false    Published   external-blueprints
-external-blueprints-5489faa80782f91f1a07d04e206935d14c1eb24c   pkg-example-amf-bp   v2              v2         false    Published   external-blueprints
-external-blueprints-16e2255bd433ef532684a3c1434ae0bede175107   pkg-example-amf-bp   v3              v3         false    Published   external-blueprints
-external-blueprints-7689cc6c953fa83ea61283983ce966dcdffd9bae   pkg-example-amf-bp   v4              v4         false    Published   external-blueprints
-external-blueprints-caff9609883eea7b20b73b7425e6694f8eb6adc3   pkg-example-amf-bp   v5              v5         true     Published   external-blueprints
-external-blueprints-00b6673c438909975548b2b9f20c2e1663161815   pkg-example-smf-bp   main            main       false    Published   external-blueprints
-external-blueprints-4f7dfbede99dc08f2b5144ca550ca218109c52f2   pkg-example-smf-bp   v1              v1         false    Published   external-blueprints
-external-blueprints-3d9ab8f61ce1d35e264d5719d4b3c0da1ab02328   pkg-example-smf-bp   v2              v2         false    Published   external-blueprints
-external-blueprints-2006501702e105501784c78be9e7d57e426d85e8   pkg-example-smf-bp   v3              v3         false    Published   external-blueprints
-external-blueprints-c97ed7c13b3aa47cb257217f144960743aec1253   pkg-example-smf-bp   v4              v4         false    Published   external-blueprints
-external-blueprints-3bd78e46b014dac5cc0c58788c1820d043d61569   pkg-example-smf-bp   v5              v5         true     Published   external-blueprints
-external-blueprints-c3f660848d9d7a4df5481ec2e06196884778cd84   pkg-example-upf-bp   main            main       false    Published   external-blueprints
-external-blueprints-4cb00a17c1ee2585d6c187ba4d0211da960c0940   pkg-example-upf-bp   v1              v1         false    Published   external-blueprints
-external-blueprints-5903efe295026124e6fea926df154a72c5bd1ea9   pkg-example-upf-bp   v2              v2         false    Published   external-blueprints
-external-blueprints-16142d8d23c1b8e868a9524a1b21634c79b432d5   pkg-example-upf-bp   v3              v3         false    Published   external-blueprints
-external-blueprints-60ef45bb8f55b63556e7467f16088325022a7ece   pkg-example-upf-bp   v4              v4         false    Published   external-blueprints
-external-blueprints-7757966cc7b965f1b9372370a4b382c8375a2b40   pkg-example-upf-bp   v5              v5         true     Published   external-blueprints
-```
-
-This output is similar to that of `kubectl get packagerevision -n porch-demo` earlier.
-
-## Creating a blueprint in Porch
-
-### Blueprint with no Kpt pipelines
-
-Create a new package in our *management* repository using the sample *network-function* package provided. This is a
-demo Kpt package that installs [Nginx](https://github.com/nginx).
-
-```bash
-porchctl -n porch-demo rpkg init network-function --repository=management --workspace=v1
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 created
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                   PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Draft       management
-```
-
-This command creates a new *PackageRevision* CR in Porch and also creates a branch called *network-function/v1* in our
-Gitea *management* repository. Use the Gitea web UI to confirm that the branch has been created and note that, as yet,
-it contains only the default content.
-
-We now pull the package we have initialized from Porch:
-
-```bash
-porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function
-```
-
-We update the initialized package and add our local changes:
-
-```bash
-cp blueprints/local-changes/network-function/* blueprints/initialized/network-function
-```
-
-Now, we push the package contents to Porch:
-
-```bash
-porchctl -n porch-demo rpkg push management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 blueprints/initialized/network-function
-```
-
-Check the Gitea web UI to confirm that the actual package contents have been pushed.
-
-Now we propose and approve the package:
-
-```bash
-porchctl -n porch-demo rpkg propose management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 proposed
-
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                   PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1                         false    Proposed    management
-
-porchctl -n porch-demo rpkg approve management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 approved
-
-porchctl -n porch-demo rpkg get --name network-function
-NAME                                                   PACKAGE            WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function   v1              v1         true     Published   management
-```
-
-Once the package is approved, it is merged into the main branch in the *management* repository and the branch called
-*network-function/v1* in that repository is removed. Use the Gitea UI to verify this. We now have our blueprint package
-in our *management* repository and we can deploy this package into workload clusters.
-
-### Blueprint with a Kpt pipeline
-
-The second blueprint in the *blueprint* directory is called *network-function-auto-namespace*. This network function is
-exactly the same as the *network-function* package, except that its pipeline contains a Kpt function that automatically
-sets the namespace to the value of the *name* field in the *package-context.yaml* file. Note that no namespace is
-defined in the metadata of the *deployment.yaml* file of this Kpt package.
-
-We use the same sequence of commands again to publish our blueprint package for *network-function-auto-namespace*; the
-sketch below shows the kind of pipeline that drives the automatic namespacing.
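-
-The exact *Kptfile* ships with the blueprint, but a minimal sketch of such a pipeline, assuming the standard
-*set-namespace* function from the kpt catalog, looks like this:
-
-```yaml
-apiVersion: kpt.dev/v1
-kind: Kptfile
-metadata:
-  name: network-function-auto-namespace
-pipeline:
-  mutators:
-    # Reads the "name" field from package-context.yaml and applies it
-    # as the namespace of the package's resources on every render
-    - image: gcr.io/kpt-fn/set-namespace:v0.4.1
-      configPath: package-context.yaml
-```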
-
-```bash
-porchctl -n porch-demo rpkg init network-function-auto-namespace --repository=management --workspace=v1
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 created
-
-porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace
-
-cp blueprints/local-changes/network-function-auto-namespace/* blueprints/initialized/network-function-auto-namespace
-
-porchctl -n porch-demo rpkg push management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 blueprints/initialized/network-function-auto-namespace
-```
-
-Examine the *drafts/network-function-auto-namespace/v1* branch in Gitea. Notice that the set-namespace Kpt function in
-the pipeline in the *Kptfile* has set the namespace in the *deployment.yaml* file to the value *default-namespace-name*,
-which it read from the *package-context.yaml* file.
-
-Now we propose and approve the package:
-
-```bash
-porchctl -n porch-demo rpkg propose management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 proposed
-
-porchctl -n porch-demo rpkg approve management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 approved
-
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace
-NAME                                                   PACKAGE                           WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-management-f9a6f2802111b9e81c296422c03aae279725f6df   network-function-auto-namespace   v1              main       false    Published   management
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3   network-function-auto-namespace   v1              v1         true     Published   management
-```
-
-## Deploying a blueprint onto a workload cluster
-
-### Blueprint with no Kpt pipelines
-
-The process of deploying a blueprint package from our *management* repository clones the package, then modifies it for
-use on the workload cluster. The cloned package is then initialized, pushed, proposed, and approved onto the *edge1*
-repository. Remember that the *edge1* repository is being monitored by Config Sync from the edge1 cluster, so once the
-package appears in the *edge1* repository on the management cluster, it will be pulled by Config Sync and applied to
-the edge1 cluster.
-
-```bash
-porchctl -n porch-demo rpkg pull management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp
-
-find tmp_packages_for_deployment/edge1-network-function-a.clone.tmp
-
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/deployment.yaml
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/README.md
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/Kptfile
-tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/package-context.yaml
-```
-
-The package we created in the last section is now cloned. We remove the original metadata from the clone:
-
-```bash
-rm tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/.KptRevisionMetadata
-```
-
-We use a *kpt* function to change the namespace that will be used for the deployment of the network function.
-
-```bash
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-a.clone.tmp -- namespace=edge1-network-function-a
-
-[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1" in 300ms
-  Results:
-    [info]: namespace "" updated to "edge1-network-function-a", 1 value(s) changed
-```
-
-We now initialize the deployment package in the *edge1* repository, pull it, copy in the cloned content, and push it:
-
-```bash
-porchctl -n porch-demo rpkg init edge1-network-function-a --repository=edge1 --workspace=v1
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 created
-
-porchctl -n porch-demo rpkg pull edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a
-
-cp tmp_packages_for_deployment/edge1-network-function-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-a
-rm -fr tmp_packages_for_deployment/edge1-network-function-a.clone.tmp
-
-porchctl -n porch-demo rpkg push edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 tmp_packages_for_deployment/edge1-network-function-a
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-a
-NAME                                             PACKAGE              WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   network-function-a   v1                         false    Draft       edge1
-```
-
-You can verify that the package is in the *network-function-a/v1* branch of the deployment repository using the Gitea
-web UI.
-
-Check that the *edge1-network-function-a* package is not deployed on the edge1 cluster yet:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-a
-No resources found in edge1-network-function-a namespace.
-```
-
-We now propose and approve the deployment package, which merges the package to the *edge1* repository and in turn
-triggers Config Sync to apply the package to the edge1 cluster.
-
-```bash
-export KUBECONFIG=~/.kube/kind-management-config
-
-porchctl -n porch-demo rpkg propose edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 proposed
-
-porchctl -n porch-demo rpkg approve edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3 approved
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-a
-NAME                                             PACKAGE              WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-d701be9b849b8b8724a6e052cbc74ca127b737c3   network-function-a   v1              v1         true     Published   edge1
-```
-
-We can now check that the *network-function-a* package is deployed on the edge1 cluster and that the pod is running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-a
-No resources found in edge1-network-function-a namespace.
-
-kubectl get pod -n edge1-network-function-a
-NAME                               READY   STATUS              RESTARTS   AGE
-network-function-9779fc9f5-4rqp2   0/1     ContainerCreating   0          9s
-
-kubectl get pod -n edge1-network-function-a
-NAME                               READY   STATUS    RESTARTS   AGE
-network-function-9779fc9f5-4rqp2   1/1     Running   0          44s
-```
-
-### Blueprint with a Kpt pipeline
-
-When deploying a blueprint that has a *Kpt* pipeline, the pipeline runs automatically with whatever configuration we
-give it. Rather than explicitly running a *Kpt* function to change the namespace, we specify the namespace as
-configuration and the pipeline applies it to the deployment; the package-context sketch below shows where that
-configuration lives.
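-
-For reference, *package-context.yaml* is just a local-config ConfigMap; a minimal sketch, assuming the standard kpt
-package-context conventions, looks like this before we edit it:
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  # kptfile.kpt.dev is the conventional name for the package context
-  name: kptfile.kpt.dev
-  annotations:
-    config.kubernetes.io/local-config: "true"
-data:
-  # The set-namespace function in the Kptfile pipeline reads this value
-  name: default-namespace-name
-```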
-
-```bash
-porchctl -n porch-demo rpkg pull management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-
-find tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/deployment.yaml
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/README.md
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/Kptfile
-tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/package-context.yaml
-```
-
-We now remove the original metadata from the clone:
-
-```bash
-rm tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/.KptRevisionMetadata
-```
-
-The blueprint package is now cloned. We initialize the deployment package in the *edge1* repository, pull it locally,
-and copy in the cloned content:
-
-```bash
-porchctl -n porch-demo rpkg init edge1-network-function-auto-namespace-a --repository=edge1 --workspace=v1
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 created
-
-porchctl -n porch-demo rpkg pull edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-
-cp tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp/* tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-rm -fr tmp_packages_for_deployment/edge1-network-function-auto-namespace-a.clone.tmp
-```
-
-We now simply configure the namespace we want to apply. Edit the
-*tmp_packages_for_deployment/edge1-network-function-auto-namespace-a/package-context.yaml* file and set the namespace
-to use:
-
-```
-8c8
-< name: default-namespace-name
----
-> name: edge1-network-function-auto-namespace-a
-```
-
-We now push the package to the *edge1* repository:
-
-```bash
-porchctl -n porch-demo rpkg push edge1-48997da49ca0a733b0834c1a27943f1a0e075180 tmp_packages_for_deployment/edge1-network-function-auto-namespace-a
-[RUNNING] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-[PASS] "gcr.io/kpt-fn/set-namespace:v0.4.1"
-  Results:
-    [info]: namespace "default-namespace-name" updated to "edge1-network-function-auto-namespace-a", 1 value(s) changed
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a
-```
-
-You can verify that the package is in the *network-function-auto-namespace-a/v1* branch of the deployment repository
-using the Gitea web UI. You can see that the kpt pipeline fired and set the namespace to
-*edge1-network-function-auto-namespace-a* in the *deployment.yaml* file on the
-*drafts/edge1-network-function-auto-namespace-a/v1* branch of the *edge1* repository in Gitea.
-
-Check that the *edge1-network-function-auto-namespace-a* package is not deployed on the edge1 cluster yet:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-No resources found in edge1-network-function-auto-namespace-a namespace.
-```
-
-We now propose and approve the deployment package, which merges the package to the *edge1* repository and in turn
-triggers Config Sync to apply the package to the edge1 cluster.
-
-```bash
-export KUBECONFIG=~/.kube/kind-management-config
-
-porchctl -n porch-demo rpkg propose edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 proposed
-
-porchctl -n porch-demo rpkg approve edge1-48997da49ca0a733b0834c1a27943f1a0e075180
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180 approved
-
-porchctl -n porch-demo rpkg get --name edge1-network-function-auto-namespace-a
-NAME                                             PACKAGE                                   WORKSPACENAME   REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180   edge1-network-function-auto-namespace-a   v1              v1         true     Published   edge1
-```
-
-We can now check that the *network-function-auto-namespace-a* package is deployed on the edge1 cluster and that the
-pod is running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-No resources found in edge1-network-function-auto-namespace-a namespace.
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS              RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   0/1     ContainerCreating   0          3s
-
-kubectl get pod -n edge1-network-function-auto-namespace-a
-NAME                                               READY   STATUS    RESTARTS   AGE
-network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   0          10s
-```
-
-## Deploying using Package Variant Sets
-
-### Simple PackageVariantSet
-
-The *PackageVariantSet* CR in *simple-variant.yaml* is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  name: network-function
-  namespace: porch-demo
-spec:
-  upstream:
-    repo: management
-    package: network-function
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-b
-      - network-function-c
-```
-
-In this very simple *PackageVariantSet*, the *network-function* package in the *management* repository is cloned into
-the *edge1* repository as the *network-function-b* and *network-function-c* package variants.
-
-{{% alert title="Note" color="primary" %}}
-
-This simple *PackageVariantSet* does not specify any configuration changes. Normally, as well as cloning and renaming,
-configuration changes would be applied to each package variant.
-
-Use `kubectl explain PackageVariantSet` to get help on the structure of the PackageVariantSet CRD.
-
-{{% /alert %}}
-
-Applying the PackageVariantSet creates the new packages as draft packages:
-
-```bash
-kubectl apply -f simple-variant.yaml
-
-kubectl get PackageRevisions -n porch-demo | grep -v 'external-blueprints'
-NAME                                                  PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-bc8294d121360ad305c9a826a8734adcf5f1b9c0        network-function-a   v1                 main       false    Published   edge1
-edge1-9b4b4d99c43b5c5c8489a47bbce9a61f79904946        network-function-a   v1                 v1         true     Published   edge1
-edge1-a31b56c7db509652f00724dd49746660757cd98a        network-function-b   packagevariant-1              false    Draft       edge1
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4        network-function-c   packagevariant-1              false    Draft       edge1
-management-49580fc22bcf3bf51d334a00b6baa41df597219e   network-function     v1                 main       false    Published   management
-management-8b80738a6e0707e3718ae1db3668d0b8ca3f1c82   network-function     v1                 v1         true     Published   management
-
-porchctl -n porch-demo rpkg get --name network-function-b
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-a31b56c7db509652f00724dd49746660757cd98a   network-function-b   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-c
-NAME                                             PACKAGE              WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4   network-function-c   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the *edge1* repository. We can also examine the
-*PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                        AGE
-network-function-edge1-network-function-b   82s
-network-function-edge1-network-function-c   82s
-```
-
-It is also interesting to examine the YAML of the *PackageVariant* CRs:
-
-```yaml
-kubectl get PackageVariant -n porch-demo -o yaml
-apiVersion: v1
-items:
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-b
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237053"
-    uid: 7a81099c-5a0b-49d8-b73c-48e33cd134e5
-  spec:
-    downstream:
-      package: network-function-b
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:00Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-a31b56c7db509652f00724dd49746660757cd98a
-- apiVersion: config.porch.kpt.dev/v1alpha1
-  kind: PackageVariant
-  metadata:
-    creationTimestamp: "2024-01-09T15:00:00Z"
-    finalizers:
-    - config.porch.kpt.dev/packagevariants
-    generation: 1
-    labels:
-      config.porch.kpt.dev/packagevariantset: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    name: network-function-edge1-network-function-c
-    namespace: porch-demo
-    ownerReferences:
-    - apiVersion: config.porch.kpt.dev/v1alpha2
-      controller: true
-      kind: PackageVariantSet
-      name: network-function
-      uid: a923d4fc-a3a7-437c-84d1-52b30dd6cf49
-    resourceVersion: "237056"
-    uid: da037d0a-9a7a-4e85-842c-1324e9da819a
-  spec:
-    downstream:
-      package: network-function-c
-      repo: edge1
-    upstream:
-      package: network-function
-      repo: management
-      revision: v1
-  status:
-    conditions:
-    - lastTransitionTime: "2024-01-09T15:00:01Z"
-      message: all validation checks passed
-      reason: Valid
-      status: "False"
-      type: Stalled
-    - lastTransitionTime: "2024-01-09T15:00:31Z"
-      message: successfully ensured downstream package variant
-      reason: NoErrors
-      status: "True"
-      type: Ready
-    downstreamTargets:
-    - name: edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4
-kind: List
-metadata:
-  resourceVersion: ""
-```
-
-We now want to customize and deploy our two packages. To do this we pull the packages locally, run the *kpt* function
-to set the namespace, and then push the rendered packages back up to the *edge1* repository:
-
-```bash
-porchctl rpkg pull edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-b -- namespace=network-function-b
-porchctl rpkg push edge1-a31b56c7db509652f00724dd49746660757cd98a tmp_packages_for_deployment/edge1-network-function-b --namespace=porch-demo
-
-porchctl rpkg pull edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-kpt fn eval --image=gcr.io/kpt-fn/set-namespace:v0.4.1 tmp_packages_for_deployment/edge1-network-function-c -- namespace=network-function-c
-porchctl rpkg push edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 tmp_packages_for_deployment/edge1-network-function-c --namespace=porch-demo
-```
-
-Check that the namespace has been updated on the two packages in the *edge1* repository using the Gitea web UI, or
-check the local working copies as shown below.
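-
-As an illustrative command-line alternative (not part of the workflow itself), a quick grep of the local copies before
-they are cleaned up should show the namespaces that *set-namespace* wrote:
-
-```bash
-# Each deployment.yaml should now carry the variant-specific namespace
-grep -R "namespace: network-function-b" tmp_packages_for_deployment/edge1-network-function-b
-grep -R "namespace: network-function-c" tmp_packages_for_deployment/edge1-network-function-c
-```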
-
-Now our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a proposed
-
-porchctl rpkg approve edge1-a31b56c7db509652f00724dd49746660757cd98a --namespace=porch-demo
-edge1-a31b56c7db509652f00724dd49746660757cd98a approved
-
-porchctl rpkg propose edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 proposed
-
-porchctl rpkg approve edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 --namespace=porch-demo
-edge1-ee14f7ce850ddb0a380cf201d86f48419dc291f4 approved
-```
-
-We can now check that the *network-function-b* and *network-function-c* packages are deployed on the edge1 cluster and
-that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE            NAME                               READY   STATUS    RESTARTS   AGE
-network-function-a   network-function-9779fc9f5-2tswc   1/1     Running   0          19h
-network-function-b   network-function-9779fc9f5-6zwhh   1/1     Running   0          76s
-network-function-c   network-function-9779fc9f5-h7nsb   1/1     Running   0          41s
-```
-
-### Using a PackageVariantSet to automatically set the package name and package namespace
-
-The *PackageVariantSet* CR in *name-namespace-variant.yaml* is defined as:
-
-```yaml
-apiVersion: config.porch.kpt.dev/v1alpha2
-kind: PackageVariantSet
-metadata:
-  name: network-function-auto-namespace
-  namespace: porch-demo
-spec:
-  upstream:
-    repo: management
-    package: network-function-auto-namespace
-    revision: v1
-  targets:
-  - repositories:
-    - name: edge1
-      packageNames:
-      - network-function-auto-namespace-x
-      - network-function-auto-namespace-y
-    template:
-      downstream:
-        packageExpr: "target.package + '-cumulonimbus'"
-```
-
-In this *PackageVariantSet*, the *network-function-auto-namespace* package in the *management* repository is cloned
-into the *edge1* repository as the *network-function-auto-namespace-x* and *network-function-auto-namespace-y* package
-variants, similar to the *PackageVariantSet* in *simple-variant.yaml*.
-
-An extra *template* section is provided for the repositories in the *PackageVariantSet*:
-
-```yaml
-template:
-  downstream:
-    packageExpr: "target.package + '-cumulonimbus'"
-```
-
-This template means that each package in the *spec.targets.repositories[].packageNames* list will have the suffix
-*-cumulonimbus* added to its name, which allows us to generate unique package names automatically. Applying the
-*PackageVariantSet* also sets a unique namespace for each network function, because it triggers the Kpt pipeline in
-the *network-function-auto-namespace* *Kpt* package to generate a namespace for each deployed package.
-
-{{% alert title="Note" color="primary" %}}
-
-Many other mutations can be performed using a *PackageVariantSet*. Use `kubectl explain PackageVariantSet` to get help
-on the structure of the *PackageVariantSet* CRD and to see the various mutations that are possible.
-
-{{% /alert %}}
-
-Applying the *PackageVariantSet* creates the new packages as draft packages:
-
-```bash
-kubectl apply -f name-namespace-variant.yaml
-packagevariantset.config.porch.kpt.dev/network-function-auto-namespace created
-
-kubectl get -n porch-demo PackageVariantSet network-function-auto-namespace
-NAME                              AGE
-network-function-auto-namespace   38s
-
-kubectl get PackageRevisions -n porch-demo | grep auto-namespace
-edge1-1f521f05a684adfa8562bf330f7bc72b50e21cc5        edge1-network-function-auto-namespace-a          v1                 main   false   Published   edge1
-edge1-48997da49ca0a733b0834c1a27943f1a0e075180        edge1-network-function-auto-namespace-a          v1                 v1     true    Published   edge1
-edge1-009659a8532552b86263434f68618554e12f4f7c        network-function-auto-namespace-x-cumulonimbus   packagevariant-1          false   Draft       edge1
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e        network-function-auto-namespace-y-cumulonimbus   packagevariant-1          false   Draft       edge1
-management-f9a6f2802111b9e81c296422c03aae279725f6df   network-function-auto-namespace                  v1                 main   false   Published   management
-management-c97bc433db93f2e8a3d413bed57216c2a72fc7e3   network-function-auto-namespace                  v1                 v1     true    Published   management
-```
-
-{{% alert title="Note" color="primary" %}}
-
-The suffix `-cumulonimbus` has been appended to the *x* and *y* package names.
-
-{{% /alert %}}
-
-Examine the *edge1* repository on Gitea and you should see two new draft branches:
-
-- drafts/network-function-auto-namespace-x-cumulonimbus/packagevariant-1
-- drafts/network-function-auto-namespace-y-cumulonimbus/packagevariant-1
-
-In these packages, you will see that:
-
-1. The package name has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus in all files in the packages.
-2. The namespace has been generated as network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *deployment.yaml* files.
-3. The PackageVariant has set the *data.name* field to network-function-auto-namespace-x-cumulonimbus and
-   network-function-auto-namespace-y-cumulonimbus respectively in the *package-context.yaml* files.
-
-This has all been performed automatically; we have not had to run the
-`porchctl rpkg pull`/`kpt fn eval`/`porchctl rpkg push` combination of commands to make these changes, as we had to in
-the *simple-variant.yaml* case above.
-
-Now, let us explore the packages further:
-
-```bash
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-x-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-009659a8532552b86263434f68618554e12f4f7c   network-function-auto-namespace-x-cumulonimbus   packagevariant-1              false    Draft       edge1
-
-porchctl -n porch-demo rpkg get --name network-function-auto-namespace-y-cumulonimbus
-NAME                                             PACKAGE                                          WORKSPACENAME      REVISION   LATEST   LIFECYCLE   REPOSITORY
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e   network-function-auto-namespace-y-cumulonimbus   packagevariant-1              false    Draft       edge1
-```
-
-We can see that our two new packages are created as draft packages on the edge1 repository.
-We can also examine the *PackageVariant* CRs that have been created:
-
-```bash
-kubectl get PackageVariant -n porch-demo
-NAME                                                               AGE
-network-function-auto-namespace-edge1-network-function-35079f9f   3m41s
-network-function-auto-namespace-edge1-network-function-d521d2c0   3m41s
-network-function-edge1-network-function-b                          38m
-network-function-edge1-network-function-c                          38m
-```
-
-It is also interesting to examine the YAML of a *PackageVariant*:
-
-```yaml
-kubectl get PackageVariant -n porch-demo network-function-auto-namespace-edge1-network-function-35079f9f -o yaml
-apiVersion: config.porch.kpt.dev/v1alpha1
-kind: PackageVariant
-metadata:
-  creationTimestamp: "2024-01-24T15:10:19Z"
-  finalizers:
-  - config.porch.kpt.dev/packagevariants
-  generation: 1
-  labels:
-    config.porch.kpt.dev/packagevariantset: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  name: network-function-auto-namespace-edge1-network-function-35079f9f
-  namespace: porch-demo
-  ownerReferences:
-  - apiVersion: config.porch.kpt.dev/v1alpha2
-    controller: true
-    kind: PackageVariantSet
-    name: network-function-auto-namespace
-    uid: 71edbdff-21c1-45f4-b9cb-6d2ecfc3da4e
-  resourceVersion: "404083"
-  uid: 5ae69c2d-6aac-4942-b717-918325650190
-spec:
-  downstream:
-    package: network-function-auto-namespace-x-cumulonimbus
-    repo: edge1
-  upstream:
-    package: network-function-auto-namespace
-    repo: management
-    revision: v1
-status:
-  conditions:
-  - lastTransitionTime: "2024-01-24T15:10:19Z"
-    message: all validation checks passed
-    reason: Valid
-    status: "False"
-    type: Stalled
-  - lastTransitionTime: "2024-01-24T15:10:49Z"
-    message: successfully ensured downstream package variant
-    reason: NoErrors
-    status: "True"
-    type: Ready
-  downstreamTargets:
-  - name: edge1-009659a8532552b86263434f68618554e12f4f7c
-```
-
-Our two packages are ready for deployment:
-
-```bash
-porchctl rpkg propose edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c proposed
-
-porchctl rpkg approve edge1-009659a8532552b86263434f68618554e12f4f7c --namespace=porch-demo
-edge1-009659a8532552b86263434f68618554e12f4f7c approved
-
-porchctl rpkg propose edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e proposed
-
-porchctl rpkg approve edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e --namespace=porch-demo
-edge1-77dbfed49b6cb0723b7c672b224de04c0cead67e approved
-```
-
-We can now check that the packages are deployed on the edge1 cluster and that the pods are running:
-
-```bash
-export KUBECONFIG=~/.kube/kind-edge1-config
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                 NAME                                               READY   STATUS    RESTARTS       AGE
-edge1-network-function-a                  network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a   network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)   4d22h
-network-function-b                        network-function-9779fc9f5-twh2g                   1/1     Running   0              45m
-network-function-c                        network-function-9779fc9f5-whhr8                   1/1     Running   0              44m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS              RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running             1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running             1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   0/1     ContainerCreating   0              1s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running             0              45m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running             0              44m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0              10s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0              45m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0              45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0              50s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0              46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0              45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS              RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running             1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running             1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running             0              51s
-network-function-auto-namespace-y-cumulonimbus   network-function-auto-namespace-85bc658d67-tp5m8   0/1     ContainerCreating   0              1s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running             0              46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running             0              45m
-
-kubectl get pod -A | egrep '(NAMESPACE|network-function)'
-NAMESPACE                                        NAME                                               READY   STATUS    RESTARTS       AGE
-edge1-network-function-a                         network-function-9779fc9f5-87scj                   1/1     Running   1 (2d1h ago)   4d22h
-edge1-network-function-auto-namespace-a          network-function-auto-namespace-85bc658d67-rbzt6   1/1     Running   1 (2d1h ago)   4d22h
-network-function-auto-namespace-x-cumulonimbus   network-function-auto-namespace-85bc658d67-86gml   1/1     Running   0              54s
-network-function-auto-namespace-y-cumulonimbus   network-function-auto-namespace-85bc658d67-tp5m8   1/1     Running   0              4s
-network-function-b                               network-function-9779fc9f5-twh2g                   1/1     Running   0              46m
-network-function-c                               network-function-9779fc9f5-whhr8                   1/1     Running   0              45m
-```
diff --git a/content/en/docs/porch/user-guides/using-authenticated-private-registries.md b/content/en/docs/porch/user-guides/using-authenticated-private-registries.md
deleted file mode 100644
index ce5dd67f..00000000
--- a/content/en/docs/porch/user-guides/using-authenticated-private-registries.md
+++ /dev/null
@@ -1,160 +0,0 @@
----
-title: "Using authenticated private registries with the Porch function runner"
-type: docs
-weight: 5
-description: ""
----
-
-The Porch function runner pulls kpt function images from registries and uses them to render kpt packages in Porch. By
-default, the function runner fetches kpt function images from public container registries such as
-[GCR](https://gcr.io/kpt-fn/); the configuration options described here are not required for such public registries.
-
-## 1. Configuring the function runner to operate with private container registries
-
-This section describes how to set up authentication for a private container registry containing kpt functions, whether
-hosted online (e.g. GitHub's GHCR) or locally (e.g. Harbor or JFrog), that requires authentication (username/password
-or token).
-
-To enable the Porch function runner to pull kpt function images from authenticated private registries, you must:
-
-1. Create a Kubernetes secret using a JSON file following the Docker configuration schema, containing valid
-   credentials for each authenticated registry.
-2. Mount this new secret as a volume on the function runner.
-3. Configure private registry functionality in the function runner's arguments:
-    1. Enable the functionality using the argument *--enable-private-registries*.
-    2. Provide the path and name of the mounted secret using the arguments *--registry-auth-secret-path* and
-       *--registry-auth-secret-name* respectively.
-
-### 1.1 Kubernetes secret setup for private registry using docker configuration
-
-An example template of a docker *config.json* file is shown below. The base64-encoded value
-*bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=* of the *auth* key decodes to *my_username:my_password*, which is the format used by
-the configuration when authenticating.
-
-```json
-{
-    "auths": {
-        "https://index.docker.io/v1/": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        },
-        "ghcr.io": {
-            "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
-        }
-    }
-}
-```
-
-A quick way to generate this secret using your docker *config.json* is to run the following command (substituting your
-own secret name):
-
-```bash
-kubectl create secret generic <secret-name> --from-file=.dockerconfigjson=/path/to/your/config.json --type=kubernetes.io/dockerconfigjson --dry-run=client -o yaml -n porch-system
-```
-
-{{% alert title="Note" color="primary" %}}
-The secret must be in the same namespace as the function runner deployment. By default, this is the *porch-system*
-namespace.
-{{% /alert %}}
-
-This should generate a secret template, similar to the one below, which you can add to the *2-function-runner.yaml*
-file in the Porch catalog package found
-[here](https://github.com/nephio-project/catalog/tree/main/nephio/core/porch).
-
-```yaml
-apiVersion: v1
-data:
-  .dockerconfigjson: <base64-encoded-docker-config>
-kind: Secret
-metadata:
-  creationTimestamp: null
-  name: <secret-name>
-  namespace: porch-system
-type: kubernetes.io/dockerconfigjson
-```
-
-### 1.2 Mounting the docker configuration secret to the function runner
-
-Next you must mount the secret as a volume on the function runner deployment. Add the following sections to the
-Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/auth-secret
-        name: docker-config
-        readOnly: true
-volumes:
-  - name: docker-config
-    secret:
-      secretName: <secret-name>
-```
-
-You may specify your desired paths for each `mountPath:` so long as the function runner can access them.
-
-{{% alert title="Note" color="primary" %}}
-The chosen `mountPath:` should use its own dedicated subdirectory, so that it does not overwrite the access
-permissions of an existing directory. For example, if you wish to mount under `/var/tmp`, use a dedicated subdirectory
-such as `mountPath: /var/tmp/auth-secret`.
-{{% /alert %}}
-
-### 1.3 Configuring function runner environment variables for private registries
-
-Lastly, you must enable the private registry functionality and provide the path and name of the secret.
-Add the `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments to
-the function-runner Deployment object in the *2-function-runner.yaml* file:
-
-```yaml
-command:
-  - --enable-private-registries=true
-  - --registry-auth-secret-path=/var/tmp/auth-secret/.dockerconfigjson
-  - --registry-auth-secret-name=<secret-name>
-```
-
-The `--enable-private-registries`, `--registry-auth-secret-path` and `--registry-auth-secret-name` arguments have
-default values of *false*, */var/tmp/auth-secret/.dockerconfigjson* and *auth-secret* respectively; however, they
-should be overridden to enable the functionality and to match your setup.
-
-With this last step, if your Porch package uses kpt function images stored in a private registry (for example
-`- image: ghcr.io/private-registry/set-namespace:customv2`), the function runner will replicate your secret into the
-`porch-fn-system` namespace and specify it as an `imagePullSecret` for the function pods, as documented
-[here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/).
-
-## 2. Configuring the function runner to use custom TLS for private container registries
-
-If your private container registry uses a custom certificate for TLS authentication, then extra configuration is
-required for the function runner to integrate with it:
-
-1. Create a Kubernetes secret containing TLS information valid for all the private registries you wish to use.
-2. Mount the secret containing the registries' TLS information to the function runner, as in section 1.2.
-3. Enable TLS functionality and provide the path of the mounted secret to the function runner using the arguments
-   *--enable-private-registries-tls* and *--tls-secret-path* respectively.
-
-### 2.1 Kubernetes secret layout for the TLS certificate
-
-A typical secret containing TLS information takes a format similar to the following:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: <tls-secret-name>
-  namespace: porch-system
-data:
-  <certificate-key>: <base64-encoded-certificate>
-type: kubernetes.io/tls
-```
-
-{{% alert title="Note" color="primary" %}}
-The content of `<base64-encoded-certificate>` must decode to a certificate in PEM (Privacy Enhanced Mail) format, and
-the `<certificate-key>` must be *ca.crt* or *ca.pem*. No other values are accepted.
-{{% /alert %}}
-
-### 2.2 Mounting the TLS certificate secret to the function runner
-
-The TLS secret must then be mounted onto the function runner, as was done for the docker configuration secret in
-section 1.2:
-
-```yaml
-    volumeMounts:
-      - mountPath: /var/tmp/tls-secret/
-        name: tls-registry-config
-        readOnly: true
-volumes:
-  - name: tls-registry-config
-    secret:
-      secretName: <tls-secret-name>
-```
-
-### 2.3 Configuring function runner environment variables for TLS on private registries
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments are only required if a private registry has
-TLS enabled. They indicate to the function runner that it should attempt authentication to the registry using TLS, and
-that it should use the TLS certificate information found on the path provided in *--tls-secret-path*.
-
-```yaml
-command:
-  - --enable-private-registries-tls=true
-  - --tls-secret-path=/var/tmp/tls-secret/
-```
-
-The *--enable-private-registries-tls* and *--tls-secret-path* arguments have default values of *false* and
-*/var/tmp/tls-secret/* respectively; however, they should be configured by the user and are only necessary when using
-a private registry secured with TLS.
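-
-Putting sections 1 and 2 together: for a registry that needs both credentials and a custom TLS certificate, the
-combined function-runner arguments might look like the following sketch (the secret name and mount paths are the
-illustrative values used above):
-
-```yaml
-command:
-  # Credentials for authenticated pulls (section 1)
-  - --enable-private-registries=true
-  - --registry-auth-secret-path=/var/tmp/auth-secret/.dockerconfigjson
-  - --registry-auth-secret-name=<secret-name>
-  # Custom TLS certificate for the registry (section 2)
-  - --enable-private-registries-tls=true
-  - --tls-secret-path=/var/tmp/tls-secret/
-```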
-
-### 2.4 Function runner logic flow when TLS registries are enabled
-
-It is important to note that enabling TLS registry functionality makes the function runner attempt a connection to the
-registry specified in the kpt function image reference using the mounted TLS certificate. If this certificate is
-invalid for the provided registry, it tries again using the intermediate certificates stored on the machine for use in
-TLS with well-known websites (e.g. GitHub). If this also fails, it attempts to connect without TLS; if this last
-resort fails, it returns an error to the user.
-
-{{% alert title="Note" color="primary" %}}
-It is vital that the user has pre-configured the Kubernetes node on which the function runner is operating with the
-same TLS certificate information as is used in the `<tls-secret-name>` secret. If this is not configured correctly,
-then even if the certificate is correctly configured in the function runner, the kpt function will not run: the
-function runner will be able to pull the image, but the KRM function pod created to run the function will fail with
-the error *x509 certificate signed by unknown authority*.
-This pre-configuration setup is heavily cluster/implementation-dependent; consult your cluster's specific
-documentation about adding self-signed certificates or private/internal CA certs to your cluster.
-{{% /alert %}}
diff --git a/go.sum b/go.sum
index e69de29b..a130a73d 100644
--- a/go.sum
+++ b/go.sum
@@ -0,0 +1,8 @@
+github.com/FortAwesome/Font-Awesome v0.0.0-20230327165841-0698449d50f2/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo=
+github.com/FortAwesome/Font-Awesome v0.0.0-20241216213156-af620534bfc3/go.mod h1:IUgezN/MFpCDIlFezw3L8j83oeiIuYoj28Miwr/KUYo=
+github.com/google/docsy v0.12.0 h1:CddZKL39YyJzawr8GTVaakvcUTCJRAAYdz7W0qfZ2P4=
+github.com/google/docsy v0.12.0/go.mod h1:1bioDqA493neyFesaTvQ9reV0V2vYy+xUAnlnz7+miM=
+github.com/google/docsy/dependencies v0.7.2 h1:+t5ufoADQAj4XneFphz4A+UU0ICAxmNaRHVWtMYXPSI=
+github.com/google/docsy/dependencies v0.7.2/go.mod h1:gihhs5gmgeO+wuoay4FwOzob+jYJVyQbNaQOh788lD4=
+github.com/twbs/bootstrap v5.2.3+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0=
+github.com/twbs/bootstrap v5.3.6+incompatible/go.mod h1:fZTSrkpSf0/HkL0IIJzvVspTt1r9zuf7XlZau8kpcY0=