content/learn/about.adoc

---
menu: learn
title: About Validated Patterns
weight: 10
---
:toc:

Validated Patterns and upstream Community Patterns are a natural progression from reference architectures with additional value. Here is a brief video to explain what patterns are all about:

This effort is focused on customer solutions that involve multiple Red Hat products. The patterns include one or more applications that are based on successfully deployed customer examples. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required for the deployment to work. Users can then modify the pattern for their own specific application.

How do we select and produce a pattern? We look for novel customer use cases, ob…

The automation also enables the solution to be added to Continuous Integration (CI), with triggers for new product versions (including betas), so that we can proactively find and fix breakage and avoid bit-rot.

[id="who-should-use-these-patterns"]
== Who should use these patterns?

It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced https://www.cncf.io/projects/[Cloud Native] concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift GitOps (https://argoproj.github.io/argo-cd/[ArgoCD]), Advanced Cluster Management (https://open-cluster-management.io/[Open Cluster Management]), and OpenShift Pipelines (https://tekton.dev/[Tekton]).

[id="general-structure"]
== General Structure

All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use https://console.redhat.com/openshift[cloud.redhat.com].
The documentation uses the `oc` command syntax, but `kubectl` can be used interchangeably. For each deployment it is assumed that the user is logged into a cluster using the `oc login` command or by exporting the `KUBECONFIG` path, as in the sketch below.
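
A minimal login sketch; the API URL, credentials, and kubeconfig path are placeholders:

[source,terminal]
----
# Log in directly (replace the API URL and credentials with your own)
$ oc login https://api.my-cluster.example.com:6443 -u kubeadmin -p '<password>'

# Or point oc/kubectl at an existing kubeconfig instead of logging in
$ export KUBECONFIG=$HOME/clusters/my-cluster/auth/kubeconfig

# Either way, verify the session
$ oc whoami
----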
The diagram below outlines the general deployment flow of a datacenter application.
But first the user must create a fork of the pattern repository. This allows changes to operational elements (configurations, etc.) and to application code to be pushed to the forked repository for DevOps continuous integration (CI). Clone the fork to your laptop/desktop; future changes can be pushed to your fork.

image::/images/gitops-datacenter.png[GitOps for Datacenter]

. Make a copy of the values file. There may be one or more values files, for example `values-global.yaml` and/or `values-datacenter.yaml`. While most of these values let you specify subscriptions, operators, applications, and other application specifics, there are also _secrets_, which may include encrypted keys or user IDs and passwords. It is important that you make a copy and *do not push your personal values file to a repository accessible to others!*
. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`); see the sketch after this list.
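
A sketch of the overall fork-personalize-deploy flow, assuming a hypothetical pattern repository name and file paths; the exact commands vary per pattern:

[source,terminal]
----
# Clone your fork of the pattern repository (placeholder name)
$ git clone git@github.com:<your-org>/example-pattern.git
$ cd example-pattern

# Keep your personalized copy of the values file out of shared repositories
$ cp values-global.yaml ~/private/values-global.yaml

# Deploy via whichever entry point the pattern documents
$ helm install example-pattern .
# ...or...
$ make deploy
----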
When the workload is deployed, the pattern first deploys OpenShift GitOps. OpenShift GitOps then takes over and makes sure that all applications and components of the pattern are deployed, including required operators and application code.
Most patterns will have an Advanced Cluster Management operator deployed so that multi-cluster deployments can be managed.

[id="edge-patterns"]
== Edge Patterns

Some patterns include both a data center and one or more edge clusters. The diagram below outlines the general deployment flow of applications on an edge cluster. The edge OpenShift cluster is often smaller than the datacenter cluster. Sometimes this might be a three-node cluster that allows workloads to be deployed on the master nodes; it might even be a single node cluster (SNO). It might be deployed on bare metal, on local virtual machines, or in a public/private cloud. Provision the cluster (see above).

image::/images/gitops-edge.png[GitOps for Edge]

. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found [here]. You're done.
When the cluster is imported, ACM on the datacenter deploys an ACM agent and agent-addon pod into the edge cluster. Once installed and running, ACM then deploys OpenShift GitOps onto the cluster. OpenShift GitOps then deploys whatever applications are required for that cluster, based on a label.
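
A quick way to check this handoff, assuming the default ACM and GitOps namespaces:

[source,terminal]
----
# On the hub: list the clusters that ACM manages
$ oc get managedclusters

# On the edge cluster: the ACM agent pods run in this namespace
$ oc get pods -n open-cluster-management-agent

# On the edge cluster: confirm that GitOps was deployed
$ oc get pods -n openshift-gitops
----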

[id="openshift-gitops-argocd"]
== OpenShift GitOps (a.k.a ArgoCD)

When OpenShift GitOps is deployed and running in a cluster (datacenter or edge), you can launch its console by choosing ArgoCD in the upper left part of the OpenShift Console. (TO-DO whenry to add an image and clearer instructions here)
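
The console URL can also be looked up from the command line; a sketch, assuming the operator's default route and namespace names:

[source,terminal]
----
# Host name of the ArgoCD/GitOps console (default install locations)
$ oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}'
----
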
* *What are they:* Best practice implementations conforming to the Validated Patterns implementation practices
* *Purpose:* Codify best practices and promote collaboration between different groups inside, and external to, Red Hat
* *Creator:* Customers, Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams

[id="requirements"]
== Requirements

General requirements for all Community and Validated Patterns.

[id="base"]
=== Base

. Patterns *MUST* include a top-level README highlighting the business problem and how the pattern solves it.
. Patterns *MUST* include an architecture drawing. The specific tool/format is flexible as long as the meaning is clear.
. Patterns *MUST* undergo an informal architecture review by a community leader to ensure that the solution has the right products, and they are generally being used as intended.
+
For example: not using a database as a message bus.
As community leaders, we may subject contributions from within Red Hat to a higher level of scrutiny.
While we strive to be inclusive, the community maintains quality standards; using the framework does not automatically imply that a solution is suitable for the community to endorse/publish.
. Patterns *MUST* undergo an informal technical review by a community leader to ensure that it conforms to the link:/requirements/implementation/[technical requirements] and meets basic reuse standards.
. Patterns *MUST* document their support policy.
+
It is anticipated that most community patterns will be supported by the community on a best-effort basis, but this should be stated explicitly.
The validated patterns team commits to maintaining the framework but will also accept help.
. Patterns SHOULD include a recorded demo highlighting the business problem and how the pattern solves it.

content/learn/faq.adoc

weight: 90
aliases: /faq/
---
:toc:

= FAQ

[id="what-is-a-hybrid-cloud-pattern"]
== What is a Hybrid Cloud Pattern?
Hybrid Cloud Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Hybrid Cloud Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.
Many things have changed in the IT landscape in the last few years - containers and Kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.
The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible only for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
We are actively developing new Hybrid Cloud Patterns. Watch this space for updates!

[id="how-are-they-different-from-xyz"]
== How are they different from XYZ?

Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.

[id="what-technologies-are-used"]
== What technologies are used?

Key technologies in the stack for Industrial Edge include:

* Red Hat OpenShift Container Platform
* Red Hat Advanced Cluster Management
* Red Hat OpenShift GitOps (based on ArgoCD)
* Red Hat OpenShift Pipelines (based on Tekton)
* Red Hat Integration - AMQ Broker (ActiveMQ Artemis MQTT)
* Red Hat Integration - AMQ Streams (Kafka)
* Red Hat Integration - Camel K
* Seldon Operator
In the future, we expect to further use Red Hat OpenShift, and expand the integrations with other elements of the ecosystem. How can the concept of GitOps integrate with a fleet of devices that are not running Kubernetes? What about integrations with bare metal or VM servers? Sounds like a job for Ansible! We expect to tackle some of these problems in future patterns.

[id="how-are-they-structured"]
== How are they structured?

Hybrid Cloud Patterns come in parts - we have a https://github.com/hybrid-cloud-patterns/common[common] repository with logic that applies to multiple patterns. Layered on top of that is our first pattern - https://github.com/hybrid-cloud-patterns/industrial-edge[industrial edge]. This layout allows individual applications within a pattern to be swapped out: customize the values files in the root of the repository to point those components at different branches, forks, or even entirely different repositories. (At present, the repositories all have to be on github.com and accessible with the same token.)
The common repository is primarily concerned with deploying the GitOps operator and creating the namespaces necessary to manage the pattern applications.
The pattern repository has the application-specific layout, and determines which components are installed in which places - hub or edge. The pattern repository also defines the hub and edge locations. Both the hub and edge are expected to have multiple components each - the hub will have pipelines and the CI/CD framework, as well as any centralization components or data analysis components. Edge components are designed to be smaller as we do not need to deploy Pipelines or the test and staging areas to the Edge.
Each application is described as a series of resources that are rendered into GitOps (ArgoCD) via Helm and Kustomize. The values for these charts are set by values files that need to be "personalized" (with your local cluster values) as the first step of installation. Subsequent pushes to the GitOps repository will be reflected in the clusters running the applications; a sketch of that loop follows.
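
A sketch of the personalize-and-push loop, assuming a private fork and a values file named `values-global.yaml` (names are placeholders):

[source,terminal]
----
# Edit the values file with your local cluster details
$ vi values-global.yaml

# Commit and push to your private fork; GitOps (ArgoCD) reconciles
# the change into the running clusters
$ git add values-global.yaml
$ git commit -m "Personalize values for my cluster"
$ git push origin main
----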

[id="who-is-behind-this"]
== Who is behind this?

Today, a team of Red Hat engineers including Andrew Beekhof (@beekhof), Lester Claudio (@claudiol), Martin Jackson (@mhjacks), William Henry (@ipbabble), Michele Baldessari (@mbaldessari), Jonny Rickard (@day0hero) and others.
Excited or intrigued by what you see here? We'd love to hear your thoughts and ideas! Try the patterns contained here and see below for links to our repositories and issue trackers.

[id="how-can-i-get-involved"]
== How can I get involved?

Try out what we've done and submit issues to our https://github.com/validatedpatterns/industrial-edge/issues[issue trackers].
We will review pull requests to our https://github.com/validatedpatterns/common[pattern] https://github.com/validatedpatterns/industrial-edge[repositories].