Our team recently completed development of a validated pattern that showcases what is possible when we combine OpenShift with other cutting-edge Red Hat technologies to deliver a solution.
We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps-style declarative Kubernetes resources. Using the validated patterns framework, we can deploy, manage, and integrate with multiple cutting-edge Red Hat technologies, and provide a capability that the initial deployment strategy lacked: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/validatedpatterns/medical-diagnosis).
Our first foray into the realm of Validated Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/validatedpatterns/industrial-edge).
This was our first use of a framework to deploy a significant application, and we learned a lot by doing it. It was good to face a number of "real world" problems before taking a look at what is truly essential to the framework and why.
Last but not least, let's apply the example route definition we just created.
```console
$ oc create -f /tmp/route-example.yaml
route.route.openshift.io/hello-openshift created
```
As you can see, the *subdomain* property was replaced with the *host* property.
Using the *subdomain* property when defining a route is very useful if you are deploying your application to different clusters, because it means you do not have to hard-code the ingress domain for every cluster.
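For reference, a route definition using *subdomain* might look like the following minimal sketch (the names are illustrative, reusing the hello-openshift example from above):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  namespace: hello-openshift
spec:
  # No host is set; the cluster combines the subdomain with its own
  # ingress domain to produce the final host value.
  subdomain: hello-openshift
  to:
    kind: Service
    name: hello-openshift
```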
If you have any questions or want to see what we are working on, please feel free to visit our [Validated Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here, we'd love to hear your thoughts and ideas! Try the patterns contained in our [Validated Patterns repo](https://github.com/validatedpatterns); we will review your pull requests to our pattern repositories.
content/contribute/contribute-to-docs.adoc
:toc:
//Use the Contributor's guide to learn about ways to contribute to the Validated Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines.
content/contribute/extending-a-pattern.md
# Extending an existing pattern
## Introduction to extending a pattern using a fork
Extending an existing pattern refers to adding a new product and/or configuration to an existing pattern. For example, a pattern might be a great fit for a solution but require the addition of an observability tool, e.g. Prometheus, Grafana, or Elastic. Extending an existing pattern is not very difficult, and the advantage is that it automates the integration of this extra product into the pattern.
Extending usually requires four steps:
1. Adding any required namespace for the product
1. Adding a subscription to install an operator
1. Adding one or more ArgoCD applications to manage the post-install configuration of the product
1. Adding the Helm chart needed to implement the post-install configuration identified in step 3.
Sometimes there is no operator in [OperatorHub](https://catalog.redhat.com/software/search?deployed_as=Operator) for the product, and it requires installation using a Helm chart.
These additions need to be made to the appropriate `values-<cluster grouping>.yaml` file in the top-level pattern directory. If the component runs on a hub cluster, the file is `values-hub.yaml`; if it runs on a production cluster, it is `values-production.yaml`. Look at the pattern architecture and decide where you need to add the product.
In the example below, AMQ Streams (Kafka) is chosen as the product to add to a pattern.
## Before starting, fork and clone first
1. Visit the GitHub page for the pattern that you wish to extend, e.g. [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops). Select "Fork" in the top right corner.
1. On the "Create a new fork" page, you can choose the owner repository and the name of the fork. Most of the time you will fork into your personal repo and leave the name the same. When you have made the appropriate changes, press the "Create fork" button.
1. You will need to clone the new fork onto your laptop/desktop so that you can do the extension work effectively. On the new fork's main page, select the green "Code" button and copy the git repo's SSH address, then clone as in the sketch below.
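A minimal sketch of the clone step (assuming the multicloud-gitops fork lives under your own GitHub account; substitute your username):

```console
$ git clone git@github.com:<your-username>/multicloud-gitops.git
$ cd multicloud-gitops
```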
## Adding a namespace
The first step is to add a namespace in the `values-<cluster group>.yaml`. Sometimes a specific namespace is expected in other parts of a product's configuration; for example, Red Hat Advanced Cluster Security expects to use the namespace `stackrox`, and while you might try using a different namespace, you may encounter issues.
In our example we are just going to add the namespace `my-kafka`.
```yaml
namespaces:
  ... # other namespaces above my-kafka
  - my-kafka
```
## Adding a subscription
The next step is to add the subscription information for the Kubernetes operator. Sometimes this subscription needs to be added to a specific namespace, e.g. `openshift-operators`, so check for any operator namespace requirements. In this example, just place it in the newly created `my-kafka` namespace.
```yaml
subscriptions:
  amq-streams:
    name: amq-streams
    namespace: my-kafka
```
## Adding the ArgoCD application
The next step is to add the application information. Sometimes you want to group applications in ArgoCD into a project; you can do this by using an existing project grouping or creating a new one. The example below uses an existing `project` called `my-app`.
```yaml
---
applications:
  kafka:
    name: kafka
    namespace: my-kafka
    project: my-app
    path: charts/all/kafka
```
## Adding the Helm Chart
The `path:` tag in the above kafka application tells ArgoCD where to find the Helm chart needed to deploy this application. Paths are relative to the top-level pattern directory, which in this example is `~/git/multicloud-gitops`.
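As a rough sketch (the contents are illustrative, not taken from the actual pattern), the chart referenced by that path would live under `charts/all/kafka` and start with a `Chart.yaml` such as:

```yaml
# charts/all/kafka/Chart.yaml -- illustrative example
apiVersion: v2
name: kafka
description: Post-install configuration for AMQ Streams (Kafka)
version: 0.1.0
```

The chart's `templates/` directory would then typically hold the post-install resources, such as the Kafka custom resources that the AMQ Streams operator reconciles.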
ArgoCD will continuously monitor the artifacts in that location and apply any updates. Each site type has its own `values-` file listing its subscriptions and applications.
content/learn/faq.adoc
= FAQ
[id="what-is-a-hybrid-cloud-pattern"]
== What is a Validated Pattern?
Validated Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Validated Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.
Many things have changed in the IT landscape in the last few years - containers and Kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Validated Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.
The first Validated Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
We are actively developing new Validated Patterns. Watch this space for updates!
[id="how-are-they-different-from-xyz"]
== How are they different from XYZ?
Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Validated Patterns are meant to demonstrate groups of technologies working together in a cloud-native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT, or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.
[id="what-technologies-are-used"]
== What technologies are used?
In the future, we expect to further use Red Hat OpenShift, and expand the integrations.
[id="how-are-they-structured"]
== How are they structured?
Validated Patterns come in parts - we have a https://github.com/validatedpatterns/common[common] repository with logic that applies to multiple patterns. Layered on top of that is our first pattern - https://github.com/validatedpatterns/industrial-edge[industrial edge]. This layout allows individual applications within a pattern to be swapped out: by customizing the values files in the root of the repository, individual components can point to different branches, forks, or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.)
The common repository is primarily concerned with deploying the GitOps operator and creating the namespaces necessary to manage the pattern applications.