Commit 9da4319 (parent d9a90da)

Updating URLs and links post the migration to the new site and the new repo for Validated Patterns


44 files changed: +158 additions, −156 deletions

README.md (1 addition & 1 deletion)

@@ -1,4 +1,4 @@
-# Hybrid Cloud Patterns documentation site
+# Validated Patterns documentation site

 This project contains the new proof-of-concept documentation site for validatedpatterns.io


content/blog/2021-12-31-medical-diagnosis.md (1 addition & 1 deletion)

@@ -14,7 +14,7 @@ aliases: /2021/12/31/medical-diagnosis/

 Our team recently completed the development of a validated pattern that showcases the capabilities we have at our fingertips when we combine OpenShift and other cutting edge Red Hat technologies to deliver a solution.

-We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps style declarative kubernetes resources. Using the validated pattern framework we are able to deploy, manage and integrate with multiple cutting edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/hybrid-cloud-patterns/medical-diagnosis).
+We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps style declarative kubernetes resources. Using the validated pattern framework we are able to deploy, manage and integrate with multiple cutting edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/validatedpatterns/medical-diagnosis).

 ## Pattern Workflow


content/blog/2022-03-30-multicloud-gitops.md (2 additions & 2 deletions)

@@ -12,9 +12,9 @@ aliases: /2022/03/30/multicloud-gitops/

 # Validated Pattern: Multi-Cloud GitOps

-## Hybrid Cloud Patterns: The Story so far
+## Validated Patterns: The Story so far

-Our first foray into the realm of Hybrid Cloud Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/hybrid-cloud-patterns/industrial-edge).
+Our first foray into the realm of Validated Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/validatedpatterns/industrial-edge).

 This was our first use of a framework to deploy a significant application, and we learned a lot by doing it. It was good to be faced with a number of problems in the “real world” before taking a look at what is really essential for the framework and why.


content/blog/2022-09-02-route.md (4 additions & 4 deletions)

@@ -26,7 +26,7 @@ kind: Route
 metadata:
   name: hello-openshift
 spec:
-  host: hello-openshift-hello-openshift.<Ingress_Domain>
+  host: hello-openshift-hello-openshift.<Ingress_Domain>
   port:
     targetPort: 8080
   to:

@@ -75,7 +75,7 @@ metadata:
   name: hello-openshift
   namespace: hello-openshift
 spec:
-  subdomain: hello-openshift-hello-openshift
+  subdomain: hello-openshift-hello-openshift
   port:
     targetPort: 8080
   to:

@@ -101,7 +101,7 @@ Now using project "hello-openshift" on server "https://api.magic-mirror-2.bluepr
 Last but not least now let's apply that example route definition we just created.

 ```console
-$ oc create -f /tmp/route-example.yaml
+$ oc create -f /tmp/route-example.yaml
 route.route.openshift.io/hello-openshift created
 ```

@@ -148,4 +148,4 @@ As you can see the *subdomain* property was replaced with the *host* property bu

 Using the *subdomain* property when defining route is super useful if you are deploying your application to different clusters and it will allow you to not have to hard code the ingress domain for every cluster.

-If you have any questions or want to see what we are working on please feel free to visit our [Hybrid Cloud Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here we’d love to hear your thoughts and ideas! Try the patterns contained in our [Hybrid Cloud Patterns Repo](https://github.com/hybrid-cloud-patterns). We will review your pull requests to our pattern repositories.
+If you have any questions or want to see what we are working on please feel free to visit our [Validated Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here we’d love to hear your thoughts and ideas! Try the patterns contained in our [Validated Patterns Repo](https://github.com/validatedpatterns). We will review your pull requests to our pattern repositories.
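
The hunks above come from a post contrasting a hard-coded `host` with the cluster-agnostic `subdomain` property. A minimal Route using `subdomain`, reconstructed from the snippets in this diff — the metadata and `spec` fields match the quoted hunks, while the indentation and the `to:` service target are assumptions:

```yaml
# Route using spec.subdomain: the router appends its own ingress domain,
# so the same manifest works unchanged on any cluster.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  namespace: hello-openshift
spec:
  subdomain: hello-openshift-hello-openshift
  port:
    targetPort: 8080
  to:
    kind: Service
    name: hello-openshift   # assumed service name, not shown in the hunks
```

On a cluster whose ingress domain is `apps.example.com`, this resolves to `hello-openshift-hello-openshift.apps.example.com` without the domain ever appearing in the manifest.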

content/blog/2022-10-12-acm-provisioning.md (2 additions & 2 deletions)

@@ -41,7 +41,7 @@ the pay-as-you-go OpenShift managed service.

 Start by [deploying](https://validatedpatterns.io/multicloud-gitops/getting-started/) the Multi-cloud GitOps pattern on AWS.

-Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/hybrid-cloud-patterns/multicloud-gitops/)
+Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops/)
 repo. Go there in a browser, make sure you’re logged in to GitHub, click the
 “Fork” button, and confirm the destination by clicking the big green "Create
 fork" button.

@@ -56,7 +56,7 @@ And finally, click through to the installed operator, and select the `Create
 instance` button and fill out the Create a Pattern form. Most of the defaults
 are fine, but make sure you update the GitSpec URL to point to your fork of
 `multicloud-gitops`, rather than
-`https://github.com/hybrid-cloud-patterns/multicloud-gitops`.
+`https://github.com/validatedpatterns/multicloud-gitops`.

 ### Providing your Cloud Credentials

content/contribute/contribute-to-docs.adoc (1 addition & 1 deletion)

@@ -7,7 +7,7 @@ weight: 10

 :toc:

-//Use the Contributor's guide to learn about ways to contribute to the Hybrid Cloud Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines.
+//Use the Contributor's guide to learn about ways to contribute to the Validated Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines.

 include::modules/contributing.adoc[leveloffset=+1]
 include::modules/tools-and-setup.adoc[leveloffset=+1]

content/contribute/extending-a-pattern.md (12 additions & 12 deletions)

@@ -8,25 +8,25 @@ aliases: /extending-a-pattern/
 # Extending an existing pattern

 ## Introduction to extending a pattern using a fork
-Extending an existing pattern refers to adding a new product and/or configuration to an existing pattern. For example a pattern might be a great fit for a solution but requires the addition of an observability tool, e.g. Prometheus, Grafana, or Elastic. Extending an existing pattern is not very difficult. The advantage is that it automates the integration of this extra product into pattern.
+Extending an existing pattern refers to adding a new product and/or configuration to an existing pattern. For example a pattern might be a great fit for a solution but requires the addition of an observability tool, e.g. Prometheus, Grafana, or Elastic. Extending an existing pattern is not very difficult. The advantage is that it automates the integration of this extra product into pattern.

 Extending usually requires four steps:
-1. Adding any required namespace for the product
+1. Adding any required namespace for the product
 1. Adding a subscription to install and operator
 1. Adding one or more ArgoCD applications to manage the post-install configuration of the product
 1. Adding the Helm chart needed to implement the post-install configuration identified in step 3.

-Sometimes there is no operator in [OperatorHub](https://catalog.redhat.com/software/search?deployed_as=Operator) for the product and it requires installation using a Helm chart.
+Sometimes there is no operator in [OperatorHub](https://catalog.redhat.com/software/search?deployed_as=Operator) for the product and it requires installation using a Helm chart.

 These additions need to be made to the appropriate `values-<cluster grouping>.yaml` file in the top level pattern directory. If the component is on a hub cluster the file would be `values-hub.yaml`. If it's on a production cluster that would be in `values-production.yaml`. Look at the pattern architecture and decide where you need to add the product.

 In the example below AMQ Streams (Kafka) is chosen as a product to add to a pattern.

 ## Before starting, fork and clone first

-1. Visit the github page for the pattern that you wish to extend. E.g. [multicloud-gitops](https://github.com/hybrid-cloud-patterns/multicloud-gitops). Select “Fork” in the top right corner.
+1. Visit the github page for the pattern that you wish to extend. E.g. [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops). Select “Fork” in the top right corner.

-1. On the create a new fork page, you can choose what owner repository you want and the name of the fork. Most times you will fork into your personal repo and leave the name the same. When you have made the appropriate changes press the "Create fork" button.
+1. On the create a new fork page, you can choose what owner repository you want and the name of the fork. Most times you will fork into your personal repo and leave the name the same. When you have made the appropriate changes press the "Create fork" button.

 1. You will need to clone from the new fork onto you laptop/desktop so that you can do the extension work effectively. So on the new fork’s main page elect the green “Code” button and copy the git repo’s ssh address.

@@ -39,7 +39,7 @@ In the example below AMQ Streams (Kafka) is chosen as a product to add to a patt
 ```

 ## Adding a namespace
-The first step is to add a namespace in the `values-<cluster group>.yaml`. Sometimes a specific namespace is expected in other parts of a products configuration. E.g. Red Hat Advanced Cluster Security expects to use the namespace `stackrox`. While you might try using a different namespace you may encounter issues.
+The first step is to add a namespace in the `values-<cluster group>.yaml`. Sometimes a specific namespace is expected in other parts of a products configuration. E.g. Red Hat Advanced Cluster Security expects to use the namespace `stackrox`. While you might try using a different namespace you may encounter issues.

 In our example we are just going to add the namespace `my-kafka`.

@@ -48,7 +48,7 @@ In our example we are just going to add the namespace `my-kafka`.
 namespaces:
   ... # other namespaces above my-kafka
   - my-kafka
-```
+```

 ## Adding a subscription
 The next step is to add the subscription information for the Kubernetes Operator. Sometimes this subscription needs to be added to a specific namespace, e.g. `openshift-operators`. Check for any operator namespace requirements. In this example just place it in the newly created `my-kafka` namespace.

@@ -60,11 +60,11 @@ subscriptions:
   amq-streams:
     name: amq-streams
     namespace: my-kafka
-```
+```

 ## Adding the ArgoCd application
 The next step is to add the application information. Sometimes you want to group applications in ArgoCD into a project and you can do this by using an existing project grouping or create a new project group. The example below uses an existing `project` called `my-app`.
-
+
 ```yaml
 ---
 applications:

@@ -73,10 +73,10 @@ applications:
     namespace: my-kafka
     project: my-app
     path: charts/all/kafka
-```
+```

 ## Adding the Helm Chart
-The `path:` tag in the above kafka application tells ArgoCD where to find the Helm Chart needed to deploy this application. Paths are relative the the top level pattern directory and therefore in my example that is `~/git/multicloud-gitops`.
+The `path:` tag in the above kafka application tells ArgoCD where to find the Helm Chart needed to deploy this application. Paths are relative the the top level pattern directory and therefore in my example that is `~/git/multicloud-gitops`.

 ArgoCD will continuously monitor for changes to artifacts in that location for updates to apply. Each different site type would have its own `values-` file listing subscriptions and applications.

@@ -149,7 +149,7 @@ metadata:
 # annotations:
 #   argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
 #
-# NOTE if needed you can use argocd sync-wave to delay a manifest
+# NOTE if needed you can use argocd sync-wave to delay a manifest
 #   argocd.argoproj.io/sync-wave: "3"
 spec:
   entityOperator:
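
Taken together, the namespace, subscription, and application hunks in this file amount to one edit in the cluster-group values file. A consolidated sketch, assuming the layout of the multicloud-gitops values files — the `my-kafka` and `my-app` names come from the quoted example, while the `kafka:` application key and its `name:` field are assumptions (they fall outside the visible hunks):

```yaml
# values-hub.yaml (fragment) -- adds AMQ Streams (Kafka) to the hub cluster group
namespaces:
  - my-kafka              # namespace the operator and chart deploy into

subscriptions:
  amq-streams:
    name: amq-streams     # operator package name from OperatorHub
    namespace: my-kafka

applications:
  kafka:                  # assumed application key
    name: kafka           # assumed ArgoCD application name
    namespace: my-kafka
    project: my-app       # existing ArgoCD project grouping
    path: charts/all/kafka  # Helm chart, relative to the pattern repo root
```

Keeping all three stanzas in the same `values-<cluster group>.yaml` is what lets the framework create the namespace, install the operator, and hand post-install configuration to ArgoCD in a single commit.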

content/learn/faq.adoc (7 additions & 7 deletions)

@@ -10,20 +10,20 @@ aliases: /faq/
 = FAQ

 [id="what-is-a-hybrid-cloud-pattern"]
-== What is a Hybrid Cloud Pattern?
+== What is a Validated Pattern?

-Hybrid Cloud Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Hybrid Cloud Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.
+Validated Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Validated Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.

-Many things have changed in the IT landscape in the last few years - containers and kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.
+Many things have changed in the IT landscape in the last few years - containers and kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Validated Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.

-The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
+The first Validated Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.

-We are actively developing new Hybrid Cloud Patterns. Watch this space for updates!
+We are actively developing new Validated Patterns. Watch this space for updates!

 [id="how-are-they-different-from-xyz"]
 == How are they different from XYZ?

-Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.
+Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Validated Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.

 [id="what-technologies-are-used"]
 == What technologies are used?

@@ -44,7 +44,7 @@ In the future, we expect to further use Red Hat OpenShift, and expand the integr
 [id="how-are-they-structured"]
 == How are they structured?

-Hybrid Cloud Patterns come in parts - we have a https://github.com/hybrid-cloud-patterns/common[common] repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - https://github.com/hybrid-cloud-patterns/industrial-edge[industrial edge]. This layout allows for individual applications within a pattern to be swapped out by pointing to different repositories or branches for those individual components by customizing the values files in the root of the repository to point to different branches or forks or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.)
+Validated Patterns come in parts - we have a https://github.com/validatedpatterns/common[common] repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - https://github.com/validatedpatterns/industrial-edge[industrial edge]. This layout allows for individual applications within a pattern to be swapped out by pointing to different repositories or branches for those individual components by customizing the values files in the root of the repository to point to different branches or forks or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.)

 The common repository is primarily concerned with how to deploy the GitOps operator, and to create the namespaces that will be necessary to manage the pattern applications.

