
Commit fdede1e

Merge pull request #294 from abhatt-rh/med-diag-adoc
Porting stashed changes from the merged branch onto main (med-diag-adoc)
2 parents 199e84a + 53bf712

File tree

7 files changed (+179 -351 lines)


.gitignore

Lines changed: 1 addition & 1 deletion
@@ -15,4 +15,4 @@ Gemfile.lock
 .vscode
 .idea
 .vale
-modules/.vale.ini
+modules/.vale.ini

content/patterns/medical-diagnosis/_index.adoc

Lines changed: 40 additions & 27 deletions
@@ -24,55 +24,67 @@ ci: medicaldiag
 :_content-type: ASSEMBLY
 include::modules/comm-attributes.adoc[]

-== Background
+//Module to be included
+//:_content-type: CONCEPT
+//:imagesdir: ../../images
+[id="about-med-diag-pattern"]
+= About the {med-pattern}

-This Validated Pattern is based on a demo implementation of an automated data pipeline for chest Xray
-analysis previously developed by Red Hat. The original demo can be found link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.
+Background::

-This validated pattern includes the same functionality as the original demonstration. The difference is
-that we use the _GitOps_ framework to deploy the pattern including operators, creation of namespaces,
-and cluster configuration. Using GitOps provides a much more efficient means of doing continuous deployment.
+This validated pattern is based on a demo implementation of an automated data pipeline for chest Xray analysis previously developed by Red Hat. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.

-What does this pattern do?:
+This validated pattern includes the same functionality as the original demonstration. The difference is that we use the GitOps framework to deploy the pattern, including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.

-* Ingest chest Xrays from a simulated Xray machine and puts them into an objectStore based on Ceph.
-* The objectStore sends a notification to a Kafka topic.
+Workflow::
+
+* Ingest chest Xrays from a simulated Xray machine and put them into an `objectStore` based on Ceph.
+* The `objectStore` sends a notification to a Kafka topic.
 * A KNative Eventing Listener to the topic triggers a KNative Serving function.
 * An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images.
-* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed and anonymized, as well as full metrics collected from Prometheus.
+* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, and anonymized, as well as full metrics collected from Prometheus.

 This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].

 image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]

-This validated pattern is still under development. Any questions or concerns
-please contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
+[WARNING]
+====
+This validated pattern is still under development. If you have any questions or concerns, contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
+====

-=== Solution elements
+[id="about-solution-med"]
+== About the solution elements

-* How to use a GitOps approach to keep in control of configuration and operations
+The solution aids the understanding of the following:
+
+* How to use a GitOps approach to stay in control of configuration and operations.
 * How to deploy AI/ML technologies for medical diagnosis using GitOps.

-=== Red Hat Technologies
+The {med-pattern} uses the following Red Hat products and technologies:

-* {rh-ocp} (Kubernetes)
-* {rh-gitops} (ArgoCD)
-* Red Hat AMQ Streams (Apache Kafka Event Broker)
-* Red Hat OpenShift Serverless (Knative Eventing, Knative Serving)
-* Red Hat OpenShift Data Foundations (Cloud Native storage)
-* Grafana dashboard (OpenShift Grafana Operator)
+* {rh-ocp} for container orchestration
+* {rh-gitops}, a GitOps continuous delivery (CD) product for Kubernetes
+* {rh-amq-first}, an event streaming platform based on Apache Kafka
+* {rh-serverless-first} for event-driven applications
+* {rh-ocp-data-first} for cloud-native storage capabilities
+* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
 * Open Data Hub
 * S3 storage

-== Architecture
+[id="about-architecture-med"]
+== About the architecture

-In this iteration of the pattern *there is no edge component* . Future releases have planned Edge deployment capabilities as part of the pattern architecture.
+[IMPORTANT]
+====
+There is no edge component in this iteration of the pattern. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
+====

 image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]

 Components are running on OpenShift either at the data center or at the medical facility (or public cloud running OpenShift).

-=== Physical Schema
+[id="about-physical-schema-med"]
+=== About the physical schema

 The diagram below shows the components that are deployed with the various networks that connect them.

@@ -82,11 +94,12 @@ The diagram below shows the components that are deployed with the data flows

 image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]

-== Recorded Demo
+== Recorded demo

 link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]

-== What Next
+[id="next-steps_med-diag-index"]
+== Next steps

 * Getting started link:getting-started[Deploy the Pattern]
-* Visit the link:https://github.com/hybrid-cloud-patterns/medical-diagnosis[repository]
+//We have relevant links on the patterns page
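The Workflow:: list added above describes an event-driven chain: an S3-style notification from the Ceph object store lands on a Kafka topic, and a KNative Eventing listener on that topic triggers the KNative Serving function that runs the model. The following minimal Python sketch makes that hop concrete; the topic name, bootstrap address, payload shape, and endpoint URL are illustrative assumptions, not values from the pattern's repository.

[,python]
----
# Sketch of the event-driven hop described in the Workflow:: list,
# assuming Ceph RGW emits S3-style bucket-notification JSON.
# Topic, bootstrap address, and endpoint URL are hypothetical.
import json

import requests
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "xray-images",                                   # assumed topic name
    bootstrap_servers="kafka-kafka-bootstrap:9092",  # assumed AMQ Streams service
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # S3-style notifications carry one or more records, each naming
    # the bucket and the object key of the newly stored image.
    for record in message.value.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Forward the image reference to the risk-assessment service
        # (in the pattern, a KNative Serving function running the model).
        response = requests.post(
            "http://risk-assessment.xraylab.svc/predict",  # assumed URL
            json={"bucket": bucket, "key": key},
            timeout=30,
        )
        print(f"{key}: HTTP {response.status_code}")
----

In the deployed pattern no such consumer exists as application code; KNative Eventing's Kafka integration performs the same trigger step declaratively.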

content/patterns/medical-diagnosis/cluster-sizing.adoc

Lines changed: 12 additions & 207 deletions
@@ -7,48 +7,21 @@ aliases: /medical-diagnosis/cluster-sizing/
 :toc:
 :imagesdir: /images
 :_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]

-[id="tested-platforms-cluster-sizing"]
-== Tested Platforms
+//AI: Removed the following since we have CI status linked on the patterns page
+//[id="tested-platforms-cluster-sizing"]
+//== Tested Platforms

-The *Medical Diagnosis* pattern has been tested in the following Certified Cloud Providers.
-
-|===
-| *Certified Cloud Providers* | 4.8 | 4.9 | 4.10 | 4.11
-
-| Amazon Web Services
-| Tested
-| Tested
-| Tested
-| Tested
-
-| Google Compute
-|
-|
-|
-|
-
-| Microsoft Azure
-|
-|
-|
-|
-|===
+//AI: Removed the following in favor of the link to OCP docs
+//[id="general-openshift-minimum-requirements-cluster-sizing"]
+//== General OpenShift Minimum Requirements

+To know the minimum requirements for an {ocp} cluster, for example on bare metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Minimum resource requirements for cluster installation].

-[id="general-openshift-minimum-requirements-cluster-sizing"]
-== General OpenShift Minimum Requirements

-OpenShift 4 has the following minimum requirements for sizing of nodes:
-
-* *Minimum 4 vCPU* (additional are strongly recommended).
-* *Minimum 16 GB RAM* (additional memory is strongly recommended, especially if etcd is colocated on Control Planes).
-* *Minimum 40 GB* hard disk space for the file system containing /var/.
-* *Minimum 1 GB* hard disk space for the file system containing /usr/local/bin/.
-
-
-[id="medical-diagnosis-pattern-components-cluster-sizing"]
-=== Medical Diagnosis Pattern Components
+[id="about-med-diagnosis-components"]
+=== About {med-pattern} components

 Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern:

@@ -84,7 +57,7 @@ Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern:


 [id="medical-diagnosis-pattern-openshift-cluster-size-cluster-sizing"]
-=== Medical Diagnosis Pattern OpenShift Cluster Size
+=== About {med-pattern} OpenShift Cluster Size

 The Medical Diagnosis pattern has been tested with a defined set of configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture.

@@ -112,172 +85,4 @@ The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or
 | Standard_D8s_v3
 |===

-[id="aws-instance-types-cluster-sizing"]
-=== AWS Instance Types
-
-The *Medical Diagnosis* pattern was tested with the highlighted AWS instances in *bold*. The OpenShift installer will let you know if the instance type meets the minimum requirements for a cluster.
-
-The message that the openshift installer will give you will be similar to this message
-
-[,text]
-----
-INFO Credentials loaded from default AWS environment variables
-FATAL failed to fetch Metadata: failed to load asset "Install Config": [controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 4 vCPUs, controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 16384 MiB Memory]
-----
-
-Below you can find a list of the AWS instance types that can be used to deploy the *Medical Diagnosis* pattern.
-
-[cols="^,^,^,^,^"]
-|===
-| Instance type | Default vCPUs | Memory (GiB) | Hub | Factory/Edge
-
-|
-|
-|
-| 3x3 OCP Cluster
-| 3 Node OCP Cluster
-
-| m4.xlarge
-| 4
-| 16
-| N
-| N
-
-| *m4.2xlarge*
-| 8
-| 32
-| Y
-| Y
-
-| m4.4xlarge
-| 16
-| 64
-| Y
-| Y
-
-| m4.10xlarge
-| 40
-| 160
-| Y
-| Y
-
-| m4.16xlarge
-| 64
-| 256
-| Y
-| Y
-
-| *m5.xlarge*
-| 4
-| 16
-| Y
-| N
-
-| *m5.2xlarge*
-| 8
-| 32
-| Y
-| Y
-
-| *m5.4xlarge*
-| 16
-| 64
-| Y
-| Y
-
-| m5.8xlarge
-| 32
-| 128
-| Y
-| Y
-
-| m5.12xlarge
-| 48
-| 192
-| Y
-| Y
-
-| m5.16xlarge
-| 64
-| 256
-| Y
-| Y
-
-| m5.24xlarge
-| 96
-| 384
-| Y
-| Y
-|===
-
-The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers. For the node sizes we used the *m5.4xlarge* on AWS and this instance type met the minimum requirements to deploy the *Medical Diagnosis* pattern successfully.
-
-To understand better what types of nodes you can use on other Cloud Providers we provide some of the details below.
-
-[id="azure-instance-types-cluster-sizing"]
-=== Azure Instance Types
-
-The *Medical Diagnosis* pattern was also deployed on Azure using the *Standard_D8s_v3* VM size. Below is a table of different VM sizes available for Azure. Keep in mind that due to limited access to Azure we only used the *Standard_D8s_v3* VM size.
-
-The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers.
-
-|===
-| Type | Sizes | Description
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general[General purpose]
-| B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4
-| Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-compute[Compute optimized]
-| F, Fs, Fsv2, FX
-| High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers.
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory[Memory optimized]
-| Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2
-| High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-storage[Storage optimized]
-| Lsv2
-| High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases.
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu[GPU]
-| NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4
-| Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs.
-
-| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-hpc[High performance compute]
-| HB, HBv2, HBv3, HC, H
-| Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA).
-|===
-
-For more information please refer to the https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Azure VM Size Page].
-
-[id="google-cloud-gcp-instance-types-cluster-sizing"]
-=== Google Cloud (GCP) Instance Types
-
-The *Medical Diagnosis* pattern was also deployed on GCP using the *n1-standard-8* VM size. Below is a table of different VM sizes available for GCP. Keep in mind that due to limited access to GCP we only used the *n1-standard-8* VM size.
-
-The OpenShift cluster is made of 3 Control Plane and 3 Workers cluster.
-
-The following table provides VM recommendations for different workloads.
-
-|===
-| *General purpose* | *Workload optimized* | | | |
-
-| Cost-optimized | Balanced | Scale-out optimized | Memory-optimized | Compute-optimized | Accelerator-optimized
-
-| E2
-| N2, N2D, N1
-| T2D
-| M2, M1
-| C2
-| A2
-
-| Day-to-day computing at a lower cost
-| Balanced price/performance across a wide range of VM shapes
-| Best performance/cost for scale-out workloads
-| Ultra high-memory workloads
-| Ultra high performance for compute-intensive workloads
-| Optimized for high performance computing workloads
-|===
-
-For more information please refer to the https://cloud.google.com/compute/docs/machine-types[GCP VM Size Page].
+//Removed section for instance types as we did for MCG
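The installer message quoted in the removed listing encodes the sizing floor that the deleted tables expressed: control-plane instance types with fewer than 4 vCPUs or less than 16384 MiB of memory are rejected. The following short Python sketch mirrors that check; the shape data reproduces a few rows of the deleted AWS table, and the helper itself is illustrative, not installer code.

[,python]
----
# Sketch of the minimum-resource check quoted in the removed section:
# the OpenShift installer rejects control-plane shapes below
# 4 vCPUs or 16384 MiB of memory.
MIN_VCPUS = 4
MIN_MEMORY_MIB = 16384

# instance type -> (default vCPUs, memory in MiB), from the deleted table
AWS_SHAPES = {
    "m4.large": (2, 8192),      # rejected in the quoted FATAL message
    "m5.xlarge": (4, 16384),    # exactly meets the installer floor
    "m5.2xlarge": (8, 32768),
    "m5.4xlarge": (16, 65536),  # size used in the tested deployment
}

def meets_minimum(vcpus: int, memory_mib: int) -> bool:
    """Return True if a shape satisfies the installer's floor."""
    return vcpus >= MIN_VCPUS and memory_mib >= MIN_MEMORY_MIB

for shape, (vcpus, memory_mib) in AWS_SHAPES.items():
    verdict = "ok" if meets_minimum(vcpus, memory_mib) else "below minimum"
    print(f"{shape}: {vcpus} vCPU / {memory_mib} MiB -> {verdict}")
----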
