
Commit 8de1c85

Revert "Porting stashed changes from the merged branch onto main (med-diag-adoc)"
1 parent fdede1e commit 8de1c85

File tree

7 files changed (+351, -179 lines)


.gitignore

Lines changed: 1 addition & 1 deletion
@@ -15,4 +15,4 @@ Gemfile.lock
 .vscode
 .idea
 .vale
-modules/.vale.ini
+modules/.vale.ini

content/patterns/medical-diagnosis/_index.adoc

Lines changed: 27 additions & 40 deletions
@@ -24,67 +24,55 @@ ci: medicaldiag
 :_content-type: ASSEMBLY
 include::modules/comm-attributes.adoc[]
 
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-[id="about-med-diag-pattern"]
-= About the {med-pattern}
+== Background
 
-Background::
+This Validated Pattern is based on a demo implementation of an automated data pipeline for chest Xray
+analysis previously developed by Red Hat. The original demo can be found link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.
 
-This validated pattern is based on a demo implementation of an automated data pipeline for chest Xray analysis previously developed by Red Hat. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.
+This validated pattern includes the same functionality as the original demonstration. The difference is
+that we use the _GitOps_ framework to deploy the pattern including operators, creation of namespaces,
+and cluster configuration. Using GitOps provides a much more efficient means of doing continuous deployment.
 
-This validated pattern includes the same functionality as the original demonstration. The difference is that we use the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.
+What does this pattern do?:
 
-Workflow::
-
-* Ingest chest Xrays from a simulated Xray machine and puts them into an `objectStore` based on Ceph.
-* The `objectStore` sends a notification to a Kafka topic.
+* Ingest chest Xrays from a simulated Xray machine and puts them into an objectStore based on Ceph.
+* The objectStore sends a notification to a Kafka topic.
 * A KNative Eventing Listener to the topic triggers a KNative Serving function.
 * An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images.
-* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, and anonymized, as well as full metrics collected from Prometheus.
+* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed and anonymized, as well as full metrics collected from Prometheus.
 
 This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].
 
 image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]
 
-[WARNING]
-====
-This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio]
-====
+This validated pattern is still under development. Any questions or concerns
+please contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
 
-[id="about-solution-med"]
-== About the solution elements
+=== Solution elements
 
-The solution aids the understanding of the following:
-* How to use a GitOps approach to keep in control of configuration and operations.
+* How to use a GitOps approach to keep in control of configuration and operations
 * How to deploy AI/ML technologies for medical diagnosis using GitOps.
 
-The {med-pattern} uses the following Red Hat products and technologies:
+=== Red Hat Technologies
 
-* {rh-ocp} for container orchestration
-* {rh-gitops}, a GitOps continuous delivery (CD) product for Kubernetes.
-* {rh-amq-first}, an event streaming platform based on the Apache Kafka
-* {rh-serverless-first} for event-driven applications
-* {rh-ocp-data-first} for cloud native storage capabilities
-* {grafana-op} to manage and share Grafana dashboards, datasources, and so on
+* {rh-ocp} (Kubernetes)
+* {rh-gitops} (ArgoCD)
+* Red Hat AMQ Streams (Apache Kafka Event Broker)
+* Red Hat OpenShift Serverless (Knative Eventing, Knative Serving)
+* Red Hat OpenShift Data Foundations (Cloud Native storage)
+* Grafana dashboard (OpenShift Grafana Operator)
 * Open Data Hub
 * S3 storage
 
-[id="about-architecture-med"]
-== About the architecture
+== Architecture
 
-[IMPORTANT]
-====
-There is no edge component in this iteration of the pattern. Edge deployment capabilities are planned part of the pattern architecture for a future release.
-====
+In this iteration of the pattern *there is no edge component* . Future releases have planned Edge deployment capabilities as part of the pattern architecture.
 
 image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]
 
 Components are running on OpenShift either at the data center or at the medical facility (or public cloud running OpenShift).
 
-[id="about-physical-schema-med"]
-=== About the physical schema
+=== Physical Schema
 
 The diagram below shows the components that are deployed with the various networks that connect them.
 
@@ -94,12 +82,11 @@ The diagram below shows the components that are deployed with the the data flows
 
 image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]
 
-== Recorded demo
+== Recorded Demo
 
 link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]
 
-[id="next-steps_med-diag-index"]
-== Next steps
+== What Next
 
 * Getting started link:getting-started[Deploy the Pattern]
-//We have relevant links on the patterns page
+* Visit the link:https://github.com/hybrid-cloud-patterns/medical-diagnosis[repository]
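For orientation, the event-driven core of the workflow described in this file (objectStore notification, Kafka topic, KNative Serving function, model inference) might look roughly like the sketch below. This is not code from the pattern repository: the S3 endpoint, credentials, and model call are hypothetical placeholders, and the handler assumes the AWS-S3-style `Records` payload that Ceph RGW bucket notifications emit.

[,python]
----
# Hypothetical sketch of the KNative Serving step: an HTTP function that
# receives a Ceph/S3 bucket-notification event and scores the new image.
# Endpoint, credentials, and model are illustrative, not from the pattern repo.
import boto3
from flask import Flask, request, jsonify

app = Flask(__name__)

# Ceph RGW exposes an S3-compatible endpoint inside the cluster (assumed URL).
s3 = boto3.client(
    "s3",
    endpoint_url="http://rook-ceph-rgw.openshift-storage.svc",  # assumption
    aws_access_key_id="REPLACE_ME",
    aws_secret_access_key="REPLACE_ME",
)

def assess_pneumonia_risk(image_bytes: bytes) -> float:
    """Placeholder for the containerized ML model's inference call."""
    raise NotImplementedError("model inference goes here")

@app.post("/")
def handle_notification():
    # S3-style bucket notifications carry a Records list with bucket/key info.
    event = request.get_json(force=True)
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        results.append({"image": key, "risk": assess_pneumonia_risk(body)})
    return jsonify(results)

if __name__ == "__main__":
    app.run(port=8080)
----

In the pattern itself the function is invoked through KNative Eventing rather than called directly; the sketch only illustrates the handler side of that step.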

content/patterns/medical-diagnosis/cluster-sizing.adoc

Lines changed: 207 additions & 12 deletions
@@ -7,21 +7,48 @@ aliases: /medical-diagnosis/cluster-sizing/
 :toc:
 :imagesdir: /images
 :_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
 
-//AI: Removed the following since we have CI status linked on the patterns page
-//[id="tested-platforms-cluster-sizing"]
-//== Tested Platforms
+[id="tested-platforms-cluster-sizing"]
+== Tested Platforms
 
-//AI: Removed the following in favor of the link to OCP docs
-//[id="general-openshift-minimum-requirements-cluster-sizing"]
-//== General OpenShift Minimum Requirements
+The *Medical Diagnosis* pattern has been tested in the following Certified Cloud Providers.
+
+|===
+| *Certified Cloud Providers* | 4.8 | 4.9 | 4.10 | 4.11
+
+| Amazon Web Services
+| Tested
+| Tested
+| Tested
+| Tested
+
+| Google Compute
+|
+|
+|
+|
+
+| Microsoft Azure
+|
+|
+|
+|
+|===
 
-To know the minimum requirements for an {ocp} cluster, for example on bare-metal, see link: https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Minimum resource requirements for cluster installation].
 
+[id="general-openshift-minimum-requirements-cluster-sizing"]
+== General OpenShift Minimum Requirements
 
-[id="about-med-diagnosis-components"]
-=== About {med-pattern} components
+OpenShift 4 has the following minimum requirements for sizing of nodes:
+
+* *Minimum 4 vCPU* (additional are strongly recommended).
+* *Minimum 16 GB RAM* (additional memory is strongly recommended, especially if etcd is colocated on Control Planes).
+* *Minimum 40 GB* hard disk space for the file system containing /var/.
+* *Minimum 1 GB* hard disk space for the file system containing /usr/local/bin/.
+
+
+[id="medical-diagnosis-pattern-components-cluster-sizing"]
+=== Medical Diagnosis Pattern Components
 
 Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern:
 
@@ -57,7 +84,7 @@ Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern:
 
 
 [id="medical-diagnosis-pattern-openshift-cluster-size-cluster-sizing"]
-=== About {med-pattern} OpenShift Cluster Size
+=== Medical Diagnosis Pattern OpenShift Cluster Size
 
 The Medical Diagnosis pattern has been tested with a defined set of configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture.
 
@@ -85,4 +112,172 @@ The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or
 | Standard_D8s_v3
 |===
 
-//Removed section for instance types as we did for MCG
+[id="aws-instance-types-cluster-sizing"]
+=== AWS Instance Types
+
+The *Medical Diagnosis* pattern was tested with the highlighted AWS instances in *bold*. The OpenShift installer will let you know if the instance type meets the minimum requirements for a cluster.
+
+The message that the openshift installer will give you will be similar to this message
+
+[,text]
+----
+INFO Credentials loaded from default AWS environment variables
+FATAL failed to fetch Metadata: failed to load asset "Install Config": [controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 4 vCPUs, controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 16384 MiB Memory]
+----
+
+Below you can find a list of the AWS instance types that can be used to deploy the *Medical Diagnosis* pattern.
+
+[cols="^,^,^,^,^"]
+|===
+| Instance type | Default vCPUs | Memory (GiB) | Hub | Factory/Edge
+
+|
+|
+|
+| 3x3 OCP Cluster
+| 3 Node OCP Cluster
+
+| m4.xlarge
+| 4
+| 16
+| N
+| N
+
+| *m4.2xlarge*
+| 8
+| 32
+| Y
+| Y
+
+| m4.4xlarge
+| 16
+| 64
+| Y
+| Y
+
+| m4.10xlarge
+| 40
+| 160
+| Y
+| Y
+
+| m4.16xlarge
+| 64
+| 256
+| Y
+| Y
+
+| *m5.xlarge*
+| 4
+| 16
+| Y
+| N
+
+| *m5.2xlarge*
+| 8
+| 32
+| Y
+| Y
+
+| *m5.4xlarge*
+| 16
+| 64
+| Y
+| Y
+
+| m5.8xlarge
+| 32
+| 128
+| Y
+| Y
+
+| m5.12xlarge
+| 48
+| 192
+| Y
+| Y
+
+| m5.16xlarge
+| 64
+| 256
+| Y
+| Y
+
+| m5.24xlarge
+| 96
+| 384
+| Y
+| Y
+|===
+
+The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers. For the node sizes we used the *m5.4xlarge* on AWS and this instance type met the minimum requirements to deploy the *Medical Diagnosis* pattern successfully.
+
+To understand better what types of nodes you can use on other Cloud Providers we provide some of the details below.
+
+[id="azure-instance-types-cluster-sizing"]
+=== Azure Instance Types
+
+The *Medical Diagnosis* pattern was also deployed on Azure using the *Standard_D8s_v3* VM size. Below is a table of different VM sizes available for Azure. Keep in mind that due to limited access to Azure we only used the *Standard_D8s_v3* VM size.
+
+The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers.
+
+|===
+| Type | Sizes | Description
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general[General purpose]
+| B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4
+| Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-compute[Compute optimized]
+| F, Fs, Fsv2, FX
+| High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers.
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory[Memory optimized]
+| Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2
+| High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-storage[Storage optimized]
+| Lsv2
+| High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases.
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu[GPU]
+| NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4
+| Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs.
+
+| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-hpc[High performance compute]
+| HB, HBv2, HBv3, HC, H
+| Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA).
+|===
+
+For more information please refer to the https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Azure VM Size Page].
+
+[id="google-cloud-gcp-instance-types-cluster-sizing"]
+=== Google Cloud (GCP) Instance Types
+
+The *Medical Diagnosis* pattern was also deployed on GCP using the *n1-standard-8* VM size. Below is a table of different VM sizes available for GCP. Keep in mind that due to limited access to GCP we only used the *n1-standard-8* VM size.
+
+The OpenShift cluster is made of 3 Control Plane and 3 Workers cluster.
+
+The following table provides VM recommendations for different workloads.
+
+|===
+| *General purpose* | *Workload optimized* | | | |
+
+| Cost-optimized | Balanced | Scale-out optimized | Memory-optimized | Compute-optimized | Accelerator-optimized
+
+| E2
+| N2, N2D, N1
+| T2D
+| M2, M1
+| C2
+| A2
+
+| Day-to-day computing at a lower cost
+| Balanced price/performance across a wide range of VM shapes
+| Best performance/cost for scale-out workloads
+| Ultra high-memory workloads
+| Ultra high performance for compute-intensive workloads
+| Optimized for high performance computing workloads
+|===
+
+For more information please refer to the https://cloud.google.com/compute/docs/machine-types[GCP VM Size Page].
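As an aside, the minimum-requirements check behind the FATAL message quoted in this diff (4 vCPUs, 16384 MiB memory) is easy to reproduce against these tables. A minimal sketch, with instance specs hard-coded from the AWS table above rather than fetched from any API:

[,python]
----
# Minimal sketch of the installer's minimum-requirements check
# (4 vCPUs, 16384 MiB memory, per the FATAL message quoted above).
# Instance specs are copied from the AWS table in this diff, not fetched live.
MIN_VCPUS = 4
MIN_MEMORY_MIB = 16 * 1024

AWS_INSTANCES = {
    "m4.large": (2, 8 * 1024),      # rejected, as in the FATAL example
    "m4.xlarge": (4, 16 * 1024),
    "m5.xlarge": (4, 16 * 1024),
    "m5.2xlarge": (8, 32 * 1024),
    "m5.4xlarge": (16, 64 * 1024),  # size used for the tested clusters
}

def meets_minimum(instance_type: str) -> bool:
    vcpus, memory_mib = AWS_INSTANCES[instance_type]
    return vcpus >= MIN_VCPUS and memory_mib >= MIN_MEMORY_MIB

for name, (vcpus, mem) in AWS_INSTANCES.items():
    verdict = "ok" if meets_minimum(name) else "below minimum"
    print(f"{name} ({vcpus} vCPU, {mem} MiB): {verdict}")
----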
