
Commit 2edb711

created modules for all main file, except for getting started

1 parent 2b819dc commit 2edb711

11 files changed: +386 −375 lines

content/patterns/medical-diagnosis/_index.adoc

Lines changed: 4 additions & 74 deletions

@@ -22,84 +22,14 @@ ci: medicaldiag
 :toc:
 :imagesdir: /images
 :_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-[id="about-med-diag-pattern"]
-= About the {med-pattern}
-
-Background::
-
-This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs.
-
-This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.
-
-Workflow::
-
-* Ingest chest X-rays from a simulated X-ray machine and puts them into an `objectStore` based on Ceph.
-* The `objectStore` sends a notification to a Kafka topic.
-* A KNative Eventing listener to the topic triggers a KNative Serving function.
-* An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images.
-* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, anonymized, and full metrics collected from Prometheus.
-
-This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].
-
-image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]
-
-//[NOTE]
-//====
-//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
-//====
-
-[id="about-solution-med"]
-== About the solution elements
-
-The solution aids the understanding of the following:
 
-* How to use a GitOps approach to keep in control of configuration and operations.
-* How to deploy AI/ML technologies for medical diagnosis using GitOps.
-
-The {med-pattern} uses the following products and technologies:
-
-* {rh-ocp} for container orchestration
-* {rh-gitops}, a GitOps continuous delivery (CD) solution
-* {rh-amq-first}, an event streaming platform based on the Apache Kafka
-* {rh-serverless-first} for event-driven applications
-* {rh-ocp-data-first} for cloud native storage capabilities
-* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
-* S3 storage
-
-[id="about-architecture-med"]
-== About the architecture
-
-[IMPORTANT]
-====
-Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
-====
-
-image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]
-
-Components are running on OpenShift either at the data center, at the medical facility, or public cloud running OpenShift.
-
-[id="about-physical-schema-med"]
-=== About the physical schema
-
-The following diagram shows the components that are deployed with the various networks that connect them.
-
-image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"]
-
-The following diagram shows the components that are deployed with the the data flows and API calls between them.
-
-image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]
+include::modules/comm-attributes.adoc[]
 
-== Recorded demo
+include::modules/med-about-medical-diagnosis.adoc[leveloffset=+1]
 
-link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]
+include::modules/med-architecture-schema.adoc[leveloffset=+1]
 
 [id="next-steps_med-diag-index"]
 == Next steps
 
-* Getting started link:getting-started[Deploy the Pattern]
-//We have relevant links on the patterns page
+* Getting started link:getting-started[Deploy the Pattern]
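The "Workflow" text that this commit moves into a module describes an event-driven chain: an X-ray lands in a Ceph-based object store, the store notifies a Kafka topic, and a Knative function scores the image. As a minimal, self-contained sketch of that flow — every name here (`ObjectStore`, `pneumonia_risk`, and so on) is a hypothetical stand-in, not code from the pattern:

```python
# Sketch of the event-driven pipeline described above: upload -> bucket
# notification -> subscriber runs inference. In the real pattern these
# roles are played by Ceph/S3 notifications, a Kafka topic, and a
# Knative Serving function; here they are collapsed into callbacks.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ObjectStore:
    """Stand-in for the S3 bucket; fires a notification per upload."""
    on_put: Callable[[str], None]
    objects: dict = field(default_factory=dict)

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data
        self.on_put(key)  # bucket notification -> topic -> subscriber


def pneumonia_risk(image: bytes) -> float:
    """Placeholder for the containerized ML model: a fake score in [0, 1]."""
    return min(len(image) / 100.0, 1.0)


def main() -> list[tuple[str, float]]:
    results: list[tuple[str, float]] = []

    # The "Knative" side: triggered once per notification event.
    def on_event(key: str) -> None:
        results.append((key, pneumonia_risk(store.objects[key])))

    store = ObjectStore(on_put=on_event)
    store.put("xray-001.png", b"\x00" * 42)  # simulated X-ray upload
    return results
```

The point of the sketch is the decoupling: the ingest side only writes to the store, and the scoring side only reacts to notifications, which is what lets each stage scale independently in the deployed pattern.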

content/patterns/medical-diagnosis/med-cluster-sizing.adoc

Lines changed: 3 additions & 91 deletions

@@ -7,97 +7,9 @@ aliases: /medical-diagnosis/cluster-sizing/
 :toc:
 :imagesdir: /images
 :_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-[id="about-openshift-cluster-sizing-med"]
-= About OpenShift cluster sizing for the {med-pattern}
-
-To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:
-
-|===
-| Name | Kind | Namespace | Description
-
-| Medical Diagnosis Hub
-| Application
-| medical-diagnosis-hub
-| Hub GitOps management
-
-| {rh-gitops}
-| Operator
-| openshift-operators
-| {rh-gitops-short}
-
-| {rh-ocp-data-first}
-| Operator
-| openshift-storage
-| Cloud Native storage solution
-
-| {rh-amq-streams}
-| Operator
-| openshift-operators
-| AMQ Streams provides Apache Kafka access
-
-| {rh-serverless-first}
-| Operator
-| - knative-serving (knative-eventing)
-| Provides access to Knative Serving and Eventing functions
-|===
-
-//AI: Removed the following since we have CI status linked on the patterns page
-//[id="tested-platforms-cluster-sizing"]
-//== Tested Platforms
 
-: Removed the following in favor of the link to OCP docs
-//[id="general-openshift-minimum-requirements-cluster-sizing"]
-//== General OpenShift Minimum Requirements
-The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].
-
-For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation].
-
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-
-[id="med-openshift-cluster-size"]
-=== About {med-pattern} OpenShift cluster size
-
-The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture.
-
-For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators.
-//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure)
-[NOTE]
-====
-You might want to add resources when more developers are working on building their applications.
-====
-
-The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.
-
-[cols="^,^,^,^"]
-|===
-| Node type | Number of nodes | Cloud provider | Instance type
-
-| Control plane and worker
-| 3 and 3
-| Google Cloud
-| n1-standard-8
-
-| Control plane and worker
-| 3 and 3
-| Amazon Cloud Services
-| m5.2xlarge
+include::modules/comm-attributes.adoc[]
 
-| Control plane and worker
-| 3 and 3
-| Microsoft Azure
-| Standard_D8s_v3
-|===
+include::modules/med-about-cluster-sizing.adoc[leveloffset=+1]
 
-[role="_additional-resources"]
-.Additional resource
-* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types]
-* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure]
-* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide]
-//Removed section for instance types as we did for MCG
+include::modules/med-ocp-cluster-sizing.adoc[leveloffset=+1]
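The sizing table moved out of this file documents a tested baseline of 3 control plane nodes plus 3 or more workers on a provider-specific instance type. A small pre-flight check over that table could look like the following sketch; the function name and dictionary are illustrative, not part of the pattern's tooling:

```python
# Tested configurations from the removed sizing table:
# provider -> instance type used for both control plane and workers.
TESTED_INSTANCE_TYPES = {
    "Google Cloud": "n1-standard-8",
    "Amazon Cloud Services": "m5.2xlarge",
    "Microsoft Azure": "Standard_D8s_v3",
}


def cluster_meets_baseline(provider: str, control_plane: int,
                           workers: int, instance_type: str) -> bool:
    """True if a planned cluster matches the documented baseline:
    exactly 3 control plane nodes, at least 3 workers, and one of the
    instance types the pattern was tested on for that provider."""
    return (
        control_plane == 3
        and workers >= 3
        and TESTED_INSTANCE_TYPES.get(provider) == instance_type
    )
```

For example, `cluster_meets_baseline("Amazon Cloud Services", 3, 3, "m5.2xlarge")` matches the AWS row of the table, while an undersized worker pool does not.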

content/patterns/medical-diagnosis/med-ideas-for-customization.adoc

Lines changed: 1 addition & 22 deletions

@@ -8,25 +8,4 @@ aliases: /medical-diagnosis/ideas-for-customization/
 :_content-type: ASSEMBLY
 include::modules/comm-attributes.adoc[]
 
-//Module to be included
-//:_content-type: CONCEPT
-//:imagesdir: ../../images
-
-[id="about-customizing-pattern-med"]
-= About customizing the pattern {med-pattern}
-
-One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA?
-The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}.
-
-[id="understanding-different-ways-to-use-med-pattern"]
-== Understanding different ways to use the {med-pattern}
-
-. The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease.
-. The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints.
-. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics.
-. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area.
-
-These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application.
-
-//We have relevant links on the patterns page
-//AI: Why does this point to AEG though? https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs]
+include::modules/med-about-customizing-pattern.adoc[leveloffset=+1]
