// content/patterns/medical-diagnosis/_index.adoc
:toc:
:imagesdir: /images
:_content-type: ASSEMBLY

include::modules/comm-attributes.adoc[]

//Module to be included
//:_content-type: CONCEPT
//:imagesdir: ../../images

[id="about-med-diag-pattern"]
= About the {med-pattern}

Background::

This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that {redhat} previously developed for the US Department of Veterans Affairs. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here].

This validated pattern provides the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern, including the Operators, the creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment.
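As an illustrative sketch only (not the pattern's actual manifests; the repository URL, chart path, and names below are hypothetical), a GitOps engine such as {rh-gitops} (Argo CD) deploys components declaratively from an `Application` resource similar to:

[source,yaml]
----
# Hypothetical Argo CD Application; the real pattern generates
# equivalent resources from its own values files.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: medical-diagnosis            # hypothetical name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/medical-diagnosis-gitops  # placeholder repo
    targetRevision: main
    path: charts/medical-diagnosis   # hypothetical chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: medical-diagnosis-hub
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
----

With `syncPolicy.automated` set, the cluster continuously converges on whatever the Git repository declares, which is what makes GitOps an efficient continuous-deployment mechanism.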
Workflow::

* Ingest chest X-rays from a simulated X-ray machine and put them into an `objectStore` based on Ceph.
* The `objectStore` sends a notification to a Kafka topic.
* A Knative Eventing listener on the topic triggers a Knative Serving function.
* An ML-trained model running in a container makes a risk assessment of pneumonia for incoming images.
* A Grafana dashboard displays the pipeline in real time, along with the incoming, processed, and anonymized images, and full metrics collected from Prometheus.

This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].
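The Kafka-to-serverless hop in the workflow above can be sketched with a Knative `KafkaSource` that forwards the bucket-notification events to a Knative Service. This is an illustrative example, not the pattern's actual configuration; the broker address, topic, and service names are assumptions:

[source,yaml]
----
# Hypothetical KafkaSource: delivers messages from a Kafka topic
# (fed by object-store bucket notifications) to a Knative Service.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: xray-images                # hypothetical name
  namespace: knative-eventing
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092   # placeholder broker
  topics:
    - xray-images                                 # placeholder topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: risk-assessment        # hypothetical inference service
----

Because the sink is a Knative Service, the inference container scales from zero on demand as images arrive, which is the event-driven behavior the workflow describes.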
//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
//====

[id="about-solution-med"]
== About the solution elements

The solution helps you understand the following:
* How to use a GitOps approach to stay in control of configuration and operations.
* How to deploy AI/ML technologies for medical diagnosis by using GitOps.

The {med-pattern} uses the following products and technologies:

* {rh-ocp} for container orchestration
* {rh-gitops}, a GitOps continuous delivery (CD) solution
* {rh-amq-first}, an event streaming platform based on Apache Kafka
* {rh-serverless-first} for event-driven applications
* {rh-ocp-data-first} for cloud-native storage capabilities
* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
* S3 storage
[id="about-architecture-med"]
== About the architecture

[IMPORTANT]
====
Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
====

= About OpenShift cluster sizing for the {med-pattern}

To understand the cluster sizing requirements for the {med-pattern}, consider the following components that the pattern deploys on the datacenter, or hub, OpenShift cluster:

|===
| Name | Kind | Namespace | Description

| Medical Diagnosis Hub
| Application
| medical-diagnosis-hub
| Hub GitOps management

| {rh-gitops}
| Operator
| openshift-operators
| {rh-gitops-short}

| {rh-ocp-data-first}
| Operator
| openshift-storage
| Cloud-native storage solution

| {rh-amq-streams}
| Operator
| openshift-operators
| AMQ Streams provides Apache Kafka access

| {rh-serverless-first}
| Operator
| knative-serving (knative-eventing)
| Provides access to Knative Serving and Eventing functions
|===

//AI: Removed the following since we have CI status linked on the patterns page
//[id="tested-platforms-cluster-sizing"]
//== Tested Platforms
//Removed the following in favor of the link to OCP docs
The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].

For information about requirements for additional platforms, see the link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation].

//Module to be included
//:_content-type: CONCEPT
//:imagesdir: ../../images

[id="med-openshift-cluster-size"]
=== About {med-pattern} OpenShift cluster size

The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers use on the x86_64 architecture.

For the {med-pattern}, the OpenShift cluster size must be slightly larger than the minimum to support the compute and storage demands of OpenShift Data Foundation and other Operators.
//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure)
[NOTE]
====
You might want to add resources as more developers build their applications.
====

The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.
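A deployment of three control plane nodes and three or more worker nodes maps to the `controlPlane` and `compute` stanzas of an OpenShift `install-config.yaml`. The fragment below is a sketch only; the domain and AWS instance types are placeholders, so choose instance types according to the sizing guidance and your cloud provider's documentation:

[source,yaml]
----
# Illustrative install-config.yaml fragment (values are placeholders)
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  replicas: 3                # standard 3-node control plane
  platform:
    aws:
      type: m5.xlarge        # placeholder instance type
compute:
- name: worker
  replicas: 3                # 3 or more worker nodes
  platform:
    aws:
      type: m5.2xlarge       # placeholder instance type
----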

[cols="^,^,^,^"]
|===
| Node type | Number of nodes | Cloud provider | Instance type
|===

One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just one example of how AI/ML workloads built for object detection and classification can run on OpenShift clusters. Consider your own workloads for a moment: how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by government privacy laws or HIPAA?

The {med-pattern} can meet either of these requirements by using {serverless-short} and {ocp-data-short}.

== Understanding different ways to use the {med-pattern}

. The {med-pattern} scans X-ray images to determine the probability that a patient has pneumonia. Staying on the medical path, the pattern could be used for other early-detection scenarios that rely on object detection and classification. For example, it could scan CT images for anomalies in the body such as sepsis, cancer, or even benign tumors, or be used to detect blood clots, some forms of heart disease, and bowel disorders such as Crohn's disease.
. The Transportation Security Administration (TSA) could use the {med-pattern} to enhance its existing scanning capabilities and detect, with higher probability, restricted items carried on a person or hidden in a piece of luggage. With machine learning operations (MLOps), the model continually trains and learns to better detect dangerous items that are not necessarily metallic, such as certain firearms or knives. The model also trains to dismiss items that are authorized, ultimately saving passengers from being stopped and searched at security checkpoints.
. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with some probability what those objects are. For example, the model could be trained to determine the type of a ship, potentially its country of origin, and other identifying characteristics.
. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of each item, captured under different types of light, could be analyzed to help expose defects before packaging and distribution, and defective items could be routed to a defect area.

These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application.

//We have relevant links on the patterns page
//AI: Why does this point to AEG though? https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs]