This repository was archived by the owner on Jan 29, 2026. It is now read-only.

Commit 8a9ed3d

Merge branch 'master' into travis-job
2 parents d537351 + 0b93b0a commit 8a9ed3d

9 files changed: 18 additions & 20 deletions

community/FfDL-Seldon/pytorch-model/README.md

Lines changed: 1 addition & 2 deletions
@@ -15,7 +15,7 @@ You can skip this step if you are happy to use the already packaged image ```sel
 The runtime MNIST scorer is contained within a standalone [python class PyMnist.py](./PyMnist.py). This needs to be packaged in a Docker container to run within Seldon. For this we use [Redhat's Source-to-image](https://github.com/openshift/source-to-image).
 
 * Install [S2I](https://github.com/openshift/source-to-image#installation)
-* From the pytorch-model folder run the following s2i build. You will need to change *seldonio* to your Docker repo. You will need at least 8GB for your local Docker.
+* From the pytorch-model folder run the following s2i build. You will need to change *seldonio* to your Docker repo. **You will need at least 8GB for your local Docker.**
 
 ```
 s2i build . seldonio/seldon-core-s2i-python2 seldonio/ffdl-pymnist:0.1
 ```
@@ -51,4 +51,3 @@ To test the running model with example MNIST images you can run either of two no
 
 * [Ambassador Example](serving_ambassador.ipynb)
 * [Seldon OAuth Example](serving_oauth.ipynb)
-
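Once the scorer image is deployed behind Seldon, predictions go through the `/api/v0.1/predictions` route exercised by the example notebooks. As a hedged sketch (the `data`/`ndarray` envelope is an assumption based on Seldon's v0.1 payload convention, not something shown in this commit), the request body for one flattened MNIST image can be built like this:

```python
import json

def build_prediction_payload(pixels):
    """Build a JSON body for Seldon's /api/v0.1/predictions route.

    Assumed envelope: {"data": {"ndarray": [...]}} with one inner
    list per instance to score.
    """
    return json.dumps({"data": {"ndarray": [list(pixels)]}})

# A fake all-zero 28x28 MNIST image, flattened to 784 floats.
image = [0.0] * 784
body = build_prediction_payload(image)
print(len(json.loads(body)["data"]["ndarray"][0]))  # 784
```

The body would then be POSTed to the Ambassador or OAuth endpoint shown in the notebooks; the payload shape above should be verified against the Seldon version actually deployed.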

community/FfDL-Seldon/tf-model/README.md

Lines changed: 1 addition & 2 deletions
@@ -14,7 +14,7 @@ You can skip this step if you are happy to use the already packaged image ```sel
 The runtime MNIST scorer is contained within a standalone [python class TFMnist.py](./TFMnist.py). This needs to be packaged in a Docker container to run within Seldon. For this we use [Redhat's Source-to-image](https://github.com/openshift/source-to-image).
 
 * Install [S2I](https://github.com/openshift/source-to-image#installation)
-* From the tf-model folder run, (*change seldonio to your Docker repo*):
+* From the tf-model folder run, (*change seldonio to your Docker repo*). **You will need at least 8GB for your local Docker.**:
 
 ```
 s2i build . seldonio/seldon-core-s2i-python2 seldonio/ffdl-mnist:0.1
 ```
@@ -50,4 +50,3 @@ To test the running model with example MNIST images you can run either of two no
 
 * [Ambassador Example](serving_ambassador.ipynb)
 * [Seldon OAuth Example](serving_oauth.ipynb)
-

demos/fashion-mnist-training/fashion-mnist-webapp/README.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ The webapp is designed to take images that are uploaded, display them on the web
 ```
 
 2. Modify deployment resource from the template `fashion-mnist-webapp.yaml`. You need to set:
-   * `MODEL_ENDPOINT`: Your Seldon model endpoint. (e.g. http://<AMBASSADOR_API_IP>/seldon/<modelDeploymentName>/api/v0.1/predictions")
+   * `MODEL_ENDPOINT`: Your Seldon model endpoint. (e.g. http://<AMBASSADOR_API_IP>/seldon/<Model_Deployment_Name>/api/v0.1/predictions) The `AMBASSADOR_API_IP` is your `seldon-core-ambassador`'s service endpoint, which by default is exposed with NodePort.
    * `image` : Your web app image at DockerHub
 
 3. Congratulations, your web app should be running now. You can use the following commands to check where your web app is hosted.
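The changed line spells out how `MODEL_ENDPOINT` is assembled from the Ambassador service address and the Seldon deployment name. A small sketch of that assembly (the helper name and arguments are hypothetical, introduced here only to make the URL structure explicit):

```python
def model_endpoint(ambassador_ip, deployment_name, port=None):
    """Assemble the MODEL_ENDPOINT value described in step 2.

    `ambassador_ip` may already include the NodePort; pass `port`
    separately when it does not.
    """
    host = f"{ambassador_ip}:{port}" if port else ambassador_ip
    return f"http://{host}/seldon/{deployment_name}/api/v0.1/predictions"

# Example with a placeholder node IP and NodePort.
print(model_endpoint("10.0.0.1", "fashion-mnist", port=30080))
# http://10.0.0.1:30080/seldon/fashion-mnist/api/v0.1/predictions
```

The `10.0.0.1`/`30080` values are placeholders; substitute the node IP and the NodePort that `kubectl get svc seldon-core-ambassador` reports in your cluster.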

demos/fashion-mnist-training/fashion-mnist-webapp/fashion-mnist-webapp.yaml

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ spec:
         env:
         - name: MODEL_ENDPOINT
           # Put down your Seldon model endpoint
-          value: <your Seldon model endpoint hosted on AMBASSADOR_API_IP>
+          value: http://<PUBLIC_IP:AMBASSADOR_API_NODEPORT>/seldon/<Model_Deployment_Name>/api/v0.1/predictions
         ports:
         - containerPort: 8088
 ---

demos/fashion-mnist-training/seldon-deployment/fashion-seldon.json

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
 {
-    "apiVersion": "machinelearning.seldon.io/v1alpha1",
+    "apiVersion": "machinelearning.seldon.io/v1alpha2",
     "kind": "SeldonDeployment",
     "metadata": {
         "labels": {
@@ -17,7 +17,7 @@
         "oauth_secret": "oauth-secret",
         "predictors": [
             {
-                "componentSpec": {
+                "componentSpecs": [{
                     "spec": {
                         "containers": [
                             {
@@ -76,7 +76,7 @@
                     ],
                     "terminationGracePeriodSeconds": 20
                 }
-            },
+            }],
             "graph": {
                 "children": [],
                 "name": "classifier",
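This diff makes two coordinated schema changes: the `apiVersion` moves from `v1alpha1` to `v1alpha2`, and each predictor's singular `componentSpec` object becomes a `componentSpecs` list. A sketch of applying the same migration to a deployment loaded as a Python dict (assuming predictors sit under `spec` as in this file; this is an illustration, not an official Seldon migration tool):

```python
def migrate_to_v1alpha2(deployment):
    """Apply this commit's two schema changes to a v1alpha1
    SeldonDeployment dict, in place, and return it:
    bump apiVersion and wrap each predictor's componentSpec in a list.
    """
    deployment["apiVersion"] = "machinelearning.seldon.io/v1alpha2"
    for predictor in deployment.get("spec", {}).get("predictors", []):
        if "componentSpec" in predictor:
            predictor["componentSpecs"] = [predictor.pop("componentSpec")]
    return deployment

old = {
    "apiVersion": "machinelearning.seldon.io/v1alpha1",
    "spec": {"predictors": [{"componentSpec": {"spec": {}}}]},
}
new = migrate_to_v1alpha2(old)
print(new["apiVersion"])  # machinelearning.seldon.io/v1alpha2
```

Keeping both edits together matters: a `v1alpha2` deployment with a bare `componentSpec`, or vice versa, would be rejected by the CRD validation.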

docs/user-guide.md

Lines changed: 3 additions & 3 deletions
@@ -25,15 +25,15 @@ Currently, Fabric for Deep Learning supports the following community frameworks
 
 | Framework | Versions | Processing Unit |
 | ------------- | ------------- | --------------- |
-| [tensorflow](https://hub.docker.com/r/tensorflow/tensorflow/) | 1.3.0, 1.3.0-py3, 1.4.0, 1.4.0-py3, 1.5.0, 1.5.0-py3, 1.5.1, 1.5.1-py3, 1.6.0, 1.6.0-py3, 1.7.0, 1.7.0-py3, 1.8.0, 1.8.0-py3, latest, latest-py3 | CPU |
-| [tensorflow](https://hub.docker.com/r/tensorflow/tensorflow/) | 1.3.0-gpu, 1.3.0-gpu-py3, 1.4.0-gpu, 1.4.0-gpu-py3, 1.5.0-gpu, 1.5.0-gpu-py3, 1.5.1-gpu, 1.5.1-gpu-py3, 1.6.0-gpu, 1.6.0-gpu-py3, 1.7.0-gpu, 1.7.0-gpu-py3, 1.8.0-gpu, 1.8.0-gpu-py3, latest-gpu, latest-gpu-py3 | GPU |
+| [tensorflow](https://hub.docker.com/r/tensorflow/tensorflow/) | 1.4.0, 1.4.0-py3, 1.5.0, 1.5.0-py3, 1.5.1, 1.5.1-py3, 1.6.0, 1.6.0-py3, 1.7.0, 1.7.0-py3, 1.8.0, 1.8.0-py3, 1.9.0, 1.9.0-py3, latest, latest-py3 | CPU |
+| [tensorflow](https://hub.docker.com/r/tensorflow/tensorflow/) | 1.4.0-gpu, 1.4.0-gpu-py3, 1.5.0-gpu, 1.5.0-gpu-py3, 1.5.1-gpu, 1.5.1-gpu-py3, 1.6.0-gpu, 1.6.0-gpu-py3, 1.7.0-gpu, 1.7.0-gpu-py3, 1.8.0-gpu, 1.8.0-gpu-py3, 1.9.0-gpu, 1.9.0-gpu-py3, latest-gpu, latest-gpu-py3 | GPU |
 | [caffe](https://hub.docker.com/r/bvlc/caffe/) | cpu, intel | CPU |
 | [caffe](https://hub.docker.com/r/bvlc/caffe/) | gpu | GPU |
 | [pytorch](https://hub.docker.com/r/pytorch/pytorch/) | v0.2, latest | CPU, GPU |
 | [caffe2](https://hub.docker.com/r/caffe2ai/caffe2/) | c2v0.8.1.cpu.full.ubuntu14.04, c2v0.8.0.cpu.full.ubuntu16.04 | CPU |
 | [caffe2](https://hub.docker.com/r/caffe2ai/caffe2/) | c2v0.8.1.cuda8.cudnn7.ubuntu16.04, latest | GPU |
 | [h2o3](https://hub.docker.com/r/opsh2oai/h2o3-ffdl/) | latest | CPU |
-| [horovod](https://hub.docker.com/r/uber/horovod/) | 0.13.4-tf1.8.0-torch0.4.0-py3.5, 0.13.4-tf1.8.0-torch0.4.0-py2.7 | CPU, GPU |
+| [horovod](https://hub.docker.com/r/uber/horovod/) | 0.13.10-tf1.9.0-torch0.4.0-py2.7, 0.13.10-tf1.9.0-torch0.4.0-py3.5 | CPU, GPU |
 
 You can deploy models based on these frameworks and then train your models using the FfDL CLI or FfDL UI.
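A manifest's `framework`/`version` pair must match a row in the table above or training jobs will not find an image. A minimal lookup sketch, seeded only with the entries this commit touches (the `SUPPORTED` dict and `is_supported` helper are illustrative, not part of FfDL):

```python
# Subset of the supported-framework table, keyed by framework name;
# extend with the remaining rows of the table as needed.
SUPPORTED = {
    "tensorflow": {"1.9.0", "1.9.0-py3", "1.9.0-gpu", "1.9.0-gpu-py3"},
    "horovod": {
        "0.13.10-tf1.9.0-torch0.4.0-py2.7",
        "0.13.10-tf1.9.0-torch0.4.0-py3.5",
    },
}

def is_supported(framework, version):
    """Check a manifest's framework/version pair against the table."""
    return version in SUPPORTED.get(framework, set())

print(is_supported("horovod", "0.13.10-tf1.9.0-torch0.4.0-py3.5"))  # True
print(is_supported("tensorflow", "1.3.0"))  # False: dropped by this commit
```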

etc/examples/horovod/manifest_pytorchmnist.yml

Lines changed: 1 addition & 1 deletion
@@ -20,5 +20,5 @@ data_stores:
 
 framework:
   name: horovod
-  version: "0.13.4-tf1.8.0-torch0.4.0-py3.5"
+  version: "0.13.10-tf1.9.0-torch0.4.0-py3.5"
   command: python pytorch_mnist.py

etc/examples/horovod/manifest_tfmnist.yml

Lines changed: 1 addition & 1 deletion
@@ -20,6 +20,6 @@ data_stores:
 
 framework:
   name: horovod
-  version: "0.13.4-tf1.8.0-torch0.4.0-py3.5"
+  version: "0.13.10-tf1.9.0-torch0.4.0-py3.5"
   command: python tensorflow_mnist.py
   # the command is basically running the above command via openmpi, feel free to remove -x NCCL_DEBUG=INFO
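The horovod tags updated in both manifests bundle the Horovod, TensorFlow, PyTorch, and Python versions into one string. A sketch that pulls the pieces apart so a manifest can be sanity-checked (the pattern is inferred from the tag format in this commit, not an official scheme):

```python
import re

# Matches tags like "0.13.10-tf1.9.0-torch0.4.0-py3.5".
TAG_RE = re.compile(
    r"^(?P<horovod>[\d.]+)-tf(?P<tf>[\d.]+)"
    r"-torch(?P<torch>[\d.]+)-py(?P<py>[\d.]+)$"
)

def parse_horovod_tag(tag):
    """Split a horovod image tag into its component versions,
    or return None when the tag does not follow the pattern."""
    m = TAG_RE.match(tag)
    return m.groupdict() if m else None

print(parse_horovod_tag("0.13.10-tf1.9.0-torch0.4.0-py3.5"))
# {'horovod': '0.13.10', 'tf': '1.9.0', 'torch': '0.4.0', 'py': '3.5'}
```

This makes it easy to confirm, for instance, that the `tf` component of the tag matches the TensorFlow version a training script expects.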

templates/services/learner-configmap.yml

Lines changed: 6 additions & 6 deletions
@@ -31,10 +31,10 @@ data:
   tensorflow_gpu_1.4.0-gpu_CURRENT: manual
   tensorflow_cpu_1.4.0_CURRENT: manual
   tensorflow_cpu_1.4.0-py3_CURRENT: manual
-  tensorflow_gpu_1.3.0-gpu-py3_CURRENT: manual
-  tensorflow_gpu_1.3.0-gpu_CURRENT: manual
-  tensorflow_cpu_1.3.0_CURRENT: manual
-  tensorflow_cpu_1.3.0-py3_CURRENT: manual
+  tensorflow_gpu_1.9.0-gpu-py3_CURRENT: manual
+  tensorflow_gpu_1.9.0-gpu_CURRENT: manual
+  tensorflow_cpu_1.9.0_CURRENT: manual
+  tensorflow_cpu_1.9.0-py3_CURRENT: manual
   h2o3_cpu_latest_CURRENT: manual
   caffe_cpu_cpu_CURRENT: master-39
   caffe_gpu_gpu_CURRENT: master-39
@@ -45,8 +45,8 @@ data:
   caffe2_gpu_c2v0.8.1.cuda8.cudnn7.ubuntu16.04_CURRENT: master-39
   caffe2_cpu_c2v0.8.0.cpu.full.ubuntu16.04_CURRENT: master-39
   caffe2_gpu_latest_CURRENT: master-39
-  horovod_gpu_0.13.4-tf1.8.0-torch0.4.0-py3.5_CURRENT: manual
-  horovod_gpu_0.13.4-tf1.8.0-torch0.4.0-py2.7_CURRENT: manual
+  horovod_gpu_0.13.10-tf1.9.0-torch0.4.0-py3.5_CURRENT: manual
+  horovod_gpu_0.13.10-tf1.9.0-torch0.4.0-py2.7_CURRENT: manual
 ---
 
 apiVersion: v1
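Every key in the configmap follows the same `<framework>_<cpu|gpu>_<image tag>_CURRENT` shape, which is why this commit had to touch six lines to retire the 1.3.0 and 0.13.4 tags. A small sketch of that key format (the helper is hypothetical, shown only to make the naming convention explicit):

```python
def learner_tag_key(framework, processing_unit, version):
    """Reproduce the key format used in the learner configmap:
    <framework>_<cpu|gpu>_<image tag>_CURRENT."""
    return f"{framework}_{processing_unit}_{version}_CURRENT"

print(learner_tag_key("tensorflow", "cpu", "1.9.0-py3"))
# tensorflow_cpu_1.9.0-py3_CURRENT
```

Generating the keys this way (rather than editing them by hand) would keep the configmap in step with the supported-framework table in docs/user-guide.md.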
