This document describes how a backend for RedisAI can be built from this repository. It lists the supported compilation devices on a per-backend basis, and the tools and commands required. Unless indicated otherwise, a backend is compiled in a docker container, which is responsible for the configuration and installation of all tools required for a given backend on a per-platform basis.

To follow these instructions, this repository must be cloned with all of its submodules (i.e. *git clone --recursive https://github.com/redisai/redisai*)
GNU Make is used as a runner for the dockerfile generator. Python is the language used for the generator script, and jinja is the templating library used to create the docker file from a template, *dockerfile.tmpl*, that can be found in the directory of a given backend listed below.
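As an illustration of that templating step, here is a minimal sketch of rendering a dockerfile from a jinja2 template. The template text and the variables (`base_image`, `backend`) are hypothetical; each backend's real *dockerfile.tmpl* defines its own.

```python
# Hypothetical, minimal version of the generator step: load a jinja2
# template and render it with per-platform values. The real script and
# template variables in this repository will differ.
from jinja2 import Template

tmpl = Template(
    "FROM {{ base_image }}\n"
    "RUN echo building the {{ backend }} backend\n"
)

dockerfile = tmpl.render(base_image="ubuntu:18.04", backend="onnxruntime")
print(dockerfile)
```

Running the sketch prints a two-line dockerfile; the real generator writes the rendered text out to a dockerfile that docker then builds.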
## Tools
Building the backends requires installation of the following tools:

1. gnu make

1. python (3.0 or higher)

1. docker

1. jinja2

On ubuntu bionic these can be installed by running the following steps, which install python3, create a virtual environment, and install the jinja templating dependency. Replace */path/to/venv* with your desired virtualenv location.

```
sudo apt install python3 python3-dev make docker.io
python3 -m venv /path/to/venv
source /path/to/venv/bin/activate
pip install jinja2
```
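Before generating any dockerfiles, a quick sanity check (a sketch, not part of the repository) can confirm the prerequisites are visible from the active virtualenv; the tool list below simply mirrors the packages installed above.

```python
# Check that jinja2 is importable and that the command-line tools
# installed above are on PATH. Purely illustrative; adjust as needed.
import importlib.util
import shutil

missing = []
if importlib.util.find_spec("jinja2") is None:
    missing.append("jinja2")
for tool in ("make", "docker"):
    if shutil.which(tool) is None:
        missing.append(tool)

print("missing prerequisites:", ", ".join(missing) if missing else "none")
```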

-------

## Backends

### onnxruntime

We build the Onnxruntime library with the DISABLE_EXTERNAL_INITIALIZERS=ON build flag. This means that loading ONNX models that use external files to store the initial (usually very large) values of the model's operations is invalid. That is, initializer values must be part of the serialized model, which is also the standard use case.

**Compilation target devices:**

1. x86\_64 bit linux systems

1. x86\_64 bit linux systems with a GPU

**Directory:** opt/build/onnxruntime

**Build options:**

1. To build run *make*

1. To build with GPU support on x86\_64 run *make GPU=1*