This repository was archived by the owner on Jul 4, 2025. It is now read-only.
**README.md** — 12 additions, 14 deletions
````diff
@@ -13,11 +13,11 @@
 ## About
 Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library.
 
-Cortex currently supports 3 inference engines:
-
-- Llama.cpp
-- ONNX Runtime
-- TensorRT-LLM
+## Cortex Engines
+Cortex supports the following engines:
+- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): a C++ inference library that can be dynamically loaded by any server at runtime. It provides inference for GGUF models; `llama.cpp` is optimized for performance on both CPU and GPU.
+- [`cortex.onnx`](https://github.com/janhq/cortex.onnx): a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML for GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs.
+- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): a C++ inference library designed for NVIDIA GPUs, incorporating NVIDIA's TensorRT-LLM for GPU-accelerated inference.
 
 ## Quicklinks
 
````
````diff
@@ -26,7 +26,10 @@ Cortex currently supports 3 inference engines:
 
 ## Quickstart
 ### Prerequisites
-Ensure that your system meets the following requirements to run Cortex:
+- **OS**:
+  - MacOSX 13.6 or higher.
+  - Windows 10 or higher.
+  - Ubuntu 22.04 and later.
 - **Dependencies**:
   - **Node.js**: Version 18 and above is required to run the installation.
   - **NPM**: Needed to manage packages.
````
````diff
@@ -35,15 +38,10 @@ Ensure that your system meets the following requirements to run Cortex:
 ```bash
 sudo apt install openmpi-bin libopenmpi-dev
 ```
-- **OS**:
-  - MacOSX 13.6 or higher.
-  - Windows 10 or higher.
-  - Ubuntu 22.04 and later.
 
 > Visit [Quickstart](https://cortex.so/docs/quickstart) to get started.
 
 ### NPM
-Install using NPM package:
 ```bash
 # Install using NPM
 npm i -g cortexso
````
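Because the NPM route requires Node.js 18 or above, a quick preflight check can save a failed global install. A minimal sketch in POSIX shell — the version-parsing logic is illustrative, not part of the Cortex tooling:

```shell
#!/bin/sh
# Preflight for `npm i -g cortexso`: the prerequisites require Node.js >= 18.
# Parse the major version out of a `node -v` string such as "v18.19.0".
ver="$( (command -v node >/dev/null 2>&1 && node -v) || echo v0.0.0 )"
major="${ver#v}"          # strip the leading "v"       -> 18.19.0
major="${major%%.*}"      # keep text before first "."  -> 18
if [ "$major" -ge 18 ]; then
  echo "Node.js $ver is new enough; run: npm i -g cortexso"
else
  echo "Node.js >= 18 is required (found: $ver)"
fi
```

The same parameter expansions work in any POSIX shell, so the check does not depend on bash.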
````diff
@@ -54,7 +52,6 @@ npm uninstall -g cortexso
 ```
 
 ### Homebrew
-Install using Homebrew:
 ```bash
 # Install using Brew
 brew install cortexso
````
````diff
@@ -65,7 +62,7 @@ brew uninstall cortexso
 ```
 > You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases).
 
-To run Cortex as an API server:
+## Cortex Server
 ```bash
 cortex serve
 
````
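Once `cortex serve` is running, the server exposes OpenAI-compatible routes, so any OpenAI-style client can talk to it. A hedged sketch with `curl` — the port (`1337`) and model id (`llama3`) are assumptions for illustration only; check your server's startup output and model list for the real values:

```shell
# Chat completion against a local Cortex server (OpenAI-compatible route).
# ASSUMED values: port 1337 and model id "llama3" -- adjust to your setup.
curl -s http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'
```

Because the wire format follows the OpenAI convention, the response should be a JSON object whose `choices[0].message.content` holds the reply.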
````diff
@@ -138,7 +135,8 @@ See [CLI Reference Docs](https://cortex.so/docs/cli) for more information.
 ```
 
 ## Contact Support
-- For support, please file a GitHub ticket.
+- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose).
 - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
 - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).
````
**cortex-js/README.md** — 13 additions, 15 deletions
The hunks in `cortex-js/README.md` are a near-verbatim repeat of the `README.md` changes above, applying the same edits (engines section, prerequisites, install instructions, server heading, and support links) with minor offset differences in the hunk headers.