This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Commit 7052afa

Update the README (#875)

1 parent 46cfd38 commit 7052afa

File tree

- README.md
- cortex-js/README.md

2 files changed: +25 −29 lines

README.md

Lines changed: 12 additions & 14 deletions
@@ -13,11 +13,11 @@
 ## About
 Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library.
 
-Cortex currently supports 3 inference engines:
-
-- Llama.cpp
-- ONNX Runtime
-- TensorRT-LLM
+## Cortex Engines
+Cortex supports the following engines:
+- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): `cortex.llamacpp` library is a C++ inference tool that can be dynamically loaded by any server at runtime. We use this engine to support GGUF inference with GGUF models. The `llama.cpp` is optimized for performance on both CPU and GPU.
+- [`cortex.onnx` Repository](https://github.com/janhq/cortex.onnx): `cortex.onnx` is a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML to provide GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs.
+- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): `cortex.tensorrt-llm` is a C++ inference library designed for NVIDIA GPUs. It incorporates NVIDIA’s TensorRT-LLM for GPU-accelerated inference.
 
 ## Quicklinks
 
@@ -26,7 +26,10 @@ Cortex currently supports 3 inference engines:
 
 ## Quickstart
 ### Prerequisites
-Ensure that your system meets the following requirements to run Cortex:
+- **OS**:
+  - MacOSX 13.6 or higher.
+  - Windows 10 or higher.
+  - Ubuntu 22.04 and later.
 - **Dependencies**:
   - **Node.js**: Version 18 and above is required to run the installation.
   - **NPM**: Needed to manage packages.
@@ -35,15 +38,10 @@ Ensure that your system meets the following requirements to run Cortex:
 ```bash
 sudo apt install openmpi-bin libopenmpi-dev
 ```
-- **OS**:
-  - MacOSX 13.6 or higher.
-  - Windows 10 or higher.
-  - Ubuntu 22.04 and later.
 
 > Visit [Quickstart](https://cortex.so/docs/quickstart) to get started.
 
 ### NPM
-Install using NPM package:
 ``` bash
 # Install using NPM
 npm i -g cortexso
@@ -54,7 +52,6 @@ npm uninstall -g cortexso
 ```
 
 ### Homebrew
-Install using Homebrew:
 ``` bash
 # Install using Brew
 brew install cortexso
@@ -65,7 +62,7 @@ brew uninstall cortexso
 ```
 > You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases).
 
-To run Cortex as an API server:
+## Cortex Server
 ```bash
 cortex serve
 
@@ -138,7 +135,8 @@ See [CLI Reference Docs](https://cortex.so/docs/cli) for more information.
 ```
 
 ## Contact Support
-- For support, please file a GitHub ticket.
+- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose).
 - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
 - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).
+
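The diff above promotes `cortex serve` into its own "Cortex Server" section, and the README describes Cortex as OpenAI-compatible. As a hedged sketch of what a client request to that server could look like: the port `1337`, the model id `llama3`, and the `/v1/chat/completions` path below are assumptions following the OpenAI API convention, not details stated in this commit.

```python
import json
import urllib.request

# Sketch only: assumes `cortex serve` is running locally.
# Port and model id are hypothetical, not taken from the README.
BASE_URL = "http://localhost:1337/v1"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the local server."""
    payload = {
        "model": "llama3",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello from Cortex")
print(req.full_url)
# Sending the request requires a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The request is only constructed here, not sent, so the sketch stays runnable without a server; swap in the real port and model name from your local setup before sending.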

cortex-js/README.md

Lines changed: 13 additions & 15 deletions
@@ -13,11 +13,11 @@
 ## About
 Cortex is an OpenAI-compatible AI engine that developers can use to build LLM apps. It is packaged with a Docker-inspired command-line interface and client libraries. It can be used as a standalone server or imported as a library.
 
-Cortex currently supports 3 inference engines:
-
-- Llama.cpp
-- ONNX Runtime
-- TensorRT-LLM
+## Cortex Engines
+Cortex supports the following engines:
+- [`cortex.llamacpp`](https://github.com/janhq/cortex.llamacpp): `cortex.llamacpp` library is a C++ inference tool that can be dynamically loaded by any server at runtime. We use this engine to support GGUF inference with GGUF models. The `llama.cpp` is optimized for performance on both CPU and GPU.
+- [`cortex.onnx` Repository](https://github.com/janhq/cortex.onnx): `cortex.onnx` is a C++ inference library for Windows that leverages `onnxruntime-genai` and uses DirectML to provide GPU acceleration across a wide range of hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs.
+- [`cortex.tensorrt-llm`](https://github.com/janhq/cortex.tensorrt-llm): `cortex.tensorrt-llm` is a C++ inference library designed for NVIDIA GPUs. It incorporates NVIDIA’s TensorRT-LLM for GPU-accelerated inference.
 
 ## Quicklinks
 
@@ -26,7 +26,10 @@ Cortex currently supports 3 inference engines:
 
 ## Quickstart
 ### Prerequisites
-Ensure that your system meets the following requirements to run Cortex:
+- **OS**:
+  - MacOSX 13.6 or higher.
+  - Windows 10 or higher.
+  - Ubuntu 22.04 and later.
 - **Dependencies**:
   - **Node.js**: Version 18 and above is required to run the installation.
   - **NPM**: Needed to manage packages.
@@ -35,16 +38,10 @@ Ensure that your system meets the following requirements to run Cortex:
 ```bash
 sudo apt install openmpi-bin libopenmpi-dev
 ```
-- **OS**:
-  - MacOSX 13.6 or higher.
-  - Windows 10 or higher.
-  - Ubuntu 22.04 and later.
 
 > Visit [Quickstart](https://cortex.so/docs/quickstart) to get started.
 
-
 ### NPM
-Install using NPM package:
 ``` bash
 # Install using NPM
 npm i -g cortexso
@@ -55,7 +52,6 @@ npm uninstall -g cortexso
 ```
 
 ### Homebrew
-Install using Homebrew:
 ``` bash
 # Install using Brew
 brew install cortexso
@@ -66,7 +62,7 @@ brew uninstall cortexso
 ```
 > You can also install Cortex using the Cortex Installer available on [GitHub Releases](https://github.com/janhq/cortex/releases).
 
-To run Cortex as an API server:
+## Cortex Server
 ```bash
 cortex serve
 
@@ -139,6 +135,8 @@ See [CLI Reference Docs](https://cortex.so/docs/cli) for more information.
 ```
 
 ## Contact Support
-- For support, please file a GitHub ticket.
+- For support, please file a [GitHub ticket](https://github.com/janhq/cortex/issues/new/choose).
 - For questions, join our Discord [here](https://discord.gg/FTk2MvZwJH).
 - For long-form inquiries, please email [hello@jan.ai](mailto:hello@jan.ai).
+
+

0 commit comments
