This repository was archived by the owner on Jul 4, 2025. It is now read-only.
> ⚠️ **Cortex is currently in Development**: Expect breaking changes and bugs!

## About

Cortex is an OpenAI-compatible local AI server that developers can use to build LLM apps. It ships with a Docker-inspired command-line interface and a TypeScript client library, and can be used as a standalone server or imported as a library.

Cortex currently supports two inference engines:

- Llama.cpp
- TensorRT-LLM

> Read more about Cortex at https://jan.ai/cortex

## Quicklinks

**Cortex**:

- [Website](https://jan.ai/)
- [GitHub](https://github.com/janhq/cortex)
- [User Guides](https://jan.ai/cortex)
- [API reference](https://jan.ai/api-reference)

## Prerequisites

Before installation, ensure that you have installed the following:

- **NPM**: Needed to manage packages.
- **CPU Instruction Sets**: Available for download from the [Cortex GitHub Releases](https://github.com/janhq/cortex/releases) page.

> 💡 The **CPU instruction sets** are not required for the initial installation of Cortex. They will be installed automatically during Cortex initialization if they are not already on your system.

### **Hardware**

Ensure that your system meets the following requirements to run Cortex:

- **Disk**: At least 10GB of free space for the app and model downloads.

## Quickstart

To install Cortex CLI, follow the steps below:

1. Install the Cortex NPM package globally:

   ```bash
   npm i -g @janhq/cortex
   ```

2. Initialize a compatible engine:

   ```bash
   cortex init
   ```

3. Download a GGUF model from Hugging Face:

   ```bash
   # Pull a model most compatible with your hardware
   cortex pull llama3

   # Pull a specific variant with `repo_name:branch`
   cortex pull llama3:7b

   # Pull a model with the HuggingFace `model_id`
   cortex pull microsoft/Phi-3-mini-4k-instruct-gguf
   ```

4. Load the model:

   ```bash
   cortex models start llama3:7b
   ```

5. Start chatting with the model:

   ```bash
   cortex chat tell me a joke
   ```
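Taken together, the Quickstart steps above can be chained into a single setup script. This is only a sketch that repeats the commands from this README; it assumes Node.js and npm are already installed, and note that `cortex init` may prompt interactively on some systems:

```bash
#!/usr/bin/env bash
# Sketch: the Quickstart steps above, chained into one script.
# Assumes Node.js/npm are installed; `cortex init` may prompt interactively.
set -euo pipefail

npm i -g @janhq/cortex          # 1. install the CLI
cortex init                     # 2. initialize an engine
cortex pull llama3:7b           # 3. download a model variant
cortex models start llama3:7b   # 4. load the model
cortex chat tell me a joke      # 5. chat
```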
## Run as an API server

To run Cortex as an API server:

```bash
cortex serve
```
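Because the server is OpenAI-compatible, any OpenAI-style HTTP client can talk to it once `cortex serve` is running. The sketch below uses plain `curl`; the port `1337` and the `/v1/chat/completions` path are assumptions based on the OpenAI API convention, so check the address printed by `cortex serve` for the actual endpoint:

```bash
# Sketch of an OpenAI-style request against a running `cortex serve`.
# Port 1337 and the /v1/chat/completions path are assumptions; check the
# address printed by `cortex serve`, and load a model first (step 4 above).
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3:7b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```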

To install Cortex from the source, follow the steps below: