2 files changed: +10 -9 lines changed

@@ -63,7 +63,7 @@ Ensure that your system meets the following requirements to run Cortex:
 
 ## Quickstart
 To install Cortex CLI, follow the steps below:
-1. Install the Cortex NPM package:
+1. Install the Cortex NPM package globally:
 ```bash
 npm i -g @janhq/cortex
 ```
@@ -75,11 +75,11 @@ cortex init
 
 3. Download a GGUF model from Hugging Face:
 ```bash
-cortex models pull janhq/TinyLlama-1.1B-Chat-v1.0-GGUF
+cortex pull tinyllama:1b
 ```
 4. Load the model:
 ```bash
-cortex models start janhq/TinyLlama-1.1B-Chat-v1.0-GGUF
+cortex models start tinyllama:1b
 ```
 
 5. Start chatting with the model:

@@ -62,7 +62,8 @@ Ensure that your system meets the following requirements to run Cortex:
 - **Disk**: At least 10GB for app and model download.
 
 ## Quickstart
-1. Install the NPM package:
+To install Cortex CLI, follow the steps below:
+1. Install the Cortex NPM package globally:
 ```bash
 npm i -g @janhq/cortex
 ```
@@ -72,16 +73,16 @@ npm i -g @janhq/cortex
 cortex init
 ```
 
-3. Download a GGUF model from Hugging Face
+3. Download a GGUF model from Hugging Face:
 ```bash
-cortex models pull janhq/TinyLlama-1.1B-Chat-v1.0-GGUF
+cortex pull tinyllama:1b
 ```
-4. Load the model
+4. Load the model:
 ```bash
-cortex models start janhq/TinyLlama-1.1B-Chat-v1.0-GGUF
+cortex models start tinyllama:1b
 ```
 
-5. Start chatting with the model
+5. Start chatting with the model:
 ```bash
 cortex chat tell me a joke
 ```
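
For reference, the full quickstart sequence after this change would look roughly as follows. This is a sketch assembled from the commands shown in the diff; exact flags, model tags, and `cortex init` behavior may vary by Cortex version.

```bash
# Sketch of the updated quickstart flow, based on the commands in this diff.

# 1. Install the Cortex NPM package globally
npm i -g @janhq/cortex

# 2. Initialize Cortex
cortex init

# 3. Download a GGUF model from Hugging Face
cortex pull tinyllama:1b

# 4. Load the model
cortex models start tinyllama:1b

# 5. Start chatting with the model
cortex chat tell me a joke
```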