# ComfyUI
```bash
pip install -r requirements.txt
python3 main.py --force-fp16 --fp16-vae --fp16-unet --novram --listen 0.0.0.0 --port 8188 --use-pytorch-cross-attention
```
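Once the server is listening, a quick way to confirm it is up is to query its status endpoint. A minimal sketch, assuming the default host/port from the command above (`/system_stats` is ComfyUI's JSON status route):

```bash
# Health check for a running ComfyUI instance; adjust host/port to match
# the --listen/--port flags used when starting the server.
COMFY_URL="http://127.0.0.1:8188"
curl -s "$COMFY_URL/system_stats" || echo "ComfyUI not reachable at $COMFY_URL"
```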
# LLAMA.CPP
## Convert Model to GGUF
### Llama, Mistral
```bash
python3 convert.py ./raw_models/Mistral-7B-Instruct-v0.3 --outfile ./models/Mistral-7B-Instruct-v0.3_2/model-f16.gguf --outtype f16
```
### Gemma, Others
```bash
python3 convert-hf-to-gguf.py ./raw_models/Gemma-FC --outfile ./models/Gemma-FC/f16-modified.gguf --outtype f16
```
## Quantize Model
```bash
./quantize ./models/Mistral-7B-Instruct-v0.3_2/model-f16.gguf ./models/Mistral-7B-Instruct-v0.3_2/mistral-fc.gguf q8_0
```
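Each run of `quantize` emits one model at one quality level. A sketch for producing several common levels from the same f16 GGUF (`q8_0`, `q5_k_m`, and `q4_k_m` are standard llama.cpp quantization names; the paths follow the command above, and the loop is a no-op if `./quantize` is absent):

```bash
# Emit several quantization levels from one f16 GGUF; lower q-values trade
# quality for a smaller file. Skipped entirely if ./quantize is not present.
QUANTS="q8_0 q5_k_m q4_k_m"
SRC=./models/Mistral-7B-Instruct-v0.3_2/model-f16.gguf
for Q in $QUANTS; do
  if [ -x ./quantize ]; then
    ./quantize "$SRC" "./models/Mistral-7B-Instruct-v0.3_2/mistral-fc-$Q.gguf" "$Q"
  fi
done
```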
## Server
```bash
./server -m models/gemma/gemma.gguf -c 4096 --port 8080
```
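The server exposes an HTTP completion API. A minimal request sketch, assuming the default host and the `--port 8080` used above (`/completion` with `prompt` and `n_predict` is llama.cpp's native endpoint):

```bash
# Minimal completion request against the llama.cpp server started above.
BODY='{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "llama.cpp server not reachable on port 8080"
```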
---
# LOCALAI
## Install Dependencies
### macOS
Install Xcode from the App Store
```bash
brew install abseil cmake go grpc protobuf protoc-gen-go protoc-gen-go-grpc python wget
```
After installing the dependencies above, install `grpcio-tools` from PyPI, either as a `pip --user` install or inside a virtualenv.
```bash
pip install --user grpcio-tools
```
If the build cannot find the Xcode toolchain, point `xcode-select` at the full Xcode installation:
```bash
xcode-select --print-path
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
```
### Debian
```bash
apt install golang protobuf-compiler-grpc libgrpc-dev make cmake
pip install --user grpcio-tools
```
## Clone the Repo
```bash
git clone https://github.com/InterSyncAnalytics/LocalAI
cd LocalAI
```
## Build the Binary
Pick the variant that matches your hardware:
```bash
make build                                                      # default CPU build
make BUILD_TYPE=clblas build                                    # OpenCL (CLBlast) acceleration
make BUILD_TYPE=metal build                                     # Apple Metal (macOS)
make BUILD_TYPE=metal BUILD_GRPC_FOR_BACKEND_LLAMA=true build   # Metal, also building gRPC for the llama backend
```
## Download GPT4All-J to Models
```bash
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
```
## Use a Template from the Examples
```bash
cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/
```
## Run LocalAI
```bash
./local-ai --models-path=./models --parallel-requests=true --context-size 2048 --cors=true --address=127.0.0.1:8080 --f16=false --debug=true
./local-ai --address=127.0.0.1:8080 --debug=true --config-file=./configuration/config.yaml
```
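LocalAI serves an OpenAI-compatible API. A request sketch assuming the model name matches the `ggml-gpt4all-j` file downloaded above; the exact name LocalAI registers can differ depending on your config file:

```bash
# OpenAI-compatible chat request against the LocalAI server started above.
BODY='{"model": "ggml-gpt4all-j", "messages": [{"role": "user", "content": "How are you?"}]}'
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "LocalAI not reachable on port 8080"
```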