Hi, I have a few adapters trained on the base model bert-base-uncased using the adapters package.
I want to serve BERT with my trained adapters using this command:
docker run --gpus all --shm-size 1g -p 8080:80 -v $(pwd)/lorax_adapters:/app/adapters ghcr.io/predibase/lorax:latest --model-id bert-base-uncased
but I get this error: RuntimeError: weight bert.embeddings.LayerNorm.weight does not exist
Any ideas what I'm doing wrong? What should the adapter files look like?
Thanks!
My adapters look like this:
├── adapter_0
│   ├── adapter_config.json
│   └── adapter_model.safetensors
├── adapter_1
│   ├── adapter_config.json
│   └── adapter_model.safetensors
├── adapter_2
│   ├── adapter_config.json
│   └── adapter_model.safetensors
└── adapter_3
    ├── adapter_config.json
    └── adapter_model.safetensors