
Question regarding BERT #782

@ziv-apex

Description


Hi, I have a few adapters trained on the base model bert-base-uncased using the adapters package.

I want to serve a bert with my trained adapters using this command:
docker run --gpus all --shm-size 1g -p 8080:80 -v $(pwd)/lorax_adapters:/app/adapters ghcr.io/predibase/lorax:latest --model-id bert-base-uncased

but I get this error: RuntimeError: weight bert.embeddings.LayerNorm.weight does not exist
Any ideas what I'm doing wrong? What should the adapter files look like?
Thanks!

My adapters look like this:

├── adapter_0
│   ├── adapter_config.json
│   └── adapter_model.safetensors
├── adapter_1
│   ├── adapter_config.json
│   └── adapter_model.safetensors
├── adapter_2
│   ├── adapter_config.json
│   └── adapter_model.safetensors
├── adapter_3
│   ├── adapter_config.json
│   └── adapter_model.safetensors
