
[BUG]: custom embedding model error #351

@Jays-1111

Description


🐛 Bug description [Please describe it so everyone can understand]

Because custom models cannot be used in the current 0.1.8, I configured the custom model API in 0.1.7.
Symptom: using a local flm server (https://fastflowlm.com/) as the custom model API, with both a VLM model and an embedding model configured, the embedding model fails with a connection error.
Embedding model: embed-gemma:300m
(screenshot of the error attached)

🧑‍💻 Steps to reproduce

1. Download the flm server: https://fastflowlm.com/
2. Run: flm serve --ctx-len 32768 -s 32 -q 32
3. In MineContext 0.1.7, configure the embedding model (URL: http://127.0.0.1:52625/v1, model: embed-gemma:300m) and the VLM (URL: http://127.0.0.1:52625/v1, model: qwen3vl-it:4b)
4. Save; the connection error appears
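To check whether the embedding endpoint itself is reachable independently of MineContext, one can POST to the server's /v1/embeddings route directly. This is a minimal sketch that assumes flm exposes an OpenAI-compatible embeddings route at the port and model name given in the steps above; if the request fails here too, the problem is on the server side rather than in MineContext.

```python
import json
import urllib.request

# Values taken from the reproduction steps above
BASE_URL = "http://127.0.0.1:52625/v1"
payload = {
    "model": "embed-gemma:300m",
    "input": "connection test",
}

req = urllib.request.Request(
    f"{BASE_URL}/embeddings",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = json.load(resp)
        # An OpenAI-compatible server returns {"data": [{"embedding": [...]}], ...}
        print("embedding dimensions:", len(body["data"][0]["embedding"]))
except OSError as exc:
    # Reproduces the "unable to connect" symptom when the route is missing or closed
    print("embedding endpoint unreachable:", exc)
```

If this script also reports the endpoint as unreachable, it would suggest the flm server does not serve /v1/embeddings (only chat/completions), which would explain why the VLM works but the embedding model does not.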

👾 Expected result

After saving, the connection test should succeed
and the local embedding model should be usable.

🚑 Any additional information

No response

🛠️ MineContext Version

0.1.7

💻 Platform Details

Windows 11

Metadata


Labels: bug (Something isn't working)
