Add :3b tag to default llama3.2 model config #7
The README and documentation consistently reference llama3.2:3b, but the default config used llama3.2 without the tag. This caused model detection failures, because Ollama requires exact model names, including tags.
As a result, the tool kept falling back to heuristic-based enrichment, even though I was using the default config and had followed the exact setup commands in the README.
Changes:
- `llama3.2` -> `llama3.2:3b`

This aligns the code default with the documented behavior and the `AutoTagger.DEFAULT_MODELS` configuration.
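A minimal sketch of why the tag matters for detection. The names `DEFAULT_MODELS` and the `model_available` helper below are illustrative assumptions, not the project's actual code; the point is that Ollama lists installed models with their tags, so detection by exact string comparison only succeeds when the configured name includes the tag.

```python
# Hypothetical sketch of tag-sensitive model detection (not the project's code).
DEFAULT_MODELS = {"enrichment": "llama3.2:3b"}  # was "llama3.2" before this change

def model_available(configured: str, installed: list[str]) -> bool:
    # Ollama reports models as "name:tag" (e.g. "llama3.2:3b"), so an
    # exact comparison fails when the config omits the tag.
    return configured in installed

installed = ["llama3.2:3b"]  # what `ollama list` might report after the README setup

assert not model_available("llama3.2", installed)                 # old default: not detected
assert model_available(DEFAULT_MODELS["enrichment"], installed)   # tagged default: detected
```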