
A lightweight CLI to estimate hardware requirements and quantization compatibility for Hugging Face models.

[](https://codecov.io/github/PythonicVarun/canirun)
[](https://pypi.org/project/canirun)
[](https://pypi.org/project/canirun)
[](https://opensource.org/licenses/MIT)
[](https://github.com/psf/black)
[](https://github.com/astral-sh/ruff)

> [!NOTE]
> Currently optimized for standard Transformer architectures (Llama, Mistral, Gemma, and BERT). Support for MoE and custom architectures is experimental.

## Key Features

Running `canirun meta-llama/Meta-Llama-3-8B --ctx 4096` produces a report like this:

```
 🔍 ANALYSIS REPORT: meta-llama/Meta-Llama-3-8B
 Context Length : 4096
 Device         : NVIDIA GeForce RTX 3090
 VRAM / RAM     : 24.0 GB / 64.0 GB
```

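The report's context-length and VRAM figures interact through the KV cache, which grows linearly with context. A rough, illustrative sketch of that arithmetic, assuming Llama-3-8B's published config (32 layers, 8 grouped-query KV heads, head dimension 128) — the function name is an assumption for illustration, not canirun's internal code:

```python
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in GiB (illustrative, not canirun's code).

    Two tensors per layer (K and V), each of shape
    [ctx_len, n_kv_heads, head_dim], at bytes_per_elem per element.
    """
    total_bytes = 2 * n_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem
    return total_bytes / 1024**3

# Llama-3-8B per its published config: 32 layers, 8 KV heads (GQA), head_dim 128
print(f"KV cache @ 4096 ctx (fp16): ~{kv_cache_gib(32, 8, 128, 4096):.2f} GiB")
```

At fp16 this works out to about 0.5 GiB for a 4096-token context — small next to the roughly 15 GiB of fp16 weights for an 8B-parameter model.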
The tool checks several quantization levels to see whether a smaller, quantized version of the model could fit.

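The core of that check is simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal, weights-only sketch (the function name and the table of precisions are illustrative assumptions, not canirun's implementation; KV cache and activation overhead are ignored):

```python
# Illustrative only: approximate weight memory at common quantization levels.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_weight_gib(num_params: float, quant: str) -> float:
    """Approximate weight memory in GiB at the given precision."""
    return num_params * BYTES_PER_PARAM[quant] / 1024**3

# An 8B-parameter model such as Meta-Llama-3-8B:
for quant in BYTES_PER_PARAM:
    print(f"{quant:>5}: ~{estimate_weight_gib(8e9, quant):.1f} GiB")
```

For an 8B-parameter model this gives roughly 14.9, 7.5, and 3.7 GiB of weights at fp16, int8, and int4 respectively, which is why a model that is tight on one GPU can fit comfortably once quantized.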
## Development

This project maintains strict code-quality standards:

[](https://github.com/psf/black)
[](https://github.com/astral-sh/ruff)
[](https://mypy-lang.org/)
[](https://pycqa.github.io/isort/)

- **Formatter**: Black
- **Linter**: Ruff
- **Type checking**: MyPy (strict mode)
- **Docstrings**: Google style

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.