Commit 815784b

Move commands to run LLMs to llmfile using invoke-llm
Closes #8
1 parent fd11529 commit 815784b

5 files changed

Lines changed: 48 additions & 156 deletions

File tree

.env.example

Lines changed: 3 additions & 2 deletions
@@ -1,2 +1,3 @@
-GEMINI_API_KEY=
-HF_TOKEN=
+API_TOKEN_GOOGLE=
+API_TOKEN_HF=
+API_TOKEN_OAI=
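The renamed variables match the `-e` endpoint flags used by `invoke-llm` in the new llmfile (`google`, `hf`), and the llmfile's `set dotenv-load := true` makes just read `.env` and export these variables into every recipe's environment. The effect can be approximated in plain POSIX shell (the token values below are placeholders, not real credentials):

```shell
# Approximate what `set dotenv-load := true` does in just: read .env
# and export each KEY=VALUE pair to the recipe's environment.
# The token values here are placeholders, not real credentials.

cat > .env <<'EOF'
API_TOKEN_GOOGLE=example-google-key
API_TOKEN_HF=example-hf-token
API_TOKEN_OAI=example-oai-token
EOF

set -a          # auto-export every variable assigned while sourcing
. ./.env
set +a

echo "Google token loaded: $API_TOKEN_GOOGLE"
```

With the tokens in place, the relocated recipes can be run through just's `--justfile` flag, e.g. `just --justfile llmfile code_review`.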

.llms/inference/hf.py

Lines changed: 0 additions & 110 deletions
This file was deleted.

README.md

Lines changed: 5 additions & 4 deletions
@@ -1,6 +1,6 @@
 # Extract frames

-[![unit tests](https://github.com/egorsmkv/read-video-rs/actions/workflows/test.yaml/badge.svg)](https://github.com/egorsmkv/read-video-rs/actions/workflows/test.yaml)
+[![unit tests](https://github.com/egorsmkv/extract-frames-rs/actions/workflows/test.yaml/badge.svg)](https://github.com/egorsmkv/extract-frames-rs/actions/workflows/test.yaml)
 [![security audit](https://github.com/egorsmkv/extract-frames-rs/actions/workflows/audit.yaml/badge.svg)](https://github.com/egorsmkv/extract-frames-rs/actions/workflows/audit.yaml)

 A Rust-based command-line application for extracting frames from video files
@@ -76,9 +76,10 @@ To contribute to this project, you'll need:

 1. Rust toolchain (nightly version recommended)
 2. `cargo install action-validator dircat just`
-3. `cargo install --git https://github.com/ytmimi/markdown-fmt markdown-fmt --features="build-binary"`
-4. `brew install lefthook` (for pre-commit hooks)
-5. [gemma-cli](https://github.com/egorsmkv/gemma-cli) (for LLM interactions)
+3. `cargo install --git https://github.com/RustedBytes/invoke-llm`
+4. `cargo install --git https://github.com/ytmimi/markdown-fmt markdown-fmt
+   --features="build-binary"`
+5. `brew install lefthook` (for pre-commit hooks)
 6. [yamlfmt](https://github.com/google/yamlfmt) (for YAML formatting)

 ## Building and Testing

justfile

Lines changed: 6 additions & 40 deletions
@@ -10,11 +10,15 @@ check: fmt
 check_fmt:
     cargo +nightly fmt -- --check

-fmt_yaml:
+yaml_fmt:
     yamlfmt lefthook.yml
     yamlfmt -dstar .github/**/*.{yaml,yml}

-fmt: fmt_yaml
+md_fmt:
+    markdown-fmt -m 80 CONTRIBUTING.md
+    markdown-fmt -m 80 README.md
+
+fmt: yaml_fmt
     cargo +nightly fmt

 test:
@@ -31,41 +35,3 @@ release: check

 download_test_video:
     wget -O "video.mp4" "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4"
-
-llm_ctx:
-    dircat -b -e rs -o ctx.md .
-
-llm_grammar_check: llm_ctx
-    gemma-cli -model=gemma-3-12b-it -prompt=.llms/prompts/grammar_check.md -input=ctx.md -output=.llms/grammar_check.md
-
-llm_non_idiomatic: llm_ctx
-    gemma-cli -model=gemini-2.5-pro -prompt=.llms/prompts/non_idiomatic.md -input=ctx.md -output=.llms/non_idiomatic.md
-
-llm_improve_comments: llm_ctx
-    gemma-cli -model=gemma-3-12b-it -prompt=.llms/prompts/improve_comments.md -input=ctx.md -output=.llms/improve_comments.md
-
-llm_llama_grammar_check: llm_ctx
-    python3 .llms/inference/hf.py --model "meta-llama/Llama-4-Scout-17B-16E-Instruct:cerebras" --prompt=.llms/prompts/grammar_check.md --input=ctx.md --output=.llms/llama_grammar_check.md
-
-llm_maverick_tests_enhancement: llm_ctx
-    python3 .llms/inference/hf.py --model "meta-llama/Llama-4-Maverick-17B-128E-Instruct:cerebras" --max_tokens 65000 --prompt=.llms/prompts/enhance_tests.md --input=ctx.md --output=.llms/maverick_tests_enhancement.md
-
-llm_maverick_enhance_readme: llm_ctx
-    echo "" >> ctx.md
-    echo "Source of of README file:" >> ctx.md
-    echo "\`\`\`markdown" >> ctx.md
-    cat README.md >> ctx.md
-    echo "\`\`\`" >> ctx.md
-    python3 .llms/inference/hf.py --model "meta-llama/Llama-4-Maverick-17B-128E-Instruct:cerebras" --max_tokens 8000 --prompt=.llms/prompts/enhance_readme.md --input=ctx.md --output=.llms/maverick_readme.md
-
-llm_qwen3_coder_non_idiomatic: llm_ctx
-    python3 .llms/inference/hf.py --model "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" --prompt=.llms/prompts/non_idiomatic.md --input=ctx.md --output=.llms/qwen3_non_idiomatic.md
-
-llm_qwen3_coder_improve_comments: llm_ctx
-    python3 .llms/inference/hf.py --model "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" --prompt=.llms/prompts/improve_comments.md --input=ctx.md --output=.llms/qwen3_improve_comments.md
-
-llm_qwen3_code_review: llm_ctx
-    python3 .llms/inference/hf.py --model "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" --prompt=.llms/prompts/code_review.md --input=ctx.md --output=.llms/qwen3_code_review.md
-
-llm_glm45air_code_review: llm_ctx
-    python3 .llms/inference/hf.py --model "zai-org/GLM-4.5-Air-FP8:together" --max_tokens 96000 --prompt=.llms/prompts/code_review.md --input=ctx.md --output=.llms/glm45air_code_review.md
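The removed `llm_ctx` recipe (and its successor in the llmfile) relies on `dircat -b -e rs -o ctx.md .` to flatten the repository's Rust sources into one context file for the models. A rough pure-shell stand-in (dircat's exact output format may differ; `demo_src` and the fence layout are illustrative only):

```shell
# Approximation of `dircat -b -e rs -o ctx.md .`: collect every .rs
# file and append it to ctx.md inside a fenced rust code block
# headed by its path. (dircat's real output format may differ.)

mkdir -p demo_src
printf 'fn main() { println!("hi"); }\n' > demo_src/main.rs

fence='```'
: > ctx.md
find demo_src -name '*.rs' | sort | while read -r f; do
    printf '%s\n\n%srust\n' "## $f" "$fence" >> ctx.md
    cat "$f" >> ctx.md
    printf '%s\n\n' "$fence" >> ctx.md
done
```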

llmfile

Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
+set dotenv-load := true
+
+gen_ctx:
+    dircat -b -e rs -o ctx.md .
+
+code_review: gen_ctx
+    invoke-llm -e hf -m "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" -t 20000 -p .llms/prompts/code_review.md -i ctx.md -o .llms/qwen3_code_review.md
+
+gemma_grammar_check: gen_ctx
+    invoke-llm -e google -m "gemma-3-12b-it" -t 4000 -p .llms/prompts/grammar_check.md -i ctx.md -o .llms/gemma_grammar_check.md
+
+llama_grammar_check: gen_ctx
+    invoke-llm -e hf -m "meta-llama/Llama-4-Scout-17B-16E-Instruct:cerebras" -t 4000 -p .llms/prompts/grammar_check.md -i ctx.md -o .llms/llama_grammar_check.md
+
+qwen3_coder_improve_comments: gen_ctx
+    invoke-llm -e hf -m "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" -t 20000 -p .llms/prompts/improve_comments.md -i ctx.md -o .llms/qwen3_coder_improve_comments.md
+
+glm_air_code_review: gen_ctx
+    invoke-llm -e hf -m "zai-org/GLM-4.5-Air-FP8:together" -t 96000 -p .llms/prompts/code_review.md -i ctx.md -o .llms/glm_air_code_review.md
+
+maverick_enhance_readme: gen_ctx
+    echo "" >> ctx.md
+    echo "Source of README file:" >> ctx.md
+    echo "\`\`\`markdown" >> ctx.md
+    cat README.md >> ctx.md
+    echo "\`\`\`" >> ctx.md
+
+    invoke-llm -e hf -m "meta-llama/Llama-4-Maverick-17B-128E-Instruct:cerebras" -t 8000 -p .llms/prompts/enhance_readme.md -i ctx.md -o .llms/maverick_readme.md
+
+maverick_tests_enhancement: gen_ctx
+    invoke-llm -e hf -m "meta-llama/Llama-4-Maverick-17B-128E-Instruct:cerebras" -t 65000 -p .llms/prompts/enhance_tests.md -i ctx.md -o .llms/maverick_tests_enhancement.md
+
+qwen3_coder_non_idiomatic: gen_ctx
+    invoke-llm -e hf -m "Qwen/Qwen3-Coder-480B-A35B-Instruct:novita" -t 20000 -p .llms/prompts/non_idiomatic.md -i ctx.md -o .llms/qwen3_non_idiomatic.md
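The `maverick_enhance_readme` recipe is the only one that post-processes the context file: it appends the README wrapped in a markdown fence so the model receives it as a literal document. The same append steps in plain shell (`ctx.md` and `README.md` are stubbed here only to make the snippet self-contained):

```shell
# Replicate the append steps of maverick_enhance_readme: attach the
# README to ctx.md inside a markdown fence. Both files are stubbed
# so the snippet runs standalone.

printf 'fn main() {}\n' > ctx.md
printf '# Extract frames\n' > README.md

echo "" >> ctx.md
echo "Source of README file:" >> ctx.md
echo '```markdown' >> ctx.md
cat README.md >> ctx.md
echo '```' >> ctx.md
```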
