Commit 0842b9b

model: fix step3.5 n_rot (ggml-org#20318)
1 parent 59db9a3 · commit 0842b9b

1 file changed: 1 addition & 1 deletion

src/llama-model.cpp
@@ -7348,7 +7348,7 @@ bool llama_model::load_tensors(llama_model_loader & ml) {
     // ("rope_freqs.weight") and ggml uses only the first (n_rot_l/2) entries per layer.
     uint32_t n_rot_max = 0;
     for (int i = 0; i < n_layer; ++i) {
-        n_rot_max = std::max(n_rot_max, hparams.n_rot());
+        n_rot_max = std::max(n_rot_max, hparams.n_rot(i));
     }
     if (n_rot_max == 0) {
         n_rot_max = n_rot;
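Why the one-character change matters: the loop is meant to take the maximum n_rot across all layers, but the old call hparams.n_rot() omitted the layer index, so every iteration read the same value and per-layer differences were lost. Below is a minimal, self-contained C++ sketch of the same pattern; the hparams_sketch type and its accessors are hypothetical stand-ins for illustration, not the actual llama.cpp API.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the per-layer hyperparameters; the real
// llama.cpp hparams type is more involved, but the access pattern matches.
struct hparams_sketch {
    std::vector<uint32_t> rot_per_layer;

    // Per-layer accessor: the value actually varies by layer.
    uint32_t n_rot(int il) const { return rot_per_layer[il]; }
    // No-argument overload: always reports layer 0, hiding the variation.
    uint32_t n_rot() const { return rot_per_layer[0]; }
};

int main() {
    // Example per-layer rotary dims; real values come from model metadata.
    const hparams_sketch hparams{{0, 64, 128}};
    const int n_layer = (int) hparams.rot_per_layer.size();

    uint32_t n_rot_max = 0;
    for (int i = 0; i < n_layer; ++i) {
        // Buggy form: hparams.n_rot() reads layer 0 every time, so here
        // n_rot_max would stay 0 and the fallback path would fire.
        n_rot_max = std::max(n_rot_max, hparams.n_rot(i)); // fixed: per-layer
    }
    return n_rot_max == 128 ? 0 : 1; // exits 0: the true maximum was found
}

With the buggy no-argument call, n_rot_max stays 0 in this example, so the fallback assignment n_rot_max = n_rot would silently replace the real per-layer maximum with the global value.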
