
Commit c99909d

impl : use 6 digits for tensor dims (ggml-org#20094)
Many models have vocabulary sizes, and thus tensor shapes, with more than 5 digits (e.g. Gemma 3's vocab size is 262,208). I already fixed this for a related formatting helper but missed `llama_format_tensor_shape` until now. Oops.
1 parent cb8f4fa commit c99909d

1 file changed

Lines changed: 2 additions & 2 deletions


src/llama-impl.cpp

```diff
@@ -100,9 +100,9 @@ std::string format(const char * fmt, ...) {

 std::string llama_format_tensor_shape(const std::vector<int64_t> & ne) {
     char buf[256];
-    snprintf(buf, sizeof(buf), "%5" PRId64, ne.at(0));
+    snprintf(buf, sizeof(buf), "%6" PRId64, ne.at(0));
     for (size_t i = 1; i < ne.size(); i++) {
-        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), ", %5" PRId64, ne.at(i));
+        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), ", %6" PRId64, ne.at(i));
     }
     return buf;
 }
```
