
Conversation

@ilya-bobyr
Contributor

Convenient while debugging the plugin.


Seems like we both worked on the same functionality in parallel.
You cleaned up the logging buffer here: 9f6a696

But I have also had this change for a few days now.

There are a number of minor details that I was trying to polish here:

  1. This change scrolls the log buffer if the cursor is on the last line.
  2. I've optimized updates to avoid rewriting the whole log buffer. Rewriting it all is noticeably slow while I'm typing. It is still not ideal, but I'm not sure how to speed it up further without async updates.
  3. I've accounted for multiple views of the log buffer.

My change is based on the version before your last update.
But I believe my version contains all the functionality from your update, except that the command has a different name and is always created.

I think it is better to always create the command.
Otherwise, it could be confusing to the user.

Let me know if you want me to create a merged version that uses more of the code from your last update.

@ilya-bobyr
Contributor Author

By the way, in which cases could log messages be functions:

   call map(a:lines, { k, L -> type(L) == v:t_func ? call(L, []) : L })

?

The existing version only evaluates functions after the on-disk log file update.
I've moved this logic into the preprocessing of the log messages here:

    elseif type(l:message) == v:t_func
      " Evaluate deferred log lines.
      " TODO When does this happen?
      call add(l:lines, call(l:message, []))

https://github.com/gergap/vim-ollama/pull/85/files#diff-8ce4277781b8b0c8c6ae6239c42b88d69022ee931d7b7a442fee1a708f50280aR72-R75

But it would be nice to add a note explaining when it is useful, if it really happens.
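
For illustration only (no current caller does this, and `ollama#logger#Debug` is a hypothetical entry point), a deferred log message would look roughly like this at a call site:

```vim
" Hypothetical call site: a lambda is stored instead of a string,
" so the potentially expensive join() only runs if and when the
" logger actually evaluates the message.
call ollama#logger#Debug({ -> 'buffer dump: ' . join(getline(1, '$'), "\n") })
```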

@gergap
Owner

gergap commented Aug 4, 2025

> Seems like we both worked on the same functionality in parallel.

Yes, after your last PR I started to clean up some things.
This logger.vim is something I copied from vim-copilot when I started the plugin. There is not much left of vim-copilot code, but it was a good starting point.
In the mentioned commit I refactored the logging to buffer into a separate function, to make it clearer what happens. I also added some comments. Actually, I also wanted to remove the nvim code. Vim-ollama is Vim-only, so we can get rid of any Neovim stuff.

> call map(a:lines, { k, L -> type(L) == v:t_func ? call(L, []) : L })
> But it would be nice to add a note explaining when is it useful, if it really happens.

I did not write this part, but my explanation is the following. Often when logging data, you need to prepare some strings, which is sometimes an expensive task (CPU- and/or memory-wise). If the logger then discards the message because the logging level is too low, those resources are wasted. By using a function reference, you can perform this toString() conversion only when needed (lazy evaluation).
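
A minimal sketch of that idea (assuming a numeric `g:ollama_log_level` and a hypothetical `s:WriteLine` sink; this is not the plugin's actual code):

```vim
function! s:LogDebug(msg) abort
  " Debug logging disabled: discard without ever evaluating the message.
  if g:ollama_log_level < 4
    return
  endif
  " Only now pay for the string conversion (lazy evaluation).
  let l:text = type(a:msg) == v:t_func ? call(a:msg, []) : a:msg
  call s:WriteLine(l:text)
endfunction
```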

@gergap
Owner

gergap commented Aug 4, 2025

Actually, the logging to buffer is only a fallback if logging to file didn't work.
We could add an option to force logging to the buffer, but I would not do this by default, because it can consume a lot of RAM.

@ilya-bobyr
Contributor Author

> Seems like we both worked on the same functionality in parallel.
>
> Yes, after your last PR I started to clean up some things. This logger.vim is something I copied from vim-copilot when I started the plugin. There is not much left of vim-copilot code, but it was a good starting point. In the mentioned commit I refactored the logging to buffer into a separate function, to make it more clear what happens. I also added some comments. Actually, I wanted to remove also the nvim code. Vim-ollama is vim-only, so we can get rid of any neovim stuff.

My version removes it, and the same functionality is implemented using Vim script.

> call map(a:lines, { k, L -> type(L) == v:t_func ? call(L, []) : L })
> But it would be nice to add a note explaining when is it useful, if it really happens.

> I did not write this part, but my explanation is the following. Often when logging data, you need to prepare some strings, which sometime is an expensive task (CPU and/or memory wise). If then the logger discards it due to a too low logging level you are wasting these resources. By using a function reference you can do these toString() conversion only when needed (lazy evaluation).

The problem is, as written, this processing only happens when lines are stored in the in-memory log.
When they are output into the log file, they are converted to strings without a call.

I do not think that it saves anything, as all the logging is synchronous.
If the caller of the log function passes a closure as a message, the log function will call it immediately as part of the message processing.
So, in terms of CPU or memory usage, it is exactly the same as if the caller would have constructed the output string before calling the log function.

There would only be a difference if one of the execution paths did not evaluate the closure.
For example, an in-memory log could potentially keep the closure until someone looks at the log.
The in-memory log is fixed size, and if nobody looks at the log until enough messages are added after the "deferred" message, the closure could be discarded without being evaluated.

But again, this does not work with a file log.
Plus, no callers pass closures at the moment.
And this is not what the code is currently doing.

So, it seems to me that this functionality is actually broken.
And the intent is not entirely clear.
So I suggest removing it, making the logging code a bit simpler.

> Actually, the logging to buffer is only a fallback if logging to file didn't work. We could add an option to force logging to buffer, but I would not do this by default, because it can take up lots of RAM resources.

Yes, the in-memory log is currently only populated when the log file is not writable.
And considering that by default the log file is a newly created temporary file, it seems quite unlikely that it would not be writable.
Only if the user explicitly sets g:ollama_logfile to a file that cannot be written does the in-memory log kick in.

Which, again, looks broken to me.
At best, it is very non-intuitive from the user standpoint.
Took me some time to realize why I did not see anything in the log buffer.

I figured that as the in-memory log has an upper bound on the number of lines it may contain, it probably is not a problem from the memory standpoint.
Currently, the default is set to 10,000 lines.
I would think an average line is relatively short, probably under 80 characters or so.1
Assuming UTF-8 and the log is mostly English, it means about 800k of memory maximum.
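
The back-of-the-envelope arithmetic, assuming one byte per character:

```vim
" 10,000 lines at ~80 bytes each:
echo 10000 * 80        " 800000 bytes, roughly 780 KiB
```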

Note that the log will only ever contain this much data if debug logging level is set.
Which is something that needs to be done explicitly.

By default, the in-memory log will contain just a few messages.
And an additional configuration option to prevent this seems unnecessary, as it will probably only complicate the interaction with the logging system.

I was hoping that it could all work with a single command.
When something is not working, one can run :vert OllamaDebug debug and observe the details of the interaction with the LLM.
And when they are done, :OllamaDebug stop brings things mostly back to how they were before.

Footnotes

  1. I just looked at my persistent on-disk log, where I output additional prompts that are up to 5k characters long, and the average line length is still 72 characters.
    If I skip the long prompts, the average line length is 27 characters.
