local inference may improve the end user's sense of security and privacy, since prompts and data never leave their device
downsides include higher power consumption on the user's hardware, potentially higher latency while awaiting replies, and less reliable interactions overall
additionally, we would have to invest time in validating and supporting multiple models
however, we can experiment with the Ollama framework, validate several models, and offer the ones that pass as options for the user (see the sketch below)
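
as a rough first pass at that validation, something like the following could enumerate whatever models the local Ollama instance has already pulled and send each one a trivial prompt; this is a minimal sketch in Python against Ollama's default local REST endpoint (localhost:11434), and the test prompt and timeouts are placeholder choices, not a settled test plan:

    import requests

    OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

    def list_models():
        """Return the names of models already pulled into the local Ollama instance."""
        resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
        resp.raise_for_status()
        return [m["name"] for m in resp.json().get("models", [])]

    def smoke_test(model, prompt="Reply with the single word: ok"):
        """Send one non-streaming prompt and return the model's reply."""
        resp = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,  # placeholder; local models can be slow on first load
        )
        resp.raise_for_status()
        return resp.json().get("response", "").strip()

    if __name__ == "__main__":
        for model in list_models():
            try:
                print(f"{model}: {smoke_test(model)!r}")
            except requests.RequestException as exc:
                print(f"{model}: failed ({exc})")

a real validation pass would presumably check output quality and latency per model, not just that a reply arrives, but even this level of check would tell us which models are worth offering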
ideally, the user can configure their use case, including