https://github.com/ggml-org/llama.cpp
https://en.wikipedia.org/wiki/Llama.cpp
Ollama is a frontend over llama.cpp.
In fact, llama.cpp is both a library and a client/server framework for running local models; its most important advantages compared to Ollama are better performance and support for more models.
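For reference, llama.cpp ships a `llama-server` binary that exposes an OpenAI-compatible HTTP API, so it can be used directly without Ollama. A minimal sketch of querying such a local server from Python (the model path, port, and prompt are placeholders, assuming the server was started with something like `llama-server -m model.gguf --port 8080`):

```python
import requests

# Query a locally running llama.cpp server via its
# OpenAI-compatible chat completions endpoint.
# Host, port, and prompt below are illustrative assumptions.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain what llama.cpp is in one sentence."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```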