LLM inference in C/C++
https://github.com/ggerganov/llama.cpp
License: MIT
Formula JSON API: /api/formula/llama.cpp.json
Formula code: llama.cpp.rb on GitHub
Bottle (binary package) installation support provided for:

| Platform | macOS version | Bottle |
|---|---|---|
| Apple Silicon | sequoia | ✅ |
| Apple Silicon | sonoma | ✅ |
| Apple Silicon | ventura | ✅ |
| Intel | sonoma | ✅ |
| Intel | ventura | ✅ |
| 64-bit Linux | | ✅ |
Current versions:

| Channel | | Version |
|---|---|---|
| stable | ✅ | 4112 |
| head | ⚡️ | HEAD |
Dependencies when building from source:

| Dependency | Version | Description |
|---|---|---|
| cmake | 3.31.0 | Cross-platform make |
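The bottle and `--HEAD` variants tracked in the analytics below correspond to the two standard ways of installing this formula; a minimal sketch of each (standard `brew` flags, nothing formula-specific assumed):

```shell
# Install the prebuilt bottle for the stable version (4112)
brew install llama.cpp

# Or build the latest development snapshot (HEAD) from source,
# which requires the cmake dependency listed above
brew install --HEAD llama.cpp
```

The bottle is preferred where available, since it avoids the source build entirely; `--HEAD` installs can later be refreshed with `brew upgrade --fetch-HEAD llama.cpp`.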
Analytics:

| Metric | llama.cpp | llama.cpp --HEAD |
|---|---|---|
| Installs (30 days) | 4,292 | 14 |
| Installs on request (30 days) | 4,292 | 14 |
| Build errors (30 days) | 1 | |
| Installs (90 days) | 11,938 | 47 |
| Installs on request (90 days) | 11,937 | 47 |
| Installs (365 days) | 20,656 | 91 |
| Installs on request (365 days) | 20,649 | 91 |