LLM inference in C/C++
https://github.com/ggerganov/llama.cpp
License: MIT
Formula JSON API: /api/formula/llama.cpp.json
Formula code: llama.cpp.rb on GitHub
Bottle (binary package) installation support provided for:

| Platform | OS version | Bottle |
|---|---|---|
| Apple Silicon | sequoia | ✅ |
| Apple Silicon | sonoma | ✅ |
| Apple Silicon | ventura | ✅ |
| Intel | sonoma | ✅ |
| Intel | ventura | ✅ |
| 64-bit linux | | ✅ |
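With a bottle available for the platforms above, installation uses the standard Homebrew commands (a sketch of typical usage; `brew install` falls back to building from source when no bottle matches the host):

```shell
# Install the prebuilt bottle of the stable version
brew install llama.cpp

# Install from the latest upstream commit (the "head" version)
brew install --HEAD llama.cpp
```

The `--HEAD` variant corresponds to the `llama.cpp --HEAD` rows in the analytics tables below.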
Current versions:

| Type | Version |
|---|---|
| stable ✅ | 3933 |
| head ⚡️ | HEAD |
Depends on when building from source:

| Dependency | Version | Description |
|---|---|---|
| cmake | 3.30.5 | Cross-platform make |
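When no bottle is available, Homebrew builds the formula from source using CMake. Outside of Homebrew, a manual build follows the same pattern; the sketch below assumes the CMake workflow documented in the upstream repository:

```shell
# Hedged sketch of a from-source build with CMake
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Configure into a build/ directory, then compile
cmake -B build
cmake --build build --config Release
```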
Analytics:

| Metric | llama.cpp | llama.cpp --HEAD |
|---|---|---|
| Installs (30 days) | 4,130 | 17 |
| Installs on Request (30 days) | 4,130 | 17 |
| Build Errors (30 days) | 20 | |
| Installs (90 days) | 10,892 | 63 |
| Installs on Request (90 days) | 10,891 | 63 |
| Installs (365 days) | 16,298 | 76 |
| Installs on Request (365 days) | 16,291 | 76 |