brew install llama.cpp
LLM inference in C/C++
https://github.com/ggerganov/llama.cpp
License: MIT
Formula JSON API: /api/formula/llama.cpp.json
Formula code: llama.cpp.rb on GitHub
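The JSON API path above can be queried directly; a minimal sketch using curl, assuming the API is served from formulae.brew.sh (the host of this page):

```shell
# Fetch this formula's metadata from the Homebrew formulae JSON API.
# The /api/formula/llama.cpp.json path comes from the page above.
curl -fsSL "https://formulae.brew.sh/api/formula/llama.cpp.json"
```

The response carries the same data shown on this page, including the stable version and bottle availability.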
Bottle (binary package) installation support provided for:
| Platform | OS / architecture | Bottle |
|---|---|---|
| macOS on Apple Silicon | sequoia | ✅ |
| macOS on Apple Silicon | sonoma | ✅ |
| macOS on Apple Silicon | ventura | ✅ |
| macOS on Intel | sonoma | ✅ |
| macOS on Intel | ventura | ✅ |
| Linux | ARM64 | ✅ |
| Linux | x86_64 | ✅ |
Current versions:
| Channel | | Version |
|---|---|---|
| stable | ✅ | 5410 |
| head | ⚡️ | HEAD |
Depends on when building from source:
| Dependency | Version | Description |
|---|---|---|
| cmake | 4.0.2 | Cross-platform make |
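The two version channels above install differently: the stable channel fetches a pre-built bottle where one is available, while `--HEAD` builds the latest commit from source and therefore pulls in the cmake build dependency listed here. A minimal sketch, assuming Homebrew is installed (the model path in the last line is a placeholder, not part of the formula):

```shell
# Install the pre-built stable bottle (5410 at the time of this page):
brew install llama.cpp

# Or build the latest development snapshot from source (fetches cmake):
brew install --HEAD llama.cpp

# The formula ships llama.cpp's command-line tools, e.g. llama-cli;
# the model path below is illustrative only:
llama-cli -m ~/models/some-model.gguf -p "Hello"
```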
Analytics:
| Metric (period) | llama.cpp | llama.cpp --HEAD |
|---|---|---|
| Installs (30 days) | 9,147 | 61 |
| Installs on Request (30 days) | 8,617 | 61 |
| Build Errors (30 days) | 3 | 3 |
| Installs (90 days) | 23,791 | 151 |
| Installs on Request (90 days) | 23,215 | 151 |
| Installs (365 days) | 65,959 | 352 |
| Installs on Request (365 days) | 65,376 | 352 |