llama.cpp

Install command:
brew install llama.cpp
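
Once installed, the formula provides command-line tools such as llama-cli and llama-server. A minimal usage sketch (the GGUF model path is a placeholder, not something shipped by the formula):

llama-cli -m /path/to/model.gguf -p "Hello, world"
llama-server -m /path/to/model.gguf --port 8080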

LLM inference in C/C++

https://github.com/ggml-org/llama.cpp

License: MIT

Formula JSON API: /api/formula/llama.cpp.json
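
The path above is relative to formulae.brew.sh. A sketch of fetching the formula metadata and reading the current stable version (assumes curl and jq are available):

curl -s https://formulae.brew.sh/api/formula/llama.cpp.json | jq '.versions.stable'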

Formula code: llama.cpp.rb on GitHub

Bottle (binary package) installation support provided for:

macOS on Apple Silicon: tahoe, sequoia, sonoma
macOS on Intel: sonoma
Linux: ARM64, x86_64

Current versions:

stable: 7070
head: HEAD

Dependencies when building from source:

cmake 4.1.2 (Cross-platform make)
pkgconf 2.5.1 (Package compiler and linker metadata toolkit)
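
When the bottle is not used, Homebrew builds the formula with the dependencies listed above; a sketch of the two common ways to trigger a source build:

brew install --build-from-source llama.cpp
brew install --HEAD llama.cpp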

Analytics:

Installs (30 days): llama.cpp 17,063, llama.cpp --HEAD 36
Installs on Request (30 days): llama.cpp 13,476, llama.cpp --HEAD 36
Build Errors (30 days): llama.cpp 35, llama.cpp --HEAD 1
Installs (90 days): llama.cpp 43,014, llama.cpp --HEAD 203
Installs on Request (90 days): llama.cpp 35,691, llama.cpp --HEAD 203
Installs (365 days): llama.cpp 116,291, llama.cpp --HEAD 697
Installs on Request (365 days): llama.cpp 106,459, llama.cpp --HEAD 697
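
A sketch of checking the same analytics locally with Homebrew's built-in info display:

brew info llama.cpp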