
Inference of Meta's LLaMA model (and others) in pure C/C++

llama_cpp-b4889-1-x86_64

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware, both locally and in the cloud.

* Plain C/C++ implementation without any dependencies
* AVX and AVX2 support for x86 architectures
* 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use

Since its inception, the project has improved significantly thanks to many contributions. It is the main playground for developing new features for the ggml library.
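After installing the package, inference is typically a single command. This is a hedged sketch: it assumes the `llama-cli` tool shipped by recent llama.cpp builds and a GGUF model file you have downloaded separately; the model path here is purely illustrative, and binary names can differ between package versions.

```shell
# Run a short completion against a locally stored GGUF model (path is an example)
llama-cli -m ~/models/model.gguf \
          -p "Building a website can be done in 10 steps:" \
          -n 64
```

The `-m` flag selects the model file, `-p` supplies the prompt, and `-n` caps the number of tokens to generate.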

Name: llama_cpp
Repository: HaikuPorts
Repository Source: haikuports_x86_64
Version: b4889-1
Download Size: 25.3 MB
Source available: Yes
Categories: Science and Mathematics