
Inference of Meta's LLaMA model (and others) in pure C/C++

llama_cpp-b4889-1-x86_64

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

* Plain C/C++ implementation without any dependencies
* AVX and AVX2 support for x86 architectures
* 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use

Since its inception, the project has improved significantly thanks to many contributions. It is the main playground for developing new features for the ggml library.

Name
llama_cpp
Repository
HaikuPorts
Repository origin
haikuports_x86_64
Version
b4889-1
Download size
25.3 MB
Source code available
Categories
Science and mathematics
Version views
146