Inference of Meta's LLaMA model (and others) in pure C/C++

llama_cpp-b4889-1-x86_64

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

* Plain C/C++ implementation without any dependencies
* AVX and AVX2 support for x86 architectures
* 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
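The memory savings from integer quantization follow directly from the bits stored per weight. A minimal sketch of that arithmetic, assuming a hypothetical 7-billion-parameter model and ignoring per-block scale overhead that real quantization formats add:

```python
def model_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in GiB for a model with
    n_params parameters stored at bits_per_weight each."""
    return n_params * bits_per_weight / 8 / 2**30

# Assumed 7B-parameter model at several of the listed bit widths
for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit: {model_size_gib(7e9, bits):.2f} GiB")
```

Going from 16-bit weights to 4-bit weights cuts storage roughly fourfold, which is why low-bit quantization makes local inference feasible on ordinary hardware.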

Since its inception, the project has improved significantly thanks to many contributions. It is the main playground for developing new features for the ggml library.

Name
llama_cpp
Repository
HaikuPorts
Repository source
haikuports_x86_64
Version
b4889-1
Download size
25.3 MB
Source code available
Yes
Categories
Science and mathematics
Version views
144