Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Unreleased]

[0.2.71]

[0.2.70]

[0.2.69]

[0.2.68]

[0.2.67]

  • fix: Ensure image renders before text in chat formats regardless of message content order by @abetlen in 3489ef0
  • fix(ci): Fix bug in use of upload-artifact failing to merge multiple artifacts into a single release by @abetlen in d03f15b

[0.2.66]

[0.2.65]

[0.2.64]

[0.2.63]

[0.2.62]

[0.2.61]

[0.2.60]

[0.2.59]

[0.2.58]

[0.2.57]

[0.2.56]

[0.2.55]

[0.2.54]

[0.2.53]

[0.2.52]

[0.2.51]

[0.2.50]

[0.2.49]

  • fix: module 'llama_cpp.llama_cpp' has no attribute 'c_uint8' in Llama.save_state by @abetlen in db776a8 (see the sketch after this list)
  • feat: Auto detect Mixtral's slightly different format by @lukestanley in #1214
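
The save_state fix above touches an API that is easiest to understand as a round trip. A minimal sketch, assuming a local GGUF model at a hypothetical path:

```python
# Hedged sketch of Llama.save_state / Llama.load_state; the model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")
llm("The capital of France is", max_tokens=1)  # evaluate a prompt to populate the context

state = llm.save_state()  # snapshot the evaluated context (the call fixed above)
llm.load_state(state)     # restore it later to avoid re-evaluating the same prompt
```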

[0.2.48]

  • feat: Update llama.cpp to ggerganov/llama.cpp@15499eb
  • feat: Add Google's Gemma formatting via chat_format="gemma" by @alvarobartt in #1210 (see the sketch after this list)
  • feat: support minItems/maxItems in JSON grammar converter by @nopperl in 3921e10
  • fix: Update from_pretrained defaults to match hf_hub_download and pull to local cache folder by @abetlen in e6d6260
  • fix: Raise exceptions when llama model or context fails to load by @abetlen in dd22010
  • docs: Update README.md to fix pip install llama cpp server by @audip in #1187
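
The Gemma entry above adds a new chat_format value. A minimal usage sketch, assuming a local Gemma GGUF file at a hypothetical path:

```python
# Hedged sketch of chat_format="gemma"; the model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/gemma-2b-it.gguf", chat_format="gemma")
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about llamas."}]
)
print(response["choices"][0]["message"]["content"])
```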

[0.2.47]

[0.2.46]

[0.2.45]

[0.2.44]

[0.2.43]

[0.2.42]

[0.2.41]

[0.2.40]

[0.2.39]

[0.2.38]

[0.2.37]

[0.2.36]

[0.2.35]

[0.2.34]

[0.2.33]

[0.2.32]

[0.2.31]

[0.2.30]

[0.2.29]

[0.2.28]

[0.2.27]

[0.2.26]

[0.2.25]

  • feat(server): Multi model support by @D4ve-R in #931
  • feat(server): Support none defaulting to infinity for completions by @swg in #111
  • feat(server): Implement OpenAI API compatible authentication by @docmeth2 in #1010 (see the sketch after this list)
  • fix: text_offset of multi-token characters by @twaka in #1037
  • fix: ctypes bindings for kv override by @phiharri in #1011
  • fix: ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free. by @e-c-d in #1028
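
The authentication entry above lets the server require an API key on its OpenAI-compatible routes. A minimal client-side sketch, assuming the server listens on localhost:8000 and was started with the hypothetical key "sk-local-key":

```python
# Hedged sketch of calling an authenticated llama_cpp.server instance with the
# official openai client; base_url, api_key, and the model alias are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-local-key")
response = client.chat.completions.create(
    model="local-model",  # whatever alias the server is configured to serve
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```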

[0.2.24]

[0.2.23]

[0.2.22]

[0.2.21]

  • Update llama.cpp to ggerganov/llama.cpp@64e64aa
  • Make building llava optional by setting CMAKE_ARGS="-DLLAVA_BUILD=OFF" and using LLAVA_CPP_LIB to specify an alternative path to the shared library by @abetlen in e3941d9

[0.2.20]

[0.2.19]

[0.2.18]

[0.2.17]

[0.2.16]

  • Update llama.cpp to ggerganov/llama.cpp@a75fa57
  • Add set_seed to Llama class by @abetlen in fd41ed3 (see the sketch after this list)
  • Fix server doc arguments by @kjunggithub in #892
  • Fix response_format handler in llava chat handler by @abetlen in b62c449
  • Fix default max_tokens: chat completion is now unlimited (up to the context length) and completion defaults to 16 tokens to match OpenAI defaults by @abetlen in e7962d2
  • Fix json_schema_to_gbnf helper so that it takes a JSON schema string as input by @abetlen in faeae18
  • Add support for $ref and $defs in json_schema_to_gbnf to handle more complex function schemas by @abetlen in 770df34
  • Update functionary chat handler for new OpenAI API by @abetlen in 1b376c6
  • Fix: add default stop sequence to the chatml chat format by @abetlen in b84d76a
  • Fix sampling bug when logits_all=False by @abetlen in 6f0b0b1
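
Two of the entries above (set_seed and json_schema_to_gbnf) are easiest to see together. A minimal sketch, assuming the helper is importable from llama_cpp.llama_grammar and using a hypothetical model path:

```python
# Hedged sketch of Llama.set_seed and the json_schema_to_gbnf helper (which now
# takes a JSON schema string); the import location and model path are assumptions.
import json
from llama_cpp import Llama
from llama_cpp.llama_grammar import LlamaGrammar, json_schema_to_gbnf

llm = Llama(model_path="./models/model.gguf")
llm.set_seed(1234)  # make sampling reproducible across runs

schema = json.dumps({"type": "object", "properties": {"name": {"type": "string"}}})
gbnf = json_schema_to_gbnf(schema)        # GBNF grammar text derived from the schema
grammar = LlamaGrammar.from_string(gbnf)  # pass via create_completion(..., grammar=grammar)
```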

[0.2.15]

[0.2.14]

[0.2.13]

[0.2.12]

[0.2.11]

  • Fix bug 'llama_model_params' object has no attribute 'logits_all' by @abetlen in d696251

[0.2.10]

  • Fix bug 'llama_model_params' object has no attribute 'embedding' by @abetlen in 42bb721

[0.2.9]

  • Fix critical bug in pip installation of v0.2.8 due to .git directory in ac853e0

[0.2.8]

[0.2.7]

[0.2.6]

[0.2.5]

[0.2.4]

  • Add NUMA support. NOTE: low-level API users must call llama_backend_init at the start of their programs by @abetlen in f4090a0
  • Fix tensor_split server cli argument by @abetlen in c4c440b
  • Made all Llama init parameters into keyword-only parameters by @abetlen in c8f9b8a (see the sketch after this list)
  • Added server params for low_vram, main_gpu, lora_base, and lora_path by @abetlen in 2920c4b
  • Removed server params for rms_norm_eps and n_gqa by @abetlen in 2920c4b
  • Fix boolean cli options by @abetlen in c999325 and 0449d29
  • Silence Pydantic Settings warnings about model_alias setting by @earonesty in #705
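
The keyword-only change above means Llama can no longer be constructed with positional tuning arguments. A minimal sketch, with a hypothetical model path and illustrative values:

```python
# Hedged sketch of the keyword-only Llama constructor; the values are illustrative.
from llama_cpp import Llama

# Parameters such as n_ctx and n_gpu_layers must now be passed by name:
llm = Llama(model_path="./models/model.gguf", n_ctx=2048, n_gpu_layers=0)
```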

[0.2.3]

[0.2.2]

  • Fix bug in pip install of v0.2.1 due to scikit-build-core removing all .metal files in the source distribution (see #701)

[0.2.1]

  • Fix bug in pip install of v0.2.0 due to .git folder being included in the source distribution (see #701)

[0.2.0]

  • Migrated to scikit-build-core build system by @abetlen in #499
  • Use numpy views for LogitsProcessor and StoppingCriteria instead of Python lists by @abetlen in #499 (see the sketch after this list)
  • Drop support for end-of-life Python 3.7 by @abetlen in #499
  • Convert low level llama.cpp constants to use basic python types instead of ctypes types by @abetlen in #499
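
The numpy-views entry above changes what a LogitsProcessor callback receives: token ids and logits arrive as numpy arrays rather than Python lists. A minimal sketch, with a hypothetical model path and an arbitrary token id:

```python
# Hedged sketch of a LogitsProcessor under the numpy-based interface; the
# boosted token id (13) and the model path are arbitrary placeholders.
import numpy as np
import numpy.typing as npt
from llama_cpp import Llama, LogitsProcessorList

def boost_token(input_ids: npt.NDArray[np.intc],
                scores: npt.NDArray[np.single]) -> npt.NDArray[np.single]:
    scores[13] += 1.0  # nudge one token's logit upward, in place
    return scores

llm = Llama(model_path="./models/model.gguf")
out = llm.create_completion("Hello", logits_processor=LogitsProcessorList([boost_token]))
```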

[0.1.85]

[0.1.84]

  • Update llama.cpp

[0.1.83]

  • Update llama.cpp

[0.1.82]

  • Update llama.cpp

[0.1.81]

  • Update llama.cpp

[0.1.80]

  • Update llama.cpp

[0.1.79]

  • GGUF Support (breaking change requiring new model format)

[0.1.78]

  • Grammar-based sampling via LlamaGrammar, which can be passed to completions (see the sketch after this list)
  • Make n_gpu_layers == -1 offload all layers
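
The two entries above pair naturally: a grammar constrains sampling, while n_gpu_layers=-1 offloads every layer. A minimal sketch, with a hypothetical model path and a toy yes/no grammar:

```python
# Hedged sketch of LlamaGrammar-constrained completion with full GPU offload;
# the model path and the grammar are placeholders.
from llama_cpp import Llama, LlamaGrammar

grammar = LlamaGrammar.from_string(r'root ::= "yes" | "no"')

llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1)
out = llm.create_completion("Is water wet? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```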

[0.1.77]

  • (llama.cpp) Update llama.cpp, adding support for LLaMa 2 70B
  • (server) Add temporary n_gqa and rms_norm_eps parameters required for LLaMa 2 70B

[0.1.76]

  • (llama.cpp) Update llama.cpp, adding support for LLaMa 2 70B

[0.1.75]

  • Update llama.cpp

[0.1.74]

  • (server) OpenAI-style error responses

[0.1.73]

  • (server) Add rope parameters to server settings

[0.1.72]

  • (llama.cpp) Update llama.cpp, adding custom_rope for extended context lengths

[0.1.71]

  • (llama.cpp) Update llama.cpp
  • (server) Fix several pydantic v2 migration bugs

[0.1.70]

  • (Llama.create_completion) Revert change so that max_tokens is not truncated to context_size in create_completion
  • (server) Fixed settings field names that changed with the pydantic v2 migration

[0.1.69]

  • (server) Streaming requests can now be interrupted prematurely when a concurrent request is made. This can be controlled with the interrupt_requests setting.
  • (server) Moved to fastapi v0.100.0 and pydantic v2
  • (docker) Added a new "simple" image that builds llama.cpp from source when started.
  • (server) Performance improvements by avoiding unnecessary memory allocations during sampling

[0.1.68]

  • (llama.cpp) Update llama.cpp

[0.1.67]

  • Fix performance bug in the Llama model by pre-allocating memory for tokens and logits.
  • Fix bug in the Llama model where the model was not freed after use.

[0.1.66]

  • (llama.cpp) New model API
  • Fix performance issue during eval caused by a looped np.concatenate call
  • Fix state pickling issue when saving cache to disk

[0.1.65]

  • (llama.cpp) Fix struct misalignment bug

[0.1.64]

  • (llama.cpp) Update llama.cpp
  • Fix docs for seed. Set -1 for random.

[0.1.63]

  • (llama.cpp) Add full GPU utilisation in CUDA
  • (llama.cpp) Add get_vocab
  • (llama.cpp) Add low_vram parameter
  • (server) Add logit_bias parameter

[0.1.62]

  • Metal support working
  • Cache re-enabled

[0.1.61]

  • Fix broken pip installation

[0.1.60]

NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.

  • Truncate max_tokens in create_completion so that the requested tokens don't exceed the context size.
  • Temporarily disable cache for completion requests

[v0.1.59]

  • (llama.cpp) k-quants support
  • (server) Add mirostat sampling parameters to the server (see the sketch after this list)
  • Support both .so and .dylib for libllama on macOS
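
The mirostat entry above exposes the sampler's parameters on the server; the same fields exist on the Python completion API. A minimal sketch, with a hypothetical model path and illustrative values:

```python
# Hedged sketch of the mirostat sampling parameters; values are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")
out = llm.create_completion(
    "Once upon a time",
    mirostat_mode=2,   # enable Mirostat 2.0 sampling
    mirostat_tau=5.0,  # target entropy
    mirostat_eta=0.1,  # learning rate
    max_tokens=32,
)
```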

[v0.1.58]

  • (llama.cpp) Metal support for Apple Silicon

[v0.1.57]

  • (llama.cpp) OpenLlama 3B support

[v0.1.56]

  • (misc) Added first version of the changelog
  • (server) Use async routes
  • (python-api) Use numpy for internal buffers to reduce memory usage and improve performance.
  • (python-api) Fix performance bug in the stop sequence check that was slowing down streaming.