
Releases: withcatai/node-llama-cpp

v3.0.0

24 Sep 01:38
97b0d86

node-llama-cpp 3.0 is here! ✨

Read about the release in the blog post


3.0.0 (2024-09-24)

Features


v3.0.0-beta.47

23 Sep 18:53
Pre-release

3.0.0-beta.47 (2024-09-23)

Bug Fixes

Features

  • `resetChatHistory` function on a `LlamaChatSession` (#327) (ebc4e83)
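
As an illustration, here's a minimal sketch of how the new `resetChatHistory` function can be used, assuming the standard v3 session setup (the model path is a placeholder):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({
    modelPath: "path/to/model.gguf" // placeholder path
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

await session.prompt("Remember the number 42");

// Clear the accumulated chat history while keeping
// the same session and context alive
session.resetChatHistory();
```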

Shipped with llama.cpp release b3804

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)

v3.0.0-beta.46

20 Sep 16:16
6c644ff
Pre-release

3.0.0-beta.46 (2024-09-20)

Bug Fixes

  • no thread limit when using a GPU (#322) (2204e7a)
  • improve `defineChatSessionFunction` types and docs (#322) (2204e7a)
  • format numbers printed in the CLI (#322) (2204e7a)
  • revert electron-builder version used in Electron template (#323) (6c644ff)
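
For context on the `defineChatSessionFunction` improvement above, here's a hedged sketch of how a chat session function is typically defined and passed to a prompt (the `getTemperature` function and its schema are illustrative, not from this release):

```typescript
import {getLlama, LlamaChatSession, defineChatSessionFunction} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// Illustrative function definition; "getTemperature" is a made-up example
const functions = {
    getTemperature: defineChatSessionFunction({
        description: "Get the current temperature in a city",
        params: {
            type: "object",
            properties: {
                city: {type: "string"}
            }
        },
        handler(params) {
            return {city: params.city, temperature: 20}; // stub value
        }
    })
};

const answer = await session.prompt("How hot is it in Paris?", {functions});
console.log(answer);
```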

Shipped with llama.cpp release b3787

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)

v3.0.0-beta.45

19 Sep 19:11
d0795c1
Pre-release

3.0.0-beta.45 (2024-09-19)

Bug Fixes

  • improve performance of parallel evaluation from multiple contexts (#309) (4b3ad61)
  • Llama 3.1 chat wrapper standard chat history (#309) (4b3ad61)
  • adapt to llama.cpp sampling refactor (#309) (4b3ad61)
  • Llama 3 Instruct function calling (#309) (4b3ad61)
  • don't preload prompt in the `chat` command when using `--printTimings` or `--meter` (#309) (4b3ad61)
  • more stable Jinja template matching (#309) (4b3ad61)
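
To show the kind of workload the parallel-evaluation fix above targets, here's a minimal sketch using two sequences of a single context; evaluating from multiple contexts follows the same pattern (the model path is a placeholder):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path

// Two sequences let two evaluations share the same context in parallel
const context = await model.createContext({sequences: 2});
const sessionA = new LlamaChatSession({contextSequence: context.getSequence()});
const sessionB = new LlamaChatSession({contextSequence: context.getSequence()});

// Both prompts are evaluated in parallel
const [answerA, answerB] = await Promise.all([
    sessionA.prompt("Summarize Hamlet in one sentence"),
    sessionB.prompt("Summarize Macbeth in one sentence")
]);
console.log(answerA, answerB);
```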

Features

  • `inspect estimate` command (#309) (4b3ad61)
  • move `seed` option to the prompt level (#309) (4b3ad61)
  • Functionary v3 support (#309) (4b3ad61)
  • Mistral chat wrapper (#309) (4b3ad61)
  • improve Llama 3.1 chat template detection (#309) (4b3ad61)
  • change `autoDisposeSequence` default to `false` (#309) (4b3ad61)
  • move the `download`, `build` and `clear` commands to be subcommands of a `source` command (#309) (4b3ad61)
  • simplify `TokenBias` (#309) (4b3ad61)
  • better `threads` default value (#309) (4b3ad61)
  • make `LlamaEmbedding` an object (#309) (4b3ad61)
  • `HF_TOKEN` env var support for reading GGUF file metadata (#309) (4b3ad61)
  • `TemplateChatWrapper`: custom history template for each message role (#309) (4b3ad61)
  • more helpful `inspect gpu` command (#309) (4b3ad61)
  • iterator over all tokenizer tokens (#309) (4b3ad61)
  • automatic remedy for failed context creation (#309) (4b3ad61)
  • abort generation support in CLI commands (#309) (4b3ad61)
  • `--gpuLayers max` and `--contextSize max` flag support for the `inspect estimate` command (#309) (4b3ad61)
  • extract all prebuilt binaries to external modules (#309) (4b3ad61)
  • updated docs (#309) (4b3ad61)
  • combine model downloaders (#309) (4b3ad61)
  • electron example template: update badge, scroll anchoring, table support (#309) (4b3ad61)
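
As one concrete example from the list above, the `seed` option now lives at the prompt level; a minimal sketch (the seed value is arbitrary and the model path is a placeholder):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

// The seed is now passed per prompt call rather than at context creation,
// so a repeated call with the same seed should be reproducible
const answer = await session.prompt("Tell me a short joke", {seed: 1234});
console.log(answer);
```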

Shipped with llama.cpp release b3785

To use the latest llama.cpp release available, run `npx -n node-llama-cpp source download --release latest`. (learn more)

v2.8.16

03 Sep 02:01
51265c8

2.8.16 (2024-09-03)

Bug Fixes

  • bump llama.cpp release used in prebuilt binaries (#305) (660651a)
  • update documentation website URL (#306) (51265c8)

v3.0.0-beta.44

10 Aug 00:25
bf12e9c
Pre-release

3.0.0-beta.44 (2024-08-10)

Bug Fixes

  • revert to the latest stable Metal llama.cpp release (#297) (bf12e9c)

Shipped with llama.cpp release b3543

To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)

v3.0.0-beta.43

09 Aug 21:32
ecaef63
Pre-release

3.0.0-beta.43 (2024-08-09)

Bug Fixes

  • more cases of unknown characters in generation streaming (#295) (ecaef63)
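
This fix concerns streamed text generation; below is a general sketch of the streaming pattern involved, using the `onToken` callback (this shows the streaming path, not the fix itself; the availability of `onToken` in this beta is assumed, and the model path is a placeholder):

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({modelPath: "path/to/model.gguf"}); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({contextSequence: context.getSequence()});

await session.prompt("Write a sentence that includes emoji", {
    onToken(tokens) {
        // Tokens can split multi-byte characters; this release improves
        // how such partial characters are handled during streaming
        process.stdout.write(model.detokenize(tokens));
    }
});
```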

Shipped with llama.cpp release b3560

To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)

v3.0.0-beta.42

07 Aug 21:26
097b3ec
Pre-release

3.0.0-beta.42 (2024-08-07)

Bug Fixes

  • unknown characters in generation streaming (#293) (097b3ec)

Shipped with llama.cpp release b3541

To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)

v2.8.15

06 Aug 21:59
c4b5d80

2.8.15 (2024-08-06)

Bug Fixes

v3.0.0-beta.41

02 Aug 20:56
a2b2bc3
Pre-release

3.0.0-beta.41 (2024-08-02)

Bug Fixes


Shipped with llama.cpp release b3504

To use the latest llama.cpp release available, run `npx --no node-llama-cpp download --release latest`. (learn more)