Releases · jseguillon/acli
v0.4.0
What's Changed
- support and default to the new ChatGPT API endpoint (10x cheaper and faster/stronger than davinci) by @jseguillon in #25
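Switching the default endpoint to the chat API mainly changes the request payload shape. The sketch below shows a minimal chat-completions request body in Go; the struct names and the `buildChatBody` helper are illustrative assumptions, not acli's actual code, though the JSON fields follow the publicly documented API shape.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Message and ChatRequest mirror the minimal fields of the
// /v1/chat/completions payload; the struct names here are
// assumptions for illustration, not taken from acli's source.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatRequest struct {
	Model     string    `json:"model"`
	Messages  []Message `json:"messages"`
	MaxTokens int       `json:"max_tokens"`
}

// buildChatBody marshals a single-prompt request, using the same
// marshal mechanism the v0.2.0 notes mention for constructing queries.
func buildChatBody(prompt string, maxTokens int) (*bytes.Buffer, error) {
	req := ChatRequest{
		Model:     "gpt-3.5-turbo",
		Messages:  []Message{{Role: "user", Content: prompt}},
		MaxTokens: maxTokens,
	}
	b, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	return bytes.NewBuffer(b), nil
}

func main() {
	body, _ := buildChatBody("how do I list open ports?", 256)
	fmt.Println(body.String())
}
```

The resulting buffer would then be POSTed to the chat endpoint with the usual `Authorization: Bearer` header; error handling and the HTTP call itself are omitted here.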
v0.3.1
- f17dab3 new asciinema demo (#23)
- b607a72 when possible, use the tokenize lib to get a more accurate max-tokens count (#22)
- f21ac93 finalize rename to acli (#21)
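When a tokenizer library is not available, a common fallback is a character-based estimate (roughly 4 characters per token for English text). The heuristic below is purely illustrative and is an assumption, not acli's actual fallback:

```go
package main

import "fmt"

// estimateTokens is a rough fallback for when no tokenizer library
// is available, using the commonly cited ~4 characters per token for
// English text. Illustrative only; acli's real counting may differ.
func estimateTokens(prompt string) int {
	const charsPerToken = 4
	n := len(prompt) / charsPerToken
	if n == 0 {
		n = 1 // even a tiny prompt costs at least one token
	}
	return n
}

func main() {
	fmt.Println(estimateTokens("list all files modified in the last hour"))
}
```

A real tokenizer gives an exact count, which matters because the max-tokens budget is computed from it; the heuristic only needs to be conservative enough not to overrun the model limit.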
v0.3.0
What's Changed
- use go work and start with a multi-module structure by @jseguillon in #14
- add a flag for choosing the model by @jseguillon in #16
- set default max-tokens according to the model by @jseguillon in #17
- implement 'fix' and 'howto' scripts for interactive mode by @jseguillon in #18
- add an easy install script and configuration by @jseguillon in #19
Full Changelog: v0.2.0...v0.3.0
v0.2.0
- 8b93953 add a few comments in the source code (#13)
- 9a68a1e ensure help displays even if the env key is not defined, plus reformat help messages (#12)
- 7f77935 better code style via a marshal mechanism for constructing the query (#11)
- 4196bb7 Feat/print errors on failed (#10)
- 28b30a9 use spf13 cobra and allow API parameters to be defined via flags
- 540d199 better default values: max-tokens now defaults to 2048, and prompt + max_tokens is kept under 4096 (the model's token limit); move to davinci-003; lower temperature for less creative responses, since it's a CLI tool; no top_p, because the docs advise against tuning both temperature and top_p
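The 540d199 defaults imply a simple clamp: the completion budget starts at 2048 and shrinks when the prompt is long, so that prompt tokens plus max_tokens never exceed the 4096 model limit. A minimal sketch of that logic, assuming a `clampMaxTokens` helper that is not acli's actual function name:

```go
package main

import "fmt"

// modelLimit is the total token budget shared by prompt and
// completion; 4096 matches the davinci-003 limit in the notes.
const modelLimit = 4096

// defaultMaxTokens is the 2048 completion default from v0.2.0.
const defaultMaxTokens = 2048

// clampMaxTokens shrinks the completion budget when the prompt is
// long, keeping promptTokens + max_tokens within the model limit.
// A sketch of the release-note logic, not acli's exact code.
func clampMaxTokens(promptTokens int) int {
	maxTokens := defaultMaxTokens
	if promptTokens+maxTokens > modelLimit {
		maxTokens = modelLimit - promptTokens
	}
	if maxTokens < 0 {
		maxTokens = 0 // prompt alone already fills the context
	}
	return maxTokens
}

func main() {
	fmt.Println(clampMaxTokens(100))  // short prompt: keep the default
	fmt.Println(clampMaxTokens(3000)) // long prompt: budget shrinks
}
```

The clamp direction matters: shrinking max_tokens (rather than rejecting the request) lets long prompts still get a partial completion.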
v0.1.0
Merge pull request #7 from jseguillon/fix/eol_shell: fix missing line return for better shell integration