Releases: OlympiaAI/raix

v1.0.2

16 Jul 01:55
f18ce41

What's Changed

Added

  • Added method to check for API client availability in Configuration

Changed

  • Updated ruby-openai dependency to ~> 8.1
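
For reference, a Gemfile pinning the client library to the new constraint could look like this (the raix entry is shown only for context):

# Gemfile
gem "raix"
gem "ruby-openai", "~> 8.1"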

Fixed

  • Fixed gemspec file reference

Full Changelog: v1.0.1...v1.0.2

v1.0.1

05 Jun 01:58
bbbf3fe

What's Changed

Fixed

  • Fixed PromptDeclarations module namespace - now properly namespaced under Raix (#8)
  • Removed Rails.logger dependencies from PromptDeclarations for non-Rails environments
  • Fixed documentation example showing incorrect openai: true usage; the value should be a model string (#9), as sketched below
  • Added comprehensive tests for PromptDeclarations module
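
A minimal sketch of both fixes in use; ai stands for any object that includes Raix::ChatCompletion, and gpt-4o is just an example model name:

# The PromptDeclarations module is now referenced under the Raix namespace (#8)
class PromptChain
  include Raix::ChatCompletion
  include Raix::PromptDeclarations
end

# The openai: parameter takes a model string, not a boolean (#9)
ai.chat_completion(openai: "gpt-4o") # correct: pass a model string
# ai.chat_completion(openai: true)   # incorrect: caused the 400 error in #9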

Changed

  • Improved error handling in PromptDeclarations to catch StandardError instead of generic rescue

Issues Resolved

  • Closes #8 - Prompt declarations in README.md do not match the code
  • Closes #9 - Chat Completion Fails Due to Invalid JSON Payload (Status 400)

Full Changelog: v1.0.0...v1.0.1

v1.0.0

04 Jun 22:23
7d290d6

Major Release: Automatic Tool Call Continuation

This major release introduces automatic continuation after tool calls, eliminating the need for the loop parameter entirely. The system now automatically handles tool execution and continues the conversation until the AI provides a final text response.
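
As a rough sketch of the new flow (the class, function, and prompt below are illustrative, not taken from the release):

class WeatherBot
  include Raix::ChatCompletion
  include Raix::FunctionDispatch

  function :check_weather, "Check the current weather for a city",
           city: { type: "string" } do |arguments|
    "It is sunny in #{arguments[:city]}" # stand-in for a real weather lookup
  end
end

bot = WeatherBot.new
bot.transcript << { user: "What's the weather in Lisbon?" }

# No loop parameter needed: the tool call is dispatched and the conversation
# continues automatically until the AI returns a final text response.
bot.chat_completion # => a String with the AI's answer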

Breaking Changes

  • Deprecated loop parameter - The system now automatically continues conversations after tool calls. The loop parameter shows a deprecation warning but still works for backwards compatibility.
  • Tool-based completions now return strings instead of arrays - When functions are called, the final response is a string containing the AI's text response, not an array of function results.
  • stop_looping! renamed to stop_tool_calls_and_respond! - Better reflects the new automatic continuation behavior.

New Features

  • Automatic conversation continuation - Chat completions automatically continue after tool execution without needing the loop parameter.
  • max_tool_calls parameter - Controls the maximum number of tool invocations to prevent infinite loops (default: 25).
  • Configuration for max_tool_calls - Added max_tool_calls to the Configuration class with sensible defaults.
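
A configuration sketch using the Raix.configure block:

Raix.configure do |config|
  config.max_tool_calls = 10 # adjust the default cap of 25
end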

Migration Guide

# Before
response = ai.chat_completion(loop: true)

# After (automatic)
response = ai.chat_completion

# To limit tool calls
response = ai.chat_completion(max_tool_calls: 5)

Other Changes

  • Improved CI/CD workflow to use bundle exec rake ci for consistent testing
  • Fixed conflict between loop attribute and Ruby's Kernel.loop method (fixes #11)
  • Fixed various RuboCop warnings using keyword argument forwarding
  • Improved error handling with proper warning messages

See the CHANGELOG for full details.

v0.9.2

03 Jun 18:56
e976d07

What's Changed

Fixed

  • Fixed OpenAI chat completion compatibility
  • Fixed SHA256 hexdigest generation for MCP tool names (see the sketch below)
  • Added ostruct as explicit dependency to prevent warnings
  • Fixed rubocop lint error for alphabetized gemspec dependencies
  • Updated default OpenRouter model
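
For reference, SHA256 hexdigests come from Ruby's stdlib Digest module; the way the digest is folded into an MCP tool name below is only illustrative, not Raix's exact scheme:

require "digest"

digest = Digest::SHA256.hexdigest("server_url#tool_name")
mcp_tool_name = "tool_#{digest[0, 8]}" # truncated digest, purely illustrative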

Full Changelog: v0.9.1...v0.9.2

v0.8

23 Apr 18:36
de45377

Adds experimental support for declaring MCP servers as tool functions
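
A hypothetical shape for such a declaration; the mcp class method, module name, and URL below are assumptions about the experimental DSL, not confirmed API:

class ResearchAssistant
  include Raix::ChatCompletion
  include Raix::MCP # assumed module name

  # Hypothetical: expose the MCP server's tools as callable functions
  mcp "https://example.com/mcp/sse"
end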

Full Changelog: v0.7.3...v0.8

v0.7.3

23 Apr 15:16

Small change to function call handling. Commits both tool call and result to transcript in one operation for thread safety.
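
The general idea, independent of Raix internals: pushing both entries in a single Array operation avoids a window in which another thread could see the tool call without its result. A generic illustration:

# Placeholder messages; the real entries carry the tool call and its result
tool_call_message   = { assistant: "tool call" }
tool_result_message = { function: "tool result" }

# A single Array operation commits both entries together,
# instead of two separate appends
transcript = []
transcript.concat([tool_call_message, tool_result_message])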

0.7.1

13 Apr 23:58

Significantly improved PromptDeclarations module with many additional features.

Smaller changes:

  • Make automatic JSON parsing available to non-OpenAI providers that don't support the response_format parameter by scanning the response for <json> XML tags
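
A generic illustration of the approach, assuming the model has been instructed to wrap its JSON in <json> tags (the tag convention and regex here are illustrative):

require "json"

def extract_json(response_text)
  match = response_text.match(%r{<json>(.*?)</json>}m)
  match && JSON.parse(match[1])
end

extract_json('Sure! <json>{"answer": 42}</json>') # => {"answer"=>42}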

Full Changelog: v0.6...0.7

v0.6

02 Apr 18:15

As of this release, Raix will automatically add the AI's response to the transcript by default. This behavior can be controlled with the save_response parameter, which defaults to true. You may want to set it to false when making multiple chat completion calls during the lifecycle of a single object (whether sequentially or in parallel) and want to manage the transcript updates yourself:

>> ai.chat_completion(save_response: false)

Full Changelog: 0.5.0...v0.6

v0.5.0

16 Feb 20:21

Full Changelog: v0.4.8...0.5.0

v0.4.8

06 Dec 01:47

Predicate is fully supported in this release.

Full Changelog: v0.4.5...v0.4.8