dreadnode/rigging


Flexible LLM library for code and agents



Rigging is a lightweight LLM framework to make using language models in production code as simple and effective as possible. Here are the highlights:

  • Structured Pydantic models can be used interchangeably with unstructured text output (see the parsing sketch after the example below).
  • LiteLLM as the default generator, giving you instant access to a huge array of models.
  • Define prompts as Python functions with type hints and docstrings.
  • Simple tool use, even for models that don't support tool calling at the API level.
  • Store different models and configs as simple connection strings, just like databases.
  • Integrated tracing support with Logfire.
  • Chat templating, forking, continuations, generation parameter overloads, stripping segments, etc.
  • Async batching and fast iterations for large-scale generation.
  • Metadata, callbacks, and data format conversions.
  • Modern Python with type hints, async support, Pydantic validation, serialization, etc.
import rigging as rg
import asyncio

@rg.prompt(generator_id="gpt-4")
async def get_authors(count: int = 3) -> list[str]:
    """Provide famous authors."""

print(asyncio.run(get_authors()))

# ['William Shakespeare', 'J.K. Rowling', 'Jane Austen']
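
Function-style prompts are one option; the same pipelines can also parse structured Pydantic models back out of raw model output. A minimal sketch, assuming the rg.Model / until_parsed_as pattern from the docs (exact names may differ between versions):

import rigging as rg
import asyncio

# Assumes rg.Model and .until_parsed_as() as described in the rigging docs
class Answer(rg.Model):
    content: str

async def main():
    chat = (
        await rg.get_generator("gpt-4")
        .chat(f"Name a famous author. Reply with {Answer.xml_example()}")
        .until_parsed_as(Answer)
        .run()
    )
    # The parsed result is a regular Pydantic object
    print(chat.last.parse(Answer).content)

asyncio.run(main())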

Rigging is built by dreadnode, where we use it daily.

Installation

We publish every version to PyPI:

pip install rigging

If you want to build from source:

git clone https://github.com/dreadnode/rigging.git
cd rigging/
poetry install

Supported LLMs

Rigging will run just about any language model: anything available through LiteLLM (the default generator) works out of the box.
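
A quick sketch of grabbing a couple of generators (the ids follow LiteLLM naming, and the trailing parameters are illustrative, not exhaustive):

import rigging as rg

# OpenAI via LiteLLM (the default provider)
generator = rg.get_generator("gpt-4-turbo")

# Anthropic, with generation parameters baked into the connection string
generator = rg.get_generator("claude-3-sonnet-20240229,temperature=0.7,max_tokens=1024")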

API Keys

Pass an api_key as part of the generator id, or use the standard environment variables.

rg.get_generator("gpt-4-turbo,api_key=...")

export OPENAI_API_KEY=...
export MISTRAL_API_KEY=...
export ANTHROPIC_API_KEY=...
...
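
Both approaches produce the same generator. A quick sketch (the key values are placeholders):

import os
import rigging as rg

# Key embedded directly in the generator id ...
generator = rg.get_generator("gpt-4-turbo,api_key=...")

# ... or picked up from the standard environment variable
os.environ["OPENAI_API_KEY"] = "..."
generator = rg.get_generator("gpt-4-turbo")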

Check out the docs for more.

Getting Started

Check out the guide in the docs.

  1. Get a generator using a connection string.
  2. Build a chat or completion pipeline.
  3. Run the pipeline and get the output.
import rigging as rg
import asyncio

async def main():
    # 1 - Get a generator
    generator = rg.get_generator("claude-3-sonnet-20240229")

    # 2 - Build a chat pipeline
    pipeline = generator.chat(
        [
            {"role": "system", "content": "Talk like a pirate."},
            {"role": "user", "content": "Say hello!"},
        ]
    )

    # 3 - Run the pipeline
    chat = await pipeline.run()
    print(chat.conversation)

# Run the main function
asyncio.run(main())

# [system]: Talk like a pirate.
# [user]: Say hello!
# [assistant]: Ahoy, matey! Here be the salty sea dog ready to trade greetings wit' ye. Arrr!
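
Pipelines can also override generation parameters and run several generations concurrently. A minimal sketch, assuming the with_ and run_many helpers described in the docs (signatures may vary between versions):

import rigging as rg
import asyncio

async def main():
    pipeline = rg.get_generator("claude-3-sonnet-20240229").chat(
        "Give me a one-line fun fact."
    )

    # Bump the temperature and collect three independent completions
    # (with_() / run_many() as described in the rigging docs)
    chats = await pipeline.with_(temperature=0.9).run_many(3)
    for chat in chats:
        print(chat.last.content)

asyncio.run(main())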

Want more?

Examples

Documentation

docs.dreadnode.io has everything you need.

Star History

Star History Chart