Simulacra is a platform for building LLM-powered Telegram bots with a template-based context system.
This project was created for personal experimentation, but is available for anyone to use. Express your interest by starring this project on GitHub. Community contributions are welcome!
For Docker-specific usage, see the Docker section.
```sh
uv sync
```

To include development dependencies, add `--dev`.
Modify the example configuration file `example/config.toml` with your Telegram bot token (`telegram_token`) and username (`authorized_user`).
- Interact with @BotFather to create a new bot and get its API token.
For more information, see the Configuration section.
```sh
uv run app.py example/config.toml
```

Send a message to your bot and it will respond. If the model supports it, bots can also see and understand images.
Send /help to see a list of commands:
Actions
/new - Start a new conversation
/retry - Retry the last response
/undo - Undo the last exchange
/clear - Clear the conversation
/continue - Request another response
/instruct (...) - Apply an instruction
/syncbook (...) - Sync current book position
Information
/stats - Show conversation statistics
/help - Show this help message
The application is configured by a TOML config file, which initializes one or more Telegram bots and defines the path to their YAML context files.
See `example/config.toml` for a template config file:

```toml
[[simulacra]]
context_filepath = "example/context.yml"
telegram_token = "telegram-bot-token"
authorized_user = "@telegram-username"

[[simulacra]] # Second bot configuration
context_filepath = "example/second_bot_context.yml"
telegram_token = "second-telegram-bot-token"
authorized_user = "@telegram-username"
```

The context file is a YAML file that defines bot configuration and state.
A context file contains the following keys:
| Key | Description |
|---|---|
| `character_name` | The bot's character name |
| `user_name` | The user's name |
| `conversation_file` | Relative file link to the conversation file (auto-generated) |
| `api_params` | API configuration object |
| ├─ `model` | The model to use for the API |
| └─ `<key>` | Additional API parameters (e.g. `temperature`, `max_tokens`) |
| `vars` | Template variables object |
| ├─ `system_prompt` | The bot's system prompt |
| └─ `<key>` | Additional template variables |
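For illustration, a minimal context file following the keys above might look like this. The character, model name, and prompt are placeholders, not taken from the project, and `conversation_file` is omitted since it is auto-generated:

```yml
character_name: Ada
user_name: Alice
api_params:
  model: some-provider/some-model
  temperature: 0.7
vars:
  system_prompt: You are Ada, a friendly conversational companion.
```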
Conversations are stored separately in a conversations/ directory. Changes to the context file take effect immediately.
This project publishes a Docker image to GHCR at `ghcr.io/njbbaer/simulacra`.
Configure your container with the following:
- Mount a directory containing your config and context files to `/config`.
- Set the path to your config file in the environment as `CONFIG_FILEPATH`.
- Set your OpenRouter API key in the environment as `OPENROUTER_API_KEY`.
Ensure the context file paths in your config are accessible within the container (i.e. /config).
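For example, with a host directory mounted at `/config`, each config entry should reference container paths (filenames here are illustrative):

```toml
[[simulacra]]
context_filepath = "/config/context.yml"
telegram_token = "telegram-bot-token"
authorized_user = "@telegram-username"
```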
```sh
docker run --name simulacra \
  --volume /var/lib/simulacra:/config \
  --env OPENROUTER_API_KEY=your_openrouter_api_key \
  --env CONFIG_FILEPATH=/config/config.toml \
  --restart unless-stopped \
  ghcr.io/njbbaer/simulacra:latest
```

Or with Docker Compose:

```yml
services:
  simulacra:
    image: ghcr.io/njbbaer/simulacra:latest
    container_name: simulacra
    volumes:
      - /var/lib/simulacra:/config
    environment:
      - OPENROUTER_API_KEY={{ your_openrouter_api_key }}
      - CONFIG_FILEPATH=/config/config.toml
    restart: unless-stopped
```

Enable code reloading with development mode. Create a `.env` file or add the following to your environment:
```sh
export ENVIRONMENT=development
```

Note: Development mode can only run a single bot.
Install pre-commit hooks before committing code:
```sh
uv run pre-commit install
```

Lint and test with:

```sh
make lint
make test
```

The release script sets the new version in `pyproject.toml`, commits it, and pushes a tag.
A release is performed by GitHub Actions when the tag is pushed.
```sh
make release type=<major|minor|patch>
```