Releases: langgenius/dify

v1.9.1 – 1,000 Contributors, Infinite Gratitude

29 Sep 11:35
cd47a47

Congratulations to our community on welcoming its 1,000th contributor!


🚀 New Features

  • Infrastructure & DevOps:

    • Next.js upgraded to 15.5, now leveraging Turbopack in development for a faster, more modern build pipeline by @17hz in #24346.
    • Provided X-Dify-Version headers in marketplace API access for better traceability by @RockChinQ in #26210.
    • Security reporting improvements, with new sec report workflow added by @crazywoola in #26313.
  • Pipelines & Engines:

    • Built-in pipeline templates now support language configuration, unlocking multilingual deployments by @WTW0313 in #26124.
    • Graph engine now blocks response nodes during streaming to avoid unintended outputs by @laipz8200 in #26364 / #26377.
🛠 Fixes & Improvements

  • Debugging & Logging:

    • Fixed NodeRunRetryEvent debug logging not working properly in Graph Engine by @quicksandznzn in #26085.
    • Fixed LLM node losing Flask context during parallel iterations, ensuring stable concurrent runs by @quicksandznzn in #26098.
    • Fixed agent-strategy prompt generator error by @quicksandznzn in #26278.
  • Pipeline & Workflow:

    • Fixed workflow variable splitting logic (requires ≥2 parts) by @zhanluxianshen in #26355.
    • Fixed tool node attribute tool_node_version judgment error causing compatibility issues by @goofy-z in #26274.
    • Fixed iteration conversation variables not syncing correctly by @laipz8200 in #26368.
    • Fixed Knowledge Base node crash when retrieval_model is null by @quicksandznzn in #26397.
    • Fixed workflow node mutation issues, preventing props from being incorrectly altered by @hyongtao-code in #26266.
    • Removed restrictions on adding workflow nodes by @zxhlyh in #26218.
  • File Handling:

    • Fixed remote filename handling so responses with Content-Disposition: inline are treated as inline content instead of being parsed incorrectly by @sorphwer in #25877.
    • Synced FileUploader context with props to fix inconsistent file parameters in cached variable view by @Woo0ood in #26199.
    • Fixed variable not found error (#26144) by @sqewad in #26155.
    • Fixed db connection error in embed_documents() by @AkisAya in #26196.
    • Fixed model list refresh when credentials change by @zxhlyh in #26421.
    • Fixed retrieval configuration handling and missing vector_setting in dataset components by @WTW0313 in #26361 / #26380.
    • Fixed ChatClient audio_to_text files keyword bug by @EchterTimo in #26317.
    • Added missing import IO in client.py by @EchterTimo in #26389.
    • Removed FILES_URL in default .yaml settings by @JoJohanse in #26410.
  • Performance & Networking:

    • Improved pooling of httpx clients for requests to code sandbox and SSRF protection by @Blackoutta in #26052.
    • Distributed plugin auto-upgrade tasks with concurrency control by @RockChinQ in #26282.
    • Switched plugin auto-upgrade cache to Redis for reliability by @RockChinQ in #26356.
    • Fixed plugin detail panel not showing when >100 plugins are installed by @JzoNgKVO in #26405.
    • Fixed a debounce reference issue to improve performance stability by @crazywoola in #26433.
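The httpx-client pooling improvement above follows a common pattern: keep a bounded set of long-lived clients and reuse them instead of creating one per request, avoiding repeated TCP/TLS setup. A minimal, library-agnostic sketch of that pattern using only the standard library (the `ClientPool` and `FakeClient` names are illustrative, not Dify's actual code):

```python
import queue

class FakeClient:
    """Stand-in for an HTTP client; real code would hold an httpx.Client."""
    def __init__(self):
        self.requests_served = 0

class ClientPool:
    """Bounded pool: at most `size` clients are ever created, then reused."""
    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(FakeClient())

    def acquire(self):
        return self._pool.get()      # blocks if every client is in use

    def release(self, client):
        self._pool.put(client)       # return the client for reuse

pool = ClientPool(size=1)
c1 = pool.acquire()
c1.requests_served += 1
pool.release(c1)
c2 = pool.acquire()                  # with size=1, the same object comes back
```

Because clients are returned to the pool rather than discarded, connection state (and keep-alive connections, in the real httpx case) survives across requests.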
  • UI/UX & Display:

    • Fixed lingering display-related issues (translations, UI consistency) by @hjlarry in #26335.
    • Fixed broken CSS animations under Turbopack by naming unnamed animations in CSS modules by @lyzno1 in #26408.
    • Fixed verification code input using wrong maxLength prop by @hyongtao-code in #26244.
    • Fixed array-only filtering in List Operator picker, removed file-children fallback, aligned child types by @Woo0ood in #26240.
    • Fixed translation inconsistencies in ja-JP: “ナレッジベース” vs. “ナレッジの名前とアイコン” by @mshr-h in #26243 and @NeatGuyCoding in #26270.
    • Improved “time from now” i18n support by @hjlarry in #26328.
    • Standardized dataset-pipeline i18n terminology by @lyzno1 in #26353.
  • Code & Components:

    • Refactored component exports for consistency by @ZeroZ-lab in #26033.
    • Refactored router to apply ns.route style by @laipz8200 in #26339.
    • Refactored lint scripts to remove duplication and simplify naming by @lyzno1 in #26259.
    • Applied @console_ns.route decorators to RAG pipeline controllers (internal refactor) by @Copilot in #26348.
    • Added missing type="button" attributes in components by @Copilot in #26249.

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.9.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • fix(api): graph engine debug logging NodeRunRetryEvent not effective by @quicksandznzn in #26085
  • fix full_text_search name by @JohnJyong in #26104
  • bump nextjs to 15.5 and turbopack for development mode by @17hz in #24346
  • chore: refactor component exports for consistency by @ZeroZ-lab in #26033
  • fix:add some explanation for oceanbase parser selection by @longbingljw in #26071
  • feat(pipeline): add language support to built-in pipeline templates and update related components by @WTW0313 in #26124
  • ci: Add hotfix/** branches to build-push workflow triggers by @QuantumGhost in #26129
  • fix(api): Fix variable truncation for list[File] value in output mapping by @QuantumGhost in #26133
  • one example of Session by @asukaminato0721 in #24135
  • fix(api):LLM node losing Flask context during parallel iterations by @quicksandznzn in #26098
  • fix(search-input): ensure proper value extraction in composition end handler by @yangzheli in #26147
  • delete end_user check by @JohnJyong in #26187
  • improve: pooling httpx clients for requests to code sandbox and ssrf by @Blackoutta in #26052
  • fix: remote filename will be 'inline' if Content-Disposition: inline by @sorphwer in #25877
  • perf: provide X-Dify-Version for marketplace api access by @RockChinQ in #26210
  • Chore/remove add node restrict of workflow by @zxhlyh in #26218
  • Fix array-only filtering in List Operator picker; remove file children fallback and align child types. by @Woo0ood in #26240
  • fix: sync FileUploader context with props to fix inconsistent file parameter state in “View cached variables”. by @Woo0ood in #26199
  • fix: add echarts and zrender to transpilePackages for ESM compatibility by @lyzno1 in #26208
  • chore: fix inaccurate translation in ja-JP by @mshr-h in #26243
  • aliyun_trace: unify the span attribute & compatible CMS 2.0 endpoint by @hieheihei in #26194
  • fix(api): resolve error in agent‑strategy prompt generator by @quicksandznzn in #26278
  • minor: fix translation with the key value uses 「ナレッジの名前とアイコン」 while the rest of the file uses 「ナレッジベース」 by @NeatGuyCoding in #26270
  • refactor(web): simplify lint scripts, remove duplicates and standardize naming by @lyzno1 in #26259
  • fmt first by @asukaminato0721 in #26221
  • fix: resolve UUID parsing error for default user session lookup by @Cluas in #26109
  • Fix: avoid mutating node props by @hyongtao-code in #26266
  • update gen_ai semconv for aliyun trace by @hieheihei in #26288
  • chore: streamline AGENTS.md guidance by @laipz8200 in #26308
  • rm assigned but unused by @asukaminato0721 in #25639
  • Chore/add sec report by @crazywoola in #26313
  • Fix ChatClient.audio_to_text files keyword to make it work by @EchterTimo in #26317
  • perf: distribute concurrent pl...

1.9.0 – Orchestrating Knowledge, Powering Workflows

22 Sep 12:23
2e2c87c


🚀 Introduction

In Dify 1.9.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your experiences to help us continuously improve.


📚 Knowledge Pipeline

✨ Introduction

The brand-new orchestration interface for knowledge pipelines is a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that lets users orchestrate every stage of the pipeline, and, enhanced with a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption — from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points include:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these degrade answer quality and hinder the model's overall performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL


  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools


Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval


Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that allows extending engine functionality without modifying core code. Layers can observe engine state, send commands, and plug in custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 1.9.0 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer.
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.
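The limits above are plain environment variables. As a sketch, a deployment script might read them with the documented defaults like this (the `read_workflow_limits` helper is illustrative, not part of Dify's API):

```python
import os

def read_workflow_limits(env=os.environ):
    """Read graph-engine limits, falling back to the documented defaults."""
    return {
        "max_steps": int(env.get("WORKFLOW_MAX_EXECUTION_STEPS", 500)),
        "max_time_s": int(env.get("WORKFLOW_MAX_EXECUTION_TIME", 1200)),
        "max_call_depth": int(env.get("WORKFLOW_CALL_MAX_DEPTH", 10)),
    }

limits = read_workflow_limits({})  # empty env -> the documented defaults
```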

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.
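The flow above can be sketched with a plain task queue and a fixed worker pool from the standard library. This illustrates the scheduling pattern only — a single shared queue drained by concurrent workers, with results aggregated at the end — not the engine's actual Dispatcher or WorkerPool classes:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

tasks = queue.Queue()
for branch in ("branch-1", "branch-2"):
    tasks.put(branch)                # all branches enter one unified queue

def worker():
    """Drain the shared queue until it is empty, collecting results."""
    results = []
    while True:
        try:
            branch = tasks.get_nowait()
        except queue.Empty:
            return results
        results.append(f"{branch} done")

# A fixed pool of workers consumes the shared queue concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(worker) for _ in range(2)]
    # Aggregator step: merge every worker's results into one list.
    completed = sorted(sum((f.result() for f in futures), []))
```

Whichever worker picks up each branch, all results land in `completed`, mirroring how the Aggregator joins parallel branches regardless of execution order.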

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer



FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.
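A custom layer typically amounts to routing the events listed above to handlers. The sketch below shows that routing idea with plain dictionaries and string tags; the `on_event` and `dispatch` helpers are hypothetical illustrations, not the GraphEngineLayer API (real layers subclass GraphEngineLayer):

```python
# Hypothetical event router; only the event names come from the release notes.
handlers = {}

def on_event(name):
    """Register a handler function for the named engine event."""
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@on_event("NodeRunFailedEvent")
def log_failure(event):
    return f"node failed: {event['node_id']}"

def dispatch(name, event):
    """Invoke the registered handler, if any, for an incoming event."""
    handler = handlers.get(name)
    return handler(event) if handler else None

msg = dispatch("NodeRunFailedEvent", {"node_id": "llm-1"})
```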

Future Plans

This release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s...

v2.0.0-beta.2

08 Sep 07:20
2a84832
Pre-release

Fixes

  • Fixed an issue in Workflow / Chatflow where using an LLM node with Memory could cause errors.
  • Fixed a blocking issue in non-pipeline mode when adding new Notion pages to the document list.
  • Fixed dark mode styling issues.

Upgrade Guide

Important

If upgrading from 0.x or 1.x, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code for the release tag

    git checkout 2.0.0-beta.2
    git pull origin 2.0.0-beta.2
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d
  6. Migrate data after the container starts

    docker exec -it docker-api-1 uv run flask transform-datasource-credentials

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 2.0.0-beta.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
    uv run flask transform-datasource-credentials
  5. Finally, run the API server, Worker, and Web frontend Server again.

v2.0.0-beta.1 – Orchestrating Knowledge, Powering Workflows

04 Sep 13:38
fae6d4f


🚀 Introduction

In Dify 2.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

This is a beta release, and we hope to explore these improvements together with you and gather your feedback. The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your experiences to help us continuously improve.


📚 Knowledge Pipeline

✨ Introduction

The brand-new orchestration interface for knowledge pipelines is a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that lets users orchestrate every stage of the pipeline, and, enhanced with a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption — from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points include:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these degrade answer quality and hinder the model's overall performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL


  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools


Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval


Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that allows extending engine functionality without modifying core code. Layers can observe engine state, send commands, and plug in custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 2.0.0-beta.1 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer.
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer



FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.

Future Plans

This beta release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your cus...

v1.8.1

03 Sep 11:07
c7700ac

🌟 What's New in v1.8.1? 🌟

Welcome to version 1.8.1! 🎉🎉🎉 This release focuses on stability, performance improvements, and developer experience enhancements. Guided by community feedback, we've shipped new features and resolved critical database issues.

🚀 Features

  • Export DSL from History: You can now export workflow DSL directly from the version history panel. (See #24939, by GuanMu)
  • Downvote with Reason: Enhanced feedback system allowing users to provide specific reasons when downvoting responses. (See #24922, by jubinsoni)
  • Multi-modal/File: Added filename support to multi-modal prompt messages. (See #24777, by -LAN-)
  • Advanced Chat File Handling: Improved assistant content parts and file handling in advanced chat mode. (See #24663, by QIN2DIM)

⚡ Enhancements

  • DB Query: Optimized SQL queries that were performing partial full table scans. (See #24786, by Novice)
  • Type Checking: Migrated from MyPy to Basedpyright. (See #25047, by -LAN-)
  • Indonesian Language Support: Added Indonesian (id-ID) language support. (See #24951, by lyzno1)
  • Jinja2 Template: LLM prompt Jinja2 templates now support more variables. (See #24944, by 17hz)

🐛 Fixes

  • Security/XSS: Fixed XSS vulnerability in block-input and support-var-input components. (See #24835, by lyzno1)
  • Persistence Session Management: Resolved critical database session binding issues that were causing "not bound to a Session" errors. (See #25010, #24966, by Will)
  • Workflow & UI Issues: Fixed workflow publishing problems, resolved UUID v7 conflicts, and addressed various UI component issues including modal handling and input field improvements. (See #25030, #24643, #25034, #24864, by Will, -LAN-, 17hz & Atif)

Version 1.8.1 represents a significant step forward in platform stability and developer experience. The migration to modern type checking, together with the database session fixes and comprehensive bug fixes, creates a more robust foundation for future features.

Huge thanks to all our contributors who made this release possible! We welcome your ongoing feedback to help us continue improving the platform together.


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.8.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed


v1.8.0 - Async workflows meet multi-model management with OAuth-powered integrations.

27 Aug 07:27
f048444

🎉 Dify v1.8.0 Release Notes 🎉

Hello, Dify community! We're excited to bring you version 1.8.0, packed with significant improvements across the board - from enhanced security and performance optimizations to a revamped UI and powerful new workflow features. Let's dive into what's new!

🚀 New Features

Workflow & Agent Capabilities

  • Multi-Model Credentials System: Implemented a comprehensive multi-model credentials system with new database tables, enabling more flexible model management. Thanks to @hjlarry! (#24451)
  • MCP Support with OAuth: Added Model Context Protocol (MCP) support for resource discovery with OAuth authentication, expanding integration possibilities. Kudos to @CodeSpaceiiii! (#24223)
  • Default Values for Workflow Variables: All workflow start node variable types now support default values, making workflows more robust. Thanks to @17hz! (#24129)
  • Agent Node Token Usage: Exposed agent node usage metrics for better monitoring and optimization. Thanks to @DavideDelbianco! (#24355)
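The default-values feature above boils down to merging run-time inputs with per-variable defaults. Here is a minimal, hypothetical sketch of that resolution logic; the function and field names are illustrative, not Dify's actual schema:

```python
# Hypothetical sketch: applying default values to workflow start-node inputs.
# Field names ("name", "default", "required") are illustrative only.

def resolve_start_inputs(variable_specs, user_inputs):
    """Merge user-provided inputs with per-variable defaults."""
    resolved = {}
    for spec in variable_specs:
        name = spec["name"]
        if name in user_inputs and user_inputs[name] is not None:
            resolved[name] = user_inputs[name]          # explicit value wins
        elif "default" in spec:
            resolved[name] = spec["default"]            # fall back to default
        elif spec.get("required", False):
            raise ValueError(f"missing required input: {name}")
    return resolved

specs = [
    {"name": "query", "required": True},
    {"name": "temperature", "default": 0.7},
]
print(resolve_start_inputs(specs, {"query": "hello"}))
```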

UI/UX Enhancements

  • Document Sorting in Knowledge Base: Added sorting functionality for document status in the Knowledge base, improving document management. Thanks to @jubinsoni! (#24252)
  • Delete Avatar Functionality: Users can now delete their avatars with a confirmation modal for safety. Thanks to @Zhehao-P! (#24099)
  • Extensible Goto-Anything Commands: Improved goto-anything commands with an extensible architecture for better navigation. Thanks to @ZeroZ-lab! (#24091)
  • Document Name Tooltips: Added helpful tooltips to document names in lists for better visibility. Thanks to @aopstudio! (#24467)
  • Auto-login After Setup: Implemented secure auto-login after admin account setup. Thanks to @laipz8200! (#24395)

API & Backend

  • Redis SSL/TLS Authentication: Added support for Redis SSL/TLS certificate authentication for enhanced security. Thanks to @laipz8200! (#23624)
  • Flask-RESTX Migration: Successfully migrated from Flask-RESTful to Flask-RESTX for better API documentation and structure. Thanks to @asukaminato0721! (#24310)
  • Swagger Authorization: Added authorization configuration support to Swagger documentation. Thanks to @hjlarry! (#24518)

🐛 Bug Fixes

Critical Fixes

  • Database Performance: Fixed major performance issue by removing provider table updates on every message creation. Thanks to @QuantumGhost! (#24520)
  • Authentication Error Handling: Fixed login error handling by properly raising exceptions instead of returning. Thanks to @laipz8200! (#24452)
  • OAuth Redis Compatibility: Resolved OAuth Redis compatibility issues. Thanks to @Mairuis! (#23959)
  • HTTP Request Node File Access: Fixed file access from Start Node with remote URLs in HTTP Request Node. Thanks to @dlmu-lq! (#24293)

Workflow Improvements

  • Loop Exit Conditions: Fixed loop exit condition to accept variables from nodes inside loops. Thanks to @baonudesifeizhai! (#24257)
  • Agent Node Token Counting: Properly separated prompt and completion tokens in agent node token counting. Thanks to @laipz8200! (#24368)
  • Number Input in Tool Configure: Fixed number input behavior in agent node tool configuration. Thanks to @Stream29! (#24152)
  • Delete Conversations via API: Fixed conversation deletion through API to properly remove from database. Thanks to @jubinsoni! (#23591)

UI/UX Fixes

  • Dark Mode Improvements: Multiple dark mode fixes including backdrop-blur for plugin dropdowns, hover button contrast, and embedded modal icons. Thanks to @lyzno1 and team!
  • React Warnings: Fixed Next.js React warnings by properly moving shareCode updates to useEffect. Thanks to @Eric-Guo! (#24468)
  • Border Radius Consistency: Fixed UI border radius inconsistencies across components. Thanks to @jubinsoni! (#24486)

🔒 Security Enhancements

  • User Enumeration Prevention: Standardized authentication error messages to prevent user enumeration attacks. Thanks to @laipz8200! (#24324)
  • Custom Headers Fix: Fixed custom headers being ignored when using bearer or basic authorization. Thanks to @liugddx! (#23584)
  • SQL Injection in Oracle VDB: Fixed a SQL injection vulnerability in the Oracle vector database integration.
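The standard defense behind fixes like the Oracle VDB one is to bind user input as a query parameter instead of interpolating it into SQL. This general illustration uses the stdlib sqlite3 driver (the Oracle driver applies the same idea with :name placeholders); it is not Dify's actual patch:

```python
# General illustration: parameterized queries vs. string interpolation.
# Shown with stdlib sqlite3; not Dify's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, title TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'hello')")

malicious = "x' OR '1'='1"

# Vulnerable: interpolation lets the input rewrite the query.
rows_vuln = conn.execute(
    f"SELECT * FROM docs WHERE title = '{malicious}'"
).fetchall()  # the OR clause matches every row

# Safe: the driver treats the bound value purely as data.
rows_safe = conn.execute(
    "SELECT * FROM docs WHERE title = ?", (malicious,)
).fetchall()  # matches nothing

print(len(rows_vuln), len(rows_safe))
```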

⚡ Performance & Infrastructure

Workflow Performance Breakthrough

  • Async WorkflowRun/WorkflowNodeRun Repositories: Implemented asynchronous repositories for workflow execution, delivering dramatic performance improvements. This architectural change enables non-blocking operations during workflow runs, with early testing showing execution times nearly halved in typical workflows. This optimization particularly benefits complex workflows with multiple nodes and parallel operations. Thanks to @xinlmain for this game-changing performance enhancement! (#20050)

Database Optimizations

  • Semantic Version Comparison: Implemented semantic version comparison for vector database version checks. Thanks to @MatriQ! (#24416)
  • AnalyticDB Improvements: Fixed rollback issues when AnalyticDB create zhparser failed. Thanks to @lpdink! (#24260)
  • Dataset Cleanup: Optimized dataset cleanup task for better performance. Thanks to @aopstudio! (#24467)
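The semantic version comparison mentioned above typically replaces fragile string comparison with integer tuples. A minimal sketch (not Dify's implementation, which may also handle pre-release tags):

```python
# Minimal semantic version comparison via integer tuples. String comparison
# gets ordering wrong ("1.10.0" < "1.9.0" lexically); tuples do not.
def parse_semver(version: str) -> tuple:
    # Drop any pre-release/build suffix for this simplified comparison.
    core = version.split("-")[0].split("+")[0]
    return tuple(int(part) for part in core.split("."))

assert parse_semver("1.10.0") > parse_semver("1.9.0")  # numeric ordering
assert "1.10.0" < "1.9.0"                              # lexical ordering is wrong
print(parse_semver("2.4.11"))
```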

Testing Infrastructure

  • Comprehensive Test Coverage: Added testcontainers-based integration tests for multiple services including workflow app, website, auth, conversation, and more. Massive thanks to @NeatGuyCoding for this extensive testing effort!
  • Rate Limiting Tests: Added comprehensive test suite for rate limiting module. Thanks to @farion1231! (#23765)

Docker & Deployment

  • Docker Build Optimization: Optimized Docker build process with cleanup script for Jest work files. Thanks to @WTW0313! (#24450)
  • Amazon ECS Deployment: Added deployment pattern documentation using Amazon ECS and CDK. Thanks to @tmokmss! (#23985)
  • Configurable Plugin Buffer Sizes: Added configurable stdio buffer sizes for plugins in compose file. Thanks to @crazywoola! (#23980)

📚 Documentation

  • CLAUDE.md for LLM Development: Added comprehensive CLAUDE.md file for LLM-assisted development guidance. Thanks to @laipz8200! (#23946)
  • API Documentation: Enhanced API documentation for files endpoint, MCP, and service API. Thanks to @laipz8200!
  • Localized Documentation: Updated localized README files to link to corresponding localized CONTRIBUTING.md files. Thanks to @aopstudio! (#24504)
  • Markdown Auto-formatting: Implemented auto-formatting for markdown files using mdformat tool. Thanks to @asukaminato0721! (#24242)

🧹 Code Quality & Refactoring

  • Type Safety Improvements: Major improvements to type annotations and static type checking across the codebase. Thanks to @Gnomeek, @hyongtao-code, and @asukaminato0721!
  • AST-Grep Integration: Added ast-grep tool for maintaining codebase consistency. Thanks to @asukaminato0721! (#24149)
  • Dead Code Removal: Cleaned up empty files and unused code throughout the project. Thanks to @hyongtao-code! (#23990)
  • Import Optimization: Replaced deprecated functions and optimized imports across the codebase.

🌐 Internationalization

  • Automated Translation Updates: Continuous updates to i18n translation files with improved accuracy
  • Japanese Translation Corrections: Fixed Japanese translation issues. Thanks to @kurokobo! (#24041)
  • Translation Synchronization: Better synchronization of translations across all supported languages

This release represents a major step forward in Dify's evolution, with substantial improvements to performance, security, and developer experience. We're particularly excited about the enhanced workflow capabilities and the comprehensive testing infrastructure that will help us maintain high quality standards going forward.

Thank you to all contributors who made this release possible! Your dedication to improving Dify continues to drive us forward.

Happy building with Dify 1.8.0! 🚀


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (run from the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.8.0
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more

v1.7.2

11 Aug 09:26
1.7.2
0baccb9

✨ What’s New in v1.7.2? ✨

Alright folks, buckle up! Version 1.7.2 is here, packed with a ton of quality-of-life improvements, bug fixes, and some slick new features to make your Dify experience even smoother. This release has been a community effort, and we want to give a big shoutout to all the contributors, especially the new folks who jumped in – welcome to the party! 🎉

🚀 Major Feature: Workflow Visualization

A new relations panel allows you to visualize dependencies within your workflows. Big thanks to @Minamiyama for #21998! Now when you select any node and press Shift, you will see magic flowing lines.


🚀 Major Feature: Node Search

You can now easily find nodes in the workflow editor using the new search feature by @croatialu, @ZeroZ-lab, @HyaCiovo, @MatriQ, @lyzno1, @crazywoola in #23685.


⚙️ Enhancements

  • Notion Database Row Extraction: The Notion Database integration now extracts rows in their original order and appends the Row Page URL. Thanks @ThreeFish-AI! #22646
  • Workflow API Version Specification: You can now specify workflow versions in the workflow and chat APIs. Thanks, @qiaofenlin! #23188
  • Tool JSON Response: Datetime and UUID are now supported in tool JSON responses, making those integrations even more powerful. Kudos to @jiangbo721! #22738
  • API Documentation: The API documentation has been revamped with a modern design and improved UX. Thanks @lyzno1! #23490
  • Workflow Node Alignment: Get those workflows looking sharp with enhanced node alignment options. Thanks, @ZeroZ-lab! #23451
  • Service API File Preview Endpoint: A new endpoint to preview service API files, making it easier to manage and debug your services. Hat tip to @lyzno1! #23534
  • Testcontainers Tests: We're serious about stability! @NeatGuyCoding and others have been hard at work adding Testcontainers tests for various services (account, app, message, workflow, etc.), ensuring our services are rock solid.
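Supporting datetime and UUID in tool JSON responses, as noted above, is commonly done with a custom `default` hook for `json.dumps`. A hedged sketch of that pattern (illustrative only, not Dify's exact code):

```python
# One common way to make datetime and UUID values JSON-serializable:
# a custom `default` hook for json.dumps. Illustrative only.
import json
import uuid
from datetime import datetime, timezone

def tool_json_default(value):
    if isinstance(value, datetime):
        return value.isoformat()
    if isinstance(value, uuid.UUID):
        return str(value)
    raise TypeError(f"not JSON serializable: {type(value).__name__}")

payload = {
    "id": uuid.UUID("12345678-1234-5678-1234-567812345678"),
    "created_at": datetime(2025, 8, 11, 9, 26, tzinfo=timezone.utc),
}
print(json.dumps(payload, default=tool_json_default))
```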

🛠️ Bug Fixes

  • Full-Text Search with Tencent Cloud VectorDB: Fixed an issue where metadata filters weren't being applied correctly in full-text search mode for Tencent Cloud VectorDB. Thanks, @dlmu-lq! #23564
  • Workflow Knowledge Retrieval Cache: Fixed a cache bug in workflow knowledge retrieval. Another one bites the dust, thanks to @yunqiqiliang! #23597
  • HTTP Request Component: Resolved a multipart/form-data boundary issue in the HTTP Request component. Thanks to @baonudesifeizhai for fixing this long-standing issue! #23008
  • Conversation Variable Sync: Fixed an issue where conversation variables weren't being synced for existing conversations. Thanks to @laipz8200 for hunting this down! #23649
  • Internationalization (i18n): Numerous i18n fixes and enhancements across the board. Shoutout to @lyzno1 and the i18n team for their dedication!
  • Edge Cases Handled: We squashed a number of edge-case bugs, thanks to the contributions of many in the community.

🛡️ Security

  • XSS Vulnerability: A big thank you to @lyzno1 for identifying and fixing an XSS vulnerability in the authentication check-code pages. #23295

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (run from the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.7.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more

v1.7.1

28 Jul 12:50
1.7.1
0d2d349

🎉 Dify v1.7.1 Release Notes 🎉

Hello, Dify enthusiasts! We're thrilled to announce version 1.7.1 of our platform, bringing a fresh batch of refinements and enhancements to your workflow. Here's a breakdown of what's changed:

🚀 New Features

  • Default Value for Select Inputs: Now you can set a default value for select input fields, providing a smoother user experience when working with forms. Thanks to @antonko. (#21192)

  • Selecting Variables in Conditional Filters: We've added the capability to select variables in conditional filtering within list operations. This feature, spearheaded by @leslie2046, will streamline data manipulation tasks. (#23029)

  • OpenAPI Schema Enhancement: Support for allOf in OpenAPI properties inside schema has been added, courtesy of @mike1936. It's a big win for API design consistency. (#22975)

  • K8s Pure Migration Option: We've introduced a pure migration option for the api component within Kubernetes deployments, making migrations simpler for large-scale systems. Thanks, @BorisPolonsky! (#22750)
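The `allOf` support mentioned above amounts to merging subschemas when resolving a property schema. A deliberately minimal sketch of that idea (real resolvers also handle nesting, conflicts, and `$ref`; this is not Dify's implementation):

```python
# Rough sketch of resolving `allOf` in an OpenAPI property schema: merge the
# subschemas' `properties` and union their `required` lists. Minimal on purpose.
def resolve_all_of(schema: dict) -> dict:
    if "allOf" not in schema:
        return schema
    merged = {"type": "object", "properties": {}, "required": []}
    for sub in schema["allOf"]:
        merged["properties"].update(sub.get("properties", {}))
        for name in sub.get("required", []):
            if name not in merged["required"]:
                merged["required"].append(name)
    return merged

schema = {
    "allOf": [
        {"properties": {"name": {"type": "string"}}, "required": ["name"]},
        {"properties": {"age": {"type": "integer"}}},
    ]
}
print(resolve_all_of(schema))
```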

⚙️ Bug Fixes

  • Langfuse Integration Path: Incorrect path handling with Langfuse integration has been corrected by @chenguowei. Now it behaves just right within your API calls. (#22766)

  • CELERY_BROKER Improvements: For those using RabbitMQ, the broker handling issue during batch document segment additions has been addressed by @zhaobingshuang. No more endless processing status! (#23038)

  • Metadata Batch Edit Cross-page Issue: Resolved a previous issue with cross-page document selection during metadata batch edits. Thanks to @liugddx for smoothing out the workflow. (#23000)

  • Windows PEM KeyPath Fix: Corrected path errors for private.pem key files on Windows systems, ensuring cross-platform reliability. Thanks to @silencesdg. (#22814)

🔄 Improvements

  • ToolTip Component Refinement: We've refined the interaction of ToolTip components within menus to enhance readability and usability. Kudos to @HyaCiovo for this optimization. (#23023)

  • PostgreSQL Healthcheck: Enhanced the healthcheck command to avoid fatal log errors in PostgreSQL. Thanks to @J2M3L2's talismanic touch. (#22749)

  • Time Formatting Internationalization: The time formatting feature has been refactored for better international support, thanks to @HyaCiovo. (#22870)

🪄 Miscellaneous

  • Revamped Tool List Page: @nite-knite made the tool list page slicker and more user-friendly—check it out! (#22879)

  • Duplicate TYPE_CHECKING Import: Removed those unnecessary imports for sleeker code. Thanks, @hyongtao-db. (#23013)

Pulling all these improvements together, this release takes a big step forward in polishing everyday experiences and paving the way for future development. Enjoy the upgrade, and as always, reach out with feedback and ideas for what you'd love to see next. Keep coding! 🚀


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (run from the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.7.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more

v1.7.0 - Tool OAuth & Plugin Auto-Upgrade Enhanced

23 Jul 10:52
1.7.0
7ec94eb

🌟 What’s New in v1.7.0? 🌟

Version 1.7.0 is packed with features that expand our app's flexibility and enhance performance. Here's what we're bringing to the table:

🏗️ Major Feature: OAuth Support in Tool Plugins

Tool plugins now support OAuth 2.0 authentication, allowing users to securely connect with third-party services without manually managing API keys. This includes refresh token support for maintaining long-term authentication sessions.
(#22550, thanks @Mairuis and @zxhlyh)

🏗️ Major Feature: Plugin Auto-Upgrade Strategy

Plugins can now be automatically updated with configurable upgrade policies and rollback mechanisms. The system monitors plugin repositories and performs seamless upgrades while ensuring compatibility with your Dify version.
(#19758, thanks @RockChinQ and @iamjoel)

⚡ Enhancements

  • Citations and Attributions: Agent Nodes now support features for citing and attributing sources, care of @chiehw. #18558
  • Plugin Deprecation Notice: Stay ahead of the curve with deprecation notices for obsolete plugins, introduced by @RockChinQ. #22685
  • API Key Authentication with Query Parameter: Now supports even more streamlined security methods with an API key in query parameters, courtesy of @ACAne0320. #21656
  • Audio Configuration UI: Customize your app's audio settings right from the interface, introduced by @marcelodiaz558. #21957
  • Variable Suggestions: Suggested questions can now utilize variables by @le0zh. This brings contextual awareness to another level. #17340
  • Drag-and-Drop for Workflows: Start node variables and code node variables are now drag-and-drop enabled, simplifying workflow creation as seen in @Minamiyama's contributions. #22150 #22127
  • Custom Max Active Requests per App: Manage traffic with custom settings for your app, brought by @qiaofenlin. #22073
  • Optional OpenTelemetry (OTel) Endpoint Configuration: Ensure the best observability practices with this addition from @hieheihei. #22492
  • RFC 5322 Email Validation: Ensure compliance and smarter email validation by @NeatGuyCoding. #22540
  • Dynamic Imports for Performance: Boost your app's performance with dynamic component imports, an intelligent improvement by @WTW0313. #22614
  • External Trace ID: Maintain traceability across systems with @qiaofenlin's external trace ID propagation. #22623
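The RFC 5322 email validation item above is a good example of a check that looks simple but is subtle in practice. This lightweight sketch uses the stdlib parser in the spirit of RFC 5322; it is illustrative only, not Dify's actual validator, and real RFC 5322 addresses can take far more exotic forms:

```python
# Illustrative only: a lightweight email check using the stdlib parser.
# Full RFC 5322 compliance is notoriously subtle; this is a sketch.
from email.utils import parseaddr

def looks_like_valid_email(address: str) -> bool:
    _, addr = parseaddr(address)
    if addr != address:          # parser had to repair or reinterpret the input
        return False
    local, sep, domain = addr.rpartition("@")
    return bool(sep) and bool(local) and "." in domain

assert looks_like_valid_email("user@example.com")
assert not looks_like_valid_email("not-an-email")
print(looks_like_valid_email("dev@dify.ai"))
```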

🐛 Bug Fixes

  • Omitting Optional Parameters: Clean out unnecessary None settings thanks to @ACAne0320. #22171
  • Docker Networking Fix: Fix Docker file URL networking issues for plugins, resolved by @krikera. #21382
  • Plugin Installation: A persistent install hitch was ironed out by @Garden12138. #22156
  • Model Selector and App Selector: Problems with selectors were eliminated by @hjlarry. #22291
  • Session Management: Improved open session management for faster, more reliable infrastructure, by @Colstuwjx. #22306
  • Metadata and File Processing: Smarter document filtering and error handling fixed by @helojo and others. #19305



Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (run from the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.7.0
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more

v1.6.0

10 Jul 09:57
1.6.0
390e4cc

🌟 What’s New in v1.6.0? 🌟

Welcome to version 1.6.0! 🎉🎉🎉 This release is packed with new features, crucial fixes, and various optimisations aimed at enhancing your experience. We've listened to your feedback and made significant improvements across the board.

🎯 Spotlight Feature: Introducing MCP Support! 🎯

We’re thrilled to introduce support for Anthropic’s Model Context Protocol (MCP), an open standard for connecting language models to external tools and data sources. MCP makes it easier than ever to integrate with cutting-edge services through a unified, reliable protocol.



🚀 New Features

  • MCP Support: We've integrated MCP support, opening doors for more seamless interactions. (See #20716, by Novice)

⚡ Enhancements

  • Drag-and-Drop for Topics: Now, you can easily reorder your topics list with a drag-and-drop sorting feature. (See #22066, by Minamiyama)
  • SSL Verify Toggle: You now have the ability to change SSL verification settings in the HTTP Node. (See #22052, by Davide Delbianco)
  • Batch Embedding Optimisation: Optimised batch embeddings and Qdrant write consistency. (See #21776, by luckylhb90)
  • Question Classifier Enhancements: Introduced instanceId to the class-item editor for sophisticated categorisation. (See #22002, by Minamiyama)
  • Redis Fallback Mechanism: Added a robust fallback mechanism for Redis to ensure data resilience. (See #21044, by NeatGuyCoding)
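The Redis fallback mechanism above follows a common cache-with-fallback pattern: try the primary store, and degrade to a local store when it is unreachable. A generic sketch with illustrative names (not Dify's implementation):

```python
# Generic sketch of a cache-with-fallback pattern: a primary store (e.g. a
# Redis client) backed by an in-process fallback. Names are illustrative.
class FallbackCache:
    def __init__(self, primary, fallback):
        self.primary = primary        # e.g. a redis client
        self.fallback = fallback      # e.g. a plain dict

    def get(self, key):
        try:
            return self.primary.get(key)
        except ConnectionError:
            return self.fallback.get(key)

    def set(self, key, value):
        try:
            self.primary.set(key, value)
        except ConnectionError:
            pass                      # degrade gracefully
        self.fallback[key] = value    # keep the local copy warm

class DownPrimary:
    """Stand-in for an unreachable primary store."""
    def get(self, key):
        raise ConnectionError("primary unreachable")
    def set(self, key, value):
        raise ConnectionError("primary unreachable")

cache = FallbackCache(DownPrimary(), {})
cache.set("greeting", "hi")
print(cache.get("greeting"))  # served from the fallback
```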

🐛 Fixes

  • JSON Output Issue: Resolved an issue with JSON output that was affecting data consistency. (See #22053, by baonudesifeizhai)
  • Variable Name Uniqueness: Ensured unique variable names in the list to avoid conflicts. (See #22038, by Minamiyama)
  • Overflow Hidden Fix in Drawer: Ensured that the copy button remains clickable by adding overflow hidden. (See #22103, by Heyang Wang)
  • Plugin Daemon Failures: Addressed issues preventing plugin daemons from starting. (See #21841, by Kalo Chin)

Version 1.6.0 brings major process optimisations and removes previous bottlenecks, while introducing the Model Context Protocol (MCP) standard to greatly enhance the consistency and compatibility of model inputs and outputs. This makes integration and extension smoother and more efficient than ever. Huge thanks to all our contributors! We welcome your ongoing feedback to help us keep improving the platform together.


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the services (run from the docker directory)

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.6.0
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more