
Releases: browser-use/web-ui

💥 Browser-Use + MCP: Unleashing Agent Power Beyond the Browser! 🚀

01 May 05:56
664f874

Hey everyone,

Get ready for a game-changing update! We're absolutely thrilled to announce the latest version of browser-use webui, packed with features that dramatically expand the capabilities of your AI agents. This release marks a significant milestone, especially with the deep integration of browser-use and MCP!

Here's what's new and exciting:

  1. Full Compatibility with browser-use 0.1.48: We've updated our integration to be fully compatible with browser-use version 0.1.48. This means you can leverage all the latest features, improvements, and stability enhancements from the core browser-use library right out of the box. Stay on the cutting edge of browser automation! ✨

  2. Browser-use Meets MCP Servers: A New Era of Agent Power! 🥁 With the introduction of MCP server support, we've unlocked a universe of possibilities for your agents!

    • What does this mean? browser-use can now seamlessly interact with external tools and services defined as MCP servers. Think of it as giving your browser agent access to a whole new set of limbs and senses outside of the web page!
    • Go Beyond Browsing: Your browser-use agent can now run desktop commands (like file operations or launching applications via tools like @wonderwhy-er/desktop-commander), interact with local services, connect to databases, run scripts, and so much more. The potential is truly vast!
    • Simple Configuration: Getting started with MCP servers is straightforward. Simply define your desired servers in your configuration file using a structure like the following Claude Desktop-style MCP JSON (see the Python sketch after this list for one way such a config can be consumed):
    {
      "mcpServers": {
        "desktop-commander": {
          "command": "npx",
          "args": [
            "-y",
            "@wonderwhy-er/desktop-commander"
          ]
        }
      }
    }
Demo video: 0501-mcp-test.mov
  3. Brand New Web UI for Agent Interaction:
    • Introducing our shiny new Web UI! 🌐 Interact with your browser-use agent in a conversational manner directly from your browser.
    • This UI allows you to provide human intervention when needed – whether it's solving a CAPTCHA, making a complex decision, or guiding the agent through an unexpected situation. It's seamless human-agent collaboration!
  4. Enhanced DeepResearch Agent (MCP Enabled!): We've also significantly upgraded our deepresearch agent! 🧠 This new version is more powerful and efficient at gathering and synthesizing information. And yes, it also fully supports the new MCP system, enabling it to use external tools for research tasks.
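
For the curious, here is a minimal, illustrative Python sketch of how a Claude Desktop-style mcpServers config like the one above could be read and each server launched as a subprocess. The function and file name are hypothetical and not the WebUI's actual internals; they only show the shape of the configuration.

import json
import subprocess

def launch_mcp_servers(config_path: str) -> dict:
    # Read a Claude Desktop-style MCP config like the JSON shown above.
    with open(config_path) as f:
        config = json.load(f)

    processes = {}
    for name, spec in config.get("mcpServers", {}).items():
        # Each MCP server is a long-running process,
        # e.g. "npx -y @wonderwhy-er/desktop-commander".
        cmd = [spec["command"], *spec.get("args", [])]
        processes[name] = subprocess.Popen(cmd)
    return processes

servers = launch_mcp_servers("mcp_config.json")  # "mcp_config.json" is a hypothetical file name
print("Started MCP servers:", list(servers))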

How to Update:

uv pip install -r requirements.txt
python webui.py

Thanks to all the contributors!

Thank you for your continued support! Happy building! 🎉

Security Update & UI Enhancements!

29 Mar 03:37
f4f36b4

Hello everyone,

We're happy to announce the new release of browser-use-webui! 🎉 This update brings support for new models, several improvements, and an important security fix.

  • Hotfix for several issues (e.g., opening multiple tabs).

Here's what's new:

  • WebUI Compatibility: Updated the WebUI to be compatible with the latest browser-use==0.1.40. Remember to run uv pip install -r requirements.txt.
  • 🐛 Bug Fixes: Squashed several bugs to improve stability and performance.
  • 🎨 UI Optimization: Refreshed the WebUI for a cleaner, more intuitive, and aesthetically pleasing user experience.
  • 🤖 New Model Support:
    • Gemini: Added support for gemini-2.5-pro-exp-03-25. Simply input the model name directly.
    • DeepSeek: Added support for DeepSeek-V3-0324. Select the deepseek-chat option and remember to uncheck use_vision for this model.
  • ⚙️ Improved Config Handling: Reworked the WebUI config saving and loading mechanism. It's now more robust and adaptive to user configurations.

🚨 Important Security Update: 🚨

  • We have fixed a critical security vulnerability related to loading WebUI configurations using pickle. Loading untrusted pickle files can potentially lead to arbitrary code execution.
  • To mitigate this risk, we have migrated to using json for saving and loading WebUI settings. This is a much safer standard.
  • We strongly urge all users to update to v1.7 or later immediately to protect themselves. Please avoid using older versions that load configurations via pickle. Your settings should automatically migrate where possible, but backing up your old config is always wise.
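
Under the hood, the change amounts to swapping pickle (de)serialization for plain JSON. Here is a minimal sketch of the safer pattern; the function and file names are illustrative, not the project's exact code.

import json
from pathlib import Path

def save_config(settings: dict, path: str = "webui_config.json") -> None:
    # Settings are written as plain JSON text.
    Path(path).write_text(json.dumps(settings, indent=2))

def load_config(path: str = "webui_config.json") -> dict:
    # Unlike pickle.load, json.loads only builds plain data structures,
    # so a crafted config file cannot trigger arbitrary code execution.
    return json.loads(Path(path).read_text())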

Thanks to @Wh1teZe; see #451.

DeepResearch Lands on Browser-Use Web UI, with Collaborative Agents! 🤖🤝📚

06 Feb 12:16
7de7d90

Thanks to @vvincent1234. Now you can seamlessly leverage DeepResearch's advanced capabilities in the WebUI.

Important Notes:

  • The DeepResearch feature is currently in alpha and under rapid development. Stay updated by watching this repository.
  • DeepResearch consumes a relatively large number of tokens. Please reduce Max Search Iteration and Max Query per Iteration according to your needs. These represent the maximum number of search iterations and the number of simultaneous queries per search iteration, respectively.
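  • For example, with Max Search Iteration set to 3 and Max Query per Iteration set to 2, a single research run issues at most 3 × 2 = 6 search queries.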

What's New?

2025/02/09

  1. Hotfixed several bugs
  2. Split extracted content and limited the maximum content length

2025/02/07

  1. Added a stop button, allowing you to stop your research at any time.
  2. Use your own browser. Note that using your own browser currently only supports a single search per iteration.
  3. We currently recommend the gemini-2.0-flash-thinking-exp-01-21 model, because excessively long extracted content can sometimes cause API call errors.

Key benefits of this integration include:

  • DeepResearch within Your Browser: Access all DeepResearch features directly in your own browser – no more need for external search APIs! 🌐
  • Collaborative Agents: Harness the power of multiple AI agents working in concert. 🤖🤝
  • Indexed Information Sources: Easily save and access all referenced articles for future reference, promoting transparency and ensuring the reliability of your research. 📚

How to Get Started:

  1. Update Your Code: Pull the latest version to experience the new features. ⬆️
  2. Choose a Powerful LLM: To fully utilize DeepResearch, select a reasoning-capable LLM such as gemini-2.0-flash-thinking-exp-01-21, deepseek-r1, or o3-mini. 🧠
  3. Enter Your Research Topic: Navigate to the DeepResearch section within the Browser-Use Web UI and input your research theme. 📝
  4. Configure Parameters: Adjust the max_search_iteration_input and max_query_per_iter_input according to the complexity of your research. ⚙️
  5. Run Deep Research: Click the "run_deep_research" button and wait for your professional research report to be generated. ⏳

Demo:
https://www.youtube.com/watch?v=sguzGWuiRT8

🚀 Local DeepSeek-r1 Power with Ollama!

28 Jan 12:52
0c9cb9b

Hey everyone,

We've just rolled out a new release packed with awesome updates:

  1. Browser-Use Upgrade: We're now fully compatible with the latest browser-use version 0.1.29! 🎉
  2. Local Ollama Integration: Get ready for completely local and private AI with support for the incredible deepseek-r1 model via Ollama! 🏠

Before You Dive In:

  • Update Code: Don't forget to git pull to grab the latest code changes.
  • Reinstall Dependencies: Run pip install -r requirements.txt to ensure all your dependencies are up to date.

Important Notes on deepseek-r1:

  • Model Size Matters: We've found that deepseek-r1:14b and larger models work exceptionally well! Smaller models may not provide the best experience, so we recommend sticking with the larger options. 🤔

How to Get Started with Ollama and deepseek-r1:

  1. Install Ollama: Head over to the Ollama website and download/install Ollama on your system. 💻
  2. Run deepseek-r1: Open your terminal and run the command: ollama run deepseek-r1:14b (or a larger model if you prefer).
  3. WebUI Setup: Launch the WebUI following the instructions. Here's a crucial step: Uncheck "Use Vision" and set "Max Actions per Step" to 1. ✅
  4. Enjoy! You're now all set to experience the power of local deepseek-r1. Have fun! 🥳
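
If you want to script the same setup outside the WebUI, here is a minimal sketch pairing a local Ollama model with a browser-use Agent. It assumes the langchain-ollama package and browser-use's Agent(task=..., llm=...) interface; parameter names may differ across versions, so treat this as illustrative rather than the WebUI's own code.

import asyncio

from browser_use import Agent
from langchain_ollama import ChatOllama

async def main():
    # Requires Ollama running locally with the deepseek-r1:14b model pulled.
    llm = ChatOllama(model="deepseek-r1:14b")
    agent = Agent(
        task="Summarize the top story on news.ycombinator.com",
        llm=llm,
        use_vision=False,        # mirrors the WebUI advice: uncheck "Use Vision"
        max_actions_per_step=1,  # and set "Max Actions per Step" to 1
    )
    await agent.run()

asyncio.run(main())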

Happy Chinese New Year! 🏮

✨ DeepSeek-r1 + Browser-use = New Magic ✨

25 Jan 16:13
5bc4978

🚀 Exciting news! Your browser-use can now engage in deep thinking!

Notes:

  1. The current version is a preview of DeepSeek-r1 support and is under active development; please keep your code up to date.
  2. The current version only supports the official DeepSeek-r1 API.

How to Use:

  1. 🔑 Configure API Key: Make sure you have set the correct DEEPSEEK_API_KEY in your .env file.

  2. 🌐 Launch WebUI: Launch the WebUI as instructed in the README.

  3. 👀 Disable Vision: In Agent Settings, uncheck "Use_Vision".

  4. 🤖 Select Model: In LLM Provider, select "deepseek", and in Model Name, select "deepseek-reasoner".

  5. 🎉 Enjoy!
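
For reference, here is a minimal sketch of what step 1 wires up: the DEEPSEEK_API_KEY from .env is used to build a chat model against DeepSeek's OpenAI-compatible endpoint. Package and parameter names are assumptions for illustration, not the WebUI's exact code.

import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads DEEPSEEK_API_KEY from the .env file

llm = ChatOpenAI(
    model="deepseek-reasoner",            # the model selected in step 4
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],
)
print(llm.invoke("Hello from deepseek-reasoner").content)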

Hotfix some errors

16 Jan 01:52
2654e6b
  1. Upgraded to browser-use==0.1.19 to resolve a font-related OS error on Windows.
  2. Fixed a result-parsing error in the streaming feature (Headless=True); added support for returning the agent history file.
  3. Fixed the status of the Stop button in the streaming feature.

Please pull the latest code and run pip install -r requirements.txt

New WebUI: Enhanced Features and Compatibility

13 Jan 15:28
be89b90
  1. A brand-new WebUI interface with added features like video display.
  2. Adapted for the latest version of browser-use, with native support for models like Ollama, Gemini, and DeepSeek. Please update your code and run pip install -r requirements.txt.
  3. Ability to stop agent tasks at any time.
  4. Real-time page display in the WebUI when headless=True.
  5. Improved custom browser usage, fixing a bug with using your own browser on macOS.
  6. Support for Docker environment installation.

Original version

06 Jan 14:32
e481813
  1. A Brand New WebUI: We offer a comprehensive web interface that supports a wide range of browser-use functionalities. This UI is designed to be user-friendly and enables easy interaction with the browser agent.

  2. Expanded LLM Support: We've integrated support for various Large Language Models (LLMs), including Gemini, OpenAI, Azure OpenAI, Anthropic, DeepSeek, Ollama, and more. We plan to add support for even more models in the future.

  3. Custom Browser Support: You can use your own browser with our tool, eliminating the need to re-login to sites or deal with other authentication challenges. This feature also supports high-definition screen recording.

  4. Customized Agent: We've implemented a custom agent that enhances browser-use with optimized prompts.