Biomni is a general-purpose biomedical AI agent designed to autonomously execute a wide range of research tasks across diverse biomedical subfields. By integrating cutting-edge large language model (LLM) reasoning with retrieval-augmented planning and code-based execution, Biomni helps scientists dramatically enhance research productivity and generate testable hypotheses.
Biomni-E1: A Unified Environment for Biomedical Agents
Biomni employs an action discovery agent to systematically mine essential tools, databases, and protocols from tens of thousands of publications across 25 biomedical domains. These resources are then expertly curated to create the first unified agentic environment (Biomni-E1). This comprehensive mapping of the biomedical action space gives AI agents access to a wide range of specialized tools and knowledge, unlocking novel capabilities across the subfields of biomedicine.
Biomni-A1: A General-Purpose Agent Architecture
Built on this foundation, Biomni features a generalist agentic architecture (Biomni-A1) that integrates LLM reasoning with retrieval-augmented planning and code-based execution, enabling complex biomedical workflows. Unlike traditional solutions that rely on pre-defined templates, Biomni can dynamically compose and execute research tasks across a wide range of biomedical subfields, such as gene prioritization, drug repurposing, rare disease diagnosis, microbiome analysis, and molecular cloning, all without task-specific tuning.
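The architecture described above can be sketched as a retrieve-plan-execute loop. This is an illustrative toy, not Biomni's implementation: the planner is stubbed out (a real agent would ask an LLM to compose the retrieved tools into a workflow), and the registry format and function names are assumptions.

```python
def retrieve_tools(task: str, registry: dict) -> dict:
    """Select registered tools whose description shares a word with the task."""
    words = set(task.lower().split())
    return {name: fn for name, (desc, fn) in registry.items()
            if words & set(desc.lower().split())}

def plan_task(task: str, tools: dict) -> list:
    """Stub planner: a real agent would prompt an LLM with the task and the
    retrieved tool descriptions to compose a multi-step workflow."""
    return list(tools)  # trivially: invoke each retrieved tool once

def run(task: str, registry: dict) -> dict:
    """Retrieve relevant tools, plan a sequence of steps, execute each step."""
    tools = retrieve_tools(task, registry)
    results = {}
    for step in plan_task(task, tools):
        _, fn = registry[step]
        results[step] = fn(task)   # code-based execution of the step
    return results

# Toy registry: tool name -> (description, callable). Entries are hypothetical.
registry = {
    "prioritize_genes": ("rank candidate genes for a phenotype",
                         lambda task: ["BRCA1", "TP53"]),
    "clone_plasmid": ("design a molecular cloning protocol",
                      lambda task: "protocol"),
}

out = run("rank genes for this phenotype", registry)
print(out)  # {'prioritize_genes': ['BRCA1', 'TP53']}
```

Because planning happens at run time over whatever tools retrieval surfaces, the same loop serves gene prioritization, drug repurposing, or cloning tasks without task-specific templates.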
Note: We are working hard to prepare for the full open source release. Stay tuned! In the meantime, use biomni.stanford.edu to try it out!
If you want to use the Biomni UI, please sign up at biomni.stanford.edu. We will gradually roll out access!
If you are a biomedical scientist, we would love to learn from your experience and any feedback you have.
If you want to add specialized tools, data, or workflows, or report tasks that do not work, we would love to talk with you! Create an issue or join our Slack to talk to us directly!
Reach out to [email protected], via Slack, or by submitting an issue!