Pocket Flow is a 100-line minimalist LLM framework
- Expressive: Everything you love from larger frameworks: (Multi-)Agents, Workflow, RAG, and more.
- Lightweight: Just the core graph abstraction in 100 lines. Zero bloat, zero dependencies, zero vendor lock-in.
- Principled: Built with modularity and clear separation of concerns at its heart for maintainable code.
- AI-Friendly: Intuitive enough for AI agents (e.g., Cursor AI) to assist humans in Vibe Coding.
To install, run `pip install pocketflow`, or just copy the source code (only 100 lines).
To learn more, check out the documentation. For an in-depth design dive, read the essay.
🎉 We now have a Discord!
✨ Below are examples of LLM Apps:
| Formal App Name | Informal One-Liner | Difficulty | Learning Objectives |
|---|---|---|---|
| Ask AI Paul Graham | Ask AI Paul Graham, in case you don't get in | ★★☆ Medium | RAG, Map Reduce, Text-to-Speech |
| Youtube Summarizer | Explain YouTube Videos to you like you're 5 | ★☆☆ Beginner | Map Reduce |
| Cold Opener Generator | Instant icebreakers that turn cold leads hot | ★☆☆ Beginner | Map Reduce, Web Search |
Want to learn how I vibe code these LLM Apps? Check out my YouTube!
Want to create your own Python project? Start with this template!
🚀 Vibe Coding – the fastest paradigm for building LLM systems!
- 😎 Humans craft the high-level requirements and system design.
- 🤖 AI agents (e.g., Cursor AI) handle the low-level implementation.
Compared to other frameworks, Pocket Flow is purpose-built for LLM Agents:
- 🫠 LangChain-like frameworks overwhelm Cursor AI with complex abstractions, deprecated functions, and irritating dependency issues.
- 😐 Without a framework, code is ad hoc: suitable only for immediate tasks, neither modular nor maintainable.
- 🥰 With Pocket Flow: (1) Minimal and expressive, so it's easy for Cursor AI to pick up. (2) Nodes and Flows keep everything modular. (3) A Shared Store decouples your data structure from compute logic.
In short, the 100 lines ensure that LLM Agents follow solid coding practices without sacrificing simplicity or flexibility.
The 100 lines capture what we believe to be the core abstraction of LLM frameworks:
- Computation: A graph that breaks down tasks into nodes, with branching, looping, and nesting.
- Communication: A shared store that all nodes can read and write to.
From there, it’s easy to implement popular design patterns like (Multi-)Agents, Workflow, RAG, etc.
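The two abstractions above can be sketched in plain Python. This is a simplified illustration of the node-graph-plus-shared-store idea, not Pocket Flow's actual source; the class and method names here are hypothetical:

```python
# Sketch of the core abstraction: nodes form a graph (Computation),
# and all nodes read/write one shared dict (Communication).
# Illustrative only -- not Pocket Flow's actual 100 lines.

class Node:
    """One unit of computation; communicates only via the shared store."""
    def __init__(self):
        self.successors = {}  # action name -> next node (enables branching)

    def next(self, node, action="default"):
        self.successors[action] = node
        return node

    def run(self, shared):
        """Override: do work on `shared`, return an action name."""
        return "default"

class Flow:
    """Walks the graph from a start node, following returned actions."""
    def __init__(self, start):
        self.start = start

    def run(self, shared):
        node = self.start
        while node:
            action = node.run(shared)
            node = node.successors.get(action or "default")

# Example: two nodes that communicate only through the shared store.
class LoadText(Node):
    def run(self, shared):
        shared["text"] = "pocket flow"
        return "default"

class Uppercase(Node):
    def run(self, shared):
        shared["text"] = shared["text"].upper()
        return "default"

load, upper = LoadText(), Uppercase()
load.next(upper)  # chain nodes; a different action name would branch
shared = {}
Flow(start=load).run(shared)
print(shared["text"])  # POCKET FLOW
```

Because nodes only touch the shared store, swapping your data layout never requires rewiring the graph, and vice versa.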
- For quick questions: Use the GPT assistant (note: it uses older models not ideal for coding).
- For one-time LLM tasks: Create a ChatGPT or Claude project and upload the docs to project knowledge.
- For LLM App development: Use Cursor AI.
  - If you want to start a new project, check out the project template.
  - If you already have a project, copy `.cursorrules` to your project root as Cursor Rules.