Description
Currently, the open source chat ecosystem (at least in React) is plagued with bad performance, and I think the big reason is that nearly everyone follows this fairly weak memoization guide: https://ai-sdk.dev/cookbook/next/markdown-chatbot-with-memoization#markdown-chatbot-with-memoization
It's surprising just how widely that guide has been copied. It's everywhere: in every new chat app, in vibe-coding products, and so on.
On the other hand, the approach taken by ai-streaming-parser is much better for performance, but it's difficult to port directly to the React programming model. So, if it's something you folks would be able to do, a solid demo chat app in React with excellent performance, even while streaming markdown from fast models like Gemini 2.5 Flash, would be a life saver for the whole ecosystem.
The three big issues I see that are super common:
- Long paragraphs, hundreds of words long, that react-markdown doesn't split into separate block types still cause the same performance problems as if nothing were memoized at all, because the memoization only applies at the block level.
- The same performance issues show up with long code blocks and Shiki syntax highlighting.
- One usability issue: depending on the library, the code block often gets replaced wholesale on each re-render, so the streaming code isn't text-selectable until the code block has completely finished.