feat: AI code generation #123
base: master
Conversation
Any reason why we can't just integrate this into the main backend server in Rust? It will be more devops overhead to manage two backends for the playground. Also, are there any automated tests to verify that this is working correctly?
Would you mind posting some screenshots or a video of the new functionality? Also interested in your response to Brandon's questions above.
Agree with @Voxelot here; we don't want to introduce/manage another backend (ts/express). I also propose we run the fuel-mcp-server as a standalone SSE server; this way any LLM can integrate with this MCP server that provides Fuel documentation. As for the current backend, I propose the following alternatives:
Thanks for the review, team! Agreed that the current ts-backend might introduce quite a bit of devops hassle. The reason behind choosing it was that the initial plan was to use LangChain for LLM integration; we then moved away from that and used the Gemini SDK for simplicity. In both cases the SDKs were not available for Rust. There's an unofficial Rust SDK, but it's quite old and seems unmaintained, as the 2.5 models are not supported.
Yes, that would be the way forward - I was chatting about the same idea with Nick as well. If we do that, it simplifies the integration across multiple services. I'll look into porting the fuel docs MCP server to use HTTP transport.
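For illustration only, a standalone SSE endpoint in Rust could look roughly like the sketch below (assuming axum, tokio, and futures as dependencies; the route name and event payload are placeholders, not the actual fuel-mcp-server code):

```rust
use std::convert::Infallible;

use axum::response::sse::{Event, KeepAlive, Sse};
use axum::{routing::get, Router};
use futures::stream::{self, Stream};

// Hypothetical SSE route; a real MCP server would stream protocol
// messages here instead of a static greeting.
async fn sse_handler() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    let stream = stream::iter(vec![Ok(Event::default()
        .event("message")
        .data("hello from fuel-mcp-server"))]);
    Sse::new(stream).keep_alive(KeepAlive::default())
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/sse", get(sse_handler));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```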
Nick suggested that allocating a few bucks is not an issue, so BYOK (bring your own key) can be an additional feature which we can ship iteratively when we integrate other models like GPT and Claude. But we should implement IP-based rate limiting as suggested.
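As a rough illustration of the kind of IP-based rate limiting discussed here, a hand-rolled in-memory sketch follows (a production setup would more likely use off-the-shelf middleware; the window and limit numbers are placeholders):

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

/// Fixed-window, in-memory rate limiter keyed by client IP.
/// Placeholder policy: at most 10 AI requests per IP per minute.
struct IpRateLimiter {
    window: Duration,
    max_requests: u32,
    hits: HashMap<IpAddr, (Instant, u32)>,
}

impl IpRateLimiter {
    fn new(window: Duration, max_requests: u32) -> Self {
        Self { window, max_requests, hits: HashMap::new() }
    }

    /// Returns true if the request is allowed, false if the IP is over its quota.
    fn check(&mut self, ip: IpAddr) -> bool {
        let now = Instant::now();
        let entry = self.hits.entry(ip).or_insert((now, 0));
        if now.duration_since(entry.0) > self.window {
            *entry = (now, 0); // window expired, start a new one
        }
        entry.1 += 1;
        entry.1 <= self.max_requests
    }
}

fn main() {
    let mut limiter = IpRateLimiter::new(Duration::from_secs(60), 10);
    let ip: IpAddr = "127.0.0.1".parse().unwrap();
    for i in 1..=12 {
        println!("request {i}: allowed = {}", limiter.check(ip));
    }
}
```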
Adding automated tests for LLM calls can be quite tricky as the outputs are not deterministic. There are some tools for evaluating agent outputs, like promptfoo or LangSmith, but I'm not sure if they're compatible with Rust - that needs some research. We could consider adding some E2E compile tests: "Within 5 AI-fix iterations, does the code compile?"
@JoshuaBatty here are the demo videos:
Contract generation ✨: codegen-sway-playground.mov
Fix With AI 🪄: fix-with-ai-sway-playground.mov
So here are the next steps I propose:
Let me know what you guys think. Converted this PR to a draft for the time being.
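To make the "Within 5 AI-fix iterations, does the code compile?" idea concrete, here is a rough sketch of such a test loop (the ai_fix helper is hypothetical; it assumes forc is on PATH and that the generated contract lives in project_dir):

```rust
use std::path::Path;
use std::process::Command;

/// Hypothetical helper: sends the current code plus compiler errors to the
/// LLM backend and writes the "fixed" code back into the project. Its real
/// signature would depend on the playground backend API.
fn ai_fix(project_dir: &Path, compiler_output: &str) {
    let _ = (project_dir, compiler_output); // placeholder
}

/// Returns true if `forc build` succeeds within `max_iterations` AI-fix rounds.
fn compiles_within_fix_iterations(project_dir: &Path, max_iterations: u32) -> bool {
    for _ in 0..max_iterations {
        let output = Command::new("forc")
            .arg("build")
            .current_dir(project_dir)
            .output()
            .expect("failed to run forc");
        if output.status.success() {
            return true;
        }
        let errors = String::from_utf8_lossy(&output.stderr);
        ai_fix(project_dir, &errors);
    }
    false
}

fn main() {
    let ok = compiles_within_fix_iterations(Path::new("./generated-contract"), 5);
    println!("compiled within 5 AI-fix iterations: {ok}");
}
```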
Nice; in this case I'd recommend porting the AI-based logic to the frontend only.
For automated tests I don't think we can reliably depend on the output of an LLM; it's more just to make sure all the SDKs and deps are working correctly together and that the API calls generally "work" in the most basic, minimal sense. i.e. if something changes in the Gemini API or fuel MCP, it would be good to find out about it breaking the playground before users tell us :)
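A minimal smoke test along these lines might look roughly like the following (the endpoint path and request/response shapes are assumptions, not the actual playground backend API; assumes reqwest with the blocking and json features plus serde_json):

```rust
// Smoke test: the AI endpoint responds and returns *something*, without
// asserting anything about the quality of the generated code.
#[test]
fn ai_codegen_endpoint_responds() {
    let client = reqwest::blocking::Client::new();
    let resp = client
        .post("http://localhost:8080/ai/generate") // assumed endpoint
        .json(&serde_json::json!({ "prompt": "a counter contract" }))
        .send()
        .expect("backend not reachable");

    assert!(resp.status().is_success());
    let body = resp.text().expect("failed to read body");
    assert!(!body.is_empty(), "expected a non-empty response from the LLM backend");
}
```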
@@ -93,6 +69,7 @@ function App() {
saveSwayCode(code);
setSwayCode(code);
setIsCompiled(false);
setCodeToCompile(undefined); // Clear previous compilation state
nit: maybe just set this to an empty string, so it always stays a string type?
})
}

fn get_code_generation_prompt(&self) -> String {
nit: since they don't have any string interpolation, save these as markdown files and include them in the compilation step using a macro (I think it's include_str! or something).
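For reference, the macro is include_str!; a minimal sketch of the suggestion (the prompts/code_generation.md path is a hypothetical location):

```rust
// Prompt stored as a markdown file and embedded at compile time.
const CODE_GENERATION_PROMPT: &str = include_str!("prompts/code_generation.md");

fn get_code_generation_prompt() -> String {
    CODE_GENERATION_PROMPT.to_string()
}
```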
Thanks for this; no major issues I can identify.
Would've preferred that the AI logic remains only on the frontend (didn't want to introduce any AI functionality to the backend) - but since the work is done here and it's looking good, I can't complain.
Unfortunately I can't seem to test this. I used the Vercel deployment and also tried running locally.
Seems like it points to an endpoint which does not exist (yet); maybe it should point to a local endpoint when running locally?
Thanks for the review @zees-dev!
Yeah, the backend hasn't been published yet as it requires some config (Gemini key and MCP server URL) - for now you can run it locally. E2E tests are there to verify the AI features too. Here are the steps:
Interesting; if we need to have the MCP server from https://github.com/FuelLabs/fuel-mcp-server running (for optimal results), I'm wondering why not just have these AI endpoints on that server too - this would make it all encapsulated in one place. It would also imply that if any other frontend projects need similar AI capabilities (forc.pub comes to mind), they won't need to point to 2 different endpoints (1 for LLM inference, the other for MCP); they would only point to the fuel-mcp-server.
Thanks for the suggestion, @zees-dev. I've intentionally kept the Gemini integration co-located with the playground service so that the LLM logic lives as close to the UI as possible. This minimizes latency and keeps local development straightforward. In the future, if we add BYOK support, routing LLM calls through our remote MCP server would be a serious security concern, since users would need to send their keys to our servers. And if the MCP server starts handling LLM calls, it ceases to be a pure "MCP" server and effectively becomes a full AI backend API, which stretches its original scope. I feel that keeping the LLM integration separate preserves a clear separation of concerns and gives us more flexibility for other front-end projects.
Description:
This PR introduces an AI Assist feature that allows users to easily generate Sway contracts using AI. There's also an option to fix compilation issues using AI. We use the Gemini API + Fuel Docs MCP server in the backend to fulfill user requests.
Here's a summary of changes:
Frontend:
Backend: