
feat: AI code generation #123

Open · wants to merge 22 commits into master

Conversation

@PraneshASP commented Jul 3, 2025

Description:

This PR introduces an AI Assist feature that allows users to easily generate Sway contracts using AI. There's also an option to fix compilation issues with AI. We use the Gemini API plus the Fuel Docs MCP server in the backend to fulfill user requests.

Here's a summary of changes:

Frontend:

  • AI Code Generation Dialog: Added an interactive modal for generating Sway smart contracts from natural-language prompts with real-time streaming
  • Error Analysis & Auto-Fix: Integrated a "Fix with AI" button that analyzes compilation errors and suggests corrected code with one click
  • Markdown Rendering: Implemented syntax-highlighted display for AI explanations and suggestions, with copy-to-clipboard functionality

Backend:

  • Gemini API Integration: Integrated Google's Gemini 2.5 Flash model for Sway code generation and error analysis
  • Rate Limiting: Basic IP-based rate limiting, defaulting to 20 requests per day (a rough sketch follows this list)
  • MCP Server: Added Fuel Docs MCP server support to provide relevant Fuel/Sway documentation context for more accurate results
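
For illustration, a minimal in-memory version of that per-IP daily limit could look like the sketch below. This is an assumption-laden outline, not the middleware actually used in this PR:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

/// Illustrative in-memory limiter: at most `max_per_day` requests per IP,
/// with the counter reset 24 hours after the first request in a window.
struct RateLimiter {
    max_per_day: u32,
    windows: HashMap<IpAddr, (Instant, u32)>, // (window start, request count)
}

impl RateLimiter {
    fn new(max_per_day: u32) -> Self {
        Self { max_per_day, windows: HashMap::new() }
    }

    fn allow(&mut self, ip: IpAddr) -> bool {
        let now = Instant::now();
        let entry = self.windows.entry(ip).or_insert((now, 0));
        // Start a fresh 24-hour window once the previous one has elapsed.
        if now.duration_since(entry.0) >= Duration::from_secs(24 * 60 * 60) {
            *entry = (now, 0);
        }
        if entry.1 < self.max_per_day {
            entry.1 += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = RateLimiter::new(20);
    let ip: IpAddr = "127.0.0.1".parse().unwrap();
    assert!(limiter.allow(ip)); // first request of the day goes through
}
```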

@Voxelot (Member) commented Jul 3, 2025

Any reason why we can't just integrate this into the main backend server in Rust? It will be more devops overhead to manage two backends for the playground.

Also are there any automated tests to verify that this is working correctly?

@JoshuaBatty (Member) commented:
Would you mind posting some screenshots or a video of the new functionality?

Also interested in your response to Brandon's questions above.

@zees-dev commented Jul 4, 2025

Agree with @Voxelot here; we don't want to introduce/manage another backend (ts/express).

It seems like the ai-backend is an agentic wrapper around the Gemini SDK, which calls out to the fuel-mcp-server stdio client (which must be baked into the container).

I also propose we run the fuel-mcp-server as a standalone SSE server; this way, any LLM integration can use this MCP server to pull Fuel documentation.
For security, we could allow whitelisted API key access to this (if needed) and/or implement basic rate-limiting based on IP.

As for the current backend, I propose the following alternatives:

  • Move the AI logic to the existing Rust backend; use a Rust SDK (there may not be an official one), or simply use the reqwest library to make API requests to the Gemini endpoint (a rough reqwest sketch follows this list). From my understanding, LLM SDKs are generally wrappers around REST calls to an LLM server.

  • If you don't want to do the above (migrate to Rust), the LLM/agentic functionality can also simply run on the frontend itself.
    The browser would be responsible for interfacing with the LLM endpoint(s) and calling the fuel-mcp-server. The only potential issue here is the Gemini API key being exposed.
    I'm not really sure this is an issue either, since API keys for Gemini are inexpensive, and even the current solution does not account for API abuse. If it does become an issue, we would simply require users to pass in their own Gemini API key.
    This approach could be the simplest (once the fuel-mcp-server is set up to run in SSE mode), as you could migrate all the nodejs/bun agent code to the frontend as-is. It would also reduce load on the Rust backend, since for LLM usage the backend is just a proxy to the Gemini API.
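
For illustration, the raw-reqwest route mentioned in the first alternative could look roughly like this. The endpoint path and body shape follow Google's published Gemini REST API, but treat this as a sketch rather than the PR's code; reqwest (with the blocking and json features) and serde_json are assumed as dependencies:

```rust
use serde_json::json;

/// Send a single prompt to the Gemini 2.5 Flash REST endpoint and return the raw JSON.
/// Verify the URL and payload against Google's current docs before relying on this.
fn generate(prompt: &str, api_key: &str) -> Result<serde_json::Value, reqwest::Error> {
    let url = format!(
        "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key={api_key}"
    );
    let body = json!({
        "contents": [{ "parts": [{ "text": prompt }] }]
    });
    let client = reqwest::blocking::Client::new();
    client.post(url).json(&body).send()?.json()
}
```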

@PraneshASP marked this pull request as draft July 5, 2025 06:59
@PraneshASP (Author) commented:
Thanks for the review, team!

Agreed that the current ts-backend might introduce quite a bit of devops hassle. The reason behind choosing it was that the initial plan was to use LangChain for LLM integration; we then moved away from that and used the Gemini SDK for simplicity. In both cases the SDKs were not available for Rust. There's an unofficial Rust SDK, but it's quite old and seems unmaintained, as the 2.5 models are not supported.

@zees-dev:

I also propose we run the fuel-mcp-server as a standalone SSE server; this way any LLM can integrate with this MCP that would provide fuel documentation.

Yes, that would be the way forward - I was chatting about the same idea with Nick as well. If we do that, it simplifies integration across multiple services. Will look into porting the Fuel Docs MCP server to use HTTP transport.

I'm not really sure this is an issue either, since API keys for Gemini are inexpensive, and even the current solution does not account for API abuse. If it does become an issue, we would simply require users to pass in their own Gemini API key.

Nick suggested that allocating a few bucks is not an issue, so BYOK (bring your own key) can be an additional feature which we can ship iteratively when we integrate other models like GPT and Claude. But we should implement IP-based rate limiting as suggested.

@Voxelot

Also are there any automated tests to verify that this is working correctly?

Adding automated tests for LLM calls can be quite tricky, as the outputs are not deterministic. There are some tools available for evaluating agent outputs, like promptfoo or LangSmith, but I'm not sure if they're compatible with Rust - that needs some research. We can consider adding some E2E compile tests: "Within 5 AI-fix iterations, does the code compile?"
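
For illustration, that compile-convergence check could be shaped roughly like this; every helper below is a hypothetical placeholder, not code from this PR:

```rust
// Placeholder stubs (not from this PR): in a real test these would call the AI
// backend and `forc build` respectively.
fn generate_contract(_prompt: &str) -> String { todo!("call the AI generation endpoint") }
fn compile_sway(_code: &str) -> Result<(), String> { todo!("invoke forc build and collect errors") }
fn fix_with_ai(_code: &str, _errors: &str) -> String { todo!("call the AI fix endpoint") }

/// "Within N AI-fix iterations, does the code compile?" expressed as a loop.
fn ai_fix_converges(prompt: &str, max_iterations: usize) -> bool {
    let mut code = generate_contract(prompt);
    for _ in 0..max_iterations {
        match compile_sway(&code) {
            Ok(()) => return true,
            Err(errors) => code = fix_with_ai(&code, &errors),
        }
    }
    false
}

#[test]
fn generated_counter_contract_compiles_within_five_fixes() {
    assert!(ai_fix_converges("a simple counter contract", 5));
}
```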

@JoshuaBatty here are the demo videos:

Contract generation ✨

codegen-sway-playground.mov

Fix With AI 🪄

fix-with-ai-sway-playground.mov

So here are the next steps I propose:

  • Modify the Fuel MCP Server to use StreamableHttp transport instead of stdio, and host it.
  • Port the AI-based logic to Rust and use the hosted MCP server to make tool calls.
  • Implement some automated tests if feasible.

Let me know what you guys think. Converted this PR to a draft for the time being.

@zees-dev commented Jul 5, 2025

So here are the next steps I propose:

  • Modify the Fuel MCP Server to use StreamableHttp transport instead of stdio, and host it.
  • Port the AI-based logic to Rust and use the hosted MCP server to make tool calls.
  • Implement some automated tests if feasible.

Nice; in this case I'd recommend porting the AI-based logic to the frontend only.
This should be quicker, as we can probably migrate 90% of the code here.
Also, no changes to the backend would be required.

@Voxelot (Member) commented Jul 7, 2025

For automated tests, I don't think we can reliably depend on the output of an LLM; it's more just to make sure all the SDKs and deps are working correctly together and that the API calls generally "work" in the most basic, minimal sense.

i.e., if something changes in the Gemini API or Fuel MCP, it would be good to find out about it breaking the playground before users tell us :)
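
To make that concrete, a smoke test along these lines could live in the backend's test suite; the PLAYGROUND_AI_URL variable and /generate path are hypothetical placeholders, and reqwest plus serde_json are assumed as dev dependencies:

```rust
// Smoke test in the spirit described above: only confirm the pieces talk to each other.
#[test]
fn ai_generate_endpoint_responds() {
    let base = std::env::var("PLAYGROUND_AI_URL").expect("set PLAYGROUND_AI_URL for this test");
    let response = reqwest::blocking::Client::new()
        .post(format!("{base}/generate"))
        .json(&serde_json::json!({ "prompt": "a minimal counter contract" }))
        .send()
        .expect("request failed");
    // Not asserting anything about the generated code itself, only that the
    // Gemini + MCP plumbing returned a successful, non-empty response.
    assert!(response.status().is_success());
    assert!(!response.text().unwrap().is_empty());
}
```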

@PraneshASP marked this pull request as ready for review July 16, 2025 10:38
@@ -93,6 +69,7 @@ function App() {
saveSwayCode(code);
setSwayCode(code);
setIsCompiled(false);
setCodeToCompile(undefined); // Clear previous compilation state

nit: maybe just set this to an empty string, so it always stays a string type?

})
}

fn get_code_generation_prompt(&self) -> String {

nit: since they don't have any string interpolation, save these as markdown files and include them in the compilation step using a macro (I think it's include_str! or something).
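
For reference, the suggestion above would look roughly like this (the file path is illustrative):

```rust
// Sketch of the suggestion: keep the prompt text in a markdown file and embed it at compile time.
const CODE_GENERATION_PROMPT: &str = include_str!("prompts/code_generation.md");

fn get_code_generation_prompt() -> String {
    CODE_GENERATION_PROMPT.to_string()
}
```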

@zees-dev left a comment

Thanks for this; no major issues I can identify.

Would've preferred that the AI logic remained on the frontend only (didn't want to introduce any AI functionality to the backend) - but since the work is done here and it's looking good, I can't complain.

Unfortunately I can't seem to test this. Used the Vercel deployment and also tried running locally.
(screenshot)

Seems like it points to an endpoint which does not exist (yet); maybe it should point to a local endpoint when running locally?

@PraneshASP (Author) commented Jul 17, 2025

Thanks for the review @zees-dev!

Seems like it points to an endpoint which does not exist (yet); maybe it should point to local endpoint when running locally?

Yeah, the backend hasn't been published yet, as it requires some config (Gemini key and MCP server URL) - for now you can run it locally. E2E tests are also there to verify the AI features.

Here are the steps (a small config-loading sketch follows them):

  • Add GEMINI_API_KEY and MCP_SERVER_URL vars to the .env file.
  • To run the MCP server locally in HTTP mode, you can follow the steps here. Note that you need to create the index first before starting the server.
  • You can skip adding MCP_SERVER_URL and just try the features with Gemini alone - but the results will be sub-optimal.
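
For illustration, the backend could load those variables at startup along these lines, assuming the dotenvy crate; this is a sketch, not necessarily how the PR reads its config:

```rust
/// Load AI-related configuration from the environment (and a local .env file, if present).
fn load_config() -> (String, Option<String>) {
    dotenvy::dotenv().ok(); // read .env if present; ignore if missing
    let gemini_key = std::env::var("GEMINI_API_KEY").expect("GEMINI_API_KEY is required");
    // MCP_SERVER_URL is optional: without it, results will be sub-optimal.
    let mcp_url = std::env::var("MCP_SERVER_URL").ok();
    (gemini_key, mcp_url)
}
```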

@zees-dev commented Jul 17, 2025

  • To run the MCP server locally in HTTP mode, you can follow the steps here. Note that you need to create the index first before starting the server.

Interesting; if we need the MCP server from https://github.com/FuelLabs/fuel-mcp-server running anyway (for optimal results), I'm wondering why not just have these AI endpoints on that server too - this would make it all encapsulated in one place?

This would also imply that if any other frontend projects need similar AI capabilities (forc.pub comes to mind), they won't need to point to 2 different endpoints (one for LLM inference, the other for MCP); they would only point to the fuel-mcp-server domain for all public AI functionality.
^ which may also be ideal for request latency (i.e. a faster end-to-end response due to the AI endpoint and MCP server being co-located).

@PraneshASP (Author) commented:

Thanks for the suggestion, @zees-dev.

I’ve intentionally kept the Gemini integration co-located with the playground service so that the LLM logic lives as close to the UI as possible. This minimizes latency and keeps local development straightforward. In the future, if we add BYOK support, routing LLM calls through our remote MCP server would be a serious security concern, since users would need to send their keys to our servers.

And if the MCP server starts handling LLM calls, it ceases to be a pure “MCP” server and effectively becomes a full AI backend API, which stretches its original scope. I feel that keeping the LLM integration separate preserves a clear separation of concerns and gives us more flexibility for other front-end projects.
