
Conversation

@philip philip commented Sep 9, 2025

Rough draft for review, do not publish.

Preview

@philip philip requested a review from danieltprice as a code owner September 9, 2025 23:52

vercel bot commented Sep 9, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project   | Deployment | Preview | Comments | Updated (UTC)      |
| --------- | ---------- | ------- | -------- | ------------------ |
| neon-next | Ready      | Preview | Comment  | Oct 2, 2025 9:26pm |

- Scale databases to zero when idle (no compute charges when idle; storage is still billed)

Architecture assumption: This guide uses one Neon project per user for better isolation and security. For detailed architecture patterns and billing models, see the [platform integration getting started guide](/docs/guides/platform-integration-get-started).

@danieltprice danieltprice Sep 10, 2025

We need to cover the agent plan & how it works.

  • The AI platform signs up for a paid account with a 30K credit (gets a "paid" Org)
  • Neon creates a second "Free" Org for the account (for free Neon projects)

Then we need to describe how the platform can transfer a project from the Free Org to the paid Org if the user needs higher usage limits for their app (a rough sketch of the API call follows below):

"The partner uses the /projects/transfer API to move the project from their Free platform-managed Org to their Paid platform-managed Org when the end user upgrades to paid on the platform."

See: https://docs.google.com/document/d/1640OEFIKYLfWcTXw0-Zkf9kK0FzGnd8uR-VOjS4-_6g/edit?tab=t.0#heading=h.2tmyzq1po7pt
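For illustration only, a minimal sketch of that transfer call from the platform side. The endpoint path and body field names below are assumptions based on the /projects/transfer flow described above and should be verified against the published Neon API reference:

```typescript
// Rough sketch: move a project from the platform-managed Free Org to the
// platform-managed Paid Org when the end user upgrades.
// The URL path and body fields are assumptions, not confirmed API reference.
const NEON_API_KEY = process.env.NEON_API_KEY!; // platform-level API key (assumed)

async function transferProjectToPaidOrg(
  projectId: string,
  freeOrgId: string,
  paidOrgId: string
): Promise<void> {
  const res = await fetch(
    `https://console.neon.tech/api/v2/organizations/${freeOrgId}/projects/transfer`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${NEON_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        project_ids: [projectId], // project(s) to move (assumed field name)
        destination_org_id: paidOrgId, // target Paid Org (assumed field name)
      }),
    }
  );
  if (!res.ok) {
    throw new Error(`Project transfer failed: ${res.status} ${await res.text()}`);
  }
}
```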

| Action | Description | Endpoint |
| ------------------------------------------------------------------ | -------------------------------------------------------------------------- | --------------------------------------------------------------------------- |
| **[Create project](#application-provisioning)** | Creates a Postgres database in ~500ms with automatic scale-to-zero | `POST /projects` |
| **[Configure autoscaling](#autoscaling-configuration)** | Set compute limits (0.25-8 CU) based on user tiers | `PATCH /projects/{project_id}/endpoints/{endpoint_id}` |

@danieltprice danieltprice Sep 10, 2025

Rather than PATCH APIs to modify existing projects, I think we need to document the Create project API, where we set limits at creation time. We can validate this. I don't think patching a project is the primary flow.
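To make that concrete, here is a minimal sketch of the create-first flow, assuming compute limits can be passed under `default_endpoint_settings` on `POST /projects`; the exact field names should be verified against the API reference before documenting:

```typescript
// Sketch of the create-first flow suggested above: set compute limits when
// the project is created instead of PATCHing the endpoint afterwards.
// Field names under default_endpoint_settings are assumptions to verify.
const NEON_API_KEY = process.env.NEON_API_KEY!;

async function createProjectForUser(userId: string, maxCu: number) {
  const res = await fetch('https://console.neon.tech/api/v2/projects', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${NEON_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      project: {
        name: `user-${userId}`,
        default_endpoint_settings: {
          autoscaling_limit_min_cu: 0.25, // floor that still allows scale to zero
          autoscaling_limit_max_cu: maxCu, // per-tier ceiling, e.g. 2 or 8
        },
      },
    }),
  });
  if (!res.ok) throw new Error(`Create project failed: ${res.status}`);
  return res.json(); // response includes the project and connection details
}
```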


Copilot AI left a comment

Pull Request Overview

This PR introduces comprehensive documentation for AI agent platforms using Neon, moving from an external use case page to detailed developer-focused documentation within the docs site.

  • Replaces external links with internal documentation paths for better integration
  • Creates a new comprehensive guide for AI agent platforms with practical API examples
  • Updates navigation and icon references to match the new documentation structure

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

| File                                    | Description                                                               |
| --------------------------------------- | ------------------------------------------------------------------------- |
| content/docs/navigation.yaml            | Updates navigation slug from external URL to internal docs path            |
| content/docs/ai/ai-agents-tools.md      | Updates links and icons to reference new internal documentation            |
| content/docs/ai/ai-agents-platform.md   | Creates comprehensive new documentation for AI agent platform integration  |

title: Neon API for AI agent platforms
subtitle: Workflow for provisioning databases, authentication, and versioning for your AI codegen platform.
enableTableOfContents: true
updatedOn: '2025-09-10T14:09:12.290Z'

This comment was marked as resolved.

Comment on lines +181 to +182
To avoid cold starts on critical endpoints, you can prevent suspension. This keeps compute always on and increases cost. Note: `suspend_timeout_seconds: 0` uses the default timeout; set to `-1` to disable suspension.

Copilot AI Oct 2, 2025

The explanation about `suspend_timeout_seconds` values could be clearer. It states that `0` uses the default timeout but then shows `-1` to disable suspension, which leaves the actual default behavior ambiguous.

Suggested change
To avoid cold starts on critical endpoints, you can prevent suspension. This keeps compute always on and increases cost. Note: `suspend_timeout_seconds: 0` uses the default timeout; set to `-1` to disable suspension.
To avoid cold starts on critical endpoints, you can prevent suspension. This keeps compute always on and increases cost.
**Note:**
- Set `"suspend_timeout_seconds": 0` to use the default suspension timeout (currently 300 seconds).
- Set a positive integer (e.g., `600`) to specify a custom timeout in seconds.
- Set `"suspend_timeout_seconds": -1` to disable suspension entirely (compute will always be on).
