
test(frontend): e2e tests for library page #10355


Open · wants to merge 9 commits into dev

Conversation

@Abhi1992002 Abhi1992002 commented Jul 12, 2025

In this PR, I’ve added library page tests.

Changes

I’ve added 9 tests: 8 for normal flows and 1 for checking edge cases.

The tests are:

  • Library navigation is accessible from the navbar.
  • The library page loads successfully.
  • Agents are visible, and cards work correctly.
  • Pagination works correctly.
  • Sorting works correctly.
  • Searching works correctly.
  • Pagination while searching works correctly.
  • Uploading an agent works correctly.
  • Search edge cases and error handling behave correctly.

I've also added a utility that uses the build page to create users at the start of a run, which the library page tests can use.

  • All tests are passing locally

Checklist 📋

For code changes:

  • I have clearly listed my changes in the PR description
  • I have made a test plan
  • I have tested my changes according to the test plan:
    • All library tests pass locally and on CI.

0ubbe and others added 3 commits July 8, 2025 19:50
## Changes 🏗️

### The Issue

- Backend returns: `"https://storage.googleapis.com/..."` (valid JSON
string)
- Frontend was calling `response.text()` which gave:
`"\"https://storage.googleapis.com/...\""`
- This resulted in a URL with extra quotes that couldn't be loaded

### The Fix
I changed both file upload methods to use `response.json()` instead of
`response.text()`:

1. **Client-side uploads** (`_makeClientFileUpload`): Changed `return
await response.text();` to `return await response.json();`
2. **Server-side uploads** (`makeAuthenticatedFileUpload`): Changed
`return await response.text();` to `return await response.json();`

Now when the backend returns a JSON string like
`"https://example.com/file.png"`, the frontend will properly parse it as
JSON and extract just the URL without the quotes.
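The difference can be reproduced outside the browser. Here is a minimal Python sketch of the same JSON semantics (the URL is illustrative): reading the body as raw text keeps the surrounding quotes, while parsing it as JSON, which is what `response.json()` does, strips them.

```python
# Minimal sketch of the bug: the backend body is a JSON-encoded string,
# so treating it as raw text keeps the quotes, while parsing it as JSON
# yields the plain URL.
import json

body = '"https://storage.googleapis.com/bucket/file.png"'  # illustrative URL

as_text = body              # what response.text() effectively returned
as_json = json.loads(body)  # what response.json() returns

print(as_text)  # "https://storage.googleapis.com/bucket/file.png"
print(as_json)  # https://storage.googleapis.com/bucket/file.png
```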

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Upload an image on your profile
  - [x] It works  


### For configuration changes:

No configuration changes
@Abhi1992002 Abhi1992002 requested review from a team as code owners July 12, 2025 05:38
@Abhi1992002 Abhi1992002 requested review from 0ubbe and Bentlybro and removed request for a team July 12, 2025 05:38
@github-project-automation github-project-automation bot moved this to 🆕 Needs initial review in AutoGPT development kanban Jul 12, 2025

This PR targets the master branch but does not come from dev or a hotfix/* branch.

Automatically setting the base branch to dev.

@github-actions github-actions bot changed the base branch from master to dev July 12, 2025 05:38
@github-actions github-actions bot added platform/frontend AutoGPT Platform - Front end platform/backend AutoGPT Platform - Back end platform/blocks size/xl labels Jul 12, 2025

netlify bot commented Jul 12, 2025

Deploy Preview for auto-gpt-docs canceled.

🔨 Latest commit: a8a0654
🔍 Latest deploy log: https://app.netlify.com/projects/auto-gpt-docs/deploys/6871f4c0ab194a00088bef3c


netlify bot commented Jul 12, 2025

Deploy Preview for auto-gpt-docs-dev canceled.

🔨 Latest commit: 083965f
🔍 Latest deploy log: https://app.netlify.com/projects/auto-gpt-docs-dev/deploys/6874da07ce06fd0008a3d476

@github-actions github-actions bot removed platform/backend AutoGPT Platform - Back end platform/blocks labels Jul 12, 2025

netlify bot commented Jul 12, 2025

Deploy Preview for auto-gpt-docs canceled.

🔨 Latest commit: 083965f
🔍 Latest deploy log: https://app.netlify.com/projects/auto-gpt-docs/deploys/6874da06bb22bc00089212ae


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 PR contains tests
🔒 Security concerns

Sensitive information exposure:
The test files contain hardcoded API keys and secrets (e.g., "test-api-key", "sk-1234567890") which could be accidentally committed to version control. The SecretStr usage in tests may not properly mask these values during debugging or logging.

⚡ Recommended focus areas for review

Test Quality

The test file contains extensive mock testing but lacks integration with actual database or external services. Some tests use hardcoded credentials and may not reflect real-world usage patterns.

"""
Tests for creating blocks using the SDK.

This test suite verifies that blocks can be created using only SDK imports
and that they work correctly without decorators.
"""

from typing import Any, Optional, Union

import pytest

from backend.sdk import (
    APIKeyCredentials,
    Block,
    BlockCategory,
    BlockCostType,
    BlockOutput,
    BlockSchema,
    CredentialsMetaInput,
    OAuth2Credentials,
    ProviderBuilder,
    SchemaField,
    SecretStr,
)

from ._config import test_api, test_service


class TestBasicBlockCreation:
    """Test creating basic blocks using the SDK."""

    @pytest.mark.asyncio
    async def test_simple_block(self):
        """Test creating a simple block without any decorators."""

        class SimpleBlock(Block):
            """A simple test block."""

            class Input(BlockSchema):
                text: str = SchemaField(description="Input text")
                count: int = SchemaField(description="Repeat count", default=1)

            class Output(BlockSchema):
                result: str = SchemaField(description="Output result")

            def __init__(self):
                super().__init__(
                    id="simple-test-block",
                    description="A simple test block",
                    categories={BlockCategory.TEXT},
                    input_schema=SimpleBlock.Input,
                    output_schema=SimpleBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                result = input_data.text * input_data.count
                yield "result", result

        # Create and test the block
        block = SimpleBlock()
        assert block.id == "simple-test-block"
        assert BlockCategory.TEXT in block.categories

        # Test execution
        outputs = []
        async for name, value in block.run(
            SimpleBlock.Input(text="Hello ", count=3),
        ):
            outputs.append((name, value))
        assert len(outputs) == 1
        assert outputs[0] == ("result", "Hello Hello Hello ")

    @pytest.mark.asyncio
    async def test_block_with_credentials(self):
        """Test creating a block that requires credentials."""

        class APIBlock(Block):
            """A block that requires API credentials."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = test_api.credentials_field(
                    description="API credentials for test service",
                )
                query: str = SchemaField(description="API query")

            class Output(BlockSchema):
                response: str = SchemaField(description="API response")
                authenticated: bool = SchemaField(description="Was authenticated")

            def __init__(self):
                super().__init__(
                    id="api-test-block",
                    description="Test block with API credentials",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=APIBlock.Input,
                    output_schema=APIBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
            ) -> BlockOutput:
                # Simulate API call
                api_key = credentials.api_key.get_secret_value()
                authenticated = bool(api_key)

                yield "response", f"API response for: {input_data.query}"
                yield "authenticated", authenticated

        # Create test credentials
        test_creds = APIKeyCredentials(
            id="test-creds",
            provider="test_api",
            api_key=SecretStr("test-api-key"),
            title="Test API Key",
        )

        # Create and test the block
        block = APIBlock()
        outputs = []
        async for name, value in block.run(
            APIBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_api",
                    "id": "test-creds",
                    "type": "api_key",
                },
                query="test query",
            ),
            credentials=test_creds,
        ):
            outputs.append((name, value))

        assert len(outputs) == 2
        assert outputs[0] == ("response", "API response for: test query")
        assert outputs[1] == ("authenticated", True)

    @pytest.mark.asyncio
    async def test_block_with_multiple_outputs(self):
        """Test block that yields multiple outputs."""

        class MultiOutputBlock(Block):
            """Block with multiple outputs."""

            class Input(BlockSchema):
                text: str = SchemaField(description="Input text")

            class Output(BlockSchema):
                uppercase: str = SchemaField(description="Uppercase version")
                lowercase: str = SchemaField(description="Lowercase version")
                length: int = SchemaField(description="Text length")
                is_empty: bool = SchemaField(description="Is text empty")

            def __init__(self):
                super().__init__(
                    id="multi-output-block",
                    description="Block with multiple outputs",
                    categories={BlockCategory.TEXT},
                    input_schema=MultiOutputBlock.Input,
                    output_schema=MultiOutputBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                text = input_data.text
                yield "uppercase", text.upper()
                yield "lowercase", text.lower()
                yield "length", len(text)
                yield "is_empty", len(text) == 0

        # Test the block
        block = MultiOutputBlock()
        outputs = []
        async for name, value in block.run(MultiOutputBlock.Input(text="Hello World")):
            outputs.append((name, value))

        assert len(outputs) == 4
        assert ("uppercase", "HELLO WORLD") in outputs
        assert ("lowercase", "hello world") in outputs
        assert ("length", 11) in outputs
        assert ("is_empty", False) in outputs


class TestBlockWithProvider:
    """Test creating blocks associated with providers."""

    @pytest.mark.asyncio
    async def test_block_using_provider(self):
        """Test block that uses a registered provider."""

        class TestServiceBlock(Block):
            """Block for test service."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = test_service.credentials_field(
                    description="Test service credentials",
                )
                action: str = SchemaField(description="Action to perform")

            class Output(BlockSchema):
                result: str = SchemaField(description="Action result")
                provider_name: str = SchemaField(description="Provider used")

            def __init__(self):
                super().__init__(
                    id="test-service-block",
                    description="Block using test service provider",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=TestServiceBlock.Input,
                    output_schema=TestServiceBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
            ) -> BlockOutput:
                # The provider name should match
                yield "result", f"Performed: {input_data.action}"
                yield "provider_name", credentials.provider

        # Create credentials for our provider
        creds = APIKeyCredentials(
            id="test-service-creds",
            provider="test_service",
            api_key=SecretStr("test-key"),
            title="Test Service Key",
        )

        # Test the block
        block = TestServiceBlock()
        outputs = {}
        async for name, value in block.run(
            TestServiceBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_service",
                    "id": "test-service-creds",
                    "type": "api_key",
                },
                action="test action",
            ),
            credentials=creds,
        ):
            outputs[name] = value

        assert outputs["result"] == "Performed: test action"
        assert outputs["provider_name"] == "test_service"


class TestComplexBlockScenarios:
    """Test more complex block scenarios."""

    @pytest.mark.asyncio
    async def test_block_with_optional_fields(self):
        """Test block with optional input fields."""
        # Optional is already imported at the module level

        class OptionalFieldBlock(Block):
            """Block with optional fields."""

            class Input(BlockSchema):
                required_field: str = SchemaField(description="Required field")
                optional_field: Optional[str] = SchemaField(
                    description="Optional field",
                    default=None,
                )
                optional_with_default: str = SchemaField(
                    description="Optional with default",
                    default="default value",
                )

            class Output(BlockSchema):
                has_optional: bool = SchemaField(description="Has optional value")
                optional_value: Optional[str] = SchemaField(
                    description="Optional value"
                )
                default_value: str = SchemaField(description="Default value")

            def __init__(self):
                super().__init__(
                    id="optional-field-block",
                    description="Block with optional fields",
                    categories={BlockCategory.TEXT},
                    input_schema=OptionalFieldBlock.Input,
                    output_schema=OptionalFieldBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                yield "has_optional", input_data.optional_field is not None
                yield "optional_value", input_data.optional_field
                yield "default_value", input_data.optional_with_default

        # Test with optional field provided
        block = OptionalFieldBlock()
        outputs = {}
        async for name, value in block.run(
            OptionalFieldBlock.Input(
                required_field="test",
                optional_field="provided",
            )
        ):
            outputs[name] = value

        assert outputs["has_optional"] is True
        assert outputs["optional_value"] == "provided"
        assert outputs["default_value"] == "default value"

        # Test without optional field
        outputs = {}
        async for name, value in block.run(
            OptionalFieldBlock.Input(
                required_field="test",
            )
        ):
            outputs[name] = value

        assert outputs["has_optional"] is False
        assert outputs["optional_value"] is None
        assert outputs["default_value"] == "default value"

    @pytest.mark.asyncio
    async def test_block_with_complex_types(self):
        """Test block with complex input/output types."""

        class ComplexBlock(Block):
            """Block with complex types."""

            class Input(BlockSchema):
                items: list[str] = SchemaField(description="List of items")
                mapping: dict[str, int] = SchemaField(
                    description="String to int mapping"
                )

            class Output(BlockSchema):
                item_count: int = SchemaField(description="Number of items")
                total_value: int = SchemaField(description="Sum of mapping values")
                combined: list[str] = SchemaField(description="Combined results")

            def __init__(self):
                super().__init__(
                    id="complex-types-block",
                    description="Block with complex types",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ComplexBlock.Input,
                    output_schema=ComplexBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                yield "item_count", len(input_data.items)
                yield "total_value", sum(input_data.mapping.values())

                # Combine items with their mapping values
                combined = []
                for item in input_data.items:
                    value = input_data.mapping.get(item, 0)
                    combined.append(f"{item}: {value}")

                yield "combined", combined

        # Test the block
        block = ComplexBlock()
        outputs = {}
        async for name, value in block.run(
            ComplexBlock.Input(
                items=["apple", "banana", "orange"],
                mapping={"apple": 5, "banana": 3, "orange": 4},
            )
        ):
            outputs[name] = value

        assert outputs["item_count"] == 3
        assert outputs["total_value"] == 12
        assert outputs["combined"] == ["apple: 5", "banana: 3", "orange: 4"]

    @pytest.mark.asyncio
    async def test_block_error_handling(self):
        """Test block error handling."""

        class ErrorHandlingBlock(Block):
            """Block that demonstrates error handling."""

            class Input(BlockSchema):
                value: int = SchemaField(description="Input value")
                should_error: bool = SchemaField(
                    description="Whether to trigger an error",
                    default=False,
                )

            class Output(BlockSchema):
                result: int = SchemaField(description="Result")
                error_message: Optional[str] = SchemaField(
                    description="Error if any", default=None
                )

            def __init__(self):
                super().__init__(
                    id="error-handling-block",
                    description="Block with error handling",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ErrorHandlingBlock.Input,
                    output_schema=ErrorHandlingBlock.Output,
                )

            async def run(self, input_data: Input, **kwargs) -> BlockOutput:
                if input_data.should_error:
                    raise ValueError("Intentional error triggered")

                if input_data.value < 0:
                    yield "error_message", "Value must be non-negative"
                    yield "result", 0
                else:
                    yield "result", input_data.value * 2
                    yield "error_message", None

        # Test normal operation
        block = ErrorHandlingBlock()
        outputs = {}
        async for name, value in block.run(
            ErrorHandlingBlock.Input(value=5, should_error=False)
        ):
            outputs[name] = value

        assert outputs["result"] == 10
        assert outputs["error_message"] is None

        # Test with negative value
        outputs = {}
        async for name, value in block.run(
            ErrorHandlingBlock.Input(value=-5, should_error=False)
        ):
            outputs[name] = value

        assert outputs["result"] == 0
        assert outputs["error_message"] == "Value must be non-negative"

        # Test with error
        with pytest.raises(ValueError, match="Intentional error triggered"):
            async for _ in block.run(
                ErrorHandlingBlock.Input(value=5, should_error=True)
            ):
                pass


class TestAuthenticationVariants:
    """Test complex authentication scenarios including OAuth, API keys, and scopes."""

    @pytest.mark.asyncio
    async def test_oauth_block_with_scopes(self):
        """Test creating a block that uses OAuth2 with scopes."""

        # Create a test OAuth provider with scopes
        # For testing, we don't need an actual OAuth handler
        # In real usage, you would provide a proper OAuth handler class
        oauth_provider = (
            ProviderBuilder("test_oauth_provider")
            .with_api_key("TEST_OAUTH_API", "Test OAuth API")
            .with_base_cost(5, BlockCostType.RUN)
            .build()
        )

        class OAuthScopedBlock(Block):
            """Block requiring OAuth2 with specific scopes."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = oauth_provider.credentials_field(
                    description="OAuth2 credentials with scopes",
                    scopes=["read:user", "write:data"],
                )
                resource: str = SchemaField(description="Resource to access")

            class Output(BlockSchema):
                data: str = SchemaField(description="Retrieved data")
                scopes_used: list[str] = SchemaField(
                    description="Scopes that were used"
                )
                token_info: dict[str, Any] = SchemaField(
                    description="Token information"
                )

            def __init__(self):
                super().__init__(
                    id="oauth-scoped-block",
                    description="Test OAuth2 with scopes",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=OAuthScopedBlock.Input,
                    output_schema=OAuthScopedBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
            ) -> BlockOutput:
                # Simulate OAuth API call with scopes
                token = credentials.access_token.get_secret_value()

                yield "data", f"OAuth data for {input_data.resource}"
                yield "scopes_used", credentials.scopes or []
                yield "token_info", {
                    "has_token": bool(token),
                    "has_refresh": credentials.refresh_token is not None,
                    "provider": credentials.provider,
                    "expires_at": credentials.access_token_expires_at,
                }

        # Create test OAuth credentials
        test_oauth_creds = OAuth2Credentials(
            id="test-oauth-creds",
            provider="test_oauth_provider",
            access_token=SecretStr("test-access-token"),
            refresh_token=SecretStr("test-refresh-token"),
            scopes=["read:user", "write:data"],
            title="Test OAuth Credentials",
        )

        # Test the block
        block = OAuthScopedBlock()
        outputs = {}
        async for name, value in block.run(
            OAuthScopedBlock.Input(
                credentials={  # type: ignore
                    "provider": "test_oauth_provider",
                    "id": "test-oauth-creds",
                    "type": "oauth2",
                },
                resource="user/profile",
            ),
            credentials=test_oauth_creds,
        ):
            outputs[name] = value

        assert outputs["data"] == "OAuth data for user/profile"
        assert set(outputs["scopes_used"]) == {"read:user", "write:data"}
        assert outputs["token_info"]["has_token"] is True
        assert outputs["token_info"]["expires_at"] is None
        assert outputs["token_info"]["has_refresh"] is True

    @pytest.mark.asyncio
    async def test_mixed_auth_block(self):
        """Test block that supports both OAuth2 and API key authentication."""

        # Create provider supporting API key auth
        # In real usage, you would add OAuth support with .with_oauth()
        mixed_provider = (
            ProviderBuilder("mixed_auth_provider")
            .with_api_key("MIXED_API_KEY", "Mixed Provider API Key")
            .with_base_cost(8, BlockCostType.RUN)
            .build()
        )

        class MixedAuthBlock(Block):
            """Block supporting multiple authentication methods."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = mixed_provider.credentials_field(
                    description="API key or OAuth2 credentials",
                    supported_credential_types=["api_key", "oauth2"],
                )
                operation: str = SchemaField(description="Operation to perform")

            class Output(BlockSchema):
                result: str = SchemaField(description="Operation result")
                auth_type: str = SchemaField(description="Authentication type used")
                auth_details: dict[str, Any] = SchemaField(description="Auth details")

            def __init__(self):
                super().__init__(
                    id="mixed-auth-block",
                    description="Block supporting OAuth2 and API key",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=MixedAuthBlock.Input,
                    output_schema=MixedAuthBlock.Output,
                )

            async def run(
                self,
                input_data: Input,
                *,
                credentials: Union[APIKeyCredentials, OAuth2Credentials],
                **kwargs,
            ) -> BlockOutput:
                # Handle different credential types
                if isinstance(credentials, APIKeyCredentials):
                    auth_type = "api_key"
                    auth_details = {
                        "has_key": bool(credentials.api_key.get_secret_value()),
                        "key_prefix": credentials.api_key.get_secret_value()[:5]
                        + "...",
                    }
                elif isinstance(credentials, OAuth2Credentials):
                    auth_type = "oauth2"
                    auth_details = {
                        "has_token": bool(credentials.access_token.get_secret_value()),
                        "scopes": credentials.scopes or [],
                    }
                else:
                    auth_type = "unknown"
                    auth_details = {}

                yield "result", f"Performed {input_data.operation} with {auth_type}"
                yield "auth_type", auth_type
                yield "auth_details", auth_details

        # Test with API key
        api_creds = APIKeyCredentials(
            id="mixed-api-creds",
            provider="mixed_auth_provider",
            api_key=SecretStr("sk-1234567890"),
            title="Mixed API Key",
        )

        block = MixedAuthBlock()
        outputs = {}
        async for name, value in block.run(
            MixedAuthBlock.Input(
                credentials={  # type: ignore
                    "provider": "mixed_auth_provider",
                    "id": "mixed-api-creds",
                    "type": "api_key",
                },
                operation="fetch_data",
            ),
            credentials=api_creds,
        ):
            outputs[name] = value

        assert outputs["auth_type"] == "api_key"
        assert outputs["result"] == "Performed fetch_data with api_key"
        assert outputs["auth_details"]["key_prefix"] == "sk-12..."

        # Test with OAuth2
        oauth_creds = OAuth2Credentials(
            id="mixed-oauth-creds",
            provider="mixed_auth_provider",
            access_token=SecretStr("oauth-token-123"),
            scopes=["full_access"],
            title="Mixed OAuth",
        )

        outputs = {}
        async for name, value in block.run(
            MixedAuthBlock.Input(
                credentials={  # type: ignore
                    "provider": "mixed_auth_provider",
                    "id": "mixed-oauth-creds",
                    "type": "oauth2",
                },
                operation="update_data",
            ),
            credentials=oauth_creds,
        ):
            outputs[name] = value

        assert outputs["auth_type"] == "oauth2"
        assert outputs["result"] == "Performed update_data with oauth2"
        assert outputs["auth_details"]["scopes"] == ["full_access"]

    @pytest.mark.asyncio
    async def test_multiple_credentials_block(self):
        """Test block requiring multiple different credentials."""

        # Create multiple providers
        primary_provider = (
            ProviderBuilder("primary_service")
            .with_api_key("PRIMARY_API_KEY", "Primary Service Key")
            .build()
        )

        # For testing purposes, using API key instead of OAuth handler
        secondary_provider = (
            ProviderBuilder("secondary_service")
            .with_api_key("SECONDARY_API_KEY", "Secondary Service Key")
            .build()
        )

        class MultiCredentialBlock(Block):
            """Block requiring credentials from multiple services."""

            class Input(BlockSchema):
                primary_credentials: CredentialsMetaInput = (
                    primary_provider.credentials_field(
                        description="Primary service API key"
                    )
                )
                secondary_credentials: CredentialsMetaInput = (
                    secondary_provider.credentials_field(
                        description="Secondary service OAuth"
                    )
                )
                merge_data: bool = SchemaField(
                    description="Whether to merge data from both services",
                    default=True,
                )

            class Output(BlockSchema):
                primary_data: str = SchemaField(description="Data from primary service")
                secondary_data: str = SchemaField(
                    description="Data from secondary service"
                )
                merged_result: Optional[str] = SchemaField(
                    description="Merged data if requested"
                )

            def __init__(self):
                super().__init__(
                    id="multi-credential-block",
                    description="Block using multiple credentials",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=MultiCredentialBlock.Input,
                    output_schema=MultiCredentialBlock.Output,
                )

            async def run(
                self,
                input_data: Input,
                *,
                primary_credentials: APIKeyCredentials,
                secondary_credentials: OAuth2Credentials,
                **kwargs,
            ) -> BlockOutput:
                # Simulate fetching data with primary API key
                primary_data = f"Primary data using {primary_credentials.provider}"
                yield "primary_data", primary_data

                # Simulate fetching data with secondary OAuth
                secondary_data = f"Secondary data with {len(secondary_credentials.scopes or [])} scopes"
                yield "secondary_data", secondary_data

                # Merge if requested
                if input_data.merge_data:
                    merged = f"{primary_data} + {secondary_data}"
                    yield "merged_result", merged
                else:
                    yield "merged_result", None

        # Create test credentials
        primary_creds = APIKeyCredentials(
            id="primary-creds",
            provider="primary_service",
            api_key=SecretStr("primary-key-123"),
            title="Primary Key",
        )

        secondary_creds = OAuth2Credentials(
            id="secondary-creds",
            provider="secondary_service",
            access_token=SecretStr("secondary-token"),
            scopes=["read", "write"],
            title="Secondary OAuth",
        )

        # Test the block
        block = MultiCredentialBlock()
        outputs = {}

        # Note: In real usage, the framework would inject the correct credentials
        # based on the field names. Here we simulate that behavior.
        async for name, value in block.run(
            MultiCredentialBlock.Input(
                primary_credentials={  # type: ignore
                    "provider": "primary_service",
                    "id": "primary-creds",
                    "type": "api_key",
                },
                secondary_credentials={  # type: ignore
                    "provider": "secondary_service",
                    "id": "secondary-creds",
                    "type": "oauth2",
                },
                merge_data=True,
            ),
            primary_credentials=primary_creds,
            secondary_credentials=secondary_creds,
        ):
            outputs[name] = value

        assert outputs["primary_data"] == "Primary data using primary_service"
        assert outputs["secondary_data"] == "Secondary data with 2 scopes"
        assert "Primary data" in outputs["merged_result"]
        assert "Secondary data" in outputs["merged_result"]

    @pytest.mark.asyncio
    async def test_oauth_scope_validation(self):
        """Test OAuth scope validation and handling."""
        from backend.sdk import OAuth2Credentials, ProviderBuilder

        # Provider with specific required scopes
        # For testing OAuth scope validation
        scoped_provider = (
            ProviderBuilder("scoped_oauth_service")
            .with_api_key("SCOPED_OAUTH_KEY", "Scoped OAuth Service")
            .build()
        )

        class ScopeValidationBlock(Block):
            """Block that validates OAuth scopes."""

            class Input(BlockSchema):
                credentials: CredentialsMetaInput = scoped_provider.credentials_field(
                    description="OAuth credentials with specific scopes",
                    scopes=["user:read", "user:write"],  # Required scopes
                )
                require_admin: bool = SchemaField(
                    description="Whether admin scopes are required",
                    default=False,
                )

            class Output(BlockSchema):
                allowed_operations: list[str] = SchemaField(
                    description="Operations allowed with current scopes"
                )
                missing_scopes: list[str] = SchemaField(
                    description="Scopes that are missing for full access"
                )
                has_required_scopes: bool = SchemaField(
                    description="Whether all required scopes are present"
                )

            def __init__(self):
                super().__init__(
                    id="scope-validation-block",
                    description="Block that validates OAuth scopes",
                    categories={BlockCategory.DEVELOPER_TOOLS},
                    input_schema=ScopeValidationBlock.Input,
                    output_schema=ScopeValidationBlock.Output,
                )

            async def run(
                self, input_data: Input, *, credentials: OAuth2Credentials, **kwargs
            ) -> BlockOutput:
                current_scopes = set(credentials.scopes or [])
                required_scopes = {"user:read", "user:write"}

                if input_data.require_admin:
                    required_scopes.update({"admin:read", "admin:write"})

                # Determine allowed operations based on scopes
                allowed_ops = []
                if "user:read" in current_scopes:
                    allowed_ops.append("read_user_data")
                if "user:write" in current_scopes:
                    allowed_ops.append("update_user_data")
                if "admin:read" in current_scopes:
                    allowed_ops.append("read_admin_data")
                if "admin:write" in current_scopes:
                    allowed_ops.append("update_admin_data")

                missing = list(required_scopes - current_scopes)
                has_required = len(missing) == 0

                yield "allowed_operations", allowed_ops
                yield "missing_scopes", missing
                yield "has_required_scopes", has_required

        # Test with partial scopes
        partial_creds = OAuth2Credentials(
            id="partial-oauth",
            provider="scoped_oauth_service",
            access_token=SecretStr("partial-token"),
            scopes=["user:read"],  # Only one of the required scopes
            title="Partial OAuth",
        )

        block = ScopeValidationBlock()
        outputs = {}
        async for name, value in block.run(
            ScopeValidationBlock.Input(
                credentials={  # type: ignore
                    "provider": "scoped_oauth_service",
                    "id": "partial-oauth",
                    "type": "oauth2",
                },
                require_admin=False,
            ),
            credentials=partial_creds,
        ):
            outputs[name] = value

        assert outputs["allowed_operations"] == ["read_user_data"]
        assert "user:write" in outputs["missing_scopes"]
        assert outputs["has_required_scopes"] is False

        # Test with all required scopes
        full_creds = OAuth2Credentials(
            id="full-oauth",
            provider="scoped_oauth_service",
            access_token=SecretStr("full-token"),
            scopes=["user:read", "user:write", "admin:read"],
            title="Full OAuth",
        )

        outputs = {}
        async for name, value in block.run(
            ScopeValidationBlock.Input(
                credentials={  # type: ignore
                    "provider": "scoped_oauth_service",
                    "id": "full-oauth",
                    "type": "oauth2",
                },
                require_admin=False,
            ),
            credentials=full_creds,
        ):
            outputs[name] = value

        assert set(outputs["allowed_operations"]) == {
            "read_user_data",
            "update_user_data",
            "read_admin_data",
        }
        assert outputs["missing_scopes"] == []
        assert outputs["has_required_scopes"] is True


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
Error Handling

Catching bare Exception swallows every failure and yields only a stringified message, with no logging and no distinction between error types. This can mask important debugging information and make troubleshooting difficult.

try:
    response = await Requests().post(url, headers=headers, json=payload)
    data = response.json()

    yield "webset_id", data.get("id", "")
    yield "status", data.get("status", "")
    yield "external_id", data.get("externalId")
    yield "created_at", data.get("createdAt", "")

except Exception as e:
    yield "error", str(e)
    yield "webset_id", ""
    yield "status", ""
    yield "created_at", ""
Performance Risk

The new get_block_error_stats function uses raw SQL aggregation but gives no consideration to indexing, so it could be slow on large datasets. The HAVING clause threshold (COUNT(*) >= 10) is also hardcoded rather than parameterized.

async def get_block_error_stats(
    start_time: datetime, end_time: datetime
) -> list[BlockErrorStats]:
    """Get block execution stats using efficient SQL aggregation."""

    query_template = """
    SELECT 
        n."agentBlockId" as block_id,
        COUNT(*) as total_executions,
        SUM(CASE WHEN ne."executionStatus" = 'FAILED' THEN 1 ELSE 0 END) as failed_executions
    FROM {schema_prefix}"AgentNodeExecution" ne
    JOIN {schema_prefix}"AgentNode" n ON ne."agentNodeId" = n.id
    WHERE ne."addedTime" >= $1::timestamp AND ne."addedTime" <= $2::timestamp
    GROUP BY n."agentBlockId"
    HAVING COUNT(*) >= 10
    """

    result = await query_raw_with_schema(query_template, start_time, end_time)

    # Convert to typed data structures
    return [
        BlockErrorStats(
            block_id=row["block_id"],
            total_executions=int(row["total_executions"]),
            failed_executions=int(row["failed_executions"]),
        )
        for row in result
    ]
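Both concerns could be addressed by binding the threshold as a query parameter (HAVING COUNT(*) >= $3) and indexing the filtered column. The sketch below is a suggestion, not existing code: the index statement is hypothetical, and the Python filter only mirrors what the parameterized HAVING clause would do server-side.

```python
from dataclasses import dataclass


@dataclass
class BlockErrorStats:
    block_id: str
    total_executions: int
    failed_executions: int


# Suggested (hypothetical) index; without one on "addedTime", the WHERE range
# filter forces a sequential scan on large AgentNodeExecution tables.
SUGGESTED_INDEX = (
    'CREATE INDEX CONCURRENTLY IF NOT EXISTS "idx_node_exec_added_time" '
    'ON "AgentNodeExecution" ("addedTime");'
)


def rows_to_stats(rows: list[dict], min_executions: int = 10) -> list[BlockErrorStats]:
    """Same filtering as the hardcoded HAVING clause, with the threshold exposed.

    In the real query the threshold would be bound as a third parameter
    (HAVING COUNT(*) >= $3) so callers can tune it without editing SQL.
    """
    return [
        BlockErrorStats(
            block_id=row["block_id"],
            total_executions=int(row["total_executions"]),
            failed_executions=int(row["failed_executions"]),
        )
        for row in rows
        if int(row["total_executions"]) >= min_executions
    ]
```

Keeping the filter in SQL remains preferable for large result sets; the Python version here just makes the intended semantics explicit and testable.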


deepsource-io bot commented Jul 12, 2025

Here's the code health analysis summary for commits 6ffe57c..083965f. View details on DeepSource ↗.

Analysis Summary

Analyzer | Status | Summary | Link
JavaScript | ✅ Success | ❗ 75 occurrences introduced, 🎯 18 occurrences resolved | View Check ↗
Python | ✅ Success | | View Check ↗

💡 If you’re a repository administrator, you can configure the quality gates from the settings.

@AutoGPT-Agent

Thank you for adding these comprehensive e2e tests for the library page! This is valuable work that will help ensure the library functionality remains stable. However, there are a few issues that need to be addressed before this PR can be merged:

Title Format

The PR title has a typo ('frontent' should be 'frontend') and should use the singular 'test' instead of 'tests' to follow the conventional commit format. Please update to:

test(frontend): e2e tests for library page

Missing Checklist

Your PR is missing the required checklist. Since you're adding substantial new test code, please include the complete checklist from the PR template, with appropriate items checked off.

Potentially Out of Scope Changes

I noticed a couple of changes that appear unrelated to adding e2e tests:

  1. In client.ts and helpers.ts, you're changing return types from response.text() to response.json(). This seems like a functional change rather than a test-only change. If this is required for the tests to work, please explain the connection in your PR description.

  2. You've modified the CI workflow file to run only your specific test file. This change should be temporary for development only and should be reverted before merging.

Data-testid Additions

You've added several data-testid attributes to components. This is good for testing, but ensure these IDs follow any project conventions for test IDs.

Once you've addressed these items, this PR will be ready for another review. The test implementation itself looks thorough and well-structured!

@AutoGPT-Agent

Thanks for adding these comprehensive e2e tests for the library page! The tests look thorough and well-structured, covering many important aspects of the library functionality.

A few observations:

  1. The PR title has a typo: "tests(frontent)" should be "test(frontend)" - both to fix the spelling and to use the singular "test" for conventional commit format

  2. The changes to the API client (changing from .text() to .json() in two places) are small but functional changes. These seem necessary for the tests to work correctly, but it would be good to briefly mention these in the description under the Changes section

  3. The test names and organization look good, and I appreciate the detailed comments explaining what each test is checking

  4. I see that you've added a new utility for creating users at the start, which seems like a useful addition for other tests

  5. In the CI workflow, I noticed you're updating platform-frontend-ci.yml to only run the library.spec.ts test. Is this intended to be committed, or is it just for your local testing?

Overall, this is a great addition that will improve test coverage of the library page. Once you've addressed the PR title and clarified the intended changes to the workflow, this should be ready to merge.

@AutoGPT-Agent

Hi @Abhi1992002, thanks for adding these library page e2e tests! The tests themselves look comprehensive and well-structured.

However, there are a few things that need to be addressed before this PR can be merged:

PR Format Issues

  1. The PR title doesn't follow our conventional commit format. It should use one of the allowed types: feat, fix, refactor, ci, or dx. In this case, "test" isn't an allowed type, and there's a typo in "frontent". Consider changing it to: test(frontend): e2e tests for library page or ci(frontend/library): e2e tests for library page.

  2. The PR description is missing the required checklist. Even for test changes, we need the checklist to be filled out or explicitly noted as not applicable.

Scope Issues

  1. I noticed there are changes to the API client code in src/lib/autogpt-server-api/client.ts and helpers.ts that change response handling from text to JSON. These changes aren't mentioned in your PR description and seem unrelated to the library tests. Can you explain why these changes are necessary for the tests?

  2. You've also modified the CI workflow to run only library.spec.ts rather than all tests. This change should be mentioned in the description and may impact other tests.

Next Steps

  1. Update the PR title to follow our conventional commit format
  2. Add the complete checklist to your PR description or explicitly note if parts are not applicable
  3. Explain the API client changes and CI workflow changes in your description

The test code itself looks good and comprehensive! Just need to address these documentation and scope issues before we can proceed with merging.

@AutoGPT-Agent

Thank you for adding comprehensive e2e tests for the library page! The test coverage looks thorough with 9 tests covering various aspects of the library functionality.

However, there are a few issues that need to be addressed before this PR can be merged:

  1. PR Title Format: The title needs to follow our conventional commit format.

    • "tests" is not one of our accepted types (should be one of: feat, fix, refactor, ci, dx)
    • "frontent" is misspelled (should be "frontend")
    • Consider changing to something like: test(frontend): e2e tests for library page or feat(frontend): add e2e tests for library page
  2. Missing Checklist: Your PR description is missing the required checklist section. Since this PR adds substantial new code, a complete checklist is required. Please add the checklist and check off the appropriate items.

  3. CI Workflow Change: I noticed you've modified the platform-frontend-ci.yml to specifically run only the library.spec.ts test. This change should be temporary for testing purposes and not part of the final PR, as it would prevent other tests from running in CI.

  4. API Return Type Change: The changes to client.ts and helpers.ts from return await response.text() to return await response.json() may have unintended consequences for existing code. Can you explain why this change is necessary for the tests?

The test implementation itself looks solid with good coverage of library functionality including search, pagination, sorting, and upload features. Please address the issues above, and this PR will be ready for another review.

@Abhi1992002 Abhi1992002 changed the title tests(frontent): e2e tests for library page test(frontend): e2e tests for library page Jul 12, 2025
@AutoGPT-Agent

Thank you for adding these comprehensive e2e tests for the library page! The test coverage looks excellent, with tests for all major functionality including navigation, searching, pagination, sorting, and uploading.

Before this PR can be merged, there are a few items that need to be addressed:

Required Changes

  1. Checklist Completion Required: Please check off all the items in your PR checklist that you've completed. While you mentioned "All tests are passing locally" in your comment, you need to actually check the checkboxes in the PR description.

  2. API Return Type Changes: I noticed you changed the return type in two places from .text() to .json() in the autogpt-server-api client and helpers:

    // client.ts line 839-842
    - return await response.text();
    + return await response.json();
    // helpers.ts line 302-305
    - return await response.text();
    + return await response.json();

    Can you explain why this change was necessary for the tests? This seems to change the API contract, which could potentially break existing functionality.

Questions/Suggestions

  1. Would it be helpful to add comments explaining the test structure for future maintainers? The test organization is good, but additional documentation could be valuable.

  2. Have you verified these tests also run correctly in CI environments?

Your test implementation looks thorough and well-structured. I particularly appreciate the comprehensive approach to testing edge cases and error handling scenarios.

@AutoGPT-Agent

Thank you for adding these comprehensive tests for the library page! The test suite looks well-designed with good coverage of normal flows and edge cases.

Items to address before merging:

  1. Complete the checklist: The PR template requires all checklist items to be checked off. You have only checked the first item about listing changes. Please complete the test plan checklist.

  2. API response changes: I noticed you've changed response handling in two places from .text() to .json() in:

    • src/lib/autogpt-server-api/client.ts
    • src/lib/autogpt-server-api/helpers.ts

    These changes could potentially affect existing functionality. Could you explain why this change was needed and confirm that it doesn't break any existing features?

Otherwise, the tests look well-structured and comprehensive. I appreciate the detailed test cases covering normal flows and edge cases for the library page functionality.

@AutoGPT-Agent

Thanks for this comprehensive set of e2e tests for the library page! The tests look well-structured and cover a good range of functionality including navigation, loading, agent visibility, pagination, sorting, searching, and uploads.

A couple of notes on your PR:

  1. The changes to helpers.ts and client.ts should ideally be in a separate PR since they change actual implementation code rather than tests. While you've acknowledged these changes in your description, it would be better to separate functional code changes from test additions to maintain clean PR boundaries. The changes (switching from text to JSON responses) could potentially affect existing functionality.

  2. The test assets folder includes a testing_agent.json file which is good, but you might want to add documentation either in the file or in a README explaining how the test agents are structured and what their purpose is.

  3. Consider adding a test that specifically tests failure scenarios - for example, what happens when the upload fails or when the API returns an error. I see there's an edge case test, but expanding this to include API failures would strengthen the test suite.

  4. The utility methods in library.ts are quite extensive which is great, but there's some duplication between LibraryUtils, AgentCreationUtils, and AgentCreationService. Consider consolidating these into a more cohesive structure.

Overall, excellent work on the tests themselves! Once you address the API changes (either by justifying their inclusion or moving them to a separate PR), this should be ready to merge.

@AutoGPT-Agent

Thanks for adding these comprehensive e2e tests for the library page! This is a valuable addition to our test suite that will help catch regressions.

I have two concerns about the PR that need to be addressed:

  1. You've made changes to API client methods in client.ts and helpers.ts, changing the return type from text() to json(). These changes seem unrelated to adding tests and could potentially break existing functionality. Could you explain why these changes are necessary for the tests? If they're fixing an actual issue, it might be better to make these changes in a separate PR focused on fixing the API client.

  2. I noticed this PR has a "Possible security concern" label. Could you clarify what security concerns might be present? This is important to address before merging.

Otherwise, the test implementation looks thorough and well-structured. I appreciate the comprehensive test coverage for different library page functionality including navigation, loading, agent cards, pagination, sorting, searching, and edge cases.

Please address these concerns, and we'll be able to move forward with this PR.

@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Jul 17, 2025

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.

Labels: conflicts (Automatically applied to PRs with merge conflicts), platform/frontend (AutoGPT Platform - Front end), Possible security concern, Review effort 4/5, size/xl