A comprehensive collection of bash scripts designed to test the robustness and reliability of the Socket.dev API endpoints. This test suite covers all major API functionality including package analysis, license policy management, security scanning, and SBOM export capabilities.
This testing suite is designed for Socket.dev customers who want to:
- **Validate API robustness** - Test how the API handles various scenarios and edge cases
- **Verify error handling** - Ensure proper responses for invalid requests
- **Test rate limiting** - Understand API behavior under different load conditions
- **Validate data integrity** - Ensure responses are consistent and well-formed
- **Test authentication** - Verify proper security controls are in place
The test suite is organized into focused, single-responsibility scripts that can be run individually or as part of a comprehensive test run:
socket_api_tests/
├── .env                                  # Environment configuration
├── README.md                             # This documentation
├── run_all_tests.sh                      # Master test runner
├── 01_test_package_purl_endpoint.sh      # Core package lookup tests
├── 02_test_license_policy.sh             # License policy validation tests
├── 03_test_alert_types.sh                # Alert metadata tests
├── 04_test_full_scans.sh                 # Security scanning tests
├── 05_test_repository_management.sh      # Repository management tests
├── 06_test_sbom_export.sh                # SBOM export functionality tests
└── 07_test_cve_dependency_traversal.sh   # CVE dependency traversal tests
You will need:
- Bash shell (macOS, Linux, or WSL)
- curl for HTTP requests
- jq (optional but recommended for JSON parsing)
- A Socket.dev API token with appropriate permissions
# Clone or download the test suite
cd socket_api_tests
# Copy and configure the environment file
cp .env.example .env
# Edit .env with your actual values
nano .env
Update the `.env` file with your actual Socket.dev credentials:
# Socket API Configuration
SOCKET_API_BASE_URL=https://api.socket.dev/v0
# Authentication
SOCKET_API_TOKEN=your_actual_api_token_here
# Organization details
SOCKET_ORG_SLUG=your_actual_org_slug
# Test data
TEST_REPO_SLUG=your_test_repository_slug
TEST_BRANCH=main
TEST_COMMIT_HASH=abc123def456
# Sample PURLs for testing
TEST_NPM_PACKAGE=pkg:npm/lodash@4.17.21
TEST_PYPI_PACKAGE=pkg:pypi/requests@2.31.0
TEST_MAVEN_PACKAGE=pkg:maven/log4j/log4j@1.2.17
# Make scripts executable
chmod +x *.sh
# Run all tests
./run_all_tests.sh
# Or run individual test suites
./01_test_package_purl_endpoint.sh
./02_test_license_policy.sh
# ... etc
`01_test_package_purl_endpoint.sh`
Scenario: Test the core package lookup functionality using PURLs (a hedged request sketch follows the key tests).
Coverage:
- Basic package lookup for different ecosystems (npm, PyPI, Maven)
- Batch package lookup with multiple packages
- Query parameter variations (alerts, actions, compact, fixable)
- License details and attribution data
- Error handling for invalid PURLs
- Authentication requirements
- Rate limiting behavior
Key Tests:
- Single package lookup across ecosystems
- Batch operations with mixed package types
- Parameter combinations and edge cases
- Invalid input handling
- Missing authentication scenarios
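For orientation, here is a minimal sketch of a batch lookup of the kind these tests exercise. The `components` array body and the `alerts`/`compact` query parameters are assumptions based on the coverage above; confirm the exact request shape against the current Socket API reference.

```bash
# Batch lookup of two packages in one request (request shape is an
# assumption; verify against the current Socket API documentation).
curl -s -X POST "$SOCKET_API_BASE_URL/purl?alerts=true&compact=true" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"components\": [
        {\"purl\": \"$TEST_NPM_PACKAGE\"},
        {\"purl\": \"$TEST_PYPI_PACKAGE\"}
      ]}"
```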
`02_test_license_policy.sh`
Scenario: Test license policy management and validation functionality (an example request appears after the key tests).
Coverage:
- License policy validation against packages
- License metadata retrieval
- License policy saturation (legacy)
- License class expansions (permissive, copyleft, etc.)
- PURL-based license policy rules
- File-based and version-based rules
- Registry metadata provenance
Key Tests:
- Basic license policy validation
- Complex tier combinations
- PURL-based rule configurations
- License metadata with full text
- Error handling for invalid policies
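As one example, the license-details coverage can be exercised through the same PURL endpoint by toggling license-related query parameters. A minimal sketch, where the `licensedetails` and `licenseattrib` parameter names are assumptions; check the Socket API reference:

```bash
# Look up a package with full license details and attribution included
# (parameter names are assumptions; check the Socket API reference).
curl -s -X POST "$SOCKET_API_BASE_URL/purl?licensedetails=true&licenseattrib=true" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"components\": [{\"purl\": \"$TEST_NPM_PACKAGE\"}]}"
```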
`03_test_alert_types.sh`
Scenario: Test alert type metadata and information retrieval (a sketch follows the key tests).
Coverage:
- Alert type metadata retrieval
- Multi-language support (English, German, French, Spanish, Italian, Acholi)
- Alert type filtering and search
- Alert type properties and suggestions
- Cross-language consistency
Key Tests:
- Multi-language alert descriptions
- Single and multiple alert type queries
- Empty and invalid alert type handling
- Public endpoint accessibility
- Large array handling
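A minimal sketch of a localized alert-type metadata request. The endpoint path, the `language` parameter, and the `types` body field are hypothetical illustrations of the behaviors listed above, not confirmed API shapes:

```bash
# Request German-language metadata for two alert types (endpoint path and
# body fields are hypothetical; the tests target the real public endpoint).
curl -s -X POST "$SOCKET_API_BASE_URL/alert-types?language=de" \
  -H "Content-Type: application/json" \
  -d '{"types": ["malware", "criticalCVE"]}'
```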
`04_test_full_scans.sh`
Scenario: Test full scan creation, management, and reporting functionality (see the upload sketch after the key tests).
Coverage:
- Creating new full scans from manifest files
- Listing and filtering full scans
- Retrieving scan results and metadata
- Integration with various SCM platforms
- File upload handling and validation
Key Tests:
- Scan creation with various parameters
- Repository and branch filtering
- Commit and PR information
- Integration type configurations
- Error handling for invalid requests
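A minimal sketch of scan creation via multipart upload, assuming the full-scans endpoint accepts manifest files as form fields and repo/branch/commit details as query parameters (confirm field names against the API reference):

```bash
# Create a full scan from local manifest files (field and parameter names
# are assumptions; verify against the Socket API documentation).
curl -s -X POST \
  "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/full-scans?repo=$TEST_REPO_SLUG&branch=$TEST_BRANCH&commit_hash=$TEST_COMMIT_HASH" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  -F "file=@package.json" \
  -F "file=@package-lock.json"
```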
`05_test_repository_management.sh`
Scenario: Test repository creation, management, and labeling functionality (a CRUD sketch follows the key tests).
Coverage:
- Creating new repositories
- Listing and filtering repositories
- Updating repository settings
- Repository labeling and categorization
- Integration with SCM platforms
Key Tests:
- Repository CRUD operations
- Label management and associations
- Repository analytics
- Error handling for invalid operations
- GitHub integration
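A minimal sketch of the CRUD cycle these tests cover, assuming conventional REST routes under the organization (the paths are assumptions; verify against the API reference):

```bash
# Create, fetch, and delete a test repository (routes are assumptions).
curl -s -X POST "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/repos" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"$TEST_REPO_SLUG\"}"

curl -s "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/repos/$TEST_REPO_SLUG" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN"

curl -s -X DELETE "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/repos/$TEST_REPO_SLUG" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN"
```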
`06_test_sbom_export.sh`
Scenario: Test SBOM export functionality in various formats (an export sketch follows the key tests).
Coverage:
- CycloneDX SBOM export
- SPDX SBOM export
- Custom project metadata
- Vulnerability information inclusion
- File format validation
Key Tests:
- Multiple export formats
- Parameter combinations
- File integrity validation
- Error handling for invalid exports
- Custom metadata handling
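A minimal sketch of exporting one completed scan in both formats, assuming export routes keyed by full-scan ID (the `FULL_SCAN_ID` placeholder and the `cdx`/`spdx` path segments are assumptions):

```bash
# Export a completed full scan as CycloneDX and SPDX (paths are assumptions;
# FULL_SCAN_ID is a hypothetical placeholder).
FULL_SCAN_ID=your_full_scan_id

curl -s "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/export/cdx/$FULL_SCAN_ID" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" -o sbom_cyclonedx.json

curl -s "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/export/spdx/$FULL_SCAN_ID" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" -o sbom_spdx.json
```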
`07_test_cve_dependency_traversal.sh`
Scenario: Demonstrate practical CVE remediation using Socket API data (a traversal sketch follows the key tests).
Coverage:
- Package alert analysis for dependency identification
- Systematic version traversal to find CVE-free alternatives
- Dependency upgrade path generation
- Security remediation workflow automation
- Practical dependency management examples
Key Tests:
- Real-world CVE remediation (lodash CVE-2021-23337)
- Dependency alert analysis and pattern recognition
- Version range generation and systematic checking
- Upgrade path documentation and rollback planning
- Integration with CI/CD workflows
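To make the traversal concrete, here is a minimal sketch of the version-walk idea: query each candidate version and stop at the first one with no CVE-class alerts. The response parsing (the jq filter and alert field names) is an assumption and will need adjusting to the actual response shape:

```bash
# Walk candidate lodash versions, stopping at the first with no CVE-class
# alerts (jq filter and alert field names are assumptions).
for version in 4.17.19 4.17.20 4.17.21; do
  cve_count=$(curl -s -X POST "$SOCKET_API_BASE_URL/purl?alerts=true" \
    -H "Authorization: Bearer $SOCKET_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"components\": [{\"purl\": \"pkg:npm/lodash@$version\"}]}" |
    jq '[.alerts[]? | select(.type | test("cve"; "i"))] | length')
  if [ "${cve_count:-1}" -eq 0 ]; then
    echo "lodash@$version has no CVE-class alerts; candidate upgrade target"
    break
  fi
done
```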
Each test script follows a consistent pattern. To add new tests:
- Identify the endpoint and add it to the appropriate script
- Follow the naming convention:
Test N: Description
- Use the
make_request
function for consistent error handling - Add appropriate assertions for expected responses
Example:
# Test N: New test case
print_status "Test N: New test case"
make_request "/new/endpoint" "POST" "{\"data\": \"value\"}" "New endpoint test"
Add new environment variables to `.env`:
# New test configuration
NEW_TEST_VAR=value
If tests have dependencies, update the execution order in `run_all_tests.sh`:
declare -A test_scripts=(
["01_test_package_purl_endpoint.sh"]="Package PURL Endpoint Tests"
["02_test_license_policy.sh"]="License Policy Tests"
["new_test_script.sh"]="New Test Suite" # Add here
# ... etc
)
Tests provide colored, structured output:
- 🔵 Blue: Information and status updates
- 🟢 Green: Successful operations
- 🟡 Yellow: Warnings and rate limiting
- 🔴 Red: Errors and failures
Test results are automatically saved to:
- `test_results/test_run_YYYYMMDD_HHMMSS.txt` - Complete test run log
- `test_results/sbom_exports_YYYYMMDD_HHMMSS/` - Downloaded SBOM files
Each test tracks the following metrics (a curl-based capture sketch follows this list):
- Execution time
- HTTP status codes
- Response sizes
- Success/failure rates
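These can all be captured straight from curl's standard `--write-out` variables; the repos route below is only an illustrative target:

```bash
# Capture status code, total time, and response size from a single request
# using curl's standard write-out variables.
curl -s -o /dev/null \
  -w "status=%{http_code} time=%{time_total}s size=%{size_download}B\n" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/repos"
```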
- **Authentication Errors (401)**
  - Verify `SOCKET_API_TOKEN` is correct
  - Check token permissions and expiration
- **Rate Limiting (429)**
  - Tests automatically wait and retry
  - Adjust `RETRY_DELAY_SECONDS` if needed
- **Permission Errors (403)**
  - Verify token has required scopes
  - Check organization access
- **Not Found Errors (404)**
  - Verify `SOCKET_ORG_SLUG` is correct
  - Check if resources exist
Enable verbose output by modifying scripts:
# Add -v flag to curl commands for verbose output
curl -v -s -w "\n%{http_code}" \
-H "Authorization: Bearer $SOCKET_API_TOKEN" \
"$SOCKET_API_BASE_URL$endpoint"
- Never commit `.env` files to version control
- Use environment-specific tokens for different environments
- Rotate tokens regularly
- Limit token scopes to minimum required permissions
- Use non-production repositories and data
- Clean up test resources after testing
- Avoid sensitive data in test manifests
To test API performance under load (a concurrent-load sketch follows this list):
- Modify test scripts to run multiple iterations
- Adjust delays between requests
- Monitor rate limiting responses
- Track response times and throughput
- Test with large package lists
- Verify batch operation limits
- Check memory usage for large responses
- Test concurrent request handling
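A minimal concurrent-load sketch using xargs; it fires a fixed number of requests with bounded parallelism and tallies status codes so 429s are easy to spot (the target route is illustrative):

```bash
# Fire 20 requests with up to 5 in flight and tally status codes,
# making any 429 rate-limit responses visible.
seq 1 20 | xargs -P 5 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $SOCKET_API_TOKEN" \
  "$SOCKET_API_BASE_URL/orgs/$SOCKET_ORG_SLUG/repos" |
  sort | uniq -c
```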
When new Socket API endpoints are added:
- Create a new test script following the naming convention
- Add comprehensive test cases covering:
- Happy path scenarios
- Error conditions
- Edge cases
- Parameter validation
- Update master runner to include new tests
- Document new test scenarios
To improve existing tests:
- Add more edge cases
- Improve error handling
- Enhance validation logic
- Optimize performance
For issues with the test suite:
- Check the logs in the `test_results/` directory
- Verify environment configuration
- Test individual scripts to isolate issues
- Review Socket API documentation for endpoint changes
Note: This test suite is designed for Socket.dev customers to validate API robustness. Always test against non-production environments and follow Socket.dev's terms of service and rate limiting guidelines.