Implement Rate Limiting Middleware for API Protection #33

@djdiptayan1

Description

✨ Feature Description

Is your feature request related to a problem? Please describe.
Currently, the GCSRM Server lacks rate limiting protection, which makes it vulnerable to:

  • API abuse and spam requests
  • DDoS attacks that could overwhelm the server
  • Excessive resource consumption by individual clients
  • Potential security vulnerabilities from automated attacks
  • Poor performance during traffic spikes

Describe the solution you'd like
Implement a configurable rate limiting middleware that can:

  • Limit requests per IP address within a specified time window
  • Support different rate limits for different API endpoints
  • Provide meaningful error responses when limits are exceeded
  • Include headers indicating current usage and reset times
  • Store rate limit data efficiently (in-memory or Redis)

Describe alternatives you've considered

  • Using cloud-based solutions (AWS API Gateway, Cloudflare) - but this adds an external dependency
  • Implementing rate limiting at the reverse proxy level (nginx) - but this reduces application-level control
  • Using third-party services - but this increases costs and complexity

💡 Detailed Proposal

Implementation Approach

  1. Create a new middleware in src/middleware/rateLimiting.js (a minimal sketch follows this list)
  2. Use the popular express-rate-limit package as the foundation
  3. Add Redis support for distributed rate limiting (optional)
  4. Configure different limits for different endpoint categories
  5. Integrate with existing error handling middleware
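
A minimal sketch of what src/middleware/rateLimiting.js could look like, using express-rate-limit (the option values below are illustrative defaults, not final settings):

// src/middleware/rateLimiting.js
const rateLimit = require("express-rate-limit");

// General-purpose limiter; values are placeholders for the configurable defaults
const generalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100,                 // limit each IP to 100 requests per window
  standardHeaders: true,    // send standard RateLimit-* headers
  legacyHeaders: true,      // also send the X-RateLimit-* headers proposed below
  message: {
    success: false,
    error: "Too Many Requests",
    message: "Rate limit exceeded. Please try again later.",
  },
});

module.exports = { generalLimiter };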

API Changes (if applicable)

  • No breaking changes to existing endpoints
  • New response headers added:
    • X-RateLimit-Limit: Maximum requests allowed
    • X-RateLimit-Remaining: Remaining requests in current window
    • X-RateLimit-Reset: Time when the rate limit resets
  • New HTTP 429 "Too Many Requests" responses when limits are exceeded

Database Changes (if applicable)

  • No database schema changes required
  • Optional: Redis integration for distributed rate limiting (see the sketch after this list)
  • In-memory storage by default (using Map or similar)
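
A sketch of how the optional Redis store could be wired in, assuming the node-redis client and the rate-limit-redis package (the exact import shape of RedisStore varies between package versions):

// Optional: Redis-backed store for distributed rate limiting
const rateLimit = require("express-rate-limit");
const { RedisStore } = require("rate-limit-redis");
const { createClient } = require("redis");

const redisClient = createClient({ url: process.env.REDIS_URL });
redisClient.connect(); // in real code, await this during server startup

const distributedLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  store: new RedisStore({
    // Forward Redis commands to the shared client
    sendCommand: (...args) => redisClient.sendCommand(args),
  }),
});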

Frontend Impact (if applicable)

  • Frontend applications should handle 429 responses gracefully
  • Display user-friendly messages when rate limits are hit
  • Implement exponential backoff for retry logic (see the sketch after this list)
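
A client-side sketch of the suggested retry behaviour, using fetch with exponential backoff (the retry count and delays are assumptions):

// Retry a request with exponential backoff when the API answers 429
async function fetchWithBackoff(url, options = {}, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Prefer the server's Retry-After header when present, otherwise back off exponentially
    const retryAfter = Number(response.headers.get("Retry-After"));
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return fetch(url, options); // final attempt; let the caller handle a remaining 429
}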

🎯 Use Case

Who would benefit from this feature?

  • End users (protected from service degradation)
  • Developers (better API reliability)
  • System administrators (server protection)
  • API consumers (fair usage for all clients)

Example Scenario
A malicious user or bot starts making hundreds of requests per second to the /api/teams endpoint. Without rate limiting, this could:

  1. Overwhelm the MongoDB database
  2. Slow down responses for legitimate users
  3. Potentially crash the server
  4. Consume excessive server resources

With rate limiting implemented:

  1. After 100 requests in 15 minutes, further requests return 429
  2. Legitimate users continue to receive fast responses
  3. Server resources are protected
  4. Malicious traffic is automatically throttled

📋 Acceptance Criteria

  • Rate limiting middleware blocks excessive requests from single IP
  • Configurable limits per endpoint or endpoint group
  • Proper HTTP 429 responses with descriptive error messages
  • Rate limit headers included in all responses
  • Integration with existing error handling middleware
  • Unit tests covering rate limiting scenarios
  • Documentation updated with rate limiting configuration
  • Environment variables for easy configuration (see the sketch after this list)
  • Graceful handling when rate limit storage is unavailable
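
A small sketch of environment-driven configuration (the variable names are assumptions, not existing settings):

// Read limits from the environment, falling back to sensible defaults
const rateLimit = require("express-rate-limit");

const limiter = rateLimit({
  windowMs: Number(process.env.RATE_LIMIT_WINDOW_MS) || 15 * 60 * 1000,
  max: Number(process.env.RATE_LIMIT_MAX) || 100,
});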

🔧 Technical Considerations

Dependencies

  • express-rate-limit: Core rate limiting functionality
  • rate-limit-redis (optional): For Redis-based storage
  • Redis client (optional): For distributed rate limiting

Security Implications

  • Protect against brute force attacks on authentication endpoints
  • Prevent API abuse and resource exhaustion
  • Consider IP spoofing and proxy detection
  • Whitelist internal services if needed
  • Allow health check endpoints to bypass rate limiting (see the sketch after this list)
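
A sketch of how the health check bypass and proxy handling could look (the /health path and the trust proxy setting are assumptions that depend on the deployment):

const express = require("express");
const rateLimit = require("express-rate-limit");

const app = express();

// Behind a reverse proxy (nginx, a load balancer), Express must trust the proxy
// so that req.ip reflects the real client address rather than the proxy's.
app.set("trust proxy", 1);

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  // Let internal health checks bypass rate limiting entirely
  skip: (req) => req.path === "/health",
});

app.use("/api", apiLimiter);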

Performance Impact

  • Minimal overhead for request processing
  • In-memory storage has O(1) lookup time
  • Redis storage adds network latency but enables horizontal scaling
  • Consider cleanup of expired rate limit entries
  • Monitor memory usage with high traffic

📸 Mockups/Examples (Optional)

Example Rate Limit Configuration:

// Different limits for different endpoint types
const rateLimitConfig = {
  // General API endpoints
  general: {
    windowMs: 15 * 60 * 1000, // 15 minutes
    max: 100, // limit each IP to 100 requests per windowMs
  },
  // Authentication endpoints (stricter)
  auth: {
    windowMs: 15 * 60 * 1000,
    max: 5, // only 5 login attempts per 15 minutes
  },
  // Public endpoints (more lenient)
  public: {
    windowMs: 15 * 60 * 1000,
    max: 200,
  }
};
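
One way the rateLimitConfig object above could be applied per route group (the route paths and routers are placeholders, not the server's actual modules):

const express = require("express");
const rateLimit = require("express-rate-limit");

const app = express();

// Hypothetical routers standing in for the real route modules
const authRouter = express.Router();
const teamsRouter = express.Router();

const makeLimiter = (config) =>
  rateLimit({ ...config, standardHeaders: true, legacyHeaders: true });

// Stricter limits for authentication, general limits elsewhere
app.use("/api/auth", makeLimiter(rateLimitConfig.auth), authRouter);
app.use("/api/teams", makeLimiter(rateLimitConfig.general), teamsRouter);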

Example Response Headers:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 85
X-RateLimit-Reset: 1699123456

Example 429 Error Response:

{
  "success": false,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 14 minutes.",
  "retryAfter": 840
}
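
A sketch of a custom handler that could produce a response in this shape; it assumes express-rate-limit's req.rateLimit.resetTime is available, which depends on the store in use:

const rateLimit = require("express-rate-limit");

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  handler: (req, res) => {
    // resetTime is a Date marking the end of the current window (when the store provides it)
    const resetTime = req.rateLimit && req.rateLimit.resetTime;
    const retryAfter = resetTime
      ? Math.max(1, Math.ceil((resetTime.getTime() - Date.now()) / 1000))
      : 15 * 60;
    res.status(429).json({
      success: false,
      error: "Too Many Requests",
      message: `Rate limit exceeded. Try again in ${Math.ceil(retryAfter / 60)} minutes.`,
      retryAfter,
    });
  },
});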

🌟 Benefits

Value Proposition

  • Protects server infrastructure from abuse
  • Ensures fair usage among all API consumers
  • Improves overall system reliability and performance
  • Reduces hosting costs by preventing resource waste
  • Enhances security posture against automated attacks

User Impact

  • More consistent API response times
  • Better service availability during traffic spikes
  • Protection from service degradation caused by abusive users
  • Clear feedback when usage limits are approached

Business Impact

  • Reduced infrastructure costs and downtime
  • Better user experience leading to higher satisfaction
  • Compliance with API best practices
  • Foundation for future API monetization strategies

⚡ Priority

How important is this feature to you?

  • Nice to have
  • Important
  • Critical
  • Blocking

Timeline Expectations
This feature would be ideal to implement during Hacktoberfest 2025 (October) as it's a well-scoped enhancement that provides significant value.

✅ Checklist

  • I have searched for existing feature requests
  • I have provided a clear description of the proposed feature
  • I have explained the motivation and use case
  • I have considered implementation details
  • I have thought about potential challenges or drawbacks

🏷️ Labels to Add (Maintainers Only)

  • hacktoberfest (suitable for Hacktoberfest contributors)
  • good first issue (moderate complexity, better for intermediate contributors)
  • help wanted (community help is welcomed)
  • Priority: priority/medium
  • Size: size/medium
  • Component: middleware
  • Type: type/enhancement

🤝 Contribution Interest

Would you be interested in implementing this feature?

  • Yes, I'd like to work on this
  • Yes, with guidance from maintainers
  • No, but I'm available for testing/feedback
  • No, I'm just suggesting the idea

Implementation Notes for Contributors:
This is a great Hacktoberfest contribution opportunity! The implementation involves:

  1. Setting up express-rate-limit middleware
  2. Creating configurable rate limiting rules
  3. Adding proper error handling and responses
  4. Writing comprehensive tests (see the test sketch after this list)
  5. Updating documentation
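
A test sketch for the rate limiting behaviour, assuming Jest and supertest as dev dependencies:

const request = require("supertest");
const express = require("express");
const rateLimit = require("express-rate-limit");

test("returns 429 once the limit is exceeded", async () => {
  const app = express();
  app.use(rateLimit({ windowMs: 60 * 1000, max: 2 }));
  app.get("/api/teams", (req, res) => res.json({ success: true }));

  await request(app).get("/api/teams").expect(200);
  await request(app).get("/api/teams").expect(200);
  await request(app).get("/api/teams").expect(429);
});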

The feature is well-defined with clear acceptance criteria, making it perfect for intermediate developers looking to contribute meaningful functionality.


Thank you for helping make GCSRM Server better! 🚀
