Initial rate-limiting setup with mod_qos #35
```diff
@@ -8,6 +8,8 @@
     DocumentRoot /librivox/www/forum.librivox.org/phpBB
+
+    ErrorLog /var/log/apache2/other_vhosts_error.log
```
Might as well make this a forum-specific file, probably...
I'll adjust this shortly. Force-push incoming. 👍
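For reference, the adjusted directive would look something like the line below; the filename is a hypothetical stand-in, since the actual name chosen in the force-push isn't shown here:

```apache
# Hypothetical per-vhost log, replacing the shared other_vhosts file
ErrorLog /var/log/apache2/forum.librivox.org_error.log
```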
```apache
## Defines the maximum allowed number of concurrent TCP connections for this server.
# I'm not sure how this differs from MaxRequestWorkers (in mods-enabled/mpm_event.conf) which is set from Ansible
#QS_SrvMaxConn 300
```
I'm worried about this, because while MaxRequestWorkers will limit how many requests can be serviced at the same time, incoming requests above that limit are still queued up (based on my understanding of https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers). QS_SrvMaxConn, on the other hand, would presumably immediately return a 429 to anything above the 300 concurrent limit. Which... is maybe what we want?
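To make the distinction concrete, here's a sketch of the two mechanisms side by side. The directives are real, but the values are illustrative, not taken from this PR: MaxRequestWorkers caps how many requests the MPM services concurrently and lets the rest wait in the listen backlog, while QS_SrvMaxConn would refuse connections outright past its threshold, presumably answering with the status from QS_ErrorResponseCode:

```apache
# mods-enabled/mpm_event.conf (set from Ansible) -- illustrative values
<IfModule mpm_event_module>
    ServerLimit          12
    ThreadsPerChild      25
    MaxRequestWorkers   300   # requests above this wait in the listen backlog
</IfModule>

# mod_qos, by contrast, fails fast instead of queueing:
#QS_SrvMaxConn        300    # refuse connections past this point...
#QS_ErrorResponseCode 429    # ...with this status instead of mod_qos's default 500
```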
Right, I figured it was relevant enough to be worth including/documenting, so it's easy to reach for later. It's commented out for now.
We will be sending 429s whenever simultaneous forum requests from guests are over that 250 limit, but at least that won't cause trouble with Workflow and Catalog. 👍
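The guest limit described here could plausibly be built on mod_qos's QS_EventRequestLimit. A minimal sketch, assuming guests are recognised by the absence of a phpBB user cookie (the cookie pattern is a hypothetical placeholder, and the flag name is mine):

```apache
# Flag every request as guest traffic by default...
SetEnvIf Request_URI .* QS_GuestReq=yes
# ...then clear the flag when a phpBB user cookie is present
# (cookie name pattern is a hypothetical placeholder)
SetEnvIf Cookie "phpbb3_\w+_u=" !QS_GuestReq

# At most 250 concurrent guest/bot requests; excess gets the
# QS_ErrorResponseCode status (e.g. 429, as discussed above)
QS_EventRequestLimit QS_GuestReq 250
```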
I'm obviously worried about unknown unknowns and unintended side effects, but it's not like the current situation is sustainable/desirable, so even something with potential gotchas is still better than the status quo? I'll wait to see if you have any replies/thoughts, and then I guess we can get this merged and try it out?
This should affect just the forum, for now. Server-wide connection and keepalive limits will apply to blog/wiki/catalog/Workflow visitors, but the other limits are just on the forum. A few notables:

* GET requests are rate-limited by IP, though logged-in users have fewer limits there. POST limits are more strict (see the sketch after this list).
* Concurrent requests by guests/bots are limited to 250 (out of a max of 300, if I understand the relation to worker threads correctly). That leaves some workers in reserve for logged-in users, and for the other sites on the server.
* The forum now logs errors (including blocks) to a new file.

I'd like to also turn on mod_qos's own analysis, if we have the overhead to do so. It would show aggregate stats for connections, request rates, response times and status codes over time.
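A hedged guess at how the per-IP GET/POST limits could be expressed (the directives are real mod_qos ones, but the thresholds, the time window, and the logged-in exemption are illustrative, not taken from this PR's config):

```apache
# Count GETs and POSTs separately, per client IP, over a time window
SetEnvIf Request_Method GET  QS_Limit_GET=yes
SetEnvIf Request_Method POST QS_Limit_POST=yes

# Logged-in users get looser treatment: clear the GET counter for them
# (cookie pattern is a hypothetical placeholder, as above)
SetEnvIf Cookie "phpbb3_\w+_u=" !QS_Limit_GET

# Block a client exceeding N flagged requests within the window (in seconds)
QS_ClientEventLimitCount 300 60 QS_Limit_GET    # illustrative: 300 GETs/min
QS_ClientEventLimitCount  30 60 QS_Limit_POST   # stricter budget for POSTs
```

The "own analysis" mentioned above is presumably mod_qos's qslog utility (or its qos-viewer status handler), which aggregates access-log data into per-interval counts of requests, response codes, and response times.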
Alright, the change to the error log file is in. We might need to make some adjustments later (and the removal of SIDs from forum links also seems worth doing), but we're ready for this if you are.
Rate limits for the catalog (and for login attempts on Workflow) are on the agenda, but the main librivox.org.conf file needs careful picking apart for that.