Long-term plan for futures #27
Comments
I'm a bit wary about using the `futures` crate. What could be done immediately is to rework tiny-http, or to contribute to/write an HTTP server library that is asynchronous (with mio and not tokio). But that wouldn't be useful for rouille at the moment. If coroutines eventually land in the language, we'll be able to modify rouille to be similar to koa.js:

```rust
rouille::start_server("0.0.0.0:80", move |request| {
    let id: i32 = yield!(database_query("SELECT email FROM user"));
    Response::text(id.to_string())
});
```

But that's a very long term idea. Until either coroutines or async/await land in stable Rust (which is probably not going to happen until at least 2018), nothing is going to get done here.
Since there are now plenty of helpers on crates.io for low-level HTTP (1.0/1.1 and 2), I don't think it's necessary anymore to have a low-level backend like tiny-http. After all, most of tiny-http's code is related to communication between threads and to handling the quirks of synchronous I/O. Since we're providing raw headers to the user, parsing HTTP 1.0/1.1 is actually very simple. If we switch to asynchronous I/O, most of tiny-http's code therefore becomes useless. In other words, I think we could drop tiny-http and directly write the code that handles HTTP in this crate.
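To illustrate the point that parsing an HTTP/1.0/1.1 request head is very simple when the headers are handed to the user raw, here is a minimal std-only sketch. The helper name `parse_head` and its return shape are my own for illustration; this is not tiny-http's or rouille's actual code, and a real server would also need to handle the body, chunked encoding, and malformed input.

```rust
// Minimal sketch: split an HTTP/1.x request head into method, path,
// and raw (name, value) header pairs. `parse_head` is hypothetical.
fn parse_head(head: &str) -> Option<(String, String, Vec<(String, String)>)> {
    let mut lines = head.split("\r\n");

    // Request line, e.g. "GET /users HTTP/1.1"
    let mut parts = lines.next()?.splitn(3, ' ');
    let method = parts.next()?.to_owned();
    let path = parts.next()?.to_owned();
    let _version = parts.next()?;

    // Header lines until the empty line, e.g. "Host: example.com"
    let mut headers = Vec::new();
    for line in lines.take_while(|l| !l.is_empty()) {
        let (name, value) = line.split_once(':')?;
        headers.push((name.trim().to_owned(), value.trim().to_owned()));
    }
    Some((method, path, headers))
}

fn main() {
    let raw = "GET /users HTTP/1.1\r\nHost: example.com\r\nAccept: */*\r\n\r\n";
    let (method, path, headers) = parse_head(raw).unwrap();
    assert_eq!(method, "GET");
    assert_eq!(path, "/users");
    assert_eq!(headers.len(), 2);
    println!("{} {} ({} headers)", method, path, headers.len());
}
```

Since the headers are passed through untouched, there is no per-header interpretation logic here at all, which is what keeps the parser small.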
Same for hyper, for that matter. There are only about 600 lines of code in the
The initial implementation of #144 works by having N threads poll mio events and N threads dedicated to running the request handlers, where N is the number of cores of the machine. When a mio thread detects a request on a socket, it dispatches the handling of the request to another thread, and then streams the request body. This implementation goes up to about 40-45k requests/sec on my machine (which is approximately the same as hyper 0.11). In a basic test where we invoke the handler directly in the same thread as the polling, I reached 56k requests/sec.

Eventually the threads dedicated to running the handlers should adjust dynamically, so that new threads are automatically spawned if we don't have enough of them to run the handlers. This covers the case where some handlers take a lot of time; we don't want to block half of the server if that happens.

This makes me wonder if we couldn't simply do the same for polling as well. In other words, there would be only one thread dedicated solely to polling. When a request arrives, a thread is spawned to execute the handler. When that thread finishes processing the request, it then participates in the mio polling system. If it detects a new request, it dedicates itself to that request instead of spawning a new thread. This would give us the best of both worlds. I'm not sure that this is doable, but it's worth a try I guess.
Sounds complex to implement with the current state of things. Hopefully async/await lands to make things easier to implement for both devs and users.
Has the status of this issue changed?

Two cents: I think that rouille has a very good niche as the most lightweight webserver that doesn't rely on async. Async would be somewhat faster, but it usually comes at the cost of binary size and memory usage, plus needing another heavyweight dependency like tokio. With that in mind, and considering the maintenance status, I think this issue could probably be happily closed as unplanned.

cc @bradfier
I would like to second @tgross35 on this one.
I think it's pretty reasonable to close this as "not appropriate" for Rouille - if you want to go down the route of async programming for your webserver then there are lots of well supported options for you now, and I think being the 'thread per request' option is more than good enough for this project. Thanks for the comments!
The request's headers will be entirely parsed before the handler is called, but the request's body will be a `Stream`. The handler will be able to return a `Future<Item = Response>` instead of simply a `Response`.

In practice, this means that as soon as the headers of a request are parsed, the handler is called. The handler then quickly builds a future and quickly returns. Then it's the library's code that will, through an events loop, advance the actual processing of the request.
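The "handler quickly returns a future, the library's event loop drives it" design can be sketched with a toy future type. The `SimpleFuture` trait, `DbQuery`, and `Response` below are illustrative stand-ins of my own, not the futures crate's API or rouille's; the busy loop in `main` stands in for the library's real event loop, which would wait for I/O readiness instead of spinning.

```rust
// Toy model of "the handler returns a future".
enum Poll<T> {
    Ready(T),
    Pending,
}

trait SimpleFuture {
    type Item;
    fn poll(&mut self) -> Poll<Self::Item>;
}

struct Response(String);

// A handler's future: pretends it must be polled a few times
// before the database result is available.
struct DbQuery {
    polls_left: u32,
}

impl SimpleFuture for DbQuery {
    type Item = Response;
    fn poll(&mut self) -> Poll<Response> {
        if self.polls_left == 0 {
            Poll::Ready(Response("email@example.com".to_owned()))
        } else {
            self.polls_left -= 1;
            Poll::Pending // the event loop will poll again later
        }
    }
}

// The handler builds its future quickly and returns immediately;
// no blocking work happens here.
fn handler() -> DbQuery {
    DbQuery { polls_left: 2 }
}

fn main() {
    let mut fut = handler();
    // Stand-in for the library's event loop advancing the future.
    let response = loop {
        match fut.poll() {
            Poll::Ready(r) => break r,
            Poll::Pending => continue, // real code would wait on mio here
        }
    };
    assert_eq!(response.0, "email@example.com");
    println!("{}", response.0);
}
```

The key property is the one described above: the handler itself never blocks, so a small number of event-loop threads can make progress on many requests at once.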
The user's code will probably look much messier when using futures, but that's a problem specific to Rust that may eventually be solved by adding async/await to the language.