From a09e826afcca6add2b03205bc2c80c1daaf657d3 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Mon, 21 Oct 2024 16:38:16 +0200 Subject: [PATCH 1/6] feat(tour): add test descriptions to the 'Scaling up' section MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 137 ++++++++++++++++++++++++++++- 1 file changed, 135 insertions(+), 2 deletions(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index c0804fdb..5006e9f4 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -9,6 +9,118 @@ import washboard_hello from '../images/washboard_hello.png'; ## Scaling up 📈 +So far, our hello world application can only handle a single request at a time. This is because a dedicated instance of our hello component is instantiated for each request received, but currently only a single replica is defined for it in `wadm.yaml`. Accordingly, wasmCloud instructs [wasmtime](https://wasmtime.dev/) to instantiate only a single instance for our component at any time to process incoming requests. As a result, requests received at the same time are processed sequentially, one after the other. Let's check this quickly. + + + + +For test and demonstration purposes, we add a simple `sleep` to the handler to simulate a longer processing time: + +```rust +... +use exports::wasi::http::incoming_handler::Guest; +use wasi::http::types::*; +use std::{thread, time}; // [!code ++] + +struct HttpServer; +... + { + // query string is "/?name=" e.g. localhost:8080?name=Bob + ["/?name", name] => name.to_string(), + // query string is anything else or empty e.g. localhost:8080 + _ => "World".to_string(), + }; + + let sleep = time::Duration::from_secs(2); // [!code ++:7] + wasi::logging::logging::log( + wasi::logging::logging::Level::Info, + "", + &format!("Sleep for {} to simulate longer processing time", sleep.as_secs()), + ); + thread::sleep(sleep); + + let bucket = + wasi::keyvalue::store::open("").expect("failed to open empty bucket"); + let count = wasi::keyvalue::atomics::increment(&bucket, &name, 1) + .expect("failed to increment count"); + + wasi::logging::logging::log( // [!code ++:5] + wasi::logging::logging::Level::Info, + "", + &format!("Replying greeting 'Hello x{count}, {name}!'"), + ); + + response_body + .write() + .unwrap() + .blocking_write_and_flush(format!("Hello x{count}, {name}!\n").as_bytes()) + .unwrap(); +``` + + + + +:::note[Why adding a sleep period?] +The response time of our hello handler is very low. To show that requests are not processed in parallel, we need to ensure a longer response time, which we can exploit in our tests. +::: + +Again we've made changes, so run `wash build` again to compile the updated Wasm component. + +```bash +wash build +``` + +Deploy the latest version of our component and try to send multiple requests in parallel. + +```bash +> wash app deploy wadm.yaml +> seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Alice" +Hello x1, Alice! 
+curl: (28) Operation timed out after 3002 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3003 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
+curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received
+```
+
+As you can see, only the first `curl` command receives the expected response in time, while all the others run into a timeout. However, if you check the logs of the WasmCloud host, you will see that multiple requests have been received and forwarded to our component one after the other.
+
+```txt
+2024-10-20T19:29:30.897232Z INFO log: wasmcloud_host::wasmbus::handler: Greeting Bob component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:30.897253Z INFO log: wasmcloud_host::wasmbus::handler: Sleep for 2 to simulate longer processing time component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:32.905355Z INFO log: wasmcloud_host::wasmbus::handler: Replying greeting 'Hello x1, Bob!' component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:32.906138Z INFO log: wasmcloud_host::wasmbus::handler: Greeting Bob component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:32.906258Z INFO log: wasmcloud_host::wasmbus::handler: Sleep for 2 to simulate longer processing time component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:34.914152Z INFO log: wasmcloud_host::wasmbus::handler: Replying greeting 'Hello x2, Bob!' component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:34.914992Z INFO log: wasmcloud_host::wasmbus::handler: Greeting Bob component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:34.915023Z INFO log: wasmcloud_host::wasmbus::handler: Sleep for 2 to simulate longer processing time component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:36.923568Z INFO log: wasmcloud_host::wasmbus::handler: Replying greeting 'Hello x3, Bob!' component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:36.924326Z INFO log: wasmcloud_host::wasmbus::handler: Greeting Bob component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:36.924351Z INFO log: wasmcloud_host::wasmbus::handler: Sleep for 2 to simulate longer processing time component_id="rust_hello_world-http_component" level=Level::Info context=""
+2024-10-20T19:29:38.933227Z INFO log: wasmcloud_host::wasmbus::handler: Replying greeting 'Hello x4, Bob!' component_id="rust_hello_world-http_component" level=Level::Info context=""
+```
+
+:::note[Checking the `DEBUG` logs of the wasmCloud host]
+You can also check the wasmCloud host's `DEBUG` logs for more detailed information. In these logs, you can clearly see that for received requests, our hello component is instantiated sequentially.
+:::
+
+If you want you can also check the received and forwarded messages in the corresponding NATS subject.
+ +```bash +nats sub "*.*.wrpc.>" +``` + +:::note[Why are not all requests forwarded to our component?] +TBD + +**Note:** After multiple requests were received but timed out, it is no longer possible to send further requests to our hello application for the reasons mentioned above. To continue, we must first delete and redeploy our application. +::: + +To receive multiple requests in parallel, we need to instruct wasmCloud to scale our component according to the incoming load. WebAssembly can be easily scaled due to its small size, portability, and [wasmtime](https://wasmtime.dev/)'s ability to efficiently instantiate multiple instances of a single WebAssembly component. We leverage these aspects to make it simple to scale your applications with wasmCloud. Components only use resources when they're actively processing requests, so you can specify the number of replicas you want to run and wasmCloud will automatically scale up and down to meet demand. Let's scale up our hello world application to 100 replicas by editing `wadm.yaml`: ```yaml {15-17} @@ -27,11 +139,32 @@ spec: traits: - type: spreadscaler properties: - # Update the scale to 100 + instances: 1 { // [!code --] + # Update the scale to 100 // [!code ++:2] replicas: 100 +... ``` -Now your hello application is ready to deploy v0.0.3 with 100 replicas, meaning it can handle up to 100 concurrent incoming HTTP requests. Just run `wash app deploy wadm.yaml` again, wasmCloud will be configured to automatically scale your component based on incoming load. +Now our hello component is ready to be deployed as version 0.0.3 with 100 replicas, meaning it can handle up to 100 simultaneous incoming HTTP requests. Just run `wash app deploy wadm.yaml` again and wasmCloud will be able to automatically scale the component according to the incoming load. Let's deploy the component and try again to send multiple requests in parallel. + +```bash +> wash app deploy wadm.yaml +> seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Bob" +Hello x1, Bob! +Hello x2, Bob! +Hello x3, Bob! +Hello x5, Bob! +Hello x4, Bob! +Hello x6, Bob! +Hello x8, Bob! +Hello x7, Bob! +Hello x9, Bob! +Hello x10, Bob! +``` + +:::note[Utilization planing is important] +As you have seen, if a component receives too many requests in parallel, it may break down and wasmCloud will not be able to forward further requests. Therefore, it is important to plan and manage the specified number of replicas for Spreadscaler components according to the expected load. +::: ## Distribute Globally 🌍 From c7fc7580d44415f473064c8aafe0d0e540356dde Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Sat, 26 Oct 2024 23:15:15 +0200 Subject: [PATCH 2/6] feat(tour): replaces nats CLI example with wash CLI MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index f8a9fe46..50eac875 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -116,10 +116,10 @@ As you can see, only the first `curl` command receives the expected response in You can also check the wasmCloud host's `DEBUG` logs for more detailed information. In these logs, you can clearly see that for received requests, our hello component is instantiated sequentially. 
::: -If you want you can also check the received and forwarded messages in the corresponding NATS subject. +If you wish, you can also use `wash spy` to check which messages the capability providers and the hello component have received and sent ```bash -nats sub "*.*.wrpc.>" +wash spy --experimental rust_hello_world-http_component ``` :::note[Why are not all requests forwarded to our component?] From d2af6fc6cc3527f2d5c92f418824afc97584000d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Sat, 9 Nov 2024 01:34:41 +0100 Subject: [PATCH 3/6] feat(tour): added parallel executed curl example for windows powershell MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 50 +++++++++++++++++++++++++++++- 1 file changed, 49 insertions(+), 1 deletion(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index 50eac875..9f9ecb49 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -80,6 +80,9 @@ wash build Deploy the latest version of our component and try to send multiple requests in parallel. + + + ```bash > wash app deploy wadm.yaml > seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Alice" @@ -95,6 +98,27 @@ curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received ``` + + + +```bash +> wash app deploy wadm.yaml +> 1..10 | ForEach-Object -Parallel { curl --max-time 3 'localhost:8080?name=Alice' } +Hello x1, Alice! +curl: (28) Operation timed out after 3002 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3003 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received +curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received +``` + + + + As you can see, only the first `curl` command receives the expected response in time, while all the others run into a timeout. However, if you check the logs of the WasmCloud host, you will see that multiple requests have been received and forwarded to our component one after the other. ```txt @@ -149,12 +173,15 @@ spec: properties: instances: 1 { // [!code --] # Update the scale to 100 // [!code ++:2] - replicas: 100 + instances: 100 ... ``` Now our hello component is ready to be deployed as version 0.0.3 with 100 replicas, meaning it can handle up to 100 simultaneous incoming HTTP requests. Just run `wash app deploy wadm.yaml` again and wasmCloud will be able to automatically scale the component according to the incoming load. Let's deploy the component and try again to send multiple requests in parallel. + + + ```bash > wash app deploy wadm.yaml > seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Bob" @@ -170,6 +197,27 @@ Hello x9, Bob! Hello x10, Bob! ``` + + + +```bash +> wash app deploy wadm.yaml +> 1..10 | ForEach-Object -Parallel { curl --max-time 3 'localhost:8080?name=Bob' } +Hello x1, Bob! +Hello x2, Bob! +Hello x3, Bob! +Hello x5, Bob! 
+Hello x4, Bob! +Hello x6, Bob! +Hello x8, Bob! +Hello x7, Bob! +Hello x9, Bob! +Hello x10, Bob! +``` + + + + :::note[Utilization planing is important] As you have seen, if a component receives too many requests in parallel, it may break down and wasmCloud will not be able to forward further requests. Therefore, it is important to plan and manage the specified number of replicas for Spreadscaler components according to the expected load. ::: From 64a4fa637c4a004adec21e3340403af4e956a1c4 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Tue, 12 Nov 2024 22:29:22 +0100 Subject: [PATCH 4/6] feat(tour): added sleep example for tinygo-hello component MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 71 +++++++++++++++++++++++++++--- 1 file changed, 64 insertions(+), 7 deletions(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index 9f9ecb49..4cf96d41 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -19,10 +19,67 @@ This tutorial assumes you're following directly from the previous steps. Make su So far, our hello world application can only handle a single request at a time. This is because a dedicated instance of our hello component is instantiated for each request received, but currently only a single replica is defined for it in `wadm.yaml`. Accordingly, wasmCloud instructs [wasmtime](https://wasmtime.dev/) to instantiate only a single instance for our component at any time to process incoming requests. As a result, requests received at the same time are processed sequentially, one after the other. Let's check this quickly. +For test and demonstration purposes, we add a simple `sleep` to the handler to simulate a longer processing time: + - + + +```go +//go:generate go run github.com/bytecodealliance/wasm-tools-go/cmd/wit-bindgen-go generate --world hello --out gen ./wit +package main + +import ( + "fmt" + "net/http" + "time" // [!code ++] + + atomics "github.com/wasmcloud/wasmcloud/examples/golang/components/http-hello-world/gen/wasi/keyvalue/atomics" + store "github.com/wasmcloud/wasmcloud/examples/golang/components/http-hello-world/gen/wasi/keyvalue/store" + "go.wasmcloud.dev/component/log/wasilog" + "go.wasmcloud.dev/component/net/wasihttp" +) + +func init() { + // Register the handleRequest function as the handler for all incoming requests. + wasihttp.HandleFunc(handleRequest) +} + +func handleRequest(w http.ResponseWriter, r *http.Request) { + logger := wasilog.ContextLogger("handleRequest") + + name := "World" + if len(r.FormValue("name")) > 0 { + name = r.FormValue("name") + } + logger.Info("Greeting", "name", name) + + sleep := 2 * time.Second // [!code ++:3] + logger.Info(fmt.Sprintf("Sleep for %v to simulate longer processing time", sleep)) + time.Sleep(sleep) + + kvStore := store.Open("default") + if err := kvStore.Err(); err != nil { + w.Write([]byte("Error: " + err.String())) + return + } + value := atomics.Increment(*kvStore.OK(), name, 1) + if err := value.Err(); err != nil { + w.Write([]byte("Error: " + err.String())) + return + } + + logger.Info(fmt.Sprintf("Replying greeting 'Hello x%d, %s!'", *value.OK(), name)) // [!code ++] + + fmt.Fprintf(w, "Hello x%d, %s!\n", *value.OK(), name) +} + +// Since we don't run this program like a CLI, the `main` function is empty. Instead, +// we call the `handleRequest` function when an HTTP request is received. 
+func main() {} +``` -For test and demonstration purposes, we add a simple `sleep` to the handler to simulate a longer processing time: + + ```rust ... @@ -72,7 +129,7 @@ struct HttpServer; The response time of our hello handler is very low. To show that requests are not processed in parallel, we need to ensure a longer response time, which we can exploit in our tests. ::: -Again we've made changes, so run `wash build` again to compile the updated Wasm component. +Because we've made changes, run `wash build` again to compile the updated Wasm component. ```bash wash build @@ -99,9 +156,9 @@ curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received ``` - + -```bash +```powershell > wash app deploy wadm.yaml > 1..10 | ForEach-Object -Parallel { curl --max-time 3 'localhost:8080?name=Alice' } Hello x1, Alice! @@ -198,9 +255,9 @@ Hello x10, Bob! ``` - + -```bash +```powershell > wash app deploy wadm.yaml > 1..10 | ForEach-Object -Parallel { curl --max-time 3 'localhost:8080?name=Bob' } Hello x1, Bob! From e3864013d9697389b672de40da90926e79ee7599 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Wed, 13 Nov 2024 21:46:35 +0100 Subject: [PATCH 5/6] feat(tour): added sleep example for typescript-hello component MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 50 ++++++++++++++++++++++++------ 1 file changed, 41 insertions(+), 9 deletions(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index 1d8f2e1f..3f67f1c9 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -89,12 +89,11 @@ use std::{thread, time}; // [!code ++] struct HttpServer; ... - { - // query string is "/?name=" e.g. localhost:8080?name=Bob - ["/?name", name] => name.to_string(), - // query string is anything else or empty e.g. localhost:8080 - _ => "World".to_string(), - }; + wasi::logging::logging::log( + wasi::logging::logging::Level::Info, + "", + &format!("Greeting {name}"), + ); let sleep = time::Duration::from_secs(2); // [!code ++:7] wasi::logging::logging::log( @@ -122,6 +121,39 @@ struct HttpServer; .unwrap(); ``` + + + + ```typescript + ... + // Write to the response stream + const name = getNameFromPath(req.pathWithQuery() || ''); + + log('info', '', `Greeting ${name}`); + + const sleep = 2000; // [!code ++:3] + log('info', '', `Sleep for ${sleep} to simulate longer processing time`); + await new Proise(resolve => setTimeout(resolve, sleep)); + + // Increment the bucket's count + const bucket = open('default'); + const count = increment(bucket, name, 1); + + log('info', '', `Replying greeting - Hello x{count}, {name}!`); // [!code ++] + + { + // Create a stream for the response body + let outputStream = outgoingBody.write(); + // Write hello world to the response stream + outputStream.blockingWriteAndFlush( + new Uint8Array(new TextEncoder().encode(`Hello x${count}, ${name}!\n`)), + ); + // @ts-ignore: This is required in order to dispose the stream before we return + outputStream[Symbol.dispose](); + } + ... + ``` + @@ -210,7 +242,7 @@ TBD ::: To receive multiple requests in parallel, we need to instruct wasmCloud to scale our component according to the incoming load. -WebAssembly can be easily scaled due to its small size, portability, and [wasmtime](https://wasmtime.dev/)'s ability to efficiently instantiate multiple instances of a single WebAssembly component. 
We leverage these aspects to make it simple to scale your applications with wasmCloud. Components only use resources when they're actively processing requests, so you can specify the number of replicas you want to run and wasmCloud will automatically scale up and down to meet demand. Let's scale up our hello world application to 100 replicas by editing `wadm.yaml`: +WebAssembly can be easily scaled due to its small size, portability, and [wasmtime](https://wasmtime.dev/)'s ability to efficiently instantiate multiple instances of a single WebAssembly component. We leverage these aspects to make it simple to scale your applications with wasmCloud. Components only use resources when they're actively processing requests, so you can specify the number of replicas you want to run and wasmCloud will automatically scale up and down to meet demand. Let's allow our hello world application to scale up to 100 instances simultaneously by editing `wadm.yaml`: ```yaml {15-17} apiVersion: core.oam.dev/v1beta1 @@ -234,7 +266,7 @@ spec: ... ``` -Now our hello component is ready to be deployed as version 0.0.3 with 100 replicas, meaning it can handle up to 100 simultaneous incoming HTTP requests. Just run `wash app deploy wadm.yaml` again and wasmCloud will be able to automatically scale the component according to the incoming load. Let's deploy the component and try again to send multiple requests in parallel. +Now our hello component is ready to be deployed as version 0.0.3 with up to 100 instances, meaning it can handle up to 100 simultaneous incoming HTTP requests. Just run `wash app deploy wadm.yaml` again and wasmCloud will be able to automatically scale the component according to the incoming load. Let's deploy the component and try again to send multiple requests in parallel. @@ -276,7 +308,7 @@ Hello x10, Bob! :::note[Utilization planing is important] -As you have seen, if a component receives too many requests in parallel, it may break down and wasmCloud will not be able to forward further requests. Therefore, it is important to plan and manage the specified number of replicas for Spreadscaler components according to the expected load. +As you have seen, if a component receives too many requests in parallel, it may break down and wasmCloud will not be able to forward further requests. Therefore, it is important to plan and manage the specified maximum number of concurrent instances for Spreadscaler components according to the expected load. ::: ## Distribute Globally 🌍 From a922dbf6db80c0335da3080b22edfd127751114c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Florian=20F=C3=BCrstenberg?= Date: Mon, 18 Nov 2024 17:05:19 +0100 Subject: [PATCH 6/6] feat(tour): updated rust example and added note for broken wrpc invokation MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Signed-off-by: Florian Fürstenberg --- docs/tour/scale-and-distribute.mdx | 102 ++++++++++++++++++----------- 1 file changed, 63 insertions(+), 39 deletions(-) diff --git a/docs/tour/scale-and-distribute.mdx b/docs/tour/scale-and-distribute.mdx index 3f67f1c9..d668cb26 100644 --- a/docs/tour/scale-and-distribute.mdx +++ b/docs/tour/scale-and-distribute.mdx @@ -22,7 +22,7 @@ So far, our hello world application can only handle a single request at a time. 
For test and demonstration purposes, we add a simple `sleep` to the handler to simulate a longer processing time: - + ```go //go:generate go run github.com/bytecodealliance/wasm-tools-go/cmd/wit-bindgen-go generate --world hello --out gen ./wit @@ -82,43 +82,48 @@ func main() {} ```rust -... -use exports::wasi::http::incoming_handler::Guest; -use wasi::http::types::*; +use wasmcloud_component::http::ErrorCode; +use wasmcloud_component::wasi::keyvalue::*; +use wasmcloud_component::{http, info}; use std::{thread, time}; // [!code ++] -struct HttpServer; -... - wasi::logging::logging::log( - wasi::logging::logging::Level::Info, - "", - &format!("Greeting {name}"), - ); - - let sleep = time::Duration::from_secs(2); // [!code ++:7] - wasi::logging::logging::log( - wasi::logging::logging::Level::Info, - "", - &format!("Sleep for {} to simulate longer processing time", sleep.as_secs()), - ); +struct Component; + +http::export!(Component); + +impl http::Server for Component { + fn handle( + request: http::IncomingRequest, + ) -> http::Result> { + let (parts, _body) = request.into_parts(); + let query = parts + .uri + .query() + .map(ToString::to_string) + .unwrap_or_default(); + let name = match query.split("=").collect::>()[..] { + ["name", name] => name, + _ => "World", + }; + + info!("Greeting {name}"); + + let sleep = time::Duration::from_secs(2); // [!code ++:3] + info!("Sleep for {} to simulate longer processing time", sleep.as_secs()); thread::sleep(sleep); - let bucket = - wasi::keyvalue::store::open("").expect("failed to open empty bucket"); - let count = wasi::keyvalue::atomics::increment(&bucket, &name, 1) - .expect("failed to increment count"); - - wasi::logging::logging::log( // [!code ++:5] - wasi::logging::logging::Level::Info, - "", - &format!("Replying greeting 'Hello x{count}, {name}!'"), - ); - - response_body - .write() - .unwrap() - .blocking_write_and_flush(format!("Hello x{count}, {name}!\n").as_bytes()) - .unwrap(); + let bucket = store::open("default").map_err(|e| { + ErrorCode::InternalError(Some(format!("failed to open KV bucket: {e:?}"))) + })?; + let count = atomics::increment(&bucket, &name, 1).map_err(|e| { + ErrorCode::InternalError(Some(format!("failed to increment counter: {e:?}"))) + })?; + + info!("Replying greeting 'Hello x{count}, {name}!'"); // [!code ++] + + Ok(http::Response::new(format!("Hello x{count}, {name}!\n"))) + } +} ``` @@ -225,20 +230,39 @@ As you can see, only the first `curl` command receives the expected response in 2024-10-20T19:29:38.933227Z INFO log: wasmcloud_host::wasmbus::handler: Replying greeting 'Hello x4, Bob!' component_id="rust_hello_world-http_component" level=Level::Info context="" ``` -:::note[Checking the `DEBUG` logs of the wasmCloud host] -You can also check the wasmCloud host's `DEBUG` logs for more detailed information. In these logs, you can clearly see that for received requests, our hello component is instantiated sequentially. +:::note[Checking the `DEBUG` or `TRACE` logs of the wasmCloud host] +You can also check the wasmCloud host's `DEBUG` or `TRACE` logs for more detailed information (e.g. using `wash up --log-level=debug`). In these logs, you can clearly see that for received requests, our hello component is instantiated sequentially. 
::: If you wish, you can also use `wash spy` to check which messages the capability providers and the hello component have received and sent + + + + +```bash +wash spy --experimental tinygo_hello_world-http_component +``` + + + + ```bash wash spy --experimental rust_hello_world-http_component ``` -:::note[Why are not all requests forwarded to our component?] -TBD + + + +```bash +wash spy --experimental typescript_hello_world-http_component +``` + + + -**Note:** After multiple requests were received but timed out, it is no longer possible to send further requests to our hello application for the reasons mentioned above. To continue, we must first delete and redeploy our application. +:::note[Requests are not forwarded to our component anymore] +After multiple requests were received but timed out, it is no longer possible to send further requests to our hello application. The reason for this is that the httpserver capability provider is no longer able to invoke the hello component via NATS. To continue, we must first delete and redeploy our application. ::: To receive multiple requests in parallel, we need to instruct wasmCloud to scale our component according to the incoming load.
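
If the application stopped responding after the timed-out requests described above, it has to be removed and redeployed before the parallel test can be repeated. As a minimal sketch of that reset, assuming the application name declared in the `metadata` section of `wadm.yaml` is `hello-world` (substitute the name from your own manifest):

```bash
# Remove the broken deployment (use the application name from your wadm.yaml metadata)
wash app delete hello-world

# Redeploy the manifest; with the spreadscaler now allowing 100 instances,
# the component can keep up with the parallel curl requests
wash app deploy wadm.yaml

# Optionally check that the application is reported as deployed before retesting
wash app list
```

With the higher instance count in place, rerunning the parallel `curl` test should return a greeting for every request instead of timing out.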