From e2944342fe465a30bb85ba782f7c8dec28a84587 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 15:42:57 +0200 Subject: [PATCH 01/38] SRVLOGIC-261: Sync getting-started latest release --- ...rkflow-service-with-kn-cli-and-vscode.adoc | 23 ++-- .../getting-familiar-with-our-tooling.adoc | 4 +- .../java-embedded-workflows.adoc | 124 ++++++++++++++++++ .../getting-started/learning-environment.adoc | 10 +- .../preparing-environment.adoc | 71 ++++++---- .../production-environment.adoc | 14 -- 6 files changed, 188 insertions(+), 58 deletions(-) create mode 100644 modules/serverless-logic/pages/getting-started/java-embedded-workflows.adoc diff --git a/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc b/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc index a7447f25..88ca5e6c 100644 --- a/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc +++ b/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc @@ -3,15 +3,15 @@ This guide showcases using the Knative Workflow CLI plugin and Visual Studio code to create & run {product_name} projects. .Prerequisites -* You have setup your environment according xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide. -* Install https://k9scli.io/[k9scli.io] for easier inspection of your application resources in cluster. This is optional, you can use any tool you are fimiliar with in this regard. +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide. +* Install link:{k9s_url}[k9scli.io] for easier inspection of your application resources in the cluster. This is optional, you can use any tool you are familiar with in this regard. [[proc-creating-app-with-kn-cli]] == Creating a workflow project with Visual Studio Code and KN CLI Use the `create` command with kn workflow to scaffold a new SonataFlow project. -* Navigate to you development directory and create your project. +* Navigate to your development directory and create your project. [source,bash] ---- kn workflow create -n my-sonataflow-project @@ -21,7 +21,7 @@ kn workflow create -n my-sonataflow-project ---- cd ./my-sonataflow-project ---- -* Open the folder in Visual Studo Code and examine the created `workflow.sw.json` using our extension. +* Open the folder in Visual Studio Code and examine the created `workflow.sw.json` using our extension. Now you can run the project and execute the workflow. @@ -35,7 +35,7 @@ Use the `run` command with kn workflow to build and run the {product_name} proje ---- kn workflow run ---- -* The Development UI wil be accesible at `localhost:8080/q/dev` +* The Development UI will be accessible at `localhost:8080/q/dev` * You can now work on your project. Any changes will be picked up by the hot reload feature. * See xref:testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc[Workflow instances] guide on how to run workflows via Development UI. * Once you are done developing your project navigate to the terminal that is running the `kn workflow run` command and hit `Ctlr+C` to stop the development environment. @@ -47,6 +47,11 @@ To deploy the finished project to a local cluster, proceed to the next section. 
Use the `deploy` command with kn workflow to deploy the {product_name} project into your local cluster. +* Create a namespace for your application +[source,bash] +---- +kubectl create namespace my-sf-application +---- * Deploy to cluster [source,bash] ---- @@ -65,7 +70,7 @@ Minikube:: minikube service hello --namespace my-sf-application --url ---- * Use this URL to access your workflow instances using the Developer UI -** {sonataflow_devmode_devui_url}/workflowInstances +** {sonataflow_devmode_devui_url}workflows -- Kind:: + @@ -86,18 +91,18 @@ kubectl port-forward service/hello :80 -n my-sf-application -- ==== -* To update the image run the `deploy` again, note that this may take some time. +* To update the image, run the `deploy` again, note that this may take some time. * To stop the deployment, use the `undeploy` command: [source,bash] ---- -kn worklow undeploy --namespace my-sf-application +kn workflow undeploy --namespace my-sf-application ---- * You can validate your pod is terminating using k9s cli. [[proc-testing-application]] == Testing your workflow application -To test your workflow application you can use any capable REST client out there. All that is needeed is the URL of your deployed worklow project. +To test your workflow application you can use any capable REST client out there. All that is needed is the URL of your deployed workflow project. .Prerequisites * You have your workflow project deployed using <> and you have the URL where it is deployed handy. diff --git a/modules/serverless-logic/pages/getting-started/getting-familiar-with-our-tooling.adoc b/modules/serverless-logic/pages/getting-started/getting-familiar-with-our-tooling.adoc index 3044dea3..880d8ca9 100644 --- a/modules/serverless-logic/pages/getting-started/getting-familiar-with-our-tooling.adoc +++ b/modules/serverless-logic/pages/getting-started/getting-familiar-with-our-tooling.adoc @@ -4,8 +4,6 @@ // Metadata: :description: Kogito Serverless Workflow Tooling :keywords: kogito, workflow, serverless, editor -// links -:kubesmarts_url: https://start.kubesmarts.org/ The tooling in {product_name} provides the best developer experience for the workflow ecosystem. The following tools are provided that you can use to author your workflow assets: @@ -13,6 +11,6 @@ The tooling in {product_name} provides the best developer experience for the wor * xref:tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc[*Chrome GitHub extension*]: View and edit the CNCF Serverless Workflow specification files in GitHub. * xref:testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-overview.adoc[*Kogito Serverless Workflow Tools extension in Quarkus Dev UI*]: View, manage, and start the workflow instances. * xref:testing-and-troubleshooting/kn-plugin-workflow-overview.adoc[*{product_name} plug-in for Knative CLI*]: Set up a local workflow project using the command line. -* link:{kubesmarts_url}[*Serverless Logic online tooling*]: Try and run the Serverless Workflow example applications in a web environment. +* link:{serverless_logic_web_tools_url}[*Serverless Logic online tooling*]: Try and run the Serverless Workflow example applications in a web environment. 
include::../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/getting-started/java-embedded-workflows.adoc b/modules/serverless-logic/pages/getting-started/java-embedded-workflows.adoc new file mode 100644 index 00000000..86c86e6e --- /dev/null +++ b/modules/serverless-logic/pages/getting-started/java-embedded-workflows.adoc @@ -0,0 +1,124 @@ += Workflow embedded execution in Java +:compat-mode!: +// Metadata: +:description: Embedded execution of Workflows +:keywords: kogito, workflow, embedded, java, sonataflow + + + +This guide uses a standard Java virtual machine and a small set of Maven dependencies to execute a link:{spec_doc_url}[CNCF Serverless Workflow] definition. Therefore, it is assumed you are fluent both in Java and Maven. +The workflow definition to be executed can be read from a `.json` or `.yaml` file or programmatically defined using the {product_name} fluent API. + +.Prerequisites +. Install link:{openjdk_install_url}[OpenJDK] {java_min_version} +. Install link:{maven_install_url}[Apache Maven] {maven_min_version}. + +[[embedded-file-quick-start]] +== Hello world (using existing definition file) + +The first step is to set up an empty Maven project that includes link:{swf_executor_core_maven_repo_url}[Workflow executor core] dependency. + +This guide also uses link:{slf4j_simple_maven_repo_url}[slf4j dependency] to avoid using `System.out.println` + +Let's assume you already have a workflow definition written in a JSON file in your project root directory. For example, link:{kogito_sw_examples_url}/serverless-workflow-hello-world/src/main/resources/hello.sw.json[Hello World] definition. To execute it, you must write the following main Java class (standard Java imports and package declaration are intentionally skipped for briefness) + +[source,java] +---- +import org.kie.kogito.serverless.workflow.executor.StaticWorkflowApplication; +import org.kie.kogito.serverless.workflow.models.JsonNodeModel; +import org.kie.kogito.serverless.workflow.utils.ServerlessWorkflowUtils; +import org.kie.kogito.serverless.workflow.utils.WorkflowFormat; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import io.serverlessworkflow.api.Workflow; + +public class DefinitionFileExecutor { + private static final Logger logger = LoggerFactory.getLogger(DefinitionFileExecutor.class); + + public static void main(String[] args) throws IOException { + try (Reader reader = new FileReader("hello.sw.json"); <1> + StaticWorkflowApplication application = StaticWorkflowApplication.create()) { <2> + Workflow workflow = ServerlessWorkflowUtils.getWorkflow(reader, WorkflowFormat.JSON); <3> + JsonNodeModel result = application.execute(workflow, Collections.emptyMap()); <4> + logger.info("Workflow execution result is {}", result.getWorkflowdata()); <5> + } + } +} +---- +<1> Reads the workflow file definition from the project root directory +<2> Creates a static workflow application object. It is done within the try block since the instance is `Closeable`. This is the reference that allow you to execute workflow definitions. +<3> Reads the Serverless Workflow Java SDK `Workflow` object from the file. +<4> Execute the workflow, passing `Workflow` reference and no parameters (an empty Map). The result of the workflow execution: process instance id and workflow output model, can be accessed using `result` variable. +<5> Prints the workflow model in the configured standard output. 
+ +If you compile and execute this Java class, you will see the following log in your configured standard output: +---- +Workflow execution result is {"greeting":"Hello World","mantra":"Serverless Workflow is awesome!"} +---- + +[[embedded-fluent-quick-start]] +== Hello world (using fluent API) + +Adding link:{swf_fluent_maven_repo_url}[kogito-serverless-workflow-fluent] dependency to the Maven setup in the previous section, you can programmatically generate that workflow definition rather than loading it from a file definition by using the link:{kogito_runtimes_url}/kogito-serverless-workflow/kogito-serverless-workflow-fluent/src/main/java/org/kie/kogito/serverless/workflow/fluent[fluent API] + +Therefore, you can modify the previous example to generate the same output when it is executed, but rather than creating a `FileReader` that reads the `Workflow` object, we create the `Workflow` object using Java statements. The resulting modified main method is the following + +[source,java] +---- + try (StaticWorkflowApplication application = StaticWorkflowApplication.create()) { + Workflow workflow = workflow("HelloWorld"). <1> + start( <2> + inject( <3> + jsonObject().put("greeting", "Hello World").put("mantra","Serverless Workflow is awesome!"))) <4> + .end() <5> + .build(); <6> + logger.info("Workflow execution result is {}",application.execute(workflow, Collections.emptyMap()).getWorkflowdata()); <7> + } +---- +<1> Creates a workflow which name is `HelloWorld` +<2> Indicate that you are going to specify the start state +<3> A Inject state is the start state +<4> Inject state accepts static json, therefore this line creates the JSON data +<5> End the workflow definition +<6> Build the workflow definition +<7> Execute and print as in previous example + +=== Additional fluent examples + +You can find additional and commented examples of fluent API usage (including jq expression evaluation and orchestration of rest services) link:{kogito_sw_examples_url}/sonataflow-fluent[here] + +== Dependencies explanation + +Embedded workflow uses a modular approach to keep the number of required dependencies as small as possible. The rationale is to avoid adding something that you will not use to the dependency set. For example, the OpenAPI module is based on a Swagger parser, if you are not going to call any OpenAPI service, it is better to avoid adding Swagger to the dependency set. This means the link:{swf_executor_core_maven_repo_url}[Core] dependency does not include the stuff to use gRPC, OpenAPI, or most custom function types. + +This is the list of additional dependencies you might need to add depending on the functionality you are using: + +* link:{swf_executor_rest_maven_repo_url}[REST]: Add if you use xref:core/custom-functions-support#con-func-rest[custom Rest] function type. +* link:{swf_executor_service_maven_repo_url}[Service]: Add if your use xref:core/custom-functions-support#con-func-java[custom Service] function type. +* link:{swf_executor_openapi_maven_repo_url}[OpenAPI]: Add if you use link:{spec_doc_url}#using-functions-for-restful-service-invocations[OpenAPI] function type. See link:{kogito_runtimes_swf_test_url}/OpenAPIWorkflowApplicationTest.java[example]. +* link:{swf_executor_grpc_maven_repo_url}[gRPC]: Add if you use link:{spec_doc_url}#using-functions-for-rpc-service-invocations[gRPC] function type. See link:{kogito_runtimes_swf_test_url}/RPCWorkflowApplicationTest.java[example]. 
+* link:{swf_executor_python_maven_repo_url}[Python]: Add if you use xref:core/custom-functions-support#con-func-python[custom Python] function type. See link:{kogito_runtimes_swf_url}/kogito-serverless-workflow-executor-python/src/test/java/org/kie/kogito/serverless/workflow/executor/PythonFluentWorkflowApplicationTest.java[example]. +* link:{swf_executor_events_maven_repo_url}[Events]: Add if you use Event or Callback state in your workflow. Only Kafka events are supported right now. See link:{kogito_runtimes_swf_url}/kogito-serverless-workflow-executor-kafka/src/test/java/org/kie/kogito/serverless/workflow/executor/WorkflowEventPublisherTest.java[Publisher] and link:{kogito_runtimes_swf_url}/kogito-serverless-workflow-executor-kafka/src/test/java/org/kie/kogito/serverless/workflow/executor/WorkflowEventSusbcriberTest.java[Subscriber] examples. + +== Persistence support + +To enable persistence, you must include the desired {product_name} persistence add-on as a dependency and set up `StaticWorkflowApplication` to use the `ProcessInstances` implementation provided by the add-on. + +Since, within an embedded environment, you usually do not want to contact an external database, the recommendation is to use the link:{rocksdb_url}[rocksdb] embedded database. You do that by adding the link:{rocksdb_addon_maven_repo_url}[rocksdb add-on] dependency and adding the following code snippet when you create your `StaticWorkflowApplication` object. + +[source,java] +---- + StaticWorkflowApplication.create().processInstancesFactory(new RocksDBProcessInstancesFactory(new Options().setCreateIfMissing(true), tempDir.toString())) +---- + +See the link:{kogito_runtimes_swf_test_url}/PersistentApplicationTest.java[persistence example]. + + +== Additional resources + +include::../../pages/_common-content/report-issue.adoc[] + +ifeval::["{kogito_version_redhat}" != ""] +include::../../pages/_common-content/downstream-project-setup-instructions.adoc[] +endif::[] diff --git a/modules/serverless-logic/pages/getting-started/learning-environment.adoc b/modules/serverless-logic/pages/getting-started/learning-environment.adoc index 13c230eb..30516b05 100644 --- a/modules/serverless-logic/pages/getting-started/learning-environment.adoc +++ b/modules/serverless-logic/pages/getting-started/learning-environment.adoc @@ -1,15 +1,15 @@ = Learning environment .Prerequisites -* Basic knowledge of cloud environments, containers, docker and Kubernetes -* You are familiar with https://github.com/serverlessworkflow/specification/blob/0.8.x/specification.md[CNCF Serverless Workflow Specification 0.8] +* Basic knowledge of cloud environments, containers, Docker and Kubernetes +* You are familiar with link:{spec_doc_url}[CNCF Serverless Workflow Specification {spec_version}] -If you are new to {product_name} we recommend a few starting points to get up to speed with the technology and what it has to offer. +If you are new to {product_name}, we recommend a few starting points to get up to speed with the technology and what it has to offer. -* Read the xref:core/cncf-serverless-workflow-specification-support.adoc[serverless workflow specification and what is supported]. +* Read the xref:core/cncf-serverless-workflow-specification-support.adoc[Serverless Workflow specification and what is supported]. * Try our link:{serverless_logic_web_tools_url}#/sample-catalog?category=serverless-workflow[{serverless_logic_web_tools_name} samples]. 
-Once familiar with the specification and samples, navigate to xref:getting-started/preparing-environment.adoc[] guide to complete the necesarry setup of your environment. After that, you should be ready to create your first {product_name} application. +Once familiar with the specification and samples, navigate to xref:getting-started/preparing-environment.adoc[] guide to complete the necessary setup of your environment. After that, you should be ready to create your first {product_name} application. == Additional resources diff --git a/modules/serverless-logic/pages/getting-started/preparing-environment.adoc b/modules/serverless-logic/pages/getting-started/preparing-environment.adoc index ca18d19c..a9f1365c 100644 --- a/modules/serverless-logic/pages/getting-started/preparing-environment.adoc +++ b/modules/serverless-logic/pages/getting-started/preparing-environment.adoc @@ -3,52 +3,69 @@ This guide lists the different ways to set up your environment for {product_name} development. If you are new, start with the minimal one. +.Prerequisites +* A machine with at least 8GB memory and a link:https://en.wikipedia.org/wiki/Multi-core_processor[CPU with 8 cores] + [[proc-minimal-local-environment-setup]] == Minimal local environment setup -Recommended steps to setup your local development environment. By completing these steps you are able to +Recommended steps to set up your local development environment. By completing these steps you are able to start the development on your local machine using our guides. .Procedure -. Install https://docs.docker.com/engine/install/[Docker] or https://podman.io/docs/installation[Podman]. -. Install https://minikube.sigs.k8s.io/docs/start/[minikube] or https://kind.sigs.k8s.io/docs/user/quick-start/#installation[kind]. -. Install https://kubernetes.io/docs/tasks/tools/[Kubernetes CLI]. -. Install https://knative.dev/docs/install/quickstart-install/[Knative using quickstart]. This will also setup Knative Serving and Eventing for you and the cluster should be running. -. xref:cloud/operator/install-serverless-operator.adoc[] +. Install link:{docker_install_url}[Docker] or link:{podman_install_url}[Podman]. +. Install link:{minikube_start_url}[minikube] or link:{kind_install_url}[kind]. +. Install link:{kubectl_install_url}[Kubernetes CLI]. +. Install link:{knative_quickstart_url}[Knative using quickstart]. This will also set up Knative Serving and Eventing for you and the cluster should be running. +. Install the xref:cloud/operator/install-serverless-operator.adoc#_sonataflow_operator_manual_installation[{operator_name} manually]. . Install xref:testing-and-troubleshooting/kn-plugin-workflow-overview.adoc[Knative Workflow CLI]. -. Install https://code.visualstudio.com/[Visual Studio Code] with https://marketplace.visualstudio.com/items?itemName=kie-group.swf-vscode-extension[our extension] that simplifies development of workflows by provifing visual aids and auto-complete features. +. Install link:{visual_studio_code_url}[Visual Studio Code] with link:{visual_studio_code_swf_extension_url}[our extension] that simplifies development of workflows by providing visual aids and auto-complete features. [[proc-starting-cluster-fo-local-development]] == Starting the cluster for local development -If you have used https://knative.dev/docs/install/quickstart-install/[Knative using quickstart] guide, your selected cluster should be running and properly configured to work with our guides. 
+If you have used link:{knative_quickstart_url}[Knative using quickstart] guide, your selected cluster should be running and properly configured to work with our guides. -Please note, that if the knative quickstart procedure is not used, you need to install Knative Serving and Eventing manually. See <>. +Please note, that if the Knative quickstart procedure is not used, you need to install Knative Serving and Eventing manually. See <>. -.To startup the selected cluster without quickstart, use the following command: +.To start up the selected cluster without quickstart, use the following command: [tabs] ==== -Minikube:: +Minikube with Docker:: + -- -.Configure and startup minikube +.Configure and startup minikube with Docker [source,shell] ---- # Set a driver and container runtime -minikube config set driver docker/podman -minikube config set container-runtime docker/podman +minikube config set driver docker +minikube config set container-runtime docker + +# Start the cluster +# Set the memory to at least 4096, increase to 6144 or 8192 if possible +minikube start --cpus 4 --memory 4096 --addons registry --addons metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry "localhost:5000" -# Set cpu and memory -# 4096 is minimal baseline, increase to 6144 or 8192 if possible -minikube config set cpus 4 -minikube config set memory 4096 +# Set the active profile +minikube profile knative +---- +-- +Minikube with Podman:: ++ +-- +.Configure and startup minikube with Podman +[source,shell] +---- +# Set a driver and container runtime +minikube config set driver podman +minikube config set container-runtime podman # Start the cluster +# Set the memory to at least 4096, increase to 6144 or 8192 if possible minikube start --cpus 4 --memory 4096 --addons registry --addons metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry "localhost:5000" # Set the active profile -minikube profile minikube +minikube profile knative ---- -- Kind:: @@ -69,18 +86,18 @@ If you are interested in our Java and Quarkus development path, consider complet <>. By completing these steps you are able to start the development of applications on your local machine using our xref:use-cases/advanced-developer-use-cases/index.adoc[advanced developer guides]. .Procedure -. Install https://openjdk.org/[OpenJDK] {java_min_version} and cofigure `JAVA_HOME` appropriately by adding it to the `PATH`. -. Install https://maven.apache.org/index.html[Apache Maven] {maven_min_version}. -. Install https://quarkus.io/guides/cli-tooling[Quarkus CLI] corresponding to currently supported version by {product_name}. Currently it is {quarkus_version}. +. Install link:{openjdk_install_url}[OpenJDK] {java_min_version} and configure `JAVA_HOME` appropriately by adding it to the `PATH`. +. Install link:{maven_install_url}[Apache Maven] {maven_min_version}. +. Install link:{quarkus_cli_url}[Quarkus CLI] corresponding to the currently supported version by {product_name}. Currently, it is {quarkus_version}. [[proc-additional-options-for-local-environment]] -== Additional options for environment setup +== Additional options for local environment setup -Points listed in this section provide extra posibilties when working with our guides and are considered optional. +Points listed in this section provide extra possibilities when working with our guides and are considered optional. -* Install https://www.graalvm.org/[GraalVM] {graalvm_min_version}. 
This will allow you to create https://www.graalvm.org/22.0/reference-manual/native-image/[native image] of your {product_name} application. -* Install https://docs.openshift.com/serverless/1.32/install/installing-knative-serving.html[Knative Serving] for advanced customizations or in cases where you are working with Openshift. -* Install https://docs.openshift.com/serverless/1.32/install/installing-knative-eventing.html[Knative Eventing] for advanced customizations or in cases where you are working with Openshift. +* Install link:{graalvm_url}[GraalVM] {graalvm_min_version}. This will allow you to create link:{graalvm_native_image_url}[native image] of your {product_name} application. +* Install link:{knative_serving_install_yaml_url}[Knative Serving using YAML files] for advanced customizations or in cases where the quickstart procedure fails. +* Install link:{knative_eventing_install_yaml_url}[Knative Eventing using YAML files] for advanced customizations or in cases where the quickstart procedure fails. == Additional resources diff --git a/modules/serverless-logic/pages/getting-started/production-environment.adoc b/modules/serverless-logic/pages/getting-started/production-environment.adoc index e8c9c539..3385e046 100644 --- a/modules/serverless-logic/pages/getting-started/production-environment.adoc +++ b/modules/serverless-logic/pages/getting-started/production-environment.adoc @@ -1,19 +1,5 @@ = Production environment -[IMPORTANT] -==== -[subs="attributes+"] -{product_name} is a Technology Preview feature only. Technology Preview features -are not supported with Red Hat production service level agreements (SLAs) and -might not be functionally complete. Red Hat does not recommend using them -in production. These features provide early access to upcoming product -features, enabling customers to test functionality and provide feedback during -the development process. - -For more information about the support scope of Red Hat Technology Preview -features, see https://access.redhat.com/support/offerings/techpreview/. -==== - In thise guide, you can find {product_name} recommendations and best-practices for production environment. As a cluster environment, We recommend using https://docs.openshift.com/container-platform/4.15/welcome/index.html[Red Hat Openshift Container Platform]. From b7195be0965f4f56bb8c08057c848f3d8a4a08b1 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 15:48:21 +0200 Subject: [PATCH 02/38] SRVLOGIC-261: Add Java workflow intro to index --- modules/serverless-logic/pages/index.adoc | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/modules/serverless-logic/pages/index.adoc b/modules/serverless-logic/pages/index.adoc index dd64217f..6029e2ca 100644 --- a/modules/serverless-logic/pages/index.adoc +++ b/modules/serverless-logic/pages/index.adoc @@ -54,6 +54,15 @@ xref:getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.a An all-in-one starting guide. Learn how to create, run & deploy your first {product_name} project on your local environment. -- +[.card] +-- +[.card-title] +xref:getting-started/java-embedded-workflows.adoc[] + +Learn about how to execute your workflows (existing files or define them programmatically) using Java and Maven. 
+[.card-description] +-- + [.card-section] == Core Concepts From a825755cae3942a0c25fb8245837adc3de99e4c4 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 15:52:54 +0200 Subject: [PATCH 03/38] SRVLOGIC-261: Sync core to latest version --- ...erless-workflow-specification-support.adoc | 114 +++++++++--------- .../pages/core/configuration-properties.adoc | 4 +- .../pages/core/custom-functions-support.adoc | 13 +- ...efining-an-input-schema-for-workflows.adoc | 2 +- .../core/handling-events-on-workflows.adoc | 1 - .../core/understanding-jq-expressions.adoc | 25 ++-- .../pages/core/working-with-parallelism.adoc | 8 +- 7 files changed, 85 insertions(+), 82 deletions(-) diff --git a/modules/serverless-logic/pages/core/cncf-serverless-workflow-specification-support.adoc b/modules/serverless-logic/pages/core/cncf-serverless-workflow-specification-support.adoc index dfbf5ee0..c35c6f16 100644 --- a/modules/serverless-logic/pages/core/cncf-serverless-workflow-specification-support.adoc +++ b/modules/serverless-logic/pages/core/cncf-serverless-workflow-specification-support.adoc @@ -6,7 +6,7 @@ // links :quarkus_config_guide_url: https://quarkus.io/guides/config-reference -This document describes the information about the implementation of the link:{spec_website_url}[Cloud Native Computing Foundation (CNCF) Serverless Workflow] specification. {product_name} implements version link:{spec_doc_url}[{spec_version}] of the Serverless Workflow specification. +This document provides an overview of how SonataFlow implements the link:{spec_website_url}[Cloud Native Computing Foundation (CNCF) Serverless Workflow] specification. {product_name} implements version link:{spec_doc_url}[{spec_version}] of the Serverless Workflow specification. The following table shows the implementation status for each Serverless Workflow specification feature. @@ -23,13 +23,13 @@ specification. | Icon | Description | emoji:full_moon[] -| Fully implemented feature and compliant with the Serverless Workflow specification +| Feature fully implemented and compliant with the Serverless Workflow specification | emoji:last_quarter_moon[] -| Partially implemented feature +| Feature partially implemented | emoji:construction[] -| Not implemented +| Feature not implemented |=== @@ -38,49 +38,49 @@ specification. 
|=== | Feature | Status | Reference -| <> +| <> | emoji:full_moon[] -| link:{spec_doc_url}#workflow-states[Workflow States] +| link:{spec_doc_url}#Workflow-Compensation[Workflow Compensation] -| <> -| emoji:last_quarter_moon[] -| link:{spec_doc_url}#Function-Definition[Function Definition] +| <> +| emoji:full_moon[] +| link:{spec_doc_url}#workflow-constants[Workflow Constants] + +| <> +| emoji:full_moon[] +| link:{spec_doc_url}#Workflow-Error-Handling[Workflow Error Handling] | <> | emoji:last_quarter_moon[] | link:{spec_doc_url}#Event-Definition[Event Definition] -| <> -| emoji:full_moon[] -| link:{spec_doc_url}#Workflow-Data[Workflow Data] - | <> | emoji:full_moon[] | link:{spec_doc_url}#Workflow-Expressions[Workflow Expressions] -| <> -| emoji:full_moon[] -| link:{spec_doc_url}#Workflow-Error-Handling[Workflow Error Handling] +| <> +| emoji:last_quarter_moon[] +| link:{spec_doc_url}#Function-Definition[Function Definition] | <> | emoji:construction[] | link:{spec_doc_url}#Retry-Definition[Retry Definition] -| <> -| emoji:last_quarter_moon[] -| link:{spec_doc_url}#workflow-timeouts[Workflow Timeouts] - -| <> +| <> | emoji:full_moon[] -| link:{spec_doc_url}#Workflow-Compensation[Workflow Compensation] +| link:{spec_doc_url}#workflow-secrets[Workflow Secrets] -| <> +| <> | emoji:full_moon[] -| link:{spec_doc_url}#workflow-constants[Workflow Constants] +| link:{spec_doc_url}#Workflow-Data[Workflow Data] -| <> +| <> | emoji:full_moon[] -| link:{spec_doc_url}#workflow-secrets[Workflow Secrets] +| link:{spec_doc_url}#workflow-states[Workflow States] + +| <> +| emoji:last_quarter_moon[] +| link:{spec_doc_url}#workflow-timeouts[Workflow Timeouts] |=== [[states]] @@ -88,79 +88,75 @@ specification. The link:{spec_doc_url}#parallel-state[Parallel State] of the workflow states feature works in a single thread. This means that a Parallel State does not create one thread per branch, simulating an actual parallel behavior. - If an exclusive property is set to `false`, you should not use the link:{spec_doc_url}#event-state[Event State] of the workflow states feature as the starting state. In case, if it is specified that way, then it will behave as if an exclusive property was set to `true`. +If an `exclusive` property is set to `false`, you should not use the link:{spec_doc_url}#Event-State[Event State] of the workflow states feature as the starting state. In case that it is specified that way, it will behave as if an `exclusive` property was set to `true`. [NOTE] ==== {product_name} does not support the link:{spec_doc_url}#sleep-state[Sleep State] feature. However, this feature will be supported in a future release. 
==== -The following table shows all the workflow states that {product_name} supports in the Serverless Workflow specification {spec_version} version: +The following table shows the implementation status in {product_name} of workflow states of the Serverless Workflow specification {spec_version} version: .Workflow States implementation status [cols="35%,30%,35%", options="header"] |=== | State | Status | Reference +| Callback +| emoji:full_moon[] +| link:{spec_doc_url}#Callback-State[Callback State] + | Event | emoji:last_quarter_moon[] | link:{spec_doc_url}#Event-State[Event State] -| Operation +| ForEach | emoji:full_moon[] -| link:{spec_doc_url}#Operation-State[Operation State] +| link:{spec_doc_url}#ForEach-State[ForEach State] -| Switch +| Inject | emoji:full_moon[] -| link:{spec_doc_url}#Switch-State[Switch State] +| link:{spec_doc_url}#Inject-State[Inject State] -| Sleep -| emoji:construction[] -| link:{spec_doc_url}#sleep-state[Sleep State] +| Operation +| emoji:full_moon[] +| link:{spec_doc_url}#Operation-State[Operation State] | Parallel | emoji:last_quarter_moon[] | link:{spec_doc_url}#Parallel-State[Parallel State] -| Inject -| emoji:full_moon[] -| link:{spec_doc_url}#Inject-State[Inject State] - -| ForEach -| emoji:full_moon[] -| link:{spec_doc_url}#ForEach-State[ForEach State] +| Sleep +| emoji:construction[] +| link:{spec_doc_url}#sleep-state[Sleep State] -| Callback +| Switch | emoji:full_moon[] -| link:{spec_doc_url}#Callback-State[Callback State] +| link:{spec_doc_url}#Switch-State[Switch State] |=== [[functions]] == Functions -The following table shows the status of the workflow functions that {product_name} supports: +The following table shows the implementation status of the workflow functions that {product_name} supports: .Workflow Functions implementation status [cols="35%,30%,35%", options="header"] |=== | Function | Status | Reference -| REST -| emoji:full_moon[] -| link:{spec_doc_url}#using-functions-for-restful-service-invocations[Using Functions for RESTful Service Invocations] +| AsyncAPI +| emoji:construction[] +| link:{spec_doc_url}#using-functions-for-async-api-service-invocations[Using Functions for AsyncAPI Service Invocations] -| RPC +| Custom | emoji:full_moon[] -| link:{spec_doc_url}#using-functions-for-rpc-service-invocations[Using Functions for RPC Service Invocations] +| link:{spec_doc_url}#defining-custom-function-types[Defining custom function types] | Expression | emoji:full_moon[] | link:{spec_doc_url}#using-functions-for-expression-evaluation[Using Functions for Expression Evaluation] -| Async API -| emoji:construction[] -| link:{spec_doc_url}#using-functions-for-async-api-service-invocations[Using Functions for Async API Service Invocations] - | GraphQL | emoji:construction[] | link:{spec_doc_url}#using-functions-for-graphql-service-invocations[Using Functions for GraphQL Service Invocations] @@ -169,9 +165,13 @@ The following table shows the status of the workflow functions that {product_nam | emoji:construction[] | link:{spec_doc_url}#using-functions-for-odata-service-invocations[Using Functions for OData Service Invocations] -| Custom +| REST | emoji:full_moon[] -| link:{spec_doc_url}#defining-custom-function-types[Defining custom function types] +| link:{spec_doc_url}#using-functions-for-restful-service-invocations[Using Functions for RESTful Service Invocations] + +| RPC +| emoji:full_moon[] +| link:{spec_doc_url}#using-functions-for-rpc-service-invocations[Using Functions for RPC Service Invocations] |=== For additional functions, the 
Serverless Workflow specification support the `custom` function type, such as `sysout` and `java`. For more information about these custom function types, see xref:core/custom-functions-support.adoc[Custom functions for your {product_name} service]. @@ -233,7 +233,7 @@ Alternatively, you can use xref:core/understanding-workflow-error-handling.adoc[ [[timeouts]] == Timeouts -{product_name} has limited support for the timeouts feature, which covers only workflow and event timeouts. +{product_name} has limited support for the timeouts feature, covering only workflow and event timeouts. For start event state the `exclusive` property is not supported if set to `false`, therefore the timeout is not supported for the event state when starting a workflow. diff --git a/modules/serverless-logic/pages/core/configuration-properties.adoc b/modules/serverless-logic/pages/core/configuration-properties.adoc index 0216d316..b46bb54b 100644 --- a/modules/serverless-logic/pages/core/configuration-properties.adoc +++ b/modules/serverless-logic/pages/core/configuration-properties.adoc @@ -16,10 +16,8 @@ The following table serves as a quick reference for commonly used configuration a|Defines the type of persistence database. The possible values of this property include: * `jdbc` -* `mongodb` * `filesystem` * `kafka` -* `infinispan` * `postgresql` |string | @@ -138,7 +136,7 @@ a|Defines strategy to generate the configuration key of open API specifications. |`quarkus.kogito.devservices.image-name` |Defines the Data Index image to use. |string -|`{kogito_devservices_imagename}:{page-component-version}` +|`quay.io/kiegroup/kogito-data-index-ephemeral:{page-component-version}` |No |`quarkus.kogito.devservices.shared` diff --git a/modules/serverless-logic/pages/core/custom-functions-support.adoc b/modules/serverless-logic/pages/core/custom-functions-support.adoc index 6bbbee12..62bdf789 100644 --- a/modules/serverless-logic/pages/core/custom-functions-support.adoc +++ b/modules/serverless-logic/pages/core/custom-functions-support.adoc @@ -300,6 +300,10 @@ The Camel route is responsible to produce the return value in a way that the wor include::../../pages/_common-content/camel-valid-responses.adoc[] +[[con-func-python]] +== Python custom function +{product_name} implements a custom function to execute embedded Python scripts and functions. See xref:use-cases/advanced-developer-use-cases/integrations/custom-functions-python.adoc[Invoking Python from {product_name}] + [[con-func-knative]] == Knative custom function @@ -531,7 +535,8 @@ kogito.sw.functions.greet.timeout=5000 <1> ---- <1> Time in milliseconds -== Rest custom function +[[con-func-rest]] +== REST custom function Serverless Workflow Specification defines the xref:service-orchestration/orchestration-of-openapi-based-services.adoc[OpenAPI function type], which is the preferred way to interact with existing REST servers. However, sometimes a workflow should interact with several REST APIs that are not described using an OpenAPI specification file. Since generating such files for these services might be tedious, {product_name} offers REST custom type as a shortcut. @@ -582,7 +587,7 @@ This particular endpoint expects as body a JSON object whose field `numbers` is If `inputNumbers` contains `1`, `2`, and `3`, the output of the call will be `1*3+2*3+3*3=18. -In case you want to specify headers in your HTTP request, you might do it by adding arguments starting with the `HEADER_` prefix. 
Therefore if you add `"HEADER_ce_id": "123"` to the previous argument set, you will be adding a header named `ce_id` with the value `123` to your request. A similar approach might be used to add query params to a GET request, in that case, you must add arguments starting with the `QUERY_` prefix. Note that you can also use {} notation for replacements of query parameters included directly in the `operation` string. +In case you want to specify headers in your HTTP request, you might do it by adding arguments starting with the `HEADER_` prefix. Therefore, if you add `"HEADER_ce_id": "123"` to the previous argument set, you will be adding a header named `ce_id` with the value `123` to your request. A similar approach might be used to add query params to a GET request, in that case, you must add arguments starting with the `QUERY_` prefix. Note that you can also use {} notation for replacements of query parameters included directly in the `operation` string. For example, given the following function definition that performs a `get` request @@ -635,7 +640,7 @@ It must contain a Java class that inherits from `WorkItemTypeHandler`. Its respo + The runtime project consists of a `WorkflowWorkItemHandler` implementation, which name must match with the one provided to `WorkItemNodeFactory` during the deployment phase, and a `WorkItemHandlerConfig` bean that registers that handler with that name. + -When a Serverless Workflow function is called, Kogito identifies the proper `WorkflowWorkItemHandler` instance to be used for that function type (using the handler name associated with that type by the deployment project) and then invokes the `internalExecute` method. The `Map` parameter contains the function arguments defined in the workflow, and the `WorkItem` parameter contains the metadata information added to the handler by the deployment project. Hence, the `executeWorkItem` implementation has an access to all the information needed to perform the computational logic intended for that custom type. +When a Serverless Workflow function is called, Kogito identifies the proper `WorkflowWorkItemHandler` instance to be used for that function type (using the handler name associated with that type by the deployment project) and then invokes the `internalExecute` method. The `Map` parameter contains the function arguments defined in the workflow, and the `WorkItem` parameter contains the metadata information added to the handler by the deployment project. Hence, the `executeWorkItem` implementation has access to all the information needed to perform the computational logic intended for that custom type. === Custom function type example @@ -662,7 +667,7 @@ The `operation` starts with `rpc`, which is the custom type identifier, and cont A Kogito addon that defines the `rpc` custom type must be developed for this function definition to be identified. It is consist of a link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment[deployment project] and a link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc[runtime project]. 
-The deployment project is responsible for extending the link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment/src/main/java/org/kie/kogito/examples/sw/services/RPCCustomTypeHandler.java[`WorkItemTypeHandler`] and setup the `WorkItemNodeFactory` as follows: +The deployment project is responsible for extending the link:{kogito_sw_examples_url}/serverless-workflow-custom-type/serverless-workflow-custom-rpc-deployment/src/main/java/org/kie/kogito/examples/sw/services/RPCCustomTypeHandler.java[`WorkItemTypeHandler`] and setup of the `WorkItemNodeFactory` as follows: .Example of the RPC function Java implementation diff --git a/modules/serverless-logic/pages/core/defining-an-input-schema-for-workflows.adoc b/modules/serverless-logic/pages/core/defining-an-input-schema-for-workflows.adoc index 1b3509ff..119162c4 100644 --- a/modules/serverless-logic/pages/core/defining-an-input-schema-for-workflows.adoc +++ b/modules/serverless-logic/pages/core/defining-an-input-schema-for-workflows.adoc @@ -25,7 +25,7 @@ In the previous definition, the `schema` property is a URI, which holds the path == Output schema -Serverless Workflow specification does not support JSON output schema until version 0.9. Therefore {product_name} is implementing it as a link:{spec_doc_url}#extensions[Serverless Workflow specification extension]. Output schema is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes. +Serverless Workflow specification does not support JSON output schema until version 0.9. Therefore, {product_name} is implementing it as a link:{spec_doc_url}#extensions[Serverless Workflow specification extension]. Output schema is applied after workflow execution to verify that the output model has the expected format. It is also useful for Swagger generation purposes. Similar to Input schema, you must specify the URL to the JSON schema, using `outputSchema` as follows: diff --git a/modules/serverless-logic/pages/core/handling-events-on-workflows.adoc b/modules/serverless-logic/pages/core/handling-events-on-workflows.adoc index 3a7e3b4b..e988610a 100644 --- a/modules/serverless-logic/pages/core/handling-events-on-workflows.adoc +++ b/modules/serverless-logic/pages/core/handling-events-on-workflows.adoc @@ -142,4 +142,3 @@ Similar to the callback state in a workflow, the workflow instance to be resumed include::../../pages/_common-content/report-issue.adoc[] - \ No newline at end of file diff --git a/modules/serverless-logic/pages/core/understanding-jq-expressions.adoc b/modules/serverless-logic/pages/core/understanding-jq-expressions.adoc index 4912454b..4bc255bc 100644 --- a/modules/serverless-logic/pages/core/understanding-jq-expressions.adoc +++ b/modules/serverless-logic/pages/core/understanding-jq-expressions.adoc @@ -15,7 +15,7 @@ The workflow expressions in the link:{spec_doc_url}#workflow-expressions[Serverl This document describes the usage of jq expressions in functions, switch state conditions, action function arguments, data filtering, and event publishing. -JQ expression might be tricky to master, for non trivial cases, it is recommended to use helper tools like link:{jq_play}[JQ Play] to validate the expression before including it in the workflow file. +JQ expression might be tricky to master, for non-trivial cases, it is recommended to use helper tools like link:{jq_play}[JQ Play] to validate the expression before including it in the workflow file. 
[[ref-example-jq-expression-function]] == Example of jq expression in functions @@ -217,8 +217,8 @@ The previous example of the event filter copies the content of CloudEvent data ` -- [[ref-example-jq-expression-event-publishing]] -== Example of jq expressions in event publishing. --- +== Example of jq expressions in event publishing + When publishing a Cloud Event, you can select the data that is being published using a jq expression that generates a JSON object. Note that in yaml double quotes are required to allow using `{}` characters. .Example data expression returning an object @@ -238,7 +238,7 @@ In the previous example, a CloudEvent was published when the state transitioned. ---- data={"gitRepo":"ssh://bitbucket.org/m2k-test","branch":"aaaaaaasssss","token":null,"workspaceId":"b93980cb-3943-4223-9441-8694c098eeb9","projectId":"9b305fe3-d441-48ce-b01b-d314e86e14ec","transformId":"723dce89-c25c-4c7b-9ef3-842de92e6fe6","workflowCallerId":"7ddb5193-bedc-4942-a857-596b31f377ed"} ---- --- + == Workflow secrets, constants and context @@ -265,14 +265,15 @@ So, assuming you have added to your `application.properties` a line with the `my Besides constants and secrets, you might access contextual information of the running workflow by using the $WORKFLOW reserved word. {product_name} supports the following contextual keys: - * `id`: The id of the running workflow definition - * `name`: The name of the running workflow definition - * `instanceId`: The id of the running workflow instance - * `headers`: Optional map containing the headers, if any, of the invocation that started the running workflow instance - * `prevActionResult`: In a `foreach` state using multiple actions per loop cycle, give access to the result of the previous action. See link:{kogito_sw_examples_url}/serverless-workflow-foreach-quarkus/src/main/resources/foreach.sw.json#L11[example] - * `identity`: Quarkus security identity + +* `id`: The id of the running workflow definition +* `name`: The name of the running workflow definition +* `instanceId`: The id of the running workflow instance +* `headers`: Optional map containing the headers, if any, of the invocation that started the running workflow instance +* `prevActionResult`: In a `foreach` state using multiple actions per loop cycle, give access to the result of the previous action. 
See link:{kogito_sw_examples_url}/serverless-workflow-foreach-quarkus/src/main/resources/foreach.sw.json#L11[example] +* `identity`: Quarkus security identity - Therefore, the following function, for a serverless workflow definition whose id is `expressionTest`, will append the string `worklow id is expressionTest` to the `message` variable +Therefore, the following function, for a serverless workflow definition whose id is `expressionTest`, will append the string `worklow id is expressionTest` to the `message` variable ---- { @@ -291,7 +292,7 @@ This feature was used to add quarkus security identity support, you can check so == Additional resources -* link:{jq_play}[JQ Play offline] +* link:{jq_play} [JQ Play offline] * xref:service-orchestration/configuring-openapi-services-endpoints.adoc[Configuring the OpenAPI services endpoints] include::../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/core/working-with-parallelism.adoc b/modules/serverless-logic/pages/core/working-with-parallelism.adoc index c4b992da..a37347f1 100644 --- a/modules/serverless-logic/pages/core/working-with-parallelism.adoc +++ b/modules/serverless-logic/pages/core/working-with-parallelism.adoc @@ -31,12 +31,12 @@ include::../../pages/_common-content/getting-started-requirement.adoc[] + -- .Example content for `parallel.sw.json` file -[source,json] +[source,json,subs="attributes+"] ---- { "id": "parallel", "version": "1.0", - "specVersion": "0.8", + "specVersion": "{spec_version}", "name": "Welcome to the Parallel dimension", "description": "Testing parallelism", "start": "Parallel", @@ -172,12 +172,12 @@ For more information, see < Date: Thu, 11 Jul 2024 15:55:58 +0200 Subject: [PATCH 04/38] SRVLOGIC-261: Sync tooling to latest version --- ...serverless-logic-web-tools-deploy-projects.adoc | 2 +- ...verless-logic-web-tools-github-integration.adoc | 8 ++++---- ...less-logic-web-tools-openshift-integration.adoc | 2 +- .../serverless-logic-web-tools-overview.adoc | 2 +- ...ls-redhat-application-services-integration.adoc | 14 +++++++------- .../swf-editor-chrome-extension.adoc | 4 ++-- .../swf-editor-overview.adoc | 4 ++-- .../swf-editor-vscode-extension.adoc | 13 ++++++------- 8 files changed, 24 insertions(+), 25 deletions(-) diff --git a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc index 0ef6a6c2..a232bd83 100644 --- a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc @@ -63,7 +63,7 @@ After the deployment of your {product_name} project is successful, you can verif + For more information, see xref:tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc[Integrating your {product_name} project with OpenShift using {serverless_logic_web_tools_name}]. * Your {product_name} project is deployed successfully. -* Deployed project must be deployed using the *Deploy as a project* option as unchecked, as the deployment page is only available using the pre-built image container. If the option *Deploy as a project* option is checked the tool opens the `index.html` file your project provides, if any. 
+* Deployed project must be deployed using the *Deploy as a project* option as unchecked, as the deployment page is only available using the pre-built image container. If *Deploy as a project* option is checked, the tool opens the `index.html` file your project provides, if any. .Procedure . Click on the *OpenShift deployments* icon to view a list of deployments. diff --git a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc index 86cb95f4..0a919a3c 100644 --- a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc @@ -14,10 +14,10 @@ This document describes how you can configure the integration and synchronize yo You can generate a token from your GitHub account and add the token to the {serverless_logic_web_tools_name}. .Prerequisites -* You have an account in GitHub. +* You have a GitHub account. .Procedure -. Go to link:{serverless_logic_web_tools_url}[{serverless_logic_web_tools_name}] web application, and click the *Cogwheel* (⚙️) on the top-right corner of the screen. +. Go to link:{serverless_logic_web_tools_url}[{serverless_logic_web_tools_name}] web application, and click the *Cogwheel* (⚙️) in the top-right corner of the screen. . Go to the *GitHub* tab. . In the *GitHub* tab, click the *Add access token* button and a window will be shown. . Click *Create a new token* option. @@ -28,7 +28,7 @@ Ensure that you select the *repo* option. . Optionally, select *gist*, which enables you to import and update gists. . Copy the generated token and paste it into the *Token* field in {serverless_logic_web_tools_name} GitHub *Settings*. + -The contents of the tab are updated and displays that you are signed into the GitHub and contains all the required permissions. +The contents of the tab are updated and display that you are signed into GitHub and have all the required permissions. [[proc-sync-workspace-github-serverless-logic-web-tools]] == Synchronizing your workspaces with GitHub @@ -43,7 +43,7 @@ For more information, see < Github: Create Repository*. +. Click *Share -> GitHub: Create Repository*. . Name your repository and set the repository as *Public* or *Private*. . (Optional) Select the *Use Quarkus Accelerator* to create a repository with a base Quarkus project and move the workspace files to `src/main/resources` folder. + diff --git a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc index 914f90ee..59d039f9 100644 --- a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc @@ -34,7 +34,7 @@ A new page opens containing your new API token along with `oc cli` login command image::tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-info.png[] -- -. Go to the {serverless_logic_web_tools_name} web application, click the *Cogwheel* (⚙️) on the top-right corner and go to the *OpenShift* tab. +. 
Go to the {serverless_logic_web_tools_name} web application, click the *Cogwheel* (⚙️) in the top-right corner and go to the *OpenShift* tab. . Click the *Add connection* button and a window will be shown. . Enter your OpenShift project name in the *Namespace (project)* field. . Enter the value copied value of `--server` flag in the *Host* field. diff --git a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-overview.adoc b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-overview.adoc index e80b1cf5..9ad80083 100644 --- a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-overview.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-overview.adoc @@ -18,7 +18,7 @@ The {serverless_logic_web_tools_name} provides three different editors for your [[proc-create-workflow-model-web-tools]] == Creating a workflow model in {serverless_logic_web_tools_name} -You can start by creating a new model from scratch or using one of the samples provided. +You can start by creating a new model from scratch or using one of the samples provided. The samples are available in the "Sample Catalog", which you can find in the menu on the left. Additionally, there is an option to import models, available on the main page of the application. .Procedure . Go to the link:{serverless_logic_web_tools_url}[{serverless_logic_web_tools_name}] web application. diff --git a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-redhat-application-services-integration.adoc b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-redhat-application-services-integration.adoc index f1e13dfd..83efa620 100644 --- a/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-redhat-application-services-integration.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-logic-web-tools/serverless-logic-web-tools-redhat-application-services-integration.adoc @@ -9,7 +9,7 @@ Some of the features in {serverless_logic_web_tools_name} require integration wi This document describes how you can configure the required settings to complete the integration with Red Hat OpenShift Application and Data Services. [[proc-create-service-account-serverless-logic-web-tools]] -== Creating a service account in Red Hat OpenShift application and Data Services +== Creating a service account in Red Hat OpenShift Application and Data Services You can create or use a service account from your Red Hat OpenShift Application and Data Services console and add the service account to the {serverless_logic_web_tools_name}. @@ -35,7 +35,7 @@ A modal displaying your *Client ID* and *Client Secret* appears. -- . If you already have a service account, find your *Client ID* and *Client Secret*. -. In the {serverless_logic_web_tools_name}, click the *Cogwheel* (⚙️) on the top-right corner and go to the *Service Account* tab. +. In the {serverless_logic_web_tools_name}, click the *Cogwheel* (⚙️) in the top-right corner and go to the *Service Account* tab. . Click on the *Add service account* button and a window will be shown. . Enter your *Client ID* and *Client Secret* in the respective fields. . Click *Apply*. 
@@ -44,7 +44,7 @@ The content in the *Service Account* tab updates and displays *Your Service Acco [[proc-create-service-registry-serverless-logic-web-tools]] -== Creating a Service Registry in Red Hat OpenShift application and Data Services +== Creating a Service Registry in Red Hat OpenShift Application and Data Services You can create or use a Service Registry instance from your Red Hat OpenShift Application and Data Services console and add the Service Registry to {serverless_logic_web_tools_name}. @@ -52,7 +52,7 @@ You can create or use a Service Registry instance from your Red Hat OpenShift Ap * You have access to the Red Hat OpenShift Application and Data Services console. * You have created a service account. + -For information about creating a service account, see <>. +For information about creating a service account, see <>. .Procedure . To create a Service Registry instance in Red Hat Openshift Application and Data Services console, perform the following steps: @@ -74,11 +74,11 @@ The list of Service Registry instances updates with your instance. + [IMPORTANT] ==== -You must select the role as Manager or Administrator to have the read and write access. +You must select the role of Manager or Administrator to have read and write access. ==== .. Click *Save*. -.. Click on the menu on the top-right corner of the screen. +.. Click on the menu in the top-right corner of the screen. .. Click *Connection*. + A drawer opens containing the required connection and authentication information. @@ -87,7 +87,7 @@ A drawer opens containing the required connection and authentication information -- . If you already have a Service Registry, find the value of *Core Registry API* of your Service Registry. -. In the {serverless_logic_web_tools_name} web application, click the *Cogwheel* (⚙️) on the top-right corner and go to the *Service Registry* tab. +. In the {serverless_logic_web_tools_name} web application, click the *Cogwheel* (⚙️) in the top-right corner and go to the *Service Registry* tab. . Click on the *Add service registry* button and a window will be shown. . Enter a name for your registry. 
+ diff --git a/modules/serverless-logic/pages/tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc b/modules/serverless-logic/pages/tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc index b8a8884f..b059c115 100644 --- a/modules/serverless-logic/pages/tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc +++ b/modules/serverless-logic/pages/tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc @@ -50,14 +50,14 @@ For more information, see < Date: Thu, 11 Jul 2024 15:58:00 +0200 Subject: [PATCH 05/38] SRVLOGIC-261: Sync service-orchestration to latest version --- .../configuring-openapi-services-endpoints.adoc | 2 +- .../orchestration-of-openapi-based-services.adoc | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/modules/serverless-logic/pages/service-orchestration/configuring-openapi-services-endpoints.adoc b/modules/serverless-logic/pages/service-orchestration/configuring-openapi-services-endpoints.adoc index ab8568f2..6e83474e 100644 --- a/modules/serverless-logic/pages/service-orchestration/configuring-openapi-services-endpoints.adoc +++ b/modules/serverless-logic/pages/service-orchestration/configuring-openapi-services-endpoints.adoc @@ -115,7 +115,7 @@ A Kubernetes service endpoint can be used as a service URL if the target service === Using URI alias -As an alternative to `kogito.sw.operationIdStrategy`, you can assign an alias name to an URI by using `workflow-uri-definitions` custom link:{spec_doc_url}#extensions[extension]. Then you can use that alias as configuration key and in function definitions. +As an alternative to `kogito.sw.operationIdStrategy`, you can assign an alias name to a URI by using `workflow-uri-definitions` custom link:{spec_doc_url}#extensions[extension]. Then you can use that alias as configuration key and in function definitions. .Example workflow [source,json] diff --git a/modules/serverless-logic/pages/service-orchestration/orchestration-of-openapi-based-services.adoc b/modules/serverless-logic/pages/service-orchestration/orchestration-of-openapi-based-services.adoc index 8ca9a7da..19f9f309 100644 --- a/modules/serverless-logic/pages/service-orchestration/orchestration-of-openapi-based-services.adoc +++ b/modules/serverless-logic/pages/service-orchestration/orchestration-of-openapi-based-services.adoc @@ -55,7 +55,7 @@ For more information about the tooling, see {getting-familiar-with-our-tooling}[ In the previous example function definition, the `type` attribute can be omitted as the link:{spec_doc_url}#Function-Definition[default value] is `rest`. ==== -In the previous example, the `operation` attribute is a string, which is composed using the following parameters: +In the previous example, the `operation` attribute is a string composed of the following parameters: * URI that the engine uses to locate the specification file, such as `classpath`. * Operation identifier. You can find the operation identifier in the link:{open_api_spec_url}#fixed-fields-7[OpenAPI specification file]. @@ -180,7 +180,7 @@ components: <2> Data structure of the REST operation. -- -. Use the same `operationId` to compose the final URI in the function definition as shown in the following example: +. Use the same `operationId` to compose the final URI in the workflow function definition as shown in the following example: + -- .OpenAPI functions definition in the Temperature Conversion example @@ -221,7 +221,7 @@ After defining the function definitions, you can access the defined functions in . 
Use a link:{spec_doc_url}#Action-Definition[workflow action] to call a function definition that you added. + -- -Any workflow action that consists of a similar approach of referencing the functions that you used in the function definition can call a defined function. +Any workflow action can call a function defined in the function definition. -- . To map the arguments of a function, you can refer to the parameters described in the link:{open_api_spec_url}#operation-object[Operation Object] section of OpenAPI specification. From 5a81bd4cafa79a5fe58a2b45b5f363549dc4596b Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 15:59:22 +0200 Subject: [PATCH 06/38] SRVLOGIC-261: Sync eventing to latest version --- .../pages/eventing/event-correlation-with-workflows.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/serverless-logic/pages/eventing/event-correlation-with-workflows.adoc b/modules/serverless-logic/pages/eventing/event-correlation-with-workflows.adoc index 910e2289..191be284 100644 --- a/modules/serverless-logic/pages/eventing/event-correlation-with-workflows.adoc +++ b/modules/serverless-logic/pages/eventing/event-correlation-with-workflows.adoc @@ -198,7 +198,7 @@ The engine stores the correlation information in the same persistence mechanism [NOTE] ==== -Currently, only `kogito-addons-quarkus-persistence-jdbc` persistence add-on supports correlation. The `kogito-addons-quarkus-persistence-jdbc` add-on is configured for PostgreSQL. Other persistence add-ons will be supported in a future release. +Currently, only `kie-addons-quarkus-persistence-jdbc` persistence add-on supports correlation. The `kie-addons-quarkus-persistence-jdbc` add-on is configured for PostgreSQL. Other persistence add-ons will be supported in a future release. ==== == Additional resources From 8983924eb4ccc11e9805863df1ea374ec4421de3 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:00:56 +0200 Subject: [PATCH 07/38] SRVLOGIC-261: Sync security to latest release --- ...ting-third-party-services-with-oauth2.adoc | 23 ++++++++++--------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/modules/serverless-logic/pages/security/orchestrating-third-party-services-with-oauth2.adoc b/modules/serverless-logic/pages/security/orchestrating-third-party-services-with-oauth2.adoc index e5af5d57..c8b6d8b0 100644 --- a/modules/serverless-logic/pages/security/orchestrating-third-party-services-with-oauth2.adoc +++ b/modules/serverless-logic/pages/security/orchestrating-third-party-services-with-oauth2.adoc @@ -39,10 +39,10 @@ When you use the Acme Financial Services, you can query the exchange rates using * Orchestration with services provided by Acme and currency exchange calculations. * Authentication requirements to access the services provided by Acme. -* Potential vendor lock-in problems, in case you want to change the provider in future. +* Potential vendor lock-in problems, in case you want to change the provider in the future. * Domain-specific validations and optimizations. -The further sections describes how an end-to-end solution is created in the `serverless-workflow-oauth2-orchestration-quarkus` example application. To see the source code of `serverless-workflow-oauth2-orchestration-quarkus` example application, you can clone the link:{kogito_examples_repository_url}[kogito-examples] repository in GitHub and select the `serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus` directory. 
+The further sections describes how an end-to-end solution is created in the `serverless-workflow-oauth2-orchestration-quarkus` example application. To see the source code of `serverless-workflow-oauth2-orchestration-quarkus` example application, you can clone the link:{kogito_examples_repository_url}[{kie_kogito_examples_repo_name}] repository in GitHub and select the `serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus` directory. The `serverless-workflow-oauth2-orchestration-quarkus` example application contains the following services to compose the solution: @@ -463,15 +463,16 @@ Once you clone the `serverless-workflow-oauth2-orchestration-quarkus` example ap .Procedure -. In a command terminal, clone the `kogito-examples` repository and navigate to the cloned directory: +. In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository and navigate to the cloned directory: + -- -.Clone `kogito-examples` repository and navigate to the directory -[source, bash] +.Clone `{kie_kogito_examples_repo_name}` repository and navigate to the directory + +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus ---- -- @@ -491,7 +492,7 @@ mvn clean install .Start the Keycloak server [source, bash] ---- -cd kogito-examples/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/scripts +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/scripts ./startKeycloak.sh ---- @@ -501,7 +502,7 @@ Alternatively, you can start the Docker Compose using the following command: .Start Docker Compose [source, bash] ---- -cd kogito-examples/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/docker-compose +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/docker-compose docker-compose up ---- @@ -513,7 +514,7 @@ docker-compose up .Start Acme Financial Service [source, bash] ---- -cd kogito-examples/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/acme-financial-service +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/acme-financial-service java -jar target/quarkus-app/quarkus-run.jar ---- @@ -525,7 +526,7 @@ java -jar target/quarkus-app/quarkus-run.jar .Start currency exchange workflow [source, bash] ---- -cd kogito-examples/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/currency-exchange-workflow +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-oauth2-orchestration-quarkus/currency-exchange-workflow java -jar target/quarkus-app/quarkus-run.jar ---- From 4c25eb7f92f77c57e2706cc56f9b6bab619f0857 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:03:42 +0200 Subject: [PATCH 08/38] SRVLOGIC-261: Sync testing-and-troubleshooting to latest version --- .../kn-plugin-workflow-overview.adoc | 64 ++++++++----------- .../quarkus-dev-ui-custom-dashboard-page.adoc | 3 +- .../quarkus-dev-ui-overview.adoc | 14 ++-- ...arkus-dev-ui-workflow-definition-page.adoc | 12 ++-- ...uarkus-dev-ui-workflow-instances-page.adoc | 16 ++--- 5 files changed, 49 
insertions(+), 60 deletions(-) diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 9db7354d..6ae6fe2e 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -22,14 +22,21 @@ You can use the {product_name} plug-in to set up your local workflow project qui * (Optional) link:{docker_install_url}[Docker] is installed. * (Optional) link:{podman_install_url}[Podman] is installed. * link:{kubectl_install_url}[Kubernetes CLI] is installed. +* link:{kn_cli_install_url}[Knative CLI] is installed. .Procedure -. Follow the procedure https://docs.openshift.com/serverless/1.32/install/installing-kn.html[Installing Knative CLI] -. Run the `kn workflow` command. +. Download the latest binary file from the link:{kie_tools_releases_page_url}[KIE Tooling Releases] page. +. Install the `kn workflow` command as a plug-in of the Knative CLI using the following steps: ++ +-- +.. Copy the `kn-workflow` binary file to a directory in your `PATH`, such as `/usr/local/bin` and ensure that the file name is `kn-workflow`. +.. Make the binary file executable as follows: ++ +`chmod +x /usr/local/bin/kn-workflow` + [WARNING] ==== -Some systems might block the application to run due to Apple enforcing policies. To fix this problem, check the *Security & Privacy* section in the *System Preferences* -> *General* tab to approve the application to run. For more information, see link:{apple_support_url}[Apple support article: Open a Mac app from an unidentified developer]. +On Mac, some systems might block the application to run due to Apple enforcing policies. To fix this problem, check the *Security & Privacy* section in the *System Preferences* -> *General* tab to approve the application to run. For more information, see link:{apple_support_url}[Apple support article: Open a Mac app from an unidentified developer]. ==== .. Run the following command to verify that `kn-workflow` plug-in is installed successfully: + @@ -41,58 +48,41 @@ After installing the plug-in, you can use `kn workflow` to run the related subco . Use the `workflow` subcommand in Knative CLI as follows: + -- -.Methods to use workflow subcommand +.Aliases to use workflow subcommand [source,shell] ---- kn workflow +kn-workflow ---- .Example output -[source,shell] +[source,text] ---- Manage SonataFlow projects - Currently, SonataFlow targets use cases with a single Serverless Workflow main - file definition (i.e. workflow.sw.{json|yaml|yml}). - - Additionally, you can define the configurable parameters of your application in the - "application.properties" file (inside the root project directory). - You can also store your spec files (i.e., Open API files) inside the "specs" folder, - schemas file inside "schemas" folder and also subflows inside "subflows" folder. 
- - A SonataFlow project, as the following structure by default: - - Workflow project root - /specs (optional) - /schemas (optional) - /subflows (optional) - workflow.sw.{json|yaml|yml} (mandatory) - Usage: - kn workflow [command] + kn workflow [command] Aliases: - kn workflow, kn-workflow + kn workflow, kn-workflow Available Commands: - completion Generate the autocompletion script for the specified shell - create Creates a new SonataFlow project - deploy Deploy a SonataFlow project on Kubernetes via SonataFlow Operator - gen-manifest GenerateOperator manifests - help Help about any command - quarkus Manage SonataFlow projects built in Quarkus - run Run a SonataFlow project in development mode - undeploy Undeploy a SonataFlow project on Kubernetes via SonataFlow Operator - version Show the version + completion Generate the autocompletion script for the specified shell + create Creates a new SonataFlow project + deploy Deploy a SonataFlow project on Kubernetes via SonataFlow Operator + help Help about any command + quarkus Manage SonataFlow projects built in Quarkus + run Run a SonataFlow project in development mode + undeploy Undeploy a SonataFlow project on Kubernetes via SonataFlow Operator + version Show the version Flags: - -h, --help help for kn workflow - -v, --version version for kn workflow - -Use "kn workflow [command] --help" for more information about a command. + -h, --help help for kn + -v, --version version for kn +Use "kn [command] --help" for more information about a command. ---- - +-- [[proc-create-sw-project-kn-cli]] == Creating a workflow project using Knative CLI diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-custom-dashboard-page.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-custom-dashboard-page.adoc index b70a85b0..0fd32762 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-custom-dashboard-page.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-custom-dashboard-page.adoc @@ -4,7 +4,6 @@ :description: Dashboards in {product_name} Dev UI extension :keywords: kogito, workflow, serverless, Quarkus, Dev UI, Dashboards :dashboard_guide: https://www.dashbuilder.org/docs/#chap-dashbuilder-yaml-guides -:dashboard_editor: https://start.kubesmarts.org/ In {product_name} Dev UI extension, the Dashboards page is used to display the available dashboard files. The page displays a list of available dashboards and add filters to the list. @@ -19,7 +18,7 @@ The table on the Dashboards page displays the following details: == Creating a custom dashboard === Create a custom dashboard file -See the {dashboard_guide}[dashboard guide] for creating dashboards and visualizations with YAML. You can run all examples with {dashboard_editor}[Dashbuilder YAML Online]. +See the {dashboard_guide}[dashboard guide] for creating dashboards and visualizations with YAML. You can run all examples with link:{serverless_logic_web_tools_url}[Dashbuilder YAML Online]. === Storage path of custom dashboards The default storage path for dashboard files is *src/main/resources/dashboards*, but the property *quarkus.kogito-runtime-tools.custom.dashboard.folder* can be used to set a custom storage path. 
diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-overview.adoc index 3a81c0c2..242d9c8a 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-overview.adoc @@ -25,7 +25,7 @@ The {product_name} Dev UI extension provides a console to view, manage, and star .Install {product_name} Dev UI extension [source,shell] ---- -quarkus ext add org.kie.kogito:kogito-quarkus-serverless-workflow-devui +quarkus ext add org.apache.kie.sonataflow:sonataflow-quarkus-devui ---- Executing the previous command adds the following dependency to `pom.xml` file of your project: @@ -34,19 +34,19 @@ Executing the previous command adds the following dependency to `pom.xml` file o [source,xml] ---- - org.kie.kogito - kogito-quarkus-serverless-workflow-devui + org.apache.kie.sonataflow + sonataflow-quarkus-devui ---- -- -. Enter the following command to add the `kogito-addons-quarkus-source-files` extension that provides the source code to generate the Serverless Workflow diagram in the consoles: +. Enter the following command to add the `kie-addons-quarkus-source-files` extension that provides the source code to generate the Serverless Workflow diagram in the consoles: + -- .Install Kogito source files add-on extension [source,shell] ---- -quarkus ext add org.kie.kogito:kogito-addons-quarkus-source-files +quarkus ext add org.kie:kie-addons-quarkus-source-files ---- Executing the previous command adds the following dependency to `pom.xml` file of your project: @@ -55,8 +55,8 @@ Executing the previous command adds the following dependency to `pom.xml` file o [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-source-files + org.kie + kie-addons-quarkus-source-files ---- -- diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-definition-page.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-definition-page.adoc index 345de35b..4ab20697 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-definition-page.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-definition-page.adoc @@ -40,7 +40,7 @@ The {product_name} Dev UI extension allows you to use both mechanisms. === Starting new Workflow instances using REST If you want to start a new workflow instance using the workflow REST endpoint, just click on the *Start new Workflow* button of any of the workflow in the *Workflow Definitions* table, then you'll be redirected to the *Start New Workflow* -page where you could setup the data and Business Key that will be used to start the new workflow instance. +page where you could set up the data and Business Key that will be used to start the new workflow instance. === Filling up the Workflow data Depending on your workflow configuration the page can provide different mechanisms to fill the workflow data. 
@@ -57,7 +57,7 @@ image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-sta [NOTE] ==== -For more information about how to setup the Input Schema Definition on your {product_name}, please take a look at the +For more information about how to set up the Input Schema Definition on your {product_name}, please take a look at the xref:core/defining-an-input-schema-for-workflows.adoc[Input Schema for {product_name}] section. ==== @@ -67,13 +67,13 @@ If the *Business Key* field is blank, then an auto-generated business key is def === Starting the new Workflow instance By clicking on the *Start* button will POST the workflow data and the Business Key to the workflow REST endpoint. If the -workflow instance starts successfully, a success alert appears on the top of the screen, which contains the +workflow instance starts successfully, a success alert appears at the top of the screen, which contains the *Go to workflow list* link to navigate to the xref:testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc[Workflow Instances page]. .Example of workflow successful starting notification image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-start-workflow-success-alert.png[] -If there is an issue while starting a workflow, then a failure alert appears on the top of the screen, containing the*View Details* and *Go to workflow list* options. The *View Details* enables you to view the error message. +If there is an issue while starting a workflow, then a failure alert appears at the top of the screen, containing the*View Details* and *Go to workflow list* options. The *View Details* enables you to view the error message. .Example of workflow starting failure notification image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-start-workflow-fail-alert.png[] @@ -95,13 +95,13 @@ Once there, you will have to fill out the form with the Cloud Event information: .Starting a workflow using a cloud event image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-trigger-cloud-events.png[] -Click the *Trigger* button to trigger the cloud event. If the workflow instance starts successfully, a success alert appears on the top of the screen, which contains the +Click the *Trigger* button to trigger the cloud event. If the workflow instance starts successfully, a success alert appears at the top of the screen, which contains the *Go to workflow list* link to navigate to the xref:testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc[Workflow Instances page]. .Example of workflow successful starting notification image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-trigger-cloud-event-start-success-alert.png[] -If there is an issue while starting a workflow, then a failure alert appears on the top of the screen, containing *View Details* and *Go to workflow list* options. The *View Details* enables you to view the error message. +If there is an issue while starting a workflow, then a failure alert appears at the top of the screen, containing *View Details* and *Go to workflow list* options. The *View Details* enables you to view the error message. 
.Example of trigger workflow failure alert image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-trigger-cloud-event-start-error-alert.png[] diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc index 5716dbe4..29c02295 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc @@ -49,7 +49,7 @@ The Workflow Details page consists of the following panels: Serverless Workflow Diagram panel:: + -- -The Serverless Workflow Diagram panel enables you to explore the workflow diagram and execution path of the workflow instance. The workflow diagram and execution path are displayed by consuming the source which is exposed through the `kogito-addons-quarkus-source-files`. +The Serverless Workflow Diagram panel enables you to explore the workflow diagram and execution path of the workflow instance. The workflow diagram and execution path are displayed by consuming the source which is exposed through the `kie-addons-quarkus-source-files`. To add the source files add-on configuration, add the following dependency to `pom.xml` file of your project: @@ -57,8 +57,8 @@ To add the source files add-on configuration, add the following dependency to `p [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-source-files + org.kie + kie-addons-quarkus-source-files ---- @@ -121,8 +121,8 @@ Once there, you will have to fill out the form with the Cloud Event information: .Sending a Cloud Event to an active workflow instance. image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-workflow-instances-cloud-event.png[] -Additionally, you can use the *Send Cloud Event* action present available on the instance actions kebab. By using it you -will be lead to the *Trigger Cloud Event* page, but in this case the *Instance Id* field will be already filled with +Additionally, you can use the *Send Cloud Event* action present available on the instance actions kebab. By using it, you +will be led to the *Trigger Cloud Event* page, but in this case the *Instance Id* field will be already filled with the selected workflow id. .*Send Cloud Event* button in the actions kebab. 
@@ -130,13 +130,13 @@ image::testing-and-troubleshooting/quarkus-dev-ui-extension/kogito-swf-tools-wor [NOTE] ==== -To enable the actions kebab, make sure your project is configured to have the `kogito-addons-quarkus-process-management` +To enable the actions kebab, make sure your project is configured to have the `kie-addons-quarkus-process-management` dependency on its `pom.xml` file, like: [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-process-management + org.kie + kie-addons-quarkus-process-management ---- ==== From 5d9c6049be708e78c7168a0a24054a142268446e Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:04:42 +0200 Subject: [PATCH 09/38] SRVLOGIC-261: Sync persistence to latest version --- .../pages/persistence/core-concepts.adoc | 40 ++++++++++++++++--- 1 file changed, 35 insertions(+), 5 deletions(-) diff --git a/modules/serverless-logic/pages/persistence/core-concepts.adoc b/modules/serverless-logic/pages/persistence/core-concepts.adoc index e76e183b..8658cd90 100644 --- a/modules/serverless-logic/pages/persistence/core-concepts.adoc +++ b/modules/serverless-logic/pages/persistence/core-concepts.adoc @@ -6,15 +6,45 @@ :keywords: sonataflow, workflow, serverless, timeout, timer, expiration, persistence // links -Persistence in {product_name} is available on demand as a service. -Using configuration properties, users are able to configure the persistence for their workflows as required. +SonataFlow provides two persistence mechanisms to store information about the workflow instances. +The <<_workflow_runtime_persistence, Workflow runtime persistence>>, and <<_data_index_persistence, Data Index persistence>>. -The persistence is provided by our Data Index service. -To learn more about the service, examine the links in additional resources. +Each mechanism is intended for a different purpose: + +image::persistence/Persistence-Types.png[] + +== Workflow runtime persistence +The workflow runtime persistence ensures that your workflow instances remain consistent during an error or a runtime restart. For example, a pod restart, a programmed maintenance shutdown, etc. +This is achieved by storing snapshots of the executing workflow instances [xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc#saving_of_workflow_snapshots[see more details]]. +That information is stored in an internal format, and usually, you must only focus on providing the proper configurations to use it. + +To learn how to configure it we recommend that you read the following sections depending on your use case: + +* xref:cloud/operator/using-persistence.adoc[Using persistence in {operator_name} managed workflow deployments] +* xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[Using persistence in advanced development cases of {product_name} applications using Quarkus and Java] + +[NOTE] +==== +In production environments, or when your workflows use timeouts, or you use operator managed knative serving deployments, it's strongly recommended that you configure the workflow runtime persistence. +==== + +== Data Index persistence +The Data Index persistence is designed to store information about your workflow instances in a way that this information can be consumed by other services using GraphQL queries. +This is achieved by properly configuring and deploying the xref:data-index/data-index-core-concepts.adoc[Data Index Service] in your installation. 
+ +To learn how to configure and deploy the Data Index Service we recommend that you read the following sections depending on your use case: + +* xref:cloud/operator/supporting-services.adoc[Deploying supporting services with {operator_name}] +* xref:data-index/data-index-service.adoc[Data Index standalone service] + +To learn more about this service, examine the links in additional resources. == Additional resources * xref:data-index/data-index-core-concepts.adoc[] -* xref:data-index/data-index-service.adoc[] +* xref:use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc[] +* xref:use-cases/advanced-developer-use-cases/data-index/data-index-usecase-singleton.adoc[] +* xref:use-cases/advanced-developer-use-cases/data-index/data-index-usecase-multi.adoc[] +* xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[] include::../../pages/_common-content/report-issue.adoc[] From 35353a5d8e73872bd62ff8f03e826072c2294abc Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:13:12 +0200 Subject: [PATCH 10/38] SRVLOGIC-261: Sync cloud to latest version --- .../pages/cloud/custom-ingress-authz.adoc | 340 ++++++++++++++ .../serverless-logic/pages/cloud/index.adoc | 29 +- .../add-custom-ca-to-a-workflow-pod.adoc | 190 ++++++++ .../operator/build-and-deploy-workflows.adoc | 44 +- .../operator/building-custom-images.adoc | 47 +- ...onfiguring-knative-eventing-resources.adoc | 437 ++++++++++++++++-- .../cloud/operator/configuring-workflows.adoc | 61 ++- .../cloud/operator/customize-podspec.adoc | 63 ++- .../cloud/operator/developing-workflows.adoc | 63 ++- .../cloud/operator/global-configuration.adoc | 106 +++++ .../operator/install-serverless-operator.adoc | 61 +-- .../pages/cloud/operator/known-issues.adoc | 30 -- .../operator/referencing-resource-files.adoc | 20 +- .../cloud/operator/supporting-services.adoc | 255 +++++++--- .../cloud/operator/using-persistence.adoc | 317 ++++++++++--- .../operator/workflow-status-conditions.adoc | 20 +- 16 files changed, 1755 insertions(+), 328 deletions(-) create mode 100644 modules/serverless-logic/pages/cloud/custom-ingress-authz.adoc create mode 100644 modules/serverless-logic/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc create mode 100644 modules/serverless-logic/pages/cloud/operator/global-configuration.adoc diff --git a/modules/serverless-logic/pages/cloud/custom-ingress-authz.adoc b/modules/serverless-logic/pages/cloud/custom-ingress-authz.adoc new file mode 100644 index 00000000..cefb1775 --- /dev/null +++ b/modules/serverless-logic/pages/cloud/custom-ingress-authz.adoc @@ -0,0 +1,340 @@ += Using an Ingress to add authentication and authorization to Workflow applications +:compat-mode!: +// Metadata: +:description: Securing workflow applications via a +:keywords: cloud, kubernetes, docker, image, podman, openshift, oidc, keycloak, apisix +// links +:oidc_spec_url: https://openid.net/specs/openid-connect-core-1_0.html +:kubernetes_svc_url: https://kubernetes.io/docs/concepts/services-networking/service/ +:kubernetes_networkpolicy_url: https://kubernetes.io/docs/concepts/services-networking/network-policies/ +:sonataflow_apisix_example_url: https://github.com/apache/incubator-kie-kogito-examples/tree/stable/serverless-operator-examples/sonataflow-apisix-oidc +:keycloak_resource_owner_granttype_url: https://www.keycloak.org/docs/23.0.7/securing_apps/#_resource_owner_password_credentials_flow +:apisix_install_url: 
https://apisix.apache.org/docs/ingress-controller/deployments/minikube/ + +This document describes how you add an Ingress to a {product_name} workflow to handle authentication and authorization use cases. + +In the approach outlined in this guide, protect your workflows from anonymous access outside the cluster with the link:{oidc_spec_url}[OpenID Connect] specification. + +Although the example demonstrated in this document is not meant to be used in production, you can use it as a reference to create your own architecture. + +== Architecture + +The following image illustrates a simplified architecture view of the recommended approach for protecting {product_name} workflow endpoints. + +image::cloud/apisix-keycloak/ingress-apisix-keycloak.png[] + +1. User makes a request with their credentials +2. APISIX do the JWT token introspection in the OIDC Server (Keycloak) +3. Keycloak validates the token +4. APISIX forwards the request to the workflow application + +This is a simplified approach for OIDC (OpenID Connect protocol) use cases. In production environments, you can tailor your gateway and OIDC server to meet your requirements and scope. + +[IMPORTANT] +==== +This approach only protects the communication via Ingress. Direct calls to the workflow application link:{kubernetes_svc_url}[internal service] would be anonymous. +For example, another microservice in the cluster making requests to the workflow internal service. +Set link:{kubernetes_networkpolicy_url}[Kuberbetes NetworkPolicies] to your workflow applications if this is not the desired behavior. +==== + +== How to deploy the example architecture + +The following sections describe how to deploy the example architecture using APISIX and Keycloak to protect your {product_name} workflows. + +.Prerequisites + +* Minikube is installed. You can use Kind or any other cluster if you have admin access. Just ensure to adapt the steps below to your environment. +* link:{sonataflow_apisix_example_url}[Clone the example SonataFlow APISIX with Keycloak in a local directory]. +* (Optional) xref:cloud/operator/install-serverless-operator.adoc[{operator_name} is installed] if you are going to deploy via the operator. +* (Optional) xref:use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc[Quarkus {product_name} workflow is deployed] if you are not using the operator. + +=== Installing Keycloak + +From the example's cloned directory 'sonataflow-apisix-oidc', run the following command: + +.Running kustomize to install Keycloak +[source,shell,subs="attributes+"] +---- +kubectl create ns keycloak +kubectl kustomize manifests/bases | kubectl apply -f - -n keycloak +---- + +This command creates a namespace called `keycloak` and a Keycloak server deployment connected to a PostgreSQL database to persist your data across cluster restarts. + +==== Exposing Keycloak locally + +[TIP] +==== +You can skip this section if you are running on OpenShift or any cluster that you can expose Keycloak via an Ingress DNS or Route. +==== + +Since Keycloak is running on Minikube, expose the service port to your local network by running the following command: + +.Exposing Keycloak to the local network +[source,shell,subs="attributes+"] +---- +kubectl port-forward $(kubectl get pods -l app=keycloak --output=jsonpath='{.items[*].metadata.name}' -n keycloak) 8080:8080 -n keycloak +---- + +From now on, every connection to the `8080` port is forwarded to the Keycloak service endpoint. + +The next step is to configure your local `/etc/hosts`. 
This step is needed because the token you are going to generate must come from the same URL that the APISIX server introspects once you access the workflow. + +Edit your local `/etc/hosts` file and add the following line: + +.Hosts file with the Keycloak address entry +[source,txt,subs="attributes+"] +---- +127.0.0.1 keycloak.keycloak.svc.cluster.local +---- + +You can try accessing your Keycloak admin console in the address link:http://keycloak.keycloak.svc.cluster.local:8080[]. The default user and password are `admin`. + +[IMPORTANT] +==== +In real-life environments, this step is not needed since Keycloak or any OIDC server is served by a load balancer with the correct address configured. +==== + +==== Configuring the Keycloak OIDC Server + +In the next step, log in to the Keycloak admin console in the address link:http://keycloak.keycloak.svc.cluster.local:8080[] using the default credentials. + +Once you are logged into the console, click *Create realm* in the top left menu. In this screen, create a new realm named `sonataflow`. See the image below for more details: + +.Creation of the new sonataflow realm +image::cloud/apisix-keycloak/01-create-realm.png[] + +Next, create a client for the APISIX Ingress to introspect the JWT tokens. + +In the left menu, make sure that you are in the `sonataflow` realm and click on *Clients*, then *Create client*. Give the name `apisix-ingress` and then click on *Next*. + +.Creation of the APISIX Ingress client +image::cloud/apisix-keycloak/02-create-client.png[] + +Next, add the details about this client: + +1. Turn the *Client authentication* option on. +2. Leave *Authorization* off. +3. Mark the options *Standard flow* and *Direct access grants* and leave the rest blank. + +.APISIX Ingress client details +image::cloud/apisix-keycloak/03-create-client.png[] + +Click on *Next*, leave everything blank in the next screen and click on *Save*. + +==== Creating a user + +In this example, create a user registered in the Keycloak server to access the workflow application. + +[IMPORTANT] +==== +For simplicity, use the link:{keycloak_resource_owner_granttype_url}[Grant Type Resource Owner Password]. This flow is not recommended for production architectures. Consider using other mechanisms such as Authorization Code or Client Credentials. +==== + +In the left menu, make sure that you are in the `sonataflow` realm and click on *Users* and then click *Create new user*. + +In this screen, fill in the details according to the figure below: + +1. Switch *Email verified* option on. +2. Set *Username* to `luke`. +3. Set *Email* to `luke@republic.org` +4. Set *First name* to `Luke` and *Last name* to `Skywalker` + +.Creating a workflow user +image::cloud/apisix-keycloak/05-create-user.png[] + +Click on *Create*. + +Next, set the credentials for this newly created user. Click on *Users* in the left menu and then in the name `luke`. + +In this screen, click on the tab *Credentials*, and then on *Set password*. + +.Setting user's password +image::cloud/apisix-keycloak/06-user-set-password.png[] + +Set the password as `luke` (same as the username), leave the *Temporary* option off and click on *Save*. + +Use the credentials `luke`/`luke` later in this guide to acquire a JWT token to make requests to the workflow application. + +=== Installing the APISIX Ingress + +Follow the documentation on link:{apisix_install_url}[APISIX Documentation website] and install the APISIX Ingress in your cluster (install the HELM client first). 
+ +If you are running on Minikube, expose the APISIX Ingress server: + +.Exposing apisix-ingress service to the local network +[source,shell,subs="attributes+"] +---- +minikube service apisix-gateway --url -n ingress-apisix +---- + +The command outcome is the local URL which you can access the Ingress you create later in this guide. Leave the terminal open. + +[TIP] +==== +If you are not running on Minikube, see the APISIX Ingress documentation for more information on how to expose the Ingress already in your cluster. +==== + +After this step, Keycloak OIDC Server and APISIX Ingress Controller on your cluster are able to protect your {product_name} workflow applications from external requests. + +== Deploying the {product_name} sample workflow + +In this section, learn how to deploy the Greeting workflow example and a custom APIXSIX Ingress to protect external requests to the application's endpoints. + +.Prerequisites + +* You installed, configured, and exposed the Keycloak server +* You installed and exposed the APISIX Ingress server +* You installed the {operator_name} +* You link:{sonataflow_apisix_example_url}[cloned the example application locally] + +The first step is to deploy the {product_name} workflow. + +Enter the example project directory that you cloned locally and run the command below: + +.Deploying the Greeting workflow +[source,shell,subs="attributes+"] +---- +kubectl create ns sonataflow +kubectl apply -f workflow-app/01-sonataflow-greeting.yaml -n sonataflow +---- + +You can follow the workflow deployment by running + +.Follow the workflow deployment process +[source,shell,subs="attributes+"] +---- +kubectl -n sonataflow get workflow/greeting -w + +NAME PROFILE VERSION URL READY REASON +greeting 0.0.1 False WaitingForBuild +---- + +=== Configuring the Ingress Route + +Once you deploy the {product_name} workflow you can configure and deploy the APISIX Route. + +Open the file `workflow-app/02-sonataflow-route.yaml` in the example application you cloned earlier and change the credentials for the `apisix-ingress` client that you created in the Keycloak server: + +.Greeting workflow APISIX Route +[source,yaml,subs="attributes+"] +---- +apiVersion: apisix.apache.org/v2 +kind: ApisixRoute +metadata: + name: sonataflow +spec: + http: + - name: greeting + match: + hosts: + - local.greeting.sonataflow.org + paths: + - "/*" + backends: + - serviceName: greeting + servicePort: 80 + plugins: + - name: openid-connect <1> + enable: true + config: + client_id: apisix-ingress + client_secret: <2> + discovery: http://keycloak.keycloak.svc.cluster.local:8080/realms/sonataflow/.well-known/openid-configuration + scope: profile email + bearer_only: true + realm: sonataflow + introspection_endpoint_auth_method: client_secret_post +---- + +<1> The link:{}[OpenID Connect plugin] to make the Ingress connect to Keycloak +<2> The `apisix-ingress` client credential to be changed + +Open the Keycloak server (link:http://keycloak.keycloak.svc.cluster.local:8080[]) and in the realm `sonataflow` click on *Clients*, and then on `apisix-ingress`. 
+ +Click on the tab *Credentials* and copy the *Client Secret*: + +.Creating the workflow user +image::cloud/apisix-keycloak/04-client-credentials.png[] + +Paste the *Client Secret* into the `ApisixRoute` file `workflow-app/02-sonataflow-route.yaml` in the example application and run: + +.Deploy the `ApisixRoute` +[source,shell,subs="attributes+"] +---- +kubectl apply -f workflow-app/02-sonataflow-route.yaml -n sonataflow +---- + +To this point, you have installed in your cluster the Keycloak and APISIX Ingress server, and deployed the example Greeting workflow application. + +=== Accessing the Workflow + +Access the workflow without a token to see a rejection: + +.Directly accessing the workflow without a token +[source,shell,subs="attributes+"] +---- +INGRESS_URL= <1> + +curl -v POST $\{INGRESS_URL\}/greeting -H "Content-type: application/json" -H "Host: local.greeting.sonataflow.org" --data '{ "name": "Luke" }' +---- + +<1> The ingress url is accessible via the Minikube service command. If you have not done it already, run `minikube service apisix-gateway --url -n ingress-apisix`. + +See a 401 HTTP Status message denying your access to the workflow. + +Next, access the application using an access token. First, you need to get the access token from the Keycloak server: + +.Requesting an access token to Keycloak server +[source,shell,subs="attributes+"] +---- +CLIENT_SECRET="secret from apisix-ingress client" <1> + +ACCESS_TOKEN=$(curl \ + -d "client_id=apisix-ingress" \ + -d "client_secret=$\{CLIENT_SECRET\}" \ + -d "username=luke" \ + -d "password=luke" \ + -d "grant_type=password" \ + "http://keycloak.keycloak.svc.cluster.local:8080/realms/sonataflow/protocol/openid-connect/token" | jq -r .access_token) <2> +---- + +<1> Copy the secret from the `apisix-ingress` client +<2> Request an access token from the Keycloak server using the user `luke` credentials + +[NOTE] +==== +The token returned with the command above has a default timeout of 5 minutes, which means that if you take too long to use it, or want to execute several requests, you might need to execute the command again and get a new token. +==== +Having the access token set in an environment variable, access the application again: + +[source,shell,subs="attributes+"] +---- +INGRESS_URL= <1> + +curl -v POST $\{INGRESS_URL\}/greeting -H "Content-type: application/json" -H "Host: local.greeting.sonataflow.org" -H "Authorization: Bearer $\{ACCESS_TOKEN\}" --data '{ "name": "Luke" }' +---- + +<1> The ingress url is accessible via the Minikube service command. If you have not done it already, run `minikube service apisix-gateway --url -n ingress-apisix`. + +This request is passing through the APISIX Gateway, which is validating the token via the `Authorization: Bearer` header. Then the request is passed internally to the workflow application which process and return to the original client. + +Finally, this time, in the last part of the command output, you should see a JSON document similar to the excerpt below, which indicates that the workflow instance was created successfully. + +[source,json] +---- +{"id":"a9fc1a97-e274-4e40-80e5-4ff5ba203231","workflowdata":{"message":"Hello from YAML Workflow, anonymous"}} +---- + +== Conclusion + +In this guide you were able to deploy an architecture of services capable of authenticating a valid user using OIDC mechanisms. Now, everytime that someone needs access to the deployed workflow, it must first get a valid JWT token in the Keycloak OIDC Server. 
+ +Next steps now, would be to tailor this architecture for your needs such as a cluster of Keycloak servers behind a TLS and valid domain. Also, APISIX Ingress offers many other capabilities and configurations that can be tuned to favor your use cases. + +== Additional resources + +* xref:cloud/operator/install-serverless-operator.adoc[] +* xref:cloud/operator/configuring-workflows.adoc[] + +include::../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/cloud/index.adoc b/modules/serverless-logic/pages/cloud/index.adoc index 59b944cb..e8cf3bbd 100644 --- a/modules/serverless-logic/pages/cloud/index.adoc +++ b/modules/serverless-logic/pages/cloud/index.adoc @@ -13,28 +13,30 @@ The cards below list all features included in the platform to deploy workflow ap [NOTE] ==== -Eventually, these two options will converge, and the {operator_name} will also be able to handle full Quarkus projects. So if you opt-in to use Quarkus now and manually deploy your workflows, bear in mind that it's on the project's roadmap to integrate the Quarkus experience with the Operator. +Eventually, these two options will converge, and the {operator_name} will also be able to handle full Quarkus projects. So if you opt in to use Quarkus now and manually deploy your workflows, bear in mind that it's on the project's roadmap to integrate the Quarkus experience with the Operator. ==== [.card-section] -== Kubernetes with the Operator - -For developers that are looking for a native Kubernetes approach where you can model workflows using YAML definitions and directly deploy them, you can use the {operator_name}. The operator registers a new Kubernetes resource in the cluster to manage your workflow development iteration cycle and composition of services and events. The application is managed by the operator. +== Common Kubernetes Guides [.card] -- [.card-title] -xref:cloud/operator/install-serverless-operator.adoc[] +xref:cloud/custom-ingress-authz.adoc[] [.card-description] -Learn how to install the {operator_name} in a Kubernetes cluster +Learn how to secure a {product_name} workflow with OIDC -- +== Kubernetes with the Operator + +For developers that are looking for a native Kubernetes approach where you can model workflows using YAML definitions and directly deploy them, you can use the {operator_name}. The operator registers a new Kubernetes resource in the cluster to manage your workflow development iteration cycle and composition of services and events. The application is managed by the operator. + [.card] -- [.card-title] -xref:cloud/operator/enabling-jobs-service.adoc[] +xref:cloud/operator/install-serverless-operator.adoc[] [.card-description] -Learn how to deploy the Jobs Service using {operator_name} in a Kubernetes cluster +Learn how to install the {operator_name} in a Kubernetes cluster -- [.card] @@ -77,6 +79,15 @@ xref:cloud/operator/build-and-deploy-workflows.adoc[] Learn how to build and deploy workflow services with {operator_name} -- + +[.card] +-- +[.card-title] +xref:cloud/operator/global-configuration.adoc[] +[.card-description] +Learn how to change global configuration options for the {operator_name} +-- + [.card] -- [.card-title] @@ -128,7 +139,7 @@ Learn about the known issues and feature Roadmap of the {operator_name} [.card-section] == Kubernetes with Quarkus -For Java developers, you can use Quarkus and a few add-ons to help you build and deploy the application in a Kubernetes cluster. 
{product_name} also generates basic Kubernetes objects YAML files to help you getting started. The application should be managed by a Kubernetes administrator. +For Java developers, you can use Quarkus and a few add-ons to help you build and deploy the application in a Kubernetes cluster. {product_name} also generates basic Kubernetes objects YAML files to help you to get started. The application should be managed by a Kubernetes administrator. [.card] -- diff --git a/modules/serverless-logic/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc b/modules/serverless-logic/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc new file mode 100644 index 00000000..c9e1d308 --- /dev/null +++ b/modules/serverless-logic/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc @@ -0,0 +1,190 @@ += Adding a custom CA certificate to a container running Java +:compat-mode!: +:keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, minikube, openshift, containers +:keytool-docs: https://docs.oracle.com/en/java/javase/21/docs/specs/man/keytool.html + +{product_name} applications are containers running Java. If you're working with containers running Java applications and need to add a CA (Certificate Authority) certificate for secure communication this guide will explain the necesarry steps to setup CA for your workflow application. The guide assumes you are familiar with containers and have basic knowledge of working with YAML files. + +:toc: + + +== Problem space + +If you have a containerized Java application that connects to an SSL endpoint with a certificate signed by an internal authority (like SSL terminated routes on a cluster), you need to make sure Java can read and verify the CA Authority certificate. Java unfortunately doesn't load certificates directly but rather stores them in a {keytool-docs}[keystore]. + +The default trust store under `$JAVA_HOME/lib/security/cacerts` contains only CA's that are shipped with the Java distribution and there is the `keytool` tool that knows how to manipulate those key stores. +The containerized application may not know the CA certificate in build time, so we need to add it to the `trust-store` in deployment. To automate that we can use a combination of an init-container and a shared directory to pass the mutated trust store to the container before it runs. Let's run this step by step: + +=== Step 1: Obtain the CA Certificate + +Before proceeding, ensure you have the CA certificate file (in PEM format) that you want to add to the Java container. If you don't have it, you may need to obtain it from your system administrator or certificate provider. 
+ +For this guide, we are using the k8s cluster root CA that is automatically deployed into every container under `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` + +=== Step 2: Prepare a trust store in an init-container + +Add or amend these `volumes` and `init-container` snippet to your pod spec or `podTemplate` in a deployment: + +[source,yaml] +--- +spec: + volumes: + - name: new-cacerts + emptyDir: {} + initContainers: + - name: add-kube-root-ca-to-cacerts + image: registry.access.redhat.com/ubi9/openjdk-17 + volumeMounts: + - mountPath: /opt/new-cacerts + name: new-cacerts + command: + - /bin/bash + - -c + - | + cp $JAVA_HOME/lib/security/cacerts /opt/new-cacerts/ + chmod +w /opt/new-cacerts/cacerts + keytool -importcert -no-prompt -keystore /opt/new-cacerts/cacerts -storepass changeit -file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt +--- + +The default keystore under `$JAVA_HOME` is part of the container image and is not mutable. We have to create the mutated copy to a shared volume, hence the 'new-cacerts' one. + +=== Step 3: Configure Java to load the new keystore + +Here you can mount the new, modified `cacerts` into the default location where the JVM looks. +The `Main.java` example uses the standard HTTP client so alternatively you could mount the `cacerts` to a different location and configure the Java runtime to load the new keystore with a `-Djavax.net.ssl.trustStore` system property. +Note that libraries like RESTEasy don't respect that flag and may need to programmatically set the trust store location. + +[source,yaml] +--- + containers: + - command: + - /bin/bash + - -c + - | + curl -L https://gist.githubusercontent.com/rgolangh/b949d8617709d10ba6c690863e52f259/raw/bdea4d757a05b75935bbb57f3f05635f13927b34/Main.java -o curl.java + java curl.java https://kubernetes + image: registry.access.redhat.com/ubi9/openjdk-17 + imagePullPolicy: Always + name: openjdk-17 + volumeMounts: + - mountPath: /lib/jvm/java-17/lib/security + name: new-cacerts + readOnly: true + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-api-access-5npmd + readOnly: true +--- + +Notice the volume mount of the previously mutated keystore. 
+
+
+=== Full working example
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  name: root-ca-to-cacerts
+spec:
+  initContainers:
+  - name: add-kube-root-ca-to-cacerts
+    image: registry.access.redhat.com/ubi9/openjdk-17
+    volumeMounts:
+    - mountPath: /opt/new-cacerts
+      name: new-cacerts
+    command:
+    - /bin/bash
+    - -c
+    - |
+      cp $JAVA_HOME/lib/security/cacerts /opt/new-cacerts/
+      chmod +w /opt/new-cacerts/cacerts
+      keytool -importcert -noprompt -keystore /opt/new-cacerts/cacerts -storepass changeit -file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+  containers:
+  - command:
+    - /bin/bash
+    - -c
+    - |
+      curl -L https://gist.githubusercontent.com/rgolangh/b949d8617709d10ba6c690863e52f259/raw/bdea4d757a05b75935bbb57f3f05635f13927b34/Main.java -o curl.java
+      java curl.java https://kubernetes
+    image: registry.access.redhat.com/ubi9/openjdk-17
+    imagePullPolicy: Always
+    name: openjdk-17
+    volumeMounts:
+    - mountPath: /lib/jvm/java-17/lib/security/
+      name: new-cacerts
+      readOnly: true
+    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+      name: kube-api-access-5npmd
+      readOnly: true
+  volumes:
+  - name: new-cacerts
+    emptyDir: {}
+  - name: kube-api-access-5npmd
+    projected:
+      sources:
+      - serviceAccountToken:
+          path: token
+      - configMap:
+          items:
+          - key: ca.crt
+            path: ca.crt
+          name: kube-root-ca.crt
+----
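+
+To try the example, you can save the manifest above as `root-ca-to-cacerts.yaml` (the file name is illustrative) and watch the Java container log the HTTPS call that is verified against the imported CA:
+
+[source,bash]
+----
+# Create the pod and follow the log of the Java container
+kubectl apply -f root-ca-to-cacerts.yaml
+kubectl logs root-ca-to-cacerts -c openjdk-17 -f
+----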
+
+=== {product_name} Example
+
+Similar to a deployment spec, a serverless workflow has a `spec.podTemplate` attribute. There are minor differences, but the change is almost identical.
+In this case, we are mounting the ingress CA bundle because we want our workflow to reach the `.apps.my-cluster-name.my-cluster-domain` SSL endpoint.
+Here is the relevant spec section of a workflow with the changes:
+
+[source,yaml]
+----
+#...
+spec:
+  flow:
+   # ...
+  podTemplate:
+    container:
+      volumeMounts:
+        - mountPath: /lib/jvm/java-17/lib/security/
+          name: new-cacerts
+    initContainers:
+      - command:
+          - /bin/bash
+          - -c
+          - |
+            cp $JAVA_HOME/lib/security/cacerts /opt/new-cacerts/
+            chmod +w /opt/new-cacerts/cacerts
+            keytool -importcert -noprompt -keystore /opt/new-cacerts/cacerts -storepass changeit -file /opt/ingress-ca/ca-bundle.crt
+        image: registry.access.redhat.com/ubi9/openjdk-17
+        name: add-kube-root-ca-to-cacerts
+        volumeMounts:
+          - mountPath: /opt/new-cacerts
+            name: new-cacerts
+          - mountPath: /opt/ingress-ca
+            name: ingress-ca
+    volumes:
+      - emptyDir: {}
+        name: new-cacerts
+      - configMap:
+          name: default-ingress-cert
+        name: ingress-ca
+      - name: kube-api-access-5npmd
+        projected:
+          sources:
+            - serviceAccountToken:
+                path: token
+            - configMap:
+                items:
+                  - key: ca.crt
+                    path: ca.crt
+                name: kube-root-ca.crt
+----
+
+== Additional Resources
+
+* link:{keytool-docs}[Keytool documentation]
+* link:https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift#end_to_end_springboot_demo[Dynamically Creating Java Keystores in OpenShift]
+
+
diff --git a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc
index 2d811f91..f7910885 100644
--- a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc
@@ -16,25 +16,30 @@
 :docker_doc_arg_url: https://docs.docker.com/engine/reference/builder/#arg
 :quarkus_extensions_url: https://quarkus.io/extensions/
 
-This document describes how to build and deploy your workflow on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}] only by having a `SonataFlow` custom resource.
+This document describes how to build and deploy your workflow on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}].
 
 Every time you need to change the workflow definition the system will (re)build a new immutable version of the workflow. If you're still in development phase, please see the xref:cloud/operator/developing-workflows.adoc[] guide.
 
 [IMPORTANT]
 ====
-The build system implemented by the {operator_name} is not suitable for complex production use cases. Consider using an external tool to build your application such as Tekton and ArgoCD. The resulting image can then be deployed with `SonataFlow` custom resource. See more at xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[Setting a custom image in the default container] section in the xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[] guide.
+The build system implemented by the {operator_name} is not suitable for complex production use cases. Consider using an external tool to build your application such as Tekton and ArgoCD. The resulting image can then be deployed with `SonataFlow` custom resource. More details are available in the xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[Setting a custom image in the default container] section of the xref:cloud/operator/customize-podspec.adoc[] guide.
 ====
 
 Follow the <> or <> sections of this document based on the cluster you wish to build your workflows on.
 
 .Prerequisites
 * A Workflow definition.
-* The {operator_name} installed. See xref:cloud/operator/install-serverless-operator.adoc[] guide
+* The {operator_name} installed. See the xref:cloud/operator/install-serverless-operator.adoc[] guide. A quick verification sketch follows this list.
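+
+Before configuring the build, you can quickly verify that the operator is up. This is a sketch assuming the default installation namespace used throughout these guides:
+
+[source,bash]
+----
+# The operator manager pod should be Running and Ready
+kubectl get pods -n sonataflow-operator-system
+----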
-[#configure-build-system] +[[configure-workflow-build-system]] == Configuring the build system -The operator can build workflows on Kubernetes or OpenShift. On Kubernetes, it uses link:{kaniko_url}[Kaniko] and on OpenShift a link:{openshift_build_url}[standard BuildConfig]. The operator build system is not tailored for advanced production use cases and you can do only a few customizations. +The operator can build workflows on Kubernetes or OpenShift. On Kubernetes, it uses link:{kaniko_url}[Kaniko] and on OpenShift a link:{openshift_build_url}[standard BuildConfig]. + +[IMPORTANT] +==== +The operator build system is not tailored for advanced production use cases and you can do only a few customizations. +==== === Using another Workflow base builder image @@ -49,9 +54,10 @@ By default, the operator will use the image distributed upstream to build workfl kubectl patch sonataflowplatform --patch 'spec:\n build:\n config:\n baseImage: ' -n ---- +[#customize-base-build] === Customize the base build Dockerfile -The operator uses the sonataflow-operator-builder-config `ConfigMap` in the operator's installation namespace ({operator_installation_namespace}) to configure and run the workflow build process. +The operator uses the `ConfigMap` named `sonataflow-operator-builder-config` in the operator's installation namespace ({operator_installation_namespace}) to configure and run the workflow build process. You can change the `Dockerfile` entry in this `ConfigMap` to tailor the Dockerfile to your needs. Just be aware that this can break the build process. .Example of the sonataflow-operator-builder-config `ConfigMap` @@ -59,9 +65,8 @@ You can change the `Dockerfile` entry in this `ConfigMap` to tailor the Dockerfi ---- apiVersion: v1 data: - DEFAULT_BUILDER_RESOURCE_NAME: Dockerfile DEFAULT_WORKFLOW_EXTENSION: .sw.json - Dockerfile: "FROM {kogito_devservices_imagename}:latest AS builder\n\n# + Dockerfile: "FROM quay.io/kiegroup/kogito-swf-builder-nightly:latest AS builder\n\n# variables that can be overridden by the builder\n# To add a Quarkus extension to your application\nARG QUARKUS_EXTENSIONS\n# Args to pass to the Quarkus CLI add extension command\nARG QUARKUS_ADD_EXTENSION_ARGS\n# Additional java/mvn arguments @@ -87,6 +92,7 @@ metadata: The excerpt above is just an example. The current version might have a slightly different version. Don't use this example in your installation. ==== +[[changing-sfplatform-resource-requirements]] === Changing resources requirements You can create or edit a `SonataFlowPlatform` in the workflow namespace specifying the link:{kubernetes_resource_management_url}[resources requirements] for the internal builder pods: @@ -138,6 +144,7 @@ spec: This parameters will only apply to new build instances. +[[passing-build-arguments-to-internal-workflow-builder]] === Passing arguments to the internal builder You can pass build arguments (see link:{docker_doc_arg_url}[Dockerfile ARG]) to the `SonataFlowBuild` instance. @@ -205,14 +212,15 @@ The table below lists the Dockerfile arguments available in the default {operato |=== | Argument | Description | Example -|QUARKUS_EXTENSIONS | List of link:{quarkus_extensions_url}[Quarkus Extensions] separated by comma that the builder should add to the workflow. | org.kie.kogito:kogito-addons-quarkus-persistence-jdbc:999-SNAPSHOT +|QUARKUS_EXTENSIONS | List of link:{quarkus_extensions_url}[Quarkus Extensions] separated by comma that the builder should add to the workflow. 
| org.kie:kie-addons-quarkus-persistence-jdbc:999-SNAPSHOT |QUARKUS_ADD_EXTENSION_ARGS | Arguments passed to the Quarkus CLI when adding extensions. Enabled only when `QUARKUS_EXTENSIONS` is not empty. | See the link:{quarkus_cli_url}#using-the-cli[Quarkus CLI documentation] |MAVEN_ARGS_APPEND | Arguments passed to the maven build when the workflow build is produced. | -Dkogito.persistence.type=jdbc -Dquarkus.datasource.db-kind=postgresql |=== +[[setting-env-variables-for-internal-workflow-builder]] === Setting environment variables in the internal builder -You can set environment variables to the `SonataFlowBuild` internal builder pod. +You can set environment variables to the `SonataFlowBuild` internal builder pod. This is useful in cases where you would like to influence only the build of the workflow. [IMPORTANT] ==== @@ -275,7 +283,7 @@ Since the `envs` attribute is an array of link:{kubernetes_envvar_url}[Kubernete On Minikube and Kubernetes only plain values, `ConfigMap` and `Secret` are supported due to a restriction on the build system provided by these platforms. ==== -[#building-kubernetes] +[[building-and-deploying-on-kubernetes]] == Building on Kubernetes [TIP] @@ -414,15 +422,17 @@ You don't need to do anything to build on OpenShift since the operator will conf In general, the operator will create a link:{openshift_build_url}[`BuildConfig` to build] the workflow using the mapped xref:cloud/operator/referencing-resource-files.adoc[resource files] and your workflow definition. After the build is finished, the image will be pushed to the internal OpenShift registry backed by an `ImageStream` object. +[#changing-base-builder] === Changing the base builder image If you are running on OpenShift, you have access to the Red Hat's supported registry. You can change the default builder image by editing the sonataflow-operator-builder-config `ConfigMap`. [source,bash,subs="attributes+"] ---- -oc edit cm/sonataflow-operator-builder-config -n {operator_installation_namespace} +kubectl edit cm/sonataflow-operator-builder-config -n {operator_installation_namespace} ---- -In your editor, change the first line in the `Dockerfile` entry where it reads `FROM {kogito_devservices_imagename}:{operator_version}` to the desired image. + +In your editor, change the first line in the `Dockerfile` entry where it reads `FROM quay.io/kiegroup/kogito-swf-builder-nightly:latest` to the desired image. This image must be compatible with your operator's installation. @@ -480,7 +490,7 @@ spec: end: true ---- -Save a file in your local file system with this contents named `greetings-workflow.yaml` then run: +Save a file in your local file system with this content named `greetings-workflow.yaml` then run: [source,bash,subs="attributes+"] ---- @@ -561,7 +571,7 @@ metadata: After editing the resource, the operator will start a new build of the workflow. Once this is finished, the workflow will be notified and updated accordingly. -If the build fails, but the workflow has a working deployment, the operator won't rollout a new deployment. +If the build fails, but the workflow has a working deployment, the operator won't roll out a new deployment. Ideally you should use this feature if there's a problem with your workflow or the initial build revision. 
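+
+To follow the progress of a build from the command line, you can watch the `SonataFlowBuild` resource until the operator reports a terminal state in its status. A minimal sketch, assuming a workflow named `greeting`:
+
+[source,bash]
+----
+# Watch the build resource; the operator updates its status as the build advances
+kubectl get sonataflowbuild greeting -n <your_namespace> -w
+----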
@@ -569,7 +579,7 @@ Ideally you should use this feature if there's a problem with your workflow or t == Additional resources -* xref:cloud/operator/known-issues.adoc[] -* xref:cloud/operator/developing-workflows.adoc[] +* xref:cloud/operator/build-and-deploy-workflows.adoc[] +* xref:cloud/operator/building-custom-images.adoc[] include::../../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/cloud/operator/building-custom-images.adoc b/modules/serverless-logic/pages/cloud/operator/building-custom-images.adoc index 1f44fbea..61b579b2 100644 --- a/modules/serverless-logic/pages/cloud/operator/building-custom-images.adoc +++ b/modules/serverless-logic/pages/cloud/operator/building-custom-images.adoc @@ -4,51 +4,18 @@ :description: Building custom development images for SonataFlow :keywords: sonataflow, workflow, serverless, operator, kubernetes, minikube, devmode // Links: -:rh_ubi8_url: https://catalog.redhat.com/software/containers/ubi8/ubi-minimal/5c359a62bed8bd75a2c3fba8 +:rh_jdk17_url: https://catalog.redhat.com/software/containers/ubi9/openjdk-17/61ee7c26ed74b2ffb22b07f6 // NOTE: this guide can be expanded in the future to include prod images, hence the file name // please change the title section and rearrange the others once it's done -This document describes how to build a custom development image to use in SonataFlow. +This document describes how to build a custom development image to use in {product_name}. == The development mode image structure -The development image is based on the link:{rh_ubi8_url}[Red Hat UBI 8 minimal] container image. You can read its documentation for more detailed information about that image's architecture. +The development image is based on the link:{rh_jdk17_url}[Red Hat OpenJDK 17 UBI 9] container image. You can read its documentation for more detailed information about that image's architecture. -The table below lists the additional packages installed in the development mode container image. - -.List of packages -[cols="1,2"] -|=== -|Package | Description - -|shadow-utils -|The shadow-utils package includes the necessary programs for converting UNIX password files to the shadow password format. - -|tar -| - -|gzip -| - -|unzip -| - -|zip -| - -|tzdata-java -| - -|java-17-openjdk-devel -|OpenJDK 17 - -|apache-maven-3.9.3-bin.tar.gz -|Apache Maven - -|=== - -The next table lists the important paths in the container image's file system. +The table bellow lists the important paths in the container image's file system. .Important file system paths [cols="1,1"] @@ -100,7 +67,7 @@ CMD ["/home/kogito/launch/run-app-devmode.sh"] <8> ---- <1> The dev mode image as the base image -<2> Change to super user to run privileged actions +<2> Change to superuser to run privileged actions <3> Install additional packages <4> Change back to the default user without admin privileges <5> Add a new binary path to the `PATH` @@ -128,11 +95,11 @@ The container exposes port 8080 by default. When running the container locally, Next, we mount a local volume to the container's application path. Any local workflow definitions, specification files, or properties should be mounted to `src/main/resources`. Alternatively, you can also mount custom Java files to `src/main/java`. -Finally, to use the new generated image with the dev profile you can see: xref:cloud/operator/developing-workflows.adoc#_using_another_workflow_base_image[Using another Workflow base image]. 
+Finally, to use the newly generated image with the dev profile, follow the procedure in the xref:cloud/operator/developing-workflows.adoc#_using_another_workflow_base_image[Using another Workflow base image] section.
 
 == Additional resources
 
 * xref:cloud/operator/referencing-resource-files.adoc[]
 * xref:cloud/operator/developing-workflows.adoc[]
 
-include::../../../pages/_common-content/report-issue.adoc[]
\ No newline at end of file
+include::../../../pages/_common-content/report-issue.adoc[]
diff --git a/modules/serverless-logic/pages/cloud/operator/configuring-knative-eventing-resources.adoc b/modules/serverless-logic/pages/cloud/operator/configuring-knative-eventing-resources.adoc
index 696554c4..ef09bfa8 100644
--- a/modules/serverless-logic/pages/cloud/operator/configuring-knative-eventing-resources.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/configuring-knative-eventing-resources.adoc
@@ -1,59 +1,430 @@
 = Knative Eventing
+:sectnums:
+
 :compat-mode!:
 // Metadata:
-:description: Configuration of knatve eventing deployed by the operator
+:description: Configuration of knative eventing deployed by the operator
 :keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, knative, knative-eventing, events
 
-This document describes how you can configure the workflows to let operator create the knative eventing resources on Kubernetes.
+This document describes how to configure workflows, and the supporting services, to use link:{knative_eventing_url}[Knative Eventing] as the preferred eventing system.
+
+In general, the following events are produced in a {product_name} installation:
+
+* Workflow outgoing and incoming business events.
+* {product_name} system events sent from the workflow to the Data Index and Job Service respectively.
+* {product_name} system events sent from the Jobs Service to the Data Index Service.
+
+[IMPORTANT]
+====
+The content of this guide must be used only when you work with workflows using the `preview` and `gitops` profiles.
+====
 
-{operator_name} can analyze the event definitions from the `spec.flow` and create `SinkBinding`/`Trigger` based on the type of the event. Then the workflow service can utilize them for event communications. The same purpose of this feature in quarkus extension can be found xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc#ref-example-sw-event-definition-knative[here].
+To produce a working configuration, follow this procedure:
 
 == Prerequisite
-1. Knative is installed on the cluster and Knative Eventing is initiated with a `KnativeEventing` CR.
-2. A broker named `default` is created. Currently all Triggers created by the {operator_name} will read events from `default`
 
-== Configuring the workflow
+1. The {operator_name} is installed. See the xref:cloud/operator/install-serverless-operator.adoc[] guide.
+2. The link:{knative_eventing_url}[Knative Eventing] system is installed and properly initiated in the cluster.
+
+== Configuring the Knative Broker
+
+Create a Knative Broker that defines the event mesh collecting the events, using a resource like this:
+
+[source,yaml]
+----
+apiVersion: eventing.knative.dev/v1
+kind: Broker
+metadata:
+  name: default
+  namespace: example-namespace
+----
+
+For more information on Knative Brokers, link:{knative_eventing_broker_url}[see the Knative documentation].
+
+[NOTE]
+====
+The example creates an in-memory broker for simplicity.
In production environments, you must use a production-ready broker, like the link:{knative_eventing_kafka_broker_url}[Knative Kafka] broker. +==== + +[[querying_broker_url]] +Finally, to get the Broker URL that is needed in the next steps of the configuration, you can execute the following command: + +[source,bash] +---- +kubectl get broker -n example-namespace + +NAME URL AGE READY REASON +default http://broker-ingress.knative-eventing.svc.cluster.local/example-namespace/default 4m50s True +---- + +For a link:{knative_eventing_kafka_broker_url}[Knative Kafka] broker that the URL will look like this instead. + +[source,bash] +---- +http://kafka-broker-ingress.knative-eventing.svc.cluster.local/example-namespace/default +---- + +== Configuring the Data Index Knative Eventing Resources + +=== Workflows to DataIndex system events -For the operator to create the `SinkBinding` resources, the workflow must provide the sink information in `spec.sink`. +Create the following Knative Triggers to deliver all the {product_name} system events sent from the workflows to the Data Index Service: -.Example of a workflow with events -[source,yaml,subs="attributes+"] --- +[NOTE] +==== +In your installation you might have to adjust the `spec.broker`, the `spec.subscriber.ref.name`, and `spec.subscriber.ref.namespace` fields to use the correct names for every trigger. +==== + +For more information on Knative Triggers link:{knative_eventing_trigger_url}[see]. + +.Process definition events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-def-trigger +spec: + broker: default + filter: + attributes: + type: ProcessDefinitionEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /definitions +---- + +.Process instance state events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-state-trigger +spec: + broker: default + filter: + attributes: + type: ProcessInstanceStateDataEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /processes +---- + +.Process instance node events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-node-trigger +spec: + broker: default + filter: + attributes: + type: ProcessInstanceNodeDataEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /processes +---- + +.Process instance error events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-error-trigger +spec: + broker: default + filter: + attributes: + type: ProcessInstanceErrorDataEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /processes +---- + +.Process instance SLA events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-sla-trigger +spec: + broker: default + filter: + attributes: + type: ProcessInstanceSLADataEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: 
sonataflow-platform-data-index-service + namespace: example-namespace + uri: /processes +---- + +.Process instance variable events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-process-variable-trigger +spec: + broker: default + filter: + attributes: + type: ProcessInstanceVariableDataEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /processes +---- + +=== Job Service to Data Index system events + +Create the following Knative Trigger to deliver all the {product_name} system events sent from the Job Service to the Data Index Service: + +[NOTE] +==== +In your installation you might have to adjust the `spec.broker`, the `spec.subscriber.ref.name`, and `spec.subscriber.ref.namespace` fields to use the correct names for every trigger. +==== + +For more information on Knative Triggers link:{knative_eventing_trigger_url}[see]. + +.Job events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-data-index-service-jobs-trigger +spec: + broker: default + filter: + attributes: + type: JobEvent + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-data-index-service + namespace: example-namespace + uri: /jobs +---- + +== Configuring the Job Service Knative Eventing Resources + +Create the following Knative Triggers to deliver all the {product_name} system events produced by the workflows to the Job Service: + +[NOTE] +==== +In your installation you might have to adjust the `spec.broker`, the `spec.subscriber.ref.name`, and `spec.subscriber.ref.namespace` fields to use the correct names for every trigger. +==== + +.Create Job events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: sonataflow-platform-jobs-service-create-job-trigger +spec: + broker: default + filter: + attributes: + type: job.create + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-jobs-service + namespace: example-namespace + uri: /v2/jobs/events +---- + +.Delete Job events trigger +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: jobs-service-postgresql-delete-job-trigger + namespace: example-namespace +spec: + broker: default + filter: + attributes: + type: job.delete + subscriber: + ref: + apiVersion: v1 + kind: Service + name: sonataflow-platform-jobs-service + namespace: example-namespace + uri: /v2/jobs/events +---- + +== Data Index and Job Service installation + +To deploy these services you must use a `SonataFlowPlatform` CR and configure it according to the xref:cloud/operator/supporting-services.adoc[Supporting Services guide]. +Finally, prior to deployment into the cluster, you must add the `env` variable shown below to the field `spec.jobService.podTemplate.container`. + +[source,yaml] +---- +apiVersion: sonataflow.org/v1alpha08 +kind: SonataFlowPlatform +metadata: + name: sonataflow-platform + namespace: example-namespace +spec: + services: + dataIndex: + # Data Index requires no additional configurations to use knative eventing. + # Use the configuration of your choice according to the Supporting Services guide. 
+ jobService: + podTemplate: + container: + env: + - name: MP_MESSAGING_OUTGOING_KOGITO_JOB_SERVICE_JOB_STATUS_EVENTS_HTTP_URL <1> + value: http://broker-ingress.knative-eventing.svc.cluster.local/example-namespace/default <2> +---- + +<1> Fixed env variable name that contains the URL of the Broker created in <<_configuring_the_knative_broker>>. +<2> To query the Broker URL <>. + +== Workflow configuration + +=== SonataFlow CR configuration + +To configure a workflow you must create a `SonataFlow` CR that fulfills your requirements. +And finally, prior to deployment into the cluster, add the `env` variables shown below to the field `spec.podTemplate.container`. + +.Workflow configuration +[source,yaml] +---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: -... + name: example-workflow + namespace: example-namespace + annotations: + sonataflow.org/description: Example Workflow that show Knative Eventing configuration. + sonataflow.org/version: 0.0.1 + sonataflow.org/profile: preview spec: - sink: - ref: <1> - name: default - namespace: greeting - apiVersion: eventing.knative.dev/v1 - kind: Broker + podTemplate: + container: + env: + - name: K_SINK <1> + value: http://broker-ingress.knative-eventing.svc.cluster.local/example-namespace/default <2> + - name: MP_MESSAGING_OUTGOING_KOGITO_JOB_SERVICE_JOB_REQUEST_EVENTS_URL + value: ${K_SINK} + - name: MP_MESSAGING_OUTGOING_KOGITO_PROCESSINSTANCES_EVENTS_URL + value: ${K_SINK} + - name: MP_MESSAGING_OUTGOING_KOGITO_PROCESSDEFINITIONS_EVENTS_URL + value: ${K_SINK} flow: - events: <2> - - name: requestQuote - type: kogito.sw.request.quote - kind: produced - - name: aggregatedQuotesResponse, - type: kogito.loanbroker.aggregated.quotes.response, - kind: consumed, - source: /kogito/serverless/loanbroker/aggregator -... --- -<1> `spec.sink.ref` defines the sink that all created sinkBinding will use as the destination sink for producing events -<2> `spec.flow.events` lists all the events referenced in the workflow. Events with `produced` kind will trigger the creation of `SinkBindings` by the {operator_name}, while those labeled as `consumed` will lead to the generation of `Triggers`. + start: ExampleState + events: + - name: exampleConsumedEvent1 + source: '' + type: example_event_1 <3> + kind: consumed + - name: exampleConsumedEvent2 + source: '' + type: example_event_2 <4> + kind: consumed +---- + +<1> Fixed env variable name that contains the URL of the broker created in <<_configuring_the_knative_broker>>. +<2> Must contain the broker URL. To get this value <>. The remaining env variables are fixed configurations, and you must add them as is. +<3> Every consumed event requires a trigger, <>. +<4> Every consumed event requires a trigger, <>. + +=== Configuring the Workflow Knative Eventing Resources + +For every event type consumed by the workflow you must create a corresponding trigger to deliver it from the broker. [NOTE] ==== -Knative resources are not watched by the operator, indicating they will not undergo automatic reconciliation. This grants users the freedom to make updates at their preference. +Unlike the triggers related to the Data Index Service and the Jobs Service, these triggers must be created for every workflow that consume events. +So it's recommended that you use trigger names that are linked to the workflow name. 
==== +[[trigger-event-type1]] +.Trigger to consume events of type example_event_1 +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: example-workflow-example-event-1-trigger <1> +spec: + broker: default + filter: + attributes: + type: example_event_1 <2> + subscriber: + ref: + apiVersion: v1 + kind: Service + name: example-workflow + namespace: example-namespace +---- + +<1> Name for the trigger. +<2> Event type consumed by the workflow `example-workflow`. + +[[trigger-event-type2]] +.Trigger to consume events of type example_event_2 +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: example-workflow-example-event-2-trigger <1> +spec: + broker: default + filter: + attributes: + type: example_event_2 + subscriber: + ref: + apiVersion: v1 + kind: Service + name: example-workflow + namespace: example-namespace +---- + +:sectnums!: == Additional resources * https://knative.dev/docs/eventing/[Knative Eventing official site] -* xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[quarkus extension for knative eventing] -* xref:job-services/core-concepts.adoc#knative-eventing-supporting-resources[knative eventing for Job service] -* xref:data-index/data-index-core-concepts.adoc#_knative_eventing[knative eventing for data index] +* xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[Quarkus extension for Knative eventing] +* xref:job-services/core-concepts.adoc#knative-eventing-supporting-resources[Knative eventing for Job service] +* xref:data-index/data-index-core-concepts.adoc#_knative_eventing[Knative eventing for data index] include::../../../pages/_common-content/report-issue.adoc[] \ No newline at end of file diff --git a/modules/serverless-logic/pages/cloud/operator/configuring-workflows.adoc b/modules/serverless-logic/pages/cloud/operator/configuring-workflows.adoc index 95bc97b8..6f88b410 100644 --- a/modules/serverless-logic/pages/cloud/operator/configuring-workflows.adoc +++ b/modules/serverless-logic/pages/cloud/operator/configuring-workflows.adoc @@ -4,6 +4,8 @@ :description: Configuration of Workflow Services deployed by the operator :keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, minikube, config, openshift, containers +:k8s_envvar_url: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#envvar-v1-core + This document describes how to configure a Workflow service with the {operator_name}. == Editing the Workflow Configuration @@ -83,12 +85,67 @@ Other managed properties include: If you try to change any of them, the operator will override them with the default, but preserving your changes in other property keys. +=== Defining Global Managed Properties + +It's possible to set custom global managed properties for your workflows by defining them in the `SonataFlowPlatform` resource in the same namespace. + +Edit the `SonataFlowPlatform` instance and add the required properties to the `.spec.properties.flow` attribute. 
For example: + +.Example of a SonataFlowPlatform with flow properties +[source,yaml,subs="attributes+"] +---- +apiVersion: sonataflow.org/v1alpha08 +kind: SonataFlowPlatform +metadata: + name: sonataflow-platform +spec: + properties: + flow: <1> + - name: quarkus.log.category <2> + value: INFO <3> +---- + +<1> Attribute to set the array of custom global managed properties +<2> The property key +<3> The property value + +Every workflow in this `SonataFlowPlatform` instance's namespace will have the property `quarkus.log.category: INFO` added to its managed properties. + +[IMPORTANT] +==== +You can't override the default managed properties set by the operator using this feature. +==== + +You can add properties from other `ConfigMap` or `Secret` from the same namespace. For example: + +.Example of a SonataFlowPlatform properties from ConfigMap and Secret +[source,yaml,subs="attributes+"] +---- +apiVersion: sonataflow.org/v1alpha08 +kind: SonataFlowPlatform +metadata: + name: sonataflow-platform +spec: + properties: + flow: + - name: my.petstore.auth.token + valueFrom: <1> + secretKeyRef: petstore-credentials + keyName: AUTH_TOKEN + - name: my.petstore.url + valueFrom: + configMapRef: petstore-props + keyName: PETSTORE_URL +---- + +<1> The `valueFrom` attribute derives from the link:{k8s_envvar_url}[EnvVar Kubernetes API]. + == Additional resources -* https://quarkus.io/guides/config-reference#profile-aware-files[Quarkus - Profile aware files] +* link:https://quarkus.io/guides/config-reference#profile-aware-files[Quarkus Configuration Reference Guide - Profile aware files] * xref:core/configuration-properties.adoc[] -* xref:cloud/operator/known-issues.adoc[] * xref:cloud/operator/developing-workflows.adoc[] * xref:cloud/operator/build-and-deploy-workflows.adoc[] +* xref:cloud/operator/known-issues.adoc[] include::../../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc b/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc index e2920856..9ae72846 100644 --- a/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc +++ b/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc @@ -7,8 +7,11 @@ :k8s_resources_limits_url: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ :k8s_podspec_api_url: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#podspec-v1-core +:knative_serving_service_url: https://github.com/knative/specs/blob/main/specs/serving/knative-api-specification-1.0.md#service +:knative_serving_initcontainer: https://knative.dev/docs/serving/configuration/feature-flags/#kubernetes-init-containers +:kubernetes_init_containers: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ -This document describes how to customize the pod specification definition in thew `SonataFlow` custom resource. +This document describes how to customize the pod specification definition in the `SonataFlow` custom resource. Sometimes you may have a specific requirement to deploy containers on Kubernetes or OpenShift such as setting link:{k8s_resources_limits_url}[Resource Limits]. @@ -49,6 +52,51 @@ The `.spec.podTemplate` attribute has the majority of fields defined in the defa The `.spec.podTemplate.container` is a special attribute that you won't find in the default Kubernetes API. The reason is to avoid misconfiguration when users require to change the specific container where the workflow application is deployed. 
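+
+As a quick illustration of this special attribute, the sketch below adds an environment variable to the workflow container of an existing `SonataFlow`. The workflow name `simple`, the namespace placeholder, and the variable are illustrative:
+
+[source,bash]
+----
+# Merge-patch the 'container' entry of the workflow pod template
+kubectl patch sonataflow simple -n <your_namespace> --type merge \
+  -p '{"spec":{"podTemplate":{"container":{"env":[{"name":"MY_VAR","value":"example"}]}}}}'
+----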
+
+== Deployment Model
+
+By default, the {operator_name} deploys a `SonataFlow` instance as a regular Kubernetes Deployment object. However, it is possible to change this behavior and deploy the workflow instance as a link:{knative_serving_service_url}[Knative Serving Service] instead.
+
+To change the deployment to Knative, set the `.spec.podTemplate.deploymentModel` attribute to `knative`. For example:
+
+.Setting the Knative deployment model example
+[source,yaml,subs="attributes+"]
+----
+apiVersion: sonataflow.org/v1alpha08
+kind: SonataFlow
+metadata:
+  name: simple
+  annotations:
+    sonataflow.org/description: Simple example on k8s!
+    sonataflow.org/version: 0.0.1
+    sonataflow.org/profile: preview
+spec:
+  podTemplate:
+    deploymentModel: knative <1>
+  flow:
+    start: HelloWorld
+    states:
+      - name: HelloWorld
+        type: inject
+        data:
+          message: Hello World
+        end: true
+----
+
+<1> The `deploymentModel` attribute set to `knative`
+
+After changing the deployment model to `knative`, the `SonataFlow` instance will be deployed as a Knative Serving Service.
+
+[IMPORTANT]
+====
+It's not possible to deploy a `SonataFlow` instance as a Knative Service in the dev profile. In this profile, this attribute is ignored by the operator.
+====
+
+Note that not every use case benefits from a Knative deployment. Long-running workflow instances, for example ones that call services that might take too long to respond, are not an ideal fit for this deployment model. Opt for Knative deployments for workflows that won't take too long to run.
+
+The exception is workflows that have callback states. In this case, you must configure xref:cloud/operator/using-persistence.adoc[persistence]. This is required because once the workflow waits for the event to resume the execution, Knative will kill the pod. Since the workflow has persistence, it will resume the execution once it receives the callback event.
+
+Knative **does not support** link:{kubernetes_init_containers}[`initContainers`] by default. If your workflow requires it, you must first enable the extension in the Knative installation. See more information in the link:{knative_serving_initcontainer}[Knative documentation].
+
 == Customization Exceptions
 
 Besides customizing the default container, you can add more `containers`, `initContainers`, or `volumes` to the pod. There are a few exceptions listed below:
@@ -147,7 +195,8 @@ For more information about configuring workflows see xref:cloud/operator/configu
 When setting the attribute `.spec.podTemplate.container.image` the operator understands that the workflow already have an image built and the user is responsible for the build and image maintainence. That means that the operator won't try to upgrade this image in the future or do any reconciliation changes to it.
 
 === Setting a custom image in devmode
-In xref:cloud/operator/developing-workflows.adoc[development profile], it's expected that the image is based on the default `{sonataflow_devmode_imagename}:{operator_version}`.
+
+In xref:cloud/operator/developing-workflows.adoc[development profile], it's expected that the image is based on the default `quay.io/kiegroup/kogito-swf-devmode:latest`.
 
 === Setting a custom image in preview
 
@@ -160,9 +209,17 @@ In this scenario, the `.spec.resources` attribute is ignored since it's only use
 xref:cloud/operator/known-issues.adoc[In the roadmap] you will find that we plan to consider the `.spec.resources` attribute when the image is specified in the default container.
==== -It's advised that the SonataFlow `.spec.flow` definition and the workflow built within the image corresponds to the same workflow. If these definitions don't match you may experience poorly management and configuration. The {operator_name} uses the `.spec.flow` attribute to configure the application, service discovery, and service binding with other deployments within the topology. +It's advised that the SonataFlow `.spec.flow` definition and the workflow built within the image corresponds to the same workflow. If these definitions don't match you may experience poor management and configuration. The {operator_name} uses the `.spec.flow` attribute to configure the application, service discovery, and service binding with other deployments within the topology. [IMPORTANT] ==== xref:cloud/operator/known-issues.adoc[It's on the roadmap] to add integrity check to the built images provided to the operator by customizing the default container. ==== + +== Additional resources + +* xref:cloud/operator/developing-workflows.adoc[] +* xref:cloud/operator/build-and-deploy-workflows.adoc[] +* xref:cloud/operator/building-custom-images.adoc[] + +include::../../../pages/_common-content/report-issue.adoc[] \ No newline at end of file diff --git a/modules/serverless-logic/pages/cloud/operator/developing-workflows.adoc b/modules/serverless-logic/pages/cloud/operator/developing-workflows.adoc index 18b5b389..f232d49c 100644 --- a/modules/serverless-logic/pages/cloud/operator/developing-workflows.adoc +++ b/modules/serverless-logic/pages/cloud/operator/developing-workflows.adoc @@ -16,6 +16,11 @@ Workflows in the development profile are not tailored for production environment {operator_name} is under active development with features yet to be implemented. Please see xref:cloud/operator/known-issues.adoc[]. ==== +.Prerequisites +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide. +* You have the cluster instance up and running. See xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting the cluster for local development] guide. + +[[proc-introduction-to-development-profile]] == Introduction to the Development Profile The development profile is the easiest way to start playing around with Workflows and the operator. @@ -74,13 +79,13 @@ spec: <2> In the `flow` attribute goes the Workflow definition as described by the xref:core/cncf-serverless-workflow-specification-support.adoc[CNCF Serverless Workflow specification]. So if you already have a workflow definition, you can use it there. Alternatively, you can use the xref:tooling/serverless-workflow-editor/swf-editor-overview.adoc[editors to create your workflow definition]. +[[proc-deploying-new-workflow]] == Deploying a New Workflow .Prerequisites -* You have xref:cloud/operator/install-serverless-operator.adoc[installed the {operator_name}] -* You have created a new {product_name} Kubernetes YAML file +* You have a new {product_name} Kubernetes Workflow definition in YAML file. You can use the Greeting example in <> section. 
-Having a new Kubernetes Workflow definition in a YAML file (you can use the above example), you can deploy it in your cluster with the following command: +Having a Kubernetes Workflow definition in a YAML file , you can deploy it in your cluster with the following command: .Deploying a new SonataFlow Custom Resource in Kubernetes [source,bash,subs="attributes+"] @@ -93,7 +98,7 @@ Alternatively, you can try one of the examples available in the operator reposit .Deploying the greeting Workflow example [source,bash,subs="attributes+"] ---- -kubectl apply -f {operator_community_prod_root}/test/testdata/sonataflow.org_v1alpha08_sonataflow_devmode.yaml -n +kubectl apply -f https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/{operator_version}/test/testdata/sonataflow.org_v1alpha08_sonataflow_devmode.yaml -n ---- [TIP] @@ -134,11 +139,11 @@ and changing the Workflow definition inside the Custom Resource Spec section. Alternatively, you can save the Custom Resource definition file and edit it with your desired editor and re-apply it. -For example using VSCode, there are the commands needed: +For example using VS Code, these are the commands needed: [source,bash,subs="attributes+"] ---- -curl -S {operator_community_prod_root}/test/testdata/sonataflow.org_v1alpha08_sonataflow_devmode.yaml > workflow_devmode.yaml +curl -S https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/{operator_version}/test/testdata/sonataflow.org_v1alpha08_sonataflow_devmode.yaml > workflow_devmode.yaml code workflow_devmode.yaml kubectl apply -f workflow_devmode.yaml -n ---- @@ -146,22 +151,58 @@ kubectl apply -f workflow_devmode.yaml -n The operator ensures that the latest Workflow definition is running and ready. This way, you can include the Workflow in your development scenario and start making requests to it. +[[proc-check-if-workflow-is-running]] == Check if the Workflow is running +.Prerequisites +* You have deployed a workflow to your cluster following the example in <> section. + In order to check that the {product_name} Greeting workflow is up and running, you can try to perform a test HTTP call. First, you must get the service URL: -.Exposing the Workflow -[source,bash,subs="attributes+"] +. Exposing the workflow +[tabs] +==== +Minikube:: ++ +-- +.Expose the workflow on minikube +[source,shell] ---- +# Input minikube service greeting -n --url + +# Example output, use the URL as a base to acces the current workflow http://127.0.0.1:57053 -# use the above output to get the current Workflow URL in your environment +# Your workflow is accessible at http://127.0.0.1:57053/greeting ---- +-- +Kind:: ++ +-- +.Expose the workflow on kind +[source,shell] +---- +# Find the service of your workflow +kubectl get service -n + +# Example output +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +greetings ClusterIP 10.96.0.1 31852/TCP 21h + +# Now forward the port and keep the terminal window open +kubectl port-forward service/greeting 31852:80 -n + +# Your workflow is accessible at localhost:31852/greetings +---- +-- +==== [TIP] ==== -When running on Minikube, the service is already exposed for you via `NodePort`. On OpenShift, link:{openshift_route_url}[a Route is automatically created in devmode]. If you're running on Kubernetes you can link:{kubernetes_url}[expose your service using an Ingress]. +* When running on Minikube, the service is already exposed for you via `NodePort`. +* On OpenShift, link:{openshift_route_url}[a Route is automatically created in devmode]. 
+* If you're running on Kubernetes you can link:{kubernetes_url}[expose your service using an Ingress]. ==== You can now point your browser to the Swagger UI and start making requests with the REST interface. @@ -259,7 +300,7 @@ It can give you a clue about what might be happening. See xref:cloud/operator/wo .Watch the workflow logs [source,shell,subs="attributes+"] ---- -kubectl logs deployment/ -f +kubectl logs deployment/ -f -n ---- + If you decide to open an issue or ask for help in {product_name} communication channels, this logging information is always useful for the person who will try to help you. diff --git a/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc b/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc new file mode 100644 index 00000000..e4a3649f --- /dev/null +++ b/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc @@ -0,0 +1,106 @@ += Global Configuration Settings +:compat-mode!: +// Metadata: +:description: Global Configuration {operator_name} for cluster admins +:keywords: sonataflow, workflow, serverless, operator, kubernetes, minikube, openshift, containers, configuration +// links + +This document describes how to set global configuration options for the {operator_name}. + +.Prerequisites +* You have installed the operator in the target cluster. You can find more information at the xref:cloud/operator/install-serverless-operator.adoc[] guide. + +== Modifying configuration options + +After installing the operator, you can access the ConfigMap named `{operator_controller_config}` in the namespace `{operator_installation_namespace}`. +This configuration file governs the operator's behavior when creating new resources in the cluster. Existing resources won't be changed after this configuration. +See the section <> for more information. + +You can freely edit any of the options in the key `controllers_cfg.yaml` entry. The table bellow lists each possible entry. + +.Description of Global Configuration +[cols="1,1,2"] +|=== +|Configuration Key | Default Value | Description + +| `defaultPvcKanikoSize` | 1Gi | The default size of Kaniko PVC when using the internal operator builder manager. +| `healthFailureThresholdDevMode` | 50 | How much time (in seconds) to wait for a devmode workflow to start. This information is used for the controller manager to create new devmode containers and setup the healthcheck probes. +| `kanikoDefaultWarmerImageTag` | gcr.io/kaniko-project/warmer:v1.9.0 | Default image used internally by the Operator Managed Kaniko builder to create the warmup pods. +| `kanikoExecutorImageTag` | gcr.io/kaniko-project/executor:v1.9.0 | Default image used internally by the Operator Managed Kaniko builder to create the executor pods. +| `jobsServicePostgreSQLImageTag` | empty | The Jobs Service image for PostgreSQL to use, if empty the operator will use the default Apache Community one based on the current operator's version. +| `jobsServiceEphemeralImageTag` | empty | The Jobs Service image without persistence to use, if empty the operator will use the default Apache Community one based on the current operator's version. +| `dataIndexPostgreSQLImageTag` | empty | The Data Index image for PostgreSQL to use, if empty the operator will use the default Apache Community one based on the current operator's version. 
+| `dataIndexEphemeralImageTag` | empty | The Data Index image without persistence to use, if empty the operator will use the default Apache Community one based on the current operator's version.
+| `sonataFlowBaseBuilderImageTag` | empty | {product_name} base builder image used in the internal Dockerfile to build workflow applications in preview profile. If empty the operator will use the default Apache Community one based on the current operator's version.
+| `sonataFlowDevModeImageTag` | empty | The image to use to deploy {product_name} workflow images in devmode profile. If empty the operator will use the default Apache Community one based on the current operator's version.
+| `builderConfigMapName` | sonataflow-operator-builder-config | The default name of the builder configMap in the operator's namespace.
+| `postgreSQLPersistenceExtensions` | next column
+| Quarkus extensions required for workflows persistence. These extensions are used by the {operator_name} builder in cases where the workflow being built has configured xref:cloud/operator/using-persistence.adoc[postgresql persistence].
+
+`Default values`:
+
+{groupId_quarkus-agroal}:{artifactId_quarkus-agroal}:{quarkus_version}
+
+{groupId_quarkus-jdbc-postgresql}:{artifactId_quarkus-jdbc-postgresql}:{quarkus_version}
+
+{groupId_kie-addons-quarkus-persistence-jdbc}:{artifactId_kie-addons-quarkus-persistence-jdbc}:{kogito_version}
+
+|===
+
+To edit this file, update the ConfigMap `sonataflow-operator-controllers-config` using your preferred tool such as `kubectl`.
+
+[#config-changes]
+== Configuration Changes Impact
+
+When updating the global configuration, the changes will take effect immediately for *newly* created resources only.
+For example, if you change the `sonataFlowDevModeImageTag` property, given that you already have a workflow deployed in _devmode_, the operator won't roll out a new one with the new image configuration. Only new deployments will be affected.
+
+== A Note About the Base Builder Image
+
+As noted in the xref:cloud/operator/build-and-deploy-workflows.adoc#changing-base-builder[Changing Base Builder] section, you can directly change the base builder image in the Dockerfile used by the {operator_name}.
+
+Additionally, you can also change the base builder image in the `SonataFlowPlatform` in the current namespace:
+
+.Example of SonataFlowPlatform with a custom base builder
+[source,yaml,subs="attributes+"]
+----
+apiVersion: sonataflow.org/v1alpha08
+kind: SonataFlowPlatform
+metadata:
+  name: sonataflow-platform
+spec:
+  build:
+    config:
+      baseImage: dev.local/my-workflow-builder:1.0.0
+----
+
+And finally, you can also change this information directly in the global configuration ConfigMap:
+
+.Example of ConfigMap global configuration with a custom base builder
+[source,yaml,subs="attributes+"]
+----
+apiVersion: v1
+data:
+  controllers_cfg.yaml: |
+    sonataFlowBaseBuilderImageTag: dev.local/my-workflow-builder:1.0.0
+kind: ConfigMap
+metadata:
+  name: sonataflow-operator-controllers-config
+  namespace: sonataflow-operator-system
+----
+
+The order of precedence is:
+
+1. The `SonataFlowPlatform` in the current context
+2. The global configuration entry
+3. The `FROM` clause in the Dockerfile in the operator's namespace `sonataflow-operator-builder-config` ConfigMap
+
+In summary, the entry in `SonataFlowPlatform` will always override any other value.
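+
+Before editing, you may want to print the embedded configuration document to see the values currently in effect. A sketch, assuming the default names shown above:
+
+[source,bash]
+----
+# Print the controllers_cfg.yaml entry of the global configuration ConfigMap
+kubectl get configmap sonataflow-operator-controllers-config \
+  -n sonataflow-operator-system -o jsonpath='{.data.controllers_cfg\.yaml}'
+----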
+ +== Additional resources + +* xref:cloud/operator/known-issues.adoc[] +* xref:cloud/operator/developing-workflows.adoc[] +* xref:cloud/operator/global-configuration.adoc[] + +include::../../../pages/_common-content/report-issue.adoc[] \ No newline at end of file diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc index 738f0781..4f9e2df3 100644 --- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc +++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc @@ -11,14 +11,16 @@ :kubernetes_operator_uninstall_url: https://olm.operatorframework.io/docs/tasks/uninstall-operator/ :operatorhub_url: https://operatorhub.io/ -This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator is in an xref:/cloud/operator/known-issues.adoc[early development stage] (community only) and has been tested on OpenShift {openshift_version_min}+ and link:{minikube_url}[Minikube]. +This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator is in an xref:cloud/operator/known-issues.adoc[early development stage] (community only) and has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube]. .Prerequisites -* An OpenShift cluster with admin privileges. Alternatively, you can use Minikube or KIND. -* `kubectl` command-line tool is installed. Otherwise, Minikube provides it. +* A Kubernetes or OpenShift cluster with admin privileges and `kubectl` installed. +* Alternatively, you can use Minikube or KIND in your local environment. See xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] and xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting the cluster for local development] guides. == {product_name} Operator OpenShift installation +=== Install + To install the operator on OpenShift refer to the "link:{openshift_operator_install_url}[Adding Operators to a cluster]" from the OpenShift's documentation. When searching for the operator in the *Filter by keyword* field, use the word `{operator_openshift_keyword}`. If you're installing from the CLI, the operator's catalog name is `{operator_openshift_catalog}`. @@ -29,6 +31,8 @@ To remove the operator on OpenShift refer to the "link:{openshift_operator_unins == {product_name} Operator Kubernetes installation +=== Install + To install the operator on Kubernetes refer to the "link:{kubernetes_operator_install_url}[How to install an Operator from OperatorHub.io]" from the OperatorHub's documentation. When link:{operatorhub_url}[searching for the operator in the *Search OperatorHub* field], use the word `{operator_k8s_keyword}`. @@ -46,51 +50,25 @@ When searching for the subscription to remove, use the word `{operator_k8s_subsc If you're running on Kubernetes or OpenShift, it is highly recommended to install the operator from the OperatorHub or OpenShift Console instead since the installation is managed by OLM. Use this method only if you need a snapshot version or you're running locally on Minikube or KIND. ==== -=== Prepare a Minikube instance - -[NOTE] -==== -You can safely skip this section if you're not using Minikube. 
-==== - .Prerequisites -* A machine with at least 8GB memory and a link:https://en.wikipedia.org/wiki/Multi-core_processor[CPU with 8 cores] -* Docker or Podman installed - -Run the following command to create a new instance capable of installing the operator and deploy workflows: - -[source,shell,subs="attributes+"] ----- -minikube start --cpus 4 --memory 4096 --addons registry --addons metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry "localhost:5000" ----- - -[NOTE] -==== -To speed up the build time, you can increase the CPUs and memory options so that your Minikube instance will have more resources. For example, use `--cpus 12 --memory 16384`. If you have already created your Minikube instance, you will need to recreate it for these changes to apply. -==== - -If Minikube does not work with the default driver, also known as `docker`, you can try to start with the `podman` driver as follows: - -.Start Minikube with the Podman driver -[source,shell,subs="attributes+"] ----- -minikube start [...] --driver podman ----- +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide. +* You have the cluster instance up and running. See xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting the cluster for local development] guide. +[[proc-install-serverless-operator-snapshot]] === Install To install the {product_name} Operator, you can use the following command: -.Install {product_name} Operator on Minikube +.Install {product_name} Operator on Kubernetes [source,shell,subs="attributes+"] ---- -kubectl create -f {operator_community_prod_yaml} +kubectl create -f https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/{operator_version}/operator.yaml ---- -You can also specify a version: +Replace `main` with specific version if needed: ---- kubectl create -f https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator//operator.yaml ---- -`` should be `9.99.x-prod`. +`` could be `1.44.1` for instance. You can follow the deployment of the {product_name} Operator: @@ -144,19 +122,24 @@ To uninstall the correct version of the operator, first you must get the current ---- kubectl get deployment sonataflow-operator-controller-manager -n sonataflow-operator-system -o jsonpath="{.spec.template.spec.containers[?(@.name=='manager')].image}" -${sonataflow_operator_imagename}:${operator_version} +quay.io/kiegroup/kogito-serverless-operator-nightly:latest ---- .Uninstalling the operator [source,shell,subs="attributes+"] ---- -kubectl delete -f ${operator_community_prod_yaml} +kubectl delete -f https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/.x/operator.yaml ---- +[TIP] +==== +The URL should be the same as the one you used when installing the operator. 
+==== + == Additional resources * xref:cloud/operator/known-issues.adoc[] * xref:cloud/operator/developing-workflows.adoc[] -* xref:cloud/operator/enabling-jobs-service.adoc[] +* xref:cloud/operator/supporting-services.adoc[Deploying Supporting Services with the Operator] include::../../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/cloud/operator/known-issues.adoc b/modules/serverless-logic/pages/cloud/operator/known-issues.adoc index f9886f5b..4ec38ee7 100644 --- a/modules/serverless-logic/pages/cloud/operator/known-issues.adoc +++ b/modules/serverless-logic/pages/cloud/operator/known-issues.adoc @@ -4,37 +4,7 @@ :description: Known issues, features, and limitations of the operator :keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, minikube, roadmap -The link:{kogito_serverless_operator_url}[{operator_name}] is currently in Alpha version, is under active development. -// == Known Bugs -== Roadmap -The following issues are currently being prioritized: - -=== CNCF Specification v0.8 Alignment - -- link:https://issues.redhat.com/browse/KOGITO-7840[Implement admission webhooks for workflow validation] - -// === Workflow Development Profile - -=== Workflow Productization Profile - -- link:https://issues.redhat.com/browse/KOGITO-8524[Enable toggle Workflow CR from devmode to production mode and vice-versa] -- link:https://issues.redhat.com/browse/KOGITO-8792[Review build failures and signal the reasoning in the Events API] -- link:https://issues.redhat.com/browse/KOGITO-8806[Evaluate internal registry integration on OpenShift, Kubernetes and Minikube] - -=== Knative Integration - -- link:https://issues.redhat.com/browse/KOGITO-9812[SonataFlow Operator integration with Knative Eventing] -- link:https://issues.redhat.com/browse/KOGITO-8496[Knative Serving Extension for Serverless Workflow specification] - -=== GitOps - -- link:https://issues.redhat.com/browse/KOGITO-9527[Extend the SonataFlow Operator with Jib builder] -- link:https://issues.redhat.com/browse/KOGITO-9833[Add external built image integrity validation] - -=== Operator SDK, OLM, OperatorHub - -- link:https://issues.redhat.com/browse/KOGITO-8182[Enable SonataFlow Operator for level 2 - Seamless Upgrades] diff --git a/modules/serverless-logic/pages/cloud/operator/referencing-resource-files.adoc b/modules/serverless-logic/pages/cloud/operator/referencing-resource-files.adoc index bdfcb483..be56488c 100644 --- a/modules/serverless-logic/pages/cloud/operator/referencing-resource-files.adoc +++ b/modules/serverless-logic/pages/cloud/operator/referencing-resource-files.adoc @@ -14,15 +14,17 @@ For example, when doing xref:service-orchestration/orchestration-of-openapi-base If these files are not in a remote location that can be accessed via the HTTP protocol, you must describe in the `SonataFlow` CR where to find them within the cluster. This is done via link:{kubernetes_configmap_url}[`ConfigMaps`]. -== Creating ConfigMaps with Workflow Additional Files +== Creating ConfigMaps with Workflow referencing additional files .Prerequisites -* You have the files available in your file system -* You have permissions to create `ConfigMaps` in the target namespace +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal environment setup] guide. +* You have the cluster instance up and running. 
See xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting the cluster for local development] guide. +* You have permissions to create `ConfigMaps` in the target namespace of your cluster. +* (Optional) You have the files that you want to reference in your workflow definition ready. -Given that you already have the file you want to add to your workflow definition, you link:{kubernetes_create_configmap_url}[can create a `ConfigMap`] as you normally would with the contents of the file. +If you already have the files referenced in your workflow definition, you link:{kubernetes_create_configmap_url}[can create a `ConfigMap`] in your target namespace with the contents of the file. -For example, given the following workflow: +In the example below, you need to use the contents of the `specs/workflow-service-schema.json` file and `specs/workflow-service-openapi.json` file to create the `ConfigMap`: .Example of a workflow referencing additional files [source,yaml,subs="attributes+"] @@ -56,11 +58,11 @@ spec: <1> The workflow defines an input schema <2> The workflow requires an OpenAPI specification file to make a REST invocation -For this example, you have two options. You can either create two `ConfigMaps` to have a clear separation of concerns or only one with both files. +The `Hello Service` workflow in the example offers two options. You can either create two `ConfigMaps`, each for one file, to have a clear separation of concerns or group them into one. From the operator perspective, it won't make any difference since both files will be available for the workflow application at runtime. -To make it simple, you can create only one `ConfigMap`. Given that the files are available in the current directory: +To make it simple, you can create only one `ConfigMap`. Navigate into the directory where your resource files are available and create the config map using following command: .Creating a ConfigMap from the current directory [source,bash,subs="attributes+"] @@ -84,10 +86,12 @@ metadata: name: service-files data: workflow-service-schema.json: # data was removed to save space + # workflow-service-openapi.json: # data was removed to save space + # ---- -Now you can reference this `ConfigMap` to your `SonataFlow` CR: +Now you can add reference to this `ConfigMap` into your `SonataFlow` CR: .SonataFlow CR referencing a ConfigMap resource [source,yaml,subs="attributes+"] diff --git a/modules/serverless-logic/pages/cloud/operator/supporting-services.adoc b/modules/serverless-logic/pages/cloud/operator/supporting-services.adoc index 49353438..67675ad4 100644 --- a/modules/serverless-logic/pages/cloud/operator/supporting-services.adoc +++ b/modules/serverless-logic/pages/cloud/operator/supporting-services.adoc @@ -6,110 +6,228 @@ // links :kogito_serverless_operator_url: https://github.com/apache/incubator-kie-kogito-serverless-operator/ -By default, workflows use an embedded version of xref:../../data-index/data-index-core-concepts.adoc[Data Index]. This document describes how to deploy supporting services, like Data Index, on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}]. -[IMPORTANT] -==== -{operator_name} is under active development with features yet to be implemented. Please see xref:cloud/operator/known-issues.adoc[]. 
-==== +This document describes how to configure and deploy the {product_name}'s xref:data-index/data-index-core-concepts.adoc[Data Index] and xref:job-services/core-concepts.adoc[Job Service] supporting services, using the {operator_name}. + +In general, in a regular {product_name} installation you must deploy both services to ensure a successful execution of your workflows. To get more information about each service please read the respective guides. .Prerequisites -* The {operator_name} installed. See xref:cloud/operator/install-serverless-operator.adoc[] guide -* A postgresql database, if persistence is required +* The {operator_name} installed. See xref:cloud/operator/install-serverless-operator.adoc[] guide. +* A PostgreSQL database service instance. Required if you are planning to use the <> for a supporting service. -[#deploy-supporting-services] -== Deploy supporting services +[#supporting-services-workflow-communications] +== Supporting Services and Workflow communications + +When you deploy a supporting service in a given namespace, you can do it by using an <> deployment. -=== Data Index +An enabled deployment, signals the {operator_name} to automatically intercept every workflow deployment with the `preview` or `gitops` profile, in this namespace, and automatically configure it to connect with that service. -You can deploy Data Index via `SonataFlowPlatform` configuration. The operator will then configure all new workflows, with the "prod" profile, to use that Data Index. +For example, if the Data Index is enabled, a workflow will be automatically configured to send workflow status change events to it. +And, similar configurations are produced if the Job Service is enabled, to create a Job, every time a workflow requires a timeout. +Additionally, the operator will configure the Job Service to send events to the Data Index Service, etc. -Following is a basic configuration. It will deploy an ephemeral Data Index to the same namespace as the `SonataFlowPlatform`. +As you can see, the operator can not only deploy a supporting service, but also, manage other configurations to ensure the successful execution of a workflow. + +Fortunately, all these configurations are managed automatically, and you must only provide the supporting services configuration in the `SonataFlowPlatform` CR. + +[NOTE] +==== +Scenarios where you only deploy one of the supporting services, or configure a disabled deployment, are intended for advanced use cases. +In a regular installation, you must normally configure an enabled deployment of both services to ensure a successful execution of your workflows. +==== + +[#deploy-supporting-services] +== Deploying the supporting services using the `SonataFlowPlatform` CR + +To deploy the supporting services you must use the sub-fields `dataIndex` and `jobService` in the `SonataFlowPlatform` CR `spec.services`. +That information signals the {operator_name} to deploy each service when the `SonataFlowPlatform` CR is deployed. + +[NOTE] +==== +Each service configuration is considered independently, and you can combine these configurations with any other configuration present in the `SonataFlowPlatform` CR. 
+==== -.Example of a SonataFlowPlatform instance with an ephemeral Data Index deployment -[source,yaml,subs="attributes+"] +The following `SonataFlowPlatform` CR fragment shows a scaffold configuration that you can use as reference: +[#supporting-services-configuration] +.Supporting services configuration +[source,yam] ---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: - name: sonataflow-platform + name: sonataflow-platform-example + namespace: example-namespace spec: services: - dataIndex: {} + dataIndex: <1> + enabled: true <2> + # Specific configurations for the Data Index Service + # might be included here + jobService: <3> + enabled: true <4> + # Specific configurations for the Job Service + # might be included here ---- - -If you require Data Index persistence, this can be done with a `postgresql` database. -ifeval::["{kogito_version_redhat}" != ""] -include::../../../../pages/_common-content/downstream-project-setup-instructions.adoc[] -endif::[] +[#enabled-deployment-field] +<1> Data Index Service configuration field. +<2> If true, produces an enabled Data Index Service deployment, <>. Other cases produce a disabled deployment. The default is `false`. +<3> Job Service configuration field. +<4> If true, produces an enabled Job Service deployment, <>. Other cases produce a disabled deployment. The default is `false`. -Following is a services configuration with the persistence option enabled. You'll first need to create a secret with your database credentials. +[NOTE] +==== +The configuration above produces an ephemeral deployment of each service, <>. +==== -.Create a Secret for datasource authentication. -[source,bash,subs="attributes+"] ----- -kubectl create secret generic --from-literal=POSTGRESQL_USER= --from-literal=POSTGRESQL_PASSWORD= -n workflows ----- +== Supporting Services Scope + +The `SonataFlowPlatform` CR facilitates the deployment of the supporting services with namespace scope. +It means that, all the automatically configured <>, are restricted to the namespace of the given platform. +This can be useful, in cases where you need separate supporting service instances for a set of workflows. +For example, a given application can be deployed isolated with its workflows, and the supporting services. + +Additionally, using the `SonataFlowClusterPlatform` CR it's possible to configure a <> of the supporting services. + +== Configuring the Supporting Services Persistence + +[#ephemeral-persistence-configuration] +=== Ephemeral persistence configuration -.Example of a SonataFlowPlatform instance with a Data Index deployment persisted to a postgresql database -[source,yaml,subs="attributes+"] +The ephemeral persistence of a service is supported by an embedded PostgreSQL database dedicated to it. That database is re-created by the operator on every service restart. +And thus, it's only recommended for development and testing purposes. + +The ephemeral deployment of a service requires no additional configurations than the shown, <>. + +[#postgresql-persistence-configuration] +=== PostgreSQL persistence configuration + +The PostgreSQL persistence of a service is supported by a PostgreSQL server instance that you must previously install on the cluster. +The administration of that instance is totally independent of the {operator_name} scope, and to connect a supporting service with it, you must only configure the correct database connection parameters. 
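If you do not already have a PostgreSQL instance, a minimal test setup could look like the following sketch. This deployment is outside the operator's scope; the image tag and labels are assumptions for illustration only, while the Service name, namespace, port, and Secret keys are aligned with the CR fragment below:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-example
  namespace: postgres-example-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-example
  template:
    metadata:
      labels:
        app: postgres-example
    spec:
      containers:
        - name: postgres
          image: postgres:15 # illustrative image tag
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets-example
                  key: POSTGRESQL_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets-example
                  key: POSTGRESQL_PASSWORD
            - name: POSTGRES_DB
              value: example-database
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-example
  namespace: postgres-example-namespace
spec:
  selector:
    app: postgres-example
  ports:
    - port: 1234       # the port referenced by the CR fragment below
      targetPort: 5432 # the default PostgreSQL port inside the container
----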
+ +The following `SonataFlowPlatform` CR fragment shows the configuration options that you must use: + +.PostgreSQL persistence configuration +[source,yaml] ---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: - name: sonataflow-platform + name: sonataflow-platform-example + namespace: example-namespace spec: services: dataIndex: + enabled: true persistence: postgresql: - secretRef: - name: <1> serviceRef: - name: <2> + name: postgres-example <1> + namespace: postgres-example-namespace <2> + databaseName: example-database <3> + databaseSchema: data-index-schema <4> + port: 1234 <5> + secretRef: + name: postgres-secrets-example <6> + userKey: POSTGRESQL_USER <7> + passwordKey: POSTGRESQL_PASSWORD <8> + jobService: + enabled: true + persistence: + postgresql: + # Specific database configuration for the Job Service + # might be included here. ---- -<1> Name of your postgresql credentials secret -<2> Name of your postgresql k8s service +<1> Name of the Kubernetes Service to connect with the PostgreSQL database server. +<2> (Optional) Kubernetes namespace containing the PostgreSQL Service. Defaults to the `SonataFlowPlatform's` local namespace. +<3> Name of the PostgreSQL database to store the supporting service data. +<4> (Optional) Name of the PostgreSQL database schema to store the supporting service data. +Defaults to the `SonataFlowPlatform's` `name`, suffixed with `-data-index-service` or `-jobs-service`. For example, `sonataflow-platform-example-data-index-service`. +<5> (Optional) Port number to connect with the PostgreSQL Service. Defaults to 5432. +<6> Name of the link:{k8n_secrets_url}[Kubernetes Secret] containing the username and password to connect with the database. +<7> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the username to connect with the database. +<8> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the password to connect with the database. + +[NOTE] +==== +The persistence of each service can be configured independently by using the respective `persistence` field. +==== -.Example of a SonataFlowPlatform instance with a persisted Data Index deployment and custom pod configuration -[source,yaml,subs="attributes+"] +To create the secrets for the example above you can use a command like this: + +.Create secret example +[source,bash] +---- +kubectl create secret generic postgres-secrets-example --from-literal=POSTGRESQL_USER= --from-literal=POSTGRESQL_PASSWORD= -n postgres-example-namespace +---- + + +[#common-persistence-configuration] +=== Common PostgreSQL persistence configuration + +To configure a common PostgreSQL service instance for all the supporting services you must read, xref:cloud/operator/using-persistence.adoc#configuring-persistence-using-the-sonataflowplatform-cr[Configuring the persistence using the SonataFlowPlatform CR]. + +In that case, the {operator_name} will automatically connect any of the supporting services with that common server configured in the field `spec.persistence`. And, similarly to the workflow's persistence, the following precedence rules apply: + +* If a supporting service has a configured persistence, for example, the field `services.dataIndex.persistence` is configured. That configuration will apply. + +* If a supporting service has no configured persistence, for example, the field `services.dataIndex.persistence` is not set at all, the persistence configuration will be taken from the current platform. 
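For example, in the following sketch the Job Service inherits the common platform persistence, while the Data Index overrides it with its own configuration. The `another-postgres` names are hypothetical and stand for a second, independently managed database:

[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform-example
  namespace: example-namespace
spec:
  persistence: # common PostgreSQL configuration
    postgresql:
      serviceRef:
        name: postgres-example
        databaseName: example-database
      secretRef:
        name: postgres-secrets-example
        userKey: POSTGRESQL_USER
        passwordKey: POSTGRESQL_PASSWORD
  services:
    dataIndex:
      enabled: true
      persistence: # explicitly configured: takes precedence for the Data Index
        postgresql:
          serviceRef:
            name: another-postgres
            databaseName: another-database
          secretRef:
            name: another-postgres-secrets
            userKey: POSTGRESQL_USER
            passwordKey: POSTGRESQL_PASSWORD
    jobService:
      enabled: true # no persistence field: inherits spec.persistence
----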
+ + +[NOTE] +==== +When you use the common PostgreSQL configuration, the database schema for each supporting service is automatically configured as the SonataFlowPlatform’s `name`, suffixed with `-data-index-service` or `-jobs-service` respectively. +For example, `sonataflow-platform-example-data-index-service`. +==== + +== Advanced Supporting Services Configurations + +To configure the advanced options for any of the supporting services you must use the `podTemplate` field respectively, for example `dataIndex.podTemplate`: + +.Advanced configurations example for the Data Index Service. +[source,yaml] ---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: - name: sonataflow-platform + name: sonataflow-platform-example + namespace: example-namespace spec: services: dataIndex: - enabled: false <1> - persistence: - postgresql: - secretRef: - name: - userKey: <2> - jdbcUrl: "jdbc:postgresql://host:port/database?currentSchema=data-index-service" <3> + enabled: true podTemplate: - replicas: 1 <4> - container: - image: <5> + replicas: 2 <1> + container: <2> + env: <3> + - name: ANY_ADVANCED_CONFIG_PROPERTY + value: any-value + image: <4> + initContainers: <5> ---- -<1> Determines whether "prod" profile workflows should be configured to use this service, defaults to `true` -<2> Secret key of your postgresql credentials user, defaults to `POSTGRESQL_USER` -<3> PostgreSql JDBC URL -<4> Number of Data Index pods, defaults to `1` -<5> Custom Data Index container image name +<1> Number of replicas. Defaults to 1. In the case of the jobService this value is always overridden to 1 by the operator, since that service is a singleton service. +<2> Holds the particular configurations for the container that will execute the given supporting service. +<3> Standard Kubernetes `env` configuration. This can be useful in cases where you need to fine tune any of the supporting services properties. +<4> Standard Kubernetes `image` configuration. This can be useful in cases where you need to use an updated image for any of the supporting services. +<5> Standard Kubernetes `initContainers` for the Pod that executes the supporting service. -[#cluster-wide-services] -== Cluster-Wide Supporting Services +[NOTE] +==== +The `podTemplate` field of any supporting service has the majority of fields defined in the default Kubernetes PodSpec API. +The same Kubernetes API validation rules apply to these fields. +==== + +[#cluster-scoped-deployment] +== Cluster Scoped Supporting Services -The `SonataFlowClusterPlatform` CR is optionally used to specify a cluster-wide set of supporting services for workflow consumption. This is done by referencing an existing, namespaced `SonataFlowPlatform` resource. +The `SonataFlowClusterPlatform` CR is optionally used to specify a cluster-wide set of supporting services for workflow consumption. +This is done by referencing an existing, namespaced `SonataFlowPlatform` CR. -Following is a basic configuration. It will allow workflows cluster-wide to leverage whatever supporting services are configured in the chosen "central" namespace. +Following is a basic configuration that allows workflows, deployed in any namespace, to leverage supporting services deployed in the chosen `example-namespace` namespace. 
-.Example of a basic SonataFlowClusterPlatform CR -[source,yaml,subs="attributes+"] +.Example of a SonataFlowClusterPlatform CR +[source,yaml] ---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowClusterPlatform @@ -117,19 +235,30 @@ metadata: name: cluster-platform spec: platformRef: - name: sonataflow-platform - namespace: + name: sonataflow-platform-example <1> + namespace: example-namespace <2> ---- +<1> Name of the already installed `SonatataFlowPlatform` CR that configures the supporting services. +<2> Namespace of the already installed `SontataFlowPlatform` CR that configures the supporting services. + [NOTE] ==== These cluster-wide services can be overridden in any namespace, by configuring that namespace's `SonataFlowPlatform.spec.services`. ==== +== Conclusions + +The {operator_name} extends its scope to manage the lifecycle of the xref:data-index/data-index-core-concepts.adoc[Data Index] and xref:job-services/core-concepts.adoc[Job Service] instances, thus removing the burden on the users and allowing them to focus on the implementation of the workflows. +It takes care also of managing all the configurations to facilitate communication between the workflows and the supporting services. +Additionally, it can manage different persistence options for each service, and advanced configurations. + + == Additional resources -* xref:../../data-index/data-index-service.adoc[] -* xref:cloud/operator/enabling-jobs-service.adoc[] +* xref:data-index/data-index-core-concepts.adoc[] +* xref:job-services/core-concepts.adoc[Job Service Core Concepts] +* xref:cloud/operator/using-persistence.adoc[] * xref:cloud/operator/known-issues.adoc[] include::../../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc b/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc index 21456de3..54a6e177 100644 --- a/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc +++ b/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc @@ -1,110 +1,301 @@ -= Using persistence in the SonataFlow Workflow CR += Using persistence in {product_name} workflows :compat-mode!: // Metadata: :description: Using persistence in the workflow instance to store its context :keywords: sonataflow, workflow, serverless, operator, kubernetes, persistence +This document describes how to configure a `SonataFlow` instance to use persistence and store the workflow context in a relational database. -This document describes how to configure a SonataFlow instance to use persistence to store the flow's context in a relational database. +Kubernetes's pods are stateless by definition. In some scenarios, this can be a challenge for workloads that require maintaining the status of +the application regardless of the pod's lifecycle. In the case of {product_name}, by default, the context of the workflow is lost when the pod restarts. -== Configuring the SonataFlow CR to use persistence +If your workflow requires recovery from such scenarios, you must provide additional configuration to enable the xref:persistence/core-concepts.adoc#_workflow_runtime_persistence[Workflow Runtime Persistence]. +That configuration must be provided by using the <> or the <<_configuring_the_persistence_using_the_sonataflow_cr, `SonataFlow` CR>>, and has different scopes depending on each case. -Kubernetes's pods are stateless by definition. 
In some scenarios, this can be a challenge for workloads that require maintaining the status of -the application regardless of the pod's lifecycle. In the case of {product_name}, the context of the workflow is lost when the pod restarts. -If your workflow requires recovery from such scenarios, you must to make these additions to your workflow CR: -Use the `persistence` field in the `SonataFlow` workflow spec to define the database service located in the same cluster. -There are 2 ways to accomplish this: +[#configuring-persistence-using-the-sonataflowplatform-cr] +== Configuring the persistence using the SonataFlowPlatform CR -* Using the Platform CR's defined persistence -When the Platform CR is deployed with its persistence spec populated it enables workflows to leverage its configuration to populate the persistence -properties in the workflows. +The `SonataFlowPlatform` CR facilitates the configuration of the persistence with namespace scope. It means that it will be automatically applied to all the workflows deployed in +that namespace. This can be useful to reduce the amount resources to configure, for example, in cases where the workflows deployed in that namespace belongs to the same application, etc. +That decision is left to each particular use case, however, it's important to know, that this configuration can be overridden by any workflow in that namespace by using the <<_configuring_the_persistence_using_the_sonataflow_cr, `SonataFlow` CR>>. -[source,yaml,subs="attributes+"] ---- +Finally, the {operator_name} can also use this configuration to set the xref:cloud/operator/supporting-services.adoc#common-persistence-configuration[supporting service's persistence]. + +[NOTE] +==== +Persistence configurations are applied at workflow deployment time, and potential changes in the SonataFlowPlatform will not impact already deployed workflows. +==== + +To configure the persistence you must use the `persistence` field in the SonataFlowPlatform CR `spec`: + +.SonataFlowPlatform CR persistence configuration example +[source,yaml] +---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlowPlatform metadata: - name: sonataflow-platform + name: sonataflow-platform-example + namespace: example-namespace spec: persistence: postgresql: - secretRef: - name: postgres-secrets - userKey: POSTGRES_USER - passwordKey: POSTGRES_PASSWORD serviceRef: - name: postgres - port: 5432 - databaseName: sonataflow - databaseSchema: shared - build: - config: - strategyOptions: - KanikoBuildCacheEnabled: "true" ---- - -The values of `POSTGRES_USER` and `POSTGRES_PASSWORD` are the keys in the https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes secret] that contains the credentials to connect to the postgreSQL instance. -The SonataFlow Workflow CR -is defined with its `Persistence` field defined as empty. + name: postgres-example <1> + namespace: postgres-example-namespace <2> + databaseName: example-database <3> + port: 1234 <4> + secretRef: + name: postgres-secrets-example <5> + userKey: POSTGRESQL_USER <6> + passwordKey: POSTGRESQL_PASSWORD <7> +---- + +<1> Name of the Kubernetes Service to connect with the PostgreSQL database server. +<2> (Optional) Kubernetes namespace containing the PostgreSQL Service. Defaults to the `SonataFlowPlatform's` local namespace. +<3> Name of the PostgreSQL database to store the workflow's data. +<4> (Optional) Port number to connect with the PostgreSQL Service. Defaults to 5432. 
+<5> Name of the link:{k8n_secrets_url}[Kubernetes Secret] containing the username and password to connect with the database. +<6> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the username to connect with the database. +<7> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the password to connect with the database. + +This configuration signals the operator that every workflow deployed in the current `SonataFlowPlatform's` namespace must be properly configured to connect with that PostgreSQL database server. +And the operator will add the relevant JDBC connection parameters in the form of environment variables to the workflow container. +Additionally, for `SonataFlow` CR deployments that use the `preview` profile, it will configure the {product_name} build system to include specific Quarkus extensions required for persistence. + +[NOTE] +==== +Currently, PostgreSQL is the only supported persistence. +==== + +Below you can see an example of the configurations produced for a workflow with the name `example-workflow`, that was deployed using the previous `SonataFlowPlatform`. +For simplicity, only the `env` configurations related to the persistence has been included. These operator managed configurations are immutable. + +[#persistence_env_vars_config_example] +.Generated persistence `env` configurations in the workflow container [source,yaml,subs="attributes+"] +---- + env: + - name: QUARKUS_DATASOURCE_USERNAME + valueFrom: + secretKeyRef: + name: postgres-secrets-example + key: POSTGRESQL_USER + - name: QUARKUS_DATASOURCE_PASSWORD + valueFrom: + secretKeyRef: + name: postgres-secrets-example + key: POSTGRESQL_PASSWORD + - name: QUARKUS_DATASOURCE_DB_KIND + value: postgresql + - name: QUARKUS_DATASOURCE_JDBC_URL + value: >- + jdbc:postgresql://postgres-example.postgres-example-namespace:1234/sonataflow?currentSchema=example-workflow + - name: KOGITO_PERSISTENCE_TYPE + value: jdbc +---- + +[IMPORTANT] +==== +When you use the `SonataFlowPlatform` persistence, every workflow is configured to use a PostgreSQL schema name equal to the workflow name. +==== + +To learn how to initialize the database schema see: <<_database_schema_initialization, Database schema initialization>>. + +== Configuring the persistence using the SonataFlow CR + +The `SonataFlow` CR facilitates the configuration of the persistence with workflow scope, and you can use it independently if the `SonataFlowPlatform` persistence was already configured in the current namespace, see: <<_persistence_configuration_precedence_rules, Persistence configuration precedence rules>>. + +To configure the persistence, you must use the `persistence` field in the `SonataFlow` CR `spec`: + +.SonataFlow CR persistence configuration example +[source,yaml] +---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: - name: callbackstatetimeouts + name: example-workflow annotations: - sonataflow.org/description: Callback State Timeouts Example k8s + sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: - persistence: {} - ... ---- + persistence: + postgresql: + serviceRef: + name: postgres-example <1> + namespace: postgres-example-namespace <2> + databaseName: example-database <3> + databaseSchema: example-schema <4> + port: 1234 <5> + secretRef: + name: postgres-secrets-example <6> + userKey: POSTGRESQL_USER <7> + passwordKey: POSTGRESQL_PASSWORD <8> + flow: + ... +---- + +<1> Name of the Kubernetes Service to connect with the PostgreSQL database server. 
+<2> (Optional) Kubernetes namespace containing the PostgreSQL Service. Defaults to the workflow's local namespace. +<3> Name of the PostgreSQL database to store the workflow's data. +<4> (Optional) Name of the database schema to store workflow's data. Defaults to the workflow's name. +<5> (Optional) Port number to connect with the PostgreSQL Service. Defaults to 5432. +<6> Name of the link:{k8n_secrets_url}[Kubernetes Secret] containing the username and password to connect with the database. +<7> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the username to connect with the database. +<8> Name of the link:{k8n_secrets_url}[Kubernetes Secret] `key` containing the password to connect with the database. -This configuration signals the operator that the workflow requires persistence and that it expects its configuration to be populated accordingly. -The operator will add the relevant JDBC properties in the `application.properties` -generated and as part of the pod´s environment so that it can connect to the persistence service defined in the `Platform` CR. + +This configuration signals the operator that the current workflow must be properly configured to connect with that PostgreSQL database server when deployed. +Similar to the `SonataFlowPlatform` persistence, the operator will add the relevant JDBC connection parameters in the form of <> to the workflow container. + +Additionally, for `SonataFlow` CR deployments that use the `preview` profile, it can configure the {product_name} build system to include specific Quarkus extensions required for persistence. [NOTE] ==== -Currently, PostgreSQL is the only persistence supported. +Currently, PostgreSQL is the only supported persistence. ==== -* Using the custom defined persistence in the `SonataFlow` CR +To learn how to initialize the database schema see: <<_database_schema_initialization, Database schema initialization>>. -Alternatively, you can define a dedicated configuration in the `SonataFlow` CR instance using the same schema format found in the Platform CRD: +== Persistence configuration precedence rules -[source,yaml,subs="attributes+"] +<> can be used with or without the <>. + +And, if the current namespace has an already configured <>, the following rules apply: + +* If the `SonataFlow` CR has a configured persistence, that configuration will apply. +* If the `SonataFlow` CR has no configured persistence, i.e., the field `spec.persistence` is not present at all, the persistence configuration will be taken from the current platform. +* If you don't want the current workflow to use `persistence`, you must use the following configuration in the SonataFlow CR: `spec.persistence : {}` to ignore the `SonataFlowPlatform` persistence configuration. + +== Persistence configuration and SonataFlow profiles + +All the configurations shown for the `SonataFlowPlatform CR` and `SonataFlow CR`, apply exactly the same for both the `preview` and the `gitops` profiles. +However, you must not use them in the `dev` profile, since this profile will simply ignore them. + +Finally, the only distinction between `preview` and `gitops` profiles is that, when you use the `gitops` profile, the following Quarkus extensions must be added when you build your workflow image. Since that build is accomplished outside the operator scope. 
+ +[cols="40%,40%,20%", options="header"] +|=== +|groupId +|artifactId +|version + +| {groupId_quarkus-agroal} +| {artifactId_quarkus-agroal} +| {quarkus_version} + +| {groupId_quarkus-jdbc-postgresql} +| {artifactId_quarkus-jdbc-postgresql} +| {quarkus_version} + +| {groupId_kie-addons-quarkus-persistence-jdbc} +| {artifactId_kie-addons-quarkus-persistence-jdbc} +| {kogito_version} +|=== + +If you generate your images by using the `kogito-swf-builder`, you can do it by passing it the following build argument: + +[source,bash,subs="attributes+"] +---- +QUARKUS_EXTENSIONS={groupId_quarkus-agroal}:{artifactId_quarkus-agroal}:{quarkus_version},{groupId_quarkus-jdbc-postgresql}:{artifactId_quarkus-jdbc-postgresql}:{quarkus_version},{groupId_kie-addons-quarkus-persistence-jdbc}:{artifactId_kie-addons-quarkus-persistence-jdbc}:{kogito_version} +---- + +== Database schema initialization + +When you use the `SonataFlow` PostgreSQL persistence, you can either opt to use Flyway to produce the database initialization, or manually upgrade your database via DDL scripts. + +=== Flyway managed database initialization + +To enable Flyway you must use any of the following configuration procedures: + +[NOTE] +==== +The Flyway schema initialization is disabled by default. +==== + +=== Flyway configuration by using the workflow ConfigMap + +Add the following property to your workflow ConfigMap. + +.Example of enabling Flyway by using the workflow ConfigMap +[source,yaml] +---- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app: example-workflow + name: example-workflow-props +data: + application.properties: | + quarkus.flyway.migrate-at-start = true +---- + +=== Flyway configuration by using the workflow container env vars + +Add the following `env` var in the `spec.podTemplate.container` of the `SonataFlow` CR. + +.Example of enabling Flyway by using the workflow container env vars +[source, yaml] +---- apiVersion: sonataflow.org/v1alpha08 kind: SonataFlow metadata: - name: callbackstatetimeouts + name: example-workflow annotations: - sonataflow.org/description: Callback State Timeouts + sonataflow.org/description: Example Workflow sonataflow.org/version: 0.0.1 spec: - persistence: - postgresql: - secretRef: - name: postgres-secrets - userKey: POSTGRES_USER - passwordKey: POSTGRES_PASSWORD - serviceRef: - name: postgres - port: 5432 - databaseName: sonataflow - databaseSchema: callbackstatetimeouts - ... ---- + podTemplate: + container: + env: + - name: QUARKUS_FLYWAY_MIGRATE_AT_START + value: 'true' + flow: ... +---- + +=== Flyway configuration by using SonataFlowPlatForm properties -Like in the Platform CR case, the values of the `POSTGRES_USER` and `POSTGRES_PASSWORD` are the secret keys in the secret that contain the credentials to connect to -the PostgreSQL instace. +To apply a common Flyway configuration to all the workflows in a given namespace, you can use the `spec.properties` of the `SonataFlowPlatform` in that namespace. + +.Example of enabling Flyway by using the SonataFlowPlatform properties. +[source,yaml] +---- +apiVersion: sonataflow.org/v1alpha08 +kind: SonataFlowPlatform +metadata: + name: sonataflow-platform +spec: + properties: + - name: quarkus.flyway.migrate-at-start + value: true +---- + +[NOTE] +==== +The configuration above takes effect at workflow deployment time, so you must be sure that property is configured before you deploy your workflows. 
+==== + +=== Manual database initialization by using DDL + +To initialize the database schema manually, you must be sure that the following application property `quarkus.flyway.migrate-at-start` is not configured, or is set to `false`, and follow this xref:use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc#manually-executing-scripts[procedure]. + +[NOTE] +==== +Remember that: + +* By default, every workflow is configured use a schema name equal to the workflow name, and thus, that manual initialization must be applied for every workflow. +* When you use the <> it is possible to use the schema name of your choice. +==== == Conclusion -You can enable SQL persistence in your workflows by configuring each `SonataFlow` CR instance. And when the `SonataFlowPlatform` CR contains the persistence field configured, -the operator uses this information to configure those `SonataFlow` CRs that request persistence. When both the `Platform CR` and the `SonataFlow CR` contain persistence -configuration, the operator will use the `Persistence` values from the `SonataFlow` CR. +By using the `SonataFlowPlatform` CR you can enable the persistence for all the workflows that you deploy in that namespace. +And, by using the `SonataFlow` CR you can enable the persistence of a particular workflow. If both methods are present in the current namespace, the `SonataFlow` CR configuration has precedence over the `SonataFlowpPlatform` configuration. + + == Additional resources * xref:cloud/operator/developing-workflows.adoc[] +* xref:persistence/core-concepts.adoc[] include::../../../pages/_common-content/report-issue.adoc[] \ No newline at end of file diff --git a/modules/serverless-logic/pages/cloud/operator/workflow-status-conditions.adoc b/modules/serverless-logic/pages/cloud/operator/workflow-status-conditions.adoc index 05df329c..f3fc0f34 100644 --- a/modules/serverless-logic/pages/cloud/operator/workflow-status-conditions.adoc +++ b/modules/serverless-logic/pages/cloud/operator/workflow-status-conditions.adoc @@ -8,7 +8,7 @@ This document describes the Status and Conditions of a `SonataFlow` object manag link:https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties[Kubernetes Status] is an important property to observe in order to understand what is currently happening with the object. It can also help you troubleshoot or integrate with other objects in the cluster. -You can inspect the Status of any Workflow object using the following command: +You can inspect the `Status` of any workflow object using the following command: .Checking the Workflow Status [source,bash,subs="attributes+"] @@ -18,7 +18,7 @@ kubectl get workflow -n -o jsonpath={.stat == General Status -The table below lists the general structure of a Workflow status: +The table below lists the general structure of a workflow status: .Description of SonataFlow Status object [cols="1,2"] @@ -43,11 +43,11 @@ The `Conditions` property might vary depending on the Workflow profile. The next == Development Profile Conditions -When you deploy a Workflow with the xref:cloud/operator/developing-workflows.adoc[development profile], the operator deploys a ready-to-use container with a running Workflow instance. +When you deploy a workflow with the xref:cloud/operator/developing-workflows.adoc[development profile], the operator deploys a ready-to-use container with a running workflow instance. -The following table lists the possible Conditions. 
+The following table lists the possible `Conditions`. -.Conditions Scenarios in Development +.Conditions Scenarios in Development mode [cols="0,0,1,2"] |=== |Condition | Status | Reason | Description @@ -80,17 +80,17 @@ The following table lists the possible Conditions. | Running | False | AttemptToRedeployFailed -| If the Workflow Deployment is not available, the operator will try to rollout the Deployment three times before entering this stage. Check the message in this Condition and the Workflow Pod logs for more info +| If the Workflow Deployment is not available, the operator will try to roll out the Deployment three times before entering this stage. Check the message in this Condition and the Workflow Pod logs for more info |=== -In normal conditions, the Workflow will transition from `Running`, `WaitingForDeployment` condition to `Running`. In case something wrong happens, consult the section xref:cloud/operator/developing-workflows.adoc#troubleshooting[Workflow Troubleshooting in Development]. +In normal conditions, the Workflow will transition from `Running` to `WaitingForDeployment`and to `Running` condition. In case something wrong happens, consult the section xref:cloud/operator/developing-workflows.adoc#troubleshooting[Workflow Troubleshooting in development mode]. -== Production Profile Conditions +== Preview Profile Conditions -Deploying the Workflow in xref:cloud/operator/build-and-deploy-workflows.adoc[Production profile] makes the operator build an immutable image for the Workflow application. The progress of the immutable image build can be followed by observing the Workflow Conditions. +Deploying the Workflow in xref:cloud/operator/build-and-deploy-workflows.adoc[preview profile] makes the operator build an immutable image for the Workflow application. The progress of the immutable image build can be followed by observing the Workflow Conditions. 
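For example, you can inspect the conditions while the build progresses. The following is a sketch assuming a workflow named `greeting` deployed in the `my-namespace` namespace:

[source,bash]
----
# Print the current conditions of the workflow object.
kubectl get workflow greeting -n my-namespace -o jsonpath='{.status.conditions}'

# Or watch the object until the build and deployment complete.
kubectl get workflow greeting -n my-namespace -w
----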
-.Condition Scenarios in Production +.Condition Scenarios in Preview mode [cols="0,0,1,2"] |=== |Condition | Status | Reason | Description From 3f6e17ff67bb4953c8c6ecabf278132f6b6943ff Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:17:21 +0200 Subject: [PATCH 11/38] SRVLOGIC-261: Sync DI, JS and integrations to latest version --- .../data-index/data-index-core-concepts.adoc | 45 +- .../pages/data-index/data-index-service.adoc | 23 +- .../pages/integrations/core-concepts.adoc | 12 +- .../pages/job-services/core-concepts.adoc | 489 ++++++++++++++++++ 4 files changed, 529 insertions(+), 40 deletions(-) create mode 100644 modules/serverless-logic/pages/job-services/core-concepts.adoc diff --git a/modules/serverless-logic/pages/data-index/data-index-core-concepts.adoc b/modules/serverless-logic/pages/data-index/data-index-core-concepts.adoc index 953e8c1d..afd69aa3 100644 --- a/modules/serverless-logic/pages/data-index/data-index-core-concepts.adoc +++ b/modules/serverless-logic/pages/data-index/data-index-core-concepts.adoc @@ -7,8 +7,6 @@ :cloud_events_url: https://cloudevents.io/ :graphql_url: https://graphql.org :vertx_url: https://vertx.io/ -:infinispan_url: https://infinispan.org/ -:mongo_url: https://www.mongodb.com/ :postgresql_url: https://www.postgresql.org/ :dev_services_url: https://quarkus.io/guides/dev-services :flyway_quarkus_url: https://quarkus.io/guides/flyway @@ -16,14 +14,17 @@ // Referenced documentation pages :path_resolution_url: https://quarkus.io/blog/path-resolution-in-quarkus/#defaults -In {product_name} platform there is a dedicated supporting service that stores the data related to the {workflow_instances} and their associated jobs called *{data_index_ref}* service. -This service also provides a GraphQL endpoint allowing users to query that data and perform operations, also known as mutations in GraphQL terms. +.Prerequisites +* Basic understanding of link:https://graphql.org/learn/[GraphQL]. -The data processed by the {data_index_ref} service is usually received via events. The events consumed can be generated by any workflow or the xref:job-services/core-concepts.adoc[Job service] itself. +*{data_index_ref}* service is a dedicated supporting service that stores the data related to the {workflow_instances} and their associated jobs. +This service provides a GraphQL endpoint allowing users to query and modify that data. + +The data processed by the {data_index_ref} service are received via events. The events that {data_index_ref} consumes can be generated by any workflow or the xref:job-services/core-concepts.adoc[Job service] itself. This event communication can be configured in different ways as described in the <> section. The {data_index_ref} service uses Apache Kafka or Knative eventing to consume link:{cloud_events_url}[CloudEvents] messages from workflows. -The event data is indexed and stored in the database for querying via GraphQL. These events contain information about units of work executed for a workflow. +The event data is indexed and stored in the database for access via GraphQL. These events contain information about units of work executed for a workflow. The {data_index_ref} service is at the core of all {product_name} search, insight, and management capabilities. 
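For example, once the service is running, active workflow instances can be listed with a GraphQL query similar to the following sketch; the available fields and filters are described later in this guide:

[source,graphql]
----
{
  ProcessInstances(where: {state: {equal: ACTIVE}}) {
    id
    processId
    state
    start
  }
}
----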
The {product_name} Data Index Service has the following key attributes: @@ -43,11 +44,9 @@ The {product_name} {data_index_ref} Service is a Quarkus application, based on l The indexing functionality in the {data_index_ref} service is provided by choosing one of the following persistence providers: * link:{postgresql_url}[PostgreSQL] -* link:{infinispan_url}[Infinispan] -* link:{mongo_url}[MongoDB] ==== -The {data_index_ref} Service has been thought of as an application to store and query the existing workflow data. The data comes contained in events. The service allows multiple connection options as described in the <> section. +The {data_index_ref} Service has been designed as an application to store and query the existing workflow data. The data comes within events. The service allows multiple connection options as described in the <> section. [#data-index-deployments] == {data_index_ref} scenarios @@ -66,19 +65,19 @@ This type of deployment requires to choose the right image depending on the pers [#data-index-dev-service] === {data_index_ref} service as Quarkus Development service -It also can be deployed, transparently as a *Quarkus Development Service* when the Quarkus Dev mode is used in the {product_name} application. -When you use the {product_name} Process Quarkus extension, a temporary {data_index_ref} Service is automatically provisioned while the Quarkus application is running in development mode and the Dev Service is set up for immediate use. +When the Quarkus Dev mode is used in the {product_name} application, {data_index_ref} can be deployed transparently as a *Quarkus Development Service*. +When the {product_name} Process Quarkus extension is utilized, a temporary {data_index_ref} Service is automatically provisioned while the Quarkus application runs in development mode making the Quarkus Dev Service available for use. image::data-index/data-index-dev-service.png[Image of data-index deployment an Quarkus Dev Service] -More details are provided in the xref:data-index/data-index-service.adoc#data-index-dev-service-details[{data_index_ref} as a Quarkus Development service] section. +More details are provided in the xref:use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc[{data_index_ref} as a Quarkus Development service] section. The {product_name} Process Quarkus extension sets up your Quarkus application to automatically replicate any {product_name} messaging events related to {workflow_instances} or jobs into the provisioned Data Index instance. For more information about Quarkus Dev Services, see link:{dev_services_url}[Dev Services guide]. === {data_index_ref} service as Quarkus extension -It can be included as part of the same {product_name} application using the *{data_index_ref} extension*, through the provided addons. +{data_index_ref} can be included as part of the same {product_name} application using the *{data_index_ref} extension*, through the provided addons. This scenario is specific to add the {data_index_ref} data indexing features and the GraphQL endpoint exposure inside a workflow application. @@ -96,7 +95,7 @@ More details are available in the xref:use-cases/advanced-developer-use-cases/da In order to store the indexed data, {data_index_ref} needs some specific tables to be created. {data_index_ref} is ready to use link:{flyway_quarkus_url}[Quarkus flyway] for that purpose. -It's necessary to activate the migrate-at-start option to migrate the {data_index_ref} schema automatically. 
+Activating the 'migrate-at-start' option enables automatic migration of the {data_index_ref} schema. For more details about Flyway migrations, see xref:use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc[] section. @@ -104,7 +103,7 @@ For more details about Flyway migrations, see xref:use-cases/advanced-developer- == {data_index_ref} GraphQL endpoint {data_index_ref} provides GraphQL endpoint that allows users to interact with the stored data. -For more information about GraphQL see {graphql_url}[GraphQL] +For more information about GraphQL see {graphql_url}[GraphQL documentation] [#data-index-ext-queries] === GraphQL queries for {workflow_instances} and jobs @@ -445,12 +444,12 @@ mutation{ [NOTE] ==== -To enable described management operations on workflow instances, make sure your project is configured to have the `kogito-addons-quarkus-process-management` dependency on its `pom.xml` file to have this management operations enabled, like: +To enable described management operations on workflow instances, make sure your project is configured to have the `kie-addons-quarkus-process-management` dependency on its `pom.xml` file to have this management operations enabled, like: [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-process-management + org.kie + kie-addons-quarkus-process-management ---- ==== @@ -471,12 +470,12 @@ Retrieves the {workflow_instance} source file. When the `source` field of a {wor [NOTE] ==== -The workflow instance source field only will be available when `kogito-addons-quarkus-source-files` dependency is added on {product_name} runtime service `pom.xml` file. +The workflow instance source field only will be available when `kie-addons-quarkus-source-files` dependency is added on {product_name} runtime service `pom.xml` file. [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-source-files + org.kie + kie-addons-quarkus-source-files ---- ==== @@ -522,7 +521,7 @@ image::data-index/data-index-graphql-ui.png[Image of data-index GraphQL UI] When the {data_index_ref} is deployed as a standalone service, this UI will be available at `/graphiql/` endpoint (i.e: at http://localhost:8180/graphiql/) -To have the GraphQL UI available when the {data_index_ref} extension is deployed the property `quarkus.kogito.data-index.graphql.ui.always-include` needs to be enabled. +To enable the GraphQL UI when deploying the {data_index_ref} extension, the property `quarkus.kogito.data-index.graphql.ui.always-include` must be enabled. It will be accessible at: /graphql-ui/ (i.e: http://localhost:8080/q/graphql-ui/) @@ -594,7 +593,7 @@ mp.messaging.outgoing.kogito-processinstances-events.method=POST [NOTE] ==== -*Job service* needs also to be configured to send the events to the Knative K_SINK to have them available for {data_index_ref} related triggers. +*Job service* needs to be configured to send the events to the Knative K_SINK to have them available for {data_index_ref} related triggers. ==== === Kafka eventing diff --git a/modules/serverless-logic/pages/data-index/data-index-service.adoc b/modules/serverless-logic/pages/data-index/data-index-service.adoc index 61d57326..a8fa610e 100644 --- a/modules/serverless-logic/pages/data-index/data-index-service.adoc +++ b/modules/serverless-logic/pages/data-index/data-index-service.adoc @@ -13,7 +13,7 @@ [#data-index-service] == {data_index_ref} service deployment -{data_index_ref} service can be deployed referencing directly a distributed {data_index_ref} image. 
There are different images provided that take into account what persistence layer is required in each case. +{data_index_ref} service can be deployed by referencing a distributed {data_index_ref} image directly. There are different images provided that take into account what persistence layer is required in each case. In each distribution, there are some properties to configure things like the connection with the database or the communication with other services. The goal is to configure the container to allow to process ProcessInstances and Jobs *events* that incorporate their related data, to index and store that in the database and finally, to provide the xref:data-index/data-index-core-concepts.adoc#data-index-graphql[{data_index_ref} GraphQL] endpoint to consume it. @@ -29,7 +29,7 @@ There are several ways to deploy the {data_index_ref} service. But there are som . Reference the right {data_index_ref} image to match with the type of Database that will store the indexed data. . Provide the database connection properties, to allow data index store the indexed data. {data_index_ref} service does not initialize its database schema automatically. To initialize the database schema, you need to enable Flyway migration by setting QUARKUS_FLYWAY_MIGRATE_AT_START=true. -. Define the `KOGITO_DATA_INDEX_QUARKUS_PROFILE` to set the way that the events will be connected (by default: `kafka-event-support`). +. Define the `KOGITO_DATA_INDEX_QUARKUS_PROFILE` to set the way that the events will be connected (by default: `kafka-events-support` but could be also `http-events-support`). [NOTE] ==== @@ -40,12 +40,12 @@ For this purpose, it is important to make sure the following addons are included [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-events-process <1> + org.kie + kie-addons-quarkus-events-process <1> - org.kie.kogito - kogito-addons-quarkus-process-management <2> + org.kie + kie-addons-quarkus-process-management <2> ---- @@ -63,7 +63,7 @@ Here you can see in example, how the {data_index_ref} resource definition can be ---- data-index: container_name: data-index - image: quay.io/kiegroup/kogito-data-index-postgresql-nightly:main-2024-02-09 <1> + image: quay.io/kiegroup/kogito-data-index-postgresql:latest <1> ports: - "8180:8080" depends_on: @@ -80,7 +80,8 @@ Here you can see in example, how the {data_index_ref} resource definition can be QUARKUS_FLYWAY_MIGRATE_AT_START: "true" <4> QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION: update ---- -<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql-nightly:{sonataflow_non_productized_image_tag}` + +<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql:latest` <2> Provide the database connection properties. <3> When `KOGITO_DATA_INDEX_QUARKUS_PROFILE` is not present, the {data_index_ref} is configured to use Kafka eventing. <4> To initialize the database schema at start using flyway. 
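Once the containers are up, you can run a quick sanity check against the {data_index_ref} GraphQL endpoint. This is a sketch that assumes the default `/graphql` path and the `8180:8080` port mapping shown above:

[source,bash]
----
# List some indexed workflow instances to verify the service is reachable.
curl -s -X POST http://localhost:8180/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ ProcessInstances { id processId state } }"}'
----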
@@ -156,7 +157,7 @@ spec: spec: containers: - name: data-index-service-postgresql - image: quay.io/kiegroup/kogito-data-index-postgresql-nightly:main-2024-02-09 <1> + image: quay.io/kiegroup/kogito-data-index-postgresql:latest <1> imagePullPolicy: Always ports: - containerPort: 8080 @@ -186,7 +187,7 @@ spec: - name: QUARKUS_FLYWAY_MIGRATE_AT_START <4> value: "true" - name: KOGITO_DATA_INDEX_QUARKUS_PROFILE <3> - value: "http-events-support" + value: http-events-support - name: QUARKUS_HTTP_PORT value: "8080" --- @@ -222,7 +223,7 @@ spec: name: data-index-service-postgresql uri: /jobs <7> ---- -<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql-nightly:{sonataflow_non_productized_image_tag}` +<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql:latest` <2> Provide the database connection properties <3> KOGITO_DATA_INDEX_QUARKUS_PROFILE: http-events-support to use the http-connector with Knative eventing. <4> To initialize the database schema at start using flyway diff --git a/modules/serverless-logic/pages/integrations/core-concepts.adoc b/modules/serverless-logic/pages/integrations/core-concepts.adoc index c9fde6c9..b8454c38 100644 --- a/modules/serverless-logic/pages/integrations/core-concepts.adoc +++ b/modules/serverless-logic/pages/integrations/core-concepts.adoc @@ -1,12 +1,12 @@ = Introduction -This guides describes the possibilities of workflow services integrations. -Currently we showcase these in advanced development guides. See additional resources. +This guide describes the possibilities of workflow services integrations. +Currently, we showcase these in advanced development guides. See additional resources. == Additional resources -* xref:serverless-logic:use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc[] -* xref:serverless-logic:use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc[] -* xref:serverless-logic:use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc[] -* xref:serverless-logic:use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc[] +* xref:use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc[] +* xref:use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc[] +* xref:use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc[] +* xref:use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc[] diff --git a/modules/serverless-logic/pages/job-services/core-concepts.adoc b/modules/serverless-logic/pages/job-services/core-concepts.adoc new file mode 100644 index 00000000..8f0b61cd --- /dev/null +++ b/modules/serverless-logic/pages/job-services/core-concepts.adoc @@ -0,0 +1,489 @@ += Introduction +:compat-mode!: +// Metadata: +:description: Job Service to control timeouts in {product_name} +:keywords: sonataflow, workflow, serverless, timeout, timer, expiration, job service + +The Job Service facilitates the scheduled execution of tasks in a cloud environment. These tasks are implemented by independent services, and can be started by using any of the Job Service supported interaction modes, based on Http calls or Knative Events delivery. 
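For illustration, a job creation request over the HTTP interaction mode might look like the following sketch. The `/v2/jobs` path, the host name, and the payload field names are assumptions based on the v2 REST API and may vary between versions; the `schedule` and `recipient` parts are described right below:

[source,bash]
----
# 'jobs-service' is a placeholder host for your Job Service instance.
curl -X POST http://jobs-service:8080/v2/jobs \
  -H 'Content-Type: application/json' \
  -d '{
        "id": "my-job-1",
        "schedule": { "type": "timer", "timeToFire": "2024-08-01T10:00:00Z" },
        "recipient": { "type": "http", "url": "http://my-service:8080/callback", "method": "POST" }
      }'
----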
+ +To schedule task execution, you must create a Job configured with the following information: + +* `Schedule`: the job triggering periodicity. +* `Recipient`: the entity that is called on the job execution for the given interaction mode, and receives the execution parameters. + +image::job-services/Job-Service-Generic-Diagram.png[] + +[#integration-with-the-workflows] +== Integration with the Workflows + +In the context of the {product_name}, the Job Service is responsible for controlling the execution of the time-triggered actions. And thus, all the time-based states that you can use in a workflow, are handled by the interaction between the workflow and the Job Service. + +For example, every time the workflow execution reaches a state with a configured timeout, a corresponding job is created in the Job Service, and when the timeout is met, a http callback is executed to notify the workflow. + +image::job-services/Time-Based-States-And-Job-Service-Interaction.png[] + +To set up this integration you can use different xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc#job-service-quarkus-extensions[communication alternatives], that must be configured by combining the Job Service and the Quarkus Workflow Project configurations. +Alternatively, when you work with {operator_name} workflow deployments, the operator can manage all these configurations. + +[NOTE] +==== +If the project is not configured to use the Job Service, all time-based actions will use an in-memory implementation of that service. +However, this setup must not be used in production, since every time the application is restarted, all the timers are lost, making it unsuitable for serverless architectures where applications might scale to zero at any time, etc. +==== + +[IMPORTANT] +==== +If you are working with the {operator_name} be sure that you read this section <<_sonataflow_operator_managed_deployment, {operator_name} managed deployments>>. +==== + +== Jobs life-span + +Since the main goal of the Job Service is to work with the active jobs, such as the scheduled jobs that needs to be executed, when a job reaches a final state, it is removed from the Job Service. +However, in some cases where you want to keep the information about the jobs in a permanent repository, you can configure the Job Service to produce status change events, that can be collected by the {data_index_xref}[Data Index Service], where they can be indexed and made available by GraphQL queries. + +== {operator_name} managed deployment + +When you work with the {operator_name} to deploy your workflows, there's no need to do any manual Job Service installation or configuration, the operator already has the ability to do that. +Additionally, it can manage all the required configurations for every workflow to connect with it. + +To learn how to install and configure the Job Service in this case, you must read the xref:cloud/operator/supporting-services.adoc[Operator Supporting Services] section. + +[#executing] +== Custom Execution + +To execute the Job Service in your docker or Kubernetes environment, you must use any of the following images, depending on the persistence mechanism to use <> or <>. + +* `{jobs_service_image_postgresql}` +* `{jobs_service_image_ephemeral}` + +In the next topics you can see how to configure them. + +[NOTE] +==== +The <> and the <> are the same for both images. +==== + +We recommend that you follow this procedure: + +1. 
Identify the image to use depending on the persistence mechanism, and see the required configuration parameters specific for that image. +2. Identify if the <> is required for your needs and see the required configuration parameters. +3. Identify if the project containing your workflows is configured with the appropriate xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc#job-service-quarkus-extensions[Job Service Quarkus Extension]. + +Finally, when you run the image, you must pass these configurations using <> or using <>. + +[#using-environent-variables] +=== Using environment variables + +To configure the image by using environment variables you must pass one environment variable per each parameter. + +.Job Service image configuration for docker execution example +[source, bash,subs="attributes+"] +---- +docker run -it -e QUARKUS_DATASOURCE_USERNAME=postgres -e VARIABLE_NAME=value {jobs_service_image_postgresql}:latest +---- + +.Job Service image configuration for Kubernetes execution example +[source, yaml,subs="attributes+"] +---- +spec: + containers: + - name: jobs-service-postgresql + image: {jobs_service_image_postgresql}:latest + imagePullPolicy: Always + ports: + - containerPort: 8080 + name: http + protocol: TCP + env: + # Set the image parameters as environment variables in the container definition. + - name: KUBERNETES_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: QUARKUS_DATASOURCE_USERNAME + value: postgres + - name: QUARKUS_DATASOURCE_PASSWORD + value: pass + - name: QUARKUS_DATASOURCE_JDBC_URL + value: jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service + - name: QUARKUS_DATASOURCE_REACTIVE_URL + value: postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service +---- + +[NOTE] +==== +This is the recommended approach when you execute the Job Service in Kubernetes. +The timeouts showcase example xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc#execute-quarkus-project-standalone-services[Quarkus Workflow Project with standalone services] contains an example of this configuration, link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-extended/kubernetes/jobs-service-postgresql.yml#L65[see]. +On the other hand, when you work with the {operator_name}, it can automatically manage all these configurations, xref:cloud/operator/supporting-services.adoc[see]. +==== + +[#using-java-like-system-properties] +=== Using system properties with java like names + +To configure the image by using system properties you must pass one property per parameter, however, in this case, all these properties are passed as part of a single environment with the name `JAVA_OPTIONS`. + +.Job Service image configuration for docker execution example +[source, bash,subs="attributes+"] +---- +docker run -it -e JAVA_OPTIONS='-Dquarkus.datasource.username=postgres -Dmy.sys.prop1=value1 -Dmy.sys.prop2=value2' \ +{jobs_service_image_postgresql}:latest +---- + +[NOTE] +==== +I case that you need to convert a java like property name, to the corresponding environment variable name, to use the environment variables configuration alternative, you must apply the naming convention defined in the link:{quarkus_guides_config_reference_url}#environment-variables[Quarkus Configuration Reference]. +For example, the name `quarkus.datasource.jdbc.url` must be converted to `QUARKUS_DATASOURCE_JDBC_URL`. 
+==== + +[#job-service-global-configurations] +== Common configurations + +Common configurations that affect the job execution retries, startup procedure, etc. + +[tabs] +==== +Using environment variables:: ++ + +[cols="2,1,1"] +|=== +|Name |Description |Default + +|`KOGITO_JOBS_SERVICE_BACKOFFRETRYMILLIS` +|A long value that defines the retry back-off time in milliseconds between job execution attempts, in case the execution fails. +|`1000` + +|`KOGITO_JOBS_SERVICE_MAXINTERVALLIMITTORETRYMILLIS` +|A long value that defines the maximum interval in milliseconds when retrying to execute jobs, in case the execution fails. +|`60000` + +|=== + +Using system properties with java like names:: ++ + +[cols="2,1,1"] +|=== +|Name |Description |Default + +|`kogito.jobs-service.backoffRetryMillis` +|A long value that defines the retry back-off time in milliseconds between job execution attempts, in case the execution fails. +|`1000` + +|`kogito.jobs-service.maxIntervalLimitToRetryMillis` +|A long value that defines the maximum interval in milliseconds when retrying to execute jobs, in case the execution fails. +|`60000` + +|=== + +==== + +[#job-service-persistence] +[#job-service-postgresql] +== Job Service PostgreSQL Configuration + +PostgreSQL is the recommended database to use with the Job Service. +Additionally, it provides an initialization procedure that integrates Flyway for the database initialization. Which automatically controls the database schema, in this way, the tables are created or updated by the service when required. + +In case you need to externally control the database schema, you can check and apply the DDL scripts for the Job Service in the same way as described in +xref:use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc#manually-executing-scripts[Manually executing scripts] guide. + +To configure the Job Service PostgreSQL you must provide these configurations: + +[tabs] +==== +Using environment variables:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Example value + +|`QUARKUS_DATASOURCE_USERNAME` +|Username to connect to the database. +|`postgres` + +|`QUARKUS_DATASOURCE_PASSWORD` +|Password to connect to the database +|`pass` + +|`QUARKUS_DATASOURCE_JDBC_URL` +| JDBC datasource url used by Flyway to connect to the database. +|`jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service` + +|`QUARKUS_DATASOURCE_REACTIVE_URL` +|Reactive datasource url used by the Job Service to connect to the database. +|`postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service` + +|=== + +Using system properties with java like names:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Example value + +|`quarkus.datasource.username` +|Username to connect to the database. +|`postgres` + +|`quarkus.datasource.password` +|Password to connect to the database +|`pass` + +|`quarkus.datasource.jdbc.url` +| JDBC datasource url used by Flyway to connect to the database. +|`jdbc:postgresql://timeouts-showcase-database:5432/postgres?currentSchema=jobs-service` + +|`quarkus.datasource.reactive.url` +|Reactive datasource url used by the Job Service to connect to the database. 
+|`postgresql://timeouts-showcase-database:5432/postgres?search_path=jobs-service` + +|=== +==== + +The timeouts showcase example xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc#execute-quarkus-project-standalone-services[Quarkus Workflow Project with standalone services], shows how to run a PostgreSQL based Job Service as a Kubernetes deployment. +In your local environment you might have to change some of these values to point to your own PostgreSQL database. + +[#job-service-ephemeral] +== Job Service Ephemeral Configuration + +The Ephemeral persistence mechanism is based on an embedded PostgreSQL database and does not require any specific configuration other thant the <> and the <>. + +[NOTE] +==== +The database is recreated on each service restart, and thus, it must be used only for testing purposes. +==== + +[#job-service-eventing-api] +== Eventing API + +The Job Service provides a Cloud Event based API that can be used to create and delete jobs. +This API is useful in deployment scenarios where you want to use an event based communication from the workflow runtime to the Job Service. For the transport of these events you can use the <> system or the <> system. + +[#knative-eventing] +=== Knative eventing + +By default, the Job Service Eventing API is prepared to work in a link:{knative_eventing_url}[Knative eventing] system. This means that by adding no additional configurations parameters, it'll be able to receive cloud events via the link:{knative_eventing_url}[Knative eventing] system to manage the jobs. +However, you must still prepare your link:{knative_eventing_url}[Knative eventing] environment to ensure these events are properly delivered to the Job Service, see <>. + +Finally, the only configuration parameter that you must set, when needed, is to enable the propagation of the Job Status Change events, for example, if you want to register these events in the {data_index_xref}[Data Index Service]. + +[tabs] +==== +Using environment variables:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Default value + +|`KOGITO_JOBS_SERVICE_HTTP_JOB_STATUS_CHANGE_EVENTS` +| `true` to establish if the Job Status Change events must be propagated. If you set this value to `true` you must be sure that the <> was created. +| `false` + +|=== + +Using system properties with java like names:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Default value + +|`kogito.jobs-service.http.job-status-change-events` +| `true` to establish if the Job Status Change events must be propagated. If you set this value to `true` you must be sure that the <> was created. +| `false` + +|=== + +==== + + +[#knative-eventing-supporting-resources] +==== Knative eventing supporting resources + +To ensure the Job Service receives the Knative events to manage the jobs, you must create the <> and <> triggers shown in the diagram below. +Additionally, if you have enabled the Job Status Change events propagation you must create the <>. + +.Knative eventing supporting resources +image::job-services/Knative-Eventing-API-Resources.png[] + +The following snippets shows an example on how you can configure these resources. Consider that these configurations might need to be adjusted to your local kubernetes cluster. 
+ +[NOTE] +==== +We recommend that you visit this example xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc#execute-quarkus-project-standalone-services[Quarkus Workflow Project with standalone services] to see a full setup of all these configurations. +==== + +[#knative-eventing-supporting-resources-trigger-create] +.Create Job event trigger configuration example +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: jobs-service-postgresql-create-job-trigger +spec: + broker: default + filter: + attributes: + type: job.create + subscriber: + ref: + apiVersion: v1 + kind: Service + name: jobs-service-postgresql + uri: /v2/jobs/events +---- + +[#knative-eventing-supporting-resources-trigger-delete] +.Delete Job event trigger configuration example +[source,yaml] +---- +apiVersion: eventing.knative.dev/v1 +kind: Trigger +metadata: + name: jobs-service-postgresql-delete-job-trigger +spec: + broker: default + filter: + attributes: + type: job.delete + subscriber: + ref: + apiVersion: v1 + kind: Service + name: jobs-service-postgresql + uri: /v2/jobs/events +---- + +For more information about triggers, see link:{knative_eventing_trigger_url}[Knative Triggers]. + +[#knative-eventing-supporting-resources-sink-binding] +.Job Service sink binding configuration example +[source, yaml] +---- +apiVersion: sources.knative.dev/v1 +kind: SinkBinding +metadata: + name: jobs-service-postgresql-sb +spec: + sink: + ref: + apiVersion: eventing.knative.dev/v1 + kind: Broker + name: default + subject: + apiVersion: apps/v1 + kind: Deployment + selector: + matchLabels: + app.kubernetes.io/name: jobs-service-postgresql + app.kubernetes.io/version: 2.0.0-SNAPSHOT +---- + +For more information about sink bindings, see link:{knative_eventing_sink_binding_url}[Knative Sink Bindings]. + +[#kafka-messaging] +=== Kafka messaging + +To enable the Job Service Eventing API via the Kafka messaging system you must provide these configurations: + +[tabs] +==== +Using environment variables:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Default value + +|`QUARKUS_PROFILE` +|Set the quarkus profile with the value `kafka-events-support` to enable the kafka messaging based Job Service Eventing API. +|By default, the kafka eventing api is disabled. + +|`KOGITO_JOBS_SERVICE_KAFKA_JOB_STATUS_CHANGE_EVENTS` +|`true` to establish if the Job Status Change events must be propagated. +|`true` when the `kafka-events-support` profile is set. + +|`KAFKA_BOOTSTRAP_SERVERS` +|A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster. +|`localhost:9092` when the `kafka-events-support` profile is set. + +|`MP_MESSAGING_INCOMING_KOGITO_JOB_SERVICE_JOB_REQUEST_EVENTS_V2_TOPIC` +|Kafka topic for events API incoming events. In general you don't need to change this value. +|`kogito-job-service-job-request-events-v2` when the `kafka-events-support` profile is set. + +|`MP_MESSAGING_OUTGOING_KOGITO_JOB_SERVICE_JOB_STATUS_EVENTS_TOPIC` +|Kafka topic for job status change outgoing events. In general you don't need to change this value. +|`kogito-jobs-events` when the `kafka-events-support` profile is set. + +|=== + +Using system properties with java like names:: ++ + +[cols="2,1,1"] +|=== +|Variable | Description| Default value + +|quarkus.profile +|Set the quarkus profile with the value `kafka-events-support` to enable the kafka messaging based Job Service Eventing API. +|By default, the kafka eventing api is disabled. 
+ +|`kogito.jobs-service.kafka.job-status-change-events` +|`true` to establish if the Job Status Change events must be propagated. +|`true` when the `kafka-events-support` profile is set. + +|`kafka.bootstrap.servers` +|A comma-separated list of host:port to use for establishing the initial connection to the Kafka cluster. +|`localhost:9092` when the `kafka-events-support` profile is set. + +|`mp.messaging.incoming.kogito-job-service-job-request-events-v2.topic` +|Kafka topic for events API incoming events. In general you don't need to change this value. +|`kogito-job-service-job-request-events-v2` when the `kafka-events-support` profile is set. + +|`mp.messaging.outgoing.kogito-job-service-job-status-events.topic` +|Kafka topic for job status change outgoing events. In general you don't need to change this value. +|`kogito-jobs-events` when the `kafka-events-support` profile is set. + +|=== + +==== + +[NOTE] +==== +Depending on your Kafka messaging system configuration you might need to apply additional Kafka configurations to connect to the Kafka broker, etc. +To see the list of all the supported configurations you must read the link:{quarkus_guides_kafka_url}[Quarkus Apache Kafka Reference Guide]. +==== + + + +== Leader election + +Currently, the Job Service is a singleton service, and thus, just one active instance of the service can be scheduling and executing the jobs. + +To avoid issues when it is deployed in the cloud, where it is common to eventually have more than one instance deployed, the Job Service supports a leader instance election process. +Only the instance that becomes the leader activates the external communication to receive and schedule jobs. + +All the instances that are not leaders, stay inactive in a wait state and try to become the leader continuously. + +When a new instance of the service is started, it is not set as a leader at startup time but instead, it starts the process to become one. + +When an instance that is the leader for any issue stays unresponsive or is shut down, one of the other running instances becomes the leader. + +.Job Service leader election +image::job-services/job-service-leader.png[] + +[NOTE] +==== +This leader election mechanism uses the underlying persistence backend, which currently is only supported in the PostgreSQL implementation. +==== + +There is no need for any configuration to support this feature, the only requirement is to have the supported database with the data schema up-to-date as described in the <> section. + +In case the underlying persistence does not support this feature, you must guarantee that just one single instance of the Job Service is running at the same time. 
+ +include::../../pages/_common-content/report-issue.adoc[] From cfc25d46342c05dd283b8d983ac72ae75b64efdf Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:28:16 +0200 Subject: [PATCH 12/38] SRVLOGIC-261: Sync basic part of SL nav.adoc to latest --- modules/ROOT/nav.adoc | 33 +++++++++++++++++++-------------- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 0b184311..b302e3c6 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -20,17 +20,17 @@ **** xref:serverless-logic:getting-started/preparing-environment.adoc[] **** xref:serverless-logic:getting-started/production-environment.adoc[] *** Getting Started -**** xref:serverless-logic:getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc[Creating your first workflow service with KN CLI and VS Code] -**** xref:serverless-logic:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar with tooling] +**** xref:serverless-logic:getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc[] +**** xref:serverless-logic:getting-started/getting-familiar-with-our-tooling.adoc[] *** Core -**** xref:serverless-logic:core/cncf-serverless-workflow-specification-support.adoc[CNCF Serverless Workflow specification] +**** xref:serverless-logic:core/cncf-serverless-workflow-specification-support.adoc[] **** xref:serverless-logic:core/handling-events-on-workflows.adoc[Events] **** xref:serverless-logic:core/working-with-callbacks.adoc[Callbacks] -**** xref:serverless-logic:core/understanding-jq-expressions.adoc[jq expressions] +**** xref:serverless-logic:core/understanding-jq-expressions.adoc[] **** xref:serverless-logic:core/understanding-workflow-error-handling.adoc[Error handling] -**** xref:serverless-logic:core/configuration-properties.adoc[Configuration properties] -**** xref:serverless-logic:core/defining-an-input-schema-for-workflows.adoc[Defining an input schema for your workflows] -**** xref:serverless-logic:core/custom-functions-support.adoc[Custom functions for your service] +**** xref:serverless-logic:core/configuration-properties.adoc[Configuration] +**** xref:serverless-logic:core/defining-an-input-schema-for-workflows.adoc[Input Schema] +**** xref:serverless-logic:core/custom-functions-support.adoc[Custom functions] **** xref:serverless-logic:core/timeouts-support.adoc[Timeouts] **** xref:serverless-logic:core/working-with-parallelism.adoc[Parallelism] *** Tooling @@ -38,14 +38,14 @@ ***** xref:serverless-logic:tooling/serverless-workflow-editor/swf-editor-vscode-extension.adoc[VS Code extension] ***** xref:serverless-logic:tooling/serverless-workflow-editor/swf-editor-chrome-extension.adoc[Chrome extension for GitHub] **** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-overview.adoc[{serverless_logic_web_tools_name}] -***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc[GitHub integration] -***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc[OpenShift integration] +***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-github-integration.adoc[Integration with GitHub] +***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-openshift-integration.adoc[Integration with OpenShift] ***** 
xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-redhat-application-services-integration.adoc[Red Hat OpenShift Application and Data Services integration] -***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc[Deploying projects] +***** xref:serverless-logic:tooling/serverless-logic-web-tools/serverless-logic-web-tools-deploy-projects.adoc[Deployment] *** Service Orchestration **** xref:serverless-logic:service-orchestration/orchestration-of-openapi-based-services.adoc[Orchestrating the OpenAPI services] ***** xref:serverless-logic:service-orchestration/configuring-openapi-services-endpoints.adoc[Configuring the OpenAPI services endpoints] -***** xref:serverless-logic:service-orchestration/working-with-openapi-callbacks.adoc[OpenAPI callback in {product_name}] +***** xref:serverless-logic:service-orchestration/working-with-openapi-callbacks.adoc[OpenAPI callbacks in {product_name}] **** xref:serverless-logic:service-orchestration/troubleshooting.adoc[Troubleshooting] *** Event Orchestration **** xref:serverless-logic:eventing/orchestration-of-asyncapi-based-services.adoc[Orchestrating AsyncAPI Services] @@ -64,12 +64,14 @@ *** Persistence **** xref:serverless-logic:persistence/core-concepts.adoc[Core concepts] *** xref:serverless-logic:cloud/index.adoc[Cloud] +*** xref:serverless-logic:cloud/custom-ingress-authz.adoc[Securing Workflows] **** Operator ***** xref:serverless-logic:cloud/operator/install-serverless-operator.adoc[Installation] +***** xref:serverless-logic:cloud/operator/global-configuration.adoc[Admin Configuration] ***** xref:serverless-logic:cloud/operator/developing-workflows.adoc[Development Mode] ***** xref:serverless-logic:cloud/operator/referencing-resource-files.adoc[Referencing Workflow Resources] -***** xref:serverless-logic:cloud/operator/configuring-workflows.adoc[Configuration] -***** xref:serverless-logic:cloud/operator/build-and-deploy-workflows.adoc[Building and Deploying Workflows] +***** xref:serverless-logic:cloud/operator/configuring-workflows.adoc[Workflow Configuration] +***** xref:serverless-logic:cloud/operator/build-and-deploy-workflows.adoc[Building and Deploying Workflow Images] ***** xref:serverless-logic:cloud/operator/supporting-services.adoc[Deploy Supporting Services] ***** xref:serverless-logic:cloud/operator/workflow-status-conditions.adoc[Custom Resource Status] ***** xref:serverless-logic:cloud/operator/building-custom-images.adoc[Building Custom Images] @@ -77,12 +79,15 @@ ***** xref:serverless-logic:cloud/operator/service-discovery.adoc[Service Discovery] ***** xref:serverless-logic:cloud/operator/using-persistence.adoc[Using persistence] ***** xref:serverless-logic:cloud/operator/configuring-knative-eventing-resources.adoc[Knative Eventing] +***** xref:cloud/operator/add-custom-ca-to-a-workflow-pod.adoc[Add Custom CA to Workflow Pod] ***** xref:serverless-logic:cloud/operator/known-issues.adoc[Roadmap and Known Issues] *** Integrations **** xref:serverless-logic:integrations/core-concepts.adoc[] *** Supporting Services +**** Jobs Service +***** xref:job-services/core-concepts.adoc[Core Concepts] **** Data Index -***** xref:serverless-logic:data-index/data-index-core-concepts.adoc[Core Concepts] +***** xref:serverless-logic:data-index/data-index-core-concepts.adoc[Core Concepts]*** ***** xref:serverless-logic:data-index/data-index-service.adoc[Data Index Standalone Service] *** Use Cases From dc583947362541a49d4560d335b900567a2f6799 Mon Sep 17 
00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:39:25 +0200 Subject: [PATCH 13/38] SRVLOGIC-261: Sync Advanced section to latest --- .../callbacks/callback-state-example.adoc | 3 +- .../_dataindex_deployment_operator.adoc | 2 +- .../data-index-as-quarkus-dev-service.adoc | 7 +- .../data-index-quarkus-extension.adoc | 12 +- .../data-index-usecase-singleton.adoc | 2 +- .../_common_proc_deploy_kubectl_oc.adoc | 10 +- .../common/_proc_deploy_sw_quarkus_cli.adoc | 4 +- .../deployments/deploying-on-minikube.adoc | 3 +- .../deployments/deploying-on-openshift.adoc | 10 +- ...-produce-events-with-knative-eventing.adoc | 8 +- .../newsletter-subscription-example.adoc | 14 +- ...build-workflow-image-with-quarkus-cli.adoc | 3 +- .../create-your-first-workflow-project.adoc | 187 ++++++++++++++ .../create-your-first-workflow-service.adoc | 12 +- ...-serverless-workflow-quarkus-examples.adoc | 9 +- .../advanced-developer-use-cases/index.adoc | 4 +- .../camel-routes-integration.adoc | 2 +- .../custom-functions-knative.adoc | 6 +- .../integrations/custom-functions-python.adoc | 115 +++++++++ .../expose-metrics-to-prometheus.adoc | 6 +- ...erverless-dashboard-with-runtime-data.adoc | 6 +- .../job-service/quarkus-extensions.adoc | 238 ++++++++++++++++++ .../integration-tests-with-postgresql.adoc | 4 +- .../persistence-core-concepts.adoc | 1 + .../persistence-with-postgresql.adoc | 15 +- .../postgresql-flyway-migration.adoc | 4 +- .../kubernetes-service-discovery.adoc | 12 +- ...enapi-services-endpoints-with-quarkus.adoc | 6 +- .../timeouts/timeout-showcase-example.adoc | 32 +-- 29 files changed, 635 insertions(+), 102 deletions(-) create mode 100644 modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc create mode 100644 modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-python.adoc create mode 100644 modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/callbacks/callback-state-example.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/callbacks/callback-state-example.adoc index cc391f68..7b2c0dc6 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/callbacks/callback-state-example.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/callbacks/callback-state-example.adoc @@ -105,8 +105,9 @@ The `serverless-workflow-callback-quarkus` example application requires an exter Apache Kafka uses topics to publish or consume messages. In the `serverless-workflow-callback-quarkus` example application, two topics are used, matching the name of the CloudEvent types that are defined in the workflow, such as `resume` and `wait`. The `resume` and `wait` CloudEvent types are configured in the link:{kogito_sw_examples_url}/serverless-workflow-callback-quarkus/src/main/resources/application.properties[`application.properties`] file. -For more information about using Apache Kafka with events, see link:xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[Consuming and producing events using Apache Kafka]. +For more information about using Apache Kafka with events, see xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[Consuming and producing events using Apache Kafka]. 
-- ++ == Additional resources diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc index bb1e3f48..7b3724e4 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc @@ -129,7 +129,7 @@ spec: spec: containers: - name: data-index-service-postgresql - image: quay.io/kiegroup/kogito-data-index-postgresql-nightly:{sonataflow_non_productized_image_tag} + image: quay.io/kiegroup/kogito-data-index-postgresql:latest imagePullPolicy: Always resources: limits: diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc index c16ad9ea..f093e6d1 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc @@ -33,7 +33,7 @@ The Quarkus Dev Service also allows further configuration options including: * To disable {data_index_ref} Dev Service, use the `quarkus.kogito.devservices.enabled=false` property. * To change the port where the {data_index_ref} Dev Service runs, use the `quarkus.kogito.devservices.port=8180` property. -* To adjust the provisioned image, use `quarkus.kogito.devservices.imageName={kogito_devservices_imagename}` property. +* To adjust the provisioned image, use `quarkus.kogito.devservices.imageName=quay.io/kiegroup/kogito-data-index-ephemeral` property. * To disable sharing the {data_index_ref} instance across multiple Quarkus applications, use `quarkus.kogito.devservices.shared=false` property. For more information about Quarkus Dev Services, see link:{dev_services_url}[Dev Services guide]. @@ -66,7 +66,7 @@ The following table serves as a quick reference for commonly {data_index_ref} co | Yes |`QUARKUS_DATASOURCE_DB_KIND` -a|The kind of database to connect: `postgresql`,.. +a|The kind of database to connect: `postgresql`, ... |string | |Yes @@ -110,8 +110,7 @@ Allows to change the event connection type. The possible values are: |`quarkus.kogito.devservices.image-name` |Defines the {data_index_ref} image to use in Dev Service. 
|string - -|`{kogito_devservices_imagename}:{page-component-version}` +|`quay.io/kiegroup/kogito-data-index-ephemeral:{page-component-version}` |No |`quarkus.kogito.devservices.shared` diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc index 9c6746af..699b36ab 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc @@ -12,8 +12,6 @@ :kogito_sw_timeouts_showcase_embedded_example_application_properties_url: {kogito_sw_timeouts_showcase_embedded_example_url}/src/main/resources/application.properties :kogito_sw_dataindex_persistence_example_url: {kogito_sw_examples_url}/serverless-workflow-data-index-persistence-addon-quarkus -:infinispan_url: https://infinispan.org/ -:mongo_url: https://www.mongodb.com/ :postgresql_url: https://www.postgresql.org/ This document describes how you add the {data_index_ref} features to your workflow. You simply need to add the {data_index_ref} extension to the workflow and @@ -34,15 +32,11 @@ These extensions are distributed as addons ready to work with different types of * kogito-addons-quarkus-data-index-inmemory (inmemory PostgreSQL) * kogito-addons-quarkus-data-index-postgresql -* kogito-addons-quarkus-data-index-infinispan -* kogito-addons-quarkus-data-index-mongodb With the same purpose, the Quarkus {data_index_ref} persistence extension can be added to any workflow application and incorporates only the {data_index_ref} indexation and data persistence functionality into the same application without needing an external {data_index_ref} service to do that. These extensions are distributed as addons ready to work with different types of persistence: * kogito-addons-quarkus-data-index-persistence-postgresql -* kogito-addons-quarkus-data-index-persistence-infinispan -* kogito-addons-quarkus-data-index-persistence-mongodb In this case to interact with that data and related runtimes using GraphQL you will need an external {data_index_ref} service that makes that endpoint available. @@ -53,7 +47,7 @@ The {data_index_ref} extensions are provided as addons for each kind of supporte Once one of these `kogito-addons-quarkus-data-index` or `kogito-addons-quarkus-data-index-persistence` addons is added to a workflow, it incorporates the functionality to index and store the workflow data. In case of the `kogito-addons-quarkus-data-index` also incorporates the GraphQL endpoint to perform queries and management operations. -In the same way as the {data_index_ref} service, there is a specific addon for each type of persistence you want to work with. Currently, you can find {data_index_ref} addons for: link:{postgresql_url}[PostgreSQL], link:{infinispan_url}[Infinispan], and link:{mongo_url}[MongoDB] +In the same way as the {data_index_ref} service, there is a specific addon for each type of persistence you want to work with. Currently, you can find {data_index_ref} addons for: link:{postgresql_url}[PostgreSQL]. 
[IMPORTANT] ==== @@ -93,7 +87,7 @@ Manually to the POM.xml:: [source,xml] ---- - org.kie.kogito + org.kie kogito-addons-quarkus-data-index-postgresql ---- @@ -155,7 +149,7 @@ Manually to the POM.xml:: [source,xml] ---- - org.kie.kogito + org.kie kogito-addons-quarkus-data-index-persistence-postgresql ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-usecase-singleton.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-usecase-singleton.adoc index b9ada42c..558570a1 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-usecase-singleton.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-usecase-singleton.adoc @@ -74,7 +74,7 @@ kubectl create namespace usecase1 . Deploy the {data_index_ref} Service and postgresql database: + -- -include:common/_dataindex_deployment_operator.adoc[] +include::common/_dataindex_deployment_operator.adoc[] Perform the deployments executing [source,shell] diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_common_proc_deploy_kubectl_oc.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_common_proc_deploy_kubectl_oc.adoc index 0dbd75e5..607af293 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_common_proc_deploy_kubectl_oc.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_common_proc_deploy_kubectl_oc.adoc @@ -22,8 +22,8 @@ pom.xml:: [source,xml,subs="attributes+"] ---- - org.kie.kogito - kogito-addons-quarkus-knative-eventing + org.kie + kie-addons-quarkus-knative-eventing io.quarkus @@ -35,13 +35,13 @@ Gradle:: [source,shell,subs="attributes+"] ---- quarkus-kubernetes 'io.quarkus:{quarkus-k8s-plugin}:{quarkus_version}' -quarkus-kubernetes 'org.kie.kogito:kogito-addons-quarkus-knative-eventing:{page-component-version}' +quarkus-kubernetes 'org.kie:kie-addons-quarkus-knative-eventing:{page-component-version}' ---- Quarkus CLI:: + [source,shell,subs="attributes+"] ---- -quarkus ext add org.kie.kogito:kogito-addons-quarkus-knative-eventing quarkus-openshift{page-component-version}' +quarkus ext add org.kie:kie-addons-quarkus-knative-eventing quarkus-openshift{page-component-version}' ---- ==== -- @@ -49,7 +49,7 @@ quarkus ext add org.kie.kogito:kogito-addons-quarkus-knative-eventing quarkus-op . 
To generate the `knative` `yaml|json` descriptors, set the following properties in the `application.properties` file of your workflow application: + -- -.System properties to generate knative descriptors +.System properties to generate Knative descriptors [source,shell,subs="attributes+"] ---- quarkus.kubernetes.deployment-target=knative diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_proc_deploy_sw_quarkus_cli.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_proc_deploy_sw_quarkus_cli.adoc index 958c15cf..c5b19007 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_proc_deploy_sw_quarkus_cli.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/common/_proc_deploy_sw_quarkus_cli.adoc @@ -16,7 +16,7 @@ You can add the {platform} and the Kogito Knative extensions to your project wit .Add {platform} and Kogito Knative extensions to the project with Quarkus CLI [source,shell,subs="attributes+"] ---- -quarkus ext add {quarkus-k8s-plugin} kogito-addons-quarkus-knative-eventing +quarkus ext add {quarkus-k8s-plugin} kie-addons-quarkus-knative-eventing ---- -- . To deploy the workflow application using Quarkus CLI, set the following system properties in `application.properties` file: @@ -57,7 +57,7 @@ quarkus build -DskipTests [NOTE] ==== -The `kogito-examples` already have this extension added by default, and can be activated with the `container` Maven profile. +The `{kie_kogito_examples_repo_name}` already have this extension added by default, and can be activated with the `container` Maven profile. ==== // verify deployed swf diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc index ad3618ef..ba60758c 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc @@ -20,7 +20,6 @@ :command_line_tool: kubectl :command_line_tool_name: Kubernetes CLI // links -:kn_cli_quickstart_plugin_url: https://knative.dev/docs/install/quickstart-install/#install-the-knative-cli :knative_on_minikube_step_by_step_url: https://redhat-developer-demos.github.io/knative-tutorial/knative-tutorial/setup/minikube.html :knative_issue_url: https://github.com/knative/serving/issues/6101 @@ -80,7 +79,7 @@ For more information, see link:{kn_cli_install_url}[Install the Knative CLI]. . Configure Knative on Minikube. + -- -Knative CLI offers `quickstart` plug-in, which provides the required configurations. For information about installing the `quickstart` plug-in, see link:{kn_cli_quickstart_plugin_url}[Install Knative using quickstart]. +Knative CLI offers `quickstart` plug-in, which provides the required configurations. For information about installing the `quickstart` plug-in, see link:{knative_quickstart_url}[Install Knative using quickstart]. -- . 
After configuring the plug-in, execute the following command to configure a Minikube profile: diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-openshift.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-openshift.adoc index a6e8cc5a..3b15f588 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-openshift.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/deployments/deploying-on-openshift.adoc @@ -8,7 +8,7 @@ :registry: OpenShift's :cluster_kind: OpenShift with Red Hat OpenShift Serverless is ready :k8s_registry: image-registry.openshift-image-registry.svc:5000 -:knative_procedure: link:{ocp_knative_serving_url}[Knative Serving] +:knative_procedure: link:{ocp_knative_serving_install_url}[Knative Serving] :default_namespace: kogito-serverless :command_line_tool: oc :command_line_tool_name: OpenShift CLI @@ -53,8 +53,8 @@ If you are running OpenShift Local on Mac with M1 processors, you might not find Before proceeding further, make sure that you have access to the OpenShift cluster, the OpenShift Serverless operator is properly installed and the `Knative Serving` is ready for use. For more information on each topic, please refer the following guides: * Installing link:{ocp_swf_install_url}[OpenShift Serverless Operator]. -* Installing link:{ocp_knative_serving_url}[Knative Serving]. -* Installing link:{ocp_knative_eventing_url}[Knative Eventing]. Knative Eventing is not required for this guide, however it is important to mention how to install it, if required by your {product_name} application. +* Installing link:{ocp_knative_serving_install_url}[Knative Serving]. +* Installing link:{ocp_knative_eventing_install_url}[Knative Eventing]. Knative Eventing is not required for this guide, however it is important to mention how to install it, if required by your {product_name} application. [TIP] @@ -96,7 +96,7 @@ After checking the prerequisites, prepare the project that will be used to deplo [TIP] ==== -You can use the link:build-workflow-image-with-quarkus-cli.html#proc-building-serverless-workflow-application-using-native-image[native image] for a faster startup. +You can use the xref:use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc#proc-building-serverless-workflow-application-using-native-image[native image] for a faster startup. ==== @@ -160,7 +160,7 @@ You can read further the next sections which explain different approaches to dep [NOTE] ==== -In the next steps you will notice the value **{k8s_registry}** being used. It is the internal OpenShift's registry address where the images of the deployments will pulled from. Note that, the Container Image pushed in the previous step will be queried as `{k8s_registry}/{default_namespace}/serverless-workflow-greeting-quarkus:1.0` +In the next steps you will notice the value **{k8s_registry}** being used. It is the internal OpenShift's registry address where the images of the deployments will be pulled from. 
Note that, the Container Image pushed in the previous step will be queried as `{k8s_registry}/{default_namespace}/serverless-workflow-greeting-quarkus:1.0` ==== * <> diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc index a82fd5e4..8c3148b5 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc @@ -22,21 +22,21 @@ Apache Maven:: + [source,shell] ---- -mvn quarkus:add-extension -Dextensions="kogito-addons-quarkus-knative-eventing" +mvn quarkus:add-extension -Dextensions="kie-addons-quarkus-knative-eventing" ---- Quarkus CLI:: + [source,shell] ---- -quarkus extension add kogito-addons-quarkus-knative-eventing +quarkus extension add kie-addons-quarkus-knative-eventing ---- Manually:: + [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-knative-eventing + org.kie + kie-addons-quarkus-knative-eventing ---- ==== diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/newsletter-subscription-example.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/newsletter-subscription-example.adoc index 80573b35..f3a0bb59 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/newsletter-subscription-example.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/event-orchestration/newsletter-subscription-example.adoc @@ -30,12 +30,12 @@ Here we have the Newsletter Subscription workflow: image::use-cases/newsletter-subscription/newsletter-subscription-flow.png[Workflow] .Newsletter subscription flow workflow definition -[source,json] +[source,json,subs="attributes+"] ---- { "id": "subscription_flow", "dataInputSchema": "subscription-schema.json", - "specVersion": "0.8", + "specVersion": "{spec_version}", "version": "1.0", "start": "VerifyEmail", "events": [ @@ -216,13 +216,13 @@ image::use-cases/newsletter-subscription/newsletter-subscription-backend-ui.png[ == Executing the workflows -In a command terminal, clone the `kogito-examples` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-newsletter-subscription/README.md#running-on-knative[these steps]: +In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-newsletter-subscription/README.md#running-on-knative[these steps]: -[source, bash] +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-newsletter-subscription +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-newsletter-subscription ---- === Architecture @@ -252,7 +252,7 @@ For simplification purposes, a single database instance is used for both service ==== -For more information about knative eventing outgoing CloudEvents over HTTP, see 
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]. +For more information about Knative eventing outgoing CloudEvents over HTTP, see xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]. == Additional resources diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc index e82b974b..0b4e6e86 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc @@ -7,13 +7,12 @@ :quarkus_container_images_url: https://quarkus.io/guides/container-image :quarkus_native_builds_url: https://quarkus.io/guides/building-native-image :google_jib_url: https://github.com/GoogleContainerTools/jib -:kogito_sw_examples_git_repo_url: https://github.com/apache/incubator-kie-kogito-examples.git This document describes how to build a {product_name} container image using the link:{quarkus_cli_url}[Quarkus CLI]. .Prerequisites include::./../../../../pages/_common-content/getting-started-requirement-quarkus.adoc[] -* You have setup your environment according to xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and you cluster is ready. +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and your cluster is ready. * Optionally, GraalVM {graalvm_min_version} is installed. See xref:getting-started/preparing-environment.adoc#proc-additional-options-for-local-environment[] Quarkus provides a few extensions to build container images, such as `Jib`, `docker`, `s2i`, and `buildpacks`. For more information about the Quarkus extensions, see the link:{quarkus_container_images_url}[Quarkus documentation]. diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc new file mode 100644 index 00000000..36a31630 --- /dev/null +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc @@ -0,0 +1,187 @@ += Creating a Quarkus Workflow project + +As a developer, you can use {product_name} to create an application and in this guide we want to explore different options and provide an overview of available tools that can help. + +We will also use Quarkus dev mode for iterative development and testing. + +As a common application development, you have different phases like Analysis, Development and Deployment. Let's explore in detail each phase and what {product_name} provides in each case: + +* <> +** <> +** <> +** <> +** <> + +* <> +** <> +** <> +** <> +* <> + + +.Prerequisites +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide. 
+ +For more information about the tooling and the required dependencies, see xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar with {product_name} tooling]. + +ifeval::["{kogito_version_redhat}" != ""] +include::../../pages/_common-content/downstream-project-setup-instructions.adoc[] +endif::[] + + +[[proc-analysis-phase]] +== Analysis phase + +Start by analyzing the requirements for your {product_name} application. This will enable you to make decisions about the persistence, eventing, security, topology, and component interaction needs of your application. + +[[proc-adding-persistence]] +=== Adding persistence +Service orchestration is a relevant use case regarding the rise of microservices and event-driven architectures. These architectures focus on communication between services and there is always the need to coordinate that communication without the persistence addition requirement. + +{product_name} applications use an in-memory persistence by default. This makes all the {workflow_instance} information volatile upon runtime restarts. In the case of this guide, when the workflow runtime is restarted. +As a developer, you must decide if you need to ensure that your workflow instances remain consistent in the context. + +If your application requires persistence, you must decide what kind of persistence is needed and configure it properly. +Follow the {product_name} xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[persistence guide] for more information. + +You can find more information about how to create an application that writes to and reads from a database following link:https://quarkus.io/guides/getting-started-dev-services[Your second Quarkus application] guide. + +[[proc-adding-eventing]] +=== Adding eventing + +Quarkus unifies reactive and imperative programming you can find more information about this in the link:https://quarkus.io/guides/quarkus-reactive-architecture[Quarkus Reactive Architecture] guide. + +In this phase, we must decide how the Event-Driven Architecture needs to be added to our project. +As an event-driven architecture, it uses events to trigger and communicate between services. It allows decoupled applications to publish and subscribe to events through an event broker asynchronously. The event-driven architecture is a method of developing systems that allows information to flow in real time between applications, microservices, and connected devices. + +This means that applications and devices do not need to know where they are sending information or where the information they are consuming comes from. + +If we choose to add eventing, {product_name} supports different options like: + +* *Kafka Connector* for Reactive Messaging. See xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[] for more details. +* *Knative* eventing. See xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[] for more details. + +You must choose how the different project components will communicate and what kind of communication is needed. More details about link:https://quarkus.io/guides/quarkus-reactive-architecture#quarkus-extensions-enabling-reactive[Quarkus Extensions enabling Reactive] + +[[proc-adding-data-index-service]] +=== Adding Data Index service + +The {data_index_ref} service can index the {workflow_instance} information using GraphQL. 
This is very useful if you want to consume the workflow data in different applications through a GraphQL endpoint. +For more information about {data_index_ref} service see xref:data-index/data-index-core-concepts.adoc[] for more details. + +If you decide to index the data, you must select how to integrate the {data_index_ref} service in your topology. Here are some options: + +* You can choose to have the data indexation service integrated directly into our application using the different xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[]. +This allows you to use the same data source as the application persistence uses, without the need for extra service deployment. +** *{data_index_ref} persistence extension*. That persists the indexed data directly at the application data source. +** *{data_index_ref} extension*. That persist directly the indexed data at the application data source and also provide the GraphQL endpoint to interact with the persisted data. +* Another option is to have the Data Index as a standalone service. In this case, you must properly configure the communication between your {product_name} application and the {data_index_ref} service. More details in xref:data-index/data-index-service.adoc[] + + +[[proc-adding-job-service]] +=== Adding Job service + +The Job Service facilitates the scheduled execution of tasks in a cloud environment. If any of your {product_name} workflow needs some kind of temporary schedule, you will need to integrate the Job service. + +If you decide to use Job Service, you need to select how to integrate the service into your topology. Here are some options: + +* You can choose to have the Job service integrated directly into your {product_name} Quarkus application using xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[] guide. +* Explore how to integrate the Job service and define the interaction with your {product_name} application workflows. You can find more Job service-related details in xref:job-services/core-concepts.adoc[Job Service Core concepts]. + +[[proc-development-phase]] +== Development phase + +Once you decide which components you must integrate into {product_name} project, you can jump into the workflow development phase. + +The goal is to create a workflow and be able to test and improve it. {product_name} provides some tooling to facilitate the developer to try the workflows during this development phase and refine them before going to the deployment phase. +As an overview, you have the following resources to help in this development phase: + +** <> +** <> +** <> + +[[proc-boostrapping-the-project]] +=== Bootstrapping a project, Creating a workflow, Running your workflow application and Testing your workflow application + +To create your workflow service, first you need to bootstrap a project. +Follow the {product_name} xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[] guide to setup a minimal working project. + +[[proc-logging-configuration]] +=== How to configure logging + +In order to understand what's happening in the environment. {product_name} is using Quarkus Log Management. Logs can provide a detailed history of what happened leading up to the issue. + +Quarkus uses the JBoss Log Manager logging backend for publishing application and framework logs. 
+Quarkus supports the JBoss Logging API and multiple other logging APIs, seamlessly integrated with JBoss Log Manager +In order to be able to see the in detail access to link:{quarkus_guides_logging_url}[Quarkus Logging Configuration guide] + +.Example adding Logging configuration properties in `application.properties` file +[source,properties] +---- +quarkus.log.console.enable=true <1> +quarkus.log.level=INFO <2> +quarkus.log.category."org.apache.kafka.clients".level=INFO +quarkus.log.category."org.apache.kafka.common.utils".level=INFO <3> +---- +<1> If console logging should be enabled, even by default is set to true +<2> The log level of the root category, which is used as the default log level for all categories +<3> Logging is configured on a per-category basis, with each category being configured independently. Configuration for a category applies recursively to all subcategories unless there is a more specific subcategory configuration + +[NOTE] +==== +Access to link:{quarkus_guides_logging_url}#loggingConfigurationReference[Logging configuration reference] to see how logs properties can be configured +==== + +[[proc-dev-ui]] +=== Refining your workflow testing with Dev-UI + +Quarkus provides a host of features when dev mode is enabled allowing things like: + +* *Change configuration values*. +* *Running Development services*, including Zero-config setup of data sources. When testing or running in dev mode Quarkus can even provide you with a zero config database out of the box, a feature we refer to as Dev Services. More information can be found in link:{quarkus_guides_logging_url}#dev-services[Quarkus introduction to Dev services]. +* *Access to Swagger-UI* that allows exploring the different {product_name} application endpoints. The quarkus-smallrye-openapi extension will expose the Swagger UI when Quarkus is running in dev mode. Additional information can be found link:{quarkus_guides_swaggerui_url}#dev-mode[Use Swagger UI for development]. +* *Data index Graph UI* that allows to perform GraphQL queries or to explore the data schema +* Allow to *explore the {workflow_instances}* if the {product_name} Runtime tools Quarkus Dev UI is included + +[NOTE] +==== +By default, Swagger UI is only available when Quarkus is started in dev or test mode. + +If you want to make it available in production too, you can include the following configuration in your application.properties: + +``` +quarkus.swagger-ui.always-include=true +``` +This is a build time property, it cannot be changed at runtime after your application is built. +==== + +[[proc-deployment-phase]] +== Deployment phase + +At this stage you have a {product_name} Quarkus application well tested and ready to be deployed. + +There are two basic modes that a Quarkus application can be deployed: + +* As an standard Java application (executable jar with libraries on the classpath) +* As a native executable which can be built using GraalVM link:{quarkus_guides_building_native}#producing-a-native-executable[Quarkus Building a native executable guide] + +If you put either the Java application or the native executable app inside a container, you can deploy the container anywhere that supports running containers. + +Quarkus provides extensions for building (and pushing) container images. +You can find more details about that container images generation in link:{quarkus_guides_container_image_url}[Quarkus Container Image extensions] + +Once this container image is built it can be used as part of the decided topology. 
You have different options like: + +* xref:use-cases/advanced-developer-use-cases/deployments/deploying-on-minikube.adoc[] +* xref:use-cases/advanced-developer-use-cases/deployments/deploying-on-kubernetes.adoc[] +* xref:use-cases/advanced-developer-use-cases/deployments/deploying-on-openshift.adoc[] + +== Additional resources + +* xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar with {product_name} tooling] +* xref:service-orchestration/orchestration-of-openapi-based-services.adoc[Orchestrating the OpenAPI services] +* xref:use-cases/advanced-developer-use-cases/event-orchestration/newsletter-subscription-example.adoc[] +* xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc[] + +include::../../../../pages/_common-content/report-issue.adoc[] + diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc index d9297a44..c3b0a6f9 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc @@ -1,4 +1,4 @@ -= Creating a Quarkus Workflow Project += Creating a Quarkus Workflow service As a developer, you can use {product_name} and create a `Hello World` application, which includes the following procedures: @@ -18,12 +18,12 @@ This document describes how to create a workflow application that serves a `hell image::getting-started/hello-world-workflow.png[] .Prerequisites -* You have setup your environment according to xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and you cluster is ready. +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and your cluster is ready. For more information about the tooling and the required dependencies, see xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar with {product_name} tooling]. ifeval::["{kogito_version_redhat}" != ""] -include::../../../../pages/_common-content/downstream-project-setup-instructions.adoc[] +include::../../pages/_common-content/downstream-project-setup-instructions.adoc[] endif::[] [[proc-boostrapping-the-project]] @@ -105,12 +105,12 @@ After bootstrapping a project, you need to create a workflow. 
In the following p + -- .Example content for `hello.sw.json` file -[source,json] +[source,json,subs="attributes+"] ---- { "id": "hello_world", <1> "version": "1.0", - "specVersion": "0.8", + "specVersion": "{spec_version}", "name": "Hello World Workflow", "description": "JSON based hello world workflow", "start": "Inject Hello World", <3> @@ -292,7 +292,7 @@ __ ____ __ _____ ___ __ ____ ______ 2022-05-25 14:38:13,375 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,105 INFO [org.kie.kog.per.pro.ProtobufService] (main) Registering Kogito ProtoBuffer file: kogito-index.proto 2022-05-25 14:38:13,377 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,132 INFO [org.kie.kog.per.pro.ProtobufService] (main) Registering Kogito ProtoBuffer file: kogito-types.proto 2022-05-25 14:38:13,378 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,181 INFO [io.quarkus] (main) data-index-service-inmemory 1.22.0.Final on JVM (powered by Quarkus 2.9.0.Final) started in 4.691s. Listening on: http://0.0.0.0:8080 -2022-05-25 14:38:13,379 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO [io.quarkus] (main) Profile prod activated. +2022-05-25 14:38:13,379 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO [io.quarkus] (main) Profile preview activated. 2022-05-25 14:38:13,380 INFO [org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer] (docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, inmemory-postgres, jdbc-postgresql, narayana-jta, oidc, reactive-routes, rest-client-reactive, rest-client-reactive-jackson, security, smallrye-context-propagation, smallrye-graphql-client, smallrye-health, smallrye-metrics, smallrye-reactive-messaging, smallrye-reactive-messaging-http, vertx, vertx-graphql] ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc index 4f48bfa1..4ab97885 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc @@ -7,12 +7,11 @@ :quarkus_container_images_url: https://quarkus.io/guides/container-image :quarkus_native_builds_url: https://quarkus.io/guides/building-native-image :google_jib_url: https://github.com/GoogleContainerTools/jib -:kogito_sw_examples_git_repo_url: https://github.com/apache/incubator-kie-kogito-examples.git This document describes how to work with {product_name} example applications. .Prerequisites -* You have setup your environment according to xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and you cluster is ready. 
+* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and your cluster is ready. [[proc-using-example-application]] == Using an example application @@ -21,13 +20,13 @@ To get started with our examples, you can use the link:{kogito_sw_examples_url}/ However, same procedure can be applied to any example located in link:{kogito_sw_examples_url}[{product_name} example repository]. .Procedure -. Clone the link:{kogito_sw_examples_git_repo_url}[kogito-examples] repository and navigate to the link:{kogito_sw_examples_url}/serverless-workflow-greeting-quarkus[`serverless-workflow-greeting-quarkus`] example application. +. Clone the link:{kogito_examples_url}[{kie_kogito_examples_repo_name}] repository and navigate to the link:{kogito_sw_examples_url}/serverless-workflow-greeting-quarkus[`serverless-workflow-greeting-quarkus`] example application. + .Clone an example application [source,shell,subs="attributes+"] ---- -git clone --branch main {kogito_sw_examples_git_repo_url} -cd incubator-kie-kogito-examples/serverless-workflow-examples/serverless-workflow-greeting-quarkus +git clone --branch main {kogito_examples_url} +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-greeting-quarkus ---- . To run the example application, follow the instructions located in the README.md. Every example provides a file with instructions on how to run and work with it. diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/index.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/index.adoc index 6f80ad58..d98a456b 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/index.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/index.adoc @@ -5,7 +5,7 @@ :keywords: cloud, kubernetes, docker, image, podman, openshift, pipelines // other -.Prerequsites -* You have setup your environment according to xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and you cluster is ready. +.Prerequisites +* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide and your cluster is ready. {product_name} allows developers to implement workflow applications for advanced use cases using Quarkus and Java. \ No newline at end of file diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc index 53e1bfb3..de201002 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/camel-routes-integration.adoc @@ -32,7 +32,7 @@ For more information about creating a workflow, see xref:use-cases/advanced-deve You can add YAML or XML Camel routes to your workflow project. .Procedure -. Create a YAML or XML Camel Routes using your IDE or the link:{kaoto_url}[Kaoto VSCode Editor] and place them in the `src/main/resources/routes` directory. +. 
Create a YAML or XML Camel Routes using your IDE or the link:{kaoto_url}[Kaoto VS Code Editor] and place them in the `src/main/resources/routes` directory. . The route `from` endpoint must be a `direct` component. That's the endpoint producer expected by the workflow engine. . The route response must be in a valid format that the workflow context can understand: + diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc index 085ebb99..47687e0a 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-knative.adoc @@ -9,7 +9,7 @@ This document describes how to call Knative services using {product_name} custom functions. The procedure described in this document is based on the link:{kogito_sw_examples_url}/serverless-workflow-custom-function-knative[`serverless-workflow-custom-function-knative`] example application. -For more details about the Knative custom function, see xref:core/custom-functions-support.adoc#knative-custom-function[Custom functions for your {product_name} service]. +For more details about the Knative custom function, see xref:core/custom-functions-support.adoc#con-func-knative[Custom functions for your {product_name} service]. .Prerequisites @@ -26,7 +26,7 @@ include::../deployments/common/_prerequisites.adoc[] [source,xml] ---- - org.kie.kogito + org.kie kogito-addons-quarkus-knative-serving ---- @@ -127,7 +127,7 @@ You should see an output like (`id` will change): == Sending as CloudEvent -Knative functions support https://github.com/knative/func/blob/main/docs/function-templates/quarkus.md#invocation-parameters[CloudEvent as the message protocol]. {product_name} can create and post CloudEvent messages in `functionRef`. For more information see xref:core/custom-functions-support.adoc#sending-cloudevents[Custom Functions - Sending a CloudEvent]. +Knative functions support https://github.com/knative/func/blob/main/docs/function-templates/quarkus.md#invocation-parameters[CloudEvent as the message protocol]. {product_name} can create and post CloudEvent messages in `functionRef`. For more information see xref:core/custom-functions-support.adoc#sending-cloudevents[] == Additional resources diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-python.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-python.adoc new file mode 100644 index 00000000..3202135b --- /dev/null +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/custom-functions-python.adoc @@ -0,0 +1,115 @@ += Invoking Python from {product_name} +:compat-mode!: +// Metadata: +:description: Describe Python execution capabilities +:keywords: kogito, workflow, quarkus, serverless, python, AI + +This document describes how to integrate python scripts and functions into your workflow using {product_name} custom functions. 
The code appearing in this document is copied from link:{kogito_sw_examples_url}/serverless-workflow-python-quarkus[`serverless-workflow-python-quarkus`] example application and link:{kogito_runtimes_url}/quarkus/addons/python/integration-tests/src/main/resources/PythonService.sw.json[PythonService] integration test. + +== Enable Python support + +To enable Python support, you must add the Python add-on dependency to your {product_name} module `pom.xml` file + +[source,xml] +---- + + org.apache.kie.sonataflow + sonataflow-addons-quarkus-python + +---- + +== Invoking embedded Python script. + +{product_name} supports the execution of Python script in the same memory address as the running workflow. + +To invoke a Python script the first step is to define a custom Python function at the beginning of the flow. + +[source,json] +---- + "functions": [ + { + "name": "python", + "type": "custom", + "operation": "script:python" + } + ] +---- + +Once done, you can use that function several times to execute arbitrary Python code. The Python code is provided as an argument of the function call through the `script` property. + +[source,json] +---- +"functionRef": + "refName": "python", + "arguments": { + "script": "import numpy as np" + } + } +---- + +Previous snippet imports link:https://numpy.org/[numpy] library. The same Python function can be invoked again to generate an array containing three random numbers between `0` and `10`. + +[source,json] +---- +"functionRef": { + "refName": "python", + "arguments": { + "script": "rng = np.random.default_rng().integers(low=0,high=10,size=3)" + } + } +---- + +To access the result of the embedded python invocation, {product_name} provides a special context variable: `$WORKFLOW.python`. Therefore, if you want to set the `rng` variable from the previous script as the `output` property of the workflow model, you write + +[source,json] +---- +"stateDataFilter" : { + "output" : "{result:$WORKFLOW.python.rng}" +} +---- + +== Invoking Python function. + +You can also invoke functions from standard or custom python modules. + +You must define a serverless workflow function definition that invokes the Python function. You should specify, within the `operation` property, the name of the Python module and function to be invoked when the function is called. You should separate the module name and the function name using `::` and prefix them with `services::python:` + +The following example defines a function that invokes a standard Python function link:https://www.geeksforgeeks.org/python-math-factorial-function/[math.factorial(x)] +[source,json] +---- + "functions" : [ { + "name" : "factorial", + "operation" : "service:python:math::factorial", + "type" : "custom" + } +---- + +Once you have defined the function, you might call it passing the expected arguments. In the case of factorial, an integer stored in property `x` of the workflow model. + +[source,json] +---- + "functionRef" : { + "refName": "factorial", + "arguments" : ".x" + } +---- + +The return value of the function can be handled as any other function result using the `actionDataFilter.toStateData` Serverless Workflow construct. The following will set a workflow model property called `result` with the factorial invocation returned value. + +[source,json] +---- + "actionDataFilter" : { + "toStateData" : ".result" + } +---- + +== Further reading + +The link:{kogito_sw_examples_url}/serverless-workflow-openvino-quarkus[Openvino] illustrates the powerful AI capabilities of integrating workflows with Python. 
It is a must-see for all interested in the topic. + +== Additional resources + +* xref:core/custom-functions-support.adoc[Custom functions for your {product_name} service] +* xref:core/understanding-jq-expressions.adoc[Understanding JQ expressions] + +include::../../../_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc index 46e2385e..915dd318 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/expose-metrics-to-prometheus.adoc @@ -27,15 +27,15 @@ You can enable the metrics in your workflow application. For more information about creating a workflow, see xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[Creating your first workflow service]. .Procedure -. To add the metrics to your workflow application, add the `org.kie.kogito:kogito-addons-quarkus-monitoring-prometheus` dependency to the `pom.xml` file of your project: +. To add the metrics to your workflow application, add the `org.kie:kie-addons-quarkus-monitoring-prometheus` dependency to the `pom.xml` file of your project: + -- .Dependency to be added to the `pom.xml` file to enable metrics [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-monitoring-prometheus + org.kie + kie-addons-quarkus-monitoring-prometheus ---- -- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc index a583cbbd..162db244 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/integrations/serverless-dashboard-with-runtime-data.adoc @@ -39,15 +39,15 @@ You can build dashboards to monitor the data of your workflows using metrics. For more information about creating a workflow, see xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[Creating your first workflow service]. .Procedure -. To enable metrics for your workflows application add `org.kie.kogito:kogito-addons-quarkus-monitoring-prometheus` dependency in `pom.xml` file of your application: +. 
To enable metrics for your workflows application add `org.kie:kie-addons-quarkus-monitoring-prometheus` dependency in `pom.xml` file of your application: + -- .Add metrics dependency to `pom.xml` file [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-monitoring-prometheus + org.kie + kie-addons-quarkus-monitoring-prometheus ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc new file mode 100644 index 00000000..d3618be7 --- /dev/null +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc @@ -0,0 +1,238 @@ +[#job-service-quarkus-extensions] += Job Service Quarkus Extensions +:compat-mode!: +// Metadata: +:description: Job Service Quarkus extensions in {product_name} +:keywords: sonataflow, workflow, serverless, job service, quarkus extensions + +The interaction xref:job-services/core-concepts.adoc#integration-with-the-workflows[between the workflows and the Job Service] is handled by the different Job Service Quarkus Extensions. Each extension is designed to work with a different communication alternative. + +For example, you can select if your workflows must interact with the Job Service by sending cloud events over the <> system or the <> system, or simply by executing direct <> calls. + +Finally, for the interaction work, you must configure your Quarkus Workflow Project with the extension of your choice. + +image::job-services/Quarkus-Workflow-Project-And-Extension.png[] + +We recommend that you follow this procedure: + +1. Identify the communication alternative that best fits your scenario. +2. Be sure that the Job Service is properly configured to support that alternative. This is very important if you want to use xref:job-services/core-concepts.adoc#knative-eventing[Knative events] or xref:job-services/core-concepts.adoc#kafka-messaging[kafka messages] to communicate with it. +3. Configure your Quarkus Workflow Project with the corresponding extension. + +[NOTE] +==== +If your workflows are not using timer-based actions, like timeouts, there is no need to add such an extension. +==== + +[#kogito-addons-quarkus-jobs-knative-eventing] +== Knative eventing interaction + +To interact with the Job Service by sending cloud events over the Knative eventing system you must follow these steps: + +. Be sure that you have read the xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[Consuming and producing events on Knative Eventing] guide, and that you have configured the project accordingly. + +. Add the `kogito-addons-quarkus-jobs-knative-eventing` extension to your Quarkus Workflow Project using any of the following alternatives: + +[tabs] +==== +Manually:: ++ +[source,xml] +---- + + org.kie + kogito-addons-quarkus-jobs-knative-eventing + +---- +Apache Maven:: ++ +[source,shell] +---- +mvn quarkus:add-extension -Dextensions="kogito-addons-quarkus-jobs-knative-eventing" +---- +Quarkus CLI:: ++ +[source,shell] +---- +quarkus extension add kogito-addons-quarkus-jobs-knative-eventing +---- +==== + +[start=3] +. Add the following configurations to the `application.properties` file of your project. 
+ +[source,properties] +---- +mp.messaging.outgoing.kogito-job-service-job-request-events.connector=quarkus-http +mp.messaging.outgoing.kogito-job-service-job-request-events.url=${K_SINK:http://localhost:8280/v2/jobs/events} +mp.messaging.outgoing.kogito-job-service-job-request-events.method=POST +---- + +[NOTE] +==== +The `K_SINK` environment variable is automatically generated by the combination of the Knative ecosystem and the SinkBinding definition that will be automatically generated in the `kogito.yml` file. + +If this variable is not present, the default value `http://localhost:8280/v2/jobs/events` is used instead, this can be useful in development environments if you are executing the Job Service as a standalone service. +==== + +[start=2] +. Build your project and locate the automatically generated `kogito.yml` and `knative.yml` files in the `/target/kubernetes` directory of your project, xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc#proc-generating-kn-objects-build-time[see]. + +[source,shell] +---- +mvn clean install +---- + +[start=3] +. Use the generated files to deploy your workflow application in the Kubernetes cluster using the following commands: + +[source, bash] +---- +kubectl apply -f target/kogito.yml + +kubectl apply -f target/knative.yml +---- + +You can see a full example of this interaction mode configuration in the xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc#execute-quarkus-project-standalone-services[Quarkus Workflow Project with standalone services] example project. + +[#kogito-addons-quarkus-jobs-messaging] +== Kafka messaging interaction + +To interact with the Job Service by sending cloud events over the kafka messaging system you must follow these steps: + +. Be sure that you have read the xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[Consuming and producing events with Kafka] guide, and you have configured the project accordingly. + +. Add the `quarkus-smallrye-reactive-messaging-kafka` and `kogito-addons-quarkus-jobs-messaging` extensions to your Quarkus Workflow Project using any of the following alternatives. + +[tabs] +==== +Manually:: ++ +[source,xml] +---- + + io.quarkus + quarkus-smallrye-reactive-messaging-kafka + + + org.kie + kogito-addons-quarkus-jobs-messaging + +---- + +Apache Maven:: ++ +[source,shell] +---- +mvn quarkus:add-extension -Dextensions="quarkus-smallrye-reactive-messaging-kafka,kogito-addons-quarkus-jobs-messaging" +---- + +Quarkus CLI:: ++ +[source,shell] +---- +quarkus extension add quarkus-smallrye-reactive-messaging-kafka kogito-addons-quarkus-jobs-messaging +---- +==== + +[start=3] +. Add the following configurations to the `application.properties` file of your project. + +[source,properties] +---- +mp.messaging.outgoing.kogito-job-service-job-request-events.connector=smallrye-kafka +mp.messaging.outgoing.kogito-job-service-job-request-events.topic=kogito-job-service-job-request-events-v2 +mp.messaging.outgoing.kogito-job-service-job-request-events.value.serializer=org.apache.kafka.common.serialization.StringSerializer +---- + +[start=4] +. Build and deploy your workflow application using any of the available procedures. + +[#kogito-addons-quarkus-jobs-management] +== REST call interaction + +To interact with the Job Service by executing direct REST calls you must follow these steps: + +. 
Add the `kogito-addons-quarkus-jobs-management` extension to your Quarkus Workflow Project using any of the following alternatives. + +[tabs] +==== +Manually:: ++ +[source,xml] +---- + + org.kie + kogito-addons-quarkus-jobs-management + +---- +Apache Maven:: ++ +[source,shell] +---- +mvn quarkus:add-extension -Dextensions="kogito-addons-quarkus-jobs-management" +---- +Quarkus CLI:: ++ +[source,shell] +---- +quarkus extension add kogito-addons-quarkus-jobs-management +---- +==== + +[start=3] +. Add the following configuration to the `application.properties` file of your project. + +[source,properties] +---- +kogito.jobs-service.url=http://localhost:8280 +---- + +[NOTE] +==== +When you deploy your project in a Kubernetes cluster, you must configure the `kogito.jobs-service-url` with the cloud URL of the Job Service. +In this case, you can also use an environment variable with the name `KOGITO_JOBS_SERVICE_URL` and pass it to the corresponding container, etc. +==== + +[start=4] +. Build and deploy your workflow application using any of the available procedures. + +== Job Service Embedded + +To facilitate the development and testing stage of your workflows, this extension provides an embedded Job Service instance that executes in the same runtime as your workflows, and thus, requires no additional configurations. The only consideration is that it must not be used for production installations. + +To use this extension you must: + +. Add the `kogito-addons-quarkus-jobs-service-embedded` extension to your Quarkus Workflow Project using any of the following alternatives. + +[tabs] +==== +Manually:: ++ +[source,xml] +---- + + org.kie + kogito-addons-quarkus-jobs-service-embedded + +---- +Apache Maven:: ++ +[source,shell] +---- +mvn quarkus:add-extension -Dextensions="kogito-addons-quarkus-jobs-management" +---- +Quarkus CLI:: ++ +[source,shell] +---- +quarkus extension add kogito-addons-quarkus-jobs-management +---- +==== + +[start=3] +. Build and deploy your workflow application using any of the available procedures. + +You can see a full example of Job Service embedded usage in the xref:use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc#execute-quarkus-project-embedded-services[Quarkus Workflow Project with embedded services] example project. 
+ +include::../../../../pages/_common-content/report-issue.adoc[] diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/integration-tests-with-postgresql.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/integration-tests-with-postgresql.adoc index 7b1930b9..fdcf366b 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/integration-tests-with-postgresql.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/integration-tests-with-postgresql.adoc @@ -91,8 +91,8 @@ Ensure that the `pom.xml` file of your workflow application contains the require [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-persistence-jdbc + org.kie + kie-addons-quarkus-persistence-jdbc ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc index d8859667..bb0ca34b 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc @@ -8,6 +8,7 @@ The {product_name} workflow runtime persistence is the mechanism to ensure that Every workflow instance requires some status information and data to execute, this information is automatically managed by the workflow's runtime and is persisted at different moments of the workflow execution. +[#saving_of_workflow_snapshots] For example, when a workflow instance reaches a state that needs to wait for an event, the engine takes a snapshot of the most relevant information, stores it in the database, and pauses that instance execution. In this way, resources like memory are released and can be used by other executing instances. diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.adoc index 009bbdbd..5e06d9c6 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/persistence-with-postgresql.adoc @@ -14,19 +14,20 @@ :postgresql_doc_url: https://www.postgresql.org/docs/current/ :flyway_url: https://flywaydb.org/ -The {product_name} PostgreSQL persistence is provided by the `kogito-addons-quarkus-persistence-jdbc` add-on, which is based on the Java Database Connectivity (JDBC). +The {product_name} PostgreSQL persistence is provided by the `kie-addons-quarkus-persistence-jdbc` add-on, which is based on the Java Database Connectivity (JDBC). Additionally, it uses the Quarkus JDBC for PostgreSQL and Argoal Datasource extensions to connect with the database. And thus, it automatically inherits all the features of these extensions. For more information about Quarkus and JDBC, see link:{quarkus_datasource_guide}[Quarkus Datasources]. To see how to configure the PostgreSQL persistence, we recommend that follow the `serverless-workflow-callback-quarkus` example application in the link:{kogito_examples_repository_url}[GitHub repository], or apply the <<#configuration_procedure, configuration procedure>> directly in your project. 
.Getting the serverless-workflow-callback-quarkus application -. In a command terminal, clone the `kogito-examples` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-callback-quarkus/README.md[these steps]: -[source, bash] +. In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-callback-quarkus/README.md[these steps]: + +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-callback-quarkus +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-callback-quarkus ---- .Prerequisites @@ -48,8 +49,8 @@ This document relies on running PostgreSQL as a Docker service, however, if you [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-persistence-jdbc + org.kie + kie-addons-quarkus-persistence-jdbc ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc index 495c3dc8..1ca73fa3 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/persistence/postgresql-flyway-migration.adoc @@ -2,7 +2,7 @@ :flyway_install_url: https://flywaydb.org/download/community :flyway_migrate_existing_url: https://flywaydb.org/documentation/learnmore/existing -:kogito_ddl_script_url: https://repo1.maven.org/maven2/org/kie/kogito/kogito-ddl +:kogito_ddl_script_url: https://repository.apache.org/content/groups/snapshots/org/kie/kogito/kogito-ddl :flyway_url: https://flywaydb.org/ :flyway_baseline_migration_url: https://documentation.red-gate.com/fd/baseline-migrations-184127465.html @@ -25,7 +25,7 @@ quarkus.datasource.db-kind=postgresql -- This will create a schema history table `flyway_schema_history` in your database to track the version of each database, recording in it every versioned migration file applied to build that version. -NOTE: When using `kogito-addons-persistence-jdbc`, it is mandatory to set the `quarkus.datasource.db-kind` property, so that Flyway can locate the appropriate scripts for the database. +NOTE: When using `kie-addons-persistence-jdbc`, it is mandatory to set the `quarkus.datasource.db-kind` property, so that Flyway can locate the appropriate scripts for the database. === Migrate using Flyway CLI If you want to migrate manually you can use the Flyway migration CLI tool. 
diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-discovery/kubernetes-service-discovery.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-discovery/kubernetes-service-discovery.adoc index a6b302fc..21f0a109 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-discovery/kubernetes-service-discovery.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-discovery/kubernetes-service-discovery.adoc @@ -295,11 +295,11 @@ When activated, it leverages the Kubernetes Java API for service discovery, maki [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-kubernetes + org.kie + kie-addons-quarkus-kubernetes - org.kie.kogito + org.kie kogito-addons-quarkus-fabric8-kubernetes-service-catalog ---- @@ -312,11 +312,11 @@ This implementation retrieves information from the application's configuration, [source,xml] ---- - org.kie.kogito - kogito-addons-quarkus-kubernetes + org.kie + kie-addons-quarkus-kubernetes - org.kie.kogito + org.kie kogito-addons-quarkus-microprofile-config-service-catalog ---- diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc index c4414644..82ddd683 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc @@ -22,11 +22,11 @@ Developing an application using a service that returns different results every t The `stock-profit` service contains the following workflow definition: .Workflow definition in `stock-profit` service -[source,json] +[source,json,subs="attributes+"] ---- { "id": "stockprofit", - "specVersion": "0.8", + "specVersion": "{spec_version}", "version": "2.0.0-SNAPSHOT", "name": "Stock profit Workflow", "start": "GetStockPrice", @@ -89,7 +89,7 @@ To set properties for different profiles, each property needs to be prefixed wit * `dev`: Activates in development mode, such as `quarkus:dev` * `test`: Activates when tests are running -* `prod` (default profile): Activates when not running in development or test mode +* `preview` (default profile): Activates when not running in development or test mode You can also create additional profiles and activate them using the `quarkus.profile` configuration property. For more information about Quarkus profiles, see link:{quarkus_guides_profiles_url}[Profiles] in the Quarkus Configuration reference guide. 
diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc index 9cf54752..01c894c6 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/timeouts/timeout-showcase-example.adoc @@ -5,7 +5,7 @@ :description: Timeouts showcase in Serverless Workflow :keywords: kogito, workflow, serverless, timer, timeout -The timeouts showcase is designed to show how to configure and execute workflows that use timeouts, according to different deployment scenarios. +The timeouts showcase is designed to show how to configure and execute workflows that use timeouts, according to the different deployment scenarios. While all the scenarios contain the same set of workflows, they are provided as independent example projects, to facilitate the execution and understanding of each case. The following workflows are provided: @@ -233,13 +233,13 @@ And thus, there is no need for additional configurations when you use timeouts. To execute the workflows you must: -In a command terminal, clone the `kogito-examples` repository, navigate to the cloned directory, and follow https://github.com/apache/incubator-kie-kogito-examples/tree/main/serverless-workflow-examples/serverless-workflow-timeouts-showcase-operator-devprofile/README.md[these steps]: +In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-operator-devprofile/README.md[these steps]: -[source, bash] +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-timeouts-showcase-operator-devprofile +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-timeouts-showcase-operator-devprofile ---- [#execute-quarkus-project-embedded-services] @@ -247,25 +247,25 @@ cd kogito-examples/serverless-workflow-examples/serverless-workflow-timeouts-sho Similar to the <<#execute-operator-dev-profile, {operator_name} Dev Profile>>, this scenario shows how to configure the embedded {job_service_xref}[job service] and {data_index_xref}[data index service], when you work with a Quarkus Workflow project and it is also intended for development purposes. 
-In a command terminal, clone the `kogito-examples` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-embedded/README.md[these steps]: +In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-embedded/README.md[these steps]: -[source, bash] +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-timeouts-showcase-embedded +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-timeouts-showcase-embedded ---- [#execute-quarkus-project-standalone-services] === Quarkus Workflow Project with standalone services -This is the most complex and close to a production scenario. In this case, the workflows, the {job_service_xref}[job service], the {data_index_xref}[data index service], and the database are deployed as standalone services in the kubernetes or knative cluster. -Additionally, the communications from the workflows to the {job_service_xref}[job service], and from the {job_service_xref}[job service] to the {data_index_xref}[data index service], are resolved via the knative eventing system. +This is the most complex and close to a production scenario. In this case, the workflows, the {job_service_xref}[job service], the {data_index_xref}[data index service], and the database are deployed as standalone services in the kubernetes or Knative cluster. +Additionally, the communications from the workflows to the {job_service_xref}[job service], and from the {job_service_xref}[job service] to the {data_index_xref}[data index service], are resolved via the Knative eventing system. [NOTE] ==== -By using the knative eventing system the underlying low level communication system is transparent to the integration. +By using the Knative eventing system the underlying low level communication system is transparent to the integration. 
==== @@ -297,13 +297,13 @@ For simplification purposes, a single database instance is used for both service To execute the workflows you must: -In a command terminal, clone the `kogito-examples` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-extended/README.md[these steps]: +In a command terminal, clone the `{kie_kogito_examples_repo_name}` repository, navigate to the cloned directory, and follow link:{kogito_sw_examples_url}/serverless-workflow-timeouts-showcase-extended/README.md[these steps]: -[source, bash] +[source,bash,subs="attributes+"] ---- -git clone https://github.com/apache/incubator-kie-kogito-examples.git +git clone {kogito_examples_url} -cd kogito-examples/serverless-workflow-examples/serverless-workflow-timeouts-showcase-extended +cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-timeouts-showcase-extended ---- From ad8e11928ed01bfd9ded86b98eb008552256d0c4 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 16:42:57 +0200 Subject: [PATCH 14/38] SRVLOGIC-261: Sync Advaced SL nav.adoc --- modules/ROOT/nav.adoc | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index b302e3c6..c67ecae8 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -93,7 +93,8 @@ *** Use Cases **** xref:serverless-logic:use-cases/advanced-developer-use-cases/index.adoc[Development of {product_name} applications using Quarkus and Java] ***** Getting Started -****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[Creating your first workflow service] +****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[] +****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc[] ****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/build-workflow-image-with-quarkus-cli.adoc[] ****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc[] ****** xref:serverless-logic:use-cases/advanced-developer-use-cases/getting-started/test-serverless-workflow-quarkus-examples.adoc[] From c61fc0bad00c5d1b15d22500e6256c60be10adae Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 19:16:20 +0200 Subject: [PATCH 15/38] SRVLOGIC-261: Sync antora with correct versions --- antora.yml | 204 +++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 143 insertions(+), 61 deletions(-) diff --git a/antora.yml b/antora.yml index 5ef3eff5..9696e54a 100644 --- a/antora.yml +++ b/antora.yml @@ -14,104 +14,186 @@ asciidoc: serverlessoperatorname: OpenShift Serverless Operator certmanageroperatorname: OpenShift Cert-Manager Operator serverlessproductname: OpenShift Serverless + + # + # Serverless Logic - Names, labels and similar + # product_name: OpenShift Serverless Logic - kogito_version_redhat: 9.99.0.redhat-00007 + kogito_version_redhat: 9.100.0.redhat-00004 + kogito_branch: 9.100.x-prod operator_name: Serverless Logic Operator + operator_installation_namespace: sonataflow-operator-system + operator_controller_config: sonataflow-operator-controllers-config quarkus_platform: com.redhat.quarkus.platform - kogito_sw_ga: >- - org.kie.kogito:kogito-quarkus-serverless-workflow - quarkus_version: 
3.2.9.Final-redhat-00003 - quarkus_platform_version: 3.2.9.Final-redhat-00004 + kogito_sw_ga: kogito-quarkus-serverless-workflow + data_index_ref: Data Index + workflow_instance: workflow instance + workflow_instances: workflow instances + operator_openshift_keyword: 'Serverless Logic' + operator_openshift_catalog: logic-rhel8-operator + operator_k8s_keyword: sonataflow + operator_k8s_subscription: my-sonataflow-operator + kogito_devservices_imagename: registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8 + sonataflow_devmode_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8 + sonataflow_builder_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8 + sonataflow_devmode_devui_url: /q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/ + serverless_logic_web_tools_name: Serverless Logic Web Tools + serverless_workflow_vscode_extension_name: Openshift Serverless Logic Workflow Editor + kie_kogito_examples_repo_name: kogito-examples + + # Jobs service image and links + jobs_service_image_ephemeral: registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8 + jobs_service_image_ephemeral_name: logic-jobs-service-ephemeral-rhel8 + jobs_service_image_ephemeral_url: https:registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8 + jobs_service_image_postgresql_name: logic-jobs-service-postgresql-rhel8 + jobs_service_image_postgresql: registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8 + jobs_service_image_postgresql_url: https://registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8 + jobs_service_image_usage_url: https://github.com/kiegroup/kogito-images/tree/9.100.x-prod#jobs-services-all-in-one + + # + # Versions + # + quarkus_version: 3.8.4.redhat-00002 + quarkus_platform_version: 3.8.4.redhat-00002 java_min_version: 17+ - maven_min_version: 3.9.6 + maven_min_version: 3.9.3 graalvm_min_version: 22.3.0 spec_version: 0.8 vscode_version: 1.84.0 - kn_cli_version: v1.32.1 - openshift_version_min: 4.12 + kn_cli_version: 1.33.0 docker_min_version: 20.10.7 docker_compose_min_version: 1.27.2 - # Tag to use with community-only images - sonataflow_non_productized_image_tag: main-2024-02-09 - operator_version: 1.32.0 - operator_openshift_keyword: 'Serverless Logic' - operator_openshift_catalog: logic-rhel8-operator - operator_installation_namespace: sonataflow-operator-system - operator_community_prod_yaml: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.99.x-prod/operator.yaml - operator_community_prod_root: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.99.x-prod - operator_k8s_keyword: sonataflow - operator_k8s_subscription: my-sonataflow-operator - kogito_devservices_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-data-index-ephemeral-rhel8 - sonataflow_builder_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-swf-builder-rhel8 - sonataflow_devmode_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-swf-devmode-rhel8 - sonataflow_operator_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-rhel8-operator - kogito_examples_repository_url: 'https://github.com/kiegroup/kogito-examples/tree/9.99.x-prod' - kogito_sw_operator_examples_url: https://github.com/kiegroup/kogito-examples/tree/9.99.x-prod/serverless-operator-examples - kogito_sw_examples_url: 
https://github.com/kiegroup/kogito-examples/tree/9.99.x-prod/serverless-workflow-examples - kogito_examples_url: 'https://github.com/kiegroup/kogito-examples.git' - kogito_apps_url: https://github.com/kiegroup/kogito-apps/tree/9.99.x-prod - quarkus_cli_url: 'https://quarkus.io/guides/cli-tooling' - spec_website_url: 'https://serverlessworkflow.io/' - spec_doc_url: >- - https://github.com/serverlessworkflow/specification/blob/0.8.x/specification.md - cloud_events_url: 'https://cloudevents.io/' - cloud_events_sdk_url: 'https://github.com/cloudevents/sdk-java' + kubernetes_version: 1.26 + openshift_version_min: 4.12 + openshift_version_max: 4.15 + knative_version: 1.13 + knative_serving_version: 1.13 + knative_eventing_version: 1.13 + kogito_version: 9.100.0.redhat-00004 + # only used in downstream + operator_version: 1.33.0 + + # Persistence extensions for the kogito-swf-builder + groupId_quarkus-agroal: io.quarkus + artifactId_quarkus-agroal: quarkus-agroal + + groupId_quarkus-jdbc-postgresql: io.quarkus + artifactId_quarkus-jdbc-postgresql: quarkus-jdbc-postgresql + + groupId_kie-addons-quarkus-persistence-jdbc: org.kie + artifactId_kie-addons-quarkus-persistence-jdbc: kie-addons-quarkus-persistence-jdbc + + # + # URLs + # + kogito_examples_repository_url: https://github.com/kiegroup/kogito-examples/tree/9.100.x-prod + kogito_sw_examples_url: https://github.com/kiegroup/kogito-examples/tree/9.100.x-prod/serverless-workflow-examples + kogito_sw_operator_examples_url: https://github.com/kiegroup/kogito-examples/tree/9.100.x-prod/serverless-operator-examples + kogito_examples_url: https://github.com/kiegroup/kogito-examples.git + kogito_apps_url: https://github.com/kiegroup/kogito-apps/tree/9.100.x-prod + kogito_runtimes_url: https://github.com/kiegroup/kogito-runtimes/tree/9.100.x-prod + kogito_runtimes_swf_url: https://github.com/kiegroup/kogito-runtimes/tree/9.100.x-prod/kogito-serverless-workflow/ + kogito_runtimes_swf_test_url: https://github.com/kiegroup/kogito-runtimes/tree/9.100.x-prod/kogito-serverless-workflow/kogito-serverless-workflow-executor-tests/src/test/java/org/kie/kogito/serverless/workflow/executor + quarkus_cli_url: https://quarkus.io/guides/cli-tooling + spec_website_url: https://serverlessworkflow.io/ + spec_doc_url: https://github.com/serverlessworkflow/specification/blob/0.8.x/specification.md + cloud_events_url: https://cloudevents.io/ + cloud_events_sdk_url: https://github.com/cloudevents/sdk-java cloud_events_git_url: https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents - open_api_spec_url: 'https://spec.openapis.org/oas/v3.1.0.html' + open_api_spec_url: https://spec.openapis.org/oas/v3.1.0.html open_api_swagger_spec_url: https://swagger.io/docs/specification - quarkus_openapi_gen_url: 'https://github.com/quarkiverse/quarkus-openapi-generator' - kie_tools_releases_page_url: 'https://github.com/kiegroup/kie-tools/releases' + quarkus_openapi_gen_url: https://github.com/quarkiverse/quarkus-openapi-generator + kie_tools_releases_page_url: https://github.com/apache/incubator-kie-tools/releases quarkus_guides_base_url: https://quarkus.io/guides quarkus_guides_kafka_url: https://quarkus.io/guides/kafka + quarkus_guides_building_native: https://quarkus.io/guides/building-native-image quarkus_guides_config_reference_url: https://quarkus.io/guides/config-reference + quarkus_guides_container_image_url: https://quarkus.io/guides/container-image + quarkus_guides_dev_services: https://quarkus.io/guides/getting-started-dev-services 
quarkus_guides_infinispan_client_reference_url: https://quarkus.io/guides/infinispan-client-reference + quarkus_guides_logging_url: https://quarkus.io/guides/logging quarkus_guides_profiles_url: https://quarkus.io/guides/config-reference#profiles + quarkus_guides_swaggerui_url: https://quarkus.io/guides/openapi-swaggerui quarkus_url: https://quarkus.io/ dev_services_url: https://quarkus.io/guides/dev-services test_containers_url: https://www.testcontainers.org/ - smallrye_messaging_url: >- - https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.3 - quarkus_config_url: 'https://quarkus.io/guides/config' + smallrye_messaging_url: https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/3.3 + quarkus_config_url: https://quarkus.io/guides/config quarkus_swagger_url: https://quarkus.io/guides/openapi-swaggerui - java_install: 'https://www.java.com/en/download/help/download_options.html' - maven_install: 'https://maven.apache.org/install.html' - docker_install: 'https://docs.docker.com/engine/install/' - podman_install: 'https://docs.podman.io/en/latest/' - kubectl_install: 'https://kubernetes.io/docs/tasks/tools/install-kubectl' - java_install_url: 'https://www.java.com/en/download/help/download_options.html' - maven_install_url: 'https://maven.apache.org/install.html' - docker_install_url: 'https://docs.docker.com/engine/install/' + java_install_url: https://www.java.com/en/download/help/download_options.html + openjdk_install_url: https://openjdk.org/install/ + maven_install_url: https://maven.apache.org/install.html + docker_install_url: https://docs.docker.com/engine/install/ + podman_install_url: https://docs.podman.io/en/latest/ + kubectl_install_url: https://kubernetes.io/docs/tasks/tools/install-kubectl docker_compose_install_url: https://docs.docker.com/compose/install/ - podman_install_url: 'https://docs.podman.io/en/latest/' - kubectl_install_url: 'https://kubernetes.io/docs/tasks/tools/install-kubectl' - kn_cli_install_url: 'https://github.com/knative/client/blob/main/docs/README.md#installing-kn' - kafka_doc_url: 'https://kafka.apache.org/documentation/' + kn_cli_install_url: https://knative.dev/docs/client/install-kn/ + knative_eventing_url: https://knative.dev/docs/eventing/ + knative_eventing_broker_url: https://knative.dev/docs/eventing/brokers/ + knative_eventing_kafka_broker_url: https://knative.dev/docs/eventing/brokers/broker-types/kafka-broker/ + knative_eventing_trigger_url: https://knative.dev/docs/eventing/triggers/ + knative_eventing_sink_binding_url: https://knative.dev/docs/eventing/sinks/#sink-parameter-example + knative_quickstart_url: https://knative.dev/docs/install/quickstart-install/#install-the-knative-cli/ + knative_serving_install_yaml_url: https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/ + knative_eventing_install_yaml_url: https://knative.dev/docs/install/yaml-install/eventing/install-eventing-with-yaml/ + kafka_doc_url: https://kafka.apache.org/documentation/ node_install_url: https://nodejs.org/en/download/package-manager/ pnpm_install_url: https://pnpm.io/installation golang_install_url: https://go.dev/doc/install serverless_logic_web_tools_url: https://start.kubesmarts.org/ - serverless_logic_web_tools_name: Serverless Logic Web Tools + swf_executor_core_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-core + swf_fluent_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-fluent + 
swf_executor_rest_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-rest + swf_executor_python_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-python + swf_executor_grpc_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-grpc + swf_executor_events_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-kafka + swf_executor_service_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-service + swf_executor_openapi_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-openapi-parser + rocksdb_addon_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-addons-persistence-rocksdb + rocksdb_url: https://rocksdb.org/ github_tokens_url: https://github.com/settings/tokens openshift_developer_sandbox_url: https://developers.redhat.com/developer-sandbox openshift_application_data_services_service_account_url: https://console.redhat.com/application-services/service-accounts openshift_application_data_services_service_registry_url: https://console.redhat.com/application-services/service-registry openshift_application_data_services_apache_kafka_url: https://console.redhat.com/application-services/streams/kafkas camel_url: https://camel.apache.org/ + visual_studio_code_url: https://code.visualstudio.com/ + visual_studio_code_swf_extension_url: https://marketplace.visualstudio.com/items?itemName=kie-group.swf-vscode-extension + k9s_url: https://k9scli.io/ + graalvm_url: https://www.graalvm.org/ + graalvm_native_image_url: https://www.graalvm.org/22.0/reference-manual/native-image/ + slf4j_simple_maven_repo_url: https://mvnrepository.com/artifact/org.slf4j/slf4j-simple # must align this version camel_extensions_url: https://camel.apache.org/camel-quarkus/3.2.x/reference/extensions kaoto_url: https://marketplace.visualstudio.com/items?itemName=redhat.vscode-kaoto minikube_url: https://minikube.sigs.k8s.io - kogito_serverless_operator_url: https://github.com/kiegroup/kogito-serverless-operator/tree/9.99.x-prod - docs_issues_url: https://github.com/apache/incubator-kie-kogito-docs/issues/new - # xreferences to documents within the serverless-logic documentation + minikube_start_url: https://minikube.sigs.k8s.io/docs/start/ + kind_install_url: https://kind.sigs.k8s.io/docs/user/quick-start/#installation + kogito_serverless_operator_url: https://github.com/kiegroup/kogito-serverless-operator/ + docs_issues_url: https://github.com/kiegroup/kogito-docs/issues/new + ocp_local_url: https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.17 + ocp_knative_serving_install_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-knative-serving.html + ocp_knative_eventing_install_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-knative-eventing.html + ocp_kn_cli_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-kn.html + k8n_secrets_url: https://kubernetes.io/docs/concepts/configuration/secret + + # + # xreferences to documents within the documentation + # data_index_xref: xref:data-index/data-index-core-concepts.adoc job_service_xref: xref:job-services/core-concepts.adoc - # string unication references withing serverless logic documentation - data_index_ref: Data Index - workflow_instance: 
workflow instance - workflow_instances: workflow instances - sonataflow_devmode_devui_url: /q/dev/org.kie.kogito.kogito-quarkus-serverless-workflow-devui/ + + # Tag to use with community-only images + sonataflow_non_productized_image_tag: NO_TAG + operator_community_prod_yaml: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod/operator.yaml + operator_community_prod_root: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod + sonataflow_operator_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-rhel8-operator + java_install: 'https://www.java.com/en/download/help/download_options.html' + maven_install: 'https://maven.apache.org/install.html' + docker_install: 'https://docs.docker.com/engine/install/' + podman_install: 'https://docs.podman.io/en/latest/' + kubectl_install: 'https://kubernetes.io/docs/tasks/tools/install-kubectl' + # OCP KN urls ocp_knative_serving_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-knative-serving.html ocp_knative_eventing_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-knative-eventing.html - ocp_kn_cli_url: https://docs.openshift.com/container-platform/4.12/serverless/install/installing-kn.html \ No newline at end of file From 1d55662d296c553db8fb782d0647e31177361114 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 19:50:10 +0200 Subject: [PATCH 16/38] SRVLOGIC-261: Synx antora.yml and fix quarkus sg --- antora.yml | 36 +++++++++---------- .../create-your-first-workflow-service.adoc | 2 +- 2 files changed, 19 insertions(+), 19 deletions(-) diff --git a/antora.yml b/antora.yml index 9696e54a..d1749d05 100644 --- a/antora.yml +++ b/antora.yml @@ -128,28 +128,28 @@ asciidoc: kubectl_install_url: https://kubernetes.io/docs/tasks/tools/install-kubectl docker_compose_install_url: https://docs.docker.com/compose/install/ kn_cli_install_url: https://knative.dev/docs/client/install-kn/ - knative_eventing_url: https://knative.dev/docs/eventing/ - knative_eventing_broker_url: https://knative.dev/docs/eventing/brokers/ - knative_eventing_kafka_broker_url: https://knative.dev/docs/eventing/brokers/broker-types/kafka-broker/ - knative_eventing_trigger_url: https://knative.dev/docs/eventing/triggers/ - knative_eventing_sink_binding_url: https://knative.dev/docs/eventing/sinks/#sink-parameter-example - knative_quickstart_url: https://knative.dev/docs/install/quickstart-install/#install-the-knative-cli/ - knative_serving_install_yaml_url: https://knative.dev/docs/install/yaml-install/serving/install-serving-with-yaml/ - knative_eventing_install_yaml_url: https://knative.dev/docs/install/yaml-install/eventing/install-eventing-with-yaml/ + knative_eventing_url: https://docs.openshift.com/serverless/1.33/eventing/knative-eventing.html + knative_eventing_broker_url: https://docs.openshift.com/serverless/1.33/eventing/brokers/serverless-brokers.html + knative_eventing_kafka_broker_url: https://docs.openshift.com/serverless/1.33/eventing/brokers/serverless-broker-types.html + knative_eventing_trigger_url: https://docs.openshift.com/serverless/1.33/eventing/triggers/serverless-triggers.html + knative_eventing_sink_binding_url: https://docs.openshift.com/serverless/1.33/eventing/event-sinks/serverless-event-sinks.html + knative_quickstart_url: https://docs.openshift.com/serverless/1.33/install/installing-kn.html + knative_serving_install_yaml_url: 
https://docs.openshift.com/serverless/1.33/install/installing-knative-serving.html + knative_eventing_install_yaml_url: https://docs.openshift.com/serverless/1.33/install/installing-knative-eventing.html kafka_doc_url: https://kafka.apache.org/documentation/ node_install_url: https://nodejs.org/en/download/package-manager/ pnpm_install_url: https://pnpm.io/installation golang_install_url: https://go.dev/doc/install serverless_logic_web_tools_url: https://start.kubesmarts.org/ - swf_executor_core_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-core - swf_fluent_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-fluent - swf_executor_rest_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-rest - swf_executor_python_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-python - swf_executor_grpc_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-grpc - swf_executor_events_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-kafka - swf_executor_service_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-executor-service - swf_executor_openapi_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-serverless-workflow-openapi-parser - rocksdb_addon_maven_repo_url: https://mvnrepository.com/artifact/org.kie.kogito/kogito-addons-persistence-rocksdb + swf_executor_core_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-core + swf_fluent_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-fluent + swf_executor_rest_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-rest + swf_executor_python_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-python + swf_executor_grpc_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-grpc + swf_executor_events_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-kafka + swf_executor_service_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-executor-service + swf_executor_openapi_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-serverless-workflow-openapi-parser + rocksdb_addon_maven_repo_url: https://maven.repository.redhat.com/ga/org/kie/kogito/kogito-addons-persistence-rocksdb rocksdb_url: https://rocksdb.org/ github_tokens_url: https://github.com/settings/tokens openshift_developer_sandbox_url: https://developers.redhat.com/developer-sandbox @@ -187,7 +187,7 @@ asciidoc: sonataflow_non_productized_image_tag: NO_TAG operator_community_prod_yaml: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod/operator.yaml operator_community_prod_root: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod - sonataflow_operator_imagename: registry.redhat.io/openshift-serverless-1-tech-preview/logic-rhel8-operator + sonataflow_operator_imagename: registry.redhat.io/openshift-serverless-1/logic-rhel8-operator java_install: 'https://www.java.com/en/download/help/download_options.html' 
maven_install: 'https://maven.apache.org/install.html' docker_install: 'https://docs.docker.com/engine/install/' diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc index c3b0a6f9..2ef43e3e 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc @@ -23,7 +23,7 @@ image::getting-started/hello-world-workflow.png[] For more information about the tooling and the required dependencies, see xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar with {product_name} tooling]. ifeval::["{kogito_version_redhat}" != ""] -include::../../pages/_common-content/downstream-project-setup-instructions.adoc[] +include::../../../../pages/_common-content/downstream-project-setup-instructions.adoc[] endif::[] [[proc-boostrapping-the-project]] From 3d09fead2f7ae89cc2baddc896977810ff9d8067 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 19:56:04 +0200 Subject: [PATCH 17/38] SRVLOGIC-261: Fix RH mvn repo setup, typos --- .../downstream-post-create-project.adoc | 2 +- .../downstream-project-setup-instructions.adoc | 10 +++++----- .../create-your-first-workflow-service.adoc | 2 +- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/modules/serverless-logic/pages/_common-content/downstream-post-create-project.adoc b/modules/serverless-logic/pages/_common-content/downstream-post-create-project.adoc index d0c42b8f..c8ba40f0 100644 --- a/modules/serverless-logic/pages/_common-content/downstream-post-create-project.adoc +++ b/modules/serverless-logic/pages/_common-content/downstream-post-create-project.adoc @@ -36,7 +36,7 @@ io.quarkus.platform quarkus-kogito-bom - {quarkus_platform_version}/version> + {quarkus_platform_version} pom import diff --git a/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc b/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc index e382d1f2..070219d1 100644 --- a/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc +++ b/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc @@ -19,14 +19,14 @@ To use the Red Hat Build of Quarkus (RHBQ) libraries, you need to configure your ---- - red-hat-earlyaccess-maven-repository + red-hat-ga-maven-repository true - red-hat-earlyaccess-maven-repository - https://maven.repository.redhat.com/earlyaccess/all/ + red-hat-ga-maven-repository + https://maven.repository.redhat.com/ga/all/ true @@ -37,8 +37,8 @@ To use the Red Hat Build of Quarkus (RHBQ) libraries, you need to configure your - red-hat-earlyaccess-maven-repository - https://maven.repository.redhat.com/earlyaccess/all/ + red-hat-ga-maven-repository + https://maven.repository.redhat.com/ga/all/ true diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc index 2ef43e3e..a95bc2a5 100644 --- 
a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc @@ -172,7 +172,7 @@ xref:core/cncf-serverless-workflow-specification-support.adoc[CNCF Serverless Wo == Building your workflow application ifeval::["{kogito_version_redhat}" != ""] -include::../../pages/_common-content/downstream-post-create-project.adoc[] +include::../../../../pages/_common-content/downstream-post-create-project.adoc[] endif::[] . To verify that project is created, compile the project using the following command: From bbf8eecd93c25643b878e74b75ccd884d0ad53ac Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 20:07:00 +0200 Subject: [PATCH 18/38] SRVLOGIC-261: Fix typo in maven GA repo --- .../downstream-project-setup-instructions.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc b/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc index 070219d1..fbd0eeb9 100644 --- a/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc +++ b/modules/serverless-logic/pages/_common-content/downstream-project-setup-instructions.adoc @@ -26,7 +26,7 @@ To use the Red Hat Build of Quarkus (RHBQ) libraries, you need to configure your red-hat-ga-maven-repository - https://maven.repository.redhat.com/ga/all/ + https://maven.repository.redhat.com/ga true @@ -38,7 +38,7 @@ To use the Red Hat Build of Quarkus (RHBQ) libraries, you need to configure your red-hat-ga-maven-repository - https://maven.repository.redhat.com/ga/all/ + https://maven.repository.redhat.com/ga true From a5d841063f7aad22904ed9c3596478e79bb54086 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 20:30:02 +0200 Subject: [PATCH 19/38] SRVLOGIC-261: Fix outdated kn cli installation --- antora.yml | 1 + .../kn-plugin-workflow-overview.adoc | 59 +++++++++++++++---- 2 files changed, 49 insertions(+), 11 deletions(-) diff --git a/antora.yml b/antora.yml index d1749d05..3874321a 100644 --- a/antora.yml +++ b/antora.yml @@ -33,6 +33,7 @@ asciidoc: operator_openshift_catalog: logic-rhel8-operator operator_k8s_keyword: sonataflow operator_k8s_subscription: my-sonataflow-operator + osl_kn_cli_imagename: registry.redhat.io/openshift-serverless-1/kn-workflow-cli-artifacts-rhel8 kogito_devservices_imagename: registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8 sonataflow_devmode_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8 sonataflow_builder_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8 diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 6ae6fe2e..181d2956 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -12,7 +12,7 @@ This document describes how you can install and use the `kn-plugin-workflow` plug-in in {product_name}. 
[[proc-install-sw-plugin-kn-cli]] -== Installing the {product_name} plug-in for Knative CLI +== Installing the {product_name} Workflow CLI You can use the {product_name} plug-in to set up your local workflow project quickly using Knative CLI. @@ -22,30 +22,50 @@ You can use the {product_name} plug-in to set up your local workflow project qui * (Optional) link:{docker_install_url}[Docker] is installed. * (Optional) link:{podman_install_url}[Podman] is installed. * link:{kubectl_install_url}[Kubernetes CLI] is installed. -* link:{kn_cli_install_url}[Knative CLI] is installed. .Procedure -. Download the latest binary file from the link:{kie_tools_releases_page_url}[KIE Tooling Releases] page. -. Install the `kn workflow` command as a plug-in of the Knative CLI using the following steps: +. Create a variable which references the KN workflow plugin container image: + --- -.. Copy the `kn-workflow` binary file to a directory in your `PATH`, such as `/usr/local/bin` and ensure that the file name is `kn-workflow`. -.. Make the binary file executable as follows: +`export IMAGE_SWF_KN_TAG={osl_kn_cli_imagename}:{operator_version}` ++ +. Pull the image using Docker or Podman ++ +`podman pull $IMAGE_SWF_KN` ++ +. Start the container image and get the ID of running container: ++ +`export KN_CONTAINER_ID=$(docker run -di $IMAGE_SWF_KN)` ++ +. Copy the binary of KN workflow plugin ++ +docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_amd64/kn-workflow-linux-amd64.tar.gz kn-workflow-linux-amd64.tar.gz ++ +. Stop the container: ++ +docker stop $KN_CONTAINER_ID + -`chmod +x /usr/local/bin/kn-workflow` +. Unzip the archive: + +tar -xf kn-workflow-linux-amd64.tar.gz ++ +. Rename the archive to `kn-workflow`: ++ +mv kn-workflow-linux-amd64.tar.gz kn-workflow ++ +. Copy the `kn-workflow` binary file to a directory in your `PATH`, such as `/usr/local/bin` + [WARNING] ==== On Mac, some systems might block the application to run due to Apple enforcing policies. To fix this problem, check the *Security & Privacy* section in the *System Preferences* -> *General* tab to approve the application to run. For more information, see link:{apple_support_url}[Apple support article: Open a Mac app from an unidentified developer]. ==== -.. Run the following command to verify that `kn-workflow` plug-in is installed successfully: +. Run the following command to verify that `kn-workflow` plug-in is installed successfully: + -`kn plugin list` +`kn workflow help` After installing the plug-in, you can use `kn workflow` to run the related subcommands. -- -. Use the `workflow` subcommand in Knative CLI as follows: +. Use the `kn workflow` as follows: + -- .Aliases to use workflow subcommand @@ -60,6 +80,23 @@ kn-workflow ---- Manage SonataFlow projects +Currently, SonataFlow targets use cases with a single Serverless Workflow main +file definition (i.e. workflow.sw.{json|yaml|yml}). + +Additionally, you can define the configurable parameters of your application in the +"application.properties" file (inside the root project directory). +You can also store your spec files (i.e., Open API files) inside the "specs" folder, + schemas file inside "schemas" folder and also subflows inside "subflows" folder. 
+ +A SonataFlow project, as the following structure by default: + +Workflow project root + /specs (optional) + /schemas (optional) + /subflows (optional) + workflow.sw.{json|yaml|yml} (mandatory) + + Usage: kn workflow [command] From 658b3d6e65b118d5686e1421c745c6ec710b29ae Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 20:34:19 +0200 Subject: [PATCH 20/38] SRVLOGIC-261: Fix CLI install order --- .../kn-plugin-workflow-overview.adoc | 21 +++++++++---------- 1 file changed, 10 insertions(+), 11 deletions(-) diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 181d2956..87172a78 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -38,33 +38,27 @@ You can use the {product_name} plug-in to set up your local workflow project qui + . Copy the binary of KN workflow plugin + -docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_amd64/kn-workflow-linux-amd64.tar.gz kn-workflow-linux-amd64.tar.gz +`docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_amd64/kn-workflow-linux-amd64.tar.gz kn-workflow-linux-amd64.tar.gz` + . Stop the container: + -docker stop $KN_CONTAINER_ID +`docker stop $KN_CONTAINER_ID` + . Unzip the archive: + -tar -xf kn-workflow-linux-amd64.tar.gz +`tar -xf kn-workflow-linux-amd64.tar.gz` + . Rename the archive to `kn-workflow`: + -mv kn-workflow-linux-amd64.tar.gz kn-workflow +`mv kn-workflow-linux-amd64.tar.gz kn-workflow` + . Copy the `kn-workflow` binary file to a directory in your `PATH`, such as `/usr/local/bin` - -[WARNING] -==== -On Mac, some systems might block the application to run due to Apple enforcing policies. To fix this problem, check the *Security & Privacy* section in the *System Preferences* -> *General* tab to approve the application to run. For more information, see link:{apple_support_url}[Apple support article: Open a Mac app from an unidentified developer]. -==== . Run the following command to verify that `kn-workflow` plug-in is installed successfully: + `kn workflow help` -After installing the plug-in, you can use `kn workflow` to run the related subcommands. +. After installing the plug-in, you can use `kn workflow` to run the related subcommands. -- - . Use the `kn workflow` as follows: + -- @@ -75,6 +69,11 @@ kn workflow kn-workflow ---- +[WARNING] +==== +On Mac, some systems might block the application to run due to Apple enforcing policies. To fix this problem, check the *Security & Privacy* section in the *System Preferences* -> *General* tab to approve the application to run. For more information, see link:{apple_support_url}[Apple support article: Open a Mac app from an unidentified developer]. 
+==== + .Example output [source,text] ---- From ac157192cfd7fbf2598d4e651daa51f873ecfa2c Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Thu, 11 Jul 2024 20:38:45 +0200 Subject: [PATCH 21/38] SRVLOGIC-261: Fix cli order try 2 --- .../kn-plugin-workflow-overview.adoc | 28 +++++++++---------- 1 file changed, 13 insertions(+), 15 deletions(-) diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 87172a78..29b9a4ae 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -56,12 +56,9 @@ You can use the {product_name} plug-in to set up your local workflow project qui . Run the following command to verify that `kn-workflow` plug-in is installed successfully: + `kn workflow help` - -. After installing the plug-in, you can use `kn workflow` to run the related subcommands. --- -. Use the `kn workflow` as follows: + --- +. After installing the plug-in, you can use `kn workflow` to run the related subcommands as follows: + .Aliases to use workflow subcommand [source,shell] ---- @@ -103,18 +100,19 @@ Aliases: kn workflow, kn-workflow Available Commands: - completion Generate the autocompletion script for the specified shell - create Creates a new SonataFlow project - deploy Deploy a SonataFlow project on Kubernetes via SonataFlow Operator - help Help about any command - quarkus Manage SonataFlow projects built in Quarkus - run Run a SonataFlow project in development mode - undeploy Undeploy a SonataFlow project on Kubernetes via SonataFlow Operator - version Show the version + completion Generate the autocompletion script for the specified shell + create Creates a new SonataFlow project + deploy Deploy a SonataFlow project on Kubernetes via SonataFlow Operator + gen-manifest GenerateOperator manifests + help Help about any command + quarkus Manage SonataFlow projects built in Quarkus + run Run a SonataFlow project in development mode + undeploy Undeploy a SonataFlow project on Kubernetes via SonataFlow Operator + version Show the version Flags: - -h, --help help for kn - -v, --version version for kn + -h, --help help for kn workflow + -v, --version version for kn workflow Use "kn [command] --help" for more information about a command. 
---- From f83c85e483e1cadd5b0a4e028c49c3202ff86c02 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 08:23:07 +0200 Subject: [PATCH 22/38] SRVLOGIC-261: Include KN workflow installation from image --- antora.yml | 2 +- .../operator/install-kn-workflow-cli.adoc | 87 +++++++++++++++++++ 2 files changed, 88 insertions(+), 1 deletion(-) create mode 100644 modules/serverless-logic/pages/cloud/operator/install-kn-workflow-cli.adoc diff --git a/antora.yml b/antora.yml index 3874321a..bf8f7335 100644 --- a/antora.yml +++ b/antora.yml @@ -128,7 +128,7 @@ asciidoc: podman_install_url: https://docs.podman.io/en/latest/ kubectl_install_url: https://kubernetes.io/docs/tasks/tools/install-kubectl docker_compose_install_url: https://docs.docker.com/compose/install/ - kn_cli_install_url: https://knative.dev/docs/client/install-kn/ + kn_cli_install_url: https://docs.openshift.com/serverless/1.33/install/installing-kn.html knative_eventing_url: https://docs.openshift.com/serverless/1.33/eventing/knative-eventing.html knative_eventing_broker_url: https://docs.openshift.com/serverless/1.33/eventing/brokers/serverless-brokers.html knative_eventing_kafka_broker_url: https://docs.openshift.com/serverless/1.33/eventing/brokers/serverless-broker-types.html diff --git a/modules/serverless-logic/pages/cloud/operator/install-kn-workflow-cli.adoc b/modules/serverless-logic/pages/cloud/operator/install-kn-workflow-cli.adoc new file mode 100644 index 00000000..4308a798 --- /dev/null +++ b/modules/serverless-logic/pages/cloud/operator/install-kn-workflow-cli.adoc @@ -0,0 +1,87 @@ += Installing the Knative Workflow Plugin +:compat-mode!: +// Metadata: +:description: Install the operator on Kubernetes clusters +:keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes, minikube, openshift, containers +// links + +*Prerequisites* + +* You have first installed the link:{kn_cli_install_url}[Knative CLI]. 
+ +== Installing the Knative Workflow Plugin using the artifacts image + +To install the Knative Workflow Plugin using the artifacts image you must follow this procedure: + +*Start the `kn-workflow-cli-artifacts-rhel8` image* + +[source, shell] +---- +export KN_IMAGE=registry.redhat.io/openshift-serverless-1/logic-kn-workflow-cli-artifacts-rhel8:1.33.0 + +export KN_CONTAINER_ID=$(docker run -di $KN_IMAGE) +---- + +*Copy the Knative Workflow Plugin binary according to your environment* + +.Binaries copy for `Linux` amd64 / arm64 architectures +[source, shell] +---- +docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_amd64/kn-workflow-linux-amd64.tar.gz kn-workflow-linux-amd64.tar.gz + +docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_arm64/kn-workflow-linux-arm64.tar.gz kn-workflow-linux-arm64.tar.gz +---- + +.Binaries copy for `macOS` amd64 / arm64 architectures +[source, shell] +---- +docker cp $KN_CONTAINER_ID:/usr/share/kn/macos_amd64/kn-workflow-macos-amd64.tar.gz kn-workflow-macos-amd64.tar.gz + +docker cp $KN_CONTAINER_ID:/usr/share/kn/macos_arm64/kn-workflow-macos-arm64.tar.gz kn-workflow-macos-arm64.tar.gz +---- + +.Binaries copy for `Windows` amd64 architecture +[source, shell] +---- +docker cp $KN_CONTAINER_ID:/usr/share/kn/windows/kn-workflow-windows-amd64.zip kn-workflow-windows-amd64.zip +---- + +*Stop the Container* + +[source, shell] +---- +docker stop $KN_CONTAINER_ID + +docker rm $KN_CONTAINER_ID +---- + +*Extract the selected Knative Workflow Plugin binary* + +.Extract the binary example +[source,shell] +---- +tar xvzf kn-workflow-linux-amd64.tar.gz +---- + +In the ``, you'll find the `kn` executable that you must rename to `kn-workflow` + +[source,shell] +---- +mv /kn /kn-workflow +---- + +[IMPORTANT] +==== +Make sure that `` is included in your system PATH. +==== + +To verify that the installation was successful, you can execute the following command: +[source,shell] +---- +kn workflow version +---- +output: +[source,shell] +---- +1.33.0 +---- From 017ef2b1eea7c94cbda039a73bcc77883c5f9862 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 08:57:12 +0200 Subject: [PATCH 23/38] SRVLOGIC-261: Fix kn wf plugin, add release notes, remove TP note --- modules/serverless-logic/pages/about.adoc | 8 ---- .../serverless-logic/pages/release-notes.adoc | 36 ++++++++-------- .../kn-plugin-workflow-overview.adoc | 42 +------------------ 3 files changed, 18 insertions(+), 68 deletions(-) diff --git a/modules/serverless-logic/pages/about.adoc b/modules/serverless-logic/pages/about.adoc index 5ec34d4b..b95d625a 100644 --- a/modules/serverless-logic/pages/about.adoc +++ b/modules/serverless-logic/pages/about.adoc @@ -1,13 +1,5 @@ = About OpenShift Serverless Logic -[IMPORTANT] -==== -{serverlessproductname} Logic is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. - -For more information about the support scope of Red Hat Technology Preview -features, see https://access.redhat.com/support/offerings/techpreview/. -==== - {serverlessproductname} Logic enables developers or architects to define declarative workflow models that orchestrate event-driven, serverless applications. 
Serverless Logic implements the link:https://github.com/serverlessworkflow/specification[CNCF Serverless Workflow specification], allowing developers and architects to define logical steps of execution declaratively (no code) for cloud-native services. The specification is hosted by the link:https://www.cncf.io/[Cloud Native Computing Foundation (CNCF)] and is currently a link:https://www.cncf.io/projects/serverless-workflow/[CNCF Sandbox project]. {serverlessproductname} Logic is also designed to write workflows in formats (JSON or YAML) that might be better suited for developing and deploying serverless applications in cloud or container environments. diff --git a/modules/serverless-logic/pages/release-notes.adoc b/modules/serverless-logic/pages/release-notes.adoc index 7924d16f..0d4ec5a0 100644 --- a/modules/serverless-logic/pages/release-notes.adoc +++ b/modules/serverless-logic/pages/release-notes.adoc @@ -2,30 +2,28 @@ :compat-mode!: == Known issues +* link:https://issues.redhat.com/browse/SRVLOGIC-333[SRVLOGIC-333] - Afer update of workflow a new build is not trigger when the previous failed +* link:https://issues.redhat.com/browse/SRVLOGIC-324[SRVLOGIC-324] - Live reload does not work for KN CLI plugin +* link:https://issues.redhat.com/browse/SRVLOGIC-327[SRVLOGIC-327] - Warnings in jobs service ephemeral pod logs +* link:https://issues.redhat.com/browse/SRVLOGIC-326[SRVLOGIC-326] - Warnings in data index ephemeral pod logs +* link:https://issues.redhat.com/browse/SRVLOGIC-320[SRVLOGIC-320] - Natively build examples are not able to respond to requests +* link:https://issues.redhat.com/browse/SRVLOGIC-250[SRVLOGIC-250] - Different UI results in DevUI with dev and preview scenarios +* link:https://issues.redhat.com/browse/SRVLOGIC-220[SRVLOGIC-220] - Monitoring tab (workflow list): Duration negative and timezone incorrect +* link:https://issues.redhat.com/browse/SRVLOGIC-334[SRVLOGIC-334] - Missing serverless-workflow-examples-parent:pom when building productized example -* link:https://issues.redhat.com/browse/SRVLOGIC-185[SRVLOGIC-185] - Serverless logic operator builder is not able to find builder config ConfigMap. -** Workaround - In the operator's namespace create a copy of 'logic-operator-rhel8-builder-config' configMap with name 'sonataflow-operator-builder-config`. 
-* link:https://issues.redhat.com/browse/SRVLOGIC-270[SRVLOGIC-270] - Servlerless Logic Operator is picking the wrong builder image -** Workaround - Configure the builder to pick this image instead (which is really the latest): `registry.redhat.io/openshift-serverless-1-tech-preview/logic-swf-builder-rhel8:1.32.0-5.` -* link:https://issues.redhat.com/browse/SRVLOGIC-220[SRVLOGIC-220] - Monitoring tab or workflow list: Duration negative and timezone incorrect -* link:https://issues.redhat.com/browse/SRVLOGIC-244[SRVLOGIC-244] - Example serverless-workflow-loanbroker-example service discovery misconfiguration -* link:https://issues.redhat.com/browse/SRVLOGIC-250[SRVLOGIC-250] - DevUI Different UI results with dev and prod scenarios == Notable changes -* link:https://issues.redhat.com/browse/SRVLOGIC-179[SRVLOGIC-179] - Provide the option to specify workflow properties at several levels -* link:https://issues.redhat.com/browse/SRVLOGIC-196[SRVLOGIC-196] - Rollout operator's deployment when custom configuration changes +* link:https://issues.redhat.com/browse/SRVLOGIC-246[SRVLOGIC-246] - Improvements on the Job Service start-up and periodic jobs loading procedure +* link:https://issues.redhat.com/browse/SRVLOGIC-232[SRVLOGIC-232] - Productize Data Index PostgreSQL and Jobs Service images +* link:https://issues.redhat.com/browse/SRVLOGIC-232[SRVLOGIC-249] - SonataFlow Operator: Knative Eventing Integration M1 +* link:https://issues.redhat.com/browse/SRVLOGIC-252[SRVLOGIC-252] - Security: Authentication and Authorization Support +* link:https://issues.redhat.com/browse/SRVLOGIC-276[SRVLOGIC-276] - [operator] Make the workflow properties available when the associated image is generated +* link:https://issues.redhat.com/browse/SRVLOGIC-278[SRVLOGIC-278] - Enhance Knative Serving Integration == Other changes and Bug fixes -* link:https://issues.redhat.com/browse/SRVLOGIC-221[SRVLOGIC-221] - Pod instances keep spawning and terminating when deploying the workflow -* link:https://issues.redhat.com/browse/SRVLOGIC-223[SRVLOGIC-223] - Kn CLI: Build of the sample project fails with NoSuchFileException -* link:https://issues.redhat.com/browse/SRVLOGIC-224[SRVLOGIC-224] - Multiple pods are started with a simple project -* link:https://issues.redhat.com/browse/SRVLOGIC-225[SRVLOGIC-225] - Incosistent versions of Quarkus core and platform across deliverables -* link:https://issues.redhat.com/browse/SRVLOGIC-230[SRVLOGIC-230] - SonataFlow Quarkus Dev UI is not loaded -* link:https://issues.redhat.com/browse/SRVLOGIC-231[SRVLOGIC-231] - Disable question about collecting Quarkus analytics -* link:https://issues.redhat.com/browse/SRVLOGIC-235[SRVLOGIC-235] - SonataFlow builder image is failing with java.lang.NoSuchMethodError -* link:https://issues.redhat.com/browse/SRVLOGIC-238[SRVLOGIC-238] - SonataFlow examples are missing quarkus 3 upgrade -* link:https://issues.redhat.com/browse/SRVLOGIC-239[SRVLOGIC-239] - SonataFlow examples are downloading logic-data-index-ephemeral-rhel8 with version 1.32 -* link:https://issues.redhat.com/browse/SRVLOGIC-251[SRVLOGIC-251] - Missing installation/prepare environment guide in documentation +* link:https://issues.redhat.com/browse/SRVLOGIC-311[SRVLOGIC-311] - Add Red Hat Product repository as part of Swf-builder and Swf-devmode settings.xml +* link:https://issues.redhat.com/browse/SRVLOGIC-311[SRVLOGIC-277] - Error in workflow not correctly propagated +* link:https://issues.redhat.com/browse/SRVLOGIC-311[SRVLOGIC-185] - Serverless logic operator builder is not able to find 
builder config ConfigMap diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 29b9a4ae..7f7698c6 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -9,12 +9,7 @@ {product_name} provides a plug-in named `kn-plugin-workflow` for Knative CLI, which enables you to set up a local workflow project quickly using the command line. -This document describes how you can install and use the `kn-plugin-workflow` plug-in in {product_name}. - -[[proc-install-sw-plugin-kn-cli]] -== Installing the {product_name} Workflow CLI - -You can use the {product_name} plug-in to set up your local workflow project quickly using Knative CLI. +This document describes fatures of the workflow plugin for KN CLI. See xref:cloud/operator/install-kn-workflow-cli.adoc[] for currently supported installation procedure. .Prerequisites * link:{java_install_url}[Java] {java_min_version} is installed. @@ -23,40 +18,6 @@ You can use the {product_name} plug-in to set up your local workflow project qui * (Optional) link:{podman_install_url}[Podman] is installed. * link:{kubectl_install_url}[Kubernetes CLI] is installed. -.Procedure -. Create a variable which references the KN workflow plugin container image: -+ -`export IMAGE_SWF_KN_TAG={osl_kn_cli_imagename}:{operator_version}` -+ -. Pull the image using Docker or Podman -+ -`podman pull $IMAGE_SWF_KN` -+ -. Start the container image and get the ID of running container: -+ -`export KN_CONTAINER_ID=$(docker run -di $IMAGE_SWF_KN)` -+ -. Copy the binary of KN workflow plugin -+ -`docker cp $KN_CONTAINER_ID:/usr/share/kn/linux_amd64/kn-workflow-linux-amd64.tar.gz kn-workflow-linux-amd64.tar.gz` -+ -. Stop the container: -+ -`docker stop $KN_CONTAINER_ID` -+ -. Unzip the archive: -+ -`tar -xf kn-workflow-linux-amd64.tar.gz` -+ -. Rename the archive to `kn-workflow`: -+ -`mv kn-workflow-linux-amd64.tar.gz kn-workflow` -+ -. Copy the `kn-workflow` binary file to a directory in your `PATH`, such as `/usr/local/bin` -. Run the following command to verify that `kn-workflow` plug-in is installed successfully: -+ -`kn workflow help` -+ . After installing the plug-in, you can use `kn workflow` to run the related subcommands as follows: .Aliases to use workflow subcommand @@ -116,7 +77,6 @@ Flags: Use "kn [command] --help" for more information about a command. 
---- --- [[proc-create-sw-project-kn-cli]] == Creating a workflow project using Knative CLI From 85995d1a25b609aeb20bd92e0b59997752422dc9 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 09:15:17 +0200 Subject: [PATCH 24/38] SRVLOGIC-261: Enhance GS guide and fix Kn cli typos --- ...te-your-first-workflow-service-with-kn-cli-and-vscode.adoc | 4 +++- .../kn-plugin-workflow-overview.adoc | 2 +- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc b/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc index 88ca5e6c..5f1efe1a 100644 --- a/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc +++ b/modules/serverless-logic/pages/getting-started/create-your-first-workflow-service-with-kn-cli-and-vscode.adoc @@ -23,7 +23,7 @@ cd ./my-sonataflow-project ---- * Open the folder in Visual Studio Code and examine the created `workflow.sw.json` using our extension. -Now you can run the project and execute the workflow. +Once you are done you can run the project and execute the workflow. [[proc-running-app-with-kn-cli]] == Running a Workflow project with Visual Studio Code and KN CLI @@ -40,6 +40,8 @@ kn workflow run * See xref:testing-and-troubleshooting/quarkus-dev-ui-extension/quarkus-dev-ui-workflow-instances-page.adoc[Workflow instances] guide on how to run workflows via Development UI. * Once you are done developing your project navigate to the terminal that is running the `kn workflow run` command and hit `Ctlr+C` to stop the development environment. +You can use any editor to develop your workflow to suit your use case. We recommend getting familiar with xref:../core/cncf-serverless-workflow-specification-support.adoc[] and guides in `Core` chapter first. + To deploy the finished project to a local cluster, proceed to the next section. [[proc-deploying-app-with-kn-cli]] diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 7f7698c6..39c0a964 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -18,7 +18,7 @@ This document describes fatures of the workflow plugin for KN CLI. See xref:clou * (Optional) link:{podman_install_url}[Podman] is installed. * link:{kubectl_install_url}[Kubernetes CLI] is installed. -. 
After installing the plug-in, you can use `kn workflow` to run the related subcommands as follows: +After installing the plug-in, you can use `kn workflow` to run the related subcommands as follows: .Aliases to use workflow subcommand [source,shell] From 7ac1869c14a7100cb2805b8e0f8d1f1062a58d1d Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 10:34:38 +0200 Subject: [PATCH 25/38] SRVLOGIC-261: Remove quay.io occurences, replace with vars --- .../cloud/operator/build-and-deploy-workflows.adoc | 6 +++--- .../pages/cloud/operator/customize-podspec.adoc | 2 +- .../cloud/operator/install-serverless-operator.adoc | 12 ++++++------ 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc index f7910885..a3c8879e 100644 --- a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc +++ b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc @@ -66,7 +66,7 @@ You can change the `Dockerfile` entry in this `ConfigMap` to tailor the Dockerfi apiVersion: v1 data: DEFAULT_WORKFLOW_EXTENSION: .sw.json - Dockerfile: "FROM quay.io/kiegroup/kogito-swf-builder-nightly:latest AS builder\n\n# + Dockerfile: "FROM {sonataflow_builder_imagename}:{operator_version} AS builder\n\n# variables that can be overridden by the builder\n# To add a Quarkus extension to your application\nARG QUARKUS_EXTENSIONS\n# Args to pass to the Quarkus CLI add extension command\nARG QUARKUS_ADD_EXTENSION_ARGS\n# Additional java/mvn arguments @@ -355,7 +355,7 @@ spec: strategyOptions: KanikoBuildCacheEnabled: "true" registry: - address: quay.io/kiegroup <1> + address: registry.redhat.io/openshift-serverless-1 <1> secret: regcred <2> ---- @@ -432,7 +432,7 @@ If you are running on OpenShift, you have access to the Red Hat's supported regi kubectl edit cm/sonataflow-operator-builder-config -n {operator_installation_namespace} ---- -In your editor, change the first line in the `Dockerfile` entry where it reads `FROM quay.io/kiegroup/kogito-swf-builder-nightly:latest` to the desired image. +In your editor, change the first line in the `Dockerfile` entry where it reads `FROM {sonataflow_builder_imagename}:{operator_version}` to the desired image. This image must be compatible with your operator's installation. diff --git a/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc b/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc index 9ae72846..568dfa60 100644 --- a/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc +++ b/modules/serverless-logic/pages/cloud/operator/customize-podspec.adoc @@ -196,7 +196,7 @@ When setting the attribute `.spec.podTemplate.container.image` the operator unde === Setting a custom image in devmode -In xref:cloud/operator/developing-workflows.adoc[development profile], it's expected that the image is based on the default `quay.io/kiegroup/kogito-swf-devmode:latest`. +In xref:cloud/operator/developing-workflows.adoc[development profile], it's expected that the image is based on the default `{sonataflow_devmode_imagename}:{operator_version}`. 
=== Setting a custom image in preview diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc index 4f9e2df3..2c8595b5 100644 --- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc +++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc @@ -11,7 +11,7 @@ :kubernetes_operator_uninstall_url: https://olm.operatorframework.io/docs/tasks/uninstall-operator/ :operatorhub_url: https://operatorhub.io/ -This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator is in an xref:cloud/operator/known-issues.adoc[early development stage] (community only) and has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube]. +This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube]. .Prerequisites * A Kubernetes or OpenShift cluster with admin privileges and `kubectl` installed. @@ -62,13 +62,13 @@ To install the {product_name} Operator, you can use the following command: .Install {product_name} Operator on Kubernetes [source,shell,subs="attributes+"] ---- -kubectl create -f https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/{operator_version}/operator.yaml +kubectl create -f https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/{kogito_branch}/operator.yaml ---- -Replace `main` with specific version if needed: +Replace with specific version if needed: ---- kubectl create -f https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator//operator.yaml ---- -`` could be `1.44.1` for instance. +`` could be `10.0.0` for instance. 
You can follow the deployment of the {product_name} Operator: @@ -122,13 +122,13 @@ To uninstall the correct version of the operator, first you must get the current ---- kubectl get deployment sonataflow-operator-controller-manager -n sonataflow-operator-system -o jsonpath="{.spec.template.spec.containers[?(@.name=='manager')].image}" -quay.io/kiegroup/kogito-serverless-operator-nightly:latest +{sonataflow_operator_imagename}:{operator_version} ---- .Uninstalling the operator [source,shell,subs="attributes+"] ---- -kubectl delete -f https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/.x/operator.yaml +kubectl delete -f https://raw.githubusercontent.com/kogito-serverless-operator/.x/operator.yaml ---- [TIP] From bb1de2a97bc70c56773a1c8f4a547eef0c0c4a1e Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 10:44:37 +0200 Subject: [PATCH 26/38] SRVLOGIC-261: Fix quay.io references for data index images --- antora.yml | 2 ++ .../pages/cloud/operator/install-serverless-operator.adoc | 4 ++-- .../pages/core/configuration-properties.adoc | 2 +- .../pages/data-index/data-index-service.adoc | 8 ++++---- .../data-index/common/_dataindex_deployment_operator.adoc | 2 +- .../data-index/data-index-as-quarkus-dev-service.adoc | 4 ++-- 6 files changed, 12 insertions(+), 10 deletions(-) diff --git a/antora.yml b/antora.yml index bf8f7335..4d83a312 100644 --- a/antora.yml +++ b/antora.yml @@ -37,6 +37,8 @@ asciidoc: kogito_devservices_imagename: registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8 sonataflow_devmode_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8 sonataflow_builder_imagename: registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8 + sonataflow_dataindex_ephemeral_imagename: registry.redhat.io/openshift-serverless-1/logic-data-index-ephemeral-rhel8 + sonataflow_dataindex_postgresql_imagename: registry.redhat.io/openshift-serverless-1/logic-data-index-postgresql-rhel8 sonataflow_devmode_devui_url: /q/dev-ui/org.apache.kie.sonataflow.sonataflow-quarkus-devui/ serverless_logic_web_tools_name: Serverless Logic Web Tools serverless_workflow_vscode_extension_name: Openshift Serverless Logic Workflow Editor diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc index 2c8595b5..23801a0a 100644 --- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc +++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc @@ -62,7 +62,7 @@ To install the {product_name} Operator, you can use the following command: .Install {product_name} Operator on Kubernetes [source,shell,subs="attributes+"] ---- -kubectl create -f https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/{kogito_branch}/operator.yaml +kubectl create -f {operator_community_prod_yaml} ---- Replace with specific version if needed: ---- @@ -128,7 +128,7 @@ kubectl get deployment sonataflow-operator-controller-manager -n sonataflow-oper .Uninstalling the operator [source,shell,subs="attributes+"] ---- -kubectl delete -f https://raw.githubusercontent.com/kogito-serverless-operator/.x/operator.yaml +kubectl delete -f {operator_community_prod_yaml} ---- [TIP] diff --git a/modules/serverless-logic/pages/core/configuration-properties.adoc b/modules/serverless-logic/pages/core/configuration-properties.adoc index b46bb54b..754dca60 100644 --- 
a/modules/serverless-logic/pages/core/configuration-properties.adoc +++ b/modules/serverless-logic/pages/core/configuration-properties.adoc @@ -136,7 +136,7 @@ a|Defines strategy to generate the configuration key of open API specifications. |`quarkus.kogito.devservices.image-name` |Defines the Data Index image to use. |string -|`quay.io/kiegroup/kogito-data-index-ephemeral:{page-component-version}` +|`{sonataflow_dataindex_ephemeral_imagename}:{page-component-version}` |No |`quarkus.kogito.devservices.shared` diff --git a/modules/serverless-logic/pages/data-index/data-index-service.adoc b/modules/serverless-logic/pages/data-index/data-index-service.adoc index a8fa610e..cbeac180 100644 --- a/modules/serverless-logic/pages/data-index/data-index-service.adoc +++ b/modules/serverless-logic/pages/data-index/data-index-service.adoc @@ -63,7 +63,7 @@ Here you can see in example, how the {data_index_ref} resource definition can be ---- data-index: container_name: data-index - image: quay.io/kiegroup/kogito-data-index-postgresql:latest <1> + image: {sonataflow_dataindex_postgresql_imagename}:latest <1> ports: - "8180:8080" depends_on: @@ -81,7 +81,7 @@ Here you can see in example, how the {data_index_ref} resource definition can be QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION: update ---- -<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql:latest` +<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `{sonataflow_dataindex_postgresql_imagename}:latest` <2> Provide the database connection properties. <3> When `KOGITO_DATA_INDEX_QUARKUS_PROFILE` is not present, the {data_index_ref} is configured to use Kafka eventing. <4> To initialize the database schema at start using flyway. @@ -157,7 +157,7 @@ spec: spec: containers: - name: data-index-service-postgresql - image: quay.io/kiegroup/kogito-data-index-postgresql:latest <1> + image: {sonataflow_dataindex_postgresql_imagename}:latest <1> imagePullPolicy: Always ports: - containerPort: 8080 @@ -223,7 +223,7 @@ spec: name: data-index-service-postgresql uri: /jobs <7> ---- -<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `quay.io/kiegroup/kogito-data-index-postgresql:latest` +<1> Reference the right {data_index_ref} image to match with the type of Database, in this case `{sonataflow_dataindex_postgresql_imagename}:latest` <2> Provide the database connection properties <3> KOGITO_DATA_INDEX_QUARKUS_PROFILE: http-events-support to use the http-connector with Knative eventing. 
<4> To initialize the database schema at start using flyway diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc index 7b3724e4..cd2a1819 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/common/_dataindex_deployment_operator.adoc @@ -129,7 +129,7 @@ spec: spec: containers: - name: data-index-service-postgresql - image: quay.io/kiegroup/kogito-data-index-postgresql:latest + image: {sonataflow_dataindex_postgresql_imagename}:latest imagePullPolicy: Always resources: limits: diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc index f093e6d1..f721fe90 100644 --- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc +++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/data-index/data-index-as-quarkus-dev-service.adoc @@ -33,7 +33,7 @@ The Quarkus Dev Service also allows further configuration options including: * To disable {data_index_ref} Dev Service, use the `quarkus.kogito.devservices.enabled=false` property. * To change the port where the {data_index_ref} Dev Service runs, use the `quarkus.kogito.devservices.port=8180` property. -* To adjust the provisioned image, use `quarkus.kogito.devservices.imageName=quay.io/kiegroup/kogito-data-index-ephemeral` property. +* To adjust the provisioned image, use `quarkus.kogito.devservices.imageName={sonataflow_dataindex_ephemeral_imagename}` property. * To disable sharing the {data_index_ref} instance across multiple Quarkus applications, use `quarkus.kogito.devservices.shared=false` property. For more information about Quarkus Dev Services, see link:{dev_services_url}[Dev Services guide]. @@ -110,7 +110,7 @@ Allows to change the event connection type. The possible values are: |`quarkus.kogito.devservices.image-name` |Defines the {data_index_ref} image to use in Dev Service. 
|string -|`quay.io/kiegroup/kogito-data-index-ephemeral:{page-component-version}` +|`{sonataflow_dataindex_ephemeral_imagename}:{page-component-version}` |No |`quarkus.kogito.devservices.shared` From 7c4c756b344d628625b4bd83bbb4b52555e06e83 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Fri, 12 Jul 2024 14:11:55 +0200 Subject: [PATCH 27/38] SRVLOGIC-261: Fix emojis not loading --- modules/serverless-logic/pages/release-notes.adoc | 2 +- package.json | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/serverless-logic/pages/release-notes.adoc b/modules/serverless-logic/pages/release-notes.adoc index 0d4ec5a0..82f396e7 100644 --- a/modules/serverless-logic/pages/release-notes.adoc +++ b/modules/serverless-logic/pages/release-notes.adoc @@ -1,4 +1,4 @@ -= New features on {page-component-display-version} += New features on {operator_version} :compat-mode!: == Known issues diff --git a/package.json b/package.json index 6f17ffb5..3c72792e 100644 --- a/package.json +++ b/package.json @@ -20,7 +20,7 @@ "@antora/cli": "^3.0.1", "@antora/lunr-extension": "^1.0.0-alpha.6", "@antora/site-generator": "^3.0.1", - "asciidoctor-emoji": "^0.3.4", + "asciidoctor-emoji": "^0.5.0", "uglify-js": "~3.14" } } From ba3375185a3eb9befd0328a22f45adb2937924b2 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Mon, 15 Jul 2024 09:19:08 +0200 Subject: [PATCH 28/38] SRVLOGIC-261: Fix navigation for Cloud --- modules/ROOT/nav.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index c67ecae8..3df894c0 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -64,7 +64,7 @@ *** Persistence **** xref:serverless-logic:persistence/core-concepts.adoc[Core concepts] *** xref:serverless-logic:cloud/index.adoc[Cloud] -*** xref:serverless-logic:cloud/custom-ingress-authz.adoc[Securing Workflows] +**** xref:serverless-logic:cloud/custom-ingress-authz.adoc[Securing Workflows] **** Operator ***** xref:serverless-logic:cloud/operator/install-serverless-operator.adoc[Installation] ***** xref:serverless-logic:cloud/operator/global-configuration.adoc[Admin Configuration] From 47ca32a6f5dbf73f08d817f80050ac4c43e7d6b7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dominik=20Han=C3=A1k?= Date: Mon, 15 Jul 2024 09:20:41 +0200 Subject: [PATCH 29/38] Apply Jakub's suggestions from code review Co-authored-by: Jakub Schwan --- .../pages/cloud/operator/build-and-deploy-workflows.adoc | 2 +- .../pages/cloud/operator/global-configuration.adoc | 2 +- .../pages/cloud/operator/install-serverless-operator.adoc | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc index a3c8879e..c850c73e 100644 --- a/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc +++ b/modules/serverless-logic/pages/cloud/operator/build-and-deploy-workflows.adoc @@ -57,7 +57,7 @@ kubectl patch sonataflowplatform --patch 'spec:\n build:\n config: [#customize-base-build] === Customize the base build Dockerfile -The operator uses the `ConfigMap` named `sonataflow-operator-builder-config` in the operator's installation namespace ({operator_installation_namespace}) to configure and run the workflow build process. 
+The operator uses the `ConfigMap` named `logic-operator-builder-config` in the operator's installation namespace ({operator_installation_namespace}) to configure and run the workflow build process.
 
 You can change the `Dockerfile` entry in this `ConfigMap` to tailor the Dockerfile to your needs. Just be aware that this can break the build process.
 
 .Example of the logic-operator-builder-config `ConfigMap`

diff --git a/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc b/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc
index e4a3649f..b26a7cba 100644
--- a/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/global-configuration.adoc
@@ -47,7 +47,7 @@ You can freely edit any of the options in the key `controllers_cfg.yaml` entry.
 
 |===
 
-To edit this file, update the ConfigMap `sonataflow-operator-controllers-config` using your preferred tool such as `kubectl`.
+To edit this file, update the ConfigMap `logic-operator-controllers-config` using your preferred tool such as `kubectl`.
 
 [#config-changes]
 == Configuration Changes Impact

diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
index 23801a0a..a815d26f 100644
--- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
@@ -11,7 +11,7 @@
 :kubernetes_operator_uninstall_url: https://olm.operatorframework.io/docs/tasks/uninstall-operator/
 :operatorhub_url: https://operatorhub.io/
 
-This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube]. 
+This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube].

From ca018a7a9fbcf047b838b052b01c5145afcf1ccb Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Mon, 15 Jul 2024 09:25:32 +0200
Subject: [PATCH 30/38] SRVLOGIC-261: Remove Kubernetes note

---
 .../pages/cloud/operator/install-serverless-operator.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
index a815d26f..aa600cd3 100644
--- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
@@ -11,7 +11,7 @@
 :kubernetes_operator_uninstall_url: https://olm.operatorframework.io/docs/tasks/uninstall-operator/
 :operatorhub_url: https://operatorhub.io/
 
-This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator has been tested on OpenShift {openshift_version_min}+, Kubernetes {kubernetes_version}+, and link:{minikube_url}[Minikube].
+This guide describes how to install the {operator_name} in a Kubernetes or OpenShift cluster. The operator has been tested on OpenShift {openshift_version_min}+ and link:{minikube_url}[Minikube].
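The `ConfigMap` renames in the hunks above are straightforward to sanity-check against a live cluster. A minimal verification sketch, assuming `kubectl` access and leaving the operator's installation namespace as a placeholder, might look like this:

[source,shell]
----
# Confirm the renamed ConfigMaps exist in the operator's installation namespace
# (<operator-namespace> is a placeholder; substitute your actual namespace)
kubectl get configmap logic-operator-builder-config -n <operator-namespace>
kubectl get configmap logic-operator-controllers-config -n <operator-namespace>

# Print the Dockerfile entry before tailoring it
kubectl get configmap logic-operator-builder-config -n <operator-namespace> -o jsonpath='{.data.Dockerfile}'
----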
 .Prerequisites
 * A Kubernetes or OpenShift cluster with admin privileges and `kubectl` installed.

From ab65f2579a92817091ced944b0f17d8610b7a1d9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dominik=20Han=C3=A1k?=
Date: Mon, 15 Jul 2024 12:24:27 +0200
Subject: [PATCH 31/38] Update antora.yml namespace constant

Co-authored-by: Jakub Schwan

---
 antora.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/antora.yml b/antora.yml
index 4d83a312..fd786a1e 100644
--- a/antora.yml
+++ b/antora.yml
@@ -22,7 +22,7 @@ asciidoc:
     kogito_version_redhat: 9.100.0.redhat-00004
     kogito_branch: 9.100.x-prod
     operator_name: Serverless Logic Operator
-    operator_installation_namespace: sonataflow-operator-system
+    operator_installation_namespace: openshift-serverless-logic
     operator_controller_config: sonataflow-operator-controllers-config
     quarkus_platform: com.redhat.quarkus.platform
     kogito_sw_ga: kogito-quarkus-serverless-workflow

From 0b84cb3091f6e89f5a2c6ea394bd1815d5d1bcf9 Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Mon, 15 Jul 2024 12:32:27 +0200
Subject: [PATCH 32/38] SRVLOGIC-261: Fix URLs for JS and DI, point to catalog now

---
 antora.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/antora.yml b/antora.yml
index fd786a1e..f28cd6f1 100644
--- a/antora.yml
+++ b/antora.yml
@@ -47,10 +47,10 @@ asciidoc:
     # Jobs service image and links
     jobs_service_image_ephemeral: registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8
     jobs_service_image_ephemeral_name: logic-jobs-service-ephemeral-rhel8
-    jobs_service_image_ephemeral_url: https:registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8
+    jobs_service_image_ephemeral_url: https://catalog.redhat.com/software/containers/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8/6614eddaaeb155f6aae45380
     jobs_service_image_postgresql_name: logic-jobs-service-postgresql-rhel8
     jobs_service_image_postgresql: registry.redhat.io/openshift-serverless-1/logic-jobs-service-postgresql-rhel8
-    jobs_service_image_postgresql_url: https://registry.redhat.io/openshift-serverless-1/logic-jobs-service-ephemeral-rhel8
+    jobs_service_image_postgresql_url: https://catalog.redhat.com/software/containers/openshift-serverless-1/logic-jobs-service-postgresql-rhel8/6614eddbaeb155f6aae45385
     jobs_service_image_usage_url: https://github.com/kiegroup/kogito-images/tree/9.100.x-prod#jobs-services-all-in-one
     #

From e2cc45699c89d74cea499be41364c172bfaada80 Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Mon, 15 Jul 2024 14:36:23 +0200
Subject: [PATCH 33/38] SRVLOGIC-261: Fix sw_ga package and fix Jobs service

---
 antora.yml            | 2 +-
 modules/ROOT/nav.adoc | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/antora.yml b/antora.yml
index f28cd6f1..e9c0f8cc 100644
--- a/antora.yml
+++ b/antora.yml
@@ -25,7 +25,7 @@ asciidoc:
     operator_installation_namespace: openshift-serverless-logic
     operator_controller_config: sonataflow-operator-controllers-config
     quarkus_platform: com.redhat.quarkus.platform
-    kogito_sw_ga: kogito-quarkus-serverless-workflow
+    kogito_sw_ga: org.apache.kie.sonataflow:sonataflow-quarkus
     data_index_ref: Data Index
     workflow_instance: workflow instance
     workflow_instances: workflow instances

diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc
index 3df894c0..2dc05909 100644
--- a/modules/ROOT/nav.adoc
+++ b/modules/ROOT/nav.adoc
@@ -85,7 +85,7 @@
 **** xref:serverless-logic:integrations/core-concepts.adoc[]
 *** Supporting Services
 **** Jobs Service
-***** xref:job-services/core-concepts.adoc[Core Concepts]
+***** xref:serverless-logic:job-services/core-concepts.adoc[Core Concepts]
 **** Data Index
 ***** xref:serverless-logic:data-index/data-index-core-concepts.adoc[Core Concepts]
 ***** xref:serverless-logic:data-index/data-index-service.adoc[Data Index Standalone Service]

From f7c6392453d07d7f62639d357b0d59bafb8303c4 Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Thu, 18 Jul 2024 09:01:47 +0200
Subject: [PATCH 34/38] SRVLOGIC-261: Apply fixes found during last review

---
 antora.yml                                                | 1 -
 .../pages/cloud/operator/using-persistence.adoc           | 9 +++++----
 ...orking-with-serverless-workflow-quarkus-examples.adoc  | 1 +
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/antora.yml b/antora.yml
index e9c0f8cc..1d7fda21 100644
--- a/antora.yml
+++ b/antora.yml
@@ -187,7 +187,6 @@ asciidoc:
 
     # Tag to use with community-only images
-    sonataflow_non_productized_image_tag: NO_TAG
     operator_community_prod_yaml: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod/operator.yaml
     operator_community_prod_root: https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/9.100.x-prod
     sonataflow_operator_imagename: registry.redhat.io/openshift-serverless-1/logic-rhel8-operator

diff --git a/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc b/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc
index 54a6e177..d11ea82a 100644
--- a/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/using-persistence.adoc
@@ -256,7 +256,7 @@ spec:
 
 === Flyway configuration by using SonataFlowPlatform properties
 
-To apply a common Flyway configuration to all the workflows in a given namespace, you can use the `spec.properties` of the `SonataFlowPlatform` in that namespace.
+To apply a common Flyway configuration to all the workflows in a given namespace, you can use the `spec.properties.flow` of the `SonataFlowPlatform` in that namespace.
 
 .Example of enabling Flyway by using the SonataFlowPlatform properties.
 [source,yaml]
 ----
 apiVersion: sonataflow.org/v1alpha08
 kind: SonataFlowPlatform
 metadata:
   name: sonataflow-platform
 spec:
   properties:
-  - name: quarkus.flyway.migrate-at-start
-    value: true
+    flow:
+      - name: quarkus.flyway.migrate-at-start
+        value: true
 ----
 
 [NOTE]
 ====
-The configuration above takes effect at workflow deployment time, so you must be sure that property is configured before you deploy your workflows.
+The configuration above takes effect at workflow deployment time, so you must be sure that the property is configured before you deploy your workflows.
 ====

 === Manual database initialization by using DDL

diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
index 4ab97885..e8487630 100644
--- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
+++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
@@ -26,6 +26,7 @@ However, the same procedure can be applied to any example located in link:{kogito_sw
 [source,shell,subs="attributes+"]
 ----
 git clone --branch main {kogito_examples_url}
+git checkout {kogit_branch}
 cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-greeting-quarkus
 ----

From eea6ffbc1ebf83696c6be90703aa0d48a1605886 Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Thu, 18 Jul 2024 09:02:38 +0200
Subject: [PATCH 35/38] SRVLOGIC-261: Fix typo

---
 .../working-with-serverless-workflow-quarkus-examples.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
index e8487630..6d7878cf 100644
--- a/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
+++ b/modules/serverless-logic/pages/use-cases/advanced-developer-use-cases/getting-started/working-with-serverless-workflow-quarkus-examples.adoc
@@ -26,7 +26,7 @@ However, the same procedure can be applied to any example located in link:{kogito_sw
 [source,shell,subs="attributes+"]
 ----
 git clone --branch main {kogito_examples_url}
-git checkout {kogit_branch}
+git checkout {kogito_branch}
 cd {kie_kogito_examples_repo_name}/serverless-workflow-examples/serverless-workflow-greeting-quarkus
 ----

From b57b0da0c77110da7ece621755c7fe79bd05bb47 Mon Sep 17 00:00:00 2001
From: Dominik Hanak
Date: Thu, 18 Jul 2024 12:40:04 +0200
Subject: [PATCH 36/38] SRVLOGIC-261: Remove kubernetes install section

---
 .../operator/install-serverless-operator.adoc | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
index aa600cd3..db9d2b28 100644
--- a/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
+++ b/modules/serverless-logic/pages/cloud/operator/install-serverless-operator.adoc
@@ -29,20 +29,6 @@ When searching for the operator in the *Filter by keyword* field, use the word `
 
 To remove the operator on OpenShift refer to the "link:{openshift_operator_uninstall_url}[Deleting Operators from a cluster]" from the OpenShift documentation.
 
-== {product_name} Operator Kubernetes installation
-
-=== Install
-
-To install the operator on Kubernetes refer to the "link:{kubernetes_operator_install_url}[How to install an Operator from OperatorHub.io]" from the OperatorHub's documentation.
- -When link:{operatorhub_url}[searching for the operator in the *Search OperatorHub* field], use the word `{operator_k8s_keyword}`. - -=== Uninstall - -To remove the operator on Kubernetes follow the document "link:{kubernetes_operator_uninstall_url}[Uninstall your operator]" from the OLM's documentation. - -When searching for the subscription to remove, use the word `{operator_k8s_subscription}`. - == {product_name} Operator Manual Installation [WARNING] From 0487a50b5a6fb14fcc96672c5fc4910659be51c7 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Tue, 23 Jul 2024 08:27:54 +0200 Subject: [PATCH 37/38] SRVLOGIC-308: [Doc] Add the content related to `kn workflow gen-manifest` in the midstream docs --- .../kn-plugin-workflow-overview.adoc | 39 +++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 39c0a964..3cc20e1a 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -142,6 +142,45 @@ kn workflow run -- . Once the project is ready, the Development UI will be opened up in a browser automatically (on `localhost:8080/q/dev`). +[[proc-gen-manifests-sw-project-kn-cli]] +== Generating a list of Operator manifests using Knative CLI + +After creating your workflow project, you can use the `gen-manifest` command with `kn workflow` to generate operator manifest files for your workflow project in your current directory. + +This will screate a new file in `./manifests` directory in your project. + +.Prerequisites +* {product_name} plug-in for Knative CLI is installed. ++ +For more information about installing the plug-in, see <>. + +* A workflow project is created. ++ +For more information about creating a workflow project, see <>. +* Minikube cluster is running locally. + + +.Procedure +. In Knative CLI, enter the following command to generates operator manifests for your workflow project: ++ +-- +.Generate the operator manifest files for your project. +[source,shell] +---- +kn workflow gen-manifest +---- +-- +. Apply the generated operator manifest to your cluster: ++ +-- +.Apply the manifest file. +[source,shell] +---- +kubectl apply -f manifests/01-sonataflow_hello.yaml -n +---- +-- + + [[proc-deploy-sw-project-kn-cli]] == Deploying a workflow project using Knative CLI From 6d582e92c19d8a05537db002f2870ce2dd157597 Mon Sep 17 00:00:00 2001 From: Dominik Hanak Date: Tue, 23 Jul 2024 08:50:24 +0200 Subject: [PATCH 38/38] SRVLOGIC-308: Fix typos --- .../kn-plugin-workflow-overview.adoc | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc index 3cc20e1a..ad02a8a1 100644 --- a/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc +++ b/modules/serverless-logic/pages/testing-and-troubleshooting/kn-plugin-workflow-overview.adoc @@ -147,7 +147,7 @@ kn workflow run After creating your workflow project, you can use the `gen-manifest` command with `kn workflow` to generate operator manifest files for your workflow project in your current directory. -This will screate a new file in `./manifests` directory in your project. 
+This will create a new file in the `./manifests` directory in your project.
 
 .Prerequisites
 * {product_name} plug-in for Knative CLI is installed.
@@ -161,7 +161,7 @@ For more information about creating a workflow project, see <
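The `gen-manifest` procedure introduced above ends with applying the generated manifest, and a quick smoke test can confirm that the workflow came up. The following sketch rests on assumptions: the workflow name `hello` is inferred from the generated file name `manifests/01-sonataflow_hello.yaml`, `sonataflow` is assumed to be a registered resource name for the SonataFlow custom resource, and `<target_namespace>` is a placeholder:

[source,shell]
----
# Check the workflow custom resource created from the generated manifest
# ("hello" is inferred from manifests/01-sonataflow_hello.yaml;
# <target_namespace> is a placeholder)
kubectl get sonataflow hello -n <target_namespace>

# Watch the workflow pods start
kubectl get pods -n <target_namespace> --watch
----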