Kantra is a CLI that unifies analysis and transformation capabilities of Konveyor. It is available for Linux, Mac and Windows.
Podman 4+ is required to run kantra. By default, it is configured to use the podman executable available on the host.
Although kantra is primarily tested with podman, Docker Engine 24+ or Docker Desktop 4+ can be used as an alternative. To use docker, set the CONTAINER_TOOL environment variable to the path of the docker executable:
export CONTAINER_TOOL=/usr/bin/docker
To install kantra, download the executable for your platform and add it to your PATH.
Note: On Mac you might get an "Apple could not verify" error message. If you do, run the following command to have Apple trust the kantra binary:
xattr -d com.apple.quarantine kantra
Go to the release page and download the zip file containing the binary for your platform and architecture. Unzip the archive and add the executable it contains to your PATH.
The easiest way to get the latest (or a particular/older) executable is to get it from the respective container image.
Set the shell variable kantra_version to a particular version if you want to grab that version, e.g., kantra_version=v0.4.0.
Run:
${CONTAINER_TOOL:-podman} cp $(${CONTAINER_TOOL:-podman} create --name kantra-download quay.io/konveyor/kantra:${kantra_version:-latest}):/usr/local/bin/kantra . && ${CONTAINER_TOOL:-podman} rm kantra-download
When you are not using Docker Desktop on your Mac (see above), you need to start a podman machine prior to running any podman commands (see Setup for Mac).
Once a machine is started, run:
${CONTAINER_TOOL:-podman} cp $(${CONTAINER_TOOL:-podman} create --name kantra-download quay.io/konveyor/kantra:${kantra_version:-latest}):/usr/local/bin/darwin-kantra . && ${CONTAINER_TOOL:-podman} rm kantra-download
When you are not using Docker Desktop on Windows (see above), you need to start a podman machine prior to running any podman commands (see Setup for Windows).
Once a machine is started, run:
${CONTAINER_TOOL:-podman} cp $(${CONTAINER_TOOL:-podman} create --name kantra-download quay.io/konveyor/kantra:${kantra_version:-latest}):/usr/local/bin/windows-kantra . && ${CONTAINER_TOOL:-podman} rm kantra-download
Ensure that you add the executable to the PATH.
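For example, on Linux or Mac, assuming the downloaded binary is named kantra and sits in the current directory, one way is to move it into a directory that is already on the PATH:
chmod +x kantra && sudo mv kantra /usr/local/bin/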
On Mac and Windows, a podman machine needs to be started prior to running any commands, unless you are using Docker Desktop:
Prior to starting your podman machine, run:
ulimit -n unlimited
Init your podman machine:
- Podman 4: Podman 4 requires some host directories to be mounted within the VM:
podman machine init <vm_name> -v $HOME:$HOME -v /private/tmp:/private/tmp -v /var/folders/:/var/folders/
- Podman 5: Podman 5 mounts the $HOME, /private/tmp and /var/folders directories by default, so simply init the machine:
podman machine init <vm_name>
If the input and/or output directories you intend to use with kantra fall outside $HOME, /private/tmp and /var/folders, mount those directories in addition to the defaults.
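For example, a minimal sketch assuming your projects live in a hypothetical /opt/projects directory that falls outside the default mounts:
podman machine init <vm_name> -v $HOME:$HOME -v /private/tmp:/private/tmp -v /var/folders/:/var/folders/ -v /opt/projects:/opt/projects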
Increase podman resources (minimum 4G memory is required):
podman machine set <vm_name> --cpus 4 --memory 4096
Start the machine:
podman machine start <vm_name>
Kantra has five subcommands:
- analyze: This subcommand allows running source code analysis on input source code or a binary.
- transform: This subcommand allows running OpenRewrite recipes on source code.
- test: This subcommand allows testing YAML rules.
- discover: This subcommand discovers an application and outputs a YAML representation of source platform resources.
- generate: This subcommand analyzes the source platform and/or application and outputs a discovery manifest.
The analyze subcommand allows running source code and binary analysis using analyzer-lsp.
To run analysis on application source code, run:
kantra analyze --input=<path/to/source/code> --output=<path/to/output/dir>
--input must point to a source code directory or a binary file, and --output must point to a directory that will contain the analysis results.
All flags:
Flags:
--analyze-known-libraries analyze known open-source libraries
--bulk running multiple analyze commands in bulk will result to combined static report
--context-lines int number of lines of source code to include in the output for each incident (default 100)
-d, --dependency-folders stringArray directory for dependencies
--enable-default-rulesets run default rulesets with analysis (default true)
-h, --help help for analyze
--http-proxy string HTTP proxy string URL
--https-proxy string HTTPS proxy string URL
--incident-selector string an expression to select incidents based on custom variables. ex: (!package=io.konveyor.demo.config-utils)
-i, --input string path to application source code or a binary
--jaeger-endpoint string jaeger endpoint to collect traces
--json-output create analysis and dependency output as json
-l, --label-selector string run rules based on specified label selector expression
--list-sources list rules for available migration sources
--list-targets list rules for available migration targets
--maven-settings string path to a custom maven settings file to use
-m, --mode string analysis mode. Must be one of 'full' or 'source-only' (default "full")
--no-proxy string proxy excluded URLs (relevant only with proxy)
-o, --output string path to the directory for analysis output
--overwrite overwrite output directory
--rules stringArray filename or directory containing rule files. Use multiple times for additional rules: --rules <rule1> --rules <rule2> ...
--skip-static-report do not generate static report
-s, --source stringArray source technology to consider for analysis. Use multiple times for additional sources: --source <source1> --source <source2> ...
-t, --target stringArray target technology to consider for analysis. Use multiple times for additional targets: --target <target1> --target <target2> ...
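For illustration, an invocation combining several of the flags above (all paths and technology names are placeholders):
kantra analyze --input=<path/to/source/code> --output=<path/to/output/dir> --source=<source_tech> --target=<target_tech> --rules=<path/to/custom/rules> --mode=source-only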
By design, kantra supports analyzing a single application per command execution. However, it is possible to use the --bulk option to execute multiple kantra analyze commands with different applications and get one output directory and static report populated with the analysis reports of all the applications.
Example:
kantra analyze --bulk --input=<path/to/source/A> --output=<path/to/output/ABC>
kantra analyze --bulk --input=<path/to/source/B> --output=<path/to/output/ABC>
kantra analyze --bulk --input=<path/to/source/C> --output=<path/to/output/ABC>
Transform has one subcommand:
- openrewrite: This subcommand allows running one or more available OpenRewrite recipes on input source code.
The openrewrite subcommand allows running OpenRewrite recipes on source code.
To transform applications using OpenRewrite, run:
kantra transform openrewrite --input=<path/to/source/code> --target=<exactly_one_target_from_the_list>
The value of the --target option must be one of the available OpenRewrite recipes. To list all available recipes, run:
kantra transform --list-targets
All flags:
Flags:
-g, --goal string target goal (default "dryRun")
-h, --help help for openrewrite
-i, --input string path to application source code directory
-l, --list-targets list all available OpenRewrite recipes
-s, --maven-settings string path to a custom maven settings file to use
-t, --target string target openrewrite recipe to use. Run --list-targets to get a list of packaged recipes.
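For illustration, a transform run that also supplies a custom Maven settings file (the paths and recipe name are placeholders; the recipe must be one reported by --list-targets):
kantra transform openrewrite --input=<path/to/source/code> --target=<recipe_name> --maven-settings=<path/to/settings.xml>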
The test subcommand allows running tests on YAML rules written for analyzer-lsp.
The input to the test runner is one or more test files and/or directories containing tests written in YAML.
kantra test /path/to/a/single/tests/file.test.yaml
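Since directories are also accepted, you can point the test runner at a whole directory of tests, for example:
kantra test /path/to/tests/dir/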
The output of tests is printed on the console.
See different ways to run the test command in the test runner doc.
Asset generation consists of two subcommands: discover and generate.
The discover subcommand discovers an application and outputs a YAML representation of source platform resources.
Flags:
-h, --help help for discover
--list-platforms List available supported discovery platforms
To list all supported platforms:
kantra discover --list-platforms
Select one of the supported platforms:
kantra discover cloud-foundry -h
All flags for Cloud Foundry discovery:
Flags:
--app-name string Name of the Cloud Foundry application to discover.
--cf-config string Path to the Cloud Foundry CLI configuration file (default: ~/.cf/config). (default "~/.cf/config")
--conceal-sensitive-data Extract sensitive information in the discover manifest into a separate file (default: false).
-h, --help help for cloud-foundry
--input string input path of the manifest file or folder to analyze
--list-apps List applications available for each space.
--output-dir string Directory where output manifests will be saved (default: standard output). If the directory does not exist, it will be created automatically.
--platformType string Platform type for discovery. Allowed value is: "cloud-foundry" (default). (default "cloud-foundry")
--skip-ssl-validation Skip SSL certificate validation for API connections (default: false).
--spaces strings Comma-separated list of Cloud Foundry spaces to analyze (e.g., --spaces="space1,space2"). At least one space is required when using live discovery.
--use-live-connection Enable real-time discovery using live platform connections.
Global Flags:
--log-level uint32 log level (default 4)
--no-cleanup do not cleanup temporary resources
To run discovery on a Cloud Foundry manifest file:
kantra discover cloud-foundry --input=<path-to/manifest-yaml>
To run discovery on a Cloud Foundry manifest file and save to a directory:
kantra discover cloud-foundry --input=<path-to/manifest-yaml> --output-dir=<path-to/output-dir>
To run discovery on Cloud Foundry manifest files in an input directory and save to a directory:
kantra discover cloud-foundry --input=<path-to/manifest-dir> --output-dir=<path-to/output-dir>
To run discovery on Cloud Foundry manifest files in an input directory and list the available applications:
kantra discover cloud-foundry --input=<path-to/manifest-dir> --list-apps
To run discovery on Cloud Foundry manifest files in an input directory and separate sensitive data (credentials, secrets) into a dedicated file:
kantra discover cloud-foundry --input=<path-to/manifest-dir> --conceal-sensitive-data=true --output-dir=<path-to/output-dir>
To run live discovery from Cloud Foundry platform on a subset of spaces:
kantra discover cloud-foundry --use-live-connection --spaces=<space1,space2>
To run live discovery from Cloud Foundry platform on a subset of spaces and list the available applications:
kantra discover cloud-foundry --use-live-connection --spaces=<space1,space2> --list-apps
To run live discovery from Cloud Foundry platform on a subset of spaces and save to a directory:
kantra discover cloud-foundry --use-live-connection --spaces=<space1,space2> --output-dir=<path-to/output-dir>
To run live discovery from Cloud Foundry platform on a subset of spaces and separate sensitive data (credentials, secrets) into a dedicated file:
kantra discover cloud-foundry --use-live-connection --conceal-sensitive-data=true --spaces=<space1,space2> --output-dir=<path-to/output-dir>
To run live discovery from Cloud Foundry platform on a subset of spaces and on a specific application:
kantra discover cloud-foundry --use-live-connection --spaces=<space1,space2> --app-name=<app-name>
To run live discovery from Cloud Foundry platform on a subset of spaces and on a specific application and save to a directory:
kantra discover cloud-foundry --use-live-connection --spaces=<space1,space2> --app-name=<app-name> --output-dir=<path-to/output-dir>
The generate subcommand analyzes the source platform and/or application and outputs a discovery manifest.
Flags:
-h, --help help for generate
The generate subcommand has a helm subcommand that generates Helm template manifests.
To generate Helm templates:
kantra generate helm --input=<path/to/discover/manifest> --chart-dir=<path/to/helmchart>
All flags for Helm generation:
Flags:
--chart-dir string Directory to the Helm chart to use for chart generation (required)
-h, --help help for helm
--input string Specifies the discover manifest file (required)
--non-k8s-only Render only the non-Kubernetes templates located in the files/konveyor directory of the chart
--output-dir string Directory to save the generated Helm chart. Defaults to stdout
--set stringArray Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
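For illustration, an invocation that writes the rendered chart to a directory and overrides a chart value (the paths and the key/value pair are placeholders):
kantra generate helm --input=<path/to/discover/manifest> --chart-dir=<path/to/helmchart> --output-dir=<path/to/output-dir> --set key1=val1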
- Example usage scenarios
- Using provider options
- Test runner for YAML rules
- Setup dev environment instructions
Refer to Konveyor's Code of Conduct here.