# scylla-cluster-tests configuration options

Parameter | Description | Default | Override environment variable
config_files a list of config files that would be used N/A SCT_CONFIG_FILES
cluster_backend backend that will be used, aws/gce/docker N/A SCT_CLUSTER_BACKEND
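Every parameter below can be set either in a YAML file listed in config_files or through its override environment variable. A minimal sketch of the latter (the YAML path is illustrative, not a real file in the repo):

```shell
# Select the backend and test configuration via SCT_* environment
# variables before invoking the test runner.
export SCT_CLUSTER_BACKEND=aws
# Hypothetical config file path, shown only to illustrate the list format.
export SCT_CONFIG_FILES='["test-cases/longevity/my-test.yaml"]'
echo "backend=$SCT_CLUSTER_BACKEND"
```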
test_duration Test duration (min). Used to keep instances produced by tests alive,
and for the Jenkins pipeline timeout and TimeoutThread.
60 SCT_TEST_DURATION
n_db_nodes Number of database nodes; for multiple data centers, a list with one count per data center. N/A SCT_N_DB_NODES
n_test_oracle_db_nodes Number of oracle test nodes; for multiple data centers, a list with one count per data center. 1 SCT_N_TEST_ORACLE_DB_NODES
n_loaders Number of loader nodes; for multiple data centers, a list with one count per data center N/A SCT_N_LOADERS
n_monitor_nodes Number of monitor nodes; for multiple data centers, a list with one count per data center N/A SCT_N_MONITORS_NODES
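A sketch of how the per-data-center counts look as environment variables (assuming the space-separated list convention; check the YAML test cases for your SCT version):

```shell
# Single data center: 6 db nodes.
export SCT_N_DB_NODES=6
# Two data centers: 3 db nodes in each (one entry per DC).
export SCT_N_DB_NODES="3 3"
echo "$SCT_N_DB_NODES"
```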
intra_node_comm_public If True, all communication between nodes is via public addresses N/A SCT_INTRA_NODE_COMM_PUBLIC
endpoint_snitch The snitch class Scylla will use

'GossipingPropertyFileSnitch' - default
'Ec2MultiRegionSnitch' - default on aws backend
'GoogleCloudSnitch'
N/A SCT_ENDPOINT_SNITCH
user_credentials_path Path to your user credentials. QA keys are downloaded automatically from the S3 bucket N/A SCT_USER_CREDENTIALS_PATH
cloud_credentials_path Path to your cloud credentials. QA keys are downloaded automatically from the S3 bucket ~/.ssh/support SCT_CLOUD_CREDENTIALS_PATH
cloud_cluster_id scylla cloud cluster id N/A SCT_CLOUD_CLUSTER_ID
cloud_prom_bearer_token scylla cloud promproxy bearer_token to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_BEARER_TOKEN
cloud_prom_path scylla cloud promproxy path to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_PATH
cloud_prom_host scylla cloud promproxy hostname to federate monitoring data into our monitoring instance N/A SCT_CLOUD_PROM_HOST
ip_ssh_connections Type of IP used to connect to machine instances.
This depends on whether you are running your tests from a machine inside
your cloud provider, where it makes sense to use 'private', or outside (use 'public')

Default: Use public IPs to connect to instances (public)
Use private IPs to connect to instances (private)
Use IPv6 IPs to connect to instances (ipv6)
private SCT_IP_SSH_CONNECTIONS
scylla_repo Url to the repo of the scylla version to install N/A SCT_SCYLLA_REPO
scylla_apt_keys APT keys for ScyllaDB repos ['17723034C56D4B19', '5E08FBD8B5D6EC9C', 'D0A112E067426AB2'] SCT_SCYLLA_APT_KEYS
unified_package Url to the unified package of the scylla version to install N/A SCT_UNIFIED_PACKAGE
nonroot_offline_install Install Scylla without requiring root privileges N/A SCT_NONROOT_OFFLINE_INSTALL
install_mode Scylla install mode, repo/offline/web repo SCT_INSTALL_MODE
scylla_version Version of scylla to install, ex. '2.3.1'
Automatically looks up AMIs and repo links for formal versions.
WARNING: can't be used together with 'scylla_repo' or 'ami_id_db_scylla'
N/A SCT_SCYLLA_VERSION
user_data_format_version Format version of the user-data to use for scylla images,
defaults to what is tagged on the image used
N/A SCT_USER_DATA_FORMAT_VERSION
oracle_user_data_format_version Format version of the user-data to use for scylla images,
defaults to what is tagged on the image used
N/A SCT_ORACLE_USER_DATA_FORMAT_VERSION
oracle_scylla_version Version of scylla to use as oracle cluster with gemini tests, ex. '3.0.11'
Automatically looks up AMIs for formal versions.
WARNING: can't be used together with 'ami_id_db_oracle'
5.0.10 SCT_ORACLE_SCYLLA_VERSION
scylla_linux_distro The distro name and family name to use [centos/ubuntu-xenial/debian-jessie] ubuntu-focal SCT_SCYLLA_LINUX_DISTRO
scylla_linux_distro_loader The distro name and family name to use [centos/ubuntu-xenial/debian-jessie] centos SCT_SCYLLA_LINUX_DISTRO_LOADER
scylla_repo_m Url to the repo of the scylla version to install for management tests N/A SCT_SCYLLA_REPO_M
scylla_repo_loader Url to the repo of scylla version to install c-s for loader https://s3.amazonaws.com/downloads.scylladb.com/rpm/centos/scylla-4.6.repo SCT_SCYLLA_REPO_LOADER
scylla_mgmt_address Url to the repo of scylla manager version to install for management tests N/A SCT_SCYLLA_MGMT_ADDRESS
scylla_mgmt_agent_address Url to the repo of scylla manager agent version to install for management tests N/A SCT_SCYLLA_MGMT_AGENT_ADDRESS
manager_version Branch of scylla manager server and agent to install. Options in defaults/manager_versions.yaml 3.0 SCT_MANAGER_VERSION
target_manager_version Branch of scylla manager server and agent to upgrade to. Options in defaults/manager_versions.yaml N/A SCT_TARGET_MANAGER_VERSION
manager_scylla_backend_version Branch of scylla db enterprise to install. Options in defaults/manager_versions.yaml 2022 SCT_MANAGER_SCYLLA_BACKEND_VERSION
scylla_mgmt_agent_version N/A SCT_SCYLLA_MGMT_AGENT_VERSION
scylla_mgmt_pkg Url to the scylla manager packages to install for management tests N/A SCT_SCYLLA_MGMT_PKG
stress_cmd_lwt_i Stress command for LWT performance test for INSERT baseline N/A SCT_STRESS_CMD_LWT_I
stress_cmd_lwt_d Stress command for LWT performance test for DELETE baseline N/A SCT_STRESS_CMD_LWT_D
stress_cmd_lwt_u Stress command for LWT performance test for UPDATE baseline N/A SCT_STRESS_CMD_LWT_U
stress_cmd_lwt_ine Stress command for LWT performance test for INSERT with IF NOT EXISTS N/A SCT_STRESS_CMD_LWT_INE
stress_cmd_lwt_uc Stress command for LWT performance test for UPDATE with IF N/A SCT_STRESS_CMD_LWT_UC
stress_cmd_lwt_ue Stress command for LWT performance test for UPDATE with IF EXISTS N/A SCT_STRESS_CMD_LWT_UE
stress_cmd_lwt_de Stress command for LWT performance test for DELETE with IF EXISTS N/A SCT_STRESS_CMD_LWT_DE
stress_cmd_lwt_dc Stress command for LWT performance test for DELETE with IF condition N/A SCT_STRESS_CMD_LWT_DC
stress_cmd_lwt_mixed Stress command for LWT performance test for mixed lwt load N/A SCT_STRESS_CMD_LWT_MIXED
stress_cmd_lwt_mixed_baseline Stress command for LWT performance test for mixed lwt load baseline N/A SCT_STRESS_CMD_LWT_MIXED_BASELINE
use_cloud_manager When defined true, will install scylla cloud manager N/A SCT_USE_CLOUD_MANAGER
use_ldap When defined true, LDAP is going to be used. N/A SCT_USE_LDAP
use_ldap_authorization When defined true, will create a docker container with LDAP and configure scylla.yaml to use it N/A SCT_USE_LDAP_AUTHORIZATION
use_ldap_authentication When defined true, will create a docker container with LDAP and configure scylla.yaml to use it N/A SCT_USE_LDAP_AUTHENTICATION
prepare_saslauthd When defined true, will install and start saslauthd service N/A SCT_PREPARE_SASLAUTHD
ldap_server_type This option indicates which server is going to be used for LDAP operations. [openldap, ms_ad] N/A SCT_LDAP_SERVER_TYPE
use_mgmt When defined true, will install scylla management True SCT_USE_MGMT
manager_prometheus_port Port to be used by the manager to contact Prometheus 5090 SCT_MANAGER_PROMETHEUS_PORT
target_scylla_mgmt_server_address Url to the repo of scylla manager version used to upgrade the manager server N/A SCT_TARGET_SCYLLA_MGMT_SERVER_ADDRESS
target_scylla_mgmt_agent_address Url to the repo of scylla manager version used to upgrade the manager agents N/A SCT_TARGET_SCYLLA_MGMT_AGENT_ADDRESS
update_db_packages A local directory of rpms to install a custom version on top of
the scylla installed (or from repo or from ami)
N/A SCT_UPDATE_DB_PACKAGES
monitor_branch The branch of scylla-monitoring to use branch-4.1 SCT_MONITOR_BRANCH
db_type Db type to install into db nodes, scylla/cassandra scylla SCT_DB_TYPE
user_prefix the prefix of the name of the cloud instances, defaults to username N/A SCT_USER_PREFIX
ami_id_db_scylla_desc version name to report stats to Elasticsearch and tagged on cloud instances N/A SCT_AMI_ID_DB_SCYLLA_DESC
sct_public_ip Override the default hostname address of the sct test runner,
for the monitoring of the Nemesis.
Can only work out of the box in AWS
N/A SCT_SCT_PUBLIC_IP
sct_ngrok_name Override the default hostname address of the sct test runner,
using ngrok server, see readme for more instructions
N/A SCT_NGROK_NAME
backtrace_decoding If True, all backtraces found in db nodes would be decoded automatically True SCT_BACKTRACE_DECODING
print_kernel_callstack Scylla will print the kernel callstack to logs if True; otherwise it will try, and may print a message
that it failed to do so.
N/A SCT_PRINT_KERNEL_CALLSTACK
instance_provision Instance provision type: spot / on_demand / spot_fleet
instance_provision_fallback_on_demand Create the instance with the on_demand provision type if creation with the selected 'instance_provision' type failed. Expected values: true/false (default: false) N/A
reuse_cluster If reuse_cluster is set it should hold test_id of the cluster that will be reused.
reuse_cluster: 7dc6db84-eb01-4b61-a946-b5c72e0f6d71
N/A SCT_REUSE_CLUSTER
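Reusing a cluster therefore takes the test_id of an earlier run (the UUID below is the one from the example above):

```shell
# Point SCT at an existing cluster instead of provisioning a new one.
export SCT_REUSE_CLUSTER=7dc6db84-eb01-4b61-a946-b5c72e0f6d71
echo "$SCT_REUSE_CLUSTER"
```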
test_id test id to filter by N/A SCT_TEST_ID
db_nodes_shards_selection How to select the number of shards of Scylla. Expected values: default/random.
Default value: 'default'.
With the random option, Scylla will start with different (random) shards on every node of the cluster
default SCT_NODES_SHARDS_SELECTION
seeds_selector How to select the seeds. Expected values: random/first/all all SCT_SEEDS_SELECTOR
seeds_num Number of seeds to select 1 SCT_SEEDS_NUM
send_email If true would send email out of the performance regression test N/A SCT_SEND_EMAIL
email_recipients list of emails to send the performance regression test report to ['[email protected]'] SCT_EMAIL_RECIPIENTS
email_subject_postfix Email subject postfix N/A SCT_EMAIL_SUBJECT_POSTFIX
enable_test_profiling Turn on sct profiling N/A SCT_ENABLE_TEST_PROFILING
ssh_transport Set type of ssh library to use. Could be 'fabric' (default) or 'libssh2' libssh2 SSH_TRANSPORT
bench_run If true would kill the scylla-bench thread in the test teardown N/A SCT_BENCH_RUN
fullscan If true would kill the fullscan thread in the test teardown N/A SCT_FULLSCAN
experimental when enabled scylla will use its experimental features True SCT_EXPERIMENTAL
server_encrypt when enabled scylla will use encryption on the server side N/A SCT_SERVER_ENCRYPT
client_encrypt when enabled scylla will use encryption on the client side N/A SCT_CLIENT_ENCRYPT
hinted_handoff enable or disable scylla hinted handoff (enabled/disabled) enabled SCT_HINTED_HANDOFF
authenticator which authenticator scylla will use AllowAllAuthenticator/PasswordAuthenticator N/A SCT_AUTHENTICATOR
authenticator_user the username if PasswordAuthenticator is used N/A SCT_AUTHENTICATOR_USER
authenticator_password the password if PasswordAuthenticator is used N/A SCT_AUTHENTICATOR_PASSWORD
authorizer which authorizer scylla will use AllowAllAuthorizer/CassandraAuthorizer N/A SCT_AUTHORIZER
service_level_shares List of service level shares - how many service levels to create and test. Used in SLA tests. List of ints, like: [100, 200] [1000] SCT_SERVICE_LEVEL_SHARES
alternator_port Port to configure for alternator in scylla.yaml N/A SCT_ALTERNATOR_PORT
dynamodb_primarykey_type Type of dynamodb table to create with range key or not, can be:
HASH,HASH_AND_RANGE
HASH SCT_DYNAMODB_PRIMARYKEY_TYPE
alternator_write_isolation Set the write isolation for the alternator table, see https://github.com/scylladb/scylla/blob/master/docs/alternator/alternator.md#write-isolation-policies for more details N/A SCT_ALTERNATOR_WRITE_ISOLATION
alternator_use_dns_routing If true, spawn a docker with a dns server for the ycsb loader to point to N/A SCT_ALTERNATOR_USE_DNS_ROUTING
alternator_enforce_authorization If true, enable the authorization check in dynamodb api (alternator) N/A SCT_ALTERNATOR_ENFORCE_AUTHORIZATION
alternator_access_key_id the aws_access_key_id that would be used for alternator N/A SCT_ALTERNATOR_ACCESS_KEY_ID
alternator_secret_access_key the aws_secret_access_key that would be used for alternator N/A SCT_ALTERNATOR_SECRET_ACCESS_KEY
region_aware_loader When in multi region mode, run stress on loader that is located in the same region as db node N/A SCT_REGION_AWARE_LOADER
append_scylla_args More arguments to append to scylla command line --blocked-reactor-notify-ms 25 --abort-on-lsa-bad-alloc 1 --abort-on-seastar-bad-alloc --abort-on-internal-error 1 --abort-on-ebadf 1 --enable-sstable-key-validation 1 SCT_APPEND_SCYLLA_ARGS
append_scylla_args_oracle More arguments to append to oracle command line --enable-cache false SCT_APPEND_SCYLLA_ARGS_ORACLE
append_scylla_yaml More configuration to append to /etc/scylla/scylla.yaml N/A SCT_APPEND_SCYLLA_YAML
nemesis_class_name Nemesis class to use (possible types in sdcm.nemesis).
Supported syntax:
- nemesis_class_name: "NemesisName" Run one nemesis in a single thread
- nemesis_class_name: "NemesisName:num" Run NemesisName in num
parallel threads on different nodes. Ex.: "ChaosMonkey:2"
- nemesis_class_name: "NemesisName1:num1 NemesisName2:num2" Run NemesisName1
in num1 parallel threads and NemesisName2 in num2
parallel threads. Ex.: "DisruptiveMonkey:1 NonDisruptiveMonkey:2"
NoOpMonkey SCT_NEMESIS_CLASS_NAME
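The syntax forms above translate to values like these (nemesis names taken from the examples in this entry):

```shell
# One nemesis, single thread.
export SCT_NEMESIS_CLASS_NAME="ChaosMonkey"
# One nemesis in 2 parallel threads on different nodes.
export SCT_NEMESIS_CLASS_NAME="ChaosMonkey:2"
# Two nemeses in parallel: 1 and 2 threads respectively.
export SCT_NEMESIS_CLASS_NAME="DisruptiveMonkey:1 NonDisruptiveMonkey:2"
echo "$SCT_NEMESIS_CLASS_NAME"
```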
nemesis_interval Nemesis sleep interval to use if None provided specifically in the test 5 SCT_NEMESIS_INTERVAL
nemesis_sequence_sleep_between_ops Sleep interval between nemesis operations for use in unique_sequence nemesis kind of tests N/A SCT_NEMESIS_SEQUENCE_SLEEP_BETWEEN_OPS
nemesis_during_prepare Run nemesis during prepare stage of the test True SCT_NEMESIS_DURING_PREPARE
nemesis_seed A seed number in order to repeat nemesis sequence as part of SisyphusMonkey N/A SCT_NEMESIS_SEED
nemesis_add_node_cnt Add/remove nodes during GrowShrinkCluster nemesis 1 SCT_NEMESIS_ADD_NODE_CNT
cluster_target_size Used for scale test: max size of the cluster N/A SCT_CLUSTER_TARGET_SIZE
space_node_threshold Space node threshold before starting nemesis (bytes)
The default value is 6GB (6x1024^3 bytes)
This value is supposed to reproduce
scylladb/scylladb#1140
N/A SCT_SPACE_NODE_THRESHOLD
nemesis_filter_seeds If true runs the nemesis only on non seed nodes True SCT_NEMESIS_FILTER_SEEDS
stress_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD
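A sketch of such a command with everything except -node (the schema and rate options are illustrative, not prescribed by SCT):

```shell
# A cassandra-stress write workload; SCT supplies the -node parameter itself.
export SCT_STRESS_CMD="cassandra-stress write cl=QUORUM duration=60m -schema 'replication(factor=3)' -mode cql3 native -rate threads=100"
echo "$SCT_STRESS_CMD"
```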
gemini_schema_url Url of the schema/configuration the gemini tool would use N/A SCT_GEMINI_SCHEMA_URL
gemini_cmd gemini command to run (for now used only in GeminiTest) N/A SCT_GEMINI_CMD
gemini_seed Seed number for gemini command N/A SCT_GEMINI_SEED
gemini_table_options Table options for the created table. Examples:
["cdc={'enabled': true}"]
["cdc={'enabled': true}", "compaction={'class': 'IncrementalCompactionStrategy'}"]
N/A SCT_GEMINI_TABLE_OPTIONS
instance_type_loader AWS instance type of the loader node N/A SCT_INSTANCE_TYPE_LOADER
instance_type_monitor AWS instance type of the monitor node N/A SCT_INSTANCE_TYPE_MONITOR
instance_type_db AWS instance type of the db node N/A SCT_INSTANCE_TYPE_DB
instance_type_db_oracle AWS instance type of the oracle node N/A SCT_INSTANCE_TYPE_DB_ORACLE
instance_type_runner instance type of the sct-runner node N/A SCT_INSTANCE_TYPE_RUNNER
region_name AWS regions to use N/A SCT_REGION_NAME
security_group_ids AWS security groups ids to use N/A SCT_SECURITY_GROUP_IDS
subnet_id AWS subnet ids to use N/A SCT_SUBNET_ID
ami_id_db_scylla AWS AMI id to use for scylla db node N/A SCT_AMI_ID_DB_SCYLLA
ami_id_loader AWS AMI id to use for loader node N/A SCT_AMI_ID_LOADER
ami_id_monitor AWS AMI id to use for monitor node N/A SCT_AMI_ID_MONITOR
ami_id_db_cassandra AWS AMI id to use for cassandra node N/A SCT_AMI_ID_DB_CASSANDRA
ami_id_db_oracle AWS AMI id to use for oracle node N/A SCT_AMI_ID_DB_ORACLE
root_disk_size_db N/A SCT_ROOT_DISK_SIZE_DB
root_disk_size_monitor N/A SCT_ROOT_DISK_SIZE_MONITOR
root_disk_size_loader N/A SCT_ROOT_DISK_SIZE_LOADER
root_disk_size_runner root disk size in GB for sct-runner N/A SCT_ROOT_DISK_SIZE_RUNNER
ami_db_scylla_user N/A SCT_AMI_DB_SCYLLA_USER
ami_monitor_user N/A SCT_AMI_MONITOR_USER
ami_loader_user N/A SCT_AMI_LOADER_USER
ami_db_cassandra_user N/A SCT_AMI_DB_CASSANDRA_USER
spot_max_price The max percentage of the on demand price we set for spot/fleet instances N/A SCT_SPOT_MAX_PRICE
extra_network_interface if true, create extra network interface on each node N/A SCT_EXTRA_NETWORK_INTERFACE
aws_instance_profile_name_db This is the name of the instance profile to set on all db instances N/A SCT_AWS_INSTANCE_PROFILE_NAME_DB
aws_instance_profile_name_loader This is the name of the instance profile to set on all loader instances N/A SCT_AWS_INSTANCE_PROFILE_NAME_LOADER
backup_bucket_backend the backend to be used for backup (e.g., 's3', 'gcs' or 'azure') N/A SCT_BACKUP_BUCKET_BACKEND
backup_bucket_location the bucket name to be used for backup (e.g., 'manager-backup-tests') N/A SCT_BACKUP_BUCKET_LOCATION
backup_bucket_region the AWS region of a bucket to be used for backup (e.g., 'eu-west-1') N/A SCT_BACKUP_BUCKET_REGION
use_prepared_loaders If True, we use prepared VMs for loader (instead of using docker images) N/A SCT_USE_PREPARED_LOADERS
gce_project gcp project name to use N/A SCT_GCE_PROJECT
gce_datacenter Supported: us-east1 - means that the zone will be selected automatically or you can mention the zone explicitly, for example: us-east1-b N/A SCT_GCE_DATACENTER
gce_network N/A SCT_GCE_NETWORK
gce_image GCE image to use for all node types: db, loader and monitor N/A SCT_GCE_IMAGE
gce_image_db N/A SCT_GCE_IMAGE_DB
gce_image_monitor N/A SCT_GCE_IMAGE_MONITOR
gce_image_loader N/A SCT_GCE_IMAGE_LOADER
gce_image_username N/A SCT_GCE_IMAGE_USERNAME
gce_instance_type_loader N/A SCT_GCE_INSTANCE_TYPE_LOADER
gce_root_disk_type_loader N/A SCT_GCE_ROOT_DISK_TYPE_LOADER
gce_n_local_ssd_disk_loader N/A SCT_GCE_N_LOCAL_SSD_DISK_LOADER
gce_instance_type_monitor N/A SCT_GCE_INSTANCE_TYPE_MONITOR
gce_root_disk_type_monitor N/A SCT_GCE_ROOT_DISK_TYPE_MONITOR
gce_n_local_ssd_disk_monitor N/A SCT_GCE_N_LOCAL_SSD_DISK_MONITOR
gce_instance_type_db N/A SCT_GCE_INSTANCE_TYPE_DB
gce_root_disk_type_db N/A SCT_GCE_ROOT_DISK_TYPE_DB
gce_n_local_ssd_disk_db N/A SCT_GCE_N_LOCAL_SSD_DISK_DB
gce_pd_standard_disk_size_db N/A SCT_GCE_PD_STANDARD_DISK_SIZE_DB
gce_pd_ssd_disk_size_db N/A SCT_GCE_PD_SSD_DISK_SIZE_DB
gce_pd_ssd_disk_size_loader N/A SCT_GCE_PD_SSD_DISK_SIZE_LOADER
gce_pd_ssd_disk_size_monitor N/A SCT_GCE_SSD_DISK_SIZE_MONITOR
azure_region_name Supported: eastus N/A SCT_AZURE_REGION_NAME
azure_instance_type_loader N/A SCT_AZURE_INSTANCE_TYPE_LOADER
azure_instance_type_monitor N/A SCT_AZURE_INSTANCE_TYPE_MONITOR
azure_instance_type_db N/A SCT_AZURE_INSTANCE_TYPE_DB
azure_instance_type_db_oracle N/A SCT_AZURE_INSTANCE_TYPE_DB_ORACLE
azure_image_db N/A SCT_AZURE_IMAGE_DB
azure_image_monitor N/A SCT_AZURE_IMAGE_MONITOR
azure_image_loader N/A SCT_AZURE_IMAGE_LOADER
azure_image_username N/A SCT_AZURE_IMAGE_USERNAME
eks_service_ipv4_cidr N/A SCT_EKS_SERVICE_IPV4_CIDR
eks_vpc_cni_version N/A SCT_EKS_VPC_CNI_VERSION
eks_role_arn N/A SCT_EKS_ROLE_ARN
eks_cluster_version N/A SCT_EKS_CLUSTER_VERSION
eks_nodegroup_role_arn N/A SCT_EKS_NODEGROUP_ROLE_ARN
gke_cluster_version N/A SCT_GKE_CLUSTER_VERSION
gke_k8s_release_channel K8S release channel name to be used. Expected values are: 'rapid', 'regular', 'stable' and '' (static / No channel). N/A SCT_GKE_K8S_RELEASE_CHANNEL
k8s_scylla_utils_docker_image Docker image to be used by Scylla operator to tune K8S nodes for performance. Used when 'k8s_enable_performance_tuning' is defined to 'True'. If not set then the default from operator will be used. N/A SCT_K8S_SCYLLA_UTILS_DOCKER_IMAGE
k8s_enable_performance_tuning Define whether performance tuning must run or not. N/A SCT_K8S_ENABLE_PERFORMANCE_TUNING
k8s_deploy_monitoring N/A SCT_K8S_DEPLOY_MONITORING
k8s_scylla_operator_docker_image Docker image to be used for installation of scylla operator. N/A SCT_K8S_SCYLLA_OPERATOR_DOCKER_IMAGE
k8s_scylla_operator_upgrade_docker_image Docker image to be used for upgrade of scylla operator. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_DOCKER_IMAGE
k8s_scylla_operator_helm_repo Link to the Helm repository where to get 'scylla-operator' charts from. N/A SCT_K8S_SCYLLA_OPERATOR_HELM_REPO
k8s_scylla_operator_upgrade_helm_repo Link to the Helm repository where to get 'scylla-operator' charts for upgrade. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_HELM_REPO
k8s_scylla_operator_chart_version Version of 'scylla-operator' Helm chart to use. If not set then latest one will be used. N/A SCT_K8S_SCYLLA_OPERATOR_CHART_VERSION
k8s_scylla_operator_upgrade_chart_version Version of 'scylla-operator' Helm chart to use for upgrade. N/A SCT_K8S_SCYLLA_OPERATOR_UPGRADE_CHART_VERSION
k8s_functional_test_dataset Defines the dataset used to pre-fill the cluster in functional tests. Defined in sdcm.utils.sstable.load_inventory. Expected values: BIG_SSTABLE_MULTI_COLUMNS_DATA, MULTI_COLUMNS_DATA N/A SCT_K8S_FUNCTIONAL_TEST_DATASET
k8s_scylla_datacenter N/A SCT_K8S_SCYLLA_DATACENTER
k8s_scylla_rack N/A SCT_K8S_SCYLLA_RACK
k8s_scylla_cpu_limit The CPU limit that will be set for each Scylla cluster deployed in K8S. If not set, then will be autocalculated. Example: '500m' or '2' N/A SCT_K8S_SCYLLA_CPU_LIMIT
k8s_scylla_memory_limit The memory limit that will be set for each Scylla cluster deployed in K8S. If not set, then will be autocalculated. Example: '16384Mi' N/A SCT_K8S_SCYLLA_MEMORY_LIMIT
k8s_scylla_cluster_name N/A SCT_K8S_SCYLLA_CLUSTER_NAME
k8s_n_scylla_pods_per_cluster Number of Scylla pods per Scylla cluster. 3 K8S_N_SCYLLA_PODS_PER_CLUSTER
k8s_scylla_disk_gi N/A SCT_K8S_SCYLLA_DISK_GI
k8s_scylla_disk_class N/A SCT_K8S_SCYLLA_DISK_CLASS
k8s_loader_cluster_name N/A SCT_K8S_LOADER_CLUSTER_NAME
k8s_n_loader_pods_per_cluster Number of loader pods per loader cluster. N/A SCT_K8S_N_LOADER_PODS_PER_CLUSTER
k8s_loader_run_type Defines how the loader pods must run. It may be either 'static' (default, run stress command on the constantly existing idle pod having reserved resources, perf-oriented) or 'dynamic' (run stress command in a separate pod as main thread and get logs in a separate retryable API call not having resource reservations). dynamic SCT_K8S_LOADER_RUN_TYPE
k8s_instance_type_auxiliary Instance type for the nodes of the K8S auxiliary/default node pool. N/A SCT_K8S_INSTANCE_TYPE_AUXILIARY
k8s_instance_type_monitor Instance type for the nodes of the K8S monitoring node pool. N/A SCT_K8S_INSTANCE_TYPE_MONITOR
mini_k8s_version N/A SCT_MINI_K8S_VERSION
k8s_cert_manager_version N/A SCT_K8S_CERT_MANAGER_VERSION
k8s_minio_storage_size 10Gi SCT_K8S_MINIO_STORAGE_SIZE
k8s_log_api_calls Defines whether the K8S API server logging must be enabled and its logs gathered. Be aware that it may be a really huge set of data. N/A SCT_K8S_LOG_API_CALLS
k8s_tenants_num Number of Scylla clusters to create in the K8S cluster. 1 SCT_TENANTS_NUM
k8s_enable_tls Defines whether we enable the operator serverless options N/A SCT_K8S_ENABLE_TLS
k8s_connection_bundle_file Serverless configuration bundle file N/A SCT_K8S_CONNECTION_BUNDLE_FILE
k8s_use_chaos_mesh enables chaos-mesh for k8s testing N/A SCT_K8S_USE_CHAOS_MESH
k8s_n_auxiliary_nodes Number of nodes in auxiliary pool N/A SCT_K8S_N_AUXILIARY_NODES
k8s_n_monitor_nodes Number of nodes in monitoring pool that will be used for scylla-operator's deployed monitoring pods. N/A SCT_K8S_N_MONITOR_NODES
mgmt_docker_image Scylla manager docker image, i.e. 'scylladb/scylla-manager:2.2.1' N/A SCT_MGMT_DOCKER_IMAGE
docker_image Scylla docker image repo, i.e. 'scylladb/scylla', if omitted is calculated from scylla_version N/A SCT_DOCKER_IMAGE
db_nodes_private_ip N/A SCT_DB_NODES_PRIVATE_IP
db_nodes_public_ip N/A SCT_DB_NODES_PUBLIC_IP
loaders_private_ip N/A SCT_LOADERS_PRIVATE_IP
loaders_public_ip N/A SCT_LOADERS_PUBLIC_IP
monitor_nodes_private_ip N/A SCT_MONITOR_NODES_PRIVATE_IP
monitor_nodes_public_ip N/A SCT_MONITOR_NODES_PUBLIC_IP
cassandra_stress_population_size 1000000 SCT_CASSANDRA_STRESS_POPULATION_SIZE
cassandra_stress_threads 1000 SCT_CASSANDRA_STRESS_THREADS
add_node_cnt 1 SCT_ADD_NODE_CNT
stress_multiplier Number of cassandra-stress processes 1 SCT_STRESS_MULTIPLIER
stress_multiplier_w Number of cassandra-stress processes for write workload 1 SCT_STRESS_MULTIPLIER_W
stress_multiplier_r Number of cassandra-stress processes for read workload 1 SCT_STRESS_MULTIPLIER_R
stress_multiplier_m Number of cassandra-stress processes for mixed workload 1 SCT_STRESS_MULTIPLIER_M
run_fullscan A list of dictionaries describing the parameters for the fullscan operations to be run. Each dictionary describes a separate thread to be spawned. Possible modes include: "table" for regular full table scans, "partition" for fullscans targeting partitions, "aggregate" for aggregate operations and "random" for a random selection of the former modes. N/A SCT_RUN_FULLSCAN
keyspace_num 1 SCT_KEYSPACE_NUM
round_robin N/A SCT_ROUND_ROBIN
batch_size 1 SCT_BATCH_SIZE
pre_create_schema N/A SCT_PRE_CREATE_SCHEMA
pre_create_keyspace Command to create a keyspace to be pre-created before running the workload N/A SCT_PRE_CREATE_KEYSPACE
post_prepare_cql_cmds CQL Commands to run after prepare stage finished (relevant only to longevity_test.py) N/A SCT_POST_PREPARE_CQL_CMDS
prepare_wait_no_compactions_timeout At the end of the prepare stage, run major compaction and wait this long (in minutes) for compaction to finish. (relevant only to longevity_test.py) Should be used only when facing issues like compaction affecting the test or load N/A SCT_PREPARE_WAIT_NO_COMPACTIONS_TIMEOUT
compaction_strategy Choose a specific compaction strategy to pre-create schema with. SizeTieredCompactionStrategy SCT_COMPACTION_STRATEGY
sstable_size Configure sstable size for the usage of pre-create-schema mode N/A SSTABLE_SIZE
cluster_health_check When true, start cluster health checker for all nodes True SCT_CLUSTER_HEALTH_CHECK
validate_partitions when true, a log of the partitions before and after the nemesis run is compared N/A SCT_VALIDATE_PARTITIONS
table_name table name to check for the validate_partitions check N/A SCT_TABLE_NAME
primary_key_column primary key of the table to check for the validate_partitions check N/A SCT_PRIMARY_KEY_COLUMN
stress_read_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_READ_CMD
prepare_verify_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_VERIFY_CMD
user_profile_table_count number of tables to create for template user c-s 1 SCT_USER_PROFILE_TABLE_COUNT
scylla_mgmt_upgrade_to_repo Url to the repo of scylla manager version to upgrade to for management tests N/A SCT_SCYLLA_MGMT_UPGRADE_TO_REPO
partition_range_with_data_validation Relevant for scylla-bench. Holds a range (min - max) of PK values for partitions whose data
was written with data validation and will be validated during the read.
Example: 0-250.
Optional parameter for DeleteByPartitionsMonkey and DeleteByRowsRangeMonkey
N/A SCT_PARTITION_RANGE_WITH_DATA_VALIDATION
max_partitions_in_test_table Relevant for scylla-bench. MAX partition keys (partition-count) in the scylla_bench.test table.
Mandatory parameter for DeleteByPartitionsMonkey and DeleteByRowsRangeMonkey
N/A SCT_MAX_PARTITIONS_IN_TEST_TABLE
stress_cmd_w cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_W
stress_cmd_r cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_R
stress_cmd_m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_M
prepare_write_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_WRITE_CMD
stress_cmd_no_mv cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_NO_MV
stress_cmd_no_mv_profile N/A SCT_STRESS_CMD_NO_MV_PROFILE
cs_user_profiles N/A SCT_CS_USER_PROFILES
cs_duration 50m SCT_CS_DURATION
cs_debug enable debug for cassandra-stress N/A SCT_CS_DEBUG
stress_cmd_mv cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_MV
prepare_stress_cmd cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_STRESS_CMD
skip_download N/A SCT_SKIP_DOWNLOAD
sstable_file N/A SCT_SSTABLE_FILE
sstable_url N/A SCT_SSTABLE_URL
sstable_md5 N/A SCT_SSTABLE_MD5
flush_times N/A SCT_FLUSH_TIMES
flush_period N/A SCT_FLUSH_PERIOD
new_scylla_repo N/A SCT_NEW_SCYLLA_REPO
new_version Assign new upgrade version, use it to upgrade to a specific minor release. eg: 3.0.1 N/A SCT_NEW_VERSION
target_upgrade_version Assign target upgrade version; used to decide whether the truncate entries test should run. This test should be performed when the target upgrade version >= 3.1 N/A SCT_TARGET_UPGRADE_VERSION
upgrade_node_packages N/A SCT_UPGRADE_NODE_PACKAGES
test_sst3 N/A SCT_TEST_SST3
test_upgrade_from_installed_3_1_0 If true, enable an option to work around a scylla issue when 3.1.0 is installed N/A SCT_TEST_UPGRADE_FROM_INSTALLED_3_1_0
recover_system_tables N/A SCT_RECOVER_SYSTEM_TABLES
stress_cmd_1 cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_1
stress_cmd_complex_prepare cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_PREPARE
prepare_write_stress cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_PREPARE_WRITE_STRESS
stress_cmd_read_10m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_10M
stress_cmd_read_cl_one cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
N/A SCT_STRESS_CMD_READ_CL_ONE
stress_cmd_read_60m cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_60M
stress_cmd_complex_verify_read cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_READ
stress_cmd_complex_verify_more cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_MORE
write_stress_during_entire_test cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_WRITE_STRESS_DURING_ENTIRE_TEST
verify_data_after_entire_test cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
N/A SCT_VERIFY_DATA_AFTER_ENTIRE_TEST
stress_cmd_read_cl_quorum cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_READ_CL_QUORUM
verify_stress_after_cluster_upgrade cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_VERIFY_STRESS_AFTER_CLUSTER_UPGRADE
stress_cmd_complex_verify_delete cassandra-stress commands.
You can specify everything but the -node parameter, which is going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
N/A SCT_STRESS_CMD_COMPLEX_VERIFY_DELETE
scylla_encryption_options Options used to enable encryption at rest for tables N/A SCT_SCYLLA_ENCRYPTION_OPTIONS
logs_transport How to transport logs: rsyslog, ssh or docker syslog-ng SCT_LOGS_TRANSPORT
rsyslog_imjournal_rate_limit_interval Value for rsyslog's imjournal Ratelimit.Interval option (maximum 65535 until rsyslog v8.34) 60 SCT_RSYSLOG_IMJOURNAL_RATE_LIMIT_INTERVAL
rsyslog_imjournal_rate_limit_burst Value for rsyslog's imjournal Ratelimit.Burst option (maximum 65535 until rsyslog v8.34) 50000 SCT_RSYSLOG_IMJOURNAL_RATE_LIMIT_BURST
collect_logs Collect logs from instances and sct runner N/A SCT_COLLECT_LOGS
execute_post_behavior Run post behavior actions in sct teardown step N/A SCT_EXECUTE_POST_BEHAVIOR
post_behavior_db_nodes Failure/post test behavior, i.e. what to do with the db cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_DB_NODES
post_behavior_loader_nodes Failure/post test behavior, i.e. what to do with the loader cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
destroy SCT_POST_BEHAVIOR_LOADER_NODES
post_behavior_monitor_nodes Failure/post test behavior, i.e. what to do with the monitor cloud instances at the end of the test.

'destroy' - Destroy instances and credentials (default)
'keep' - Keep instances running and leave credentials alone
'keep-on-failure' - Keep instances if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_MONITOR_NODES
post_behavior_k8s_cluster Failure/post test behavior, i.e. what to do with the k8s cluster at the end of the test.

'destroy' - Destroy k8s cluster and credentials (default)
'keep' - Keep k8s cluster running and leave credentials alone
'keep-on-failure' - Keep k8s cluster if testrun failed
keep-on-failure SCT_POST_BEHAVIOR_K8S_CLUSTER
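A minimal sketch of how the post-behavior options above might be combined for a debugging run, keeping the expensive DB and monitor nodes only on failure while always destroying loaders:

```yaml
# keep DB and monitor instances around only when the run fails
post_behavior_db_nodes: keep-on-failure
post_behavior_monitor_nodes: keep-on-failure
post_behavior_loader_nodes: destroy
```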
internode_compression scylla option: internode_compression N/A SCT_INTERNODE_COMPRESSION
internode_encryption scylla sub option of server_encryption_options: internode_encryption all SCT_INTERNODE_ENCRYPTION
jmx_heap_memory The total size of the memory allocated to JMX. Values in MB, so for 1GB enter 1024(MB) N/A SCT_JMX_HEAP_MEMORY
loader_swap_size The size of the swap file for the loaders; the configured value x yields a size of x * 1MB in bytes 1024 SCT_LOADER_SWAP_SIZE
monitor_swap_size The size of the swap file for the monitors; the configured value x yields a size of x * 1MB in bytes 8192 SCT_MONITOR_SWAP_SIZE
store_perf_results A flag that indicates whether or not to gather the prometheus stats at the end of the run.
Intended to be used in performance testing
N/A SCT_STORE_PERF_RESULTS
append_scylla_setup_args More arguments to append to scylla_setup command line N/A SCT_APPEND_SCYLLA_SETUP_ARGS
use_preinstalled_scylla Don't install/update ScyllaDB on DB nodes N/A SCT_USE_PREINSTALLED_SCYLLA
stress_cdclog_reader_cmd cdc-stressor command to read cdc_log table.
You can specify everything but the -node, -keyspace and -table parameters, which are going to
be provided by the test suite infrastructure.
multiple commands can be passed as a list
cdc-stressor -stream-query-round-duration 30s SCT_STRESS_CDCLOG_READER_CMD
store_cdclog_reader_stats_in_es Add cdclog reader stats to ES for future performance result calculating N/A SCT_STORE_CDCLOG_READER_STATS_IN_ES
stop_test_on_stress_failure If set to True, the test will be stopped immediately when a stress command fails.
When set to False, the test will continue to run even when there are errors in the
stress process
True SCT_STOP_TEST_ON_STRESS_FAILURE
stress_cdc_log_reader_batching_enable Retrieve data from multiple streams in one poll True SCT_STRESS_CDC_LOG_READER_BATCHING_ENABLE
use_legacy_cluster_init Use legacy cluster initialization with autobootstrap disabled and parallel node setup N/A SCT_USE_LEGACY_CLUSTER_INIT
availability_zone Availability zone to use. Same for multi-region scenario. N/A SCT_AVAILABILITY_ZONE
aws_fallback_to_next_availability_zone Try all availability zones one by one in order to maximize the chances of getting
the requested instance capacity.
N/A SCT_AWS_FALLBACK_TO_NEXT_AVAILABILITY_ZONE
num_nodes_to_rollback Number of nodes to upgrade and rollback in test_generic_cluster_upgrade N/A SCT_NUM_NODES_TO_ROLLBACK
upgrade_sstables Whether to upgrade sstables as part of upgrade_node or not N/A SCT_UPGRADE_SSTABLES
stress_before_upgrade Stress command to be run before the upgrade (prepare stage) N/A SCT_STRESS_BEFORE_UPGRADE
stress_during_entire_upgrade Stress command to be run during the upgrade - the user should take care to choose a suitable duration N/A SCT_STRESS_DURING_ENTIRE_UPGRADE
stress_after_cluster_upgrade Stress command to be run after full upgrade - usually used to read the dataset for verification N/A SCT_STRESS_AFTER_CLUSTER_UPGRADE
jepsen_scylla_repo Link to the git repository with Jepsen Scylla tests https://github.com/jepsen-io/scylla.git SCT_JEPSEN_SCYLLA_REPO
jepsen_test_cmd Jepsen test command (e.g., 'test-all') ['test-all -w cas-register --concurrency 10n', 'test-all -w counter --concurrency 10n', 'test-all -w cmap --concurrency 10n', 'test-all -w cset --concurrency 10n', 'test-all -w write-isolation --concurrency 10n', 'test-all -w list-append --concurrency 10n', 'test-all -w wr-register --concurrency 10n'] SCT_JEPSEN_TEST_CMD
jepsen_test_count Number of reruns of a single Jepsen test command 1 SCT_JEPSEN_TEST_COUNT
jepsen_test_run_policy Jepsen test run policy (i.e., what we want to consider as passed for a single test)

'most' - most test runs are passed
'any' - one pass is enough
'all' - all test runs should pass
all SCT_JEPSEN_TEST_RUN_POLICY
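For instance, to rerun one Jepsen workload several times and require every run to pass, the parameters above could be combined as follows (the workload string mirrors one of the defaults; the count is illustrative):

```yaml
# rerun a single workload three times; every run must pass
jepsen_test_cmd:
  - "test-all -w cas-register --concurrency 10n"
jepsen_test_count: 3
jepsen_test_run_policy: all
```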
max_events_severities Limit severity level for event types N/A SCT_MAX_EVENTS_SEVERITIES
scylla_rsyslog_setup Configure rsyslog on Scylla nodes to send logs to monitoring nodes N/A SCT_SCYLLA_RSYSLOG_SETUP
events_limit_in_email Limit the number of events in email reports 10 SCT_EVENTS_LIMIT_IN_EMAIL
data_volume_disk_num Number of additional data volumes attached to instances
if data_volume_disk_num > 0, then data volumes (ebs on aws) will be
used for scylla data directory
N/A SCT_DATA_VOLUME_DISK_NUM
data_volume_disk_type Type of additional volumes: gp2 gp3 io2 N/A SCT_DATA_VOLUME_DISK_TYPE
data_volume_disk_size Size of additional volume in GB N/A SCT_DATA_VOLUME_DISK_SIZE
data_volume_disk_iops Number of iops for ebs types io2 io3 gp3 N/A SCT_DATA_VOLUME_DISK_IOPS
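Assuming an AWS backend, attaching two gp3 data volumes for the Scylla data directory might look like this (sizes and iops are illustrative):

```yaml
# two 500 GB gp3 EBS volumes per node, used for the scylla data directory
data_volume_disk_num: 2
data_volume_disk_type: gp3
data_volume_disk_size: 500
data_volume_disk_iops: 3000
```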
run_db_node_benchmarks Flag for running db node benchmarks before the tests N/A SCT_RUN_DB_NODE_BENCHMARKS
nemesis_selector nemesis_selector gets a list of "nemesis properties" and filters IN all the nemeses that have
ALL the properties in that list set to true (the intersection of all properties).
(In other words, it filters out every nemesis that is missing even ONE of these properties set to true.)
IMPORTANT: If a property doesn't exist, ALL the nemeses will be included.
N/A SCT_NEMESIS_SELECTOR
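For example, assuming nemeses expose boolean properties such as `disruptive` and `kubernetes` (the property names here are illustrative), a selector keeping only the nemeses that have both would be:

```yaml
# run only nemeses flagged both disruptive and kubernetes
nemesis_selector:
  - disruptive
  - kubernetes
```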
nemesis_exclude_disabled nemesis_exclude_disabled determines whether 'disabled' nemeses are filtered out of the list
or allowed to be used. This makes it easy to disable overly 'risky' or 'extreme' nemeses by default,
for all longevities. For example, it is undesirable to run the ToggleGcModeMonkey in standard longevities
that run a stress with data validation.
True SCT_NEMESIS_EXCLUDE_DISABLED
nemesis_multiply_factor Multiply the list of nemesis to execute by the specified factor 6 SCT_NEMESIS_MULTIPLY_FACTOR
raid_level RAID level: 0 - RAID0, 5 - RAID5 N/A SCT_RAID_LEVEL
bare_loaders Don't install anything but node_exporter to the loaders during cluster setup N/A SCT_BARE_LOADERS
stress_image Dict of the images to use for the stress tools {'ndbench': 'scylladb/hydra-loaders:ndbench-jdk8-20210720', 'ycsb': 'scylladb/hydra-loaders:ycsb-jdk8-20220918', 'nosqlbench': 'scylladb/hydra-loaders:nosqlbench-4.15.49', 'cassandra-stress': '', 'scylla-bench': 'scylladb/hydra-loaders:scylla-bench-v0.1.18', 'gemini': 'scylladb/hydra-loaders:gemini-1.7.7', 'alternator-dns': 'scylladb/hydra-loaders:alternator-dns-0.1', 'cdc-stresser': 'scylladb/hydra-loaders:cdc-stresser-20210630', 'kcl': 'scylladb/hydra-loaders:kcl-jdk8-20210526-ShardSyncStrategyType-PERIODIC', 'harry': 'scylladb/hydra-loaders:cassandra-harry-jdk11-20220816'} SCT_STRESS_IMAGE
enable_argus Control reporting to argus True SCT_ENABLE_ARGUS
cs_populating_distribution Set the c-s parameter '-pop' with a gauss/uniform distribution for
performance gradual throughput-growth tests
N/A SCT_CS_POPULATING_DISTRIBUTION
num_loaders_step Number of loaders which should be added per step N/A SCT_NUM_LOADERS_STEP
stress_threads_start_num Number of threads for c-s command N/A SCT_STRESS_THREADS_START_NUM
num_threads_step Number of threads to be added per step N/A SCT_NUM_THREADS_STEP
stress_step_duration Duration of time for stress round N/A SCT_STRESS_STEP_DURATION
max_deviation Max relative difference between the best and current throughput;
if the current throughput is larger than the best by max_deviation, it becomes the new best one
N/A SCT_MAX_DEVIATION
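The stepped-throughput parameters above could be combined in a performance profile along these lines (all values illustrative):

```yaml
# gradual throughput-growth sketch: start with 10 c-s threads,
# add 10 per 15-minute step; a new best must beat the old by >5%
stress_threads_start_num: 10
num_threads_step: 10
stress_step_duration: 15m
max_deviation: 0.05
```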
n_stress_process Number of stress processes per loader N/A SCT_N_STRESS_PROCESS
stress_process_step Number of stress processes to add/remove on each round N/A SCT_STRESS_PROCESS_STEP
use_hdr_cs_histogram Enable HDR histogram logging for c-s N/A SCT_USE_HDR_CS_HISTOGRAM