# stressng

stressng is a performance test tool that stresses various system resources such as the CPU, memory, and the I/O subsystem.

## Running stressng

Assuming you have followed the instructions to deploy the operator, you can modify cr.yaml to your needs.

The optional argument runtime_class can be set to apply a runtimeClassName to the podSpec. This is primarily intended for Kata containers.
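For example, to run the workload pods with the Kata runtime, the field can be set alongside the other workload arguments (a sketch; `kata` is an assumed RuntimeClass name that must already exist on your cluster):

```yaml
  workload:
    name: "stressng"
    args:
      # "kata" is a placeholder; use whatever RuntimeClass
      # is defined on your cluster.
      runtime_class: "kata"
```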

An example CR might look like this:

```yaml
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: stressng
  namespace: benchmark-operator
spec:
  elasticsearch:
    url: "http://es-instance.com:9200"
    index_name: ripsaw-stressng
  metadata:
    collection: true
  workload:
    name: "stressng"
    args:
      # general options
      runtype: "parallel"
      timeout: "30"
      instances: 1
      # nodeselector:
      # cpu stressor options
      cpu_stressors: "1"
      cpu_percentage: "100"
      # vm stressor option
      vm_stressors: "1"
      vm_bytes: "128M"
      # mem stressor options
      mem_stressors: "1"
```

The stressng benchmark is divided into three subsystems, driven by so-called stressors. In the above example we have CPU stressors, virtual memory (vm) stressors, and memory stressors. They run in parallel, but could also run sequentially. Looking at the fields individually:

| field name | description |
| --- | --- |
| runtype | parallel or sequential |
| timeout | time for the stressors to run |
| instances | number of instances (pods) to run |
| nodeselector | label for nodes on which the stressor pods will run |
| cpu_stressors | number of CPU stressors |
| cpu_percentage | percentage at which the stressor will run, e.g. 70% of a CPU |
| vm_stressors | number of vm stressors |
| vm_bytes | amount of memory the vm stressor will use |
| mem_stressors | number of memory stressors |
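Inside the pod, these fields are ultimately handed to the stress-ng command line. The sketch below shows a plausible mapping for the CPU and vm fields in the example CR; the exact command the operator templates may differ, but `--cpu`, `--cpu-load`, `--vm`, `--vm-bytes`, and `--timeout` are standard stress-ng options:

```shell
# Approximate stress-ng invocation for the example CR above.
# (Assumption: the operator's exact generated command may differ.)
CPU_STRESSORS=1
CPU_PERCENTAGE=100
VM_STRESSORS=1
VM_BYTES=128M
TIMEOUT=30

CMD="stress-ng --cpu ${CPU_STRESSORS} --cpu-load ${CPU_PERCENTAGE} --vm ${VM_STRESSORS} --vm-bytes ${VM_BYTES} --timeout ${TIMEOUT}s"
echo "$CMD"
```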

Once done creating/editing the resource file, you can run it with:

```shell
# kubectl apply -f <path_to_cr_file>
```

## Running stressng in VMs through kubevirt/cnv [Preview]

Note: this is currently in preview mode.

Changes to the CR file:

```yaml
      kind: vm
      client_vm:
        dedicatedcpuplacement: false
        sockets: 1
        cores: 2
        threads: 1
        image: kubevirt/fedora-cloud-container-disk-demo:latest
        limits:
          memory: 4Gi
        requests:
          memory: 4Gi
        network:
          front_end: bridge # or masquerade
          multiqueue:
            enabled: false # if set to true, it is highly recommended to set selinux to permissive on the nodes where the VMs will be scheduled
            queues: 0 # required if enabled is true; should ideally equal the number of vCPUs (sockets*cores*threads); your image must have ethtool installed
        extra_options:
          - none
          #- hostpassthrough
```

The above are the additional changes required to run stressng in VMs. Currently, we only support images that can be used as a containerDisk.

You can easily make your own container disk image by downloading a qcow2 image of your choice. You can then make changes to the qcow2 image as needed using virt-customize.

```shell
cat << END > Dockerfile
FROM scratch
ADD <yourqcow2image>.qcow2 /disk/
END

podman build -t <imageurl> .
podman push <imageurl>
```
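A concrete sketch of the steps above, with placeholder names (the qcow2 file and registry URL are hypothetical; the commented commands require libguestfs-tools and podman):

```shell
# Placeholder file and image names -- substitute your own.
QCOW2=fedora-cloud-base.qcow2
IMAGE=quay.io/example/fedora-stressng:latest

# Optionally customize the image first (needs libguestfs-tools), e.g.:
#   virt-customize -a "$QCOW2" --install stress-ng

# Generate the containerDisk Dockerfile.
cat << END > Dockerfile
FROM scratch
ADD ${QCOW2} /disk/
END

# Then build and push:
#   podman build -t "$IMAGE" .
#   podman push "$IMAGE"
```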

You can access the results either by indexing them directly or through the console. The results are stored in the /tmp/ directory.
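To index into the results directly, you can query the Elasticsearch instance and index named in the example CR (a sketch; es-instance.com is the placeholder from the CR above, and the query requires network access to the instance):

```shell
# Compose the search URL from the CR's elasticsearch settings.
ES_URL="http://es-instance.com:9200"
INDEX="ripsaw-stressng"
SEARCH="${ES_URL}/${INDEX}/_search"
echo "$SEARCH"

# Fetch the indexed documents:
#   curl -s "${SEARCH}?pretty"
```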