Default Vault instances are created with wrong ebs volume size #805
Description
Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
What happened:
When doing a fresh Tarmak deployment, the Vault instances are created with the wrong EBS data volume size: 10Gi instead of the 5Gi specified in tarmak.yaml.
What you expected to happen:
The Vault instances to be deployed with 5Gi EBS data volumes.
How to reproduce it (as minimally and precisely as possible):
I deployed into all 3 AZs, using a multi-cluster setup.
Ensure that tarmak.yaml specifies 5Gi as the data volume size:
```yaml
[..]
- amazon: {}
  image: centos-puppet-agent
  maxCount: 3
  metadata:
    creationTimestamp: "2019-04-27T15:54:14Z"
    name: vault
  minCount: 3
  size: tiny
  subnets:
  - metadata:
      creationTimestamp: null
    zone: eu-west-1a
  - metadata:
      creationTimestamp: null
    zone: eu-west-1b
  - metadata:
      creationTimestamp: null
    zone: eu-west-1c
  type: vault
  volumes:
  - metadata:
      creationTimestamp: "2019-04-27T15:54:14Z"
    name: root
    size: 16Gi
    type: ssd
  - metadata:
      creationTimestamp: "2019-04-27T15:54:14Z"
    name: data
    size: 5Gi   # <<------------ Here
    type: ssd
[..]
```
```
tarmak init
tarmak apply   # hub cluster only
```
Once deployed, SSH to any of the Vault instances and confirm the wrong size:
```
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  16G  0 disk
└─xvda1 202:1    0  16G  0 part /
xvdd    202:48   0  10G  0 disk /var/lib/consul   <<------------ Here
```
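Note that lsblk's SIZE column rounds to human-readable units; to rule out a rounding artifact you can read the raw byte count with `lsblk -b -n -o SIZE /dev/xvdd` and convert it yourself. A minimal sketch, feeding the 10GiB figure from the output above as a stand-in for the lsblk result:

```shell
# Convert a raw byte size (as `lsblk -b` reports it) to Gi.
# 10737418240 bytes is the value lsblk would print for the 10G xvdd above.
echo 10737418240 | awk '{ printf "%dGi\n", $1 / (1024 * 1024 * 1024) }'
# → 10Gi
```

This confirms the attached volume really is 10GiB, not a 5Gi volume displayed imprecisely.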
Anything else we need to know?:
No.
Environment:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- Install tools:
- Others: