
Commit 9633cb3

Merge pull request #36 from RADAR-base/dev
Sync working and updated versions of charts for various components.
2 parents ad9a5ed + 572e4a4

File tree

128 files changed: 8188 additions & 1197 deletions


.gitignore

Lines changed: 4 additions & 0 deletions
@@ -1,5 +1,9 @@
 /.env
 cp-helm-charts/
+kubernetes-HDFS/
 keystore.p12
 radar-is.yml
 *.tgz
+production.yaml
+.idea/
+RADAR-Kubernetes.iml

README.md

Lines changed: 41 additions & 7 deletions
@@ -39,9 +39,8 @@ After installing them run following commands:
 git clone https://github.com/RADAR-base/RADAR-Kubernetes.git
 cd RADAR-Kubernetes
 git clone https://github.com/RADAR-base/cp-helm-charts.git
-cp env.template .env
-vim .env # Change setup parameters and configurations
-source .env
+cp base.yaml production.yaml
+vim production.yaml # Change setup parameters and configurations
 ./bin/keystore-init
 helmfile sync --concurrency 1
 ```
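The updated flow above replaces the `.env`-based setup with a `production.yaml` copied from `base.yaml`. Before running `helmfile sync`, a quick pre-flight check can save a long partial install; a minimal sketch, not part of the commit (the `helmfile diff` step assumes the helm-diff plugin is installed):

```
# Confirm the kubectl context points at the intended cluster
kubectl config current-context
kubectl get nodes

# Optional: preview what would change (requires the helm-diff plugin)
helmfile diff

# Apply all releases, one at a time
helmfile sync --concurrency 1
```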
@@ -52,7 +51,10 @@ Having `--concurrency 1` will make installation slower but it is necessary becau
 The Prometheus-operator will define a `ServiceMonitor` CRD that other services with monitoring enabled will use, so please make sure the Prometheus chart installs successfully before proceeding. By default it is configured to wait 10 minutes for the Prometheus deployment to finish; if that is not enough for your environment, change it accordingly. If the deployment failed on the first attempt, delete it first and then try installing the stack again:
 ```
 helm del --purge prometheus-operator
-kubectl delete crd prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com alertmanagers.monitoring.coreos.com
+kubectl delete crd prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com
+kubectl delete psp prometheus-operator-alertmanager prometheus-operator-grafana prometheus-operator-grafana-test prometheus-operator-kube-state-metrics prometheus-operator-operator prometheus-operator-prometheus prometheus-operator-prometheus-node-exporter
+kubectl delete mutatingwebhookconfigurations prometheus-admission
+kubectl delete ValidatingWebhookConfiguration prometheus-admission
 ```

 ### kafka-init
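The added cleanup lines remove the prometheus-operator CRDs, pod security policies, and admission webhook configurations that a failed install leaves behind. A small verification sketch, not part of this commit, for confirming nothing is left before retrying:

```
# Each of these should print nothing once cleanup is complete
kubectl get crd | grep monitoring.coreos.com
kubectl get psp | grep prometheus-operator
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep prometheus-admission
```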
@@ -65,13 +67,19 @@ kubectl delete pvc datadir-0-cp-kafka-{0,1,2} datadir-cp-zookeeper-{0,1,2} datal
 ### Uninstall
 If you want to remove RADAR-base from your cluster, set all of your `RADAR_INSTALL_*` variables in the `.env` file to `false` and run the `helmfile sync --concurrency 1` command to delete the charts. After that, run the following commands to remove all traces of the installation:
 ```
-kubectl delete crd prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com alertmanagers.monitoring.coreos.com
-kubectl delete crd certificates.certmanager.k8s.io challenges.certmanager.k8s.io clusterissuers.certmanager.k8s.io issuers.certmanager.k8s.io orders.certmanager.k8s.io
+kubectl delete crd prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com
+kubectl delete psp prometheus-operator-alertmanager prometheus-operator-grafana prometheus-operator-grafana-test prometheus-operator-kube-state-metrics prometheus-operator-operator prometheus-operator-prometheus prometheus-operator-prometheus-node-exporter
+kubectl delete mutatingwebhookconfigurations prometheus-admission
+kubectl delete ValidatingWebhookConfiguration prometheus-admission
+
+kubectl delete crd certificaterequests.cert-manager.io certificates.cert-manager.io challenges.acme.cert-manager.io clusterissuers.cert-manager.io issuers.cert-manager.io orders.acme.cert-manager.io
 kubectl delete pvc --all
 kubectl -n cert-manager delete secrets letsencrypt-prod
 kubectl -n default delete secrets radar-base-tls
 kubectl -n monitoring delete secrets radar-base-tls
-kubectl -n monitoring delete psp prometheus-alertmanager prometheus-operator prometheus-prometheus
+
+kubectl delete crd cephblockpools.ceph.rook.io cephclients.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io volumes.rook.io
+kubectl delete psp 00-rook-ceph-operator
 ```

 ## Volume expansion
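The uninstall instructions now also drop the renamed cert-manager CRDs (`*.cert-manager.io` instead of `*.certmanager.k8s.io`) and the Rook/Ceph CRDs and pod security policy. A hedged check, not part of the commit, for confirming the cluster is clean afterwards:

```
# All of these should come back empty after a full uninstall
kubectl get crd | grep -E 'monitoring.coreos.com|cert-manager.io|rook.io'
kubectl get pvc --all-namespaces
kubectl get psp | grep -E 'prometheus-operator|rook'
```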
@@ -194,3 +202,29 @@ Alternatively you can forward SSH port to your local machine and connect locally
 kubectl port-forward svc/radar-output 2222:22
 ```
 Now you can use "127.0.0.1" as `host` and "2222" as the `port` to connect to SFTP server.
+
+
+
+
+# - name: rook
+#   chart: rook-release/rook-ceph
+#   version: v1.2.3
+#   namespace: rook-ceph
+#   wait: true
+#   installed: {{ .Values.rook._install }}
+#
+# - name: ceph
+#   chart: ../charts/ceph
+#   namespace: rook-ceph
+#   wait: true
+#   installed: {{ .Values.ceph._install }}
+#   values:
+#     - {{ .Values.ceph | toYaml | indent 8 | trim }}
+
+#
+# - name: sftp
+#   chart: ../charts/sftp
+#   wait: true
+#   installed: {{ .Values.sftp._install }}
+#   values:
+#     - {{ .Values.sftp | toYaml | indent 8 | trim }}
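The appended block is a set of commented-out helmfile release entries for rook, ceph, and sftp, each gated on a `<release>._install` value. As an illustration only (the values layout is an assumption inferred from the `{{ .Values.sftp._install }}` references, not something defined in this commit), enabling the sftp release would mean uncommenting its entry and setting something like:

```
# production.yaml (sketch; only the _install flags are implied by the entries above)
sftp:
  _install: true
rook:
  _install: false
ceph:
  _install: false
```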
