:::warning

This checklist is currently a *work in progress* and incomplete.

It is imperative that the following topics are clarified and the described resources are available before performing
the initial installation.
:::

This list describes some aspects (without claiming to be exhaustive) that should be clarified before a pilot and at least before production installation.
The aim of this list is to reduce waiting times, unsuccessful attempts, errors and major adjustment work in the
installation process itself as well as in subsequent operation.

## General

### Availability and Support

* What requirements do you have for the availability of the system?
* What gradations or requirements exist for resolving the different types of problems?
  * Example problem scenarios:
    * complete cloud service outage or downtime
    * performance problems
    * application problems
    * ...
* Where should rollouts and changes to the system be tested or prepared, or does a dedicated environment make sense for this?
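
To make an availability requirement concrete, it helps to translate the target percentage into tolerable downtime per year. A minimal sketch; the targets shown are illustrative assumptions, not recommendations:

```python
# Translate an availability target into tolerable downtime per year.
# The percentages below are illustrative assumptions, not recommendations.

def downtime_per_year_hours(availability_pct: float) -> float:
    """Hours of downtime per year allowed by an availability target."""
    return (1 - availability_pct / 100) * 365 * 24

for target in (99.0, 99.9, 99.99):
    print(f"{target}% -> {downtime_per_year_hours(target):.2f} h/year")
```

This kind of table is a useful starting point when discussing On-Call coverage and maintenance windows with the operations team.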

### Hardware Concept

TBD:

- Are there defined hardware standards for the target data center and what are the general conditions?
- How should the systems be provisioned with an operating system?
- Decide which base operating system is used (e.g. RHEL or Ubuntu) and whether this fits the hardware support, strategy, upgrade support and cost structure.
- How many environments are required?

### Required IP Networks

Estimate the expected number of IP addresses and plan sufficient reserves so that no adjustments to the networks will be necessary at a later date.
The installation can be carried out via IPv4 or IPv6 as well as hybrid.
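
A quick way to sanity-check a planned subnet against the expected address demand is Python's `ipaddress` module. The subnet, node count and reserve factor below are illustrative assumptions:

```python
import ipaddress

# Sanity-check a planned subnet against expected demand (sketch; the
# subnet, node count and reserve factor are illustrative assumptions).
subnet = ipaddress.ip_network("10.10.0.0/22")
planned_nodes = 120
reserve_factor = 4  # plan generous reserves to avoid renumbering later

usable = subnet.num_addresses - 2  # minus network and broadcast address
print(usable)                                    # 1022
print(usable >= planned_nodes * reserve_factor)  # True
```

If the check fails, pick a larger prefix now; renumbering a running platform later is far more expensive.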
* The IP addresses should not be part of the "Frontend Access" network
* At least Port 443/TCP and 51820/UDP should be reachable from external networks
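
Reachability of the TCP endpoint can be verified with a short probe. The hostname below is a placeholder; note that 51820/UDP (WireGuard) cannot be checked this way, because UDP is connectionless:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname; run the probe from an external network):
# tcp_reachable("api.zone1.landscape.scs.community", 443)
```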

### Identity Management of the Platform

How should access to the administration of the environment (e.g. OpenStack) be managed?

Should there only be local access, or should the system be linked to one or more identity providers via OIDC or SAML (identity brokering)?

### Network configuration of nodes and tenant networks

TBD:

* It must be decided how the networks of the tenants should be separated in OpenStack (Neutron)
* It must be decided how the underlay network of the cloud platform should be designed
  (e.g. native Layer 2, Layer 2 underlay with tenant VLANs, Layer 3 underlay)
  * Layer 3 underlay
    * FRR routing on the nodes?
    * ASN naming scheme
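
For a Layer 3 underlay, the routing and ASN questions above eventually materialize in the FRR configuration of each node. A minimal sketch, assuming BGP unnumbered peering and a private ASN per node; the ASN, router-id and interface name are placeholders, not values prescribed by this guide:

```text
! /etc/frr/frr.conf (sketch; ASN, router-id and interface name are placeholders)
router bgp 4200000101
 bgp router-id 10.0.0.101
 neighbor underlay peer-group
 neighbor underlay remote-as external
 neighbor eth1 interface peer-group underlay
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
```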
### Domains and Hosts
* Cloud Domain: A dedicated subdomain used for the cloud environment
  (i.e. `*.zone1.landscape.scs.community`)
* External API endpoint: A hostname for the external API endpoint which points to an address in the "Frontend Access" network
  (i.e. `api.zone1.landscape.scs.community`)

### TLS Certificates
Since not all domains that are used for the environment will be publicly accessible and therefore the use of “Let's Encrypt” certificates
is not generally possible without problems, we recommend that official TLS certificates are available for at least the two API endpoints.
Either a multi-domain certificate (with SANs) or a wildcard certificate (wildcard on the first level of the cloud domain) can be used for this.
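
The SAN contents of a certificate can be checked with `openssl` before the installation. The sketch below generates a throwaway self-signed certificate just to illustrate the multi-domain (SAN) option; the hostnames are placeholders, and in production the certificate would come from your CA instead:

```shell
# Generate a throwaway self-signed certificate to illustrate the
# multi-domain (SAN) option; all hostnames are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout api.key -out api.crt \
  -subj "/CN=api.zone1.landscape.scs.community" \
  -addext "subjectAltName=DNS:api.zone1.landscape.scs.community,DNS:*.zone1.landscape.scs.community"

# Show the SANs a given certificate actually contains
openssl x509 -in api.crt -noout -ext subjectAltName
```

Running the same `openssl x509` inspection against the certificate you receive from your CA confirms that all required names are covered before the endpoints go live.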

### Access to installation resources
For the download of installation data such as container images, operating system packages, etc.,
either access to publicly accessible networks must be provided or a caching proxy or a dedicated
- Proxy requirements
- Are authenticated proxies possible?
### Git Repository
* A private Git Repository for the [configuration repository](https://osism.tech/docs/guides/configuration-guide/configuration-repository)
### Access management

* What requirements are needed or defined for the administration of the system?
* The public keys of all administrators
### Monitoring and On-Call/On-Duty

* Connection and integration into existing operational monitoring
* What kind of On-Call/On-Duty do you need?
  * How quickly should the solution to a problem be started?
  * What downtimes are tolerable in extreme cases?
* Does a log aggregation system already exist and does it make sense to use it for the new environment?

## NTP Infrastructure
* The deployed nodes should have permanent access to at least 3 NTP servers
* The NTP servers used should not run on virtual hardware
  (Depending on the architecture and the virtualization platform, this can otherwise cause minor or major problems in special situations.)
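
With chrony, the requirement above can be as simple as three `server` lines in the configuration. The hostnames below are placeholders for your actual (physical) NTP servers:

```text
# /etc/chrony/chrony.conf (sketch; server names are placeholders)
server ntp1.example.org iburst
server ntp2.example.org iburst
server ntp3.example.org iburst
# require agreement of at least two sources before trusting the time
minsources 2
```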
## OpenStack
### Hardware Concept
TBD:

- How many compute nodes are needed?
- Are local NVMe drives needed?
- Are GPUs needed?
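
A back-of-the-envelope estimate helps answer the compute node question. Every number in the sketch below is an illustrative assumption, not a recommendation:

```python
import math

# Back-of-the-envelope compute sizing (sketch; every number below is an
# illustrative assumption, not a recommendation).
vcpus_needed = 2000      # total vCPUs tenants are expected to consume
cores_per_node = 64      # physical cores per compute node
cpu_overcommit = 4.0     # cf. Nova's cpu_allocation_ratio
spare_nodes = 2          # headroom for failures and maintenance

nodes = math.ceil(vcpus_needed / (cores_per_node * cpu_overcommit)) + spare_nodes
print(nodes)  # 10
```

The appropriate overcommit ratio depends heavily on the workload mix, so treat it as a tunable input rather than a constant.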
## Ceph Storage
### General
TBD:

* Amount of usable storage
* External Ceph storage installation?
* What is the purpose of your storage?
  * Fast NVMe disks?
  * More read/write intensive workloads or mixed?
  * Huge amounts of data, but performance is a second-level requirement?
  * Object Storage?
  * ...
* What kind of network storage is needed?
  * Spinners
  * NVMe/SSD
* Dedicated Ceph environment or hyperconverged setup?
* CRUSH / failure domain properties
  * Failure domains?
  * Erasure coded?
  * Inter-datacenter replication?
  * ...
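
The replication-vs-erasure-coding choice directly determines how much raw capacity you need for a given amount of usable storage. A rough sketch; the capacity and the 3x / 4+2 profiles are illustrative assumptions:

```python
# Rough usable-capacity estimate for a Ceph cluster (sketch; the numbers
# are illustrative assumptions, not a sizing recommendation).

def usable_replicated(raw_tb: float, size: int = 3) -> float:
    """Usable capacity with size-way replication."""
    return raw_tb / size

def usable_erasure_coded(raw_tb: float, k: int = 4, m: int = 2) -> float:
    """Usable capacity with a k+m erasure-coded pool."""
    return raw_tb * k / (k + m)

raw = 600.0  # total raw capacity in TB across all OSDs
print(usable_replicated(raw))     # 3x replication -> 200.0 TB usable
print(usable_erasure_coded(raw))  # EC 4+2         -> 400.0 TB usable
```

Erasure coding doubles the usable capacity in this example, but at the cost of higher CPU load and slower recovery, which is why the purpose-of-storage questions above matter.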
### Disk Storage
* Rados Gateway Setup