
Commit b24b0db

tiup: add --user (pingcap#20085)
1 parent 48441a2 commit b24b0db

6 files changed: +19 −97 lines changed

dashboard/dashboard-ops-reverse-proxy.md

+1 −1
@@ -132,7 +132,7 @@ server_configs:
 <details>
 <summary> <strong>Modify configuration when deploying a new cluster using TiUP</strong> </summary>
 
-If you are deploying a new cluster, you can add the configuration above to the `topology.yaml` TiUP topology file and deploy the cluster. For specific instruction, see [TiUP deployment document](/production-deployment-using-tiup.md#step-3-initialize-cluster-topology-file).
+If you are deploying a new cluster, you can add the configuration above to the `topology.yaml` TiUP topology file and deploy the cluster. For specific instruction, see [TiUP deployment document](/production-deployment-using-tiup.md#step-3-initialize-the-cluster-topology-file).
 
 </details>
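
For context, the "configuration above" referenced in this hunk lives under `server_configs` (visible in the hunk header). A minimal sketch of the relevant `topology.yaml` fragment, assuming the `/dashboard` public path prefix used as the example on that page:

```yaml
# Hypothetical topology.yaml fragment for serving TiDB Dashboard behind a reverse proxy.
server_configs:
  pd:
    dashboard.public-path-prefix: /dashboard
```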

post-installation-check.md

+1 −1
@@ -56,7 +56,7 @@ Log in to the database by running the following command:
 mysql -u root -h ${tidb_server_host_IP_address} -P 4000
 ```
 
-`${tidb_server_host_IP_address}` is one of the IP addresses set for `tidb_servers` when you [initialize the cluster topology file](/production-deployment-using-tiup.md#step-3-initialize-cluster-topology-file), such as `10.0.1.7`.
+`${tidb_server_host_IP_address}` is one of the IP addresses set for `tidb_servers` when you [initialize the cluster topology file](/production-deployment-using-tiup.md#step-3-initialize-the-cluster-topology-file), such as `10.0.1.7`.
 
 The following information indicates successful login:
 
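
For illustration, a login using the example address above; the `-p` flag (standard MySQL client behavior, not part of this diff) prompts for the root password generated by `tiup cluster start --init`:

```shell
# Hypothetical login with the example tidb_servers address from this page.
mysql -u root -h 10.0.1.7 -P 4000 -p
```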

production-deployment-using-tiup.md

+9 −66
@@ -12,7 +12,7 @@ TiUP is a cluster operation and maintenance tool introduced in TiDB v4.0. It pro
 
 TiUP also supports deploying TiDB, TiFlash, TiCDC, and the monitoring system. This guide introduces how to deploy TiDB clusters with different topologies.
 
-## Step 1. Prerequisites and precheck
+## Step 1. Prerequisites and prechecks
 
 Make sure that you have read the following documents:
 
@@ -31,8 +31,6 @@ Log in to the control machine using a regular user account (take the `tidb` user
 
 1. Install TiUP by running the following command:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
 ```
@@ -41,32 +39,24 @@ Log in to the control machine using a regular user account (take the `tidb` user
 
 1. Redeclare the global environment variables:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 source .bash_profile
 ```
 
 2. Confirm whether TiUP is installed:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 which tiup
 ```
 
 3. Install the TiUP cluster component:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster
 ```
 
 4. If TiUP is already installed, update the TiUP cluster component to the latest version:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup update --self && tiup update cluster
 ```
@@ -75,8 +65,6 @@ Log in to the control machine using a regular user account (take the `tidb` user
 
 5. Verify the current version of your TiUP cluster:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup --binary cluster
 ```
@@ -107,24 +95,18 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 1. Install the TiUP tool:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
 ```
 
 2. Redeclare the global environment variables:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 source .bash_profile
 ```
 
 3. Confirm whether TiUP is installed:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 which tiup
 ```
@@ -133,8 +115,6 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 1. Pull the needed components on a machine that has access to the Internet:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup mirror clone tidb-community-server-${version}-linux-amd64 ${version} --os=linux --arch=amd64
 ```
@@ -143,8 +123,6 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 2. Pack the component package by using the `tar` command and send the package to the control machine in the isolated environment:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 tar czvf tidb-community-server-${version}-linux-amd64.tar.gz tidb-community-server-${version}-linux-amd64
 ```
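
The packing step above says to send the archive to the control machine in the isolated environment but does not show a transfer command. A minimal sketch, assuming an illustrative host name and destination path (neither appears in this diff):

```shell
# Hypothetical transfer of the offline package; `tidb@isolated-control-host`
# and /tmp/ are placeholders for your own environment.
scp tidb-community-server-${version}-linux-amd64.tar.gz tidb@isolated-control-host:/tmp/
```
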
@@ -157,8 +135,6 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 1. When pulling an offline mirror, you can get an incomplete offline mirror by specifying specific information via parameters, such as the component and version information. For example, you can pull an offline mirror that includes only the offline mirror of TiUP v1.12.3 and TiUP Cluster v1.12.3 by running the following command:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 tiup mirror clone tiup-custom-mirror-v1.12.3 --tiup v1.12.3 --cluster v1.12.3
 ```
@@ -169,8 +145,6 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 3. Check the path of the current offline mirror on the control machine in the isolated environment. If your TiUP tool is of a recent version, you can get the current mirror address by running the following command:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 tiup mirror show
 ```
@@ -181,16 +155,12 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 First, copy the `keys` directory in the current offline mirror to the `$HOME/.tiup` directory:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 cp -r ${base_mirror}/keys $HOME/.tiup/
 ```
 
 Then use the TiUP command to merge the incomplete offline mirror into the mirror in use:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 tiup mirror merge tiup-custom-mirror-v1.12.3
 ```
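
One way the steps above can be wired together, assuming the mirror in use is a local offline directory so that `tiup mirror show` prints its path (`base_mirror` is the page's own placeholder, resolved here only for illustration):

```shell
# Hypothetical sequence: locate the current offline mirror, copy its keys,
# then merge the custom incomplete mirror into it.
base_mirror=$(tiup mirror show)
cp -r ${base_mirror}/keys $HOME/.tiup/
tiup mirror merge tiup-custom-mirror-v1.12.3
```
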
@@ -201,8 +171,6 @@ https://download.pingcap.org/tidb-community-toolkit-{version}-linux-{arch}.tar.g
 
 After sending the package to the control machine of the target cluster, install the TiUP component by running the following commands:
 
-{{< copyable "shell-regular" >}}
-
 ```bash
 tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz && \
 sh tidb-community-server-${version}-linux-amd64/local_install.sh && \
@@ -227,12 +195,10 @@ tiup mirror merge ../tidb-community-toolkit-${version}-linux-amd64
 
 To switch the mirror to another directory, run the `tiup mirror set <mirror-dir>` command. To switch the mirror to the online environment, run the `tiup mirror set https://tiup-mirrors.pingcap.com` command.
 
-## Step 3. Initialize cluster topology file
+## Step 3. Initialize the cluster topology file
 
 Run the following command to create a cluster topology file:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster template > topology.yaml
 ```
@@ -241,23 +207,17 @@ In the following two common scenarios, you can generate recommended topology tem
 
 - For hybrid deployment: Multiple instances are deployed on a single machine. For details, see [Hybrid Deployment Topology](/hybrid-deployment-topology.md).
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster template --full > topology.yaml
 ```
 
 - For geo-distributed deployment: TiDB clusters are deployed in geographically distributed data centers. For details, see [Geo-Distributed Deployment Topology](/geo-distributed-deployment-topology.md).
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster template --multi-dc > topology.yaml
 ```
 
-Run `vi topology.yaml` to see the configuration file content:
-
-{{< copyable "shell-regular" >}}
+Run `vi topology.yaml` to see the content of the configuration file:
 
 ```shell
 global:
@@ -286,7 +246,7 @@ alertmanager_servers:
 - host: 10.0.1.4
 ```
 
-The following examples cover seven common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
+The following examples cover six common scenarios. You need to modify the configuration file (named `topology.yaml`) according to the topology description and templates in the corresponding links. For other scenarios, edit the configuration template accordingly.
 
 | Application | Configuration task | Configuration file template | Topology description |
 | :-- | :-- | :-- | :-- |
@@ -315,39 +275,35 @@ For more configuration description, see the following configuration examples:
 
 > **Note:**
 >
-> You can use secret keys or interactive passwords for security authentication when you deploy TiDB using TiUP:
+> When you deploy a cluster using TiUP, you can securely authenticate the user used for initialization (specified via `--user`) with either a secret key or an interactive password:
 >
 > - If you use secret keys, specify the path of the keys through `-i` or `--identity_file`.
 > - If you use passwords, add the `-p` flag to enter the password interaction window.
 > - If password-free login to the target machine has been configured, no authentication is required.
 >
-> In general, TiUP creates the user and group specified in the `topology.yaml` file on the target machine, with the following exceptions:
+> In general, TiUP automatically creates, on the target machines, the user and group that run the cluster processes (specified in `topology.yaml`; the default value is `tidb`), with the following exceptions:
 >
 > - The user name configured in `topology.yaml` already exists on the target machine.
 > - You have used the `--skip-create-user` option in the command line to explicitly skip the step of creating the user.
+>
+> Regardless of whether the user and group specified in `topology.yaml` are created automatically, TiUP generates an SSH key pair and configures password-free login for that user on each target machine. This user and SSH key are used for all subsequent management operations, while the user and password used for initialization are no longer needed after the deployment is complete.
 
 Before you run the `deploy` command, use the `check` and `check --apply` commands to detect and automatically repair potential risks in the cluster:
 
 1. Check for potential risks:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
 ```
 
 2. Enable automatic repair:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster check ./topology.yaml --apply --user root [-p] [-i /home/root/.ssh/gcp_rsa]
 ```
 
 3. Deploy a TiDB cluster:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster deploy tidb-test v8.5.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]
 ```
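
The commands above all authenticate as `root`. Per the note earlier, if the non-root user already exists on every target and has the required sudo privileges, the same commands can reuse it directly via `--user`. A minimal sketch, assuming a pre-created `tidb` user and an illustrative key path (neither is part of this diff):

```shell
# Hypothetical: run check and deploy as an existing non-root user.
# Assumes `tidb` exists on all targets with the needed sudo privileges, and that
# /home/tidb/.ssh/id_rsa on the control machine can log in to the targets as `tidb`.
tiup cluster check ./topology.yaml --user tidb -i /home/tidb/.ssh/id_rsa
tiup cluster deploy tidb-test v8.5.0 ./topology.yaml --user tidb -i /home/tidb/.ssh/id_rsa
```
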
@@ -364,8 +320,6 @@ At the end of the output log, you will see ```Deployed cluster `tidb-test` succe
 
 ## Step 5. Check the clusters managed by TiUP
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster list
 ```
@@ -376,8 +330,6 @@ TiUP supports managing multiple TiDB clusters. The preceding command outputs inf
 
 For example, run the following command to check the status of the `tidb-test` cluster:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster display tidb-test
 ```
@@ -393,21 +345,16 @@ After safe start, TiUP automatically generates a password for the TiDB root user
 > **Note:**
 >
 > - After safe start of a TiDB cluster, you cannot log in to TiDB using a root user without a password. Therefore, you need to record the password returned in the command output for future logins.
->
 > - The password is generated only once. If you do not record it or you forgot it, refer to [Forget the `root` password](/user-account-management.md#forget-the-root-password) to change the password.
 
 Method 1: Safe start
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster start tidb-test --init
 ```
 
 If the output is as follows, the start is successful:
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 Started cluster `tidb-test` successfully.
 The root password of TiDB database has been changed.
@@ -418,8 +365,6 @@ The generated password can NOT be got again in future.
 
 Method 2: Standard start
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster start tidb-test
 ```
@@ -428,8 +373,6 @@ If the output log includes ```Started cluster `tidb-test` successfully```, the s
 
 ## Step 8. Verify the running status of the TiDB cluster
 
-{{< copyable "shell-regular" >}}
-
 ```shell
 tiup cluster display tidb-test
 ```
@@ -452,4 +395,4 @@ If you have deployed [TiCDC](/ticdc/ticdc-overview.md) along with the TiDB clust
 - [Troubleshoot TiCDC](/ticdc/troubleshoot-ticdc.md)
 - [TiCDC FAQs](/ticdc/ticdc-faq.md)
 
-If you want to scale out or scale in your TiDB cluster without interrupting the online services, see [Scale a TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md).
+If you want to scale out or scale in your TiDB cluster without interrupting the online services, see [Scale a TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md).
