From 6463bba430856c0cdf95e1872841c832af4dba47 Mon Sep 17 00:00:00 2001
From: Nilmadhab Mondal
Date: Sun, 2 Mar 2025 10:48:33 +0100
Subject: [PATCH 1/2] Update all docker-compose references to docker compose

---
 .github/workflows/template.flink-ci.yml       |  2 +-
 .../resource-providers/standalone/docker.md   | 10 ++--
 .../try-flink/flink-operations-playground.md  | 46 +++++++++----------
 docs/content.zh/docs/try-flink/table_api.md   |  6 +--
 .../resource-providers/standalone/docker.md   | 10 ++--
 .../try-flink/flink-operations-playground.md  | 46 +++++++++----------
 docs/content/docs/try-flink/table_api.md      |  6 +--
 7 files changed, 63 insertions(+), 63 deletions(-)

diff --git a/.github/workflows/template.flink-ci.yml b/.github/workflows/template.flink-ci.yml
index 40e7758e7cc62..d4c334e11ef2d 100644
--- a/.github/workflows/template.flink-ci.yml
+++ b/.github/workflows/template.flink-ci.yml
@@ -316,7 +316,7 @@ jobs:
           maven_repo_folder: ${{ env.MAVEN_REPO_FOLDER }}
 
       - name: "Install missing packages"
-        run: sudo apt-get install -y net-tools docker-compose zip
+        run: sudo apt-get install -y net-tools docker compose zip
 
       # netty-tcnative requires OpenSSL v1.0.0
       - name: "Install OpenSSL"

diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
index fa17ba7c8850a..3efb850383a29 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
@@ -287,13 +287,13 @@ The next sections show examples of configuration files to run Flink.
* Launch a cluster in the foreground (use `-d` for background) ```sh - $ docker-compose up + $ docker compose up ``` * Scale the cluster up or down to `N` TaskManagers ```sh - $ docker-compose scale taskmanager= + $ docker compose scale taskmanager= ``` * Access the JobManager container @@ -305,7 +305,7 @@ The next sections show examples of configuration files to run Flink. * Kill the cluster ```sh - $ docker-compose down + $ docker compose down ``` * Access Web UI @@ -355,7 +355,7 @@ services: ### Session Mode -In Session Mode you use docker-compose to spin up a long-running Flink Cluster to which you can then submit Jobs. +In Session Mode you use docker compose to spin up a long-running Flink Cluster to which you can then submit Jobs. `docker-compose.yml` for *Session Mode*: @@ -427,7 +427,7 @@ services: ``` * In order to start the SQL Client run ```sh - docker-compose run sql-client + docker compose run sql-client ``` You can then start creating tables and queries those. diff --git a/docs/content.zh/docs/try-flink/flink-operations-playground.md b/docs/content.zh/docs/try-flink/flink-operations-playground.md index b073453ab493a..30455bcf15768 100644 --- a/docs/content.zh/docs/try-flink/flink-operations-playground.md +++ b/docs/content.zh/docs/try-flink/flink-operations-playground.md @@ -83,7 +83,7 @@ Job 监控以及资源管理。Flink TaskManager 运行 worker 进程, 环境搭建只需要几步就可以完成,我们将会带你过一遍必要的操作命令, 并说明如何验证我们正在操作的一切都是运行正常的。 -你需要在自己的主机上提前安装好 [docker](https://docs.docker.com/) (1.12+) 和 +你需要在自己的主机上提前安装好 [docker](https://docs.docker.com/) (20.10+) 和 [docker-compose](https://docs.docker.com/compose/) (2.1+)。 我们所使用的配置文件位于 @@ -93,7 +93,7 @@ Job 监控以及资源管理。Flink TaskManager 运行 worker 进程, ```bash git clone https://github.com/apache/flink-playgrounds.git cd flink-playgrounds/operations-playground -docker-compose build +docker compose build ``` 接下来在开始运行之前先在 Docker 主机上创建检查点和保存点目录(这些卷由 jobmanager 和 taskmanager 挂载,如 docker-compose.yaml 中所指定的): @@ -106,13 +106,13 @@ mkdir -p 
/tmp/flink-savepoints-directory 然后启动环境: ```bash -docker-compose up -d +docker compose up -d ``` 接下来你可以执行如下命令来查看正在运行中的 Docker 容器: ```bash -docker-compose ps +docker compose ps Name Command State Ports ----------------------------------------------------------------------------------------------------------------------------- @@ -130,7 +130,7 @@ operations-playground_zookeeper_1 /bin/sh -c /usr/sbin/sshd ... 你可以执行如下命令停止 docker 环境: ```bash -docker-compose down -v +docker compose down -v ``` @@ -157,10 +157,10 @@ Flink WebUI 界面包含许多关于 Flink 集群以及运行在其上的 Jobs **JobManager** -JobManager 日志可以通过 `docker-compose` 命令进行查看。 +JobManager 日志可以通过 `docker compose` 命令进行查看。 ```bash -docker-compose logs -f jobmanager +docker compose logs -f jobmanager ``` JobManager 刚启动完成之时,你会看到很多关于 checkpoint completion (检查点完成)的日志。 @@ -169,7 +169,7 @@ JobManager 刚启动完成之时,你会看到很多关于 checkpoint completio TaskManager 日志也可以通过同样的方式进行查看。 ```bash -docker-compose logs -f taskmanager +docker compose logs -f taskmanager ``` TaskManager 刚启动完成之时,你同样会看到很多关于 checkpoint completion (检查点完成)的日志。 @@ -179,7 +179,7 @@ TaskManager 刚启动完成之时,你同样会看到很多关于 checkpoint co [Flink CLI]({{< ref "docs/deployment/cli" >}}) 相关命令可以在 client 容器内进行使用。 比如,想查看 Flink CLI 的 `help` 命令,可以通过如下方式进行查看: ```bash -docker-compose run --no-deps client flink --help +docker compose run --no-deps client flink --help ``` ### Flink REST API @@ -195,7 +195,7 @@ curl localhost:8081/jobs {{< hint info >}} **注意:** 如果你的主机上没有 _curl_ 命令,那么你可以通过 client 容器进行访问(类似于 Flink CLI 命令): ```bash -docker-compose run --no-deps client curl jobmanager:8081/jobs +docker compose run --no-deps client curl jobmanager:8081/jobs ``` {{< /hint >}} {{< /unstable >}} @@ -205,11 +205,11 @@ docker-compose run --no-deps client curl jobmanager:8081/jobs 可以运行如下命令查看 Kafka Topics 中的记录: ```bash //input topic (1000 records/s) -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic input //output topic (24 
records/min) -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -230,7 +230,7 @@ docker-compose exec kafka kafka-console-consumer.sh \ {{< tab "CLI" >}} **命令** ```bash -docker-compose run --no-deps client flink list +docker compose run --no-deps client flink list ``` **预期输出** ```plain @@ -281,7 +281,7 @@ curl localhost:8081/jobs 为此,通过控制台命令消费 *output* topic,保持消费直到 Job 从失败中恢复 (Step 3)。 ```bash -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -293,7 +293,7 @@ docker-compose exec kafka kafka-console-consumer.sh \ TaskManager 进程挂掉、TaskManager 机器宕机或者从框架或用户代码中抛出的一个临时异常(例如,由于外部资源暂时不可用)而导致的失败。 ```bash -docker-compose kill taskmanager +docker compose kill taskmanager ``` 几秒钟后,JobManager 就会感知到 TaskManager 已失联,接下来它会 @@ -319,7 +319,7 @@ docker-compose kill taskmanager 一旦 TaskManager 重启成功,它将会重新连接到 JobManager。 ```bash -docker-compose up -d taskmanager +docker compose up -d taskmanager ``` 当 TaskManager 注册成功后,JobManager 就会将处于 `SCHEDULED` 状态的所有任务调度到该 TaskManager @@ -354,7 +354,7 @@ Savepoint 是整个应用程序状态的一次快照(类似于 checkpoint ) 以便观察在升级过程中没有数据丢失或损坏。 ```bash -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -369,7 +369,7 @@ JobID 可以通过[获取所有运行中的 Job](#listing-running-jobs) 接口 {{< tab "CLI" >}} **命令** ```bash -docker-compose run --no-deps client flink stop +docker compose run --no-deps client flink stop ``` **预期输出** ```bash @@ -440,7 +440,7 @@ curl -X POST localhost:8081/jobs//stop -d '{"drain": false}' {{< tab "CLI" >}} **命令** ```bash -docker-compose run --no-deps client flink run -s \ +docker compose run --no-deps client flink run -s \ -d /opt/ClickCountJob.jar \ --bootstrap.servers kafka:9092 --checkpointing --event-time ``` @@ -455,7 +455,7 @@ 
Job has been submitted with JobID **请求** ```bash # 从客户端容器上传 JAR -docker-compose run --no-deps client curl -X POST -H "Expect:" \ +docker compose run --no-deps client curl -X POST -H "Expect:" \ -F "jarfile=@/opt/ClickCountJob.jar" http://jobmanager:8081/jars/upload ``` @@ -497,7 +497,7 @@ curl -X POST http://localhost:8081/jars//run \ {{< tab "CLI" >}} **命令** ```bash -docker-compose run --no-deps client flink run -p 3 -s \ +docker compose run --no-deps client flink run -p 3 -s \ -d /opt/ClickCountJob.jar \ --bootstrap.servers kafka:9092 --checkpointing --event-time ``` @@ -512,7 +512,7 @@ Job has been submitted with JobID **请求** ```bash # Uploading the JAR from the Client container -docker-compose run --no-deps client curl -X POST -H "Expect:" \ +docker compose run --no-deps client curl -X POST -H "Expect:" \ -F "jarfile=@/opt/ClickCountJob.jar" http://jobmanager:8081/jars/upload ``` @@ -541,7 +541,7 @@ curl -X POST http://localhost:8081/jars//run \ {{< /tabs >}} 现在 Job 已重新提交,但由于我们提高了并行度所以导致 TaskSlots 不够用(1 个 TaskSlot 可用,总共需要 3 个),最终 Job 会重启失败。通过如下命令: ```bash -docker-compose scale taskmanager=2 +docker compose scale taskmanager=2 ``` 你可以向 Flink 集群添加第二个 TaskManager(为 Flink 集群提供 2 个 TaskSlots 资源), 它会自动向 JobManager 注册,TaskManager 注册完成后,Job 会再次处于 "RUNNING" 状态。 diff --git a/docs/content.zh/docs/try-flink/table_api.md b/docs/content.zh/docs/try-flink/table_api.md index de53a2a844e3d..5b2bf3e04dedb 100644 --- a/docs/content.zh/docs/try-flink/table_api.md +++ b/docs/content.zh/docs/try-flink/table_api.md @@ -293,8 +293,8 @@ public static Table report(Table transactions) { 在 `table-walkthrough` 目录下启动 docker-compose 脚本。 ```bash -$ docker-compose build -$ docker-compose up -d +$ docker compose build +$ docker compose up -d ``` 运行中的作业信息可以通过 [Flink console](http://localhost:8082/) 查看。 @@ -304,7 +304,7 @@ $ docker-compose up -d 结果数据在 MySQL 中查看。 ```bash -$ docker-compose exec mysql mysql -Dsql-demo -usql-demo -pdemo-sql +$ docker compose exec mysql mysql -Dsql-demo -usql-demo 
-pdemo-sql mysql> use sql-demo; Database changed diff --git a/docs/content/docs/deployment/resource-providers/standalone/docker.md b/docs/content/docs/deployment/resource-providers/standalone/docker.md index 384912d5b9e5f..1994d846e995a 100644 --- a/docs/content/docs/deployment/resource-providers/standalone/docker.md +++ b/docs/content/docs/deployment/resource-providers/standalone/docker.md @@ -287,13 +287,13 @@ The next sections show examples of configuration files to run Flink. * Launch a cluster in the foreground (use `-d` for background) ```sh - $ docker-compose up + $ docker compose up ``` * Scale the cluster up or down to `N` TaskManagers ```sh - $ docker-compose scale taskmanager= + $ docker compose scale taskmanager= ``` * Access the JobManager container @@ -305,7 +305,7 @@ The next sections show examples of configuration files to run Flink. * Kill the cluster ```sh - $ docker-compose down + $ docker compose down ``` * Access Web UI @@ -355,7 +355,7 @@ services: ### Session Mode -In Session Mode you use docker-compose to spin up a long-running Flink Cluster to which you can then submit Jobs. +In Session Mode you use docker compose to spin up a long-running Flink Cluster to which you can then submit Jobs. `docker-compose.yml` for *Session Mode*: @@ -427,7 +427,7 @@ services: ``` * In order to start the SQL Client run ```sh - docker-compose run sql-client + docker compose run sql-client ``` You can then start creating tables and queries those. diff --git a/docs/content/docs/try-flink/flink-operations-playground.md b/docs/content/docs/try-flink/flink-operations-playground.md index 2e1ae96820a6f..fd0fc8bb3f6c6 100644 --- a/docs/content/docs/try-flink/flink-operations-playground.md +++ b/docs/content/docs/try-flink/flink-operations-playground.md @@ -89,7 +89,7 @@ output of the Flink job should show 1000 views per page and window. The playground environment is set up in just a few steps. 
We will walk you through the necessary commands and show how to validate that everything is running correctly. -We assume that you have [Docker](https://docs.docker.com/) (1.12+) and +We assume that you have [Docker](https://docs.docker.com/) (20.10+) and [docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your machine. The required configuration files are available in the @@ -98,19 +98,19 @@ The required configuration files are available in the ```bash git clone https://github.com/apache/flink-playgrounds.git cd flink-playgrounds/operations-playground -docker-compose build +docker compose build ``` Then start the playground: ```bash -docker-compose up -d +docker compose up -d ``` Afterwards, you can inspect the running Docker containers with the following command: ```bash -docker-compose ps +docker compose ps Name Command State Ports ----------------------------------------------------------------------------------------------------------------------------- @@ -128,7 +128,7 @@ cluster components as well as the data generator are running (`Up`). You can stop the playground environment by calling: ```bash -docker-compose down -v +docker compose down -v ``` ## Entering the Playground @@ -151,10 +151,10 @@ its Jobs (JobGraph, Metrics, Checkpointing Statistics, TaskManager Status,...). **JobManager** -The JobManager logs can be tailed via `docker-compose`. +The JobManager logs can be tailed via `docker compose`. ```bash -docker-compose logs -f jobmanager +docker compose logs -f jobmanager ``` After the initial startup you should mainly see log messages for every checkpoint completion. @@ -163,7 +163,7 @@ After the initial startup you should mainly see log messages for every checkpoin The TaskManager log can be tailed in the same way. ```bash -docker-compose logs -f taskmanager +docker compose logs -f taskmanager ``` After the initial startup you should mainly see log messages for every checkpoint completion. 
@@ -173,7 +173,7 @@ After the initial startup you should mainly see log messages for every checkpoin The [Flink CLI]({{< ref "docs/deployment/cli" >}}) can be used from within the client container. For example, to print the `help` message of the Flink CLI you can run ```bash -docker-compose run --no-deps client flink --help +docker compose run --no-deps client flink --help ``` ### Flink REST API @@ -190,7 +190,7 @@ curl localhost:8081/jobs **Note**: If the _curl_ command is not available on your machine, you can run it from the client container (similar to the Flink CLI): ```bash -docker-compose run --no-deps client curl jobmanager:8081/jobs +docker compose run --no-deps client curl jobmanager:8081/jobs ``` {{< /hint >}} {{< /unstable >}} @@ -201,11 +201,11 @@ You can look at the records that are written to the Kafka Topics by running ```bash //input topic (1000 records/s) -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic input //output topic (24 records/min) -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -224,7 +224,7 @@ Most tasks can be executed via the [CLI](#flink-cli) and the [REST API](#flink-r {{< tab "CLI" >}} **Command** ```bash -docker-compose run --no-deps client flink list +docker compose run --no-deps client flink list ``` **Expected Output** ```plain @@ -273,7 +273,7 @@ For this, start reading from the *output* topic and leave this command running u recovery (Step 3). ```bash -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -285,7 +285,7 @@ exception being thrown from the framework or user code (e.g. due to the temporar an external resource). 
```bash -docker-compose kill taskmanager +docker compose kill taskmanager ``` After a few seconds, the JobManager will notice the loss of the TaskManager, cancel the affected Job, and immediately resubmit it for recovery. @@ -309,7 +309,7 @@ similar to a real production setup where data is produced while the Job to proce Once you restart the TaskManager, it reconnects to the JobManager. ```bash -docker-compose up -d taskmanager +docker compose up -d taskmanager ``` When the JobManager is notified about the new TaskManager, it schedules the tasks of the @@ -344,7 +344,7 @@ Before starting with the upgrade you might want to start tailing the *output* to observe that no data is lost or corrupted in the course the upgrade. ```bash -docker-compose exec kafka kafka-console-consumer.sh \ +docker compose exec kafka kafka-console-consumer.sh \ --bootstrap-server localhost:9092 --topic output ``` @@ -359,7 +359,7 @@ to stopping the Job: {{< tab "CLI" >}} **Command** ```bash -docker-compose run --no-deps client flink stop +docker compose run --no-deps client flink stop ``` **Expected Output** ```bash @@ -416,7 +416,7 @@ restarting it without any changes. {{< tab "CLI" >}} **Command** ```bash -docker-compose run --no-deps client flink run -s \ +docker compose run --no-deps client flink run -s \ -d /opt/ClickCountJob.jar \ --bootstrap.servers kafka:9092 --checkpointing --event-time ``` @@ -430,7 +430,7 @@ Job has been submitted with JobID **Request** ```bash # Uploading the JAR from the Client container -docker-compose run --no-deps client curl -X POST -H "Expect:" \ +docker compose run --no-deps client curl -X POST -H "Expect:" \ -F "jarfile=@/opt/ClickCountJob.jar" http://jobmanager:8081/jars/upload ``` @@ -472,7 +472,7 @@ during resubmission. 
{{< tab "CLI" >}} **Command** ```bash -docker-compose run --no-deps client flink run -p 3 -s \ +docker compose run --no-deps client flink run -p 3 -s \ -d /opt/ClickCountJob.jar \ --bootstrap.servers kafka:9092 --checkpointing --event-time ``` @@ -487,7 +487,7 @@ Job has been submitted with JobID **Request** ```bash # Uploading the JAR from the Client container -docker-compose run --no-deps client curl -X POST -H "Expect:" \ +docker compose run --no-deps client curl -X POST -H "Expect:" \ -F "jarfile=@/opt/ClickCountJob.jar" http://jobmanager:8081/jars/upload ``` @@ -517,7 +517,7 @@ curl -X POST http://localhost:8081/jars//run \ Now, the Job has been resubmitted, but it will not start as there are not enough TaskSlots to execute it with the increased parallelism (2 available, 3 needed). With ```bash -docker-compose scale taskmanager=2 +docker compose scale taskmanager=2 ``` you can add a second TaskManager with two TaskSlots to the Flink Cluster, which will automatically register with the JobManager. Shortly after adding the TaskManager the Job should start running again. diff --git a/docs/content/docs/try-flink/table_api.md b/docs/content/docs/try-flink/table_api.md index 0dd698262e84c..6dca600892bcc 100644 --- a/docs/content/docs/try-flink/table_api.md +++ b/docs/content/docs/try-flink/table_api.md @@ -298,8 +298,8 @@ The environment contains a Kafka topic, a continuous data generator, MySql, and From within the `table-walkthrough` folder start the docker-compose script. ```bash -$ docker-compose build -$ docker-compose up -d +$ docker compose build +$ docker compose up -d ``` You can see information on the running job via the [Flink console](http://localhost:8082/). @@ -309,7 +309,7 @@ You can see information on the running job via the [Flink console](http://localh Explore the results from inside MySQL. 
 ```bash
-$ docker-compose exec mysql mysql -Dsql-demo -usql-demo -pdemo-sql
+$ docker compose exec mysql mysql -Dsql-demo -usql-demo -pdemo-sql
 
 mysql> use sql-demo;
 Database changed

From d345cf9e9e89a52ee107156c170d2c170e460638 Mon Sep 17 00:00:00 2001
From: Nilmadhab Mondal
Date: Thu, 6 Mar 2025 06:44:38 +0100
Subject: [PATCH 2/2] PR comments addressed

---
 .github/workflows/template.flink-ci.yml                       | 2 +-
 docs/content.zh/docs/try-flink/flink-operations-playground.md | 2 +-
 docs/content/docs/try-flink/flink-operations-playground.md    | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/.github/workflows/template.flink-ci.yml b/.github/workflows/template.flink-ci.yml
index d4c334e11ef2d..5b0d639faecb0 100644
--- a/.github/workflows/template.flink-ci.yml
+++ b/.github/workflows/template.flink-ci.yml
@@ -316,7 +316,7 @@ jobs:
           maven_repo_folder: ${{ env.MAVEN_REPO_FOLDER }}
 
       - name: "Install missing packages"
-        run: sudo apt-get install -y net-tools docker compose zip
+        run: sudo apt-get install -y net-tools docker zip
 
       # netty-tcnative requires OpenSSL v1.0.0
      - name: "Install OpenSSL"

diff --git a/docs/content.zh/docs/try-flink/flink-operations-playground.md b/docs/content.zh/docs/try-flink/flink-operations-playground.md
index 30455bcf15768..d012715f04a0b 100644
--- a/docs/content.zh/docs/try-flink/flink-operations-playground.md
+++ b/docs/content.zh/docs/try-flink/flink-operations-playground.md
@@ -84,7 +84,7 @@ Job 监控以及资源管理。Flink TaskManager 运行 worker 进程,
 并说明如何验证我们正在操作的一切都是运行正常的。
 
 你需要在自己的主机上提前安装好 [docker](https://docs.docker.com/) (20.10+) 和
-[docker-compose](https://docs.docker.com/compose/) (2.1+)。
+[docker compose](https://docs.docker.com/compose/) (2.1+)。
 
 我们所使用的配置文件位于
 [flink-playgrounds](https://github.com/apache/flink-playgrounds) 仓库中,

diff --git a/docs/content/docs/try-flink/flink-operations-playground.md b/docs/content/docs/try-flink/flink-operations-playground.md
index fd0fc8bb3f6c6..fcdf356b80b9f 100644
--- a/docs/content/docs/try-flink/flink-operations-playground.md
+++ b/docs/content/docs/try-flink/flink-operations-playground.md
@@ -90,7 +90,7 @@ The playground environment is set up in just a few steps. We will walk you throu
 commands and show how to validate that everything is running correctly.
 
 We assume that you have [Docker](https://docs.docker.com/) (20.10+) and
-[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your machine.
+[docker compose](https://docs.docker.com/compose/) (2.1+) installed on your machine.
 
 The required configuration files are available in the
 [flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. First checkout the code and build the docker image:
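
Both patches apply the same mechanical rewrite: every invocation of the legacy standalone `docker-compose` binary becomes the Docker Compose V2 plugin form, `docker compose`. A minimal sketch of that rewrite as a shell helper (the function name `to_compose_v2` is hypothetical and not part of the patch series):

```shell
# Rewrite a legacy "docker-compose ..." command line to the Compose V2
# plugin syntax ("docker compose ..."); leave any other command untouched.
# Hypothetical helper for illustration only -- not part of the patches.
to_compose_v2() {
  case "$1" in
    "docker-compose "*) printf 'docker compose %s\n' "${1#docker-compose }" ;;
    "docker-compose")   printf 'docker compose\n' ;;
    *)                  printf '%s\n' "$1" ;;
  esac
}

to_compose_v2 "docker-compose up -d"               # -> docker compose up -d
to_compose_v2 "docker-compose logs -f jobmanager"  # -> docker compose logs -f jobmanager
```

Note that the subcommands are largely, but not perfectly, compatible between the two: for example, `scale` (kept as `docker compose scale` in the patched docs) may not be available in every Compose V2 release, which instead offers `docker compose up -d --scale <service>=<N>`.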