[RORDEV-1453] Ubuntu apt-get based installation of ES in tests #1127
base: develop
Conversation
…ation-tests-for-apt-based-es
this: Suite with EsClusterProvider with ESVersionSupport =>

override lazy val container: EsClusterContainer = clusterContainer

override protected def afterAll(): Unit = {
  super.afterAll()
  pruneDockerImages()
Adding the new test with apt-based ES caused the jobs to fail - not enough space on the runner, probably caused by downloading the ~700 MB ES package. As a first step I added cleanup of the resulting Docker images in the newly added tests, but it was not enough, so I added pruning of unused Docker images after each test suite.
So to sum up: support for apt installation is added, the test is added, and it detects the problematic situation. But there are two issues on the pipeline:
- test jobs take longer, up to 15 minutes, because of downloading Ubuntu, the apt ES package, and building the image
- disk usage on the runner is near 100%; I added pruning images after test suites, but I guess that contributed to the longer run time too, because of the time needed for pruning and not reusing image layers
- Simone mentioned an issue with packages not being available in the apt repo as soon as needed, but I guess we will see when the next ES version is released; we can simply disable the test until the apt package is available
- Do we have ideas how to optimize it?
- Do we have to use the Ubuntu image? Maybe we can use a slimmer alternative.
- Are we sure we don't prune this image between the tests?
- Any other ideas for current and/or future improvements in this area?
- The thing Simone mentioned is important. We want to release new versions as soon as we can, so IMO it's better to address this issue now. Let's compare the release dates of the latest versions (the Docker images vs. the apt packages). Maybe we could also check whether there is a way to build the custom ES image, and if there isn't, skip the test. WDYT?
A little warning that I'm not sure is still valid: in the past, deb packages were not available as soon as the archives; it was a delayed thing. TODO: check if it's still the case
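One way to perform that availability check, sketched under the assumption that the Elastic apt repository is already configured on the machine (the `ES_VERSION` value and the messages are placeholders):

```shell
#!/usr/bin/env bash
# Sketch: detect whether the ES deb package for a given version is already
# published, and skip the apt-based suite if it is not.
# Assumes the Elastic apt repo is configured; ES_VERSION is a placeholder.
ES_VERSION="8.18.0"

package_available() {
  # `apt-cache madison` lists every candidate version in the configured repos
  apt-cache madison elasticsearch 2>/dev/null | grep -qF " $ES_VERSION "
}

if package_available; then
  echo "apt package for $ES_VERSION is available"
else
  echo "apt package for $ES_VERSION not published yet - skipping apt-based test"
fi
```

The same check could gate the test in CI, so a freshly released ES version whose deb package lags behind the archives does not break the pipeline.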
this: Suite with EsClusterProvider with ESVersionSupport =>

override lazy val container: EsClusterContainer = clusterContainer

override protected def afterAll(): Unit = {
  super.afterAll()
  pruneDockerImages()
let's add a note explaining why we had to do this
val logger: Logger = LogManager.getLogger("prune-docker-images")
val dockerClient: DockerClient = DockerClientFactory.instance().client()
logger.info("Pruning docker images after test suite")
val reclaimedSpace = 1.0 * dockerClient.pruneCmd(PruneType.IMAGES).withDangling(false).exec().getSpaceReclaimed / 1024 / 1024
Isn't it too invasive? On the runner it's no problem, but when I run tests locally, I don't want to prune all my images. So maybe we could do something like this:
1. list images
2. grep <pattern>
3. remove the matching images by ID
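Those three steps could be sketched as below (the `beshu-ror-es` pattern is hypothetical; adjust it to however the test images are actually tagged):

```shell
#!/usr/bin/env bash
# Remove only the images produced by the test suites, matched by name pattern,
# instead of pruning everything on the machine.
prune_test_images() {
  local pattern="$1" ids
  # 1. list images, 2. keep those matching the pattern, 3. collect unique IDs
  ids=$(docker images --format '{{.ID}} {{.Repository}}:{{.Tag}}' \
          | grep -F "$pattern" | awk '{print $1}' | sort -u)
  # 4. remove them (force, since several tags may share one image ID)
  [ -n "$ids" ] && docker rmi -f $ids
  return 0
}

# prune_test_images "beshu-ror-es"   # hypothetical image-name pattern
```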
.create(s"docker.elastic.co/elasticsearch/elasticsearch:$esVersion", customEntrypoint)
case EsInstallationType.UbuntuDockerImageWithEsFromApt =>
val esMajorVersion: String = esVersion.split("\\.")(0) + ".x"
DockerImageDescription
Let's move this to a private method. It would be nice if it could look like this:
DockerImageDescription.createCustomImageWithESInstalledFromApt(esVersion, customEntrypoint)
to satisfy the need for symmetry ;)
.runWhen(Version.greaterOrEqualThan(esVersion, 9, 0, 0), "microdnf install -y tar")
.run(s"chown -R elasticsearch:elasticsearch ${configDir.toString()}")
.runWhen(missingTar, "microdnf install -y tar")
.run(s"chown -R elasticsearch:elasticsearch ${esDir.toString()}")
Why do we have to do this?
.copyFile(configDir / "ror-keystore.bcfks", fromResourceBy(name = "ror-keystore.bcfks"))
.copyFile(configDir / "ror-truststore.bcfks", fromResourceBy(name = "ror-truststore.bcfks"))
.copyFile(configDir / "elastic-certificates.bcfks", fromResourceBy(name = "elastic-certificates.bcfks"))
.copyFile(configDir(esConfig) / "additional-permissions.policy", fromResourceBy(name = "additional-permissions.policy"))
What do you think about using an extension method here instead? Like this: esConfig.esConfigDir?
import java.util.function.Consumer
import scala.language.postfixOps

class DockerLogsToStringConsumer() extends Consumer[OutputFrame] {
Do we need the () here?
…ation-tests-for-apt-based-es
Actionable comments posted: 2
🧹 Nitpick comments (2)
ci/run-pipeline.sh (2)
63-67: Pruning everything mid-pipeline may break subsequent Testcontainers runs
`docker image/container/volume/network prune -f` removes all unused objects. The next `integration-tests:test` invocation might rely on layers/volumes that have just been deleted, forcing costly re-pulls or even failing if the network is offline. Consider limiting the prune scope (e.g., only dangling images) or moving the cleanup to the end of the pipeline.
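One way to keep both behaviours would be a scope switch, sketched here with a hypothetical `prune_flags` helper driven by an assumed `PRUNE_SCOPE` setting:

```shell
#!/usr/bin/env bash
# Sketch: choose prune aggressiveness per environment, so throwaway CI runners
# reclaim everything while local runs only drop dangling layers.
prune_flags() {
  case "$1" in
    ci) echo "--all --force" ;;  # CI: remove every unused image
    *)  echo "--force" ;;        # local default: dangling images only
  esac
}

# docker image prune $(prune_flags "${PRUNE_SCOPE:-local}")
```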
77-80: Hard-coded `sudo rm -rf` assumes password-less sudo and path existence
On CI agents running as root (common in GitHub Actions) `sudo` is harmless but redundant; on agents without sudo it will fail. Also, the directories may not exist, yielding non-zero exits when `set -e` is active (the shebang uses `-ex`). Guard the call:
if command -v sudo >/dev/null; then SUDO=sudo; fi
$SUDO rm -rf /var/lib/apt/lists/* || true
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- ci/run-pipeline.sh (1 hunks)
- integration-tests/src/test/scala/tech/beshu/ror/integration/suites/base/support.scala (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- integration-tests/src/test/scala/tech/beshu/ror/integration/suites/base/support.scala
🧰 Additional context used
🧠 Learnings (1)
ci/run-pipeline.sh (1)
Learnt from: mgoworko
PR: sscarduzio/elasticsearch-readonlyrest-plugin#1095
File: docker-envs/base-ror-docker/es/install-ror-es-using-file.sh:24-30
Timestamp: 2025-04-17T11:56:56.073Z
Learning: For the ES version check in docker-envs/base-ror-docker/es/install-ror-es-using-file.sh, the parameter order in `verlte "7.0.0" "$ES_VERSION"` is intentionally set to check if 7.0.0 ≤ ES_VERSION, ensuring that ES 8.x uses the same Java path as ES 7.x.
⏰ Context from checks skipped due to timeout of 90000ms (13)
- GitHub Check: ror (Run all tests IT_es67x)
- GitHub Check: ror (Run all tests IT_es80x)
- GitHub Check: ror (Run all tests IT_es70x)
- GitHub Check: ror (Run all tests IT_es810x)
- GitHub Check: ror (Run all tests IT_es710x)
- GitHub Check: ror (Run all tests IT_es90x)
- GitHub Check: ror (Run all tests IT_es717x)
- GitHub Check: ror (Run all tests IT_es818x)
- GitHub Check: ror (Required checks AUDIT_BUILD_CHECK)
- GitHub Check: ror (Required checks COMPILE_CODEBASE_CHECK)
- GitHub Check: ror (Required checks LICENSE_CHECK)
- GitHub Check: ror (Optional checks CVE_CHECK)
- GitHub Check: ror (Run all tests Unit tests)
ci/run-pipeline.sh
Outdated
APT_SIZE_BEFORE=$(du -sh /var/lib/apt/lists /var/cache/apt/archives /var/tmp 2>/dev/null | awk '{sum += $1} END {print sum}')

echo ">>> Cleaning apt package cache from Docker builds..."
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /var/cache/apt/archives/*
sudo rm -rf /var/tmp/*

echo ">>> Checking APT cache size after cleanup..."
APT_SIZE_AFTER=$(du -sh /var/lib/apt/lists /var/cache/apt/archives /var/tmp 2>/dev/null | awk '{sum += $1} END {print sum}')
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
`du -sh` + awk `sum += $1` drops the unit suffix -> wrong math
`du -sh` outputs human-readable sizes (“452M”, “1.3G”). Adding those strings numerically truncates at the first non-digit character, producing misleading numbers. Switch to byte precision and let `numfmt` pretty-print if desirable:
-APT_SIZE_BEFORE=$(du -sh /var/lib/apt/lists /var/cache/apt/archives /var/tmp 2>/dev/null | awk '{sum += $1} END {print sum}')
+APT_SIZE_BEFORE=$(du -sb /var/lib/apt/lists /var/cache/apt/archives /var/tmp 2>/dev/null | awk '{sum += $1} END {print sum}')
+APT_SIZE_BEFORE=$(numfmt --to=iec --suffix=B $APT_SIZE_BEFORE)
Replicate for the “after” size.
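The byte-precision arithmetic can be checked in isolation; here the `du -sb` output is simulated so only the summing and formatting logic is exercised:

```shell
#!/usr/bin/env bash
# Simulated `du -sb` output: <size-in-bytes><tab><path>
du_output='452000000	/var/lib/apt/lists
130000000	/var/cache/apt/archives'

# Bytes sum cleanly; human-readable suffixes ("452M") would not.
total=$(printf '%s\n' "$du_output" | awk '{sum += $1} END {print sum}')
echo "total bytes: $total"
# numfmt (GNU coreutils) can pretty-print the byte count afterwards
numfmt --to=iec --suffix=B "$total"
```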
ci/run-pipeline.sh
Outdated
DOCKER_SIZE_BEFORE=$(docker system df --format "{{.Size}}" | grep 'Total space used' || true)
💡 Verification agent
❓ Verification inconclusive
`docker system df --format` + `grep` yields an empty string
`docker system df --format "{{.Size}}"` does not emit the “Total space used” line you later grep for, so DOCKER_SIZE_BEFORE/AFTER will almost always be blank. Use either the raw output (`docker system df -v`) or drop the grep:
-DOCKER_SIZE_BEFORE=$(docker system df --format "{{.Size}}" | grep 'Total space used' || true)
+DOCKER_SIZE_BEFORE=$(docker system df -v | awk '/Total space used:/ {print $4}')
Same for the “after” block.
ci/run-pipeline.sh: fix Docker “Total space used” capture
The current commands use `docker system df --format "{{.Size}}" | grep 'Total space used'`, but `--format` only prints per-resource sizes, not the summary line, so both variables will be empty. Replace both the “before” and “after” captures with a verbose df and an awk extraction:
• File: ci/run-pipeline.sh
• Lines: 60–61
- DOCKER_SIZE_BEFORE=$(docker system df --format "{{.Size}}" | grep 'Total space used' || true)
+ DOCKER_SIZE_BEFORE=$(docker system df -v | awk '/Total space used:/ {print $4}')
…later…
- DOCKER_SIZE_AFTER=$(docker system df --format "{{.Size}}" | grep 'Total space used' || true)
+ DOCKER_SIZE_AFTER=$(docker system df -v | awk '/Total space used:/ {print $4}')
This ensures you capture the actual “Total space used” value.
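The awk extraction itself can be sanity-checked against a simulated summary line (in the pipeline the input would come from `docker system df -v`, as suggested above):

```shell
#!/usr/bin/env bash
# Simulated summary line from `docker system df -v`
summary='Total space used: 21.5GB'

# $4 is the size token on the "Total space used:" line
size=$(printf '%s\n' "$summary" | awk '/Total space used:/ {print $4}')
echo "$size"
```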
Actionable comments posted: 2
♻️ Duplicate comments (1)
azure-pipelines.yml (1)
461-462: Same wiring bug duplicated lower in the matrix
The exact mismatch appears again here. Please fix it, or the error will persist in non-epic / non-develop branches.
- ROR_TASK: integration_es717x
+ ROR_TASK: ror_tools_es717x
🧹 Nitpick comments (2)
azure-pipelines.yml (1)
184-188: YAML matrix now hard-codes ~70 nearly identical entries – consider anchors
The growing copy-paste blocks for every ES minor release make the pipeline brittle and noisy. A YAML anchor/alias or a small template (e.g. `*IT_MATRIX`, `*RTT_MATRIX`) keeps one definition in sync and avoids future mismatches like the one above. No functional breakage, but maintainability will degrade quickly. Example skeleton:
.def_task: &def_task
  container: openjdk:22-slim
  timeoutInMinutes: 120
  steps: *common_steps
matrix:
  <<: *IT_MATRIX
  <<: *RTT_MATRIX
Also applies to: 259-264, 344-348, 411-418, 461-468
ci/run-pipeline.sh (1)
60-70: run_ror_tools_tests still talks about “integration tests”
Minor copy-paste artefact that may confuse anyone scanning logs.
- echo "What ES module should I run integration tests for?"
+ echo "What ES module should I run ror-tools tests for?"
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- azure-pipelines.yml (5 hunks)
- ci/run-pipeline.sh (2 hunks)
azure-pipelines.yml
Outdated
RTT_es717x:
  ROR_TASK: integration_es717x
Incorrect task wired to RTT_es717x
`RTT_es717x` should launch the ror-tools tests, but it is currently mapped to `integration_es717x`, so the new test suite never runs and the integration tests are executed twice.
- ROR_TASK: integration_es717x
+ ROR_TASK: ror_tools_es717x
ci/run-pipeline.sh
Outdated
if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es812x" ]]; then
  run_ror_tools_tests "es812x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es811x" ]]; then
  run_ror_tools_tests "es811x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es810x" ]]; then
  run_ror_tools_tests "es810x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es89x" ]]; then
  run_ror_tools_tests "es89x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es88x" ]]; then
  run_ror_tools_tests "es88x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es87x" ]]; then
  run_ror_tools_tests "es87x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es85x" ]]; then
  run_ror_tools_tests "es85x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es84x" ]]; then
  run_ror_tools_tests "es84x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es83x" ]]; then
  run_ror_tools_tests "es83x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es82x" ]]; then
  run_ror_tools_tests "es82x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es81x" ]]; then
  run_ror_tools_tests "es81x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es80x" ]]; then
  run_ror_tools_tests "es80x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es717x" ]]; then
  run_ror_tools_tests "es717x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es716x" ]]; then
  run_ror_tools_tests "es716x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es714x" ]]; then
  run_ror_tools_tests "es714x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es711x" ]]; then
  run_ror_tools_tests "es711x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es710x" ]]; then
  run_ror_tools_tests "es710x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es79x" ]]; then
  run_ror_tools_tests "es79x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es78x" ]]; then
  run_ror_tools_tests "es78x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es77x" ]]; then
  run_ror_tools_tests "es77x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es74x" ]]; then
  run_ror_tools_tests "es74x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es73x" ]]; then
  run_ror_tools_tests "es73x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es72x" ]]; then
  run_ror_tools_tests "es72x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es70x" ]]; then
  run_ror_tools_tests "es70x"
fi

if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es67x" ]]; then
  run_ror_tools_tests "es67x"
fi
🛠️ Refactor suggestion
Explosion of `if` blocks – time to loop
Adding 50 more `if [[ $ROR_TASK == "ror_tools_esXYZ" ]]` lines duplicates logic already present for the integration tests. A compact loop keeps both lists in one place and prevents drift.
# declare once near the top
ROR_TOOL_VERSIONS=(90x 818x 816x 815x 814x 813x 812x 811x 810x 89x 88x 87x 85x 84x 83x 82x 81x 80x 717x 716x 714x 711x 710x 79x 78x 77x 74x 73x 72x 70x 67x)
for v in "${ROR_TOOL_VERSIONS[@]}"; do
if [[ -z $TRAVIS ]] || [[ $ROR_TASK == "ror_tools_es${v}" ]]; then
run_ror_tools_tests "es${v}"
fi
done
This drops ≈120 LoC and eliminates the chance of missing a new version in one list but not the other.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- azure-pipelines.yml (6 hunks)
- ror-tools/build.gradle (1 hunks)
- tests-utils/src/main/scala/tech/beshu/ror/utils/containers/images/DockerImageCreator.scala (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests-utils/src/main/scala/tech/beshu/ror/utils/containers/images/DockerImageCreator.scala
- azure-pipelines.yml
⏰ Context from checks skipped due to timeout of 90000ms (17)
- GitHub Check: ror (Run all tests RTT_es67x)
- GitHub Check: ror (Run all tests RTT_es70x)
- GitHub Check: ror (Run all tests RTT_es80x)
- GitHub Check: ror (Run all tests RTT_es717x)
- GitHub Check: ror (Run all tests RTT_es818x)
- GitHub Check: ror (Run all tests IT_es67x)
- GitHub Check: ror (Run all tests IT_es70x)
- GitHub Check: ror (Run all tests RTT_es90x)
- GitHub Check: ror (Run all tests IT_es80x)
- GitHub Check: ror (Run all tests IT_es818x)
- GitHub Check: ror (Run all tests IT_es717x)
- GitHub Check: ror (Run all tests IT_es90x)
- GitHub Check: ror (Required checks AUDIT_BUILD_CHECK)
- GitHub Check: ror (Required checks COMPILE_CODEBASE_CHECK)
- GitHub Check: ror (Required checks LICENSE_CHECK)
- GitHub Check: ror (Optional checks CVE_CHECK)
- GitHub Check: ror (Run all tests Unit tests)
ror-tools/build.gradle
Outdated
subprojects {
  tasks.withType(Test).tap {
    configureEach {
      maxParallelForks = Runtime.runtime.availableProcessors()
    }
  }
}
💡 Verification agent
❓ Verification inconclusive
Avoid `tap` + cap `maxParallelForks` to prevent OOM / use idiomatic DSL
`tap { configureEach { … } }` is unnecessary; `configureEach` can be called directly. `Runtime.runtime.availableProcessors()` can return high values on modern CI runners (32-64); forking that many JVMs blows the memory budget fast and kills the build. Consider capping it, or making it overridable via a Gradle property, e.g. `-PmaxTestForks`.
Suggested tweak:
-subprojects {
- tasks.withType(Test).tap {
- configureEach {
- maxParallelForks = Runtime.runtime.availableProcessors()
- }
- }
-}
+subprojects {
+ tasks.withType(Test).configureEach {
+ // Allow override: ./gradlew test -PmaxTestForks=8
+ int forks = findProperty('maxTestForks')?.toInteger() ?: Runtime.runtime.availableProcessors()
+ maxParallelForks = Integer.min(forks, 8) // sensible upper bound
+ }
+}
This keeps the speed-up while safeguarding memory and stays idiomatic Groovy/Gradle.
Simplify Test task DSL and cap parallel forks
The current block uses an unnecessary `tap` wrapper and may spawn too many JVM forks on CI. Let's streamline the DSL and add a sensible cap with an override:
• File: ror-tools/build.gradle
• Lines: 74–80
Suggested update:
subprojects {
- tasks.withType(Test).tap {
- configureEach {
- maxParallelForks = Runtime.runtime.availableProcessors()
- }
- }
+ tasks.withType(Test).configureEach {
+ // Allow override: ./gradlew test -PmaxTestForks=8
+ int forks = (findProperty('maxTestForks') as String)?.toInteger()
+ ?: Runtime.runtime.availableProcessors()
+ maxParallelForks = Integer.min(forks, 8) // sensible upper bound
+ }
}
This removes the extra `tap`, uses idiomatic `configureEach`, lets you override via `-PmaxTestForks`, and prevents OOM by capping forks.