Conversation

@vyavdoshenko
Contributor

Add MinIO support for S3 snapshot tests.

Enable S3 snapshot tests to run against MinIO instead of real AWS S3, controlled by a single S3_ENDPOINT env var. MinIO binary is auto-downloaded and cached in ~/.cache/dragonfly-tests/. No changes to test files — only infrastructure (conftest.py, instance.py, CI action).

Fixes #6412

@vyavdoshenko vyavdoshenko self-assigned this Jan 29, 2026
Copilot AI review requested due to automatic review settings January 29, 2026 13:47
@augmentcode

augmentcode bot commented Jan 29, 2026

🤖 Augment PR Summary

Summary: Adds MinIO-backed infrastructure so S3 snapshot tests can run without hitting real AWS S3.

Changes:

  • CI composite action runs the S3 snapshot subset with S3_ENDPOINT set and caches the MinIO binary.
  • tests/dragonfly/conftest.py auto-downloads and starts/stops a local MinIO server when S3_ENDPOINT is present, and wires bucket/credentials env vars for tests.
  • tests/dragonfly/instance.py propagates S3_ENDPOINT into Dragonfly flags (--s3_endpoint, --s3_use_https) so Dragonfly targets MinIO.

Notes: Keeps test files unchanged; the switch is driven by environment/config (Fixes #6412).
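The flag propagation described above could be sketched roughly as follows (a minimal illustration only; the helper name and the exact flag formatting are assumptions, not the actual instance.py code):

```python
import os


def s3_flags_from_env() -> list:
    """Translate an S3_ENDPOINT env var into Dragonfly command-line flags.

    When S3_ENDPOINT is unset, no flags are added and Dragonfly talks to
    real AWS S3; when it points at a local MinIO over plain HTTP, HTTPS
    is switched off via --s3_use_https.
    """
    endpoint = os.environ.get("S3_ENDPOINT")
    if not endpoint:
        return []
    # Strip the scheme: Dragonfly's --s3_endpoint expects host:port.
    flags = ["--s3_endpoint=" + endpoint.split("://", 1)[-1]]
    if endpoint.startswith("http://"):
        flags.append("--s3_use_https=false")
    return flags
```

For example, with `S3_ENDPOINT=http://localhost:9000` this would yield `--s3_endpoint=localhost:9000 --s3_use_https=false`, while leaving the variable unset keeps the default AWS behavior.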



@augmentcode augmentcode bot left a comment


Review completed. 3 suggestions posted.


Contributor

Copilot AI left a comment


Pull request overview

Adds infrastructure support to run S3 snapshot integration tests against a local MinIO server (instead of AWS S3) via a single S3_ENDPOINT environment variable, including CI wiring.

Changes:

  • Pass a custom S3 endpoint/HTTP setting into Dragonfly instances when S3_ENDPOINT is set.
  • Add pytest startup/shutdown hooks to download, start, and configure a MinIO server for S3 tests.
  • Add a dedicated CI step to run S3 snapshot tests against MinIO.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

  • tests/dragonfly/instance.py: Reads S3_ENDPOINT and translates it into Dragonfly --s3_endpoint / --s3_use_https args.
  • tests/dragonfly/conftest.py: Adds MinIO binary download, MinIO server lifecycle management, and env setup for S3 tests.
  • .github/actions/regression-tests/action.yml: Runs S3 snapshot tests in CI with S3_ENDPOINT pointing to localhost MinIO and ensures the MinIO binary is present.

@vyavdoshenko vyavdoshenko force-pushed the bobik/minio branch 2 times, most recently from 718e591 to 506aba5 Compare January 29, 2026 15:22
Copilot AI review requested due to automatic review settings January 29, 2026 15:22
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.

Collaborator

@romange romange left a comment


I think there is a misunderstanding here. We still need to support AWS S3, and it has slight differences from MinIO. This is why I wanted to have tests for both S3 and MinIO. We had a bug where S3 worked but MinIO failed, imho.

@vyavdoshenko
Contributor Author

I think there is a misunderstanding here. We still need to support AWS S3, and it has slight differences from MinIO. This is why I wanted to have tests for both S3 and MinIO. We had a bug where S3 worked but MinIO failed, imho.

There is no misunderstanding. I added MinIO as an option; we can now run the same tests against both AWS and MinIO.

df -h

- name: Run S3 snapshot tests with MinIO
  if: inputs.with-s3 == 'true'
Collaborator


I still do not understand. with-s3 means "test with MinIO"? Should we call it with-minio then?

Contributor Author


I need some way to detect that the current build has the S3 feature compiled in before running the MinIO tests.
I see that this solution is not good; I will try to get rid of it.

Copilot AI review requested due to automatic review settings January 30, 2026 08:33
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

Comment on lines +80 to +96
log_file = open(minio_log, "w")
proc = subprocess.Popen(
    [str(minio_bin), "server", str(data_dir), "--address", address],
    env={**os.environ, "MINIO_ROOT_USER": "minioadmin", "MINIO_ROOT_PASSWORD": "minioadmin"},
    stdout=log_file,
    stderr=subprocess.STDOUT,
)

bucket = "dragonfly-test"
s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
    region_name="us-east-1",
)


Copilot AI Jan 30, 2026


If boto3.client(...) (or other code between starting the process and the retry loop) raises (e.g. due to an invalid endpoint), the MinIO subprocess and log file handle will be leaked. Wrap the MinIO start + boto3 setup in a try/except/finally that terminates the process, closes the log file, and deletes the temp dir on failure.
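As a hedged sketch of that suggestion (the helper name and the `setup` callback are hypothetical stand-ins, not the fixture's real code), the cleanup-on-failure pattern could look like:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path


def start_server_with_cleanup(cmd, setup):
    """Start a server subprocess; if any subsequent setup step raises,
    terminate the process, close the log file, and remove the temp dir
    before re-raising, so nothing leaks on failure."""
    data_dir = Path(tempfile.mkdtemp(prefix="minio-test-"))
    log_file = open(data_dir / "server.log", "w")
    proc = None
    try:
        proc = subprocess.Popen(cmd, stdout=log_file, stderr=subprocess.STDOUT)
        # `setup` stands in for the boto3 client creation, bucket setup,
        # and readiness retry loop that follow in the real fixture.
        setup()
        return proc, log_file, data_dir
    except Exception:
        if proc is not None:
            proc.terminate()
            proc.wait(timeout=10)
        log_file.close()
        shutil.rmtree(data_dir, ignore_errors=True)
        raise
```

On the happy path the caller owns the process, log file, and directory; on any setup error the resources are released before the exception propagates.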

Comment on lines +77 to +101
- name: Run S3 snapshot tests with MinIO
  if: inputs.s3-bucket != ''
  shell: bash
  run: |
    cd ${GITHUB_WORKSPACE}/tests
    pip3 install -r dragonfly/requirements.txt

    export DRAGONFLY_PATH="${GITHUB_WORKSPACE}/${{inputs.build-folder-name}}/${{inputs.dfly-executable}}"

    # Download MinIO binary (atomic: download to .tmp, then rename)
    ARCH=$(uname -m)
    case "$ARCH" in
      x86_64) ARCH="amd64" ;;
      aarch64) ARCH="arm64" ;;
      *) echo "Unsupported MinIO architecture: $ARCH"; exit 1 ;;
    esac
    MINIO_DIR="$HOME/.cache/dragonfly-tests"
    mkdir -p "$MINIO_DIR"
    if [ ! -f "$MINIO_DIR/minio" ]; then
      curl -fsSL "https://dl.min.io/server/minio/release/linux-${ARCH}/minio" -o "$MINIO_DIR/minio.tmp"
      chmod +x "$MINIO_DIR/minio.tmp"
      mv "$MINIO_DIR/minio.tmp" "$MINIO_DIR/minio"
    fi

    S3_ENDPOINT=http://localhost:9000 timeout 10m pytest -k "s3" --timeout=300 --color=yes dragonfly/snapshot_test.py --log-cli-level=INFO -v

Copilot AI Jan 30, 2026


This adds a MinIO-only S3 test run, but the main "Run PyTests" step still exports DRAGONFLY_S3_BUCKET (and AWS creds) and will run the same S3 snapshot tests against real AWS as part of the full suite. That contradicts the PR description’s "instead of real AWS S3" claim and also duplicates coverage/time. Consider making MinIO the sole S3 backend when enabled (e.g., set S3_ENDPOINT for the main pytest run and avoid setting DRAGONFLY_S3_BUCKET to the AWS bucket in that case, or introduce a dedicated input flag).

Comment on lines +77 to +82
- name: Run S3 snapshot tests with MinIO
  if: inputs.s3-bucket != ''
  shell: bash
  run: |
    cd ${GITHUB_WORKSPACE}/tests
    pip3 install -r dragonfly/requirements.txt

Copilot AI Jan 30, 2026


This new MinIO step installs test requirements via pip, and the subsequent "Run PyTests" step installs the same requirements again. Since composite-action steps share the same environment, consider installing requirements once (e.g., in a single earlier step) to avoid redundant work and reduce CI runtime.

Suggested change
-  - name: Run S3 snapshot tests with MinIO
-    if: inputs.s3-bucket != ''
-    shell: bash
-    run: |
-      cd ${GITHUB_WORKSPACE}/tests
-      pip3 install -r dragonfly/requirements.txt
+  - name: Install Python test requirements
+    shell: bash
+    run: |
+      cd ${GITHUB_WORKSPACE}/tests
+      pip3 install -r dragonfly/requirements.txt
+  - name: Run S3 snapshot tests with MinIO
+    if: inputs.s3-bucket != ''
+    shell: bash
+    run: |
+      cd ${GITHUB_WORKSPACE}/tests



Development

Successfully merging this pull request may close these issues.

Create tests that supports integration with MinIO
