[WIP] Copying large files from an Azure virtual machine to a local server is unstable and prone to mid-transfer failure. #3069
Implementation Plan for Chunk-Level Resume Functionality
This addresses the issue of copying large files (a 3 TB VHD) from an Azure VM to a local server over unstable network connections. The current resume functionality restarts the entire transfer rather than resuming from already-completed chunks.
Current Status: Analysis Complete ✅
Implementation Plan:
Step 1: Add chunk progress tracking to the JobPartPlanTransfer structure (see the tracking sketch after this list)
Step 2: Modify the chunk completion logic to persist progress as each chunk finishes (see the tracking sketch after this list)
Step 3: Update the resume logic to skip chunks that are already complete (see the resume sketch after this list)
Step 4: Add optimizations for very large files
Step 5: Testing and validation
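A minimal sketch of Steps 1 and 2, assuming a standalone progress record with a completed-chunk bitmap that is flushed to a small side file after every chunk; the chunkProgress type, its fields, and the markChunkDone/isChunkDone helpers are illustrative names and do not reflect the actual JobPartPlanTransfer layout or the memory-mapped plan-file format azcopy uses:

```go
package resume

import (
	"encoding/binary"
	"os"
	"sync"
)

// chunkProgress tracks which chunks of one transfer have completed.
// Illustrative only: the real job plan is a fixed-layout, memory-mapped
// file, not a standalone struct persisted like this.
type chunkProgress struct {
	mu        sync.Mutex
	chunkSize int64    // bytes per chunk (e.g. 8 MiB)
	bitmap    []uint64 // one bit per chunk; 1 = chunk fully written to disk
	planPath  string   // hypothetical side file holding the persisted bitmap
}

func newChunkProgress(fileSize, chunkSize int64, planPath string) *chunkProgress {
	numChunks := (fileSize + chunkSize - 1) / chunkSize
	words := (numChunks + 63) / 64
	return &chunkProgress{chunkSize: chunkSize, bitmap: make([]uint64, words), planPath: planPath}
}

// markChunkDone records a chunk as complete and persists the bitmap
// (Step 2), so a later resume can skip it.
func (p *chunkProgress) markChunkDone(index int64) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.bitmap[index/64] |= 1 << uint(index%64)

	buf := make([]byte, 8*len(p.bitmap))
	for i, w := range p.bitmap {
		binary.LittleEndian.PutUint64(buf[i*8:], w)
	}
	// Write-then-rename keeps the persisted bitmap consistent even if the
	// process dies mid-write.
	tmp := p.planPath + ".tmp"
	if err := os.WriteFile(tmp, buf, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, p.planPath)
}

// isChunkDone reports whether a chunk was already completed in a prior run.
func (p *chunkProgress) isChunkDone(index int64) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.bitmap[index/64]&(1<<uint(index%64)) != 0
}
```

Persisting after every chunk gives chunk-level resume granularity at the cost of one small write per chunk; batching the flush every N chunks would trade a little re-downloaded data for less I/O, which is one possible angle for the large-file optimizations in Step 4.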
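A matching sketch of Step 3, continuing the same hypothetical package (so the imports and chunkProgress type above are assumed): on resume the persisted bitmap is reloaded and only the unfinished chunk indices are re-queued. loadChunkProgress and pendingChunks are illustrative names, not existing azcopy functions:

```go
// loadChunkProgress reloads a previously persisted bitmap; a missing plan
// file simply means nothing has completed yet, i.e. a fresh transfer.
func loadChunkProgress(fileSize, chunkSize int64, planPath string) (*chunkProgress, error) {
	p := newChunkProgress(fileSize, chunkSize, planPath)
	buf, err := os.ReadFile(planPath)
	if os.IsNotExist(err) {
		return p, nil
	}
	if err != nil {
		return nil, err
	}
	for i := 0; i < len(p.bitmap) && (i+1)*8 <= len(buf); i++ {
		p.bitmap[i] = binary.LittleEndian.Uint64(buf[i*8:])
	}
	return p, nil
}

// pendingChunks lists the chunk indices that still need downloading, so the
// scheduler re-queues only unfinished work instead of the whole file.
func (p *chunkProgress) pendingChunks(fileSize int64) []int64 {
	numChunks := (fileSize + p.chunkSize - 1) / p.chunkSize
	pending := make([]int64, 0, numChunks)
	for i := int64(0); i < numChunks; i++ {
		if !p.isChunkDone(i) {
			pending = append(pending, i)
		}
	}
	return pending
}
```

On resume, a scheduler would call loadChunkProgress, dispatch only the ranges returned by pendingChunks, and have each download worker call markChunkDone once its range is verified on disk.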
Expected Outcome:
Interrupted downloads of large files resume from the last completed chunk instead of restarting the entire transfer.
Fixes #2998.