Do not purge beatmap if about to update it #23
Merged
Noticed in passing when attempting to debug the staging deployment failures.
The `PUT /beatmapsets` endpoint, which is responsible for assigning IDs to beatmap sets and beatmaps before the full package is uploaded, also purges garbage rows left over from previous submission failures (e.g. rows with `active = -1` set). However, if the endpoint is given the set ID of such a garbage row, it will first run the purge, which deletes the row, and then notice that it just deleted the row and give up with a 404.
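The fix can be sketched roughly as follows. This is a hypothetical illustration of the behaviour change, not the actual implementation; the function and field names (`purge_inactive_rows`, `set_id`, `active`) are assumptions for the sake of the example:

```python
# Hypothetical sketch: when purging garbage rows (active = -1) left over from
# failed submissions, skip the row whose set ID the current request is about
# to update, so the subsequent lookup does not 404.

def purge_inactive_rows(rows, keep_set_id=None):
    """Drop inactive garbage rows, except the one the request will update."""
    return [
        row for row in rows
        if row["active"] != -1 or row["set_id"] == keep_set_id
    ]

rows = [
    {"set_id": 101, "active": -1},  # garbage from a failed submission
    {"set_id": 102, "active": 1},
]

# master behaviour: purge everything inactive, then the lookup for 101 fails
purged = purge_inactive_rows(rows)
assert all(row["set_id"] != 101 for row in purged)

# behaviour in this PR: the row being updated survives the purge,
# so retrying the failed request can recover it instead of 404ing
kept = purge_inactive_rows(rows, keep_set_id=101)
assert any(row["set_id"] == 101 for row in kept)
```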
This is both kinda stupid and undesirable; consider someone submitting a beatmap where the first request succeeds and the second fails. With `master` behaviour, this results in a basically unrecoverable state if the user doesn't know how submission internals work. With the behaviour introduced in this PR, all that is required to resolve the situation is to just try again.

This is 100% a carryover from osu-web-10 logic, which does the exact same thing, and is maybe one reason why people resort to manual `OnlineID` reuses, which are notorious for breaking everything.