
Allow renaming datasets & dataset with duplicate names #8075

Open
wants to merge 107 commits into base: master

Conversation

MichaelBuessemeyer
Contributor

@MichaelBuessemeyer MichaelBuessemeyer commented Sep 12, 2024

Further Notes:

  • Many of the line changes are the result of moving the ObjectId class to the utils package so that all wk backend servers have access to this class.

URL of deployed dev instance (used for testing):

Steps to test:

  • Give two datasets the same name and check whether annotations and other features still work
  • Test whether the task system still works with duplicate dataset names
  • check dataset upload
    • dataset upload
    • add remote
    • compose
  • ...

TODOs:

  • Add evolution and reversion
    • testing needed
  • Test uploading:
    • Report upload fails
  • Adjust worker to newest job arguments as the dataset name can no longer be used to uniquely identify a dataset
  • Rename organization_name in worker to organization_id. See "Rename organization_name to organization_id in worker args" #8038
  • Dataset Name settings field has an unwanted spinner (see upload view)
  • Check the job list
  • Properly implement legacy searching for datasets when old URI param is used
  • Adjust legacy API routes to return dataset in old format
    • It is just an additional field. Thus, I would say it should be fine.
  • datasets appear to be duplicated in the db
    • Maybe these are created by jobs with an output dataset
  • Fix dataset insert
  • Skeleton & VolumeTracings address a dataset via its name
    • Not really used; only during task / annotation creation
    • Use a heuristic upon upload and temporarily patch the Tracing case classes to carry the datasetId during the creation process once the dataset has been identified.
    • Task creation works
    • Needs testing
      • Fix annotation upload
    • Needs to support old NMLs
  • Put datasetId into newly created NMLs
  • In the backend LinkedLayerIdentifier still uses the datasetName as an identifier
    • Used in wklibs; maybe just interpret the name as a path and work with that. In case it cannot be found, the user needs to update wklibs. Add a comment for this!
  • The dataset C555_tps_demo has quite a few bucket loading errors. Unsure why some buckets do not work; the dataset seems to be broken. Could reproduce this on other branches.
  • Notion-style URLs are missing (i.e. -, but only the id part is actually used)
  • Maybe remove DatasetURIParser

I would also suggest to

  • Execute the screenshot tests once for this branch. Take care to change the branch name in the CI config (or local command), so that the tests are actually run on this branch.
  • Looks much better now :) Did you re-test that exceptions are caught as expected?
  • undo application.conf & snapshot changes

Issues:


(Please delete unneeded items, merge only when none are left open)

} else {
Future.successful(JsonBadRequest(Messages("nml.file.noFile")))
Fox.paramFailure("NML upload failed", Empty, Empty, None)
Contributor Author

@fm3 this does not work as expected. The paramFailure is not caught in ExtendedController.scala. Do you maybe know more about this? / How can I make this work?

Member

@daniel-wer daniel-wer left a comment

First part of my frontend code review. Good stuff! I'll continue tomorrow :)

@@ -3,7 +3,9 @@
# example: assume, the features route has changed, introducing v2. The older v1 needs to be provided in the legacyApiController
# Note: keep this in sync with the reported version numbers in the utils.ApiVersioning trait

# version log:
# version log:updateDatasetV8
# changed in v9: Datasets are now identified by their id, not their name. The routes now need to pass a datasets id instead of a name and organization id tuple.
Member

Suggested change
# changed in v9: Datasets are now identified by their id, not their name. The routes now need to pass a datasets id instead of a name and organization id tuple.
# changed in v9: Datasets are now identified by their id, not their name. The routes now need to pass a dataset id instead of a name and organization id tuple.

# version log:
# version log:updateDatasetV8
# changed in v9: Datasets are now identified by their id, not their name. The routes now need to pass a datasets id instead of a name and organization id tuple.
# Requests to the TracingStore and DatasStore need to address a dataset based on it directoryName and organization id.
Member

Suggested change
# Requests to the TracingStore and DatasStore need to address a dataset based on it directoryName and organization id.
# Requests to the TracingStore and DatasStore need to address a dataset based on its directoryName and organization id.

Comment on lines +1146 to +1147
// Formatting the dataSourceId to the old format so that the backend can parse it.
// And removing the datasetId as the datastore cannot use it.
Member

Why can't this be adapted to the new dataset id in the backend?

Contributor Author

Because @fm3 and I agreed that the tracing & data store now identify a dataset by its directoryName and organization id. This is especially needed for the datastore, as it needs the orgaId and directoryName of a dataset to locate the folder where the dataset is saved on disk.

Otherwise, each request to the datastore would need a (cached) request to the core backend to resolve the datasetId to its orgaId & directoryName tuple.

Contributor Author

And the orgaId needs to be named team, and the directoryName needs to be named name, due to legacy reasons.
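
For illustration, a minimal TypeScript sketch of that conversion, assuming the new dataSourceId carries owningOrganization and directoryName (the type and function names here are hypothetical, not the PR's actual helpers):

// Hypothetical helper: convert the new dataSourceId shape into the legacy
// format understood by the datastore, dropping the datasetId entirely.
type DataSourceId = { owningOrganization: string; directoryName: string };
type LegacyDataSourceId = { team: string; name: string };

function toLegacyDataSourceId(dataSourceId: DataSourceId): LegacyDataSourceId {
  return {
    // "team" is the legacy field name for the organization id.
    team: dataSourceId.owningOrganization,
    // "name" is the legacy field name for the dataset's directory name on disk.
    name: dataSourceId.directoryName,
  };
}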

@@ -73,35 +76,32 @@ export async function cancelJob(jobId: string): Promise<APIJob> {
}

export async function startConvertToWkwJob(
datasetName: string,
organizationId: string,
datasetId: APIDataset["id"],
Member

This is not unified in this file, and in all other files I've seen, the type was simply denoted as string. I would prefer that as it's easier to see the type when looking at the code. If you agree, please search and replace.

Suggested change
datasetId: APIDataset["id"],
datasetId: string,

Contributor Author

Sure 👍, I thought this would make changing the type easier, but honestly that seems very unlikely and a simple string is definitely easier to read :)

@@ -31,7 +31,7 @@ type SegmentInfo = {

export function getMeshfileChunksForSegment(
dataStoreUrl: string,
datasetId: APIDatasetId,
datasetId: APIDataSourceId,
Member

Should be renamed to dataSourceId, also for the getMeshfileChunkData method.

Contributor Author

Nice, good catch. Thanks 🙏

frontend/javascripts/admin/task/task_create_form_view.tsx (two resolved review threads)
Comment on lines +280 to +284
static getRowKey(dataset: APIDatasetCompact) {
return dataset.id;
}
getRowKey() {
return this.data.name;
return DatasetRenderer.getRowKey(this.data);
Member

Why is this static getRowKey function needed? Same for the other one later in the file.

Contributor Author

Because I use them in ll. 582ff.

if (selectedDatasets.length > 0) {
      selectedRowKeys = selectedDatasets.map(DatasetRenderer.getRowKey);
    } else if (context.selectedFolder && "name" in context.selectedFolder) {
      selectedRowKeys = [FolderRenderer.getRowKey(context.selectedFolder as FolderItemWithName)];
    }

I need them to be static because in these lines I do not have access to a "row object" but still want to compute the row key. I could just map to the respective "key" property, but that would split the logic of what the key is across two locations. This way, when changing it later, there is only a single place where the key generation needs to be adjusted -> the code is DRYer :)
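
A tiny sketch of the pattern (class and type names are taken from the diff above but simplified; the rest is illustrative):

// Static so the key can be computed without a renderer instance, e.g. when
// mapping over selectedDatasets; the instance method just delegates to it.
type APIDatasetCompact = { id: string };

class DatasetRenderer {
  data: APIDatasetCompact;

  constructor(data: APIDatasetCompact) {
    this.data = data;
  }

  static getRowKey(dataset: APIDatasetCompact): string {
    return dataset.id;
  }

  getRowKey(): string {
    return DatasetRenderer.getRowKey(this.data);
  }
}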

Comment on lines +315 to +321
<Link
to={`/datasets/${this.data.owningOrganization}/${this.data.name}/view`}
title="View Dataset"
className="incognito-link dataset-table-name"
>
Test disambiguate
</Link>
Member

Can be removed?

Contributor Author

Yes, I will definitely do that before merging. But I would keep it for now to make sure that the disambiguation for old links still works.

Member

@daniel-wer daniel-wer left a comment

Second half of my PR review :) I haven't done any testing yet. Would you say now is a good time to do so?

I would also suggest to

  • Execute the screenshot tests once for this branch. Take care to change the branch name in the CI config (or local command), so that the tests are actually run on this branch.

@@ -367,14 +366,14 @@ class DatasetSettingsView extends React.PureComponent<PropsWithFormAndRouter, St
const dataSource = JSON.parse(formValues.dataSourceJson);

if (dataset != null && this.didDatasourceChange(dataSource)) {
await updateDatasetDatasource(this.props.datasetId.name, dataset.dataStore.url, dataSource);
await updateDatasetDatasource(dataset.directoryName, dataset.dataStore.url, dataSource);
Member

I think it's a useful warning and I would change it from info to warning :)

Comment on lines 495 to 498
const maybeDataSourceId = {
owningOrganization: dataset?.owningOrganization || "",
directoryName: dataset?.directoryName || "",
};
Member

Why does the dataset sometimes not exist? Is that after an upload or is there another reason?

Contributor Author

Because the DatasetSettingsView fetches the dataset given by its props.datasetId upon mounting, there are some rendering cycles in which the dataset does not yet exist in the state.

@@ -576,7 +590,7 @@ class DatasetSettingsView extends React.PureComponent<PropsWithFormAndRouter, St
children: (
<Hideable hidden={this.state.activeTabKey !== "defaultConfig"}>
<DatasetSettingsViewConfigTab
datasetId={this.props.datasetId}
dataSourceId={maybeDataSourceId}
Member

Does this component work with owningOrganization and directoryName sometimes being empty strings? On a quick glance it looks like it doesn't as it calls getMappingsForDatasetLayer(dataStoreURL, dataSourceId, layerName) for example.

Contributor Author

Hmm this never came up as an error 🤔

The magic seems to happen in line 636
<Spin size="large" spinning={this.state.isLoading}>
which wraps the whole tabs component. Therefore, this should be safe.

I changed the code to only render the DatasetSettingsViewConfigTab component if maybeDataSourceId has valid values. Otherwise, nothing is rendered and a comment explains that this case shouldn't occur.
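
As a rough sketch of that guard (hypothetical helper name; it assumes the empty-string fallbacks from the snippet above signal that the dataset has not been fetched yet):

// Hypothetical validity check for the fallback dataSourceId built above.
function hasValidDataSourceId(maybeDataSourceId: {
  owningOrganization: string;
  directoryName: string;
}): boolean {
  // Empty strings mean the dataset has not been loaded into the state yet,
  // so the config tab should not be rendered in that case.
  return (
    maybeDataSourceId.owningOrganization !== "" &&
    maybeDataSourceId.directoryName !== ""
  );
}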

frontend/javascripts/libs/utils.ts (resolved review thread)
/>
<Route path="/datasets/:datasetNameAndId/view" render={this.tracingViewMode} />
{/*maybe this also needs a legacy route?*/}
Member

Yes, I would think so. At least there are such links in the wild, for example in the connectome viewer repository, and I've also sent such links via email to clients.


export default {
const apiDataset: APIDataset = {
Member

🙏 🎉

@@ -1,18 +1,22 @@
// @ts-nocheck
Member

Awesome, thank you!

@@ -106,19 +106,38 @@ const datasetConfigOverrides: Record<string, PartialDatasetConfiguration> = {
},
};

const datasetNameToId: Record<string, string> = {};
datasetNames.map(async (datasetName) => {
Member

I think this test does not do what its description and assertion state. Instead it fetches the ids for all datasets. Doing that as part of a test seems hacky. Can't you use test.before for that?

Contributor Author

sure, good find :)
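
A minimal sketch of what that could look like with ava's test.before, assuming datasetNames is the existing list of test datasets and getDatasetIdFromName is a hypothetical helper that resolves a dataset name to its id via the API:

import test from "ava";

declare const datasetNames: Array<string>;
declare function getDatasetIdFromName(datasetName: string): Promise<string>;

const datasetNameToId: Record<string, string> = {};

test.before("Resolve dataset ids for all test datasets", async () => {
  // Fetch all ids up front so the actual tests can look them up synchronously.
  await Promise.all(
    datasetNames.map(async (datasetName) => {
      datasetNameToId[datasetName] = await getDatasetIdFromName(datasetName);
    }),
  );
});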

Comment on lines +80 to +87
jobsEnabled: true,
openIdConnectEnabled: false,
optInTabs: [],
publicDemoDatasetUrl: 'https://webknossos.org/datasets/scalable_minds/l4dense_motta_et_al_demo',
recommendWkorgInstance: true,
segmentAnythingEnabled: false,
taskReopenAllowedInSeconds: 30,
voxelyticsEnabled: false,
voxelyticsEnabled: true,
Member

Needs to be reverted before merging, together with the snap file.

| "created"
| "isEditable"
| "lastUsedByUser"
| "tags"
| "isUnreported"
>;
export type APIDatasetCompact = APIDatasetCompactWithoutStatusAndLayerNames & {
id?: string;
id: string; // Open question: Why was this optional?, The backend code clearly always returns an id ... :thinking:
Member

Could be legacy or an oversight, but I don't know. I think it's fine to change if typescript doesn't complain.

Contributor Author

Yeah, I did a double check in the backend and it seems to always send the datasetId 🤷

Contributor Author

@MichaelBuessemeyer MichaelBuessemeyer left a comment

Thanks for the feedback @daniel-wer. I applied or replied to each of your comments. Please have another look. Testing should hopefully work :)

Contributor Author

Yeah, I just gave it a quick double check: No changes at all :)

@@ -166,10 +166,12 @@ async function parseNmlFiles(fileList: FileList): Promise<Partial<WizardContext>
throw new SoftError("NML files should not be empty.");
}

const { trees: trees1, datasetName: datasetName1 } = await parseNml(nmlString1);
const { trees: trees2, datasetName: datasetName2 } = await parseNml(nmlString2);
// TODO: Now the datasetName stored in the nml is interpreted as the path of the dataset. -> call to legacy route is necessary.
Contributor Author

Yes, good find, this is an open TODO. @fm3 / @daniel-wer, could we please have a small discussion on this? 🙏

Contributor Author

@MichaelBuessemeyer MichaelBuessemeyer left a comment

Thanks for your review @daniel-wer 🎉

I think I applied your suggestions, replied to your comments, and so on :).
The dev instance https://allowdatasetrenaming.webknossos.xyz/ should also be ready for testing 🎉

Member

@fm3 fm3 left a comment

Getting close! I added a few more comments/replied to threads.

I think the two open backend topics are the dataset upload protocol and the ParamFailure thing for NML upload. I hope I'll have time for both on Monday.

@@ -21,6 +23,39 @@ object TaskParameters {
implicit val taskParametersFormat: Format[TaskParameters] = Json.format[TaskParameters]
}

case class TaskParametersWithDatasetId(taskTypeId: String,
Member

Suggested change
case class TaskParametersWithDatasetId(taskTypeId: String,
case class TaskParameters(taskTypeId: String,

I'd move the old TaskParameters to LegacyApiController.scala and name it LegacyTaskParameters. This way, the current code doesn't have to know about the backwards compatibility at all.

basePath: Option[String] = None)(implicit m: MessagesProvider,
ec: ExecutionContext,
ctx: DBAccessContext): Fox[NmlParseSuccessWithoutFile] = {
val foxInABox = try {
Member

Looks much better now :) Did you re-test that exceptions are caught as expected?

foundDatasetOpt match {
case Some(foundDataset) if foundDataset._dataStore == dataStore.name =>
updateKnownDataSource(foundDataset, dataSource, dataStore).toFox.map(Some(_))
case Some(foundDataset) => // This only returns None for Datasets that are present on a normal Datastore but also got reported from a scratch Datastore
updateDataSourceDifferentDataStore(foundDataset, dataSource, dataStore)
case _ =>
insertNewDataset(dataSource, dataStore).toFox.map(Some(_))
insertNewDataset(dataSource, dataSource.id.directoryName, dataStore).toFox
.map(Some(_)) // TODO: Discuss how to better handle this case
Member

Hmm, I see. Yes, it’s kind of unexpected. Unfortunately I don’t really have a good idea here. Since this was not introduced in this PR, I’d leave it as is for the time being.

Member

@daniel-wer daniel-wer left a comment

I was only able to do a quick test, but will continue testing on Monday :) I tested with the currently deployed dev instance, not sure whether that used the most recent code.

const { datasetId, datasetName } = getDatasetIdOrNameFromReadableURLPart(
match.params.datasetNameAndId,
);
if (datasetName) {
Member

I don't understand why this is needed (same for line 218). Are there links like /datasets/:datasetName/edit? I thought there are only links like /datasets/:organizationId/:datasetName/edit or /datasets/:datasetName-:datasetId/edit. Could you please clarify?

Contributor Author

Kinda yes: there are links like /datasets/:datasetName/view out in the wild. Very old links. We even have a special backend route to get the organization id for a dataset based only on its name, to resolve such old links. See: getOrganizationForDataset.

The problem I am trying to solve here is that the routes `/datasets/:datasetName/view` and `/datasets/:datasetId/view` have the same "matching string" and thus need to be distinguished programmatically. I modified `getDatasetIdOrNameFromReadableURLPart` to guess whether the passed argument for the match (`match.params.datasetNameAndId`) contains an actual datasetId or just a name. In case it is a name, I fall back to treating it as a legacy link and trying to resolve it to the newest schema.

As this is necessary for view links, the same problem might also exist for .../edit routes. I think it is very unlikely that there are edit links with the very old URL schema out there, but in case they are, this code takes care of it. I'd say this is not strictly needed, but the new return type of getDatasetIdOrNameFromReadableURLPart forces the caller to consider legacy links, and I think that is a good thing so it won't be forgotten in the future. Therefore, I'd also keep the handling of legacy /edit links, because it feels more complete to me. But feel free to argue.

Besides that, handling the very old /view links is a must :)
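
For reference, a simplified sketch of that guessing logic; the real getDatasetIdOrNameFromReadableURLPart may differ, and the assumption here is that dataset ids are 24-character hex ObjectIds:

// Hypothetical simplification of the disambiguation heuristic.
function getDatasetIdOrNameFromReadableURLPart(datasetNameAndId: string): {
  datasetId?: string;
  datasetName?: string;
} {
  // New URLs look like "<datasetName>-<datasetId>" (or just "<datasetId>"),
  // while very old URLs only contain "<datasetName>".
  const lastSegment = datasetNameAndId.split("-").pop() || "";
  const looksLikeObjectId = /^[0-9a-f]{24}$/.test(lastSegment);
  return looksLikeObjectId
    ? { datasetId: lastSegment }
    : { datasetName: datasetNameAndId };
}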

frontend/javascripts/router.tsx (resolved review thread)
condition,
`Dataset with name: "${datasetName}" does not look the same, see ${datasetName}.diff.png for the difference and ${datasetName}.new.png for the new screenshot.`,
);
t.true(condition, `Could not retrieve datasetId for dataset "${datasetName}".`);
Member

I'm not sure what t.true does if it's called from outside a test 🤔 Should this be an assertion instead or does it work as expected?
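
If it turns out that t.true is a no-op outside of a test, a plain throw would fail the setup loudly instead; a small sketch (hypothetical helper, assuming the id lookup map from above):

// Hypothetical alternative to t.true outside of a test: throw so the setup
// fails loudly when a dataset id could not be resolved.
function assertDatasetIdWasResolved(
  datasetNameToId: Record<string, string>,
  datasetName: string,
): void {
  if (datasetNameToId[datasetName] == null) {
    throw new Error(`Could not retrieve datasetId for dataset "${datasetName}".`);
  }
}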

Michael Büßemeyer added 3 commits November 15, 2024 14:34
- make new fields to reserve upload optional (for backward compatibility)
- fix find data request from core backend
rpc(s"${dataStore.url}/data/datasets/${dataset._id}/layers/$dataLayerName/findData")
rpc(s"${dataStore.url}/data/datasets/${dataset._organization}/${dataset.directoryName}/layers/$dataLayerName/findData")
Contributor Author

@frcroth thanks for adding the dataset upload test. This already saved me twice in this PR 🙈

=> One place is here, where a request for the dataset was accidentally addressed by the dataset id and not by the orga and the new directoryName.

Comment on lines 31 to 38
case class ReserveUploadInformation(
uploadId: String, // upload id that was also used in chunk upload (this time without file paths)
name: String, // dataset name
directoryName: String, // dataset directory name
newDatasetId: String,
directoryName: Option[String], // dataset directory name
newDatasetId: Option[String],
organization: String,
totalFileCount: Long,
filePaths: Option[List[String]],
Contributor Author

And the other occurrence is here, where I accidentally made the new upload info a requirement, which would have crashed uploads attempted with wklibs.
