Reverted 5641,5645. Fix root cause of upgrade error #5654
Conversation
Revert "… (apache#5641)". This reverts commit 4999191.
Revert "…ing Fate (apache#5645)". This reverts commit c63a556.
This change reverts commits 242438a from apache#5641 and 963150d from apache#5645 and fixes the root cause of the issues, which was that the scanref and fate tables were not being created during the upgrade.
Overall, seems fine. This would be easier once we collapse the two upgraders for 4.0 into a single one. Some stuff could be simplified and inlined.
expect(zrw.putPersistentData(isA(String.class), isA(byte[].class), isA(NodeExistsPolicy.class)))
    .andReturn(true).times(7);
expect(context.getPropStore()).andReturn(store).atLeastOnce();
expect(store.exists(isA(TablePropKey.class))).andReturn(false).atLeastOnce();
store.create(isA(TablePropKey.class), isA(Map.class));
expectLastCall().once();
You could probably use anyObject() for these parameters; it would avoid the need to suppress the warning, I think. Or you could get more specific and use eq() to pass specific objects. For the array, you could pass a more specific matcher.
At the very least, a comment explaining where the count of 7 comes from would be useful (3 for name/conf/status for 2 tables, plus 1 for the mapping update, or something like that).
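The reviewer's guessed breakdown of the `times(7)` count can be sanity-checked with a trivial sketch. The class and variable names below are illustrative only; the breakdown itself is the reviewer's speculation, not confirmed by the PR.

```java
// Hypothetical sanity check of the reviewer's guessed breakdown of the
// expected putPersistentData call count. Names are illustrative.
public final class UpgradePutCount {

  static int expectedPuts() {
    int tables = 2;          // scanref and fate
    int nodesPerTable = 3;   // name, conf, and status entries per table
    int mappingUpdates = 1;  // one table-name mapping update
    return tables * nodesPerTable + mappingUpdates;
  }

  public static void main(String[] args) {
    System.out.println(expectedPuts()); // prints 7
  }
}
```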
    Namespace.ACCUMULO.id(), SystemTables.FATE.tableName(), TableState.ONLINE,
    ZooUtil.NodeExistsPolicy.FAIL);
} catch (InterruptedException | KeeperException ex) {
  Thread.currentThread().interrupt();
I don't think you need to re-set the interrupted flag here, because you're exiting the current thread on the next line with the ISE. I'm also wondering if this boilerplate exception handling can be pushed into the preparePre4_0NewTableState method, so we can just inline this method. The namespace, NodeExistsPolicy, and online state can probably all be inlined.
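A minimal sketch of the pattern the reviewer is suggesting, with hypothetical names (the real preparePre4_0NewTableState signature is not shown in this excerpt): the helper absorbs the exception boilerplate, and because the IllegalStateException unwinds the current code path immediately, re-interrupting the thread first adds nothing.

```java
// Sketch only: ZooAction stands in for the real ZooKeeper call, and the
// helper name mirrors the one mentioned in the review comment.
public final class NewTableStateHelper {

  @FunctionalInterface
  interface ZooAction {
    void run() throws Exception; // covers InterruptedException and KeeperException
  }

  static void preparePre4_0NewTableState(ZooAction action) {
    try {
      action.run();
    } catch (Exception ex) {
      // No Thread.currentThread().interrupt() needed: the throw below
      // exits this code path immediately, per the review comment.
      throw new IllegalStateException("Error creating new table state", ex);
    }
  }

  public static void main(String[] args) {
    try {
      preparePre4_0NewTableState(() -> {
        throw new InterruptedException("simulated failure");
      });
    } catch (IllegalStateException expected) {
      System.out.println("wrapped: " + expected.getCause().getClass().getSimpleName());
    }
  }
}
```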