Enforce min rotation between samples #62
base: master
Conversation
I very much like the idea of alerting the user to bad samples before calculation, so in principle I am strongly in favor. But I am concerned about both the implementation and the general approach.
- Why is only the orientation compared? Imagine that you move the end-effector or camera horizontally in a grid above the marker: the orientation does not change between samples, but the marker changes position in the camera view. It seems like this PR would refuse these samples, although (unless I am mistaken) they are informative and should not be discarded.
- Instead of refusing to record a new sample, a dialogue box should allow the user to override the warning.
- Why compare with only the previous sample? Why not go through all the other recorded samples? It's not an expensive or frequent operation.
I would suggest:
- Making a function transformsAreClose(transform1, transform2) to calculate the distance between two transforms (both in translation and rotation); a sketch follows this list
- Looping through the samples in memory when a new sample is considered, and confirming that the distance is sufficient
- If it is not, alerting the user, displaying the distance to the closest orientation, and letting them choose whether to keep the sample or skip it
I think that would be more readable (= easier to maintain) and easier to use.
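For illustration, a minimal sketch of what such a helper could look like, assuming Eigen::Isometry3d inputs as used elsewhere in this file; the threshold parameters and their units are hypothetical, not values from this PR:

```cpp
#include <Eigen/Geometry>

// Sketch only: returns true when two poses are near-duplicates, i.e. both
// their translation distance and their relative rotation angle fall below
// the given (hypothetical) thresholds.
bool transformsAreClose(const Eigen::Isometry3d& transform1, const Eigen::Isometry3d& transform2,
                        double max_translation /* meters */, double max_rotation /* radians */)
{
  // Euclidean distance between the two positions
  double translation_distance = (transform1.translation() - transform2.translation()).norm();

  // Angle of the rotation taking transform1's orientation to transform2's,
  // via angle-axis decomposition; always in [0, pi]
  Eigen::AngleAxisd relative_rotation(transform1.rotation().transpose() * transform2.rotation());
  double rotation_distance = relative_rotation.angle();

  return translation_distance < max_translation && rotation_distance < max_rotation;
}
```

The sampling code would then loop over the recorded samples and, if the new sample is close to any of them, show the override dialogue instead of silently rejecting it.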
moveit_calibration_gui/handeye_calibration_rviz_plugin/src/handeye_control_widget.cpp
@@ -370,6 +371,33 @@ bool ControlTabWidget::takeTransformSamples()
      // Get the transform of the end-effector w.r.t the robot base
      base_to_eef_tf = tf_buffer_->lookupTransform(frame_names_["base"], frame_names_["eef"], ros::Time(0));

      // Verify that sample contains sufficient rotation
      Eigen::Isometry3d base_to_eef_eig, camera_to_object_eig;
I'd spell out eigen
Thanks for the review, @felixvd. I actually implemented this because I had misdiagnosed a solver failure I was hitting (which turned out to be much simpler: #63). I have refined it some and I like your suggestions, so I'll try to incorporate them.
Actually, rotation is more informative than translation. See section III.C of Tsai & Lenz, 1989 (page 354, bottom left), although I admit I haven't fully digested this paper and I don't know how well their recommendations generalize to other solvers. Besides that, measuring translation similarity isn't very straightforward. There's the problem of scale: a 1 cm translation might be huge on one robot and tiny on another. There's also the difficulty of considering both the camera and effector translations: one could have a large translation while the other doesn't. (This can't happen with rotation: if the two are rigidly attached, the magnitude of the rotation will be the same for both, although the axes will differ.)
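To illustrate that claim: for the relative end-effector motion A and the relative camera motion B between the same two samples, the hand-eye equation AX = XB gives A = X B X⁻¹, so the rotation parts of A and B are similar matrices and share the same rotation angle (only their axes differ, by the hand-eye rotation). A small sketch, with hypothetical pose variables:

```cpp
#include <Eigen/Geometry>

// Angle (radians) of the relative rotation between two sampled poses.
double relativeRotationAngle(const Eigen::Isometry3d& pose1, const Eigen::Isometry3d& pose2)
{
  return Eigen::AngleAxisd(pose1.rotation().transpose() * pose2.rotation()).angle();
}

// Up to measurement noise, relativeRotationAngle(eef_sample1, eef_sample2)
// equals relativeRotationAngle(cam_sample1, cam_sample2), so checking either
// chain of transforms suffices.
```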
Good idea. I'll see if I can learn enough Qt to do this.
I just pushed this.
Since I'm inclined to continue ignoring the translations, I think it's sufficient to calculate the rotation angle inline, as it's just one line. I could be convinced otherwise, though.
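For reference, the inline computation under discussion might look like the following, assuming the stored sample has been converted to an Eigen::Isometry3d; camera_to_object_eig is taken from the diff below, while prior_eig is a hypothetical name:

```cpp
// One-line rotation-angle check between a previously recorded sample
// (prior_eig, an assumed Eigen::Isometry3d) and the candidate sample.
double angle = Eigen::AngleAxisd(prior_eig.rotation().transpose() * camera_to_object_eig.rotation()).angle();
```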
        return false;
      }
    }

    if (!object_wrt_sensor_.empty())
      for (const auto prior_tf : object_wrt_sensor_)
Missing & (auto&)
@@ -376,31 +376,31 @@ bool ControlTabWidget::takeTransformSamples()
      base_to_eef_eig = tf2::transformToEigen(base_to_eef_tf);
      camera_to_object_eig = tf2::transformToEigen(camera_to_object_tf);

      if (!effector_wrt_world_.empty())
        for (const auto& prior_tf : effector_wrt_world_)
Nit: Would use "previous" instead of "prior" to avoid potential mix-ups with "prior assumptions"/"prior knowledge" in machine-learning terminology.
The suffix _tf is unnecessary if you don't use _eig. Would still spell out _eigen.
I don't really know how translation differences affect solver performance either, but I assume there is non-zero information in two samples where the marker is in opposite corners of the camera view, even if their orientation is the same. You could use the marker scale to set the limit. Either way it is just a warning and I wouldn't insist on including it.
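A hypothetical sketch of that marker-scale idea; marker_size (edge length of the printed marker, in meters) and the 0.5 factor are assumptions, not values from this PR:

```cpp
#include <Eigen/Geometry>
#include <ros/console.h>

// Warn (but do not reject) when the new sample's translation is within a
// marker-scale-derived threshold of a previous sample. Sketch only.
void warnIfTranslationTooSimilar(const Eigen::Isometry3d& new_sample,
                                 const Eigen::Isometry3d& previous_sample, double marker_size)
{
  const double translation_threshold = 0.5 * marker_size;  // e.g. half a marker edge
  double translation_distance = (new_sample.translation() - previous_sample.translation()).norm();
  if (translation_distance < translation_threshold)
    ROS_WARN_STREAM("New sample is only " << translation_distance << " m from a previous sample.");
}
```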
Matter of taste, I suppose. If you keep it rotation-only and inline, I would make the variable name much more verbose (e.g. …).
You can copy-paste the dialog messagebox from here, by the way.
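The linked code isn't shown here, but an override dialog along those lines might look like this sketch, assuming a Qt widget context; the function name and message text are illustrative:

```cpp
#include <QMessageBox>

// Ask the user whether to keep a sample that is too close to an existing one;
// returns true if they choose to keep it. Sketch only.
bool confirmLowRotationSample(QWidget* parent, double angle_to_closest_sample)
{
  QMessageBox::StandardButton choice = QMessageBox::warning(
      parent, "Low rotation between samples",
      QString("This sample is only %1 rad of rotation away from the closest recorded sample, "
              "which may cause the solver to fail. Keep it anyway?")
          .arg(angle_to_closest_sample),
      QMessageBox::Yes | QMessageBox::No, QMessageBox::No);
  return choice == QMessageBox::Yes;
}
```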
If there is too little rotation between two samples, the solver can fail. Rather than waiting until the solver is run, this prevents the sample from being captured in the first place. It also provides a helpful error message.