Calling Custom APIs and Creating a Workspace #53
With VDF you mean virtual dark field, right? That should be the easiest case, since the processing is simple and fast. So far we have run this sort of experiment with Jupyter notebooks. LiberTEM offers APIs that should allow you to orchestrate a complex setup and control several aspects of it in parallel.
From what you describe, the control and feedback will mostly happen after each complete scan, right? That could be done with repeated calls to

With DENS we have an ongoing collaboration to combine live processing with controlling their holders. So far I've developed a simple demo notebook based on a simulated detector and their device simulator. It would be great to have a real-world use case for that!

Regarding the scan control, it seems that this is completely handled by you through your signal generator API? It sounds like automating that should be possible.

Regarding drift correction, that may require control of the scan position / beam shift and/or stage control, right? Alternatively, the correction could be performed during processing by correcting the navigation position, as long as the area of interest stays in the field of view. As long as you have an API to control that on your microscope, it should be possible to integrate it into a control loop. We already have some limited experience in live microscope control and parameter tracking.

In summary, what you have in mind should be doable, and I'd be happy to help!
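For illustration, here is a minimal sketch of such a per-scan feedback loop using LiberTEM's public `Context`/`ApplyMasksUDF` API. The detector geometry, file paths, and the `estimate_drift` and `microscope` objects are all assumptions standing in for the actual hardware APIs:

```python
# Sketch: process each completed scan as a VDF, then feed a correction
# back to the microscope. `estimate_drift` and `microscope` are
# hypothetical placeholders for user-supplied code and a real control API.
import numpy as np
from libertem.api import Context
from libertem.udf.masks import ApplyMasksUDF

ctx = Context()
SIG = (128, 128)  # assumed detector shape

def ring_mask():
    # Annular mask over the detector for a virtual dark field
    y, x = np.ogrid[:SIG[0], :SIG[1]]
    r = np.sqrt((x - SIG[1] // 2) ** 2 + (y - SIG[0] // 2) ** 2)
    return (r >= 20) & (r < 40)

udf = ApplyMasksUDF(mask_factories=[ring_mask])

for scan_idx in range(10):
    # One complete scan lands on disk, then gets processed
    ds = ctx.load(
        "raw", path=f"scan_{scan_idx:03d}.raw",  # placeholder path
        nav_shape=(256, 256), sig_shape=SIG, dtype="float32",
    )
    res = ctx.run_udf(dataset=ds, udf=udf)
    vdf = res['intensity'].data[..., 0]    # (256, 256) VDF image
    shift = estimate_drift(vdf)            # hypothetical, e.g. cross-correlation
    microscope.apply_beam_shift(*shift)    # hypothetical control call
```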
Yes :) I figured as much.
Currently our setup scans continuously, but in the case of drift compensation we would probably expect the process to go something like:
In that case it isn't quite online processing, but rather shortly thereafter.
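As a concrete sketch of that "scan, then correct" cycle: consecutive survey images can be registered with phase cross-correlation, and the measured shift handed to the scan hardware. The acquisition and scan-offset helpers below are hypothetical stand-ins for whatever the signal generator API actually exposes; only `phase_cross_correlation` is a real scikit-image function:

```python
# Sketch: measure drift between consecutive scans and correct afterwards.
# `acquire_survey_image`, `experiment_running` and `set_scan_offset` are
# hypothetical placeholders for the real acquisition / scan-control APIs.
from skimage.registration import phase_cross_correlation

reference = acquire_survey_image()  # hypothetical acquisition helper
while experiment_running():         # hypothetical experiment state
    current = acquire_survey_image()
    shift, error, _ = phase_cross_correlation(reference, current)
    # shift is (dy, dx) in pixels; translate to scan coordinates as needed
    set_scan_offset(dy=shift[0], dx=shift[1])  # hypothetical scan-control call
```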
We would love to help out with this. I think that integrating a little closer would allow for some unique experiments, especially when considering something like the continuous scanning we talked about above. Overall I think this seems really promising. Thanks for the information regarding the
Hi Carter, is there any update on this? Any concrete steps for how we can help, etc.? I'm reviewing the open issues for the upcoming release. :-)
Hi Dieter, just bringing in @nhagopian, as he is probably the person in our group most likely to put this to use, particularly for things like the ptychography reconstruction.

Currently, implementation with regard to integrating with Direct Electron and DENS is being held up by Direct Electron's API for camera control. They are still using third-party software (StreamPix) for their acquisitions and don't currently have an API or a good framework for streaming data.

I added functionality to load data from both the DE16 and Celeritas cameras here, and I think @nhagopian has a workflow for loading data and then transferring it to LiberTEM. It's not necessarily live, but honestly a couple of minutes after acquisition is almost as good. I was hoping that with the new software from Direct Electron we could start thinking about live processing, but we might have to wait and see. Currently I think they are already overwhelmed with the amount of data being read and processed, so adding to that might make things fail.

If I remember correctly, the second part of this issue was about creating an extensible widget-based workflow similar to the current GUI. That is something where I would have to look into the LiberTEM code a bit more to get a feel for how easily it could be accomplished. On the microscope there is a lot of power in quick access to data that gives some idea of its quality. For very large 1-2 TB datasets, that can mean loading 5-10% of the data and operating on that, as sketched below. Something graphical or application-based is also easier than running things from scratch with Jupyter etc.
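Running on a 5-10% subset is something LiberTEM already supports directly via the `roi` argument to `run_udf`, which takes a boolean mask over the scan grid so only the selected frames are read. A minimal sketch, with the file path as a placeholder:

```python
# Sketch: run a UDF on ~5% of a large dataset via a random ROI,
# so a quick quality check does not require reading all frames.
import numpy as np
from libertem.api import Context
from libertem.udf.sum import SumUDF

ctx = Context()
ds = ctx.load("auto", path="big_dataset.mrc")  # placeholder path

# Boolean ROI over the navigation (scan) grid: keep ~5% of positions
rng = np.random.default_rng(seed=0)
roi = rng.random(tuple(ds.shape.nav)) < 0.05

res = ctx.run_udf(dataset=ds, udf=SumUDF(), roi=roi)
preview = res['intensity'].data  # summed pattern from the sampled frames
```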
Hey @CSSFrancis
We were in talks with DE in the past, but did not yet have the capacity to work on supporting their cameras. As I understand it, one of the plans is to make the data available directly in GPU memory, which would be quite nice for running a UDF directly, without having to copy data around again.
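For context, LiberTEM UDFs can already declare a CuPy backend, so the same kernel runs on the GPU when the data is there; `self.xp` resolves to `cupy` or `numpy` depending on the backend. A minimal sketch:

```python
# Sketch: a UDF that sums all frames and can run on GPU data via CuPy.
from libertem.udf import UDF

class GPUSumUDF(UDF):
    def get_result_buffers(self):
        return {'intensity': self.buffer(kind='sig', dtype='float32')}

    def get_backends(self):
        # Prefer CuPy (GPU), fall back to NumPy (CPU)
        return ('cupy', 'numpy')

    def process_tile(self, tile):
        # self.xp is cupy or numpy, matching where the tile lives
        self.results.intensity[:] += self.xp.sum(tile, axis=0)

    def merge(self, dest, src):
        dest.intensity[:] += src.intensity
```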
We will have a look at a few approaches to this next week: @matbryan52 has built a prototype based on Panel, which is part of HoloViz, and we have a few other approaches we want to try. We can update this issue afterwards and keep you in the loop.
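To give a flavour of the Panel-based direction (not the actual prototype, just a toy stand-in with a placeholder image and analysis function), an interactive widget wiring a slider to a recomputed view looks roughly like this:

```python
# Toy sketch of a widget-driven workflow with Panel (HoloViz).
# The data and the masking "analysis" are placeholders.
import numpy as np
import panel as pn
import holoviews as hv

pn.extension()
hv.extension('bokeh')

data = np.random.random((128, 128))  # placeholder result image

radius = pn.widgets.IntSlider(name='VDF outer radius', start=1, end=64, value=32)

def view(r):
    # Placeholder analysis: zero the image outside radius r of the centre
    y, x = np.ogrid[:128, :128]
    mask = (x - 64) ** 2 + (y - 64) ** 2 < r ** 2
    return hv.Image(np.where(mask, data, 0))

dashboard = pn.Column(radius, pn.bind(view, r=radius))
dashboard.servable()  # run with `panel serve this_script.py`
```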
I am interested in seeing if I can get some of our custom hardware/software working with LiberTEM to do some live processing.
Because there are many possible configurations, scan systems, and potential setups for a microscope, it would be interesting to think about how to best integrate APIs and create custom workspaces. In my particular case we are interfacing with the microscope, a DE Celeritas detector, and a DENS heating holder, as well as a wave generator for controlling the TEM scan system and the trigger for the DE camera.
As for the things that we would like to do:
I am not suggesting that all of these should be available through LiberTEM, but it would be nice to have one central control for this integration and to document how to work in APIs from different inputs; a rough sketch of what that could look like is below. I'm curious what your thoughts are on an ideal workspace for controlling live systems and on how best to integrate.
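One plausible shape for that "one central control" is to wrap each vendor API behind a small common interface, so the orchestration loop stays independent of the concrete hardware. All class and method names below are hypothetical; the real DE, DENS, and signal-generator APIs would be adapted to fit:

```python
# Sketch: hypothetical common interfaces for heterogeneous hardware,
# so a central experiment loop does not depend on vendor specifics.
from typing import Protocol
import numpy as np

class Detector(Protocol):
    def arm(self) -> None: ...
    def read_scan(self) -> np.ndarray: ...

class ScanController(Protocol):
    def start_scan(self) -> None: ...
    def set_offset(self, dy: float, dx: float) -> None: ...

class Holder(Protocol):
    def set_temperature(self, celsius: float) -> None: ...

def run_experiment(det: Detector, scan: ScanController, holder: Holder):
    holder.set_temperature(300.0)  # e.g. DENS heating holder, wrapped
    det.arm()                      # e.g. DE Celeritas, wrapped
    scan.start_scan()              # e.g. wave generator, wrapped
    data = det.read_scan()
    # ... hand `data` to LiberTEM for processing ...
    return data
```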