Add a component to reduce waveform readout window #2780
base: main
Conversation
Thanks Max: A couple of questions / comments from my side:
Minor comment on the test routine:
So, generally looks fine on my side.
Force-pushed from a44ec2b to 527c3d8
Just as a note: the reason we called it "Data Volume Reducer" and not "Zero Suppressor" was that we expected to potentially cut data not just spatially (per pixel), but also temporally (e.g. shorten waveforms). The "volume" here was not just "size of the data", but really the 3D volume formed by the waveform info (2D space + 1D time). So implementing this as a type of DataVolumeReducer makes a lot of sense. The notion of DVR comes from the original requirements elicitation process in 2016/17. If it is a DataVolumeReducer, it's already a step in ctapipe-process. However, right now we only support one, so it might be useful to support a series of them if you want to truncate the waveforms and also zero-suppress pixels.
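Just to illustrate the "series of them" idea: assuming the current mask-based interface, where each reducer returns a boolean mask of pixels to keep, chaining could amount to combining the masks. The helper name and the AND-combination rule below are illustrative choices, not existing ctapipe API.

```python
import numpy as np


def apply_reducer_chain(reducers, waveforms):
    """Illustrative only: combine several mask-returning reducers.

    Assumes each reducer is a callable taking a waveforms array of shape
    (n_pixels, n_samples) and returning a boolean mask of pixels to keep.
    Here a pixel survives only if *every* reducer keeps it; whether to
    AND or OR the individual masks is itself a design choice.
    """
    keep = np.ones(waveforms.shape[0], dtype=bool)
    for reducer in reducers:
        keep &= reducer(waveforms)
    return keep
```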
I would have implemented it as a volume reducer, but the current API of the DVR base class does not allow for that, as it returns a mask of pixels to keep. However, I think it would make sense to generalize the API so that a DVR can do something other than zero-suppressing whole pixels.
I guess it is a Data Volume Reducer in the sense that indeed it does reduce the volume of the data, but it's a bit special in that it's "pretending" that the camera produces shorter waveforms. So it would be "upstream" of DVR and Zero Suppression. But it doesn't matter so much, as long as it does what it says on the tin!
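As a minimal sketch of what a generalized reducer could look like if the base class operated on (and returned) the waveform block itself rather than only a pixel mask: the class name, parameters and array layout below are hypothetical, not the existing DataVolumeReducer API.

```python
import numpy as np


class WaveformWindowReducer:
    """Hypothetical generalized reducer that shortens the readout window.

    Instead of returning a pixel mask, it returns a new waveform array
    restricted to [start, start + length) samples, i.e. it "pretends"
    the camera produced shorter waveforms.
    """

    def __init__(self, start=0, length=20):
        self.start = start
        self.length = length

    def __call__(self, waveforms: np.ndarray) -> np.ndarray:
        # Assumes the last axis of `waveforms` is the sample (time) axis,
        # e.g. shape (n_channels, n_pixels, n_samples).
        return waveforms[..., self.start:self.start + self.length]
```

One consequence of slicing like this is that anything downstream assuming the original number of samples would need to be kept consistent, which is part of why the placement question further down matters.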
Force-pushed from 11db5ce to d9e9d8c
Still seems fine!
(Next step, to add a "GRB HaST" component in another pull request?)
We don't simulate dead time / busy telescopes, so what would this component actually do?
That's true, for now. I'd like to add that with my Multiplicity-Energy look-up table (which is really a TelescopeTriggerPattern-Energy LUT). So you're right: as long as there are no busy telescopes, there can be no change to the HaST simulation. Should I make a pull request to include the busy behaviour? That said, this is a step which should happen after the main ctapipe production, requiring at a minimum the standard background simulations as input (maybe protons scaled up to the CR rate is enough). For now, maybe doing a ctapipe production with only the NectarCAMs having a reduced window is enough, and I'll apply the "GRB HaST" downstream (but I'd like some feedback from LST/TIB/DPPS/ASWG on whether my results and proposal seem reasonable).
The question is now where best to apply this. I see a couple of options:

- In `ctapipe-process`; this would have the advantage that you could even just apply the reducer and write out reduced waveforms.
- In the `CameraCalibrator.dl0_to_dl1` step.

I would lean towards directly adding it in process (see the sketch below).
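To make the two options concrete, here is a rough, self-contained sketch of applying a window reduction just before the dl0→dl1 calibration in a hand-written event loop; the helper function, the input file name, and the exact container path are assumptions, not the final ctapipe-process integration.

```python
from ctapipe.calib import CameraCalibrator
from ctapipe.io import EventSource


def reduce_window(waveform, start=5, length=20):
    """Hypothetical helper: keep only `length` samples starting at `start`."""
    return waveform[..., start:start + length]


# "gamma_test.simtel.gz" is just an example input file name.
with EventSource("gamma_test.simtel.gz") as source:
    calibrator = CameraCalibrator(subarray=source.subarray)
    for event in source:
        for tel_id in event.r1.tel:
            # Shorten the readout window before calibration; in practice this
            # could be restricted to specific telescope types (e.g. NectarCAM).
            # Assumes the R1 waveform lives at event.r1.tel[tel_id].waveform.
            r1 = event.r1.tel[tel_id]
            r1.waveform = reduce_window(r1.waveform)
        calibrator(event)  # dl0 -> dl1 then proceeds on the reduced waveforms
```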