
Realistic BTOF digitization #1635

Open
wants to merge 72 commits into base: main

Conversation


@ssedd1123 ssedd1123 commented Oct 15, 2024

Briefly, what does this PR introduce?

It's an attempt to make the LGAD sensor response for the Barrel TOF more realistic by doing the following:

  1. Spread the charge deposited on directly hit strips to nearby strips.
  2. Perform digitization by converting the deposited charge into an electric pulse.
  3. Digitize the pulse by converting it to TDC and ADC values: ADC is proportional to the pulse height and TDC to the time at which the voltage crosses a certain threshold (see the sketch after this list).
  4. Convert the TDC and ADC values back to time and Edep by a linear fit.
  5. Return the BTOF hit point as "TOFBarrelRecHit". The position is estimated either as an ADC-weighted sum or as the center of the cell with the maximum ADC. The weighted average is used by default, but the behavior can be changed via parameters.
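
For illustration, here is a minimal C++ sketch of steps 3-4 (not the PR's actual implementation; the names, bin widths, and bit depths are assumptions): the ADC comes from the pulse peak and the TDC from the first threshold crossing.

// Illustrative only: simplified ADC/TDC extraction from a sampled pulse.
#include <algorithm>
#include <cstddef>
#include <vector>

struct AdcTdc { int adc; int tdc; };

AdcTdc digitizePulse(const std::vector<double>& pulse, // one sample per time bin
                     double threshold,                 // crossing threshold, same units as pulse
                     double adcPerUnit,                // conversion from pulse height to ADC counts
                     int adcRange = 256, int tdcRange = 1024) {
  AdcTdc out{0, tdcRange};                    // tdc == tdcRange means "no crossing found"
  double peak = 0.0;
  for (std::size_t i = 0; i < pulse.size(); ++i) {
    if (out.tdc == tdcRange && pulse[i] > threshold)
      out.tdc = static_cast<int>(i);          // first threshold crossing, in TDC bins
    peak = std::max(peak, pulse[i]);
  }
  out.adc = std::min(adcRange - 1, static_cast<int>(peak * adcPerUnit));
  return out;
}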

Noise, edge correction and time walk correction will be developed in the future.

What kind of change does this PR introduce?

Please check if this PR fulfills the following:

  • Tests for the changes have been added
  • Documentation has been added / updated
  • [x] Changes have been communicated to collaborators

Does this PR introduce breaking changes? What changes might users need to make to their code?

Does this PR change default behavior?

It replaces the content of "TOFBarrelRecHit" with the results of this new clusterization; it was originally just the Geant point plus random smearing. I have communicated with the reconstruction group and they advised that I submit this pull request so they can study the effect of having this new digitization/clusterization algorithm. They will decide whether I should save the data in a new branch or replace the original content.

ssedd1123 and others added 26 commits July 18, 2024 16:52
… distance relative to the center of the center cell. Now it's changed so spread is calculated relative to the true hit location.
…ighbors when cell ID from BitFieldDecoder for cellID is negative.
@github-actions github-actions bot added labels topic: tracking, topic: PID, topic: barrel, topic: digitization on Oct 15, 2024
app->Add(new JOmniFactoryGeneratorT<TOFPulseDigitization_factory>(
"BTOFPulseDigitization",
{"TOFBarrelPulse"},
{"TOFBarrelADCTDC"},
Member

Can we get this code running by default? What stops us from using this in the tracking?

Member

To have the code running, we need to add "TOFBarrelADCTDC" to src/services/io/podio/JEventProcessorPODIO.cc.

Author

Currently, TOFBarrelADCTDC is not used anywhere downstream, so I left it out. You can still write it out with "eicrecon -Ppodio:output_collections=TOFBarrelADCTDC output.edm4hep.root" etc. My plan is to add the clustering algorithm in a future pull request, where it will calculate the hit location and time from TOFBarrelADCTDC. The reconstructed location will replace the original TOFBarrelRecHits, so it will be used by the tracking algorithm downstream. I don't think we should write TOFBarrelADCTDC to the output by default because saving individual pulses takes up a (relatively) large amount of disk space, but if you believe it's a good idea to put it in JEventProcessorPODIO.cc, I don't think that would be too much of an issue either.

Member

I thought that even with charge sharing the digitization would output only a handful of RawTrackerHits. But yeah, if it's a large output, we can't have that saved, unfortunately.

Okay, how do we continuously test this then? Can you provide some unit tests like those in src/tests/algorithms_test?

Author

Okay, sorry, I remembered incorrectly. "TOFBarrelPulse" saves the entire pulse, but "TOFBarrelADCTDC" only saves the ADC and TDC values. There is charge spreading, so there could be 3-6 ADC-TDC pairs for each BTOF hit, but that is still just a handful of numbers. I take that back: I think we can save TOFBarrelADCTDC, but not TOFBarrelPulse.

We can still test it, e.g. check whether TDC vs. Geant hit time follows a roughly linear relationship. We can also check whether the true hit position vs. the ADC-weighted position follows a linear relationship.

Are the src/tests/algorithms_test tests run automatically every time a pull request is made? I think I can come up with some sort of linearity test.

Member

Are src/tests/algorithms_test tests being run automatically everytime a pull request is made?

Yes

I think I can come up with some sort of linearity test.

That would be great. Note that those tests are intended to check individual components (which is sort of useful on its own). If you want to set up a more complete chain, it's a bit more involved, but there are some examples in src/tests/omnifactory_test (you will need to set up a separate subdirectory in src/tests for that).

Author

I tried to write a simple test for BTOFChargeSharing, but I can't get cellIDPositionConverter to work. I tried to put a mock TPC hit in algorithmsInit.cc, but it seems that even if the readout and segmentation are set correctly, there are no physical coordinates corresponding to those mock hits. BTOFChargeSharing can't run because it needs to convert global coordinates to local ones for the charge-sharing calculation. I am sure cellIDPositionConverter is responsible for my error, because it always errors out when I ask it for a cell position.

I think TOFPulseGeneration and TOFPulseDigitization don't need cellIDPositionConverter and I can still test those two classes. Do you have any suggestions regarding BTOFChargeSharing? Thanks for any of your suggestions.

Contributor

What you could do for the charge sharing test is refactor the code so that there is a core function which takes the positions as arguments, having converted from CellID beforehand. This function would probably best be in place of your loop over neighbours.
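
A hedged sketch of what such a core function might look like (the signature, Gaussian spread model, and names are assumptions, not the PR's code); a function of this shape can be unit-tested without dd4hep:

#include <cmath>
#include <vector>

struct LocalPos { double x; double y; };   // local sensor coordinates [mm]

// Fraction of the deposited charge assigned to each neighbouring cell centre,
// assuming a Gaussian spread around the true hit position.
std::vector<double> shareCharge(double edep, const LocalPos& hit,
                                const std::vector<LocalPos>& neighbours,
                                double sigma_mm) {
  std::vector<double> charges;
  charges.reserve(neighbours.size());
  for (const auto& c : neighbours) {
    const double dx = c.x - hit.x, dy = c.y - hit.y;
    charges.push_back(edep * std::exp(-(dx * dx + dy * dy) / (2. * sigma_mm * sigma_mm)));
  }
  return charges;
}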

Author

I can test individual functions in BTOFChargeSharing, but ultimately cellIDPositionConverter is integral to the charge-sharing algorithm. The latest BTOF geometry drawing shows that not all sensors are the same size, as a few near the edges are only half-sized. This means I can't fix the number of rows and columns of cells in a sensor and use them to find all neighbors. Therefore, to check whether a cell ID is valid when I increment the "x" or "y" value in CartesianGridXY, I use cellIDPositionConverter::position. If it throws an exception, it indicates that the cell ID is invalid.
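
For reference, a minimal sketch of that validity check, assuming (as described above) that the converter throws for cellIDs with no physical volume:

#include <DDRec/CellIDPositionConverter.h>

// Returns true if the converter can place the cellID in the geometry.
bool isValidCell(const dd4hep::rec::CellIDPositionConverter& converter,
                 dd4hep::CellID id) {
  try {
    converter.position(id);   // throws for cellIDs outside the physical sensor
    return true;
  } catch (...) {
    return false;
  }
}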

I have implemented tests for TOFPulseGeneration and TOFPulseDigitization. Is there a way to get cellIDPositionConverter to return anything meaningful in algorithm_test? If not, I'll need to reconsider how to test this class.

Member

Yes, there is. The "mock" detectors are set up here:

auto detector = dd4hep::Detector::make_unique("");
dd4hep::Readout readout(std::string("MockCalorimeterHits"));
dd4hep::IDDescriptor id_desc("MockCalorimeterHits", "system:8,layer:8,x:8,y:8");
readout.setIDDescriptor(id_desc);
detector->add(id_desc);
detector->add(readout);
dd4hep::Readout readoutTracker(std::string("MockTrackerHits"));
dd4hep::IDDescriptor id_desc_tracker("MockTrackerHits", "system:8,layer:8,x:8,y:8");
//Create segmentation with 1x1 mm pixels
dd4hep::Segmentation segmentation("CartesianGridXY","TrackerHitsSeg", id_desc_tracker.decoder());
readoutTracker.setIDDescriptor(id_desc_tracker);
readoutTracker.setSegmentation(segmentation);
detector->add(id_desc_tracker);
detector->add(readoutTracker);

The cellID spec should be obvious how to modify, but let me know if you run into trouble.
I believe you can also add some fake volumes to make cellIDPositionConverter work, but I have not done so myself.
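
For instance, a hedged sketch of such a modification, reusing only the calls already shown above (the readout name "MockTOFHits", the field spec, and the segmentation name "TOFHitsSeg" are assumptions); note this only adds a readout and segmentation, and does not create the physical volumes that cellIDPositionConverter would also need:

// Assumed TOF-style mock readout, following the MockTrackerHits pattern above.
dd4hep::Readout readoutTOF(std::string("MockTOFHits"));
dd4hep::IDDescriptor id_desc_tof("MockTOFHits", "system:8,layer:8,x:8,y:8");
dd4hep::Segmentation segmentationTOF("CartesianGridXY", "TOFHitsSeg",
                                     id_desc_tof.decoder());
readoutTOF.setIDDescriptor(id_desc_tof);
readoutTOF.setSegmentation(segmentationTOF);
detector->add(id_desc_tof);
detector->add(readoutTOF);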

If you have some working tests please push them to the branch. If there are tests that you've implemented that should work, you can push too, we can debug here together.

@github-actions github-actions bot left a comment

⚠️ Clang-Tidy found issue(s) with the introduced code (1/1)

src/services/io/podio/JEventProcessorPODIO.cc (outdated review comment, resolved)
@github-actions github-actions bot dismissed their stale review November 5, 2024 21:15

No Clang-Tidy warnings found so I assume my comments were addressed

@simonge (Contributor) commented Nov 6, 2024

A couple of questions, mostly focussing on slides 30/31 of the presentation; please correct me if my understanding is wrong.
Presumably the main 40 MHz clock is running independently of the particle arrival, so particles could arrive at any point in the cycle rather than just the first half cycle? (At CERN they might have been somewhat synched.)

We are not 100% sure what the 40 MHz clock is synched to, but it will most likely be synched to the bunch crossings. It is just a guess on our part, since we have not been informed of the details by the other groups, but this scenario makes the most sense. A half cycle of the 40 MHz clock lasts 12 ns. For a particle to arrive outside of the half cycle, a particle traveling at roughly the speed of light would need a path length exceeding 12 ns*c = 3.6 m. However, BTOF has a radius of ~65 cm and spans ~-1.1 to +1.5 m along the beam axis. Even with the magnetic field, it's unlikely that the path length of any particle of interest would be so long that it takes more than half a cycle to arrive at BTOF. The other possibility is a particle traveling at half the speed of light, but such low-momentum particles are unlikely to begin with.

From my understanding, this 40 MHz clock will be free running but synched to the 100 MHz experiment clock such that they align every 50 ns. This will still distribute 4/5 of the 10 ns bunch crossings through different phases of the 40 MHz clock.

From what I can see the 10 bit tdc covers the range of possible arrival times and doesn't necessarily relate to the length of the landau pulse needing to be simulated. I can't quite picture how this 10 bit value is formed from the 40MHz clock and combinations of 140 and 120 ps delays to reach a clock binning of less than 20ps.

Two pulses of different periods can be combined to obtain a time measurement much more precise than the period of either pulse. I was told that the principle is similar to that of a Vernier scale, where two rulers with different markings can be combined, by sliding one against the other, to give measurements more accurate than either ruler can achieve on its own.

The precision achieved with a Vernier scale is called the Vernier constant, which is the difference in scale between the two rulers. In our case, the Vernier constant is 140 - 120 = 20 ps, which agrees with the clock binning of 20 ps.
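
A toy numerical illustration of the principle (this is just the arithmetic, not the ASIC logic): the start signal is followed through a 140 ps delay chain and the stop through a 120 ps chain; the count n at which the stop chain catches up gives the interval in steps of the 20 ps Vernier constant.

#include <cstdio>

// Toy Vernier TDC: returns the count n at which the fast (stop) chain edge
// first catches up with the slow (start) chain edge, so dt ~= n * 20 ps.
int vernierCount(double dt_ps, double slow_ps = 140., double fast_ps = 120.) {
  int n = 0;
  while (dt_ps + n * fast_ps > n * slow_ps) ++n;  // equivalent to n = ceil(dt / (slow - fast))
  return n;
}

int main() {
  std::printf("dt = 173 ps -> n = %d -> measured %.0f ps\n",
              vernierCount(173.), vernierCount(173.) * 20.);  // n = 9, i.e. 180 ps
  return 0;
}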

Thanks for the clarification, I think I am still missing something conceptual to reach 20 ps in all instances as opposed to values between 20 ps and 120 ps. My question remains that 25 ns/1024 = 24.4 ps. From the diagram in the presentation there is a hint that the two phases of the 40MHz clock can be treated independently so it would actually be 12.5ns/1024 which provides the minimum precision below the vernier constant.

In either case it appears at the moment you are dividing 100ns into 1024 bins when producing the pulse shape, is this what you want or should it be the 25/12.5 ns?

I don't think the way the min and max timing is set up at the moment will work well for hits where the global time is shifted outside of the range. The time of the simulation hit for samples in the future may cover a few microseconds so there will also need to be more than the 10 bit encoding in the digitization output. Presumably this is added on the ASIC somewhere outside of the pixel or upstream.

As mentioned in my last paragraph, we are not sure what the global clock is synced to. I guess there will be a "global clock" class in the future where the start time can be estimated? Unfortunately, development of such a class is beyond our expertise.

My suggestion here would be to digitize the global simulation time in units of 25 ns/12.5 ns and pass that forward as the "time" associated with each pulse and start the pulse x sampling from this value. Then add this in units of 20 ps to the tdc value output from the TOFPulseDigitization.
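
A minimal sketch of this suggestion (names and types are assumptions, not the PR's API): split the global hit time into a coarse count of 25 ns cycles plus a fine TDC within the cycle, so the reconstructed time is coarse*25 ns + tdc*20 ps.

#include <cstdint>
#include <utility>

// Assumed helper: coarse/fine splitting of a hit time (assumes t_ns >= 0).
std::pair<std::uint64_t, std::uint32_t> splitTime(double t_ns,
                                                  double coarse_ns = 25.0,
                                                  double fine_ps = 20.0) {
  const auto coarse = static_cast<std::uint64_t>(t_ns / coarse_ns);           // 25 ns cycle count
  const double remainder_ns = t_ns - coarse * coarse_ns;
  const auto tdc = static_cast<std::uint32_t>(remainder_ns * 1e3 / fine_ps);  // 20 ps bins within the cycle
  return {coarse, tdc};
}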

When/how long until the start/stop signals reset and another hit could be identified?

Given that the TDC values are generated by the 120 and 140 ps pulses, I believe each cell (a pixel/strip) in the TOF cannot distinguish multiple hits within 140 ps after the first hit. However, from a physics standpoint, it is very unlikely that a cell would be hit more than once within a single event. The multiplicity of e+A is not expected to be high to begin with.

Given the way the timing precision is measured with the delay circuitry, there may be a deadtime of at least 140 ps*1024 = 143 ns in a pixel, plus some additional time. It might be the case that several hits can be handled in the pixel at the same time with different delay pathways, or that when the start and stop signals cross, everything is read out and reset immediately, giving a variable deadtime. It is still probably rare to have a second signal in this time window.

The measurement of the Landau minimum presumably cannot be limited to the time window between the start and stop signals. Do you know if this is just measured over the next phase of the clock or can be updated for the hit for the full 143 ns delay?

Of course my understanding may still be way off.

Apologies for the more technical review rather than pure code review, I'd like to understand how closely this replicates what the ASIC is doing. This is also the first detector effects example of its type so it will be nice to provide a really clear example.

That is fine. I am under the impression that you want this class to be a more general LGAD digitization class rather than a TOF-only digitization class, is that correct? I may not be an expert in the other sub-systems, but I will do what I can to help.

It won't just be other LGAD/EICROC digitization, aspects may be reusable for MAPS/Timepix detectors too.

@ruse-traveler (Contributor) left a comment

Hi Tommy, thanks for the extensive development here! This is looking really good, and I think it's pretty close to ready!

I've left my initial round of comments in the code (they're pretty minor). I think @veprbl and @simonge have already provided excellent feedback, and I definitely second their suggestions.

src/algorithms/digi/BTOFChargeSharing.cc (outdated review comment, resolved)
src/algorithms/digi/BTOFChargeSharing.h (review comment, resolved)

// Parameters of AC-LGAD signal generation - Added by Souvik
double gain = 80;
double risetime = 0.45; // 0.02; //in ns
Contributor

Remind me: what's the 0.02 here?

src/algorithms/digi/TOFPulseDigitization.cc (outdated review comment, resolved)
src/algorithms/digi/TOFPulseDigitization.cc (outdated review comment, resolved)
src/algorithms/digi/TOFHitDigiConfig.h (outdated review comment, resolved)
src/algorithms/digi/TOFPulseGeneration.cc (outdated review comment, resolved)
namespace eicrecon {

double TOFPulseGeneration::_Landau(double x, double mean, double std) const {
double C = -113.755;
Contributor

What's the meaning behind the value of C here? Should this be a configurable parameter?

src/algorithms/digi/TOFPulseGeneration.cc (outdated review comment, resolved)
src/algorithms/digi/TOFPulseGeneration.cc (outdated review comment, resolved)
@ssedd1123 (Author)

A couple of questions, mostly focussing on slides 30/31 of the presentation; please correct me if my understanding is wrong.
Presumably the main 40 MHz clock is running independently of the particle arrival, so particles could arrive at any point in the cycle rather than just the first half cycle? (At CERN they might have been somewhat synched.)

We are not 100% sure what the 40 MHz clock is synched to, but it will most likely be synched to the bunch crossings. It is just a guess on our part, since we have not been informed of the details by the other groups, but this scenario makes the most sense. A half cycle of the 40 MHz clock lasts 12 ns. For a particle to arrive outside of the half cycle, a particle traveling at roughly the speed of light would need a path length exceeding 12 ns*c = 3.6 m. However, BTOF has a radius of ~65 cm and spans ~-1.1 to +1.5 m along the beam axis. Even with the magnetic field, it's unlikely that the path length of any particle of interest would be so long that it takes more than half a cycle to arrive at BTOF. The other possibility is a particle traveling at half the speed of light, but such low-momentum particles are unlikely to begin with.

From my understanding, this 40 MHz clock will be free running but synched to the 100 MHz experiment clock such that they align every 50 ns. This will still distribute 4/5 of the 10 ns bunch crossings through different phases of the 40 MHz clock.

I have asked other experts for their opinions, and it appears that I was mistaken. I thought the bunch crossings were synced to the 40 MHz clock. You are right, they are synced to the 100 MHz clock. Naively, what you've mentioned, "This will still distribute 4/5 of the 10 ns bunch crossings through different phases of the 40 MHz clock.", should be a natural consequence. However, that doesn't make sense to me.

Take, for example, the case where we begin at t = 0: the 40 MHz clock begins its first 12.5 ns half cycle. The next collision occurs at t = 10 ns. However, at t = 10 ns the first half cycle of the 40 MHz clock is not yet over; there are still 2.5 ns left before the end of the first half cycle. Does that mean we will save only the first 2.5 ns of the event? I asked multiple people at BNL; some say yes, some say such events are just discarded. Our group is not really sure what will happen in that case. From my limited understanding of the EPIC clock, it does seem like it will just save the remaining 2.5 ns to disk, but such events are useless. It doesn't make intuitive sense to do things that way.

Can you confirm whether my understanding is correct? In any case, I think such detailed simulation of the clock and DAQ behavior can be added in the future. Such timing effects affect much more than just BTOF. I believe this should not be a showstopper for this pull request, but we should keep this shortcoming in mind and circle back to it in the future.

From what I can see the 10 bit tdc covers the range of possible arrival times and doesn't necessarily relate to the length of the landau pulse needing to be simulated. I can't quite picture how this 10 bit value is formed from the 40MHz clock and combinations of 140 and 120 ps delays to reach a clock binning of less than 20ps.

Two pulses of different periods can be combined to obtain a time measurement much more precise than the period of either pulse. I was told that the principle is similar to that of a Vernier scale, where two rulers with different markings can be combined, by sliding one against the other, to give measurements more accurate than either ruler can achieve on its own.
The precision achieved with a Vernier scale is called the Vernier constant, which is the difference in scale between the two rulers. In our case, the Vernier constant is 140 - 120 = 20 ps, which agrees with the clock binning of 20 ps.

Thanks for the clarification, I think I am still missing something conceptual to reach 20 ps in all instances as opposed to values between 20 ps and 120 ps. My question remains that 25 ns/1024 = 24.4 ps. From the diagram in the presentation there is a hint that the two phases of the 40MHz clock can be treated independently so it would actually be 12.5ns/1024 which provides the minimum precision below the vernier constant.

In either case it appears at the moment you are dividing 100ns into 1024 bins when producing the pulse shape, is this what you want or should it be the 25/12.5 ns?

I confirmed with other experts that it should be 25 ns. You are correct, the TDC bin size is 25 ns/1024 = 24.4 ps, which also means the resolution can't be better than 24.4 ps no matter what. However, these are the settings our prototype uses (@eicsouvik please correct me if I am wrong). They are subject to change.

I have updated the code so the bin size is indeed 25 ns/1024. The TDC resolution in the original pull request was indeed too optimistic. It seems our goal of achieving a timing resolution of 30 ps will be quite hard to reach. But like I said, it's not set in stone yet; we may still change those numbers in the future. As with the previous point, I think having the framework in place is what matters, and this shouldn't be a showstopper.

I don't think the way the min and max timing is set up at the moment will work well for hits where the global time is shifted outside of the range. The time of the simulation hit for samples in the future may cover a few microseconds so there will also need to be more than the 10 bit encoding in the digitization output. Presumably this is added on the ASIC somewhere outside of the pixel or upstream.

As mentioned in my last paragraph, we are not sure what the global clock is synced to. I guess there will be a "global clock" class in the future where the start time can be estimated? Unfortunately, development of such a class is beyond our expertise.

My suggestion here would be to digitize the global simulation time in units of 25 ns/12.5 ns and pass that forward as the "time" associated with each pulse and start the pulse x sampling from this value. Then add this in units of 20 ps to the tdc value output from the TOFPulseDigitization.

I hesitate to implement any clock-related logic until I fully understand how EPIC is expected to behave. Like I mentioned in my first paragraph, our group does not have a good understanding of the EIC clock. If you know the details, I think we can arrange a Zoom/Skype meeting. I can implement those effects once we are certain of the expected behavior.

When/how long until the start/stop signals reset and another hit could be identified?

Given that the TDC values are generated by the 120 and 140 ps pulses, I believe each cell (a pixel/strip) in the TOF cannot distinguish multiple hits within 140 ps after the first hit. However, from a physics standpoint, it is very unlikely that a cell would be hit more than once within a single event. The multiplicity of e+A is not expected to be high to begin with.

Given the way the timing precision is measured with the delay circuitry, there may be a deadtime of at least 140 ps*1024 = 143 ns in a pixel, plus some additional time. It might be the case that several hits can be handled in the pixel at the same time with different delay pathways, or that when the start and stop signals cross, everything is read out and reset immediately, giving a variable deadtime. It is still probably rare to have a second signal in this time window.

Can you elaborate more on the factor of 1024 please? Why should the dead time be 140 ps*1024? Sorry for my ignorant questions as I am not familiar with the detailed circuitry design.

The measurement of the Landau minimum presumably cannot be limited to the time window between the start and stop signals. Do you know if this is just measured over the next phase of the clock or can be updated for the hit for the full 143 ns delay?

Of course my understanding may still be way off.

The Landau minimum is needed in the simulation to scale the pulse height, but I can just use a constant. After all, the height of the Landau template function is something that can be calculated numerically. Find the minimum over -inf < t < inf in init and just use that in process: is that what you are suggesting? That can be done.
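
A minimal sketch of that idea (helper names assumed, not the PR's code): scan the pulse template once in init and cache its extremum, so process only rescales by it.

#include <functional>

// Scan a pulse template over [tLo, tHi] once (e.g. in init) and return its
// extremum; the template is assumed negative-going (hence "Landau minimum").
double templateMinimum(const std::function<double(double)>& shape,
                       double tLo, double tHi, double step /* ns */) {
  double minimum = shape(tLo);
  for (double t = tLo; t <= tHi; t += step) {
    const double v = shape(t);
    if (v < minimum) minimum = v;
  }
  return minimum;
}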

Apologies for the more technical review rather than pure code review, I'd like to understand how closely this replicates what the ASIC is doing. This is also the first detector effects example of its type so it will be nice to provide a really clear example.

That is fine. I am under the impression that you want this class to be a more general LGAD digitization class rather than a TOF-only digitization class, is that correct? I may not be an expert in the other sub-systems, but I will do what I can to help.

It won't just be other LGAD/EICROC digitization, aspects may be reusable for MAPS/Timepix detectors too.

Got it. I will try my best.

@simonge (Contributor) commented Nov 8, 2024

Thank you for entertaining my curiosity on this, as you suggest we can push most of the effects I'm trying to understand into the future and get the core included much sooner.

From my understanding, this 40 MHz clock will be free running but synched to the 100 MHz experiment clock such that they align every 50 ns. This will still distribute 4/5 of the 10 ns bunch crossings through different phases of the 40 MHz clock.

I have asked other experts for their opinions, and it appears that I was mistaken. I thought the bunch crossings were synced to the 40 MHz clock. You are right, they are synced to the 100 MHz clock. Naively, what you've mentioned, "This will still distribute 4/5 of the 10 ns bunch crossings through different phases of the 40 MHz clock.", should be a natural consequence. However, that doesn't make sense to me.

Take, for example, the case where we begin at t = 0: the 40 MHz clock begins its first 12.5 ns half cycle. The next collision occurs at t = 10 ns. However, at t = 10 ns the first half cycle of the 40 MHz clock is not yet over; there are still 2.5 ns left before the end of the first half cycle. Does that mean we will save only the first 2.5 ns of the event? I asked multiple people at BNL; some say yes, some say such events are just discarded. Our group is not really sure what will happen in that case. From my limited understanding of the EPIC clock, it does seem like it will just save the remaining 2.5 ns to disk, but such events are useless. It doesn't make intuitive sense to do things that way.

Can you confirm whether my understanding is correct? In any case, I think such detailed simulation of the clock and DAQ behavior can be added in the future. Such timing effects affect much more than just BTOF. I believe this should not be a showstopper for this pull request, but we should keep this shortcoming in mind and circle back to it in the future.

This is why I originally asked about the hits being required to come in the first half of the clock cycle: I can't imagine the ASIC would be designed with this limitation; it must be able to accept single hits at any time. If another hit comes within the same 25 ns period from a subsequent bunch crossing, however, I can't see how the ASIC would be able to distinguish them, so the second hit will be discarded but might still contribute to the recorded ADC value; this should be very rare. What I am hoping to understand, though it can almost certainly be put aside to get the initial PR included, is what happens after the 25 ns in which the hit is recorded, i.e. the deadtime of the pixel until another hit can be measured.

Combining contributions from different hits can perhaps wait until later, as the charge sharing and timing resolution of individual hits are more important to get in first.

My suggestion here would be to digitize the global simulation time in units of 25 ns/12.5 ns and pass that forward as the "time" associated with each pulse and start the pulse x sampling from this value. Then add this in units of 20 ps to the tdc value output from the TOFPulseDigitization.

I hesitate to implement any clock-related logic until I fully understand how EPIC is expected to behave. Like I mentioned in my first paragraph, our group does not have a good understanding of the EIC clock. If you know the details, I think we can arrange a Zoom/Skype meeting. I can implement those effects once we are certain of the expected behavior.

I think this is important for this PR: the simulation and reconstruction should not be restricted to absolute time windows. I believe that at the moment the way tMin and tMax are used means all simulated hits have to occur in the 25 ns between these values. If we simulate an event starting at -100 ns, the hits will all fall outside this window. To include backgrounds and realistic time-frame readout for the streaming DAQ, we need to be able to simulate over a wider range. I will try to add some suggestions directly into the code for this, but could arrange a call next week if that is still useful.

Can you elaborate more on the factor of 1024 please? Why should the dead time be 140 ps*1024? Sorry for my ignorant questions as I am not familiar with the detailed circuitry design.

On slide 31 of the presentation, the way I interpret it, the hit arrival time is measured via n, where n is up to 1024, counted in periods of the 140 ps and 160 ps clocks until the value of n for both counters matches. This means it can take up to 1024*140 ps before the time of arrival can be determined. If only a single pair of n values can be held in the pixel at one time, this becomes the minimum deadtime (or the maximum possible deadtime, if the counters can be reset once a matching n is reached). Or, if they are not counters but separate delay circuitry, maybe multiple signals can be propagated through them at once. I am also not familiar with the circuitry design, so I am only making guesses based on the presentation.

The Landau minimum is needed in the simulation to scale the pulse height, but I can just use a constant. After all, the height of the Landau template function is something that can be calculated numerically. Find the minimum over -inf < t < inf in init and just use that in process: is that what you are suggesting? That can be done.

This is more related to dealing with multiple hits again, but it is still relevant for single hits. If a Landau pulse crosses the minimum threshold just before the end of the 40 MHz clock cycle, both the start and stop signals will come before the Landau reaches its minimum. This means there needs to be some other stop condition for measuring the ADC value of the hit; this might just be when the pulse first returns below the smallest threshold after the initial start. Otherwise, if a second hit with a larger ADC value comes in during my supposed 143 ns processing time, this value could be used instead of the initial hit's ADC, though again I doubt that would be the case.

auto& ADCs = adc_sum[cellID];
if(ADCs.size() == 0) ADCs.resize(nBins, 0);

mpv_analog = time + m_cfg.risetime;
Contributor

If time is smaller than tMin or greater than tMax, the signal will not be captured. This becomes a problem when the origin of the simulated collision is offset from t=0. A time series for each hit would probably best be produced individually over a range from tMin=time to tMax=time_signal_returns_below_threshold. A decision on whether to sum them could then be taken in a separate loop, or even in an intermediary, optional extra factory before digitization.

In order to properly line up the signals so they can be summed between hits this way, some of the digitization of the time needs to leak into the PulseGeneration, so tMin would actually need to be rounded down to the 25 ps grid.
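
A small sketch of the alignment suggested here (the function name and grid value are assumptions): snap each hit's sampling window start to a common grid so pulses from different hits can be summed bin by bin.

#include <cmath>

// Round a window start time down to a common sampling grid (e.g. 0.025 ns = 25 ps),
// so per-hit pulse samples from different hits fall on the same bins.
double snapDown(double t_ns, double grid_ns) {
  return std::floor(t_ns / grid_ns) * grid_ns;
}
// e.g. double tMinHit = snapDown(hitTime_ns, 0.025);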

Author

I think I understand at least some of your concern. If the hit time lies outside of tMin and tMax, the landau_min will not reflect the peak height. I can replace landau_min with double landau_min = this->_Landau(mpv_analog, mpv_analog, m_cfg.sigma_analog);

However, I am not sure I understand the last part about rounding tMin down to 25 ps. Since there is no clock in the current implementation, the TOF time is perfectly aligned with the collision time, i.e. t=0 is when the collision occurs and the TOF pulse begins. (I realized that tMin = 0.1 in TOFHitDigiConfig, but that is a mistake. It should have been 0.) Do you want me to change the code so that tMin is a variable instead, in preparation for a future clock class where the beginning time is taken as an input? I can do that, but if I misunderstood your comments, please let me know.

I don't think we should set tMax=time_signal_returns_below_threshold; we should just save the entire pulse until the stop signal (in my case, it's always 25 ns). If we want to discard pulses below threshold, we can do it downstream in other classes.

Contributor

The problem is that the collision time is not always at t=0; within a bunch crossing it can vary by around +/-100 ps from when the centres of the bunches cross z=0. On top of that, the simulation needs to deal with collisions from offset bunches, so the time associated with a hit from the simulation can take essentially any value and needs to be handled as such.

Author

Let me see if I understand correctly: hit.getTime() returns the Geant time, which always starts at zero when the collision occurs. But like you said, the bunch crossing can vary by around +/- 100 ps. To account for this, instead of offsetting the Geant time, you want me to offset the TOF pulse timing windows, is that correct?

I think similar arguments extend to offset bunches. Say a collision occurs 10 ns after the beginning of a 25 ns cycle, i.e. the detector only saves the final 15 ns to disk; wouldn't it make more sense to add 10 ns to hit.getTime() rather than to shift the pulse generation windows by 10 ns?

Contributor

What is incorrect here is the assumption that the Geant/simulation time starts at 0 at the start of a simulation. The Monte Carlo event samples used in the simulation already have times attributed to them, which are meant to replicate the distribution of collisions throughout time. All of the times recorded in the hits already include this offset, so your algorithm has to be able to account for an arbitrary time coming from the BTOF hits.

Author

I understand now. In that case, can you tell me how to fetch the event start time please? I can replace double time = hit.getTime(); with double time = hit.getTime() - eventStartTime;.
