Currently, exported DAQ messages are aggregated (using an average or a median, though any reduction would work). This is because each DAQ message arrives with many readings inside it but only a single timestamp for the whole message, so it's much easier to collapse the readings into one data point.
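A minimal sketch of that current behaviour, for reference (the function name and signature are illustrative, not the exporter's actual API):

```python
# Collapse a DAQ message's many readings into one (timestamp, value) point.
from statistics import mean, median

def aggregate_message(timestamp_ms, readings, method="mean"):
    reduce_fn = mean if method == "mean" else median
    return timestamp_ms, reduce_fn(readings)

# e.g. a message with 8 ADC readings becomes a single averaged data line
print(aggregate_message(1703642210069, [512, 514, 511, 515, 513, 512, 516, 514]))
```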
For more accurate and more useful data, we might want to break out each individual reading into its own data line with an interpolated timestamp. The interpolation logic should keep drift from the real timestamps low, while also not being thrown off by jitter in DAQ signalling.
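One possible way to do this, assuming we keep messages in arrival order, is to low-pass filter the inter-message period (so one jittery arrival doesn't skew spacing) and then spread each message's readings evenly across the period ending at that message's timestamp. This is just a sketch of the idea, not a decided design; the smoothing constant and the choice to pin the last reading to the real timestamp are assumptions:

```python
def interpolate_readings(messages, alpha=0.1):
    """messages: iterable of (timestamp_ms, [readings]) in arrival order.
    Yields (interpolated_timestamp_ms, reading) for every reading."""
    smoothed_period = None
    prev_ts = None
    for ts, readings in messages:
        if prev_ts is not None:
            raw_period = ts - prev_ts
            if smoothed_period is None:
                smoothed_period = raw_period
            else:
                # Exponential moving average: individual jitter has limited
                # effect, but the estimate still tracks the true message rate,
                # which keeps long-term drift from real timestamps low.
                smoothed_period += alpha * (raw_period - smoothed_period)
        period = smoothed_period if smoothed_period is not None else 0
        n = len(readings)
        for i, value in enumerate(readings):
            # Spread readings evenly inside the interval ending at this
            # message's timestamp; the last reading lands exactly on it.
            offset = period * (n - 1 - i) / n if n else 0
            yield ts - offset, value
        prev_ts = ts

# Example: three messages of four readings each, with some arrival jitter.
msgs = [(1000, [1, 2, 3, 4]), (1100, [5, 6, 7, 8]), (1210, [9, 10, 11, 12])]
for t, v in interpolate_readings(msgs):
    print(t, v)
```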
Relevant Slack thread with info on how this decompression might be done:
https://waterloorocketry.slack.com/archives/C07MX0QDS/p1703642210069349