I would like to perform adaptive sequencing ("Read Until") on a dataset with relatively short reads. To gain a significant benefit from selective sequencing on reads this short, each read needs to be ejected as soon as possible, so I would like to reduce MinKNOW's default read chunk size.
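For context, here is a back-of-the-envelope sketch (my own illustration, not anything from MinKNOW itself) of why the chunk size bounds the decision latency. It assumes the MinION's nominal 4 kHz sampling rate, which may differ on other devices:

```python
# Rough per-chunk decision latency for a few chunk sizes, assuming a
# 4 kHz sampling rate (the MinION's nominal rate; adjust for your device).
SAMPLE_RATE_HZ = 4000

for chunk_samples in (2000, 400, 100):
    latency_ms = chunk_samples / SAMPLE_RATE_HZ * 1000
    print(f"{chunk_samples:>5} samples/chunk -> {latency_ms:.0f} ms of signal per decision")
```

So at a 2000-sample chunk size a read accrues roughly half a second of signal before each chunk is even delivered, whereas 400 samples cuts that to about 100 ms.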
According to the comments in this repository, the configuration is stored in minknow/conf/app_conf. This is incorrect and should be updated; this information is now stored in minknow/conf/tuning_params.toml.
Would you additionally be able to explain the difference between the following configuration options in minknow/conf/tuning_params.toml (a sketch for inspecting their current values follows the list):
raw_data_intermediate
raw_meta_data_intermediate
read_data_intermediate
event_data_intermediate
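To check what a given installation currently uses, something like the sketch below should work. It assumes Python 3.11+ for the stdlib `tomllib` module; I don't know the exact section layout inside tuning_params.toml, so it simply walks every table looking for the key names above:

```python
import tomllib  # stdlib TOML parser, Python 3.11+

# The four chunk-size keys discussed above.
KEYS = {
    "raw_data_intermediate",
    "raw_meta_data_intermediate",
    "read_data_intermediate",
    "event_data_intermediate",
}

def find_keys(table, path=()):
    """Recursively yield (dotted.path, value) for any matching key."""
    for key, value in table.items():
        if isinstance(value, dict):
            yield from find_keys(value, path + (key,))
        elif key in KEYS:
            yield ".".join(path + (key,)), value

with open("minknow/conf/tuning_params.toml", "rb") as fh:
    for dotted, value in find_keys(tomllib.load(fh)):
        print(f"{dotted} = {value}")
```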
Are there any limits I should be aware of on how low these parameters can safely be set? I assume that once the data packets become too fragmented it becomes difficult to write them out in real time. It also looks like the default values are already significantly lower than they used to be (400 samples rather than 2000). Is that correct?