Issue Report

I tried running Dorado correct on CPU from a prepared PAF file, but it was extremely slow: about 1 GB of corrected reads in 4 hours.

I would like to know the minimum GPU memory required. Is 16-20 GB enough for correction? GPUs with more than 20 GB of memory are quite expensive in my country.

Best wishes!

Run environment:
Source data type (e.g., pod5 or fast5 - please note we always recommend converting to pod5 for optimal basecalling performance): Raw reads and a mapped PAF file
Source data location (on device or networked drive - NFS, etc.): Local HDD
Details about data (flow cell, kit, read lengths, number of reads, total dataset size in MB/GB/TB): ~40 GB raw UL reads, ~120 GB PAF
Unfortunately, we don't have a better estimate for a minimum GPU requirement.
As you've read, we recommend GPUs with high VRAM for Dorado Correct because it is a computationally intensive task. If you're having trouble, you can reduce VRAM usage by setting the --batchsize argument during inference, but that may only go so far.
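For reference, a minimal sketch of the split workflow with a reduced inference batch size. The --to-paf/--from-paf two-step approach is documented for Dorado correct; the batch-size flag is spelled here as in this thread (--batchsize) and the value of 32 is illustrative, so confirm both against `dorado correct --help` for your version:

```sh
# Step 1 (CPU-friendly): compute all-vs-all overlaps once and save them as PAF.
dorado correct reads.fastq --to-paf > overlaps.paf

# Step 2 (GPU): run correction inference from the prepared PAF,
# lowering the batch size to fit into limited VRAM (value is illustrative).
dorado correct reads.fastq --from-paf overlaps.paf --batchsize 32 > corrected.fasta
```

Halving the batch size until the run fits in memory is a reasonable strategy, though as noted above it may only go so far on smaller GPUs.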