
Applying INT8 quantization, just like in Dorado #388

Open
wtrinh8 opened this issue Apr 14, 2024 · 2 comments
wtrinh8 commented Apr 14, 2024

Hello all,

I was curious how actively Bonito is still being developed, as I read that Dorado has now converted a majority of its neural network code to INT8.

I was interested in experimenting with the current Dorado model, but I could not find a way to export the C++ neural network model to ONNX.

So, is there any chance that optimization features from Dorado will be applied to Bonito as well?

Kind regards


iiSeymour (Member) commented

Hi @wtrinh8

All models that dorado provides come from (and are available in) bonito, so it's still active. But note from the README:

For anything other than basecaller training or method development please use dorado.

So, no, most optimisation features from Dorado will not be applied to Bonito.

If you want to export a model to ONNX you are better off doing it from bonito.

HTH
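
For reference, a minimal sketch of what an ONNX export from bonito might look like. This is plain `torch.onnx.export` on the loaded PyTorch model; the `load_model` call, the checkpoint path, and the dummy input shape are assumptions about bonito's API and model config, so adjust them to your setup:

```python
import torch
from bonito.util import load_model  # assumed bonito helper for loading a model directory

# Hypothetical checkpoint directory; use your own trained/downloaded model.
model = load_model("path/to/model_dir", device="cpu")
model.eval()

# Dummy raw-signal input, (batch, channels, samples); the exact shape
# depends on the model config.
dummy = torch.randn(1, 1, 4000)

torch.onnx.export(
    model,
    dummy,
    "bonito_model.onnx",
    input_names=["signal"],
    output_names=["logits"],
    # Allow batch size and signal length to vary at inference time.
    dynamic_axes={"signal": {0: "batch", 2: "samples"}},
    opset_version=17,
)
```

Custom or fused layers may not trace cleanly, in which case they would need to be swapped for their stock PyTorch equivalents before export.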

Dragonsky8 commented

Hi, I was looking into Koi's lstm.py file and saw you can enable quantisation there via a flag. However, the [email protected] model does not use the LSTM class from Koi. Is it possible to use the LSTM from Koi instead of the LSTM from Torch?
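
Not an answer to the Koi question, but for anyone experimenting with INT8 LSTMs without Koi at all: stock PyTorch ships weight-only dynamic quantization for `nn.LSTM`. A minimal sketch (this is plain PyTorch's generic path, not Koi's fused kernels, and the layer sizes here are arbitrary):

```python
import torch
import torch.nn as nn

# Arbitrary example LSTM; sizes are illustrative, not a bonito config.
lstm = nn.LSTM(input_size=512, hidden_size=512, num_layers=1)

# Convert the LSTM's float weights to INT8; activations remain float and
# are quantized on the fly at each forward pass ("dynamic" quantization).
qlstm = torch.ao.quantization.quantize_dynamic(
    lstm, {nn.LSTM}, dtype=torch.qint8
)

x = torch.randn(100, 1, 512)  # (seq_len, batch, features)
out, (h, c) = qlstm(x)
print(out.shape)  # torch.Size([100, 1, 512])
```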
