Enabling/disabling GPU during inference #7622

Use spacy.require_gpu() and spacy.require_cpu() to switch between devices. A model is loaded on the device that is active in the current context, so after switching you also have to reload the model. A plain Thinc model stays on the device it was loaded on and keeps working even if you switch the context afterwards, but models that use torch stop working if you switch between CPU and GPU after loading them.
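A minimal sketch of the switch-then-reload pattern described above. It uses spacy.prefer_gpu(), which falls back to CPU when no GPU is available (spacy.require_gpu() raises instead), and spacy.blank("en") as a stand-in pipeline so the example runs without a downloaded model; with a trained pipeline you would call spacy.load(...) at the same points.

```python
import spacy

# Activate a device first: prefer_gpu() returns True if a GPU was
# activated, False if it fell back to CPU. require_gpu()/require_cpu()
# are the strict variants.
on_gpu = spacy.prefer_gpu()

# The pipeline is created on whichever device is active at load time.
nlp = spacy.blank("en")  # stand-in for spacy.load("your_pipeline")
doc = nlp("This text is processed on the device chosen above.")

# To switch devices later, activate the other device and reload the
# pipeline. Torch-backed components do not survive a device switch
# made after loading; plain Thinc models stay on their original device.
spacy.require_cpu()
nlp = spacy.blank("en")  # reloaded, now on CPU
```

Note that reloading is the key step: changing the active device alone does not move an already-loaded model.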

Answer selected by BramVanroy
Labels: gpu (Using spaCy on GPU)