Description
Hello,
When I try to run this code (src.encode) on my local system with an RTX 3060 on the Open Images validation images, it fails to produce correct results, i.e. decoded(encoded(img)) != img.
I am 100% sure that I did the install correctly, since I have access to a server with a 2080 where the code runs correctly without problems, except that I had to manually install grpcio==1.42.0 via pip. Note that I used conda to set this up, as in the original L3C paper; otherwise I followed the docker install method described here.
Furthermore, running it on a 30-series GPU makes the execution unbearably slow: loading the model, despite its small size, can take 20-30 minutes. (Here I am unsure whether this is a problem with my system itself or with the code; the slowdown happens both on native Linux and in WSL on a dual-boot system.)
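To put a number on the loading time, I measure it roughly like this (a sketch; the checkpoint path is a placeholder for whatever model file the code actually loads):

```python
import time
import torch

t0 = time.time()
# Placeholder path; in practice this is the L3C checkpoint the code loads.
state = torch.load("checkpoint.pt", map_location="cuda")
torch.cuda.synchronize()  # make sure lazy CUDA initialization is included in the timing
print(f"Model load took {time.time() - t0:.1f} s")
```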
Of course, running with the --decode flag fails the assertion, but when I look at the actual results of one encode -> decode cycle, I get either random noise or sometimes an image that looks similar to the original but with random colors for each pixel.
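To illustrate what I mean by incorrect results, this is roughly how I compare the decoded output against the original (a sketch; the file names are placeholders):

```python
import numpy as np
from PIL import Image

# Placeholder paths: an original validation image and the decoded output.
orig = np.asarray(Image.open("original.png").convert("RGB"))
dec = np.asarray(Image.open("decoded.png").convert("RGB"))

# For a lossless codec every pixel must match exactly.
mismatch = (orig != dec).any(axis=-1).mean()
print(f"Fraction of mismatching pixels: {mismatch:.4f}")
# 0.0 on the 2080; near 1.0 for the noise-like output I get on the 3060.
```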
I should add that in both cases torchac installs completely fine, with the import check returning true. On the server with the 2080, however, the model loads within a few seconds and the code returns the desired results, whereas my local system shows the strange loading times and incorrect results without any error message whatsoever, despite the conda environments being identical.
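For what it's worth, this is how I would sanity-check the arithmetic coder in isolation (a sketch assuming the encode_float_cdf / decode_float_cdf API from the torchac README, not the exact L3C pipeline):

```python
import torch
import torchac

L = 4                                          # alphabet size
sym = torch.randint(0, L, (1000,), dtype=torch.int16)

# Uniform CDF with L+1 entries per symbol, non-decreasing from 0 to 1,
# as the torchac README specifies for encode_float_cdf.
cdf = torch.linspace(0.0, 1.0, L + 1).expand(1000, L + 1).contiguous()

byte_stream = torchac.encode_float_cdf(cdf, sym, check_input_bounds=True)
sym_out = torchac.decode_float_cdf(cdf, byte_stream)

assert sym_out.equal(sym), "torchac round trip failed"
print("torchac round trip OK")
```

If this round trip succeeds on both machines, the coder itself is presumably fine and the difference lies upstream, in the model outputs fed into it.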
On my local system I primarily use WSL2 with the nvidia-container-toolkit to run docker, while the server uses native Linux and docker, but I do not believe this should be the issue.