Affine transformations on the CPU are already fairly fast, but GPU-accelerated transformations would be useful in CPU-constrained environments like Google Colab, where a GPU is available but only two weak CPU cores, which can make the data pipeline a bottleneck.
Some open questions:
Would the existing transforms work out of the box on `CuArray`s (i.e. does `ImageTransformations.warp[!]` work on `CuArray`s)? What about masks, i.e. integer arrays? If not, what would be necessary to implement that?
Performance-wise, do the transforms need to be applied to a whole batch at once, or is it just as fast to apply them to samples individually? The former would require first resizing all images to the same size and wrapping them in a `Batch` wrapper item.
Can someone who has experience with image transformations on GPU chime in?
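The first question could be probed empirically. Here is a minimal sketch of such a check (untested; it assumes a CUDA-capable machine, and disabling scalar indexing so a silent CPU-style fallback fails loudly instead):

```julia
using CUDA, ImageTransformations, CoordinateTransformations, Rotations

img = CUDA.rand(Float32, 256, 256)            # image as a CuArray
tfm = recenter(RotMatrix(pi / 8), center(img)) # affine rotation about the center

# If warp falls back to element-wise (scalar) indexing on the GPU array,
# CUDA.jl will throw here, which would tell us a dedicated kernel is needed.
CUDA.allowscalar(false)
out = ImageTransformations.warp(img, tfm)
```

The same check with an `Array{Int}` mask would answer the integer-array question.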
Writing image transformations with KernelAbstractions.jl is probably the best approach, since you get CPU+CUDA+AMDGPU from just a single kernel (although you might want to write the CPU kernels differently for better performance). That sort of code should probably go in ImageTransformations.jl in some shape or form.
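To illustrate the KernelAbstractions approach, a nearest-neighbour affine warp could be written as one kernel and launched on whichever backend the output array lives on. This is a hypothetical sketch, not ImageTransformations' actual implementation; `warp_nn!`, `warp_nn`, and the inverse-map convention (output coordinates mapped back into the input) are made up for the example:

```julia
using KernelAbstractions

# One kernel for CPU, CUDA, and AMDGPU. `M..`/`t.` are the entries of the
# inverse 2x2 transform matrix and translation, passed as plain scalars.
@kernel function warp_nn!(out, @Const(img), M11, M12, M21, M22, t1, t2)
    i, j = @index(Global, NTuple)
    # map output pixel (i, j) back to input coordinates
    y = M11 * i + M12 * j + t1
    x = M21 * i + M22 * j + t2
    # nearest-neighbour lookup, clamped to the image bounds
    yi = clamp(round(Int, y), 1, size(img, 1))
    xi = clamp(round(Int, x), 1, size(img, 2))
    @inbounds out[i, j] = img[yi, xi]
end

# Launch on whatever backend `img` lives on (Array, CuArray, ROCArray, ...).
function warp_nn(img, M, t)
    out = similar(img)
    backend = get_backend(out)
    warp_nn!(backend)(out, img, M[1, 1], M[1, 2], M[2, 1], M[2, 2],
                      t[1], t[2]; ndrange = size(out))
    KernelAbstractions.synchronize(backend)
    return out
end
```

A real implementation would also need interpolation (bilinear at least) and an out-of-bounds fill value, which is where per-backend tuning would likely matter.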
@jsamaroo