
chain quantizer GPU buffer error #36

Open
sepehr3pehr opened this issue Apr 26, 2019 · 4 comments
sepehr3pehr commented Apr 26, 2019

When running the chain quantizer I get the following error:

```
Training a chain quantizer
 -2 2.394428e+04... 0.60 secs updating C
ERROR: LoadError: ArgumentError: cannot take the CPU address of a GPU buffer
Stacktrace:
 [1] unsafe_convert(::Type{Ptr{Float32}}, ::CUDAdrv.Mem.Buffer) at /home/s2eghbal/.julia/packages/CUDAdrv/lu32K/src/memory.jl:20
 [2] macro expansion at /home/s2eghbal/.julia/packages/CUDAdrv/lu32K/src/execution.jl:171 [inlined]
 [3] #_cudacall#24(::Int64, ::Tuple{Int64,Int64}, ::Int64, ::CUDAdrv.CuStream, ::typeof(CUDAdrv._cudacall), ::CUDAdrv.CuFunction, ::Type{Tuple{Ptr{Float32},Ptr{Float32},Int32,Int32}}, ::Tuple{CUDAdrv.Mem.Buffer,CUDAdrv.Mem.Buffer,Int32,Int32}) at /home/s2eghbal/.julia/packages/CUDAdrv/lu32K/src/execution.jl:154
 [4] (::getfield(CUDAdrv, Symbol("#kw##_cudacall")))(::NamedTuple{(:blocks, :threads),Tuple{Int64,Tuple{Int64,Int64}}}, ::typeof(CUDAdrv._cudacall), ::CUDAdrv.CuFunction, ::Type, ::Tuple{CUDAdrv.Mem.Buffer,CUDAdrv.Mem.Buffer,Int32,Int32}) at ./none:0
 [5] #cudacall#23 at /home/s2eghbal/.julia/packages/CUDAdrv/lu32K/src/execution.jl:146 [inlined]
 [6] (::getfield(CUDAdrv, Symbol("#kw##cudacall")))(::NamedTuple{(:blocks, :threads),Tuple{Int64,Tuple{Int64,Int64}}}, ::typeof(CUDAdrv.cudacall), ::CUDAdrv.CuFunction, ::Type, ::CUDAdrv.Mem.Buffer, ::CUDAdrv.Mem.Buffer, ::Int32, ::Int32) at ./none:0
 [7] vec_add(::Int64, ::Tuple{Int64,Int64}, ::CUDAdrv.Mem.Buffer, ::CUDAdrv.Mem.Buffer, ::Int32, ::Int32) at /home/s2eghbal/.julia/dev/Rayuela/src/CudaUtilsModule.jl:75
 [8] quantize_chainq_cuda!(::Array{Int16,2}, ::Array{Float32,2}, ::Array{Array{Float32,2},1}, ::Array{Array{Float32,2},1}, ::UnitRange{Int64}) at /home/s2eghbal/.julia/dev/Rayuela/src/ChainQ.jl:242
 [9] quantize_chainq(::Array{Float32,2}, ::Array{Array{Float32,2},1}, ::Bool, ::Bool) at /home/s2eghbal/.julia/dev/Rayuela/src/ChainQ.jl:325
 [10] train_chainq(::Array{Float32,2}, ::Int64, ::Int64, ::Array{Float32,2}, ::Array{Int16,2}, ::Array{Array{Float32,2},1}, ::Int64, ::Bool) at /home/s2eghbal/.julia/dev/Rayuela/src/ChainQ.jl:401
 [11] run_demos(::String, ::Int64, ::Int64, ::Int64, ::Int64) at /home/s2eghbal/.julia/dev/Rayuela/demos/demos_train_query_base.jl:60
 [12] top-level scope at /home/s2eghbal/.julia/dev/Rayuela/demos/demos_train_query_base.jl:174 [inlined]
 [13] top-level scope at ./none:0
 [14] include at ./boot.jl:326 [inlined]
 [15] include_relative(::Module, ::String) at ./loading.jl:1038
 [16] include(::Module, ::String) at ./sysimg.jl:29
 [17] include(::String) at ./client.jl:403
 [18] top-level scope at none:0
in expression starting at /home/s2eghbal/.julia/dev/Rayuela/demos/demos_train_query_base.jl:173
```
@JerryChen97

Have you solved this problem? I just ran into nearly the same one and have no clue how to deal with it...

@una-dinosauria
Owner

I believe @sepehr3pehr did. If you give me a pointer to the solution I'll try to patch it tonight.

@JerryChen97

Oh, thanks, but I just managed to fix it myself. It turned out to be my own fault, not the package's...

@sepehr3pehr
Author

I am on a trip right now and unfortunately don't have access to my code, but the error is mainly due to a recent CUDAdrv update that differentiates between `Ptr` and `CuPtr`. I changed the pointer types for CuArrays from `Ptr` to `CuPtr` at line 76 of CudaUtilsModule, which solved the problem, but some other parts of the code needed the same modification.
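For anyone hitting the same error, here is a minimal sketch of the kind of change described above. The exact call in CudaUtilsModule.jl may differ (the function handle, buffer names, and kernel arguments below are hypothetical); the point is that newer CUDAdrv versions distinguish host pointers (`Ptr`) from device pointers (`CuPtr`), so kernel argument types for GPU buffers must be declared as `CuPtr`:

```julia
using CUDAdrv

# Before: declaring the arguments as Ptr makes CUDAdrv call
# unsafe_convert(Ptr{Float32}, <GPU buffer>), which throws
# "cannot take the CPU address of a GPU buffer".
cudacall(fun, Tuple{Ptr{Cfloat},Ptr{Cfloat},Cint,Cint},
         d_out, d_in, n, d;
         blocks=nblocks, threads=nthreads)

# After: CuPtr tells CUDAdrv these are device pointers, so the
# buffer's device address is passed to the kernel directly.
cudacall(fun, Tuple{CuPtr{Cfloat},CuPtr{Cfloat},Cint,Cint},
         d_out, d_in, n, d;
         blocks=nblocks, threads=nthreads)
```

As noted above, the same `Ptr` → `CuPtr` change has to be applied at every `cudacall` site that passes a GPU buffer, not just the one in the stack trace.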
