OpenCL compile error with ptx64 and 1.1.6

With version 1.1.6 I'm getting the following error at runtime, which I believe occurs when the OpenCL driver compiles the kernel for the device. If I use the 1.1.5 compiler it works fine. It's not specific to the simple-vector-add example in the computecpp-sdk; it happens with any example. It doesn't matter which version of CUDA I use, and I believe the system is using the latest CUDA driver from NVIDIA (a variable I can't control).

cgpu02:simple-vector-add$ srun clinfo -l
Platform #0: NVIDIA CUDA
`-- Device #0: Tesla V100-SXM2-16GB
cgpu02:simple-vector-add$ compute++ --version
Codeplay ComputeCpp - CE 1.1.6 Device Compiler - clang version 6.0.0 (based on LLVM 6.0.0svn)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /project/projectdirs/mpccc/dwdoerf/cori-gpu/ComputeCpp-CE-1.1.6-Ubuntu-16.04-x86_64/bin
cgpu02:simple-vector-add$ make -f Makefile.ptx64 -B
compute++ -O2 -std=c++11 -sycl-driver -sycl-target ptx64 -no-serial-memop -I/project/projectdirs/mpccc/dwdoerf/cori-gpu/ComputeCpp-CE-1.1.6-Ubuntu-16.04-x86_64/include -I/project/projectdirs/mpccc/dwdoerf/cori-gpu/cuda-toolkit/cuda_9.0.176/include -L/project/projectdirs/mpccc/dwdoerf/cori-gpu/cuda-toolkit/cuda_9.0.176/lib64 -lComputeCpp -lOpenCL -o simple-vector-add.exe simple-vector-add.cpp
cgpu02:simple-vector-add$ srun simple-vector-add.exe
terminate called after throwing an instance of 'cl::sycl::compile_program_error'
srun: error: cgpu02: task 0: Aborted
srun: Terminating job step 359147.14
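
For context, here is a minimal sketch (a placeholder kernel, not the SDK sample) of catching the exception so its message, which typically contains the device build log, is printed instead of the process aborting on an uncaught exception:

// Sketch only: wraps submission in try/catch and installs an async handler
// so both synchronous and asynchronous SYCL errors are reported.
#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // Rethrow any asynchronous exceptions collected by the runtime.
  auto async_handler = [](cl::sycl::exception_list exceptions) {
    for (const std::exception_ptr &e : exceptions) {
      try {
        std::rethrow_exception(e);
      } catch (const cl::sycl::exception &ex) {
        std::cerr << "Async SYCL exception: " << ex.what() << std::endl;
      }
    }
  };

  try {
    cl::sycl::queue q{cl::sycl::default_selector{}, async_handler};
    cl::sycl::buffer<float, 1> buf{cl::sycl::range<1>(4)};

    q.submit([&](cl::sycl::handler &cgh) {
      auto acc = buf.get_access<cl::sycl::access::mode::write>(cgh);
      cgh.parallel_for<class fill_kernel>(
          cl::sycl::range<1>(4),
          [=](cl::sycl::id<1> i) { acc[i] = 1.0f; });
    });
    q.wait_and_throw();
  } catch (const cl::sycl::compile_program_error &e) {
    // e.what() usually includes the OpenCL device compiler build log.
    std::cerr << "Device compilation failed: " << e.what() << std::endl;
    return 1;
  } catch (const cl::sycl::exception &e) {
    std::cerr << "SYCL exception: " << e.what() << std::endl;
    return 1;
  }
  return 0;
}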

Hi,
Thanks for the report, and for using ComputeCpp.
It looks like there is a regression in v1.1.6, so I'd suggest sticking with v1.1.5 for now while we implement a solution. Since ptx support is currently "experimental", we don't have fully automated testing in place to catch this type of issue.
I’ll inform you when we have a release available with the fix.
Rod.

Hi Rod, yes, using 1.1.5 for ptx64 is fine for now, and understood regarding ptx64 being experimental.

Doug

A late update, but just adding this comment to say that this issue is fixed in the latest release, 2.0.0.

Hi Rod, thanks for the follow up. I have tested it with 2.0.0 and can confirm this problem is fixed.