cl::sycl::compile_program_error exception thrown when running the simple-vector-add sample

I’m building and running on a CentOS 7 host, using devtoolset-8, with the ptx64 backend and the Community Edition v1.3.0 of the compiler. Building works with no problems (using the -DCOMPUTECPP_BITCODE=ptx64 CMake argument).

When running simple-vector-add from the sdk, I get the following error:

./simple-vector-add
terminate called after throwing an instance of 'cl::sycl::compile_program_error'
Aborted

Running in the debugger and catching thrown exceptions gives the following stack trace:

gdb log
#0  0x00007ffff718092d in __cxa_throw () from /usr/lib64/libstdc++.so.6
#1  0x00007ffff778bc9a in void cl::sycl::detail::handle_sycl_log<cl::sycl::compile_program_error>(cl::sycl::detail::sycl_log&&) ()
   from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#2  0x00007ffff7783dfb in cl::sycl::detail::trigger_sycl_log(cl::sycl::log_type, char const*, int, int, cl::sycl::detail::cpp_error_code, cl::sycl::detail::context const*, char const*) () from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#3  0x00007ffff77ab23f in cl::sycl::detail::program::create_from_binary(unsigned char const*, unsigned long, std::string const&) ()
   from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#4  0x00007ffff77ad09e in cl::sycl::detail::program::build(unsigned char const*, unsigned long, std::string, std::string) ()
   from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#5  0x00007ffff77730e3 in cl::sycl::detail::context::create_program_for_binary(std::shared_ptr<cl::sycl::detail::context> const&, unsigned char const*, int, std::string) () from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#6  0x00007ffff77a7c99 in cl::sycl::program::create_program_for_kernel_impl(std::string, unsigned char const*, int, char const* const*, std::shared_ptr<cl::sycl::detail::context>, std::string) () from /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so
#7  0x0000000000406e5d in cl::sycl::program::create_program_for_kernel<SimpleVadd<int> > (c=...)
    at /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/include/SYCL/program.h:479
#8  0x0000000000405bbb in cl::sycl::handler::parallel_for_impl<SimpleVadd<int>, simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}::operator()(cl::sycl::handler&) const::{lambda(cl::sycl::id<1>)#1}>(cl::sycl::detail::index_array const&, cl::sycl::detail::index_array, simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}::operator()(cl::sycl::handler&) const::{lambda(cl::sycl::id<1>)#1} const&, int) (this=0x84d700, range=..., 
    globalOffset=..., functor=..., dimensions=1) at /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/include/SYCL/apis.h:460
#9  0x0000000000404c76 in cl::sycl::handler::parallel_for<SimpleVadd<int>, simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}::operator()(cl::sycl::handler&) const::{lambda(cl::sycl::id<1>)#1}, 1>(cl::sycl::range<1> const&, simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}::operator()(cl::sycl::handler&) const::{lambda(cl::sycl::id<1>)#1} const&) (this=0x84d700, range=..., functor=...)
    at /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/include/SYCL/apis.h:491
#10 0x0000000000403432 in simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}::operator()(cl::sycl::handler&) const (this=0x7fffffffdc80, cgh=...)
    at /home/leggett/work/gpu/sycl/sdk/computecpp-sdk/samples/simple-vector-add.cpp:60
#11 0x0000000000405df4 in cl::sycl::detail::command_group::submit_handler<simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}>(simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}, std::shared_ptr<cl::sycl::detail::queue> const&, cl::sycl::detail::standard_handler_tag) (
    this=0x7fffffffdcb0, cgf=..., fallbackQueue=std::shared_ptr<cl::sycl::detail::queue> (empty) = {...})
    at /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/include/SYCL/command_group.h:184
#12 0x0000000000404d22 in cl::sycl::queue::submit<simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}>(simple_vadd<int, 4ul>(std::array<int, 4ul> const&, std::array<int, 4ul> const&, std::array<int, 4ul>&)::{lambda(cl::sycl::handler&)#1}) (this=0x7fffffffddd0, cgf=...) at /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/include/SYCL/queue.h:368
#13 0x000000000040397c in simple_vadd<int, 4ul> (VA=..., VB=..., VC=...)
    at /home/leggett/work/gpu/sycl/sdk/computecpp-sdk/samples/simple-vector-add.cpp:52
#14 0x00000000004024c2 in main () at /home/leggett/work/gpu/sycl/sdk/computecpp-sdk/samples/simple-vector-add.cpp:70

ldd shows it’s not picking up anything unexpected:

> ldd ./simple-vector-add
linux-vdso.so.1 =>  (0x00007ffcbf7da000)
libComputeCpp.so => /home/leggett/work/gpu/sycl/ComputeCpp-CE-1.3.0-CentOS-x86_64/lib/libComputeCpp.so (0x00007f395c1bb000)
libOpenCL.so.1 => /usr/lib64/libOpenCL.so.1 (0x00007f395bf9c000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f395bc95000)
libm.so.6 => /usr/lib64/libm.so.6 (0x00007f395b993000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f395b77d000)
libc.so.6 => /usr/lib64/libc.so.6 (0x00007f395b3af000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f395b1ab000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f395af8f000)
/lib64/ld-linux-x86-64.so.2 (0x00007f395c94e000)

I had gotten previous versions of ComputeCpp with the PTX backend to work on this machine (most recently 1.1.7), but now that no longer works either. Since then, I’ve installed an AMD card as well, with the ROCm libraries, but the default EPEL OpenCL header and ocl-icd libraries are still in place, so I don’t think it’s an OpenCL conflict issue.

clinfo works, and shows the various cards/devices:

clinfo output

Number of platforms 2
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (2982.0)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Host timer resolution 1ns
Platform Extensions function suffix AMD

Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 10.2.95
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics
Platform Extensions function suffix NV

Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx900
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0
Driver Version 2982.0 (HSA1.1,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Available Yes
Device Profile FULL_PROFILE
Device Board Name (AMD) Vega 10 XL/XT [Radeon RX Vega 56/64]
Device Topology (AMD) PCI-E, 05:00.0
Max compute units 56
SIMD per compute unit (AMD) 4
SIMD width (AMD) 16
SIMD instruction width (AMD) 1
Max clock frequency 1622MHz
Graphics IP (AMD) 9.0
Device Partition (core)
Max number of sub-devices 56
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Compiler Available Yes
Linker Available Yes
Preferred work group size multiple 64
Wavefront width (AMD) 64
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 8573157376 (7.984GiB)
Global free memory (AMD) 8372224 (7.984GiB)
Global memory channels (AMD) 64
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 7287183769 (6.787GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 7287183769 (6.787GiB)
Preferred total size of global vars 8573157376 (7.984GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 26751
Max size for 1D images from buffer 65536 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 bytes
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 128
Max number of write image args 8
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 2992216473 (2.787GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory size per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max constant buffer size 7287183769 (6.787GiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 0ns (Wed Dec 31 16:00:00 1969)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) No
printf() buffer size 4194304 (4MiB)
Built-in kernels
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program

Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce GTX 1080 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 440.33.01
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Available Yes
Device Profile FULL_PROFILE
Device Topology (NV) PCI-E, 02:00.0
Max compute units 28
Max clock frequency 1607MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Compiler Available Yes
Linker Available Yes
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Address bits 64, Little-Endian
Global memory size 11721506816 (10.92GiB)
Error Correction support No
Max memory allocation 2930376704 (2.729GiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 1376256 (1.312MiB)
Global Memory cache line 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 268435456 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max constant buffer size 65536 (64KiB)
Max number of constant args 9
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) No
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics

NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, …) AMD Accelerated Parallel Processing
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, …) Success [AMD]
clCreateContext(NULL, …) [default] Success [AMD]
clCreateContext(NULL, …) [other] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx900
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx900

ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.12
ICD loader Profile OpenCL 2.2
NOTE: your OpenCL library declares to support OpenCL 2.2,
but it seems to support up to OpenCL 2.1 only.

as does computecpp_info:

computecpp_info output

ComputeCpp Info (CE 1.3.0)

SYCL 1.2.1 revision 3


Toolchain information:

GLIBC version: 2.17
GLIBCXX: 20150623
This version of libstdc++ is supported.


Device Info:

Discovered 2 devices matching:
platform :
device type :


Device 0:

Device is supported : UNTESTED - Device not tested on this OS
Bitcode targets : amdgcn
CL_DEVICE_NAME : gfx900
CL_DEVICE_VENDOR : Advanced Micro Devices, Inc.
CL_DRIVER_VERSION : 2982.0 (HSA1.1,LC)
CL_DEVICE_TYPE : CL_DEVICE_TYPE_GPU

Device 1:

Device is supported : UNTESTED - Vendor not tested on this OS
Bitcode targets : ptx64
CL_DEVICE_NAME : GeForce GTX 1080 Ti
CL_DEVICE_VENDOR : NVIDIA Corporation
CL_DRIVER_VERSION : 440.33.01
CL_DEVICE_TYPE : CL_DEVICE_TYPE_GPU

If you encounter problems when using any of these OpenCL devices, please consult
this website for known issues:
Ecosystem - Codeplay Software Ltd


Simple tests that just query the cl::sycl device info do work fine.

Any pointers on how to debug this further?

thanks, Charles.

It’s possible the selector is choosing another device.
Can you just confirm which device it is trying to run on, using code like this?
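Something along these lines would do it (a minimal sketch using the standard SYCL 1.2.1 info queries, not the exact code from the link):

#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // Let the default selector pick a device, then report what it chose.
  cl::sycl::queue q{cl::sycl::default_selector{}};
  auto dev = q.get_device();
  std::cout << "name: "
            << dev.get_info<cl::sycl::info::device::name>() << "\n"
            << "vendor: "
            << dev.get_info<cl::sycl::info::device::vendor>() << "\n"
            << "platform name: "
            << dev.get_platform().get_info<cl::sycl::info::platform::name>()
            << std::endl;
  return 0;
}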

I added some code to select the device and print out which device the queue is using, but it makes no difference: it gives the same error. The only time it doesn’t produce an error is when running on the host device.

Hi @leggett, what I mean is: does the printout show that it has chosen the NVIDIA device? I want to rule out that the selector has chosen the AMD device, which could cause a driver error since it won’t accept PTX instructions.

I’d also suggest catching and printing the output of the exception. There’s an example of how to do that here; a minimal version is sketched below.
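The pattern in that example is an async handler passed to the queue, roughly like this sketch (the handler name and structure here are illustrative):

#include <CL/sycl.hpp>
#include <iostream>

int main() {
  // Asynchronous errors are collected and delivered to this handler
  // when wait_and_throw() is called on the queue.
  auto async_handler = [](cl::sycl::exception_list errors) {
    for (const auto& e : errors) {
      try {
        std::rethrow_exception(e);
      } catch (const cl::sycl::exception& ex) {
        std::cout << "SYCL exception: " << ex.what() << std::endl;
      }
    }
  };
  cl::sycl::queue q{cl::sycl::default_selector{}, async_handler};
  // ... submit work here ...
  q.wait_and_throw();
  return 0;
}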

Yes, that’s what I meant - the printout shows that it’s chosen the correct device:

name: GeForce GTX 1080 Ti
vendor: NVIDIA Corporation
platform name: NVIDIA CUDA

When I catch the exception, I get:

Error: [ComputeCpp:RT0100] Failed to build program (<Build log for program 0x1c92310 device 0 (size 146):
ptxas application ptx input, line 4180; fatal   : Parsing error near '.addrsig': syntax error
ptxas fatal   : Ptx assembly aborted due to errors
>)

SYCL Runtime closed with the following errors:
SYCL objects are still alive while the runtime is shutting down

This probably indicates that a SYCL object was created but not properly destroyed.

BTW, using an exception handler as in your example, creating the queue with it, and then calling queue.wait_and_throw() after the kernel was submitted did NOT catch the exception. I had to put a try/catch block around the queue.submit() for it to be caught.
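The version that caught it looked roughly like this (a sketch; the kernel body is a stand-in for the real one):

#include <CL/sycl.hpp>
#include <iostream>

// The build error is thrown synchronously from submit(), so catch it there.
void run_kernel(cl::sycl::queue& q) {
  try {
    q.submit([&](cl::sycl::handler& cgh) {
      cgh.single_task<class stand_in>([]() {});  // placeholder kernel
    });
  } catch (const cl::sycl::exception& e) {
    std::cout << "Error: " << e.what() << std::endl;
  }
}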

I am stuck with the same error message on a Win10 machine with an NVIDIA GTX 1050 card. As in the report above, there is an additional GPU (Intel), but the output at the start of reduction.cpp confirms that the NVIDIA GPU is used. I could not get any demo running: all fail either with this exception, or with an abort when the exception is not handled.

This appears to be a regression bug! Rolling back to ComputeCpp 1.2.0 fixes the issue without any other changes.

Hi @leggett, @NNemec,

Firstly, I believe that if you add -fno-addrsig to the CMake cache variable COMPUTECPP_USER_FLAGS, you should be able to compile and run using ComputeCpp 1.3.0. I’d recommend this release, as it has much better builtin support than previous ones. We should publish this advice somewhere!
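For example, the SDK configure step would then look something like this (keeping whatever other arguments you already pass):

cmake .. -DCOMPUTECPP_BITCODE=ptx64 \
         -DCOMPUTECPP_USER_FLAGS=-fno-addrsig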

On the note about catching this error: the compilation of the device binaries happens on the main (user) thread, so the error won’t appear asynchronously through wait_and_throw(); it turns up on the user thread as a normal exception caught by the usual try/catch pair. In general, ComputeCpp throws errors related to building code or constructing objects synchronously, and most other errors asynchronously.

I hope this helps,
Duncan.

Hi @duncan:

Yes! Adding -fno-addrsig solved the problem. Thank you very much!

cheers, Charles.

Hi @duncan, thanks for the hint. Unfortunately, it does not address the problem for me. During compilation, I still see the message:

Intrinsic llvm.fmuladd.f64 has been generated in function SYCL_class_<something> which is illegal in SPIR and may result in a compilation failure [-Wsycl-undef-func]

And later on I get the aforementioned error when any kernel containing these functions is built before execution.

I remember similar warnings for other functions besides fmuladd, but I cannot reproduce them any more.

Greetings,
Norbert

I am also running into this with ComputeCpp-CE-2.1.0 on a CentOS 7 machine when trying to run ./samples/reduction as my first hello-world-style test.

Setting -DCOMPUTECPP_USER_FLAGS="-fno-addrsig" does not seem to have helped.

Any idea what direction I should take next to debug?

Thanks - Tim

[phsmai@camperona build_computecpp]$ ldd samples/reduction
linux-vdso.so.1 => (0x00007ffe43f83000)
libComputeCpp.so => /home/epp/phsmai/sycl/ComputeCpp-CE-2.1.0-x86_64-linux-gnu/lib/libComputeCpp.so (0x00007fe6974ef000)
libOpenCL.so.1 => /lib64/libOpenCL.so.1 (0x00007fe6972e8000)
libstdc++.so.6 => /home/epp/phsmai/sycl/gcc/gcc-7.5.0_install/lib64/libstdc++.so.6 (0x00007fe696f65000)
libm.so.6 => /lib64/libm.so.6 (0x00007fe696c63000)
libgcc_s.so.1 => /home/epp/phsmai/sycl/gcc/gcc-7.5.0_install/lib64/libgcc_s.so.1 (0x00007fe696a4c000)
libc.so.6 => /lib64/libc.so.6 (0x00007fe69667e000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fe69647a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fe69625e000)
libatomic.so.1 => /home/epp/phsmai/sycl/gcc/gcc-7.5.0_install/lib64/libatomic.so.1 (0x00007fe696056000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe697991000)

SYCL Sample code:
Reduction of an STL vector
Device Name: Quadro P1000
Platform Name NVIDIA CUDA
terminate called after throwing an instance of 'cl::sycl::compile_program_error'

Hi Tim,

The -fno-addrsig flag shouldn’t be needed any more; I believe one of my colleagues fixed that issue in the compiler. As Rod recommended, the next step is to get the actual device compilation log. I think the easiest way to do this is to use a configuration file:

echo "verbose_output=true" > computecpp.conf
COMPUTECPP_CONFIGURATION_FILE=computecpp.conf samples/reduction

That should produce more output on the command line, hopefully showing us why it doesn’t run!

Thanks Duncan,

I tried printing the e.what() of the thrown exception, which got me to:

Error: [ComputeCpp:RT0100] Failed to build program (<Build log for program 0x158c830 (size: 65) error : Binary format for key='0', ident='' is not recognized

That then got me to < Working example on Windows 10 + Nvidia >, and then I grepped from the README the important info I had missed before:

Lastly, the SDK will build targeting spir64 IR by default. This will
work on most devices, but won’t work on NVIDIA (for example). To that
end, you can specify -DCOMPUTECPP_BITCODE=target, which can be any of
spir[64], spirv[64] or ptx64.

Having removed -DCOMPUTECPP_USER_FLAGS and added -DCOMPUTECPP_BITCODE=ptx64, I now get compile warnings in many tests (Intrinsic foo has been generated in function bar which is illegal in SPIR), but this looks expected for NVIDIA as far as I can tell.

I now get

[phsmai@camperona build_computecpp]$ ./samples/reduction
SYCL Sample code:
Reduction of an STL vector
Device Name: Quadro P1000
Platform Name NVIDIA CUDA
SYCL Reduction result: 634
STL Reduction result: 634

Great!

Great, I’m glad it’s working now!