Passing parameters to kernel; synchronization

Hello,
We are investigating gap junction channels in cells, both during cardiac arrhythmia and in the propagation of excitation in neural networks.
I am trying to implement our model in SYCL (solving Noble-like equations with Euler’s method), and I have two questions:

  1. How do I pass parameters to a SYCL kernel? Is it only possible by using buffers?
    I want to pass the iteration number and some other parameters to the kernel.
  2. How do I synchronize work-items within a kernel? By adding something like a ‘barrier’?

I split our model’s algorithm into two kernels, to make sure that all operations in kernel_1 are complete before the kernel_2 computation begins.
The main loop of the CUDA version of the algorithm, in host code:

	for (uint i = 0; i < N_sim; i++) {
		uint i_reg = (N_reg * i) / N_sim;  // map simulation step to recording index
		kernel_1<<<grid, block>>>(dev_n, dev_m, dev_h, dev_I, dev_V_YX,
		                          i, i_reg, dt_sim, cellCountX, cellCountY);
		cudaDeviceSynchronize();  // wait for kernel_1 before launching kernel_2
		kernel_2<<<grid, block>>>(dev_I, dev_V_YX, dev_V_tYX_reg,
		                          i, i_reg, dt_sim, dt_reg, cellCountX, cellCountY);
		cudaDeviceSynchronize();
	}

The model is based on Noble-like equations combined with a gap junction model that uses a Markov chain approach.
Our paper about a similar model: http://jgp.rupress.org/content/147/3/273.abstract

Kestutis,
Lithuanian University of Health Sciences

Hi Kestutis,

I see you’ve made an issue on GitHub, but for completeness I’ll say that you pass a scalar parameter to a SYCL kernel simply by using the host variable inside the kernel. If you are using lambdas as your kernel functions, the easiest way is just to reference the iteration count in your kernel code and let the lambda capture it by value. If you’re using function objects, then you should add a member variable of the type you want and initialise it with the value you care about (see the sample “Using function objects” for some code). Buffers are only needed for array data; accessors are the SYCL equivalent of (say) device pointers in CUDA or __global pointers in OpenCL.
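As a rough illustration, here’s a minimal sketch of a kernel lambda capturing scalars by value, using SYCL 1.2.1-style buffers and accessors. The names and the update rule are made up, not taken from your model:

	#include <CL/sycl.hpp>
	namespace sycl = cl::sycl;

	// `i` and `dt_sim` are plain scalars captured by value by the kernel
	// lambda; only the array data needs a buffer and an accessor.
	void euler_step(sycl::queue &q, sycl::buffer<float, 1> &V_buf,
	                unsigned int i, float dt_sim) {
		q.submit([&](sycl::handler &cgh) {
			auto V = V_buf.get_access<sycl::access::mode::read_write>(cgh);
			cgh.parallel_for<class euler_step_kernel>(
				sycl::range<1>(V_buf.get_count()), [=](sycl::id<1> idx) {
					// i and dt_sim arrive here as kernel arguments,
					// copied when the lambda was created on the host.
					// (Placeholder update, not a real model equation.)
					V[idx] += dt_sim * static_cast<float>(i);
				});
		});
	}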

Synchronising work within a kernel is done with a barrier, yes. There are rules around how it works and what restrictions apply, but at its core that is the idea. It’s worth noting that a barrier only synchronises work-items within a work-group (the equivalent of a CUDA thread block, not a warp), never globally. There is no way to do global synchronisation other than by ending the kernel and launching a new one.
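Here’s a minimal sketch of what a barrier looks like; note that barriers are only available in nd_range kernels, where work-group sizes are explicit. All names are hypothetical:

	#include <CL/sycl.hpp>
	namespace sycl = cl::sycl;

	// Each work-item copies a value into local (work-group) memory, then
	// reads a neighbour's value after a barrier.
	void neighbour_shift(sycl::queue &q, sycl::buffer<float, 1> &buf) {
		q.submit([&](sycl::handler &cgh) {
			auto data = buf.get_access<sycl::access::mode::read_write>(cgh);
			// Local scratch memory, one float per work-item in a group of 64.
			sycl::accessor<float, 1, sycl::access::mode::read_write,
			               sycl::access::target::local>
				scratch(sycl::range<1>(64), cgh);
			cgh.parallel_for<class barrier_kernel>(
				sycl::nd_range<1>(sycl::range<1>(1024), sycl::range<1>(64)),
				[=](sycl::nd_item<1> item) {
					size_t lid = item.get_local_id(0);
					size_t gid = item.get_global_id(0);
					scratch[lid] = data[gid];
					// Wait until every work-item in THIS work-group has
					// written its value; other work-groups are unaffected.
					item.barrier(sycl::access::fence_space::local_space);
					data[gid] = scratch[(lid + 1) % 64];
				});
		});
	}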

In SYCL, kernels are scheduled based on the data they access. I’m guessing that kernel_1 writes to dev_V_YX and that kernel_2 reads from it; in any case, since they access the same data, the runtime will order the kernels so that they don’t conflict, without any explicit synchronisation calls in your host loop.
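To make that concrete, here’s a hypothetical sketch of what your host loop could look like in SYCL, assuming (as above) that kernel_1 writes V_YX and kernel_2 reads it; the buffer names and access modes are guesses at your data layout:

	#include <CL/sycl.hpp>
	namespace sycl = cl::sycl;

	// The runtime sees the write-then-read on the same buffer and orders
	// the two kernels itself, so there is no per-iteration equivalent of
	// cudaDeviceSynchronize().
	void simulate(sycl::queue &q, sycl::buffer<float, 2> &V_YX,
	              sycl::buffer<float, 2> &V_reg, unsigned int N_sim) {
		for (unsigned int i = 0; i < N_sim; i++) {
			q.submit([&](sycl::handler &cgh) {  // kernel_1
				auto V = V_YX.get_access<sycl::access::mode::write>(cgh);
				cgh.parallel_for<class k1>(V_YX.get_range(),
					[=](sycl::id<2> idx) {
						/* ... Euler update, writes V[idx] ... */
					});
			});
			q.submit([&](sycl::handler &cgh) {  // kernel_2
				auto V = V_YX.get_access<sycl::access::mode::read>(cgh);
				auto R = V_reg.get_access<sycl::access::mode::write>(cgh);
				cgh.parallel_for<class k2>(V_YX.get_range(),
					[=](sycl::id<2> idx) {
						R[idx] = V[idx];  // uses the value kernel_1 wrote
					});
			});
		}
		q.wait();  // one wait at the end instead of two per iteration
	}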

I hope this helps,
Duncan.