The IceCube Neutrino Observatory is a cubic kilometer neutrino telescope
located at the Geographic South Pole. For every observed neutrino event,
there are over 10^6 background events caused by cosmic-ray air shower
muons. In order to properly separate signal from background, it is
necessary to produce Monte Carlo simulations of these air showers.
Although IceCube has produced large quantities of background
simulation to date, these studies remain statistics-limited. The most
significant impediment to producing more simulation is its complicated
computing requirements: the first stage of the simulation, air shower
and muon propagation, needs to be run on CPUs while the second stage,
photon propagation, can only be performed efficiently on GPUs.
Processing both of these stages on the same node results in an
underutilized GPU, while using different nodes encounters bandwidth
bottlenecks. Furthermore, due to the power-law energy spectrum of
cosmic rays, the memory footprint of the detector response often
exceeds the available limit in unpredictable ways. In this talk, I will present new
client/server code which parallelizes the first stage across multiple
CPUs on the same node and then passes the output to the GPU for photon
propagation. This results in GPU utilization of greater than 90% as well
as more predictable memory usage and an overall factor of 20 improvement
in speed over previous techniques.
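The producer/consumer pattern described above can be sketched as follows. This is a minimal, hypothetical illustration only: the function names (`simulate_shower`, `photon_propagate`) and the toy workloads are placeholders, not the actual IceCube simulation code, which runs real air-shower and photon-propagation physics.

```python
# Hypothetical sketch of the client/server pattern: several CPU
# workers run stage 1 (air shower + muon propagation) in parallel,
# and completed results are batched before being handed to a single
# stage-2 consumer that stands in for the GPU photon propagator.
import multiprocessing as mp


def simulate_shower(seed):
    # Stand-in for CPU-bound air-shower and muon propagation (stage 1).
    return [seed * i for i in range(4)]  # fake muon bundle


def photon_propagate(batch):
    # Stand-in for GPU photon propagation (stage 2); processing many
    # showers per call is what keeps the device saturated.
    return sum(len(muons) for muons in batch)


def run(n_showers=16, n_workers=4, batch_size=8):
    # Stage 1: fan the shower simulations out across CPU workers.
    with mp.Pool(n_workers) as pool:
        results = pool.map(simulate_shower, range(n_showers))
    # Stage 2: drain completed showers in large batches for the "GPU".
    total = 0
    for i in range(0, len(results), batch_size):
        total += photon_propagate(results[i:i + batch_size])
    return total


if __name__ == "__main__":
    print(run())
```

In the real system the two stages run concurrently and communicate over a queue rather than in sequence, but the essential idea is the same: many CPU producers on one node feed a single batching GPU consumer, avoiding both the idle-GPU and the cross-node bandwidth problems described above.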