Global optimization of dynamic cost functions is important in many engineering applications. In such tasks, the global optima change over time or are strongly affected by dynamic noise. Nature-inspired stochastic methods, including genetic algorithms, particle swarm optimization (PSO), and differential evolution, have proven particularly effective for dynamic optimization. However, these methods are generally computationally intensive, and research has consequently focused on parallelization strategies. In this paper, PSO approaches for dynamic optimization are analyzed for parallelization opportunities on relatively inexpensive, readily available heterogeneous parallel graphics processing unit (GPU) and multicore hardware. A sophisticated adaptation of PSO, a multi-swarm technique proposed for dynamic problems, is parallelized in different ways and at multiple levels. Experimental results on high-dimensional "moving-peaks" functions show that substantial speedups can be obtained by exploiting the different high-performance components of commodity hardware. Heterogeneous high-performance computing is proposed as a way to mitigate the time complexity of dynamic PSO adaptations.
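To make the particle-level parallelization concrete, the following is a minimal CUDA sketch of one data-parallel PSO iteration, in which each GPU thread updates a single particle. It is an illustrative assumption, not the paper's multi-swarm implementation: the sphere objective stands in for the moving-peaks benchmark, and all names, constants (N_PARTICLES, DIM, inertia and acceleration coefficients), and the host-side global-best reduction are chosen only for the example.

// Minimal sketch of the particle-level (data-parallel) layer of PSO on a GPU.
// One thread updates one particle. Objective, names, and constants are
// illustrative assumptions; the paper's benchmark is the moving-peaks function.
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>
#include <cstdlib>

#define N_PARTICLES 1024
#define DIM 32

// Stand-in objective (sphere function); a dynamic benchmark would go here.
__device__ float evaluate(const float *x) {
    float s = 0.0f;
    for (int d = 0; d < DIM; ++d) s += x[d] * x[d];
    return s;
}

// One PSO iteration: canonical velocity/position update plus personal-best
// update, with one thread per particle.
__global__ void pso_step(float *pos, float *vel, float *pbest_pos,
                         float *pbest_val, const float *gbest_pos,
                         float w, float c1, float c2,
                         unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N_PARTICLES) return;

    curandState rng;
    curand_init(seed, i, 0, &rng);

    float *x  = pos + i * DIM;
    float *v  = vel + i * DIM;
    float *pb = pbest_pos + i * DIM;

    for (int d = 0; d < DIM; ++d) {
        float r1 = curand_uniform(&rng);
        float r2 = curand_uniform(&rng);
        v[d] = w * v[d] + c1 * r1 * (pb[d] - x[d]) + c2 * r2 * (gbest_pos[d] - x[d]);
        x[d] += v[d];
    }

    float f = evaluate(x);
    if (f < pbest_val[i]) {
        pbest_val[i] = f;
        for (int d = 0; d < DIM; ++d) pb[d] = x[d];
    }
}

int main() {
    size_t vecBytes = N_PARTICLES * DIM * sizeof(float);
    float *pos, *vel, *pbest_pos, *pbest_val, *gbest_pos;
    cudaMallocManaged(&pos, vecBytes);
    cudaMallocManaged(&vel, vecBytes);
    cudaMallocManaged(&pbest_pos, vecBytes);
    cudaMallocManaged(&pbest_val, N_PARTICLES * sizeof(float));
    cudaMallocManaged(&gbest_pos, DIM * sizeof(float));

    // Random initial positions in [-5, 5], zero velocities.
    for (int i = 0; i < N_PARTICLES * DIM; ++i) {
        pos[i] = (float)rand() / RAND_MAX * 10.0f - 5.0f;
        vel[i] = 0.0f;
        pbest_pos[i] = pos[i];
    }
    for (int i = 0; i < N_PARTICLES; ++i) pbest_val[i] = 1e30f;
    for (int d = 0; d < DIM; ++d) gbest_pos[d] = pos[d];

    int threads = 128, blocks = (N_PARTICLES + threads - 1) / threads;
    int best = 0;
    for (int iter = 0; iter < 100; ++iter) {
        pso_step<<<blocks, threads>>>(pos, vel, pbest_pos, pbest_val,
                                      gbest_pos, 0.729f, 1.49f, 1.49f, iter);
        cudaDeviceSynchronize();
        // Global-best reduction kept on the host for clarity; a full GPU
        // implementation would perform this reduction on the device as well.
        for (int i = 0; i < N_PARTICLES; ++i)
            if (pbest_val[i] < pbest_val[best]) best = i;
        for (int d = 0; d < DIM; ++d) gbest_pos[d] = pbest_pos[best * DIM + d];
    }
    printf("best objective value found: %f\n", pbest_val[best]);
    return 0;
}

A multi-swarm variant of this sketch would launch several such populations (for example, one per CUDA stream or CPU core), exchanging or reinitializing swarms when the dynamic landscape changes; that coarser level of parallelism is what the heterogeneous GPU/multicore mapping in the paper targets.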