A parallel computation model is an abstraction of the performance characteristics of parallel computers, and should evolve with the development of computational infrastructure. Heterogeneous CPU/Graphics Processing Unit (GPU) systems have been, and will remain, important platforms for scientific computing, which creates an urgent demand for new parallel computation models targeting this kind of supercomputer. In this research, we propose a parallel computation model called HLog^nGP to abstract the computation and communication features of heterogeneous platforms such as TH-1A. All the substantial parameters of HLog^nGP are in vector form and capture the new features of GPU clusters. A simplified version of the proposed model, HLog^3GP, is mapped to a specific GPU cluster and verified with two typical benchmarks. Experimental results show that HLog^3GP outperforms the other two evaluated models and can well capture the new particularities of GPU clusters.

Exascale systems will have much more hardware parallelism, and the development of large applications on them will be confronted with unprecedented challenges [12], including how to effectively reduce communication overhead. Therefore, a parallel computation model for large heterogeneous CPU/GPU clusters should be adept at modeling communication operations and predicting communication performance.

In the high-performance computing scenario, communication operations can be subdivided into two categories, namely, memory communication and network communication [13]. The former denotes the transmission of data through the memory hierarchy, while the latter denotes transmission via network links. We herein give a brief survey of prior work covering the aforementioned categories.

Bosque and Pastor [14] proposed the HLogGP model to capture the heterogeneity in both compute nodes and networks.
For a heterogeneous system with M compute nodes, the five parameters of HLogGP are defined in the following forms: latency L and gap per byte G are M × M matrices, while overhead o, gap between messages g, and computational power P are all M-element vectors. If all the parameters are accurately determined, HLogGP can well predict the communication performance of parallel algorithms. However, HLogGP has two disadvantages: on the one hand, it is difficult to map its O(M²) parameters onto specific large-scale heterogeneous systems; on the other hand, it cannot model the memory communication widely witnessed on GPU platforms.

With respect to memory communication, Cameron et al. have conducted in-depth studies and proposed the memory-logP [13] and log_nP [6] models. log_nP is an extension of memory-logP, and describes the effect of data continuity on communication performance. It consists of five parameters, of which n denotes the total number of communications involved in each message-passing process. This model is good at modeling communication involving discontinuous data (e.g., Message Passing Interface (MPI) [15] derived data types), yet offers only routine performance for continuous data. The r...
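To make the HLogGP parameter forms above concrete, the following minimal sketch evaluates a LogGP-style point-to-point cost on a toy heterogeneous system. The cost formula `o[src] + L[src][dst] + (k - 1) * G[src][dst] + o[dst]` and all numeric values are illustrative assumptions, not taken from the paper; the sketch only shows how the M × M matrices (L, G) and the M-element overhead vector (o) would be consulted when predicting a k-byte transfer between a specific node pair.

```python
import numpy as np

def hloggp_send_cost(src, dst, k, L, G, o):
    """Estimate the time to send a k-byte message from node src to node dst.

    L, G are M x M matrices (per-pair latency and gap per byte); o is an
    M-element per-node overhead vector, matching the HLogGP parameter forms.
    The cost formula itself is an illustrative LogGP-style assumption:
        o[src] + L[src][dst] + (k - 1) * G[src][dst] + o[dst]
    """
    return o[src] + L[src, dst] + (k - 1) * G[src, dst] + o[dst]

# Toy 3-node heterogeneous system (all values hypothetical, in microseconds).
L = np.array([[0.0, 5.0,  8.0],
              [5.0, 0.0,  6.0],
              [8.0, 6.0,  0.0]])     # latency between each node pair
G = np.array([[0.0,  0.01, 0.02],
              [0.01, 0.0,  0.015],
              [0.02, 0.015, 0.0]])   # gap per byte between each node pair
o = np.array([1.0, 1.5, 2.0])        # per-node send/receive overhead

t = hloggp_send_cost(0, 2, 1024, L, G, o)  # 1024-byte message, node 0 -> 2
```

Because L and G are indexed by (src, dst), the predicted cost of the same message differs across node pairs, which is exactly the heterogeneity HLogGP sets out to capture; it is also why the model needs O(M²) parameters to instantiate.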