Speaker: Dinesh Kaushik, Argonne National Lab

Title: Understanding the Performance of the Hybrid (Distributed/Shared Memory) Programming Model

Abstract: With the availability of large-scale SMP clusters, the different software models for parallel programming require a fresh assessment. For physically distributed memory machines, the Message Passing Interface (MPI) has been a natural and very successful software model. For machines with distributed shared memory and non-uniform memory access, both MPI and OpenMP have been used with respectable parallel scalability. But for clusters with two or more processors sharing memory on a single node, a mixed software model looks quite natural: threads within a node (OpenMP being a special case of threads, because of the potential for highly efficient handling of the threads and memory by the compiler) and MPI between the nodes. In this talk, we will present our assessment of the hybrid programming model in the context of a large-scale CFD computation involving irregular memory references.