6.5) Shared And Distributed Memory Programming


Shared Memory Programming

Parallel programming for shared memory machines is easier because all cores have access to the same memory address space, and hence to the same data structures; this greatly simplifies the task of parallelisation. Use can be made of auto-parallelisation via compiler options, or of loop-level parallelism through compiler directives such as OpenMP. On the other hand, speedup and scalability are limited by the number of cores in the shared memory machine, which is generally relatively small. In addition, the code can only be run on a shared memory machine.

Distributed Memory Programming

Programming for distributed memory machines provides a means to take advantage of more resources than are available on a single shared memory machine. In addition, code developed for distributed memory machines can also be run on shared memory machines. However, this type of programming is generally more difficult than shared memory programming. Since each processor has access only to its local memory, the programmer is responsible for mapping data structures across the separate nodes. In addition, the communications between nodes must be coordinated explicitly, i.e. by message passing, to ensure that a node can access remote data when it is needed for a local computation. The standard library used for this is MPI (the Message Passing Interface).