
6.3) Parallel Computing (cont'd)


Limits On Parallel Computing

There are a number of hurdles that limit the benefits that can be derived from parallel computing.

  • Theoretical upper limits: There is a limit on the achievable speedup due to the serial or non-parallelisable portions of an application's code (see the sketch after this list).
  • Practical considerations: Not all algorithms parallelise well, and this may require reformulating the problem being investigated. There is also a need to think about load balancing and about minimising the time spent on communication and on the non-computational sections of the code.
  • Other considerations: Though the benefit of using parallel computing can be huge, developing and maintaining efficient and scalable parallel applications can be quite difficult.
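
The theoretical upper limit mentioned in the first bullet is usually formalised as Amdahl's law: if a fraction s of the work is inherently serial, the speedup on n cores is at most 1 / (s + (1 - s)/n). A minimal Python sketch of this (the 5% serial fraction is a hypothetical value chosen for illustration):

    def amdahl_speedup(serial_fraction, n_cores):
        """Theoretical maximum speedup under Amdahl's law."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    # Even with unlimited cores, a 5% serial portion caps the speedup
    # at 1 / 0.05 = 20x.
    for n in (2, 8, 64, 1024):
        print(f"{n:>4} cores: at most {amdahl_speedup(0.05, n):.1f}x speedup")

Note how quickly the returns diminish: going from 64 to 1024 cores only improves the bound from roughly 15x to roughly 20x.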

Speedup And Scalability

One of the simplest and most widely used indicators of a parallel program's performance is the ratio of the code's execution time on one core to its execution time on multiple cores. This is referred to as the observed speedup and can be expressed as:

Observed speedup = serial code execution time / parallel code execution time
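
For instance, if a serial run takes 120 seconds and the same job takes 18 seconds on 8 cores, the observed speedup is 120 / 18 ≈ 6.7. In Python (the timings are made up purely for illustration):

    # Hypothetical wall-clock timings, used purely for illustration.
    serial_time = 120.0   # seconds on one core
    parallel_time = 18.0  # seconds on eight cores

    observed_speedup = serial_time / parallel_time
    print(f"Observed speedup: {observed_speedup:.1f}x")  # 6.7x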

Scalability refers to a program's ability to exhibit a proportional increase in speedup when more resources are added. Several factors contribute to scalability, including the algorithm in use, parallel overheads, hardware properties, and the individual characteristics of the application and the problem in question.

When running parallel codes, it is advisable to investigate how the observed speedup changes as more resources are added. Each application will behave differently on a given HPC cluster, and it is important to identify the optimal amount of resources needed to run the job efficiently.
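
One way to carry out such an investigation is a simple strong-scaling study: run the same fixed workload with an increasing number of workers and record the observed speedup relative to the single-worker run. The sketch below uses Python's multiprocessing module and a toy workload (summing squares) as a stand-in for a real application; on a real cluster the equivalent would be timing batch jobs at different core counts.

    import time
    from multiprocessing import Pool

    def work(chunk):
        # Toy CPU-bound task: sum of squares over a half-open range.
        return sum(i * i for i in range(*chunk))

    def run(n_workers, n=20_000_000):
        # Split the fixed problem size into one chunk per worker.
        step = n // n_workers
        chunks = [(i * step, (i + 1) * step) for i in range(n_workers)]
        start = time.perf_counter()
        with Pool(n_workers) as pool:
            pool.map(work, chunks)
        return time.perf_counter() - start

    if __name__ == "__main__":
        baseline = run(1)  # single-worker reference time
        for n_workers in (1, 2, 4, 8):
            elapsed = run(n_workers)
            print(f"{n_workers} workers: {elapsed:.2f}s, "
                  f"speedup {baseline / elapsed:.2f}x")

Plotting the reported speedup against the number of workers makes it easy to spot the point at which adding resources stops paying off for a given application.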