Risto Honkanen

The development of uniprocessor systems is not likely to remain as fast as it has been during the last few decades. One reason for this is the finite area available for logic components on a chip. A natural way to improve computational performance is to use parallel processing.

Many recent parallel computers consist of a set of independent processors and their local memories. In machines of this kind, communication between processors can be implemented by message passing. The Message Passing Interface (MPI) is one of the practical, portable subroutine libraries based on message passing. When MPI is used as a programming environment, however, it is rather difficult to estimate the running time of a program. LogP is one of the models that abstract the properties of parallel machines; with LogP we can estimate the running time of parallel programs.
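As a sketch of the kind of estimate LogP makes possible: the model characterizes a machine by four parameters, L (an upper bound on message latency), o (the processing overhead of a send or receive), g (the minimum gap between consecutive sends or receives), and P (the number of processors). Under the model, a single small message costs 2o + L time units, and a sequence of k back-to-back messages is limited by the gap g. The Python sketch below encodes these two textbook estimates; the numeric parameter values are hypothetical and would be measured on a real machine.

```python
def point_to_point(L, o):
    """LogP time for one small message: send overhead + latency + receive overhead."""
    return 2 * o + L

def pipelined_messages(k, L, o, g):
    """LogP time until the last of k back-to-back messages is received,
    assuming the gap dominates the send overhead (g >= o)."""
    return (k - 1) * g + L + 2 * o

# Hypothetical parameter values (machine-dependent in practice), in microseconds.
L, o, g = 6, 2, 4
print(point_to_point(L, o))            # 2*2 + 6 = 10
print(pipelined_messages(5, L, o, g))  # 4*4 + 6 + 2*2 = 26
```

Estimates like these let a programmer compare, for example, sending one large message against pipelining many small ones before writing any MPI code.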

In this presentation we briefly introduce existing parallel machines and their properties, MPI, and LogP, and sketch the use of LogP in estimating the running time of parallel programs.