Parallel computation on cluster architectures has become the most common solution for developing high-performance scientific applications. The Message Passing Interface (MPI) is the message-passing library most widely used to provide communications in clusters. During the I/O phase, processes frequently access a common data set by issuing a large number of small non-contiguous I/O requests, which can create bottlenecks in the I/O subsystem. These bottlenecks are even more pronounced in commodity clusters, where commercial networks are usually installed. Scalability is also an important issue in...