8.2. MPI Hello World#
Communication Models#
There are two types of communication models: one-sided and two-sided.
One-Sided: one process can remotely read or write data in another process's memory without the active participation of that other process.
Two-Sided: both processes participate in the exchange. The sending process invokes the send function, and the receiving process invokes the receive function.
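The two-sided model is demonstrated by the Hello World example and the send/recv sketch later in this section. The one-sided model is less familiar, so here is a minimal sketch using mpi4py's remote memory access (RMA) windows; the one-element numpy buffer and the Fence synchronization are illustrative choices, not the only way to do one-sided communication. Run it with at least 2 processes:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Every process exposes a one-element buffer as a "window" that other
# processes may access remotely.
buf = np.zeros(1, dtype="i")
win = MPI.Win.Create(buf, comm=comm)

win.Fence()  # open an access epoch on all processes
if rank == 0:
    win.Put(np.array([42], dtype="i"), 1)  # write into rank 1's window
win.Fence()  # close the epoch; the Put is now complete

if rank == 1:
    print(f"rank 1 got {buf[0]} without calling any receive function")
win.Free()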
World and Rank#
In MPI programming, when processes need to communicate with each other, two fundamental questions must be addressed: “Who am I in the MPI world?” and “Who else is in the MPI world?” In the MPI standard, MPI_Comm_rank defines “Who am I?” and MPI_COMM_WORLD answers “Who else?” When starting an MPI program, you should first define a World. This World consists of size processes, each assigned a unique number known as rank. Ranks are integers ranging from 0 to size - 1. Formally:

In MPI, the World refers to the total set of processes involved in the parallel computation. In an MPI program, all processes belong to a default communication group known as MPI_COMM_WORLD. All processes within this communication group can communicate with each other. Each process in the World has a unique rank, which identifies the process within the communication group. Because each process has its own rank number, we can, for example, program the process with rank 0 to send data to the process with rank 1.
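As a minimal sketch of that idea (the message content and tag are arbitrary illustrative choices), each process first asks the communicator for its rank and then branches on it:
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # "Who am I?"
size = comm.Get_size()  # how many processes are in the World

if rank == 0:
    comm.send("hello from rank 0", dest=1, tag=0)  # two-sided send
elif rank == 1:
    msg = comm.recv(source=0, tag=0)               # matching receive
    print(f"rank 1 of {size} received: {msg}")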
Example: Hello World#
Listing 8.1 uses a simple example to demonstrate MPI programming.
from mpi4py import MPI

comm = MPI.COMM_WORLD  # the default communicator containing all processes
# Each process prints its own rank, the world size, and its host name.
print(f"Hello! I'm rank {comm.Get_rank()} of {comm.Get_size()} running on host {MPI.Get_processor_name()}.")
comm.Barrier()  # wait until every process has reached this point
In this program, the print statement is executed within each individual process, displaying the rank and hostname of the current process. comm.Barrier() halts each process until all processes have reached comm.Barrier(); then each process proceeds and executes the code after comm.Barrier(). In this example, there are no further operations after comm.Barrier(), and the program exits.
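To make the barrier's effect visible, here is a small illustrative sketch (the sleep and the phase labels are hypothetical, used only to stagger the processes):
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    time.sleep(1)  # make rank 0 deliberately slow in phase 1
print(f"rank {rank}: phase 1 done")

comm.Barrier()  # nobody starts phase 2 until everyone finishes phase 1
print(f"rank {rank}: phase 2 starts")
Note that the barrier orders the computation, not the console output: each process's output is forwarded to the launcher asynchronously, so printed lines may still interleave in the terminal.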
To run 8 processes on your personal computer, execute the following command in the terminal:
mpiexec -np 8 python hello.py
Hello! I'm rank 5 of 8 running on host lu-mbp.
Hello! I'm rank 1 of 8 running on host lu-mbp.
Hello! I'm rank 2 of 8 running on host lu-mbp.
Hello! I'm rank 4 of 8 running on host lu-mbp.
Hello! I'm rank 6 of 8 running on host lu-mbp.
Hello! I'm rank 7 of 8 running on host lu-mbp.
Hello! I'm rank 3 of 8 running on host lu-mbp.
Hello! I'm rank 0 of 8 running on host lu-mbp.
The lines may appear in any order: the eight processes run concurrently, so their output is interleaved nondeterministically. Also note that the mpiexec of different vendors may be slightly different; you can check the parameters of mpiexec in your vendor's documentation.
If you have a cluster with a shared file system mounted on each node, so that the source code (e.g., hello.py) and the installed MPI (mpiexec) are identical on every node, you can launch the program in parallel as follows:
mpiexec -hosts h1:4,h2:4,h3:4,h4:4 -n 16 python hello.py
This launch command starts 16 processes in total, distributed across 4 computing nodes with 4 processes each. If there are more nodes, you can create a separate host file, for instance named hf, with the following content:
h1:8
h2:8
And launch it:
mpiexec -hostfile hf -n 16 python hello.py
Communicator#
We mentioned the concept of MPI_COMM_WORLD. More accurately, MPI_COMM_WORLD is a communicator. MPI divides processes into different Groups, and each Group has a different Color. The combination of Group and Color forms a Communicator; in other words, a communicator is Group + Color. The predefined communicator is MPI_COMM_WORLD.

A process may belong to different communicators, so its rank in different communicators may differ. Fig. 8.2 (a), (b), and (c) represent three communicators, with circles representing processes. When we start an MPI program, the default communicator (MPI_COMM_WORLD) is created, as shown in Fig. 8.2 (a). Each process is assigned a rank within this communicator, and the numbers on the circles in the diagram represent the ranks of the processes in this communicator. The same process can be assigned to different communicators, and the rank of the same process in different communicators may differ, as illustrated in Fig. 8.2 (b) and (c). Communication between processes within each communicator is independent: messages in one communicator will not affect messages in another. For most MPI programs there is no need to create other communicators; the default MPI_COMM_WORLD is enough.
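To see ranks differ across communicators, as in Fig. 8.2, here is a hedged sketch using Comm.Split; the modulo-2 coloring is an arbitrary illustrative choice:
from mpi4py import MPI

comm = MPI.COMM_WORLD
world_rank = comm.Get_rank()

# Split the World into two communicators: even world ranks get color 0,
# odd world ranks get color 1; "key" orders ranks within each new group.
color = world_rank % 2
subcomm = comm.Split(color=color, key=world_rank)

print(f"world rank {world_rank} -> color {color}, "
      f"sub rank {subcomm.Get_rank()} of {subcomm.Get_size()}")
subcomm.Free()
Run with 4 processes, for example, and world ranks 0 and 2 become ranks 0 and 1 of one subcommunicator, while world ranks 1 and 3 become ranks 0 and 1 of the other.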