Programming Taskbook



1100 training tasks on programming

©  M. E. Abramyan (Southern Federal University, Shenzhen MSU-BIT University), 1998–2024

 



Derived datatypes and data packing

The simplest derived types

MPI4Type1°. A sequence of K − 1 triples of integers is given in the master process; K is the number of processes. Send all given data to each slave process using a derived datatype with three integer elements and one collective operation with the derived datatype. Output received data in each slave process in the same order.

MPI4Type2°. A sequence of K − 1 triples of integers is given in the master process; K is the number of processes. Send one given triple at a time to each slave process using a derived datatype with three integer elements and one collective operation with the derived datatype. Output received integers in each slave process in the same order.

MPI4Type3°. A triple of integers is given in each slave process. Send all given triples to the master process using a derived datatype with three integer elements and one collective operation with the derived datatype. Output received data in the master process in ascending order of ranks of sending processes.

MPI4Type4°. A sequence of K − 1 triples of numbers is given in the master process; K is the number of processes. Two initial items of each triple are integers, the last item is a real number. Send all given triples to each slave process using a derived datatype with three elements (two integers and a real number) and one collective operation with the derived datatype. Output received data in each slave process in the same order.

MPI4Type5°. A sequence of K − 1 triples of numbers is given in the master process; K is the number of processes. The first item and the last item of each triple are integers, the middle item is a real number. Send one given triple at a time to each slave process using a derived datatype with three elements (an integer, a real number, an integer) and one collective operation with the derived datatype. Output received data in each slave process in the same order.

MPI4Type6°. A triple of numbers is given in each slave process. The first item of each triple is a real number, the other items are integers. Send all given triples to the master process using a derived datatype with three elements (a real number and two integers) and one collective operation with the derived datatype. Output received data in the master process in ascending order of ranks of sending processes.

MPI4Type7°. A triple of numbers is given in each process. The first item and the last item of each triple are integers, the middle item is a real number. Send the given triples from each process to all processes using a derived datatype with three elements (an integer, a real number, an integer) and one collective operation with the derived datatype. Output received data in each process in ascending order of ranks of sending processes (including data received from itself).

MPI4Type8°. A sequence of R triples of numbers is given in each slave process; R is the rank of the process. Two initial items of each triple are integers, the last item is a real number. Send all given triples to the master process using a derived datatype with three elements (two integers and a real number) and one collective operation with the derived datatype. Output received data in the master process in ascending order of ranks of sending processes.

Data packing

MPI4Type9°. Two sequences of K numbers are given in the master process; K is the number of processes. The first given sequence contains integers, the second given sequence contains real numbers. Send all data to each slave process using the MPI_Pack and MPI_Unpack functions and one collective operation. Output received data in each slave process in the same order.

MPI4Type10°. A sequence of K − 1 triples of numbers is given in the master process; K is the number of processes. The first item and the last item of each triple are integers, the middle item is a real number. Send one given triple at a time to each slave process using the pack/unpack functions and one collective operation. Output received numbers in each slave process in the same order.

MPI4Type11°. A sequence of K − 1 triples of numbers is given in the master process; K is the number of processes. Two initial items of each triple are integers, the last item is a real number. Send all given triples to each slave process using the pack/unpack functions and one collective operation. Output received data in each slave process in the same order.

MPI4Type12°. A triple of numbers is given in each slave process. Two initial items of each triple are integers, the last item is a real number. Send the given triples from each slave process to the master process using the pack/unpack functions and one collective operation. Output received data in the master process in ascending order of ranks of sending processes.

MPI4Type13°. A real number and a sequence of R integers are given in each slave process; R is the rank of the process (one integer is given in process 1, two integers are given in process 2, and so on). Send all given data from each slave process to the master process using the pack/unpack functions and one collective operation. Output received data in the master process in ascending order of ranks of sending processes.

Additional ways of derived type creation

MPI4Type14°. Two sequences of integers are given in the master process: the sequence A of size 3K and the sequence N of size K, where K is the number of slave processes. The elements of the sequences are numbered from 1. Send NR elements of the sequence A to each slave process R (R = 1, 2, …, K) starting with AR and increasing the ordinal number by 2 (R, R + 2, R + 4, …). For example, if N2 is equal to 3 then process 2 should receive the elements A2, A4, A6. Output all received data in each slave process. Use one call of the MPI_Send, MPI_Probe, and MPI_Recv functions for sending numbers to each slave process; the MPI_Recv function should return an array that contains only the elements that should be output. To do this, define a new datatype that contains a single integer and an additional empty space (a hole) of a size equal to the size of the integer datatype. Use the following data as parameters for the MPI_Send function: the given array A with the appropriate displacement, the number NR of elements to send, and the new datatype. Use an integer array of size NR and the MPI_INT datatype in the MPI_Recv function. To determine the number NR of received elements, use the MPI_Get_count function in the slave processes.

Note. Use the MPI_Type_create_resized function to define the hole size for a new datatype (this function should be applied to the MPI_INT datatype). In MPI-1, the zero-size upper-bound marker MPI_UB should be used jointly with the MPI_Type_struct function for this purpose (in MPI-2, the MPI_UB pseudo-datatype is deprecated).

MPI4Type15°. A real-valued square matrix of order K is given in the master process; K is the number of slave processes. Elements of the matrix should be stored in a one-dimensional array A in a row-major order. The columns of the matrix are numbered from 1. Send the R-th column of the matrix to the process of rank R (R = 1, 2, …, K) and output all received elements in each slave process. Use one call of the MPI_Send and MPI_Recv functions for sending elements to each slave process; the MPI_Recv function should return an array that contains only the elements that should be output. To do this, define a new datatype that contains a single real number and an additional empty space (a hole) of the appropriate size. Use the following data as parameters for the MPI_Send function: the given array A with the appropriate displacement, the number K of elements to send (i. e., the size of a column), and the new datatype. Use a real-valued array of size K and the MPI_DOUBLE datatype in the MPI_Recv function.

Note. See the note to MPI4Type14.

MPI4Type16°. The R-th column of a real-valued square matrix of order K is given in the slave process of rank R (R = 1, 2, …, K); K is the number of slave processes, and the columns of the matrix are numbered from 1. Send all columns to the master process and store them in a one-dimensional array A in a row-major order. Output all elements of A in the master process. Use one call of the MPI_Send and MPI_Recv functions for sending the elements of each column; the resulting array A with the appropriate displacement should be the first parameter for the MPI_Recv function, and the number 1 should be its second parameter. To do this, define a new datatype (in the master process) that contains K real numbers and an empty space (a hole) of the appropriate size after each number. Define the new datatype in two steps. In the first step, define an auxiliary datatype that contains one real number and an additional hole (see the note to MPI4Type14). In the second step, define the final datatype using the MPI_Type_contiguous function (this datatype should be the third parameter for the MPI_Recv function). It is sufficient to call the MPI_Type_commit function only for the final datatype. Use a real-valued array of size K and the MPI_DOUBLE datatype in the MPI_Send function.

MPI4Type17°. The number of slave processes K is a multiple of 3 and does not exceed 9. An integer N is given in each process, all the numbers N are the same and are in the range from 3 to 5. Also an integer square matrix of order N (a block) is given in each slave process; the block should be stored in a one-dimensional array B in a row-major order. Send all arrays B to the master process and compose a block matrix of the size (K/3) × 3 (the size is indicated in blocks) using a row-major order for blocks (i. e., the first row of blocks should include blocks being received from the processes 1, 2, 3, the second row of blocks should include blocks from the processes 4, 5, 6, and so on). Store the block matrix in the one-dimensional array A in a row-major order. Output all elements of A in the master process. Use one call of the MPI_Send and MPI_Recv functions for sending each block B; the resulting array A with the appropriate displacement should be the first parameter for the MPI_Recv function, and a number 1 should be its second parameter. To do this, define a new datatype (in the master process) that contains N sequences, each sequence contains N integers, and an empty space (a hole) of the appropriate size should be placed between the sequences. Define the required datatype using the MPI_Type_vector function (this datatype should be the third parameter for the MPI_Recv function). Use the array B of size N·N and the MPI_INT datatype in the MPI_Send function.

MPI4Type18°. The number of slave processes K is a multiple of 3 and does not exceed 9. An integer N in the range from 3 to 5 and an integer block matrix of the size (K/3) × 3 (the size is indicated in blocks) are given in the master process. Each block is a lower triangular matrix of order N; the block contains all matrix elements, including zero-valued ones. The block matrix should be stored in the one-dimensional array A in a row-major order. Send the non-zero part of each block to the corresponding slave process in a row-major order of blocks (i. e., the blocks of the first row should be sent to the processes 1, 2, 3, the blocks of the second row should be sent to the processes 4, 5, 6, and so on). Output all received elements in each slave process (in a row-major order). Use one call of the MPI_Send, MPI_Probe, and MPI_Recv functions for sending each block; the array A with the appropriate displacement should be the first parameter for the MPI_Send function, and the number 1 should be its second parameter. To do this, define a new datatype (in the master process) that contains N sequences; each sequence contains the non-zero part of the next row of a lower triangular block (the first sequence consists of 1 element, the second sequence consists of 2 elements, and so on), and an empty space (a hole) of the appropriate size should be placed between the sequences. Define the required datatype using the MPI_Type_indexed function (this datatype should be the third parameter for the MPI_Send function). Use an integer array B, which contains the non-zero part of the received block, and the MPI_INT datatype in the MPI_Recv function. To determine the number of received elements, use the MPI_Get_count function in the slave processes.

MPI4Type19°. The number of slave processes K is a multiple of 3 and does not exceed 9. An integer N is given in each process, all the numbers N are the same and are in the range from 3 to 5. Also an integer P and a non-zero part of an integer square matrix of order N (a Z-block) are given in each slave process. The given elements of the Z-block should be stored in a one-dimensional array B in a row-major order. These elements are located in the Z-block in the form of the symbol "Z", i. e., they occupy the first and last rows and also the antidiagonal. Define a zero-valued integer matrix of the size N·(K/3) × 3N in the master process (all elements of this matrix are equal to 0 and should be stored in a one-dimensional array A in a row-major order). Send the non-zero part of the given Z-block from each slave process to the master process in ascending order of ranks of sending processes and write each received Z-block in the array A starting from the element of array A with index P (the positions of Z-blocks can overlap; in this case the elements of blocks received from processes of higher rank will replace some of the elements of previously written blocks). Output all elements of A in the master process. Use one call of the MPI_Send and MPI_Recv functions for sending each Z-block; the array A with the appropriate displacement should be the first parameter for the MPI_Recv function, and the number 1 should be its second parameter. To do this, define a new datatype (in the master process) that contains N sequences; the first and the last sequences contain N integers, the other sequences contain 1 integer, and an empty space (a hole) of the appropriate size should be placed between the sequences. Define the required datatype using the MPI_Type_indexed function (this datatype should be the third parameter for the MPI_Recv function). Use the array B, which contains the non-zero part of the Z-block, and the MPI_INT datatype in the MPI_Send function.

Note. Use the msgtag parameter to send the Z-block insertion position P to the master process. To do this, set the value of P as the msgtag parameter for the MPI_Send function in slave processes, call the MPI_Probe function with the MPI_ANY_TAG parameter in the master process (before calling the MPI_Recv function), and analyze its returned parameter of the MPI_Status type.

MPI4Type20°. The number of slave processes K is a multiple of 3 and does not exceed 9. An integer N is given in each process, all the numbers N are the same and are in the range from 3 to 5. Also an integer P and a non-zero part of an integer square matrix of order N (a U-block) are given in each slave process. The given elements of the U-block should be stored in a one-dimensional array B in a row-major order. These elements are located in the U-block in the form of the symbol "U", i. e., they occupy the first and last columns and also the last row. Define a zero-valued integer matrix of the size N·(K/3) × 3N in the master process (all elements of this matrix are equal to 0 and should be stored in a one-dimensional array A in a row-major order). Send the non-zero part of the given U-block from each slave process to the master process in ascending order of ranks of sending processes and write each received U-block in the array A starting from the element of array A with index P (the positions of U-blocks can overlap; in this case the elements of blocks received from processes of higher rank will replace some of the elements of previously written blocks). Output all elements of A in the master process. Use one call of the MPI_Send and MPI_Recv functions for sending each U-block; the array A with the appropriate displacement should be the first parameter for the MPI_Recv function, and the number 1 should be its second parameter. To do this, define a new datatype (in the master process) that contains an appropriate number of sequences with empty spaces (holes) between them. Define the required datatype using the MPI_Type_indexed function (this datatype should be the third parameter for the MPI_Recv function). Use the array B, which contains the non-zero part of the U-block, and the MPI_INT datatype in the MPI_Send function.

Note. See the note to MPI4Type19.

The MPI_Alltoallw function (MPI-2)

MPI4Type21°. Solve the MPI4Type15 task by using one collective operation instead of the MPI_Send and MPI_Recv functions to pass data.

Note. You cannot use the functions of the Scatter group, since the displacements for the passing data items (columns of the matrix) should be specified in bytes rather than in elements. Therefore, you should use the function MPI_Alltoallw introduced in MPI-2, which allows you to configure the collective communications in the most flexible way. In this case, the MPI_Alltoallw function should be used to implement a data passing of the Scatter type (and most of the array parameters used in this function need to be defined differently in the master and slave processes). The MPI-3 standard includes a nonblocking version of this function — MPI_Ialltoallw, which has the same features as other nonblocking collective functions (these functions are considered in the corresponding subgroup of the MPI5Comm group).

MPI4Type22°. Solve the MPI4Type16 task by using one collective operation instead of the MPI_Send and MPI_Recv functions to pass data.

Note. See the note to MPI4Type21. In this case, the MPI_Alltoallw function should be used to implement a data passing of the Gather type.



 


Designed by
M. E. Abramyan and V. N. Braguilevsky

Last revised:
01.01.2024