This program performs the following steps:

- Command-Line Arguments: It accepts two command-line arguments, `matrix_size` and `num_threads`. These determine the size of the matrices and the number of threads to use for parallel execution.
- Memory Allocation: Dynamic memory allocation is used to create three matrices, `A`, `B`, and `C`. These hold the two input matrices and the result of the matrix multiplication.
- Initialization: Matrices `A` and `B` are initialized with the value 2, while matrix `C` is initialized with zeros.
- Parallel Matrix Multiplication: The core matrix multiplication is parallelized using OpenMP. The `#pragma omp parallel for` directive splits the work among multiple threads; each thread calculates a portion of the result matrix `C`.
- Timing: The program measures the elapsed time of the matrix multiplication using the `gettimeofday` function, which provides a measure of the program's performance.
- Memory Deallocation: After the matrix multiplication is complete, memory is deallocated to prevent memory leaks.
To compile the program, open a terminal and navigate to the directory containing the code. Use the following command:
gcc -o matrix_multiplication matrix_multiplication.c -fopenmp
This command compiles the code with OpenMP support and generates an executable named matrix_multiplication.
To run the program, use the following command:
./matrix_multiplication <matrix_size> <num_threads>
Replace <matrix_size> with the desired size of the matrices and <num_threads> with the number of threads to use for parallel execution. For example:

./matrix_multiplication 512 2

This performs matrix multiplication with 512x512 matrices using 2 threads.

For finding the nth power of a matrix, also pass the desired exponent:

./matrix_multiplication <matrix_size> <exponent> <num_threads>

For example:

./matrix_multiplication 512 2 4

This raises a 512x512 matrix to the power 2 using 4 threads.
The program will display the size of the matrices, the number of threads used, and the elapsed time for matrix multiplication.
Graphs for these measurements are shown below.
This C program demonstrates how to use OpenMP to parallelize matrix multiplication, improving the performance of the computation. You can adjust the matrix size and the number of threads to observe the impact on execution time.
# Block Matrix Multiplication
Compile the program using a C compiler. For example:
gcc -o block_matrix_mult block_matrix_mult.c -fopenmp
Here, block_matrix_mult is the name of the executable.
To run the program, use the following command-line format:
./block_matrix_mult <matrix_size> <num_threads> <block_size>
For the nth power of a matrix using block matrix multiplication (BMM):
./block_matrix_mult <matrix_size> <exponent> <num_threads> <block_size>
- <matrix_size>: The size of the square matrices (e.g., 100 for a 100x100 matrix).
- <num_threads>: The number of threads to use for parallel execution.
- <block_size>: The size of the square blocks (e.g., 10 for a 10x10 block).

For example, to multiply two 1024x1024 matrices using 6 threads and 4x4 blocks:

./block_matrix_mult 1024 6 4

For the nth power (here, raising a 1024x1024 matrix to the power 2 with 6 threads and 4x4 blocks):

./block_matrix_mult 1024 2 6 4
The program will display the size of the matrices, the block size, and the result of the multiplication.
The program dynamically allocates memory for matrices A, B, and C, as well as for the block buffers. It frees the allocated memory after the multiplication is complete to prevent memory leaks.