---
title: tdt4200ex3 -- 1d wave equation mpi
date: 2025-09-22
author: fredrik robertsen
---

## theory questions

### 1)

by parallelizing the code i get quite a speed-up when running four processes.
this is what i expected, because my laptop has four cores available for
parallel processing. i have done threading previously, which also gave a
speed-up with four threads, so it makes sense that we get a speed-up here. it
means the computation itself is heavy enough to outweigh the overhead of
using mpi.

here's an excerpt from `make check`:

```
./sequential
Total elapsed time: 5.115875 seconds

mpiexec -n 1 --oversubscribe ./parallel
Total elapsed time: 5.347622 seconds

mpiexec -n 4 --oversubscribe ./parallel
Total elapsed time: 1.729224 seconds

mpiexec -n 13 --oversubscribe ./parallel
Total elapsed time: 2.717839 seconds
```

assuming i have implemented timing correctly (one way to do it is shown in the
code sketches at the end of this report), mpi is slightly slower with only a
single process, as expected: we pay the communication and startup overhead
without gaining any parallelism. similarly, 13 processes still give a speed-up
over the sequential run, but a smaller one than 4 processes, because the extra
processes cannot actually run in parallel -- they are limited by my hardware
and mostly add scheduling and communication overhead.

### 2)

the communication pattern is a halo (ghost-cell) exchange over a chain/ring of
processes, driven by a 3-point stencil: each process only communicates with
its nearest neighbors, exchanging its border cells with them every time step.
the calculations themselves are a form of finite differences. (a sketch of the
exchange is included at the end of this report.)

### 3)

_well-spoken [AI](https://kagi.com) answer:_

**point-to-point communication:**

- communication between **exactly two processes** (sender ↔ receiver)
- examples: `MPI_Send`, `MPI_Recv`, `MPI_Sendrecv`

**collective communication:**

- communication involving **all processes** in a communicator
- examples: `MPI_Bcast`, `MPI_Reduce`, `MPI_Allreduce`, `MPI_Gather`, `MPI_Scatter`

**key differences:**

| aspect              | point-to-point | collective                 |
| ------------------- | -------------- | -------------------------- |
| **participants**    | 2 processes    | all processes              |
| **synchronization** | optional       | all ranks must participate |
| **optimization**    | manual         | done by the library        |
| **flexibility**     | high           | limited patterns           |

(a small sketch contrasting the two is included at the end of this report.)

### 4)

`int* a, b` is a classic c pitfall. my auto-formatter is configured to format
such a declaration as `int *a, b`, since the `*` binds to the variable name
(the declarator), not to the type. as such, `b` is merely an `int`, not a
pointer.

this is slightly related to the entertaining c-ism `buffer[0] == 0[buffer]`:
indexing brackets are syntactic sugar for pointer arithmetic (`a[i]` means
`*(a + i)`), and addition is commutative, so `0[buffer]` is `*(0 + buffer)`.
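
## code sketches

for question 1, this is roughly how the timing could be done with `MPI_Wtime`.
it is an illustrative sketch, not the actual hand-in code; the `simulate`
function is a stand-in for the real time-stepping loop.

```c
// sketch: timing an mpi run with MPI_Wtime (illustrative, not the hand-in code)
#include <mpi.h>
#include <stdio.h>

// stand-in for the actual time-stepping loop of the exercise
static void simulate(void) { /* ... */ }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // make sure all ranks start the clock at (roughly) the same time
    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    simulate();

    // wait for the slowest rank before stopping the clock
    MPI_Barrier(MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("Total elapsed time: %f seconds\n", elapsed);

    MPI_Finalize();
    return 0;
}
```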
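
for question 2, a sketch of the nearest-neighbor halo exchange for a 3-point
stencil, assuming a 1d block decomposition with one ghost cell at each end of
the local array. names like `halo_exchange` and `n_local` are made up for the
illustration.

```c
// sketch: one halo exchange per time step for a 3-point stencil
// u[0] and u[n_local + 1] are ghost cells, u[1..n_local] are interior cells
#include <mpi.h>

void halo_exchange(double *u, int n_local, int rank, int size)
{
    // boundary ranks talk to MPI_PROC_NULL, which turns the call into a no-op
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // send first interior cell left, receive right ghost cell from the right neighbor
    MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                 &u[n_local + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // send last interior cell right, receive left ghost cell from the left neighbor
    MPI_Sendrecv(&u[n_local], 1, MPI_DOUBLE, right, 1,
                 &u[0],       1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
```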
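
for question 3, a tiny illustration of the difference: distributing one value
from rank 0 with explicit point-to-point calls versus a single collective
call. again an illustrative snippet, not part of the exercise code.

```c
// sketch: the same broadcast written with point-to-point calls and as a collective
#include <mpi.h>

// point-to-point: rank 0 explicitly sends to every other rank, one at a time
void broadcast_p2p(double *value, int rank, int size)
{
    if (rank == 0) {
        for (int dst = 1; dst < size; dst++)
            MPI_Send(value, 1, MPI_DOUBLE, dst, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
}

// collective: every rank makes the same call and the library picks the pattern
void broadcast_collective(double *value)
{
    MPI_Bcast(value, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
```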
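
for question 4, a small self-contained snippet demonstrating both points (a
hypothetical example, not from the hand-in):

```c
// sketch: the declaration pitfall and the a[i] == *(a + i) equivalence
#include <assert.h>
#include <stdio.h>

int main(void)
{
    int* a, b;    // declares a as int*, but b as plain int
    int x = 42;
    a = &x;
    b = 7;        // b holds an int, not a pointer

    int buffer[3] = {10, 11, 12};
    // a[i] is defined as *(a + i), and + is commutative, so 0[buffer] works too
    assert(buffer[0] == 0[buffer]);

    printf("%d %d %d\n", *a, b, 2[buffer]);
    return 0;
}
```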