18 October 2023 - #CP
Ex 1
a)
Running the code (first time): !
Running the code (second time): !
No, since thread scheduling depends on the resources available (CPU cores), so the order in which the threads run can change between executions.
b)
Yes, since each thread performs the same task (printing ids in order), following the order in which the threads are created in the for loop.
NOTE: These threads follow a fork-join model: the master thread forks a team of threads that run concurrently, and at the end of the parallel region each thread waits at an implicit barrier (the join) until all have finished, after which a single thread continues with the rest of the code.
c)
It depends on CPU availability at each clock cycle: the OS scheduler decides which thread runs on which core, so the interleaving of threads can change between runs.
Ex 2
2.1 - #pragma omp for
a)
The prints are split exactly in half between the 2 threads created: the loop iterations are statically distributed among the threads. Thread 0 -> ids 0-49, Thread 1 -> ids 50-99.
b)
Yes - the iteration-to-thread mapping is static, so each thread always prints the same ids (though the interleaving of lines between the two threads can still vary).
2.2 - #pragma omp master
a)
All the prints are executed by the first thread created - the master thread: the block inside `master` runs only on thread 0, while the other threads skip it. Thread 0 -> ids 0-99.
b)
Yes - only the master thread executes the block, so the output is always the same.
2.3 - #pragma omp single
a)
All the prints are executed by a single thread - whichever thread reaches the construct first, not necessarily thread 0. Thread 0/1 -> ids 0-99.
b)
Yes - the printed output is always the same, although the thread that executes the block can differ between runs.
c) #pragma omp critical
Access to the printing is restricted: the critical section is mutually exclusive, so only one thread at a time can be inside it, and a thread must finish the whole block before another thread can enter. In effect it turns the block into an indivisible section of code.
Ex 3
3.1
The output is always the same, since the instruction #pragma omp barrier makes every thread wait until all threads in the team have reached the barrier before any of them continues (a series of fork-joins between the 2 threads).
3.2
The result is similar to #pragma omp ordered.