No, since the task scheduling depends on the resources available (CPU cores).
### b)
Yes, since the threads perform the same task (printing the ids in order), following the order in which the threads are created in the for loop.
NOTE: These threads follow a fork-join model: both threads work simultaneously, and when one finishes it waits for the other to complete its job; only then do the threads join and the code continues sequentially.
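For reference, a minimal sketch of this fork-join pattern; the loop bound, message text, and the use of a combined `parallel for` are my assumptions, not necessarily the original lab code:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    /* Fork: OpenMP spawns a team of 2 threads for the loop. */
    #pragma omp parallel for num_threads(2) schedule(static)
    for (int i = 0; i < 8; i++) {
        /* Every thread runs the same task: printing the ids it was assigned. */
        printf("id %d printed by thread %d\n", i, omp_get_thread_num());
    }
    /* Join: the implicit barrier at the end of the loop makes the thread
       that finishes first wait for the other before the code continues. */
    printf("both threads have joined, continuing sequentially\n");
    return 0;
}
```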
### c)
It depends on the CPU availability in each clock cycle; a thread, when executed, occupies one of the available cores.
#### a)
All of the prints are assigned to the first thread created: there is a static distribution of the instructions to the master thread.
Thread 0 -> ids 0-99
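A minimal sketch of how all the prints end up on the master thread, assuming the exercise wraps the printing loop in `#pragma omp master` (the construct and loop body are assumptions based on the behaviour described above):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        /* Only the master thread (always thread 0) executes this block,
           so all ids 0-99 are printed by it - a static distribution. */
        #pragma omp master
        {
            for (int id = 0; id < 100; id++)
                printf("id %d printed by thread %d\n", id, omp_get_thread_num());
        }
        /* Unlike `single`, `master` has no implicit barrier at its end. */
    }
    return 0;
}
```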
#### b)
Yes, since the distribution is static, the output is always the same.
### 2.3 - \#pragma omp single
![[Pasted image 20231018120219.png]]
#### a)
All of the prints end up on a single thread: the whole block of instructions is assigned to one thread of the team, chosen arbitrarily by the runtime.
Access to the printing is restricted: the chosen thread must execute the task in its entirety and cannot leave the construct until the task is complete. In effect, it creates an indivisible block of code that one thread must run to completion.
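A minimal sketch of the behaviour described in the two paragraphs above, assuming a team of 2 threads and a small printing loop inside the `single` block (both are assumptions):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        /* Exactly one thread of the team (chosen by the runtime, not
           necessarily thread 0) executes the whole block. */
        #pragma omp single
        {
            for (int id = 0; id < 10; id++)
                printf("id %d printed by thread %d\n", id, omp_get_thread_num());
        }   /* Implicit barrier: the other thread waits here until the
               single block has been executed in its entirety. */
    }
    return 0;
}
```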
The output is always the same, since the instruction `#pragma omp barrier` creates a barrier after each single-thread execution (a series of fork-joins between the 2 threads).
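A minimal sketch of how the barrier fixes the output order, assuming the original code alternates single-thread print blocks with barriers (the exact messages are assumptions):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        #pragma omp single
        printf("block 1\n");

        /* Explicit barrier: neither thread may start the code below until
           both have reached this point, so "block 1" always appears in the
           output before "block 2". */
        #pragma omp barrier

        #pragma omp single
        printf("block 2\n");
    }
    return 0;
}
```

Note that the `single` construct already ends with an implicit barrier; the explicit `#pragma omp barrier` simply makes the fork-join ordering visible in the code.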