---
dg-publish: true
---
18 October 2023 - #CP

---
# Ficha 6
https://www.ibm.com/docs/en/xl-c-aix/13.1.0?topic=descriptions-pragma-directives-parallel-processing
https://610yilingliu.github.io/2020/07/15/ScheduleinOpenMP/
## Ex 1
### a)
Running the code (first time):
![[Pasted image 20231018113933.png]]
Running the code (second time):
![[Pasted image 20231018114101.png]]
No, since the scheduling of the threads depends on the resources available at run time (CPU cores).
### b)
Yes, since the threads perform the same task (printing the ids in order), following the order in which the threads are created in the for loop.
NOTE: These threads follow a fork-join model: both threads work simultaneously, and when one finishes it waits at the implicit barrier for the other to finish its work; only then does the parallel region end and the rest of the code continue.
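The exercise source is not shown here, but a minimal sketch of the kind of program presumably being run (a plain parallel region with 2 threads printing their ids) would be:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    // Fork: a team of threads is created here and each thread runs the block.
    #pragma omp parallel num_threads(2)
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    // Join: all threads meet at the implicit barrier at the end of the
    // parallel region before the program continues.
    return 0;
}
```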
### c)
It depends on the CPU availability at each instant of execution (which cores are free when each thread gets scheduled), so the order is not deterministic.
* * *
## Ex 2
### 2.1 - \#pragma omp for
![[Pasted image 20231018115329.png]]
#### a)
Exactly half of the prints go to each of the 2 threads created: the loop iterations are distributed statically between the threads (see the sketch below).
Thread 0 -> ids 0-49
Thread 1 -> ids 50-99
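A minimal sketch of the assumed loop (2 threads and 100 iterations, taken from the output above; the exercise's actual source is not shown):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        // With the (typically static) default schedule, thread 0 gets
        // i = 0..49 and thread 1 gets i = 50..99.
        #pragma omp for
        for (int i = 0; i < 100; i++)
            printf("thread %d prints id %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```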
#### b)
Yes, since it is statically distributed.
### 2.2 - \#pragma omp master
![[Pasted image 20231018120028.png]]
#### a)
All of the prints are executed by the first thread created: the instructions are assigned statically to the master thread (thread 0).
Thread 0 -> ids 0-99
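The same assumed loop, this time restricted with `master` (a sketch, not the exercise's actual source):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        // Only the master thread (thread 0) executes this block; the other
        // thread skips it and there is no implicit barrier afterwards.
        #pragma omp master
        for (int i = 0; i < 100; i++)
            printf("thread %d prints id %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```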
#### b)
Yes, since it is statically distributed.
### 2.3 - \#pragma omp single
![[Pasted image 20231018120219.png]]
#### a)
All of the prints are executed by a single thread: the block is assigned to whichever thread reaches the `single` construct first, so it is not necessarily thread 0.
Thread 0/1 -> ids 0-99
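The same assumed loop with `single` (again a sketch under the same assumptions):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        // Exactly one thread (whichever reaches the construct first) runs
        // the block; the other thread waits at the implicit barrier that
        // follows the single construct.
        #pragma omp single
        for (int i = 0; i < 100; i++)
            printf("thread %d prints id %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```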
#### b)
Yes, since it is statically distributed.
#### c) \#pragma omp critical
![[Pasted image 20231018120412.png]]
The threads' access to the printing is restricted: only one thread can be inside the critical section at a time, so each thread must complete its entire set of prints before the other can start. It effectively turns the block into an indivisible (mutually exclusive) section of code.
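A sketch of the assumed code with the whole loop inside a `critical` section:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        // Mutual exclusion: only one thread at a time may be inside the
        // critical section, so each thread finishes all of its prints
        // before the other one starts.
        #pragma omp critical
        for (int i = 0; i < 100; i++)
            printf("thread %d prints id %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```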
***
## Ex 3
### 3.1
![[Pasted image 20231018121319.png]]
The output is always the same, since the directive `#pragma omp barrier` creates a synchronization point: no thread continues past it until both threads have reached it (a series of fork-joins between the 2 threads).
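A minimal sketch of how `barrier` behaves (the exercise source is not shown; the two-phase prints are illustrative only):

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel num_threads(2)
    {
        int id = omp_get_thread_num();
        printf("phase 1, thread %d\n", id);
        // No thread passes this point until every thread has reached it,
        // so every "phase 1" line is printed before any "phase 2" line.
        #pragma omp barrier
        printf("phase 2, thread %d\n", id);
    }
    return 0;
}
```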
### 3.2
![[Pasted image 20231018122801.png]]
The result is similar to `#pragma omp for`, except that adding `ordered` forces the threads to print in the sequential order of the code (see the sketch below).
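A sketch of the assumed loop with the `ordered` clause and construct:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    // The loop needs the `ordered` clause for the ordered construct
    // inside its body to be allowed.
    #pragma omp parallel for ordered num_threads(2)
    for (int i = 0; i < 100; i++) {
        // Executed in the sequential order of the iterations (i = 0, 1, 2, ...),
        // regardless of which thread owns each iteration.
        #pragma omp ordered
        printf("thread %d prints id %d\n", omp_get_thread_num(), i);
    }
    return 0;
}
```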
***
## Ex 4
### 4.1