
Thread Pool

A: False, there is no circular dependency between two [enqueue()] calls.

B: False, the [enqueue()] function is protected by a lock, so two calls to [enqueue()] are not really parallel.

C: True, enqueuing a work item requires acquiring the lock, but the lock is held by a work item for the whole duration of its execution.

D: True, the same lock is held for the whole duration of the execution of a work item's function.

E: False, it unlocks before waiting for the threads to finish.

F: False, the lock is not held after the if; when the thread re-enters the while loop it re-acquires the lock before checking the condition again.

G: False, we only enqueue one task ⇒ we want to wake up only a single thread.

H: True, all the critical resources are still protected, but the function assigned to a thread runs in parallel.

I: False; if [cv.notify_one()] is not protected, then between the moment the lock is released and the moment the notification is sent, other operations may happen on the work queue, resulting in unwanted behavior.

J: False, it leads to some threads waiting forever, causing a deadlock.

Producer - Consumer Queue

// place 4 & stmt 2

A: False, because we use a lock with the condition variable, ensuring the [dequeue()] call won't miss the wake-up call, and we also pass [lck] into [cv.wait()], which unlocks the lock while sleeping and re-acquires it atomically when notified to wake up. So the way [enqueue()] and [dequeue()] interact is correct.

B: False, other problems may appear with two simultaneous calls to [dequeue()], but not a deadlock.

C: False, since the whole function happens under one single lock [lck].

D: True. Let's say we have the following scenario:
	 - [items] has only one element.
	 - process A calls [dequeue()] and executes until line 24.
	 - process B also calls [dequeue()] and executes until the end.
	 - A resumes execution on line 24, but when it wants to pop from [items] it will encounter an error, since [items] was already emptied by B.

E: False, because the portions of the code critical to the way [enqueue()] and [dequeue()] interact are protected by a lock. More than that, [enqueue()] works with the back of [items], while [dequeue()] works with the front.

F: False, it makes sense to first acquire the lock before checking the size of [items]. More than that, the scenario described in D can still happen.

G: False, because a lot of inconsistent things can happen, including the scenario described in D.

H: True, because we only need to hold the lock while we are working with the critical data ([items] and [isClosed] in our case). By removing line 21, we keep the lock for a few more lines as we check and modify [items], after which we can release it. It is not mandatory to release it manually, since it will be released when it goes out of scope, but it also causes no problems to release it manually, either on line 25 before we return the result or on line 28 before we return the empty option. With this modification the issue described in D no longer happens.

I: False, because [cv.wait(lck)] already does this, and atomically. Even with such a change, scenario D can still happen.

J: False, because if we move statement 1, other issues may appear. For example:
	 - process A calls [enqueue()] and executes until line 10.
	 - process B calls [dequeue()], enters the while loop and executes until line 18.
	 - A resumes execution from line 10 and finishes the whole function, sending the notification.
	 - B resumes execution from line 18 and enters the waiting phase, which will never end if no other [enqueue()] is called, since it missed the notification.

// place 3 & stmt 1

One problem stems from the fact that we check whether the critical resource [items] is non-empty on line 18, even though [items] is not protected by a lock.


A: True. Let's say we have the following scenario:
	 - [items] has 0 items.
	 - process A calls [dequeue()] and executes until line 27.
	 - process B calls [enqueue()] and executes the whole function.
	 - process A resumes execution from line 27 and enters the waiting phase, which will never end if no other [enqueue()] is called afterwards, since it missed the notification from B.

B: False, other problems may appear with two simultaneous calls to [dequeue()], but not a deadlock.

C: False, since the whole function happens under one single lock [lck].

D: True. Let's say we have the following scenario: 
	 - [items] contains only one element.
	 - process A calls [dequeue()] and executes until line 21.
	 - process B calls [dequeue()] and executes the whole function.
	 - A resumes execution on line 21, but when it wants to pop from [items] it will encounter an error, since [items] was already emptied by B.

E: False, because even though the if statement on [items] on line 18 is not protected by a lock (which can produce deadlocks), no inconsistent behavior on the actual data will happen if [dequeue()] and [enqueue()] are called simultaneously and the deadlock happens to be avoided. More than that, [enqueue()] works with the back of [items], while [dequeue()] works with the front.

F: True, this will make sure [items] is protected by the lock.

G: True, this will make sure [items] is protected by the lock. Yes, the lock is released once we leave the scope of the while loop's body, but it is immediately re-acquired when we re-enter it.

H: False, we are still not fully protecting our [items] by doing so. A similar scenario as in A might happen.

I: False, we are still not fully protecting our [items] by doing so. A similar scenario as in A might happen.

J: False, we are still not fully protecting our [items] by doing so. A similar scenario as in A might happen.

Futures

// cv.notify_all(); → lock