Exam Prep 310
Why use threads/processes:
- resource utilization - exploit wait time by letting another task run
- convenience - easier and more desirable to write several simple tasks than one complex one
- fairness - sharing resources via time slicing
Benefits of threads:
- responsiveness
- exploiting multiprocessors
- simplicity of modelling
- simplified handling of asynchronous events
Risks of threads:
- Safety hazards - correctness of data may depend on the relative timing or
interleaving of multiple threads (race conditions)
- Liveness hazards - liveness means 'something good eventually happens'; these
hazards prevent that from happening. For example, in a sequential program an
infinite loop means the code that follows the loop never gets executed. In
concurrency, we have deadlock, livelock, and starvation.
- Performance hazards - hazards related to the performance of the software, such as
low throughput, poor responsiveness, poor service time, excessive resource consumption
Concurrent Programming is about writing thread safe code and managing access to
mutable shared state.
Shared: implies that a variable can be accessed by more than one thread
Mutable: the value of the variable could change during its lifetime
Achieving Safety:
- Stateless
- Atomic operations
- Locking
Why not lock everything? Possible liveness and/or performance problems. Even if
every method were synchronized, that doesn't make compound actions built from
those methods atomic! (What if we need to perform two operations atomically, and
one thread holds the lock for one operation while another thread holds the lock
for the other? A deadlock could occur.)
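The compound-action deadlock described above can be sketched as follows (a minimal, hypothetical Account class; the names are illustrative). Each individual method is atomic, but the transfer locks two objects, and two threads transferring in opposite directions can each grab one lock and wait forever for the other:

```java
// Hypothetical sketch: every method is synchronized, yet a compound
// action (transfer) built from them can still deadlock.
public class Account {
    private long balance;

    public Account(long balance) { this.balance = balance; }

    public synchronized void debit(long amount)  { balance -= amount; }
    public synchronized void credit(long amount) { balance += amount; }
    public synchronized long getBalance() { return balance; }

    // Thread A calling transfer(a, b, ...) while thread B calls
    // transfer(b, a, ...) may each acquire the first lock and then
    // block forever waiting for the second -> deadlock.
    public static void transfer(Account from, Account to, long amount) {
        synchronized (from) {
            synchronized (to) {
                from.debit(amount);
                to.credit(amount);
            }
        }
    }
}
```

One common fix is to impose a global lock ordering (e.g. always lock the account with the smaller identity hash first), so the two threads can never hold the locks in opposite orders.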
SHARING OBJECTS
Memory visibility: some threads may read or write a cached copy of a variable,
instead of the shared memory location directly, to increase speed of operation.
However, this can cause visibility problems, where one thread does not see the
updated variable at the correct time (or even at all). Synchronization forces the
threads to read and write that variable from memory directly. The volatile keyword
does a similar job.
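The volatile fix above can be sketched with a hypothetical VisibleFlag class (names are illustrative). The volatile write to `ready` guarantees that a reader who later sees `ready == true` also sees every write the publisher made before it:

```java
import java.util.concurrent.TimeUnit;

// Sketch: a volatile flag used to publish a value safely. Without
// 'volatile', the reader might spin forever or read a stale 'number'.
public class VisibleFlag {
    private volatile boolean ready = false;
    private int number = 0;   // safe: written before the volatile write

    public void publish(int value) {
        number = value;   // ordinary write...
        ready = true;     // ...made visible by the volatile write
    }

    // Spins until the writer's publish() becomes visible, then returns
    // the published value.
    public int awaitValue() throws InterruptedException {
        while (!ready) {
            TimeUnit.MILLISECONDS.sleep(1);
        }
        return number;   // guaranteed to see the value written before 'ready'
    }
}
```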
Why must all threads that read or write a variable synchronize their access with
a common lock, even setting aside race conditions (which mostly involve multiple
writes rather than reads)?
Answer: because of memory visibility and stale data. The value set by a thread may
not be visible to the other reading threads in time, or at all, causing stale data.
Out-of-thin-air hazards may also occur with double or long values: not only might
the data be stale, it could also be a random number, because 64-bit operations are
not guaranteed to be atomic.
Synchronizing all access to the variable works by ensuring that everything done
while holding a lock is visible to the next thread that acquires that same lock.
This is why it has to be the same lock: a thread holding a different lock gets no
such guarantee and may not see the updated value.
Extra value of using volatile variables: when the thread writes the volatile's
value directly to memory, it also flushes all the other variables it has
updated in its cache to memory, causing them all to become visible to other
threads accessing the volatile variable. (This may not be completely accurate but
it's a good analogy.)
Even though this means volatile variables ensure the visibility of other modified
variables before them, we should not rely on it. Volatile variables should only be
used to ensure THEIR OWN visibility or to indicate that an important life-cycle
event (such as initialization or shutdown) has occurred (to signal an event, just
like the asleep example)
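The asleep example mentioned above can be sketched like this (a hypothetical SheepCounter class): volatile is ideal for a simple life-cycle/status flag that one thread sets and another polls:

```java
// Sketch: volatile as a status flag. One thread polls the flag; another
// sets it. Without 'volatile', the polling loop might never see the
// update and could spin forever.
public class SheepCounter {
    private volatile boolean asleep = false;

    public void countSheepUntilAsleep() {
        while (!asleep) {
            countSheep();   // keep counting until the flag becomes visible
        }
    }

    public void fallAsleep() { asleep = true; }

    private void countSheep() { /* one unit of idle work */ }
}
```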
You can use volatile variables only when all the following criteria are met:
• Writes to the variable do not depend on its current value, or you can ensure
that only a single thread ever updates the value; ((THIS IS TO SAY THAT ATOMICITY
IS NOT NEEDED IF ONLY ONE THREAD IS WRITING TO THE VARIABLE (using something like
count++) SINCE NO ONE ELSE IS GOING TO WRITE TO IT. IT IS ALSO SAYING
THAT IF THERE ARE MULTIPLE THREADS UPDATING THE VALUE, THEN WRITES TO THE VARIABLE
MUST NOT DEPEND ON THE CURRENT VALUE (meaning no count++) SO THAT THE WRITE
OPERATION IS ALREADY ATOMIC, UNLIKE THE count++ OPERATION, WHICH IS NOT ATOMIC. THESE
TWO CONDITIONS ARE TO SAY THAT IT IS BETTER TO JUST USE SYNCHRONIZATION TO COVER
BOTH ATOMICITY AND VISIBILITY, BUT WE CAN USE volatile VARIABLES TO COVER JUST
VISIBILITY IF WE KNOW WE IMPLICITLY HAVE ATOMICITY COVERED.))
• The variable does not participate in invariants with other state variables;
and
• Locking is not required for any other reason while the variable is being
accessed.
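The first criterion can be sketched with a hypothetical Counters class: count++ on a volatile is a read-modify-write, so two threads can interleave and lose updates even though every read sees the latest committed value. An atomic class (or a synchronized method) restores atomicity as well as visibility:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: volatile gives visibility but NOT atomicity for count++.
public class Counters {
    private volatile int unsafeCount = 0;

    public void unsafeIncrement() {
        unsafeCount++;   // three steps: read, add, write -> lost updates
    }

    // AtomicInteger performs the read-modify-write as one atomic step,
    // covering both atomicity and visibility.
    private final AtomicInteger safeCount = new AtomicInteger();

    public void safeIncrement() {
        safeCount.incrementAndGet();
    }

    public int getSafeCount() { return safeCount.get(); }
}
```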
An object that is published when it should not have been is said to have escaped.
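A common way an object escapes is by returning a reference to internal mutable state, sketched below with a hypothetical States class (names are illustrative). Once the internal set is returned, any caller can mutate it without holding any lock; a defensive copy keeps it confined:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of escape via publication of internal state.
public class States {
    private final Set<String> states = new HashSet<>();

    public States() {
        states.add("AK");
        states.add("AL");
    }

    // 'states' has escaped: callers can now mutate internal state freely.
    public Set<String> getStatesUnsafe() {
        return states;
    }

    // A defensive copy keeps the internal set confined to this object.
    public Set<String> getStatesSafe() {
        return new HashSet<>(states);
    }
}
```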
The design process for a thread-safe class should include these three basic
elements:
• Identify the variables that form the object’s state;
• Identify the invariants that constrain the state variables;
• Establish a policy for managing concurrent access to the object's state.