
UNIT IV

SHARED OBJECTS AND CONCURRENT OBJECTS


Shared Objects and Synchronization - Properties of Mutual Exclusion -
The Moral - The Producer–Consumer Problem - The Readers–Writers
Problem - Realities of Parallelization - Parallel Programming - Principles -
Mutual Exclusion - Time - Critical Sections - Thread Solutions - The
Filter Lock - Fairness - Lamport’s Bakery Algorithm - Bounded
Timestamps - Lower Bounds on the Number of Locations - Concurrent
Objects - Concurrency and Correctness - Sequential Objects - Quiescent
Consistency - Sequential Consistency - Linearizability - Formal
Definitions - Progress Conditions - The Java Memory Model

IFETCE/ M.E (CSE) /I YEAR/I SEM/CP7102/ADS/UNIT-4/PPT/VER 1.2 1


Shared Objects and Synchronization
• The heart of the problem is that incrementing the counter’s
value requires two distinct operations on the shared variable:
• Reading the value field into a temporary variable
• Writing the new value back to the Counter object.
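The counter the slides have in mind can be sketched as follows (the class and field names are assumed here, not given on the slides); the two commented lines are the two distinct operations:

```java
// A sketch of the shared counter described above. getAndIncrement()
// performs two distinct steps on the shared field: a read into a
// temporary, then a write back. Between those steps another thread
// can read the same stale value, so two concurrent calls may both
// return the same number -- the race the slides warn about.
public class Counter {
    private long value;            // shared state

    public long getAndIncrement() {
        long temp = value;         // read into a temporary variable
        value = temp + 1;          // write the new value back
        return temp;
    }

    public long get() {
        return value;
    }
}
```

Run sequentially the class behaves correctly; the danger appears only when two threads interleave between the read and the write-back.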
A Fable:
• A sequence of fables illustrates some of the basic problems.
• Alice and Bob need to agree on mutually compatible
procedures for deciding what to do.
• We call such an agreement a coordination protocol (or just a
protocol).



Properties of Mutual Exclusion
• The flag protocol is a correct solution to Alice and Bob’s
problem
• we proved that the pets are excluded from being in the yard at
the same time, a property we call mutual exclusion.
• Another property of central importance is deadlock-freedom:
• First, if one pet wants to enter the yard, then it eventually
succeeds.
• Second, if both pets want to enter the yard, then eventually at
least one of them succeeds.
• We consider this deadlock-freedom property to be essential.



• Another property of compelling interest is starvation-freedom
(sometimes called lockout-freedom)
• If a pet wants to enter the yard, will it eventually succeed?
• Here, Alice and Bob’s protocol performs poorly.
• Whenever Alice and Bob are in conflict
• Bob defers to Alice, so it is possible that Alice’s cat can use
the yard over and over again
• while Bob’s dog becomes increasingly uncomfortable.
• The last property of interest concerns waiting.



The Moral
• Two kinds of communication occur naturally in concurrent
systems
Transient communication:
• It requires both parties to participate at the same time.
• Shouting, gestures, or cell phone calls are examples of
transient communication.
Persistent communication:
• It allows the sender and receiver to participate at different
times.
• Posting letters, sending email, or leaving notes under rocks are
all examples of persistent communication.



The Producer–Consumer Problem
• Bob places a can standing up on Alice’s windowsill, ties one
end of his string around the can, and puts the other end of the
string in his living room.
• He then puts food in the yard and knocks the can down.
• When Alice wants to release the pets, she does the following:
1. She waits until the can is down.
2. She releases the pets.
3. When the pets return, Alice checks whether they finished
the food. If so, she resets the can.



• Bob does the following:
1. He waits until the can is up.
2. He puts food in the yard.
3. He pulls the string and knocks the can down.
Three properties:
Mutual Exclusion:
Bob and the pets are never in the yard together.
Starvation-freedom:
If Bob is always willing to feed, and the pets are always finished,
then the pets will eat infinitely often.
Producer–Consumer:
The pets will not enter the yard unless there is food, and Bob will
never provide more food if there is unconsumed food.



• This producer–consumer protocol and the mutual exclusion
protocol considered in the last section both ensure that Alice and
Bob are never in the yard at the same time.
• Mutual exclusion requires deadlock-freedom: anyone must be
able to enter the yard infinitely often on their own, even if the
other is not there.
• By contrast, the producer–consumer protocol’s starvation-freedom
property assumes continuous cooperation from both parties.



The Readers–Writers Problem
• Bob and Alice eventually decide they love their pets so much
they need to communicate simple messages about them.
• Bob puts up a billboard in front of his house.
• The billboard holds a sequence of large tiles, each tile holding
a single letter.
• Bob, at his leisure, posts a message on the billboard by lifting
one tile at a time.
• Alice, at her leisure, reads the message by looking at the
billboard through a telescope, one tile at a time.



Imagine that Bob posts the message:
•sell the cat
Alice, looking through her telescope, transcribes the message
•sell the
At this point Bob takes down the tiles and writes out a new
message
•wash the dog
Alice, continuing to scan across the billboard transcribes the
message
•sell the dog



• There are some straightforward ways to solve the readers–
writers problem
• Alice and Bob can use the mutual exclusion protocol to make
sure that Alice reads only complete sentences.
• She might still miss a sentence, however.
• They can use the can-and-string protocol, where Bob produces
sentences and Alice consumes them.



The Harsh Realities of Parallelization
• In an ideal world, upgrading from a uniprocessor to an n-way
multiprocessor should provide about an n-fold increase in
computational power.
• In practice, sadly, this never happens.
• The primary reason for this is that most real-world
computational problems cannot be effectively parallelized
without incurring the costs of inter-processor communication
and coordination.



Amdahl’s Law:
•It captures the notion that the extent to which we can speed up
any complex job (not just painting) is limited by how much of the
job must be executed sequentially.
•The parallelized computation takes time:
1 − p + p/n
•Amdahl’s Law says that the speedup, that is, the ratio between
the sequential (single-processor) time and the parallel time, is:
S = 1 / (1 − p + p/n)
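A quick numeric sketch of the formula (the class and method names are ours):

```java
// Numeric check of Amdahl's Law as stated above: with parallel
// fraction p and n processors, the parallel time is 1 - p + p/n,
// so the speedup is S = 1 / (1 - p + p/n).
public class Amdahl {
    public static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }
}
```

For example, p = 0.9 and n = 10 give S ≈ 5.26: even when 90% of the job is parallelizable, ten processors yield barely a five-fold speedup.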



Parallel Programming
• Multiprocessor programming poses many challenges, ranging
from grand intellectual issues to subtle engineering tricks.
• We tackle these challenges using successive refinement, starting
with an idealized model and increasingly focusing on basic
engineering principles.
• The first problem we consider is mutual exclusion
• The oldest and still one of the most basic problems in the field.
• We begin with a mathematical perspective, analyzing the
computability and correctness properties of various algorithms
on an idealized architecture.



• The algorithms themselves, while classical, are not practical
for modern architectures.
• It is particularly important to learn how to reason about subtle
liveness issues such as starvation and deadlock.

Principles-Mutual Exclusion:
• Mutual exclusion is perhaps the most prevalent form of
coordination in multiprocessor programming.
• We study classical mutual exclusion algorithms that work by
reading and writing shared memory.



Time:
•Reasoning about concurrent computation is mostly reasoning
about time.
•Sometimes we want things to happen simultaneously, and
sometimes we want them to happen at different times.
•We need to reason about complicated conditions involving how
multiple time intervals can overlap, or, sometimes, how they
cannot.
•We need a simple but unambiguous language to talk about
events and durations in time.
•Everyday English is too ambiguous and imprecise.



• Instead, we introduce a simple vocabulary and notation to
describe how concurrent threads behave in time.
• Threads share a common time (though not necessarily a
common clock).
• A thread is a state machine, and its state transitions are called
events.
Events are instantaneous:
• They occur at a single instant of time.
• It is convenient to require that events are never simultaneous:
distinct events occur at distinct times.



Critical Sections
• The problem occurs if both threads read the value field at the
line marked
“start of danger zone,”
• Then both update that field at the line marked
“end of danger zone.”
• We can avoid this problem if we transform these two lines into
a critical section:
• a block of code that can be executed by only one thread at a
time.
• We call this property mutual exclusion.
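The transformation can be sketched like this (a minimal illustration; we use Java's built-in synchronized for brevity, and the names are ours):

```java
// One way to turn the danger zone into a critical section: guard the
// read and the write-back with a single lock, so only one thread at a
// time executes them. With mutual exclusion, no increment is lost.
public class SafeCounter {
    private long value;

    public synchronized long getAndIncrement() {
        long temp = value;   // start of former danger zone
        value = temp + 1;    // end of former danger zone
        return temp;         // the two steps are now atomic
    }

    public synchronized long get() {
        return value;
    }
}
```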



Mutual Exclusion
•Critical sections of different threads do not overlap.
• For threads A and B, and integers j and k, either CS_A^k → CS_B^j
or CS_B^j → CS_A^k.
Freedom from Deadlock
•If some thread attempts to acquire the lock, then some thread
will succeed in acquiring the lock.
•If thread A calls lock() but never acquires the lock, then other
threads must be completing an infinite number of critical sections.
Freedom from Starvation
Every thread that attempts to acquire the lock eventually succeeds.
Every call to lock() eventually returns. This property is sometimes
called lockout freedom.



2-Thread Solutions
The LockOne Class
•Our 2-thread lock algorithms follow the following conventions:
•The threads have ids 0 and 1; the calling thread has id i, and the
other has id j = 1 − i.
•Each thread acquires its index by calling ThreadID.get().
•The LockOne algorithm is inadequate because it deadlocks if thread
executions are interleaved.
•If writeA(flag[A] = true) and writeB(flag[B] = true) events occur
before readA(flag[B]) and readB(flag[A]) events, then both threads wait
forever.
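A sketch of the LockOne algorithm just described. We pass the thread id (0 or 1) explicitly instead of using the book's ThreadID.get() helper, and store the flags in AtomicBooleans so the writes are actually visible across threads under the Java memory model:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// LockOne sketch: each thread raises its flag, then waits for the
// other's flag to drop. Correct when one thread runs before the
// other, but if both set their flags before either reads the other's,
// both spin forever -- the deadlock the slides describe.
public class LockOne {
    private final AtomicBoolean[] flag =
        { new AtomicBoolean(false), new AtomicBoolean(false) };

    public void lock(int i) {
        int j = 1 - i;
        flag[i].set(true);        // I'm interested
        while (flag[j].get()) {}  // wait while the other is interested
    }

    public void unlock(int i) {
        flag[i].set(false);
    }
}
```

The test below exercises only the sequential case (one thread completely before the other), the one case where LockOne is guaranteed not to deadlock.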



• Nevertheless, LockOne has an interesting property:
• if one thread runs before the other, no deadlock occurs, and
all is well.



The LockTwo Class
• The LockTwo class is inadequate because it deadlocks if one
thread runs completely before the other.
• Nevertheless, LockTwo has an interesting property:
• if the threads run concurrently, the lock() method succeeds.
• The LockOne and LockTwo classes complement one another
• each succeeds under conditions that cause the other to
deadlock.



The Peterson Lock
• Combine the LockOne and LockTwo algorithms to construct a
starvation-free Lock algorithm.
• This algorithm is arguably the most succinct and elegant two-
thread mutual exclusion algorithm.
• It is known as “Peterson’s Algorithm,” after its inventor.
• It follows that writeB(flag[B] = true) → readA(flag[B] ==
false).
• This observation yields a contradiction because no other write
to flag[B] was performed before the critical section
executions.
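The Peterson lock can be sketched as follows (thread ids 0 and 1 are passed explicitly; AtomicBoolean and volatile stand in for the book's plain fields so the algorithm's read/write order survives the Java memory model):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Peterson lock sketch: LockOne's flag array combined with LockTwo's
// victim field. A thread waits only while the other is interested
// AND it is itself the victim, so the two deadlock scenarios of
// LockOne and LockTwo cancel each other out.
public class Peterson {
    private final AtomicBoolean[] flag =
        { new AtomicBoolean(false), new AtomicBoolean(false) };
    private volatile int victim;

    public void lock(int i) {
        int j = 1 - i;
        flag[i].set(true);                       // I'm interested
        victim = i;                              // you go first
        while (flag[j].get() && victim == i) {}  // wait
    }

    public void unlock(int i) {
        flag[i].set(false);
    }
}
```

Because the lock provides mutual exclusion (and the atomic/volatile accesses provide visibility), two threads incrementing a shared counter under it never lose an update.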



The Filter Lock
• Consider two mutual exclusion protocols that work for n
threads, where n is greater than 2.
• The first solution, the Filter lock, is a direct generalization of
the Peterson lock to multiple threads.
• The second solution, the Bakery lock, is perhaps the simplest
and best known n-thread solution.
• The Filter lock creates n−1 “waiting rooms,” called levels, that
a thread must traverse before acquiring the lock.



Two important properties:
•At least one thread trying to enter level ℓ succeeds.
•If more than one thread is trying to enter level ℓ, then at least
one is blocked (i.e., continues to wait at that level).
•The Peterson lock uses a two-element boolean flag array to
indicate whether a thread is trying to enter the critical section.
•Each thread must pass through n − 1 levels of “exclusion” to
enter its critical section.
•Each level has a distinct victim[] field used to “filter out” one
thread, excluding it from the next level.
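The Filter lock can be sketched for n threads as follows (thread ids 0..n−1 are passed explicitly; AtomicIntegerArray stands in for the book's plain int arrays so the spinning threads see up-to-date values):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Filter lock sketch: a thread climbs through levels 1..n-1. At each
// level it announces itself, politely records itself as the level's
// victim, and waits while some other thread occupies an equal or
// higher level and it is still the victim. Each level filters out at
// least one thread, so at most one reaches the critical section.
public class Filter {
    private final int n;
    private final AtomicIntegerArray level;   // level[t]: highest level thread t occupies
    private final AtomicIntegerArray victim;  // victim[L]: last thread to enter level L

    public Filter(int n) {
        this.n = n;
        level = new AtomicIntegerArray(n);    // all zero: nobody is contending
        victim = new AtomicIntegerArray(n);
    }

    public void lock(int me) {
        for (int L = 1; L < n; L++) {         // traverse the n-1 waiting rooms
            level.set(me, L);
            victim.set(L, me);                // let others go first
            boolean conflict = true;
            while (conflict && victim.get(L) == me) {
                conflict = false;             // re-scan the other threads
                for (int k = 0; k < n; k++)
                    if (k != me && level.get(k) >= L) { conflict = true; break; }
            }
        }
    }

    public void unlock(int me) {
        level.set(me, 0);                     // drop out of all levels
    }
}
```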



Fairness
• The starvation-freedom property guarantees that every thread
that calls lock() eventually enters the critical section.
A doorway section:
• Its execution interval D_A consists of a bounded number of
steps.
A waiting section:
• Its execution interval W_A may take an unbounded number of
steps.
• The requirement that the doorway section always finish in a
bounded number of steps is a strong requirement.



Lamport’s Bakery Algorithm
• It maintains the first-come-first served property by using a
distributed version of the number-dispensing machines often
found in bakeries:
• Each thread takes a number in the doorway
• Then waits until no thread with an earlier number is trying to
enter the critical section.
• Each time a thread acquires the lock, it generates a new label in
two steps.
• First, it reads all the other threads’ labels in any order.
• Second, it generates a label greater by one than the maximal
label it read.
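A sketch of the Bakery lock (thread ids 0..n−1 are passed explicitly; atomic arrays stand in for the book's plain flag and label arrays, and long labels sidestep the rollover issue discussed below):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;
import java.util.concurrent.atomic.AtomicLongArray;

// Bakery lock sketch. In the doorway a thread announces interest and
// takes a number one greater than the maximum it read. It then waits,
// for each other thread, until that thread either loses interest or
// holds a later ticket; ties on equal labels are broken by thread id.
public class Bakery {
    private final int n;
    private final AtomicIntegerArray flag;  // flag[t] == 1: thread t wants the lock
    private final AtomicLongArray label;    // label[t]: thread t's ticket number

    public Bakery(int n) {
        this.n = n;
        flag = new AtomicIntegerArray(n);
        label = new AtomicLongArray(n);
    }

    public void lock(int i) {
        flag.set(i, 1);                      // doorway: announce interest
        long max = 0;
        for (int k = 0; k < n; k++)          // read everyone's label...
            max = Math.max(max, label.get(k));
        label.set(i, max + 1);               // ...and take the next number
        for (int k = 0; k < n; k++) {        // wait for all earlier tickets
            while (k != i && flag.get(k) == 1
                   && (label.get(k) < label.get(i)
                       || (label.get(k) == label.get(i) && k < i))) {}
        }
    }

    public void unlock(int i) {
        flag.set(i, 0);                      // keep the label; drop the flag
    }
}
```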



Bounded Timestamps
• If a thread’s label field silently rolls over from a large number
to zero, then the first-come-first-served property no longer
holds.
• In the Bakery lock, labels act as timestamps:
• They establish an order among the contending threads.
• Informally, we need to ensure that if one thread takes a label
after another, then the later thread has the larger label.
• Inspecting the code for the Bakery lock, we see that a thread
needs two abilities

• To read the other threads’ timestamps (scan), and
• To assign itself a later timestamp (label).
• It is possible to construct such a wait-free concurrent time
stamping system, but the construction is long and rather
technical.
• Think of the range of possible timestamps as nodes of a directed
graph (called a precedence graph).
• An edge from node a to node b means that a is a later
timestamp than b.



• The timestamp order is irreflexive: there is no edge from any
node a to itself.
• The order is also antisymmetric
• If there is an edge from a to b, then there is no edge from b to a.
• Notice that we do not require that the order be transitive
• There can be an edge from a to b and from b to c, without
necessarily implying there is an edge from a to c.
• Think of assigning a timestamp to a thread as placing that
thread’s token on that timestamp’s node.



• A thread performs a scan by locating the other threads’ tokens
• It assigns itself a new timestamp by moving its own token to a
node a such that there is an edge from a to every other thread’s
node.



Lower Bounds on the Number of Locations
• The Bakery lock is succinct, elegant, and fair.
• The principal drawback is the need to read and write n distinct
locations, where n (which may be very large) is the maximum
number of concurrent threads.
• An object’s state is just the state of its fields.
• A thread’s local state is the state of its program counters and
local variables.
• A global state or system state is the state of all objects, plus
the local states of the threads.



Concurrent Objects
• The behavior of concurrent objects is best described through
their safety and liveness properties, often referred to as
correctness and progress.
• All notions of correctness for concurrent objects are based on
some notion of equivalence with sequential behavior.
Quiescent consistency
• It is appropriate for applications that require high performance
at the cost of placing relatively weak constraints on object
behavior.
Sequential consistency
• It is a stronger condition, often useful for describing low-level
systems such as hardware memory interfaces.



• Linearizability, even stronger, is useful for describing
higher-level systems composed from linearizable components.
Blocking:
• where the delay of any one thread can delay others
Nonblocking:
• where the delay of a thread cannot delay the others.
Concurrency and Correctness:
• A simple lock-based concurrent FIFO queue.
• The enq() and deq() methods synchronize via a mutual
exclusion lock.
• Because each method accesses and updates fields while holding
an exclusive lock, the method calls take effect sequentially.



A lock-based FIFO queue.
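The queue the caption refers to can be sketched as follows (the field layout follows the description above; exception choices are ours, made to keep the sketch self-contained):

```java
import java.util.concurrent.locks.ReentrantLock;

// Lock-based FIFO queue sketch: every method does all of its work
// while holding one exclusive lock, so calls take effect one after
// the other. Items occupy slots head..tail-1 (mod capacity).
public class LockBasedQueue<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final T[] items;
    private int head, tail;

    @SuppressWarnings("unchecked")
    public LockBasedQueue(int capacity) {
        items = (T[]) new Object[capacity];
    }

    public void enq(T x) {
        lock.lock();
        try {
            if (tail - head == items.length)
                throw new IllegalStateException("full");
            items[tail % items.length] = x;
            tail++;
        } finally {
            lock.unlock();      // always release, even on exception
        }
    }

    public T deq() {
        lock.lock();
        try {
            if (tail == head)
                throw new IllegalStateException("empty");
            T x = items[head % items.length];
            head++;
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```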



• Concurrent objects whose methods hold exclusive locks, and
therefore effectively execute one after the other, are less
desirable than ones with finer-grained locking or no locks at all.
• It is easier to reason about concurrent objects if we can
somehow map their concurrent executions to sequential ones, and
limit our reasoning to these sequential executions.



Sequential Objects
• An object in languages such as Java and C++ is a container for
data.
• Each object provides a set of methods which are the only way
to manipulate that object.
• Each object has a class, which defines the object’s methods
and how they behave.
• This kind of description divides naturally into a precondition,
describing the object’s state before the method is called, and a
postcondition, describing, once the method returns, the object’s
state and return value.
• A change to an object’s state is sometimes called a side effect.



• This style of documentation, called a sequential specification, is
so familiar that it is easy to overlook how elegant and powerful
it is.
• Defining objects in terms of preconditions and postconditions
makes perfect sense in a sequential model of computation where a
single thread manipulates a collection of objects.



A single-enqueuer/single-dequeuer FIFO queue.
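The queue the caption refers to can be sketched as follows (exception choices are ours). With only one enqueuer and one dequeuer it needs no critical sections: tail is written only by the enqueuer and head only by the dequeuer, and declaring both volatile makes each side's updates visible to the other:

```java
// Single-enqueuer/single-dequeuer FIFO queue sketch. Because each
// counter has exactly one writer, the unsynchronized tail++ and
// head++ cannot lose updates; volatile supplies the visibility.
public class WaitFreeQueue<T> {
    private volatile int head = 0, tail = 0;
    private final T[] items;

    @SuppressWarnings("unchecked")
    public WaitFreeQueue(int capacity) {
        items = (T[]) new Object[capacity];
    }

    public void enq(T x) {           // called only by the enqueuer
        if (tail - head == items.length)
            throw new IllegalStateException("full");
        items[tail % items.length] = x;
        tail++;                      // single writer of tail
    }

    public T deq() {                 // called only by the dequeuer
        if (tail == head)
            throw new IllegalStateException("empty");
        T x = items[head % items.length];
        head++;                      // single writer of head
        return x;
    }
}
```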



Quiescent Consistency
• Method calls take time.
• A method call is the interval that starts with an invocation
event and ends with a response event.
• Method calls by concurrent threads may overlap, while
method calls by a single thread are always sequential
• We say a method call is pending if its call event has occurred,
but not its response event.
• The object version of a read–write memory location is called a
register



Sequential Consistency
• The order in which a single thread issues method calls is
called its program order.
• Sequential consistency requires that method calls act as if they
occurred in a sequential order consistent with program order.
• That is, in any concurrent execution, there is a way to order
the method calls sequentially so that they,
(1) are consistent with program order, and
(2) meet the object’s sequential specification.
• There may be more than one order satisfying this condition.



Linearizability
• The principal drawback of sequential consistency is that it is not
compositional:
• The result of composing sequentially consistent components is
not itself necessarily sequentially consistent.
• Each method call should appear to take effect instantaneously
at some moment between its invocation and response.
• This principle states that the real-time behavior of method
calls must be preserved.
• We call this correctness property linearizability.
• Every linearizable execution is sequentially consistent, but not
vice versa.



Linearization Points
• One way to show that a concurrent object implementation is
linearizable is to identify, for each method, a linearization point.
• For lock-based implementations, each method’s critical section
can serve as its linearization point.
• For implementations that do not use locking, the linearization
point is typically a single step where the effects of the method
call become visible to other method calls.
• The single-enqueuer/single-dequeuer queue
• This implementation has no critical sections, and yet we can
identify its linearization points.



Formal Definitions
• A concurrent object is linearizable if each method call appears to
take effect instantaneously at some moment between that method’s
invocation and return events.
• An execution of a concurrent system is modeled by a history,
• A finite sequence of method invocation and response events.
• A subhistory of a history H is a subsequence of the events of H.



Linearizability
• The basic idea behind linearizability is that every concurrent
history is equivalent, in the following sense, to some sequential
history.
• The basic rule is that if one method call precedes another,
• Then the earlier call must have taken effect before the later call.
• By contrast, if two method calls overlap, then their order is
ambiguous, and we are free to order them in any convenient
way.



Compositional Linearizability
• Linearizability is compositional
• Compositionality is important because it allows concurrent
systems to be designed and constructed in a modular fashion;
• linearizable objects can be implemented, verified, and
executed independently.
• A concurrent system based on a noncompositional correctness
property must either rely on a centralized scheduler for all
objects, or else satisfy additional constraints placed on objects



The Nonblocking Property
• Linearizability is a nonblocking property:
• A pending invocation of a total method is never required to wait
for another pending invocation to complete.
• Linearizability is an appropriate correctness condition for
systems where concurrency and real-time response are
important.
• The nonblocking property does not rule out blocking in
situations where it is explicitly intended.



Progress Conditions
• Linearizability’s nonblocking property states that any pending
invocation has a correct response
• It does not talk about how to compute such a response.
• A lock-based implementation is blocking, because an unexpected
delay by one thread can prevent others from making progress.
• Unexpected thread delays are common in multiprocessors.
• A method is wait-free if it guarantees that every call finishes
its execution in a finite number of steps.
• It is bounded wait-free if there is a bound on the number of
steps a method call can take.
• This bound may depend on the number of threads.



The Java Memory Model
• The Java programming language does not guarantee
linearizability, or even sequential consistency,
• when reading or writing fields of shared objects.
• In a single-threaded computation, such reorderings are
invisible to the optimized program, but in a multithreaded
computation
• one thread can spy on another and observe out-of-order
executions.
• The Java memory model satisfies the Fundamental Property
of relaxed memory models



• If a program’s sequentially consistent executions follow
certain rules
• Then every execution of that program in the relaxed model
will still be sequentially consistent.
• Double-checked locking is a once-common programming idiom
that falls victim to Java’s lack of sequential consistency.
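The idiom, in its repaired form, can be sketched as follows (class name is ours). Since the Java 5 memory model, declaring the field volatile makes the idiom safe; without volatile, one thread could observe a reference to a partially constructed object:

```java
// Double-checked locking sketch: the first, lock-free check makes the
// common case cheap; the second check under the lock guarantees only
// one instance is created. The volatile modifier is essential -- it
// forbids the reordering that made the original idiom broken.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```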
• Usually, the term “synchronization” implies some form of
atomicity or mutual exclusion.



• In Java, however, it also implies reconciling a thread’s
working memory with the shared memory.



Locks and Synchronized Blocks
• A thread can achieve mutual exclusion either by entering a
synchronized block or method, which acquires an implicit lock,
or by acquiring an explicit lock.
• Both approaches have the same implications for memory
behavior
• If all accesses to a particular field are protected by the same
lock, then reads and writes to that field are linearizable.



Volatile Fields
• Volatile fields are linearizable.
• Reading a volatile field is like acquiring a lock
• The working memory is invalidated and the volatile field’s
current value is reread from memory.
• Writing a volatile field is like releasing a lock: the volatile
field is immediately written back to memory
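The guarantee can be sketched like this (class and field names are ours): a plain field written before the volatile store is guaranteed visible to any thread that reads the volatile field afterward, so the reader below always terminates and always sees the payload:

```java
// Volatile visibility sketch: the writer's volatile store to done
// acts like a lock release, and the reader's volatile load acts like
// a lock acquire, so the plain write to payload is safely published.
// With a plain (non-volatile) boolean, awaitResult could spin forever.
public class VolatileFlag {
    private volatile boolean done;
    private int payload;            // plain field, published via done

    public void finish(int value) {
        payload = value;            // ordinary write...
        done = true;                // ...made visible by the volatile write
    }

    public int awaitResult() {
        while (!done) {}            // volatile read on every iteration
        return payload;             // happens-before guarantees the payload
    }
}
```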



Final Fields
• A field declared to be final cannot be modified once it has been
initialized.
• An object’s final fields are initialized in its constructor. If the
constructor follows certain simple rules (in particular, it must not
release a reference to the object before it returns), then the
correct values of its final fields are visible to other threads
without synchronization.
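A constructor with a final field, following that rule, might look like this (the class is an illustrative example of ours, not from the slides):

```java
// Final-field sketch: x and y are assigned only in the constructor,
// and "this" does not escape before the constructor returns. Any
// thread that later obtains a reference to a FinalPoint is guaranteed
// to see x and y correctly initialized, without synchronization.
public class FinalPoint {
    private final int x;
    private final int y;

    public FinalPoint(int x, int y) {
        this.x = x;     // final fields written here, and only here
        this.y = y;     // no leak of "this" to another thread
    }

    public int getX() { return x; }
    public int getY() { return y; }
}
```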

