
Question paper: Trust yourself and then answer.

1. Predict the code behaviour. (5 marks) (A sketch of the kind of code the choices imply follows the options.)
a. The program will end in 1 minute.
b. The program will end in 1 microsecond.
c. The program will end in 1 second.
d. The program may never end.
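The fragment this question refers to is not reproduced in this copy. The sketch below is only an assumption about the kind of program the answer choices suggest: a worker thread spinning on a plain (non-volatile) boolean flag that the main thread sets after one second. All names are hypothetical.

// Hypothetical reconstruction: without volatile or synchronization, the write
// to the flag may never become visible to the worker, so the program may never end.
public class StopFlagDemo {

    private static boolean stop = false;   // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until the flag becomes visible (it may never be)
            }
        });
        worker.start();

        Thread.sleep(1000);  // one second
        stop = true;         // this write may stay invisible to the worker
        worker.join();
    }
}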

2. Identify any scalability problem with the simple cache (Memorizer1) below, ignoring cache purging for now. (5 marks) (A sketch of a typical Memorizer1 follows the options.)
a. You must be joking, there are no scalability problems.
b. The HashMap is well protected with synchronization, so it is absolutely safe.
c. compute() throws InterruptedException, so if there is a scalability problem it will throw an exception.
d. All of the above.
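The Memorizer1 listing itself is not reproduced in this copy. The sketch below assumes it follows the classic single-lock memoizer design (a plain HashMap guarded by making the whole compute() method synchronized); the details are an assumption, not the paper's actual listing.

import java.util.HashMap;
import java.util.Map;

// Assumed shape of the cached computation.
interface Computable<A, V> {
    V compute(A arg) throws InterruptedException;
}

// Only one thread at a time can even look up a value, and the single lock is
// held for the entire (possibly long) computation.
public class Memorizer1<A, V> implements Computable<A, V> {

    private final Map<A, V> cache = new HashMap<>();
    private final Computable<A, V> c;

    public Memorizer1(Computable<A, V> c) {
        this.c = c;
    }

    @Override
    public synchronized V compute(A arg) throws InterruptedException {
        V result = cache.get(arg);
        if (result == null) {
            result = c.compute(arg);   // long-running work done while holding the lock
            cache.put(arg, result);
        }
        return result;
    }
}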

3. Identify any scalability problem with transferMoney(), and assume both A and B have enough money. (5 marks) (A sketch of a typical transferMoney() follows the options.)
a. No scalability problem, because the accounts have to be locked to deny a concurrent transfer.
b. Throwing an exception is expensive, and exceptions should never be thrown.
c. The code is not scalable because it acquires too many locks.
d. A and B would rather exchange cash for such small amounts.
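The transferMoney() listing is not reproduced in this copy either. The sketch below assumes the usual nested-lock version that locks both accounts for the duration of the transfer; the Account class and the exception type are assumed names.

// Every transfer involving either account is serialized on these locks, and a
// fixed "from, then to" acquisition order can deadlock when two transfers run
// in opposite directions at the same time.
public class Account {

    private long balance;

    public Account(long balance) { this.balance = balance; }

    long getBalance()        { return balance; }
    void debit(long amount)  { balance -= amount; }
    void credit(long amount) { balance += amount; }

    public static void transferMoney(Account from, Account to, long amount)
            throws InsufficientFundsException {
        synchronized (from) {
            synchronized (to) {
                if (from.getBalance() < amount) {
                    throw new InsufficientFundsException();
                }
                from.debit(amount);
                to.credit(amount);
            }
        }
    }

    public static class InsufficientFundsException extends Exception {
    }
}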

4. Given the condition that “SomeOtherClass.someOtherMethod(this)” needs to execute before setting the flag, is it possible to get rid of volatile? (5 marks) (A sketch follows the options.)
a. Yes. The volatile adds no value within synchronization.
b. Yes. The volatile adds the EXTRA overhead of syncing the data (the boolean flag) to and from memory, even though that syncing is already done at the beginning and the end of the synchronized block.
c. All of the above.
d. None of the above.
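A sketch of the situation the question describes, assuming the flag is only ever read and written inside synchronized blocks after SomeOtherClass.someOtherMethod(this) has run; the body of SomeOtherClass is not given in the paper, so it is stubbed out here.

// The synchronized blocks already provide the visibility and ordering
// guarantees, so volatile only adds its extra memory-sync overhead.
public class FlagHolder {

    private volatile boolean flag;   // the keyword the question asks about removing

    public synchronized void initialize() {
        SomeOtherClass.someOtherMethod(this);  // must run before the flag is set
        flag = true;
    }

    public synchronized boolean isInitialized() {
        return flag;
    }
}

// Stand-in for the class named in the question; its real body is not shown.
class SomeOtherClass {
    static void someOtherMethod(FlagHolder holder) {
        // details not given in the paper
    }
}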

5. The put-if-absent idiom is not present in the java.util collection library. Here is my attempt to add an atomic put-if-absent operation to an ArrayList class. Is this implementation right? (2 marks) (A sketch of such an attempt follows the options.)
a. Perfect. We have the synchronized wrapper and are additionally synchronizing the putIfAbsent method.
b. No. The implementation is wrong because the lock is on the wrong object and hence it does not guarantee mutual exclusion.
c. None of the above. (Provide reason.)
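The attempted listing is not reproduced in this copy. The sketch below assumes the common variant that wraps the list with Collections.synchronizedList() but synchronizes putIfAbsent() on the helper object itself; treat it as an assumption about what the paper shows.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// The wrapper's internal operations lock the list, while putIfAbsent() locks
// the helper, so the two locks do not exclude each other and the
// check-then-act sequence is not atomic.
public class ListHelper<E> {

    public final List<E> list = Collections.synchronizedList(new ArrayList<E>());

    public synchronized boolean putIfAbsent(E x) {   // locks the helper, not the list
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}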

6. Identify any problems with the code below. (5 marks)
a. Perfect, almost poetic piece of code.
b. Still don't see any problem. Perfect, almost poetic piece of code.
c. On a second careful analysis... perfect, almost poetic piece of code.
d. All of the above.

7. When is object pooling justified? (1 mark)
a. Never. It results in old-to-new pointers.
b. All the time. Pools are fashionable and provide bragging rights to the designer.
c. Sometimes. When the overhead of creating the object is greater than that of maintaining it.
d. None of the above. (Provide reason.)

8. “-Xnoclassgc” is a JVM option. Tick the right option(s) that apply to it. (2 marks)
a. It is a performance-optimizing option that disables permanent space collection.
b. It may cause memory thrashing if the permanent space is not configured properly.
c. In combination with hot deployment in an application server, it is particularly dangerous.
d. A copying collector is used to collect it.

9. What are the benefits of immutable objects? Tick all that apply. (2 marks) (A sketch of an immutable class follows the options.)
a. They are thread safe.
b. They aid garbage collection by preventing old-to-young pointers.
c. However, they should always have a mutable companion class to aid multi-step operations.
d. They aid security.
e. They aid memory allocation.
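For reference, a minimal sketch of what an immutable class looks like; the Money name and its fields are invented for this example.

// The class is final, all fields are final and private, they are assigned
// exactly once in the constructor, and "mutating" operations return a new
// instance instead of changing this one. No thread can ever observe a
// partially constructed or modified Money object.
public final class Money {

    private final String currency;
    private final long amountInCents;

    public Money(String currency, long amountInCents) {
        this.currency = currency;
        this.amountInCents = amountInCents;
    }

    public Money add(long cents) {
        return new Money(currency, amountInCents + cents);   // new object, old one untouched
    }

    public String getCurrency()    { return currency; }
    public long getAmountInCents() { return amountInCents; }
}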

Look at the code fragment below and answer the 10th and 11th questions.
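The fragment itself is not reproduced in this copy. The sketch below is a reconstruction of the kind of reference-queue pattern questions 10 to 12 describe: the names Image3, NativeImage3, dispose(), refList and refQueue come from the question text, while everything else (the use of PhantomReference, CopyOnWriteArrayList, the native handle) is an assumption.

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Image3 is the client-visible object; NativeImage3 is a reference object that
// carries the native handle and is registered with refQueue. The static refList
// keeps the NativeImage3 reference objects strongly reachable so they survive
// until cleanup; the JVM enqueues a NativeImage3 on refQueue once its Image3
// referent has been garbage collected.
public class Image3 {

    static final ReferenceQueue<Image3> refQueue = new ReferenceQueue<>();
    static final List<NativeImage3> refList = new CopyOnWriteArrayList<>();

    private final NativeImage3 nativeImage;

    public Image3(long nativeHandle) {
        this.nativeImage = new NativeImage3(this, nativeHandle);
        refList.add(nativeImage);
    }

    // Explicit cleanup path chosen by the client.
    public void dispose() {
        nativeImage.dispose();
    }

    static class NativeImage3 extends PhantomReference<Image3> {

        private long handle;

        NativeImage3(Image3 referent, long handle) {
            super(referent, refQueue);
            this.handle = handle;
        }

        void dispose() {
            freeNativeResource(handle);
            handle = 0;
            refList.remove(this);   // this reference no longer needs monitoring
        }

        private static void freeNativeResource(long handle) {
            // stand-in for the real native call, which is not shown in the paper
        }
    }

    // A background thread would drain refQueue and dispose() any NativeImage3
    // whose Image3 was collected without an explicit dispose() call.
}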
10. What is the importance of the variable “refList”? (5 marks)
a. refList does not pay the price of synchronization.
b. refList is a static variable and hence it is thread safe.
c. refList is a static variable and hence in the root set.
d. refList is useless because it adds an overhead.

11. When does the reference queue (“refQueue”) get populated? (2 marks)
a. When the JVM starts.
b. When GC kicks in.
c. When the JVM crashes.
d. When Windows crashes.
e. When refList is collected.
f. When refQueue is populated.
g. When Image3 is collected.
h. When NativeImage3 is collected.

12. In NativeImage3, the dispose() method calls refList.remove(this). When does it get called? What is the significance of this call? When is “refQueue” populated in this case? (5 marks)
a. It gets called from finalize. It has no significance. “refQueue” is not populated.
b. It gets called when the client calls Image3.dispose(). Its significance is to avoid finalize. “refQueue” is populated when GC kicks in.
c. It gets called when the client calls NativeImage3.dispose(). Its significance is to avoid finalize. “refQueue” is populated when GC kicks in.
d. It gets called when the client calls NativeImage3.dispose(). Its significance is to avoid finalize. “refQueue” is populated when a major collection happens.
e. It gets called when the client calls Image3.dispose(). Its significance is to remove the corresponding reference object, because the client has explicitly removed the referent and this object no longer requires monitoring. “refQueue” is populated when GC kicks in.
f. It gets called when the client calls Image3.dispose(). Its significance is to remove the corresponding reference object, because the client has explicitly removed the referent and this object no longer requires monitoring. “refQueue” is populated when NativeImage3 is collected.
g. It gets called when the client calls Image3.dispose(). Its significance is to remove the corresponding reference object, because the client has explicitly removed the referent and this object no longer requires monitoring. “refQueue” is populated when Image3 is collected.
h. It gets called when the client calls Image3.dispose(). Its significance is to remove the corresponding reference object, because the client has explicitly removed the referent and this object no longer requires monitoring. “refQueue” is not populated in this case.
i. It gets called when the client calls NativeImage3.dispose(). Its significance is to remove the corresponding reference object, because the client has explicitly removed the referent and this object no longer requires monitoring. “refQueue” is not populated in this case.

13. What cache topology would be optimal for this application?
Caching 10 GB of financial portfolio data
Read-heavy, updated nightly
Several hundred users
Several thousand requests per minute
(Describe the cache topology and give a 2-line explanation as to why.) (5 marks)
14. Tick the correct options for memory mapping of files. (2 marks) (A usage sketch follows the options.)
a. Memory mapping is very fast because it uses virtual memory.
b. Memory mapping is very fast because it only copies the exact amount of data that is required, and extra space is not wasted.
c. Memory mapping is very fast because the memory allocated is outside the Java heap.
d. Memory mapping is done in multiples of pages, and lack of enough physical memory may lead to thrashing.
e. Memory mapping is done in multiples of pages, and lack of enough physical memory may lead to unmapping.
f. Memory mapping is done in multiples of pages, and lack of enough physical memory may lead to overclocking.
g. Memory mapping is done in multiples of 8192 kb, and lack of enough physical memory may lead to overclocking.
h. Memory mapping is done in multiples of 8192 kb, and lack of enough physical memory may lead to survivor spaces being overrun.
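For reference, a minimal sketch of memory-mapping a file from Java with FileChannel.map(); the input file name is hypothetical.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Maps a whole file read-only. The mapping lives outside the Java heap and is
// paged in lazily by the operating system: touching the buffer faults whole
// pages into physical memory, which is where thrashing can come from when
// physical memory is short.
public class MappedFileRead {

    public static void main(String[] args) throws IOException {
        Path path = Paths.get("portfolio.dat");   // hypothetical input file
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long checksum = 0;
            while (buffer.hasRemaining()) {
                checksum += buffer.get();          // each access may fault in a page
            }
            System.out.println("checksum = " + checksum);
        }
    }
}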
