
Multi Threading In JAVA

What is Thread in Java?
A thread is an independent path of execution within a program. Threads use the CPU efficiently,
allowing several tasks to run concurrently; multithreading reduces idle time of the CPU, which improves the performance of the
application.
Thread states: a thread can be in one of the following states:

NEW
A thread that has not yet started is in this state.

RUNNABLE
A thread executing in the Java virtual machine is in this state.

BLOCKED
A thread that is blocked waiting for a monitor lock is in this state.

WAITING
A thread that is waiting indefinitely for another thread to perform a particular action is in this state.

TIMED_WAITING
A thread that is waiting for another thread to perform an action for up to a specified waiting time is in this
state.

TERMINATED
A thread that has exited is in this state.
What is the difference between Thread and Process in Java?
The main difference between Thread and Process in Java is that threads are part of a process, i.e. one
process can spawn multiple threads. Another major difference between Process and Thread is
that each process has its own separate memory space, whereas threads from the same process share the same
memory space.
There are two ways of implementing threading in Java:
1) By extending the java.lang.Thread class, or
2) By implementing the java.lang.Runnable interface.
The public void run() method is declared in the Runnable interface, and
since java.lang.Thread implements Runnable, the Thread class gets this method automatically.
So now the interview question: which way of implementing a thread is better, extending the Thread class or
implementing the Runnable interface?
Implementing Runnable is usually better, because in Java we can only extend one class; if we
extend the Thread class we cannot extend any other class, while by implementing the Runnable interface we
still keep that option open.
The second reason is more of an OOP argument: we should extend a
class only to provide some new feature or functionality. If the purpose is just to supply a run() method
that defines the task, it is better to implement the Runnable interface.
What is a volatile variable in Java?
By making a variable volatile using the volatile keyword in Java, the application
programmer ensures that its value is always read from main memory and that threads do not
use a locally cached copy of that variable.
The volatile keyword can only be applied to a field; it cannot be applied to a class or a method.
Any variable whose latest value must be visible to multiple threads (and which is not otherwise
synchronized) should be made volatile, to ensure that all threads see the latest value of that variable.
The volatile keyword in Java is an instruction to the compiler and to threads not to
cache the value of this variable and to always read it from main memory. So if you want to share
a variable whose read and write operations are already atomic by implementation, e.g. reads and
writes of an int or a boolean variable, you can simply declare it as a volatile variable.
'A write to a volatile field happens-before every subsequent read of that same field,
known as the volatile variable rule.'
The Java volatile keyword also guarantees visibility and ordering: since Java 5, a write to a
volatile variable happens-before every subsequent read of that volatile variable.
A volatile variable can be used as an alternative way of achieving synchronization in Java in
some cases, such as visibility: with a volatile variable it is guaranteed that all reader threads will
see the updated value of the volatile variable once the write operation completes; without the volatile
keyword, different reader threads may see different values.
In Java, reads and writes are atomic for all variables declared using the volatile keyword
(including long and double variables).
Reads and writes are atomic for reference variables and for most primitive variables (all
types except long and double) even without the use of the volatile keyword.
You can use a volatile variable if you want to read and write long and double variables
atomically. long and double are both 64-bit data types, and by default writing a long or a
double is not atomic and is platform dependent. Many platforms perform a write to a long or
double variable in two steps, writing 32 bits in each step; because of this it is possible for a thread to see
32 bits from two different writes. You can avoid this issue by making the long or double variable
volatile in Java.
The Java volatile keyword does not mean atomic. It is a common misconception that after declaring
a variable volatile, ++ will be atomic; to make the operation atomic you still need to ensure exclusive
access using a synchronized method or block in Java (read-modify-write and check-then-act sequences are
not atomic).
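As an illustration, here is a minimal sketch (class and field names are made up for this example) showing why a volatile counter can still lose updates under concurrent increments, and how AtomicInteger or synchronization avoids that:

    import java.util.concurrent.atomic.AtomicInteger;

    public class CounterDemo {
        private static volatile int volatileCount = 0;                    // visible to all threads, but ++ is not atomic
        private static final AtomicInteger atomicCount = new AtomicInteger(); // atomic increments

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 10_000; i++) {
                    volatileCount++;                // read-modify-write: updates can be lost
                    atomicCount.incrementAndGet();  // atomic: no updates lost
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("volatile counter : " + volatileCount);     // often less than 20000
            System.out.println("atomic counter   : " + atomicCount.get()); // always 20000
        }
    }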
1. Volatile does not acquire any lock on a variable or object,
but synchronization acquires a lock on the method or block in which
it is used.
2. Volatile variables are not cached, but variables used
inside a synchronized method or block can be cached.
3. Volatile will never create a deadlock in a program, as
volatile never obtains any kind of lock. But if
synchronization is not done properly, we might end up creating a
deadlock in the program.
4. Synchronization may cost us performance, as one thread
might be waiting for another thread to release a lock on an object,
whereas volatile is never expensive in terms of performance.
Why call the start() method of a thread if start() calls run() in turn? Or,
what is the difference between calling start() and calling run() on a Java thread?
The main difference is that when a program calls the start() method, a new thread is created and the code
inside run() is executed in that new thread, while if you call run() directly no new
thread is created and the code inside run() executes on the current thread. Most of the time
calling run() is a bug or programming mistake, because the caller intended to call start() to
create a new thread; this error can be detected by many static analysis tools such as FindBugs.
If you want to perform a time-consuming task, always call the start() method, otherwise
your main thread will be stuck performing the time-consuming task if you call run()
directly. Another difference between start and run is that you cannot
call start() twice on the same thread object: once started, a second call to start() will
throw IllegalThreadStateException, while you can call run() multiple times.
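A minimal sketch (class name is illustrative) contrasting the two calls:

    public class StartVsRun {
        public static void main(String[] args) {
            Thread t = new Thread(() ->
                    System.out.println("executed by " + Thread.currentThread().getName()));
            t.run();    // runs on the current thread: prints "executed by main"

            Thread t2 = new Thread(() ->
                    System.out.println("executed by " + Thread.currentThread().getName()));
            t2.start(); // runs on a new thread: prints something like "executed by Thread-1"
        }
    }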
Runnable is an interface and defines only one method, called run(). When a thread is started in Java by
using the Thread.start() method, it calls the run() method of the Runnable task which was passed to the Thread during
creation.
How can you say thread behaviour is unpredictable? (Important)
The answer is quite simple: thread behaviour is
unpredictable because the execution of threads depends on the thread
scheduler, whose scheduling policy (and therefore the order and timing of execution) may differ from JVM to JVM and from run to run.
How can you ensure that all threads started from main end in the
order in which they started, and that main ends last?
(Important)
Interviewers ask this to probe your knowledge of Thread methods, so
this is the time to prove your point by answering correctly. We can use the join()
method to ensure that all threads started from main finish in the order in
which they started, and that main finishes last. In other words, join() waits for
the thread it is called on to die. Calling join() internally calls join(0), which waits indefinitely.
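A short sketch (thread bodies are illustrative) of using join() to enforce this ordering:

    public class JoinDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> System.out.println("t1 finished"));
            Thread t2 = new Thread(() -> System.out.println("t2 finished"));
            t1.start();
            t1.join();   // main waits here until t1 dies
            t2.start();
            t2.join();   // main waits here until t2 dies
            System.out.println("main finished last");
        }
    }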
Callable vs Runnable
Though both the Callable and Runnable interfaces are designed to represent a task that can be
executed by any thread, there are significant differences between them. The major
difference is that Callable can return the result of the
operation performed inside its call() method, which was one of the limitations of the Runnable interface.
Another significant difference between Runnable and Callable is the ability to
throw a checked exception: the Callable interface can throw checked exceptions because its call()
method is declared with throws Exception.
Ways of Starting a Thread
1. Extending Thread class
    class Test extends Thread {
        public void run() {
            for (int i = 0; i < 10; i++) {
                System.out.println(i);
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class App {
        public static void main(String[] args) {
            Test t1 = new Test();
            Test t2 = new Test();
            t1.start();
            t2.start();
        }
    }
2. Implement Runnable
    class Test implements Runnable {
        public void run() {
            for (int i = 0; i < 10; i++) {
                System.out.println(i);
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class App {
        public static void main(String[] args) {
            Thread t1 = new Thread(new Test());
            Thread t2 = new Thread(new Test());
            t1.start();
            t2.start();
        }
    }
3. Directly while creating New Thread Object
    class App {
        public static void main(String[] args) {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 10; i++) {
                        System.out.println(i);
                        try {
                            Thread.sleep(100);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
            });
            t1.start();
        }
    }
How do threads communicate with each other? Or how do you share
data between two threads in Java?
Answer. This is a must-know question; you will most probably face it in
almost every interview.
Threads can communicate with each other by using the wait(), notify() and notifyAll() methods.
Java inter-thread communication example:
wait() and notify() must be called from a synchronized context. Another important thing to
keep in mind when calling them is to use a loop to check the waiting condition instead of an if block. For
example, if a consumer thread that is waiting because the shared queue is empty wakes
up due to a false alarm and tries to take something from the queue without re-checking
whether the queue is empty, an unexpected result is possible.
    package concurrency;

    import java.util.LinkedList;
    import java.util.Queue;
    import org.apache.log4j.Logger;

    public class InterThreadCommunicationExample {
        public static void main(String args[]) {
            final Queue<Integer> sharedQ = new LinkedList<Integer>();
            Thread producer = new Producer(sharedQ);
            Thread consumer = new Consumer(sharedQ);
            producer.start();
            consumer.start();
        }
    }

    public class Producer extends Thread {
        private static final Logger logger = Logger.getLogger(Producer.class);
        private final Queue<Integer> sharedQ;

        public Producer(Queue<Integer> sharedQ) {
            super("Producer");
            this.sharedQ = sharedQ;
        }

        @Override
        public void run() {
            for (int i = 0; i < 4; i++) {
                synchronized (sharedQ) {
                    // waiting condition - wait while the queue is full
                    while (sharedQ.size() >= 1) {
                        try {
                            logger.debug("Queue is full, waiting");
                            sharedQ.wait();
                        } catch (InterruptedException ex) {
                            ex.printStackTrace();
                        }
                    }
                    logger.debug("producing : " + i);
                    sharedQ.add(i);
                    sharedQ.notify();
                }
            }
        }
    }

    public class Consumer extends Thread {
        private static final Logger logger = Logger.getLogger(Consumer.class);
        private final Queue<Integer> sharedQ;

        public Consumer(Queue<Integer> sharedQ) {
            super("Consumer");
            this.sharedQ = sharedQ;
        }

        @Override
        public void run() {
            while (true) {
                synchronized (sharedQ) {
                    // waiting condition - wait until the queue is not empty
                    while (sharedQ.size() == 0) {
                        try {
                            logger.debug("Queue is empty, waiting");
                            sharedQ.wait();
                        } catch (InterruptedException ex) {
                            ex.printStackTrace();
                        }
                    }
                    int number = sharedQ.poll();
                    logger.debug("consuming : " + number);
                    sharedQ.notify();
                    // termination condition
                    if (number == 3) { break; }
                }
            }
        }
    }
Why are wait(), notify() and notifyAll() called from a synchronized block or method
in Java?
If you don't call them from a synchronized context, your code will
throw IllegalMonitorStateException.
We use the wait(), notify() and notifyAll() methods mostly for inter-thread communication in
Java. One thread waits after checking a condition; for example, in the classic Producer-Consumer
problem, the Producer thread waits if the buffer is full, and the Consumer
thread notifies the Producer thread after it creates space in the buffer by consuming an
element.
Calling notify() or notifyAll() issues a notification to one or all waiting
threads that a condition has changed, and once the notifying thread leaves its synchronized
block, all the threads that were waiting compete for the object lock on which they were
waiting; the lucky thread returns from the wait() method after reacquiring the lock and
proceeds further.
Let's divide the whole operation into steps to see the possibility of a race condition
between the wait() and notify() methods in Java, using the Producer-Consumer
example to understand the scenario better:
1. The Producer thread tests the condition (buffer full or not) and confirms that it must
wait (after finding the buffer is full).
2. The Consumer thread, at the same time, sets the condition after consuming an element
from the buffer (they can both work at once because no synchronized lock is held on the shared
object).
3. The Consumer thread calls the notify() method; this goes unheard since the Producer
thread is not yet waiting.
4. The Producer thread calls the wait() method and goes into the waiting state.
So, due to this race condition, we could potentially lose a notification, and with a one-element
buffer the Producer thread would wait forever and the program would hang. Requiring the lock to be
held makes the condition check and the call to wait() atomic with respect to the notifying thread, which is why
wait() and notify() must be called from a synchronized context.
https://stackoverflow.com/questions/10684111/can-notify-wake-up-the-same-threadmultiple-times
wait(long timeout) - important
public final void wait(long timeout)
throws InterruptedException
Causes the current thread to wait until either another thread invokes the notify() method or
the notifyAll() method for this object, or a specified amount of time has elapsed.
The current thread must own this object's monitor.
This method causes the current thread (call it T) to place itself in the wait set for this object and then to relinquish any
and all synchronization claims on this object. Thread T becomes disabled for thread scheduling purposes and lies
dormant until one of four things happens:
- Some other thread invokes the notify method for this object and thread T happens to be arbitrarily
chosen as the thread to be awakened.
- Some other thread invokes the notifyAll method for this object.
- Some other thread interrupts thread T.
- The specified amount of real time has elapsed, more or less. If timeout is zero, however, then real time is
not taken into consideration and the thread simply waits until notified.
The thread T is then removed from the wait set for this object and re-enabled for thread scheduling. It then competes
in the usual manner with other threads for the right to synchronize on the object; once it has gained control of the
object, all its synchronization claims on the object are restored to the status quo ante - that is, to the situation as of
the time that the wait method was invoked. Thread T then returns from the invocation of the wait method. Thus,
on return from the wait method, the synchronization state of the object and of thread T is exactly as it was when
the wait method was invoked.
A thread can also wake up without being notified, interrupted, or timing out, a so-called spurious wakeup. While this
will rarely occur in practice, applications must guard against it by testing for the condition that should have caused the
thread to be awakened, and continuing to wait if the condition is not satisfied. In other words, waits should always
occur in loops, like this one:
    synchronized (obj) {
        while (<condition does not hold>)
            obj.wait(timeout);
        ... // Perform action appropriate to condition
    }
What is the difference between notify and notifyAll in Java?
This is another tricky question from core Java interviews. Since multiple threads can wait on a
single monitor lock, the Java API designers provide methods to inform either only one of them or all of
them once the waiting condition changes, but they do not let you choose which one.
The notify() method doesn't provide any way to choose a particular thread, which is why it is
only useful when you know that only one thread is waiting. On the other
hand, notifyAll() sends a notification to all waiting threads and allows them to compete for the lock,
which ensures that at least one thread will proceed further.
Why are wait, notify and notifyAll in the Object class and not in the Thread
class?
One obvious reason is that Java provides locks at the object level, not at the thread level. Every object
has a lock, which is acquired by a thread. Now, if a thread needs to wait for a certain lock, it makes sense to call
wait() on that object rather than on the thread. Had the wait() method been declared on the Thread class, it would
not have been clear which lock the thread was waiting for. In short, since wait, notify and notifyAll operate at the lock
level, it makes sense to define them on the Object class, because the lock belongs to the object.
Thread Pool
In all of the previous examples, there's a close connection between the task being done by a new thread, as
defined by its Runnable object, and the thread itself, as defined by a Thread object. This works well for small
applications, but in large-scale applications, it makes sense to separate thread management and creation from
the rest of the application. Objects that encapsulate these functions are known as executors. The following
subsections describe executors in detail.
- Executor Interfaces define the three executor object types.
- Thread Pools are the most common kind of executor implementation.
- Fork/Join is a framework (new in JDK 7) for taking advantage of multiple processors.
Executor Interfaces
The java.util.concurrent package defines three executor interfaces:
- Executor, a simple interface that supports launching new tasks.
- ExecutorService, a subinterface of Executor, which adds features that help manage the
lifecycle, both of individual tasks and of the executor itself.
- ScheduledExecutorService, a subinterface of ExecutorService, which supports future and/or
periodic execution of tasks.
Typically, variables that refer to executor objects are declared as one of these three interface types, not with an
executor class type.
The Executor Interface
The Executor interface provides a single method, execute, designed to be a drop-in replacement for a
common thread-creation idiom. If r is a Runnable object, and e is an Executor object you can replace
(new Thread(r)).start();
with
e.execute(r);
However, the definition of execute is less specific. The low-level idiom creates a new thread and launches it
immediately. Depending on the Executor implementation, execute may do the same thing, but is more
likely to use an existing worker thread to run r, or to place r in a queue to wait for a worker thread to become
available. (We'll describe worker threads in the section on Thread Pools.)
The executor implementations in java.util.concurrent are designed to make full use of the more
advanced ExecutorService and ScheduledExecutorService interfaces, although they also work with
the base Executor interface.
The ExecutorService Interface
The ExecutorService interface supplements execute with a similar, but more versatile submit method.
Like execute, submit accepts Runnable objects, but also accepts Callable objects, which allow the
task to return a value. The submit method returns a Future object, which is used to retrieve
the Callable return value and to manage the status of both Callable and Runnable tasks.
    package com.concretepage.util.concurrent;

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CallableDemo {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            ExecutorService service = Executors.newSingleThreadExecutor();
            SumTask sumTask = new SumTask(20);
            Future<Integer> future = service.submit(sumTask);
            Integer result = future.get();
            System.out.println(result);
            service.shutdown(); // allow the executor's worker thread to exit
        }
    }

    class SumTask implements Callable<Integer> {
        private int num = 0;

        public SumTask(int num) {
            this.num = num;
        }

        @Override
        public Integer call() throws Exception {
            int result = 0;
            for (int i = 1; i <= num; i++) {
                result += i;
            }
            return result;
        }
    }
    import java.util.*;
    import java.util.concurrent.*;

    public class CallableExample {
        public static class WordLengthCallable implements Callable<Integer> {
            private String word;
            public WordLengthCallable(String word) {
                this.word = word;
            }
            public Integer call() {
                return Integer.valueOf(word.length());
            }
        }

        public static void main(String args[]) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            Set<Future<Integer>> set = new HashSet<Future<Integer>>();
            for (String word : args) {
                Callable<Integer> callable = new WordLengthCallable(word);
                Future<Integer> future = pool.submit(callable);
                set.add(future);
            }
            int sum = 0;
            for (Future<Integer> future : set) {
                sum += future.get();
            }
            System.out.printf("The sum of lengths is %s%n", sum);
            System.exit(sum);
        }
    }
ExecutorService also provides methods for submitting large collections of Callable objects.
Finally, ExecutorService provides a number of methods for managing the shutdown of the executor. To
support immediate shutdown, tasks should handle interrupts correctly.
Lifecycle methods in ExecutorService:
    public interface ExecutorService extends Executor {
        void shutdown();
        List<Runnable> shutdownNow();
        boolean isShutdown();
        boolean isTerminated();
        boolean awaitTermination(long timeout, TimeUnit unit)
            throws InterruptedException;
        // ... additional convenience methods for task submission
    }
The lifecycle implied by ExecutorService has three states—running, shutting down, and terminated. ExecutorServices
are initially created in the running state. The shutdown method initiates a graceful shutdown: no new tasks are
accepted but previously submitted tasks are allowed to complete—including those that have not yet begun
execution. The shutdownNow method initiates an abrupt shutdown: it attempts to cancel outstanding tasks and does
not start any tasks that are queued but not begun.
Once all tasks have completed, the ExecutorService transitions to the terminated state. You can wait for an
ExecutorService to reach the terminated state with awaitTermination, or poll for whether it has yet terminated with
isTerminated.
The ScheduledExecutorService Interface
The ScheduledExecutorService interface supplements the methods of its
parent ExecutorService with schedule, which executes a Runnable or Callable task after a
specified delay. In addition, the interface
defines scheduleAtFixedRate and scheduleWithFixedDelay, which execute specified tasks
repeatedly, at defined intervals.
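As a brief sketch (the tasks and delay values are illustrative), a scheduled executor could be used like this:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SchedulerDemo {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            // run once after a 2-second delay
            scheduler.schedule(() -> System.out.println("delayed task"), 2, TimeUnit.SECONDS);
            // run repeatedly: first immediately, then every 5 seconds
            scheduler.scheduleAtFixedRate(() -> System.out.println("periodic task"),
                    0, 5, TimeUnit.SECONDS);
            // the scheduler keeps running until shutdown() is called
        }
    }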
Thread Pools
Most of the executor implementations in java.util.concurrent use thread pools, which consist of worker
threads. This kind of thread exists separately from the Runnable and Callable tasks it executes and is
often used to execute multiple tasks.
Using worker threads minimizes the overhead due to thread creation. Thread objects use a significant amount
of memory, and in a large-scale application, allocating and deallocating many thread objects creates a
significant memory management overhead.
One common type of thread pool is the fixed thread pool. This type of pool always has a specified number of
threads running; if a thread is somehow terminated while it is still in use, it is automatically replaced with a new
thread. Tasks are submitted to the pool via an internal queue, which holds extra tasks whenever there are more
active tasks than threads.
An important advantage of the fixed thread pool is that applications using it degrade gracefully. To understand
this, consider a web server application where each HTTP request is handled by a separate thread. If the
application simply creates a new thread for every new HTTP request, and the system receives more requests
than it can handle immediately, the application will suddenly stop responding to all requests when the overhead
of all those threads exceeds the capacity of the system. With a limit on the number of threads that can be
created, the application will not be servicing HTTP requests as quickly as they come in, but it will be servicing
them as quickly as the system can sustain.
A simple way to create an executor that uses a fixed thread pool is to invoke
the newFixedThreadPool factory method in java.util.concurrent.Executors. This class also
provides the following factory methods:
- The newCachedThreadPool method creates an executor with an expandable thread pool. This
executor is suitable for applications that launch many short-lived tasks.
- The newSingleThreadExecutor method creates an executor that executes a single task at a
time.
- Several factory methods are ScheduledExecutorService versions of the above executors.
If none of the executors provided by the above factory methods meets your needs, constructing instances
of java.util.concurrent.ThreadPoolExecutor or java.util.concurrent.ScheduledThreadPoolExecutor
will give you additional options.
If only one thread is used to process client requests, it subsequently limits how many clients can access the server
concurrently. In order to support a large number of clients, you may decide to use a thread-per-request model, in
which each request is processed by a separate thread, but this requires a thread to be created when the request
arrives. Since thread creation is a time-consuming operation, it delays request processing.
It also limits the number of clients based on how many threads per JVM are allowed, which is obviously a limited number.
A thread pool solves this problem for you: it creates threads and manages them. Instead of creating threads and
discarding them once the task is done, a thread pool reuses threads in the form of worker threads.
Since threads are usually created and pooled when the application starts, the server can start request
processing immediately, which can further improve the server's response time.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPoolExample {
        public static void main(String args[]) {
            ExecutorService service = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 100; i++) {
                service.submit(new Task(i));
            }
            service.shutdown();
            while (!service.isTerminated()) {
                // busy-wait until all submitted tasks have finished
            }
            System.out.println("Finished all threads");
        }
    }

    final class Task implements Runnable {
        private int taskId;

        public Task(int id) {
            this.taskId = id;
        }

        @Override
        public void run() {
            System.out.println("Task ID : " + this.taskId + " performed by "
                    + Thread.currentThread().getName());
        }
    }
Blocking Queue
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an
element, and wait for space to become available in the queue when storing an element.
BlockingQueue methods come in four forms, with different ways of handling operations that cannot be satisfied
immediately, but may be satisfied at some point in the future: one throws an exception, the second returns a special
value (either null or false, depending on the operation), the third blocks the current thread indefinitely until the
operation can succeed, and the fourth blocks for only a given maximum time limit before giving up. These methods
are summarized in the following table:
            | Throws exception | Special value | Blocks  | Times out
    Insert  | add(e)           | offer(e)      | put(e)  | offer(e, time, unit)
    Remove  | remove()         | poll()        | take()  | poll(time, unit)
    Examine | element()        | peek()        | not applicable | not applicable
A BlockingQueue does not accept null elements. Implementations
throw NullPointerException on attempts to add, put or offer a null. A null is used as a
sentinel value to indicate failure of poll operations.
A BlockingQueue may be capacity bounded. At any given time it may have
a remainingCapacity beyond which no additional elements can be put without blocking.
A BlockingQueue without any intrinsic capacity constraints always reports a remaining capacity
of Integer.MAX_VALUE.
    class Producer implements Runnable {
        private final BlockingQueue queue;
        Producer(BlockingQueue q) { queue = q; }
        public void run() {
            try {
                while (true) { queue.put(produce()); }
            } catch (InterruptedException ex) { ... handle ... }
        }
        Object produce() { ... }
    }

    class Consumer implements Runnable {
        private final BlockingQueue queue;
        Consumer(BlockingQueue q) { queue = q; }
        public void run() {
            try {
                while (true) { consume(queue.take()); }
            } catch (InterruptedException ex) { ... handle ... }
        }
        void consume(Object x) { ... }
    }

    class Setup {
        void main() {
            BlockingQueue q = new SomeQueueImplementation();
            Producer p = new Producer(q);
            Consumer c1 = new Consumer(q);
            Consumer c2 = new Consumer(q);
            new Thread(p).start();
            new Thread(c1).start();
            new Thread(c2).start();
        }
    }
Daemon thread
A daemon thread is a low-priority thread (in the context of the JVM) that runs in the background to perform
tasks such as garbage collection (gc). Daemon threads do not prevent the JVM from exiting (even if
a daemon thread itself is still running) when all the user threads (non-daemon threads) finish
their execution. The JVM terminates itself when all user threads (non-daemon threads) finish their
execution; it does not care whether a daemon thread is running or not. If the JVM finds a running
daemon thread (upon completion of the user threads), it terminates that thread and then
shuts itself down.
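A small sketch (class name and sleep duration are illustrative) of marking a thread as a daemon:

    public class DaemonDemo {
        public static void main(String[] args) {
            Thread daemon = new Thread(() -> {
                while (true) {
                    System.out.println("daemon working in background");
                    try { Thread.sleep(500); } catch (InterruptedException e) { return; }
                }
            });
            daemon.setDaemon(true); // must be called before start()
            daemon.start();
            // main (the only user thread) exits immediately; the JVM does not wait for the daemon
            System.out.println("main exiting");
        }
    }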
Checked vs Unchecked Exceptions in Java
In Java, there are two types of exceptions:
What are checked exceptions?
Checked exceptions are checked at compile time. This means that if a method throws a checked
exception, it must either handle the exception using a try-catch block or declare the exception
using the throws keyword; otherwise the program will give a compilation error. They are named checked
exceptions because they are checked at compile time. Common examples:
- SQLException
- IOException
- ClassNotFoundException
- InvocationTargetException
- InterruptedException
What are Unchecked exceptions?
Unchecked exceptions are not checked at compile time. It means that if your program throws an
unchecked exception, and even if you didn't handle or declare that exception, the program won't give a
compilation error. Most of the time these exceptions occur due to bad data provided by the user
during the user-program interaction. It is up to the programmer to judge in advance the conditions
that can cause such exceptions and handle them appropriately. Unchecked exceptions are
subclasses of the RuntimeException class (for example, divide by zero or array index out of bounds).
- NullPointerException
- ArrayIndexOutOfBoundsException
- ArithmeticException
- IllegalArgumentException
How do you check if a Thread holds a lock or not?
There is a static method called holdsLock(Object) on java.lang.Thread; it returns true if
and only if the current thread holds the monitor lock on the specified object.
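A tiny sketch (the lock object is arbitrary) demonstrating the check:

    public class HoldsLockDemo {
        private static final Object lock = new Object();

        public static void main(String[] args) {
            System.out.println(Thread.holdsLock(lock)); // false: lock not held yet
            synchronized (lock) {
                System.out.println(Thread.holdsLock(lock)); // true: current thread owns the monitor
            }
        }
    }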
What is a blocking method in Java?
A blocking method is a method that blocks the calling thread until its task is done, for example
ServerSocket.accept() or InputStream.read().
What will happen if we don't override the run method?
Answer. This question tests your basic knowledge of how the start and run methods work
internally in the Thread API.
When we call the start() method on a thread, it internally calls the run() method in the newly created
thread. So, if we don't override run() (and no Runnable was supplied), the default run() method does
nothing and the new thread exits without doing any work.
What will happen if we override the start method?
Answer. This question again tests your basic core Java knowledge of how overriding works
at runtime, what will be called at runtime, and how the start and run methods work
internally in the Thread API.
When we call the start() method on a thread, it internally calls the run() method in the newly created
thread. So, if we override start(), no new thread is started and run() will not be called unless we write code
to do so, e.g. by calling super.start().
Monitor
In the Java virtual machine, every object and class is logically associated with a monitor. For objects,
the associated monitor protects the object's instance variables. For classes, the monitor protects the
class's class variables. If an object has no instance variables, or a class has no class variables, the
associated monitor protects no data.
Suppose a thread is executing a static synchronized method; can that
thread enter another, non-static synchronized method from there?
Answer. Yes. While the thread is in the static synchronized method it holds the lock on the class's
Class object, and when it enters the instance's synchronized method it will hold the lock on that object's monitor as well.
So the thread now holds two locks (this is also called nested synchronization), as sketched below:
> the first one on the class's Class object,
> the second one on the object's monitor.
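A brief sketch (class and method names are made up) of this nested locking:

    public class NestedLockDemo {
        // holds the lock on NestedLockDemo.class
        public static synchronized void staticMethod(NestedLockDemo instance) {
            instance.instanceMethod(); // additionally acquires the lock on 'instance'
        }

        // holds the lock on 'this'
        public synchronized void instanceMethod() {
            System.out.println("holding the class lock and the instance lock");
        }

        public static void main(String[] args) {
            staticMethod(new NestedLockDemo());
        }
    }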
Deadlock
When thread A holds lock L and tries to acquire lock M, but at the same time thread B holds M and tries to acquire L,
both threads will wait forever. This situation is the simplest case of deadlock, where multiple threads wait forever
due to a cyclic locking dependency. (Think of the threads as the nodes of a directed graph whose edges represent the
relation “Thread A is waiting for a resource held by thread B”. If this graph is cyclical, there is a deadlock.)
Database systems are designed to detect and recover from deadlock. A transaction may acquire many locks, and
locks are held until the transaction commits. So it is quite possible, and in fact not uncommon, for two transactions
to deadlock. Without intervention, they would wait forever (holding locks that are probably required by other
transactions as well). But the database server is not going to let this happen. When it detects that a set of transactions
is deadlocked (which it does by searching the is-waiting-for graph for cycles), it picks a victim and aborts that
transaction. This releases the locks held by the victim, allowing the other transactions to proceed. The application
can then retry the aborted transaction,
which may be able to complete now that any competing transactions have completed.
Lock-ordering deadlocks
    public class LeftRightDeadlock {
        private final Object left = new Object();
        private final Object right = new Object();

        public void leftRight() {
            synchronized (left) {
                synchronized (right) {
                    doSomething();
                }
            }
        }

        public void rightLeft() {
            synchronized (right) {
                synchronized (left) {
                    doSomethingElse();
                }
            }
        }
    }
A program will be free of lock-ordering deadlocks if all threads acquire the locks they need in a fixed global order.
The deadlock in LeftRightDeadlock came about because the two threads attempted to acquire the same locks in a
different order. If they asked for the locks in the same order, there would be no cyclic locking dependency and
therefore no deadlock. If you can guarantee that every thread that needs locks L and M at the same time always
acquires L and M in the same order, there will be no deadlock.
Dynamic lock order deadlocks
Sometimes it is not obvious that you have sufficient control over lock ordering to prevent deadlocks. Consider the
harmless-looking code in Listing 10.2 that transfers funds from one account to another. It acquires the locks on both
Account objects before executing the transfer, ensuring that the balances are updated atomically and without violating
invariants such as “an account cannot have a negative balance”.
How can transferMoney deadlock? It may appear as if all the threads acquire their locks in the same order, but in fact
the lock order depends on the order of arguments passed to transferMoney, and these in turn might depend on external
inputs. Deadlock can occur if two threads call transferMoney at the same time, one transferring from X to Y, and the
other doing the opposite:
A: transferMoney(myAccount, yourAccount, 10);
B: transferMoney(yourAccount, myAccount, 20);
// Warning: deadlock-prone!
    public void transferMoney(Account fromAccount,
                              Account toAccount,
                              DollarAmount amount)
            throws InsufficientFundsException {
        synchronized (fromAccount) {
            synchronized (toAccount) {
                if (fromAccount.getBalance().compareTo(amount) < 0)
                    throw new InsufficientFundsException();
                else {
                    fromAccount.debit(amount);
                    toAccount.credit(amount);
                }
            }
        }
    }
Deadlocks like this one can be spotted the same way as in Listing 10.1—look for nested lock acquisitions. Since the
order of arguments is out of our control, to fix the problem we must induce an ordering on the locks and acquire
them according to the induced ordering consistently throughout the application.
One way to induce an ordering on objects is to use System.identityHashCode, which returns the value that would be
returned by Object.hashCode. Listing 10.3 shows a version of transferMoney that uses System.identityHashCode to induce a
lock ordering. It involves a few extra lines of code, but eliminates the possibility of deadlock. In the rare case that
two objects have the same hash code, we must use an
arbitrary means of ordering the lock acquisitions, and this reintroduces the possibility of deadlock. To prevent
inconsistent lock ordering in this case, a third “tie breaking” lock is used. By acquiring the tie-breaking lock before
acquiring either Account lock, we ensure that only one thread at a time performs the risky task of acquiring two locks
in an arbitrary order, eliminating the possibility of deadlock.
    private static final Object tieLock = new Object();

    public void transferMoney(final Account fromAcct, final Account toAcct, final DollarAmount amount)
            throws InsufficientFundsException {
        class Helper {
            public void transfer() throws InsufficientFundsException {
                if (fromAcct.getBalance().compareTo(amount) < 0) {
                    throw new InsufficientFundsException();
                } else {
                    fromAcct.debit(amount);
                    toAcct.credit(amount);
                }
            }
        }
        int fromHash = System.identityHashCode(fromAcct);
        int toHash = System.identityHashCode(toAcct);
        if (fromHash < toHash) {
            synchronized (fromAcct) {
                synchronized (toAcct) {
                    new Helper().transfer();
                }
            }
        } else if (fromHash > toHash) {
            synchronized (toAcct) {
                synchronized (fromAcct) {
                    new Helper().transfer();
                }
            }
        } else {
            synchronized (tieLock) {
                synchronized (fromAcct) {
                    synchronized (toAcct) {
                        new Helper().transfer();
                    }
                }
            }
        }
    }
Starvation
Starvation occurs when a thread is perpetually denied access to resources it needs in order to make
progress; the most commonly starved resource is CPU cycles. Starvation in Java applications can be caused by
inappropriate use of thread priorities. It can also be caused by executing nonterminating constructs (infinite loops or
resource waits that do not terminate) with a lock held, since other threads that need that lock will never be able to
acquire it.
It is generally wise to resist the temptation to tweak thread priorities. As soon as you start modifying priorities, the
behavior of your application becomes platform-specific and you introduce the risk of starvation. You can often spot
a program that is trying to recover from priority tweaking or other responsiveness problems by the presence of
Thread.sleep or Thread.yield calls in odd places, in an attempt to give more time to lower-priority threads. Avoid the
temptation to use thread priorities, since they increase platform dependence and can cause liveness problems. Most
concurrent applications can use the default priority for all threads.
Livelock
Livelock is a form of liveness failure in which a thread, while not blocked, still cannot make progress because it
keeps retrying an operation that will always fail.
Livelock can also occur when multiple cooperating threads change their state in response to the others in such a way
that no thread can ever make progress. This is similar to what happens when two overly polite people are walking in
opposite directions in a hallway: each steps out of the other’s way, and now they are again in each other’s way. So
they both step aside again, and again, and again. . .
The solution for this variety of livelock is to introduce some randomness into the retry mechanism.
Semaphore
Semaphore is a concurrency API class that works on the basis of a set of permits. It restricts
the number of threads that can use a resource at once. Before using a resource, a thread acquires a
permit from the Semaphore, or blocks until a permit becomes available. The Semaphore
uses its fairness setting to handle the queue of threads waiting for permits. Once a
permit is acquired, the number of available permits is decreased, and when the
permit is returned by the thread, the number is increased again. If the number of permits is
more than one, we call it a counting semaphore, and if the number of permits is only one, it is
called a binary semaphore.
http://www.concretepage.com/java/java-counting-and-binary-semaphore-tutorial-with-example
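A short sketch (permit count and simulated work are illustrative) of limiting concurrent access with a Semaphore:

    import java.util.concurrent.Semaphore;

    public class SemaphoreDemo {
        // at most 2 threads may be inside the guarded section at a time
        private static final Semaphore permits = new Semaphore(2, true); // true = fair ordering

        public static void main(String[] args) {
            Runnable task = () -> {
                try {
                    permits.acquire();              // blocks until a permit is available
                    System.out.println(Thread.currentThread().getName() + " got a permit");
                    Thread.sleep(200);              // simulate work on the shared resource
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release();              // return the permit
                }
            };
            for (int i = 0; i < 5; i++) {
                new Thread(task).start();
            }
        }
    }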
Fork Join Framework
https://docs.oracle.com/javase/tutorial/essential/concurrency/forkjoin.html
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/RecursiveAction.html
http://www.javacreed.com/java-fork-join-example/
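As a rough sketch (the threshold and the summing task are illustrative), a RecursiveTask can split the work and join the partial results:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class ForkJoinSumTask extends RecursiveTask<Long> {
        private final long[] data;
        private final int start, end;
        private static final int THRESHOLD = 1_000;

        ForkJoinSumTask(long[] data, int start, int end) {
            this.data = data; this.start = start; this.end = end;
        }

        @Override
        protected Long compute() {
            if (end - start <= THRESHOLD) {           // small enough: compute directly
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                return sum;
            }
            int mid = (start + end) / 2;
            ForkJoinSumTask left = new ForkJoinSumTask(data, start, mid);
            ForkJoinSumTask right = new ForkJoinSumTask(data, mid, end);
            left.fork();                              // run the left half asynchronously
            return right.compute() + left.join();     // compute the right half, then join the left
        }

        public static void main(String[] args) {
            long[] numbers = new long[10_000];
            for (int i = 0; i < numbers.length; i++) numbers[i] = i;
            long total = new ForkJoinPool().invoke(new ForkJoinSumTask(numbers, 0, numbers.length));
            System.out.println("sum = " + total);
        }
    }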
Countdown Latch
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other
threads completes. A
CountDownLatch is initialized with a given count. The await methods block until
the current count reaches zero due to invocations of the countDown() method, after which all waiting threads
are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the
count cannot be reset.
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html
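A minimal sketch (worker count and task are illustrative) of waiting for several workers with a CountDownLatch:

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            final int workers = 3;
            CountDownLatch latch = new CountDownLatch(workers);
            for (int i = 0; i < workers; i++) {
                final int id = i;
                new Thread(() -> {
                    System.out.println("worker " + id + " finished");
                    latch.countDown();          // decrement the latch count
                }).start();
            }
            latch.await();                      // main blocks until the count reaches zero
            System.out.println("all workers done, main continues");
        }
    }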
ArrayBlockingQueue
Thread-safe. A bounded blocking queue backed by an array. This is a classic "bounded buffer", in which
a fixed-sized array holds elements inserted by producers and extracted by consumers. Once created, the capacity
cannot be changed.
Methods:
add(E e) - Inserts the specified element at the tail of this queue if it is possible to do so immediately without
exceeding the queue's capacity, returning true upon success and throwing an IllegalStateException if this queue is full.
peek() - Retrieves, but does not remove, the head of this queue, or returns null if this queue is empty.
poll() - Retrieves and removes the head of this queue, or returns null if this queue is empty.
put(E e) - Inserts the specified element at the tail of this queue, waiting for space to become available if the queue is full.
take() - Retrieves and removes the head of this queue, waiting if necessary until an element becomes available (waits if the
queue is empty).
A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an
element, and wait for space to become available in the queue when storing an element.
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically
using internal locks or other forms of concurrency control. A BlockingQueue may be capacity bounded.
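A small sketch (capacity and values are illustrative) of a producer and a consumer sharing an ArrayBlockingQueue:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ArrayBlockingQueueDemo {
        public static void main(String[] args) {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded capacity of 2

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put(i);                                   // blocks if the queue is full
                        System.out.println("produced " + i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("consumed " + queue.take()); // blocks if the queue is empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }
    }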
Wait() and Notify()
These are used inside synchronized blocks of code. wait() puts a thread into the waiting state and releases the lock for that waiting period, so the lock
can be acquired by another thread. The other thread calls notify() so that the waiting thread can wake up, but the first thread will not
get the lock back until the second thread has completely finished its synchronized block.
Why do you need double-checked locking for a Singleton class?
One of the common scenarios where a Singleton class breaks its contract is multi-threading.
If you ask a beginner to write code for the Singleton design pattern, there is a good chance that he
will come up with something like this:
    private static Singleton _instance;

    public static Singleton getInstance() {
        if (_instance == null) {
            _instance = new Singleton();
        }
        return _instance;
    }
and when you point out that this code will create multiple instances of the Singleton class if
called by more than one thread in parallel, he would probably make the
whole getInstance() method synchronized, as shown in our 2nd code
example as a getInstanceTS() method.
Though that is thread-safe and solves the issue of multiple instances, it is not very efficient: you
need to bear the cost of synchronization every time you call this method, while synchronization
is only needed on the first call, when the Singleton instance is created.
This brings us to the double-checked locking pattern, where only the critical section of code is
locked. Programmers call it double-checked locking because there are two checks
for _instance == null, one without locking and the other with locking (inside the synchronized
block).
Here is how double-checked locking looks in Java:
    public static Singleton getInstanceDC() {
        if (_instance == null) {                 // first check (no locking)
            synchronized (Singleton.class) {
                if (_instance == null) {         // second check (with locking)
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
On the surface this method looks perfect, as you only need to pay the price of the synchronized block
once, but it is still broken until you make the _instance variable volatile.
Without the volatile modifier it is possible for another thread in Java to see a half-initialized
state of the _instance variable, but with volatile guaranteeing the happens-before
relationship, all writes to the volatile _instance happen before any read
of the _instance variable.
This was not the case prior to Java 5, which is why double-checked locking was broken
before. Now, with the happens-before guarantee, you can safely assume that this will work.
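Putting it together, a minimal sketch of the complete class (note that the field is declared volatile, which the snippet above omits):

    public class Singleton {
        private static volatile Singleton _instance;   // volatile is essential for safe double-checked locking

        private Singleton() { }

        public static Singleton getInstanceDC() {
            if (_instance == null) {                   // first check, no lock
                synchronized (Singleton.class) {
                    if (_instance == null) {           // second check, with lock
                        _instance = new Singleton();
                    }
                }
            }
            return _instance;
        }
    }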
Java Singleton Example - Thread Safe Singleton using Static field
Initialization
You can also create a thread-safe Singleton in Java by creating the Singleton instance during class loading. Static fields are
initialized during class loading, and the classloader guarantees that the instance will not be visible until it is fully created.
Here is an example of creating a thread-safe singleton in Java using a static factory method. The only disadvantage of
implementing the Singleton pattern using a static field is that this is not lazy initialization: the Singleton is initialized even
before any client calls the getInstance() method.
    public class Singleton {
        private static final Singleton INSTANCE = new Singleton();

        private Singleton() { }

        public static Singleton getInstance() {
            return INSTANCE;
        }

        public void show() {
            System.out.println("Singleton using static initialization in Java");
        }
    }
Reentrant Lock
http://www.concretepage.com/java/reentrantlock-java-example-with-lock-unlock-trylocklockinterruptibly-isheldbycurrentthread-and-getholdcount - Important
Similar to synchronized, but with extended capabilities. tryLock() makes it different from synchronized:
with tryLock() you can attempt to acquire a lock and check whether you actually got it, which can help prevent
deadlock.
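A brief sketch (lock name and the printed messages are illustrative) of lock/unlock and tryLock:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class ReentrantLockDemo {
        private static final ReentrantLock lock = new ReentrantLock();

        public static void main(String[] args) throws InterruptedException {
            lock.lock();                                  // acquire, blocking if necessary
            try {
                System.out.println("holding the lock, hold count = " + lock.getHoldCount());
            } finally {
                lock.unlock();                            // always release in finally
            }

            // tryLock: give up after 1 second instead of blocking forever
            if (lock.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    System.out.println("got the lock via tryLock");
                } finally {
                    lock.unlock();
                }
            } else {
                System.out.println("could not get the lock, doing something else");
            }
        }
    }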
Write a program to create a Deadlock
    public class TestDeadlockExample1 {
        public static void main(String[] args) {
            final String resource1 = "ratan jaiswal";
            final String resource2 = "vimal jaiswal";

            // t1 tries to lock resource1 then resource2
            Thread t1 = new Thread() {
                public void run() {
                    synchronized (resource1) {
                        System.out.println("Thread 1: locked resource 1");
                        try { Thread.sleep(100); } catch (Exception e) {}
                        synchronized (resource2) {
                            System.out.println("Thread 1: locked resource 2");
                        }
                    }
                }
            };

            // t2 tries to lock resource2 then resource1
            Thread t2 = new Thread() {
                public void run() {
                    synchronized (resource2) {
                        System.out.println("Thread 2: locked resource 2");
                        try { Thread.sleep(100); } catch (Exception e) {}
                        synchronized (resource1) {
                            System.out.println("Thread 2: locked resource 1");
                        }
                    }
                }
            };

            t1.start();
            t2.start();
        }
    }