|20-CS-694||Advanced Programming Techniques||Spring 2012|
This series shows how to use many of the classes associated with concurrent programming.
Click on the class names in the following table to see class examples.
||BlockingQueue. A Queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element. In this example a Successor thread dumps tokens into the queue while a consumer takes them out after a 2 nanosecond delay.|
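The producer/consumer pattern described above can be sketched as follows. This is a minimal sketch, not the course's Successor example: the class and method names (BlockingQueueSketch, run) are invented for illustration, and the capacity of 2 is chosen arbitrarily so that put() actually blocks.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueSketch {
    // One producer put()s tokens; one consumer take()s them after a
    // short delay. Returns the tokens taken, in order.
    public static List<String> run(int tokens) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2); // small capacity forces blocking
        List<String> taken = Collections.synchronizedList(new ArrayList<>());

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < tokens; i++) {
                    queue.put("token-" + i);       // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < tokens; i++) {
                    TimeUnit.NANOSECONDS.sleep(2); // consumer delay, as in the example
                    taken.add(queue.take());       // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return taken;
    }

    public static void main(String[] args) {
        System.out.println(run(5));
    }
}
```

Because put() and take() handle the blocking, neither thread needs explicit synchronization on the queue itself.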
Executor. An Executor object executes a submitted
Runnable task. This interface provides a way of decoupling
task submission from the mechanics of how each task will be run,
including details of thread use, scheduling, etc. An Executor
is normally used instead of explicitly creating threads. For example,
rather than invoking new Thread(new RunnableTask()).start()
for each of a set of tasks, you might use:
Executor executor = anExecutor;
executor.execute(new RunnableTask1());
executor.execute(new RunnableTask2());
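A runnable version of the snippet above might look like the following sketch. The factory Executors.newFixedThreadPool stands in for anExecutor, and the two lambdas stand in for RunnableTask1 and RunnableTask2; the counter is only there so we can observe that both tasks ran.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorSketch {
    // Submits two Runnable tasks to a pool instead of creating a
    // thread per task, then waits for the pool to drain.
    public static int runTwoTasks() {
        AtomicInteger completed = new AtomicInteger();
        ExecutorService executor = Executors.newFixedThreadPool(2); // plays the role of anExecutor
        executor.execute(completed::incrementAndGet); // RunnableTask1
        executor.execute(completed::incrementAndGet); // RunnableTask2
        executor.shutdown();                          // no new tasks; finish the submitted ones
        try {
            executor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("tasks completed: " + runTwoTasks());
    }
}
```

Note that the decoupling is exactly as the text describes: the caller never decides how many threads exist or which one runs each task.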
Atomic operations and Futures. In concurrent programming, an
operation is atomic if it appears to the rest of the system to occur
instantaneously. Atomicity is a guarantee of isolation from
concurrent processes. Java provides classes that support lock-free
thread-safe programming on single variables through atomic operations.
boolean compareAndSet(expectedValue, updateValue);
boolean weakCompareAndSet(expectedValue, updateValue);
These methods are implemented to employ efficient machine-level atomic instructions that are available on contemporary processors. However on some platforms, support may entail some form of internal locking. Thus the methods are not strictly guaranteed to be non-blocking.
All of the atomic classes also provide get and set methods.
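The compareAndSet semantics can be seen in a small sketch using AtomicInteger (the class and method names here are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSketch {
    // Attempts two CAS operations on a counter starting at 0 and
    // returns the final value: the second CAS fails because the
    // value is no longer the expected 0.
    public static int casDemo() {
        AtomicInteger counter = new AtomicInteger(0);
        boolean first  = counter.compareAndSet(0, 5); // succeeds: value was 0
        boolean second = counter.compareAndSet(0, 9); // fails: value is now 5
        System.out.println("first=" + first + " second=" + second);
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println("final value: " + casDemo()); // final value: 5
    }
}
```

The failed CAS is what makes lock-free algorithms work: a thread that loses the race observes the failure and retries with the new value instead of blocking.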
A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation. The result can only be retrieved using method get when the computation has completed, blocking if necessary until it is ready. Cancellation is performed by the cancel method. Additional methods are provided to determine if the task completed normally or was cancelled. Once a computation has completed, the computation cannot be cancelled. If you would like to use a Future for the sake of cancellability but not provide a usable result, you can declare types of the form Future<?> and return null as a result of the underlying task.
A Callable task returns a result and may throw an exception. Implementors define a single method with no arguments called call. The Callable interface is similar to Runnable, in that both are designed for classes whose instances are potentially executed by another thread. A Runnable, however, does not return a result and cannot throw a checked exception.
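A sketch tying Callable and Future together, with hypothetical names (FutureSketch, sumInBackground) chosen for illustration: a Callable computes a sum on a pool thread, and get() blocks until that result is ready.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureSketch {
    // Submits a Callable that computes 1 + 2 + ... + n and blocks
    // on Future.get() until the result is available.
    public static int sumInBackground(int n) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> { // call() returns a result and may throw
                int sum = 0;
                for (int i = 1; i <= n; i++) sum += i;
                return sum;
            };
            Future<Integer> future = pool.submit(task);
            return future.get();             // blocks until the computation completes
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumInBackground(10)); // 55
    }
}
```

ExecutionException is the Future mechanism at work: any checked exception thrown by call() is captured and rethrown, wrapped, when get() is invoked.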
Unchecked exceptions represent program bugs, such as invalid arguments passed to a non-private method; checked exceptions represent invalid conditions in areas outside the immediate control of the program, such as invalid user input, database problems, or network outages.
||Semaphores. A Semaphore maintains a set of permits. Each acquire() blocks, if necessary, until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. In this example Sender objects release permits and Receiver objects remain suspended until they can acquire a number of permits that is determined by the selected number in the rightmost JComboBox. By adjusting delay times, one can observe what happens if Senders are more or less eager to complete than Receivers. This is a revised version of the original semaphore example.|
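A stripped-down version of the Sender/Receiver exchange can be sketched as below. This is not the course's GUI example: the class and method names are invented, there is one sender and one receiver, and the permit count plays the role of the JComboBox selection.

```java
import java.util.concurrent.Semaphore;

public class SemaphoreSketch {
    // A sender release()s permits one at a time; the receiver stays
    // blocked in acquire(n) until all n permits are available.
    public static int exchange(int permitsNeeded) {
        Semaphore semaphore = new Semaphore(0); // start with no permits
        Thread sender = new Thread(() -> {
            for (int i = 0; i < permitsNeeded; i++) {
                semaphore.release();            // add one permit
            }
        });
        sender.start();
        try {
            semaphore.acquire(permitsNeeded);   // blocks until enough permits exist
            sender.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return semaphore.availablePermits();    // all permits were consumed
    }

    public static void main(String[] args) {
        System.out.println("permits left: " + exchange(3)); // permits left: 0
    }
}
```

Adding sleep() calls on either side would reproduce the example's eager-sender versus eager-receiver experiment.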
Re-entrant Lock. A reentrant, mutual exclusion Lock with the
same basic behavior and semantics as the implicit monitor lock
accessed using synchronized, notify, and wait, but with extended capabilities.
A ReentrantLock is owned by the thread last successfully locking, but not yet unlocking it. A thread invoking lock() will return, successfully acquiring the lock, when the lock is not owned by another thread. The method returns immediately if the current thread already owns the lock.
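The re-entrancy described above can be demonstrated directly; getHoldCount() reports how many times the owning thread has nested lock() calls. The class name and method are invented for this sketch.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockSketch {
    private static final ReentrantLock lock = new ReentrantLock();

    // lock() returns immediately when the current thread already
    // owns the lock; each lock() must be paired with an unlock().
    public static int nestedHoldCount() {
        lock.lock();              // first acquisition
        try {
            lock.lock();          // re-entrant: same thread, returns immediately
            try {
                return lock.getHoldCount(); // nesting depth is 2 here
            } finally {
                lock.unlock();
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println("hold count: " + nestedHoldCount()); // hold count: 2
    }
}
```

The try/finally pairing is the idiom that replaces the automatic release a synchronized block would give you.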
Count down latch. A synchronization aid that allows one or
more threads to wait until a set of operations being performed in
other threads completes. A CountDownLatch is initialized
with a given count. The await() methods block until the
current count reaches zero due to invocations of
the countDown() method, after which all waiting threads are
released and any subsequent invocations of await return immediately.
The count cannot be reset: use a CyclicBarrier if you need a version that resets the count.
A CountDownLatch is used to start a series of threads and then wait until all of them are complete, or until they call countDown() a given number of times. A Semaphore is used to control the number of concurrent threads that are using a resource.
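The wait-for-a-set-of-workers pattern can be sketched as follows (the class and method names are invented for illustration; the worker bodies are empty placeholders):

```java
import java.util.concurrent.CountDownLatch;

public class LatchSketch {
    // Starts `workers` threads; the calling thread blocks in await()
    // until every worker has called countDown().
    public static long awaitWorkers(int workers) {
        CountDownLatch latch = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                // ... perform some unit of work here ...
                latch.countDown();  // decrement the count when done
            }).start();
        }
        try {
            latch.await();          // blocks until the count reaches zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return latch.getCount();    // zero once all workers have finished
    }

    public static void main(String[] args) {
        System.out.println("remaining count: " + awaitWorkers(4)); // remaining count: 0
    }
}
```

Unlike join(), the latch does not care which threads count down, only how many times countDown() is called.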
Fork and Join.
A ForkJoinTask is a thread-like form of Future that
is much lighter weight than a normal thread. Huge numbers of tasks
and subtasks may be hosted by a small number of actual threads in a
ForkJoinPool, at the price of some usage limitations.
The efficiency of ForkJoinTasks stems from a set of
restrictions reflecting their intended use as computational tasks
calculating pure functions or operating on purely isolated objects.
A ForkJoinPool is an ExecutorService for running ForkJoinTasks. A ForkJoinPool provides the entry point for submissions from non-ForkJoinTask clients, as well as management and monitoring operations. A ForkJoinPool differs from other kinds of ExecutorService mainly by virtue of employing work-stealing: all threads in the pool attempt to find and execute subtasks created by other active tasks (eventually blocking waiting for work if none exist). This enables efficient processing when most tasks spawn other subtasks (as do most ForkJoinTasks). When setting asyncMode to true in constructors, ForkJoinPools may also be appropriate for use with event-style tasks that are never joined.
A ForkJoinPool is constructed with a given target parallelism level; by default, equal to the number of available processors. The pool attempts to maintain enough active (or available) threads by dynamically adding, suspending, or resuming internal worker threads, even if some tasks are stalled waiting to join others. However, no such adjustments are guaranteed in the face of blocked IO or other unmanaged synchronization. The nested ForkJoinPool.ManagedBlocker interface enables extension of the kinds of synchronization accommodated.
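A typical computational task of the kind described above is a RecursiveTask that splits its input in half until the pieces are small enough to compute directly. This is a minimal sketch with invented names (ForkJoinSketch, SumTask) and an arbitrary threshold of 1000.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSketch {
    // A pure-function computation: the sum of [lo, hi), split in
    // half recursively so subtasks can be stolen by idle workers.
    static class SumTask extends RecursiveTask<Long> {
        final long lo, hi;
        SumTask(long lo, long hi) { this.lo = lo; this.hi = hi; }

        @Override
        protected Long compute() {
            if (hi - lo <= 1000) {                  // small enough: compute directly
                long sum = 0;
                for (long i = lo; i < hi; i++) sum += i;
                return sum;
            }
            long mid = (lo + hi) / 2;
            SumTask left = new SumTask(lo, mid);
            left.fork();                            // schedule left half asynchronously
            long rightSum = new SumTask(mid, hi).compute(); // do right half ourselves
            return left.join() + rightSum;          // join waits for (or helps run) the left half
        }
    }

    public static long parallelSum(long n) {
        ForkJoinPool pool = new ForkJoinPool();     // default parallelism = #processors
        try {
            return pool.invoke(new SumTask(0, n));
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(10_000));    // 49995000
    }
}
```

Computing the right half in the current thread rather than forking both halves is the conventional idiom: it keeps the worker busy and halves the number of task objects created.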