ThreadPoolExecutor

An {@link ExecutorService} that executes each submitted task using one of possibly several pooled threads, normally configured using {@link Executors} factory methods.

<p>Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks. Each {@code ThreadPoolExecutor} also maintains some basic statistics, such as the number of completed tasks.

<p>To be useful across a wide range of contexts, this class provides many adjustable parameters and extensibility hooks. However, programmers are urged to use the more convenient {@link Executors} factory methods {@link Executors#newCachedThreadPool} (unbounded thread pool, with automatic thread reclamation), {@link Executors#newFixedThreadPool} (fixed size thread pool) and {@link Executors#newSingleThreadExecutor} (single background thread), that preconfigure settings for the most common usage scenarios. Otherwise, use the following guide when manually configuring and tuning this class:

<dl>

<dt>Core and maximum pool sizes</dt>

<dd>A {@code ThreadPoolExecutor} will automatically adjust the pool size (see {@link #getPoolSize}) according to the bounds set by corePoolSize (see {@link #getCorePoolSize}) and maximumPoolSize (see {@link #getMaximumPoolSize}).

When a new task is submitted in method {@link #execute(Runnable)}, if fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. Else if fewer than maximumPoolSize threads are running, a new thread will be created to handle the request only if the queue is full. By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool. By setting maximumPoolSize to an essentially unbounded value such as {@code Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary number of concurrent tasks. Most typically, core and maximum pool sizes are set only upon construction, but they may also be changed dynamically using {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
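<p>For example, a fixed-size pool of four threads can be created directly with the four-argument constructor documented below. The module imports are an assumption about this port's package layout (adjust them to your build); {@code Runnable} is assumed to come from the same package.

<pre> {@code
// Hypothetical import paths; the exact module names depend on the port.
import hunt.concurrency.ThreadPoolExecutor;
import hunt.concurrency.LinkedBlockingQueue;
import core.time : seconds;

// corePoolSize == maximumPoolSize gives a fixed-size pool; the keep-alive
// value is irrelevant while core threads are not allowed to time out.
auto pool = new ThreadPoolExecutor(4, 4, 60.seconds,
        new LinkedBlockingQueue!(Runnable)());
}</pre>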

<dt>On-demand construction</dt>

<dd>By default, even core threads are initially created and started only when new tasks arrive, but this can be overridden dynamically using method {@link #prestartCoreThread} or {@link #prestartAllCoreThreads}. You probably want to prestart threads if you construct the pool with a non-empty queue. </dd>
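<p>Continuing the sketch above (imports as before; {@code preloadedQueue} is a hypothetical queue that already holds tasks at construction time):

<pre> {@code
// With a pre-filled queue, no threads exist until execute() is called,
// so start the core threads explicitly.
auto pool = new ThreadPoolExecutor(2, 2, 30.seconds, preloadedQueue);
int started = pool.prestartAllCoreThreads(); // starts up to corePoolSize threads
}</pre>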

<dt>Creating new threads</dt>

<dd>New threads are created using a {@link ThreadFactory}. If not otherwise specified, an {@link Executors#defaultThreadFactory} is used, which creates threads that all belong to the same {@link ThreadGroupEx} and have the same {@code NORM_PRIORITY} priority and non-daemon status. By supplying a different ThreadFactory, you can alter the thread's name, thread group, priority, daemon status, etc. If a {@code ThreadFactory} fails to create a thread when asked, by returning null from {@code newThread}, the executor will continue, but might not be able to execute any tasks. Threads should possess the "modifyThread" {@code RuntimePermission}. If worker threads or other threads using the pool do not possess this permission, service may be degraded: configuration changes may not take effect in a timely manner, and a shutdown pool may remain in a state in which termination is possible but not completed.</dd>
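<p>A minimal factory sketch follows; the returned thread type and its {@code name} and {@code isDaemon} properties are assumptions based on druntime's {@code core.thread.Thread}, and the naming scheme is purely illustrative:

<pre> {@code
import core.thread : Thread;
import std.conv : to;

class NamedThreadFactory : ThreadFactory {
    private int counter; // not synchronized; good enough for a sketch

    Thread newThread(Runnable r) {
        // Wrap the task in a delegate and return an unstarted, named thread;
        // the executor is responsible for starting it.
        auto t = new Thread({ r.run(); });
        t.name = "pool-worker-" ~ to!string(counter);
        counter++;
        t.isDaemon = false;
        return t;
    }
}
}</pre>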

<dt>Keep-alive times</dt>

<dd>If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime (see {@link #getKeepAliveTime}). This provides a means of reducing resource consumption when the pool is not being actively used. If the pool becomes more active later, new threads will be constructed. This parameter can also be changed dynamically using method {@link #setKeepAliveTime(Duration)}. Using an effectively infinite keep-alive time such as {@code Duration.max} prevents idle threads from ever terminating prior to shutdown. By default, the keep-alive policy applies only when there are more than corePoolSize threads, but method {@link #allowCoreThreadTimeOut(bool)} can be used to apply this time-out policy to core threads as well, so long as the keepAliveTime value is non-zero. </dd>
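<p>For instance, to let an otherwise idle pool shrink all the way down to zero threads (a sketch; imports as in the first example):

<pre> {@code
auto pool = new ThreadPoolExecutor(4, 8, 30.seconds,
        new LinkedBlockingQueue!(Runnable)());
// Apply the 30-second idle timeout to core threads as well. This requires a
// non-zero keep-alive time and should be done before the pool is in active use.
pool.allowCoreThreadTimeOut(true);
}</pre>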

<dt>Queuing</dt>

<dd>Any {@link BlockingQueue} may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

<ul>

<li>If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.

<li>If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.

<li>If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case, the task will be rejected.

</ul>

There are three general strategies for queuing: <ol>

<li><em>Direct handoffs.</em> A good default choice for a work queue is a {@link SynchronousQueue} that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of newly submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.

<li><em>Unbounded queues.</em> Using an unbounded queue (for example a {@link LinkedBlockingQueue} without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.

<li><em>Bounded queues.</em> A bounded queue (for example, an {@link ArrayBlockingQueue}) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.

</ol>
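<p>The choice among these strategies comes down to the queue passed at construction. A sketch of all three (the queue constructors shown mirror the Java originals and are assumptions about this port; imports as in the first example):

<pre> {@code
// 1. Direct handoff: never queues; spawns threads up to maximumPoolSize, then rejects.
auto handoff = new ThreadPoolExecutor(0, int.max, 60.seconds,
        new SynchronousQueue!(Runnable)());

// 2. Unbounded queue: never more than corePoolSize threads; backlog may grow without limit.
auto unbounded = new ThreadPoolExecutor(8, 8, 60.seconds,
        new LinkedBlockingQueue!(Runnable)());

// 3. Bounded queue: both the backlog and the thread count are capped.
auto bounded = new ThreadPoolExecutor(4, 16, 60.seconds,
        new ArrayBlockingQueue!(Runnable)(128));
}</pre>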

</dd>

<dt>Rejected tasks</dt>

<dd>New tasks submitted in method {@link #execute(Runnable)} will be <em>rejected</em> when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated. In either case, the {@code execute} method invokes the {@link RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)} method of its {@link RejectedExecutionHandler}. Four predefined handler policies are provided:

<ol>

<li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the handler throws a runtime {@link RejectedExecutionException} upon rejection.

<li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread that invokes {@code execute} itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.

<li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that cannot be executed is simply dropped.

<li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.)

</ol>

It is possible to define and use other kinds of {@link RejectedExecutionHandler} classes. Doing so requires some care especially when policies are designed to work only under particular capacity or queuing policies. </dd>
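<p>A handler of your own only needs the {@code rejectedExecution} callback named above; for example, a sketch that logs and then silently drops the task (the logging is illustrative only):

<pre> {@code
class LoggingDiscardPolicy : RejectedExecutionHandler {
    void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Record the rejection, then drop the task, much like DiscardPolicy.
        import std.stdio : writeln;
        writeln("task rejected; pool state: ", executor.toString());
    }
}

// Installed via: pool.setRejectedExecutionHandler(new LoggingDiscardPolicy());
}</pre>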

<dt>Hook methods</dt>

<dd>This class provides {@code protected} overridable {@link #beforeExecute(Thread, Runnable)} and {@link #afterExecute(Runnable, Throwable)} methods that are called before and after execution of each task. These can be used to manipulate the execution environment; for example, reinitializing ThreadLocals, gathering statistics, or adding log entries. Additionally, method {@link #terminated} can be overridden to perform any special processing that needs to be done once the Executor has fully terminated.

<p>If hook, callback, or BlockingQueue methods throw exceptions, internal worker threads may in turn fail, abruptly terminate, and possibly be replaced.</dd>
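<p>For example, a subclass that logs abruptly terminated tasks might look roughly like this (a sketch; the constructor mirrors the four-argument constructor documented below, and the logging is illustrative):

<pre> {@code
class LoggingThreadPoolExecutor : ThreadPoolExecutor {
    this(int core, int max, Duration keepAlive, BlockingQueue!(Runnable) queue) {
        super(core, max, keepAlive, queue);
    }

    override protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t !is null) {
            import std.stdio : writeln;
            writeln("task terminated abruptly: ", t.msg);
        }
    }
}
}</pre>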

<dt>Queue maintenance</dt>

<dd>Method {@link #getQueue()} allows access to the work queue for purposes of monitoring and debugging. Use of this method for any other purpose is strongly discouraged. Two supplied methods, {@link #remove(Runnable)} and {@link #purge} are available to assist in storage reclamation when large numbers of queued tasks become cancelled.</dd>

<dt>Reclamation</dt>

<dd>A pool that is no longer referenced in a program <em>AND</em> has no remaining threads may be reclaimed (garbage collected) without being explicitly shutdown. You can configure a pool to allow all unused threads to eventually die by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting {@link #allowCoreThreadTimeOut(bool)}. </dd>

</dl>

<p><b>Extension example</b>. Most extensions of this class override one or more of the protected hook methods. For example, here is a subclass that adds a simple pause/resume feature:

<pre> {@code
class PausableThreadPoolExecutor : ThreadPoolExecutor {
    private bool isPaused;
    private Mutex pauseLock = new Mutex();
    private Condition unpaused = pauseLock.newCondition();

    this(...) { super(...); }

    override protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        pauseLock.lock();
        try {
            while (isPaused)
                unpaused.await();
        } catch (InterruptedException ie) {
            t.interrupt();
        } finally {
            pauseLock.unlock();
        }
    }

    void pause() {
        pauseLock.lock();
        try {
            isPaused = true;
        } finally {
            pauseLock.unlock();
        }
    }

    void resume() {
        pauseLock.lock();
        try {
            isPaused = false;
            unpaused.notifyAll();
        } finally {
            pauseLock.unlock();
        }
    }
}}</pre>

@author Doug Lea

class ThreadPoolExecutor : AbstractExecutorService {}

Constructors

this
this(int corePoolSize, int maximumPoolSize, Duration keepAliveTime, BlockingQueue!(Runnable) workQueue)

Creates a new {@code ThreadPoolExecutor} with the given initial parameters, the default thread factory and the default rejected execution handler.

this
this(int corePoolSize, int maximumPoolSize, Duration keepAliveTime, BlockingQueue!(Runnable) workQueue, ThreadFactory threadFactory)

Creates a new {@code ThreadPoolExecutor} with the given initial parameters and {@linkplain ThreadPoolExecutor.AbortPolicy default rejected execution handler}.

this
this(int corePoolSize, int maximumPoolSize, Duration keepAliveTime, BlockingQueue!(Runnable) workQueue, RejectedExecutionHandler handler)

Creates a new {@code ThreadPoolExecutor} with the given initial parameters and {@linkplain ThreadFactory#defaultThreadFactory default thread factory}.

this
this(int corePoolSize, int maximumPoolSize, Duration keepAliveTime, BlockingQueue!(Runnable) workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler)

Creates a new {@code ThreadPoolExecutor} with the given initial parameters.

Members

Functions

afterExecute
void afterExecute(Runnable r, Throwable t)

Method invoked upon completion of execution of the given Runnable. This method is invoked by the thread that executed the task. If non-null, the Throwable is the uncaught {@code RuntimeException} or {@code Error} that caused execution to terminate abruptly.

allowCoreThreadTimeOut
void allowCoreThreadTimeOut(bool value)

Sets the policy governing whether core threads may time out and terminate if no tasks arrive within the keep-alive time, being replaced if needed when new tasks arrive. When false, core threads are never terminated due to lack of incoming tasks. When true, the same keep-alive policy applying to non-core threads applies also to core threads. To avoid continual thread replacement, the keep-alive time must be greater than zero when setting {@code true}. This method should in general be called before the pool is actively used.

allowsCoreThreadTimeOut
bool allowsCoreThreadTimeOut()

Returns true if this pool allows core threads to time out and terminate if no tasks arrive within the keepAlive time, being replaced if needed when new tasks arrive. When true, the same keep-alive policy applying to non-core threads applies also to core threads. When false (the default), core threads are never terminated due to lack of incoming tasks.

awaitTermination
bool awaitTermination(Duration timeout)
Undocumented in source. Be warned that the author may not have intended to support it.
beforeExecute
void beforeExecute(Thread t, Runnable r)

Method invoked prior to executing the given Runnable in the given thread. This method is invoked by thread {@code t} that will execute task {@code r}, and may be used to re-initialize ThreadLocals, or to perform logging.

ensurePrestart
void ensurePrestart()

Same as prestartCoreThread except arranges that at least one thread is started even if corePoolSize is 0.

execute
void execute(Runnable command)

Executes the given task sometime in the future. The task may execute in a new thread or in an existing pooled thread.

finalize
void finalize()

@implNote Previous versions of this class had a finalize method that shut down this executor, but in this version, finalize does nothing.

getActiveCount
int getActiveCount()

Returns the approximate number of threads that are actively executing tasks.

getCompletedTaskCount
long getCompletedTaskCount()

Returns the approximate total number of tasks that have completed execution. Because the states of tasks and threads may change dynamically during computation, the returned value is only an approximation, but one that does not ever decrease across successive calls.

getCorePoolSize
int getCorePoolSize()

Returns the core number of threads.

getKeepAliveTime
long getKeepAliveTime()

Returns the thread keep-alive time, which is the amount of time that threads may remain idle before being terminated. Threads that wait this amount of time without processing a task will be terminated if there are more than the core number of threads currently in the pool, or if this pool {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}.

getLargestPoolSize
int getLargestPoolSize()

Returns the largest number of threads that have ever simultaneously been in the pool.

getMaximumPoolSize
int getMaximumPoolSize()

Returns the maximum allowed number of threads.

getPoolSize
int getPoolSize()

Returns the current number of threads in the pool.

getQueue
BlockingQueue!(Runnable) getQueue()

Returns the task queue used by this executor. Access to the task queue is intended primarily for debugging and monitoring. This queue may be in active use. Retrieving the task queue does not prevent queued tasks from executing.

getRejectedExecutionHandler
RejectedExecutionHandler getRejectedExecutionHandler()

Returns the current handler for unexecutable tasks.

getTaskCount
long getTaskCount()

Returns the approximate total number of tasks that have ever been scheduled for execution. Because the states of tasks and threads may change dynamically during computation, the returned value is only an approximation.

getThreadFactory
ThreadFactory getThreadFactory()

Returns the thread factory used to create new threads.

isShutdown
bool isShutdown()
Undocumented in source. Be warned that the author may not have intended to support it.
isStopped
bool isStopped()

Used by ScheduledThreadPoolExecutor.

isTerminated
bool isTerminated()
Undocumented in source. Be warned that the author may not have intended to support it.
isTerminating
bool isTerminating()

Returns true if this executor is in the process of terminating after {@link #shutdown} or {@link #shutdownNow} but has not completely terminated. This method may be useful for debugging. A return of {@code true} reported a sufficient period after shutdown may indicate that submitted tasks have ignored or suppressed interruption, causing this executor not to properly terminate.

onShutdown
void onShutdown()

Performs any further cleanup following run state transition on invocation of shutdown. A no-op here, but used by ScheduledThreadPoolExecutor to cancel delayed tasks.

prestartAllCoreThreads
int prestartAllCoreThreads()

Starts all core threads, causing them to idly wait for work. This overrides the default policy of starting core threads only when new tasks are executed.

prestartCoreThread
bool prestartCoreThread()

Starts a core thread, causing it to idly wait for work. This overrides the default policy of starting core threads only when new tasks are executed. This method will return {@code false} if all core threads have already been started.

purge
void purge()

Tries to remove from the work queue all {@link Future} tasks that have been cancelled. This method can be useful as a storage reclamation operation, that has no other impact on functionality. Cancelled tasks are never executed, but may accumulate in work queues until worker threads can actively remove them. Invoking this method instead tries to remove them now. However, this method may fail to remove tasks in the presence of interference by other threads.

reject
void reject(Runnable command)

Invokes the rejected execution handler for the given command. Package-protected for use by ScheduledThreadPoolExecutor.

remove
bool remove(Runnable task)

Removes this task from the executor's internal queue if it is present, thus causing it not to be run if it has not already started.

runWorker
void runWorker(Worker w)

Main worker run loop. Repeatedly gets tasks from the queue and executes them, while coping with a number of concurrency and shutdown issues.

setCorePoolSize
void setCorePoolSize(int corePoolSize)

Sets the core number of threads. This overrides any value set in the constructor. If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle. If larger, new threads will, if needed, be started to execute any queued tasks.

setKeepAliveTime
void setKeepAliveTime(Duration time)

Sets the thread keep-alive time, which is the amount of time that threads may remain idle before being terminated. Threads that wait this amount of time without processing a task will be terminated if there are more than the core number of threads currently in the pool, or if this pool {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}. This overrides any value set in the constructor.

setMaximumPoolSize
void setMaximumPoolSize(int maximumPoolSize)

Sets the maximum allowed number of threads. This overrides any value set in the constructor. If the new value is smaller than the current value, excess existing threads will be terminated when they next become idle.

setRejectedExecutionHandler
void setRejectedExecutionHandler(RejectedExecutionHandler handler)

Sets a new handler for unexecutable tasks.

setThreadFactory
void setThreadFactory(ThreadFactory threadFactory)

Sets the thread factory used to create new threads.

shutdown
void shutdown()

Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
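A typical orderly shutdown combines this method with {@link #awaitTermination(Duration)} and, as a last resort, {@link #shutdownNow}. A sketch (the timeouts are arbitrary):

<pre> {@code
pool.shutdown();                        // stop accepting new tasks
if (!pool.awaitTermination(60.seconds)) {
    pool.shutdownNow();                 // interrupt and drain lingering tasks
    if (!pool.awaitTermination(60.seconds)) {
        // the pool still did not terminate; tasks may be ignoring interruption
    }
}
}</pre>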

shutdownNow
List!(Runnable) shutdownNow()

Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns a list of the tasks that were awaiting execution. These tasks are drained (removed) from the task queue upon return from this method.

terminated
void terminated()

Method invoked when the Executor has terminated. Default implementation does nothing. Note: To properly nest multiple overridings, subclasses should generally invoke {@code super.terminated} within this method.

toString
string toString()

Returns a string identifying this pool, as well as its state, including indications of run state and estimated worker and task counts.

tryTerminate
void tryTerminate()

Transitions to TERMINATED state if either (SHUTDOWN and pool and queue empty) or (STOP and pool empty). If otherwise eligible to terminate but workerCount is nonzero, interrupts an idle worker to ensure that shutdown signals propagate. This method must be called following any action that might make termination possible -- reducing worker count or removing tasks from the queue during shutdown. The method is non-private to allow access from ScheduledThreadPoolExecutor.

Inherited Members

From AbstractExecutorService

newTaskFor
RunnableFuture!(T) newTaskFor(Runnable runnable, T value)

Returns a {@code RunnableFuture} for the given runnable and default value.

newTaskFor
RunnableFuture!(T) newTaskFor(Runnable runnable)
Undocumented in source. Be warned that the author may not have intended to support it.
newTaskFor
RunnableFuture!(T) newTaskFor(Callable!(T) callable)

Returns a {@code RunnableFuture} for the given callable task.

submit
Future!(void) submit(Runnable task)

@throws RejectedExecutionException {@inheritDoc} @throws NullPointerException {@inheritDoc}

submit
Future!(T) submit(Runnable task, T result)

@throws RejectedExecutionException {@inheritDoc} @throws NullPointerException {@inheritDoc}

submit
Future!(T) submit(Callable!(T) task)

@throws RejectedExecutionException {@inheritDoc} @throws NullPointerException {@inheritDoc}

invokeAny
T invokeAny(Collection!(Callable!(T)) tasks)
Undocumented in source. Be warned that the author may not have intended to support it.
invokeAny
T invokeAny(Collection!(Callable!(T)) tasks, Duration timeout)
Undocumented in source. Be warned that the author may not have intended to support it.
invokeAll
List!(Future!(T)) invokeAll(Collection!(Callable!(T)) tasks)
Undocumented in source. Be warned that the author may not have intended to support it.
invokeAll
List!(Future!(T)) invokeAll(Collection!(Callable!(T)) tasks, Duration timeout)
Undocumented in source. Be warned that the author may not have intended to support it.

Meta