Java Design Patterns: Behavioral Patterns

## Behavioral Patterns

Once the creation and structure of objects are settled, what remains is the design of their behavior. The behavioral patterns are:

- Template Method
- Observer
- State
- Strategy
- Chain of Responsibility
- Command
- Visitor
- Mediator
- Memento
- Iterator
- Interpreter

## Template Method Pattern (Template Method)

GoF defines it as:

> Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.

Here we take drawing on a canvas as an example. We define an abstract class with a render method that performs three steps in a fixed order: first draw the background, then draw the main graphics, and finally add a signature. We draw a circle and a rectangle; the abstract class fixes the order of the rendering steps, while the concrete subclasses supply the implementations.

The code is as follows:

```java
package design.pattern.temlatemothod;

/**
 * Created by Aaron on 15/9/16.
 */
public abstract class AbstractShape {
    // Template method: fixes the order of the rendering steps.
    public void render() {
        this.drawBackground();
        this.drawGraphics();
        this.drawSignature();
    }

    abstract void drawSignature();
    abstract void drawBackground();
    abstract void drawGraphics();
}
```

```java
package design.pattern.temlatemothod;

/**
 * Created by Aaron on 15/9/16.
 */
public class CircleShape extends AbstractShape {
    @Override
    public void drawSignature() {
        System.out.println("draw circle signature!");
    }

    @Override
    public void drawBackground() {
        System.out.println("draw circle background! ");
    }

    @Override
    public void drawGraphics() {
        System.out.println("draw circle graphics!");
    }
}
```

```java
package design.pattern.temlatemothod;

/**
 * Created by Aaron on 15/9/16.
 */
public class RectShape extends AbstractShape {
    @Override
    public void drawSignature() {
        System.out.println("draw rect signature!");
    }

    @Override
    public void drawBackground() {
        System.out.println("draw rect background! ");
    }

    @Override
    public void drawGraphics() {
        System.out.println("draw rect graphics!");
    }
}
```

```java
package design.pattern.temlatemothod;

/**
 * Created by Aaron on 15/9/16.
 */
public class User {
    public static void main(String args[]) {
        new CircleShape().render();
        System.out.println("-----");
        new RectShape().render();
    }
}
```

Output:

```
draw circle background! 
draw circle graphics!
draw circle signature!
-----
draw rect background! 
draw rect graphics!
draw rect signature!
```

## Observer Pattern (Observer)

GoF defines it as:

> Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.

We often need a set of listeners to react when an event occurs. In this example, whenever the GPS position changes, the update method of each of its subscribers is invoked.

Sample code:
```java
package design.pattern.observer;

import java.util.ArrayList;

/**
 * Created by Aaron on 15/9/16.
 */
public abstract class Subject {
    private ArrayList<Observer> observers = new ArrayList<Observer>();

    public void addObserver(Observer observer) {
        this.observers.add(observer);
    }

    public void removeObserver(Observer observer) {
        this.observers.remove(observer);
    }

    public void notifyObserver() {
        for (Observer observer : observers) {
            observer.update(this);
        }
    }
}
```

```java
package design.pattern.observer;

import java.awt.*;

/**
 * Created by Aaron on 15/9/16.
 */
public class GPSSubject extends Subject {
    private Point point;

    public void move(Point point) {
        this.point = point;
        this.notifyObserver();
    }
}
```

```java
package design.pattern.observer;

/**
 * Created by Aaron on 15/9/16.
 */
public abstract class Observer {
    public Observer() {
    }

    public abstract void update(Subject subject);
}
```

```java
package design.pattern.observer;

/**
 * Created by Aaron on 15/9/16.
 */
public class MapObserver extends Observer {
    @Override
    public void update(Subject subject) {
        System.out.println(this.getClass().getSimpleName() + "_" + this.hashCode()
                + " observer:" + subject.getClass().getSimpleName() + " position changed;");
    }
}
```

```java
package design.pattern.observer;

import java.awt.*;

/**
 * Created by Aaron on 15/9/16.
 */
public class User {
    public static void main(String[] args) {
        GPSSubject subject = new GPSSubject();
        subject.addObserver(new MapObserver());
        Observer observer1 = null;
        subject.addObserver(observer1 = new MapObserver());
        subject.move(new Point(200, 400));
        System.out.println("remove one observer from subject's observer list!");
        subject.removeObserver(observer1);
        subject.move(new Point(200, 400));
    }
}
```

## State Pattern (State)

GoF defines it as:

> Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.

Sample code:

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class Context extends AbstractLifeState {
    public static OpeningState openingState = new OpeningState();
    public static ClosingState closingState = new ClosingState();
    public static RunningState runningState = new RunningState();
    public static StoppingState stoppingState = new StoppingState();
    private AbstractLifeState lifeState;

    public Context() {
    }

    public AbstractLifeState getLifeState() {
        return lifeState;
    }

    public void setLifeState(AbstractLifeState lifeState) {
        this.lifeState = lifeState;
        this.lifeState.setContext(this);
    }

    @Override
    public void open() {
        this.lifeState.open();
    }

    @Override
    public void close() {
        this.lifeState.close();
    }

    @Override
    public void run() {
        this.lifeState.run();
    }

    @Override
    public void stop() {
        this.lifeState.stop();
    }
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public abstract class AbstractLifeState {
    protected Context context;

    public void setContext(Context context) {
        this.context = context;
    }

    public abstract void open();
    public abstract void close();
    public abstract void run();
    public abstract void stop();
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class OpeningState extends AbstractLifeState {
    @Override
    public void open() {
        System.out.println(this.getClass().getSimpleName() + ": operate open");
    }

    @Override
    public void close() {
        System.out.println(this.getClass().getSimpleName() + ": operate close");
        this.context.setLifeState(Context.closingState);
    }

    @Override
    public void run() {
        System.out.println(this.getClass().getSimpleName() + ": operate run");
        this.context.setLifeState(Context.runningState);
    }

    @Override
    public void stop() {
        System.out.println(this.getClass().getSimpleName() + ": operate stop");
        this.context.setLifeState(Context.stoppingState);
    }
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class RunningState extends AbstractLifeState {
    @Override
    public void open() {
        System.out.println(this.getClass().getSimpleName() + ": operate open");
        context.setLifeState(Context.openingState);
    }

    @Override
    public void close() {
        System.out.println(this.getClass().getSimpleName() + ": operate close");
        context.setLifeState(Context.closingState);
    }

    @Override
    public void run() {
        System.out.println(this.getClass().getSimpleName() + ": operate run");
    }

    @Override
    public void stop() {
        System.out.println(this.getClass().getSimpleName() + ": operate stop");
        context.setLifeState(Context.stoppingState);
    }
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class StoppingState extends AbstractLifeState {
    @Override
    public void open() {
        System.out.println(this.getClass().getSimpleName() + ": operate open");
    }

    @Override
    public void close() {
        System.out.println(this.getClass().getSimpleName() + ": operate close");
    }

    @Override
    public void run() {
        System.out.println(this.getClass().getSimpleName() + ": operate run");
    }

    @Override
    public void stop() {
        System.out.println(this.getClass().getSimpleName() + ": operate stop");
    }
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class ClosingState extends AbstractLifeState {
    @Override
    public void open() {
        System.out.println(this.getClass().getSimpleName() + ": operate open");
        context.setLifeState(Context.openingState);
    }

    @Override
    public void close() {
        System.out.println(this.getClass().getSimpleName() + ": operate close");
    }

    @Override
    public void run() {
        System.out.println(this.getClass().getSimpleName() + ": operate run");
        context.setLifeState(Context.runningState);
    }

    @Override
    public void stop() {
        System.out.println(this.getClass().getSimpleName() + ": operate stop");
        context.setLifeState(Context.stoppingState);
    }
}
```

```java
package design.pattern.state;

/**
 * Created by Aaron on 15/9/20.
 */
public class User {
    public static void main(String[] args) {
        Context context = new Context();
        context.setLifeState(Context.closingState);
        context.open();  // ClosingState: operate open
        context.run();   // OpeningState: operate run
        context.close(); // RunningState: operate close
    }
}
```

## Strategy Pattern (Strategy)

GoF defines it as:

> Define a family of algorithms, encapsulate each one, and make them interchangeable.

Our example is a calculator in which the concrete calculation (addition, subtraction, division) is a strategy that can be swapped at runtime.

Sample code:

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public interface ICalculate {
    double calculate(double a, double b);
}
```

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public class AddCalculate implements ICalculate {
    public double calculate(double a, double b) {
        return a + b;
    }
}
```

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public class DivisionCalculate implements ICalculate {
    public double calculate(double a, double b) {
        return a / b;
    }
}
```

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public class SubtractionCalculate implements ICalculate {
    public double calculate(double a, double b) {
        return a - b;
    }
}
```

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public class Context {
    private ICalculate calculate;

    public Context(ICalculate calculate) {
        this.calculate = calculate;
    }

    public ICalculate getCalculate() {
        return calculate;
    }

    public void setCalculate(ICalculate calculate) {
        this.calculate = calculate;
    }

    public double calculate(double a, double b) {
        return this.calculate.calculate(a, b);
    }
}
```

```java
package design.pattern.strategy;

/**
 * Created by Aaron on 15/9/21.
 */
public class User {
    public static void main(String args[]) {
        Context context = new Context(new AddCalculate());
        double result = context.calculate(20.0, 30.3);
        System.out.println(result);
        context.setCalculate(new DivisionCalculate());
        System.out.println(context.calculate(20, 40));
    }
}
```

Output:

```
50.3
0.5
```

## Chain of Responsibility Pattern (Chain of Responsibility)

GoF defines it as:

> Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.

Typical examples are the interceptors in Spring and the Filters in the Servlet API; both are ready-made implementations of the Chain of Responsibility pattern.

Sample code:
```java
package design.pattern.responsibilitychain;

/**
 * Created by Aaron on 15/9/29.
 */
public abstract class Handler {
    protected Handler successor;

    public abstract void process();

    public Handler getSuccessor() {
        return successor;
    }

    public void setSuccessor(Handler successor) {
        this.successor = successor;
    }
}
```

```java
package design.pattern.responsibilitychain;

/**
 * Created by Aaron on 15/9/29.
 */
public class LoggerHandler extends Handler {
    @Override
    public void process() {
        if (getSuccessor() != null) {
            System.out.println(getClass().getSimpleName() + ": handles the request and forwards it to the next handler");
            getSuccessor().process();
        } else {
            System.out.println(getClass().getSimpleName() + ": handles the request; there is no next handler");
        }
    }
}
```

```java
package design.pattern.responsibilitychain;

/**
 * Created by Aaron on 15/9/29.
 */
public class ValidateHandler extends Handler {
    @Override
    public void process() {
        if (getSuccessor() != null) {
            System.out.println(getClass().getSimpleName() + ": handles the request and forwards it to the next handler");
            getSuccessor().process();
        } else {
            System.out.println(getClass().getSimpleName() + ": handles the request; there is no next handler");
        }
    }
}
```

```java
package design.pattern.responsibilitychain;

/**
 * Created by Aaron on 15/9/29.
 */
public class User {
    public static void main(String[] args) {
        Handler validate = new ValidateHandler();
        Handler handler = new LoggerHandler();
        validate.setSuccessor(handler);
        validate.process();
    }
}
```

Output:

```
ValidateHandler: handles the request and forwards it to the next handler
LoggerHandler: handles the request; there is no next handler
```

## Command Pattern (Command)

GoF defines it as:

> Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.

AudioPlayer example (adapted)

Little Julia has a cassette player with play, rewind and stop functions. The player's keypad is the Invoker role; Julia is the client role; and the player itself is the receiver role. The Command class plays the abstract command role, while PlayCommand, StopCommand and RewindCommand are the concrete command classes. Julia does not need to know how play, rewind and stop are actually carried out; the details of executing those commands are handled entirely by the keypad (Keypad). Julia only needs to press the corresponding key.

The cassette player is a typical instance of the Command pattern: its keys decouple the client from the details of operating the player.

Sample code:
```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class AudioPlay {
    public void play() {
        System.out.println("play....");
    }

    public void rewind() {
        System.out.println("rewind....");
    }

    public void stop() {
        System.out.println("stop....");
    }
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public interface Command {
    void execute();
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class PlayCommand implements Command {
    private AudioPlay audioPlay;

    public PlayCommand(AudioPlay audioPlay) {
        this.audioPlay = audioPlay;
    }

    public void execute() {
        this.audioPlay.play();
    }
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class RewindCommand implements Command {
    private AudioPlay audioPlay;

    public RewindCommand(AudioPlay audioPlay) {
        this.audioPlay = audioPlay;
    }

    public void execute() {
        this.audioPlay.rewind();
    }
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class StopCommand implements Command {
    private AudioPlay audioPlay;

    public StopCommand(AudioPlay audioPlay) {
        this.audioPlay = audioPlay;
    }

    public void execute() {
        this.audioPlay.stop();
    }
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class Keypad {
    private Command playCommand;
    private Command rewindCommand;
    private Command stopCommand;

    public void setPlayCommand(Command playCommand) {
        this.playCommand = playCommand;
    }

    public void setRewindCommand(Command rewindCommand) {
        this.rewindCommand = rewindCommand;
    }

    public void setStopCommand(Command stopCommand) {
        this.stopCommand = stopCommand;
    }

    public void play() {
        playCommand.execute();
    }

    public void rewind() {
        rewindCommand.execute();
    }

    public void stop() {
        stopCommand.execute();
    }
}
```

```java
package design.pattern.command;

/**
 * Created by Aaron on 15/9/29.
 */
public class User {
    public static void main(String[] args) {
        AudioPlay audioPlay = new AudioPlay();
        PlayCommand playCommand = new PlayCommand(audioPlay);
        RewindCommand rewindCommand = new RewindCommand(audioPlay);
        StopCommand stopCommand = new StopCommand(audioPlay);

        Keypad keypad = new Keypad();
        keypad.setPlayCommand(playCommand);
        keypad.setRewindCommand(rewindCommand);
        keypad.setStopCommand(stopCommand);

        keypad.play();
        keypad.rewind();
        keypad.stop();
        keypad.play();
        keypad.stop();
    }
}
```

Output:

```
play....
rewind....
stop....
play....
stop....
```

## Visitor Pattern (Visitor)

GoF defines it as:

> Represent an operation to be performed on the elements of an object structure. Visitor lets you define a new operation without changing the classes of the elements on which it operates.

Sample code:
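A minimal, self-contained sketch (the names ShapeVisitor, AreaVisitor, Circle and Rect are invented for this illustration): each element accepts a visitor, and a new operation, area computation, is added without touching the element classes.

```java
import java.util.Arrays;
import java.util.List;

// Element interface: each concrete element accepts a visitor.
interface Shape {
    void accept(ShapeVisitor visitor);
}

class Circle implements Shape {
    final double radius = 2.0;
    public void accept(ShapeVisitor visitor) { visitor.visit(this); }
}

class Rect implements Shape {
    final double width = 3.0, height = 4.0;
    public void accept(ShapeVisitor visitor) { visitor.visit(this); }
}

// Visitor interface: one visit method per concrete element type.
interface ShapeVisitor {
    void visit(Circle circle);
    void visit(Rect rect);
}

// A new operation (area calculation) defined outside the element classes.
class AreaVisitor implements ShapeVisitor {
    double total = 0;
    public void visit(Circle c) { total += Math.PI * c.radius * c.radius; }
    public void visit(Rect r)   { total += r.width * r.height; }
}

public class VisitorDemo {
    public static void main(String[] args) {
        List<Shape> shapes = Arrays.asList(new Circle(), new Rect());
        AreaVisitor area = new AreaVisitor();
        for (Shape s : shapes) s.accept(area);
        System.out.println("total area: " + area.total); // circle area + rectangle area
    }
}
```

Adding another operation (say, perimeter) only requires a new ShapeVisitor implementation; the Shape classes stay untouched.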
## Mediator Pattern (Mediator)

GoF defines it as:

> Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.

Sample code:
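A minimal sketch (ChatRoom and Member are illustrative names): members never hold references to each other; every message is routed through the ChatRoom mediator, which decides who receives it.

```java
import java.util.ArrayList;
import java.util.List;

// Mediator: colleagues communicate only through this object.
class ChatRoom {
    private final List<Member> members = new ArrayList<>();

    void register(Member m) {
        members.add(m);
        m.room = this;
    }

    // Deliver a message to every member except the sender.
    void broadcast(Member from, String msg) {
        for (Member m : members)
            if (m != from) m.receive(from.name + ": " + msg);
    }
}

// Colleague: knows only the mediator, not the other members.
class Member {
    final String name;
    ChatRoom room;
    final List<String> inbox = new ArrayList<>();

    Member(String name) { this.name = name; }

    void send(String msg) { room.broadcast(this, msg); }

    void receive(String msg) { inbox.add(msg); }
}

public class MediatorDemo {
    public static void main(String[] args) {
        ChatRoom room = new ChatRoom();
        Member alice = new Member("alice"), bob = new Member("bob");
        room.register(alice);
        room.register(bob);
        alice.send("hello");
        System.out.println(bob.inbox); // prints [alice: hello]
    }
}
```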
## Memento Pattern (Memento)

GoF defines it as:

> Without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later.

Sample code:
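A minimal sketch (Editor and Memento are illustrative names): the originator snapshots its state into a memento, and a caretaker stack supports undo without the caretaker ever inspecting the state.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Memento: an opaque snapshot of the originator's state.
class Memento {
    private final String state;
    Memento(String state) { this.state = state; }
    String getState() { return state; }
}

// Originator: creates snapshots of its state and restores from them.
class Editor {
    private String text = "";
    void type(String s) { text += s; }
    String getText() { return text; }
    Memento save() { return new Memento(text); }
    void restore(Memento m) { text = m.getState(); }
}

public class MementoDemo {
    public static void main(String[] args) {
        Editor editor = new Editor();
        Deque<Memento> history = new ArrayDeque<>(); // caretaker
        editor.type("hello");
        history.push(editor.save());
        editor.type(" world");
        editor.restore(history.pop());               // undo the last edit
        System.out.println(editor.getText());        // prints hello
    }
}
```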
## Iterator Pattern (Iterator)

GoF defines it as:

> Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
Sample code:
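A minimal sketch (NameRepository is an illustrative name, and a hand-rolled Iterator interface is used instead of java.util.Iterator to show the mechanics): the aggregate hands out an iterator so clients can traverse it without seeing the backing array.

```java
// Iterator interface: sequential access, nothing else.
interface Iterator<T> {
    boolean hasNext();
    T next();
}

// Aggregate: its array storage is never exposed to clients.
class NameRepository {
    private final String[] names = {"circle", "rect", "line"};

    Iterator<String> iterator() {
        return new Iterator<String>() {
            private int index = 0;
            public boolean hasNext() { return index < names.length; }
            public String next() { return names[index++]; }
        };
    }
}

public class IteratorDemo {
    public static void main(String[] args) {
        NameRepository repo = new NameRepository();
        for (Iterator<String> it = repo.iterator(); it.hasNext(); )
            System.out.println(it.next()); // prints circle, rect, line in order
    }
}
```

In practice Java collections already follow this pattern via java.util.Iterator and the enhanced for loop.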
## Interpreter Pattern (Interpreter)

GoF defines it as:

> Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.

Sample code:
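A minimal sketch for a toy grammar of integer addition and subtraction (all class names are illustrative): each grammar rule becomes a class with an interpret method, and a sentence is an abstract syntax tree of these classes.

```java
// Grammar: Expr ::= Number | Expr '+' Expr | Expr '-' Expr
interface Expression {
    int interpret();
}

// Terminal expression: a literal number.
class NumberExpression implements Expression {
    private final int value;
    NumberExpression(int value) { this.value = value; }
    public int interpret() { return value; }
}

// Nonterminal expression: addition of two sub-expressions.
class AddExpression implements Expression {
    private final Expression left, right;
    AddExpression(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}

// Nonterminal expression: subtraction of two sub-expressions.
class SubtractExpression implements Expression {
    private final Expression left, right;
    SubtractExpression(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() - right.interpret(); }
}

public class InterpreterDemo {
    public static void main(String[] args) {
        // Abstract syntax tree for (3 + 5) - 2
        Expression expr = new SubtractExpression(
                new AddExpression(new NumberExpression(3), new NumberExpression(5)),
                new NumberExpression(2));
        System.out.println(expr.interpret()); // prints 6
    }
}
```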

ThreadPoolExecutor Source Code Analysis
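Before walking through the source, a minimal usage sketch may help (the pool parameters below, 2 core threads, a maximum of 4, a 30-second keep-alive and a bounded queue of 8 tasks, are arbitrary choices for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // 2 core threads, up to 4 threads total, idle extras die after 30s,
        // and a bounded queue of 8 tasks; beyond that, tasks are rejected
        // by the default AbortPolicy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(8));

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() ->
                    System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

How these constructor parameters interact is exactly what the annotated source below explains.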

/*
* ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
*/

/*
*
* Written by Doug Lea with assistance from members of JCP JSR-166
* Expert Group and released to the public domain, as explained at
* http://creativecommons.org/publicdomain/zero/1.0/
*/

package java.util.concurrent;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.*;

/**
* An ExecutorService that executes each submitted task using one of a
* pool of threads, normally configured using the Executors factory
* methods. Thread pools address two different problems: they usually
* provide improved performance when executing large numbers of
* asynchronous tasks, due to reduced per-task invocation overhead, and
* they provide a means of bounding and managing the resources,
* including threads, consumed when executing a collection of tasks.
* Each ThreadPoolExecutor also maintains some basic statistics, such as
* the number of completed tasks.
*
* Although this class provides many adjustable parameters and
* extensibility hooks, programmers usually prefer the Executors factory
* methods Executors.newCachedThreadPool (an unbounded pool with
* automatic thread reclamation), Executors.newFixedThreadPool (a
* fixed-size pool) and Executors.newSingleThreadExecutor (a single
* background thread), which preconfigure settings for the most common
* usage scenarios. Otherwise, this class can be configured by hand to
* the same effect.
*
* <dl>
*
* <dt>Core and maximum pool sizes</dt>
*
* <dd>A ThreadPoolExecutor automatically adjusts the pool size
* according to the configured corePoolSize and maximumPoolSize; the
* current size can be inspected with getPoolSize.
*
* When a new task is submitted in method {@link #execute(Runnable)},
* and fewer than corePoolSize threads are running, a new thread is
* created to handle the request, even if other worker threads are
* idle. If there are more than corePoolSize but less than
* maximumPoolSize threads running, a new thread will be created only
* if the queue is full. By setting corePoolSize and maximumPoolSize
* the same, you create a fixed-size thread pool. By setting
* maximumPoolSize to an essentially unbounded value such as {@code
* Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
* number of concurrent tasks. Most typically, core and maximum pool
* sizes are set only upon construction, but they may also be changed
* dynamically using {@link #setCorePoolSize} and {@link
* #setMaximumPoolSize}. </dd>
*
* <dt>On-demand construction</dt>
*
* <dd>By default, even core threads are initially created and
* started only when new tasks arrive, but this can be overridden
* dynamically using method {@link #prestartCoreThread} or {@link
* #prestartAllCoreThreads}. You probably want to prestart threads if
* you construct the pool with a non-empty queue. </dd>
*
* <dt>Creating new threads</dt>
*
* <dd>New threads are created using a {@link ThreadFactory}. If not
* otherwise specified, a {@link Executors#defaultThreadFactory} is
* used, that creates threads to all be in the same {@link
* ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
* non-daemon status. By supplying a different ThreadFactory, you can
* alter the thread's name, thread group, priority, daemon status,
* etc. If a {@code ThreadFactory} fails to create a thread when asked
* by returning null from {@code newThread}, the executor will
* continue, but might not be able to execute any tasks. Threads
* should possess the "modifyThread" {@code RuntimePermission}. If
* worker threads or other threads using the pool do not possess this
* permission, service may be degraded: configuration changes may not
* take effect in a timely manner, and a shutdown pool may remain in a
* state in which termination is possible but not completed.</dd>
*
* <dt>Keep-alive times</dt>
*
* <dd>If the pool currently has more than corePoolSize threads,
* excess threads will be terminated if they have been idle for more
* than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
* This provides a means of reducing resource consumption when the
* pool is not being actively used. If the pool becomes more active
* later, new threads will be constructed. This parameter can also be
* changed dynamically using method {@link #setKeepAliveTime(long,
* TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link
* TimeUnit#NANOSECONDS} effectively disables idle threads from ever
* terminating prior to shut down. By default, the keep-alive policy
* applies only when there are more than corePoolSize threads. But
* method {@link #allowCoreThreadTimeOut(boolean)} can be used to
* apply this time-out policy to core threads as well, so long as the
* keepAliveTime value is non-zero. </dd>
*
* <dt>Queuing</dt>
*
* <dd>Any {@link BlockingQueue} may be used to transfer and hold
* submitted tasks. The use of this queue interacts with pool sizing:
*
* <ul>
*
* <li> If fewer than corePoolSize threads are running, the Executor
* always prefers adding a new thread
* rather than queuing.</li>
*
* <li> If corePoolSize or more threads are running, the Executor
* always prefers queuing a request rather than adding a new
* thread.</li>
*
* <li> If a request cannot be queued, a new thread is created unless
* this would exceed maximumPoolSize, in which case, the task will be
* rejected.</li>
*
* </ul>
*
* There are three general strategies for queuing:
* <ol>
*
* <li> <em> Direct handoffs.</em> A good default choice for a work
* queue is a {@link SynchronousQueue} that hands off tasks to threads
* without otherwise holding them. Here, an attempt to queue a task
* will fail if no threads are immediately available to run it, so a
* new thread will be constructed. This policy avoids lockups when
* handling sets of requests that might have internal dependencies.
* Direct handoffs generally require unbounded maximumPoolSizes to
* avoid rejection of new submitted tasks. This in turn admits the
* possibility of unbounded thread growth when commands continue to
* arrive on average faster than they can be processed. </li>
*
* <li><em> Unbounded queues.</em> Using an unbounded queue (for
* example a {@link LinkedBlockingQueue} without a predefined
* capacity) will cause new tasks to wait in the queue when all
* corePoolSize threads are busy. Thus, no more than corePoolSize
* threads will ever be created. (And the value of the maximumPoolSize
* therefore doesn't have any effect.) This may be appropriate when
* each task is completely independent of others, so tasks cannot
* affect each others execution; for example, in a web page server.
* While this style of queuing can be useful in smoothing out
* transient bursts of requests, it admits the possibility of
* unbounded work queue growth when commands continue to arrive on
* average faster than they can be processed. </li>
*
* <li><em>Bounded queues.</em> A bounded queue (for example, an
* {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
* used with finite maximumPoolSizes, but can be more difficult to
* tune and control. Queue sizes and maximum pool sizes may be traded
* off for each other: Using large queues and small pools minimizes
* CPU usage, OS resources, and context-switching overhead, but can
* lead to artificially low throughput. If tasks frequently block (for
* example if they are I/O bound), a system may be able to schedule
* time for more threads than you otherwise allow. Use of small queues
* generally requires larger pool sizes, which keeps CPUs busier but
* may encounter unacceptable scheduling overhead, which also
* decreases throughput. </li>
*
* </ol>
*
* </dd>
*
* <dt>Rejected tasks</dt>
*
* <dd>New tasks submitted in method {@link #execute(Runnable)} will be
* <em>rejected</em> when the Executor has been shut down, and also when
* the Executor uses finite bounds for both maximum threads and work queue
* capacity, and is saturated. In either case, the {@code execute} method
* invokes the {@link
* RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
* method of its {@link RejectedExecutionHandler}. Four predefined handler
* policies are provided:
*
* <ol>
*
* <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
* handler throws a runtime {@link RejectedExecutionException} upon
* rejection. </li>
*
* <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
* that invokes {@code execute} itself runs the task. This provides a
* simple feedback control mechanism that will slow down the rate that
* new tasks are submitted. </li>
*
* <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
* cannot be executed is simply dropped. </li>
*
* <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
* executor is not shut down, the task at the head of the work queue
* is dropped, and then execution is retried (which can fail again,
* causing this to be repeated.) </li>
*
* </ol>
*
* It is possible to define and use other kinds of {@link
* RejectedExecutionHandler} classes. Doing so requires some care
* especially when policies are designed to work only under particular
* capacity or queuing policies. </dd>
*
* <dt>Hook methods</dt>
*
* <dd>This class provides {@code protected} overridable
* {@link #beforeExecute(Thread, Runnable)} and
* {@link #afterExecute(Runnable, Throwable)} methods that are called
* before and after execution of each task. These can be used to
* manipulate the execution environment; for example, reinitializing
* ThreadLocals, gathering statistics, or adding log entries.
* Additionally, method {@link #terminated} can be overridden to perform
* any special processing that needs to be done once the Executor has
* fully terminated.
*
* <p>If hook or callback methods throw exceptions, internal worker
* threads may in turn fail and abruptly terminate.</dd>
*
* <dt>Queue maintenance</dt>
*
* <dd>Method {@link #getQueue()} allows access to the work queue
* for purposes of monitoring and debugging. Use of this method for
* any other purpose is strongly discouraged. Two supplied methods,
* {@link #remove(Runnable)} and {@link #purge} are available to
* assist in storage reclamation when large numbers of queued tasks
* become cancelled.</dd>
*
* <dt>Finalization</dt>
*
* <dd>A pool that is no longer referenced in a program <em>AND</em>
* has no remaining threads will be {@code shutdown} automatically. If
* you would like to ensure that unreferenced pools are reclaimed even
* if users forget to call {@link #shutdown}, then you must arrange
* that unused threads eventually die, by setting appropriate
* keep-alive times, using a lower bound of zero core threads and/or
* setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
*
* </dl>
*
* <p><b>Extension example</b>. Most extensions of this class
* override one or more of the protected hook methods. For example,
* here is a subclass that adds a simple pause/resume feature:
*
* <pre> {@code
* class PausableThreadPoolExecutor extends ThreadPoolExecutor {
* private boolean isPaused;
* private ReentrantLock pauseLock = new ReentrantLock();
* private Condition unpaused = pauseLock.newCondition();
*
* public PausableThreadPoolExecutor(...) { super(...); }
*
* protected void beforeExecute(Thread t, Runnable r) {
* super.beforeExecute(t, r);
* pauseLock.lock();
* try {
* while (isPaused) unpaused.await();
* } catch (InterruptedException ie) {
* t.interrupt();
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void pause() {
* pauseLock.lock();
* try {
* isPaused = true;
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void resume() {
* pauseLock.lock();
* try {
* isPaused = false;
* unpaused.signalAll();
* } finally {
* pauseLock.unlock();
* }
* }
* }}</pre>
*
* @since 1.5
* @author Doug Lea
*/
public class ThreadPoolExecutor extends AbstractExecutorService {
/**
* The main pool control state, ctl, is an atomic integer packing
* two conceptual fields
* workerCount, indicating the effective number of threads
* runState, indicating whether running, shutting down etc
*
* In order to pack them into one int, we limit workerCount to
* (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
* billion) otherwise representable. If this is ever an issue in
* the future, the variable can be changed to be an AtomicLong,
* and the shift/mask constants below adjusted. But until the need
* arises, this code is a bit faster and simpler using an int.
*
* The workerCount is the number of workers that have been
* permitted to start and not permitted to stop. The value may be
* transiently different from the actual number of live threads,
* for example when a ThreadFactory fails to create a thread when
* asked, and when exiting threads are still performing
* bookkeeping before terminating. The user-visible pool size is
* reported as the current size of the workers set.
*
* The runState provides the main lifecycle control, taking on values:
*
* RUNNING: Accept new tasks and process queued tasks
* SHUTDOWN: Don't accept new tasks, but process queued tasks
* STOP: Don't accept new tasks, don't process queued tasks,
* and interrupt in-progress tasks
* TIDYING: All tasks have terminated, workerCount is zero,
* the thread transitioning to state TIDYING
* will run the terminated() hook method
* TERMINATED: terminated() has completed
*
* The numerical order among these values matters, to allow
* ordered comparisons. The runState monotonically increases over
* time, but need not hit each state. The transitions are:
*
* RUNNING -> SHUTDOWN
* On invocation of shutdown(), perhaps implicitly in finalize()
* (RUNNING or SHUTDOWN) -> STOP
* On invocation of shutdownNow()
* SHUTDOWN -> TIDYING
* When both queue and pool are empty
* STOP -> TIDYING
* When pool is empty
* TIDYING -> TERMINATED
* When the terminated() hook method has completed
*
* Threads waiting in awaitTermination() will return when the
* state reaches TERMINATED.
*
* Detecting the transition from SHUTDOWN to TIDYING is less
* straightforward than you'd like because the queue may become
* empty after non-empty and vice versa during SHUTDOWN state, but
* we can only terminate if, after seeing that it is empty, we see
* that workerCount is 0 (which sometimes entails a recheck -- see
* below).
*/
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    /*
     * Bit field accessors that don't require unpacking ctl.
     * These depend on the bit layout and on workerCount being never negative.
     */

    private static boolean runStateLessThan(int c, int s) {
        return c < s;
    }

    private static boolean runStateAtLeast(int c, int s) {
        return c >= s;
    }

    private static boolean isRunning(int c) {
        return c < SHUTDOWN;
    }

    /**
     * Attempts to CAS-increment the workerCount field of ctl.
     */
    private boolean compareAndIncrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect + 1);
    }

    /**
     * Attempts to CAS-decrement the workerCount field of ctl.
     */
    private boolean compareAndDecrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect - 1);
    }

    /**
     * Decrements the workerCount field of ctl. This is called only on
     * abrupt termination of a thread (see processWorkerExit). Other
     * decrements are performed within getTask.
     */
    private void decrementWorkerCount() {
        do {} while (! compareAndDecrementWorkerCount(ctl.get()));
    }

/**
* The queue used for holding tasks and handing off to worker
* threads. We do not require that workQueue.poll() returning
* null necessarily means that workQueue.isEmpty(), so rely
* solely on isEmpty to see if the queue is empty (which we must
* do for example when deciding whether to transition from
* SHUTDOWN to TIDYING). This accommodates special-purpose
* queues such as DelayQueues for which poll() is allowed to
* return null even if it may later return non-null when delays
* expire.
*/
private final BlockingQueue<Runnable> workQueue;

/**
* Lock held on access to workers set and related bookkeeping.
* While we could use a concurrent set of some sort, it turns out
* to be generally preferable to use a lock. Among the reasons is
* that this serializes interruptIdleWorkers, which avoids
* unnecessary interrupt storms, especially during shutdown.
* Otherwise exiting threads would concurrently interrupt those
* that have not yet interrupted. It also simplifies some of the
* associated statistics bookkeeping of largestPoolSize etc. We
* also hold mainLock on shutdown and shutdownNow, for the sake of
* ensuring workers set is stable while separately checking
* permission to interrupt and actually interrupting.
*/
private final ReentrantLock mainLock = new ReentrantLock();

/**
* Set containing all worker threads in pool. Accessed only when
* holding mainLock.
*/
private final HashSet<Worker> workers = new HashSet<Worker>();

/**
* Wait condition to support awaitTermination
*/
private final Condition termination = mainLock.newCondition();

/**
* Tracks largest attained pool size. Accessed only under
* mainLock.
*/
private int largestPoolSize;

/**
* Counter for completed tasks. Updated only on termination of
* worker threads. Accessed only under mainLock.
*/
private long completedTaskCount;

/*
* All user control parameters are declared as volatiles so that
* ongoing actions are based on freshest values, but without need
* for locking, since no internal invariants depend on them
* changing synchronously with respect to other actions.
*/

/**
* Factory for new threads. All threads are created using this
* factory (via method addWorker). All callers must be prepared
* for addWorker to fail, which may reflect a system or user's
* policy limiting the number of threads. Even though it is not
* treated as an error, failure to create threads may result in
* new tasks being rejected or existing ones remaining stuck in
* the queue.
*
* We go further and preserve pool invariants even in the face of
* errors such as OutOfMemoryError, that might be thrown while
* trying to create threads. Such errors are rather common due to
* the need to allocate a native stack in Thread.start, and users
* will want to perform clean pool shutdown to clean up. There
* will likely be enough memory available for the cleanup code to
* complete without encountering yet another OutOfMemoryError.
*/
private volatile ThreadFactory threadFactory;

/**
* Handler called when saturated or shutdown in execute.
*/
private volatile RejectedExecutionHandler handler;

/**
* Timeout in nanoseconds for idle threads waiting for work.
* Threads use this timeout when there are more than corePoolSize
* present or if allowCoreThreadTimeOut. Otherwise they wait
* forever for new work.
*/
private volatile long keepAliveTime;

/**
* If false (default), core threads stay alive even when idle.
* If true, core threads use keepAliveTime to time out waiting
* for work.
*/
private volatile boolean allowCoreThreadTimeOut;

/**
* Core pool size is the minimum number of workers to keep alive
* (and not allow to time out etc) unless allowCoreThreadTimeOut
* is set, in which case the minimum is zero.
*/
private volatile int corePoolSize;

/**
* Maximum pool size. Note that the actual maximum is internally
* bounded by CAPACITY.
*/
private volatile int maximumPoolSize;

/**
* The default rejected execution handler
*/
private static final RejectedExecutionHandler defaultHandler =
new AbortPolicy();

/**
* Permission required for callers of shutdown and shutdownNow.
* We additionally require (see checkShutdownAccess) that callers
* have permission to actually interrupt threads in the worker set
* (as governed by Thread.interrupt, which relies on
* ThreadGroup.checkAccess, which in turn relies on
* SecurityManager.checkAccess). Shutdowns are attempted only if
* these checks pass.
*
* All actual invocations of Thread.interrupt (see
* interruptIdleWorkers and interruptWorkers) ignore
* SecurityExceptions, meaning that the attempted interrupts
* silently fail. In the case of shutdown, they should not fail
* unless the SecurityManager has inconsistent policies, sometimes
* allowing access to a thread and sometimes not. In such cases,
* failure to actually interrupt threads may disable or delay full
* termination. Other uses of interruptIdleWorkers are advisory,
* and failure to actually interrupt will merely delay response to
* configuration changes so is not handled exceptionally.
*/
private static final RuntimePermission shutdownPerm =
new RuntimePermission("modifyThread");

/**
* Class Worker mainly maintains interrupt control state for
* threads running tasks, along with other minor bookkeeping.
* This class opportunistically extends AbstractQueuedSynchronizer
* to simplify acquiring and releasing a lock surrounding each
* task execution. This protects against interrupts that are
* intended to wake up a worker thread waiting for a task from
* instead interrupting a task being run. We implement a simple
* non-reentrant mutual exclusion lock rather than use
* ReentrantLock because we do not want worker tasks to be able to
* reacquire the lock when they invoke pool control methods like
* setCorePoolSize. Additionally, to suppress interrupts until
* the thread actually starts running tasks, we initialize lock
* state to a negative value, and clear it upon start (in
* runWorker).
*/
private final class Worker
extends AbstractQueuedSynchronizer
implements Runnable
{
/**
* This class will never be serialized, but we provide a
* serialVersionUID to suppress a javac warning.
*/
private static final long serialVersionUID = 6138294804551838833L;

/** Thread this worker is running in. Null if factory fails. */
final Thread thread;
/** Initial task to run. Possibly null. */
Runnable firstTask;
/** Per-thread task counter */
volatile long completedTasks;

/**
* Creates with given first task and thread from ThreadFactory.
* @param firstTask the first task (null if none)
*/
Worker(Runnable firstTask) {
setState(-1); // inhibit interrupts until runWorker
this.firstTask = firstTask;
this.thread = getThreadFactory().newThread(this);
}

/** Delegates main run loop to outer runWorker */
public void run() {
runWorker(this);
}

// Lock methods
//
// The value 0 represents the unlocked state.
// The value 1 represents the locked state.

protected boolean isHeldExclusively() {
return getState() != 0;
}

protected boolean tryAcquire(int unused) {
if (compareAndSetState(0, 1)) {
setExclusiveOwnerThread(Thread.currentThread());
return true;
}
return false;
}

protected boolean tryRelease(int unused) {
setExclusiveOwnerThread(null);
setState(0);
return true;
}

public void lock() { acquire(1); }
public boolean tryLock() { return tryAcquire(1); }
public void unlock() { release(1); }
public boolean isLocked() { return isHeldExclusively(); }

void interruptIfStarted() {
Thread t;
if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
try {
t.interrupt();
} catch (SecurityException ignore) {
}
}
}
}
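Worker's locking scheme can be isolated into a standalone sketch. The class below is not part of the JDK source; it only demonstrates the non-reentrant AQS mutex idiom that Worker builds on (state 0 = unlocked, 1 = locked), and why a second acquire attempt by the owner fails instead of reentering:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal sketch of Worker's non-reentrant mutex idiom.
class NonReentrantMutex extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int unused) {
        // Succeeds only via CAS from 0 to 1, so a second acquire by
        // the owning thread simply fails rather than reentering.
        if (compareAndSetState(0, 1)) {
            setExclusiveOwnerThread(Thread.currentThread());
            return true;
        }
        return false;
    }

    @Override
    protected boolean tryRelease(int unused) {
        setExclusiveOwnerThread(null);
        setState(0);
        return true;
    }

    public boolean tryLock() { return tryAcquire(1); }
    public void unlock()     { release(1); }

    public static void main(String[] args) {
        NonReentrantMutex m = new NonReentrantMutex();
        System.out.println(m.tryLock()); // true: first acquire succeeds
        System.out.println(m.tryLock()); // false: non-reentrant, even for the owner
        m.unlock();
        System.out.println(m.tryLock()); // true: available again after unlock
    }
}
```

This is exactly why worker tasks cannot deadlock the pool's own control methods into reentry: a held Worker lock reliably reports "busy" to `tryLock`, which `interruptIdleWorkers` uses to distinguish idle workers from ones running tasks.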

/*
* Methods for setting control state
*/

/**
* Transitions runState to given target, or leaves it alone if
* already at least the given target.
*
* @param targetState the desired state, either SHUTDOWN or STOP
* (but not TIDYING or TERMINATED -- use tryTerminate for that)
*/
private void advanceRunState(int targetState) {
for (;;) {
int c = ctl.get();
if (runStateAtLeast(c, targetState) ||
ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
break;
}
}

/**
* Transitions to TERMINATED state if either (SHUTDOWN and pool
* and queue empty) or (STOP and pool empty). If otherwise
* eligible to terminate but workerCount is nonzero, interrupts an
* idle worker to ensure that shutdown signals propagate. This
* method must be called following any action that might make
* termination possible -- reducing worker count or removing tasks
* from the queue during shutdown. The method is non-private to
* allow access from ScheduledThreadPoolExecutor.
*/
final void tryTerminate() {
for (;;) {
int c = ctl.get();
if (isRunning(c) ||
runStateAtLeast(c, TIDYING) ||
(runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
return;
if (workerCountOf(c) != 0) { // Eligible to terminate
interruptIdleWorkers(ONLY_ONE);
return;
}

final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
try {
terminated();
} finally {
ctl.set(ctlOf(TERMINATED, 0));
termination.signalAll();
}
return;
}
} finally {
mainLock.unlock();
}
// else retry on failed CAS
}
}

/*
* Methods for controlling interrupts to worker threads.
*/

/**
* If there is a security manager, makes sure caller has
* permission to shut down threads in general (see shutdownPerm).
* If this passes, additionally makes sure the caller is allowed
* to interrupt each worker thread. This might not be true even if
* first check passed, if the SecurityManager treats some threads
* specially.
*/
private void checkShutdownAccess() {
SecurityManager security = System.getSecurityManager();
if (security != null) {
security.checkPermission(shutdownPerm);
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (Worker w : workers)
security.checkAccess(w.thread);
} finally {
mainLock.unlock();
}
}
}

/**
* Interrupts all threads, even if active. Ignores SecurityExceptions
* (in which case some threads may remain uninterrupted).
*/
private void interruptWorkers() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (Worker w : workers)
w.interruptIfStarted();
} finally {
mainLock.unlock();
}
}

/**
* Interrupts threads that might be waiting for tasks (as
* indicated by not being locked) so they can check for
* termination or configuration changes. Ignores
* SecurityExceptions (in which case some threads may remain
* uninterrupted).
*
* @param onlyOne If true, interrupt at most one worker. This is
* called only from tryTerminate when termination is otherwise
* enabled but there are still other workers. In this case, at
* most one waiting worker is interrupted to propagate shutdown
* signals in case all threads are currently waiting.
* Interrupting any arbitrary thread ensures that newly arriving
* workers since shutdown began will also eventually exit.
* To guarantee eventual termination, it suffices to always
* interrupt only one idle worker, but shutdown() interrupts all
* idle workers so that redundant workers exit promptly, not
* waiting for a straggler task to finish.
*/
private void interruptIdleWorkers(boolean onlyOne) {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (Worker w : workers) {
Thread t = w.thread;
if (!t.isInterrupted() && w.tryLock()) {
try {
t.interrupt();
} catch (SecurityException ignore) {
} finally {
w.unlock();
}
}
if (onlyOne)
break;
}
} finally {
mainLock.unlock();
}
}

/**
* Common form of interruptIdleWorkers, to avoid having to
* remember what the boolean argument means.
*/
private void interruptIdleWorkers() {
interruptIdleWorkers(false);
}

private static final boolean ONLY_ONE = true;

/*
* Misc utilities, most of which are also exported to
* ScheduledThreadPoolExecutor
*/

/**
* Invokes the rejected execution handler for the given command.
* Package-protected for use by ScheduledThreadPoolExecutor.
*/
final void reject(Runnable command) {
handler.rejectedExecution(command, this);
}

/**
* Performs any further cleanup following run state transition on
* invocation of shutdown. A no-op here, but used by
* ScheduledThreadPoolExecutor to cancel delayed tasks.
*/
void onShutdown() {
}

/**
* State check needed by ScheduledThreadPoolExecutor to
* enable running tasks during shutdown.
*
* @param shutdownOK true if should return true if SHUTDOWN
*/
final boolean isRunningOrShutdown(boolean shutdownOK) {
int rs = runStateOf(ctl.get());
return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
}

/**
* Drains the task queue into a new list, normally using
* drainTo. But if the queue is a DelayQueue or any other kind of
* queue for which poll or drainTo may fail to remove some
* elements, it deletes them one by one.
*/
private List<Runnable> drainQueue() {
BlockingQueue<Runnable> q = workQueue;
ArrayList<Runnable> taskList = new ArrayList<Runnable>();
q.drainTo(taskList);
if (!q.isEmpty()) {
for (Runnable r : q.toArray(new Runnable[0])) {
if (q.remove(r))
taskList.add(r);
}
}
return taskList;
}
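The slow path in `drainQueue` matters precisely for queues like `DelayQueue`, whose `poll` and `drainTo` skip unexpired elements even though those elements are present. The standalone sketch below (not JDK code) reproduces the same drainTo-then-remove-one-by-one strategy and shows the fallback firing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DrainDemo {
    // A task whose delay never expires during the demo.
    static class Later implements Delayed {
        public long getDelay(TimeUnit unit) {
            return unit.convert(1, TimeUnit.HOURS);
        }
        public int compareTo(Delayed other) { return 0; }
    }

    // Same shape as ThreadPoolExecutor.drainQueue: drainTo first,
    // then delete any stragglers one by one.
    @SuppressWarnings("unchecked")
    static <T> List<T> drain(BlockingQueue<T> q) {
        List<T> taskList = new ArrayList<T>();
        q.drainTo(taskList);
        if (!q.isEmpty()) {
            for (Object r : q.toArray()) {
                if (q.remove(r))
                    taskList.add((T) r);
            }
        }
        return taskList;
    }

    public static void main(String[] args) {
        DelayQueue<Later> q = new DelayQueue<Later>();
        q.add(new Later());
        q.add(new Later());
        // drainTo alone would move nothing: both delays are unexpired,
        // so only the one-by-one fallback recovers the tasks.
        System.out.println(drain(q).size()); // 2
        System.out.println(q.isEmpty());     // true
    }
}
```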

/*
* Methods for creating, running and cleaning up after workers
*/

/**
* Checks if a new worker can be added with respect to current
* pool state and the given bound (either core or maximum). If so,
* the worker count is adjusted accordingly, and, if possible, a
* new worker is created and started, running firstTask as its
* first task. This method returns false if the pool is stopped or
* eligible to shut down. It also returns false if the thread
* factory fails to create a thread when asked. If the thread
* creation fails, either due to the thread factory returning
* null, or due to an exception (typically OutOfMemoryError in
* Thread.start()), we roll back cleanly.
*
* @param firstTask the task the new thread should run first (or
* null if none). Workers are created with an initial first task
* (in method execute()) to bypass queuing when there are fewer
* than corePoolSize threads (in which case we always start one),
* or when the queue is full (in which case we must bypass queue).
* Initially idle threads are usually created via
* prestartCoreThread or to replace other dying workers.
*
* @param core if true use corePoolSize as bound, else
* maximumPoolSize. (A boolean indicator is used here rather than a
* value to ensure reads of fresh values after checking other pool
* state).
* @return true if successful
*/
private boolean addWorker(Runnable firstTask, boolean core) {
retry:
for (;;) {
int c = ctl.get();
int rs = runStateOf(c);

// Check if queue empty only if necessary.
if (rs >= SHUTDOWN &&
! (rs == SHUTDOWN &&
firstTask == null &&
! workQueue.isEmpty()))
return false;

for (;;) {
int wc = workerCountOf(c);
if (wc >= CAPACITY ||
wc >= (core ? corePoolSize : maximumPoolSize))
return false;
if (compareAndIncrementWorkerCount(c))
break retry;
c = ctl.get(); // Re-read ctl
if (runStateOf(c) != rs)
continue retry;
// else CAS failed due to workerCount change; retry inner loop
}
}

boolean workerStarted = false;
boolean workerAdded = false;
Worker w = null;
try {
w = new Worker(firstTask);
final Thread t = w.thread;
if (t != null) {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
// Recheck while holding lock.
// Back out on ThreadFactory failure or if
// shut down before lock acquired.
int rs = runStateOf(ctl.get());

if (rs < SHUTDOWN ||
(rs == SHUTDOWN && firstTask == null)) {
if (t.isAlive()) // precheck that t is startable
throw new IllegalThreadStateException();
workers.add(w);
int s = workers.size();
if (s > largestPoolSize)
largestPoolSize = s;
workerAdded = true;
}
} finally {
mainLock.unlock();
}
if (workerAdded) {
t.start();
workerStarted = true;
}
}
} finally {
if (! workerStarted)
addWorkerFailed(w);
}
return workerStarted;
}

/**
* Rolls back the worker thread creation.
* - removes worker from workers, if present
* - decrements worker count
* - rechecks for termination, in case the existence of this
* worker was holding up termination
*/
private void addWorkerFailed(Worker w) {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
if (w != null)
workers.remove(w);
decrementWorkerCount();
tryTerminate();
} finally {
mainLock.unlock();
}
}

/**
* Performs cleanup and bookkeeping for a dying worker. Called
* only from worker threads. Unless completedAbruptly is set,
* assumes that workerCount has already been adjusted to account
* for exit. This method removes thread from worker set, and
* possibly terminates the pool or replaces the worker if either
* it exited due to user task exception or if fewer than
* corePoolSize workers are running or queue is non-empty but
* there are no workers.
*
* @param w the worker
* @param completedAbruptly if the worker died due to user exception
*/
private void processWorkerExit(Worker w, boolean completedAbruptly) {
if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
decrementWorkerCount();

final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
completedTaskCount += w.completedTasks;
workers.remove(w);
} finally {
mainLock.unlock();
}

tryTerminate();

int c = ctl.get();
if (runStateLessThan(c, STOP)) {
if (!completedAbruptly) {
int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
if (min == 0 && ! workQueue.isEmpty())
min = 1;
if (workerCountOf(c) >= min)
return; // replacement not needed
}
addWorker(null, false);
}
}

/**
* Performs blocking or timed wait for a task, depending on
* current configuration settings, or returns null if this worker
* must exit because of any of:
* 1. There are more than maximumPoolSize workers (due to
* a call to setMaximumPoolSize).
* 2. The pool is stopped.
* 3. The pool is shutdown and the queue is empty.
* 4. This worker timed out waiting for a task, and timed-out
* workers are subject to termination (that is,
* {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
* both before and after the timed wait, and if the queue is
* non-empty, this worker is not the last thread in the pool.
*
* @return task, or null if the worker must exit, in which case
* workerCount is decremented
*/
private Runnable getTask() {
boolean timedOut = false; // Did the last poll() time out?

for (;;) {
int c = ctl.get();
int rs = runStateOf(c);

// Check if queue empty only if necessary.
if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
decrementWorkerCount();
return null;
}

int wc = workerCountOf(c);

// Are workers subject to culling?
boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;

if ((wc > maximumPoolSize || (timed && timedOut))
&& (wc > 1 || workQueue.isEmpty())) {
if (compareAndDecrementWorkerCount(c))
return null;
continue;
}

try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
}
}
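`getTask` is where the keep-alive policy actually takes effect: when the timed `poll` returns null, the worker returns null and exits. A small observable illustration (the timings here are arbitrary, generous values for a sketch, not a test of the JDK):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class KeepAliveDemo {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize 0: every worker is subject to the timed poll.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 2, 100L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>());
        pool.execute(new Runnable() { public void run() { } });
        Thread.sleep(1000); // well past keepAliveTime
        // The idle worker's poll timed out in getTask and it exited.
        System.out.println(pool.getPoolSize()); // 0
        pool.shutdown();
    }
}
```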

/**
* Main worker run loop. Repeatedly gets tasks from queue and
* executes them, while coping with a number of issues:
*
* 1. We may start out with an initial task, in which case we
* don't need to get the first one. Otherwise, as long as pool is
* running, we get tasks from getTask. If it returns null then the
* worker exits due to changed pool state or configuration
* parameters. Other exits result from exception throws in
* external code, in which case completedAbruptly holds, which
* usually leads processWorkerExit to replace this thread.
*
* 2. Before running any task, the lock is acquired to prevent
* other pool interrupts while the task is executing, and then we
* ensure that unless pool is stopping, this thread does not have
* its interrupt set.
*
* 3. Each task run is preceded by a call to beforeExecute, which
* might throw an exception, in which case we cause thread to die
* (breaking loop with completedAbruptly true) without processing
* the task.
*
* 4. Assuming beforeExecute completes normally, we run the task,
* gathering any of its thrown exceptions to send to afterExecute.
* We separately handle RuntimeException, Error (both of which the
* specs guarantee that we trap) and arbitrary Throwables.
* Because we cannot rethrow Throwables within Runnable.run, we
* wrap them within Errors on the way out (to the thread's
* UncaughtExceptionHandler). Any thrown exception also
* conservatively causes thread to die.
*
* 5. After task.run completes, we call afterExecute, which may
* also throw an exception, which will also cause thread to
* die. According to JLS Sec 14.20, this exception is the one that
* will be in effect even if task.run throws.
*
* The net effect of the exception mechanics is that afterExecute
* and the thread's UncaughtExceptionHandler have as accurate
* information as we can provide about any problems encountered by
* user code.
*
* @param w the worker
*/
final void runWorker(Worker w) {
Thread wt = Thread.currentThread();
Runnable task = w.firstTask;
w.firstTask = null;
w.unlock(); // allow interrupts
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
if ((runStateAtLeast(ctl.get(), STOP) ||
(Thread.interrupted() &&
runStateAtLeast(ctl.get(), STOP))) &&
!wt.isInterrupted())
wt.interrupt();
try {
beforeExecute(wt, task);
Throwable thrown = null;
try {
task.run();
} catch (RuntimeException x) {
thrown = x; throw x;
} catch (Error x) {
thrown = x; throw x;
} catch (Throwable x) {
thrown = x; throw new Error(x);
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
} finally {
processWorkerExit(w, completedAbruptly);
}
}

// Public constructors and methods

/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters and default thread factory and rejected execution handler.
* It may be more convenient to use one of the {@link Executors} factory
* methods instead of this general purpose constructor.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), defaultHandler);
}

/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters and default rejected execution handler.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @param threadFactory the factory to use when the executor
* creates a new thread
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue}
* or {@code threadFactory} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
threadFactory, defaultHandler);
}

/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters and default thread factory.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @param handler the handler to use when execution is blocked
* because the thread bounds and queue capacities are reached
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue}
* or {@code handler} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
RejectedExecutionHandler handler) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), handler);
}

/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @param threadFactory the factory to use when the executor
* creates a new thread
* @param handler the handler to use when execution is blocked
* because the thread bounds and queue capacities are reached
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue}
* or {@code threadFactory} or {@code handler} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue,
ThreadFactory threadFactory,
RejectedExecutionHandler handler) {
if (corePoolSize < 0 ||
maximumPoolSize <= 0 ||
maximumPoolSize < corePoolSize ||
keepAliveTime < 0)
throw new IllegalArgumentException();
if (workQueue == null || threadFactory == null || handler == null)
throw new NullPointerException();
this.corePoolSize = corePoolSize;
this.maximumPoolSize = maximumPoolSize;
this.workQueue = workQueue;
this.keepAliveTime = unit.toNanos(keepAliveTime);
this.threadFactory = threadFactory;
this.handler = handler;
}
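For completeness, a hypothetical construction through this most general constructor; every parameter value below is an arbitrary choice for illustration, and the shorter overloads above merely fill in `Executors.defaultThreadFactory()` and/or `defaultHandler`:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ConstructDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<Runnable>(8),  // bounded work queue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy());// rejection handler
        for (int i = 0; i < 4; i++)
            pool.execute(new Runnable() { public void run() { } });
        pool.shutdown();
        System.out.println(pool.awaitTermination(5, TimeUnit.SECONDS)); // true
    }
}
```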

/**
* Executes the given task sometime in the future. The task
* may execute in a new thread or in an existing pooled thread.
*
* If the task cannot be submitted for execution, either because this
* executor has been shutdown or because its capacity has been reached,
* the task is handled by the current {@code RejectedExecutionHandler}.
*
* @param command the task to execute
* @throws RejectedExecutionException at discretion of
* {@code RejectedExecutionHandler}, if the task
* cannot be accepted for execution
* @throws NullPointerException if {@code command} is null
*/
public void execute(Runnable command) {
if (command == null)
throw new NullPointerException();
/*
* Proceed in 3 steps:
*
* 1. If fewer than corePoolSize threads are running, try to
* start a new thread with the given command as its first
* task. The call to addWorker atomically checks runState and
* workerCount, and so prevents false alarms that would add
* threads when it shouldn't, by returning false.
*
* 2. If a task can be successfully queued, then we still need
* to double-check whether we should have added a thread
* (because existing ones died since last checking) or that
* the pool shut down since entry into this method. So we
* recheck state and if necessary roll back the enqueuing if
* stopped, or start a new thread if there are none.
*
* 3. If we cannot queue task, then we try to add a new
* thread. If it fails, we know we are shut down or saturated
* and so reject the task.
*/
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
}
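The three steps in the comment above can be made visible with a deliberately tiny pool. This sketch uses a latch to keep every task blocked, so each `execute` call lands in a different branch; sizes (core 1, max 2, queue capacity 1) are chosen only to saturate quickly:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ExecuteStepsDemo {
    public static void main(String[] args) {
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1));
        pool.execute(blocker); // step 1: below corePoolSize -> new core thread
        pool.execute(blocker); // step 2: core busy -> task is queued
        pool.execute(blocker); // step 3: queue full -> second (non-core) thread
        try {
            pool.execute(blocker); // queue full, at maximumPoolSize -> rejected
        } catch (RejectedExecutionException expected) {
            System.out.println("rejected"); // default AbortPolicy throws
        }
        System.out.println(pool.getPoolSize()); // 2
        release.countDown();
        pool.shutdown();
    }
}
```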

/**
* Initiates an orderly shutdown in which previously submitted
* tasks are executed, but no new tasks will be accepted.
* Invocation has no additional effect if already shut down.
*
* <p>This method does not wait for previously submitted tasks to
* complete execution. Use {@link #awaitTermination awaitTermination}
* to do that.
*
* @throws SecurityException {@inheritDoc}
*/
public void shutdown() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
checkShutdownAccess();
advanceRunState(SHUTDOWN);
interruptIdleWorkers();
onShutdown(); // hook for ScheduledThreadPoolExecutor
} finally {
mainLock.unlock();
}
tryTerminate();
}

/**
* Attempts to stop all actively executing tasks, halts the
* processing of waiting tasks, and returns a list of the tasks
* that were awaiting execution. These tasks are drained (removed)
* from the task queue upon return from this method.
*
* <p>This method does not wait for actively executing tasks to
* terminate. Use {@link #awaitTermination awaitTermination} to
* do that.
*
* <p>There are no guarantees beyond best-effort attempts to stop
* processing actively executing tasks. This implementation
* cancels tasks via {@link Thread#interrupt}, so any task that
* fails to respond to interrupts may never terminate.
*
* @throws SecurityException {@inheritDoc}
*/
public List<Runnable> shutdownNow() {
List<Runnable> tasks;
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
checkShutdownAccess();
advanceRunState(STOP);
interruptWorkers();
tasks = drainQueue();
} finally {
mainLock.unlock();
}
tryTerminate();
return tasks;
}
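The contrast with `shutdown()` is easy to observe: `shutdownNow` interrupts the running task and hands back the queued ones. A sketch (the 60-second sleep is never served out; it exists only to be interrupted):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ShutdownNowDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        final CountDownLatch started = new CountDownLatch(1);
        pool.execute(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000); // will be interrupted by shutdownNow
            } catch (InterruptedException e) {
                System.out.println("interrupted");
            }
        });
        pool.execute(() -> { });      // never runs; sits in the queue
        started.await();
        List<Runnable> drained = pool.shutdownNow();
        System.out.println(drained.size()); // 1: the queued task came back
        System.out.println(pool.awaitTermination(5, TimeUnit.SECONDS));
    }
}
```

Note that "interrupted" is printed by the worker thread, so its position relative to the other two lines is not guaranteed.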

public boolean isShutdown() {
return ! isRunning(ctl.get());
}

/**
* Returns true if this executor is in the process of terminating
* after {@link #shutdown} or {@link #shutdownNow} but has not
* completely terminated. This method may be useful for
* debugging. A return of {@code true} reported a sufficient
* period after shutdown may indicate that submitted tasks have
* ignored or suppressed interruption, causing this executor not
* to properly terminate.
*
* @return {@code true} if terminating but not yet terminated
*/
public boolean isTerminating() {
int c = ctl.get();
return ! isRunning(c) && runStateLessThan(c, TERMINATED);
}

public boolean isTerminated() {
return runStateAtLeast(ctl.get(), TERMINATED);
}

public boolean awaitTermination(long timeout, TimeUnit unit)
throws InterruptedException {
long nanos = unit.toNanos(timeout);
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
for (;;) {
if (runStateAtLeast(ctl.get(), TERMINATED))
return true;
if (nanos <= 0)
return false;
nanos = termination.awaitNanos(nanos);
}
} finally {
mainLock.unlock();
}
}

/**
* Invokes {@code shutdown} when this executor is no longer
* referenced and it has no threads.
*/
protected void finalize() {
shutdown();
}

/**
* Sets the thread factory used to create new threads.
*
* @param threadFactory the new thread factory
* @throws NullPointerException if threadFactory is null
* @see #getThreadFactory
*/
public void setThreadFactory(ThreadFactory threadFactory) {
if (threadFactory == null)
throw new NullPointerException();
this.threadFactory = threadFactory;
}

/**
* Returns the thread factory used to create new threads.
*
* @return the current thread factory
* @see #setThreadFactory(ThreadFactory)
*/
public ThreadFactory getThreadFactory() {
return threadFactory;
}

/**
* Sets a new handler for unexecutable tasks.
*
* @param handler the new handler
* @throws NullPointerException if handler is null
* @see #getRejectedExecutionHandler
*/
public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
if (handler == null)
throw new NullPointerException();
this.handler = handler;
}

/**
* Returns the current handler for unexecutable tasks.
*
* @return the current handler
* @see #setRejectedExecutionHandler(RejectedExecutionHandler)
*/
public RejectedExecutionHandler getRejectedExecutionHandler() {
return handler;
}

/**
* Sets the core number of threads. This overrides any value set
* in the constructor. If the new value is smaller than the
* current value, excess existing threads will be terminated when
* they next become idle. If larger, new threads will, if needed,
* be started to execute any queued tasks.
*
* @param corePoolSize the new core size
* @throws IllegalArgumentException if {@code corePoolSize < 0}
* @see #getCorePoolSize
*/
public void setCorePoolSize(int corePoolSize) {
if (corePoolSize < 0)
throw new IllegalArgumentException();
int delta = corePoolSize - this.corePoolSize;
this.corePoolSize = corePoolSize;
if (workerCountOf(ctl.get()) > corePoolSize)
interruptIdleWorkers();
else if (delta > 0) {
// We don't really know how many new threads are "needed".
// As a heuristic, prestart enough new workers (up to new
// core size) to handle the current number of tasks in
// queue, but stop if queue becomes empty while doing so.
int k = Math.min(delta, workQueue.size());
while (k-- > 0 && addWorker(null, true)) {
if (workQueue.isEmpty())
break;
}
}
}

/**
* Returns the core number of threads.
*
* @return the core number of threads
* @see #setCorePoolSize
*/
public int getCorePoolSize() {
return corePoolSize;
}

/**
* Starts a core thread, causing it to idly wait for work. This
* overrides the default policy of starting core threads only when
* new tasks are executed. This method will return {@code false}
* if all core threads have already been started.
*
* @return {@code true} if a thread was started
*/
public boolean prestartCoreThread() {
return workerCountOf(ctl.get()) < corePoolSize &&
addWorker(null, true);
}

/**
* Same as prestartCoreThread except arranges that at least one
* thread is started even if corePoolSize is 0.
*/
void ensurePrestart() {
int wc = workerCountOf(ctl.get());
if (wc < corePoolSize)
addWorker(null, true);
else if (wc == 0)
addWorker(null, false);
}

/**
* Starts all core threads, causing them to idly wait for work. This
* overrides the default policy of starting core threads only when
* new tasks are executed.
*
* @return the number of threads started
*/
public int prestartAllCoreThreads() {
int n = 0;
while (addWorker(null, true))
++n;
return n;
}

/**
* Returns true if this pool allows core threads to time out and
* terminate if no tasks arrive within the keepAlive time, being
* replaced if needed when new tasks arrive. When true, the same
* keep-alive policy applying to non-core threads applies also to
* core threads. When false (the default), core threads are never
* terminated due to lack of incoming tasks.
*
* @return {@code true} if core threads are allowed to time out,
* else {@code false}
*
* @since 1.6
*/
public boolean allowsCoreThreadTimeOut() {
return allowCoreThreadTimeOut;
}

/**
* Sets the policy governing whether core threads may time out and
* terminate if no tasks arrive within the keep-alive time, being
* replaced if needed when new tasks arrive. When false, core
* threads are never terminated due to lack of incoming
* tasks. When true, the same keep-alive policy applying to
* non-core threads applies also to core threads. To avoid
* continual thread replacement, the keep-alive time must be
* greater than zero when setting {@code true}. This method
* should in general be called before the pool is actively used.
*
* @param value {@code true} if should time out, else {@code false}
* @throws IllegalArgumentException if value is {@code true}
* and the current keep-alive time is not greater than zero
*
* @since 1.6
*/
public void allowCoreThreadTimeOut(boolean value) {
if (value && keepAliveTime <= 0)
throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
if (value != allowCoreThreadTimeOut) {
allowCoreThreadTimeOut = value;
if (value)
interruptIdleWorkers();
}
}

/**
* Sets the maximum allowed number of threads. This overrides any
* value set in the constructor. If the new value is smaller than
* the current value, excess existing threads will be
* terminated when they next become idle.
*
* @param maximumPoolSize the new maximum
* @throws IllegalArgumentException if the new maximum is
* less than or equal to zero, or
* less than the {@linkplain #getCorePoolSize core pool size}
* @see #getMaximumPoolSize
*/
public void setMaximumPoolSize(int maximumPoolSize) {
if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
throw new IllegalArgumentException();
this.maximumPoolSize = maximumPoolSize;
if (workerCountOf(ctl.get()) > maximumPoolSize)
interruptIdleWorkers();
}

/**
* Returns the maximum allowed number of threads.
*
* @return the maximum allowed number of threads
* @see #setMaximumPoolSize
*/
public int getMaximumPoolSize() {
return maximumPoolSize;
}

/**
* Sets the time limit for which threads may remain idle before
* being terminated. If there are more than the core number of
* threads currently in the pool, after waiting this amount of
* time without processing a task, excess threads will be
* terminated. This overrides any value set in the constructor.
*
* @param time the time to wait. A time value of zero will cause
* excess threads to terminate immediately after executing tasks.
* @param unit the time unit of the {@code time} argument
* @throws IllegalArgumentException if {@code time} less than zero or
* if {@code time} is zero and {@code allowsCoreThreadTimeOut}
* @see #getKeepAliveTime(TimeUnit)
*/
public void setKeepAliveTime(long time, TimeUnit unit) {
if (time < 0)
throw new IllegalArgumentException();
if (time == 0 && allowsCoreThreadTimeOut())
throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
long keepAliveTime = unit.toNanos(time);
long delta = keepAliveTime - this.keepAliveTime;
this.keepAliveTime = keepAliveTime;
if (delta < 0)
interruptIdleWorkers();
}

/**
* Returns the thread keep-alive time, which is the amount of time
* that threads in excess of the core pool size may remain
* idle before being terminated.
*
* @param unit the desired time unit of the result
* @return the time limit
* @see #setKeepAliveTime(long, TimeUnit)
*/
public long getKeepAliveTime(TimeUnit unit) {
return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
}

/* User-level queue utilities */

/**
* Returns the task queue used by this executor. Access to the
* task queue is intended primarily for debugging and monitoring.
* This queue may be in active use. Retrieving the task queue
* does not prevent queued tasks from executing.
*
* @return the task queue
*/
public BlockingQueue<Runnable> getQueue() {
return workQueue;
}

/**
* Removes this task from the executor's internal queue if it is
* present, thus causing it not to be run if it has not already
* started.
*
* <p>This method may be useful as one part of a cancellation
* scheme. It may fail to remove tasks that have been converted
* into other forms before being placed on the internal queue. For
* example, a task entered using {@code submit} might be
* converted into a form that maintains {@code Future} status.
* However, in such cases, method {@link #purge} may be used to
* remove those Futures that have been cancelled.
*
* @param task the task to remove
* @return {@code true} if the task was removed
*/
public boolean remove(Runnable task) {
boolean removed = workQueue.remove(task);
tryTerminate(); // In case SHUTDOWN and now empty
return removed;
}

/**
* Tries to remove from the work queue all {@link Future}
* tasks that have been cancelled. This method can be useful as a
* storage reclamation operation, that has no other impact on
* functionality. Cancelled tasks are never executed, but may
* accumulate in work queues until worker threads can actively
* remove them. Invoking this method instead tries to remove them now.
* However, this method may fail to remove tasks in
* the presence of interference by other threads.
*/
public void purge() {
final BlockingQueue<Runnable> q = workQueue;
try {
Iterator<Runnable> it = q.iterator();
while (it.hasNext()) {
Runnable r = it.next();
if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
it.remove();
}
} catch (ConcurrentModificationException fallThrough) {
// Take slow path if we encounter interference during traversal.
// Make copy for traversal and call remove for cancelled entries.
// The slow path is more likely to be O(N*N).
for (Object r : q.toArray())
if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
q.remove(r);
}

tryTerminate(); // In case SHUTDOWN and now empty
}

/* Statistics */

/**
* Returns the current number of threads in the pool.
*
* @return the number of threads
*/
public int getPoolSize() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
// Remove rare and surprising possibility of
// isTerminated() && getPoolSize() > 0
return runStateAtLeast(ctl.get(), TIDYING) ? 0
: workers.size();
} finally {
mainLock.unlock();
}
}

/**
* Returns the approximate number of threads that are actively
* executing tasks.
*
* @return the number of threads
*/
public int getActiveCount() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
int n = 0;
for (Worker w : workers)
if (w.isLocked())
++n;
return n;
} finally {
mainLock.unlock();
}
}

/**
* Returns the largest number of threads that have ever
* simultaneously been in the pool.
*
* @return the number of threads
*/
public int getLargestPoolSize() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
return largestPoolSize;
} finally {
mainLock.unlock();
}
}

/**
* Returns the approximate total number of tasks that have ever been
* scheduled for execution. Because the states of tasks and
* threads may change dynamically during computation, the returned
* value is only an approximation.
*
* @return the number of tasks
*/
public long getTaskCount() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
long n = completedTaskCount;
for (Worker w : workers) {
n += w.completedTasks;
if (w.isLocked())
++n;
}
return n + workQueue.size();
} finally {
mainLock.unlock();
}
}

/**
* Returns the approximate total number of tasks that have
* completed execution. Because the states of tasks and threads
* may change dynamically during computation, the returned value
* is only an approximation, but one that does not ever decrease
* across successive calls.
*
* @return the number of tasks
*/
public long getCompletedTaskCount() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
long n = completedTaskCount;
for (Worker w : workers)
n += w.completedTasks;
return n;
} finally {
mainLock.unlock();
}
}

/**
* Returns a string identifying this pool, as well as its state,
* including indications of run state and estimated worker and
* task counts.
*
* @return a string identifying this pool, as well as its state
*/
public String toString() {
long ncompleted;
int nworkers, nactive;
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
ncompleted = completedTaskCount;
nactive = 0;
nworkers = workers.size();
for (Worker w : workers) {
ncompleted += w.completedTasks;
if (w.isLocked())
++nactive;
}
} finally {
mainLock.unlock();
}
int c = ctl.get();
String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :
(runStateAtLeast(c, TERMINATED) ? "Terminated" :
"Shutting down"));
return super.toString() +
"[" + rs +
", pool size = " + nworkers +
", active threads = " + nactive +
", queued tasks = " + workQueue.size() +
", completed tasks = " + ncompleted +
"]";
}

/* Extension hooks */

/**
* Method invoked prior to executing the given Runnable in the
* given thread. This method is invoked by thread {@code t} that
* will execute task {@code r}, and may be used to re-initialize
* ThreadLocals, or to perform logging.
*
* <p>This implementation does nothing, but may be customized in
* subclasses. Note: To properly nest multiple overridings, subclasses
* should generally invoke {@code super.beforeExecute} at the end of
* this method.
*
* @param t the thread that will run task {@code r}
* @param r the task that will be executed
*/
protected void beforeExecute(Thread t, Runnable r) { }

/**
* Method invoked upon completion of execution of the given Runnable.
* This method is invoked by the thread that executed the task. If
* non-null, the Throwable is the uncaught {@code RuntimeException}
* or {@code Error} that caused execution to terminate abruptly.
*
* <p>This implementation does nothing, but may be customized in
* subclasses. Note: To properly nest multiple overridings, subclasses
* should generally invoke {@code super.afterExecute} at the
* beginning of this method.
*
* <p><b>Note:</b> When actions are enclosed in tasks (such as
* {@link FutureTask}) either explicitly or via methods such as
* {@code submit}, these task objects catch and maintain
* computational exceptions, and so they do not cause abrupt
* termination, and the internal exceptions are <em>not</em>
* passed to this method. If you would like to trap both kinds of
* failures in this method, you can further probe for such cases,
* as in this sample subclass that prints either the direct cause
* or the underlying exception if a task has been aborted:
*
* <pre> {@code
* class ExtendedExecutor extends ThreadPoolExecutor {
* // ...
* protected void afterExecute(Runnable r, Throwable t) {
* super.afterExecute(r, t);
* if (t == null && r instanceof Future<?>) {
* try {
* Object result = ((Future<?>) r).get();
* } catch (CancellationException ce) {
* t = ce;
* } catch (ExecutionException ee) {
* t = ee.getCause();
* } catch (InterruptedException ie) {
* Thread.currentThread().interrupt(); // ignore/reset
* }
* }
* if (t != null)
* System.out.println(t);
* }
* }}</pre>
*
* @param r the runnable that has completed
* @param t the exception that caused termination, or null if
* execution completed normally
*/
protected void afterExecute(Runnable r, Throwable t) { }

/**
* Method invoked when the Executor has terminated. Default
* implementation does nothing. Note: To properly nest multiple
* overridings, subclasses should generally invoke
* {@code super.terminated} within this method.
*/
protected void terminated() { }

/* Predefined RejectedExecutionHandlers */

/**
* A handler for rejected tasks that runs the rejected task
* directly in the calling thread of the {@code execute} method,
* unless the executor has been shut down, in which case the task
* is discarded.
*/
public static class CallerRunsPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code CallerRunsPolicy}.
*/
public CallerRunsPolicy() { }

/**
* Executes task r in the caller's thread, unless the executor
* has been shut down, in which case the task is discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
r.run();
}
}
}

/**
* A handler for rejected tasks that throws a
* {@code RejectedExecutionException}.
*/
public static class AbortPolicy implements RejectedExecutionHandler {
/**
* Creates an {@code AbortPolicy}.
*/
public AbortPolicy() { }

/**
* Always throws RejectedExecutionException.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
* @throws RejectedExecutionException always
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
throw new RejectedExecutionException("Task " + r.toString() +
" rejected from " +
e.toString());
}
}

/**
* A handler for rejected tasks that silently discards the
* rejected task.
*/
public static class DiscardPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardPolicy}.
*/
public DiscardPolicy() { }

/**
* Does nothing, which has the effect of discarding task r.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}
}

/**
* A handler for rejected tasks that discards the oldest unhandled
* request and then retries {@code execute}, unless the executor
* is shut down, in which case the task is discarded.
*/
public static class DiscardOldestPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardOldestPolicy} for the given executor.
*/
public DiscardOldestPolicy() { }

/**
* Obtains and ignores the next task that the executor
* would otherwise execute, if one is immediately available,
* and then retries execution of task r, unless the executor
* is shut down, in which case task r is instead discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
e.getQueue().poll();
e.execute(r);
}
}
}
}

Design Pattern Classification

Design patterns come up when refactoring everyday working code, and they also appear all over popular open-source frameworks. I had only a patchy picture of them, and a colleague recently asked about them in an interview, so I am organizing them here for future reference.
For each pattern I will use the example I find easiest to understand.
First, the classification of the design patterns and how the categories relate to each other.

The relationships between them are shown in the following figure:

##Creational Patterns
These six patterns all deal with creating objects.

  • Simple Factory;
  • Factory Method;
  • Abstract Factory;
  • Builder;
  • Prototype;
  • Singleton;

##Structural Patterns

After objects are created, the dependencies between objects must be designed; doing this well greatly eases later maintenance of the code.

  • Facade;
  • Adapter;
  • Proxy;
  • Decorator;
  • Bridge;
  • Composite;
  • Flyweight

##Behavioral Patterns

Once object creation and structure are settled, what remains is designing the objects' behavior.

  • Template Method;
  • Observer;
  • State;
  • Strategy;
  • Chain of Responsibility;
  • Command;
  • Visitor;
  • Mediator;
  • Memento;
  • Iterator;
  • Interpreter

Personal Bookshelf

Books I am currently reading:

Books I have read:

Tools and technology books

Business and domain knowledge

  • 《流程优化与再造》

Thinking and mental models

Life guidance

  • 《平凡的世界》 (Lu Yao)
  • 《尼古拉.特斯拉传》 (Steve Law)

Books to read

  • 《集装箱改变世界》

    Starting from the invention history of the shipping container, this book turns a seemingly mundane subject into a series of remarkable and entertaining stories, showing how a single technical advance reshaped the world economy. Its value lies not in what it is, but in how it is used. Before the container, Walmart in the United States and French ready-to-wear could never have spread everywhere. After the container, shipping became so cheap that a product made in the Eastern Hemisphere and sold in New York could cost far less than one made in New York's own suburbs, and this is how China stepped onto the stage of international container shipping and became the world's factory. Besides an enjoyable read, the reader will be amused to find that even a simple innovation can thoroughly change our lives

sed,awk

Linux shell programming, from beginner to expert

I recently ran into a problem at work: when inspecting processes, I only wanted the PIDs of certain ones. Running a bare ps aux|grep sms returns a pile of noise, so a colleague recommended awk, and mentioned sed along the way.

Here I take some time to summarize these two commands, purely for my own study and work.

##What are sed and awk?

They are two powerful text-processing tools on Linux/Unix systems.
  • sed (stream editor) gets its name from being a stream-oriented editor; it is the ideal tool for applying a series of editing commands to a text file.
  • awk is named after the initials of its three creators, Aho, Weinberger, and Kernighan; it is a programming language that operates on structured data and produces formatted reports.

##Using sed

###When to use it

  • editing files too large for comfortable interactive editing
  • editing commands too complex to type in an interactive text editor
  • scanning a file once while applying several editing functions to it

sed edits only a copy of the original file in its buffer; it never modifies the original. So, to keep the edited result, redirect the output to another file:

sed 'sed command' source-file > target-file

###How to invoke it
If no input file is given, sed reads from standard input.

  • invoke sed from the shell command line, in the format:

    sed [option] 'sed command' input-file
    note that the command is wrapped in single quotes

  • put the sed commands in a script file, then invoke it via sed, in the format:

    sed [option] -f sed-script-file input-file

  • put the sed commands in a script file and, most commonly, make the file executable and run it directly, in the format:

    ./sed-script-file input-file

In this case the script file should begin with a sha-bang (#!), followed by the name of the program that interprets the script.

####sed options and their meaning

  • -n: do not print every line to standard output
  • -e: parse the next string as a sed editing command; with only one editing command, -e can be omitted
  • -f: indicates that a sed script file is being invoked

###Command structure

A sed command consists of two parts: an address that selects text lines, and an editing command

####Addressing text

  • by line number, either a single line or a range of line numbers
  • by regular expression

The addressing forms sed supports:

  • x is a single line number
  • x,y is a range of line numbers
  • /pattern/ selects lines containing the pattern
  • /pattern1/,/pattern2/ selects lines from the first match of pattern1 through the next match of pattern2
  • /pattern/,x selects lines from the pattern match through line x, and vice versa
  • x,y! selects the lines outside the range x to y
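The addressing forms above can be exercised against a small throwaway file (a hedged sketch; the sample file and its path are assumptions made for illustration):

```shell
# build a four-line sample file
printf 'alpha\nbeta\ngamma\ndelta\n' > /tmp/sed_addr.txt

sed -n '2p'        /tmp/sed_addr.txt   # x: single line        -> beta
sed -n '1,3p'      /tmp/sed_addr.txt   # x,y: line range       -> alpha beta gamma
sed -n '/mm/p'     /tmp/sed_addr.txt   # /pattern/             -> gamma
sed -n '/beta/,4p' /tmp/sed_addr.txt   # /pattern/,x           -> beta gamma delta
sed -n '2,3!p'     /tmp/sed_addr.txt   # x,y!: negated range   -> alpha delta
```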

####Common editing commands

  • p print the matching lines
  • = print line numbers
  • a\ append text after the addressed line
  • i\ insert text before the addressed line
  • d delete the addressed lines
  • c\ replace the addressed lines with new text
  • s substitute one pattern for another
  • r read text in from another file
  • w write text out to a file
  • y transliterate characters
  • q quit after the first pattern match
  • l show control characters as their octal ASCII equivalents
  • {} a group of commands applied to the addressed lines
  • n read the next input line and process it with the next command
  • h copy the pattern space into the hold space
  • H append the pattern space to the hold space
  • x exchange the contents of the pattern space and the hold space
  • g copy the hold space into the pattern space
  • G append the hold space to the pattern space
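The hold-space commands (h, H, x, g, G) are the least obvious entries above. A classic sketch that combines them reverses a file's lines, emulating tac:

```shell
# 1!G : on every line except the first, append the hold space to the pattern space
# h   : copy the (growing, multi-line) pattern space into the hold space
# $p  : on the last line, print the accumulated text, now in reverse order
printf 'one\ntwo\nthree\n' | sed -n '1!G;h;$p'
# prints: three / two / one, one word per line
```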

###Examples
We will use the content of the following file as the running example:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings")

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)

Using the -n option

  • with -n, not everything is printed; 1p prints only the first line
➜  linuxstudy  sed -n '1p' manage.py2
#!/usr/bin/env python
  • print lines 3 to 6
➜  linuxstudy  sed -n '3,6p' manage.py2
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings")
  • pattern matching
➜  linuxstudy  sed -n '/environ/p' manage.py2
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings")

Using the -e option

  • print line numbers:
    ➜  linuxstudy  sed -n '/env/=' manage.py2
    1
    6

Adding the -e option:

    ➜  linuxstudy  sed -n -e '/env/p' -e '/env/=' manage.py2 
    #!/usr/bin/env python
    1
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings")
    6

    sed also supports multiple editing commands in one invocation; the general format is:

    sed [option] -e editing-command -e editing-command ... -e editing-command input-file

    Store the following commands in a file with a .sed suffix and make it executable:
    #!/usr/bin/sed -f 
    /command/a\
    we append a new line

sed text addressing

  • matching the metacharacters $ and .
    ➜  linuxstudy  sed -n '/\./p' manage.py2
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings")
        from django$.core.management import execute_from_command_line
        execute_from_command_line(sys.argv)
    ➜  linuxstudy  sed -n '/\$/p' manage.py2
        from django$.core.management import execute_from_command_line
  • matching with metacharacters
    In a regex, $ means end of line, but as a sed address it means the last line
    The sed command letter can go inside or outside the single quotes, whichever you prefer
    ➜  linuxstudy  sed -n '$p' manage.py2 print the last line

    ➜  linuxstudy sed  -n '/.*line/p' manage.py2 find the lines containing line
  • the ! symbol negates an address selection

      ➜  linuxstudy  sed -n '2,4!p' manage.py2 print everything except lines 2 to 4 
  • limiting the range with a mix of line numbers and pattern matching

      /pattern/,x and x,/pattern/ behave just like x,y, with one endpoint given as a pattern instead of a number

      ➜  linuxstudy  sed -n '4,/mana/p' manage.py2  prints from line 4 through the first line matching mana
    
      if __name__ == "__main__":
          os.environ.setdefault("DJANGO_SETTINGS_MODULE", "hwbuluo.settings")
    
          from django$.core.management import execute_from_command_line

sed text editing

  • inserting text: i\

      insert before the matched line
      sed 'address i\text' file

      a modified version of the append script above:
      #!/usr/bin/sed -f 
      /command/i\
      we append a new line
  • changing text: modify.sed, c\

      -------------------
      #!/usr/bin/sed -f

      /command/c\
      I modify the file.
      -------------------

      run it: ./modify.sed
  • deleting text: d. Unlike append, insert, and change, no trailing \ is needed

      sed '1,3d' manage.py2
  • substituting text. Similar to changing text, except that c\ rewrites the whole line while s modifies only the matched part

      s/old-string/new-string/[flags]

      Substitution flags and their meaning:
          g: replace every occurrence in the line; without it, only the first match on each line is replaced
          p: together with -n, print only the substituted lines
          w filename: write the output to a file

      ➜  linuxstudy  sed -n 's/command/============/p' manage.py2 
          from django$.core.management import execute_from_============_line
          execute_from_============_line(sys.argv)

      you can also run linuxstudy  sed -n 's/command/============/2p' manage.py2   to replace only the nth occurrence on each line
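Of the substitution flags listed above, w is not demonstrated; here is a minimal sketch (the file paths are assumptions made for illustration):

```shell
printf 'no match here\nfoo command bar\n' > /tmp/sub_in.txt
# substitute, and write only the lines where a substitution happened to a second file
sed -n 's/command/CMD/w /tmp/sub_out.txt' /tmp/sub_in.txt
cat /tmp/sub_out.txt
# prints: foo CMD bar
```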

##Using awk

###When to use it
###How to invoke it
###Examples

awk -F ':' 'BEGIN {count=0;} {name[count] = $1;count++;};END {for (i=0;i<NR;i++) print i,name[i]}'   /etc/passwd

0 root
1 daemon
2 bin
3 sys
4 sync
5 games
6 man
7 lp
8 mail

ls -l |awk 'BEGIN {size=0;} {size=size+$5;} END{print "[end]size is",size/1024/1024 ,"M"}'

[end]size is 0.098505 M
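Both commands above follow awk's BEGIN/body/END structure: BEGIN runs once before any input, the body runs once per line, and END runs once at the end. A smaller self-contained sketch of the same shape:

```shell
# sum the second colon-separated field across all lines; NR is the line count
printf 'a:1\nb:2\nc:3\n' | awk -F ':' 'BEGIN {sum=0} {sum += $2} END {print "total", sum, "over", NR, "lines"}'
# prints: total 6 over 3 lines
```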

A Grab Bag of Everyday Ops Knowledge

##Daily system operations

  • copy a remote file to the local machine
    scp username@ip:remote_filepath /local_dir
  • sync a directory

      rsync -avzr 172.xx.xx.11:/opt/jagent/tomcat* .
    
      sudo chown -R fsdevops.fsdevops sms-service/
  • Linux max open-file limit: ulimit

      ulimit -n
  • "resource temporarily unavailable": resources exhausted

        $ ps
            -bash: fork: retry: 资源暂时不可用
            -bash: fork: retry: 资源暂时不可用
            -bash: fork: retry: 资源暂时不可用
            -bash: fork: retry: 资源暂时不可用
    
        The IT team's take: each user is limited to 1024 file handles, and that limit was exceeded

        ulimit -a shows all the relevant limits
  • get process IDs

    ps aux|grep sms|awk -F  ' ' '{print $2}'

    ps aux|grep sms|awk -F ' ' '{print $2}'|xargs kill
  • using grep
    cat rmq_bk_gc.log|grep -E -o '\w+'|sort|uniq -c|sort -k 2,1
    -E extended regex
    -o print only the matched part
    sort sorts the output
    uniq groups duplicates (with -c, counts them)
  • force-save with sudo (from inside vim)

      :w !sudo tee %
  • configure a service to start on boot

    chkconfig

  • find which process is using a port

       netstat -apn
       netstat -apn|grep 8013
       ps -aux | grep 33514/java
  • check disk usage of the current directory
    du -hs .

  • capture packets on a given network

       capture packets for a given host and port      
       tcpdump tcp port 23 and host 210.27.48.1
  • firewall (iptables)

        hostname
        iptables -t filter -I INPUT -p tcp --dport 27107 -m state --state NEW -j ACCEPT
    
        sudo iptables -A INPUT -p tcp --dport 13710 -j ACCEPT
        sudo iptables -A OUTPUT -p tcp --sport 13710 -j ACCEPT
    
        service iptables save
        vim /etc/puppet/puppet.conf 
    
        service puppet restart
        iptables -L
        more /etc/sysconfig/iptables
        vim /etc/sysconfig/iptables
        service iptables reload
      stop the firewall

      sudo su
      service iptables stop
  • install telnet

        yum install -y telnet
  • search files recursively for a keyword

        grep netty -R .
  • check memory usage

        free 
  • sending requests with curl

      Goal 1: send a POST request from a script.
      Answer: curl -d "leaderboard_id=7778a8143f111272&score=19&app_key=8d49f16fe034b98b&_test_user=test01" "http://172.16.102.208:8089/wiapi/score"

      Goal 2: send a POST request that also uploads file data, e.g. a local card.txt picked via "Browse"
      Answer:  curl  -F "blob=@card.txt;type=text/plain"  "http://172.16.102.208:8089/wiapi/score?leaderboard_id=7778a8143f111272&score=40&app_key=8d49f16fe034b98b&_test_user=test01"   
  • passwordless ssh login

ssh-keygen -t rsa -P ''

Copy the generated file to the target host and append it to the keys file:

cat sshnopw.pub >> /root/.ssh/authorized_keys
  • vmstat: unlike top, this shows the whole machine's CPU, memory, and IO usage, not just per-process CPU and memory percentages
    In vmstat 2 1, 2 means sample the server every two seconds and 1 means take only one sample.
    procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      0 779304  67972 706748    0    0   135    45  538 1117 10  3 86  2  0

     r: the run queue, i.e. processes actually assigned a CPU. My test server is fairly idle with little running; when this value exceeds the number of CPUs, you have a CPU bottleneck. It relates to top's load average: a load over 3 is fairly high, over 5 is high, and over 10 is abnormal and the server is in a dangerous state. top's load is roughly the run queue per second. A large run queue means the CPU is very busy, which usually shows up as high CPU utilization.

     b: blocked processes; self-explanatory.

     swpd: virtual memory (swap) in use. If it is above 0, the machine is short of physical memory; unless a program is leaking memory, you should add RAM or move memory-hungry jobs to another machine.

     free: free physical memory. My machine has 8G in total, with 3415M free.

     buff: memory Linux/Unix uses to cache filesystem metadata such as directory contents and permissions; about 300M on my machine

     cache: memory used directly to buffer the files we open; about 300M on my machine. (This is a clever trick of Linux/Unix: part of the free physical memory is used to cache files and directories to improve performance, and when programs need memory, buffer/cache is reclaimed very quickly.)

     si: virtual memory read back in from disk per second. Above 0 means physical memory is insufficient or leaking; find the memory-hogging process and deal with it. My machine has plenty of memory, all normal.

     so: virtual memory written out to disk per second. Above 0, same story as si.

     bi: blocks received per second by block devices, meaning all disks and other block devices on the system; the default block size is 1024 bytes. There is little IO on my machine so it stays at 0, but on machines copying large amounts of data (2-3T) I have seen it reach 140000/s, roughly 140M per second of disk writes

     bo: blocks sent per second by block devices; reading a file drives bo above 0. bi and bo should normally be close to 0; otherwise IO is too frequent and needs tuning.

     in: CPU interrupts per second, including timer interrupts

     cs: context switches per second. System calls, thread switches, and process switches all cause context switches, and smaller is better; if it is too large, consider lowering the number of threads or processes. For web servers like apache and nginx, during load tests of thousands or tens of thousands of concurrent connections, you can tune the process or thread count downward from its peak while load-testing until cs reaches a reasonably small value; that count is a good setting. The same goes for system calls: each call enters kernel space and forces a context switch, which is expensive, so avoid calling system functions too frequently. When context switches dominate, the CPU wastes most of its time switching instead of doing real work and is not being used effectively, which is unacceptable.

     us: user CPU time. On a server doing heavy encryption and decryption I have seen us near 100 with a run queue of 80 (the machine was under load test and performing poorly).

     sy: system CPU time. If it is too high, the system is spending a long time in system calls, e.g. frequent IO operations.

     id: idle CPU time. Generally id + us + sy = 100; I read id as idle CPU, us as user CPU, and sy as system CPU utilization.

     wa: CPU time spent waiting for IO.
  • jstat: inspect JVM garbage-collection activity
    Command format:
    jstat [Options] vmid [interval] [count]
    Parameters:
    Options: usually -gcutil, to view GC status
    vmid: the VM process ID, i.e. the ID of the running Java process
    interval: sampling interval, in seconds or milliseconds
    count: number of samples to print; if omitted, it prints indefinitely
    Example
    A typical invocation:
    jstat -gc 12538 5000
    prints the GC status of Java process 12538 every 5 seconds.
    Sample output:

    jstat -gc 19014 1000
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
    10752.0 10752.0  0.0   5293.9 65536.0  65224.5   175104.0     16.0    13056.0 12799.5 1536.0 1495.2      1    0.009   0      0.000    0.009
    10752.0 10752.0  0.0   5293.9 65536.0  65224.5   175104.0     16.0    13056.0 12799.5 1536.0 1495.2      1    0.009   0      0.000    0.009

    Field meanings
    The fields are explained below (some appear only with other options, not covered here):
             S0C: capacity of the first survivor space in the young generation (bytes) 
             S1C: capacity of the second survivor space in the young generation (bytes) 
             S0U: space used in the first survivor space (bytes) 
             S1U: space used in the second survivor space (bytes) 
             EC: capacity of Eden in the young generation (bytes) 
             EU: space used in Eden (bytes) 
             OC: capacity of the old generation (bytes) 
             OU: space used in the old generation (bytes) 
             PC: capacity of the permanent generation (bytes) 
             PU: space used in the permanent generation (bytes) 
             YGC: number of young-generation GCs since application start 
             YGCT: time spent in young-generation GCs since application start (s) 
             FGC: number of full GCs since application start 
             FGCT: time spent in full GCs since application start (s) 
             GCT: total GC time since application start (s) 
             NGCMN: initial (minimum) size of the young generation (bytes) 
             NGCMX: maximum capacity of the young generation (bytes) 
             NGC: current capacity of the young generation (bytes) 
             OGCMN: initial (minimum) size of the old generation (bytes) 
             OGCMX: maximum capacity of the old generation (bytes) 
             OGC: current capacity of the old generation (bytes) 
             PGCMN: initial (minimum) size of the permanent generation (bytes) 
             PGCMX: maximum capacity of the permanent generation (bytes)   
             PGC: current capacity of the permanent generation (bytes) 
             S0: percentage of the first survivor space currently used 
             S1: percentage of the second survivor space currently used 
             E: percentage of Eden currently used 
             O: percentage of the old generation currently used 
             P: percentage of the permanent generation currently used 
             S0CMX: maximum capacity of the first survivor space (bytes) 
             S1CMX: maximum capacity of the second survivor space (bytes) 
             ECMX: maximum capacity of Eden (bytes) 
             DSS: survivor space currently needed (bytes) (when Eden is full) 
             TT: tenuring threshold 
             MTT: maximum tenuring threshold 
  • jstack pid: dump the thread stacks of a Java process

  • grep with regex-only output

    grep -o -E "[0-9]{11}" xx.log

    cat error.log |grep 'Failed to invoke the method'|grep '2015-12-08 20'|awk -F'Failed to invoke the method' '{print $2}'|awk '{print $1}'  |sort|uniq -c
  • delete certain files

      find ./ -name 'xx.log' |xargs rm -rf
  • delete every file except a given one

      ls | grep -v keep | xargs rm # delete all files except keep
      Explanation: ls lists all file and directory names; grep -v keep inverts the match, yielding every name except keep; xargs then reads those names from standard input and passes them as arguments to the following command, here rm, which deletes them
  • check disk information

    total size of everything under the current directory (including subdirectories)
       ➜  ~  du -sh
        47G    .
    size of a specific directory
       # du -hs ftp
       6.3G    ftp

    check free disk space:

    df   -h

    The df command reports filesystem usage per disk partition on Linux; its options show remaining space. Format:

    df -hl

    Output format:

    Filesystem            Size Used Avail Use% Mounted on

    /dev/hda2              45G   19G   24G 44% /
  • decompress .gz

      gzip -d file.gz     # or tar -zxf for .tar.gz archives
  • /proc: when an application starts and you cannot find where its log went

      ls -l /proc/63220/fd|grep log
#Installing software on Linux

  • yum
    install from a specified repository:
    yum install package-name --enablerepo=repo-url
  • reinstall the JDK
    remove the old JDK:
        rpm -qa | grep jdk|xargs sudo rpm -e --nodeps
    download the JDK:
     wget -c -P ./ http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm?AuthParam=1448637274_af870ccf6c2c78750a5977e6da301744

     Install

     Taking JDK 1.8 as an example:

     copy it to /usr/share: mv jdk-8u65-linux-x64.rpm /usr/share

     install it with rpm -ivh


     Configure environment variables

     Append the following to /etc/profile:

     # set Java environment
     JAVA_HOME=/usr/share/jdk-8u65-linux-x64
     PATH=$JAVA_HOME/bin:$PATH
     CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
     export JAVA_HOME
     export PATH
     export CLASSPATH



     Test

     [root@localhost ~]# echo $JAVA_HOME
     /usr/share/jdk1.6.0_43
     [root@localhost ~]# echo $PATH
     /usr/share/jdk1.6.0_43/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
     [root@localhost ~]# echo $CLASSPATH
     .:/usr/share/jdk1.6.0_43/lib/dt.jar:/usr/share/jdk1.6.0_43/lib/tools.jar

     [root@localhost ~]# java -version
     java version "1.6.0_43"
     Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
     Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)

    Managing Java

    sudo update-alternatives --config java
    There are 2 choices for the alternative java (providing /usr/bin/java).

Database operations

  • mysql: grant privileges

      GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'root' WITH GRANT OPTION;

Installing apt-get (on CentOS)

http://everyday-tech.com/apt-get-on-centos/

lsof

The lsof syntax is:
lsof [options] filename

lsof abc.txt            show the processes that have the file abc.txt open
lsof -c abc             show the files opened by processes whose command name starts with abc
lsof -p 1234            list the files opened by the process with PID 1234
lsof -g gid             show processes belonging to the given gid
lsof +d /usr/local/     show files under the directory that are opened by processes
lsof +D /usr/local/     same, but also searches subdirectories, so it takes longer
lsof -d 4               show processes using file descriptor 4
lsof -i                 show processes matching the given network conditions
lsof -i[46] [protocol][@hostname|hostaddr][:service|port]
46 –> IPv4 or IPv6
protocol –> TCP or UDP
hostname –> Internet host name
hostaddr –> IPv4 address
service –> a service name from /etc/services (more than one allowed)
port –> a port number (more than one allowed)

traceroute IP

Monitor the connectivity of the link from one machine to an IP:

    nohup ping -W 1 172.31.xx.xx &>/tmp/ping.log &
    crontab -e
    * * * * *       echo "`date +%d-%H:%M`" >> /tmp/ping.log
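Once the log is collecting, lost probes can be spotted by comparing reply counts between the per-minute markers that the crontab entry writes (a hedged sketch against the log path used above):

```shell
# count the successful echo replies recorded so far
grep -c 'bytes from' /tmp/ping.log
# show the most recent minute markers written by the crontab line
grep -E '^[0-9]{2}-[0-9]{2}:[0-9]{2}$' /tmp/ping.log | tail -5
```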