- The previous posts covered HashMap in JDK 1.7 and JDK 1.8【】and ended by analyzing why HashMap is not thread-safe. So how do Hashtable and ConcurrentHashMap, which are thread-safe, actually achieve thread safety?
Hashtable
```java
/**
 * @since JDK1.0
 */
public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, java.io.Serializable
```
- As the JDK 1.8 source above shows, Hashtable has existed since JDK 1.0 and has always been thread-safe. HashMap came later precisely because Hashtable was too slow; in pursuit of performance, HashMap dropped the synchronized keyword that guarantees thread safety.
Key differences between Hashtable and HashMap
- Default capacity: Hashtable defaults to 11 and accepts any capacity greater than 0; HashMap defaults to 16, and a given capacity is rounded up to the smallest power of two not less than it (see the HashMap post linked above).
```java
public Hashtable() {
    this(11, 0.75f);
}

public Hashtable(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal Load: " + loadFactor);

    if (initialCapacity == 0)
        initialCapacity = 1;
    this.loadFactor = loadFactor;
    table = new Entry<?,?>[initialCapacity];
    threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
}
```
- Resizing: Hashtable grows to twice the current capacity plus 1; HashMap grows to twice the current capacity.
```java
int newCapacity = (oldCapacity << 1) + 1;
```
- Why Hashtable is thread-safe:
```java
public synchronized int size() { /* ... */ }
public synchronized boolean isEmpty() { /* ... */ }
public synchronized boolean contains(Object value) { /* ... */ }
public synchronized boolean containsKey(Object key) { /* ... */ }
public synchronized V get(Object key) { /* ... */ }
public synchronized V put(K key, V value) { /* ... */ }
public synchronized V remove(Object key) { /* ... */ }
public synchronized void putAll(Map<? extends K, ? extends V> t) { /* ... */ }
public synchronized void clear() { /* ... */ }
public synchronized Object clone() { /* ... */ }
public synchronized int hashCode() { /* ... */ }
// ...
```
As the source shows, almost every Hashtable method is marked synchronized. When one thread calls a synchronized method, it first acquires the instance's monitor lock; while it holds the lock, no other thread can acquire it, so no other thread can enter any synchronized method. In effect, the calling thread locks the entire Hashtable, and that is what makes Hashtable thread-safe.
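This serialization is easy to observe: hammer one Hashtable from several threads with distinct keys, and no update is ever lost because every put acquires the same monitor. A minimal sketch (the thread and key counts are arbitrary):

```java
import java.util.Hashtable;

public class HashtableDemo {
    // Fills one shared Hashtable from several threads and returns its size.
    static int run(int threads, int perThread) {
        Hashtable<Integer, Integer> table = new Hashtable<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            workers[t] = new Thread(() -> {
                // Each put locks the whole table, so writers never interleave
                for (int i = 0; i < perThread; i++)
                    table.put(base + i, i);
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try {
                w.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return table.size();
    }

    public static void main(String[] args) {
        // All keys are distinct, so every one of the 4000 puts must survive
        System.out.println(run(4, 1000)); // 4000
    }
}
```

With a plain HashMap in place of the Hashtable, the same run could lose entries (or worse) under contention.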
ConcurrentHashMap
ConcurrentHashMap is also thread-safe, yet noticeably faster than Hashtable. How does it manage that?
First, a quick look at sun.misc.Unsafe. The Unsafe class operates on memory directly, and its compare-and-swap operations are atomic:
- objectFieldOffset returns the memory offset of a field within an object
- putObject(Object var1, long var2, Object var4) writes var4 at offset var2 inside object var1
- getObject(object, offset) reads the field at the given offset inside object
- getObjectVolatile(object, offset) does the same with volatile read semantics (for volatile fields)
CAS (atomic compare-and-swap)
CAS stands for Compare And Swap. The CAS methods in Unsafe are:
```java
public final native boolean compareAndSwapObject(Object var1, long var2, Object var4, Object var5);
public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);
public final native boolean compareAndSwapLong(Object var1, long var2, long var4, long var6);
```
compareAndSwapObject takes four arguments: an object var1, an offset var2, an expected value var4, and a new value var5. If the value at offset var2 inside var1 equals var4, it is replaced with var5 and the method returns true; otherwise nothing changes and it returns false.
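Outside of Unsafe, the same compare-and-swap primitive is exposed safely through java.util.concurrent.atomic. A small illustration of the semantics just described:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Returns the value after two CAS attempts, illustrating the
    // expected-value check: a CAS succeeds only when the current
    // value matches the expected one.
    static int demo() {
        AtomicInteger n = new AtomicInteger(5);
        boolean ok1 = n.compareAndSet(5, 6); // expected 5, current 5 -> succeeds
        boolean ok2 = n.compareAndSet(5, 7); // expected 5, current 6 -> fails
        if (!ok1 || ok2)
            throw new IllegalStateException("unexpected CAS result");
        return n.get();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 6
    }
}
```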
JDK1.7
Structure
In JDK 1.7, ConcurrentHashMap is made up of segments; each segment holds a table, and each table slot holds a linked list. In other words, the single table of a HashMap is split into several segments, and a writing thread locks only the one segment it touches, instead of the whole table as Hashtable does. The finer lock granularity lets more threads operate at once, which is why it is faster.
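The segment idea can be sketched independently of the JDK source: guard each stripe of buckets with its own lock, so writers hitting different stripes never block each other. The class below is a toy illustration of lock striping, not the JDK implementation; all names in it (StripedMap, stripeFor, the stripe count) are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Toy illustration of lock striping: one ReentrantLock per stripe,
// analogous to one Segment per slice of the table in JDK 1.7.
public class StripedMap<K, V> {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    @SuppressWarnings("unchecked")
    private final Map<K, V>[] stripes = new Map[STRIPES];

    public StripedMap() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
            stripes[i] = new HashMap<>();
        }
    }

    private int stripeFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public V put(K key, V value) {
        int s = stripeFor(key);
        locks[s].lock(); // only threads hitting the same stripe contend
        try {
            return stripes[s].put(key, value);
        } finally {
            locks[s].unlock();
        }
    }

    public V get(Object key) {
        int s = stripeFor(key);
        locks[s].lock();
        try {
            return stripes[s].get(key);
        } finally {
            locks[s].unlock();
        }
    }

    public static void main(String[] args) {
        StripedMap<String, Integer> m = new StripedMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(m.get("a") + ", " + m.get("b")); // 1, 2
    }
}
```

The real Segment is considerably more subtle (it extends ReentrantLock directly and uses volatile reads to avoid locking on get), but the contention picture is the same: 16 stripes allow up to 16 writers in parallel.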
Definition
```java
public class ConcurrentHashMap<K, V> extends AbstractMap<K, V>
    implements ConcurrentMap<K, V>, Serializable
```
ConcurrentHashMap extends AbstractMap and implements the ConcurrentMap and Serializable interfaces.
Member variables
- Default initial capacity: 16

```java
static final int DEFAULT_INITIAL_CAPACITY = 16;
```

- Default load factor: 0.75

```java
static final float DEFAULT_LOAD_FACTOR = 0.75f;
```

- Default number of segments (the maximum number of concurrently writing threads)

```java
static final int DEFAULT_CONCURRENCY_LEVEL = 16;
```

- Maximum capacity

```java
static final int MAXIMUM_CAPACITY = 1 << 30;
```

- Minimum capacity of the table inside each segment

```java
static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
```

- Maximum number of segments

```java
static final int MAX_SEGMENTS = 1 << 16;
```

- Number of unsynchronized attempts that methods such as containsValue make before locking the whole table

```java
static final int RETRIES_BEFORE_LOCK = 2;
```
Segment
Segment extends ReentrantLock, which gives it lock and unlock methods.
```java
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    private static final long serialVersionUID = 2249069246763182397L;

    // Number of lock attempts while spinning: 1 on a single core, 64 on
    // multi-core; Runtime.getRuntime().availableProcessors() gives the core count
    static final int MAX_SCAN_RETRIES =
        Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;

    // The table, an array of HashEntry (each segment owns one table)
    transient volatile HashEntry<K,V>[] table;

    // Number of elements in this segment
    transient int count;

    // Modification count
    transient int modCount;

    // Resize threshold
    transient int threshold;

    // Load factor
    final float loadFactor;

    /**
     * Constructor
     */
    Segment(float lf, int threshold, HashEntry<K,V>[] tab) {
        this.loadFactor = lf;
        this.threshold = threshold;
        this.table = tab;
    }
}
```
- put method
```java
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    /*
     * Try to acquire the lock; on failure, scanAndLockForPut spins and,
     * after a number of failed attempts, falls back to a blocking lock().
     */
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;
        HashEntry<K,V> first = entryAt(tab, index);
        for (HashEntry<K,V> e = first;;) {
            // Walk the list: if the key exists, replace the value and return oldValue
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                e = e.next;
            }
            else {
                // Key not found: create a new HashEntry linked before first
                if (node != null)
                    // If scanAndLockForPut already created the node while
                    // spinning, just set its next pointer (a lazy write)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                // Element count + 1; if it exceeds the threshold, rehash (resize)
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node);
                else
                    setEntryAt(tab, index, node); // new node becomes the list head
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        unlock(); // finally, release the lock
    }
    return oldValue;
}
```
- scanAndLockForPut: a spinning lock acquisition that blocks the thread once enough attempts have failed. It first locates the hash's bucket in the table and walks the list looking for the key, creating a node if the key is absent; it then retries tryLock up to MAX_SCAN_RETRIES times and, if all attempts fail, blocks the current thread with lock(). If the list head changes in the meantime, the whole process restarts.
```java
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1;
    while (!tryLock()) { // loop while the lock cannot be acquired
        HashEntry<K,V> f;
        if (retries < 0) {
            // Walk the list: find the key, or create the node if absent
            if (e == null) {
                if (node == null) // speculatively create node
                    node = new HashEntry<K,V>(hash, key, value, null);
                retries = 0;
            }
            else if (key.equals(e.key)) // found the key
                retries = 0;
            else
                e = e.next;
        }
        /*
         * Once the key is found (or a new node created), keep trying the
         * lock; after MAX_SCAN_RETRIES attempts (1 on a single core,
         * 64 on multi-core), fall back to a blocking lock().
         */
        else if (++retries > MAX_SCAN_RETRIES) {
            lock();
            break;
        }
        else if ((retries & 1) == 0 &&
                 (f = entryForHash(this, hash)) != first) {
            // The list head changed while spinning: reset retries to -1 and restart
            e = first = f;
            retries = -1;
        }
    }
    return node;
}
```
- rehash doubles the current table. Every element is re-distributed: the elements on the list at oldTable[idx] end up on the lists at newTable[idx] or newTable[idx + n], where n is the old table's size.
```java
private void rehash(HashEntry<K,V> node) {
    HashEntry<K,V>[] oldTable = table;
    int oldCapacity = oldTable.length;
    int newCapacity = oldCapacity << 1; // newTable is twice the size of oldTable
    threshold = (int)(newCapacity * loadFactor);
    HashEntry<K,V>[] newTable =
        (HashEntry<K,V>[]) new HashEntry[newCapacity];
    int sizeMask = newCapacity - 1;
    for (int i = 0; i < oldCapacity; i++) {
        HashEntry<K,V> e = oldTable[i];
        if (e != null) {
            HashEntry<K,V> next = e.next;
            int idx = e.hash & sizeMask; // the node's index in newTable
            if (next == null) // single-node list: move it over directly
                newTable[idx] = e;
            else {
                /*
                 * Optimization over re-hashing every node and head-inserting it:
                 * 1. Compute each node's index in newTable, but do not insert yet.
                 * 2. Remember lastRun, the last node whose new index differs from
                 *    its predecessor's; all nodes after it share one new index.
                 * 3. Put lastRun into newTable; the nodes after it come along.
                 * 4. Clone the nodes before lastRun and head-insert them.
                 */
                HashEntry<K,V> lastRun = e;
                int lastIdx = idx;
                for (HashEntry<K,V> last = next;
                     last != null;
                     last = last.next) {
                    int k = last.hash & sizeMask;
                    if (k != lastIdx) {
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                newTable[lastIdx] = lastRun;
                // Clone remaining nodes
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    V v = p.value;
                    int h = p.hash;
                    int k = h & sizeMask;
                    HashEntry<K,V> n = newTable[k];
                    newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                }
            }
        }
    }
    int nodeIndex = node.hash & sizeMask; // add the new node to newTable
    node.setNext(newTable[nodeIndex]);
    newTable[nodeIndex] = node;
    table = newTable;
}
```
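The fact rehash relies on (and the transfer method in JDK 1.8 relies on it too) is that with a power-of-two capacity n, a node at old index i lands at either i or i + n in the doubled table, decided by the single hash bit `hash & n`. A standalone sketch with invented names:

```java
public class SplitDemo {
    // For a node with hash h at index (h & (n-1)) in a table of size n,
    // the index after doubling is determined by the bit (h & n).
    static int newIndex(int h, int n) {
        int oldIdx = h & (n - 1);
        return (h & n) == 0 ? oldIdx : oldIdx + n;
    }

    public static void main(String[] args) {
        int n = 16;
        // hash 5 (binary 00101): bit 16 clear -> stays at index 5
        System.out.println(newIndex(5, n));  // 5
        // hash 21 (binary 10101): bit 16 set -> moves to 5 + 16 = 21
        System.out.println(newIndex(21, n)); // 21
    }
}
```

This is why a bucket's list always splits into at most two lists, and why the lastRun trick works: consecutive tail nodes with the same `hash & n` bit can be moved as one chunk.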
- remove first tries to take the lock, spinning in scanAndLock on failure (much like put above). Once locked, (tab.length - 1) & hash gives the bucket index of the node to delete; if the list there is non-empty, it is walked comparing each node against the target (when value is null a matching key suffices, otherwise both key and value must match). If the node is found, the predecessor's next pointer is simply redirected to the successor.
```java
final V remove(Object key, int hash, Object value) {
    if (!tryLock())             // try to acquire the lock
        scanAndLock(key, hash); // on failure, spin-wait
    V oldValue = null;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash; // the hash's index in the table
        HashEntry<K,V> e = entryAt(tab, index);
        HashEntry<K,V> pred = null;
        while (e != null) {
            K k;
            HashEntry<K,V> next = e.next;
            if ((k = e.key) == key ||
                (e.hash == hash && key.equals(k))) {
                V v = e.value;
                // With matching keys, a null (or equal) value marks the node to remove
                if (value == null || value == v || value.equals(v)) {
                    if (pred == null)
                        setEntryAt(tab, index, next); // removing the head: next becomes head
                    else
                        pred.setNext(next); // otherwise unlink by pointing pred at next
                    ++modCount; // modification count + 1
                    --count;    // element count - 1
                    oldValue = v;
                }
                break;
            }
            pred = e;
            e = next;
        }
    } finally {
        unlock(); // release the lock
    }
    return oldValue;
}
```
The hash method spreads bits with shifts and XOR so that keys distribute as evenly as possible across the segments:
```java
private int hash(Object k) {
    int h = hashSeed;

    if ((0 != h) && (k instanceof String)) {
        return sun.misc.Hashing.stringHash32((String) k);
    }

    h ^= k.hashCode();

    h += (h << 15) ^ 0xffffcd7d;
    h ^= (h >>> 10);
    h += (h << 3);
    h ^= (h >>> 6);
    h += (h << 2) + (h << 14);
    return h ^ (h >>> 16);
}
```
put method (the map-level put is simple: compute which segment the key falls into, then delegate to that segment's put)
```java
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask; // which segment the key falls into
    if ((s = (Segment<K,V>)UNSAFE.getObject
         (segments, (j << SSHIFT) + SBASE)) == null)
        s = ensureSegment(j); // initialize the segment if it does not exist yet
    return s.put(key, hash, value, false); // delegate to the segment's put
}
```
get method: compute which segment the key falls into; if the segment and its table are non-null, (tab.length - 1) & h gives the bucket index, and the list there is walked comparing nodes. When a key is identical, or the hashes match and equals returns true, the value is returned.
```java
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            // Same key reference, or matching hash plus equals
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}
```
remove method (there are two overloads, one taking only a key and one taking key and value; that is why the segment-level remove treats a null value as "match on key alone")
```java
public V remove(Object key) {
    int hash = hash(key);                  // compute the hash
    Segment<K,V> s = segmentForHash(hash); // which segment the hash falls into
    return s == null ? null : s.remove(key, hash, null); // delegate to the segment
}

/**
 * {@inheritDoc}
 *
 * @throws NullPointerException if the specified key is null
 */
public boolean remove(Object key, Object value) {
    int hash = hash(key);
    Segment<K,V> s;
    return value != null && (s = segmentForHash(hash)) != null &&
        s.remove(key, hash, value) != null;
}
```
JDK1.8
In JDK 1.8, ConcurrentHashMap changed substantially. The segment structure is gone; thread safety now comes from CAS operations plus fine-grained synchronized blocks, an approach closer in spirit to optimistic locking.
Definition
ConcurrentHashMap extends AbstractMap and implements the ConcurrentMap interface.
```java
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
    implements ConcurrentMap<K,V>, Serializable
```
Constants
- Maximum capacity

```java
private static final int MAXIMUM_CAPACITY = 1 << 30;
```

- Default initial capacity: 16. The capacity is always a power of two, and at least 1.

```java
private static final int DEFAULT_CAPACITY = 16;
```

- Largest possible array size, used by toArray and related methods

```java
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
```

- Default concurrency level (no longer used; kept for compatibility with older versions)

```java
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
```

- Default load factor

```java
private static final float LOAD_FACTOR = 0.75f;
```

- Threshold for converting a list into a tree

```java
static final int TREEIFY_THRESHOLD = 8;
```

- Threshold for converting a tree back into a list

```java
static final int UNTREEIFY_THRESHOLD = 6;
```

- Minimum table capacity for treeification (a bucket's list is converted into a tree only when its length reaches 8 *and* the table has at least 64 buckets; below 64 buckets the table is resized instead, which shortens the list)

```java
static final int MIN_TREEIFY_CAPACITY = 64;
```

- Minimum number of buckets each resizing thread claims per transfer step

```java
private static final int MIN_TRANSFER_STRIDE = 16;
```

- Number of bits in sizeCtl used for the resize generation stamp

```java
private static int RESIZE_STAMP_BITS = 16;
```

- Maximum number of threads that can help with one resize

```java
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
```

- Shift used to record the resize stamp in sizeCtl

```java
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
```

- Special node hash values

```java
static final int MOVED     = -1; // the node (bucket) is being transferred
static final int TREEBIN   = -2; // the node is the root of a tree bin
static final int RESERVED  = -3; // transient reservation node
static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash
```
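Because MOVED, TREEBIN, and RESERVED occupy the negative hash values, the spread method masks every user hash with HASH_BITS so a normal node's hash is always non-negative, while also XOR-ing in the high 16 bits to reduce collisions. The snippet below is a standalone re-implementation of that computation, just to check the claim:

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;

    // Same computation as ConcurrentHashMap.spread in JDK 1.8
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // Even a hashCode of -1 spreads to a non-negative value,
        // so it can never be confused with MOVED (-1) or TREEBIN (-2)
        System.out.println(spread(-1) >= 0);            // true
        System.out.println(spread(0x10000) != 0x10000); // true: high bits folded in
    }
}
```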
- Number of CPU cores

```java
static final int NCPU = Runtime.getRuntime().availableProcessors();
```
Fields
- The table, marked volatile (a write by one thread is immediately visible to all others) and transient (skipped by default serialization; the contents live only in memory rather than being persisted as an array)

```java
transient volatile Node<K,V>[] table;
```

- The next table, created during a resize. When another thread sees that this field is non-null, it knows a resize is in progress and helps move the data from the old table into this new one.

```java
private transient volatile Node<K,V>[] nextTable;
```

- baseCount, one of the fields used to compute size

```java
private transient volatile long baseCount;
```

- sizeCtl controls table initialization and resizing:
  - 0: initial value
  - -1: the table is being initialized
  - -N: N-1 threads are resizing together
  - N (positive): if table is null, the initial capacity to use; otherwise, the threshold for the next resize

```java
private transient volatile int sizeCtl;
```

- The next table index to hand out during a resize

```java
private transient volatile int transferIndex;
```

- Spinlock flag used when resizing and/or creating CounterCells

```java
private transient volatile int cellsBusy;
```

```java
private transient volatile CounterCell[] counterCells;
```

- Cached view objects

```java
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
```
Node classes
- Node
```java
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;          // hash value
    final K key;             // key
    volatile V val;          // value
    volatile Node<K,V> next; // next node

    Node(int hash, K key, V val, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.val = val;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return val; }
    public final int hashCode()    { return key.hashCode() ^ val.hashCode(); }
    public final String toString() { return key + "=" + val; }
    public final V setValue(V value) {
        throw new UnsupportedOperationException();
    }

    public final boolean equals(Object o) {
        Object k, v, u; Map.Entry<?,?> e;
        return ((o instanceof Map.Entry) &&
                (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                (v = e.getValue()) != null &&
                (k == key || k.equals(key)) &&
                (v == (u = val) || v.equals(u)));
    }

    /**
     * Virtualized support for map.get(); overridden in subclasses.
     */
    Node<K,V> find(int h, Object k) {
        Node<K,V> e = this;
        if (k != null) {
            do {
                K ek;
                if (e.hash == h &&
                    ((ek = e.key) == k || (ek != null && k.equals(ek))))
                    return e;
            } while ((e = e.next) != null);
        }
        return null;
    }
}
```
- ForwardingNode (the head node placed in a bucket that has been transferred during a resize); it overrides find
```java
static final class ForwardingNode<K,V> extends Node<K,V> {
    final Node<K,V>[] nextTable;
    ForwardingNode(Node<K,V>[] tab) {
        super(MOVED, null, null, null);
        this.nextTable = tab;
    }

    Node<K,V> find(int h, Object k) {
        // loop to avoid arbitrarily deep recursion on forwarding nodes
        outer: for (Node<K,V>[] tab = nextTable;;) {
            Node<K,V> e; int n;
            if (k == null || tab == null || (n = tab.length) == 0 ||
                (e = tabAt(tab, (n - 1) & h)) == null)
                return null;
            for (;;) {
                int eh; K ek;
                if ((eh = e.hash) == h &&
                    ((ek = e.key) == k || (ek != null && k.equals(ek))))
                    return e;
                if (eh < 0) {
                    if (e instanceof ForwardingNode) {
                        tab = ((ForwardingNode<K,V>)e).nextTable;
                        continue outer;
                    }
                    else
                        return e.find(h, k);
                }
                if ((e = e.next) == null)
                    return null;
            }
        }
    }
}
```
Constructors
Different constructors set different sizeCtl values, so the table ends up with a different size when it is eventually initialized.
```java
public ConcurrentHashMap() {
}

public ConcurrentHashMap(int initialCapacity) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}

public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    this.sizeCtl = DEFAULT_CAPACITY;
    putAll(m);
}

public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    this(initialCapacity, loadFactor, 1);
}

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
        MAXIMUM_CAPACITY : tableSizeFor((int)size);
    this.sizeCtl = cap;
}
```
Key methods
- CAS: JDK 1.8 relies mainly on three CAS helpers to keep table access thread-safe; all three are atomic operations.
```java
/**
 * tabAt: volatile read of tab[i]
 */
static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
    return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
}

/**
 * casTabAt: if tab[i] == c, replace it with v and return true; otherwise return false
 */
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i,
                                    Node<K,V> c, Node<K,V> v) {
    return U.compareAndSwapObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
}

/**
 * setTabAt: volatile write of tab[i] = v
 */
static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
    U.putObjectVolatile(tab, ((long)i << ASHIFT) + ABASE, v);
}
```
- tableSizeFor computes the table size: the smallest power of two greater than or equal to the given value
```java
private static final int tableSizeFor(int c) {
    int n = c - 1;
    n |= n >>> 1;  // copy the top 1 bit one position right, e.g. 0100 -> 0110
    n |= n >>> 2;  // copy the top two 1 bits two positions right, e.g. 01100010 -> 01111010
    n |= n >>> 4;  // ...after all five steps every bit below the highest 1 is set
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
```
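The effect of the bit-smearing above is easy to check with a standalone re-implementation (the MAXIMUM_CAPACITY clamp is omitted for brevity): any positive int is rounded up to the next power of two.

```java
public class TableSizeDemo {
    // Same bit trick as tableSizeFor, without the MAXIMUM_CAPACITY clamp:
    // smear the highest 1 bit into every lower position, then add 1.
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(16));  // 16: powers of two map to themselves
        System.out.println(tableSizeFor(17));  // 32: next power of two up
        System.out.println(tableSizeFor(100)); // 128
    }
}
```

Subtracting 1 first is what makes exact powers of two map to themselves instead of doubling.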
- put method
```java
public V put(K key, V value) {
    return putVal(key, value, false);
}

final V putVal(K key, V value, boolean onlyIfAbsent) {
    // Neither keys nor values may be null in ConcurrentHashMap
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable(); // table null or empty: initialize it
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // Empty bucket: CAS in a new node as the head. If several threads
            // race here, the CAS is atomic, so exactly one succeeds and breaks
            // out of the loop; the others retry the for loop.
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;
        }
        else if ((fh = f.hash) == MOVED)
            // Head hash of -1: the table may be resizing; help via helpTransfer
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            synchronized (f) { // lock only this bucket's head node f
                // Recheck after locking: f must not have been replaced meanwhile
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1; // tracks the list length
                        for (Node<K,V> e = f;; ++binCount) {
                            // Walk the list: replace on key match, else append
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) {
                        // Tree-structured bucket: delegate to putTreeVal
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i); // list length >= 8: convert to a tree
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    addCount(1L, binCount); // element count + 1
    return null;
}

static final int spread(int h) {
    return (h ^ (h >>> 16)) & HASH_BITS;
}

final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
    Node<K,V>[] nextTab; int sc;
    if (tab != null && (f instanceof ForwardingNode) &&
        (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) { // a resize is in progress
        int rs = resizeStamp(tab.length); // resize stamp
        while (nextTab == nextTable && table == tab &&
               (sc = sizeCtl) < 0) {
            if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 || // state changed: resize over
                sc == rs + MAX_RESIZERS || transferIndex <= 0)
                break;
            if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                // sizeCtl + 1: one more thread helping with the resize
                transfer(tab, nextTab);
                break;
            }
        }
        return nextTab;
    }
    return table;
}
```
- initTable, the initialization method
```java
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // another thread is initializing: yield the CPU and wait
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            // Won the CAS: sizeCtl is now -1, meaning initialization in progress
            try {
                if ((tab = table) == null || tab.length == 0) {
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    sc = n - (n >>> 2); // resize threshold = 0.75 * n
                }
            } finally {
                sizeCtl = sc; // publish the resize threshold in sizeCtl
            }
            break;
        }
    }
    return tab;
}
```
- get method
```java
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        if ((eh = e.hash) == h) { // check the head node first
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        else if (eh < 0)
            // Negative head hash: a ForwardingNode or tree bin;
            // use the subclass's find method
            return (p = e.find(h, key)) != null ? p.val : null;
        while ((e = e.next) != null) { // otherwise walk the list
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
```
- transfer, the resize method
```java
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // stride of bucket indexes each thread claims
    if (nextTab == null) {            // initialize nextTab
        try {
            @SuppressWarnings("unchecked")
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {      // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        nextTable = nextTab;
        transferIndex = n; // transfer proceeds downward from index n
    }
    int nextn = nextTab.length;
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    boolean advance = true;    // "current bucket handled" flag
    boolean finishing = false; // "entire transfer finished" flag
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        while (advance) { // bucket i handled: move on with --i
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1; // no index range left to claim: leave the while loop
                advance = false;
            }
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                bound = nextBound; // claimed the next range of buckets
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            if (finishing) {
                // Done: clear nextTable, install the new table,
                // set sizeCtl to the new threshold (1.5 * n = 0.75 * 2n)
                nextTable = null;
                table = nextTab;
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                // This thread is done helping: sizeCtl - 1
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return; // not equal: other threads are still transferring
                finishing = advance = true;
                i = n; // recheck before commit
            }
        }
        else if ((f = tabAt(tab, i)) == null)
            advance = casTabAt(tab, i, null, fwd);
        else if ((fh = f.hash) == MOVED)
            advance = true; // this bucket is already being handled
        else {
            synchronized (f) { // lock the bucket's head node
                if (tabAt(tab, i) == f) {
                    Node<K,V> ln, hn; // ln: head for index i; hn: head for index i + n
                    if (fh >= 0) { // fh >= 0 means a plain Node list
                        /*
                         * n is a power of two.
                         * fh & (n-1) gives the index i in the old table;
                         * fh & n == 0 means the node stays at index i in the
                         * new table, otherwise it moves to index i + n.
                         */
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        // lastRun: the last node whose new index differs from its
                        // predecessor's; every node after it shares one new index
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        if (runBit == 0) { // 0: the tail run stays at index i
                            ln = lastRun;
                            hn = null;
                        }
                        else {             // otherwise it goes to index i + n
                            hn = lastRun;
                            ln = null;
                        }
                        // Head-insert the nodes before lastRun into the two lists
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        setTabAt(nextTab, i, ln);     // ln list goes to newTable[i]
                        setTabAt(nextTab, i + n, hn); // hn list goes to newTable[i+n]
                        setTabAt(tab, i, fwd); // mark oldTable[i] done with a ForwardingNode
                        advance = true; // handled
                    }
                    else if (f instanceof TreeBin) { // tree-structured bucket
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        /*
                         * Walk the nodes as a list (tree nodes in ConcurrentHashMap
                         * also keep next pointers, forming a list).
                         * lo/loTail/lc build the list destined for index i;
                         * hi/hiTail/hc build the list destined for index i + n.
                         */
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        /*
                         * If a sublist has at most 6 nodes, untreeify it back into
                         * a plain list; otherwise build a new TreeBin, unless the
                         * other sublist is empty, in which case the old tree is reused.
                         */
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                            (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                            (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        setTabAt(tab, i, fwd);
                        advance = true;
                    }
                }
            }
        }
    }
}
```
- remove method
```java
public V remove(Object key) {
    return replaceNode(key, null, null);
}

final V replaceNode(Object key, V value, Object cv) {
    int hash = spread(key.hashCode()); // compute the hash
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0 ||
            (f = tabAt(tab, i = (n - 1) & hash)) == null)
            break; // empty table or empty bucket: nothing to do
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f); // head hash -1: help the resize first
        else {
            V oldVal = null;
            boolean validated = false;
            synchronized (f) { // lock the bucket's head node
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) { // plain Node list
                        validated = true;
                        for (Node<K,V> e = f, pred = null;;) { // walk the list for the key
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) { // key matches
                                V ev = e.val;
                                if (cv == null || cv == ev ||
                                    (ev != null && cv.equals(ev))) {
                                    // cv null, or the expected value matches too
                                    oldVal = ev;
                                    if (value != null) // non-null value: replace
                                        e.val = value;
                                    else if (pred != null)
                                        pred.next = e.next; // unlink the node
                                    else
                                        // removing the head: its next becomes
                                        // the new head of the bucket
                                        setTabAt(tab, i, e.next);
                                }
                                break;
                            }
                            pred = e;
                            if ((e = e.next) == null)
                                break; // key not found in the list
                        }
                    }
                    else if (f instanceof TreeBin) { // tree-structured bucket
                        validated = true;
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        if ((r = t.root) != null &&
                            (p = r.findTreeNode(hash, key, null)) != null) {
                            // findTreeNode locates the key in the tree
                            V pv = p.val;
                            if (cv == null || cv == pv ||
                                (pv != null && cv.equals(pv))) {
                                oldVal = pv;
                                if (value != null)
                                    p.val = value;
                                else if (t.removeTreeNode(p))
                                    // removeTreeNode returns true when the tree
                                    // is now too small: revert it to a list
                                    setTabAt(tab, i, untreeify(t.first));
                            }
                        }
                    }
                }
            }
            if (validated) {
                // validated is false when the bucket changed before the lock was
                // taken (e.g. a concurrent resize); in that case, retry the loop
                if (oldVal != null) {
                    if (value == null)
                        addCount(-1L, -1); // count - 1
                    return oldVal;
                }
                break;
            }
        }
    }
    return null;
}
```
- addCount adjusts the element count. When check < 0, the resize check is skipped; when check <= 1, the check happens only if there was no contention.
```java
private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // If counterCells is non-null, or the CAS on baseCount fails (contention)
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell a; long v; int m;
        boolean uncontended = true; // no contention so far
        /*
         * If as is null or empty,
         * or the randomly probed cell is null
         * (ThreadLocalRandom.getProbe() supplies the per-thread probe value),
         * or the CAS on that cell fails,
         * fall back to fullAddCount to apply the increment.
         */
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended =
              U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1)
            return;
        s = sumCount(); // compute the element count
    }
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            int rs = resizeStamp(n);
            if (sc < 0) { // negative: a resize or initialization is in progress
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break; // the resize has finished: exit the loop
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    // otherwise help: sizeCtl + 1 means one more resizing thread
                    transfer(tab, nt);
            }
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                // first resizing thread: sizeCtl = (rs << RESIZE_STAMP_SHIFT) + 2
                transfer(tab, null);
            s = sumCount();
        }
    }
}
```
- size computes the number of elements: the sum of baseCount and all the counterCells
```java
public int size() {
    long n = sumCount();
    return ((n < 0L) ? 0 :
            (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
            (int)n);
}

final long sumCount() {
    CounterCell[] as = counterCells; CounterCell a;
    long sum = baseCount;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}
```
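The baseCount/counterCells split can be mimicked with an array of AtomicLong stripes: each increment lands in one cell, and sum() adds them all up, trading an exact instantaneous count for scalable writes. This is only a toy version of the idea (the real JDK code uses padded CounterCell objects, CAS retries, and dynamic growth); every name here is invented for the sketch.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy version of the baseCount + counterCells counting scheme.
public class StripedCounter {
    private final AtomicLong[] cells = new AtomicLong[8];

    public StripedCounter() {
        for (int i = 0; i < cells.length; i++)
            cells[i] = new AtomicLong();
    }

    public void add(long x) {
        // Spread threads across the cells so that concurrent increments
        // rarely contend on the same CAS target
        int i = (int) (Thread.currentThread().getId() & (cells.length - 1));
        cells[i].addAndGet(x);
    }

    public long sum() {
        long s = 0; // like sumCount(): a loose, non-atomic sum over all cells
        for (AtomicLong c : cells)
            s += c.get();
        return s;
    }

    public static void main(String[] args) {
        StripedCounter c = new StripedCounter();
        for (int i = 0; i < 100; i++)
            c.add(1);
        System.out.println(c.sum()); // 100
    }
}
```

Like ConcurrentHashMap's size(), sum() is not a snapshot: cells may change while it iterates, so under concurrent updates the result is only an estimate.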
Summary
- ConcurrentHashMap relies mainly on the CAS methods of sun.misc.Unsafe to make its individual operations atomic.
- In JDK 1.7, ConcurrentHashMap is partitioned into segments; Segment extends ReentrantLock for locking, which keeps the map safe under multiple threads.
- In JDK 1.8, the segment structure is gone: synchronized locks only a bucket's head node in the table. The finer granularity lets more threads operate concurrently, improving throughput.
- In JDK 1.8, an overlong list is converted into a tree for faster lookups; see the previous article for the tree details, as the underlying idea is much the same.