Java ConcurrentHashMap: Segment Locking for Higher Concurrency
ConcurrentHashMap in Depth
JDK7
Segment
Before JDK 8, ConcurrentHashMap used Segment objects for lock striping. Each segment carries its own lock, which lowers the lock granularity and raises concurrency. A Segment is itself essentially a small HashMap: it holds an array of HashEntry objects, and each HashEntry in that array is both a key-value pair and the head node of a linked list.
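A simplified structural sketch of that layout (the field names follow the real JDK 7 source, but constructors and most members are omitted; this is not the complete class):

import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of JDK 7's internal layout
final class HashEntry<K, V> {
    final int hash;
    final K key;
    volatile V value;
    volatile HashEntry<K, V> next; // chains bucket entries into a linked list

    HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
        this.hash = hash; this.key = key; this.value = value; this.next = next;
    }
}

final class Segment<K, V> extends ReentrantLock {
    transient volatile HashEntry<K, V>[] table; // one bucket array per segment
    transient int count;                        // number of entries in this segment
}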
The get method
- Compute the hash of the key.
- Use the hash to locate the corresponding Segment.
- Use the hash again to locate the slot inside that segment's array.
The put method
- Compute the hash of the key.
- Use the hash to locate the corresponding Segment.
- Acquire the segment's reentrant lock.
- Use the hash again to locate the slot inside that segment's array.
- Insert a new HashEntry or overwrite the existing one.
- Release the lock.
The drawback of this design is that every operation hashes twice: the first hash finds the segment, and the second locates the head of the bucket's linked list within it. The toy map below makes this two-step addressing concrete.
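A minimal, runnable toy version of the idea (all names here are hypothetical; this is not the JDK 7 source). The first hash picks a segment and its lock; the second hash picks a bucket inside that segment. Note that the real JDK 7 get is mostly lock-free via volatile reads, while this toy locks on both paths for simplicity:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// A toy striped-lock map illustrating JDK 7's two-step addressing.
class ToySegmentedMap<K, V> {
    private static final int SEGMENTS = 16; // must be a power of two
    private final ReentrantLock[] locks = new ReentrantLock[SEGMENTS];
    private final Map<K, V>[] tables;

    @SuppressWarnings("unchecked")
    ToySegmentedMap() {
        tables = new Map[SEGMENTS];
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new ReentrantLock();
            tables[i] = new HashMap<>();
        }
    }

    // First hash: pick the segment (and its lock).
    private int segmentFor(Object key) {
        int h = key.hashCode();
        return (h ^ (h >>> 16)) & (SEGMENTS - 1);
    }

    public V put(K key, V value) {
        int i = segmentFor(key);
        locks[i].lock();           // lock only one segment, not the whole map
        try {
            return tables[i].put(key, value); // second hash happens inside HashMap
        } finally {
            locks[i].unlock();
        }
    }

    public V get(K key) {
        int i = segmentFor(key);
        locks[i].lock();
        try {
            return tables[i].get(key);
        } finally {
            locks[i].unlock();
        }
    }
}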
JDK8
JDK 8 reworked the design along the lines of HashMap, adopting an array + linked list + red-black tree layout. It abandons segment locking and instead relies heavily on CAS operations, combined with synchronized on individual bucket head nodes.
CAS
CAS stands for compare-and-swap, and it is a form of optimistic locking.
A CAS operation involves three operands: the current value in memory (V), the expected old value (A), and the new value (B). If the value in memory equals A, it is atomically replaced with B; otherwise nothing happens. Callers typically wrap CAS in a retry loop, re-reading the value and retrying until V matches A and the update succeeds.
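A minimal runnable illustration of the CAS retry loop using AtomicInteger (the hand-written loop mirrors what incrementAndGet does internally):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // Classic CAS retry loop: read the expected value, compute the new
        // value, and retry until no other thread changed it in between.
        int expected, updated;
        do {
            expected = counter.get();  // V: current value, also our expected A
            updated = expected + 1;    // B: the value we want to install
        } while (!counter.compareAndSet(expected, updated));

        System.out.println(counter.get()); // 1
    }
}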
Optimistic locking
Optimistic locking assumes that conflicts will rarely happen, so it uses a lock-free approach: conflicts are detected at update time, for example by attaching a version number to each record and checking it before writing. When contention really is low, this performs better than pessimistic locking.
Pessimistic locking
Pessimistic locking assumes that conflicts are bound to happen, so it locks the resource up front. Only one thread may operate on the resource at a time; the next thread gains access only after the previous holder releases the lock. The sketch below contrasts the two styles.
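A small hypothetical sketch contrasting the two styles on the same task (all names are made up for illustration):

import java.util.concurrent.atomic.AtomicLong;

public class LockStyles {
    // Optimistic: no lock; detect conflicts via CAS and retry.
    static final AtomicLong optimisticBalance = new AtomicLong(100);

    static void optimisticWithdraw(long amount) {
        long current, next;
        do {
            current = optimisticBalance.get();
            next = current - amount;
        } while (!optimisticBalance.compareAndSet(current, next)); // retry on conflict
    }

    // Pessimistic: take the lock first; only one thread may touch the balance.
    static long pessimisticBalance = 100;
    static final Object lock = new Object();

    static void pessimisticWithdraw(long amount) {
        synchronized (lock) {   // others block until the lock is released
            pessimisticBalance -= amount;
        }
    }

    public static void main(String[] args) {
        optimisticWithdraw(30);
        pessimisticWithdraw(30);
        System.out.println(optimisticBalance.get() + " " + pessimisticBalance); // 70 70
    }
}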
Source code analysis
Key fields
// The main hash table; lazily initialized on the first insert.
// Its length is always a power of two.
transient volatile Node<K,V>[] table;

// The next table to use; non-null only while resizing, null otherwise.
private transient volatile Node<K,V>[] nextTable;

/**
 * Base counter value, used mainly when there is no contention,
 * but also as a fallback during table initialization
 * races. Updated via CAS.
 */
private transient volatile long baseCount;

// Controls table initialization and resizing:
//   0     the default
//  -1     the table is being initialized
//  > 0    the next resize threshold of the table
//  < -1   a resize is in progress (in JDK 8 the high bits hold a resize
//         stamp and the low bits track the number of resizing threads,
//         rather than the often-quoted -(1 + number of resizing threads))
private transient volatile int sizeCtl;

/**
 * The next table index (plus one) to split while resizing.
 */
private transient volatile int transferIndex;

/**
 * Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
 */
private transient volatile int cellsBusy;

/**
 * Table of counter cells. When non-null, size is a power of 2.
 */
private transient volatile CounterCell[] counterCells;

// views
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
Constructors
/**
 * Creates a new, empty map with the default initial table size (16).
 */
public ConcurrentHashMap() {
}

/**
 * Creates a new, empty map with an initial table size
 * accommodating the specified number of elements without the need
 * to dynamically resize.
 *
 * @param initialCapacity The implementation performs internal
 * sizing to accommodate this many elements.
 * @throws IllegalArgumentException if the initial capacity of
 * elements is negative
 */
public ConcurrentHashMap(int initialCapacity) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}

/**
 * Creates a new map with the same mappings as the given map.
 *
 * @param m the map
 */
public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    this.sizeCtl = DEFAULT_CAPACITY;
    putAll(m);
}

/**
 * Creates a new, empty map with an initial table size based on
 * the given number of elements ({@code initialCapacity}) and
 * initial table density ({@code loadFactor}).
 *
 * @param initialCapacity the initial capacity. The implementation
 * performs internal sizing to accommodate this many elements,
 * given the specified load factor.
 * @param loadFactor the load factor (table density) for
 * establishing the initial table size
 * @throws IllegalArgumentException if the initial capacity of
 * elements is negative or the load factor is nonpositive
 *
 * @since 1.6
 */
public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    this(initialCapacity, loadFactor, 1);
}

/**
 * Creates a new, empty map with an initial table size based on
 * the given number of elements ({@code initialCapacity}), table
 * density ({@code loadFactor}), and number of concurrently
 * updating threads ({@code concurrencyLevel}).
 *
 * @param initialCapacity the initial capacity. The implementation
 * performs internal sizing to accommodate this many elements,
 * given the specified load factor.
 * @param loadFactor the load factor (table density) for
 * establishing the initial table size
 * @param concurrencyLevel the estimated number of concurrently
 * updating threads. The implementation may use this value as
 * a sizing hint.
 * @throws IllegalArgumentException if the initial capacity is
 * negative or the load factor or concurrencyLevel are
 * nonpositive
 */
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
        MAXIMUM_CAPACITY : tableSizeFor((int)size);
    this.sizeCtl = cap;
}
Key methods
The put method
How does ConcurrentHashMap keep insertion thread-safe?
public V put(K key, V value) {
    return putVal(key, value, false);
}
final V putVal(K key, V value, boolean onlyIfAbsent) {
    // ConcurrentHashMap allows neither null keys nor null values
    if (key == null || value == null) throw new NullPointerException();
    // compute the hash
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // tab is null: the table has not been initialized yet, so initialize it
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // the slot is null: no element there yet
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // add the node via CAS
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                   // no lock when adding to empty bin
        }
        // a node hash of MOVED (-1) means the table is being resized
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            // lock the head node of the bucket
            synchronized (f) {
                // re-check that f is still the head of the target slot,
                // in case it was changed in the meantime
                if (tabAt(tab, i) == f) {
                    // an ordinary linked list
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // a red-black tree: a TreeBin's hash is TREEBIN (-2)
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // how binCount was assigned above:
            //   0  - the default (CAS insert into an empty bin)
            //   1+ - linked list, incremented while walking to the insert position
            //   2  - red-black tree
            if (binCount != 0) {
                // if the list length reached 8, convert the bin to a red-black tree
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                // a non-null oldVal means we only replaced an existing key's
                // value without adding an element, so return it directly
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // update the element count (this may trigger a resize)
    addCount(1L, binCount);
    return null;
}
The hash function derives the hash from hashCode. It differs slightly from HashMap's version: after XORing the high 16 bits into the low 16 bits, the result is further ANDed with HASH_BITS. HASH_BITS is 0x7fffffff (31 one-bits in binary), and the AND guarantees that the resulting hash is non-negative.
Negative hash values carry special meaning in ConcurrentHashMap: -1 marks a ForwardingNode and -2 marks a TreeBin.
static final int spread(int h) {
    // (h ^ (h >>> 16)) is the same mixing step as in HashMap;
    // ANDing with HASH_BITS keeps the result non-negative
    return (h ^ (h >>> 16)) & HASH_BITS;
}
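For reference, the related hash constants in the JDK 8 source:

// From the JDK 8 ConcurrentHashMap source
static final int MOVED     = -1; // hash of a ForwardingNode: a resize is in progress
static final int TREEBIN   = -2; // hash of a TreeBin: the bin holds a red-black tree
static final int RESERVED  = -3; // hash of a ReservationNode (used by compute methods)
static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash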
Initializing the hash table
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    // only initialize while the table is still null
    while ((tab = table) == null || tab.length == 0) {
        // sizeCtl < 0 means another thread is already initializing
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // CAS sizeCtl to -1 to claim the right to initialize the table
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    table = tab = nt;
                    // n - (n >>> 2) = n - n/4 = 0.75n, the resize threshold
                    sc = n - (n >>> 2);
                }
            } finally {
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
The addCount method
After a successful insert, putVal calls addCount to update the element count and, if the new size crosses the threshold, to start or help with a resize.

private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // first try a simple CAS on baseCount; fall back to the CounterCell
    // array (the LongAdder scheme) when cells exist or the CAS fails
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell a; long v; int m;
        boolean uncontended = true;
        // pick a cell by the thread's probe value and CAS into it; on
        // contention, defer to fullAddCount, which handles cell creation,
        // expansion, and retries
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended =
              U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1)
            return;
        s = sumCount();
    }
    // check >= 0 means the caller wants a resize check
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        // keep going while the size s has reached the threshold in sizeCtl
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            int rs = resizeStamp(n);
            if (sc < 0) {
                // a resize is already running: bail out if it is finishing,
                // otherwise CAS sizeCtl + 1 to join as a helper thread
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break;
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // no resize running yet: CAS sizeCtl to the negative stamp
            // value and start the transfer as the first resizer
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                transfer(tab, null);
            s = sumCount();
        }
    }
}
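As the code shows, the count is spread across baseCount and a CounterCell array, the same idea as LongAdder. For completeness, size() reads the pieces back through sumCount in the JDK 8 source:

// From the JDK 8 source: aggregates baseCount and all CounterCells
final long sumCount() {
    CounterCell[] as = counterCells; CounterCell a;
    long sum = baseCount;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}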
The computeIfAbsent and putIfAbsent methods
ConcurrentHashMap has two easily confused methods that are a real treat once you put them to good use. Keep these differences in mind (a usage sketch follows the list):
- When the key already exists, putIfAbsent still forces you to compute its value argument up front; if the value is expensive to produce, that work is wasted (pay special attention to this). computeIfAbsent only invokes the mapping function when the key is absent.
- When the key is absent, putIfAbsent returns null (beware of NullPointerException when using the result), while computeIfAbsent returns the newly computed value.
- Null values: with a plain HashMap, putIfAbsent can insert a null value (so a later containsKey returns true), while computeIfAbsent records no mapping when the function returns null (containsKey returns false). ConcurrentHashMap itself forbids null values entirely: putIfAbsent(key, null) throws NullPointerException, and a computeIfAbsent function that returns null simply leaves the key unmapped.
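A small runnable sketch of the first difference, using a hypothetical expensive loader (loadFromDisk is made up for illustration):

import java.util.concurrent.ConcurrentHashMap;

public class CacheDemo {
    static final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();

    // Hypothetical expensive loader used only for illustration.
    static byte[] loadFromDisk(String key) {
        return new byte[1024];
    }

    public static void main(String[] args) {
        // putIfAbsent: loadFromDisk runs eagerly, even when "a" is already cached.
        cache.putIfAbsent("a", loadFromDisk("a"));

        // computeIfAbsent: the function runs only when "a" is absent, and the
        // whole check-then-insert is atomic.
        byte[] data = cache.computeIfAbsent("a", CacheDemo::loadFromDisk);
        System.out.println(data.length); // 1024
    }
}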
That concludes this detailed look at how ConcurrentHashMap uses segment locking, and its JDK 8 successor, to improve concurrency.