• A Deep Dive into ConcurrentHashMap (Based on JDK 1.7)


    I recently found some spare time to work through the ConcurrentHashMap source code. I studied the methods I consider most important and added the comments I felt were necessary. I'm sharing the result with fellow readers here; if anything is wrong, corrections and discussion are very welcome.
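    Before the listing, here is a minimal usage sketch of my own (it is not part of the JDK source) to illustrate what the class Javadoc below promises: a concurrencyLevel hint in the constructor, no null keys or values, and atomic putIfAbsent/replace operations. The class name ChmDemo is just a placeholder for this example.

        import java.util.concurrent.ConcurrentHashMap;

        public class ChmDemo {
            public static void main(String[] args) {
                // same sizing as the no-arg constructor: capacity 16, load factor 0.75f, concurrencyLevel 16
                ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<String, Integer>(16, 0.75f, 16);
                counts.put("a", 1);
                counts.putIfAbsent("a", 2);          // no effect, "a" is already present
                counts.replace("a", 1, 10);          // atomic compare-and-replace
                System.out.println(counts.get("a")); // prints 10
                // counts.put("b", null);            // would throw NullPointerException: nulls are rejected
            }
        }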
    Without further ado, on to the annotated source:
       1 package cn.com.pep.concurrent;
       2 
       3 import java.util.concurrent.ConcurrentMap;
       4 import java.util.concurrent.locks.*;
       5 
       6 import java.util.*;
       7 import java.io.Serializable;
       8 import java.io.IOException;
       9 
      10 /**
      11  * A hash table supporting full concurrency of retrievals and adjustable
      12  * expected concurrency for updates. This class obeys the same functional
      13  * specification as {@link java.util.Hashtable}, and includes versions of
      14  * methods corresponding to each method of <tt>Hashtable</tt>. However, even
      15  * though all operations are thread-safe, retrieval operations do <em>not</em>
      16  * entail locking, and there is <em>not</em> any support for locking the entire
      17  * table in a way that prevents all access. This class is fully interoperable
      18  * with <tt>Hashtable</tt> in programs that rely on its thread safety but not on
      19  * its synchronization details.
      20  *
      21  * <p>
      22  * Retrieval operations (including <tt>get</tt>) generally do not block, so may
      23  * overlap with update operations (including <tt>put</tt> and <tt>remove</tt>).
      24  * Retrievals reflect the results of the most recently <em>completed</em> update
      25  * operations holding upon their onset. For aggregate operations such as
      26  * <tt>putAll</tt> and <tt>clear</tt>, concurrent retrievals may reflect
      27  * insertion or removal of only some entries. Similarly, Iterators and
      28  * Enumerations return elements reflecting the state of the hash table at some
      29  * point at or since the creation of the iterator/enumeration. They do
      30  * <em>not</em> throw {@link ConcurrentModificationException}. However,
      31  * iterators are designed to be used by only one thread at a time.
      32  *
      33  * <p>
      34  * The allowed concurrency among update operations is guided by the optional
      35  * <tt>concurrencyLevel</tt> constructor argument (default <tt>16</tt>), which
      36  * is used as a hint for internal sizing. The table is internally partitioned to
      37  * try to permit the indicated number of concurrent updates without contention.
      38  * Because placement in hash tables is essentially random, the actual
      39  * concurrency will vary. Ideally, you should choose a value to accommodate as
      40  * many threads as will ever concurrently modify the table. Using a
      41  * significantly higher value than you need can waste space and time, and a
      42  * significantly lower value can lead to thread contention. But overestimates
      43  * and underestimates within an order of magnitude do not usually have much
      44  * noticeable impact. A value of one is appropriate when it is known that only
      45  * one thread will modify and all others will only read. Also, resizing this or
      46  * any other kind of hash table is a relatively slow operation, so, when
      47  * possible, it is a good idea to provide estimates of expected table sizes in
      48  * constructors.
      49  *
      50  * <p>
      51  * This class and its views and iterators implement all of the <em>optional</em>
      52  * methods of the {@link Map} and {@link Iterator} interfaces.
      53  *
      54  * <p>
      55  * Like {@link Hashtable} but unlike {@link HashMap}, this class does
      56  * <em>not</em> allow <tt>null</tt> to be used as a key or value.
      57  *
      58  * <p>
      59  * This class is a member of the
      60  * <a href="{@docRoot}/../technotes/guides/collections/index.html"> Java
      61  * Collections Framework</a>.
      62  *
      63  * @since 1.5
      64  * @author Doug Lea
      65  * @param <K> the type of keys maintained by this map
      66  * @param <V> the type of mapped values
      67  */
      68 public class ConcurrentHashMap<K, V> extends AbstractMap<K, V> implements ConcurrentMap<K, V>, Serializable {
      69     private static final long serialVersionUID = 7249069246763182397L;
      70 
      71     /*
      72      * The basic strategy is to subdivide the table among Segments, each of which
      73      * itself is a concurrently readable hash table. To reduce footprint, all but
      74      * one segments are constructed only when first needed (see ensureSegment). To
      75      * maintain visibility in the presence of lazy construction, accesses to
      76      * segments as well as elements of segment's table must use volatile access,
      77      * which is done via Unsafe within methods segmentAt etc below. These provide
      78      * the functionality of AtomicReferenceArrays but reduce the levels of
      79      * indirection. Additionally, volatile-writes of table elements and entry "next"
      80      * fields within locked operations use the cheaper "lazySet" forms of writes
      81      * (via putOrderedObject) because these writes are always followed by lock
      82      * releases that maintain sequential consistency of table updates.
      83      *
      84      * Historical note: The previous version of this class relied heavily on "final"
      85      * fields, which avoided some volatile reads at the expense of a large initial
      86      * footprint. Some remnants of that design (including forced construction of
      87      * segment 0) exist to ensure serialization compatibility.
      88      */
      89 
      90     /* ---------------- Constants -------------- */
      91 
      92     /**
      93      * The default initial capacity for this table, used when not otherwise
      94      * specified in a constructor.
      95      */
      96     static final int DEFAULT_INITIAL_CAPACITY = 16;
      97 
      98     /**
      99      * The default load factor for this table, used when not otherwise specified in
     100      * a constructor.
     101      */
     102     static final float DEFAULT_LOAD_FACTOR = 0.75f;
     103 
     104     /**
     105      * The default concurrency level for this table, used when not otherwise
     106      * specified in a constructor.
     107      */
     108     static final int DEFAULT_CONCURRENCY_LEVEL = 16;
     109 
     110     /**
     111      * The maximum capacity, used if a higher value is implicitly specified by
     112      * either of the constructors with arguments. MUST be a power of two <= 1<<30 to
     113      * ensure that entries are indexable using ints.
     114      */
     115     static final int MAXIMUM_CAPACITY = 1 << 30;
     116 
     117     /**
     118      * The minimum capacity for per-segment tables. Must be a power of two, at least
     119      * two to avoid immediate resizing on next use after lazy construction.
     120      */
     121     static final int MIN_SEGMENT_TABLE_CAPACITY = 2;
     122 
     123     /**
     124      * The maximum number of segments to allow; used to bound constructor arguments.
     125      * Must be power of two less than 1 << 24.
     126      */
     127     static final int MAX_SEGMENTS = 1 << 16; // slightly conservative
     128 
     129     /**
     130      * Number of unsynchronized retries in size and containsValue methods before
     131      * resorting to locking. This is used to avoid unbounded retries if tables
     132      * undergo continuous modification which would make it impossible to obtain an
     133      * accurate result.
     134      */
     135     static final int RETRIES_BEFORE_LOCK = 2;
     136 
     137     /* ---------------- Fields -------------- */
     138 
     139     /**
     140      * holds values which can't be initialized until after VM is booted.
     141      */
     142     private static class Holder {
     143 
     144         /**
     145          * Enable alternative hashing of String keys?
     146          *
     147          * <p>
     148          * Unlike the other hash map implementations we do not implement a threshold for
     149          * regulating whether alternative hashing is used for String keys. Alternative
     150          * hashing is either enabled for all instances or disabled for all instances.
     151          */
     152         static final boolean ALTERNATIVE_HASHING;
     153 
     154         static {
     155             // Use the "threshold" system property even though our threshold
     156             // behaviour is "ON" or "OFF".
     157             String altThreshold = java.security.AccessController
     158                     .doPrivileged(new sun.security.action.GetPropertyAction("jdk.map.althashing.threshold"));
     159 
     160             int threshold;
     161             try {
     162                 threshold = (null != altThreshold) ? Integer.parseInt(altThreshold) : Integer.MAX_VALUE;
     163 
     164                 // disable alternative hashing if -1
     165                 if (threshold == -1) {
     166                     threshold = Integer.MAX_VALUE;
     167                 }
     168 
     169                 if (threshold < 0) {
     170                     throw new IllegalArgumentException("value must be positive integer.");
     171                 }
     172             } catch (IllegalArgumentException failed) {
     173                 throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
     174             }
     175             ALTERNATIVE_HASHING = threshold <= MAXIMUM_CAPACITY;
     176         }
     177     }
     178 
     179     /**
     180      * A randomizing value associated with this instance that is applied to hash
     181      * code of keys to make hash collisions harder to find.
     182      */
     183     private transient final int hashSeed = randomHashSeed(this);
     184 
     185     private static int randomHashSeed(ConcurrentHashMap instance) {
     186         if (sun.misc.VM.isBooted() && Holder.ALTERNATIVE_HASHING) {
     187             return sun.misc.Hashing.randomHashSeed(instance);
     188         }
     189 
     190         return 0;
     191     }
     192 
     193     /**
     194      * Mask value for indexing into segments. The upper bits of a key's hash code
     195      * are used to choose the segment.
     196      */
     197     final int segmentMask;
     198 
     199     /**
     200      * Shift value for indexing within segments.
     201      */
     202     final int segmentShift;
     203 
     204     /**
     205      * The segments, each of which is a specialized hash table.
     206      */
     207     final Segment<K, V>[] segments;
     208 
     209     transient Set<K> keySet;
     210     transient Set<Map.Entry<K, V>> entrySet;
     211     transient Collection<V> values;
     212 
     213     /**
     214      * ConcurrentHashMap list entry. Note that this is never exported out as a
     215      * user-visible Map.Entry.
     216      */
     217     static final class HashEntry<K, V> {
     218         final int hash;
     219         final K key;
     220         volatile V value;
     221         volatile HashEntry<K, V> next;
     222 
     223         HashEntry(int hash, K key, V value, HashEntry<K, V> next) {
     224             this.hash = hash;
     225             this.key = key;
     226             this.value = value;
     227             this.next = next;
     228         }
     229 
     230         /**
     231          * Sets next field with volatile write semantics. (See above about use of
     232          * putOrderedObject.)
     233          */
     234         final void setNext(HashEntry<K, V> n) {
     235             UNSAFE.putOrderedObject(this, nextOffset, n);
     236         }
     237 
     238         // Unsafe mechanics
     239         static final sun.misc.Unsafe UNSAFE;
     240         static final long nextOffset;
     241         static {
     242             try {
     243                 UNSAFE = sun.misc.Unsafe.getUnsafe();
     244                 Class k = HashEntry.class;
     245                 nextOffset = UNSAFE.objectFieldOffset(k.getDeclaredField("next"));
     246             } catch (Exception e) {
     247                 throw new Error(e);
     248             }
     249         }
     250     }
     251 
     252     /**
     253      * Gets the ith element of given table (if nonnull) with volatile read
     254      * semantics. Note: This is manually integrated into a few performance-sensitive
     255      * methods to reduce call overhead.
     256      */
     257     @SuppressWarnings("unchecked")
     258     static final <K, V> HashEntry<K, V> entryAt(HashEntry<K, V>[] tab, int i) {
     259         /* get the head node at index i of the HashEntry array */
     260         return (HashEntry<K, V>) (tab == null ? null : UNSAFE.getObjectVolatile(tab, (i << TSHIFT) + TBASE));
     261     }
     262 
     263     /**
     264      * Sets the ith element of given table, with volatile write semantics. (See
     265      * above about use of putOrderedObject.)
     266      */
     267     static final <K, V> void setEntryAt(HashEntry<K, V>[] tab, int i, HashEntry<K, V> e) {
     268         UNSAFE.putOrderedObject(tab, (i << TSHIFT) + TBASE, e);
     269     }
     270 
     271     /**
     272      * Applies a supplemental hash function to a given hashCode, which defends
     273      * against poor quality hash functions. This is critical because
     274      * ConcurrentHashMap uses power-of-two length hash tables, that otherwise
     275      * encounter collisions for hashCodes that do not differ in lower or upper bits.
     276      */
     277     private int hash(Object k) {
     278         int h = hashSeed;
     279 
     280         if ((0 != h) && (k instanceof String)) {
     281             return sun.misc.Hashing.stringHash32((String) k);
     282         }
     283 
     284         h ^= k.hashCode();
     285 
     286         // Spread bits to regularize both segment and index locations,
     287         // using variant of single-word Wang/Jenkins hash.
     288         h += (h << 15) ^ 0xffffcd7d;
     289         h ^= (h >>> 10);
     290         h += (h << 3);
     291         h ^= (h >>> 6);
     292         h += (h << 2) + (h << 14);
     293         return h ^ (h >>> 16);
     294     }
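             // How the spread hash is consumed downstream (my annotation, not part of the JDK source):
             //   segment index: j = (h >>> segmentShift) & segmentMask   -- the high bits of h select the Segment
             //   bucket index:  i = h & (tab.length - 1)                 -- the low bits select the bucket inside that Segment's table
             // Both formulas appear verbatim in put/get and in Segment.put below.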
     295 
     296     /**
     297      * Segments are specialized versions of hash tables. This subclasses from
     298      * ReentrantLock opportunistically, just to simplify some locking and avoid
     299      * separate construction.
     300      */
     301     static final class Segment<K, V> extends ReentrantLock implements Serializable {
     302         /*
     303          * Segments maintain a table of entry lists that are always kept in a consistent
     304          * state, so can be read (via volatile reads of segments and tables) without
     305          * locking. This requires replicating nodes when necessary during table
     306          * resizing, so the old lists can be traversed by readers still using old
     307          * version of table.
     308          *
     309          * This class defines only mutative methods requiring locking. Except as noted,
     310          * the methods of this class perform the per-segment versions of
     311          * ConcurrentHashMap methods. (Other methods are integrated directly into
     312          * ConcurrentHashMap methods.) These mutative methods use a form of controlled
     313          * spinning on contention via methods scanAndLock and scanAndLockForPut. These
     314          * intersperse tryLocks with traversals to locate nodes. The main benefit is to
     315          * absorb cache misses (which are very common for hash tables) while obtaining
     316          * locks so that traversal is faster once acquired. We do not actually use the
     317          * found nodes since they must be re-acquired under lock anyway to ensure
     318          * sequential consistency of updates (and in any case may be undetectably
     319          * stale), but they will normally be much faster to re-locate. Also,
     320          * scanAndLockForPut speculatively creates a fresh node to use in put if no node
     321          * is found.
     322          */
     323 
     324         private static final long serialVersionUID = 2249069246763182397L;
     325 
     326         /**
     327          * The maximum number of times to tryLock in a prescan before possibly blocking
     328          * on acquire in preparation for a locked segment operation. On multiprocessors,
     329          * using a bounded number of retries maintains cache acquired while locating
     330          * nodes.
     331          */
     332         static final int MAX_SCAN_RETRIES = Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
     333 
     334         /**
     335          * The per-segment table. Elements are accessed via entryAt/setEntryAt providing
     336          * volatile semantics.
     337          */
     338         transient volatile HashEntry<K, V>[] table;
     339 
     340         /**
     341          * The number of elements. Accessed only either within locks or among other
     342          * volatile reads that maintain visibility.
     343          */
     344         transient int count;// number of elements currently in this segment
     345 
     346         /**
     347          * The total number of mutative operations in this segment. Even though this may
     348          * overflow 32 bits, it provides sufficient accuracy for stability checks in
     349          * CHM isEmpty() and size() methods. Accessed only either within locks or among
     350          * other volatile reads that maintain visibility.
     351          */
     352         transient int modCount;// number of write (mutative) operations on this segment
     353 
     354         /**
     355          * The table is rehashed when its size exceeds this threshold. (The value of
     356          * this field is always <tt>(int)(capacity *
     357          * loadFactor)</tt>.)
     358          */
     359         transient int threshold;// resize threshold
     360 
     361         /**
     362          * The load factor for the hash table. Even though this value is same for all
     363          * segments, it is replicated to avoid needing links to outer object.
     364          * 
     365          * @serial
     366          */
     367         final float loadFactor;// load factor
     368 
     369         Segment(float lf, int threshold, HashEntry<K, V>[] tab) {
     370             this.loadFactor = lf;
     371             this.threshold = threshold;
     372             this.table = tab;
     373         }
     374 
     375         final V put(K key, int hash, V value, boolean onlyIfAbsent) {
     376             /* try to acquire the lock; if that fails, spin in scanAndLockForPut */
     377             HashEntry<K, V> node = tryLock() ? null : scanAndLockForPut(key, hash, value);
     378             V oldValue = null;
     379             try {
     380                 HashEntry<K, V>[] tab = table;
     381                 int index = (tab.length - 1) & hash;
     382                 /* head node of the bucket for this hash */
     383                 HashEntry<K, V> first = entryAt(tab, index);
     384                 for (HashEntry<K, V> e = first;;) {
     385                     if (e != null) {
     386                         K k;
     387                         if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
     388                             oldValue = e.value;
     389                             if (!onlyIfAbsent) {
     390                                 e.value = value;
     391                                 ++modCount;
     392                             }
     393                             break;
     394                         }
     395                         e = e.next;
     396                     } else {
     397                         /* a node was already created while spinning for the lock; link it in front of first so it becomes the new head */
     398                         if (node != null) {
     399                             node.next = first;
     400                         } else {
     401                             /* otherwise create a new head node */
     402                             node = new HashEntry<K, V>(hash, key, value, first);
     403                         }
     404 
     405                         int c = count + 1;
     406                         /* check whether this segment's table has to be rehashed (resized) before inserting */
     407                         if (c > threshold && tab.length < MAXIMUM_CAPACITY) {
     408                             rehash(node);
     409                         } else {
     410                             setEntryAt(tab, index, node);
     411                         }
     412                         ++modCount;
     413                         count = c;
     414                         oldValue = null;
     415                         break;
     416                     }
     417                 }
     418             } finally {
     419                 unlock();
     420             }
     421 
     422             return oldValue;
     423         }
     424 
     425         /**
     426          * Doubles size of table and repacks entries, also adding the given node to new
     427          * table
     428          */
     429         @SuppressWarnings("unchecked")
     430         private void rehash(HashEntry<K, V> node) {
     431             HashEntry<K, V>[] oldTable = table;
     432             int oldCapacity = oldTable.length;
     433             int newCapacity = oldCapacity << 1;
     434             threshold = (int) (newCapacity * loadFactor);
     435             HashEntry<K, V>[] newTable = new HashEntry[newCapacity];
     436             int sizeMask = newCapacity - 1;
     437             for (int i = 0; i < oldCapacity; i++) {
     438                 HashEntry<K, V> e = oldTable[i];
     439                 if (e != null) {
     440                     HashEntry<K, V> next = e.next;
     441                     int idx = e.hash & sizeMask;
     442                     /* this bucket holds exactly one node */
     443                     if (next == null) {
     444                         newTable[idx] = e;
     445                     } else {
     446                         HashEntry<K, V> lastRun = e;
     447                         int lastIdx = idx;
     448                         /* find lastRun, the node from which every following node maps to the same bucket of the new table, so that whole tail can be moved in one step */
     449                         for (HashEntry<K, V> last = next; last != null; last = last.next) {
     450                             int k = last.hash & sizeMask;
     451                             if (k != lastIdx) {
     452                                 lastIdx = k;
     453                                 lastRun = last;
     454                             }
     455                         }
     456                         
     457                         /* after this loop, lastIdx is the index of that tail's bucket in the new table */
     458                         
     459                         /*
     460                          * Key points:
     461                          *   1. The per-segment table length is always a power of two; the bucket index of an entry is i = e.hash & (tab.length - 1).
     462                          *   2. If the old table length is 2^k, the old index m depends only on the low k bits of the hash.
     463                          *   3. After doubling, the new length is 2^(k+1), so the new index n depends on the low k+1 bits (the hash itself never changes).
     464                          *   4. The two indices can differ only in bit k, so n == m or n == m + oldCapacity, where oldCapacity = 2^k.
     465                          *   5. Hence each node of an old bucket lands in one of exactly two new buckets (m or m + oldCapacity), and nodes from different old buckets never share a new bucket.
     466                          */
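                                 /*
                                  * Worked example (my annotation, not part of the JDK source): with an old table length of 4 (k = 2),
                                  * a bucket at index m = 1 may hold nodes whose hashes end in binary ...001 and ...101. After doubling
                                  * to length 8 the mask becomes 0b111, so the first node stays at index 1 while the second moves to
                                  * index 5 = 1 + 4, i.e. every node goes to either m or m + oldCapacity, as stated above.
                                  */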
     467                         
     468                         /* by point 5 above, nodes from different old buckets never share a new bucket, so the lastRun tail can simply be linked into its bucket in the new table (which may or may not be the old index) */
     469                         newTable[lastIdx] = lastRun;
     470                         /* clone the nodes before lastRun, inserting each at the head of its bucket in the new table */
     471                         for (HashEntry<K, V> p = e; p != lastRun; p = p.next) {
     472                             V v = p.value;
     473                             int h = p.hash;
     474                             int k = h & sizeMask;
     475                             HashEntry<K, V> n = newTable[k];
     476                             newTable[k] = new HashEntry<K, V>(h, p.key, v, n);
     477                         }
     478                     }
     479                 }
     480             }
     481             /* finally link the node being inserted in as the head of its bucket in the new table */
     482             int nodeIndex = node.hash & sizeMask;
     483             node.setNext(newTable[nodeIndex]);
     484             newTable[nodeIndex] = node;
     485             table = newTable;
     486         }
     487 
     488         /**
     489          * Scans for a node containing given key while trying to acquire lock, creating
     490          * and returning one if not found. Upon return, guarantees that lock is held.
     491          * Unlike in most methods, calls to method equals are not screened: Since
     492          * traversal speed doesn't matter, we might as well help warm up the associated
     493          * code and accesses as well.
     494          *
     495          * @return a new node if key not found, else null
     496          */
     497         private HashEntry<K, V> scanAndLockForPut(K key, int hash, V value) {
     498             /* head node of the bucket for this hash, re-read while spinning for the lock */
     499             HashEntry<K, V> first = entryForHash(this, hash);
     500             HashEntry<K, V> e = first;
     501             HashEntry<K, V> node = null;
     502             int retries = -1;
     503             while (!tryLock()) {
     504                 HashEntry<K, V> f;
     505                 if (retries < 0) {
     506                     if (e == null) {
     507                         /* the whole bucket has been traversed without finding the key, so speculatively create a new node */
     508                         if (node == null) {
     509                             node = new HashEntry<K, V>(hash, key, value, null);
     510                         }
     511                         retries = 0;
     512                     } else if (key.equals(e.key)) {
     513                         retries = 0;
     514                     } else {
     515                         e = e.next;
     516                     }
     517                 } else if (++retries > MAX_SCAN_RETRIES) {
     518                     /* reached the maximum number of spins; stop spinning and block on the lock */
     519                     lock();
     520                     break;
     521                 } else if (((retries & 1) == 0) && (f = entryForHash(this, hash)) != first) {
     522                     /* the head node changed while spinning, meaning a concurrent write happened, so restart the traversal */
     523                     e = first = f;
     524                     retries = -1;
     525                 }
     526             }
     527 
     528             /* node is null if the lock was acquired on the first try, or before the bucket had been fully traversed */
     529             return node;
     530         }
     531 
     532         /**
     533          * Scans for a node containing the given key while trying to acquire lock for a
     534          * remove or replace operation. Upon return, guarantees that lock is held. Note
     535          * that we must lock even if the key is not found, to ensure sequential
     536          * consistency of updates.
     537          */
     538         private void scanAndLock(Object key, int hash) {
     539             // similar to but simpler than scanAndLockForPut
     540             HashEntry<K, V> first = entryForHash(this, hash);
     541             HashEntry<K, V> e = first;
     542             int retries = -1;
     543             /* traverse the bucket while spinning for the lock, warming up the cache for the locked operation that follows */
     544             while (!tryLock()) {
     545                 HashEntry<K, V> f;
     546                 if (retries < 0) {
     547                     if (e == null || key.equals(e.key)) {
     548                         retries = 0;
     549                     } else
     550                         e = e.next;
     551                 } else if (++retries > MAX_SCAN_RETRIES) {
     552                     lock();
     553                     break;
     554                 } else if (((retries & 1) == 0) && ((f = entryForHash(this, hash)) != first)) {
     555                     /* the head node changed while trying to acquire the lock, so restart the traversal */
     556                     e = first = f;
     557                     retries = -1;
     558                 }
     559             }
     560         }
     561 
     562         /**
     563          * Remove; match on key only if value null, else match both.
     564          */
     565         final V remove(Object key, int hash, Object value) {
     566             V oldValue = null;
     567             if (!tryLock()) {
     568                 scanAndLock(key, hash);
     569             }
     570             try {
     571                 HashEntry<K, V>[] tab = table;
     572                 int index = (tab.length - 1) & hash;
     573                 /* current node */
     574                 HashEntry<K, V> e = entryAt(tab, index);
     575                 /* previous node before current node */
     576                 HashEntry<K, V> prev = null;
     577                 while (e != null) {
     578                     K k;
     579                     HashEntry<K, V> next = e.next;
     580                     if ((k = e.key) == key || ((e.hash == hash) && k.equals(key))) {
     581                         V v = e.value;
     582                         /* if value is null, match on key only; otherwise both key and value must match */
     583                         if (value == null || value == v || value.equals(v)) {
     584                             if (prev == null) {
     585                                 setEntryAt(tab, index, next);
     586                             } else {
     587                                 prev.setNext(next);
     588                             }
     589                             modCount++;
     590                             count--;
     591                             oldValue = v;
     592                         }
     593                         break;
     594                     }
     595                     prev = e;
     596                     e = next;
     597                 }
     598             } finally {
     599                 unlock();
     600             }
     601             return oldValue;
     602         }
     603 
     604         final boolean replace(K key, int hash, V oldValue, V newValue) {
     605             boolean replaced = false;
     606             if (!tryLock()) {
     607                 /* failed to get the lock immediately; spin a bounded number of times, then block */
     608                 scanAndLock(key, hash);
     609             }
     610 
     611             try {
     612                 for (HashEntry<K, V> e = entryForHash(this, hash); e != null; e = e.next) {
     613                     K k;
     614                     if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
     615                         /* replace only if the key matches and the current value equals oldValue */
     616                         if (oldValue.equals(e.value)) {
     617                             e.value = newValue;
     618                             ++modCount;
     619                             replaced = true;
     620                         }
     621                         break;
     622                     }
     623                 }
     624             } finally {
     625                 unlock();
     626             }
     627             return replaced;
     628         }
     629 
     630         final V replace(K key, int hash, V value) {
     631             V oldValue = null;
     632             if (!tryLock()) {
     633                 scanAndLock(key, hash);
     634             }
     635 
     636             try {
     637                 HashEntry<K, V> e;
     638                 for (e = entryForHash(this, hash); e != null; e = e.next) {
     639                     K k;
     640                     if ((k = e.key) == key || (e.hash == hash && key.equals(k))) {
     641                         oldValue = e.value;
     642                         e.value = value;
     643                         ++modCount;
     644                         break;
     645                     }
     646                 }
     647             } finally {
     648                 unlock();
     649             }
     650             return oldValue;
     651         }
     652 
     653         final void clear() {
     654             lock();
     655             try {
     656                 HashEntry<K, V>[] tab = table;
     657                 for (int i = 0; i < tab.length; i++)
     658                     setEntryAt(tab, i, null);
     659                 ++modCount;
     660                 count = 0;
     661             } finally {
     662                 unlock();
     663             }
     664         }
     665     }
     666 
     667     // Accessing segments
     668 
     669     /**
     670      * Gets the jth element of given segment array (if nonnull) with volatile
     671      * element access semantics via Unsafe. (The null check can trigger harmlessly
     672      * only during deserialization.) Note: because each element of segments array is
     673      * set only once (using fully ordered writes), some performance-sensitive
     674      * methods rely on this method only as a recheck upon null reads.
     675      */
     676     @SuppressWarnings("unchecked")
     677     static final <K, V> Segment<K, V> segmentAt(Segment<K, V>[] ss, int j) {
     678         long u = (j << SSHIFT) + SBASE;
     679         return ss == null ? null : (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u);
     680     }
     681 
     682     /**
     683      * Returns the segment for the given index, creating it and recording in segment
     684      * table (via CAS) if not already present.
     685      *
     686      * @param k the index
     687      * @return the segment
     688      */
     689     @SuppressWarnings("unchecked")
     690     private Segment<K, V> ensureSegment(int k) {
     691         /* create a segment for this slot if it does not exist yet */
     692         Segment<K, V>[] ss = this.segments;
     693         long u = (k << SSHIFT) + SBASE;
     694         Segment<K, V> seg;
     695         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
     696             /* use segment[0] as the prototype for sizing the new segment */
     697             Segment<K, V> proto = ss[0];
     698             int len = proto.table.length;
     699             float lf = proto.loadFactor;
     700             int threshold = (int) (len * lf);
     701             HashEntry<K, V>[] tab = new HashEntry[len];
     702             if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
     703                 Segment<K, V> segment = new Segment<>(lf, threshold, tab);
     704                 while ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(ss, u)) == null) {
     705                     if (UNSAFE.compareAndSwapObject(ss, u, null, seg = segment)) {
     706                         break;
     707                     }
     708                 }
     709             }
     710         }
     711         return seg;
     712     }
     713 
     714     // Hash-based segment and entry accesses
     715 
     716     /**
     717      * Get the segment for the given hash
     718      */
     719     @SuppressWarnings("unchecked")
     720     private Segment<K, V> segmentForHash(int h) {
     721         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
     722         return (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u);
     723     }
     724 
     725     /**
     726      * Gets the table entry for the given segment and hash
     727      */
     728     @SuppressWarnings("unchecked")
     729     static final <K, V> HashEntry<K, V> entryForHash(Segment<K, V> seg, int h) {
     730         /* get the head node of the bucket for this hash in the given segment's table */
     731         HashEntry<K, V>[] tab;
     732         return (HashEntry<K, V>) ((seg == null || (tab = seg.table) == null) ? null
     733                 : UNSAFE.getObjectVolatile(tab, (((tab.length - 1) & h) << TSHIFT) + TBASE));
     734     }
     735 
     736     /* ---------------- Public operations -------------- */
     737 
     738     /**
     739      * Creates a new, empty map with the specified initial capacity, load factor and
     740      * concurrency level.
     741      *
     742      * @param initialCapacity  the initial capacity. The implementation performs
     743      *                         internal sizing to accommodate this many elements.
     744      * @param loadFactor       the load factor threshold, used to control resizing.
     745      *                         Resizing may be performed when the average number of
     746      *                         elements per bin exceeds this threshold.
     747      * @param concurrencyLevel the estimated number of concurrently updating
     748      *                         threads. The implementation performs internal sizing
     749      *                         to try to accommodate this many threads.
     750      * @throws IllegalArgumentException if the initial capacity is negative or the
     751      *                                  load factor or concurrencyLevel are
     752      *                                  nonpositive.
     753      */
     754     @SuppressWarnings("unchecked")
     755     public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
     756         if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
     757             throw new IllegalArgumentException();
     758         if (concurrencyLevel > MAX_SEGMENTS)
     759             concurrencyLevel = MAX_SEGMENTS;
     760         // Find power-of-two sizes best matching arguments
     761         int sshift = 0;
     762         int ssize = 1;
     763         while (ssize < concurrencyLevel) {
     764             ++sshift;
     765             ssize <<= 1;
     766         }
     767         this.segmentShift = 32 - sshift;
     768         this.segmentMask = ssize - 1;
     769         if (initialCapacity > MAXIMUM_CAPACITY)
     770             initialCapacity = MAXIMUM_CAPACITY;
     771         int c = initialCapacity / ssize;
     772         if (c * ssize < initialCapacity)
     773             ++c;
     774         int cap = MIN_SEGMENT_TABLE_CAPACITY;
     775         while (cap < c)
     776             cap <<= 1;
     777         // create segments and segments[0]
     778         Segment<K, V> s0 = new Segment<K, V>(loadFactor, (int) (cap * loadFactor),
     779                 (HashEntry<K, V>[]) new HashEntry[cap]);
     780         Segment<K, V>[] ss = (Segment<K, V>[]) new Segment[ssize];
     781         UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
     782         this.segments = ss;
     783     }
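             // Worked example (my annotation, not part of the JDK source): with the defaults
             // (initialCapacity = 16, loadFactor = 0.75f, concurrencyLevel = 16) the loop above gives
             // ssize = 16 and sshift = 4, so segmentShift = 28 and segmentMask = 15; c = 16 / 16 = 1,
             // so each segment table starts at cap = MIN_SEGMENT_TABLE_CAPACITY = 2 with a threshold of (int) (2 * 0.75f) = 1.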
     784 
     785     /**
     786      * Creates a new, empty map with the specified initial capacity and load factor
     787      * and with the default concurrencyLevel (16).
     788      *
     789      * @param initialCapacity The implementation performs internal sizing to
     790      *                        accommodate this many elements.
     791      * @param loadFactor      the load factor threshold, used to control resizing.
     792      *                        Resizing may be performed when the average number of
     793      *                        elements per bin exceeds this threshold.
     794      * @throws IllegalArgumentException if the initial capacity of elements is
     795      *                                  negative or the load factor is nonpositive
     796      *
     797      * @since 1.6
     798      */
     799     public ConcurrentHashMap(int initialCapacity, float loadFactor) {
     800         this(initialCapacity, loadFactor, DEFAULT_CONCURRENCY_LEVEL);
     801     }
     802 
     803     /**
     804      * Creates a new, empty map with the specified initial capacity, and with
     805      * default load factor (0.75) and concurrencyLevel (16).
     806      *
     807      * @param initialCapacity the initial capacity. The implementation performs
     808      *                        internal sizing to accommodate this many elements.
     809      * @throws IllegalArgumentException if the initial capacity of elements is
     810      *                                  negative.
     811      */
     812     public ConcurrentHashMap(int initialCapacity) {
     813         this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
     814     }
     815 
     816     /**
     817      * Creates a new, empty map with a default initial capacity (16), load factor
     818      * (0.75) and concurrencyLevel (16).
     819      */
     820     public ConcurrentHashMap() {
     821         this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL);
     822     }
     823 
     824     /**
     825      * Creates a new map with the same mappings as the given map. The map is created
     826      * with a capacity of 1.5 times the number of mappings in the given map or 16
     827      * (whichever is greater), and a default load factor (0.75) and concurrencyLevel
     828      * (16).
     829      *
     830      * @param m the map
     831      */
     832     public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
     833         this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR,
     834                 DEFAULT_CONCURRENCY_LEVEL);
     835         putAll(m);
     836     }
     837 
     838     /**
     839      * Returns <tt>true</tt> if this map contains no key-value mappings.
     840      *
     841      * @return <tt>true</tt> if this map contains no key-value mappings
     842      */
     843     public boolean isEmpty() {
     844         /*
     845          * Sum per-segment modCounts to avoid mis-reporting when elements are
     846          * concurrently added and removed in one segment while checking another, in
     847          * which case the table was never actually empty at any point. (The sum ensures
     848          * accuracy up through at least 1<<31 per-segment modifications before recheck.)
     849          * Methods size() and containsValue() use similar constructions for stability
     850          * checks.
     851          */
     852         /*
     853          * Overall idea: traverse segments[] twice without locking. On the first pass, return false as soon as any
     854          * segment has a non-zero count; otherwise accumulate each segment's modCount. If the accumulated modCount is 0,
     855          * nothing has ever been written to segments[], so return true. Otherwise do a second pass: again return false
     856          * if any segment is non-empty, and subtract each segment's modCount from the first sum. If the result is
     857          * non-zero, a write happened between the two passes, so the map cannot be reported as empty.
     858          */
     859         long sum = 0L;
     860         final Segment<K, V>[] segments = this.segments;
     861         for (int i = 0; i < segments.length; i++) {
     862             Segment<K, V> seg = segmentAt(segments, i);
     863             if (seg != null) {
     864                 if (seg.count != 0) {
     865                     return false;
     866                 }
     867                 sum += seg.modCount;
     868             }
     869         }
     870 
     871         if (sum != 0L) {
     872             for (int i = 0; i < segments.length; i++) {
     873                 Segment<K, V> seg = segmentAt(segments, i);
     874                 if (seg != null) {
     875                     if (seg.count != 0) {
     876                         return false;
     877                     }
     878                     sum -= seg.modCount;
     879                 }
     880             }
     881 
     882             if (sum != 0) {
     883                 return false;
     884             }
     885         }
     886         return true;
     887     }
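             // Why the second pass subtracts (my annotation, not part of the JDK source): if a key is inserted into
             // segment A and removed again while we are scanning segment B, both passes observe count == 0 everywhere,
             // but A's modCount has grown between the two reads, so the subtraction leaves sum != 0 and the method
             // returns false -- correct, because the map really was non-empty at some instant during the check.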
     888 
     889     /**
     890      * Returns the number of key-value mappings in this map. If the map contains
     891      * more than <tt>Integer.MAX_VALUE</tt> elements, returns
     892      * <tt>Integer.MAX_VALUE</tt>.
     893      *
     894      * @return the number of key-value mappings in this map
     895      */
     896     public int size() {
     897         /* concurrent writes may be happening in other segments while we count, so the result is only a snapshot over some window of time */
     898         final Segment<K, V>[] segments = this.segments;
     899         int size;// computed size
     900         long sum;// sum of all segments' modCounts on this pass
     901         long last = 0L;// sum of modCounts from the previous pass
     902         int retries = -1;// the first pass is not counted as a retry
     903         boolean overflow;// true if the size overflows an int
     904         try {
     905             /*
     906              * Overall idea: compute the size without locking first; if two consecutive unlocked passes over segments[] see the same total modCount,
     907              * no concurrent write happened and that result is used. Once the retry limit (RETRIES_BEFORE_LOCK) is reached, every segment is locked and the size is recomputed under lock.
     908              */
     909             for (;;) {
     910                 /* retry limit reached: lock every segment and count under lock */
     911                 if (retries++ == RETRIES_BEFORE_LOCK) {
     912                     for (int i = 0; i < segments.length; i++) {
     913                         segmentAt(segments, i).lock();
     914                     }
     915                 }
     916 
     917                 sum = 0L;
     918                 size = 0;
     919                 overflow = false;
     920                 for (int i = 0; i < segments.length; i++) {
     921                     Segment<K, V> seg = segmentAt(segments, i);
     922                     if (seg != null) {
     923                         sum += seg.modCount;
     924                         int c = seg.count;
     925                         /* the total size overflowed an int */
     926                         if (c < 0 || (size += c) < 0) {
     927                             overflow = true;
     928                         }
     929                     }
     930                 }
     931 
     932                 /* equal modCount totals on two consecutive passes mean the size was stable over that window, so stop; otherwise retry, and once the limit is reached lock all segments and recount */
     933                 if (sum == last) {
     934                     break;
     935                 }
     936                 last = sum;
     937             }
     938         } finally {
     939             if (retries > RETRIES_BEFORE_LOCK) {
     940                 for (int i = 0; i < segments.length; i++) {
     941                     segmentAt(segments, i).unlock();
     942                 }
     943             }
     944         }
     945         return overflow ? Integer.MAX_VALUE : size;
     946     }
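             // Pass counting (my annotation, not part of the JDK source): retries starts at -1, so the loop makes up to
             // three unlocked passes (retries = 0, 1, 2); it returns early whenever two consecutive passes see the same
             // modCount total. On the fourth pass (retries++ == RETRIES_BEFORE_LOCK) every segment is locked, which
             // freezes all modCounts and guarantees termination with an exact count.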
     947 
     948     /**
     949      * Returns the value to which the specified key is mapped, or {@code null} if
     950      * this map contains no mapping for the key.
     951      *
     952      * <p>
     953      * More formally, if this map contains a mapping from a key {@code k} to a value
     954      * {@code v} such that {@code key.equals(k)}, then this method returns
     955      * {@code v}; otherwise it returns {@code null}. (There can be at most one such
     956      * mapping.)
     957      *
     958      * @throws NullPointerException if the specified key is null
     959      */
     960     @SuppressWarnings("unchecked")
     961     public V get(Object key) {
     962         Segment<K, V> seg;
     963         HashEntry<K, V>[] tab;
     964         int h = hash(key);
     965         /* compute the in-memory offset of the segments[] slot that this key hashes to */
     966         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
     967         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = seg.table) != null) {
     968             for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile(seg,
     969                     (((tab.length - 1) & h) << TSHIFT) + TBASE); e != null; e = e.next) {
     970                 K k;
     971                 if ((k = e.key) == key || (e.hash == h && key.equals(k))) {
     972                     return e.value;
     973                 }
     974             }
     975         }
     976         return null;
     977     }
     978 
     979     /**
     980      * Tests if the specified object is a key in this table.
     981      *
     982      * @param key possible key
     983      * @return <tt>true</tt> if and only if the specified object is a key in this
     984      *         table, as determined by the <tt>equals</tt> method; <tt>false</tt>
     985      *         otherwise.
     986      * @throws NullPointerException if the specified key is null
     987      */
     988     @SuppressWarnings("unchecked")
     989     public boolean containsKey(Object key) {
     990         Segment<K, V> seg;
     991         HashEntry<K, V>[] tab;
     992         int h = hash(key);
     993         long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
     994         if ((seg = (Segment<K, V>) UNSAFE.getObjectVolatile(segments, u)) != null && (tab = seg.table) != null) {
     995             long ut = (((tab.length - 1) & h) << TSHIFT) + TBASE;
     996             for (HashEntry<K, V> e = (HashEntry<K, V>) UNSAFE.getObjectVolatile(tab, ut); e != null; e = e.next) {
     997                 K k;
     998                 if ((k = e.key) == key || (e.hash == h && key.equals(k))) {
     999                     return true;
    1000                 }
    1001             }
    1002         }
    1003 
    1004         return false;
    1005     }
    1006 
    1007     /**
    1008      * Returns <tt>true</tt> if this map maps one or more keys to the specified
    1009      * value. Note: This method requires a full internal traversal of the hash
    1010      * table, and so is much slower than method <tt>containsKey</tt>.
    1011      *
    1012      * @param value value whose presence in this map is to be tested
    1013      * @return <tt>true</tt> if this map maps one or more keys to the specified
    1014      *         value
    1015      * @throws NullPointerException if the specified value is null
    1016      */
    1017     public boolean containsValue(Object value) {
    1018         /* same overall approach as size(); see the comments there */
    1019         if (value == null) {
    1020             throw new NullPointerException();
    1021         }
    1022 
    1023         final Segment<K, V>[] segments = this.segments;
    1024         boolean found = false;
    1025         long last = 0;
    1026         int retries = -1;
    1027         try {
    1028             /* label for breaking out of the whole scan once a match is found */
    1029             outer: for (;;) {
    1030                 if (retries++ == RETRIES_BEFORE_LOCK) {
    1031                     for (int i = 0; i < segments.length; i++) {
    1032                         segmentAt(segments, i).lock();
    1033                     }
    1034                 }
    1035 
    1036                 /* accumulates modCounts to detect concurrent writes */
    1037                 long sum = 0L;
    1038                 for (int i = 0; i < segments.length; i++) {
    1039                     Segment<K, V> seg = segmentAt(segments, i);
    1040                     HashEntry<K, V>[] tab;
    1041                     if (seg != null && (tab = seg.table) != null) {
    1042                         for (int j = 0; j < tab.length; j++) {
    1043                             HashEntry<K, V> e;
    1044                             for (e = entryAt(tab, j); e != null; e = e.next) {
    1045                                 V v = e.value;
    1046                                 if (v != null && value.equals(v)) {
    1047                                     found = true;
    1048                                     /* found a matching value: break out of the outermost loop */
    1049                                     break outer;
    1050                                 }
    1051                             }
    1052                         }
    1053                         sum += seg.modCount;
    1054                     }
    1055                 }
    1056 
    1057                 if (retries > 0 && sum == last) {
    1058                     break;
    1059                 }
    1060                 last = sum;
    1061             }
    1062         } finally {
    1063             if (retries > RETRIES_BEFORE_LOCK) {
    1064                 for (int i = 0; i < segments.length; i++) {
    1065                     segmentAt(segments, i).unlock();
    1066                 }
    1067             }
    1068         }
    1069         return found;
    1070     }
    1071 
    1072     /**
    1073      * Legacy method testing if some key maps into the specified value in this
    1074      * table. This method is identical in functionality to {@link #containsValue},
    1075      * and exists solely to ensure full compatibility with class
    1076      * {@link java.util.Hashtable}, which supported this method prior to
    1077      * introduction of the Java Collections framework.
    1078      * 
    1079      * @param value a value to search for
    1080      * @return <tt>true</tt> if and only if some key maps to the <tt>value</tt>
    1081      *         argument in this table as determined by the <tt>equals</tt> method;
    1082      *         <tt>false</tt> otherwise
    1083      * @throws NullPointerException if the specified value is null
    1084      */
    1085     public boolean contains(Object value) {
    1086         return containsValue(value);
    1087     }
    1088 
    1089     /**
    1090      * Maps the specified key to the specified value in this table. Neither the key
    1091      * nor the value can be null.
    1092      *
    1093      * <p>
    1094      * The value can be retrieved by calling the <tt>get</tt> method with a key that
    1095      * is equal to the original key.
    1096      *
    1097      * @param key   key with which the specified value is to be associated
    1098      * @param value value to be associated with the specified key
    1099      * @return the previous value associated with <tt>key</tt>, or <tt>null</tt> if
    1100      *         there was no mapping for <tt>key</tt>
    1101      * @throws NullPointerException if the specified key or value is null
    1102      */
    1103     @SuppressWarnings("unchecked")
    1104     public V put(K key, V value) {
    1105         Segment<K, V> s;
    1106         if (value == null) {
    1107             throw new NullPointerException();
    1108         }
    1109         int hash = hash(key);
    1110         int j = (hash >>> segmentShift) & segmentMask;
    1111         /* if the segment for this slot has not been created yet, create it */
    1112         if ((s = (Segment<K, V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null) {
    1113             s = ensureSegment(j);
    1114         }
    1115         return s.put(key, hash, value, false);
    1116     }
    1117 
    1118     /**
    1119      * {@inheritDoc}
    1120      *
    1121      * @return the previous value associated with the specified key, or
    1122      *         <tt>null</tt> if there was no mapping for the key
    1123      * @throws NullPointerException if the specified key or value is null
    1124      */
    1125     @SuppressWarnings("unchecked")
    1126     public V putIfAbsent(K key, V value) {
    1127         Segment<K, V> s;
    1128         if (value == null)
    1129             throw new NullPointerException();
    1130         int hash = hash(key);
    1131         int j = (hash >>> segmentShift) & segmentMask;
    1132         if ((s = (Segment<K, V>) UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null)
    1133             s = ensureSegment(j);
    1134         return s.put(key, hash, value, true);
    1135     }
    1136 
    1137     /**
    1138      * Copies all of the mappings from the specified map to this one. These mappings
    1139      * replace any mappings that this map had for any of the keys currently in the
    1140      * specified map.
    1141      *
    1142      * @param m mappings to be stored in this map
    1143      */
    1144     public void putAll(Map<? extends K, ? extends V> m) {
    1145         for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
    1146             put(e.getKey(), e.getValue());
    1147     }
    1148 
    1149     /**
    1150      * Removes the key (and its corresponding value) from this map. This method does
    1151      * nothing if the key is not in the map.
    1152      *
    1153      * @param key the key that needs to be removed
    1154      * @return the previous value associated with <tt>key</tt>, or <tt>null</tt> if
    1155      *         there was no mapping for <tt>key</tt>
    1156      * @throws NullPointerException if the specified key is null
    1157      */
    1158     public V remove(Object key) {
    1159         int hash = hash(key);
    1160         Segment<K, V> s = segmentForHash(hash);
    1161         return s == null ? null : s.remove(key, hash, null);
    1162     }
    1163 
    1164     /**
    1165      * {@inheritDoc}
    1166      *
    1167      * @throws NullPointerException if the specified key is null
    1168      */
    1169     public boolean remove(Object key, Object value) {
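                 // Conditional removal: the entry is removed only if the key is currently mapped to the
                 // given value (compared with equals); a null value can never match, so false is returned.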
    1170         int hash = hash(key);
    1171         Segment<K, V> s;
    1172         return value != null && (s = segmentForHash(hash)) != null && s.remove(key, hash, value) != null;
    1173     }
    1174 
    1175     /**
    1176      * {@inheritDoc}
    1177      *
    1178      * @throws NullPointerException if any of the arguments are null
    1179      */
    1180     public boolean replace(K key, V oldValue, V newValue) {
    1181         int hash = hash(key);
    1182         if (oldValue == null || newValue == null)
    1183             throw new NullPointerException();
    1184         Segment<K, V> s = segmentForHash(hash);
    1185         return s != null && s.replace(key, hash, oldValue, newValue);
    1186     }
    1187 
    1188     /**
    1189      * {@inheritDoc}
    1190      *
    1191      * @return the previous value associated with the specified key, or
    1192      *         <tt>null</tt> if there was no mapping for the key
    1193      * @throws NullPointerException if the specified key or value is null
    1194      */
    1195     public V replace(K key, V value) {
    1196         int hash = hash(key);
    1197         if (value == null) {
    1198             throw new NullPointerException();
    1199         }
    1200         Segment<K, V> seg = segmentForHash(hash);
    1201         return seg == null ? null : seg.replace(key, hash, value);
    1202     }
    1203 
    1204     /**
    1205      * Removes all of the mappings from this map.
    1206      */
    1207     public void clear() {
    1208         final Segment<K, V>[] segments = this.segments;
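                 // Segments are cleared one at a time under their own locks, so clear() is not atomic:
                 // a concurrent reader may still see entries in segments that have not been cleared yet.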
    1209         for (int j = 0; j < segments.length; ++j) {
    1210             Segment<K, V> s = segmentAt(segments, j);
    1211             if (s != null)
    1212                 s.clear();
    1213         }
    1214     }
    1215 
    1216     /**
    1217      * Returns a {@link Set} view of the keys contained in this map. The set is
    1218      * backed by the map, so changes to the map are reflected in the set, and
    1219      * vice-versa. The set supports element removal, which removes the corresponding
    1220      * mapping from this map, via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
    1221      * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations. It
    1222      * does not support the <tt>add</tt> or <tt>addAll</tt> operations.
    1223      *
    1224      * <p>
    1225      * The view's <tt>iterator</tt> is a "weakly consistent" iterator that will
    1226      * never throw {@link ConcurrentModificationException}, and guarantees to
    1227      * traverse elements as they existed upon construction of the iterator, and may
    1228      * (but is not guaranteed to) reflect any modifications subsequent to
    1229      * construction.
    1230      */
    1231     public Set<K> keySet() {
    1232         Set<K> ks = keySet;
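                 // Lazily create and cache the view. The unsynchronized check is a benign race: two
                 // threads may briefly create two KeySet objects, but the views are stateless wrappers
                 // over the map, so nothing is lost.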
    1233         return (ks != null) ? ks : (keySet = new KeySet());
    1234     }
    1235 
    1236     /**
    1237      * Returns a {@link Collection} view of the values contained in this map. The
    1238      * collection is backed by the map, so changes to the map are reflected in the
    1239      * collection, and vice-versa. The collection supports element removal, which
    1240      * removes the corresponding mapping from this map, via the
    1241      * <tt>Iterator.remove</tt>, <tt>Collection.remove</tt>, <tt>removeAll</tt>,
    1242      * <tt>retainAll</tt>, and <tt>clear</tt> operations. It does not support the
    1243      * <tt>add</tt> or <tt>addAll</tt> operations.
    1244      *
    1245      * <p>
    1246      * The view's <tt>iterator</tt> is a "weakly consistent" iterator that will
    1247      * never throw {@link ConcurrentModificationException}, and guarantees to
    1248      * traverse elements as they existed upon construction of the iterator, and may
    1249      * (but is not guaranteed to) reflect any modifications subsequent to
    1250      * construction.
    1251      */
    1252     public Collection<V> values() {
    1253         Collection<V> vs = values;
    1254         return (vs != null) ? vs : (values = new Values());
    1255     }
    1256 
    1257     /**
    1258      * Returns a {@link Set} view of the mappings contained in this map. The set is
    1259      * backed by the map, so changes to the map are reflected in the set, and
    1260      * vice-versa. The set supports element removal, which removes the corresponding
    1261      * mapping from the map, via the <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
    1262      * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt> operations. It
    1263      * does not support the <tt>add</tt> or <tt>addAll</tt> operations.
    1264      *
    1265      * <p>
    1266      * The view's <tt>iterator</tt> is a "weakly consistent" iterator that will
    1267      * never throw {@link ConcurrentModificationException}, and guarantees to
    1268      * traverse elements as they existed upon construction of the iterator, and may
    1269      * (but is not guaranteed to) reflect any modifications subsequent to
    1270      * construction.
    1271      */
    1272     public Set<Map.Entry<K, V>> entrySet() {
    1273         Set<Map.Entry<K, V>> es = entrySet;
    1274         return (es != null) ? es : (entrySet = new EntrySet());
    1275     }
    1276 
    1277     /**
    1278      * Returns an enumeration of the keys in this table.
    1279      *
    1280      * @return an enumeration of the keys in this table
    1281      * @see #keySet()
    1282      */
    1283     public Enumeration<K> keys() {
    1284         return new KeyIterator();
    1285     }
    1286 
    1287     /**
    1288      * Returns an enumeration of the values in this table.
    1289      *
    1290      * @return an enumeration of the values in this table
    1291      * @see #values()
    1292      */
    1293     public Enumeration<V> elements() {
    1294         return new ValueIterator();
    1295     }
    1296 
    1297     /* ---------------- Iterator Support -------------- */
    1298 
    1299     abstract class HashIterator {
    1300         int nextSegmentIndex;
    1301         int nextTableIndex;
    1302         HashEntry<K, V>[] currentTable;
    1303         HashEntry<K, V> nextEntry;
    1304         HashEntry<K, V> lastReturned;
    1305 
    1306         HashIterator() {
    1307             nextSegmentIndex = segments.length - 1;
    1308             nextTableIndex = -1;
    1309             advance();
    1310         }
    1311 
    1312         /**
    1313          * Set nextEntry to first node of next non-empty table (in backwards order, to
    1314          * simplify checks).
    1315          */
    1316         final void advance() {
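                     // Walk the structure backwards: start at the last segment, and within each segment's
                     // table start at the last bucket. When a table is exhausted, drop to the next lower
                     // segment index; stop when there are no segments left.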
    1317             for (;;) {
    1318                 if (nextTableIndex >= 0) {
    1319                     if ((nextEntry = entryAt(currentTable, nextTableIndex--)) != null)
    1320                         break;
    1321                 } else if (nextSegmentIndex >= 0) {
    1322                     Segment<K, V> seg = segmentAt(segments, nextSegmentIndex--);
    1323                     if (seg != null && (currentTable = seg.table) != null)
    1324                         nextTableIndex = currentTable.length - 1;
    1325                 } else
    1326                     break;
    1327             }
    1328         }
    1329 
    1330         final HashEntry<K, V> nextEntry() {
    1331             HashEntry<K, V> e = nextEntry;
    1332             if (e == null)
    1333                 throw new NoSuchElementException();
    1334             lastReturned = e; // cannot assign until after null check
    1335             if ((nextEntry = e.next) == null)
    1336                 advance();
    1337             return e;
    1338         }
    1339 
    1340         public final boolean hasNext() {
    1341             return nextEntry != null;
    1342         }
    1343 
    1344         public final boolean hasMoreElements() {
    1345             return nextEntry != null;
    1346         }
    1347 
    1348         public final void remove() {
    1349             if (lastReturned == null)
    1350                 throw new IllegalStateException();
    1351             ConcurrentHashMap.this.remove(lastReturned.key);
    1352             lastReturned = null;
    1353         }
    1354     }
    1355 
    1356     final class KeyIterator extends HashIterator implements Iterator<K>, Enumeration<K> {
    1357         public final K next() {
    1358             return super.nextEntry().key;
    1359         }
    1360 
    1361         public final K nextElement() {
    1362             return super.nextEntry().key;
    1363         }
    1364     }
    1365 
    1366     final class ValueIterator extends HashIterator implements Iterator<V>, Enumeration<V> {
    1367         public final V next() {
    1368             return super.nextEntry().value;
    1369         }
    1370 
    1371         public final V nextElement() {
    1372             return super.nextEntry().value;
    1373         }
    1374     }
    1375 
    1376     /**
    1377      * Custom Entry class used by EntryIterator.next(), that relays setValue changes
    1378      * to the underlying map.
    1379      */
    1380     final class WriteThroughEntry extends AbstractMap.SimpleEntry<K, V> {
    1381         WriteThroughEntry(K k, V v) {
    1382             super(k, v);
    1383         }
    1384 
    1385         /**
    1386          * Set our entry's value and write through to the map. The value to return is
    1387          * somewhat arbitrary here. Since a WriteThroughEntry does not necessarily track
    1388          * asynchronous changes, the most recent "previous" value could be different
    1389          * from what we return (or could even have been removed in which case the put
    1390          * will re-establish). We do not and cannot guarantee more.
    1391          */
    1392         public V setValue(V value) {
    1393             if (value == null)
    1394                 throw new NullPointerException();
    1395             V v = super.setValue(value);
    1396             ConcurrentHashMap.this.put(getKey(), value);
    1397             return v;
    1398         }
    1399     }
    1400 
    1401     final class EntryIterator extends HashIterator implements Iterator<Entry<K, V>> {
    1402         public Map.Entry<K, V> next() {
    1403             HashEntry<K, V> e = super.nextEntry();
    1404             return new WriteThroughEntry(e.key, e.value);
    1405         }
    1406     }
    1407 
    1408     final class KeySet extends AbstractSet<K> {
    1409         public Iterator<K> iterator() {
    1410             return new KeyIterator();
    1411         }
    1412 
    1413         public int size() {
    1414             return ConcurrentHashMap.this.size();
    1415         }
    1416 
    1417         public boolean isEmpty() {
    1418             return ConcurrentHashMap.this.isEmpty();
    1419         }
    1420 
    1421         public boolean contains(Object o) {
    1422             return ConcurrentHashMap.this.containsKey(o);
    1423         }
    1424 
    1425         public boolean remove(Object o) {
    1426             return ConcurrentHashMap.this.remove(o) != null;
    1427         }
    1428 
    1429         public void clear() {
    1430             ConcurrentHashMap.this.clear();
    1431         }
    1432     }
    1433 
    1434     final class Values extends AbstractCollection<V> {
    1435         public Iterator<V> iterator() {
    1436             return new ValueIterator();
    1437         }
    1438 
    1439         public int size() {
    1440             return ConcurrentHashMap.this.size();
    1441         }
    1442 
    1443         public boolean isEmpty() {
    1444             return ConcurrentHashMap.this.isEmpty();
    1445         }
    1446 
    1447         public boolean contains(Object o) {
    1448             return ConcurrentHashMap.this.containsValue(o);
    1449         }
    1450 
    1451         public void clear() {
    1452             ConcurrentHashMap.this.clear();
    1453         }
    1454     }
    1455 
    1456     final class EntrySet extends AbstractSet<Map.Entry<K, V>> {
    1457         public Iterator<Map.Entry<K, V>> iterator() {
    1458             return new EntryIterator();
    1459         }
    1460 
    1461         public boolean contains(Object o) {
    1462             if (!(o instanceof Map.Entry))
    1463                 return false;
    1464             Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
    1465             V v = ConcurrentHashMap.this.get(e.getKey());
    1466             return v != null && v.equals(e.getValue());
    1467         }
    1468 
    1469         public boolean remove(Object o) {
    1470             if (!(o instanceof Map.Entry))
    1471                 return false;
    1472             Map.Entry<?, ?> e = (Map.Entry<?, ?>) o;
    1473             return ConcurrentHashMap.this.remove(e.getKey(), e.getValue());
    1474         }
    1475 
    1476         public int size() {
    1477             return ConcurrentHashMap.this.size();
    1478         }
    1479 
    1480         public boolean isEmpty() {
    1481             return ConcurrentHashMap.this.isEmpty();
    1482         }
    1483 
    1484         public void clear() {
    1485             ConcurrentHashMap.this.clear();
    1486         }
    1487     }
    1488 
    1489     /* ---------------- Serialization Support -------------- */
    1490 
    1491     /**
    1492      * Save the state of the <tt>ConcurrentHashMap</tt> instance to a stream (i.e.,
    1493      * serialize it).
    1494      * 
    1495      * @param s the stream
    1496      * @serialData the key (Object) and value (Object) for each key-value mapping,
    1497      *             followed by a null pair. The key-value mappings are emitted in no
    1498      *             particular order.
    1499      */
    1500     private void writeObject(java.io.ObjectOutputStream s) throws IOException {
    1501         // force all segments for serialization compatibility
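                 // (segments are created lazily in this version, so some may still be null; they are
                 //  instantiated here so that defaultWriteObject below never writes a null segment,
                 //  keeping the stream compatible with earlier versions)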
    1502         for (int k = 0; k < segments.length; ++k)
    1503             ensureSegment(k);
    1504         s.defaultWriteObject();
    1505 
    1506         final Segment<K, V>[] segments = this.segments;
    1507         for (int k = 0; k < segments.length; ++k) {
    1508             Segment<K, V> seg = segmentAt(segments, k);
    1509             seg.lock();
    1510             try {
    1511                 HashEntry<K, V>[] tab = seg.table;
    1512                 for (int i = 0; i < tab.length; ++i) {
    1513                     HashEntry<K, V> e;
    1514                     for (e = entryAt(tab, i); e != null; e = e.next) {
    1515                         s.writeObject(e.key);
    1516                         s.writeObject(e.value);
    1517                     }
    1518                 }
    1519             } finally {
    1520                 seg.unlock();
    1521             }
    1522         }
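                 // A trailing null key/value pair marks the end of the data; readObject stops
                 // when it reads a null key.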
    1523         s.writeObject(null);
    1524         s.writeObject(null);
    1525     }
    1526 
    1527     /**
    1528      * Reconstitute the <tt>ConcurrentHashMap</tt> instance from a stream (i.e.,
    1529      * deserialize it).
    1530      * 
    1531      * @param s the stream
    1532      */
    1533     @SuppressWarnings("unchecked")
    1534     private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException {
    1535         s.defaultReadObject();
    1536 
    1537         // re-initialize the per-instance hash seed used to randomize key hashing
    1538         UNSAFE.putIntVolatile(this, HASHSEED_OFFSET, randomHashSeed(this));
    1539 
    1540         // Re-initialize segments to be minimally sized, and let grow.
    1541         int cap = MIN_SEGMENT_TABLE_CAPACITY;
    1542         final Segment<K, V>[] segments = this.segments;
    1543         for (int k = 0; k < segments.length; ++k) {
    1544             Segment<K, V> seg = segments[k];
    1545             if (seg != null) {
    1546                 seg.threshold = (int) (cap * seg.loadFactor);
    1547                 seg.table = (HashEntry<K, V>[]) new HashEntry[cap];
    1548             }
    1549         }
    1550 
    1551         // Read the keys and values, and put the mappings in the table
    1552         for (;;) {
    1553             K key = (K) s.readObject();
    1554             V value = (V) s.readObject();
    1555             if (key == null)
    1556                 break;
    1557             put(key, value);
    1558         }
    1559     }
    1560 
    1561     // Unsafe mechanics
    1562     private static final sun.misc.Unsafe UNSAFE;
    1563     private static final long SBASE;
    1564     private static final int SSHIFT;
    1565     private static final long TBASE;
    1566     private static final int TSHIFT;
    1567     private static final long HASHSEED_OFFSET;
    1568 
    1569     static {
    1570         int ss, ts;
    1571         try {
    1572             UNSAFE = sun.misc.Unsafe.getUnsafe();
    1573             Class tc = HashEntry[].class;
    1574             Class sc = Segment[].class;
    1575             /* Offset of element 0 from the start of the array object (the array header size). */
    1576             TBASE = UNSAFE.arrayBaseOffset(tc);
    1577             SBASE = UNSAFE.arrayBaseOffset(sc);
    1578             /* Size in bytes of each element in the array. */
    1579             ts = UNSAFE.arrayIndexScale(tc);
    1580             ss = UNSAFE.arrayIndexScale(sc);
    1581             HASHSEED_OFFSET = UNSAFE.objectFieldOffset(ConcurrentHashMap.class.getDeclaredField("hashSeed"));
    1582         } catch (Exception e) {
    1583             throw new Error(e);
    1584         }
    1585         if ((ss & (ss - 1)) != 0 || (ts & (ts - 1)) != 0)
    1586             throw new Error("data type scale not a power of two");
    1587         /* Express the element size as a power-of-two shift, e.g. if ss = 16 then SSHIFT = 4. */
    1588         SSHIFT = 31 - Integer.numberOfLeadingZeros(ss);
    1589         TSHIFT = 31 - Integer.numberOfLeadingZeros(ts);
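                 // With these constants, the raw byte offset of element i of a Segment[] is
                 // SBASE + ((long) i << SSHIFT), and of a HashEntry[] is TBASE + ((long) i << TSHIFT);
                 // this is exactly what the UNSAFE-based array accessors in this class compute.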
    1590     }
    1591 
    1592 }
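
    To tie the map-level methods above together, here is a small self-contained demo of my own (the
    class name ConcurrentHashMapDemo and the sample keys are made up for illustration; this is not
    part of the JDK source). It exercises put / putIfAbsent and the conditional remove / replace, and
    shows that iteration over a view is weakly consistent: the map can be modified while iterating
    without a ConcurrentModificationException, and Iterator.remove writes through to the map.

    import java.util.Iterator;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ConcurrentHashMapDemo {
        public static void main(String[] args) {
            // initialCapacity 16, loadFactor 0.75, concurrencyLevel 16 (the defaults)
            ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>(16, 0.75f, 16);

            System.out.println(map.put("a", 1));          // null  (no previous mapping)
            System.out.println(map.put("a", 2));          // 1     (the previous value is returned)
            System.out.println(map.putIfAbsent("a", 3));  // 2     (existing mapping is kept)

            System.out.println(map.remove("a", 999));     // false (current value is 2, not 999)
            System.out.println(map.replace("a", 2, 100)); // true  (conditional replace succeeds)

            map.put("b", 10);
            map.put("c", 20);

            // Weakly consistent iteration: updating the map during the loop is allowed and never
            // throws ConcurrentModificationException; Iterator.remove removes from the map itself.
            for (Iterator<String> it = map.keySet().iterator(); it.hasNext();) {
                String key = it.next();
                map.put("d", 30);
                if ("b".equals(key)) {
                    it.remove();
                }
            }
            System.out.println(map.containsKey("b"));     // false
        }
    }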
  • Original article: https://www.cnblogs.com/wha6239/p/16419371.html