8010293: java/util/concurrent/ConcurrentHashMap/toArray.java fails intermittently

Co-authored-by: Doug Lea <dl@cs.oswego.edu>
Co-authored-by: Peter Levart <peter.levart@gmail.com>
Reviewed-by: forax, chegar, alanb

parent e763c78a79, commit bef51c4a08
java/util/concurrent/ConcurrentHashMap.java

@@ -374,27 +374,26 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
      * The table is resized when occupancy exceeds a percentage
      * threshold (nominally, 0.75, but see below). Any thread
      * noticing an overfull bin may assist in resizing after the
-     * initiating thread allocates and sets up the replacement
-     * array. However, rather than stalling, these other threads may
-     * proceed with insertions etc. The use of TreeBins shields us
-     * from the worst case effects of overfilling while resizes are in
+     * initiating thread allocates and sets up the replacement array.
+     * However, rather than stalling, these other threads may proceed
+     * with insertions etc. The use of TreeBins shields us from the
+     * worst case effects of overfilling while resizes are in
      * progress. Resizing proceeds by transferring bins, one by one,
-     * from the table to the next table. To enable concurrency, the
-     * next table must be (incrementally) prefilled with place-holders
-     * serving as reverse forwarders to the old table. Because we are
-     * using power-of-two expansion, the elements from each bin must
-     * either stay at same index, or move with a power of two
-     * offset. We eliminate unnecessary node creation by catching
-     * cases where old nodes can be reused because their next fields
-     * won't change. On average, only about one-sixth of them need
-     * cloning when a table doubles. The nodes they replace will be
-     * garbage collectable as soon as they are no longer referenced by
-     * any reader thread that may be in the midst of concurrently
-     * traversing table. Upon transfer, the old table bin contains
-     * only a special forwarding node (with hash field "MOVED") that
-     * contains the next table as its key. On encountering a
-     * forwarding node, access and update operations restart, using
-     * the new table.
+     * from the table to the next table. However, threads claim small
+     * blocks of indices to transfer (via field transferIndex) before
+     * doing so, reducing contention. Because we are using
+     * power-of-two expansion, the elements from each bin must either
+     * stay at same index, or move with a power of two offset. We
+     * eliminate unnecessary node creation by catching cases where old
+     * nodes can be reused because their next fields won't change. On
+     * average, only about one-sixth of them need cloning when a table
+     * doubles. The nodes they replace will be garbage collectable as
+     * soon as they are no longer referenced by any reader thread that
+     * may be in the midst of concurrently traversing table. Upon
+     * transfer, the old table bin contains only a special forwarding
+     * node (with hash field "MOVED") that contains the next table as
+     * its key. On encountering a forwarding node, access and update
+     * operations restart, using the new table.
      *
      * Each bin transfer requires its bin lock, which can stall
      * waiting for locks while resizing. However, because other
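The comment above introduces the new transferIndex protocol: instead of pre-filling the next table with reverse forwarders, resizing threads now claim small blocks of bin indices by walking a shared counter down with CAS. Below is a minimal, self-contained sketch of that claiming scheme (not the JDK source: it uses AtomicInteger in place of the Unsafe-based CAS, and the table size and stride are illustrative):

    import java.util.concurrent.atomic.AtomicInteger;

    public class StrideClaimSketch {
        // Stand-ins for the real fields: transferIndex counts down from the
        // old table length; STRIDE is how many bins one thread claims at a time.
        static final AtomicInteger transferIndex = new AtomicInteger(1024);
        static final int STRIDE = 64;

        // Claims the next block of bin indices [bound .. top], highest first;
        // returns null once the whole range has been handed out.
        static int[] claimBlock() {
            for (;;) {
                int nextIndex = transferIndex.get();
                if (nextIndex <= 0)
                    return null;
                int nextBound = (nextIndex > STRIDE) ? nextIndex - STRIDE : 0;
                if (transferIndex.compareAndSet(nextIndex, nextBound))
                    return new int[] { nextBound, nextIndex - 1 };
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                int[] b;
                while ((b = claimBlock()) != null)
                    System.out.println(Thread.currentThread().getName()
                                       + " transfers bins " + b[0] + ".." + b[1]);
            };
            Thread t1 = new Thread(worker), t2 = new Thread(worker);
            t1.start(); t2.start();
            t1.join(); t2.join();
        }
    }

Because every block is handed out by exactly one successful CAS, no bin is transferred twice and no lock is needed to partition the work.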
@@ -402,13 +401,19 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
      * locks, average aggregate waits become shorter as resizing
      * progresses. The transfer operation must also ensure that all
      * accessible bins in both the old and new table are usable by any
-     * traversal. This is arranged by proceeding from the last bin
-     * (table.length - 1) up towards the first. Upon seeing a
-     * forwarding node, traversals (see class Traverser) arrange to
-     * move to the new table without revisiting nodes. However, to
-     * ensure that no intervening nodes are skipped, bin splitting can
-     * only begin after the associated reverse-forwarders are in
-     * place.
+     * traversal. This is arranged in part by proceeding from the
+     * last bin (table.length - 1) up towards the first. Upon seeing
+     * a forwarding node, traversals (see class Traverser) arrange to
+     * move to the new table without revisiting nodes. To ensure that
+     * no intervening nodes are skipped even when moved out of order,
+     * a stack (see class TableStack) is created on first encounter of
+     * a forwarding node during a traversal, to maintain its place if
+     * later processing the current table. The need for these
+     * save/restore mechanics is relatively rare, but when one
+     * forwarding node is encountered, typically many more will be.
+     * So Traversers use a simple caching scheme to avoid creating so
+     * many new TableStack nodes. (Thanks to Peter Levart for
+     * suggesting use of a stack here.)
      *
      * The traversal scheme also applies to partial traversals of
      * ranges of bins (via an alternate Traverser constructor)
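The save/restore mechanics described above are easy to see in isolation: a traversal pushes one frame per forwarding node it meets, and popped frames are parked on a spare list for reuse. A minimal sketch of that frame-caching idea (illustrative only; the names mirror the diff, but this is not the JDK's TableStack):

    public class TableStackSketch {
        static final class Frame {
            int index, length;
            Frame next;
        }

        private Frame stack, spare;

        void push(int index, int length) {
            Frame s = spare;           // reuse a previously popped frame if any
            if (s != null)
                spare = s.next;
            else
                s = new Frame();
            s.index = index;
            s.length = length;
            s.next = stack;
            stack = s;
        }

        Frame pop() {
            Frame s = stack;
            if (s != null) {
                stack = s.next;
                s.next = spare;        // park the frame for later reuse
                spare = s;
            }
            return s;
        }

        public static void main(String[] args) {
            TableStackSketch ts = new TableStackSketch();
            ts.push(3, 16);            // forwarded out of a 16-bin table
            ts.push(3, 32);            // then out of its 32-bin successor
            System.out.println(ts.pop().length);  // 32
            System.out.println(ts.pop().length);  // 16
        }
    }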
@@ -776,11 +781,6 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
      */
     private transient volatile int transferIndex;
 
-    /**
-     * The least available table index to split while resizing.
-     */
-    private transient volatile int transferOrigin;
-
     /**
      * Spinlock (locked via CAS) used when resizing and/or creating CounterCells.
      */
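The surviving comment in the hunk above describes a spinlock acquired by a single CAS rather than a monitor. A minimal sketch of such a CAS spinlock (an illustration using AtomicInteger, not the Unsafe-based JDK field):

    import java.util.concurrent.atomic.AtomicInteger;

    public class SpinLockSketch {
        private final AtomicInteger busy = new AtomicInteger(0); // 0 = free, 1 = held

        boolean tryLock() { return busy.compareAndSet(0, 1); }
        void unlock()     { busy.set(0); }

        public static void main(String[] args) {
            SpinLockSketch lock = new SpinLockSketch();
            if (lock.tryLock()) {   // contenders simply retry or fall back
                try { System.out.println("resize/CounterCell critical section"); }
                finally { lock.unlock(); }
            }
        }
    }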
@@ -1377,7 +1377,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
         }
         int segmentShift = 32 - sshift;
         int segmentMask = ssize - 1;
-        @SuppressWarnings("unchecked") Segment<K,V>[] segments = (Segment<K,V>[])
+        @SuppressWarnings("unchecked")
+        Segment<K,V>[] segments = (Segment<K,V>[])
             new Segment<?,?>[DEFAULT_CONCURRENCY_LEVEL];
         for (int i = 0; i < segments.length; ++i)
             segments[i] = new Segment<K,V>(LOAD_FACTOR);
@@ -1420,8 +1421,10 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             long size = 0L;
             Node<K,V> p = null;
             for (;;) {
-                @SuppressWarnings("unchecked") K k = (K) s.readObject();
-                @SuppressWarnings("unchecked") V v = (V) s.readObject();
+                @SuppressWarnings("unchecked")
+                K k = (K) s.readObject();
+                @SuppressWarnings("unchecked")
+                V v = (V) s.readObject();
                 if (k != null && v != null) {
                     p = new Node<K,V>(spread(k.hashCode()), k, v, p);
                     ++size;
@@ -1439,8 +1442,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 int sz = (int)size;
                 n = tableSizeFor(sz + (sz >>> 1) + 1);
             }
-            @SuppressWarnings({"rawtypes","unchecked"})
-                Node<K,V>[] tab = (Node<K,V>[])new Node[n];
+            @SuppressWarnings("unchecked")
+            Node<K,V>[] tab = (Node<K,V>[])new Node<?,?>[n];
             int mask = n - 1;
             long added = 0L;
             while (p != null) {
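The sizing line above, n = tableSizeFor(sz + (sz >>> 1) + 1), pads the deserialized mapping count by half (the inverse of the 0.75 load factor) and rounds up to a power of two. A sketch of that arithmetic; this tableSizeFor is a reimplementation for illustration and omits the MAXIMUM_CAPACITY clamp of the real method:

    public class TableSizeSketch {
        // Smallest power of two >= c (for c >= 1).
        static int tableSizeFor(int c) {
            int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
            return (n < 0) ? 1 : n + 1;
        }

        public static void main(String[] args) {
            int sz = 100;                               // deserialized element count
            int n = tableSizeFor(sz + (sz >>> 1) + 1);  // pad 100 -> 151, round to 256
            System.out.println(n);                      // 256
            System.out.println(n - (n >>> 2));          // 192: the 0.75 resize threshold
        }
    }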
@@ -2200,8 +2203,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 try {
                     if ((tab = table) == null || tab.length == 0) {
                         int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
-                        @SuppressWarnings({"rawtypes","unchecked"})
-                            Node<K,V>[] nt = (Node<K,V>[])new Node[n];
+                        @SuppressWarnings("unchecked")
+                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                         table = tab = nt;
                         sc = n - (n >>> 2);
                     }
@@ -2246,7 +2249,7 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
                    tab.length < MAXIMUM_CAPACITY) {
                 if (sc < 0) {
-                    if (sc == -1 || transferIndex <= transferOrigin ||
+                    if (sc == -1 || transferIndex <= 0 ||
                         (nt = nextTable) == null)
                         break;
                     if (U.compareAndSwapInt(this, SIZECTL, sc, sc - 1))
@@ -2266,10 +2269,13 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
         Node<K,V>[] nextTab; int sc;
         if ((f instanceof ForwardingNode) &&
             (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
-            if (nextTab == nextTable && tab == table &&
-                transferIndex > transferOrigin && (sc = sizeCtl) < -1 &&
-                U.compareAndSwapInt(this, SIZECTL, sc, sc - 1))
-                transfer(tab, nextTab);
+            while (transferIndex > 0 && nextTab == nextTable &&
+                   (sc = sizeCtl) < -1) {
+                if (U.compareAndSwapInt(this, SIZECTL, sc, sc - 1)) {
+                    transfer(tab, nextTab);
+                    break;
+                }
+            }
             return nextTab;
         }
         return table;
@@ -2291,8 +2297,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                 try {
                     if (table == tab) {
-                        @SuppressWarnings({"rawtypes","unchecked"})
-                            Node<K,V>[] nt = (Node<K,V>[])new Node[n];
+                        @SuppressWarnings("unchecked")
+                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                         table = nt;
                         sc = n - (n >>> 2);
                     }
@@ -2319,36 +2325,27 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             stride = MIN_TRANSFER_STRIDE; // subdivide range
         if (nextTab == null) {            // initiating
             try {
-                @SuppressWarnings({"rawtypes","unchecked"})
-                Node<K,V>[] nt = (Node<K,V>[])new Node[n << 1];
+                @SuppressWarnings("unchecked")
+                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                 nextTab = nt;
             } catch (Throwable ex) {      // try to cope with OOME
                 sizeCtl = Integer.MAX_VALUE;
                 return;
             }
             nextTable = nextTab;
-            transferOrigin = n;
             transferIndex = n;
-            ForwardingNode<K,V> rev = new ForwardingNode<K,V>(tab);
-            for (int k = n; k > 0;) {    // progressively reveal ready slots
-                int nextk = (k > stride) ? k - stride : 0;
-                for (int m = nextk; m < k; ++m)
-                    nextTab[m] = rev;
-                for (int m = n + nextk; m < n + k; ++m)
-                    nextTab[m] = rev;
-                U.putOrderedInt(this, TRANSFERORIGIN, k = nextk);
-            }
         }
         int nextn = nextTab.length;
         ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
         boolean advance = true;
         boolean finishing = false; // to ensure sweep before committing nextTab
         for (int i = 0, bound = 0;;) {
-            int nextIndex, nextBound, fh; Node<K,V> f;
+            Node<K,V> f; int fh;
             while (advance) {
+                int nextIndex, nextBound;
                 if (--i >= bound || finishing)
                     advance = false;
-                else if ((nextIndex = transferIndex) <= transferOrigin) {
+                else if ((nextIndex = transferIndex) <= 0) {
                     i = -1;
                     advance = false;
                 }
@@ -2362,29 +2359,22 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 }
             }
             if (i < 0 || i >= n || i + n >= nextn) {
+                int sc;
                 if (finishing) {
                     nextTable = null;
                     table = nextTab;
                     sizeCtl = (n << 1) - (n >>> 1);
                     return;
                 }
-                for (int sc;;) {
-                    if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, ++sc)) {
-                        if (sc != -1)
-                            return;
-                        finishing = advance = true;
-                        i = n; // recheck before commit
-                        break;
-                    }
-                }
-            }
-            else if ((f = tabAt(tab, i)) == null) {
-                if (casTabAt(tab, i, null, fwd)) {
-                    setTabAt(nextTab, i, null);
-                    setTabAt(nextTab, i + n, null);
-                    advance = true;
+                if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, ++sc)) {
+                    if (sc != -1)
+                        return;
+                    finishing = advance = true;
+                    i = n; // recheck before commit
                 }
             }
+            else if ((f = tabAt(tab, i)) == null)
+                advance = casTabAt(tab, i, null, fwd);
             else if ((fh = f.hash) == MOVED)
                 advance = true; // already processed
             else {
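The commit sequence in the hunk above works by arithmetic on sizeCtl: each departing resizer CAS-increments it, and only the thread whose increment brings it back to -1 sets finishing and rescans the table from i = n before publishing nextTab. A small sketch of that last-worker-commits protocol (AtomicInteger stands in for the Unsafe CAS; the initial value below is an assumed scenario of three active resizers):

    import java.util.concurrent.atomic.AtomicInteger;

    public class FinishSweepSketch {
        // -1 is the base value; each active resizer decremented it on entry,
        // so -4 models three threads currently transferring bins.
        static final AtomicInteger sizeCtl = new AtomicInteger(-4);

        // Returns true only for the last departing worker, which must then
        // re-sweep the whole table before committing the new one.
        static boolean leave() {
            for (;;) {
                int sc = sizeCtl.get();
                if (sizeCtl.compareAndSet(sc, sc + 1))
                    return sc + 1 == -1;
            }
        }

        public static void main(String[] args) {
            System.out.println(leave()); // false
            System.out.println(leave()); // false
            System.out.println(leave()); // true: recheck before commit
        }
    }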
@@ -3223,6 +3213,18 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
 
     /* ----------------Table Traversal -------------- */
 
+    /**
+     * Records the table, its length, and current traversal index for a
+     * traverser that must process a region of a forwarded table before
+     * proceeding with current table.
+     */
+    static final class TableStack<K,V> {
+        int length;
+        int index;
+        Node<K,V>[] tab;
+        TableStack<K,V> next;
+    }
+
     /**
      * Encapsulates traversal for methods such as containsValue; also
      * serves as a base class for other iterators and spliterators.
@@ -3247,6 +3249,7 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
     static class Traverser<K,V> {
         Node<K,V>[] tab;        // current table; updated if resized
         Node<K,V> next;         // the next entry to use
+        TableStack<K,V> stack, spare; // to save/restore on ForwardingNodes
         int index;              // index of bin to use next
         int baseIndex;          // current index of initial table
         int baseLimit;          // index bound for initial table
@@ -3268,16 +3271,17 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             if ((e = next) != null)
                 e = e.next;
             for (;;) {
-                Node<K,V>[] t; int i, n; K ek;  // must use locals in checks
+                Node<K,V>[] t; int i, n;  // must use locals in checks
                 if (e != null)
                     return next = e;
                 if (baseIndex >= baseLimit || (t = tab) == null ||
                     (n = t.length) <= (i = index) || i < 0)
                     return next = null;
-                if ((e = tabAt(t, index)) != null && e.hash < 0) {
+                if ((e = tabAt(t, i)) != null && e.hash < 0) {
                     if (e instanceof ForwardingNode) {
                         tab = ((ForwardingNode<K,V>)e).nextTable;
                         e = null;
+                        pushState(t, i, n);
                         continue;
                     }
                     else if (e instanceof TreeBin)
@@ -3285,10 +3289,49 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                     else
                         e = null;
                 }
-                if ((index += baseSize) >= n)
-                    index = ++baseIndex;    // visit upper slots if present
+                if (stack != null)
+                    recoverState(n);
+                else if ((index = i + baseSize) >= n)
+                    index = ++baseIndex;    // visit upper slots if present
             }
         }
+
+        /**
+         * Saves traversal state upon encountering a forwarding node.
+         */
+        private void pushState(Node<K,V>[] t, int i, int n) {
+            TableStack<K,V> s = spare;  // reuse if possible
+            if (s != null)
+                spare = s.next;
+            else
+                s = new TableStack<K,V>();
+            s.tab = t;
+            s.length = n;
+            s.index = i;
+            s.next = stack;
+            stack = s;
+        }
+
+        /**
+         * Possibly pops traversal state.
+         *
+         * @param n length of current table
+         */
+        private void recoverState(int n) {
+            TableStack<K,V> s; int len;
+            while ((s = stack) != null && (index += (len = s.length)) >= n) {
+                n = len;
+                index = s.index;
+                tab = s.tab;
+                s.tab = null;
+                TableStack<K,V> next = s.next;
+                s.next = spare; // save for reuse
+                stack = next;
+                spare = s;
+            }
+            if (s == null && (index += baseSize) >= n)
+                index = ++baseIndex;
+        }
     }
 
     /**
@@ -4722,6 +4765,7 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
     abstract static class BulkTask<K,V,R> extends CountedCompleter<R> {
         Node<K,V>[] tab;        // same as Traverser
         Node<K,V> next;
+        TableStack<K,V> stack, spare;
         int index;
         int baseIndex;
         int baseLimit;
@@ -4750,16 +4794,17 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
             if ((e = next) != null)
                 e = e.next;
             for (;;) {
-                Node<K,V>[] t; int i, n; K ek;  // must use locals in checks
+                Node<K,V>[] t; int i, n;
                 if (e != null)
                     return next = e;
                 if (baseIndex >= baseLimit || (t = tab) == null ||
                     (n = t.length) <= (i = index) || i < 0)
                     return next = null;
-                if ((e = tabAt(t, index)) != null && e.hash < 0) {
+                if ((e = tabAt(t, i)) != null && e.hash < 0) {
                     if (e instanceof ForwardingNode) {
                         tab = ((ForwardingNode<K,V>)e).nextTable;
                         e = null;
                         pushState(t, i, n);
                         continue;
                     }
                     else if (e instanceof TreeBin)
@@ -4767,10 +4812,41 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                     else
                         e = null;
                 }
-                if ((index += baseSize) >= n)
-                    index = ++baseIndex;    // visit upper slots if present
+                if (stack != null)
+                    recoverState(n);
+                else if ((index = i + baseSize) >= n)
+                    index = ++baseIndex;
            }
        }
+
+        private void pushState(Node<K,V>[] t, int i, int n) {
+            TableStack<K,V> s = spare;
+            if (s != null)
+                spare = s.next;
+            else
+                s = new TableStack<K,V>();
+            s.tab = t;
+            s.length = n;
+            s.index = i;
+            s.next = stack;
+            stack = s;
+        }
+
+        private void recoverState(int n) {
+            TableStack<K,V> s; int len;
+            while ((s = stack) != null && (index += (len = s.length)) >= n) {
+                n = len;
+                index = s.index;
+                tab = s.tab;
+                s.tab = null;
+                TableStack<K,V> next = s.next;
+                s.next = spare; // save for reuse
+                stack = next;
+                spare = s;
+            }
+            if (s == null && (index += baseSize) >= n)
+                index = ++baseIndex;
+        }
     }
 
     /*
@@ -5229,7 +5305,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") ReduceKeysTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    ReduceKeysTask<K,V>
                         t = (ReduceKeysTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5276,7 +5353,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") ReduceValuesTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    ReduceValuesTask<K,V>
                         t = (ReduceValuesTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5321,7 +5399,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") ReduceEntriesTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    ReduceEntriesTask<K,V>
                         t = (ReduceEntriesTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5374,7 +5453,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceKeysTask<K,V,U>
+                    @SuppressWarnings("unchecked")
+                    MapReduceKeysTask<K,V,U>
                         t = (MapReduceKeysTask<K,V,U>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5427,7 +5507,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceValuesTask<K,V,U>
+                    @SuppressWarnings("unchecked")
+                    MapReduceValuesTask<K,V,U>
                         t = (MapReduceValuesTask<K,V,U>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5480,7 +5561,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceEntriesTask<K,V,U>
+                    @SuppressWarnings("unchecked")
+                    MapReduceEntriesTask<K,V,U>
                         t = (MapReduceEntriesTask<K,V,U>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5533,7 +5615,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceMappingsTask<K,V,U>
+                    @SuppressWarnings("unchecked")
+                    MapReduceMappingsTask<K,V,U>
                         t = (MapReduceMappingsTask<K,V,U>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5585,7 +5668,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceKeysToDoubleTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceKeysToDoubleTask<K,V>
                         t = (MapReduceKeysToDoubleTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5634,7 +5718,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceValuesToDoubleTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceValuesToDoubleTask<K,V>
                         t = (MapReduceValuesToDoubleTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5683,7 +5768,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceEntriesToDoubleTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceEntriesToDoubleTask<K,V>
                         t = (MapReduceEntriesToDoubleTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5732,7 +5818,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceMappingsToDoubleTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceMappingsToDoubleTask<K,V>
                         t = (MapReduceMappingsToDoubleTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5781,7 +5868,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceKeysToLongTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceKeysToLongTask<K,V>
                         t = (MapReduceKeysToLongTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5830,7 +5918,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceValuesToLongTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceValuesToLongTask<K,V>
                         t = (MapReduceValuesToLongTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5879,7 +5968,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceEntriesToLongTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceEntriesToLongTask<K,V>
                         t = (MapReduceEntriesToLongTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5928,7 +6018,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceMappingsToLongTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceMappingsToLongTask<K,V>
                         t = (MapReduceMappingsToLongTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -5977,7 +6068,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceKeysToIntTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceKeysToIntTask<K,V>
                         t = (MapReduceKeysToIntTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -6026,7 +6118,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceValuesToIntTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceValuesToIntTask<K,V>
                         t = (MapReduceValuesToIntTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -6075,7 +6168,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceEntriesToIntTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceEntriesToIntTask<K,V>
                         t = (MapReduceEntriesToIntTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -6124,7 +6218,8 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 result = r;
                 CountedCompleter<?> c;
                 for (c = firstComplete(); c != null; c = c.nextComplete()) {
-                    @SuppressWarnings("unchecked") MapReduceMappingsToIntTask<K,V>
+                    @SuppressWarnings("unchecked")
+                    MapReduceMappingsToIntTask<K,V>
                         t = (MapReduceMappingsToIntTask<K,V>)c,
                         s = t.rights;
                     while (s != null) {
@@ -6140,7 +6235,6 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
     private static final sun.misc.Unsafe U;
     private static final long SIZECTL;
     private static final long TRANSFERINDEX;
-    private static final long TRANSFERORIGIN;
     private static final long BASECOUNT;
     private static final long CELLSBUSY;
     private static final long CELLVALUE;
@@ -6155,8 +6249,6 @@ public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
                 (k.getDeclaredField("sizeCtl"));
             TRANSFERINDEX = U.objectFieldOffset
                 (k.getDeclaredField("transferIndex"));
-            TRANSFERORIGIN = U.objectFieldOffset
-                (k.getDeclaredField("transferOrigin"));
             BASECOUNT = U.objectFieldOffset
                 (k.getDeclaredField("baseCount"));
             CELLSBUSY = U.objectFieldOffset
java/util/concurrent/ConcurrentHashMap/toArray.java

@@ -23,39 +23,53 @@
 
 /*
  * @test
- * @bug 4486658
+ * @bug 4486658 8010293
  * @summary thread safety of toArray methods of subCollections
  * @author Martin Buchholz
  */
 
-import java.util.*;
-import java.util.concurrent.*;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.IntStream;
 
-public class toArray {
+public class ToArray {
 
     public static void main(String[] args) throws Throwable {
+        // Execute a number of times to increase the probability of
+        // failure if there is an issue
+        for (int i = 0; i < 16; i++) {
+            executeTest();
+        }
+    }
+
+    static void executeTest() throws Throwable {
         final Throwable throwable[] = new Throwable[1];
-        final int maxSize = 1000;
-        final ConcurrentHashMap<Integer, Integer> m
-            = new ConcurrentHashMap<Integer, Integer>();
+        final ConcurrentHashMap<Integer, Integer> m = new ConcurrentHashMap<>();
 
-        final Thread t1 = new Thread() { public void run() {
-            for (int i = 0; i < maxSize; i++)
-                m.put(i,i);}};
+        // Number of workers equal to the number of processors
+        // Each worker will put globally unique keys into the map
+        final int nWorkers = Runtime.getRuntime().availableProcessors();
+        final int sizePerWorker = 1024;
+        final int maxSize = nWorkers * sizePerWorker;
 
-        final Thread t2 = new Thread() {
-            public Throwable exception = null;
+        // The foreman keeps checking that the size of the arrays
+        // obtained from the key and value sets is never less than the
+        // previously observed size and is never greater than the maximum size
+        // NOTE: these size constraints are not specific to toArray and are
+        // applicable to any form of traversal of the collection views
+        CompletableFuture<?> foreman = CompletableFuture.runAsync(new Runnable() {
             private int prevSize = 0;
 
             private boolean checkProgress(Object[] a) {
                 int size = a.length;
                 if (size < prevSize) throw new RuntimeException("WRONG WAY");
-                if (size > maxSize)  throw new RuntimeException("OVERSHOOT");
+                if (size > maxSize) throw new RuntimeException("OVERSHOOT");
                 if (size == maxSize) return true;
                 prevSize = size;
                 return false;
             }
 
+            @Override
             public void run() {
                 try {
                     Integer[] empty = new Integer[0];

@@ -65,15 +79,25 @@ public class toArray {
                         if (checkProgress(m.values().toArray(empty))) return;
                         if (checkProgress(m.keySet().toArray(empty))) return;
                     }
-                } catch (Throwable t) {
-                    throwable[0] = t;
-                }}};
+                }
+                catch (Throwable t) {
+                    throwable[0] = t;
+                }
+            }
+        });
 
-        t2.start();
-        t1.start();
+        // Create workers
+        // Each worker will put globally unique keys into the map
+        CompletableFuture<?>[] workers = IntStream.range(0, nWorkers).
+                mapToObj(w -> CompletableFuture.runAsync(() -> {
+                    for (int i = 0, o = w * sizePerWorker; i < sizePerWorker; i++)
+                        m.put(o + i, i);
+                })).
+                toArray(CompletableFuture<?>[]::new);
 
-        t1.join();
-        t2.join();
+        // Wait for workers and then foreman to complete
+        CompletableFuture.allOf(workers).join();
+        foreman.join();
 
         if (throwable[0] != null)
             throw throwable[0];