ScavengeALot now causes an incremental (but possibly partially young, in the G1 sense) collection. Some such collections may be abandoned on account of MMU specs. Band-aided a native leak associated with abandoned pauses, as well as an MMU tracker overflow related to frequent scavenge events in the face of a large MMU denominator interval; the latter is protected by a product flag that defaults to false.
Reviewed-by: tonyp
Protected the stats dump with a new develop flag; other than for the dump, reconciled product and non-product behaviour in the face of the error.
Reviewed-by: tonyp
Added instrumentation and a (temporary) assert in non-product mode; clipped the value when it is found out of bounds in product mode. A fix for the original issue will follow collection of data from this instrumentation.
Reviewed-by: jcoomes, tonyp
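Illustrative sketch of the assert-in-debug / clip-in-product pattern described above; the helper name, the ASSERT guard and the value being checked are assumptions, not the actual HotSpot code:

    #include <algorithm>
    #include <cassert>
    #include <cstddef>

    // Non-product builds assert so the out-of-bounds value can be diagnosed;
    // product builds clip it so the VM can continue.
    static size_t sanitize_value(size_t value, size_t max_allowed) {
    #ifdef ASSERT   // assumed to be defined only in non-product builds
      assert(value <= max_allowed && "temporary assert: value out of bounds");
    #endif
      return std::min(value, max_allowed);   // product-mode clipping
    }

    int main() {
      return (int) sanitize_value(10, 8);    // yields 8 when the assert is compiled out
    }
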
The type in the NEW_C_HEAP_ARRAY and FREE_C_HEAP_ARRAY calls in the buffer allocation code was changed from void* to char, because the size argument had already been multiplied by the byte size of an object pointer.
Reviewed-by: ysr, tonyp
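Self-contained sketch of the over-allocation that was fixed; the macro below only mirrors the (size) * sizeof(type) behaviour of HotSpot's NEW_C_HEAP_ARRAY and is not the real definition:

    #include <cstddef>
    #include <cstdlib>

    // Stand-in mirroring NEW_C_HEAP_ARRAY(type, size): allocates
    // (size) * sizeof(type) bytes on the C heap.
    #define NEW_C_HEAP_ARRAY(type, size) ((type*) std::malloc((size) * sizeof(type)))

    int main() {
      const size_t elem_count = 1024;
      // The size argument had already been multiplied by the byte size of an
      // object pointer, so it is a byte count, not an element count.
      size_t byte_size = elem_count * sizeof(void*);

      // With void* as the element type the macro multiplies by sizeof(void*)
      // again, over-allocating by that factor.
      void** too_big = NEW_C_HEAP_ARRAY(void*, byte_size);

      // With char as the element type exactly byte_size bytes are allocated.
      char* right_size = NEW_C_HEAP_ARRAY(char, byte_size);

      std::free(too_big);
      std::free(right_size);
      return 0;
    }
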
Allow iteration over the shared spaces when using CDS, repealing previous proscription. Deferred further required CDS-related cleanups of perm gen to CR 6897789.
Reviewed-by: phh, jmasa
Incorrect code was being generated for the store operation in the null case of the aastore bytecode template. The bad code was generated by the store_heap_oop routine, which takes a Register as its second argument. Passing NULL_WORD (0) as the second argument causes the value to be converted to Register(0), which is rax. Thus the generated store was "mov (dst), $rax" instead of "mov (dst), $0x0". Calls to store_heap_oop that pass NULL_WORD as the second argument were changed to use a new routine, store_heap_oop_null.
Reviewed-by: kvn, twisti
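Illustrative C++ sketch of the implicit conversion behind the bad code; Register and the two store routines are simplified stand-ins for the HotSpot assembler interface, not the actual implementation:

    #include <cstdio>

    #define NULL_WORD 0

    // Simplified Register with an implicit constructor from an integer
    // encoding; encoding 0 corresponds to rax.
    struct Register {
      int encoding;
      Register(int enc) : encoding(enc) {}
    };

    // Stand-in for store_heap_oop(dst, Register src): emits a register store.
    void store_heap_oop(const char* dst, Register src) {
      std::printf("mov (%s), register#%d\n", dst, src.encoding);
    }

    // The fix: a dedicated routine that stores an immediate zero.
    void store_heap_oop_null(const char* dst) {
      std::printf("mov (%s), $0x0\n", dst);
    }

    int main() {
      store_heap_oop("dst", NULL_WORD);  // NULL_WORD silently becomes Register(0), i.e. rax
      store_heap_oop_null("dst");        // stores the intended zero immediate
      return 0;
    }
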
Restore the old behaviour with respect to HeapDumpPath: the first dump goes to <file>, the <n>th dump goes to <file>.<n-1>, and the default value of <file> is the same as before.
Reviewed-by: alanb, jcoomes, tonyp
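Small sketch of the restored naming rule; the function name and the example base path are made up, only the <file> / <file>.<n-1> scheme comes from the change above:

    #include <iostream>
    #include <string>

    // First dump goes to <file>; the nth dump (n > 1) goes to <file>.<n-1>.
    std::string dump_file_name(const std::string& base, int dump_count) {
      if (dump_count <= 1) return base;
      return base + "." + std::to_string(dump_count - 1);
    }

    int main() {
      std::cout << dump_file_name("java_pid1234.hprof", 1) << "\n";  // java_pid1234.hprof
      std::cout << dump_file_name("java_pid1234.hprof", 3) << "\n";  // java_pid1234.hprof.2
      return 0;
    }
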
The fix addresses two memory leaks in the G1 code: (1) _evac_failure_scan_stack, a resource object allocated on the C heap, was never freed; (2) RSHashTable instances were linked into a deleted list that was only cleared at a full GC.
Reviewed-by: tonyp, iveresov
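Plain C++ analogue of leak (1) and its fix; the class is a sketch, not the G1 source, and only the field name comes from the description above:

    #include <cstddef>
    #include <vector>

    // A scan stack allocated on the C heap for each evacuation failure.
    class G1CollectorSketch {
      std::vector<void*>* _evac_failure_scan_stack;

     public:
      G1CollectorSketch() : _evac_failure_scan_stack(NULL) {}

      void init_for_evac_failure() {
        _evac_failure_scan_stack = new std::vector<void*>();  // allocated per pause
      }

      void finalize_for_evac_failure() {
        // Before the fix the container itself was never freed; deleting it
        // here releases the C-heap memory at the end of the pause.
        delete _evac_failure_scan_stack;
        _evac_failure_scan_stack = NULL;
      }
    };

    int main() {
      G1CollectorSketch g1;
      g1.init_for_evac_failure();
      g1.finalize_for_evac_failure();
      return 0;
    }
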
6889757: G1: enable card mark elision for initializing writes from compiled code (ReduceInitialCardMarks)
Defer the (compiler-elided) card mark for a slow-path allocation until after the store and before the next safepoint; G1 now answers yes to can_elide_tlab_write_barriers().
Reviewed-by: jcoomes, kvn, never
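Hedged sketch of the deferral: a thread-local record of the newly allocated object whose cards are dirtied after the initializing stores and before the next safepoint. All names below are hypothetical stand-ins, not the HotSpot implementation:

    #include <cstddef>
    #include <cstdlib>

    // Hypothetical per-thread record of an object with a pending card mark.
    struct JavaThreadSketch {
      void*  deferred_card_mark_obj;
      size_t deferred_card_mark_size;
      JavaThreadSketch() : deferred_card_mark_obj(NULL), deferred_card_mark_size(0) {}
    };

    // Slow-path allocation: the compiler elided the card marks for the
    // initializing stores, so remember the object instead of dirtying cards now.
    void* slow_path_allocate(JavaThreadSketch* t, size_t size_in_bytes) {
      void* obj = std::malloc(size_in_bytes);   // stands in for a heap allocation
      t->deferred_card_mark_obj  = obj;
      t->deferred_card_mark_size = size_in_bytes;
      return obj;
    }

    // Runs after the initializing stores and before the next safepoint:
    // dirty the cards covering the new object, then clear the deferred state.
    void flush_deferred_card_mark(JavaThreadSketch* t) {
      if (t->deferred_card_mark_obj != NULL) {
        // dirty_cards_for(t->deferred_card_mark_obj, t->deferred_card_mark_size);  // hypothetical
        t->deferred_card_mark_obj  = NULL;
        t->deferred_card_mark_size = 0;
      }
    }

    int main() {
      JavaThreadSketch t;
      void* obj = slow_path_allocate(&t, 64);
      // ... initializing stores into obj happen here ...
      flush_deferred_card_mark(&t);
      std::free(obj);
      return 0;
    }
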
When we copy objects to survivors during marking, we incorrectly set NTAMS to bottom, which causes marking to miss visiting some of those objects.
Reviewed-by: apetrusenko, iveresov
Tidies up the G1 heap verification a bit. In particular, when verification is done in parallel and a failure occurs, the failure is propagated to the top level and the heap is dumped once at the end, rather than by every thread that encounters a failure.
Reviewed-by: johnc, jmasa
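Illustrative sketch of the failure-propagation pattern: workers only record a failure, and the heap dump happens once at the top level (std::thread stands in for G1's worker threads; dump_heap is a hypothetical placeholder):

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<bool> verification_failed(false);

    // Each worker verifies its share of regions and only records a failure;
    // it does not dump the heap itself.
    void verify_region_share(int worker_id) {
      bool ok = (worker_id != 2);   // pretend worker 2 finds a broken region
      if (!ok) {
        verification_failed.store(true, std::memory_order_relaxed);
      }
    }

    int main() {
      std::vector<std::thread> workers;
      for (int i = 0; i < 4; ++i) {
        workers.emplace_back(verify_region_share, i);
      }
      for (size_t i = 0; i < workers.size(); ++i) {
        workers[i].join();
      }
      // The failure is acted on once, at the top level, after all workers finish.
      if (verification_failed.load()) {
        std::printf("heap verification failed; dumping heap once\n");
        // dump_heap();  // hypothetical placeholder
      }
      return 0;
    }
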