Alejandro Murillo 2013-05-03 08:10:10 -07:00
commit ec847be4fa
149 changed files with 3389 additions and 1579 deletions

View File

@ -1,22 +1,22 @@
<html>
<head>
<title>
C2 Replay
Replay
</title>
</head>
<body>
<h1>C2 compiler replay</h1>
<h1>Compiler replay</h1>
<p>
The C2 compiler replay is a function to repeat the compiling process from a crashed java process in compiled method<br>
The compiler replay function repeats the compilation of a method from a java process that crashed in compiled code<br>
This function only exists in the debug version of the VM
</p>
<h2>Usage</h2>
<pre>
First, use SA to attach to the core file, if suceeded, do
clhsdb>dumpreplaydata <address> | -a | <thread_id> [> replay.txt]
<pre>
First, use SA to attach to the core file, if succeeded, do
hsdb&gt; dumpreplaydata &lt;address&gt; | -a | &lt;thread_id&gt; [&gt; replay.txt]
creates the file replay.txt; address is the address of a Method or an nmethod (CodeBlob)
clhsdb>buildreplayjars [all | boot | app]
hsdb&gt; buildreplayjars [all | boot | app]
create files:
all:
app.jar, boot.jar
@ -26,16 +26,16 @@ First, use SA to attach to the core file, if suceeded, do
app.jar
exit SA now.
Second, use the obtained replay text file replay.txt and the jar files app.jar and boot.jar with a debug version of java
java -Xbootclasspath/p:boot.jar -cp app.jar -XX:ReplayDataFile=<datafile> -XX:+ReplayCompiles ....
java -Xbootclasspath/p:boot.jar -cp app.jar -XX:ReplayDataFile=&lt;datafile&gt; -XX:+ReplayCompiles ....
This will replay the compiling process.
With ReplayCompiles, the replay will recompile all the methods in app.jar and boot.jar to emulate the process in the java app.
notes:
1) Most of the time, we don't need boot.jar, which contains the classes loaded from the JDK. They are only modified when an agent (JVMDI) is running and modifies the classes.
2) If encounter error as "<flag>" not found, that means the SA is using a VMStructs which is different from the one with corefile. In this case, SA has a utility tool vmstructsdump which is located at agent/src/os/<os>/proc/<os_platform>
2) If you encounter an error such as "&lt;flag&gt;" not found, the SA is using a VMStructs that is different from the one in the core file. In this case, SA provides a utility tool, vmstructsdump, located at agent/src/os/&lt;os&gt;/proc/&lt;os_platform&gt;
Use this tool to dump VM type library:
vmstructsdump libjvm.so > <type_name>.db
vmstructsdump libjvm.so &gt; &lt;type_name&gt;.db
set env SA_TYPEDB=<type_name>.db (refer different shell for set envs)
set env SA_TYPEDB=&lt;type_name&gt;.db (refer to your shell's documentation for setting environment variables)

View File

@ -15,7 +15,7 @@ GUI tools. Command line HSDB (CLHSDB) tool is alternative to SA GUI tool HSDB.
<p>
There is also JavaScript based SA command line interface called <a href="jsdb.html">jsdb</a>.
But, CLHSDB supports Unix shell-like (or dbx/gdb-like) command line interface with
support for output redirection/appending (familiar >, >>), command history and so on.
support for output redirection/appending (familiar &gt;, &gt;&gt;), command history and so on.
Each CLHSDB command can have zero or more arguments and optionally end with output redirection
(or append) to a file. Commands may be stored in a file and run using <b>source</b> command.
<b>help</b> command prints usage message for all supported commands (or a specific command)
@ -49,7 +49,7 @@ Available commands:
dumpheap [ file ] <font color="red">dump heap in hprof binary format</font>
dumpideal -a | id <font color="red">dump ideal graph like debug flag -XX:+PrintIdeal</font>
dumpilt -a | id <font color="red">dump inline tree for C2 compilation</font>
dumpreplaydata <address> | -a | <thread_id> [>replay.txt] <font color="red">dump replay data into a file</font>
dumpreplaydata &lt;address&gt; | -a | &lt;thread_id&gt; [&gt;replay.txt] <font color="red">dump replay data into a file</font>
echo [ true | false ] <font color="red">turn on/off command echo mode</font>
examine [ address/count ] | [ address,address] <font color="red">show contents of memory from given address</font>
field [ type [ name fieldtype isStatic offset address ] ] <font color="red">print info about a field of HotSpot type</font>
@ -96,11 +96,11 @@ Available commands:
<h3>JavaScript integration</h3>
<p>Few CLHSDB commands are already implemented in JavaScript. It is possible to extend CLHSDB command set
<p>A few CLHSDB commands are already implemented in JavaScript. It is possible to extend CLHSDB command set
by implementing more commands in a JavaScript file and by loading it by <b>jsload</b> command. <b>jseval</b>
command may be used to evaluate arbitrary JavaScript expression from a string. Any JavaScript function
may be exposed as a CLHSDB command by registering it using JavaScript <b><code>registerCommand</code></b>
function. This function accepts command name, usage and name of the JavaScript implementation function
function. This function accepts command name, usage and name of the JavaScript implementation function
as arguments.
</p>
@ -127,11 +127,11 @@ hsdb&gt; jsload test.js
</code>
</pre>
<h3>C2 Compilation Replay</h3>
<h3>Compilation Replay</h3>
<p>
When a java process crashes in a compiled method, a core file is usually saved.
The C2 replay function can reproduce the compiling process in the core.
<a href="c2replay.html">c2replay.html</a>
The replay function can reproduce the compilation process from the core.
<a href="cireplay.html">cireplay.html</a>
</body>
</html>

View File

@ -204,7 +204,7 @@ Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_lookupByName0(
jstring objectName, jstring symbolName)
{
struct ps_prochandle* ph = get_proc_handle(env, this_obj);
if (ph->core != NULL) {
if (ph != NULL && ph->core != NULL) {
return lookupByNameIncore(env, ph, this_obj, objectName, symbolName);
}
@ -238,10 +238,13 @@ JNIEXPORT jobject JNICALL Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_loo
const char* sym = NULL;
struct ps_prochandle* ph = get_proc_handle(env, this_obj);
sym = symbol_for_pc(ph, (uintptr_t) addr, &offset);
if (sym == NULL) return 0;
return (*env)->CallObjectMethod(env, this_obj, createClosestSymbol_ID,
if (ph != NULL && ph->core != NULL) {
sym = symbol_for_pc(ph, (uintptr_t) addr, &offset);
if (sym == NULL) return 0;
return (*env)->CallObjectMethod(env, this_obj, createClosestSymbol_ID,
(*env)->NewStringUTF(env, sym), (jlong)offset);
}
return 0;
}
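The same defensive pattern is applied throughout this file: the process handle may be NULL, and its core field is only set when SA is attached to a core file, so both are checked before the core-only path is taken. A minimal standalone sketch of that pattern (the struct and function names here are hypothetical, not the real SA types):
struct proc_handle { void* core; };   // stand-in for ps_prochandle

static int lookup_in_core(struct proc_handle* ph) {
  if (ph != NULL && ph->core != NULL) {
    return 1;   // safe to use the core-file lookup path
  }
  return 0;     // live process or failed attach: fall back / bail out
}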
/** called from Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_readBytesFromProcess0 */
@ -279,7 +282,7 @@ Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_readBytesFromProcess0(
jbyteArray array;
struct ps_prochandle* ph = get_proc_handle(env, this_obj);
if (ph->core != NULL) {
if (ph != NULL && ph->core != NULL) {
return readBytesFromCore(env, ph, this_obj, addr, numBytes);
}
@ -394,9 +397,9 @@ bool fill_java_threads(JNIEnv* env, jobject this_obj, struct ps_prochandle* ph)
/* For core file only, called from
* Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_getThreadIntegerRegisterSet0
*/
jlongArray getThreadIntegerRegisterSetFromCore(JNIEnv *env, jobject this_obj, long lwp_id) {
jlongArray getThreadIntegerRegisterSetFromCore(JNIEnv *env, jobject this_obj, long lwp_id, struct ps_prochandle* ph) {
if (!_threads_filled) {
if (!fill_java_threads(env, this_obj, get_proc_handle(env, this_obj))) {
if (!fill_java_threads(env, this_obj, ph)) {
throw_new_debugger_exception(env, "Failed to fill in threads");
return 0;
} else {
@ -409,7 +412,6 @@ jlongArray getThreadIntegerRegisterSetFromCore(JNIEnv *env, jobject this_obj, lo
jlongArray array;
jlong *regs;
struct ps_prochandle* ph = get_proc_handle(env, this_obj);
if (get_lwp_regs(ph, lwp_id, &gregs) != true) {
THROW_NEW_DEBUGGER_EXCEPTION_("get_thread_regs failed for a lwp", 0);
}
@ -521,8 +523,8 @@ Java_sun_jvm_hotspot_debugger_bsd_BsdDebuggerLocal_getThreadIntegerRegisterSet0(
print_debug("getThreadRegisterSet0 called\n");
struct ps_prochandle* ph = get_proc_handle(env, this_obj);
if (ph->core != NULL) {
return getThreadIntegerRegisterSetFromCore(env, this_obj, thread_id);
if (ph != NULL && ph->core != NULL) {
return getThreadIntegerRegisterSetFromCore(env, this_obj, thread_id, ph);
}
kern_return_t result;
@ -705,8 +707,8 @@ JNF_COCOA_ENTER(env);
task_t gTask = 0;
result = task_for_pid(mach_task_self(), jpid, &gTask);
if (result != KERN_SUCCESS) {
print_error("attach: task_for_pid(%d) failed (%d)\n", (int)jpid, result);
THROW_NEW_DEBUGGER_EXCEPTION("Can't attach to the process");
print_error("attach: task_for_pid(%d) failed: '%s' (%d)\n", (int)jpid, mach_error_string(result), result);
THROW_NEW_DEBUGGER_EXCEPTION("Can't attach to the process. Could be caused by an incorrect pid or lack of privileges.");
}
putTask(env, this_obj, gTask);
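The improved attach diagnostics rely on mach_error_string(), which turns the raw kern_return_t returned by task_for_pid() into readable text. A tiny standalone sketch of that conversion (illustrative only; it uses a canned error code rather than a real failed attach):
#include <mach/kern_return.h>
#include <mach/mach_error.h>
#include <stdio.h>

int main() {
  kern_return_t kr = KERN_FAILURE;   // stand-in for a failed task_for_pid()
  fprintf(stderr, "attach: task_for_pid failed: '%s' (%d)\n",
          mach_error_string(kr), (int)kr);
  return 0;
}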

View File

@ -93,10 +93,11 @@ public class ciEnv extends VMObject {
CompileTask task = task();
Method method = task.method();
int entryBci = task.osrBci();
int compLevel = task.compLevel();
Klass holder = method.getMethodHolder();
out.println("compile " + holder.getName().asString() + " " +
OopUtilities.escapeString(method.getName().asString()) + " " +
method.getSignature().asString() + " " +
entryBci);
entryBci + " " + compLevel);
}
}
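With the extra field, a record emitted by dumpreplaydata for a normal (non-OSR) compilation would look like the illustrative line below: -1 is the entry bci used for non-OSR compilations, and the trailing number is the compilation level (4, i.e. CompLevel_full_optimization, for a C2 compile). The class, method and signature are made up for the example:
compile java/lang/String hashCode ()I -1 4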

View File

@ -78,6 +78,8 @@ public class NMethod extends CodeBlob {
current sweep traversal index. */
private static CIntegerField stackTraversalMarkField;
private static CIntegerField compLevelField;
static {
VM.registerVMInitializedObserver(new Observer() {
public void update(Observable o, Object data) {
@ -113,7 +115,7 @@ public class NMethod extends CodeBlob {
osrEntryPointField = type.getAddressField("_osr_entry_point");
lockCountField = type.getJIntField("_lock_count");
stackTraversalMarkField = type.getCIntegerField("_stack_traversal_mark");
compLevelField = type.getCIntegerField("_comp_level");
pcDescSize = db.lookupType("PcDesc").getSize();
}
@ -530,7 +532,7 @@ public class NMethod extends CodeBlob {
out.println("compile " + holder.getName().asString() + " " +
OopUtilities.escapeString(method.getName().asString()) + " " +
method.getSignature().asString() + " " +
getEntryBCI());
getEntryBCI() + " " + getCompLevel());
}
@ -551,4 +553,5 @@ public class NMethod extends CodeBlob {
private int getHandlerTableOffset() { return (int) handlerTableOffsetField.getValue(addr); }
private int getNulChkTableOffset() { return (int) nulChkTableOffsetField .getValue(addr); }
private int getNMethodEndOffset() { return (int) nmethodEndOffsetField .getValue(addr); }
private int getCompLevel() { return (int) compLevelField .getValue(addr); }
}

View File

@ -46,10 +46,12 @@ public class CompileTask extends VMObject {
Type type = db.lookupType("CompileTask");
methodField = type.getAddressField("_method");
osrBciField = new CIntField(type.getCIntegerField("_osr_bci"), 0);
compLevelField = new CIntField(type.getCIntegerField("_comp_level"), 0);
}
private static AddressField methodField;
private static CIntField osrBciField;
private static CIntField compLevelField;
public CompileTask(Address addr) {
super(addr);
@ -63,4 +65,8 @@ public class CompileTask extends VMObject {
public int osrBci() {
return (int)osrBciField.getValue(getAddress());
}
public int compLevel() {
return (int)compLevelField.getValue(getAddress());
}
}

View File

@ -117,8 +117,6 @@ public class JMap extends Tool {
mode = MODE_HEAP_SUMMARY;
} else if (modeFlag.equals("-histo")) {
mode = MODE_HISTOGRAM;
} else if (modeFlag.equals("-permstat")) {
mode = MODE_CLSTATS;
} else if (modeFlag.equals("-clstats")) {
mode = MODE_CLSTATS;
} else if (modeFlag.equals("-finalizerinfo")) {

View File

@ -35,7 +35,7 @@ HOTSPOT_VM_COPYRIGHT=Copyright 2013
HS_MAJOR_VER=25
HS_MINOR_VER=0
HS_BUILD_NUMBER=30
HS_BUILD_NUMBER=31
JDK_MAJOR_VER=1
JDK_MINOR_VER=8

View File

@ -0,0 +1,193 @@
/*
* Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "code/compiledIC.hpp"
#include "code/icBuffer.hpp"
#include "code/nmethod.hpp"
#include "memory/resourceArea.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/safepoint.hpp"
#ifdef COMPILER2
#include "opto/matcher.hpp"
#endif
// Release the CompiledICHolder* associated with this call site if there is one.
void CompiledIC::cleanup_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
if (is_icholder_entry(call->destination())) {
NativeMovConstReg* value = nativeMovConstReg_at(call_site->cached_value());
InlineCacheBuffer::queue_for_release((CompiledICHolder*)value->data());
}
}
bool CompiledIC::is_icholder_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
return is_icholder_entry(call->destination());
}
//-----------------------------------------------------------------------------
// High-level access to an inline cache. Guaranteed to be MT-safe.
CompiledIC::CompiledIC(nmethod* nm, NativeCall* call)
: _ic_call(call)
{
address ic_call = call->instruction_address();
assert(ic_call != NULL, "ic_call address must be set");
assert(nm != NULL, "must pass nmethod");
assert(nm->contains(ic_call), "must be in nmethod");
// Search for the ic_call at the given address.
RelocIterator iter(nm, ic_call, ic_call+1);
bool ret = iter.next();
assert(ret == true, "relocInfo must exist at this address");
assert(iter.addr() == ic_call, "must find ic_call");
if (iter.type() == relocInfo::virtual_call_type) {
virtual_call_Relocation* r = iter.virtual_call_reloc();
_is_optimized = false;
_value = nativeMovConstReg_at(r->cached_value());
} else {
assert(iter.type() == relocInfo::opt_virtual_call_type, "must be a virtual call");
_is_optimized = true;
_value = NULL;
}
}
// ----------------------------------------------------------------------------
#define __ _masm.
void CompiledStaticCall::emit_to_interp_stub(CodeBuffer &cbuf) {
#ifdef COMPILER2
// Stub is fixed up when the corresponding call is converted from calling
// compiled code to calling interpreted code.
// set (empty), G5
// jmp -1
address mark = cbuf.insts_mark(); // Get mark within main instrs section.
MacroAssembler _masm(&cbuf);
address base =
__ start_a_stub(to_interp_stub_size()*2);
if (base == NULL) return; // CodeBuffer::expand failed.
// Static stub relocation stores the instruction address of the call.
__ relocate(static_stub_Relocation::spec(mark));
__ set_metadata(NULL, as_Register(Matcher::inline_cache_reg_encode()));
__ set_inst_mark();
AddressLiteral addrlit(-1);
__ JUMP(addrlit, G3, 0);
__ delayed()->nop();
// Update current stubs pointer and restore code_end.
__ end_a_stub();
#else
ShouldNotReachHere();
#endif
}
#undef __
int CompiledStaticCall::to_interp_stub_size() {
// This doesn't need to be accurate but it must be larger or equal to
// the real size of the stub.
return (NativeMovConstReg::instruction_size + // sethi/setlo;
NativeJump::instruction_size + // sethi; jmp; nop
(TraceJumps ? 20 * BytesPerInstWord : 0) );
}
// Relocation entries for call stub, compiled java to interpreter.
int CompiledStaticCall::reloc_to_interp_stub() {
return 10; // 4 in emit_java_to_interp + 1 in Java_Static_Call
}
void CompiledStaticCall::set_to_interpreted(methodHandle callee, address entry) {
address stub = find_stub();
guarantee(stub != NULL, "stub not found");
if (TraceICs) {
ResourceMark rm;
tty->print_cr("CompiledStaticCall@" INTPTR_FORMAT ": set_to_interpreted %s",
instruction_address(),
callee->name_and_sig_as_C_string());
}
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
assert(method_holder->data() == 0 || method_holder->data() == (intptr_t)callee(),
"a) MT-unsafe modification of inline cache");
assert(jump->jump_destination() == (address)-1 || jump->jump_destination() == entry,
"b) MT-unsafe modification of inline cache");
// Update stub.
method_holder->set_data((intptr_t)callee());
jump->set_jump_destination(entry);
// Update jump to call.
set_destination_mt_safe(stub);
}
void CompiledStaticCall::set_stub_to_clean(static_stub_Relocation* static_stub) {
assert (CompiledIC_lock->is_locked() || SafepointSynchronize::is_at_safepoint(), "mt unsafe call");
// Reset stub.
address stub = static_stub->addr();
assert(stub != NULL, "stub not found");
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
method_holder->set_data(0);
jump->set_jump_destination((address)-1);
}
//-----------------------------------------------------------------------------
// Non-product mode code
#ifndef PRODUCT
void CompiledStaticCall::verify() {
// Verify call.
NativeCall::verify();
if (os::is_MP()) {
verify_alignment();
}
// Verify stub.
address stub = find_stub();
assert(stub != NULL, "no stub found for static call");
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
// Verify state.
assert(is_clean() || is_call_to_compiled() || is_call_to_interpreted(), "sanity check");
}
#endif // !PRODUCT

View File

@ -1655,53 +1655,6 @@ uint BoxLockNode::size(PhaseRegAlloc *ra_) const {
return ra_->C->scratch_emit_size(this);
}
//=============================================================================
// emit call stub, compiled java to interpretor
void emit_java_to_interp(CodeBuffer &cbuf ) {
// Stub is fixed up when the corresponding call is converted from calling
// compiled code to calling interpreted code.
// set (empty), G5
// jmp -1
address mark = cbuf.insts_mark(); // get mark within main instrs section
MacroAssembler _masm(&cbuf);
address base =
__ start_a_stub(Compile::MAX_stubs_size);
if (base == NULL) return; // CodeBuffer::expand failed
// static stub relocation stores the instruction address of the call
__ relocate(static_stub_Relocation::spec(mark));
__ set_metadata(NULL, reg_to_register_object(Matcher::inline_cache_reg_encode()));
__ set_inst_mark();
AddressLiteral addrlit(-1);
__ JUMP(addrlit, G3, 0);
__ delayed()->nop();
// Update current stubs pointer and restore code_end.
__ end_a_stub();
}
// size of call stub, compiled java to interpretor
uint size_java_to_interp() {
// This doesn't need to be accurate but it must be larger or equal to
// the real size of the stub.
return (NativeMovConstReg::instruction_size + // sethi/setlo;
NativeJump::instruction_size + // sethi; jmp; nop
(TraceJumps ? 20 * BytesPerInstWord : 0) );
}
// relocation entries for call stub, compiled java to interpretor
uint reloc_java_to_interp() {
return 10; // 4 in emit_java_to_interp + 1 in Java_Static_Call
}
//=============================================================================
#ifndef PRODUCT
void MachUEPNode::format( PhaseRegAlloc *ra_, outputStream *st ) const {
@ -2576,15 +2529,15 @@ encode %{
enc_class Java_Static_Call (method meth) %{ // JAVA STATIC CALL
// CALL to fixup routine. Fixup routine uses ScopeDesc info to determine
// who we intended to call.
if ( !_method ) {
if (!_method) {
emit_call_reloc(cbuf, $meth$$method, relocInfo::runtime_call_type);
} else if (_optimized_virtual) {
emit_call_reloc(cbuf, $meth$$method, relocInfo::opt_virtual_call_type);
} else {
emit_call_reloc(cbuf, $meth$$method, relocInfo::static_call_type);
}
if( _method ) { // Emit stub for static call
emit_java_to_interp(cbuf);
if (_method) { // Emit stub for static call.
CompiledStaticCall::emit_to_interp_stub(cbuf);
}
%}

View File

@ -0,0 +1,180 @@
/*
* Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "asm/macroAssembler.inline.hpp"
#include "code/compiledIC.hpp"
#include "code/icBuffer.hpp"
#include "code/nmethod.hpp"
#include "memory/resourceArea.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/safepoint.hpp"
// Release the CompiledICHolder* associated with this call site if there is one.
void CompiledIC::cleanup_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
if (is_icholder_entry(call->destination())) {
NativeMovConstReg* value = nativeMovConstReg_at(call_site->cached_value());
InlineCacheBuffer::queue_for_release((CompiledICHolder*)value->data());
}
}
bool CompiledIC::is_icholder_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
return is_icholder_entry(call->destination());
}
//-----------------------------------------------------------------------------
// High-level access to an inline cache. Guaranteed to be MT-safe.
CompiledIC::CompiledIC(nmethod* nm, NativeCall* call)
: _ic_call(call)
{
address ic_call = call->instruction_address();
assert(ic_call != NULL, "ic_call address must be set");
assert(nm != NULL, "must pass nmethod");
assert(nm->contains(ic_call), "must be in nmethod");
// Search for the ic_call at the given address.
RelocIterator iter(nm, ic_call, ic_call+1);
bool ret = iter.next();
assert(ret == true, "relocInfo must exist at this address");
assert(iter.addr() == ic_call, "must find ic_call");
if (iter.type() == relocInfo::virtual_call_type) {
virtual_call_Relocation* r = iter.virtual_call_reloc();
_is_optimized = false;
_value = nativeMovConstReg_at(r->cached_value());
} else {
assert(iter.type() == relocInfo::opt_virtual_call_type, "must be a virtual call");
_is_optimized = true;
_value = NULL;
}
}
// ----------------------------------------------------------------------------
#define __ _masm.
void CompiledStaticCall::emit_to_interp_stub(CodeBuffer &cbuf) {
// Stub is fixed up when the corresponding call is converted from
// calling compiled code to calling interpreted code.
// movq rbx, 0
// jmp -5 # to self
address mark = cbuf.insts_mark(); // Get mark within main instrs section.
// Note that the code buffer's insts_mark is always relative to insts.
// That's why we must use the macroassembler to generate a stub.
MacroAssembler _masm(&cbuf);
address base =
__ start_a_stub(to_interp_stub_size()*2);
if (base == NULL) return; // CodeBuffer::expand failed.
// Static stub relocation stores the instruction address of the call.
__ relocate(static_stub_Relocation::spec(mark), Assembler::imm_operand);
// Static stub relocation also tags the Method* in the code-stream.
__ mov_metadata(rbx, (Metadata*) NULL); // Method is zapped till fixup time.
// This is recognized as unresolved by relocs/nativeinst/ic code.
__ jump(RuntimeAddress(__ pc()));
// Update current stubs pointer and restore insts_end.
__ end_a_stub();
}
#undef __
int CompiledStaticCall::to_interp_stub_size() {
return NOT_LP64(10) // movl; jmp
LP64_ONLY(15); // movq (1+1+8); jmp (1+4)
}
// Relocation entries for call stub, compiled java to interpreter.
int CompiledStaticCall::reloc_to_interp_stub() {
return 4; // 3 in emit_to_interp_stub + 1 in emit_call
}
void CompiledStaticCall::set_to_interpreted(methodHandle callee, address entry) {
address stub = find_stub();
guarantee(stub != NULL, "stub not found");
if (TraceICs) {
ResourceMark rm;
tty->print_cr("CompiledStaticCall@" INTPTR_FORMAT ": set_to_interpreted %s",
instruction_address(),
callee->name_and_sig_as_C_string());
}
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
assert(method_holder->data() == 0 || method_holder->data() == (intptr_t)callee(),
"a) MT-unsafe modification of inline cache");
assert(jump->jump_destination() == (address)-1 || jump->jump_destination() == entry,
"b) MT-unsafe modification of inline cache");
// Update stub.
method_holder->set_data((intptr_t)callee());
jump->set_jump_destination(entry);
// Update jump to call.
set_destination_mt_safe(stub);
}
void CompiledStaticCall::set_stub_to_clean(static_stub_Relocation* static_stub) {
assert (CompiledIC_lock->is_locked() || SafepointSynchronize::is_at_safepoint(), "mt unsafe call");
// Reset stub.
address stub = static_stub->addr();
assert(stub != NULL, "stub not found");
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
method_holder->set_data(0);
jump->set_jump_destination((address)-1);
}
//-----------------------------------------------------------------------------
// Non-product mode code
#ifndef PRODUCT
void CompiledStaticCall::verify() {
// Verify call.
NativeCall::verify();
if (os::is_MP()) {
verify_alignment();
}
// Verify stub.
address stub = find_stub();
assert(stub != NULL, "no stub found for static call");
// Creation also verifies the object.
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub);
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
// Verify state.
assert(is_clean() || is_call_to_compiled() || is_call_to_interpreted(), "sanity check");
}
#endif // !PRODUCT

View File

@ -1256,43 +1256,6 @@ uint BoxLockNode::size(PhaseRegAlloc *ra_) const {
}
}
//=============================================================================
// emit call stub, compiled java to interpreter
void emit_java_to_interp(CodeBuffer &cbuf ) {
// Stub is fixed up when the corresponding call is converted from calling
// compiled code to calling interpreted code.
// mov rbx,0
// jmp -1
address mark = cbuf.insts_mark(); // get mark within main instrs section
// Note that the code buffer's insts_mark is always relative to insts.
// That's why we must use the macroassembler to generate a stub.
MacroAssembler _masm(&cbuf);
address base =
__ start_a_stub(Compile::MAX_stubs_size);
if (base == NULL) return; // CodeBuffer::expand failed
// static stub relocation stores the instruction address of the call
__ relocate(static_stub_Relocation::spec(mark), RELOC_IMM32);
// static stub relocation also tags the Method* in the code-stream.
__ mov_metadata(rbx, (Metadata*)NULL); // method is zapped till fixup time
// This is recognized as unresolved by relocs/nativeInst/ic code
__ jump(RuntimeAddress(__ pc()));
__ end_a_stub();
// Update current stubs pointer and restore insts_end.
}
// size of call stub, compiled java to interpretor
uint size_java_to_interp() {
return 10; // movl; jmp
}
// relocation entries for call stub, compiled java to interpretor
uint reloc_java_to_interp() {
return 4; // 3 in emit_java_to_interp + 1 in Java_Static_Call
}
//=============================================================================
#ifndef PRODUCT
void MachUEPNode::format( PhaseRegAlloc *ra_, outputStream* st ) const {
@ -1909,8 +1872,8 @@ encode %{
emit_d32_reloc(cbuf, ($meth$$method - (int)(cbuf.insts_end()) - 4),
static_call_Relocation::spec(), RELOC_IMM32 );
}
if (_method) { // Emit stub for static call
emit_java_to_interp(cbuf);
if (_method) { // Emit stub for static call.
CompiledStaticCall::emit_to_interp_stub(cbuf);
}
%}

View File

@ -1387,48 +1387,6 @@ uint BoxLockNode::size(PhaseRegAlloc *ra_) const
return (offset < 0x80) ? 5 : 8; // REX
}
//=============================================================================
// emit call stub, compiled java to interpreter
void emit_java_to_interp(CodeBuffer& cbuf)
{
// Stub is fixed up when the corresponding call is converted from
// calling compiled code to calling interpreted code.
// movq rbx, 0
// jmp -5 # to self
address mark = cbuf.insts_mark(); // get mark within main instrs section
// Note that the code buffer's insts_mark is always relative to insts.
// That's why we must use the macroassembler to generate a stub.
MacroAssembler _masm(&cbuf);
address base =
__ start_a_stub(Compile::MAX_stubs_size);
if (base == NULL) return; // CodeBuffer::expand failed
// static stub relocation stores the instruction address of the call
__ relocate(static_stub_Relocation::spec(mark), RELOC_IMM64);
// static stub relocation also tags the Method* in the code-stream.
__ mov_metadata(rbx, (Metadata*) NULL); // method is zapped till fixup time
// This is recognized as unresolved by relocs/nativeinst/ic code
__ jump(RuntimeAddress(__ pc()));
// Update current stubs pointer and restore insts_end.
__ end_a_stub();
}
// size of call stub, compiled java to interpretor
uint size_java_to_interp()
{
return 15; // movq (1+1+8); jmp (1+4)
}
// relocation entries for call stub, compiled java to interpretor
uint reloc_java_to_interp()
{
return 4; // 3 in emit_java_to_interp + 1 in Java_Static_Call
}
//=============================================================================
#ifndef PRODUCT
void MachUEPNode::format(PhaseRegAlloc* ra_, outputStream* st) const
@ -2078,8 +2036,8 @@ encode %{
RELOC_DISP32);
}
if (_method) {
// Emit stub for static call
emit_java_to_interp(cbuf);
// Emit stub for static call.
CompiledStaticCall::emit_to_interp_stub(cbuf);
}
%}

View File

@ -0,0 +1,122 @@
/*
* Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*
*/
#include "precompiled.hpp"
#include "classfile/systemDictionary.hpp"
#include "code/codeCache.hpp"
#include "code/compiledIC.hpp"
#include "code/icBuffer.hpp"
#include "code/nmethod.hpp"
#include "code/vtableStubs.hpp"
#include "interpreter/interpreter.hpp"
#include "interpreter/linkResolver.hpp"
#include "memory/metadataFactory.hpp"
#include "memory/oopFactory.hpp"
#include "oops/method.hpp"
#include "oops/oop.inline.hpp"
#include "oops/symbol.hpp"
#include "runtime/icache.hpp"
#include "runtime/sharedRuntime.hpp"
#include "runtime/stubRoutines.hpp"
#include "utilities/events.hpp"
// Release the CompiledICHolder* associated with this call site if there is one.
void CompiledIC::cleanup_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
if (is_icholder_entry(call->destination())) {
NativeMovConstReg* value = nativeMovConstReg_at(call_site->cached_value());
InlineCacheBuffer::queue_for_release((CompiledICHolder*)value->data());
}
}
bool CompiledIC::is_icholder_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
return is_icholder_entry(call->destination());
}
//-----------------------------------------------------------------------------
// High-level access to an inline cache. Guaranteed to be MT-safe.
CompiledIC::CompiledIC(nmethod* nm, NativeCall* call)
: _ic_call(call)
{
address ic_call = call->instruction_address();
assert(ic_call != NULL, "ic_call address must be set");
assert(nm != NULL, "must pass nmethod");
assert(nm->contains(ic_call), "must be in nmethod");
// Search for the ic_call at the given address.
RelocIterator iter(nm, ic_call, ic_call+1);
bool ret = iter.next();
assert(ret == true, "relocInfo must exist at this address");
assert(iter.addr() == ic_call, "must find ic_call");
if (iter.type() == relocInfo::virtual_call_type) {
virtual_call_Relocation* r = iter.virtual_call_reloc();
_is_optimized = false;
_value = nativeMovConstReg_at(r->cached_value());
} else {
assert(iter.type() == relocInfo::opt_virtual_call_type, "must be a virtual call");
_is_optimized = true;
_value = NULL;
}
}
// ----------------------------------------------------------------------------
void CompiledStaticCall::emit_to_interp_stub(CodeBuffer &cbuf) {
ShouldNotReachHere(); // Only needed for COMPILER2.
}
int CompiledStaticCall::to_interp_stub_size() {
ShouldNotReachHere(); // Only needed for COMPILER2.
return 0;
}
// Relocation entries for call stub, compiled java to interpreter.
int CompiledStaticCall::reloc_to_interp_stub() {
ShouldNotReachHere(); // Only needed for COMPILER2.
return 0;
}
void CompiledStaticCall::set_to_interpreted(methodHandle callee, address entry) {
ShouldNotReachHere(); // Only needed for COMPILER2.
}
void CompiledStaticCall::set_stub_to_clean(static_stub_Relocation* static_stub) {
ShouldNotReachHere(); // Only needed for COMPILER2.
}
//-----------------------------------------------------------------------------
// Non-product mode code.
#ifndef PRODUCT
void CompiledStaticCall::verify() {
ShouldNotReachHere(); // Only needed for COMPILER2.
}
#endif // !PRODUCT

View File

@ -1230,10 +1230,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;
@ -2080,9 +2076,10 @@ static char* anon_mmap(char* requested_addr, size_t bytes, bool fixed) {
flags |= MAP_FIXED;
}
// Map uncommitted pages PROT_READ and PROT_WRITE, change access
// to PROT_EXEC if executable when we commit the page.
addr = (char*)::mmap(requested_addr, bytes, PROT_READ|PROT_WRITE,
// Map reserved/uncommitted pages PROT_NONE so we fail early if we
// touch an uncommitted page. Otherwise, the read/write might
// succeed if we have enough swap space to back the physical page.
addr = (char*)::mmap(requested_addr, bytes, PROT_NONE,
flags, -1, 0);
if (addr != MAP_FAILED) {
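A minimal standalone POSIX sketch (not HotSpot code) of the reserve-then-commit behaviour this change moves to: the reservation is mapped PROT_NONE so any touch of an uncommitted page faults immediately, and access is granted only when a page is actually committed:
#include <sys/mman.h>
#include <unistd.h>

int main() {
  size_t page = (size_t)sysconf(_SC_PAGESIZE);
  size_t reserved = 64 * page;
  // Reserve address space with no access rights: a stray read or write
  // of this region now faults instead of silently consuming swap.
  void* base = mmap(NULL, reserved, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (base == MAP_FAILED) return 1;
  // Commit the first page: grant read/write only when it is needed.
  if (mprotect(base, page, PROT_READ | PROT_WRITE) != 0) return 1;
  ((char*)base)[0] = 1;   // now legal
  munmap(base, reserved);
  return 0;
}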

View File

@ -119,6 +119,7 @@ int (*os::Linux::_pthread_getcpuclockid)(pthread_t, clockid_t *) = NULL;
Mutex* os::Linux::_createThread_lock = NULL;
pthread_t os::Linux::_main_thread;
int os::Linux::_page_size = -1;
const int os::Linux::_vm_default_page_size = (8 * K);
bool os::Linux::_is_floating_stack = false;
bool os::Linux::_is_NPTL = false;
bool os::Linux::_supports_fast_thread_cpu_time = false;
@ -1662,10 +1663,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;
@ -2906,9 +2903,10 @@ static char* anon_mmap(char* requested_addr, size_t bytes, bool fixed) {
flags |= MAP_FIXED;
}
// Map uncommitted pages PROT_READ and PROT_WRITE, change access
// to PROT_EXEC if executable when we commit the page.
addr = (char*)::mmap(requested_addr, bytes, PROT_READ|PROT_WRITE,
// Map reserved/uncommitted pages PROT_NONE so we fail early if we
// touch an uncommitted page. Otherwise, the read/write might
// succeed if we have enough swap space to back the physical page.
addr = (char*)::mmap(requested_addr, bytes, PROT_NONE,
flags, -1, 0);
if (addr != MAP_FAILED) {
@ -4249,6 +4247,15 @@ void os::init(void) {
Linux::clock_init();
initial_time_count = os::elapsed_counter();
pthread_mutex_init(&dl_mutex, NULL);
// If the pagesize of the VM is greater than 8K determine the appropriate
// number of initial guard pages. The user can change this with the
// command line arguments, if needed.
if (vm_page_size() > (int)Linux::vm_default_page_size()) {
StackYellowPages = 1;
StackRedPages = 1;
StackShadowPages = round_to((StackShadowPages*Linux::vm_default_page_size()), vm_page_size()) / vm_page_size();
}
}
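A worked example of the scaling above, with illustrative numbers: the guard-zone defaults are expressed in 8K (vm_default_page_size) units, so on a kernel whose real page size is 64K a StackShadowPages value of 20 covers 20 * 8K = 160K of stack; round_to(160K, 64K) = 192K, and 192K / 64K leaves StackShadowPages at 3 real pages, while StackYellowPages and StackRedPages are clamped to one (now much larger) page each.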
// To install functions for atexit system call
@ -4302,8 +4309,8 @@ jint os::init_2(void)
// Add in 2*BytesPerWord times page size to account for VM stack during
// class initialization depending on 32 or 64 bit VM.
os::Linux::min_stack_allowed = MAX2(os::Linux::min_stack_allowed,
(size_t)(StackYellowPages+StackRedPages+StackShadowPages+
2*BytesPerWord COMPILER2_PRESENT(+1)) * Linux::page_size());
(size_t)(StackYellowPages+StackRedPages+StackShadowPages) * Linux::page_size() +
(2*BytesPerWord COMPILER2_PRESENT(+1)) * Linux::vm_default_page_size());
size_t threadStackSizeInBytes = ThreadStackSize * K;
if (threadStackSizeInBytes != 0 &&

View File

@ -70,6 +70,7 @@ class Linux {
static pthread_t _main_thread;
static Mutex* _createThread_lock;
static int _page_size;
static const int _vm_default_page_size;
static julong available_memory();
static julong physical_memory() { return _physical_memory; }
@ -116,6 +117,8 @@ class Linux {
static int page_size(void) { return _page_size; }
static void set_page_size(int val) { _page_size = val; }
static int vm_default_page_size(void) { return _vm_default_page_size; }
static address ucontext_get_pc(ucontext_t* uc);
static intptr_t* ucontext_get_sp(ucontext_t* uc);
static intptr_t* ucontext_get_fp(ucontext_t* uc);

View File

@ -251,3 +251,11 @@ bool os::has_allocatable_memory_limit(julong* limit) {
return true;
#endif
}
const char* os::get_current_directory(char *buf, size_t buflen) {
return getcwd(buf, buflen);
}
FILE* os::open(int fd, const char* mode) {
return ::fdopen(fd, mode);
}

View File

@ -824,7 +824,7 @@ void os::init_system_properties_values() {
// allocate new buffer and initialize
info = (Dl_serinfo*)malloc(_info.dls_size);
if (info == NULL) {
vm_exit_out_of_memory(_info.dls_size,
vm_exit_out_of_memory(_info.dls_size, OOM_MALLOC_ERROR,
"init_system_properties_values info");
}
info->dls_size = _info.dls_size;
@ -866,7 +866,7 @@ void os::init_system_properties_values() {
common_path = malloc(bufsize);
if (common_path == NULL) {
free(info);
vm_exit_out_of_memory(bufsize,
vm_exit_out_of_memory(bufsize, OOM_MALLOC_ERROR,
"init_system_properties_values common_path");
}
sprintf(common_path, COMMON_DIR "/lib/%s", cpu_arch);
@ -879,7 +879,7 @@ void os::init_system_properties_values() {
if (library_path == NULL) {
free(info);
free(common_path);
vm_exit_out_of_memory(bufsize,
vm_exit_out_of_memory(bufsize, OOM_MALLOC_ERROR,
"init_system_properties_values library_path");
}
library_path[0] = '\0';
@ -1623,7 +1623,8 @@ void os::thread_local_storage_at_put(int index, void* value) {
// %%% this is used only in threadLocalStorage.cpp
if (thr_setspecific((thread_key_t)index, value)) {
if (errno == ENOMEM) {
vm_exit_out_of_memory(SMALLINT, "thr_setspecific: out of swap space");
vm_exit_out_of_memory(SMALLINT, OOM_MALLOC_ERROR,
"thr_setspecific: out of swap space");
} else {
fatal(err_msg("os::thread_local_storage_at_put: thr_setspecific failed "
"(%s)", strerror(errno)));
@ -1915,10 +1916,6 @@ bool os::dll_build_name(char* buffer, size_t buflen,
return retval;
}
const char* os::get_current_directory(char *buf, int buflen) {
return getcwd(buf, buflen);
}
// check if addr is inside libjvm.so
bool os::address_is_in_vm(address addr) {
static address libjvm_base_addr;

View File

@ -1221,8 +1221,10 @@ bool os::dll_build_name(char *buffer, size_t buflen,
// Needs to be in os specific directory because windows requires another
// header file <direct.h>
const char* os::get_current_directory(char *buf, int buflen) {
return _getcwd(buf, buflen);
const char* os::get_current_directory(char *buf, size_t buflen) {
int n = static_cast<int>(buflen);
if (buflen > INT_MAX) n = INT_MAX;
return _getcwd(buf, n);
}
//-----------------------------------------------------------
@ -4098,6 +4100,10 @@ int os::open(const char *path, int oflag, int mode) {
return ::open(pathbuf, oflag | O_BINARY | O_NOINHERIT, mode);
}
FILE* os::open(int fd, const char* mode) {
return ::_fdopen(fd, mode);
}
// Is a (classpath) directory empty?
bool os::dir_is_empty(const char* path) {
WIN32_FIND_DATA fd;

View File

@ -178,7 +178,7 @@ static void current_stack_region(address* bottom, size_t* size) {
// JVM needs to know exact stack location, abort if it fails
if (rslt != 0) {
if (rslt == ENOMEM) {
vm_exit_out_of_memory(0, "pthread_getattr_np");
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "pthread_getattr_np");
} else {
fatal(err_msg("pthread_getattr_np failed with errno = %d", rslt));
}

View File

@ -710,7 +710,7 @@ static void current_stack_region(address * bottom, size_t * size) {
// JVM needs to know exact stack location, abort if it fails
if (rslt != 0) {
if (rslt == ENOMEM) {
vm_exit_out_of_memory(0, "pthread_getattr_np");
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "pthread_getattr_np");
} else {
fatal(err_msg("pthread_getattr_np failed with errno = %d", rslt));
}

View File

@ -313,7 +313,7 @@ static void current_stack_region(address *bottom, size_t *size) {
int res = pthread_getattr_np(pthread_self(), &attr);
if (res != 0) {
if (res == ENOMEM) {
vm_exit_out_of_memory(0, "pthread_getattr_np");
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "pthread_getattr_np");
}
else {
fatal(err_msg("pthread_getattr_np failed with errno = %d", res));

View File

@ -591,7 +591,7 @@ JVM_handle_solaris_signal(int sig, siginfo_t* info, void* ucVoid,
// on the thread stack, which could get a mapping error when touched.
address addr = (address) info->si_addr;
if (sig == SIGBUS && info->si_code == BUS_OBJERR && info->si_errno == ENOMEM) {
vm_exit_out_of_memory(0, "Out of swap space to map in thread stack.");
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "Out of swap space to map in thread stack.");
}
VMError err(t, sig, pc, info, ucVoid);

View File

@ -745,7 +745,7 @@ JVM_handle_solaris_signal(int sig, siginfo_t* info, void* ucVoid,
// on the thread stack, which could get a mapping error when touched.
address addr = (address) info->si_addr;
if (sig == SIGBUS && info->si_code == BUS_OBJERR && info->si_errno == ENOMEM) {
vm_exit_out_of_memory(0, "Out of swap space to map in thread stack.");
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, "Out of swap space to map in thread stack.");
}
VMError err(t, sig, pc, info, ucVoid);

View File

@ -213,6 +213,7 @@ int main(int argc, char *argv[])
AD.addInclude(AD._CPP_file, "adfiles", get_basename(AD._HPP_file._name));
AD.addInclude(AD._CPP_file, "memory/allocation.inline.hpp");
AD.addInclude(AD._CPP_file, "asm/macroAssembler.inline.hpp");
AD.addInclude(AD._CPP_file, "code/compiledIC.hpp");
AD.addInclude(AD._CPP_file, "code/vmreg.hpp");
AD.addInclude(AD._CPP_file, "gc_interface/collectedHeap.inline.hpp");
AD.addInclude(AD._CPP_file, "oops/compiledICHolder.hpp");

View File

@ -44,7 +44,7 @@ AbstractAssembler::AbstractAssembler(CodeBuffer* code) {
CodeSection* cs = code->insts();
cs->clear_mark(); // new assembler kills old mark
if (cs->start() == NULL) {
vm_exit_out_of_memory(0, err_msg("CodeCache: no room for %s",
vm_exit_out_of_memory(0, OOM_MMAP_ERROR, err_msg("CodeCache: no room for %s",
code->name()));
}
_code_section = cs;

View File

@ -483,7 +483,8 @@ ciKlass* ciEnv::get_klass_by_index_impl(constantPoolHandle cpool,
{
// We have to lock the cpool to keep the oop from being resolved
// while we are accessing it.
MonitorLockerEx ml(cpool->lock());
oop cplock = cpool->lock();
ObjectLocker ol(cplock, THREAD, cplock != NULL);
constantTag tag = cpool->tag_at(index);
if (tag.is_klass()) {
// The klass has been inserted into the constant pool
@ -1149,23 +1150,9 @@ void ciEnv::record_out_of_memory_failure() {
record_method_not_compilable("out of memory");
}
fileStream* ciEnv::_replay_data_stream = NULL;
void ciEnv::dump_replay_data() {
void ciEnv::dump_replay_data(outputStream* out) {
VM_ENTRY_MARK;
MutexLocker ml(Compile_lock);
if (_replay_data_stream == NULL) {
_replay_data_stream = new (ResourceObj::C_HEAP, mtCompiler) fileStream(ReplayDataFile);
if (_replay_data_stream == NULL) {
fatal(err_msg("Can't open %s for replay data", ReplayDataFile));
}
}
dump_replay_data(_replay_data_stream);
}
void ciEnv::dump_replay_data(outputStream* out) {
ASSERT_IN_VM;
ResourceMark rm;
#if INCLUDE_JVMTI
out->print_cr("JvmtiExport can_access_local_variables %d", _jvmti_can_access_local_variables);
@ -1178,13 +1165,15 @@ void ciEnv::dump_replay_data(outputStream* out) {
for (int i = 0; i < objects->length(); i++) {
objects->at(i)->dump_replay_data(out);
}
Method* method = task()->method();
int entry_bci = task()->osr_bci();
CompileTask* task = this->task();
Method* method = task->method();
int entry_bci = task->osr_bci();
int comp_level = task->comp_level();
// Klass holder = method->method_holder();
out->print_cr("compile %s %s %s %d",
out->print_cr("compile %s %s %s %d %d",
method->klass_name()->as_quoted_ascii(),
method->name()->as_quoted_ascii(),
method->signature()->as_quoted_ascii(),
entry_bci);
entry_bci, comp_level);
out->flush();
}

View File

@ -46,8 +46,6 @@ class ciEnv : StackObj {
friend class CompileBroker;
friend class Dependencies; // for get_object, during logging
static fileStream* _replay_data_stream;
private:
Arena* _arena; // Alias for _ciEnv_arena except in init_shared_objects()
Arena _ciEnv_arena;
@ -451,10 +449,6 @@ public:
// RedefineClasses support
void metadata_do(void f(Metadata*)) { _factory->metadata_do(f); }
// Dump the compilation replay data for this ciEnv to
// ReplayDataFile, creating the file if needed.
void dump_replay_data();
// Dump the compilation replay data for the ciEnv to the stream.
void dump_replay_data(outputStream* out);
};

View File

@ -196,7 +196,6 @@ class ciMethod : public ciMetadata {
// Analysis and profiling.
//
// Usage note: liveness_at_bci and init_vars should be wrapped in ResourceMarks.
bool uses_monitors() const { return _uses_monitors; } // this one should go away, it has a misleading name
bool has_monitor_bytecodes() const { return _uses_monitors; }
bool has_balanced_monitors();

View File

@ -89,7 +89,7 @@ class CompileReplay : public StackObj {
loader = Handle(thread, SystemDictionary::java_system_loader());
stream = fopen(filename, "rt");
if (stream == NULL) {
fprintf(stderr, "Can't open replay file %s\n", filename);
fprintf(stderr, "ERROR: Can't open replay file %s\n", filename);
}
buffer_length = 32;
buffer = NEW_RESOURCE_ARRAY(char, buffer_length);
@ -327,7 +327,6 @@ class CompileReplay : public StackObj {
if (had_error()) {
tty->print_cr("Error while parsing line %d: %s\n", line_no, _error_message);
tty->print_cr("%s", buffer);
assert(false, "error");
return;
}
pos = 0;
@ -370,11 +369,47 @@ class CompileReplay : public StackObj {
}
}
// compile <klass> <name> <signature> <entry_bci>
// validation of comp_level
bool is_valid_comp_level(int comp_level) {
const int msg_len = 256;
char* msg = NULL;
if (!is_compile(comp_level)) {
msg = NEW_RESOURCE_ARRAY(char, msg_len);
jio_snprintf(msg, msg_len, "%d isn't compilation level", comp_level);
} else if (!TieredCompilation && (comp_level != CompLevel_highest_tier)) {
msg = NEW_RESOURCE_ARRAY(char, msg_len);
switch (comp_level) {
case CompLevel_simple:
jio_snprintf(msg, msg_len, "compilation level %d requires Client VM or TieredCompilation", comp_level);
break;
case CompLevel_full_optimization:
jio_snprintf(msg, msg_len, "compilation level %d requires Server VM", comp_level);
break;
default:
jio_snprintf(msg, msg_len, "compilation level %d requires TieredCompilation", comp_level);
}
}
if (msg != NULL) {
report_error(msg);
return false;
}
return true;
}
// compile <klass> <name> <signature> <entry_bci> <comp_level>
void process_compile(TRAPS) {
// methodHandle method;
Method* method = parse_method(CHECK);
int entry_bci = parse_int("entry_bci");
const char* comp_level_label = "comp_level";
int comp_level = parse_int(comp_level_label);
// old version w/o comp_level
if (had_error() && (error_message() == comp_level_label)) {
comp_level = CompLevel_full_optimization;
}
if (!is_valid_comp_level(comp_level)) {
return;
}
Klass* k = method->method_holder();
((InstanceKlass*)k)->initialize(THREAD);
if (HAS_PENDING_EXCEPTION) {
@ -389,12 +424,12 @@ class CompileReplay : public StackObj {
}
}
// Make sure the existence of a prior compile doesn't stop this one
nmethod* nm = (entry_bci != InvocationEntryBci) ? method->lookup_osr_nmethod_for(entry_bci, CompLevel_full_optimization, true) : method->code();
nmethod* nm = (entry_bci != InvocationEntryBci) ? method->lookup_osr_nmethod_for(entry_bci, comp_level, true) : method->code();
if (nm != NULL) {
nm->make_not_entrant();
}
replay_state = this;
CompileBroker::compile_method(method, entry_bci, CompLevel_full_optimization,
CompileBroker::compile_method(method, entry_bci, comp_level,
methodHandle(), 0, "replay", THREAD);
replay_state = NULL;
reset();
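For backward compatibility, a replay file produced before this change (four fields, no comp_level) still parses: the comp_level parse fails, the failure is recognised by its "comp_level" label, and the level falls back to CompLevel_full_optimization. Two illustrative records (class and method names made up):
compile java/util/HashMap size ()I -1        (old format: level defaults to full optimization)
compile java/util/HashMap size ()I -1 1      (new format: CompLevel_simple, accepted only with the Client VM or TieredCompilation)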
@ -551,7 +586,7 @@ class CompileReplay : public StackObj {
if (parsed_two_word == i) continue;
default:
ShouldNotReachHere();
fatal(err_msg_res("Unexpected tag: %d", cp->tag_at(i).value()));
break;
}
@ -819,6 +854,11 @@ int ciReplay::replay_impl(TRAPS) {
ReplaySuppressInitializers = 1;
}
if (FLAG_IS_DEFAULT(ReplayDataFile)) {
tty->print_cr("ERROR: no compiler replay data file specified (use -XX:ReplayDataFile=replay_pid12345.txt).");
return 1;
}
// Load and parse the replay data
CompileReplay rp(ReplayDataFile, THREAD);
int exit_code = 0;

View File

@ -2027,7 +2027,6 @@ methodHandle ClassFileParser::parse_method(bool is_interface,
u2 method_parameters_length = 0;
u1* method_parameters_data = NULL;
bool method_parameters_seen = false;
bool method_parameters_four_byte_flags;
bool parsed_code_attribute = false;
bool parsed_checked_exceptions_attribute = false;
bool parsed_stackmap_attribute = false;
@ -2241,26 +2240,14 @@ methodHandle ClassFileParser::parse_method(bool is_interface,
}
method_parameters_seen = true;
method_parameters_length = cfs->get_u1_fast();
// Track the actual size (note: this is written for clarity; a
// decent compiler will CSE and constant-fold this into a single
// expression)
// Use the attribute length to figure out the size of flags
if (method_attribute_length == (method_parameters_length * 6u) + 1u) {
method_parameters_four_byte_flags = true;
} else if (method_attribute_length == (method_parameters_length * 4u) + 1u) {
method_parameters_four_byte_flags = false;
} else {
if (method_attribute_length != (method_parameters_length * 4u) + 1u) {
classfile_parse_error(
"Invalid MethodParameters method attribute length %u in class file",
method_attribute_length, CHECK_(nullHandle));
}
method_parameters_data = cfs->get_u1_buffer();
cfs->skip_u2_fast(method_parameters_length);
if (method_parameters_four_byte_flags) {
cfs->skip_u4_fast(method_parameters_length);
} else {
cfs->skip_u2_fast(method_parameters_length);
}
cfs->skip_u2_fast(method_parameters_length);
// ignore this attribute if it cannot be reflected
if (!SystemDictionary::Parameter_klass_loaded())
method_parameters_length = 0;
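The length check above follows directly from the MethodParameters attribute layout this change settles on (two-byte flags): the attribute body is a u1 parameters_count followed by parameters_count entries of {u2 name_index, u2 access_flags}, i.e. exactly 1 + 4 * parameters_count bytes. The removed 1 + 6 * parameters_count branch accepted an interim four-byte-flags encoding that is no longer recognised.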
@ -2423,13 +2410,8 @@ methodHandle ClassFileParser::parse_method(bool is_interface,
for (int i = 0; i < method_parameters_length; i++) {
elem[i].name_cp_index = Bytes::get_Java_u2(method_parameters_data);
method_parameters_data += 2;
if (method_parameters_four_byte_flags) {
elem[i].flags = Bytes::get_Java_u4(method_parameters_data);
method_parameters_data += 4;
} else {
elem[i].flags = Bytes::get_Java_u2(method_parameters_data);
method_parameters_data += 2;
}
elem[i].flags = Bytes::get_Java_u2(method_parameters_data);
method_parameters_data += 2;
}
}

View File

@ -304,7 +304,19 @@ class ClassFileParser VALUE_OBJ_CLASS_SPEC {
inline void assert_property(bool b, const char* msg, TRAPS) {
#ifdef ASSERT
if (!b) { fatal(msg); }
if (!b) {
ResourceMark rm(THREAD);
fatal(err_msg(msg, _class_name->as_C_string()));
}
#endif
}
inline void assert_property(bool b, const char* msg, int index, TRAPS) {
#ifdef ASSERT
if (!b) {
ResourceMark rm(THREAD);
fatal(err_msg(msg, index, _class_name->as_C_string()));
}
#endif
}
@ -312,7 +324,7 @@ class ClassFileParser VALUE_OBJ_CLASS_SPEC {
if (_need_verify) {
guarantee_property(property, msg, index, CHECK);
} else {
assert_property(property, msg, CHECK);
assert_property(property, msg, index, CHECK);
}
}

View File

@ -1345,9 +1345,10 @@ void ClassLoader::compile_the_world_in(char* name, Handle loader, TRAPS) {
tty->print_cr("CompileTheWorld (%d) : %s", _compile_the_world_class_counter, buffer);
// Preload all classes to get around uncommon traps
// Iterate over all methods in class
int comp_level = CompilationPolicy::policy()->initial_compile_level();
for (int n = 0; n < k->methods()->length(); n++) {
methodHandle m (THREAD, k->methods()->at(n));
if (CompilationPolicy::can_be_compiled(m)) {
if (CompilationPolicy::can_be_compiled(m, comp_level)) {
if (++_codecache_sweep_counter == CompileTheWorldSafepointInterval) {
// Give sweeper a chance to keep up with CTW
@ -1356,7 +1357,7 @@ void ClassLoader::compile_the_world_in(char* name, Handle loader, TRAPS) {
_codecache_sweep_counter = 0;
}
// Force compilation
CompileBroker::compile_method(m, InvocationEntryBci, CompilationPolicy::policy()->initial_compile_level(),
CompileBroker::compile_method(m, InvocationEntryBci, comp_level,
methodHandle(), 0, "CTW", THREAD);
if (HAS_PENDING_EXCEPTION) {
clear_pending_exception_if_not_oom(CHECK);

View File

@ -53,6 +53,7 @@
#include "classfile/metadataOnStackMark.hpp"
#include "classfile/systemDictionary.hpp"
#include "code/codeCache.hpp"
#include "memory/gcLocker.hpp"
#include "memory/metadataFactory.hpp"
#include "memory/metaspaceShared.hpp"
#include "memory/oopFactory.hpp"
@ -65,17 +66,19 @@
ClassLoaderData * ClassLoaderData::_the_null_class_loader_data = NULL;
ClassLoaderData::ClassLoaderData(Handle h_class_loader, bool is_anonymous) :
ClassLoaderData::ClassLoaderData(Handle h_class_loader, bool is_anonymous, Dependencies dependencies) :
_class_loader(h_class_loader()),
_is_anonymous(is_anonymous), _keep_alive(is_anonymous), // initially
_metaspace(NULL), _unloading(false), _klasses(NULL),
_claimed(0), _jmethod_ids(NULL), _handles(NULL), _deallocate_list(NULL),
_next(NULL), _dependencies(),
_next(NULL), _dependencies(dependencies),
_metaspace_lock(new Mutex(Monitor::leaf+1, "Metaspace allocation lock", true)) {
// empty
}
void ClassLoaderData::init_dependencies(TRAPS) {
assert(!Universe::is_fully_initialized(), "should only be called when initializing");
assert(is_the_null_class_loader_data(), "should only call this for the null class loader");
_dependencies.init(CHECK);
}
@ -277,6 +280,9 @@ void ClassLoaderData::remove_class(Klass* scratch_class) {
void ClassLoaderData::unload() {
_unloading = true;
// Tell serviceability tools these classes are unloading
classes_do(InstanceKlass::notify_unload_class);
if (TraceClassLoaderData) {
ResourceMark rm;
tty->print("[ClassLoaderData: unload loader data "PTR_FORMAT, this);
@ -300,6 +306,9 @@ bool ClassLoaderData::is_alive(BoolObjectClosure* is_alive_closure) const {
ClassLoaderData::~ClassLoaderData() {
// Release C heap structures for all the classes.
classes_do(InstanceKlass::release_C_heap_structures);
Metaspace *m = _metaspace;
if (m != NULL) {
_metaspace = NULL;
@ -423,7 +432,7 @@ void ClassLoaderData::free_deallocate_list() {
// These anonymous class loaders are to contain classes used for JSR292
ClassLoaderData* ClassLoaderData::anonymous_class_loader_data(oop loader, TRAPS) {
// Add a new class loader data to the graph.
return ClassLoaderDataGraph::add(NULL, loader, CHECK_NULL);
return ClassLoaderDataGraph::add(loader, true, CHECK_NULL);
}
const char* ClassLoaderData::loader_name() {
@ -495,19 +504,22 @@ ClassLoaderData* ClassLoaderDataGraph::_head = NULL;
ClassLoaderData* ClassLoaderDataGraph::_unloading = NULL;
ClassLoaderData* ClassLoaderDataGraph::_saved_head = NULL;
// Add a new class loader data node to the list. Assign the newly created
// ClassLoaderData into the java/lang/ClassLoader object as a hidden field
ClassLoaderData* ClassLoaderDataGraph::add(ClassLoaderData** cld_addr, Handle loader, TRAPS) {
// Not assigned a class loader data yet.
// Create one.
ClassLoaderData* *list_head = &_head;
ClassLoaderData* next = _head;
ClassLoaderData* ClassLoaderDataGraph::add(Handle loader, bool is_anonymous, TRAPS) {
// We need to allocate all the oops for the ClassLoaderData before allocating the
// actual ClassLoaderData object.
ClassLoaderData::Dependencies dependencies(CHECK_NULL);
bool is_anonymous = (cld_addr == NULL);
ClassLoaderData* cld = new ClassLoaderData(loader, is_anonymous);
No_Safepoint_Verifier no_safepoints; // we mustn't GC until we've installed the
// ClassLoaderData in the graph since the CLD
// contains unhandled oops
if (cld_addr != NULL) {
ClassLoaderData* cld = new ClassLoaderData(loader, is_anonymous, dependencies);
if (!is_anonymous) {
ClassLoaderData** cld_addr = java_lang_ClassLoader::loader_data_addr(loader());
// First, Atomically set it
ClassLoaderData* old = (ClassLoaderData*) Atomic::cmpxchg_ptr(cld, cld_addr, NULL);
if (old != NULL) {
@ -519,6 +531,9 @@ ClassLoaderData* ClassLoaderDataGraph::add(ClassLoaderData** cld_addr, Handle lo
// We won the race, and therefore the task of adding the data to the list of
// class loader data
ClassLoaderData** list_head = &_head;
ClassLoaderData* next = _head;
do {
cld->set_next(next);
ClassLoaderData* exchanged = (ClassLoaderData*)Atomic::cmpxchg_ptr(cld, list_head, next);
@ -531,10 +546,6 @@ ClassLoaderData* ClassLoaderDataGraph::add(ClassLoaderData** cld_addr, Handle lo
cld->loader_name());
tty->print_cr("]");
}
// Create dependencies after the CLD is added to the list. Otherwise,
// the GC will not find the CLD and the _class_loader field will
// not be updated.
cld->init_dependencies(CHECK_NULL);
return cld;
}
next = exchanged;
@ -665,6 +676,8 @@ bool ClassLoaderDataGraph::do_unloading(BoolObjectClosure* is_alive_closure) {
dead->unload();
data = data->next();
// Remove from loader list.
// This class loader data will no longer be found
// in the ClassLoaderDataGraph.
if (prev != NULL) {
prev->set_next(data);
} else {
@ -686,6 +699,7 @@ void ClassLoaderDataGraph::purge() {
next = purge_me->next();
delete purge_me;
}
Metaspace::purge();
}
// CDS support

View File

@ -62,7 +62,7 @@ class ClassLoaderDataGraph : public AllStatic {
// CMS support.
static ClassLoaderData* _saved_head;
static ClassLoaderData* add(ClassLoaderData** loader_data_addr, Handle class_loader, TRAPS);
static ClassLoaderData* add(Handle class_loader, bool anonymous, TRAPS);
public:
static ClassLoaderData* find_or_create(Handle class_loader, TRAPS);
static void purge();
@ -100,6 +100,9 @@ class ClassLoaderData : public CHeapObj<mtClass> {
Thread* THREAD);
public:
Dependencies() : _list_head(NULL) {}
Dependencies(TRAPS) : _list_head(NULL) {
init(CHECK);
}
void add(Handle dependency, TRAPS);
void init(TRAPS);
void oops_do(OopClosure* f);
@ -150,7 +153,7 @@ class ClassLoaderData : public CHeapObj<mtClass> {
void set_next(ClassLoaderData* next) { _next = next; }
ClassLoaderData* next() const { return _next; }
ClassLoaderData(Handle h_class_loader, bool is_anonymous);
ClassLoaderData(Handle h_class_loader, bool is_anonymous, Dependencies dependencies);
~ClassLoaderData();
void set_metaspace(Metaspace* m) { _metaspace = m; }
@ -190,7 +193,9 @@ class ClassLoaderData : public CHeapObj<mtClass> {
static void init_null_class_loader_data() {
assert(_the_null_class_loader_data == NULL, "cannot initialize twice");
assert(ClassLoaderDataGraph::_head == NULL, "cannot initialize twice");
_the_null_class_loader_data = new ClassLoaderData((oop)NULL, false);
// We explicitly initialize the Dependencies object at a later phase in the initialization
_the_null_class_loader_data = new ClassLoaderData((oop)NULL, false, Dependencies());
ClassLoaderDataGraph::_head = _the_null_class_loader_data;
assert(_the_null_class_loader_data->is_the_null_class_loader_data(), "Must be");
if (DumpSharedSpaces) {

View File

@ -43,10 +43,9 @@ inline ClassLoaderData *ClassLoaderDataGraph::find_or_create(Handle loader, TRAP
assert(loader() != NULL,"Must be a class loader");
// Gets the class loader data out of the java/lang/ClassLoader object, if non-null
// it's already in the loader_data, so no need to add
ClassLoaderData** loader_data_addr = java_lang_ClassLoader::loader_data_addr(loader());
ClassLoaderData* loader_data_id = *loader_data_addr;
if (loader_data_id) {
return loader_data_id;
ClassLoaderData* loader_data= java_lang_ClassLoader::loader_data(loader());
if (loader_data) {
return loader_data;
}
return ClassLoaderDataGraph::add(loader_data_addr, loader, THREAD);
return ClassLoaderDataGraph::add(loader, false, THREAD);
}

View File

@ -27,7 +27,6 @@
#include "classfile/systemDictionary.hpp"
#include "oops/oop.inline.hpp"
#include "prims/jvmtiRedefineClassesTrace.hpp"
#include "services/classLoadingService.hpp"
#include "utilities/hashtable.inline.hpp"
@ -156,19 +155,7 @@ bool Dictionary::do_unloading() {
if (k_def_class_loader_data == loader_data) {
// This is the defining entry, so the referred class is about
// to be unloaded.
// Notify the debugger and clean up the class.
class_was_unloaded = true;
// notify the debugger
if (JvmtiExport::should_post_class_unload()) {
JvmtiExport::post_class_unload(ik);
}
// notify ClassLoadingService of class unload
ClassLoadingService::notify_class_unloaded(ik);
// Clean up C heap
ik->release_C_heap_structures();
ik->constants()->release_C_heap_structures();
}
// Also remove this system dictionary entry.
purge_entry = true;

View File

@ -315,14 +315,18 @@ Handle java_lang_String::char_converter(Handle java_string, jchar from_char, jch
return string;
}
jchar* java_lang_String::as_unicode_string(oop java_string, int& length) {
jchar* java_lang_String::as_unicode_string(oop java_string, int& length, TRAPS) {
typeArrayOop value = java_lang_String::value(java_string);
int offset = java_lang_String::offset(java_string);
length = java_lang_String::length(java_string);
jchar* result = NEW_RESOURCE_ARRAY(jchar, length);
for (int index = 0; index < length; index++) {
result[index] = value->char_at(index + offset);
jchar* result = NEW_RESOURCE_ARRAY_RETURN_NULL(jchar, length);
if (result != NULL) {
for (int index = 0; index < length; index++) {
result[index] = value->char_at(index + offset);
}
} else {
THROW_MSG_0(vmSymbols::java_lang_OutOfMemoryError(), "could not allocate Unicode string");
}
return result;
}

View File

@ -153,7 +153,7 @@ class java_lang_String : AllStatic {
static char* as_utf8_string(oop java_string, char* buf, int buflen);
static char* as_utf8_string(oop java_string, int start, int len);
static char* as_platform_dependent_str(Handle java_string, TRAPS);
static jchar* as_unicode_string(oop java_string, int& length);
static jchar* as_unicode_string(oop java_string, int& length, TRAPS);
// produce an ascii string with all other values quoted using \u####
static char* as_quoted_ascii(oop java_string);

View File

@ -735,7 +735,7 @@ oop StringTable::intern(oop string, TRAPS)
ResourceMark rm(THREAD);
int length;
Handle h_string (THREAD, string);
jchar* chars = java_lang_String::as_unicode_string(string, length);
jchar* chars = java_lang_String::as_unicode_string(string, length, CHECK_NULL);
oop result = intern(h_string, chars, length, CHECK_NULL);
return result;
}

View File

@ -463,8 +463,10 @@ void CodeCache::verify_perm_nmethods(CodeBlobClosure* f_or_null) {
}
#endif //PRODUCT
nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
/**
* Remove and return nmethod from the saved code list in order to reanimate it.
*/
nmethod* CodeCache::reanimate_saved_code(Method* m) {
MutexLockerEx mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
nmethod* saved = _saved_nmethods;
nmethod* prev = NULL;
@ -479,7 +481,7 @@ nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
saved->set_speculatively_disconnected(false);
saved->set_saved_nmethod_link(NULL);
if (PrintMethodFlushing) {
saved->print_on(tty, " ### nmethod is reconnected\n");
saved->print_on(tty, " ### nmethod is reconnected");
}
if (LogCompilation && (xtty != NULL)) {
ttyLocker ttyl;
@ -496,6 +498,9 @@ nmethod* CodeCache::find_and_remove_saved_code(Method* m) {
return NULL;
}
/**
* Remove nmethod from the saved code list in order to discard it permanently
*/
void CodeCache::remove_saved_code(nmethod* nm) {
// For conc swpr this will be called with CodeCache_lock taken by caller
assert_locked_or_safepoint(CodeCache_lock);
@ -529,7 +534,7 @@ void CodeCache::speculatively_disconnect(nmethod* nm) {
nm->set_saved_nmethod_link(_saved_nmethods);
_saved_nmethods = nm;
if (PrintMethodFlushing) {
nm->print_on(tty, " ### nmethod is speculatively disconnected\n");
nm->print_on(tty, " ### nmethod is speculatively disconnected");
}
if (LogCompilation && (xtty != NULL)) {
ttyLocker ttyl;

View File

@ -57,7 +57,7 @@ class CodeCache : AllStatic {
static int _number_of_nmethods_with_dependencies;
static bool _needs_cache_clean;
static nmethod* _scavenge_root_nmethods; // linked via nm->scavenge_root_link()
static nmethod* _saved_nmethods; // linked via nm->saved_nmethod_look()
static nmethod* _saved_nmethods; // Linked list of speculatively disconnected nmethods.
static void verify_if_often() PRODUCT_RETURN;
@ -168,7 +168,7 @@ class CodeCache : AllStatic {
static void set_needs_cache_clean(bool v) { _needs_cache_clean = v; }
static void clear_inline_caches(); // clear all inline caches
static nmethod* find_and_remove_saved_code(Method* m);
static nmethod* reanimate_saved_code(Method* m);
static void remove_saved_code(nmethod* nm);
static void speculatively_disconnect(nmethod* nm);

View File

@ -45,25 +45,6 @@
// Every time a compiled IC is changed or its type is being accessed,
// either the CompiledIC_lock must be set or we must be at a safe point.
// Release the CompiledICHolder* associated with this call site if there is one.
void CompiledIC::cleanup_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
if (is_icholder_entry(call->destination())) {
NativeMovConstReg* value = nativeMovConstReg_at(call_site->cached_value());
InlineCacheBuffer::queue_for_release((CompiledICHolder*)value->data());
}
}
bool CompiledIC::is_icholder_call_site(virtual_call_Relocation* call_site) {
// This call site might have become stale so inspect it carefully.
NativeCall* call = nativeCall_at(call_site->addr());
return is_icholder_entry(call->destination());
}
//-----------------------------------------------------------------------------
// Low-level access to an inline cache. Private, since they might not be
// MT-safe to use.
@ -488,33 +469,6 @@ bool CompiledIC::is_icholder_entry(address entry) {
return (cb != NULL && cb->is_adapter_blob());
}
CompiledIC::CompiledIC(nmethod* nm, NativeCall* call)
: _ic_call(call)
{
address ic_call = call->instruction_address();
assert(ic_call != NULL, "ic_call address must be set");
assert(nm != NULL, "must pass nmethod");
assert(nm->contains(ic_call), "must be in nmethod");
// search for the ic_call at the given address
RelocIterator iter(nm, ic_call, ic_call+1);
bool ret = iter.next();
assert(ret == true, "relocInfo must exist at this address");
assert(iter.addr() == ic_call, "must find ic_call");
if (iter.type() == relocInfo::virtual_call_type) {
virtual_call_Relocation* r = iter.virtual_call_reloc();
_is_optimized = false;
_value = nativeMovConstReg_at(r->cached_value());
} else {
assert(iter.type() == relocInfo::opt_virtual_call_type, "must be a virtual call");
_is_optimized = true;
_value = NULL;
}
}
// ----------------------------------------------------------------------------
void CompiledStaticCall::set_to_clean() {
@ -549,33 +503,6 @@ bool CompiledStaticCall::is_call_to_interpreted() const {
return nm->stub_contains(destination());
}
void CompiledStaticCall::set_to_interpreted(methodHandle callee, address entry) {
address stub=find_stub();
guarantee(stub != NULL, "stub not found");
if (TraceICs) {
ResourceMark rm;
tty->print_cr("CompiledStaticCall@" INTPTR_FORMAT ": set_to_interpreted %s",
instruction_address(),
callee->name_and_sig_as_C_string());
}
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub); // creation also verifies the object
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
assert(method_holder->data() == 0 || method_holder->data() == (intptr_t)callee(), "a) MT-unsafe modification of inline cache");
assert(jump->jump_destination() == (address)-1 || jump->jump_destination() == entry, "b) MT-unsafe modification of inline cache");
// Update stub
method_holder->set_data((intptr_t)callee());
jump->set_jump_destination(entry);
// Update jump to call
set_destination_mt_safe(stub);
}
void CompiledStaticCall::set(const StaticCallInfo& info) {
assert (CompiledIC_lock->is_locked() || SafepointSynchronize::is_at_safepoint(), "mt unsafe call");
MutexLockerEx pl(Patching_lock, Mutex::_no_safepoint_check_flag);
@ -618,19 +545,6 @@ void CompiledStaticCall::compute_entry(methodHandle m, StaticCallInfo& info) {
}
}
void CompiledStaticCall::set_stub_to_clean(static_stub_Relocation* static_stub) {
assert (CompiledIC_lock->is_locked() || SafepointSynchronize::is_at_safepoint(), "mt unsafe call");
// Reset stub
address stub = static_stub->addr();
assert(stub!=NULL, "stub not found");
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub); // creation also verifies the object
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
method_holder->set_data(0);
jump->set_jump_destination((address)-1);
}
address CompiledStaticCall::find_stub() {
// Find reloc. information containing this call-site
RelocIterator iter((nmethod*)NULL, instruction_address());
@ -668,19 +582,16 @@ void CompiledIC::verify() {
|| is_optimized() || is_megamorphic(), "sanity check");
}
void CompiledIC::print() {
print_compiled_ic();
tty->cr();
}
void CompiledIC::print_compiled_ic() {
tty->print("Inline cache at " INTPTR_FORMAT ", calling %s " INTPTR_FORMAT " cached_value " INTPTR_FORMAT,
instruction_address(), is_call_to_interpreted() ? "interpreted " : "", ic_destination(), is_optimized() ? NULL : cached_value());
}
void CompiledStaticCall::print() {
tty->print("static call at " INTPTR_FORMAT " -> ", instruction_address());
if (is_clean()) {
@ -693,21 +604,4 @@ void CompiledStaticCall::print() {
tty->cr();
}
void CompiledStaticCall::verify() {
// Verify call
NativeCall::verify();
if (os::is_MP()) {
verify_alignment();
}
// Verify stub
address stub = find_stub();
assert(stub != NULL, "no stub found for static call");
NativeMovConstReg* method_holder = nativeMovConstReg_at(stub); // creation also verifies the object
NativeJump* jump = nativeJump_at(method_holder->next_instruction_address());
// Verify state
assert(is_clean() || is_call_to_compiled() || is_call_to_interpreted(), "sanity check");
}
#endif
#endif // !PRODUCT

View File

@ -304,6 +304,11 @@ class CompiledStaticCall: public NativeCall {
friend CompiledStaticCall* compiledStaticCall_at(address native_call);
friend CompiledStaticCall* compiledStaticCall_at(Relocation* call_site);
// Code
static void emit_to_interp_stub(CodeBuffer &cbuf);
static int to_interp_stub_size();
static int reloc_to_interp_stub();
// State
bool is_clean() const;
bool is_call_to_compiled() const;

View File

@ -67,7 +67,7 @@ StubQueue::StubQueue(StubInterface* stub_interface, int buffer_size,
intptr_t size = round_to(buffer_size, 2*BytesPerWord);
BufferBlob* blob = BufferBlob::create(name, size);
if( blob == NULL) {
vm_exit_out_of_memory(size, err_msg("CodeCache: no room for %s", name));
vm_exit_out_of_memory(size, OOM_MALLOC_ERROR, err_msg("CodeCache: no room for %s", name));
}
_stub_interface = stub_interface;
_buffer_size = blob->content_size();

View File

@ -60,7 +60,7 @@ void* VtableStub::operator new(size_t size, int code_size) {
const int bytes = chunk_factor * real_size + pd_code_alignment();
BufferBlob* blob = BufferBlob::create("vtable chunks", bytes);
if (blob == NULL) {
vm_exit_out_of_memory(bytes, "CodeCache: no room for vtable chunks");
vm_exit_out_of_memory(bytes, OOM_MALLOC_ERROR, "CodeCache: no room for vtable chunks");
}
_chunk = blob->content_begin();
_chunk_end = _chunk + bytes;

View File

@ -65,7 +65,7 @@ HS_DTRACE_PROBE_DECL8(hotspot, method__compile__begin,
HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
char*, intptr_t, char*, intptr_t, char*, intptr_t, char*, intptr_t, bool);
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name) \
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
@ -77,8 +77,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
signature->bytes(), signature->utf8_length()); \
}
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, \
comp_name, success) \
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
@ -92,7 +91,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
#else /* USDT2 */
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name) \
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
@ -104,8 +103,7 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
(char *) signature->bytes(), signature->utf8_length()); \
}
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, \
comp_name, success) \
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success) \
{ \
Symbol* klass_name = (method)->klass_name(); \
Symbol* name = (method)->name(); \
@ -120,8 +118,8 @@ HS_DTRACE_PROBE_DECL9(hotspot, method__compile__end,
#else // ndef DTRACE_ENABLED
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler, method, comp_name)
#define DTRACE_METHOD_COMPILE_END_PROBE(compiler, method, comp_name, success)
#define DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, comp_name)
#define DTRACE_METHOD_COMPILE_END_PROBE(method, comp_name, success)
#endif // ndef DTRACE_ENABLED
@ -1229,7 +1227,7 @@ nmethod* CompileBroker::compile_method(methodHandle method, int osr_bci,
if (method->is_not_compilable(comp_level)) return NULL;
if (UseCodeCacheFlushing) {
nmethod* saved = CodeCache::find_and_remove_saved_code(method());
nmethod* saved = CodeCache::reanimate_saved_code(method());
if (saved != NULL) {
method->set_code(method, saved);
return saved;
@ -1288,9 +1286,9 @@ nmethod* CompileBroker::compile_method(methodHandle method, int osr_bci,
method->jmethod_id();
}
// If the compiler is shut off due to code cache flushing or otherwise,
// If the compiler is shut off due to code cache getting full
// fail out now so blocking compiles don't hang the java thread
if (!should_compile_new_jobs() || (UseCodeCacheFlushing && CodeCache::needs_flushing())) {
if (!should_compile_new_jobs()) {
CompilationPolicy::policy()->delay_compilation(method());
return NULL;
}
@ -1766,8 +1764,7 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
// Save information about this method in case of failure.
set_last_compile(thread, method, is_osr, task_level);
DTRACE_METHOD_COMPILE_BEGIN_PROBE(compiler(task_level), method,
compiler_name(task_level));
DTRACE_METHOD_COMPILE_BEGIN_PROBE(method, compiler_name(task_level));
}
// Allocate a new set of JNI handles.
@ -1842,13 +1839,14 @@ void CompileBroker::invoke_compiler_on_method(CompileTask* task) {
}
}
}
// simulate crash during compilation
assert(task->compile_id() != CICrashAt, "just as planned");
}
pop_jni_handle_block();
methodHandle method(thread, task->method());
DTRACE_METHOD_COMPILE_END_PROBE(compiler(task_level), method,
compiler_name(task_level), task->is_success());
DTRACE_METHOD_COMPILE_END_PROBE(method, compiler_name(task_level), task->is_success());
collect_statistics(thread, time, task);

View File

@ -2444,8 +2444,7 @@ void CMSCollector::collect_in_foreground(bool clear_all_soft_refs) {
// initial marking in checkpointRootsInitialWork has been completed
if (VerifyDuringGC &&
GenCollectedHeap::heap()->total_collections() >= VerifyGCStartAt) {
gclog_or_tty->print("Verify before initial mark: ");
Universe::verify();
Universe::verify("Verify before initial mark: ");
}
{
bool res = markFromRoots(false);
@ -2456,8 +2455,7 @@ void CMSCollector::collect_in_foreground(bool clear_all_soft_refs) {
case FinalMarking:
if (VerifyDuringGC &&
GenCollectedHeap::heap()->total_collections() >= VerifyGCStartAt) {
gclog_or_tty->print("Verify before re-mark: ");
Universe::verify();
Universe::verify("Verify before re-mark: ");
}
checkpointRootsFinal(false, clear_all_soft_refs,
init_mark_was_synchronous);
@ -2468,8 +2466,7 @@ void CMSCollector::collect_in_foreground(bool clear_all_soft_refs) {
// final marking in checkpointRootsFinal has been completed
if (VerifyDuringGC &&
GenCollectedHeap::heap()->total_collections() >= VerifyGCStartAt) {
gclog_or_tty->print("Verify before sweep: ");
Universe::verify();
Universe::verify("Verify before sweep: ");
}
sweep(false);
assert(_collectorState == Resizing, "Incorrect state");
@ -2484,8 +2481,7 @@ void CMSCollector::collect_in_foreground(bool clear_all_soft_refs) {
// The heap has been resized.
if (VerifyDuringGC &&
GenCollectedHeap::heap()->total_collections() >= VerifyGCStartAt) {
gclog_or_tty->print("Verify before reset: ");
Universe::verify();
Universe::verify("Verify before reset: ");
}
reset(false);
assert(_collectorState == Idling, "Collector state should "
@ -2853,8 +2849,8 @@ class VerifyMarkedClosure: public BitMapClosure {
bool failed() { return _failed; }
};
bool CMSCollector::verify_after_remark() {
gclog_or_tty->print(" [Verifying CMS Marking... ");
bool CMSCollector::verify_after_remark(bool silent) {
if (!silent) gclog_or_tty->print(" [Verifying CMS Marking... ");
MutexLockerEx ml(verification_mark_bm()->lock(), Mutex::_no_safepoint_check_flag);
static bool init = false;
@ -2915,7 +2911,7 @@ bool CMSCollector::verify_after_remark() {
warning("Unrecognized value %d for CMSRemarkVerifyVariant",
CMSRemarkVerifyVariant);
}
gclog_or_tty->print(" done] ");
if (!silent) gclog_or_tty->print(" done] ");
return true;
}
@ -3426,8 +3422,9 @@ bool ConcurrentMarkSweepGeneration::grow_to_reserved() {
void ConcurrentMarkSweepGeneration::shrink_free_list_by(size_t bytes) {
assert_locked_or_safepoint(Heap_lock);
assert_lock_strong(freelistLock());
// XXX Fix when compaction is implemented.
warning("Shrinking of CMS not yet implemented");
if (PrintGCDetails && Verbose) {
warning("Shrinking of CMS not yet implemented");
}
return;
}
@ -6010,26 +6007,23 @@ void CMSCollector::refProcessingWork(bool asynch, bool clear_all_soft_refs) {
&cmsDrainMarkingStackClosure,
NULL);
}
verify_work_stacks_empty();
}
// This is the point where the entire marking should have completed.
verify_work_stacks_empty();
if (should_unload_classes()) {
{
TraceTime t("class unloading", PrintGCDetails, false, gclog_or_tty);
// Follow SystemDictionary roots and unload classes
// Unload classes and purge the SystemDictionary.
bool purged_class = SystemDictionary::do_unloading(&_is_alive_closure);
// Follow CodeCache roots and unload any methods marked for unloading
// Unload nmethods.
CodeCache::do_unloading(&_is_alive_closure, purged_class);
cmsDrainMarkingStackClosure.do_void();
verify_work_stacks_empty();
// Update subklass/sibling/implementor links in KlassKlass descendants
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(&_is_alive_closure);
// Nothing should have been pushed onto the working stacks.
verify_work_stacks_empty();
}
{
@ -6043,11 +6037,10 @@ void CMSCollector::refProcessingWork(bool asynch, bool clear_all_soft_refs) {
// Need to check if we really scanned the StringTable.
if ((roots_scanning_options() & SharedHeap::SO_Strings) == 0) {
TraceTime t("scrub string table", PrintGCDetails, false, gclog_or_tty);
// Now clean up stale oops in StringTable
// Delete entries for dead interned strings.
StringTable::unlink(&_is_alive_closure);
}
verify_work_stacks_empty();
// Restore any preserved marks as a result of mark stack or
// work queue overflow
restore_preserved_marks_if_any(); // done single-threaded for now

View File

@ -990,7 +990,7 @@ class CMSCollector: public CHeapObj<mtGC> {
// debugging
void verify();
bool verify_after_remark();
bool verify_after_remark(bool silent = VerifySilently);
void verify_ok_to_terminate() const PRODUCT_RETURN;
void verify_work_stacks_empty() const PRODUCT_RETURN;
void verify_overflow_empty() const PRODUCT_RETURN;

View File

@ -1273,10 +1273,9 @@ void ConcurrentMark::checkpointRootsFinal(bool clear_all_soft_refs) {
if (VerifyDuringGC) {
HandleMark hm; // handle scope
gclog_or_tty->print(" VerifyDuringGC:(before)");
Universe::heap()->prepare_for_verify();
Universe::verify(/* silent */ false,
/* option */ VerifyOption_G1UsePrevMarking);
Universe::verify(VerifyOption_G1UsePrevMarking,
" VerifyDuringGC:(before)");
}
G1CollectorPolicy* g1p = g1h->g1_policy();
@ -1300,10 +1299,9 @@ void ConcurrentMark::checkpointRootsFinal(bool clear_all_soft_refs) {
// Verify the heap w.r.t. the previous marking bitmap.
if (VerifyDuringGC) {
HandleMark hm; // handle scope
gclog_or_tty->print(" VerifyDuringGC:(overflow)");
Universe::heap()->prepare_for_verify();
Universe::verify(/* silent */ false,
/* option */ VerifyOption_G1UsePrevMarking);
Universe::verify(VerifyOption_G1UsePrevMarking,
" VerifyDuringGC:(overflow)");
}
// Clear the marking state because we will be restarting
@ -1323,10 +1321,9 @@ void ConcurrentMark::checkpointRootsFinal(bool clear_all_soft_refs) {
if (VerifyDuringGC) {
HandleMark hm; // handle scope
gclog_or_tty->print(" VerifyDuringGC:(after)");
Universe::heap()->prepare_for_verify();
Universe::verify(/* silent */ false,
/* option */ VerifyOption_G1UseNextMarking);
Universe::verify(VerifyOption_G1UseNextMarking,
" VerifyDuringGC:(after)");
}
assert(!restart_for_overflow(), "sanity");
// Completely reset the marking state since marking completed
@ -1972,10 +1969,9 @@ void ConcurrentMark::cleanup() {
if (VerifyDuringGC) {
HandleMark hm; // handle scope
gclog_or_tty->print(" VerifyDuringGC:(before)");
Universe::heap()->prepare_for_verify();
Universe::verify(/* silent */ false,
/* option */ VerifyOption_G1UsePrevMarking);
Universe::verify(VerifyOption_G1UsePrevMarking,
" VerifyDuringGC:(before)");
}
G1CollectorPolicy* g1p = G1CollectedHeap::heap()->g1_policy();
@ -2127,10 +2123,9 @@ void ConcurrentMark::cleanup() {
if (VerifyDuringGC) {
HandleMark hm; // handle scope
gclog_or_tty->print(" VerifyDuringGC:(after)");
Universe::heap()->prepare_for_verify();
Universe::verify(/* silent */ false,
/* option */ VerifyOption_G1UsePrevMarking);
Universe::verify(VerifyOption_G1UsePrevMarking,
" VerifyDuringGC:(after)");
}
g1h->verify_region_sets_optional();

View File

@ -77,7 +77,7 @@ void G1BlockOffsetSharedArray::resize(size_t new_word_size) {
assert(delta > 0, "just checking");
if (!_vs.expand_by(delta)) {
// Do better than this for Merlin
vm_exit_out_of_memory(delta, "offset table expansion");
vm_exit_out_of_memory(delta, OOM_MMAP_ERROR, "offset table expansion");
}
assert(_vs.high() == high + delta, "invalid expansion");
// Initialization of the contents is left to the

View File

@ -1271,9 +1271,8 @@ double G1CollectedHeap::verify(bool guard, const char* msg) {
if (guard && total_collections() >= VerifyGCStartAt) {
double verify_start = os::elapsedTime();
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(msg);
prepare_for_verify();
Universe::verify(false /* silent */, VerifyOption_G1UsePrevMarking);
Universe::verify(VerifyOption_G1UsePrevMarking, msg);
verify_time_ms = (os::elapsedTime() - verify_start) * 1000;
}
@ -1304,7 +1303,7 @@ bool G1CollectedHeap::do_collection(bool explicit_gc,
print_heap_before_gc();
size_t metadata_prev_used = MetaspaceAux::used_in_bytes();
size_t metadata_prev_used = MetaspaceAux::allocated_used_bytes();
HRSPhaseSetter x(HRSPhaseFullGC);
verify_region_sets_optional();
@ -1425,6 +1424,7 @@ bool G1CollectedHeap::do_collection(bool explicit_gc,
// Delete metaspaces for unloaded class loaders and clean up loader_data graph
ClassLoaderDataGraph::purge();
MetaspaceAux::verify_metrics();
// Note: since we've just done a full GC, concurrent
// marking is no longer active. Therefore we need not
@ -1831,7 +1831,7 @@ bool G1CollectedHeap::expand(size_t expand_bytes) {
if (G1ExitOnExpansionFailure &&
_g1_storage.uncommitted_size() >= aligned_expand_bytes) {
// We had head room...
vm_exit_out_of_memory(aligned_expand_bytes, "G1 heap expansion");
vm_exit_out_of_memory(aligned_expand_bytes, OOM_MMAP_ERROR, "G1 heap expansion");
}
}
return successful;
@ -1955,13 +1955,6 @@ G1CollectedHeap::G1CollectedHeap(G1CollectorPolicy* policy_) :
int n_rem_sets = HeapRegionRemSet::num_par_rem_sets();
assert(n_rem_sets > 0, "Invariant.");
HeapRegionRemSetIterator** iter_arr =
NEW_C_HEAP_ARRAY(HeapRegionRemSetIterator*, n_queues, mtGC);
for (int i = 0; i < n_queues; i++) {
iter_arr[i] = new HeapRegionRemSetIterator();
}
_rem_set_iterator = iter_arr;
_worker_cset_start_region = NEW_C_HEAP_ARRAY(HeapRegion*, n_queues, mtGC);
_worker_cset_start_region_time_stamp = NEW_C_HEAP_ARRAY(unsigned int, n_queues, mtGC);
@ -3614,7 +3607,7 @@ G1CollectedHeap::setup_surviving_young_words() {
uint array_length = g1_policy()->young_cset_region_length();
_surviving_young_words = NEW_C_HEAP_ARRAY(size_t, (size_t) array_length, mtGC);
if (_surviving_young_words == NULL) {
vm_exit_out_of_memory(sizeof(size_t) * array_length,
vm_exit_out_of_memory(sizeof(size_t) * array_length, OOM_MALLOC_ERROR,
"Not enough space for young surv words summary.");
}
memset(_surviving_young_words, 0, (size_t) array_length * sizeof(size_t));
@ -4397,7 +4390,7 @@ G1ParScanThreadState::G1ParScanThreadState(G1CollectedHeap* g1h, uint queue_num)
PADDING_ELEM_NUM;
_surviving_young_words_base = NEW_C_HEAP_ARRAY(size_t, array_length, mtGC);
if (_surviving_young_words_base == NULL)
vm_exit_out_of_memory(array_length * sizeof(size_t),
vm_exit_out_of_memory(array_length * sizeof(size_t), OOM_MALLOC_ERROR,
"Not enough space for young surv histo.");
_surviving_young_words = _surviving_young_words_base + PADDING_ELEM_NUM;
memset(_surviving_young_words, 0, (size_t) real_length * sizeof(size_t));
@ -5079,10 +5072,9 @@ g1_process_strong_roots(bool is_scavenging,
}
void
G1CollectedHeap::g1_process_weak_roots(OopClosure* root_closure,
OopClosure* non_root_closure) {
G1CollectedHeap::g1_process_weak_roots(OopClosure* root_closure) {
CodeBlobToOopClosure roots_in_blobs(root_closure, /*do_marking=*/ false);
SharedHeap::process_weak_roots(root_closure, &roots_in_blobs, non_root_closure);
SharedHeap::process_weak_roots(root_closure, &roots_in_blobs);
}
// Weak Reference Processing support

View File

@ -786,9 +786,6 @@ protected:
// concurrently after the collection.
DirtyCardQueueSet _dirty_card_queue_set;
// The Heap Region Rem Set Iterator.
HeapRegionRemSetIterator** _rem_set_iterator;
// The closure used to refine a single card.
RefineCardTableEntryClosure* _refine_cte_cl;
@ -827,8 +824,7 @@ protected:
// Apply "blk" to all the weak roots of the system. These include
// JNI weak roots, the code cache, system dictionary, symbol table,
// string table, and referents of reachable weak refs.
void g1_process_weak_roots(OopClosure* root_closure,
OopClosure* non_root_closure);
void g1_process_weak_roots(OopClosure* root_closure);
// Frees a non-humongous region by initializing its contents and
// adding it to the free list that's passed as a parameter (this is
@ -1114,15 +1110,6 @@ public:
G1RemSet* g1_rem_set() const { return _g1_rem_set; }
ModRefBarrierSet* mr_bs() const { return _mr_bs; }
// The rem set iterator.
HeapRegionRemSetIterator* rem_set_iterator(int i) {
return _rem_set_iterator[i];
}
HeapRegionRemSetIterator* rem_set_iterator() {
return _rem_set_iterator[0];
}
unsigned get_gc_time_stamp() {
return _gc_time_stamp;
}

View File

@ -144,33 +144,28 @@ void G1MarkSweep::mark_sweep_phase1(bool& marked_for_unloading,
&GenMarkSweep::follow_stack_closure,
NULL);
// Follow system dictionary roots and unload classes
// This is the point where the entire marking should have completed.
assert(GenMarkSweep::_marking_stack.is_empty(), "Marking should have completed");
// Unload classes and purge the SystemDictionary.
bool purged_class = SystemDictionary::do_unloading(&GenMarkSweep::is_alive);
assert(GenMarkSweep::_marking_stack.is_empty(),
"stack should be empty by now");
// Follow code cache roots (has to be done after system dictionary,
// assumes all live klasses are marked)
// Unload nmethods.
CodeCache::do_unloading(&GenMarkSweep::is_alive, purged_class);
GenMarkSweep::follow_stack();
// Update subklass/sibling/implementor links of live klasses
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(&GenMarkSweep::is_alive);
assert(GenMarkSweep::_marking_stack.is_empty(),
"stack should be empty by now");
// Visit interned string tables and delete unmarked oops
// Delete entries for dead interned strings.
StringTable::unlink(&GenMarkSweep::is_alive);
// Clean up unreferenced symbols in symbol table.
SymbolTable::unlink();
assert(GenMarkSweep::_marking_stack.is_empty(),
"stack should be empty by now");
if (VerifyDuringGC) {
HandleMark hm; // handle scope
COMPILER2_PRESENT(DerivedPointerTableDeactivate dpt_deact);
gclog_or_tty->print(" VerifyDuringGC:(full)[Verifying ");
Universe::heap()->prepare_for_verify();
// Note: we can verify only the heap here. When an object is
// marked, the previous value of the mark word (including
@ -182,11 +177,13 @@ void G1MarkSweep::mark_sweep_phase1(bool& marked_for_unloading,
// fail. At the end of the GC, the original mark word values
// (including hash values) are restored to the appropriate
// objects.
Universe::heap()->verify(/* silent */ false,
/* option */ VerifyOption_G1UseMarkWord);
G1CollectedHeap* g1h = G1CollectedHeap::heap();
gclog_or_tty->print_cr("]");
if (!VerifySilently) {
gclog_or_tty->print(" VerifyDuringGC:(full)[Verifying ");
}
Universe::heap()->verify(VerifySilently, VerifyOption_G1UseMarkWord);
if (!VerifySilently) {
gclog_or_tty->print_cr("]");
}
}
}
@ -308,17 +305,16 @@ void G1MarkSweep::mark_sweep_phase3() {
sh->process_strong_roots(true, // activate StrongRootsScope
false, // not scavenging.
SharedHeap::SO_AllClasses,
&GenMarkSweep::adjust_root_pointer_closure,
&GenMarkSweep::adjust_pointer_closure,
NULL, // do not touch code cache here
&GenMarkSweep::adjust_klass_closure);
assert(GenMarkSweep::ref_processor() == g1h->ref_processor_stw(), "Sanity");
g1h->ref_processor_stw()->weak_oops_do(&GenMarkSweep::adjust_root_pointer_closure);
g1h->ref_processor_stw()->weak_oops_do(&GenMarkSweep::adjust_pointer_closure);
// Now adjust pointers in remaining weak roots. (All of which should
// have been cleared if they pointed to non-surviving objects.)
g1h->g1_process_weak_roots(&GenMarkSweep::adjust_root_pointer_closure,
&GenMarkSweep::adjust_pointer_closure);
g1h->g1_process_weak_roots(&GenMarkSweep::adjust_pointer_closure);
GenMarkSweep::adjust_marks();

View File

@ -169,14 +169,13 @@ public:
// _try_claimed || r->claim_iter()
// is true: either we're supposed to work on claimed-but-not-complete
// regions, or we successfully claimed the region.
HeapRegionRemSetIterator* iter = _g1h->rem_set_iterator(_worker_i);
hrrs->init_iterator(iter);
HeapRegionRemSetIterator iter(hrrs);
size_t card_index;
// We claim cards in blocks so as to reduce the contention. The block size is determined by
// the G1RSetScanBlockSize parameter.
size_t jump_to_card = hrrs->iter_claimed_next(_block_size);
for (size_t current_card = 0; iter->has_next(card_index); current_card++) {
for (size_t current_card = 0; iter.has_next(card_index); current_card++) {
if (current_card >= jump_to_card + _block_size) {
jump_to_card = hrrs->iter_claimed_next(_block_size);
}

View File

@ -53,14 +53,14 @@ protected:
NumSeqTasks = 1
};
CardTableModRefBS* _ct_bs;
SubTasksDone* _seq_task;
G1CollectorPolicy* _g1p;
CardTableModRefBS* _ct_bs;
SubTasksDone* _seq_task;
G1CollectorPolicy* _g1p;
ConcurrentG1Refine* _cg1r;
ConcurrentG1Refine* _cg1r;
size_t* _cards_scanned;
size_t _total_cards_scanned;
size_t* _cards_scanned;
size_t _total_cards_scanned;
// Used for caching the closure that is responsible for scanning
// references into the collection set.

View File

@ -285,7 +285,7 @@ OtherRegionsTable::OtherRegionsTable(HeapRegion* hr) :
_fine_grain_regions = new PerRegionTablePtr[_max_fine_entries];
if (_fine_grain_regions == NULL) {
vm_exit_out_of_memory(sizeof(void*)*_max_fine_entries,
vm_exit_out_of_memory(sizeof(void*)*_max_fine_entries, OOM_MALLOC_ERROR,
"Failed to allocate _fine_grain_entries.");
}
@ -877,14 +877,9 @@ bool HeapRegionRemSet::iter_is_complete() {
return _iter_state == Complete;
}
void HeapRegionRemSet::init_iterator(HeapRegionRemSetIterator* iter) const {
iter->initialize(this);
}
#ifndef PRODUCT
void HeapRegionRemSet::print() const {
HeapRegionRemSetIterator iter;
init_iterator(&iter);
HeapRegionRemSetIterator iter(this);
size_t card_index;
while (iter.has_next(card_index)) {
HeapWord* card_start =
@ -928,35 +923,23 @@ void HeapRegionRemSet::scrub(CardTableModRefBS* ctbs,
//-------------------- Iteration --------------------
HeapRegionRemSetIterator::
HeapRegionRemSetIterator() :
_hrrs(NULL),
HeapRegionRemSetIterator:: HeapRegionRemSetIterator(const HeapRegionRemSet* hrrs) :
_hrrs(hrrs),
_g1h(G1CollectedHeap::heap()),
_bosa(NULL),
_sparse_iter() { }
void HeapRegionRemSetIterator::initialize(const HeapRegionRemSet* hrrs) {
_hrrs = hrrs;
_coarse_map = &_hrrs->_other_regions._coarse_map;
_fine_grain_regions = _hrrs->_other_regions._fine_grain_regions;
_bosa = _hrrs->bosa();
_is = Sparse;
_coarse_map(&hrrs->_other_regions._coarse_map),
_fine_grain_regions(hrrs->_other_regions._fine_grain_regions),
_bosa(hrrs->bosa()),
_is(Sparse),
// Set these values so that we increment to the first region.
_coarse_cur_region_index = -1;
_coarse_cur_region_cur_card = (HeapRegion::CardsPerRegion-1);
_cur_region_cur_card = 0;
_fine_array_index = -1;
_fine_cur_prt = NULL;
_n_yielded_coarse = 0;
_n_yielded_fine = 0;
_n_yielded_sparse = 0;
_sparse_iter.init(&hrrs->_other_regions._sparse_table);
}
_coarse_cur_region_index(-1),
_coarse_cur_region_cur_card(HeapRegion::CardsPerRegion-1),
_cur_region_cur_card(0),
_fine_array_index(-1),
_fine_cur_prt(NULL),
_n_yielded_coarse(0),
_n_yielded_fine(0),
_n_yielded_sparse(0),
_sparse_iter(&hrrs->_other_regions._sparse_table) {}
bool HeapRegionRemSetIterator::coarse_has_next(size_t& card_index) {
if (_hrrs->_other_regions._n_coarse_entries == 0) return false;
@ -1209,8 +1192,7 @@ void HeapRegionRemSet::test() {
hrrs->add_reference((OopOrNarrowOopStar)hr5->bottom());
// Now, does iteration yield these three?
HeapRegionRemSetIterator iter;
hrrs->init_iterator(&iter);
HeapRegionRemSetIterator iter(hrrs);
size_t sum = 0;
size_t card_index;
while (iter.has_next(card_index)) {

View File

@ -281,9 +281,6 @@ public:
return (_iter_state == Unclaimed) && (_iter_claimed == 0);
}
// Initialize the given iterator to iterate over this rem set.
void init_iterator(HeapRegionRemSetIterator* iter) const;
// The actual # of bytes this hr_remset takes up.
size_t mem_size() {
return _other_regions.mem_size()
@ -345,9 +342,9 @@ public:
#endif
};
class HeapRegionRemSetIterator : public CHeapObj<mtGC> {
class HeapRegionRemSetIterator : public StackObj {
// The region over which we're iterating.
// The region RSet over which we're iterating.
const HeapRegionRemSet* _hrrs;
// Local caching of HRRS fields.
@ -362,8 +359,10 @@ class HeapRegionRemSetIterator : public CHeapObj<mtGC> {
size_t _n_yielded_coarse;
size_t _n_yielded_sparse;
// If true we're iterating over the coarse table; if false the fine
// table.
// Indicates what granularity of table that we're currently iterating over.
// We start iterating over the sparse table, progress to the fine grain
// table, and then finish with the coarse table.
// See HeapRegionRemSetIterator::has_next().
enum IterState {
Sparse,
Fine,
@ -403,9 +402,7 @@ class HeapRegionRemSetIterator : public CHeapObj<mtGC> {
public:
// We require an iterator to be initialized before use, so the
// constructor does little.
HeapRegionRemSetIterator();
void initialize(const HeapRegionRemSet* hrrs);
HeapRegionRemSetIterator(const HeapRegionRemSet* hrrs);
// If there remains one or more cards to be yielded, returns true and
// sets "card_index" to one of those cards (which is then considered

View File

@ -35,10 +35,6 @@
#define UNROLL_CARD_LOOPS 1
void SparsePRT::init_iterator(SparsePRTIter* sprt_iter) {
sprt_iter->init(this);
}
void SparsePRTEntry::init(RegionIdx_t region_ind) {
_region_ind = region_ind;
_next_index = NullEntry;

View File

@ -192,18 +192,11 @@ class RSHashTableIter VALUE_OBJ_CLASS_SPEC {
size_t compute_card_ind(CardIdx_t ci);
public:
RSHashTableIter() :
_tbl_ind(RSHashTable::NullEntry),
RSHashTableIter(RSHashTable* rsht) :
_tbl_ind(RSHashTable::NullEntry), // So that first increment gets to 0.
_bl_ind(RSHashTable::NullEntry),
_card_ind((SparsePRTEntry::cards_num() - 1)),
_rsht(NULL) {}
void init(RSHashTable* rsht) {
_rsht = rsht;
_tbl_ind = -1; // So that first increment gets to 0.
_bl_ind = RSHashTable::NullEntry;
_card_ind = (SparsePRTEntry::cards_num() - 1);
}
_rsht(rsht) {}
bool has_next(size_t& card_index);
};
@ -284,8 +277,6 @@ public:
static void cleanup_all();
RSHashTable* cur() const { return _cur; }
void init_iterator(SparsePRTIter* sprt_iter);
static void add_to_expanded_list(SparsePRT* sprt);
static SparsePRT* get_from_expanded_list();
@ -321,9 +312,9 @@ public:
class SparsePRTIter: public RSHashTableIter {
public:
void init(const SparsePRT* sprt) {
RSHashTableIter::init(sprt->cur());
}
SparsePRTIter(const SparsePRT* sprt) :
RSHashTableIter(sprt->cur()) {}
bool has_next(size_t& card_index) {
return RSHashTableIter::has_next(card_index);
}

View File

@ -567,7 +567,7 @@ bool CardTableExtension::resize_commit_uncommit(int changed_region,
MemRegion(new_start_aligned, new_end_for_commit);
if (!os::commit_memory((char*)new_committed.start(),
new_committed.byte_size())) {
vm_exit_out_of_memory(new_committed.byte_size(),
vm_exit_out_of_memory(new_committed.byte_size(), OOM_MMAP_ERROR,
"card table expansion");
}
}

View File

@ -43,7 +43,7 @@ GCTaskThread::GCTaskThread(GCTaskManager* manager,
_time_stamp_index(0)
{
if (!os::create_thread(this, os::pgc_thread))
vm_exit_out_of_memory(0, "Cannot create GC thread. Out of system resources.");
vm_exit_out_of_memory(0, OOM_MALLOC_ERROR, "Cannot create GC thread. Out of system resources.");
if (PrintGCTaskTimeStamps) {
_time_stamps = NEW_C_HEAP_ARRAY(GCTaskTimeStamp, GCTaskTimeStampEntries, mtGC);

View File

@ -99,7 +99,7 @@ void ObjectStartArray::set_covered_region(MemRegion mr) {
// Expand
size_t expand_by = requested_blocks_size_in_bytes - current_blocks_size_in_bytes;
if (!_virtual_space.expand_by(expand_by)) {
vm_exit_out_of_memory(expand_by, "object start array expansion");
vm_exit_out_of_memory(expand_by, OOM_MMAP_ERROR, "object start array expansion");
}
// Clear *only* the newly allocated region
memset(_blocks_region.end(), clean_block, expand_by);

View File

@ -138,8 +138,7 @@ bool PSMarkSweep::invoke_no_policy(bool clear_all_softrefs) {
if (VerifyBeforeGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyBeforeGC:");
Universe::verify();
Universe::verify(" VerifyBeforeGC:");
}
// Verify object start arrays
@ -177,7 +176,7 @@ bool PSMarkSweep::invoke_no_policy(bool clear_all_softrefs) {
size_t prev_used = heap->used();
// Capture metadata size before collection for sizing.
size_t metadata_prev_used = MetaspaceAux::used_in_bytes();
size_t metadata_prev_used = MetaspaceAux::allocated_used_bytes();
// For PrintGCDetails
size_t old_gen_prev_used = old_gen->used_in_bytes();
@ -238,6 +237,7 @@ bool PSMarkSweep::invoke_no_policy(bool clear_all_softrefs) {
// Delete metaspaces for unloaded class loaders and clean up loader_data graph
ClassLoaderDataGraph::purge();
MetaspaceAux::verify_metrics();
BiasedLocking::restore_marks();
Threads::gc_epilogue();
@ -340,8 +340,7 @@ bool PSMarkSweep::invoke_no_policy(bool clear_all_softrefs) {
if (VerifyAfterGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyAfterGC:");
Universe::verify();
Universe::verify(" VerifyAfterGC:");
}
// Re-verify object start arrays
@ -518,23 +517,23 @@ void PSMarkSweep::mark_sweep_phase1(bool clear_all_softrefs) {
is_alive_closure(), mark_and_push_closure(), follow_stack_closure(), NULL);
}
// Follow system dictionary roots and unload classes
// This is the point where the entire marking should have completed.
assert(_marking_stack.is_empty(), "Marking should have completed");
// Unload classes and purge the SystemDictionary.
bool purged_class = SystemDictionary::do_unloading(is_alive_closure());
// Follow code cache roots
// Unload nmethods.
CodeCache::do_unloading(is_alive_closure(), purged_class);
follow_stack(); // Flush marking stack
// Update subklass/sibling/implementor links of live klasses
Klass::clean_weak_klass_links(&is_alive);
assert(_marking_stack.is_empty(), "just drained");
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(is_alive_closure());
// Visit interned string tables and delete unmarked oops
// Delete entries for dead interned strings.
StringTable::unlink(is_alive_closure());
// Clean up unreferenced symbols in symbol table.
SymbolTable::unlink();
assert(_marking_stack.is_empty(), "stack should be empty by now");
}
@ -583,28 +582,27 @@ void PSMarkSweep::mark_sweep_phase3() {
ClassLoaderDataGraph::clear_claimed_marks();
// General strong roots.
Universe::oops_do(adjust_root_pointer_closure());
JNIHandles::oops_do(adjust_root_pointer_closure()); // Global (strong) JNI handles
CLDToOopClosure adjust_from_cld(adjust_root_pointer_closure());
Threads::oops_do(adjust_root_pointer_closure(), &adjust_from_cld, NULL);
ObjectSynchronizer::oops_do(adjust_root_pointer_closure());
FlatProfiler::oops_do(adjust_root_pointer_closure());
Management::oops_do(adjust_root_pointer_closure());
JvmtiExport::oops_do(adjust_root_pointer_closure());
Universe::oops_do(adjust_pointer_closure());
JNIHandles::oops_do(adjust_pointer_closure()); // Global (strong) JNI handles
CLDToOopClosure adjust_from_cld(adjust_pointer_closure());
Threads::oops_do(adjust_pointer_closure(), &adjust_from_cld, NULL);
ObjectSynchronizer::oops_do(adjust_pointer_closure());
FlatProfiler::oops_do(adjust_pointer_closure());
Management::oops_do(adjust_pointer_closure());
JvmtiExport::oops_do(adjust_pointer_closure());
// SO_AllClasses
SystemDictionary::oops_do(adjust_root_pointer_closure());
ClassLoaderDataGraph::oops_do(adjust_root_pointer_closure(), adjust_klass_closure(), true);
//CodeCache::scavenge_root_nmethods_oops_do(adjust_root_pointer_closure());
SystemDictionary::oops_do(adjust_pointer_closure());
ClassLoaderDataGraph::oops_do(adjust_pointer_closure(), adjust_klass_closure(), true);
// Now adjust pointers in remaining weak roots. (All of which should
// have been cleared if they pointed to non-surviving objects.)
// Global (weak) JNI handles
JNIHandles::weak_oops_do(&always_true, adjust_root_pointer_closure());
JNIHandles::weak_oops_do(&always_true, adjust_pointer_closure());
CodeCache::oops_do(adjust_pointer_closure());
StringTable::oops_do(adjust_root_pointer_closure());
ref_processor()->weak_oops_do(adjust_root_pointer_closure());
PSScavenge::reference_processor()->weak_oops_do(adjust_root_pointer_closure());
StringTable::oops_do(adjust_pointer_closure());
ref_processor()->weak_oops_do(adjust_pointer_closure());
PSScavenge::reference_processor()->weak_oops_do(adjust_pointer_closure());
adjust_marks();

View File

@ -44,7 +44,6 @@ class PSMarkSweep : public MarkSweep {
static KlassClosure* follow_klass_closure() { return &MarkSweep::follow_klass_closure; }
static VoidClosure* follow_stack_closure() { return (VoidClosure*)&MarkSweep::follow_stack_closure; }
static OopClosure* adjust_pointer_closure() { return (OopClosure*)&MarkSweep::adjust_pointer_closure; }
static OopClosure* adjust_root_pointer_closure() { return (OopClosure*)&MarkSweep::adjust_root_pointer_closure; }
static KlassClosure* adjust_klass_closure() { return &MarkSweep::adjust_klass_closure; }
static BoolObjectClosure* is_alive_closure() { return (BoolObjectClosure*)&MarkSweep::is_alive; }

View File

@ -787,12 +787,11 @@ bool PSParallelCompact::IsAliveClosure::do_object_b(oop p) { return mark_bitmap(
void PSParallelCompact::KeepAliveClosure::do_oop(oop* p) { PSParallelCompact::KeepAliveClosure::do_oop_work(p); }
void PSParallelCompact::KeepAliveClosure::do_oop(narrowOop* p) { PSParallelCompact::KeepAliveClosure::do_oop_work(p); }
PSParallelCompact::AdjustPointerClosure PSParallelCompact::_adjust_root_pointer_closure(true);
PSParallelCompact::AdjustPointerClosure PSParallelCompact::_adjust_pointer_closure(false);
PSParallelCompact::AdjustPointerClosure PSParallelCompact::_adjust_pointer_closure;
PSParallelCompact::AdjustKlassClosure PSParallelCompact::_adjust_klass_closure;
void PSParallelCompact::AdjustPointerClosure::do_oop(oop* p) { adjust_pointer(p, _is_root); }
void PSParallelCompact::AdjustPointerClosure::do_oop(narrowOop* p) { adjust_pointer(p, _is_root); }
void PSParallelCompact::AdjustPointerClosure::do_oop(oop* p) { adjust_pointer(p); }
void PSParallelCompact::AdjustPointerClosure::do_oop(narrowOop* p) { adjust_pointer(p); }
void PSParallelCompact::FollowStackClosure::do_void() { _compaction_manager->follow_marking_stacks(); }
@ -805,7 +804,7 @@ void PSParallelCompact::FollowKlassClosure::do_klass(Klass* klass) {
klass->oops_do(_mark_and_push_closure);
}
void PSParallelCompact::AdjustKlassClosure::do_klass(Klass* klass) {
klass->oops_do(&PSParallelCompact::_adjust_root_pointer_closure);
klass->oops_do(&PSParallelCompact::_adjust_pointer_closure);
}
void PSParallelCompact::post_initialize() {
@ -892,7 +891,7 @@ public:
_heap_used = heap->used();
_young_gen_used = heap->young_gen()->used_in_bytes();
_old_gen_used = heap->old_gen()->used_in_bytes();
_metadata_used = MetaspaceAux::used_in_bytes();
_metadata_used = MetaspaceAux::allocated_used_bytes();
};
size_t heap_used() const { return _heap_used; }
@ -967,8 +966,7 @@ void PSParallelCompact::pre_compact(PreGCValues* pre_gc_values)
if (VerifyBeforeGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyBeforeGC:");
Universe::verify();
Universe::verify(" VerifyBeforeGC:");
}
// Verify object start arrays
@ -1027,6 +1025,7 @@ void PSParallelCompact::post_compact()
// Delete metaspaces for unloaded class loaders and clean up loader_data graph
ClassLoaderDataGraph::purge();
MetaspaceAux::verify_metrics();
Threads::gc_epilogue();
CodeCache::gc_epilogue();
@ -2168,8 +2167,7 @@ bool PSParallelCompact::invoke_no_policy(bool maximum_heap_compaction) {
if (VerifyAfterGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyAfterGC:");
Universe::verify();
Universe::verify(" VerifyAfterGC:");
}
// Re-verify object start arrays
@ -2356,22 +2354,24 @@ void PSParallelCompact::marking_phase(ParCompactionManager* cm,
}
TraceTime tm_c("class unloading", print_phases(), true, gclog_or_tty);
// This is the point where the entire marking should have completed.
assert(cm->marking_stacks_empty(), "Marking should have completed");
// Follow system dictionary roots and unload classes.
bool purged_class = SystemDictionary::do_unloading(is_alive_closure());
// Follow code cache roots.
// Unload nmethods.
CodeCache::do_unloading(is_alive_closure(), purged_class);
cm->follow_marking_stacks(); // Flush marking stack.
// Update subklass/sibling/implementor links of live klasses
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(is_alive_closure());
// Visit interned string tables and delete unmarked oops
// Delete entries for dead interned strings.
StringTable::unlink(is_alive_closure());
// Clean up unreferenced symbols in symbol table.
SymbolTable::unlink();
assert(cm->marking_stacks_empty(), "marking stacks should be empty");
}
void PSParallelCompact::follow_klass(ParCompactionManager* cm, Klass* klass) {
@ -2398,7 +2398,7 @@ void PSParallelCompact::follow_class_loader(ParCompactionManager* cm,
void PSParallelCompact::adjust_class_loader(ParCompactionManager* cm,
ClassLoaderData* cld) {
cld->oops_do(PSParallelCompact::adjust_root_pointer_closure(),
cld->oops_do(PSParallelCompact::adjust_pointer_closure(),
PSParallelCompact::adjust_klass_closure(),
true);
}
@ -2419,32 +2419,31 @@ void PSParallelCompact::adjust_roots() {
ClassLoaderDataGraph::clear_claimed_marks();
// General strong roots.
Universe::oops_do(adjust_root_pointer_closure());
JNIHandles::oops_do(adjust_root_pointer_closure()); // Global (strong) JNI handles
CLDToOopClosure adjust_from_cld(adjust_root_pointer_closure());
Threads::oops_do(adjust_root_pointer_closure(), &adjust_from_cld, NULL);
ObjectSynchronizer::oops_do(adjust_root_pointer_closure());
FlatProfiler::oops_do(adjust_root_pointer_closure());
Management::oops_do(adjust_root_pointer_closure());
JvmtiExport::oops_do(adjust_root_pointer_closure());
Universe::oops_do(adjust_pointer_closure());
JNIHandles::oops_do(adjust_pointer_closure()); // Global (strong) JNI handles
CLDToOopClosure adjust_from_cld(adjust_pointer_closure());
Threads::oops_do(adjust_pointer_closure(), &adjust_from_cld, NULL);
ObjectSynchronizer::oops_do(adjust_pointer_closure());
FlatProfiler::oops_do(adjust_pointer_closure());
Management::oops_do(adjust_pointer_closure());
JvmtiExport::oops_do(adjust_pointer_closure());
// SO_AllClasses
SystemDictionary::oops_do(adjust_root_pointer_closure());
ClassLoaderDataGraph::oops_do(adjust_root_pointer_closure(), adjust_klass_closure(), true);
SystemDictionary::oops_do(adjust_pointer_closure());
ClassLoaderDataGraph::oops_do(adjust_pointer_closure(), adjust_klass_closure(), true);
// Now adjust pointers in remaining weak roots. (All of which should
// have been cleared if they pointed to non-surviving objects.)
// Global (weak) JNI handles
JNIHandles::weak_oops_do(&always_true, adjust_root_pointer_closure());
JNIHandles::weak_oops_do(&always_true, adjust_pointer_closure());
CodeCache::oops_do(adjust_pointer_closure());
StringTable::oops_do(adjust_root_pointer_closure());
ref_processor()->weak_oops_do(adjust_root_pointer_closure());
StringTable::oops_do(adjust_pointer_closure());
ref_processor()->weak_oops_do(adjust_pointer_closure());
// Roots were visited so references into the young gen in roots
// may have been scanned. Process them also.
// Should the reference processor have a span that excludes
// young gen objects?
PSScavenge::reference_processor()->weak_oops_do(
adjust_root_pointer_closure());
PSScavenge::reference_processor()->weak_oops_do(adjust_pointer_closure());
}
void PSParallelCompact::enqueue_region_draining_tasks(GCTaskQueue* q,

View File

@ -799,16 +799,6 @@ class PSParallelCompact : AllStatic {
virtual void do_oop(narrowOop* p);
};
// Currently unused
class FollowRootClosure: public OopsInGenClosure {
private:
ParCompactionManager* _compaction_manager;
public:
FollowRootClosure(ParCompactionManager* cm) : _compaction_manager(cm) { }
virtual void do_oop(oop* p);
virtual void do_oop(narrowOop* p);
};
class FollowStackClosure: public VoidClosure {
private:
ParCompactionManager* _compaction_manager;
@ -818,10 +808,7 @@ class PSParallelCompact : AllStatic {
};
class AdjustPointerClosure: public OopClosure {
private:
bool _is_root;
public:
AdjustPointerClosure(bool is_root) : _is_root(is_root) { }
virtual void do_oop(oop* p);
virtual void do_oop(narrowOop* p);
// do not walk from thread stacks to the code cache on this phase
@ -838,7 +825,6 @@ class PSParallelCompact : AllStatic {
friend class AdjustPointerClosure;
friend class AdjustKlassClosure;
friend class FollowKlassClosure;
friend class FollowRootClosure;
friend class InstanceClassLoaderKlass;
friend class RefProcTaskProxy;
@ -853,7 +839,6 @@ class PSParallelCompact : AllStatic {
static IsAliveClosure _is_alive_closure;
static SpaceInfo _space_info[last_space_id];
static bool _print_phases;
static AdjustPointerClosure _adjust_root_pointer_closure;
static AdjustPointerClosure _adjust_pointer_closure;
static AdjustKlassClosure _adjust_klass_closure;
@ -889,9 +874,6 @@ class PSParallelCompact : AllStatic {
static void marking_phase(ParCompactionManager* cm,
bool maximum_heap_compaction);
template <class T> static inline void adjust_pointer(T* p, bool is_root);
static void adjust_root_pointer(oop* p) { adjust_pointer(p, true); }
template <class T>
static inline void follow_root(ParCompactionManager* cm, T* p);
@ -1046,7 +1028,6 @@ class PSParallelCompact : AllStatic {
// Closure accessors
static OopClosure* adjust_pointer_closure() { return (OopClosure*)&_adjust_pointer_closure; }
static OopClosure* adjust_root_pointer_closure() { return (OopClosure*)&_adjust_root_pointer_closure; }
static KlassClosure* adjust_klass_closure() { return (KlassClosure*)&_adjust_klass_closure; }
static BoolObjectClosure* is_alive_closure() { return (BoolObjectClosure*)&_is_alive_closure; }
@ -1067,6 +1048,7 @@ class PSParallelCompact : AllStatic {
// Check mark and maybe push on marking stack
template <class T> static inline void mark_and_push(ParCompactionManager* cm,
T* p);
template <class T> static inline void adjust_pointer(T* p);
static void follow_klass(ParCompactionManager* cm, Klass* klass);
static void adjust_klass(ParCompactionManager* cm, Klass* klass);
@ -1151,9 +1133,6 @@ class PSParallelCompact : AllStatic {
static ParMarkBitMap* mark_bitmap() { return &_mark_bitmap; }
static ParallelCompactData& summary_data() { return _summary_data; }
static inline void adjust_pointer(oop* p) { adjust_pointer(p, false); }
static inline void adjust_pointer(narrowOop* p) { adjust_pointer(p, false); }
// Reference Processing
static ReferenceProcessor* const ref_processor() { return _ref_processor; }
@ -1230,7 +1209,7 @@ inline void PSParallelCompact::mark_and_push(ParCompactionManager* cm, T* p) {
}
template <class T>
inline void PSParallelCompact::adjust_pointer(T* p, bool isroot) {
inline void PSParallelCompact::adjust_pointer(T* p) {
T heap_oop = oopDesc::load_heap_oop(p);
if (!oopDesc::is_null(heap_oop)) {
oop obj = oopDesc::decode_heap_oop_not_null(heap_oop);

@ -314,8 +314,7 @@ bool PSScavenge::invoke_no_policy() {
if (VerifyBeforeGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyBeforeGC:");
Universe::verify();
Universe::verify(" VerifyBeforeGC:");
}
{
@ -638,8 +637,7 @@ bool PSScavenge::invoke_no_policy() {
if (VerifyAfterGC && heap->total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyAfterGC:");
Universe::verify();
Universe::verify(" VerifyAfterGC:");
}
heap->print_heap_after_gc();

@ -81,7 +81,7 @@ void MarkSweep::follow_class_loader(ClassLoaderData* cld) {
}
void MarkSweep::adjust_class_loader(ClassLoaderData* cld) {
cld->oops_do(&MarkSweep::adjust_root_pointer_closure, &MarkSweep::adjust_klass_closure, true);
cld->oops_do(&MarkSweep::adjust_pointer_closure, &MarkSweep::adjust_klass_closure, true);
}
@ -121,11 +121,10 @@ void MarkSweep::preserve_mark(oop obj, markOop mark) {
}
}
MarkSweep::AdjustPointerClosure MarkSweep::adjust_root_pointer_closure(true);
MarkSweep::AdjustPointerClosure MarkSweep::adjust_pointer_closure(false);
MarkSweep::AdjustPointerClosure MarkSweep::adjust_pointer_closure;
void MarkSweep::AdjustPointerClosure::do_oop(oop* p) { adjust_pointer(p, _is_root); }
void MarkSweep::AdjustPointerClosure::do_oop(narrowOop* p) { adjust_pointer(p, _is_root); }
void MarkSweep::AdjustPointerClosure::do_oop(oop* p) { adjust_pointer(p); }
void MarkSweep::AdjustPointerClosure::do_oop(narrowOop* p) { adjust_pointer(p); }
void MarkSweep::adjust_marks() {
assert( _preserved_oop_stack.size() == _preserved_mark_stack.size(),

@ -80,10 +80,7 @@ class MarkSweep : AllStatic {
};
class AdjustPointerClosure: public OopsInGenClosure {
private:
bool _is_root;
public:
AdjustPointerClosure(bool is_root) : _is_root(is_root) {}
virtual void do_oop(oop* p);
virtual void do_oop(narrowOop* p);
};
@ -146,7 +143,6 @@ class MarkSweep : AllStatic {
static MarkAndPushClosure mark_and_push_closure;
static FollowKlassClosure follow_klass_closure;
static FollowStackClosure follow_stack_closure;
static AdjustPointerClosure adjust_root_pointer_closure;
static AdjustPointerClosure adjust_pointer_closure;
static AdjustKlassClosure adjust_klass_closure;
@ -179,12 +175,7 @@ class MarkSweep : AllStatic {
static void adjust_marks(); // Adjust the pointers in the preserved marks table
static void restore_marks(); // Restore the marks that we saved in preserve_mark
template <class T> static inline void adjust_pointer(T* p, bool isroot);
static void adjust_root_pointer(oop* p) { adjust_pointer(p, true); }
static void adjust_pointer(oop* p) { adjust_pointer(p, false); }
static void adjust_pointer(narrowOop* p) { adjust_pointer(p, false); }
template <class T> static inline void adjust_pointer(T* p);
};
class PreservedMark VALUE_OBJ_CLASS_SPEC {

@ -76,7 +76,7 @@ void MarkSweep::push_objarray(oop obj, size_t index) {
_objarray_stack.push(task);
}
template <class T> inline void MarkSweep::adjust_pointer(T* p, bool isroot) {
template <class T> inline void MarkSweep::adjust_pointer(T* p) {
T heap_oop = oopDesc::load_heap_oop(p);
if (!oopDesc::is_null(heap_oop)) {
oop obj = oopDesc::decode_heap_oop_not_null(heap_oop);

@ -225,7 +225,10 @@ void VM_CollectForMetadataAllocation::doit() {
gclog_or_tty->print_cr("\nCMS full GC for Metaspace");
}
heap->collect_as_vm_thread(GCCause::_metadata_GC_threshold);
_result = _loader_data->metaspace_non_null()->allocate(_size, _mdtype);
// After a GC try to allocate without expanding. Could fail
// and expansion will be tried below.
_result =
_loader_data->metaspace_non_null()->allocate(_size, _mdtype);
}
if (_result == NULL && !UseConcMarkSweepGC /* CMS already tried */) {
// If still failing, allow the Metaspace to expand.

@ -1052,7 +1052,7 @@ void SignatureHandlerLibrary::initialize() {
return;
}
if (set_handler_blob() == NULL) {
vm_exit_out_of_memory(blob_size, "native signature handlers");
vm_exit_out_of_memory(blob_size, OOM_MALLOC_ERROR, "native signature handlers");
}
BufferBlob* bb = BufferBlob::create("Signature Handler Temp Buffer",

@ -259,7 +259,7 @@ class ChunkPool: public CHeapObj<mtInternal> {
}
if (p == NULL) p = os::malloc(bytes, mtChunk, CURRENT_PC);
if (p == NULL)
vm_exit_out_of_memory(bytes, "ChunkPool::allocate");
vm_exit_out_of_memory(bytes, OOM_MALLOC_ERROR, "ChunkPool::allocate");
return p;
}
@ -371,7 +371,7 @@ void* Chunk::operator new(size_t requested_size, size_t length) {
default: {
void *p = os::malloc(bytes, mtChunk, CALLER_PC);
if (p == NULL)
vm_exit_out_of_memory(bytes, "Chunk::new");
vm_exit_out_of_memory(bytes, OOM_MALLOC_ERROR, "Chunk::new");
return p;
}
}
@ -531,7 +531,7 @@ size_t Arena::used() const {
}
void Arena::signal_out_of_memory(size_t sz, const char* whence) const {
vm_exit_out_of_memory(sz, whence);
vm_exit_out_of_memory(sz, OOM_MALLOC_ERROR, whence);
}
// Grow a new Chunk

@ -539,6 +539,9 @@ class ResourceObj ALLOCATION_SUPER_CLASS_SPEC {
#define NEW_RESOURCE_ARRAY(type, size)\
(type*) resource_allocate_bytes((size) * sizeof(type))
#define NEW_RESOURCE_ARRAY_RETURN_NULL(type, size)\
(type*) resource_allocate_bytes((size) * sizeof(type), AllocFailStrategy::RETURN_NULL)
#define NEW_RESOURCE_ARRAY_IN_THREAD(thread, type, size)\
(type*) resource_allocate_bytes(thread, (size) * sizeof(type))
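The new NEW_RESOURCE_ARRAY_RETURN_NULL macro passes AllocFailStrategy::RETURN_NULL, so a failed resource allocation is reported to the caller as NULL instead of terminating the VM. A rough standalone sketch of that two-mode idea (plain C++; AllocFailStrategy here and the alloc_bytes helper are stand-ins, not the HotSpot implementation):

#include <cstdio>
#include <cstdlib>

// Two ways an allocator may react to failure, mirroring the EXIT_OOM /
// RETURN_NULL split used by the macros above.
enum class AllocFailStrategy { EXIT_OOM, RETURN_NULL };

void* alloc_bytes(size_t size, AllocFailStrategy mode) {
  void* p = std::malloc(size);
  if (p == nullptr && mode == AllocFailStrategy::EXIT_OOM) {
    std::fprintf(stderr, "out of memory allocating %zu bytes\n", size);
    std::exit(1);                       // the "vm_exit_out_of_memory" analogue
  }
  return p;                             // may be NULL under RETURN_NULL
}

int main() {
  // Callers of the RETURN_NULL flavour must be prepared for a NULL result.
  int* buf = static_cast<int*>(alloc_bytes(16 * sizeof(int),
                                           AllocFailStrategy::RETURN_NULL));
  if (buf == nullptr) {
    std::puts("allocation failed, falling back");
  } else {
    buf[0] = 42;
    std::printf("allocated, buf[0] = %d\n", buf[0]);
    std::free(buf);
  }
  return 0;
}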

@ -58,7 +58,9 @@ inline char* AllocateHeap(size_t size, MEMFLAGS flags, address pc = 0,
#ifdef ASSERT
if (PrintMallocFree) trace_heap_malloc(size, "AllocateHeap", p);
#endif
if (p == NULL && alloc_failmode == AllocFailStrategy::EXIT_OOM) vm_exit_out_of_memory(size, "AllocateHeap");
if (p == NULL && alloc_failmode == AllocFailStrategy::EXIT_OOM) {
vm_exit_out_of_memory(size, OOM_MALLOC_ERROR, "AllocateHeap");
}
return p;
}
@ -68,7 +70,9 @@ inline char* ReallocateHeap(char *old, size_t size, MEMFLAGS flags,
#ifdef ASSERT
if (PrintMallocFree) trace_heap_malloc(size, "ReallocateHeap", p);
#endif
if (p == NULL && alloc_failmode == AllocFailStrategy::EXIT_OOM) vm_exit_out_of_memory(size, "ReallocateHeap");
if (p == NULL && alloc_failmode == AllocFailStrategy::EXIT_OOM) {
vm_exit_out_of_memory(size, OOM_MALLOC_ERROR, "ReallocateHeap");
}
return p;
}
@ -130,12 +134,12 @@ E* ArrayAllocator<E, F>::allocate(size_t length) {
_addr = os::reserve_memory(_size, NULL, alignment);
if (_addr == NULL) {
vm_exit_out_of_memory(_size, "Allocator (reserve)");
vm_exit_out_of_memory(_size, OOM_MMAP_ERROR, "Allocator (reserve)");
}
bool success = os::commit_memory(_addr, _size, false /* executable */);
if (!success) {
vm_exit_out_of_memory(_size, "Allocator (commit)");
vm_exit_out_of_memory(_size, OOM_MMAP_ERROR, "Allocator (commit)");
}
return (E*)_addr;

@ -80,7 +80,7 @@ void BlockOffsetSharedArray::resize(size_t new_word_size) {
assert(delta > 0, "just checking");
if (!_vs.expand_by(delta)) {
// Do better than this for Merlin
vm_exit_out_of_memory(delta, "offset table expansion");
vm_exit_out_of_memory(delta, OOM_MMAP_ERROR, "offset table expansion");
}
assert(_vs.high() == high + delta, "invalid expansion");
} else {

@ -116,7 +116,7 @@ CardTableModRefBS::CardTableModRefBS(MemRegion whole_heap,
_guard_region = MemRegion((HeapWord*)guard_page, _page_size);
if (!os::commit_memory((char*)guard_page, _page_size, _page_size)) {
// Do better than this for Merlin
vm_exit_out_of_memory(_page_size, "card table last card");
vm_exit_out_of_memory(_page_size, OOM_MMAP_ERROR, "card table last card");
}
*guard_card = last_card;
@ -292,7 +292,7 @@ void CardTableModRefBS::resize_covered_region(MemRegion new_region) {
if (!os::commit_memory((char*)new_committed.start(),
new_committed.byte_size(), _page_size)) {
// Do better than this for Merlin
vm_exit_out_of_memory(new_committed.byte_size(),
vm_exit_out_of_memory(new_committed.byte_size(), OOM_MMAP_ERROR,
"card table expansion");
}
// Use new_end_aligned (as opposed to new_end_for_commit) because

@ -238,8 +238,8 @@ void FileMapInfo::write_header() {
void FileMapInfo::write_space(int i, Metaspace* space, bool read_only) {
align_file_position();
size_t used = space->used_words(Metaspace::NonClassType) * BytesPerWord;
size_t capacity = space->capacity_words(Metaspace::NonClassType) * BytesPerWord;
size_t used = space->used_bytes_slow(Metaspace::NonClassType);
size_t capacity = space->capacity_bytes_slow(Metaspace::NonClassType);
struct FileMapInfo::FileMapHeader::space_info* si = &_header._space[i];
write_region(i, (char*)space->bottom(), used, capacity, read_only, false);
}

View File

@ -377,7 +377,7 @@ void GenCollectedHeap::do_collection(bool full,
ClearedAllSoftRefs casr(do_clear_all_soft_refs, collector_policy());
const size_t metadata_prev_used = MetaspaceAux::used_in_bytes();
const size_t metadata_prev_used = MetaspaceAux::allocated_used_bytes();
print_heap_before_gc();
@ -447,8 +447,7 @@ void GenCollectedHeap::do_collection(bool full,
prepare_for_verify();
prepared_for_verification = true;
}
gclog_or_tty->print(" VerifyBeforeGC:");
Universe::verify();
Universe::verify(" VerifyBeforeGC:");
}
COMPILER2_PRESENT(DerivedPointerTable::clear());
@ -519,8 +518,7 @@ void GenCollectedHeap::do_collection(bool full,
if (VerifyAfterGC && i >= VerifyGCLevel &&
total_collections() >= VerifyGCStartAt) {
HandleMark hm; // Discard invalid handles created during verification
gclog_or_tty->print(" VerifyAfterGC:");
Universe::verify();
Universe::verify(" VerifyAfterGC:");
}
if (PrintGCDetails) {
@ -556,6 +554,7 @@ void GenCollectedHeap::do_collection(bool full,
if (complete) {
// Delete metaspaces for unloaded class loaders and clean up loader_data graph
ClassLoaderDataGraph::purge();
MetaspaceAux::verify_metrics();
// Resize the metaspace capacity after full collections
MetaspaceGC::compute_new_size();
update_full_collections_completed();
@ -633,9 +632,8 @@ gen_process_strong_roots(int level,
}
void GenCollectedHeap::gen_process_weak_roots(OopClosure* root_closure,
CodeBlobClosure* code_roots,
OopClosure* non_root_closure) {
SharedHeap::process_weak_roots(root_closure, code_roots, non_root_closure);
CodeBlobClosure* code_roots) {
SharedHeap::process_weak_roots(root_closure, code_roots);
// "Local" "weak" refs
for (int i = 0; i < _n_gens; i++) {
_gens[i]->ref_processor()->weak_oops_do(root_closure);

@ -432,8 +432,7 @@ public:
// JNI weak roots, the code cache, system dictionary, symbol table,
// string table, and referents of reachable weak refs.
void gen_process_weak_roots(OopClosure* root_closure,
CodeBlobClosure* code_roots,
OopClosure* non_root_closure);
CodeBlobClosure* code_roots);
// Set the saved marks of generations, if that makes sense.
// In particular, if any generation might iterate over the oops

@ -223,23 +223,23 @@ void GenMarkSweep::mark_sweep_phase1(int level,
&is_alive, &keep_alive, &follow_stack_closure, NULL);
}
// Follow system dictionary roots and unload classes
// This is the point where the entire marking should have completed.
assert(_marking_stack.is_empty(), "Marking should have completed");
// Unload classes and purge the SystemDictionary.
bool purged_class = SystemDictionary::do_unloading(&is_alive);
// Follow code cache roots
// Unload nmethods.
CodeCache::do_unloading(&is_alive, purged_class);
follow_stack(); // Flush marking stack
// Update subklass/sibling/implementor links of live klasses
// Prune dead klasses from subklass/sibling/implementor lists.
Klass::clean_weak_klass_links(&is_alive);
assert(_marking_stack.is_empty(), "just drained");
// Visit interned string tables and delete unmarked oops
// Delete entries for dead interned strings.
StringTable::unlink(&is_alive);
// Clean up unreferenced symbols in symbol table.
SymbolTable::unlink();
assert(_marking_stack.is_empty(), "stack should be empty by now");
}
@ -282,11 +282,10 @@ void GenMarkSweep::mark_sweep_phase3(int level) {
// Need new claim bits for the pointer adjustment tracing.
ClassLoaderDataGraph::clear_claimed_marks();
// Because the two closures below are created statically, cannot
// Because the closure below is created statically, we cannot
// use OopsInGenClosure constructor which takes a generation,
// as the Universe has not been created when the static constructors
// are run.
adjust_root_pointer_closure.set_orig_generation(gch->get_gen(level));
adjust_pointer_closure.set_orig_generation(gch->get_gen(level));
gch->gen_process_strong_roots(level,
@ -294,18 +293,17 @@ void GenMarkSweep::mark_sweep_phase3(int level) {
true, // activate StrongRootsScope
false, // not scavenging
SharedHeap::SO_AllClasses,
&adjust_root_pointer_closure,
&adjust_pointer_closure,
false, // do not walk code
&adjust_root_pointer_closure,
&adjust_pointer_closure,
&adjust_klass_closure);
// Now adjust pointers in remaining weak roots. (All of which should
// have been cleared if they pointed to non-surviving objects.)
CodeBlobToOopClosure adjust_code_pointer_closure(&adjust_pointer_closure,
/*do_marking=*/ false);
gch->gen_process_weak_roots(&adjust_root_pointer_closure,
&adjust_code_pointer_closure,
&adjust_pointer_closure);
gch->gen_process_weak_roots(&adjust_pointer_closure,
&adjust_code_pointer_closure);
adjust_marks();
GenAdjustPointersClosure blk;

@ -28,6 +28,7 @@
#include "utilities/copy.hpp"
#include "utilities/debug.hpp"
class VirtualSpaceNode;
//
// Future modification
//
@ -45,27 +46,30 @@ size_t Metachunk::_overhead =
// Metachunk methods
Metachunk* Metachunk::initialize(MetaWord* ptr, size_t word_size) {
// Set bottom, top, and end. Allow space for the Metachunk itself
Metachunk* chunk = (Metachunk*) ptr;
MetaWord* chunk_bottom = ptr + _overhead;
chunk->set_bottom(ptr);
chunk->set_top(chunk_bottom);
MetaWord* chunk_end = ptr + word_size;
assert(chunk_end > chunk_bottom, "Chunk must be too small");
chunk->set_end(chunk_end);
chunk->set_next(NULL);
chunk->set_prev(NULL);
chunk->set_word_size(word_size);
Metachunk::Metachunk(size_t word_size,
VirtualSpaceNode* container) :
_word_size(word_size),
_bottom(NULL),
_end(NULL),
_top(NULL),
_next(NULL),
_prev(NULL),
_container(container)
{
_bottom = (MetaWord*)this;
_top = (MetaWord*)this + _overhead;
_end = (MetaWord*)this + word_size;
#ifdef ASSERT
size_t data_word_size = pointer_delta(chunk_end, chunk_bottom, sizeof(MetaWord));
Copy::fill_to_words((HeapWord*) chunk_bottom, data_word_size, metadata_chunk_initialize);
set_is_free(false);
size_t data_word_size = pointer_delta(end(),
top(),
sizeof(MetaWord));
Copy::fill_to_words((HeapWord*) top(),
data_word_size,
metadata_chunk_initialize);
#endif
return chunk;
}
MetaWord* Metachunk::allocate(size_t word_size) {
MetaWord* result = NULL;
// If available, bump the pointer to allocate.
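The rewritten constructor lays the Metachunk header out at the start of its own memory: _bottom points at the chunk itself, _top starts just past the header overhead, and _end is bottom plus word_size, so allocate() can satisfy requests by bumping _top. A small standalone sketch of that bump-pointer layout, with invented names and sizes (not HotSpot code):

#include <cstdio>
#include <cstdlib>

// A chunk that lives at the front of its own allocation, like Metachunk:
// [ header | .... usable bytes .... ]
// ^bottom  ^top (grows)             ^end
struct Chunk {
  char* bottom;
  char* top;
  char* end;

  static Chunk* create(size_t total_bytes) {
    char* raw = static_cast<char*>(std::malloc(total_bytes));
    Chunk* c = reinterpret_cast<Chunk*>(raw);   // header is embedded at the start
    c->bottom = raw;
    c->top    = raw + sizeof(Chunk);            // skip the header "overhead"
    c->end    = raw + total_bytes;
    return c;
  }

  // Bump-pointer allocation: succeed only if the request fits before end.
  void* allocate(size_t bytes) {
    if (top + bytes > end) return nullptr;
    void* result = top;
    top += bytes;
    return result;
  }

  size_t free_bytes() const { return static_cast<size_t>(end - top); }
};

int main() {
  Chunk* c = Chunk::create(256);
  void* a = c->allocate(64);
  void* b = c->allocate(512);                   // too big: fails, chunk unchanged
  std::printf("a=%p b=%p free=%zu\n", a, b, c->free_bytes());
  std::free(c->bottom);
  return 0;
}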

@ -41,10 +41,13 @@
// | | | |
// +--------------+ <- bottom ---+ ---+
class VirtualSpaceNode;
class Metachunk VALUE_OBJ_CLASS_SPEC {
// link to support lists of chunks
Metachunk* _next;
Metachunk* _prev;
VirtualSpaceNode* _container;
MetaWord* _bottom;
MetaWord* _end;
@ -61,29 +64,20 @@ class Metachunk VALUE_OBJ_CLASS_SPEC {
// the space.
static size_t _overhead;
void set_bottom(MetaWord* v) { _bottom = v; }
void set_end(MetaWord* v) { _end = v; }
void set_top(MetaWord* v) { _top = v; }
void set_word_size(size_t v) { _word_size = v; }
public:
#ifdef ASSERT
Metachunk() : _bottom(NULL), _end(NULL), _top(NULL), _is_free(false),
_next(NULL), _prev(NULL) {}
#else
Metachunk() : _bottom(NULL), _end(NULL), _top(NULL),
_next(NULL), _prev(NULL) {}
#endif
Metachunk(size_t word_size , VirtualSpaceNode* container);
// Used to add a Metachunk to a list of Metachunks
void set_next(Metachunk* v) { _next = v; assert(v != this, "Boom");}
void set_prev(Metachunk* v) { _prev = v; assert(v != this, "Boom");}
void set_container(VirtualSpaceNode* v) { _container = v; }
MetaWord* allocate(size_t word_size);
static Metachunk* initialize(MetaWord* ptr, size_t word_size);
// Accessors
Metachunk* next() const { return _next; }
Metachunk* prev() const { return _prev; }
VirtualSpaceNode* container() const { return _container; }
MetaWord* bottom() const { return _bottom; }
MetaWord* end() const { return _end; }
MetaWord* top() const { return _top; }

File diff suppressed because it is too large.
@ -111,6 +111,10 @@ class Metaspace : public CHeapObj<mtClass> {
SpaceManager* _class_vsm;
SpaceManager* class_vsm() const { return _class_vsm; }
// Allocate space for metadata of type mdtype. This is space
// within a Metachunk and is used by
// allocate(ClassLoaderData*, size_t, bool, MetadataType, TRAPS)
// which returns a Metablock.
MetaWord* allocate(size_t word_size, MetadataType mdtype);
// Virtual Space lists for both classes and other metadata
@ -133,11 +137,14 @@ class Metaspace : public CHeapObj<mtClass> {
static size_t first_class_chunk_word_size() { return _first_class_chunk_word_size; }
char* bottom() const;
size_t used_words(MetadataType mdtype) const;
size_t used_words_slow(MetadataType mdtype) const;
size_t free_words(MetadataType mdtype) const;
size_t capacity_words(MetadataType mdtype) const;
size_t capacity_words_slow(MetadataType mdtype) const;
size_t waste_words(MetadataType mdtype) const;
size_t used_bytes_slow(MetadataType mdtype) const;
size_t capacity_bytes_slow(MetadataType mdtype) const;
static Metablock* allocate(ClassLoaderData* loader_data, size_t size,
bool read_only, MetadataType mdtype, TRAPS);
void deallocate(MetaWord* ptr, size_t byte_size, bool is_class);
@ -150,6 +157,9 @@ class Metaspace : public CHeapObj<mtClass> {
static bool contains(const void *ptr);
void dump(outputStream* const out) const;
// Free empty virtualspaces
static void purge();
void print_on(outputStream* st) const;
// Debugging support
void verify();
@ -158,28 +168,81 @@ class Metaspace : public CHeapObj<mtClass> {
class MetaspaceAux : AllStatic {
// Statistics for class space and data space in metaspace.
static size_t used_in_bytes(Metaspace::MetadataType mdtype);
// These methods iterate over the classloader data graph
// for the given Metaspace type. These are slow.
static size_t used_bytes_slow(Metaspace::MetadataType mdtype);
static size_t free_in_bytes(Metaspace::MetadataType mdtype);
static size_t capacity_in_bytes(Metaspace::MetadataType mdtype);
static size_t capacity_bytes_slow(Metaspace::MetadataType mdtype);
// Iterates over the virtual space list.
static size_t reserved_in_bytes(Metaspace::MetadataType mdtype);
static size_t free_chunks_total(Metaspace::MetadataType mdtype);
static size_t free_chunks_total_in_bytes(Metaspace::MetadataType mdtype);
public:
// Total of space allocated to metadata in all Metaspaces
static size_t used_in_bytes() {
return used_in_bytes(Metaspace::ClassType) +
used_in_bytes(Metaspace::NonClassType);
// Running sum of space in all Metachunks that has been
// allocated to a Metaspace. This is used instead of
// iterating over all the classloaders
static size_t _allocated_capacity_words;
// Running sum of space in all Metachunks that
// are being used for metadata.
static size_t _allocated_used_words;
public:
// Decrement and increment _allocated_capacity_words
static void dec_capacity(size_t words);
static void inc_capacity(size_t words);
// Decrement and increment _allocated_used_words
static void dec_used(size_t words);
static void inc_used(size_t words);
// Total of space allocated to metadata in all Metaspaces.
// This sums the space used in each Metachunk by
// iterating over the classloader data graph
static size_t used_bytes_slow() {
return used_bytes_slow(Metaspace::ClassType) +
used_bytes_slow(Metaspace::NonClassType);
}
// Total of available space in all Metaspaces
// Total of capacity allocated to all Metaspaces. This includes
// space in Metachunks not yet allocated and in the Metachunk
// freelist.
static size_t capacity_in_bytes() {
return capacity_in_bytes(Metaspace::ClassType) +
capacity_in_bytes(Metaspace::NonClassType);
// Used by MetaspaceCounters
static size_t free_chunks_total();
static size_t free_chunks_total_in_bytes();
static size_t allocated_capacity_words() {
return _allocated_capacity_words;
}
static size_t allocated_capacity_bytes() {
return _allocated_capacity_words * BytesPerWord;
}
static size_t allocated_used_words() {
return _allocated_used_words;
}
static size_t allocated_used_bytes() {
return _allocated_used_words * BytesPerWord;
}
static size_t free_bytes();
// Total capacity in all Metaspaces
static size_t capacity_bytes_slow() {
#ifdef PRODUCT
// Use allocated_capacity_bytes() in PRODUCT instead of this function.
guarantee(false, "Should not call capacity_bytes_slow() in the PRODUCT");
#endif
size_t class_capacity = capacity_bytes_slow(Metaspace::ClassType);
size_t non_class_capacity = capacity_bytes_slow(Metaspace::NonClassType);
assert(allocated_capacity_bytes() == class_capacity + non_class_capacity,
err_msg("bad accounting: allocated_capacity_bytes() " SIZE_FORMAT
" class_capacity + non_class_capacity " SIZE_FORMAT
" class_capacity " SIZE_FORMAT " non_class_capacity " SIZE_FORMAT,
allocated_capacity_bytes(), class_capacity + non_class_capacity,
class_capacity, non_class_capacity));
return class_capacity + non_class_capacity;
}
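MetaspaceAux now keeps _allocated_capacity_words and _allocated_used_words as running totals, updated through inc_/dec_capacity and inc_/dec_used, while the *_slow() variants still recompute the same figures by walking the class loader data graph; the assert above and verify_metrics() check that the two views agree. A minimal standalone sketch of that bookkeeping pattern (plain C++; Space and the global names are made up for illustration):

#include <cassert>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Space { size_t capacity_words; size_t used_words; };

static std::vector<Space> g_spaces;            // stand-in for the class loader data graph
static size_t g_allocated_capacity_words = 0;  // running totals, cheap to read
static size_t g_allocated_used_words = 0;

void register_space(size_t capacity, size_t used) {
  g_spaces.push_back({capacity, used});
  g_allocated_capacity_words += capacity;      // analogue of inc_capacity()
  g_allocated_used_words += used;              // analogue of inc_used()
}

// The "slow" versions recompute the totals by iterating every space.
size_t capacity_words_slow() {
  size_t sum = 0;
  for (const Space& s : g_spaces) sum += s.capacity_words;
  return sum;
}
size_t used_words_slow() {
  size_t sum = 0;
  for (const Space& s : g_spaces) sum += s.used_words;
  return sum;
}

// Analogue of verify_metrics(): the counters and the slow walk must agree.
void verify_metrics() {
  assert(g_allocated_capacity_words == capacity_words_slow());
  assert(g_allocated_used_words == used_words_slow());
}

int main() {
  register_space(1024, 100);
  register_space(2048, 700);
  verify_metrics();
  std::printf("capacity=%zu used=%zu\n",
              g_allocated_capacity_words, g_allocated_used_words);
  return 0;
}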
// Total space reserved in all Metaspaces
@ -198,6 +261,11 @@ class MetaspaceAux : AllStatic {
static void print_waste(outputStream* out);
static void dump(outputStream* out);
static void verify_free_chunks();
// Checks that the values returned by allocated_capacity_bytes() and
// capacity_bytes_slow() are the same.
static void verify_capacity();
static void verify_used();
static void verify_metrics();
};
// Metaspace are deallocated when their class loader are GC'ed.
@ -232,7 +300,6 @@ class MetaspaceGC : AllStatic {
public:
static size_t capacity_until_GC() { return _capacity_until_GC; }
static size_t capacity_until_GC_in_bytes() { return _capacity_until_GC * BytesPerWord; }
static void inc_capacity_until_GC(size_t v) { _capacity_until_GC += v; }
static void dec_capacity_until_GC(size_t v) {
_capacity_until_GC = _capacity_until_GC > v ? _capacity_until_GC - v : 0;

@ -29,6 +29,16 @@
MetaspaceCounters* MetaspaceCounters::_metaspace_counters = NULL;
size_t MetaspaceCounters::calc_total_capacity() {
// The total capacity is the sum of
// 1) capacity of Metachunks in use by all Metaspaces
// 2) unused space at the end of each Metachunk
// 3) space in the freelist
size_t total_capacity = MetaspaceAux::allocated_capacity_bytes()
+ MetaspaceAux::free_bytes() + MetaspaceAux::free_chunks_total_in_bytes();
return total_capacity;
}
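calc_total_capacity() therefore reports the sum of the three pools named in the comment: chunk capacity already handed to Metaspaces, unused space at the end of those chunks, and chunks sitting in the free list. A tiny worked example of that sum (plain C++; the byte counts are arbitrary):

#include <cstddef>
#include <cstdio>

int main() {
  // Hypothetical byte counts for the three components summed above:
  size_t allocated_capacity = 6 * 1024 * 1024;  // 1) capacity of Metachunks in use by all Metaspaces
  size_t free_in_chunks     = 512 * 1024;       // 2) unused space at the end of each Metachunk
  size_t free_chunk_list    = 1 * 1024 * 1024;  // 3) space in the freelist

  size_t total_capacity = allocated_capacity + free_in_chunks + free_chunk_list;
  std::printf("total metaspace capacity = %zu bytes\n", total_capacity);
  return 0;
}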
MetaspaceCounters::MetaspaceCounters() :
_capacity(NULL),
_used(NULL),
@ -36,8 +46,8 @@ MetaspaceCounters::MetaspaceCounters() :
if (UsePerfData) {
size_t min_capacity = MetaspaceAux::min_chunk_size();
size_t max_capacity = MetaspaceAux::reserved_in_bytes();
size_t curr_capacity = MetaspaceAux::capacity_in_bytes();
size_t used = MetaspaceAux::used_in_bytes();
size_t curr_capacity = calc_total_capacity();
size_t used = MetaspaceAux::allocated_used_bytes();
initialize(min_capacity, max_capacity, curr_capacity, used);
}
@ -82,15 +92,13 @@ void MetaspaceCounters::initialize(size_t min_capacity,
void MetaspaceCounters::update_capacity() {
assert(UsePerfData, "Should not be called unless being used");
assert(_capacity != NULL, "Should be initialized");
size_t capacity_in_bytes = MetaspaceAux::capacity_in_bytes();
_capacity->set_value(capacity_in_bytes);
size_t total_capacity = calc_total_capacity();
_capacity->set_value(total_capacity);
}
void MetaspaceCounters::update_used() {
assert(UsePerfData, "Should not be called unless being used");
assert(_used != NULL, "Should be initialized");
size_t used_in_bytes = MetaspaceAux::used_in_bytes();
size_t used_in_bytes = MetaspaceAux::allocated_used_bytes();
_used->set_value(used_in_bytes);
}

@ -37,6 +37,7 @@ class MetaspaceCounters: public CHeapObj<mtClass> {
size_t max_capacity,
size_t curr_capacity,
size_t used);
size_t calc_total_capacity();
public:
MetaspaceCounters();
~MetaspaceCounters();

@ -376,18 +376,17 @@ void VM_PopulateDumpSharedSpace::doit() {
const char* fmt = "%s space: %9d [ %4.1f%% of total] out of %9d bytes [%4.1f%% used] at " PTR_FORMAT;
Metaspace* ro_space = _loader_data->ro_metaspace();
Metaspace* rw_space = _loader_data->rw_metaspace();
const size_t BPW = BytesPerWord;
// Allocated size of each space (may not be all occupied)
const size_t ro_alloced = ro_space->capacity_words(Metaspace::NonClassType) * BPW;
const size_t rw_alloced = rw_space->capacity_words(Metaspace::NonClassType) * BPW;
const size_t ro_alloced = ro_space->capacity_bytes_slow(Metaspace::NonClassType);
const size_t rw_alloced = rw_space->capacity_bytes_slow(Metaspace::NonClassType);
const size_t md_alloced = md_end-md_low;
const size_t mc_alloced = mc_end-mc_low;
const size_t total_alloced = ro_alloced + rw_alloced + md_alloced + mc_alloced;
// Occupied size of each space.
const size_t ro_bytes = ro_space->used_words(Metaspace::NonClassType) * BPW;
const size_t rw_bytes = rw_space->used_words(Metaspace::NonClassType) * BPW;
const size_t ro_bytes = ro_space->used_bytes_slow(Metaspace::NonClassType);
const size_t rw_bytes = rw_space->used_bytes_slow(Metaspace::NonClassType);
const size_t md_bytes = size_t(md_top - md_low);
const size_t mc_bytes = size_t(mc_top - mc_low);

@ -218,14 +218,13 @@ public:
static AlwaysTrueClosure always_true;
void SharedHeap::process_weak_roots(OopClosure* root_closure,
CodeBlobClosure* code_roots,
OopClosure* non_root_closure) {
CodeBlobClosure* code_roots) {
// Global (weak) JNI handles
JNIHandles::weak_oops_do(&always_true, root_closure);
CodeCache::blobs_do(code_roots);
StringTable::oops_do(root_closure);
}
StringTable::oops_do(root_closure);
}
void SharedHeap::set_barrier_set(BarrierSet* bs) {
_barrier_set = bs;

@ -249,8 +249,7 @@ public:
// JNI weak roots, the code cache, system dictionary, symbol table,
// string table.
void process_weak_roots(OopClosure* root_closure,
CodeBlobClosure* code_roots,
OopClosure* non_root_closure);
CodeBlobClosure* code_roots);
// The functions below are helper functions that a subclass of
// "SharedHeap" can use in the implementation of its virtual

@ -1270,7 +1270,7 @@ void Universe::print_heap_after_gc(outputStream* st, bool ignore_extended) {
st->print_cr("}");
}
void Universe::verify(bool silent, VerifyOption option) {
void Universe::verify(VerifyOption option, const char* prefix, bool silent) {
// The use of _verify_in_progress is a temporary work around for
// 6320749. Don't bother with creating a class to set and clear
// it since it is only used in this method and the control flow is
@ -1287,11 +1287,12 @@ void Universe::verify(bool silent, VerifyOption option) {
HandleMark hm; // Handles created during verification can be zapped
_verify_count++;
if (!silent) gclog_or_tty->print(prefix);
if (!silent) gclog_or_tty->print("[Verifying ");
if (!silent) gclog_or_tty->print("threads ");
Threads::verify();
if (!silent) gclog_or_tty->print("heap ");
heap()->verify(silent, option);
if (!silent) gclog_or_tty->print("syms ");
SymbolTable::verify();
if (!silent) gclog_or_tty->print("strs ");

@ -445,12 +445,12 @@ class Universe: AllStatic {
// Debugging
static bool verify_in_progress() { return _verify_in_progress; }
static void verify(bool silent, VerifyOption option);
static void verify(bool silent) {
verify(silent, VerifyOption_Default /* option */);
static void verify(VerifyOption option, const char* prefix, bool silent = VerifySilently);
static void verify(const char* prefix, bool silent = VerifySilently) {
verify(VerifyOption_Default, prefix, silent);
}
static void verify() {
verify(false /* silent */);
static void verify(bool silent = VerifySilently) {
verify("", silent);
}
static int verify_count() { return _verify_count; }

@ -40,6 +40,7 @@
#include "runtime/init.hpp"
#include "runtime/javaCalls.hpp"
#include "runtime/signature.hpp"
#include "runtime/synchronizer.hpp"
#include "runtime/vframe.hpp"
ConstantPool* ConstantPool::allocate(ClassLoaderData* loader_data, int length, TRAPS) {
@ -69,7 +70,6 @@ ConstantPool::ConstantPool(Array<u1>* tags) {
// only set to non-zero if constant pool is merged by RedefineClasses
set_version(0);
set_lock(new Monitor(Monitor::nonleaf + 2, "A constant pool lock"));
// initialize tag array
int length = tags->length();
@ -95,9 +95,6 @@ void ConstantPool::deallocate_contents(ClassLoaderData* loader_data) {
void ConstantPool::release_C_heap_structures() {
// walk constant pool and decrement symbol reference counts
unreference_symbols();
delete _lock;
set_lock(NULL);
}
objArrayOop ConstantPool::resolved_references() const {
@ -154,9 +151,6 @@ void ConstantPool::restore_unshareable_info(TRAPS) {
ClassLoaderData* loader_data = pool_holder()->class_loader_data();
set_resolved_references(loader_data->add_handle(refs_handle));
}
// Also need to recreate the mutex. Make sure this matches the constructor
set_lock(new Monitor(Monitor::nonleaf + 2, "A constant pool lock"));
}
}
@ -167,7 +161,23 @@ void ConstantPool::remove_unshareable_info() {
set_resolved_reference_length(
resolved_references() != NULL ? resolved_references()->length() : 0);
set_resolved_references(NULL);
set_lock(NULL);
}
oop ConstantPool::lock() {
if (_pool_holder) {
// We re-use the _pool_holder's init_lock to reduce footprint.
// Notes on deadlocks:
// [1] This lock is a Java oop, so it can be recursively locked by
// the same thread without self-deadlocks.
// [2] Deadlock will happen if there is circular dependency between
// the <clinit> of two Java classes. However, in this case,
// the deadlock would have happened long before we reach
// ConstantPool::lock(), so reusing init_lock does not
// increase the possibility of deadlock.
return _pool_holder->init_lock();
} else {
return NULL;
}
}
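Because lock() can legitimately return NULL before the holder class is registered, every caller passes the result to ObjectLocker together with a "do lock" flag, as the hunks below show. A standalone analogue of that conditional scoped-locking pattern (plain C++ with std::mutex standing in for the Java monitor; MaybeLocker and update_pool are invented for this sketch):

#include <cstdio>
#include <mutex>

// Conditional scoped locking in the spirit of
//   ObjectLocker ol(cplock, THREAD, cplock != NULL);
// If the lock object does not exist yet (the holder class is not registered,
// so no other thread can touch the pool), simply skip locking.
class MaybeLocker {
  std::mutex* _m;
 public:
  explicit MaybeLocker(std::mutex* m) : _m(m) { if (_m != nullptr) _m->lock(); }
  ~MaybeLocker() { if (_m != nullptr) _m->unlock(); }
};

std::mutex g_lock;

void update_pool(bool published) {
  // Before the holder is published there is no lock to take; afterwards we
  // synchronize on the shared lock object.
  std::mutex* lock = published ? &g_lock : nullptr;
  MaybeLocker ml(lock);
  std::puts(published ? "updated under lock" : "updated without lock");
}

int main() {
  update_pool(false);
  update_pool(true);
  return 0;
}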
int ConstantPool::cp_to_object_index(int cp_index) {
@ -208,7 +218,9 @@ Klass* ConstantPool::klass_at_impl(constantPoolHandle this_oop, int which, TRAPS
Symbol* name = NULL;
Handle loader;
{ MonitorLockerEx ml(this_oop->lock());
{
oop cplock = this_oop->lock();
ObjectLocker ol(cplock , THREAD, cplock != NULL);
if (this_oop->tag_at(which).is_unresolved_klass()) {
if (this_oop->tag_at(which).is_unresolved_klass_in_error()) {
@ -255,7 +267,8 @@ Klass* ConstantPool::klass_at_impl(constantPoolHandle this_oop, int which, TRAPS
bool throw_orig_error = false;
{
MonitorLockerEx ml(this_oop->lock());
oop cplock = this_oop->lock();
ObjectLocker ol(cplock, THREAD, cplock != NULL);
// some other thread has beaten us and has resolved the class.
if (this_oop->tag_at(which).is_klass()) {
@ -323,7 +336,8 @@ Klass* ConstantPool::klass_at_impl(constantPoolHandle this_oop, int which, TRAPS
}
return k();
} else {
MonitorLockerEx ml(this_oop->lock());
oop cplock = this_oop->lock();
ObjectLocker ol(cplock, THREAD, cplock != NULL);
// Only updated constant pool - if it is resolved.
do_resolve = this_oop->tag_at(which).is_unresolved_klass();
if (do_resolve) {
@ -619,7 +633,8 @@ void ConstantPool::save_and_throw_exception(constantPoolHandle this_oop, int whi
int tag, TRAPS) {
ResourceMark rm;
Symbol* error = PENDING_EXCEPTION->klass()->name();
MonitorLockerEx ml(this_oop->lock()); // lock cpool to change tag.
oop cplock = this_oop->lock();
ObjectLocker ol(cplock, THREAD, cplock != NULL); // lock cpool to change tag.
int error_tag = (tag == JVM_CONSTANT_MethodHandle) ?
JVM_CONSTANT_MethodHandleInError : JVM_CONSTANT_MethodTypeInError;
@ -780,7 +795,8 @@ oop ConstantPool::resolve_constant_at_impl(constantPoolHandle this_oop, int inde
if (cache_index >= 0) {
// Cache the oop here also.
Handle result_handle(THREAD, result_oop);
MonitorLockerEx ml(this_oop->lock()); // don't know if we really need this
oop cplock = this_oop->lock();
ObjectLocker ol(cplock, THREAD, cplock != NULL); // don't know if we really need this
oop result = this_oop->resolved_references()->obj_at(cache_index);
// Benign race condition: resolved_references may already be filled in while we were trying to lock.
// The important thing here is that all threads pick up the same result.
@ -1043,24 +1059,13 @@ bool ConstantPool::compare_entry_to(int index1, constantPoolHandle cp2,
case JVM_CONSTANT_InvokeDynamic:
{
int k1 = invoke_dynamic_bootstrap_method_ref_index_at(index1);
int k2 = cp2->invoke_dynamic_bootstrap_method_ref_index_at(index2);
bool match = compare_entry_to(k1, cp2, k2, CHECK_false);
if (!match) return false;
k1 = invoke_dynamic_name_and_type_ref_index_at(index1);
k2 = cp2->invoke_dynamic_name_and_type_ref_index_at(index2);
match = compare_entry_to(k1, cp2, k2, CHECK_false);
if (!match) return false;
int argc = invoke_dynamic_argument_count_at(index1);
if (argc == cp2->invoke_dynamic_argument_count_at(index2)) {
for (int j = 0; j < argc; j++) {
k1 = invoke_dynamic_argument_index_at(index1, j);
k2 = cp2->invoke_dynamic_argument_index_at(index2, j);
match = compare_entry_to(k1, cp2, k2, CHECK_false);
if (!match) return false;
}
return true; // got through loop; all elements equal
}
int k1 = invoke_dynamic_name_and_type_ref_index_at(index1);
int k2 = cp2->invoke_dynamic_name_and_type_ref_index_at(index2);
int i1 = invoke_dynamic_bootstrap_specifier_index(index1);
int i2 = cp2->invoke_dynamic_bootstrap_specifier_index(index2);
bool match = compare_entry_to(k1, cp2, k2, CHECK_false) &&
compare_operand_to(i1, cp2, i2, CHECK_false);
return match;
} break;
case JVM_CONSTANT_String:
@ -1095,6 +1100,80 @@ bool ConstantPool::compare_entry_to(int index1, constantPoolHandle cp2,
} // end compare_entry_to()
// Resize the operands array with delta_len and delta_size.
// Used in RedefineClasses for CP merge.
void ConstantPool::resize_operands(int delta_len, int delta_size, TRAPS) {
int old_len = operand_array_length(operands());
int new_len = old_len + delta_len;
int min_len = (delta_len > 0) ? old_len : new_len;
int old_size = operands()->length();
int new_size = old_size + delta_size;
int min_size = (delta_size > 0) ? old_size : new_size;
ClassLoaderData* loader_data = pool_holder()->class_loader_data();
Array<u2>* new_ops = MetadataFactory::new_array<u2>(loader_data, new_size, CHECK);
// Set index in the resized array for existing elements only
for (int idx = 0; idx < min_len; idx++) {
int offset = operand_offset_at(idx); // offset in original array
operand_offset_at_put(new_ops, idx, offset + 2*delta_len); // offset in resized array
}
// Copy the bootstrap specifiers only
Copy::conjoint_memory_atomic(operands()->adr_at(2*old_len),
new_ops->adr_at(2*new_len),
(min_size - 2*min_len) * sizeof(u2));
// Explicitly deallocate old operands array.
// Note, it is not needed for 7u backport.
if ( operands() != NULL) { // the safety check
MetadataFactory::free_array<u2>(loader_data, operands());
}
set_operands(new_ops);
} // end resize_operands()
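resize_operands() has to preserve the two-part layout of the operands array: the first 2*len u2 slots form an offset table (two u2 halves per 32-bit offset), and each offset points at a bootstrap-specifier record of the form [bsm_ref_index, argc, argv...]. A standalone sketch of that packed layout and the accessors it implies (plain C++; Operands and build() are invented for illustration, not the HotSpot representation):

#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

// Packed "operands"-style array: the first 2*N u2 slots hold N 32-bit offsets
// (each split into two 16-bit halves), followed by variable-length records
//   [ bsm_ref_index, argc, arg0 .. arg(argc-1) ]
struct Operands {
  std::vector<uint16_t> data;
  int count;

  uint32_t offset_at(int i) const {
    return static_cast<uint32_t>(data[2 * i]) |
           (static_cast<uint32_t>(data[2 * i + 1]) << 16);
  }
  uint16_t bsm_ref_index_at(int i) const        { return data[offset_at(i) + 0]; }
  uint16_t argument_count_at(int i) const       { return data[offset_at(i) + 1]; }
  uint16_t argument_index_at(int i, int j) const { return data[offset_at(i) + 2 + j]; }
};

// Build an Operands block from a list of (bsm_index, args) specifiers.
Operands build(const std::vector<std::pair<uint16_t, std::vector<uint16_t>>>& specs) {
  Operands ops;
  ops.count = static_cast<int>(specs.size());
  ops.data.resize(2 * specs.size());                      // reserve the offset table
  for (size_t i = 0; i < specs.size(); i++) {
    uint32_t off = static_cast<uint32_t>(ops.data.size());
    ops.data[2 * i]     = static_cast<uint16_t>(off & 0xffff);
    ops.data[2 * i + 1] = static_cast<uint16_t>(off >> 16);
    ops.data.push_back(specs[i].first);                                 // bsm ref index
    ops.data.push_back(static_cast<uint16_t>(specs[i].second.size()));  // argc
    for (uint16_t a : specs[i].second) ops.data.push_back(a);           // argv
  }
  return ops;
}

int main() {
  Operands ops = build({ {17, {3, 4}}, {23, {}} });
  std::printf("spec 0: bsm=%u argc=%u arg0=%u\n",
              (unsigned)ops.bsm_ref_index_at(0),
              (unsigned)ops.argument_count_at(0),
              (unsigned)ops.argument_index_at(0, 0));
  std::printf("spec 1: bsm=%u argc=%u\n",
              (unsigned)ops.bsm_ref_index_at(1),
              (unsigned)ops.argument_count_at(1));
  return 0;
}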
// Extend the operands array with the length and size of the ext_cp operands.
// Used in RedefineClasses for CP merge.
void ConstantPool::extend_operands(constantPoolHandle ext_cp, TRAPS) {
int delta_len = operand_array_length(ext_cp->operands());
if (delta_len == 0) {
return; // nothing to do
}
int delta_size = ext_cp->operands()->length();
assert(delta_len > 0 && delta_size > 0, "extended operands array must be bigger");
if (operand_array_length(operands()) == 0) {
ClassLoaderData* loader_data = pool_holder()->class_loader_data();
Array<u2>* new_ops = MetadataFactory::new_array<u2>(loader_data, delta_size, CHECK);
// The first element index defines the offset of second part
operand_offset_at_put(new_ops, 0, 2*delta_len); // offset in new array
set_operands(new_ops);
} else {
resize_operands(delta_len, delta_size, CHECK);
}
} // end extend_operands()
// Shrink the operands array to a smaller array with new_len length.
// Used in RedefineClasses for CP merge.
void ConstantPool::shrink_operands(int new_len, TRAPS) {
int old_len = operand_array_length(operands());
if (new_len == old_len) {
return; // nothing to do
}
assert(new_len < old_len, "shrunken operands array must be smaller");
int free_base = operand_next_offset_at(new_len - 1);
int delta_len = new_len - old_len;
int delta_size = 2*delta_len + free_base - operands()->length();
resize_operands(delta_len, delta_size, CHECK);
} // end shrink_operands()
void ConstantPool::copy_operands(constantPoolHandle from_cp,
constantPoolHandle to_cp,
TRAPS) {
@ -1357,6 +1436,46 @@ int ConstantPool::find_matching_entry(int pattern_i,
} // end find_matching_entry()
// Compare this constant pool's bootstrap specifier at idx1 to the constant pool
// cp2's bootstrap specifier at idx2.
bool ConstantPool::compare_operand_to(int idx1, constantPoolHandle cp2, int idx2, TRAPS) {
int k1 = operand_bootstrap_method_ref_index_at(idx1);
int k2 = cp2->operand_bootstrap_method_ref_index_at(idx2);
bool match = compare_entry_to(k1, cp2, k2, CHECK_false);
if (!match) {
return false;
}
int argc = operand_argument_count_at(idx1);
if (argc == cp2->operand_argument_count_at(idx2)) {
for (int j = 0; j < argc; j++) {
k1 = operand_argument_index_at(idx1, j);
k2 = cp2->operand_argument_index_at(idx2, j);
match = compare_entry_to(k1, cp2, k2, CHECK_false);
if (!match) {
return false;
}
}
return true; // got through loop; all elements equal
}
return false;
} // end compare_operand_to()
// Search constant pool search_cp for a bootstrap specifier that matches
// this constant pool's bootstrap specifier at pattern_i index.
// Return the index of a matching bootstrap specifier or (-1) if there is no match.
int ConstantPool::find_matching_operand(int pattern_i,
constantPoolHandle search_cp, int search_len, TRAPS) {
for (int i = 0; i < search_len; i++) {
bool found = compare_operand_to(pattern_i, search_cp, i, CHECK_(-1));
if (found) {
return i;
}
}
return -1; // bootstrap specifier not found; return unused index (-1)
} // end find_matching_operand()
#ifndef PRODUCT
const char* ConstantPool::printable_name_at(int which) {

@ -111,7 +111,6 @@ class ConstantPool : public Metadata {
int _version;
} _saved;
Monitor* _lock;
void set_tags(Array<u1>* tags) { _tags = tags; }
void tag_at_put(int which, jbyte t) { tags()->at_put(which, t); }
@ -567,6 +566,47 @@ class ConstantPool : public Metadata {
_indy_argc_offset = 1, // u2 argc
_indy_argv_offset = 2 // u2 argv[argc]
};
// These functions are used in RedefineClasses for CP merge
int operand_offset_at(int bootstrap_specifier_index) {
assert(0 <= bootstrap_specifier_index &&
bootstrap_specifier_index < operand_array_length(operands()),
"Corrupted CP operands");
return operand_offset_at(operands(), bootstrap_specifier_index);
}
int operand_bootstrap_method_ref_index_at(int bootstrap_specifier_index) {
int offset = operand_offset_at(bootstrap_specifier_index);
return operands()->at(offset + _indy_bsm_offset);
}
int operand_argument_count_at(int bootstrap_specifier_index) {
int offset = operand_offset_at(bootstrap_specifier_index);
int argc = operands()->at(offset + _indy_argc_offset);
return argc;
}
int operand_argument_index_at(int bootstrap_specifier_index, int j) {
int offset = operand_offset_at(bootstrap_specifier_index);
return operands()->at(offset + _indy_argv_offset + j);
}
int operand_next_offset_at(int bootstrap_specifier_index) {
int offset = operand_offset_at(bootstrap_specifier_index) + _indy_argv_offset
+ operand_argument_count_at(bootstrap_specifier_index);
return offset;
}
// Compare a bootstrap specifier in the operands arrays
bool compare_operand_to(int bootstrap_specifier_index1, constantPoolHandle cp2,
int bootstrap_specifier_index2, TRAPS);
// Find a bootstrap specifier in the operands array
int find_matching_operand(int bootstrap_specifier_index, constantPoolHandle search_cp,
int operands_cur_len, TRAPS);
// Resize the operands array with delta_len and delta_size
void resize_operands(int delta_len, int delta_size, TRAPS);
// Extend the operands array with the length and size of the ext_cp operands
void extend_operands(constantPoolHandle ext_cp, TRAPS);
// Shrink the operands array to a smaller array with new_len length
void shrink_operands(int new_len, TRAPS);
int invoke_dynamic_bootstrap_method_ref_index_at(int which) {
assert(tag_at(which).is_invoke_dynamic(), "Corrupted constant pool");
int op_base = invoke_dynamic_operand_base(which);
@ -782,8 +822,17 @@ class ConstantPool : public Metadata {
void set_resolved_reference_length(int length) { _saved._resolved_reference_length = length; }
int resolved_reference_length() const { return _saved._resolved_reference_length; }
void set_lock(Monitor* lock) { _lock = lock; }
Monitor* lock() { return _lock; }
// lock() may return null -- constant pool updates may happen before this lock is
// initialized, because the _pool_holder has not been fully initialized and
// has not been registered into the system dictionary. In this case, no other
// thread can be modifying this constantpool, so no synchronization is
// necessary.
//
// Use cplock() like this:
// oop cplock = cp->lock();
// ObjectLocker ol(cplock , THREAD, cplock != NULL);
oop lock();
// Decrease ref counts of symbols that are in the constant pool
// when the holder class is unloaded

@ -266,7 +266,8 @@ void ConstantPoolCacheEntry::set_method_handle_common(constantPoolHandle cpool,
// the lock, so that when the losing writer returns, he can use the linked
// cache entry.
MonitorLockerEx ml(cpool->lock());
oop cplock = cpool->lock();
ObjectLocker ol(cplock, Thread::current(), cplock != NULL);
if (!is_f1_null()) {
return;
}

@ -54,6 +54,7 @@
#include "runtime/javaCalls.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/thread.inline.hpp"
#include "services/classLoadingService.hpp"
#include "services/threadService.hpp"
#include "utilities/dtrace.hpp"
#include "utilities/macros.hpp"
@ -418,25 +419,6 @@ void InstanceKlass::deallocate_contents(ClassLoaderData* loader_data) {
set_annotations(NULL);
}
volatile oop InstanceKlass::init_lock() const {
volatile oop lock = _init_lock; // read once
assert((oop)lock != NULL || !is_not_initialized(), // initialized or in_error state
"only fully initialized state can have a null lock");
return lock;
}
// Set the initialization lock to null so the object can be GC'ed. Any racing
// threads to get this lock will see a null lock and will not lock.
// That's okay because they all check for initialized state after getting
// the lock and return.
void InstanceKlass::fence_and_clear_init_lock() {
// make sure previous stores are all done, notably the init_state.
OrderAccess::storestore();
klass_oop_store(&_init_lock, NULL);
assert(!is_not_initialized(), "class must be initialized now");
}
bool InstanceKlass::should_be_initialized() const {
return !is_initialized();
}
@ -473,7 +455,7 @@ void InstanceKlass::eager_initialize(Thread *thread) {
void InstanceKlass::eager_initialize_impl(instanceKlassHandle this_oop) {
EXCEPTION_MARK;
volatile oop init_lock = this_oop->init_lock();
ObjectLocker ol(init_lock, THREAD, init_lock != NULL);
ObjectLocker ol(init_lock, THREAD);
// abort if someone beat us to the initialization
if (!this_oop->is_not_initialized()) return; // note: not equivalent to is_initialized()
@ -492,7 +474,6 @@ void InstanceKlass::eager_initialize_impl(instanceKlassHandle this_oop) {
} else {
// linking successful, mark class as initialized
this_oop->set_init_state (fully_initialized);
this_oop->fence_and_clear_init_lock();
// trace
if (TraceClassInitialization) {
ResourceMark rm(THREAD);
@ -619,7 +600,7 @@ bool InstanceKlass::link_class_impl(
// verification & rewriting
{
volatile oop init_lock = this_oop->init_lock();
ObjectLocker ol(init_lock, THREAD, init_lock != NULL);
ObjectLocker ol(init_lock, THREAD);
// rewritten will have been set if loader constraint error found
// on an earlier link attempt
// don't verify or rewrite if already rewritten
@ -742,7 +723,7 @@ void InstanceKlass::initialize_impl(instanceKlassHandle this_oop, TRAPS) {
// Step 1
{
volatile oop init_lock = this_oop->init_lock();
ObjectLocker ol(init_lock, THREAD, init_lock != NULL);
ObjectLocker ol(init_lock, THREAD);
Thread *self = THREAD; // it's passed the current thread
@ -890,9 +871,8 @@ void InstanceKlass::set_initialization_state_and_notify(ClassState state, TRAPS)
void InstanceKlass::set_initialization_state_and_notify_impl(instanceKlassHandle this_oop, ClassState state, TRAPS) {
volatile oop init_lock = this_oop->init_lock();
ObjectLocker ol(init_lock, THREAD, init_lock != NULL);
ObjectLocker ol(init_lock, THREAD);
this_oop->set_init_state(state);
this_oop->fence_and_clear_init_lock();
ol.notify_all(CHECK);
}
@ -2312,7 +2292,29 @@ static void clear_all_breakpoints(Method* m) {
m->clear_all_breakpoints();
}
void InstanceKlass::notify_unload_class(InstanceKlass* ik) {
// notify the debugger
if (JvmtiExport::should_post_class_unload()) {
JvmtiExport::post_class_unload(ik);
}
// notify ClassLoadingService of class unload
ClassLoadingService::notify_class_unloaded(ik);
}
void InstanceKlass::release_C_heap_structures(InstanceKlass* ik) {
// Clean up C heap
ik->release_C_heap_structures();
ik->constants()->release_C_heap_structures();
}
void InstanceKlass::release_C_heap_structures() {
// Can't release the constant pool here because the constant pool can be
// deallocated separately from the InstanceKlass for default methods and
// redefine classes.
// Deallocate oop map cache
if (_oop_map_cache != NULL) {
delete _oop_map_cache;
@ -2837,7 +2839,7 @@ void InstanceKlass::print_on(outputStream* st) const {
st->print(BULLET"protection domain: "); ((InstanceKlass*)this)->protection_domain()->print_value_on(st); st->cr();
st->print(BULLET"host class: "); host_klass()->print_value_on_maybe_null(st); st->cr();
st->print(BULLET"signers: "); signers()->print_value_on(st); st->cr();
st->print(BULLET"init_lock: "); ((oop)_init_lock)->print_value_on(st); st->cr();
st->print(BULLET"init_lock: "); ((oop)_init_lock)->print_value_on(st); st->cr();
if (source_file_name() != NULL) {
st->print(BULLET"source file: ");
source_file_name()->print_value_on(st);

@ -184,8 +184,9 @@ class InstanceKlass: public Klass {
oop _protection_domain;
// Class signers.
objArrayOop _signers;
// Initialization lock. Must be one per class and it has to be a VM internal
// object so java code cannot lock it (like the mirror)
// Lock for (1) initialization; (2) access to the ConstantPool of this class.
// Must be one per class and it has to be a VM internal object so java code
// cannot lock it (like the mirror).
// It has to be an object not a Mutex because it's held through java calls.
volatile oop _init_lock;
@ -236,7 +237,7 @@ class InstanceKlass: public Klass {
_misc_rewritten = 1 << 0, // methods rewritten.
_misc_has_nonstatic_fields = 1 << 1, // for sizing with UseCompressedOops
_misc_should_verify_class = 1 << 2, // allow caching of preverification
_misc_is_anonymous = 1 << 3, // has embedded _inner_classes field
_misc_is_anonymous = 1 << 3, // has embedded _host_klass field
_misc_is_contended = 1 << 4, // marked with contended annotation
_misc_has_default_methods = 1 << 5 // class/superclass/implemented interfaces has default methods
};
@ -934,7 +935,9 @@ class InstanceKlass: public Klass {
// referenced by handles.
bool on_stack() const { return _constants->on_stack(); }
void release_C_heap_structures();
// callbacks for actions during class unloading
static void notify_unload_class(InstanceKlass* ik);
static void release_C_heap_structures(InstanceKlass* ik);
// Parallel Scavenge and Parallel Old
PARALLEL_GC_DECLS
@ -968,6 +971,7 @@ class InstanceKlass: public Klass {
#endif // INCLUDE_ALL_GCS
u2 idnum_allocated_count() const { return _idnum_allocated_count; }
private:
// initialization state
#ifdef ASSERT
@ -994,9 +998,10 @@ private:
{ OrderAccess::release_store_ptr(&_methods_cached_itable_indices, indices); }
// Lock during initialization
volatile oop init_lock() const;
void set_init_lock(oop value) { klass_oop_store(&_init_lock, value); }
void fence_and_clear_init_lock(); // after fully_initialized
public:
volatile oop init_lock() const {return _init_lock; }
private:
void set_init_lock(oop value) { klass_oop_store(&_init_lock, value); }
// Offsets for memory management
oop* adr_protection_domain() const { return (oop*)&this->_protection_domain;}
@ -1022,6 +1027,8 @@ private:
// Returns the array class with this class as element type
Klass* array_klass_impl(bool or_null, TRAPS);
// Free CHeap allocated fields.
void release_C_heap_structures();
public:
// CDS support - remove and restore oops from metadata. Oops are not shared.
virtual void remove_unshareable_info();

@ -519,6 +519,9 @@ bool klassVtable::is_miranda_entry_at(int i) {
// check if a method is a miranda method, given a class's methods table and its super
// the caller must make sure that the method belongs to an interface implemented by the class
bool klassVtable::is_miranda(Method* m, Array<Method*>* class_methods, Klass* super) {
if (m->is_static()) {
return false;
}
Symbol* name = m->name();
Symbol* signature = m->signature();
if (InstanceKlass::find_method(class_methods, name, signature) == NULL) {

Some files were not shown because too many files have changed in this diff.