Prasanta Sadhukhan 2019-06-04 13:34:50 +05:30
commit 60f385e737
80 changed files with 1843 additions and 300 deletions

doc/ide.html Normal file
@@ -0,0 +1,54 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<title>IDE support in the JDK</title>
<style type="text/css">
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
</style>
<link rel="stylesheet" href="../make/data/docs-resources/resources/jdk-default.css" />
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
</head>
<body>
<header id="title-block-header">
<h1 class="title">IDE support in the JDK</h1>
</header>
<nav id="TOC">
<ul>
<li><a href="#introduction">Introduction</a><ul>
<li><a href="#ide-support-for-native-code">IDE support for native code</a></li>
<li><a href="#ide-support-for-java-code">IDE support for Java code</a></li>
</ul></li>
</ul>
</nav>
<h2 id="introduction">Introduction</h2>
<p>Once you are familiar with building and testing the JDK, you may want to configure an IDE to work with the source code. The instructions differ slightly depending on whether you want to work with the native (C/C++) code or the Java code.</p>
<h3 id="ide-support-for-native-code">IDE support for native code</h3>
<p>There are several ways to generate IDE configuration for the native sources, depending on which IDE you use.</p>
<h4 id="visual-studio-code">Visual Studio Code</h4>
<p>The make system can generate a <a href="https://code.visualstudio.com">Visual Studio Code</a> workspace that has C/C++ source indexing configured correctly, as well as launcher targets for tests and the Java launcher. After configuring, a workspace for the configuration can be generated using:</p>
<pre class="shell"><code>make vscode-project</code></pre>
<p>This creates a file called <code>jdk.code-workspace</code> in the build output folder. The full location will be printed after the workspace has been generated. To use it, choose <code>File -&gt; Open Workspace...</code> in Visual Studio Code.</p>
<h5 id="alternative-indexers">Alternative indexers</h5>
<p>The main <code>vscode-project</code> target configures the default C++ support in Visual Studio Code. Other source indexers can also be installed, and may provide additional features. It's currently possible to generate configuration for three such indexers: <a href="https://clang.llvm.org/extra/clangd/">clangd</a>, <a href="https://github.com/Andersbakken/rtags">rtags</a> and <a href="https://github.com/MaskRay/ccls">ccls</a>. These can be configured by appending the name of the indexer to the make target, for example:</p>
<pre class="shell"><code>make vscode-project-clangd</code></pre>
<p>Additional instructions for configuring the given indexer will be displayed after the workspace has been generated.</p>
<h4 id="visual-studio">Visual Studio</h4>
<p>This section is a work in progress. A Visual Studio project for the native sources can be generated with:</p>
<pre class="shell"><code>make ide-project</code></pre>
<h4 id="compilation-database">Compilation Database</h4>
<p>The make system can generate generic native code indexing support in the form of a <a href="https://clang.llvm.org/docs/JSONCompilationDatabase.html">Compilation Database</a> that can be used by many different IDEs and source code indexers.</p>
<pre class="shell"><code>make compile-commands</code></pre>
<p>It's also possible to generate the Compilation Database for the HotSpot source code only, which is a bit faster as it includes less information.</p>
<pre class="shell"><code>make compile-commands-hotspot</code></pre>
<h3 id="ide-support-for-java-code">IDE support for Java code</h3>
<p>This section is a work in progress.</p>
</body>
</html>

doc/ide.md Normal file
@@ -0,0 +1,73 @@
% IDE support in the JDK
## Introduction
Once you are familiar with building and testing the JDK, you may want to
configure an IDE to work with the source code. The instructions differ slightly
depending on whether you want to work with the native (C/C++) code or the
Java code.
### IDE support for native code
There are several ways to generate IDE configuration for the native sources,
depending on which IDE you use.
#### Visual Studio Code
The make system can generate a [Visual Studio Code](https://code.visualstudio.com)
workspace that has C/C++ source indexing configured correctly, as well as
launcher targets for tests and the Java launcher. After configuring, a workspace
for the configuration can be generated using:
```shell
make vscode-project
```
This creates a file called `jdk.code-workspace` in the build output folder. The
full location will be printed after the workspace has been generated. To use it,
choose `File -> Open Workspace...` in Visual Studio Code.
##### Alternative indexers
The main `vscode-project` target configures the default C++ support in Visual
Studio Code. Other source indexers can also be installed, and may provide
additional features. It's currently possible to generate configuration for
three such indexers: [clangd](https://clang.llvm.org/extra/clangd/),
[rtags](https://github.com/Andersbakken/rtags) and
[ccls](https://github.com/MaskRay/ccls). These can be configured by appending
the name of the indexer to the make target, for example:
```shell
make vscode-project-clangd
```
Additional instructions for configuring the given indexer will be displayed
after the workspace has been generated.
#### Visual Studio
This section is a work in progress. A Visual Studio project for the native
sources can be generated with:
```shell
make ide-project
```
#### Compilation Database
The make system can generate generic native code indexing support in the form of
a [Compilation Database](https://clang.llvm.org/docs/JSONCompilationDatabase.html)
that can be used by many different IDEs and source code indexers.
```shell
make compile-commands
```
It's also possible to generate the Compilation Database for the HotSpot source
code only, which is a bit faster as it includes less information.
```shell
make compile-commands-hotspot
```
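For reference, each entry in the generated `compile_commands.json` follows the
standard Compilation Database shape: the compilation working directory, the
full compiler command, and the source file. The values below are illustrative,
not output from an actual build:

```json
{
  "directory": "/path/to/build/linux-x86_64-server-release",
  "command": "/usr/bin/g++ -DLINUX -O2 -c -o os_linux.o /path/to/jdk/src/hotspot/os/linux/os_linux.cpp",
  "file": "/path/to/jdk/src/hotspot/os/linux/os_linux.cpp"
}
```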
### IDE support for Java code
This section is a work in progress.

@@ -287,6 +287,27 @@ compile-commands compile-commands-hotspot:
ALL_TARGETS += $(COMPILE_COMMANDS_TARGETS_HOTSPOT) $(COMPILE_COMMANDS_TARGETS_JDK)
ALL_TARGETS += compile-commands compile-commands-hotspot
################################################################################
# VS Code projects
vscode-project:
+($(CD) $(TOPDIR)/make/vscode && $(MAKE) $(MAKE_ARGS) -f CreateVSCodeProject.gmk \
VSCODE_INDEXER=cpptools)
vscode-project-clangd:
+($(CD) $(TOPDIR)/make/vscode && $(MAKE) $(MAKE_ARGS) -f CreateVSCodeProject.gmk \
VSCODE_INDEXER=clangd)
vscode-project-rtags:
+($(CD) $(TOPDIR)/make/vscode && $(MAKE) $(MAKE_ARGS) -f CreateVSCodeProject.gmk \
VSCODE_INDEXER=rtags)
vscode-project-ccls:
+($(CD) $(TOPDIR)/make/vscode && $(MAKE) $(MAKE_ARGS) -f CreateVSCodeProject.gmk \
VSCODE_INDEXER=ccls)
ALL_TARGETS += vscode-project vscode-project-clangd vscode-project-rtags \
vscode-project-ccls
################################################################################
# Build demos targets
@@ -774,6 +795,11 @@ else
compile-commands-hotspot: $(COMPILE_COMMANDS_TARGETS_HOTSPOT)
compile-commands: $(COMPILE_COMMANDS_TARGETS_HOTSPOT) $(COMPILE_COMMANDS_TARGETS_JDK)
vscode-project: compile-commands
vscode-project-clangd: compile-commands
vscode-project-rtags: compile-commands
vscode-project-ccls: compile-commands
# Jmods cannot be created until we have the jmod tool ready to run. During
# a normal build we run it from the exploded image, but when cross compiling
# it's run from the buildjdk, which is either created at build time or user

@@ -61,6 +61,14 @@ $(eval $(call SetupProcessMarkdown, testing, \
))
TARGETS += $(testing)
$(eval $(call SetupProcessMarkdown, ide, \
FILES := $(DOCS_DIR)/ide.md, \
DEST := $(DOCS_DIR), \
CSS := $(GLOBAL_SPECS_DEFAULT_CSS_FILE), \
OPTIONS := --toc, \
))
TARGETS += $(ide)
################################################################################
$(eval $(call IncludeCustomExtension, UpdateBuildDocs.gmk))

@@ -122,7 +122,9 @@ define SetupJdkLibraryBody
endif
ifneq ($$($1_EXCLUDE_SRC_PATTERNS), )
$1_EXCLUDE_SRC := $$(call containing, $$($1_EXCLUDE_SRC_PATTERNS), $$($1_SRC))
$1_SRC_WITHOUT_WORKSPACE_ROOT := $$(patsubst $$(WORKSPACE_ROOT)/%, %, $$($1_SRC))
$1_EXCLUDE_SRC := $$(addprefix %, $$(call containing, $$($1_EXCLUDE_SRC_PATTERNS), \
$$($1_SRC_WITHOUT_WORKSPACE_ROOT)))
$1_SRC := $$(filter-out $$($1_EXCLUDE_SRC), $$($1_SRC))
endif

@@ -471,6 +471,22 @@ else
$1
endif
################################################################################
# FixPathList
#
# On Windows, converts a cygwin/unix style path list (colon-separated) into
# the native format (mixed mode, semicolon-separated). On other platforms,
# return the path list unchanged.
################################################################################
ifeq ($(call isTargetOs, windows), true)
FixPathList = \
$(subst @,$(SPACE),$(subst $(SPACE),;,$(foreach entry,$(subst :,$(SPACE),\
$(subst $(SPACE),@,$(strip $1))),$(call FixPath, $(entry)))))
else
FixPathList = \
$1
endif
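A minimal Python sketch of the Windows branch, with a hypothetical `fix_path`
standing in for the real `FixPath` macro (the `@`-for-space substitution in
the make code, which protects entries containing spaces, is omitted here):

```python
def fix_path(entry: str) -> str:
    # Hypothetical stand-in for the FixPath macro: convert a cygwin-style
    # /cygdrive/c/... path to mixed mode c:/...; other paths pass through.
    prefix = "/cygdrive/"
    if entry.startswith(prefix):
        drive, _, rest = entry[len(prefix):].partition("/")
        return "%s:/%s" % (drive, rest)
    return entry

def fix_path_list(path_list: str) -> str:
    # Colon-separated unix path list in, semicolon-separated native list out.
    return ";".join(fix_path(e) for e in path_list.split(":") if e)

print(fix_path_list("/cygdrive/c/tools:/cygdrive/d/jdk/bin"))
# prints: c:/tools;d:/jdk/bin
```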
################################################################################
# DependOnVariable
#

@@ -122,7 +122,8 @@ DirToDotDot = \
# $2 - Directory to compute the relative path from
RelativePath = \
$(eval $1_prefix := $(call FindCommonPathPrefix, $1, $2)) \
$(eval $1_dotdots := $(call DirToDotDot, $(patsubst $($(strip $1)_prefix)/%, %, $2))) \
$(eval $1_dotdots := $(call DirToDotDot, $(patsubst $($(strip $1)_prefix)%, %, $2))) \
$(eval $1_dotdots := $(if $($(strip $1)_dotdots),$($(strip $1)_dotdots),.)) \
$(eval $1_suffix := $(patsubst $($(strip $1)_prefix)/%, %, $1)) \
$($(strip $1)_dotdots)/$($(strip $1)_suffix)
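The steps above can be sketched in Python (illustrative only; the real macro
operates on make variables, not strings):

```python
def relative_path(target: str, base: str) -> str:
    # Find the common prefix of both paths, turn what remains of the
    # base into ".." components, then append the target's own suffix;
    # "." when the two paths are identical.
    t, b = target.split("/"), base.split("/")
    common = 0
    while common < min(len(t), len(b)) and t[common] == b[common]:
        common += 1
    dotdots = "/".join([".."] * (len(b) - common)) or "."
    suffix = "/".join(t[common:])
    return dotdots + "/" + suffix if suffix else dotdots

print(relative_path("/jdk/src", "/jdk/build/conf"))  # prints: ../../src
print(relative_path("/jdk", "/jdk"))                 # prints: .
```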

@@ -39,7 +39,7 @@ LIBAWT_DEFAULT_HEADER_DIRS := \
# We must not include java.desktop/unix/native/libmlib_image, which is only
# for usage by solaris-sparc in libmlib_image_v.
BUILD_LIBMLIB_EXCLUDE_SRC_PATTERNS := unix
BUILD_LIBMLIB_EXCLUDE_SRC_PATTERNS := /unix/
BUILD_LIBMLIB_CFLAGS := -D__USE_J2D_NAMES -D__MEDIALIB_OLD_NAMES -DMLIB_NO_LIBSUNMATH
@@ -698,7 +698,7 @@ else # not windows
ifeq ($(call isTargetOs, macosx), true)
# libjawt on macosx does not use the unix code
LIBJAWT_EXCLUDE_SRC_PATTERNS := unix
LIBJAWT_EXCLUDE_SRC_PATTERNS := /unix/
endif
ifeq ($(call isTargetOs, macosx), true)
@@ -788,7 +788,7 @@ ifeq ($(ENABLE_HEADLESS_ONLY), false)
ifeq ($(call isTargetOs, macosx), true)
# libsplashscreen on macosx does not use the unix code
LIBSPLASHSCREEN_EXCLUDE_SRC_PATTERNS := unix
LIBSPLASHSCREEN_EXCLUDE_SRC_PATTERNS := /unix/
endif
LIBSPLASHSCREEN_CFLAGS += -DSPLASHSCREEN -DPNG_NO_MMX_CODE -DPNG_ARM_NEON_OPT=0

@@ -0,0 +1,113 @@
#
# Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 only, as
# published by the Free Software Foundation. Oracle designates this
# particular file as subject to the "Classpath" exception as provided
# by Oracle in the LICENSE file that accompanied this code.
#
# This code is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# version 2 for more details (a copy is included in the LICENSE file that
# accompanied this code).
#
# You should have received a copy of the GNU General Public License version
# 2 along with this work; if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
# or visit www.oracle.com if you need additional information or have any
# questions.
#
# This must be the first rule
default: all
include $(SPEC)
include MakeBase.gmk
################################################################################
# Return the full path to an indexer-specific file fragment.
#
# Param 1: Fragment name
################################################################################
GetIndexerFragment = \
$(TOPDIR)/make/vscode/indexers/$(VSCODE_INDEXER)-$(1).txt
################################################################################
# Show indexer-specific notes if they exist, otherwise do nothing
################################################################################
ifneq (,$(wildcard $(call GetIndexerFragment,notes)))
ShowIndexerNotes = $(CAT) $(call GetIndexerFragment,notes)
else
ShowIndexerNotes =
endif
################################################################################
# Return the platform-dependent preferred debug engine name.
################################################################################
ifeq ($(call isTargetOs, windows), true)
DebugEngineName = cppvsdbg
else
DebugEngineName = cppdbg
endif
################################################################################
# Return an additional configuration fragment if the WORKSPACE_ROOT is different
# from TOPDIR.
################################################################################
ifneq ($(WORKSPACE_ROOT),$(TOPDIR))
GetExtraWorkspaceRoot = $(TOPDIR)/make/vscode/template-workspace-folder.txt
else
GetExtraWorkspaceRoot = /dev/null
endif
################################################################################
# Create a project configuration from a given template, replacing a known set
# of variables.
#
# Param 1: Template
# Param 2: Output
################################################################################
define CreateFromTemplate
$(call LogInfo, Generating $2)
$(call MakeDir, $(dir $2))
$(SED) -e '/{{INDEXER_EXTENSIONS}}/r $(call GetIndexerFragment,extensions)' \
-e '/{{INDEXER_SETTINGS}}/r $(call GetIndexerFragment,settings)' \
-e '/{{EXTRA_WORKSPACE_ROOT}}/r $(call GetExtraWorkspaceRoot)' $1 | \
$(SED) -e 's!{{TOPDIR}}!$(call FixPath,$(TOPDIR))!g' \
-e 's!{{TOPDIR_RELATIVE}}!$(call FixPath,$(strip \
$(call RelativePath,$(OUTPUTDIR),$(TOPDIR))))!g' \
-e 's!{{WORKSPACE_ROOT}}!$(call FixPath,$(WORKSPACE_ROOT))!g' \
-e 's!{{OUTPUTDIR}}!$(call FixPath,$(OUTPUTDIR))!g' \
-e 's!{{CONF_NAME}}!$(CONF_NAME)!g' \
-e 's!{{COMPILER}}!$(call FixPath,$(CXX)) $(SYSROOT_CFLAGS)!g' \
-e 's!{{MAKE}}!$(call FixPath,$(MAKE))!g' \
-e 's!{{PATH}}!$(call FixPathList,$(PATH))!g' \
-e 's!{{DEBUGENGINENAME}}!$(call DebugEngineName)!g' \
-e '/{{INDEXER_EXTENSIONS}}/d' \
-e '/{{INDEXER_SETTINGS}}/d' \
-e '/{{EXTRA_WORKSPACE_ROOT}}/d' \
> $2
endef
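The sed pipeline above can be sketched in Python (hypothetical names; sed's
`r` command appends each fragment file after its marker line and a later `d`
deletes the marker, so the net effect is replacing marker lines, which this
sketch does directly):

```python
import re

def create_from_template(template: str, fragments: dict, variables: dict) -> str:
    # Splice indexer-specific fragments in at their marker lines, then
    # expand {{VARIABLE}} placeholders; unknown placeholders are left as-is.
    out = []
    for line in template.splitlines():
        marker = next((name for name in fragments if "{{%s}}" % name in line), None)
        if marker is not None:
            out.append(fragments[marker])  # marker line replaced by fragment
        else:
            out.append(re.sub(r"\{\{(\w+)\}\}",
                              lambda m: variables.get(m.group(1), m.group(0)), line))
    return "\n".join(out)

tmpl = '"cwd": "{{WORKSPACE_ROOT}}",\n// {{INDEXER_SETTINGS}}'
print(create_from_template(tmpl,
                           {"INDEXER_SETTINGS": '"clangd.path": "clangd",'},
                           {"WORKSPACE_ROOT": "/jdk"}))
```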
$(OUTPUTDIR)/jdk.code-workspace:
$(call LogWarn, Creating workspace $@)
$(call CreateFromTemplate, $(TOPDIR)/make/vscode/template-workspace.jsonc, $@)
$(call ShowIndexerNotes)
$(OUTPUTDIR)/.vscode/tasks.json:
$(call CreateFromTemplate, $(TOPDIR)/make/vscode/template-tasks.jsonc, $@)
$(OUTPUTDIR)/.vscode/launch.json:
$(call CreateFromTemplate, $(TOPDIR)/make/vscode/template-launch.jsonc, $@)
TARGETS := $(OUTPUTDIR)/jdk.code-workspace $(OUTPUTDIR)/.vscode/tasks.json \
$(OUTPUTDIR)/.vscode/launch.json
all: $(TARGETS)
.PHONY: all $(TARGETS)

@@ -0,0 +1,2 @@
"ms-vscode.cpptools",
"ccls-project.ccls"

@@ -0,0 +1,3 @@
* The "ccls" indexer must be present in PATH, or configured with "ccls.launch.command" in user preferences.

@@ -0,0 +1,28 @@
// Configure cpptools IntelliSense
"C_Cpp.intelliSenseCachePath": "{{OUTPUTDIR}}/.vscode",
"C_Cpp.default.compileCommands": "{{OUTPUTDIR}}/compile_commands.json",
"C_Cpp.default.cppStandard": "c++03",
"C_Cpp.default.compilerPath": "{{COMPILER}}",
// Configure ccls
"ccls.misc.compilationDatabaseDirectory": "{{TOPDIR_RELATIVE}}",
"ccls.cache.hierarchicalPath": true,
"ccls.cache.directory": "{{OUTPUTDIR}}/.vscode/ccls",
// Avoid issues with precompiled headers
"ccls.clang.excludeArgs": [
// Windows / MSVC
"-Fp{{OUTPUTDIR}}/hotspot/variant-server/libjvm/objs/BUILD_LIBJVM.pch",
"-Fp{{OUTPUTDIR}}/hotspot/variant-server/libjvm/gtest/objs/BUILD_GTEST_LIBJVM.pch",
"-Yuprecompiled.hpp",
// MacOS / clang
"{{OUTPUTDIR}}/hotspot/variant-server/libjvm/objs/precompiled/precompiled.hpp.pch",
"{{OUTPUTDIR}}/hotspot/variant-server/libjvm/gtest/objs/precompiled/precompiled.hpp.pch",
"-include-pch"
],
// Disable conflicting features from cpptools
"C_Cpp.autocomplete": "Disabled",
"C_Cpp.errorSquiggles": "Disabled",
"C_Cpp.formatting": "Disabled",
"C_Cpp.intelliSenseEngine": "Disabled",

@@ -0,0 +1,2 @@
"ms-vscode.cpptools",
"llvm-vs-code-extensions.vscode-clangd"

@@ -0,0 +1,4 @@
* The "clangd" indexer must be present in PATH, or configured with "clangd.path" in user preferences.
* If building with clang (default on OSX), precompiled headers must be disabled.

@@ -0,0 +1,17 @@
// Configure cpptools IntelliSense
"C_Cpp.intelliSenseCachePath": "{{OUTPUTDIR}}/.vscode",
"C_Cpp.default.compileCommands": "{{OUTPUTDIR}}/compile_commands.json",
"C_Cpp.default.cppStandard": "c++03",
"C_Cpp.default.compilerPath": "{{COMPILER}}",
// Configure clangd
"clangd.arguments": [
"-background-index",
"-compile-commands-dir={{OUTPUTDIR}}"
],
// Disable conflicting features from cpptools
"C_Cpp.autocomplete": "Disabled",
"C_Cpp.errorSquiggles": "Disabled",
"C_Cpp.formatting": "Disabled",
"C_Cpp.intelliSenseEngine": "Disabled",

@@ -0,0 +1 @@
"ms-vscode.cpptools"

@@ -0,0 +1,5 @@
// Configure cpptools IntelliSense
"C_Cpp.intelliSenseCachePath": "{{OUTPUTDIR}}/.vscode",
"C_Cpp.default.compileCommands": "{{OUTPUTDIR}}/compile_commands.json",
"C_Cpp.default.cppStandard": "c++03",
"C_Cpp.default.compilerPath": "{{COMPILER}}",

@@ -0,0 +1,2 @@
"ms-vscode.cpptools",
"jomiller.rtags-client"

@@ -0,0 +1,14 @@
// Configure cpptools IntelliSense
"C_Cpp.intelliSenseCachePath": "{{OUTPUTDIR}}/.vscode",
"C_Cpp.default.compileCommands": "{{OUTPUTDIR}}/compile_commands.json",
"C_Cpp.default.cppStandard": "c++03",
"C_Cpp.default.compilerPath": "{{COMPILER}}",
// Configure RTags
"rtags.misc.compilationDatabaseDirectory": "{{OUTPUTDIR}}",
// Disable conflicting features from cpptools
"C_Cpp.autocomplete": "Disabled",
"C_Cpp.errorSquiggles": "Disabled",
"C_Cpp.formatting": "Disabled",
"C_Cpp.intelliSenseEngine": "Disabled",

@@ -0,0 +1,55 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "gtestLauncher",
"type": "{{DEBUGENGINENAME}}",
"request": "launch",
"program": "{{OUTPUTDIR}}/hotspot/variant-server/libjvm/gtest/gtestLauncher",
"args": ["-jdk:{{OUTPUTDIR}}/jdk"],
"stopAtEntry": false,
"cwd": "{{WORKSPACE_ROOT}}",
"environment": [],
"externalConsole": false,
"preLaunchTask": "Make 'exploded-image'",
"osx": {
"MIMode": "lldb",
"internalConsoleOptions": "openOnSessionStart",
"args": ["--gtest_color=no", "-jdk:{{OUTPUTDIR}}/jdk"]
},
"linux": {
"MIMode": "gdb",
"setupCommands": [
{
"text": "handle SIGSEGV noprint nostop",
"description": "Disable stopping on signals handled by the JVM"
}
]
}
},
{
"name": "java",
"type": "{{DEBUGENGINENAME}}",
"request": "launch",
"program": "{{OUTPUTDIR}}/jdk/bin/java",
"stopAtEntry": false,
"cwd": "{{WORKSPACE_ROOT}}",
"environment": [],
"externalConsole": false,
"preLaunchTask": "Make 'exploded-image'",
"osx": {
"MIMode": "lldb",
"internalConsoleOptions": "openOnSessionStart",
},
"linux": {
"MIMode": "gdb",
"setupCommands": [
{
"text": "handle SIGSEGV noprint nostop",
"description": "Disable stopping on signals handled by the JVM"
}
]
}
}
]
}

@@ -0,0 +1,55 @@
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"label": "Update compilation database (compile_commands.json)",
"type": "shell",
"options": {
"env": {
"PATH": "{{PATH}}"
},
"cwd": "{{WORKSPACE_ROOT}}"
},
"command": "{{MAKE}} CONF_NAME={{CONF_NAME}} compile-commands",
"problemMatcher": []
},
{
"label": "Make 'hotspot'",
"type": "shell",
"options": {
"env": {
"PATH": "{{PATH}}"
},
"cwd": "{{WORKSPACE_ROOT}}"
},
"command": "{{MAKE}} CONF_NAME={{CONF_NAME}} hotspot",
"problemMatcher": ["$gcc"]
},
{
"label": "Make 'exploded-image'",
"type": "shell",
"options": {
"env": {
"PATH": "{{PATH}}"
},
"cwd": "{{WORKSPACE_ROOT}}"
},
"command": "{{MAKE}} CONF_NAME={{CONF_NAME}} exploded-image",
"problemMatcher": ["$gcc"]
},
{
"label": "Make 'jdk'",
"type": "shell",
"options": {
"env": {
"PATH": "{{PATH}}"
},
"cwd": "{{WORKSPACE_ROOT}}"
},
"command": "{{MAKE}} CONF_NAME={{CONF_NAME}} jdk",
"problemMatcher": ["$gcc"]
}
]
}

@@ -0,0 +1,4 @@
{
"name": "Additional sources",
"path": "{{WORKSPACE_ROOT}}"
},

@@ -0,0 +1,63 @@
{
"folders": [
{
"name": "Source root",
"path": "{{TOPDIR}}"
},
// {{EXTRA_WORKSPACE_ROOT}}
{
"name": "Build artifacts",
"path": "{{OUTPUTDIR}}"
}
],
"extensions": {
"recommendations": [
// {{INDEXER_EXTENSIONS}}
]
},
"settings": {
// {{INDEXER_SETTINGS}}
// Additional conventions
"files.associations": {
"*.gmk": "makefile"
},
// Having these enabled slows down task execution
"typescript.tsc.autoDetect": "off",
"gulp.autoDetect": "off",
"npm.autoDetect": "off",
"grunt.autoDetect": "off",
"jake.autoDetect": "off",
// Certain types of files are not relevant for the file browser
"files.exclude": {
"**/.git": true,
"**/.hg": true,
"**/.DS_Store": true,
},
// Files that may be interesting to browse manually, but avoided during searches
"search.exclude": {
"**/*.class": true,
"**/*.jsa": true,
"**/*.vardeps": true,
"**/*.o": true,
"**/*.obj": true,
"**/*.d": true,
"**/*.d.*": true,
"**/*_batch*": true,
"**/*.marker": true,
"**/compile-commands/": true,
"**/objs": true,
"**/launcher-objs": true,
"**/*.cmdline": true,
"**/*.log": true,
".vscode": true,
".clangd": true
},
// Trailing whitespace should never be used in this project
"files.trimTrailingWhitespace": true
}
}

@@ -3263,6 +3263,70 @@ int os::active_processor_count() {
return _processor_count;
}
#ifdef __APPLE__
uint os::processor_id() {
static volatile int* volatile apic_to_cpu_mapping = NULL;
static volatile int next_cpu_id = 0;
volatile int* mapping = OrderAccess::load_acquire(&apic_to_cpu_mapping);
if (mapping == NULL) {
// Calculate possible number space for APIC ids. This space is not necessarily
// in the range [0, number_of_cpus).
uint total_bits = 0;
for (uint i = 0;; ++i) {
uint eax = 0xb; // Query topology leaf
uint ebx;
uint ecx = i;
uint edx;
__asm__ ("cpuid\n\t" : "+a" (eax), "+b" (ebx), "+c" (ecx), "+d" (edx) : );
uint level_type = (ecx >> 8) & 0xFF;
if (level_type == 0) {
// Invalid level; end of topology
break;
}
uint level_apic_id_shift = eax & ((1u << 5) - 1);
total_bits += level_apic_id_shift;
}
uint max_apic_ids = 1u << total_bits;
mapping = NEW_C_HEAP_ARRAY(int, max_apic_ids, mtInternal);
for (uint i = 0; i < max_apic_ids; ++i) {
mapping[i] = -1;
}
if (!Atomic::replace_if_null(mapping, &apic_to_cpu_mapping)) {
FREE_C_HEAP_ARRAY(int, mapping);
mapping = OrderAccess::load_acquire(&apic_to_cpu_mapping);
}
}
uint eax = 0xb;
uint ebx;
uint ecx = 0;
uint edx;
asm ("cpuid\n\t" : "+a" (eax), "+b" (ebx), "+c" (ecx), "+d" (edx) : );
// Map from APIC id to a unique logical processor ID in the expected
// [0, num_processors) range.
uint apic_id = edx;
int cpu_id = Atomic::load(&mapping[apic_id]);
while (cpu_id < 0) {
if (Atomic::cmpxchg(-2, &mapping[apic_id], -1) == -1) {
Atomic::store(Atomic::add(1, &next_cpu_id) - 1, &mapping[apic_id]);
}
cpu_id = Atomic::load(&mapping[apic_id]);
}
return (uint)cpu_id;
}
#endif
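The claim protocol in the loop above can be sketched in Python
(single-threaded for clarity, with hypothetical names; the real code uses
cmpxchg so that threads racing on the same APIC id spin until the winner
publishes the dense id):

```python
import itertools

UNASSIGNED, CLAIMED = -1, -2

mapping = [UNASSIGNED] * 8        # stands in for apic_to_cpu_mapping
next_cpu_id = itertools.count()   # stands in for the atomic next_cpu_id

def processor_id(apic_id: int) -> int:
    # Single-threaded sketch of the cmpxchg loop: the first caller to move
    # a slot from UNASSIGNED to CLAIMED publishes a fresh dense id; later
    # callers for the same APIC id just read that id back.
    if mapping[apic_id] == UNASSIGNED:      # cmpxchg(-2, &slot, -1) == -1
        mapping[apic_id] = CLAIMED
        mapping[apic_id] = next(next_cpu_id)
    return mapping[apic_id]

print(processor_id(5), processor_id(3), processor_id(5))  # prints: 0 1 0
```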
void os::set_native_thread_name(const char *name) {
#if defined(__APPLE__) && MAC_OS_X_VERSION_MIN_REQUIRED > MAC_OS_X_VERSION_10_5
// This is only supported in Snow Leopard and beyond

@@ -2027,6 +2027,7 @@ void* os::signal(int signal_number, void* handler) {
struct sigaction sigAct, oldSigAct;
sigfillset(&(sigAct.sa_mask));
sigAct.sa_flags = SA_RESTART & ~SA_RESETHAND;
sigAct.sa_flags |= SA_SIGINFO;
sigAct.sa_handler = CAST_TO_FN_PTR(sa_handler_t, handler);
if (sigaction(signal_number, &sigAct, &oldSigAct)) {

@@ -24,6 +24,7 @@
#include "precompiled.hpp"
#include "aot/aotCodeHeap.hpp"
#include "aot/aotLoader.inline.hpp"
#include "classfile/javaClasses.hpp"
#include "jvm.h"
#include "memory/allocation.inline.hpp"
#include "memory/resourceArea.hpp"
@@ -319,3 +320,24 @@ bool AOTLoader::reconcile_dynamic_invoke(InstanceKlass* holder, int index, Metho
vmassert(success || thread->last_frame().sender(&map).is_deoptimized_frame(), "caller not deoptimized on failure");
return success;
}
// This should be called very early during startup before any of the AOTed methods that use boxes can deoptimize.
// Deoptimization machinery expects the caches to be present and populated.
void AOTLoader::initialize_box_caches(TRAPS) {
if (!UseAOT || libraries_count() == 0) {
return;
}
TraceTime timer("AOT initialization of box caches", TRACETIME_LOG(Info, aot, startuptime));
Symbol* box_classes[] = { java_lang_Boolean::symbol(), java_lang_Byte_ByteCache::symbol(),
java_lang_Short_ShortCache::symbol(), java_lang_Character_CharacterCache::symbol(),
java_lang_Integer_IntegerCache::symbol(), java_lang_Long_LongCache::symbol() };
for (unsigned i = 0; i < sizeof(box_classes) / sizeof(Symbol*); i++) {
Klass* k = SystemDictionary::resolve_or_fail(box_classes[i], true, CHECK);
InstanceKlass* ik = InstanceKlass::cast(k);
if (ik->is_not_initialized()) {
ik->initialize(CHECK);
}
}
}

@@ -64,6 +64,14 @@ public:
static void oops_do(OopClosure* f) NOT_AOT_RETURN;
static void metadata_do(MetadataClosure* f) NOT_AOT_RETURN;
static void mark_evol_dependent_methods(InstanceKlass* dependee) NOT_AOT_RETURN;
static void initialize_box_caches(TRAPS) NOT_AOT_RETURN;
NOT_PRODUCT( static void print_statistics() NOT_AOT_RETURN; )

@@ -4155,6 +4155,14 @@ int java_nio_Buffer::_limit_offset;
int java_util_concurrent_locks_AbstractOwnableSynchronizer::_owner_offset;
int reflect_ConstantPool::_oop_offset;
int reflect_UnsafeStaticFieldAccessorImpl::_base_offset;
int java_lang_Integer_IntegerCache::_static_cache_offset;
int java_lang_Long_LongCache::_static_cache_offset;
int java_lang_Character_CharacterCache::_static_cache_offset;
int java_lang_Short_ShortCache::_static_cache_offset;
int java_lang_Byte_ByteCache::_static_cache_offset;
int java_lang_Boolean::_static_TRUE_offset;
int java_lang_Boolean::_static_FALSE_offset;
#define STACKTRACEELEMENT_FIELDS_DO(macro) \
@@ -4314,6 +4322,192 @@ void java_util_concurrent_locks_AbstractOwnableSynchronizer::serialize_offsets(S
}
#endif
#define INTEGER_CACHE_FIELDS_DO(macro) \
macro(_static_cache_offset, k, "cache", java_lang_Integer_array_signature, true)
void java_lang_Integer_IntegerCache::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
INTEGER_CACHE_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
objArrayOop java_lang_Integer_IntegerCache::cache(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return objArrayOop(base->obj_field(_static_cache_offset));
}
Symbol* java_lang_Integer_IntegerCache::symbol() {
return vmSymbols::java_lang_Integer_IntegerCache();
}
#if INCLUDE_CDS
void java_lang_Integer_IntegerCache::serialize_offsets(SerializeClosure* f) {
INTEGER_CACHE_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef INTEGER_CACHE_FIELDS_DO
jint java_lang_Integer::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.i;
}
#define LONG_CACHE_FIELDS_DO(macro) \
macro(_static_cache_offset, k, "cache", java_lang_Long_array_signature, true)
void java_lang_Long_LongCache::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
LONG_CACHE_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
objArrayOop java_lang_Long_LongCache::cache(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return objArrayOop(base->obj_field(_static_cache_offset));
}
Symbol* java_lang_Long_LongCache::symbol() {
return vmSymbols::java_lang_Long_LongCache();
}
#if INCLUDE_CDS
void java_lang_Long_LongCache::serialize_offsets(SerializeClosure* f) {
LONG_CACHE_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef LONG_CACHE_FIELDS_DO
jlong java_lang_Long::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.j;
}
#define CHARACTER_CACHE_FIELDS_DO(macro) \
macro(_static_cache_offset, k, "cache", java_lang_Character_array_signature, true)
void java_lang_Character_CharacterCache::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
CHARACTER_CACHE_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
objArrayOop java_lang_Character_CharacterCache::cache(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return objArrayOop(base->obj_field(_static_cache_offset));
}
Symbol* java_lang_Character_CharacterCache::symbol() {
return vmSymbols::java_lang_Character_CharacterCache();
}
#if INCLUDE_CDS
void java_lang_Character_CharacterCache::serialize_offsets(SerializeClosure* f) {
CHARACTER_CACHE_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef CHARACTER_CACHE_FIELDS_DO
jchar java_lang_Character::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.c;
}
#define SHORT_CACHE_FIELDS_DO(macro) \
macro(_static_cache_offset, k, "cache", java_lang_Short_array_signature, true)
void java_lang_Short_ShortCache::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
SHORT_CACHE_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
objArrayOop java_lang_Short_ShortCache::cache(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return objArrayOop(base->obj_field(_static_cache_offset));
}
Symbol* java_lang_Short_ShortCache::symbol() {
return vmSymbols::java_lang_Short_ShortCache();
}
#if INCLUDE_CDS
void java_lang_Short_ShortCache::serialize_offsets(SerializeClosure* f) {
SHORT_CACHE_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef SHORT_CACHE_FIELDS_DO
jshort java_lang_Short::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.s;
}
#define BYTE_CACHE_FIELDS_DO(macro) \
macro(_static_cache_offset, k, "cache", java_lang_Byte_array_signature, true)
void java_lang_Byte_ByteCache::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
BYTE_CACHE_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
objArrayOop java_lang_Byte_ByteCache::cache(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return objArrayOop(base->obj_field(_static_cache_offset));
}
Symbol* java_lang_Byte_ByteCache::symbol() {
return vmSymbols::java_lang_Byte_ByteCache();
}
#if INCLUDE_CDS
void java_lang_Byte_ByteCache::serialize_offsets(SerializeClosure* f) {
BYTE_CACHE_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef BYTE_CACHE_FIELDS_DO
jbyte java_lang_Byte::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.b;
}
#define BOOLEAN_FIELDS_DO(macro) \
macro(_static_TRUE_offset, k, "TRUE", java_lang_Boolean_signature, true); \
macro(_static_FALSE_offset, k, "FALSE", java_lang_Boolean_signature, true)
void java_lang_Boolean::compute_offsets(InstanceKlass *k) {
guarantee(k != NULL && k->is_initialized(), "must be loaded and initialized");
BOOLEAN_FIELDS_DO(FIELD_COMPUTE_OFFSET);
}
oop java_lang_Boolean::get_TRUE(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return base->obj_field(_static_TRUE_offset);
}
oop java_lang_Boolean::get_FALSE(InstanceKlass *ik) {
oop base = ik->static_field_base_raw();
return base->obj_field(_static_FALSE_offset);
}
Symbol* java_lang_Boolean::symbol() {
return vmSymbols::java_lang_Boolean();
}
#if INCLUDE_CDS
void java_lang_Boolean::serialize_offsets(SerializeClosure* f) {
BOOLEAN_FIELDS_DO(FIELD_SERIALIZE_OFFSET);
}
#endif
#undef BOOLEAN_FIELDS_DO
jboolean java_lang_Boolean::value(oop obj) {
jvalue v;
java_lang_boxing_object::get_value(obj, &v);
return v.z;
}
static int member_offset(int hardcoded_offset) {
return (hardcoded_offset * heapOopSize) + instanceOopDesc::base_offset_in_bytes();
}
@ -1497,6 +1497,94 @@ class jdk_internal_misc_UnsafeConstants : AllStatic {
static void serialize_offsets(SerializeClosure* f) { }
};
class java_lang_Integer : AllStatic {
public:
static jint value(oop obj);
};
class java_lang_Long : AllStatic {
public:
static jlong value(oop obj);
};
class java_lang_Character : AllStatic {
public:
static jchar value(oop obj);
};
class java_lang_Short : AllStatic {
public:
static jshort value(oop obj);
};
class java_lang_Byte : AllStatic {
public:
static jbyte value(oop obj);
};
class java_lang_Boolean : AllStatic {
private:
static int _static_TRUE_offset;
static int _static_FALSE_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static oop get_TRUE(InstanceKlass *k);
static oop get_FALSE(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
static jboolean value(oop obj);
};
class java_lang_Integer_IntegerCache : AllStatic {
private:
static int _static_cache_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static objArrayOop cache(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
class java_lang_Long_LongCache : AllStatic {
private:
static int _static_cache_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static objArrayOop cache(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
class java_lang_Character_CharacterCache : AllStatic {
private:
static int _static_cache_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static objArrayOop cache(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
class java_lang_Short_ShortCache : AllStatic {
private:
static int _static_cache_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static objArrayOop cache(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
class java_lang_Byte_ByteCache : AllStatic {
private:
static int _static_cache_offset;
public:
static Symbol* symbol();
static void compute_offsets(InstanceKlass* k);
static objArrayOop cache(InstanceKlass *k);
static void serialize_offsets(SerializeClosure* f) NOT_CDS_RETURN;
};
// Use to declare fields that need to be injected into Java classes
// for the JVM to use. The name_index and signature_index are
// declared in vmSymbols. The may_be_java flag is used to declare
@ -442,6 +442,11 @@
template(getProtectionDomain_name, "getProtectionDomain") \
template(getProtectionDomain_signature, "(Ljava/security/CodeSource;)Ljava/security/ProtectionDomain;") \
template(java_lang_Integer_array_signature, "[Ljava/lang/Integer;") \
template(java_lang_Long_array_signature, "[Ljava/lang/Long;") \
template(java_lang_Character_array_signature, "[Ljava/lang/Character;") \
template(java_lang_Short_array_signature, "[Ljava/lang/Short;") \
template(java_lang_Byte_array_signature, "[Ljava/lang/Byte;") \
template(java_lang_Boolean_signature, "Ljava/lang/Boolean;") \
template(url_code_signer_array_void_signature, "(Ljava/net/URL;[Ljava/security/CodeSigner;)V") \
template(module_entry_name, "module_entry") \
template(resolved_references_name, "<resolved_references>") \
@ -56,7 +56,7 @@ oop DebugInfoReadStream::read_oop() {
return o;
}
ScopeValue* DebugInfoReadStream::read_object_value() {
ScopeValue* DebugInfoReadStream::read_object_value(bool is_auto_box) {
int id = read_int();
#ifdef ASSERT
assert(_obj_pool != NULL, "object pool does not exist");
@ -64,7 +64,7 @@ ScopeValue* DebugInfoReadStream::read_object_value() {
assert(_obj_pool->at(i)->as_ObjectValue()->id() != id, "should not be read twice");
}
#endif
ObjectValue* result = new ObjectValue(id);
ObjectValue* result = is_auto_box ? new AutoBoxObjectValue(id) : new ObjectValue(id);
// Cache the object since an object field could reference it.
_obj_pool->push(result);
result->read_object(this);
@ -88,18 +88,20 @@ ScopeValue* DebugInfoReadStream::get_cached_object() {
enum { LOCATION_CODE = 0, CONSTANT_INT_CODE = 1, CONSTANT_OOP_CODE = 2,
CONSTANT_LONG_CODE = 3, CONSTANT_DOUBLE_CODE = 4,
OBJECT_CODE = 5, OBJECT_ID_CODE = 6 };
OBJECT_CODE = 5, OBJECT_ID_CODE = 6,
AUTO_BOX_OBJECT_CODE = 7 };
ScopeValue* ScopeValue::read_from(DebugInfoReadStream* stream) {
ScopeValue* result = NULL;
switch(stream->read_int()) {
case LOCATION_CODE: result = new LocationValue(stream); break;
case CONSTANT_INT_CODE: result = new ConstantIntValue(stream); break;
case CONSTANT_OOP_CODE: result = new ConstantOopReadValue(stream); break;
case CONSTANT_LONG_CODE: result = new ConstantLongValue(stream); break;
case CONSTANT_DOUBLE_CODE: result = new ConstantDoubleValue(stream); break;
case OBJECT_CODE: result = stream->read_object_value(); break;
case OBJECT_ID_CODE: result = stream->get_cached_object(); break;
case LOCATION_CODE: result = new LocationValue(stream); break;
case CONSTANT_INT_CODE: result = new ConstantIntValue(stream); break;
case CONSTANT_OOP_CODE: result = new ConstantOopReadValue(stream); break;
case CONSTANT_LONG_CODE: result = new ConstantLongValue(stream); break;
case CONSTANT_DOUBLE_CODE: result = new ConstantDoubleValue(stream); break;
case OBJECT_CODE: result = stream->read_object_value(false /*is_auto_box*/); break;
case AUTO_BOX_OBJECT_CODE: result = stream->read_object_value(true /*is_auto_box*/); break;
case OBJECT_ID_CODE: result = stream->get_cached_object(); break;
default: ShouldNotReachHere();
}
return result;
@ -142,7 +144,7 @@ void ObjectValue::write_on(DebugInfoWriteStream* stream) {
stream->write_int(_id);
} else {
_visited = true;
stream->write_int(OBJECT_CODE);
stream->write_int(is_auto_box() ? AUTO_BOX_OBJECT_CODE : OBJECT_CODE);
stream->write_int(_id);
_klass->write_on(stream);
int length = _field_values.length();
@ -154,7 +156,7 @@ void ObjectValue::write_on(DebugInfoWriteStream* stream) {
}
void ObjectValue::print_on(outputStream* st) const {
st->print("obj[%d]", _id);
st->print("%s[%d]", is_auto_box() ? "box_obj" : "obj", _id);
}
void ObjectValue::print_fields_on(outputStream* st) const {
@ -49,6 +49,7 @@ class ScopeValue: public ResourceObj {
// Testers
virtual bool is_location() const { return false; }
virtual bool is_object() const { return false; }
virtual bool is_auto_box() const { return false; }
virtual bool is_constant_int() const { return false; }
virtual bool is_constant_double() const { return false; }
virtual bool is_constant_long() const { return false; }
@ -94,13 +95,12 @@ class LocationValue: public ScopeValue {
// An ObjectValue describes an object eliminated by escape analysis.
class ObjectValue: public ScopeValue {
private:
protected:
int _id;
ScopeValue* _klass;
GrowableArray<ScopeValue*> _field_values;
Handle _value;
bool _visited;
public:
ObjectValue(int id, ScopeValue* klass)
: _id(id)
@ -140,6 +140,16 @@ class ObjectValue: public ScopeValue {
void print_fields_on(outputStream* st) const;
};
class AutoBoxObjectValue : public ObjectValue {
bool _cached;
public:
bool is_auto_box() const { return true; }
bool is_cached() const { return _cached; }
void set_cached(bool cached) { _cached = cached; }
AutoBoxObjectValue(int id, ScopeValue* klass) : ObjectValue(id, klass), _cached(false) { }
AutoBoxObjectValue(int id) : ObjectValue(id), _cached(false) { }
};
// A ConstantIntValue describes a constant int; i.e., the corresponding logical entity
// is either a source constant or its computation has been constant-folded.
@ -280,7 +290,7 @@ class DebugInfoReadStream : public CompressedReadStream {
assert(o == NULL || o->is_metadata(), "meta data only");
return o;
}
ScopeValue* read_object_value();
ScopeValue* read_object_value(bool is_auto_box);
ScopeValue* get_cached_object();
// BCI encoding is mostly unsigned, but -1 is a distinguished value
int read_bci() { return read_int() + InvocationEntryBci; }
@ -472,7 +472,7 @@ void G1BarrierSetC2::post_barrier(GraphKit* kit,
__ if_then(card_val, BoolTest::ne, young_card); {
kit->sync_kit(ideal);
kit->insert_mem_bar(Op_MemBarVolatile, oop_store);
kit->insert_store_load_for_barrier();
__ sync_kit(kit);
Node* card_val_reload = __ load(__ ctrl(), card_adr, TypeInt::INT, T_BYTE, Compile::AliasIdxRaw);
@ -256,3 +256,7 @@ size_t G1Arguments::reasonable_max_memory_for_young() {
size_t G1Arguments::heap_reserved_size_bytes() {
return (is_heterogeneous_heap() ? 2 : 1) * MaxHeapSize;
}
size_t G1Arguments::heap_max_size_bytes() {
return MaxHeapSize;
}
@ -53,6 +53,7 @@ public:
static bool is_heterogeneous_heap();
static size_t reasonable_max_memory_for_young();
static size_t heap_reserved_size_bytes();
static size_t heap_max_size_bytes();
};
#endif // SHARE_GC_G1_G1ARGUMENTS_HPP
@ -54,14 +54,14 @@ public:
// The selected encoding allows us to use a single check (> NotInCSet) for the
// former.
//
// The other values are used for objects requiring various special cases,
// for example eager reclamation of humongous objects or optional regions.
static const region_type_t Optional = -2; // The region is optional and NOT in the current collection set.
static const region_type_t Humongous = -1; // The region is a humongous candidate not in the current collection set.
static const region_type_t NotInCSet = 0; // The region is not in the collection set.
static const region_type_t Young = 1; // The region is in the collection set and a young region.
static const region_type_t Old = 2; // The region is in the collection set and an old region.
static const region_type_t Num = 3;
// The other values are used for objects in regions requiring various special handling,
// for example eager reclamation of humongous objects or optional regions.
static const region_type_t Optional = -3; // The region is optional and not in the current collection set.
static const region_type_t Humongous = -2; // The region is a humongous candidate not in the current collection set.
static const region_type_t NotInCSet = -1; // The region is not in the collection set.
static const region_type_t Young = 0; // The region is in the collection set and a young region.
static const region_type_t Old = 1; // The region is in the collection set and an old region.
static const region_type_t Num = 2;
G1HeapRegionAttr(region_type_t type = NotInCSet, bool needs_remset_update = false) :
_needs_remset_update(needs_remset_update), _type(type) {
@ -92,7 +92,7 @@ public:
void set_has_remset(bool value) { _needs_remset_update = value ? 1 : 0; }
bool is_in_cset_or_humongous() const { return is_in_cset() || is_humongous(); }
bool is_in_cset() const { return type() > NotInCSet; }
bool is_in_cset() const { return type() >= Young; }
bool is_humongous() const { return type() == Humongous; }
bool is_young() const { return type() == Young; }
@ -75,7 +75,6 @@ G1ParScanThreadState::G1ParScanThreadState(G1CollectedHeap* g1h,
_plab_allocator = new G1PLABAllocator(_g1h->allocator());
_dest[G1HeapRegionAttr::NotInCSet] = G1HeapRegionAttr::NotInCSet;
// The dest for Young is used when the objects are aged enough to
// need to be moved to the next space.
_dest[G1HeapRegionAttr::Young] = G1HeapRegionAttr::Old;
@ -70,7 +70,7 @@ HeapRegionManager::HeapRegionManager() :
HeapRegionManager* HeapRegionManager::create_manager(G1CollectedHeap* heap) {
if (G1Arguments::is_heterogeneous_heap()) {
return new HeterogeneousHeapRegionManager((uint)(G1Arguments::heap_reserved_size_bytes() / HeapRegion::GrainBytes) /*heap size as num of regions*/);
return new HeterogeneousHeapRegionManager((uint)(G1Arguments::heap_max_size_bytes() / HeapRegion::GrainBytes) /*heap size as num of regions*/);
}
return new HeapRegionManager();
}
@ -614,12 +614,12 @@ void HeapRegionRemSet::clear_fcc() {
}
void HeapRegionRemSet::setup_remset_size() {
// Setup sparse and fine-grain tables sizes.
// table_size = base * (log(region_size / 1M) + 1)
const int LOG_M = 20;
int region_size_log_mb = MAX2(HeapRegion::LogOfHRGrainBytes - LOG_M, 0);
guarantee(HeapRegion::LogOfHRGrainBytes >= LOG_M, "Code assumes the region size >= 1M, but is " SIZE_FORMAT "B", HeapRegion::GrainBytes);
int region_size_log_mb = HeapRegion::LogOfHRGrainBytes - LOG_M;
if (FLAG_IS_DEFAULT(G1RSetSparseRegionEntries)) {
G1RSetSparseRegionEntries = G1RSetSparseRegionEntriesBase * (region_size_log_mb + 1);
G1RSetSparseRegionEntries = G1RSetSparseRegionEntriesBase * ((size_t)1 << (region_size_log_mb + 1));
}
if (FLAG_IS_DEFAULT(G1RSetRegionEntries)) {
G1RSetRegionEntries = G1RSetRegionEntriesBase * (region_size_log_mb + 1);
@ -179,6 +179,7 @@ private:
public:
HeapRegionRemSet(G1BlockOffsetTable* bot, HeapRegion* hr);
// Setup sparse and fine-grain tables sizes.
static void setup_remset_size();
bool cardset_is_empty() const {
@ -218,9 +218,7 @@ class SparsePRT {
RSHashTable* _table;
enum SomeAdditionalPrivateConstants {
InitialCapacity = 16
};
static const size_t InitialCapacity = 8;
void expand();
@ -38,7 +38,7 @@
// create ASPSOldGen and ASPSYoungGen the same way as in base class
AdjoiningGenerationsForHeteroHeap::AdjoiningGenerationsForHeteroHeap(ReservedSpace old_young_rs) :
_total_size_limit(ParallelArguments::heap_reserved_size_bytes()) {
_total_size_limit(ParallelArguments::heap_max_size_bytes()) {
size_t init_old_byte_size = OldSize;
size_t min_old_byte_size = MinOldSize;
size_t max_old_byte_size = MaxOldSize;
@ -85,9 +85,9 @@ AdjoiningGenerationsForHeteroHeap::AdjoiningGenerationsForHeteroHeap(ReservedSpa
size_t AdjoiningGenerationsForHeteroHeap::required_reserved_memory() {
// This is the size that young gen can grow to, when AdaptiveGCBoundary is true.
size_t max_yg_size = ParallelArguments::heap_reserved_size_bytes() - MinOldSize;
size_t max_yg_size = ParallelArguments::heap_max_size_bytes() - MinOldSize;
// This is the size that old gen can grow to, when AdaptiveGCBoundary is true.
size_t max_old_size = ParallelArguments::heap_reserved_size_bytes() - MinNewSize;
size_t max_old_size = ParallelArguments::heap_max_size_bytes() - MinNewSize;
return max_yg_size + max_old_size;
}
@ -214,6 +214,10 @@ size_t ParallelArguments::heap_reserved_size_bytes() {
return max_yg_size + max_old_size;
}
size_t ParallelArguments::heap_max_size_bytes() {
return MaxHeapSize;
}
CollectedHeap* ParallelArguments::create_heap() {
return new ParallelScavengeHeap();
}
@ -46,6 +46,7 @@ public:
// Heterogeneous heap support
static bool is_heterogeneous_heap();
static size_t heap_reserved_size_bytes();
static size_t heap_max_size_bytes();
};
#endif // SHARE_GC_PARALLEL_PARALLELARGUMENTS_HPP
@ -105,7 +105,7 @@ void CardTableBarrierSetC2::post_barrier(GraphKit* kit,
if (UseCondCardMark) {
if (ct->scanned_concurrently()) {
kit->insert_mem_bar(Op_MemBarVolatile, oop_store);
kit->insert_store_load_for_barrier();
__ sync_kit(kit);
}
// The classic GC reference write barrier is typically implemented
@ -30,10 +30,6 @@
#include "gc/shenandoah/shenandoahThreadLocalData.hpp"
#include "gc/shenandoah/c1/shenandoahBarrierSetC1.hpp"
#ifndef PATCHED_ADDR
#define PATCHED_ADDR (max_jint)
#endif
#ifdef ASSERT
#define __ gen->lir(__FILE__, __LINE__)->
#else
@ -1604,9 +1604,8 @@ void ShenandoahHeap::op_full(GCCause::Cause cause) {
}
metrics.snap_after();
metrics.print();
if (metrics.is_good_progress("Full GC")) {
if (metrics.is_good_progress()) {
_progress_last_gc.set();
} else {
// Nothing to do. Tell the allocation path that we have failed to make
@ -1739,11 +1738,10 @@ void ShenandoahHeap::op_degenerated(ShenandoahDegenPoint point) {
}
metrics.snap_after();
metrics.print();
// Check for futility and fail. There is no reason to do several back-to-back Degenerated cycles,
// because that probably means the heap is overloaded and/or fragmented.
if (!metrics.is_good_progress("Degenerated GC")) {
if (!metrics.is_good_progress()) {
_progress_last_gc.unset();
cancel_gc(GCCause::_shenandoah_upgrade_to_full_gc);
op_degenerated_futile();
@ -127,48 +127,52 @@ void ShenandoahMetricsSnapshot::snap_after() {
_ef_after = ShenandoahMetrics::external_fragmentation();
}
void ShenandoahMetricsSnapshot::print() {
log_info(gc, ergo)("Used: before: " SIZE_FORMAT "M, after: " SIZE_FORMAT "M", _used_before/M, _used_after/M);
log_info(gc, ergo)("Internal frag: before: %.1f%%, after: %.1f%%", _if_before * 100, _if_after * 100);
log_info(gc, ergo)("External frag: before: %.1f%%, after: %.1f%%", _ef_before * 100, _ef_after * 100);
}
bool ShenandoahMetricsSnapshot::is_good_progress(const char *label) {
// Under the critical threshold? Declare failure.
bool ShenandoahMetricsSnapshot::is_good_progress() {
// Under the critical threshold?
size_t free_actual = _heap->free_set()->available();
size_t free_expected = _heap->max_capacity() / 100 * ShenandoahCriticalFreeThreshold;
if (free_actual < free_expected) {
log_info(gc, ergo)("Not enough free space (" SIZE_FORMAT "M, need " SIZE_FORMAT "M) after %s",
free_actual / M, free_expected / M, label);
bool prog_free = free_actual >= free_expected;
log_info(gc, ergo)("%s progress for free space: " SIZE_FORMAT "%s, need " SIZE_FORMAT "%s",
prog_free ? "Good" : "Bad",
byte_size_in_proper_unit(free_actual), proper_unit_for_byte_size(free_actual),
byte_size_in_proper_unit(free_expected), proper_unit_for_byte_size(free_expected));
if (!prog_free) {
return false;
}
// Freed up enough? Good! Declare victory.
// Freed up enough?
size_t progress_actual = (_used_before > _used_after) ? _used_before - _used_after : 0;
size_t progress_expected = ShenandoahHeapRegion::region_size_bytes();
if (progress_actual >= progress_expected) {
bool prog_used = progress_actual >= progress_expected;
log_info(gc, ergo)("%s progress for used space: " SIZE_FORMAT "%s, need " SIZE_FORMAT "%s",
prog_used ? "Good" : "Bad",
byte_size_in_proper_unit(progress_actual), proper_unit_for_byte_size(progress_actual),
byte_size_in_proper_unit(progress_expected), proper_unit_for_byte_size(progress_expected));
if (prog_used) {
return true;
}
log_info(gc,ergo)("Not enough progress (" SIZE_FORMAT "M, need " SIZE_FORMAT "M) after %s",
progress_actual / M, progress_expected / M, label);
// Internal fragmentation is down? Good! Declare victory.
// Internal fragmentation is down?
double if_actual = _if_before - _if_after;
double if_expected = 0.01; // 1% should be enough
if (if_actual > if_expected) {
bool prog_if = if_actual >= if_expected;
log_info(gc, ergo)("%s progress for internal fragmentation: %.1f%%, need %.1f%%",
prog_if ? "Good" : "Bad",
if_actual * 100, if_expected * 100);
if (prog_if) {
return true;
}
log_info(gc,ergo)("Not enough internal fragmentation improvement (%.1f%%, need %.1f%%) after %s",
if_actual * 100, if_expected * 100, label);
// External fragmentation is down? Good! Declare victory.
// External fragmentation is down?
double ef_actual = _ef_before - _ef_after;
double ef_expected = 0.01; // 1% should be enough
if (ef_actual > ef_expected) {
bool prog_ef = ef_actual >= ef_expected;
log_info(gc, ergo)("%s progress for external fragmentation: %.1f%%, need %.1f%%",
prog_ef ? "Good" : "Bad",
ef_actual * 100, ef_expected * 100);
if (prog_ef) {
return true;
}
log_info(gc,ergo)("Not enough external fragmentation improvement (%.1f%%, need %.1f%%) after %s",
if_actual * 100, if_expected * 100, label);
// Nothing good had happened.
return false;
@ -47,9 +47,8 @@ public:
void snap_before();
void snap_after();
void print();
bool is_good_progress(const char *label);
bool is_good_progress();
};
#endif // SHARE_GC_SHENANDOAH_SHENANDOAHMETRICS_HPP
@ -988,9 +988,11 @@ GrowableArray<ScopeValue*>* CodeInstaller::record_virtual_objects(JVMCIObject de
JVMCIObject value = JVMCIENV->get_object_at(virtualObjects, i);
int id = jvmci_env()->get_VirtualObject_id(value);
JVMCIObject type = jvmci_env()->get_VirtualObject_type(value);
bool is_auto_box = jvmci_env()->get_VirtualObject_isAutoBox(value);
Klass* klass = jvmci_env()->asKlass(type);
oop javaMirror = klass->java_mirror();
ObjectValue* sv = new ObjectValue(id, new ConstantOopWriteValue(JNIHandles::make_local(Thread::current(), javaMirror)));
ScopeValue *klass_sv = new ConstantOopWriteValue(JNIHandles::make_local(Thread::current(), javaMirror));
ObjectValue* sv = is_auto_box ? new AutoBoxObjectValue(id, klass_sv) : new ObjectValue(id, klass_sv);
if (id < 0 || id >= objects->length()) {
JVMCI_ERROR_NULL("virtual object id %d out of bounds", id);
}
@ -1213,7 +1213,7 @@ C2V_VMENTRY_NULL(jobject, iterateFrames, (JNIEnv* env, jobject compilerToVM, job
}
}
}
bool realloc_failures = Deoptimization::realloc_objects(thread, fst.current(), objects, CHECK_NULL);
bool realloc_failures = Deoptimization::realloc_objects(thread, fst.current(), fst.register_map(), objects, CHECK_NULL);
Deoptimization::reassign_fields(fst.current(), fst.register_map(), objects, realloc_failures, false);
realloc_called = true;
@ -1471,7 +1471,7 @@ C2V_VMENTRY(void, materializeVirtualObjects, (JNIEnv* env, jobject, jobject _hs_
return;
}
bool realloc_failures = Deoptimization::realloc_objects(thread, fstAfterDeopt.current(), objects, CHECK);
bool realloc_failures = Deoptimization::realloc_objects(thread, fstAfterDeopt.current(), fstAfterDeopt.register_map(), objects, CHECK);
Deoptimization::reassign_fields(fstAfterDeopt.current(), fstAfterDeopt.register_map(), objects, realloc_failures, false);
for (int frame_index = 0; frame_index < virtualFrames->length(); frame_index++) {
@ -309,6 +309,7 @@
end_class \
start_class(VirtualObject, jdk_vm_ci_code_VirtualObject) \
int_field(VirtualObject, id) \
boolean_field(VirtualObject, isAutoBox) \
object_field(VirtualObject, type, "Ljdk/vm/ci/meta/ResolvedJavaType;") \
objectarray_field(VirtualObject, values, "[Ljdk/vm/ci/meta/JavaValue;") \
objectarray_field(VirtualObject, slotKinds, "[Ljdk/vm/ci/meta/JavaKind;") \
@ -3306,6 +3306,18 @@ Node* GraphKit::insert_mem_bar_volatile(int opcode, int alias_idx, Node* precede
return membar;
}
void GraphKit::insert_store_load_for_barrier() {
Node* mem = reset_memory();
MemBarNode* mb = MemBarNode::make(C, Op_MemBarVolatile, Compile::AliasIdxRaw);
mb->init_req(TypeFunc::Control, control());
mb->init_req(TypeFunc::Memory, mem);
Node* membar = _gvn.transform(mb);
set_control(_gvn.transform(new ProjNode(membar, TypeFunc::Control)));
Node* newmem = _gvn.transform(new ProjNode(membar, TypeFunc::Memory));
set_all_memory(mem);
set_memory(newmem, Compile::AliasIdxRaw);
}
//------------------------------shared_lock------------------------------------
// Emit locking code.
FastLockNode* GraphKit::shared_lock(Node* obj) {
@ -811,6 +811,7 @@ class GraphKit : public Phase {
int next_monitor();
Node* insert_mem_bar(int opcode, Node* precedent = NULL);
Node* insert_mem_bar_volatile(int opcode, int alias_idx, Node* precedent = NULL);
void insert_store_load_for_barrier();
// Optional 'precedent' is appended as an extra edge, to force ordering.
FastLockNode* shared_lock(Node* obj);
void shared_unlock(Node* box, Node* obj);
@ -286,6 +286,9 @@ Node* IdealLoopTree::reassociate_add_sub(Node* n1, PhaseIdealLoop *phase) {
Node* n2 = n1->in(3 - inv1_idx);
int inv2_idx = is_invariant_addition(n2, phase);
if (!inv2_idx) return NULL;
if (!phase->may_require_nodes(10, 10)) return NULL;
Node* x = n2->in(3 - inv2_idx);
Node* inv2 = n2->in(inv2_idx);
@ -337,61 +340,72 @@ void IdealLoopTree::reassociate_invariants(PhaseIdealLoop *phase) {
Node* nn = reassociate_add_sub(n, phase);
if (nn == NULL) break;
n = nn; // again
};
}
}
}
//------------------------------policy_peeling---------------------------------
// Return TRUE or FALSE if the loop should be peeled or not. Peel if we can
// make some loop-invariant test (usually a null-check) happen before the loop.
bool IdealLoopTree::policy_peeling(PhaseIdealLoop *phase) const {
IdealLoopTree *loop = (IdealLoopTree*)this;
// Return TRUE if the loop should be peeled, otherwise return FALSE. Peeling
// is applicable if we can make a loop-invariant test (usually a null-check)
// execute before we enter the loop. When TRUE, the estimated node budget is
// also requested.
bool IdealLoopTree::policy_peeling(PhaseIdealLoop *phase) {
uint estimate = estimate_peeling(phase);
return estimate == 0 ? false : phase->may_require_nodes(estimate);
}
// Perform actual policy and size estimate for the loop peeling transform, and
// return the estimated loop size if peeling is applicable, otherwise return
// zero. No node budget is allocated.
uint IdealLoopTree::estimate_peeling(PhaseIdealLoop *phase) {
// If nodes are depleted, some transform has miscalculated its needs.
assert(!phase->exceeding_node_budget(), "sanity");
uint body_size = loop->_body.size();
// Peeling does loop cloning which can result in O(N^2) node construction
if (body_size > 255) {
return false; // Prevent overflow for large body size
// Peeling does loop cloning which can result in O(N^2) node construction.
if (_body.size() > 255) {
return 0; // Suppress too large body size.
}
uint estimate = body_size * body_size;
// Optimistic estimate that approximates loop body complexity via data and
// control flow fan-out (instead of using the more pessimistic: BodySize^2).
uint estimate = est_loop_clone_sz(2);
if (phase->exceeding_node_budget(estimate)) {
return false; // Too large to safely clone
return 0; // Too large to safely clone.
}
// check for vectorized loops, any peeling done was already applied
// Check for vectorized loops, any peeling done was already applied.
if (_head->is_CountedLoop()) {
CountedLoopNode* cl = _head->as_CountedLoop();
if (cl->is_unroll_only() || cl->trip_count() == 1) {
return false;
return 0;
}
}
Node* test = loop->tail();
Node* test = tail();
while (test != _head) { // Scan till run off top of loop
if (test->is_If()) { // Test?
while (test != _head) { // Scan till run off top of loop
if (test->is_If()) { // Test?
Node *ctrl = phase->get_ctrl(test->in(1));
if (ctrl->is_top()) {
return false; // Found dead test on live IF? No peeling!
return 0; // Found dead test on live IF? No peeling!
}
// Standard IF only has one input value to check for loop invariance
// Standard IF only has one input value to check for loop invariance.
assert(test->Opcode() == Op_If ||
test->Opcode() == Op_CountedLoopEnd ||
test->Opcode() == Op_RangeCheck,
"Check this code when new subtype is added");
// Condition is not a member of this loop?
if (!is_member(phase->get_loop(ctrl)) && is_loop_exit(test)) {
// Found reason to peel!
return phase->may_require_nodes(estimate);
return estimate; // Found reason to peel!
}
}
// Walk up dominators to loop _head looking for test which is
// executed on every path thru loop.
// Walk up dominators to loop _head looking for test which is executed on
// every path through the loop.
test = phase->idom(test);
}
return false;
return 0;
}
//------------------------------peeled_dom_test_elim---------------------------
@ -638,8 +652,8 @@ void PhaseIdealLoop::do_peeling(IdealLoopTree *loop, Node_List &old_new) {
}
}
// Step 4: Correct dom-depth info. Set to loop-head depth.
int dd = dom_depth(head);
set_idom(head, head->in(1), dd);
for (uint j3 = 0; j3 < loop->_body.size(); j3++) {
@ -657,11 +671,30 @@ void PhaseIdealLoop::do_peeling(IdealLoopTree *loop, Node_List &old_new) {
loop->record_for_igvn();
}
#define EMPTY_LOOP_SIZE 7 // number of nodes in an empty loop
// The Estimated Loop Unroll Size: UnrollFactor * (106% * BodySize + BC) + CC,
// where BC and CC are (totally) ad-hoc/magic "body" and "clone" constants,
// respectively, used to ensure that node usage estimates made are on the safe
// side, for the most part. This is a simplified version of the loop clone
// size calculation in est_loop_clone_sz(), defined for unroll factors larger
// than one (>1), performing an overflow check and returning 'UINT_MAX' in
// case of an overflow.
static uint est_loop_unroll_sz(uint factor, uint size) {
precond(0 < factor);
uint const bc = 5;
uint const cc = 7;
uint const sz = size + (size + 15) / 16;
uint estimate = factor * (sz + bc) + cc;
return (estimate - cc) / factor == sz + bc ? estimate : UINT_MAX;
}
#define EMPTY_LOOP_SIZE 7 // Number of nodes in an empty loop.
//------------------------------policy_maximally_unroll------------------------
// Calculate exact loop trip count and return true if loop can be maximally
// unrolled.
// Calculate the exact loop trip-count and return TRUE if loop can be fully,
// i.e. maximally, unrolled, otherwise return FALSE. When TRUE, the estimated
// node budget is also requested.
bool IdealLoopTree::policy_maximally_unroll(PhaseIdealLoop *phase) const {
CountedLoopNode *cl = _head->as_CountedLoop();
assert(cl->is_normal_loop(), "");
@ -693,7 +726,7 @@ bool IdealLoopTree::policy_maximally_unroll(PhaseIdealLoop *phase) const {
// Take into account that after unroll conjoined heads and tails will fold,
// otherwise policy_unroll() may allow more unrolling than max unrolling.
uint new_body_size = est_loop_clone_sz(trip_count, body_size - EMPTY_LOOP_SIZE);
uint new_body_size = est_loop_unroll_sz(trip_count, body_size - EMPTY_LOOP_SIZE);
if (new_body_size == UINT_MAX) { // Check for bad estimate (overflow).
return false;
@ -742,8 +775,9 @@ bool IdealLoopTree::policy_maximally_unroll(PhaseIdealLoop *phase) const {
//------------------------------policy_unroll----------------------------------
// Return TRUE or FALSE if the loop should be unrolled or not. Unroll if the
// loop is a CountedLoop and the body is small enough.
// Return TRUE or FALSE if the loop should be unrolled or not. Apply unroll if
// the loop is a counted loop and the loop body is small enough. When TRUE,
// the estimated node budget is also requested.
bool IdealLoopTree::policy_unroll(PhaseIdealLoop *phase) {
CountedLoopNode *cl = _head->as_CountedLoop();
@ -887,7 +921,7 @@ bool IdealLoopTree::policy_unroll(PhaseIdealLoop *phase) {
LoopMaxUnroll = slp_max_unroll_factor;
}
uint estimate = est_loop_clone_sz(2, body_size);
uint estimate = est_loop_clone_sz(2);
if (cl->has_passed_slp()) {
if (slp_max_unroll_factor >= future_unroll_cnt) {
@ -958,8 +992,10 @@ bool IdealLoopTree::policy_align(PhaseIdealLoop *phase) const {
}
//------------------------------policy_range_check-----------------------------
// Return TRUE or FALSE if the loop should be range-check-eliminated.
// Actually we do iteration-splitting, a more powerful form of RCE.
// Return TRUE or FALSE if the loop should be range-check-eliminated or not.
// When TRUE, the estimated node budget is also requested.
//
// We will actually perform iteration-splitting, a more powerful form of RCE.
bool IdealLoopTree::policy_range_check(PhaseIdealLoop *phase) const {
if (!RangeCheckElimination) return false;
@ -967,9 +1003,9 @@ bool IdealLoopTree::policy_range_check(PhaseIdealLoop *phase) const {
assert(!phase->exceeding_node_budget(), "sanity");
CountedLoopNode *cl = _head->as_CountedLoop();
// If we unrolled with no intention of doing RCE and we later
// changed our minds, we got no pre-loop. Either we need to
// make a new pre-loop, or we gotta disallow RCE.
// If we unrolled with no intention of doing RCE and we later changed our
// minds, we got no pre-loop. Either we need to make a new pre-loop, or we
// have to disallow RCE.
if (cl->is_main_no_pre_loop()) return false; // Disallowed for now.
Node *trip_counter = cl->phi();
@ -1016,13 +1052,13 @@ bool IdealLoopTree::policy_range_check(PhaseIdealLoop *phase) const {
if (!phase->is_scaled_iv_plus_offset(rc_exp, trip_counter, NULL, NULL)) {
continue;
}
// Found a test like 'trip+off vs limit'. Test is an IfNode, has two
// (2) projections. If BOTH are in the loop we need loop unswitching
// instead of iteration splitting.
// Found a test like 'trip+off vs limit'. Test is an IfNode, has two (2)
// projections. If BOTH are in the loop we need loop unswitching instead
// of iteration splitting.
if (is_loop_exit(iff)) {
// Found valid reason to split iterations (if there is room).
// NOTE: Usually a gross overestimate.
return phase->may_require_nodes(est_loop_clone_sz(2, _body.size()));
return phase->may_require_nodes(est_loop_clone_sz(2));
}
} // End of is IF
}
@ -1521,9 +1557,6 @@ void PhaseIdealLoop::insert_vector_post_loop(IdealLoopTree *loop, Node_List &old
// only process vectorized main loops
if (!cl->is_vectorized_loop() || !cl->is_main_loop()) return;
if (!may_require_nodes(est_loop_clone_sz(2, loop->_body.size()))) {
return;
}
int slp_max_unroll_factor = cl->slp_max_unroll();
int cur_unroll = cl->unrolled_count();
@ -1535,6 +1568,10 @@ void PhaseIdealLoop::insert_vector_post_loop(IdealLoopTree *loop, Node_List &old
// we only ever process this one time
if (cl->has_atomic_post_loop()) return;
if (!may_require_nodes(loop->est_loop_clone_sz(2))) {
return;
}
#ifndef PRODUCT
if (TraceLoopOpts) {
tty->print("PostVector ");
@ -3178,9 +3215,6 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
AutoNodeBudget node_budget(phase);
bool should_peel = policy_peeling(phase);
bool should_unswitch = policy_unswitching(phase);
// Non-counted loops may be peeled; exactly 1 iteration is peeled.
// This removes loop-invariant tests (usually null checks).
if (!_head->is_CountedLoop()) { // Non-counted loop
@ -3188,10 +3222,10 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
// Partial peel succeeded so terminate this round of loop opts
return false;
}
if (should_peel) { // Should we peel?
if (policy_peeling(phase)) { // Should we peel?
if (PrintOpto) { tty->print_cr("should_peel"); }
phase->do_peeling(this,old_new);
} else if (should_unswitch) {
phase->do_peeling(this, old_new);
} else if (policy_unswitching(phase)) {
phase->do_unswitching(this, old_new);
}
return true;
@ -3209,12 +3243,11 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
// Before attempting fancy unrolling, RCE or alignment, see if we want
// to completely unroll this loop or do loop unswitching.
if (cl->is_normal_loop()) {
if (should_unswitch) {
if (policy_unswitching(phase)) {
phase->do_unswitching(this, old_new);
return true;
}
bool should_maximally_unroll = policy_maximally_unroll(phase);
if (should_maximally_unroll) {
if (policy_maximally_unroll(phase)) {
// Here we did some unrolling and peeling. Eventually we will
// completely unroll this loop and it will no longer be a loop.
phase->do_maximally_unroll(this, old_new);
@ -3222,6 +3255,9 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
}
}
uint est_peeling = estimate_peeling(phase);
bool should_peel = 0 < est_peeling;
// Counted loops may be peeled, may need some iterations run up
// front for RCE, and may want to align loop refs to a cache
// line. Thus we clone a full loop up front whose trip count is
@ -3252,14 +3288,15 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
// peeling.
if (should_rce || should_align || should_unroll) {
if (cl->is_normal_loop()) { // Convert to 'pre/main/post' loops
if (!phase->may_require_nodes(est_loop_clone_sz(3, _body.size()))) {
uint estimate = est_loop_clone_sz(3);
if (!phase->may_require_nodes(estimate)) {
return false;
}
phase->insert_pre_post_loops(this,old_new, !may_rce_align);
phase->insert_pre_post_loops(this, old_new, !may_rce_align);
}
// Adjust the pre- and main-loop limits to let the pre and post loops run
// with full checks, but the main-loop with no checks. Remove said
// checks from the main body.
// Adjust the pre- and main-loop limits to let the pre and post loops run
// with full checks, but the main-loop with no checks. Remove said checks
// from the main body.
if (should_rce) {
if (phase->do_range_check(this, old_new) != 0) {
cl->mark_has_range_checks();
@ -3293,7 +3330,9 @@ bool IdealLoopTree::iteration_split_impl(PhaseIdealLoop *phase, Node_List &old_n
}
} else { // Else we have an unchanged counted loop
if (should_peel) { // Might want to peel but do nothing else
phase->do_peeling(this,old_new);
if (phase->may_require_nodes(est_peeling)) {
phase->do_peeling(this, old_new);
}
}
}
return true;


@ -1,5 +1,5 @@
/*
* Copyright (c) 2006, 2012, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2006, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -79,7 +79,7 @@ bool IdealLoopTree::policy_unswitching( PhaseIdealLoop *phase ) const {
}
// Too speculative if running low on nodes.
return phase->may_require_nodes(est_loop_clone_sz(3, _body.size()));
return phase->may_require_nodes(est_loop_clone_sz(2));
}
//------------------------------find_unswitching_candidate-----------------------------
@ -116,7 +116,7 @@ IfNode* PhaseIdealLoop::find_unswitching_candidate(const IdealLoopTree *loop) co
// Clone loop with an invariant test (that does not exit) and
// insert a clone of the test that selects which version to
// execute.
void PhaseIdealLoop::do_unswitching (IdealLoopTree *loop, Node_List &old_new) {
void PhaseIdealLoop::do_unswitching(IdealLoopTree *loop, Node_List &old_new) {
// Find first invariant test that doesn't exit the loop
LoopNode *head = loop->_head->as_Loop();


@ -1,5 +1,5 @@
/*
* Copyright (c) 1998, 2018, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1998, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -2439,12 +2439,63 @@ void IdealLoopTree::counted_loop( PhaseIdealLoop *phase ) {
if (loop->_next) loop->_next ->counted_loop(phase);
}
// The Estimated Loop Clone Size:
// CloneFactor * (~112% * BodySize + BC) + CC + FanOutTerm,
// where BC and CC are totally ad-hoc/magic "body" and "clone" constants,
// respectively, used to ensure that the node usage estimates made are on the
// safe side, for the most part. The FanOutTerm is an attempt to estimate the
// possible additional/excessive nodes generated due to data and control flow
// merging, for edges reaching outside the loop.
uint IdealLoopTree::est_loop_clone_sz(uint factor) const {
precond(0 < factor && factor < 16);
uint const bc = 13;
uint const cc = 17;
uint const sz = _body.size() + (_body.size() + 7) / 8;
uint estimate = factor * (sz + bc) + cc;
assert((estimate - cc) / factor == sz + bc, "overflow");
uint ctrl_edge_out_cnt = 0;
uint data_edge_out_cnt = 0;
for (uint i = 0; i < _body.size(); i++) {
Node* node = _body.at(i);
uint outcnt = node->outcnt();
for (uint k = 0; k < outcnt; k++) {
Node* out = node->raw_out(k);
if (out->is_CFG()) {
if (!is_member(_phase->get_loop(out))) {
ctrl_edge_out_cnt++;
}
} else {
Node* ctrl = _phase->get_ctrl(out);
assert(ctrl->is_CFG(), "must be");
if (!is_member(_phase->get_loop(ctrl))) {
data_edge_out_cnt++;
}
}
}
}
// Add data (x1.5) and control (x1.0) count to estimate iff both are > 0.
if (ctrl_edge_out_cnt > 0 && data_edge_out_cnt > 0) {
estimate += ctrl_edge_out_cnt + data_edge_out_cnt + data_edge_out_cnt / 2;
}
return estimate;
}
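The clone-size arithmetic introduced above can be sketched standalone (a minimal sketch: the constants `bc = 13` and `cc = 17` and the ~112.5% body padding are taken from the new code; the FanOutTerm for edges leaving the loop is omitted for brevity):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the body-size part of est_loop_clone_sz: the body size is
// padded by ~12.5% (size + size/8, rounded up) before the ad-hoc "body"
// and "clone" constants are applied. The fan-out term is omitted.
uint32_t est_clone_sz(uint32_t factor, uint32_t body_size) {
    const uint32_t bc = 13;  // "body" constant
    const uint32_t cc = 17;  // "clone" constant
    uint32_t sz = body_size + (body_size + 7) / 8;   // ~112.5% of body size
    uint32_t estimate = factor * (sz + bc) + cc;
    assert((estimate - cc) / factor == sz + bc && "overflow");
    return estimate;
}
```

For a factor-2 clone of an 8-node body this yields `2 * (9 + 13) + 17 = 61` budgeted nodes.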
#ifndef PRODUCT
//------------------------------dump_head--------------------------------------
// Dump 1 liner for loop header info
void IdealLoopTree::dump_head( ) const {
for (uint i=0; i<_nest; i++)
void IdealLoopTree::dump_head() const {
for (uint i = 0; i < _nest; i++) {
tty->print(" ");
}
tty->print("Loop: N%d/N%d ",_head->_idx,_tail->_idx);
if (_irreducible) tty->print(" IRREDUCIBLE");
Node* entry = _head->is_Loop() ? _head->as_Loop()->skip_strip_mined(-1)->in(LoopNode::EntryControl) : _head->in(LoopNode::EntryControl);
@ -2513,7 +2564,7 @@ void IdealLoopTree::dump_head( ) const {
//------------------------------dump-------------------------------------------
// Dump loops by loop tree
void IdealLoopTree::dump( ) const {
void IdealLoopTree::dump() const {
dump_head();
if (_child) _child->dump();
if (_next) _next ->dump();
@ -2908,8 +2959,8 @@ void PhaseIdealLoop::build_and_optimize(LoopOptsMode mode) {
assert(C->unique() == unique, "non-optimize mode made Nodes? ? ?");
return;
}
if(VerifyLoopOptimizations) verify();
if(TraceLoopOpts && C->has_loops()) {
if (VerifyLoopOptimizations) verify();
if (TraceLoopOpts && C->has_loops()) {
_ltree_root->dump();
}
#endif
@ -2938,7 +2989,6 @@ void PhaseIdealLoop::build_and_optimize(LoopOptsMode mode) {
}
if (ReassociateInvariants) {
AutoNodeBudget node_budget(this, AutoNodeBudget::NO_BUDGET_CHECK);
// Reassociate invariants and prep for split_thru_phi
for (LoopTreeIterator iter(_ltree_root); !iter.done(); iter.next()) {
IdealLoopTree* lpt = iter.current();
@ -2946,14 +2996,17 @@ void PhaseIdealLoop::build_and_optimize(LoopOptsMode mode) {
if (!is_counted || !lpt->is_innermost()) continue;
// check for vectorized loops, any reassociation of invariants was already done
if (is_counted && lpt->_head->as_CountedLoop()->is_unroll_only()) continue;
lpt->reassociate_invariants(this);
if (is_counted && lpt->_head->as_CountedLoop()->is_unroll_only()) {
continue;
} else {
AutoNodeBudget node_budget(this);
lpt->reassociate_invariants(this);
}
// Because RCE opportunities can be masked by split_thru_phi,
// look for RCE candidates and inhibit split_thru_phi
// on just their loop-phi's for this pass of loop opts
if (SplitIfBlocks && do_split_ifs) {
AutoNodeBudget node_budget(this, AutoNodeBudget::NO_BUDGET_CHECK);
if (lpt->policy_range_check(this)) {
lpt->_rce_candidate = 1; // = true
}


@ -589,17 +589,18 @@ public:
// Convert one iteration loop into normal code.
bool do_one_iteration_loop( PhaseIdealLoop *phase );
// Return TRUE or FALSE if the loop should be peeled or not. Peel if we can
// make some loop-invariant test (usually a null-check) happen before the
// loop.
bool policy_peeling( PhaseIdealLoop *phase ) const;
// Return TRUE or FALSE if the loop should be peeled or not. Peel if we can
// move some loop-invariant test (usually a null-check) before the loop.
bool policy_peeling(PhaseIdealLoop *phase);
uint estimate_peeling(PhaseIdealLoop *phase);
// Return TRUE or FALSE if the loop should be maximally unrolled. Stash any
// known trip count in the counted loop node.
bool policy_maximally_unroll( PhaseIdealLoop *phase ) const;
bool policy_maximally_unroll(PhaseIdealLoop *phase) const;
// Return TRUE or FALSE if the loop should be unrolled or not. Unroll if
// the loop is a CountedLoop and the body is small enough.
// Return TRUE or FALSE if the loop should be unrolled or not. Apply unroll
// if the loop is a counted loop and the loop body is small enough.
bool policy_unroll(PhaseIdealLoop *phase);
// Loop analyses to map to a maximal superword unrolling for vectorization.
@ -620,6 +621,9 @@ public:
// Return TRUE if "iff" is a range check.
bool is_range_check_if(IfNode *iff, PhaseIdealLoop *phase, Invariance& invar) const;
// Estimate the number of nodes required when cloning a loop (body).
uint est_loop_clone_sz(uint factor) const;
// Compute loop trip count if possible
void compute_trip_count(PhaseIdealLoop* phase);
@ -1356,50 +1360,66 @@ private:
// < UINT_MAX Nodes currently requested (estimate).
uint _nodes_required;
enum { REQUIRE_MIN = 70 };
uint nodes_required() const { return _nodes_required; }
// Given the _currently_ available number of nodes, check whether there is
// "room" for an additional request or not, considering the already required
// number of nodes. Return TRUE if the new request is exceeding the node
// budget limit, otherwise return FALSE. Note that this interpretation will
// act pessimistic on additional requests when new nodes have already been
// generated since the 'begin'. This behaviour fits with the intention that
// node estimates/requests should be made upfront.
bool exceeding_node_budget(uint required = 0) {
assert(C->live_nodes() < C->max_node_limit(), "sanity");
uint available = C->max_node_limit() - C->live_nodes();
return available < required + _nodes_required;
}
uint require_nodes(uint require) {
uint require_nodes(uint require, uint minreq = REQUIRE_MIN) {
precond(require > 0);
_nodes_required += MAX2(100u, require); // Keep requests at minimum 100.
_nodes_required += MAX2(require, minreq);
return _nodes_required;
}
bool may_require_nodes(uint require) {
return !exceeding_node_budget(require) && require_nodes(require) > 0;
bool may_require_nodes(uint require, uint minreq = REQUIRE_MIN) {
return !exceeding_node_budget(require) && require_nodes(require, minreq) > 0;
}
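The check-then-reserve pattern of `exceeding_node_budget` / `may_require_nodes` can be sketched in isolation (names simplified; the `REQUIRE_MIN` floor of 70 is taken from the patch, the `Compile*` bookkeeping is replaced by plain fields):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Minimal sketch of the node budget: a request is granted only if it fits
// in the headroom left after earlier requests, and every granted request
// is bumped to a minimum floor so small estimates stay conservative.
struct NodeBudget {
    uint32_t max_nodes;     // stands in for C->max_node_limit()
    uint32_t live_nodes;    // stands in for C->live_nodes()
    uint32_t required = 0;  // stands in for _nodes_required

    bool exceeding(uint32_t request) const {
        uint32_t available = max_nodes - live_nodes;
        return available < request + required;
    }
    // Grant and record the request (bumped to the floor) iff it fits.
    bool may_require(uint32_t request, uint32_t minreq = 70) {
        if (exceeding(request)) return false;
        required += std::max(request, minreq);
        return true;
    }
};
```

Note the pessimistic reading described in the comment above: already-recorded requests count against every later check, so estimates are expected to be made up front.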
void require_nodes_begin() {
uint require_nodes_begin() {
assert(_nodes_required == UINT_MAX, "Bad state (begin).");
_nodes_required = 0;
return C->live_nodes();
}
// Final check that the requested nodes did not exceed the limit and that
// the request was reasonably correct with respect to the number of new
// nodes introduced by any transform since the last 'begin'.
void require_nodes_final_check(uint live_at_begin) {
uint required = _nodes_required;
require_nodes_final();
uint delta = C->live_nodes() - live_at_begin;
// Assert is disabled, see JDK-8223911 and related issues.
assert(true || delta <= 2 * required, "Bad node estimate (actual: %d, request: %d)",
delta, required);
}
void require_nodes_final() {
// When a node request is final, optionally check that the requested number
// of nodes was reasonably correct with respect to the number of new nodes
// introduced since the last 'begin'. Always check that we have not exceeded
// the maximum node limit.
void require_nodes_final(uint live_at_begin, bool check_estimate) {
assert(_nodes_required < UINT_MAX, "Bad state (final).");
assert(!exceeding_node_budget(), "Too many NODES required!");
if (check_estimate) {
// Assert that the node budget request was not off by too much (x2).
// Should this be the case we _surely_ need to improve the estimates
// used in our budget calculations.
assert(C->live_nodes() - live_at_begin <= 2 * _nodes_required,
"Bad node estimate: actual = %d >> request = %d",
C->live_nodes() - live_at_begin, _nodes_required);
}
// Assert that we have stayed within the node budget limit.
assert(C->live_nodes() < C->max_node_limit(),
"Exceeding node budget limit: %d + %d > %d (request = %d)",
C->live_nodes() - live_at_begin, live_at_begin,
C->max_node_limit(), _nodes_required);
_nodes_required = UINT_MAX;
}
bool _created_loop_node;
public:
uint nodes_required() const { return _nodes_required; }
void set_created_loop_node() { _created_loop_node = true; }
bool created_loop_node() { return _created_loop_node; }
void register_new_node( Node *n, Node *blk );
@ -1438,29 +1458,30 @@ public:
{
precond(_phase != NULL);
_nodes_at_begin = _phase->C->live_nodes();
_phase->require_nodes_begin();
_nodes_at_begin = _phase->require_nodes_begin();
}
~AutoNodeBudget() {
if (_check_at_final) {
#ifndef PRODUCT
if (TraceLoopOpts) {
uint request = _phase->nodes_required();
if (TraceLoopOpts) {
uint request = _phase->nodes_required();
uint delta = _phase->C->live_nodes() - _nodes_at_begin;
if (request > 0) {
uint delta = _phase->C->live_nodes() - _nodes_at_begin;
if (request < delta) {
tty->print_cr("Exceeding node budget: %d < %d", request, delta);
if (request < delta) {
tty->print_cr("Exceeding node budget: %d < %d", request, delta);
} else {
uint const REQUIRE_MIN = PhaseIdealLoop::REQUIRE_MIN;
// Identify the worst estimates as "poor" ones.
if (request > REQUIRE_MIN && delta > 0) {
if ((delta > REQUIRE_MIN && request > 3 * delta) ||
(delta <= REQUIRE_MIN && request > 10 * delta)) {
tty->print_cr("Poor node estimate: %d >> %d", request, delta);
}
}
}
#endif
_phase->require_nodes_final_check(_nodes_at_begin);
} else {
_phase->require_nodes_final();
}
#endif // PRODUCT
_phase->require_nodes_final(_nodes_at_begin, _check_at_final);
}
private:
@ -1469,17 +1490,6 @@ private:
uint _nodes_at_begin;
};
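The `AutoNodeBudget` class above is a scope guard: `require_nodes_begin()` runs in the constructor and `require_nodes_final()` in the destructor, so every exit path closes the budget window. A minimal sketch of that RAII shape (an illustration only, not the HotSpot types):

```cpp
#include <cassert>

// RAII sketch: the guard opens a budget window on construction and closes
// it on destruction, so early returns cannot leave the budget half-open.
struct BudgetScope {
    int* state;  // stands in for the _nodes_required bookkeeping
    explicit BudgetScope(int* s) : state(s) { *state = 0; }  // "begin"
    ~BudgetScope() { *state = -1; }                          // "final"
};
```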
// The Estimated Loop Clone Size: CloneFactor * (BodySize + BC) + CC, where BC
// and CC are totally ad-hoc/magic "body" and "clone" constants, respectively,
// used to ensure that node usage estimates made are on the safe side, for the
// most part.
static inline uint est_loop_clone_sz(uint fact, uint size) {
uint const bc = 31;
uint const cc = 41;
uint estimate = fact * (size + bc) + cc;
return (estimate - cc) / fact == size + bc ? estimate : UINT_MAX;
}
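The old inline helper being removed used a division-based overflow check: dividing the result back recovers `size + bc` only if the 32-bit multiplication did not wrap, and a mismatch yields `UINT_MAX` as a sentinel. A standalone sketch of that technique (constants `bc = 31`, `cc = 41` from the removed code):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Division-based overflow check: if fact * (size + bc) wrapped in 32-bit
// arithmetic, dividing back will not reproduce (size + bc), and UINT_MAX
// is returned as an "estimate unusable" sentinel.
uint32_t checked_estimate(uint32_t fact, uint32_t size) {
    const uint32_t bc = 31, cc = 41;
    uint32_t estimate = fact * (size + bc) + cc;
    return (estimate - cc) / fact == size + bc
        ? estimate
        : std::numeric_limits<uint32_t>::max();
}
```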
// This kit may be used for making of a reserved copy of a loop before this loop
// goes under non-reversible changes.


@ -1,5 +1,5 @@
/*
* Copyright (c) 1999, 2018, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 1999, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@ -3029,16 +3029,16 @@ bool PhaseIdealLoop::partial_peel( IdealLoopTree *loop, Node_List &old_new ) {
assert(!loop->_head->is_CountedLoop(), "Non-counted loop only");
if (!loop->_head->is_Loop()) {
return false; }
LoopNode *head = loop->_head->as_Loop();
return false;
}
LoopNode *head = loop->_head->as_Loop();
if (head->is_partial_peel_loop() || head->partial_peel_has_failed()) {
return false;
}
// Check for complex exit control
for(uint ii = 0; ii < loop->_body.size(); ii++ ) {
for (uint ii = 0; ii < loop->_body.size(); ii++) {
Node *n = loop->_body.at(ii);
int opc = n->Opcode();
if (n->is_Call() ||
@ -3065,12 +3065,12 @@ bool PhaseIdealLoop::partial_peel( IdealLoopTree *loop, Node_List &old_new ) {
IfNode *peel_if_cmpu = NULL;
Node *iff = loop->tail();
while( iff != head ) {
if( iff->is_If() ) {
while (iff != head) {
if (iff->is_If()) {
Node *ctrl = get_ctrl(iff->in(1));
if (ctrl->is_top()) return false; // Dead test on live IF.
// If loop-varying exit-test, check for induction variable
if( loop->is_member(get_loop(ctrl)) &&
if (loop->is_member(get_loop(ctrl)) &&
loop->is_loop_exit(iff) &&
is_possible_iv_test(iff)) {
Node* cmp = iff->in(1)->in(1);
@ -3084,6 +3084,7 @@ bool PhaseIdealLoop::partial_peel( IdealLoopTree *loop, Node_List &old_new ) {
}
iff = idom(iff);
}
// Prefer signed compare over unsigned compare.
IfNode* new_peel_if = NULL;
if (peel_if == NULL) {
@ -3131,7 +3132,7 @@ bool PhaseIdealLoop::partial_peel( IdealLoopTree *loop, Node_List &old_new ) {
Node_List worklist(area);
Node_List sink_list(area);
if (!may_require_nodes(est_loop_clone_sz(2, loop->_body.size()))) {
if (!may_require_nodes(loop->est_loop_clone_sz(2))) {
return false;
}


@ -518,7 +518,7 @@ WB_ENTRY(jlong, WB_DramReservedEnd(JNIEnv* env, jobject o))
uint end_region = HeterogeneousHeapRegionManager::manager()->end_index_of_dram();
return (jlong)(g1h->base() + (end_region + 1) * HeapRegion::GrainBytes - 1);
} else {
return (jlong)g1h->base() + G1Arguments::heap_reserved_size_bytes();
return (jlong)g1h->base() + G1Arguments::heap_max_size_bytes();
}
}
#endif // INCLUDE_G1GC


@ -50,7 +50,10 @@
#include "runtime/biasedLocking.hpp"
#include "runtime/compilationPolicy.hpp"
#include "runtime/deoptimization.hpp"
#include "runtime/fieldDescriptor.hpp"
#include "runtime/fieldDescriptor.inline.hpp"
#include "runtime/frame.inline.hpp"
#include "runtime/jniHandles.inline.hpp"
#include "runtime/handles.inline.hpp"
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/safepointVerifiers.hpp"
@ -232,7 +235,7 @@ Deoptimization::UnrollBlock* Deoptimization::fetch_unroll_info_helper(JavaThread
}
if (objects != NULL) {
JRT_BLOCK
realloc_failures = realloc_objects(thread, &deoptee, objects, THREAD);
realloc_failures = realloc_objects(thread, &deoptee, &map, objects, THREAD);
JRT_END
bool skip_internal = (cm != NULL) && !cm->is_compiled_by_jvmci();
reassign_fields(&deoptee, &map, objects, realloc_failures, skip_internal);
@ -810,8 +813,131 @@ void Deoptimization::deoptimize_all_marked() {
Deoptimization::DeoptAction Deoptimization::_unloaded_action
= Deoptimization::Action_reinterpret;
#if INCLUDE_JVMCI || INCLUDE_AOT
template<typename CacheType>
class BoxCacheBase : public CHeapObj<mtCompiler> {
protected:
static InstanceKlass* find_cache_klass(Symbol* klass_name, TRAPS) {
ResourceMark rm;
char* klass_name_str = klass_name->as_C_string();
Klass* k = SystemDictionary::find(klass_name, Handle(), Handle(), THREAD);
guarantee(k != NULL, "%s must be loaded", klass_name_str);
InstanceKlass* ik = InstanceKlass::cast(k);
guarantee(ik->is_initialized(), "%s must be initialized", klass_name_str);
CacheType::compute_offsets(ik);
return ik;
}
};
template<typename PrimitiveType, typename CacheType, typename BoxType> class BoxCache : public BoxCacheBase<CacheType> {
PrimitiveType _low;
PrimitiveType _high;
jobject _cache;
protected:
static BoxCache<PrimitiveType, CacheType, BoxType> *_singleton;
BoxCache(Thread* thread) {
InstanceKlass* ik = BoxCacheBase<CacheType>::find_cache_klass(CacheType::symbol(), thread);
objArrayOop cache = CacheType::cache(ik);
assert(cache->length() > 0, "Empty cache");
_low = BoxType::value(cache->obj_at(0));
_high = _low + cache->length() - 1;
_cache = JNIHandles::make_global(Handle(thread, cache));
}
~BoxCache() {
JNIHandles::destroy_global(_cache);
}
public:
static BoxCache<PrimitiveType, CacheType, BoxType>* singleton(Thread* thread) {
if (_singleton == NULL) {
BoxCache<PrimitiveType, CacheType, BoxType>* s = new BoxCache<PrimitiveType, CacheType, BoxType>(thread);
if (!Atomic::replace_if_null(s, &_singleton)) {
delete s;
}
}
return _singleton;
}
oop lookup(PrimitiveType value) {
if (_low <= value && value <= _high) {
int offset = value - _low;
return objArrayOop(JNIHandles::resolve_non_null(_cache))->obj_at(offset);
}
return NULL;
}
};
typedef BoxCache<jint, java_lang_Integer_IntegerCache, java_lang_Integer> IntegerBoxCache;
typedef BoxCache<jlong, java_lang_Long_LongCache, java_lang_Long> LongBoxCache;
typedef BoxCache<jchar, java_lang_Character_CharacterCache, java_lang_Character> CharacterBoxCache;
typedef BoxCache<jshort, java_lang_Short_ShortCache, java_lang_Short> ShortBoxCache;
typedef BoxCache<jbyte, java_lang_Byte_ByteCache, java_lang_Byte> ByteBoxCache;
template<> BoxCache<jint, java_lang_Integer_IntegerCache, java_lang_Integer>* BoxCache<jint, java_lang_Integer_IntegerCache, java_lang_Integer>::_singleton = NULL;
template<> BoxCache<jlong, java_lang_Long_LongCache, java_lang_Long>* BoxCache<jlong, java_lang_Long_LongCache, java_lang_Long>::_singleton = NULL;
template<> BoxCache<jchar, java_lang_Character_CharacterCache, java_lang_Character>* BoxCache<jchar, java_lang_Character_CharacterCache, java_lang_Character>::_singleton = NULL;
template<> BoxCache<jshort, java_lang_Short_ShortCache, java_lang_Short>* BoxCache<jshort, java_lang_Short_ShortCache, java_lang_Short>::_singleton = NULL;
template<> BoxCache<jbyte, java_lang_Byte_ByteCache, java_lang_Byte>* BoxCache<jbyte, java_lang_Byte_ByteCache, java_lang_Byte>::_singleton = NULL;
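The `lookup` logic of `BoxCache` mirrors the Java box caches (e.g. `Integer.valueOf` over [-128, 127]): a value inside [low, high] maps to index `value - low` in the cached array; anything outside means no cached box exists and the caller must allocate. A simplified sketch (plain ints stand in for the oop array):

```cpp
#include <cassert>
#include <vector>

// Sketch of the range-based cache lookup: contiguous values [low, high]
// are stored at indices 0..(high - low); out-of-range values return
// nullptr (NULL in the real code), meaning "allocate a fresh box".
struct IntCache {
    int low, high;
    std::vector<int> boxes;  // stands in for the cached oop array
    IntCache(int lo, int hi) : low(lo), high(hi) {
        for (int v = lo; v <= hi; v++) boxes.push_back(v);
    }
    const int* lookup(int value) const {
        if (low <= value && value <= high) return &boxes[value - low];
        return nullptr;
    }
};
```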
class BooleanBoxCache : public BoxCacheBase<java_lang_Boolean> {
jobject _true_cache;
jobject _false_cache;
protected:
static BooleanBoxCache *_singleton;
BooleanBoxCache(Thread *thread) {
InstanceKlass* ik = find_cache_klass(java_lang_Boolean::symbol(), thread);
_true_cache = JNIHandles::make_global(Handle(thread, java_lang_Boolean::get_TRUE(ik)));
_false_cache = JNIHandles::make_global(Handle(thread, java_lang_Boolean::get_FALSE(ik)));
}
~BooleanBoxCache() {
JNIHandles::destroy_global(_true_cache);
JNIHandles::destroy_global(_false_cache);
}
public:
static BooleanBoxCache* singleton(Thread* thread) {
if (_singleton == NULL) {
BooleanBoxCache* s = new BooleanBoxCache(thread);
if (!Atomic::replace_if_null(s, &_singleton)) {
delete s;
}
}
return _singleton;
}
oop lookup(jboolean value) {
if (value != 0) {
return JNIHandles::resolve_non_null(_true_cache);
}
return JNIHandles::resolve_non_null(_false_cache);
}
};
BooleanBoxCache* BooleanBoxCache::_singleton = NULL;
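Both cache singletons use a racy-initialization pattern: any thread may build a candidate instance, but only one wins the compare-and-swap (`Atomic::replace_if_null` here), and losers delete their copy. A sketch of the same pattern with `std::atomic` (types and payload are placeholders):

```cpp
#include <atomic>
#include <cassert>

// Racy-init singleton: construct a candidate, try to install it with a
// CAS against nullptr, and delete the candidate if another thread won.
struct Cache { int payload = 7; };

std::atomic<Cache*> g_singleton{nullptr};

Cache* singleton() {
    if (g_singleton.load(std::memory_order_acquire) == nullptr) {
        Cache* s = new Cache();
        Cache* expected = nullptr;
        if (!g_singleton.compare_exchange_strong(expected, s)) {
            delete s;  // another thread installed its instance first
        }
    }
    return g_singleton.load(std::memory_order_acquire);
}
```

The winning instance is intentionally never freed, matching the `CHeapObj`-lifetime caches above.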
oop Deoptimization::get_cached_box(AutoBoxObjectValue* bv, frame* fr, RegisterMap* reg_map, TRAPS) {
Klass* k = java_lang_Class::as_Klass(bv->klass()->as_ConstantOopReadValue()->value()());
BasicType box_type = SystemDictionary::box_klass_type(k);
if (box_type != T_OBJECT) {
StackValue* value = StackValue::create_stack_value(fr, reg_map, bv->field_at(0));
switch(box_type) {
case T_INT: return IntegerBoxCache::singleton(THREAD)->lookup(value->get_int());
case T_LONG: {
StackValue* low = StackValue::create_stack_value(fr, reg_map, bv->field_at(1));
jlong res = (jlong)low->get_int();
return LongBoxCache::singleton(THREAD)->lookup(res);
}
case T_CHAR: return CharacterBoxCache::singleton(THREAD)->lookup(value->get_int());
case T_SHORT: return ShortBoxCache::singleton(THREAD)->lookup(value->get_int());
case T_BYTE: return ByteBoxCache::singleton(THREAD)->lookup(value->get_int());
case T_BOOLEAN: return BooleanBoxCache::singleton(THREAD)->lookup(value->get_int());
default:;
}
}
return NULL;
}
#endif // INCLUDE_JVMCI || INCLUDE_AOT
#if COMPILER2_OR_JVMCI
bool Deoptimization::realloc_objects(JavaThread* thread, frame* fr, GrowableArray<ScopeValue*>* objects, TRAPS) {
bool Deoptimization::realloc_objects(JavaThread* thread, frame* fr, RegisterMap* reg_map, GrowableArray<ScopeValue*>* objects, TRAPS) {
Handle pending_exception(THREAD, thread->pending_exception());
const char* exception_file = thread->exception_file();
int exception_line = thread->exception_line();
@ -827,8 +953,21 @@ bool Deoptimization::realloc_objects(JavaThread* thread, frame* fr, GrowableArra
oop obj = NULL;
if (k->is_instance_klass()) {
#if INCLUDE_JVMCI || INCLUDE_AOT
CompiledMethod* cm = fr->cb()->as_compiled_method_or_null();
if (cm->is_compiled_by_jvmci() && sv->is_auto_box()) {
AutoBoxObjectValue* abv = (AutoBoxObjectValue*) sv;
obj = get_cached_box(abv, fr, reg_map, THREAD);
if (obj != NULL) {
// Set the flag to indicate the box came from a cache, so that we can skip the field reassignment for it.
abv->set_cached(true);
}
}
#endif // INCLUDE_JVMCI || INCLUDE_AOT
InstanceKlass* ik = InstanceKlass::cast(k);
obj = ik->allocate_instance(THREAD);
if (obj == NULL) {
obj = ik->allocate_instance(THREAD);
}
} else if (k->is_typeArray_klass()) {
TypeArrayKlass* ak = TypeArrayKlass::cast(k);
assert(sv->field_size() % type2size[ak->element_type()] == 0, "non-integral array length");
@ -1101,7 +1240,12 @@ void Deoptimization::reassign_fields(frame* fr, RegisterMap* reg_map, GrowableAr
if (obj.is_null()) {
continue;
}
#if INCLUDE_JVMCI || INCLUDE_AOT
// Don't reassign fields of boxes that came from a cache. Caches may be in CDS.
if (sv->is_auto_box() && ((AutoBoxObjectValue*) sv)->is_cached()) {
continue;
}
#endif // INCLUDE_JVMCI || INCLUDE_AOT
if (k->is_instance_klass()) {
InstanceKlass* ik = InstanceKlass::cast(k);
reassign_fields_by_klass(ik, fr, reg_map, sv, 0, obj(), skip_internal);


@ -33,6 +33,7 @@ class vframeArray;
class MonitorInfo;
class MonitorValue;
class ObjectValue;
class AutoBoxObjectValue;
class ScopeValue;
class compiledVFrame;
@ -153,6 +154,7 @@ class Deoptimization : AllStatic {
#if INCLUDE_JVMCI
static address deoptimize_for_missing_exception_handler(CompiledMethod* cm);
static oop get_cached_box(AutoBoxObjectValue* bv, frame* fr, RegisterMap* reg_map, TRAPS);
#endif
private:
@ -169,7 +171,7 @@ class Deoptimization : AllStatic {
JVMCI_ONLY(public:)
// Support for restoring non-escaping objects
static bool realloc_objects(JavaThread* thread, frame* fr, GrowableArray<ScopeValue*>* objects, TRAPS);
static bool realloc_objects(JavaThread* thread, frame* fr, RegisterMap* reg_map, GrowableArray<ScopeValue*>* objects, TRAPS);
static void reassign_type_array_elements(frame* fr, RegisterMap* reg_map, ObjectValue* sv, typeArrayOop obj, BasicType type);
static void reassign_object_array_elements(frame* fr, RegisterMap* reg_map, ObjectValue* sv, objArrayOop obj);
static void reassign_fields(frame* fr, RegisterMap* reg_map, GrowableArray<ScopeValue*>* objects, bool realloc_failures, bool skip_internal);


@ -396,10 +396,8 @@ JNIHandleBlock* JNIHandleBlock::allocate_block(Thread* thread) {
block->_next = NULL;
block->_pop_frame_link = NULL;
block->_planned_capacity = block_size_in_oops;
// _last, _free_list & _allocate_before_rebuild initialized in allocate_handle
// _last initialized in allocate_handle
debug_only(block->_last = NULL);
debug_only(block->_free_list = NULL);
debug_only(block->_allocate_before_rebuild = -1);
return block;
}
@ -460,12 +458,7 @@ void JNIHandleBlock::oops_do(OopClosure* f) {
"only blocks first in chain should have pop frame link set");
for (int index = 0; index < current->_top; index++) {
oop* root = &(current->_handles)[index];
oop value = *root;
// traverse heap pointers only, not deleted handles or free list
// pointers
if (value != NULL && Universe::heap()->is_in_reserved(value)) {
f->do_oop(root);
}
f->do_oop(root);
}
// the next handle block is valid only if current block is full
if (current->_top < block_size_in_oops) {
@ -486,8 +479,6 @@ jobject JNIHandleBlock::allocate_handle(oop obj) {
for (JNIHandleBlock* current = _next; current != NULL;
current = current->_next) {
assert(current->_last == NULL, "only first block should have _last set");
assert(current->_free_list == NULL,
"only first block should have _free_list set");
if (current->_top == 0) {
// All blocks after the first clear trailing block are already cleared.
#ifdef ASSERT
@ -501,8 +492,6 @@ jobject JNIHandleBlock::allocate_handle(oop obj) {
current->zap();
}
// Clear initial block
_free_list = NULL;
_allocate_before_rebuild = 0;
_last = this;
zap();
}
@ -514,13 +503,6 @@ jobject JNIHandleBlock::allocate_handle(oop obj) {
return (jobject) handle;
}
// Try free list
if (_free_list != NULL) {
oop* handle = _free_list;
_free_list = (oop*) *_free_list;
NativeAccess<IS_DEST_UNINITIALIZED>::oop_store(handle, obj);
return (jobject) handle;
}
// Check if unused block follow last
if (_last->_next != NULL) {
// update last and retry
@ -528,51 +510,16 @@ jobject JNIHandleBlock::allocate_handle(oop obj) {
return allocate_handle(obj);
}
// No space available, we have to rebuild free list or expand
if (_allocate_before_rebuild == 0) {
rebuild_free_list(); // updates _allocate_before_rebuild counter
} else {
// Append new block
Thread* thread = Thread::current();
Handle obj_handle(thread, obj);
// This can block, so we need to preserve obj across call.
_last->_next = JNIHandleBlock::allocate_block(thread);
_last = _last->_next;
_allocate_before_rebuild--;
obj = obj_handle();
}
// Append new block
Thread* thread = Thread::current();
Handle obj_handle(thread, obj);
// This can block, so we need to preserve obj across call.
_last->_next = JNIHandleBlock::allocate_block(thread);
_last = _last->_next;
obj = obj_handle();
return allocate_handle(obj); // retry
}
void JNIHandleBlock::rebuild_free_list() {
assert(_allocate_before_rebuild == 0 && _free_list == NULL, "just checking");
int free = 0;
int blocks = 0;
for (JNIHandleBlock* current = this; current != NULL; current = current->_next) {
for (int index = 0; index < current->_top; index++) {
oop* handle = &(current->_handles)[index];
if (*handle == NULL) {
// this handle was cleared out by a delete call, reuse it
*handle = (oop) _free_list;
_free_list = handle;
free++;
}
}
// we should not rebuild free list if there are unused handles at the end
assert(current->_top == block_size_in_oops, "just checking");
blocks++;
}
// Heuristic: if more than half of the handles are free we rebuild next time
// as well, otherwise we append a corresponding number of new blocks before
// attempting a free list rebuild again.
int total = blocks * block_size_in_oops;
int extra = total - 2*free;
if (extra > 0) {
// Not as many free handles as we would like - compute number of new blocks to append
_allocate_before_rebuild = (extra + block_size_in_oops - 1) / block_size_in_oops;
}
}
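The heuristic in the removed `rebuild_free_list` can be stated on its own: if fewer than half of the handles turned out to be free, append enough whole blocks to cover the shortfall before attempting another rebuild; otherwise rebuild again next time. A sketch of that computation (returning the `_allocate_before_rebuild` count):

```cpp
#include <cassert>

// Sketch of the removed heuristic: extra = total_handles - 2 * free is
// positive when less than half the handles are free; it is then rounded
// up to whole blocks. A non-positive extra means "rebuild next time too".
int blocks_before_next_rebuild(int blocks, int block_size_in_oops, int free) {
    int total = blocks * block_size_in_oops;
    int extra = total - 2 * free;
    if (extra > 0) {
        return (extra + block_size_in_oops - 1) / block_size_in_oops;
    }
    return 0;
}
```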
bool JNIHandleBlock::contains(jobject handle) const {
return ((jobject)&_handles[0] <= handle && handle < (jobject)&_handles[_top]);

View File

@@ -148,8 +148,6 @@ class JNIHandleBlock : public CHeapObj<mtInternal> {
// Having two types of blocks complicates the code and the space overhead is negligible.
JNIHandleBlock* _last; // Last block in use
JNIHandleBlock* _pop_frame_link; // Block to restore on PopLocalFrame call
oop* _free_list; // Handle free list
int _allocate_before_rebuild; // Number of blocks to allocate before rebuilding free list
// Check JNI, "planned capacity" for current frame (or push/ensure)
size_t _planned_capacity;
@@ -165,9 +163,6 @@ class JNIHandleBlock : public CHeapObj<mtInternal> {
// Fill block with bad_handle values
void zap() NOT_DEBUG_RETURN;
// Free list computation
void rebuild_free_list();
// No more handles in both the current and following blocks
void clear() { _top = 0; }

View File

@@ -24,6 +24,7 @@
#include "precompiled.hpp"
#include "jvm.h"
#include "aot/aotLoader.hpp"
#include "classfile/classLoader.hpp"
#include "classfile/javaClasses.hpp"
#include "classfile/moduleEntry.hpp"
@@ -3650,6 +3651,9 @@ void Threads::initialize_java_lang_classes(JavaThread* main_thread, TRAPS) {
initialize_class(vmSymbols::java_lang_StackOverflowError(), CHECK);
initialize_class(vmSymbols::java_lang_IllegalMonitorStateException(), CHECK);
initialize_class(vmSymbols::java_lang_IllegalArgumentException(), CHECK);
// Eager box cache initialization only if AOT is on and any library is loaded.
AOTLoader::initialize_box_caches(CHECK);
}
void Threads::initialize_jsr292_core_classes(TRAPS) {
@@ -3912,6 +3916,7 @@ jint Threads::create_vm(JavaVMInitArgs* args, bool* canTryAgain) {
Chunk::start_chunk_pool_cleaner_task();
}
// initialize compiler(s)
#if defined(COMPILER1) || COMPILER2_OR_JVMCI
#if INCLUDE_JVMCI

View File

@@ -65,7 +65,7 @@ import java.util.jar.JarFile;
* <p>
* Once an agent acquires an <code>Instrumentation</code> instance,
* the agent may call methods on the instance at any time.
* <p>
*
* @apiNote This interface is not intended to be implemented outside of
* the java.instrument module.
*

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2010, 2015, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2010, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -43,6 +43,7 @@ public final class VirtualObject implements JavaValue {
private JavaValue[] values;
private JavaKind[] slotKinds;
private final int id;
private boolean isAutoBox;
/**
* Creates a new {@link VirtualObject} for the given type, with the given fields. If
@@ -58,12 +59,33 @@ public final class VirtualObject implements JavaValue {
* @return a new {@link VirtualObject} instance.
*/
public static VirtualObject get(ResolvedJavaType type, int id) {
return new VirtualObject(type, id);
return new VirtualObject(type, id, false);
}
private VirtualObject(ResolvedJavaType type, int id) {
/**
* Creates a new {@link VirtualObject} for the given type, with the given fields. If
* {@code type} is an instance class then {@code values} provides the values for the fields
* returned by {@link ResolvedJavaType#getInstanceFields(boolean) getInstanceFields(true)}. If
* {@code type} is an array then the length of the values array determines the reallocated array
* length.
*
* @param type the type of the object whose allocation was removed during compilation. This can
* be either an instance or an array type.
* @param id a unique id that identifies the object within the debug information for one
* position in the compiled code.
* @param isAutoBox a flag that tells the runtime that the object may be a boxed primitive and
* that it possibly needs to be obtained from the box cache instead of creating
* a new instance.
* @return a new {@link VirtualObject} instance.
*/
public static VirtualObject get(ResolvedJavaType type, int id, boolean isAutoBox) {
return new VirtualObject(type, id, isAutoBox);
}
private VirtualObject(ResolvedJavaType type, int id, boolean isAutoBox) {
this.type = type;
this.id = id;
this.isAutoBox = isAutoBox;
}
private static StringBuilder appendValue(StringBuilder buf, JavaValue value, Set<VirtualObject> visited) {
@@ -143,6 +165,23 @@ public final class VirtualObject implements JavaValue {
return id;
}
/**
* Returns true if the object is a box. For boxes, deoptimization checks whether the value of
* the box is in the cache range and, if so, tries to return a cached object.
*/
public boolean isAutoBox() {
return isAutoBox;
}
/**
* Sets the value of the box flag.
* @param isAutoBox a flag that tells the runtime that the object may be a boxed primitive and that
* it possibly needs to be obtained from the box cache instead of creating a new instance.
*/
public void setIsAutoBox(boolean isAutoBox) {
this.isAutoBox = isAutoBox;
}
/**
* Overwrites the current set of values with a new one.
*

View File

@@ -187,6 +187,9 @@ public final class HotSpotJVMCIRuntime implements JVMCIRuntime {
// initialized.
JVMCI.getRuntime();
}
// Make sure all the primitive box caches are populated (required to properly materialize boxed primitives
// during deoptimization).
Object[] boxCaches = { Boolean.valueOf(false), Byte.valueOf((byte)0), Short.valueOf((short) 0), Character.valueOf((char) 0), Integer.valueOf(0), Long.valueOf(0) };
}
}
return result;
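The `valueOf` calls in the `boxCaches` array above force each wrapper's cache to be initialized before a deoptimization needs it. The language guarantees identity-equal cached boxes for small values (JLS 5.1.7: `int` values in -128..127, among others), which is what the deoptimizer relies on; a minimal illustration:

```java
public class BoxCacheDemo {
    /** True when valueOf returns the identical cached instance for both calls. */
    static boolean cached(int v) {
        return Integer.valueOf(v) == Integer.valueOf(v);
    }

    public static void main(String[] args) {
        System.out.println(cached(100));  // -128..127 is always served from the cache
        // Values outside that range are typically distinct instances, though the
        // cache upper bound can be raised (e.g. via -XX:AutoBoxCacheMax).
        System.out.println(cached(1000));
    }
}
```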

View File

@@ -43,7 +43,9 @@ import org.graalvm.compiler.nodes.ValueNode;
import org.graalvm.compiler.nodes.spi.NodeValueMap;
import org.graalvm.compiler.nodes.util.GraphUtil;
import org.graalvm.compiler.nodes.virtual.EscapeObjectState;
import org.graalvm.compiler.nodes.virtual.VirtualBoxingNode;
import org.graalvm.compiler.nodes.virtual.VirtualObjectNode;
import org.graalvm.compiler.serviceprovider.GraalServices;
import org.graalvm.compiler.virtual.nodes.MaterializedObjectState;
import org.graalvm.compiler.virtual.nodes.VirtualObjectState;
@@ -154,6 +156,10 @@ public class DebugInfoBuilder {
}
assert checkValues(vobjValue.getType(), values, slotKinds);
vobjValue.setValues(values, slotKinds);
if (vobjNode instanceof VirtualBoxingNode) {
GraalServices.markVirtualObjectAsAutoBox(vobjValue);
}
}
virtualObjectsArray = new VirtualObject[virtualObjects.size()];

View File

@@ -0,0 +1,108 @@
/*
* Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package org.graalvm.compiler.hotspot.test;
import org.graalvm.compiler.api.directives.GraalDirectives;
import org.graalvm.compiler.core.test.GraalCompilerTest;
import org.graalvm.compiler.serviceprovider.JavaVersionUtil;
import org.junit.Assert;
import org.junit.Assume;
import org.junit.Test;
public class BoxDeoptimizationTest extends GraalCompilerTest {
private static boolean isJDK13OrLater = JavaVersionUtil.JAVA_SPECIFICATION_VERSION >= 13;
public static void testInteger() {
Object[] values = {42, new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Integer.valueOf(42));
}
@Test
public void test1() {
Assume.assumeTrue(isJDK13OrLater);
test("testInteger");
}
public static void testLong() {
Object[] values = {42L, new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Long.valueOf(42));
}
@Test
public void test2() {
Assume.assumeTrue(isJDK13OrLater);
test("testLong");
}
public static void testChar() {
Object[] values = {'a', new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Character.valueOf('a'));
}
@Test
public void test3() {
Assume.assumeTrue(isJDK13OrLater);
test("testChar");
}
public static void testShort() {
Object[] values = {(short) 42, new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Short.valueOf((short) 42));
}
@Test
public void test4() {
Assume.assumeTrue(isJDK13OrLater);
test("testShort");
}
public static void testByte() {
Object[] values = {(byte) 42, new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Byte.valueOf((byte) 42));
}
@Test
public void test5() {
Assume.assumeTrue(isJDK13OrLater);
test("testByte");
}
public static void testBoolean() {
Object[] values = {true, new Exception()};
GraalDirectives.deoptimize();
Assert.assertSame(values[0], Boolean.valueOf(true));
}
@Test
public void test6() {
Assume.assumeTrue(isJDK13OrLater);
test("testBoolean");
}
}

View File

@@ -42,6 +42,7 @@ import java.util.function.Supplier;
import org.graalvm.compiler.serviceprovider.SpeculationReasonGroup.SpeculationContextObject;
import jdk.vm.ci.code.BytecodePosition;
import jdk.vm.ci.code.VirtualObject;
import jdk.vm.ci.meta.ResolvedJavaField;
import jdk.vm.ci.meta.ResolvedJavaMethod;
import jdk.vm.ci.meta.ResolvedJavaType;
@@ -519,4 +520,12 @@ public final class GraalServices {
public static double fma(double a, double b, double c) {
return Math.fma(a, b, c);
}
/**
* Set the flag in the {@link VirtualObject} that indicates that it is a boxed primitive that
* was produced as a result of a call to a {@code valueOf} method.
*/
public static void markVirtualObjectAsAutoBox(VirtualObject virtualObject) {
virtualObject.setIsAutoBox(true);
}
}

View File

@@ -974,7 +974,7 @@ class Eval {
ins.stream().forEach(u -> u.setDiagnostics(at));
// corral any Snippets that need it
if (ins.stream().anyMatch(u -> u.corralIfNeeded(ins))) {
if (ins.stream().filter(u -> u.corralIfNeeded(ins)).count() > 0) {
// if any were corralled, re-analyze everything
state.taskFactory.analyze(outerWrapSet(ins), cat -> {
ins.stream().forEach(u -> u.setCorralledDiagnostics(cat));
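The swap from `anyMatch` to `filter(...).count() > 0` above matters because the predicate (`corralIfNeeded`) has side effects: `anyMatch` short-circuits after the first match, while `filter(...).count()` must run the predicate for every element (the `filter` strips the stream's SIZED characteristic, so `count()` cannot skip traversal). A self-contained sketch of the difference, with a counter standing in for the side effect:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ShortCircuitDemo {
    /** How many times the predicate runs under anyMatch (stops at first match). */
    static int runsWithAnyMatch(List<Boolean> xs) {
        AtomicInteger calls = new AtomicInteger();
        xs.stream().anyMatch(b -> { calls.incrementAndGet(); return b; });
        return calls.get();
    }

    /** How many times the predicate runs under filter(...).count() (all elements). */
    static int runsWithFilterCount(List<Boolean> xs) {
        AtomicInteger calls = new AtomicInteger();
        long n = xs.stream().filter(b -> { calls.incrementAndGet(); return b; }).count();
        return calls.get();
    }

    public static void main(String[] args) {
        List<Boolean> xs = List.of(true, true, true);
        System.out.println(runsWithAnyMatch(xs));    // 1: short-circuits on first match
        System.out.println(runsWithFilterCount(xs)); // 3: predicate evaluated for every element
    }
}
```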

View File

@@ -73,9 +73,10 @@ public abstract class Snippet {
* ({@link jdk.jshell.Snippet.SubKind#SINGLE_STATIC_IMPORT_SUBKIND}) --
* use {@link jdk.jshell.Snippet#subKind()} to distinguish.
*
* @jls 7.5 Import Declarations
* <P>
* An import declaration is {@linkplain Kind#isPersistent() persistent}.
*
* @jls 7.5 Import Declarations
*/
IMPORT(true),
@@ -91,9 +92,10 @@
* annotation interfaces -- see {@link jdk.jshell.Snippet.SubKind} to
* differentiate.
*
* @jls 7.6 Top Level Type Declarations
* <P>
* A type declaration is {@linkplain Kind#isPersistent() persistent}.
*
* @jls 7.6 Top Level Type Declarations
*/
TYPE_DECL(true),
@@ -101,9 +103,10 @@
* A method declaration.
* The snippet is an instance of {@link jdk.jshell.MethodSnippet}.
*
* @jls 8.4 Method Declarations
* <P>
* A method declaration is {@linkplain Kind#isPersistent() persistent}.
*
* @jls 8.4 Method Declarations
*/
METHOD(true),
@@ -116,9 +119,10 @@
* variable representing an expression -- see
* {@link jdk.jshell.Snippet.SubKind} to differentiate.
*
* @jls 8.3 Field Declarations
* <P>
* A variable declaration is {@linkplain Kind#isPersistent() persistent}.
*
* @jls 8.3 Field Declarations
*/
VAR(true),

View File

@@ -74,6 +74,7 @@ gc/stress/gclocker/TestGCLockerWithParallel.java 8180622 generic-all
gc/stress/gclocker/TestGCLockerWithG1.java 8180622 generic-all
gc/stress/TestJNIBlockFullGC/TestJNIBlockFullGC.java 8192647 generic-all
gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java 8193639 solaris-all
gc/stress/TestReclaimStringsLeaksMemory.java 8224847 generic-all
#############################################################################

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -37,7 +37,7 @@
* @run driver ClassFileInstaller sun.hotspot.WhiteBox
* sun.hotspot.WhiteBox$WhiteBoxPermission
* @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions
* -XX:+WhiteBoxAPI -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -Xbatch
* -XX:+WhiteBoxAPI -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -Xbatch -XX:CompileThresholdScaling=1.0
* compiler.jvmci.compilerToVM.IsMatureVsReprofileTest
*/

View File

@@ -0,0 +1,168 @@
/*
* Copyright (c) 2019, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
/*
* @test
* @bug 8223502
* @summary Node estimate for loop unswitching is not correct:
* assert(delta <= 2 * required) failed: Bad node estimate
*
* @requires !vm.graal.enabled
*
* @run main/othervm -XX:-TieredCompilation -XX:-BackgroundCompilation
* -XX:-UseOnStackReplacement -XX:CompileOnly=LoopUnswitchingBadNodeBudget::test
* -XX:CompileCommand=dontinline,LoopUnswitchingBadNodeBudget::helper
* -XX:+UnlockExperimentalVMOptions -XX:-UseSwitchProfiling LoopUnswitchingBadNodeBudget
*
*/
public class LoopUnswitchingBadNodeBudget {
public static void main(String[] args) {
for (int i = 0; i < 20_000; i++) {
for (int j = 0; j < 100; j++) {
test(j, true, 0, 0, 0);
test(j, false, 0, 0, 0);
}
}
}
private static int test(int j, boolean flag, int k, int l, int m) {
int res = 0;
for (int i = 0; i < 24; i++) {
if (flag) {
k = k / 2;
l = l * 2;
m = m + 2;
}
switch (j) {
case 0: break;
case 1: return helper(j, k, l, m);
case 2: return helper(j, k, l, m);
case 3: return helper(j, k, l, m);
case 4: return helper(j, k, l, m);
case 5: return helper(j, k, l, m);
case 6: return helper(j, k, l, m);
case 7: return helper(j, k, l, m);
case 8: return helper(j, k, l, m);
case 9: return helper(j, k, l, m);
case 10: return helper(j, k, l, m);
case 11: return helper(j, k, l, m);
case 12: return helper(j, k, l, m);
case 13: return helper(j, k, l, m);
case 14: return helper(j, k, l, m);
case 15: return helper(j, k, l, m);
case 16: return helper(j, k, l, m);
case 17: return helper(j, k, l, m);
case 18: return helper(j, k, l, m);
case 19: return helper(j, k, l, m);
case 20: return helper(j, k, l, m);
case 21: return helper(j, k, l, m);
case 22: return helper(j, k, l, m);
case 23: return helper(j, k, l, m);
case 24: return helper(j, k, l, m);
case 25: return helper(j, k, l, m);
case 26: return helper(j, k, l, m);
case 27: return helper(j, k, l, m);
case 28: return helper(j, k, l, m);
case 29: return helper(j, k, l, m);
case 30: return helper(j, k, l, m);
case 31: return helper(j, k, l, m);
case 32: return helper(j, k, l, m);
case 33: return helper(j, k, l, m);
case 34: return helper(j, k, l, m);
case 35: return helper(j, k, l, m);
case 36: return helper(j, k, l, m);
case 37: return helper(j, k, l, m);
case 38: return helper(j, k, l, m);
case 39: return helper(j, k, l, m);
case 40: return helper(j, k, l, m);
case 41: return helper(j, k, l, m);
case 42: return helper(j, k, l, m);
case 43: return helper(j, k, l, m);
case 44: return helper(j, k, l, m);
case 45: return helper(j, k, l, m);
case 46: return helper(j, k, l, m);
case 47: return helper(j, k, l, m);
case 48: return helper(j, k, l, m);
case 49: return helper(j, k, l, m);
case 50: return helper(j, k, l, m);
case 51: return helper(j, k, l, m);
case 52: return helper(j, k, l, m);
case 53: return helper(j, k, l, m);
case 54: return helper(j, k, l, m);
case 55: return helper(j, k, l, m);
case 56: return helper(j, k, l, m);
case 57: return helper(j, k, l, m);
case 58: return helper(j, k, l, m);
case 59: return helper(j, k, l, m);
case 60: return helper(j, k, l, m);
case 61: return helper(j, k, l, m);
case 62: return helper(j, k, l, m);
case 63: return helper(j, k, l, m);
case 64: return helper(j, k, l, m);
case 65: return helper(j, k, l, m);
case 66: return helper(j, k, l, m);
case 67: return helper(j, k, l, m);
case 68: return helper(j, k, l, m);
case 69: return helper(j, k, l, m);
case 70: return helper(j, k, l, m);
case 71: return helper(j, k, l, m);
case 72: return helper(j, k, l, m);
case 73: return helper(j, k, l, m);
case 74: return helper(j, k, l, m);
case 75: return helper(j, k, l, m);
case 76: return helper(j, k, l, m);
case 77: return helper(j, k, l, m);
case 78: return helper(j, k, l, m);
case 79: return helper(j, k, l, m);
case 80: return helper(j, k, l, m);
case 81: return helper(j, k, l, m);
case 82: return helper(j, k, l, m);
case 83: return helper(j, k, l, m);
case 84: return helper(j, k, l, m);
case 85: return helper(j, k, l, m);
case 86: return helper(j, k, l, m);
case 87: return helper(j, k, l, m);
case 88: return helper(j, k, l, m);
case 89: return helper(j, k, l, m);
case 90: return helper(j, k, l, m);
case 91: return helper(j, k, l, m);
case 92: return helper(j, k, l, m);
case 93: return helper(j, k, l, m);
case 94: return helper(j, k, l, m);
case 95: return helper(j, k, l, m);
case 96: return helper(j, k, l, m);
case 97: return helper(j, k, l, m);
case 98: return helper(j, k, l, m);
case 99: return helper(j, k, l, m);
}
res += helper(j, k, l, m);
}
return res;
}
private static int helper(int j, int k, int l, int m) {
return j + k;
}
}

View File

@@ -37,7 +37,7 @@ jdk/javadoc/doclet/testIOException/TestIOException.java
jdk/jshell/UserJdiUserRemoteTest.java 8173079 linux-all
jdk/jshell/UserInputTest.java 8169536 generic-all
## jdk/jshell/ExceptionsTest.java 8200701 windows-all
jdk/jshell/ExceptionsTest.java 8200701 windows-all
###########################################################################
#

View File

@@ -23,7 +23,7 @@
/*
* @test
* @bug 8081431 8080069 8167128
* @bug 8081431 8080069 8167128 8199623
* @summary Test of JShell#drop().
* @build KullaTesting TestingInputStream
* @run testng DropTest
@@ -31,6 +31,7 @@
import jdk.jshell.DeclarationSnippet;
import jdk.jshell.Snippet;
import jdk.jshell.MethodSnippet;
import jdk.jshell.VarSnippet;
import org.testng.annotations.Test;
@@ -225,4 +226,17 @@ public class DropTest extends KullaTesting {
ste(MAIN_SNIPPET, VALID, VALID, true, null),
ste(ax, VALID, OVERWRITTEN, false, MAIN_SNIPPET));
}
// 8199623
public void testTwoForkedDrop() {
MethodSnippet p = methodKey(assertEval("void p() throws Exception { ((String) null).toString(); }"));
MethodSnippet n = methodKey(assertEval("void n() throws Exception { try { p(); } catch (Exception ex) { throw new RuntimeException(\"bar\", ex); }}"));
MethodSnippet m = methodKey(assertEval("void m() { try { n(); } catch (Exception ex) { throw new RuntimeException(\"foo\", ex); }}"));
MethodSnippet c = methodKey(assertEval("void c() throws Throwable { p(); }"));
assertDrop(p,
ste(p, VALID, DROPPED, true, null),
ste(n, VALID, RECOVERABLE_DEFINED, false, p),
ste(c, VALID, RECOVERABLE_DEFINED, false, p));
assertActiveKeys();
}
}

View File

@@ -23,7 +23,7 @@
/*
* @test
* @bug 8153716 8143955 8151754 8150382 8153920 8156910 8131024 8160089 8153897 8167128 8154513 8170015 8170368 8172102 8172103 8165405 8173073 8173848 8174041 8173916 8174028 8174262 8174797 8177079 8180508 8177466 8172154 8192979 8191842 8198573 8198801 8210596 8210959 8215099
* @bug 8153716 8143955 8151754 8150382 8153920 8156910 8131024 8160089 8153897 8167128 8154513 8170015 8170368 8172102 8172103 8165405 8173073 8173848 8174041 8173916 8174028 8174262 8174797 8177079 8180508 8177466 8172154 8192979 8191842 8198573 8198801 8210596 8210959 8215099 8199623
* @summary Simple jshell tool tests
* @modules jdk.compiler/com.sun.tools.javac.api
* jdk.compiler/com.sun.tools.javac.main
@@ -270,6 +270,33 @@ public class ToolSimpleTest extends ReplToolTesting {
);
}
// 8199623
@Test
public void testTwoForkedDrop() {
test(
(a) -> assertCommand(a, "void p() throws Exception { ((String) null).toString(); }",
"| created method p()"),
(a) -> assertCommand(a, "void n() throws Exception { try { p(); } catch (Exception ex) { throw new IOException(\"bar\", ex); }} ",
"| created method n()"),
(a) -> assertCommand(a, "void m() { try { n(); } catch (Exception ex) { throw new RuntimeException(\"foo\", ex); }}",
"| created method m()"),
(a) -> assertCommand(a, "void c() throws Throwable { p(); }",
"| created method c()"),
(a) -> assertCommand(a, "/drop p",
"| dropped method p()"),
(a) -> assertCommand(a, "m()",
"| attempted to call method n() which cannot be invoked until method p() is declared"),
(a) -> assertCommand(a, "/meth n",
"| void n()\n" +
"| which cannot be invoked until method p() is declared"),
(a) -> assertCommand(a, "/meth m",
"| void m()"),
(a) -> assertCommand(a, "/meth c",
"| void c()\n" +
"| which cannot be invoked until method p() is declared")
);
}
@Test
public void testUnknownCommand() {
test((a) -> assertCommand(a, "/unknown",

View File

@@ -361,6 +361,18 @@ $(call AssertEquals, \
RelativePath, \
)
$(call AssertEquals, \
$(call RelativePath, /foo/bar/baz/banan/kung, /foo/bar/baz), \
./banan/kung, \
RelativePath, \
)
$(call AssertEquals, \
$(call RelativePath, /foo/bar/baz/banan/kung, /foo/bar/baz/), \
./banan/kung, \
RelativePath, \
)
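The expected values in the two assertions above differ from the usual notion of a relative path only by the leading `./`, and (as the second assertion checks) a trailing slash on the base should not change the result. For comparison only, not the make implementation, java.nio's `Path.relativize` computes the analogous result:

```java
import java.nio.file.Path;

public class RelativePathDemo {
    /** Relative path from base to target, normalized to forward slashes. */
    static String relative(String target, String base) {
        return Path.of(base).relativize(Path.of(target)).toString().replace('\\', '/');
    }

    public static void main(String[] args) {
        // Trailing slash on the base makes no difference, mirroring the make tests.
        System.out.println(relative("/foo/bar/baz/banan/kung", "/foo/bar/baz"));  // banan/kung
        System.out.println(relative("/foo/bar/baz/banan/kung", "/foo/bar/baz/")); // banan/kung
    }
}
```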
################################################################################
# Test ParseKeywordVariable