diff --git a/.hgtags b/.hgtags
index 8c42438e208..9014cdb9085 100644
--- a/.hgtags
+++ b/.hgtags
@@ -522,3 +522,5 @@ f8626bcc169813a4b2a15880386b952719d1d6d1 jdk-12+15
 eefa65e142af305923d2adcd596fab9c639723a1 jdk-12+17
 e38473506688e0995e701fc7f77d5a91b438ef93 jdk-12+18
 dc1f9dec2018a37fedba47d8a2aedef99faaec64 jdk-12+19
+40098289d5804c3b5e7074bc75501a81e70d9b0d jdk-12+20
+f8fb0c86f2b3d24294d39c5685a628e1beb14ba7 jdk-12+21
diff --git a/doc/building.html b/doc/building.html
index fbb13f9626c..5df2e305676 100644
--- a/doc/building.html
+++ b/doc/building.html
@@ -69,6 +69,7 @@
The JDK is a complex software project. Building it requires a certain amount of technical expertise, a fair number of dependencies on external software, and reasonably powerful hardware.
If you just want to use the JDK and not build it yourself, this document is not for you. See for instance OpenJDK installation for some methods of installing a prebuilt JDK.
-Make sure you are getting the correct version. As of JDK 10, the source is no longer split into separate repositories so you only need to clone one single repository. At the OpenJDK Mercurial server you can see a list of all available forests. If you want to build an older version, e.g. JDK 8, it is recommended that you get the jdk8u
forest, which contains incremental updates, instead of the jdk8
forest, which was frozen at JDK 8 GA.
+Make sure you are getting the correct version. As of JDK 10, the source is no longer split into separate repositories so you only need to clone one single repository. At the OpenJDK Mercurial server you can see a list of all available repositories. If you want to build an older version, e.g. JDK 8, it is recommended that you get the jdk8u
forest, which contains incremental updates, instead of the jdk8
forest, which was frozen at JDK 8 GA.
If you are new to Mercurial, a good place to start is the Mercurial Beginner's Guide. The rest of this document assumes a working knowledge of Mercurial.
For a smooth building experience, it is recommended that you follow these rules on where and how to check out the source code.
@@ -570,6 +571,47 @@ CC: Sun C++ 5.13 SunOS_i386 151846-10 2015/10/30
This requires a more complex setup and build procedure. This section assumes you are familiar with cross-compiling in general, and will only deal with the particularities of cross-compiling the JDK. If you are new to cross-compiling, please see the external links at Wikipedia for a good start on reading materials.
Cross-compiling the JDK requires you to be able to build both for the build platform and for the target platform. The reason for the former is that we need to build and execute tools during the build process, both native tools and Java tools.
If all you want to do is to compile a 32-bit version, for the same OS, on a 64-bit machine, consider using --with-target-bits=32
instead of doing a full-blown cross-compilation. (While a full cross-compilation is certainly possible, it's a lot more work and will take much longer to build.)
The OpenJDK build system provides out-of-the-box support for creating and using so-called devkits. A devkit
is basically a collection of a cross-compiling toolchain and a sysroot environment which can easily be used together with the --with-devkit
configure option to cross-compile the OpenJDK. On Linux/x86_64, the following command:
bash configure --with-devkit=<devkit-path> --openjdk-target=ppc64-linux-gnu && make
+will configure and build OpenJDK for Linux/ppc64 assuming that <devkit-path>
points to a Linux/x86_64 to Linux/ppc64 devkit.
Devkits can be created from the make/devkit
directory by executing:
make [ TARGETS="<TARGET_TRIPLET>+" ] [ BASE_OS=<OS> ] [ BASE_OS_VERSION=<VER> ]
+where TARGETS
contains one or more TARGET_TRIPLET
s of the form described in section 3.4 of the GNU Autobook. If no targets are given, a native toolchain for the current platform will be created. Currently, at least the following targets are known to work:
+Supported devkit targets:
+x86_64-linux-gnu
+aarch64-linux-gnu
+arm-linux-gnueabihf
+ppc64-linux-gnu
+ppc64le-linux-gnu
+s390x-linux-gnu
BASE_OS
must be one of "OEL6" for Oracle Enterprise Linux 6 or "Fedora" (if not specified "OEL6" will be the default). If the base OS is "Fedora" the corresponding Fedora release can be specified with the help of the BASE_OS_VERSION
option (with "27" as default version). If the build is successful, the new devkits can be found in the build/devkit/result
subdirectory:
cd make/devkit
+make TARGETS="ppc64le-linux-gnu aarch64-linux-gnu" BASE_OS=Fedora BASE_OS_VERSION=21
+ls -1 ../../build/devkit/result/
+x86_64-linux-gnu-to-aarch64-linux-gnu
+x86_64-linux-gnu-to-ppc64le-linux-gnu
+Notice that devkits are not only useful for targeting different build platforms. Because they contain the full build dependencies for a system (i.e. compiler and root file system), they can easily be used to build well-known, reliable and reproducible build environments. You can for example create and use a devkit with GCC 7.3 and a Fedora 12 sysroot environment (with glibc 2.11) on Ubuntu 14.04 (which doesn't have GCC 7.3 by default) to produce OpenJDK binaries which will run on all Linux systems with runtime libraries newer than the ones from Fedora 12 (e.g. Ubuntu 16.04, SLES 11 or RHEL 6).
When cross-compiling, make sure you use a boot JDK that runs on the build system, and not on the target system.
To be able to build, we need a "Build JDK", which is a JDK built from the current sources (that is, the same as the end result of the entire build process), but able to run on the build system, and not the target system. (In contrast, the Boot JDK should be from an older release, e.g. JDK 8 when building JDK 9.)
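As a sketch, a cross-compiling configure invocation might look like the following (the devkit path, boot JDK path and target triplet are hypothetical examples, not values from this document):

```shell
# Hedged sketch of a cross-compiling configure call. The Boot JDK must be
# an older release that runs on the *build* system, not the target system.
bash configure \
    --openjdk-target=aarch64-linux-gnu \
    --with-devkit=/opt/devkits/x86_64-linux-gnu-to-aarch64-linux-gnu \
    --with-boot-jdk=/usr/lib/jvm/jdk-11
make images
```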
@@ -662,8 +704,8 @@ ls build/linux-aarch64-normal-server-release/
CC
CXX
---arch=...
+--openjdk-target=...
---arch=...
+--openjdk-target=...
Verify that the summary at the end looks correct. Are you indeed using the Boot JDK and native toolchain that you expect?
By default, the JDK has a strict approach where warnings from the compiler are considered errors which fail the build. For very new or very old compiler versions, this can trigger new classes of warnings, which thus fail the build. Run configure
with --disable-warnings-as-errors
to turn off this behavior. (The warnings will still show, but not make the build fail.)
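For example, a minimal sketch of a build with warnings kept visible but non-fatal:

```shell
# Keep compiler warnings visible but non-fatal, e.g. for a very new
# or very old compiler version that triggers new classes of warnings.
bash configure --disable-warnings-as-errors
make
```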
-Incremental rebuilds mean that when you modify part of the product, only the affected parts get rebuilt. While this works great in most cases, and significantly speeds up the development process, from time to time complex interdependencies will result in an incorrect build result. This is the most common cause for unexpected build problems, together with inconsistencies between the different Mercurial repositories in the forest.
+Incremental rebuilds mean that when you modify part of the product, only the affected parts get rebuilt. While this works great in most cases, and significantly speeds up the development process, from time to time complex interdependencies will result in an incorrect build result. This is the most common cause for unexpected build problems.
Here is a suggested list of things to try if you are having unexpected build problems. Each step requires more time than the one before, so try them in order. Most issues will be solved at step 1 or 2.
-Make sure your forest is up-to-date
-Run bash get_source.sh
to make sure you have the latest version of all repositories.
+Make sure your repository is up-to-date
+Run hg pull -u
to make sure you have the latest changes.
Clean build results
The simplest way to fix incremental rebuild issues is to run make clean
. This will remove all build results, but not the configuration or any build system support artifacts. In most cases, this will solve build errors resulting from incremental build mismatches.
Completely clean the build directory.
@@ -793,8 +835,8 @@ Hint: If caused by a warning, try configure --disable-warnings-as-errors. make dist-clean bash configure $(cat current-configuration) make
-Re-clone the Mercurial forest
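The difference between the two cleaning levels can be sketched as follows (current-configuration is a hypothetical file in which the previous configure arguments were saved beforehand):

```shell
# "make clean" removes build results but keeps the configuration,
# so you can rebuild immediately without re-running configure.
make clean
make

# "make dist-clean" removes the configuration too,
# so configure must be re-run before building again.
make dist-clean
bash configure $(cat current-configuration)
make
```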
-Sometimes the Mercurial repositories themselves gets in a state that causes the product to be un-buildable. In such a case, the simplest solution is often the "sledgehammer approach": delete the entire forest, and re-clone it. If you have local changes, save them first to a different location using hg export
.
+Re-clone the Mercurial repository
+Sometimes the Mercurial repository gets in a state that causes the product to be un-buildable. In such a case, the simplest solution is often the "sledgehammer approach": delete the entire repository, and re-clone it. If you have local changes, save them first to a different location using hg export
.
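The "sledgehammer approach" might look like this in practice (the repository URL, directory and patch name are example assumptions):

```shell
# Save any local commits first, then delete and re-clone the repository.
cd jdk
hg export -o ~/my-changes.patch "outgoing()"   # export local commits
cd ..
rm -rf jdk
hg clone http://hg.openjdk.java.net/jdk/jdk
cd jdk
hg import --no-commit ~/my-changes.patch       # re-apply saved changes
```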
This can be a sign of a Cygwin problem. See the information about solving problems in the Cygwin section. Rebooting the computer might help temporarily.
-If none of the suggestions in this document helps you, or if you find what you believe is a bug in the build system, please contact the Build Group by sending a mail to build-dev@openjdk.java.net. Please include the relevant parts of the configure and/or build log.
+If none of the suggestions in this document helps you, or if you find what you believe is a bug in the build system, please contact the Build Group by sending a mail to build-dev@openjdk.java.net. Please include the relevant parts of the configure and/or build log.
If you need general help or advice about developing for the JDK, you can also contact the Adoption Group. See the section on Contributing to OpenJDK for more information.
To help you prepare a proper push path for a Mercurial repository, there exists a useful tool known as defpath. It will help you set up a proper push path for pushing changes to the JDK.
Install the extension by cloning http://hg.openjdk.java.net/code-tools/defpath
and updating your .hgrc
file. Here's one way to do this:
cd ~
@@ -829,7 +876,6 @@ defpath=~/hg-ext/defpath/defpath.py
EOT
You can now set up a proper push path using:
hg defpath -d -u <your OpenJDK username>
-If you also have the trees
extension installed in Mercurial, you will automatically get a tdefpath
command, which is even more useful. By running hg tdefpath -du <username>
in the top repository of your forest, all repos will get set up automatically. This is the recommended usage.
The configure
and make
commands try to play nice with bash command-line completion (using <tab>
or <tab><tab>
). To use this functionality, make sure you enable completion in your ~/.bashrc
(see instructions for bash in your operating system).
Make completion will work out of the box, and will complete valid make targets. For instance, typing make jdk-i<tab>
will complete to make jdk-image
.
Now configure --en<tab>-dt<tab>
will result in configure --enable-dtrace
.
-You can have multiple configurations for a single source forest. When you create a new configuration, run configure --with-conf-name=<name>
to create a configuration with the name <name>
. Alternatively, you can create a directory under build
and run configure
from there, e.g. mkdir build/<name> && cd build/<name> && bash ../../configure
.
+You can have multiple configurations for a single source repository. When you create a new configuration, run configure --with-conf-name=<name>
to create a configuration with the name <name>
. Alternatively, you can create a directory under build
and run configure
from there, e.g. mkdir build/<name> && cd build/<name> && bash ../../configure
.
Then you can build that configuration using make CONF_NAME=<name>
or make CONF=<pattern>
, where <pattern>
is a substring matching one or several configurations, e.g. CONF=debug
. The special empty pattern (CONF=
) will match all available configurations, so make CONF= hotspot
will build the hotspot
target for all configurations. Alternatively, you can execute make
in the configuration directory, e.g. cd build/<name> && make
.
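The options above can be sketched together like this (the configuration names fastdebug and release are hypothetical examples created earlier with --with-conf-name):

```shell
make CONF_NAME=fastdebug images   # build exactly one named configuration
make CONF=debug images            # build every configuration matching "debug"
make CONF= hotspot                # empty pattern: hotspot target for all configurations
cd build/fastdebug && make        # or run make from the configuration directory
```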
-If you update the forest and part of the configure script has changed, the build system will force you to re-run configure
.
+If you update the repository and part of the configure script has changed, the build system will force you to re-run configure
.
Most of the time, you will be fine by running configure
again with the same arguments as the last time, which can easily be performed by make reconfigure
. To simplify this, you can use the CONF_CHECK
make control variable, either as make CONF_CHECK=auto
, or by setting an environment variable. For instance, if you add export CONF_CHECK=auto
to your .bashrc
file, make
will always run reconfigure
automatically whenever the configure script has changed.
You can also use CONF_CHECK=ignore
to skip the check for a needed configure update. This might speed up the build, but comes at the risk of an incorrect build result. This is only recommended if you know what you're doing.
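A short sketch of the three ways to handle a changed configure script described above:

```shell
make reconfigure                   # re-run configure with the previous arguments
make CONF_CHECK=auto               # reconfigure automatically when needed
echo 'export CONF_CHECK=auto' >> ~/.bashrc   # make automatic reconfigure the default
```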
From time to time, you will also need to modify the command line to configure
due to changes. Use make print-configure
to show the command line used for your current configuration.
To be able to run JTReg tests, configure
needs to know where to find the JTReg test framework. If it is not picked up automatically by configure, use the --with-jtreg=<path to jtreg home>
option to point to the JTReg framework. Note that this option should point to the JTReg home, i.e. the top directory, containing lib/jtreg.jar
etc. (An alternative is to set the JT_HOME
environment variable to point to the JTReg home before running configure
.)
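For example (the JTReg installation path is a hypothetical example):

```shell
# Point configure at the JTReg *home* directory, i.e. the top directory
# containing lib/jtreg.jar.
bash configure --with-jtreg=/opt/jtreg

# Alternatively, via the environment:
JT_HOME=/opt/jtreg bash configure
```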
To be able to run microbenchmarks, configure
needs to know where to find the JMH dependency. Use --with-jmh=<path to JMH jars>
to point to a directory containing the core JMH and transitive dependencies. The recommended dependencies can be retrieved by running sh make/devkit/createJMHBundle.sh
, after which --with-jmh=build/jmh/jars
should work.
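The two steps above, run from the top of the source tree, would look like:

```shell
# Fetch the recommended JMH dependencies, then point configure at them.
sh make/devkit/createJMHBundle.sh
bash configure --with-jmh=build/jmh/jars
```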
All functionality is available using the test
make target. In this use case, the test or tests to be executed are controlled using the TEST
variable. To speed up subsequent test runs with no source code changes, test-only
can be used instead, which does not depend on the source and test image build.
For some common top-level tests, direct make targets have been generated. This includes all JTReg test groups, the hotspot gtest, and custom tests (if present). This means that make test-tier1
is equivalent to make test TEST="tier1"
, but the former is more tab-completion friendly. For more complex test runs, the test TEST="x"
solution needs to be used.
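For example, equivalent and more complex test selections sketched as commands:

```shell
# Equivalent ways to run the tier1 test group:
make test-tier1
make test TEST="tier1"

# More complex selections must use the TEST variable, e.g. combining
# a test group with a gtest descriptor:
make test TEST="tier1 gtest:LogDecorations"
```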
Since the Hotspot Gtest suite is so quick, the default is to run all tests. This is specified by just gtest
, or as a fully qualified test descriptor gtest:all
.
If you want, you can single out an individual test or a group of tests, for instance gtest:LogDecorations
or gtest:LogDecorations.level_test_vm
. This can be particularly useful if you want to run a shaky test repeatedly.
For Gtest, there is a separate test suite for each JVM variant. The JVM variant is defined by adding /<variant>
to the test descriptor, e.g. gtest:Log/client
. If you specify no variant, gtest will run once for each JVM variant present (e.g. server, client). So if you only have the server JVM present, then gtest:all
will be equivalent to gtest:all/server
.
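The Gtest descriptors described above, as commands:

```shell
make test TEST="gtest:all"              # full suite, once per built JVM variant
make test TEST="gtest:LogDecorations"   # a single test group
make test TEST="gtest:Log/client"       # only for the client JVM variant, if built
```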
Which microbenchmarks to run is selected using a regular expression following the micro:
test descriptor, e.g., micro:java.lang.reflect
. This delegates the test selection to JMH, meaning package name, class name and even benchmark method names can be used to select tests.
Using special characters like |
in the regular expression is possible, but needs to be escaped multiple times: micro:ArrayCopy\\\\\|reflect
.
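As commands, using the examples from the text above:

```shell
# Select benchmarks by a JMH regular expression:
make test TEST="micro:java.lang.reflect"

# Special characters need multiple levels of escaping:
make test TEST="micro:ArrayCopy\\\\\|reflect"
```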
A handful of odd tests that are not covered by any other testing framework are accessible using the special:
test descriptor. Currently, this includes failure-handler
and make
.
Gtest keywords
OPTIONS
Additional options to the Gtest test framework.
Use GTEST="OPTIONS=--help"
to see all available Gtest options.
Microbenchmark keywords
FORK
Override the number of benchmark forks to spawn. Same as specifying -f <num>.
ITER
Number of measurement iterations per fork. Same as specifying -i <num>.
TIME
Amount of time to spend in each measurement iteration, in seconds. Same as specifying -r <num>.
WARMUP_ITER
Number of warmup iterations to run before the measurement phase in each fork. Same as specifying -wi <num>.
WARMUP_TIME
Amount of time to spend in each warmup iteration, in seconds. Same as specifying -w <num>.
RESULTS_FORMAT
Specify to have the test run save a log of the values. Accepts the same values as -rff, i.e., text, csv, scsv, json, or latex.
VM_OPTIONS
Additional VM arguments to provide to forked off VMs. Same as -jvmArgs <args>.
OPTIONS
Additional arguments to send to JMH.
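These keywords are passed through the MICRO make control variable, separated by semicolons; the specific values below are examples only:

```shell
# Run a benchmark selection with two forks, two warmup iterations,
# and JSON result output.
make test TEST="micro:java.lang.reflect" \
    MICRO="FORK=2;WARMUP_ITER=2;RESULTS_FORMAT=json"
```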