tag:blogger.com,1999:blog-42154582228332648082024-02-19T02:58:21.530+00:00Stas's blogAnonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.comBlogger35125tag:blogger.com,1999:blog-4215458222833264808.post-72639319854749569972015-01-11T21:45:00.000+00:002015-01-11T21:45:16.677+00:00OpenJDK Cookbook<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
<img src="https://d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_main_book_cover/8405OT_Mockup_cb.jpg" />
</div>
<br />
It has been a long time since I published my last post. Nowadays, work and the second child consume most of my time, so even a few free minutes are hard to come by :)
But I haven't been just surviving these uneasy conditions - I managed to do some work outside of my day-to-day routine, namely a book which I wrote with two other guys - Alex Kasko and Alexey Mironchenko. It was quite a challenging experience which, as you would expect, took much more effort than I had expected and brought me some sleepless nights. But, I have to admit, it was a positive experience, and it is a very pleasing feeling to see the work now completed.<br />
As can be guessed, the book is about OpenJDK. In the book we deliberately avoided pure-Java topics: you will not find chapters on working with collections or tuning the JVM - there are lots of other books on those topics and they are nothing new to cover. Most of the material is OpenJDK-specific, things which can't be found anywhere else. It will be useful to anyone who is going to hack on OpenJDK, make changes to its source code and experiment with it. The content covers building various versions of OpenJDK on various platforms, making code changes and, of course, testing them.<br />
At the moment the book is in the editing stage and will be published at the end of January. It is available for pre-order from <a href="http://www.amazon.co.uk/gp/product/1849698406/ref=as_li_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=1849698406&linkCode=as2&tag=stasblo-21&linkId=ARVY67JDZNJ7ZH3W">Amazon UK</a><img alt="" border="0" src="http://ir-uk.amazon-adsystem.com/e/ir?t=stasblo-21&l=as2&o=2&a=1849698406" height="1" style="border: none !important; margin: 0px !important;" width="1" /> or <a href="http://www.amazon.com/gp/product/1849698406/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=1849698406&linkCode=as2&tag=stasblo08-20&linkId=SXEI2DYM3X6YCZSK">Amazon US</a><img alt="" border="0" src="http://ir-na.amazon-adsystem.com/e/ir?t=stasblo08-20&l=as2&o=1&a=1849698406" height="1" style="border: none !important; margin: 0px !important;" width="1" /></div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com1tag:blogger.com,1999:blog-4215458222833264808.post-39208356911251828892013-05-07T22:47:00.000+01:002013-05-07T22:51:36.300+01:00Embedding Jetty9 & Spring MVC<p>
This post is a redo of one of my previous posts, which was about embedding Jetty 7. Now it is about the new version - Jetty 9 - and also adds support for Spring MVC. I just thought it would be a good idea to keep something like this around as a reference. There is not much text below, because the source is clear enough and doesn't need much explanation. Still, feel free to raise questions in the comments.
</p><a name='more'></a><p>
Let's start with the Jetty server wrapper. That's the main class which wraps all the Jetty setup. Notice that there is no WAR here, not even a 'web.xml'. Everything lives in plain folders and is configured via code.
</p>
<script type="syntaxhighlighter" class="brush: java"><![CDATA[
package com.tracklab42.webapp;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.servlet.DispatcherServlet;
import java.io.File;
import java.net.URL;
/**
* @author Stas
* @date 4/1/13
*/
public class JettyServer {
private static final Logger log = LoggerFactory.getLogger(JettyServer.class);
public static final String WEB_APP_ROOT = "webapp"; // that folder has to be just somewhere in classpath
public static final String MVC_SERVLET_NAME = "mvcDispatcher";
public static final String JSP_SERVLET_NAME = "jspServlet";
private final int port;
private Server server;
public JettyServer(int port) {
this.port = port;
}
public void start() {
server = new Server(port);
server.setHandler( getServletHandler() );
try {
server.start();
} catch (Exception e) {
log.error("Failed to start server", e);
throw new RuntimeException("Failed to start server", e); // keep the cause, otherwise the stack trace is lost
}
log.info("Server started");
}
private ServletContextHandler getServletHandler() {
ServletHolder mvcServletHolder = new ServletHolder(MVC_SERVLET_NAME, new DispatcherServlet());
mvcServletHolder.setInitParameter("contextConfigLocation", "web-context.xml");
ServletHolder jspServletHolder = new ServletHolder(JSP_SERVLET_NAME, new org.apache.jasper.servlet.JspServlet());
// these two lines are not strictly required - they will keep classes generated from JSP in "${javax.servlet.context.tempdir}/views/generated"
jspServletHolder.setInitParameter("keepgenerated", "true");
jspServletHolder.setInitParameter("scratchDir", "views/generated");
// session support has to be enabled, otherwise Jasper won't work
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setAttribute("javax.servlet.context.tempdir", new File("../tmp/webapp"));
// this classloader is required to set the JSP classpath; without it you will just get an NPE
context.setClassLoader(Thread.currentThread().getContextClassLoader());
context.addServlet(jspServletHolder, "*.jsp");
context.addServlet(mvcServletHolder, "/");
context.setResourceBase( getBaseUrl() );
return context;
}
public void join() throws InterruptedException {
server.join();
}
private String getBaseUrl() {
URL webInfUrl = JettyServer.class.getClassLoader().getResource(WEB_APP_ROOT);
if (webInfUrl == null) {
throw new RuntimeException("Failed to find web application root: " + WEB_APP_ROOT);
}
return webInfUrl.toExternalForm();
}
}
]]></script>
<p>
Here is the class with the main method. It's very simple: it just starts the server and installs j.u.l's bridge handler for SLF4J (j.u.l is used by Jasper):
</p>
<script type="syntaxhighlighter" class="brush: java"><![CDATA[
package com.tracklab42.webapp;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.bridge.SLF4JBridgeHandler;
/**
* @author Stas
* @date 3/27/13
*/
public class ServerMain {
private static final Logger log = LoggerFactory.getLogger(ServerMain.class);
public static void main(String args[]) throws Exception {
// add SLF4JBridgeHandler to j.u.l's root logger, should be done once during
// the initialization phase of your application
SLF4JBridgeHandler.install();
try {
JettyServer server = new JettyServer(8080);
server.start();
log.info("Server started");
server.join();
} catch (Exception e) {
log.error("Failed to start server.", e);
}
}
}
]]></script>
<p>
Here is the Spring config referenced by the JettyServer class ('web-context.xml'). It holds everything related to Spring MVC.
</p>
<script type="syntaxhighlighter" class="brush: xml"><![CDATA[
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:mvc="http://www.springframework.org/schema/mvc"
xsi:schemaLocation="http://www.springframework.org/schema/mvc
http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd">
<!-- Scans the classpath of this application for @Components to deploy as beans -->
<context:component-scan base-package="com.tracklab42.webapp" />
<!-- Configures the @Controller programming model -->
<mvc:annotation-driven/>
<!-- Forwards requests to the "/" resource to the "home" view -->
<mvc:view-controller path="/" view-name="redirect:/index"/>
<!-- Resolves view names to protected .jsp resources within the 'views/' directory -->
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix" value="views/"/>
<property name="suffix" value=".jsp"/>
</bean>
</beans>
]]></script>
<p>
Just an example controller:
</p>
<script type="syntaxhighlighter" class="brush: java"><![CDATA[
package com.tracklab42.webapp.controller;
import org.springframework.stereotype.*;
import org.springframework.web.bind.annotation.*;
/**
* @author Stas
* @date 3/31/13
*/
@Controller
@RequestMapping("/index")
public class Home {
@RequestMapping(method = RequestMethod.GET)
public String home() {
return "home";
}
}
]]></script>
<p>
And finally the list of dependencies in Gradle format:
</p>
<script type="syntaxhighlighter" class="brush: groovy"><![CDATA[
dependencies {
def slf4jVersion = '1.7.5';
def jettyVersion = '9.0.2.v20130417';
def springVersion = '3.2.2.RELEASE';
compile "org.slf4j:slf4j-api:${slf4jVersion}"
compile "org.eclipse.jetty:jetty-server:${jettyVersion}"
compile "org.eclipse.jetty:jetty-webapp:${jettyVersion}"
compile "org.eclipse.jetty:jetty-servlet:${jettyVersion}"
compile "org.eclipse.jetty:jetty-servlets:${jettyVersion}"
compile "org.eclipse.jetty:jetty-jsp:${jettyVersion}"
compile 'javax.servlet:jstl:1.2'
compile "org.springframework:spring-context:${springVersion}"
compile "org.springframework:spring-webmvc:${springVersion}"
runtime 'ch.qos.logback:logback-classic:1.0.11'
runtime 'ch.qos.logback:logback-core:1.0.11'
runtime "org.slf4j:jcl-over-slf4j:${slf4jVersion}"
runtime "org.slf4j:jul-to-slf4j:${slf4jVersion}"
}
]]></script>
<p>
References:<br/>
<a href="http://www.eclipse.org/jetty/documentation/current/embedding-jetty.html">Embedding Jetty</a><br/>
<a href="http://www.eclipse.org/jetty/documentation/current/configuring-jsp.html">Configuring Jetty JSP Support</a>
</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com5tag:blogger.com,1999:blog-4215458222833264808.post-68699686359874378682013-03-27T11:53:00.002+00:002013-04-09T00:18:09.647+01:00AtomicFieldUpdater vs. Atomic<p>
Java 1.5 introduced a new family of classes (Atomic*FieldUpdater) for atomic updates of object fields, with properties similar to the Atomic* set of classes, and it seems there is slight confusion about their purpose. That confusion is understandable - the reason for their existence is not very obvious. First of all, they are in no way faster than Atomics: if you look at the source, you see that there are lots of access-control checks. Then, they are not handy - the developer has to write more code, understand a new API, etc.
</p>
<p>
So why would you bother? There are two main use cases where an Atomic*FieldUpdater can be considered an option:
<ul>
<li>There is a field which is mostly read and rarely changed. In that case, a plain volatile read can be used for access and an Atomic*FieldUpdater for occasional updates. Though that optimization is arguable, because there is a good chance that in the latest JVMs Atomic*.get() is intrinsified and should be no slower than a volatile read.</li>
<li>Atomics have a much higher memory overhead than primitives. In cases where memory is critical, an Atomic can be replaced with a volatile primitive plus an Atomic*FieldUpdater.</li>
</ul>
</p>
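<p>To make the second point concrete, here is a minimal sketch (the class and field names are mine, not from any particular library) of the volatile-field-plus-updater pattern: reads go through the plain volatile field, while the rare atomic updates go through a single static updater shared by all instances, so no per-instance AtomicLong object is allocated:</p>

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

public class Counter {
    // a volatile primitive: 8 bytes per instance instead of a separate AtomicLong object
    private volatile long value;

    // one shared updater for all Counter instances; the field must be volatile
    // and named exactly as in newUpdater(), or an exception is thrown
    private static final AtomicLongFieldUpdater<Counter> UPDATER =
            AtomicLongFieldUpdater.newUpdater(Counter.class, "value");

    public long get() {
        return value; // plain volatile read on the hot (mostly-read) path
    }

    public long increment() {
        return UPDATER.incrementAndGet(this); // CAS-based update for the rare writes
    }
}
```

The memory saving only matters when you have very many instances of the class; for a handful of objects a plain AtomicLong is simpler.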
<p>
References:<br/>
<a href="http://concurrency.markmail.org/message/ns4c5376otat2p54?q=FieldUpdater">http://concurrency.markmail.org/message/ns4c5376otat2p54?q=FieldUpdater</a><br/>
<a href="http://concurrency.markmail.org/message/mpoy74yhuwgi52fa?q=FieldUpdater+">http://concurrency.markmail.org/message/mpoy74yhuwgi52fa?q=FieldUpdater</a><br/>
</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com1tag:blogger.com,1999:blog-4215458222833264808.post-19842882727899978122013-03-12T23:44:00.000+00:002013-03-12T23:47:11.344+00:00Scala: Automatic resource management<p>After completing the <a href="https://www.coursera.org/course/progfun">wonderful course by Martin Odersky</a>, I eventually had a chance to play a little with Scala and create something more useful than a "hello world" app. And even though I had had some experience with the language just a few weeks before, I felt slightly frustrated. I reckon that is because I have become too dull and slow from spending too much time with Java :) The first surprise was realizing that this language has a compiler - with Java it almost doesn't exist: you never 'compile', you 'build', which is a very different kind of thing. With Java you are almost always certain that your code compiles, because modern IDEs (like IntelliJ) do not give you a chance to leave a compilation error in your code. Another surprise is that the Scala compiler is deadly slow; I have a good feeling that big projects will suffer from it. So, you could say that with Scala it feels like coming back to the good old C++ days :)</p>
<p>OK, that was the introduction. Here is some stuff I wrote which, I am almost sure, is just another reinvented wheel, but it was useful for me. After some time with the language, I realized that it doesn't have any standard resource-management construct, which is probably good for Scala - the language is so flexible that it allows you to build your own without much effort (most of the code is taken from <a href="http://stackoverflow.com/questions/2207425/what-automatic-resource-management-alternatives-exists-for-scala">this post</a>):</p>
<pre class="prettyprint">
import java.io.Closeable

trait Managed[T] {
def onEnter(): T
def onExit(t:Throwable = null)
def attempt(block: => Unit) {
try { block } catch { case _: Throwable => } // swallow secondary failures, e.g. from close()
}
}
def using[T <: Any, R](managed: Managed[T])(block: T => R): R = {
val resource = managed.onEnter()
var exception = false
try {
block(resource)
} catch {
case t:Throwable => {
exception = true
managed.onExit(t)
throw t
}
} finally {
if (!exception) {
managed.onExit()
}
}
}
def using[T <: Any, U <: Any, R] (managed1: Managed[T], managed2: Managed[U]) (block: T => U => R): R = {
using[T, R](managed1) { r =>
using[U, R](managed2) { s => block(r)(s) }
}
}
class ManagedClosable[T <: Closeable](closable:T) extends Managed[T] {
def onEnter(): T = closable
def onExit(t:Throwable = null) {
attempt(closable.close())
}
}
implicit def closable2managed[T <: Closeable](closable:T): Managed[T] = {
new ManagedClosable(closable)
}
</pre>
and the usage looks like this:
<pre class="prettyprint">
def readLine() {
using(new BufferedReader(new FileReader("file.txt"))) {
file => {
file.readLine()
}
}
}
</pre>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com1tag:blogger.com,1999:blog-4215458222833264808.post-82658195797791568942013-02-04T23:42:00.000+00:002013-03-12T23:48:36.472+00:00Evil of microbenchmarking & CAS performance on Ivy Bridge<p>Some days back Martin Thompson <a href="http://mechanical-sympathy.blogspot.co.uk/2013/01/further-adventures-with-cas.html">published an investigation</a> into the results of the controversial <a href="http://en.wikipedia.org/wiki/Compare_and_swap">CAS (compare-and-swap)</a> performance test he made a few months back. That investigation really impressed me - it shows how microbenchmarking can go really wrong, even when it is done by such a smart guy.</p>
<p>Just to recap, the <a href="http://mechanical-sympathy.blogspot.co.uk/2011/09/adventures-with-atomiclong.html">test</a> executed several threads which were hammering the CPU with CAS operations. It showed that on average CAS on the modern Ivy Bridge processor works significantly slower than on the older Nehalem architecture. A few months later Martin <a href="http://mechanical-sympathy.blogspot.co.uk/2013/01/further-adventures-with-cas.html">found out the reason</a> for this strange behavior, and the amazing thing about it is that the test is slower because Ivy Bridge is actually faster.</p>
<p>To understand why that happens, let's see what is going on when a CAS is executed. Generally speaking, at a high level, in relation to a CPU core the memory which is about to be written can be in one of two states - the core either exclusively owns the cache line containing it, or it does not. If it owns that line, CAS is extremely fast - the core doesn't need to notify other cores to perform the operation. If the core doesn't own it, the situation is very different - the core has to send a request to fetch the cache line in exclusive mode, and such a request requires communication with all other cores. That negotiation is not fast, but on Ivy Bridge it is much faster than on Nehalem. And because it is faster on Ivy Bridge, a core has less time to perform a run of fast local CAS operations while it owns the cache line, therefore total throughput is lower.</p>
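<p>For illustration, here is a minimal sketch (the class is mine, not Martin's benchmark code) of the kind of tight CAS retry loop such a test hammers: compareAndSet succeeds cheaply while the core owns the cache line and stalls whenever the line has to be fetched in exclusive mode from another core:</p>

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasLoop {
    private static final AtomicLong counter = new AtomicLong();

    // classic CAS retry loop: re-read and retry until our compare-and-swap wins;
    // fast while this core owns the cache line, slow whenever ownership moves
    static long casIncrement() {
        long current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) casIncrement();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // prints 400000
    }
}
```

Note this is only a sketch of the contended access pattern, not a benchmark - measuring throughput of such a loop meaningfully is exactly the hard part discussed above.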
<p>I suppose a very good lesson is learned here - microbenchmarking can be very tricky and is not easy to do properly. Also, results can easily be interpreted in the wrong way. So, be careful!</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com0tag:blogger.com,1999:blog-4215458222833264808.post-69719688434568509292012-12-20T11:59:00.001+00:002012-12-20T17:51:00.733+00:00git hangs after "Resolving deltas"<p>
Have had a funny problem with Git. I suppose it's proxy-related. Writing it down because I'm sure I will hit the same problem again some time. Also hoping it will help other people who are suffering from it.
</p>
<p>
As a precondition, my git has the following in '.gitconfig':
<pre class="prettyprint">
[http]
proxy=http://user:password@proxy:8080
</pre>
</p>
<p>
When I tried to clone a repository, I got this:
<pre class="prettyprint">
$ git clone https://code.google.com/p/caliper/
Cloning into 'caliper'...
remote: Counting objects: 3298, done.
remote: Finding sources: 100% (3298/3298), done.
remote: Total 3298 (delta 1755)
Receiving objects: 100% (3298/3298), 7.14 MiB | 1.94 MiB/s, done.
Resolving deltas: 100% (1755/1755), done.
</pre>
</p>
<p>
And then nothing, it just hangs. If you go and have a look, you can see that the files are downloaded but not unpacked. Like all the other people on the Internet, I have no idea why that is happening, but eventually I found a way to get the files out of it.
</p>
<p>
When it hangs, just kill the process with Ctrl+C and run this command in the repository folder:
<pre class="prettyprint">
$ git fsck
notice: HEAD points to an unborn branch (master)
Checking object directories: 100% (256/256), done.
Checking objects: 100% (3298/3298), done.
notice: No default references
dangling commit 2916d1238ca0f4adecbda580ef4329a649fc777c
</pre>
Now just merge that dangling commit:
<pre class="prettyprint">
$ git merge 2916d1238ca0f4adecbda580ef4329a649fc777c
</pre>
and from now on you can enjoy the repository content in any way you want.
</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com5tag:blogger.com,1999:blog-4215458222833264808.post-49721997806645554722012-12-13T11:06:00.000+00:002012-12-18T12:46:02.331+00:00File.setLastModified & File.lastModified<p>Have observed interesting behavior of the <a href="http://docs.oracle.com/javase/6/docs/api/java/io/File.html#lastModified()">File.lastModified</a> file property on Linux. Basically, my problem was that I was incrementing the value of that property by 1 in one thread and monitoring the change in another thread. And apparently no change in the property's value happened - the other thread did not see the increment. After some time trying to make it work, I realized that I have to increment it by at least 1000 to make the change visible.</p>
<p>Wondering why that is happening, I had a look at the JDK source code, and this is what I found:</p>
<pre class="prettyprint">
JNIEXPORT jlong JNICALL
Java_java_io_UnixFileSystem_getLastModifiedTime(JNIEnv *env, jobject this,
jobject file)
{
jlong rv = 0;
WITH_FIELD_PLATFORM_STRING(env, file, ids.path, path) {
struct stat64 sb;
if (stat64(path, &sb) == 0) {
rv = 1000 * (jlong)sb.st_mtime;
}
} END_PLATFORM_STRING(env, path);
return rv;
}
</pre>
<p>What happens is that on Linux <a href="http://docs.oracle.com/javase/6/docs/api/java/io/File.html#lastModified()">File.lastModified</a> has 1-second resolution and simply ignores the milliseconds. I'm not an expert in Linux programming, so I'm not sure whether there is any way to get that time with millisecond resolution on Linux. I assume it should be possible, because 'setLastModified' seems to work as expected - it sets the modification time with millisecond resolution (you can find the source code in 'UnixFileSystem_md.c').</p>
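<p>The truncation is easy to model in plain Java (the helper below is mine; it just mirrors the 1000 * st_mtime arithmetic from the native code above): an update smaller than one second is invisible through lastModified, while a full second shows up:</p>

```java
public class MTimeResolution {
    // models the native code: stat64's st_mtime holds whole seconds,
    // so the JNI function returns 1000 * st_mtime, dropping the milliseconds
    static long observed(long actualMillis) {
        return 1000 * (actualMillis / 1000);
    }

    public static void main(String[] args) {
        long t0 = 1355357162345L; // some modification time in milliseconds

        // +1 ms: both timestamps fall into the same second, so the
        // monitoring thread sees no change at all
        System.out.println(observed(t0 + 1) == observed(t0));    // true

        // +1000 ms: the second boundary is crossed, change is visible
        System.out.println(observed(t0 + 1000) == observed(t0)); // false
    }
}
```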
<p>So, just a nice thing to remember: when you work with files on Linux, you may not see a change in <a href="http://docs.oracle.com/javase/6/docs/api/java/io/File.html#lastModified()">File.lastModified</a> when its value is updated by less than 1000 ms.</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-38143802442041582212012-10-24T00:19:00.002+01:002012-10-24T00:21:41.691+01:00Effective Concurrency by Herb Sutter<div dir="ltr" style="text-align: left;" trbidi="on">
I have never written feedback on events or courses, but here I decided to write one. It is about the "Effective Concurrency" course by Herb Sutter. Hopefully this post will help someone get approval for that course :)<br />
<br />
So, as I have already said, a few weeks back I was lucky enough to attend the "Effective Concurrency" course by <a href="http://herbsutter.com/,%20http://en.wikipedia.org/wiki/Herb_Sutter">Herb Sutter</a>. He is a software architect at Microsoft, where he has been the lead designer of C++/CLI, C++/CX, C++ AMP and other technologies. He has also served for a decade as chair of the ISO C++ standards committee. Many people also know him for <a href="http://www.amazon.co.uk/s/ref=ntt_athr_dp_sr_1?_encoding=UTF8&field-author=Herb%20Sutter&search-alias=books-uk">his books</a>.<br />
<br />
<a name='more'></a><br />
So, about the course. It is very complete and covers the concurrency subject starting from software design patterns (actors, pipelines, atomics, etc.) and finishing with hardware matters (caches, pipelines, NUMA, etc.). Overall the course was great and I really enjoyed it; if you ever have a chance to attend it - do it without hesitation, you will learn a lot. As for myself, I can't say that I learned many new things there; it was more a re-iteration of what I had already learned over years of programming. Still, there was some new material I didn't know, especially about approaches to solving some concurrency problems. BUT! The most important thing I got out of the course is that it helped me arrange and structure all that knowledge in my head. Also, there was a lot of good reasoning about why some things should be done/work one way and not another, and many interesting code examples with explanations (e.g. a parallel multi-threaded graph-traversal algorithm was just amazing). As he is a C++ guy, most examples were in C++11, but all the same concepts can be applied to Java or any other language, so the course is language-agnostic.<br />
<br />
Herb is a good lecturer - very focused; he doesn't encourage long discussions and doesn't spend time talking off-topic. He talked non-stop for three days and everything he said was valuable information. Not sure that I could ever do that :)<br />
<div dir="ltr" trbidi="on">
<br />
Below is a brief overview of the topics covered on that course. Unfortunately, I do not think that I can share the slides, but I hope the description gives some indication of what it is all about. Specifically, I want to note that this is my understanding of the course; it may have nothing to do with reality and may be completely different from others' opinions. Just keep that in mind. </div>
<h2>
Day 1</h2>
An overview of concurrency and its evolution, why it's important to know how to use it, primitives, thread pooling and actors.<br />
<ul>
<li><a href="http://www.gotw.ca/publications/concurrency-ddj.htm">Free lunch is over</a> and <a href="http://herbsutter.com/welcome-to-the-jungle/">Welcome to the Jungle</a>: That's all about the fact that computers are no longer simply getting faster, as they did before. They are getting bigger, more complicated and more specialized. </li>
<li>Types of possible concurrency levels: single-threaded, K-threaded (K is a constant), N-threaded (N is the number of hardware threads). </li>
<li>Types of <a href="http://en.wikipedia.org/wiki/Thread_(computing)">multi-threading</a>: cooperative, preemptive, multi-core preemptive and why the last one is the most complicated. </li>
<li>Overview of concurrency primitives. <a href="http://en.wikipedia.org/wiki/Lock_(computer_science)">Locks</a>, <a href="http://en.wikipedia.org/wiki/Thread_pool">thread pools</a>, <a href="http://en.wikipedia.org/wiki/Futures_and_promises">futures</a>, <a href="http://en.wikipedia.org/wiki/Atomic_operation">atomics</a>, etc. The main idea is that a thread by itself is too low-level a concept and hard to operate with. Something higher level is needed - Futures, <a href="http://en.wikipedia.org/wiki/Actor_model">Actors</a>, etc. </li>
<li>Code has to be structured and should not look like spaghetti. And that's where libraries with threading primitives come in. </li>
<li>Thread pools and work stealing. Mostly this was about work stealing and how cool it is. Example using quicksort. Actors, with a work-stealing pool as the implementation. </li>
<li>Futures in C++, Java, C#.</li>
</ul>
Conclusion: concurrency is here and we have to use it effectively. To achieve that without overcomplicating your code, make use of concurrency primitives/libraries like actors and futures on top of thread pools. Prefer not to use 'raw' threads.<br />
<h2>
<span style="font-family: Arial;">Day 2</span></h2>
Memory architecture, caching, hardware details.<br />
<div>
<ul>
<li>Pipelines and concurrency. What a pipeline is and how to build one properly, with some examples. </li>
<li>What bandwidth and latency are. The main idea is that you can always get more bandwidth, but latency is something much more complicated and much harder (if at all possible) to fix. </li>
<li>Memory latency is the biggest problem, and most of CPU manufacturers' efforts go into hiding that latency. If you have a look at a CPU - "1% of the die to change data, 99% to move and store data", i.e. to hide latency (<a href="http://www.ll.mit.edu/HPEC/agendas/proc04/invited/patterson_keynote.pdf">http://www.ll.mit.edu/HPEC/agendas/proc04/invited/patterson_keynote.pdf</a>). </li>
<li>CPU pipelines - how they work and why they are required. Superscalar CPUs. </li>
<li>Reordering and the hardware memory model. The main conclusion is that the CPU's aim is to execute the program in such a way that the result is valid, as if it were executed on just one thread. The code you wrote and the way the code is executed by the CPU are two different things. If you want the CPU to execute the program exactly as you wrote it, you need to tell it so. </li>
<li>Hardware loves arrays. Prefetching, caches, etc. The love comes from the simplicity of optimizing arrays for memory latency. </li>
<li>False sharing, cache locality. </li>
</ul>
</div>
Conclusion: one of the main concurrency performance problems is the speed of memory access. Love arrays the same way the hardware does, and try not to use random access for the same reason. Do not modify data in the same location (same cache line) from different threads. Use cache-friendly data structures. Keep in mind that what you wrote is not exactly what is executed.<br />
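<p>As an illustration of the "same cache line" point, here is a small Java sketch (all names are mine, and the padding trick assumes 64-byte cache lines; real measurements need a proper harness): two threads each increment their own counter, but when the counters sit on one cache line the cores keep invalidating each other's copy, which is the false sharing mentioned above:</p>

```java
public class FalseSharingDemo {
    static class Unpadded {
        volatile long c1;
        volatile long c2; // very likely shares a cache line with c1
    }

    static class Padded {
        volatile long c1;
        long p1, p2, p3, p4, p5, p6, p7; // 56 bytes of filler pushes c2 towards the next 64-byte line
        volatile long c2;
    }

    // runs two tasks concurrently and returns elapsed nanoseconds
    static long hammer(Runnable a, Runnable b) throws InterruptedException {
        long start = System.nanoTime();
        Thread t1 = new Thread(a), t2 = new Thread(b);
        t1.start(); t2.start(); t1.join(); t2.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        final int N = 10_000_000;
        Unpadded u = new Unpadded();
        Padded p = new Padded();
        long tu = hammer(() -> { for (int i = 0; i < N; i++) u.c1++; },
                         () -> { for (int i = 0; i < N; i++) u.c2++; });
        long tp = hammer(() -> { for (int i = 0; i < N; i++) p.c1++; },
                         () -> { for (int i = 0; i < N; i++) p.c2++; });
        System.out.println("unpadded: " + tu / 1_000_000 + " ms, padded: " + tp / 1_000_000 + " ms");
    }
}
```

The actual ratio depends heavily on the CPU and on whether the JVM reorders or strips the padding fields, so treat the numbers as indicative only.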
<h2>
Day 3</h2>
Doing concurrency - writing concurrent code<br />
<ul>
<li>Example of concurrent concurrency (hm...). Basically, when data is not just processed by several threads, but also by several clients. E.g. traversing a tree and updating it at the same time in several threads by several clients. </li>
<li>Some data structures work well for concurrency, some do not. E.g. arrays are brilliant, linked lists are OK, but a balanced binary tree is not concurrency-friendly at all. Choose the correct data structures. </li>
<li>Different types of scalability in detail - K-scalability, linear scalability and super-linear scalability. Super-linear scalability is interesting; it can be achieved using algorithms which are naturally faster when run in parallel. Also, several threads can benefit from several caches, etc. </li>
<li>How to stop a thread. Kill - very bad; almost-kill (not available in Java) - still bad; interrupt - well...; just a flag... maybe. The recommended approach is just a flag. In Java I would also consider interrupt, but I have to say that since not many people know how to use it, it can be slightly dangerous. </li>
<li>Locks are complicated and are always a cause of problems. A deadlock can happen even when the locks are not related at all, e.g. one lock in your code, the other in a library - you won't even know about it. To make YOUR locks work more safely, use lock levels and lock groups. Levels mean that a thread never grabs a lock with a lower level. Groups help with ordering, ensuring that locks are always grabbed in the same order. Keep in mind that this will make YOUR locks better, but if you are using third-party code and it is using locks, nothing can help, really.</li>
</ul>
<div dir="ltr" style="text-align: left;" trbidi="on">
Conclusion: you have to know concurrency algorithms and data structures. Locks are complicated; never trust code which you didn't write.<br />
<br />
And the overall conclusion. As I have already mentioned, the course is very good. I would recommend it to everybody. It has extremely good coverage of the topic, and the only thing I found missing is <a href="http://en.wikipedia.org/wiki/MSI_protocol">MSI</a>; it would be good to have an overview with some examples of how it affects latency. Unfortunately, I do not think that Herb can run it more often than once a year, so if you want to attend you have to wait and check for it to re-appear here (<a href="http://developerfocus.com/">http://developerfocus.com/</a>) or just ping Herb - he definitely has to know about his plans :)</div>
</div>
Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-62187155926886594012012-09-11T23:15:00.000+01:002014-01-14T21:20:43.778+00:00Building OpenJDK on Windows<p>Experimenting with some stuff, I found that it is often useful to have the JDK source code available at hand to make some changes, play with it, etc. So I decided to download and compile that beast. It took me quite some time, although my initial thought was that it should be as simple as running the make command :). As you can guess, I found that it's not a trivial task, and to simplify my life in the future, I thought it would be useful to keep some record of what I was doing.</p><a name='more'></a><p>Below are the steps I had to take to make it happen. I assume the machine already has Visual Studio 2010 installed. I have a feeling that the Express version should work just fine, but I haven't tried.</p>
<ol>
<li><p>Install <a href="http://www.cygwin.com/">cygwin</a>. Ensure that you have installed all packages listed <a href="http://hg.openjdk.java.net/jdk8/jdk8/raw-file/tip/README-builds.html#cygwin">here</a>; some of them are not installed by default. Just in case, here is a copy of that table, but it is recommended to verify against the <a href="http://hg.openjdk.java.net/jdk8/jdk8/raw-file/tip/README-builds.html#cygwin">master source</a>:
<table border="1">
<tr>
<td><b>Binary Name</b></td><td><b>Category</b></td><td><b>Package</b></td><td><b>Description</b></td><td><b>Installed by default</b></td>
</tr>
<tr>
<td>ar.exe</td><td>Devel</td><td>binutils</td><td>The GNU assembler, linker and binary utilities</td><td>No</td>
</tr>
<tr>
<td>make.exe</td><td>Devel</td><td>make</td><td>The GNU version of the 'make' utility built for CYGWIN.</td><td>No</td>
</tr>
<tr>
<td>m4.exe</td><td>Interpreters</td><td>m4</td><td>GNU implementation of the traditional Unix macro processor</td><td>No</td>
</tr>
<tr>
<td>cpio.exe</td><td>Utils</td><td>cpio</td><td>A program to manage archives of files</td><td>No</td>
</tr>
<tr>
<td>gawk.exe</td><td>Interpreters</td><td>gawk</td><td>Pattern-directed scanning and processing language</td><td>Yes</td>
</tr>
<tr>
<td>file.exe</td><td>Utils</td><td>file</td><td>Determines file type using 'magic' numbers</td><td>Yes</td>
</tr>
<tr>
<td>zip.exe</td><td>Archive</td><td>zip</td><td>Package and compress (archive) files</td><td>No</td>
</tr>
<tr>
<td>unzip.exe</td><td>Archive</td><td>unzip</td><td>Extract compressed files in a ZIP archive</td><td>No</td>
</tr>
<tr>
<td>free.exe</td><td>System</td><td>procps</td><td>Display amount of free and used memory in the system</td><td>No</td>
</tr>
</table>
Do not forget to add cygwin's 'bin' folder to the PATH.
</p>
</li>
<li><p>Install Mercurial from <a href="http://mercurial.selenic.com/wiki/Download/">here</a> and add 'hg' to the PATH.</p></li>
<li><p>Install <a href="http://www.microsoft.com/download/en/details.aspx?id=8279">Microsoft Windows SDK for Windows 7 and .NET Framework 4</a>.</p></li>
<li><p>Install the <a href="http://www.microsoft.com/en-us/download/details.aspx?id=6812">DirectX SDK</a>. The JDK build requires v9.0, but I couldn't find it easily. So I decided not to bother and installed the latest one. It seems to work just fine.</p></li>
<li><p>A bootstrap JDK is required for the build. I happened to use JDK 6, but I suppose any version from JDK 6 up will work without problems.</p></li>
<li><p>Download and install <a href="http://ant.apache.org">Ant</a>. I used version 1.8.2. Add Ant to the PATH.</p></li>
<li><p>Check out the sources. For a number of reasons this was the most complicated part. 'hg' is not particularly stable, so some things which are supposed to be done by scripts had to be done manually.</p>
<p>So, to start run this in command line:
<pre>
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u <some_folder>\openjdk7
</pre>
This should download root folder with some helper scripts.</p>
<p>Then in cygwin go to the just created 'openjdk7' folder and run 'get_source.sh'. 'get_source.sh' may fail or just hang (and that's exactly what happened to me). If it does, then you may try to use the '--pull' flag (pull protocol for metadata). I'm not absolutely sure why, but it helped me. Unfortunately, the scripts are not written in a very friendly manner and there is no way to pass any 'hg' arguments to the source-retrieval script. So you need to go to 'make\scripts\hgforest.sh' and add '--pull' to every invocation of 'hg clone'.</p>
<p>And if it still fails even after adding '--pull', well... just give up and run these commands manually:
<pre>
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/corba corba
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/hotspot hotspot
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxp jaxp
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxws jaxws
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jdk jdk
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/langtools langtools
</pre>
Hopefully now you have the sources and can continue :)
</p>
</li>
<li><p>The build requires some external binaries and a version of 'make.exe' which works under Windows. The 'make' which comes with cygwin doesn't really work, because it has problems with drive letters in path names.</p>
<p>Next, we need to compile a couple of things. One is a fixed version of 'make.exe'; the other is the FreeType library, which is only available for download as source.</p>
<p>If you are not interested in compiling all that stuff and just want to compile the JDK with less hassle, I would recommend downloading the binaries from <a href="https://docs.google.com/open?id=0B_Oq4PMUt3r6anNXQXF2eWdGZG8">here</a> (that's my Drive). Unpack 'make.exe' into 'openjdk7/bin'. Note that 'make.exe' from the package is quite old and requires cygintl-3.dll, which is not provided with current cygwin. To fix it, just copy cygintl-8.dll -> cygintl-3.dll.<br/>The FreeType lib and dll have to be put into the folder referenced by the 'ALT_FREETYPE_LIB_PATH' conf variable (see Step 13). Also, some FreeType headers are still required and are located by make via the 'ALT_FREETYPE_HEADERS_PATH' variable (see Step 13). This means you will also need to download the source code.</p>
<p>If you are not looking for a simple solution and want to compile these binaries yourself, then follow these instructions:</p>
<ol>
<li><p>Download make 3.82 from <a href="http://ftp.gnu.org/gnu/make/">here</a> and unpack it. Find 'config.h.W32' and uncomment the line with the 'HAVE_CYGWIN_SHELL' definition. Open the make_msvc_net2003.sln solution in Visual Studio, select the 'Release' configuration and make a build. In the 'Release' folder you will get 'make_msvc.net2003.exe'; rename it to 'make.exe'.</p></li>
<li><p>Now compile FreeType:
<ol>
<li>Download source of FreeType v.2.4.7 from <a href="http://download.savannah.gnu.org/releases/freetype/">here</a>. </li>
<li>Unpack it somewhere and open '\builds\win32\vc2010\freetype.sln' in Visual Studio.</li>
<li>Go to the project properties (right click on the project in the project tree) and in 'Configuration Properties/General/Configuration type' select 'Dynamic Library (.dll)' and rename the output to 'freetype'.</li>
<li>Update ftoption.h, add the following two lines:<br/>
#define FT_EXPORT(x) __declspec(dllexport) x<br/>
#define FT_BASE(x) __declspec(dllexport) x</li>
<li>Make a build and you will get dll & lib in 'objs\win32\vc2010'.</li>
<li>Do not forget to assign appropriate values to 'ALT_FREETYPE_LIB_PATH' and 'ALT_FREETYPE_HEADERS_PATH' variables (see Step 13).</li>
</ol>
</p>
</li>
</ol>
</li>
<li><p>I had some problems with javadoc generation, which was failing with an OutOfMemoryError. In order to fix it, I had to change 'openjdk7\jdk\make\docs\Makefile'.<br/>
This code:
<pre>
ifeq ($(ARCH_DATA_MODEL),64)
MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
MAX_VM_MEMORY = 1024
else
MAX_VM_MEMORY = 512
endif
</pre>
has to be replaced with this:
<pre>
ifeq ($(ARCH_DATA_MODEL),64)
MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
MAX_VM_MEMORY = 1024
else
MAX_VM_MEMORY = 1024
endif
</pre>
</p>
</li>
<li><p>Copy 'msvcr100.dll' to drops:
<pre>
cp /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ Visual\ Studio\ 10.0/Common7/Packages/Debugger/X64/msvcr100.dll ./drops/
</pre>
</p>
</li>
<li><p>Ensure that cygwin's 'find.exe' is in the PATH before the Windows one. The easiest way to do so is to copy it into 'openjdk7/bin', which the batch file below puts at the beginning of the PATH.</p></li>
<li><p>Create a batch file similar to the following one. Do not forget to update the paths appropriately:
<pre>
set ALT_BOOTDIR=C:/Stuff/java_libs/jdk1.6.0_25
set ANT_HOME=C:/Stuff/java_libs/apache-ant-1.8.2
set JAVA_HOME=
set CLASSPATH=
set PATH=C:/Stuff/openjdk7/bin;%PATH%
set ALLOW_DOWNLOADS=true
set ALT_MSVCRNN_DLL_PATH=C:/Stuff/java_libs/openjdk7/drops
C:\WINDOWS\system32\cmd.exe /E:ON /V:ON /K "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /xp /x86
</pre>
</p>
</li><li><p>Run the batch file. Now you have a fully configured environment which is ready for the build. Run 'bash' and from the shell execute 'make':
<pre>
make ARCH_DATA_MODEL=32 ALT_OUTPUTDIR=C:/Users/Stas/Stuff/java_libs/openjdk7/output_32 ALT_FREETYPE_LIB_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/objs/win32/vc2010 ALT_FREETYPE_HEADERS_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/include ALT_BOOTDIR=C:/Users/Stas/Stuff/java_libs/jdk1.6.0_25 ALT_DROPS_DIR=C:/Users/Stas/Stuff/java_libs/openjdk7/drops HOTSPOT_BUILD_JOBS=4 PARALLEL_COMPILE_JOBS=4 2>&1 | tee C:/Stuff/java_libs/openjdk7/output_32.log
</pre>
This will start the build of the 32-bit JDK.
</p>
</li>
<li><p>Have a coffee, tea, or whatever you prefer, and after an hour or so you should see something like this:
<pre>
#-- Build times ----------
Target all_product_build
Start 2012-09-01 23:08:55
End 2012-09-01 23:55:48
00:02:35 corba
00:06:46 hotspot
00:00:30 jaxp
00:00:51 jaxws
00:35:30 jdk
00:00:37 langtools
00:46:53 TOTAL
-------------------------
</pre>
</p>
</li>
</ol>
<p>
References:</br>
<a href="http://weblogs.java.net/blog/simonis/archive/2011/10/28/yaojowbi-yet-another-openjdk-windows-build-instruction">Yet another OpenJDK on Windows Build Instruction</a>
</p>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com6tag:blogger.com,1999:blog-4215458222833264808.post-34512555438411261992012-05-19T22:10:00.000+01:002012-05-19T22:10:23.582+01:00Bug in Java Memory Model implementationI have just come across an amazing question on stackoverflow:<br/><br/>
<a href="http://stackoverflow.com/questions/10620680/why-volatile-in-java-5-doesnt-synchronize-cached-copies-of-variables-with-main">http://stackoverflow.com/questions/10620680/why-volatile-in-java-5-doesnt-synchronize-cached-copies-of-variables-with-main</a><br/><br/>
Basically, the guy there is trying to use "piggybacking" to publish a non-volatile variable, and it doesn't work. "Piggybacking" is a technique that uses the visibility guarantees of a volatile variable or a monitor to publish non-volatile data. For example, this technique is used in ConcurrentHashMap#containsValue() and ConcurrentHashMap#containsKey(). The fact that it doesn't work in that case is a bug in Oracle's Java implementation. And that is rather scary - concurrency problems are very hard to identify even on a bug-free JVM, and such bugs in the Memory Model implementation make things much worse. Hopefully that's the only bug related to the JMM, and Oracle has good test coverage for such cases.<br/><br/>
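To make the idiom concrete, here is a minimal sketch of piggybacking (class and field names are mine, not taken from the question): the volatile write to 'ready' publishes the earlier write to the non-volatile 'payload', so a reader that observes ready == true must, on a JMM-conforming JVM, also observe the payload - which is exactly the guarantee the bug violated on C1.

```java
// A sketch of the "piggybacking" publication idiom (illustrative names).
public class Piggyback {
    private int payload;            // non-volatile data being published
    private volatile boolean ready; // volatile "guard" variable

    public void publish(int value) {
        payload = value; // ordinary write...
        ready = true;    // ...published by the subsequent volatile write
    }

    public Integer tryRead() {
        if (ready) {
            // Per the JMM, a thread that sees ready == true must also
            // see the preceding write to payload (happens-before).
            return payload;
        }
        return null; // not published yet
    }

    public static void main(String[] args) {
        Piggyback p = new Piggyback();
        p.publish(42);
        System.out.println(p.tryRead()); // 42 (trivially visible in one thread)
    }
}
```

In a real multi-threaded run the interesting part is, of course, the reader thread observing a value written by the publisher thread without any lock.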
The good news is that this particular problem appears only on C1 (the client HotSpot compiler) and not in all cases. It doesn't happen on C2 (the server compiler, enabled with the "-server" switch). Fortunately, most people run Java on the server side, and there are not that many client applications which use advanced concurrency features.<br/><br/>
For those who want to understand the case better, please follow the link I've provided at the beginning of the post. There is also a very useful thread on "concurrency-interest" with a good explanation of what is going on there: <a href="http://cs.oswego.edu/pipermail/concurrency-interest/2012-May/009449.html">http://cs.oswego.edu/pipermail/concurrency-interest/2012-May/009449.html</a>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com1tag:blogger.com,1999:blog-4215458222833264808.post-24759262973688051072012-02-06T23:11:00.000+00:002012-03-05T13:28:27.712+00:00What is behind System.nanoTime()?In the Java world there is a very good perception of System.nanoTime(). There is always somebody who says that it is fast, reliable and, whenever possible, should be used for timings instead of System.currentTimeMillis(). Overall that is not an absolute truth: the function is not bad at all, but there are some drawbacks a developer should be aware of. Also, although they have a lot in common, these drawbacks are usually platform-specific.
<h2 style="font-size: large;">Windows</h2>
<p>The functionality is implemented using the <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx">QueryPerformanceCounter</a> API, which is known to have some issues. There is a possibility that it <a href="http://support.microsoft.com/kb/274323">can leap forward</a>, some people report that it can be <a href="http://stackoverflow.com/questions/1723629/what-happens-when-queryperformancecounter-is-called">extremely slow on multiprocessor machines</a>, <a href="http://www.virtualdub.org/blog/pivot/entry.php?id=106">etc</a>. I spent some time on the net trying to find out how exactly QueryPerformanceCounter works and what it does. There is no clear conclusion on the topic, but there are some posts which give a brief idea of how it works. Probably the most useful are <a href="http://blogs.msdn.com/b/oldnewthing/archive/2005/09/02/459952.aspx">this one</a> and <a href="http://www.virtualdub.org/blog/pivot/entry.php?id=106">this one</a>. Sure, one can find more with a bit of searching, but the info will be more or less the same.</p>
<p>So, it looks like the implementation uses <a href="http://en.wikipedia.org/wiki/High_Precision_Event_Timer">HPET</a> if it is available. If not, it uses <a href="http://en.wikipedia.org/wiki/Time_Stamp_Counter">TSC</a> with some kind of synchronization of the value among CPUs. Interestingly, <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx">QueryPerformanceCounter</a> promises to return a value which increases at a constant frequency. It means that in the case of using the <a href="http://en.wikipedia.org/wiki/Time_Stamp_Counter">TSC</a> on several CPUs it may face difficulties not only with the fact that the CPUs may have different TSC values, but also with the fact that they may run at different frequencies. Keeping all that in mind, Microsoft <a href="http://msdn.microsoft.com/en-us/library/ee417693%28VS.85%29.aspx">recommends</a> using <a href="http://www.google.co.uk/search?aq=f&sourceid=chrome&ie=UTF-8&q=SetThreadAffinityMask">SetThreadAffinityMask</a> to pin the thread which calls QueryPerformanceCounter to a single processor, which, obviously, is not happening in the JVM.</p>
<h2 style="font-size: large;">Linux</h2>
Linux is very similar to Windows, apart from the fact that it is much more transparent (I managed to download the sources :) ). The value is read from <a href="http://linux.die.net/man/3/clock_gettime">clock_gettime</a> with the CLOCK_MONOTONIC flag (for the curious, the source is available in <a href="http://git.kernel.org/?p=linux/kernel/git/stable/linux-stable.git;a=blob_plain;f=arch/x86/vdso/vclock_gettime.c">vclock_gettime.c</a> from the Linux sources), which uses either the <a href="http://en.wikipedia.org/wiki/Time_Stamp_Counter">TSC</a> or the <a href="http://en.wikipedia.org/wiki/High_Precision_Event_Timer">HPET</a>. The only difference from Windows is that Linux does not even try to sync the <a href="http://en.wikipedia.org/wiki/Time_Stamp_Counter">TSC</a> values read from different CPUs; it just returns them as they are. It means that the value can leap back or jump forward depending on the CPU where it is read. Also, in contrast to Windows, Linux doesn't keep the update frequency constant. On the other hand, this should definitely improve performance.
<h2 style="font-size: large;">Solaris</h2>
Solaris is simple. I believe that via <a href="http://manpages.unixforum.co.uk/man-pages/unix/solaris-10-11_06/9F/gethrtime-man-page.html">gethrtime</a> it goes to more or less the same implementation of <a href="http://linux.die.net/man/3/clock_gettime">clock_gettime</a> as Linux does. The difference is that Solaris guarantees that the counter will not leap back, which is possible on Linux, though it is possible that the same value will be returned again. That guarantee, as can be observed from the source code, is implemented using a CAS, which requires a sync with main memory and can be relatively expensive on multi-processor machines. As on Linux, the change rate can vary.
<h2 style="font-size: large;">Conclusion</h2>
<p>The conclusion is kind of cloudy. The developer has to be aware that the function is not perfect: it can leap back or jump forward. It may not change monotonically, and the change rate can vary with the CPU clock speed. Also, it is not as fast as many may think. On my Windows 7 machine in a single-threaded test it is just about 10% faster than System.currentTimeMillis(); in a multi-threaded test, where the number of threads equals the number of CPUs, it is just the same. And on an IBM Z400 workstation with WinXP, System.nanoTime() is consistently about 8 times slower.</p>
<p>So, overall, all it gives is an increase in resolution, which may be important in some cases. And as a final note: even when the CPU frequency is not changing, do not think that you can reliably map that value to the system clock; see the details <a href="http://msdn.microsoft.com/en-us/magazine/cc163996.aspx">here</a> (this example describes just Windows, but more or less the same applies to all other OSes).</p>
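If you want to check the relative cost of the two calls on your own machine, here is the rough single-threaded sketch I would use. The numbers will differ per OS, JVM and hardware, and this is not a rigorous benchmark (no warmup control, no statistics) - it just gives a ballpark per-call cost.

```java
// A crude single-threaded comparison of clock-call costs (illustrative only).
public class ClockCost {
    public static void main(String[] args) {
        final int CALLS = 1_000_000;
        long sink = 0; // accumulate results so the calls are not optimized away

        long t0 = System.nanoTime();
        for (int i = 0; i < CALLS; i++) sink += System.nanoTime();
        long nanoCost = (System.nanoTime() - t0) / CALLS;

        t0 = System.nanoTime();
        for (int i = 0; i < CALLS; i++) sink += System.currentTimeMillis();
        long millisCost = (System.nanoTime() - t0) / CALLS;

        System.out.println("nanoTime ~" + nanoCost + " ns/call, "
                + "currentTimeMillis ~" + millisCost + " ns/call");
        if (sink == 42) System.out.println(sink); // keep 'sink' observably live
    }
}
```

Run it a few times and on all the platforms you care about; as the conclusion above says, the ratio varies a lot.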
<h2 style="font-size: large;">Appendix</h2>
Appendix contains implementations of the function for different OSes. Source code is from OpenJDK v.7.
<h4>Solaris</h4>
<pre class="prettyprint">
// gethrtime can move backwards if read from one cpu and then a different cpu
// getTimeNanos is guaranteed to not move backward on Solaris
inline hrtime_t getTimeNanos() {
if (VM_Version::supports_cx8()) {
const hrtime_t now = gethrtime();
// Use atomic long load since 32-bit x86 uses 2 registers to keep long.
const hrtime_t prev = Atomic::load((volatile jlong*)&max_hrtime);
if (now <= prev) return prev; // same or retrograde time;
const hrtime_t obsv = Atomic::cmpxchg(now, (volatile jlong*)&max_hrtime, prev);
assert(obsv >= prev, "invariant"); // Monotonicity
// If the CAS succeeded then we're done and return "now".
// If the CAS failed and the observed value "obs" is >= now then
// we should return "obs". If the CAS failed and now > obs > prv then
// some other thread raced this thread and installed a new value, in which case
// we could either (a) retry the entire operation, (b) retry trying to install now
// or (c) just return obs. We use (c). No loop is required although in some cases
// we might discard a higher "now" value in deference to a slightly lower but freshly
// installed obs value. That's entirely benign -- it admits no new orderings compared
// to (a) or (b) -- and greatly reduces coherence traffic.
// We might also condition (c) on the magnitude of the delta between obs and now.
// Avoiding excessive CAS operations to hot RW locations is critical.
// See http://blogs.sun.com/dave/entry/cas_and_cache_trivia_invalidate
return (prev == obsv) ? now : obsv ;
} else {
return oldgetTimeNanos();
}
}
</pre>
<h4>Linux</h4>
<pre class="prettyprint">
jlong os::javaTimeNanos() {
if (Linux::supports_monotonic_clock()) {
struct timespec tp;
int status = Linux::clock_gettime(CLOCK_MONOTONIC, &tp);
assert(status == 0, "gettime error");
jlong result = jlong(tp.tv_sec) * (1000 * 1000 * 1000) + jlong(tp.tv_nsec);
return result;
} else {
timeval time;
int status = gettimeofday(&time, NULL);
assert(status != -1, "linux error");
jlong usecs = jlong(time.tv_sec) * (1000 * 1000) + jlong(time.tv_usec);
return 1000 * usecs;
}
}
</pre>
<h4>Windows</h4>
<pre class="prettyprint">
jlong os::javaTimeNanos() {
if (!has_performance_count) {
return javaTimeMillis() * NANOS_PER_MILLISEC; // the best we can do.
} else {
LARGE_INTEGER current_count;
    QueryPerformanceCounter(&current_count);
double current = as_long(current_count);
double freq = performance_frequency;
jlong time = (jlong)((current/freq) * NANOS_PER_SEC);
return time;
}
}
</pre>
<h2 style="font-size: large;">References</h2>
<a href="http://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks">Inside the Hotspot VM: Clocks, Timers and Scheduling Events</a><br/>
<a href="http://www.virtualdub.org/blog/pivot/entry.php?id=106">Beware of QueryPerformanceCounter()</a><br/>
<a href="http://msdn.microsoft.com/en-us/magazine/cc163996.aspx">Implement a Continuously Updating, High-Resolution Time Provider for Windows</a><br/>
<a href="http://msdn.microsoft.com/en-us/library/windows/desktop/ee417693(v=vs.85).aspx">Game Timing and Multicore Processors</a><br/>
<a href="http://en.wikipedia.org/wiki/High_Precision_Event_Timer">High Precision Event Timer (Wikipedia)</a><br/>
<a href="http://en.wikipedia.org/wiki/Time_Stamp_Counter">Time Stamp Counter (Wikipedia)</a>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-35498761981042911762011-12-05T09:20:00.001+00:002011-12-06T09:14:59.251+00:00The magic of conditional operatorCan you guess what is going to be the output of the following piece of code?
<pre class="prettyprint">
Object obj = false ? new Long(1) : new Integer(1);
System.out.println(obj.getClass());
</pre>
The smart ones can guess that it is going to be "class java.lang.Long", otherwise there wouldn't be a question. Now, can you answer why? To be honest, I was surprised by such an outcome, but apparently, according to the JLS, that is the correct behaviour. Here is a quotation from <a href="http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.25">"15.25 Conditional Operator ? :"</a>:<br/><br/>
<i>
The type of a conditional expression is determined as follows:
<ul>
<li>If the second and third operands have the same type (which may be the null type), then that is the type of the conditional expression.</li>
<li>If one of the second and third operands is of type boolean and the type of the other is of type Boolean, then the type of the conditional expression is boolean.</li>
<li>If one of the second and third operands is of the null type and the type of the other is a reference type, then the type of the conditional expression is that reference type.</li>
<li><b>Otherwise, if the second and third operands have types that are convertible (<a href="http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#190699">§5.1.8</a>) to numeric types, then there are several cases:</b>
<ul>
<li>If one of the operands is of type byte or Byte and the other is of type short or Short, then the type of the conditional expression is short.</li>
<li>If one of the operands is of type T where T is byte, short, or char, and the other operand is a constant expression of type int whose value is representable in type T, then the type of the conditional expression is T.</li>
<li>If one of the operands is of type Byte and the other operand is a constant expression of type int whose value is representable in type byte, then the type of the conditional expression is byte.</li>
<li>If one of the operands is of type Short and the other operand is a constant expression of type int whose value is representable in type short, then the type of the conditional expression is short.</li>
<li>If one of the operands is of type Character and the other operand is a constant expression of type int whose value is representable in type char, then the type of the conditional expression is char.</li>
<li><b>Otherwise, binary numeric promotion (<a href="http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#170983">§5.6.2</a>) is applied to the operand types, and the type of the conditional expression is the promoted type of the second and third operands. Note that binary numeric promotion performs unboxing conversion (<a href="http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#190699">§5.1.8</a>) and value set conversion (<a href="http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#184225">§5.1.13</a>).</b></li>
</ul></li>
<li>Otherwise, the second and third operands are of types S1 and S2 respectively. Let T1 be the type that results from applying boxing conversion to S1, and let T2 be the type that results from applying boxing conversion to S2. The type of the conditional expression is the result of applying capture conversion (§5.1.10) to lub(T1, T2) (§15.12.2.7).</li>
</ul>
</i>
<br/>
In other words, it says that if the second and third operands are convertible to primitive numeric types, then the result type is based on numeric promotion of the converted values. Here is how it looks after applying these rules:
<pre class="prettyprint">
Object obj = Long.valueOf(false ? (new Long(1L)).longValue() : (new Integer(1)).intValue());
</pre>
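A small self-contained check of the above. The workaround with the Object cast is mine, not something from the post: casting one operand to a non-numeric reference type prevents binary numeric promotion from kicking in, so the last rule of 15.25 (capture conversion of lub(T1, T2)) applies and no unboxing happens.

```java
// Demonstrates binary numeric promotion in ?: and a cast-based workaround.
public class CondType {
    public static void main(String[] args) {
        // Both operands are convertible to numeric types, so they are
        // unboxed and promoted to long, then re-boxed as a Long:
        Object promoted = false ? new Long(1) : new Integer(1);

        // Object is not convertible to a numeric type, so the lub(T1, T2)
        // rule applies instead and the Integer instance survives as-is:
        Object preserved = false ? (Object) new Long(1) : new Integer(1);

        System.out.println(promoted.getClass());  // class java.lang.Long
        System.out.println(preserved.getClass()); // class java.lang.Integer
    }
}
```

The same trick works in the other direction, of course - casting the third operand instead of the second.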
In my opinion, it all looks like magic, and in the end, that's the price Java paid for autoboxing. Well, the world is not perfect and that's its beauty, isn't it? :)Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-75938392678146082002011-11-09T22:29:00.000+00:002011-11-11T11:55:37.130+00:00Java file flushing performance<div dir="ltr" style="text-align: left;" trbidi="on">
There are many situations when it is required to ensure that data was written to the disk, and the write is also required to be fast. The most common places where this is needed are databases, journalling, etc. Also, it is often required to update some random position in a file. I specifically want to place emphasis on random access here, as the rest will cover just the cases where it is supported, i.e. I'm not going to mention OutputStream.flush() & related topics. I just haven't tried them, as that wasn't my case at the moment.<br />
<br />
There are several ways of flushing data to disk in Java. These options can be quite different in the way they are implemented internally and in their performance. Here is the list of things you can do:<br />
<div style="text-align: left;">
</div>
<ul style="text-align: left;">
<li>FileChannel.force()</li>
<li>'rws' or 'rwd' mode of RandomAccessFile, which 'works much like the force(boolean) method of the FileChannel class' (from javadoc).</li>
<li>MappedbyteBuffer.force()</li>
<li>RandomAccessFile.getFD().sync()</li>
<li>any close() method. Here I mean seeking, writing and closing the stream each time access is required. Doing the tests, I actually didn't seek, as I was updating data at zero offset.</li>
</ul>
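For illustration, here is a minimal sketch exercising three of the variants from the list above (the file name is arbitrary and error handling is omitted for brevity):

```java
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Exercises three of the flush variants: "rwd" mode, FileChannel.force()
// and FileDescriptor.sync().
public class FlushDemo {
    public static void main(String[] args) throws Exception {
        // Variant 1: "rwd" mode - every write of file content is
        // flushed to the underlying device synchronously.
        RandomAccessFile raf = new RandomAccessFile("flush-demo.bin", "rwd");
        raf.seek(0);
        raf.writeLong(System.nanoTime()); // 8 bytes, as in the test above

        // Variant 2: explicit FileChannel.force() after a channel write.
        FileChannel ch = raf.getChannel();
        ch.write(ByteBuffer.wrap(new byte[8]), 0);
        ch.force(false); // false: flush content only, not metadata

        // Variant 3: sync the underlying file descriptor.
        raf.getFD().sync();

        raf.close();
    }
}
```

To reproduce the measurements below, you would wrap each variant in its own timing loop; as the rest of the post argues, the numbers depend heavily on the OS, file system and hardware.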
Surprisingly (the only unsurprising exception is close()), all these methods give very different performance, and it varies almost randomly across different OSes and file systems. It is worth noticing that hardware can also make its own corrections to the performance of any of these methods. I also have a strong feeling that performance may vary even with a minor change in the OS or JVM version number. Here is a table with the time it takes to flush 8 bytes (keep in mind that the real amount of flushed data depends on the size of the caches and is going to be much more than 8 bytes), just to give a flavour of how different it is:<br />
<br />
<table border="1">
<tbody>
<tr>
<td></td>
<td>RandomAccessFile.<br />
getFD().sync()</td><td>RandomAccessFile, rwd mode</td>
<td>MappedbyteBuffer.force()</td><td>FileChannel.force()</td>
</tr>
<tr>
<td>Windows</td>
<td>0.2818ms</td><td>0.0125ms</td>
<td>0.007ms</td>
<td>0.139ms</td>
</tr>
<tr>
<td>Linux</td>
<td>0.5354ms</td>
<td>0.5144ms</td>
<td>0.4663ms</td>
<td>0.0093ms</td>
</tr>
</tbody>
</table>
<div>
<br />
Please do not treat these numbers as any kind of authoritative result. They are here just to give an example of how much these things can vary.<br />
<br />
So, what is the conclusion? The conclusion is that if you need to write a high-performance application which does lots of IO, you really need to test the different approaches on different OSes, on different file systems and, preferably, on different JVMs. Do not expect something to be fast on a Linux (Solaris, AIX, etc.) production box just because it is fast on your Windows (Linux, etc.) workstation, and vice versa. As can be seen, the difference can be orders of magnitude.</div>
</div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com1tag:blogger.com,1999:blog-4215458222833264808.post-86300386423001029862011-07-20T14:23:00.007+01:002013-10-18T12:33:27.099+01:00The most complete list of -XX options for Java JVM<div dir="ltr" style="text-align: left;" trbidi="on">
There was a wonderful page with that list (<a href="http://www.md.pp.ru/~eu/jdk6options.html">http://www.md.pp.ru/~eu/jdk6options.html</a>), but it seems it's gone now and is not available any more. Luckily there is <a href="http://web.archive.org/">http://web.archive.org</a>, which makes it possible to turn time backwards and recover things which passed away long ago. So, I have just pulled that page out of the grave and copy/pasted it here to bring that bit of information back to life. Also, I'm constantly making updates to the original list, so by now it should be longer than it was initially.<br />
<a name='more'></a><ul>
<li><a href="#product">product</a> flags are always settable / visible</li>
<li><a href="#develop">develop</a> flags are settable / visible only during development and are constant in the PRODUCT version</li>
<li><a href="#notproduct">notproduct</a> flags are settable / visible only during development and are not declared in the PRODUCT version</li>
<li><a href="#diagnostic">diagnostic</a> options not meant for VM tuning or for product modes. They are to be used for VM quality assurance or field diagnosis of VM bugs. They are hidden so that users will not be encouraged to try them as if they were VM ordinary execution options. However, they are available in the product version of the VM. Under instruction from support engineers, VM customers can turn them on to collect diagnostic information about VM problems. To use a VM diagnostic option, you must first specify +UnlockDiagnosticVMOptions. (This master switch also affects the behavior of -Xprintflags.)</li>
<li><a href="#manageable">manageable</a> flags are writeable external product flags. They are dynamically writeable through the JDK management interface (com.sun.management.HotSpotDiagnosticMXBean API) and also through JConsole. These flags are external exported interface (see CCC). The list of manageable flags can be queried programmatically through the management interface.</li>
<li><a href="#experimental">experimental</a> flags are experimental options which become available only after the -XX:+UnlockExperimentalVMOptions flag is set.</li>
<li><a href="#product_rw">product_rw</a> flags are writeable internal product flags. They are like "manageable" flags but for internal/private use. The list of product_rw flags are internal/private flags which may be changed/removed in a future release. It can be set through the management interface to get/set value when the name of flag is supplied.</li>
<li><a href="#product_pd">product_pd</a></li>
<li><a href="#develop_pd">develop_pd</a></li>
</ul>
<table border="1" cellpadding="3" cellspacing="0" ><tbody>
<tr><th>Name</th><th>Description</th><th>Default</th><th>Type</th></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="product">product</a></td></tr>
<tr valign="top"><td><a href="" name="UseMembar"></a><a href="#UseMembar">UseMembar</a></td><td>(Unstable) Issues membars on thread state transitions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UnlockCommercialFeatures."></a><a href="#UnlockCommercialFeatures.">UnlockCommercialFeatures</a></td><td>Enables Oracle Java SE users to control when licensed features are allowed to run. Since Java SE 7 Update 4.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintCommandLineFlags"></a><a href="#PrintCommandLineFlags">PrintCommandLineFlags</a></td><td>Prints flags that appeared on the command line</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseGCLogFileRotation"></a><a href="#UseGCLogFileRotation">UseGCLogFileRotation</a></td><td>Prevent large gclog file for long running app. Requires -Xloggc:<filename>. Since Java7.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="NumberOfGCLogFiles"></a><a href="#NumberOfGCLogFiles">NumberOfGCLogFiles</a></td><td>Number of gclog files in rotation. Default: 0, no rotation. Only valid with UseGCLogFileRotation. Since Java7.</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="GCLogFileSize"></a><a href="#GCLogFileSize">GCLogFileSize</a></td><td>GC log file size, Default: 0 bytes, no rotation. Only valid with UseGCLogFileRotation. Since Java7.</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="JavaMonitorsInStackTrace"></a><a href="#JavaMonitorsInStackTrace">JavaMonitorsInStackTrace</a></td><td>Print info. about Java monitor locks when the stacks are dumped</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LargePageSizeInBytes"></a><a href="#LargePageSizeInBytes">LargePageSizeInBytes</a></td><td>Large page size (0 to let VM choose the page size</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="LargePageHeapSizeThreshold"></a><a href="#LargePageHeapSizeThreshold">LargePageHeapSizeThreshold</a></td><td>Use large pages if max heap is at least this big</td><td>128*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ForceTimeHighResolution"></a><a href="#ForceTimeHighResolution">ForceTimeHighResolution</a></td><td>Use high time resolution (Win32 only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintVMQWaitTime"></a><a href="#PrintVMQWaitTime">PrintVMQWaitTime</a></td><td>Prints out the waiting time in VM operation queue</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintJNIResolving"></a><a href="#PrintJNIResolving">PrintJNIResolving</a></td><td>Used to implement -v:jni</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseInlineCaches"></a><a href="#UseInlineCaches">UseInlineCaches</a></td><td>Use Inline Caches for virtual calls</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCompilerSafepoints"></a><a href="#UseCompilerSafepoints">UseCompilerSafepoints</a></td><td>Stop at safepoints in compiled code</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSplitVerifier"></a><a href="#UseSplitVerifier">UseSplitVerifier</a></td><td>Use the split verifier with StackMapTable attributes</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FailOverToOldVerifier"></a><a href="#FailOverToOldVerifier">FailOverToOldVerifier</a></td><td>Fail over to the old verifier when the split verifier fails</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SuspendRetryCount"></a><a href="#SuspendRetryCount">SuspendRetryCount</a></td><td>Maximum retry count for an external suspend request</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SuspendRetryDelay"></a><a href="#SuspendRetryDelay">SuspendRetryDelay</a></td><td>Milliseconds to delay per retry (* current_retry_count)</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseSuspendResumeThreadLists"></a><a href="#UseSuspendResumeThreadLists">UseSuspendResumeThreadLists</a></td><td>Enable SuspendThreadList and ResumeThreadList</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="MaxFDLimit"></a><a href="#MaxFDLimit">MaxFDLimit</a></td><td>Bump the number of file descriptors to the maximum (Solaris only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BytecodeVerificationRemote"></a><a href="#BytecodeVerificationRemote">BytecodeVerificationRemote</a></td><td>Enables the Java bytecode verifier for remote classes</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BytecodeVerificationLocal"></a><a href="#BytecodeVerificationLocal">BytecodeVerificationLocal</a></td><td>Enables the Java bytecode verifier for local classes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="PrintGCApplicationConcurrentTime"></a><a href="#PrintGCApplicationConcurrentTime">PrintGCApplicationConcurrentTime</a></td><td>Print the time the application has been running</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintGCApplicationStoppedTime"></a><a href="#PrintGCApplicationStoppedTime">PrintGCApplicationStoppedTime</a></td><td>Print the time the application has been stopped</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ShowMessageBoxOnError"></a><a href="#ShowMessageBoxOnError">ShowMessageBoxOnError</a></td><td>Keep process alive on VM fatal error</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SuppressFatalErrorMessage"></a><a href="#SuppressFatalErrorMessage">SuppressFatalErrorMessage</a></td><td>Do not produce a fatal error report (avoids deadlock)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="OnError"></a><a href="#OnError">OnError</a></td><td>Run user-defined commands on fatal error; see VMError.cpp for examples</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="OnOutOfMemoryError"></a><a href="#OnOutOfMemoryError">OnOutOfMemoryError</a></td><td>Run user-defined commands on first java.lang.OutOfMemoryError</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="PrintCompilation"></a><a href="#PrintCompilation">PrintCompilation</a></td><td>Print compilations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="StackTraceInThrowable"></a><a href="#StackTraceInThrowable">StackTraceInThrowable</a></td><td>Collect backtrace in throwable when exception happens</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="OmitStackTraceInFastThrow"></a><a href="#OmitStackTraceInFastThrow">OmitStackTraceInFastThrow</a></td><td>Omit backtraces for some 'hot' exceptions in optimized code</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfilerPrintByteCodeStatistics"></a><a href="#ProfilerPrintByteCodeStatistics">ProfilerPrintByteCodeStatistics</a></td><td>Prints bytecode statistics when dumping profiler output</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfilerRecordPC"></a><a href="#ProfilerRecordPC">ProfilerRecordPC</a></td><td>Collects tick for each 16 byte interval of compiled code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseNUMA"></a><a href="#UseNUMA">UseNUMA</a></td><td>Enables NUMA support. See details <a href="https://blogs.oracle.com/jonthecollector/entry/help_for_the_numa_weary">here</a></td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfileVM"></a><a href="#ProfileVM">ProfileVM</a></td><td>Profiles ticks that fall within VM (either in the VM Thread or VM code called through stubs)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfileIntervals"></a><a href="#ProfileIntervals">ProfileIntervals</a></td><td>Prints profiles for each interval (see ProfileIntervalsTicks)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RegisterFinalizersAtInit"></a><a href="#RegisterFinalizersAtInit">RegisterFinalizersAtInit</a></td><td>Register finalizable objects at end of Object.&lt;init&gt; or after allocation.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ClassUnloading"></a><a href="#ClassUnloading">ClassUnloading</a></td><td>Do unloading of classes</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ConvertYieldToSleep"></a><a href="#ConvertYieldToSleep">ConvertYieldToSleep</a></td><td>Converts yield to a sleep of MinSleepInterval to simulate Win32 behavior (SOLARIS only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseBoundThreads"></a><a href="#UseBoundThreads">UseBoundThreads</a></td><td>Bind user level threads to kernel threads (for SOLARIS only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseLWPSynchronization"></a><a href="#UseLWPSynchronization">UseLWPSynchronization</a></td><td>Use LWP-based instead of libthread-based synchronization (SPARC only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SyncKnobs"></a><a href="#SyncKnobs">SyncKnobs</a></td><td>(Unstable) Various monitor synchronization tunables</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="EmitSync"></a><a href="#EmitSync">EmitSync</a></td><td>(Unsafe,Unstable) Controls emission of inline sync fast-path code</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AlwaysInflate"></a><a href="#AlwaysInflate">AlwaysInflate</a></td><td>(Unstable) Force inflation</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="Atomics"></a><a href="#Atomics">Atomics</a></td><td>(Unsafe,Unstable) Diagnostic - Controls emission of atomics</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="EmitLFence"></a><a href="#EmitLFence">EmitLFence</a></td><td>(Unsafe,Unstable) Experimental</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AppendRatio"></a><a href="#AppendRatio">AppendRatio</a></td><td>(Unstable) Monitor queue fairness</td><td>11</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SyncFlags"></a><a href="#SyncFlags">SyncFlags</a></td><td>(Unsafe,Unstable) Experimental sync flags</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SyncVerbose"></a><a href="#SyncVerbose">SyncVerbose</a></td><td>(Unstable)</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ClearFPUAtPark"></a><a href="#ClearFPUAtPark">ClearFPUAtPark</a></td><td>(Unsafe,Unstable)</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="hashCode"></a><a href="#hashCode">hashCode</a></td><td>(Unstable) Select hashCode generation algorithm</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="WorkAroundNPTLTimedWaitHang"></a><a href="#WorkAroundNPTLTimedWaitHang">WorkAroundNPTLTimedWaitHang</a></td><td>(Unstable, Linux-specific) Avoid NPTL-FUTEX hang in pthread_cond_timedwait</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FilterSpuriousWakeups"></a><a href="#FilterSpuriousWakeups">FilterSpuriousWakeups</a></td><td>Prevent spurious or premature wakeups from Object.wait (Solaris only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AdjustConcurrency"></a><a href="#AdjustConcurrency">AdjustConcurrency</a></td><td>call thr_setconcurrency at thread create time to avoid LWP starvation on MP systems (For Solaris Only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ReduceSignalUsage"></a><a href="#ReduceSignalUsage">ReduceSignalUsage</a></td><td>Reduce the use of OS signals in Java and/or the VM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AllowUserSignalHandlers"></a><a href="#AllowUserSignalHandlers">AllowUserSignalHandlers</a></td><td>Do not complain if the application installs signal handlers (Solaris & Linux only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSignalChaining"></a><a href="#UseSignalChaining">UseSignalChaining</a></td><td>Use signal-chaining to invoke signal handlers installed by the application (Solaris & Linux only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseAltSigs"></a><a href="#UseAltSigs">UseAltSigs</a></td><td>Use alternate signals instead of SIGUSR1 & SIGUSR2 for VM internal signals. (Solaris only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSpinning"></a><a href="#UseSpinning">UseSpinning</a></td><td>Use spinning in monitor inflation and before entry</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PreSpinYield"></a><a href="#PreSpinYield">PreSpinYield</a></td><td>Yield before inner spinning loop</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PostSpinYield"></a><a href="#PostSpinYield">PostSpinYield</a></td><td>Yield after inner spinning loop</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UsePopCountInstruction"></a><a href="#UsePopCountInstruction">UsePopCountInstruction</a></td><td>Where possible, replaces calls to Integer.bitCount() with an assembly instruction, e.g. POPCNT on Intel, POPC on SPARC</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AllowJNIEnvProxy"></a><a href="#AllowJNIEnvProxy">AllowJNIEnvProxy</a></td><td>Allow JNIEnv proxies for jdbx</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="JNIDetachReleasesMonitors"></a><a href="#JNIDetachReleasesMonitors">JNIDetachReleasesMonitors</a></td><td>JNI DetachCurrentThread releases monitors owned by thread</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RestoreMXCSROnJNICalls"></a><a href="#RestoreMXCSROnJNICalls">RestoreMXCSROnJNICalls</a></td><td>Restore MXCSR when returning from JNI calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CheckJNICalls"></a><a href="#CheckJNICalls">CheckJNICalls</a></td><td>Verify all arguments to JNI calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseFastJNIAccessors"></a><a href="#UseFastJNIAccessors">UseFastJNIAccessors</a></td><td>Use optimized versions of Get&lt;Primitive&gt;Field</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="EagerXrunInit"></a><a href="#EagerXrunInit">EagerXrunInit</a></td><td>Eagerly initialize -Xrun libraries; allows startup profiling, but not all -Xrun libraries may support the state of the VM at this time</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PreserveAllAnnotations"></a><a href="#PreserveAllAnnotations">PreserveAllAnnotations</a></td><td>Preserve RuntimeInvisibleAnnotations as well as RuntimeVisibleAnnotations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LazyBootClassLoader"></a><a href="#LazyBootClassLoader">LazyBootClassLoader</a></td><td>Enable/disable lazy opening of boot class path entries</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseBiasedLocking"></a><a href="#UseBiasedLocking">UseBiasedLocking</a></td><td>Enable biased locking in JVM</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BiasedLockingStartupDelay"></a><a href="#BiasedLockingStartupDelay">BiasedLockingStartupDelay</a></td><td>Number of milliseconds to wait before enabling biased locking</td><td>4000</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="BiasedLockingBulkRebiasThreshold"></a><a href="#BiasedLockingBulkRebiasThreshold">BiasedLockingBulkRebiasThreshold</a></td><td>Threshold of number of revocations per type to try to rebias all objects in the heap of that type</td><td>20</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="BiasedLockingBulkRevokeThreshold"></a><a href="#BiasedLockingBulkRevokeThreshold">BiasedLockingBulkRevokeThreshold</a></td><td>Threshold of number of revocations per type to permanently revoke biases of all objects in the heap of that type</td><td>40</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="BiasedLockingDecayTime"></a><a href="#BiasedLockingDecayTime">BiasedLockingDecayTime</a></td><td>Decay time (in milliseconds) to re-enable bulk rebiasing of a type after previous bulk rebias</td><td>25000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TraceJVMTI"></a><a href="#TraceJVMTI">TraceJVMTI</a></td><td>Trace flags for JVMTI functions and events</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="StressLdcRewrite"></a><a href="#StressLdcRewrite">StressLdcRewrite</a></td><td>Force ldc -> ldc_w rewrite during RedefineClasses</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceRedefineClasses"></a><a href="#TraceRedefineClasses">TraceRedefineClasses</a></td><td>Trace level for JVMTI RedefineClasses</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="VerifyMergedCPBytecodes"></a><a href="#VerifyMergedCPBytecodes">VerifyMergedCPBytecodes</a></td><td>Verify bytecodes after RedefineClasses constant pool merging</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="HPILibPath"></a><a href="#HPILibPath">HPILibPath</a></td><td>Specify alternate path to HPI library</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="TraceClassResolution"></a><a href="#TraceClassResolution">TraceClassResolution</a></td><td>Trace all constant pool resolutions (for debugging)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceBiasedLocking"></a><a href="#TraceBiasedLocking">TraceBiasedLocking</a></td><td>Trace biased locking in JVM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceMonitorInflation"></a><a href="#TraceMonitorInflation">TraceMonitorInflation</a></td><td>Trace monitor inflation in JVM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="Use486InstrsOnly"></a><a href="#Use486InstrsOnly">Use486InstrsOnly</a></td><td>Use the 80486-compliant instruction subset</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSerialGC"></a><a href="#UseSerialGC">UseSerialGC</a></td><td>Tells whether the VM should use serial garbage collector</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParallelGC"></a><a href="#UseParallelGC">UseParallelGC</a></td><td>Use parallel garbage collection for scavenges. (Introduced in 1.4.1)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParallelOldGC"></a><a href="#UseParallelOldGC">UseParallelOldGC</a></td><td>Use parallel garbage collection for the full collections. Enabling this option automatically sets -XX:+UseParallelGC. (Introduced in 5.0 update 6)</td><td>'false' before Java 7 update 4 and 'true' after that version</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParallelOldGCCompacting"></a><a href="#UseParallelOldGCCompacting">UseParallelOldGCCompacting</a></td><td>In the Parallel Old garbage collector use parallel compaction</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParallelDensePrefixUpdate"></a><a href="#UseParallelDensePrefixUpdate">UseParallelDensePrefixUpdate</a></td><td>In the Parallel Old garbage collector, use parallel dense prefix update</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="HeapMaximumCompactionInterval"></a><a href="#HeapMaximumCompactionInterval">HeapMaximumCompactionInterval</a></td><td>How often should we maximally compact the heap (not allowing any dead space)</td><td>20</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="HeapFirstMaximumCompactionCount"></a><a href="#HeapFirstMaximumCompactionCount">HeapFirstMaximumCompactionCount</a></td><td>The collection count for the first maximum compaction</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="UseMaximumCompactionOnSystemGC"></a><a href="#UseMaximumCompactionOnSystemGC">UseMaximumCompactionOnSystemGC</a></td><td>In the Parallel Old garbage collector maximum compaction for a system GC</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ParallelOldDeadWoodLimiterMean"></a><a href="#ParallelOldDeadWoodLimiterMean">ParallelOldDeadWoodLimiterMean</a></td><td>The mean used by the parallel compact dead wood limiter (a number between 0-100)</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ParallelOldDeadWoodLimiterStdDev"></a><a href="#ParallelOldDeadWoodLimiterStdDev">ParallelOldDeadWoodLimiterStdDev</a></td><td>The standard deviation used by the parallel compact dead wood limiter (a number between 0-100)</td><td>80</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UseParallelOldGCDensePrefix"></a><a href="#UseParallelOldGCDensePrefix">UseParallelOldGCDensePrefix</a></td><td>Use a dense prefix with the Parallel Old garbage collector</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ParallelGCThreads"></a><a href="#ParallelGCThreads">ParallelGCThreads</a></td><td>Number of parallel threads parallel gc will use</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ParallelCMSThreads"></a><a href="#ParallelCMSThreads">ParallelCMSThreads</a></td><td>Max number of threads CMS will use for concurrent work</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="YoungPLABSize"></a><a href="#YoungPLABSize">YoungPLABSize</a></td><td>Size of young gen promotion labs (in HeapWords)</td><td>4096</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="OldPLABSize"></a><a href="#OldPLABSize">OldPLABSize</a></td><td>Size of old gen promotion labs (in HeapWords). See a good explanation of this parameter <a href="http://aragozin.blogspot.com/2011/10/java-cg-hotspots-cms-and-heap.html">here</a>.</td><td>1024</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="GCTaskTimeStampEntries"></a><a href="#GCTaskTimeStampEntries">GCTaskTimeStampEntries</a></td><td>Number of time stamp entries per gc worker thread</td><td>200</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AlwaysTenure"></a><a href="#AlwaysTenure">AlwaysTenure</a></td><td>Always tenure objects in eden. (ParallelGC only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="NeverTenure"></a><a href="#NeverTenure">NeverTenure</a></td><td>Never tenure objects in eden; may tenure on overflow (ParallelGC only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ScavengeBeforeFullGC"></a><a href="#ScavengeBeforeFullGC">ScavengeBeforeFullGC</a></td><td>Scavenge the youngest generation before each full GC; used with UseParallelGC</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCompressedOops"></a><a href="#UseCompressedOops">UseCompressedOops</a></td><td>Enables object-reference compression (Compressed Oops). Only meaningful on 64-bit JVMs. See more details <a href="https://wikis.oracle.com/display/HotSpotInternals/CompressedOops">here</a>.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseConcMarkSweepGC"></a><a href="#UseConcMarkSweepGC">UseConcMarkSweepGC</a></td><td>Use Concurrent Mark-Sweep GC in the old generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ExplicitGCInvokesConcurrent"></a><a href="#ExplicitGCInvokesConcurrent">ExplicitGCInvokesConcurrent</a></td><td>A System.gc() request invokes a concurrent collection (effective only when UseConcMarkSweepGC)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCMSBestFit"></a><a href="#UseCMSBestFit">UseCMSBestFit</a></td><td>Use CMS best fit allocation strategy</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCMSCollectionPassing"></a><a href="#UseCMSCollectionPassing">UseCMSCollectionPassing</a></td><td>Use passing of collection from background to foreground</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParNewGC"></a><a href="#UseParNewGC">UseParNewGC</a></td><td>Use parallel threads in the new generation.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ParallelGCVerbose"></a><a href="#ParallelGCVerbose">ParallelGCVerbose</a></td><td>Verbose output for parallel GC.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ParallelGCBufferWastePct"></a><a href="#ParallelGCBufferWastePct">ParallelGCBufferWastePct</a></td><td>Wasted fraction of the parallel allocation buffer</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ParallelGCRetainPLAB"></a><a href="#ParallelGCRetainPLAB">ParallelGCRetainPLAB</a></td><td>Retain parallel allocation buffers across scavenges.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TargetPLABWastePct"></a><a href="#TargetPLABWastePct">TargetPLABWastePct</a></td><td>Target wasted space in the last buffer as a percentage of overall allocation</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PLABWeight"></a><a href="#PLABWeight">PLABWeight</a></td><td>Percentage (0-100) used to weight the current sample when computing the exponentially decaying average for ResizePLAB</td><td>75</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ResizePLAB"></a><a href="#ResizePLAB">ResizePLAB</a></td><td>Dynamically resize (survivor space) promotion labs</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintPLAB"></a><a href="#PrintPLAB">PrintPLAB</a></td><td>Print (survivor space) promotion labs sizing decisions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ParGCArrayScanChunk"></a><a href="#ParGCArrayScanChunk">ParGCArrayScanChunk</a></td><td>Scan a subset and push remainder, if array is bigger than this</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ParGCDesiredObjsFromOverflowList"></a><a href="#ParGCDesiredObjsFromOverflowList">ParGCDesiredObjsFromOverflowList</a></td><td>The desired number of objects to claim from the overflow list</td><td>20</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSParPromoteBlocksToClaim"></a><a href="#CMSParPromoteBlocksToClaim">CMSParPromoteBlocksToClaim</a></td><td>Number of blocks to attempt to claim when refilling CMS LAB for parallel GC.</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AlwaysPreTouch"></a><a href="#AlwaysPreTouch">AlwaysPreTouch</a></td><td>Forces all freshly committed pages to be pre-touched</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSUseOldDefaults"></a><a href="#CMSUseOldDefaults">CMSUseOldDefaults</a></td><td>A flag temporarily introduced to allow reverting to some older default settings (older as of 6.0)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSYoungGenPerWorker"></a><a href="#CMSYoungGenPerWorker">CMSYoungGenPerWorker</a></td><td>The amount of young gen chosen by default per GC worker thread available</td><td>16*M</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalMode"></a><a href="#CMSIncrementalMode">CMSIncrementalMode</a></td><td>Whether CMS GC should operate in "incremental" mode</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalDutyCycle"></a><a href="#CMSIncrementalDutyCycle">CMSIncrementalDutyCycle</a></td><td>CMS incremental mode duty cycle (a percentage, 0-100). If CMSIncrementalPacing is enabled, this is just the initial value</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalPacing"></a><a href="#CMSIncrementalPacing">CMSIncrementalPacing</a></td><td>Whether the CMS incremental mode duty cycle should be automatically adjusted</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalDutyCycleMin"></a><a href="#CMSIncrementalDutyCycleMin">CMSIncrementalDutyCycleMin</a></td><td>Lower bound on the duty cycle when CMSIncrementalPacing is enabled (a percentage, 0-100)</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalSafetyFactor"></a><a href="#CMSIncrementalSafetyFactor">CMSIncrementalSafetyFactor</a></td><td>Percentage (0-100) used to add conservatism when computing the duty cycle</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSIncrementalOffset"></a><a href="#CMSIncrementalOffset">CMSIncrementalOffset</a></td><td>Percentage (0-100) by which the CMS incremental mode duty cycle is shifted to the right within the period between young GCs</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSExpAvgFactor"></a><a href="#CMSExpAvgFactor">CMSExpAvgFactor</a></td><td>Percentage (0-100) used to weight the current sample when computing exponential averages for CMS statistics</td><td>25</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMS_FLSWeight"></a><a href="#CMS_FLSWeight">CMS_FLSWeight</a></td><td>Percentage (0-100) used to weight the current sample when computing exponentially decaying averages for CMS FLS statistics</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMS_FLSPadding"></a><a href="#CMS_FLSPadding">CMS_FLSPadding</a></td><td>The multiple of deviation from mean to use for buffering against volatility in free list demand</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="FLSCoalescePolicy"></a><a href="#FLSCoalescePolicy">FLSCoalescePolicy</a></td><td>CMS: Aggression level for coalescing, increasing from 0 to 4</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMS_SweepWeight"></a><a href="#CMS_SweepWeight">CMS_SweepWeight</a></td><td>Percentage (0-100) used to weight the current sample when computing the exponentially decaying average for inter-sweep duration</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMS_SweepPadding"></a><a href="#CMS_SweepPadding">CMS_SweepPadding</a></td><td>The multiple of deviation from mean to use for buffering against volatility in inter-sweep duration</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMS_SweepTimerThresholdMillis"></a><a href="#CMS_SweepTimerThresholdMillis">CMS_SweepTimerThresholdMillis</a></td><td>Skip block flux-rate sampling for an epoch unless inter-sweep duration exceeds this threshold in milliseconds</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSClassUnloadingEnabled"></a><a href="#CMSClassUnloadingEnabled">CMSClassUnloadingEnabled</a></td><td>Whether class unloading is enabled when using CMS GC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSCompactWhenClearAllSoftRefs"></a><a href="#CMSCompactWhenClearAllSoftRefs">CMSCompactWhenClearAllSoftRefs</a></td><td>Compact when asked to collect CMS gen with clear_all_soft_refs</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCMSCompactAtFullCollection"></a><a href="#UseCMSCompactAtFullCollection">UseCMSCompactAtFullCollection</a></td><td>Use mark sweep compact at full collections</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSFullGCsBeforeCompaction"></a><a href="#CMSFullGCsBeforeCompaction">CMSFullGCsBeforeCompaction</a></td><td>Number of CMS full collections done before compaction, if > 0</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSIndexedFreeListReplenish"></a><a href="#CMSIndexedFreeListReplenish">CMSIndexedFreeListReplenish</a></td><td>Replenish an indexed free list with this number of chunks</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSLoopWarn"></a><a href="#CMSLoopWarn">CMSLoopWarn</a></td><td>Warn in case of excessive CMS looping</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSMarkStackSize"></a><a href="#CMSMarkStackSize">CMSMarkStackSize</a></td><td>Size of CMS marking stack</td><td>32*K</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSMarkStackSizeMax"></a><a href="#CMSMarkStackSizeMax">CMSMarkStackSizeMax</a></td><td>Max size of CMS marking stack</td><td>4*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSMaxAbortablePrecleanLoops"></a><a href="#CMSMaxAbortablePrecleanLoops">CMSMaxAbortablePrecleanLoops</a></td><td>(Temporary, subject to experimentation) Maximum number of abortable preclean iterations, if > 0</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSMaxAbortablePrecleanTime"></a><a href="#CMSMaxAbortablePrecleanTime">CMSMaxAbortablePrecleanTime</a></td><td>(Temporary, subject to experimentation) Maximum time in abortable preclean, in ms</td><td>5000</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="CMSAbortablePrecleanMinWorkPerIteration"></a><a href="#CMSAbortablePrecleanMinWorkPerIteration">CMSAbortablePrecleanMinWorkPerIteration</a></td><td>(Temporary, subject to experimentation) Nominal minimum work per abortable preclean iteration</td><td>100</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSAbortablePrecleanWaitMillis"></a><a href="#CMSAbortablePrecleanWaitMillis">CMSAbortablePrecleanWaitMillis</a></td><td>(Temporary, subject to experimentation) Time that we sleep between iterations when not given enough work per iteration</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSRescanMultiple"></a><a href="#CMSRescanMultiple">CMSRescanMultiple</a></td><td>Size (in cards) of CMS parallel rescan task</td><td>32</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSConcMarkMultiple"></a><a href="#CMSConcMarkMultiple">CMSConcMarkMultiple</a></td><td>Size (in cards) of CMS concurrent MT marking task</td><td>32</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSRevisitStackSize"></a><a href="#CMSRevisitStackSize">CMSRevisitStackSize</a></td><td>Size of CMS KlassKlass revisit stack</td><td>1*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSAbortSemantics"></a><a href="#CMSAbortSemantics">CMSAbortSemantics</a></td><td>Whether abort-on-overflow semantics is implemented</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSParallelRemarkEnabled"></a><a href="#CMSParallelRemarkEnabled">CMSParallelRemarkEnabled</a></td><td>Whether parallel remark enabled (only if ParNewGC)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="CMSParallelSurvivorRemarkEnabled"></a><a href="#CMSParallelSurvivorRemarkEnabled">CMSParallelSurvivorRemarkEnabled</a></td><td>Whether parallel remark of survivor space is enabled (effective only if CMSParallelRemarkEnabled)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPLABRecordAlways"></a><a href="#CMSPLABRecordAlways">CMSPLABRecordAlways</a></td><td>Whether to always record survivor space PLAB boundaries (effective only if CMSParallelSurvivorRemarkEnabled)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSConcurrentMTEnabled"></a><a href="#CMSConcurrentMTEnabled">CMSConcurrentMTEnabled</a></td><td>Whether multi-threaded concurrent work enabled (if ParNewGC)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPermGenPrecleaningEnabled"></a><a href="#CMSPermGenPrecleaningEnabled">CMSPermGenPrecleaningEnabled</a></td><td>Whether concurrent precleaning is enabled in perm gen (effective only when CMSPrecleaningEnabled is true)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPermGenSweepingEnabled"></a><a href="#CMSPermGenSweepingEnabled">CMSPermGenSweepingEnabled</a></td><td>Whether sweeping of perm gen is enabled</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleaningEnabled"></a><a href="#CMSPrecleaningEnabled">CMSPrecleaningEnabled</a></td><td>Whether concurrent precleaning enabled</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanIter"></a><a href="#CMSPrecleanIter">CMSPrecleanIter</a></td><td>Maximum number of precleaning iteration passes</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanNumerator"></a><a href="#CMSPrecleanNumerator">CMSPrecleanNumerator</a></td><td>CMSPrecleanNumerator:CMSPrecleanDenominator yields convergence ratio</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanDenominator"></a><a href="#CMSPrecleanDenominator">CMSPrecleanDenominator</a></td><td>CMSPrecleanNumerator:CMSPrecleanDenominator yields convergence ratio</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanRefLists1"></a><a href="#CMSPrecleanRefLists1">CMSPrecleanRefLists1</a></td><td>Preclean ref lists during (initial) preclean phase</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanRefLists2"></a><a href="#CMSPrecleanRefLists2">CMSPrecleanRefLists2</a></td><td>Preclean ref lists during abortable preclean phase</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanSurvivors1"></a><a href="#CMSPrecleanSurvivors1">CMSPrecleanSurvivors1</a></td><td>Preclean survivors during (initial) preclean phase</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanSurvivors2"></a><a href="#CMSPrecleanSurvivors2">CMSPrecleanSurvivors2</a></td><td>Preclean survivors during abortable preclean phase</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSPrecleanThreshold"></a><a href="#CMSPrecleanThreshold">CMSPrecleanThreshold</a></td><td>Don't re-iterate if #dirty cards less than this</td><td>1000</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSCleanOnEnter"></a><a href="#CMSCleanOnEnter">CMSCleanOnEnter</a></td><td>Clean-on-enter optimization for reducing number of dirty cards</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSRemarkVerifyVariant"></a><a href="#CMSRemarkVerifyVariant">CMSRemarkVerifyVariant</a></td><td>Choose variant (1,2) of verification following remark</td><td>1</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="CMSScheduleRemarkEdenSizeThreshold"></a><a href="#CMSScheduleRemarkEdenSizeThreshold">CMSScheduleRemarkEdenSizeThreshold</a></td><td>If Eden used is below this value, don't try to schedule remark</td><td>2*M</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="CMSScheduleRemarkEdenPenetration"></a><a href="#CMSScheduleRemarkEdenPenetration">CMSScheduleRemarkEdenPenetration</a></td><td>The Eden occupancy % at which to try and schedule remark pause</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="CMSScheduleRemarkSamplingRatio"></a><a href="#CMSScheduleRemarkSamplingRatio">CMSScheduleRemarkSamplingRatio</a></td><td>Start sampling Eden top at least before young gen occupancy reaches 1/&lt;ratio&gt; of the size at which we plan to schedule remark</td><td>5</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSSamplingGrain"></a><a href="#CMSSamplingGrain">CMSSamplingGrain</a></td><td>The minimum distance between eden samples for CMS (see above)</td><td>16*K</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSScavengeBeforeRemark"></a><a href="#CMSScavengeBeforeRemark">CMSScavengeBeforeRemark</a></td><td>Attempt scavenge before the CMS remark step</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSWorkQueueDrainThreshold"></a><a href="#CMSWorkQueueDrainThreshold">CMSWorkQueueDrainThreshold</a></td><td>Don't drain below this size per parallel worker/thief</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSWaitDuration"></a><a href="#CMSWaitDuration">CMSWaitDuration</a></td><td>Time in milliseconds that CMS thread waits for young GC</td><td>2000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSYield"></a><a href="#CMSYield">CMSYield</a></td><td>Yield between steps of concurrent mark & sweep</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSBitMapYieldQuantum"></a><a href="#CMSBitMapYieldQuantum">CMSBitMapYieldQuantum</a></td><td>Bitmap operations should process at most this many bits between yields</td><td>10*M</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="BlockOffsetArrayUseUnallocatedBlock"></a><a href="#BlockOffsetArrayUseUnallocatedBlock">BlockOffsetArrayUseUnallocatedBlock</a></td><td>Maintain _unallocated_block in BlockOffsetArray (currently applicable only to CMS collector)</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RefDiscoveryPolicy"></a><a href="#RefDiscoveryPolicy">RefDiscoveryPolicy</a></td><td>Whether reference-based (0) or referent-based (1)</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ParallelRefProcEnabled"></a><a href="#ParallelRefProcEnabled">ParallelRefProcEnabled</a></td><td>Enable parallel reference processing whenever possible</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTriggerRatio"></a><a href="#CMSTriggerRatio">CMSTriggerRatio</a></td><td>Percentage of MinHeapFreeRatio in CMS generation that is allocated before a CMS collection cycle commences</td><td>80</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSBootstrapOccupancy"></a><a href="#CMSBootstrapOccupancy">CMSBootstrapOccupancy</a></td><td>Percentage CMS generation occupancy at which to initiate CMS collection for bootstrapping collection stats</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSInitiatingOccupancyFraction"></a><a href="#CMSInitiatingOccupancyFraction">CMSInitiatingOccupancyFraction</a></td><td>Percentage CMS generation occupancy to start a CMS collection cycle (A negative value means that CMSTriggerRatio is used). See a good explanation of this parameter <a href="http://aragozin.blogspot.com/2011/10/java-cg-hotspots-cms-and-heap.html">here</a>.</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseCMSInitiatingOccupancyOnly"></a><a href="#UseCMSInitiatingOccupancyOnly">UseCMSInitiatingOccupancyOnly</a></td><td>Only use occupancy as a criterion for starting a CMS collection. See a good explanation of this parameter <a href="http://aragozin.blogspot.com/2011/10/java-cg-hotspots-cms-and-heap.html">here</a>.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="HandlePromotionFailure"></a><a href="#HandlePromotionFailure">HandlePromotionFailure</a></td><td>The youngest generation collection does not require a guarantee of full promotion of all live objects.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PreserveMarkStackSize"></a><a href="#PreserveMarkStackSize">PreserveMarkStackSize</a></td><td>Size for stack used in promotion failure handling</td><td>40</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ZeroTLAB"></a><a href="#ZeroTLAB">ZeroTLAB</a></td><td>Zero out the newly created <a href="#TLAB">TLAB</a></td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintTLAB"></a><a href="#PrintTLAB">PrintTLAB</a></td><td>Print various <a href="#TLAB">TLAB</a> related information</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TLABStats"></a><a href="#TLABStats">TLABStats</a></td><td>Print various <a href="#TLAB">TLAB</a> related information</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AlwaysActAsServerClassMachine"></a><a href="#AlwaysActAsServerClassMachine">AlwaysActAsServerClassMachine</a></td><td>Always act like a server-class machine</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DefaultMaxRAM"></a><a href="#DefaultMaxRAM">DefaultMaxRAM</a></td><td>Maximum real memory size for setting server class heap size</td><td>G</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="DefaultMaxRAMFraction"></a><a href="#DefaultMaxRAMFraction">DefaultMaxRAMFraction</a></td><td>Fraction (1/n) of real memory used for server class max heap</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="DefaultInitialRAMFraction"></a><a href="#DefaultInitialRAMFraction">DefaultInitialRAMFraction</a></td><td>Fraction (1/n) of real memory used for server class initial heap</td><td>64</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UseAutoGCSelectPolicy"></a><a href="#UseAutoGCSelectPolicy">UseAutoGCSelectPolicy</a></td><td>Use automatic collection selection policy</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AutoGCSelectPauseMillis"></a><a href="#AutoGCSelectPauseMillis">AutoGCSelectPauseMillis</a></td><td>Automatic GC selection pause threshold in ms</td><td>5000</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UseAdaptiveSizePolicy"></a><a href="#UseAdaptiveSizePolicy">UseAdaptiveSizePolicy</a></td><td>Use adaptive generation sizing policies</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UsePSAdaptiveSurvivorSizePolicy"></a><a href="#UsePSAdaptiveSurvivorSizePolicy">UsePSAdaptiveSurvivorSizePolicy</a></td><td>Use adaptive survivor sizing policies</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="UseAdaptiveGenerationSizePolicyAtMinorCollection"></a><a href="#UseAdaptiveGenerationSizePolicyAtMinorCollection">UseAdaptiveGenerationSizePolicyAtMinorCollection</a></td><td>Use adaptive young-old sizing policies at minor collections</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="UseAdaptiveGenerationSizePolicyAtMajorCollection"></a><a href="#UseAdaptiveGenerationSizePolicyAtMajorCollection">UseAdaptiveGenerationSizePolicyAtMajorCollection</a></td><td>Use adaptive young-old sizing policies at major collections</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="UseAdaptiveSizePolicyWithSystemGC"></a><a href="#UseAdaptiveSizePolicyWithSystemGC">UseAdaptiveSizePolicyWithSystemGC</a></td><td>Use statistics from System.GC for adaptive size policy</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseAdaptiveGCBoundary"></a><a href="#UseAdaptiveGCBoundary">UseAdaptiveGCBoundary</a></td><td>Allow young-old boundary to move</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizeThroughPutPolicy"></a><a href="#AdaptiveSizeThroughPutPolicy">AdaptiveSizeThroughPutPolicy</a></td><td>Policy for changing generation size for throughput goals</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizePausePolicy"></a><a href="#AdaptiveSizePausePolicy">AdaptiveSizePausePolicy</a></td><td>Policy for changing generation size for pause goals</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizePolicyInitializingSteps"></a><a href="#AdaptiveSizePolicyInitializingSteps">AdaptiveSizePolicyInitializingSteps</a></td><td>Number of steps where heuristics is used before data is used</td><td>20</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizePolicyOutputInterval"></a><a href="#AdaptiveSizePolicyOutputInterval">AdaptiveSizePolicyOutputInterval</a></td><td>Collection interval for printing information, zero => never</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="UseAdaptiveSizePolicyFootprintGoal"></a><a href="#UseAdaptiveSizePolicyFootprintGoal">UseAdaptiveSizePolicyFootprintGoal</a></td><td>Use adaptive minimum footprint as a goal</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizePolicyWeight"></a><a href="#AdaptiveSizePolicyWeight">AdaptiveSizePolicyWeight</a></td><td>Weight given to exponential resizing, between 0 and 100</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveTimeWeight"></a><a href="#AdaptiveTimeWeight">AdaptiveTimeWeight</a></td><td>Weight given to time in adaptive policy, between 0 and 100</td><td>25</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PausePadding"></a><a href="#PausePadding">PausePadding</a></td><td>How much buffer to keep for pause time</td><td>1</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PromotedPadding"></a><a href="#PromotedPadding">PromotedPadding</a></td><td>How much buffer to keep for promotion failure</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SurvivorPadding"></a><a href="#SurvivorPadding">SurvivorPadding</a></td><td>How much buffer to keep for survivor overflow</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptivePermSizeWeight"></a><a href="#AdaptivePermSizeWeight">AdaptivePermSizeWeight</a></td><td>Weight for perm gen exponential resizing, between 0 and 100</td><td>20</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PermGenPadding"></a><a href="#PermGenPadding">PermGenPadding</a></td><td>How much buffer to keep for perm gen sizing</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ThresholdTolerance"></a><a href="#ThresholdTolerance">ThresholdTolerance</a></td><td>Allowed collection cost difference between generations</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="AdaptiveSizePolicyCollectionCostMargin"></a><a href="#AdaptiveSizePolicyCollectionCostMargin">AdaptiveSizePolicyCollectionCostMargin</a></td><td>If collection costs are within margin, reduce both by full delta</td><td>50</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="YoungGenerationSizeIncrement"></a><a href="#YoungGenerationSizeIncrement">YoungGenerationSizeIncrement</a></td><td>Adaptive size percentage change in young generation</td><td>20</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="YoungGenerationSizeSupplement"></a><a href="#YoungGenerationSizeSupplement">YoungGenerationSizeSupplement</a></td><td>Supplement to YoungGenerationSizeIncrement used at startup</td><td>80</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="YoungGenerationSizeSupplementDecay"></a><a href="#YoungGenerationSizeSupplementDecay">YoungGenerationSizeSupplementDecay</a></td><td>Decay factor to YoungGenerationSizeSupplement</td><td>8</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TenuredGenerationSizeIncrement"></a><a href="#TenuredGenerationSizeIncrement">TenuredGenerationSizeIncrement</a></td><td>Adaptive size percentage change in tenured generation</td><td>20</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TenuredGenerationSizeSupplement"></a><a href="#TenuredGenerationSizeSupplement">TenuredGenerationSizeSupplement</a></td><td>Supplement to TenuredGenerationSizeIncrement used at startup</td><td>80</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="TenuredGenerationSizeSupplementDecay"></a><a href="#TenuredGenerationSizeSupplementDecay">TenuredGenerationSizeSupplementDecay</a></td><td>Decay factor to TenuredGenerationSizeIncrement</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxGCPauseMillis"></a><a href="#MaxGCPauseMillis">MaxGCPauseMillis</a></td><td>Adaptive size policy maximum GC pause time goal in msec</td><td>max_uintx</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxGCMinorPauseMillis"></a><a href="#MaxGCMinorPauseMillis">MaxGCMinorPauseMillis</a></td><td>Adaptive size policy maximum GC minor pause time goal in msec</td><td>max_uintx</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="GCTimeRatio"></a><a href="#GCTimeRatio">GCTimeRatio</a></td><td>Adaptive size policy application time to GC time ratio</td><td>99</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizeDecrementScaleFactor"></a><a href="#AdaptiveSizeDecrementScaleFactor">AdaptiveSizeDecrementScaleFactor</a></td><td>Adaptive size scale down factor for shrinking</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UseAdaptiveSizeDecayMajorGCCost"></a><a href="#UseAdaptiveSizeDecayMajorGCCost">UseAdaptiveSizeDecayMajorGCCost</a></td><td>Adaptive size decays the major cost for long major intervals</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="AdaptiveSizeMajorGCDecayTimeScale"></a><a href="#AdaptiveSizeMajorGCDecayTimeScale">AdaptiveSizeMajorGCDecayTimeScale</a></td><td>Time scale over which major costs decay</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MinSurvivorRatio"></a><a href="#MinSurvivorRatio">MinSurvivorRatio</a></td><td>Minimum ratio of young generation/survivor space size</td><td>3</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="InitialSurvivorRatio"></a><a href="#InitialSurvivorRatio">InitialSurvivorRatio</a></td><td>Initial ratio of eden/survivor space size</td><td>8</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="BaseFootPrintEstimate"></a><a href="#BaseFootPrintEstimate">BaseFootPrintEstimate</a></td><td>Estimate of footprint other than Java Heap</td><td>256*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UseGCOverheadLimit"></a><a href="#UseGCOverheadLimit">UseGCOverheadLimit</a></td><td>Use policy to limit proportion of time spent in GC before an OutOfMemoryError is thrown</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GCTimeLimit"></a><a href="#GCTimeLimit">GCTimeLimit</a></td><td>Limit of proportion of time spent in GC before an OutOfMemoryError is thrown (used with GCHeapFreeLimit)</td><td>98</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="GCHeapFreeLimit"></a><a href="#GCHeapFreeLimit">GCHeapFreeLimit</a></td><td>Minimum percentage of free space after a full GC before an OutOfMemoryError is thrown (used with GCTimeLimit)</td><td>2</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PrintAdaptiveSizePolicy"></a><a href="#PrintAdaptiveSizePolicy">PrintAdaptiveSizePolicy</a></td><td>Print information about AdaptiveSizePolicy</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DisableExplicitGC"></a><a href="#DisableExplicitGC">DisableExplicitGC</a></td><td>Tells whether calling System.gc() does a full GC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CollectGen0First"></a><a href="#CollectGen0First">CollectGen0First</a></td><td>Collect youngest generation before each full GC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BindGCTaskThreadsToCPUs"></a><a href="#BindGCTaskThreadsToCPUs">BindGCTaskThreadsToCPUs</a></td><td>Bind GCTaskThreads to CPUs if possible</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseGCTaskAffinity"></a><a href="#UseGCTaskAffinity">UseGCTaskAffinity</a></td><td>Use worker affinity when asking for GCTasks</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProcessDistributionStride"></a><a href="#ProcessDistributionStride">ProcessDistributionStride</a></td><td>Stride through processors when distributing processes</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSCoordinatorYieldSleepCount"></a><a href="#CMSCoordinatorYieldSleepCount">CMSCoordinatorYieldSleepCount</a></td><td>number of times the coordinator GC thread will sleep while yielding before giving up and resuming GC</td><td>10</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CMSYieldSleepCount"></a><a href="#CMSYieldSleepCount">CMSYieldSleepCount</a></td><td>number of times a GC thread (minus the coordinator) will sleep while yielding before giving up and resuming GC</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PrintGCTaskTimeStamps"></a><a href="#PrintGCTaskTimeStamps">PrintGCTaskTimeStamps</a></td><td>Print timestamps for individual gc worker thread tasks</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceClassLoadingPreorder"></a><a href="#TraceClassLoadingPreorder">TraceClassLoadingPreorder</a></td><td>Trace all classes loaded in order referenced (not loaded)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceGen0Time"></a><a href="#TraceGen0Time">TraceGen0Time</a></td><td>Trace accumulated time for Gen 0 collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceGen1Time"></a><a href="#TraceGen1Time">TraceGen1Time</a></td><td>Trace accumulated time for Gen 1 collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintTenuringDistribution"></a><a href="#PrintTenuringDistribution">PrintTenuringDistribution</a></td><td>Print tenuring age information</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintHeapAtSIGBREAK"></a><a href="#PrintHeapAtSIGBREAK">PrintHeapAtSIGBREAK</a></td><td>Print heap layout in response to SIGBREAK</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceParallelOldGCTasks"></a><a href="#TraceParallelOldGCTasks">TraceParallelOldGCTasks</a></td><td>Trace multithreaded GC activity</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintParallelOldGCPhaseTimes"></a><a href="#PrintParallelOldGCPhaseTimes">PrintParallelOldGCPhaseTimes</a></td><td>Print the time taken by each parallel old gc phase. PrintGCDetails must also be enabled.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CITime"></a><a href="#CITime">CITime</a></td><td>collect timing information for compilation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="Inline"></a><a href="#Inline">Inline</a></td><td>enable inlining</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ClipInlining"></a><a href="#ClipInlining">ClipInlining</a></td><td>clip inlining if aggregate method exceeds DesiredMethodLimit</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseTypeProfile"></a><a href="#UseTypeProfile">UseTypeProfile</a></td><td>Check interpreter profile for historically monomorphic calls</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TypeProfileMinimumRatio"></a><a href="#TypeProfileMinimumRatio">TypeProfileMinimumRatio</a></td><td>Minimum ratio of profiled majority type to all minority types</td><td>9</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="Tier1UpdateMethodData"></a><a href="#Tier1UpdateMethodData">Tier1UpdateMethodData</a></td><td>Update methodDataOops in Tier1-generated code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintVMOptions"></a><a href="#PrintVMOptions">PrintVMOptions</a></td><td>print VM flag settings</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ErrorFile"></a><a href="#ErrorFile">ErrorFile</a></td><td>If an error occurs, save the error data to this file [default: ./hs_err_pid%p.log] (%p replaced with pid)</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="DisplayVMOutputToStderr"></a><a href="#DisplayVMOutputToStderr">DisplayVMOutputToStderr</a></td><td>If DisplayVMOutput is true, display all VM output to stderr</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DisplayVMOutputToStdout"></a><a href="#DisplayVMOutputToStdout">DisplayVMOutputToStdout</a></td><td>If DisplayVMOutput is true, display all VM output to stdout</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseHeavyMonitors"></a><a href="#UseHeavyMonitors">UseHeavyMonitors</a></td><td>use heavyweight instead of lightweight Java monitors</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RangeCheckElimination"></a><a href="#RangeCheckElimination">RangeCheckElimination</a></td><td>Split loop iterations to eliminate range checks</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SplitIfBlocks"></a><a href="#SplitIfBlocks">SplitIfBlocks</a></td><td>Clone compares and control flow through merge points to fold some branches</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AggressiveOpts"></a><a href="#AggressiveOpts">AggressiveOpts</a></td><td>Enable aggressive optimizations - see arguments.cpp</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintInterpreter"></a><a href="#PrintInterpreter">PrintInterpreter</a></td><td>Prints the generated interpreter code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseInterpreter"></a><a href="#UseInterpreter">UseInterpreter</a></td><td>Use interpreter for non-compiled methods</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseNiagaraInstrs"></a><a href="#UseNiagaraInstrs">UseNiagaraInstrs</a></td><td>Use Niagara-efficient instruction subset</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseLoopCounter"></a><a href="#UseLoopCounter">UseLoopCounter</a></td><td>Increment invocation counter on backward branch</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseFastEmptyMethods"></a><a href="#UseFastEmptyMethods">UseFastEmptyMethods</a></td><td>Use fast method entry code for empty methods</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseFastAccessorMethods"></a><a href="#UseFastAccessorMethods">UseFastAccessorMethods</a></td><td>Use fast method entry code for accessor methods</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="EnableJVMPIInstructionStartEvent"></a><a href="#EnableJVMPIInstructionStartEvent">EnableJVMPIInstructionStartEvent</a></td><td>Enable JVMPI_EVENT_INSTRUCTION_START events - slows down interpretation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="JVMPICheckGCCompatibility"></a><a href="#JVMPICheckGCCompatibility">JVMPICheckGCCompatibility</a></td><td>If JVMPI is used, make sure that we are using a JVMPI-compatible garbage collector</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfileMaturityPercentage"></a><a href="#ProfileMaturityPercentage">ProfileMaturityPercentage</a></td><td>number of method invocations/branches (expressed as % of CompileThreshold) before using the method's profile</td><td>20</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseCompiler"></a><a href="#UseCompiler">UseCompiler</a></td><td>use compilation</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCounterDecay"></a><a href="#UseCounterDecay">UseCounterDecay</a></td><td>adjust recompilation counters</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AlwaysCompileLoopMethods"></a><a href="#AlwaysCompileLoopMethods">AlwaysCompileLoopMethods</a></td><td>when using recompilation, never interpret methods containing loops</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DontCompileHugeMethods"></a><a href="#DontCompileHugeMethods">DontCompileHugeMethods</a></td><td>don't compile methods > HugeMethodLimit</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="EstimateArgEscape"></a><a href="#EstimateArgEscape">EstimateArgEscape</a></td><td>Analyze bytecodes to estimate escape state of arguments</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BCEATraceLevel"></a><a href="#BCEATraceLevel">BCEATraceLevel</a></td><td>How much tracing to do of bytecode escape analysis estimates</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxBCEAEstimateLevel"></a><a href="#MaxBCEAEstimateLevel">MaxBCEAEstimateLevel</a></td><td>Maximum number of nested calls that are analyzed by BC EA.</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxBCEAEstimateSize"></a><a href="#MaxBCEAEstimateSize">MaxBCEAEstimateSize</a></td><td>Maximum bytecode size of a method to be analyzed by BC EA.</td><td>150</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SelfDestructTimer"></a><a href="#SelfDestructTimer">SelfDestructTimer</a></td><td>Will cause VM to terminate after a given time (in minutes) (0 means off)</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxJavaStackTraceDepth"></a><a href="#MaxJavaStackTraceDepth">MaxJavaStackTraceDepth</a></td>
<td>
Maximum number of lines in the stack trace for Java exceptions (0 means all).<br/>
With Java 1.6 and later, a value of 0 really means 0; -1 or any negative number must be specified to print the whole stack (tested with 1.6.0_22 and 1.7.0 on Windows).<br/>
With Java 1.5 and earlier, a value of 0 means everything; the JVM chokes on a negative number (tested with 1.5.0_22 on Windows).
</td>
<td>1024</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="NmethodSweepFraction"></a><a href="#NmethodSweepFraction">NmethodSweepFraction</a></td><td>Number of invocations of sweeper to cover all nmethods</td><td>4</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxInlineSize"></a><a href="#MaxInlineSize">MaxInlineSize</a></td><td>maximum bytecode size of a method to be inlined</td><td>35</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ProfileIntervalsTicks"></a><a href="#ProfileIntervalsTicks">ProfileIntervalsTicks</a></td><td># of ticks between printing of interval profile (+ProfileIntervals)</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="EventLogLength"></a><a href="#EventLogLength">EventLogLength</a></td><td>maximum number of events in event log</td><td>2000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerMethodRecompilationCutoff"></a><a href="#PerMethodRecompilationCutoff">PerMethodRecompilationCutoff</a></td><td>After recompiling N times, stay in the interpreter (-1=>'Inf')</td><td>400</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerBytecodeRecompilationCutoff"></a><a href="#PerBytecodeRecompilationCutoff">PerBytecodeRecompilationCutoff</a></td><td>Per-BCI limit on repeated recompilation (-1=>'Inf')</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerMethodTrapLimit"></a><a href="#PerMethodTrapLimit">PerMethodTrapLimit</a></td><td>Limit on traps (of one kind) in a method (includes inlines)</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerBytecodeTrapLimit"></a><a href="#PerBytecodeTrapLimit">PerBytecodeTrapLimit</a></td><td>Limit on traps (of one kind) at a particular BCI</td><td>4</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AliasLevel"></a><a href="#AliasLevel">AliasLevel</a></td><td>0 for no aliasing, 1 for oop/field/static/array split, 2 for best</td><td>2</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ReadSpinIterations"></a><a href="#ReadSpinIterations">ReadSpinIterations</a></td><td>Number of read attempts before a yield (spin inner loop)</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PreBlockSpin"></a><a href="#PreBlockSpin">PreBlockSpin</a></td><td>Number of times to spin in an inflated lock before going to an OS lock</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxHeapSize"></a><a href="#MaxHeapSize">MaxHeapSize</a></td><td>Default maximum size for object heap (in bytes)</td><td>ScaleForWordSize (64*M)</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxNewSize"></a><a href="#MaxNewSize">MaxNewSize</a></td><td>Maximum size of new generation (in bytes)</td><td>max_uintx</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PretenureSizeThreshold"></a><a href="#PretenureSizeThreshold">PretenureSizeThreshold</a></td><td>Max size in bytes of objects allocated in DefNew generation</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MinTLABSize"></a><a href="#MinTLABSize">MinTLABSize</a></td><td>Minimum allowed TLAB size (in bytes)</td><td>2*K</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TLABAllocationWeight"></a><a href="#TLABAllocationWeight">TLABAllocationWeight</a></td><td>Allocation averaging weight</td><td>35</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TLABWasteTargetPercent"></a><a href="#TLABWasteTargetPercent">TLABWasteTargetPercent</a></td><td>Percentage of Eden that can be wasted</td><td>1</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TLABRefillWasteFraction"></a><a href="#TLABRefillWasteFraction">TLABRefillWasteFraction</a></td><td>Max TLAB waste at a refill (internal fragmentation)</td><td>64</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TLABWasteIncrement"></a><a href="#TLABWasteIncrement">TLABWasteIncrement</a></td><td>Increment allowed waste at slow allocation</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxLiveObjectEvacuationRatio"></a><a href="#MaxLiveObjectEvacuationRatio">MaxLiveObjectEvacuationRatio</a></td><td>Max percent of eden objects that will be live at scavenge</td><td>100</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="OldSize"></a><a href="#OldSize">OldSize</a></td><td>Default size of tenured generation (in bytes)</td><td>ScaleForWordSize (4096*K)</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MinHeapFreeRatio"></a><a href="#MinHeapFreeRatio">MinHeapFreeRatio</a></td><td>Min percentage of heap free after GC to avoid expansion</td><td>40</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxHeapFreeRatio"></a><a href="#MaxHeapFreeRatio">MaxHeapFreeRatio</a></td><td>Max percentage of heap free after GC to avoid shrinking</td><td>70</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SoftRefLRUPolicyMSPerMB"></a><a href="#SoftRefLRUPolicyMSPerMB">SoftRefLRUPolicyMSPerMB</a></td><td>Number of milliseconds per MB of free space in the heap</td><td>1000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MinHeapDeltaBytes"></a><a href="#MinHeapDeltaBytes">MinHeapDeltaBytes</a></td><td>Min change in heap space due to GC (in bytes)</td><td>ScaleForWordSize (128*K)</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MinPermHeapExpansion"></a><a href="#MinPermHeapExpansion">MinPermHeapExpansion</a></td><td>Min expansion of permanent heap (in bytes)</td><td>ScaleForWordSize (256*K)</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxPermHeapExpansion"></a><a href="#MaxPermHeapExpansion">MaxPermHeapExpansion</a></td><td>Max expansion of permanent heap without full GC (in bytes)</td><td>ScaleForWordSize (4*M)</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="QueuedAllocationWarningCount"></a><a href="#QueuedAllocationWarningCount">QueuedAllocationWarningCount</a></td><td>Number of times an allocation that queues behind a GC will retry before printing a warning</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxTenuringThreshold"></a><a href="#MaxTenuringThreshold">MaxTenuringThreshold</a></td><td>Maximum value for tenuring threshold. See more info about that flag <a href="http://cybergav.in/2009/12/12/the-maxtenuringthreshold-for-a-hotspot-jvm/">here</a>.</td><td>15</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InitialTenuringThreshold"></a><a href="#InitialTenuringThreshold">InitialTenuringThreshold</a></td><td>Initial value for tenuring threshold</td><td>7</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TargetSurvivorRatio"></a><a href="#TargetSurvivorRatio">TargetSurvivorRatio</a></td><td>Desired percentage of survivor space used after scavenge</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MarkSweepDeadRatio"></a><a href="#MarkSweepDeadRatio">MarkSweepDeadRatio</a></td><td>Percentage (0-100) of the old gen allowed as dead wood. Serial mark sweep treats this as both the min and max value. CMS uses this value only if it falls back to mark sweep. Par compact uses a variable scale based on the density of the generation and treats this as the max value when the heap is either completely full or completely empty. Par compact also has a smaller default value; see arguments.cpp.</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PermMarkSweepDeadRatio"></a><a href="#PermMarkSweepDeadRatio">PermMarkSweepDeadRatio</a></td><td>Percentage (0-100) of the perm gen allowed as dead wood. See MarkSweepDeadRatio for collector-specific comments.</td><td>20</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MarkSweepAlwaysCompactCount"></a><a href="#MarkSweepAlwaysCompactCount">MarkSweepAlwaysCompactCount</a></td><td>How often should we fully compact the heap (ignoring the dead space parameters)</td><td>4</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PrintCMSStatistics"></a><a href="#PrintCMSStatistics">PrintCMSStatistics</a></td><td>Statistics for CMS</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PrintCMSInitiationStatistics"></a><a href="#PrintCMSInitiationStatistics">PrintCMSInitiationStatistics</a></td><td>Statistics for initiating a CMS collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintFLSStatistics"></a><a href="#PrintFLSStatistics">PrintFLSStatistics</a></td><td>Statistics for CMS' FreeListSpace</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PrintFLSCensus"></a><a href="#PrintFLSCensus">PrintFLSCensus</a></td><td>Census for CMS' FreeListSpace</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="DeferThrSuspendLoopCount"></a><a href="#DeferThrSuspendLoopCount">DeferThrSuspendLoopCount</a></td><td>(Unstable) Number of times to iterate in safepoint loop before blocking VM threads</td><td>4000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="DeferPollingPageLoopCount"></a><a href="#DeferPollingPageLoopCount">DeferPollingPageLoopCount</a></td><td>(Unsafe,Unstable) Number of iterations in safepoint loop before changing safepoint polling page to RO</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SafepointSpinBeforeYield"></a><a href="#SafepointSpinBeforeYield">SafepointSpinBeforeYield</a></td><td>(Unstable)</td><td>2000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseDepthFirstScavengeOrder"></a><a href="#UseDepthFirstScavengeOrder">UseDepthFirstScavengeOrder</a></td><td>true: the scavenge order will be depth-first, false: the scavenge order will be breadth-first</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GCDrainStackTargetSize"></a><a href="#GCDrainStackTargetSize">GCDrainStackTargetSize</a></td><td>how many entries we'll try to leave on the stack during parallel GC</td><td>64</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ThreadSafetyMargin"></a><a href="#ThreadSafetyMargin">ThreadSafetyMargin</a></td><td>Thread safety margin is used on fixed-stack LinuxThreads (on Linux/x86 only) to prevent heap-stack collision. Set to 0 to disable this feature</td><td>50*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CodeCacheMinimumFreeSpace"></a><a href="#CodeCacheMinimumFreeSpace">CodeCacheMinimumFreeSpace</a></td><td>When less than X space left, we stop compiling.</td><td>500*K</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CompileOnly"></a><a href="#CompileOnly">CompileOnly</a></td><td>List of methods (pkg/class.name) to restrict compilation to</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="CompileCommandFile"></a><a href="#CompileCommandFile">CompileCommandFile</a></td><td>Read compiler commands from this file [.hotspot_compiler]</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="CompileCommand"></a><a href="#CompileCommand">CompileCommand</a></td><td>Prepend to .hotspot_compiler; e.g. log,java/lang/String.&lt;init&gt;</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="CICompilerCountPerCPU"></a><a href="#CICompilerCountPerCPU">CICompilerCountPerCPU</a></td><td>1 compiler thread for log(N CPUs)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseThreadPriorities"></a><a href="#UseThreadPriorities">UseThreadPriorities</a></td><td>Use native thread priorities</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ThreadPriorityPolicy"></a><a href="#ThreadPriorityPolicy">ThreadPriorityPolicy</a></td><td>0: Normal. The VM chooses priorities that are appropriate for normal applications. On Solaris, NORM_PRIORITY and above are mapped to normal native priority; Java priorities below NORM_PRIORITY map to lower native priority values. On Windows, applications are allowed to use higher native priorities; however, with ThreadPriorityPolicy=0 the VM will not use the highest possible native priority, THREAD_PRIORITY_TIME_CRITICAL, as it may interfere with system threads. On Linux, thread priorities are ignored because the OS does not support static priority in the SCHED_OTHER scheduling class, which is the only choice for non-root, non-realtime applications. 1: Aggressive. Java thread priorities map over the entire range of native thread priorities; higher Java thread priorities map to higher native thread priorities. This policy should be used with care, as it can sometimes cause performance degradation in the application and/or the entire system. On Linux this policy requires root privilege.</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ThreadPriorityVerbose"></a><a href="#ThreadPriorityVerbose">ThreadPriorityVerbose</a></td><td>print priority changes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DefaultThreadPriority"></a><a href="#DefaultThreadPriority">DefaultThreadPriority</a></td><td>what native priority threads run at if not specified elsewhere (-1 means no change)</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CompilerThreadPriority"></a><a href="#CompilerThreadPriority">CompilerThreadPriority</a></td><td>what priority should compiler threads run at (-1 means no change)</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="VMThreadPriority"></a><a href="#VMThreadPriority">VMThreadPriority</a></td><td>what priority should VM threads run at (-1 means no change)</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CompilerThreadHintNoPreempt"></a><a href="#CompilerThreadHintNoPreempt">CompilerThreadHintNoPreempt</a></td><td>(Solaris only) Give compiler threads an extra quanta</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VMThreadHintNoPreempt"></a><a href="#VMThreadHintNoPreempt">VMThreadHintNoPreempt</a></td><td>(Solaris only) Give VM thread an extra quanta</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority1_To_OSPriority"></a><a href="#JavaPriority1_To_OSPriority">JavaPriority1_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority2_To_OSPriority"></a><a href="#JavaPriority2_To_OSPriority">JavaPriority2_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority3_To_OSPriority"></a><a href="#JavaPriority3_To_OSPriority">JavaPriority3_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority4_To_OSPriority"></a><a href="#JavaPriority4_To_OSPriority">JavaPriority4_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority5_To_OSPriority"></a><a href="#JavaPriority5_To_OSPriority">JavaPriority5_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority6_To_OSPriority"></a><a href="#JavaPriority6_To_OSPriority">JavaPriority6_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority7_To_OSPriority"></a><a href="#JavaPriority7_To_OSPriority">JavaPriority7_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority8_To_OSPriority"></a><a href="#JavaPriority8_To_OSPriority">JavaPriority8_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority9_To_OSPriority"></a><a href="#JavaPriority9_To_OSPriority">JavaPriority9_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JavaPriority10_To_OSPriority"></a><a href="#JavaPriority10_To_OSPriority">JavaPriority10_To_OSPriority</a></td><td>Map Java priorities to OS priorities</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StarvationMonitorInterval"></a><a href="#StarvationMonitorInterval">StarvationMonitorInterval</a></td><td>Pause between each check in ms</td><td>200</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="Tier1BytecodeLimit"></a><a href="#Tier1BytecodeLimit">Tier1BytecodeLimit</a></td><td>Must have at least this many bytecodes before tier1 invocation counters are used</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StressTieredRuntime"></a><a href="#StressTieredRuntime">StressTieredRuntime</a></td><td>Alternate client and server compiler on compile requests</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InterpreterProfilePercentage"></a><a href="#InterpreterProfilePercentage">InterpreterProfilePercentage</a></td><td>number of method invocations/branches (expressed as % of CompileThreshold) before profiling in the interpreter</td><td>33</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxDirectMemorySize"></a><a href="#MaxDirectMemorySize">MaxDirectMemorySize</a></td><td>Maximum total size of NIO direct-buffer allocations</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseUnsupportedDeprecatedJVMPI"></a><a href="#UseUnsupportedDeprecatedJVMPI">UseUnsupportedDeprecatedJVMPI</a></td><td>Flag to temporarily re-enable the soon-to-be-removed experimental JVMPI interface.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UsePerfData"></a><a href="#UsePerfData">UsePerfData</a></td><td>Flag to disable jvmstat instrumentation for performance testing and problem isolation purposes.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PerfDataSaveToFile"></a><a href="#PerfDataSaveToFile">PerfDataSaveToFile</a></td><td>Save PerfData memory to a hsperfdata_&lt;pid&gt; file on exit</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PerfDataSamplingInterval"></a><a href="#PerfDataSamplingInterval">PerfDataSamplingInterval</a></td><td>Data sampling interval in milliseconds</td><td>50 (ms)</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerfDisableSharedMem"></a><a href="#PerfDisableSharedMem">PerfDisableSharedMem</a></td><td>Store performance data in standard memory</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PerfDataMemorySize"></a><a href="#PerfDataMemorySize">PerfDataMemorySize</a></td><td>Size of performance data memory region. Will be rounded up to a multiple of the native OS page size.</td><td>32*K</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerfMaxStringConstLength"></a><a href="#PerfMaxStringConstLength">PerfMaxStringConstLength</a></td><td>Maximum PerfStringConstant string length before truncation</td><td>1024</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerfAllowAtExitRegistration"></a><a href="#PerfAllowAtExitRegistration">PerfAllowAtExitRegistration</a></td><td>Allow registration of atexit() methods</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PerfBypassFileSystemCheck"></a><a href="#PerfBypassFileSystemCheck">PerfBypassFileSystemCheck</a></td><td>Bypass Win32 file system criteria checks (Windows Only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UnguardOnExecutionViolation"></a><a href="#UnguardOnExecutionViolation">UnguardOnExecutionViolation</a></td><td>Unguard page and retry on no-execute fault (Win32 only): 0=off, 1=conservative, 2=aggressive</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ManagementServer"></a><a href="#ManagementServer">ManagementServer</a></td><td>Create JMX Management Server</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DisableAttachMechanism"></a><a href="#DisableAttachMechanism">DisableAttachMechanism</a></td><td>Disable mechanism that allows tools to attach to this VM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="StartAttachListener"></a><a href="#StartAttachListener">StartAttachListener</a></td><td>Always start Attach Listener at VM startup</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSharedSpaces"></a><a href="#UseSharedSpaces">UseSharedSpaces</a></td><td>Use shared spaces in the permanent generation</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RequireSharedSpaces"></a><a href="#RequireSharedSpaces">RequireSharedSpaces</a></td><td>Require shared spaces in the permanent generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ForceSharedSpaces"></a><a href="#ForceSharedSpaces">ForceSharedSpaces</a></td><td>Require shared spaces in the permanent generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DumpSharedSpaces"></a><a href="#DumpSharedSpaces">DumpSharedSpaces</a></td><td>Special mode: JVM reads a class list, loads classes, builds shared spaces, and dumps the shared spaces to a file to be used in future JVM runs.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintSharedSpaces"></a><a href="#PrintSharedSpaces">PrintSharedSpaces</a></td><td>Print usage of shared spaces</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SharedDummyBlockSize"></a><a href="#SharedDummyBlockSize">SharedDummyBlockSize</a></td><td>Size of dummy block used to shift heap addresses (in bytes)</td><td>512*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SharedReadWriteSize"></a><a href="#SharedReadWriteSize">SharedReadWriteSize</a></td><td>Size of read-write space in permanent generation (in bytes)</td><td>12*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SharedReadOnlySize"></a><a href="#SharedReadOnlySize">SharedReadOnlySize</a></td><td>Size of read-only space in permanent generation (in bytes)</td><td>8*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SharedMiscDataSize"></a><a href="#SharedMiscDataSize">SharedMiscDataSize</a></td><td>Size of the shared data area adjacent to the heap (in bytes)</td><td>4*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SharedMiscCodeSize"></a><a href="#SharedMiscCodeSize">SharedMiscCodeSize</a></td><td>Size of the shared code area adjacent to the heap (in bytes)</td><td>4*M</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TaggedStackInterpreter"></a><a href="#TaggedStackInterpreter">TaggedStackInterpreter</a></td><td>Insert tags in interpreter execution stack for oopmap generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ExtendedDTraceProbes"></a><a href="#ExtendedDTraceProbes">ExtendedDTraceProbes</a></td><td>Enable performance-impacting dtrace probes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DTraceMethodProbes"></a><a href="#DTraceMethodProbes">DTraceMethodProbes</a></td><td>Enable dtrace probes for method-entry and method-exit</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DTraceAllocProbes"></a><a href="#DTraceAllocProbes">DTraceAllocProbes</a></td><td>Enable dtrace probes for object allocation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DTraceMonitorProbes"></a><a href="#DTraceMonitorProbes">DTraceMonitorProbes</a></td><td>Enable dtrace probes for monitor events</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RelaxAccessControlCheck"></a><a href="#RelaxAccessControlCheck">RelaxAccessControlCheck</a></td><td>Relax the access control checks in the verifier</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseVMInterruptibleIO"></a><a href="#UseVMInterruptibleIO">UseVMInterruptibleIO</a></td><td>(Unstable, Solaris-specific) Thread interrupt before or with EINTR for I/O operations results in OS_INTRPT</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AggressiveHeap"></a><a href="#AggressiveHeap">AggressiveHeap</a></td><td>This option inspects the server resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory-allocation-intensive jobs. The JVM team views AggressiveHeap as an anachronism and would like to see it go away; instead, determine which of the individual options that AggressiveHeap sets actually impact your application, and set those on the command line directly. You can check the OpenJDK source code (arguments.cpp) to see what AggressiveHeap actually does.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCompressedStrings"></a><a href="#UseCompressedStrings">UseCompressedStrings</a></td><td>Use a byte[] for Strings which can be represented as pure ASCII. (Introduced in Java 6 Update 21 Performance Release)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="OptimizeStringConcat"></a><a href="#OptimizeStringConcat">OptimizeStringConcat</a></td><td>Optimize String concatenation operations where possible. (Introduced in Java 6 Update 20)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseStringCache"></a><a href="#UseStringCache">UseStringCache</a></td><td>Enables caching of commonly allocated strings.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="G1HeapRegionSize"></a><a href="#G1HeapRegionSize">G1HeapRegionSize</a></td><td>With G1 the Java heap is subdivided into uniformly sized regions. This sets the size of the individual subdivisions. The default value of this parameter is determined ergonomically based upon heap size. The minimum value is 1 MB and the maximum value is 32 MB. Introduced in Java 6 Update 26.</td><td>8m</td><td></td></tr>
<tr valign="top"><td><a href="" name="G1ReservePercent"></a><a href="#G1ReservePercent">G1ReservePercent</a></td><td>Sets the amount of heap that is reserved as a false ceiling to reduce the possibility of promotion failure. Introduced in Java 6 Update 26.</td><td>10</td><td></td></tr>
<tr valign="top"><td><a href="" name="G1ConfidencePercent"></a><a href="#G1ConfidencePercent">G1ConfidencePercent</a></td><td>Confidence coefficient for G1 pause prediction. Introduced in Java 6 Update 26.</td><td>50</td><td></td></tr>
<tr valign="top"><td><a href="" name="PrintPromotionFailure"></a><a href="#PrintPromotionFailure">PrintPromotionFailure</a></td><td>Prints additional information on GC promotion failures.</td><td></td><td>bool</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="manageable">manageable</a></td></tr>
<tr valign="top"><td><a href="" name="HeapDumpOnOutOfMemoryError"></a><a href="#HeapDumpOnOutOfMemoryError">HeapDumpOnOutOfMemoryError</a></td><td>Dump heap to file when java.lang.OutOfMemoryError is thrown</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="HeapDumpPath"></a><a href="#HeapDumpPath">HeapDumpPath</a></td><td>When HeapDumpOnOutOfMemoryError is on, the path (filename or directory) of the dump file (defaults to java_pid&lt;pid&gt;.hprof in the working directory)</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="PrintGC"></a><a href="#PrintGC">PrintGC</a></td><td>Print message at garbage collect</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintGCDetails"></a><a href="#PrintGCDetails">PrintGCDetails</a></td><td>Print more details at garbage collect</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintGCTimeStamps"></a><a href="#PrintGCTimeStamps">PrintGCTimeStamps</a></td><td>Print timestamps at garbage collect</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintGCDateStamps"></a><a href="#PrintGCDateStamps">PrintGCDateStamps</a></td><td>Print datestamps (<a href="http://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations">ISO_8601</a>, e.g. 2013-10-18T12:32:01.657+0100) at garbage collect. Since Java 1.6.u4.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintClassHistogram"></a><a href="#PrintClassHistogram">PrintClassHistogram</a></td><td>Print a histogram of class instances</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintConcurrentLocks"></a><a href="#PrintConcurrentLocks">PrintConcurrentLocks</a></td><td>Print java.util.concurrent locks in thread dump</td><td>false</td><td>bool</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="experimental">experimental</a></td></tr>
<tr valign="top"><td><a href="" name="UnlockExperimentalVMOptions"></a><a href="#UnlockExperimentalVMOptions">UnlockExperimentalVMOptions</a></td><td>Unlocks experimental options.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseG1GC"></a><a href="#UseG1GC">UseG1GC</a></td><td>Switches on G1 for Java 6, where it is experimental. In Java 7 and later the same flag enables G1 as a fully supported collector, with no UnlockExperimentalVMOptions required.</td><td>false</td><td>bool</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="product_rw">product_rw</a></td></tr>
<tr valign="top"><td><a href="" name="TraceClassLoading"></a><a href="#TraceClassLoading">TraceClassLoading</a></td><td>Trace all classes loaded</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceClassUnloading"></a><a href="#TraceClassUnloading">TraceClassUnloading</a></td><td>Trace unloading of classes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceLoaderConstraints"></a><a href="#TraceLoaderConstraints">TraceLoaderConstraints</a></td><td>Trace loader constraints</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintHeapAtGC"></a><a href="#PrintHeapAtGC">PrintHeapAtGC</a></td><td>Print heap layout before and after each GC</td><td>false</td><td>bool</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="develop">develop</a></td></tr>
<tr valign="top"><td><a href="" name="TraceItables"></a><a href="#TraceItables">TraceItables</a></td><td>Trace initialization and use of itables</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TracePcPatching"></a><a href="#TracePcPatching">TracePcPatching</a></td><td>Trace usage of frame::patch_pc</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJumps"></a><a href="#TraceJumps">TraceJumps</a></td><td>Trace assembly jumps in thread ring buffer</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceRelocator"></a><a href="#TraceRelocator">TraceRelocator</a></td><td>Trace the bytecode relocator</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceLongCompiles"></a><a href="#TraceLongCompiles">TraceLongCompiles</a></td><td>Print out every time compilation is longer than a given threshold</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SafepointALot"></a><a href="#SafepointALot">SafepointALot</a></td><td>Generates a lot of safepoints. Works with GuaranteedSafepointInterval</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BailoutToInterpreterForThrows"></a><a href="#BailoutToInterpreterForThrows">BailoutToInterpreterForThrows</a></td><td>Compiled methods that throw/catch exceptions will be deoptimized and interpreted.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="NoYieldsInMicrolock"></a><a href="#NoYieldsInMicrolock">NoYieldsInMicrolock</a></td><td>Disable yields in microlock</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceOopMapGeneration"></a><a href="#TraceOopMapGeneration">TraceOopMapGeneration</a></td><td>Shows oopmap generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="MethodFlushing"></a><a href="#MethodFlushing">MethodFlushing</a></td><td>Reclamation of zombie and not-entrant methods</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyStack"></a><a href="#VerifyStack">VerifyStack</a></td><td>Verify stack of each thread when it is entering a runtime call</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceDerivedPointers"></a><a href="#TraceDerivedPointers">TraceDerivedPointers</a></td><td>Trace traversal of derived pointers on stack</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineArrayCopy"></a><a href="#InlineArrayCopy">InlineArrayCopy</a></td><td>inline arraycopy native that is known to be part of base library DLL</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineObjectHash"></a><a href="#InlineObjectHash">InlineObjectHash</a></td><td>inline Object::hashCode() native that is known to be part of base library DLL</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineNatives"></a><a href="#InlineNatives">InlineNatives</a></td><td>inline natives that are known to be part of base library DLL</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineMathNatives"></a><a href="#InlineMathNatives">InlineMathNatives</a></td><td>inline SinD, CosD, etc.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineClassNatives"></a><a href="#InlineClassNatives">InlineClassNatives</a></td><td>inline Class.isInstance, etc</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineAtomicLong"></a><a href="#InlineAtomicLong">InlineAtomicLong</a></td><td>inline sun.misc.AtomicLong</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineThreadNatives"></a><a href="#InlineThreadNatives">InlineThreadNatives</a></td><td>inline Thread.currentThread, etc</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineReflectionGetCallerClass"></a><a href="#InlineReflectionGetCallerClass">InlineReflectionGetCallerClass</a></td><td>inline sun.reflect.Reflection.getCallerClass(), known to be part of base library DLL</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineUnsafeOps"></a><a href="#InlineUnsafeOps">InlineUnsafeOps</a></td><td>inline memory ops (native methods) from sun.misc.Unsafe</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ConvertCmpD2CmpF"></a><a href="#ConvertCmpD2CmpF">ConvertCmpD2CmpF</a></td><td>Convert cmpD to cmpF when one input is constant in float range</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ConvertFloat2IntClipping"></a><a href="#ConvertFloat2IntClipping">ConvertFloat2IntClipping</a></td><td>Convert float2int clipping idiom to integer clipping</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SpecialStringCompareTo"></a><a href="#SpecialStringCompareTo">SpecialStringCompareTo</a></td><td>special version of string compareTo</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SpecialStringIndexOf"></a><a href="#SpecialStringIndexOf">SpecialStringIndexOf</a></td><td>special version of string indexOf</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCallFixup"></a><a href="#TraceCallFixup">TraceCallFixup</a></td><td>traces all call fixups</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DeoptimizeALot"></a><a href="#DeoptimizeALot">DeoptimizeALot</a></td><td>deoptimize at every exit from the runtime system</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DeoptimizeOnlyAt"></a><a href="#DeoptimizeOnlyAt">DeoptimizeOnlyAt</a></td><td>a comma separated list of bcis to deoptimize at</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="Debugging"></a><a href="#Debugging">Debugging</a></td><td>set when executing debug methods in debug.cpp (to prevent triggering assertions)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceHandleAllocation"></a><a href="#TraceHandleAllocation">TraceHandleAllocation</a></td><td>Prints out warnings when suspiciously many handles are allocated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ShowSafepointMsgs"></a><a href="#ShowSafepointMsgs">ShowSafepointMsgs</a></td><td>Show messages about safepoint synchronization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SafepointTimeout"></a><a href="#SafepointTimeout">SafepointTimeout</a></td><td>Time out and warn or fail after SafepointTimeoutDelay milliseconds if failed to reach safepoint</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DieOnSafepointTimeout"></a><a href="#DieOnSafepointTimeout">DieOnSafepointTimeout</a></td><td>Die upon failure to reach safepoint (see SafepointTimeout)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ForceFloatExceptions"></a><a href="#ForceFloatExceptions">ForceFloatExceptions</a></td><td>Force exceptions on FP stack under/overflow</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SoftMatchFailure"></a><a href="#SoftMatchFailure">SoftMatchFailure</a></td><td>If the DFA fails to match a node, print a message and bail out</td><td>trueInProduct</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyStackAtCalls"></a><a href="#VerifyStackAtCalls">VerifyStackAtCalls</a></td><td>Verify that the stack pointer is unchanged after calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJavaAssertions"></a><a href="#TraceJavaAssertions">TraceJavaAssertions</a></td><td>Trace java language assertions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapDeadCompiledLocals"></a><a href="#ZapDeadCompiledLocals">ZapDeadCompiledLocals</a></td><td>Zap dead locals in compiler frames</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseMallocOnly"></a><a href="#UseMallocOnly">UseMallocOnly</a></td><td>use only malloc/free for allocation (no resource area/arena)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMalloc"></a><a href="#PrintMalloc">PrintMalloc</a></td><td>print all malloc/free calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapResourceArea"></a><a href="#ZapResourceArea">ZapResourceArea</a></td><td>Zap freed resource/arena space with 0xABABABAB</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapJNIHandleArea"></a><a href="#ZapJNIHandleArea">ZapJNIHandleArea</a></td><td>Zap freed JNI handle space with 0xFEFEFEFE</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapUnusedHeapArea"></a><a href="#ZapUnusedHeapArea">ZapUnusedHeapArea</a></td><td>Zap unused heap space with 0xBAADBABE</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintVMMessages"></a><a href="#PrintVMMessages">PrintVMMessages</a></td><td>Print vm messages on console</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="Verbose"></a><a href="#Verbose">Verbose</a></td><td>Prints additional debugging information from other modes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMiscellaneous"></a><a href="#PrintMiscellaneous">PrintMiscellaneous</a></td><td>Prints uncategorized debugging information (requires +Verbose)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="WizardMode"></a><a href="#WizardMode">WizardMode</a></td><td>Prints much more debugging information</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SegmentedHeapDumpThreshold"></a><a href="#SegmentedHeapDumpThreshold">SegmentedHeapDumpThreshold</a></td><td>Generate a segmented heap dump (JAVA PROFILE 1.0.2 format) when the heap usage is larger than this</td><td>2*G</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="HeapDumpSegmentSize"></a><a href="#HeapDumpSegmentSize">HeapDumpSegmentSize</a></td><td>Approximate segment size when generating a segmented heap dump</td><td>1*G</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="BreakAtWarning"></a><a href="#BreakAtWarning">BreakAtWarning</a></td><td>Execute breakpoint upon encountering VM warning</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceVMOperation"></a><a href="#TraceVMOperation">TraceVMOperation</a></td><td>Trace vm operations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseFakeTimers"></a><a href="#UseFakeTimers">UseFakeTimers</a></td><td>Tells whether the VM should use system time or a fake timer</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintAssembly"></a><a href="#PrintAssembly">PrintAssembly</a></td><td>Print assembly code. Requires disassembler plugin, see details <a href="https://wikis.oracle.com/display/HotSpotInternals/PrintAssembly">here</a>.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintNMethods"></a><a href="#PrintNMethods">PrintNMethods</a></td><td>Print assembly code for nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintNativeNMethods"></a><a href="#PrintNativeNMethods">PrintNativeNMethods</a></td><td>Print assembly code for native nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintDebugInfo"></a><a href="#PrintDebugInfo">PrintDebugInfo</a></td><td>Print debug information for all nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintRelocations"></a><a href="#PrintRelocations">PrintRelocations</a></td><td>Print relocation information for all nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintDependencies"></a><a href="#PrintDependencies">PrintDependencies</a></td><td>Print dependency information for all nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintExceptionHandlers"></a><a href="#PrintExceptionHandlers">PrintExceptionHandlers</a></td><td>Print exception handler tables for all nmethods when generated</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InterceptOSException"></a><a href="#InterceptOSException">InterceptOSException</a></td><td>Starts debugger when an implicit OS (e.g., NULL) exception happens</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintCodeCache2"></a><a href="#PrintCodeCache2">PrintCodeCache2</a></td><td>Print detailed info on the compiled_code cache when exiting</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintStubCode"></a><a href="#PrintStubCode">PrintStubCode</a></td><td>Print generated stub code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintJVMWarnings"></a><a href="#PrintJVMWarnings">PrintJVMWarnings</a></td><td>Prints warnings for unimplemented JVM functions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InitializeJavaLangSystem"></a><a href="#InitializeJavaLangSystem">InitializeJavaLangSystem</a></td><td>Initialize java.lang.System - turn off for individual method debugging</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InitializeJavaLangString"></a><a href="#InitializeJavaLangString">InitializeJavaLangString</a></td><td>Initialize java.lang.String - turn off for individual method debugging</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InitializeJavaLangExceptionsErrors"></a><a href="#InitializeJavaLangExceptionsErrors">InitializeJavaLangExceptionsErrors</a></td><td>Initialize various error and exception classes - turn off for individual method debugging</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RegisterReferences"></a><a href="#RegisterReferences">RegisterReferences</a></td><td>Tells whether the VM should register soft/weak/final/phantom references</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="IgnoreRewrites"></a><a href="#IgnoreRewrites">IgnoreRewrites</a></td><td>Suppress rewrites of bytecodes in the oopmap generator. This is unsafe!</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintCodeCacheExtension"></a><a href="#PrintCodeCacheExtension">PrintCodeCacheExtension</a></td><td>Print extension of code cache</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UsePrivilegedStack"></a><a href="#UsePrivilegedStack">UsePrivilegedStack</a></td><td>Enable the security JVM functions</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="IEEEPrecision"></a><a href="#IEEEPrecision">IEEEPrecision</a></td><td>Enables IEEE precision (for INTEL only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProtectionDomainVerification"></a><a href="#ProtectionDomainVerification">ProtectionDomainVerification</a></td><td>Verifies protection domain before resolution in system dictionary</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DisableStartThread"></a><a href="#DisableStartThread">DisableStartThread</a></td><td>Disable starting of additional Java threads (for debugging only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="MemProfiling"></a><a href="#MemProfiling">MemProfiling</a></td><td>Write memory usage profiling to log file</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseDetachedThreads"></a><a href="#UseDetachedThreads">UseDetachedThreads</a></td><td>Use detached threads that are recycled upon termination (for SOLARIS only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UsePthreads"></a><a href="#UsePthreads">UsePthreads</a></td><td>Use pthread-based instead of libthread-based synchronization (SPARC only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UpdateHotSpotCompilerFileOnError"></a><a href="#UpdateHotSpotCompilerFileOnError">UpdateHotSpotCompilerFileOnError</a></td><td>Should the system attempt to update the compiler file when an error occurs?</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LoadLineNumberTables"></a><a href="#LoadLineNumberTables">LoadLineNumberTables</a></td><td>Tells whether the class file parser loads line number tables</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LoadLocalVariableTables"></a><a href="#LoadLocalVariableTables">LoadLocalVariableTables</a></td><td>Tells whether the class file parser loads local variable tables</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LoadLocalVariableTypeTables"></a><a href="#LoadLocalVariableTypeTables">LoadLocalVariableTypeTables</a></td><td>Tells whether the class file parser loads local variable type tables</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="PreallocatedOutOfMemoryErrorCount"></a><a href="#PreallocatedOutOfMemoryErrorCount">PreallocatedOutOfMemoryErrorCount</a></td><td>Number of OutOfMemoryErrors preallocated with backtrace</td><td>4</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PrintBiasedLockingStatistics"></a><a href="#PrintBiasedLockingStatistics">PrintBiasedLockingStatistics</a></td><td>Print statistics of biased locking in JVM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJVMPI"></a><a href="#TraceJVMPI">TraceJVMPI</a></td><td>Trace JVMPI</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJNICalls"></a><a href="#TraceJNICalls">TraceJNICalls</a></td><td>Trace JNI calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJNIHandleAllocation"></a><a href="#TraceJNIHandleAllocation">TraceJNIHandleAllocation</a></td><td>Trace allocation/deallocation of JNI handle blocks</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceThreadEvents"></a><a href="#TraceThreadEvents">TraceThreadEvents</a></td><td>Trace all thread events</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceBytecodes"></a><a href="#TraceBytecodes">TraceBytecodes</a></td><td>Trace bytecode execution</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceClassInitialization"></a><a href="#TraceClassInitialization">TraceClassInitialization</a></td><td>Trace class initialization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceExceptions"></a><a href="#TraceExceptions">TraceExceptions</a></td><td>Trace exceptions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceICs"></a><a href="#TraceICs">TraceICs</a></td><td>Trace inline cache changes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceInlineCacheClearing"></a><a href="#TraceInlineCacheClearing">TraceInlineCacheClearing</a></td><td>Trace clearing of inline caches in nmethods</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceDependencies"></a><a href="#TraceDependencies">TraceDependencies</a></td><td>Trace dependencies</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyDependencies"></a><a href="#VerifyDependencies">VerifyDependencies</a></td><td>Exercise and verify the compilation dependency mechanism</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceNewOopMapGeneration"></a><a href="#TraceNewOopMapGeneration">TraceNewOopMapGeneration</a></td><td>Trace OopMapGeneration</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="TraceNewOopMapGenerationDetailed"></a><a href="#TraceNewOopMapGenerationDetailed">TraceNewOopMapGenerationDetailed</a></td><td>Trace OopMapGeneration: print detailed cell states</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeOopMap"></a><a href="#TimeOopMap">TimeOopMap</a></td><td>Time calls to GenerateOopMap::compute_map() in sum</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeOopMap2"></a><a href="#TimeOopMap2">TimeOopMap2</a></td><td>Time calls to GenerateOopMap::compute_map() individually</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceMonitorMismatch"></a><a href="#TraceMonitorMismatch">TraceMonitorMismatch</a></td><td>Trace monitor matching failures during OopMapGeneration</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceOopMapRewrites"></a><a href="#TraceOopMapRewrites">TraceOopMapRewrites</a></td><td>Trace rewriting of method oops during oop map generation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceSafepoint"></a><a href="#TraceSafepoint">TraceSafepoint</a></td><td>Trace safepoint operations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceICBuffer"></a><a href="#TraceICBuffer">TraceICBuffer</a></td><td>Trace usage of IC buffer</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCompiledIC"></a><a href="#TraceCompiledIC">TraceCompiledIC</a></td><td>Trace changes of compiled IC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceStartupTime"></a><a href="#TraceStartupTime">TraceStartupTime</a></td><td>Trace setup time</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceHPI"></a><a href="#TraceHPI">TraceHPI</a></td><td>Trace Host Porting Interface (HPI)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceProtectionDomainVerification"></a><a href="#TraceProtectionDomainVerification">TraceProtectionDomainVerification</a></td><td>Trace protection domain verification</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceClearedExceptions"></a><a href="#TraceClearedExceptions">TraceClearedExceptions</a></td><td>Prints when an exception is forcibly cleared</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseParallelOldGCChunkPointerCalc"></a><a href="#UseParallelOldGCChunkPointerCalc">UseParallelOldGCChunkPointerCalc</a></td><td>In the Parallel Old garbage collector use chunks to calculate new object locations</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyParallelOldWithMarkSweep"></a><a href="#VerifyParallelOldWithMarkSweep">VerifyParallelOldWithMarkSweep</a></td><td>Use the MarkSweep code to verify phases of Parallel Old</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="VerifyParallelOldWithMarkSweepInterval"></a><a href="#VerifyParallelOldWithMarkSweepInterval">VerifyParallelOldWithMarkSweepInterval</a></td><td>Interval at which the MarkSweep code is used to verify phases of Parallel Old</td><td>1</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ParallelOldMTUnsafeMarkBitMap"></a><a href="#ParallelOldMTUnsafeMarkBitMap">ParallelOldMTUnsafeMarkBitMap</a></td><td>Use the Parallel Old MT unsafe in marking the bitmap</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ParallelOldMTUnsafeUpdateLiveData"></a><a href="#ParallelOldMTUnsafeUpdateLiveData">ParallelOldMTUnsafeUpdateLiveData</a></td><td>Use the Parallel Old MT unsafe in update of live size</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceChunkTasksQueuing"></a><a href="#TraceChunkTasksQueuing">TraceChunkTasksQueuing</a></td><td>Trace the queuing of the chunk tasks</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ScavengeWithObjectsInToSpace"></a><a href="#ScavengeWithObjectsInToSpace">ScavengeWithObjectsInToSpace</a></td><td>Allow scavenges to occur when to_space contains objects.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCMSAdaptiveFreeLists"></a><a href="#UseCMSAdaptiveFreeLists">UseCMSAdaptiveFreeLists</a></td><td>Use Adaptive Free Lists in the CMS generation</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseAsyncConcMarkSweepGC"></a><a href="#UseAsyncConcMarkSweepGC">UseAsyncConcMarkSweepGC</a></td><td>Use Asynchronous Concurrent Mark-Sweep GC in the old generation</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RotateCMSCollectionTypes"></a><a href="#RotateCMSCollectionTypes">RotateCMSCollectionTypes</a></td><td>Rotate the CMS collections among concurrent and STW</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTraceIncrementalMode"></a><a href="#CMSTraceIncrementalMode">CMSTraceIncrementalMode</a></td><td>Trace CMS incremental mode</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTraceIncrementalPacing"></a><a href="#CMSTraceIncrementalPacing">CMSTraceIncrementalPacing</a></td><td>Trace CMS incremental mode pacing computation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTraceThreadState"></a><a href="#CMSTraceThreadState">CMSTraceThreadState</a></td><td>Trace the CMS thread state (enable the trace_state() method)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSDictionaryChoice"></a><a href="#CMSDictionaryChoice">CMSDictionaryChoice</a></td><td>Use BinaryTreeDictionary as default in the CMS generation</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSOverflowEarlyRestoration"></a><a href="#CMSOverflowEarlyRestoration">CMSOverflowEarlyRestoration</a></td><td>Whether preserved marks should be restored early</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTraceSweeper"></a><a href="#CMSTraceSweeper">CMSTraceSweeper</a></td><td>Trace some actions of the CMS sweeper</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FLSVerifyDictionary"></a><a href="#FLSVerifyDictionary">FLSVerifyDictionary</a></td><td>Do lots of (expensive) FLS dictionary verification</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyBlockOffsetArray"></a><a href="#VerifyBlockOffsetArray">VerifyBlockOffsetArray</a></td><td>Do (expensive!) block offset array verification</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCMSState"></a><a href="#TraceCMSState">TraceCMSState</a></td><td>Trace the state of the CMS collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSTestInFreeList"></a><a href="#CMSTestInFreeList">CMSTestInFreeList</a></td><td>Check if the coalesced range is already in the free lists as claimed.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSIgnoreResurrection"></a><a href="#CMSIgnoreResurrection">CMSIgnoreResurrection</a></td><td>Ignore object resurrection during the verification.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FullGCALot"></a><a href="#FullGCALot">FullGCALot</a></td><td>Force full gc at every Nth exit from the runtime system (N=FullGCALotInterval)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PromotionFailureALotCount"></a><a href="#PromotionFailureALotCount">PromotionFailureALotCount</a></td><td>Number of promotion failures occurring at ParGCAllocBuffer refill attempts (ParNew) or promotion attempts (other young collectors)</td><td>1000</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PromotionFailureALotInterval"></a><a href="#PromotionFailureALotInterval">PromotionFailureALotInterval</a></td><td>Total collections between promotion failures alot</td><td>5</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="WorkStealingSleepMillis"></a><a href="#WorkStealingSleepMillis">WorkStealingSleepMillis</a></td><td>Sleep time when sleep is used for yields</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="WorkStealingYieldsBeforeSleep"></a><a href="#WorkStealingYieldsBeforeSleep">WorkStealingYieldsBeforeSleep</a></td><td>Number of yields before a sleep is done during workstealing</td><td>1000</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TraceAdaptiveGCBoundary"></a><a href="#TraceAdaptiveGCBoundary">TraceAdaptiveGCBoundary</a></td><td>Trace young-old boundary moves</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="PSAdaptiveSizePolicyResizeVirtualSpaceAlot"></a><a href="#PSAdaptiveSizePolicyResizeVirtualSpaceAlot">PSAdaptiveSizePolicyResizeVirtualSpaceAlot</a></td><td>Resize the virtual spaces of the young or old generations</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="PSAdjustTenuredGenForMinorPause"></a><a href="#PSAdjustTenuredGenForMinorPause">PSAdjustTenuredGenForMinorPause</a></td><td>Adjust tenured generation to achieve a minor pause goal</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PSAdjustYoungGenForMajorPause"></a><a href="#PSAdjustYoungGenForMajorPause">PSAdjustYoungGenForMajorPause</a></td><td>Adjust young generation to achieve a major pause goal</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AdaptiveSizePolicyReadyThreshold"></a><a href="#AdaptiveSizePolicyReadyThreshold">AdaptiveSizePolicyReadyThreshold</a></td><td>Number of collections before the adaptive sizing is started</td><td>5</td><td>uintx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="AdaptiveSizePolicyGCTimeLimitThreshold"></a><a href="#AdaptiveSizePolicyGCTimeLimitThreshold">AdaptiveSizePolicyGCTimeLimitThreshold</a></td><td>Number of consecutive collections before gc time limit fires</td><td>5</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="UsePrefetchQueue"></a><a href="#UsePrefetchQueue">UsePrefetchQueue</a></td><td>Use the prefetch queue during PS promotion</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ConcGCYieldTimeout"></a><a href="#ConcGCYieldTimeout">ConcGCYieldTimeout</a></td><td>If non-zero, assert that GC threads yield within this # of ms.</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TraceReferenceGC"></a><a href="#TraceReferenceGC">TraceReferenceGC</a></td><td>Trace handling of soft/weak/final/phantom references</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceFinalizerRegistration"></a><a href="#TraceFinalizerRegistration">TraceFinalizerRegistration</a></td><td>Trace registration of final references</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceWorkGang"></a><a href="#TraceWorkGang">TraceWorkGang</a></td><td>Trace activities of work gangs</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceBlockOffsetTable"></a><a href="#TraceBlockOffsetTable">TraceBlockOffsetTable</a></td><td>Print BlockOffsetTable maps</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCardTableModRefBS"></a><a href="#TraceCardTableModRefBS">TraceCardTableModRefBS</a></td><td>Print CardTableModRefBS maps</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceGCTaskManager"></a><a href="#TraceGCTaskManager">TraceGCTaskManager</a></td><td>Trace actions of the GC task manager</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceGCTaskQueue"></a><a href="#TraceGCTaskQueue">TraceGCTaskQueue</a></td><td>Trace actions of the GC task queues</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceGCTaskThread"></a><a href="#TraceGCTaskThread">TraceGCTaskThread</a></td><td>Trace actions of the GC task threads</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceParallelOldGCMarkingPhase"></a><a href="#TraceParallelOldGCMarkingPhase">TraceParallelOldGCMarkingPhase</a></td><td>Trace parallel old gc marking phase</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceParallelOldGCSummaryPhase"></a><a href="#TraceParallelOldGCSummaryPhase">TraceParallelOldGCSummaryPhase</a></td><td>Trace parallel old gc summary phase</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="TraceParallelOldGCCompactionPhase"></a><a href="#TraceParallelOldGCCompactionPhase">TraceParallelOldGCCompactionPhase</a></td><td>Trace parallel old gc compaction phase</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceParallelOldGCDensePrefix"></a><a href="#TraceParallelOldGCDensePrefix">TraceParallelOldGCDensePrefix</a></td><td>Trace parallel old gc dense prefix computation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="IgnoreLibthreadGPFault"></a><a href="#IgnoreLibthreadGPFault">IgnoreLibthreadGPFault</a></td><td>Suppress workaround for libthread GP fault</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CIPrintCompilerName"></a><a href="#CIPrintCompilerName">CIPrintCompilerName</a></td><td>when CIPrint is active, print the name of the active compiler</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CIPrintCompileQueue"></a><a href="#CIPrintCompileQueue">CIPrintCompileQueue</a></td><td>display the contents of the compile queue whenever a compilation is enqueued</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CIPrintRequests"></a><a href="#CIPrintRequests">CIPrintRequests</a></td><td>display every request for compilation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CITimeEach"></a><a href="#CITimeEach">CITimeEach</a></td><td>display timing information after each successful compilation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CICountOSR"></a><a href="#CICountOSR">CICountOSR</a></td><td>use a separate counter when assigning ids to osr compilations</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CICompileNatives"></a><a href="#CICompileNatives">CICompileNatives</a></td><td>compile native methods if supported by the compiler</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CIPrintMethodCodes"></a><a href="#CIPrintMethodCodes">CIPrintMethodCodes</a></td><td>print method bytecodes of the compiled code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CIPrintTypeFlow"></a><a href="#CIPrintTypeFlow">CIPrintTypeFlow</a></td><td>print the results of ciTypeFlow analysis</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CITraceTypeFlow"></a><a href="#CITraceTypeFlow">CITraceTypeFlow</a></td><td>detailed per-bytecode tracing of ciTypeFlow analysis</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CICloneLoopTestLimit"></a><a href="#CICloneLoopTestLimit">CICloneLoopTestLimit</a></td><td>size limit for blocks heuristically cloned in ciTypeFlow</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseStackBanging"></a><a href="#UseStackBanging">UseStackBanging</a></td><td>use stack banging for stack overflow checks (required for proper StackOverflow handling; disable only to measure cost of stackbanging)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="Use24BitFPMode"></a><a href="#Use24BitFPMode">Use24BitFPMode</a></td><td>Set 24-bit FPU mode on a per-compile basis</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="Use24BitFP"></a><a href="#Use24BitFP">Use24BitFP</a></td><td>use FP instructions that produce 24-bit precise results</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseStrictFP"></a><a href="#UseStrictFP">UseStrictFP</a></td><td>use strict fp if modifier strictfp is set</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GenerateSynchronizationCode"></a><a href="#GenerateSynchronizationCode">GenerateSynchronizationCode</a></td><td>generate locking/unlocking code for synchronized methods and monitors</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GenerateCompilerNullChecks"></a><a href="#GenerateCompilerNullChecks">GenerateCompilerNullChecks</a></td><td>Generate explicit null checks for loads/stores/calls</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GenerateRangeChecks"></a><a href="#GenerateRangeChecks">GenerateRangeChecks</a></td><td>Generate range checks for array accesses</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintSafepointStatistics"></a><a href="#PrintSafepointStatistics">PrintSafepointStatistics</a></td><td>print statistics about safepoint synchronization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineAccessors"></a><a href="#InlineAccessors">InlineAccessors</a></td><td>inline accessor methods (get/set)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCHA"></a><a href="#UseCHA">UseCHA</a></td><td>enable CHA</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintInlining"></a><a href="#PrintInlining">PrintInlining</a></td><td>prints inlining optimizations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="EagerInitialization"></a><a href="#EagerInitialization">EagerInitialization</a></td><td>Eagerly initialize classes if possible</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceMethodReplacement"></a><a href="#TraceMethodReplacement">TraceMethodReplacement</a></td><td>Print when methods are replaced due to recompilation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMethodFlushing"></a><a href="#PrintMethodFlushing">PrintMethodFlushing</a></td><td>print the nmethods being flushed</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseRelocIndex"></a><a href="#UseRelocIndex">UseRelocIndex</a></td><td>use an index to speed random access to relocations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="StressCodeBuffers"></a><a href="#StressCodeBuffers">StressCodeBuffers</a></td><td>Exercise code buffer expansion and other rare state changes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DebugVtables"></a><a href="#DebugVtables">DebugVtables</a></td><td>add debugging code to vtable dispatch</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintVtables"></a><a href="#PrintVtables">PrintVtables</a></td><td>print vtables when printing klass</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCreateZombies"></a><a href="#TraceCreateZombies">TraceCreateZombies</a></td><td>trace creation of zombie nmethods</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="MonomorphicArrayCheck"></a><a href="#MonomorphicArrayCheck">MonomorphicArrayCheck</a></td><td>Uncommon-trap array store checks that require full type check</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DelayCompilationDuringStartup"></a><a href="#DelayCompilationDuringStartup">DelayCompilationDuringStartup</a></td><td>Delay invoking the compiler until main application class is loaded</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CompileTheWorld"></a><a href="#CompileTheWorld">CompileTheWorld</a></td><td>Compile all methods in all classes in bootstrap class path (stress test)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CompileTheWorldPreloadClasses"></a><a href="#CompileTheWorldPreloadClasses">CompileTheWorldPreloadClasses</a></td><td>Preload all classes used by a class before start loading</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceIterativeGVN"></a><a href="#TraceIterativeGVN">TraceIterativeGVN</a></td><td>Print progress during Iterative Global Value Numbering</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FillDelaySlots"></a><a href="#FillDelaySlots">FillDelaySlots</a></td><td>Fill delay slots (on SPARC only)</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyIterativeGVN"></a><a href="#VerifyIterativeGVN">VerifyIterativeGVN</a></td><td>Verify Def-Use modifications during sparse Iterative Global Value Numbering</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeLivenessAnalysis"></a><a href="#TimeLivenessAnalysis">TimeLivenessAnalysis</a></td><td>Time computation of bytecode liveness analysis</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceLivenessGen"></a><a href="#TraceLivenessGen">TraceLivenessGen</a></td><td>Trace the generation of liveness analysis information</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintDominators"></a><a href="#PrintDominators">PrintDominators</a></td><td>Print out dominator trees for GVN</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseLoopSafepoints"></a><a href="#UseLoopSafepoints">UseLoopSafepoints</a></td><td>Generate Safepoint nodes in every loop</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DeutschShiffmanExceptions"></a><a href="#DeutschShiffmanExceptions">DeutschShiffmanExceptions</a></td><td>Fast check to find exception handler for precisely typed exceptions</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FastAllocateSizeLimit"></a><a href="#FastAllocateSizeLimit">FastAllocateSizeLimit</a></td><td>Inline allocations larger than this (in doublewords) must go through the slow path</td><td>100000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseVTune"></a><a href="#UseVTune">UseVTune</a></td><td>enable support for Intel's VTune profiler</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountCompiledCalls"></a><a href="#CountCompiledCalls">CountCompiledCalls</a></td><td>counts method invocations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountJNICalls"></a><a href="#CountJNICalls">CountJNICalls</a></td><td>counts jni method invocations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ClearInterpreterLocals"></a><a href="#ClearInterpreterLocals">ClearInterpreterLocals</a></td><td>Always clear local variables of interpreter activations upon entry</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseFastSignatureHandlers"></a><a href="#UseFastSignatureHandlers">UseFastSignatureHandlers</a></td><td>Use fast signature handlers for native calls</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseV8InstrsOnly"></a><a href="#UseV8InstrsOnly">UseV8InstrsOnly</a></td><td>Use SPARC-V8 Compliant instruction subset</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseCASForSwap"></a><a href="#UseCASForSwap">UseCASForSwap</a></td><td>Do not use swap instructions, but only CAS (in a loop) on SPARC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PoisonOSREntry"></a><a href="#PoisonOSREntry">PoisonOSREntry</a></td><td>Detect abnormal calls to OSR code</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountBytecodes"></a><a href="#CountBytecodes">CountBytecodes</a></td><td>Count number of bytecodes executed</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintBytecodeHistogram"></a><a href="#PrintBytecodeHistogram">PrintBytecodeHistogram</a></td><td>Print histogram of the executed bytecodes</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintBytecodePairHistogram"></a><a href="#PrintBytecodePairHistogram">PrintBytecodePairHistogram</a></td><td>Print histogram of the executed bytecode pairs</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintSignatureHandlers"></a><a href="#PrintSignatureHandlers">PrintSignatureHandlers</a></td><td>Print code generated for native method signature handlers</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyOops"></a><a href="#VerifyOops">VerifyOops</a></td><td>Do plausibility checks for oops</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CheckUnhandledOops"></a><a href="#CheckUnhandledOops">CheckUnhandledOops</a></td><td>Check for unhandled oops in VM code</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyJNIFields"></a><a href="#VerifyJNIFields">VerifyJNIFields</a></td><td>Verify jfieldIDs for instance fields</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyFPU"></a><a href="#VerifyFPU">VerifyFPU</a></td><td>Verify FPU state (check for NaN's, etc.)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyThread"></a><a href="#VerifyThread">VerifyThread</a></td><td>Watch the thread register for corruption (SPARC only)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyActivationFrameSize"></a><a href="#VerifyActivationFrameSize">VerifyActivationFrameSize</a></td><td>Verify that activation frame didn't become smaller than its minimal size</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceFrequencyInlining"></a><a href="#TraceFrequencyInlining">TraceFrequencyInlining</a></td><td>Trace frequency based inlining</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMethodData"></a><a href="#PrintMethodData">PrintMethodData</a></td><td>Print the results of +ProfileInterpreter at end of run</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyDataPointer"></a><a href="#VerifyDataPointer">VerifyDataPointer</a></td><td>Verify the method data pointer during interpreter profiling</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCompilationPolicy"></a><a href="#TraceCompilationPolicy">TraceCompilationPolicy</a></td><td>Trace compilation policy</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeCompilationPolicy"></a><a href="#TimeCompilationPolicy">TimeCompilationPolicy</a></td><td>Time the compilation policy</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CounterHalfLifeTime"></a><a href="#CounterHalfLifeTime">CounterHalfLifeTime</a></td><td>half-life time of invocation counters (in secs)</td><td>30</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CounterDecayMinIntervalLength"></a><a href="#CounterDecayMinIntervalLength">CounterDecayMinIntervalLength</a></td><td>Min. ms. between invocation of CounterDecay</td><td>500</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TraceDeoptimization"></a><a href="#TraceDeoptimization">TraceDeoptimization</a></td><td>Trace deoptimization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DebugDeoptimization"></a><a href="#DebugDeoptimization">DebugDeoptimization</a></td><td>Trace various information while debugging deoptimization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GuaranteedSafepointInterval"></a><a href="#GuaranteedSafepointInterval">GuaranteedSafepointInterval</a></td><td>Guarantee a safepoint (at least) every so many milliseconds (0 means none)</td><td>1000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SafepointTimeoutDelay"></a><a href="#SafepointTimeoutDelay">SafepointTimeoutDelay</a></td><td>Delay in milliseconds for option SafepointTimeout</td><td>10000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MallocCatchPtr"></a><a href="#MallocCatchPtr">MallocCatchPtr</a></td><td>Hit breakpoint when mallocing/freeing this pointer</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TotalHandleAllocationLimit"></a><a href="#TotalHandleAllocationLimit">TotalHandleAllocationLimit</a></td><td>Threshold for total handle allocation when +TraceHandleAllocation is used</td><td>1024</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="StackPrintLimit"></a><a href="#StackPrintLimit">StackPrintLimit</a></td><td>number of stack frames to print in VM-level stack dump</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxInlineLevel"></a><a href="#MaxInlineLevel">MaxInlineLevel</a></td><td>maximum number of nested calls that are inlined</td><td>9</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxRecursiveInlineLevel"></a><a href="#MaxRecursiveInlineLevel">MaxRecursiveInlineLevel</a></td><td>maximum number of nested recursive calls that are inlined</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InlineSmallCode"></a><a href="#InlineSmallCode">InlineSmallCode</a></td><td>Only inline already compiled methods if their code size is less than this</td><td>1000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxTrivialSize"></a><a href="#MaxTrivialSize">MaxTrivialSize</a></td><td>maximum bytecode size of a trivial method to be inlined</td><td>6</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MinInliningThreshold"></a><a href="#MinInliningThreshold">MinInliningThreshold</a></td><td>min. invocation count a method needs to have to be inlined</td><td>250</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AlignEntryCode"></a><a href="#AlignEntryCode">AlignEntryCode</a></td><td>aligns entry code to specified value (in bytes)</td><td>4</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MethodHistogramCutoff"></a><a href="#MethodHistogramCutoff">MethodHistogramCutoff</a></td><td>cutoff value for method invoc. histogram (+CountCalls)</td><td>100</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ProfilerNumberOfInterpretedMethods"></a><a href="#ProfilerNumberOfInterpretedMethods">ProfilerNumberOfInterpretedMethods</a></td><td># of interpreted methods to show in profile</td><td>25</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ProfilerNumberOfCompiledMethods"></a><a href="#ProfilerNumberOfCompiledMethods">ProfilerNumberOfCompiledMethods</a></td><td># of compiled methods to show in profile</td><td>25</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ProfilerNumberOfStubMethods"></a><a href="#ProfilerNumberOfStubMethods">ProfilerNumberOfStubMethods</a></td><td># of stub methods to show in profile</td><td>25</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="ProfilerNumberOfRuntimeStubNodes"></a><a href="#ProfilerNumberOfRuntimeStubNodes">ProfilerNumberOfRuntimeStubNodes</a></td><td># of runtime stub nodes to show in profile</td><td>25</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="DontYieldALotInterval"></a><a href="#DontYieldALotInterval">DontYieldALotInterval</a></td><td>Interval between which yields will be dropped (milliseconds)</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MinSleepInterval"></a><a href="#MinSleepInterval">MinSleepInterval</a></td><td>Minimum sleep() interval (milliseconds) when ConvertSleepToYield is off (used for SOLARIS)</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ProfilerPCTickThreshold"></a><a href="#ProfilerPCTickThreshold">ProfilerPCTickThreshold</a></td><td>Number of ticks in a PC buckets to be a hotspot</td><td>15</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StressNonEntrant"></a><a href="#StressNonEntrant">StressNonEntrant</a></td><td>Mark nmethods non-entrant at registration</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TypeProfileWidth"></a><a href="#TypeProfileWidth">TypeProfileWidth</a></td><td>number of receiver types to record in call profile</td><td>2</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="BciProfileWidth"></a><a href="#BciProfileWidth">BciProfileWidth</a></td><td>number of return bci's to record in ret profile</td><td>2</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FreqCountInvocations"></a><a href="#FreqCountInvocations">FreqCountInvocations</a></td><td>Scaling factor for branch frequencies (deprecated)</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InlineFrequencyRatio"></a><a href="#InlineFrequencyRatio">InlineFrequencyRatio</a></td><td>Ratio of call site execution to caller method invocation</td><td>20</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InlineThrowCount"></a><a href="#InlineThrowCount">InlineThrowCount</a></td><td>Force inlining of interpreted methods that throw this often</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InlineThrowMaxSize"></a><a href="#InlineThrowMaxSize">InlineThrowMaxSize</a></td><td>Force inlining of throwing methods smaller than this</td><td>200</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="VerifyAliases"></a><a href="#VerifyAliases">VerifyAliases</a></td><td>perform extra checks on the results of alias analysis</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfilerNodeSize"></a><a href="#ProfilerNodeSize">ProfilerNodeSize</a></td><td>Size in K to allocate for the Profile Nodes of each thread</td><td>1024</td><td>intx</td></tr>
<tr valign="top"><td style="word-break:break-all;"><a href="" name="V8AtomicOperationUnderLockSpinCount"></a><a href="#V8AtomicOperationUnderLockSpinCount">V8AtomicOperationUnderLockSpinCount</a></td><td>Number of times to spin wait on a v8 atomic operation lock</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ExitAfterGCNum"></a><a href="#ExitAfterGCNum">ExitAfterGCNum</a></td><td>If non-zero, exit after this GC.</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="GCExpandToAllocateDelayMillis"></a><a href="#GCExpandToAllocateDelayMillis">GCExpandToAllocateDelayMillis</a></td><td>Delay in ms between expansion and allocation</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CodeCacheSegmentSize"></a><a href="#CodeCacheSegmentSize">CodeCacheSegmentSize</a></td><td>Code cache segment size (in bytes) - smallest unit of allocation</td><td>64</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="BinarySwitchThreshold"></a><a href="#BinarySwitchThreshold">BinarySwitchThreshold</a></td><td>Minimal number of lookupswitch entries for rewriting to binary switch</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StopInterpreterAt"></a><a href="#StopInterpreterAt">StopInterpreterAt</a></td><td>Stops interpreter execution at specified bytecode number</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TraceBytecodesAt"></a><a href="#TraceBytecodesAt">TraceBytecodesAt</a></td><td>Traces bytecodes starting with specified bytecode number</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIStart"></a><a href="#CIStart">CIStart</a></td><td>the id of the first compilation to permit</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIStop"></a><a href="#CIStop">CIStop</a></td><td>the id of the last compilation to permit</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIStartOSR"></a><a href="#CIStartOSR">CIStartOSR</a></td><td>the id of the first osr compilation to permit (CICountOSR must be on)</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIStopOSR"></a><a href="#CIStopOSR">CIStopOSR</a></td><td>the id of the last osr compilation to permit (CICountOSR must be on)</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIBreakAtOSR"></a><a href="#CIBreakAtOSR">CIBreakAtOSR</a></td><td>id of osr compilation to break at</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIBreakAt"></a><a href="#CIBreakAt">CIBreakAt</a></td><td>id of compilation to break at</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIFireOOMAt"></a><a href="#CIFireOOMAt">CIFireOOMAt</a></td><td>Fire OutOfMemoryErrors throughout CI for testing the compiler (non-negative value throws OOM after this many CI accesses in each compile)</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CIFireOOMAtDelay"></a><a href="#CIFireOOMAtDelay">CIFireOOMAtDelay</a></td><td>Wait for this many CI accesses to occur in all compiles before beginning to throw OutOfMemoryErrors in each compile</td><td>-1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="NewCodeParameter"></a><a href="#NewCodeParameter">NewCodeParameter</a></td><td>Testing Only: Create a dedicated integer parameter before putback</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MinOopMapAllocation"></a><a href="#MinOopMapAllocation">MinOopMapAllocation</a></td><td>Minimum number of OopMap entries in an OopMapSet</td><td>8</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="LongCompileThreshold"></a><a href="#LongCompileThreshold">LongCompileThreshold</a></td><td>Used with +TraceLongCompiles</td><td>50</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxRecompilationSearchLength"></a><a href="#MaxRecompilationSearchLength">MaxRecompilationSearchLength</a></td><td>max. # frames to inspect searching for recompilee</td><td>10</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxInterpretedSearchLength"></a><a href="#MaxInterpretedSearchLength">MaxInterpretedSearchLength</a></td><td>max. # interp. frames to skip when searching for recompilee</td><td>3</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="DesiredMethodLimit"></a><a href="#DesiredMethodLimit">DesiredMethodLimit</a></td><td>desired max. method size (in bytecodes) after inlining</td><td>8000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="HugeMethodLimit"></a><a href="#HugeMethodLimit">HugeMethodLimit</a></td><td>don't compile methods larger than this if +DontCompileHugeMethods</td><td>8000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseNewReflection"></a><a href="#UseNewReflection">UseNewReflection</a></td><td>Temporary flag for transition to reflection based on dynamic bytecode generation in 1.4; can no longer be turned off in 1.4 JDK, and is unneeded in 1.3 JDK, but marks most places VM changes were needed</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyReflectionBytecodes"></a><a href="#VerifyReflectionBytecodes">VerifyReflectionBytecodes</a></td><td>Force verification of 1.4 reflection bytecodes. Does not work in situations like the one described in bug 4486457, or for constructors generated for serialization, so it cannot be enabled in product builds.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FastSuperclassLimit"></a><a href="#FastSuperclassLimit">FastSuperclassLimit</a></td><td>Depth of hardwired instanceof accelerator array</td><td>8</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PerfTraceDataCreation"></a><a href="#PerfTraceDataCreation">PerfTraceDataCreation</a></td><td>Trace creation of Performance Data Entries</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PerfTraceMemOps"></a><a href="#PerfTraceMemOps">PerfTraceMemOps</a></td><td>Trace PerfMemory create/attach/detach calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SharedOptimizeColdStartPolicy"></a><a href="#SharedOptimizeColdStartPolicy">SharedOptimizeColdStartPolicy</a></td><td>Reordering policy for SharedOptimizeColdStart 0=favor classload-time locality, 1=balanced, 2=favor runtime locality</td><td>2</td><td>intx</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="product_pd">product_pd</a></td></tr>
<tr valign="top"><td><a href="" name="UseLargePages"></a><a href="#UseLargePages">UseLargePages</a></td><td>Use large page memory</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseSSE"></a><a href="#UseSSE">UseSSE</a></td><td>0=FPU stack, 1=SSE for floats, 2=SSE/SSE2 for all (x86/AMD only)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseISM"></a><a href="#UseISM">UseISM</a></td><td>Use Intimate Shared Memory. [Not accepted for non-Solaris platforms.] For details, see <a href="http://www.oracle.com/technetwork/java/ism-139376.html">Intimate Shared Memory</a>.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseMPSS"></a><a href="#UseMPSS">UseMPSS</a></td><td>Use Multiple Page Size Support with 4 MB pages for the heap. Do not use with ISM, as this replaces the need for ISM. (Introduced in 1.4.0 update 1; relevant to Solaris 9 and newer.) [1.4.1 and earlier: false]</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BackgroundCompilation"></a><a href="#BackgroundCompilation">BackgroundCompilation</a></td><td>A thread requesting compilation is not blocked during compilation</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseVectoredExceptions"></a><a href="#UseVectoredExceptions">UseVectoredExceptions</a></td><td>Temp Flag - Use Vectored Exceptions rather than SEH (Windows Only)</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DontYieldALot"></a><a href="#DontYieldALot">DontYieldALot</a></td><td>Throw away obvious excess yield calls (for SOLARIS only)</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ConvertSleepToYield"></a><a href="#ConvertSleepToYield">ConvertSleepToYield</a></td><td>Converts sleep(0) to thread yield (may be off for SOLARIS to improve GUI)</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseTLAB"></a><a href="#UseTLAB">UseTLAB</a></td><td>Use thread-local object allocation</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ResizeTLAB"></a><a href="#ResizeTLAB">ResizeTLAB</a></td><td>Dynamically resize tlab size for threads</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="NeverActAsServerClassMachine"></a><a href="#NeverActAsServerClassMachine">NeverActAsServerClassMachine</a></td><td>Never act like a server-class machine</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrefetchCopyIntervalInBytes"></a><a href="#PrefetchCopyIntervalInBytes">PrefetchCopyIntervalInBytes</a></td><td>How far ahead to prefetch destination area (&lt;= 0 means off)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PrefetchScanIntervalInBytes"></a><a href="#PrefetchScanIntervalInBytes">PrefetchScanIntervalInBytes</a></td><td>How far ahead to prefetch scan area (&lt;= 0 means off)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PrefetchFieldsAhead"></a><a href="#PrefetchFieldsAhead">PrefetchFieldsAhead</a></td><td>How many fields ahead to prefetch in oop scan (&lt;= 0 means off)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CompilationPolicyChoice"></a><a href="#CompilationPolicyChoice">CompilationPolicyChoice</a></td><td>which compilation policy</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="RewriteBytecodes"></a><a href="#RewriteBytecodes">RewriteBytecodes</a></td><td>Allow rewriting of bytecodes (bytecodes are not immutable)</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RewriteFrequentPairs"></a><a href="#RewriteFrequentPairs">RewriteFrequentPairs</a></td><td>Rewrite frequently used bytecode pairs into a single bytecode</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseOnStackReplacement"></a><a href="#UseOnStackReplacement">UseOnStackReplacement</a></td><td>Use on stack replacement, calls runtime if invoc. counter overflows in loop</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PreferInterpreterNativeStubs"></a><a href="#PreferInterpreterNativeStubs">PreferInterpreterNativeStubs</a></td><td>Always use interpreter stubs for native methods invoked via the interpreter</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AllocatePrefetchStyle"></a><a href="#AllocatePrefetchStyle">AllocatePrefetchStyle</a></td><td>0=no prefetch, 1=dead load, 2=prefetch instruction</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AllocatePrefetchDistance"></a><a href="#AllocatePrefetchDistance">AllocatePrefetchDistance</a></td><td>Distance to prefetch ahead of allocation pointer</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FreqInlineSize"></a><a href="#FreqInlineSize">FreqInlineSize</a></td><td>maximum bytecode size of a frequent method to be inlined</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="PreInflateSpin"></a><a href="#PreInflateSpin">PreInflateSpin</a></td><td>Number of times to spin wait before inflation</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="NewSize"></a><a href="#NewSize">NewSize</a></td><td>Default size of new generation (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TLABSize"></a><a href="#TLABSize">TLABSize</a></td><td>Default (or starting) size of TLAB (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="SurvivorRatio"></a><a href="#SurvivorRatio">SurvivorRatio</a></td><td>Ratio of eden/survivor space size</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="NewRatio"></a><a href="#NewRatio">NewRatio</a></td><td>Ratio of new/old generation sizes</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="NewSizeThreadIncrease"></a><a href="#NewSizeThreadIncrease">NewSizeThreadIncrease</a></td><td>Additional size added to desired new generation size per non-daemon thread (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PermSize"></a><a href="#PermSize">PermSize</a></td><td>Default size of permanent generation (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxPermSize"></a><a href="#MaxPermSize">MaxPermSize</a></td><td>Maximum size of permanent generation (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="StackYellowPages"></a><a href="#StackYellowPages">StackYellowPages</a></td><td>Number of yellow zone (recoverable overflows) pages</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StackRedPages"></a><a href="#StackRedPages">StackRedPages</a></td><td>Number of red zone (unrecoverable overflows) pages</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="StackShadowPages"></a><a href="#StackShadowPages">StackShadowPages</a></td><td>Number of shadow zone (for overflow checking) pages; this should exceed the depth of the VM and native call stack. In the HotSpot implementation, Java methods share stack frames with C/C++ native code, namely user native code and the virtual machine itself. Compiled Java methods check that stack space is available at a fixed distance towards the end of the stack, so that native code can be called without exceeding the stack space. This distance towards the end of the stack is called “Shadow Pages”. The page size is usually 4096 bytes, which means that 20 pages would occupy 80 KB. See more on this parameter in <a href="http://bugs.sun.com/view_bug.do?bug_id=7059899">bug 7059899</a> and the <a href="http://www.oracle.com/technetwork/java/javase/crashes-137240.html#gbyzz">Crash due to Stack Overflow</a> section of "Troubleshooting System Crashes" from Oracle.</td><td>Platform-dependent: e.g. 3 on x86, 6 on amd64</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ThreadStackSize"></a><a href="#ThreadStackSize">ThreadStackSize</a></td><td>Thread Stack Size (in Kbytes)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="VMThreadStackSize"></a><a href="#VMThreadStackSize">VMThreadStackSize</a></td><td>Non-Java Thread Stack Size (in Kbytes)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CompilerThreadStackSize"></a><a href="#CompilerThreadStackSize">CompilerThreadStackSize</a></td><td>Compiler Thread Stack Size (in Kbytes)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="InitialCodeCacheSize"></a><a href="#InitialCodeCacheSize">InitialCodeCacheSize</a></td><td>Initial code cache size (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="ReservedCodeCacheSize"></a><a href="#ReservedCodeCacheSize">ReservedCodeCacheSize</a></td><td>Reserved code cache size (in bytes) - maximum code cache size</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CodeCacheExpansionSize"></a><a href="#CodeCacheExpansionSize">CodeCacheExpansionSize</a></td><td>Code cache expansion size (in bytes)</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CompileThreshold"></a><a href="#CompileThreshold">CompileThreshold</a></td><td>number of method invocations/branches before (re-)compiling</td><td>10000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="Tier2CompileThreshold"></a><a href="#Tier2CompileThreshold">Tier2CompileThreshold</a></td><td>threshold at which a tier 2 compilation is invoked</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="Tier2BackEdgeThreshold"></a><a href="#Tier2BackEdgeThreshold">Tier2BackEdgeThreshold</a></td><td>Back edge threshold at which a tier 2 compilation is invoked</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="TieredCompilation"></a><a href="#TieredCompilation">TieredCompilation</a></td><td>Enable two-tier compilation</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="OnStackReplacePercentage"></a><a href="#OnStackReplacePercentage">OnStackReplacePercentage</a></td><td>number of method invocations/branches (expressed as % of CompileThreshold) before (re-)compiling OSR code</td><td></td><td>intx</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="develop_pd">develop_pd</a></td></tr>
<tr valign="top"><td><a href="" name="ShareVtableStubs"></a><a href="#ShareVtableStubs">ShareVtableStubs</a></td><td>Share vtable stubs (smaller code but worse branch prediction</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CICompileOSR"></a><a href="#CICompileOSR">CICompileOSR</a></td><td>compile on stack replacement methods if supported by the compiler</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ImplicitNullChecks"></a><a href="#ImplicitNullChecks">ImplicitNullChecks</a></td><td>generate code for implicit null checks</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UncommonNullCast"></a><a href="#UncommonNullCast">UncommonNullCast</a></td><td>Uncommon-trap NULLs passed to check cast</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineIntrinsics"></a><a href="#InlineIntrinsics">InlineIntrinsics</a></td><td>Inline intrinsics that can be statically resolved</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfileInterpreter"></a><a href="#ProfileInterpreter">ProfileInterpreter</a></td><td>Profile at the bytecode level during interpretation</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfileTraps"></a><a href="#ProfileTraps">ProfileTraps</a></td><td>Profile deoptimization traps at the bytecode level</td><td></td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="InlineFrequencyCount"></a><a href="#InlineFrequencyCount">InlineFrequencyCount</a></td><td>Count of call site execution necessary to trigger frequent inlining</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="JVMInvokeMethodSlack"></a><a href="#JVMInvokeMethodSlack">JVMInvokeMethodSlack</a></td><td>Stack space (bytes) required for JVM_InvokeMethod to complete</td><td></td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="CodeEntryAlignment"></a><a href="#CodeEntryAlignment">CodeEntryAlignment</a></td><td>Code entry alignment for generated code (in bytes)</td><td></td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CodeCacheMinBlockLength"></a><a href="#CodeCacheMinBlockLength">CodeCacheMinBlockLength</a></td><td>Minimum number of segments in a code cache block.</td><td></td><td>uintx</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="notproduct">notproduct</a></td></tr>
<tr valign="top"><td><a href="" name="StressDerivedPointers"></a><a href="#StressDerivedPointers">StressDerivedPointers</a></td><td>Force scavenge when a derived pointers is detected on stack after rtm call</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCodeBlobStacks"></a><a href="#TraceCodeBlobStacks">TraceCodeBlobStacks</a></td><td>Trace stack-walk of codeblobs</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintRewrites"></a><a href="#PrintRewrites">PrintRewrites</a></td><td>Print methods that are being rewritten</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DeoptimizeRandom"></a><a href="#DeoptimizeRandom">DeoptimizeRandom</a></td><td>deoptimize random frames on random exit from the runtime system</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZombieALot"></a><a href="#ZombieALot">ZombieALot</a></td><td>creates zombies (non-entrant) at exit from the runt. system</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="WalkStackALot"></a><a href="#WalkStackALot">WalkStackALot</a></td><td>trace stack (no print) at every exit from the runtime system</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="StrictSafepointChecks"></a><a href="#StrictSafepointChecks">StrictSafepointChecks</a></td><td>Enable strict checks that safepoints cannot happen for threads that used No_Safepoint_Verifier</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyLastFrame"></a><a href="#VerifyLastFrame">VerifyLastFrame</a></td><td>Verify oops on last frame on entry to VM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LogEvents"></a><a href="#LogEvents">LogEvents</a></td><td>Enable Event log</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CheckAssertionStatusDirectives"></a><a href="#CheckAssertionStatusDirectives">CheckAssertionStatusDirectives</a></td><td>temporary - see javaClasses.cpp</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMallocFree"></a><a href="#PrintMallocFree">PrintMallocFree</a></td><td>Trace calls to C heap malloc/free allocation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintOopAddress"></a><a href="#PrintOopAddress">PrintOopAddress</a></td><td>Always print the location of the oop</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyCodeCacheOften"></a><a href="#VerifyCodeCacheOften">VerifyCodeCacheOften</a></td><td>Verify compiled-code cache often</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapDeadLocalsOld"></a><a href="#ZapDeadLocalsOld">ZapDeadLocalsOld</a></td><td>Zap dead locals (old version, zaps all frames when entering the VM</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CheckOopishValues"></a><a href="#CheckOopishValues">CheckOopishValues</a></td><td>Warn if value contains oop ( requires ZapDeadLocals)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ZapVMHandleArea"></a><a href="#ZapVMHandleArea">ZapVMHandleArea</a></td><td>Zap freed VM handle space with 0xBCBCBCBC</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintCompilation2"></a><a href="#PrintCompilation2">PrintCompilation2</a></td><td>Print additional statistics per compilation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintAdapterHandlers"></a><a href="#PrintAdapterHandlers">PrintAdapterHandlers</a></td><td>Print code generated for i2c/c2i adapters</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintCodeCache"></a><a href="#PrintCodeCache">PrintCodeCache</a></td><td>Print the compiled_code cache when exiting</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ProfilerCheckIntervals"></a><a href="#ProfilerCheckIntervals">ProfilerCheckIntervals</a></td><td>Collect and print info on spacing of profiler ticks</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="WarnOnStalledSpinLock"></a><a href="#WarnOnStalledSpinLock">WarnOnStalledSpinLock</a></td><td>Prints warnings for stalled SpinLocks</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="PrintSystemDictionaryAtExit"></a><a href="#PrintSystemDictionaryAtExit">PrintSystemDictionaryAtExit</a></td><td>Prints the system dictionary at exit</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ValidateMarkSweep"></a><a href="#ValidateMarkSweep">ValidateMarkSweep</a></td><td>Do extra validation during MarkSweep collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="RecordMarkSweepCompaction"></a><a href="#RecordMarkSweepCompaction">RecordMarkSweepCompaction</a></td><td>Enable GC-to-GC recording and querying of compaction during MarkSweep</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceRuntimeCalls"></a><a href="#TraceRuntimeCalls">TraceRuntimeCalls</a></td><td>Trace run-time calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceJVMCalls"></a><a href="#TraceJVMCalls">TraceJVMCalls</a></td><td>Trace JVM calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceInvocationCounterOverflow"></a><a href="#TraceInvocationCounterOverflow">TraceInvocationCounterOverflow</a></td><td>Trace method invocation counter overflow</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceZapDeadLocals"></a><a href="#TraceZapDeadLocals">TraceZapDeadLocals</a></td><td>Trace zapping dead locals</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSMarkStackOverflowALot"></a><a href="#CMSMarkStackOverflowALot">CMSMarkStackOverflowALot</a></td><td>Whether we should simulate frequent marking stack / work queue" overflow</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CMSMarkStackOverflowInterval"></a><a href="#CMSMarkStackOverflowInterval">CMSMarkStackOverflowInterval</a></td><td>A per-thread `interval' counter that determines how frequently" we simulate overflow; a smaller number increases frequency</td><td>1000</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CMSVerifyReturnedBytes"></a><a href="#CMSVerifyReturnedBytes">CMSVerifyReturnedBytes</a></td><td>Check that all the garbage collected was returned to the free lists.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ScavengeALot"></a><a href="#ScavengeALot">ScavengeALot</a></td><td>Force scavenge at every Nth exit from the runtime system (N=ScavengeALotInterval)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="GCALotAtAllSafepoints"></a><a href="#GCALotAtAllSafepoints">GCALotAtAllSafepoints</a></td><td>Enforce ScavengeALot/GCALot at all potential safepoints</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PromotionFailureALot"></a><a href="#PromotionFailureALot">PromotionFailureALot</a></td><td>Use promotion failure handling on every youngest generation collection</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CheckMemoryInitialization"></a><a href="#CheckMemoryInitialization">CheckMemoryInitialization</a></td><td>Checks memory initialization</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceMarkSweep"></a><a href="#TraceMarkSweep">TraceMarkSweep</a></td><td>Trace mark sweep</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintReferenceGC"></a><a href="#PrintReferenceGC">PrintReferenceGC</a></td><td>Print times spent handling reference objects during GC (enabled only when PrintGCDetails)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceScavenge"></a><a href="#TraceScavenge">TraceScavenge</a></td><td>Trace scavenge</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeCompiler"></a><a href="#TimeCompiler">TimeCompiler</a></td><td>time the compiler</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TimeCompiler2"></a><a href="#TimeCompiler2">TimeCompiler2</a></td><td>detailed time the compiler (requires +TimeCompiler)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LogMultipleMutexLocking"></a><a href="#LogMultipleMutexLocking">LogMultipleMutexLocking</a></td><td>log locking and unlocking of mutexes (only if multiple locks are held)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintSymbolTableSizeHistogram"></a><a href="#PrintSymbolTableSizeHistogram">PrintSymbolTableSizeHistogram</a></td><td>print histogram of the symbol table</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ExitVMOnVerifyError"></a><a href="#ExitVMOnVerifyError">ExitVMOnVerifyError</a></td><td>standard exit from VM if bytecode verify error (only in debug mode)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="AbortVMOnException"></a><a href="#AbortVMOnException">AbortVMOnException</a></td><td>Call fatal if this exception is thrown. Example: java -XX:AbortVMOnException=java.lang.NullPointerException Foo</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="PrintVtableStats"></a><a href="#PrintVtableStats">PrintVtableStats</a></td><td>print vtables stats at end of run</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="IgnoreLockingAssertions"></a><a href="#IgnoreLockingAssertions">IgnoreLockingAssertions</a></td><td>disable locking assertions (for speed)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyLoopOptimizations"></a><a href="#VerifyLoopOptimizations">VerifyLoopOptimizations</a></td><td>verify major loop optimizations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CompileTheWorldIgnoreInitErrors"></a><a href="#CompileTheWorldIgnoreInitErrors">CompileTheWorldIgnoreInitErrors</a></td><td>Compile all methods although class initializer failed</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TracePhaseCCP"></a><a href="#TracePhaseCCP">TracePhaseCCP</a></td><td>Print progress during Conditional Constant Propagation</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceLivenessQuery"></a><a href="#TraceLivenessQuery">TraceLivenessQuery</a></td><td>Trace queries of liveness analysis information</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CollectIndexSetStatistics"></a><a href="#CollectIndexSetStatistics">CollectIndexSetStatistics</a></td><td>Collect information about IndexSets</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceCISCSpill"></a><a href="#TraceCISCSpill">TraceCISCSpill</a></td><td>Trace allocators use of cisc spillable instructions</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceSpilling"></a><a href="#TraceSpilling">TraceSpilling</a></td><td>Trace spilling</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountVMLocks"></a><a href="#CountVMLocks">CountVMLocks</a></td><td>counts VM internal lock attempts and contention</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountRuntimeCalls"></a><a href="#CountRuntimeCalls">CountRuntimeCalls</a></td><td>counts VM runtime calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountJVMCalls"></a><a href="#CountJVMCalls">CountJVMCalls</a></td><td>counts jvm method invocations</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CountRemovableExceptions"></a><a href="#CountRemovableExceptions">CountRemovableExceptions</a></td><td>count exceptions that could be replaced by branches due to inlining</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="ICMissHistogram"></a><a href="#ICMissHistogram">ICMissHistogram</a></td><td>produce histogram of IC misses</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintClassStatistics"></a><a href="#PrintClassStatistics">PrintClassStatistics</a></td><td>prints class statistics at end of run</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PrintMethodStatistics"></a><a href="#PrintMethodStatistics">PrintMethodStatistics</a></td><td>prints method statistics at end of run</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceOnStackReplacement"></a><a href="#TraceOnStackReplacement">TraceOnStackReplacement</a></td><td>Trace on stack replacement</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyJNIEnvThread"></a><a href="#VerifyJNIEnvThread">VerifyJNIEnvThread</a></td><td>Verify JNIEnv.thread == Thread::current() when entering VM from JNI</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="TraceTypeProfile"></a><a href="#TraceTypeProfile">TraceTypeProfile</a></td><td>Trace type profile</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="MemProfilingInterval"></a><a href="#MemProfilingInterval">MemProfilingInterval</a></td><td>Time between each invocation of the MemProfiler</td><td>500</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="AssertRepeat"></a><a href="#AssertRepeat">AssertRepeat</a></td><td>number of times to evaluate expression in assert (to estimate overhead); only works with -DUSE_REPEATED_ASSERTS</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="SuppressErrorAt"></a><a href="#SuppressErrorAt">SuppressErrorAt</a></td><td>List of assertions (file:line) to muzzle</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="HandleAllocationLimit"></a><a href="#HandleAllocationLimit">HandleAllocationLimit</a></td><td>Threshold for HandleMark allocation when +TraceHandleAllocation is used</td><td>1024</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="MaxElementPrintSize"></a><a href="#MaxElementPrintSize">MaxElementPrintSize</a></td><td>maximum number of elements to print</td><td>256</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MaxSubklassPrintSize"></a><a href="#MaxSubklassPrintSize">MaxSubklassPrintSize</a></td><td>maximum number of subklasses to print when printing klass</td><td>4</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ScavengeALotInterval"></a><a href="#ScavengeALotInterval">ScavengeALotInterval</a></td><td>Interval between which scavenge will occur with +ScavengeALot</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FullGCALotInterval"></a><a href="#FullGCALotInterval">FullGCALotInterval</a></td><td>Interval between which full gc will occur with +FullGCALot</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FullGCALotStart"></a><a href="#FullGCALotStart">FullGCALotStart</a></td><td>For which invocation to start FullGCAlot</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="FullGCALotDummies"></a><a href="#FullGCALotDummies">FullGCALotDummies</a></td><td>Dummy object allocated with +FullGCALot, forcing all objects to move</td><td>32*K</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="DeoptimizeALotInterval"></a><a href="#DeoptimizeALotInterval">DeoptimizeALotInterval</a></td><td>Number of exits until DeoptimizeALot kicks in</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ZombieALotInterval"></a><a href="#ZombieALotInterval">ZombieALotInterval</a></td><td>Number of exits until ZombieALot kicks in</td><td>5</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="ExitOnFullCodeCache"></a><a href="#ExitOnFullCodeCache">ExitOnFullCodeCache</a></td><td>Exit the VM if we fill the code cache.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CompileTheWorldStartAt"></a><a href="#CompileTheWorldStartAt">CompileTheWorldStartAt</a></td><td>First class to consider when using +CompileTheWorld</td><td>1</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="CompileTheWorldStopAt"></a><a href="#CompileTheWorldStopAt">CompileTheWorldStopAt</a></td><td>Last class to consider when using +CompileTheWorld</td><td>max_jint</td><td>intx</td></tr>
<tr bgcolor="#e0e0e0"><td colspan="4"><a href="" name="diagnostic">diagnostic</a></td></tr>
<tr valign="top"><td><a href="" name="PrintFlagsFinal"></a><a href="#PrintFlagsFinal">PrintFlagsFinal</a></td><td>Prints list of all available java paramenters. Information is displayed in 4 columns. First one is the type of parameter, second is parameter name, third is default value and the fourth is the type of the flag, i.e. product, diagnostic, C1 product (only for client JVM), C2 product (only for server JVM), etc. Available since 1.6.</td><td></td><td></td></tr>
<tr valign="top"><td><a href="" name="UnlockDiagnosticVMOptions"></a><a href="#UnlockDiagnosticVMOptions">UnlockDiagnosticVMOptions</a></td><td>Enable processing of flags relating to field diagnostics</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LogCompilation"></a><a href="#LogCompilation">LogCompilation</a></td><td>Log compilation activity in detail to hotspot.log or LogFile</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UnsyncloadClass"></a><a href="#UnsyncloadClass">UnsyncloadClass</a></td><td>Unstable: VM calls loadClass unsynchronized. Custom classloader must call VM synchronized for findClass & defineClass</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FLSVerifyAllHeapReferences"></a><a href="#FLSVerifyAllHeapReferences">FLSVerifyAllHeapReferences</a></td><td>Verify that all refs across the FLS boundary are to valid objects</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FLSVerifyLists"></a><a href="#FLSVerifyLists">FLSVerifyLists</a></td><td>Do lots of (expensive) FreeListSpace verification</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="FLSVerifyIndexTable"></a><a href="#FLSVerifyIndexTable">FLSVerifyIndexTable</a></td><td>Do lots of (expensive) FLS index table verification</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyBeforeExit"></a><a href="#VerifyBeforeExit">VerifyBeforeExit</a></td><td>Verify system before exiting</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyBeforeGC"></a><a href="#VerifyBeforeGC">VerifyBeforeGC</a></td><td>Verify memory system before GC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyAfterGC"></a><a href="#VerifyAfterGC">VerifyAfterGC</a></td><td>Verify memory system after GC</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyDuringGC"></a><a href="#VerifyDuringGC">VerifyDuringGC</a></td><td>Verify memory system during GC (between phases)</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyRememberedSets"></a><a href="#VerifyRememberedSets">VerifyRememberedSets</a></td><td>Verify GC remembered sets</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyObjectStartArray"></a><a href="#VerifyObjectStartArray">VerifyObjectStartArray</a></td><td>Verify GC object start array if verify before/after</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="BindCMSThreadToCPU"></a><a href="#BindCMSThreadToCPU">BindCMSThreadToCPU</a></td><td>Bind CMS Thread to CPU if possible</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="CPUForCMSThread"></a><a href="#CPUForCMSThread">CPUForCMSThread</a></td><td>When BindCMSThreadToCPU is true, the CPU to bind CMS thread to</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="TraceJVMTIObjectTagging"></a><a href="#TraceJVMTIObjectTagging">TraceJVMTIObjectTagging</a></td><td>Trace JVMTI object tagging calls</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="VerifyBeforeIteration"></a><a href="#VerifyBeforeIteration">VerifyBeforeIteration</a></td><td>Verify memory system before JVMTI iteration</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DebugNonSafepoints"></a><a href="#DebugNonSafepoints">DebugNonSafepoints</a></td><td>Generate extra debugging info for non-safepoints in nmethods</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SerializeVMOutput"></a><a href="#SerializeVMOutput">SerializeVMOutput</a></td><td>Use a mutex to serialize output to tty and hotspot.log</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="DisplayVMOutput"></a><a href="#DisplayVMOutput">DisplayVMOutput</a></td><td>Display all VM output on the tty, independently of LogVMOutput</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LogVMOutput"></a><a href="#LogVMOutput">LogVMOutput</a></td><td>Save VM output to hotspot.log, or to LogFile</td><td>trueInDebug</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="LogFile"></a><a href="#LogFile">LogFile</a></td><td>If LogVMOutput is on, save VM output to this file [hotspot.log]</td><td>""</td><td>ccstr</td></tr>
<tr valign="top"><td><a href="" name="MallocVerifyInterval"></a><a href="#MallocVerifyInterval">MallocVerifyInterval</a></td><td>if non-zero, verify C heap after every N calls to malloc/realloc/free</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="MallocVerifyStart"></a><a href="#MallocVerifyStart">MallocVerifyStart</a></td><td>if non-zero, start verifying C heap after Nth call to malloc/realloc/free</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="VerifyGCStartAt"></a><a href="#VerifyGCStartAt">VerifyGCStartAt</a></td><td>GC invoke count where +VerifyBefore/AfterGC kicks in</td><td>0</td><td>uintx</td></tr>
<tr valign="top"><td><a href="" name="VerifyGCLevel"></a><a href="#VerifyGCLevel">VerifyGCLevel</a></td><td>Generation level at which to start +VerifyBefore/AfterGC</td><td>0</td><td>intx</td></tr>
<tr valign="top"><td><a href="" name="UseNewCode"></a><a href="#UseNewCode">UseNewCode</a></td><td>Testing Only: Use the new version while testing</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseNewCode2"></a><a href="#UseNewCode2">UseNewCode2</a></td><td>Testing Only: Use the new version while testing</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="UseNewCode3"></a><a href="#UseNewCode3">UseNewCode3</a></td><td>Testing Only: Use the new version while testing</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SharedOptimizeColdStart"></a><a href="#SharedOptimizeColdStart">SharedOptimizeColdStart</a></td><td>At dump time, order shared objects to achieve better cold startup time.</td><td>true</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="SharedSkipVerify"></a><a href="#SharedSkipVerify">SharedSkipVerify</a></td><td>Skip assert() and verify() which page-in unwanted shared objects.</td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PauseAtStartup"></a><a href="#PauseAtStartup">PauseAtStartup</a></td><td>Causes the VM to pause at startup time and wait for the pause file to be removed (default: ./vm.paused.<pid>)</pid></td><td>false</td><td>bool</td></tr>
<tr valign="top"><td><a href="" name="PauseAtStartupFile"></a><a href="#PauseAtStartupFile">PauseAtStartupFile</a></td><td>The file to create and for whose removal to await when pausing at startup. (default: ./vm.paused.<pid>)</pid></td><td>""</td><td>ccstr</td></tr>
</tbody></table>
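Many of the flags above can be listed on a running JDK with <a href="#PrintFlagsFinal">-XX:+PrintFlagsFinal</a>. As a rough sketch (the exact column layout and available flags vary between JDK builds), the four-column output can be filtered with awk; a captured sample is embedded here so the sketch runs without a JDK:

```shell
# Filter PrintFlagsFinal-style output for stack-related flags.
# On a real machine you would instead pipe in:
#   java -XX:+PrintFlagsFinal -version
sample='     intx ThreadStackSize                = 1024            {pd product}
     intx CompilerThreadStackSize        = 0               {pd product}
    uintx ReservedCodeCacheSize          = 50331648        {pd product}'

# Columns: $1 = type, $2 = flag name, $3 = "=", $4 = value.
printf '%s\n' "$sample" | awk '/StackSize/ {print $2, $4}'
# -> ThreadStackSize 1024
# -> CompilerThreadStackSize 0
```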
<br />
<h1>
Glossary</h1>
<br />
<h2>
<a href="" name="TLAB">TLAB</a></h2>
Thread-local allocation buffer. Used to allocate heap space quickly without synchronization. Compiled code has a "fast path" of a few instructions which tries to bump a high-water mark in the current thread's TLAB, successfully allocating an object if the bumped mark falls before a TLAB-specific limit address. <a href="http://blogs.oracle.com/jonthecollector/entry/the_real_thing">Here</a> is a nice article explaining TLABs.</div>
<h1>
Integration of TeamCity 6.x and FitNesse</h1>
TeamCity is a wonderful CI and build management platform which combines great capabilities with really simple configuration. I seriously love the way it is designed, configured and managed, and would recommend it to anybody who cares about quality and wants fast feedback on build/test/integration problems. FitNesse is "the fully integrated standalone wiki, and acceptance testing framework", which I, personally, do not like, but the testers (at least the ones I know) say it is good. I still do not believe them, but I have to live with it :)<br />
<a name='more'></a><br />
The integration problem divides nicely into two pieces: one is fighting with FitNesse, the other is getting the battle's outcome into TeamCity. So let's start with FitNesse.<br />
<br />
FitNesse is not an easily integratable framework, for a few reasons. The first problem is that the test URL and the fixtures' path are usually not configurable via parameters. Fortunately, there is a quite good solution which allows passing variables to FitNesse using the "-D" Java argument. Another issue is that it runs only as an HTTP server; there is no way to execute tests other than running the server. Luckily, it can run the tests and stop itself after execution, and this behaviour can be managed via the command line. The last issue is the report produced by FitNesse: it has to be converted into something widely accepted, e.g. a JUnit report.<br />
<br />
Now let's sort all these issues out. The first step is updating the test scripts to make them accept the test URL and the path to fixtures from the command line. Assuming there is just one property defined for the URL and just one reference to the fixture path, all that needs to be done is for the wiki page to contain something like this:<br />
<br />
!define TEST_SERVER_URL {${testserver.host_port}}<br />
!path ${testserver.fixture_jar}<br />
<br />
Note the second set of curly brackets around the value of the TEST_SERVER_URL variable; it must not be missed. Once these values are defined, it is possible to pass them via the command line:<br />
<br />
$JAVA_HOME/bin/java -classpath <path_to_fixtures_jar> -Dtestserver.fixture_jar=<path_to_fixtures_jar> -Dtestserver.host_port=<test_server_url> fitnesseMain.FitNesseMain -p <fitnesse_port> -d <path_to_test_suite> -c <suite_url>\&format=xml > <report_temp_file><br />
<br />
where:<br />
<b>path_to_fixtures_jar</b> - the path to the jar with fixtures. It is referenced twice because it is assumed to contain both the FitNesse distribution and the fixtures. For me, this just looks like the most convenient way to package it.<br />
<b>test_server_url</b> - the URL of the server being tested. This will be the value of TEST_SERVER_URL. Be careful here: like many other tools written in cowboy style, FitNesse hardly has any logs, and when you try to run tests against a bad (unreachable) URL or with an invalid or unavailable port number, it will hang forever.<br />
<b>fitnesse_port</b> - now this is weird, but it is required by FitNesse: it needs a port to run on. Just choose something random which is not likely to be used by anything else. I have no idea why, but port '0' (which should just select the first available port) didn't work for me; FitNesse just hung without any messages, as it loves to do.<br />
<b>path_to_test_suite</b> - the path to the folder with the test suite. <br />
<b>suite_url</b> - the path of the suite without host and port. To identify it, run FitNesse, go to the page with the suite and hover the mouse over the "Suite" link on the left-hand side of the page. Do not miss the 'format=xml' suffix.<br />
<b>report_temp_file</b> - just a text file with the report which will be generated by FitNesse. <br />
<br />
Now that we have the report, we need to convert it into the JUnit format so that TeamCity can understand it. First, it has to be cleaned up, because it contains not just XML but also some other irrelevant stuff. I was lazy here and did it simply with the following command:<br />
<br />
grep ".*<.*>" $TEMP_FILE_TXT > $TEMP_FILE_XML<br />
<br />
Then comes the more interesting part: the report has to be converted into JUnit XML, which is done by applying the following XSLT:<br />
<pre class="prettyprint"><?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<xsl:element name="testsuite">
<xsl:attribute name="tests">
<xsl:value-of select="sum(testResults/finalCounts/*)" />
</xsl:attribute>
<xsl:attribute name="failures">
<xsl:value-of select="testResults/finalCounts/wrong" />
</xsl:attribute>
<xsl:attribute name="disabled">
<xsl:value-of select="testResults/finalCounts/ignores" />
</xsl:attribute>
<xsl:attribute name="errors">
<xsl:value-of select="testResults/finalCounts/exceptions" />
</xsl:attribute>
<xsl:attribute name="name">AcceptanceTests</xsl:attribute>
<xsl:for-each select="testResults/result">
<xsl:element name="testcase">
<xsl:attribute name="classname">
<xsl:value-of select="/testResults/rootPath" />
</xsl:attribute>
<xsl:attribute name="name">
<xsl:value-of select="relativePageName" />
</xsl:attribute>
<xsl:choose>
<xsl:when test="counts/exceptions > 0">
<xsl:element name="error">
<xsl:attribute name="message">
<xsl:value-of select="counts/exceptions" />
<xsl:text> exceptions thrown</xsl:text>
<xsl:if test="counts/wrong > 0">
<xsl:text> and </xsl:text>
<xsl:value-of select="counts/wrong" />
<xsl:text> assertions failed</xsl:text>
</xsl:if>
</xsl:attribute>
</xsl:element>
</xsl:when>
<xsl:when test="counts/wrong > 0">
<xsl:element name="failure">
<xsl:attribute name="message">
<xsl:value-of select="counts/wrong" />
<xsl:text> assertions failed</xsl:text>
</xsl:attribute>
</xsl:element>
</xsl:when>
</xsl:choose>
</xsl:element>
</xsl:for-each>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
</pre>To execute the conversion we can use a standard tool provided by the JDK. Surprisingly, it seems that not many people know about it, but it can be very handy. Here is a snippet which executes the XSLT transformation from the command line:<br />
<br />
$JAVA_HOME/bin/java com.sun.org.apache.xalan.internal.xsltc.cmdline.Compile fitnesse2junit.xslt<br />
$JAVA_HOME/bin/java com.sun.org.apache.xalan.internal.xsltc.cmdline.Transform <report_temp_file> fitnesse2junit > funct_test_res.xml<br />
<br />
The result is an XML test report in JUnit format.<br />
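As an aside, the same transformation can also be run from Java code with the standard javax.xml.transform API that ships with the JDK, in case you would rather not depend on the internal xsltc classes. This is a minimal sketch of my own, not part of the original pipeline; the class name and the string-based inputs are illustrative assumptions:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class Fitnesse2Junit {

    // Applies an XSLT stylesheet to an XML document and returns the result.
    // In the real pipeline the inputs would come from fitnesse2junit.xslt
    // and the cleaned-up FitNesse report file.
    static String transform(String xslt, String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Tiny throwaway stylesheet just to demonstrate the call.
        String xslt = "<?xml version=\"1.0\"?>"
                + "<xsl:stylesheet version=\"1.0\""
                + " xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
                + "<xsl:template match=\"/\"><ok/></xsl:template>"
                + "</xsl:stylesheet>";
        System.out.println(transform(xslt, "<testResults/>"));
    }
}
```

StreamSource also accepts a File, so pointing it at fitnesse2junit.xslt and the cleaned-up report gives the same result as the command-line run.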
<br />
That’s all for FitNesse. Now it’s TeamCity’s turn. It is really up to the developer how to do that; I will just give a brief overview and some flavour of what has to be done:<br />
<ul><li>Create a new build task which executes the script that runs FitNesse. That script should be just a collection of the command lines provided above.</li>
<li>FitNesse requires fixtures and tests, so these have to be retrieved from VCS or linked as artifact dependencies from another build. Either way, these files will be available to the script which runs FitNesse and can be added to the classpath, etc.</li>
<li>I would recommend putting properties like 'test_server_url' into 'Environment Variables' in 'Build Parameters'. Then they are accessible to the scripts.</li>
<li>To see test execution results, add a Feature which will analyse the XML with the JUnit report. That is easy: there is already a built-in Feature type in TeamCity which supports JUnit. That functionality is available from “Build Step/Add build feature”.</li>
</ul>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com3tag:blogger.com,1999:blog-4215458222833264808.post-60279173844262088352011-07-03T19:27:00.006+01:002011-08-26T16:35:50.932+01:00EhCache replication: RMI vs JGroups.<div dir="ltr" style="text-align: left;" trbidi="on"><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Recently, I was working on a product which required replicated caching. The caching provider was already decided - EhCache - and what remained was the question of transport. Which one is the best option? By the best option here I mean simply the one with better performance. The performance measurement was done between just two of the available transports - JGroups and RMI; others were not considered, sorry.</span><br />
<span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span><br />
<a name='more'></a><span class="Apple-style-span" style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">Replication was tested between two nodes. The main goal was to understand how increases in message size and in the total number of messages affect performance. Another goal was to find the point where replication performance gets really bad. The latter is not that easy, because the test used a limited amount of memory, and non-linear performance deterioration could be caused by exhaustion of free heap space.</span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; vertical-align: baseline;"><span class="Apple-style-span" style="font-family: Arial;"><span class="Apple-style-span" style="font-size: 11pt; white-space: pre-wrap;">Below are the </span><span class="Apple-style-span" style="font-size: 15px; white-space: pre-wrap;">memory</span><span class="Apple-style-span" style="font-size: 11pt; white-space: pre-wrap;"> size and software versions used to run the test:</span></span></span><br />
<ul><li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">All tests used 6GB of heap.</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span class="Apple-style-span" style="white-space: pre-wrap;">Tests were executed on EhCache v2.3.2</span></li>
<li style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; list-style-type: disc; text-decoration: none; vertical-align: baseline;"><span class="Apple-style-span" style="white-space: pre-wrap;">The JVM was Sun Java 1.6.0_21</span></li>
</ul><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The test itself is very simple. One node puts some number of elements of some size into the cache; the other node reads all these elements. The test output is the time required to read all the elements. The timer starts just after the first element is read.</span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"></span><br />
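For context, the replication wiring in such a test lives in ehcache.xml. Below is a rough sketch of the RMI variant; the factory class names are EhCache's standard ones, but the multicast address/port, cache name and sizes are illustrative assumptions, not the exact configuration used in this test:

```xml
<ehcache>
  <!-- Peers discover each other via multicast (address/port are examples). -->
  <cacheManagerPeerProviderFactory
      class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
      properties="peerDiscovery=automatic,multicastGroupAddress=230.0.0.1,multicastGroupPort=4446"/>

  <cache name="replicatedCache" maxElementsInMemory="100000" eternal="true">
    <!-- Push puts/updates to the other node over RMI. -->
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
  </cache>
</ehcache>
```

The JGroups variant is configured analogously with the JGroups peer provider and replicator factories plus a protocol stack configuration.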
<div style="background-color: transparent; font-family: 'Times New Roman'; font-size: medium; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"> </span></span></div><div style="background-color: transparent; font-family: 'Times New Roman'; font-size: medium; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">The first test creates 10000 elements on each iteration. The variable is the message size, which doubled on each iteration. On the first iteration the size is 1280 bytes, on the last one - 327680 bytes (320 KB). It means that the final iteration, with 10000 elements of 320 KB each, transferred approximately 3GB of data. 
The tests have shown that EhCache copes very well with increasing element size, and the slowdown was approximately proportional to the size of the transferred data, as can be seen on the graph:</span></span></div><div style="background-color: transparent; font-family: 'Times New Roman'; font-size: medium; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br />
</span></span></div><div class="separator" style="clear: both; font-family: Arial; font-size: 11pt; text-align: center; white-space: pre-wrap;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaQqgtLB9zxhf_uaHRInU5ayqUaMxa2GZmem9u4tXGfqGkhl4xGsTghul7bBqWV9E0FESjVMN9A1qOtzn6Viqfvig-YLtgKnJiVQq7virpfhww1VecJnK8FZYzvpTF7TUh10B2KXr9rYE_/s1600/ehcache_time_vs_packet_data_size.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaQqgtLB9zxhf_uaHRInU5ayqUaMxa2GZmem9u4tXGfqGkhl4xGsTghul7bBqWV9E0FESjVMN9A1qOtzn6Viqfvig-YLtgKnJiVQq7virpfhww1VecJnK8FZYzvpTF7TUh10B2KXr9rYE_/s640/ehcache_time_vs_packet_data_size.png" width="640" /></a></span></div><div class="separator" style="clear: both; font-family: Arial; font-size: 11pt; text-align: center; white-space: pre-wrap;"><br />
</div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"></span></span><br />
<div style="background-color: transparent; font-family: 'Times New Roman'; font-size: medium; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">Here the y-axis is the time required for the transfer in milliseconds, and the x-axis is the size of the element. Not much comment is needed: RMI definitely looks better than JGroups.</span></span></span><br />
<span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span></span></span><br />
<span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">In the second test the variable was the number of elements, while the size of each element stayed constant at 1280 bytes. As in the previous test, the number of messages was multiplied by two on each iteration, and the amount of data transferred in the final iteration was the same 3GB. The graph below shows how it went:</span></span></span></div><div class="separator" style="clear: both; font-family: Arial; font-size: 11pt; text-align: center; white-space: pre-wrap;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLiz2ynnfS7VxFIjbl1vPkRQPhbpdMeJhowE9iZNJ_ZJWG54zIr3HoCTe3Am5PPYEckZxybyLEym_SAImtbvi1MTpJduqyCB9GqehegfActTKYbCfqXkPVEhDnzmQZSGo7JVYtmWk3iBGj/s1600/ehcache_time_vs_packet_number.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLiz2ynnfS7VxFIjbl1vPkRQPhbpdMeJhowE9iZNJ_ZJWG54zIr3HoCTe3Am5PPYEckZxybyLEym_SAImtbvi1MTpJduqyCB9GqehegfActTKYbCfqXkPVEhDnzmQZSGo7JVYtmWk3iBGj/s640/ehcache_time_vs_packet_number.png" width="640" /></a></span></span></div><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="background-color: transparent; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><div style="font-family: 'Times New Roman'; font-size: medium; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">As in the previous graph, the y-axis is the time required to transfer all elements in one iteration. The x-axis is the number of elements. Again, it can be seen that RMI is the leader. I believe that JGroups hit the heap limit on the last iteration, which is why it performed so badly. 
It means that JGroups has more memory overhead per element.</span></span></span></div><div style="font-family: 'Times New Roman'; font-size: medium; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"></span></span></span></div><div style="font-family: 'Times New Roman'; font-size: medium; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">For those who do not trust my results (I wouldn’t ;) ) and want to try it yourself, here are the <a href="https://docs.google.com/leaf?id=0B_Oq4PMUt3r6MTdmMjIzYzgtNjdjMi00ODdkLTljMTEtY2UyYTQ0YTNjNjI5&hl=en_US">sources and configuration</a>.</span></span></span></div><div style="font-family: 'Times New Roman'; font-size: medium; white-space: normal;"><span 
id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;"><br />
</span></span></span></div><div style="font-family: 'Times New Roman'; font-size: medium; white-space: normal;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span id="internal-source-marker_0.4318215469829738" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline;"><span style="background-color: transparent; color: black; font-family: Arial; font-size: 11pt; font-style: normal; font-variant: normal; font-weight: normal; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">And, as a conclusion... Well, RMI and JGroups are both acceptably fast. JGroups is definitely more memory-consuming, which means one can hit problems using it with big amounts of data. RMI, on the other hand, uses TCP instead of UDP, which, </span></span></span><span class="Apple-style-span" style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">with a big number of nodes, </span><span class="Apple-style-span" style="font-family: Arial; font-size: 15px; white-space: pre-wrap;">may cause higher network load. The latter, unfortunately, is not covered by the test in any way, and the real impact is not clear. </span></div></div></div></div></div></div></div></div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com9tag:blogger.com,1999:blog-4215458222833264808.post-74554971850546072362011-01-15T15:08:00.005+00:002012-09-18T22:41:41.536+01:00jakarta regexp vs java.util.regexI have never really thought about which library to use for regular expressions; it was always <a href="http://download.oracle.com/javase/6/docs/api/java/util/regex/package-summary.html">java.util.regex</a> by default. 
But I found that some people prefer to use <a href="http://jakarta.apache.org/regexp/index.html">Jakarta Regexp</a>, and usually there is no clear reason for that choice. So, I decided to spend some time trying to find out which is better, and probably write a couple of tests. After some time spent on the Internet looking for examples of such tests, it appeared that somebody had already done all that work. Here is the link to the page with the results of his investigation:<br />
<br />
<a href="http://tusker.org/regex/regex_benchmark.html">http://tusker.org/regex/regex_benchmark.html</a><br />
<br />
The conclusion is that Jakarta Regexp doesn't look good at all. You may also notice that there are some other regexp libraries, some of which have very impressive performance and seem worth trying.<br/><br/>
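If you want a quick, informal sanity check of such numbers for java.util.regex itself, a throwaway micro-benchmark takes only a few lines. This is a sketch of mine, not a rigorous benchmark (no warm-up, single round), and the pattern and input are arbitrary examples:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexBench {

    // Compiles the pattern once, then times n passes of find() over the input.
    // Returns the elapsed time in milliseconds.
    static long timeMatches(String regex, String input, int n) {
        Pattern p = Pattern.compile(regex); // compile outside the timed loop
        long start = System.nanoTime();
        int hits = 0;
        for (int i = 0; i < n; ++i) {
            Matcher m = p.matcher(input);
            while (m.find()) {
                ++hits;
            }
        }
        long elapsed = (System.nanoTime() - start) / 1000000L;
        System.out.println("hits: " + hits); // keeps the loop from being optimized away
        return elapsed;
    }

    public static void main(String[] args) {
        long ms = timeMatches("[a-z]+@[a-z]+\\.[a-z]+",
                "contact: john@example.com, jane@test.org", 100000);
        System.out.println(ms + " ms");
    }
}
```

A proper comparison across libraries needs JIT warm-up and multiple rounds, which is exactly what the benchmark linked above does.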
UPDATE 09/2012: Thanks to comments from <a href="http://hype-free.blogspot.ro">Cd MaN</a>, there is a newer update of that benchmark: <a href="http://hype-free.blogspot.ro/2008/12/big-java-regex-shoutout.html">http://hype-free.blogspot.ro/2008/12/big-java-regex-shoutout.html</a>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-8607656431507285602010-11-30T23:13:00.006+00:002010-12-12T12:17:31.773+00:00All you need to know about QuickSortIt would be true to say that Quicksort is one of the most popular sorting algorithms. You can find it implemented in most languages and it is present in almost any core library. In Java and Go, Quicksort is the default sorting algorithm for some data types, and it is used in the C++ STL (<a href="http://en.wikipedia.org/wiki/Introsort">Introsort</a>, which is used there, begins with Quicksort). Such popularity can be explained by the fact that, on average, Quicksort is one of the fastest known sorting algorithms. Interestingly, the asymptotic complexity of Quicksort is no better than that of other algorithms like Mergesort or Heapsort. The best-case performance is O(n log n), and in the worst case it gives O(n^2). The latter, luckily, is an exceptional case for a proper implementation. Quicksort's performance comes from its main loop, which tends to make excellent use of CPU caches. Another reason for its popularity is that it doesn't need allocation of additional memory.<br /><br />Personally, Quicksort appeared to me to be one of the most complex sorting algorithms. The basic idea is pretty simple and usually takes just a few minutes to implement. But that version, of course, is not practically usable. When it comes to details and to efficiency, it gets more and more complicated.<br /><br />Quicksort was first discovered by C.A.R. Hoare in 1962 (see "Quicksort," Computer Journal 5, 1, 1962) and in the following years the algorithm slightly mutated. 
The best-known version is three-way Quicksort. The most comprehensive of the widely known ones is dual-pivot Quicksort. Both algorithms will be covered in this post.<br /><a name='more'></a><br />The Java language was used to implement all algorithms. This post does not pretend to offer an adequate performance analysis. The test data used for performance comparison is incomplete and is used just to show certain optimization techniques. Also, the algorithm implementations are not necessarily optimal. Just keep that in mind while you are reading.<br /><h1>Basics</h1><br />The basic version of Quicksort is pretty simple and can be implemented in just a few lines of code:<br /><pre class="prettyprint"><br />public static void basicQuickSort(long arr[], int beginIdx, int len) {<br /> if ( len <= 1 )<br /> return;<br /> <br /> final int endIdx = beginIdx+len-1;<br /><br /> // Pivot selection<br /> final int pivotPos = beginIdx+len/2;<br /> final long pivot = arr[pivotPos];<br /> Utils.swap(arr, pivotPos, endIdx);<br /><br /> // partitioning<br /> int p = beginIdx;<br /> for(int i = beginIdx; i != endIdx; ++i) {<br /> if ( arr[i] <= pivot ) {<br /> Utils.swap(arr, i, p++);<br /> }<br /> }<br /> Utils.swap(arr, p, endIdx);<br /><br /> // recursive call<br /> basicQuickSort(arr, beginIdx, p-beginIdx);<br /> basicQuickSort(arr, p+1, endIdx-p);<br />}<br /></pre><br />The code looks pretty simple and easily readable. Pivot selection is trivial and doesn't require any explanation. The partitioning process can be illustrated using the following figure:<br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIfCwHtHpn9HvM4vxlK7ipb-K9-Pfyeh2k2EP1axrQ453yAotlZb8yYpHyb07iT_ZsSwMWCBg5N5WOGkzJkNRz6Pl2DDEqAAMFwMDGK3U8HV-ocrcc8YYZIrzdRdZNYHYtvdt1ZARVP7g0/s1600/basic_quickSort.gif" border="0" alt="Basic Quicksort" /><br />The pointer "i" moves from the beginning to the end of the array (note that the last element of the array is skipped - we know it is the pivot). 
If the i-th element is "<= pivot", then the i-th and p-th elements are swapped and the "p" pointer is moved to the next element. When partitioning is finished, the array will look like this: <img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEidRPOBUfe4Y1xO6IkoXI3Yh14bYOva6S6oOUHuZdy4n5yDhq85UqrQemnzj8Jxo0LVfCeQZ2JKswCPpHjanJuOxe4w2KfqZgZbchEaMQQ3v_uIJOPhraTJR6QYGq65pT5B_tB9QRiDsXY_/s1600/basic1_quickSort.gif" border="0" alt="Basic Quicksort" /><br />Remember that in the code there is an element with the pivot value at the end of the array, and that element is excluded from the partitioning loop. That element is then put at the p-th position, which makes the p-th element part of the "<= pivot" area. If you need more details, have a look at <a href="http://en.wikipedia.org/wiki/Quicksort">Wikipedia</a>; there is a pretty good explanation with lots of references. I would just point out that the algorithm consists of three main sections: pivot selection, partitioning, and the recursive call to sort the partitions. 
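As an aside, the listings rely on a small Utils.swap helper which the post never shows; a minimal version consistent with those calls would be:

```java
import java.util.Arrays;

public class Utils {

    // Swaps the elements at positions i and j in place.
    public static void swap(long[] arr, int i, int j) {
        long tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }

    public static void main(String[] args) {
        long[] a = {1, 2, 3};
        swap(a, 0, 2);
        System.out.println(Arrays.toString(a)); // [3, 2, 1]
    }
}
```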
To make the separation clearer, the algorithm can be written down as:<br /><pre class="prettyprint"><br />public static void basicQuickSort(long arr[], int beginIdx, int len) {<br /> if ( len <= 1 )<br /> return;<br /> <br /> final int endIdx = beginIdx + len - 1;<br /> final int pivotIdx = getPivotIdx(arr, beginIdx, len);<br /> final long pivot = arr[pivotIdx];<br /><br /> Utils.swap(arr, pivotIdx, endIdx);<br /> int p = partition(arr, beginIdx, len, pivot);<br /> Utils.swap(arr, p, endIdx);<br /><br /> basicQuickSort(arr, beginIdx, p-beginIdx);<br /> basicQuickSort(arr, p+1, endIdx-p); <br />} <br /><br />public static int partition(long[] arr, int beginIdx, int len, long pivot) {<br /> final int endIdx = beginIdx + len - 1;<br /> int p = beginIdx;<br /> for(int i = beginIdx; i != endIdx; ++i) {<br /> if ( arr[i] <= pivot ) {<br /> Utils.swap(arr, i, p++); <br /> } <br /> } <br /> return p;<br />}<br /><br />public static int getPivotIdx(long arr[], int beginIdx, int len) {<br /> return beginIdx+len/2;<br />}<br /></pre><br />Now let's have a look at how it performs vs the Java 1.6 sort algorithm. For the test I will generate an array using the following loop:<br /><pre class="prettyprint"><br />static Random rnd = new Random();<br />private static long[] generateData() {<br /> long arr[] = new long[5000000];<br /> for(int i = 0; i != arr.length; ++i) {<br /> arr[i] = rnd.nextInt(arr.length);<br /> }<br /> return arr;<br />}<br /></pre><br />Then I ran each of JDK 6 Arrays.sort() and basicQuickSort() 30 times and took the average run time as the result. A new set of random data was generated for each run. The result of that exercise is this:<table border="1"><tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td></tr></tbody></table><br />Not that bad. Now look at what happens if the input data has more repeated elements. 
To generate that data, I just divided the nextInt() argument by 100:<br /><table border="1"> <tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td><td>arr[i]=rnd.nextInt(arr.length/100)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td><td>935ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td><td>2570ms</td></tr></tbody></table><br />Now that is very bad. Obviously, this simple algorithm doesn't behave well in such cases. It can be assumed that the problem is in the quality of the pivot. The worst possible pivot is the biggest or the smallest element of the array; in that case the algorithm has O(n^2) complexity. Ideally, the pivot should be chosen such that it splits the array into two parts of equal size, which means that the ideal pivot is the median of all values of the given array. Practically, that is not a good idea - too slow. Therefore an implementation usually uses the median of 3-5 elements. The decision on the number of elements used for the pivot can be based on the size of the partitioned array. 
The code for the pivot selection may look like this:<br /><pre class="prettyprint"><br />public static int getPivotIdx(long arr[], int beginIdx, int len) {<br /> if ( len <= 512 ) {<br /> int p1 = beginIdx;<br /> int p2 = beginIdx+(len>>>1);<br /> int p3 = beginIdx+len-1;<br /><br /> if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }<br /> if ( arr[p2] > arr[p3] ) { p2 = p3; }<br /> if ( arr[p1] > arr[p2] ) { p2 = p1; }<br /><br /> return p2;<br /> } else {<br /> int p1 = beginIdx+(len/4);<br /> int p2 = beginIdx+(len>>1);<br /> int p3 = beginIdx+(len-len/4);<br /> int p4 = beginIdx;<br /> int p5 = beginIdx+len-1;<br /><br /> if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }<br /> if ( arr[p2] > arr[p3] ) { int tmp = p2; p2 = p3; p3 = tmp; }<br /> if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }<br /> if ( arr[p3] > arr[p4] ) { int tmp = p3; p3 = p4; p4 = tmp; }<br /> if ( arr[p2] > arr[p3] ) { int tmp = p2; p2 = p3; p3 = tmp; }<br /> if ( arr[p1] > arr[p2] ) { p2 = p1; }<br /> if ( arr[p4] > arr[p5] ) { p4 = p5; }<br /> if ( arr[p3] > arr[p4] ) { p3 = p4; }<br /> if ( arr[p2] > arr[p3] ) { p3 = p2; }<br /> return p3;<br /> }<br />}<br /></pre><br />Here are the results after the improvement in the pivot selection strategy:<table border="1"><tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td><td>arr[i]=rnd.nextInt(arr.length/100)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td><td>935ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td><td>2570ms</td></tr><tr><td>basicQuickSort with 'better' pivot</td><td>1365ms</td><td>2482ms</td></tr></tbody></table><br />Unfortunately, the improvement is almost nothing. It appears that pivot selection is not the root cause of the problem. But let's keep it anyway: it doesn't harm, it even helps a little bit, and it significantly reduces the possibility of O(n^2) behaviour. The other suspect is the algorithm itself. It seems like it's not good enough. 
Obviously it doesn't perform well when the collection has repeated elements. Therefore something has to be changed.<br /><h1>Three-way partitioning</h1><br />The way to get around that problem is three-way partitioning. As a result of such partitioning, elements which are equal to the pivot are put in the middle of the array, elements which are bigger than the pivot are put on the right side of the array, and ones which are smaller on the left side, respectively.<br />Implementation of that partitioning method consists of two stages. In the first stage the array is scanned by two pointers ("i" and "j") which approach each other from opposite directions. Elements which are equal to the pivot are moved to the ends of the array:<br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCDkI04O_S4ULjDURYJAsEoiO2gJayFHvmfMCjuf15gTCNJT26yMEQrsaEb572t8vKi6FqVT_FrXEI1FkChzYHT04tXBAKDV_DQRd_ih6_bMxSx5PuoQtyX_5f7c3bgQRjmCY2rx2QAyiR/s1600/threeWay1_quickSort.gif" border="0" /><br />It can be seen that after the first stage elements which are equal to the pivot are located on the edges of the array. In the second stage these elements are moved to the middle. That is now their final position and they can be excluded from further sorting:<br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLqz5vJNBrQL7Ie7qGXQnX-dUppYApcUcMhHgutRWPpf4RpqNGSyj_CyKcrtNeiexg3ukfCMVjuGF8Wh5tlfJfSPsoNVnso00C0RUk2GvPX-sRmXPF6a3JbNek92LyT_CXkim_SngdhyphenhyphenVG/s1600/threeWay2_quickSort.gif" border="0" /><br />With such an algorithm the partitioning function gets much more complicated. 
In this implementation the result of the partitioning is the lengths of the two outer partitions:<br /><pre class="prettyprint"><br />public static long partition(long[] arr, int beginIdx, int endIdx, long pivot) {<br /> int i = beginIdx-1;<br /> int l = i;<br /> int j = endIdx+1;<br /> int r = j;<br /> while ( true ) {<br /> while(arr[++i] < pivot){}<br /> while(arr[--j] > pivot){}<br /><br /> if ( i >= j )<br /> break;<br /><br /> Utils.swap(arr, i, j);<br /> if ( arr[i] == pivot ) {<br /> Utils.swap(arr, i, ++l);<br /> }<br /> if ( arr[j] == pivot ) {<br /> Utils.swap(arr, j, --r);<br /> }<br /> }<br /> // if i == j then arr[i] == arr[j] == pivot<br /> if ( i == j ) {<br /> ++i;<br /> --j;<br /> }<br /><br /> final int lLen = j-l;<br /> final int rLen = r-i;<br /><br /> final int pLen = l-beginIdx;<br /> final int exchp = pLen > lLen ? lLen: pLen;<br /> int pidx = beginIdx;<br /> for(int s = 0; s <= exchp; ++s) {<br /> Utils.swap(arr, pidx++, j--);<br /> }<br /> final int qLen = endIdx-r;<br /> final int exchq = rLen > qLen ? qLen : rLen;<br /> int qidx = endIdx;<br /> for(int s = 0; s <= exchq; ++s) {<br /> Utils.swap(arr, qidx--, i++);<br /> }<br /><br /> return (((long)lLen)<<32)|rLen;<br />}<br /></pre><br />The pivot selection has to be changed as well, but more for convenience than anything else; the idea remains absolutely the same. 
Now it returns the actual value of the pivot, instead of an index:<br /><pre class="prettyprint"><br />public static long getPivot(long arr[], int beginIdx, int len) {<br /> if ( len <= 512 ) {<br /> long p1 = arr[beginIdx];<br /> long p2 = arr[beginIdx+(len>>1)];<br /> long p3 = arr[beginIdx+len-1];<br /><br /> return getMedian(p1, p2, p3);<br /> } else {<br /> long p1 = arr[beginIdx+(len/4)];<br /> long p2 = arr[beginIdx+(len>>1)];<br /> long p3 = arr[beginIdx+(len-len/4)];<br /> long p4 = arr[beginIdx];<br /> long p5 = arr[beginIdx+len-1];<br /><br /> return getMedian(p1, p2, p3, p4, p5);<br /> }<br />}<br /></pre><br />And here is the main method, which is slightly changed as well:<br /><pre class="prettyprint"><br />public static void threeWayQuickSort(long[] arr, int beginIdx, int len) {<br /> if ( len < 2 )<br /> return;<br /><br /> final int endIdx = beginIdx+len-1;<br /> final long pivot = getPivot(arr, beginIdx, len);<br /> final long lengths = threeWayPartitioning(arr, beginIdx, endIdx, pivot);<br /><br /> final int lLen = (int)(lengths>>32);<br /> final int rLen = (int)lengths;<br /><br /> threeWayQuickSort(arr, beginIdx, lLen);<br /> threeWayQuickSort(arr, endIdx-rLen+1, rLen);<br />}<br /></pre><br />Now let's compare it with the Java 6 sort:<table border="1"><tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td><td>arr[i]=rnd.nextInt(arr.length/100)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td><td>935ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td><td>2570ms</td></tr><tr><td>basicQuickSort with 'better' pivot</td><td>1365ms</td><td>2482ms</td></tr><tr><td>Three-way partitioning Quicksort</td><td>1330ms</td><td>829ms</td></tr></tbody></table><br />Huh, impressive! It is faster than the standard library, which, by the way, implements the same algorithm. To be honest, I was surprised when I found that it was such an easy task to beat the standard library.<br />But what about making it even faster? 
There is one trick which always helps, and it works for all sorting algorithms which work with consecutive memory. That trick is <a href="http://en.wikipedia.org/wiki/Insertion_sort">Insertion</a> sort. Although it has a high chance of O(n^2) behaviour, it appears to be very effective on small arrays and always gives some performance improvement. That is especially noticeable when the input data is not sorted and there are not many repeated elements. All you need to do is add it at the beginning of the sorting method:<br /><pre class="prettyprint"><br />public static void threeWayQuickSort(long[] arr, int beginIdx, int len) {<br /> if ( len < 2 )<br /> return;<br /><br /> if ( len < 17 ) {<br /> InsertionSort.sort(arr, beginIdx, len);<br /> return;<br /> }<br /><br /> final int endIdx = beginIdx+len-1;<br /> final long pivot = getPivot(arr, beginIdx, len);<br /> final long lengths = threeWayPartitioning(arr, beginIdx, endIdx, pivot);<br /><br /> final int lLen = (int)(lengths>>32);<br /> final int rLen = (int)lengths;<br /><br /> threeWayQuickSort(arr, beginIdx, lLen);<br /> threeWayQuickSort(arr, endIdx-rLen+1, rLen);<br />}<br /></pre><br />and run the test again:<table border="1"><tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td><td>arr[i]=rnd.nextInt(arr.length/100)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td><td>935ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td><td>2570ms</td></tr><tr><td>basicQuickSort with 'better' pivot</td><td>1365ms</td><td>2482ms</td></tr><tr><td>Three-way partitioning Quicksort</td><td>1330ms</td><td>829ms</td></tr><tr><td>Three-way partitioning Quicksort with Insertion sort</td><td>1155ms</td><td>818ms</td></tr></tbody></table><br />Now the standard library looks just awful. It seems that all is said and done. 
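The snippets above call a few helpers which the post never shows - Utils.swap(), getMedian() and InsertionSort.sort(). The following is only a plausible sketch of them; the class names and signatures are assumptions inferred from the call sites, not the author's actual code:

```java
import java.util.Arrays;

// Hypothetical helpers; signatures are inferred from the call sites above.
class Utils {
    // Swap two elements of the array in place.
    static void swap(long[] arr, int i, int j) {
        long tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}

class InsertionSort {
    // Classic insertion sort over arr[beginIdx .. beginIdx+len-1].
    static void sort(long[] arr, int beginIdx, int len) {
        for (int i = beginIdx + 1; i < beginIdx + len; ++i) {
            long key = arr[i];
            int j = i - 1;
            while (j >= beginIdx && arr[j] > key) {
                arr[j + 1] = arr[j];
                --j;
            }
            arr[j + 1] = key;
        }
    }
}

public class Helpers {
    // Median of three: the middle of the three values.
    static long getMedian(long a, long b, long c) {
        return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
    }

    // Median of five: sorting a tiny scratch array is simple and cheap enough.
    static long getMedian(long a, long b, long c, long d, long e) {
        long[] v = {a, b, c, d, e};
        Arrays.sort(v);
        return v[2];
    }

    public static void main(String[] args) {
        long[] arr = {5, 1, 4, 2, 3};
        InsertionSort.sort(arr, 0, arr.length);
        System.out.println(Arrays.toString(arr)); // prints [1, 2, 3, 4, 5]
        System.out.println(getMedian(3, 1, 2));   // prints 2
    }
}
```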
But in reality that's not the end of the story and there is something else to talk about.<br /><h1>Dual-pivot Quicksort</h1><br />Moving forward, I found that Java 7 is much more advanced: it performs much faster than the Java 6 version and outperforms all previous tests:<table border="1"><tbody><tr><td></td><td>arr[i]=rnd.nextInt(arr.length)</td><td>arr[i]=rnd.nextInt(arr.length/100)</td></tr><tr><td>Java 6 Arrays.sort</td><td>1654ms</td><td>935ms</td></tr><tr><td>Java 7 Arrays.sort</td><td>951ms</td><td>764ms</td></tr><tr><td>basicQuickSort</td><td>1431ms</td><td>2570ms</td></tr><tr><td>basicQuickSort with 'better' pivot</td><td>1365ms</td><td>2482ms</td></tr><tr><td>Three-way partitioning Quicksort</td><td>1330ms</td><td>829ms</td></tr><tr><td>Three-way partitioning Quicksort with Insertion sort</td><td>1155ms</td><td>818ms</td></tr></tbody></table><br />After several seconds of very exciting research it was found that Java 7 uses a new version of the Quicksort algorithm, discovered only in 2009 by Vladimir Yaroslavskiy and named <a href="http://gdtoolbox.com/DualPivotQuicksort.pdf">Dual-Pivot QuickSort</a>. Interestingly, after some searching on the internet, I found an algorithm called <a href="http://www.freepatentsonline.com/y2007/0088699.html">"Multiple pivot sorting"</a> which was published in 2007. It seems like a generic case of "Dual-Pivot QuickSort" where it is possible to have any number of pivots.<br />As you may notice from the name, the main difference of this algorithm is that it uses two pivots instead of one. The coding now gets even more complicated. 
The simplest version of that algorithm may look like this:<br /><pre class="prettyprint"><br />public static void dualPivotQuicksort(long arr[], int beginIdx, int len) {<br /> if ( len < 2 )<br /> return;<br /><br /> final int endIdx = beginIdx+len-1;<br /><br /> long pivot1 = arr[beginIdx];<br /> long pivot2 = arr[endIdx];<br /><br /> if ( pivot1 == pivot2 ) {<br /> final long lengths = QuickSort.threeWayPartitioning(arr, beginIdx, endIdx, pivot1);<br /> final int lLen = (int)(lengths>>32);<br /> final int rLen = (int)lengths;<br /><br /> dualPivotQuicksort(arr, beginIdx, lLen);<br /> dualPivotQuicksort(arr, endIdx-rLen+1, rLen);<br /> } else {<br /> if ( pivot1 > pivot2 ) {<br /> long tmp = pivot1;<br /> pivot1 = pivot2;<br /> pivot2 = tmp;<br /> Utils.swap(arr, beginIdx, endIdx);<br /> }<br /><br /> int l = beginIdx;<br /> int r = endIdx;<br /> int p = beginIdx;<br /><br /> while ( p <= r ) {<br /> if ( arr[p] < pivot1 ) {<br /> Utils.swap(arr, l++, p++);<br /> } else if ( arr[p] > pivot2 ) {<br /> while ( arr[r] > pivot2 && r > p ) {<br /> --r;<br /> }<br /> Utils.swap(arr, r--, p);<br /> } else {<br /> ++p;<br /> }<br /> }<br /> if ( arr[l] == pivot1 ) ++l;<br /> if ( arr[r] == pivot2 ) --r;<br /><br /> dualPivotQuicksort(arr, beginIdx, l-beginIdx);<br /> dualPivotQuicksort(arr, l, r-l+1);<br /> dualPivotQuicksort(arr, r+1, endIdx-r);<br /> }<br />}<br /></pre><br />First, the code picks two pivots. If the pivots are the same, it means we have just one pivot, and in that case we can use the three-way method for partitioning. If the pivots are different, then the partitioning process will look like this:<br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTDzZeTveOvZQq0-56nY5MVA5HwvXVhw9VbyO5RDiD6FAtT3R86Z_eJ1GiCxCEPNiwVYPG20ZwbmQdXaP9_Xaite36IUlDl4aQNtoT3re1tTXWYr0BOVGLi9m98IHnsjVFA3MKzBJr8HmK/s1600/doublePivot1_quickSort.gif" border="0" /><br />The scanning pointer "p" moves from the beginning of the array. 
If the current element is less than pivot1, it is swapped with the l-th element and the "l" pointer is moved forward. If the current element is greater than pivot2, then the r-th element is swapped with the p-th and the "r" pointer is moved to the next element backwards. It all stops when "p" becomes greater than "r". After partitioning, the array will look like this:<br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwCwXFi760U0Cug0mQrs71sqMfOICtSJr6jHsx308J05Y8a0zACfTMUnArD-cfHZ1l7hUFSeay1glCi5EB0PXqzSCZxJF93iaI4duSqa8vUB17t-X8Gmv1ZMMFurulbhO9tVTnvkOie0pV/s1600/doublePivot2_quickSort.gif" border="0" /><br />When partitioning is finished, the algorithm is called recursively for each partition.<br /><br />The reader shouldn't expect good performance from the provided code; it is not fast and performs even worse than Java 6 Arrays.sort. It was provided just to illustrate the concept.<br /><br />To be honest, I failed to make my implementation perform any better than the version from Java 7. I must admit that Yaroslavskiy did a very good job there. Therefore I do not think there is any sense in discussing my implementation here in detail.<br /><br />But if someone wants to challenge the Java 7 version, I can point in some directions for optimization. The first, which seems obvious, is pivot selection. Another easy improvement is Insertion sort at the beginning. Also, I have noticed that this algorithm is very sensitive to inlining, so it makes sense to inline Utils.swap(). As another option, you can decide to go through the middle partition and move elements equal to pivot1 or pivot2 to their final positions, which will exclude them from further sorting. I found that this is effective for relatively small (<=512 elements) arrays. You can also have a look at the source from Java 7 and try to implement some tricks from there. Be ready to spend a lot of time :)<br /><br />All in all, it can be seen that over the years sorting is getting better and better. And that statement doesn't relate only to Quicksort; other sorting algorithms are improving as well. 
As examples, consider <a href="http://en.wikipedia.org/wiki/Introsort">Introsort</a> or <a href="http://bugs.python.org/file4451/timsort.txt">Timsort</a>. However, it would be fair to say that nothing really new has been discovered in this area since the 1960s-1980s. Hopefully we will be lucky enough to see something completely new and radical in the future.<br /><br />For those who want to dig deeper, as a starting point I would suggest visiting the following links:<br /><ul><br /><li><a href="http://en.wikipedia.org/wiki/Quicksort">Quicksort Wikipedia article</a></li><br /><li><a href="http://gdtoolbox.com/DualPivotQuicksort.pdf">Dual-Pivot QuickSort</a></li><br /><li><a href="http://www.cs.princeton.edu/~rs/talks/QuicksortIsOptimal.pdf">Quicksort Is Optimal</a> presentation by Robert Sedgewick & Jon Bentley</li><br /><li><a href="http://videolectures.net/mit6046jf05_leiserson_lec04/">MIT lecture about quicksort</a></li><br /></ul>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com3tag:blogger.com,1999:blog-4215458222833264808.post-72401398204519596152010-10-11T23:08:00.057+01:002011-01-08T22:02:11.837+00:00JCaptcha, SecureRandom & performance<div>Personally, I'm not a big fan of the JCaptcha library and recently was lucky enough to find another problem there. The problem is related mostly to Linux and its implementation of java.security.SecureRandom, which is <a href="http://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom">known to be slow</a> and locks on every call to it.</div><div><br /></div><div>For some reason (I suspect that <a href="http://jcaptcha.octo.com/jira/browse/FWK-48">this</a> was the reason) JCaptcha overuses java.security.SecureRandom when it generates the background image using FunkyBackgroundGenerator. The number of calls to get the next random float can easily reach something around 100,000 per captcha image. 
It is basically called at least once for each pixel.</div><div><br /></div><div>Let's run some quick tests to see how bad that is. I have written a simple captcha engine:</div><br /><pre class="prettyprint"><br />public static class MyImageCaptchaEngine extends ListImageCaptchaEngine {<br /> protected void buildInitialFactories() {<br /> WordGenerator wgen = new RandomWordGenerator("ABCDEFGHIJKLMNOPQRSTUVWXYZ123456789");<br /> RandomRangeColorGenerator cgen = new RandomRangeColorGenerator(<br /> new int[] {0, 100},<br /> new int[] {0, 100},<br /> new int[] {0, 100});<br /> TextPaster textPaster = new RandomTextPaster(new Integer(7), new Integer(7), cgen, true);<br /><br /> BackgroundGenerator backgroundGenerator = new FunkyBackgroundGenerator(new Integer(200), new Integer(100));<br /><br /> Font[] fontsList = new Font[] {<br /> new Font("Arial", 0, 10),<br /> new Font("Tahoma", 0, 10),<br /> new Font("Verdana", 0, 10),<br /> };<br /><br /> FontGenerator fontGenerator = new RandomFontGenerator(new Integer(20), new Integer(35), fontsList);<br /><br /> WordToImage wordToImage = new ComposedWordToImage(fontGenerator, backgroundGenerator, textPaster);<br /> this.addFactory(new GimpyFactory(wgen, wordToImage));<br /> }<br />}<br /></pre>And the test itself:<div><span class="Apple-style-span" style="font-family: monospace; font-size: 13px; white-space: pre; "><br /></span></div><div><div><br /><pre class="prettyprint"><br />long begin = System.currentTimeMillis();<br />for(int i = 0; i != 100; ++i) {<br /> engine.getNextCaptcha();<br />}<br />long end = System.currentTimeMillis();<br />System.out.println("Total time is [" + (end - begin) + "]");<br /></pre>Now let's run it:</div><div><span class="Apple-style-span" style="font-family: monospace; font-size: 13px; white-space: pre; "><br /></span></div><div>Total time is [10967]</div><div><div><div><br /></div></div></div><div>OK, so what we have now is 10967ms, which, I believe, is bad. 
It can be significantly improved. I'm not a very big fan of high-quality random backgrounds, so I will replace the SecureRandom class, used by the parent of FunkyBackgroundGenerator, with the "pseudo" random Random class. Fans of high-quality random backgrounds can still use SecureRandom for seeding, though:</div><div><br /><pre class="prettyprint"><br />public static class MyFunkyBackgroundGenerator extends FunkyBackgroundGenerator {<br /> public MyFunkyBackgroundGenerator(Integer width, Integer height) {<br /> super(width, height);<br /> try {<br /> Field rndField = AbstractBackgroundGenerator.class.getDeclaredField("myRandom");<br /> rndField.setAccessible(true);<br /> rndField.set(this, new Random());<br /> }<br /> catch (Exception e) {<br /> e.printStackTrace();<br /> }<br /> }<br />}<br /></pre></div><div>I know that is a dirty hack, but as far as "myRandom" is declared with default visibility, that's the shortest way to replace it for now. And what we have now is:</div><div><div><span class="Apple-style-span" style="font-family: monospace; font-size: 13px; white-space: pre; "><br /></span></div><div>Total time is [1308]</div><div><br /></div><div>Approximately 7 times quicker. Not that bad, especially for a sample case. In a real-world application the improvement will be even more significant, because some other processes may use '/dev/(u)random' or the application itself can utilize SecureRandom for other purposes.</div></div><div><br /></div><div>That's not all. There is another bottleneck, which is the usage of Java2D for rendering captcha images. Java2D is well known for its problems with multi-threading, and the summary of these problems is that Java2D doesn't scale well. Some details can be found <a href="http://stackoverflow.com/questions/3922440/howto-make-image-generation-scalable-on-java">here</a>. 
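To give an idea of what generating a background without the Graphics2D pipeline could look like, here is a hedged sketch of writing RGB samples straight into an image raster. The class name and sizes are made up for the example; this is not JCaptcha's actual code:

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.util.Random;

// Hypothetical sketch: fill a noisy background by writing pixel samples
// directly into the raster instead of issuing Java2D drawing calls.
public class RasterBackground {
    public static BufferedImage generate(int width, int height, Random rnd) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        WritableRaster raster = img.getRaster();
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                // Bands 0..2 are R, G, B for TYPE_INT_RGB.
                raster.setSample(x, y, 0, rnd.nextInt(256));
                raster.setSample(x, y, 1, rnd.nextInt(256));
                raster.setSample(x, y, 2, rnd.nextInt(256));
            }
        }
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = generate(200, 100, new Random(42));
        System.out.println(img.getWidth() + "x" + img.getHeight()); // prints 200x100
    }
}
```

No Graphics2D object is ever obtained, so the known lock contention inside the Java2D rendering pipeline is avoided for the background itself.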
Possibly the way to fix that problem is to remove Java2D and use direct access to the image via <a href="http://download-llnw.oracle.com/javase/6/docs/api/java/awt/image/WritableRaster.html">WritableRaster</a> instead. However, it doesn't solve all the problems, as Java2D is still used for drawing text.</div></div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com2tag:blogger.com,1999:blog-4215458222833264808.post-40561312116871937962010-08-17T12:31:00.094+01:002012-07-01T21:27:54.028+01:00ConcurrentHashMap revealed<div dir="ltr" style="text-align: left;" trbidi="on">
Java 1.5 introduced some cool new concurrency stuff which is located in the java.util.concurrent package. There are lots of good things there, including the new ConcurrentHashMap class which I'm going to talk about. That class is targeted to be used in a concurrent environment and provides significant performance benefits over a synchronized version of HashMap. The Javadoc doesn't really provide many details on how that class works and why it can be better than a synchronized version of HashMap. I think that understanding these details is crucial for using that class in the right way, so I've done some research to uncover the implementation details and mechanisms used in that class.<br />
<a name='more'></a><div>
<br /></div>
<b>Map Regions</b><br />
<br />
Firstly I will repeat what is already said in the JavaDoc - the map implementation consists of regions. Each region is a hash table - an array where each element is associated with some range of hash codes and contains a linked list of entries. It means that structurally a region is very similar to a normal HashMap. All write operations have to acquire a write lock on the whole region.<br/>
All read operations on the region are performed without locking, apart from one small exception. That exception happens on an attempt to read the value of an entry when that value is "null". In that case the code suspects that the compiler could possibly have assigned the entry to the array element before calling the entry's constructor (a good example and explanation of that problem can be found <a href="http://en.wikipedia.org/wiki/Double-checked_locking">here</a>). When that happens, the code tries to acquire the write lock and performs the read inside that lock. Such a strategy guarantees that initialization of the object is completed before the reference is assigned to the array element. As stated in the comments in ConcurrentHashMap's source code, the chance that the compiler will re-order initialization and assignment in that particular case is extremely low, so the chance of a "blocking read" is negligible. <br />
Another interesting thing about a region is that it has a <a href="http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile">volatile</a> counter which counts the number of modifications of that region. That value is used in many methods of ConcurrentHashMap to verify that the region's state has not changed during execution of the current method. Basically it is used as a means of solving the <a href="http://en.wikipedia.org/wiki/ABA_problem">ABA problem</a>. It doesn't seem right to me that such a strategy is applied even to read methods; most of them are much slower than they could be without that check. There is also some locking (e.g. in the size() operation) of individual segments which looks unnecessary, as the result is not going to be accurate anyway. But, on the other hand, that class was created by Doug Lea, who is highly respected for his concurrency research, so I suspect I'm missing something...<br />
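To make the modification-counter idea concrete, here is a simplified sketch of the "try optimistically, then lock everything" pattern described above. This is my own illustration, not the JDK source, which differs in details such as the number of retries:

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy stand-in for a ConcurrentHashMap segment.
class Segment extends ReentrantLock {
    volatile int count;     // number of entries in this segment
    volatile int modCount;  // bumped on every structural modification
}

public class OptimisticSize {
    static int size(Segment[] segments) {
        // Optimistic passes: sum the counts, then re-check that no
        // segment's modCount changed while we were summing.
        for (int attempt = 0; attempt < 2; ++attempt) {
            int[] mc = new int[segments.length];
            int sum = 0;
            for (int i = 0; i < segments.length; ++i) {
                mc[i] = segments[i].modCount;
                sum += segments[i].count;
            }
            boolean stable = true;
            for (int i = 0; i < segments.length; ++i) {
                if (mc[i] != segments[i].modCount) { stable = false; break; }
            }
            if (stable) return sum;
        }
        // Fall back: lock every segment and sum under the locks.
        for (Segment s : segments) s.lock();
        try {
            int sum = 0;
            for (Segment s : segments) sum += s.count;
            return sum;
        } finally {
            for (Segment s : segments) s.unlock();
        }
    }

    public static void main(String[] args) {
        Segment[] segs = { new Segment(), new Segment() };
        segs[0].count = 3;
        segs[1].count = 4;
        System.out.println(size(segs)); // prints 7
    }
}
```

In the uncontended case the method never takes a lock, which is exactly why the result is only a best-effort snapshot.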
<br />
<b>Iteration through ConcurrentHashMap</b><br />
<div>
<br />
Some other stuff which may raise questions is the relationship between the collection and enumeration objects returned by “keySet()”, “values()”, “entrySet()”, “keys()” and “elements()” and the ConcurrentHashMap instance which created them. Implementation of all these objects is based on the internal class HashIterator, and the noticeable thing here is that an instance of that class holds some information about the state of the ConcurrentHashMap at the moment of its (the HashIterator instance's) creation. As an example, there is a direct reference to the segment’s data array, and by the moment the HashIterator instance is actually used, the given segment could have already decided to create a new array, because the old one appeared to be too small. It means that the data returned by these collections and enumerations may not reflect changes in the instance of ConcurrentHashMap which happened after the given collection or enumeration was created.<br />
Performance considerations for these objects are exactly the same as for ConcurrentHashMap’s methods - read methods are non-blocking, and for update methods the lock is acquired at the segment level. An interesting, but not really practically useful, fact is that HashIterator isn’t as smart as ConcurrentHashMap itself and doesn’t lock when an entry’s value is null. So in theory it is possible that an iterator created by the “elements()” method will return a null value from “next()” or “nextElement()”.<br />
<br />
<b>Building the map</b></div>
<div>
<br />
To build a map properly you have to understand what the constructor’s arguments mean. Here is a brief explanation in the light of the things written above:<br />
<ul>
<li>Default initial capacity. That parameter is used to calculate the initial capacity of the segments. The capacity of each segment is the nearest power of two bigger than the provided initial capacity divided by the number of regions (see Default concurrency level). E.g. if the initial capacity is 1000 and the concurrency level is 3, then the number of regions is 4 and the capacity of each region is 256 (which is the nearest power of two bigger than 1000/4). The default value of that argument is 16. </li>
<li>Default load factor. It is used to identify the moment when the region has to be resized (see the section about memory consumption below). No magic, used as is. The default value of that argument is 0.75. </li>
<li>Default concurrency level. Defines the number of regions which will be created. The exact number of regions is the nearest power of two bigger than the provided concurrency level. The concurrency level is always less than 2^16. E.g. if the concurrency level is 18, the number of regions will be 32. The default value of that argument is 16. Keep in mind that more regions (better concurrency) means higher memory consumption.</li>
</ul>
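The sizing rules in the list above can be sketched as follows. This is my reading of the described behaviour, not code copied from the JDK, so treat the edge cases as assumptions:

```java
// Sketch of the segment sizing rules described in the list above.
public class CHMSizing {
    // Number of segments: the nearest power of two >= concurrencyLevel
    // (capped at 2^16 in the real implementation).
    static int segmentCount(int concurrencyLevel) {
        int ssize = 1;
        while (ssize < concurrencyLevel) {
            ssize <<= 1;
        }
        return ssize;
    }

    // Per-segment capacity: the nearest power of two >= the initial
    // capacity divided (rounding up) across the segments.
    static int segmentCapacity(int initialCapacity, int segments) {
        int c = initialCapacity / segments;
        if (c * segments < initialCapacity) {
            ++c; // round up so the segments cover the whole capacity
        }
        int cap = 1;
        while (cap < c) {
            cap <<= 1;
        }
        return cap;
    }

    public static void main(String[] args) {
        System.out.println(segmentCount(18));         // prints 32
        System.out.println(segmentCapacity(1000, 4)); // prints 256
    }
}
```

Both examples from the list check out: concurrency level 3 yields 4 regions, and an initial capacity of 1000 over 4 regions yields segments of 256.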
<div>
And, probably, the last bit - memory consumption. In these terms ConcurrentHashMap is approximately the same as HashMap, but multiplied by the number of regions. A region grows at the moment when the number of elements becomes equal to the loadFactor*currentSizeOfRegion value. The new region size is always twice as big as the previous one; the maximum size of a region is 2^30.</div>
<div>
<br /></div>
<b>Migration to ConcurrentHashMap</b></div>
<div>
<br />
Another possible question which can be raised is the migration from a synchronized version of HashMap to ConcurrentHashMap. There are several things which have to be evaluated during the migration process:<br />
<ul>
<li>Some methods can use the synchronized map object for synchronization purposes. If someone acquires the lock on the object, all of its methods are locked as well. That’s the biggest one. There is no simple way to achieve such behaviour with ConcurrentHashMap. </li>
<li>ConcurrentHashMap’s size method is slow. In the worst case it spins a couple of times and then gets the lock on all regions. This is much-much-much slower than the HashMap implementation, which just returns the value of the “size” field. </li>
<li>The iterator of ConcurrentHashMap does not throw ConcurrentModificationException. I do not see how this can cause a problem, but just keep it in mind. </li>
<li>For the same number of elements, a ConcurrentHashMap object requires more memory than a HashMap object (see the “Building the map” section).</li>
</ul>
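The first point in the list above can be illustrated with a short sketch. The code is hypothetical and uses ConcurrentHashMap's own atomic putIfAbsent() as the replacement for client-side locking:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the client-side locking gotcha during migration.
public class ClientSideLocking {
    public static void main(String[] args) {
        Map<String, Integer> sync = Collections.synchronizedMap(new HashMap<String, Integer>());
        // A compound check-then-act is made atomic by locking the map object
        // itself; every other method of 'sync' blocks while this lock is held.
        synchronized (sync) {
            if (!sync.containsKey("a")) {
                sync.put("a", 1);
            }
        }

        ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<String, Integer>();
        // synchronized(chm) would NOT stop other threads from mutating chm,
        // because its methods never lock on the map object. Use the map's own
        // atomic operations for compound actions instead.
        chm.putIfAbsent("a", 1);

        System.out.println(sync.get("a") + " " + chm.get("a")); // prints 1 1
    }
}
```

If the old code relied on holding the map's monitor across several calls, there is no drop-in equivalent; the logic has to be rewritten around the atomic methods.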
<div>
<div>
<br /></div>
<div>
All in all, ConcurrentHashMap is a brilliant container. In multi-threaded environments it will have much bigger throughput than a synchronized version of HashMap on read and write operations. Such improvement is the result of the absence of locking on reads and the use of region-based locking on write operations. There is still a small chance that a segment will be locked for a very short time on read, but the chance is so small that it can just be ignored. Write operations lock just one region; other regions can be updated at the same time.</div>
</div>
<div>
<br /></div>
<div>
The price for all these improvements is an increase in memory usage and some degradation of performance on read operations. More memory is used because each region takes approximately the same amount of memory as one HashMap. The performance drop on accessing elements is because of the <a href="http://en.wikipedia.org/wiki/ABA_problem">ABA checks</a>. Write operations do not seem to introduce any performance degradation, apart from the use of locking.</div>
</div>
</div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com13tag:blogger.com,1999:blog-4215458222833264808.post-19774892249950577352010-05-25T15:40:00.063+01:002010-05-28T21:51:58.744+01:00Some hints for writing secure code<div class="MsoNormal"><span class="apple-style-span"><span style=" ;font-family:Georgia;color:black;">Security and data protection are becoming more and more popular topics. We are coming into a world where too much information is transferred/used/processed by computer systems, and any leak of that information can cause big trouble. Thus, it is very important for an application to protect customer information as much as it can and not allow it to spread out.</span></span><span style=" ;font-family:Georgia;color:black;"><br /><br /><span class="apple-style-span">There are many aspects of application security and these cover processes, architecture, infrastructure, code, etc. The whole topic is extremely big and versatile and there are some</span><span class="apple-converted-space"> </span><span class="apple-style-span"><a href="http://books.google.co.uk/books?q=application+security&oq=application+se">books</a> written to cover all its possible facets. I will touch just a small piece which is in the area of the developer's responsibility - code and application architecture. Also, I assume that the reader mostly works on web applications implemented on Java or a similar platform.</span></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;"><span class="apple-style-span"></span></span></div><a name='more'></a><br /><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">I disagree that creation of secure code is hard work; I would say it's just a question of some knowledge and discipline. Here are several guidelines which a developer has to follow to cover most application security vulnerabilities. 
Of course, the list is not complete and mostly covers just the protection of customers' data.<o:p></o:p></span></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b><span style=" ;font-family:Georgia;color:black;">Always hash user passwords</span></b></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">The worst thing you can do is not hash user passwords and store them as clear text. A less bad thing is to encode them, but it’s still naughty. You do not need to know your customers’ passwords - the less you know, the better you sleep.<o:p></o:p></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">I can't imagine a scenario where an application has to have access to user passwords, but I can easily imagine what would happen if someone<span class="apple-converted-space"> </span><a href="http://www.nytimes.com/external/readwriteweb/2009/12/16/16readwriteweb-rockyou-hacker-30-of-sites-store-plain-text-13200.html">gets access to such a treasure</a>. Thus, a user's password must be hashed, preferably using a<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Salted_hash">salted hash</a><span class="apple-converted-space"> </span>and a good hash function like<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/SHA-2">SHA-2</a>.<o:p></o:p></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">And always be suspicious about websites which are able to send you your password by mail.<o:p></o:p></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;"><br /><b>Always encrypt sensitive data</b></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">If an application operates with sensitive data, e.g. a password for connecting to legacy backend systems or credit card numbers, that data mustn't be in clear text and must be encrypted. 
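The salted-hash advice above can be sketched like this. It is a minimal illustration only; a production system would add key stretching (e.g. PBKDF2 with many iterations) on top of the plain digest shown here:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

// Minimal sketch of salted password hashing; class and method names are
// made up for the example.
public class PasswordHashing {
    // Fresh random salt per user, generated with a cryptographic RNG.
    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Store hash(password, salt) together with the salt; never the password.
    static byte[] hash(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt); // mix the salt in before the password
            return md.digest(password.getBytes("UTF-8"));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] salt = newSalt();
        byte[] h1 = hash("secret", salt);
        byte[] h2 = hash("secret", salt);
        // Same password + same salt => same hash; a new salt gives a new hash.
        System.out.println(java.util.Arrays.equals(h1, h2)); // prints true
        System.out.println(h1.length); // prints 32
    }
}
```

Because every user gets a unique salt, identical passwords produce different stored hashes, which defeats precomputed rainbow-table attacks.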
Most of the data has to be encrypted only in storage, but some also at runtime. The chance that someone will get access to your memory dump is really, really small, but for very critical data it shouldn't be ignored.<o:p></o:p></span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">Of course, encryption has to be done with a proper cipher;<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard_process">AES</a><span class="apple-converted-space"> </span>should be fine. The only problem is that to encrypt something you need to have a secret, which itself has to stay unencrypted, and if an attacker gets it, the whole system will be compromised. So that secret has to be protected properly, e.g. with the help of an<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Hardware_Security_Module">HSM</a>, or at least it has to be protected at the OS level with proper permissions.<o:p></o:p></span></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><b><span style=" ;font-family:Georgia;color:black;">Logging</span></b></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;">Logs mustn’t contain user-related or any other type of sensitive information. Examples can be credit card details, bank details, personal messages, etc. Be very careful with the “toString” implementation of classes which contain sensitive information. 
Always keep in mind that logs can fall into somebody's dirty hands.</span></div><div class="MsoNormal"><span style=" ;font-family:Georgia;color:black;"><br /></span></div><div class="MsoNormal"><span class="Apple-style-span" style="font-family:Georgia;"><b>Always use strong ciphers and hashes</b></span></div><div class="MsoNormal" style="margin-bottom: 12.0pt;"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">There is not much sense in using a hash or cipher if it can be easily compromised. Thus, the application has to use the best available set. Another thing the developer has to remember is that even the best things will become trash in the face of tomorrow. It means there must be a way to change the algorithm in the future. For instance, a hashed password may carry a prefix with the algorithm name at the beginning, so as soon as a new algorithm is available it can be used. Here is an example of such a hash:</span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><i>{SHA-1}:pOtmGeFyIsThHT6LfNE846FJAWxjmLiR</i></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">Of course, old passwords will remain the same, but it’s better than nothing. 
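The algorithm-prefix idea can be sketched like this (a hypothetical VersionedHash class; salting is omitted here for brevity):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Sketch: store the algorithm name with the hash so the algorithm can be upgraded later.
public class VersionedHash {
    private static final String CURRENT = "SHA-256"; // today's preferred algorithm

    public static String create(String value) {
        return "{" + CURRENT + "}:" + digest(CURRENT, value);
    }

    // Verify using whichever algorithm the stored prefix names, old or new.
    public static boolean matches(String value, String stored) {
        String algorithm = stored.substring(1, stored.indexOf("}:"));
        return stored.equals("{" + algorithm + "}:" + digest(algorithm, value));
    }

    private static String digest(String algorithm, String value) {
        try {
            byte[] raw = MessageDigest.getInstance(algorithm)
                    .digest(value.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(raw);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

New records are written with the current algorithm, while old records keep whatever prefix they were created with and still verify.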
A similar principle applies to encrypted values, with the only difference that encrypted values can be recovered and re-encrypted.</span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">An acceptable choice of algorithms at this moment (05.2010) is<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/SHA-2">SHA-2</a><span class="apple-converted-space"> </span>for hashing and<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Advanced_Encryption_Standard">AES<span class="apple-converted-space"> </span></a>for symmetric cryptography. There are not many choices of asymmetric algorithms;<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/RSA">RSA</a><span class="apple-converted-space"> </span>is the best known and most widely used example. 
Apart from the algorithm, give a thought to the hash/key size - bigger is better for security.</span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"></span></b></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;">Use Prepared Statement</span></b></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">That's just a must. Always use PreparedStatement or an equivalent and never build SQL statements by concatenating arguments, which with high probability will be the cause of a<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/SQL_injection">SQL injection</a><span class="apple-converted-space"> </span>problem. 
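A sketch of what parameter binding looks like with JDBC (the users table and email column are made up for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of a DAO that binds values through placeholders instead of concatenation.
public class UserDao {
    // The value is bound via "?", never spliced into the SQL text itself.
    static final String FIND_BY_EMAIL = "SELECT 1 FROM users WHERE email = ?";

    private final Connection connection;

    public UserDao(Connection connection) {
        this.connection = connection;
    }

    public boolean userExists(String email) throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement(FIND_BY_EMAIL)) {
            // The driver sends the value separately from the statement, so even
            // a crafted input like "'; DROP TABLE users; --" stays an ordinary string.
            statement.setString(1, email);
            try (ResultSet result = statement.executeQuery()) {
                return result.next();
            }
        }
    }
}
```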
Let the database driver think about that problem.</span></span></b></span></span></span></b></span></span></b><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><br /></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b>Hide implementation details from the user</b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;">It doesn't matter who your users are, another system or a real person; you have to tell them as little as you can about your system. That knowledge can help an attacker recognize the products you use or give some information about how your system is built. All of it can give some idea of how the application can be hacked. 
For example, printing exception stack trace into browser window, which is happening on some websites, provides almost all information about libraries, used products, architecture, etc.</span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b>Validate user input</b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" 
style="font-weight: normal;">Anything which comes externally must be validated. Any value has to have boundaries. The most convenient way to achieve that is using regular expressions. This protects against<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Cross-site_scripting">XSS</a><span class="apple-converted-space"> </span>and related problems.</span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b>Shield output</b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" 
;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;">Do not trust to anything and your own system is not exception. Before output any data to customer, it has to be shielded to remove any invalid characters. And again it can be base for <a href="http://en.wikipedia.org/wiki/Cross-site_scripting">XSS</a><span class="apple-converted-space"> </span>and related attacks.</span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span 
class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b>Re-create session after authentication</b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;">It is possible that session id was<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Session_hijacking">compromised</a><span class="apple-converted-space"> </span>or used in<span class="apple-converted-space"> </span><a href="http://en.wikipedia.org/wiki/Session_fixation">session fixation</a><span class="apple-converted-space"> </span>attack. 
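The re-generation step can be sketched with a hypothetical in-memory session store; in the Servlet API, calling HttpSession.invalidate() and then request.getSession(true) achieves the same effect:

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

// Hypothetical session store illustrating id regeneration after authentication.
public class SessionStore {
    private final Map<String, Map<String, Object>> sessions = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    public String newSessionId() {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes); // ids must be unpredictable
        StringBuilder id = new StringBuilder();
        for (byte b : bytes) {
            id.append(String.format("%02x", b));
        }
        sessions.put(id.toString(), new HashMap<>());
        return id.toString();
    }

    // After successful login: move attributes to a fresh id and drop the old one,
    // so a fixated or stolen pre-login id no longer maps to the authenticated session.
    public String regenerateAfterLogin(String oldId) {
        Map<String, Object> attributes = sessions.remove(oldId);
        String newId = newSessionId();
        if (attributes != null) {
            sessions.get(newId).putAll(attributes);
        }
        return newId;
    }

    public boolean isValid(String id) {
        return sessions.containsKey(id);
    }
}
```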
Thus, using the same session id after authentication will open the system to attacker, so it has to be re-generated.</span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"></span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;">Use 
authorization</span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">You should explicitly authorize users on every layer; otherwise there is too much room for the <a href="http://www.xiom.com/whid-list/Insufficient%20Authorization">holes</a><span class="apple-converted-space"> </span>in the application. 
That is specifically important for web applications where sometimes all authorization is based on just URL principle, which is dangerous practice.</span></span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"></span></b></span></span></span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: 
normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;">Use auditing</span></b></span></span></span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b><br /><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style=" font-weight: normal;font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><b><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><b><span style=" ;font-family:Georgia;color:black;"><span class="Apple-style-span" style="font-weight: normal;">It would be funny if someone attacked your system and you had no tracks of it. Therefore, the developer has to have some sort of log for recording activities performed by the system. 
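A minimal sketch of such an audit record, kept separate from debug logging (the field layout is purely illustrative):

```java
import java.time.Instant;

// Hypothetical audit-event record: one line per security-relevant action,
// easy to ship to a separate, append-only audit store.
public class AuditEvent {
    private final Instant timestamp;
    private final String user;
    private final String action;
    private final String outcome;

    public AuditEvent(String user, String action, String outcome) {
        this.timestamp = Instant.now();
        this.user = user;
        this.action = action;
        this.outcome = outcome;
    }

    // Pipe-separated layout keeps the record trivially parseable.
    public String toRecord() {
        return timestamp + "|" + user + "|" + action + "|" + outcome;
    }
}
```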
These logs shouldn’t be confused with logs used for debugging and troubleshooting; the latter contain debug information which helps the developer find and fix bugs (although they can contain relevant information and be used for tracking an attacker's actions as well). Audit logs, on the other hand, contain events raised when performing certain activities, e.g. login attempts, payments, etc.</span></span></b></span></span></span></b></span></span></b></span></b></span></b></span></b></span></span></b></span></span></span></b></span></span></b></div><div class="MsoNormal"><br /></div><div class="MsoNormal"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">In conclusion, I would remind you that this is not a complete list; there are many more things you can do to protect your application. For those who are interested in making their applications more secure, I would recommend the following links:</span></div><div class="MsoNormal"></div><ul><li><a href="https://www.pcisecuritystandards.org/index.shtml"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">PCI DSS</span></a><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">. This is more about payments and protecting card details, but do not forget that card details are just another piece of information.</span></li><li><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><a href="http://en.wikipedia.org/wiki/Application_security"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">Wikipedia application security page</span></a><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">. 
Has lots of good links for further reading.</span></span></span></li><li><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;"><span class="Apple-style-span" style="font-family:'Times New Roman';"><a href="http://www.owasp.org/index.php/Main_Page"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">OWASP website</span></a><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">. Lots of information about </span><span style="color:black;"><span class="Apple-style-span" style="font-family:Georgia, 'Times New Roman', serif;">vulnerabilities and how to avoid them and about web application security in general.</span></span></span></span></span></span></li></ul>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com0tag:blogger.com,1999:blog-4215458222833264808.post-17835559644391414302010-04-05T20:17:00.044+01:002010-04-29T22:40:58.415+01:00Killing the private (and protected)<div class="MsoNormal">Recently on one forum I found a topic which brought back memories of my C++ days. It was a topic about using the "private" modifier on methods - in which cases to use it and why. Everybody knows that when you are programming in C++ (as the most glaring example), one of the "good practices" is "the less the client can do, the better". Of course, that's not in the sense of functionality provided to the client, but of things which are not explicitly provided. 
In C++, I believe, the main reason for that is the fragility of the runtime, which can easily be killed if someone accidentally does something wrong, which can also cause physical damage to the person responsible for doing it (worth mentioning, in Java the situation is slightly different: it's hard, but still possible of course, to kill the application by "accidental coding"). And a consequence of such “good practice” is the basic rule – “make private as much as you can”.</div><a name='more'></a><br />
<div class="MsoNormal">That rule does a good job of protecting an object’s invariants and supporting encapsulation, but it has some disadvantages as well. First, it indicates that the author most probably has never unit-tested that method and, second, which is more important for the topic, the client of such code loses the freedom to modify the library if he needs to (btw, some other techniques, like <a href="http://www.ibm.com/developerworks/java/library/j-jtp1029.html">final</a> classes, static members and the “default” access modifier, do the same thing). Mostly it affects developers who use languages similar to Java. The way a Java developer works assumes that he spends lots of time in libraries’ code – understanding how they work, looking for possible ways of customization, etc. Most IDEs nowadays are brilliant and I feel no difference in browsing my own code or library code. Moreover, if I don’t have the source code, the IDE will kindly decompile the class for me. As a consequence of such close interaction with library code, the situation when you want to fix/change/hack something in a library is not very rare. For me it happens at least several times per year. And one of the worst things I can see in that case is that the method I want to “modify” is private and I can’t change its behaviour by inheritance or by implementing another strategy. In that situation the author of the library has decided what is safe for me and what is not. I appreciate that, but would prefer to make the decision myself.</div><div class="MsoNormal"><o:p>Some may say that with the loss of the “private” modifier we are losing the benefits of encapsulation. I wouldn't be so sure about it. The same effect can easily be achieved using other methods, like a combination of pure Interfaces and Factories. And it works even better – the client will never know an instance of which class he is using; all he knows is just an interface. 
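The interfaces-plus-factories combination can be sketched like this (all names here are made up for illustration):

```java
// Clients see only the interface; the concrete class stays package-private,
// so nothing needs to be hidden behind "private" methods on a public type.
interface Greeter {
    String greet(String name);
}

// Package-private implementation: invisible outside this package.
class DefaultGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The factory is the only public entry point; the client never
// learns the concrete type, so encapsulation is preserved.
public class Greeters {
    public static Greeter create() {
        return new DefaultGreeter();
    }
}
```

Swapping DefaultGreeter for another implementation, or letting the client supply its own, requires no change on the caller's side.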
</o:p></div><div class="MsoNormal">The real problem of removing “private” is that we need to use something instead. The closest candidate is “protected” (I will not cover the “default” and other modifiers, for simplicity). The problem with “protected” is that such a method is considered part of the interface. Commonly it is used as a technique for implementing the Template Method pattern. But the same effect can be achieved using Strategy, which provides less coupling, is more flexible in implementation, etc (see <a href="http://staff.cs.utu.fi/~jounsmed/doos_06/material/TemplateAndStrategy.pdf">here</a>), and as a result you will have a class with just “public” methods.</div><div class="MsoNormal"><o:p>All in all, I suppose that the ideal approach is to build systems based on interfaces and appropriate patterns, which will exclude most of the need for the “private” and “protected” method modifiers. In that case, actually, I do not even worry about them <span style="font-family: Wingdings;">J</span> But the ideal case usually never happens. As a general approach for other cases, I would suggest avoiding private as much as you can and using appropriate patterns instead. Also, it’s good to avoid protected as well, but if you can’t, then state clearly that these methods are not part of the interface and that if the client is going to override them, he is doing it at his own risk.</o:p></div>Anonymoushttp://www.blogger.com/profile/08810736345204674453noreply@blogger.com6tag:blogger.com,1999:blog-4215458222833264808.post-12380555950551405172010-03-31T22:37:00.002+01:002010-03-31T22:44:28.432+01:00Public key infrastructure<div class="MsoNormal">Some time ago I was asked to create a presentation for my colleagues describing Public Key Infrastructure, its components, functions, how it generally works, etc. To create that presentation, I collected some material on the topic, and it would be just a waste to throw it out. 
That presentation wasn’t technical at all, and this post is not going to be technical either. It gives just the concept, a high-level picture, which, I believe, can be good base knowledge before starting to look at the details.</div><a name='more'></a><br />
<div class="MsoNormal">I will start with cryptography itself. Why do we need it? There are at least three reasons for that - Confidentiality, Authentication and Integrity. Confidentiality is the most obvious one. It's crystal clear that we need cryptography to hide information from others. Authentication confirms that a message was sent by a subject which we can identify and that our claims about it are true. And finally, Integrity ensures that the message wasn't modified or corrupted during the transfer process.<br />
<br />
</div><div class="MsoNormal">We may try to use Symmetric Cryptography to help us achieve our aims. It uses just one shared key, which is also called a secret. The secret is used both for encryption and for decryption of data. Let’s have a look at how it can help us achieve our aims. Does it encrypt messages? Yes. Well, Confidentiality is solved, as long as nobody else except the communicating parties knows the secret. Does it provide Authentication? Mmm… I would say no. If there are just two parties in the conversation, it seems ok, but if there are hundreds, then there would have to be hundreds of secrets, which are hard to manage and distribute. What about Integrity? Yes, it works fine – it’s very hard to modify an encrypted message. As you can guess, symmetric cryptography has one big problem – and that problem is the “shared secret”. These two words… they don't even fit together. If something is known by more than one person, it is not a secret any more. Moreover, to be shared, that secret somehow has to be transferred, and during that process there are too many ways for the secret to be stolen. This means that this type of cryptography hardly solves our problems. But it is still in use and works quite well for its purposes. It's very fast and can be used for encryption/decryption of big amounts of data, e.g. your hard drive. Also, since it is hundreds or even thousands of times faster than asymmetric cryptography, it’s used in hybrid schemes (like TLS aka SSL), where asymmetric cryptography is used just for transferring the symmetric key and encryption/decryption is done by the symmetric algorithm.<br />
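The shared-secret model can be illustrated with the standard Java crypto API (a sketch only; the default ECB mode is used for brevity, and real systems should prefer an authenticated mode and proper key management):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;

// Round trip with a single shared AES key: whoever holds the secret
// can both encrypt and decrypt.
public class SymmetricDemo {

    public static SecretKey newKey() {
        try {
            KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(128);
            return generator.generateKey(); // the shared secret
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] encrypt(SecretKey key, String plaintext) {
        return run(Cipher.ENCRYPT_MODE, key, plaintext.getBytes(StandardCharsets.UTF_8));
    }

    public static String decrypt(SecretKey key, byte[] ciphertext) {
        return new String(run(Cipher.DECRYPT_MODE, key, ciphertext), StandardCharsets.UTF_8);
    }

    public static String roundTrip(String plaintext) {
        SecretKey key = newKey();
        return decrypt(key, encrypt(key, plaintext));
    }

    private static byte[] run(int mode, SecretKey key, byte[] input) {
        try {
            // ECB is used only to keep the sketch short.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(mode, key);
            return cipher.doFinal(input);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The whole scheme stands or falls with the distribution of that one SecretKey, which is exactly the "shared secret" problem described above.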
<br />
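To make this more concrete, here is a minimal sketch of symmetric encryption using the JDK's standard javax.crypto API (AES in GCM mode; the message is made up for the example). Note how one and the same SecretKey does both jobs - that key is exactly the "shared secret" discussed above.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class SymmetricDemo {
    static byte[] roundTrip(byte[] plaintext) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey secret = keyGen.generateKey(); // the shared secret

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        GCMParameterSpec spec = new GCMParameterSpec(128, iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, secret, spec);
        byte[] ciphertext = cipher.doFinal(plaintext);

        // The same key (and IV) decrypts; GCM also authenticates the data,
        // so tampering with the ciphertext makes decryption fail.
        cipher.init(Cipher.DECRYPT_MODE, secret, spec);
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        byte[] decrypted = roundTrip("meet me at noon".getBytes("UTF-8"));
        System.out.println(new String(decrypted, "UTF-8"));
    }
}
```

The hard part, as argued above, is not this code - it is getting that SecretKey to the other party without anyone else seeing it.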
</div><div class="MsoNormal">Now let’s have a look at Asymmetric Cryptography. It was invented quite recently, about 40 years ago. The first paper (“<a href="http://www.cs.rutgers.edu/~tdnguyen/classes/cs671/presentations/Arvind-NEWDIRS.pdf">New Directions in Cryptography</a>”) was published in 1976 by <a href="http://en.wikipedia.org/wiki/Whitfield_Diffie">Whitfield Diffie</a> and <a href="http://en.wikipedia.org/wiki/Martin_Hellman">Martin Hellman</a>. Their work was influenced by <a href="http://en.wikipedia.org/wiki/Ralph_Merkle">Ralph Merkle</a>, who is believed to be the one who conceived the idea of Public Key Cryptography in 1974 (<a href="http://www.merkle.com/1974/">http://www.merkle.com/1974/</a>) and suggested it as a project to his mentor, <a href="http://en.wikipedia.org/wiki/Lance_Hoffman">Lance Hoffman</a>, who rejected it. “New Directions in Cryptography” describes the key exchange algorithm now known as the “Diffie–Hellman key exchange”. An interesting fact is that the same algorithm had been invented earlier, in 1974, at the Government Communications Headquarters (GCHQ) in the UK by Malcolm J. Williamson, but that work was classified and the fact was disclosed only in 1997.<br />
<br />
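The exchange itself is simple enough to sketch with toy numbers. The values below are hopelessly insecure sizes, chosen only to show the arithmetic: both parties arrive at the same shared value without their private exponents ever crossing the wire.

```java
import java.math.BigInteger;

public class DhDemo {
    // Given public parameters p (prime modulus) and g (generator) and the two
    // private exponents, return the shared value as computed by each side.
    static BigInteger[] exchange(BigInteger p, BigInteger g, BigInteger a, BigInteger b) {
        BigInteger A = g.modPow(a, p); // Alice sends A = g^a mod p over the wire
        BigInteger B = g.modPow(b, p); // Bob sends B = g^b mod p over the wire
        // Each side combines the other's public value with its own private one:
        // (g^b)^a mod p == (g^a)^b mod p, so both arrive at the same secret.
        return new BigInteger[] { B.modPow(a, p), A.modPow(b, p) };
    }

    public static void main(String[] args) {
        BigInteger[] shared = exchange(
                BigInteger.valueOf(23), BigInteger.valueOf(5),  // public p, g
                BigInteger.valueOf(6), BigInteger.valueOf(15)); // private a, b
        System.out.println("Alice: " + shared[0] + ", Bob: " + shared[1]);
    }
}
```

With these toy numbers both sides compute 2; in practice p would be thousands of bits long, which is what makes recovering the private exponents infeasible.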
</div><div class="MsoNormal">Asymmetric Cryptography uses a pair of keys - one Private Key and one Public Key. The Private Key has to be kept secret and never shared with anybody. The Public Key can be made available to the public; it doesn’t need to be secret. Information encrypted with the public key can be decrypted only with the corresponding private key. Since the Private Key is never shared, there is no need to distribute it, and there is a reasonably small chance that it will be compromised. So this way of exchanging information solves the Confidentiality problem. What about Authentication and Integrity? These problems are solvable as well, via a mechanism called a Digital Signature. The simplest variant of a Digital Signature could use the following scenario: the subject creates a hash of the message, encrypts that hash with the Private Key and attaches it to the message. Now if the recipient wants to verify who created the message, he decrypts the attached hash using the subject’s public key (that’s Authentication) and compares it with the hash computed on the recipient’s side (that’s Integrity). In reality the hash is not literally encrypted; instead it is fed into a special signing algorithm, but the overall concept is the same. It’s important to notice that in Asymmetric Cryptography each pair of keys serves just one purpose, e.g. if a pair is used for signing, it shouldn’t be used for encryption.<br />
<br />
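Here is a minimal sketch of the signing scenario above, using the JDK's standard java.security API (the message is made up for the example). SHA256withRSA hashes the message internally, which is exactly the "special signing algorithm" just mentioned.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class SignatureDemo {
    // Sign: the message is hashed and the hash is signed with the private key
    // (SHA256withRSA does the hashing internally).
    static byte[] sign(PrivateKey key, byte[] message) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(key);
        signer.update(message);
        return signer.sign();
    }

    // Verify with the matching public key: proves who signed the message
    // (Authentication) and that it was not modified (Integrity).
    static boolean verify(PublicKey key, byte[] message, byte[] signature) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(key);
        verifier.update(message);
        return verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] message = "pay Alice 10 GBP".getBytes("UTF-8");
        byte[] signature = sign(pair.getPrivate(), message);

        System.out.println("valid: " + verify(pair.getPublic(), message, signature));
        // A tampered message fails verification against the same signature.
        System.out.println("tampered: " + verify(pair.getPublic(), "pay Mallory 10 GBP".getBytes("UTF-8"), signature));
    }
}
```

This prints "valid: true" and "tampered: false" - the signature binds the message to the holder of the private key.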
</div><div class="MsoNormal">The Digital Signature is also the basis for the Digital Certificate, AKA Public Key Certificate. A certificate is pretty much the same as your passport. It has identity information, similar to the name, date of birth, etc. in a passport. The owner of the certificate has to hold the Private Key which matches the Public Key stored in the certificate, just as a passport carries a photo of the owner which matches the owner’s face. And, finally, the certificate has a signature, whose meaning is the same as the meaning of a stamp in a passport: the signature proves that the certificate was issued by the organization which made that signature. In the Public Key Infrastructure world such organizations are called Certificate Authorities. If a system discovers that a certificate is signed by a “trusted” Certificate Authority, that system will trust the information in the certificate.<br />
<br />
</div><div class="MsoNormal">The last paragraph may not be obvious, especially the “trust” part of it. What does “trust” mean in this context? Let’s have a look at a simple example. Every website which uses an encrypted connection does it via the TLS (SSL) protocol, which is based on certificates. When you go to <a href="https://www.amazon.co.uk/">https://www.amazon.co.uk</a>, it sends its certificate back to your browser. That certificate contains information about the website and a reference to the Certificate Authority who signed it. First the browser looks at the name in the certificate - it has to be exactly the same as the website’s domain name, in our case “www.amazon.co.uk”. Then the browser verifies that the certificate is signed by a Trusted Certificate Authority, which is VeriSign in Amazon’s case. Your browser already has a list of Certificate Authorities (just a list of certificates with public keys) which are known to be trusted, so it can verify that the certificate was issued by one of them. There are some other verification steps, but these two are the most important ones. Assume in our case verification was successful (if it’s not, the browser will show a big red warning message, like <a href="https://www.rsdn.ru/">that one</a>) - the certificate has the proper name in it and was signed by a Trusted Certificate Authority. What does that give us? Just one thing - we know that we are on <a href="http://www.amazon.co.uk/">www.amazon.co.uk</a> and the server behind that name is an Amazon server, not some dodgy website which just looks like Amazon. When we enter our credit card details, we can be relatively sure that they will be sent to Amazon and not to a hacker’s database. Our confidence here rests on the assumption that Certificate Authorities like VeriSign do not issue dodgy certificates and that the Amazon server is not compromised. 
Well, better than nothing :)</div><div class="MsoNormal">Another example is servers in an organization which use certificates to verify that they can trust each other. The scheme there is very similar to the browser’s, except for two differences:</div><ul style="margin-top: 0cm;" type="disc"><li class="MsoNormal" style="mso-list: l0 level1 lfo2; tab-stops: list 36.0pt;">Mutual authentication. Certificates are usually verified by both sides, not just by the client. The client has to send his certificate to the server.</li>
<li class="MsoNormal" style="mso-list: l0 level1 lfo2; tab-stops: list 36.0pt;">The Certificate Authority is hosted inside the company.</li>
</ul><div class="MsoNormal">When the CA is inside the company, we can be almost sure that certificates will be issued only to properly validated subjects. That gives some confidence that a hacker can’t inject his own server, even if he has access to the network infrastructure. An attack is possible only if the CA is compromised or some server’s Private Key is compromised.<br />
<br />
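Whether the trusted list belongs to a browser or to a system inside an organization, it is, as said above, just a set of CA certificates. A JVM, for example, ships with such a built-in list (the "cacerts" trust store). The sketch below loads that default list via the standard TrustManagerFactory API; initialising it with a null KeyStore means "use the JDK's defaults".

```java
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class TrustedCaDemo {
    static X509Certificate[] trustedCas() throws Exception {
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = use the JDK's default trust store
        X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];
        return tm.getAcceptedIssuers();
    }

    public static void main(String[] args) throws Exception {
        X509Certificate[] cas = trustedCas();
        System.out.println("Trusted CAs in the default store: " + cas.length);
        // Print the first few issuer names, typically well-known public CAs.
        for (int i = 0; i < Math.min(3, cas.length); i++) {
            System.out.println(cas[i].getSubjectX500Principal().getName());
        }
    }
}
```

Any certificate chaining up to one of these anchors will be trusted by default, which is exactly why a corporate CA's certificate has to be added to each server's trust store.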
</div><div class="MsoNormal">As we already know, a Certificate Authority is the organization which issues certificates, and on the Internet an example of such an organization is VeriSign. If a certificate is created to be used only inside an organization (intranet), it can be issued by the Information Security Department, which can act as a Certificate Authority. When someone wants to have a certificate, he has to send a certificate request, called a Certificate Signing Request, to the Certificate Authority. That request consists of the subject’s identity information, the subject’s public key and a signature created with the subject’s private key, which proves that the subject who sent the request holds the corresponding private key. Before signing, the Certificate Authority passes the request to a Registration Authority, who verifies all the details, ensures that the proper process is followed, etc. The Certificate Authority can also act as the Registration Authority itself. Finally, if everything is ok, the Certificate Authority creates a new certificate signed with its private key and sends it back to the subject who requested it.<br />
<br />
</div><div class="MsoNormal">I've already mentioned the certificate validation process. Here are some details of it, though still at a high level. Validation consists of several steps which, broadly speaking, can be described as:</div><ul style="margin-top: 0cm;" type="disc"><li class="MsoNormal" style="mso-list: l4 level1 lfo1; tab-stops: list 36.0pt;">Certificate data validation - validity dates, presence of required fields, their values, etc.</li>
<li class="MsoNormal" style="mso-list: l4 level1 lfo1; tab-stops: list 36.0pt;">Verify that the certificate is issued by a Trusted Certificate Authority. If you are browsing the Internet, that list is already built into your browser. If it’s communication between two systems, each system has a list of trusted Certificate Authorities; usually that is just a file with certificates.</li>
<li class="MsoNormal" style="mso-list: l4 level1 lfo1; tab-stops: list 36.0pt;">Verify that the certificate’s signature is valid and was made by the issuing Certificate Authority.</li>
<li class="MsoNormal" style="mso-list: l4 level1 lfo1; tab-stops: list 36.0pt;">Verify that the certificate is not revoked.</li>
<li class="MsoNormal" style="mso-list: l4 level1 lfo1; tab-stops: list 36.0pt;">Key verification - proves that the server can decrypt messages encrypted with the certificate’s Public Key.</li>
</ul><div class="MsoNormal">The certificate revocation mentioned above can happen for many reasons - the certificate could be <a href="http://www.amug.org/~glguerin/opinion/revocation.html">compromised</a>, or, in the corporate world, the employee who owned the certificate left the company, or the server which held the certificate was decommissioned, etc. In order to check certificate revocation, a browser, or any other piece of software, has to use one or both of the following techniques:</div><ul style="margin-top: 0cm;" type="disc"><li class="MsoNormal" style="mso-list: l2 level1 lfo5; tab-stops: list 36.0pt;">Certificate Revocation List (CRL). That’s just a file, which can be hosted on an HTTP server, containing the list of revoked certificate IDs. This method is simple and straightforward and doesn’t require much effort to implement, but it has three disadvantages: since it is just a file, it is not real-time; it can consume significant network traffic; and it is not checked by default by most browsers (I would even say by all browsers), even if the certificate has a link to a CRL.</li>
<li class="MsoNormal" style="mso-list: l2 level1 lfo5; tab-stops: list 36.0pt;">Online Certificate Status Protocol (OCSP). This is the preferable solution: a dedicated server implements a protocol which returns the revocation status of a certificate by its ID. If a browser (at least Firefox > v3.0) finds a link to such a server in the certificate, it will make a call to verify that the certificate is not revoked. The only disadvantage is that the OCSP server has to be very reliable and able to answer requests all the time.</li>
</ul><div class="MsoNormal">On the Internet, a certificate usually contains the links to its CRL or OCSP server. When certificates are used in a corporate network, these links are usually known by all parties and there is no need to embed them in the certificate.</div><div class="MsoNormal">So, finally, what is Public Key Infrastructure? It is the infrastructure which supports everything described above, and it generally consists of the following elements:</div><ul style="margin-top: 0cm;" type="disc"><li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Subscribers. The users of certificates: clients and the subjects who own certificates.</li>
<li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Certificates.</li>
<li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Certificate Authority and Registration Authority.</li>
<li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Certificate Revocation Infrastructure. A server hosting the Certificate Revocation List, or an OCSP server.</li>
<li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Certificate Policy and Practices documents. They describe the format of certificates, the format of certificate requests, when certificates have to be revoked, etc. - basically all procedures related to the infrastructure.</li>
<li class="MsoNormal" style="mso-list: l3 level1 lfo3; tab-stops: list 36.0pt;">Hardware Security Modules, which are usually used to protect the Root CA’s private key.</li>
</ul><div class="MsoNormal">And that entire infrastructure supports the following functions, which we’ve just discussed:</div><ul style="margin-top: 0cm;" type="disc"><li class="MsoNormal" style="mso-list: l1 level1 lfo4; tab-stops: list 36.0pt;">Public Key Cryptography.</li>
<li class="MsoNormal" style="mso-list: l1 level1 lfo4; tab-stops: list 36.0pt;">Certificate issuance.</li>
<li class="MsoNormal" style="mso-list: l1 level1 lfo4; tab-stops: list 36.0pt;">Certificate validation.</li>
<li class="MsoNormal" style="mso-list: l1 level1 lfo4; tab-stops: list 36.0pt;">Certificate revocation.</li>
</ul><div class="MsoNormal">And that’s it. Not such a big topic after all ;)</div>