32-bit Windows and JVM virtual memory limit

This post is also available at my personal web site: http://milosophical.me/blog/2007/03/04/32-bit-windows-and-jvm-virtual-memory-limit.html

On the 32-bit Windows platform, a Java VM process can only ever use up to about 1.5–1.6 GiB of RAM. Allocating a heap larger than this simply fails. What’s going on?
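A quick way to see the ceiling for yourself is to ask the running VM what it actually got (the class name here is just my own illustration):

```java
public class MaxHeapCheck {
    public static void main(String[] args) {
        // Ask the running VM for its effective maximum heap (the -Xmx ceiling).
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MiB");
    }
}
```

Launch it with increasing `-Xmx` values; on 32-bit Windows the JVM typically refuses to start at all once `-Xmx` passes the ~1.5 GiB mark, while smaller values print through normally.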

UPDATE: 20110731 — Some more answers and links at Stack Overflow

UPDATE: 20110428 — An explanation of my vague “overheads”

See this excellent post by Dharmir Singh that explains the JVM memory structure and shows where some of the missing heap could be going, which I vaguely hand-waved away as “overheads” below.

UPDATE: 20110228 — Some corrections, more observation needed

First, a correction: “real” operating systems let processes use up to, but generally less than, 4GiB on a 32-bit system. *BSD lets user processes access 3GiB (read the Devil Book), and Linux lets user processes malloc up to TASK_SIZE, usually 3GiB on a 32-bit system (see “Professional Linux Kernel Architecture”). This is because in each of these operating systems, 1GiB of each process’s address space is reserved to map in the kernel and system libraries (as opposed to 2GiB on Win32, with its multiple personality disorder). Mapping parts of the kernel makes calling kernel services quicker. An alternative is Mac OS X (XNU): its processes map only a tiny part of the kernel into user space, so they may malloc pretty much all of the 4GiB address space (see the OS X book). This comes at the cost of a slower context switch whenever a system call is made.

Second is the observation Saar makes in the comments below. I guess that the overheads incurred by the JVM must increase in proportion to the heap size (perhaps tables to manage references?), but to answer that for certain I’d need to put the JVM under some sort of debugger and measure the actual overheads incurred (and to do that, I’d need to know what they are, instead of just my vague “overheads”).
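One crude way to start measuring rather than hand-waving: the `java.lang.management` beans report the managed heap and non-heap (class data, JIT code cache) commitments separately. This sketch only covers the managed pools, not thread stacks or native mallocs, so it is a lower bound on the real overhead:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class OverheadProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // "Committed" is memory the VM has actually reserved from the OS.
        System.out.printf("Heap committed:     %d KiB%n", heap.getCommitted() / 1024);
        System.out.printf("Non-heap committed: %d KiB%n", nonHeap.getCommitted() / 1024);
    }
}
```

Running this at several `-Xmx` settings and comparing against the process size in Task Manager would show whether the non-heap portion really grows with the heap.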

Also, switching to a 64-bit JVM isn’t necessarily the answer: references are usually twice as long, which means increased overhead.

Clearly some more research is needed here.

UPDATE: 20100819 — Some interesting related posts

Thanks to the people who took the time to check the limits on their own systems and comment on this blog; it makes me seem more credible.

I noticed WordPress is generating some related-pages links for this, and one of them is a very interesting and informative read. I recommend you take a look at Esken AKSOY’s post as general background on how Windows maps memory for client processes (much more accessible than Microsoft’s Knowledge Base article linked below). It explains the 2GB per-process limit very well. There are also two posts for the cynics / conspiracy theorists among us 🙂

UPDATE: 20081104 — More recent info?

I notice this page gets a lot of hits still. It’s quite old and I haven’t researched to see what is happening in OpenJDK or to see if 6u10 addresses any of this for Windows. If anyone has more info please comment or write to me, I’d like to update what I have.

— Original post follows (2007) —

From my own research into this issue (and taking into account Microsoft’s advice in its Knowledge Base about Win32 virtual memory allocation), this is what I understand about the limits of RAM usability, both for Win32 itself and for 32-bit JVMs running on Windows. I haven’t researched whether this limit applies to 64-bit Windows or JVMs, nor what Vista might be doing. Also, though the 4GiB limit is inherent to 32-bit machines, the 2GiB limit seems to be peculiar to Windows; I’ve not seen it anywhere on Linux, Solaris or BSD: when Unix runs on a 32-bit machine there’s a 4GiB limit, but not a 2GiB one.

Any 32-bit processor has a hard limit of 2^32 = 4,294,967,296 distinct values in a single 32-bit machine word, and hence that many distinct addresses. On a byte-addressed computer like Intel IA-32, that equates to 4,294,967,296 bytes = 4,194,304 KiB = 4,096 MiB = 4GiB.
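The arithmetic is easy to check mechanically (Java’s 64-bit long can hold 2^32):

```java
public class AddressSpace {
    public static void main(String[] args) {
        long addresses = 1L << 32;                          // 2^32 distinct byte addresses
        System.out.println(addresses);                      // 4294967296
        System.out.println(addresses / 1024);               // 4194304 KiB
        System.out.println(addresses / (1024 * 1024));      // 4096 MiB
        System.out.println(addresses / (1024L * 1024 * 1024)); // 4 GiB
    }
}
```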

Normal memory access techniques used by 32-bit Windows programs use standard linear byte addressing, and so are limited to 4GiB of addressable memory, whether it is real or virtual.

On Windows, the amount of this 4GiB space that a process can actually use is halved to 2GiB. Windows reserves the other 2GiB of each process’s virtual address space for the kernel and to speed up paging. This is really dumb, because it means that if your process allocates more than 2GiB of heap memory, Windows must page some of it to virtual memory even if you have more than 2GiB of actual RAM! Sort of like the DOS 640KiB limit reborn!

To overcome this dumb design, Windows has the AWE API (used together with the /3GB boot option), which allows a process to use up to 3GiB of address space and to keep additional physical RAM mapped in through a movable window, rather than paged out. To use this, the program must be specially written against the AWE API.

There’s also another virtual memory technology in Windows called PAE. This is not useful to application programs: it is how the Windows kernel can use more than 4GiB of real memory to allocate physical memory to all the processes on the system. Each process is still limited to its own 4GiB address space, with 2GiB mapped to RAM (or 3GiB, if the program uses the AWE API), and the rest having to be virtual (paged to disc). PAE just lets Windows keep more than one of these big processes in memory at once, even if their combined heaps total more than 4GiB (assuming, of course, that you have more than 4GiB of RAM).

Both Sun and BEA have declined to use the AWE API in implementing their Java VMs. This is probably because the AWE API does not provide a contiguous address space of 3GiB, but rather breaks it at the 2GiB mark. The Java VM spec used to require a contiguously addressed heap (though this has changed for the JVMS, 2nd Ed.). So any Java VM running on Windows is still limited to less than 2GiB per running program (actually, only about 1.5GiB is usable, because of further overhead for the JVM itself), at least so long as Win32 JVMs don’t use the AWE API. I’m not sure how difficult it would be for Sun or BEA to change their JVMs to make use of the AWE API on Windows, but the fact that they haven’t done so suggests to me that it wouldn’t be easy. I have been unable to determine whether IBM’s JVM uses it.

The only workable solution for using more than about 1.5–1.6 GiB per JVM process on a 32-bit host is to not run it on Windows (i.e. use Solaris, Linux or BSD). Real operating systems can let processes use up to 4GiB on 32-bit machines without special programming tricks. Or you could switch to a 64-bit platform. Although there is a Windows for IA-64, I’m not sure about the availability of a 64-bit JVM for that platform.

It would be better to have a smaller heap on Win32, and if you need more, to consider re-engineering the program to use less anyway. If your program is genuinely memory-bound and you can’t get away from needing more than a 1.5GiB heap, you could work around the Win32 memory limit by splitting your Java program into more than one process, each running in its own JVM with 1.5GiB allocated to it, and then having the processes communicate through an IPC mechanism as needed, such as JMS. However, there’s probably a lot more re-engineering work involved in that than in just migrating away from Win32.
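A minimal sketch of the multi-process idea, just the launching side: the `Worker` class name and the 1536m figure are illustrative, and the actual IPC (JMS or otherwise) is left out entirely.

```java
import java.util.Arrays;
import java.util.List;

public class WorkerLauncher {
    // Build the command line for one worker JVM. Each worker process gets its
    // own sub-2GiB heap, so N workers can together use N x 1.5GiB on one host.
    static List<String> workerCommand(int workerId) {
        return Arrays.asList(
            "java", "-Xmx1536m",                 // per-process heap, under the Win32 ceiling
            "Worker",                            // hypothetical worker main class
            Integer.toString(workerId));
    }

    public static void main(String[] args) {
        List<String> cmd = workerCommand(1);
        System.out.println(String.join(" ", cmd));
        // To actually launch: new ProcessBuilder(cmd).inheritIO().start();
    }
}
```

The coordination between workers (partitioning the data, collecting results over JMS or sockets) is where the real re-engineering cost lies, which is the point made above.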


13 thoughts on “32-bit Windows and JVM virtual memory limit”

  1. Pingback: Complex Event Processing (CEP) Blog » Blackboards for Complex Event Processing

  2. This page gets a high hit rate because it talks about Java heap allocation and Remote Desktop. For some reason the amount of memory that Java can reserve when you are remoting into XP is much more limited than if you are at the console, even when you have loads of free memory.
    E.g. I have a machine with 3 gig; task manager shows less than 1 gig being used. ANT_OPTS -Xmx1024m -XX:MaxPermSize=512m
    ant -v from Remote Desktop causes a “could not reserve enough heap” error.
    In console it works fine.

  3. I just ran into this problem too. On a Windows Vista 32-bit system I can have Java use (with the -Xmx parameter) no more than exactly 1,665,135,275 bytes of memory.

    On 32-bit NetBSD (Unix) I don’t have this problem, and I expect not to have it on 64-bit NetBSD either. I haven’t bothered to try Windows Vista 64-bit because there are too many drivers that simply aren’t available for it, and so it’s not an option — Unix meets all the needs anyway, so we’re just moving to NetBSD instead of investing any more time and money into trying to figure out how to make this work in Windows.

  4. I just tried quickly on a system running Vista 64-bit with 8 GBs of RAM, and it seems that Windows Vista puts a hard limit of 3 GBs on application-based memory allocations.

    Hmm, 3 GBs? That must be the magic number that Microsoft has been touting as a hardware limit (although we know that’s not true because Windows XP can see up to 4 GBs of RAM minus a few other things; under Xen in an HVM Guest on NetBSD I can get Windows XP Pro with SP3 to recognize 3.75 GBs of RAM without any problems).

    This 1.5 GBs limit in 32-bit Windows, I wonder if it was merely doubled to 3.0 GBs in 64-bit Windows?

  5. Yes! I’m just having the same problem with Weka (a Java app). The limit is 1.5GB. I have 64-bit Windows 7 on a machine with 8GB of RAM. I’m planning to try a 64-bit JVM. If that doesn’t work out, then Solaris or Ubuntu.

  6. Re:Real operating systems can let processes use 4GiB on 32-bit machines without special programming tricks.

    I’d say that a program using 2GB for a single instance is not acceptable: what is it doing? We see lots of Java apps (client side) using massive amounts of CPU and memory for very simple user interfaces. Maybe, just maybe, the problem is on the Java side, not the Windows side. After all, the Java client was written for Windows and the applications were developed for Windows users. You would think they were doing a lot of client-side preprocessing, but no, the servers are screaming as well!!!

    • Agreed, there are a lot of very poorly written Java applications out there; front-end GUI apps in particular are notorious resource hogs. However, this argument does sound an awful lot like “2GB ought to be enough for anyone”…

      Also, 1.5GB (not 2GB; we never see more than 1.6GB heaps in Windows) is a severe per-process limitation for the server-side applications I have worked on. On the server side, when servicing upwards of 2 or 3 million simultaneous online transactions for a single instance, keeping track of state, 2GB is not unacceptable at all; it’s quite reasonable.

      Think of a banking application, for instance. Perhaps lend brokering rather than the usual thought-experiment of online retail banking. These things have huge amounts of state: multi-megabyte XML documents to parse and translate at many stages, for instance. It’s been my experience that such applications perform fine on Windows hosts until the app is asked to scale up to peak production load (usually at Christmas), and suddenly either the performance drops or the system just hangs. This is what prompted my research. The conclusion seems to be: get off Win32 (and the comments here seem to say Win64 is no good either). Or else scale *out*, with the usual expensive management and resource overheads that scaling out requires; that assumes your application can scale out, and not all are designed to.

      You could also re-write your existing app on some platform other than Java, of course. Why, though, when a simpler solution to overcoming Windows’ 1.5GB limit is to run the Java app on a platform other than Windows? Much less rework there.

  7. 32-bit Windows will let a process allocate 2GB in its user space.

    The post says:
    “actually, only about 1.5GiB is usable because of further overhead for the JVM itself”

    It seems that if the JVM max heap size is set to 1.5GB, the process takes 1.7GB. This makes sense: that’s the JVM overhead.
    But why can’t I allocate a heap of 1.7GB and let the process consume 2.0GB?

  8. It’s not just a Java problem. One may observe this in commercial games which were probably written in C++.

  9. Hi,

    It seems no one is actually active in this page but I have one question:

    I am working on a remote server with 64GB of RAM, on a platform that uses a 32-bit JVM, and what I have to do is create multiple JVMs (around 500). What happens is that after creating 190 or so, I get an OOM error from Java saying it is unable to create a new native thread. Each JVM occupies around 20MB of RAM.
    So is there any limit on the memory used by all the JVMs together? BTW, my process limit in Linux is around 10,000 and the limit in /proc/sys/kernel/pid_max is 65,000. Another point: changing the heap size doesn’t help either. Any thoughts?

    • Hi Javad,

      This is a blog, not a forum, so only the blog owner (me) is really “active”, but still… 😉

      I don’t really have any concrete thoughts on this one. Have you tried using a 64-bit JVM, just for kicks? Does it get OOM after more (or even fewer) JVMs?

      190 × 20MB = 3,800MB (about 3.7GB). It seems to me that all the JVMs are sharing a single 32-bit segment and maxing out near the 32-bit maximum for the shared libraries (3.8GB). As to why this may be happening, I’m unsure. It is as if it might be something in the way shared libraries or 32-bit processes malloc memory on a 64-bit host. There might be something that can be done about it with kernel settings?

      It’s definitely an interesting problem. I’m curious about why so many JVMs are required too? If each is only 20MB, might it be better to use multiple threads in fewer VMs?

      • Hi Mike,

        I have done the same test on a 64bit JVM too and I still have the same problem.

        The platform I’m using is configured this way and it can’t be changed, so changing JVMs to threads is not an option.

        It is really strange since I have all the limits in Linux set.

Comments are closed.