With ps or similar tools you will only get the number of memory pages allocated to that process. This number is correct, but it:

- does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it
- can be misleading if pages are shared, for example by several threads or through dynamically linked libraries
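You can see the difference directly with ps: VSZ is the virtual size reserved for the process, while RSS counts only the pages actually resident in RAM (replace <pid> with your process ID):

ps -o pid,vsz,rss -p <pid>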
If you really want to know how much memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insight into the amount of memory used and, more importantly, reveal possible memory leaks in your program. Valgrind's heap profiler tool is called massif:
Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.
As explained in the Valgrind documentation, you need to run the program through Valgrind:
valgrind --tool=massif <executable> <arguments>
Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program the memory was allocated. A great graphical tool for analyzing these files is massif-visualizer, but I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
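For example, to print the text graph and allocation trees for the snapshot file above:

ms_print massif.out.12345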
To find memory leaks, use the (default) memcheck tool of Valgrind.
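As a minimal sketch, here is a deliberately leaky C program (the file name leaky.c is just a placeholder) together with the memcheck invocation that reports the leak. Compiling with -g lets memcheck point at the offending source line:

    /* leaky.c - allocates a buffer and never frees it */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(1024); /* never freed; reported as definitely lost */
        (void)buf;
        return 0;
    }

gcc -g leaky.c -o leaky
valgrind --leak-check=full ./leaky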
Keep in mind that Windows has virtual memory management and the JVM only needs memory that is contiguous in its address space. So, other programs running on the system won't necessarily impact your heap size. What will get in your way are DLLs that get loaded into your address space. Unfortunately, optimizations in Windows that minimize the relocation of DLLs during linking make it more likely that you'll have a fragmented address space. Things that are likely to cut into your address space, aside from the usual stuff, include security software, CBT software, spyware and other forms of malware. Likely causes of the variances are different security patches, C runtime versions, etc. Device drivers and other kernel bits have their own address space (the other 2 GB of the 4 GB 32-bit space).
You could try going through the DLL bindings in your JVM process and look at rebasing your DLLs into a more compact address space. Not fun, but if you are desperate...
Alternatively, you can just switch to 64-bit Windows and a 64-bit JVM. Despite what others have suggested, it will chew up more RAM, but you will have much more contiguous virtual address space, and allocating 2 GB contiguously would be trivial.
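For instance, with a 64-bit JVM you can request the 2 GB heap directly using the standard -Xmx option (MyApp is a placeholder class name):

java -Xmx2g MyApp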
Best Solution
Writing a word-sized, naturally aligned value to RAM is atomic on most CPUs. If two CPUs try to write to the same location at the same time, the memory controller will decide on some order for the writes. While one CPU is writing to memory, the other will stall for as many cycles as necessary until the first write has completed; then it will overwrite that value with its own. Which write wins is timing-dependent, and this is what's known as a race condition.
Writes that are smaller than the native word size may not be atomic on some architectures: there, the CPU must read the old word from memory into a register, update the relevant bytes in the register, and then write the whole word back to memory.
You should never have code that depends on this -- if you have multiple CPUs simultaneously writing to the same memory location without synchronization, you're doing something wrong.
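As a minimal, hedged sketch of why this matters (all names are placeholders): the program below has two threads increment a plain counter and a C11 atomic counter. The plain increments are racy read-modify-write sequences, so updates get lost; the atomic increments are indivisible, so that counter always reaches the expected total.

    /* build: gcc -std=c11 -pthread counter_race.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    static long plain_counter = 0;         /* racy read-modify-write */
    static atomic_long atomic_counter = 0; /* indivisible increments */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            plain_counter++;                      /* lost updates possible */
            atomic_fetch_add(&atomic_counter, 1); /* atomic RMW */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* plain is typically below 2000000; atomic is exactly 2000000 */
        printf("plain:  %ld\n", plain_counter);
        printf("atomic: %ld\n", (long)atomic_load(&atomic_counter));
        return 0;
    }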
Another important consideration is the cache coherency problem. Each CPU has its own cache. If a CPU writes data to its cache, the other CPUs need to be made aware of the change so they don't keep reading a stale copy from their own caches; this is handled in hardware by a cache coherency protocol such as MESI.
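Coherency is handled by the hardware, but its cost is visible from software. Below is a hedged sketch of "false sharing" (all names are placeholders, and the 64-byte cache-line size is an assumption): two threads each write only their own counter, yet if the counters share a cache line, the coherency protocol bounces that line between cores on every write. Padding the counters onto separate lines usually makes the loop noticeably faster.

    /* build: gcc -std=c11 -pthread false_sharing.c */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 100000000L

    /* Each counter is forced onto its own 64-byte cache line (64 is an
       assumption; the real line size is CPU-specific). Drop the _Alignas
       to pack both counters onto one line and compare the run time. */
    struct padded { _Alignas(64) volatile long value; };
    static struct padded counters[2];

    static void *worker(void *arg)
    {
        volatile long *c = &((struct padded *)arg)->value;
        for (long i = 0; i < ITERATIONS; i++)
            (*c)++; /* each thread touches only its own counter */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, &counters[0]);
        pthread_create(&t2, NULL, worker, &counters[1]);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%ld %ld\n", counters[0].value, counters[1].value);
        return 0;
    }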