What is the size of a byte[] array in Java?

Java allows you to create an array just big enough to contain 4 bytes, like so:

byte[] array = new byte[4];

How much memory does this array take? If you answered “4 bytes”, you are wrong: a more likely answer is 24 bytes.

I wrote a little Java program that relies on the jamm library to print out some answers, for various array sizes:

size of the array    estimated memory usage
0                    16 bytes
1                    24 bytes
2                    24 bytes
3                    24 bytes
4                    24 bytes
5                    24 bytes
6                    24 bytes
7                    24 bytes
8                    24 bytes
9                    32 bytes

This is not necessarily the exact memory usage on your system, but it is a reasonable guess.
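The numbers in the table follow a simple pattern on a 64-bit HotSpot JVM with compressed oops at default settings: a 16-byte array header (a 12-byte object header plus a 4-byte length field) followed by the payload, rounded up to a multiple of 8 bytes. Here is a minimal sketch of that formula; the class and method names are my own, and the constants are assumptions that may differ on other JVM configurations:

```java
// Estimate of byte[] memory usage on 64-bit HotSpot with compressed oops
// (default settings): 16-byte header plus payload, rounded up to 8 bytes.
public class ArraySizeEstimate {
    static final int HEADER_BYTES = 16; // 12-byte object header + 4-byte length field
    static final int ALIGNMENT = 8;     // default object alignment

    static long estimatedSize(int length) {
        long raw = HEADER_BYTES + (long) length;
        // round up to the next multiple of the alignment
        return (raw + ALIGNMENT - 1) / ALIGNMENT * ALIGNMENT;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 9; n++) {
            System.out.println(n + " -> " + estimatedSize(n) + " bytes");
        }
    }
}
```

Running this reproduces the table above: an empty array costs 16 bytes, lengths 1 through 8 all cost 24 bytes, and length 9 spills over into 32 bytes.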

Further work: a reader (Bempel) points out that a library such as JOL might provide a more accurate measure.


Daniel Lemire, "What is the size of a byte[] array in Java?," in Daniel Lemire's blog, November 22, 2022.

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

2 thoughts on “What is the size of a byte[] array in Java?”

  1. Well known. And now give the VM 33GB RAM!

    You might put in some explanations, too.

    4 bytes for memory management
    4 bytes object hash code? I don’t recall
    4 bytes class pointer for object type (8 if >32 GB Mem limit)
    4 bytes length (signed, hence maximum array size ~ 2^31)

    So 16-20 bytes overhead, then rounded up to multiples of 8 for memory alignment and the compressed-pointer trick. (Regular objects: 12-16 bytes, since they have no array length field; the object size is known via the class pointer.)

    Default settings — it might be possible to tune compressed OOPs to use 32-bit pointers up to 64 GB RAM at the cost of increasing the alignment to 16 bytes. Not sure if you could go to a 16 GB limit with 4-byte padding — there might be other places where 8-byte memory alignment is desirable (you might know better than me which CPUs want this kind of alignment). 8 bytes seems to be the best trade-off.

  2. What are the overheads in C and C++?

    I’d assume that even C’s malloc needs to keep track of memory allocations, so there *will* be some overhead associated. I have no idea about the current glibc. I know that optimized allocators exist (not least in C++ templates), that memory alignment is common, and I once debugged a poor memory allocator for an MMU-less ARM SoC that simply stored the length before and after each allocated chunk (plus a “free” bit, i.e. 8 bytes of overhead on 32 bit) — which was of course incredibly prone to corruption by out-of-bounds writes…
    I’d assume that for C++ with OOP there will also be some type information involved. So I’d expect that on 64-bit systems, overheads of >=16 bytes for arrays are common across OOP languages, too.
