ByteBuffer.allocate() vs. ByteBuffer.allocateDirect()

To allocate() or to allocateDirect(), that is the question.

For some years now I've stuck with the idea that, since DirectByteBuffers are a direct memory mapping at the OS level, they would perform quicker with get/put calls than HeapByteBuffers. I was never really interested in finding out the exact details of the situation until now. I want to know which of the two types of ByteBuffer is faster, and under what conditions.


since DirectByteBuffers are a direct memory mapping at the OS level

They aren't. They are just normal application process memory, but not subject to relocation during Java GC, which simplifies things inside the JNI layer considerably. What you describe applies to MappedByteBuffer.
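A minimal sketch of that distinction, assuming a placeholder file data.bin: allocateDirect() hands back plain off-heap process memory, while a MappedByteBuffer only comes from FileChannel.map(), which establishes a genuine OS-level mapping of a file region.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class DirectVsMapped {
        public static void main(String[] args) throws IOException {
            // Direct buffer: ordinary off-heap process memory, tied to no file.
            ByteBuffer direct = ByteBuffer.allocateDirect(4096);
            System.out.println("direct: isDirect=" + direct.isDirect());

            // Mapped buffer: an actual OS-level memory mapping of a file region.
            try (FileChannel ch = FileChannel.open(Path.of("data.bin"),
                    StandardOpenOption.READ)) {
                MappedByteBuffer mapped =
                        ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                System.out.println("mapped: isDirect=" + mapped.isDirect());
            }
        }
    }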

they would perform quicker with get/put calls

The conclusion doesn't follow from the premise; the premise is false; and the conclusion is also false. Plain get/put calls from Java code are no faster on a direct buffer. Direct buffers are faster once you get inside the JNI layer, and if you are reading and writing via the same DirectByteBuffer they are much faster, because the data never has to cross the JNI boundary at all.
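A sketch of that read-and-write case: a channel-to-channel copy pumped through a single direct buffer. Both read() and write() operate on the same off-heap memory, so the bytes never have to be copied into a Java byte[] on the way through; the 64 KB buffer size is an arbitrary choice.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.ReadableByteChannel;
    import java.nio.channels.WritableByteChannel;

    public final class ChannelCopy {
        // Pump bytes from src to dst through one direct buffer; the data stays
        // off-heap the whole time and never lands in a Java byte[].
        static void copy(ReadableByteChannel src, WritableByteChannel dst)
                throws IOException {
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
            while (src.read(buf) != -1) {
                buf.flip();                  // switch from filling to draining
                while (buf.hasRemaining()) {
                    dst.write(buf);
                }
                buf.clear();                 // ready to fill again
            }
        }
    }

The same method works for any channel pairing, e.g. a FileChannel feeding a SocketChannel.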


Best to do your own measurements. Quick answer seems to be that sending from an allocateDirect() buffer takes 25% to 75% less time than the allocate() variant (tested as copying a file to /dev/null), depending on size, but that the allocation itself can be significantly slower (even by a factor of 100x).
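A rough sketch of that kind of measurement, assuming a Unix-like system (for /dev/null) and the input file given as the first program argument. It is a single unwarmed run, so treat the numbers as indicative only; use a harness such as JMH for anything serious.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class BufferBench {
        public static void main(String[] args) throws IOException {
            Path in = Path.of(args[0]);        // file to copy
            Path sink = Path.of("/dev/null");  // Unix-only byte sink

            for (ByteBuffer buf : new ByteBuffer[] {
                    ByteBuffer.allocate(64 * 1024),
                    ByteBuffer.allocateDirect(64 * 1024)}) {
                try (FileChannel src = FileChannel.open(in, StandardOpenOption.READ);
                     FileChannel dst = FileChannel.open(sink, StandardOpenOption.WRITE)) {
                    long t0 = System.nanoTime();
                    while (src.read(buf) != -1) {
                        buf.flip();
                        while (buf.hasRemaining()) {
                            dst.write(buf);
                        }
                        buf.clear();
                    }
                    System.out.printf("%s: %.1f ms%n",
                            buf.isDirect() ? "direct" : "heap",
                            (System.nanoTime() - t0) / 1e6);
                }
            }
        }
    }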



There is no reason to expect direct buffers to be faster for access inside the JVM. Their advantage comes when you pass them to native code -- such as the code behind channels of all kinds.
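To see the first half of that claim for yourself, here is a crude one-shot timing of a plain getInt() loop over a heap buffer and a direct buffer -- purely JVM-side access, with no channels or native code involved. Again, a single unwarmed run is only indicative.

    import java.nio.ByteBuffer;

    public class GetPutLoop {
        // Sum every int in the buffer via relative getInt() calls.
        static long sumInts(ByteBuffer buf) {
            buf.rewind();
            long sum = 0;
            while (buf.remaining() >= Integer.BYTES) {
                sum += buf.getInt();
            }
            return sum;
        }

        public static void main(String[] args) {
            for (ByteBuffer buf : new ByteBuffer[] {
                    ByteBuffer.allocate(1 << 24),       // 16 MB heap buffer
                    ByteBuffer.allocateDirect(1 << 24)}) {
                long t0 = System.nanoTime();
                long sum = sumInts(buf);
                System.out.printf("%s: sum=%d, %.1f ms%n",
                        buf.isDirect() ? "direct" : "heap",
                        sum, (System.nanoTime() - t0) / 1e6);
            }
        }
    }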




