tuning - What is the difference between buffer memory and cache memory in Linux?




What is the difference between buffer and cache memory in Linux?

Buffers are associated with a specific block device, and cover caching of filesystem metadata as well as tracking in-flight pages. The cache only contains parked file data. That is, the buffers remember what's in directories, what file permissions are, and keep track of what memory is being written from or read to for a particular block device. The cache only contains the contents of the files themselves.


To me it's not clear what the difference is between the two Linux memory concepts: buffer and cache. I've read through this post, and it seems to me that the difference between them is the expiration policy:

  1. the buffer's policy is first-in, first-out (FIFO)
  2. the cache's policy is least recently used (LRU)

Am I right?

In particular, I am looking at the two commands: free and vmstat

$ vmstat -S M
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
5  0      0    173     67    912    0    0    19    59   75 1087 24  4 71  1
$ free -m
             total       used       free     shared    buffers     cached
Mem:          2007       1834        172          0         67        914
-/+ buffers/cache:        853       1153
Swap:         2859          0       2859
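Both free and vmstat take those numbers from /proc/meminfo. If you want to look at the raw counters behind the buff/buffers and cache/cached columns, something like the following works (a minimal sketch; the exact set of fields in /proc/meminfo varies a little between kernel versions):

$ grep -E '^(Buffers|Cached|Dirty|Writeback):' /proc/meminfo
# Buffers:   memory used for block-device I/O and filesystem metadata -> the "buff"/"buffers" column
# Cached:    the page cache holding file contents                     -> the "cache"/"cached" column
# Dirty:     cached pages that still have to be written back to disk
# Writeback: pages currently being written out to disk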


A buffer is a region of memory used to temporarily hold data while it is being moved from one place to another within a computer, while a cache is a temporary storage area where frequently accessed data can be kept for rapid access. Once data is stored in the cache, future requests can be served from the cached copy rather than by re-fetching the original data, so the average access time is shorter.


The buffer contains metadata, which helps improve write performance.

The cache contains the file content itself (sometimes not yet written to disk), which improves read performance.
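A rough way to see that split on a running system (a sketch, assuming an ext-style filesystem, where directory and inode metadata goes through the block device's buffers; the large-file path in the last line is just a placeholder): walking a big directory tree grows the buffers figure, while reading a big file grows the cache figure.

$ free -m                             # note the current "buffers" and "cached" values
$ find /usr -type f > /dev/null       # reads lots of directory/inode metadata, no file contents
$ free -m                             # "buffers" has grown noticeably; "cached" much less so
$ cat /path/to/some/large/file > /dev/null   # hypothetical large file; reading it grows "cached" instead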


I think this page will help you understand the difference between buffer and cache in more depth: http://www.tldp.org/LDP/sag/html/buffer-cache.html

Reading from a disk is very slow compared to accessing (real) memory. In addition, it is common to read the same part of a disk several times during relatively short periods of time. For example, one might first read an e-mail message, then read the letter into an editor when replying to it, then make the mail program read it again when copying it to a folder. Or, consider how often the command ls might be run on a system with many users. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.

Since memory is, unfortunately, a finite, nay, scarce resource, the buffer cache usually cannot be big enough (it can't hold all the data one ever wants to use). When the cache fills up, the data that has been unused for the longest time is discarded and the memory thus freed is used for the new data.

Disk buffering works for writes as well. On the one hand, data that is written is often soon read again (e.g., a source code file is saved to a file, then read by the compiler), so putting data that is written in the cache is a good idea. On the other hand, by only putting the data into the cache, not writing it to disk at once, the program that writes runs quicker. The writes can then be done in the background, without slowing down the other programs.
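A small way to watch that write-behind happening (a sketch, assuming the current directory sits on a normal disk-backed filesystem rather than tmpfs): freshly written data shows up as Dirty in /proc/meminfo until the kernel, or an explicit sync, flushes it to disk.

$ dd if=/dev/zero of=./testfile bs=1M count=100   # write 100 MB; it lands in the cache first
$ grep Dirty /proc/meminfo                        # Dirty has grown: cached data not yet on disk
$ sync                                            # force the write-back to happen now
$ grep Dirty /proc/meminfo                        # Dirty drops back toward zero
$ rm ./testfile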


It's not quite as simple as this, but it might help you understand:

Buffer is for storing file metadata (permissions, location, etc.). Every memory page is kept track of here.

Cache is for storing actual file contents.


Buffer and cache:

A buffer is something that has yet to be "written" to disk.

A cache is something that has been "read" from the disk and stored for later use.
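The read side is easy to see for yourself (a sketch; /var/log/syslog is only an example of any reasonably large file that already exists): the first read has to come from disk, the second is served from the cache.

$ time cat /var/log/syslog > /dev/null        # first read: data has to come from disk
$ time cat /var/log/syslog > /dev/null        # second read: served from the page cache, much faster
$ echo 3 | sudo tee /proc/sys/vm/drop_caches  # drop the page cache, dentries and inodes (needs root)
$ time cat /var/log/syslog > /dev/null        # slow again, because the cache was just emptied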




