What and where are the stack and heap?

Programming language books explain that value types are created on the stack, and reference types are created on the heap, without explaining what these two things are. I haven't read a clear explanation of this. I understand what a stack is, but where and what are they (physically in a real computer's memory)?

  • To what extent are they controlled by the OS or language runtime?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?


The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
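The pointer-bump idea can be sketched in a few lines (a hypothetical toy allocator; a real call stack is managed by compiler-generated code and the CPU's stack pointer register):

```cpp
#include <cassert>
#include <cstddef>

// Toy "stack" allocator: allocation and deallocation just move one pointer.
// Illustrative only; the real call stack is not library code.
struct BumpStack {
    static const std::size_t kSize = 1024;
    char data[kSize];
    std::size_t top = 0; // the single piece of bookkeeping state

    void* allocate(std::size_t n) {
        assert(top + n <= kSize); // a "stack overflow" in miniature
        void* p = data + top;
        top += n;                 // allocate: increment the pointer
        return p;
    }
    void deallocate(std::size_t n) {
        top -= n;                 // free: decrement the pointer (LIFO only)
    }
};
```

Freeing out of order (the first block before the second) would corrupt this toy allocator, which is exactly the LIFO constraint a real stack enforces.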

The heap is memory set aside for dynamic allocation. Unlike the stack, there's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).

To answer your questions directly:

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

What is their scope?

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.

What determines the size of each of them?

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor's cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be - typically - synchronized with "all" other heap accesses in the program.

A clear demonstration (the original answer illustrates this with an image):

The stack:

  • Stored in computer RAM just like the heap.
  • Variables created on the stack will go out of scope and are automatically deallocated.
  • Much faster to allocate in comparison to variables on the heap.
  • Implemented with an actual stack data structure.
  • Stores local data, return addresses, used for parameter passing.
  • Can have a stack overflow when too much of the stack is used (mostly from infinite or too-deep recursion, or very large allocations).
  • Data created on the stack can be used without pointers.
  • You would use the stack if you know exactly how much data you need to allocate at compile time and it is not too big.
  • Usually has a maximum size already determined when your program starts.


The heap:

  • Stored in computer RAM just like the stack.
  • In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
  • Slower to allocate in comparison to variables on the stack.
  • Used on demand to allocate a block of data for use by the program.
  • Can have fragmentation when there are a lot of allocations and deallocations.
  • In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
  • Can have allocation failures if too big of a buffer is requested to be allocated.
  • You would use the heap if you don't know exactly how much data you will need at run time or if you need to allocate a lot of data.
  • Responsible for memory leaks.


void foo()
{
    char *pBuffer; // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
    bool b = true; // Allocated on the stack.
    if (b)
    {
        // Create 500 bytes on the stack
        char buffer[500];

        // Create 500 bytes on the heap
        pBuffer = new char[500];
    } // <-- buffer is deallocated here, pBuffer is not
} // <--- oops, there's a memory leak; I should have called delete[] pBuffer;
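One possible leak-free variant of the sketch above, using std::unique_ptr so the heap block is released automatically when the function returns:

```cpp
#include <memory>

// Leak-free variant: std::unique_ptr releases the heap block automatically.
int foo() {
    std::unique_ptr<char[]> pBuffer;   // the pointer object lives on the stack
    bool b = true;                     // allocated on the stack
    if (b) {
        char buffer[500];              // 500 bytes on the stack
        buffer[0] = 'x';
        pBuffer.reset(new char[500]);  // 500 bytes on the heap
        pBuffer[0] = buffer[0];
    }                                  // buffer is deallocated here; the heap block is not
    return pBuffer ? pBuffer[0] : 0;
}                                      // unique_ptr's destructor calls delete[] -- no leak
```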

The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.

  • In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).

    The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.

  • In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear 'top' item.

    Heap allocation requires maintaining a full record of what memory is allocated and what isn't, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting - identifying at runtime when memory is no longer in scope and deallocating it.

The images in the original answer do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap.

  • To what extent are they controlled by the OS or language runtime?

    As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn't really have runtime control over it; it's determined by the programming language, OS and even the system architecture.

    A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the OS.

  • What is their scope?

    The call stack is such a low-level concept that it doesn't relate to 'scope' in the programming sense. If you disassemble some code you'll see relative pointer-style references to portions of the stack, but as far as a higher-level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you'd expect given how your programming languages work.

    For the heap, scope is similarly hard to define. It is whatever is exposed by the OS, but your programming language adds its own rules about what a "scope" is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses, and there are page faults, etc.; they keep track of which pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).

  • What determines the size of each of them?

    Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory (more on that in the last paragraph). The language compiler or the OS determines its size. You don't store huge chunks of data on the stack, so it'll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, "stack overflow") or other unusual programming decisions.

    A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don't normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn't use memory that you haven't allocated yet or memory that you have freed.

  • What makes one faster?

    The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What's more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.

Size of stack and heap memory

"Stack is created in the top-level-address and the heap at the low-level-address." Please elaborate on this.

This is a myth. It may have a basis in historical truth. It might sometimes resonate with things you see in real life. But it is not literally true.

It's easy enough to explore, though:

#include <stdlib.h>
#include <stdio.h>

void check(int depth) {
    char c;
    char *ptr = malloc(1);
    printf("stack at %p, heap at %p\n", (void *)&c, (void *)ptr);
    if (depth <= 0) return;
    check(depth - 1);
}

int main() {
    check(10);
    return 0;
}

On my machine I see:

stack at 0x22ac3b, heap at 0x20010240
stack at 0x22ac0b, heap at 0x200485b0
stack at 0x22abdb, heap at 0x200485c0
stack at 0x22abab, heap at 0x200485d0
stack at 0x22ab7b, heap at 0x200485e0
stack at 0x22ab4b, heap at 0x200485f0
stack at 0x22ab1b, heap at 0x20048600
stack at 0x22aaeb, heap at 0x20048610
stack at 0x22aabb, heap at 0x20048620
stack at 0x22aa8b, heap at 0x20048630
stack at 0x22aa5b, heap at 0x20048640

So, the stack is going downwards and the heap is going upwards (as you might expect based on the myth), but the stack has the smaller address, and they are not growing toward each other (myth busted).

Btw, my check function is tail-recursive, and on some implementations with some compiler options you might see the stack not moving at all. Which tells you something about why the standard doesn't mandate how all this works -- if it did it might inadvertently forbid useful optimizations.

As mentioned already, sizes are OS-specific. For example, on Windows using Visual Studio, the default stack size is 1 MB.


On Linux the following command shows the current stack size limit:

ulimit -s (or ulimit -a for all limits)

On my 64-bit Linux Mint it shows 8192 KB.

Every program, when loaded into memory, has several segments. In assembly one can indicate each of those using the .data, .code, etc. directives (Intel x86).

The data segment has several subsections; both the stack and the heap are part of it, in addition to several others.

The stack can also grow implicitly: when you make another function call, an activation record is pushed onto the stack, thereby using more stack memory. That is why infinite recursion results in a crash when a program runs out of its allocated stack.

When a function call returns, that activation record is popped and stack shrinks.

In contrast, the heap grows from the opposite direction and contains all dynamically allocated memory.

The reason these two segments grow in opposite directions is to maximize the utilization of their combined memory. Note that, as mentioned in the comments, this is not required by the C standard, but most common OSes implement it this way.

------ stack starts ----------- stack grows downward

-------- Unless they cross each other a program is okay to run.

------- heap starts ------------heap grows upwards

If your program uses no heap, your stack can utilize the maximum memory, including that of the heap too. If the program makes few recursive calls and uses few local variables (i.e. uses less stack memory), it can utilize the heap the most.

Other parts of the data segment are BSS, etc., which might contain fields such as uninitialized static variables.

What is the initial size with which a stack/heap is created? and who decides it?

This is compiler- and OS-specific.

Where in physical memory are they created? I see general descriptions such as "Heap is created in the top-level-address and stack at the low-level-address".

This is compiler- and OS-specific.

Really. The language standard neither mandates a minimum stack size nor specifies the location of the stack or the heap in memory. The reason for that is to make C programs less dependent on these details and therefore more portable to different platforms (read: different OSes, different CPUs, different compilers).

Stack Memory vs Heap Memory

Stack memory is specifically the range of memory that is accessible via the Stack register of the CPU. The Stack was used as a way to implement the "Jump-Subroutine"-"Return" code pattern in assembly language, and also as a means to implement hardware-level interrupt handling. For instance, during an interrupt, the Stack was used to store various CPU registers, including Status (which indicates the results of an operation) and Program Counter (where was the CPU in the program when the interrupt occurred).

Stack memory is very much the consequence of usual CPU design. The speed of its allocation/deallocation is fast because it is strictly a last-in/first-out design. It is a simple matter of a move operation and a decrement/increment operation on the Stack register.

Heap memory was simply the memory that was left over after the program was loaded and the Stack memory was allocated. It may (or may not) include global variable space (it's a matter of convention).

Modern pre-emptive multitasking OS's with virtual memory and memory-mapped devices make the actual situation more complicated, but that's Stack vs Heap in a nutshell.

In C++ the stack memory is where local variables get stored/constructed. The stack is also used to hold parameters passed to functions.

The stack is very like the std::stack class: you push parameters onto it and then call a function. The function then knows that the parameters it expects can be found on the end of the stack. Likewise, the function can push locals onto the stack and pop them off before returning. (Caveat: compiler optimizations and calling conventions mean things aren't this simple.)

The stack is really best understood from a low level, and I'd recommend this article: Art of Assembly - Passing Parameters on the Stack. Rarely, if ever, would you consider any sort of manual stack manipulation from C++.

Generally speaking, the stack is preferred, as it is usually in the CPU cache, so operations involving objects stored on it tend to be faster. However, the stack is a limited resource and shouldn't be used for anything large. Running out of stack memory is called a stack overflow. It's a serious thing to encounter, but you really shouldn't come across one unless you have a runaway recursive function or something similar.

Heap memory is much as rskar says. Generally speaking, in C++ objects allocated with new, or blocks of memory allocated with the likes of malloc, end up on the heap. Heap memory almost always must be manually freed, though you should really use a smart pointer class or similar to avoid needing to remember to do so. Running out of heap memory can result in a std::bad_alloc being thrown.

It's a language abstraction - some languages have both, some one, some neither.

In the case of C++, the code is not run in either the stack or the heap. You can test what happens if you run out of heap memory by repeatedly calling new to allocate memory in a loop without calling delete to free it. But make a system backup before doing this.

Why is memory split up into stack and heap?

Stack: The stack is used as a sort of temporary scratch pad for use by the block of code that's currently executing, and whatever block called the current one, and whatever block called that one, and so on. When the current block exits, the local variables it was using are forgotten. As the name indicates, the stack is used in a last in, first out manner.

One of the most important uses of the stack is to keep track of the current call chain. When one function calls another, the caller pushes the address of the next instruction (the return address) onto the stack. As each function exits, it pops its caller's return address off the stack and continues executing code starting at that address. The stack is also used for communicating function parameters and return values between caller and callee.

Heap: The heap is different; there's no particular order to it. If you want to allocate memory in a block of code and have that memory stick around beyond the end of the block, you allocate it on the heap. Of course, you'll also need to store a pointer/reference to it somewhere so that other code can find that memory; most languages provide ways to do that.

Speed: Differences in speed aren't due to any property of the memory itself; as you say in your question, both stack and heap typically inhabit the same physical memory. Allocating space on the stack is quick due to the stack's LIFO nature: if you push something onto the stack, there's only one place it can end up. By contrast, allocating a block on the heap requires finding a large enough contiguous free region in memory. A stack allocation can be as quick as a single instruction; a heap allocation requires a call to a memory allocation function like malloc().

Static v. dynamic: Allocating memory on the heap is dynamic -- whether to allocate a block, and the size of the block, can be determined according to input the program receives while it's running. Regions of memory allocated on the heap can even be resized if necessary. It's possible to dynamically allocate memory on the stack, too (see the C standard library function alloca()), but that memory will be lost as soon as the current function exits. Stack allocations are usually static -- the compiler determines how much space is needed for the (non-register) parameters, return data, and local variables, and it generates code to reserve the necessary space on the stack when the function is called.

Example: Imagine that you're creating a word processor. You can't know ahead of time how large the document will be, or even how many documents will be in use at the same time. At the same time, you want the user's documents to remain in memory as long as the user wants to keep them open. If you try to allocate memory for the documents on the stack you'll find it difficult to have more than one open at once, and you'll need to create a single function that creates, edits, saves, and closes the document. Allocating the space on the heap allows you to create as many documents as you like, each sized appropriately for the data it contains, and to avoid tying the lifetime of the documents to the lifetime of any particular function.

Summary: In a nutshell, the stack holds the values of variables (sometimes registers are used instead), while the heap is used for allocating memory that will be used beyond the lifetime of the current block.

You cannot only use a stack, because a stack requires a last-in first-out allocation & deallocation order (i.e. you can only deallocate the newest allocated data; in a stack you cannot deallocate some old data and keep some newer one).

Actually, you can get rid of the stack (keeping only the heap). See Appel's paper Garbage Collection Can Be Faster Than Stack Allocation and his Compiling with Continuations book.

And "heap" does not have a well-defined meaning (other than "dynamically allocated memory which is not on the stack"). Actually, on Linux systems, allocating a big chunk of memory using the mmap system call is fairly quick (but malloc implementations try to avoid mmap and prefer reusing freed memory). The real issue is the allocation of small memory zones.

And read more about garbage collection techniques. In C or C++ you might use Boehm's GC.

A stack is often useful, notably for recursive function calls. It is so useful (e.g. in C) that today's processors usually have a dedicated stack pointer register (used by the CALL and RET machine instructions for calling and returning). But this was not always the case; on some processors (e.g. the IBM 360), the stack pointer is a conventional register, not a hardcoded one.

The memory is just the same for both of them, but the stack and the heap are 2 different data structures that are useful for different purposes.

The stack is a very primitive abstraction that is needed by any microprocessor in order to execute instructions on a couple of operands (usually processor registers or memory addresses).

The heap is a general allocation memory area where you usually store data that is not bound to the stack; that is, its lifetime is longer than if it were stored on the stack, or, said another way, the data is going to be accessed by different portions of code.