[memory-management] What and where are the stack and heap?



Stack:

  • Stored in computer RAM just like the heap.
  • Variables created on the stack will go out of scope and are automatically deallocated.
  • Much faster to allocate in comparison to variables on the heap.
  • Implemented with an actual stack data structure.
  • Stores local data and return addresses; used for parameter passing.
  • Can have a stack overflow when too much of the stack is used (mostly from infinite or too-deep recursion, or very large allocations).
  • Data created on the stack can be used without pointers.
  • You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
  • Usually has a maximum size already determined when your program starts.


Heap:

  • Stored in computer RAM just like the stack.
  • In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
  • Slower to allocate in comparison to variables on the stack.
  • Used on demand to allocate a block of data for use by the program.
  • Can have fragmentation when there are a lot of allocations and deallocations.
  • In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
  • Can have allocation failures if too big a buffer is requested.
  • You would use the heap if you don't know exactly how much data you will need at run time, or if you need to allocate a lot of data.
  • Responsible for memory leaks.


int foo()
{
  char *pBuffer; // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
  bool b = true; // Allocated on the stack.
  if (b)
  {
    // Create 500 bytes on the stack
    char buffer[500];

    // Create 500 bytes on the heap
    pBuffer = new char[500];

  } // <-- buffer is deallocated here, pBuffer is not
} // <--- oops there's a memory leak, I should have called delete[] pBuffer;

Programming language books explain that value types are created on the stack and reference types are created on the heap, without explaining what these two things are. I have not read a clear explanation of this. I understand what a stack is, but where and what are they (physically, in a real computer's memory)?

  • To what extent are they controlled by the OS or language runtime?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?

A couple of cents: I think it will be good to draw memory graphically and more simply:

Arrows show where the stack and heap grow. The process stack size has a limit defined in the OS; thread stack sizes are usually limited by parameters in the thread-creation API. The heap is usually limited by the process's maximum virtual memory size, e.g. 2-4 GB for a 32-bit process.

Put simply: the process heap is shared by the process and all the threads inside it, and is used for memory allocation in the common case with something like malloc().

The stack is quick memory used, in the common case, to store function return pointers, variables passed as parameters in function calls, and local function variables.

  • Introduction

Physical memory is the range of the physical addresses of the memory cells in which an application or system stores its data, code, and so on during execution. Memory management denotes the managing of these physical addresses by swapping the data from physical memory to a storage device and then back to physical memory when needed. The OS implements the memory management services using virtual memory. As a C# application developer you do not need to write any memory management services. The CLR uses the underlying OS memory management services to provide the memory model for C# or any other high-level language targeting the CLR.

Figure 4-1 shows physical memory that has been abstracted and managed by the OS, using the virtual memory concept. Virtual memory is the abstract view of the physical memory, managed by the OS. Virtual memory is simply a series of virtual addresses, and these virtual addresses are translated by the CPU into the physical address when needed.

Figure 4-1. CLR memory abstraction

The CLR provides the memory management abstraction layer for the virtual execution environment, using the operating system's memory services. The abstracted concepts the CLR uses are AppDomain, thread, stack, heap, memory-mapped file, and so on. The concept of the application domain (AppDomain) gives your application an isolated execution environment.

  • Memory Interaction between the CLR and OS

By looking at the stack trace while debugging the following C# application, using WinDbg, you will see how the CLR uses the underlying OS memory management services (e.g., the HeapFree method from KERNEL32.dll, the RtlpFreeHeap method from ntdll.dll) to implement its own memory model:

using System;
namespace CH_04
{
    class Program
    {
        static void Main(string[] args)
        {
            Book book = new Book();
        }
    }

    public class Book
    {
        public void Print() { Console.WriteLine(ToString()); }
    }
}

The compiled assembly of the program is loaded into WinDbg to start debugging. You use the following commands to initialize the debugging session:

0:000> sxe ld clrjit

0:000> g

0:000> .loadby sos clr

0:000> .load C:\Windows\Microsoft.NET\Framework\v4.0.30319\sos.dll

Then, you set a breakpoint at the Main method of the Program class, using the !bpmd command:

0:000>!bpmd CH_04.exe CH_04.Program.Main

To continue the execution and break at the breakpoint, execute the g command:

0:000> g

When the execution breaks at the breakpoint, you use the !eestack command to view the stack trace details of all threads running for the current process. The following output shows the stack trace for all the threads running for the application CH_04.exe:

0:000> !eestack

Thread 0

Current frame: (MethodDesc 00233800 +0 CH_04.Program.Main(System.String[]))

ChildEBP RetAddr Caller, Callee

0022ed24 5faf21db clr!CallDescrWorker+0x33

/ trace removed /

0022f218 77712d68 ntdll!RtlFreeHeap+0x142, calling ntdll!RtlpFreeHeap

0022f238 771df1ac KERNEL32!HeapFree+0x14, calling ntdll!RtlFreeHeap

0022f24c 5fb4c036 clr!EEHeapFree+0x36, calling KERNEL32!HeapFree

0022f260 5fb4c09d clr!EEHeapFreeInProcessHeap+0x24, calling clr!EEHeapFree

0022f274 5fb4c06d clr!operator delete[]+0x30, calling clr!EEHeapFreeInProcessHeap / trace removed /

0022f4d0 7771316f ntdll!RtlpFreeHeap+0xb7a, calling ntdll!_SEH_epilog4

0022f4d4 77712d68 ntdll!RtlFreeHeap+0x142, calling ntdll!RtlpFreeHeap

0022f4f4 771df1ac KERNEL32!HeapFree+0x14, calling ntdll!RtlFreeHeap

/ trace removed /

This stack trace indicates that the CLR uses OS memory management services to implement its own memory model. Any memory operation in .NET goes via the CLR memory layer to the OS memory management layer.

Figure 4-2 illustrates a typical C# application memory model used by the CLR at runtime.

Figure 4-2 . A typical C# application memory model

The CLR memory model is tightly coupled with the OS memory management services. To understand the CLR memory model, it is important to understand the underlying OS memory model. It is also crucial to know how the physical memory address space is abstracted into the virtual memory address space, the ways the virtual address space is being used by the user application and system application, how virtual-to-physical address mapping works, how memory-mapped file works, and so on. This background knowledge will improve your grasp of CLR memory model concepts, including AppDomain, stack, and heap.

For more information, refer to this book:

C# Deconstructed: Discover how C# works on the .NET Framework

This book, CLR via C#, and Windows Internals are excellent resources for learning the .NET Framework in depth and its relation to the OS.

I have something to share with you, although major points are already penned.


Stack:

  • Very fast access.
  • Stored in RAM.
  • Function calls are loaded here along with the local variables and function parameters passed.
  • Space is freed automatically when the program goes out of a scope.
  • Stored in sequential memory.


Heap:

  • Slower access compared to the stack.
  • Stored in RAM.
  • Dynamically created variables are stored here; the allocated memory must later be freed after use.
  • Stored wherever memory allocation is done; always accessed through a pointer.

Interesting note:

  • Had function calls been stored on the heap, it would have resulted in 2 messy points:
    1. Due to sequential storage on the stack, execution is faster. Storage on the heap would have consumed far more time, making the whole program execute more slowly.
    2. If functions were stored on the heap (messy storage reached through pointers), there would have been no way to return to the caller's address (which the stack gives due to sequential storage in memory).

Feedback is welcome.

I think many other people have given you mostly correct answers on this matter.

One detail that has been missed, however, is that the "heap" should in fact probably be called the "free store". The reason for this distinction is that the original free store was implemented with a data structure known as a "binomial heap." For that reason, allocating from early implementations of malloc()/free() was allocation from a heap. However, in this modern day, most free stores are implemented with very elaborate data structures that are not binomial heaps.

Simply, the stack is where local variables get created. Also, every time you call a subroutine the program counter (pointer to the next machine instruction), any important registers, and sometimes the parameters get pushed on the stack. Then any local variables inside the subroutine are pushed onto the stack (and used from there). When the subroutine finishes, that stuff all gets popped back off the stack. The PC and register data get put back where they were as they are popped, so your program can go on its merry way.

The heap is the area of memory dynamic memory allocations are made out of (explicit "new" or "allocate" calls). It is a special data structure that can keep track of blocks of memory of varying sizes and their allocation status.

In "classic" systems RAM was laid out such that the stack pointer started out at the bottom of memory, the heap pointer started out at the top, and they grew towards each other. If they overlap, you are out of RAM. That doesn't work with modern multi-threaded OSes though. Every thread has to have its own stack, and those can get created dynamically.

What is a stack?

A stack is a pile of objects, typically one that is neatly arranged.

Stacks in computing architectures are regions of memory where data is added or removed in a last-in-first-out manner.
In a multi-threaded application, each thread will have its own stack.

What is a heap?

A heap is an untidy collection of things piled up haphazardly.

In computing architectures the heap is an area of dynamically-allocated memory that is managed automatically by the operating system or the memory manager library.
Memory on the heap is allocated, deallocated, and resized regularly during program execution, and this can lead to a problem called fragmentation.
Fragmentation occurs when memory objects are allocated with small spaces in between that are too small to hold additional memory objects.
The net result is a percentage of the heap space that is not usable for further memory allocations.

Both together

In a multi-threaded application, each thread will have its own stack. But, all the different threads will share the heap.
Because the different threads share the heap in a multi-threaded application, this also means that there has to be some coordination between the threads so that they don't try to access and manipulate the same piece(s) of memory in the heap at the same time.

Which is faster – the stack or the heap? And why?

The stack is much faster than the heap.
This is because of the way that memory is allocated on the stack.
Allocating memory on the stack is as simple as moving the stack pointer up.

For people new to programming, it's probably a good idea to use the stack since it's easier.
Because the stack is small, you would want to use it when you know exactly how much memory you will need for your data, or if you know the size of your data is very small.
It's better to use the heap when you know that you will need a lot of memory for your data, or you just are not sure how much memory you will need (like with a dynamic array).

Java Memory Model

The stack is the area of memory where local variables (including method parameters) are stored. When it comes to object variables, these are merely references (pointers) to the actual objects on the heap.
Every time an object is instantiated, a chunk of heap memory is set aside to hold the data (state) of that object. Since objects can contain other objects, some of this data can in fact hold references to those nested objects.

Others have answered the broad strokes pretty well, so I'll throw in a few details.

  1. Stack and heap need not be singular. A common situation in which you have more than one stack is if you have more than one thread in a process. In this case each thread has its own stack. You can also have more than one heap, for example some DLL configurations can result in different DLLs allocating from different heaps, which is why it's generally a bad idea to release memory allocated by a different library.

  2. In C you can get the benefit of variable length allocation through the use of alloca , which allocates on the stack, as opposed to malloc, which allocates on the heap. This memory won't survive your return statement, but it's useful for a scratch buffer.

  3. Making a huge temporary buffer on Windows that you don't use much of is not free. This is because the compiler will generate a stack probe loop that is called every time your function is entered to make sure the stack exists (because Windows uses a single guard page at the end of your stack to detect when it needs to grow the stack; if you access memory more than one page off the end of the stack you will crash). Example:

void myfunction()
{
   char big[10000000];
   // Do something that only uses the first 1K of big 99% of the time.
}


Stack:

  • Very fast access
  • You don't have to explicitly de-allocate variables
  • Space is managed efficiently by the CPU; memory will not become fragmented
  • Local variables only
  • Limit on stack size (OS-dependent)
  • Variables cannot be resized


Heap:

  • Variables can be accessed globally
  • No limit on memory size
  • (Relatively) slower access
  • No guaranteed efficient use of space; memory may become fragmented over time as blocks of memory are allocated, then freed
  • You must manage memory (you're in charge of allocating and freeing variables)
  • Variables can be resized using realloc()

The stack: When you call a function, the arguments to that function plus some other overhead are put on the stack. Some information (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.

Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, and the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).

The heap: The heap is a generic name for where you put the data that you create on the fly. If you don't know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.

Thus, the heap is far more complicated, because there end up being regions of unused memory interleaved with chunks that are in use; the memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).

Implementation: Implementing both the stack and the heap is usually down to the runtime/OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally, to avoid relying on the OS for memory.

This is only practical if your memory usage is quite different from the norm, i.e. for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.

Physical location in memory: This is less relevant than you think because of a technology called virtual memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disk!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation-specific) and frankly unimportant.


The answers to your questions are implementation specific and may vary across compilers and processor architectures. However, here is a simplified explanation.

  • Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
  • In a multi-threaded environment each thread will have its own completely independent stack, but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.

The heap:

  • The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. The meta-information about the blocks on the heap is also often stored on the heap, in a small area just in front of every block.
  • As the heap grows, new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation, the size can often be increased by acquiring more memory from the underlying operating system.
  • Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may then fail because none of the free blocks are large enough to satisfy the request, even though the combined size of the free blocks may be big enough. This is called heap fragmentation.
  • When a used block that is adjacent to a free block is deallocated, the new free block may be merged with the adjacent free block to create a larger free block, effectively reducing the fragmentation of the heap.


The stack:

  • The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
  • The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don't be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it; remember that the stack grows towards the bottom). The values pushed and popped are the values of CPU registers.
  • When a function is called, the CPU uses special instructions that push the current instruction pointer, i.e. the address of the code currently executing, onto the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
  • When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32-bit variable, four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
  • If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
  • Nesting function calls works like a charm. Each new call will allocate function parameters, the return address, and space for local variables, and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
  • As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.



How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.

The stack, however, is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn't too hard, since it can be handled in the library call that deals with the heap. However, growing the stack is often impossible, as the stack overflow is only discovered when it is too late; and shutting down the thread of execution is the only viable option.

OK, simply and in short words: they mean ordered and not ordered...!

Stack: In the stack, items get placed on top of each other, which means they are faster and more efficient to process!...

So there is always an index to point to a specific item, and processing is faster; there is a relationship between the items as well!...

Heap: No order; processing is slower and values are messed up together with no specific order or index... they are random and there is no relationship between them... so execution and usage time can vary...

I also created the image below to show how they may look: