[Java] Why is processing a sorted array faster than processing an unsorted array?


Answers

Branch prediction.

With a sorted array, the condition data[c] >= 128 is false for a streak of values at the start, then becomes true for all later values. That is easy to predict. With an unsorted array, you pay the branching cost.

Question

Here is a piece of C++ code that shows some very peculiar behavior. For some strange reason, sorting the data miraculously makes the code almost six times faster:

#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

Initially I thought this might be just a language or compiler anomaly, so I tried Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

With a somewhat similar but less extreme result.

My first thought was that sorting brings the data into the cache, but then I thought how silly that is: the array was just generated.

  • What is going on?
  • Why is processing a sorted array faster than processing an unsorted array?
  • The code is summing up some independent terms, so the order should not matter.



The above behavior is happening because of branch prediction.

To understand branch prediction, one must first understand the instruction pipeline:

Any instruction is broken into a sequence of steps so that different steps can be executed concurrently in parallel. This technique is known as instruction pipelining, and it is used to increase throughput in modern processors. To understand this better, please see this example on Wikipedia.

Generally, modern processors have quite long pipelines, but for ease let's consider these 4 steps only.

  1. IF -- Fetch the instruction from memory
  2. ID -- Decode the instruction
  3. EX -- Execute the instruction
  4. WB -- Write back to CPU register

4-stage pipeline in general for 2 instructions.

Moving back to the above question let's consider the following instructions:

                        A) if (data[c] >= 128)
                                /\
                               /  \
                              /    \
                        true /      \ false
                            /        \
                           /          \
                          /            \
                         /              \
              B) sum += data[c];          C) for loop or print().

Without branch prediction, the following would occur:

To execute instruction B or instruction C, the processor has to wait until instruction A reaches the EX stage in the pipeline, because the decision to go to instruction B or instruction C depends on the result of instruction A. So the pipeline will look like this:

when if condition returns true:

When if condition returns false:

As a result of waiting for the result of instruction A, the total CPU cycles spent in the above case (without branch prediction; for both true and false) is 7.

So what is branch prediction?

Branch predictor will try to guess which way a branch (an if-then-else structure) will go before this is known for sure. It will not wait for the instruction A to reach the EX stage of the pipeline, but it will guess the decision and go to that instruction (B or C in case of our example).

In case of a correct guess, the pipeline looks something like this:

If it is later detected that the guess was wrong then the partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay. The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the greater the need for a good branch predictor .

In the OP's code, the first time the conditional is evaluated, the branch predictor has no information to base a prediction on, so it will more or less arbitrarily choose the next instruction. Later in the for loop, it can base the prediction on the history. For an array sorted in ascending order, there are three possibilities:

  1. All the elements are less than 128
  2. All the elements are greater than or equal to 128
  3. The first elements are less than 128, and the later elements are greater than or equal to 128

Let us assume that the predictor will always assume the true branch on the first run.

So in the first case, apart from the initial guess, it will always predict correctly, since the outcome never changes. In the second case, it will initially predict wrong, but after a few iterations it will predict correctly. In the third case, it will predict correctly while the elements are less than 128; after the transition it will fail for a short while, and then correct itself once it sees the mispredictions in its history.

In all these cases the number of mispredictions is small, so the processor only rarely needs to discard the partially executed instructions and start over with the correct branch, resulting in fewer wasted CPU cycles.

But in the case of a random unsorted array, the predictor will need to discard the partially executed instructions and start over with the correct branch most of the time, resulting in far more wasted CPU cycles compared to the sorted array.




Branch-prediction gain!

It is important to understand that branch misprediction doesn't slow down programs. The cost of a missed prediction is just as if branch prediction didn't exist and you waited for the evaluation of the expression to decide what code to run (further explanation in the next paragraph).

if (expression)
{
    // Run 1
} else {
    // Run 2
}

Whenever there's an if-else / switch statement, the expression has to be evaluated to determine which block should be executed. In the assembly code generated by the compiler, conditional branch instructions are inserted.

A branch instruction can cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order (ie if the expression is false, the program skips the code of the if block) depending on some condition, which is the expression evaluation in our case.

That being said, the processor tries to predict the outcome prior to it actually being evaluated. It will fetch instructions from the if block, and if the expression turns out to be true, then wonderful! We gained the time it took to evaluate it and made progress in the code; if not, then we are running the wrong code, the pipeline is flushed, and the correct block is run.

Visualization:

Let's say you need to pick route 1 or route 2. You could stop at ## and wait for your partner to check the map, or you could just pick route 1: if you were lucky (route 1 is the correct route), then great, you didn't have to wait for your partner to check the map (you saved the time it would have taken him to check the map); otherwise, you just turn back.

While flushing pipelines is super fast nowadays, taking this gamble is worth it. Predicting sorted data or data that changes slowly is always easier and better than predicting fast changes.

 O      Route 1  /-------------------------------
/|\             /
 |  ---------##/
/ \            \
                \
        Route 2  \--------------------------------



If you are curious about even more optimizations that can be done to this code, consider this:

Starting with the original loop:

for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

With loop interchange, we can safely change this loop to:

for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

Then, you can see that the if condition is constant throughout the execution of the i loop, so you can hoist the if out:

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}

Then, you can see that the inner loop can be collapsed into one single expression, assuming the floating-point model allows it (/fp:fast, for example):

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}

And that one is 100,000 times faster than before.




On ARM, there is no branch needed, because every instruction has a 4-bit condition field, which is tested at zero cost. This eliminates the need for short branches. The inner loop would look something like the following, and there would be no branch-prediction penalty. Therefore, the sorted version would likely run slower than the unsorted version on ARM, because of the extra overhead of sorting:

MOV R0, #0     // R0 = sum = 0
MOV R1, #0     // R1 = c = 0
ADR R2, data   // R2 = addr of data array (put this instruction outside outer loop)
.inner_loop    // Inner loop branch label
    LDRB R3, [R2, R1]     // R3 = data[c]
    CMP R3, #128          // compare R3 to 128
    ADDGE R0, R0, R3      // if R3 >= 128, then sum += data[c] -- no branch needed!
    ADD R1, R1, #1        // c++
    CMP R1, #arraySize    // compare c to arraySize
    BLT inner_loop        // Branch to inner_loop if c < arraySize



This question has already been answered excellently many times over. Still I'd like to draw the group's attention to yet another interesting analysis.

Recently this example (modified very slightly) was also used as a way to demonstrate how a piece of code can be profiled within the program itself on Windows. Along the way, the author also shows how to use the results to determine where the code is spending most of its time in both the sorted & unsorted case. Finally the piece also shows how to use a little known feature of the HAL (Hardware Abstraction Layer) to determine just how much branch misprediction is happening in the unsorted case.

The link is here: http://www.geoffchappell.com/studies/windows/km/ntoskrnl/api/ex/profile/demo.htm




One way to avoid branch prediction errors is to build a lookup table, and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not sorted. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's the code:

// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds, but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's the C++ code:

// declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// use the lookup table after it is built
for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        sum += lut[data[c]];
    }
}

In this case the lookup table was only 256 bytes, so it fit nicely in cache and all was fast. This technique wouldn't work well if the data was 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value where we only want the top half, we could shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.

EDIT: One thing I forgot to put in.

The technique of indexing into an array, instead of using an if statement, can be used for deciding which pointer to use. I saw a library that implemented binary trees, and instead of having two named pointers ( pLeft and pRight or whatever) had a length-2 array of pointers, and used the "decision bit" technique to decide which one to follow. For example, instead of:

if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

this library would do something like:

i = (x < node->value);
node = node->link[i];

Here's a link to this code: Red Black Trees , Eternally Confuzzled




I just read this question and its answers, and I feel an answer is missing.

A common way to eliminate branch prediction that I've found to work particularly well in managed languages is a table lookup instead of using a branch (although I haven't tested it in this case).

This approach works in general if:

  1. it's a small table and is likely to be cached in the processor, and
  2. you are running things in a quite tight loop and/or the processor can preload the data.

Background and why

Phew. So what on earth does that mean?

From a processor perspective, your memory is slow. To compensate for the difference in speed, a couple of caches are built into your processor (the L1/L2 caches). So imagine that you're doing your nice calculations and figure out that you need a piece of memory. The processor will issue a 'load' operation that loads the piece of memory into cache, and then uses the cache to do the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program.

Like branch prediction, this was optimized back in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load it into the cache before the operation actually needs it. As we've already seen, branch prediction sometimes goes horribly wrong; in the worst-case scenario you need to go back and actually wait for a memory load, which will take forever (in other words: a failing branch prediction is bad, but a memory load after a failed branch prediction is just horrible!).

Fortunately for us, if the memory access pattern is predictable, the processor will load it into its fast cache and all is well.

The first thing we need to know is: what counts as small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64K, it's probably worth reconsidering.

Constructing a table

So we've figured out that we can create a small table. The next thing to do is to get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, remove and perhaps multiply). You want your input to be translated by the lookup function into some kind of 'unique key' in your table, which then simply gives you the answer for all the work you wanted it to do.

In this case: >= 128 means we can keep the value, < 128 means we get rid of it. The easiest way to do that is by using an 'AND': if we keep the value, we AND it with 7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2, so we can make a table of 32768/128 integers and fill it with one zero and a lot of 7FFFFFFFs.

Managed languages

You might wonder why this works well in managed languages. After all, managed languages check the boundaries of arrays with a branch to ensure you don't mess up...

Well, not exactly... :-)

There has been quite some work on eliminating this branch for managed languages. For example:

for (int i = 0; i < array.Length; ++i)
   // Use array[i]

In this case, it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (and I expect Java does similar things) will notice this and remove the check altogether. Wow, that means no branch. Similarly, it will deal with other obvious cases.

If you run into trouble with lookups in managed languages, the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable, and watch it going faster.

The result of this case

// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);

//To keep the spirit of the code in-tact I'll make a separate lookup table
// (I assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[256];

for (int c = 0; c < 256; ++c)
    lookup[c] = (c >= 128) ? c : 0;

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        // Here you basically want to use simple operations - so no
        // random branches, but things like &, |, *, -, +, etc. are fine.
        sum += lookup[data[j]];
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);

Console.ReadLine();



It's about branch prediction. So what is it?

  • A branch predictor is one of the ancient performance-improving techniques which still finds relevance in modern architectures. While the simple prediction techniques provide fast lookup and power efficiency, they suffer from a high misprediction rate.

  • On the other hand, complex branch predictions (either neural-based or variants of two-level branch prediction) provide better prediction accuracy, but they consume more power and their complexity increases exponentially.

  • In addition to this, in complex prediction techniques the time taken to predict the branches is itself very high (ranging from 2 to 5 cycles), which is comparable to the execution time of actual branches.

  • Branch prediction is essentially an optimization (minimization) problem where the emphasis is on achieving the lowest possible miss rate, low power consumption, and low complexity with minimum resources.

There really are three different kinds of branches:

Forward conditional branches - based on a run-time condition, the PC (program counter) is changed to point to an address forward in the instruction stream.

Backward conditional branches - the PC is changed to point backward in the instruction stream. The branch is based on some condition, such as branching backwards to the beginning of a program loop when a test at the end of the loop states the loop should be executed again.

Unconditional branches - this includes jumps, procedure calls and returns that have no specific condition. For example, an unconditional jump instruction might be coded in assembly language as simply "jmp", and the instruction stream must immediately be directed to the target location pointed to by the jump instruction, whereas a conditional jump that might be coded as "jmpne" would redirect the instruction stream only if the result of a comparison of two values in a previous "compare" instruction shows the values to not be equal. (The segmented addressing scheme used by the x86 architecture adds extra complexity, since jumps can be either "near" (within a segment) or "far" (outside the segment). Each type has different effects on branch prediction algorithms.)

Static/dynamic Branch Prediction : Static branch prediction is used by the microprocessor the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the conditional branch code.





Along the same lines (I don't think this was highlighted by any answer), it's good to mention that sometimes (especially in software where performance matters, like in the Linux kernel) you can find if statements like the following:

if (likely( everything_is_ok ))
{
    /* Do something */
}

or similarly:

if (unlikely(very_improbable_condition))
{
    /* Do something */    
}

Both likely() and unlikely() are in fact macros that are defined by using something like the GCC's __builtin_expect to help the compiler insert prediction code to favour the condition taking into account the information provided by the user. GCC supports other builtins that could change the behavior of the running program or emit low level instructions like clearing the cache, etc. See this documentation that goes through the available GCC's builtins.

Normally these kinds of optimizations are mainly found in hard real-time applications or embedded systems where execution time matters and is critical. For example, if you are checking for some error condition that only happens 1/10000000 times, then why not inform the compiler about this? This way, by default, the branch prediction would assume that the condition is false.



