[Java] Why is processing a sorted array faster than processing an unsorted array?


Answers

Branch prediction.

With the sorted array, the condition data[c] >= 128 is first false for a streak of values, then becomes true for all the later values. That's easy to predict. With an unsorted array, you pay for the branching cost.
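
Illustrated for the sorted case (N = branch not taken, T = branch taken), a pass over the data looks like one long, predictable run of each outcome to the predictor:

data[] = 0, 1, 2, ... 126, 127, 128, 129, ... 250, 251, 252
branch = N  N  N  ...  N    N    T    T   ...  T    T    T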

Question

Here is a piece of C++ code that shows some very peculiar behavior. For some strange reason, sorting the data makes the code almost six times faster.

#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

Initially, I thought this might be just a language or compiler anomaly, so I tried it in Java:

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

With a somewhat similar but less extreme result.

My first thought was that sorting brings the data into the cache, but then I thought how silly that was: the array was just generated.

  • What is going on?
  • Why is processing a sorted array faster than processing an unsorted array?
  • The code sums up some independent terms, so the order should not matter.



This question has already been answered excellently many times over. Still I'd like to draw the group's attention to yet another interesting analysis.

Recently this example (modified very slightly) was also used as a way to demonstrate how a piece of code can be profiled within the program itself on Windows. Along the way, the author also shows how to use the results to determine where the code is spending most of its time in both the sorted & unsorted case. Finally the piece also shows how to use a little known feature of the HAL (Hardware Abstraction Layer) to determine just how much branch misprediction is happening in the unsorted case.

The link is here: http://www.geoffchappell.com/studies/windows/km/ntoskrnl/api/ex/profile/demo.htm




One way to avoid branch prediction errors is to build a lookup table, and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not sorted. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's the code:

// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds, but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's the C++ code:

// declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// use the lookup table after it is built
for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        sum += lut[data[c]];
    }
}

In this case the lookup table was only 256 entries (about 1 KB of ints), so it fit nicely in cache and all was fast. This technique wouldn't work well if the data was 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value where we only want the top half, we could potentially shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.
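
A minimal sketch of that combination, assuming hypothetical 24-bit values stored in 32-bit ints and a mask-table variant of the lookup (the function and table names are illustrative, not part of the original benchmark):

#include <cstdint>

// 4096-entry table indexed by the top 12 bits of a 24-bit value.
// Entries for the top half of the range keep the value (via a mask);
// the rest zero it out. Call initLutMask() once before use.
static int32_t lutMask[4096];

static void initLutMask()
{
    for (unsigned i = 0; i < 4096; ++i)
        lutMask[i] = (i >= 2048) ? 0xFFFFFF : 0;
}

static long long sumTopHalf(const int32_t* data, unsigned n)
{
    long long sum = 0;
    for (unsigned c = 0; c < n; ++c)
        sum += data[c] & lutMask[data[c] >> 12]; // shift, then table lookup
    return sum;
}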

EDIT: One thing I forgot to put in.

The technique of indexing into an array, instead of using an if statement, can be used for deciding which pointer to use. I saw a library that implemented binary trees, and instead of having two named pointers (pLeft and pRight, or whatever) had a length-2 array of pointers and used the "decision bit" technique to decide which one to follow. For example, instead of:

if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

this library would do something like:

i = (x < node->value);
node = node->link[i];

Here's a link to this code: Red Black Trees, Eternally Confuzzled
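
For completeness, a minimal sketch of what such a node and search loop could look like, following the indexing convention of the snippet above (the struct layout and the find function are illustrative assumptions, not the library's actual code):

struct Node
{
    int   value;
    Node* link[2];  // link[1] plays the role of pLeft, link[0] of pRight,
                    // matching the snippet above
};

// Descend using the comparison result as the index -- no if/else on direction.
Node* find(Node* node, int x)
{
    while (node != nullptr && node->value != x)
    {
        int i = (x < node->value);  // the "decision bit": 1 = go left, 0 = go right
        node = node->link[i];
    }
    return node;
}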




I just read through this question and its answers, and I feel an answer is missing.

A common way of eliminating branch prediction that I've found to work particularly well in managed languages is a table lookup instead of using a branch (although I haven't tested it in this case).

This approach works in general if:

  1. it's a small table and likely to be cached in the processor, and
  2. you are running things in a quite tight loop and/or the processor can preload the data

Background and why

What the heck is that supposed to mean?

From a processor perspective, memory is slow. To compensate for the difference in speed, a couple of caches are built into the processor (the L1/L2 caches). So imagine you're doing your nice calculations and figure out that you need a piece of memory. The processor will perform its 'load' operation and load the piece of memory into the cache -- and then use the cache to do the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program.

Like branch prediction, this was optimized in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load it into the cache before the operation actually hits the cache. As we've already seen, branch prediction sometimes goes horribly wrong -- in the worst case scenario you need to go back and actually wait for a memory load, which will take forever (in other words: a failing branch prediction is bad; a memory load after a branch prediction failure is just horrible!).

Fortunately for us, if the memory access pattern is predictable, the processor will load it into its fast cache and all is well.

The first thing we need to know is: what is small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64K it's probably worth reconsidering.

Constructing a table

So we've figured out that we can create a small table. The next thing to do is get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, remove and perhaps multiply). You want your input to be translated by the lookup function into some kind of 'unique key' in your table, which then simply gives you the answer for all the work you wanted it to do.

In this case: >= 128 means we can keep the value, < 128 means we get rid of it. The easiest way to do that is by using an 'AND': if we keep it, we AND it with 7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2 -- so we can go ahead and make a table of 32768/128 integers and fill it with one zero and a lot of 7FFFFFFFs.
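
A minimal C++ sketch of that idea, reduced to the smallest possible mask table (the two-entry table and its name are illustrative assumptions; data and arraySize are the ones from the question):

// mask[0] drops the value, mask[1] keeps it.
const int mask[2] = { 0, 0x7FFFFFFF };

long long sum = 0;
for (unsigned c = 0; c < arraySize; ++c)
    sum += data[c] & mask[data[c] >> 7];  // no branch: adds 0 if < 128, data[c] otherwise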

Managed languages

You might wonder why this works well in managed languages. After all, managed languages check the boundaries of arrays with a branch to ensure you don't mess up...

Well, not exactly... :-)

There has been quite some work on eliminating this branch for managed languages. For example:

for (int i=0; i<array.Length; ++i)
   // Use array[i]

In this case, it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (but I expect Java does similar things) will notice this and remove the check altogether. WOW, that means no branch. Similarly, it will deal with other obvious cases.

If you run into trouble with lookups in managed languages, the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable -- and watch it go faster.
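
For example (a sketch; with the 256-entry table used below, the mask would be & 0xFF, which makes it provable that the index stays in [0, 255]):

sum += lookup[data[j] & 0xFF];  // index provably within the 256-entry table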

The outcome of this case

// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);

// To keep the spirit of the code intact, I'll make a separate lookup table
// (I assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[256];

for (int c = 0; c < 256; ++c)
    lookup[c] = (c >= 128) ? c : 0;

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        // Here you basically want to use simple operations - so no
        // random branches, but things like &, |, *, -, +, etc. are fine.
        sum += lookup[data[j]];
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);

Console.ReadLine();



Branch-prediction gain!

It is important to understand that branch misprediction doesn't slow down programs compared to having no prediction at all. The cost of a missed prediction is just as if branch prediction didn't exist and you waited for the evaluation of the expression to decide what code to run (further explanation in the next paragraph).

if (expression)
{
    // Run 1
} else {
    // Run 2
}

Whenever there's an if-else / switch statement, the expression has to be evaluated to determine which block should be executed. In the assembly code generated by the compiler, conditional branch instructions are inserted.

A branch instruction can cause a computer to begin executing a different instruction sequence, deviating from its default behavior of executing instructions in order (i.e., if the expression is false, the program skips the code of the if block), depending on some condition, which in our case is the expression evaluation.

That being said, the processor tries to predict the outcome prior to it actually being evaluated. It will fetch instructions from the if block, and if the expression turns out to be true, then wonderful! We gained the time it took to evaluate it and made progress in the code; if not, then we are running the wrong code, the pipeline is flushed, and the correct block is run.

Visualization:

Let's say you need to pick route 1 or route 2. Waiting for your partner to check the map, you have stopped at ## and waited; or you could just pick route 1, and if you were lucky (route 1 is the correct route), then great, you didn't have to wait for your partner to check the map (you saved the time it would have taken him to check the map). Otherwise, you just turn back.

While flushing pipelines is super fast, nowadays taking this gamble is worth it. Predicting sorted data or data that changes slowly is always easier and better than predicting fast changes.

 O      Route 1  /-------------------------------
/|\             /
 |  ---------##/
/ \            \
                \
        Route 2  \--------------------------------



If you are curious about even more optimizations that can be done to this code, consider this:

Start with the original loop:

for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

With loop interchange, we can safely change this loop to:

for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

Then, you can see that the if conditional is constant throughout the execution of the i loop, so you can hoist the if out:

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}

Then, you see that the inner loop can be collapsed into one single expression, assuming the floating point model allows it (/fp:fast is thrown, for example):

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}

That one is 100,000x faster than before.




On ARM, there is no branch needed, because every instruction has a 4-bit condition field, which is tested at zero cost. This eliminates the need for short branches. The inner loop would look something like the following, and there would be no branch prediction hit. Therefore, the sorted version would run slower than the unsorted version on ARM, because of the extra overhead of sorting:

MOV R0, #0     // R0 = sum = 0
MOV R1, #0     // R1 = c = 0
ADR R2, data   // R2 = addr of data array (put this instruction outside outer loop)
.inner_loop    // Inner loop branch label
    LDRB R3, [R2, R1]     // R3 = data[c]
    CMP R3, #128          // compare R3 to 128
    ADDGE R0, R0, R3      // if R3 >= 128, then sum += data[c] -- no branch needed!
    ADD R1, R1, #1        // c++
    CMP R1, #arraySize    // compare c to arraySize
    BLT inner_loop        // Branch to inner_loop if c < arraySize
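
A comparable effect can often be had in portable C++ as well. The following is a sketch, not this answer's code: it rewrites the inner loop from the question so that the comparison becomes an arithmetic mask, leaving no branch for a predictor to miss. It assumes an arithmetic right shift of negative signed ints, which mainstream compilers provide.

// Branch-free inner loop: build a 0 / all-ones mask from the comparison.
for (unsigned c = 0; c < arraySize; ++c)
{
    int t = (data[c] - 128) >> 31;  // t = -1 if data[c] < 128, else 0
    sum += ~t & data[c];            // adds data[c] only when data[c] >= 128
}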



The above behavior is happening because of branch prediction.

To understand branch prediction, one must first understand the instruction pipeline:

Any instruction is broken into a sequence of steps so that different steps can be executed concurrently in parallel. This technique is known as instruction pipelining, and it is used to increase throughput in modern processors. To understand this better, please see this example on Wikipedia.

Generally, modern processors have quite long pipelines, but for ease let's consider these 4 steps only.

  1. IF -- Fetch the instruction from memory
  2. ID -- Decode the instruction
  3. EX -- Execute the instruction
  4. WB -- Write back to CPU register

4-stage pipeline in general for 2 instructions.
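
A sketch of that, using the 4 stages listed above:

Clock cycle ->    1    2    3    4    5
Instruction 1:    IF   ID   EX   WB
Instruction 2:         IF   ID   EX   WB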

Moving back to the question above, let's consider the following instructions:

                        A) if (data[c] >= 128)
                                /\
                               /  \
                              /    \
                        true /      \ false
                            /        \
                           /          \
                          /            \
                         /              \
              B) sum += data[c];          C) for loop or print().

Without branch prediction, the following would occur:

To execute instruction B or instruction C, the processor will have to wait until instruction A reaches the EX stage in the pipeline, as the decision to go to instruction B or instruction C depends on the result of instruction A. So the pipeline will look like this:

when if condition returns true:
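
A sketch, assuming the 4-stage pipeline above (B cannot be fetched until A finishes its EX stage):

Clock cycle ->   1    2    3    4    5    6    7
A:               IF   ID   EX   WB
B:                              IF   ID   EX   WB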

When if condition returns false:
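
The same sketch, with C fetched instead of B:

Clock cycle ->   1    2    3    4    5    6    7
A:               IF   ID   EX   WB
C:                              IF   ID   EX   WB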

As a result of waiting for the result of instruction A, the total CPU cycles spent in the above case (without branch prediction; for both true and false) is 7.

So what is branch prediction?

The branch predictor will try to guess which way a branch (an if-then-else structure) will go before this is known for sure. It will not wait for instruction A to reach the EX stage of the pipeline; instead it will guess the decision and go to that instruction (B or C in our example).

In case of a correct guess, the pipeline looks something like this:
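
A sketch: the predicted instruction is fetched right behind A, so the pair completes in 5 cycles instead of 7:

Clock cycle ->   1    2    3    4    5
A:               IF   ID   EX   WB
B:                    IF   ID   EX   WB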

If it is later detected that the guess was wrong, then the partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay. The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline, the greater the need for a good branch predictor.

In the OP's code, the first time the conditional is evaluated, the branch predictor does not have any information to base a prediction on, so the first time it will randomly choose the next instruction. Later in the for loop, it can base the prediction on the history. For an array sorted in ascending order, there are three possibilities:

  1. All the elements are less than 128
  2. All the elements are greater than 128
  3. The starting elements are less than 128 and at some point they become greater than 128

Let us assume that the predictor will always assume the true branch on the first run.

So in the first case, it will always take the true branch since historically all its predictions are correct. In the 2nd case, initially it will predict wrong, but after a few iterations, it will predict correctly. In the 3rd case, it will initially predict correctly until the elements are less than 128. After that it will fail for some time, and then correct itself when it sees branch prediction failures in its history.

In all these cases the failures will be few in number; as a result, only a few times will it need to discard the partially executed instructions and start over with the correct branch, resulting in fewer wasted CPU cycles.

But in the case of a random unsorted array, the predictor will need to discard the partially executed instructions and start over with the correct branch most of the time, resulting in more CPU cycles than for the sorted array.




It's about branch prediction. What is it?

  • A branch predictor is one of the ancient performance-improving techniques which still finds relevance in modern architectures. While simple prediction techniques provide fast lookup and power efficiency, they suffer from a high misprediction rate.

  • On the other hand, complex branch predictors, either neural-based or variants of two-level branch prediction, provide better prediction accuracy, but they consume more power and their complexity increases exponentially.

  • In addition to this, in complex prediction techniques the time taken to predict the branches is itself very high, ranging from 2 to 5 cycles, which is comparable to the execution time of actual branches.

  • Branch prediction is essentially an optimization (minimization) problem where the emphasis is on achieving the lowest possible miss rate, low power consumption, and low complexity with minimum resources.

There really are three different kinds of branches:

Forward conditional branches - based on a run-time condition, the PC (program counter) is changed to point to an address forward in the instruction stream.

Backward conditional branches - the PC is changed to point backward in the instruction stream. The branch is based on some condition, such as branching backwards to the beginning of a program loop when a test at the end of the loop states the loop should be executed again.

Unconditional branches - this includes jumps, procedure calls and returns that have no specific condition. For example, an unconditional jump instruction might be coded in assembly language as simply "jmp", and the instruction stream must immediately be directed to the target location pointed to by the jump instruction, whereas a conditional jump that might be coded as "jmpne" would redirect the instruction stream only if the result of a comparison of two values in a previous "compare" instruction shows the values not to be equal. (The segmented addressing scheme used by the x86 architecture adds extra complexity, since jumps can be either "near" (within a segment) or "far" (outside the segment). Each type has different effects on branch prediction algorithms.)

Static/dynamic Branch Prediction : Static branch prediction is used by the microprocessor the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the conditional branch code.
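
A small C++ illustration of where each kind typically shows up (how a given compiler actually lowers these is an assumption, but these are the usual shapes):

// Typical sources of the three branch kinds described above.
void example(int* a, int n, void (*handler)())
{
    for (int i = 0; i < n; ++i)   // backward conditional branch: jump back
    {                             // to the top of the loop while i < n holds
        if (a[i] < 0)             // forward conditional branch: skip the
            a[i] = 0;             // store below when the test is false
    }
    handler();                    // unconditional branch: a call (and its return)
}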





Along the same lines (I think this was not highlighted by any answer), it's good to mention that sometimes (especially in software where performance matters, like in the Linux kernel) you can find if statements like the following:

if (likely( everything_is_ok ))
{
    /* Do something */
}

or similarly:

if (unlikely(very_improbable_condition))
{
    /* Do something */    
}

Both likely() and unlikely() are in fact macros that are defined using something like GCC's __builtin_expect to help the compiler insert prediction code that favours the condition, taking into account the information provided by the user. GCC supports other builtins that could change the behavior of the running program or emit low-level instructions like clearing the cache, etc. See this documentation, which goes through the available GCC builtins.
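
For reference, a sketch of how these macros are commonly defined for GCC/Clang (this mirrors the well-known Linux kernel style, not necessarily any particular kernel version verbatim):

#define likely(x)    __builtin_expect(!!(x), 1)
#define unlikely(x)  __builtin_expect(!!(x), 0)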

Normally this kind of optimization is mainly found in hard real-time applications or embedded systems where execution time matters and is critical. For example, if you are checking for some error condition that only happens 1/10000000 times, then why not inform the compiler about this? This way, by default, the branch prediction would assume that the condition is false.