[Java] Why is it faster to process a sorted array than an unsorted array?



Answers

Branch prediction.

With a sorted array, the condition data[c] >= 128 is first false for a streak of values, then becomes true for all later values. That is easy to predict. With an unsorted array, you pay the full branching cost.
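If you want to see how much of that cost is the branch itself, a well-known trick (a sketch reusing the variables from the question's loop, not a quote of this answer) is to replace the branch with branch-free bit arithmetic. Assuming a 32-bit int and an arithmetic right shift of negative values (true on mainstream compilers), the loop body becomes:

int t = (data[c] - 128) >> 31; // t = 0 if data[c] >= 128, else t = -1 (all ones)
sum += ~t & data[c];           // adds data[c] when t == 0, adds 0 otherwise

This runs in roughly the same time whether the data is sorted or not, because there is no branch left to mispredict.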

Question

Here is a piece of C++ code that seems very peculiar. For some strange reason, sorting the data miraculously makes the code almost six times faster.

#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

At first, I thought this might be just a language or compiler anomaly, so I tried it in Java.

import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

With a somewhat similar but less extreme result.

My first thought was that sorting brings the data into the cache, but then I realized how silly that was: the array had just been generated.

  • What is going on?
  • Why is it faster to process a sorted array than an unsorted array?
  • The code sums up some independent terms, so the order should not matter.



The above behavior is happening because of Branch prediction.

To understand branch prediction, one must first understand the instruction pipeline:

Any instruction is broken into a sequence of steps so that different steps can be executed concurrently in parallel. This technique is known as instruction pipelining, and it is used to increase throughput in modern processors. To understand this better, see the example on Wikipedia.

Generally, modern processors have quite long pipelines, but for simplicity let's consider only these 4 stages:

  1. IF -- Fetch the instruction from memory
  2. ID -- Decode the instruction
  3. EX -- Execute the instruction
  4. WB -- Write back to CPU register

A generic 4-stage pipeline for two instructions looks like this:
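Sketching the overlap (each instruction advances one stage per clock cycle, with the next instruction following one stage behind):

            Cycle:   1    2    3    4    5
Instr 1:             IF   ID   EX   WB
Instr 2:                  IF   ID   EX   WB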

Moving back to the question above, let's consider the following instructions:

                        A) if (data[c] >= 128)
                                /\
                               /  \
                              /    \
                        true /      \ false
                            /        \
                           /          \
                          /            \
                         /              \
              B) sum += data[c];          C) for loop or print().

Without branch prediction, the following would occur:

To execute instruction B or instruction C, the processor has to wait until instruction A reaches the EX stage of the pipeline, since the decision to go to instruction B or instruction C depends on the result of instruction A. The pipeline stalls in the same way whether the if condition turns out to be true (B is next) or false (C is next).
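Sketching the stall (B or C cannot even be fetched until A's EX stage has completed):

            Cycle:   1    2    3    4    5    6    7
Instr A:             IF   ID   EX   WB
Instr B or C:        --   --   --   IF   ID   EX   WB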

As a result of waiting for the result of instruction A, the total CPU cycles spent in the above case (without branch prediction; for both true and false) is 7.

So what is branch prediction?

The branch predictor tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. Instead of waiting for instruction A to reach the EX stage of the pipeline, it guesses the decision and fetches the corresponding instruction (B or C in our example).

In case of a correct guess, the pipeline looks something like this:
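Sketching it (B is fetched speculatively right behind A, so no cycles are wasted):

            Cycle:   1    2    3    4    5
Instr A:             IF   ID   EX   WB
Instr B:                  IF   ID   EX   WB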

If it is later detected that the guess was wrong, then the partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay. The time wasted on a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline, the greater the need for a good branch predictor.

In the OP's code, the first time the conditional is encountered, the branch predictor has no information to base a prediction on, so it will more or less arbitrarily choose the next instruction. Later in the for loop, it can base its prediction on the history. For an array sorted in ascending order, there are three possibilities:

  1. All the elements are less than 128
  2. All the elements are greater than or equal to 128
  3. The first elements are less than 128 and the later ones are greater than or equal to 128

Let us assume that the predictor will always assume the true branch on the first run.

So in the first case, it will initially predict wrongly, but after a few iterations it will predict correctly. In the second case, it will always take the true branch, and historically all its predictions will be correct. In the third case, after the first few iterations it will predict correctly while the elements are less than 128; at the transition it will fail for a short while, then correct itself once it sees the prediction failures in its history.

In all these cases the number of mispredictions is very small, so only rarely does the pipeline need to discard the partially executed instructions and start over with the correct branch, resulting in few wasted CPU cycles.

But with a random, unsorted array, the prediction keeps failing, so the pipeline has to discard the partially executed instructions and start over with the correct branch most of the time, wasting many more CPU cycles than with the sorted array.
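To make the history argument concrete, here is a toy simulation (my own sketch, not from this answer) of a 2-bit saturating counter, one of the simplest dynamic predictors, counting how often it mispredicts data[c] >= 128 over one pass of the array. With the sort enabled it mispredicts only a handful of times; on the unsorted random data it mispredicts close to half the time:

#include <algorithm>
#include <cstdlib>
#include <iostream>

int main()
{
    const unsigned arraySize = 32768;
    int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    //std::sort(data, data + arraySize);    // toggle this line to compare

    int counter = 2;                 // states 0-1 predict "not taken", 2-3 "taken"
    unsigned mispredictions = 0;
    for (unsigned c = 0; c < arraySize; ++c)
    {
        bool taken = data[c] >= 128;        // the actual outcome of the branch
        bool predicted = (counter >= 2);    // the predictor's guess
        if (taken != predicted)
            ++mispredictions;
        // Saturating update: nudge the counter toward the observed outcome.
        if (taken)
            counter = std::min(counter + 1, 3);
        else
            counter = std::max(counter - 1, 0);
    }

    std::cout << "mispredictions: " << mispredictions
              << " out of " << arraySize << std::endl;
}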




Along the same line (I think this was not highlighted by any answer), it's good to mention that sometimes (especially in software where performance matters, like in the Linux kernel) you can find if statements like the following:

if (likely( everything_is_ok ))
{
    /* Do something */
}

or similarly:

if (unlikely(very_improbable_condition))
{
    /* Do something */    
}

Both likely() and unlikely() are in fact macros, defined using something like GCC's __builtin_expect, which helps the compiler lay out the code to favour the expected side of the condition, taking into account the information provided by the programmer. GCC supports other builtins that can change the behavior of the running program or emit low-level instructions such as clearing the cache; see the documentation that goes through the available GCC builtins.

Normally this kind of optimization is mainly found in hard real-time applications or embedded systems where execution time matters and is critical. For example, if you are checking for some error condition that only happens 1/10000000 times, why not inform the compiler about this? This way, by default, the branch prediction would assume that the condition is false.
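For reference, these macros are typically defined along the following lines (this mirrors the Linux kernel's include/linux/compiler.h; the usage snippet with read_failed and handle_error() is a made-up example):

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Hypothetical usage: tell the compiler the error path is cold. */
if (unlikely(read_failed))
    handle_error();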




On ARM, no branch is needed, because every (classic 32-bit ARM) instruction has a 4-bit condition field, which is tested at essentially zero cost. This eliminates the need for short branches, so there would be no branch prediction penalty in the inner loop. Therefore, the sorted version would run slower than the unsorted version on ARM, because of the extra overhead of sorting. The inner loop would look something like the following:

MOV R0, #0     // R0 = sum = 0
MOV R1, #0     // R1 = c = 0
ADR R2, data   // R2 = addr of data array (put this instruction outside outer loop)
.inner_loop    // Inner loop branch label
    LDRB R3, [R2, R1]     // R3 = data[c]
    CMP R3, #128          // compare R3 to 128
    ADDGE R0, R0, R3      // if R3 >= 128, then sum += data[c] -- no branch needed!
    ADD R1, R1, #1        // c++
    CMP R1, #arraySize    // compare c to arraySize
    BLT inner_loop        // Branch to inner_loop if c < arraySize
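You can often get a similar effect in portable C++: written as a conditional expression instead of an if, the loop body gives the compiler a good chance to emit a conditional move/select (CMOV on x86, CSEL on AArch64) instead of a branch. A sketch reusing the question's variables (not from the original answer; whether the compiler actually goes branchless depends on the target and optimization level):

// Branchless-friendly formulation of the inner loop body:
sum += (data[c] >= 128) ? data[c] : 0;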



It's about branch prediction. What is branch prediction?

  • A branch predictor is one of the oldest performance-improving techniques, and it is still relevant in modern architectures. While simple prediction techniques provide fast lookup and power efficiency, they suffer from a high misprediction rate.

  • On the other hand, complex branch predictors (either neural-based or variants of two-level branch prediction) provide better prediction accuracy, but they consume more power and their complexity increases exponentially.

  • In addition, with complex prediction techniques the time taken to predict a branch is itself very high (ranging from 2 to 5 cycles), which is comparable to the execution time of the actual branches.

  • Branch prediction is essentially an optimization (minimization) problem where the emphasis is on achieving the lowest possible miss rate, low power consumption, and low complexity with minimal resources.

There really are three different kinds of branches:

Forward conditional branches - based on a run-time condition, the PC (program counter) is changed to point to an address forward in the instruction stream.

Backward conditional branches - the PC is changed to point backward in the instruction stream. The branch is based on some condition, such as branching backwards to the beginning of a program loop when a test at the end of the loop states the loop should be executed again.

Unconditional branches - this includes jumps, procedure calls, and returns that have no specific condition. For example, an unconditional jump instruction might be coded in assembly language as simply "jmp", and the instruction stream must immediately be directed to the target location pointed to by the jump instruction, whereas a conditional jump, which might be coded as "jmpne", would redirect the instruction stream only if the result of a comparison of two values in a previous "compare" instruction shows the values to be not equal. (The segmented addressing scheme used by the x86 architecture adds extra complexity, since jumps can be either "near" (within a segment) or "far" (outside the segment). Each type has different effects on branch prediction algorithms.)
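To tie these categories back to the code in the question, here is a rough mapping (my own annotation, reusing the question's variables; process() is a made-up function standing in for any call):

for (unsigned c = 0; c < arraySize; ++c)  // the loop-end test compiles to a
{                                         // backward conditional branch
    if (data[c] >= 128)                   // a forward conditional branch that
        sum += data[c];                   // skips the add when false
}
process(sum);                             // a call is an unconditional branch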

Static/dynamic Branch Prediction : Static branch prediction is used by the microprocessor the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the conditional branch code.





If you are curious about even more optimizations that can be done to this code, consider this:

Starting from the original loop:

for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

With loop interchange, we can safely change this loop to:

for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

Then you can see that the if conditional is constant throughout the execution of the i loop, so you can hoist the if out:

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}

Then you can see that the inner loop can be collapsed into one single expression, assuming the floating-point model allows it (/fp:fast is passed, for example):

for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}

That one is 100,000 times faster than before.




One way to avoid branch prediction errors is to build a lookup table, and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can write code that will be equally fast whether the data is sorted or not. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's the code:

// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds, but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's the C++ code:

// declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// use the lookup table after it is built
for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        sum += lut[data[c]];
    }
}

In this case the lookup table was only 256 entries (1 KB of ints), so it fit nicely in cache and all was fast. This technique wouldn't work well if the data were 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value where we only want the top half of the range, we could shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.
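One way to realize that shift-then-lookup idea (my own sketch, assuming 24-bit values of which we keep exactly the top half of the range, i.e. values >= 1 << 23) is to store an AND mask instead of the value itself, so the low bits discarded by the shift are not lost:

// Table of AND masks indexed by a value's top 12 bits.
int mask[4096];
for (unsigned k = 0; k < 4096; ++k)
    mask[k] = (k >= 2048) ? -1 : 0;   // all ones iff the value is >= 1 << 23

// Inside the hot loop: keep data[c] when it is in the top half, else add 0.
sum += data[c] & mask[data[c] >> 12];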

EDIT: One thing I forgot to put in.

The technique of indexing into an array, instead of using an if statement, can be used for deciding which pointer to use. I saw a library that implemented binary trees, and instead of having two named pointers (pLeft and pRight or whatever) it had a length-2 array of pointers and used the "decision bit" technique to decide which one to follow. For example, instead of:

if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

this library would do something like:

i = (x < node->value);
node = node->link[i];

Here's a link to this code: Red Black Trees, Eternally Confuzzled




I just read through this question and its answers, and I feel an answer is missing.

A common way to eliminate branch prediction that I've found to work particularly well in managed languages is a table lookup instead of using a branch (although I haven't tested it in this case).

This approach works in general if:

  1. It's a small table and is likely to be cached in the processor
  2. You are running things in a fairly tight loop and/or the processor can preload the data

Background and why

Phew, so what does that mean?

From a processor perspective, your memory is slow. To compensate for the difference in speed, a couple of caches (the L1/L2 caches) are built into your processor. So imagine you're doing your nice calculations and figure out that you need a piece of memory. The processor gets its 'load' operation and loads the piece of memory into the cache, then uses the cache to do the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program.

Like branch prediction, this was optimized in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load it into the cache before the operation actually needs it. As we've already seen, branch prediction sometimes goes horribly wrong; in the worst case you need to go back and actually wait for a memory load, which will take forever (in other words: a failing branch prediction is bad, but a memory load after a branch prediction failure is just horrible!).

Fortunately for us, if the memory access pattern is predictable, the processor will load it into its fast cache and all is well.

The first thing we need to know is: what is small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64 KB, it's probably worth reconsidering.

Constructing a table

So we've figured out that we can create a small table. The next thing to do is get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, subtract and perhaps multiply). You want your input to be translated by the lookup function into some kind of 'unique key' in your table, which then simply gives you the answer for all the work you wanted done.

In this case: >= 128 means we can keep the value, < 128 means we get rid of it. The easiest way to do that is by using an 'AND': if we keep the value, we AND it with 7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2, so we could go ahead and make a table of 32768/128 integers and fill it with one zero and a lot of 7FFFFFFFs.

Managed languages

You might wonder why this works well in managed languages. After all, managed languages check the boundaries of arrays with a branch to make sure you don't mess up...

Well, not exactly... :-)

There has been quite some work on eliminating this branch for managed languages. For example:

for (int i=0; i<array.Length; ++i)
   // Use array[i]

In this case, it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (but I expect Java does similar things) will notice this and remove the check altogether. WOW, that means no branch. Similarly, it will handle other obvious cases.

If you run into trouble with lookups in managed languages, the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable, and watch it go faster.

The result in this case

// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);

// To keep the spirit of the code intact, I'll make a separate lookup table
// (I assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[256];

for (int c = 0; c < 256; ++c)
    lookup[c] = (c >= 128) ? c : 0;

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        // Here you basically want to use simple operations - so no
        // random branches, but things like &, |, *, -, +, etc. are fine.
        sum += lookup[data[j]];
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);

Console.ReadLine();



Branch-prediction gain!

It is important to understand that branch misprediction doesn't slow down programs. The cost of a missed prediction is just as if branch prediction didn't exist and you waited for the evaluation of the expression to decide what code to run (further explanation in the next paragraph).

if (expression)
{
    // Run 1
} else {
    // Run 2
}

Whenever there's an if-else / switch statement, the expression has to be evaluated to determine which block should be executed. In the assembly code generated by the compiler, conditional branch instructions are inserted.

A branch instruction can cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order (i.e. if the expression is false, the program skips the code of the if block), depending on some condition, which in our case is the expression evaluation.

That being said, the processor tries to predict the outcome before the expression is actually evaluated. It will fetch instructions from the if block, and if the expression turns out to be true, then wonderful! We gained the time it took to evaluate it and made progress in the code; if not, then we are running the wrong code, the pipeline is flushed, and the correct block is run.

Visualization:

Let's say you need to pick route 1 or route 2. You could wait for your partner to check the map, stopping at ## until they are done; or you could just pick route 1 and, if you were lucky (route 1 is the correct route), great, you didn't have to wait for your partner to check the map (you saved the time it would have taken them to check it); otherwise you just turn back.

Since flushing the pipeline is super fast nowadays, taking this gamble is worth it. Predicting sorted data, or data that changes slowly, is always easier and better than predicting fast changes.

 O      Route 1  /-------------------------------
/|\             /
 |  ---------##/
/ \            \
                \
        Route 2  \--------------------------------



This question has already been answered excellently many times over. Still, I'd like to draw the group's attention to yet another interesting analysis.

Recently this example (modified very slightly) was also used as a way to demonstrate how a piece of code can be profiled within the program itself on Windows. Along the way, the author also shows how to use the results to determine where the code is spending most of its time in both the sorted and unsorted cases. Finally, the piece shows how to use a little-known feature of the HAL (Hardware Abstraction Layer) to determine just how much branch misprediction is happening in the unsorted case.

The link is here: http://www.geoffchappell.com/studies/windows/km/ntoskrnl/api/ex/profile/demo.htm



