# [c++] What really is a deque in STL?

deque = double ended queue

A container which can grow in either direction.

Deque is *typically* implemented as a `vector` of `vectors` (a plain list of vectors can't give constant-time random access). While the size of the secondary vectors is implementation dependent, a common approach is to use a constant block size in bytes.

I was looking at STL containers and trying to figure out what they really are (i.e. which data structure is used), and the *deque* stopped me: I thought at first that it was a doubly linked list, which would allow insertion and deletion at both ends in constant time, but I am troubled by the promise that `operator[]` runs in constant time. In a linked list, arbitrary access should be O(n), right?

And if it's a dynamic array, how can it add elements in constant time? It should be mentioned that reallocation may happen, and that O(1) is an amortized cost, as it is for a `vector`.

So I wonder what is this structure that allows arbitrary access in constant time, and at the same time never needs to be moved to a new bigger place.

I was reading "Data structures and algorithms in C++" by Adam Drozdek, and found this useful. HTH.

A very interesting aspect of STL deque is its implementation. An STL deque is not implemented as a linked list but as an array of pointers to blocks or arrays of data. The number of blocks changes dynamically depending on storage needs, and the size of the array of pointers changes accordingly.

An image is worth a thousand words. In the diagram, the middle column is the array of pointers to the data (the chunks on the right), and you can see that this middle array changes size dynamically.

(This is an answer I've given in another thread. Essentially I'm arguing that even fairly naive implementations, using a single `vector`, conform to the requirements of "constant non-amortized push_{front,back}". You might be surprised, and think this is impossible, but I have found other relevant quotes in the standard that define the context in a surprising way. Please bear with me; if I have made a mistake in this answer, it would be very helpful to identify which things I have said correctly and where my logic has broken down.)

In this answer, I am not trying to identify a *good* implementation, I'm merely trying to help us to interpret the complexity requirements in the C++ standard. I'm quoting from N3242, which is, according to Wikipedia, the latest freely available C++11 standardization document. (It appears to be organized differently from the final standard, and hence I won't quote the exact page numbers. Of course, these rules might have changed in the final standard, but I don't think that has happened.)

A `deque<T>` could be implemented correctly by using a `vector<T*>`. All the elements are copied onto the heap and the pointers stored in a vector. (More on the vector later.)

Why `T*` instead of `T`? Because the standard requires that

"An insertion at either end of the deque invalidates all the iterators to the deque, but has **no effect** on the validity of references to elements of the deque."

(my emphasis). The `T*` helps to satisfy that. It also helps us to satisfy this:

"Inserting a single element either at the beginning or end of a deque always ..... causes a single call to a constructor of T."

Now for the (controversial) bit. Why use a `vector` to store the `T*`? It gives us random access, which is a good start. Let's forget about the complexity of vector for a moment and build up to this carefully:

The standard talks about "the number of operations on the contained objects". For `deque::push_front` this is clearly 1 because exactly one `T` object is constructed and zero of the existing `T` objects are read or scanned in any way. This number, 1, is clearly a constant and is independent of the number of objects currently in the deque. This allows us to say that:

'For our `deque::push_front`, the number of operations on the contained objects (the Ts) is fixed and is independent of the number of objects already in the deque.'

Of course, the number of operations on the `T*` will not be so well-behaved. When the `vector<T*>` grows too big, it'll be realloced and many `T*`s will be copied around. So yes, the number of operations on the `T*` will vary wildly, but the number of operations on `T` will not be affected.

Why do we care about this distinction between counting operations on `T` and counting operations on `T*`? It's because the standard says:

"All of the complexity requirements in this clause are stated solely in terms of the number of operations on the contained objects."

For the `deque`, the contained objects are the `T`, not the `T*`, meaning we can ignore any operation which copies (or reallocs) a `T*`.

I haven't said much about how a vector would behave in a deque. Perhaps we would interpret it as a circular buffer (with the vector always taking up its maximum `capacity()`), and then realloc everything into a bigger buffer when the vector is full. The details don't matter.

In the last few paragraphs, we have analyzed `deque::push_front` and the relationship between the number of objects already in the deque and the number of operations performed by `push_front` on contained `T`-objects. And we found they were independent of each other. **As the standard mandates that complexity is in terms of operations-on-T, we can say this has constant complexity.**

Yes, the **operations-on-`T*` complexity** is amortized (due to the `vector`), but we're only interested in the **operations-on-`T` complexity**, and this is constant (non-amortized).

The complexity of `vector::push_back`, or of inserting at the front of the vector, is irrelevant in this implementation; those considerations involve operations on `T*` and hence are irrelevant. If the standard were referring to the 'conventional' theoretical notion of complexity, it wouldn't have explicitly restricted itself to the "number of operations on the contained objects". Am I overinterpreting that sentence?