Python string 'join' is faster(?) than '+', but what's wrong here?
As to why q is a lot slower: when you say `l += "a"` you are appending the string "a" to the end of l, but when you say `l = l + ["a"]` you are creating a new list with the contents of l plus ["a"] and then reassigning the result back to l. Thus new lists are constantly being generated.
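You can see the difference directly by checking object identity (a minimal sketch; the variable names are made up):

```python
l = []
before = id(l)

l += "a"          # in-place: the same list object is extended
assert id(l) == before

l = l + ["b"]     # a brand-new list is built, then the name is rebound
assert id(l) != before

print(l)  # ['a', 'b']
```

The `+=` form never discards the list, so no allocation-and-copy cycle happens per iteration.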
I asked about the most efficient method for mass dynamic string concatenation in an earlier post, and the join method was suggested as the best, simplest and fastest way to do it (everyone said so). But while I was playing with string concatenations, I found some weird(?) results. I'm sure something is going on, but I can't quite get it. Here is what I did:
I defined these functions:
```python
import timeit

def x():
    s = []
    for i in range(100):
        # Other codes here...
        s.append("abcdefg"[i%7])
    return ''.join(s)

def y():
    s = ''
    for i in range(100):
        # Other codes here...
        s += "abcdefg"[i%7]
    return s

def z():
    s = ''
    for i in range(100):
        # Other codes here...
        s = s + "abcdefg"[i%7]
    return s

def p():
    s = []
    for i in range(100):
        # Other codes here...
        s += "abcdefg"[i%7]
    return ''.join(s)

def q():
    s = []
    for i in range(100):
        # Other codes here...
        s = s + ["abcdefg"[i%7]]
    return ''.join(s)
```
I have tried to keep everything else (except the concatenation itself) almost the same across the functions. Then I tested with the following, with the results in comments (using Python 3.1.1 IDLE on a 32-bit Windows machine):
```python
timeit.timeit(x)  # 31.54912480500002
timeit.timeit(y)  # 23.533029429999942
timeit.timeit(z)  # 22.116181330000018
timeit.timeit(p)  # 37.718607439999914
timeit.timeit(q)  # 108.60377576499991
```
That means it shows that `strng = strng + dyn_strng` is the fastest. Though the differences in time are not that significant (except for the last one), I want to know why this is happening. Is it because I am using Python 3.1.1, and that provides '+' as the most efficient? Should I use '+' as an alternative to join? Or have I done something extremely silly? Please explain clearly.
I assume x() is slower because you're first building the array and then joining it. So you're not only measuring the time that join takes, but also the time that you take to build the array.
In a scenario where you already have an array and you want to create a string out of its elements, join should be faster than iterating through the array and building the string step by step.
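For example (a minimal sketch with made-up data, starting from a list that already exists):

```python
words = ['spam'] * 1000

# join consumes the ready-made list in a single pass
fast = ' '.join(words)

# whereas this loop rebuilds ever-longer intermediate strings
slow = ''
for w in words:
    slow += w + ' '
slow = slow.rstrip()

assert fast == slow
print(len(fast))  # 4999
```

The two produce identical output; only the construction cost differs.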
In addition to what the others said, 100 1-char strings is really small. (I'm kind of surprised you get separation of results at all.) That's the sort of dataset that fits in your processor cache. You're not going to see asymptotic performance on a microbenchmark.
Interesting: I've done some tests where the size of the string changes, and this is what I found:
```python
def x():
    x = "a" * 100
    s = []
    for i in range(100):
        # Other codes here...
        s.append(x)
    return ''.join(s)

def z():
    x = "a" * 100
    s = ''
    for i in xrange(100):
        # Other codes here...
        s = s + x
    return s

from timeit import timeit
print "x:", timeit(x, number=1000000)
print "z:", timeit(z, number=1000000)
```
For strings of length 1 (`x = "a" * 1`):

```
x: 27.2318270206
z: 14.4046051502
```
For strings of length 100:
```
x: 30.0796670914
z: 21.5891489983
```
And for strings of length 1000, running timeit 100,000 times instead of 1,000,000:

```
x: 14.1769361496
z: 31.4864079952
```
Which, if my reading of `Objects/stringobject.c` is correct, makes sense.
It appears, on a first reading, that the String.join algorithm (edge cases aside) is:

```
def join(sep, sequence):
    size = 0
    for string in sequence:
        size += len(string) + len(sep)
    result = malloc(size)
    for string in sequence:
        copy string into result
        copy sep into result
    return result
```
So this will require more or less O(S) steps, where S is the sum of the lengths of all the strings being joined.
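That two-pass strategy can be sketched in plain Python (an ASCII-only illustration, with a preallocated bytearray standing in for malloc; `join_sketch` is my name, not CPython's, and it sizes the buffer with n-1 separators rather than the pseudocode's upper bound):

```python
def join_sketch(sep, sequence):
    strings = list(sequence)
    if not strings:
        return ''
    # Pass 1: compute the total output size up front.
    size = sum(len(s) for s in strings) + len(sep) * (len(strings) - 1)
    # Pass 2: copy every piece into one preallocated buffer.
    result = bytearray(size)
    pos = 0
    for i, s in enumerate(strings):
        if i:
            result[pos:pos + len(sep)] = sep.encode()
            pos += len(sep)
        result[pos:pos + len(s)] = s.encode()
        pos += len(s)
    return result.decode()

print(join_sketch('-', ['a', 'bc', 'd']))  # a-bc-d
```

Because the buffer is allocated exactly once, each character is copied exactly once, which is where the O(S) bound comes from.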
String concatenation was a lot slower before Python 2.4, when CPython still created a new copy for every string concatenation rather than resizing the original in place, which is what made join() such a popular workaround.
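The effect is easy to probe (a rough sketch; the in-place resize is a CPython implementation detail that only fires when the string has no other references, so other implementations make no such promise):

```python
import timeit

def concat(n):
    # On modern CPython this benefits from the in-place resize
    # when the refcount of s is 1.
    s = ''
    for i in range(n):
        s += 'x'
    return s

# With the optimization, doubling n roughly doubles the time;
# a naive copy-per-concatenation would scale quadratically.
print(timeit.timeit(lambda: concat(1000), number=100))
print(timeit.timeit(lambda: concat(2000), number=100))
```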
Here's an old benchmark demonstrating the old problem: http://www.skymind.com/~ocrow/python_string/
There are many good summaries here already, but here is more proof.

Source: I stared at the Python source code for an hour and calculated complexities!
For 2 strings (assume n is the total length of both strings):

```
Concat (+) - O(n)
Join       - O(n+k), effectively O(n)
Format     - O(2n+k), effectively O(n)
```

For more than 2 strings (assume n is the total length of all strings):

```
Concat (+) - O(n^2)
Join       - O(n+k), effectively O(n)
Format     - O(2n+k), effectively O(n)
```
If you have two strings, concatenation (+) is technically better, though effectively it is exactly the same as join and format.

If you have more than two strings, concat becomes awful, and join and format are effectively the same, though technically join is a bit better.

- If you don't care about efficiency, use any of the above. (Though since you asked the question, I would assume you care.)
- If you have 2 strings, use concat (when not in a loop!).
- If you have more than two strings (or you are in a loop), use join.
- If you have anything that is not a string, use format, because duh.
Hope this helps!
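To make those rules concrete, a small sketch with made-up values:

```python
first, last = 'Ada', 'Lovelace'

# Two strings: plain concatenation is fine.
full = first + ' ' + last

# Many strings (or a loop): collect into a list and join once.
parts = [first, last] * 3
joined = ' '.join(parts)

# Non-string values: let format handle the conversion.
line = '{} is {} years old'.format(full, 36)

print(full)    # Ada Lovelace
print(line)    # Ada Lovelace is 36 years old
```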
This question is really about what things cost. We'll play a bit fast and loose here, subtracting results in similar cases. You can decide for yourself if this is a valid method. Here are some basic test cases:
```python
import timeit

def append_to_list_with_join():
    s = []
    for i in xrange(100):
        s.append("abcdefg"[i%7])
    return ''.join(s)

def append_to_list_with_join_opt():
    s = []
    x = s.append
    for i in xrange(100):
        x("abcdefg"[i%7])
    return ''.join(s)

def plus_equals_string():
    s = ''
    for i in xrange(100):
        s += "abcdefg"[i%7]
    return s

def plus_assign_string():
    s = ''
    for i in xrange(100):
        s = s + "abcdefg"[i%7]
    return s

def list_comp_join():
    return ''.join(["abcdefg"[i%7] for i in xrange(100)])

def list_comp():
    return ["abcdefg"[i%7] for i in xrange(100)]

def empty_loop():
    for i in xrange(100):
        pass

def loop_mod():
    for i in xrange(100):
        a = "abcdefg"[i%7]

def fast_list_join():
    return "".join(["0"] * 100)

for f in [append_to_list_with_join, append_to_list_with_join_opt,
          plus_equals_string, plus_assign_string, list_comp_join,
          list_comp, empty_loop, loop_mod, fast_list_join]:
    print f.func_name, timeit.timeit(f)
```
And here is what they cost:
```
append_to_list_with_join      25.4540209021
append_to_list_with_join_opt  19.9999782794
plus_equals_string            16.7842428996
plus_assign_string            14.8312124167
list_comp_join                16.329590353
list_comp                     14.6934344309
empty_loop                     2.3819276612
loop_mod                      10.1424356308
fast_list_join                 2.58149394686
```
First off, lots of things have unexpected costs in Python. append_to_list_with_join versus append_to_list_with_join_opt shows that even looking up a method on an object has a non-negligible cost. In this case, looking up s.append is about one-quarter of the time.
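That hoisting trick, for reference (a minimal sketch, written for Python 3):

```python
import timeit

def lookup_each_time():
    s = []
    for i in range(100):
        s.append('x')      # attribute lookup on every iteration
    return s

def hoisted():
    s = []
    append = s.append      # bound method looked up just once
    for i in range(100):
        append('x')
    return s

# Both build the same list; the second usually times measurably faster.
print(timeit.timeit(lookup_each_time, number=10000))
print(timeit.timeit(hoisted, number=10000))
```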
Next, list_comp_join versus list_comp shows that join() is pretty fast: it takes about 1.7 seconds, or only 10% of list_comp_join's time.
loop_mod shows that the greatest part of this test is actually in setting up the data, irrespective of which string construction method is used. By inference, the times taken for "string = string +", "string +=", and the list comprehension are:

```
plus_equals_string = 16.78 - 10.14 = 6.64
plus_assign_string = 14.83 - 10.14 = 4.69
list_comp          = 14.69 - 10.14 = 4.55
```
So as to the OP's question, join() is fast but the time to create the underlying list, whether with list primitives or list comprehension, is comparable to creating the string with string primitives. If you already have a list, convert it to a string with join() -- it will be fast.
The timings the OP presents indicate that constructing lists using concatenate operators is slow. In contrast, using list comprehensions is fast. If you have to build a list, use a list comprehension.
Finally, let's take three of the OP's closest functions: what is the difference between x, p, and q? Let's simplify a bit:
```python
import timeit

def x():
    s = []
    for i in range(100):
        s.append("c")

def p():
    s = []
    for i in range(100):
        s += "c"

def q():
    s = []
    for i in range(100):
        s = s + ["c"]

for f in [x, p, q]:
    print f.func_name, timeit.timeit(f)
```
Here are the results:
```
x 16.0757342064
p 87.1533697719
q 85.0999698984
```
And here is the disassembly:
```
>>> import dis
>>> dis.dis(x)
  2           0 BUILD_LIST               0
              3 STORE_FAST               0 (s)

  3           6 SETUP_LOOP              33 (to 42)
              9 LOAD_GLOBAL              0 (range)
             12 LOAD_CONST               1 (100)
             15 CALL_FUNCTION            1
             18 GET_ITER
        >>   19 FOR_ITER                19 (to 41)
             22 STORE_FAST               1 (i)

  4          25 LOAD_FAST                0 (s)
             28 LOAD_ATTR                1 (append)
             31 LOAD_CONST               2 ('c')
             34 CALL_FUNCTION            1
             37 POP_TOP
             38 JUMP_ABSOLUTE           19
        >>   41 POP_BLOCK
        >>   42 LOAD_CONST               0 (None)
             45 RETURN_VALUE
>>> dis.dis(p)
  2           0 BUILD_LIST               0
              3 STORE_FAST               0 (s)

  3           6 SETUP_LOOP              30 (to 39)
              9 LOAD_GLOBAL              0 (range)
             12 LOAD_CONST               1 (100)
             15 CALL_FUNCTION            1
             18 GET_ITER
        >>   19 FOR_ITER                16 (to 38)
             22 STORE_FAST               1 (i)

  4          25 LOAD_FAST                0 (s)
             28 LOAD_CONST               2 ('c')
             31 INPLACE_ADD
             32 STORE_FAST               0 (s)
             35 JUMP_ABSOLUTE           19
        >>   38 POP_BLOCK
        >>   39 LOAD_CONST               0 (None)
             42 RETURN_VALUE
>>> dis.dis(q)
  2           0 BUILD_LIST               0
              3 STORE_FAST               0 (s)

  3           6 SETUP_LOOP              33 (to 42)
              9 LOAD_GLOBAL              0 (range)
             12 LOAD_CONST               1 (100)
             15 CALL_FUNCTION            1
             18 GET_ITER
        >>   19 FOR_ITER                19 (to 41)
             22 STORE_FAST               1 (i)

  4          25 LOAD_FAST                0 (s)
             28 LOAD_CONST               2 ('c')
             31 BUILD_LIST               1
             34 BINARY_ADD
             35 STORE_FAST               0 (s)
             38 JUMP_ABSOLUTE           19
        >>   41 POP_BLOCK
        >>   42 LOAD_CONST               0 (None)
             45 RETURN_VALUE
```
The loops are nearly identical. The comparison amounts to CALL_FUNCTION+POP_TOP vs. INPLACE_ADD+STORE_FAST vs. BUILD_LIST+BINARY_ADD+STORE_FAST. I can't give a more low-level explanation than that, as I just can't find the costs of Python bytecodes on the net. You might, though, get some inspiration from Doug Hellmann's Python Module of the Week post on dis.
You are measuring two distinct operations: the creation of an array of strings, and the concatenation of strings.
```python
import timeit

def x():
    s = []
    for i in range(100):
        s.append("abcdefg"[i%7])
    return ''.join(s)

def y():
    s = ''
    for i in range(100):
        s += "abcdefg"[i%7]
    return s

# timeit.timeit(x) returns about 32s
# timeit.timeit(y) returns about 23s
```
From the above it would indeed seem that '+' is a faster operation than join. But consider:
```python
src = []

def c():
    global src
    s = []
    for i in range(100):
        s.append("abcdefg"[i%7])
    src = s

def x2():
    return ''.join(src)

def y2():
    s = ''
    for i in range(len(src)):
        s += src[i]
    return s

# timeit.timeit(c) returns about 30s
# timeit.timeit(x2) returns about 1.5s
# timeit.timeit(y2) returns about 14s
```
In other words, by timing x() vs y(), your result is polluted by the construction of your source array. If you break that out, you find that join is faster.
Furthermore, you're working with small arrays, and your timing numbers just happen to coincide. If you increase the size of the array and the length of each string significantly, the differences are more clear:
```python
def c2():
    global src
    s = []
    for i in range(10000):
        s.append("abcdefghijklmnopqrstuvwxyz0123456789")
    src = s

# timeit.timeit(x2, number=10000) returns about 1s
# timeit.timeit(y2, number=10000) returns about 80s
```