



How to split a string on whitespace and retain offsets and lengths of words

I need to split a string into words, but also get the starting and ending offset of the words. So, for example, if the input string is:

input_string = "ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE"

I want to get:

[('ONE', 0, 2), ('ONE', 5, 7), ('ONE', 9, 11), ('TWO', 17, 19), ('TWO', 21, 23),
 ('ONE', 25, 27), ('TWO', 29, 31), ('TWO', 33, 35), ('THREE', 37, 41)]

I've got some working code that does this using input_string.split and calls to .index, but it's slow. I tried to code it by manually iterating through the string, but that was slower still. Does anyone have a fast algorithm for this?

Here are my two versions:

def using_split(line):
    words = line.split()
    offsets = []
    running_offset = 0
    for word in words:
        word_offset = line.index(word, running_offset)
        word_len = len(word)
        running_offset = word_offset + word_len
        offsets.append((word, word_offset, running_offset - 1))

    return offsets

def manual_iteration(line):
    start = 0
    offsets = []
    word = ''
    for off, char in enumerate(line + ' '):
        if char in ' \t\r\n':
            if off > start:
                offsets.append((word, start, off - 1))
            start = off + 1
            word = ''
        else:
            word += char

    return offsets

According to timeit, using_split is the fastest, followed by manual_iteration; the slowest so far is the re.finditer approach suggested below.
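For reference, a minimal timeit harness along these lines can reproduce the comparison (the function body is copied from using_split above; the other candidates can be timed the same way):

```python
import timeit

line = "ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE"

def using_split(line):
    words = line.split()
    offsets = []
    running_offset = 0
    for word in words:
        word_offset = line.index(word, running_offset)
        running_offset = word_offset + len(word)
        offsets.append((word, word_offset, running_offset - 1))
    return offsets

# Time 10,000 calls on the sample line; swap in the other candidates to compare.
elapsed = timeit.timeit(lambda: using_split(line), number=10_000)
print(f"using_split: {elapsed:.3f}s")
```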


Here are a couple of ideas which you could profile to see if they are fast enough:

input_string = "".join([" ","ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE"," "])

#pre processing
from itertools import chain
stuff = list(chain(*zip(range(len(input_string)),range(len(input_string)))))
print(stuff)
stuff = iter(stuff)
next(stuff)

#calculate
switches = (i for i in range(0,len(input_string)-1) if (input_string[next(stuff)] in " \t\r\n") ^ (input_string[next(stuff)] in " \t\r\n"))
print([(word, next(switches), next(switches) - 1) for word in input_string.split()])


#pre processing
from itertools import chain
stuff = list(chain(*zip(range(len(input_string)),range(len(input_string)))))
print(stuff)
stuff = iter(stuff)
next(stuff)


#calculate
switches = (i for i in range(0,len(input_string)-1) if (input_string[next(stuff)] in " \t\r\n") ^ (input_string[next(stuff)] in " \t\r\n"))
print([(input_string[i+1:j+1], i, j - 1) for i, j in zip(switches, switches)])
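The same boundary-detection idea can be written without the shared-iterator trickery by zipping the padded string against a one-character shift of itself. This is a sketch of the technique, not the answer's original code:

```python
def split_with_offsets(line, ws=" \t\r\n"):
    padded = " " + line + " "
    # A boundary sits wherever whitespace-ness flips between adjacent characters.
    boundaries = [i for i, (a, b) in enumerate(zip(padded, padded[1:]))
                  if (a in ws) ^ (b in ws)]
    it = iter(boundaries)
    # Boundaries come in (start, one-past-end) pairs, already in
    # original-string coordinates because of the one-space padding.
    return [(line[i:j], i, j - 1) for i, j in zip(it, it)]

print(split_with_offsets("ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE"))
```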

Here is a C-oriented approach that iterates over the complete string only once. You can also define your own separators. Tested and works, but it could probably be cleaner.

def mySplit(myString, mySeparators):
    w = []
    iW = False                       # currently inside a word?
    word = [None, None, None]        # (text, start, end)
    for i, c in enumerate(myString):
        if c not in mySeparators:
            if not iW:
                word[1] = i
                iW = True
        elif iW:
            word[2] = i - 1
            word[0] = myString[word[1]:i]
            w.append(tuple(word))
            word = [None, None, None]
            iW = False
    if iW:                           # flush the last word if the string
        word[2] = len(myString) - 1  # does not end with a separator
        word[0] = myString[word[1]:]
        w.append(tuple(word))
    return w

mySeparators = [" ", "\t"]
myString = "ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE"
splitted = mySplit(myString, mySeparators)
print(splitted)

It occurs to me that the Python loop is the slow operation here, so I started on bitmaps. I got this far, and it's still fast, but I can't figure out a loop-free way to get the start/stop indices out of it:

# translation table: whitespace -> '0', everything else -> '1'
table = {i: "0" if chr(i).isspace() else "1" for i in range(256)}

def indexed6(line):
    binline = line.translate(table)
    return int(binline, 2) ^ int(binline + "0", 2)

The returned integer has bits set for each start position and each stop+1 position.
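One way to decode that integer back into spans is to walk its set bits. This is still a Python loop, but over boundaries rather than characters, so it touches far fewer positions; a sketch, not part of the original answer:

```python
def spans_from_bits(line):
    # '1' for non-whitespace, '0' for whitespace; line[0] maps to the
    # most significant bit of the integer.
    binline = "".join("0" if c.isspace() else "1" for c in line)
    x = int(binline, 2) if binline else 0
    mask = x ^ (x << 1)              # same as int(binline, 2) ^ int(binline + "0", 2)
    n = len(line)
    boundaries = []
    while mask:
        low = mask & -mask           # isolate the lowest set bit
        boundaries.append(n - (low.bit_length() - 1))  # bit position -> string index
        mask ^= low                  # clear it
    boundaries.reverse()             # ascending order: start, stop+1, start, stop+1, ...
    it = iter(boundaries)
    return [(line[i:j], i, j - 1) for i, j in zip(it, it)]
```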

P.S. zip() is relatively slow: fast enough to be used once, too slow to be used 3 times.


The following ideas may result in speed-ups:

  1. Use a deque rather than a list to store the offsets, converting to a list only on return. Deque appends do not result in memory moves the way list appends can.
  2. If the words are known to be shorter than some length, pass that as a bound to the index function so it searches a smaller window.
  3. Bind the methods called in the loop (line.index, offsets.append) to local names, since local lookups are faster than attribute lookups.

Note: I haven't tested these, but here is an example:

from collections import deque

def using_split(line):
    MAX_WORD_LENGTH = 10  # must cover the longest word plus any preceding whitespace run
    line_index = line.index

    words = line.split()

    offsets = deque()
    offsets_append = offsets.append

    running_offset = 0

    for word in words:
        word_offset = line_index(word, running_offset, running_offset+MAX_WORD_LENGTH)
        running_offset = word_offset + len(word)
        offsets_append((word, word_offset, running_offset - 1))

    return list(offsets)

The following will do it:

import re
s = 'ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE'
ret = [(m.group(0), m.start(), m.end() - 1) for m in re.finditer(r'\S+', s)]
print(ret)

This produces:

[('ONE', 0, 2), ('ONE', 5, 7), ('ONE', 9, 11), ('TWO', 17, 19), ('TWO', 21, 23),
 ('ONE', 25, 27), ('TWO', 29, 31), ('TWO', 33, 35), ('THREE', 37, 41)]

This seems to work pretty quickly:

import re

tuple_list = [(match.group(), match.start(), match.end() - 1)
              for match in re.compile(r"\S+").finditer(input_string)]

def split_span(s):
    for match in re.finditer(r"\S+", s):
        span = match.span()
        yield match.group(0), span[0], span[1] - 1
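A quick check of the generator form against the sample string (the definition is repeated here so the snippet runs on its own; offsets are inclusive on both ends, matching the format asked for in the question):

```python
import re

def split_span(s):
    for match in re.finditer(r"\S+", s):
        span = match.span()
        yield match.group(0), span[0], span[1] - 1

s = 'ONE  ONE ONE   \t TWO TWO ONE TWO TWO THREE'
print(list(split_span(s)))
```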



