[python] What does the “yield” keyword do?
Shortcut to Grokking yield
When you see a function with `yield` statements, apply this easy trick to understand what will happen:

- Insert a line `result = []` at the start of the function.
- Replace each `yield expr` with `result.append(expr)`.
- Insert a line `return result` at the bottom of the function.
- Yay - no more `yield` statements! Read and figure out the code.
- Compare the function to the original definition.
This trick may give you an idea of the logic behind the function, but what actually happens with `yield` is significantly different from what happens in the list-based approach. In many cases the yield approach will be a lot more memory efficient, and faster too. In other cases the trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more...
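For instance, here is the trick applied to a small, made-up countdown function (a sketch; the function is mine, not from the question):

```python
def countdown(n):              # original generator version
    while n > 0:
        yield n
        n -= 1

def countdown_listified(n):    # after applying the trick
    result = []                # step 1: result = [] at the start
    while n > 0:
        result.append(n)       # step 2: each yield becomes result.append
        n -= 1
    return result              # step 3: return result at the bottom

print(list(countdown(3)))      # [3, 2, 1]
print(countdown_listified(3))  # [3, 2, 1] - same values, different mechanics
```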
Don't confuse your Iterables, Iterators and Generators
First, the iterator protocol - when you write

```python
for x in mylist:
    ...loop body...
```

Python performs the following two steps:

1. Gets an iterator for `mylist`:

   Call `iter(mylist)` -> this returns an object with a `next()` method (or `__next__()` in Python 3).

   [This is the step most people forget to tell you about.]

2. Uses the iterator to loop over items:

   Keep calling the `next()` method on the iterator returned from step 1. The return value from `next()` is assigned to `x` and the loop body is executed. If an exception `StopIteration` is raised from within `next()`, it means there are no more values in the iterator and the loop is exited.
The truth is Python performs the above two steps anytime it wants to loop over the contents of an object - so it could be a for loop, but it could also be code like `otherlist.extend(mylist)` (where `otherlist` is a Python list).
Here `mylist` is an *iterable* because it implements the iterator protocol. In a user-defined class, you can implement the `__iter__()` method to make instances of your class iterable. This method should return an *iterator*. An iterator is an object with a `next()` method. It is possible to implement both `__iter__()` and `next()` on the same class, and have `__iter__()` return `self`. This will work for simple cases, but not when you want two iterators looping over the same object at the same time.
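For illustration, a minimal sketch of such a class (the name `Repeater` is mine, and I use the Python 3 spelling `__next__`):

```python
class Repeater:
    """Iterable that produces `value` exactly `times` times."""
    def __init__(self, value, times):
        self.value = value
        self.times = times

    def __iter__(self):          # makes instances usable in a for loop
        return self              # simple case: the object is its own iterator

    def __next__(self):          # spelled next(self) in Python 2
        if self.times <= 0:
            raise StopIteration  # tells the loop there are no more values
        self.times -= 1
        return self.value

for x in Repeater("hi", 2):
    print(x)                     # prints "hi" twice
```

Because `__iter__()` returns `self`, two simultaneous loops over the same `Repeater` would interfere with each other - exactly the caveat above.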
So that's the iterator protocol; many objects implement this protocol:

- Built-in lists, dictionaries, tuples, sets, and files.
- User-defined classes that implement `__iter__()`.
- Generators.
Note that a `for` loop doesn't know what kind of object it's dealing with - it just follows the iterator protocol, and is happy to get item after item as it calls `next()`. Built-in lists return their items one by one, dictionaries return the keys one by one, files return the lines one by one, etc. And generators return... well, that's where `yield` comes in:
```python
def f123():
    yield 1
    yield 2
    yield 3

for item in f123():
    print(item)
```
Instead of three `yield` statements, if you had three `return` statements in `f123()`, only the first would get executed and the function would exit. But `f123()` is no ordinary function. When `f123()` is called, it does *not* return any of the values in the yield statements! It returns a generator object. Also, the function does not really exit - it goes into a suspended state. When the `for` loop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the `yield` it previously returned from, executes the next line of code - in this case a `yield` statement - and returns that as the next item. This happens until the function exits, at which point the generator raises `StopIteration` and the loop exits.
So the generator object is sort of like an adapter - at one end it exhibits the iterator protocol, by exposing `__iter__()` and `next()` methods to keep the `for` loop happy. At the other end, however, it runs the function just enough to get the next value out of it, and puts it back in suspended mode.
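You can watch this adapter behavior by driving the generator by hand with the built-in `next()` (a sketch of an interactive session, Python 3 spelling):

```
>>> g = f123()   # no body code has run yet; g is a generator object
>>> next(g)      # runs the body up to the first yield
1
>>> next(g)      # resumes right after the first yield
2
>>> next(g)
3
>>> next(g)      # the body finishes, so StopIteration is raised
Traceback (most recent call last):
  ...
StopIteration
```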
Why Use Generators?
Usually you can write code that doesn't use generators but implements the same logic. One option is to use the temporary list "trick" I mentioned before. That will not work in all cases, e.g. if you have infinite loops, and it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class `SomethingIter` that keeps state in instance members and performs the next logical step in its `next()` (`__next__()` in Python 3) method. Depending on the logic, the code inside the `next()` method may end up looking very complex and be prone to bugs. Here generators provide a clean and easy solution.
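To make that concrete, here is a hypothetical sketch of what such a `SomethingIter` might look like for a simple counting task, next to the generator that replaces it (the counting logic is illustrative, not from the question):

```python
class SomethingIter:
    """Counts 0..n-1; all loop state must live in instance members."""
    def __init__(self, n):
        self.i = 0
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):              # next(self) in Python 2
        if self.i >= self.n:
            raise StopIteration
        value = self.i
        self.i += 1
        return value

def something_gen(n):
    """The same logic; the loop state is just local variables."""
    i = 0
    while i < n:
        yield i
        i += 1

print(list(SomethingIter(3)))  # [0, 1, 2]
print(list(something_gen(3)))  # [0, 1, 2]
```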
What is the use of the `yield` keyword in Python? What does it do?

For example, I'm trying to understand this code¹:
```python
def _get_child_candidates(self, distance, min_dist, max_dist):
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild
```
And this is the caller:
```python
result, candidates = list(), [self]
while candidates:
    node = candidates.pop()
    distance = node._get_dist(obj)
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result
```
What happens when the method
_get_child_candidates is called?
Is a list returned? A single element? Is it called again? When will subsequent calls stop?
1. The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
`yield` is just like `return` - it returns whatever you tell it to (as a generator). The difference is that the next time you call the generator, execution starts from the last call to the `yield` statement.

In the case of your code, the function `_get_child_candidates` is acting like an iterator, so that when you extend your list, it adds one element at a time to the new list.

`list.extend` calls an iterator until it's exhausted. In the case of the code sample you posted, it would be much clearer to just return a tuple and append that to the list.
It's returning a generator. I'm not particularly familiar with Python, but I believe it's the same kind of thing as C#'s iterator blocks if you're familiar with those.
There's an IBM article which explains it reasonably well (for Python) as far as I can see.
The key idea is that the compiler/interpreter/whatever does some trickery so that, as far as the caller is concerned, they can keep calling `next()` and it will keep returning values - as if the generator method were paused. Now obviously you can't really "pause" a method, so the compiler builds a state machine for you to remember where you currently are and what the local variables etc. look like. This is much easier than writing an iterator yourself.
From a programming viewpoint, the iterators are implemented as thunks.

To implement iterators, generators, thread pools for concurrent execution, etc. as thunks (also called anonymous functions), one uses messages sent to a closure object, which has a dispatcher, and the dispatcher answers to "messages".

"next" is a message sent to a closure, created by the "iter" call.

There are lots of ways to implement this computation. I used mutation, but it is easy to do it without mutation, by returning the current value and the next yielder.

Here is a demonstration which uses the structure of R6RS, but the semantics is absolutely identical to Python's. It's the same model of computation, and only a change in syntax is required to rewrite it in Python.
```
Welcome to Racket
-> (define gen
     (lambda (l)
       (define yield
         (lambda ()
           (if (null? l)
               'END
               (let ((v (car l)))
                 (set! l (cdr l))
                 v))))
       (lambda (m)
         (case m
           ('yield (yield))
           ('init (lambda (data) (set! l data) 'OK))))))
-> (define stream (gen '(1 2 3)))
-> (stream 'yield)
1
-> (stream 'yield)
2
-> (stream 'yield)
3
-> (stream 'yield)
'END
-> ((stream 'init) '(a b))
'OK
-> (stream 'yield)
'a
-> (stream 'yield)
'b
-> (stream 'yield)
'END
-> (stream 'yield)
'END
->
```
Many people use `return` rather than `yield`, but in some cases `yield` can be more efficient and easier to work with.

Here is an example which `yield` is definitely best for:
return (in function)
```python
import random

def return_dates():
    dates = []  # with return you need to create a list then return it
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th",
                              "6th", "7th", "8th", "9th", "10th"])
        dates.append(date)
    return dates
```
yield (in function)
```python
def yield_dates():
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th",
                              "6th", "7th", "8th", "9th", "10th"])
        yield date  # yield makes a generator automatically, which works
                    # in a similar way - this is much more efficient
```
```python
dates_list = return_dates()
print(dates_list)       # prints the whole list at once
for i in dates_list:
    print(i)

dates_generator = yield_dates()
print(dates_generator)  # prints a generator object, not the dates
for i in dates_generator:
    print(i)
```
Both functions do the same thing but
yield uses 3 lines instead of 5 and has one less variable to worry about.
When you run the code, the list version prints the whole list at once and then each date on its own line, while the generator version prints something like `<generator object yield_dates at 0x...>` followed by each date on its own line. As you can see, both functions give you the same values; the only difference is that `return_dates()` gives a list and `yield_dates()` gives a generator.
A real-life example would be something like reading a file line by line, or when you just want to make a generator.
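A sketch of the file-reading case (the path is made up for the example):

```python
def read_lines(path):
    """Yield one stripped line at a time; the whole file is never in memory."""
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

for line in read_lines("/tmp/example.txt"):  # hypothetical file
    print(line)
```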
While a lot of answers show why you'd use a
yield to create a generator, there are more uses for
yield. It's quite easy to make a coroutine, which enables the passing of information between two blocks of code. I won't repeat any of the fine examples that have already been given about using
yield to create a generator.
To help understand what `yield` does in the following code, you can use your finger to trace the cycle through any code that has a `yield`. Every time your finger hits the `yield`, you have to wait for a `next` or a `send` to be entered. When `next` is called, you trace through the code until you hit the `yield` - the code on the right of the `yield` is evaluated and returned to the caller - then you wait. When `next` is called again, you perform another loop through the code. However, you'll note that in a coroutine, `yield` can also be used with a `send`, which will send a value *from the caller into* the yielding function. If a `send` is given, then `yield` receives the value sent, and spits it out the left-hand side; then the trace through the code progresses until you hit the `yield` again (returning the value at the end, as if `next` was called).
```
>>> def coroutine():
...     i = -1
...     while True:
...         i += 1
...         val = (yield i)
...         print("Received %s" % val)
...
>>> sequence = coroutine()
>>> next(sequence)
0
>>> next(sequence)
Received None
1
>>> sequence.send('hello')
Received hello
2
>>> sequence.close()
```
For those who prefer a minimal working example, meditate on this interactive Python session:
```
>>> def f():
...     yield 1
...     yield 2
...     yield 3
...
>>> g = f()
>>> for i in g:
...     print(i)
...
1
2
3
>>> for i in g:
...     print(i)
...
>>> # Note that this time nothing was printed
```
(My answer below speaks only from the perspective of using Python generators, not the underlying implementation of the generator mechanism, which involves some tricks of stack and heap manipulation.)
When `yield` is used instead of a `return` in a Python function, that function is turned into something special called a *generator function*, and calling it will return an object of *generator* type. The `yield` keyword is a flag to notify the Python compiler to treat such a function specially. Normal functions terminate once some value is returned from them. But with the help of the compiler, the generator function can be thought of as resumable: the execution context is restored and execution continues from the last run, until you explicitly `return` (which raises a `StopIteration` exception, which is also part of the iterator protocol) or reach the end of the function. I found a lot of references about generators, but this one from the functional programming perspective is the most digestible.
(Now I want to talk about the rationale behind generators and iterators based on my own understanding. I hope this can help you grasp the essential motivation of iterators and generators. Such a concept shows up in other languages as well, such as C#.)
As I understand it, when we want to process a bunch of data, we usually first store the data somewhere and then process it one by one. But this intuitive approach is problematic. If the data volume is huge, it's expensive to store it all as a whole beforehand. So instead of storing the *data itself* directly, why not store some kind of *metadata* indirectly, i.e. *the logic for how the data is computed*?
There are two approaches to wrapping such metadata:

- The OO approach: we wrap the metadata *as a class*. This is the so-called *iterator*, which implements the iterator protocol (i.e. the `__iter__()` and `__next__()` methods). This is also the commonly seen iterator design pattern.
- The functional approach: we wrap the metadata *as a function*. This is the so-called *generator function*. But under the hood, the returned *generator object* IS-A iterator, because it also implements the iterator protocol.
Either way, an iterator is created, i.e. some object that can give you the data you want. The OO approach may be a bit complex. Anyway, which one to use is up to you.
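A quick check (my own snippet) that the functional approach really does hand you an iterator under the hood:

```python
def gen():
    yield 1

g = gen()
print(hasattr(g, "__iter__"), hasattr(g, "__next__"))  # True True
print(iter(g) is g)  # True: a generator object IS-A iterator (it returns itself)
```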
I was going to post "read page 19 of Beazley's 'Python: Essential Reference' for a quick description of generators", but so many others have posted good descriptions already.
Also, note that `yield` can be used in coroutines as the dual of its use in generator functions. Although it isn't the same use as your code snippet, `(yield)` can be used as an expression in a function. When a caller sends a value to the method using the `send()` method, the coroutine will execute until the next `(yield)` statement is encountered.
Generators and coroutines are a cool way to set up data-flow type applications. I thought it would be worthwhile knowing about the other use of the
yield statement in functions.
Here is a mental image of what `yield` does.
I like to think of a thread as having a stack (even when it's not implemented that way).
When a normal function is called, it puts its local variables on the stack, does some computation, then clears the stack and returns. The values of its local variables are never seen again.
A `yield` function, when its code begins to run (i.e. after the function is called, returning a generator object, whose `next()` method is then invoked), similarly puts its local variables onto the stack and computes for a while. But then, when it hits the `yield` statement, before clearing its part of the stack and returning, it takes a snapshot of its local variables and stores them in the generator object. It also writes down the place where it's currently up to in its code (i.e. the particular `yield` statement).

So it's a kind of frozen function that the generator is hanging onto.

When `next()` is called subsequently, it retrieves the function's belongings onto the stack and re-animates it. The function continues to compute from where it left off, oblivious to the fact that it had just spent an eternity in cold storage.
Compare the following examples:
```python
def normalFunction():
    return
    if False:
        pass

def yielderFunction():
    return
    if False:
        yield 12
```
When we call the second function, it behaves very differently to the first. The
yield statement might be unreachable, but if it's present anywhere, it changes the nature of what we're dealing with.
```
>>> yielderFunction()
<generator object yielderFunction at 0x07742D28>
```
Calling `yielderFunction()` doesn't run its code, but makes a generator out of the code. (Maybe it's a good idea to name such things with the `yielder` prefix for readability.)
```
>>> gen = yielderFunction()
>>> dir(gen)
['__class__',
 ...
 '__iter__',    # Returns gen itself, to make it work uniformly with containers
 ...            # when given to a for loop. (Containers return an iterator instead.)
 'close',
 'gi_code',
 'gi_frame',
 'gi_running',
 'next',        # The method that runs the function's body.
 'send',
 'throw']
```
The `gi_code` and `gi_frame` fields are where the frozen state is stored. Exploring them with `dir(..)`, we can confirm that our mental model above is credible.
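For example, in CPython you can peek at the frozen state directly (a sketch; the exact line number and dict contents depend on where you define the function):

```
>>> def yielder():
...     x = 1
...     yield x
...     x += 1
...     yield x
...
>>> gen = yielder()
>>> next(gen)
1
>>> gen.gi_frame.f_locals   # the snapshot of local variables
{'x': 1}
>>> gen.gi_frame.f_lineno   # the line the function is paused at (the first yield)
3
```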
The `yield` keyword is reduced to two simple facts:

- If the compiler detects the `yield` keyword *anywhere* inside a function, that function no longer returns via the `return` statement. *Instead*, it *immediately* returns a lazy "pending list" object called a generator.
- A generator is iterable. What is an *iterable*? It's anything like a `range` or dict-view, with a *built-in protocol for visiting each element in a certain order*.
In a nutshell: a generator is a lazy, incrementally-pending list, and
yield statements allow you to use function notation to program the list values the generator should incrementally spit out.
```
generator = myYieldingFunction(...)
x = list(generator)

   generator
       v
[x, ..., ???]

   generator
       v
[x, x, ..., ???]

   generator
       v
[x, x, x, ..., ???]

StopIteration exception
[x, x, x]     done

list == [x, x, x]
```
Let's define a function `makeRange` that's just like Python's `range`. Calling `makeRange(n)` RETURNS A GENERATOR:
```
def makeRange(n):
    # return 0,1,2,...,n-1
    i = 0
    while i < n:
        yield i
        i += 1

>>> makeRange(5)
<generator object makeRange at 0x19e4aa0>
```
To force the generator to immediately return its pending values, you can pass it into
list() (just like you could any iterable):
```
>>> list(makeRange(5))
[0, 1, 2, 3, 4]
```
Comparing the example to "just returning a list"
The above example can be thought of as merely creating a list which you append to and return:
```
# list-version                   #  # generator-version
def makeRange(n):                #  def makeRange(n):
    """return [0,1,2,...,n-1]""" #~     """return 0,1,2,...,n-1"""
    TO_RETURN = []               #>
    i = 0                        #      i = 0
    while i < n:                 #      while i < n:
        TO_RETURN += [i]         #~         yield i
        i += 1                   #          i += 1  ## indented
    return TO_RETURN             #>

>>> makeRange(5)
[0, 1, 2, 3, 4]
```
There is one major difference, though; see the last section.
How you might use generators
An iterable is the last part of a list comprehension, and all generators are iterable, so they're often used like so:
```
#                  _ITERABLE_
>>> [x+10 for x in makeRange(5)]
[10, 11, 12, 13, 14]
```
To get a better feel for generators, you can play around with the `itertools` module (be sure to use `chain.from_iterable` rather than `chain` when warranted). For example, you might even use generators to implement infinitely-long lazy lists like `itertools.count()`. You could implement your own `def enumerate(iterable): zip(count(), iterable)`, or alternatively do so with the `yield` keyword in a while-loop.
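For instance, a sketch of both spellings (`my_enumerate_*` are made-up names to avoid shadowing the built-in):

```python
from itertools import count

def my_enumerate_zip(iterable):
    return zip(count(), iterable)      # the one-liner from above

def my_enumerate_yield(iterable):
    """The same thing with the yield keyword in a while-loop."""
    it, i = iter(iterable), 0
    while True:
        try:
            item = next(it)
        except StopIteration:
            return                     # ends the generator cleanly
        yield (i, item)
        i += 1

print(list(my_enumerate_yield("abc")))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```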
Please note: generators can actually be used for many more things, such as implementing coroutines or non-deterministic programming or other elegant things. However, the "lazy lists" viewpoint I present here is the most common use you will find.
Behind the scenes
This is how the "Python iteration protocol" works, i.e. what is going on when you do `list(makeRange(5))`. This is what I described earlier as a "lazy, incremental list".
```
>>> x = iter(range(5))
>>> next(x)
0
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
4
>>> next(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
```
The built-in function `next()` just calls the object's `.__next__()` method (`.next()` in Python 2), which is a part of the "iteration protocol" and is found on all iterators. You can manually use the `next()` function (and other parts of the iteration protocol) to implement fancy things, usually at the expense of readability, so try to avoid doing that...
Normally, most people would not care about the following distinctions and probably want to stop reading here.
In Python-speak, an *iterable* is any object which "understands the concept of a for-loop", like a list `[1,2,3]`, and an *iterator* is a specific instance of the requested for-loop, like `[1,2,3].__iter__()`. A *generator* is exactly the same as any iterator, except for the way it was written (with function syntax).

When you request an iterator from a list, it creates a new iterator. However, when you request an iterator from an iterator (which you would rarely do), it just gives you itself.
Thus, in the unlikely event that you are failing to do something like this...
```
> x = myRange(5)
> list(x)
[0, 1, 2, 3, 4]
> list(x)
[]
```
...then remember that a generator is an iterator; that is, it is one-time-use. If you want to reuse it, you should call `myRange(...)` again. If you need to use the result twice, convert the result to a list and store it in a variable: `x = list(myRange(5))`. Those who absolutely need to clone a generator (for example, those doing terrifyingly hackish metaprogramming) can use `itertools.tee` if absolutely necessary, since the copyable-iterator Python PEP standards proposal has been deferred.
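A sketch of what `itertools.tee` buys you (reusing the `myRange` shape from above):

```python
import itertools

def myRange(n):                   # same shape as makeRange above
    i = 0
    while i < n:
        yield i
        i += 1

a, b = itertools.tee(myRange(3))  # two independent iterators from one
print(list(a))                    # [0, 1, 2]
print(list(b))                    # [0, 1, 2] - but don't use the original
                                  # generator again after tee-ing it
```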
All great answers, but a bit difficult for newbies.

I assume you have learned the `return` statement.

As an analogy, `return` and `yield` are twins. `return` means "Return and Stop" whereas `yield` means "Return but Continue".
- Try to get a num_list with `return`:
```python
def num_list(n):
    for i in range(n):
        return i
```
```
In : num_list(3)
Out: 0
```
See, you get only a single number rather than a list of them. `return` will never let you get more: it runs once and quits.
- There comes `yield`:
```
In : def num_list(n):
...:     for i in range(n):
...:         yield i
...:

In : num_list(3)
Out: <generator object num_list at 0x10327c990>

In : list(num_list(3))
Out: [0, 1, 2]
```
Now you win: you get all the numbers.

Unlike `return`, which runs once and stops, `yield` runs as many times as you planned. You can interpret `return` as "return one of them" and `yield` as "return all of them". This is called being *iterable*.
- One more step: we can rewrite the `yield` version in terms of `return`:
```
In : def num_list(n):
...:     result = []
...:     for i in range(n):
...:         result.append(i)
...:     return result

In : num_list(3)
Out: [0, 1, 2]
```
That list-building version captures the core of what `yield` gives you.
The difference between the list that `return` outputs and the object that `yield` outputs is: you can always get `[0, 1, 2]` from a list object, whereas you can only retrieve the values from "the object `yield` outputs" once.
So the object has a new name: `generator object`, as displayed in `Out: <generator object num_list at 0x10327c990>`.
In conclusion, as a metaphor to grok it: `return` and `yield` are twins; *list* and *generator* are twins.
When you find yourself building a `list` from scratch...
```python
def squares_list(n):
    the_list = []            # Replace
    for x in range(n):
        y = x * x
        the_list.append(y)   # these
    return the_list          # lines
```
...`yield` each piece instead:
```python
def squares_the_yield_way(n):
    for x in range(n):
        y = x * x
        yield y              # with this
```
This was my first "aha" moment with yield.
`yield` is a sugary way to say *build a series of stuff*.
```
>>> for square in squares_list(4):
...     print(square)
...
0
1
4
9
>>> for square in squares_the_yield_way(4):
...     print(square)
...
0
1
4
9
```
Yield is single-pass: you can only iterate through once. When a function has a yield in it we call it a generator function, and an iterator is what it returns. Those terms are revealing. We lose the convenience of a container, but gain the power of an arbitrarily long series.

Yield is lazy; it puts off computation. A function with a yield in it doesn't actually execute at all when you call it. The iterator object it returns uses magic to maintain the function's internal context. Each time you call `next()` on the iterator (this happens in a for-loop), execution inches forward to the next yield. (`return` raises StopIteration and ends the series.)
Yield is versatile. It can do infinite loops:
```
>>> def squares_all_of_them():
...     x = 0
...     while True:
...         yield x * x
...         x += 1
...
>>> squares = squares_all_of_them()
>>> for _ in range(4):
...     print(next(squares))
...
0
1
4
9
```
If you need multiple passes and the series isn't too long, just call
list() on it:
```
>>> list(squares_the_yield_way(4))
[0, 1, 4, 9]
```
Brilliant choice of the word `yield` because both meanings apply:

- *yield* - produce or provide (as in agriculture): ...provide the next data in the series.
- *yield* - give way or relinquish (as in political power): ...relinquish CPU execution until the iterator advances.
Yield is an Object
A `return` in a function will return a single value. If you want a function to return a huge set of values, use `yield`.
`yield` is a barrier: like a barrier in the CUDA language, it will not transfer control until it gets completed.
It will run the code in your function from the beginning until it hits `yield`. Then, it'll return the first value of the loop. Every subsequent call will then run the loop you have written in the function one more time, returning the next value, until there is no value left to return.
Yet another TL;DR
- iterator on a list: `next()` returns the next element of the list
- iterator on a generator: `next()` will compute the next element on the fly (execute code)
You can see yield and generators as a way to manually run the control flow from outside (like continuing a loop one step at a time), by calling `next`, however complex the flow.

NOTE: a generator is NOT a normal function. It remembers its previous state, like local variables (its stack); see other answers or articles for a detailed explanation. A generator can only be iterated over once.
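A tiny sketch of that outside-in control (my own example): each `next()` runs the function up to its next `yield` and then hands control back.

```python
def steps():
    print("step 1")
    yield
    print("step 2")
    yield
    print("step 3")

s = steps()
next(s)   # prints "step 1", then pauses at the first yield
next(s)   # prints "step 2", then pauses at the second yield
# a third next(s) would print "step 3" and raise StopIteration
```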
You could do without `yield`, but it would not be as nice, so it can be considered "very nice" language sugar.