[Python] What does the "yield" keyword do?



Answers

A shortcut to grokking yield

When you see a function with yield statements, apply this easy trick to understand what will happen:

  1. Insert a line result = [] at the start of the function.
  2. Replace each yield expr with result.append(expr).
  3. Insert a line return result at the bottom of the function.
  4. Yay - no more yield statements! Read through and figure out the code.
  5. Compare the function to the original definition.
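As a sketch, here is the trick applied end-to-end to a small hypothetical generator function f (both function names are made up for illustration):

```python
# A generator function with yield...
def f(n):
    for i in range(n):
        yield i * 2

# ...and the same function after applying the trick: a result list
# is added at the top, each `yield expr` becomes `result.append(expr)`,
# and `return result` goes at the bottom.
def f_as_list(n):
    result = []
    for i in range(n):
        result.append(i * 2)
    return result

print(list(f(3)))     # [0, 2, 4]
print(f_as_list(3))   # [0, 2, 4]
```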

This trick may give you an idea of the logic behind the function, but what actually happens with yield is significantly different from what happens in the list-based approach. In many cases the yield approach will be a lot more memory-efficient and faster too. In other cases this trick will get you stuck in an infinite loop, even though the original function works just fine. Read on to learn more...

Don't mix up your iterables, iterators, and generators

First, the iterator protocol - when you write

for x in mylist:
    ...loop body...

Python performs the following two steps:

  1. Gets an iterator for mylist:

    Calls iter(mylist) -> this returns an object with a next() method (or __next__() in Python 3).

    [This is the step most people forget to tell you about.]

  2. Uses the iterator to loop over items:

    Keeps calling the next() method on the iterator returned from step 1. The return value of next() is assigned to x and the loop body is executed. If an exception StopIteration is raised from within next(), it means there are no more values in the iterator, and the loop exits.
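The two steps above can be sketched as a rough desugaring of the for loop (simplified for illustration; not CPython's exact implementation):

```python
mylist = [10, 20, 30]

# Step 1: get an iterator for mylist.
it = iter(mylist)

# Step 2: keep calling next() until StopIteration is raised.
while True:
    try:
        x = next(it)
    except StopIteration:
        break
    print(x)  # the loop body
```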

The truth is, Python performs the above two steps anytime it wants to loop over the contents of an object - so it could be a for loop, but it could also be code like otherlist.extend(mylist) (where otherlist is a Python list).

Here mylist is an iterable because it implements the iterator protocol. In a user-defined class, you can implement the __iter__() method to make instances of your class iterable. This method should return an iterator. An iterator is an object with a next() method. It is possible to implement both __iter__() and next() on the same class, and have __iter__() return self. That will work for simple cases, but not when you want two iterators looping over the same object at the same time.
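As a minimal sketch of that pattern (Countdown and CountdownIterator are hypothetical names), here is an iterable whose __iter__() returns a fresh, separate iterator, so two loops can traverse the same object independently:

```python
class CountdownIterator:
    def __init__(self, start):
        self.current = start

    def __next__(self):  # this would be next() in Python 2
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

class Countdown:
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        # Hand out a fresh iterator each time, so concurrent
        # loops over the same Countdown don't interfere.
        return CountdownIterator(self.start)

c = Countdown(3)
print(list(c))  # [3, 2, 1]
print(list(c))  # [3, 2, 1] - iterating a second time still works
```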

So that's the iterator protocol; many objects implement this protocol:

  1. Built-in lists, dictionaries, tuples, sets, and files.
  2. User-defined classes that implement __iter__().
  3. Generators.

Note that a for loop doesn't know what kind of object it's dealing with - it just follows the iterator protocol and is happy to get item after item as it calls next(). Built-in lists return their items one by one, dictionaries return the keys one by one, files return the lines one by one, and so on. And generators return... well, that's where yield comes in:

def f123():
    yield 1
    yield 2
    yield 3

for item in f123():
    print(item)

If you had three return statements in f123() instead of yield statements, only the first would get executed, and the function would exit. But f123() is no ordinary function. When f123() is called, it does not return any of the values in the yield statements! It returns a generator object. Also, the function does not really exit - it goes into a suspended state. When the for loop tries to loop over the generator object, the function resumes from its suspended state at the very next line after the yield it previously returned from, executes the next line of code - in this case, a yield statement - and returns that as the next item. This continues until the function exits, at which point the generator raises StopIteration and the loop exits.
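You can watch this suspend-and-resume cycle by driving the generator by hand with the built-in next() (Python 3 syntax):

```python
def f123():
    yield 1
    yield 2
    yield 3

gen = f123()      # no body code has run yet
print(next(gen))  # runs until the first yield -> 1
print(next(gen))  # resumes right after it -> 2
print(next(gen))  # -> 3
try:
    next(gen)     # the function exits -> StopIteration
except StopIteration:
    print("exhausted")
```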

So the generator object is sort of like an adapter - at one end it exhibits the iterator protocol, exposing __iter__() and next() methods to keep the for loop happy. At the other end, however, it runs the function just enough to get the next value out of it, and puts it back in suspended mode.

Why use generators?

Usually you can write code that doesn't use generators but implements the same logic. One option is to use the temporary-list "trick" I mentioned before. That will not work in all cases, for example if you have infinite loops, and it may make inefficient use of memory when you have a really long list. The other approach is to implement a new iterable class SomethingIter that keeps state in instance members and performs the next logical step in its next() (or __next__() in Python 3) method. Depending on the logic, the code inside the next() method may end up looking very complex and prone to bugs. Here generators provide a clean and easy solution.
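For instance, the infinite-loop case is one where the temporary-list trick cannot work at all, while a generator handles it naturally (all_naturals is a made-up example name):

```python
def all_naturals():
    # The list-based version of this would append forever
    # and never return; the generator hands out one value
    # at a time, on demand.
    n = 0
    while True:
        yield n
        n += 1

gen = all_naturals()
first_five = [next(gen) for _ in range(5)]
print(first_five)  # [0, 1, 2, 3, 4]
```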

Question

What is the use of the yield keyword in Python? What does it do?

For example, I'm trying to understand this code1

def _get_child_candidates(self, distance, min_dist, max_dist):
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild  

And this is the caller:

result, candidates = [], [self]
while candidates:
    node = candidates.pop()
    distance = node._get_dist(obj)
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))
return result

What happens when the method _get_child_candidates is called? Is a list returned? A single element? Is it called again? When will subsequent calls stop?

1. This code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.




The yield keyword boils down to two simple facts:

  1. If the compiler detects the yield keyword anywhere inside a function, that function no longer returns via the return statement. Instead, it immediately returns a lazy "pending list" object called a generator.
  2. A generator is iterable. What is an iterable? It's anything like a list, set, range, or dict view, with a built-in protocol for visiting each element in a certain order.

In a nutshell: a generator is a lazy, incrementally-pending list, and yield statements allow you to use function notation to program the list values the generator should incrementally spit out.

generator = myYieldingFunction(...)
x = list(generator)

   generator
       v
[x[0], ..., ???]

         generator
             v
[x[0], x[1], ..., ???]

               generator
                   v
[x[0], x[1], x[2], ..., ???]

                       StopIteration exception
[x[0], x[1], x[2]]     done

list==[x[0], x[1], x[2]]

Let's define a function makeRange that works just like Python's range. Calling makeRange(n) returns a generator:

def makeRange(n):
    # return 0,1,2,...,n-1
    i = 0
    while i < n:
        yield i
        i += 1

>>> makeRange(5)
<generator object makeRange at 0x19e4aa0>

To force the generator to immediately return its pending values, you can pass it into list() (just like you could any iterable):

>>> list(makeRange(5))
[0, 1, 2, 3, 4]

Comparing the example to "just returning a list"

The above example can be thought of as merely creating a list which you append to and return:

# list-version                   #  # generator-version
def makeRange(n):                #  def makeRange(n):
    """return [0,1,2,...,n-1]""" #~     """return 0,1,2,...,n-1"""
    TO_RETURN = []               #>
    i = 0                        #      i = 0
    while i < n:                 #      while i < n:
        TO_RETURN += [i]         #~         yield i
        i += 1                   #          i += 1  ## indented
    return TO_RETURN             #>

>>> makeRange(5)
[0, 1, 2, 3, 4]

There is one major difference, though; see the last section.

How you use generators

An iterable is the last part of a list comprehension, and all generators are iterable, so they're often used like so:

#                   _ITERABLE_
>>> [x+10 for x in makeRange(5)]
[10, 11, 12, 13, 14]

To get a better feel for generators, you can play around with the itertools module (make sure to use chain.from_iterable rather than chain when warranted). For example, you might even use generators to implement infinitely-long lazy lists like itertools.count(). You could implement your own def enumerate(iterable): zip(count(), iterable), or alternatively do so with the yield keyword in a while-loop.
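Both reimplementations of enumerate mentioned here can be sketched like this (the names enumerate_zip and enumerate_yield are chosen for illustration, to avoid shadowing the built-in):

```python
from itertools import count

# zip-based version: pair an infinite counter with the iterable.
def enumerate_zip(iterable):
    return zip(count(), iterable)

# yield-based version: the same thing with an explicit counter.
def enumerate_yield(iterable):
    i = 0
    for item in iterable:
        yield (i, item)
        i += 1

print(list(enumerate_zip("abc")))    # [(0, 'a'), (1, 'b'), (2, 'c')]
print(list(enumerate_yield("abc")))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```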

Please note: generators can actually be used for many more things, such as implementing coroutines, non-deterministic programming, and other elegant things. However, the "lazy lists" viewpoint I present here is the most common use you will find.

Behind the scenes

This is how the "Python iteration protocol" works, i.e. what is going on when you do list(makeRange(5)). This is what I described earlier as a "lazy, incremental list":

>>> x=iter(range(5))
>>> next(x)
0
>>> next(x)
1
>>> next(x)
2
>>> next(x)
3
>>> next(x)
4
>>> next(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

The built-in function next() just calls the object's .__next__() method, which is part of the "iteration protocol" and is found on all iterators. You can manually use the next() function (and other parts of the iteration protocol) to implement fancy things, usually at the expense of readability, so try to avoid doing that...

Details

Normally, most people would not care about the following distinctions and probably want to stop reading here.

In Python-speak, an iterable is any object which "understands the concept of a for-loop", like a list [1,2,3], and an iterator is a specific instance of the requested for-loop, like [1,2,3].__iter__(). A generator is exactly the same as any iterator, except for the way it was written (with function syntax).

When you request an iterator from a list, it creates a new iterator. However, when you request an iterator from an iterator (which you would rarely do), it just gives you itself.
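You can check this distinction directly with the built-in iter():

```python
nums = [1, 2, 3]

# A list hands out a brand-new iterator each time:
it1 = iter(nums)
it2 = iter(nums)
print(it1 is it2)        # False - two independent iterators

# An iterator (here, a generator expression) hands back itself:
gen = (n * n for n in nums)
print(iter(gen) is gen)  # True - the same object
```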

Thus, in the unlikely event that you are failing to do something like this...

> x = makeRange(5)
> list(x)
[0, 1, 2, 3, 4]
> list(x)
[]

...then remember that a generator is an iterator; that is, it is one-time-use. If you want to reuse it, you should call makeRange(...) again. If you need to use the result twice, convert the result to a list and store it in a variable: x = list(makeRange(5)). Those who absolutely need to clone a generator (for example, terrifyingly hackish metaprogrammers) can use itertools.tee if absolutely necessary, since the copyable-iterator Python PEP standards proposal has been deferred.
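For completeness, a small sketch of itertools.tee, reusing the makeRange definition from above:

```python
from itertools import tee

def makeRange(n):
    i = 0
    while i < n:
        yield i
        i += 1

# tee gives two independent views over one generator.
a, b = tee(makeRange(3))
print(list(a))  # [0, 1, 2]
print(list(b))  # [0, 1, 2] - b was unaffected by draining a
```

After calling tee, the original generator should not be used directly any more, since tee consumes it as the copies advance.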




For those who prefer a minimal working example, meditate on this interactive Python session:

>>> def f():
...   yield 1
...   yield 2
...   yield 3
... 
>>> g = f()
>>> for i in g:
...   print(i)
... 
1
2
3
>>> for i in g:
...   print(i)
... 
>>> # Note that this time nothing was printed



Yet another TL;DR

iterator on a list: next() returns the next element of the list

generator iterator: next() will compute the next element on the fly (execute code)

You can see yield/generators as a way to manually run the control flow from the outside (like continuing a loop one step at a time) by calling next(), however complex the flow is.

NOTE: a generator is NOT a normal function. It remembers its previous state, such as local variables (its stack frame); see other answers or articles for a detailed explanation. A generator can only be iterated over once. You could do without yield, but it would not be as nice, so it can be considered 'very nice' language sugar.




From a programming viewpoint, iterators can be implemented as thunks:

http://en.wikipedia.org/wiki/Thunk_(functional_programming)

To implement iterators, generators, and thread pools for concurrent execution, etc., as thunks (closures that delay a computation), one uses messages sent to a closure object, which has a dispatcher, and the dispatcher answers the "messages".

http://en.wikipedia.org/wiki/Message_passing

" next " is a message sent to a closure, created by " iter " call.

There are lots of ways to implement this computation. I used mutation but it is easy to do it without mutation, by returning the current value and the next yielder.

Here is a demonstration that uses the structure of R6RS Scheme, but the semantics are absolutely identical to Python's: it is the same model of computation, and only a change in syntax is required to rewrite it in Python.

Welcome to Racket v6.5.0.3.

-> (define gen
     (lambda (l)
       (define yield
         (lambda ()
           (if (null? l)
               'END
               (let ((v (car l)))
                 (set! l (cdr l))
                 v))))
       (lambda(m)
         (case m
           ('yield (yield))
           ('init  (lambda (data)
                     (set! l data)
                     'OK))))))
-> (define stream (gen '(1 2 3)))
-> (stream 'yield)
1
-> (stream 'yield)
2
-> (stream 'yield)
3
-> (stream 'yield)
'END
-> ((stream 'init) '(a b))
'OK
-> (stream 'yield)
'a
-> (stream 'yield)
'b
-> (stream 'yield)
'END
-> (stream 'yield)
'END
-> 



It's returning a generator. I'm not particularly familiar with Python, but I believe it's the same kind of thing as C#'s iterator blocks if you're familiar with those.

There's an IBM article which explains it reasonably well (for Python) as far as I can see.

The key idea is that the compiler/interpreter/whatever does some trickery so that as far as the caller is concerned, they can keep calling next() and it will keep returning values - as if the generator method was paused . Now obviously you can't really "pause" a method, so the compiler builds a state machine for you to remember where you currently are and what the local variables etc look like. This is much easier than writing an iterator yourself.
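To make the "compiler builds a state machine for you" idea concrete, here is a rough hand-written sketch of that machinery for a two-yield generator (an analogy only; the real state lives inside the interpreter's frame objects):

```python
class TwoYields:
    """Hand-rolled state machine equivalent to:

        def gen():
            yield 'a'
            yield 'b'
    """
    def __init__(self):
        self.state = 0  # remembers where we are "paused"

    def __iter__(self):
        return self

    def __next__(self):
        if self.state == 0:
            self.state = 1
            return 'a'
        elif self.state == 1:
            self.state = 2
            return 'b'
        raise StopIteration

print(list(TwoYields()))  # ['a', 'b']
```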




All the answers here are great, yet a bit difficult for newbies.

I assume you have learned the return statement.
As an analogy, return and yield are twins.
return means 'return and stop', whereas yield means 'return but continue'.

  1. Try to get a num_list with return:
def num_list(n):
    for i in range(n):
        return i

Run it:

In [5]: num_list(3)
Out[5]: 0

See, you get only a single number rather than a list of them. return only ever runs once, and then the function quits.

  2. Here comes yield

Replace return with yield:

In [10]: def num_list(n):
    ...:     for i in range(n):
    ...:         yield i
    ...:

In [11]: num_list(3)
Out[11]: <generator object num_list at 0x10327c990> 

In [12]: list(num_list(3))
Out[12]: [0, 1, 2]

Now you win: you get all the numbers.
Compared to return, which runs once and stops, yield runs as many times as you planned.
You can interpret return as return one of them,
and yield as return all of them. This is called an iterable.

  3. One more step: we can rewrite the yield statement in terms of return
In [15]: def num_list(n):
    ...:     result = []
    ...:     for i in range(n):
    ...:         result.append(i)
    ...:     return result

In [16]: num_list(3)
Out[16]: [0, 1, 2]

That's the core idea behind yield.

The difference between the list that return outputs and the object that yield outputs is:
you can always get [0, 1, 2] from a list object, but you can only retrieve them from 'the object yield outputs' once.
So it has a new name, generator object, as displayed in Out[11]: <generator object num_list at 0x10327c990>.

In conclusion, as a metaphor to grok it:

return and yield are twins;
list and generator are twins.




Many people use return rather than yield, but in some cases yield can be more efficient and easier to work with.

Here is an example for which yield is definitely best:

return (in function)

import random

def return_dates():
    dates = [] # with return you need to create a list then return it
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th", "6th", "7th", "8th", "9th", "10th"])
        dates.append(date)
    return dates

yield (in function)

import random

def yield_dates():
    for i in range(5):
        date = random.choice(["1st", "2nd", "3rd", "4th", "5th", "6th", "7th", "8th", "9th", "10th"])
        yield date # yield automatically makes this function a generator, which works similarly but is much more memory-efficient

Calling functions

dates_list = return_dates()
print(dates_list)
for i in dates_list:
    print(i)

dates_generator = yield_dates()
print(dates_generator)
for i in  dates_generator:
    print(i)

Both functions do the same thing, but yield uses three lines instead of five and has one less variable to worry about.

As you can see, both functions do the same thing; the only difference is that return_dates() gives a list while yield_dates() gives a generator.

A real-life example would be something like reading a file line by line, or when you just want to make a generator.
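A small sketch of that file-reading case: lazily filtering lines without loading the whole file into memory. Here io.StringIO stands in for a real file object, and the function name and keyword are made up for the example:

```python
import io

def matching_lines(fileobj, keyword):
    # Yields one line at a time; nothing is read until iteration starts,
    # and only one line is held in memory at a time.
    for line in fileobj:
        if keyword in line:
            yield line.rstrip("\n")

# Simulate a file with io.StringIO; a real open() file works the same way.
fake_file = io.StringIO("error: disk\nok\nerror: net\n")
for line in matching_lines(fake_file, "error"):
    print(line)
# error: disk
# error: net
```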




While a lot of answers show why you'd use a yield to create a generator, there are more uses for yield . It's quite easy to make a coroutine, which enables the passing of information between two blocks of code. I won't repeat any of the fine examples that have already been given about using yield to create a generator.

To help understand what a yield does in the following code, you can use your finger to trace the cycle through any code that has a yield . Every time your finger hits the yield , you have to wait for a next or a send to be entered. When a next is called, you trace through the code until you hit the yield … the code on the right of the yield is evaluated and returned to the caller… then you wait. When next is called again, you perform another loop through the code. However, you'll note that in a coroutine, yield can also be used with a send … which will send a value from the caller into the yielding function. If a send is given, then yield receives the value sent, and spits it out the left hand side… then the trace through the code progresses until you hit the yield again (returning the value at the end, as if next was called).

For example:

>>> def coroutine():
...     i = -1
...     while True:
...         i += 1
...         val = (yield i)
...         print("Received %s" % val)
...
>>> sequence = coroutine()
>>> next(sequence)
0
>>> next(sequence)
Received None
1
>>> sequence.send('hello')
Received hello
2
>>> sequence.close()



(My below answer only speaks from the perspective of using Python generator, not the underlying implementation of generator mechanism , which involves some tricks of stack and heap manipulation.)

When yield is used instead of return in a Python function, that function is turned into something special called a generator function, and calling it returns an object of generator type. The yield keyword is a flag that tells the Python compiler to treat such a function specially. Normal functions terminate once some value is returned. But with the compiler's help, a generator function can be thought of as resumable: its execution context is restored and execution continues from the last run, until it explicitly returns or reaches the end of the function, which raises a StopIteration exception (also part of the iterator protocol). I found a lot of references about generators, but this one from the functional programming perspective is the most digestible.

(Now I want to talk about the rationale behind generator , and the iterator based on my own understanding. I hope this can help you grasp the essential motivation of iterator and generator. Such concept shows up in other languages as well such as C#.)

As I understand it, when we want to process a bunch of data, we usually store the data somewhere first and then process it one by one. But this intuitive approach is problematic. If the data volume is huge, it's expensive to store it all beforehand. So instead of storing the data itself directly, why not store some kind of metadata indirectly, i.e. the logic of how the data is computed.

There are 2 approaches to wrap such metadata.

  1. The OO approach: we wrap the metadata in a class. This is the so-called iterator, which implements the iterator protocol (i.e. the __next__() and __iter__() methods). This is also the commonly seen iterator design pattern.
  2. The functional approach: we wrap the metadata in a function. This is the so-called generator function. But under the hood, the returned generator object still IS-A iterator, because it also implements the iterator protocol.

Either way, an iterator is created, ie some object that can give you the data you want. The OO approach may be a bit complex. Anyway, which one to use is up to you.
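Side by side, the two approaches might look like this (Evens is an example name chosen for illustration):

```python
# 1. OO approach: wrap the "how to compute" logic in an iterator class.
class Evens:
    def __init__(self, limit):
        self.n = 0
        self.limit = limit

    def __iter__(self):
        return self

    def __next__(self):
        if self.n >= self.limit:
            raise StopIteration
        value = self.n * 2
        self.n += 1
        return value

# 2. Functional approach: wrap the same logic in a generator function.
def evens(limit):
    for n in range(limit):
        yield n * 2

print(list(Evens(4)))  # [0, 2, 4, 6]
print(list(evens(4)))  # [0, 2, 4, 6]
```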




yield gives you a generator

A return in a function returns a single value.

If you want a function to return a huge set of values, use yield.

More importantly, yield is a barrier

Like a barrier in the CUDA language, it will not transfer control until it gets completed.

That is, it runs the code in your function from the beginning until it hits yield. Then it returns the first value of the loop. After that, each subsequent call runs the loop you have written in the function one more time, returning the next value, until there is no value left to return.




yield is just like return - it returns whatever you tell it to (as long as the generator isn't exhausted). The only difference is that the next time you call next() on the generator, execution resumes right after the last yield statement.

In the case of your code, the function get_child_candidates is acting like an iterator so that when you extend your list, it adds one element at a time to the new list.

list.extend calls an iterator until it's exhausted. In the case of the code sample you posted, it would be much clearer to just return a tuple and append that to the list.




Here is a mental image of what yield does.

I like to think of a thread as having a stack (even when it's not implemented that way).

When a normal function is called, it puts its local variables on the stack, does some computation, then clears the stack and returns. The values of its local variables are never seen again.

With a yield function, when its code begins to run (ie after the function is called, returning a generator object, whose next() method is then invoked), it similarly puts its local variables onto the stack and computes for a while. But then, when it hits the yield statement, before clearing its part of the stack and returning, it takes a snapshot of its local variables and stores them in the generator object. It also writes down the place where it's currently up to in its code (ie the particular yield statement).

So it's a kind of a frozen function that the generator is hanging onto.

When next() is called subsequently, it retrieves the function's belongings onto the stack and re-animates it. The function continues to compute from where it left off, oblivious to the fact that it had just spent an eternity in cold storage.

Compare the following examples:

def normalFunction():
    return
    if False:
        pass

def yielderFunction():
    return
    if False:
        yield 12

When we call the second function, it behaves very differently to the first. The yield statement might be unreachable, but if it's present anywhere, it changes the nature of what we're dealing with.

>>> yielderFunction()
<generator object yielderFunction at 0x07742D28>

Calling yielderFunction() doesn't run its code, but makes a generator out of the code. (Maybe it's a good idea to name such things with the yielder prefix for readability.)

>>> gen = yielderFunction()
>>> dir(gen)
['__class__',
 ...
 '__iter__',    #Returns gen itself, to make it work uniformly with containers
 ...            #when given to a for loop. (Containers return an iterator instead.)
 'close',
 'gi_code',
 'gi_frame',
 'gi_running',
 'next',        #The method that runs the function's body.
 'send',
 'throw']

The gi_code and gi_frame fields are where the frozen state is stored. Exploring them with dir(..) , we can confirm that our mental model above is credible.
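You can peek at that frozen state yourself through gi_frame.f_locals (a CPython implementation detail, so treat this as exploration rather than a guaranteed API):

```python
def yielder():
    a = 1
    yield a
    b = 2
    yield a + b

gen = yielder()
print(gen.gi_frame.f_locals)  # {} - nothing has run yet

next(gen)
print(gen.gi_frame.f_locals)  # {'a': 1} - locals frozen at the first yield

next(gen)
print(gen.gi_frame.f_locals)  # {'a': 1, 'b': 2}
```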




I was going to post "read page 19 of Beazley's 'Python: Essential Reference' for a quick description of generators", but so many others have posted good descriptions already.

Also, note that yield can be used in coroutines as the dual of their use in generator functions. Although it isn't the same use as your code snippet, (yield) can be used as an expression in a function. When a caller sends a value to the method using the send() method, then the coroutine will execute until the next (yield) statement is encountered.

Generators and coroutines are a cool way to set up data-flow type applications. I thought it would be worthwhile knowing about the other use of the yield statement in functions.




TL;DR

When you find yourself building a list from scratch...

def squares_list(n):
    the_list = []                         # Replace
    for x in range(n):
        y = x * x
        the_list.append(y)                # these
    return the_list                       # lines

... yield each piece instead

def squares_the_yield_way(n):
    for x in range(n):
        y = x * x
        yield y                           # with this

This was my first "aha" moment with yield.

yield is a sugary way to say

build a series of stuff

Same behavior:

>>> for square in squares_list(4):
...     print(square)
...
0
1
4
9
>>> for square in squares_the_yield_way(4):
...     print(square)
...
0
1
4
9

Different behavior:

Yield is single-pass : you can only iterate through once. When a function has a yield in it we call it a generator function . And an iterator is what it returns. That's revealing. We lose the convenience of a container, but gain the power of an arbitrarily long series.

Yield is lazy , it puts off computation. A function with a yield in it doesn't actually execute at all when you call it. The iterator object it returns uses magic to maintain the function's internal context. Each time you call next() on the iterator (this happens in a for-loop) execution inches forward to the next yield. ( return raises StopIteration and ends the series.)

Yield is versatile . It can do infinite loops:

>>> def squares_all_of_them():
...     x = 0
...     while True:
...         yield x * x
...         x += 1
...
>>> squares = squares_all_of_them()
>>> for _ in range(4):
...     print(next(squares))
...
0
1
4
9

If you need multiple passes and the series isn't too long, just call list() on it:

>>> list(squares_the_yield_way(4))
[0, 1, 4, 9]

Brilliant choice of the word yield because both meanings apply:

yield — produce or provide (as in agriculture)

...provide the next data in the series.

yield — give way or relinquish (as in political power)

...relinquish CPU execution until the iterator advances.



