sql - Include null values in an Apache Spark join





I want to include null values in an Apache Spark join. By default, Spark does not include rows with null values.

Here is the default Spark behavior:

// assumes a SparkSession named `spark`; toDF needs its implicits outside spark-shell
import spark.implicits._

val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")

val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")

val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))

Here is the output of joinedDf.show():

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
+-------+-------+

Here is the output I would like:

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|       |    hhh|
|   null|    zzz|
+-------+-------+

Spark provides a special NULL-safe equality operator:

numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+

Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 - Fast null-safe join).
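
The reason the plain join drops the null row is that standard SQL equality evaluates NULL = NULL to NULL (unknown), so those rows never match, while <=> evaluates it to true. A minimal sketch illustrating the difference, assuming a SparkSession named spark:

// NULL = NULL is NULL (no match), while NULL <=> NULL is true
spark.sql("SELECT NULL = NULL AS plain_eq, NULL <=> NULL AS null_safe_eq").show()

Expected output (roughly):

+--------+------------+
|plain_eq|null_safe_eq|
+--------+------------+
|    null|        true|
+--------+------------+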

In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:

numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])

numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
|    456|    456|    def|
|   null|   null|    zzz|
|       |       |    hhh|
|    123|    123|    abc|
+-------+-------+-------+

and %<=>% in SparkR:

numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))

head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc

With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:

SELECT * FROM numbers JOIN letters 
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
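
For this SQL to resolve the numbers and letters relations, the two DataFrames have to be registered as temporary views first. A minimal sketch in Scala, assuming a SparkSession named spark:

numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

spark.sql("""
  SELECT * FROM numbers JOIN letters
  ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
""").show()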

This can also be used with the DataFrame API:

numbersDf.alias("numbers")
  .join(lettersDf.alias("letters"))
  .where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")

Try the following method to include the null rows in the result of the JOIN operator:

import org.apache.spark.sql.{Column, DataFrame}

def nullSafeJoin(leftDF: DataFrame, rightDF: DataFrame, columns: Seq[String], joinType: String): DataFrame = {

    // combine a null-safe equality (<=>) over every join column
    var columnsExpr: Column = leftDF(columns.head) <=> rightDF(columns.head)

    columns.drop(1).foreach(column => {
        columnsExpr = columnsExpr && (leftDF(column) <=> rightDF(column))
    })

    // join on the combined condition, then drop the duplicate key columns from the left side
    var joinedDF: DataFrame = leftDF.join(rightDF, columnsExpr, joinType)

    columns.foreach(column => {
        joinedDF = joinedDF.drop(leftDF(column))
    })

    joinedDF
}
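
A quick usage sketch with the DataFrames from the question, doing an inner join on the single numbers column (row order may differ):

val nullSafeJoinedDf = nullSafeJoin(numbersDf, lettersDf, Seq("numbers"), "inner")
nullSafeJoinedDf.show()

+-------+-------+
|numbers|letters|
+-------+-------+
|    123|    abc|
|    456|    def|
|   null|    zzz|
|       |    hhh|
+-------+-------+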





