val jdbcDF = spark.read.format("jdbc").option("url","jdbc:sqlserver://192.168.1.21;username=sa;password=yishidb;database=CDRDB16").option("driver","com.microsoft.sqlserver.jdbc.SQLServerDriver").option("dbtable","DC_PATIENT").load()

This is the table I pulled out of SQL Server; that part works fine.
jdbcDF.write.mode(SaveMode.Overwrite).options(Map("table" -> "DC_PATIENT", "zkUrl" -> "hadoop001:2181")).format("org.apache.phoenix.spark").save()

But this line fails with the following error:
scala> jdbcDF.write.mode(SaveMode.Overwrite).options(Map("table" -> "DC_PATIENT", "zkUrl" -> "hadoop001:2181")).format("org.apache.phoenix.spark").save()
<console>:26: error: not found: value SaveMode
jdbcDF.write.mode(SaveMode.Overwrite).options(Map("table" -> "DC_PATIENT", "zkUrl" -> "hadoop001:2181")).format("org.apache.phoenix.spark").save()
^
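The error `not found: value SaveMode` only means the `SaveMode` enum is not in scope in the spark-shell session; it lives in `org.apache.spark.sql`. A minimal sketch of the fix, reusing the same `jdbcDF`, table name, and `zkUrl` from above (and assuming Phoenix/HBase is reachable at `hadoop001:2181`):

```scala
// SaveMode is not imported by default in spark-shell; bring it in first.
import org.apache.spark.sql.SaveMode

// Same write as before, now that SaveMode resolves.
jdbcDF.write
  .mode(SaveMode.Overwrite)
  .options(Map("table" -> "DC_PATIENT", "zkUrl" -> "hadoop001:2181"))
  .format("org.apache.phoenix.spark")
  .save()
```

Note that the phoenix-spark connector (at least in older releases) only accepts `SaveMode.Overwrite`, which it executes as Phoenix UPSERTs rather than a truncate-and-reload, so other modes such as `Append` will be rejected.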