
A problem while running Spark.

liuzhixin137 posted on 2016-6-7 18:00:26
34290157916,10.4.196.143,14825BA28D1F,4:02,0149,20160229T112740,0x01#高清影院-付费#149#0;02,0112,2016229T145738,0x02#湖南卫视#112#034290157917,10.4.196.144,14825BA28D1F,4:02,0150,20160229T112740,0x01#湖北卫视#150#0;02,0035,20160120T181745,0

That is the stdout the job printed. The error in the executor log is:

16/06/07 23:15:53 INFO storage.MemoryStore: ensureFreeSpace(14540) called with curMem=7140, maxMem=273701928
16/06/07 23:15:53 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.2 KB, free 261.0 MB)
16/06/07 23:15:53 INFO broadcast.TorrentBroadcast: Reading broadcast variable 0 took 54 ms
16/06/07 23:15:54 INFO storage.MemoryStore: ensureFreeSpace(204280) called with curMem=21680, maxMem=273701928
16/06/07 23:15:54 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 199.5 KB, free 260.8 MB)
16/06/07 23:15:58 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/06/07 23:15:58 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/06/07 23:15:58 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/06/07 23:15:58 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/06/07 23:15:58 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/06/07 23:16:01 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
        at com.ghost.scala.DataTranslate$.AnalyData(DataTranslate.scala:90)
        at com.ghost.scala.DataTranslate$$anonfun$1.apply(DataTranslate.scala:29)
        at com.ghost.scala.DataTranslate$$anonfun$1.apply(DataTranslate.scala:29)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:203)
        at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
16/06/07 23:16:02 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
        at com.ghost.scala.DataTranslate$.AnalyData(DataTranslate.scala:90)
        ... (stack trace identical to the one above)
16/06/07 23:16:02 INFO storage.DiskBlockManager: Shutdown hook called
16/06/07 23:16:02 INFO util.ShutdownHookManager: Shutdown hook called
16/06/07 23:16:02 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-488f385b-c163-40dc-a41f-56c9361a9aea/executor-2ca7dcbd-49b0-4377-ac27-017b5b0c8ea2/spark-70302962-396b-41b2-815f-b5b0dd9b886f
16/06/07 23:16:02 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
16/06/07 23:16:02 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/07 23:16:02 INFO rdd.HadoopRDD: Input split: hdfs://master:8020/usr/data/miniData.txt:129+129
16/06/07 23:16:02 ERROR executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
        at com.ghost.scala.DataTranslate$.AnalyData(DataTranslate.scala:90)
        ... (stack trace identical to the one above)
16/06/07 23:16:02 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
        at com.ghost.scala.DataTranslate$.AnalyData(DataTranslate.scala:90)
        ... (stack trace identical to the one above)
16/06/07 23:16:02 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 2
16/06/07 23:16:02 INFO executor.Executor: Running task 0.1 in stage 0.0 (TID 2)
16/06/07 23:16:02 INFO rdd.HadoopRDD: Input split: hdfs://master:8020/usr/data/miniData.txt:0+129
The source code at that line is:
def AnalyData(pstr: String): String = {
  println("==============" + pstr + "=====================")
  var firstArr: Array[String] = pstr.split(";")          // split the line into records on ';'
  var secondArr: Array[String] = firstArr(0).split(":")  // split the first record on ':'
  var thirdArr: Array[String] = secondArr(0).split(",")  // split into comma-separated fields
  // ... (the method continues; line 90, where the error is thrown, is further down)

The parsing matches the data format shown above, yet it reports that the method does not exist. Please help me take a look. Many thanks!

Replies (6)

qcbb001 posted on 2016-6-7 20:23:45

This doesn't look related to your code. It feels like the environment isn't set up properly, so various packages are missing.

liuzhixin137 posted on 2016-6-8 09:22:44
qcbb001 posted on 2016-6-7 20:23:
This doesn't look related to your code. It feels like the environment isn't set up properly, so various packages are missing.

Apart from my Scala version being a bit newer, there's no difference from the cluster. I'll check it again. Thanks!

liuzhixin137 posted on 2016-6-8 14:52:08
I'm redoing the environment now. I'm not sure whether this is caused by an incompatibility between the Spark and Scala versions.

Is there a concrete document anywhere that spells out which Spark releases pair with which Scala versions?

My Spark is 1.5.0.
My Scala is 2.11.4 (it was 2.11.8 to begin with).

Since the big test cluster we have runs Spark 1.5.0 with Scala 2.10.4, I've changed mine to match. Trying that now.
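Something like this in build.sbt pins the two together (just a sketch: the project name is made up, and spark-core is marked "provided" because the cluster ships its own copy):

name := "data-translate"  // hypothetical project name

// Must match the Scala binary version the cluster's Spark 1.5.0 was built against.
scalaVersion := "2.10.4"

// "provided": compile against Spark but don't bundle it into the job jar,
// so the executors use the cluster's own spark-core.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.0" % "provided"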

liuzhixin137 posted on 2016-6-8 14:56:53
It was indeed that problem, and it's solved now. The current environment is:
spark 1.5.0
scala 2.10.4
hadoop 2.6.0
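For anyone who finds this later: scala.runtime.ObjectRef.create is a factory method that only exists in the Scala 2.11 runtime library. The 2.11 compiler emits a call to it whenever a local var is captured by a closure, so a jar built with 2.11 fails with exactly this NoSuchMethodError as soon as such code runs on executors whose scala-library is 2.10. A minimal sketch of the pattern that triggers it (hypothetical code, not the actual DataTranslate source):

// When compiled with scalac 2.11, the captured var 'acc' is boxed through
// scala.runtime.ObjectRef.create(...). A 2.10 scala-library on the executors
// has no such method, hence the NoSuchMethodError at runtime.
def concatFields(fields: Array[String]): String = {
  var acc = ""                      // a local var ...
  fields.foreach { f => acc += f }  // ... captured by a closure: this is what gets boxed
  acc
}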

And now a new problem has shown up. Oh joy.

Gong小阳 posted on 2018-4-12 20:23:54
Hello, I'm getting "ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]". Is that an environment problem?

fly2015 posted on 2018-4-13 11:11:38
liuzhixin137 posted on 2016-6-8 14:56:
It was indeed that problem, and it's solved now. The current environment is
spark 1.5.0
scala 2.10.4

It's the Scala version problem. (By the way, are you a 哥哥 fan too? So am I.)
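Back on topic: a quick way to confirm this kind of mismatch is to ask the executors which Scala runtime they actually load, then compare it with the version your jar was compiled against. A small sketch to run in spark-shell (sc is the usual SparkContext):

// Runs one tiny task on an executor and prints the scala-library version it sees.
sc.parallelize(Seq(1))
  .map(_ => scala.util.Properties.versionString)
  .collect()
  .foreach(println)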

