
Hive launches a MapReduce job with 0 mappers and 0 reducers. Detailed log below. Can anyone help analyze the cause?

zhangshuai posted on 2015-7-7 16:27:44
[root@master cron.daily]# ./atet_core.sh
文件存在 (script output: file exists)
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
15/07/07 16:16:59 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.0.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/hive-jdbc-1.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
OK
Time taken: 2.789 seconds
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
15/07/07 16:17:31 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.0.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/hive-jdbc-1.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Added [/usr/local/hive/lib/rownum.jar] to class path
Added resources: [/usr/local/hive/lib/rownum.jar]
OK
Time taken: 2.843 seconds
OK
Time taken: 1.057 seconds
Query ID = root_20150707161717_ca25dc69-1e50-43cb-90cc-41de89891f20
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1435218801012_0141, Tracking URL = http://master:8088/proxy/application_1435218801012_0141/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1435218801012_0141
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2015-07-07 16:19:00,293 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1435218801012_0141 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
15/07/07 16:19:05 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/07/07 16:19:05 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/07/07 16:19:05 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/07/07 16:19:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `atet_logs_stat` AS t LIMIT 1
15/07/07 16:19:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `atet_logs_stat` AS t LIMIT 1
15/07/07 16:19:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/local/hadoop
Note: /tmp/sqoop-root/compile/7ae934ba7647baade846aed66fb728fa/atet_logs_stat.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/07/07 16:19:12 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/7ae934ba7647baade846aed66fb728fa/atet_logs_stat.jar
15/07/07 16:19:12 INFO mapreduce.ExportJobBase: Beginning export of atet_logs_stat
15/07/07 16:19:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/07 16:19:14 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/07/07 16:19:16 WARN mapreduce.ExportJobBase: Input path hdfs://cluster1/hive/warehouse/atet_tmp contains no files
15/07/07 16:19:16 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/07/07 16:19:16 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/07/07 16:19:16 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/07/07 16:19:17 INFO client.RMProxy: Connecting to ResourceManager at master/10.1.1.231:8032
15/07/07 16:19:20 INFO input.FileInputFormat: Total input paths to process : 0
15/07/07 16:19:20 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
15/07/07 16:19:20 INFO input.FileInputFormat: Total input paths to process : 0
15/07/07 16:19:21 INFO mapreduce.JobSubmitter: number of splits:0
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
15/07/07 16:19:21 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
15/07/07 16:19:21 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
15/07/07 16:19:21 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
15/07/07 16:19:21 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
15/07/07 16:19:21 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
15/07/07 16:19:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435218801012_0142
15/07/07 16:19:23 INFO impl.YarnClientImpl: Submitted application application_1435218801012_0142 to ResourceManager at master/10.1.1.231:8032
15/07/07 16:19:23 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1435218801012_0142/
15/07/07 16:19:23 INFO mapreduce.Job: Running job: job_1435218801012_0142
15/07/07 16:19:39 INFO mapreduce.Job: Job job_1435218801012_0142 running in uber mode : false
15/07/07 16:19:39 INFO mapreduce.Job:  map 0% reduce 0%
15/07/07 16:19:41 INFO mapreduce.Job: Job job_1435218801012_0142 completed successfully
15/07/07 16:19:41 INFO mapreduce.Job: Counters: 2
        Job Counters
                Total time spent by all maps in occupied slots (ms)=0
                Total time spent by all reduces in occupied slots (ms)=0
15/07/07 16:19:41 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/07/07 16:19:41 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 25.0445 seconds (0 bytes/sec)
15/07/07 16:19:41 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/07/07 16:19:41 INFO mapreduce.ExportJobBase: Exported 0 records.
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
15/07/07 16:19:55 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-1.0.1.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/hive-jdbc-1.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
OK
Time taken: 3.085 seconds
[root@master cron.daily]#

bob007 posted on 2015-7-7 16:51:14
select * from table
hive> select * from t limit 1;
This is executed directly as a FetchTask.

Hive has many kinds of tasks, and FetchTask is about the simplest of them. Unlike a MapReduce task, a FetchTask does not start MapReduce at all; it reads the files directly and prints the result. So when you run a plain "select * ... limit" statement, no MapReduce job is launched.

Reference:
Hive中什么时候执行FetchTask任务 (When does Hive execute a FetchTask)
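
A minimal sketch of this behaviour at the hive CLI (t is a placeholder table, as in the example above). The switch is hive.fetch.task.conversion, whose value "none" exists since Hive 0.14, so it is available in Hive 1.0.1:

-- default since Hive 0.14 is "more": a simple select/limit is served by a FetchTask, no job is submitted
hive> set hive.fetch.task.conversion=more;
hive> select * from t limit 1;

-- force the same query through MapReduce for comparison: Hive now prints "Starting Job = job_..."
hive> set hive.fetch.task.conversion=none;
hive> select * from t limit 1;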

zhangshuai posted on 2015-7-7 16:58:14
Quoting bob007 (2015-7-7 16:51):
select * from table
hive> select * from t limit 1;
This is executed directly as a FetchTask.

What are you talking about? I don't follow. Mine is not a simple select * query.

bob007 posted on 2015-7-7 17:05:42
Quoting zhangshuai (2015-7-7 16:58):
What are you talking about? I don't follow. Mine is not a simple select * query.

Judging from your log, the statement in question should be the one below:
SELECT t.* FROM `atet_logs_stat` AS t LIMIT 1

If it is only that, it does not need to go through MapReduce.

zhangshuai posted on 2015-7-7 17:08:14
Quoting bob007 (2015-7-7 17:05):
Judging from your log, the statement in question should be the one below:
SELECT t.* FROM `atet_logs_stat` AS t LIMIT 1

hive -e "add jar /usr/local/hive/lib/rownum.jar;create temporary function row_number as 'com.mapreduce.RowNumber';create table atet_tmp(id int,datetime date,channelid string,deviceid string,account int);insert overwrite table atet_tmp select row_number(1)+unix_timestamp(),to_date('2015-04-01'),nvl(channelid,'0'),nvl(deviceid,'0'),count(distinct atetid) from atet where substr(createtime,0,7) = '2015-04' group by to_date(createtime),channelid,deviceid grouping sets((channelid,deviceid),channelid,());insert into table atet_tmp select row_number(1)+unix_timestamp(),to_date('2015-05-01'),nvl(channelid,'0'),nvl(deviceid,'0'),count(distinct atetid) from atet where substr(createtime,0,7) ='2015-05' group by to_date(createtime),channelid,deviceid grouping sets((channelid,deviceid),channelid,());"



That is my actual SQL.
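
For readability only, here is the same statement sequence from the hive -e one-liner above, broken across lines (no content changed):

add jar /usr/local/hive/lib/rownum.jar;
create temporary function row_number as 'com.mapreduce.RowNumber';

create table atet_tmp(id int, datetime date, channelid string, deviceid string, account int);

insert overwrite table atet_tmp
select row_number(1) + unix_timestamp(), to_date('2015-04-01'),
       nvl(channelid, '0'), nvl(deviceid, '0'), count(distinct atetid)
from atet
where substr(createtime, 0, 7) = '2015-04'
group by to_date(createtime), channelid, deviceid
grouping sets ((channelid, deviceid), channelid, ());

insert into table atet_tmp
select row_number(1) + unix_timestamp(), to_date('2015-05-01'),
       nvl(channelid, '0'), nvl(deviceid, '0'), count(distinct atetid)
from atet
where substr(createtime, 0, 7) = '2015-05'
group by to_date(createtime), channelid, deviceid
grouping sets ((channelid, deviceid), channelid, ());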

NEOGX posted on 2015-7-7 17:14:46
Quoting zhangshuai (2015-7-7 17:08):
hive -e "add jar /usr/local/hive/lib/rownum.jar;create temporary function row_number as 'com.mapre ...

What exactly do you mean?
Did it succeed or not?
The log shows a MapReduce job was submitted.

zhangshuai posted on 2015-7-7 17:19:41
Quoting NEOGX (2015-7-7 17:14):
What exactly do you mean?
Did it succeed or not?
The log shows a MapReduce job was submitted.

If it had succeeded I would not be asking for help here. Of course it did not succeed! Did you not see "number of mappers: 0; number of reducers: 0"?
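
A hedged note on where to dig further: when a Hive stage dies with "return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask", the Hive console rarely shows the real error; it is usually in the YARN application logs of the failed stage. A sketch, assuming YARN log aggregation is enabled on this cluster (application id taken from the log at the top of the thread):

# dump the container logs of the failed Hive stage and look for the first exception
yarn logs -applicationId application_1435218801012_0141 | less

# the tracking URL printed by Hive points at the same application:
#   http://master:8088/proxy/application_1435218801012_0141/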

s060403072 posted on 2015-7-7 18:04:23

That SQL is a mess;
at first glance it looks like a SQL syntax problem.

zhangshuai posted on 2015-7-7 18:06:11
Quoting s060403072 (2015-7-7 18:04):
That SQL is a mess;
at first glance it looks like a SQL syntax problem.

+------------+------------+----------------------+----------------------+---------+
| id         | datetime   | channelid            | deviceid             | account |
+------------+------------+----------------------+----------------------+---------+
| 1436306206 | 2015-05-01 | 20140701212944327101 | 20141017170423869121 |    2157 |
| 1436306207 | 2015-05-01 | 20140701212944327101 | 20141017170851923127 |    2124 |
| 1436305624 | 2015-04-01 | 0                    | 0                    |    2638 |
| 1436305626 | 2015-04-01 | 20140701212944327101 | 0                    |    2638 |
| 1436305627 | 2015-04-01 | 20140701212944327101 | 20140910175136992102 |    1263 |
| 1436306202 | 2015-05-01 | 0                    | 0                    |    2905 |
| 1436306204 | 2015-05-01 | 20140701212944327101 | 0                    |    2905 |
| 1436306205 | 2015-05-01 | 20140701212944327101 | 20140910175136992102 |    1698 |
| 1436305628 | 2015-04-01 | 20140701212944327101 | 20141017170423869121 |    1614 |
| 1436305629 | 2015-04-01 | 20140701212944327101 | 20141017170851923127 |    1651 |
+------------+------------+----------------------+----------------------+---------+

Why did it come out like this after I rebooted the machine and re-ran it? So it succeeded? Then it should not be a SQL error, right?
