
Hadoop time is inconsistent with the system time

wqk151 posted on 2016-3-21 18:31:08 · 7 replies · 20380 views
My environment: Hadoop 2.5 + Hive 1.2.
I created a table in Hive with 4 columns and loaded 4 rows of data.
Running a count(*) query on it is extremely slow: after 10 minutes there was still no result, the NodeManager process had quietly disappeared, and only the MRAppMaster process was left running.
Looking at yarn-root-resourcemanager-wqk01.log, one thing stands out: the log timestamps read 10:00 while the system clock reads 18:00. The system clocks have all been synchronized and show the correct time, so I have no idea what is going on. Has anyone run into this?

7 replies

when30 posted on 2016-3-21 18:58:30
How many nodes do you have? Double-check that you are reading the right log. It's also best to install NTP.
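To make the NTP suggestion concrete, here is a hedged sketch for a CentOS 6-era host, which would match a 2016 cluster; the package, service, and server names are the conventional ones and may differ on your distro:

```shell
# Hedged sketch: one-shot sync first, then keep the daemon running.
yum install -y ntp          # package name may differ (e.g. chrony on newer distros)
ntpdate -u pool.ntp.org     # one-shot sync before starting the daemon
service ntpd start          # keep clocks in sync from now on
chkconfig ntpd on           # survive reboots
```

Run the same steps on every node so the whole cluster tracks the same reference clock.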

wqk151 posted on 2016-3-22 12:14:56
Replying to when30 (2016-3-21 18:58): "How many nodes do you have? Double-check that you are reading the right log. It's also best to install NTP."

4 nodes.
The system time on every node is correct and consistent across the cluster; it is the Hadoop log timestamps that are wrong.
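One detail worth noticing: the gap reported in the original post (10:00 in the log vs 18:00 on the wall clock) is exactly 8 hours, which matches the Asia/Shanghai (UTC+8) offset and points at a timezone mismatch rather than clock drift. A quick sketch to measure your own local-vs-UTC offset, assuming GNU date:

```shell
# Capture one fixed instant so both reads agree, then compare the hour
# in the local timezone against the hour in UTC.
now=$(date +%s)
local_h=$(date -d "@$now" +%H)    # hour in the local timezone (GNU date syntax)
utc_h=$(date -u -d "@$now" +%H)   # hour in UTC
# 10# forces base-10 so hours like "08" aren't parsed as octal
offset=$(( (10#$local_h - 10#$utc_h + 24) % 24 ))
echo "local offset from UTC: ${offset}h"
```

Running this on each of the 4 nodes (e.g. over ssh) would show immediately whether any node's timezone disagrees with the rest.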

atsky123 posted on 2016-3-22 13:13:55
Replying to wqk151 (2016-3-22 12:14): "4 nodes. The system time on every node is correct and consistent; it is the Hadoop log timestamps that are wrong."

That part should be fine; you have most likely misread the log, since log timestamps come straight from the system clock. Clear the logs and test again.

wqk151 posted on 2016-3-22 14:30:59
Last edited by wqk151 on 2016-3-22 16:22

2016-03-22 06:19:28,843 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,844 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,844 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,846 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,846 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select count(*) from t
2016-03-22 06:19:28,848 INFO  [main]: parse.ParseDriver (ParseDriver.java:parse(209)) - Parse Completed
2016-03-22 06:19:28,848 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=parse start=1458627568846 end=1458627568848 duration=2 from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,849 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,850 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10127)) - Starting Semantic Analysis
2016-03-22 06:19:28,851 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10074)) - Completed phase 1 of Semantic Analysis
2016-03-22 06:19:28,851 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1552)) - Get metadata for source tables
2016-03-22 06:19:28,852 INFO  [pool-3-thread-2]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(746)) - 2: source:192.168.1.241 get_table : db=default tbl=t
2016-03-22 06:19:28,853 INFO  [pool-3-thread-2]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(371)) - ugi=root        ip=192.168.1.241        cmd=source:192.168.1.241 get_table : db=default tbl=t        
2016-03-22 06:19:28,891 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1704)) - Get metadata for subqueries
2016-03-22 06:19:28,891 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:getMetaData(1728)) - Get metadata for destination tables
2016-03-22 06:19:28,913 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1
2016-03-22 06:19:28,914 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genResolvedParseTree(10078)) - Completed getting MetaData in Semantic Analysis
2016-03-22 06:19:28,914 INFO  [main]: parse.BaseSemanticAnalyzer (CalcitePlanner.java:canCBOHandleAst(388)) - Not invoking CBO because the statement has too few joins
2016-03-22 06:19:28,916 INFO  [main]: common.FileUtils (FileUtils.java:mkdir(501)) - Creating directory if it doesn't exist: hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1/-mr-10000/.hive-staging_hive_2016-03-22_06-19-28_846_8718151686200813519-1
2016-03-22 06:19:28,955 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:genFileSinkPlan(6653)) - Set stats collection dir : hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1/-mr-10000/.hive-staging_hive_2016-03-22_06-19-28_846_8718151686200813519-1/-ext-10002
2016-03-22 06:19:28,962 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for FS(6)
2016-03-22 06:19:28,962 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(5)
2016-03-22 06:19:28,963 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(4)
2016-03-22 06:19:28,964 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for RS(3)
2016-03-22 06:19:28,964 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for GBY(2)
2016-03-22 06:19:28,964 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(655)) - Processing for SEL(1)
2016-03-22 06:19:28,968 INFO  [main]: ppd.OpProcFactory (OpProcFactory.java:process(382)) - Processing for TS(0)
2016-03-22 06:19:28,970 INFO  [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(817)) - RS 3 oldColExprMap: {VALUE._col0=Column[_col0]}
2016-03-22 06:19:28,970 INFO  [main]: optimizer.ColumnPrunerProcFactory (ColumnPrunerProcFactory.java:pruneReduceSinkOperator(866)) - RS 3 newColExprMap: {VALUE._col0=Column[_col0]}
2016-03-22 06:19:28,978 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=partition-retrieving from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2016-03-22 06:19:28,979 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=partition-retrieving start=1458627568978 end=1458627568979 duration=1 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
2016-03-22 06:19:28,982 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-03-22 06:19:28,983 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-03-22 06:19:28,983 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-03-22 06:19:28,984 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-03-22 06:19:28,987 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(175)) - Looking for table scans where optimization is applicable
2016-03-22 06:19:28,987 INFO  [main]: physical.NullScanTaskDispatcher (NullScanTaskDispatcher.java:dispatch(199)) - Found 0 null table scans
2016-03-22 06:19:28,988 INFO  [main]: parse.CalcitePlanner (SemanticAnalyzer.java:analyzeInternal(10213)) - Completed plan generation
2016-03-22 06:19:28,988 INFO  [main]: ql.Driver (Driver.java:compile(436)) - Semantic Analysis Completed
2016-03-22 06:19:28,989 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=semanticAnalyze start=1458627568849 end=1458627568989 duration=140 from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,990 INFO  [main]: exec.ListSinkOperator (Operator.java:initialize(332)) - Initializing operator OP[7]
2016-03-22 06:19:28,992 INFO  [main]: exec.ListSinkOperator (Operator.java:initialize(372)) - Initialization Done 7 OP
2016-03-22 06:19:28,993 INFO  [main]: exec.ListSinkOperator (Operator.java:initializeChildren(429)) - Operator 7 OP initialized
2016-03-22 06:19:28,993 INFO  [main]: ql.Driver (Driver.java:getSchema(240)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
2016-03-22 06:19:28,994 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=compile start=1458627568844 end=1458627568994 duration=150 from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,994 INFO  [main]: ql.Driver (Driver.java:checkConcurrency(160)) - Concurrency mode is disabled, not creating a lock manager
2016-03-22 06:19:28,994 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,994 INFO  [main]: ql.Driver (Driver.java:execute(1328)) - Starting command(queryId=root_20160322061928_4ccf5e49-000a-47ee-aeab-ffeaf99bb9db): select count(*) from t
2016-03-22 06:19:28,995 INFO  [main]: ql.Driver (SessionState.java:printInfo(951)) - Query ID = root_20160322061928_4ccf5e49-000a-47ee-aeab-ffeaf99bb9db
2016-03-22 06:19:28,996 INFO  [main]: ql.Driver (SessionState.java:printInfo(951)) - Total jobs = 1
2016-03-22 06:19:28,996 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=TimeToSubmit start=1458627568844 end=1458627568996 duration=152 from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,997 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,997 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=task.MAPRED.Stage-1 from=org.apache.hadoop.hive.ql.Driver>
2016-03-22 06:19:28,997 INFO  [main]: ql.Driver (SessionState.java:printInfo(951)) - Launching Job 1 out of 1
2016-03-22 06:19:29,002 INFO  [main]: ql.Driver (Driver.java:launchTask(1651)) - Starting task [Stage-1:MAPRED] in serial mode
2016-03-22 06:19:29,002 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - Number of reduce tasks determined at compile time: 1
2016-03-22 06:19:29,002 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - In order to change the average load for a reducer (in bytes):
2016-03-22 06:19:29,002 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) -   set hive.exec.reducers.bytes.per.reducer=<number>
2016-03-22 06:19:29,003 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - In order to limit the maximum number of reducers:
2016-03-22 06:19:29,003 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) -   set hive.exec.reducers.max=<number>
2016-03-22 06:19:29,003 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - In order to set a constant number of reducers:
2016-03-22 06:19:29,003 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) -   set mapreduce.job.reduces=<number>
2016-03-22 06:19:29,004 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1
2016-03-22 06:19:29,017 INFO  [main]: mr.ExecDriver (ExecDriver.java:execute(288)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2016-03-22 06:19:29,017 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3397)) - Processing alias t
2016-03-22 06:19:29,017 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3414)) - Adding input file hdfs://datatang/tmp/hive/warehouse/t
2016-03-22 06:19:29,017 INFO  [main]: exec.Utilities (Utilities.java:isEmptyPath(2698)) - Content Summary not cached for hdfs://datatang/tmp/hive/warehouse/t
2016-03-22 06:19:29,021 INFO  [main]: ql.Context (Context.java:getMRScratchDir(330)) - New scratch dir is hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1
2016-03-22 06:19:29,035 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-03-22 06:19:29,036 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(938)) - Serializing MapWork via kryo
2016-03-22 06:19:29,087 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1458627569035 end=1458627569087 duration=52 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-03-22 06:19:29,101 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-03-22 06:19:29,102 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(938)) - Serializing ReduceWork via kryo
2016-03-22 06:19:29,154 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=serializePlan start=1458627569101 end=1458627569154 duration=53 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-03-22 06:19:29,161 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(400)) - yarn
2016-03-22 06:19:29,184 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at wqkansible/192.168.1.240:8032
2016-03-22 06:19:29,224 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(92)) - Connecting to ResourceManager at wqkansible/192.168.1.240:8032
2016-03-22 06:19:29,227 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1/-mr-10004/20108e9f-0338-4493-a794-5e5b1251c802/map.xml
2016-03-22 06:19:29,227 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1/-mr-10004/20108e9f-0338-4493-a794-5e5b1251c802/reduce.xml
2016-03-22 06:19:29,237 WARN  [main]: mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-03-22 06:19:29,805 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-03-22 06:19:29,805 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(390)) - PLAN PATH = hdfs://datatang/tmp/hive/root/368821e5-2089-45b3-84b7-9130fb4ae15f/hive_2016-03-22_06-19-28_846_8718151686200813519-1/-mr-10004/20108e9f-0338-4493-a794-5e5b1251c802/map.xml
2016-03-22 06:19:29,805 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(517)) - Total number of paths: 1, launching 1 threads to check non-combinable ones.
2016-03-22 06:19:29,813 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(439)) - CombineHiveInputSplit creating pool for hdfs://datatang/tmp/hive/warehouse/t; using filter path hdfs://datatang/tmp/hive/warehouse/t
2016-03-22 06:19:29,819 INFO  [main]: input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2016-03-22 06:19:29,819 INFO  [main]: input.CombineFileInputFormat (CombineFileInputFormat.java:createSplits(413)) - DEBUG: Terminated node allocation with : CompletedNodes: 3, size left: 0
2016-03-22 06:19:29,820 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(494)) - number of splits 1
2016-03-22 06:19:29,820 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(587)) - Number of all splits 1
2016-03-22 06:19:29,821 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=getSplits start=1458627569805 end=1458627569821 duration=16 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-03-22 06:19:29,946 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2016-03-22 06:19:30,074 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_1458546911428_0013
2016-03-22 06:19:30,104 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(236)) - Submitted application application_1458546911428_0013
2016-03-22 06:19:30,108 INFO  [main]: mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://wqkansible:8088/proxy/application_1458546911428_0013/
2016-03-22 06:19:30,109 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - Starting Job = job_1458546911428_0013, Tracking URL = http://wqkansible:8088/proxy/application_1458546911428_0013/
2016-03-22 06:19:30,110 INFO  [main]: exec.Task (SessionState.java:printInfo(951)) - Kill Command = /sorce/hadoop2.5/bin/hadoop job  -kill job_1458546911428_0013


This is the log from a test I just ran. The timestamps are off (it looks like a timezone issue; the log appears to be written in UTC), but the real problem is that when I run select count(*) from t; the NodeManager on node 3 simply dies. This happens every time.
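If the daemons really are logging in UTC while the shell shows local time, the usual cause is that the JVM picked up a UTC default timezone, e.g. because /etc/localtime or the TZ variable differs for the user the daemons run as. Two hedged fixes, assuming an Asia/Shanghai cluster; the paths shown are the conventional ones, and the daemons need a restart afterwards:

```shell
# Option 1: fix the OS default timezone that the JVM inherits at startup
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Option 2: pin the timezone explicitly for the Hadoop JVMs, e.g. in
# hadoop-env.sh, so it no longer depends on the OS default
export HADOOP_OPTS="$HADOOP_OPTS -Duser.timezone=Asia/Shanghai"
```

Either way, verify afterwards that a fresh log line carries the expected local-time timestamp.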

xuanxufeng posted on 2016-3-23 17:34:07
Replying to wqk151 (2016-3-22 14:30): the log excerpt above

Is that the virtual machine's time, or your local Windows time?

wqk151 posted on 2016-3-24 16:09:32
Replying to xuanxufeng (2016-3-23 17:34): "Is that the virtual machine's time, or your local Windows time?"

The VM's system time matches the local Windows time.

The main problem is that the NodeManager dies for no apparent reason; the logs show that when select count(*) runs in Hive, downloading resources (container localization) fails.
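For a dying NodeManager plus a "failed to download resource" message, a few things are worth ruling out before blaming the build. The property names below are real YARN settings, but the paths and the suggestion itself are illustrative guesses, not a confirmed fix for this thread:

```shell
# 1. Read the NodeManager's own log on node 3 (path depends on the install):
tail -n 200 $HADOOP_HOME/logs/yarn-*-nodemanager-*.log

# 2. Localization writes under yarn.nodemanager.local-dirs; a full or
#    unwritable disk makes resource download fail:
df -h

# 3. If containers are being killed for memory, check/adjust in yarn-site.xml:
#    yarn.nodemanager.resource.memory-mb   (total RAM the NM may hand out)
#    yarn.nodemanager.vmem-check-enabled   (false disables the virtual-memory kill)
#    yarn.nodemanager.vmem-pmem-ratio      (default 2.1; raise it if vmem-heavy)
```

The NodeManager log usually names the exact reason (OOM kill, disk failure, localization error) right before the process exits, so step 1 is the fastest way to narrow this down.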

wqk151 posted on 2016-3-25 11:51:01
It feels like a version-compatibility problem. After switching everything to CDH 5.6 the problem disappeared and every job now runs successfully. The stock Apache builds cost me days.

CDH downloads: http://archive-primary.cloudera.com/cdh5/cdh/5/
