
Hive exception: YARN job fails with an error, please help

追随云科技 posted on 2016-5-30 12:05:44
Running the following in the Hive shell:    select count(*) from table;


Log output from yarn-root-resourcemanager-hadoopnamenode.log:



2016-05-30 11:49:41,980 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Allocated new applicationId: 1
2016-05-30 11:49:59,641 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user root
2016-05-30 11:49:59,645 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Storing application with id application_1464579869479_0001
2016-05-30 11:49:59,649 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        IP=172.29.140.171        OPERATION=Submit Application Request        TARGET=ClientRMService        RESULT=SUCCESS        APPID=application_1464579869479_0001
2016-05-30 11:49:59,677 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1464579869479_0001 State change from NEW to NEW_SAVING
2016-05-30 11:49:59,677 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Storing info for app: application_1464579869479_0001
2016-05-30 11:49:59,680 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1464579869479_0001 State change from NEW_SAVING to SUBMITTED
2016-05-30 11:49:59,737 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Accepted application application_1464579869479_0001 from user: root, in queue: default, currently num of applications: 1
2016-05-30 11:49:59,778 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1464579869479_0001 State change from SUBMITTED to ACCEPTED
2016-05-30 11:49:59,869 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1464579869479_0001_000001
2016-05-30 11:49:59,872 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from NEW to SUBMITTED
2016-05-30 11:49:59,923 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1464579869479_0001_000001 to scheduler from user: root
2016-05-30 11:49:59,932 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from SUBMITTED to SCHEDULED
2016-05-30 11:50:00,680 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2016-05-30 11:50:00,681 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1464579869479_0001        CONTAINERID=container_1464579869479_0001_01_000001
2016-05-30 11:50:00,683 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1464579869479_0001_01_000001 of capacity <memory:2048, vCores:1> on host hadoopdatanode1:33939, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-05-30 11:50:00,705 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoopdatanode1:33939 for container : container_1464579869479_0001_01_000001
2016-05-30 11:50:00,718 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-05-30 11:50:00,721 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1464579869479_0001_000001
2016-05-30 11:50:00,731 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1464579869479_0001 AttemptId: appattempt_1464579869479_0001_000001 MasterContainer: Container: [ContainerId: container_1464579869479_0001_01_000001, NodeId: hadoopdatanode1:33939, NodeHttpAddress: hadoopdatanode1:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.172:33939 }, ]
2016-05-30 11:50:00,768 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2016-05-30 11:50:00,779 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2016-05-30 11:50:00,789 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1464579869479_0001_000001
2016-05-30 11:50:00,946 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1464579869479_0001_01_000001, NodeId: hadoopdatanode1:33939, NodeHttpAddress: hadoopdatanode1:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.172:33939 }, ] for AM appattempt_1464579869479_0001_000001
2016-05-30 11:50:00,946 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1464579869479_0001_01_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2016-05-30 11:50:00,951 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1464579869479_0001_000001
2016-05-30 11:50:00,953 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1464579869479_0001_000001
2016-05-30 11:50:02,027 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1464579869479_0001_01_000001, NodeId: hadoopdatanode1:33939, NodeHttpAddress: hadoopdatanode1:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.172:33939 }, ] for AM appattempt_1464579869479_0001_000001
2016-05-30 11:50:02,030 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from ALLOCATED to LAUNCHED
2016-05-30 11:50:02,657 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
2016-05-30 11:50:16,819 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
2016-05-30 11:50:16,819 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1464579869479_0001_01_000001 in state: COMPLETED event:FINISHED
2016-05-30 11:50:16,820 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1464579869479_0001        CONTAINERID=container_1464579869479_0001_01_000001
2016-05-30 11:50:16,820 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1464579869479_0001_01_000001 of capacity <memory:2048, vCores:1> on host hadoopdatanode1:33939, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-05-30 11:50:16,821 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1464579869479_0001_000001 released container container_1464579869479_0001_01_000001 on node: host: hadoopdatanode1:33939 #containers=0 available=8192 used=0 with event: FINISHED
2016-05-30 11:50:16,824 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1464579869479_0001_000001 with final state: FAILED, and exit status: 1
2016-05-30 11:50:16,826 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from LAUNCHED to FINAL_SAVING
2016-05-30 11:50:16,827 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1464579869479_0001_000001
2016-05-30 11:50:16,830 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1464579869479_0001_000001
2016-05-30 11:50:16,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000001 State change from FINAL_SAVING to FAILED
2016-05-30 11:50:16,831 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2016-05-30 11:50:16,833 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1464579869479_0001_000001 is done. finalState=FAILED
2016-05-30 11:50:16,833 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1464579869479_0001_000002
2016-05-30 11:50:16,834 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from NEW to SUBMITTED
2016-05-30 11:50:16,834 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1464579869479_0001 requests cleared
2016-05-30 11:50:16,838 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Added Application Attempt appattempt_1464579869479_0001_000002 to scheduler from user: root
2016-05-30 11:50:16,839 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from SUBMITTED to SCHEDULED
2016-05-30 11:50:16,913 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_02_000001 Container Transitioned from NEW to ALLOCATED
2016-05-30 11:50:16,914 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1464579869479_0001        CONTAINERID=container_1464579869479_0001_02_000001
2016-05-30 11:50:16,914 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1464579869479_0001_02_000001 of capacity <memory:2048, vCores:1> on host hadoopdatanode3:56076, which has 1 containers, <memory:2048, vCores:1> used and <memory:6144, vCores:7> available after allocation
2016-05-30 11:50:16,916 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : hadoopdatanode3:56076 for container : container_1464579869479_0001_02_000001
2016-05-30 11:50:16,921 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2016-05-30 11:50:16,921 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1464579869479_0001_000002
2016-05-30 11:50:16,922 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1464579869479_0001 AttemptId: appattempt_1464579869479_0001_000002 MasterContainer: Container: [ContainerId: container_1464579869479_0001_02_000001, NodeId: hadoopdatanode3:56076, NodeHttpAddress: hadoopdatanode3:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.174:56076 }, ]
2016-05-30 11:50:16,922 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from SCHEDULED to ALLOCATED_SAVING
2016-05-30 11:50:16,923 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from ALLOCATED_SAVING to ALLOCATED
2016-05-30 11:50:16,925 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1464579869479_0001_000002
2016-05-30 11:50:16,931 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1464579869479_0001_02_000001, NodeId: hadoopdatanode3:56076, NodeHttpAddress: hadoopdatanode3:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.174:56076 }, ] for AM appattempt_1464579869479_0001_000002
2016-05-30 11:50:16,932 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1464579869479_0001_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2016-05-30 11:50:16,933 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1464579869479_0001_000002
2016-05-30 11:50:16,933 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1464579869479_0001_000002
2016-05-30 11:50:17,722 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1464579869479_0001_02_000001, NodeId: hadoopdatanode3:56076, NodeHttpAddress: hadoopdatanode3:8042, Resource: <memory:2048, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.29.140.174:56076 }, ] for AM appattempt_1464579869479_0001_000002
2016-05-30 11:50:17,724 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from ALLOCATED to LAUNCHED
2016-05-30 11:50:17,827 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Container container_1464579869479_0001_01_000001 completed with event FINISHED
2016-05-30 11:50:17,943 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_02_000001 Container Transitioned from ACQUIRED to RUNNING
2016-05-30 11:50:33,044 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1464579869479_0001_02_000001 Container Transitioned from RUNNING to COMPLETED
2016-05-30 11:50:33,045 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1464579869479_0001_02_000001 in state: COMPLETED event:FINISHED
2016-05-30 11:50:33,045 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1464579869479_0001        CONTAINERID=container_1464579869479_0001_02_000001
2016-05-30 11:50:33,046 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1464579869479_0001_02_000001 of capacity <memory:2048, vCores:1> on host hadoopdatanode3:56076, which currently has 0 containers, <memory:0, vCores:0> used and <memory:8192, vCores:8> available, release resources=true
2016-05-30 11:50:33,046 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1464579869479_0001_000002 with final state: FAILED, and exit status: 1
2016-05-30 11:50:33,046 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application attempt appattempt_1464579869479_0001_000002 released container container_1464579869479_0001_02_000001 on node: host: hadoopdatanode3:56076 #containers=0 available=8192 used=0 with event: FINISHED
2016-05-30 11:50:33,047 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from LAUNCHED to FINAL_SAVING
2016-05-30 11:50:33,048 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1464579869479_0001_000002
2016-05-30 11:50:33,049 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1464579869479_0001_000002
2016-05-30 11:50:33,049 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1464579869479_0001_000002 State change from FINAL_SAVING to FAILED
2016-05-30 11:50:33,049 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2016-05-30 11:50:33,052 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1464579869479_0001 with final state: FAILED
2016-05-30 11:50:33,054 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1464579869479_0001 State change from ACCEPTED to FINAL_SAVING
2016-05-30 11:50:33,054 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1464579869479_0001
2016-05-30 11:50:33,055 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Application appattempt_1464579869479_0001_000002 is done. finalState=FAILED
2016-05-30 11:50:33,055 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1464579869479_0001 requests cleared
2016-05-30 11:50:34,618 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1464579869479_0001 failed 2 times due to AM Container for appattempt_1464579869479_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://hadoopnamenode:8088/proxy/application_1464579869479_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1464579869479_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2016-05-30 11:50:34,623 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1464579869479_0001 State change from FINAL_SAVING to FAILED
2016-05-30 11:50:34,628 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=Application Finished - Failed        TARGET=RMAppManager        RESULT=FAILURE        DESCRIPTION=App failed with state: FAILED        PERMISSIONS=Application application_1464579869479_0001 failed 2 times due to AM Container for appattempt_1464579869479_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://hadoopnamenode:8088/proxy/application_1464579869479_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1464579869479_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.        APPID=application_1464579869479_0001
2016-05-30 11:50:34,635 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1464579869479_0001,name=select count(kh) from tb_dgsksj(Stage-1),user=root,queue=root.root,state=FAILED,trackingUrl=http://hadoopnamenode:8088/cluster/app/application_1464579869479_0001,appMasterHost=N/A,startTime=1464580199636,finishTime=1464580233052,finalStatus=FAILED
2016-05-30 11:50:34,641 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler: Container container_1464579869479_0001_02_000001 completed with event FINISHED
2016-05-30 11:50:35,337 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        IP=172.29.140.171        OPERATION=Kill Application Request        TARGET=ClientRMService        RESULT=SUCCESS        APPID=application_1464579869479_0001
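
The RM log only records that the AM container exited with code 1; the real cause will be in that container's own stdout/stderr. A rough next step (assuming log aggregation is enabled, as in the yarn-site.xml posted in the replies below) would be something like:

# Fetch the aggregated container logs for the failed application (run as the submitting user)
yarn logs -applicationId application_1464579869479_0001 > app_0001.log

# Or read the AM container's stderr directly on the NodeManager host (hadoopdatanode1 for attempt 1),
# under yarn.nodemanager.log-dirs, which the posted config sets to ${hadoop.tmp.dir}/yarn/logs:
less /opt/data/hadoop/yarn/logs/application_1464579869479_0001/container_1464579869479_0001_01_000001/stderr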

5 replies so far

追随云科技 posted on 2016-5-30 12:07:18
hive.log output:
2016-05-30 11:49:39,786 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3372)) - Processing alias tb_dgsksj
2016-05-30 11:49:39,787 INFO  [main]: exec.Utilities (Utilities.java:getInputPaths(3389)) - Adding input file hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401
2016-05-30 11:49:39,787 INFO  [main]: exec.Utilities (Utilities.java:isEmptyPath(2691)) - Content Summary not cached for hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401
2016-05-30 11:49:39,803 INFO  [main]: ql.Context (Context.java:getMRScratchDir(328)) - New scratch dir is hdfs://hadoopnamenode:9000/tmp/hive/root/fb8def43-5538-41dc-9bf6-91222995bb2f/hive_2016-05-30_11-49-38_422_1917391347173254573-1
2016-05-30 11:49:39,876 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 11:49:39,878 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(927)) - Serializing MapWork via kryo
2016-05-30 11:49:40,568 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=serializePlan start=1464580179876 end=1464580180568 duration=692 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 11:49:40,574 INFO  [main]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
2016-05-30 11:49:40,627 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 11:49:40,628 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(927)) - Serializing ReduceWork via kryo
2016-05-30 11:49:40,719 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=serializePlan start=1464580180627 end=1464580180719 duration=92 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 11:49:40,730 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(398)) - yarn
2016-05-30 11:49:40,965 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopnamenode/172.29.140.171:8032
2016-05-30 11:49:41,809 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopnamenode/172.29.140.171:8032
2016-05-30 11:49:41,834 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/fb8def43-5538-41dc-9bf6-91222995bb2f/hive_2016-05-30_11-49-38_422_1917391347173254573-1/-mr-10004/bedf5137-3f5c-4857-a681-74d814d47f05/map.xml
2016-05-30 11:49:41,835 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/fb8def43-5538-41dc-9bf6-91222995bb2f/hive_2016-05-30_11-49-38_422_1917391347173254573-1/-mr-10004/bedf5137-3f5c-4857-a681-74d814d47f05/reduce.xml
2016-05-30 11:49:42,480 WARN  [main]: mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-05-30 11:49:55,819 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-05-30 11:49:55,821 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/fb8def43-5538-41dc-9bf6-91222995bb2f/hive_2016-05-30_11-49-38_422_1917391347173254573-1/-mr-10004/bedf5137-3f5c-4857-a681-74d814d47f05/map.xml
2016-05-30 11:49:55,822 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(519)) - Total number of paths: 1, launching 1 threads to check non-combinable ones.
2016-05-30 11:49:55,882 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(441)) - CombineHiveInputSplit creating pool for hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401; using filter path hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401
2016-05-30 11:49:55,906 INFO  [main]: input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2016-05-30 11:49:55,920 INFO  [main]: input.CombineFileInputFormat (CombineFileInputFormat.java:createSplits(413)) - DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
2016-05-30 11:49:55,924 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(496)) - number of splits 1
2016-05-30 11:49:55,926 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(589)) - Number of all splits 1
2016-05-30 11:49:55,927 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=getSplits start=1464580195819 end=1464580195927 duration=108 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-05-30 11:49:56,138 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(202)) - number of splits:1
2016-05-30 11:49:56,580 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(291)) - Submitting tokens for job: job_1464579869479_0001
2016-05-30 11:49:59,835 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(251)) - Submitted application application_1464579869479_0001
2016-05-30 11:50:00,002 INFO  [main]: mapreduce.Job (Job.java:submit(1311)) - The url to track the job: http://hadoopnamenode:8088/proxy/application_1464579869479_0001/
2016-05-30 11:50:00,007 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Starting Job = job_1464579869479_0001, Tracking URL = http://hadoopnamenode:8088/proxy/application_1464579869479_0001/
2016-05-30 11:50:00,008 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Kill Command = /opt/hadoop/hadoop-2.6.0/bin/hadoop job  -kill job_1464579869479_0001
2016-05-30 11:50:34,833 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-05-30 11:50:35,082 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-05-30 11:50:35,084 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - 2016-05-30 11:50:35,050 Stage-1 map = 0%,  reduce = 0%
2016-05-30 11:50:35,155 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-05-30 11:50:35,220 ERROR [main]: exec.Task (SessionState.java:printError(936)) - Ended Job = job_1464579869479_0001 with errors
2016-05-30 11:50:35,228 ERROR [Thread-290]: exec.Task (SessionState.java:printError(936)) - Error during job, obtaining debugging information...
2016-05-30 11:50:35,229 INFO  [Thread-290]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-05-30 11:50:35,232 ERROR [Thread-290]: exec.Task (SessionState.java:printError(936)) - Job Tracking URL: http://hadoopnamenode:8088/cluster/app/application_1464579869479_0001
2016-05-30 11:50:35,400 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(364)) - Killed application application_1464579869479_0001
2016-05-30 11:50:35,470 ERROR [main]: ql.Driver (SessionState.java:printError(936)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-05-30 11:50:35,471 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=Driver.execute start=1464580179665 end=1464580235471 duration=55806 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 11:50:35,472 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - MapReduce Jobs Launched:
2016-05-30 11:50:35,485 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-05-30 11:50:35,501 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
2016-05-30 11:50:35,502 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Total MapReduce CPU Time Spent: 0 msec
2016-05-30 11:50:35,502 INFO  [main]: ql.Driver (Driver.java:execute(1696)) - Completed executing command(queryId=root_20160530114747_1ce4780f-64a7-4b51-a2f9-17ec20718ee2); Time taken: 55.806 seconds
2016-05-30 11:50:35,503 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 11:50:36,025 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=releaseLocks start=1464580235503 end=1464580236025 duration=522 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 11:50:36,030 INFO  [main]: exec.ListSinkOperator (Operator.java:close(595)) - 7 finished. closing...
2016-05-30 11:50:36,031 INFO  [main]: exec.ListSinkOperator (Operator.java:close(613)) - 7 Close done


追随云科技 posted on 2016-5-30 12:10:01


mapred-site.xml  :

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>Execution framework set to Hadoop YARN.</description>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>Xmx1024M</value>
    <description>Larger heap-size for child jvms of maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for reduces.</description>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>Xmx1024M</value>
    <description></description>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
    <description></description>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>10</value>
    <description>More streams merged at once while sorting files.</description>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>5</value>
    <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
  </property>
  <!--Configurations for MapReduce JobHistory Server:-->
        <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoopnamenode:10020</value>
    <description>MapReduce JobHistory Server host:port Default port is 10020</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoopnamenode:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port Default port is 19888</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
    <description>Directory where history files are written by MapReduce jobs. Defalut is "/mr-history/tmp"</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
    <description>Directory where history files are managed by the MR JobHistory Server.Default value is "/mr-history/done"</description>
  </property>





yarn-site.xml  :

<property>
  <name>resourcemanagerHost</name>
  <value>hadoopnamenode</value>
</property>


<property>
<name>hadoop.tmp.dir</name>
<value>/opt/data/hadoop</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${resourcemanagerHost}:8031</value>
  <description>ResourceManager host:port for NodeManagers.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>${resourcemanagerHost}:8032</value>
  <description>ResourceManager host:port for clients to submit jobs.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${resourcemanagerHost}:8030</value>
  <description>ResourceManager host:port for ApplicationMasters to talk to Scheduler to obtain resources.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>${resourcemanagerHost}:8033</value>
  <description>ResourceManager host:port for administrative commands.NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>${resourcemanagerHost}:8088</value>
  <description>ResourceManager web-ui host:port. NOTES:host:port If set, overrides the hostname set in yarn.resourcemanager.hostname</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>${hadoop.tmp.dir}/yarn/local</value>
  <description>Comma-separated list of paths on the local filesystem where intermediate data is written.The default value is "${hadoop.tmp.dir}/nm-local-dir". NOTES:Multiple paths help spread disk i/o.</description>
</property>

<property>
<name>yarn.log.server.url</name>
<value>http://hadoopnamenode:19888/jobhistory/logs/</value>
</property>

<property>
<name>yarn.nodemanager.log-dirs</name>
<value>${hadoop.tmp.dir}/yarn/logs</value>
<description>Comma-separated list of paths on the local filesystem where logs are written,The default value is "${yarn.log.dir}/userlogs" NOTES:Multiple paths help spread disk i/o.</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>Shuffle service that needs to be set for Map Reduce applications.</description>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Configuration to enable or disable log aggregation</description>
</property>

<property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*,/opt/hadoop/hadoop-2.6.0/share/hadoop/tools/*,/opt/hadoop/hadoop-2.6.0/share/hadoop/tools/lib/*</value>
  </property>



<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/yarn/apps</value>
</property>



hive-site.xml :


<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoopnamenode:9000</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>hadoopnamenode:8031</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/hive/warehouse</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://hadoopnamenode:9083</value>
</property>
<property>
<name>hive.support.concurrency</name>
<value>true</value>
</property>
<property>
<name>hive.zookeeper.quorum</name>
<value>hadoopnamenode:2181,hadoop2ndnamenode:2181,hadoopdatanode1:2181</value>
</property>
<property>
<name>hive.merge.mapredfiles</name>
<value>true</value>
</property>
        <property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.fixedDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>



easthome001 posted on 2016-5-30 14:13:40

OP, you'd better check against some standard reference docs; the parameters below should be changed, and there are probably more that need changing as well.
Adding input file hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401
Does the file above actually exist?
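For example, a quick check from the command line (using the path quoted above):

# List the partition directory the job is reading; if it is missing or empty there is no input
hdfs dfs -ls hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401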

Also, this value needs to include the minus sign:
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
    <description>Larger heap-size for child jvms of maps.</description>
</property>

And localhost is best replaced with an IP address:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost/metastore</value>
</property>


追随云科技 posted on 2016-5-30 15:00:26
I've made the changes you suggested, but it still doesn't work.
hive.log:


2016-05-30 14:38:14,714 INFO  [main]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
2016-05-30 14:38:14,739 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 14:38:14,740 INFO  [main]: exec.Utilities (Utilities.java:serializePlan(927)) - Serializing ReduceWork via kryo
2016-05-30 14:38:14,817 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=serializePlan start=1464590294739 end=1464590294817 duration=78 from=org.apache.hadoop.hive.ql.exec.Utilities>
2016-05-30 14:38:14,824 ERROR [main]: mr.ExecDriver (ExecDriver.java:execute(398)) - yarn
2016-05-30 14:38:15,111 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopnamenode/172.29.140.171:8032
2016-05-30 14:38:15,839 INFO  [main]: client.RMProxy (RMProxy.java:createRMProxy(98)) - Connecting to ResourceManager at hadoopnamenode/172.29.140.171:8032
2016-05-30 14:38:15,850 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/32daf4fc-6e89-48bd-9db7-2248498c1ddb/hive_2016-05-30_14-38-13_076_178747312301560528-1/-mr-10004/d0f80671-ee84-4251-97e2-e404c0f21f83/map.xml
2016-05-30 14:38:15,851 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/32daf4fc-6e89-48bd-9db7-2248498c1ddb/hive_2016-05-30_14-38-13_076_178747312301560528-1/-mr-10004/d0f80671-ee84-4251-97e2-e404c0f21f83/reduce.xml
2016-05-30 14:38:15,867 INFO  [main]: mapreduce.JobSubmissionFiles (JobSubmissionFiles.java:getStagingDir(127)) - Permissions on staging directory /tmp/hadoop-yarn/staging/root/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx------
2016-05-30 14:38:16,282 WARN  [main]: mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-05-30 14:38:28,200 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-05-30 14:38:28,202 INFO  [main]: exec.Utilities (Utilities.java:getBaseWork(385)) - PLAN PATH = hdfs://hadoopnamenode:9000/tmp/hive/root/32daf4fc-6e89-48bd-9db7-2248498c1ddb/hive_2016-05-30_14-38-13_076_178747312301560528-1/-mr-10004/d0f80671-ee84-4251-97e2-e404c0f21f83/map.xml
2016-05-30 14:38:28,204 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(519)) - Total number of paths: 1, launching 1 threads to check non-combinable ones.
2016-05-30 14:38:28,264 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(441)) - CombineHiveInputSplit creating pool for hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401; using filter path hdfs://hadoopnamenode:9000/hive/warehouse/dggj.db/tb_dgsksj/sksjdate=20160401
2016-05-30 14:38:28,289 INFO  [main]: input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2016-05-30 14:38:28,312 INFO  [main]: input.CombineFileInputFormat (CombineFileInputFormat.java:createSplits(413)) - DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 0
2016-05-30 14:38:28,317 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getCombineSplits(496)) - number of splits 1
2016-05-30 14:38:28,319 INFO  [main]: io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(589)) - Number of all splits 1
2016-05-30 14:38:28,319 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=getSplits start=1464590308200 end=1464590308319 duration=119 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-05-30 14:38:28,450 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(202)) - number of splits:1
2016-05-30 14:38:28,908 INFO  [main]: mapreduce.JobSubmitter (JobSubmitter.java:printTokens(291)) - Submitting tokens for job: job_1464587611974_0003
2016-05-30 14:38:31,462 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:submitApplication(251)) - Submitted application application_1464587611974_0003
2016-05-30 14:38:31,579 INFO  [main]: mapreduce.Job (Job.java:submit(1311)) - The url to track the job: http://hadoopnamenode:8088/proxy/application_1464587611974_0003/
2016-05-30 14:38:31,596 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Starting Job = job_1464587611974_0003, Tracking URL = http://hadoopnamenode:8088/proxy/application_1464587611974_0003/
2016-05-30 14:38:31,598 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Kill Command = /opt/hadoop/hadoop-2.6.0/bin/hadoop job  -kill job_1464587611974_0003
2016-05-30 14:38:53,879 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-05-30 14:38:54,728 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-05-30 14:38:54,731 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - 2016-05-30 14:38:54,713 Stage-1 map = 0%,  reduce = 0%
2016-05-30 14:39:42,682 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - 2016-05-30 14:39:42,681 Stage-1 map = 100%,  reduce = 100%
2016-05-30 14:39:43,889 ERROR [main]: exec.Task (SessionState.java:printError(936)) - Ended Job = job_1464587611974_0003 with errors
2016-05-30 14:39:43,894 ERROR [Thread-291]: exec.Task (SessionState.java:printError(936)) - Error during job, obtaining debugging information...
2016-05-30 14:39:43,895 INFO  [Thread-291]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-05-30 14:39:43,897 ERROR [Thread-291]: exec.Task (SessionState.java:printError(936)) - Job Tracking URL: http://hadoopnamenode:8088/proxy/application_1464587611974_0003/
2016-05-30 14:39:43,915 ERROR [Thread-292]: exec.Task (SessionState.java:printError(936)) - Examining task ID: task_1464587611974_0003_m_000000 (and more) from job job_1464587611974_0003
2016-05-30 14:39:43,917 WARN  [Thread-292]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 14:39:44,005 WARN  [Thread-292]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 14:39:44,012 WARN  [Thread-292]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 14:39:44,017 WARN  [Thread-292]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 14:39:44,048 ERROR [Thread-291]: exec.Task (SessionState.java:printError(936)) -
Task with the most failures(4):
-----
Task ID:
  task_1464587611974_0003_m_000000

URL:
  http://hadoopnamenode:8088/taskdetails.jsp?jobid=job_1464587611974_0003&tipid=task_1464587611974_0003_m_000000
-----
Diagnostic Messages for this Task:
Exception from container-launch.
Container id: container_1464587611974_0003_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1


2016-05-30 14:39:44,109 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(364)) - Killed application application_1464587611974_0003
2016-05-30 14:39:44,120 ERROR [main]: ql.Driver (SessionState.java:printError(936)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-05-30 14:39:44,121 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=Driver.execute start=1464590293837 end=1464590384121 duration=90284 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 14:39:44,122 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - MapReduce Jobs Launched:
2016-05-30 14:39:44,134 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-05-30 14:39:44,137 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
2016-05-30 14:39:44,138 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Total MapReduce CPU Time Spent: 0 msec
2016-05-30 14:39:44,138 INFO  [main]: ql.Driver (Driver.java:execute(1696)) - Completed executing command(queryId=root_20160530143535_a2263f15-9372-4197-aff4-c50429f88dfb); Time taken: 90.284 seconds
2016-05-30 14:39:44,139 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 14:39:44,272 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=releaseLocks start=1464590384139 end=1464590384272 duration=133 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 14:39:44,279 INFO  [main]: exec.ListSinkOperator (Operator.java:close(595)) - 7 finished. closing...
2016-05-30 14:39:44,279 INFO  [main]: exec.ListSinkOperator (Operator.java:close(613)) - 7 Close done





追随云科技 posted on 2016-5-30 15:19:02

hive.log output:

2016-05-30 15:06:37,753 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-05-30 15:06:37,754 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - 2016-05-30 15:06:37,753 Stage-1 map = 0%,  reduce = 0%
2016-05-30 15:07:10,822 INFO  [main]: exec.Task (SessionState.java:printInfo(927)) - 2016-05-30 15:07:10,822 Stage-1 map = 100%,  reduce = 100%
2016-05-30 15:07:11,985 ERROR [main]: exec.Task (SessionState.java:printError(936)) - Ended Job = job_1464587611974_0004 with errors
2016-05-30 15:07:11,986 ERROR [Thread-576]: exec.Task (SessionState.java:printError(936)) - Error during job, obtaining debugging information...
2016-05-30 15:07:11,987 INFO  [Thread-576]: Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1174)) - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2016-05-30 15:07:11,989 ERROR [Thread-576]: exec.Task (SessionState.java:printError(936)) - Job Tracking URL: http://hadoopnamenode:8088/proxy/application_1464587611974_0004/
2016-05-30 15:07:12,000 ERROR [Thread-577]: exec.Task (SessionState.java:printError(936)) - Examining task ID: task_1464587611974_0004_m_000000 (and more) from job job_1464587611974_0004
2016-05-30 15:07:12,000 WARN  [Thread-577]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 15:07:12,019 WARN  [Thread-577]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 15:07:12,024 WARN  [Thread-577]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 15:07:12,029 WARN  [Thread-577]: shims.HadoopShimsSecure (Hadoop23Shims.java:getTaskAttemptLogUrl(151)) - Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-05-30 15:07:12,039 ERROR [Thread-576]: exec.Task (SessionState.java:printError(936)) -
Task with the most failures(4):
-----
Task ID:
  task_1464587611974_0004_m_000000

URL:
  http://hadoopnamenode:8088/taskdetails.jsp?jobid=job_1464587611974_0004&tipid=task_1464587611974_0004_m_000000
-----
Diagnostic Messages for this Task:
Exception from container-launch.
Container id: container_1464587611974_0004_01_000005
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1


2016-05-30 15:07:12,077 INFO  [main]: impl.YarnClientImpl (YarnClientImpl.java:killApplication(364)) - Killed application application_1464587611974_0004
2016-05-30 15:07:12,094 ERROR [main]: ql.Driver (SessionState.java:printError(936)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-05-30 15:07:12,095 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=Driver.execute start=1464591952917 end=1464592032095 duration=79178 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 15:07:12,095 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - MapReduce Jobs Launched:
2016-05-30 15:07:12,096 WARN  [main]: mapreduce.Counters (AbstractCounters.java:getGroup(234)) - Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-05-30 15:07:12,096 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
2016-05-30 15:07:12,097 INFO  [main]: ql.Driver (SessionState.java:printInfo(927)) - Total MapReduce CPU Time Spent: 0 msec
2016-05-30 15:07:12,097 INFO  [main]: ql.Driver (Driver.java:execute(1696)) - Completed executing command(queryId=root_20160530143535_a2263f15-9372-4197-aff4-c50429f88dfb); Time taken: 79.178 seconds
2016-05-30 15:07:12,098 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(122)) - <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 15:07:12,328 INFO  [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(163)) - </PERFLOG method=releaseLocks start=1464592032098 end=1464592032328 duration=230 from=org.apache.hadoop.hive.ql.Driver>
2016-05-30 15:07:12,329 INFO  [main]: exec.ListSinkOperator (Operator.java:close(595)) - 9 finished. closing...
2016-05-30 15:07:12,329 INFO  [main]: exec.ListSinkOperator (Operator.java:close(613)) - 9 Close done
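
hive.log still only reports exit code 1, so the actual failure reason has to come from the task container's own logs. A rough sketch of the next checks (assuming the configs posted above are the ones in use):

# Aggregated logs of the latest failed job; look for the real exception in the container stderr/syslog
yarn logs -applicationId application_1464587611974_0004 | less

# Confirm the corrected JVM options are what the Hive session actually sees
hive -e "set mapreduce.map.java.opts; set mapreduce.reduce.java.opts;"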

