
A question about memory allocation in Hive

Smile鹏鹏 posted on 2017-5-3 10:45:01
Master: 2 processors, 2 GB RAM. Three slaves, each with 1 processor and 1 GB RAM (the physical host has 8 GB). I only run 2 of the slaves; there isn't enough memory to dare start the third. In Hive I run `select count(*) from xxx;`.
At first, with the default configuration, only 2 containers would run and the job sat at `map = ... reduce = ...`.
So I changed the settings:

        <property>
                <name>mapreduce.map.memory.mb</name>
                <value>1024</value>
        </property>
        <property>
                <name>mapreduce.reduce.memory.mb</name>
                <value>1024</value>
        </property>
        <!-- java.opts must be JVM flags, not a bare number: a value like "812"
             is passed verbatim onto the java command line, so every task JVM
             would die at startup -->
        <property>
                <name>mapreduce.map.java.opts</name>
                <value>-Xmx812m</value>
        </property>
        <property>
                <name>mapreduce.reduce.java.opts</name>
                <value>-Xmx816m</value>
        </property>



        <!-- minimum-allocation-mb was declared twice (256 and 512);
             only one value should be kept -->
        <property>
                <name>yarn.scheduler.minimum-allocation-mb</name>
                <value>512</value>
        </property>
        <property>
                <name>yarn.log-aggregation.retain-seconds</name>
                <value>604800</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>1024</value>
        </property>
        <!-- this property name was misspelled "maximum-allocation-bm", so the
             real yarn.scheduler.maximum-allocation-mb silently kept its
             8192 MB default -->
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>1024</value>
        </property>

Now 5 containers run, but the job fails:
Ended Job = job_1493778955612_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec


There are no error records in the logs.
Is this a memory allocation problem?
On the slaves `free -m` shows very little free memory, only 200-300 MB, even though 1 GB was allocated.
How should the memory be divided up sensibly? I have already tried changing it many times.
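For reference, one self-consistent way to budget a 1 GB NodeManager might look like the sketch below. The exact numbers are assumptions, not values tested on this cluster; the constraints are that `mapreduce.*.memory.mb` must not exceed `yarn.scheduler.maximum-allocation-mb`, which must not exceed `yarn.nodemanager.resource.memory-mb`, that each `*.java.opts` heap is conventionally around 80% of its container size, and that the MR ApplicationMaster container (`yarn.app.mapreduce.am.resource.mb`, default 1536) must also fit on a node:

```xml
<!-- yarn-site.xml: what one NodeManager may hand out -->
<property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1024</value>
</property>
<property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
</property>
<property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>1024</value>
</property>

<!-- mapred-site.xml: task containers take half a node, the AM a full node -->
<property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>1024</value>
</property>
<property>
        <name>yarn.app.mapreduce.am.command-opts</name>
        <value>-Xmx820m</value>
</property>
<property>
        <name>mapreduce.map.memory.mb</name>
        <value>512</value>
</property>
<property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx410m</value>
</property>
<property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>512</value>
</property>
<property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx410m</value>
</property>
```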

6 replies

Smile鹏鹏 replied on 2017-5-3 10:58:49 with the ResourceManager log:
2017-05-03 10:37:18,636 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000001 released container container_1493778955612_0001_01_000007 on node: host: slave2:40022 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:25,224 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
2017-05-03 10:37:25,224 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_01_000001 in state: COMPLETED event:FINISHED
2017-05-03 10:37:25,224 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_01_000001
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_01_000001 of capacity <memory:1536, vCores:1> on host slave1:36670, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=root user-resources=<memory:0, vCores:0>
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_01_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.22.21:36670 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2017-05-03 10:37:25,225 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000001 released container container_1493778955612_0001_01_000001 on node: host: slave1:36670 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:25,227 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1493778955612_0001_000001 with final state: FAILED, and exit status: 0
2017-05-03 10:37:25,228 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000001 State change from RUNNING to FINAL_SAVING
2017-05-03 10:37:25,229 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1493778955612_0001_000001
2017-05-03 10:37:25,230 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1493778955612_0001_000001
2017-05-03 10:37:25,230 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000001 State change from FINAL_SAVING to FAILED
2017-05-03 10:37:25,230 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
2017-05-03 10:37:25,230 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1493778955612_0001 State change from RUNNING to ACCEPTED
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1493778955612_0001_000001 is done. finalState=FAILED
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from NEW to SUBMITTED
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1493778955612_0001 requests cleared
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1493778955612_0001 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2017-05-03 10:37:25,231 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue, it is likely set too low. skipping enforcement to allow at least one application to start
2017-05-03 10:37:25,231 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent is insufficient to start a single application in queue for user, it is likely set too low. skipping enforcement to allow at least one application to start
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application application_1493778955612_0001 from user: root activated in queue: default
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application added - appId: application_1493778955612_0001 user: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue$User@12153649, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Added Application Attempt appattempt_1493778955612_0001_000002 to scheduler from user root in queue default
2017-05-03 10:37:25,231 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1493778955612_0001_000001
2017-05-03 10:37:25,234 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from SUBMITTED to SCHEDULED
2017-05-03 10:37:25,259 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000001 Container Transitioned from NEW to ALLOCATED
2017-05-03 10:37:25,259 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000001
2017-05-03 10:37:25,259 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1493778955612_0001_02_000001 of capacity <memory:1536, vCores:1> on host slave1:36670, which has 1 containers, <memory:1536, vCores:1> used and <memory:512, vCores:7> available after allocation
2017-05-03 10:37:25,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1493778955612_0001_000002 container=Container: [ContainerId: container_1493778955612_0001_02_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 clusterResource=<memory:4096, vCores:16>
2017-05-03 10:37:25,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1
2017-05-03 10:37:25,260 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:25,261 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave1:36670 for container : container_1493778955612_0001_02_000001
2017-05-03 10:37:25,263 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000001 Container Transitioned from ALLOCATED to ACQUIRED
2017-05-03 10:37:25,264 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Clear node set for appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,264 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1493778955612_0001 AttemptId: appattempt_1493778955612_0001_000002 MasterContainer: Container: [ContainerId: container_1493778955612_0001_02_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.22.21:36670 }, ]
2017-05-03 10:37:25,264 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from SCHEDULED to ALLOCATED_SAVING
2017-05-03 10:37:25,264 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from ALLOCATED_SAVING to ALLOCATED
2017-05-03 10:37:25,267 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Launching masterappattempt_1493778955612_0001_000002
2017-05-03 10:37:25,269 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1493778955612_0001_02_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.22.21:36670 }, ] for AM appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,269 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Command to launch container container_1493778955612_0001_02_000001 : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=<LOG_DIR> -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog  -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
2017-05-03 10:37:25,269 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,269 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Creating password for appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,290 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1493778955612_0001_02_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.22.21:36670 }, ] for AM appattempt_1493778955612_0001_000002
2017-05-03 10:37:25,291 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from ALLOCATED to LAUNCHED
2017-05-03 10:37:26,265 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000001 Container Transitioned from ACQUIRED to RUNNING
2017-05-03 10:37:30,937 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1493778955612_0001_000002 (auth:SIMPLE)
2017-05-03 10:37:30,945 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: AM registration appattempt_1493778955612_0001_000002
2017-05-03 10:37:30,945 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        IP=192.168.22.21        OPERATION=Register App Master        TARGET=ApplicationMasterService        RESULT=SUCCESS        APPID=application_1493778955612_0001        APPATTEMPTID=appattempt_1493778955612_0001_000002
2017-05-03 10:37:30,945 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from LAUNCHED to RUNNING
2017-05-03 10:37:30,945 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1493778955612_0001 State change from ACCEPTED to RUNNING
2017-05-03 10:37:32,326 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000002 Container Transitioned from NEW to RESERVED
2017-05-03 10:37:32,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container  application=application_1493778955612_0001 resource=<memory:1024, vCores:1> queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:32,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:32,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:33,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Trying to fulfill reservation for application application_1493778955612_0001 on node: slave1:36670
2017-05-03 10:37:33,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container  application=application_1493778955612_0001 resource=<memory:1024, vCores:1> queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2 usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:33,326 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Skipping scheduling since node slave1:36670 is reserved by application appattempt_1493778955612_0001_000002
2017-05-03 10:37:33,707 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Application application_1493778955612_0001 unreserved  on node host: slave1:36670 #containers=1 available=<memory:512, vCores:7> used=<memory:1536, vCores:1>, currently has 0 at priority 20; currentReservation <memory:0, vCores:0>
2017-05-03 10:37:33,708 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000002, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1024, vCores:1>, Priority: 20, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000003 Container Transitioned from NEW to ALLOCATED
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000003
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1493778955612_0001_02_000003 of capacity <memory:1024, vCores:1> on host slave2:40022, which has 1 containers, <memory:1024, vCores:1> used and <memory:1024, vCores:7> available after allocation
2017-05-03 10:37:33,709 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1493778955612_0001_000002 container=Container: [ContainerId: container_1493778955612_0001_02_000003, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 20, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 clusterResource=<memory:4096, vCores:16>
2017-05-03 10:37:33,710 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:33,710 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:34,021 INFO org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave2:40022 for container : container_1493778955612_0001_02_000003
2017-05-03 10:37:34,023 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000003 Container Transitioned from ALLOCATED to ACQUIRED
2017-05-03 10:37:34,712 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000003 Container Transitioned from ACQUIRED to COMPLETED
2017-05-03 10:37:34,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_02_000003 in state: COMPLETED event:FINISHED
2017-05-03 10:37:34,712 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000003
2017-05-03 10:37:34,712 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_02_000003 of capacity <memory:1024, vCores:1> on host slave2:40022, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:34,713 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:34,713 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000003, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 20, Token: Token { kind: ContainerToken, service: 192.168.22.22:40022 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:34,713 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:34,714 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1
2017-05-03 10:37:34,714 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000002 released container container_1493778955612_0001_02_000003 on node: host: slave2:40022 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:35,039 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1493778955612_0001
2017-05-03 10:37:36,081 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000004 Container Transitioned from NEW to ALLOCATED
2017-05-03 10:37:36,082 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000004
2017-05-03 10:37:36,082 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1493778955612_0001_02_000004 of capacity <memory:1024, vCores:1> on host slave2:40022, which has 1 containers, <memory:1024, vCores:1> used and <memory:1024, vCores:7> available after allocation
2017-05-03 10:37:36,082 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1493778955612_0001_000002 container=Container: [ContainerId: container_1493778955612_0001_02_000004, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 clusterResource=<memory:4096, vCores:16>
2017-05-03 10:37:36,082 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:36,082 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:37,062 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000004 Container Transitioned from ALLOCATED to ACQUIRED
2017-05-03 10:37:38,070 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1493778955612_0001
2017-05-03 10:37:38,093 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000004 Container Transitioned from ACQUIRED to COMPLETED
2017-05-03 10:37:38,093 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_02_000004 in state: COMPLETED event:FINISHED
2017-05-03 10:37:38,093 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000004
2017-05-03 10:37:38,093 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_02_000004 of capacity <memory:1024, vCores:1> on host slave2:40022, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:38,093 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:38,095 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000004, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 192.168.22.22:40022 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:38,095 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:38,095 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1
2017-05-03 10:37:38,095 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000002 released container container_1493778955612_0001_02_000004 on node: host: slave2:40022 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:40,101 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000005 Container Transitioned from NEW to ALLOCATED
2017-05-03 10:37:40,101 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000005
2017-05-03 10:37:40,101 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1493778955612_0001_02_000005 of capacity <memory:1024, vCores:1> on host slave2:40022, which has 1 containers, <memory:1024, vCores:1> used and <memory:1024, vCores:7> available after allocation
2017-05-03 10:37:40,102 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1493778955612_0001_000002 container=Container: [ContainerId: container_1493778955612_0001_02_000005, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 clusterResource=<memory:4096, vCores:16>
2017-05-03 10:37:40,102 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:40,102 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:41,092 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000005 Container Transitioned from ALLOCATED to ACQUIRED
2017-05-03 10:37:42,099 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1493778955612_0001
2017-05-03 10:37:42,110 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000005 Container Transitioned from ACQUIRED to COMPLETED
2017-05-03 10:37:42,110 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_02_000005 in state: COMPLETED event:FINISHED
2017-05-03 10:37:42,110 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000005
2017-05-03 10:37:42,110 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_02_000005 of capacity <memory:1024, vCores:1> on host slave2:40022, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:42,110 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:42,111 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000005, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 192.168.22.22:40022 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:42,111 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:42,111 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1
2017-05-03 10:37:42,111 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000002 released container container_1493778955612_0001_02_000005 on node: host: slave2:40022 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:44,109 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: blacklist are updated in Scheduler.blacklistAdditions: [slave2], blacklistRemovals: []
2017-05-03 10:37:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000006 Container Transitioned from NEW to RESERVED
2017-05-03 10:37:44,369 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Reserved container  application=application_1493778955612_0001 resource=<memory:1024, vCores:1> queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:44,370 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:44,370 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:45,114 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: blacklist are updated in Scheduler.blacklistAdditions: [], blacklistRemovals: [slave2]
2017-05-03 10:37:45,134 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Application application_1493778955612_0001 unreserved  on node host: slave1:36670 #containers=1 available=<memory:512, vCores:7> used=<memory:1536, vCores:1>, currently has 0 at priority 5; currentReservation <memory:0, vCores:0>
2017-05-03 10:37:45,134 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:45,134 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000006, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000007 Container Transitioned from NEW to ALLOCATED
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Allocated Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000007
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Assigned container container_1493778955612_0001_02_000007 of capacity <memory:1024, vCores:1> on host slave2:40022, which has 1 containers, <memory:1024, vCores:1> used and <memory:1024, vCores:7> available after allocation
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: assignedContainer application attempt=appattempt_1493778955612_0001_000002 container=Container: [ContainerId: container_1493778955612_0001_02_000007, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: null, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 clusterResource=<memory:4096, vCores:16>
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2560, vCores:2>, usedCapacity=0.625, absoluteUsedCapacity=0.625, numApps=1, numContainers=2
2017-05-03 10:37:45,135 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.625 absoluteUsedCapacity=0.625 used=<memory:2560, vCores:2> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:46,121 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000007 Container Transitioned from ALLOCATED to ACQUIRED
2017-05-03 10:37:47,129 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: checking for deactivate of application :application_1493778955612_0001
2017-05-03 10:37:47,143 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000007 Container Transitioned from ACQUIRED to COMPLETED
2017-05-03 10:37:47,143 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_02_000007 in state: COMPLETED event:FINISHED
2017-05-03 10:37:47,143 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000007
2017-05-03 10:37:47,143 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_02_000007 of capacity <memory:1024, vCores:1> on host slave2:40022, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:47,143 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:1536, vCores:1> numContainers=1 user=root user-resources=<memory:1536, vCores:1>
2017-05-03 10:37:47,144 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000007, NodeId: slave2:40022, NodeHttpAddress: slave2:8042, Resource: <memory:1024, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 192.168.22.22:40022 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:47,144 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.375 absoluteUsedCapacity=0.375 used=<memory:1536, vCores:1> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:47,144 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1536, vCores:1>, usedCapacity=0.375, absoluteUsedCapacity=0.375, numApps=1, numContainers=1
2017-05-03 10:37:47,144 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000002 released container container_1493778955612_0001_02_000007 on node: host: slave2:40022 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1493778955612_0001_02_000001 Container Transitioned from RUNNING to COMPLETED
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp: Completed container: container_1493778955612_0001_02_000001 in state: COMPLETED event:FINISHED
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=AM Released Container        TARGET=SchedulerApp        RESULT=SUCCESS        APPID=application_1493778955612_0001        CONTAINERID=container_1493778955612_0001_02_000001
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1493778955612_0001_02_000001 of capacity <memory:1536, vCores:1> on host slave1:36670, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2048, vCores:8> available, release resources=true
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: default used=<memory:0, vCores:0> numContainers=0 user=root user-resources=<memory:0, vCores:0>
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: completedContainer container=Container: [ContainerId: container_1493778955612_0001_02_000001, NodeId: slave1:36670, NodeHttpAddress: slave1:8042, Resource: <memory:1536, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.22.21:36670 }, ] queue=default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0 cluster=<memory:4096, vCores:16>
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: completedContainer queue=root usedCapacity=0.0 absoluteUsedCapacity=0.0 used=<memory:0, vCores:0> cluster=<memory:4096, vCores:16>
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Re-sorting completed queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=1, numContainers=0
2017-05-03 10:37:54,408 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application attempt appattempt_1493778955612_0001_000002 released container container_1493778955612_0001_02_000001 on node: host: slave1:36670 #containers=0 available=<memory:2048, vCores:8> used=<memory:0, vCores:0> with event: FINISHED
2017-05-03 10:37:54,409 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1493778955612_0001_000002 with final state: FAILED, and exit status: 0
2017-05-03 10:37:54,410 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from RUNNING to FINAL_SAVING
2017-05-03 10:37:54,410 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1493778955612_0001_000002
2017-05-03 10:37:54,410 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1493778955612_0001_000002
2017-05-03 10:37:54,410 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1493778955612_0001_000002 State change from FINAL_SAVING to FAILED
2017-05-03 10:37:54,410 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 2. The max attempts is 2
2017-05-03 10:37:54,411 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Updating application application_1493778955612_0001 with final state: FAILED
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1493778955612_0001 State change from RUNNING to FINAL_SAVING
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Updating info for app: application_1493778955612_0001
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1493778955612_0001_000002 is done. finalState=FAILED
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1493778955612_0001 requests cleared
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1493778955612_0001 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1493778955612_0001_000002
2017-05-03 10:37:54,412 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1493778955612_0001 failed 2 times due to AM Container for appattempt_1493778955612_0001_000002 exited with  exitCode: 0
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1493778955612_0001Then, click on links to logs of each attempt.
Diagnostics: Failing this attempt. Failing the application.
2017-05-03 10:37:54,414 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1493778955612_0001 State change from FINAL_SAVING to FAILED
2017-05-03 10:37:54,414 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        OPERATION=Application Finished - Failed        TARGET=RMAppManager        RESULT=FAILURE        DESCRIPTION=App failed with state: FAILED        PERMISSIONS=Application application_1493778955612_0001 failed 2 times due to AM Container for appattempt_1493778955612_0001_000002 exited with  exitCode: 0
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1493778955612_0001Then, click on links to logs of each attempt.
Diagnostics: Failing this attempt. Failing the application.        APPID=application_1493778955612_0001
2017-05-03 10:37:54,416 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1493778955612_0001,name=select word\,count(*) as count from (s...word(Stage-1),user=root,queue=default,state=FAILED,trackingUrl=http://master:8088/cluster/app/application_1493778955612_0001,appMasterHost=N/A,startTime=1493779007448,finishTime=1493779074411,finalStatus=FAILED,memorySeconds=119955,vcoreSeconds=83,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=MAPREDUCE
2017-05-03 10:37:54,416 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1493778955612_0001 user: root leaf-queue of parent: root #applications: 0
2017-05-03 10:37:57,247 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root        IP=192.168.22.20        OPERATION=Kill Application Request        TARGET=ClientRMService        RESULT=SUCCESS        APPID=application_1493778955612_0001
2017-05-03 10:45:55,490 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler: Release request cache is cleaned up
2017-05-03 10:53:26,069 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_01_000001' doesn't exist in RM.
2017-05-03 10:53:26,070 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_02_000001' doesn't exist in RM.
2017-05-03 10:53:39,553 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_01_000001' doesn't exist in RM.
2017-05-03 10:53:39,554 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_02_000001' doesn't exist in RM.
2017-05-03 10:53:57,844 INFO logs: Aliases are enabled
2017 posted on 2017-5-3 15:45:36
You can increase mapreduce.map.java.opts and mapreduce.reduce.java.opts somewhat; just keep them under 1 GB.
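One detail worth checking before raising these values: the `*.java.opts` properties take JVM flags, not a bare number of megabytes, so a value like `812` (as in the config quoted in the original post) is not a valid heap setting. A minimal sketch of what the mapred-site.xml entries would look like, assuming roughly 80% of the 1024 MB container size is given to the heap (the exact sizes here are illustrative, not tuned):

```xml
<!-- mapred-site.xml: *.java.opts must be JVM flags such as -Xmx812m,
     and the heap should stay below mapreduce.{map,reduce}.memory.mb. -->
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx812m</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx812m</value>
</property>
```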
langke93 posted on 2017-5-3 20:49:42
It is definitely a memory problem. You need to understand what each of the parameters you changed actually means.
Smile鹏鹏 posted on 2017-5-4 10:13:21
I have looked up what each parameter means, and I tried changing them several more times yesterday afternoon, but it still fails.
easthome001 posted on 2017-5-4 10:18:59
Smile鹏鹏 posted on 2017-5-3 10:58
2017-05-03 10:37:18,636 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.Capaci ...

2017-05-03 10:53:26,069 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_01_000001' doesn't exist in RM.
2017-05-03 10:53:26,070 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_02_000001' doesn't exist in RM.
2017-05-03 10:53:39,553 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_01_000001' doesn't exist in RM.
2017-05-03 10:53:39,554 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'container_1493778955612_0001_02_000001' doesn't exist in RM.
The container was killed. Look at yarn.nodemanager.vmem-pmem-ratio, which defaults to 2.1. This ratio governs virtual-memory usage: when the virtual memory YARN measures for a task exceeds 2.1 times the mapreduce.map.memory.mb or mapreduce.reduce.memory.mb set in mapred-site.xml, the container gets killed.
Alternatively, try changing this parameter:
yarn.nodemanager.vmem-check-enabled — set it to false to disable the virtual-memory check
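The two workarounds above can be sketched as yarn-site.xml entries; this is only an illustration, and the ratio value of 4 is an arbitrary example rather than a tuned recommendation for this cluster (usually only one of the two changes is needed):

```xml
<!-- yarn-site.xml: either raise the virtual/physical memory ratio...  -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>
<!-- ...or disable the virtual-memory check entirely. -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
```

Either change takes effect only after restarting the NodeManagers.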

Smile鹏鹏 posted on 2017-5-8 13:17:21
easthome001 posted on 2017-5-4 10:18
2017-05-03 10:53:26,069 WARN org.apache.hadoop.yarn.server.webapp.AppBlock: Container with id 'con ...

OK, thanks, I will give it a try.
I got the query running on a pseudo-distributed setup, so I stopped working on the original cluster.