howtodown posted on 2014-11-3 16:51
I had already configured these variables in my environment, though not in exactly the way you described; your way is also correct. The problem described in this thread is now solved. Thanks to everyone on the forum who, like you, helped me.
sstutu posted on 2014-11-3 16:48
Thanks, I did as you suggested. A single run now works without problems, but with 10 concurrent runs the problem appears again. I think it is a memory issue and I am now learning how to configure these settings. Thanks to everyone who, like you, helped me. Marking this thread as resolved.
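For reference, when the error only shows up under 10 concurrent runs, the usual suspects are the per-container memory settings. A hedged sketch of the mapred-site.xml knobs involved (the values below are illustrative, not tuned for this cluster; the container size must exceed the JVM heap given in the matching java.opts):

```xml
<!-- Sketch only: size each map/reduce container and its JVM heap together. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1536</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2560m</value>
</property>
```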
hletian posted on 2014-11-3 16:38
It would be best to include screenshots; a picture is worth a thousand words. Otherwise there is no way to tell whether what you changed is right or wrong. Replace the directories below (shown in red in the original post) with your actual directories:

    $HADOOP_CONF_DIR,
    $HADOOP_COMMON_HOME/share/hadoop/common/*,
    $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
    $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
    $YARN_HOME/share/hadoop/yarn/*,
    $YARN_HOME/share/hadoop/yarn/lib/*,
    $YARN_HOME/share/hadoop/mapreduce/*,
    $YARN_HOME/share/hadoop/mapreduce/lib/*
sstutu posted on 2014-11-3 14:28
I added this to yarn-site.xml:

    <property>
      <name>yarn.application.classpath</name>
      <value>
        $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
        $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
        $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
        $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*
      </value>
    </property>

and this to mapred-site.xml:

    <property>
      <name>mapreduce.application.classpath</name>
      <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
        $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
        $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
        $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>

The result is still the same error; it had no effect.
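Those classpath entries only help if each `$VARIABLE` they reference actually resolves on the node where the container launches. A minimal shell sketch to check this (the `check_var` helper is hypothetical, written for this post, not part of Hadoop):

```shell
# Hypothetical helper: report whether an environment variable is set.
check_var() {
  eval "val=\$$1"
  if [ -z "$val" ]; then
    echo "$1 is NOT set"
  else
    echo "$1=$val"
  fi
}

# The variables referenced by the classpath properties above.
for v in HADOOP_CONF_DIR HADOOP_COMMON_HOME HADOOP_HDFS_HOME YARN_HOME; do
  check_var "$v"
done
```

If any of these prints "NOT set", that matches hletian's advice above: substitute the literal directories into the property values instead of relying on the variables.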
Running an INSERT OVERWRITE, the container ran out of virtual memory. The earlier error is more or less solved, but now there is a new problem. This thing is a real pain.

    Task with the most failures(4):
    -----
    Task ID: task_1414987379617_0012_m_000002
    URL: http://redhat6-master1:8088/taskdetails.jsp?jobid=job_1414987379617_0012&tipid=task_1414987379617_0012_m_000002
    -----
    Diagnostic Messages for this Task:
    Container [pid=437,containerID=container_1414987379617_0012_01_000025] is running beyond virtual memory limits.
    Current usage: 24.9 MB of 1 GB physical memory used; 6.5 GB of 2.1 GB virtual memory used. Killing container.
    Dump of the process-tree for container_1414987379617_0012_01_000025 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 559 437 437 437 (java) 22 8 6900985856 6066 /usr/java/jdk1.6.0_26/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN 1024 -Djava.io.tmpdir=/data/app/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1414987379617_0012/container_1414987379617_0012_01_000025/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data/app/hadoop/hadoop-2.4.0/logs/userlogs/application_1414987379617_0012/container_1414987379617_0012_01_000025 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.1.194.187 58718 attempt_1414987379617_0012_m_000002_3 25
    |- 437 53997 437 437 (bash) 3 5 108642304 299 /bin/bash -c /usr/java/jdk1.6.0_26/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN 1024 -Djava.io.tmpdir=/data/app/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1414987379617_0012/container_1414987379617_0012_01_000025/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/data/app/hadoop/hadoop-2.4.0/logs/userlogs/application_1414987379617_0012/container_1414987379617_0012_01_000025 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 10.1.194.187 58718 attempt_1414987379617_0012_m_000002_3 25 1>/data/app/hadoop/hadoop-2.4.0/logs/userlogs/application_1414987379617_0012/container_1414987379617_0012_01_000025/stdout 2>/data/app/hadoop/hadoop-2.4.0/logs/userlogs/application_1414987379617_0012/container_1414987379617_0012_01_000025/stderr

    Container killed on request. Exit code is 143
    Container exited with a non-zero exit code 143
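The dump shows 6.5 GB of virtual memory used against a 2.1 GB limit, which is 1 GB of physical memory times YARN's default yarn.nodemanager.vmem-pmem-ratio of 2.1. Note also the bare `1024` in the java command line right after `-Dhadoop.metrics.log.level=WARN`, where a JVM flag such as `-Xmx1024m` would normally appear; that may indicate a java.opts property was set to a plain number. A hedged sketch of the settings commonly used for this kill (values illustrative, not tuned for this cluster):

```xml
<!-- yarn-site.xml sketch: either raise the virtual/physical ratio ... -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
<!-- ... or disable the virtual-memory check entirely -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- mapred-site.xml sketch: give the child JVM a well-formed heap flag -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```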
bioger_hit posted on 2014-11-3 14:02
I tried that and it didn't work. I've found the cause of this problem: HADOOP_CONF_DIR in hive-env.sh was misconfigured... but now there is a new error:

    Diagnostic Messages for this Task:
    Exception from container-launch:
    org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Check that configuration.
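A sketch of what hive-env.sh would typically contain once HADOOP_CONF_DIR points at the right place. The /data/app/hadoop/hadoop-2.4.0 prefix is taken from the log paths in this thread; substitute your own installation directory:

```shell
# hive-env.sh (sketch): HADOOP_CONF_DIR must point at the directory that
# actually holds core-site.xml / hdfs-site.xml / yarn-site.xml / mapred-site.xml.
export HADOOP_HOME=/data/app/hadoop/hadoop-2.4.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```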
howtodown posted on 2014-11-3 14:03
(quotes the same message and container-launch stack trace as the post above)