
Hadoop Essentials 1-2: Installing and Configuring a Hadoop 2.2.0 Cluster on CentOS

xioaxu790, posted 2014-05-19 09:05
This post continues from the previous installment:
Hadoop Essentials 1-1: Installing and Configuring a Hadoop 2.2.0 Cluster

Questions this post answers:
1. How do you start the cluster?
2. How do you view the cluster through the web UI?
3. How do you check resource status on the slave nodes?
4. What does the JobHistory Server do?
5. How do you verify the cluster with WordCount?



Starting the YARN cluster
On the master node m1, run:

  start-yarn.sh

Check the startup logs to confirm that the YARN daemons came up cleanly:

  tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-resourcemanager-m1.log
  tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s1.log
  tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s2.log
  tail -100f /home/shirdrn/cloud/storage/hadoop-2.2.0/logs/yarn-shirdrn-nodemanager-s3.log

Alternatively, check the corresponding Java processes:

  jps
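
A quick way to confirm the expected daemons on every node is a jps sweep over SSH. This is a minimal sketch, assuming the passwordless SSH between nodes that start-yarn.sh itself relies on:

  # On m1 we expect a ResourceManager process; on s1..s3 a NodeManager.
  # Assumes passwordless SSH from m1 to every node (as start-yarn.sh needs).
  for host in m1 s1 s2 s3; do
    echo "== $host =="
    ssh "$host" 'jps | grep -E "ResourceManager|NodeManager"'
  done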

The ResourceManager runs on the master node m1; its status can be viewed in the web console:

  http://m1:8088/
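
The ResourceManager also exposes a REST API on the same port; a quick scripted health check (assuming curl is installed on m1) might look like this:

  # /ws/v1/cluster/info returns JSON describing the cluster,
  # including its state, which should be STARTED when healthy.
  curl -s http://m1:8088/ws/v1/cluster/info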

Each NodeManager runs on a slave node; that node's resource status can be viewed in its own web console, e.g. for node s1:

  http://s1:8042/
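
The same node information is also available from the command line via the yarn CLI:

  # Asks the ResourceManager for every registered NodeManager,
  # with its state, running container count, and HTTP address.
  yarn node -list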


Managing the JobHistory Server
Starting the JobHistory Server lets you browse information about the cluster's completed jobs in a web console. Run:

  mr-jobhistory-daemon.sh start historyserver

It listens on port 19888 by default; visit http://m1:19888/ to browse job execution history.
To stop the JobHistory Server, run:

  mr-jobhistory-daemon.sh stop historyserver
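
Both addresses the history server uses can be set explicitly in mapred-site.xml. A minimal sketch pinning it to m1 (the property names are standard Hadoop ones; the host/port values match this cluster):

  <!-- mapred-site.xml: where the JobHistory Server listens -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>m1:10020</value>          <!-- RPC port used by clients -->
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>m1:19888</value>          <!-- web console port -->
  </property>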


Verifying the cluster
We verify the cluster with the WordCount example that ships with Hadoop.
First create a couple of directories in HDFS:

  hadoop fs -mkdir -p /data/wordcount
  hadoop fs -mkdir -p /output/


The directory /data/wordcount holds the input files for the WordCount example; the MapReduce job writes its results to /output/wordcount.
Upload some local files to HDFS:

  hadoop fs -put /home/shirdrn/cloud/programs/hadoop-2.2.0/etc/hadoop/*.xml /data/wordcount/

To verify the upload, list the directory:

  hadoop fs -ls /data/wordcount

The uploaded files should appear in the listing.
Now run the WordCount example:

  hadoop jar /home/shirdrn/cloud/programs/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /data/wordcount /output/wordcount
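
One caveat: MapReduce refuses to start a job whose output directory already exists, so when re-running the example, remove the old output first:

  # FileOutputFormat fails fast on an existing output directory,
  # so clear it before a re-run.
  hadoop fs -rm -r /output/wordcount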

The console shows the job's progress:

  [shirdrn@m1 hadoop-2.2.0]$ hadoop jar /home/shirdrn/cloud/programs/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /data/wordcount /output/wordcount
  13/12/25 22:38:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  13/12/25 22:38:03 INFO client.RMProxy: Connecting to ResourceManager at m1/10.95.3.48:8032
  13/12/25 22:38:04 INFO input.FileInputFormat: Total input paths to process : 7
  13/12/25 22:38:04 INFO mapreduce.JobSubmitter: number of splits:7
  13/12/25 22:38:04 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
  13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
  13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
  13/12/25 22:38:04 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
  13/12/25 22:38:04 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
  13/12/25 22:38:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1388039619930_0002
  13/12/25 22:38:05 INFO impl.YarnClientImpl: Submitted application application_1388039619930_0002 to ResourceManager at m1/10.95.3.48:8032
  13/12/25 22:38:05 INFO mapreduce.Job: The url to track the job: http://m1:8088/proxy/application_1388039619930_0002/
  13/12/25 22:38:05 INFO mapreduce.Job: Running job: job_1388039619930_0002
  13/12/25 22:38:14 INFO mapreduce.Job: Job job_1388039619930_0002 running in uber mode : false
  13/12/25 22:38:14 INFO mapreduce.Job:  map 0% reduce 0%
  13/12/25 22:38:22 INFO mapreduce.Job:  map 14% reduce 0%
  13/12/25 22:38:42 INFO mapreduce.Job:  map 29% reduce 5%
  13/12/25 22:38:43 INFO mapreduce.Job:  map 43% reduce 5%
  13/12/25 22:38:45 INFO mapreduce.Job:  map 43% reduce 14%
  13/12/25 22:38:54 INFO mapreduce.Job:  map 57% reduce 14%
  13/12/25 22:38:55 INFO mapreduce.Job:  map 71% reduce 19%
  13/12/25 22:38:56 INFO mapreduce.Job:  map 100% reduce 19%
  13/12/25 22:38:57 INFO mapreduce.Job:  map 100% reduce 100%
  13/12/25 22:38:58 INFO mapreduce.Job: Job job_1388039619930_0002 completed successfully
  13/12/25 22:38:58 INFO mapreduce.Job: Counters: 44
      File System Counters
          FILE: Number of bytes read=15339
          FILE: Number of bytes written=667303
          FILE: Number of read operations=0
          FILE: Number of large read operations=0
          FILE: Number of write operations=0
          HDFS: Number of bytes read=21904
          HDFS: Number of bytes written=9717
          HDFS: Number of read operations=24
          HDFS: Number of large read operations=0
          HDFS: Number of write operations=2
      Job Counters
          Killed map tasks=2
          Launched map tasks=9
          Launched reduce tasks=1
          Data-local map tasks=9
          Total time spent by all maps in occupied slots (ms)=457338
          Total time spent by all reduces in occupied slots (ms)=65832
      Map-Reduce Framework
          Map input records=532
          Map output records=1923
          Map output bytes=26222
          Map output materialized bytes=15375
          Input split bytes=773
          Combine input records=1923
          Combine output records=770
          Reduce input groups=511
          Reduce shuffle bytes=15375
          Reduce input records=770
          Reduce output records=511
          Spilled Records=1540
          Shuffled Maps =7
          Failed Shuffles=0
          Merged Map outputs=7
          GC time elapsed (ms)=3951
          CPU time spent (ms)=22610
          Physical memory (bytes) snapshot=1598832640
          Virtual memory (bytes) snapshot=6564274176
          Total committed heap usage (bytes)=971993088
      Shuffle Errors
          BAD_ID=0
          CONNECTION=0
          IO_ERROR=0
          WRONG_LENGTH=0
          WRONG_MAP=0
          WRONG_REDUCE=0
      File Input Format Counters
          Bytes Read=21131
      File Output Format Counters
          Bytes Written=9717

Inspect the result:

  hadoop fs -cat /output/wordcount/part-r-00000 | head

Sample output follows; the trailing "cat: Unable to write to output stream." message is expected, because head closes the pipe after ten lines:

  [shirdrn@m1 hadoop-2.2.0]$ hadoop fs -cat /output/wordcount/part-r-00000 | head
  13/12/25 22:58:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  "*"     17
  "AS     3
  "License");     3
  "alice,bob     17
  $HADOOP_HOME/share/hadoop/common/lib/*,     1
  $HADOOP_HOME/share/hadoop/hdfs/*,$HADOOP_HOME/share/hadoop/hdfs/lib/*,     1
  $HADOOP_HOME/share/hadoop/mapreduce/*,$HADOOP_HOME/share/hadoop/mapreduce/lib/*</value>     1
  $HADOOP_HOME/share/hadoop/yarn/*,$HADOOP_HOME/share/hadoop/yarn/lib/*,     1
  (ASF)     1
  (YARN-1229)-->     1
  cat: Unable to write to output stream.
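
To pull the complete result file down to the local filesystem instead of paging it, hadoop fs -get works as well (the local destination path here is just an example):

  # Copy the single reduce output file out of HDFS to a local file.
  hadoop fs -get /output/wordcount/part-r-00000 /tmp/wordcount-result.txt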

Open the web console at http://m1:8088/ to see the job's record.
This confirms that our HDFS can store data and that the YARN cluster can run MapReduce jobs.

Problems and notes
Default configuration worth knowing
Hadoop 2.2.0's YARN framework ships with many default parameter values; on machines with limited resources you will need to override these defaults for some jobs to run. The NodeManager and ResourceManager are configured in yarn-site.xml, while the settings used when running MapReduce jobs live in mapred-site.xml.


The relevant parameters and their defaults:

Parameter | Default | Daemon | Config file | Meaning
yarn.nodemanager.resource.memory-mb | 8192 | NodeManager | yarn-site.xml | Total physical memory (MB) on the slave's physical host available to YARN
yarn.nodemanager.resource.cpu-vcores | 8 | NodeManager | yarn-site.xml | Total virtual CPU cores on the slave's physical host available to YARN
yarn.nodemanager.vmem-pmem-ratio | 2.1 | NodeManager | yarn-site.xml | Maximum virtual memory usable per 1 MB of physical memory used
yarn.scheduler.minimum-allocation-mb | 1024 | ResourceManager | yarn-site.xml | Minimum memory (MB) granted per allocation request
yarn.scheduler.maximum-allocation-mb | 8192 | ResourceManager | yarn-site.xml | Maximum memory (MB) granted per allocation request
yarn.scheduler.minimum-allocation-vcores | 1 | ResourceManager | yarn-site.xml | Minimum virtual CPU cores granted per allocation request
yarn.scheduler.maximum-allocation-vcores | 8 | ResourceManager | yarn-site.xml | Maximum virtual CPU cores granted per allocation request
mapreduce.framework.name | local | MapReduce | mapred-site.xml | One of local, classic, or yarn; unless set to yarn, the YARN cluster is not used for resource allocation
mapreduce.map.memory.mb | 1024 | MapReduce | mapred-site.xml | Memory (MB) each map task of a job may request
mapreduce.map.cpu.vcores | 1 | MapReduce | mapred-site.xml | Virtual CPU cores each map task of a job may request
mapreduce.reduce.memory.mb | 1024 | MapReduce | mapred-site.xml | Memory (MB) each reduce task of a job may request
mapreduce.reduce.cpu.vcores | 1 | MapReduce | mapred-site.xml | Virtual CPU cores each reduce task of a job may request
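
For example, on slaves with little memory one might override the defaults along these lines; the values below are an illustrative sketch for a node with roughly 2 GB of RAM, not tuned recommendations:

  <!-- yarn-site.xml on each slave: advertise only what the host really has -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>1536</value>
  </property>

  <!-- mapred-site.xml: shrink per-task requests to fit the smaller nodes -->
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>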

Exception: java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.66:50010
The full exception output:
  [shirdrn@m1 hadoop-2.2.0]$ hadoop fs -put /home/shirdrn/cloud/programs/hadoop-2.2.0/etc/hadoop/*.xml /data/wordcount/
  13/12/25 21:29:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  13/12/25 21:29:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream
  java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.66:50010
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
  13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741825_1001
  13/12/25 21:29:46 INFO hdfs.DFSClient: Excluding datanode 10.95.3.66:50010
  13/12/25 21:29:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream
  java.io.IOException: Bad connect ack with firstBadLink as 10.95.3.59:50010
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1166)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
  13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741826_1002
  13/12/25 21:29:46 INFO hdfs.DFSClient: Excluding datanode 10.95.3.59:50010
  13/12/25 21:29:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream
  java.net.NoRouteToHostException: No route to host
       at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
       at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
       at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
       at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
       at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
  13/12/25 21:29:46 INFO hdfs.DFSClient: Abandoning BP-1906424073-10.95.3.48-1388035628061:blk_1073741828_1004
  13/12/25 21:29:46 INFO hdfs.DFSClient: Excluding datanode 10.95.3.59:50010
  13/12/25 21:29:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream

The cause is that firewalls were still running on some nodes in the Hadoop cluster, so other nodes could not reach them (here, the DataNode port 50010).
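
On CentOS 6 the quick fix for a test cluster is to disable iptables on every node (in production you would instead open just the ports Hadoop needs):

  # Run as root on every node. Test-cluster shortcut only.
  service iptables stop      # stop the firewall immediately
  chkconfig iptables off     # keep it disabled across reboots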

If your ability cannot yet support your ambition, settle down and study!

That concludes this post.

If you still have questions, see:
Hadoop 2.X Manual 1: Viewing the master node, the slave1 node, and cluster status through the web ports
Hadoop 2.X Manual 2: How to run the bundled wordcount
A summary of problems running MapReduce (wordcount) on Hadoop 2.2


Reposted from: 简单之美



