19/05/02 11:55:42 INFO fs.TestDFSIO: TestDFSIO.1.8
19/05/02 11:55:42 INFO fs.TestDFSIO: nrFiles = 10
19/05/02 11:55:42 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
19/05/02 11:55:42 INFO fs.TestDFSIO: bufferSize = 1000000
19/05/02 11:55:42 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/05/02 11:55:45 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
19/05/02 11:55:47 INFO fs.TestDFSIO: created control files for: 10 files
19/05/02 11:55:47 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:55:48 INFO client.RMProxy: Connecting to ResourceManager at hadoop102/192.168.1.103:8032
19/05/02 11:55:49 INFO mapred.FileInputFormat: Total input paths to process : 10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: number of splits:10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1556766549220_0004
19/05/02 11:55:50 INFO impl.YarnClientImpl: Submitted application application_1556766549220_0004
19/05/02 11:55:50 INFO mapreduce.Job: The url to track the job: http://hadoop102:8088/proxy/application_1556766549220_0004/
19/05/02 11:55:50 INFO mapreduce.Job: Running job: job_1556766549220_0004
19/05/02 11:56:04 INFO mapreduce.Job: Job job_1556766549220_0004 running in uber mode : false
19/05/02 11:56:04 INFO mapreduce.Job: map 0% reduce 0%
19/05/02 11:56:24 INFO mapreduce.Job: map 7% reduce 0%
19/05/02 11:56:27 INFO mapreduce.Job: map 23% reduce 0%
19/05/02 11:56:28 INFO mapreduce.Job: map 63% reduce 0%
19/05/02 11:56:29 INFO mapreduce.Job: map 73% reduce 0%
19/05/02 11:56:30 INFO mapreduce.Job: map 77% reduce 0%
19/05/02 11:56:31 INFO mapreduce.Job: map 87% reduce 0%
19/05/02 11:56:32 INFO mapreduce.Job: map 100% reduce 0%
19/05/02 11:56:35 INFO mapreduce.Job: map 100% reduce 100%
19/05/02 11:56:36 INFO mapreduce.Job: Job job_1556766549220_0004 completed successfully
19/05/02 11:56:36 INFO mapreduce.Job: Counters: 51
    File System Counters
        FILE: Number of bytes read=852
        FILE: Number of bytes written=1304796
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1342179630
        HDFS: Number of bytes written=78
        HDFS: Number of read operations=53
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Killed map tasks=1
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=8
        Rack-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=233690
        Total time spent by all reduces in occupied slots (ms)=7215
        Total time spent by all map tasks (ms)=233690
        Total time spent by all reduce tasks (ms)=7215
        Total vcore-milliseconds taken by all map tasks=233690
        Total vcore-milliseconds taken by all reduce tasks=7215
        Total megabyte-milliseconds taken by all map tasks=239298560
        Total megabyte-milliseconds taken by all reduce tasks=7388160
    Map-Reduce Framework
        Map input records=10
        Map output records=50
        Map output bytes=746
        Map output materialized bytes=906
        Input split bytes=1230
        Combine input records=0
        Combine output records=0
        Reduce input groups=5
        Reduce shuffle bytes=906
        Reduce input records=50
        Reduce output records=5
        Spilled Records=100
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=6473
        CPU time spent (ms)=57610
        Physical memory (bytes) snapshot=2841436160
        Virtual memory (bytes) snapshot=23226683392
        Total committed heap usage (bytes)=2070413312
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1120
    File Output Format Counters
        Bytes Written=78
19/05/02 11:56:36 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
19/05/02 11:56:36 INFO fs.TestDFSIO: Date & time: Thu May 02 11:56:36 CST 2019
19/05/02 11:56:36 INFO fs.TestDFSIO: Number of files: 10
19/05/02 11:56:36 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:56:36 INFO fs.TestDFSIO: Throughput mb/sec: 16.001000062503905
19/05/02 11:56:36 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.202795028686523
19/05/02 11:56:36 INFO fs.TestDFSIO: IO rate std deviation: 4.881590515873911
19/05/02 11:56:36 INFO fs.TestDFSIO: Test exec time sec: 49.116
19/05/02 11:56:36 INFO fs.TestDFSIO:
3) Delete the data generated by the test
[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -clean
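To confirm the cleanup, you can list the benchmark base directory shown in the log above (/benchmarks/TestDFSIO); after a successful clean, the TestDFSIO data should no longer appear (the base directory itself may also have been removed):
[kgg@hadoop101 mapreduce]$ hadoop fs -ls /benchmarks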
4) Benchmark MapReduce with the Sort program
(1) Use RandomWriter to generate random data: each node runs 10 map tasks, and each map produces roughly 1 GB of random binary data (these sizes can be overridden, as shown below)
[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar randomwriter random-data
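The 10 map tasks per node and roughly 1 GB per map are RandomWriter's defaults. If smaller test data is needed, the per-map output size can be overridden with a generic option; the property name below is taken from the Hadoop 2.x RandomWriter example source, so verify it against your version (134217728 bytes = 128 MB per map):
[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar randomwriter -D mapreduce.randomwriter.bytespermap=134217728 random-data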
(2) Run the Sort program
[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar sort random-data sorted-data
(3) Verify that the data is actually sorted
[kgg@hadoop101 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar testmapredsort -sortInput random-data -sortOutput sorted-data
4.1.4 Project Experience: Hadoop Parameter Tuning
1) HDFS parameter tuning (hdfs-site.xml)
(1) dfs.namenode.handler.count = 20 * log2(cluster size); for example, with a cluster of 8 nodes, set this parameter to 60.
The number of Namenode RPC server threads that listen to requests from clients. If dfs.namenode.servicerpc-address is not configured then Namenode RPC server threads listen to requests from all nodes.
The NameNode maintains a pool of worker threads to handle concurrent heartbeats from the DataNodes and concurrent metadata operations from clients. For large clusters, or clusters with many clients, the default value of 10 for dfs.namenode.handler.count usually needs to be increased. The rule of thumb is to set it to 20 times the base-2 logarithm of the cluster size, i.e. 20 * log2(N), where N is the number of nodes (consistent with the example above: 20 * log2(8) = 60).
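As an illustration, for the 8-node example above the value would go into hdfs-site.xml like this (a minimal sketch; recompute 20 * log2(N) for your own cluster size):
<property>
    <name>dfs.namenode.handler.count</name>
    <!-- 20 * log2(8) = 20 * 3 = 60 for an 8-node cluster -->
    <value>60</value>
</property>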
(2) Configure the edit-log storage path dfs.namenode.edits.dir to be separate from the fsimage storage path dfs.namenode.name.dir wherever possible, so that writes incur the lowest possible latency.
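A minimal hdfs-site.xml sketch of this separation, assuming two independent disks mounted at the hypothetical paths /disk1 and /disk2:
<property>
    <name>dfs.namenode.name.dir</name>
    <!-- fsimage on one disk -->
    <value>/disk1/hadoop/name</value>
</property>
<property>
    <name>dfs.namenode.edits.dir</name>
    <!-- edit log on a separate disk to reduce write latency -->
    <value>/disk2/hadoop/edits</value>
</property>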
2) YARN parameter tuning (yarn-site.xml)
(1) Scenario: 7 machines in total, several hundred million records per day; pipeline: data source -> Flume -> Kafka -> HDFS -> Hive
Problem: statistics are computed mainly in Hive SQL. There is no data skew, small files have already been merged, JVM reuse is enabled, I/O is not a bottleneck, and memory usage is under 50%. Yet jobs still run very slowly, and when a data surge arrives the whole cluster goes down. Is there an optimization strategy for this situation?
(2) Solution:
Memory utilization is too low. This is usually caused by two YARN settings: the maximum memory a single task can request, and the total memory available to YARN on a single node. Tuning these two parameters raises the system's memory utilization (see the configuration sketch after this list).
(a) yarn.nodemanager.resource.memory-mb
The total amount of physical memory YARN may use on this node. The default is 8192 MB. Note that if the node has less than 8 GB of memory, you must lower this value; YARN does not automatically detect a node's total physical memory.
(b) yarn.scheduler.maximum-allocation-mb
The maximum amount of physical memory a single task can request. The default is 8192 MB.
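For example, on a node with 4 GB of memory available to YARN (a hypothetical figure; size it to your own hardware), the two properties could be set in yarn-site.xml as follows:
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <!-- total physical memory YARN may use on this node -->
    <value>4096</value>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <!-- maximum physical memory a single container may request -->
    <value>4096</value>
</property>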