Recently the DataNodes in our Hadoop/HBase cluster have been dying en masse. The logs show:
storageID=DS-1187529127-192.168.0.242-50010-1422253305779, infoPort=50075,
ipcPort=50020):DataXceiver
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:691)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:576)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:404)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
at java.lang.Thread.run(Thread.java:722)
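The message comes from Thread.start0: at the moment receiveBlock tried to start another worker thread, the JVM could not obtain a new native thread from the operating system. A minimal sketch (the class name NativeThreadOome is made up purely for illustration) that provokes the same message by starting parked threads until the OS refuses:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class NativeThreadOome {
    public static void main(String[] args) {
        // Run with a low per-user thread/process limit (e.g. "ulimit -u 1024")
        // so the machine stays usable while the limit is hit.
        final CountDownLatch never = new CountDownLatch(1);
        List<Thread> threads = new ArrayList<Thread>();
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            never.await();   // park forever so every thread stays alive
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.start();                   // eventually: OutOfMemoryError: unable to create new native thread
                threads.add(t);
            }
        } catch (OutOfMemoryError e) {
            System.out.println("started " + threads.size() + " threads, then: " + e.getMessage());
        }
    }
}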
The DataNode had run out of memory (its heap was set to 1000m), so we raised the DataNode's maximum heap to 2048m and restarted it. Inspecting the DataNode process with jmap afterwards gives:
using thread-local object allocation.
Parallel GC with 10 thread(s)
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 2147483648 (2048.0MB)
NewSize = 1310720 (1.25MB)
MaxNewSize = 17592186044415 MB
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 85983232 (82.0MB)
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 16711680 (15.9375MB)
used = 11330912 (10.805999755859375MB)
free = 5380768 (5.131500244140625MB)
67.80235140931373% used
From Space:
capacity = 327680 (0.3125MB)
used = 294912 (0.28125MB)
free = 32768 (0.03125MB)
90.0% used
To Space:
capacity = 393216 (0.375MB)
used = 0 (0.0MB)
free = 393216 (0.375MB)
0.0% used
PS Old Generation
capacity = 125829120 (120.0MB)
used = 123925368 (118.18444061279297MB)
free = 1903752 (1.8155593872070312MB)
98.48703384399414% used
PS Perm Generation
capacity = 38141952 (36.375MB)
used = 17436280 (16.62853240966797MB)
free = 20705672 (19.74646759033203MB)
45.7141784458226% used
7361 interned Strings occupying 619352 bytes.
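For cross-checking, the same pools can also be read programmatically through java.lang.management; below is a minimal sketch (the class name HeapPoolDump is made up) that prints used/committed/max for every heap pool of whatever JVM it runs in. On the Parallel collector the pools appear as PS Eden Space, PS Survivor Space and PS Old Gen, and as far as we can tell the "capacity" figures in the jmap dump are the committed sizes, not the configured maximum.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class HeapPoolDump {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() != MemoryType.HEAP) {
                continue;   // skip perm gen, code cache and other non-heap pools
            }
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-20s used=%dMB committed=%dMB max=%dMB%n",
                    pool.getName(),
                    u.getUsed() / (1024 * 1024),
                    u.getCommitted() / (1024 * 1024),
                    u.getMax() / (1024 * 1024));
        }
    }
}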
We set -Xmx to 2048m on the DataNode and did not configure -Xmn; the default new-generation size is said (online) to be 1/4 of MaxHeapSize. So in theory:
MaxHeapSize = PS Young Generation + PS Old Generation
Yet the figures above are nowhere near that. Where did the rest of the memory go? And if this is how things stand, can the DataNode still hit OutOfMemoryError?
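Plugging the committed capacities from the jmap dump above into that relation:

PS Young Generation ≈ 15.9375 + 0.3125 + 0.375 ≈ 16.6 MB   (Eden + From + To)
PS Old Generation   = 120.0 MB
Total               ≈ 136.6 MB, nowhere near MaxHeapSize = 2048 MB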
Hoping someone can shed light on this. Thanks in advance!