Hadoop 2.7.1 Rack Awareness
Guiding questions:
1. What role does rack awareness play in Hadoop components?
2. In what format is topology information kept, and what does each part of the format mean?
3. By which two methods can the Hadoop master obtain the rack ids of the slave nodes, and how is each configured?
4. How do you specify a Java class to implement the topology mapping?
Hadoop components are rack-aware. For fault tolerance, HDFS block placement uses rack awareness to put replicas on different racks, so that data remains available after a network switch failure or a partition within the cluster. The Hadoop master daemons obtain the rack id of each slave by invoking either an external script or a Java class, as specified in the configuration files. Whether a Java class or an external script is used, the output must adhere to the Java org.apache.hadoop.net.DNSToSwitchMapping interface. The interface expects a one-to-one correspondence to be maintained, with topology information in the format '/myrack/myhost', where '/' is the topology delimiter, 'myrack' is the rack identifier, and 'myhost' is the individual host. Assuming a single /24 subnet per rack, one could use the format '/192.168.100.0/192.168.100.5' as a unique rack-host topology mapping.
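To make the format concrete, here is a minimal Python sketch (illustrative only, not part of Hadoop) that derives the '/myrack/myhost' string from a host IP, assuming Python 3's standard ipaddress module and one /24 subnet per rack:

import ipaddress

def rack_host_mapping(ip):
    # the rack id is taken to be the /24 network address of the host
    rack = ipaddress.ip_interface(ip + '/24').network.network_address
    return '/{0}/{1}'.format(rack, ip)

print(rack_host_mapping('192.168.100.5'))   # -> /192.168.100.0/192.168.100.5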
If a topology Java class is used, its class name is specified by the net.topology.node.switch.mapping.impl configuration parameter. An example, NetworkTopology.java, is included with the Hadoop distribution and can be customized by the administrator. Using a Java class instead of an external script has a performance benefit: Hadoop does not need to fork an external process each time a new slave node registers itself.
If an external script is used instead, it is specified with the net.topology.script.file.name configuration parameter. Unlike the Java class, the external topology script is not included with the Hadoop distribution; it must be provided by the administrator.
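As a sketch of what the wiring looks like, the property goes into core-site.xml; the script path below is just an example, so adjust it to where you install your own script:

<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/topology.py</value>
</property>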
When forking the topology script, Hadoop sends multiple IP addresses to ARGV. The number of IP addresses sent per invocation is controlled by net.topology.script.number.args and defaults to 100. If net.topology.script.number.args were changed to 1, the script would be forked once for each IP submitted by the DataNodes and/or NodeManagers.
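For example, with the default of 100, resolving the addresses of 250 newly registered DataNodes would take three invocations of the script, passing 100, 100, and then 50 addresses; with the value set to 1, it would take 250 separate forks.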
If neither net.topology.script.file.name nor net.topology.node.switch.mapping.impl is set, the rack id '/default-rack' is returned for any passed IP address. While this behavior appears desirable, it can cause problems with HDFS block replication: the default behavior is to write one replica off-rack, which is impossible when there is only a single rack named '/default-rack'.
An additional configuration setting is mapreduce.jobtracker.taskcache.levels, which determines the number of levels (in the network topology) of caches MapReduce will use. For example, with the default value of 2, two levels of caches are constructed: one for hosts (host -> task mapping) and another for racks (rack -> task mapping), matching our one-to-one '/myrack/myhost' mapping.
Python example
#!/usr/bin/python
# This script makes assumptions about the physical environment.
#  1) each rack is its own layer 3 network with a /24 subnet, which
#     could be typical where each rack has its own
#     switch with uplinks to a central core router.
#
#             +-----------+
#             |core router|
#             +-----------+
#            /             \
#   +-----------+        +-----------+
#   |rack switch|        |rack switch|
#   +-----------+        +-----------+
#   | data node |        | data node |
#   +-----------+        +-----------+
#   | data node |        | data node |
#   +-----------+        +-----------+
#
#  2) the topology script gets a list of IPs as input, calculates each
#     network address, and prints '/network_address' for each one.

import netaddr
import sys

sys.argv.pop(0)                              # discard the script's own name from argv; we only want the IP addresses

netmask = '255.255.255.0'                    # set this to the netmask used in your environment; the example uses a /24

for ip in sys.argv:                          # loop over the list of datanode IPs
    address = '{0}/{1}'.format(ip, netmask)  # format the string as 'ip/netmask' so netaddr can parse it
    try:
        network_address = netaddr.IPNetwork(address).network  # calculate the network address
        print("/{0}".format(network_address))
    except:
        print("/rack-unknown")               # catch-all value if the network address cannot be calculated
Bash example
#!/bin/bash
# Here's a bash example to show just how simple these scripts can be.
# Assuming we have a flat network with everything on a single switch, we can fake a rack topology.
# This could occur in a lab environment where we have limited nodes, like 2-8 physical machines on an unmanaged switch.
# This may also apply to multiple virtual machines running on the same physical hardware.
# The number of machines isn't important; the point is that we are faking a network topology where there isn't one.
#
#        +----------+    +--------+
#        |jobtracker|    |datanode|
#        +----------+    +--------+
#               \        /
#  +--------+  +--------+  +--------+
#  |datanode|--| switch |--|datanode|
#  +--------+  +--------+  +--------+
#               /        \
#        +--------+    +--------+
#        |datanode|    |namenode|
#        +--------+    +--------+
#
# With this network topology, we are treating each host as a rack. This is done by taking the last octet
# of the datanode's IP and prepending it with the word '/rack-'. The advantage of doing this is that HDFS
# can create its 'off-rack' block copy.
# 1) 'echo $@' will echo all ARGV values to xargs.
# 2) 'xargs' will enforce that we print a single argv value per line.
# 3) 'awk' will split fields on dots and append the last field to the string '/rack-'. If awk
#    fails to split on four dots, it will still print '/rack-' plus the last field value.

echo $@ | xargs -n 1 | awk -F '.' '{print "/rack-"$NF}'
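For instance, given the arguments '10.1.1.7 10.1.1.8', the pipeline prints '/rack-7' and '/rack-8', so each host lands in its own single-node "rack" and HDFS can still place an off-rack replica.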
Notes:
fork: a fork, branch, or junction; also the utensil Westerners eat with, commonly paired with a knife.
In programming, fork is the process-forking function. Return value: a single successful call returns twice, returning 0 in the child process and the child's process id in the parent; on error it returns -1.
fork splits a running program into two (almost) identical processes, each starting a thread of execution from the same point in the code. The threads in both processes continue to run, as if two users had simultaneously launched two copies of the application.
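A minimal Python sketch of these return values (illustrative only and POSIX-only; note that Python's os.fork raises OSError on failure rather than returning -1 as C's fork does):

import os

pid = os.fork()   # one call, two returns: once in the parent, once in the child
if pid == 0:
    print('child: fork() returned 0; my pid is {0}'.format(os.getpid()))
else:
    print('parent: fork() returned the child pid {0}'.format(pid))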
argv
argv receives the arguments passed in from the command line; the program can then use them through argv.
The ARG in argc and argv stands for "argument" (argument counter and argument vector).
At least two parameters are passed to the main function, argc and argv:
the first is a count of the arguments supplied to the program;
the second is a pointer to an array of strings.
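A minimal Python sketch of the same idea, where sys.argv plays the role of C's argv and its length plays the role of argc:

import sys

print('argc = {0}'.format(len(sys.argv)))   # argument count, including the script name at index 0
for arg in sys.argv[1:]:                    # the command-line arguments proper, as strings
    print(arg)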