Guiding questions:
1. How do the Hadoop 2.x configuration files differ from Hadoop 1?
2. How exactly should each configuration file be modified?
1. The Hadoop 2.x release has a known quirk: the bundled libhadoop.so.1.0.0 is a 32-bit binary, so on a 64-bit OS Hadoop prints a WARN log at startup. This library provides native implementations (compression and the like) that noticeably improve Hadoop's performance; when it cannot be loaded, Hadoop falls back to pure-JVM code, which is much slower. The fix is to recompile Hadoop from source on the 64-bit machine.
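To confirm that the bundled native library really is 32-bit, you can inspect it with file (a quick sanity check; the path assumes the hadoop-2.4.1 layout used later in this post):

file /home/wukong/a_usr/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0
# a 32-bit build reports something like: ELF 32-bit LSB shared object, Intel 80386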
2. On the machine that will serve as the NameNode, download the Hadoop tarball with wget (or any other means) and unpack it into a local directory of your choice.
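For example, fetching 2.4.1 from the Apache archive (the mirror URL is one option among several; the target directory matches the paths used in the configs below):

cd /home/wukong/a_usr
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
tar -xzf hadoop-2.4.1.tar.gz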
3. The configuration files differ from Hadoop 1. Seven files under etc/hadoop/ need to be edited: hadoop-env.sh, yarn-env.sh, slaves, core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml. Each is shown below.
hadoop-env.sh:
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME}
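If ${JAVA_HOME} is not set in the environment the daemons are launched from (a common problem when they are started over SSH), it is safer to hard-code the JDK path; the path below is just an assumption, substitute your own:

export JAVA_HOME=/usr/java/jdk1.7.0_55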
yarn-env.sh:
# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx512m
# the default heap max is 1000m; my VM does not have that much memory, so I lowered it
slaves:
# List your slave nodes here, one hostname per line.
bd24
bd25
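These hostnames (and the ones used in the *-site.xml files below) must resolve on every node. If you are not running DNS, map them in /etc/hosts on each machine; the IP addresses below are made-up placeholders:

192.168.1.23  bd23
192.168.1.24  bd24
192.168.1.25  bd25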
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bd23:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.groups</name>
        <value>*</value>
    </property>
</configuration>
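hadoop.tmp.dir should point at an existing directory; create it before the first start (the path is taken from the config above):

mkdir -p /home/wukong/a_usr/hadoop-2.4.1/tmp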
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>bd23:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
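Likewise, create the directories referenced by dfs.namenode.name.dir and dfs.datanode.data.dir; the name directory is needed on the master and the data directory on each slave (paths from the config above):

mkdir -p /home/wukong/a_usr/hadoop-2.4.1/name    # on bd23
mkdir -p /home/wukong/a_usr/hadoop-2.4.1/data    # on bd24 and bd25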
mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>bd23:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>bd23:19888</value>
    </property>
</configuration>
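Note that the 2.4.1 tarball ships only mapred-site.xml.template, so copy it before editing. Also, the two jobhistory addresses only matter once the JobHistory Server is actually running; it is not launched by the HDFS start script in step 6, so start it separately with the standard sbin script:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
./sbin/mr-jobhistory-daemon.sh start historyserver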
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>bd23:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>bd23:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>bd23:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>bd23:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>bd23:8088</value>
    </property>
</configuration>
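HDFS and YARN are started by separate scripts; step 6 below only brings up HDFS. To bring up the ResourceManager and NodeManagers and then verify them in the web UI configured above:

./sbin/start-yarn.sh
# ResourceManager web UI: http://bd23:8088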
4. Copy the hadoop directory to every host.
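A sketch using scp with the hostnames from the slaves file (this assumes the wukong user and the same directory layout on every node):

scp -r /home/wukong/a_usr/hadoop-2.4.1 wukong@bd24:/home/wukong/a_usr/
scp -r /home/wukong/a_usr/hadoop-2.4.1 wukong@bd25:/home/wukong/a_usr/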
5. Format the NameNode. First cd into the hadoop-2.4.1 directory, then run:
./bin/hdfs namenode -format
Output like the following indicates success:
14/07/31 13:58:30 INFO common.Storage: Storage directory /home/wukong/a_usr/hadoop-2.4.1/name has been successfully formatted.
6. Start HDFS:
./sbin/start-dfs.sh
Output like the following indicates success:
Starting namenodes on [bd23]
bd23: starting namenode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-namenode-bd23.out
bd24: starting datanode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-datanode-bd24.out
bd25: starting datanode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-datanode-bd25.out
Starting secondary namenodes [bd23]
bd23: starting secondarynamenode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-secondarynamenode-bd23.out
7. Use jps to check which Java processes are running on each machine.
Normally the master should show NameNode and SecondaryNameNode, and each slave should show DataNode.
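Beyond jps, you can confirm that both DataNodes registered with the NameNode using the standard HDFS admin report:

./bin/hdfs dfsadmin -report
# bd24 and bd25 should appear in the list of live datanodes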