Step 1, check the environment:
1. Machines & OS: x86_64
2. Edit the hosts configuration:
vi /etc/hosts
192.168.30.58 n58
192.168.30.54 n54
192.168.30.56 n56
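To confirm the new names resolve on each machine (these check commands are mine, not from the original post):
ping -c 1 n54
ping -c 1 n56
ping -c 1 n58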
Step 2, prepare the software:
wget "http://download.oracle.com/otn-pub/java/jdk/7u45-b18/jdk-7u45-linux-x64.rpm?AuthParam=1385452720_34d05e9cb79cad4eff77fa95d9461298" -O jdk-7u45-linux-x64.rpm
wget http://ftp.riken.jp/net/apache/h ... hadoop-2.4.1.tar.gz
tar zxfv hadoop-2.4.1.tar.gz
wget http://ftp.riken.jp/net/apache/h ... -hadoop2-bin.tar.gz
tar zxfv hbase-0.98.5-hadoop2-bin.tar.gz
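The post downloads the JDK RPM but never shows installing it. On each machine, roughly:
rpm -ivh jdk-7u45-linux-x64.rpm
The RPM unpacks under /usr/java/ (e.g. /usr/java/jdk1.7.0_45); adjust the JAVA_HOME values used below if the actual directory name differs.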
Step 3: set up SSH
On every machine, enable passwordless ssh login:
58:
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub root@192.168.30.56:~/.ssh/authorized_keys58
scp ~/.ssh/id_rsa.pub root@192.168.30.54:~/.ssh/authorized_keys58
56:
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub root@192.168.30.58:~/.ssh/authorized_keys56
scp ~/.ssh/id_rsa.pub root@192.168.30.54:~/.ssh/authorized_keys56
54:
ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub root@192.168.30.58:~/.ssh/authorized_keys54
scp ~/.ssh/id_rsa.pub root@192.168.30.56:~/.ssh/authorized_keys54
Then, on each machine, merge the received keys:
58:
cat ~/.ssh/authorized_keys54>>~/.ssh/authorized_keys
cat ~/.ssh/authorized_keys56>>~/.ssh/authorized_keys
56:
cat ~/.ssh/authorized_keys58>>~/.ssh/authorized_keys
cat ~/.ssh/authorized_keys54>>~/.ssh/authorized_keys
54:
cat ~/.ssh/authorized_keys58>>~/.ssh/authorized_keys
cat ~/.ssh/authorized_keys56>>~/.ssh/authorized_keys
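If ssh still asks for a password after this, check file permissions first; sshd ignores an authorized_keys file that is group- or world-writable. On each machine:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys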
Test:
ssh 192.168.30.54
ssh 192.168.30.58
ssh 192.168.30.56
Step 4: edit the configuration files:
mkdir -p /home/hduser/dfs/name
mkdir -p /home/hduser/dfs/data
mkdir -p /home/hduser/tmp
Edit the relevant files.
Seven configuration files are involved here:
$HADOOP_HOME/etc/hadoop/hadoop-env.sh
$HADOOP_HOME/etc/hadoop/yarn-env.sh
$HADOOP_HOME/etc/hadoop/slaves
$HADOOP_HOME/etc/hadoop/core-site.xml
$HADOOP_HOME/etc/hadoop/hdfs-site.xml
$HADOOP_HOME/etc/hadoop/mapred-site.xml
$HADOOP_HOME/etc/hadoop/yarn-site.xml
Some of these do not exist by default; copy the corresponding template file to create them.
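For example, mapred-site.xml ships only as a template in Hadoop 2.4.1:
cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml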
Config file 1: hadoop-env.sh
Set the JAVA_HOME value (export JAVA_HOME=/usr/java/jdk1.7.0)
Config file 2: yarn-env.sh
Set the JAVA_HOME value (export JAVA_HOME=/usr/java/jdk1.7.0)
Config file 3: slaves (this file lists all the slave nodes)
Write the following:
n56
n58
Config file 4: core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://n54:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hduser/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
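To verify the value is picked up once Hadoop is in place (a check of mine, not from the original post):
bin/hdfs getconf -confKey fs.defaultFS
This should print hdfs://n54:9000.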
Config file 5: hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>n54:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/dfs/data</value>
  </property>
  <property>
    <!-- slaves lists only two DataNodes (n56, n58); a value of 3 will leave blocks under-replicated on this cluster -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
Config file 6: mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>n54:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>n54:19888</value>
  </property>
</configuration>
Config file 7: yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>n54:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>n54:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>n54:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>n54:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>n54:8088</value>
  </property>
</configuration>
Edit the HBase configuration:
$HBASE_HOME/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hduser/hbasetmp</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://n54:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>n54,n56,n58</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>${hbase.tmp.dir}/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
$HBASE_HOME/conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0
export HBASE_LOG_DIR=$HADOOP_HOME/logs
export HBASE_MANAGES_ZK=true (this uses HBase's bundled ZooKeeper, so there is no need to install ZooKeeper separately)
$HBASE_HOME/conf/regionservers
n54
n56
n58
Append the following to /etc/profile, then run source /etc/profile so it takes effect:
export JAVA_HOME=/usr/java/jdk1.7.0
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/hdhbase/hadoop-2.4.1
export HBASE_HOME=/hdhbase/hbase-0.98.5
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$HBASE_HOME/bin:$PATH
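A quick way to confirm the variables took effect in a fresh shell (standard commands, not from the original post):
java -version
hadoop version
hbase version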
Step 5: copy everything to the slaves
Copy the environment variables:
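The original post omits the command here; a minimal sketch, assuming the variables were appended to /etc/profile as above:
scp /etc/profile root@n56:/etc/profile
scp /etc/profile root@n58:/etc/profile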
Copy hadoop:
scp -r /hdhbase/hadoop-2.4.1 root@n56:/hdhbase
scp -r /hdhbase/hadoop-2.4.1 root@n58:/hdhbase
Copy the dfs directories:
scp -r /home/hduser root@n56:/home
scp -r /home/hduser root@n58:/home
Copy java:
scp -r /usr/java root@n56:/usr
scp -r /usr/java root@n58:/usr
Copy hbase:
scp -r /hdhbase/hbase-0.98.5 root@n56:/hdhbase
scp -r /hdhbase/hbase-0.98.5 root@n58:/hdhbase
Step 6: start hadoop, then start hbase
The hadoop commands are as follows:
1. Format:
bin/hdfs namenode -format -clusterid n54 (when formatting the cluster, be sure to add -clusterid n54)
2. Start and stop:
On the namenode, start hdfs and yarn:
sbin/start-dfs.sh
sbin/start-yarn.sh
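mapred-site.xml above points the job history service at n54:10020/19888, but neither script starts that daemon; if you want it, on n54:
sbin/mr-jobhistory-daemon.sh start historyserver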
Stop: sbin/stop-all.sh
Check the cluster status:
bin/hdfs dfsadmin -report
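To double-check which daemons came up on each node (the expected layout follows from the configs above):
jps
On n54 expect NameNode, SecondaryNameNode, and ResourceManager; on n56/n58 expect DataNode and NodeManager. The registered NodeManagers can also be listed with:
bin/yarn node -list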
The hbase start command is as follows:
$HBASE_HOME/bin/start-hbase.sh
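A quick sanity check, not in the original post: open the HBase shell and ask for the cluster status.
$HBASE_HOME/bin/hbase shell
status
With the regionservers file above, status should report three region servers.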
From about云 (QQ group 39327136)