About云-梭伦科技

nettman's space: https://aboutyun.com/?21


Shell scripts for automated Hadoop installation

Popularity 4 · 2555 reads · 2015-1-1 20:48

I've put together a shell script that automates Hadoop installation; it is hosted on GitHub: hadoop-install

hadoop-install

Among the scripts in hadoop-install, all-in-one-install.sh installs HDFS, Hive, YARN, ZooKeeper, and HBase on a single node; I wrote it so I could debug MapReduce, Hive, and HBase on my own machine (Fedora 19). cluster-install.sh installs a Hadoop cluster across multiple nodes, and so far it likewise automates the installation of HDFS, Hive, YARN, ZooKeeper, and HBase.

Script snippets

The IDH install scripts contain some well-written shell snippets, which I have excerpted below for your reference.

Detect the OS distribution and version
( grep -i "CentOS" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=centos
( grep -i "Red[[:blank:]]*Hat[[:blank:]]*Enterprise[[:blank:]]*Linux" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=rhel
( grep -i "Oracle[[:blank:]]*Linux" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=oel
( grep -i "Asianux[[:blank:]]*Server" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=an
( grep -i "SUSE[[:blank:]]*Linux[[:blank:]]*Enterprise[[:blank:]]*Server" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=sles
( grep -i "Fedora" /etc/issue > /dev/null ) && OS_DISTRIBUTOR=fedora

major_revision=`grep -oP '\d+' /etc/issue | sed -n "1,1p"`
minor_revision=`grep -oP '\d+' /etc/issue | sed -n "2,2p"`
OS_RELEASE="$major_revision.$minor_revision"
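The version-parsing tail of the snippet is easy to try on its own. Below is a minimal sketch that runs the same grep/sed pipeline against a sample /etc/issue-style string instead of the real file (the CentOS string is made up for the demo; GNU grep is assumed for -P):

```shell
# Parse "major.minor" out of a sample /etc/issue line (the string is invented)
issue="CentOS release 6.5 (Final)"
major=$(echo "$issue" | grep -oP '\d+' | sed -n "1,1p")   # first number found
minor=$(echo "$issue" | grep -oP '\d+' | sed -n "2,2p")   # second number found
echo "$major.$minor"   # prints 6.5
```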
Change the root password
echo 'redhat'|passwd root --stdin
Set DNS nameservers
# Set up nameservers.
# http://ithelpblog.com/os/linux/redhat/centos-redhat/howto-fix-couldnt-resolve-host-on-centos-redhat-rhel-fedora/
# http://stackoverflow.com/a/850731/1486325
echo "nameserver 8.8.8.8" | tee -a /etc/resolv.conf
echo "nameserver 8.8.4.4" | tee -a /etc/resolv.conf
Set the system time zone
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Update the hosts file
cat > /etc/hosts <<EOF
127.0.0.1 localhost
192.168.56.121 cdh1
192.168.56.122 cdh2
192.168.56.123 cdh3
EOF
Remove from file b the lines that match file a
grep -vf a b >result.log
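A quick way to see what -vf does, using throwaway files in /tmp (names and contents are arbitrary): each line of a is treated as a pattern, and only the lines of b matching none of them survive.

```shell
# a holds the patterns to exclude; b is the full list (demo files only)
printf 'cdh1\ncdh2\n' > /tmp/a
printf 'cdh1\ncdh2\ncdh3\n' > /tmp/b
grep -vf /tmp/a /tmp/b    # prints only: cdh3
```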
Raise the fs.file-max limit
echo -e "Global file limit ..."
rst=`grep "^fs.file-max" /etc/sysctl.conf`
if [ "x$rst" = "x" ] ; then
    echo "fs.file-max = 727680" >> /etc/sysctl.conf || exit $?
else
    sed -i "s:^fs.file-max.*:fs.file-max = 727680:g" /etc/sysctl.conf
fi
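The append-or-replace pattern above is worth noting because it makes the edit idempotent: running it twice leaves exactly one fs.file-max line. Here is the same logic exercised on a scratch copy so it never touches the real /etc/sysctl.conf (the file path and seed line are invented; GNU sed is assumed for -i):

```shell
conf=/tmp/sysctl.conf.demo
echo "kernel.pid_max = 32768" > "$conf"     # seed a file that lacks the key
rst=$(grep "^fs.file-max" "$conf")
if [ "x$rst" = "x" ] ; then
    echo "fs.file-max = 727680" >> "$conf"  # key absent: append it
else
    sed -i "s:^fs.file-max.*:fs.file-max = 727680:g" "$conf"  # key present: rewrite in place
fi
grep "^fs.file-max" "$conf"   # prints: fs.file-max = 727680
```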
Generate an SSH key pair
[ ! -d ~/.ssh ] && ( mkdir ~/.ssh ) && ( chmod 600 ~/.ssh )
yes|ssh-keygen -f ~/.ssh/id_rsa -t rsa -N "" && ( chmod 600 ~/.ssh/id_rsa.pub )
Set up passwordless SSH login (expect script)
set timeout 20
set host [lindex $argv 0]
set password [lindex $argv 1]
set pubkey [exec cat /root/.ssh/id_rsa.pub]
set localsh [exec cat ./config_ssh_local.sh]

#spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
spawn ssh root@$host "
    umask 022
    mkdir -p /root/.ssh
    echo \'$pubkey\' > /root/.ssh/authorized_keys
    echo \'$localsh\' > /root/.ssh/config_ssh_local.sh
    cd /root/.ssh/; sh config_ssh_local.sh
"
expect {
    timeout exit
    yes/no  {send "yes\r";exp_continue}
    assword {send "$password\r"}
}
expect eof
#interact
Configure JAVA_HOME
### JAVA_HOME ###
if [ -f ~/.bashrc ] ; then
    sed -i '/^export[[:space:]]\{1,\}JAVA_HOME[[:space:]]\{0,\}=/d' ~/.bashrc
    sed -i '/^export[[:space:]]\{1,\}CLASSPATH[[:space:]]\{0,\}=/d' ~/.bashrc
    sed -i '/^export[[:space:]]\{1,\}PATH[[:space:]]\{0,\}=/d' ~/.bashrc
fi
echo "" >>~/.bashrc
echo "export JAVA_HOME=/usr/java/latest" >>~/.bashrc
echo "export CLASSPATH=.:\$JAVA_HOME/lib/tools.jar:\$JAVA_HOME/lib/dt.jar" >>~/.bashrc
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >>~/.bashrc

alternatives --install /usr/bin/java java /usr/java/latest 5
alternatives --set java /usr/java/latest
source ~/.bashrc
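The sed deletions at the top are what make this snippet safe to re-run: any earlier export lines are dropped before fresh ones are appended. A small sketch of that behavior on a scratch file standing in for ~/.bashrc (the path and contents are invented; GNU sed is assumed for -i):

```shell
rc=/tmp/bashrc.demo
printf 'export JAVA_HOME=/old/jdk\nalias ll="ls -l"\n' > "$rc"
sed -i '/^export[[:space:]]\{1,\}JAVA_HOME[[:space:]]\{0,\}=/d' "$rc"  # drop the stale JAVA_HOME line
cat "$rc"   # only the alias line survives
```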
Format the cluster (HDFS NameNode)
su -s /bin/bash hdfs -c 'yes Y | hadoop namenode -format >> /tmp/format.log 2>&1'
Create Hadoop directories
su -s /bin/bash hdfs -c "hadoop fs -chmod a+rw /"

while read dir user group perm
do
    # -p creates parent directories; the original had -R, which is not a valid mkdir flag
    su -s /bin/bash hdfs -c "hadoop fs -mkdir -p $dir && hadoop fs -chmod -R $perm $dir && hadoop fs -chown -R $user:$group $dir"
    echo "."
done << EOF
/tmp hdfs hadoop 1777
/tmp/hadoop-yarn mapred mapred 777
/var hdfs hadoop 755
/var/log yarn mapred 1775
/var/log/hadoop-yarn/apps yarn mapred 1777
/hbase hbase hadoop 755
/user hdfs hadoop 777
/user/history mapred hadoop 1777
/user/root root hadoop 777
/user/hive hive hadoop 777
EOF
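The while read ... << EOF loop is a handy way to drive one command over a small inline table. It can be tried without a running HDFS by swapping the hadoop commands for a plain echo (the output format below is my own choice for the demo):

```shell
# Each heredoc line is split into four whitespace-separated fields
while read dir user group perm
do
    echo "$dir -> owner $user:$group, mode $perm"
done << EOF
/tmp hdfs hadoop 1777
/hbase hbase hadoop 755
EOF
```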
Install and initialize PostgreSQL for Hive
yum install postgresql-server postgresql-jdbc -y >/dev/null
chkconfig postgresql on
rm -rf /var/lib/pgsql/data
rm -rf /var/run/postgresql/.s.PGSQL.5432
service postgresql initdb

sed -i "s/max_connections = 100/max_connections = 600/" /var/lib/pgsql/data/postgresql.conf
sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /var/lib/pgsql/data/postgresql.conf
sed -i "s/shared_buffers = 32MB/shared_buffers = 256MB/" /var/lib/pgsql/data/postgresql.conf
sed -i "s/127.0.0.1\/32/0.0.0.0\/0/" /var/lib/pgsql/data/pg_hba.conf
sudo cat /var/lib/pgsql/data/postgresql.conf | grep -e listen -e standard_conforming_strings

rm -rf /usr/lib/hive/lib/postgresql-jdbc.jar
ln -s /usr/share/java/postgresql-jdbc.jar /usr/lib/hive/lib/postgresql-jdbc.jar

su -c "cd ; /usr/bin/pg_ctl start -w -m fast -D /var/lib/pgsql/data" postgres
su -c "cd ; /usr/bin/psql --command \"create user hiveuser with password 'redhat'; \" " postgres
su -c "cd ; /usr/bin/psql --command \"CREATE DATABASE metastore owner=hiveuser;\" " postgres
su -c "cd ; /usr/bin/psql --command \"GRANT ALL privileges ON DATABASE metastore TO hiveuser;\" " postgres
su -c "cd ; /usr/bin/psql -U hiveuser -d metastore -f /usr/lib/hive/scripts/metastore/upgrade/postgres/hive-schema-0.10.0.postgres.sql" postgres
su -c "cd ; /usr/bin/pg_ctl restart -w -m fast -D /var/lib/pgsql/data" postgres
Summary

For more scripts, see hadoop-install on GitHub; feel free to download, use, and modify the code!


