Guiding questions:
1. What software do you need to prepare before installing Spark?
2. How do you install and configure Hadoop on a single machine?
3. How do you install Spark on a single machine?
This article describes deploying Apache Spark 1.6.0 on a single machine. The steps are essentially the same as for a cluster deployment, only without the master and slaves file configuration. Installing just Scala and Spark is enough to use Spark on a single machine, but if you want to use HDFS you also need Hadoop and the JDK, so it is recommended to install and configure all of them.
0. Preparing to install Spark
The official Spark documentation at http://spark.apache.org/docs/latest/ says:
Spark runs on Java 7+, Python 2.6+ and R 3.1+. For the Scala API, Spark 1.6.0 uses Scala 2.10. You will need to use a compatible Scala version (2.10.x).
My machine runs Ubuntu 14.04.4 LTS. Besides that, the following still need to be installed and configured: the JDK, SSH, Hadoop 2.6.0, Scala 2.10.6, and Spark 1.6.0, as covered in the steps below.
1. Install the JDK
Extract the JDK archive to any directory:
[mw_shl_code=bash,true]$ cd /home/tom
$ tar -xzvf jdk-8u73-linux-x64.tar.gz
$ sudo vim /etc/profile[/mw_shl_code]
[mw_shl_code=bash,true]export JAVA_HOME=/home/tom/jdk1.8.0_73/
export JRE_HOME=/home/tom/jdk1.8.0_73/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH[/mw_shl_code]
Save the file and reload /etc/profile:
[mw_shl_code=bash,true]$ source /etc/profile[/mw_shl_code]
Check that it worked:
[mw_shl_code=bash,true]$ java -version[/mw_shl_code]
2. Configure ssh localhost
Make sure ssh is installed:
[mw_shl_code=bash,true]$ sudo apt-get update
$ sudo apt-get install openssh-server
$ sudo /etc/init.d/ssh start[/mw_shl_code]
Generate a key pair and append the public key to authorized_keys:
[mw_shl_code=bash,true]$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys[/mw_shl_code]
If you have already generated a key pair, only the last two commands are needed. Test ssh localhost:
[mw_shl_code=bash,true]$ ssh localhost
$ exit[/mw_shl_code]
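To confirm that the key-based login really works without a password prompt, here is a quick non-interactive check (just a sketch; BatchMode makes ssh fail instead of asking for a password):
[mw_shl_code=bash,true]$ ssh -o BatchMode=yes localhost 'echo passwordless ssh OK'[/mw_shl_code]
If this prints the message without prompting, the key setup above is in place.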
3. Install Hadoop 2.6.0
Extract hadoop-2.6.0 to any directory:
[mw_shl_code=bash,true]$ cd /home/tom
$ wget http://apache.claz.org/hadoop/co ... hadoop-2.6.0.tar.gz
$ tar -xzvf hadoop-2.6.0.tar.gz[/mw_shl_code]
Edit the /etc/profile file and append the Hadoop environment variables at the end:
[mw_shl_code=bash,true]export HADOOP_HOME=/home/tom/hadoop-2.6.0
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin[/mw_shl_code]
Edit the $HADOOP_HOME/etc/hadoop/hadoop-env.sh file:
[mw_shl_code=bash,true]$ vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh[/mw_shl_code]
Add at the end:
[mw_shl_code=bash,true]export JAVA_HOME=/home/tom/jdk1.8.0_73/[/mw_shl_code]
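If you prefer not to open an editor, the same line can be appended from the shell instead (a one-line sketch; it assumes the JDK path used above and that the HADOOP_HOME exported in /etc/profile is already in effect):
[mw_shl_code=bash,true]$ echo 'export JAVA_HOME=/home/tom/jdk1.8.0_73/' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh[/mw_shl_code]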
Edit the configuration files:
[mw_shl_code=bash,true]$ cd $HADOOP_HOME/etc/hadoop[/mw_shl_code]
Edit core-site.xml:
[mw_shl_code=bash,true]<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>[/mw_shl_code]
Edit hdfs-site.xml:
[mw_shl_code=bash,true]<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/tom/hadoopdata/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/tom/hadoopdata/hdfs/datanode</value>
</property>
</configuration>[/mw_shl_code]
The first property is the HDFS replication factor; a single replica is enough on one machine. The other two are the storage directories for the namenode and datanode.
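It helps to create these two directories before formatting HDFS, so that the paths configured above actually exist (a small sketch matching the values in hdfs-site.xml):
[mw_shl_code=bash,true]$ mkdir -p /home/tom/hadoopdata/hdfs/namenode
$ mkdir -p /home/tom/hadoopdata/hdfs/datanode[/mw_shl_code]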
Edit mapred-site.xml:
[mw_shl_code=bash,true]<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>[/mw_shl_code]
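Note that the Hadoop 2.6.0 distribution ships only mapred-site.xml.template, not mapred-site.xml itself; if the file was missing, create it from the template before adding the property above (assuming the current directory is still $HADOOP_HOME/etc/hadoop):
[mw_shl_code=bash,true]$ cp mapred-site.xml.template mapred-site.xml[/mw_shl_code]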
Edit yarn-site.xml:
[mw_shl_code=bash,true]<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>[/mw_shl_code]
Format the HDFS namenode to initialize Hadoop:
[mw_shl_code=bash,true]$ hdfs namenode -format[/mw_shl_code]
Start:
[mw_shl_code=bash,true]$ $HADOOP_HOME/sbin/start-all.sh[/mw_shl_code]
Stop:
[mw_shl_code=bash,true]$ $HADOOP_HOME/sbin/stop-all.sh[/mw_shl_code]
Hadoop's web UIs listen on the following ports:
- port 8088: cluster and all applications
- port 50070: Hadoop NameNode
- port 50090: Secondary NameNode
- port 50075: DataNode
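These pages can be opened in a browser once Hadoop is running; a minimal command-line probe of the NameNode UI (assuming the default port 50070) would be:
[mw_shl_code=bash,true]$ curl -sI http://localhost:50070 | head -n 1[/mw_shl_code]
A response of HTTP/1.1 200 OK means the NameNode web UI is up.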
Once Hadoop is running, you can check the processes with the jps command; the output should look like this:
[mw_shl_code=bash,true]10057 Jps
9611 ResourceManager
9451 SecondaryNameNode
9260 DataNode
9102 NameNode
9743 NodeManager[/mw_shl_code]
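With all the daemons up, a quick HDFS smoke test can be run (just a sketch; /user/tom and the uploaded file are only examples):
[mw_shl_code=bash,true]$ hdfs dfs -mkdir -p /user/tom
$ hdfs dfs -put /etc/profile /user/tom/
$ hdfs dfs -ls /user/tom[/mw_shl_code]
The listing should show the uploaded profile file.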
4. Install Scala
Extract the Scala archive to any directory:
[mw_shl_code=bash,true]$ cd /home/tom
$ tar -xzvf scala-2.10.6.tgz
$ sudo vim /etc/profile[/mw_shl_code]
Add the environment variables at the end of /etc/profile:
[mw_shl_code=bash,true]export SCALA_HOME=/home/tom/scala-2.10.6
export PATH=$SCALA_HOME/bin:$PATH[/mw_shl_code]
Save the file and reload /etc/profile:
[mw_shl_code=bash,true]$ source /etc/profile[/mw_shl_code]
Check that it worked:
[mw_shl_code=bash,true]$ scala -version[/mw_shl_code]
5. Install Spark
Extract the Spark archive to any directory:
[mw_shl_code=bash,true]$ cd /home/tom
$ tar -xzvf spark-1.6.0-bin-hadoop2.6.tgz
$ mv spark-1.6.0-bin-hadoop2.6 spark-1.6.0
$ sudo vim /etc/profile[/mw_shl_code]
Add the environment variables at the end of /etc/profile:
[mw_shl_code=bash,true]export SPARK_HOME=/home/tom/spark-1.6.0
export PATH=$SPARK_HOME/bin:$PATH[/mw_shl_code]
Save the file and reload /etc/profile:
[mw_shl_code=bash,true]$ source /etc/profile[/mw_shl_code]
In the conf directory, copy spark-env.sh.template and rename it to spark-env.sh:
[mw_shl_code=bash,true]$ cp spark-env.sh.template spark-env.sh
$ vim spark-env.sh[/mw_shl_code]
Add the following to spark-env.sh:
[mw_shl_code=bash,true]export JAVA_HOME=/home/tom/jdk1.8.0_73/
export SCALA_HOME=/home/tom/scala-2.10.6
export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=4G[/mw_shl_code]
Start:
[mw_shl_code=bash,true]$ $SPARK_HOME/sbin/start-all.sh[/mw_shl_code]
Stop:
[mw_shl_code=bash,true]$ $SPARK_HOME/sbin/stop-all.sh[/mw_shl_code]
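After start-all.sh you can confirm that the standalone Master and Worker daemons came up (a quick check; the master web UI defaults to port 8080):
[mw_shl_code=bash,true]$ jps | grep -E 'Master|Worker'
$ curl -sI http://localhost:8080 | head -n 1[/mw_shl_code]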
Test whether Spark was installed successfully:
[mw_shl_code=bash,true]$ $SPARK_HOME/bin/run-example SparkPi[/mw_shl_code]
The result:
[mw_shl_code=bash,true]Pi is roughly 3.14716[/mw_shl_code]
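To check that Spark can also read from HDFS, here is a short spark-shell session (just a sketch; it assumes the /user/tom/profile file uploaded in the HDFS test above and the fs.default.name configured earlier):
[mw_shl_code=bash,true]$ $SPARK_HOME/bin/spark-shell
scala> sc.textFile("hdfs://localhost:9000/user/tom/profile").count()
scala> :quit[/mw_shl_code]
The count() call should return the number of lines in the file.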