Summary of the various reasons why jps shows no processes after Hadoop installs successfully
During Hadoop installation we have already fixed a number of common errors, but the tricky cases are the ones where nothing reports any error at all, which leaves us puzzled. This post summarizes the causes:
1. The hostname does not match the configuration files
2. The hostname contains special characters
3. XML configuration file errors
4. Permission problems
5. JDK not installed or misconfigured
Question: what causes this behavior?
1. The hostname does not match the configuration files
Startup reports success, but the 5 processes are not all visible:
hadoop@node1:~/hadoop$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
node1: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node1.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
node3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
node2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
hadoop@node1:~/hadoop$ jps
16993 SecondaryNameNode
17210 Jps
The configuration and logs are as follows:
hadoop@node1:~/hadoop/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/tmp</value>
<description></description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://masternode:54310</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masternode:54311</value>
<description></description>
</property>
</configuration>
The jobtracker log file shows:
2006-03-11 23:54:44,348 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to masternode/122.72.28.136:54311 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:44,353 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at node1/192.168.10.237
************************************************************/
The namenode log file shows:
2006-03-11 23:54:37,009 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to masternode/122.72.28.136:54310 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:37,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.237
************************************************************/
The hosts setup is as follows:
hadoop@node1:~/hadoop/conf$ cat masters
node1
hadoop@node1:~/hadoop/conf$ cat slaves
node2
node3
hadoop@node1:~/hadoop/conf$ cat /etc/hosts
127.0.0.1 localhost
192.168.10.237 node1.node1 node1
192.168.10.238 node2
192.168.10.239 node3
Cause: the hostname in the config files does not match /etc/hosts. The configs point at masternode, which resolves to 122.72.28.136, an address not assigned to any local interface, so the NameNode and JobTracker fail to bind.
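To confirm this kind of mismatch before digging through the logs, compare what the configured hostname resolves to against the addresses the machine actually owns (a minimal sketch; masternode is the hostname from the configs above):
# Which address does the configured hostname resolve to?
getent hosts masternode
# Which addresses are actually assigned to this machine?
ip addr show | grep 'inet '
# If the resolved address is not in the local list, the RPC server
# cannot bind to it and dies with "Cannot assign requested address".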
Solution:
1. Change the hostname to masternode. For the details see: changing the hostname on Ubuntu.
After the change:
hadoop@node1:~/hadoop/conf$ cat masters
masternode
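For reference, one way to carry out the change on Ubuntu (a sketch; on systemd-based releases "sudo hostnamectl set-hostname masternode" covers the first two steps in one go):
# Set the hostname for the running session
sudo hostname masternode
# Make it survive a reboot
echo masternode | sudo tee /etc/hostname
# Point the name at the master's LAN address (192.168.10.237 in this example)
echo "192.168.10.237 masternode" | sudo tee -a /etc/hosts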
---------------------------------------------------------------------------------------------------------------------------
2. The hostname contains special characters
(This error summary comes from the official About Cloud QQ group 39327136, worked out jointly by hadoop jackysparrow (61214484) and Karmic Koala (2448096355).)
The error:
start-all.sh reports no errors, but neither the master nor the slaves start any services.
Configuration file: core-site.xml
Cause of the error:
The hostname contains a dot (".").
Solution:
Change the hostname again, this time avoiding special characters. After the change (any other dot-free name works too):
hadoop@node1:~/hadoop/conf$ cat masters
centOS64
---------------------------------------------------------------------------------------------------------------------------------------------------
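A quick sanity check for this class of problem (a sketch; the pattern flags anything outside letters, digits, and hyphens, which is all a hostname should contain):
hostname | grep -q '[^A-Za-z0-9-]' && echo "hostname contains special characters"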
3. XML configuration file errors
jps does not show the 5 processes.
HDFS was formatted, yet only one process appears and everything else just throws errors.
The most likely cause here is a malformed XML file: either a tag is left unclosed, or stray characters sit where they should not. Here are a few reproduced errors.
Error 1:
starting namenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-namenode-aboutyun.out
hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: secondarynamenode running as process 3170. Stop it first.
jobtracker running as process 3253. Stop it first.
localhost: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-aboutyun.out
The output above is caused by unmatched tags.
Error 2:
http://www.aboutyun.com/data/attachment/album/201402/22/183004e3rs9aesttp4aape.png
hdfs-site.xml:14:17: Content is not allowed in trailing section.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: hdfs-site.xml:14:17: Content is not allowed in trailing section.
This error is not an unmatched tag; it is caused by extra characters outside the XML, in this case a tiny dash sitting between the two arrows in the screenshot.
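Both kinds of mistake can be caught before starting the cluster by running the configs through an XML parser (a sketch, assuming xmllint from libxml2 is installed):
# xmllint reports the line and column of the first syntax error,
# in the same format as the messages above
for f in core-site.xml hdfs-site.xml mapred-site.xml; do
  xmllint --noout "$f" || echo "$f is malformed"
done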
4. Watch out for permissions: as the screenshot below shows, if you created a dedicated user for the Hadoop cluster, the ownership of Hadoop's temporary directory must match that user; you can inspect file permissions with 'll' (ls -l).
http://www.aboutyun.com/data/attachment/forum/201405/12/184139ralal2prfpqrof8o.png
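A minimal sketch of the fix, assuming the daemons run as user hadoop and hadoop.tmp.dir is /home/hadoop/hadoop/tmp as configured above:
# Inspect the current owner and permissions
ls -ld /home/hadoop/hadoop/tmp
# Hand the directory to the hadoop user if it belongs to someone else
sudo chown -R hadoop:hadoop /home/hadoop/hadoop/tmp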
---------------------------------------------------------------------------------------------------------------------------------------------------
5. JDK not installed
This error is not very common.
If you do not know how to install it, see:
Installing the Java JDK on Linux (Ubuntu), setting environment variables, and a small test program
The short version:
1. Download a JDK that matches your system (JDKs come in Linux and Windows builds, 32-bit and 64-bit).
2. Configure the environment variables.
3. Run java -version to check the version; if you see the error from the screenshot above, the environment variables are wrong.
Things to note:
1. You do not need to uninstall an older version first.
2. Add the bin directory to the path, not the JDK root directory.
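For reference, a minimal sketch of step 2, assuming the JDK was unpacked to /usr/lib/jvm/jdk1.7.0 (adjust the path to your install); append these lines to ~/.bashrc and then run source ~/.bashrc:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH   # note: the bin directory, not the JDK root
# Verify:
java -version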
Please credit the source when reposting: http://www.aboutyun.com/thread-7118-1-1.html
Hadoop installation and startup really do have a lot of problems; I hardly know where to start.
Q: Running jps on my cluster takes a very long time before anything is displayed. What could be the reason?
Q: On Ubuntu, Hadoop runs with localhost but fails with the IP or hostname. I am using an Alibaba Cloud server, Ubuntu 16.04.1 (64-bit), Hadoop 1.2.1, JDK 1.8. With localhost in core-site.xml and mapred-site.xml, everything runs fine:
https://yqfile.alicdn.com/a7f72d9a678d45412021c8a4c9798e176e9016d4.png
https://yqfile.alicdn.com/e039be0f62f6010357200b4aa77be00aac319419.png
But as soon as I change it to the hostname, it fails:
https://yqfile.alicdn.com/8b6c31712e44e209dd1c95cf6bca210cef9c6015.png
https://yqfile.alicdn.com/c88972cbed93cb267b7748bae05801f66b10c656.png
The log reports that the address cannot be bound. I checked, and port 9000 is not occupied either.
https://yqfile.alicdn.com/88d82c7e3e1f6ce41e4fca6d777d28191e6eb80b.jpeg
I have also tried changing the hostname; the hosts file is configured as below, and passwordless ssh to zxy works.
https://yqfile.alicdn.com/5b1232fbace1e47f9dccad0f9e578c139cce7562.png
Could some expert give me a pointer?
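The symptom matches cause 1 above: on a cloud server the public IP is typically NATed and not assigned to any local interface, so the daemons cannot bind to it. A diagnostic sketch (zxy is the hostname from the question):
# What does the hostname resolve to, and which addresses does the VM own?
getent hosts zxy
ip addr show | grep 'inet '
# If zxy resolves to the public IP, remap it to the private (LAN) address
# in /etc/hosts, e.g.: <private-ip> zxy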
Q: After Hadoop starts, jps succeeds the first time and the web page on port 50070 is reachable, but after a refresh the page stops responding and then jps reports errors.