1. Hostname does not match the configuration files
Startup reports success, but the five processes are not all visible:
hadoop@node1:~/hadoop$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
node1: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node1.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
node3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
node2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
hadoop@node1:~/hadoop$ jps
16993 SecondaryNameNode
17210 Jps
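On this master, a successful start would leave NameNode, SecondaryNameNode, and JobTracker running (the DataNode and TaskTracker daemons live on node2 and node3); here only the SecondaryNameNode came up.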
The configuration and logs are as follows:
hadoop@node1:~/hadoop/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/tmp</value>
<description></description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://masternode:54310</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masternode:54311</value>
<description></description>
</property>
</configuration>
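The values shown in red in the original post are the suspect ones: fs.default.name and mapred.job.tracker both point at masternode, yet, as the logs and hosts files below show, this machine is actually named node1.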
The jobtracker log reads:
2006-03-11 23:54:44,348 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to masternode/122.72.28.136:54311 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:44,353 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at node1/192.168.10.237
************************************************************/
The namenode log reads:
2006-03-11 23:54:37,009 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to masternode/122.72.28.136:54310 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:37,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.237
************************************************************/
The hosts setup is as follows:
hadoop@node1:~/hadoop/conf$ cat masters
node1
hadoop@node1:~/hadoop/conf$ cat slaves
node2
node3
hadoop@node1:~/hadoop/conf$ cat /etc/hosts
127.0.0.1 localhost
192.168.10.237 node1.node1 node1
192.168.10.238 node2
192.168.10.239 node3
Cause: the hostname does not match the configuration files. fs.default.name and mapred.job.tracker both point at masternode, but this machine is actually named node1; masternode therefore resolves through outside DNS to 122.72.28.136, an address that does not belong to this host, and the bind fails with "Cannot assign requested address".
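You can confirm the mismatch before renaming anything; a minimal diagnostic sketch (hostnames as in this post, getent available on any Linux):

hostname                    # prints node1, this machine's real name
getent hosts masternode     # what masternode resolves to: 122.72.28.136, an outside address
getent hosts node1          # 192.168.10.237, from /etc/hosts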
Fix:
1. Change the hostname to masternode (for details, see: changing the hostname on Ubuntu).
After the change:
hadoop@node1:~/hadoop/conf$ cat masters
masternode
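A minimal sketch of the rename itself on Ubuntu, assuming the new name is masternode and the node keeps the address 192.168.10.237 (exact commands can vary between Ubuntu releases):

sudo hostname masternode                   # change the name for the running session
echo masternode | sudo tee /etc/hostname   # make the change survive a reboot
# /etc/hosts must then map the new name to the node's own address, e.g.:
# 192.168.10.237 masternode
# and the same line has to be added on node2 and node3 as well.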
---------------------------------------------------------------------------------------------------------------------------
2. Invalid characters in the hostname
(This error summary comes from the official About云 QQ group, 39327136,
worked out jointly by group members hadoop jackysparrow (61214484) and Karmic Koala (2448096355).)
The symptom:
start-all.sh completes without errors, but no services actually start on the master or the slaves.
Configuration file: core-site.xml (shown only as a screenshot in the original post).
Cause: the hostname contains a dot (".").
Fix:
As in problem 1, change the hostname, avoiding special characters (any other legal name works just as well). After the change:
hadoop@node1:~/hadoop/conf$ cat masters
centOS64
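A quick way to vet a candidate name before committing to it; a hedged sketch that whitelists only letters, digits, and hyphens, the conservative choice for a Hadoop hostname:

h=centOS64   # the name you plan to use
echo "$h" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9-]*$' && echo "ok: $h" || echo "avoid: $h"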
---------------------------------------------------------------------------------------------------------------------------------------------------
3. Misconfigured XML files
jps does not show all five processes.
The filesystem has been formatted, yet only one process appears and the rest just throw errors.
The most likely cause is a formatting error in one of the XML files: either a tag is left unmatched, or stray characters have slipped in between matched tags. A few such errors are reproduced here:
starting namenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-namenode-aboutyun.out
[Fatal Error] hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: [Fatal Error] hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: secondarynamenode running as process 3170. Stop it first.
jobtracker running as process 3253. Stop it first.
localhost: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-aboutyun.out
The errors above are caused by an unmatched tag.
Error 2:
[Fatal Error] hdfs-site.xml:14:17: Content is not allowed in trailing section.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: [Fatal Error] hdfs-site.xml:14:17: Content is not allowed in trailing section.
This error is not caused by unmatched tags, but by an extra character outside the XML proper: a tiny dash sitting between two closing angle brackets.
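Both kinds of mistake, the unmatched tag and the stray character after the document, can be caught before Hadoop ever starts by running the site files through an XML validator; a minimal sketch using xmllint from libxml2 (the conf path matches this post's layout):

cd /usr/hadoop/conf   # or wherever your site files live
for f in core-site.xml hdfs-site.xml mapred-site.xml; do
  xmllint --noout "$f" && echo "$f: ok"
done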
4. Watch out for permissions
If you created a dedicated user for the Hadoop cluster, make sure the permissions on Hadoop's temporary directory are consistent for that user; you can check file permissions with the 'll' command.
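A hedged sketch of the check and the fix, assuming the cluster runs as user hadoop and hadoop.tmp.dir is /home/hadoop/hadoop/tmp as in the core-site.xml above:

ls -l /home/hadoop/hadoop                             # 'll' is usually an alias for 'ls -l'
sudo chown -R hadoop:hadoop /home/hadoop/hadoop/tmp   # give the hadoop user ownership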
---------------------------------------------------------------------------------------------------------------------------------------------------
5. JDK not installed
This error is not very common.
If you are not sure how to install it, see:
installing the Java JDK and setting environment variables on Linux (Ubuntu), with a small test program
The steps, briefly:
1. Download a JDK that matches your system (JDKs come in Linux and Windows variants, and in 32-bit and 64-bit builds).
2. Configure the environment variables (a sketch follows the notes below).
3. Run java -version to check the version; if it prints an error instead (shown as a screenshot in the original post), the environment variables are wrong.
Things to note:
1. You do not have to uninstall an earlier version first.
2. The path you add must be the JDK's bin directory, not the JDK root.
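A minimal sketch of step 2, assuming the JDK was unpacked to /usr/lib/jvm/jdk1.7.0 (an illustrative path; substitute your own), appended to ~/.bashrc:

export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH   # the bin directory, not the JDK root

source ~/.bashrc
java -version   # should now print the version string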
When reposting, please credit the source: http://www.aboutyun.com/thread-7118-1-1.html