bob007
Posted on 2015-8-11 21:51:09
This post was last edited by bob007 on 2015-8-11 21:52
JsNoNo posted on 2015-8-11 21:43
OP, I'd like to ask: I ran the local-mode verification successfully, but in pseudo-distributed mode, running the command start-dfs.sh gives "command not found" ...
There are quite a few possible problems here; I suggest going through some videos.
http://www.aboutyun.com/data/attachment/forum/201504/27/174605izua2ky72itwyyu9.png
Make sure the settings shown here are configured (a typical PATH setup is sketched below).
If the connection still fails after that, the possible causes multiply:
for example the firewall, the ports, whether the cluster is actually started, and so on.
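Since the screenshot is not readable as text, here is a minimal sketch of the usual fix for "start-dfs.sh: command not found": putting Hadoop's bin and sbin directories on PATH. The install path /usr/local/hadoop is an assumption taken from the logs later in this thread; adjust it to your own layout.

# Append to ~/.bashrc (HADOOP_HOME is an assumption; use your actual install directory)
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Reload the shell configuration and confirm the script is now found
source ~/.bashrc
which start-dfs.sh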
About云 (aboutyun) has a video series:
hadoop生态系统零基础入门 (Hadoop ecosystem for complete beginners) [continuously updated]
It has already been updated to Hadoop 2.7.
JsNoNo
Posted on 2015-8-11 21:52:45
bob007 posted on 2015-8-11 21:51
There are quite a few possible problems here; I suggest going through some videos.
Thanks, OP, I'll go through them first ~ much appreciated.
JsNoNo
Posted on 2015-8-16 13:03:36
bob007 posted on 2015-8-11 21:51
There are quite a few possible problems here; I suggest going through some videos.
Problem solved: the native libraries under hadoop/native are 64-bit, so on a 32-bit system they have to be recompiled.
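For anyone hitting the same symptom: you can check whether the bundled native library matches your OS architecture with the file command, and if it does not, rebuild the native libraries from the Hadoop source. A sketch, with paths assumed from a standard 2.7.x tarball installed under /usr/local/hadoop:

# Compare the library's architecture with the machine's
file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
uname -m

# If they differ, rebuild from the Hadoop source tree
# (needs a JDK, Maven, protobuf 2.5, cmake and the zlib/openssl dev packages)
mvn package -Pdist,native -DskipTests -Dtar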
chengzhb_about
Posted on 2015-9-9 22:27:05
I'm on hadoop 2.7.1 and DFS does come up, but the startup prints logging (SLF4J) messages I don't understand. I've read a lot of material and made a complete mess of my PATH trying to fix it, and it still happens. Could anyone suggest a solution?
hadoop@ubuntu:~$ start-dfs.sh
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Starting namenodes on
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out
localhost: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
localhost: SLF4J: Defaulting to no-operation (NOP) logger implementation
localhost: SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-ubuntu.out
Starting secondary namenodes
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
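Those SLF4J lines are warnings rather than errors: when no StaticLoggerBinder is found on the classpath, SLF4J falls back to a no-op logger and the daemons still start, which matches the report above that DFS does come up. A hedged way to check which SLF4J jars Hadoop actually sees, assuming the standard 2.7.1 tarball layout under /usr/local/hadoop:

# The stock distribution ships slf4j-api and the slf4j-log4j12 binding here
ls /usr/local/hadoop/share/hadoop/common/lib/ | grep -i slf4j

# Check whether something else (e.g. HADOOP_CLASSPATH) pulls in a conflicting slf4j jar
echo $HADOOP_CLASSPATH
hadoop classpath | tr ':' '\n' | grep -i slf4j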
dantangkai
Posted on 2015-10-21 22:53:44
Thanks for sharing.
ableq
Posted on 2015-11-2 10:51:58
Three Ubuntu 12.04 machines,
one as master,
two as slaves.
WordCount runs successfully without YARN, but with YARN it never completes and just hangs at:
hadoop@ubuntu12-1:~$ hadoop jar /home/hadoop/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
15/11/01 12:32:27 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.27:8032
15/11/01 12:32:46 INFO input.FileInputFormat: Total input paths to process : 1
15/11/01 12:32:47 INFO mapreduce.JobSubmitter: number of splits:1
15/11/01 12:32:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1446216843944_0001
15/11/01 12:33:05 INFO impl.YarnClientImpl: Submitted application application_1446216843944_0001
15/11/01 12:33:05 INFO mapreduce.Job: The url to track the job: http://ubuntu12-1:8088/proxy/application_1446216843944_0001/
15/11/01 12:33:05 INFO mapreduce.Job: Running job: job_1446216843944_0001
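When the client sits at "Running job" like this, one place to look is the tracking URL printed in the log (the 8088 web UI), or the YARN CLI. A sketch using the application ID from the log above:

# List the applications the ResourceManager knows about and their current states
yarn application -list

# Show the status of the stuck job
yarn application -status application_1446216843944_0001

# With log aggregation enabled, the application logs often show why no containers are allocated
yarn logs -applicationId application_1446216843944_0001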
starrycheng
Posted on 2015-11-2 12:50:54
ableq posted on 2015-11-2 10:53
Three Ubuntu 12.04 machines,
one as master,
two as slaves.
It sounds like the ResourceManager may be the problem; first check whether all the YARN processes are actually running (a quick check is sketched below).
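Following up on that suggestion, a minimal way to verify the YARN daemons, assuming the layout used earlier in this thread:

# On the master the list should include ResourceManager; on each slave, NodeManager
jps

# Ask the ResourceManager which NodeManagers have registered
yarn node -list

# If a daemon is missing, start YARN and check again
start-yarn.sh
jps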