
Help request: Hadoop wordcount example - checking the output shows no results

zhou@zhou-virtual-machine:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /input /output 'dfs[a-z.]+'
15/11/13 13:26:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/13 13:26:26 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/11/13 13:26:27 INFO input.FileInputFormat: Total input paths to process : 2
15/11/13 13:26:28 INFO mapreduce.JobSubmitter: number of splits:2
15/11/13 13:26:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1447388622482_0001
15/11/13 13:26:29 INFO impl.YarnClientImpl: Submitted application application_1447388622482_0001
15/11/13 13:26:29 INFO mapreduce.Job: The url to track the job: http://zhou-virtual-machine:8088/proxy/application_1447388622482_0001/
15/11/13 13:26:29 INFO mapreduce.Job: Running job: job_1447388622482_0001
15/11/13 13:26:38 INFO mapreduce.Job: Job job_1447388622482_0001 running in uber mode : false
15/11/13 13:26:38 INFO mapreduce.Job:  map 0% reduce 0%
15/11/13 13:26:48 INFO mapreduce.Job:  map 50% reduce 0%
15/11/13 13:26:48 INFO mapreduce.Job: Task Id : attempt_1447388622482_0001_m_000000_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal

15/11/13 13:26:57 INFO mapreduce.Job:  map 100% reduce 0%
15/11/13 13:26:59 INFO mapreduce.Job:  map 100% reduce 100%
15/11/13 13:27:00 INFO mapreduce.Job: Job job_1447388622482_0001 completed successfully
15/11/13 13:27:01 INFO mapreduce.Job: Counters: 51
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=347897
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=229
                HDFS: Number of bytes written=86
                HDFS: Number of read operations=9
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Failed map tasks=1
                Launched map tasks=3
                Launched reduce tasks=1
                Other local map tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=20798
                Total time spent by all reduces in occupied slots (ms)=7607
                Total time spent by all map tasks (ms)=20798
                Total time spent by all reduce tasks (ms)=7607
                Total vcore-seconds taken by all map tasks=20798
                Total vcore-seconds taken by all reduce tasks=7607
                Total megabyte-seconds taken by all map tasks=21297152
                Total megabyte-seconds taken by all reduce tasks=7789568
        Map-Reduce Framework
                Map input records=2
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=12
                Input split bytes=204
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=12
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=412
                CPU time spent (ms)=2770
                Physical memory (bytes) snapshot=422985728
                Virtual memory (bytes) snapshot=5675397120
                Total committed heap usage (bytes)=262758400
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=25
        File Output Format Counters
                Bytes Written=86
15/11/13 13:27:01 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/11/13 13:27:01 INFO input.FileInputFormat: Total input paths to process : 1
15/11/13 13:27:01 INFO mapreduce.JobSubmitter: number of splits:1
15/11/13 13:27:01 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1447388622482_0002
15/11/13 13:27:01 INFO impl.YarnClientImpl: Submitted application application_1447388622482_0002
15/11/13 13:27:01 INFO mapreduce.Job: The url to track the job: http://zhou-virtual-machine:8088/proxy/application_1447388622482_0002/
15/11/13 13:27:01 INFO mapreduce.Job: Running job: job_1447388622482_0002
15/11/13 13:27:12 INFO mapreduce.Job: Job job_1447388622482_0002 running in uber mode : false
15/11/13 13:27:12 INFO mapreduce.Job:  map 0% reduce 0%
15/11/13 13:27:18 INFO mapreduce.Job:  map 100% reduce 0%
15/11/13 13:27:24 INFO mapreduce.Job:  map 100% reduce 100%
15/11/13 13:27:24 INFO mapreduce.Job: Job job_1447388622482_0002 completed successfully
15/11/13 13:27:24 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=6
                FILE: Number of bytes written=230891
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=215
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=7
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4120
                Total time spent by all reduces in occupied slots (ms)=2758
                Total time spent by all map tasks (ms)=4120
                Total time spent by all reduce tasks (ms)=2758
                Total vcore-seconds taken by all map tasks=4120
                Total vcore-seconds taken by all reduce tasks=2758
                Total megabyte-seconds taken by all map tasks=4218880
                Total megabyte-seconds taken by all reduce tasks=2824192
        Map-Reduce Framework
                Map input records=0
                Map output records=0
                Map output bytes=0
                Map output materialized bytes=6
                Input split bytes=129
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=6
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=157
                CPU time spent (ms)=1090
                Physical memory (bytes) snapshot=284528640
                Virtual memory (bytes) snapshot=3783507968
                Total committed heap usage (bytes)=140132352
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=86
        File Output Format Counters
                Bytes Written=0
zhou@zhou-virtual-machine:/usr/local/hadoop$ bin/hdfs dfs -cat /output/part-r-00000
15/11/13 13:28:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
zhou@zhou-virtual-machine:/usr/local/hadoop$


Why is this? Please help out this newbie!~
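For reference, the counters above already hint at what happened: the first job reports Map output records=0 and the second job writes 0 bytes, which suggests the regex 'dfs[a-z.]+' matched nothing in /input. A minimal sketch of how to double-check what the job actually produced, assuming the same /output path as above:

# list everything the job wrote; an empty part-r-00000 next to a _SUCCESS marker means the job ran but found no matches
bin/hdfs dfs -ls /output
# print every part file, not only part-r-00000
bin/hdfs dfs -cat /output/*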


3 replies

mituan2008 posted on 2015-11-13 14:02:47
First get the cluster configured properly so that no warnings appear, or go straight to the files and check whether there is any data in them.

Hadoop 2.6.0 single-node pseudo-distributed mode installation
http://www.aboutyun.com/thread-10554-1-1.html
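A minimal sketch of the check suggested above, assuming the same /input and /output paths from the original post:

# confirm the input files exist and are not empty
bin/hdfs dfs -ls /input
# dump the input and test the same regex locally; if this prints nothing, an empty job output is expected
bin/hdfs dfs -cat /input/* | grep -E 'dfs[a-z.]+'
# inspect the final job output
bin/hdfs dfs -cat /output/*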





lanxueren121 posted on 2015-11-16 12:39:59
mituan2008 posted on 2015-11-13 14:02
First get the cluster configured properly so that no warnings appear, or go straight to the files and check whether there is any data.

Hadoop 2.6.0 single-node pseudo-distributed mode ...

Mine is installed on Ubuntu...

bob007 posted on 2015-11-16 14:08:26
lanxueren121 posted on 2015-11-16 12:39
Mine is installed on Ubuntu...


For Ubuntu, refer to this:
Hadoop 2.7 [single node] standalone, pseudo-distributed, and distributed installation guide
http://www.aboutyun.com/thread-12798-1-1.html
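For reference, the grep example in those guides is normally run against Hadoop's own configuration files, which do contain strings matching 'dfs[a-z.]+'. A minimal sketch, assuming /usr/local/hadoop as in the original post and that /input already exists in HDFS:

# stage files that actually contain dfs.* property names
bin/hdfs dfs -put etc/hadoop/*.xml /input
# the output directory must not exist when the job starts
bin/hdfs dfs -rm -r /output
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /input /output 'dfs[a-z.]+'
bin/hdfs dfs -cat /output/part-r-00000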



