david@s0:~/Downloads$ hadoop jar HadoopDemo2.jar /user/it18zhang/ncdc_data/19*.gz /user/it18zhang/out
17/04/13 20:14:34 INFO client.RMProxy: Connecting to ResourceManager at s0/192.168.20.128:8032
17/04/13 20:14:34 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
17/04/13 20:14:35 INFO input.FileInputFormat: Total input paths to process : 2
17/04/13 20:14:35 INFO mapreduce.JobSubmitter: number of splits:2
17/04/13 20:14:36 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492080262949_0002
17/04/13 20:14:36 INFO impl.YarnClientImpl: Submitted application application_1492080262949_0002
17/04/13 20:14:36 INFO mapreduce.Job: The url to track the job: http://s0:8088/proxy/application_1492080262949_0002/
17/04/13 20:14:36 INFO mapreduce.Job: Running job: job_1492080262949_0002
17/04/13 20:14:48 INFO mapreduce.Job: Job job_1492080262949_0002 running in uber mode : false
17/04/13 20:14:48 INFO mapreduce.Job: map 0% reduce 0%
17/04/13 20:15:23 INFO mapreduce.Job: Task Id : attempt_1492080262949_0002_m_000000_0, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1808748719-192.168.20.128-1488809333278:blk_1073741826_1002 file=/user/it18zhang/ncdc_data/1902.gz
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:184)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
...........(omitted)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/13 20:15:23 INFO mapreduce.Job: Task Id : attempt_1492080262949_0002_m_000001_0, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1808748719-192.168.20.128-1488809333278:blk_1073741825_1001 file=/user/it18zhang/ncdc_data/1901.gz
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at java.io.InputStream.read(InputStream.java:101)
.............(omitted)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/13 20:15:50 INFO mapreduce.Job: Task Id : attempt_1492080262949_0002_m_000000_1, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1808748719-192.168.20.128-1488809333278:blk_1073741826_1002 file=/user/it18zhang/ncdc_data/1902.gz
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
..........(omitted)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/13 20:16:10 INFO mapreduce.Job: map 50% reduce 0%
17/04/13 20:16:10 INFO mapreduce.Job: Task Id : attempt_1492080262949_0002_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1808748719-192.168.20.128-1488809333278:blk_1073741826_1002 file=/user/it18zhang/ncdc_data/1902.gz
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:144)
..............(omitted)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
17/04/13 20:16:11 INFO mapreduce.Job: map 0% reduce 0%
17/04/13 20:16:12 INFO mapreduce.Job: Task Id : attempt_1492080262949_0002_m_000001_1, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1808748719-192.168.20.128-1488809333278:blk_1073741825_1001 file=/user/it18zhang/ncdc_data/1901.gz
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:159)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:143)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
............(omitted)
17/04/13 20:16:38 INFO mapreduce.Job: map 100% reduce 100%
17/04/13 20:16:39 INFO mapreduce.Job: Job job_1492080262949_0002 failed with state FAILED due to: Task failed task_1492080262949_0002_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
17/04/13 20:16:40 INFO mapreduce.Job: Counters: 16
    Job Counters
        Failed map tasks=6
        Killed map tasks=1
        Killed reduce tasks=1
        Launched map tasks=7
        Other local map tasks=7
        Total time spent by all maps in occupied slots (ms)=200749
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=200749
        Total time spent by all reduce tasks (ms)=0
        Total vcore-milliseconds taken by all map tasks=200749
        Total vcore-milliseconds taken by all reduce tasks=0
        Total megabyte-milliseconds taken by all map tasks=205566976
        Total megabyte-milliseconds taken by all reduce tasks=0
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
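
Two different problems show up in this log. The one that actually kills the job is the BlockMissingException: the NameNode still lists blk_1073741825 and blk_1073741826 for 1901.gz and 1902.gz, but no live DataNode can serve them. That usually means the DataNodes holding the replicas are down or the block files are gone; hdfs dfsadmin -report and hdfs fsck /user/it18zhang/ncdc_data -files -blocks -locations will show which case it is, and if the blocks are truly lost the two .gz files need to be uploaded again. The JobResourceUploader warning at the top is unrelated to that failure, but it is easy to address: run the job through ToolRunner instead of building it directly in main(). Below is a minimal driver sketch of that pattern, assuming a standard Job setup; the class name HadoopDemo2Driver and the job name are placeholders, and the real Mapper/Reducer classes inside HadoopDemo2.jar are only indicated by comments.

// Minimal sketch of the ToolRunner pattern the warning asks for.
// HadoopDemo2Driver and the job name are placeholders, not the real
// classes from HadoopDemo2.jar.
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class HadoopDemo2Driver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s <input path> <output path>%n",
                    getClass().getSimpleName());
            return -1;
        }
        // getConf() already contains whatever ToolRunner parsed from the
        // generic options (-D, -files, -libjars, ...).
        Job job = Job.getInstance(getConf(), "ncdc demo");
        job.setJarByClass(HadoopDemo2Driver.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // job.setMapperClass(...);   // wire in the real mapper here
        // job.setReducerClass(...);  // and the real reducer
        // job.setOutputKeyClass(...);
        // job.setOutputValueClass(...);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner runs GenericOptionsParser first, then calls run().
        System.exit(ToolRunner.run(new HadoopDemo2Driver(), args));
    }
}

With a driver like this, generic options such as -D mapreduce.job.reduces=1 are parsed by ToolRunner before run() is called and the warning goes away; the BlockMissingException still has to be dealt with on the HDFS side.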