
Problems and fixes when developing Hadoop remotely from Windows with the hadoop-eclipse-plugin and running MapReduce

pig2 posted on 2014-7-2 23:51:20
OWTHY posted on 2016-7-21 22:40:09
How do I solve problem 2, "unable to load native lib"? Thanks.

SuperDove posted on 2016-8-25 16:44:52
java.lang.UnsatisfiedLinkError: for this error on hadoop-2.6.4 (or later versions), hadoop.dll and winutils.exe need to be updated to builds matching that version. You can download them from http://down.51cto.com/data/1983230 and copy them into the corresponding directory. Newcomers get bitten by version mismatches, so flagging it here.
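
For reference, a minimal Windows-side sketch of how the native libraries get picked up, assuming a local Hadoop unpack at C:\hadoop-2.6.4 whose bin\ folder contains the matching winutils.exe and hadoop.dll (that path and the hdfs://master:9000 address are placeholder assumptions, not values from this thread):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NativeLibCheck {
    public static void main(String[] args) throws Exception {
        // Must be set before the first Hadoop class loads; Hadoop's Shell
        // utility looks for %hadoop.home.dir%\bin\winutils.exe.
        System.setProperty("hadoop.home.dir", "C:\\hadoop-2.6.4"); // assumed path

        Configuration conf = new Configuration();
        // Assumed NameNode address -- use your cluster's fs.defaultFS.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf);
        System.out.println("root exists: " + fs.exists(new Path("/")));
        fs.close();
    }
}

If the UnsatisfiedLinkError still appears after this, a commonly reported workaround is to copy the version-matched hadoop.dll into C:\Windows\System32 (or add its folder to PATH) and restart Eclipse so the JVM can find it on java.library.path.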

Kevin517 posted on 2016-10-16 18:10:13
Creating the input/ directory through Eclipse succeeds, but a file uploaded to input/ ends up with no content.
The namenode log is as follows.

2016-10-16 05:59:23,939 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 98
2016-10-16 05:59:35,444 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-faf87d25-857d-4691-b84f-b422f9003adb:NORMAL:10.0.0.10:50010|RBW], ReplicaUC[[DISK]DS-195f1b0f-88da-4017-8c33-1cafc1834123:NORMAL:10.0.0.9:50010|RBW]]} for /input/word.txt
2016-10-16 05:59:56,675 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-10-16 05:59:56,675 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-10-16 05:59:56,675 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-10-16 05:59:56,676 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073741831_1007{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-195f1b0f-88da-4017-8c33-1cafc1834123:NORMAL:10.0.0.9:50010|RBW]]} for /input/word.txt
2016-10-16 06:00:17,751 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 2 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-10-16 06:00:17,752 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=2, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-10-16 06:00:17,752 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-10-16 06:00:17,752 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.100.8:1125 Call#21 Retry#0
java.io.IOException: File /input/word.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3067)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:722)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
[root@master ~]#


=======================

Could anyone help analyze this? Many thanks.
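
One hedged reading of this log, not a confirmed diagnosis: the namenode keeps allocating blocks (both datanodes are registered), but every placement attempt fails until both nodes are excluded for the client at 192.168.100.8. That is the classic symptom of a remote Windows client that cannot reach the datanodes' 10.0.0.x addresses on the data-transfer port 50010: the file gets created (hence it exists but is empty) while the block writes never succeed. A client-side sketch that often helps in this situation is to have the client dial datanodes by hostname, with matching entries added to the Windows hosts file:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadWithHostnames {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to datanodes by hostname instead of the IPs the namenode
        // returns; the hostnames must resolve on the Windows side.
        conf.set("dfs.client.use.datanode.hostname", "true");

        // hdfs://master:9000 and the "root" user are assumptions -- use your
        // own fs.defaultFS and HDFS user.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf, "root");
        try (FSDataOutputStream out = fs.create(new Path("/input/word.txt"), true)) {
            out.writeBytes("hello hadoop\n"); // small test payload
        }
        fs.close();
    }
}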

Kevin517 posted on 2016-10-17 16:15:18
2016-10-17 16:05:05,213 INFO  Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1173)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-10-17 16:05:05,218 INFO  jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-10-17 16:05:05,619 WARN  mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(171)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2016-10-17 16:05:05,698 INFO  input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2016-10-17 16:05:05,742 INFO  mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(201)) - number of splits:1
2016-10-17 16:05:05,907 INFO  mapreduce.JobSubmitter (JobSubmitter.java:printTokens(290)) - Submitting tokens for job: job_local204455699_0001
2016-10-17 16:05:06,155 INFO  mapreduce.Job (Job.java:submit(1294)) - The url to track the job: http://localhost:8080/
2016-10-17 16:05:06,156 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1339)) - Running job: job_local204455699_0001
2016-10-17 16:05:06,164 INFO  mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null
2016-10-17 16:05:06,178 INFO  output.FileOutputCommitter (FileOutputCommitter.java:<init>(100)) - File Output Committer Algorithm version is 1
2016-10-17 16:05:06,184 INFO  mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2016-10-17 16:05:06,333 INFO  mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks
2016-10-17 16:05:06,340 INFO  mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local204455699_0001_m_000000_0
2016-10-17 16:05:06,401 INFO  output.FileOutputCommitter (FileOutputCommitter.java:<init>(100)) - File Output Committer Algorithm version is 1
2016-10-17 16:05:06,412 INFO  util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(192)) - ProcfsBasedProcessTree currently is supported only on Linux.
2016-10-17 16:05:07,158 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1360)) - Job job_local204455699_0001 running in uber mode : false
2016-10-17 16:05:07,160 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1367)) -  map 0% reduce 0%
2016-10-17 16:05:07,469 INFO  mapred.Task (Task.java:initialize(587)) -  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@56c02b51
2016-10-17 16:05:07,474 INFO  mapred.MapTask (MapTask.java:runNewMapper(756)) - Processing split: hdfs://192.168.200.105:9000/input/Kevin-master.txt:0+38
2016-10-17 16:05:07,522 INFO  mapred.MapTask (MapTask.java:setEquator(1205)) - (EQUATOR) 0 kvi 26214396(104857584)
2016-10-17 16:05:07,522 INFO  mapred.MapTask (MapTask.java:init(998)) - mapreduce.task.io.sort.mb: 100
2016-10-17 16:05:07,522 INFO  mapred.MapTask (MapTask.java:init(999)) - soft limit at 83886080
2016-10-17 16:05:07,522 INFO  mapred.MapTask (MapTask.java:init(1000)) - bufstart = 0; bufvoid = 104857600
2016-10-17 16:05:07,522 INFO  mapred.MapTask (MapTask.java:init(1001)) - kvstart = 26214396; length = 6553600
2016-10-17 16:05:07,527 INFO  mapred.MapTask (MapTask.java:createSortingCollector(403)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2016-10-17 16:05:28,547 WARN  hdfs.BlockReaderFactory (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(712)) - I/O error constructing remote block reader.
java.net.ConnectException: Connection timed out: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3441)
        at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:773)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:690)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:352)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
        at java.io.DataInputStream.read(Unknown Source)
        at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
2016-10-17 16:05:28,549 WARN  hdfs.DFSClient (DFSInputStream.java:blockSeekTo(654)) - Failed to connect to 10.0.0.16/10.0.0.16:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
java.net.ConnectException: Connection timed out: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3441)
        at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:773)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:690)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:352)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:618)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:844)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:896)
        at java.io.DataInputStream.read(Unknown Source)
        at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
        at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
        at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

How can this error in Eclipse be resolved?
I can create directories from Eclipse, but uploaded files end up with no content. The job here was reading a file I had uploaded from the command line, and this is what happens...
Any pointers would be appreciated. Thanks.
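
Before changing any Hadoop settings, it is worth checking what the stack trace literally says: the map task times out opening a TCP connection to the datanode at 10.0.0.16:50010. A tiny self-contained probe (address copied from the log above; 50010 is the default HDFS data-transfer port) can confirm whether that address is reachable from the Windows machine at all:

import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodeProbe {
    public static void main(String[] args) throws Exception {
        String host = "10.0.0.16"; // datanode address from the log
        int port = 50010;          // default HDFS data-transfer port
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 5000); // 5s timeout
            System.out.println("reachable: " + host + ":" + port);
        }
    }
}

If this probe also times out, the problem is routing or a firewall between the Windows network and the 10.0.0.x network rather than anything in Eclipse; if it succeeds, the dfs.client.use.datanode.hostname approach sketched earlier in the thread is the next thing to try.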

goldtimes posted on 2016-10-17 17:57:11
Kevin517 posted on 2016-10-17 16:15
2016-10-17 16:05:05,213 INFO  Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(117 ...

Did you upload from the command line, from a program, or through the Eclipse plugin?
Is the Windows firewall turned off?
Post the Hadoop cluster processes.
Post the Hadoop cluster's firewall status.
Post the Linux hosts files.


Kevin517 posted on 2016-10-20 17:25:13
goldtimes posted on 2016-10-17 17:57
Did you upload from the command line, from a program, or through the Eclipse plugin?
Is the Windows firewall turned off?
Post the Hadoop cluster proc ...

Hello,
============================= Processes are as follows

[root@master ~]# clear
[root@master ~]# jps
2167 NodeManager
3884 Jps
1642 NameNode
2070 ResourceManager
1918 SecondaryNameNode
1737 DataNode
[root@master ~]#

-----------------------------

[root@slaver ~]# jps
1630 NodeManager
1536 DataNode
1960 Jps
[root@slaver ~]#


==================================== Security rules
[root@master ~]# getenforce                  
Disabled
[root@master ~]# /etc/init.d/iptables status
iptables: Firewall is not running.
[root@master ~]#

-------------------------------------------

[root@slaver ~]# getenforce
Disabled
[root@slaver ~]# /etc/init.d/iptables status
iptables: Firewall is not running.
[root@slaver ~]#


================================== Hostnames

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.17 master
10.0.0.18 slaver
[root@master ~]#
[root@master ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
NOZEROCONF=yes
[root@master ~]# hostname
master

[root@master ~]#


-----------------------------------

[root@slaver ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.17 master
10.0.0.18 slaver
[root@slaver ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slaver
NOZEROCONF=yes
[root@slaver ~]# hostname
slaver

[root@slaver ~]#


===================================

Any further advice would be much appreciated.
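
One hedged observation rather than a diagnosis: these hosts files exist only on the Linux nodes. The Windows machine running the Eclipse job must also be able to resolve master and slaver and route to 10.0.0.17/10.0.0.18, otherwise exactly the connect timeouts shown earlier appear. A small check to run on the Windows side, using the same hostnames as the Linux /etc/hosts above:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostsCheck {
    public static void main(String[] args) {
        // Hostnames taken from the Linux /etc/hosts files above.
        for (String name : new String[] {"master", "slaver"}) {
            try {
                System.out.println(name + " -> "
                        + InetAddress.getByName(name).getHostAddress());
            } catch (UnknownHostException e) {
                // If this prints, add "10.0.0.17 master" / "10.0.0.18 slaver"
                // to C:\Windows\System32\drivers\etc\hosts and retry.
                System.out.println(name + " does not resolve on this machine");
            }
        }
    }
}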




菜鸟历险记 posted on 2017-4-17 16:40:15
Why do I have to specify the main function every time I run a MapReduce program from Eclipse on Windows?
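
A hedged pointer rather than a definitive answer: Eclipse's Run As > Java Application launcher only auto-detects an entry point when the class itself declares public static void main(String[]), so keeping the whole job in one driver class shaped like the illustrative skeleton below (not code from this thread) lets it be launched directly, without filling in the main class by hand each time:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    // Splits each input line into words and emits (word, 1).
    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    // Sums the counts emitted for each word.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    // The entry point Eclipse looks for; args[0]=input path, args[1]=output path.
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class); // lets Hadoop locate the job classes
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}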

Wyy_Ck posted on 2017-6-9 16:22:18
My Hadoop is 2.7.3 on CentOS. After installation it reports the error at the link below. Any guidance would be much appreciated, thanks.



http://www.aboutyun.com/forum.ph ... &extra=page%3D1
