Common Hadoop Problems and Solutions

leo_1989 posted on 2013-12-11 14:57:29
oYaoXiang1 posted on 2013-12-11 14:57:29
Running start-dfs.sh reports:
localhost: Exception in thread "main" java.lang.NullPointerException
localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
What's going on? Still waiting for an answer...

louisthy posted on 2013-12-11 14:57:29
starting namenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:116)
localhost:      at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:120)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.initialize(SecondaryNameNode.java:124)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.<init>(SecondaryNameNode.java:108)
localhost:      at org.apache.hadoop.dfs.SecondaryNameNode.main(SecondaryNameNode.java:460)

yuanqingyu0123 posted on 2013-12-11 14:57:29
14:
09/08/31 18:25:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
09/08/31 18:25:45 INFO hdfs.DFSClient: Abandoning block blk_-8575812198227241296_1001
09/08/31 18:25:51 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010
09/08/31 18:25:51 INFO hdfs.DFSClient: Abandoning block blk_-2932256218448902464_1001
09/08/31 18:25:57 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
09/08/31 18:25:57 INFO hdfs.DFSClient: Abandoning block blk_-1014449966480421244_1001
09/08/31 18:26:03 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010
09/08/31 18:26:03 INFO hdfs.DFSClient: Abandoning block blk_7193173823538206978_1001
09/08/31 18:26:09 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2731)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

09/08/31 18:26:09 WARN hdfs.DFSClient: Error Recovery for block blk_7193173823538206978_1001 bad datanode[2] nodes == null
09/08/31 18:26:09 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/umer/8GB_input" - Aborting...
put: Bad connect ack with firstBadLink 192.168.1.16:50010
Solution:
I resolved this as follows:
1) Ran '/etc/init.d/iptables stop' to stop the firewall.
2) Set SELINUX=disabled in '/etc/selinux/config' to disable SELinux.
It worked for me after these two changes.

xukunddp posted on 2013-12-11 14:57:29
15: After replacing IP addresses with hostnames, the datanodes fail to come up.
Solution:
Delete the tmp files and restart the Hadoop cluster.
The cause is repeated deployments leaving the tmp files inconsistent with the namenode.
(Note: from the group chat log; thanks to beyi.)

llike90 posted on 2013-12-11 14:57:29
How to fix jline.ConsoleReader.readLine not taking effect on Windows
In the main() function of CliDriver.java, the statement reader.readLine is used to read from standard input, but on Windows it always returns null. This reader is a jline.ConsoleReader instance, which makes debugging in Eclipse on Windows inconvenient.
We can replace it with java.util.Scanner, changing the original
while ((line=reader.readLine(curPrompt+"> ")) != null)
to:
Scanner sc = new Scanner(System.in);
// Guard with hasNextLine(): Scanner.nextLine() throws NoSuchElementException
// at end of input rather than returning null.
while (sc.hasNextLine() && (line = sc.nextLine()) != null)
Recompile and redeploy, and SQL statements can then be read normally from standard input.
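For reference, a minimal self-contained sketch of the Scanner-based read loop (the class name, prompt handling, and exit keyword are illustrative, not taken from CliDriver):

import java.util.Scanner;

public class StdinLoopDemo {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        String line;
        // hasNextLine() guards the loop: nextLine() throws
        // NoSuchElementException at end of input instead of returning null.
        while (sc.hasNextLine()) {
            line = sc.nextLine();
            if (line.trim().equalsIgnoreCase("quit")) {
                break; // illustrative exit condition
            }
            System.out.println("read: " + line);
        }
        sc.close();
    }
}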
Possible causes of the "does not have a scheme" error when debugging Hive in Eclipse on Windows
1. The "hive.metastore.local" item in the Hive configuration file is set to false; change it to true, since this is a standalone setup.
2. The HIVE_HOME environment variable is not set, or is set incorrectly.
3. "does not have a scheme" most likely means hive-default.xml cannot be found. For how to fix the missing hive-default.xml when debugging Hive with Eclipse, see: http://bbs.hadoopor.com/thread-292-1-1.html

qz2003 posted on 2013-12-11 14:57:29
1. Chinese characters
Chinese text parsed out of URLs still printed as garbage in Hadoop. We once assumed Hadoop simply did not support Chinese; after reading the source, it turns out Hadoop merely does not support emitting Chinese in GBK encoding.
The following code is from TextOutputFormat.class. Hadoop's default outputs all derive from FileOutputFormat, which has two subclasses: one for binary stream output, and the text-based one, TextOutputFormat.
public class TextOutputFormat extends FileOutputFormat {
  protected static class LineRecordWriter
    implements RecordWriter {
    private static final String utf8 = "UTF-8"; // hard-coded to UTF-8 here
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }

    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(utf8);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + utf8 + " encoding");
      }
    }

    private void writeObject(Object o) throws IOException {
      if (o instanceof Text) {
        Text to = (Text) o;
        out.write(to.getBytes(), 0, to.getLength()); // this line also needs changing
      } else {
        out.write(o.toString().getBytes(utf8));
      }
    }

  }
}
As you can see, Hadoop's default output encoding is hard-wired to UTF-8. So as long as the Chinese text was decoded correctly upstream, setting the Linux client's terminal charset to UTF-8 will display the Chinese properly, because Hadoop writes it out as UTF-8.
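To see the mismatch concretely, here is a small standalone sketch (the class name is illustrative; it assumes the JRE ships the GBK charset, as standard Oracle/OpenJDK builds do). It prints the different byte sequences the same two characters produce under each encoding:

import java.io.UnsupportedEncodingException;
import java.util.Arrays;

public class EncodingDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "中文"; // two Chinese characters
        // The same string yields different bytes per charset, which is why
        // the terminal (or database) charset must match the writer's.
        System.out.println("UTF-8: " + Arrays.toString(s.getBytes("UTF-8")));
        System.out.println("GBK:   " + Arrays.toString(s.getBytes("GBK")));
    }
}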
Since most databases define their fields in GBK, what if you want Hadoop to output Chinese in GBK for database compatibility?
We can define a new class:
public class GbkOutputFormat extends FileOutputFormat {
  protected static class LineRecordWriter
    implements RecordWriter {
    // simply change the encoding to gbk
    private static final String gbk = "gbk";
    private static final byte[] newline;
    static {
      try {
        newline = "\n".getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }

    public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
      this.out = out;
      try {
        this.keyValueSeparator = keyValueSeparator.getBytes(gbk);
      } catch (UnsupportedEncodingException uee) {
        throw new IllegalArgumentException("can't find " + gbk + " encoding");
      }
    }

    private void writeObject(Object o) throws IOException {
      // Unlike the UTF-8 original, do not fast-path Text (whose raw bytes
      // are UTF-8); always re-encode via toString().getBytes(gbk), so that
      // non-Text values are written too.
      out.write(o.toString().getBytes(gbk));
    }

  }
}
Then add conf1.setOutputFormat(GbkOutputFormat.class) to the MapReduce driver code, and the Chinese output will be written in GBK.
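For completeness, a minimal driver sketch in the old org.apache.hadoop.mapred API that the conf1.setOutputFormat(...) call implies. The class name, job name, paths, and key/value types are placeholders, and it assumes GbkOutputFormat also carries over TextOutputFormat's getRecordWriter method, which the excerpt above elides:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class GbkJobDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(GbkJobDriver.class);
        conf.setJobName("gbk-output-demo"); // placeholder name
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        // The key line from the post: emit GBK bytes instead of UTF-8.
        conf.setOutputFormat(GbkOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf); // defaults to the identity mapper/reducer
    }
}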
2. A previously working MapReduce job suddenly threw this error:
java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
java.io.IOException: Could not get block locations. Aborting…
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
Investigation showed the cause was too many open files on the Linux machines. ulimit -n shows that the default Linux open-file limit is 1024. Edit /etc/security/limits.conf and raise the limit for the hadoop user, for example:
hadoop soft nofile 65535
hadoop hard nofile 65535
Then rerun the program (ideally after changing every datanode). Solved.
3. After running for a while, Hadoop cannot be stopped with stop-all.sh, which reports:
no tasktracker to stop, no datanode to stop
The cause: when stopping, Hadoop relies on the recorded process IDs of the mapred and dfs daemons on each node. By default these PID files are kept under /tmp, and Linux periodically (typically every 7 to 30 days) deletes files in that directory. Once hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid are gone, the namenode naturally cannot find those processes any more.
Setting export HADOOP_PID_DIR in the configuration file (conf/hadoop-env.sh) to a directory that is not periodically purged solves this.
Shared by the Taobao data platform team

poptang4 posted on 2013-12-11 14:57:29

Incompatible namespaceIDs in /usr/local/hadoop/dfs/data: namenode namespaceID = 405233244966; datanode namespaceID = 33333244
Cause:
Every run of hadoop namenode -format generates a new namespaceID for the NameNode, but the DataNode data under the hadoop.tmp.dir directory still keeps the namespaceID from the previous format. The mismatch prevents the DataNode from starting. So before each run of hadoop namenode -format, first delete the hadoop.tmp.dir directory and startup will then succeed. Note that this means deleting the local directory hadoop.tmp.dir points to, not an HDFS directory.

starrycheng posted on 2013-12-11 14:57:29
A correction to the answer for the first problem:
This happens when, during the reduce-side shuffle, fetching a completed map's output fails more times than the allowed limit, which defaults to 5. Many things can trigger it, such as flaky network connections, connection timeouts, poor bandwidth, or blocked ports. This error normally does not occur when the cluster's internal network is in good shape.

JavaShoote posted on 2013-12-11 14:57:29
Problem: Storage directory does not exist
2010-02-09 21:37:49,890 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yijian/192.168.0.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/ ... /release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2010-02-09 21:37:52,093 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8888
2010-02-09 21:37:52,125 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:8888
2010-02-09 21:37:52,140 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2010-02-09 21:37:52,156 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=jian,None,root,Administrators,Users
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2010-02-09 21:37:53,000 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2010-02-09 21:37:53,031 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:37:53,046 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2010-02-09 21:37:53,203 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory D:\hadoop\run\dfs_name_dir does not exist.
2010-02-09 21:37:53,203 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:37:53,234 INFO org.apache.hadoop.ipc.Server: Stopping server on 8888
2010-02-09 21:37:53,234 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:37:53,250 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yijian/192.168.0.13
************************************************************/
Solution: the storage directory D:\hadoop\run\dfs_name_dir (whatever dfs.name.dir points to in the configuration) does not exist, so simply create this directory by hand.
Problem: NameNode is not formatted
2010-02-09 21:52:49,343 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yijian/192.168.0.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/ ... /release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2010-02-09 21:52:49,531 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8888
2010-02-09 21:52:49,531 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:8888
2010-02-09 21:52:49,546 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2010-02-09 21:52:49,546 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=jian,None,root,Administrators,Users
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2010-02-09 21:52:50,250 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2010-02-09 21:52:50,265 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2010-02-09 21:52:50,265 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2010-02-09 21:52:50,359 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:52:50,359 INFO org.apache.hadoop.ipc.Server: Stopping server on 8888
2010-02-09 21:52:50,359 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2010-02-09 21:52:50,359 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yijian/192.168.0.13
************************************************************/
Solution: HDFS has not been formatted yet; just run hadoop namenode -format once, then start it again.

llike90 posted on 2013-12-11 14:57:29
Running bin/hadoop jps reports the following error:
Exception in thread "main" java.lang.NullPointerException
        at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)
        at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)
        at sun.tools.jps.Jps.main(Jps.java:45)
Cause:
The system /tmp directory was deleted. Recreate /tmp (with the usual permissions, i.e. chmod 1777 /tmp) and the error goes away; jps reads per-JVM performance data from /tmp/hsperfdata_<user>, which is why it needs that directory.
The "unable to create log directory /tmp/..." error from bin/hive may have the same cause.
