
Fixing the database connection failure when importing a table from MySQL into HDFS with Sqoop 1.4.4

quenlang posted on 2014-9-25 21:27:09
Once Sqoop is configured, the following command lists all tables in the sqoop database in MySQL. It succeeds, which shows that connecting to MySQL itself is not the problem:
[grid@rhel0 ~]$ sqoop list-tables --connect jdbc:mysql://localhost:3306/sqoop --username sqoop -P
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Enter password:
14/09/25 21:08:40 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
demo_blog
demo_log

But what happened next left me stunned: importing a table from MySQL into HDFS with exactly the same connect string throws a connection-refused exception:
[grid@rhel0 ~]$ sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop -P --table demo_blog
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Enter password:
14/09/25 21:11:07 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/25 21:11:07 INFO tool.CodeGenTool: Beginning code generation
14/09/25 21:11:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `demo_blog` AS t LIMIT 1
14/09/25 21:11:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `demo_blog` AS t LIMIT 1
14/09/25 21:11:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop-2.4.1
Note: /tmp/sqoop-grid/compile/bd4958cfadb8d7c2df3adabf176076cc/demo_blog.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/25 21:11:10 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-grid/compile/bd4958cfadb8d7c2df3adabf176076cc/demo_blog.jar
14/09/25 21:11:10 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/25 21:11:10 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/25 21:11:10 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/25 21:11:10 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/25 21:11:10 INFO mapreduce.ImportJobBase: Beginning import of demo_blog
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase-0.94.22/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/09/25 21:11:11 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/09/25 21:11:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/09/25 21:11:12 INFO client.RMProxy: Connecting to ResourceManager at rhel0/192.168.122.100:8032
14/09/25 21:11:15 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `demo_blog`
14/09/25 21:11:15 INFO mapreduce.JobSubmitter: number of splits:3
14/09/25 21:11:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1411648354055_0003
14/09/25 21:11:16 INFO impl.YarnClientImpl: Submitted application application_1411648354055_0003
14/09/25 21:11:16 INFO mapreduce.Job: The url to track the job: http://rhel0:8088/proxy/application_1411648354055_0003/
14/09/25 21:11:16 INFO mapreduce.Job: Running job: job_1411648354055_0003
14/09/25 21:11:26 INFO mapreduce.Job: Job job_1411648354055_0003 running in uber mode : false
14/09/25 21:11:26 INFO mapreduce.Job:  map 0% reduce 0%
14/09/25 21:11:32 INFO mapreduce.Job: Task Id : attempt_1411648354055_0003_m_000002_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:726)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:193)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:162)
    ... 9 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1127)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:356)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2502)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2539)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2321)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:832)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:417)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:344)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:294)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:187)
    ... 10 more
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:241)
    at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:258)
    at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:306)
    ... 26 more

After fiddling with this for half a day and getting nowhere, I replaced localhost with the IP on a whim, and it just worked:
[grid@rhel0 ~]$ sqoop import --connect jdbc:mysql://192.168.122.100:3306/sqoop --username sqoop -P --table demo_blog
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Enter password:
14/09/25 21:15:50 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/09/25 21:15:50 INFO tool.CodeGenTool: Beginning code generation
14/09/25 21:15:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `demo_blog` AS t LIMIT 1
14/09/25 21:15:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `demo_blog` AS t LIMIT 1
14/09/25 21:15:51 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/hadoop-2.4.1
Note: /tmp/sqoop-grid/compile/55eaee36a18d0f2e71d1736cb58a7175/demo_blog.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/09/25 21:15:53 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-grid/compile/55eaee36a18d0f2e71d1736cb58a7175/demo_blog.jar
14/09/25 21:15:53 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/09/25 21:15:53 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/09/25 21:15:53 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/09/25 21:15:53 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/09/25 21:15:53 INFO mapreduce.ImportJobBase: Beginning import of demo_blog
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase-0.94.22/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/09/25 21:15:54 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/09/25 21:15:55 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/09/25 21:15:55 INFO client.RMProxy: Connecting to ResourceManager at rhel0/192.168.122.100:8032
14/09/25 21:15:58 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `demo_blog`
14/09/25 21:15:58 INFO mapreduce.JobSubmitter: number of splits:3
14/09/25 21:15:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1411648354055_0004
14/09/25 21:15:59 INFO impl.YarnClientImpl: Submitted application application_1411648354055_0004
14/09/25 21:15:59 INFO mapreduce.Job: The url to track the job: http://rhel0:8088/proxy/application_1411648354055_0004/
14/09/25 21:15:59 INFO mapreduce.Job: Running job: job_1411648354055_0004
14/09/25 21:16:09 INFO mapreduce.Job: Job job_1411648354055_0004 running in uber mode : false
14/09/25 21:16:09 INFO mapreduce.Job:  map 0% reduce 0%
14/09/25 21:16:18 INFO mapreduce.Job:  map 100% reduce 0%
14/09/25 21:16:19 INFO mapreduce.Job: Job job_1411648354055_0004 completed successfully
14/09/25 21:16:19 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=303087
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=295
        HDFS: Number of bytes written=44
        HDFS: Number of read operations=12
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=6
    Job Counters
        Launched map tasks=3
        Other local map tasks=3
        Total time spent by all maps in occupied slots (ms)=20280
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=20280
        Total vcore-seconds taken by all map tasks=20280
        Total megabyte-seconds taken by all map tasks=20766720
    Map-Reduce Framework
        Map input records=3
        Map output records=3
        Input split bytes=295
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=173
        CPU time spent (ms)=3400
        Physical memory (bytes) snapshot=325918720
        Virtual memory (bytes) snapshot=3535638528
        Total committed heap usage (bytes)=47579136
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=44
14/09/25 21:16:19 INFO mapreduce.ImportJobBase: Transferred 44 bytes in 24.7072 seconds (1.7809 bytes/sec)
14/09/25 21:16:19 INFO mapreduce.ImportJobBase: Retrieved 3 records.

The original table data in MySQL:
mysql> select * from demo_blog;
+----+-----------------+
| id | blog            |
+----+-----------------+
|  1 | micmiu.com      |
|  2 | ctosun.com      |
|  3 | baby.micmiu.com |
+----+-----------------+
3 rows in set (0.00 sec)

After the successful import, the data in HDFS looks like this (browsed from the Pig grunt shell):
grunt> pwd
hdfs://rhel0:8020/user/grid/demo_blog
grunt> ls
hdfs://rhel0:8020/user/grid/demo_blog/_SUCCESS<r 4>    0
hdfs://rhel0:8020/user/grid/demo_blog/part-m-00000<r 4>    13
hdfs://rhel0:8020/user/grid/demo_blog/part-m-00001<r 4>    13
hdfs://rhel0:8020/user/grid/demo_blog/part-m-00002<r 4>    18
grunt> cat part-m-00000
1,micmiu.com
grunt> cat part-m-00001
2,ctosun.com
grunt> cat part-m-00002
3,baby.micmiu.com

So in the end HDFS has a directory named after the MySQL table, containing three files with one record each. But one has to ask: why does localhost fail while the IP works, especially since MySQL is installed on the very node where Sqoop runs? I still can't make sense of it. Could some expert point me in the right direction? Many thanks!
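P.S. The three part files match the "number of splits:3" line in the job log: each map task writes one part-m-NNNNN file. If a single output file is wanted, the mapper count can be set with Sqoop's -m (--num-mappers) option; a sketch using the same connect string as above:

[grid@rhel0 ~]$ sqoop import --connect jdbc:mysql://192.168.122.100:3306/sqoop --username sqoop -P --table demo_blog -m 1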

Comments (5)

howtodown replied on 2014-9-25 22:03:48
This is a communication problem. localhost is essentially just an IP for which a host entry has been configured; your IP can likewise be written as a hostname, as long as you configure it in the hosts file.
So localhost itself is not the key point; it still comes down to network configuration.
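A quick way to see this, assuming the import job's map tasks land on the worker nodes (rhel1 to rhel4 in this thread) rather than on the client: every map task opens its own JDBC connection, so localhost in the connect string is resolved on whatever node the task runs on, not on the node where sqoop was launched. A sketch, with the hostnames from this thread:

# On the client rhel0, localhost happens to be the MySQL host, so list-tables works:
getent hosts localhost                              # 127.0.0.1  localhost ...
mysql -h 127.0.0.1 -u sqoop -p -e 'SELECT 1'        # succeeds on rhel0

# On a worker such as rhel1, the same address points at the worker itself,
# where nothing listens on 3306 -- hence "Connection refused":
mysql -h 127.0.0.1 -u sqoop -p -e 'SELECT 1'        # fails on rhel1

# A name that every node resolves to the real MySQL host works everywhere:
mysql -h rhel0.updb.com -u sqoop -p -e 'SELECT 1'

This would also explain why list-tables succeeded: it runs entirely on the client, where localhost is the right machine.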

quenlang replied on 2014-9-26 09:21:53
Quoting howtodown (2014-9-25 22:03):
This is a communication problem. localhost is essentially just an IP for which a host entry has been configured; your IP can likewise be written as a hostname, as long as you conf ...

Now that the moderator mentions it, it clicks. Look at my /etc/hosts:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.100  rhel0.updb.com rhel0
192.168.122.101  rhel1.updb.com rhel1
192.168.122.102  rhel2.updb.com rhel2
192.168.122.103  rhel3.updb.com rhel3
192.168.122.104  rhel4.updb.com rhel4
I never added a localhost entry of my own to /etc/hosts; the system's default localhost record resolves to 127.0.0.1, which is clearly not right here. Today I changed the IP in the sqoop command to the domain name that /etc/hosts maps to that IP, and the import succeeded. The moderator nailed it. Much obliged!
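For reference, the working command with a hostname instead of the raw IP would look like this (a sketch based on the /etc/hosts above; any name that every cluster node resolves to 192.168.122.100 should behave the same):

[grid@rhel0 ~]$ sqoop import --connect jdbc:mysql://rhel0.updb.com:3306/sqoop --username sqoop -P --table demo_blog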

manmanluo replied on 2014-11-5 09:09:30
Hello, I am using IPs throughout. This command returns the correct result:
[root@s00065 ~]# sqoop list-tables --connect jdbc:mysql://192.168.22.220:3306/datavalley --username root --password root123
But this one reports connection refused. Why?
sqoop import --connect jdbc:mysql://192.168.22.220:3306/datavalley --username root --password root123 --table com_carrier
The exception is:
14/11/05 09:00:07 ERROR tool.ImportTool: Encountered IOException running import job: java.net.ConnectException: Call From s00065/192.168.22.215 to s00054:8021 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

hyj replied on 2014-11-5 17:43:39
Quoting manmanluo (2014-11-5 09:09):
Hello, I am using IPs throughout: [root@s00065 ~]# sqoop list-tables --connect jdbc:mysql://192.168.22.220:330 ...

Please describe the problem clearly: what are you trying to do, and from which table to which table?

My initial guess is that your command usage is incorrect.
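One quick check suggested by the exception itself: the refused call is from s00065 to s00054:8021 on the Hadoop side (8021 is commonly the JobTracker port), before any MySQL traffic is involved, so it is worth verifying first whether anything is listening there at all. A sketch, assuming nc is available on the client:

[root@s00065 ~]# nc -vz s00054 8021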
