
Hive cannot drop a table

yjt | posted 2020-05-29 11:15:31 in Errors & Exceptions
Hive version: 3.1.2
1. Problem description:
A production cluster has a partitioned Hive table that fails to drop with an out-of-heap-space error:

2020-05-29T11:10:35,315 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metrics.PerfLogger: </PERFLOG method=drop_table_with_environment_context start=1590721511413 end=1590721835315 duration=323902 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=0 retryCount=-1 error=true>
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Java heap space
2020-05-29T11:10:35,315 ERROR [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] exec.DDLTask: Failed
java.lang.OutOfMemoryError: Java heap space
        at org.datanucleus.store.connection.AbstractManagedConnection.transactionPreClose(AbstractManagedConnection.java:98) ~[datanucleus-core-4.1.17.jar:?]
        at org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionPreRollBack(ConnectionManagerImpl.java:445) ~[datanucleus-core-4.1.17.jar:?]
        at org.datanucleus.TransactionImpl.internalPreRollback(TransactionImpl.java:506) ~[datanucleus-core-4.1.17.jar:?]
        at org.datanucleus.TransactionImpl.rollback(TransactionImpl.java:433) ~[datanucleus-core-4.1.17.jar:?]
        at org.datanucleus.api.jdo.JDOTransaction.rollback(JDOTransaction.java:185) ~


At first I assumed the Hive client heap was too small, but the error persisted even after raising the heap size in hive-env.sh, so I doubt that is the cause.
2. Analysis and attempted fixes:
1) Enabled debug logging
The screen fills with output like the following:
2020-05-29T11:10:29,906 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.86995ms + 8.14E-4ms, the query is [select "PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on "PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"     and "TBLS"."TBL_NAME" = ?   inner join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"      and "DBS"."NAME" = ?  where "DBS"."CTLG_NAME" = ?  order by "PART_NAME" asc]
2020-05-29T11:10:29,907 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.606457ms + 0.040249ms, the query is [select "PARTITIONS"."PART_ID", "SDS"."SD_ID", "SDS"."CD_ID", "SERDES"."SERDE_ID", "PARTITIONS"."CREATE_TIME", "PARTITIONS"."LAST_ACCESS_TIME", "SDS"."INPUT_FORMAT", "SDS"."IS_COMPRESSED", "SDS"."IS_STOREDASSUBDIRECTORIES", "SDS"."LOCATION", "SDS"."NUM_BUCKETS", "SDS"."OUTPUT_FORMAT", "SERDES"."NAME", "SERDES"."SLIB" from "PARTITIONS"  left outer join "SDS" on "PARTITIONS"."SD_ID" = "SDS"."SD_ID"   left outer join "SERDES" on "SDS"."SERDE_ID" = "SERDES"."SERDE_ID" where "PART_ID" in (200263) order by "PART_NAME" asc]
2020-05-29T11:10:29,908 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.366365ms + 0.034568ms, the query is [select "PART_ID", "PARAM_KEY", "PARAM_VALUE" from "PARTITION_PARAMS" where "PART_ID" in (200263) and "PARAM_KEY" is not null order by "PART_ID" asc]
2020-05-29T11:10:29,908 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.296375ms + 0.02211ms, the query is [select "PART_ID", "PART_KEY_VAL" from "PARTITION_KEY_VALS" where "PART_ID" in (200263) order by "PART_ID" asc, "INTEGER_IDX" asc]
2020-05-29T11:10:29,908 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.309955ms + 0.004444ms, the query is [select "SD_ID", "PARAM_KEY", "PARAM_VALUE" from "SD_PARAMS" where "SD_ID" in (206441) and "PARAM_KEY" is not null order by "SD_ID" asc]
2020-05-29T11:10:29,909 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.329505ms + 0.01652ms, the query is [select "SD_ID", "COLUMN_NAME", "SORT_COLS"."ORDER" from "SORT_COLS" where "SD_ID" in (206441) order by "SD_ID" asc, "INTEGER_IDX" asc]
2020-05-29T11:10:29,909 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.301971ms + 0.00435ms, the query is [select "SD_ID", "BUCKET_COL_NAME" from "BUCKETING_COLS" where "SD_ID" in (206441) order by "SD_ID" asc, "INTEGER_IDX" asc]
2020-05-29T11:10:29,909 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.318231ms + 0.016342ms, the query is [select "SD_ID", "SKEWED_COL_NAME" from "SKEWED_COL_NAMES" where "SD_ID" in (206441) order by "SD_ID" asc, "INTEGER_IDX" asc]
2020-05-29T11:10:29,910 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.547052ms + 0.034132ms, the query is [select "CD_ID", "COMMENT", "COLUMN_NAME", "TYPE_NAME" from "COLUMNS_V2" where "CD_ID" in (6913) order by "CD_ID" asc, "INTEGER_IDX" asc]
2020-05-29T11:10:29,910 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.MetaStoreDirectSql: Direct SQL query in 0.353399ms + 0.034575ms, the query is [select "SERDE_ID", "PARAM_KEY", "PARAM_VALUE" from "SERDE_PARAMS" where "SERDE_ID" in (206441) and "PARAM_KEY" is not null order by "SERDE_ID" asc]
2020-05-29T11:10:29,910 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.ObjectStore: Commit transaction: count = 1, isactive true  at:
        org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.commit(ObjectStore.java:3691)
        org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3588)
        org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsInternal(ObjectStore.java:2745)
2020-05-29T11:10:29,910 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.ObjectStore: 1 entries retrieved using SQL in 5.147804ms
2020-05-29T11:10:29,910 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.ObjectStore: Open transaction: count = 2, isActive = true  at:
        org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:2639)
        sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-29T11:10:31,068 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.ObjectStore: Rollback transaction, isActive: true  at:
        org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:2656)
        sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-29T11:10:35,207 DEBUG [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.ObjectStore: rolling back transaction: no open transactions: 0  at:
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-29T11:10:35,315 ERROR [456534d6-6dbd-4309-87a3-2ea10e0cd869 main] metastore.RetryingHMSHandler: java.lang.OutOfMemoryError: Java heap space
        at org.datanucleus.store.connection.AbstractManagedConnection.transactionPreClose(AbstractManagedConnection.java:98)
        at org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionPreRollBack(ConnectionManagerImpl.java:445)
        at org.datanucleus.TransactionImpl.internalPreRollback(TransactionImpl.java:506)


I suspect the out-of-memory error is a consequence of the above.
Search results suggest this kind of error comes from a character-set mismatch in the Hive database inside MySQL; the charset there is indeed utf-8, but even a charset mismatch should not make a table impossible to drop.
2) Truncated the table and deleted its HDFS directory, then tried dropping it again; the drop still fails.

The cause appears to be corrupted metadata, but I have no concrete fix. Hoping someone passing by can take a look.
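For anyone hitting the same thing: the debug log above shows exactly which metastore tables the drop walks, so the rows can be checked by hand in the MySQL metastore. This is only a sketch; the table and column names are taken from the logged direct-SQL queries, while the placeholder identifiers `problem_db` and `problem_table` are assumptions to be replaced with the real names.

```sql
-- Count the partition rows the drop would have to delete for the
-- problem table (table/column names as seen in the debug log above).
SELECT COUNT(*)
FROM PARTITIONS P
JOIN TBLS T ON P.TBL_ID = T.TBL_ID
JOIN DBS  D ON T.DB_ID  = D.DB_ID
WHERE T.TBL_NAME = 'problem_table'
  AND D.NAME     = 'problem_db';
```

If this count disagrees with what `SHOW PARTITIONS` reports from Hive, or the join turns up orphaned rows, that would point at the kind of metadata inconsistency suspected above.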

5 replies

hyj | posted 2020-5-29 12:38:35
The memory on the machine running the MySQL client might be the problem.
Please post your settings so we can see exactly how things are configured.

You can try increasing the metastore service's memory:
find the Hive install directory and add the following to metastore.sh:

export HIVE_METASTORE_HADOOP_OPTS="-Xms4096m -Xmx4096m"


You can also try these adjustments:
hive> set mapreduce.map.memory.mb=4096;
hive> set mapreduce.reduce.memory.mb=5120;


yjt 发表于 2020-5-29 14:08:07
首先很感谢帮忙!!!这个个人感觉跟mysql客户端的配置应该是没什么太大的关系,首先:
1、通过hive客户端删表的话,用到的基本也都是hive客户端的内存;
2、其他的表在hive客户端通过drop的方式是可以正常删除的;
3、这表目前只是一张空表,数据已经被完全清除。

调整export HIVE_METASTORE_HADOOP_OPTS="-Xms4096m -Xmx4096m"这个参数
这个参数应该是metastore守护进程的jvm大小,一开始我是配置的hive-env.sh文件的heapsize参数来调整的hive进程jvm内存大小,对于这两个配置jvm的地方,在脚本里面配置的优先级应该会比Hive-env.sh这个文件配置的要大。
hepasize参数我从默认值调整到8192再到10240,依然有这个问题。所以我怀疑跟内存无关,况且删除一张空表,内存占用不会太大。


像mr的相关参数的话,drop 表貌似是不走mr的。


hyj | posted 2020-5-29 15:38:48
(quoting yjt, 2020-5-29 14:09)

Strange, then. Since only this one table is affected, try deleting it directly in the metastore database.

yjt | posted 2020-6-3 17:35:07
(quoting hyj, 2020-5-29 15:38)

In the end I resolved this by deleting the metadata directly, but I still don't know what happened underneath that kept Hive from dropping the table. Something to dig into later.
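For the record, "deleting the metadata" here means removing the table's rows from the metastore database by hand. A rough sketch of the order of operations, under the assumption of a MySQL-backed metastore and using the table names seen in the debug log above (`12345` is a placeholder for the problem table's TBL_ID, looked up beforehand; back up the metastore first, and note that SDS/SERDES child rows are omitted for brevity):

```sql
-- Illustrative only: delete child rows first to satisfy foreign keys.
DELETE FROM PARTITION_KEY_VALS WHERE PART_ID IN
  (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 12345);
DELETE FROM PARTITION_PARAMS WHERE PART_ID IN
  (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 12345);
DELETE FROM PARTITIONS     WHERE TBL_ID = 12345;
DELETE FROM PARTITION_KEYS WHERE TBL_ID = 12345;
DELETE FROM TABLE_PARAMS   WHERE TBL_ID = 12345;
DELETE FROM TBLS           WHERE TBL_ID = 12345;
```

This is a last resort: editing the metastore directly bypasses Hive's own consistency checks, which is presumably why the root cause here was never identified.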
