
Oozie (installed via CM) reports "database does not exist" when calling a Hive task

Oozie runs a simple Hive script:
use mytest99;
select * from test where id > 500;


But it fails with a rather strange error:

Script [script.q] content:
----------------------------
-- Licensed to the Apache Software Foundation (ASF) under one
-- or more contributor license agreements.  See the NOTICE file
-- distributed with this work for additional information
-- regarding copyright ownership.  The ASF licenses this file
-- to you under the Apache License, Version 2.0 (the
-- "License"); you may not use this file except in compliance
-- with the License.  You may obtain a copy of the License at
--
--     http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
use mytest99;
select * from test where id > 500;
------------------------
Parameters:
------------------------
  INPUT=/user/root/examples/input-data/table
  OUTPUT=/user/root/examples/output-data/hive
------------------------
Hive command arguments :
             --hiveconf
             hive.log4j.file=/yarn/nm/usercache/root/appcache/application_1468285617775_0016/container_1468285617775_0016_01_000001/hive-log4j.properties
             --hiveconf
             hive.log4j.exec.file=/yarn/nm/usercache/root/appcache/application_1468285617775_0016/container_1468285617775_0016_01_000001/hive-exec-log4j.properties
             -f
             script.q
             --hivevar
             INPUT=/user/root/examples/input-data/table
             --hivevar
             OUTPUT=/user/root/examples/output-data/hive

Fetching child yarn jobs
tag id : oozie-fb623f09053130f4ba9b0be07802e892
Child yarn jobs are found -
=================================================================
>>> Invoking Hive command line now >>>

9992 [uber-SubtaskRunner] WARN  org.apache.hadoop.hive.conf.HiveConf  - HiveConf of name hive.metastore.local does not exist
10216 [uber-SubtaskRunner] INFO  hive.metastore  - Trying to connect to metastore with URI thrift://192.168.1.104:9083
10266 [uber-SubtaskRunner] INFO  hive.metastore  - Opened a connection to metastore, current connections: 1
10339 [uber-SubtaskRunner] INFO  hive.metastore  - Connected to metastore.
10613 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: /yarn/nm/usercache/root/appcache/application_1468285617775_0016/container_1468285617775_0016_01_000001/hivetmp/scratchdir
10617 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: tmp/f1c42ac5-a321-4a41-943e-9e2fa47ec772_resources
10624 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created HDFS directory: /tmp/hive/root/f1c42ac5-a321-4a41-943e-9e2fa47ec772
10630 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created local directory: /yarn/nm/usercache/root/appcache/application_1468285617775_0016/container_1468285617775_0016_01_000001/hivetmp/scratchdir/f1c42ac5-a321-4a41-943e-9e2fa47ec772
10634 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - Created HDFS directory: /tmp/hive/root/f1c42ac5-a321-4a41-943e-9e2fa47ec772/_tmp_space.db
10642 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.session.SessionState  - No Tez session required at this point. hive.execution.engine=mr.
10666 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
10667 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
10667 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
10719 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.Driver  - Compiling command(queryId=yarn_20160712142222_3bedcd26-f702-4310-b000-9456bb3ef07f): use mytest99
10723 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
11164 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - </PERFLOG method=parse start=1468304536055 end=1468304536496 duration=441 from=org.apache.hadoop.hive.ql.Driver>
11168 [uber-SubtaskRunner] INFO  org.apache.hadoop.hive.ql.log.PerfLogger  - <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
11236 [uber-SubtaskRunner] ERROR org.apache.hadoop.hive.ql.Driver  - FAILED: SemanticException [Error 10072]: Database does not exist: mytest99
org.apache.hadoop.hive.ql.parse.SemanticException: Database does not exist: mytest99
The database definitely exists. And about that first line highlighted in red (WARN  org.apache.hadoop.hive.conf.HiveConf  - HiveConf of name hive.metastore.local does not exist): I have already set hive.metastore.local to false in hive-site.xml, but it does not seem to take effect, and there is no place in CM to configure this property.
This problem has been bothering me for a long time. At first I kept assuming the client could not reach the metastore server, but the log shows it does connect.
The strange symptom is that Hive's metadata database (MySQL) records none of the effects of my HQL statements. For example, when I run create database mytest;, the mytest.db directory is indeed created under /user/hive/warehouse/ on HDFS, but the metastore database is not updated accordingly. I don't know whether this is a problem with my Hive configuration or something else.
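A quick way to check this directly is to query the MySQL database that backs the metastore. This is only a minimal sketch: the host, user and schema name are assumptions taken from the hive-site.xml posted further down, and DBS is the table used by the standard Hive metastore schema.
[mw_shl_code=bash,true]
# List the databases the metastore's backing store actually knows about.
# Host, user and schema name are assumptions based on the posted config.
mysql -h hadoop1 -u root -p hive \
  -e "SELECT DB_ID, NAME, DB_LOCATION_URI FROM DBS;"
[/mw_shl_code]
If mytest99 shows up in that query but the Oozie-launched job still reports that the database does not exist, the job is most likely talking to a different metastore (or to a metastore backed by a different database) than the Hive CLI.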

einhep posted at 2016-7-12 15:19:57
Check whether the current user has permission to access it.
It is best to grant 777 on the Hive directory.
Assuming the Hive path is /tmp/hive:
[mw_shl_code=bash,true] hadoop fs -chmod -R 777 /tmp/hive[/mw_shl_code]
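If it does turn out to be a permissions issue, the warehouse directory is worth a look as well. A rough sketch; the path is taken from the hive.metastore.warehouse.dir value posted later in this thread:
[mw_shl_code=bash,true]
# Check ownership/permissions of the warehouse and of the new database directory,
# then open them up the same way as /tmp/hive above.
hadoop fs -ls -d /user/hive/warehouse /user/hive/warehouse/mytest99.db
hadoop fs -chmod -R 777 /user/hive/warehouse
[/mw_shl_code]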

liuzhixin137 posted at 2016-7-12 16:35:44
einhep posted at 2016-7-12 15:19
Check whether the current user has permission to access it.
It is best to grant 777 on the Hive directory.
Assuming the Hive path is /tmp/hive:

Yes, it does. Every time I change something I run that command to grant full permissions.

liuzhixin137 posted at 2016-7-12 16:36:27
@levycui @helianthus @desehawk

easthome001 posted at 2016-7-12 17:01:48
liuzhixin137 posted at 2016-7-12 16:36
@levycui @helianthus @desehawk

So what you mean is: you created a database, the HDFS directory is there, but the MySQL database has no record of it?

tntzbzc posted at 2016-7-12 17:08:07
Please post the server-side and client-side hive-site.xml configuration files.

liuzhixin137 posted at 2016-7-12 17:12:49
easthome001 posted at 2016-7-12 17:01
So what you mean is: you created a database, the HDFS directory is there, but the MySQL database has no record of it?

Exactly, it's very strange.

liuzhixin137 posted at 2016-7-12 17:17:55
tntzbzc posted at 2016-7-12 17:08
Please post the server-side and client-side hive-site.xml configuration files.

I installed Hive and Oozie through CM.
Hive uses the default configuration; the only thing I changed afterwards was the Hive Metastore Server scope, as shown in the screenshot.

Then, since the Oozie workflow needs a hive-site.xml, I looked around and took a hive-site.xml from Hive's configuration directory:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.1.104:9083</value>
  </property>  
  <property>  
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop1:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>  
        <name>javax.jdo.option.ConnectionDriverName</name>  
        <value>com.mysql.jdbc.Driver</value>  
  </property>                                   
  <property>  
        <name>hive.metastore.local</name>  
        <value>false</value>
  </property>
  <property>
          <name>javax.jdo.option.ConnectionUserName</name>
          <value>root</value>                                  
        </property>
        <property>
          <name>javax.jdo.option.ConnectionPassword</name>
          <value>123456</value>
        </property>
        <property>
        <name>oozie.service.ProxyUserService.proxyuser.oozie.groups</name>
        <value>*</value>
    </property>
       
        <property>  
        <name>oozie.service.HadoopAccessorService.supported.filesystems</name>  
       <value>hdfs,viewfs</value>  
    </property>
  <!-- The properties below were in the original file; everything above this comment was added by me manually. -->
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>300</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://hadoop2:8020/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.warehouse.subdir.inherit.perms</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask.size</name>
    <value>20971520</value>
  </property>
  <property>
    <name>hive.optimize.bucketmapjoin.sortedmerge</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.smbjoin.cache.rows</name>
    <value>10000</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>
  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>67108864</value>
  </property>
  <property>
    <name>hive.exec.copyfile.maxsize</name>
    <value>33554432</value>
  </property>
  <property>
    <name>hive.exec.reducers.max</name>
    <value>1099</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.checkinterval</name>
    <value>4096</value>
  </property>
  <property>
    <name>hive.vectorized.groupby.flush.percent</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.compute.query.using.stats</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.vectorized.execution.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.vectorized.execution.reduce.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.merge.mapfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.mapredfiles</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.cbo.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion</name>
    <value>minimal</value>
  </property>
  <property>
    <name>hive.fetch.task.conversion.threshold</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.limit.pushdown.memory.usage</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hive.merge.sparkfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.merge.smallfiles.avgsize</name>
    <value>16777216</value>
  </property>
  <property>
    <name>hive.merge.size.per.task</name>
    <value>268435456</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.optimize.reducededuplication.min.reducer</name>
    <value>4</value>
  </property>
  <property>
    <name>hive.map.aggr</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.map.aggr.hash.percentmemory</name>
    <value>0.5</value>
  </property>
  <property>
    <name>hive.optimize.sort.dynamic.partition</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.execution.engine</name>
    <value>mr</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>912680550</value>
  </property>
  <property>
    <name>spark.driver.memory</name>
    <value>966367641</value>
  </property>
  <property>
    <name>spark.executor.cores</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.yarn.driver.memoryOverhead</name>
    <value>102</value>
  </property>
  <property>
    <name>spark.yarn.executor.memoryOverhead</name>
    <value>153</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.initialExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.minExecutors</name>
    <value>1</value>
  </property>
  <property>
    <name>spark.dynamicAllocation.maxExecutors</name>
    <value>2147483647</value>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.class</name>
    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.use.SSL</name>
    <value>false</value>
  </property>
  <property>
    <name>spark.shuffle.service.enabled</name>
    <value>true</value>
  </property>
</configuration>



I'm not sure whether this is where the problem is.
(screenshot attachment: 232.jpg)
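One thing that may be worth double-checking here is which hive-site.xml the Oozie action actually picks up at run time. A rough sketch of how to inspect the copy that sits next to the workflow in HDFS; the path is an assumption based on the standard Oozie examples layout (the application above used /user/root/examples/... paths):
[mw_shl_code=bash,true]
# Assumption: the Hive example workflow and its hive-site.xml were deployed here.
hadoop fs -ls /user/root/examples/apps/hive/
# Confirm that this copy points at the expected metastore URI.
hadoop fs -cat /user/root/examples/apps/hive/hive-site.xml | grep -A 1 "hive.metastore.uris"
[/mw_shl_code]
If that file is missing or points somewhere else, the action may not be using the configuration you think it is.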

liuzhixin137 posted at 2016-7-12 17:23:13
Here is my hive-metastore log:

2016-07-12 14:19:29,512 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_all_databases from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:19:29,513 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_all_databases
2016-07-12 14:19:29,513 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_all_databases       
2016-07-12 14:19:29,521 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_all_databases start=1468304369512 end=1468304369521 duration=9 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:00,244 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=create_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:00,245 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 create_database: Database(name:mytest99, description:null, locationUri:null, parameters:null, ownerName:root, ownerType:USER)
2016-07-12 14:20:00,245 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 create_database: Database(name:mytest99, description:null, locationUri:null, parameters:null, ownerName:root, ownerType:USER)       
2016-07-12 14:20:00,267 INFO  org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://hadoop2:8020/user/hive/warehouse/mytest99.db
2016-07-12 14:20:00,296 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=create_database start=1468304400244 end=1468304400296 duration=52 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:04,129 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:04,130 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_database: mytest99
2016-07-12 14:20:04,131 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_database: mytest99       
2016-07-12 14:20:04,138 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1468304404129 end=1468304404138 duration=9 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:04,162 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:04,163 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_database: mytest99
2016-07-12 14:20:04,163 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_database: mytest99       
2016-07-12 14:20:04,170 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1468304404162 end=1468304404170 duration=8 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:04,172 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:04,172 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_database: mytest99
2016-07-12 14:20:04,173 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_database: mytest99       
2016-07-12 14:20:04,179 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1468304404172 end=1468304404179 duration=7 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:10,634 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_database from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:10,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_database: mytest99
2016-07-12 14:20:10,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_database: mytest99       
2016-07-12 14:20:10,643 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_database start=1468304410634 end=1468304410643 duration=9 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:10,668 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=create_table_with_environment_context from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:10,668 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 create_table: Table(tableName:test, dbName:mytest99, owner:root, createTime:1468304410, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null), temporary:false)
2016-07-12 14:20:10,668 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 create_table: Table(tableName:test, dbName:mytest99, owner:root, createTime:1468304410, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id, type:int, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format= , field.delim= }), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null), temporary:false)       
2016-07-12 14:20:10,688 INFO  org.apache.hadoop.hive.common.FileUtils: [pool-4-thread-2]: Creating directory if it doesn't exist: hdfs://hadoop2:8020/user/hive/warehouse/mytest99.db/test
2016-07-12 14:20:10,828 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=create_table_with_environment_context start=1468304410667 end=1468304410828 duration=161 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,289 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,290 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:15,290 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:15,323 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304415289 end=1468304415323 duration=34 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,332 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,332 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:15,332 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:15,350 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304415332 end=1468304415350 duration=18 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,352 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,353 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:15,353 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:15,368 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304415352 end=1468304415368 duration=16 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,409 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=alter_table_with_cascade from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,409 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 alter_table: db=mytest99 tbl=test newtbl=test
2016-07-12 14:20:15,409 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 alter_table: db=mytest99 tbl=test newtbl=test       
2016-07-12 14:20:15,447 INFO  hive.log: [pool-4-thread-2]: Updating table stats fast for test
2016-07-12 14:20:15,447 INFO  hive.log: [pool-4-thread-2]: Updated size of table test to 151
2016-07-12 14:20:15,457 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=alter_table_with_cascade start=1468304415409 end=1468304415457 duration=48 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,459 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,459 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:15,460 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:15,474 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304415459 end=1468304415474 duration=15 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,475 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,475 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:15,476 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:15,491 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304415475 end=1468304415491 duration=16 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:15,499 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=alter_table_with_cascade from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:15,500 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 alter_table: db=mytest99 tbl=test newtbl=test
2016-07-12 14:20:15,500 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 alter_table: db=mytest99 tbl=test newtbl=test       
2016-07-12 14:20:15,536 INFO  hive.log: [pool-4-thread-2]: Updating table stats fast for test
2016-07-12 14:20:15,536 INFO  hive.log: [pool-4-thread-2]: Updated size of table test to 151
2016-07-12 14:20:15,541 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=alter_table_with_cascade start=1468304415499 end=1468304415541 duration=42 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>
2016-07-12 14:20:33,008 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: <PERFLOG method=get_table from=org.apache.hadoop.hive.metastore.RetryingHMSHandler>
2016-07-12 14:20:33,009 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-2]: 2: source:192.168.1.104 get_table : db=mytest99 tbl=test
2016-07-12 14:20:33,010 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-4-thread-2]: ugi=root        ip=192.168.1.104        cmd=source:192.168.1.104 get_table : db=mytest99 tbl=test       
2016-07-12 14:20:33,056 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [pool-4-thread-2]: </PERFLOG method=get_table start=1468304433008 end=1468304433056 duration=48 from=org.apache.hadoop.hive.metastore.RetryingHMSHandler threadId=2 retryCount=0 error=false>

I looked through it; the configuration in there all looks normal, and there are no errors either.
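Since everything in this log looks healthy, another thing that might be worth ruling out is whether more than one metastore configuration is in play. On a CM-managed cluster the running Hive Metastore Server reads a config directory generated by the CM agent, so a rough way to see which backing database the server process is really using (the /var/run/cloudera-scm-agent/process path is the usual CM layout and is an assumption here):
[mw_shl_code=bash,true]
# Locate the most recent CM-generated config dir for the Hive Metastore Server.
HMS_DIR=$(ls -dt /var/run/cloudera-scm-agent/process/*HIVEMETASTORE* | head -1)
# See which JDBC URL (i.e. which MySQL database) that metastore is actually configured with.
grep -A 1 "javax.jdo.option.ConnectionURL" "$HMS_DIR/hive-site.xml"
[/mw_shl_code]
If that URL differs from the jdbc:mysql://hadoop1:3306/hive one in the hand-edited hive-site.xml, the Hive CLI and the Oozie job could be looking at two different sets of metadata.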

tntzbzc posted at 2016-7-12 18:03:37
Last edited by tntzbzc on 2016-7-12 18:17
liuzhixin137 posted at 2016-7-12 17:23
Here is my hive-metastore log: 2016-07-12 14:19:29,512 INFO  org.apache.hadoop.hive.ql.log.PerfLogger ...

That one is the server-side configuration file. In it, set this to:
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
</property>

