When I connect to Hive through beeline, running select count(*)... fails with the error shown in the HiveServer2 log below, no matter whether I run it from the gateway node or directly on the Cloudera cluster's HiveServer2 host.
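For reference, the connection is made roughly like this (the hostname, port, and user here are placeholders, not the real values):

beeline -u jdbc:hive2://hiveserver2-host:10000 -n hdfs
0: jdbc:hive2://hiveserver2-host:10000> select count(*) from dbatest01;

HiveServer2 log: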
2016-02-18 23:01:22,129 WARN org.apache.hive.service.cli.thrift.ThriftCLIService: Error fetching results:
org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=f41635f6-cadc-47a1-9147-38cd43e40bb0]
at org.apache.hive.service.cli.operation.OperationManager.getOperationLogRowSet(OperationManager.java:259)
at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:658)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy23.fetchResults(Unknown Source)
at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:451)
at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:672)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-02-18 23:01:22,591 INFO org.apache.hadoop.mapreduce.JobSubmitter: number of splits:1
2016-02-18 23:01:22,851 INFO org.apache.hadoop.mapreduce.JobSubmitter: Submitting tokens for job: job_1453186830330_0012
2016-02-18 23:01:23,030 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1453186830330_0012
2016-02-18 23:01:23,038 INFO org.apache.hadoop.mapreduce.Job: The url to track the job:
2016-02-18 23:01:23,038 INFO org.apache.hadoop.hive.ql.exec.Task: Starting Job = job_1453186830330_0012, Tracking URL
2016-02-18 23:01:23,039 INFO org.apache.hadoop.hive.ql.exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.4.0-1.cdh5.4.0.p0.27/lib/hadoop/bin/hadoop job -kill job_1453186830330_0012
2016-02-18 23:01:39,159 INFO org.apache.hadoop.hive.ql.exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-02-18 23:01:39,254 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-18 23:01:39,255 INFO org.apache.hadoop.hive.ql.exec.Task: 2016-02-18 23:01:39,253 Stage-1 map = 0%, reduce = 0%
2016-02-18 23:02:00,318 INFO org.apache.hadoop.hive.ql.exec.Task: 2016-02-18 23:02:00,318 Stage-1 map = 100%, reduce = 100%
2016-02-18 23:02:02,503 ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1453186830330_0012 with errors
2016-02-18 23:02:02,504 ERROR org.apache.hadoop.hive.ql.exec.Task: Error during job, obtaining debugging information...
2016-02-18 23:02:02,514 ERROR org.apache.hadoop.hive.ql.exec.Task: Examining task ID: task_1453186830330_0012_m_000000 (and more) from job job_1453186830330_0012
2016-02-18 23:02:02,514 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-18 23:02:02,534 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-18 23:02:02,538 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-18 23:02:02,543 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-18 23:02:02,552 ERROR org.apache.hadoop.hive.ql.exec.Task:
Task with the most failures(4):
Task ID:
task_1453186830330_0012_m_000000
URL:
.....
-----
Diagnostic Messages for this Task:
Exception from container-launch.
Container id: container_1453186830330_0012_01_000005
Exit code: 127
2016-02-18 23:02:02,703 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Killed application application_1453186830330_0012
2016-02-18 23:02:02,743 ERROR org.apache.hadoop.hive.ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-02-18 23:02:02,743 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=Driver.execute start=1455865277190 end=1455865322743 duration=45553 from=org.apache.hadoop.hive.ql.Driver>
2016-02-18 23:02:02,743 INFO org.apache.hadoop.hive.ql.Driver: MapReduce Jobs Launched:
2016-02-18 23:02:02,744 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-18 23:02:02,744 INFO org.apache.hadoop.hive.ql.Driver: Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
2016-02-18 23:02:02,744 INFO org.apache.hadoop.hive.ql.Driver: Total MapReduce CPU Time Spent: 0 msec
2016-02-18 23:02:02,744 INFO org.apache.hadoop.hive.ql.log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-02-18 23:02:02,744 INFO ZooKeeperHiveLockManager: about to release lock for default
2016-02-18 23:02:02,806 INFO ZooKeeperHiveLockManager: about to release lock for _dummy_database/_dummy_table
2016-02-18 23:02:02,866 INFO ZooKeeperHiveLockManager: about to release lock for _dummy_database
2016-02-18 23:02:02,903 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=releaseLocks start=1455865322744 end=1455865322903 duration=159 from=org.apache.hadoop.hive.ql.Driver>
2016-02-18 23:02:02,903 ERROR org.apache.hive.service.cli.operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:147)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:70)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:209)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-02-18 23:02:02,907 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=f41635f6-cadc-47a1-9147-38cd43e40bb0]
2016-02-18 23:02:02,908 WARN org.apache.hive.service.cli.thrift.ThriftCLIService: Error fetching results:
org.apache.hive.service.cli.HiveSQLException: Couldn't find log associated with operation handle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=f41635f6-cadc-47a1-9147-38cd43e40bb0]
at org.apache.hive.service.cli.operation.OperationManager.getOperationLogRowSet(OperationManager.java:259)
at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:658)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy23.fetchResults(Unknown Source)
at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:451)
at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:672)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-02-18 23:02:02,942 INFO org.apache.hadoop.hive.ql.exec.ListSinkOperator: 183 finished. closing...
2016-02-18 23:02:02,942 INFO org.apache.hadoop.hive.ql.exec.ListSinkOperator: 183 Close done
2016-02-18 23:02:03,052 INFO org.apache.hadoop.hive.ql.log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-02-18 23:02:03,052 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=releaseLocks start=1455865323052 end=1455865323052 duration=0 from=org.apache.hadoop.hive.ql.Driver>
After searching online, I added the following setting to hive-site.xml:
<property>
  <name>hive.server2.logging.operation.enabled</name>
  <value>true</value>
</property>
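HiveServer2 has to be restarted for this change to take effect. If the default operation-log directory is not usable, the log location can also be set explicitly; the path below is only an example and must be writable by the HiveServer2 user:

<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/tmp/hive/operation_logs</value>
</property>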
Now, when I run select count(*) from table, I get the following error:
2016-02-21 18:54:32,402 INFO org.apache.hadoop.hive.ql.exec.Utilities: PLAN PATH = hdfs://hdfs/tmp/hive/hdfs/53096047-2ed4-4078-bd3d-54329423a721/hive_2016-02-21_18-54-30_917_6527613778831482776-4/-mr-10004/5b45d2a7-d158-42ce-a565-208d06eb6275/map.xml
2016-02-21 18:54:32,403 INFO org.apache.hadoop.hive.ql.exec.Utilities: PLAN PATH = hdfs://hdfs/tmp/hive/hdfs/53096047-2ed4-4078-bd3d-54329423a721/hive_2016-02-21_18-54-30_917_6527613778831482776-4/-mr-10004/5b45d2a7-d158-42ce-a565-208d06eb6275/reduce.xml
2016-02-21 18:54:32,421 WARN org.apache.hadoop.mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-02-21 18:54:36,077 INFO org.apache.hadoop.hive.ql.log.PerfLogger: <PERFLOG method=getSplits from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-02-21 18:54:36,078 INFO org.apache.hadoop.hive.ql.exec.Utilities: PLAN PATH = hdfs://hdfs/tmp/hive/hdfs/53096047-2ed4-4078-bd3d-54329423a721/hive_2016-02-21_18-54-30_917_6527613778831482776-4/-mr-10004/5b45d2a7-d158-42ce-a565-208d06eb6275/map.xml
2016-02-21 18:54:36,079 INFO org.apache.hadoop.hive.ql.io.CombineHiveInputFormat: Total number of paths: 1, launching 1 threads to check non-combinable ones.
2016-02-21 18:54:36,089 INFO org.apache.hadoop.hive.ql.io.CombineHiveInputFormat: CombineHiveInputSplit creating pool for hdfs://hdfs/tmp/hive/hdfs/53096047-2ed4-4078-bd3d-54329423a721/hive_2016-02-21_18-54-30_917_6527613778831482776-4/-mr-10003/0; using filter path hdfs://E3-GQC-HDFS/tmp/hive/hdfs/53096047-2ed4-4078-bd3d-54329423a721/hive_2016-02-21_18-54-30_917_6527613778831482776-4/-mr-10003/0
2016-02-21 18:54:36,101 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input paths to process : 1
2016-02-21 18:54:36,103 INFO org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 0, size left: 0
2016-02-21 18:54:36,104 INFO org.apache.hadoop.hive.ql.io.CombineHiveInputFormat: number of splits 1
2016-02-21 18:54:36,105 INFO org.apache.hadoop.hive.ql.io.CombineHiveInputFormat: Number of all splits 1
2016-02-21 18:54:36,105 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=getSplits start=1456109676077 end=1456109676105 duration=28 from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
2016-02-21 18:54:36,488 INFO org.apache.hadoop.mapreduce.JobSubmitter: number of splits:1
2016-02-21 18:54:36,802 INFO org.apache.hadoop.mapreduce.JobSubmitter: Submitting tokens for job: job_1453186830330_0014
2016-02-21 18:54:36,987 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1453186830330_0014
2016-02-21 18:54:36,997 INFO org.apache.hadoop.mapreduce.Job: The url to track the job: http://servername.domain.com:808 ... 1453186830330_0014/
2016-02-21 18:54:36,999 INFO org.apache.hadoop.hive.ql.exec.Task: Starting Job = job_1453186830330_0014, Tracking URL = http://servername.domain.com:808 ... 1453186830330_0014/
2016-02-21 18:54:36,999 INFO org.apache.hadoop.hive.ql.exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.4.0-1.cdh5.4.0.p0.27/lib/hadoop/bin/hadoop job -kill job_1453186830330_0014
2016-02-21 18:54:58,450 INFO org.apache.hadoop.hive.ql.exec.Task: Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-02-21 18:55:01,415 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2016-02-21 18:55:01,417 INFO org.apache.hadoop.hive.ql.exec.Task: 2016-02-21 18:55:01,415 Stage-1 map = 0%, reduce = 0%
2016-02-21 18:55:18,042 INFO org.apache.hadoop.hive.ql.exec.Task: 2016-02-21 18:55:18,040 Stage-1 map = 100%, reduce = 100%
2016-02-21 18:55:20,280 ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1453186830330_0014 with errors
2016-02-21 18:55:20,295 ERROR org.apache.hadoop.hive.ql.exec.Task: Error during job, obtaining debugging information...
2016-02-21 18:55:20,309 ERROR org.apache.hadoop.hive.ql.exec.Task: Examining task ID: task_1453186830330_0014_m_000000 (and more) from job job_1453186830330_0014
2016-02-21 18:55:20,309 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-21 18:55:20,331 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-21 18:55:20,337 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-21 18:55:20,342 WARN org.apache.hadoop.hive.shims.HadoopShimsSecure: Can't fetch tasklog: TaskLogServlet is not supported in MR2 mode.
2016-02-21 18:55:20,353 ERROR org.apache.hadoop.hive.ql.exec.Task:
Task with the most failures(4):
-----
Task ID:
task_1453186830330_0014_m_000000
URL:
http://servername.domain.com:808 ... 30330_0014_m_000000
-----
Diagnostic Messages for this Task:
Exception from container-launch.
Container id: container_1453186830330_0014_01_000005
Exit code: 127
Exception message: Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-d32 use a 32-bit data model if available
-d64 use a 64-bit data model if available
-server to select the "server" VM
The default VM is server,
because you are running on a server-class machine.
-cp <class search path of directories and zip/jar files>
-classpath <class search path of directories and zip/jar files>
A : separated list of directories, JAR archives,
and ZIP archives to search for class files.
-D<name>=<value>
set a system property
-verbose:[class|gc|jni]
enable verbose output
-version print product version and exit
-version:<value>
require the specified version to run
-showversion print product version and continue
-jre-restrict-search | -no-jre-restrict-search
include/exclude user private JREs in the version search
-? -help print this help message
-X print help on non-standard options
-ea[:<packagename>...|:<classname>]
-enableassertions[:<packagename>...|:<classname>]
enable assertions with specified granularity
-da[:<packagename>...|:<classname>]
-disableassertions[:<packagename>...|:<classname>]
disable assertions with specified granularity
-esa | -enablesystemassertions
enable system assertions
-dsa | -disablesystemassertions
disable system assertions
-agentlib:<libname>[=<options>]
load native agent library <libname>, e.g. -agentlib:hprof
see also, -agentlib:jdwp=help and -agentlib:hprof=help
-agentpath:<pathname>[=<options>]
load native agent library by full pathname
-javaagent:<jarpath>[=<options>]
load Java programming language agent, see java.lang.instrument
-splash:<imagepath>
show splash screen with specified image
See http://www.oracle.com/technetwor ... entation/index.html for more details.
Stack trace: ExitCodeException exitCode=127: Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-d32 use a 32-bit data model if available
-d64 use a 64-bit data model if available
-server to select the "server" VM
The default VM is server,
because you are running on a server-class machine.
-cp <class search path of directories and zip/jar files>
-classpath <class search path of directories and zip/jar files>
A : separated list of directories, JAR archives,
and ZIP archives to search for class files.
-D<name>=<value>
set a system property
-verbose:[class|gc|jni]
enable verbose output
-version print product version and exit
-version:<value>
require the specified version to run
-showversion print product version and continue
-jre-restrict-search | -no-jre-restrict-search
include/exclude user private JREs in the version search
-? -help print this help message
-X print help on non-standard options
-ea[:<packagename>...|:<classname>]
-enableassertions[:<packagename>...|:<classname>]
enable assertions with specified granularity
-da[:<packagename>...|:<classname>]
-disableassertions[:<packagename>...|:<classname>]
disable assertions with specified granularity
-esa | -enablesystemassertions
enable system assertions
-dsa | -disablesystemassertions
disable system assertions
-agentlib:<libname>[=<options>]
load native agent library <libname>, e.g. -agentlib:hprof
see also, -agentlib:jdwp=help and -agentlib:hprof=help
-agentpath:<pathname>[=<options>]
load native agent library by full pathname
-javaagent:<jarpath>[=<options>]
load Java programming language agent, see java.lang.instrument
-splash:<imagepath>
show splash screen with specified image
See http://www.oracle.com/technetwor ... entation/index.html for more details.
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 127
2016-02-21 18:55:20,457 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Killed application application_1453186830330_0014
2016-02-21 18:55:20,497 ERROR org.apache.hadoop.hive.ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
2016-02-21 18:55:20,498 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=Driver.execute start=1456109671461 end=1456109720498 duration=49037 from=org.apache.hadoop.hive.ql.Driver>
2016-02-21 18:55:20,498 INFO org.apache.hadoop.hive.ql.Driver: MapReduce Jobs Launched:
2016-02-21 18:55:20,500 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2016-02-21 18:55:20,500 INFO org.apache.hadoop.hive.ql.Driver: Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
2016-02-21 18:55:20,500 INFO org.apache.hadoop.hive.ql.Driver: Total MapReduce CPU Time Spent: 0 msec
2016-02-21 18:55:20,501 INFO org.apache.hadoop.hive.ql.log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-02-21 18:55:20,501 INFO ZooKeeperHiveLockManager: about to release lock for default/dbatest01
2016-02-21 18:55:20,558 INFO ZooKeeperHiveLockManager: about to release lock for default
2016-02-21 18:55:20,598 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=releaseLocks start=1456109720501 end=1456109720598 duration=97 from=org.apache.hadoop.hive.ql.Driver>
2016-02-21 18:55:20,598 ERROR org.apache.hive.service.cli.operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:147)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:70)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:209)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-02-21 18:55:20,616 INFO org.apache.hadoop.hive.ql.exec.ListSinkOperator: 19 finished. closing...
2016-02-21 18:55:20,616 INFO org.apache.hadoop.hive.ql.exec.ListSinkOperator: 19 Close done
2016-02-21 18:55:20,703 INFO org.apache.hadoop.hive.ql.log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2016-02-21 18:55:20,703 INFO org.apache.hadoop.hive.ql.log.PerfLogger: </PERFLOG method=releaseLocks start=1456109720703 end=1456109720703 duration=0 from=org.apache.hadoop.hive.ql.Driver>
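From the java usage text in the container diagnostics it looks like the container is invoking java with arguments it cannot parse. One thing I thought of checking is the JVM opts the job actually picks up; these are standard MR2/YARN properties and their current values can be printed from the beeline session like this (nothing is changed, only printed):

set mapreduce.map.java.opts;
set mapreduce.reduce.java.opts;
set yarn.app.mapreduce.am.command-opts;
set mapred.child.java.opts;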
Does anyone know what this problem is?