The code performs an append operation, adding new data to an existing file on HDFS, but it keeps failing with an error:
Path hdfs_path = new Path(host + path + table + "/part.txt");
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(hdfs_path.toString()), conf);
fs.setReplication(hdfs_path, (short) 1);
InputStream in = new BufferedInputStream(new ByteArrayInputStream(data.getBytes()));
OutputStream out = fs.append(hdfs_path);
IOUtils.copyBytes(in, out, conf);
out.close();
IOUtils.closeStream(in); // close the input once; a second in.close() was redundant
fs.close();              // close the FileSystem last, after its streams
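Independent of the lease error itself, the copy-and-close step above can be written with try-with-resources so each stream is closed exactly once, in the right order, even when the copy throws. A minimal sketch of the same pattern using only java.io in-memory streams (the class and method names here are illustrative, not part of the original code):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class AppendCopy {
    // Copies all bytes from in to out, mirroring IOUtils.copyBytes(in, out, conf).
    // try-with-resources closes both streams exactly once, input and output alike,
    // even if read() or write() throws mid-copy.
    public static void copy(InputStream in, OutputStream out) throws IOException {
        try (InputStream src = new BufferedInputStream(in); OutputStream dst = out) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = src.read(buf)) != -1) {
                dst.write(buf, 0, n);
            }
        }
    }
}
```

With an HDFS stream, `fs.append(hdfs_path)` would be passed as `out`, and `fs.close()` would still happen separately, after the streams it handed out are closed.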
Error message:
org.apache.hadoop.ipc.RemoteException: failed to create file /user/root/uair11/air_info_package_cache_/part.txt for DFSClient_NONMAPREDUCE_1195989360_48 on client 192.168.22.129 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2342)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2453)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2414)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:508)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:320)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48063)
at ...