Check whether you have mixed up the Hadoop 1 API and the Hadoop 2 API.
Use the following as a reference:
Step 1: Upload the file
hadoop fs -put ./keyvalues.properties cache/keyvalues.properties
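Optional check: the relative destination cache/keyvalues.properties resolves under your HDFS home directory (/user/<username>/), so you can confirm the upload with:

hadoop fs -ls cache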
Step 2: Read the file in the Mapper
import java.io.FileReader;
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DistributedCacheMapper extends Mapper<LongWritable, Text, Text, Text> {

    private Properties cache;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
        // cached files are localized to the task node; this returns their local paths
        Path[] localCacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());

        if (localCacheFiles != null) {
            // expecting only a single file here
            for (int i = 0; i < localCacheFiles.length; i++) {
                Path localCacheFile = localCacheFiles[i];
                cache = new Properties();
                cache.load(new FileReader(localCacheFile.toString()));
            }
        } else {
            // do your error handling here
        }
    }

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // use the cache here:
        // if value contains some attribute, look it up with cache.get(...)
        // and do some action or replace it with something else
    }
}
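For completeness, here is one hypothetical way to fill in the map() body above (it slots into the DistributedCacheMapper class); the tab-separated input layout and the fallback behaviour are assumptions for illustration, not part of the reference code:

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    // hypothetical input format: "id<TAB>code"; the code is translated
    // through the keyvalues.properties loaded in setup()
    String[] fields = value.toString().split("\t");
    if (cache != null && fields.length >= 2) {
        // fall back to the original code if no mapping exists
        String replacement = cache.getProperty(fields[1], fields[1]);
        context.write(new Text(fields[0]), new Text(replacement));
    }
}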
Step 3: Register the file in the driver
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

// e.g. inside main(String[] args) throws Exception
JobConf jobConf = new JobConf();

// set other job properties here

// register the file uploaded in step 1; the '#' fragment names the symlink in the task working directory
DistributedCache.addCacheFile(new URI("cache/keyvalues.properties#keyvalues.properties"), jobConf);
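Note that the reference above mixes the new-API Mapper with an old-API JobConf/DistributedCache driver. If you stay entirely on the Hadoop 2 mapreduce API, the usual pattern is Job.addCacheFile() in the driver and context.getCacheFiles() in setup(). A minimal sketch under that assumption (the job name is a placeholder):

// Driver (Hadoop 2, org.apache.hadoop.mapreduce API)
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "distributed-cache-demo");
job.setJarByClass(DistributedCacheMapper.class);
// the '#' fragment makes the file visible as keyvalues.properties in the task working directory
job.addCacheFile(new URI("cache/keyvalues.properties#keyvalues.properties"));

// Mapper.setup() (Hadoop 2)
URI[] cacheFiles = context.getCacheFiles();
if (cacheFiles != null && cacheFiles.length > 0) {
    cache = new Properties();
    // read through the symlink created from the '#' fragment
    cache.load(new FileReader("keyvalues.properties"));
}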