Questions this article addresses:
1. What is Hadoop KMS?
2. How do you start and stop the KMS?
3. How is KMS security configured?
Hadoop KMS is a cryptographic key management server based on Hadoop's KeyProvider API. It provides a client that is a KeyProvider implementation and interacts with the KMS using the KMS HTTP REST API.
The KMS and its client have built-in security: they support HTTP SPNEGO Kerberos authentication and HTTPS secure transport.
The KMS is a Java web application that runs on a preconfigured Tomcat server bundled with the Hadoop distribution.
KMS Client Configuration
The KMS client KeyProvider uses the kms scheme, and the embedded URL must be the URL of the KMS. For example, for a KMS running on http://localhost:16000/kms, the KeyProvider URI is kms://http@localhost:16000/kms; for a KMS running on https://localhost:16000/kms, the KeyProvider URI is kms://https@localhost:16000/kms.
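The URL-to-URI mapping above can be sketched in Python (the helper name is ours, not part of any Hadoop client library):

```python
from urllib.parse import urlparse

def kms_provider_uri(kms_url: str) -> str:
    """Build the KeyProvider URI for a KMS running at the given URL.

    The kms scheme embeds the original URL: the HTTP scheme moves in
    front of an '@' separator, followed by host, port and path.
    (Hypothetical helper for illustration only.)
    """
    parsed = urlparse(kms_url)
    return f"kms://{parsed.scheme}@{parsed.netloc}{parsed.path}"

print(kms_provider_uri("http://localhost:16000/kms"))
# kms://http@localhost:16000/kms
```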
KMS
KMS Configuration
Configure the KMS backing KeyProvider properties in the etc/hadoop/kms-site.xml configuration file:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.key.provider.uri</name>
<value>jceks://file@/${user.home}/kms.keystore</value>
</property>
<property>
<name>hadoop.security.keystore.java-keystore-provider.password-file</name>
<value>kms.keystore.password</value>
</property>[/mw_shl_code]
The password file is looked up in Hadoop's configuration directory via the classpath.
NOTE: You need to restart the KMS for the configuration change to take effect.
KMS Cache
The KMS caches keys for a short period of time to avoid excessive hits to the underlying key provider.
The cache is enabled by default (it can be disabled by setting the hadoop.kms.cache.enable boolean property to false). The cache is used with the following three methods only: getCurrentKey(), getKeyVersion() and getMetadata().
For the getCurrentKey() method, cached entries are kept for at most 30000 milliseconds regardless of the number of times the key is being accessed (to avoid a stale key being considered current).
For the getKeyVersion() method, cached entries are kept with a default inactivity timeout of 600000 milliseconds (10 minutes). These timeouts are configurable via the following properties in the etc/hadoop/kms-site.xml configuration file:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.cache.enable</name>
<value>true</value>
</property>
<property>
<name>hadoop.kms.cache.timeout.ms</name>
<value>600000</value>
</property>
<property>
<name>hadoop.kms.current.key.cache.timeout.ms</name>
<value>30000</value>
</property>[/mw_shl_code]
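The two cache policies above, a hard TTL for getCurrentKey() entries and an inactivity timeout for getKeyVersion() entries, can be sketched as follows. This is our illustration of the described behavior, not Hadoop's actual cache implementation:

```python
import time

class ExpiringCache:
    """Sketch of the two KMS cache policies: a hard TTL (entry expires a
    fixed time after insertion) and an idle timeout (each access extends
    the entry's life). Illustration only, not the Hadoop code."""

    def __init__(self, timeout_ms, sliding=False, clock=time.monotonic):
        self.timeout = timeout_ms / 1000.0
        self.sliding = sliding      # True = idle timeout, False = hard TTL
        self.clock = clock          # injectable for testing
        self._store = {}            # key -> (value, timestamp)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stamp = entry
        if self.clock() - stamp >= self.timeout:
            del self._store[key]    # expired
            return None
        if self.sliding:            # idle timeout: refresh on access
            self._store[key] = (value, self.clock())
        return value

# hadoop.kms.current.key.cache.timeout.ms = 30000 -> hard TTL
current_key_cache = ExpiringCache(30000, sliding=False)
# hadoop.kms.cache.timeout.ms = 600000 -> inactivity timeout
key_version_cache = ExpiringCache(600000, sliding=True)
```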
KMS Aggregated Audit Logs
Audit logs are aggregated for API accesses to the GET_KEY_VERSION, GET_CURRENT_KEY, DECRYPT_EEK and GENERATE_EEK operations. Entries are grouped by the combined key (user, key, operation) for a configurable aggregation interval, after which the number of accesses to the specified end-point by the user for the given key is flushed to the audit log.
The aggregation interval is configured via the following property:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.aggregation.delay.ms</name>
<value>10000</value>
</property>[/mw_shl_code]
Start/Stop the KMS
To start/stop the KMS, use the KMS's kms.sh script. For example:
[mw_shl_code=bash,true]hadoop-2.7.1 $ sbin/kms.sh start[/mw_shl_code]
NOTE: Invoking the script without any parameters lists all possible parameters (start, stop, run, etc.). The kms.sh script is a wrapper around Tomcat's catalina.sh script that sets up the environment variables and Java system properties required to run the KMS.
Embedded Tomcat Configuration
To configure the embedded Tomcat, go to share/hadoop/kms/tomcat/conf. The KMS pre-configures the HTTP and Admin ports in Tomcat's server.xml to 16000 and 16001.
Tomcat logs are also preconfigured to go to Hadoop's logs/ directory.
The following environment variables (which can be set in the KMS's etc/hadoop/kms-env.sh script) can be used to alter those values:
- KMS_HTTP_PORT
- KMS_ADMIN_PORT
- KMS_MAX_THREADS
- KMS_LOG
NOTE: You need to restart the KMS for the configuration changes to take effect.
Loading Native Libraries
The following environment variable (which can be set in the KMS's etc/hadoop/kms-env.sh script) can be used to specify the location of any required native libraries, for example Tomcat's native Apache Portable Runtime (APR) libraries:
[mw_shl_code=bash,true]JAVA_LIBRARY_PATH
[/mw_shl_code]
KMS Security Configuration
Enabling Kerberos HTTP SPNEGO Authentication
Configure the Kerberos etc/krb5.conf file with the information of your KDC server. Create a service principal and its keytab for the KMS; it must be an HTTP service principal.
Configure the KMS etc/hadoop/kms-site.xml with the correct security values. For example:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.keytab</name>
<value>${user.home}/kms.keytab</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.principal</name>
<value>HTTP/localhost</value>
</property>
<property>
<name>hadoop.kms.authentication.kerberos.name.rules</name>
<value>DEFAULT</value>
</property>[/mw_shl_code]
NOTE: You need to restart the KMS for the configuration changes to take effect.
KMS Proxyuser Configuration
Each proxy user must be configured in etc/hadoop/kms-site.xml using the following properties:
[mw_shl_code=bash,true]<property>
<name>hadoop.kms.proxyuser.#USER#.users</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.kms.proxyuser.#USER#.hosts</name>
<value>*</value>
</property>[/mw_shl_code]
#USER# is the username of the proxy user being configured.
The users property indicates the users that can be impersonated.
The groups property indicates the groups the users being impersonated must belong to.
The hosts property indicates the hosts from which the proxy user can make impersonation requests.
If users, groups or hosts is set to *, it means there are no restrictions for the proxy user regarding users, groups or hosts.
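Assuming the usual Hadoop proxyuser semantics (the impersonated user must match the users list or belong to a group in the groups list, and the request must come from an allowed host), the check can be sketched as follows. The helper name is ours:

```python
def proxyuser_allowed(conf, proxyuser, target_user, target_groups, request_host):
    """Sketch (ours, not Hadoop's code) of how the
    hadoop.kms.proxyuser.#USER#.* properties gate an impersonation
    request. conf maps property names to comma-separated values,
    with '*' meaning unrestricted."""
    def allows(prop, candidates):
        value = conf.get(f"hadoop.kms.proxyuser.{proxyuser}.{prop}", "")
        if value == "*":
            return True
        allowed = {v.strip() for v in value.split(",") if v.strip()}
        return bool(allowed & set(candidates))

    # user must match users OR groups; host must always match
    return (allows("users", {target_user}) or allows("groups", target_groups)) \
        and allows("hosts", {request_host})
```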
KMS over HTTPS (SSL)
To configure the KMS to work over HTTPS, the following two properties must be set in the etc/hadoop/kms-env.sh script (default values shown):
- KMS_SSL_KEYSTORE_FILE=$HOME/.keystore
- KMS_SSL_KEYSTORE_PASS=password
In the KMS tomcat/conf directory, replace the server.xml file with the provided ssl-server.xml file.
You need to create an SSL certificate for the KMS. As the kms Unix user, use the Java keytool command to create the SSL certificate:
[mw_shl_code=bash,true]$ keytool -genkey -alias tomcat -keyalg RSA[/mw_shl_code]
You will be asked a series of questions in an interactive prompt. The command will create the keystore file, named .keystore and located in the kms user's home directory.
The password you enter for "keystore password" must match the value of the KMS_SSL_KEYSTORE_PASS environment variable set in the kms-env.sh script in the configuration directory.
The answer to "What is your first and last name?" (i.e. "CN") must be the hostname of the machine where the KMS will be running.
NOTE: You need to restart the KMS for the configuration changes to take effect.
KMS Access Control Lists
KMS ACLs are configured in the KMS etc/hadoop/kms-acls.xml configuration file. This file is hot-reloaded when it changes. The KMS supports both fine-grained access control and blacklisting of KMS operations via a set of ACL configuration properties.
A user accessing the KMS is first checked for inclusion in the Access Control List for the requested operation, and then checked for exclusion in the blacklist for the operation, before access is granted.
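The check order just described (ACL inclusion first, then blacklist exclusion) can be sketched as follows. This is our illustration, not the KMS source:

```python
def kms_access_allowed(acls, blacklists, operation, user):
    """Sketch (ours) of the described check order: the user must be in
    the operation's ACL and must not be in its blacklist. A value of
    '*' matches every user."""
    def matches(value, user):
        if value is None:
            return False
        return value == "*" or user in [u.strip() for u in value.split(",")]

    if not matches(acls.get(operation), user):
        return False                      # not in the ACL
    if matches(blacklists.get(operation), user):
        return False                      # explicitly blacklisted
    return True
```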
[mw_shl_code=xml,true]<configuration>
<property>
<name>hadoop.kms.acl.CREATE</name>
<value>*</value>
<description>
ACL for create-key operations.
If the user is not in the GET ACL, the key material is not returned
as part of the response.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.CREATE</name>
<value>hdfs,foo</value>
<description>
Blacklist for create-key operations.
If the user is in the Blacklist, the key material is not returned
as part of the response.
</description>
</property>
<property>
<name>hadoop.kms.acl.DELETE</name>
<value>*</value>
<description>
ACL for delete-key operations.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.DELETE</name>
<value>hdfs,foo</value>
<description>
Blacklist for delete-key operations.
</description>
</property>
<property>
<name>hadoop.kms.acl.ROLLOVER</name>
<value>*</value>
<description>
ACL for rollover-key operations.
If the user is not in the GET ACL, the key material is not returned
as part of the response.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.ROLLOVER</name>
<value>hdfs,foo</value>
<description>
Blacklist for rollover-key operations.
</description>
</property>
<property>
<name>hadoop.kms.acl.GET</name>
<value>*</value>
<description>
ACL for get-key-version and get-current-key operations.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.GET</name>
<value>hdfs,foo</value>
<description>
Blacklist for get-key-version and get-current-key operations.
</description>
</property>
<property>
<name>hadoop.kms.acl.GET_KEYS</name>
<value>*</value>
<description>
ACL for get-keys operation.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.GET_KEYS</name>
<value>hdfs,foo</value>
<description>
Blacklist for get-keys operation.
</description>
</property>
<property>
<name>hadoop.kms.acl.GET_METADATA</name>
<value>*</value>
<description>
ACL for get-key-metadata and get-keys-metadata operations.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.GET_METADATA</name>
<value>hdfs,foo</value>
<description>
Blacklist for get-key-metadata and get-keys-metadata operations.
</description>
</property>
<property>
<name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
<value>*</value>
<description>
Complementary ACL for CREATE and ROLLOVER operation to allow the client
to provide the key material when creating or rolling a key.
</description>
</property>
<property>
<name>hadoop.kms.blacklist.SET_KEY_MATERIAL</name>
<value>hdfs,foo</value>
<description>
Complementary Blacklist for CREATE and ROLLOVER operation to allow the client
to provide the key material when creating or rolling a key.
</description>
</property>
<property>
<name>hadoop.kms.acl.GENERATE_EEK</name>
<value>*</value>
<description>
ACL for generateEncryptedKey
CryptoExtension operations
</description>
</property>
<property>
<name>hadoop.kms.blacklist.GENERATE_EEK</name>
<value>hdfs,foo</value>
<description>
Blacklist for generateEncryptedKey
CryptoExtension operations
</description>
</property>
<property>
<name>hadoop.kms.acl.DECRYPT_EEK</name>
<value>*</value>
<description>
ACL for decrypt EncryptedKey
CryptoExtension operations
</description>
</property>
<property>
<name>hadoop.kms.blacklist.DECRYPT_EEK</name>
<value>hdfs,foo</value>
<description>
Blacklist for decrypt EncryptedKey
CryptoExtension operations
</description>
</property>
</configuration>[/mw_shl_code]
Key Access Control
The KMS supports access control for all non-read operations at the key level. All key access operations are classified as:
- MANAGEMENT - createKey, deleteKey, rolloverNewVersion
- GENERATE_EEK - generateEncryptedKey, warmUpEncryptedKeys
- DECRYPT_EEK - decryptEncryptedKey
- READ - getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata, getCurrentKey
- ALL - all of the above
These can be defined in the KMS etc/hadoop/kms-acls.xml as follows.
For all keys for which key access has not been explicitly configured, it is possible to configure a default key access control for a subset of the operation types.
It is also possible to configure a "whitelist" key ACL for a subset of the operation types. The whitelist key ACL grants access in addition to the explicit or default per-key ACL: if no per-key ACL is explicitly set, a user is granted access if they are present in the default per-key ACL or in the whitelist key ACL; if a per-key ACL is explicitly set, a user is granted access if they are present in that per-key ACL or in the whitelist key ACL.
If no ACL is configured for a specific key, no default ACL is configured, and no whitelist key ACL is configured for the requested operation, access will be DENIED.
NOTE: The default and whitelist key ACLs do not support the ALL operation qualifier.
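The resolution order described above (the whitelist applies in addition to the explicit per-key ACL if one is set, otherwise in addition to the default ACL) can be sketched as follows. This is our illustration only:

```python
def key_access_allowed(key_acls, default_acls, whitelist_acls, key, op, user):
    """Sketch (ours) of per-key ACL resolution: the whitelist always
    grants; an explicit per-key ACL, if present, is consulted instead
    of the default ACL; if nothing mentions the user, access is denied."""
    def granted(value):
        return value is not None and (value == "*" or user in value.split(","))

    if granted(whitelist_acls.get(op)):
        return True                       # whitelist grants for all keys
    per_key = key_acls.get((key, op))
    if per_key is not None:
        return granted(per_key)           # explicit per-key ACL wins
    return granted(default_acls.get(op))  # fall back to the default ACL
```

With the example configuration below, both admink3 (per-key ACL) and admin1 (whitelist) can perform DECRYPT_EEK on testKey3, while user1 cannot, even though user1 is in the default DECRYPT_EEK ACL.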
[mw_shl_code=xml,true] <property>
<name>key.acl.testKey1.MANAGEMENT</name>
<value>*</value>
<description>
ACL for create-key, deleteKey and rolloverNewVersion operations.
</description>
</property>
<property>
<name>key.acl.testKey2.GENERATE_EEK</name>
<value>*</value>
<description>
ACL for generateEncryptedKey operations.
</description>
</property>
<property>
<name>key.acl.testKey3.DECRYPT_EEK</name>
<value>admink3</value>
<description>
ACL for decryptEncryptedKey operations.
</description>
</property>
<property>
<name>key.acl.testKey4.READ</name>
<value>*</value>
<description>
ACL for getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata,
getCurrentKey operations
</description>
</property>
<property>
<name>key.acl.testKey5.ALL</name>
<value>*</value>
<description>
ACL for ALL operations.
</description>
</property>
<property>
<name>whitelist.key.acl.MANAGEMENT</name>
<value>admin1</value>
<description>
whitelist ACL for MANAGEMENT operations for all keys.
</description>
</property>
<!--
'testKey3' key ACL is defined. Since a 'whitelist'
key is also defined for DECRYPT_EEK, in addition to
admink3, admin1 can also perform DECRYPT_EEK operations
on 'testKey3'
-->
<property>
<name>whitelist.key.acl.DECRYPT_EEK</name>
<value>admin1</value>
<description>
whitelist ACL for DECRYPT_EEK operations for all keys.
</description>
</property>
<property>
<name>default.key.acl.MANAGEMENT</name>
<value>user1,user2</value>
<description>
default ACL for MANAGEMENT operations for all keys that are not
explicitly defined.
</description>
</property>
<property>
<name>default.key.acl.GENERATE_EEK</name>
<value>user1,user2</value>
<description>
default ACL for GENERATE_EEK operations for all keys that are not
explicitly defined.
</description>
</property>
<property>
<name>default.key.acl.DECRYPT_EEK</name>
<value>user1,user2</value>
<description>
default ACL for DECRYPT_EEK operations for all keys that are not
explicitly defined.
</description>
</property>
<property>
<name>default.key.acl.READ</name>
<value>user1,user2</value>
<description>
default ACL for READ operations for all keys that are not
explicitly defined.
</description>
</property>[/mw_shl_code]
KMS Delegation Token Configuration
The KMS delegation token secret manager can be configured with the following properties:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.authentication.delegation-token.update-interval.sec</name>
<value>86400</value>
<description>
How often the master key is rotated, in seconds. Default value 1 day.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.max-lifetime.sec</name>
<value>604800</value>
<description>
Maximum lifetime of a delegation token, in seconds. Default value 7 days.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
<value>86400</value>
<description>
Renewal interval of a delegation token, in seconds. Default value 1 day.
</description>
</property>
<property>
<name>hadoop.kms.authentication.delegation-token.removal-scan-interval.sec</name>
<value>3600</value>
<description>
Scan interval to remove expired delegation tokens.
</description>
</property>[/mw_shl_code]
Using Multiple Instances of KMS Behind a Load-Balancer or VIP
The KMS supports multiple KMS instances behind a load-balancer or VIP for scalability and HA purposes.
When using multiple KMS instances behind a load-balancer or VIP, requests from the same user may be handled by different KMS instances. KMS instances behind a load-balancer or VIP must be specially configured to work properly as a single logical service.
HTTP Kerberos Principals Configuration
When the KMS instances are behind a load-balancer or VIP, clients will use the hostname of the VIP. For Kerberos SPNEGO authentication, the hostname of the URL is used to construct the Kerberos service name of the server, HTTP/#HOSTNAME#. This means that all KMS instances must have a Kerberos service name with the load-balancer or VIP hostname.
In order to be able to access a specific KMS instance directly, the KMS instance must also have a Kerberos service name with its own hostname. This is required for monitoring and admin purposes.
Both Kerberos service principal credentials (for the load-balancer/VIP hostname and for the actual KMS instance hostname) must be in the keytab file configured for authentication, and the principal name specified in the configuration must be '*'. For example:
[mw_shl_code=bash,true] <property>
<name>hadoop.kms.authentication.kerberos.principal</name>
<value>*</value>
</property>[/mw_shl_code]
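The HTTP/#HOSTNAME# rule above can be sketched as follows (helper name is ours), which shows why each instance needs a principal for both the VIP hostname and its own hostname:

```python
from urllib.parse import urlparse

def spnego_service_principal(url: str) -> str:
    """The Kerberos service name a SPNEGO client derives from the URL
    it connects to: HTTP/#HOSTNAME#. (Sketch of the stated rule.)"""
    return "HTTP/" + urlparse(url).hostname

# Clients going through the VIP and clients (or monitoring tools) going
# directly to an instance expect different service principals:
print(spnego_service_principal("https://kms-vip.example.com:16000/kms"))
print(spnego_service_principal("https://kms-host01.example.com:16000/kms"))
```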
NOTE: If using HTTPS, the SSL certificate used by the KMS instances must be configured to support multiple hostnames (see Java 7 keytool SAN extension support for details on how to do this).
HTTP Authentication Signature
The KMS uses Hadoop Authentication for HTTP authentication. Once a client authenticates successfully, Hadoop Authentication issues a signed HTTP cookie. This HTTP cookie has an expiration time, after which it triggers a new authentication sequence. This is done to avoid triggering authentication on every HTTP request of a client.
A KMS instance must be able to verify HTTP cookie signatures signed by the other KMS instances. To do this, all KMS instances must share the signing secret. This secret sharing can be done using a Zookeeper service, which is configured in the KMS with the following properties in kms-site.xml:
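A minimal sketch of such a shared-secret signed cookie using an HMAC (ours; the real scheme is implemented by Hadoop Authentication, and the function names here are hypothetical): any instance holding the shared secret can verify a cookie issued by any other instance.

```python
import hashlib
import hmac

def sign_cookie(secret: bytes, cookie: str) -> str:
    """Append an HMAC-SHA256 signature to the cookie payload."""
    sig = hmac.new(secret, cookie.encode(), hashlib.sha256).hexdigest()
    return f"{cookie}&s={sig}"

def verify_cookie(secret: bytes, signed: str) -> bool:
    """Recompute the signature and compare in constant time."""
    cookie, _, sig = signed.rpartition("&s=")
    expected = hmac.new(secret, cookie.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```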
[mw_shl_code=xml,true]<property>
<name>hadoop.kms.authentication.signer.secret.provider</name>
<value>zookeeper</value>
<description>
Indicates how the secret to sign the authentication cookies will be
stored. Options are 'random' (default), 'string' and 'zookeeper'.
If using a setup with multiple KMS instances, 'zookeeper' should be used.
</description>
</property>
<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
<value>/hadoop-kms/hadoop-auth-signature-secret</value>
<description>
The Zookeeper ZNode path where the KMS instances will store and retrieve
the secret from.
</description>
</property>
<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
<value>#HOSTNAME#:#PORT#,...</value>
<description>
The Zookeeper connection string, a list of hostnames and port comma
separated.
</description>
</property>
<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
<value>kerberos</value>
<description>
The Zookeeper authentication type, 'none' or 'sasl' (Kerberos).
</description>
</property>
<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
<value>/etc/hadoop/conf/kms.keytab</value>
<description>
The absolute path for the Kerberos keytab with the credentials to
connect to Zookeeper.
</description>
</property>
<property>
<name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
<value>kms/#HOSTNAME#</value>
<description>
The Kerberos service principal used to connect to Zookeeper.
</description>
</property>[/mw_shl_code]
Delegation Tokens
TBD
KMS HTTP REST API
Create a Key
Request:
[mw_shl_code=bash,true]POST http://HOST:PORT/kms/v1/keys
Content-Type: application/json
{
"name" : "<key-name>",
"cipher" : "<cipher>",
"length" : <length>, //int
"material" : "<material>", //base64
"description" : "<description>"
}[/mw_shl_code]
Response:
[mw_shl_code=bash,true]201 CREATED
LOCATION: http://HOST:PORT/kms/v1/key/<key-name>
Content-Type: application/json
{
"name" : "versionName",
"material" : "<material>", //base64, not present without GET ACL
}[/mw_shl_code]
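A client-side sketch of building this create-key request with Python's standard library (the helper name and the default cipher value are our assumptions; material, if supplied, must already be base64-encoded):

```python
import json
from urllib.request import Request

def create_key_request(base, name, cipher="AES/CTR/NoPadding",
                       length=128, description=None, material=None):
    """Build the POST request for the create-key endpoint shown above.
    (Hypothetical helper; only required fields are always sent.)"""
    body = {"name": name, "cipher": cipher, "length": length}
    if description is not None:
        body["description"] = description
    if material is not None:
        body["material"] = material     # base64
    return Request(f"{base}/kms/v1/keys",
                   data=json.dumps(body).encode(),
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = create_key_request("http://localhost:16000", "key1", length=256)
# urllib.request.urlopen(req) would return 201 CREATED on success
```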
Rollover Key
Request:
[mw_shl_code=bash,true]POST http://HOST:PORT/kms/v1/key/<key-name>
Content-Type: application/json
{
"material" : "<material>",
}[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
{
"name" : "versionName",
"material" : "<material>", //base64, not present without GET ACL
}[/mw_shl_code]
Delete a Key
Request:
[mw_shl_code=bash,true]DELETE http://HOST:PORT/kms/v1/key/<key-name>[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK[/mw_shl_code]
Get Key Metadata
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/key/<key-name>/_metadata[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
{
"name" : "<key-name>",
"cipher" : "<cipher>",
"length" : <length>, //int
"description" : "<description>",
"created" : <millis-epoc>, //long
"versions" : <versions> //int
}[/mw_shl_code]
Get Current Key
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/key/<key-name>/_currentversion[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
{
"name" : "versionName",
"material" : "<material>", //base64
}[/mw_shl_code]
Generate Encrypted Key for Current KeyVersion
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/key/<key-name>/_eek?eek_op=generate&num_keys=<number-of-keys-to-generate>[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
[
{
"versionName" : "encryptionVersionName",
"iv" : "<iv>", //base64
"encryptedKeyVersion" : {
"versionName" : "EEK",
"material" : "<material>", //base64
}
},
{
"versionName" : "encryptionVersionName",
"iv" : "<iv>", //base64
"encryptedKeyVersion" : {
"versionName" : "EEK",
"material" : "<material>", //base64
}
},
...
][/mw_shl_code]
Decrypt Encrypted Key
Request:
[mw_shl_code=bash,true]POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt
Content-Type: application/json
{
"name" : "<key-name>",
"iv" : "<iv>", //base64
"material" : "<material>", //base64
}[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
{
"name" : "EK",
"material" : "<material>", //base64
}[/mw_shl_code]
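A sketch of building the decrypt call shown above (helper name is ours; iv and material must already be base64-encoded strings, as the API expects):

```python
import json
from urllib.request import Request

def decrypt_eek_request(base, version_name, key_name, iv_b64, material_b64):
    """Build the POST request that asks the KMS to decrypt an encrypted
    key (EEK) against the given key version. (Hypothetical helper.)"""
    body = {"name": key_name, "iv": iv_b64, "material": material_b64}
    return Request(
        f"{base}/kms/v1/keyversion/{version_name}/_eek?eek_op=decrypt",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
```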
Get Key Version
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/keyversion/<version-name>[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
{
"name" : "versionName",
"material" : "<material>", //base64
}[/mw_shl_code]
Get Key Versions
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/key/<key-name>/_versions[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
[
{
"name" : "versionName",
"material" : "<material>", //base64
},
{
"name" : "versionName",
"material" : "<material>", //base64
},
...
][/mw_shl_code]
Get Key Names
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/keys/names[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
[
"<key-name>",
"<key-name>",
...
][/mw_shl_code]
Get Keys Metadata
Request:
[mw_shl_code=bash,true]GET http://HOST:PORT/kms/v1/keys/metadata?key=<key-name>&key=<key-name>,...[/mw_shl_code]
Response:
[mw_shl_code=bash,true]200 OK
Content-Type: application/json
[
{
"name" : "<key-name>",
"cipher" : "<cipher>",
"length" : <length>, //int
"description" : "<description>",
"created" : <millis-epoc>, //long
"versions" : <versions> //int
},
{
"name" : "<key-name>",
"cipher" : "<cipher>",
"length" : <length>, //int
"description" : "<description>",
"created" : <millis-epoc>, //long
"versions" : <versions> //int
},
...
][/mw_shl_code]
Related articles
Hadoop primer, chapter 1 (General), section 1: single-node pseudo-distributed setup
Hadoop primer, chapter 1 (General), section 2: cluster setup
Hadoop primer, chapter 1 (General), section 3: Hadoop commands guide
Hadoop primer, chapter 1 (General), section 4: FileSystem shell
Hadoop primer, chapter 1 (General), section 5: Hadoop compatibility notes
Hadoop primer, chapter 1 (General), section 6: developer and user interfaces: Hadoop interface classification
Hadoop primer, chapter 1 (General), section 7: Hadoop FileSystem API overview
Hadoop primer, chapter 2 (common), section 1: Hadoop native libraries guide
Hadoop primer, chapter 2 (common), section 2: Hadoop proxy users - superusers acting on behalf of other users
Hadoop primer, chapter 2 (common), section 3: rack awareness
Hadoop primer, chapter 2 (common), section 4: safe mode
Hadoop primer, chapter 2 (common), section 5: service level authorization guide
Hadoop primer, chapter 2 (common), section 6: Hadoop HTTP web-consoles authentication
Hadoop primer, chapter 2 (common), section 7: Hadoop Key Management Server (KMS) - documentation
Hadoop primer, chapter 3: HDFS documentation overview (1)
Hadoop primer, chapter 3: HDFS documentation overview (2)
Hadoop primer, chapter 4: MapReduce documentation overview
Hadoop primer, chapter 5: MapReduce REST APIs overview
Hadoop primer, chapter 6: YARN documentation overview
Hadoop primer, chapter 7: YARN REST APIs
Hadoop primer, chapter 8: Hadoop compatible file systems
Hadoop primer, chapter 9: Hadoop authentication
Hadoop primer, chapter 10: Hadoop tools
Hadoop primer, chapter 11: Hadoop configuration