
Integrating the Swift API with gluster-swift

Posted by xioaxu790 on 2014-10-25 18:06:59
Questions this article addresses:
1. What is gluster-swift, and what role does it play?
2. How do you install and deploy a gluster-swift environment?
3. How does Swift integrate Keystone authentication?





GLUSTER-SWIFT

gluster-swift is a project started by the Gluster community. It is a tool for Swift that lets Swift use GlusterFS as its storage backend, so the same data can be reached both through the Swift API and through a GlusterFS mount, giving OpenStack a unified storage layer. It also tracks the latest Swift releases.

Project: https://github.com/gluster/gluster-swift
Quick start guide: https://github.com/gluster/glust ... uick_start_guide.md
Administrator guide: http://www.gluster.org/wp-conten ... ion_Guide-en-US.pdf


How the integration works:

As mentioned above, gluster-swift is only a tool; its source tree contains no Swift code. What gluster-swift does is generate the ring files, run a daemon that checks whether the Swift node has the GlusterFS volumes mounted, and translate between the two data formats. Once Swift's data lives on GlusterFS, redundancy comes from GlusterFS's own replication and Swift's replication is unnecessary, so gluster-swift builds the Swift rings with a single replica by default. Swift used to scale out as a cluster of multiple Proxy, Account, Container, and Object nodes; gluster-swift consolidates all of this, so one node runs only those four main services, and because there is only one Swift replica, the Replicator, Updater, and Auditor processes are not needed either.
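For the curious, the rings gluster-swift writes are essentially what you would get from running swift-ring-builder by hand: one ring per service, a single replica, and one device per GlusterFS volume. A rough sketch of the idea (the ports match the server configs used later; the device naming here is illustrative, and the real gluster-swift-gen-builders script may differ in detail):

# sketch: a single-replica object ring with one device per GlusterFS volume
swift-ring-builder object.builder create 1 1 1
swift-ring-builder object.builder add z1-127.0.0.1:6010/swift-volumes 100.0
swift-ring-builder object.builder rebalance
# the same is repeated for container.builder (port 6011) and account.builder (port 6012)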

Implementation:

gluster-swift depends on a working GlusterFS and Swift environment, so the first step is to set up GlusterFS and Swift:
172.16.0.201  GlusterFS, Swift, gluster-swift
172.16.0.202  GlusterFS

Install GlusterFS on 201 and 202
Start with some prerequisite packages (compiler, git, xfs tools, etc.):
apt-get -y install gcc python-dev python-setuptools libffi-dev git xfsprogs



Download the GlusterFS source:
git clone https://github.com/gluster/glusterfs.git



Install the packages GlusterFS needs to build:
apt-get install flex bison attr libssl-dev openssl



Build and install GlusterFS:
cd glusterfs
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc
make
make install



Start the service:
/etc/init.d/glusterd start


After both machines have completed the steps above, add the other GlusterFS node from 172.16.0.201:
gluster peer probe 172.16.0.202
gluster peer status
Number of Peers: 1
Hostname: 172.16.0.202
Port: 24007
Uuid: 43d0771d-650b-45d6-b71c-07874aa74825
State: Peer in Cluster (Connected)



Create a GlusterFS volume with a replica count of 2 to back Swift (note that the volume name comes before the replica keyword):
gluster volume create swift-volumes replica 2 172.16.0.201:/opt/gluster_storage/swift-volumes 172.16.0.202:/opt/gluster_storage/swift-volumes
volume create: swift-volumes: success: please start the volume to access data



Start swift-volumes:
gluster volume start swift-volumes
volume start: swift-volumes: success



Inspect the volume:
gluster volume info swift-volumes
Volume Name: swift-volumes
Type: Replicate
Volume ID: 3d4788d0-41b2-417c-b564-f8e3617b1879
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.201:/opt/gluster_storage/swift-volumes
Brick2: 172.16.0.202:/opt/gluster_storage/swift-volumes



Install Swift on 172.16.0.201

Download the source:
git clone https://github.com/openstack/swift.git



Install Swift's system dependencies:
apt-get install python-xattr memcached


Switch Swift to the grizzly or havana branch; the branch must match the gluster-swift version:
cd swift
git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/feature/ec
  remotes/origin/master
  remotes/origin/stable/folsom
  remotes/origin/stable/grizzly
  remotes/origin/stable/havana
git checkout stable/grizzly
git branch
  master
* stable/grizzly


Resolve the remaining dependencies with pip, then install:
python setup.py egg_info
pip install -r swift.egg-info/requires.txt
python setup.py develop



Install gluster-swift on 172.16.0.201

Download the gluster-swift source:
git clone https://github.com/gluster/gluster-swift.git


Check out the branch that matches your Swift version:
cd gluster-swift
git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/grizzly
  remotes/origin/havana
  remotes/origin/master
  remotes/origin/release-1.8.0
git checkout havana
git branch
* havana
  master



Install it:
python setup.py develop



After installation you will have /usr/local/bin/gluster-swift-gen-builders, which generates the ring files into /etc/swift.
Create the directories Swift needs at runtime:
mkdir /etc/swift/
mkdir /var/log/swift
mkdir /var/cache/swift



The configuration files below are all based on the sample files shipped in gluster-swift's etc/ directory.

Copy gluster-swift/etc/swift.conf-gluster into /etc/swift:
cp gluster-swift/etc/swift.conf-gluster /etc/swift/swift.conf



Copy gluster-swift/etc/fs.conf-gluster into /etc/swift and set mount_ip:
cp gluster-swift/etc/fs.conf-gluster /etc/swift/fs.conf
sed -i 's/^mount_ip.*$/mount_ip = 172.16.0.201/g' /etc/swift/fs.conf


mount_ip is the IP address of any node in the GlusterFS cluster; here either 172.16.0.201 or 172.16.0.202 will do.
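To make the automount concrete: when a request arrives for account AUTH_swift-volumes, gluster-swift mounts the volume of that name from mount_ip itself, roughly equivalent to the command below (a sketch; the mount point is inferred from the devices path set in the server configs that follow):

mount -t glusterfs 172.16.0.201:swift-volumes /mnt/gluster_storage/swift_data/swift-volumes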
Configure Swift's proxy-server.conf (the commented-out lines below are the Keystone integration; they are enabled in the Keystone section at the end):
cat > /etc/swift/proxy-server.conf << _GEEK_
[DEFAULT]
bind_port = 8080
user = root
# Consider using 1 worker per CPU
workers = 1

[pipeline:main]
#pipeline = catch_errors healthcheck proxy-logging cache authtoken keystoneauth proxy-logging proxy-server
pipeline = catch_errors healthcheck proxy-logging cache proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
log_facility = LOG_LOCAL0
log_level = DEBUG
# The API allows for account creation and deletion, but since Gluster/Swift
# automounts a Gluster volume for a given account, there is no way to create
# or delete an account. So leave this off.
allow_account_management = false
account_autocreate = true
# Ensure the proxy server uses fast-POSTs since we don't need to make a copy
# of the entire object given that all metadata is stored in the object
# extended attributes (no .meta file used after creation) and no container
# sync feature to present.
object_post_as_copy = false
# Only need to recheck the account exists once a day
recheck_account_existence = 86400
# May want to consider bumping this up if containers are created and destroyed
# infrequently.
recheck_container_existence = 60
# Timeout clients that don't read or write to the proxy server after 5
# seconds.
client_timeout = 5
# Give more time to connect to the object, container or account servers in
# cases of high load.
conn_timeout = 5
# For high load situations, once connected to an object, container or account
# server, allow for delays communicating with them.
node_timeout = 60
# May want to consider bumping up this value to 1 - 4 MB depending on how much
# traffic is for multi-megabyte or gigabyte requests; perhaps matching the
# stripe width (not stripe element size) of your storage volume is a good
# starting point. See below for sizing information.
object_chunk_size = 65536
# If you do decide to increase the object_chunk_size, then consider lowering
# this value to one. Up to "put_queue_length" object_chunk_size'd buffers can
# be queued to the object server for processing. Given one proxy server worker
# can handle up to 1,024 connections, by default, it will consume 10 * 65,536
# * 1,024 bytes of memory in the worse case (default values). Be sure the
# amount of memory available on the system can accommodate increased values
# for object_chunk_size.
put_queue_depth = 10

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:proxy-logging]
use = egg:swift#proxy_logging
access_log_level = WARN

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
# Update this line to contain a comma separated list of memcache servers
# shared by all nodes running the proxy-server service.
memcache_servers = localhost:11211

# Uncomment these two sections (together with the keystone pipeline line
# above) to enable Keystone authentication.
#[filter:keystoneauth]
#use = egg:swift#keystoneauth
#operator_roles = Member,admin

#[filter:authtoken]
#paste.filter_factory = keystone.middleware.auth_token:filter_factory
#service_protocol = http
#service_port = 5000
#service_host = control.local.com
#auth_port = 35357
#auth_host = control.local.com
#auth_protocol = http
#admin_tenant_name = service
#admin_user = swift
#admin_password = password
#signing_dir = /etc/swift
_GEEK_



Configure account-server.conf:
cat > /etc/swift/account-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the account-server workers start,
# you can *consider* setting this value to "false" to reduce the per-request
# overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6012
#
# Override swift's default behaviour for fallocate.
disable_fallocate = true
#
# One or two workers should be sufficient for almost any installation of
# Gluster.
workers = 1

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:gluster_swift#account
user = root
log_facility = LOG_LOCAL1
log_level = DEBUG
#
# After ensuring things are running in a stable manner, you can turn off
# normal request logging for the account server to unclutter the log
# files. Warnings and errors will still be logged.
log_requests = off
_GEEK_



Configure container-server.conf:
cat > /etc/swift/container-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the container-server workers
# start, you can *consider* setting this value to "false" to reduce the
# per-request overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6011
#
# Override swift's default behaviour for fallocate.
disable_fallocate = true
#
# One or two workers should be sufficient for almost any installation of
# Gluster.
workers = 1

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:gluster_swift#container
user = root
log_facility = LOG_LOCAL2
log_level = DEBUG
#
# After ensuring things are running in a stable manner, you can turn off
# normal request logging for the container server to unclutter the log
# files. Warnings and errors will still be logged.
log_requests = off
_GEEK_



Configure object-server.conf:
cat > /etc/swift/object-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the object-server workers start,
# you can *consider* setting this value to "false" to reduce the per-request
# overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6010
#
# Maximum number of clients one worker can process simultaneously (it will
# actually accept N + 1). Setting this to one (1) will only handle one request
# at a time, without accepting another request concurrently. By increasing the
# number of workers to a much higher value, one can prevent slow file system
# operations for one request from starving other requests.
max_clients = 1024
#
# If not doing the above, setting this value initially to match the number of
# CPUs is a good starting point for determining the right value.
workers = 1
# Override swift's default behaviour for fallocate.
disable_fallocate = true

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:gluster_swift#object
user = root
log_facility = LOG_LOCAL3
log_level = DEBUG
#
# For performance, after ensuring things are running in a stable manner, you
# can turn off normal request logging for the object server to reduce the
# per-request overhead and unclutter the log files. Warnings and errors will
# still be logged.
log_requests = off
#
# Adjust this value to match the stripe width of the underlying storage array
# (not the stripe element size). This will provide a reasonable starting point
# for tuning this value.
disk_chunk_size = 65536
#
# Adjust this value to match whatever is set for the disk_chunk_size initially.
# This will provide a reasonable starting point for tuning this value.
network_chunk_size = 65536
_GEEK_



Use rsyslog to capture the logs of the four services:
echo -e "
local0.*    /var/log/swift/proxy-server.log
local1.*    /var/log/swift/account.log
local2.*    /var/log/swift/container.log
local3.*    /var/log/swift/object.log" >> /etc/rsyslog.conf



Restart the rsyslog service:
/etc/init.d/rsyslog restart



Generate the rings by running the command installed by gluster-swift with the GlusterFS volume name:
(When creating a volume, do not use underscores in its name; otherwise running gluster-swift-gen-builders example_volumes produces odd results.)
gluster-swift-gen-builders swift-volumes



When the command finishes, the *.builder and *.ring.gz files appear under /etc/swift/.
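A quick sanity check that the rings were written (the listing below is illustrative):

ls /etc/swift/*.builder /etc/swift/*.ring.gz
/etc/swift/account.builder    /etc/swift/container.builder    /etc/swift/object.builder
/etc/swift/account.ring.gz    /etc/swift/container.ring.gz    /etc/swift/object.ring.gz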
Start the four Swift services:
swift-init main restart



Install curl:
apt-get install curl


Access Swift with curl:
curl -v -X PUT http://172.16.0.201:8080/v1/AUTH_swift-volumes/mycontainer


If the command above returns 201, the container was created successfully. At the same time, the GlusterFS volume swift-volumes is automatically mounted under /mnt/gluster_storage/swift_data/.
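You can confirm the automount from the mount table (the output shown is illustrative):

mount | grep swift-volumes
172.16.0.201:swift-volumes on /mnt/gluster_storage/swift_data/swift-volumes type fuse.glusterfs (rw)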
Test uploading a file:
echo "Hello World" > mytestfile
curl -v -X PUT -T mytestfile http://172.16.0.201:8080/v1/AUTH_swift-volumes/mycontainer/mytestfile


Download the file:
curl -v -X GET -o newfile http://172.16.0.201:8080/v1/AUTH_swift-volumes/mycontainer/mytestfile
cat newfile



Integrating Keystone authentication

To authenticate with Keystone, edit /etc/swift/proxy-server.conf: uncomment the pipeline line that includes authtoken and keystoneauth together with the [filter:keystoneauth] and [filter:authtoken] sections, comment out the pipeline line without them, save, and restart the services with swift-init main restart. Then exercise it with the swift client:
swift -V 2 -A http://172.16.0.201:5000/v2.0 -U admin:admin -K password stat
swift -V 2 -A http://172.16.0.201:5000/v2.0 -U admin:admin -K password post test-container
swift -V 2 -A http://172.16.0.201:5000/v2.0 -U admin:admin -K password upload test-container mytestfile
cd /tmp
swift -V 2 -A http://172.16.0.201:5000/v2.0 -U admin:admin -K password download test-container mytestfile



