As mentioned earlier, gluster-swift is just a thin integration layer; its source tree does not contain Swift's code. What gluster-swift does is generate the ring files, run a daemon that checks whether each Swift node has the GlusterFS volume mounted, and translate between the two data formats. Once Swift's data lives on GlusterFS, redundancy is provided by GlusterFS's own replication, so Swift's replica mechanism is unnecessary; gluster-swift therefore builds the rings with a single replica by default. Swift could previously scale out with clusters of separate Proxy, Account, Container, and Object nodes, but gluster-swift consolidates all of this: a single node runs only the four main services (Proxy, Account, Container, Object), and since there is only one Swift replica there is no need to run the Replicator, Updater, or Auditor processes.
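The single-replica ring generation described above is done with the helper that gluster-swift ships. A minimal sketch, assuming a GlusterFS volume named `swiftvol` (the volume name is an assumption; substitute your own):

```shell
# Sketch: build one-replica account/container/object rings for a
# GlusterFS volume. gluster-swift's helper creates the builder and
# ring files under /etc/swift, one "device" per volume name given.
gluster-swift-gen-builders swiftvol

# Inspect the generated ring files:
ls /etc/swift/*.ring.gz
```

Because the rings carry only one replica, no rebalancing across Swift nodes ever happens; GlusterFS handles data placement and replication underneath.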
# Ensure the proxy server uses fast-POSTs since we don't need to make a copy
# of the entire object given that all metadata is stored in the object
# extended attributes (no .meta file used after creation) and no container
# sync feature is present.
object_post_as_copy = false
# Only need to recheck the account exists once a day
recheck_account_existence = 86400
# May want to consider bumping this up if containers are created and destroyed
# infrequently.
recheck_container_existence = 60
# Timeout clients that don't read or write to the proxy server after 5
# seconds.
client_timeout = 5
# Give more time to connect to the object, container or account servers in
# cases of high load.
conn_timeout = 5
# For high load situations, once connected to an object, container or account
# server, allow for delays communicating with them.
node_timeout = 60
# May want to consider bumping up this value to 1 - 4 MB depending on how much
# traffic is for multi-megabyte or gigabyte requests; perhaps matching the
# stripe width (not stripe element size) of your storage volume is a good
# starting point. See below for sizing information.
object_chunk_size = 65536
# If you do decide to increase the object_chunk_size, then consider lowering
# this value to one. Up to "put_queue_length" object_chunk_size'd buffers can
# be queued to the object server for processing. Given one proxy server worker
# can handle up to 1,024 connections, by default, it will consume 10 * 65,536
# * 1,024 bytes of memory in the worst case (default values). Be sure the
# amount of memory available on the system can accommodate increased values
# for object_chunk_size.
put_queue_depth = 10
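The worst-case figure mentioned in the comment above is easy to verify with a little shell arithmetic, using the default values quoted there:

```shell
# Worst-case buffering for one proxy worker: put_queue_depth buffers of
# object_chunk_size bytes for each of the 1,024 connections a worker can
# handle by default.
put_queue_depth=10
object_chunk_size=65536
connections=1024

bytes=$((put_queue_depth * object_chunk_size * connections))
echo "$bytes bytes"                    # 671088640 bytes
echo "$((bytes / 1024 / 1024)) MiB"    # 640 MiB
```

This is why the comment warns to lower `put_queue_depth` if you raise `object_chunk_size`: the two multiply together in the worst-case memory bound.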
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:proxy-logging]
use = egg:swift#proxy_logging
access_log_level = WARN
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
# Update this line to contain a comma separated list of memcache servers
# shared by all nodes running the proxy-server service.
memcache_servers = localhost:11211
_GEEK_
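With the proxy configuration in place, you can start the service and exercise the `healthcheck` filter configured above. A sketch, assuming the proxy listens on Swift's default port 8080 (adjust to whatever `bind_port` your proxy-server.conf uses):

```shell
# Sketch: start the proxy via swift-init and hit the healthcheck
# middleware. Port 8080 is an assumption -- match your bind_port.
swift-init proxy start
curl http://localhost:8080/healthcheck
# Swift's healthcheck middleware answers a healthy proxy with "OK"
```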
Configure account-server.conf:
cat > /etc/swift/account-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the account-server workers start,
# you can *consider* setting this value to "false" to reduce the per-request
# overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6012
#
# Override swift's default behaviour for fallocate.
disable_fallocate = true
#
# One or two workers should be sufficient for almost any installation of
# Gluster.
workers = 1
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:gluster_swift#account
user = root
log_facility = LOG_LOCAL1
log_level = DEBUG
#
# After ensuring things are running in a stable manner, you can turn off
# normal request logging for the account server to unclutter the log
# files. Warnings and errors will still be logged.
log_requests = off
_GEEK_
Configure container-server.conf:
cat > /etc/swift/container-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the container-server workers
# start, you can *consider* setting this value to "false" to reduce the
# per-request overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6011
#
# Override swift's default behaviour for fallocate.
disable_fallocate = true
#
# One or two workers should be sufficient for almost any installation of
# Gluster.
workers = 1
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:gluster_swift#container
user = root
log_facility = LOG_LOCAL2
log_level = DEBUG
#
# After ensuring things are running in a stable manner, you can turn off
# normal request logging for the container server to unclutter the log
# files. Warnings and errors will still be logged.
log_requests = off
_GEEK_
Configure object-server.conf:
cat > /etc/swift/object-server.conf << _GEEK_
[DEFAULT]
devices = /mnt/gluster_storage/swift_data
#
# Once you are confident that your startup processes will always have your
# gluster volumes properly mounted *before* the object-server workers start,
# you can *consider* setting this value to "false" to reduce the per-request
# overhead it can incur.
mount_check = true
bind_ip = 0.0.0.0
bind_port = 6010
#
# Maximum number of clients one worker can process simultaneously (it will
# actually accept N + 1). Setting this to one (1) will only handle one
# request at a time, without accepting another request concurrently. By
# increasing the number of workers to a much higher value, one can prevent
# slow file system operations for one request from starving other requests.
max_clients = 1024
#
# If not doing the above, setting this value initially to match the number of
# CPUs is a good starting point for determining the right value.
workers = 1
# Override swift's default behaviour for fallocate.
disable_fallocate = true
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:gluster_swift#object
user = root
log_facility = LOG_LOCAL3
log_level = DEBUG
#
# For performance, after ensuring things are running in a stable manner, you
# can turn off normal request logging for the object server to reduce the
# per-request overhead and unclutter the log files. Warnings and errors will
# still be logged.
log_requests = off
#
# Adjust this value to match the stripe width of the underlying storage array
# (not the stripe element size). This will provide a reasonable starting point
# for tuning this value.
disk_chunk_size = 65536
#
# Adjust this value to match whatever is set for the disk_chunk_size
# initially. This will provide a reasonable starting point for tuning this
# value.
network_chunk_size = 65536
_GEEK_