
Install a Stand-alone, Multi-node OpenStack Swift Cluster with VirtualBox or VMware Fusion and Vagrant

• Updated March 17, 2019


The OpenStack Swift developer website describes Swift best:

Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply.

For being such a powerful object storage platform, I found it surprisingly easy to set up and configure. However, setup becomes more difficult as the number of nodes, racks, and data centers increases.

But most of us do not have that many nodes, racks, or data centers and simply want to set up a Swift cluster to play with on our workstation. This is where Vagrant and VirtualBox or VMware Fusion come in.

The following post describes how to set up an OpenStack Swift cluster on CentOS 6.5 with one Swift Proxy node and three Swift Object nodes, with both automated and manual options. The environment will have two networks: one network for management traffic and the Swift API endpoint, and a second network for Swift backend and replication traffic.
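For reference, these are the addresses used throughout this post (the 192.168.236.x addresses are on the management/API network and the 192.168.252.x addresses are on the replication network):

proxy1: 192.168.236.60 and 192.168.252.60
object1: 192.168.236.70 and 192.168.252.70
object2: 192.168.236.71 and 192.168.252.71
object3: 192.168.236.72 and 192.168.252.72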

Setup Vagrant

Download and install the latest version of Vagrant for your operating system.

Jump to either the Vagrant with VirtualBox or Vagrant with VMware Fusion section depending on what you want to use.

Using Vagrant with VirtualBox is free, whereas using it with VMware Fusion costs about $140.00 total.

Vagrant with VirtualBox

Download and install the latest version of VirtualBox for your operating system.

Once VirtualBox is installed, jump to the Setup a Vagrant Environment section.

Vagrant with VMware Fusion

First, purchase ($59.99), download, and install the latest version of VMware Fusion 5 or 6.

In addition, purchase ($79.00) the Vagrant VMware Provider License from HashiCorp; you cannot use Vagrant with VMware Fusion without this license.

Second, once you have purchased the plugin, open Terminal, and install the Vagrant VMware Fusion Provider Plugin:

vagrant plugin install vagrant-vmware-fusion

HashiCorp should have emailed you the Vagrant VMware Fusion Provider License by now. License the provider with the following command (save the license in a safe place; Vagrant will copy the license to its own directory as well):

vagrant plugin license vagrant-vmware-fusion ~/Downloads/license.lic

Verify everything is working by running any Vagrant command. An error message will be displayed if something is wrong.
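For example, the following command should list the newly installed plugin without errors:

vagrant plugin list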

Once VMware Fusion and the Vagrant Provider License are installed, jump to the Setup a Vagrant Environment section.

Install Swift

As mentioned, this post covers installing a multi-node Swift cluster with automated or manual options. The automated install contains all of the manual install steps in the Vagrantfile.

If this is your first time setting up a Swift cluster, I recommend going through the Manual Install section so you can see how everything works together. If you have already done a manual install of a Swift cluster or just want to quickly get started using Swift, go to the Automated Install section.

Automated Install

Create a directory somewhere on your workstation to save the Vagrantfile and change into that directory:

mkdir -p ~/Vagrant/swift-one-proxy-three-object

cd ~/Vagrant/swift-one-proxy-three-object

Run the following command to download the Vagrantfile:

curl https://raw.githubusercontent.com/jameswthorne/vagrantfiles-swift/master/vagrantfile-swift-centos-one-proxy-three-object.txt -o Vagrantfile

At this point you are ready to start up your Vagrant environment.

If you are using VirtualBox:

vagrant up

If you are using VMware Fusion:

vagrant up --provider vmware_fusion

Once the Vagrant environment is up, scroll down to the Using the Swift Cluster section.

Manual Install

Setup the Vagrant Environment

Create a directory somewhere on your workstation to save your Vagrantfile and change into that directory:

mkdir -p ~/Vagrant/swift-one-proxy-three-object

cd ~/Vagrant/swift-one-proxy-three-object

Run the following command to download the Vagrantfile:

curl https://raw.githubusercontent.com/jameswthorne/vagrantfiles-swift/master/vagrantfile-manual-swift-centos-one-proxy-three-object.txt -o Vagrantfile

At this point you are ready to start up your Vagrant environment.

If you are using VirtualBox:

vagrant up

If you are using VMware Fusion:

vagrant up --provider vmware_fusion

Once the Vagrant environment is up, continue onto the Setup the proxy1 Swift Proxy Node section.

Setup the proxy1 Swift Proxy Node

Log in to the Swift Proxy node:

vagrant ssh proxy1

Log in as the root user and stay logged in as root throughout this post (the root password is vagrant):

su -

Install EPEL and the RDO package repository:

yum install http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

Install the following repository packages:

yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token memcached

Open /etc/sysconfig/memcached and modify the following line so memcached listens on eth2:

OPTIONS="-l 192.168.252.60"

Enable and start memcached:

chkconfig memcached on

service memcached start
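Optionally, verify memcached is now listening on the replication network address (it should show up bound to 192.168.252.60:11211):

netstat -lntp | grep 11211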

Swift can use its internal authentication system, TempAuth, or an OpenStack Keystone server. In this post you are going to set up Swift TempAuth. If you would rather set up authentication using Keystone, finish all the steps in this post, then follow the steps in authenticating OpenStack Swift against Keystone instead of TempAuth.

Open /etc/swift/proxy-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.236.60
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.252.60:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:tempauth]
use = egg:swift#tempauth
# user_<tenant>_<username> = <password> <privileges> 
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3

Change into /etc/swift:

cd /etc/swift

Run the following command to create /etc/swift/swift.conf. This file is important because the prefix and suffix values are used as salts when generating the hashes for the ring mappings.

cat >/etc/swift/swift.conf <<EOF
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
swift_hash_path_prefix = `od -t x8 -N 8 -A n </dev/random`
swift_hash_path_suffix = `od -t x8 -N 8 -A n </dev/random`
EOF

The /etc/swift/swift.conf file needs to be on every node in the Swift cluster. It is already on the proxy1 node, so copy it to the three Swift Object nodes (you will move this file to the proper directory on the Swift Object nodes later):

scp /etc/swift/swift.conf 192.168.236.70:/root/

scp /etc/swift/swift.conf 192.168.236.71:/root/

scp /etc/swift/swift.conf 192.168.236.72:/root/

Create the account, container, and object rings (the three arguments are the partition power, the number of replicas, and the minimum number of hours before a partition can be moved again):

swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

Typically a Swift Object node will have many hard disks available to use as storage devices. However, because this virtual environment has very limited resources, you are only going to use a 10 GB loopback file on each Object node. That loopback file will be set up in the next section, but for now you need to actually define your ring. Each device entry below has the form z<zone>-<replication IP>:<port>/<device> <weight>:

swift-ring-builder account.builder add z1-192.168.252.70:6002/loop2 10
swift-ring-builder container.builder add z1-192.168.252.70:6001/loop2 10
swift-ring-builder object.builder add z1-192.168.252.70:6000/loop2 10

swift-ring-builder account.builder add z2-192.168.252.71:6002/loop2 10
swift-ring-builder container.builder add z2-192.168.252.71:6001/loop2 10
swift-ring-builder object.builder add z2-192.168.252.71:6000/loop2 10

swift-ring-builder account.builder add z3-192.168.252.72:6002/loop2 10
swift-ring-builder container.builder add z3-192.168.252.72:6001/loop2 10
swift-ring-builder object.builder add z3-192.168.252.72:6000/loop2 10

Verify the ring contents:

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder

Rebalance the rings (this could take a while):

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

Set permissions on the /etc/swift directory:

chown -R swift:swift /etc/swift

Similar to the /etc/swift/swift.conf file, every node in the Swift cluster needs a copy of the three ring files. They are already on the proxy1 node, so copy them to the three Swift Object nodes (you will move these files to the proper directory on the Swift Object nodes later):

scp /etc/swift/*.ring.gz 192.168.236.70:/root/

scp /etc/swift/*.ring.gz 192.168.236.71:/root/

scp /etc/swift/*.ring.gz 192.168.236.72:/root/

Enable the openstack-swift-proxy service:

chkconfig openstack-swift-proxy on

Start the openstack-swift-proxy service:

service openstack-swift-proxy start
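Because the healthcheck middleware is in the proxy pipeline, you can optionally confirm the proxy is responding; the following request should return OK:

curl http://192.168.236.60:8080/healthcheck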

The Swift Proxy node should now be configured. Next, you will setup the three Swift Object nodes.

Setup the object1 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object1

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Install EPEL and the RDO package repository:

yum install http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

Install the necessary repository packages:

yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd

Open /etc/xinetd.d/rsync and modify the following line to turn the rsync daemon on:

disable = no

Create file /etc/rsyncd.conf and paste the following contents:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.252.70

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Enable and start the xinetd service:

chkconfig xinetd on

service xinetd start

Create the Swift Recon cache directory (this is the recon_cache_path referenced in the configuration files below) and set its permissions:

mkdir -p /var/cache/swift

chown -R swift:swift /var/cache/swift

Open /etc/swift/account-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6002
workers = 2

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
account_recon = true 

Open /etc/swift/container-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6001
workers = 2

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
container_recon = true

Open /etc/swift/object-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6000
workers = 3

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
object_recon = true

Move the ring files into the proper directory:

mv /root/*.ring.gz /etc/swift/

Move swift.conf into the proper directory:

mv /root/swift.conf /etc/swift/

Set permissions on the /etc/swift directory:

chown -R swift:swift /etc/swift

Create the storage directory:

mkdir -p /srv/node/loop2

Configure the loopback storage file:

dd if=/dev/zero of=/mnt/object-volume1 bs=1 count=0 seek=10G

losetup /dev/loop2 /mnt/object-volume1

Swift is meant to be filesystem agnostic, but has been significantly tested with XFS, so format the loopback storage file as XFS:

mkfs.xfs -i size=1024 /dev/loop2

echo "/dev/loop2 /srv/node/loop2 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab

mount -a

Put the following lines in /etc/rc.d/rc.local so the loopback device is re-attached, the storage is remounted, and the Swift services are restarted whenever you vagrant reload the virtual machine:

losetup /dev/loop2 /mnt/object-volume1
mount -a
swift-init all restart
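If you prefer to append these lines from the shell, a heredoc works (this assumes /etc/rc.d/rc.local already exists, which it does on CentOS 6):

cat >>/etc/rc.d/rc.local <<'EOF'
losetup /dev/loop2 /mnt/object-volume1
mount -a
swift-init all restart
EOF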

Set permissions on the storage directory:

chown swift:swift /srv/node/loop2

Start and enable all services:

for service in \
  openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor \
  openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor \
  openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do
  service $service start
  chkconfig $service on
done
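Optionally, spot-check that the services are running, for example:

service openstack-swift-object status
service openstack-swift-container status
service openstack-swift-account status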

The first Swift Object node should now be configured. Next, you will be going through the same steps to setup the remaining two Swift Object nodes.

Setup the object2 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object2

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Follow all of the same steps from the object1 section above. However, be sure to change the IP addresses in the following files (a single sed command that makes all of these substitutions is shown after the list).

In /etc/rsyncd.conf be sure to set address to 192.168.252.71.

In /etc/swift/account-server.conf be sure to set bind_ip to 192.168.252.71.

In /etc/swift/container-server.conf be sure to set bind_ip to 192.168.252.71.

In /etc/swift/object-server.conf be sure to set bind_ip to 192.168.252.71.
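Assuming you copied the object1 configuration files verbatim, a single sed command can make all four substitutions at once:

sed -i 's/192.168.252.70/192.168.252.71/g' /etc/rsyncd.conf /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf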

Setup the object3 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object3

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Follow all of the same steps from the object1 section above. However, be sure to change the IP addresses in the following files (again, a single sed command that makes all of these substitutions is shown after the list).

In /etc/rsyncd.conf be sure to set address to 192.168.252.72.

In /etc/swift/account-server.conf be sure to set bind_ip to 192.168.252.72.

In /etc/swift/container-server.conf be sure to set bind_ip to 192.168.252.72.

In /etc/swift/object-server.conf be sure to set bind_ip to 192.168.252.72.
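As with object2, assuming verbatim copies of the object1 configuration files:

sed -i 's/192.168.252.70/192.168.252.72/g' /etc/rsyncd.conf /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf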

Using the Swift Cluster

At this point you have set up the OpenStack Swift cluster and can use the swift command to list, upload, download, and delete containers and objects and post metadata changes to those containers and objects.

The following commands will do all of these functions within the admin tenant. The tempauth section in /etc/swift/proxy-server.conf also set up test and test2 tenants. Containers and objects in each tenant are separate from each other.

The following commands can be run from a workstation or server that has the Swift Python client installed and can communicate with IP address 192.168.236.60.

In order to do anything with the swift command, you have to authenticate to a tenant as a user. This can be done entirely on the command line:

swift -A http://192.168.236.60:8080/auth/v1.0 -U admin:admin -K admin $SUBCOMMAND
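Under the hood, the swift client first sends an authentication request to the TempAuth endpoint and receives a storage URL and auth token in the response headers. You can see this exchange directly with curl:

curl -i -H 'X-Storage-User: admin:admin' -H 'X-Storage-Pass: admin' http://192.168.236.60:8080/auth/v1.0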

However, it quickly becomes cumbersome to write the authentication parameters over and over. Instead, you can create a file that contains the same parameters and source it into your environment to make the swift command shorter and quicker to use. Create a file called openrc with the following contents:

export ST_AUTH=http://192.168.236.60:8080/auth/v1.0
export ST_USER=admin:admin
export ST_KEY=admin

Now source the file into your environment:

source ~/openrc

List the containers for account admin in tenant admin (it should return nothing because there are no containers):

swift list

Display information about the tenant:

swift stat

Upload file photo1.jpg (this needs to exist on your workstation) to container photos in the admin tenant (the container will automatically be created if it does not already exist):

swift upload photos photo1.jpg

Display information about the container:

swift stat photos

Display information about the object:

swift stat photos photo1.jpg

List all objects in container photos in the admin tenant:

swift list photos

Download object photo1.jpg from container photos in the admin tenant:

swift download photos photo1.jpg

Right now, you can only access photo1.jpg using the swift command by authenticating as the admin user to the admin tenant. You can make every object within the photos container publicly accessible through a web browser with the following command:

swift post -r '.r:*' photos

Now you can also access object photo1.jpg through a web browser using the following URL:

http://192.168.236.60:8080/v1/AUTH_admin/photos/photo1.jpg
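Or fetch it with curl from any machine that can reach the proxy node:

curl -o photo1.jpg http://192.168.236.60:8080/v1/AUTH_admin/photos/photo1.jpg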

If you want to remove the ability to publicly access any object in the photos container, you can reset the permissions with the following command:

swift post -r '' photos

Now you will get an access denied error if you try to access object photo1.jpg through a web browser with the URL above.

Delete object photo1.jpg from container photos in the admin tenant:

swift delete photos photo1.jpg

Delete container photos from the admin tenant (everything in the container will be deleted):

swift delete photos

Get the disk usage stats for your cluster:

swift-recon -d

Get the cluster load average stats:

swift-recon -l

Get the replication stats for your cluster:

swift-recon -r


Thanks for reading and take care.