
Install a Stand-alone, Multi-node OpenStack Swift Cluster with VirtualBox or VMware Fusion and Vagrant

Updated March 17, 2019


The OpenStack Swift developer website describes Swift best:

Swift is a highly available, distributed, eventually consistent object/blob store. Organizations can use Swift to store lots of data efficiently, safely, and cheaply.

For being such a powerful object storage platform, I found Swift surprisingly easy to set up and configure. However, setup becomes more difficult as the number of nodes, racks, and data centers increases.

But most of us do not have that many nodes, racks, or data centers and simply want to set up a Swift cluster to play with on our workstations. This is where Vagrant and VirtualBox or VMware Fusion come in.

This post describes how to set up an OpenStack Swift cluster on CentOS 6.5 with one Swift Proxy node and three Swift Object nodes, using either an automated or a manual install. The environment has two networks: one for management traffic and the Swift API endpoint, and a second for Swift backend and replication traffic.
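The node layout used throughout this post, as defined in the Vagrantfiles below:

proxy1  - 192.168.236.60 (management/API), 192.168.252.60 (replication)
object1 - 192.168.236.70 (management/API), 192.168.252.70 (replication)
object2 - 192.168.236.71 (management/API), 192.168.252.71 (replication)
object3 - 192.168.236.72 (management/API), 192.168.252.72 (replication)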

Setup Vagrant

Download and install the latest version of Vagrant for your operating system.

Jump to either the Vagrant with VirtualBox or Vagrant with VMware Fusion section depending on what you want to use.

Using Vagrant with VirtualBox is free; using VMware Fusion costs about $140.00 total between Fusion itself and the Vagrant provider license.

Vagrant with VirtualBox

Download and install the latest version of VirtualBox for your operating system.

Once VirtualBox is installed, jump to the Install Swift section.

Vagrant with VMware Fusion

First, purchase ($59.99), download, and install the latest version of VMware Fusion 5 or 6.

In addition, purchase ($79.00) the Vagrant VMware Provider License from HashiCorp; you cannot use Vagrant with VMware Fusion without this license.

Second, once you have purchased the license, open Terminal and install the Vagrant VMware Fusion provider plugin:

vagrant plugin install vagrant-vmware-fusion

HashiCorp should have emailed you the Vagrant VMware Fusion Provider License by now. License the provider with the following command (save the license file in a safe place; Vagrant copies it to its own directory as well):

vagrant plugin license vagrant-vmware-fusion ~/Downloads/license.lic

Verify everything is working by running any Vagrant command; an error will be displayed if something is wrong.
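For example, listing the installed plugins should show the VMware Fusion provider:

vagrant plugin list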

Once VMware Fusion and the Vagrant provider license are installed, jump to the Install Swift section.

Install Swift

As mentioned, this post covers installing a multi-node Swift cluster with automated or manual options. The automated install contains all of the manual install steps in the Vagrantfile.

If this is your first time setting up a Swift cluster, I recommend going through the Manual Install section so you can see how everything works together. If you have already done a manual install of a Swift cluster or just want to quickly get started using Swift, go to the Automated Install section.

Automated Install

Create a directory somewhere on your workstation to save the Vagrantfile and change into that directory:

mkdir -p ~/Vagrant/swift-one-proxy-three-object

cd ~/Vagrant/swift-one-proxy-three-object

Create file Vagrantfile with the following contents:

# -*- mode: ruby -*-

# vi: set ft=ruby :

Vagrant.require_version ">= 1.5.0"

#####################
#
# Vagrant Box Parameters Begin
#
#####################

boxes = [
    {
        :name => "proxy1",
        :eth1 => "192.168.236.60",
        :eth2 => "192.168.252.60",
        :mem => "512",
        :cpu => "1",
        :nodetype => "proxy"
    },
    {
        :name => "object1",
        :eth1 => "192.168.236.70",
        :eth2 => "192.168.252.70",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    },
    {
        :name => "object2",
        :eth1 => "192.168.236.71",
        :eth2 => "192.168.252.71",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    },
    {
        :name => "object3",
        :eth1 => "192.168.236.72",
        :eth2 => "192.168.252.72",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    }
]

#####################
#
# Vagrant Box Parameters End
#
#####################

#####################
#
# Common Script Begin
#
#####################

$commonscript = <<COMMONSCRIPT
# Set verbose
set -v

# Set exit on error
set -e

# Set the root password to vagrant; the object node provisioning
# scripts use it later when running ssh-copy-id against proxy1
echo root:vagrant | chpasswd

cat << EOF >> /etc/hosts
192.168.236.60 proxy1
192.168.236.70 object1
192.168.236.71 object2
192.168.236.72 object3
EOF

# Environment variables used in scripts
ETH1_IP=`ip route get 192.168.236.0/24 | awk 'NR==1 {print $NF}'`
echo export ETH1_IP=$ETH1_IP > /etc/profile.d/eth1-ip.sh
ETH2_IP=`ip route get 192.168.252.0/24 | awk 'NR==1 {print $NF}'`
echo export ETH2_IP=$ETH2_IP > /etc/profile.d/eth2-ip.sh
COMMONSCRIPT

#####################
#
# Common Script End
#
#####################

#####################
#
# Proxy Script Begin
#
#####################

$proxyscript = <<PROXYSCRIPT
# Set verbose
set -v

# Set exit on error
set -e

yum install -y https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

yum install -y http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install -y openstack-swift-proxy python-swiftclient memcached

# Make memcached listen on proxy1's replication network address
sed -i 's/^\(OPTIONS=\).*/\1"-l 192.168.252.60"/' /etc/sysconfig/memcached

chkconfig memcached on
service memcached start

cat << EOF > /etc/swift/proxy-server.conf
[DEFAULT]
bind_ip = $ETH1_IP
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = $ETH2_IP:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:tempauth]
use = egg:swift#tempauth
# user_<tenant>_<username> = <password> <privileges>
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
EOF

cat << EOF > /etc/swift/swift.conf
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
swift_hash_path_prefix = `od -t x8 -N 8 -A n </dev/random`
swift_hash_path_suffix = `od -t x8 -N 8 -A n </dev/random`
EOF

cd /etc/swift

# Build the rings: 2^18 partitions, 3 replicas, minimum 1 hour between partition moves
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

# Add each object node's storage device to the rings: z<zone>-<ip>:<port>/<device> <weight>
swift-ring-builder account.builder add z1-192.168.252.70:6002/loop2 10
swift-ring-builder container.builder add z1-192.168.252.70:6001/loop2 10
swift-ring-builder object.builder add z1-192.168.252.70:6000/loop2 10

swift-ring-builder account.builder add z2-192.168.252.71:6002/loop2 10
swift-ring-builder container.builder add z2-192.168.252.71:6001/loop2 10
swift-ring-builder object.builder add z2-192.168.252.71:6000/loop2 10

swift-ring-builder account.builder add z3-192.168.252.72:6002/loop2 10
swift-ring-builder container.builder add z3-192.168.252.72:6001/loop2 10
swift-ring-builder object.builder add z3-192.168.252.72:6000/loop2 10

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

chown -R swift:swift /etc/swift

chkconfig openstack-swift-proxy on

service openstack-swift-proxy start
PROXYSCRIPT

#####################
#
# Proxy Script End
#
#####################

#####################
#
# Object Script Begin
#
#####################

$objectscript = <<OBJECTSCRIPT
# Set verbose
set -v

# Set exit on error
set -e

yum install -y https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

yum install -y http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install -y openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd

cat << EOF > /etc/xinetd.d/rsync
# default: off
# description: The rsync server is a good addition to an ftp server, as it \
#   allows crc checksumming etc.
service rsync
{
    disable         = no
    flags           = IPv6
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/bin/rsync
    server_args     = --daemon
    log_on_failure  += USERID
}
EOF

cat << EOF > /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = $ETH2_IP

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
EOF

chkconfig xinetd on
service xinetd start

mkdir -p /var/swift/recon

chown -R swift:swift /var/swift/recon

cat << EOF > /etc/swift/account-server.conf
[DEFAULT]
bind_ip = $ETH2_IP
bind_port = 6002
workers = 2

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
account_recon = true
EOF

cat << EOF > /etc/swift/container-server.conf
[DEFAULT]
bind_ip = $ETH2_IP
bind_port = 6001
workers = 2

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
container_recon = true
EOF

cat << EOF > /etc/swift/object-server.conf
[DEFAULT]
bind_ip = $ETH2_IP
bind_port = 6000
workers = 3

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
object_recon = true
EOF

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

ssh-keyscan proxy1 >> /root/.ssh/known_hosts
ssh-keyscan 192.168.236.60 >> /root/.ssh/known_hosts

yum install -y expect

# Push this node's SSH key to proxy1 (root password auth) so the scp commands below run non-interactively
expect<<EOF
spawn ssh-copy-id proxy1
expect "root@proxy1's password:"
send "vagrant\n"
expect eof
EOF

scp proxy1:/etc/swift/*.ring.gz /etc/swift/

scp proxy1:/etc/swift/swift.conf /etc/swift/

chown -R swift:swift /etc/swift

mkdir -p /srv/node/loop2

# Create a sparse 10 GB file and attach it to /dev/loop2 to serve as this node's storage device
dd if=/dev/zero of=/mnt/object-volume1 bs=1 count=0 seek=10G
losetup /dev/loop2 /mnt/object-volume1

mkfs.xfs -i size=1024 /dev/loop2
echo "/dev/loop2 /srv/node/loop2 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mount -a

chown swift:swift /srv/node/loop2

for service in openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor; do service $service start; chkconfig $service on; done

# Re-attach the loopback device, remount, and restart Swift after a reboot or vagrant reload
cat << EOF > /etc/rc.local
losetup /dev/loop2 /mnt/object-volume1
mount -a
swift-init all restart
exit
EOF
OBJECTSCRIPT

#####################
#
# Object Script End
#
#####################

#####################
#
# Virtual Machine Definition Begin
#
#####################

Vagrant.configure(2) do |config|

  config.vm.box = "centos-6.5-x86_64"
  config.vm.box_url = "http://public.thornelabs.net/centos-6.5-x86_64.box"

  config.vm.provider "vmware_fusion" do |v, override|
    override.vm.box = "centos-6.5-x86_64"
    override.vm.box_url = "http://public.thornelabs.net/centos-6.5-x86_64.vmware.box"
  end

  # Turn off shared folders
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]

      config.vm.provision :shell, inline: $commonscript

      config.vm.network :private_network, ip: opts[:eth1]
      config.vm.network :private_network, ip: opts[:eth2]

      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end

      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end

      if opts[:nodetype] == "proxy"
          config.vm.provision :shell, inline: $proxyscript
      end

      if opts[:nodetype] == "object"
          config.vm.provision :shell, inline: $objectscript
      end
    end
  end
end

#####################
#
# Virtual Machine Definition End
#
#####################

At this point you are ready to start up your Vagrant environment.

If you are using VirtualBox:

vagrant up

If you are using VMware Fusion:

vagrant up --provider vmware_fusion
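Either way, provisioning all four virtual machines takes some time. When it finishes, confirm that every node is running:

vagrant status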

Once the Vagrant environment is up, scroll down to the Using the Swift Cluster section.

Manual Install

Setup the Vagrant Environment

Create a directory somewhere on your workstation to save your Vagrantfile and change into that directory:

mkdir -p ~/Vagrant/swift-one-proxy-three-object

cd ~/Vagrant/swift-one-proxy-three-object

Create file Vagrantfile with the following contents:

# -*- mode: ruby -*-

# vi: set ft=ruby :

Vagrant.require_version ">= 1.5.0"

#####################
#
# Vagrant Box Parameters Begin
#
#####################

boxes = [
    {
        :name => "proxy1",
        :eth1 => "192.168.236.60",
        :eth2 => "192.168.252.60",
        :mem => "512",
        :cpu => "1",
        :nodetype => "proxy"
    },
    {
        :name => "object1",
        :eth1 => "192.168.236.70",
        :eth2 => "192.168.252.70",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    },
    {
        :name => "object2",
        :eth1 => "192.168.236.71",
        :eth2 => "192.168.252.71",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    },
    {
        :name => "object3",
        :eth1 => "192.168.236.72",
        :eth2 => "192.168.252.72",
        :mem => "512",
        :cpu => "1",
        :nodetype => "object"
    }
]

#####################
#
# Vagrant Box Parameters End
#
#####################

#####################
#
# Common Script Begin
#
#####################

$commonscript = <<COMMONSCRIPT
# Set verbose
set -v

# Set exit on error
set -e

# Set the root password to vagrant so you can su - to root on each node
echo root:vagrant | chpasswd

cat << EOF >> /etc/hosts
192.168.236.60 proxy1
192.168.236.70 object1
192.168.236.71 object2
192.168.236.72 object3
EOF
COMMONSCRIPT

#####################
#
# Common Script End
#
#####################

#####################
#
# Virtual Machine Definition Begin
#
#####################

Vagrant.configure(2) do |config|

  config.vm.box = "centos-6.5-x86_64"
  config.vm.box_url = "http://public.thornelabs.net/centos-6.5-x86_64.box"

  config.vm.provider "vmware_fusion" do |v, override|
    override.vm.box = "centos-6.5-x86_64"
    override.vm.box_url = "http://public.thornelabs.net/centos-6.5-x86_64.vmware.box"
  end

  # Turn off shared folders
  config.vm.synced_folder ".", "/vagrant", id: "vagrant-root", disabled: true

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]

      config.vm.provision :shell, inline: $commonscript

      config.vm.network :private_network, ip: opts[:eth1]
      config.vm.network :private_network, ip: opts[:eth2]

      config.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = opts[:mem]
        v.vmx["numvcpus"] = opts[:cpu]
      end

      config.vm.provider "virtualbox" do |v|
        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
      end
    end
  end
end

#####################
#
# Virtual Machine Definition End
#
#####################

At this point you are ready to start up your Vagrant environment.

If you are using VirtualBox:

vagrant up

If you are using VMware Fusion:

vagrant up --provider vmware_fusion

Once the Vagrant environment is up, continue on to the Setup the proxy1 Swift Proxy Node section.

Setup the proxy1 Swift Proxy Node

Log in to the Swift Proxy node:

vagrant ssh proxy1

Log in as the root user and stay logged in as root throughout this section (the root password is vagrant):

su -

Install EPEL and the RDO package repository:

yum install http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

Install the following repository packages:

yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token memcached

Open /etc/sysconfig/memcached and modify the following line so memcached listens on eth2:

OPTIONS="-l 192.168.252.60"

Enable and start memcached:

chkconfig memcached on

service memcached start

Swift can use its internal authentication system, TempAuth, or an OpenStack Keystone server. In this post you are going to set up Swift TempAuth. If you would rather use Keystone, finish all of the steps in this post, then follow the steps in authenticating OpenStack Swift against Keystone instead of TempAuth.
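For reference, TempAuth hands out tokens over a simple HTTP exchange. Once the proxy is configured and started later in this section, a request like the following, using the admin user defined below, returns the X-Auth-Token and X-Storage-Url headers that the swift client uses under the hood:

curl -i -H 'X-Storage-User: admin:admin' -H 'X-Storage-Pass: admin' http://192.168.236.60:8080/auth/v1.0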

Open /etc/swift/proxy-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.236.60
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.252.60:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:tempauth]
use = egg:swift#tempauth
# user_<tenant>_<username> = <password> <privileges> 
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3

Change into /etc/swift:

cd /etc/swift

Run the following command to create /etc/swift/swift.conf. This file is important because the prefix and suffix values are used as salts when generating the hashes for the ring mappings.

cat >/etc/swift/swift.conf <<EOF
[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
swift_hash_path_prefix = `od -t x8 -N 8 -A n </dev/random`
swift_hash_path_suffix = `od -t x8 -N 8 -A n </dev/random`
EOF
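These salts are wrapped around an object's /account/container/object path before it is hashed, and the resulting MD5 digest determines which partition, and therefore which devices, hold the object. A minimal sketch of the idea with hypothetical salt values:

printf '%s' 'hypotheticalprefix/AUTH_admin/photos/photo1.jpghypotheticalsuffix' | md5sum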

The /etc/swift/swift.conf file needs to be on every node in the Swift cluster. It is already on the proxy1 node, so copy it to the three Swift Object nodes (you will move this file to the proper directory on the Swift Object nodes later):

scp /etc/swift/swift.conf 192.168.236.70:/root/

scp /etc/swift/swift.conf 192.168.236.71:/root/

scp /etc/swift/swift.conf 192.168.236.72:/root/

Create the account, container, and object ring builder files. The arguments are the partition power (2^18 partitions), the replica count (3), and the minimum number of hours between moves of any given partition (1):

swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

Typically a Swift Object node will have many hard disks available to use as storage devices. However, because this virtual environment has very limited resources, you are only going to use a 10 GB loopback file on each Object node. That loopback file will be set up in the next section, but for now you need to add each device to the rings. Each entry has the form z<zone>-<ip>:<port>/<device> <weight>:

swift-ring-builder account.builder add z1-192.168.252.70:6002/loop2 10
swift-ring-builder container.builder add z1-192.168.252.70:6001/loop2 10
swift-ring-builder object.builder add z1-192.168.252.70:6000/loop2 10

swift-ring-builder account.builder add z2-192.168.252.71:6002/loop2 10
swift-ring-builder container.builder add z2-192.168.252.71:6001/loop2 10
swift-ring-builder object.builder add z2-192.168.252.71:6000/loop2 10

swift-ring-builder account.builder add z3-192.168.252.72:6002/loop2 10
swift-ring-builder container.builder add z3-192.168.252.72:6001/loop2 10
swift-ring-builder object.builder add z3-192.168.252.72:6000/loop2 10

Verify the ring contents:

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder

Rebalance the rings (this could take a while):

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

Set permissions on the /etc/swift directory:

chown -R swift:swift /etc/swift

Similar to the /etc/swift/swift.conf file, every node in the Swift cluster needs a copy of the three ring files. They are already on the proxy1 node, so copy them to the three Swift Object nodes (you will move these files to the proper directory on the Swift Object nodes later):

scp /etc/swift/*.ring.gz 192.168.236.70:/root/

scp /etc/swift/*.ring.gz 192.168.236.71:/root/

scp /etc/swift/*.ring.gz 192.168.236.72:/root/

Enable the openstack-swift-proxy service:

chkconfig openstack-swift-proxy on

Start the openstack-swift-proxy service:

service openstack-swift-proxy start

The Swift Proxy node should now be configured. Next, you will set up the three Swift Object nodes.

Setup the object1 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object1

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Install EPEL and the RDO package repository:

yum install http://dl.fedoraproject.org/pub/epel/6Server/x86_64/epel-release-6-8.noarch.rpm

yum install https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm

sed -i 's$openstack/openstack$openstack/EOL/openstack$g' /etc/yum.repos.d/rdo-release.repo

Install the necessary repository packages:

yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd

Open /etc/xinetd.d/rsync and modify the following line to turn the rsync daemon on:

disable = no

Create file /etc/rsyncd.conf and paste the following contents:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.252.70

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Enable and start the xinetd service:

chkconfig xinetd on

service xinetd start
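If you want to sanity check the rsync daemon, listing its modules from this node should show the account, container, and object sections defined above:

rsync 192.168.252.70::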

Create the Swift Recon cache directory and set its permissions:

mkdir -p /var/swift/recon

chown -R swift:swift /var/swift/recon

Open /etc/swift/account-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6002
workers = 2

[pipeline:main]
pipeline = recon account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
account_recon = true 

Open /etc/swift/container-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6001
workers = 2

[pipeline:main]
pipeline = recon container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
container_recon = true

Open /etc/swift/object-server.conf and paste the following contents:

[DEFAULT]
bind_ip = 192.168.252.70
bind_port = 6000
workers = 3

[pipeline:main]
pipeline = recon object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
object_recon = true

Move the ring files into the proper directory:

mv /root/*.ring.gz /etc/swift/

Move swift.conf into the proper directory:

mv /root/swift.conf /etc/swift/

Set permissions on the /etc/swift directory:

chown -R swift:swift /etc/swift

Create the storage directory:

mkdir -p /srv/node/loop2

Create a sparse 10 GB loopback storage file and attach it to /dev/loop2:

dd if=/dev/zero of=/mnt/object-volume1 bs=1 count=0 seek=10G

losetup /dev/loop2 /mnt/object-volume1

Swift is meant to be filesystem agnostic, but it is most extensively tested with XFS, so format the loopback device as XFS:

mkfs.xfs -i size=1024 /dev/loop2

echo "/dev/loop2 /srv/node/loop2 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab

mount -a

Put the following lines in /etc/rc.d/rc.local to re-attach the loopback device, remount the storage, and restart the Swift services if you ever vagrant reload the virtual machine:

losetup /dev/loop2 /mnt/object-volume1
mount -a
swift-init all restart

Set permissions on the storage directory:

chown swift:swift /srv/node/loop2

Start and enable all services:

for service in openstack-swift-object openstack-swift-object-replicator openstack-swift-object-updater openstack-swift-object-auditor openstack-swift-container openstack-swift-container-replicator openstack-swift-container-updater openstack-swift-container-auditor openstack-swift-account openstack-swift-account-replicator openstack-swift-account-reaper openstack-swift-account-auditor
do
service $service start
chkconfig $service on
done

The first Swift Object node should now be configured. Next, you will go through the same steps to set up the remaining two Swift Object nodes.

Setup the object2 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object2

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Follow all of the same steps from the object1 section above. However, be sure to change the IP addresses in the following files (an optional shortcut follows the list).

In /etc/rsyncd.conf be sure to set address to 192.168.252.71.

In /etc/swift/account-server.conf be sure to set bind_ip to 192.168.252.71.

In /etc/swift/container-server.conf be sure to set bind_ip to 192.168.252.71.

In /etc/swift/object-server.conf be sure to set bind_ip to 192.168.252.71.
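If you created the object2 files by copying the object1 versions verbatim, one way to make these substitutions in bulk (an optional shortcut, assuming nothing else in the files differs) is:

sed -i 's/192\.168\.252\.70/192.168.252.71/g' /etc/rsyncd.conf /etc/swift/account-server.conf /etc/swift/container-server.conf /etc/swift/object-server.conf

The same approach works on object3 with 192.168.252.72.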

Setup the object3 Swift Object Node

Log in to the Swift Object node:

vagrant ssh object3

Log in as the root user and stay logged in as root throughout this process (the root password is vagrant):

su -

Follow all of the same steps from the object1 section above. However, be sure to change the IP addresses in the following files.

In /etc/rsyncd.conf be sure to set address to 192.168.252.72.

In /etc/swift/account-server.conf be sure to set bind_ip to 192.168.252.72.

In /etc/swift/container-server.conf be sure to set bind_ip to 192.168.252.72.

In /etc/swift/object-server.conf be sure to set bind_ip to 192.168.252.72.

Using the Swift Cluster

At this point you have set up the OpenStack Swift cluster and can use the swift command to list, upload, download, and delete containers and objects, and to post metadata changes to them.

The following commands perform all of these functions within the admin tenant. The tempauth section in /etc/swift/proxy-server.conf also set up test and test2 tenants; containers and objects in each tenant are separate from one another.

The following commands can be run from a workstation or server that has the Swift Python client installed and can communicate with IP address 192.168.236.60.

In order to do anything with the swift command, you have to authenticate to a tenant as a user. This can be done entirely on the command line:

swift -A http://192.168.236.60:8080/auth/v1.0 -U admin:admin -K admin $SUBCOMMAND
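For example, substituting stat for $SUBCOMMAND displays the admin account's details, and pointing the same flags at the test tenant's tester user (defined in the tempauth section earlier) switches tenants:

swift -A http://192.168.236.60:8080/auth/v1.0 -U admin:admin -K admin stat

swift -A http://192.168.236.60:8080/auth/v1.0 -U test:tester -K testing stat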

However, it quickly becomes cumbersome to write the authentication parameters over and over. Instead, you can create a file that contains the same parameters and source it into your environment to make the swift command shorter and quicker to use. Create a file called openrc in your home directory with the following contents:

export ST_AUTH=http://192.168.236.60:8080/auth/v1.0
export ST_USER=admin:admin
export ST_KEY=admin

Now source the file into your environment:

source ~/openrc

List the containers for account admin in tenant admin (it should return nothing because there are no containers):

swift list

Display information about the tenant:

swift stat

Upload file photo1.jpg (this needs to exist on your workstation) to container photos in the admin tenant (the container will automatically be created if it does not already exist):

swift upload photos photo1.jpg

Display information about the container:

swift stat photos

Display information about the object:

swift stat photos photo1.jpg

List all objects in container photos in the admin tenant:

swift list photos

Download object photo1.jpg from container photos in the admin tenant:

swift download photos photo1.jpg

Right now, you can only access photo1.jpg using the swift command by authenticating as the admin user to the admin tenant. You can make every object within the photos container publicly accessible through a web browser with the following command:

swift post -r '.r:*' photos

Now you can also access object photo1.jpg through a web browser using the following URL:

http://192.168.236.60:8080/v1/AUTH_admin/photos/photo1.jpg
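For example, a plain HTTP client can now fetch the object without authenticating:

curl -O http://192.168.236.60:8080/v1/AUTH_admin/photos/photo1.jpg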

If you want to remove the ability to publicly access any object in the photos container, you can reset the permissions with the following command:

swift post -r '' photos

Now you will get an access denied error if you try to access object photo1.jpg through a web browser with the URL above.

Delete object photo1.jpg from container photos in the admin tenant:

swift delete photos photo1.jpg

Delete container photos from the admin tenant (everything in the container will be deleted):

swift delete photos

Get the disk usage stats for your cluster:

swift-recon -d

Get the cluster load average stats:

swift-recon -l

Get the replication stats for your cluster:

swift-recon -r
