Building a local OpenStack Swift object store

I needed a solution for saving data, and I wanted to design an API so I could handle different storage backends. One of the backends I wanted to support was Swift, the object store that is part of OpenStack. Creating a small local installation for testing wasn't easy and required some research. The configuration options seem endless, so I'll try to summarize what is required to get a minimal install up and running.

On Gentoo you need to install a few packages and their dependencies. Your distribution probably has similar packages.

USE="account container object proxy"

emerge sys-cluster/swift
emerge sys-auth/keystone
emerge dev-python/python-keystoneclient

Keystone handles the authentication and user permissions. There are multiple solutions for this; I tried a bunch and found Keystone to be one of the simplest. To ease the use of Keystone I set an admin token that is later used to give me access to the Keystone API from the client. In the example Keystone configuration below I use MySQL as my database. The standard install uses SQLite, but I found a MySQL database a bit easier to work with when you want to look something up. Personal preference.


[DEFAULT]
admin_token = ADMIN

[auth]
token = keystone.auth.plugins.token.Token

[database]
connection = mysql://{username}:{password}@localhost/keystone

After this setup you need to start the MySQL server and create the keystone database. Give your user permissions on it, then you can start the Keystone service.
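For reference, the database creation and schema sync can look roughly like this. This is a sketch, not a definitive recipe: it assumes a local MySQL root account, and the init-script path assumes Gentoo's OpenRC; `keystone-manage db_sync` creates the tables. The {username}/{password} placeholders match the connection string above.

```shell
mysql -u root -p <<'SQL'
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO '{username}'@'localhost' IDENTIFIED BY '{password}';
SQL

keystone-manage db_sync
/etc/init.d/keystone start
```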

Then we set some environment variables so we don't have to supply this information on every call to the client. The token is the admin_token from the Keystone configuration above.

export OS_SERVICE_TOKEN=ADMIN
export OS_SERVICE_ENDPOINT=http://localhost:35357/v2.0

Next we need to run these commands to set up a tenant, a user, a role, a service and an endpoint.

keystone tenant-create --name {} [--description {}]
keystone user-create --name {} [--tenant {}] [--pass [{}]] [--email {}]
keystone role-create --name {}
keystone user-role-add --user {} --role {} [--tenant {}]
keystone service-create --type {} [--name {}] [--description {}]
keystone endpoint-create [--region {}] --service {} --publicurl {} [--adminurl {}] [--internalurl {}]

Below are some examples. In my case I created a test tenant, then a test user in that tenant. I created the admin role and assigned the user to it. Creating the object-store service is required later as a service point for the API; I call this service test as well. Lastly I created an endpoint with all URLs set to the same proxy service URL (http://localhost:8080/v1.0/ followed by the tenant id created earlier).

keystone tenant-create --name test
keystone user-create --name testuser --tenant ${tenant_id} --pass
keystone role-create --name admin
keystone user-role-add --user ${user_id} --role ${role_id} --tenant ${tenant_id}
keystone service-create --type object-store --name test
keystone endpoint-create --service test \
  --publicurl http://localhost:8080/v1.0/${tenant_id} \
  --internalurl http://localhost:8080/v1.0/${tenant_id} \
  --adminurl http://localhost:8080/v1.0/${tenant_id}
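The ${tenant_id}, ${user_id} and ${role_id} variables above are not set by the keystone client itself; you have to pull them out of the ASCII tables it prints. A small sketch of one way to extract an id column with awk. The table below is a made-up sample in the client's output format; in real use you would pipe the keystone command itself into awk, e.g. `tenant_id=$(keystone tenant-create --name test | awk '/^\| id / {print $4}')`.

```shell
# Sample of the ASCII table the keystone client prints (the id is made up).
sample='+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| id       | 0123456789abcdef0123456789abcdef |
| name     | test                             |
+----------+----------------------------------+'

# Match the "id" row and print the fourth whitespace-separated field.
tenant_id=$(printf '%s\n' "$sample" | awk '/^\| id / {print $4}')
echo "$tenant_id"
```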

To secure our installation we need to add a hash suffix and prefix in /etc/swift/swift.conf. This might not be strictly required, but it is good practice.

[swift-hash]
swift_hash_path_suffix = {SOME CRAZY SUFFIX}
swift_hash_path_prefix = {SOME CRAZY PREFIX}
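These values are mixed into the hash Swift uses to decide where data lives, which is why they must stay secret and must never change after data has been stored. A rough sketch of what Swift's hash_path helper does (md5 over prefix + /account/container/object + suffix); this is an illustration, not Swift's exact code, and the prefix/suffix values are made up:

```shell
swift_hash_path_prefix='changeme-prefix'
swift_hash_path_suffix='changeme-suffix'

# md5 over prefix + path + suffix, roughly what Swift's hash_path does;
# the resulting digest decides the object's placement in the ring.
path='/AUTH_test/mycontainer/myobject'
hash=$(printf '%s' "${swift_hash_path_prefix}${path}${swift_hash_path_suffix}" | md5sum | cut -d' ' -f1)
echo "$hash"
```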

Then we need to configure each server for its function. In my case I want them all to use the same device prefix, /srv/node. This is the path where we will mount the devices the servers use to store data. I also explicitly define the default ports; this is not required, but it makes it easier to keep track of the service ports.


/etc/swift/object-server.conf:

[DEFAULT]
bind_port = 6000
devices = /srv/node

/etc/swift/container-server.conf:

[DEFAULT]
bind_port = 6001
devices = /srv/node

/etc/swift/account-server.conf:

[DEFAULT]
bind_port = 6002
devices = /srv/node

Now we have all the storage servers configured, but no storage devices for them to use. Each server can handle multiple devices, but for my local test installation I only needed one, and since I didn't have any spare disks lying around I created a virtual one.
First I dd a 1 GB file, partition it with one primary partition (fdisk is interactive here), connect a loop device to it, create an XFS filesystem on the partition (the filesystem Swift recommends), and finally mount it at /srv/node/r0.

dd if=/dev/zero of=disk1.raw bs=512 count=2097152
parted disk1.raw mklabel msdos
fdisk disk1.raw
losetup -P /dev/loop0 disk1.raw
mkfs.xfs /dev/loop0p1
mkdir -p /srv/node/r0
mount /dev/loop0p1 /srv/node/r0

Now we create the rings of devices that will handle data for objects, containers and accounts. The first command below creates a builder file with 2^18 partitions, 3 replicas, and a setting that restricts movement of a partition to at most once an hour. This may well be overkill, but it works for my example and doesn't create overly large files. We then add our server to each ring, with the matching port for each server. You also partition your servers into regions and zones; I chose to put my server in region 1, zone 1. I supply the device r0 that I mounted above and lastly give the server a weight of 10. The weight is not important when you have a single server, but becomes interesting if you want to favor certain devices in your cluster.

swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add --region 1 --zone 1 --ip 127.0.0.1 --port 6000 --device r0 --weight 10
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder create 18 3 1
swift-ring-builder container.builder add --region 1 --zone 1 --ip 127.0.0.1 --port 6001 --device r0 --weight 10
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder create 18 3 1
swift-ring-builder account.builder add --region 1 --zone 1 --ip 127.0.0.1 --port 6002 --device r0 --weight 10
swift-ring-builder account.builder rebalance
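To make the 2^18 partitions concrete: the ring maps an object's hash to a partition number by taking the first four bytes of the md5 digest and keeping the top part_power bits. A bash sketch of that mapping (not Swift's actual implementation, which does this in Python; the hash prefix/suffix are omitted for brevity):

```shell
part_power=18

# md5 of the object path (hash prefix/suffix omitted for brevity)
hash=$(printf '%s' '/AUTH_test/mycontainer/myobject' | md5sum | cut -d' ' -f1)

# take the first 4 bytes (8 hex chars) and keep the top part_power bits
part=$(( 0x${hash:0:8} >> (32 - part_power) ))
echo "$part"   # a number between 0 and 2^18 - 1
```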

When we ran the rebalance commands above we created the *.ring.gz files with the ring information the proxy server needs to handle requests, so now we configure the proxy. We define the port from our endpoint above, 8080, and add keystoneauth to the main pipeline. To make things easy for us, I turn on account_autocreate so we get new accounts without any extra work on our part.


[DEFAULT]
bind_port = 8080

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true

Then we need to define the authtoken filter used in the pipeline. I set identity_uri to the Keystone server, along with the admin token I defined earlier, so the proxy can connect to Keystone to retrieve authentication information.

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
identity_uri = http://localhost:35357/
admin_token = ADMIN

Later in the file we need to add the keystoneauth filter as well, which authorizes incoming requests based on the identity information Keystone returns. In my example I allow the admin role to operate the cluster. reseller_prefix can be used to prefix the tenant id in the proxy endpoint, for example http://localhost:8080/v1.0/AUTH_${tenantId}; I leave it empty to match the endpoint created earlier.

[filter:keystoneauth]
use = egg:swift#keystoneauth
reseller_prefix =
operator_roles = admin

A memcache configuration is also required so you don't get unnecessary errors in the logs; just using the sample /etc/swift/memcache.conf should be fine.

Now we should be able to start all the services: swift-account, swift-container, swift-object and swift-proxy.
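Swift ships a helper, swift-init, that can start them as a group; on Gentoo you could also use the individual init scripts if your packages installed them.

```shell
# "main" covers the proxy, account, container and object servers
swift-init main start
```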

To try my solution I use the php-opencloud library provided by Rackspace.
Below is some code that can be used to connect and test your setup.

$client = new OpenCloud\OpenStack('http://localhost:5000/v2.0/', array(
  'username' => '{keystone_username}',
  'password' => '{keystone_password}',
  'tenantId' => '{keystone_tenant_id}'
));

$service = $client->objectStoreService(
  'test',
  '{keystone_service_region (default "regionOne")}'
);

For more help using this API you can check the project's extensive documentation.

