Setting up a Ceph POSIX filesystem in your cluster

We look into how a Ceph filesystem works and how to add one to your Ceph cluster, covering configuration, authentication, and the other essentials you need to run a cluster with a filesystem.

Set up the MDS services

First, we need to add an MDS (Metadata Server) daemon to each of the nodes.

We call the first node n1. In the following commands we create the directory for the keyring, ask the cluster to create a new key for the MDS and write it to the keyring file, and finally start the service.

sudo mkdir -p /var/lib/ceph/mds/ceph-n1
sudo ceph auth get-or-create mds.n1 mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' | sudo tee /var/lib/ceph/mds/ceph-n1/keyring
sudo systemctl start ceph-mds@n1
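
The unit is now running, but to have the MDS come back automatically after a reboot you can also enable it (using the same ceph-mds@ systemd unit template the start command above relies on):

sudo systemctl enable ceph-mds@n1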

After starting the service, we check the status of the metadata servers.

sudo ceph mds stat

The second node we call n2; the commands are the same, only the node name changes.

sudo mkdir -p /var/lib/ceph/mds/ceph-n2
sudo ceph auth get-or-create mds.n2 mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' | sudo tee /var/lib/ceph/mds/ceph-n2/keyring
sudo systemctl start ceph-mds@n2

Again, we check the MDS status.

sudo ceph mds stat

The last node we call n3, with the same steps once more.

sudo mkdir -p /var/lib/ceph/mds/ceph-n3
sudo ceph auth get-or-create mds.n3 mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' | sudo tee /var/lib/ceph/mds/ceph-n3/keyring
sudo systemctl start ceph-mds@n3

And we check the MDS status a final time.

sudo ceph mds stat
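
All three nodes ran exactly the same three commands, so you could also drive the whole setup from a single admin host. A minimal sketch, assuming passwordless SSH and sudo on each node, a local ceph CLI with admin rights, and MDS ids that match the hostnames:

for node in n1 n2 n3; do
  ssh $node "sudo mkdir -p /var/lib/ceph/mds/ceph-$node"
  sudo ceph auth get-or-create mds.$node mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' | ssh $node "sudo tee /var/lib/ceph/mds/ceph-$node/keyring"
  ssh $node "sudo systemctl start ceph-mds@$node"
done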

The metadata servers are now online. Per filesystem you should have at least two MDS daemons for high availability (one active, one standby), plus one extra standby for further redundancy.

Create a filesystem

We will create two pools for this filesystem, one for data and one for metadata. The data pool needs more placement groups than the metadata pool, so we give it 32 PGs.

sudo ceph osd pool create cephfs_data 32
sudo ceph osd pool create cephfs_metadata 1
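
Before creating the filesystem you can confirm that both pools exist and have the expected PG counts:

sudo ceph osd pool ls detail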

Next up, we create the filesystem itself. A filesystem can have multiple data pools but only one metadata pool. Additional data pools can be tied to specific directories, which is useful if you want to separate the data of different applications (see the sketch after the commands below).

sudo ceph fs new cephfs cephfs_metadata cephfs_data
sudo ceph fs ls
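
To illustrate the multiple-data-pool setup mentioned above: once the filesystem is mounted you can add a second data pool and pin a directory to it through a file layout attribute. A sketch with example names (cephfs_data2, /cephfs/appdata); the layout only affects files created in the directory afterwards, and setfattr comes from the attr package:

sudo ceph osd pool create cephfs_data2 32
sudo ceph fs add_data_pool cephfs cephfs_data2
sudo setfattr -n ceph.dir.layout.pool -v cephfs_data2 /cephfs/appdata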

Mounting the filesystem on a client

Lastly, we create a filesystem user that we can use to connect our client to the cluster filesystem. This user will have read/write access to the root directory /.

sudo ceph fs authorize cephfs client.fsuser / rw
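
The command prints a keyring containing the new key. To store just the secret on the client for later use (the path /etc/ceph/fsuser.secret is only an example), you can extract it with ceph auth get-key:

sudo ceph auth get-key client.fsuser | sudo tee /etc/ceph/fsuser.secret
sudo chmod 600 /etc/ceph/fsuser.secret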

We can use the key returned above in the client's /etc/fstab to mount the filesystem. The line below first lists the monitor nodes, each on port 6789, then the root directory / that we mount to /cephfs on the client system. name is the name of the client user we created above (without the client. prefix), and secret is the key the command above returned. Options such as noatime (skip access-time updates) and _netdev (wait for the network before mounting) are good for performance and reliable boots.

n1:6789,n2:6789,n3:6789:/   /cephfs ceph    name=fsuser,secret=AAAAAAADDD5AAAAAAAAFFFA==,noatime,_netdev     0       2
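
Since /etc/fstab is world-readable, putting the raw secret in it exposes the key to every local user. The mount helper from the ceph-common package also accepts a secretfile option, so you can point at the file we stored above instead, and test the mount by hand before relying on fstab:

sudo mkdir -p /cephfs
sudo mount -t ceph n1:6789,n2:6789,n3:6789:/ /cephfs -o name=fsuser,secretfile=/etc/ceph/fsuser.secret,noatime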
