Setting up a new Ceph cache pool for better performance
In this video, we talk about how to set up a Ceph cache pool and tier your cache in order to improve reads and writes. There are a lot of cache settings we could cover, but I will focus on the most important values you can set on a pool to cache your data well.
Adding a caching tier to your filesystem
First we need to check our pools and create a new cache pool.
sudo ceph osd lspools
sudo ceph osd pool create hot_storage 32
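The placement-group count (32 above) follows a common rule of thumb: roughly 100 PGs per OSD, divided by the replica count, rounded down to a power of two. A minimal sketch of that arithmetic; the OSD and replica counts here are assumptions for illustration, not values from this setup:

```shell
# Rough PG-count heuristic: (OSDs * 100 / replicas), rounded down
# to a power of two. osds and replicas are assumed example values.
osds=1
replicas=3
target=$((osds * 100 / replicas))
pgs=1
while [ $((pgs * 2)) -le "$target" ]; do
  pgs=$((pgs * 2))
done
echo "$pgs"
```

With one OSD and three replicas this lands on 32; recompute it for your own cluster size before creating the pool.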
Next we set up the pool as a tier for your data pool and change the mode.
writeback means that writes go to the cache pool first and are flushed back to your slower drives later.
readproxy is better if your pools have similar speeds: writes go straight to the data pool, and objects are promoted to the cache pool only when they are read often.
sudo ceph osd tier add cephfs_data hot_storage
sudo ceph osd tier cache-mode hot_storage writeback
sudo ceph osd tier set-overlay cephfs_data hot_storage
Track which objects are being accessed using bloom filter hit sets.
sudo ceph osd pool set hot_storage hit_set_type bloom
sudo ceph osd pool set hot_storage hit_set_count 12
sudo ceph osd pool set hot_storage hit_set_period 14400
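To see how much access history these values give the tiering agent, multiply them: 12 hit sets, each covering 14400 seconds (4 hours), which works out to 48 hours of tracked accesses:

```shell
# Total access history covered by the hit sets configured above:
# hit_set_count bloom filters, each spanning hit_set_period seconds.
hit_set_count=12
hit_set_period=14400
total_seconds=$((hit_set_count * hit_set_period))
echo "$((total_seconds / 3600)) hours of history"
```

Tune the two values together: shorter periods react faster to changing workloads, more sets keep a longer memory.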
Setting an upper limit for your cache storage.
sudo ceph osd pool set hot_storage target_max_bytes 1099511627776
sudo ceph osd pool set hot_storage target_max_objects 1000000
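The magic number 1099511627776 is simply 1 TiB expressed in bytes, which you can verify (and adapt for a differently sized cache) with a line of shell arithmetic:

```shell
# target_max_bytes for a 1 TiB cache: 1024^4 bytes.
tib=1
target_max_bytes=$((tib * 1024 * 1024 * 1024 * 1024))
echo "$target_max_bytes"
```

Change tib to match the usable capacity of your cache pool; Ceph does not derive this limit from the pool size on its own.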
Set the number of recent hit sets an object must appear in, for reads or writes, before it is promoted to the cache pool.
sudo ceph osd pool set hot_storage min_read_recency_for_promote 2
sudo ceph osd pool set hot_storage min_write_recency_for_promote 2
Set the thresholds at which the agent starts flushing dirty objects and evicting objects from cache storage.
sudo ceph osd pool set hot_storage cache_target_dirty_ratio 0.4
sudo ceph osd pool set hot_storage cache_target_dirty_high_ratio 0.6
sudo ceph osd pool set hot_storage cache_target_full_ratio 0.8
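These ratios are fractions of target_max_bytes: flushing of dirty objects starts at 40% of the target, gets more aggressive at 60%, and eviction kicks in at 80%. A quick sketch of the absolute byte counts that implies, assuming the 1 TiB target set earlier (integer shell arithmetic, so the ratios are written as fractions of 10):

```shell
# Absolute thresholds implied by the ratios above, for a 1 TiB target.
target_max_bytes=1099511627776
flush_starts_at=$((target_max_bytes * 4 / 10))    # cache_target_dirty_ratio 0.4
flush_speeds_up=$((target_max_bytes * 6 / 10))    # cache_target_dirty_high_ratio 0.6
evict_starts_at=$((target_max_bytes * 8 / 10))    # cache_target_full_ratio 0.8
echo "$flush_starts_at $flush_speeds_up $evict_starts_at"
```

Keeping full_ratio below 1.0 matters: a completely full cache pool would block new writes while the agent scrambles to make room.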
Set the minimum age an object must reach before it can be flushed or evicted from cache. In this case, keep written data in cache for at least 10 minutes before flushing it to backend storage, and keep cached objects for at least 30 minutes before removing them from cache if unused.
sudo ceph osd pool set hot_storage cache_min_flush_age 600
sudo ceph osd pool set hot_storage cache_min_evict_age 1800
Removing the cache pool
Set the cache mode back to readproxy so that all dirty objects get flushed from the cache back to the backend pool.
sudo ceph osd tier cache-mode hot_storage readproxy
Wait about 20 minutes (if using the settings above) to ensure that all writes have been flushed.
You can check the status of the cache using the ls command, and to be sure you can force a flush and eviction with cache-flush-evict-all. Remember that read data could still be cached in the system even after you flush and evict.
sudo rados -p hot_storage ls
sudo rados -p hot_storage cache-flush-evict-all
Now we will remove the overlay and the caching tier so that no new data will be read from or written to the pool.
sudo ceph osd tier remove-overlay cephfs_data
sudo ceph osd tier remove cephfs_data hot_storage
Last but not least, we can remove the pool. This is a non-standard operation, so you first need to set a flag on the monitors to even be able to attempt it. Then you need to write the pool name twice and pass the flag --yes-i-really-really-mean-it, after which the data will be removed forever.
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
sudo ceph osd pool delete hot_storage hot_storage --yes-i-really-really-mean-it
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'