Storing data in Ceph S3 using PHP
We look into how to store data in Ceph using the S3 API, talking about API options and showing a custom solution for storing, retrieving, deleting and checking the existence of objects in Ceph S3 using PHP.
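As a rough illustration of the approach described above (not necessarily the exact code from the post), a minimal sketch using the AWS SDK for PHP against a Ceph RADOS Gateway endpoint could look like this. The endpoint, credentials, bucket name and object key are placeholders.

<?php
// Minimal sketch: AWS SDK for PHP (v3) talking to a Ceph RADOS Gateway.
// Endpoint, credentials, bucket and key below are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1',                   // placeholder; RGW generally ignores the region
    'endpoint'                => 'http://rgw.example.com:7480', // your RADOS Gateway endpoint
    'use_path_style_endpoint' => true,                          // RGW setups usually need path-style URLs
    'credentials'             => [
        'key'    => 'ACCESS_KEY',
        'secret' => 'SECRET_KEY',
    ],
]);

// Store an object.
$s3->putObject([
    'Bucket' => 'my-bucket',
    'Key'    => 'hello.txt',
    'Body'   => 'Hello from Ceph S3',
]);

// Check that it exists before reading it back.
if ($s3->doesObjectExist('my-bucket', 'hello.txt')) {
    $result = $s3->getObject([
        'Bucket' => 'my-bucket',
        'Key'    => 'hello.txt',
    ]);
    echo (string) $result['Body'];
}

// Delete it again.
$s3->deleteObject([
    'Bucket' => 'my-bucket',
    'Key'    => 'hello.txt',
]);

The use_path_style_endpoint option is worth calling out: most RADOS Gateway installations are not set up for virtual-hosted-style bucket URLs, so path-style requests are the safer default.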
Hello Daniel,
I’m a newbie to Ceph and have watched several of your YouTube videos. Using those videos, I successfully stood up a Ceph cluster with 4 nodes using VMs. I am now in the process of setting up a test environment to manage Ceph on metal. I have 3 NUCs, each with a single Samsung 500 GB drive. I have partitioned each drive so that about 50 GB is assigned to the OS and the remaining 405 GB is left open for Ceph to use as a device for an OSD etc. I set that partition to the linux-lvm FS type; the details from the lsblk command are below. This is on Ubuntu 22.04.1 (ubuntu-jammy) with the latest docker-ce.
I was able to add the 2 other nodes to the cluster through the UI. However, when I try to create an OSD, it does not allow me to select the primary devices: the Add button for adding devices is disabled, and the “Deployment Options” do not let me select any options. I am using the latest Ceph Quincy 17.2+, and I used cephadm to bootstrap.
I understand you may be incredibly busy, but it appears I may be overlooking something very obvious. Any thoughts on what I might be missing with my single disk and its partitions?
Thank you for your time and for all your videos, which are so useful and informative.
Best wishes,
Atul
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 62M 1 loop /snap/core20/1587
loop1 7:1 0 63.3M 1 loop /snap/core20/1822
loop2 7:2 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 111.9M 1 loop /snap/lxd/24322
loop4 7:4 0 49.8M 1 loop /snap/snapd/17950
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 30G 0 part /
├─sda3 8:3 0 5G 0 part /boot
├─sda4 8:4 0 15G 0 part /var
├─sda5 8:5 0 10G 0 part [SWAP]
└─sda6 8:6 0 405.8G 0 part
Hi Atul
You got your answer via email, but I’ll repost it here so that anyone who finds this post can read the answer as well.
———————–
Sadly, the cephadm orchestration is a work-in-progress kind of deal. I ran into so many bugs and issues when I tried it. Then again, that was a couple of years ago.
Moreover, I always use a separate drive for each OSD on my machines, for performance and simplicity. I haven’t tried using partitions for OSDs, but it should work.
“It is technically possible to run multiple Ceph OSD Daemons per SAS / SATA drive, but this will lead to resource contention and diminish overall throughput.” https://docs.ceph.com/en/latest/start/hardware-recommendations/
I don’t think running an OSD on the same drive as your OS is a use case that the cephadm team will support any time soon.
So I would either see if I can put another drive in the NUC (I know some support both NVMe and SATA SSDs) or deploy the cluster manually and create the OSDs with the provided tooling (see the sketch below).
———————–
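To give a concrete idea of what that provided tooling could look like, here is a rough sketch of the commands I would try from a cephadm shell on the admin node. The host name nuc1 and the /dev/sda6 partition are just examples based on your lsblk output, so adjust them per host.

# List the devices the orchestrator considers usable. The dashboard only
# offers devices reported as available here, and partitions are often
# rejected, which would explain the disabled Add button.
ceph orch device ls

# Point the orchestrator at the partition explicitly; whether a partition
# is accepted can depend on the Ceph/cephadm version.
ceph orch daemon add osd nuc1:/dev/sda6

If the orchestrator refuses the partition, ceph-volume on the node itself (ceph-volume lvm create --data /dev/sda6) is the lower-level tool that the orchestrator wraps.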
Best regards
Daniel