Storing data in Ceph S3 using PHP
We look into how to store data in Ceph using the S3 API, discussing the available API options and showing a custom solution for storing, retrieving, deleting and checking the existence of objects in Ceph S3 using PHP.
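As a rough illustration of the idea (not the exact solution from the post), the sketch below talks to a Ceph RADOS Gateway over the S3 API with the AWS SDK for PHP (aws/aws-sdk-php v3); the endpoint, bucket name and credentials are hypothetical placeholders.

<?php
// Minimal sketch: Ceph RGW speaks the S3 protocol, so the standard AWS SDK
// works once it is pointed at the gateway endpoint.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1',                      // RGW generally ignores the region
    'endpoint'                => 'http://ceph-rgw.example.com:7480', // placeholder RGW endpoint
    'use_path_style_endpoint' => true,                              // RGW setups commonly use path-style URLs
    'credentials'             => [
        'key'    => 'ACCESS_KEY',
        'secret' => 'SECRET_KEY',
    ],
]);

$bucket = 'my-bucket';
$key    = 'hello.txt';

// Store an object
$s3->putObject(['Bucket' => $bucket, 'Key' => $key, 'Body' => 'Hello Ceph!']);

// Check existence, then retrieve it
if ($s3->doesObjectExist($bucket, $key)) {
    $result = $s3->getObject(['Bucket' => $bucket, 'Key' => $key]);
    echo (string) $result['Body'] . PHP_EOL;
}

// Delete it again
$s3->deleteObject(['Bucket' => $bucket, 'Key' => $key]);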
Hello Daniel,
I’m a newbie to Ceph and have watched several of your YouTube videos. Using those videos, I successfully stood up a 4-node Ceph cluster on VMs. I am now in the process of setting up a test environment to manage Ceph on metal. I have 3 NUCs, each with a single Samsung 500 GB drive. I have partitioned each drive so that 50 GB is assigned to the OS, and the remaining 405 GB is left open for Ceph to use as a device for an OSD etc. I set that partition to the linux-lvm FS type. The output of the lsblk command is below. This is on Ubuntu 22.04.1 (ubuntu-jammy) with the latest docker-ce. I was able to add the 2 nodes to the cluster through the UI. However, when I try to create an OSD, it does not let me select the primary devices: the Add button for devices is disabled, and the “Deployment Options” section does not let me select anything. I am using the latest Ceph Quincy (17.2+) and used cephadm to bootstrap.
I understand you may be incredibly busy, but it appears I am overlooking something very obvious. Any thoughts on what I might be missing with my single disk and its partitions?
Thank you for your time and for all your videos; they are so useful and informative.
Best wishes,
Atul
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 62M 1 loop /snap/core20/1587
loop1 7:1 0 63.3M 1 loop /snap/core20/1822
loop2 7:2 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 111.9M 1 loop /snap/lxd/24322
loop4 7:4 0 49.8M 1 loop /snap/snapd/17950
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 30G 0 part /
├─sda3 8:3 0 5G 0 part /boot
├─sda4 8:4 0 15G 0 part /var
├─sda5 8:5 0 10G 0 part [SWAP]
└─sda6 8:6 0 405.8G 0 part
Hi Atul
You got your answer via email, but I’ll repost it here so that anyone who finds this post can read the answer as well.
———————–
Sadly, the cephadm orchestration is still a work in progress; I ran into a lot of bugs and issues when I tried it. Then again, that was a couple of years ago.
Moreover, I always use separate drives for each OSD on my machines, for performance and simplicity. I have not tried using partitions for OSDs, but it should work.
“It is technically possible to run multiple Ceph OSD Daemons per SAS / SATA drive, but this will lead to resource contention and diminish overall throughput.” https://docs.ceph.com/en/latest/start/hardware-recommendations/
I don’t think running an OSD on the same drive as your OS is a use case that the cephadm team will support any time soon.
So I would either see if I can put another drive in the NUC (I know some support NVMe and SSD) or deploy the cluster manually and create the OSDs with the provided tooling.
———————–
Best regards
Daniel