Storing data in Ceph S3 using PHP
We look into how to store data in Ceph using the S3 API. We discuss API options and show a custom solution for storing, retrieving, deleting, and checking the existence of objects in Ceph S3 using PHP.
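As a quick taste of what the video covers, here is a minimal sketch of those four operations using the AWS SDK for PHP (v3) pointed at a Ceph RADOS Gateway. The endpoint URL, credentials, and bucket/key names are placeholders, and the custom solution shown in the video may be structured differently.

<?php
// Minimal sketch: talking to a Ceph RADOS Gateway with the AWS SDK for PHP (v3).
// Endpoint, credentials, bucket and key are placeholders.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$client = new S3Client([
    'version'                 => 'latest',
    'region'                  => 'us-east-1', // RGW mostly ignores the region
    'endpoint'                => 'http://ceph-rgw.example.com:7480',
    'use_path_style_endpoint' => true, // RGW typically expects path-style URLs
    'credentials'             => [
        'key'    => 'ACCESS_KEY',
        'secret' => 'SECRET_KEY',
    ],
]);

// Store an object.
$client->putObject([
    'Bucket' => 'my-bucket',
    'Key'    => 'hello.txt',
    'Body'   => 'Hello, Ceph!',
]);

// Check existence, then retrieve.
if ($client->doesObjectExist('my-bucket', 'hello.txt')) {
    $object = $client->getObject(['Bucket' => 'my-bucket', 'Key' => 'hello.txt']);
    echo (string) $object['Body'], PHP_EOL;
}

// Delete the object.
$client->deleteObject(['Bucket' => 'my-bucket', 'Key' => 'hello.txt']);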
In this video series, I try to challenge myself with the Advent of Code trials. Each solution will be published to GitHub, and I hope you will learn something from my coding mistakes and perhaps send some code my way showing how you have done these challenges. I learn by reading code, so this is…
I was looking into creating a reference editor to show a reference between my object and an SKU, and I think it wasn’t obvious how to get this relationship to work. First off, I wanted to use the reference editor to show my relationship. I defined it using the code below. I didn’t want the…
We solve today’s challenge in Advent of Code 2020. Come join and have some fun.
I’m trying to solve all the Advent of Code puzzles in this video series.
In this video, we look into the new preview features for connecting Java applications to native libraries and running their functions. We test both running against libraries already loaded on the system and loading a custom library we built from C code.
We look into how to set up log4j configuration with XML and with properties files. XML is pretty simple to configure, and properties files are similar in structure but need a bit more explanation. Configuration in log4j can also be done via code, but I have another video for that.
Hello Daniel,
I’m a newbie to Ceph and have watched several of your YouTube videos. Using those, I successfully stood up a Ceph cluster with 4 nodes using VMs. I am now in the process of setting up a test environment to manage Ceph on metal.

I have 3 NUCs, each with a single 500 GB Samsung drive. On each NUC I have partitioned the drive so that roughly 60 GB is assigned to the OS, and the remaining 405 GB is left open for Ceph to use as a device for an OSD. I set that partition to the linux-lvm FS type; the details from the lsblk command are below. This is on Ubuntu 22.04.1 (ubuntu-jammy) with the latest docker-ce.

I was able to add the 2 other nodes to the cluster through the UI. However, when I try to create an OSD, it does not allow me to select the primary devices: the Add button is disabled, and the “Deployment Options” does not let me select any options. I am using the latest Ceph Quincy 17.2+, and I used cephadm to bootstrap.
I understand you may be incredibly busy, but it appears I may be overlooking something very obvious. Any thoughts on what I may be missing with my single disk and its partitions?
Thank you for your time and for all your videos, which are so useful and informative.
Best wishes,
Atul
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 62M 1 loop /snap/core20/1587
loop1 7:1 0 63.3M 1 loop /snap/core20/1822
loop2 7:2 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 111.9M 1 loop /snap/lxd/24322
loop4 7:4 0 49.8M 1 loop /snap/snapd/17950
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 30G 0 part /
├─sda3 8:3 0 5G 0 part /boot
├─sda4 8:4 0 15G 0 part /var
├─sda5 8:5 0 10G 0 part [SWAP]
└─sda6 8:6 0 405.8G 0 part
Hi Atul,
You got your answer via email, but I’ll repost it here so that anyone who finds this post can read the answer as well.
———————–
Sadly, the cephadm orchestration is a work-in-progress kind of deal. I ran into many bugs and issues when I tried it. Then again, that was a couple of years ago.
Moreover, I always use a separate drive for each OSD on my machines, for performance and simplicity. I’ve not tried using partitions for OSDs, but it should work.
“It is technically possible to run multiple Ceph OSD Daemons per SAS / SATA drive, but this will lead to resource contention and diminish overall throughput.” https://docs.ceph.com/en/latest/start/hardware-recommendations/
I don’t think running an OSD on the same drive as your OS is a use case that the cephadm team will support any time soon.
So I would either see if I can put another drive in the NUC (I know some support NVMe and SSD) or deploy the cluster manually and create the OSDs with the provided tooling.
———————–
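For anyone finding this later, here is a rough sketch of what creating an OSD on that spare partition by hand could look like. These are standard Ceph commands rather than steps from the email above; the hostname is a placeholder, /dev/sda6 matches Atul’s layout, and whether the orchestrator accepts a partition can vary between releases.

# On a cephadm-managed cluster, point the orchestrator at the partition:
ceph orch daemon add osd nuc1:/dev/sda6

# On a manually deployed node, ceph-volume is the provided tooling:
sudo ceph-volume lvm create --data /dev/sda6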
Best regards
Daniel