sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon-$(hostname)

# Make a keyring!
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
sudo mkdir -p /var/lib/ceph/bootstrap-osd && sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo chown ceph:ceph /tmp/ceph.mon.keyring

# Make a monitor!
sudo monmaptool --create --add mesh-a $IP --fsid $FSID /tmp/monmap
sudo -u ceph ceph-mon --mkfs -i mon-$(hostname) --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
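The commands above assume <code>$IP</code> and <code>$FSID</code> are already set in the shell. A minimal sketch of that setup (the values are examples, matching the fsid and monitor address used later on this page; a fresh cluster fsid can come from <code>uuidgen</code>):

```shell
# Hypothetical setup for the two variables the bootstrap commands assume.
# FSID is the cluster's UUID; IP is this monitor's address on the
# cluster network. Example values only.
FSID="4b687c5c-5a20-4a77-8774-487989fd0bc7"
IP="10.0.0.11"
echo "fsid=$FSID mon=$IP"
```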
</pre>
Clearly, Ceph is concerned that the <code>/etc/ceph/ceph.conf</code> file is missing. So am I! I'd assumed the Nixpkgs module would create it, based on all the <code>extraConfig</code> options supplied.
So I ran the following, in sequence:

<syntaxhighlight lang="bash">
sudo su
export FSID="4b687c5c-5a20-4a77-8774-487989fd0bc7"
echo "
[global]
fsid=$FSID
mon initial members = mesh-a
mon host = 10.0.0.11
cluster network = 10.0.0.0/24
" > /etc/ceph/ceph.conf
exit
cat /etc/ceph/ceph.conf
</syntaxhighlight>
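As a quick sanity check that the stanza comes out the way I expect, the same text can be written to a scratch file and the <code>fsid</code> read back with plain shell (a sketch; the real file lives at <code>/etc/ceph/ceph.conf</code> and needs root):

```shell
# Recreate the [global] stanza in a scratch file and read the fsid back,
# confirming the key=value line lands where Ceph's config parser will look.
conf="$(mktemp)"
FSID="4b687c5c-5a20-4a77-8774-487989fd0bc7"
printf '[global]\nfsid=%s\nmon initial members = mesh-a\nmon host = 10.0.0.11\ncluster network = 10.0.0.0/24\n' "$FSID" > "$conf"
grep '^fsid=' "$conf" | cut -d= -f2   # prints the cluster UUID
rm -f "$conf"
```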
This should be the bare minimum for Ceph to load its configuration; we can come back and nixify it soon.

The problem remains...
<pre>
mesh@mesh-a:~/.build/ > sudo systemctl restart ceph-mesh
mesh@mesh-a:~/.build/ > sudo systemctl status ceph-mesh
○ ceph-mesh.service - Ceph OSD Bindings
     Loaded: loaded (/etc/systemd/system/ceph-mesh.service; enabled; preset: enabled)
     Active: inactive (dead) since Tue 2023-12-19 16:12:51 EST; 4s ago
    Process: 37570 ExecStart=/bin/sh -c timeout $CEPH_VOLUME_TIMEOUT /run/current-system/sw/bin/ceph-volume lvm activate --all --no-systemd (code=exited, status=0/SUCCESS)
   Main PID: 37570 (code=exited, status=0/SUCCESS)
         IP: 0B in, 0B out
        CPU: 162ms

Dec 19 16:12:51 mesh-a systemd[1]: Starting Ceph OSD Bindings...
Dec 19 16:12:51 mesh-a sh[37571]: --> Was unable to find any OSDs to activate
Dec 19 16:12:51 mesh-a sh[37571]: --> Verify OSDs are present with "ceph-volume lvm list"
Dec 19 16:12:51 mesh-a systemd[1]: ceph-mesh.service: Deactivated successfully.
Dec 19 16:12:51 mesh-a systemd[1]: Finished Ceph OSD Bindings.
mesh@mesh-a:~/.build/ > sudo ceph-volume lvm list
No valid Ceph lvm devices found
</pre>
Here is a summary of records produced inside <code>/var/lib/ceph</code>: