Hello,
I am trying to build a ceph cluster based on 4 raspberry3B+.
As long as you have very realistic expectations as to the performance and reliability you
will get out of 4 severely underpowered (for Ceph) nodes, why not.
I do nearly the same, simply to learn Ceph, with 5 ODROID-HC2 (more bang than the 3B+ but
32 bit). I would not dream of expecting even wire speed out of SBCs with 2GiB RAM, a
single Gigabit network connection attached via USB, and SATA also behind USB.
Following the official Ceph documentation is a dead end as packages are only
available for Red Hat/CentOS and not Fedora!
As was pointed out, your Raspis should find the packages in the repos. Please provide the
output of the yum commands that Troy Dawson mailed. Maybe your repository setup has an
issue.
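For a quick sanity check of the repository side, something along these lines (assuming
the Raspis run Fedora on aarch64; the package names are the ones I'd expect in the
Fedora repos) should show whether the packages are visible and let you pull in the
basics:

    dnf repolist
    dnf search ceph
    dnf info ceph
    dnf install ceph ceph-common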
You can view all builds of ceph for Fedora at
https://koji.fedoraproject.org/koji/search?match=glob&type=package&am...
It's built for
- aarch64
- ppc64le
- s390x
- x86_64
I tried to use the el7 repository but there are conflicts
and missing packages with the Fedora repos.
Yeah, I would not attempt to mix that way.
Did I miss something ?
If "dnf search ceph" shows you results, then you might be trying to install Ceph
wrongly. Are you using ceph-ansible? I definitely recommend you do.
http://docs.ceph.com/ceph-ansible/stable-3.2/
or, if you are not on Luminous but on master,
http://docs.ceph.com/ceph-ansible/master/
although I recommend you start with a stable version if this is your first foray into
Ceph.
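To give a rough idea, a minimal ceph-ansible setup for four Pis could look something like
the sketch below; hostnames, interface and network are placeholders, and the variables
are only the ones I would start from, not a complete or authoritative configuration:

    # inventory (placeholder hostnames)
    [mons]
    pi1
    pi2
    pi3

    [mgrs]
    pi1

    [osds]
    pi1
    pi2
    pi3
    pi4

    # group_vars/all.yml (illustrative values)
    ceph_origin: distro            # use the Ceph packages from the distro repos
    monitor_interface: eth0
    public_network: 192.168.1.0/24

    # group_vars/osds.yml (illustrative; point devices at your actual disks)
    osd_scenario: lvm
    devices:
      - /dev/sda

The stable-3.2 docs linked above explain the full set of variables.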
I am now trying to compile from sources. It is still ongoing (16%
after 24h!)
Yeah, that will take a while. I'd be too impatient for that ;-)
On Ceph itself, I am happily playing with Ceph Luminous using Bluestore, if you want
Luminous too, be sure to use the stable-3.2 branch of ceph-ansible, as documented.
As my SBCs definitely are at the lowest end of
http://docs.ceph.com/docs/master/start/hardware-recommendations/ I adjusted
osd_memory_target
http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
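On my nodes that boils down to a small ceph.conf override; the value below is just an
illustration for a box with around 2GiB RAM, not a recommendation:

    [osd]
    # cap the OSD/BlueStore memory footprint (value in bytes)
    osd_memory_target = 1073741824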
I expect to have to do many more tunings in the days and weeks to come.
As always with a cluster, you may want to consider:
- using monitoring to notice if one of many nodes is down
- using a watchdog to bounce nodes that are unresponsive (see the sketch after this list)
- wiring up serial consoles and logging to a logserver
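For the watchdog point, the easiest route I know of is letting systemd drive the
hardware watchdog; a sketch, assuming the board's watchdog driver is loaded (check that
/dev/watchdog exists, on a Raspberry Pi that should be the bcm2835 watchdog):

    # /etc/systemd/system.conf
    RuntimeWatchdogSec=30s
    ShutdownWatchdogSec=10min

With that, systemd keeps the watchdog fed while it is healthy and the board resets
itself if userspace locks up.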
pcfe