Ceph 14.x.x (Nautilus) will no longer be built on i686 and armv7hl archs starting in fedora-30/rawhide.
The upstream project doesn't support it. The armv7hl builders don't have enough memory (or address space) to build some components.
And the other active maintainer (branto) and I don't have cycles to devote to keeping it building on 32-bit archs.
(FWIW, currently ceph-12.2.9 (luminous) is in rawhide, f29, and f28 and it has packages for i686 and armv7hl for people who want to run ceph on 32-bit archs.)
On 05.12.2018 at 14:14, Kaleb S. KEITHLEY wrote:
Ceph 14.x.x (Nautilus) will no longer be built on i686 and armv7hl archs starting in fedora-30/rawhide.
The upstream project doesn't support it. The armv7hl builders don't have enough memory (or address space) to build some components.
BTW - how much memory is needed to build Ceph 14?
On Wed, 5 Dec 2018 14:23:49 +0100 Marcin Juszkiewicz mjuszkiewicz@redhat.com wrote:
BTW - how much memory is needed to build Ceph 14?
have you tried building with reduced debuginfo, eg. -g1 or even -g0?
I wonder how many broken deps it will cause.
Dan
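Dan's -g1 idea boils down to rewriting the debuginfo flag in the build flags before they reach the compiler. As a rough shell sketch (the flag string and the sed rewrite are illustrative only, not Fedora's actual %{optflags} machinery):

```shell
# Illustrative only: rewrite full debuginfo (-g) down to minimal (-g1)
# in a flags string, the way one might mangle %{optflags} in a spec file.
flags="-O2 -g -grecord-gcc-switches -pipe -Wall"
# Match -g only as a standalone word, so -grecord-gcc-switches is untouched.
reduced=$(printf '%s\n' "$flags" | sed -E 's/(^| )-g( |$)/\1-g1\2/')
printf '%s\n' "$reduced"   # -O2 -g1 -grecord-gcc-switches -pipe -Wall
```

Dropping to -g0 would be the same substitution; either way the point is a smaller compiler and linker working set.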
On 12/5/18 8:34 AM, Dan Horák wrote:
On Wed, 5 Dec 2018 14:23:49 +0100 Marcin Juszkiewicz mjuszkiewicz@redhat.com wrote:
BTW - how much memory is needed to build Ceph 14?
More — apparently — than the armv7hl builders have. :-) branto may know.
have you tried building with reduced debuginfo, eg. -g1 or even -g0?
branto told me that he has tried all the different optimization levels.
I wonder how many broken deps it will cause.
Don't know. Hence this heads-up warning.
On 05.12.2018 at 14:45, Kaleb S. KEITHLEY wrote:
BTW - how much memory is needed to build Ceph 14?
More — apparently — than the armv7hl builders have. :-) branto may know.
Random Fedora armhf builder hardware info:
-------------------------------------------------------------------------------
CPU info:
Architecture:          armv7l
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
Model:                 1
Model name:            ARMv7 Processor rev 1 (v7l)
BogoMIPS:              100.00
Flags:                 half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm

Memory:
               total        used        free      shared  buff/cache   available
Mem:        24929616      103092    24375448         348      451076    24524500
Swap:       18869244       21348    18847896

Storage:
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2       135G  5.7G  122G   5% /
-------------------------------------------------------------------------------
Seriously, 24 GB of RAM + 18 GB of swap is not enough to build Ceph? That's more real memory than the x86-64 builders have (15 387 432 KB RAM + 134 216 700 KB swap).
I understand the "we drop it because upstream does not care about 32-bit" reason.
On Wed, 5 Dec 2018 15:02:41 +0100 Marcin Juszkiewicz mjuszkiewicz@redhat.com wrote:
Seriously, 24 GB of RAM + 18 GB of swap is not enough to build Ceph? That's more real memory than the x86-64 builders have (15 387 432 KB RAM + 134 216 700 KB swap).
The problem is usually the 4 GB address space on 32-bit platforms (much less of which is available for user data) when building a large C++ codebase. Either the compiler OOMs or the linker OOMs. That's why reducing the generated debuginfo often helps.
Dan
On Wed, Dec 05, 2018 at 08:45:19AM -0500, Kaleb S. KEITHLEY wrote:
Ceph 14.x.x (Nautilus) will no longer be built on i686 and armv7hl archs starting in fedora-30/rawhide.
The upstream project doesn't support it. The armv7hl builders don't have enough memory (or address space) to build some components.
Is there any consideration given to only building the ceph client pieces on 32-bit? Presumably those parts are simpler and thus not likely to hit the address/memory limits, and more tractable to support?
I very much doubt people would run the ceph server parts on 32-bit, so any usage of ceph on 32-bit is likely to be limited to the client pieces.
I wonder how many broken deps it will cause.
Don't know. Hence this heads up warning.
repoquery can report on direct dependencies that would be broken by any ceph packages being removed. There could be transitive ripples out from there. From my own POV this would impact qemu & libvirt, which would need to conditionally turn off their rbd support for those archs.
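The conditional switch-off being described might be sketched like this; shell is used purely for illustration (a real spec would express this with RPM %ifarch conditionals, and rbd_supported is a made-up helper):

```shell
# Hypothetical helper: should the rbd storage driver be built for this arch,
# given that the ceph client libraries are gone on 32-bit?
rbd_supported() {
  case "$1" in
    i686|armv7hl) return 1 ;;   # librbd/librados no longer packaged here
    *)            return 0 ;;   # 64-bit arches keep the client libs
  esac
}

for arch in x86_64 aarch64 i686 armv7hl; do
  if rbd_supported "$arch"; then
    echo "$arch: build rbd support"
  else
    echo "$arch: disable rbd support"
  fi
done
```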
Regards, Daniel
On Wed, Dec 05, 2018 at 02:31:10PM +0000, Daniel P. Berrangé wrote:
Is there any consideration given to only building the ceph client pieces on 32-bit? Presumably those parts are simpler and thus not likely to hit the address/memory limits, and more tractable to support?
I very much doubt people would run the ceph server parts on 32-bit, so any usage of ceph on 32-bit is likely to be limited to the client pieces.
As Dan says, I'd like to know if you (Kaleb) considered building only the client bits (librbd1, I think?). It's something which libguestfs needs too, albeit indirectly.
Assuming I've got the right command, the complete list of reverse dependencies for the client side of Ceph is:
# repoquery -q --whatrequires 'librbd.so.1()(64bit)'
ceph-common-1:12.2.8-1.fc29.x86_64
ceph-common-1:12.2.9-1.fc29.x86_64
ceph-test-1:12.2.8-1.fc29.x86_64
ceph-test-1:12.2.9-1.fc29.x86_64
fio-0:3.7-2.fc29.x86_64
librbd-devel-1:12.2.8-1.fc29.x86_64
librbd-devel-1:12.2.9-1.fc29.x86_64
libvirt-daemon-driver-storage-rbd-0:4.7.0-1.fc29.x86_64
python-rbd-1:12.2.8-1.fc29.x86_64
python-rbd-1:12.2.9-1.fc29.x86_64
python3-rbd-1:12.2.8-1.fc29.x86_64
python3-rbd-1:12.2.9-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-2.fc29.x86_64
rbd-fuse-1:12.2.8-1.fc29.x86_64
rbd-fuse-1:12.2.9-1.fc29.x86_64
rbd-nbd-1:12.2.8-1.fc29.x86_64
rbd-nbd-1:12.2.9-1.fc29.x86_64
scsi-target-utils-rbd-0:1.0.70-4.fc28.x86_64
Rich.
Forgot about librados, so there's actually a much bigger list:
# dnf repoquery -q --whatrequires 'librados.so.2()(64bit)'
ceph-common-1:12.2.8-1.fc29.x86_64
ceph-common-1:12.2.9-1.fc29.x86_64
ceph-radosgw-1:12.2.8-1.fc29.x86_64
ceph-radosgw-1:12.2.9-1.fc29.x86_64
ceph-test-1:12.2.8-1.fc29.x86_64
ceph-test-1:12.2.9-1.fc29.x86_64
fio-0:3.7-2.fc29.x86_64
librados-devel-1:12.2.8-1.fc29.x86_64
librados-devel-1:12.2.9-1.fc29.x86_64
libradosstriper1-1:12.2.8-1.fc29.x86_64
libradosstriper1-1:12.2.9-1.fc29.x86_64
librbd1-1:12.2.8-1.fc29.x86_64
librbd1-1:12.2.9-1.fc29.x86_64
librgw2-1:12.2.8-1.fc29.x86_64
librgw2-1:12.2.9-1.fc29.x86_64
libvirt-daemon-driver-storage-rbd-0:4.7.0-1.fc29.x86_64
nfs-ganesha-0:2.7.0-3.fc29.x86_64
nfs-ganesha-0:2.7.1-2.fc29.x86_64
nfs-ganesha-rados-grace-0:2.7.0-3.fc29.x86_64
nfs-ganesha-rados-grace-0:2.7.1-2.fc29.x86_64
python-rados-1:12.2.8-1.fc29.x86_64
python-rados-1:12.2.9-1.fc29.x86_64
python-rbd-1:12.2.8-1.fc29.x86_64
python-rbd-1:12.2.9-1.fc29.x86_64
python-rgw-1:12.2.8-1.fc29.x86_64
python-rgw-1:12.2.9-1.fc29.x86_64
python2-cradox-0:2.1.0-2.fc29.x86_64
python3-cradox-0:2.1.0-2.fc29.x86_64
python3-rados-1:12.2.8-1.fc29.x86_64
python3-rados-1:12.2.9-1.fc29.x86_64
python3-rbd-1:12.2.8-1.fc29.x86_64
python3-rbd-1:12.2.9-1.fc29.x86_64
python3-rgw-1:12.2.8-1.fc29.x86_64
python3-rgw-1:12.2.9-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-1.fc29.x86_64
qemu-block-rbd-2:3.0.0-2.fc29.x86_64
rbd-fuse-1:12.2.8-1.fc29.x86_64
rbd-fuse-1:12.2.9-1.fc29.x86_64
rbd-mirror-1:12.2.8-1.fc29.x86_64
rbd-mirror-1:12.2.9-1.fc29.x86_64
rbd-nbd-1:12.2.8-1.fc29.x86_64
rbd-nbd-1:12.2.9-1.fc29.x86_64
scsi-target-utils-rbd-0:1.0.70-4.fc28.x86_64
xrootd-ceph-1:4.8.4-2.fc29.x86_64
xrootd-ceph-1:4.8.5-2.fc29.x86_64
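One caveat when re-running such queries for the affected 32-bit arches: the ()(64bit) marker is attached only to 64-bit ELF provides, so the dependency string differs per arch. A sketch (soname_dep is a made-up helper, not a dnf feature):

```shell
# Build the soname dependency string rpm/dnf would use on a given arch:
# 64-bit ELF provides carry a "()(64bit)" suffix, 32-bit ones do not.
soname_dep() {
  local soname=$1 arch=$2
  case "$arch" in
    x86_64|aarch64|ppc64le|s390x) printf '%s()(64bit)\n' "$soname" ;;
    *)                            printf '%s\n' "$soname" ;;
  esac
}

soname_dep librados.so.2 x86_64    # librados.so.2()(64bit)
soname_dep librados.so.2 i686      # librados.so.2
```

So the i686 equivalent of the query above would presumably be: dnf repoquery -q --whatrequires 'librados.so.2'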
Rich.
"Client bits" also means the stuff needed for accessing the file-system interface of Ceph:
- nfs-ganesha-ceph.x86_64
- ceph-fuse

and, for some users, the rados gateway bits:
- nfs-ganesha-rgw.x86_64
- ceph-radosgw.x86_64
Kaleb,
Firstly, the title is misleading, as there was no heads-up: a heads-up is notice given before you actually push the change, not when you make it.
As others have asked in the thread, can we possibly build client-only?
On 12/5/18 9:50 PM, Peter Robinson wrote:
Kaleb,
Firstly, the title is misleading, as there was no heads-up: a heads-up is notice given before you actually push the change, not when you make it.
I suggest you take this up with branto. He's the one who built it without 32-bit archs without any warning. I only found out about it when I got build notices from koji (or pagure or whatever.)
You would have preferred no notice at all?
And the other active maintainer (branto) and I don't have cycles to
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
devote to keeping it building on 32-bit archs.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As others have asked in the thread, can we possibly build client-only?
We?
Since the above seems to have been unclear:
And the other active maintainer (branto) and I don't have cycles to devote to keeping (any part of) it building on 32-bit archs.
But people can always send patches. ;-)
--
Kaleb
On 12/6/18 7:04 AM, Kaleb S. KEITHLEY wrote:
And the other active maintainer (branto) and I don't have cycles to devote to keeping (any part of) it building on 32-bit archs.
But people can always send patches. ;-)
If someone else would like to take over as maintainer, I'm happy to give it up.
LMK.
--
Kaleb
Rather predictably, this has broken libvirt on i686 and armv7hl (i.e. the 32-bit arches):
DEBUG util.py:439:  - package libvirt-daemon-driver-storage-4.10.0-1.fc30.i686 requires libvirt-daemon-driver-storage-rbd = 4.10.0-1.fc30, but none of the providers can be installed
DEBUG util.py:439:  - conflicting requests
DEBUG util.py:439:  - nothing provides librados.so.2 needed by libvirt-daemon-driver-storage-rbd-4.10.0-1.fc30.i686
I filed a bug against libvirt, since it appears that no client library for Ceph will be forthcoming:
https://bugzilla.redhat.com/show_bug.cgi?id=1657928
Rich.