omalleys(a)msu.edu wrote:
Quoting Gordan Bobic <gordan(a)bobich.net>:
> It'd have to be more finely grained than sub-architecture since a kernel
> for one target won't necessarily work on other CPU of the same
> sub-architecture (e.g. a Kirkwood kernel won't work on all ARMv5
> processors).
Is there a way around this? I mean, is it possible to build a megakernel
with a ton of modules that has support for every board and autodetects
the hardware?
When you look at the defconfigs for ARM, you see about 50 of them. It
would be nice to have a "generic" ARM, or a generic ARMv5, configuration
that would be a megakernel with all the hardware in modules, or something
that can autodetect the board and processor. Even broken out to ARMv5,
ARMv6, etc. would be nice.
Whether the kernel is modifiable to allow for that, I don't know, but it
certainly doesn't seem to be possible to do that with the current
vanilla kernel.
If the split is not hardfp, I would seriously consider looking at
bootstrapping FAT binaries for optimisations between v5 and v7. I'm
pretty sure this is how Apple did some of the optimisations for AltiVec
on OS X, which means some of this code may be sitting in the Darwin
archives.
I haven't tested this myself, but I seem to remember somebody here
reporting that the typical improvement from optimizing for ARMv7 while
sticking with softfp was in the low single-digit percentages. I'm not
sure it justifies the effort.
(Apple started off by using something similar to the perl script to
relink; with OS X 10.0 it took like 20 minutes to download and install an
update, and about 3 hours to "optimise". They ended up backgrounding the
process, and then they switched to something else, which I think is the
bootstrap.)
(The 68k->PPC fat binaries were actually two separate binaries of the
program, and the bootstrap just picked which HFS "fork" to read the
binary from, which you can't do easily on Linux.)
IMO fat binary support should be handled at the compiler level, not by
post-processing. There's also the issue of shared libraries - you'd need
the dynamic linker to be aware of it too. And you might end up having
/lib5s, /lib5h, /lib6s, /lib6h, /lib7s and /lib7h (like we have /lib and
/lib64 on x86/x86-64).
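To make the directory split concrete, here is a minimal sketch of how a
launcher could map the machine string reported by `uname -m` to one of
those per-variant library directories. The directory names and the
softfp/hardfp pairing in the mapping are hypothetical, just mirroring the
split described above:

```shell
#!/bin/sh
# Hypothetical dispatcher: pick a per-variant library directory based on
# the CPU architecture string (as reported by `uname -m`).
pick_libdir() {
    case "$1" in
        armv5*) echo /lib5s ;;   # ARMv5, assume softfp
        armv6*) echo /lib6s ;;   # ARMv6, assume softfp
        armv7*) echo /lib7h ;;   # ARMv7, assume hardfp
        *)      echo /lib ;;     # anything else: plain /lib
    esac
}

# Print the library directory for the current machine.
pick_libdir "$(uname -m)"
```

In practice the real dynamic linker would have to do this lookup itself,
which is exactly the extra complexity being argued against here.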
All that seems like a lot more effort than maintaining two separate
builds, and I cannot think of a reasonable use case where binary
compatibility would be of vital importance. Why bother with binary
compatibility when you have the source? :)
Gordan