On Sat, Jun 27, 2020 at 4:30 PM Konstantin Kharlamov <hi-angel(a)yandex.ru> wrote:
On Sat, 2020-06-27 at 12:42 -0600, Chris Murphy wrote:
What point are you trying to make here? If you're implying that the "application
startup time" the article measured is more of a "synthetic test" than the
compilation time you're measuring, that sounds odd, because people start apps
far more often than they compile the kernel. In fact, the compilation process
itself involves starting up programs.
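For reference, a minimal sketch of how wall-clock startup time could be measured. The helper name and millisecond granularity are my choices, and this assumes GNU date; it is not necessarily the article's actual method:

```shell
# Hypothetical helper: wall-clock time of a command, in milliseconds.
# Assumes GNU date (%N for nanoseconds since the epoch).
elapsed_ms() {
    start=$(date +%s%N)
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

elapsed_ms sleep 1    # prints roughly 1000
```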
Developers are a target market for Fedora on the desktop. Developers
compile locally. The point is that compiling the same thing on different
file systems shows, reproducibly, that they're all in the same ballpark.
> They're all in the same ballpark, except there's a write time hit for
> the one with zstd:1 on this particular setup (and the compression hit
> isn't consistent across all hardware or setups, it's case by case -
> and hence the proposal option for compression indicates applying it
> selectively to locations we know there's a benefit across the board).
> But also you can tell there's no read time (decompression) hit from
> this same data set.
It is nice to see, although I'm pretty surprised they all have the same
performance, except the one with compression. Could it be because all files got
cached in RAM?
Reboot between each test. Each test gets a clean copy of the source to
compile, setup prior to the reboot.
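The reboot rules out page-cache effects entirely. A lighter-weight (if less thorough) check is to drop the caches explicitly between runs; a Linux-specific sketch, assuming root and an example source path, not necessarily the protocol used here:

```shell
# Flush dirty pages, then evict the page cache, dentries, and inodes,
# so the next build has to read from disk again. Run as root.
sync
echo 3 > /proc/sys/vm/drop_caches

# Time the build; the path is an example, adjust to the actual tree.
cd /root/linux && time make -j"$(nproc)"
```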
> Meanwhile, this is somewhere between embarrassing and comedy:
> Hmmm, 21 seconds to launch GNOME Terminal with an NVMe and you aren't
> curious about what went wrong? Because obviously something is wrong.
> The measurement is wrong or the method is wrong or something in the
> setup is fouling things up. How do you get a fast result with SSD but
> then such a slow result with NVMe?
> It makes no sense, but meh, we'll just publish that shit anyway! LOLZ!
> And that is how you light your credibility on fire, because you just
> don't give a crap about it.
You misread it: the NVMe startup time is 1.03 sec. The 21.01 sec time is from a
SATA 3.0 SSD. No need to swear.
I'm sorry, I'm referring to the article as being not credible. The
"you" is not directed at YOU. That's sloppy writing on my part.
I don't have an explanation for the enormous difference, just that I
can't reproduce these numbers, and they don't make sense. Either the
measurement method is wrong, or there's something pathological with
the setup that's causing just btrfs to be really slow on SSDs. Could
it be scheduler related? Maybe, no idea. At least on Fedora we are
using different schedulers for NVMe and SSDs, even though these tests
weren't run on Fedora.
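On the scheduler point: which I/O scheduler each block device is using can be read from sysfs, with the active one shown in brackets. A small sketch to list them (the sample output is illustrative only; defaults vary by distro and device type):

```shell
# Print the active I/O scheduler (shown in brackets) per block device.
show_schedulers() {
    for f in /sys/block/*/queue/scheduler; do
        [ -e "$f" ] || continue      # glob may match nothing in a container
        dev=${f#/sys/block/}
        printf '%s: %s\n' "${dev%/queue/scheduler}" "$(cat "$f")"
    done
}

show_schedulers
# Illustrative output only (defaults differ per system):
#   nvme0n1: [none] mq-deadline kyber bfq
#   sda: mq-deadline kyber [bfq] none
```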
> Not to say it is not odd compared to other results, but we can only
That's my complaint: it doesn't make sense, and it's not explored.
Random guy on a list (me) complains about Phoronix benchmarks = not news.
Good for you. But you're trying to make a decision for all other
people, so you need to take into account that not everyone has an NVMe
drive or SSD. The HDDs that many people are still using are much slower.
This means your "1 second vs 0.5 second" can easily turn into "5 seconds
vs 10 seconds" (and not necessarily linearly).
I'm not making any claims about sysroot on HDD.
You misread me, I wasn't talking about CPU time, I was talking
:P Clearly I should NOT get into discussions about benchmarks. I find
them annoying, and mostly unhelpful, obviously.