The latter, but considering they cover a broad variety of workloads, I think it's misleading to call them server workloads, as if that were one particular type of thing, or not applicable to a desktop under IO pressure. Why? (a) they're using consumer storage devices; (b) these are real workloads rather than simulations; (c) even by upstream's own descriptions of the various IO schedulers, only mq-deadline is intended to be generic; (d) it's really hard to prove anything in this area without a lot of data.
You are right that the difference between them is blurry. My question comes from being unsure whether Fedora users are experiencing problems with bfq but not reporting them, or whether something specific at Facebook is causing that pathological scheduling behavior. It was also my understanding that Facebook primarily uses NVMe drives [1][2], and that is the class of storage Fedora does not use bfq with. Is it possible these latency problems occurred when using bfq with NVMe drives?
I now see that Paolo was cc'd in comment #9 of the bugzilla ticket, so hopefully he
responds.
But fair enough, I'll see about collecting some data before asking to change the IO scheduler yet again.
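As a starting point for that data collection, it's worth recording which scheduler each device is actually using. A minimal sketch (the `active_sched` helper is my own naming, not an existing tool): the kernel lists the available schedulers in /sys/block/&lt;dev&gt;/queue/scheduler and marks the active one with square brackets.

```shell
# The kernel reports e.g. "mq-deadline kyber [bfq] none" in
# /sys/block/<dev>/queue/scheduler; brackets mark the active scheduler.
# Hypothetical helper to pull out the bracketed name:
active_sched() {
  printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

# On a live system (assumes Linux sysfs is mounted):
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] || continue
  printf '%s: %s\n' "${f%/queue/scheduler}" "$(active_sched "$(cat "$f")")"
done
```

Logging this per device alongside latency measurements would make it easy to correlate any stalls with the scheduler that was in effect at the time.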
For the record, I definitely agree that mq-deadline should become the default scheduler
for NVMe drives.
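If that default were adopted, I assume it would be expressed the same way Fedora sets scheduler defaults today, i.e. via a udev rule; a sketch along those lines (the KERNEL match pattern here is my guess at covering NVMe namespaces, not a tested rule):

```
# Hypothetical udev rule: make mq-deadline the default for NVMe devices
ACTION=="add", SUBSYSTEM=="block", KERNEL=="nvme*n*", \
    ATTR{queue/scheduler}="mq-deadline"
```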
[1]
https://nvmexpress.org/how-facebook-leverages-nvme-cloud-storage-in-the-d...
[2]
https://engineering.fb.com/data-center-engineering/introducing-lightning-...