Proposed F18 feature: MiniDebugInfo
mzerqung at 0pointer.de
Mon May 7 21:36:04 UTC 2012
On Mon, 07.05.12 23:02, Jan Kratochvil (jan.kratochvil at redhat.com) wrote:
> On Mon, 07 May 2012 22:16:02 +0200, Lennart Poettering wrote:
> > Everybody who builds OSes or appliances, and who needs to supervise a
> > large number of systems, and hosts, wants stacktraces that just work,
> > and don't require network access or any complex infrastructure.
> Yes, they work for me.
> 3.9G /usr/lib/debug/
> People who build OSes or appliances can definitely afford several GBs
> of HDD.
Some certainly can, but not all want to. And it's not just disk space, it's
also downloading all that data in the first place...
I mean, just think of this: you have a pool of workstations to
administer. They are all the same machines, with the same prepared OS
image. As the admin, you want to know about the backtraces. Since the OS
images are all identical, some errors will happen across all the machines
at the same time. With your logic this would result either in all of them
downloading a couple of GB of debuginfo for glibc and the like, or in all
of them bombarding the retrace server, if they can.
But anyway, I don't think it's worth continuing this discussion, this is
a bit like a dialogue between two wet towels...
> Your objection was "without having to keep coredumps".
> So we agree we need to keep it at least for several seconds.
> For ABRT Retrace Server we need to keep it at most for several minutes, before
> it gets uploaded (either whole or in gdbserver-optimized way).
Well, that assumes the network works, that I am connected to one, and
that I am happy to pay for 3G of traffic. And so on.
Here's another thing you should think about: the stack of things that
needs to work to get a remote retrace done properly is immense: you
need abrt working, you need NM working (and all the stuff it pulls in),
you need your ISP working, and your cabling, and everything else. With
Alex's work you need very, very little working: just a small in-process
unwinder. Full stop.
> > Also note that a couple of projects over the years have been patched to do
> > in-process backtraces with backtrace(3). For example D-Bus did. The fact
> > that people did that makes clear that people want client side backtraces
> > that just work.
> These people probably do not have an ABRT Retrace Server, so they make do
> with poor solutions. Fedora already has a better solution.
> Fedora should improve, not degrade.
I am pretty sure I don't want my local developer machine to always talk
to the Fedora servers while I develop and something crashes. Jeez. I want
to hack on trains and on planes, and I want my data to stay private.
> > Well, but how do you figure out that a crash is unique? You extract a
> > backtrace in some form and match it against some central database of
> > some form. That's kinda hard to do without, well, centralization.
> Yes, this is already being developed by the ABRT team. I do not welcome it,
> as it will make occasional wrong decisions, but if the Retrace Server farm
> gets into real capacity trouble this solution is at least available.
Look at this data from Mozilla:
For Firefox 12.0 alone they get 110726 crashes per day. That's one package,
and one version of it. Admittedly they have a much bigger user base than
us, but we have an entire distribution to care for. An *entire*
*distribution*. And so far this is all done for complete coredumps, not
the minidumps Mozilla uses. Mozilla's numbers alone already work out to
77 crashes per minute. If abrt is ever to see real use, you'll probably
get into much higher ranges. That's a huge number of requests.
Sure, one can make the retrace server scale to this, for example by
being Google and hosting a datacenter just for it. But there's a much
smarter way: do client-side backtraces and be done with it.
> Not "go away" but it says "here is the results already backtraced
But I want it for my data, not anybody else's!
Lennart Poettering - Red Hat, Inc.