Proposed F18 feature: MiniDebugInfo
jan.kratochvil at redhat.com
Mon May 7 21:02:01 UTC 2012
On Mon, 07 May 2012 22:16:02 +0200, Lennart Poettering wrote:
> Everybody who builds OSes or appliances, and who needs to supervise a
> large number of systems, and hosts, wants stacktraces that just work,
> and don't require network access or any complex infrastructure.
Yes, they work for me.
People who build OSes or appliances can definitely afford several GBs of disk space.
The goal of minidebuginfo and/or the Retrace Server is to provide a good-enough
service for regular users, who hit a crash and file a bug report only
occasionally and otherwise have no use for the debuginfo files.
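For readers unfamiliar with the proposal, the mechanism can be sketched roughly as follows: a minimal symbol table is extracted from the binary, xz-compressed, and embedded as a .gnu_debugdata section that gdb can read without /usr/lib/debug. This is a hedged illustration only, assuming binutils and xz are installed; `demo` is a stand-in binary, and a real implementation would also filter exactly which function symbols to keep.

```shell
# Hedged sketch of the MiniDebugInfo mechanism. Assumes binutils and xz.
cp "$(command -v ls)" ./demo                    # stand-in for a freshly built binary

objcopy --only-keep-debug ./demo ./demo.debug   # debug data, as for /usr/lib/debug
objcopy -S ./demo.debug ./demo.mini             # strip down to a minimal symbol table
xz -f ./demo.mini                               # compress (produces demo.mini.xz)
objcopy --add-section .gnu_debugdata=./demo.mini.xz ./demo
```

The point of the compressed embedded section is that it costs a small fraction of full debuginfo while still letting a debugger produce function names in backtraces.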
> Temporarily and briefly storing things on disk is not a problem. Whether
> something is in a temporary file or in memory is pretty much an
> implementation detail. What matters it that we don't have to keep them
> all the time.
Your objection was "without having to keep coredumps".
So we agree we need to keep it at least for several seconds.
For the ABRT Retrace Server we need to keep it at most for several minutes,
before it gets uploaded (either whole or in a gdbserver-optimized way).
I do not find seconds vs. minutes such a critical difference.
> We don't need the full backtrace in all cases. There's a lot of room
> between "no backtrace" and "best backtrace ever". For the client-side
> backtraces in "low quality" the way Alex suggests are perfectly OK.
For whom and for which purposes is it "perfectly OK"? At least not for ABRT
backtraces. We should define the scope of usefulness of minidebuginfo.
If it is only for non-ABRT uses, I can stop complaining, as I do not know about those uses.
> Also note that a couple of projects over the years have been patched to do
> in-process backtraces with backtrace(3). For example D-Bus did. The fact
> that people did that makes clear that people want client side backtraces
> that just work.
These people probably do not have an ABRT Retrace Server, so even poor
solutions are sufficient for them. Fedora already has a better solution.
Fedora should improve it, not degrade it.
> I mean, people all around of us go for client side backtraces,
It is the simplest way to implement backtracing functionality. That does not
mean it is optimal, either in performance (for the user) or in backtrace
quality (for developers).
> > Unique crashes do not happen so often.
> Well, but how do you figure out that a crash is unique? You extract a
> backtrace in some form and match it against some central database of
> some form. That's kinda hard to do without, well, centralization.
Yes, this is already being developed by the ABRT team. I do not welcome it, as
it will occasionally give wrong decisions, but if the Retrace Server farm gets
into real capacity trouble, this solution is at least available.
> And anyway: so I want my backtrace resolved, and by your suggestions I'd
> hence have to talk to your server. But the server then tells me: nah, no
> can do, yours has been seen before, go away!
Not "go away"; it says "here are the results, already backtraced before".
But here is a misunderstanding of the target user again. If you are
interested in the backtrace for reasons other than just an ABRT bug report,
you are a developer. You can afford several GBs of /usr/lib/debug for
high-quality local backtraces.
> I mean, there are certain things that should just work, without any
> complex centralized infrastructure,
Yes, it is called /usr/lib/debug. But it should not be required for ABRT
bug reports. ABRT bug reports already require the infrastructure where the bug
is filed, so without available infrastructure neither bug reporting nor
backtracing makes sense for ABRT.
> I mean, I can tell you: I want client-side backtraces that just work,
So why don't you just install the 3-4 GB of /usr/lib/debug locally? Why push
2% of the distro size onto people who have no use for it, since they are happy
with the ABRT Retrace Server?