On 5.1.2011 17:49, Jan Kratochvil wrote:
On Wed, 05 Jan 2011 17:31:44 +0100, Karel Klic wrote:
I'm working on a client uploading the whole coredump now.
It would be nice to collect some anonymous stats on RTT, bandwidth, and core sizes for further decisions, and also on how the idea is received with respect to security.
Agreed.
60 seconds is to generate a backtrace from a coredump. Much more time is usually needed to download and extract debuginfos.
Why is there such a hard limit? I wanted to report my local Firefox crash but I could not. I would have kept it running longer, but I was given no such option.
A comment in abrt/src/abrt-action-generate-backtrace.c says: "Bugs in gdb or corrupted coredumps were observed to cause gdb to enter infinite loop. Therefore we have a (largish) timeout, after which we kill the child."
We should increase the limit if it can be reached during normal operation. It seems reasonable to have some limit in place, because incomplete coredumps appear once in a while.
So what about increasing the limit to 240 seconds?
Denys might know more about this.
K