On Fri, 2010-08-13 at 21:57 -0400, Toshio Kuratomi wrote:
> On Fri, Aug 13, 2010 at 08:24:07PM -0400, David Malcolm wrote:
> > On Fri, 2010-08-13 at 19:38 -0400, Toshio Kuratomi wrote:
> > > On Fri, Aug 13, 2010 at 02:20:51PM -0400, David Malcolm wrote:
> > > > Possible ways forward:
> > > > (a) don't fix this; treat enabling the warning as being in the "Doctor, it
> > > > hurts when I do this!" "So don't do that!" category, and add this to the
> > > > release notes.  Patch Python code that enables the warning so that it
> > > > doesn't.
> > > > (b) try to fix the ones that are self-contained; send fixes upstream
> > > > (c) try to fix them all; send fixes upstream
> > > > (d) hack the python rpm to remove this warning; this would be a
> > > > significant change from upstream, given that it's already disabled.
> > > >
> > > Taking the next bit out of order:
> > > > Personally, I'm leaning towards option (a) above (the "don't enable the
> > > > warning" option): closing the various bugs as WONTFIX, and adding a note
> > > > to the release notes, whilst working towards fixing this in Fedora 15.
> > > > Affected applications should be patched in Fedora 14 to avoid touching
> > > > the relevant warning setting, and we'll fix the root cause in Fedora 15.
> > > >
> > > Is it overriding the warnings option that causes a problem or is it *only*
> > > setting the warnings filter to 'error' that is the problem?  I think
> > > setting the warning level to always, default, module, or once should be
> > > supported.  Setting a "warning" to "error" could be seen as a special case,
> > > though.  ie: if it's only error that's affected, then (a) seems okay.  If
> > > the others also cause issues, then I think (a) is the wrong fix.
> > If you set it to "always", "default", "module", or "once" you'll get
> > noise on stderr, but it won't trigger the hard failure.
> > It's only on setting it to "error" that you get the hard failure.
> Okay -- so it sounds like:
> * When used with pure python code, the warnings mechanism functions as
>   documented
> * When C code is involved *and* the warnings filter has been set to 'error'
>   (not when set to 'default', 'once', 'module', etc) then an exception ends
>   up being raised where most C code is not expecting it.
I'm not sure I'd use the word "most" here; but there are indeed a number
of significant code paths across Fedora where C code is not expecting it
(e.g. PyGTK initialization).
> * By not knowing to deal with that exception condition, the C code is
>   subject to abort or SegFault.
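Right.  To make that failure mode concrete, here's a minimal stdlib-only
sketch (illustrative only, not taken from any of the affected packages):
under the 'error' action the warning surfaces as an exception, while
'always' merely prints to stderr and lets execution continue:

```python
import warnings

# Under the 'error' action, the warning category is raised as an
# exception; C extension code that triggers the warning internally is
# typically not prepared for that.
with warnings.catch_warnings():
    warnings.simplefilter('error')
    try:
        warnings.warn("PyCObject is pending deprecation",
                      PendingDeprecationWarning)
        raised = False
    except PendingDeprecationWarning:
        raised = True

# Under 'always' (similarly 'default', 'module', 'once') the warning is
# only printed to stderr and execution continues.
with warnings.catch_warnings():
    warnings.simplefilter('always')
    warnings.warn("PyCObject is pending deprecation",
                  PendingDeprecationWarning)
    survived = True
```

(catch_warnings() restores the filter list on exit, so the demo doesn't
leak filter state into the rest of the process.)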
> One further question: Does this only cause problems with the
> PendingDeprecationWarning?
> ie: Can code do this without a problem?:
>   warnings.simplefilter('error')
>   warnings.simplefilter('default', PendingDeprecationWarning)
This suppresses the crash, but leads to noise on stderr:
warnings.simplefilter adds the filter to the head of the list, so the
filter added by the 2nd line is consulted before the one from the first.
If you change the 2nd line to "ignore", then things work without noise:
>>> import warnings
>>> warnings.simplefilter('error')
>>> warnings.simplefilter('ignore', PendingDeprecationWarning)
>>> import gtk
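The ordering matters because the filter list is consulted front-to-back;
a short standalone sketch (stdlib only) showing that the later
simplefilter() call wins for PendingDeprecationWarning while other
warning categories still escalate to exceptions:

```python
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('error')
    warnings.simplefilter('ignore', PendingDeprecationWarning)

    # simplefilter() inserts at the head of warnings.filters, so the
    # 'ignore' entry added second is consulted before the 'error' entry.
    head_action = warnings.filters[0][0]

    # PendingDeprecationWarning is now silently dropped...
    warnings.warn("PyCObject_FromVoidPtr is pending deprecation",
                  PendingDeprecationWarning)
    pdw_ok = True

    # ...while other categories still escalate to exceptions.
    try:
        warnings.warn("something else", UserWarning)
        escalated = False
    except UserWarning:
        escalated = True
```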
So I'm inclined to say: if you have code that enables warnings with
"error", please add "ignore" on PendingDeprecationWarning as well, to
avoid problems from these modules.
> If this is okay, then I'd modify your point (a) to be this plan:
> When code that turns all warnings into errors is encountered, have it
> instead cause PendingDeprecationWarning to print to stderr via the
> "default" action.
I don't like this approach: it's a divergence from upstream, and it
seems like magic: it seems to be second-guessing the API call.
It seems simpler and clearer to ask that code that enables errors on
warnings needs to turn it off for PendingDeprecationWarning to be able to
use modules that are known to break in the presence of the warning.
> Send that patch to upstream with the explanation that setting warnings to
> error outside of testing is not good with python-2.7 for a couple years,
> because C code that uses the only recently deprecated old PyCObject API is
> likely to segfault or abort when this is done.
> Anyone who wants to can work on porting the PyCObject API calls to PyCapsule,
> but this is not a Fedora requirement.  If we happen to notice it being used,
> feel free to notify upstream about the dangers.
I've done this for PyGTK.
> > > > One issue here is that this API expresses a binary interface between
> > > > different Python modules, and that we don't yet have a way to express
> > > > this at the rpm metadata level.  I think we should, to make it easier to
> > > > track these issues in the future.  I don't think it's possible to detect
> > > > these automatically, but we could do it manually.
> > > >
> > > Tracking this manually is no good unless you can explain to people how to
> > > detect it.  Once you can explain how to manually detect it, it might be
> > > possible to automatically detect it....
> > You have to scrape through for ABI calls to PyCObject: the presence of
> > the calls is visible in the ELF metadata, but not the exact strings.
> > Actually, it _might_ be possible to figure them out via disassembly of
> > the machine code, but this seems fragile.
> This already sounds like something that is too involved for maintainers and
> package reviewers to do.  I think this might be something that doesn't leave
> the drawing board without tooling to at least do part of the detective work.
Fundamentally the review process for this involves grepping through the
source code; any usage of a PyCObject_* function will return NULL if
PendingDeprecationWarning is set to "error".  It needs a little ability
to read C code, but (I hope) not much.  The issue with macros in the
header files is a nasty gotcha, though :-(
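As a starting point for that tooling, here's a rough sketch of a scanner
(a hypothetical helper I'm improvising here, not an existing Fedora tool)
that does the grepping part over a source tree:

```python
import os
import re
import tempfile

# PyCObject_* references in C source are what the review needs to find.
PATTERN = re.compile(r'\bPyCObject_\w+')

def find_pycobject_uses(root):
    """Walk a source tree and report every PyCObject_* reference
    in .c and .h files as (path, line number, identifier) tuples."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            if not filename.endswith(('.c', '.h')):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, errors='replace') as f:
                for lineno, line in enumerate(f, 1):
                    for match in PATTERN.finditer(line):
                        hits.append((path, lineno, match.group()))
    return hits

# Demo on a throwaway tree containing one offending call:
with tempfile.TemporaryDirectory() as tree:
    with open(os.path.join(tree, 'module.c'), 'w') as f:
        f.write('obj = PyCObject_FromVoidPtr(api, NULL);\n')
    uses = find_pycobject_uses(tree)
```

It won't catch usage hidden behind macros in header files, of course;
that gotcha still needs a human reading the code.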
I had a go at writing some release notes for this issue here:
How does this look?