On Mon, Apr 30, 2012 at 3:57 PM, Jon Ciesla
> On Mon, Apr 30, 2012 at 8:56 AM, Jon Ciesla <limburgher(a)gmail.com> wrote:
>> On Mon, Apr 30, 2012 at 8:50 AM, "Germán A. Racca"
>> <german.racca(a)gmail.com> wrote:
>>> Hi list:
>>> I'm the packager of APLpy: http://aplpy.github.com/
>>> I'm going to update it to a new version, which comes with a set of
>>> tests, but I'm not sure about what to do with them. I asked upstream
>>> and the answer was:
>>> "The tests are there for us to diagnose any issues related to specific
>>> dependency versions and platforms, and to make sure that we don't
>>> break anything when making changes. It would be useful if you include
>>> them so that we can ask users to run them if they are having issues we
>>> can't reproduce, but you don't need to run the tests as part of the
>>> build process."
>>> I'm still not sure. Should I include them in the package?
>> Unless they impose huge build deps or something, run them in make check.
> To more directly answer your question, yes, include and run tests
> whenever possible. :)
This sounds like a SHOULD: include tests whenever possible...
It's enough to run them in %check; normally a user of the package
doesn't need to have the tests installed.
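For reference, a minimal sketch of what running the suite in %check could
look like in the spec (the exact test command is an assumption about how
APLpy's setup.py is laid out; adjust to whatever upstream actually provides):

```spec
# Run the upstream test suite at build time only;
# nothing from it ends up in the installed package.
%check
%{__python} setup.py test
```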
I'd only include them when upstream wants them in and installs them via
"make install" or "setup.py install". Only in the latter case do you need
to decide whether to split the tests into a subpackage.
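If upstream's install step really does ship the tests, a -tests subpackage
might look roughly like this (package name, summary, and file paths are
illustrative assumptions, not APLpy's actual layout):

```spec
# Ship the installed tests separately so the main package stays lean.
%package tests
Summary: Test suite for APLpy
Requires: %{name} = %{version}-%{release}

%files tests
%{python_sitelib}/aplpy/tests/
```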
Whether to ship the tests for the end user is a more complicated problem
because test suites aren't terribly standardised and so different upstream
test suites may have various issues.
For instance, the APLpy runtests.py script has a chunk of encoded text at
the start. From the code that operates on it, I'm guessing that it's
a pickled compiled code block of a bundled library (python-py). That's
something we'd hesitate to run in the buildsys, let alone on users' machines.
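To illustrate why such a blob is hard to review: a common self-extracting
pattern is to embed a zipped copy of a support library as encoded text and
import it at runtime, so the actual source never appears in the script. The
sketch below builds a tiny blob and then unpacks and imports it the way such
a script would (the module name `bundled_helper` is purely hypothetical, and
this is the general pattern, not necessarily exactly what runtests.py does):

```python
import base64
import io
import os
import sys
import tempfile
import zipfile

# Step 1: build a tiny "bundled library" blob. In a real self-extracting
# script, this encoded string would simply be pasted in as a literal.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("bundled_helper.py", "def answer():\n    return 42\n")
blob = base64.b64encode(buf.getvalue()).decode("ascii")

# Step 2: what the script does at runtime -- decode the blob, write it
# out, and make it importable via zipimport. A reviewer only ever sees
# the opaque base64 text, never the library's source.
tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "bundle.zip")
with open(zip_path, "wb") as f:
    f.write(base64.b64decode(blob))
sys.path.insert(0, zip_path)  # Python can import .py files from a zip

import bundled_helper
print(bundled_helper.answer())  # -> 42
```

Running code that arrives this way means executing bytes nobody has audited
as plain source, which is exactly the concern for the buildsys.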