I finished the comparison of py.test and nose. Links to the detailed results and code are at the end of this email.
Comments and/or suggestions would be appreciated. Let the discussion commence!
Tim
Executive Summary: After a detailed comparison of the two tools, I think that py.test would be the better choice of test tool for AutoQA.
Py.test has better documentation, more detailed output on test failure, more customizability without resorting to custom plugins, and better support for test isolation.
While we would not be able to leverage as much local experience with py.test, its better documentation should let us find solutions ourselves instead of having to rely on the experience of others.
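To illustrate the "more detailed output on test failure" point, here is a hypothetical py.test-style test (the names are invented, not taken from the AutoQA code). py.test tests can be plain functions with bare asserts; when such an assert fails, py.test reports the intermediate values involved rather than just an AssertionError:

```python
# Hypothetical example, not from the AutoQA code: a plain-assert test in
# py.test style.  If the assert below fails, py.test prints the actual
# value of `result`; the stock unittest runner would show only a bare
# AssertionError.

def depcheck_result(pkg):
    # Stand-in for a real check: pretend every package except "bar" passes.
    return "PASSED" if pkg != "bar" else "FAILED"

def test_depcheck_passes():
    result = depcheck_result("foo")
    assert result == "PASSED"
```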
Detailed Results: https://fedoraproject.org/wiki/User:Tflink/AutoQA_nose_pytest_comparison
py.test code: https://github.com/tflink/autoqa-devel/tree/pytest
nose code: https://github.com/tflink/autoqa-devel/tree/nose
NOTE: I didn't see a point in merging this into the fedorahosted repo since it's just a proof of concept. This can be done if desired, though.
----- Original Message -----
<snip>
I'm almost sold on pytest, good job :-)
Because I have never worked with the standard Python unittest module, can you add a note about it to the comparison? Do I understand correctly that we can either write standard unittest tests and use pytest as their runner, or write pytest-specific tests (and that you recommend the second approach)?
Thanks, Kamil
On 03/10/2011 06:06 AM, Kamil Paral wrote: <snip>
You understand correctly. Older versions of py.test didn't integrate well with unittest test cases, but 2.0 was supposed to bring unittest integration on par with nose.
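A minimal sketch of the first option (all names here are invented for illustration): a standard unittest test case, which py.test 2.0 should collect and run unchanged, e.g. via `py.test test_example.py`:

```python
import unittest

class TestExample(unittest.TestCase):
    # Ordinary unittest boilerplate: subclass TestCase, use setUp and
    # the assert* methods.  py.test 2.0 collects classes like this the
    # same way `python -m unittest` does.

    def setUp(self):
        self.values = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)
```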
I'll add some more stuff to the wiki page, but as a short version:
There is nothing wrong with using unittest, but it has two downsides: 1) xUnit clones like unittest require a decent amount of boilerplate code, and 2) unittest test cases can't take advantage of some of the more advanced features in py.test (nicer failure output, funcargs, monkeypatching, etc.).
The advantage is that unittest is the least common denominator among the testing frameworks: unittest-based tests run under nose, under py.test, or on their own.
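For contrast, a py.test-native version of such a check (again an invented example) is just a module-level function with a plain assert, which is where the boilerplate savings come from:

```python
# py.test-native style: no TestCase subclass, setUp, or assertEqual --
# any module-level function named test_* containing plain asserts is
# collected as a test.
def test_sum():
    values = [1, 2, 3]
    assert sum(values) == 6
```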
Personally, I'm not a huge fan of xUnit when there are other options. My testing framework of choice in Java is TestNG, and one of the reasons for that is the boilerplate (IIRC, JUnit 4 fixed some of that, though).
Tim
autoqa-devel@lists.fedorahosted.org