On 2/14/20 10:08, Kamil Paral wrote:
We do have the test case for Core Applications and the ones for Browser and Terminal. Software even gets a tryout for updating. The question is: is this sufficient?
https://fedoraproject.org/wiki/QA:Testcase_workstation_core_applications is just about a few apps being present, nothing more. I edited the test case to highlight that. But I was wrong: we also have https://fedoraproject.org/wiki/QA:Testcase_desktop_menus, which checks the basic functionality of all applications, including gnome-software. That covers exactly installing/removing an app, running the app directly from the software center, and probably a couple of other actions. We could have a separate test case for high-profile applications like gnome-software, which would even help us define what "basic functionality" means (if it's in the test case, it's likely basic). Is that necessary? Probably not. Should we have a separate test case? Shrug, I don't have a clear preference.
I commented here a couple of times, a year or so ago, about testing all the applications that get installed as part of the Fedora install. I believe there was a test case along those lines then. I think what happened is that it was viewed as a lot of work, and as I recall that's where the core applications test case came from.
No, see the links above and read the test cases carefully. The "all apps must work" test case is still there, and the "core workstation apps must be present" test case is just about them being present. Yes, QA dislikes the former test case and criterion; it's vague and a lot of work. We want to do something about it, and I should have sent a proposal directly related to this some time ago. I still haven't, because I was waiting to clear up the situation regarding new blocking artifacts in F32, but either way, you'll see a new proposal from me soon.
Sorry, for some reason I thought the one that tested all the apps had been removed. I'll be happy to help with a draft if you like.
Back then the original test was to open each of the "standard" applications, check its About dialog, and close it.
Well, the instruction to check the About menu is still there in the test case :)
When I do my tests, I verify a clean start, open a file (if applicable), make a minor edit, save the file, reopen the file to verify the edit, check the about/credits, verify a clean close. I doubt very much that a standard test case could tolerate all that. My guess is a clean start and close would be more like it. Also a list of applications to be tested. The remaining applications could be "boxed out" as optional.
I do this now as part of my "as deployed" testing after I run the standard test cases. Some of the "standard" applications do not get tested since they are removed in the "as deployed" configuration.
I think all the standard applications should get a basic dead-or-alive test. This could be limited somewhat for suites like LibreOffice, since its different applications share so many common components.
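A dead-or-alive check like that could even be scripted. Here's a minimal sketch of what I have in mind; the `dead_or_alive` helper is hypothetical (not an existing QA tool), and `sleep`/`false` just stand in for a real application launch:

```shell
#!/bin/sh
# Hypothetical dead-or-alive smoke test: launch a command in the
# background, wait a grace period, and report whether the process
# is still running. Returns 0 (alive) if it survived the grace
# period, 1 (dead) if it exited or crashed before then.
dead_or_alive() {
    cmd=$1
    grace=${2:-3}           # seconds to wait before checking

    $cmd &                  # launch the "application"
    pid=$!

    sleep "$grace"

    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" 2>/dev/null     # clean up: close it again
        wait "$pid" 2>/dev/null
        echo "ALIVE: $cmd"
        return 0
    else
        echo "DEAD: $cmd"
        return 1
    fi
}

# Stand-ins: a long-running command counts as alive,
# an immediately-exiting one counts as dead.
dead_or_alive "sleep 30" 1
dead_or_alive "false" 1
```

For real GUI applications you'd launch the desktop binary instead of `sleep`, and probably want a longer grace period so slow starters aren't misreported as dead.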
If "standard apps" means apps installed by default, that's exactly what the "desktop menus" test case is about. (And I guess we should rename it to make that clearer.)
Yes, my use of "standard apps" means those installed by default. I'm fully in favor of descriptive names :)
It might also be beneficial to install a "non-standard" application and not only verify that the package manager worked, but also give the installed application a dead-or-alive test.
That's definitely beneficial, please do it often :-) But we're not likely to block on a broken application that's not pre-installed. So that's why it's not among test cases - we design test cases around release criteria.
No, I wouldn't expect it to be a blocker. I have one buggy app in F32 from the Fedora repo that I haven't filed a bug on yet. My guess is that the bug should be filed with the folks who support the app rather than in Bugzilla.
Shall I draft up a new version of the QA:Testcase_desktop_menus?
Have a Great Day!
Pat