[Fedora QA] #76: Use two methods to specify install source

Fedora QA trac at fedorahosted.org
Mon Jun 7 14:27:10 UTC 2010


#76: Use two methods to specify install source
--------------------------+-------------------------------------------------
  Reporter:  rhe          |       Owner:  rhe      
      Type:  enhancement  |      Status:  new      
  Priority:  major        |   Milestone:  Fedora 14
 Component:  Wiki         |     Version:           
Resolution:               |    Keywords:           
--------------------------+-------------------------------------------------
Comment (by jlaska):

 Hurry, I really like that you are working to improve the specificity and
 clarity of some of the tests.  As Jesse points out, many of
 the test permutations map to specific use cases.  With our current matrix,
 I find it hard to identify why specific tests are needed and how they
 impact different install use cases.

 While this might not be specific to this ticket, I wonder if it would be
 helpful to group the tests much like Andre suggests in the F13 QA
 retrospective.  For me, this helps clarify what use case is covered by
 different tests, helps clear up the setup conditions for a particular
 test, and has the benefit of lining up with Liam's install automation
 efforts.

 What do you think about some type of matrix test grouping like the
 following?

 '''Installation using boot.iso'''  - All tests that are specific to
 booting and installing with a boot.iso.
  * Media sanity (see the sketch after this list)
    * QA:Testcase_Mediakit_ISO_Size
    * QA:Testcase_Mediakit_ISO_Checksums
  * Does it boot? - QA/TestCases/BootMethodsBootIso
  * Starts graphical install? -
 QA:Testcase_Anaconda_User_Interface_Graphical
  * Uses on disk install.img - no test defined
  * Uses remote package repository - QA/TestCases/InstallSourceHttp
  * Variations
    * askmethod - no test defined yet
    * remote install.img locations (stage2=)
    * rescue mode - QA:Testcase_Anaconda_rescue_mode
    * memtest86 - no test defined yet
    * text-mode install?
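
 To make this concrete, the two mediakit tests above really just boil
 down to something like the following (only a sketch -- the file names,
 the size limit and the checksum-file format are made up here, not the
 real QA tooling):

 {{{#!python
 import hashlib
 import os

 ISO = "Fedora-14-x86_64-netinst.iso"         # made-up image name
 CHECKSUM_FILE = "Fedora-14-x86_64-CHECKSUM"  # made-up checksum list
 MAX_BYTES = 700 * 1024 * 1024                # made-up CD-sized limit

 def iso_size_ok(path, limit=MAX_BYTES):
     """QA:Testcase_Mediakit_ISO_Size: image fits the target media."""
     return os.path.getsize(path) <= limit

 def iso_checksum_ok(path, checksum_file):
     """QA:Testcase_Mediakit_ISO_Checksums: match the published SHA-256."""
     sha = hashlib.sha256()
     with open(path, "rb") as f:
         for block in iter(lambda: f.read(1 << 20), b""):
             sha.update(block)
     with open(checksum_file) as f:
         for line in f:
             parts = line.split()
             # assumes plain "<hash>  <filename>" lines (sha256sum style)
             if len(parts) == 2 and parts[1] == os.path.basename(path):
                 return parts[0] == sha.hexdigest()
     return False

 if __name__ == "__main__":
     print("size ok:    ", iso_size_ok(ISO))
     print("checksum ok:", iso_checksum_ok(ISO, CHECKSUM_FILE))
 }}}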

 '''Installation using DVD'''  - All tests that are specific to booting and
 installing with a DVD.
  * Media sanity (see the closure/conflict sketch after this section)
    * QA:Testcase_Mediakit_Repoclosure
    * QA:Testcase_Mediakit_FileConflicts
    * QA:Testcase_Mediakit_ISO_Size
    * QA:Testcase_Mediakit_ISO_Checksums
  * Does it boot? - QA/TestCases/BootMethodsDvd
  * Uses on disk install.img - no test defined (similar to
 QA/TestCases/InstallSourceDvd)
  * Uses on disk package repository - QA/TestCases/InstallSourceDvd
  * Installs with default package set -
 QA/TestCases/PackageSetsDefaultPackageInstall
  * Variations
    * askmethod - no test defined yet
    * remote install.img locations (stage2=)
    * remote package repositories (repo=) (QA:Testcase Additional Http
 Repository, QA:Testcase Additional Mirrorlist Repository)
    * rescue mode - QA:Testcase_Anaconda_rescue_mode
    * memtest86 - no test defined yet
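
 For the two extra mediakit tests in the DVD group (Repoclosure and
 FileConflicts), the underlying idea is: every Requires on the media
 must be satisfied by something else on the media, and no two packages
 should claim the same path (the real tools are smarter about identical
 files).  A conceptual sketch with made-up in-memory metadata, not the
 real repoclosure tool:

 {{{#!python
 def repo_closure_errors(packages):
     """Return requirements nothing on the media provides."""
     provided = set()
     for pkg in packages.values():
         provided.update(pkg["provides"])
     return [(name, req)
             for name, pkg in packages.items()
             for req in pkg["requires"]
             if req not in provided]

 def file_conflicts(packages):
     """Return paths claimed by more than one package."""
     owners = {}
     conflicts = []
     for name, pkg in packages.items():
         for path in pkg["files"]:
             if path in owners and owners[path] != name:
                 conflicts.append((path, owners[path], name))
             owners[path] = name
     return conflicts

 # Tiny example: "bash" requires "glibc", which the media also carries.
 media = {
     "bash":  {"provides": {"bash"}, "requires": {"glibc"},
               "files": {"/bin/bash"}},
     "glibc": {"provides": {"glibc"}, "requires": set(),
               "files": {"/lib/libc.so.6"}},
 }
 assert repo_closure_errors(media) == []
 assert file_conflicts(media) == []
 }}}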

 '''Installation using CD''' - All tests that are specific to booting and
 installing with a CD.
  * Basically same as above - replace DVD tests with the CD versions

 '''Installation using Live''' - All tests that are specific to booting and
 installing from a Live image.
  * Basically same as above

 '''Installation using PXE images''' - All tests that are specific to
 booting the PXE images.
  * Basically same as above
    * No mediakit tests
    * No local stage2/repo tests

 '''General tests''' - Anything not yet specified, or tests that can be
 performed independently of boot and installation source methods.
  * Installation sources - hd, hdiso, nfs, nfsiso
  * Partitioning - a '''reduced''' set of partitioning tests
  * Kickstart delivery
  * Package set - default, minimal (if desired)
  * Recovery tests
    * updates=
    * traceback handling
  * Upgrades
  * User Interface - VNC, text-mode, cmdline, telnet (if still available)
  * Storage devices - a '''reduced''' set (likely dmraid and iscsi)

 What I like about the "idea" above:
  * Clearly articulates which test cases are dependent on different
 media, and which tests are not dependent on media.
  * Tighter association between test and use case - When testing a DVD,
 there is a clear list of tests that are specific to just the DVD.  If a
 failure happens, it's more apparent how that failure would impact the
 other tests and the release criteria.

 Concerns:
  * What would this look like on the wiki?  Multiple matrices/tables?
 Different wiki page for each use case?
  * Can we remove enough duplicate or unessential tests to offset any
 additional work required to track these new tests?

-- 
Ticket URL: <https://fedorahosted.org/fedora-qa/ticket/76#comment:2>
Fedora QA <http://fedorahosted.org/fedora-qa>
Fedora Quality Assurance

