Greetings folks,
Whether you call it a post-mortem, retrospective or lessons learned ... the end result is the same. I'd like to collect thoughts on how good/bad of a job the QA group did in planning and testing the Fedora 12 release. In keeping with the release-wide retrospective from Fedora 11 [1], feel free to share any wishlist items as well.
I've started the discussion on the wiki at https://fedoraproject.org/wiki/Fedora_12_QA_Retrospective.
Adding your thoughts is easy ...
 * Edit the wiki directly (instructions provided for ~anonymous feedback)
 * Or, reply to this mail (I'll collect feedback and add to the wiki)
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
[1] https://fedoraproject.org/wiki/Fedora_11_Retrospective_Notes
On Wed, 2009-11-25 at 14:29 -0500, James Laska wrote:
Greetings folks,
Whether you call it a post-mortem, retrospective or lessons learned ... the end result is the same. I'd like to collect thoughts on how good/bad of a job the QA group did in planning and testing the Fedora 12 release. In keeping with the release-wide retrospective from Fedora 11 [1], feel free to share any wishlist items as well.
I've started the discussion on the wiki at https://fedoraproject.org/wiki/Fedora_12_QA_Retrospective.
Adding your thoughts is easy ...
 * Edit the wiki directly (instructions provided for ~anonymous feedback)
 * Or, reply to this mail (I'll collect feedback and add to the wiki)
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
1) in the end, we focused heavily on just three component areas for testing: anaconda, kernel, and X.org. This is primarily a function of the fact that these are the most vital bits; it feels like we're still at the point of doing 'let's make sure it's not totally broken' testing rather than 'let's make sure it's really good' testing. We didn't do stuff like making sure the desktop was polished.
2) there was clearly a lot of uncertainty about RAID issues; it's something we obviously don't as a team test well enough (and some of us personally don't understand enough :>). In the end there didn't turn out to be any horrible issues, but the confusion was evident, and we did miss the Intel BIOS RAID stuff-ups. For F13 we should have better RAID testing both in Test Days and in pre-release test cycles.
3) We weren't completely on top of X.org bugs for this release. The ones that wound up getting promoted to release blocker level were kind of an arbitrary selection. I think we got nouveau mostly right as I had a reasonable grip on nouveau triage, but we just had too few people to triage server / intel / ati bugs during the cycle, so when we hit beta / RC stage, we didn't have the whole bug set well enough triaged to be able to be sure we picked the most important bugs as blockers. For F13 we should stay on top of triage better so we can do blocker identification accurately. Happily, Matej is more active on X triage again now and we have some more assistance from Chris Campbell (thank you Chris!); further volunteers would be great. I will try to stay on top of nouveau again.
4) we have the big security thing to deal with. I did start a thread on -devel about that.
5) test days went well again. it was nice to see how many 'independent' test days there were.
that's what i've got so far :)
On Nov 25, 2009, at 11:44, Adam Williamson awilliam@redhat.com wrote:
On Wed, 2009-11-25 at 14:29 -0500, James Laska wrote:
Greetings folks,
Whether you call it a post-mortem, retrospective or lessons learned ... the end result is the same. I'd like to collect thoughts on how good/bad of a job the QA group did in planning and testing the Fedora 12 release. In keeping with the release-wide retrospective from Fedora 11 [1], feel free to share any wishlist items as well.
I've started the discussion on the wiki at https://fedoraproject.org/wiki/Fedora_12_QA_Retrospective.
Adding your thoughts is easy ...
 * Edit the wiki directly (instructions provided for ~anonymous feedback)
 * Or, reply to this mail (I'll collect feedback and add to the wiki)
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
- in the end, we focused heavily on just three component areas for
testing: anaconda, kernel, and X.org. This is primarily a function of the fact that these are the most vital bits; it feels like we're still at the point of doing 'let's make sure it's not totally broken' testing rather than 'let's make sure it's really good' testing. We didn't do stuff like making sure the desktop was polished.
Define the "we" here as the desktop team did put a lot of time into polish of the desktop.
-- Jes
On Wed, 2009-11-25 at 12:11 -0800, Jesse Keating wrote:
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
- in the end, we focused heavily on just three component areas for
testing: anaconda, kernel, and X.org. This is primarily a function of the fact that these are the most vital bits; it feels like we're still at the point of doing 'let's make sure it's not totally broken' testing rather than 'let's make sure it's really good' testing. We didn't do stuff like making sure the desktop was polished.
Define the "we" here as the desktop team did put a lot of time into polish of the desktop.
Given that this is a QA retrospective for the QA team being discussed on the QA mailing list, I thought it fairly obvious that the 'we' was QA :)
On Wed, 2009-11-25 at 11:44 -0800, Adam Williamson wrote:
- in the end, we focused heavily on just three component areas for
testing: anaconda, kernel, and X.org. This is primarily a function of the fact that these are the most vital bits; it feels like we're still at the point of doing 'let's make sure it's not totally broken' testing rather than 'let's make sure it's really good' testing.
I agree with this assessment. I would go even further and say it is to a large extent 'let's make sure the installer is not totally broken for exotic cases' testing.
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
We didn't do stuff like making sure the desktop was polished.
I don't think lack of QA was a major impediment to our polishing efforts. Of course, QA involvement is still appreciated...
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
We didn't do stuff like making sure the desktop was polished.
I don't think lack of QA was a major impediment to our polishing efforts. Of course, QA involvement is still appreciated...
To be more specific, I meant we didn't _test_ the appearance and functionality of the final desktop; i.e., we didn't test your work. As is the case everywhere, you (developers) do the work, we (QA) look for the bugs in it. At least, that's the theory :)
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
I'd say the main problem with preupgrade testing is that, given the fairly limited resources QA has, it's rather hard for us to recreate the infinite configurations people in the real world will try to run preupgrade on. It's inherently a nightmare of complexity. We can certainly try and do _better_ testing than we currently do, though.
On Wed, 2009-11-25 at 13:35 -0800, Adam Williamson wrote:
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
I'd say the main problem with preupgrade testing is that, given the fairly limited resources QA has, it's rather hard for us to recreate the infinite configurations people in the real world will try to run preupgrade on. It's inherently a nightmare of complexity. We can certainly try and do _better_ testing than we currently do, though.
Sure you can't hope to test a full matrix, but that is just as much the case for anaconda... yet the anaconda test matrix looks a lot more complete than the upgrade one. Anyway, I don't want to make it sound like the upgrade situation is mainly a QA problem - it is first-and-foremost a maintainership problem; we must get out of the situation that one of the two main avenues to the next release is wwoods weekend project - of course, the other one being the unloved stepchild of the installer team is not exactly perfect either...
On 11/25/2009 09:54 PM, Matthias Clasen wrote:
On Wed, 2009-11-25 at 13:35 -0800, Adam Williamson wrote:
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
I'd say the main problem with preupgrade testing is that, given the fairly limited resources QA has, it's rather hard for us to recreate the infinite configurations people in the real world will try to run preupgrade on. It's inherently a nightmare of complexity. We can certainly try and do _better_ testing than we currently do, though.
Sure you can't hope to test a full matrix, but that is just as much the case for anaconda... yet the anaconda test matrix looks a lot more complete than the upgrade one. Anyway, I don't want to make it sound like the upgrade situation is mainly a QA problem - it is first-and-foremost a maintainership problem; we must get out of the situation that one of the two main avenues to the next release is wwoods weekend project - of course, the other one being the unloved stepchild of the installer team is not exactly perfect either...
FYI, we are already improving preupgrade's QA process ( https://fedorahosted.org/fedora-qa/ticket/30 )
Afaik we don't officially support upgrading between releases, hence I'm not sure how high on the priority list upgrading is with Will and Team Anaconda and now, "to shock you all", even with us...
If we have started to officially support upgrading between releases, then we have to make damn sure user customization/configuration does not get overwritten and/or lost in the process, which means, for example for the Gnome desktop spin, no more "gconftool-2 --type int --set" workarounds for users to get their "old" behavior back.
How many backwards-compatibility test cases have we received from maintainers? (afaik, 0)
How well have they informed us or the support team when a change they have made breaks current behavior and/or is backward incompatible? Heck, do they even bother to inform us or the support team at all?
A 200 MiB boot partition used to be enough during preupgrades; I suspect the new initramfs might be the reason why it needs to be increased, and I'm pretty sure Will and Team Anaconda will gladly take any help they can get on improving preupgrading between releases.
JBG
On Wed, 2009-11-25 at 23:12 +0000, "Jóhann B. Guðmundsson" wrote:
On 11/25/2009 09:54 PM, Matthias Clasen wrote:
On Wed, 2009-11-25 at 13:35 -0800, Adam Williamson wrote:
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
I'd say the main problem with preupgrade testing is that, given the fairly limited resources QA has, it's rather hard for us to recreate the infinite configurations people in the real world will try to run preupgrade on. It's inherently a nightmare of complexity. We can certainly try and do _better_ testing than we currently do, though.
Sure you can't hope to test a full matrix, but that is just as much the case for anaconda... yet the anaconda test matrix looks a lot more complete than the upgrade one. Anyway, I don't want to make it sound like the upgrade situation is mainly a QA problem - it is first-and-foremost a maintainership problem; we must get out of the situation that one of the two main avenues to the next release is wwoods weekend project - of course, the other one being the unloved stepchild of the installer team is not exactly perfect either...
FYI, we are already improving preupgrade's QA process ( https://fedorahosted.org/fedora-qa/ticket/30 )
Afaik we don't officially support upgrading between releases, hence I'm not sure how high on the priority list upgrading is with Will and Team Anaconda and now, "to shock you all", even with us...
I thought that it wasn't official either, but this method has been added to the installation guide for F-12 (see http://docs.fedoraproject.org/install-guide/f12/en-US/html/ch17s02.html).
If we have started to officially support upgrading between releases, then we have to make damn sure user customization/configuration does not get overwritten and/or lost in the process, which means, for example for the Gnome desktop spin, no more "gconftool-2 --type int --set" workarounds for users to get their "old" behavior back.
How many backwards-compatibility test cases have we received from maintainers? (afaik, 0)
How well have they informed us or the support team when a change they have made breaks current behavior and/or is backward incompatible? Heck, do they even bother to inform us or the support team at all?
A 200 MiB boot partition used to be enough during preupgrades; I suspect the new initramfs might be the reason why it needs to be increased, and I'm pretty sure Will and Team Anaconda will gladly take any help they can get on improving preupgrading between releases.
Should I add a general "could have been better" item for improved communication between maintainers and QA? Does that accurately capture your thoughts here?
Thanks, James
On Wed, 2009-11-25 at 13:35 -0800, Adam Williamson wrote:
On Wed, 2009-11-25 at 16:20 -0500, Matthias Clasen wrote:
From my perspective, the two main avenues to a new Fedora release are
the live installer and preupgrade, and those two should get all the attention they can get.
I'd say the main problem with preupgrade testing is that, given the fairly limited resources QA has, it's rather hard for us to recreate the infinite configurations people in the real world will try to run preupgrade on. It's inherently a nightmare of complexity. We can certainly try and do _better_ testing than we currently do, though.
The good news for me was that the testing QA scoped out for preupgrade [1] helped highlight the preupgrade /boot disk-space problem. My understanding of this issue ...
 * 534052 - Preupgrade should check for sufficient disk space in advance
   1. Filed on 2009-11-10 by Kamil Paral during F-12-RC4 verification
   2. Further triaged and found as a DUPLICATE of bug#530541 (see below)
 * 530541 - Free space check on /boot not thorough enough
   1. Filed on 2009-10-23 by Alexander Boström while testing rawhide
   2. Problem correctly identified as insufficient free space for anaconda to install the new kernel+initrd.img
Highlights for me ...
 * The issue was discovered prior to release ... that's 'a good thing' [tm]. Just as cool, it was also discovered by someone outside the core QA team
 * The problem was correctly identified when filed by Alexander, but the impact on the default F-11 preupgrade user wasn't known at the time
 * Preupgrade is a great application; was there an opportunity to identify failure scenarios that we missed when we (the royal 'we' == Fedora) chose it as an official upgrade method?
Did I miss any?
Thanks, James
[1] https://fedoraproject.org/wiki/QA:Testcase_Preupgrade and https://fedoraproject.org/wiki/QA:Testcase_Preupgrade_from_older_release
On Thu, 2009-11-26 at 12:28 -0500, James Laska wrote:
Highlights for me ...
 * The issue was discovered prior to release ... that's 'a good thing' [tm]. Just as cool, it was also discovered by someone outside the core QA team
 * The problem was correctly identified when filed by Alexander, but the impact on the default F-11 preupgrade user wasn't known at the time
 * Preupgrade is a great application; was there an opportunity to identify failure scenarios that we missed when we (the royal 'we' == Fedora) chose it as an official upgrade method?
Did I miss any?
Yeah, we did catch that one. I think the only really obvious scenario we missed is one we already adjusted the test cases for - upgrading from a realistic previous-release configuration, not a brand-new clean install. The problem is that there are as many potential failure cases as there are combinations of packages and (especially) third-party repositories and software, and there are a lot of those. Also different disk layouts and bootloader configurations. I don't know how many we can realistically expect to test, or where we'd want to draw the boundaries. There have been quite a lot of people on the forums running into issues with preupgrade for various reasons.
On Wed, 2009-11-25 at 11:44 -0800, Adam Williamson wrote:
On Wed, 2009-11-25 at 14:29 -0500, James Laska wrote:
Greetings folks,
Whether you call it a post-mortem, retrospective or lessons learned ... the end result is the same. I'd like to collect thoughts on how good/bad of a job the QA group did in planning and testing the Fedora 12 release. In keeping with the release-wide retrospective from Fedora 11 [1], feel free to share any wishlist items as well.
I've started the discussion on the wiki at https://fedoraproject.org/wiki/Fedora_12_QA_Retrospective.
Adding your thoughts is easy ...
 * Edit the wiki directly (instructions provided for ~anonymous feedback)
 * Or, reply to this mail (I'll collect feedback and add to the wiki)
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
- in the end, we focused heavily on just three component areas for
testing: anaconda, kernel, and X.org. This is primarily a function of the fact that these are the most vital bits; it feels like we're still at the point of doing 'let's make sure it's not totally broken' testing rather than 'let's make sure it's really good' testing. We didn't do stuff like making sure the desktop was polished.
- there was clearly a lot of uncertainty about RAID issues; it's
something we obviously don't as a team test well enough (and some of us personally don't understand enough :>). In the end there didn't turn out to be any horrible issues, but the confusion was evident, and we did miss the Intel BIOS RAID stuff-ups. For F13 we should have better RAID testing both in Test Days and in pre-release test cycles.
- We weren't completely on top of X.org bugs for this release. The ones
that wound up getting promoted to release blocker level were kind of an arbitrary selection. I think we got nouveau mostly right as I had a reasonable grip on nouveau triage, but we just had too few people to triage server / intel / ati bugs during the cycle, so when we hit beta / RC stage, we didn't have the whole bug set well enough triaged to be able to be sure we picked the most important bugs as blockers. For F13 we should stay on top of triage better so we can do blocker identification accurately. Happily, Matej is more active on X triage again now and we have some more assistance from Chris Campbell (thank you Chris!); further volunteers would be great. I will try to stay on top of nouveau again.
- we have the big security thing to deal with. I did start a thread on
-devel about that.
I gather this goes under the heading "things that could have been better"? When you say 'security thing', do you mean that we don't have a policy or plan to ensure a basic level of security (as your f-devel-list thread raises)? Or are there other security aspects from F-12 QA to consider?
On Thu, 2009-11-26 at 12:10 -0500, James Laska wrote:
I gather this goes under the heading "things that could have been better"?
Yeah, I'm a glass half-empty kinda guy :)
When you say 'security thing', do you mean that we don't have a policy or plan to ensure a basic level of security (as your f-devel-list thread raises)? Or are there other security aspects from F-12 QA to consider?
that was it. The PolicyKit issue fallout.
On 11/26/2009 12:59 AM, James Laska wrote:
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback!
My personal experience, in addition to the impressions from reading all the end-user forums, mailing lists and news sites, is that Fedora 12 is a solid release. I request you to explicitly get feedback from at least fedora-list and http://fedoraforum.org from end users directly. Adam and James, are you subscribed to fedora-list and keeping track of the discussions there?
I think there is obviously room for improvement:
* IMO, RCs need to be advertised loudly. We need all the testing we can get, and more alpha or beta snapshots would help as well, I think.
* The PackageKit signed-install policy was a major issue, and although QA was not really responsible for it, the team does need to do all it can to avoid anything like this in the future. Signing Rawhide packages automatically would have caught this. Not many users in the forum or on fedora-list complained about it, however.
* A lot more users are trying preupgrade, and QA needs a strong focus on the upgrade story, including test days directed towards it. The small /boot is a major issue, and another common bug (not listed in the wiki but which I think needs to be added) is https://bugzilla.redhat.com/show_bug.cgi?id=538118, which makes the problem worse. We really need to make sure Anaconda creates a bigger /boot for Fedora 13, and that preupgrade is explicitly tested well and has solid workarounds for the small /boot case (a quick free-space check is sketched after this list).
* KMS is still flaky in some cases. In particular, a few Intel users seem to be reporting lower resolution by default without nomodeset and ATI performance seems to have regressed. Although I am no fan of proprietary drivers, I must note that installing the proprietary Nvidia driver has become a bit more of a hassle.
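For the /boot item above, a quick pre-flight check is trivial to do by hand or to script; this is only a sketch, and any particular "enough free space" figure should be treated as an assumption rather than an official requirement:

  # Rough sketch: eyeball free space on /boot before kicking off preupgrade.
  # How much is "enough" is exactly the open question in this thread, so treat
  # any threshold you pick as an assumption, not an official number.
  df -h /boot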
Bottom line: Pretty good job, overall
Rahul
On Thu, 2009-11-26 at 01:23 +0530, Rahul Sundaram wrote:
My personal experiences in addition to the impressions from reading all the end user forums, mailing lists and news sites is that Fedora 12 is a solid release. I request you to explicitly get feedback from atleast fedora-list and http://fedoraforum.org from end users directly. Adam and James, are you subscribed to fedora-list and keeping track of the discussions there?
I'm not. I just don't have time for it. I'm relying on you for that bit :) Sorry - I had to pick either the mailing list or the forums to follow; I picked the forums.
I think, there is room for improvement obviously:
- IMO, RCs need to be advertised loudly. We need all the testing we can
get, and more alpha or beta snapshots would help as well, I think.
We have this discussion every release. The problem is that RC testing is a very short phase, during which multiple candidates get rolled. By the time anyone outside of the actual physical network on which they're located is done downloading RC1, we've probably spun RC3 already.
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
- A lot more users are trying preupgrade, and QA needs a strong focus on the
upgrade story, including test days directed towards it. The small /boot is a major issue, and another common bug (not listed in the wiki but which I think needs to be added) is https://bugzilla.redhat.com/show_bug.cgi?id=538118, which makes the problem worse. We really need to make sure Anaconda creates a bigger /boot for Fedora 13, and that preupgrade is explicitly tested well and has solid workarounds for the small /boot case.
I agree, good point. Our current preupgrade testing is a bit perfunctory.
- KMS is still flaky in some cases. In particular, a few Intel users
seem to be reporting lower resolution by default without nomodeset
This is 'normal' if your EDID isn't detected: the fallback resolutions for the KMS drivers are lower than the ones for the old UMS path, I think (KMS fallback is 800x600 for most of the drivers, UMS fallback was often 1024x768). I notice the latest kernel has the ATI KMS fallback bumped to 1024x768 to match the UMS fallback, though I'm not sure if this is entirely the right decision; I think 800x600 is actually safe on some displays where 1024x768 isn't.
and ATI performance seems to have regressed.
This is known and being worked on.
Although I am no fan of proprietary drivers, I must note that installing the proprietary Nvidia driver has become a bit more of a hassle.
I've been tracking that and working with RPM Fusion to mitigate it. It's not something Fedora could really have done much about and kept in line with our policies. There are two issues - NVIDIA does something SELinux considers evil and blocks, and the nvidia kernel module conflicts with the nouveau one.
I believe the onus is on NVIDIA to fix both of these. The SELinux blocking is legitimate and genuinely points to the NVIDIA driver doing something it really ought not to do, I believe, and it's up to NVIDIA to make it not do that any more. We can hardly relax the SELinux default policies to allow something bad just because the NVIDIA proprietary driver wants to do it.
On the nouveau conflict, we can't suppress the nouveau module loading by default; it's needed for KMS. It's up to either NVIDIA or RPM Fusion to handle this smoothly. It's somewhat tricky to suppress the nouveau module entirely; you have to re-generate the initrd after blacklisting it, otherwise it gets loaded at initrd time (to do KMS-backed graphical boot). It's either up to RPM Fusion to do this in the packages, or up to NVIDIA to make the module somehow co-operate with the nouveau module (perhaps by unloading it and loading the nvidia module if you try to start X with the nvidia driver).
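For what it's worth, the 'blacklist plus re-generate the initrd' dance looks roughly like this; a sketch only, run as root, and the modprobe.d file name is my own choice rather than anything RPM Fusion ships:

  # Sketch: keep nouveau from loading at all.
  echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-nouveau.conf
  # Rebuild the initramfs so nouveau isn't pulled in at initrd time either
  # (assumes the F12-style dracut initramfs naming).
  dracut --force /boot/initramfs-$(uname -r).img $(uname -r)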
I suppose what could be improved here is better communication between Fusion and NVIDIA so NVIDIA is made aware of these issues in advance of a Fedora release, but that's not a topic for Fedora :)
On Wed, 2009-11-25 at 15:02 -0600, Michael Cronenworth wrote:
On 11/25/2009 02:15 PM, Adam Williamson wrote:
and the nvidia kernel module conflicts with the nouveau one.
There is no conflict. I'm not sure why RPMFusion is recommending rebuilding your initramfs. It is completely unnecessary.
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
On 11/25/2009 03:07 PM, Adam Williamson wrote:
I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
I have three different systems all with nVidia hardware that use the nouveau and nvidia module loaded simultaneously. The nvidia driver is specified in my xorg.conf. X loads. 3D apps run. Compiz runs.
What would you like to see? Screenshots?
On Wed, 2009-11-25 at 15:52 -0600, Michael Cronenworth wrote:
On 11/25/2009 03:07 PM, Adam Williamson wrote:
I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
I have three different systems all with nVidia hardware that use the nouveau and nvidia module loaded simultaneously. The nvidia driver is specified in my xorg.conf. X loads. 3D apps run. Compiz runs.
What would you like to see? Screenshots?
dmesg might be interesting. I wonder if it depends on the chipset.
On Wed, Nov 25, 2009 at 3:56 PM, Adam Williamson awilliam@redhat.com wrote:
On Wed, 2009-11-25 at 15:52 -0600, Michael Cronenworth wrote:
On 11/25/2009 03:07 PM, Adam Williamson wrote:
I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
I have three different systems all with nVidia hardware that use the nouveau and nvidia module loaded simultaneously. The nvidia driver is specified in my xorg.conf. X loads. 3D apps run. Compiz runs.
What would you like to see? Screenshots?
dmesg might be interesting. I wonder if it depends on the chipset.
I could not get X to load until applying both the suggestions on the rpmfusion FAQ on my AMD 770 chipset with a Geforce 7600GT PCI-E card.
Richard
On 11/25/2009 03:56 PM, Adam Williamson wrote:
dmesg might be interesting. I wonder if it depends on the chipset.
Setting nomodeset on the kernel line was required, but the modules are co-loaded.
The attached is from my laptop.
I've used nouveau and nvidia with F11 as well, but KMS was not defaulted on in F11 so "nomodeset" was not required at the time. Now it is.
If KMS is left on, you will see nvidia module messages such as "blah blah blah in use is nvidiafb loaded?" etc.
2009/11/25 Adam Williamson awilliam@redhat.com:
On Wed, 2009-11-25 at 15:02 -0600, Michael Cronenworth wrote:
On 11/25/2009 02:15 PM, Adam Williamson wrote:
and the nvidia kernel module conflicts with the nouveau one.
There is no conflict. I'm not sure why RPMFusion is recommending rebuilding your initramfs. It is completely unnecessary.
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
I think it's possible, but it's random. In some cases it works, in others not.
That said, I saw that it could be possible to use nouveau.modeset=0 on the grub line. I never tried that solution, but the prospect of having two different drivers loaded at the same time for one piece of hardware just scared me.
IIRC nvidia-installer (not nvidia.ko) has a detection routine to know if nvidiafb is loaded, because it has produced well-known incompatibilities in the past. I wonder if they have updated it for nouveau.
Nicolas Chauvet (kwizart)
On Nov 25, 2009, Adam Williamson awilliam@redhat.com wrote:
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware.
And even the “nv” driver fails to work, with a similar complaint.
I had to blacklist nouveau on an ancient (8YO) notebook, because with nouveau X wouldn't work on it. The nouveau X driver failed in some way I can't remember, and nv complained about the hardware being controlled by another driver already.
On the happier side, the deblobbed nouveau driver in Linux-libre Freed-ora builds works nicely on another (5YO IIRC) notebook, so it can now change resolutions and control independently its own LCD and the TV it's connected to through a VGA cord. At last! :-)
Now if only we managed to reverse engineer (or get sources for) those blobs allegedly released under GPLv2, we'd have another hardware supplier with Free 3D acceleration!
Adam Williamson wrote:
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
Me. As per google, add "nouveau.modeset=0 vga=0x318" to the end of the kernel boot line.
$ lsmod | grep '^(nouveau|nvidia)'
nvidia   8096992  34
nouveau   568932   0
Doug
On Fri, 2009-12-11 at 12:00 -0500, Douglas Kilpatrick wrote:
Adam Williamson wrote:
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
Me. As per google, add "nouveau.modeset=0 vga=0x318" to the end of the kernel boot line.
$ lsmod | grep '^(nouveau|nvidia)'
nvidia   8096992  34
nouveau   568932   0
you're a bit late to the party. =) we've since confirmed that the two can co-exist in some cases but not others. we're not sure the exact intersection of module configuration / hardware setup that determines when it works and when it doesn't, though.
On 12/11/09 10:17, Adam Williamson wrote:
On Fri, 2009-12-11 at 12:00 -0500, Douglas Kilpatrick wrote:
Adam Williamson wrote:
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
Me. As per google, add "nouveau.modeset=0 vga=0x318" to the end of the kernel boot line.
$ lsmod | grep '^(nouveau|nvidia)'
nvidia   8096992  34
nouveau   568932   0
you're a bit late to the party. =) we've since confirmed that the two can co-exist in some cases but not others. we're not sure the exact intersection of module configuration / hardware setup that determines when it works and when it doesn't, though.
Though the nouveau module is loaded, nothing's using it (as evidenced by the 0 in the final column). nouveau.modeset=0 keeps the kernel DRM from touching the hardware via nouveau, as does removing rhgb from the boot line. vga= explicitly uses the bios(?) vga support, again sidestepping nouveau. When Xorg starts, configured with nvidia, it's the first driver to access the device (not counting the VGA abstraction). nouveau may have scanned the bus and knows that it could run, but nothing has opened the device via nouveau. Coexistence is possible.
On Fri, 2009-12-11 at 11:05 -0700, Bob Arendt wrote:
On 12/11/09 10:17, Adam Williamson wrote:
On Fri, 2009-12-11 at 12:00 -0500, Douglas Kilpatrick wrote:
Adam Williamson wrote:
Uh? If the nouveau kernel module is loaded, the nvidia kernel module refuses to load, complaining that another module is in control of the hardware. I've seen multiple people report this. I haven't seen anyone who's got the NVIDIA driver working if the nouveau module is loaded.
Me. As per google, add "nouveau.modeset=0 vga=0x318" to the end of the kernel boot line.
$ lsmod | grep '^(nouveau|nvidia)'
nvidia   8096992  34
nouveau   568932   0
you're a bit late to the party. =) we've since confirmed that the two can co-exist in some cases but not others. we're not sure the exact intersection of module configuration / hardware setup that determines when it works and when it doesn't, though.
Though the nouveau module is loaded, nothing's using it (as evidenced by the 0 in the final column). nouveau.modeset=0 keeps the kernel DRM from touching the hardware via nouveau, as does removing rhgb from the boot line. vga= explicitly uses the bios(?) vga support, again sidestepping nouveau. When Xorg starts, configured with nvidia, it's the first driver to access the device (not counting the VGA abstraction). nouveau may have scanned the bus and knows that it could run, but nothing has opened the device via nouveau. Coexistence is possible.
I think we'd identified some cases where nouveau.modeset=0 is enough to make NVIDIA work and others where it isn't. I hadn't considered the vga= case yet.
There is another factor to consider: there are _two_ stages at boot where nouveau can get loaded. It can get loaded via dracut as part of the initramfs, or by udev via modaliases later in boot. If you want to completely suppress it, you have to pass a dracut kernel parameter to stop it getting loaded during initramfs ('rdblacklist nouveau') and also a /etc/modprobe.d file that blacklists it to stop it getting loaded via modaliases.
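Spelled out, the two knobs end up looking something like this (a sketch; double-check the exact parameter spelling against the dracut documentation for the release):

  # 1) On the kernel line in grub.conf, so dracut doesn't load it from the initramfs:
  #      rdblacklist=nouveau
  # 2) In a file under /etc/modprobe.d/, so udev doesn't load it later via modaliases:
  #      blacklist nouveau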
I'm a bit lost at the intersection of all these various factors and am not sure we've hit on a definitive answer to what's the minimum you need to do to reliably make sure the nvidia module can load and do what it needs to do...
On Wed, Nov 25, 2009 at 12:15:30 -0800, Adam Williamson awilliam@redhat.com wrote:
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
Would providing detailed documentation for people to build these images using a small amount of data from releng and the rest from public (or local) mirrors be worth the effort to set up? This is a higher bar than dealing with a complete image and there may not be enough people who take advantage of it to be worth the effort. I am also not sure if the process is repeatable so as to get bit for bit accuracy from private spins. But it does seem there should be a way to pull most of the data for the image from mirrors rather than from releng's server.
On Thu, 2009-11-26 at 01:03 -0600, Bruno Wolff III wrote:
On Wed, Nov 25, 2009 at 12:15:30 -0800, Adam Williamson awilliam@redhat.com wrote:
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
Would providing detailed documentation for people to build these images using a small amount of data from releng and the rest from public (or local) mirrors be worth the effort to set up? This is a higher bar than dealing with a complete image and there may not be enough people who take advantage of it to be worth the effort. I am also not sure if the process is repeatable so as to get bit for bit accuracy from private spins. But it does seem there should be a way to pull most of the data for the image from mirrors rather than from releng's server.
Jesse could answer that better; I don't know how secret sauce-y the image build process is.
It is always worth posting the standard disclaimer we try and attach to all RC-y stuff: the only thing you need the RC builds for is testing the actual DVD/multi-CD composes, basically. If you just want to test the bits, you can use a nightly live image or Rawhide install. If you want to test the install process, you can do a network install from Rawhide, which - as you surmise - gets you all the same bits that are in the RCs, basically.
I did do quite a lot of spinning my own *live* builds during the late stage of the release testing process, using the official desktop kickstart and the hourly repositories, which meant I could get what was basically an 'RC' live spin locally quite easily, using cached packages. I didn't try building my own DVD compose, though.
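For anyone curious, those local spins were nothing fancier than livecd-creator pointed at the stock desktop kickstart; a rough sketch (the kickstart path assumes the spin-kickstarts package is installed, and the label and cache directory are just examples):

  # Sketch: build a local desktop live image from the official kickstart.
  livecd-creator --config=/usr/share/spin-kickstarts/fedora-livecd-desktop.ks \
                 --fslabel=F12-local-test --cache=/var/cache/live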
On Thu, Nov 26, 2009 at 00:38:19 -0800, Adam Williamson awilliam@redhat.com wrote:
It is always worth posting the standard disclaimer we try and attach to all RC-y stuff: the only thing you need the RC builds for is testing the actual DVD/multi-CD composes, basically. If you just want to test the bits, you can use a nightly live image or Rawhide install. If you want to test the install process, you can do a network install from Rawhide, which - as you surmise - gets you all the same bits that are in the RCs, basically.
Does this mean that if we don't have a way for people to remotely generate exact bit-for-bit copies, then there isn't any point in trying to do something like this for RC testing?
I did do quite a lot of spinning my own *live* builds during the late stage of the release testing process, using the official desktop kickstart and the hourly repositories, which meant I could get what was basically an 'RC' live spin locally quite easily, using cached packages. I didn't try building my own DVD compose, though.
I do local builds of the games spin myself, but haven't been doing a lot of extensive testing with those images.
On 11/26/2009 07:03 AM, Bruno Wolff III wrote:
On Wed, Nov 25, 2009 at 12:15:30 -0800, Adam Williamson awilliam@redhat.com wrote:
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
Would providing detailed documentation for people to build these images using a small amount of data from releng and the rest from public (or local) mirrors be worth the effort to set up? This is a higher bar than dealing with a complete image and there may not be enough people who take advantage of it to be worth the effort. I am also not sure if the process is repeatable so as to get bit for bit accuracy from private spins. But it does seem there should be a way to pull most of the data for the image from mirrors rather than from releng's server.
That won't work; we need to make sure all the testers are testing the same bits, hence it's best that we create and hand out the images...
JBG
On Nov 26, 2009, at 2:41, "Jóhann B. Guðmundsson" johannbg@hi.is wrote:
On 11/26/2009 07:03 AM, Bruno Wolff III wrote:
On Wed, Nov 25, 2009 at 12:15:30 -0800, Adam Williamson awilliam@redhat.com wrote:
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
Would providing detailed documentation for people to build these images using a small amount of data from releng and the rest from public (or local) mirrors be worth the effort to set up? This is a higher bar than dealing with a complete image and there may not be enough people who take advantage of it to be worth the effort. I am also not sure if the process is repeatable so as to get bit for bit accuracy from private spins. But it does seem there should be a way to pull most of the data for the image from mirrors rather than from releng's server.
That won't work; we need to make sure all the testers are testing the same bits, hence it's best that we create and hand out the images...
That's what beta, which used to be named preview, is for. It is the image we sync out to the world. The RCs come shortly after and fix anything critical found in the beta. RCs are fast and furious; there's no chance of mirroring them and waiting days for feedback.
-- Jes
On 11/26/2009 07:59 AM, Jesse Keating wrote:
On Nov 26, 2009, at 2:41, "Jóhann B. Guðmundsson" johannbg@hi.is wrote:
On 11/26/2009 07:03 AM, Bruno Wolff III wrote:
On Wed, Nov 25, 2009 at 12:15:30 -0800, Adam Williamson awilliam@redhat.com wrote:
even despite that, public testing of RCs can _sometimes_ be useful, but the problem is that if we publicise them any more than they're already publicised, the single server on which they're located will bog down and stop people who really need them from getting them fast enough. And given the time constraints, it's practically impossible to mirror or torrent them usefully.
Would providing detailed documentation for people to build these images using a small amount of data from releng and the rest from public (or local) mirrors be worth the effort to set up? This is a higher bar than dealing with a complete image and there may not be enough people who take advantage of it to be worth the effort. I am also not sure if the process is repeatable so as to get bit for bit accuracy from private spins. But it does seem there should be a way to pull most of the data for the image from mirrors rather than from releng's server.
That won't work; we need to make sure all the testers are testing the same bits, hence it's best that we create and hand out the images...
That's what beta, which used to be named preview, is for. It is the image we sync out to the world. The RCs come shortly after and fix anything critical found in the beta. RCs are fast and furious; there's no chance of mirroring them and waiting days for feedback.
Do we say this directly anywhere in our docs... spell out the purpose for Alpha and Beta?
I'm trying to pull more of this stuff together so we have a canonical place to point people to explaining how our release processes work.
John
On Mon, 2009-11-30 at 16:06 -0800, John Poelstra wrote:
Do we say this directly anywhere in our docs... spell out the purpose for Alpha and Beta?
I doubt it :/
I'm trying to pull more of this stuff together so we have a canonical place to point people to explaining how our release processes work.
That work, as always, is very much appreciated!
On Thu, Nov 26, 2009 at 01:23:40AM +0530, Rahul Sundaram wrote:
On 11/26/2009 12:59 AM, James Laska wrote:
not really responsible for it, the team does need to do all it can to avoid anything like this in the future. Signing Rawhide packages automatically would have caught this. Not many users in the forum or on fedora-list complained about it, however.
(Off topic, Rahul, your email is set to reply to you AND the list--is that what you prefer, or is it better to take you off it and just send to the list?)
On topic--the lack of comment in the forum was due to the staff basically saying, very quickly, this is being discussed here, here, and here, so please discuss it there, and closing down any threads after that.
I assume the lack of comment here (on this list), was also because most of the action was going on in the bug reports, and of course, slashdot. :)
One comment--James, hopefully it's alright to intersperse this with Rahul's comments; if not, I do apologize. Although it's not much of a factor in the US, apparently bandwidth limits are quite common in Australia--there were several people who seemed rather disgruntled about the somewhat confusing SHA1/SHA256 sum issue. Although it's now clear in the release notes, those who verify, as a rule, have done it before with other distributions, and are used to downloading an ISO, looking at the sums file, and seeing that it's md5 (if anyone's still using it), or SHAwhatever. Upon getting what seems to be a bad checksum, they download again, which in fairness, if you get one bad checksum, is probably the logical thing to do.
Few are going to look at the notes on verification--so I think it's really worth making sure that it's clearly marked in the checksum file. Not because it's so difficult to find the answer, but because I really think the natural thing to do is to try to download a second time before even thinking something is off, and then checking. For folks with bandwidth issues, it is the sort of thing that aggravates people.
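For what it's worth, the actual check is short enough that it could be spelled out right in the checksum file header; something along these lines, where the file names follow the usual Fedora naming and are examples only:

  # Sketch: verify a downloaded ISO against the published SHA256 values.
  sha256sum -c Fedora-12-i386-CHECKSUM
  # or, for a single image, compare the output by hand against the CHECKSUM file:
  sha256sum Fedora-12-i386-DVD.iso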
Lastly, for me, with my relatively simple setups, works quite well, good job, people. :)
On 11/26/2009 02:19 AM, Scott Robbins wrote:
(Off topic, Rahul, your email is set to reply to you AND the list--is that what you prefer, or is it better to take you off it and just send to the list?)
Yep. The reply-to is set for my personal preference.
On topic--the lack of comment in the forum was due to the staff basically saying, very quickly, this is being discussed here, here, and here, so please discuss it there, and closing down any threads after that.
I am aware, but even considering that, the amount of discussion was low.
I assume the lack of comment here (on this list), was also because most of the action was going on in the bug reports, and of course, slashdot. :)
Possibly. It just seemed a bit off track considering the history of such controversies. It's no particular reflection on QA, and I think that matter got resolved pretty quickly once it was exposed, but the key is to catch such changes *before* the release. At least in part, that falls within QA's responsibility.
Rahul
On Thu, 2009-11-26 at 02:29 +0530, Rahul Sundaram wrote:
such changes *before* the release. At least in part, that falls within QA's responsibility.
Indeed. However, as discussed at the last QA meeting, we have some prerequisites to meet before we can do any kind of meaningful security testing, which is what I've started that fedora-devel-list thread about.
I want to thank everyone involved for the help received and for doing such a good job on F12. I've installed F12 on 3 machines so far, and it has gone easily and I haven't had to go back to F11 for anything so far.
Specific points:
I've been assuming there was a problem using SATA disks, with them not showing up in Anaconda, until someone told me about the dregs of a dmraid array on disk causing Anaconda not to show it, and using the nodmraid boot option to get around it. I never would have guessed...
I've learned it is necessary to install foomatic-db to have my particular printer show up in system->administration->printing
I've learned it is necessary to install control-center-extra to get system->preferences->windows.
I've learned that checking dialup networking support at software customization time fails to install kudzu, which is necessary for system->administration->network to create a dialup connection. I believe somebody is working on this to either make kudzu unnecessary or to have it installed when dialup networking support is checked.
I had the problem setting display resolution, but I had already been through that in F11 and had developed an xorg.conf that I could carry over and make it work. I don't know if this means something is broken in X so it can't read the information from the monitor, or if something is broken in the monitor or in the display hardware. system->preferences->display says the monitor is unknown.
And those are the only real problems I had. My gripe about unasked-for language support seems to have stirred up a hornet's nest - well, at least you have my thanks for giving it your attention.
Overall, this is the easiest Fedora version upgrade I have ever done; and when I did have problems I got quick and correct answers from this list that I was not getting anywhere else. (Maybe that suggests we should have a current-version-install ombudsman list, where anybody can present questions but only authoritative people can give answers.)
Jim
On Wed, 2009-11-25 at 15:37 -0600, Jim Haynes wrote:
I've been assuming there was a problem using SATA disks, with them not showing up in Anaconda, until someone told me about the dregs of a dmraid array on disk causing Anaconda not to show it, and using the nodmraid boot option to get around it. I never would have guessed...
This is intentional behaviour on the part of the installer and is documented, I believe. Hence there's not much more QA can do about this.
I've learned it is necessary to install foomatic-db to have my particular printer show up in system->administration->printing
This has been documented in common bugs, but it does seem like something we ought to have caught - thanks for bringing it up. Perhaps we need more testing of common basic peripherals like printers.
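For the record, the workaround itself is a one-line install (a sketch, assuming the stock yum setup):

  # Sketch: pull in the extra printer database Jim mentions.
  su -c 'yum install foomatic-db'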
I've learned it is necessary to install control-center-extra to get system->preferences->windows.
This is an intentional change on the part of the desktop team and is documented in the release notes, so no QA action possible here.
I've learned that checking dialup networking support at software customization time fails to install kudzu, which is necessary for system->administration->network to creat a dialup connection. Believe somebody is working on this to either make kudzu unnecessary or to have it installed when dialup networking support is checked.
Well, this has been reported, but the correct method here is to use NetworkManager rather than system-config-network to set up the dial-up connection, I believe. It's something that's pretty niche for QA to have caught (you need to be using dial-up *and* use s-c-n rather than NetworkManager).
I had the problem setting display resolution, but I had already been through that in F11 and had developed an xorg.conf that I could carry over and make it work. I don't know if this means something is broken in X so it can't read the information from the monitor, or if something is broken in the monitor or in the display hardware. system->preferences->display says the monitor is unknown.
Can't tell without more data - file a bug report with the appropriate info as described in https://fedoraproject.org/wiki/How_to_debug_Xorg_problems . We have a good process for filing bugs, good documentation of how to file them (see link :>) and a decent triage process for handling these once they're filed, so I think no further QA action is needed here.
Thanks for the feedback!
On Wed, 25 Nov 2009, Adam Williamson wrote:
On Wed, 2009-11-25 at 15:37 -0600, Jim Haynes wrote:
I've been assuming there was a problem using SATA disks, with them not showing up in Anaconda, until someone told me about the dregs of a dmraid array on disk causing Anaconda not to show it, and using the nodmraid boot option to get around it. I never would have guessed...
This is intentional behaviour on the part of the installer and is documented, I believe. Hence there's not much more QA can do about this.
I hope it's documented, but I sure didn't know where to go to look for it. Partly because this was my first-ever experience with an SATA disk, and I had no idea the disk had dregs of a dmraid array on it (nor how to get rid of them). I had first run across the problem in F11, then tested that F10 would see the disk OK, and set the problem aside until F12 was imminent. (And then you go to a Best Buy or similar store looking at disks and there is a poster saying SATA disks only work with Windows, so I had further reason to be misled into thinking there was some problem with them.)
Well, this has been reported, but the correct method here is to use NetworkManager rather than system-config-network to set up the dial-up connection, I believe. It's something that's pretty niche for QA to have caught (you need to be using dial-up *and* use s-c-n rather than NetworkManager).
I've never learned how to use NetworkManager, and in fact I don't know where to go to learn how. The times it has been turned on it has done what I don't want done. For example with Ethernet I use fixed IP addresses on my local LAN, and NetworkManager apparently doesn't like that - maybe it wants me to be running a DHCP server. And when it has been turned on for wireless LAN I don't get a usable connection, and I have to turn it off and bring up the wireless by hand. So I am not convinced NetworkManager is my friend. My hard case is my laptop, where I use Ethernet when I'm at home, and wireless when I travel, if it's available, and dialup when wireless is not available. So I know what I want depending on the situation; NetworkManager does not.
I had the problem setting display resolution, but I had already been
Can't tell without more data - file a bug report with the appropriate info as described in https://fedoraproject.org/wiki/How_to_debug_Xorg_problems . We have a good process for filing bugs, good documentation of how to file them (see link :>) and a decent triage process for handling these once they're filed, so I think no further QA action is needed here.
I agree, but until you mentioned it I didn't know about that wiki page - maybe I don't spend enough time just surfing the net to see what kind of help is out there. I'll try that when I get home (on the road with the laptop right now) and maybe that will help.
On Wed, 2009-11-25 at 18:05 -0600, Jim Haynes wrote:
On Wed, 25 Nov 2009, Adam Williamson wrote:
On Wed, 2009-11-25 at 15:37 -0600, Jim Haynes wrote:
I've been assuming there was a problem using SATA disks, with them not showing up in Anaconda, until someone told me about the dregs of a dmraid array on disk causing Anaconda not to show it, and using the nodmraid boot option to get around it. I never would have guessed...
This is intentional behaviour on the part of the installer and is documented, I believe. Hence there's not much more QA can do about this.
I hope it's documented, but I sure didn't know where to go to look for it. Partly because this was my first-ever experience with an SATA disk, and I had no idea the disk had dregs of a dmraid array on it (nor how to get rid of them). I had first run across the problem in F11, then tested that F10 would see the disk OK, and set the problem aside until F12 was imminent. (And then you go to a Best Buy or similar store looking at disks and there is a poster saying SATA disks only work with Windows, so I had further reason to be misled into thinking there was some problem with them.)
Well, this has been reported, but the correct method here is to use NetworkManager rather than system-config-network to set up the dial-up connection, I believe. It's something that's pretty niche for QA to have caught (you need to be using dial-up *and* use s-c-n rather than NetworkManager).
I've never learned how to use NetworkManager, and in fact I don't know where to go to learn how. The times it has been turned on it has done what I don't want done. For example with Ethernet I use fixed IP addresses on my local LAN, and NetworkManager apparently doesn't like that - maybe it wants me to be running a DHCP server.
I'm not sure where it's documented, but I believe you can use NetworkManager in a non-dhcp static IP network. Right-click on the nm-applet icon and select 'Edit connections...', or start the application 'nm-connection-editor'.
From there, create a new wired network connection and supply the static IP information.
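For reference, the same static setup can also be expressed in a traditional ifcfg file that NetworkManager will pick up; a minimal sketch, assuming the interface is eth0 and using made-up addresses:

  # /etc/sysconfig/network-scripts/ifcfg-eth0 -- sketch; addresses are examples
  DEVICE=eth0
  ONBOOT=yes
  NM_CONTROLLED=yes
  BOOTPROTO=none
  IPADDR=192.168.1.10
  NETMASK=255.255.255.0
  GATEWAY=192.168.1.1
  DNS1=192.168.1.1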
<snip>
Thanks, James
It seems to me that this time there are way too many "unupgradable" packages. I mean by this that when upgrading from a maintained Fedora 10 or 11 installation, packages "on hand" have a higher version than even the _updates_ for Fedora 12 and are turned into pseudo-orphans.
With 'package-cleanup --problems' broken (see https://bugzilla.redhat.com/show_bug.cgi?id=541551), that adds extra excitement, and some of these "leftovers" break due to missing dependencies.
Here are some of these I was quickly made aware of: alsa-plugins-pulseaudio gnumeric iptstate iw libotf libnetfilter_conntrack libvolume_id sos tetex-elsevier tigervnc
and most likely trawling through bugzilla will bring more.
Interestingly enough, 'yum downgrade ...' does not work for the tigervnc packages, and at least tigervnc-server breaks due to a wrong version of the openssl libraries.
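For anyone wanting to spot these leftovers on their own system, yum-utils has a quick check; a sketch:

  # Sketch: list installed packages that no longer exist in any enabled repo
  # (the likely "pseudo-orphans" after an upgrade).
  package-cleanup --orphans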
Michal
On 11/26/2009 03:29 AM, James Laska wrote:
Greetings folks,
Whether you call it a post-mortem, retrospective or lessons learned ... the end result is the same. I'd like to collect thoughts on how good/bad of a job the QA group did in planning and testing the Fedora 12 release. In keeping with the release-wide retrospective from Fedora 11 [1], feel free to share any wishlist items as well.
I've started the discussion on the wiki at https://fedoraproject.org/wiki/Fedora_12_QA_Retrospective.
Adding your thoughts is easy ...
 * Edit the wiki directly (instructions provided for ~anonymous feedback)
 * Or, reply to this mail (I'll collect feedback and add to the wiki)
Over the next week, I plan to organize any feedback and discuss the highlights during an upcoming QA team meeting. The goal will be to prioritize the pain points and use as a basis for defining objectives for Fedora 13.
Thanks for your feedback! James
[1] https://fedoraproject.org/wiki/Fedora_11_Retrospective_Notes
Sorry for replying to this mail so late. The following is my feedback:
Things that went well:
1. The schedule of Fedora 12 Quality Tasks was accurate; from it, we knew exactly when to start the install testing.
2. The ticket process to request a compose or media is very convenient and fast.
3. The number of people who participate in install testing is increasing.
Could have been better:
1. Install testing on the ppc platform was not sufficient; I mean some cases could not be executed, compared with i386/x86_64.
2. Download speed for the test media is slow, so people are unwilling to download a DVD to test. Could we put it on the mirrors, or think about another way to solve this problem?
3. The install time is very long; most of the time is spent in the package-install stage, but most of the bugs do not occur there. Can we build a CD which is very similar to the Live CD but is just for install testing, installing only GNOME and other basic packages? That way we can still find anaconda bugs, but the size of the media would be less than 700M. Or we could add this function to the Live CD: an option to start the install at boot time, rather than after logging in to the desktop as the Live CD does now. (A rough sketch of such a minimal package set follows this list.)
4. Bug maintenance takes a lot of time. Can we write bug-reporting instructions describing which logs need to be attached when reporting a bug against a specific component, or could this be built into Bugzilla, so that when a component is selected, certain logs/information are recommended for attachment? This would save communication time between developers and testers: at least some necessary information would not go missing because of testers' carelessness, and the time to reproduce would be reduced when information is missing.
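As a rough illustration of the minimal install-test media suggested in point 3 above, the package payload could be described by a stripped-down kickstart along these lines; this is only a sketch, and the group names and options are assumptions that would need checking against the real comps data:

  # minimal-install-test.ks -- sketch only; group names are assumptions
  install
  lang en_US.UTF-8
  keyboard us
  timezone US/Eastern
  rootpw --plaintext test
  bootloader --location=mbr
  clearpart --all --initlabel
  autopart

  %packages --nobase
  @core
  @base-x
  @gnome-desktop
  %end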
Wishlist:
1. Have more methods and broader ways of publicizing our testing, so that more people know about it.
Thanks Liam