https://fedoraproject.org/wiki/Changes/Sqlite_Rpmdb
== Summary ==
Change format of the RPM database from Berkeley DB to a new Sqlite format.
== Owner ==
* Name: [[User:pmatilai|Panu Matilainen]], [[User:ffesti|Florian Festi]]
* Email: pmatilai@redhat.com, ffesti@redhat.com
== Detailed Description ==
The current rpm database implementation is based on Berkeley DB 5.x, a version that has been unmaintained upstream for several years now. Berkeley DB 6.x is license-incompatible, so moving to that is not an option. In addition, the existing rpmdb implementation is notoriously unreliable, as it is not transactional and has no other means of detecting inconsistencies either.
Changing to a more sustainable database implementation is long overdue. We propose changing the default rpmdb format to the new sqlite-based implementation. Support for the current BDB format will be retained in Fedora 33 and phased out to read-only support in Fedora 34.
== Benefit to Fedora ==
* A far more robust rpm database implementation
* Getting rid of the Berkeley DB dependency in one of the core components
== Scope ==
* Proposal owners:
** Once [[Changes/RPM-4.16|RPM 4.16]] lands and passes initial shakedown, change the default rpmdb configuration to sqlite
** Address any bugs and issues in the database backend found by the wider testing base
** Help other developers to address Berkeley DB dependencies
* Other developers:
** Test for hidden Berkeley DB dependencies in other software, address them as found and needed
* Release engineering: [https://pagure.io/releng/issue/9308 #9308]
* Policies and guidelines: Policies and guidelines are not affected
* Trademark approval: N/A (not needed for this Change)
== Upgrade/compatibility impact ==
=== Upgrading ===
* Ability to upgrade is not affected
* After upgrade completes, manual action (rpmdb --rebuilddb) will probably be needed to convert to sqlite. Alternatively user can change configuration to stay on BDB.
=== Compatibility ===
* Container/chroot use-cases will be affected: older rpm versions will be unable to query/manipulate the rpmdb from outside the chroot
* Koji/COPR may need to override the database format (back to) BDB for the time being
== How To Test ==
* Rpmdb gets thoroughly exercised as a matter of normal system operation, performing installs, updates, package builds etc
* Of specific interest here is torture testing: forcibly killing rpm in various stages of execution - database should stay consistent and operational (other system state is out of scope)
* Test database conversions from one backend to another (rpmdb --rebuilddb --define "_db_backend <backend>")
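To make the torture-testing and conversion bullets above concrete, here is a rough shell sketch (run as root on a throwaway system or container only; the package name is a placeholder):

    # Start a real transaction, then kill rpm mid-flight
    rpm -Uvh ./some-package.rpm &        # hypothetical package
    sleep 1
    pkill -9 -x rpm                      # forcibly kill rpm during the transaction
    rpm -qa | wc -l                      # queries should still work afterwards

    # Convert the database from one backend to another and back
    rpmdb --rebuilddb --define "_db_backend sqlite"
    rpmdb --rebuilddb --define "_db_backend bdb"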
== User Experience ==
* In normal operation, users should see little or no change
* Behavior in error situations is much more robust: a forcibly killed transaction no longer causes database inconsistency or corruption
== Dependencies ==
* This change depends on [[Changes/RPM-4.16|RPM 4.16]]; support for the sqlite rpmdb is not present in older versions
* RPM will grow a new dependency on sqlite-libs
* Technically the rpmdb format is an internal implementation detail of RPM and the data is only accessible through the librpm API, but some software is making assumptions about the format and/or, in particular, the file naming. These are being tracked at https://bugzilla.redhat.com/show_bug.cgi?id=1766120
* Upgrade tooling could/should perform an rpmdb rebuild at the end; this would be a good thing to do regardless of this change
== Contingency Plan ==
* Contingency mechanism:
** Revert the default database back to the Berkeley DB backend in the package. Running 'rpmdb --rebuilddb' on hosts is currently required to actually convert the database, but means to automate the conversion under specific conditions are being discussed upstream.
** The rpm team does not expect problems with the database backend itself, but we are aware that postponing may be needed due to infrastructure or other tooling not being ready, primarily due to the inability to access the database from older releases.
* Contingency deadline: Beta freeze
* Blocks release? Yes
== Documentation ==
* [https://rpm.org/wiki/Releases/4.16.0 RPM 4.16 release notes]
== Release Notes ==
* After upgrading from an older release, rpm operations will issue warnings about the database backend configuration not matching what's on disk. Users should run 'rpmdb --rebuilddb' at the earliest opportunity, or change the configuration to stay on the Berkeley DB backend (e.g. 'echo %_db_backend bdb > /etc/rpm/macros.db').
* The details are subject to change; the database rebuild may be done by upgrade tooling.
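In practical terms, the two options described in the release note above amount to something like the following (the macro file path is the one given in the note; as stated, details may change):

    # Either convert the database in place after the upgrade...
    sudo rpmdb --rebuilddb

    # ...or opt out and stay on the Berkeley DB backend
    echo '%_db_backend bdb' | sudo tee /etc/rpm/macros.db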
On Mon, Mar 16, 2020 at 11:24 AM Ben Cotton bcotton@redhat.com wrote:
https://fedoraproject.org/wiki/Changes/Sqlite_Rpmdb
I'm glad to *finally* see this happen, so congratulations to the RPM team for finally making this a reality! I look forward to trying this out in Rawhide as soon as possible. 😊
Also, yay, soon no more BDB... 🥳
-- 真実はいつも一つ!/ Always, there's only one truth!
On 3/16/20 6:25 PM, Neal Gompa wrote:
I'm glad to *finally* see this happen, so congratulations to the RPM team for finally making this a reality! I look forward to trying this out in Rawhide as soon as possible. 😊
FWIW, those who want an early taste, you can try my rpm-snapshot repo: https://copr.fedorainfracloud.org/coprs/pmatilai/rpm-snapshot/
I run those snapshots on my own laptop at all times so it's not supposed or expected to eat your disk or anything like that, but caveat emptor.
- Panu -
On Tue, Mar 17, 2020 at 6:06 AM Panu Matilainen pmatilai@redhat.com wrote:
On 3/16/20 6:25 PM, Neal Gompa wrote:
I'm glad to *finally* see this happen, so congratulations to the RPM team for finally making this a reality! I look forward to trying this out in Rawhide as soon as possible.
FWIW, those who want an early taste, you can try my rpm-snapshot repo: https://copr.fedorainfracloud.org/coprs/pmatilai/rpm-snapshot/
I run those snapshots on my own laptop at all times so it's not supposed or expected to eat your disk or anything like that, but caveat emptor.
I've been running the snapshots for a few days now, and it seems to be somewhat faster than BDB on my machine. Generally haven't seen any issues so far!
Though out of curiosity, have you done some performance analysis on this to show off to everyone?
On 3/20/20 9:25 PM, Neal Gompa wrote:
I've been running the snapshots for a few days now, and it seems to be somewhat faster than BDB on my machine. Generally haven't seen any issues so far!
Cool, that's the expectation.
Though out of curiosity, have you done some performance analysis on this to show off to everyone?
Not really, as performance is not what this is all about at this point. I've only really cared that it's in the ballpark with the thing it's replacing, and in many cases it ends up being a bit faster. Which is not bad at all considering it's doing per-package ACID transactions at the database level, whereas BDB most certainly is not (and is significantly slower if made to do that)
- Panu -
On Mon, Mar 16, 2020 at 11:22:47AM -0400, Ben Cotton wrote:
=== Upgrading ===
- Ability to upgrade is not affected
- After upgrade completes, manual action (rpmdb --rebuilddb) will
probably be needed to convert to sqlite. Alternatively user can change configuration to stay on BDB.
Do I understand correctly:
- without the manual step, users will remain on the old format
- with the old format, in F33 everything will still work fine, but after upgrade to F34, the database will become read-only
Why is an automatic 'rpmdb --rebuilddb' not part of upgrade plan?
Zbyszek
On 3/26/20 1:02 PM, Zbigniew Jędrzejewski-Szmek wrote:
Do I understand correctly:
- without the manual step, users will remain on the old format
- with the old format, in F33 everything will still work fine, but after upgrade to F34, the database will become read-only
Why is an automatic 'rpmdb --rebuilddb' not part of upgrade plan?
To repeat what I said in https://pagure.io/fesco/issue/2360:
I left it open on purpose (note the "probably" in there) as there would be any number of ways to achieve the rebuild with varying degrees of automation and opt-out opportunities, depending on what is actually desirable for Fedora.
One possibility could be adding a rebuild step to the dnf system-upgrade plugin; rebuilding the db after distro upgrades is not a bad idea regardless of db format changes (at least BDB performance would gradually degrade unless rebuilt every now and then). That would leave people doing the (unspeakable) distro-sync upgrade to deal with the database format manually, which might be just the right balance of freedom. Or not, I dunno. Other possibilities include planting, in the rpm package itself, a one-shot service that does the db rebuild on the next reboot and decommissions itself afterwards. Other variations certainly exist.
Suggestions welcome, just as long as you don't suggest rebuilding from rpm %posttrans :)
- Panu -
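As an illustration of the one-shot service idea above, the script run by such a unit could look roughly like this sketch; the marker file and unit name are invented for the example, since the actual mechanism is deliberately left open:

    #!/bin/sh
    # Hypothetical one-shot rpmdb conversion script; all names are illustrative.
    set -e

    marker=/var/lib/rpm/.rebuilddb-pending    # made-up flag dropped by the upgrade

    # Nothing to do if no conversion is pending.
    [ -e "$marker" ] || exit 0

    # Safe to interrupt: an aborted rebuild just means trying again on the next boot.
    rpmdb --rebuilddb

    # Decommission only after a successful rebuild.
    rm -f "$marker"
    systemctl disable rpmdb-rebuild.service || :   # hypothetical unit name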
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
To repeat what I said in https://pagure.io/fesco/issue/2360:
Hi,
thanks for quick answer and sorry for double-posting. I started reading the fesco ticket, then the change page, then the discussion here, and forgot to read the rest of the comment on the ticket. I also posted there, but I think it's better to discuss here. I'll copy my post from there here, sorry for the mess.
Suggestions welcome, just as long as you don't suggest rebuilding from rpm %posttrans :)
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
Quoting from the FESCo ticket: About the various implementation options:
- in dnf system-upgrade: this does not cover normal 'dnf --releasever=33 upgrade' upgrades (as you mentioned earlier), but it also does not cover packagekit upgrades. It'd also create a previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
No matter how it's wrapped, is the upgrade itself atomic? Having the new db built under a temporary file name and then atomically rename(2)d into place would be ideal.
Zbyszek
On Thu, Mar 26, 2020 at 7:33 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
Quoting from the FESCo ticket: About the various implementation options:
- in dnf system-upgrade: this does not cover normal 'dnf --releasever=33 upgrade' upgrades (as you mentioned earlier), but it also does not cover packagekit upgrades. It'd also create a previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
It could be a libdnf post-transaction plugin. That would apply to any mechanism of system upgrade using libdnf, either through dnf or PackageKit.
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Wouldn't the systemd-inhibit plugin automatically ensure that a rebuild action would block sleep/poweroff?
No matter how it's wrapped, is the upgrade itself atomic? Having the new db built under a temporary file name and then atomically rename(2)d into place would be ideal.
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
thanks for quick answer and sorry for double-posting. I started reading the fesco ticket, then the change page, then the discussion here, and forgot to read the rest of the comment on the ticket. I also posted there, but I think it's better to discuss here. I'll copy my post from there here, sorry for the mess.
No worries, just as long as we keep the discussion in one place :) devel works for me just fine.
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
Note that rpm will nag about the inconsistency between what's on disk and configuration until resolved one way or the other (the message could suggest --rebuilddb as well), so this wouldn't be an invisible thing.
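For reference, a rough way to see the mismatch being nagged about is to compare the configured backend with what is on disk; the on-disk file names here are an assumption based on the current backends (Packages for BDB, rpmdb.sqlite for sqlite) and may differ:

    rpm -E '%_db_backend'                           # what the configuration says
    test -e /var/lib/rpm/Packages     && echo "on disk: bdb"
    test -e /var/lib/rpm/rpmdb.sqlite && echo "on disk: sqlite"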
Quoting from the FESCo ticket: About the various implementation options:
- in dnf system-upgrade: this does not cover normal 'dnf --releasever=33 upgrade' upgrades (as you mentioned earlier), but it also does not cover packagekit upgrades. It'd also create a
And which of these upgrade paths do we actually "support", or maybe the term here is "recommend" to the average user?
This is the single biggest reason I left it so open: I got lost in the "maze of upgrade tools, all alike" years ago. There's not much point for me in devising a fancy scheme if it doesn't match what is expected in Fedora. Hence this conversation (which is good!)
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seem to remember thinking our upgrade tooling had gotten fixed at some point to perform the upgrade on the target distro's package management stack, as it would really need to, but I guess that was just a dream.
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
An interrupted database rebuild is harmless, and always has been. Just as long as the one-shot service only decommissions itself once successfully completed, there's no damage done; there will always be the next reboot.
No matter how it's wrapped, is the upgrade itself atomic? Having the new db built under a temporary file name and then atomically rename(2)d into place would be ideal.
The new database is built in another directory, and only if that completes successfully is the old directory moved out of the way and replaced with the new one. So it's as atomic as it can be.
- Panu -
On Thu, Mar 26, 2020 at 07:38:33AM -0400, Neal Gompa wrote:
It could be a libdnf post-transaction plugin. That would apply to any mechanism of system upgrade using libdnf, either through dnf or PackageKit.
That sounds interesting...
Wouldn't the systemd-inhibit plugin automatically ensure that a rebuild action would block sleep/poweroff?
Unfortunately... not. From the man page: inhibitors "may be used to block or delay system sleep and shutdown requests from the user, as well as automatic idle handling of the OS." Explicit non-interactive privileged requests override inhibitors [1,2]. This has been discussed, and I think there's general sentiment that we should have an ability to inhibit "everything", but so far nobody has pushed for a solution. A solution could be prioritized if it turns out to be required in Fedora.
[1] https://github.com/systemd/systemd/issues/2680 [2] https://github.com/systemd/systemd/issues/6644
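For what it's worth, the wrapping being discussed would look something like this; per the caveat above, a privileged non-interactive shutdown request can still override the inhibitor:

    systemd-inhibit --what=shutdown:sleep:idle \
        --who="rpmdb rebuild" --why="Converting RPM database to sqlite" \
        rpmdb --rebuilddb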
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
Zbyszek
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
Note that rpm will nag about the inconsistency between what's on disk and configuration until resolved one way or the other (the message could suggest --rebuilddb as well), so this wouldn't be an invisible thing.
OK. That's good, but I still think we should strive for a fully automatic handling of this. In particular, this message will not be visible with graphical updates.
Quoting from the FESCo ticket: About the various implementation options:
- in dnf system-upgrade: this does not cover normal 'dnf --releasever=33 upgrade' upgrades (as you mentioned earlier), but it also does not cover packagekit upgrades. It'd also create a
And which of these upgrade paths do we actually "support", or maybe the term here is "recommend" to the average user?
Both 'dnf system-upgrade' and gnome-software/packagekit.
This is the single biggest reason I left it so open: I got lost in the "maze of upgrade tools, all alike" years ago. There's not much point for me in devising a fancy scheme if it doesn't match what is expected in Fedora. Hence this conversation (which is good!)
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seem to remember thinking our upgrade tooling had gotten fixed at some point to perform the upgrade on the target distro's package management stack, as it would really need to, but I guess that was just a dream.
Relying on the target distro management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
So nowadays we have a much simpler mechanism: reboot to a special system target without most daemons running (to avoid interference during the upgrade), run the update there, reboot into the new environment. The biggest advantage is that this way we reduce the amount of "custom": we're running normal installed dnf + rpm in a normal boot environment, we just stop the boot from progressing all the way to the usual graphical environment.
I think it's fair to say that the number of bugs related to the upgrade mechanism has been greatly reduced compared to previous schemes. We still have various upgrade issues, but they are in the rpms themselves, and not in how we install them.
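For reference, the flow described above is the usual dnf system-upgrade sequence, roughly:

    sudo dnf system-upgrade download --releasever=33
    sudo dnf system-upgrade reboot    # reboots into the upgrade target, installs, then reboots again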
An interrupted database rebuild is harmless, and always has been. Just as long as the one-shot service only decommissions itself once successfully completed, there's no damage done; there will always be the next reboot.
OK, then I think this is the way to go. (A libdnf plugin as suggested elsewhere in the thread would work too, but a one-shot service seems much easier to implement and test.)
Zbyszek
On Thu, Mar 26, 2020 at 8:22 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Thu, Mar 26, 2020 at 07:38:33AM -0400, Neal Gompa wrote:
Wouldn't the systemd-inhibit plugin automatically ensure that a rebuild action would block sleep/poweroff?
Unfortunately... not. From the man page: inhibitors "may be used to block or delay system sleep and shutdown requests from the user, as well as automatic idle handling of the OS." Explicit non-interactive privileged requests override inhibitors [1,2]. This has been discussed, and I think there's general sentiment that we should have an ability to inhibit "everything", but so far nobody has pushed for a solution. A solution could be prioritized if it turns out to be required in Fedora.
[1] https://github.com/systemd/systemd/issues/2680 [2] https://github.com/systemd/systemd/issues/6644
I'm not sure it's needed with rpm 4.14+, since worst case is that you have to trigger the rebuild some other time if it's interrupted.
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
No, but I don't think that matters with the way it's implemented?
See: https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
-- 真実はいつも一つ!/ Always, there's only one truth!
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
[cutting to the chase]
OK, then I think this is the way to go. (A libdnf plugin as suggested elsewhere in the thread would work too, but a one-shot service seems much easier to implement and test.)
Yeah, it's a potential implementation of something. What we really need to discuss though is what exactly that something is.
Based on this exchange and https://pagure.io/packaging-committee/pull-request/954 comments so far, it seems to be:
"Rpm database is converted automatically on all upgrade paths, unless manual steps to opt-out are taken."
Right?
That's all right and even preferred by me.
- Panu -
On 3/26/20 2:20 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 07:38:33AM -0400, Neal Gompa wrote:
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
This is actually the first I hear about that system call which indeed seems highly useful for rpm, so thanks for the tip :)
- Panu -
On Thu, Mar 26, 2020 at 08:44:47AM -0400, Neal Gompa wrote:
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
No, but I don't think that matters with the way it's implemented?
See: https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
I think otherwise it'd be hard to do an atomic replacement when the database consists of more than one file. But looking at the code:
    xx = rename(dest, old);
    if (xx) {
        goto exit;
    }
    xx = rename(src, dest);
(dest, src, old are all single-file paths) If I'm reading this correctly, it doesn't even do atomic replacement of individual files. If the machine is rebooted between the two renames above, no database ;(
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
This is actually the first I hear about that system call which indeed seems highly useful for rpm, so thanks for the tip :)
So... does this mean that we can get #61413 a.k.a. #447156 resolved?
Zbyszek
On Thu, Mar 26, 2020 at 02:08:57PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
Well, RPM is a package manager on AIX. IBM/Redhat may want to keep AIX alive ;-)
On Thu, 26 Mar 2020 at 10:54, Tomasz Torcz tomek@pipebreaker.pl wrote:
On Thu, Mar 26, 2020 at 02:08:57PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
Well, RPM is a package manager on AIX. IBM/Redhat may want to keep AIX alive ;-)
My understanding is that it is not the only place it is used. A Linux-only version would end up being another fork. I doubt it matters as much as it did 10 or 20 years ago, but it would still be a splitting of community resources versus a growing of community resources. Not all the world can be as free as systemd :).
On Thu, Mar 26, 2020 at 11:41:56AM -0400, Stephen John Smoogen wrote:
My understanding is that it is not the only place it is used. A Linux-only version would end up being another fork. I doubt it matters as much as it did 10 or 20 years ago, but it would still be a splitting of community resources versus a growing of community resources. Not all the world can be as free as systemd :).
Well, OK, but let's consider that Linux installations are probably something like 99.9%. IMO it's totally appropriate to implement an atomic path for linux, and implement a non-atomic fallback for the systems that need that. We're not talking about anything big here, rather a ~10 line function.
Zbyszek
On 3/26/20 4:08 PM, Zbigniew Jędrzejewski-Szmek wrote:
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
No, but I don't think that matters with the way it's implemented?
See: https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
I think otherwise it'd be hard to do an atomic replacement when the database consists of more than one file. But looking at the code:
    xx = rename(dest, old);
    if (xx) {
        goto exit;
    }
    xx = rename(src, dest);
(dest, src, old are all single-file paths) If I'm reading this correctly, it doesn't even do atomic replacement of individual files. If the machine is rebooted between the two renames above, no database ;(
It *used to* replace files one by one before https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
Now it's just two rename() calls on directories, so it's worlds better and the best we can do with portable system calls. So yes, if you win the unlucky lottery and the system reboots between those two rename() calls, you can end up having to manually rename it back into place.
To put that into perspective, --rebuilddb was *much* worse for over twenty years, and I've never heard of anybody's database getting nuked because of *that*.
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
The places where rpm turns up never cease to astonish me, from OS X to FreeBSD to the old proprietary unixen (AIX was already mentioned). There's even an actively maintained OS/2 port of rpm. Seriously. To that, I have mostly had to say no.
Of course, these things can be abstracted out in portable wrappers to take advantage of newer and/or OS-specific features, but guess what, that takes time and effort. As does replacing any code.
The *at() family was added in POSIX.1-2008, and the time may finally be ripe for rpm to start requiring them without having to paper over them with wrapper calls, but even then, somebody needs to do the work. We haven't recently had any spare cycles to go chasing after stuff that could maybe be replaced with something newer just because it's ... newer. I don't recall a single rpm bug report that would've been avoided by use of the *at() stuff, so there doesn't seem to be a whole lot of benefit.
Rewriting rpm's core file management operations for more robust error handling has been on the todo list for years. When that finally happens, it'll be a good opportunity to take advantage of the newer interfaces.
This is actually the first I hear about that system call which indeed seems highly useful for rpm, so thanks for the tip :)
So... does this mean that we can get #61413 a.k.a. #447156 resolved?
No.
- Panu -
On Thu, Mar 26, 2020 at 1:11 PM Panu Matilainen pmatilai@laiskiainen.org wrote:
On 3/26/20 4:08 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 08:44:47AM -0400, Neal Gompa wrote:
On Thu, Mar 26, 2020 at 8:22 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
On Thu, Mar 26, 2020 at 07:38:33AM -0400, Neal Gompa wrote:
On Thu, Mar 26, 2020 at 7:33 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get reboot while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Wouldn't the systemd-inhibit plugin automatically ensure that a rebuild action would block sleep/poweroff?
Unfortunately... not. From the man page: inhibitors "may be used to block or delay system sleep and shutdown requests from the user, as well as automatic idle handling of the OS." Explicit non-interactive privileged requests override inhibitors [1,2]. This has been discussed, and I think there's general sentiment that we should have an ability to inhibit "everything", but so far nobody has pushed for a solution. A solution could be proritized if it turns out to be required in Fedora.
[1] https://github.com/systemd/systemd/issues/2680 [2] https://github.com/systemd/systemd/issues/6644
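(For illustration, the kind of wrapping being discussed might look roughly like the following. This is only a sketch, not an existing Fedora unit or script, and as noted above a privileged non-interactive shutdown request would still get past the inhibitor lock:)

systemd-inhibit --who="rpmdb rebuild" \
                --why="Converting the RPM database, do not power off" \
                --what=shutdown:sleep:idle \
                rpmdb --rebuilddb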
I'm not sure it's needed with rpm 4.14+, since worst case is that you have to trigger the rebuild some other time if it's interrupted.
No matter how it's wrapped, is the upgrade itself atomic? Having the new db built under a temporary file name and then atomically rename(2)d into place would be ideal.
Since RPM 4.14, RPM creates a new directory, writes the database content there, then renames the directory when it's done.
Does it use renameat2(RENAME_EXCHANGE)?
No, but I don't think that matters with the way it's implemented?
See: https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
I think otherwise it'd be hard to do an atomic replacement when the database consists of more than one file. But looking at the code:
xx = rename(dest, old);
if (xx) {
    goto exit;
}
xx = rename(src, dest);
(dest, src, old are all single-file paths) If I'm reading this correctly, it doesn't even do atomic replacement of individual files. If the machine is rebooted between the two renames above, no database ;(
It *used to* replace files one by one before https://github.com/rpm-software-management/rpm/commit/fffd652c56eaef8fc41d23...
Now it's just two rename() calls on directories, so it's worlds better and the best we can do with portable system calls. So yes, if you win the unlucky lottery and the system reboots between those two rename() calls, you can end up having to manually rename it into place.
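(For the record, a rough sketch of what the Linux-only RENAME_EXCHANGE path being discussed could look like. This is illustrative only, not rpm's actual code; the function and variable names are made up, and it assumes glibc 2.28+ where renameat2() is exposed:)

#define _GNU_SOURCE
#include <fcntl.h>   /* AT_FDCWD; RENAME_EXCHANGE (on older setups it may live in <linux/fs.h>) */
#include <stdio.h>   /* renameat2() on glibc >= 2.28 */
#include <errno.h>

static int swap_dbdirs(const char *newdir, const char *livedir)
{
    /* Swap the freshly built database directory with the live one in a
     * single atomic step, so a reboot at any point leaves one complete
     * database in place. */
    if (renameat2(AT_FDCWD, newdir, AT_FDCWD, livedir, RENAME_EXCHANGE) == 0)
        return 0;
    if (errno == EINVAL || errno == ENOSYS)
        return 1;   /* not supported here: fall back to the portable two-rename path */
    return -1;      /* some other error */
}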
So to put that into perspective, --rebuilddb was *much* worse for over twenty years and I've never heard of anybody's database getting nuked because of *that*.
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
The places where rpm turns up never cease to astonish me, starting from OS X to FreeBSD to the old proprietary unixen (AIX was already mentioned). There's even an actively maintained OS/2 port of rpm. Seriously. To that, I had to mostly say no.
And rpm is in fact available for Windows too, through midipix: https://midipix.org/
(And cygwin, but we don't talk about Cygwin...)
-- 真実はいつも一つ!/ Always, there's only one truth!
On 3/26/20 6:12 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 11:41:56AM -0400, Stephen John Smoogen wrote:
On Thu, 26 Mar 2020 at 10:54, Tomasz Torcz tomek@pipebreaker.pl wrote:
On Thu, Mar 26, 2020 at 02:08:57PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 03:29:50PM +0200, Panu Matilainen wrote:
No, rpm doesn't use many Linux-specific calls and this is no exception. In fact it doesn't use any of the *at() family calls directly either.
But why?! It's not like rpm is massive on Windows Server... Isn't good support for Linux absolutely the most important thing?
Well, RPM is a package manager on AIX. IBM/Redhat may want to keep AIX alive ;-)
My understanding is that's not the only place it is used. A Linux-only version would end up being another fork. I doubt it matters as much as it did 10 or 20 years ago, but it would still be a splitting of community resources versus a growing of community resources. Not all the world can be as free as systemd :).
Well, OK, but let's consider that Linux installations are probably something like 99.9%. IMO it's totally appropriate to implement an atomic path for Linux, and implement a non-atomic fallback for the systems that need that. We're not talking about anything big here, rather a ~10 line function.
Patches welcome - well-tested ones, that is. That's one of the issues with such alternate paths: that simple thing suddenly has two separate paths you need to test instead of one.
A truly atomic database rebuild would be nice, of course, but all this attention on it is way out of proportion.
- Panu -
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
Mock has this cool bootstrap image thing now. It seems to me we could use that image to run the system upgrades too [*]. And if/when we get koji to use that, it'll solve a number of age-old problems on the build system, AND that image will get heavily tested 24/7 so it wouldn't be any once-in-a-full-moon franken-thing.
[*] Mount the host filesystem from mock and perform a dnf --installroot=... distro-upgrade on that, turning the whole landscape inside out.
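(Concretely, the footnote's idea would presumably boil down to something like the following. This is illustrative only — no such tooling exists today, and the mount point and release number are made up for the example:)

# run from the bootstrap image, with the host's root filesystem mounted at /target
dnf --installroot=/target --releasever=33 distro-sync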
So nowadays we have a much simpler mechanism: reboot to a special system target without most daemons running (to avoid interference during the upgrade), run the update there, reboot into the new environment. The biggest advantage is that this way we reduce the amount of "custom": we're running normal installed dnf + rpm in a normal boot environment, we just stop the boot from progressing all the way to the usual graphical environment.
I think it's fair to say that the amount of bugs related to the upgrade mechanism has been greatly reduced compared to previous schemes. We still have various upgrade issues, but they are in the rpms themselves, not in how we install them.
Such a scheme may be feasible in a fast-moving distro like Fedora, where you can always afford to sit out the next six months waiting for the new thing to become available also in the rawhide-1 version, but it's totally infeasible in something like RHEL. RHEL/CentOS 7 to 8 upgrades with such a scheme only happen to "work" because bugs such as the missing rpmlib() dependency on file triggers kinda let things stumble through the cracks.
- Panu -
On 3/27/20 9:04 AM, Panu Matilainen wrote:
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
You can also boot into a live cd of the next version, mount the existing filesystems similarly to what the rescue image does, and run dnf --installroot=... upgrade from there. Whether manually, or using special tooling. Tooling which would still all be from the next version, and thus issues would be fixable without pushing changes to multiple old versions.
- Panu -
On Fri, Mar 27, 2020 at 09:04:53AM +0200, Panu Matilainen wrote:
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
Mock has this cool bootstrap image thing now. It seems to me we could use that image to run the system upgrades too [*]. And if/when we get koji to use that, it'll solve a number of age-old problems on the build system, AND that image will get heavily tested 24/7 so it wouldn't be any once-in-a-full-moon franken-thing.
[*] Mount the host filesystem from mock and perform a dnf --installroot=... distro-upgrade on that, turning the whole landscape inside out.
Where would mock be executing from? The same filesystem it is modifying? Somehow it seems that this doesn't change much, but just brings in another layer. Or will a complete copy of the system be made in memory to execute the upgrade tools from?
Let's consider a concrete example that came up recently: grub wants to rewrite something in the bootloader area on disk to help upgrades from very old installations. In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active. They can query and touch hardware, can see all the disks as they are, etc. If we went through mock, it'd be running in an nspawn environment w/o access to hardware.
(Something like os-tree's atomic replacement of things, that's of course a completely different story. But so far we're talking about traditional systems.)
So nowadays we have a much simpler mechanism: reboot to a special system target without most daemons running (to avoid interference during the upgrade), run the update there, reboot into the new environment. The biggest advantage is that this way we reduce the amount of "custom": we're running normal installed dnf + rpm in a normal boot environment, we just stop the boot from progressing all the way to the usual graphical environment.
I think it's fair to say that the amount of bugs related to the upgrade mechanism has been greatly reduced compared to previous schemes. We still have various upgrade issues, but they are in the rpms themselves, not in how we install them.
Such a scheme may be feasible in a fast-moving distro like Fedora, where you can always afford to sit out the next six months waiting for the new thing to become available also in the rawhide-1 version, but it's totally infeasible in something like RHEL. RHEL/CentOS 7 to 8 upgrades with such a scheme only happen to "work" because bugs such as the missing rpmlib() dependency on file triggers kinda let things stumble through the cracks.
The premise is that the upgrade is really a normal dnf upgrade, i.e. a normal 'rpm -U' operation under the hood. The differences are: 1: package count, 2: setting up the machine in a mode where the graphical env. and other non-essential daemons are not active. So if requirements in rpms are specified correctly, the upgrade should always go through. If they are specified incorrectly — then the same problems would occur on smaller updates. So the upgrade path is something to test to catch such issues in packaging. (And in general, the fewer scriptlets, the better. This solves the issue even better.)
Zbyszek
On 3/27/20 9:55 AM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Mar 27, 2020 at 09:04:53AM +0200, Panu Matilainen wrote:
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
Mock has this cool bootstrap image thing now. It seems to me we could use that image to run the system upgrades too [*]. And if/when we get koji to use that, it'll solve a number of age-old problems on the build system, AND that image will get heavily tested 24/7 so it wouldn't be any once-in-a-full-moon franken-thing.
[*] Mount the host filesystem from mock and perform a dnf --installroot=... distro-upgrade on that, turning the whole landscape inside out.
Where would mock be executing from? The same filesystem it is modifying?
Where is the offline upgrade executing from? How's this fundamentally different?
Somehow it seems that this doesn't change much, but just brings in another layer. Or will a complete copy of the system be made in memory to execute the upgrade tools from?
Oh come on. Running from a bootstrap image allows using full native capabilities of rpm/dnf in any new version, without having to consider what the previous versions support. How's that "not much"?
Let's consider a concrete example that came up recently: grub wants to rewrite something in the bootloader area on disk to help upgrades from very old installations. In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active. They can query and touch hardware, can see all the disks as they are, etc. If we went through mock, it'd be running in an nspawn environment w/o access to hardware.
And still that offline upgrade will be running on the old system's kernel, which will simply *prevent* certain types of actions from being performed in an upgrade, just like using the host system's packaging stack *prevents* use of native capabilities in the next version, just because the old version doesn't support them, which is just totally a** backwards. Really.
Note that I'm talking about a high-level idea here. I haven't looked at what a mock bootstrap image looks like, I haven't looked at what offline upgrade looks like. Sure, there would be technical details, perhaps even obstacles, to sort out.
(Something like os-tree's atomic replacement of things, that's of course a completely different story. But so far we're talking about traditional systems.)
So nowadays we have a much simpler mechanism: reboot to a special system target without most daemons running (to avoid interference during the upgrade), run the update there, reboot into the new environment. The biggest advantage is that this way we reduce the amount of "custom": we're running normal installed dnf + rpm in a normal boot environment, we just stop the boot from progressing all the way to the usual graphical environment.
I think it's fair to say that the amount of bugs related to the upgrade mechanism has been greatly reduced compared to previous schemes. We still have various upgrade issues, but they are in the rpms themselves, not in how we install them.
Such a scheme may be feasible in a fast-moving distro like Fedora, where you can always afford to sit out the next six months waiting for the new thing to become available also in the rawhide-1 version, but it's totally infeasible in something like RHEL. RHEL/CentOS 7 to 8 upgrades with such a scheme only happen to "work" because bugs such as the missing rpmlib() dependency on file triggers kinda let things stumble through the cracks.
The premise is that the upgrade is really a normal dnf upgrade, i.e. a normal 'rpm -U' operation under the hood. The differences are: 1: package count, 2: setting up the machine in a mode where the graphical env. and other non-essential daemons are not active. So if requirements in rpms are specified correctly, the upgrade should always go through. If they are specified incorrectly — then the same problems would occur on smaller updates. So the upgrade path is something to test to catch such issues in packaging. (And in general, the fewer scriptlets, the better. This solves the issue even better.)
You're missing the point. Missing capabilities in the older version of rpm can and will PREVENT you from doing the update AT ALL.
Even in Fedora people have seen occasional glimpses of this; in enterprise distros this is a complete show-stopper.
I want a normal dnf upgrade just as much as you do, it's just that it needs to be run from the new version, not the old. One way or the other.
- Panu -
On Fri, Mar 27, 2020 at 1:56 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
Where would mock be executing from? The same filesystem it is modifying? Somehow it seems that this doesn't change much, but just brings in another layer. Or will a complete copy of the system be made in memory to execute the upgrade tools from?
Snapshot it. If it doesn't work, throw away the snapshot.
Let's consider a concrete example that came up recently: grub wants to rewrite something in the bootloader area on disk to help upgrades from very old installations. In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active. They can query and touch hardware, can see all the disks as they are, etc. If we went through mock, it'd be running in an nspawn environment w/o access to hardware.
This particular example affected BIOS GRUB, where the embedded "core.img" isn't ever updated, i.e. grub-install is called once at original install time, and never again. I'm not certain whether installation is, or could be, atomic.
On UEFI, the "core.img" makes up most of grubx64.efi, and it's updated any time the grub2-efi-x64 package is updated, not only on system upgrades. I don't know how RPM replaces it. But since it's on FAT, atomicity is limited to the VFS operation. There is a window where it could be interrupted and things would be in neither the old nor the new working state.
(Something like os-tree's atomic replacement of things, that's of course a completely different story. But so far we're talking about traditional systems.)
Perhaps ironically, rpm-ostree + UEFI systems don't have the bootloader updated. And it doesn't really want to be responsible for it. GRUB is very cool in many ways, but having a strategy for keeping itself up to date is not one of them; so far upstream GRUB development considers this a distribution problem, not a problem in search of a generic solution.
One idea is a service that ensures boot-related things are in the proper state, including mirroring the EFI system partition for raid1 sysroot setups. It's not decided whether it should be a fwupd function or made into a separate boot daemon.
So, my concern here is timeline vs the upcoming datacenter move. ;(
Do you have any ideas when rpm 4.16 will be released? I don't see any dates on the change. Or perhaps I guess the question is when it will land in rawhide?
As soon as it lands in rawhide we need to upgrade the builders to the rawhide rpm and set macros so it uses bdb for everything but rawhide. It's very likely however that builders will still be Fedora 31 at that point. (If that matters for rpm any).
Or I suppose we could try and get mock's bootstrapping working before then.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
kevin
On Fri, Mar 27, 2020 at 8:01 PM Kevin Fenzi kevin@scrye.com wrote:
So, my concern here is timeline vs the upcoming datacenter move. ;(
Do you have any ideas when rpm 4.16 will be released? I don't see any dates on the change. Or perhaps I guess the question is when it will land in rawhide?
Panu tagged the rpm 4.16 alpha earlier this week, so I would hope it'd land in Rawhide next week.
As soon as it lands in rawhide we need to upgrade the builders to the rawhide rpm and set macros so it uses bdb for everything but rawhide. It's very likely however that builders will still be Fedora 31 at that point. (If that matters for rpm any).
You can set the macro now for all targets that this would be a problem with, it'll be a no-op with current rpm.
On Fri, Mar 27, 2020 at 10:34:49AM +0200, Panu Matilainen wrote:
On 3/27/20 9:55 AM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Mar 27, 2020 at 09:04:53AM +0200, Panu Matilainen wrote:
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
Mock has this cool bootstrap image thing now. It seems to me we could use that image to run the system upgrades too [*]. And if/when we get koji to use that, it'll solve a number of age-old problems on the build system, AND that image will get heavily tested 24/7 so it wouldn't be any once-in-a-full-moon franken-thing.
[*] Mount the host filesystem from mock and perform a dnf --installroot=... distro-upgrade on that, turning the whole landscape inside out.
Where would mock be executing from? The same filesystem it is modifying?
Where is the offline upgrade executing from? How's this fundamentally different?
It's not — the point I was trying to make is that IF we are running from the host filesystem, it is easier to run directly from it.
This subject has a long history of different approaches. Things that are more like what you describe than what we're currently using have been used in the past. And at least for Fedora, it seems that the simplicity of the current approach wins over the limitations. For RHEL the best solution may need to be different.
Oh come on. Running from a bootstrap image allows using full native capabilities of rpm/dnf in any new version, without having to consider what the previous versions support. How's that "not much"?
Yes, that is an important hurdle that Fedora generally doesn't encounter at all. Fedora usually waits until the new rpm functionality is released in older versions of Fedora before allowing it to be used in rawhide. I think this should be a viable approach for RHEL too — after all, rpm is very good at keeping backwards compatibility.
Another approach could be to perform the upgrade in two steps: have a rpm+dnf stack compiled for the old version, install it, and then do the upgrade to the real target version. Dunno, that's quickly getting complex.
Let's consider a concrete example that came up recently: grub wants to rewrite something in the bootloader area on disk to help upgrades from very old installations. In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active. They can query and touch hardware, can see all the disks as they are, etc. If we went through mock, it'd be running in an nspawn environment w/o access to hardware.
And still that offline upgrade will be running on the old system's kernel, which will simply *prevent* certain types of actions from being performed in an upgrade, just like using the host system's packaging stack *prevents* use of native capabilities in the next version, just because the old version doesn't support them, which is just totally a** backwards. Really.
Note that I'm talking about a high-level idea here. I haven't looked at what a mock bootstrap image looks like, I haven't looked at what offline upgrade looks like. Sure, there would be technical details, perhaps even obstacles, to sort out.
(Something like os-tree's atomic replacement of things, that's of course a completely different story. But so far we're talking about traditional systems.)
So nowadays we have a much simpler mechanism: reboot to a special system target without most daemons running (to avoid interference during the upgrade), run the update there, reboot into the new environment. The biggest advantage is that this way we reduce the amount of "custom": we're running normal installed dnf + rpm in a normal boot environment, we just stop the boot from progressing all the way to the usual graphical environment.
I think it's fair to say that the amount of bugs related to the upgrade mechanism has been greatly reduced compared to previous schemes. We still have various upgrade issues, but they are in the rpms themselves, not in how we install them.
Such a scheme may be feasible in a fast-moving distro like Fedora, where you can always afford to sit out the next six months waiting for the new thing to become available also in the rawhide-1 version, but it's totally infeasible in something like RHEL. RHEL/CentOS 7 to 8 upgrades with such a scheme only happen to "work" because bugs such as the missing rpmlib() dependency on file triggers kinda let things stumble through the cracks.
The premise is that the upgrade is really a normal dnf upgrade, i.e. a normal 'rpm -U' operation under the hood. The differences are: 1: package count, 2: setting up the machine in a mode where the graphical env. and other non-essential daemons are not active. So if requirements in rpms are specified correctly, the upgrade should always go through. If they are specified incorrectly — then the same problems would occur on smaller updates. So the upgrade path is something to test to catch such issues in packaging. (And in general, the fewer scriptlets, the better. This solves the issue even better.)
You're missing the point. Missing capabilities in the older version of rpm can and will PREVENT you from doing the update AT ALL.
Even in Fedora people have seen occasional glimpses of this; in enterprise distros this is a complete show-stopper.
I want a normal dnf upgrade just as much as you do, it's just that it needs to be run from the new version, not the old. One way or the other.
I see your point.
Zbyszek
On 28. 03. 20 1:01, Kevin Fenzi wrote:
Do you have any ideas when rpm 4.16 will be released? I don't see any dates on the change. Or perhaps I guess the question is when it will land in rawhide?
From the ticket https://pagure.io/fesco/issue/2360
My understanding is that 4.16 could be shipped today:
but lets please get 4.16 into rawhide for people to test, ASAP.
For the backend:
At any rate, even if sqlite was approved today we wouldn't be switching to that until at least a couple of weeks of shakedown for 4.16 first.
On 3/28/20 8:59 AM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Mar 27, 2020 at 10:34:49AM +0200, Panu Matilainen wrote:
On 3/27/20 9:55 AM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Mar 27, 2020 at 09:04:53AM +0200, Panu Matilainen wrote:
On 3/26/20 2:35 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 02:00:49PM +0200, Panu Matilainen wrote:
previous-release-blocker(s) and previous-previous-release-blocker(s), since the changes would need to be deployed in F32 and F31. Also note that the last time the upgrade plugins run code is in the upgrade phase between the two reboots, and the plugin is running pre-upgrade code. This code would then invoke post-upgrade rpm. It's certainly doable, but seems a bit funky.
Right, requiring changes to previous versions is not okay. I seemed to think our upgrade tooling had at some point been fixed to perform the upgrade on the target distro's package management stack, as it really needs to be, but I guess that was just a dream.
Relying on the target distro's package management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment. Such a mode of running would face the same issue as the anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
Mock has this cool bootstrap image thing now. It seems to me we could use that image to run the system upgrades too [*]. And if/when we get koji to use that, it'll solve a number of age-old problems on the build system, AND that image will get heavily tested 24/7 so it wouldn't be any once-in-a-full-moon franken-thing.
[*] Mount the host filesystem from mock and perform a dnf --installroot=... distro-upgrade on that, turning the whole landscape inside out.
Where would mock be executing from? The same filesystem it is modifying?
Where is the offline upgrade executing from? How's this fundamentally different?
It's not — the point I was trying to make is that IF we are running from the host filesystem, it is easier to run directly from it.
Easier? Sure, upgrades are hard, let's go shopping.
This subject has a long history of different approaches. Things that are more like what you describe than what we're currently using have been used in the past. And at least for Fedora, it seems that the simplicity of the current approach wins over the limitations. For RHEL the best solution may need to be different.
Oh come on. Running from a bootstrap image allows using full native capabilities of rpm/dnf in any new version, without having to consider what the previous versions support. How's that "not much"?
Yes, that is an important hurdle that Fedora generally doesn't encounter at all. Fedora usually waits until the new rpm functionality is released in older versions of Fedora before allowing it to be used in rawhide. I think this should be a viable approach for RHEL too — after all, rpm is very good at keeping backwards compatibility.
No. This isn't about *backwards* compatibility, this is about *forward* compatibility, which places terrible limits on what features can be used.
RHEL 7 was released in 2014. RHEL 8 came out in 2019. In the meantime, Fedora had started using file triggers and rich dependencies because it could - and why not. But RHEL 7 rpm hasn't got a chance of dealing with those. So if you bring the waiting strategy into the RHEL world, people could only start using file triggers and rich dependencies in RHEL and EPEL 9, which I can only assume will be released some year in the future. Think about that for a while.
In other words, that puts the better part of a *decade* between a feature being introduced in rpm and when it can actually be used in a RHEL release. And an even bigger gap between what can be done in RHEL and in Fedora than there is now, as in: extra burden on maintainers. "Easier" has little to do with any of this.
I'm sure rpm can be blamed for this in part - we don't introduce features with new rpmlib() dependencies often enough, so people drift into this "yessss we can waaiiiit another yeaaar" mode. And then they snap out of it when the upgrade tooling fails because of new rpmlib() deps, at which point it's already a year too late to save it for that release.
Another approach could be to perform the upgrade in two steps: have a rpm+dnf stack compiled for the old version, install it, and then do the upgrade to the real target version. Dunno, that's quickly getting complex.
This isn't feasible as the stack might not even be compilable on the previous release due to other version differences. Yes, upgrades are hard.
- Panu -
On 3/28/20 2:01 AM, Kevin Fenzi wrote:
So, my concern here is timeline vs the upcoming datacenter move. ;(
Do you have any ideas when rpm 4.16 will be released? I don't see any dates on the change. Or perhaps I guess the question is when it will land in rawhide?
RPM 4.16 alpha was released last week and will land in rawhide tomorrow-ish if the change is accepted by FESCo (which seems very likely). The final version should be out well before the F33 beta.
As soon as it lands in rawhide we need to upgrade the builders to the rawhide rpm and set macros so it uses bdb for everything but rawhide. It's very likely however that builders will still be Fedora 31 at that point. (If that matters for rpm any).
Or I suppose we could try and get mock's bootstrapping working before then.
No such things needed at this time. The database change is a separate change and does NOT come bundled with RPM 4.16, we'll only even consider switching that once 4.16 has had a proper shakedown.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
This was the plan all along: the dust of RPM 4.16 landing in rawhide needs to settle first before any database changes are considered, the exact schedule depending on all manner of things. I thought this was clear from the SQLite change proposal, but I guess not.
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
- Panu -
kevin
On 3/28/20 8:59 AM, Zbigniew Jędrzejewski-Szmek wrote:
Yes, that is an important hurdle that Fedora generally doesn't encounter at all. Fedora usually waits until the new rpm functionality is released in older versions of Fedora before allowing it to be used in rawhide.
Fortunately, that's not the usual case. Any part of rpm that we refuse to use before it is available in older versions of Fedora is a part that will bitrot and is unlikely to ever be used in Fedora.
Waiting does not make it more stable. Waiting makes sure the people that were interested in the feature in the first place will move away, leaving rpm upstream with an untested feature no one wants to touch.
That would be even more braindamaged than forbidding the use of gcc features not present in older versions. At least gcc sees some use dev-side. But who is going to exercise packaging tools if packagers are forbidden to use them?
That being said, yes, Fedora has been a terrible rpm stakeholder. That has hurt both Fedora and rpm upstream. Half the people NIH-reinventing packaging tools just cannot stand the delays associated with rpm feature deployments.
Regards,
Dne 27. 03. 20 v 8:55 Zbigniew Jędrzejewski-Szmek napsal(a):
In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active.
How does the "offline upgrade" work under the hood? Do I understand correctly that even an "offline upgrade" will have a problem with the upgrade from F32 to F33 because of the rpmdb?
On 3/30/20 3:00 PM, Nicolas Mailhot via devel wrote:
On 3/28/20 8:59 AM, Zbigniew Jędrzejewski-Szmek wrote:
Yes, that is an important hurdle that Fedora generally doesn't encounter at all. Fedora usually waits until the new rpm functionality is released in older versions of Fedora before allowing it to be used in rawhide.
Fortunately, that's not the usual case. Any part of rpm that we refuse to use before it is available in older versions of Fedora is a part that will bitrot and is unlikely to ever be used in Fedora.
Waiting does not make it more stable. Waiting makes sure the people that were interested in the feature in the first place will move away, leaving rpm upstream with an untested feature no one wants to touch.
That would be even more braindamaged than forbidding the use of gcc features not present in older versions. At least gcc sees some use dev-side. But who is going to exercise packaging tools if packagers are forbidden to use them?
That being said, yes, Fedora has been a terrible rpm stakeholder. That has hurt both Fedora and rpm upstream. Half the people NIH-reinventing packaging tools just cannot stand the delays associated with rpm feature deployments.
Nicolas, thanks for this.
Indeed it's *really* hard (not to mention uninspiring and demotivating) to develop features in a setting where the first users of said feature *might* appear years in the future, at which point the feature will exist in some form in multiple releases already declared stable long ago, so it's impossible to change anything, and even the simplest fixes would need to go to multiple branches and distros all at once.
So in this setting, with any rpm feature there's precisely one chance to get things *just* right. We all know how well that works with any software...
- Panu -
Regards,
On Mon, Mar 30, 2020 at 12:41:08PM +0300, Panu Matilainen wrote: ...snip...
No such things needed at this time. The database change is a separate change and does NOT come bundled with RPM 4.16, we'll only even consider switching that once 4.16 has had a proper shakedown.
ok, great.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
This was the plan all along: the dust of RPM 4.16 landing in rawhide needs to settle first before any database changes are considered, the exact schedule depending on all manner of things. I thought this was clear from the SQLite change proposal, but I guess not.
Sorry if it was, perhaps I just missed that. ;(
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
ok, 2-4 weeks from tomorrow would be between the 14th and the 28th?
Fedora 32's preferred release date is the 21st. I'm really not sure we will have cycles to update all the builders in less than a week. What's the impact of using the rawhide rpm on f31 builders? Will there be deps/issues?
We probably won't move to f32 on the builders until we are installing new ones in the new datacenter and switching to those. But I'm very leery of also replacing rpm on them at the same time... I guess it could work and we could always back it out if not. That would be the last week of May or so that we switch to those.
So, I guess the two windows are: as soon as it looks ok with f31 builders, or the last week of May / first week of June with f32. Or late June when we are done moving things.
Thoughts?
kevin
On Mon, Mar 30, 2020 at 12:46 PM Kevin Fenzi kevin@scrye.com wrote:
On Mon, Mar 30, 2020 at 12:41:08PM +0300, Panu Matilainen wrote:
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
ok, 2-4 weeks from tomorrow would be between the 14th and the 28th?
Fedora 32's preferred release date is the 21st. I'm really not sure we will have cycles to update all the builders in less than a week. What's the impact of using the rawhide rpm on f31 builders? Will there be deps/issues?
We probably won't move to f32 on the builders until we are installing new ones in the new datacenter and switching to those. But I'm very leery of also replacing rpm on them at the same time... I guess it could work and we could always back it out if not. That would be the last week of May or so that we switch to those.
So, I guess the two windows are: as soon as it looks ok with f31 builders, or the last week of May / first week of June with f32. Or late June when we are done moving things.
Thoughts?
RPM 4.16 is supposed to be ABI compatible with everything built against RPM 4.15. So RPM 4.16 should be able to plug in just fine on Fedora 31 or Fedora 32.
-- 真実はいつも一つ!/ Always, there's only one truth!
On Mon, Mar 30, 2020 at 02:40:03PM +0200, Miroslav Suchý wrote:
Dne 27. 03. 20 v 8:55 Zbigniew Jędrzejewski-Szmek napsal(a):
In current "offline upgrade" scheme, the upgrade tools are running on the real system, with udev active.
This thread has mostly died, but I didn't want to leave this unanswered.
How does the "offline upgrade" work under the hood?
It's essentially a 'dnf upgrade --releasever=NN', except that it is sandwiched between two reboots (to avoid things running during the upgrade, and to restart everything after the upgrade). See https://www.freedesktop.org/software/systemd/man/systemd.offline-updates.htm....
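(In Fedora this is typically driven by the dnf system-upgrade plugin; the user-facing steps look roughly like this, with the release number being just an example:)

dnf system-upgrade download --releasever=33
dnf system-upgrade reboot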
Do I understand correctly that even an "offline upgrade" will have a problem with the upgrade from F32 to F33 because of the rpmdb?
No, there shouldn't be any problem. New rpm will still support the old database, fully in F33, and then probably read-only in F34+. At some point the rpm --rebuilddb operation will need to happen, but we have plenty of time to do it. The discussion was mostly about whether it should happen automatically on upgrades, and when. The effect of *not* doing the conversion automatically during upgrades to F33 is less use and testing, not breakage.
Zbyszek
On 3/30/20 7:45 PM, Kevin Fenzi wrote:
On Mon, Mar 30, 2020 at 12:41:08PM +0300, Panu Matilainen wrote: ...snip...
No such things needed at this time. The database change is a separate change and does NOT come bundled with RPM 4.16, we'll only even consider switching that once 4.16 has had a proper shakedown.
ok, great.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
This was the plan all along: the dust of RPM 4.16 landing in rawhide needs to settle first before any database changes are considered, the exact schedule depending on all manner of things. I thought this was clear from the SQLite change proposal, but I guess not.
Sorry if it was, perhaps I just missed that. ;(
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
ok, 2-4 weeks from tomorrow would be between the 14th and the 28th?
Fedora 32's preferred release date is the 21st. I'm really not sure we will have cycles to update all the builders in less than a week. What's the impact of using the rawhide rpm on f31 builders? Will there be deps/issues?
The soname doesn't change and no dependencies on any rawhide latest-and-greatest otherwise, so from that side there shouldn't be any issues.
The only real incompatibility should be on the spec parse side - the bare word vs quoted strings thing in specs (as explained in my heads-up message), which affects a handful of packages but is also trivial for packagers to fix if they encounter it.
We probably won't move to f32 on the builders until we are installing new ones in the new datacenter and switching to those. But I'm very leery of also replacing rpm on them at the same time... I guess it could work and we could always back it out if not. That would be the last week of May or so that we switch to those.
So, I guess the two windows are: as soon as it looks ok with f31 builders, or the last week of May / first week of June with f32. Or late June when we are done moving things.
I could live with switching at beginning of May, but end of May / sometime in June is terribly late in the cycle for this. So if choosing between these, it'd kinda have to be with f31 builders. OTOH there are various other possibilities too, including but probably not limited to:
a) Switch in two stages: override the database to bdb on builders, change rawhide on our own schedule and then remove the builder override when it suits infra
b) See if the copr bootstrap is usable now in koji
c) Just leave the builders alone and see what happens. In theory it should all just work regardless as long as all installations are done from outside of the chroot.
In any case, we won't be switching anything at all before we have a working agreement with infra over the steps.
- Panu -
On 3/31/20 2:46 PM, Panu Matilainen wrote:
On 3/30/20 7:45 PM, Kevin Fenzi wrote:
On Mon, Mar 30, 2020 at 12:41:08PM +0300, Panu Matilainen wrote: ...snip...
No such things needed at this time. The database change is a separate change and does NOT come bundled with RPM 4.16, we'll only even consider switching that once 4.16 has had a proper shakedown.
ok, great.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
This was the plan all along: the dust of RPM 4.16 landing in rawhide needs to settle first before any database changes are considered, the exact schedule depending on all manner of things. I thought this was clear from the SQLite change proposal, but I guess not.
Sorry if it was, perhaps I just missed that. ;(
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
ok, 2-4 weeks from tomorrow would be between the 14th and the 28th?
Fedora 32's preferred release date is the 21st. I'm really not sure we will have cycles to update all the builders in less than a week. What's the impact of using the rawhide rpm on f31 builders? Will there be deps/issues?
The soname doesn't change and no dependencies on any rawhide latest-and-greatest otherwise, so from that side there shouldn't be any issues.
The only real incompatibility should be on the spec parse side - the bare word vs quoted strings thing in specs (as explained in my heads-up message), which affects a handful of packages but is also trivial for packagers to fix if they encounter it.
We probably won't move to f32 on the builders until we are installing new ones in the new datacenter and switching to those. But I'm very leery of also replacing rpm on them at the same time... I guess it could work and we could always back it out if not. That would be the last week of May or so that we switch to those.
So, I guess the two windows are: as soon as it looks ok with f31 builders, or the last week of May / first week of June with f32. Or late June when we are done moving things.
I could live with switching at beginning of May, but end of May / sometime in June is terribly late in the cycle for this. So if choosing between these, it'd kinda have to be with f31 builders. OTOH there are various other possibilities too, including but probably not limited to:
a) Switch in two stages: override the database to bdb on builders, change rawhide on our own schedule and then remove the builder override when it suits infra
b) See if the copr bootstrap is usable now in koji
c) Just leave the builders alone and see what happens. In theory it should all just work regardless as long as all installations are done from outside of the chroot.
After getting some fresh air, aka consulting the dog: it's of course more complicated than that.
From the point of view of getting *packages* built for rawhide, it shouldn't make the slightest difference what rpm the builders are using at this time, because even if the build runs rpm queries, the rpm on the inside does still support BDB. From a package-building perspective the only gain of having 4.16 on the builders is additional testing - which is nice, but could also perhaps wait a bit longer.
However, *image* builds that need the newer rpm to avoid having a BDB database on e.g. Live images are a different story, I suppose. I simply do not know all the things that are being built, never mind the details of how and the interactions.
- Panu -
On Tue, Mar 31, 2020 at 8:53 AM Panu Matilainen pmatilai@redhat.com wrote:
On 3/31/20 2:46 PM, Panu Matilainen wrote:
On 3/30/20 7:45 PM, Kevin Fenzi wrote:
On Mon, Mar 30, 2020 at 12:41:08PM +0300, Panu Matilainen wrote: ...snip...
No such things needed at this time. The database change is a separate change and does NOT come bundled with RPM 4.16, we'll only even consider switching that once 4.16 has had a proper shakedown.
ok, great.
In either case, it may be hard to have cycles for this while datacenter move is happening. It would help if we had a ballpark at least for when it will land / some folks willing to get bootstrapping working in koji.
Or what would you think of the idea of landing it in rawhide, but keeping default bdb until after we have the move done and can upgrade builders to f32?
This was the plan all along: the dust of RPM 4.16 landing in rawhide needs to settle first before any database changes are considered, the exact schedule depending on all manner of things. I thought this was clear from the SQLite change proposal, but I guess not.
Sorry if it was, perhaps I just missed that. ;(
If it was up to us only, I reckon we'd be looking to switch over to sqlite somewhere between 2-4 weeks from the time 4.16 lands in rawhide, but our schedule is flexible here. What ballpark dates would we be looking at with the datacenter move and builder upgrade?
ok, 2-4 weeks from tomorrow would be between the 14th and the 28th?
Fedora 32's preferred release date is the 21st. I'm really not sure we will have cycles to update all the builders in less than a week. What's the impact of using the rawhide rpm on f31 builders? Will there be deps/issues?
The soname doesn't change and no dependencies on any rawhide latest-and-greatest otherwise, so from that side there shouldn't be any issues.
The only real incompatibility should be on the spec parse side - the bare word vs quoted strings thing in specs (as explained in my heads-up message), which affects a handful of packages but is also trivial for packagers to fix if they encounter it.
We probably won't move to f32 on the builders until we are installing new ones in the new datacenter and switching to those. But I'm very leery of also replacing rpm on them at the same time... I guess it could work and we could always back it out if not. That would be the last week of May or so that we switch to those.
So, I guess the two windows are: as soon as it looks ok with f31 builders, or the last week of May / first week of June with f32. Or late June when we are done moving things.
I could live with switching at beginning of May, but end of May / sometime in June is terribly late in the cycle for this. So if choosing between these, it'd kinda have to be with f31 builders. OTOH there are various other possibilities too, including but probably not limited to:
a) Switch in two stages: override the database to bdb on builders, change rawhide on our own schedule and then remove the builder override when it suits infra
b) See if the copr bootstrap is usable now in koji
c) Just leave the builders alone and see what happens. In theory it should all just work regardless as long as all installations are done from outside of the chroot.
After getting some fresh air, aka consulting the dog: it's of course more complicated than that.
From the point of view of getting *packages* built for rawhide, it shouldn't make the slightest difference what rpm the builders are using at this time, because even if the build runs rpm queries, the rpm on the inside does still support BDB. From a package-building perspective the only gain of having 4.16 on the builders is additional testing - which is nice, but could also perhaps wait a bit longer.
However, *image* builds that need the newer rpm to avoid having a BDB database on e.g. Live images are a different story, I suppose. I simply do not know all the things that are being built, never mind the details of how and the interactions.
They boil down to three major processes:
* Anaconda based
* LiveCD Tools based
* CoreOS based
All three tools use the host RPM to build the target image environment. However, all three are executed in chroots of the target environment. The Anaconda and LiveCD Tools based builds are done by Koji, and the CoreOS process is done by CoreOS Assembler, which uses OpenShift machinery to build the images.
So the tricky problem is that if we tell Koji to export a macro to change the rpmdb to bdb, that influences everything down the chain (except CoreOS, which is not affected by this and will be fine no matter what happens).
However, if we _don't_ do anything, we *might* actually be fine, as long as the builders support bdb and sqlite. That means that the only targets where we need the %_db_backend macro set back to bdb are the EPEL targets, and I'm not even sure we need to do it there either.
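(If an override did turn out to be necessary for some target, it would presumably boil down to dropping a one-line macro file into the affected buildroots, along these lines — the file name and location are illustrative, not an agreed-upon convention:)

echo '%_db_backend bdb' > /etc/rpm/macros.db-backend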
On Tue, Mar 31, 2020 at 02:46:01PM +0300, Panu Matilainen wrote:
The soname doesn't change and no dependencies on any rawhide latest-and-greatest otherwise, so from that side there shouldn't be any issues.
The only real incompatibility should be on the spec parse side - the bare word vs quoted strings thing in specs (as explained in my heads-up message), which affects a handful of packages but is also trivial for packagers to fix if they encounter it.
ok
I could live with switching at beginning of May, but end of May / sometime in June is terribly late in the cycle for this. So if choosing between these, it'd kinda have to be with f31 builders.
ok.
OTOH there are various other possibilities too, including but probably not limited to:
a) Switch in two stages: override the database to bdb on builders, change rawhide on our own schedule and then remove builder override when it suits infra b) See if the copr bootstrap is usable now in koji c) Just leave the builders alone and see what happens. In theory it should all just work regardless as long as all installations are done from outside of the chroot.
In any case, we won't be switching anything at all before we have a working agreement with infra over the steps.
Sure. We can test some of these in staging.
kevin
On Mon, Mar 30, 2020 at 10:13:19AM +0300, Panu Matilainen wrote:
strategy into RHEL world, people could only start using file triggers and rich dependencies in RHEL and EPEL 9 which I can only assume will be released some year in the future. Think about that for a while.
For what it's worth, Red Hat has committed to an official three year cadence, so you don't need to assume: it's 2022.
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
[...]
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Looking at the details of how to do this now.
The idea is to install a generic "rebuild rpmdb on boot" one-shot service, which can be flagged for action by 'touch /var/lib/rpm/.rebuilddb'. That would be done from rpm %posttrans when the rpmdb default changes, basically:
'[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb'
Should it become necessary, the same mechanism can be used to convert back. This will of course trigger some "extra" rebuilds for anybody staying on BDB backend but I'd say that's a feature...
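(Spelled out as a spec fragment, the flagging part would look roughly like the following; this is just a sketch, the actual scriptlet in rpm.spec may differ.)

%posttrans
# if an old BDB Packages file is present, flag an rpmdb rebuild for the next boot
[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb || :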
I'm thinking of something like this for /usr/lib/systemd/rpmdb-rebuild.service:
---
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
---
This seems to do the trick in my local testing, but there are probably a million details that could be tweaked and improved. The systemd service ecosystem is a bit overwhelming for the uninitiated, so crowdsourcing here:
This should be run quite early in the boot, before other daemons that potentially access the rpmdb get started (abrt, dnfdaemon), basically just as soon as /etc, /usr and /var are mounted. Is there something else I should add, or something better to hook onto? Other finer details that I'm missing?
It'll need a preset to enable by default if this ends up being the route taken, but let's hear the feedback first before I go file the bug...
- Panu -
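(For reference, enabling such a service by default is just a one-line systemd preset; the file location below is only an example.)

# e.g. /usr/lib/systemd/system-preset/90-default.preset (example location)
enable rpmdb-rebuild.service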
On Fri, Apr 17, 2020 at 04:48:11PM +0300, Panu Matilainen wrote:
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
[...]
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Looking at the details of how to do this now.
The idea is to install a generic "rebuild rpmdb on boot" one-shot service, which can be flagged for action by 'touch /var/lib/rpm/.rebuilddb'. That would be done from rpm %posttrans when the rpmdb default changes, basically:
'[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb'
Should it become necessary, the same mechanism can be used to convert back. This will of course trigger some "extra" rebuilds for anybody staying on BDB backend but I'd say that's a feature...
Shouldn't this be a one-time thing instead? E.g. '%triggerpostun rpm < n.n.n-n', where n.n.n-n is the first version with the changed default?
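(For reference, such a trigger would look roughly like the following, keeping the n.n.n-n placeholder; this is only a sketch of the suggestion, not something taken from rpm.spec.)

%triggerpostun -- rpm < n.n.n-n
# flag a one-time rebuild when upgrading from a pre-sqlite-default rpm
[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb || :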
I'm thinking of something like this for /usr/lib/systemd/rpmdb-rebuild.service:
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target

[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb
DefaultDependencies=no
After=sysinit.target
Before=basic.target shutdown.target
Conflicts=shutdown.target

[Service]
Type=oneshot
ExecStart=rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
(Service units have default dependency on basic.target, so if this is to be ordered before basic.target, it needs DefaultDependencies=no.)
This should be run quite early in the boot, before other daemons that potentially access the rpmdb get started (abrt, dnfdaemon), basically just as soon as /etc, /usr and /var are mounted. Is there something else I should add, or something better to hook onto? Other finer details that I'm missing?
It'll need a preset to enable by default if this ends up being the route taken, but let's hear the feedback first before I go file the bug...
Zbyszek
On 4/17/20 5:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Apr 17, 2020 at 04:48:11PM +0300, Panu Matilainen wrote:
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
[...]
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Looking at the details of how to do this now.
The idea is to install a generic "rebuild rpmdb on boot" one-shot service, which can be flagged for action by 'touch /var/lib/rpm/.rebuilddb'. That would be done from rpm %posttrans when the rpmdb default changes, basically:
'[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb'
Should it become necessary, the same mechanism can be used to convert back. This will of course trigger some "extra" rebuilds for anybody staying on BDB backend but I'd say that's a feature...
Shouldn't this be a one-time thing instead? E.g. '%triggerpostun rpm < n.n.n-n', where n.n.n-n is the first version with the changed default?
Not really, because with a once in a lifetime opportunity there are too many ways things can go wrong. Also, we need to be herding people away from BDB with increasing intensity so even if we allow them to stay on BDB in F33, they will need to switch over at a not-so-distant future point, and that's not tied to rpm package versions anymore.
I'm thinking of something like this for /usr/lib/systemd/rpmdb-rebuild.service:
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb
DefaultDependencies=no
After=sysinit.target
Should we also add Requires=sysinit.target? I don't think we want this running on a boot where the basics are failing...
Before=basic.target shutdown.target
Conflicts=shutdown.target
Hmm, what's with the shutdown.target? This is not a service that will remain active, so shutdown doesn't seem relevant.
[Service]
Type=oneshot
ExecStart=rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
(Service units have default dependency on basic.target, so if this is to be ordered before basic.target, it needs DefaultDependencies=no.)
I guess the question is rather: is running before basic.target actually reasonable or even desirable? At the very least, we'd need to also add
RequiresMountsFor=/var /var/tmp
...because obviously /var needs to be there for this. And that makes me wonder what else is missing that we'd need.
- Panu -
This should be run quite early in the boot, before other daemons that potentially access the rpmdb get started (abrt, dnfdaemon), basically just as soon as /etc, /usr and /var are mounted. Is there something else I should add, or something better to hook onto? Other finer details that I'm missing?
It'll need a preset to enable by default if this ends up being the route taken, but let's hear the feedback first before I go file the bug...
Zbyszek
On Mon, Apr 20, 2020 at 10:38:18AM +0300, Panu Matilainen wrote:
On 4/17/20 5:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Apr 17, 2020 at 04:48:11PM +0300, Panu Matilainen wrote:
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
[...]
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Looking at the details of how to do this now.
The idea is to install a generic "rebuild rpmdb on boot" one-shot service, which can be flagged for action by 'touch /var/lib/rpm/.rebuilddb'. That would be done from rpm %posttrans when the rpmdb default changes, basically:
'[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb'
Should it become necessary, the same mechanism can be used to convert back. This will of course trigger some "extra" rebuilds for anybody staying on BDB backend but I'd say that's a feature...
Shouldn't this be a one-time thing instead? E.g. '%triggerpostun rpm < n.n.n-n', where n.n.n-n is the first version with the changed default?
Not really, because with a once in a lifetime opportunity there are too many ways things can go wrong. Also, we need to be herding people away from BDB with increasing intensity so even if we allow them to stay on BDB in F33
While I don't disagree with this assessment, doing the conversion each time rpm is upgraded sounds wrong. I.e. if someone decides (for whatever reason) to skip the update in F33, they shouldn't be badgered until F34 comes along. When the switch is mandatory at some point (e.g. in F34), then we can update the scriptlet to perform the upgrade unconditionally.
Basically, "allow them to stay on BDB in F33" and "try to perform the upgrade each time rpm.rpm is upgraded" seem incompatible.
I'm thinking of something like this for /usr/lib/systemd/rpmdb-rebuild.service:
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb
DefaultDependencies=no
After=sysinit.target
Should we also add Requires=sysinit.target? I don't think we want this running on a boot where the basics are failing...
Yes.
Before=basic.target shutdown.target
Conflicts=shutdown.target
Hmm, what's with the shutdown.target? This is not a service that will remain active, so shutdown doesn't seem relevant.
Conflicts=shutdown.target is normally added to all units where DefaultDependencies=no, to mimic the dependencies that would be added by default. It ensures that the service is stopped before a shutdown. In most cases it would not matter, but it's more correct to have it.
[Service]
Type=oneshot
ExecStart=rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
(Service units have default dependency on basic.target, so if this is to be ordered before basic.target, it needs DefaultDependencies=no.)
I guess the question is rather: is running before basic.target actually reasonable or even desirable? At the very least, we'd need to also add
RequiresMountsFor=/var /var/tmp
...because obviously /var needs to be there for this. And that makes me wonder what else is missing that we'd need.
/var and /var/tmp would be ordered before local-fs.target, so the dependency on sysinit.target should be enough to handle this.
Zbyszek
On 4/20/20 1:07 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Mon, Apr 20, 2020 at 10:38:18AM +0300, Panu Matilainen wrote:
On 4/17/20 5:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Fri, Apr 17, 2020 at 04:48:11PM +0300, Panu Matilainen wrote:
On 3/26/20 1:32 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Thu, Mar 26, 2020 at 01:16:22PM +0200, Panu Matilainen wrote:
Right. I realize %posttrans is not a good idea. But *some* mechanism is necessary, because without that the change will mostly be a noop for most users. So I think this needs to be decided somehow.
[...]
- a one-shot service: this is easier to implement, it just needs to happen in one place. The hard part is making sure that the machine does not get rebooted while the upgrade is happening. This is in particular a problem with VMs and containers. The rebuild should be wrapped with systemd-inhibit and other guards to make it hard to interrupt.
Looking at the details of how to do this now.
The idea is to install a generic "rebuild rpmdb on boot" one-shot service, which can be flagged for action by 'touch /var/lib/rpm/.rebuilddb'. That would be done from rpm %posttrans when the rpmdb default changes, basically:
'[ -f /var/lib/rpm/Packages ] && touch /var/lib/rpm/.rebuilddb'
Should it become necessary, the same mechanism can be used to convert back. This will of course trigger some "extra" rebuilds for anybody staying on BDB backend but I'd say that's a feature...
Shouldn't this be a one-time thing instead? E.g. '%triggerpostun rpm < n.n.n-n', where n.n.n-n is the first version with the changed default?
Not really, because with a once in a lifetime opportunity there are too many ways things can go wrong. Also, we need to be herding people away from BDB with increasing intensity so even if we allow them to stay on BDB in F33
While I don't disagree with this assessment, doing the conversion each time rpm is upgraded sounds wrong. I.e. if someone decides (for whatever reason) to skip the update in F33, they shouldn't be badgered until F34 comes along. When the switch is mandatory at some point (e.g. in F34), then we can update the scriptlet to perform the upgrade unconditionally.
Basically, "allow them to stay on BDB in F33" and "try to perform the upgrade each time rpm.rpm is upgraded" seem incompatible.
Where does the service say anything about conversion? It merely rebuilds the rpmdb, which is something BDB in particular will benefit from in two ways: it restores performance, and fixes any unnoticed breakage from indexes going out of sync. So it's a secret service being done to BDB users ;) which will as a side-effect handle database conversions too.
I'm thinking of something like this for /usr/lib/systemd/rpmdb-rebuild.service:
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
[Unit]
Description=RPM database rebuild
ConditionPathExists=/var/lib/rpm/.rebuilddb
DefaultDependencies=no
After=sysinit.target
Should we also add Requires=sysinit.target? I don't think we want this running on a boot where the basics are failing...
Yes.
Before=basic.target shutdown.target
Conflicts=shutdown.target
Hmm, what's with the shutdown.target? This is not a service that will remain active, so shutdown doesn't seem relevant.
Conflicts=shutdown.target is normally added to all units where DefaultDependencies=no, to mimic the dependencies that would be added by default. It ensures that the service is stopped before a shutdown. In most cases it would not matter, but it's more correct to have it.
[Service]
Type=oneshot
ExecStart=rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
(Service units have default dependency on basic.target, so if this is to be ordered before basic.target, it needs DefaultDependencies=no.)
I guess the question is rather: is running before basic.target actually reasonable or even desirable? At the very least, we'd need to also add
RequiresMountsFor=/var /var/tmp
...because obviously /var needs to be there for this. And that makes me wonder what else is missing that we'd need.
/var and /var/tmp would be ordered before local-fs.target, so the dependency on sysinit.target should be enough to handle this.
For local /var yes, but basic.target has this:
# We support /var, /tmp, /var/tmp, being on NFS, but we don't pull in
# remote-fs.target by default, hence pull them in explicitly here. Note that we
# require /var and /var/tmp, but only add a Wants= type dependency on /tmp, as
# we support that unit being masked, and this should not be considered an error.
RequiresMountsFor=/var /var/tmp
Wants=tmp.mount
Not that BDB rpmdb on NFS is supported, except for read-only mounts maybe...
- Panu -
On Mon, Apr 20, 2020 at 02:40:36PM +0300, Panu Matilainen wrote:
On 4/20/20 1:07 PM, Zbigniew Jędrzejewski-Szmek wrote:
On Mon, Apr 20, 2020 at 10:38:18AM +0300, Panu Matilainen wrote:
On 4/17/20 5:09 PM, Zbigniew Jędrzejewski-Szmek wrote:
[Service]
Type=oneshot
ExecStart=rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
(Service units have default dependency on basic.target, so if this is to be ordered before basic.target, it needs DefaultDependencies=no.)
I guess the question is rather: is running before basic.target actually reasonable or even desirable? At the very least, we'd need to also add
RequiresMountsFor=/var /var/tmp
...because obviously /var needs to be there for this. And that makes me wonder what else is missing that we'd need.
/var and /var/tmp would be ordered before local-fs.target, so the dependency on sysinit.target should be enough to handle this.
For local /var yes, but basic.target has this:
# We support /var, /tmp, /var/tmp, being on NFS, but we don't pull in
# remote-fs.target by default, hence pull them in explicitly here. Note that we
# require /var and /var/tmp, but only add a Wants= type dependency on /tmp, as
# we support that unit being masked, and this should not be considered an error.
RequiresMountsFor=/var /var/tmp
Wants=tmp.mount
Not that BDB rpmdb on NFS is supported, except for read-only mounts maybe...
Indeed, I stand corrected.
Zbyszek
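(Pulling the suggestions from this subthread together, the unit would end up looking roughly like this; a sketch only, not necessarily what finally ships in the rpm package.)

---
[Unit]
Description=RPM database rebuild
# the flag file is created by rpm's %posttrans when a rebuild is wanted
ConditionPathExists=/var/lib/rpm/.rebuilddb
DefaultDependencies=no
After=sysinit.target
Requires=sysinit.target
RequiresMountsFor=/var /var/tmp
Before=basic.target shutdown.target
Conflicts=shutdown.target

[Service]
Type=oneshot
ExecStart=/usr/bin/rpmdb --rebuilddb
ExecStartPost=rm -f /var/lib/rpm/.rebuilddb

[Install]
WantedBy=basic.target
---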
On Thu, Mar 26, 2020, at 8:35 AM, Zbigniew Jędrzejewski-Szmek wrote:
Relying on the target distro management stack sounds nice, but is actually problematic: how do you run the next version before you install the next version? Sure, you can install stuff to some temporary location and run the tools from there, but then you are running in a very custom franken-environment.
This is exactly what rpm-ostree does - it always makes a new rootfs (hardlinked) and runs scripts in there (using bubblewrap).
Such a mode of running would face the same issue as anaconda installer: it would only get tested during the upgrade season, languishing otherwise.
You have correctly identified the rationale behind why rpm-ostree works the way it does. Every single upstream commit and every Fedora CoreOS release is gated on this working. However, we have the opposite problem: extending this model to *also* support live updates is hard: https://github.com/coreos/rpm-ostree/issues/639
As far as the database transition goes... today rpm-ostree generates the rpmdb server side by default, and updating it is a transactional operation (along with the rest of the transaction), so dunno, I guess at some point we'll just flip the default build-side. We may need to make it a build-time option. It would be ideal if Fedora N-1 at least supported reading the new format, because if one does an upgrade and then, e.g., the desktop a11y stack happens to be broken and you roll back, the old rpm-ostree may fail to parse the RPM database in the new deployment root. Although, if the new rpmdb is in a different place, then maybe it will just appear to be nonexistent, and I *think* that would work.
This is probably best tracked in https://github.com/coreos/rpm-ostree/issues/1964
I'm seeing this when running fedora-review:
$ fedora-review -b 1838033
INFO: Processing bugzilla bug: 1838033
...
INFO: Installing built package(s)
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
INFO: Active plugins: Generic, Shell-api, Java
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
Last metadata expiration check: 0:00:10 ago on Wed May 20 15:08:57 2020.
[... the same warning repeats dozens of times, interleaved with further "Last metadata expiration check" lines ...]
INFO: ExclusiveArch dependency checking disabled, enable with EXARCH flag
Is this expected? Can we please do something to avoid those warnings?
Zbyszek
On Wed, May 20, 2020 at 11:06 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
I'm seeing this when running fedora-review:
$ fedora-review -b 1838033
INFO: Processing bugzilla bug: 1838033
...
INFO: Installing built package(s)
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
[...]
INFO: ExclusiveArch dependency checking disabled, enable with EXARCH flag
Is this expected? Can we please do something to avoid those warnings?
This is expected if you're not running on a host that is already using the SQLite rpmdb. The warning appears because the host created a bdb rpmdb and no sqlite rpmdb; rpm switches back from sqlite to bdb in that situation, since that functionality is still compiled in.
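(For what it's worth, a quick way to check what a given root is actually set up with:)

# configured default backend for this root
rpm -E '%_db_backend'
# a Packages file under /var/lib/rpm means a BDB database is (still) present
ls -l /var/lib/rpm/Packages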
I think you might have a cached buildroot that was populated using the BDB backend, so I guess if you clean the mock caches, the problem should go away.
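(For example, scrubbing everything should do it; the config name here is only an example and depends on how fedora-review invokes mock.)

# wipe the cached mock chroots and root caches
mock -r fedora-rawhide-x86_64 --scrub=all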
On Wed, 2020-05-20 at 15:04 +0000, Zbigniew Jędrzejewski-Szmek wrote:
I'm seeing this when running fedora-review:
$ fedora-review -b 1838033
INFO: Processing bugzilla bug: 1838033
...
INFO: Installing built package(s)
warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
[...]
INFO: ExclusiveArch dependency checking disabled, enable with EXARCH flag
Is this expected? Can we please do something to avoid those warnings?
Zbyszek
--
Igor Raits ignatenkobrain@fedoraproject.org
On Wed, May 20, 2020 at 11:31:37 -0400, Neal Gompa ngompa13@gmail.com wrote:
On Wed, May 20, 2020 at 11:06 AM Zbigniew Jędrzejewski-Szmek zbyszek@in.waw.pl wrote:
I'm seeing this when running fedora-review:
$ fedora-review -b 1838033 INFO: Processing bugzilla bug: 1838033 ... INFO: Installing built package(s) warning: Found bdb Packages database while attempting sqlite backend: using bdb backend.
I got this on a machine I can't reboot right now, and I ran 'rpm --rebuilddb', which appears to have switched the database over. At least I stopped getting the messages when running dnf.
On Wed, May 20, 2020 at 05:56:23PM +0200, Igor Raits wrote:
I think you might have a cached buildroot that was populated using the BDB backend, so I guess if you clean the mock caches, the problem should go away.
Thanks, the issue went away after dropping caches.
Zbyszek