Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
On Sun, 2025-05-04 at 15:53 -0700, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
No idea what you mean by 'sub'.
poc
On 5/5/25 3:59 AM, Patrick O'Callaghan wrote:
On Sun, 2025-05-04 at 15:53 -0700, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
No idea what you mean by 'sub'.
poc
substitute (replacement)
On Mon, 2025-05-05 at 10:54 -0700, ToddAndMargo via users wrote:
On 5/5/25 3:59 AM, Patrick O'Callaghan wrote:
On Sun, 2025-05-04 at 15:53 -0700, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
No idea what you mean by 'sub'.
poc
substitute (replacement)
Dump/restore is an ancient set of commands from the days of reel-to-reel tape drives, and is designed for backup of entire volumes. There are several superior backup systems around now, but for an existing backup made with dump I imagine the only option is some version of restore.
poc
---- On Mon, 05 May 2025 14:19:05 -0700 Patrick O'Callaghan pocallaghan@gmail.com wrote ---
Dump/restore is an ancient set of commands from the days of reel-to-reel tape drives, and is designed for backup of entire volumes. There are several superior backup systems around now, but for an existing backup made with dump I imagine the only option is some version of restore.
poc
You convinced me. Do you have a recommendation you like for replacing dump/restore?
On Mon, 2025-05-05 at 19:58 -0700, toddandmargo via users wrote:
---- On Mon, 05 May 2025 14:19:05 -0700 Patrick O'Callaghan pocallaghan@gmail.com wrote ---
Dump/restore is an ancient set of commands from the days of reel-to-reel tape drives, and is designed for backup of entire volumes. There are several superior backup systems around now, but for an existing backup made with dump I imagine the only option is some version of restore.
poc
You convinced me. Do you have a recommendation you like to replace dump/restore?
I don't know your specific requirements, but I use BorgBackup (dnf install borgbackup). It's widely used, actively maintained, has deduplication, encryption and compression features, can manage local and remote backups, has hooks for special cases such as databases.
I run it from a nightly script which is actually a separate package called borgmatic. The config file can be somewhat quirky but once configured you can basically forget about it.
See: https://www.borgbackup.org/
poc
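For anyone wanting to see what that looks like in practice, a minimal Borg session might be something like the following; the repository path, archive name pattern and retention values are only examples, not anything from this thread:
# borg init --encryption=repokey /backup/borg-repo
# borg create --stats --compression zstd /backup/borg-repo::'{hostname}-{now:%Y-%m-%d}' /home /etc
# borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /backup/borg-repo
# borg list /backup/borg-repo
Borgmatic wraps essentially these same steps in a YAML config plus a single "borgmatic" command, which is what makes it convenient to run from a nightly cron job or timer.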
On 5/6/25 2:18 AM, Patrick O'Callaghan wrote:
On Mon, 2025-05-05 at 19:58 -0700, toddandmargo via users wrote:
---- On Mon, 05 May 2025 14:19:05 -0700 Patrick O'Callaghan pocallaghan@gmail.com wrote ---
Dump/restore is an ancient set of commands from the days of reel-to-reel tape drives, and is designed for backup of entire volumes. There are several superior backup systems around now, but for an existing backup made with dump I imagine the only option is some version of restore.
poc
You convinced me. Do you have a recommendation you like to replace dump/restore?
I don't know your specific requirements, but I use BorgBackup (dnf install borgbackup). It's widely used, actively maintained, has deduplication, encryption and compression features, can manage local and remote backups, has hooks for special cases such as databases.
I run it from a nightly script which is actually a separate package called borgmatic. The config file can be somewhat quirky but once configured you can basically forget about it.
See: https://www.borgbackup.org/
poc
Thank you!
On Mon, 5 May 2025 at 18:54, ToddAndMargo via users < users@lists.fedoraproject.org> wrote:
On 5/5/25 3:59 AM, Patrick O'Callaghan wrote:
On Sun, 2025-05-04 at 15:53 -0700, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
Is there an ext4 sub for dump/restore that is maintained by the repo?
No idea what you mean by 'sub'.
substitute (replacement)
Did you see Tim's comment: https://sourceforge.net/p/dump/bugs/158/#c975 ? You may well just be tilting at windmills.
Your 3rd link on the BZ (https://sourceforge.net/p/dump/support-requests/19/) includes fairly straightforward steps you could follow (with some minor modification) that would permit you to build the latest package until it's resolved upstream.
Dump does appear to still be actively maintained: https://sourceforge.net/p/dump/code/ci/main/tree/
If you want alternative options, the internet is replete with them: https://opensource.com/article/19/3/backup-solutions
On 5/5/25 3:08 PM, Will McDonald wrote:
On Mon, 5 May 2025 at 18:54, ToddAndMargo via users <users@lists.fedoraproject.org> wrote:
On 5/5/25 3:59 AM, Patrick O'Callaghan wrote:
On Sun, 2025-05-04 at 15:53 -0700, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
Is there an ext4 sub for dump/restore that is maintained by the repo?
No idea what you mean by 'sub'.
substitute (replacement)
Did you see Tim's comment: https://sourceforge.net/p/dump/bugs/158/#c975 ? You may well just be tilting at windmills.
Your 3rd link on the BZ (https://sourceforge.net/p/dump/support-requests/19/) includes fairly straightforward steps you could follow (with some minor modification) that would permit you to build the latest package until it's resolved upstream.
Dump does appear to still be actively maintained: https://sourceforge.net/p/dump/code/ci/main/tree/
If you want alternative options, the internet is replete with them: https://opensource.com/article/19/3/backup-solutions
I was hoping for a recommendation from an actual user.
On Mon, 5 May 2025 23:08:16 +0100, Will McDonald wrote:
... 3rd link on the BZ (https://sourceforge.net/p/dump/support-requests/19/) includes fairly straightforward steps you could follow (with some minor modification) that would permit you to build the latest package until it's resolved upstream.
It's even easier than that, because for a very, very long time now you have been able to extract a src.rpm package with "rpm -i ...". No need to use rpm2cpio.
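In case it helps anyone following along, the rebuild being described boils down to roughly this; it assumes rpm-build and dnf-plugins-core are installed, and the spec and sources land under ~/rpmbuild by default:
$ dnf download --source dump                 # fetches dump-*.src.rpm from the source repos
$ rpm -i dump-*.src.rpm                      # unpacks it into ~/rpmbuild/SOURCES and SPECS
$ sudo dnf builddep ~/rpmbuild/SPECS/dump.spec
$ rpmbuild -ba ~/rpmbuild/SPECS/dump.spec    # optionally after dropping in a newer upstream tarball
$ sudo dnf install ~/rpmbuild/RPMS/x86_64/dump-*.rpm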
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
dump/restore is actively supported in "upstream", but not in the Fedora Repo.
I am thinking it is time to upgrade to something else. I have seen several web sites with tons of recommendations.
I was hoping for an actual user's recommendation. They are always much better than web sites.
On Mon, 5 May 2025 at 23:53, ToddAndMargo via users < users@lists.fedoraproject.org> wrote:
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
dump/restore is actively supported in "upstream", but not in the Fedora Repo.
I believe this might be intended as a response to me, but you've responded to the beginning of the thread?
dump is maintained upstream. It's also maintained by the Fedora team:
$ dnf -q changelog dump
Changelogs for dump-1:0.4-0.58.b47.fc42.x86_64
* Thu Jan 16 12:00:00 2025 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.58.b47
- Rebuilt for https://fedoraproject.org/wiki/Fedora_42_Mass_Rebuild
* Wed Jul 17 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.57.b47
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
* Wed Jan 24 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.56.b47
- Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
* Fri Jan 19 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.55.b47
- Rebuilt for https://fedoraproject.org/wiki/Fedora_40_Mass_Rebuild
Intimating it's not maintained by Fedora feels disingenuous and could be construed as a bit of a slight to all the volunteers who build, test and release the distro?
Perhaps it's not maintained at a cadence you need, that would be fair. This is a free, open source distro. If this is something you really care about, maybe become a maintainer?
You haven't responded to the fact this could just be noise, either. Are your files being restored, just with some extraneous warnings? If so, maybe just hold out till the next mass rebuild and retest.
If they aren't being restored, and it's painful for you, you could just roll your own interim RPM until the next mass rebuild.
---- On Mon, 05 May 2025 16:25:51 -0700 Will McDonald wmcdonald@gmail.com wrote ---
On Mon, 5 May 2025 at 23:53, ToddAndMargo via users users@lists.fedoraproject.org wrote:
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
dump/restore is actively supported in "upstream", but not in the Fedora Repo.
I believe this might be intended as a response to me, but you've responded to the beginning of the thread?
dump is maintained upstream. It's also maintained by the Fedora team:
$ dnf -q changelog dump
Changelogs for dump-1:0.4-0.58.b47.fc42.x86_64
* Thu Jan 16 12:00:00 2025 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.58.b47
* Wed Jul 17 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.57.b47
* Wed Jan 24 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.56.b47
* Fri Jan 19 12:00:00 2024 Fedora Release Engineering <releng@fedoraproject.org> - 1:0.4-0.55.b47
It is not maintained by the Fedora team. Of the four entries you showed me, he is just rebuilding the same b47 srpm for each new version of Fedora. Thank you for helping me make my point.
Intimating it's not maintained by Fedora feels disingenuous and could be construed as a bit of a slight to all the volunteers who build, test and release the distro?
Only this maintainer. The rest are my heroes. They are very responsive and very professional.
He is only rebuilding an old srpm, including a bug that frightens the heck out of users. And he ignored my request to correct the problem. He is skating.
Perhaps it's not maintained at a cadence you need, that would be fair. This is a free, open source distro. If this is something you really care about, maybe become a maintainer?
Sorry. I am not smart enough. I know my limitations.
You haven't responded to the fact this could just be noise, either. Are your files being restored, just with some extraneous warnings? If so, maybe just hold out till the next mass rebuild and retest.
It may be "just noise"; the two times I did a restore, it was a single small file. But the error scared the heck out of me. And the "noise" was serious enough for upstream to correct it.
If they aren't being restored, and it's painful for you, you could just roll your own interim RPM until the next mass rebuild.
Or find a better program. It has been pointed out to me this is a very old program originally made for tape backup. Do you have a recommendation?
Thank you for the thoughtful response!
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
This caught my attention: the ability to mount the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
On Mon, 2025-05-05 at 20:29 -0700, ToddAndMargo via users wrote:
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
This caught my attention: the ability to mount the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
Yes, though I don't generally do that. My backup medium is a pair of mirrored hard-drives (using BTRFS) on an external USB enclosure and I prefer to leave it offline except when needed. I have some scripts to mount/unmount as required.
poc
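A mount/unmount helper like the one Patrick mentions does not have to be anything elaborate. A sketch, with the device label and mount point invented for the example:
#!/bin/bash
# backup-disk on|off -- bring the external backup drive online only while needed
set -e
DEV=/dev/disk/by-label/BACKUP    # example label; substitute your drive's label or UUID
MNT=/raid/Backups                # example mount point

case "$1" in
    on)  mount "$DEV" "$MNT" ;;
    off) umount "$MNT" ;;
    *)   echo "usage: $0 on|off" >&2; exit 1 ;;
esac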
On 5/6/25 2:20 AM, Patrick O'Callaghan wrote:
On Mon, 2025-05-05 at 20:29 -0700, ToddAndMargo via users wrote:
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
This caught my attention: the ability to mount the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
Yes, though I don't generally do that. My backup medium is a pair of mirrored hard-drives (using BTRFS) on an external USB enclosure and I prefer to leave it offline except when needed. I have some scripts to mount/unmount as required.
poc
The ability to recover with a standard file manager is way, way up there on my list!
Thank you!
On 5/6/25 11:16 AM, ToddAndMargo via users wrote:
On 5/6/25 2:20 AM, Patrick O'Callaghan wrote:
On Mon, 2025-05-05 at 20:29 -0700, ToddAndMargo via users wrote:
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
This caught my attention: the ability to mount the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
Yes, though I don't generally do that. My backup medium is a pair of mirrored hard-drives (using BTRFS) on an external USB enclosure and I prefer to leave it offline except when needed. I have some scripts to mount/unmount as required.
poc
The ability to recover with a standard file manager is way, way up there on my list!
If you use btrfs, you can easily do differential (but full) backups whenever you want. And they are directly mountable and restorable.
On 5/6/25 2:11 PM, ToddAndMargo via users wrote:
On 5/6/25 12:59 PM, Samuel Sieb wrote:
If you use btrfs, you can easily do differential (but full) backups whenever you want. And they are directly mountable and restorable.
I am using ext4 everywhere.
I have been converting all my ext4 to btrfs.
What do you mean by "differential (but full)"?
It only has to transfer the difference between the snapshots, but the end result is a full mountable copy.
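For what it's worth, the snapshot-and-send workflow Samuel is describing looks roughly like this; the subvolume and backup paths are examples, both sides must be btrfs, and send needs read-only snapshots:
# First run: seed the backup disk with a full copy
btrfs subvolume snapshot -r /home /home/.snap-2025-05-06
btrfs send /home/.snap-2025-05-06 | btrfs receive /run/media/backup/

# Later runs: transfer only the changes since the previous snapshot,
# yet what lands on the backup disk is a complete, mountable snapshot
btrfs subvolume snapshot -r /home /home/.snap-2025-05-07
btrfs send -p /home/.snap-2025-05-06 /home/.snap-2025-05-07 | btrfs receive /run/media/backup/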
On 5/5/25 8:29 PM, ToddAndMargo via users wrote:
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
This caught my attention: the ability to mount the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
It is "incremental". I need full backups, and I need to be able to specify the /dev/* of the drive I want backed up.
Anyone know of a good full backup command line utility?
On Tue, 2025-05-06 at 20:51 -0700, ToddAndMargo via users wrote:
This caught my attention: the ability to mount
the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
It is "incremental" I need full backups and I need to be able to specify the /dev/* of the drive I want backed up.
Anyone know of a good full backup command line utility?
Again, Borg. You can easily use it as a full backup if that's what you want, using the 'create' option.
poc
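To make that concrete: every "borg create" run produces a complete archive that can be listed and extracted on its own, whether or not earlier archives still exist. A sketch with purely illustrative paths and names:
$ borg init --encryption=none /lin-bak/borg            # one-time repository setup
$ borg create --stats /lin-bak/borg::full-2025-05-07 /home
$ cd /home/temp/restore
$ borg extract /lin-bak/borg::full-2025-05-07          # restores the whole archive here
Deduplication means later archives only store chunks that changed, but each archive still restores as a full copy.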
On 5/7/25 3:13 AM, Patrick O'Callaghan wrote:
On Tue, 2025-05-06 at 20:51 -0700, ToddAndMargo via users wrote:
This caught my attention: the ability to mount
the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
It is "incremental" I need full backups and I need to be able to specify the /dev/* of the drive I want backed up.
Anyone know of a good full backup command line utility?
Again, Borg. You can easily use it as a full backup if that's what you want, using the 'create' option.
poc
Hi Patrick,
This is why I like hearing from actual users.
The directions page is filled with how to set up incremental backups, with no discussion I could find of full backups. It led me to believe it was doing only incrementals after one initial full backup. (I am allergic to incremental backups -- they make me swear when it comes time to recover things.)
Thank you!
Oh and dump-0.4-0.59.b52.fc41.x86_64 just hit. And, Yippee!! It Core Dumps!!!
/usr/sbin/dump -0a -z -f /lin-bak/2025-05-07_rootExt4Dump.gz /
free(): invalid pointer
/home/linuxutil/backup-rn6: line 640: 28450 Aborted (core dumped) /usr/sbin/dump -0a -z -f /lin-bak/2025-05-07_rootExt4Dump.gz /
I reported the core dump at https://sourceforge.net/p/dump/bugs/185/
-T
On 5/7/25 4:00 PM, ToddAndMargo via users wrote:
I am allergic to incremental backups -- they make me swear when it comes time to recover things.
If you are wondering about that last statement, think of customers who refuse to upgrade any of their equipment until it comes down around their ears. (I had one happen two months ago.)
I am still trying to figure out a way to get customers to actually read their backup report. I put the freakin' status in their email's subject line. They don't ever have to read the body. But nooooooooo. Can't be bothered.
I tell myself I can't care about their stuff any more than they do. But still ....
On Wed, 2025-05-07 at 16:05 -0700, ToddAndMargo via users wrote:
I am still trying to figure out a way to get customers to actually read their backup report. I put the freakin' status in their email's subject line. They don't ever have to read the body. But nooooooooo. Can't be bothered.
I tell myself I can't care about their stuff any more than they do. But still ....
I used to fix up Windows screw-ups for a few people; you could never get them to do what they needed to do. Or, more to the point, get them to stop doing things they shouldn't do.
I *finally* convinced one person to stop installing random things, to say no to various extra add-ons, and pay some heed to security. They had called me to fix several problems on their system. One was a momentary pop-up of two naked big hairy guys, every time they logged in. I had an audience that day, and lots of things that required reboots between fixing them and the next thing. So I left fixing *that* problem until last.
On Wed, 2025-05-07 at 16:00 -0700, ToddAndMargo via users wrote:
(I am allergic to incremental backups -- they make me swear when it comes time to recover things.)
That's why I use mirrored drives. Belt and braces. Plus I can store a *lot* of history because of dedupes and compression:
# borg info /raid/Backups/Borg
Repository ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Location: /raid/Backups/Borg
Encrypted: No
Cache: /root/.cache/borg/XXXXXXXXXXXXXXXXXXXXXXXXXX
Security dir: /root/.config/borg/security/XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
All archives:                6.86 TB              5.62 TB            319.45 GB
                       Unique chunks         Total chunks
Chunk index:                  690715             16357552
# borg list /raid/Backups/Borg
Bree-2023-07-31_01:00    Mon, 2023-07-31 01:00:08 [04f4b7f87f30153c20ca2e6df1fbd01e516e5ab683abbd08cd1a1689baa755c6]
Bree-2023-12-31_01:00    Sun, 2023-12-31 01:00:09 [ee9ff1153e8bacd6573313598ea9b98852520d076e688dcd8df3c8e37377ac1d]
fedora-2024-08-02_01:00  Fri, 2024-08-02 01:00:02 [da5043fc8ebe066c6951cdb3f5b90f8a8eeef08d0595b3c23a9831fede8ed330]
fedora-2024-08-03_01:00  Sat, 2024-08-03 01:00:00 [f006bf90b7484221b13eabe4ef59f540b0fc04d55f2f856f3e1e8d4439e3f54a]
Bree-2024-11-30_01:00    Sat, 2024-11-30 01:00:07 [56412f702c50a00634a673c327d535f9b97350b9feba540125e58aa0bdb62294]
Bree-2024-12-31_01:00    Tue, 2024-12-31 01:00:07 [680dec8cb2f911636023848d0ae313ba054236a36766989407239467d51157ad]
Bree-2025-01-31_01:00    Fri, 2025-01-31 01:00:07 [fadec70e3bfc0c22a8302eb80348fb0c087fc1e5fe26ae1dddf59853d8b20ae8]
Bree-2025-02-28_01:00    Fri, 2025-02-28 01:00:07 [8cb9cf642a27605042dabac96e4a6135b796a6c4a30469c1f632b509d415a30f]
Bree-2025-03-31_01:00    Mon, 2025-03-31 01:00:06 [3f8d38514ac373dcda4d527d87f6f113edba8af9345f17e21592692fa9c77c5d]
Bree-2025-04-06_01:00    Sun, 2025-04-06 01:00:07 [b65ebbedb30279a9bd2d78db3e7c3ace1ab620dc07fbdefe241909423d5ccb3d]
Bree-2025-04-13_01:00    Sun, 2025-04-13 01:00:07 [e77adf684fb114f75e6e450daa7d848af2718a73fb1061f4838cc052fccea28d]
Bree-2025-04-20_01:00    Sun, 2025-04-20 01:00:07 [d8afa87e3771f75ec8e71734e90ae4f6126827ecfe644946c5062c36083a4a54]
Bree-2025-04-27_01:00    Sun, 2025-04-27 01:00:06 [e53aef71240aeb003266a9d773763b9ba0538d6003b50766ba6be65f91d1b0b1]
Bree-2025-04-30_01:00    Wed, 2025-04-30 01:00:07 [afad16c751ed1efee4a0d4d426184c74116d425697dc2d2ed5c66fb1dc3e57f4]
Bree-2025-05-02_01:00    Fri, 2025-05-02 01:00:07 [6f7c495f1752538e6a1eb4b2a063ba284f4e8a92bdeee1613f2339f5d02fe131]
Bree-2025-05-03_01:00    Sat, 2025-05-03 01:00:07 [39e29641e3ef9aabb57bc8f43077c2f7b2ebba5c09468bd474cb1b6eae44a489]
Bree-2025-05-04_01:00    Sun, 2025-05-04 01:00:06 [31d1444a0768997ad6abc63f88dd591adf37bcb6f172f48cd8c20b030d3a9b37]
Bree-2025-05-05_01:00    Mon, 2025-05-05 01:00:07 [ae96c71055115015451f1d565bd1399c2afe6a53d245b21849f4cf00470e25f2]
Bree-2025-05-06_01:00    Tue, 2025-05-06 01:00:07 [5e1c00df6256b591c5dcf85113f0a87ea79f521d5ed2d0754c4dd210c3500a59]
Bree-2025-05-07_01:00    Wed, 2025-05-07 01:00:07 [b9a7a2501d6ae97c8ae6511837322b0512deb0c19a6032bd6e8a5d4de9d7947f]
Each half of the mirrored drive is a 1TB Samsung HD, retired after I upgraded my system to 4TB SSDs. Note that only about 30% of it is actually being used (YMMV of course). BTRFS does checksums on the chunks for extra reliability.
poc
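In case anyone wants to reproduce that kind of mirrored backup target, the BTRFS side is short; device names here are examples, and mkfs wipes both drives:
# mkfs.btrfs -L Backups -d raid1 -m raid1 /dev/sdb /dev/sdc
# mount /dev/disk/by-label/Backups /raid/Backups      # mounting either device brings up the whole mirror
# btrfs filesystem usage /raid/Backups                # confirms the raid1 data/metadata profiles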
On 5/7/25 3:13 AM, Patrick O'Callaghan wrote:
On Tue, 2025-05-06 at 20:51 -0700, ToddAndMargo via users wrote:
This caught my attention: the ability to mount
the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
It is "incremental" I need full backups and I need to be able to specify the /dev/* of the drive I want backed up.
Anyone know of a good full backup command line utility?
Again, Borg. You can easily use it as a full backup if that's what you want, using the 'create' option.
poc
Hi Patrick,
Borg question:
I want to backup up /dev/nvme0n1p2
Problem: the backup drive is /dev/sda1
and is mounted on /lin-bak
Dump will only back up the partition associated with the path. It will not back up the drive mounted on /lin-bak. For example:
# /usr/sbin/dump -0a -z -f /lin-bak/2025-05-07_rootExt4Dump.gz /
"/" is /dev/nvme0n1p2 and I could have also used that in place of "/".
My concern with Borg is that by telling it to back up "/", it will also catch everything in /lin-bak, which is considerable and not limited to my dump archives.
How do you handle the issue?
-T
On 5/7/25 4:16 PM, ToddAndMargo via users wrote:
On 5/7/25 3:13 AM, Patrick O'Callaghan wrote:
On Tue, 2025-05-06 at 20:51 -0700, ToddAndMargo via users wrote:
This caught my attention: the ability to mount
the archive in your file system
https://borgbackup.readthedocs.io/en/stable/usage/mount.html#
Anyone have any experience with it?
It is "incremental" I need full backups and I need to be able to specify the /dev/* of the drive I want backed up.
Anyone know of a good full backup command line utility?
Again, Borg. You can easily use it as a full backup if that's what you want, using the 'create' option.
poc
Hi Patrick,
Borg question:
I want to backup up /dev/nvme0n1p2
Problem: the backup drive is /dev/sda1
and is mounted on /lin-bak
Dump will only back up the partition associated with the path. It will not back up the drive mounted on /lin-bak. For example:
# /usr/sbin/dump -0a -z -f /lin-bak/2025-05-07_rootExt4Dump.gz /
"/" is /dev/nvme0n1p2 and I could have also used that in place of "/".
My concern with Borg is that by telling it to backup "/", it will also catch everything in /lin-bak, which is considerable and not exclusive to only my dump archives.
How do you handle the issue?
-T
Would this get around the problem?
# borg create --exclude /lin-bak --compression auto,zstd,7 /lin-bak::2025-05-07_rootExt4Borg /
On Wed, 2025-05-07 at 16:47 -0700, ToddAndMargo via users wrote:
My concern with Borg is that by telling it to backup "/", it will also catch everything in /lin-bak, which is considerable and not exclusive to only my dump archives.
How do you handle the issue?
-T
Would this get around the problem?
# borg create --exclude /lin-bak --compression auto,zstd,7 /lin-bak::2025-05-07_rootExt4Borg /
The '--exclude' option would be useful for a one-shot backup. However, for regular nightlies I prefer to use Borgmatic and configure it appropriately. It has a config file for the various options, including excludes.
poc
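For reference, the relevant bit of a borgmatic config might look something like the sketch below. The key names follow recent borgmatic releases (older versions nest them under location:/storage:/retention: sections), and the paths and retention values are only examples:
# /etc/borgmatic/config.yaml (abridged)
source_directories:
    - /
repositories:
    - path: /lin-bak/borg
exclude_patterns:
    - /lin-bak
    - /proc
    - /sys
    - /dev
    - /run
compression: zstd
keep_daily: 7
keep_weekly: 4
keep_monthly: 6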
On 5/4/25 3:53 PM, ToddAndMargo via users wrote:
Hi All,
I have two servers affected by:
restore: <name unknown>: ftruncate: Invalid argument https://bugzilla.redhat.com/show_bug.cgi?id=2359295
This is pretty critical to me. And pretty much anyone using dump/restore. The maintainer seems to be ignoring this.
Is there an ext4 sub for dump/restore that is maintained by the repo?
Many thanks, -T
Follow up. dump/restore has been fixed.
With borg, I backed up three files and restored them (somewhere else).
Originals:
$ dd count=0 bs=1M seek=100 of=sparseFile
$ du --bytes KVM-W10.raw KVM-W11.raw sparseFile
64_424_509_440   KVM-W10.raw
133_143_986_176  KVM-W11.raw
104_857_600      sparseFile
$ du --block-size=1 KVM-W10.raw KVM-W11.raw sparseFile
48_891_629_568  KVM-W10.raw
51_850_604_544  KVM-W11.raw
0               sparseFile
$ sha256sum KVM-W10.raw KVM-W11.raw sparseFile
67965ba41959f8e43df5710bfb9f8e95f741b89ead48c94a2206d786e80d23cc  /home/kvm/KVM-W10.raw
0b25189cdd77f9b9bbe663e06b73916ff9cce27f2c430358910977c062dfa8ba  KVM-W11.raw
20492a4d0d84f8beb1767f6616229f85d44c2827b64bdbfb260ee12fa1109e0e  sparseFile
Recovered files:
# du --block-size=1 /home/temp/borg/*
64_424_521_728   /home/temp/borg/KVM-W10.raw
133_144_117_248  /home/temp/borg/KVM-W11.raw
104_857_600      /home/temp/borg/sparseFile
# sha256sum /home/temp/borg/*
67965ba41959f8e43df5710bfb9f8e95f741b89ead48c94a2206d786e80d23cc  /home/temp/borg/KVM-W10.raw
0b25189cdd77f9b9bbe663e06b73916ff9cce27f2c430358910977c062dfa8ba  /home/temp/borg/KVM-W11.raw
20492a4d0d84f8beb1767f6616229f85d44c2827b64bdbfb260ee12fa1109e0e  /home/temp/borg/sparseFile
So the checksum was equivalent, but I lost my sparseness.
As a test, now that dump/restore is working again, I backed up a sparse file and restored:
Sparse file backed up and restored with dump/restore:
# du --bytes *
104857600  sparseFile
104857600  sparseFile.orig
# du --block-size=1 *
0  sparseFile
0  sparseFile.orig
# sha256sum *
20492a4d0d84f8beb1767f6616229f85d44c2827b64bdbfb260ee12fa1109e0e sparseFile
20492a4d0d84f8beb1767f6616229f85d44c2827b64bdbfb260ee12fa1109e0e sparseFile.orig
dump/restore gives back exactly what it backed up. Exactly.
Borg is incapable of the same result. And I am backing up a lot of qemu-kvm virtual drives. They are sparse files and large. If borg cannot deal with sparse files, then it is not a good candidate for replacing dump/restore. Borg will give back equivalent files, but they will be much larger, wasting a lot of drive space.
Hi.
On Thu, 08 May 2025 22:28:43 -0700 ToddAndMargo via users wrote:
With borg, I backed up three files and restored them (somewhere else).
...
So the checksum was equivalent, but I lost my sparseness.
...
As a test, now that dump/restore is working again, I backed up a sparse file and restored:
...
dump/restore gives back exactly what it backed up. Exactly.
...
Borg is incapable of the same result. ...
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
In short (under /tmp):
borg init --encryption none /tmp/borg/repo
borg create --sparse /tmp/borg/repo::sparseFile sparseFile
cd ../dst
borg extract --sparse /tmp/borg/repo::sparseFile sparseFile
du -m ../dst/sparseFile
0    ../dst/sparseFile
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
Hi.
On Thu, 08 May 2025 22:28:43 -0700 ToddAndMargo via users wrote:
With borg, I backed up three files and restored them (somewhere else).
...
So the checksum was equivalent, but I lost my sparseness.
...
As a test, now that dump/restore is working again, I backed up a sparse file and restored:
...
dump/restore gives back exactly what it backed up. Exactly.
...
Borg is incapable of the same result. ...
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
In short (under /tmp):
borg init --encryption none /tmp/borg/repo
borg create --sparse /tmp/borg/repo::sparseFile sparseFile
cd ../dst
borg extract --sparse /tmp/borg/repo::sparseFile sparseFile
du -m ../dst/sparseFile
0    ../dst/sparseFile
I will be backing up entire partitions. Not all of the files will be sparse. Will the flag work on only the sparse files it finds?
On 5/9/25 12:50 AM, ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
Hi.
On Thu, 08 May 2025 22:28:43 -0700 ToddAndMargo via users wrote:
With borg, I backed up three files and restored them (somewhere else).
...
So the checksum was equivalent, but I lost my sparseness.
...
As a test, now that dump/restore is working again, I backed up a sparse file and restored:
...
dump/restore gives back exactly what it backed up. Exactly.
...
Borg is incapable of the same result. ...
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
In short (under /tmp):
borg init --encryption none /tmp/borg/repo
borg create --sparse /tmp/borg/repo::sparseFile sparseFile
cd ../dst
borg extract --sparse /tmp/borg/repo::sparseFile sparseFile
du -m ../dst/sparseFile
0    ../dst/sparseFile
I will be backing up entire partitions. Not all will be sparse. Will the tag work only the sparse files it finds?
dump/restore does this automatically. I wonder what possessed borg to not do it? You want back exactly what you put in. Not an "equivalent".
On Fri, 2025-05-09 at 00:55 -0700, ToddAndMargo via users wrote:
I will be backing up entire partitions. Not all will be sparse. Will the tag work only the sparse files it finds?
dump/restore does this automatically. I wonder what possessed borg to not do it? You want back exactly what you put in. Not an "equivalent".
I'd say that most backup systems which operate at the file level don't do this. Dump/restore is designed to back up entire partitions, which is a different use case.
poc
On Fri, 09 May 2025 00:50:32 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
I will be backing up entire partitions. Not all will be sparse. Will the tag work only the sparse files it finds?
No idea: I don't use borg: just wanted to verify if it has a sparse flag.
For backing up entire partitions, switching from ext4 to btrfs is, I think, far better, using "btrfs subvolume snapshot" and "btrfs send".
On Fri, 2025-05-09 at 10:13 +0200, Francis.Montagnac@inria.fr wrote:
On Fri, 09 May 2025 00:50:32 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
I will be backing up entire partitions. Not all will be sparse. Will the tag work only the sparse files it finds?
No idea: I don't use borg: just wanted to verify if it has a sparse flag.
For backing up entire partitions, switching from ext4 to btrfs is I think far better by using "btrfs subvolume snapshot" and "btrfs send"
Agreed that BTRFS is better in many cases, though personally I haven't experimented with the 'btrfs send' option.
poc
On 5/9/25 3:17 AM, Patrick O'Callaghan wrote:
On Fri, 2025-05-09 at 10:13 +0200, Francis.Montagnac@inria.fr wrote:
On Fri, 09 May 2025 00:50:32 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
I don't think so. I made a successful test with a 100M sparseFile like yours, but you have to give the --sparse option to borg.
I will be backing up entire partitions. Not all will be sparse. Will the tag work only the sparse files it finds?
No idea: I don't use borg: just wanted to verify if it has a sparse flag.
For backing up entire partitions, switching from ext4 to btrfs is I think far better by using "btrfs subvolume snapshot" and "btrfs send"
Agreed that BTRFS is better in many cases, though personally I haven't experimented with the 'btrfs send' option.
I've been using it and it's very nice. There are also a few tools to automate it.
On Fri, 09 May 2025 01:03:31 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
du -m ..
"-m" means 1M blocks. Your returned answer could have a round-to-zero error if it is less than 1M.
"--block-size=1" is better, as it will catch everything.
No: du rounds up:
echo > one
du -m one
1    one
On 5/9/25 1:15 AM, Francis.Montagnac@inria.fr wrote:
On Fri, 09 May 2025 01:03:31 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
du -m ..
"-m" means 1M blocks. Your returned answer could have a round-to-zero error if it is less than 1M.
"--block-size=1" is better, as it will catch everything.
No: du rounds up:
echo > one
du -m one
1    one
Thank you!
Francis.Montagnac@inria.fr wrote:
No: du rounds up:
echo > one
du -m one
1    one
That is correct, not rounded. 'echo' creates a file with one byte, a newline (0x0a).
On 5/9/25 9:06 PM, Dave Close wrote:
Francis.Montagnac@inria.fr wrote:
No: du rounds up:
echo > one
du -m one
1    one

That is correct, not rounded. 'echo' creates a file with one byte, a newline (0x0a).
Actually, it creates a file that is allocated 4096 bytes.
$ echo > one
$ du -m one
1    one
$ du --block-size=1 one
4096    one
I believe it is called a "cluster", but I may be wrong on the name.
ToddAndMargo via users wrote:
Actually, it creates a file that is allocated 4096 bytes.
$ echo > one
$ du -m one
1    one
$ du --block-size=1 one
4096    one
I believe it is called a "cluster", but I may be wrong on the name.
Sorry, I missed (or ignored) the "-m". But representing one byte as its allocated space (block size) is not what most people think of as rounding, at least in the decimal sense and likely not in the binary sense, either. And the increase is not due in any sense to "du"; the size of a block is related to the filesystem and the storage medium. So it would be wrong to say that "du rounds up": "du" just reports what the other parts of the system tell it.
On Fri, 2025-05-09 at 23:20 -0700, ToddAndMargo via users wrote:
On 5/9/25 9:06 PM, Dave Close wrote:
Francis.Montagnac@inria.fr wrote:
No: du rounds up:
echo > one
du -m one
1    one

That is correct, not rounded. 'echo' creates a file with one byte, a newline (0x0a).
Actually, it creates a file that is allocated 4096 bytes.
$ echo > one
$ du -m one
1    one
$ du --block-size=1 one
4096    one
No, the file is allocated 1 byte. The disk usage depends on the filesystem. IIRC some filesystems could - at least historically - use spare space in the inode for small files. That's why the output of 'du' is usually different from that of 'ls -s'. It's the difference between the *file size* and the *disk usage* (the clue is in the name).
I believe it is called a "cluster", but I may be wrong on the name.
A cluster is usually regarded as a group of basic allocatable units (i.e. blocks or pages), so this would not be a cluster except in the degenerate sense, i.e. a cluster of 1.
poc
On 5/10/25 4:41 AM, Patrick O'Callaghan wrote:
On Fri, 2025-05-09 at 23:20 -0700, ToddAndMargo via users wrote:
On 5/9/25 9:06 PM, Dave Close wrote:
Francis.Montagnac@inria.fr wrote:
No: du rounds up:
echo > one
du -m one
1    one

That is correct, not rounded. 'echo' creates a file with one byte, a newline (0x0a).
Actually, it creates a file that is allocated 4096 bytes.
$ echo > one
$ du -m one
1    one
$ du --block-size=1 one
4096    one
No, the file is allocated 1 byte. The disk usage depends on the filesystem. IIRC some filesystems could - at least historically - use spare space in the inode for small files. That's why the output of 'du' is usually different from that of 'ls -s'. It's the difference between the *file size* and the *disk usage* (the clue is in the name).
I believe it is called a "cluster", but I may be wrong on the name.
A cluster is usually regarded as a group of basic allocatable units (i.e. blocks or pages), so this would not be a cluster except in the degenerate sense, i.e. a cluster of 1.
poc
Indeed. The file's length is one byte.
The space on the drive allocated for that one byte file is 4096 bytes.
If the file grows to 4097 bytes, it will be allocated another 4096 bytes, for 8192 bytes in total. And so on and so forth.
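That is easy to see by creating a file just over one block and comparing its length with its disk usage. On a filesystem with 4096-byte blocks (e.g. a default ext4), you would expect something like:
$ head -c 4097 /dev/urandom > f        # 4097 bytes of real (non-sparse) data
$ stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' f
size=4097 bytes, allocated=16 blocks of 512 bytes
$ du --block-size=1 f
8192    f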
On 5/9/25 1:15 AM, Francis.Montagnac@inria.fr wrote:
On Fri, 09 May 2025 01:03:31 -0700 ToddAndMargo via users wrote:
On 5/9/25 12:47 AM, Francis.Montagnac@inria.fr wrote:
du -m ..
"-m" means 1M blocks. Your returned answer could have a round-to-zero error if it is less than 1M.
"--block-size=1" is better, as it will catch everything.
No: du rounds up:
echo > one
du -m one
1    one
$ echo "abc" > abc.txt
$ du -m abc.txt
1    abc.txt
$ du --block-size=1 abc.txt
4096    abc.txt
And that makes sense.