Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
Many thanks, -T
On 03Sep2019 22:03, ToddAndMargo ToddAndMargo@zoho.com wrote:
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
Depends what you want from it. If the partitions have real filesystems in them then mount them and tar the mounted trees.
Otherwise there's no benefit to using tar, which stores named files and their data. Just cat the flash drive to flash_drive.img and there you are: a usable drive image file - plain data.
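For instance, a minimal sketch of the mount-and-tar approach (the partition name and mount point here are illustrative, not from the original post):

sudo mount /dev/sdc1 /mnt/part1
sudo tar -czf sdc1-files.tar.gz -C /mnt/part1 .
sudo umount /mnt/part1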
Cheers, Cameron Simpson cs@cskk.id.au
On 9/3/19 10:56 PM, Cameron Simpson wrote:
On 03Sep2019 22:03, ToddAndMargo ToddAndMargo@zoho.com wrote:
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
Depends what you want from it. If the partitions have real filesystems in them then mount them and tar the mounted trees.
Otherwise there's no benefit to using tar, which stores named files and their data. Just cat the flash drive to flash_drive.img and there you are: a usable drive image file - plain data.
Cheers, Cameron Simpson cs@cskk.id.au
Thank you!
Sounds like dd is the better option.
On 03Sep2019 23:12, ToddAndMargo ToddAndMargo@zoho.com wrote:
On 9/3/19 10:56 PM, Cameron Simpson wrote:
On 03Sep2019 22:03, ToddAndMargo ToddAndMargo@zoho.com wrote:
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
Depends what you want from it. If the partitions have real filesystems in them then mount them and tar the mounted trees.
Otherwise there's no benefit to using tar, which stores named files and their data. Just cat the flash drive to flash_drive.img and there you are: a usable drive image file - plain data.
Thank you!
Sounds like dd is the better option.
Shrug. "cat" is easier to invoke:
cat /dev/sdBLAH >sdBLAH.img
Cheers, Cameron Simpson cs@cskk.id.au
On Tuesday, September 3, 2019 10:03:21 PM MST ToddAndMargo via users wrote:
Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
If you were to `tar` it, you'd be making an archive with a single file named 'sdc'. I suggest you do this:
dd if=/dev/sdc of=/path/to/file
gzip /path/to/file
(Feel free to use some other software for compression)
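(For instance, a sketch with two common alternatives, both packaged in Fedora:

xz /path/to/file        # slower, usually smaller; replaces the file with file.xz
zstd /path/to/file      # much faster, still a decent ratio; writes file.zst and keeps the original

Neither is from the original post; pick whatever you have handy.)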
On 9/3/19 10:59 PM, John Harris wrote:
On Tuesday, September 3, 2019 10:03:21 PM MST ToddAndMargo via users wrote:
Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
If you were to `tar` it, you'd be making an archive with a single file named 'sdc'. I suggest you do this:
dd if=/dev/sdc of=/path/to/file
gzip /path/to/file
(Feel free to use some other software for compression)
This looks like what I want. Thank you!
Is there a way to pipe the dd into the gzip file?
On 04/09/2019 16.12, ToddAndMargo via users wrote:
On 9/3/19 10:59 PM, John Harris wrote:
On Tuesday, September 3, 2019 10:03:21 PM MST ToddAndMargo via users wrote:
Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
If you were to `tar` it, you'd be making an archive with a single file named 'sdc'. I suggest you do this:
dd if=/dev/sdc of=/path/to/file
gzip /path/to/file
(Feel free to use some other software for compression)
This looks like what I want. Thank you!
Is there a way to pipe the dd into the gzip file?
As for your question (but see later), yes:

dd if=/dev/sdc | gzip >file.gz

but:
- gzip may not be the most suitable compression
- to improve the compression, mount each partition and create a large file of zeroes:

sudo mount /dev/sdXn /path/to/mountPoint
dd if=/dev/zero of=/path/to/mountPoint/zero
rm /path/to/mountPoint/zero
sudo umount /path/to/mountPoint

Now you can do the dd and expect a smaller gz file.
However, the most important question that needs answering is this: What is the purpose of this exercise?
- Is this for backup, not expecting to read it again? dd is OK, but clonezilla will do a better job (no need to do the zero thing).
- Is this to allow transferring the data to another USB disk? clonezilla is your friend again.
- Will you need regular access to individual files on these partitions? Mount each partition and copy (rsync) the content to where you want it.
HTH
On 9/3/19 11:32 PM, Eyal Lebedinsky wrote:
What is the purpose of this exercise?
I have a bootable Fedora 64 GB flash drive. I use it to troubleshoot customers' computers -- mostly Windows. Windows loved to eat these sticks. (I have had great luck switching to a Samsung stick.)
I have previously used dd to back up these sticks and a reverse dd when they got corrupted. But this one is on the large side, so I was looking for some compression.
On Wed, 2019-09-04 at 01:59 -0700, ToddAndMargo via users wrote:
On 9/3/19 11:32 PM, Eyal Lebedinsky wrote:
What is the purpose of this exercise?
I have a bootable Fedora 64 GB flash drive. I use it to troubleshoot customers' computers -- mostly Windows. Windows loved to eat these sticks. (I have had great luck switching to a Samsung stick.)
I have previously used dd to back up these sticks and a reverse dd when they got corrupted. But this one is on the large side, so I was looking for some compression.
The point of Eyal's method is to ensure that all the free space on the drive is filled with zeroes, thus improving the compression. Otherwise you are just uselessly compressing junk.
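A quick way to see the effect for yourself (illustrative commands; the sizes in the comments are approximate):

dd if=/dev/zero bs=1M count=100 | gzip -c | wc -c      # ~100 KB: zeroes compress roughly 1000:1
dd if=/dev/urandom bs=1M count=100 | gzip -c | wc -c   # ~100 MB: random junk barely compresses at all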
poc
On 9/4/19 2:44 AM, Patrick O'Callaghan wrote:
The point of Eyal's method is to ensure that all the free space on the drive is filled with zeroes, thus improving the compression. Otherwise you are just uselessly compressing junk.
poc
Is there a way to tell the stick itself to zero out all unused space?
On 9/4/19 1:25 PM, ToddAndMargo via users wrote:
On 9/4/19 2:44 AM, Patrick O'Callaghan wrote:
The point of Eyal's method is to ensure that all the free space on the drive is filled with zeroes, thus improving the compression. Otherwise you are just uselessly compressing junk.
poc
Is there a way to tell the stick itself to zero out all unused space?
This sounds like what I need:
https://manpages.ubuntu.com/manpages/trusty/man8/zerofree.8.html
Am I on the right track?
-T
scrub, in the Fedora repos, has a fillzero option and a freespace specifier that should do the trick. MAKE A BACKUP FIRST, as scrub's primary job is to erase any trace of everything on a device, so you'd hate to get the options wrong!
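Something like this is the idea (a sketch based on scrub's man page, not tested here; the partition name and mount point are illustrative):

sudo mount /dev/sdc3 /mnt/stick
sudo scrub -p fillzero -X /mnt/stick/fill   # -X fills a new directory with files until the fs is full
sudo rm -r /mnt/stick/fill
sudo umount /mnt/stick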
On Wed, Sep 4, 2019 at 4:41 PM ToddAndMargo via users <users@lists.fedoraproject.org> wrote:
On 9/4/19 1:25 PM, ToddAndMargo via users wrote:
On 9/4/19 2:44 AM, Patrick O'Callaghan wrote:
The point of Eyal's method is to ensure that all the free space on the drive is filled with zeroes, thus improving the compression. Otherwise you are just uselessly compressing junk.
poc
Is there a way to tell the stick itself to zero out all unused space?
This sounds like what I need:
https://manpages.ubuntu.com/manpages/trusty/man8/zerofree.8.html
Am I on the right track?
-T
On 9/4/19 1:48 PM, Ted Roche wrote:
scrub, in the Fedora repos, has a fillzero option and a freespace specifier that should do the trick. MAKE A BACKUP FIRST, as scrub's primary job is to erase any trace of everything on a device, so you'd hate to get the options wrong!
Thank you!
Ya, backup first or ...
On 9/4/19 1:59 PM, John Harris wrote:
On Tuesday, September 3, 2019 10:03:21 PM MST ToddAndMargo via users wrote:
Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
If you were to `tar` it, you'd be making an archive with a single file named 'sdc'. I suggest you do this:
dd if=/dev/sdc of=/path/to/file
gzip /path/to/file
(Feel free to use some other software for compression)
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
If the flash drive is 115GB you'd get a file of size 115GB (give or take). That would be kinda silly if your drive contained much less data than that.
The question would then arise as to how you would extract files/data from that dd-created image.
On 9/4/19 5:00 PM, ToddAndMargo via users wrote:
On 9/3/19 11:12 PM, Ed Greshko wrote:
The question would then arise as to how you would extract files/data from that dd-created image.
No extraction. I just want to do a mass overwrite when the stick gets corrupted.
OK.
Pro-Tip: Spelling out your requirements/needs/intentions in the original post may get more meaningful responses without the need for follow-up questions.
On 4 Sep at 01:12, Ed Greshko ed.greshko@greshko.com wrote:
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
A lot less time consuming if you use the "bs=" option. Haven't seen anyone mention that; I believe the default is still the old Unix 512b, painful.
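For example (the 4M value is just a common choice, not gospel):

dd bs=4M if=/dev/sdc of=flash.img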
Cheers, -- Dave Ihnat dihnat@dminet.com
On 9/4/19 9:16 PM, Dave Ihnat wrote:
On 4 Sep at 01:12, Ed Greshko ed.greshko@greshko.com wrote:
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
A lot less time consuming if you use the "bs=" option. Haven't seen anyone mention that; I believe the default is still the old Unix 512b, painful.
Very good point. And, yes, the default is still 512.
On Wed, 2019-09-04 at 21:55 +0800, Ed Greshko wrote:
On 9/4/19 9:16 PM, Dave Ihnat wrote:
On 4 Sep at 01:12, Ed Greshko ed.greshko@greshko.com wrote:
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
A lot less time consuming if you use the "bs=" option. Haven't seen anyone mention that; I believe the default is still the old Unix 512b, painful.
Very good point. And, yes, the default is still 512.
For a USB drive it probably doesn't make much difference. Output will be buffered and speed is limited by the USB interface.
poc
On Wed, Sep 04, 2019 at 04:20:14PM +0100, Patrick O'Callaghan wrote:
On Wed, 2019-09-04 at 21:55 +0800, Ed Greshko wrote:
On 9/4/19 9:16 PM, Dave Ihnat wrote:
On 4 Sep at 01:12, Ed Greshko ed.greshko@greshko.com wrote:
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
A lot less time consuming if you use the "bs=" option. Haven't seen anyone mention that; I believe the default is still the old Unix 512b, painful.
Very good point. And, yes, the default is still 512.
For a USB drive it probably doesn't make much difference. Output will be buffered and speed is limited by the USB interface.
poc
I tend to use "bs=10M", which won't make it any faster, unless possibly you're using USB3.x devices on a USB3.x port.
Fred
On 9/4/19 8:20 AM, Patrick O'Callaghan wrote:
For a USB drive it probably doesn't make much difference. Output will be buffered and speed is limited by the USB interface.
If you aren't specifying a block size, the default block size tends to involve more round-trips through the kernel and through the USB bus. In that case, it isn't the USB interface bandwidth that causes slow transfers, but the latency involved in each tiny request.
I'd test this, but I seem to have left my bag of USB drives at home today. :)
Feel free to 'dd' a drive to /dev/null with and without a specified large block size to demonstrate the difference. Maybe I'm wrong.
On Wed, 2019-09-04 at 12:44 -0700, Gordon Messmer wrote:
On 9/4/19 8:20 AM, Patrick O'Callaghan wrote:
For a USB drive it probably doesn't make much difference. Output will be buffered and speed is limited by the USB interface.
If you aren't specifying a block size, the default block size tends to involve more round-trips through the kernel and through the USB bus. In that case, it isn't the USB interface bandwidth that causes slow transfers, but the latency involved in each tiny request.
I'd test this, but I seem to have left my bag of USB drives at home today. :)
Feel free to 'dd' a drive to /dev/null with and without a specified large block size to demonstrate the difference. Maybe I'm wrong.
This is for an otherwise unused 4.5GB partition on an SSD. The CPU is an i7-3770 with 8GB of RAM:
[poc@bree ~]$ sudo time dd if=/dev/sda2 of=/dev/null
8787968+0 records in
8787968+0 records out
4499439616 bytes (4.5 GB, 4.2 GiB) copied, 11.925 s, 377 MB/s
4.48user 7.26system 0:11.92elapsed 98%CPU (0avgtext+0avgdata 2092maxresident)k
8787968inputs+0outputs (0major+89minor)pagefaults 0swaps

[poc@bree ~]$ sudo time dd bs=4096 if=/dev/sda2 of=/dev/null
1098496+0 records in
1098496+0 records out
4499439616 bytes (4.5 GB, 4.2 GiB) copied, 8.42924 s, 534 MB/s
0.59user 2.43system 0:08.43elapsed 35%CPU (0avgtext+0avgdata 2216maxresident)k
8788104inputs+0outputs (2major+89minor)pagefaults 0swaps
(increasing the block size to 10MB gets 549 MB/s, i.e. almost no difference)
Given that the (theoretical) speed of USB-2 is 60 MB/s, the drive would be a bottleneck in both cases. For USB-3 (10 or 20 times faster, depending on version) there would be a difference.
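(The arithmetic: USB 2.0 signals at 480 Mbit/s, i.e. 480/8 = 60 MB/s before protocol overhead; USB 3.0 is 5 Gbit/s and USB 3.1 Gen 2 is 10 Gbit/s, hence the roughly 10x or 20x.)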
poc
On 9/4/19 6:16 AM, Dave Ihnat wrote:
On 4 Sep at 01:12, Ed Greshko ed.greshko@greshko.com wrote:
Of course a dd copy may be rather time consuming and space consuming with no apparent advantage.
A lot less time consuming if you use the "bs=" option. Haven't seen anyone mention that; I believe the default is still the old Unix 512b, painful.
Cheers,
I use bs=4096, but can't remember why I picked that number exactly. Might have been from my Exabyte tape days, but I don't remember.
On 9/3/19 10:03 PM, ToddAndMargo via users wrote:
Hi All,
I have a flash drive with about four partitions on it. Let's call it /dev/sdc.
Can I tar sdc or am I stuck with tarring the partitions?
Any drawback to this?
Many thanks, -T
Followup.
This is what I finally wound up doing to back up this stick. Dead Stick is a play on words off of Live USB:
Backup:
1) shutdown: zero out unused space
Figure out which partitions / and /boot are located on. Gparted works well for this. Usually they are /dev/sdb3 and /dev/sdb4:
partitions must not be mounted
# zerofree -v /dev/sd..
# zerofree -v /dev/sd..
2) shutdown: make a dd and gzip
Find the device name (/dev/sdx)
Note: if the "dd" crashes on a USB3 port, try a USB2 port
# dd bs=4096 if=/dev/sdx of=DeadStick.[date]
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
A one-liner to peek at the progress:

$ ls -al | p6 'my @x=$*IN.lines; say (@x[3].words[4].Int) / 1000000000 ~ " GB";'
64.1604009 GB
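(That is a Perl 6 one-liner picking the byte count out of ls output; a plainer alternative, assuming you have the watch utility, is:

$ watch -n 10 ls -lh

run in the directory the image is being written to.)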
I had to use a USB 2 port, as my (USB3) stick would crash dd on every USB3 port I tried.
-T
Hi.
On Wed, 04 Sep 2019 22:55:56 -0700 ToddAndMargo via users wrote:
This is what I finally wound up doing to back up this stick. Dead Stick is a play on words off of Live USB:
Backup:
1) shutdown: zero out unused space
...
# zerofree -v /dev/sd..
Beware that zerofree only works for ext filesystems.
2) shutdown: make a dd and gzip
...
# dd bs=4096 if=/dev/sdx of=DeadStick.[date]
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
If you have a recent version of dd (for status=progress), you may try:
dd bs=4096 if=/dev/sdx status=progress | gzip > DeadStick.[date].gz
and if you have pigz (faster than gzip):
dd bs=4096 if=/dev/sdx status=progress | pigz > DeadStick.[date].gz
That saves you the intermediate full backup and periodically shows the transfer rate.
You may also save less disk space in the backup than with gzip, but (I think) gain in speed, by writing a sparse file (again with a recent dd):
dd bs=4096 if=/dev/sdx of=DeadStick.[date] conv=sparse status=progress
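To check how much space the sparse copy actually allocates, compare allocated size with apparent size:

du -h DeadStick.[date]
du -h --apparent-size DeadStick.[date]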
On 9/4/19 11:24 PM, Francis.Montagnac@inria.fr wrote:
Hi.
On Wed, 04 Sep 2019 22:55:56 -0700 ToddAndMargo via users wrote:
This is what I finally wound up doing to back up this stick. Dead Stick is a play on words off of Live USB:
Backup:
1) shutdown: zero out unused space
...
# zerofree -v /dev/sd..
Beware that zerofree only works for ext filesystems.
2) shutdown: make a dd and gzip
...
# dd bs=4096 if=/dev/sdx of=DeadStick.[date]
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
If you have a recent version of dd (for status=progress), you may try:
dd bs=4096 if=/dev/sdx status=progress | gzip > DeadStick.[date].gz
and if you have pigz (faster than gzip):
dd bs=4096 if=/dev/sdx status=progress | pigz > DeadStick.[date].gz
That saves you the intermediate full backup and periodically shows the transfer rate.
You may also save less disk space in the backup than with gzip, but (I think) gain in speed, by writing a sparse file (again with a recent dd):
dd bs=4096 if=/dev/sdx of=DeadStick.[date] conv=sparse status=progress
Hi Francis,
Thank you!
status=progress, got to try that next time! I usually open another terminal and peek.
The partitions in question are Ext4.
If Windows read Ext4, I'd convert all my flash drives over to it.
-T
On Wed, 2019-09-04 at 23:58 -0700, ToddAndMargo via users wrote:
If Windows read Ext4, I'd convert all my flash drives over to it.
Apparently it can be done.
On 9/9/19 7:20 PM, Tim via users wrote:
On Wed, 2019-09-04 at 23:58 -0700, ToddAndMargo via users wrote:
If Windows read Ext4, I'd convert all my flash drives over to it.
Apparently it can be done.
Indeed. And the results are tragic. Here are my notes on it:
Paragon EXTFS for Windows:
http://www.paragon-software.com/home/extfs-windows/
Note: crashes and leaves a stale drive letter, and USB flash card readers will attempt to use the stale letter.
Ext2Fsd Project (Open Source):
http://www.ext2fsd.com/
Note: does not work with Cobian Backup 11; crashes a lot.
On 9/4/19 10:55 PM, ToddAndMargo via users wrote:
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
The rm will fail because gzip removes the original file when it's finished compressing. However, the other suggestion to pipe straight through gzip (or other compression program) is better anyway.
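(If you do want the original kept around, recent gzip can do that itself:

gzip -k DeadStick.[date]     # -k/--keep leaves the input file in place

though piping straight into gzip avoids ever holding both copies on disk.)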
On 9/5/19 3:44 AM, Samuel Sieb wrote:
On 9/4/19 10:55 PM, ToddAndMargo via users wrote:
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
The rm will fail because gzip removes the original file when it's finished compressing. However, the other suggestion to pipe straight through gzip (or other compression program) is better anyway.
I noticed that. I had tested it on a small png file first and it did not remove it. I bet I just did not refresh my file manager soon enough.
I am going to test the straight pipe today on my USB3 ports and see if the overhead slows down the dd enough to keep it from crashing.
Also going to test another Samsung USB flash drive on USB3 and see if dd crashes. Maybe it is my ports and not the drive. Will find out.
Thank you for the tips!
On 9/6/19 3:21 PM, ToddAndMargo via users wrote:
On 9/5/19 3:44 AM, Samuel Sieb wrote:
On 9/4/19 10:55 PM, ToddAndMargo via users wrote:
# gzip DeadStick.[date]          (creates DeadStick.[date].gz)
# rm DeadStick.[date]
The rm will fail because gzip removes the original file when it's finished compressing. However, the other suggestion to pipe straight through gzip (or other compression program) is better anyway.
I noticed that. I had tested it on a small png file first and it did not remove it. I bet I just did not refresh my file manager soon enough.
I am going to test the straight pipe today on my USB3 ports and see if the overhead slows down the dd enough to keep it from crashing.
Also going to test another Samsung USB flash drive on USB3 and see if dd crashes. Maybe it is my ports and not the drive. Will find out.
Thank you for the tips!
Ahh poop! (Not my "exact" word.)
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
On 06Sep2019 19:34, ToddAndMargo ToddAndMargo@zoho.com wrote:
I am going to test the straight pipe today on my USB3 ports and see if the overhead slows down the dd enough to keep it from crashing. [...]
"crashing" ?
[...]
Ahh poop! (Not my "exact" word.)
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
Not sure this is a problem. How big is /dev/sdb? What do you expect dd to do when it hits the end of the drive?
Hmm. You have two "32GiB ...copied" lines up there. From the same run?
Ah, no, that is your "status=progress" getting cut in half by the dd error and the in/out report. So that's ok.
So, what's bad about the above?
Cheers, Cameron Simpson cs@cskk.id.au
On 9/6/19 7:45 PM, Cameron Simpson wrote:
On 06Sep2019 19:34, ToddAndMargo ToddAndMargo@zoho.com wrote:
I am going to test the straight pipe today on my USB3 ports and see if the overhead slows down the dd enough to keep it from crashing. [...]
"crashing" ?
[...]
Ahh poop! (Not my "exact" word.)
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
Not sure this is a problem. How big is /dev/sdb? What do you expect dd to do when it hits the end of the drive?
It is 64 GB.
And I expect a return to the prompt with a summary and no errors. It works on my USB2 port.
I am thinking the "85.4 MB/s" is the issue. Another Samsung 128 GB USB 3.1 stick I am currently testing is running at 32.3 MB/s. (Perhaps someone is fibbing about the transfer speed too?)
Hmm. You have two "32GiB ...copied" lines up there. From the same run?
Ah, no, that is your "status=progress" getting cut in half by the dd error and the in/out report. So that's ok.
dd: error reading '/dev/sdb': Input/output error

is not okay.
So, what's bad about the above?
dd: error reading '/dev/sdb': Input/output error
The 128 GB stick stopped successfully:
# dd status=progress bs=4096 if=/dev/sdb | gzip > 128GB.Stick.$(date +%Y-%m-%d)
65474170880 bytes (65 GB, 61 GiB) copied, 2032 s, 32.2 MB/s^C
15986297+0 records in
15986296+0 records out
65479868416 bytes (65 GB, 61 GiB) copied, 2032.78 s, 32.2 MB/s
And you can see where the 64 GB (dead stick) crashed:
# ls -al
-rw-r--r--. 1 root root 51720912896 Sep  6 20:10 128GB.Stick.2019-09-06
-rw-r--r--. 1 tony root  2733066922 Sep  6 19:21 DeadStick.FC30.2019-09-06
Methinks I have a bad stick. Your take?
Thank you for the help!
On 06Sep2019 20:16, ToddAndMargo ToddAndMargo@zoho.com wrote:
On 9/6/19 7:45 PM, Cameron Simpson wrote:
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
Not sure this is a problem. How big is /dev/sdb? What do you expect dd to do when it hits the end of the drive?
It is 64 GB.
Ah, badness then as you say.
Had you tried ddrescue, a dd-like command with an ability to step past bad areas and proceed with the rest?
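A typical invocation adds a third argument, the mapfile, so an interrupted run can resume and bad areas can be retried (file names here are illustrative):

ddrescue /dev/sdb DeadStick.img DeadStick.map
ddrescue -r3 /dev/sdb DeadStick.img DeadStick.map   # second pass: retry bad areas up to 3 times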
Cheers, Cameron Simpson cs@cskk.id.au
On 9/6/19 9:36 PM, Cameron Simpson wrote:
On 06Sep2019 20:16, ToddAndMargo ToddAndMargo@zoho.com wrote:
On 9/6/19 7:45 PM, Cameron Simpson wrote:
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
Not sure this is a problem. How big is /dev/sdb? What do you expect dd to do when it hits the end of the drive?
It is 64 GB.
Ah, badness then as you say.
Had you tried ddrescue, a dd-like command with an ability to step past bad areas and proceed with the rest?
Cheers, Cameron Simpson cs@cskk.id.au
Trying it right now. Had to drop the if= and of=.
Thank you for all the help!
On 9/6/19 9:43 PM, ToddAndMargo via users wrote:
On 9/6/19 9:36 PM, Cameron Simpson wrote:
On 06Sep2019 20:16, ToddAndMargo ToddAndMargo@zoho.com wrote:
On 9/6/19 7:45 PM, Cameron Simpson wrote:
# dd status=progress bs=4096 if=/dev/sdb | gzip > DeadStick.FC30.2019-09-06
34489798656 bytes (34 GB, 32 GiB) copied, 404 s, 85.4 MB/s
dd: error reading '/dev/sdb': Input/output error
8425692+0 records in
8425692+0 records out
34511634432 bytes (35 GB, 32 GiB) copied, 459.17 s, 75.2 MB/s
That was on my USB 3.1 port.
Not sure this is a problem. How big is /dev/sdb? What do you expect dd to do when it hits the end of the drive?
It is 64 GB.
Ah, badness then as you say.
Had you tried ddrescue, a dd-like command with an ability to step past bad areas and proceed with the rest?
Cheers, Cameron Simpson cs@cskk.id.au
Trying it right now. Had to drop the if= and of=.
Thank you for all the help!
More evidence of badness!
# ddrescue /dev/sdb DeadStick.FC30_2019-09-06.dd
GNU ddrescue 1.23
Press Ctrl-C to interrupt
     ipos:   40491 MB, non-trimmed:    2725 MB,  current rate:       0 B/s
     opos:   40491 MB, non-scraped:       0 B,   average rate:    140 MB/s
non-tried:   23666 MB,  bad-sector:       0 B,     error rate:    301 MB/s
  rescued:   37768 MB,   bad areas:        0,        run time:     4m 27s
pct rescued:  58.86%,  read errors:    41633,  remaining time:        n/a
                        time since last successful read:                1s
Copying non-tried blocks... Pass 5 (forwards)
ddrescue: Input file disappeared: No such file or directory
On 9/5/19 3:44 AM, Samuel Sieb wrote:
However, the other suggestion to pipe straight through gzip (or other compression program) is better anyway.
Thank you!
Backup:
1) shutdown: zero out unused space
Figure out which partitions / and /boot are located on. Gparted works well for this. Usually they are /dev/sdb3 and /dev/sdb4:
partitions must not be mounted
# zerofree -v /dev/sd..
# zerofree -v /dev/sd..
2) shutdown: make a dd and gzip
Find the device name (/dev/sdx)
Note: if the "dd" crashes on a USB3 port, try a USB2 port status=progress cuts your size in half.
# dd status=progress bs=4096 if=/dev/sdx | gzip -v > DeadStick.FC30.$(date +%Y-%m-%d)
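Restore would reverse the pipe, something along these lines (a sketch; double-check which device /dev/sdx is before writing to it):

# gunzip -c DeadStick.FC30.[date].gz | dd status=progress bs=4096 of=/dev/sdx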