Is there a Linux util to scrub free disk blocks and keep everything else intact ??
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
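A rough sketch of that sequence, assuming the partition is mounted at /mnt/data (the mount point and block size are only examples) and that the scrub package is installed; the -r flag, if your version of scrub supports it, removes the file after scrubbing:

cd /mnt/data
# fill every free block with zeros; dd stops when the filesystem returns ENOSPC
sudo dd if=/dev/zero of=ZERO bs=20M
sync
# overwrite the filler file with scrub's patterns, then remove it
sudo scrub -r ZERO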
On Fri, 2010-08-27 at 21:53 -0700, JD wrote:
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
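For what it's worth, a minimal invocation of that option might look like the lines below; the directory name is only an example, and -r (remove the files afterwards) and -p (pattern selection) are optional extras:

# fill the free space of the filesystem containing /mnt/data, scrub the filler
# files with the default pattern sequence, then remove them
sudo scrub -X -r /mnt/data/scrub.tmp
# or pick an explicit pattern sequence, e.g. the DoD-style one
sudo scrub -p dod -X -r /mnt/data/scrub.tmp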
On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
On Fri, 2010-08-27 at 21:53 -0700, JD wrote:
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
Very good. Actually, indirect blocks are used if and only if the file is larger than what can be addressed via direct blocks. So, scrub has no interest in whether or not a file has indirect blocks, nor should it care if blocks are direct or indirect. During file access, the file system will access ANY file block(s) that the process has the right permissions for.
On Sat, 2010-08-28 at 07:42 -0700, JD wrote:
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
Very good. Actually, indirect blocks are used if and only if the file is larger than what can be addressed via direct blocks. So, scrub has no interest in whether or not a file has indirect blocks, nor should it care if blocks are direct or indirect. During file access, the file system will access ANY file block(s) that the process has the right permissions for.
So what? My point is that after scrubbing and removal of the space-filling files, you may be left with a number of sectors that were used for indirect blocks during the space-filling phase of the scrub process. These sectors may not themselves be scrubbed but will now be available on the free list after you're done.
poc
On 08/28/2010 10:24 AM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 07:42 -0700, JD wrote:
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
Very good. Actually, indirect blocks are used if and only if the file is larger than what can be addressed via direct blocks. So, scrub has no interest in whether or not a file has indirect blocks, nor should it care if blocks are direct or indirect. During file access, the file system will access ANY file block(s) that the process has the right permissions for.
So what? My point is that after scrubbing and removal of the space-filling files, you may be left with a number of sectors that were used for indirect blocks during the space-filling phase of the scrub process. These sectors may not themselves be scrubbed but will now be available on the free list after you're done.
poc
You need to study filesystem architecture to gain better understanding.
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
On 08/28/2010 05:42 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
Let me quote to you what YOU wrote: On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
You have exposed your ignorance of the filesystem architecture, and mentioned something you vaguely remember the name of, and totally embarrassed yourself before a very large audience.
JD wrote:
On 08/28/2010 05:42 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
Let me quote to you what YOU wrote: On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
Since you imply that they are, please identify the method of doing so, since the documentation and source only seem to scrub the file *content* and not the indirect (ie. inode) blocks at all. There's code to zero the filename, and maybe truncating the file would clear the primary inode and release the indirect blocks, but I think inode clearing is f/s dependent, not necessarily a given.
I can see how the inode indirect blocks might get zeroed (on some filesystems), but I see no way to cause scrubbing with multiple random patterns.
You have exposed your ignorance of the filesystem architecture, and mentioned something you vaguely remember the name of, and totally embarrassed yourself before a very large audience.
Having jumped all over Patrick, please point us to the filesystem or other code to scrub the inode indirect blocks.
Don't read this as a claim the indirect blocks need to be scrubbed, but do tell us just how that scrubbing occurs. You haven't confused indirect block (filesystem metadata) with the data blocks described by the indirect blocks, have you?
On 08/28/2010 06:35 PM, Bill Davidsen wrote:
JD wrote:
On 08/28/2010 05:42 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
Let me quote to you what YOU wrote: On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
Since you imply that they are, please identify the method of doing so, since the documentation and source only seem to scrub the file *content* and not the indirect (ie. inode) blocks at all. There's code to zero the filename, and maybe truncating the file would clear the primary inode and release the indirect blocks, but I think inode clearing is f/s dependent, not necessarily a given.
I can see how the inode indirect blocks might get zeroed (on some filesystems), but I see no way to cause scrubbing with multiple random patterns.
The data blocks, be they direct or indirect data blocks (an 'attribute' - and I use the word attribute here loosely to make a point - is only meaningful to the inode layer), have absolutely NOTHING to do with what the scrub program does to ALL the blocks of a file. So, just get off of this whole thing about direct and indirect blocks. This is irrelevant to the subject of scrubbing a file. Rest assured scrub will get all of a file's blocks.
You have exposed your ignorance of the filesystem architecture, and mentioned something you vaguely remember the name of, and totally embarrassed yourself before a very large audience.
Having jumped all over Patrick, please point us to the filesystem or other code to scrub the inode indirect blocks.
As stated above.
Don't read this as a claim the indirect blocks need to be scrubbed, but do tell us just how that scrubbing occurs. You haven't confused indirect block (filesystem metadata) with the data blocks described by the indirect blocks, have you?
A filesystem's metadata are things like inodes and cylinder groups, and superblocks. These are NOT indirect blocks. Do not confuse filesystem metadata with a file's data. When a file is deleted, then the inode for that file is returned to the free list. That inode does not have a file's content. Only info about that file, such as owner, permissions, .. and the addresses of the blocks holding the data of that file.
JD wrote:
On 08/28/2010 06:35 PM, Bill Davidsen wrote:
JD wrote:
On 08/28/2010 05:42 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
Let me quote to you what YOU wrote: On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
Since you imply that they are, please identify the method of doing so, since the documentation and source only seem to scrub the file *content* and not the indirect (ie. inode) blocks at all. There's code to zero the filename, and maybe truncating the file would clear the primary inode and release the indirect blocks, but I think inode clearing is f/s dependent, not necessarily a given.
I can see how the inode indirect blocks might get zeroed (on some filesystems), but I see no way to cause scrubbing with multiple random patterns.
The data blocks, be they direct or indirect data blocks (an 'attribute' - and I use the word attribute here loosely to make a point - is only meaningful to the inode layer), have absolutely NOTHING to do with what the scrub program does to ALL the blocks of a file. So, just get off of this whole thing about direct and indirect blocks. This is irrelevant to the subject of scrubbing a file. Rest assured scrub will get all of a file's blocks.
You have exposed your ignorance of the filesystem architecture, and mentioned something you vaguely remember the name of, and totally embarrassed yourself before a very large audience.
Having jumped all over Patrick, please point us to the filesystem or other code to scrub the inode indirect blocks.
As stated above.
Don't read this as a claim the indirect blocks need to be scrubbed, but do tell us just how that scrubbing occurs. You haven't confused indirect block (filesystem metadata) with the data blocks described by the indirect blocks, have you?
A filesystem's metadata are things like inodes and cylinder groups, and superblocks. These are NOT indirect blocks. Do not confuse filesystem metadata with a file's data. When a file is deleted, then the inode for that file is returned to the free list. That inode does not have a file's content. Only info about that file, such as owner, permissions, .. and the addresses of the blocks holding the data of that file.
If a file is moved to the trash, then the inode is retained, but the file is now linked to the trash directory. When it is removed from the trash and thrown away, the file is unlinked. Correct? What does scrub do when run with the -X parameter? Does it do a series of overwrites and then move to the next data node in the free list?
If it does that, then you are 99% assured that the data cannot be read by 'ordinary' means. However, I would never give a drive that has been 'erased' to anyone I would not trust. That's just me.
James McKenzie
On 08/28/2010 07:21 PM, James McKenzie wrote:
JD wrote:
On 08/28/2010 06:35 PM, Bill Davidsen wrote:
JD wrote:
On 08/28/2010 05:42 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 15:12 -0700, JD wrote:
On 08/28/2010 01:53 PM, Patrick O'Callaghan wrote:
On Sat, 2010-08-28 at 10:46 -0700, JD wrote:
You need to study filesystem architecture to gain better understanding.
If you can't explain what you mean, just say so.
poc
I can explain it alright - and I tried to tell you that a program can access all blocks of a file, direct and indirect, given that the program has the access permissions for that file, but you ignored it. Direct and Indirect blocks are only meaningful to the inode layer. The file's offset will be computed and a block number arrived at. If that block number is outside the range of the direct blocks, then the indirect blocks are accessed. Enough said. Explaining FS architecture is beyond the scope of this list, and certainly beyond this thread.
I've been using Unix since 1975 and am very well aware of how the direct/indirect addressing scheme works. And I still fail to see what this has to do with the scrub command. Are you saying that scrub overwrites these blocks when deleting a file? If so, what is your basis for saying so, given that the manpage explicitly states that "only the data in the file (and optionally its name in the directory entry) is destroyed"?
OTOH if you're saying these blocks are irrelevant, you're contradicting your original requirement of scrubbing any free space.
poc
Let me quote to you what YOU wrote: On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
Since you imply that they are, please identify the method of doing so, since the documentation and source only seem to scrub the file *content* and not the indirect (ie. inode) blocks at all. There's code to zero the filename, and maybe truncating the file would clear the primary inode and release the indirect blocks, but I think inode clearing is f/s dependent, not necessarily a given.
I can see how the inode indirect blocks might get zeroed (on some filesystems), but I see no way to cause scrubbing with multiple random patterns.
The data blocks, be they direct or indirect data blocks (an 'attribute' - and I use the word attribute here loosely to make a point - is only meaningful to the inode layer), have absolutely NOTHING to do with what the scrub program does to ALL the blocks of a file. So, just get off of this whole thing about direct and indirect blocks. This is irrelevant to the subject of scrubbing a file. Rest assured scrub will get all of a file's blocks.
You have exposed your ignorance of the filesystem architecture, and mentioned something you vaguely remember the name of, and totally embarrassed yourself before a very large audience.
Having jumped all over Patrick, please point us to the filesystem or other code to scrub the inode indirect blocks.
As stated above.
Don't read this as a claim the indirect blocks need to be scrubbed, but do tell us just how that scrubbing occurs. You haven't confused indirect block (filesystem metadata) with the data blocks described by the indirect blocks, have you?
A filesystem's metadata are things like inodes and cylinder groups, and superblocks. These are NOT indirect blocks. Do not confuse filesystem metadata with a file's data. When a file is deleted, then the inode for that file is returned to the free list. That inode does not have a file's content. Only info about that file, such as owner, permissions, .. and the addresses of the blocks holding the data of that file.
If a file is moved to the trash, then the inode is retained, but the file is now linked to the trash directory. When it is removed from the trash and thrown away, the file is unlinked. Correct? What does scrub do when run with the -X parameter? Does it do a series of overwrites and then move to the next data node in the free list?
If it does that, then you are 99% assured that the data cannot be read by 'ordinary' means. However, I would never give a drive that has been 'erased' to anyone I would not trust. That's just me.
James McKenzie
Well, that's the point another op has made: If you have a failing disk, and you have credit card account data, bank account data, tax data, and many other types of proprietary product secrets, business plans, meeting records, etc., then you should consider destroying the disk utterly and thoroughly instead of sending it in for RMA.
But what about the need to recover the data from such a disk? THAT's when you will DEFINITELY and MOST CERTAINLY compromise that data when you send the disk to a data recovery company.
JD wrote: [trimmed]
Well, that's the point another op has made: If you have a failing disk, and you have credit card account data, bank account data, tax data, and many other types of proprietary product secrets, business plans, meeting records, etc., then you should consider destroying the disk utterly and thoroughly instead of sending it in for RMA.
Correct. That is why any 'production' level disk should be RMA'd without the physical disk. Just send back a tag and then certify that the disk has been destroyed. Too many RMAs and you get 'blacklisted', and this means replacing the same drive time and time again. You have a bigger problem than hard drives at this point.
But what about the need to recover the data from such a disk? THAT's when you will DEFINITELY and MOST CERTAINLY compromise that data when you send the disk to a data recovery company.
DDR has a clause that they will NOT release information, unless requested, to anyone other than the originator, unless it clearly violates Federal/State/Local statutes. Usually, this is records of viewable pornography of underage participants. They have been used to recover data from a drive that was under water for months and from a drive that had a hole drilled through it.
In any case, if someone wants your data that bad, it is time to dig out the old sledgehammer and physically destroy the disk. Otherwise, scrub and other secure erasure programs should be sufficient.
James McKenzie
In any case, if someone wants your data that bad, it is time to dig out the old sledgehammer and physically destroy the disk. Otherwise, scrub and other secure erasure programs should be sufficient.
Programs that just write over the data a few times may or may not help but you never know. A sledgehammer won't help you either if the person who wants the data has resources.
Modern drives support a secure erase/wipe feature (see hdparm). It's probably the best option you get short of doing what probably would have been smartest - encrypting it at install time ;)
There is a school of security thought that beyond the point where it's cheaper to send a guy round to see you carrying instruments of persuasion the extra investment isn't actually useful...
Alan
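A rough outline of the hdparm-based secure erase Alan mentions, assuming the target is /dev/sdX (an example name), it is not the disk you booted from, and the drive reports "not frozen"; the password is arbitrary and only exists for the duration of the erase, and the command destroys everything on the drive:

# check that the drive supports the ATA security feature set and is "not frozen"
sudo hdparm -I /dev/sdX
# set a temporary user password, then issue the erase; the drive firmware
# overwrites every user-addressable sector itself
sudo hdparm --user-master u --security-set-pass Eins /dev/sdX
sudo hdparm --user-master u --security-erase Eins /dev/sdX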
2010/8/29 Alan Cox alan@lxorguk.ukuu.org.uk: <--SNIP-->
Modern drives support a secure erase/wipe feature (see hdparm). It's probably the best option you get short of doing what probably would have been smartest - encrypting it at install time ;)
<--SNIP-->
Alan
I always wondered: would drive encryption significantly reduce access speed? For example, if I have an old Pentium III computer with a 500 MHz processor, is it fast enough to be used with an encrypted file system?
On Sun, Aug 29, 2010 at 1:06 PM, Hiisi very-cool@rambler.ru wrote:
2010/8/29 Alan Cox alan@lxorguk.ukuu.org.uk: <--SNIP-->
Modern drives support a secure erase/wipe feature (see hdparm). It's probably the best option you get short of doing what probably would have been smartest - encrypting it at install time ;)
<--SNIP-->
Alan
I always wondered: would drive encryption significantly reduce access speed? For example, if I have an old Pentium III computer with a 500 MHz processor, is it fast enough to be used with an encrypted file system?
LUKS-encrypted partitions show very little reduction in speed - I have used encrypted partitions for an entire laptop and you really don't notice any problem. I have not measured it, so I don't have quantitative data, but it is a good way to go for laptops, provided you make good provision for backups.
I guess this is because the encryption and decryption are done at kernel level - though it is a bit more effort to set up in the first place.
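For anyone who wants to try it on a spare partition, a bare-bones dm-crypt/LUKS setup goes roughly like this; the partition, mapping name and mount point are only examples, and luksFormat destroys whatever is on the partition:

# create the LUKS container and set a passphrase
sudo cryptsetup luksFormat /dev/sdXn
# open it; the decrypted view appears as /dev/mapper/secret
sudo cryptsetup luksOpen /dev/sdXn secret
# make a filesystem on the mapped device and mount it as usual
sudo mkfs.ext4 /dev/mapper/secret
sudo mkdir -p /mnt/secret
sudo mount /dev/mapper/secret /mnt/secret
# ... when finished ...
sudo umount /mnt/secret
sudo cryptsetup luksClose secret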
On Sat, 2010-08-28 at 18:01 -0700, JD wrote:
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
It was you who made the off-the-wall and totally wrong statement that indirect blocks of files created by the scrub process ARE NOT SCRUBBED!
Bullshit. I said no such thing. Take the trouble to reread the above and understand what I actually said, not what your prejudice leads you to think I said. Even a child can understand the difference between "does not guarantee to do X" and "does not do X". Furthermore, you still insist on "does do X" without thus far producing a shred of evidence to support this.
Furthermore, it's clear from your replies that you still don't get what I'm talking about and can think of no comeback other than childish ad hominem attacks. This exchange is going nowhere.
poc
On 08/28/2010 06:22 AM, Patrick O'Callaghan wrote:
On Fri, 2010-08-27 at 21:53 -0700, JD wrote:
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
From scrub(1):
-X, --freespace Create specified directory and fill it with files until write returns ENOSPC (file system full), then scrub the files as usual. The size of each file can be set with -s, otherwise it will be the maximum file size creatable given the user’s file size limit or 1g if unlimited.
However note that neither of these methods guarantees to scrub indirect blocks in the filesystem that were used to create the space-filling files. Maybe they do, maybe they don't, it's not clear.
poc
What is not clear from the man page is, when using the -X option, whether or not the directory and the files created in the directory are automatically deleted before scrub exits. I will assume that they are. Currently scrubbing 189GB free space.
On 08/27/2010 11:53 PM, JD wrote:
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
You need to do that as the UID permitted to use the reserved blocks if you really want to clear _all_ the free space. Note that if the drive has reallocated any bad sectors there could still be some old data present on the disk.
On 08/28/2010 06:31 AM, Robert Nichols wrote:
On 08/27/2010 11:53 PM, JD wrote:
On 08/27/2010 09:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
Someone (not on this list) described a simple way to do this. Scrubbing files to be deleted is easy enough - there are utils for it already. But scrubbing existing free space is slower and requires patience.
cd to the root of the partition. sudo dd if=/dev/zero of=ZERO bs=20M
When the dd program fails to write any further, you have grabbed and zeroed all available free disk blocks in the partition.
Now all you do is use the command scrub to scrub the file ZERO and when done the file is deleted.
You need to do that as the UID permitted to use the reserved blocks if you really want to clear _all_ the free space. Note that if the drive has reallocated any bad sectors there could still be some old data present on the disk.
Guess you might call that caveat emptor. There is nothing you can do about relocated bad blocks.
Perhaps there might be a util that can bypass the driver and send direct commands to the drive via a SATA port, or an IDE interface, or a SCSI port, to write random data to the bad blocks - but such a util would have to be a standalone utility. Perhaps there is a boot disk out there with such a util.
On Sat, Aug 28, 2010 at 08:36:48 -0700, JD jd1008@gmail.com wrote:
There is nothing you can do about relocated bad blocks.
You can use secure erase. That is supposed to try to do something with those. You can also use encryption in the first place. It will be some work to get from where the system is now to a state where secure erase has been used and stuff is put back on top of an encrypted block device, but it can be done.
On Sat, Aug 28, 2010 at 5:13 PM, Bruno Wolff III bruno@wolff.to wrote:
On Sat, Aug 28, 2010 at 08:36:48 -0700, JD jd1008@gmail.com wrote:
There is nothing you can do about relocated bad blocks.
You can use secure erase. That is supposed to try to do something with those. You can also use encryption in the first place. It will be some work to get from where the system is now to a state where secure erase has been used and stuff is put back on top of an encrypted block device, but it can be done.
This is interesting - I recently had a machine with an HD that passed the health check, but palimpsest popups kept appearing, warning of possible disc failure because the pending reallocated sector count was going up. It took me a week to get a replacement machine to slot in, and when I got the dying machine out of service I booted a live CD and ran hdparm commands to issue a secure erase and kill the disc data. At the end of the secure erase process, which appeared to end normally, I booted a live CD again and used palimpsest to check the "now erased" hard drive in its original position in the processor box. The report still showed a non-zero pending sector count, so I was not sure what the state of the data was. I had not done this before; usually I simply open up the HD, remove the platters and drive head, and then "deal" with the platters such that they would not be likely to be read again!
However, in this case I was interested to see (purely from an inquisitive viewpoint) what the disc state was after the secure erase process, issued via hdparm commands, had completed.
I was still unsure whether, before physical destruction, any data could have been retrieved from the drive. However, hdparm -I /dev/sda showed an apparently clean disc with no partitions on it.
On 08/27/2010 11:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
This really is an interesting thread... but why do you want to scrub free blocks?
I can think of a few reasons:
1) You're running Linux in a virtual machine. Zeroing all free space makes it easier to compress the image file (a rough sketch of this follows after this post).
2) You've been looking at something "bad" and want to make sure all traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
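On the virtual machine point, one common way to do it, assuming a qcow2 image and qemu-img on the host (the file names are only illustrative):

# inside the guest: fill the free space with zeros, then delete the filler
dd if=/dev/zero of=/zerofill bs=1M; sync; rm -f /zerofill
# shut the guest down, then rewrite the image on the host; -c compresses,
# and the zeroed clusters shrink to almost nothing
qemu-img convert -O qcow2 -c guest.qcow2 guest-compact.qcow2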
On Sat, Aug 28, 2010 at 13:05:08 -0500, Steven Stern subscribed-lists@sterndata.com wrote:
I can think of a few reasons:
- You're running Linux in a virtual machine. Zeroing all free space
makes it easier to compress the image file.
- You've been looking at something "bad" and want to make sure all
traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
You don't like nosy people snooping on your stuff.
You had proprietary data on your machine previously and you need to be sure it is gone now.
On 08/28/2010 11:05 AM, Steven Stern wrote:
On 08/27/2010 11:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
This really is an interesting thread... but why do you want to scrub free blocks?
I can think of a few reasons:
- You're running Linux in a virtual machine. Zeroing all free space
makes it easier to compress the image file.
- You've been looking at something "bad" and want to make sure all
traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
Too dumb a question to respond to.
Steven Stern wrote:
On 08/27/2010 11:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
This really is an interesting thread... but why do you want to scrub free blocks?
I can think of a few reasons:
- You're running Linux in a virtual machine. Zeroing all free space
makes it easier to compress the image file.
- You've been looking at something "bad" and want to make sure all
traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
One thing is that if you expect the police on your doorstep, you are screwed anyway. There is NO truly secure method, other than complete pulverization, to destroy disk data.
If you want to clear the free space and reuse it, then the methods described are sufficient.
James McKenzie SSCP 367830 (Yes, I could give a real technical description of why, but it involves a bunch of physics and electrical stuff that usually drives folks nuts; suffice it to say that Data Discovery and Restore of Tucson can do what I describe to a hard drive that was thrown into a fire and the heads were melted to the disk. The police got what they wanted; the disk had child porn on it and had been 'secure erased' as well.)
On 08/28/2010 05:32 PM, James McKenzie wrote:
Steven Stern wrote:
On 08/27/2010 11:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
This really is an interesting thread... but why do you want to scrub free blocks?
I can think of a few reasons:
- You're running Linux in a virtual machine. Zeroing all free space
makes it easier to compress the image file.
- You've been looking at something "bad" and want to make sure all
traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
One thing is that if you expect the police on your doorstep, you are screwed anyway. There is NO truly secure method, other than complete pulverization, to destroy disk data.
If you want to clear the free space and reuse it, then the methods described are sufficient.
James McKenzie SSCP 367830 (Yes, I could give a real technical description of why, but it involves a bunch of physics and electrical stuff that usually drives folks nuts; suffice it to say that Data Discovery and Restore of Tucson can do what I describe to a hard drive that was thrown into a fire and the heads were melted to the disk. The police got what they wanted; the disk had child porn on it and had been 'secure erased' as well.)
It seems that at least 2 individuals on this list have made the assumption that hiding data, encrypting data and erasing data is for the purpose of hiding criminal activity. Such an assumption would put hundreds of millions of people, and possibly a lot more, within these two individuals' category of criminals or highly suspected of criminal activity.
Here we are in the 21st century and we still discover that there are people with such narrow minds, that they can easily pass through a 1 picometer wide slot. I am being very generous with that slot width.
JD wrote:
On 08/28/2010 05:32 PM, James McKenzie wrote:
Steven Stern wrote:
On 08/27/2010 11:25 PM, JD wrote:
Is there a Linux util to scrub free disk blocks and keep everything else intact ??
This really is an interesting thread... but why do you want to scrub free blocks?
I can think of a few reasons:
- You're running Linux in a virtual machine. Zeroing all free space
makes it easier to compress the image file.
- You've been looking at something "bad" and want to make sure all
traces of it are gone before the lawyers/cops arrive.
There are probably other good reasons, but I'm stalled out here.
There are lots of good reasons. You have been accessing your bank accounts and want to give the drive to your highly intelligent prodigy, and they know how to read a drive and recover data. You have been using the drive in your business and don't want business data going home with you when you pull the drive and use it in your laptop.
One thing is that if you expect the police on your doorstep, you are screwed anyway. There is NO truly secure method, other than complete pulverization, to destroy disk data.
If you want to clear the free space and reuse it, then the methods described are sufficient.
James McKenzie SSCP 367830 (Yes, I could give a real technical description of why, but it involves a bunch of physics and electrical stuff that usually drives folks nuts; suffice it to say that Data Discovery and Restore of Tucson can do what I describe to a hard drive that was thrown into a fire and the heads were melted to the disk. The police got what they wanted; the disk had child porn on it and had been 'secure erased' as well.)
It seems that at least 2 individuals on this list have made the assumption that hiding data, encrypting data and erasing data is for the purpose of hiding criminal activity. Such an assumption would put hundreds of millions of people, and possibly a lot more, within these two individuals' category of criminals or highly suspected of criminal activity.
Did I say you were hiding criminal activity? There are LOTS of legitimate reasons to encrypt data and to clean it off. If you work in the PCI industry or for the US Federal Government, you have to do both, on a regular basis. This is why the NSA has Secure Erase available. Other folks don't want 'lingering' data to come back and bite them. However, if you think that Secure Erase or any other program is going to completely wipe your hard drive, that is not so. Secure Erase only gives the ability to reuse the disk in the same operating environment. That means if you were processing company proprietary data, then you cannot give the drive away. It has to be physically destroyed. That is the only way to ensure data is not available.
Here we are in the 21st century and we still discover that there are people with such narrow minds, that they can easily pass through a 1 picometer wide slot. I am being very generous with that slot width.
My mind is not narrow. All I did was state "IF you are expecting the police on your doorstep", not WHEN. If you are using these for legitimate reasons, that is wonderful. You are forward thinking, a lot better than most folks who find that they shipped their 'clean drives' to others, only to find out they pulled a bunch of personal information and used it for ill-intended reasons.
James McKenzie SSCP 367830
On Sunday, August 29, 2010 01:32:34 James McKenzie wrote:
One thing is that if you expect the police on your doorstep, you are screwed anyway. There is NO truly secure method, other than complete pulverization, to destroy disk data.
Whenever I see a statement like this (and this isn't the first time), I get baffled over and over about how this is possible.
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
OTOH, if the premise is false, it means that I can fill the disk with an arbitrary amount of information and have it all recoverable in principle. That doesn't sound very reasonable, because the disk is made ultimately of a finite number of particles, and can thus be in only finitely many different states. I see no way a piece of metal can hold an infinite amount of information.
Or let me put it more bluntly --- take a cup and fill it up with oil. Then try to fill it with the equal amount of water. The oil is going to overflow, and will be lost beyond any chance of recovery from within the cup. I see no way to avoid that. The structure of information storage on a hard disk is technically more intricate, but ultimately obeys the same principle.
So am I missing something?
Best, :-) Marko
There is NO truly secure method, other than complete pulverization, to destroy disk data.
Format the HD with DOS 3.1 or 5 - that'll get rid of everything. And if unsure, after that format to ext2... everything gone. Roger
On Sun, Aug 29, 2010 at 07:46:49 +0100, Marko Vojinovic vvmarko@gmail.com wrote:
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
At least at one time it was possible because the data is stored in a region and when overwriting the region you don't hit the same spot every time. With the right equipment you could see these areas and tell what data had been written in that spot in the past.
I have heard that with the current generation of disks this is no longer practical. But practical is mostly defined by what your budget is; so if the data is valuable enough, it is potentially recoverable.
On 29 Aug 2010 at 3:16, Bruno Wolff III wrote:
On Sun, Aug 29, 2010 at 07:46:49 +0100, Marko Vojinovic vvmarko@gmail.com wrote:
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
At least at one time it was possible because the data is stored in a region and when overwriting the region you don't hit the same spot every time. With the right equipment you could see these areas and tell what data had been written in that spot in the past.
I have heard that with the current generation of disks this is no longer practical. But practical is mostly defined by what your budget is; so if the data is valuable enough, it is potentially recoverable.
Recalling a presentation at Defcon 2006, the space between tracks would contain information that could determine what was there before a format operation. A DES level wipe required writing 7 different patterns to every sector to make this practically impossible.
I don't do that level of disk wiping, but I do use scripts to clear the unused space before doing disk/partition images. It makes a huge difference in the image size, since zeroed-out sectors compress to almost nothing in the image file. I did an image of an 80GB disk after a full install of Fedora, and it made a 12GB image file. After clearing, the image was only 2.5GB.
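A minimal version of that kind of script might look like this, assuming the partition to be imaged is /dev/sda1 and is mounted at /mnt (the paths are only examples):

# zero out the unused blocks of the filesystem, then free them again
sudo dd if=/dev/zero of=/mnt/zerofill bs=1M
sync
sudo rm -f /mnt/zerofill
sudo umount /mnt
# image the partition; long runs of zeros compress to almost nothing
sudo dd if=/dev/sda1 bs=1M | gzip -c > sda1.img.gz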
Michael D. Setzer II
On Sunday, August 29, 2010 09:53:48 Michael D. Setzer II wrote:
Marko Vojinovic vvmarko@gmail.com wrote:
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
At least at one time it was possible because the data is stored in a region and when overwriting the region you don't hit the same spot every time. With the right equipment you could see these areas and tell what data had been written in that spot in the past.
Recalling a presentation at Defcon 2006, the space between tracks would contain information that could determine what was there before a format operation. A DES level wipe required writing 7 different patterns to every sector to make this practically impossible.
Ok, so if I read this correctly, after roughly 7 rewrites of the whole disk with random contents, there is quite a high probability that the original data is gone beyond any recognition ability, no matter how high the budget of your favorite spy organization is.
So if you want to be on a safe side, fill up the whole disk from /dev/random over and over 20 times, and the original data will be completely gone. Even for NSA & friends. :-)
Best, :-) Marko
On Sun, 2010-08-29 at 20:21 +0100, Marko Vojinovic wrote:
So if you want to be on a safe side, fill up the whole disk from /dev/random over and over 20 times, and the original data will be completely gone. Even for NSA & friends. :-)
Best, :-) Marko
Actually, the 'standard' safe side is 25 times. And besides /dev/random, you have the 'shred' command, which can do the same.
From the shred man page:
Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data.
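Typical shred invocations, with the device and file names as examples only (overwriting a whole device destroys everything on it):

# 25 overwrite passes on a whole device, verbose, with a final pass of zeros
sudo shred -v -n 25 -z /dev/sdX
# or overwrite a single file and unlink it afterwards
shred -v -n 25 -u private.dat

Note that the shred man page itself warns that overwriting individual files is not reliable on journaling, copy-on-write or RAID-based filesystems, since the overwrite is not guaranteed to land on the original blocks; overwriting the whole device avoids that caveat.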
kalinix wrote:
On Sun, 2010-08-29 at 20:21 +0100, Marko Vojinovic wrote:
So if you want to be on a safe side, fill up the whole disk from /dev/random over and over 20 times, and the original data will be completely gone. Even for NSA & friends. :-)
Best, :-) Marko
Actually, the 'standard' safe side is 25 times. And besides /dev/random, you have the 'shred' command, which can do the same:
Correct. However, this does take time.
From the shred man page:
Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data.
Yep. If the Feds wanted to see if you were pushing Child Porn or running books for a drug cartel, they may take the time to recover it. If you were faking your taxes, maybe not so. If you were giving your drive to charity, they don't have the time/resources. It does depend on who is doing the looking. I've seen divorce lawyers, when going after big money, get the DDR folks involved. However, you are correct: if you use something like shred or Secure Erase, you can state that the drive is 'clean'. Without it, you cannot. hdparm is called by either of these in the Linux environment.
A clean drive is a happy drive, for both you and the drive.
James McKenzie
On 08/29/2010 12:21 PM, Marko Vojinovic wrote:
On Sunday, August 29, 2010 09:53:48 Michael D. Setzer II wrote:
Marko Vojinovic vvmarko@gmail.com wrote:
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
At least at one time it was possible because the data is stored in a region and when overwriting the region you don't hit the same spot every time. With the right equipment you could see these areas and tell what data had been written in that spot in the past.
Recalling a presentation at Defcon 2006, the space between tracks would contain information that could determine what was there before a format operation. A DES level wipe required writing 7 different patterns to every sector to make this practically impossible.
Ok, so if I read this correctly, after roughly 7 rewrites of the whole disk with random contents, there is quite a high probability that the original data is gone beyond any recognition ability, no matter how high the budget of your favorite spy organization is.
So if you want to be on a safe side, fill up the whole disk from /dev/random over and over 20 times, and the original data will be completely gone. Even for NSA & friends. :-)
Best, :-) Marko
Has anyone thought of the effect of disk operating temperature on the regions where each bit resides? I mean that because of this difference in operating temperatures, platters and rw heads will shrink or expand. This would require the electronic logic to take account of this and account for some wiggle room for each bit's region. This would seem to imply that each bit region might have more than one value, depending on what temperature that value was written. Is this not what forensics labs use to extract data that was thought to have been overwritten?
On Sunday, August 29, 2010 09:16:28 Bruno Wolff III wrote:
On Sun, Aug 29, 2010 at 07:46:49 +0100, Marko Vojinovic vvmarko@gmail.com wrote:
Starting from the premise that every hard disk has in principle limited capacity to store data, one can always fill it up completely, then rewrite it completely again. I see no way of the old data being recoverable, because this is in contradiction with the fact that the disk was filled up completely two times. The old data has to be destroyed in order to make room for new data. At least as far as I can understand it.
At least at one time it was possible because the data is stored in a region and when overwriting the region you don't hit the same spot every time. With the right equipment you could see these areas and tell what data had been written in that spot in the past.
Yes, but given a certain number of rewrites of each region, the old data is bound to be rewritten sooner or later, right? In addition, if this overwriting is a random process in terms of the precise spot where data is written, you cannot hope to be able to recover *all* (or a high percentage of) old information. I guess that after 2 or 3 rewrites of the whole disk, a large portion of the original data gets completely overwritten, and only random bits of that data can be recovered, in an amount that is too small to be useful for anything.
I have an old Seagate 1.2 GB hard disk from 10 years ago, which is still operational on one of my machines at home. If we assume that over these 10 years I have rewritten it 1000 times, the total amount of data that has passed through the disk is roughly 1 TB. I simply don't believe anyone is able to recover the whole 1 TB of data from this disk, no matter how big a budget and how much equipment one may have. I don't even believe they could recover even 20% of that terabyte.
So I still don't see how the original statement may hold. There just has to be a limit of how much information can be stored on a disk, and once you overflow this limit, old data is bound to get lost.
Best, :-) Marko
On Sat, Aug 28, 2010 at 17:32:34 -0700, James McKenzie jjmckenzie51@earthlink.net wrote:
One thing is that if you expect the police on your doorstep, you are screwed anyway. There is NO truly secure method, other than complete pulverization, to destroy disk data.
That depends on how good they are. Full disk encryption can prevent access if they power your machine down without taking measures to copy what is in memory (to retrieve the disk keys). A dead man system can deal with leaving it powered up for too long.
If you are running a gambling operation or the like and can afford to have a trusted person available at all times to react to a raid, you can arrange for memory to be cleared in a very short period of time. You should have enough time even in a no knock raid.