Dear All,
Does the size of the hard disk affect Fedora's performance?
I am looking for a 2 T disk to replace my current disk, but some people are warning me about performance; they say I should buy a disk of at most 1 T. Are they correct?
Thanks in advance,
Paul
On 21.06.2013 23:16, Paul Smith wrote:
Does the size of the hard disk affect Fedora's performance?
I am looking for a 2 T disk to replace my current disk, but some people are warning me about performance; they say I should buy a disk of at most 1 T. Are they correct?
these people are pure idiots - period
On Fri, Jun 21, 2013 at 10:42 PM, Reindl Harald h.reindl@thelounge.net wrote:
Does the size of the hard disk affect Fedora's performance?
I am looking for a 2 T disk to replace my current disk, but some people are warning me about performance; they say I should buy a disk of at most 1 T. Are they correct?
these people are pure idiots - period
Thanks, Reindl. They are all MS Windows users, and they say that if you use a disk larger than 1 T as a boot disk, performance will suffer. Maybe that is true with MS Windows but not with Fedora.
Paul
Allegedly, on or about 21 June 2013, Paul Smith sent:
They are all MS Windows users, and they say that if you use a disk larger than 1 T as a boot disk, performance will suffer. Maybe that is true with MS Windows but not with Fedora.
Well, there's this side of that situation:
Have you seen Windows complain at boot up that you hadn't shut down properly, and it needs to check the drive? (Of course you did shut down properly, *it* screwed up doing so.) Then you have the fun of waiting for it to scan through one hell of a huge drive. More so if your computer likes to regularly screw up.
Then there's drive fragmentation. Windows still seems to be horrid for that. I'd hate to have to wait for a 2 TB drive to defrag. Even if I wasn't sitting at the box waiting for it to finish, and instead left it to run overnight, it'd be at it all night.
On 22.06.2013 14:18, Tim wrote:
Allegedly, on or about 21 June 2013, Paul Smith sent:
They are all MS Windows users, and they say that if you use a disk larger than 1 T as a boot disk, performance will suffer. Maybe that is true with MS Windows but not with Fedora.
Have you seen Windows complain at boot up that you hadn't shut down properly, and it needs to check the drive? (Of course you did shut down properly, *it* screwed up doing so.) Then you have the fun of waiting for it to scan through one hell of a huge drive. More so if your computer likes to regularly screw up.
which typically does not happen
Then there's drive fragmentation. Windows still seems to be horrid for that. I'd hate to have to wait for a 2 TB drive to defrag. Even if I wasn't sitting at the box waiting for it to finish, and instead left it to run overnight, it'd be at it all night.
which has nothing to do with *a disk* larger than 1 TB; it depends more on the partitions you create
in the context of Linux it does not matter at all
Tim:
Have you seen Windows complain at boot up that you hadn't shut down properly, and it needs to check the drive? (Of course you did shut down properly, *it* screwed up doing so.) Then you have the fun of waiting for it to scan through one hell of a huge drive. More so if your computer likes to regularly screw up.
Reindl Harald:
which typically does not happen
Maybe not in your sheltered world, but it's an all too familiar message seen by scads of other people.
Then there's drive fragmentation. Windows still seems to be horrid for that. I'd hate to have to wait for a 2 TB drive to defrag. Even if I wasn't sitting at the box waiting for it to finish, and instead left it to run overnight, it'd be at it all night.
which has nothing to do with *a disk* larger than 1 TB; it depends more on the partitions you create
It has an awful lot to do with such large discs. What's the default partitioning for Windows? One partition that covers the entire drive.
The next most common scheme for pre-installed Windows is one huge partition for Windows, and a small partition for recovery purposes. Which still leaves you with a huge partition to check when Windows regularly shoots itself in the foot.
And don't try to tell me otherwise. In something like 18 years of observing all incarnations of Windows shooting itself in the foot, on brand new installs, on well maintained installs, etc., nobody can convince me that it's not unstable. I have never regretted abandoning it.
in the context of Linux it does not matter at all
The context was Windows users advising against large drives because of "performance" issues. My counter was that Linux is different, and those are two very common drive-based time-wasters with Windows.
Ok, folks, I just want to inject a bit of reality here. I started working on Unix internals in 1980, and have worked ever since on just about any OS that has come my way--almost all variants of Unix, Linux, Windows, and a bunch of others that are irrelevant to this conversation. Why do I say this? To point out that I work in a heterogeneous environment, and have for a very long time.
In the following, I respond both to Reindl and to the poster he's responding to (since I didn't save the original post); note the double carets.
Once, long ago--actually, on Sat, Jun 22, 2013 at 07:27:11AM CDT--Reindl Harald (h.reindl@thelounge.net) said:
... On 22.06.2013 14:18, Tim wrote:
Have you seen Windows complain at boot up that you hadn't shut down properly, and it needs to check the drive? ...
We've all seen Unix/Linux have the same complaint, and force a fsck. ALL operating systems occasionally have cause to believe their filesystem(s) need checking. And ALL operating systems occasionally crash, or have filesystem issues.
Also, since NTFS, the Windows filesystem has been at least as stable as most *nix variants in operation.
... Then you have the fun of waiting for it to scan through one hell of a huge drive. ...
Never sat through a fsck of a really big *nix drive or array, have you? It's all a matter of the level of the check, allocation unit size, number of large files, and volume of data.
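To put rough numbers on the point, here's a back-of-envelope sketch in C. Every figure in it is an assumption chosen for illustration, not a measurement; the takeaway is only that check time tracks the number of files and the volume of allocated metadata, not the capacity printed on the drive's label.

#include <stdio.h>

/* Illustrative only: every number below is an assumption, not a
 * measurement.  A full fsck walks allocated metadata (inodes,
 * indirect blocks, bitmaps), so its run time tracks file count and
 * metadata volume rather than raw disk size. */
int main(void)
{
    double files        = 10e6;   /* assumed: 10 million files          */
    double meta_bytes   = 512.0;  /* assumed: metadata bytes per file   */
    double seq_rate     = 100e6;  /* assumed: 100 MB/s sequential reads */
    double random_share = 0.05;   /* assumed: 5% of files need a seek   */
    double iops         = 150.0;  /* assumed: random reads per second   */

    double total_meta = files * meta_bytes;                   /* bytes */
    double seq_time   = total_meta * (1.0 - random_share) / seq_rate;
    double rand_time  = files * random_share / iops;
    double total      = seq_time + rand_time;

    printf("metadata volume: %.2f GB\n", total_meta / 1e9);
    printf("estimated fsck : %.0f s (~%.0f min)\n", total, total / 60.0);
    return 0;
}

With these made-up inputs the estimate lands near an hour; double the file count and it roughly doubles, while doubling the empty space on the disk changes nothing.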
More so if your computer likes to regularly screw up.
which typically does not happen
True. Quite seriously, Windows filesystems since NTFS are nowhere near as fragile as they used to be. Stability improved significantly after XP SP3, as well, and especially under Windows Server.
Then there's drive fragmentation. Windows still seems to be horrid for that. I'd hate to have to wait for a 2 TB drive to defrag. Even if I wasn't sitting at the box waiting for it to finish, and instead left it to run overnight, it'd be at it all night.
which has nothing to do with *a disk* larger than 1 TB; it depends more on the partitions you create
Even more dependent on the allocation unit size selected at filesystem creation.
in the context of Linux it does not matter at all
Beg to differ. Do you all know how a file is structured in *nix? There is a primary inode. In this are direct block pointers--the number varies depending on OS, filesystem type, etc.--but generally there are 12 pointers to direct blocks, 1 to a single-indirect block, 1 to a double-indirect block, and 1 to a triple-indirect block.
What does this mean? Well, essentially, *nix tries to optimize for small files--that is, files that can fit in twelve blocks. How big a block is depends on the allocation unit you picked when formatting the filesystem (as with NTFS). But once you go over that size, things start to get less efficient. Grow beyond the storage that can be addressed by the direct block pointers, and you have two lookups to carry out--one for the indirect block, then the pointers there. Double indirects guarantee three lookups; triple, four. And every one of the allocation units can be scattered anywhere on the disk--meaning that, after a while, yes, *nix is fragmented, too.
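To make the arithmetic concrete, here's a minimal C sketch assuming classic ext2-style parameters--4 KiB blocks and 4-byte block pointers, both of which are choices made at mkfs time, so treat this as one example configuration rather than universal constants. Each pointer tier addresses vastly more data, but costs one more lookup.

#include <stdio.h>

/* Capacity addressable by each pointer tier of a classic ext2-style
 * inode.  Assumes 4 KiB blocks and 4-byte block pointers; both are
 * filesystem-creation choices, so this is one example configuration. */
int main(void)
{
    const double block  = 4096.0;       /* allocation unit, bytes      */
    const double ptrs   = block / 4.0;  /* pointers per indirect block */

    const double direct = 12.0 * block;               /* no extra lookup */
    const double single = ptrs * block;               /* 1 extra lookup  */
    const double dbl    = ptrs * ptrs * block;        /* 2 extra lookups */
    const double triple = ptrs * ptrs * ptrs * block; /* 3 extra lookups */

    printf("direct blocks   : %6.0f KiB\n", direct / 1024.0);
    printf("single indirect : %6.0f MiB more\n", single / (1024.0 * 1024.0));
    printf("double indirect : %6.0f GiB more\n",
           dbl / (1024.0 * 1024.0 * 1024.0));
    printf("triple indirect : %6.0f TiB more\n",
           triple / (1024.0 * 1024.0 * 1024.0 * 1024.0));
    return 0;
}

So a file under 48 KiB is reachable straight from the inode, while a multi-gigabyte file pays double-indirect lookups on every cold read--which is exactly where scattered allocation units start to hurt.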
NTFS has a similarly complicated, but very different, system for directory and file management. But it, too, ends up having mechanisms for dealing with larger files, and it, too, has to deal with fragmentation. And the effects of fragmentation have been reduced in NTFS over the earlier FAT filesystems.
Both *nix and Windows play games with the disk drivers (f'rinstance, look up the elevator algorithm, and scatter-gather), caching, etc. to minimize the effect of fragmentation (as do, in fact, all operating systems). Disks have done their part to obfuscate the issue, since the allocation unit you think you're reading is certainly remapped internally by the disk firmware to different physical block(s).
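As an aside, here's a toy C sketch of the elevator idea just mentioned (the LOOK variant): sort the pending block requests, sweep the head in one direction servicing them, then reverse. It's a textbook illustration, not code from any real kernel.

#include <stdio.h>
#include <stdlib.h>

/* Toy LOOK/elevator scheduler: service pending requests in one sweep
 * direction from the current head position, then reverse.  Purely
 * illustrative; real kernels are far more elaborate. */
static int cmp(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

static void elevator(long head, long *req, int n)
{
    qsort(req, n, sizeof *req, cmp);

    int up = 0;                       /* first request at/above the head */
    while (up < n && req[up] < head)
        up++;

    printf("service order from head %ld:", head);
    for (int i = up; i < n; i++)      /* sweep upward ...                */
        printf(" %ld", req[i]);
    for (int i = up - 1; i >= 0; i--) /* ... then reverse and sweep down */
        printf(" %ld", req[i]);
    printf("\n");
}

int main(void)
{
    long pending[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    elevator(53, pending, (int)(sizeof pending / sizeof pending[0]));
    return 0;
}

Run it and the head visits 65, 67, 98, 122, 124, 183, then turns around for 37 and 14--far fewer long seeks than servicing the queue in arrival order.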
Both *nix and Windows play other games with their internal data structures to mitigate filesystem corruption and hardware failure--log files, multiple MFTs or primary inode tables, etc., and these have gotten both more complicated and sophisticated over time.
Essentially, all filesystems fragment. All filesystems and operating systems have mechanisms to minimize the effects of fragmentation. And all have gotten very, very much better at it over time.
We do the cause of promoting Linux no good (I've given up on Unix _per se_) if we carelessly repeat canards that are no longer applicable, or at best are much less applicable, when discussing the differences between operating systems.
Enough pre-coffee pontificating. I just hit a tipping point, and had to point out that before we post something we "all know"--"Windows filesystems are fragile", "Linux doesn't fragment", etc.--we should think twice.
Cheers,
--
Dave Ihnat
dihnat@dminet.com
On 23.06.2013 14:50, Tim wrote:
which has nothing to do with *a disk* larger than 1 TB; it depends more on the partitions you create
It has an awful lot to do with such large discs. What's the default partitioning for Windows? One partition that covers the entire drive.
who cares about default partitioning on Windows? who cares about default partitioning on Fedora? who cares about default partitioning at all?
And don't try to tell me otherwise. In something like 18 years of observing all incarnations of Windows shooting itself in the foot
i have used Windows long enough, but did "default partitioning" bother me? no, not at all after the first month
On 06/21/2013 04:16 PM, Paul Smith wrote:
Dear All,
Does the size of the hard disk affect Fedora's performance?
I am looking for a 2 T disk to replace my current disk, but some people are warning me about performance; they say I should buy a disk of at most 1 T. Are they correct?
Thanks in advance,
Paul
I have a 2TB disk in this system and I haven't seen any speed issues. I used Fedora's defaults to partition it:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G  5.9M  3.9G   1% /dev/shm
tmpfs           3.9G   11M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda3        50G  9.4G   40G  20% /
tmpfs           3.9G  872K  3.9G   1% /tmp
/dev/sda2       477M  118M  334M  27% /boot
/dev/sda1       200M  288K  200M   1% /boot/efi
/dev/sda5       1.8T  246G  1.5T  15% /home
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-9YN164
Paul Smith phhs80@gmail.com writes:
On Fri, Jun 21, 2013 at 10:42 PM, Reindl Harald h.reindl@thelounge.net wrote:
Does the size of the hard disk affect Fedora's performance?
I am looking for a 2 T disk to replace my current disk, but some people are warning me about performance; they say I should buy a disk of at most 1 T. Are they correct?
What is their understanding of "performance"?
I/O ops/sec?
observed sustained data transfer rates like when copying files with rsync?
benchmark results?
specifications given by the manufacturer?
reliability and longevity?
price vs. what you get?
perceived responsiveness of the system?
noise and vibration levels, energy consumption, overall environmental impact?
heat production?
warranty given by the manufacturer and the quality of their support?
compatibility with RAID controllers?
error recovery capabilities?
connectivity and flexibility?
And in which way would the "performance" be affected? Are there any bottlenecks that need to be considered?
Use at least RAID-1, so you'll probably be buying at least two disks. Other than that, it depends on what you want or need, except that some combinations aren't possible (like 'high capacity + high data transfer rates != low price').
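If it helps to see that last trade-off as numbers, here's a trivial C sketch; the per-drive price is a placeholder, purely for illustration. A RAID-1 mirror buys redundancy by doubling the drive cost for one drive's worth of usable capacity--one of those combinations you can't have cheaply.

#include <stdio.h>

/* Trivial arithmetic behind the RAID-1 advice: mirroring n identical
 * drives yields the usable capacity of one.  The price here is a
 * placeholder assumption, not a quote. */
int main(void)
{
    double size_tb  = 2.0;   /* per-drive capacity, TB            */
    int    drives   = 2;     /* RAID-1 mirror pair                */
    double price_ea = 90.0;  /* assumed price per drive, currency */

    double usable = size_tb;           /* RAID-1: one drive's worth */
    double cost   = drives * price_ea;

    printf("raw %.1f TB, usable %.1f TB, cost %.0f (%.0f per usable TB)\n",
           drives * size_tb, usable, cost, cost / usable);
    return 0;
}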