Folks,
Just got a new 10TB file server with 2 3ware controllers giving me 2 x 5TB arrays.
Have installed FC4 on it but after partitioning with parted I get endless amounts of ext2 fs errors. I've tried reconfiguring and initialising the arrays using 3ware tools but same result. The drives are all showing good health. fdisk is out due to the 2TB limitation. fsck and tune2fs throw out various inode/system block errors.
I'd really rather not have to roll my own kernels for XFS/ReiserFS support and all my other disks are ext3 - I want to export these arrays to 100+ client systems.
Can anyone recommend a better partitioning tool so I can get error-free ext3 filesystems on both arrays?
Many thanks in advance for any advice,
Mac
On Thu, 3 Aug 2006, Maccy wrote:
-- fedora-list mailing list fedora-list@redhat.com To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Yes, use parted or gparted to partition, and put XFS on top of that :)
If you use XFS though, use kernel 2.6.17.7 or later; do not use 2.6.17 through 2.6.17.6 (corruption bug).
Justin.
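Justin's kernel-range warning can be turned into a quick sanity check before trusting XFS on a box. This is only a sketch: the function name is made up for illustration, and the `cut` assumes a Fedora-style version suffix after a hyphen.

```shell
# Sketch only: guard against the known-bad XFS kernels
# (2.6.17 through 2.6.17.6). Function name is invented for illustration.
is_bad_xfs_kernel() {
  case "$1" in
    2.6.17|2.6.17.[1-6]) return 0 ;;   # inside the corruption-bug range
    *) return 1 ;;                     # outside it
  esac
}

# Strip any distro suffix (e.g. "-1.2054_FC5") before checking.
if is_bad_xfs_kernel "$(uname -r | cut -d- -f1)"; then
  echo "WARNING: this kernel is in the XFS corruption-bug range; use 2.6.17.7 or later"
else
  echo "kernel not in the known-bad XFS range"
fi
```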
On Thu, 3 Aug 2006, Maccy wrote:
Yes, use parted or gparted to partition, and put XFS on top of that :)
If you use XFS though, use kernel 2.6.17.7 or later; do not use 2.6.17 through 2.6.17.6 (corruption bug).
I'm about to hit a similar problem. As far as I understand things, with a 2.6 kernel, if you partition with parted and then run mke2fs on those partitions, you should be OK in terms of addressing 5TB. What block size does 'dumpe2fs -h /dev/yourpartition' report? My guess is that it should be 4096 on a 5TB 'device'.
Do you have a screendump of your parted stuff?
Terry.
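For reference, Terry's check looks like this. The device name `/dev/sdb1` is an assumption; substitute whatever partition sits on your array.

```shell
# Print just the superblock header and pull out the block size.
# /dev/sdb1 is an assumed name for a partition on one of the 5TB arrays.
dumpe2fs -h /dev/sdb1 | grep -i 'block size'
# With 4096-byte blocks, ext2/ext3's 32-bit block numbers can in principle
# address 2^32 * 4096 bytes = 16 TiB, so a 5TB filesystem is within range.
```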
"m" == maccy <maccy@maccomms.co.uk> writes:
m> Just got a new 10TB file server with 2 3ware controllers giving me
m> 2 x 5TB arrays.
I've done many large arrays and have just recently been wrestling with a 12TB single-controller array.
The bottom line is that unless you employ gross hacks, you will not be able to both partition and boot from an array larger than 2TB. The standard MS-DOS partition table maxes out at 2TB, so for larger devices you need to use the new style GPT (GUID Partition Table), but then you cannot boot from that device. (HP says they have a way to do so, and have conveniently patented it.)
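The 2TB figure falls straight out of the on-disk format: MS-DOS (MBR) partition entries store the start sector and sector count in 32-bit fields, so with 512-byte sectors the arithmetic caps out at:

```shell
# MBR partition entries hold 32-bit sector counts; with 512-byte sectors
# that caps any single partition (and the addressable table) at 2 TiB.
max_sectors=$(( 2 ** 32 ))
sector_size=512
max_bytes=$(( max_sectors * sector_size ))
echo "max partition size: $max_bytes bytes ($(( max_bytes / 1024 ** 4 )) TiB)"
```

Anything bigger needs GPT, which uses 64-bit sector addresses.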
One option is to stick another drive in the machine and boot from that. Or put in two drives on the motherboard controller and set up a simple mirrored RAID so that you don't lose redundancy. You probably lose hot-swap, though, unless your machine has additional hot-swap bays you can use.
Another option is to enable 2TB auto-carving in the 3ware BIOS and recreate your arrays. This will present the arrays as a number of 2TB drives to the OS; you can join them all via LVM and then create volumes out of that.
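The auto-carve-plus-LVM route sketches out roughly as follows. The device names (`/dev/sdb` through `/dev/sdd`), the volume group name, and the `%FREE` syntax (which needs a reasonably recent LVM2) are all assumptions, not something from the original posts.

```shell
# Join the 2TB auto-carved units into one volume group, then carve
# a single logical volume out of it. Device and VG names are assumptions.
pvcreate /dev/sdb /dev/sdc /dev/sdd          # mark each 2TB unit as an LVM PV
vgcreate array0 /dev/sdb /dev/sdc /dev/sdd   # one VG spanning all the units
lvcreate -l 100%FREE -n data array0          # one LV using every free extent
mkfs.ext3 /dev/array0/data                   # ext3 on top; no partition table needed
```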
The latter option is probably the simplest, but I haven't evaluated the performance aspect yet.
There are some gross hacks that work to get GRUB installed on a >2TB volume as long as /boot is at the beginning of the drive (by supplying a fake geometry to GRUB), and a way to create an LVM partition that actually extends beyond the partitionable region of the disk. But it's not really something I'd recommend, as you never know when something might decide to cut off the end of that volume and hose your data.
I'm still experimenting with trying to get GRUB installed on the protective MBR of a GPT-partitioned disk, but this technically results in something that isn't GPT and will probably cause the partitioning tool to become rather upset. It probably also breaks recovery disks. Neither of those things is good for a machine with critical data.
m> Have installed FC4
With just a few weeks until FC4 goes to Legacy, are you really sure you don't want to install FC5 or something like CentOS?
- J<
The bottom line is that unless you employ gross hacks, you will not be able to both partition and boot from an array larger than 2TB. The standard MS-DOS partition table maxes out at 2TB, so for larger devices you need to use the new style GPT (GUID Partition Table), but then you cannot boot from that device. (HP says they have a way to do so, and have conveniently patented it.)
Thanks for all the suggestions. GPT labels are what came out of the box.
Another option is to enable 2TB auto-carving in the 3ware BIOS and recreate your arrays. This will present the arrays as a number of 2TB drives to the OS; you can join them all via LVM and then create volumes out of that.
The latter option is probably the simplest, but I haven't evaluated the performance aspect yet.
That's another possibility to add to the list - I'll probably investigate XFS first. I don't need to boot from these arrays - I already have a system disk and mirror disk in place.
m> Have installed FC4
With just a few weeks until FC4 goes to Legacy, are you really sure you don't want to install FC5 or something like CentOS?
If the truth be told, I am using Scientific Linux 4.3, but I didn't want to be told to go elsewhere (as this list has proved very helpful to me in the past)
:)
Thanks again
Mark
I wrote:
m> Just got a new 10TB file server with 2 3ware controllers giving me
m> 2 x 5TB arrays.
Hey guys,
Thanks for all the help, I have things sorted now.
It was my first time using parted - I was doing a mkpartfs with ext2 and that just produced all sorts of errors after mounting.
Following Chapter 12 of the RHEL 4 Sysadmin guide, I now have 2 nice ext3 partitions via mkpart (in parted) and mkfs.
Regards
Mark
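The working sequence Mark describes — parted's `mkpart` followed by a separate `mkfs`, rather than `mkpartfs` — looks roughly like this. The device name `/dev/sdb` is an assumption, and the percentage syntax needs a reasonably recent parted.

```shell
# Assumed device name /dev/sdb; adjust to your array.
# Label the array GPT (it may already be), make one big partition,
# then build the filesystem with mkfs instead of parted's mkpartfs.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
mkfs.ext3 /dev/sdb1
tune2fs -c 0 -i 0 /dev/sdb1   # optional: disable periodic fsck on a huge array
```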
On Thu, 2006-08-03 at 15:41 -0500, Jason L Tibbitts III wrote:
One option is to stick another drive in the machine and boot from that.
I would have thought that to be quite a practical approach. I'd presume that someone with a 10TB system was prepared to spend money, and that most of that space is for storage not OS and applications. Keeping system and storage completely separate would help with maintenance, too. You could remove storage while fiddling with system and applications and never have to worry about screwing it up.
"T" == Tim <ignored_mailbox@yahoo.com.au> writes:
On Thu, 2006-08-03 at 15:41 -0500, Jason L Tibbitts III wrote:
One option is to stick another drive in the machine and boot from that.
T> I would have thought that to be quite a practical approach.
It's not always that simple. It's now possible to get >2TB in a 1U case with no room for anything other than a single additional laptop drive. What now? Stick your OS on a non-redundant drive? Try to hot-glue another drive somewhere? Not really smart. One hack I've thought about is to stick a GPT-capable GRUB on a floppy or a USB dongle that's left permanently plugged in. (It's even possible to jury-rig an internal USB flash drive for this.)
The 2TB limit is going to start interfering much more often in the near future, especially with 1TB drives due out in a few months. I hope that server-class boards start switching over to EFI before that happens.
- J<