Folks,
Over the weekend, I re-installed my Ultra 5 with the March 1st tree, after hooking up a CompactFlash-to-IDE adapter on the on-board IDE. The system recognizes a 4GB CF card as an IDE disk and uses that to boot (since OpenBoot PROM requires that the device controller provide Fcode suited to booting, which in this case is only true of the on-board one), then uses a new SATA PCI adapter to run from a 320GB SATA disk.
I configured the system thus:
4 GB CF IDE disk:
  1) /boot (1GB)
  2) /boot (1GB)
  3) Whole disk
  4) Unused
320 GB SATA hard disk:
  1) /boot1 (1GB) - not actually used, kept for various purposes [0]
  2) LVM (100GB)
  3) Whole disk
  4) LVM (100GB)
  5) LVM (rest)
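To make the picture concrete, the mounts for one of the installs end up looking roughly like this in /etc/fstab (the device and volume group names below are only illustrative, not copied from my actual config):

# sketch of /etc/fstab for one install; names are made up
/dev/sda1                /boot   ext3   defaults   1 2
/dev/vg_rawhide/root     /       ext3   defaults   1 1
/dev/vg_rawhide/swap     swap    swap   defaults   0 0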
The OpenBoot PROM is configured to boot "rawhide" by default, via the SILO on the first partition (a boot-type partition with "partition-boot" set in SILO). There is also a regular "stable" Fedora install with its own SILO on the second partition, so I have two completely independent setups I can trivially switch between, as sketched below. For now they both pull from "development", but I will switch over when possible.
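For the curious, the switching boils down to something like the following at the OBP prompt, plus one SILO config per partition (the device alias, partition letters and silo.conf contents here are illustrative rather than lifted from my setup):

ok setenv boot-device disk:a     # default: the SILO on CF partition 1 ("rawhide")
ok boot disk:b                   # one-off boot of the SILO on partition 2 ("stable")

# minimal silo.conf sketch for one of the installs
partition=1
root=/dev/vg_rawhide/root
timeout=50
image=/vmlinuz
        label=rawhide
        initrd=/initrd.img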
I didn't install twice (Anaconda does not like this physical disk layout once you have already done one install, even when partitioning manually; at some point I can help figure out why - I know the Anaconda swap and LVM handling needs some attention). What I did instead was dd the contents of the LV from the first install to form the second, after imaging /boot the same way. I then changed the filesystem UUIDs with tune2fs, changed the hostname and IP, recreated the ssh host keys, and so on. So they are effectively two separate "hosts" on the same system, and it's always clear which one is running.
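For anyone wanting to reproduce the trick, the cloning amounts to something like this (the volume group, LV and device names below are placeholders, not my exact ones):

# create a second LV and copy the first root filesystem into it
lvcreate -L 20G -n root2 vg0
dd if=/dev/vg0/root of=/dev/vg0/root2 bs=4M

# image the first /boot partition onto the second
dd if=/dev/sda1 of=/dev/sda2 bs=1M

# give the copies fresh filesystem UUIDs so the two installs do not clash
tune2fs -U random /dev/vg0/root2
tune2fs -U random /dev/sda2

# then, inside the copied root: new hostname, new IP, new ssh host keys
# (Fedora of this era keeps these in /etc/sysconfig/network and
#  /etc/sysconfig/network-scripts/ifcfg-eth0)
rm -f /etc/ssh/ssh_host_*
service sshd restart    # the init script regenerates missing host keys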
I recommend this approach. The SATA disk is drastically faster than the original very legacy IDE on the Ultra 5. The use of a flash disk is very reliable (in theory, anyway) and I can even use eSATA if I want to, or the faster IDE or SATA ports on the SATA PCI upgrade card.
Jon.
[0] As has been noted by pale, and should be well known, LVM uses the whole of the disk/partition on which a PV is created, so partitions for LVM need to begin at cylinder 1 at the earliest, otherwise the PV header would overwrite the Sun disk label in cylinder 0. I decided to keep another 1GB here for dumping purposes; my standard layout is similar.
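In command terms, that just means making sure the partition handed to pvcreate does not include cylinder 0 (a sketch, with placeholder device and VG names):

# on the Sun disk label, create e.g. /dev/sdb2 starting at cylinder 1,
# so the PV header cannot overwrite the label in cylinder 0
fdisk /dev/sdb

pvcreate /dev/sdb2
vgcreate vg0 /dev/sdb2
lvcreate -L 100G -n root vg0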
On Mon, Mar 8, 2010 at 5:17 AM, Jon Masters <jonathan@jonmasters.org> wrote:
> I recommend this approach. The SATA disk is drastically faster than the original very legacy IDE on the Ultra 5. The use of a flash disk is very reliable (in theory, anyway) and I can even use eSATA if I want to, or the faster IDE or SATA ports on the SATA PCI upgrade card.
Hi, I can provide some throughput statistics that will hopefully give you an idea of what to expect. SATA does indeed give you a lot more speed, but there are some things to consider, so don't get your hopes up too much.
My system: Ultra 5, 440MHz CPU, 512MB RAM.
sda: Maxtor 80GB ATA drive connected to c0t0 (the internal IDE controller). The internal IDE controller only goes up to MWDMA2, which gives you around 16MB/s according to the standard.
sda: multi word DMA 2 (16MB/s) (EXT3 filesystem)
[root@medusa slowdrive]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 61.2546 s, 16.7 MB/s

sda: multi word DMA 2 (16MB/s) with RAID1 layer
[root@medusa mnt]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 62.3785 s, 16.4 MB/s

sda: multi word DMA 2 (16MB/s) with RAID1 layer and LVM layer
[root@medusa mnt]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 63.119 s, 16.2 MB/s
What you can see above is that the performance bottleneck is the IDE bus with its poor MWDMA2 capabilities. However, whatever you decide to do (RAID, LVM or RAID + LVM), you always get the full bus speed.
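If you want to check what mode the kernel negotiated on your own box before benchmarking, the boot messages and hdparm will tell you (exact wording varies by kernel and driver, so take the example line below as indicative only):

# libata logs the negotiated transfer mode at probe time
dmesg | grep -i 'configured for'
#   e.g. "ata1.00: configured for MWDMA2" on the internal controller

# hdparm shows which DMA modes the drive and driver agreed on
hdparm -I /dev/sda | grep -i dma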
sdc: Maxtor 1TB SATA disc connected to a Silicon Image 3512 PCI card. The kernel configured this with UDMA100 (100MB/s).

sdc: Ultra DMA 100 (with EXT3 FS)
[root@medusa /]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 23.9389 s, 42.8 MB/s
sdb: Maxtor 1TB SATA disc connected to a Silicon Image 3512 PCI card. The kernel configured this with UDMA100 (100MB/s).
sdb: Ultra DMA 100 with RAID 1 layer
[root@medusa mnt]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 39.9939 s, 25.6 MB/s

sdb: Ultra DMA 100 with LVM layer
[root@medusa mnt]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 35.3786 s, 28.9 MB/s

sdb: Ultra DMA 100 with RAID 1 and LVM layer
[root@medusa mnt]# dd if=/dev/zero of=myfile.txt bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 38.5377 s, 26.6 MB/s
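For reference, a RAID 1 + LVM stack like the one in the last test can be put together roughly like this (device names and sizes are only examples, not necessarily what I used):

# mirror two SATA partitions, then put LVM and ext3 on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
pvcreate /dev/md0
vgcreate vgtest /dev/md0
lvcreate -L 50G -n bench vgtest
mkfs.ext3 /dev/vgtest/bench
mount /dev/vgtest/bench /mnt

# then the same dd run as above
dd if=/dev/zero of=/mnt/myfile.txt bs=4096 count=250000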
The numbers above show that SATA on an Ultra 5 gives you much better speeds than the IDE disc, but not what you would get on a faster sparc. My guess is that even though UDMA is in use, the throughput is simply too much for the CPU. This becomes clearer once RAID or LVM is added, where you can see a drop of roughly 14-17 MB/s compared to the plain filesystem. Also keep in mind that these tests wrote from /dev/zero to the filesystem; an rsync from sdb to sdc (SATA to SATA) was very disappointing.
Still, LVM/RAID on SATA will give you better performance than IDE, just don't expect wonders. The rsync from sdb to sdc, for example, ran at 320KB/s...
Patrick