Hello,
I have a question about people's experience with SATA and RAID 1.
I have four SATA drives, two of which are presently md0 in an initial LVM setup. I am about to add the second two drives as md1 to the LVM. One controller is onboard and the other is a PCI card.
Now with the present setup, I have the md0 drives on the same SATA controller chip and I notice that when large files are written to the array, there is a pause in the operation of the workstation. Anything being done is affected.
I was wondering if moving the drives to different controllers would improve throughput, the way splitting IDE drives across separate controllers does?
I am thinking of doing it, but I have not seen any information on physically moving md drives from, let's say, sda to sdc.
As the LVM is looking at md0 and not the sd* devices, I don't think I will have to do anything with LVM during this.
I will backup the data just to be safe.
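In case it makes the plan clearer, I expect creating the new mirror and growing the volume group to look roughly like this (the device names and volume group name are just examples; mine may differ):

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# pvcreate /dev/md1
# vgextend VolGroup00 /dev/md1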
On 12/15/05, Robin Laing <Robin.Laing@drdc-rddc.gc.ca> wrote:
Now with the present setup, I have the md0 drives on the same SATA controller chip and I notice that when large files are written to the array, there is a pause in the operation of the workstation. Anything being done is affected.
I was wondering if moving the drives to different controllers would improve throughput, the way splitting IDE drives across separate controllers does?
Accepted practice is to run mirrors across controllers for reasons of performance (i.e., avoiding the situation you're seeing) and for fault tolerance (loss of a controller). If you have a correctly configured /etc/mdadm.conf file, then you should (in theory) be able to simply plug the drive into the other controller and have it work.
The md subsystem does not look at, for lack of a better term, the "hardware path" of the drives it's assembling into an array; it looks at the UUID that's tagged to each disk. This UUID is unique per RAID volume regardless of where the disks are physically located in your machine.
You can see this UUID by running the following command:
# mdadm --detail --scan | grep ARRAY
In fact, you can put the output of that command in /etc/mdadm.conf and have a more or less working configuration. You may choose to add "auto=md" at the end of each line to force the md software to create the relevant md entries in /dev at boot time should they be found to not exist.
You'll also need to add the following two lines above the ARRAY entries:
DEVICE partitions
MAILADDR root
The first line tells mdadm to use /proc/partitions when determining what it should scan for md plexes. You can specifically enumerate partitions or devices here instead as outlined in the manpage for mdadm.conf. The second line tells mdadm who to email should a plex in an md device fail or otherwise experience problems.
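Putting those pieces together, a minimal /etc/mdadm.conf for a single two-disk mirror would look something along these lines (the UUID here is only a placeholder; use the one the scan above reports for your array):

DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx auto=md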
Best of luck!
-- Chris
"I trust the Democrats to take away my money, which I can afford. I trust the Republicans to take away my freedom, which I cannot."
Chris,
Thank you for the information. I will give it a try and see what happens. md and lvm are both new to me. I wish I had thought of this when I added the drives.
If it is that easy, I will be a very happy boy. :) But as we all know, there is always something that will make that 5-minute job a 3-hour task.
Robin
Well, as I have been on holidays, I finally get a chance to respond. It went so easily that I was surprised. I gave myself a few hours to do the change and expected major headaches. None occurred.
As you stated, mdadm accepted the drives after moving them between the controllers. First good sign.
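For anyone else trying the same move, a quick way to confirm the array came back up is something like:

# cat /proc/mdstat
# mdadm --detail /dev/md0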
I added the space to the LVM with system-config-lvm, but the extra drive didn't work as expected. A quick search found that I had to use the CLI, so I did lvextend and resize2fs. All worked like a dream. Had a beer and enjoyed the added disk space.
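For the archives, the CLI steps boiled down to something along these lines (the volume group, logical volume, and size are placeholders rather than my exact names):

# lvextend -L +200G /dev/VolGroup00/LogVol00
# resize2fs /dev/VolGroup00/LogVol00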
Now that I know it is easy, I am not afraid to add two more drives and extend the partition again. At least until I build the RAID NAS for my home.
Man, media files take a lot of space.