Hello, it's all in the title: I'm sick of losing hours trying to get hardware working with badly outdated drivers (tech support isn't helping), so if anyone knows a RAID card that supports RAID 6 AND works on Fedora 23, please by all means share it. Price is not really an issue; I can go up to 300 euros/dollars if needed. I just want something that works for sure.
On 01/22/2016 04:49 AM, thibaut noah wrote:
I've always had good luck with Adaptec (now called Microsemi):
https://www.adaptec.com/en-us/
These are TRUE hardware RAID cards, not quasi-RAID. I've used them for years.

- Rick Stevens, Systems Engineer, AllDigital (ricks@alldigital.com)
LSI. Supports DDF metadata format, which mdadm can read. Less chance of vendor lock-in.
Chris Murphy
Allegedly, on or about 22 January 2016, Rick Stevens sent:
I've always had good luck with Adaptec (now called Microsemi):
https://www.adaptec.com/en-us/
These are TRUE hardware RAID cards, not quasi-RAID. I've used them for years.
Advice I've read elsewhere: if you're going to use hardware RAID, buy a spare controller (i.e. buy at least two of the same model). You don't want to lose all access to your drives at some time in the future if your card fails and there's no suitable drop-in replacement.
Changing the controller card is going to be an awful lot easier than restoring all your backups to a new RAID. Or worse, losing everything because you have no backup.
On 01/22/2016 01:49 PM, thibaut noah wrote:
Do not dismiss the software RAID option too easily. Nothing beats software RAID in terms of reliability, freedom to get at your data, and total control over what's happening to it. Consider that with $300 you can seriously upgrade your system and let it manage the RAID itself without losing performance.
Regards.
LSI seems like a good brand, but the cheapest RAID 6 card I found was 380 dollars :/
About software RAID: if I choose this option, will I be able to pass the disks to qemu/kvm? My RAID will mostly be used by my Windows 10 VM.
On 01/25/2016 10:50 AM, thibaut noah wrote:
Of course you will. If you use Linux files as virtual disks, they will sit on a filesystem on the RAID 6. If you give block devices to qemu, you just give it the RAID 6 device (e.g. /dev/md?).
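For example (the device path, image path and sizes below are placeholders, adjust to your setup), the two approaches look roughly like this:

    # a) a raw image file living on a filesystem that sits on the RAID6 array
    qemu-img create -f raw /srv/raid6/win10.img 500G

    # b) hand the whole md device to the guest as a virtio disk
    qemu-system-x86_64 -enable-kvm -cpu host -m 8192 \
        -drive file=/dev/md0,format=raw,if=virtio,cache=none

With libvirt you would point the guest's disk source at the same file or block device.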
Okay, it seems I have a lot of reading to do on ZFS now. It appears it is possible to import a ZFS pool even if the host OS died. So if I can import the pool from a totally different OS (still Linux), as long as it supports ZFS, that should do it.
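For reference, the import workflow is short once ZFS on Linux is installed on the new host; 'tank' below is just an example pool name:

    zpool import            # scans attached disks and lists importable pools
    zpool import tank       # imports by name (add -f if the old host never exported it)
    zpool status tank       # verify the pool and its vdevs came up

The pool metadata lives on the disks themselves, which is why a dead host OS doesn't matter.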
On 22/1/16 at 13:49, thibaut noah wrote:
The Adaptec 6405 (4 disks) / 6805 (8 disks), note, without the E at the end, works great out of the box. There are other, more expensive cards, but this one is good for its price.
It's a bit pricey for what it is. I did some research; it seems the cheapest way that still performs well is to use an HBA or a cheap RAID card for the SATA connections and run ZFS on top. That would suit me anyway (not enough SATA ports on the motherboard), but I'm having a hard time finding a card that runs on kernel 4.3.
On Sun, Jan 24, 2016 at 7:05 AM, Roberto Ragusa mail@robertoragusa.it wrote:
Do not dismiss the software RAID option too easily. Nothing beats software RAID in terms of reliability, freedom to get at your data, and total control over what's happening to it. Consider that with $300 you can seriously upgrade your system and let it manage the RAID itself without losing performance.
The caveat is that with this level of control comes quite a bit of a learning curve. There are lots of gotchas; possibly the biggest ones are the following (some example commands follow the list):
1. The SCT ERC value on each drive must be less than the kernel's SCSI command timer. If this is not true, the misconfiguration will prevent bad sectors from being fixed up by md. So it's important to either get enterprise drives that ship with SCT ERC configured at roughly a 7 second recovery, or buy drives where the value is configurable. 'smartctl -l scterc <dev>' is the way to find out. Both the SCT ERC value and the SCSI command timer are per device and aren't persistent. This advice applies to md/mdadm, LVM, and Btrfs RAID.
2. Separately back up mdadm.conf and the mdadm superblock information (that's mdadm -E) for each drive.
3. Bonus: don't follow advice on the web about using mdadm -C to recreate a broken array. Go to the linux-raid@ list and ask for help first. Many users end up with total data loss on otherwise recoverable arrays because they followed this absurd advice to recreate arrays rather than force assemble.
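A rough sketch of the commands behind points 1 and 2 (/dev/sda and the paths are placeholders; repeat per member disk, and remember none of the timeout settings survive a reboot):

    # point 1: drive error recovery vs. kernel command timer
    smartctl -l scterc /dev/sda               # show the current SCT ERC setting
    smartctl -l scterc,70,70 /dev/sda         # set read/write ERC to 7.0 seconds
    cat /sys/block/sda/device/timeout         # kernel SCSI command timer (seconds)
    echo 180 > /sys/block/sda/device/timeout  # only needed if the drive's ERC cannot be set

    # point 2: keep copies of the config and the per-device superblock info
    cp /etc/mdadm.conf /root/mdadm.conf.backup
    mdadm -E /dev/sda > /root/mdadm-E-sda.txt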
If you are familiar with LVM, then it's got a neat edge over mdadm: each LV can have its own RAID level. So you can create linear "throw away" LVs, or more scalable raid10 LVs, or slower but more space-efficient raid6 LVs. So if you expect to want different redundancy levels, or to make changes frequently, you might prefer LVM. But it still doesn't have all the features mdadm offers, so you'll want to make a list of must-haves and check both out.
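A minimal sketch of what that looks like, assuming a volume group named vg0 already spans the disks (names and sizes are made up):

    lvcreate --type raid6 -i 4 -L 2T -n media vg0        # space-efficient, slower writes
    lvcreate --type raid10 -i 2 -m 1 -L 200G -n vms vg0  # faster, less space-efficient
    lvcreate -L 100G -n scratch vg0                      # plain linear "throw away" LV

-i is the number of data stripes and -m the number of extra mirrors, so the raid6 LV above wants 6 PVs (4 data + 2 parity) and the raid10 LV wants 4.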
I'm not gonna use LVM or mdadm; I chose ZFS. I will probably end up buying a little server, since I cannot find compatible hardware cheaper than a full server...
On Mon, Feb 1, 2016, 12:32 AM thibaut noah thibaut.noah@gmail.com wrote:
http://zfsonlinux.org/faq.html#BasicRequirements
They recommend 8 GB RAM minimum, and ECC. Most sites where VM images are used on ZFS recommend an SSD for the ZIL and L2ARC to help make up for the IOPS limitation. So don't make the server too little.
Also note that growing a pool isn't as easy with ZFS as with either mdadm or LVM. You either have to create a new raidz2 (raid6-equivalent) vdev with at least 4 drives to add to the pool, or you have to replace existing drives with larger ones, resilvering between each replacement.
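In commands (the pool name 'tank' and the device names are placeholders), the two growth paths are roughly:

    # path 1: add a whole new raidz2 vdev to the pool
    zpool add tank raidz2 sdg sdh sdi sdj

    # path 2: swap members for bigger disks one at a time, resilvering in between
    zpool set autoexpand=on tank
    zpool replace tank sdb sdk
    zpool status tank          # wait for the resilver before touching the next disk

There is no reshape that turns a 4-disk raidz2 into a 5-disk one, which is the limitation being described.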
I'd do some research before committing to this. It's way more involved than the other suggestions. That doesn't necessarily mean it's a bad idea, just that it'll be a pet needing a lot of attention.
Chris Murphy
Yeah, I saw some build reviews on FreeNAS; they recommend ECC but it is not mandatory (and actually, after reading some tests, I don't get all the fuss about ECC RAM).
Saw that too and I don't get it. I mean, what the hell? You can replace disks with bigger ones, but you'll have all this trouble if you want to expand the array? That doesn't feel right.
I'm a developer, so I might get into situations where some ZFS expertise could come in handy; I spend most of my time (10+ hours daily, even on weekends) on my computer, so no problem there. Thing is, spending $600+ on a NAS doesn't seem worth it compared to buying a high-end RAID card. Also, it means either having a second case or buying a dual-system case which costs more than $500, those guys... Spending a lot of money on a RAID card also seems like spending money for nothing, since it looks like I'll get better performance with an HBA card + ZFS than with a RAID card (did some research in the meantime).
If only there were a compatibility list, that would solve my problem. I emailed LSI support but haven't gotten a response yet, and the Adaptec card seems a bit pricey for what it is.
On 02/01/2016 08:33 AM, thibaut noah wrote:
Yeah, I saw some build reviews on FreeNAS; they recommend ECC but it is not mandatory (and actually, after reading some tests, I don't get all the fuss about ECC RAM).
Just as with disks, bits can flip in RAM. Probably the most important feature of ZFS is checksums on all blocks so that bit flips can be detected and repaired. ECC RAM does the same for memory. If you don't have ECC RAM, and bits flip in memory, you're likely to silently corrupt data.
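The on-disk half of that is what a scrub exercises; for example (pool name assumed):

    zpool scrub tank
    zpool status -v tank   # per-device CKSUM error counters; -v lists any files ZFS could not repair

None of that helps with a flip that happens in RAM before the checksum is computed, which is the ECC argument.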
Saw that too and I don't get it. I mean, what the hell? You can replace disks with bigger ones, but you'll have all this trouble if you want to expand the array? That doesn't feel right.
The same is true of any disk array, I'd think. If you replace a disk, you need to rebuild the array. The array size is determined by the smallest member in the array. Given those two constraints, there's nothing unusual about the process.
Thing is, spending $600+ on a NAS doesn't seem worth it compared to buying a high-end RAID card.
ZFS (and btrfs) and hardware RAID are not, in my opinion, comparable. RAID arrays don't keep checksum information on each block, so if a bit flips they don't have a means of reliably repairing it. ZFS can repair bit flips. You probably don't want to use ZFS on hardware RAID, since many of ZFS' features rely on accessing each disk individually. A battery backed write cache can be useful, but I don't think it's better than having a UPS that's monitored.
Also, it means either having a second case or buying a dual-system case which costs more than $500, those guys... Spending a lot of money on a RAID card also seems like spending money for nothing, since it looks like I'll get better performance with an HBA card + ZFS than with a RAID card (did some research in the meantime).
It's possible, but I don't think that's necessarily true. ZFS' features come at a performance cost, in general.
This article seems to disagree with you: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
I know, and I won't be using ZFS on hardware RAID anyway. My issue here is hardware compatibility, as I don't have enough SATA ports on my motherboard to run my disks. Either I run pure hardware RAID or an HBA + ZFS; I just need to be sure about the card.
The ZFS performance cost is not an issue when you have a high-end desktop, imo.
On 02/02/2016 02:17 AM, thibaut noah wrote:
This article seems to disagree with you: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
No, it doesn't. It refutes a description of a single hypothetical problem wherein scrubbing ZFS can corrupt your data. That is to say that it makes the case that ZFS doesn't need ECC any more than any other filesystem. However, it doesn't need ECC any less, either. ZFS doesn't protect your data from corruption anywhere but on the disk. If you care about the integrity of your data, you should still use ECC RAM.
I know, and I won't be using ZFS on hardware RAID anyway. My issue here is hardware compatibility, as I don't have enough SATA ports on my motherboard to run my disks. Either I run pure hardware RAID or an HBA + ZFS; I just need to be sure about the card.
I'd imagine it'd be harder to find a controller that Linux doesn't support. How many disks do you want to include in the array? (And, yes, most HBAs assume a hotplug backplane, not individual drive connections.)
The ZFS performance cost is not an issue when you have a high-end desktop, imo.
Sure, it's probably not. But you didn't say "not an issue," you said you expected better performance with ZFS.
It's not an enterprise environment; I will just store movies, tons of them. Even for personal data like photos and stuff, I've never seen an ordinary user with ECC RAM, especially in the Windows gaming world. It would be safer, but not worth the money.
Driver compatibility issues: I'm currently running 4.2.5 and I already returned 3 cards that didn't have a compatible driver (I tried for one week to get the last one running with the help of HighPoint support, but returned it afterwards; I didn't want to risk losing my money, since the allowed window to return an item is not that long). I want to run 6 drives (3 TB each) in RAID 6. What do you mean by a hotplug backplane? I googled it but it is not very clear to me.
Because I do. If I understand what I read correctly, hardware RAID is limited by the card's components, while my ZFS RAID will be limited by my CPU and RAM. I will not encrypt the data, though; too much load on the CPU.
On 02/02/2016 05:43 PM, thibaut noah wrote:
Driver compatibility issues: I'm currently running 4.2.5 and I already returned 3 cards that didn't have a compatible driver (I tried for one week to get the last one running with the help of HighPoint support, but returned it afterwards; I didn't want to risk losing my money, since the allowed window to return an item is not that long).
And after all these bad experiences right at the start, you still want hardware RAID?
Anything involving a custom driver will be a problem whenever you upgrade the kernel or the operating system. An incorrect lock or synchronization (it happens easily in out-of-tree drivers) can destroy your data.
If the driver is already in the kernel, you will not have these issues, but you will certainly have to face others:
- possibly a slow BIOS scan (and "keep fingers crossed" config utilities)
- possibly a proprietary on-disk format (if the controller dies, you lose the data)
- excessive abstraction of the disks from the kernel (good luck running smartctl on simulated disks)
As already suggested: evaluate a software-RAID (mdadm) setup. You will probably be convinced by what you'll get.
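For scale, a whole 6-disk RAID6 setup with mdadm is only a few commands (the device names here are assumptions, double-check yours before running anything):

    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
    mdadm --detail --scan >> /etc/mdadm.conf   # so the array is assembled by name at boot
    mkfs.xfs /dev/md0
    cat /proc/mdstat                           # watch the initial sync progress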
On 02/02/2016 08:43 AM, thibaut noah wrote:
It's not an enterprise environment; I will just store movies, tons of them. Even for personal data like photos and stuff, I've never seen an ordinary user with ECC RAM, especially in the Windows gaming world.
Well, yes, but if a bit flips when you're playing a game, it'll probably produce an odd pixel. At worst, the game might crash. The effect is likely to be transient.
If bits flip in your storage array, the effect will probably be permanent.
It would be safer, but not worth the money.
It's your data. It just seems odd to me. I always use ECC where it's available. Crucial offers DIMMs for my system at $25-30 for 4GB, non-ECC. ECC is $35. A difference of $5-10 per DIMM seems pretty small when you compare it to the HBA and disks you're putting in the system.
Driver compatibility issues: I'm currently running 4.2.5 and I already returned 3 cards that didn't have a compatible driver (I tried for one week to get the last one running with the help of HighPoint support, but returned it afterwards; I didn't want to risk losing my money, since the allowed window to return an item is not that long).
https://ata.wiki.kernel.org/index.php/Hardware,_driver_status https://ata.wiki.kernel.org/index.php/SATA_hardware_features
The developers comment on various chipsets and recommend avoiding Marvell. In the past, I've found some consumer-grade add-in cards by picking a chipset and searching for that, rather than for more generic terms.
I want to run 6 drives (3 TB each) in RAID 6. What do you mean by a hotplug backplane? I googled it but it is not very clear to me.
I think I'm talking about the cases you mentioned in an earlier email. A lot of HBAs have a single cable connection, such as a mini-SAS connection, that connects to a board (a backplane) that sits at the back of the drive bays, on which the power and data connections for the drives are mounted.
Because I do. If I understand what I read correctly, hardware RAID is limited by the card's components, while my ZFS RAID will be limited by my CPU and RAM.
Well, ZFS will be subject to any limitations that exist in the card components as well, in addition to CPU and RAM limitations. I'm not really sure what you mean, here.
On 02/02/2016 10:04 AM, Gordon Messmer wrote:
I think I'm talking about the cases you mentioned in an earlier email. A lot of HBAs have a single cable connection, such as a mini-SAS connection, that connects to a board (a backplane) that sits at the back of the drive bays, on which the power and data connections for the drives are mounted.
Otherwise known as a JBOD ("just a bunch of disks"). There are tons of JBODs out there. Google search for "JBOD" or "storage arrays". I even see an HP enclosure for $250 US from HardDrives Direct.
@Roberto: I would love to, but I can't. I have only 3 SATA ports available on my motherboard, otherwise I would be running ZFS already. :/ I don't have any choice here: either I buy a compatible RAID card or an HBA card (or an expensive NAS, lol).
@Gordon: The number of ATX motherboards aimed at gamers with ECC support is close to none. Since I am no exception (no ECC support), I will not change my motherboard for this; I've never had an issue and I've had 3+ TB of data for almost 5 years. It would be pointless to change my motherboard now anyway, since I'm waiting for Kaby Lake. The average consumer just follows the trend; if manufacturers don't feel like we need it, they won't give us the fancy features.
I'll look at your links, thanks. I already looked for specific chipsets that are good with ZFS, but almost every list is too old; it seems that every time a brand builds something that works, they don't change it for like 10 years or something.
Ah! So that's what you call it in English, I didn't know. Well, that's what I'm looking for; I already have the mini-SAS to SATA cable. I'm talking about passing the disks as JBOD through an HBA card: since you don't rely on the RAID card's own RAM and processor, my logic (and some tests I found online) tells me it will be faster.
@Rick: I was already looking for an HBA, but every time I find a good card I have to mail the manufacturer's tech support to check whether it will run with my kernel, such a pain...
On 02/02/2016 11:51 AM, thibaut noah wrote:
If you stick with one of the "major players" (e.g. Dell, HP, IBM, Adaptec, Emulex) and you're willing to pay a bit more, you'll probably be just fine. Example:
http://accessories.dell.com/sna/productdetail.aspx?c=us&l=en&s=dhs&a...
6Gbps dual-port SAS2 HBA, about $200 US, and I can pretty much guarantee it'll work with your kernel. Couple that to a JBOD and some drives and you're good to go.
If you stick with the cheaper stuff that's aimed at Winblows users, you'll have a much harder time. While you may not think of it this way, you're encroaching on "SOHO" (small home and office) territory, at least as far as the manufacturers are concerned, and that's sorta where you have to look.
I'd suggest spending your money on a decent HBA, and prowl eBay for a JBOD (there's lots out there and on the used market, they're rather cheap). The JBOD itself isn't a big deal--as long as it's compatible with your HBA. The drives you use IN the JBOD are important.
Just my $0.02. It's your data, you know how secure you need it and what you can spend.
I think I misunderstood what you guys said about JBOD (my fault for reading too quickly); I was talking about the JBOD feature, not a piece of hardware. My disks are sitting in my desktop; if I buy external storage for the disks I might as well build a NAS, which is what I do not want. I'm looking for an internal connection, not external.
I was crawling through LSI and HighPoint cards, cards between $200 and $300+, not really what I'd call cheaper stuff aimed at Winblows users ;)
Updating this: it seems this card, http://www8.hp.com/us/en/products/server-host-bus-adapters/product-detail.html?oid=6995464, is compatible (I saw a user running it with Fedora 22 on Amazon). Dell does not have any card with internal connectors. Waiting for other manufacturers to answer my mails.
On Wed, 2016-02-03 at 13:18 +0100, thibaut noah wrote:
Why not replace the motherboard with one that has more connectors? That could very well prove to be a much simpler and cheaper route. Supermicro, for example, has boards with 8 SATA3 ports on the chipset....

Louis
Finding a VT-d compatible motherboard (with real compatibility) is a pain; also, I need a lot of PCI Express slots for controller cards/GPUs etc., and I would need at least 9 SATA ports. I'm not really willing to spend money on a motherboard I will drop in 4 months tops (Kaby Lake), and I'm not sure it would be cheaper; I found the HP card for 149 euros.
On 02/03/2016 01:15 AM, thibaut noah wrote:
I think I misunderstood what you guys said about JBOD (my fault for reading too quickly); I was talking about the JBOD feature, not a piece of hardware. My disks are sitting in my desktop; if I buy external storage for the disks I might as well build a NAS, which is what I do not want. I'm looking for an internal connection, not external.
I was crawling through LSI and HighPoint cards, cards between $200 and $300+, not really what I'd call cheaper stuff aimed at Winblows users ;)
Putting 6 or 8 drives inside your chassis is really not a normal situation. Are you sure your power supply can handle it? You also mentioned somewhere that you have several GPUs and such. That would tax your power supply even more. Remember, spinning drives eat a lot of power when first "spun up".
I recommended a JBOD for the reason that it eliminates the need for a gazillion SATA or SAS connectors on your motherboard, cleans up the cabling and doesn't put a ridiculous load on your power supply. As to building a NAS, you'd need a motherboard with the same number of connectors, huge power supply, and all the drives to build your NAS and yeah, it's a right pain to build. With the JBOD, you need the HBA (one PCI slot), the JBOD and the drives and all of this is easily transferable to your new system when the new motherboard you want comes out. It'd be faster than a NAS as well as supporting native file systems with all the ACLs, permissions and goodies (NFS and CIFS don't support that well).
Regarding the new motherboard, will it have all the SATA/SAS connectors you need to drive this array or are you going to be in the same boat as you are now?
My RAID drives consume 11 W per disk at maximum load (so about 66 W for the six of them), and I have more than 400 W of headroom on my power supply.
Thanks for the clarification
I will have the HBA card, so that will be OK; 5 SATA ports is fine on an ATX motherboard. The problem might be the number of PCI slots and VT-d compatibility.
Confirming the HP H240 works out of the box with the Fedora 23 driver (HP's own drivers are not yet up to date with the latest kernel).
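For anyone who finds this thread later, a quick way to check that the card was picked up by an in-tree driver (I believe the H240 binds to the hpsa module, but verify against your own lspci output):

    lspci -nn | grep -i -e raid -e sas   # the card should show up here
    lspci -k                             # look for "Kernel driver in use: hpsa" under the card's entry
    dmesg | grep -i hpsa                 # controller and attached drives detected at boot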