I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I wonder if that is the general experience?
Timothy Murphy wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
What applications, for example?
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I wonder if that is the general experience?
Works fine for me. I moved to LVM on all my systems and haven't looked back.
Somebody in the thread at some point said:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I wonder if that is the general experience?
It has its uses, but for most machines it not only makes no sense, it invites disaster by letting you span your filesystem over multiple drives without redundancy. I don't think it should be on by default in Anaconda -- even for laptops with only one possible permanent drive, for example. But it's not completely useless for higher-end purposes where it sits on top of RAIDed drives.
On a positive note though, I recently repurposed an old laptop to boot off a 4GB USB stick install of Fedora, naturally without LVM... it really rocks having a laptop with no moving parts except the fan. I found these "dual channel" USB flash sticks are roughly as fast as a laptop HDD in terms of booting and usage.
-Andy
On 7/31/07, Timothy Murphy tim@birdsnest.maths.tcd.ie wrote:
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I think it definitely can make rescues and recovery more difficult. But, not impossible.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I think enormous disks are relative. For my desktop, I think I have an enormous disk. For our image-processing servers, we don't have enormous disks.
I think for many desktops, there isn't much of a need for multiple partitions. But if there's a marginal improvement, and you can create abstractions that nicely bypass the downsides, it's a good thing.
On Tue, 2007-07-31 at 15:54 +0100, Timothy Murphy wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices,
parted and parted front ends. Anyone else?
while I don't find any real advantages to compensate.
... Depends on what you need. I regularly move free space between different partitions. Having to do it without LVM is a major pain in the back.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
OK. As I said before, it greatly depends on what you do with your machine.
I wonder if that is the general experience?
- Gilboa
Les Mikesell wrote:
Ed Greshko wrote:
I wonder if that is the general experience?
Works fine for me. I moved to LVM on all my systems and haven't looked back.
Has it improved anything?
For me, yes. My definition of "improved" is that it has simplified my work. Otherwise I wouldn't have said so.
I have 3ware RAID controllers and mostly use RAID 5. The amount of space that I need on given partitions can vary a great deal over a month's time. With LVM I can simply shrink one partition and expand another.
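For concreteness, that shrink-one/grow-another workflow looks roughly like the following, with hypothetical names (volume group "vg0", LVs "data" and "scratch", ext3 filesystems). Note that ext3 can only be shrunk offline, so the shrink side needs an unmount first:

    # Shrink "scratch": filesystem first, then the LV to match.
    umount /mnt/scratch
    e2fsck -f /dev/vg0/scratch        # a forced fsck is required before shrinking
    resize2fs /dev/vg0/scratch 40G    # shrink the filesystem to 40G
    lvreduce -L 40G /dev/vg0/scratch  # then shrink the logical volume to match
    mount /mnt/scratch

    # Grow "data" into the freed space: LV first, then the filesystem.
    lvextend -L +10G /dev/vg0/data
    resize2fs /dev/vg0/data           # with no size given, grows to fill the LV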
Ed Greshko wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
What applications, for example?
I think there have been several, but I didn't make a note of them. The most recent was gparted (and Partition Manager). I think I also had problems with one or more of: dd, apache (httpd), unison, MySQL. But as I said, I didn't keep notes, and may be libeling LVM.
On Tue, Jul 31, 2007 at 11:03:26AM -0400, Michael H. Semcheski wrote:
On 7/31/07, Timothy Murphy tim@birdsnest.maths.tcd.ie wrote:
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I think it definitely can make rescues and recovery more difficult. But, not impossible.
Agreed. You need a rescue/recovery disk that supports LVM, like finnix. I had lots of fun adding LVM support to my script. LVM is not documented at the level I needed, so it was a lot of reading the code, reverse engineering, and "by guess and by golly!".
http://www.charlescurley.com/Linux-Complete-Backup-and-Recovery-HOWTO.html
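For reference, getting at LVM volumes from a rescue environment is roughly the following sequence; "VolGroup00/LogVol00" is the Fedora installer's default naming, so substitute your own:

    vgscan                     # scan the attached disks for volume groups
    vgchange -ay               # activate all logical volumes found
    lvscan                     # list the LVs so you can pick the right one
    mount /dev/VolGroup00/LogVol00 /mnt/sysimage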
On Tuesday July 31 2007, Timothy Murphy wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I wonder if that is the general experience?
Definitely an asset when connected to a SAN.
Terry Polzin wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I wonder if that is the general experience?
Definitely an asset when connected to a SAN.
Why, as a matter of interest?
On Wed, Aug 01, 2007 at 01:54:25AM +0100, Timothy Murphy wrote:
Terry Polzin wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I wonder if that is the general experience?
Definitely an asset when connected to a SAN.
Why, as a matter of interest?
I think it depends more on the uses of the storage than the underlying technology, but it helps to have something with enough space to have some flexibility. I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and LVM means that I can allocate space to whichever one particularly needs it without predicting up front who that will be. It lets me add more storage without disrupting the logical structure (e.g. no splitting groups between /mnt/olddisk and /mnt/newdisk and finding that the group that needs more space is on the disk that doesn't have any), and it means I can easily allocate space to temporary systems and claim it back afterwards for general use.
That machine's predecessor didn't use LVM and it was a nightmare to admin with free space fragmented all over the place. I wouldn't go back.
OTOH, if you've got a single machine with a small disk and you want to divide it into /boot, / and swap, then you might as well use plain partitions instead of LVM.
Ewan
On Wed, 2007-08-01 at 02:22 +0100, Ewan Mac Mahon wrote:
I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and LVM means that I can allocate space to whichever one particularly needs it without predicting up front who that will be. It lets me add more storage without disrupting the logical structure (e.g. no splitting groups between /mnt/olddisk and /mnt/newdisk and finding that the group that needs more space is on the disk that doesn't have any), and it means I can easily allocate space to temporary systems and claim it back afterwards for general use.
That machine's predecessor didn't use LVM and it was a nightmare to admin with free space fragmented all over the place. I wouldn't go back.
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way? And don't things like file quotas let you stop some users from using all available space?
Timothy Murphy wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I thought it would simplify changing the sizes of partitions, but as it happens I never need to do this with today's enormous disks.
I wonder if that is the general experience?
Since the default is a regular /boot partition with only / and swap inside the LVM, how is resizing useful even with LVM? Can you pick and choose which directory to export, or is it just / or swap that you could grow onto a hard disk you install in the future?
Jim
On Wed, Aug 01, 2007 at 12:10:41PM +0930, Tim wrote:
On Wed, 2007-08-01 at 02:22 +0100, Ewan Mac Mahon wrote:
I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and
<snip>
That machine's predecessor didn't use LVM and it was a nightmare to admin with free space fragmented all over the place. I wouldn't go back.
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
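If you are curious, you can actually see that two-chunk layout; assuming an LV named "first" in a volume group "vg0", something like:

    lvdisplay -m /dev/vg0/first    # --maps: lists the LV's segments and the
                                   # physical extents backing each one
    pvdisplay -m                   # the same mapping, seen from the PV side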
And don't things like file quotas let you stop some users from using all available space?
Up to a point, but group quotas are rather less straightforward, IMHO.
There's the further point that I have some additional storage to add to this system; once I've done that with LVM I can simply seamlessly extend any of the existing filesystems onto that storage; while you could probably divide up the pie with quotas, there'd be no way to make the whole pie bigger.
Ewan
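Making the whole pie bigger, as Ewan describes, is a short sequence; a sketch with hypothetical names (new disk /dev/sdc1, volume group "vg0", an ext3 LV "groupa"):

    pvcreate /dev/sdc1                 # label the new storage as a physical volume
    vgextend vg0 /dev/sdc1             # add it to the existing volume group
    lvextend -L +200G /dev/vg0/groupa  # give the space to whichever group needs it
    resize2fs /dev/vg0/groupa          # grow that group's filesystem into it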
Ewan Mac Mahon wrote:
On Wed, Aug 01, 2007 at 12:10:41PM +0930, Tim wrote:
On Wed, 2007-08-01 at 02:22 +0100, Ewan Mac Mahon wrote:
I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and
<snip>
That machine's predecessor didn't use LVM and it was a nightmare to admin with free space fragmented all over the place. I wouldn't go back.
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
And don't things like file quotas let you stop some users from using all available space?
Up to a point, but group quotas are rather less straightforward, IMHO.
There's the further point that I have some additional storage to add to this system; once I've done that with LVM I can simply seamlessly extend any of the existing filesystems onto that storage; while you could probably divide up the pie with quotas, there'd be no way to make the whole pie bigger.
Ewan
At my school the departments all have a computer that backs up all the computers in the departments. The Hard Drive(s) are smaller but do the job.
At home I have a tiny 160 GB drive and I do not need LVM; in fact it makes for even more wasted space than using the old /.
Tim:
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Ewan Mac Mahon:
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
Isn't that the situation with fragmentation of any sort, though? The heads having to skate about more, and only the drive really knows where all the bits are (pun intended). Does LVM really manage that more efficiently?
On Wed, 2007-08-01 at 21:40 +0930, Tim wrote:
Tim:
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Ewan Mac Mahon:
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
Isn't that the situation with fragmentation of any sort, though? The heads having to skate about more, and only the drive really knows where all the bits are (pun intended).
Yes. But unless you're using a tiny block size (<1MB; the default is 32MB, I usually use 64MB) the performance hit will go unnoticed.
- Gilboa
On Wed, Aug 01, 2007 at 09:40:54PM +0930, Tim wrote:
Tim:
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Ewan Mac Mahon:
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
Isn't that the situation with fragmentation of any sort, though?
It depends what you mean by 'fragmentation'; if you have several non-LVM disks in a system each with some free space on them a user has to decide which disk to use, or, if they've got a large chunk of data they may find that no single disk has enough free space so they have to split it up. It all gets very messy. In the LVM case the free space may be physically fragmented over just as many drives, but the user (and the admin :-) ) don't know or care; they just see simple logical divisions.
My killer use case is the one where a user comes and asks for more disk space - say an extra 200G: with LVM I need 200G free, then I extend their filesystem, and the job's done. Without it I'd either have to give them a new separate 200G chunk of space, or find enough room for all their existing data+200G, allocate it, copy the existing data across, and reclaim the old space.
The heads having to skate about more, and only the drive really knows where all the bits are (pun intended).
In this particular case there's really no point trying to second guess what the drive heads are doing since the 'drives' are actually hardware RAID arrays. I'm not sure it makes much difference for most single drive systems either; modern drives play all sorts of interesting games with the layout, so even on a laptop only the drive really knows where all the bits are anyway.
Does LVM really manage that more efficiently?
It's more efficient for the user, since the process of figuring out who needs how much space, and which physical resources to use to provide that space, is taken off your hands and given to the computer, which is /good/ at boring fiddly stuff. It's not more efficient in terms of raw disk speed - there will be an overhead.
Ewan
Ewan Mac Mahon wrote:
I've been using LVM on one computer for some time, without any real problems.
However, I've decided that it causes unnecessary complications, as some applications do not seem to accept the LVM devices, while I don't find any real advantages to compensate.
I wonder if that is the general experience?
Definitely an asset when connected to a SAN.
Why, as a matter of interest?
I think it depends more on the uses of the storage than the underlying technology, but it helps to have something with enough space to have some flexibility. I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and LVM means that I can allocate space to whichever one particularly needs it without predicting up front who that will be.
I guess if I was in that position I would use LVM! But I'm just a common-or-garden Linux user ...
[Reminds me of the Thurber cartoon with a mad-looking girl saying, "Thank you God for making me normal".]
On Tuesday July 31 2007, Timothy Murphy wrote:
Definitely an asset when connected to a SAN.
Why, as a matter of interest?
I'm sorry, I thought you wanted cases presented in favour of LVM.
It has its place where its use is appropriate, and I use it on my machines that are connected to my SAN. It is sometimes easier to use some of the LVM utilities to manage LUNs presented to a Linux box than from the SAN side, especially when resizing a LUN.
At 12:10 PM +0930 8/1/07, Tim wrote: ...
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way? And, doesn't things like file quotas let you stop some users from using all available space?
Yes, but they're really /big/ fragments. The default chunk size is 32 MB. If you think about how typical *nix filesystems allocate storage (and ext2/3 is typical), they already do something quite like this.
Karl Larsen wrote:
Ewan Mac Mahon wrote:
On Wed, Aug 01, 2007 at 12:10:41PM +0930, Tim wrote:
On Wed, 2007-08-01 at 02:22 +0100, Ewan Mac Mahon wrote:
I have a server with ~16Tb of storage that's shared amongst research groups in a university dept. Each group has their own filesystem, and
<snip>
That machine's predecessor didn't use LVM and it was a nightmare to admin with free space fragmented all over the place. I wouldn't go back.
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
I'm curious about two things: Wouldn't resizing LVM involve fragmenting the drive, in another way?
Only physically; if I allocate space to one filesystem, then create another, then extend the first one, then the physical storage for the first one will be in two chunks, with the second fs sitting between them. The point of LVM is that I don't need to care about it since it appears as a single logical space.
And don't things like file quotas let you stop some users from using all available space?
Up to a point, but group quotas are rather less straightforward, IMHO.
There's the further point that I have some additional storage to add to this system; once I've done that with LVM I can simply seamlessly extend any of the existing filesystems onto that storage; while you could probably divide up the pie with quotas, there'd be no way to make the whole pie bigger.
Ewan
At my school the departments all have a computer that backs up all the computers in the departments. The Hard Drive(s) are smaller but do the job.
At home I have a tiny 160 GB drive and I do not need LVM; in fact it makes for even more wasted space than using the old /.
On my home machine I started with the same size drive (RAID 1). A year later I was at its limit. At that time, the cost of high-capacity drives was out of my range. I added two drives as a second RAID 1 device and created an LVM. Copied all data across and verified it, then added the old drives to the system to increase the total space available. Damn multimedia files take up so much space. :)
I have had some issues with LVM in the past, but the move to F7 was easy, including both the RAID and LVM drives. Even when I had drive issues (a bad power supply), I didn't have that much of a problem fixing it. Of course the RAID helped here.
For /home I will stick with RAID and LVM. It makes it easy to increase the space as needed, especially when using the home computer as a multimedia server.
The only issue that I see is recovery of data from a single drive outside the LVM. But this could also be a benefit in security if the drive gets stolen.
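The add-a-mirrored-pair-and-fold-it-in step described there looks roughly like this, with hypothetical device and volume names:

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
    pvcreate /dev/md1                # turn the new mirror into an LVM physical volume
    vgextend vg0 /dev/md1            # add it to the existing volume group
    lvextend -L +100G /dev/vg0/home  # grow /home onto the new mirror
    resize2fs /dev/vg0/home          # and grow the filesystem to match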
Karl Larsen wrote:
[snip]
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Mike
On Thu, 2007-08-09 at 15:26 -0500, Mike McCarty wrote:
Karl Larsen wrote:
[snip]
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
----------------------------------------------------------------------
Rick Stevens, Principal Engineer                 rstevens@internap.com
CDN Systems, Internap, Inc.                    http://www.internap.com

"He who laughs last thinks slowest."
----------------------------------------------------------------------
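For what it's worth, the four-primary/one-extended rule is easy to see in practice with parted; a sketch against a hypothetical blank disk /dev/sdb (exact syntax varies a little between parted versions):

    parted -s /dev/sdb mklabel msdos
    parted -s /dev/sdb mkpart primary ext3 1MiB 10GiB
    parted -s /dev/sdb mkpart extended 10GiB 100%
    parted -s /dev/sdb mkpart logical ext3 10GiB 20GiB  # parted nudges the start past the EBR
    parted -s /dev/sdb mkpart logical ext3 20GiB 30GiB
    # On Linux, logical partitions are numbered from 5 upward
    # (/dev/sdb5, /dev/sdb6, ...) no matter how many primaries exist.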
Rick Stevens wrote:
On Thu, 2007-08-09 at 15:26 -0500, Mike McCarty wrote:
Karl Larsen wrote:
[snip]
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
That is indeed my understanding. What one gets is essentially a linked list of Logical Volumes (correct terminology, but in the present context confusing, so I avoided it). The Logical Volumes have disc addresses which must be contained within the range of disc addresses specified in the PT entry corresponding to the extended partition. There is no essential limit to the number of Logical Volumes, though there may be at most one extended partition. Floppy discs may have only one Volume. Primary Partitions have up to one Volume per partition. An Extended Partition has a number of Volumes which is limited only by the space necessary to contain the BPB and overhead per Volume. Otherwise, one may place as many Logical Volumes as one wishes within the Extended Partition.
Mike
Rick Stevens wrote:
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
I'm not sure that is still true. Can one have as many partitions as you like in /dev/sda? I had an idea one was constrained to SCSI's 16 partitions.
At 10:17 PM +0100 8/9/07, Timothy Murphy wrote:
Rick Stevens wrote:
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
I'm not sure if that is any longer true. Can one have as many partitions as you like in /dev/sda ? I had an idea one was constrained to SCSI's 16 partitions.
That's just a design flaw in the driver, that it can't mount all the available partitions. Admittedly, the previous driver had its own larger limits.
On Thu, 2007-08-09 at 22:17 +0100, Timothy Murphy wrote:
Rick Stevens wrote:
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
I'm not sure if that is any longer true. Can one have as many partitions as you like in /dev/sda ? I had an idea one was constrained to SCSI's 16 partitions.
SCSI doesn't give a horse's patoot about partitions...it only knows device numbers, the LUNs inside the devices and the block numbers inside those LUNs. How those blocks are used is up to the application.
Yet more obscure info from Rick, your font of useless data:
The four partition limit comes from the original PC specification. A BIOS-compatible partition table that a PC can access without help (i.e. some other application) only has space for 4 partitions. In fact, the BIOS can only boot from tracks 0 through 1023--which is why we have LBA mode which buggers the head and sector counts so that the cylinder count remains below 1024.
----------------------------------------------------------------------
Rick Stevens, Principal Engineer                 rstevens@internap.com
CDN Systems, Internap, Inc.                    http://www.internap.com

"If Windows isn't a virus, then it sure as hell is a carrier!"
----------------------------------------------------------------------
On Thu, 9 Aug 2007, Mike McCarty wrote:
Karl Larsen wrote:
[snip]
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
?????. AIUI, a standard PC hard drive can have at most 4 primary partitions, only one of which can be an extended partition. Has that changed?
rday
On Thu, 9 Aug 2007, Rick Stevens wrote:
On Thu, 2007-08-09 at 15:26 -0500, Mike McCarty wrote:
Karl Larsen wrote:
[snip]
There are issues about how you partition a giant hard drive like that. I forget just how many partitions you're allowed, but it is small compared to the size. I guess you're the IT expert and you have REAL experience to back your support for LVM.
Umm, AIUI the standard way of partitioning drives has no limit on the number of extended partitions one may create.
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
i'm not convinced of that infinite limit:
http://www.justlinux.com/forum/showthread.php?threadid=150073
anyone want to clarify that?
rday
Robert P. J. Day wrote:
On Thu, 9 Aug 2007, Rick Stevens wrote:
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
i'm not convinced of that infinite limit:
http://www.justlinux.com/forum/showthread.php?threadid=150073
anyone want to clarify that?
From what I understand, there are a max of 16 device entries created for a SCSI hard drive (sdx, and sdx1 through sdx15). So while you can have more partitions than that, Linux will not let you access them when using the SCSI code to access the drive. I believe it is a driver problem more than a udev problem.
Mikkel
On Thu, 2007-08-09 at 18:58 -0500, Mikkel L. Ellertson wrote:
Robert P. J. Day wrote:
On Thu, 9 Aug 2007, Rick Stevens wrote:
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
i'm not convinced of that infinite limit:
http://www.justlinux.com/forum/showthread.php?threadid=150073
anyone want to clarify that?
From what I understand, there are a max of 16 device entries created for a SCSI hard drive (sdx, and sdx1 through sdx15). So while you can have more partitions than that, Linux will not let you access them when using the SCSI code to access the drive. I believe it is a driver problem more than a udev problem.
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
As far as the partition numbers, that's based on the minor number of the block device. The formula is "(16 * drive number) + partition number". The "16" is what limits it to 16 partitions (with partition 0 being the same as the whole drive, e.g. "/dev/sda0" is the same as "/dev/sda").
"man sd" will show you the magic.
----------------------------------------------------------------------
Rick Stevens, Principal Engineer                 rstevens@internap.com
CDN Systems, Internap, Inc.                    http://www.internap.com

"To err is human, to moo bovine."
----------------------------------------------------------------------
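To make the formula concrete: /dev/sdb3 is drive 1, partition 3, so its minor number is 16 * 1 + 3 = 19 (the sd driver's major is 8). You can check with ls, or recreate a missing node by hand:

    ls -l /dev/sdb3         # shows "8, 19": major 8, minor 19
    mknod /dev/sdb3 b 8 19  # recreate the block device node with the same numbers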
On Thursday 09 August 2007, Rick Stevens wrote:
On Thu, 2007-08-09 at 18:58 -0500, Mikkel L. Ellertson wrote:
Robert P. J. Day wrote:
On Thu, 9 Aug 2007, Rick Stevens wrote:
Uhm, not exactly. You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
i'm not convinced of that infinite limit:
http://www.justlinux.com/forum/showthread.php?threadid=150073
anyone want to clarify that?
From what I understand, there are a max of 16 device entries created for a SCSI hard drive (sdx, and sdx1 through sdx15). So while you can have more partitions than that, Linux will not let you access them when using the SCSI code to access the drive. I believe it is a driver problem more than a udev problem.
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
As far as the partition numbers, that's based on the minor number of the block device. The formula is "(16 * drive number) + partition number". The "16" is what limits it to 16 partitions (with partition 0 being the same as the whole drive, e.g. "/dev/sda0" is the same as "/dev/sda").
"man sd" will show you the magic.
not on my up-to-date FC6 install, Rick. That manpage has a quite Jurassic 1992 date, and there is no way one can infer what you just wrote from that document's contents.
---------Bottom of file----------
FILES
       /dev/sd[a-h]: the whole device
       /dev/sd[a-h][0-8]: individual block partitions

                                                 1992-12-17      SD(4)
---------
If this document isn't correct, it should be made so. I don't see any way out of sda16 not being equal to sdb0, and sda17 then = sdb1, e.g. the next device's first partition (if it's not an outright error), using the logic described in this file.
Rick Stevens wrote:
On Thu, 2007-08-09 at 18:58 -0500, Mikkel L. Ellertson wrote:
From what I understand, there are a max of 16 device entries created for a SCSI hard drive (sdx, and sdx1 through sdx15). So while you can have more partitions than that, Linux will not let you access them when using the SCSI code to access the drive. I believe it is a driver problem more than a udev problem.
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
As far as the partition numbers, that's based on the minor number of the block device. The formula is "(16 * drive number) + partition number". The "16" is what limits it to 16 partitions (with partition 0 being the same as the whole drive, e.g. "/dev/sda0" is the same as "/dev/sda").
"man sd" will show you the magic.
You and I are saying the same thing, just in different ways. I used x where you use [a-z]+. But I have never seen /dev/sda0 created. It is always /dev/sda. You could also argue that the entire disk is not a partition, even though you can access the entire disk as if it were a partition. (You can create a file system on /dev/sda, but you cannot do that and have a partition table at the same time.) There have been times in the past when I have used the entire drive instead of creating a partition table. But most BIOSes do not like it. (Tar doesn't care - the drive can be one big archive.)
When you get into SCSI LUNs, you will probably lose a lot of people. I have always found it easier to think of each LUN as its own device. But then again, I worked more with CD drive arrays connected to a LUN controller - 8 drives using one SCSI device number. Nowadays, it usually works better to copy the CD images to a hard drive, and loopback mount them. By using automount to manage them, you can even keep the number of loopback mounts down to a reasonable number.
Mikkel
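The loopback mount Mikkel mentions is a one-liner; the paths here are just examples:

    mkdir -p /mnt/iso
    mount -o loop -t iso9660 /srv/images/disc1.iso /mnt/iso  # mount the CD image via the loop driver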
Rick Stevens wrote:
You get up to four primary partitions, one of which can be an extended partition. Inside that extended partition you can have as many "logical" partitions as you wish.
I'm not sure if that is any longer true. Can one have as many partitions as you like in /dev/sda ? I had an idea one was constrained to SCSI's 16 partitions.
SCSI doesn't give a horse's patoot about partitions...it only knows device numbers, the LUNs inside the devices and the block numbers inside those LUNs. How those blocks are used is up to the application.
I'm talking about SCSI under Fedora/Red Hat. There is certainly a 16-partition limit for true SCSI discs, unless it has been changed recently. I have a SCSI-only machine running Fedora, and Red Hat 9 and early Fedora Cores definitely only allowed 16 partitions. (I haven't tried lately, as I got a second SCSI disk so no longer needed a lot of partitions.)
Incidentally, the limit was actually 15 at installation time. One could add a 16th partition after installation. I assume this was a glitch in the installation code.
Do you actually have more than 16 partitions on one disk, under Fedora 7?
Rick Stevens wrote:
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
But you won't get this with the standard Fedora installation, which I assume is what people are talking about. You will be told you have disks /dev/sda and /dev/sdb, or whatever, and asked how you want to partition them.
You are saying, in effect, that you can make Fedora think one of your disks is in fact several different disks. That may be true, but not with the standard Fedora installation.
Incidentally, what do you mean by a "storage array"? The discussion was about partitioning a single disk, IIRC.
On Fri, 10 Aug 2007, Timothy Murphy wrote:
Rick Stevens wrote:
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
But you won't get this with the standard Fedora installation, which I assume is what people are talking about. You will be told you have disks /dev/sda and /dev/sdb, or whatever, and asked how you want to partition them.
not to harp on this, but can someone confirm that, with standard hard disks and partitioning, the limits are:
1) 4 primary partitions
2) only one of which can be extended
3) that extended partition can hold up to 12 logical partitions (this limit is different from IDE to SCSI, as i recall)
in any event, it's simply not true that you can have an unbounded number of logical partitions on a single drive, unless something's changed drastically lately.
rday
From what I understand, there are a max of 16 device entries created for a SCSI hard drive (sdx, and sdx1 through sdx15). So while you can have more partitions than that, Linux will not let you access them when using the SCSI code to access the drive. I believe it is a driver problem more than a udev problem.
You can do it with device mapper but not the kernel partition code. It's easy enough to fix, but certain people refuse to accept the sane way to fix it.
Timothy Murphy wrote:
Rick Stevens wrote:
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
But you won't get this with the standard Fedora installation, which I assume is what people are talking about. You will be told you have disks /dev/sda and /dev/sdb, or whatever, and asked how you want to partition them.
You are saying, in effect, that you can make Fedora think one of your disks is in fact several different disks. That may be true, but not with the standard Fedora installation.
Incidentally, what do you mean by a "storage array"? The discussion was about partitioning a single disk, IIRC.
With SCSI, there is a way to have more than one physical drive that responds to one SCSI address. You can have 16 LUNs on one SCSI address. Each LUN can be a physical drive, or part of a drive, depending on the LUN controller used. I believe the "storage array" he is talking about is several drives attached to a LUN controller. This was more common when drives were smaller, but is still useful when you need large/fast storage. The system does not need to know how the storage array is set up - all it needs to know is that it needs to send the SCSI command to something like device 4, LUN 2, and the LUN controller does the rest. Setting up the LUN controller in the first place may be complicated, but accessing the attached devices is not. Oh yes - the "storage array" is normally an external box with its own power supply, cooling, etc. You can hook a large number of drives to a single SCSI controller this way.
Mikkel
On Fri, 2007-08-10 at 09:24 -0400, Robert P. J. Day wrote:
On Fri, 10 Aug 2007, Timothy Murphy wrote:
Rick Stevens wrote:
Well, the "x" in your example can take the RE form "[a-z]+". For example, we have some storage arrays with, oh, 130 LUNs on them. They appear as /dev/sda[1-15] through /dev/sdiv[1-15]
But you won't get this with the standard Fedora installation, which I assume is what people are talking about. You will be told you have disks /dev/sda and /dev/sdb, or whatever, and asked how you want to partition them.
not to harp on this, but can someone confirm that, with standard hard disks and partitioning, the limits are:
- 4 primary partitions
- only one of which can be extended
- that extended partition can hold up to 12 logical partitions (this
limit is different from IDE to SCSI, as i recall)
in any event, it's simply not true that you can have an unbounded number of logical partitions on a single drive, unless something's changed drastically lately.
To answer many of the queries to my posting:
1. When I referred to a "storage array", I am referring to a number of SAN systems we have here, some based on EMC CX3/20s and some on Hitachi 9585s. These SAN systems have large numbers of individual spindles bunched together as various types of RAIDs (RAID5, RAID10, etc.). These RAID groups are presented by the SAN controller over Fibre Channel HBAs to the host machines and appear as if they were separate physical disks to Linux. In one case, these RAID groups appear as SCSI disks /dev/sda through /dev/sdiv. We then use these devices as PVs under LVM and stitch them together as needed.
2. SCSI does not limit LUNs to 16. In reality, LUNs are identified by a 64-bit number that can be broken down according to several different addressing models. In the simplest model, the LUN is represented by an 8-bit number, giving 256 LUNs. Other mechanisms are possible. You need to look at the SAM-2 specification, specifically section 4.9. It can be a bit mind-bending to read.
3. Again, the 4 primary partition thing is an artifact from the PC BIOS. There are systems that use a totally different partitioning scheme. The disks themselves don't give a toss.
4. The virtual partitions inside a primary extended partition are not limited to 24 or whatever someone said earlier...that's an artifact of the DOS/Windows limiting of hard drive letters to c: through z:. There's nothing in the spec that says 24's the limit.
----------------------------------------------------------------------
Rick Stevens, Principal Engineer                 rstevens@internap.com
CDN Systems, Internap, Inc.                    http://www.internap.com

"Brain: The organ with which we think that we think."
----------------------------------------------------------------------
Rick Stevens wrote:
- SCSI does not limit LUNs to 16.
I said _Fedora_ limits the number of SCSI partitions on a single SCSI disk to 16.
There may well be a way to modify this, but it is not available during a standard Fedora installation.
- Again, the 4 primary partition thing is an artifact from the PC BIOS.
There are systems that use a totally different partitioning scheme. The disks themselves don't give a toss.
But _Fedora_ standard installation limits each disk to 4 primary partitions (even though this makes no sense when applied to SCSI disks).
- The virtual partitions inside a primary extended partition are not
limited to 24 or whatever someone said earlier...that's an artifact of the DOS/Windows limiting of hard drive letters to c: through z:. There's nothing in the spec that says 24's the limit.
I don't think anyone is talking about SCSI specs, except you. Everyone else is talking about Fedora.
On Fri, 2007-08-10 at 20:57 +0100, Timothy Murphy wrote:
Rick Stevens wrote:
- SCSI does not limit LUNs to 16.
I said _Fedora_ limits the number of SCSI partitions on a single SCSI disk to 16.
Yes, that's true.
There may well be a way to modify this, but it is not available during a standard Fedora installation.
It would take a recompile of the kernel.
- Again, the 4 primary partition thing is an artifact from the PC BIOS.
There are systems that use a totally different partitioning scheme. The disks themselves don't give a toss.
But _Fedora_ standard installation limits each disk to 4 primary partitions (even though this makes no sense when applied to SCSI disks).
Because Fedora is designed for PC-style hardware. If you install it on a SPARC, for example, that rule doesn't apply.
- The virtual partitions inside a primary extended partition are not
limited to 24 or whatever someone said earlier...that's an artifact of the DOS/Windows limiting of hard drive letters to c: through z:. There's nothing in the spec that says 24's the limit.
I don't think anyone is talking about SCSI specs, except you. Everyone else is talking about Fedora.
I was trying to elucidate the differences between the restrictions placed on it by the PC hardware and what the actual disk is capable of. Sorry if I confused people.
----------------------------------------------------------------------
Rick Stevens, Principal Engineer                 rstevens@internap.com
CDN Systems, Internap, Inc.                    http://www.internap.com

"People tell me I look at the dark side. That's not true. I have
the heart of a small boy... in a jar right here on my desk."
                                                     -- Stephen King
----------------------------------------------------------------------
Rick Stevens wrote:
I don't think anyone is talking about SCSI specs, except you. Everyone else is talking about Fedora.
I was trying to elucidate the differences between the restrictions placed on it by the PC hardware and what the actual disk is capable of. Sorry if I confused people.
OK, apologies from me for the slight rudeness!
Robert P. J. Day wrote:
not to harp on this, but can someone confirm that, with standard hard disks and partitioning, the limits are:
- 4 primary partitions
- only one of which can be extended
- that extended partition can hold up to 12 logical partitions (this
limit is different from IDE to SCSI, as i recall)
This is not quite correct.
For a standard partition table (PT) there are up to four entries. Each entry may be either primary or extended, but not more than one entry may be extended. Each primary partition may contain up to one logical volume. If an extended partition exists, then any number of logical volumes may be created, space permitting.
SCSI is a physical interface and a physical protocol. What you do with it depends on the driver installed.
IDE is not a physical interface, it is a disc drive description. The physical interface is properly called ATA. What you do with it depends on how you want to use it.
There is more than one partitioning scheme. Any of them can be used with either SCSI or ATA. Whether software to allow this exists, I don't know. Partitioning schemes are conventions for partitioning drives; they are part of the format. They are not part of the physical access method. So SCSI and ATA, being physical access methods, are independent of format.
That's not to say that often by convention only certain formats are used with certain physical access methods.
in any event, it's simply not true that you can have an unbounded number of logical partitions on a single drive, unless something's changed drastically lately.
Space permitting, one may have potentially any number of logical volumes.
The format levels are
structured file/database (application specific)
file system/volume format (file system specific)
partition format
physical format
physical I/F
The access levels are
structured file/database (application specific, records, fields, etc.)
file level (file name)
logical level (partition + logical sector within partition)
logical/physical level (logical sector on disc)
physical level (C/H/S)
Mike
Mike McCarty wrote:
[snip]
If an extended partition exists, then any number of logical volumes may be created, space permitting.
Please allow me to reword that...
If an extended partition exists, then any number of logical volumes may be created within the extended partition, space permitting.
[snip]
Sorry.
Each volume has a volume descriptor, the boot record (BR), containing the BIOS Parameter Block (BPB). Some people prefer to call the BPB the "geometry". Some refer to the BR as the "geometry".
The first level of formatting, the physical or low level format, lays down tracks and sectors. On modern drives, low level formatting is no longer expected to be performed by end users.
The second level of formatting is partitioning. Floppies don't have this level. This format is contained in the Master Boot Record (MBR) of which there is at most one per drive. This level is optional for hard discs, but if one wants multiple file systems on a drive, then one must use this level.
The third level of formatting is the volume, which creates logical volumes. Floppies and hard drives both have this. It creates the Boot Record (BR), one per volume.
The fourth level of formatting is the file system. Each logical volume may have up to one file system in it. (I'm neglecting loopback type mounts here.)
The fifth level of formatting is application specific, and defines the file type. Some OSes supply predefined structured files; UNIX-like OSes do not, leaving all database definitions and the like to be application defined.
Mike
Mike McCarty wrote:
There is more than one partitioning scheme. Any of them can be used with either SCSI or ATA. Whether software to allow this exists, I don't know. Partitioning schemes are conventions for partitioning drives; they are part of the format. They are not part of the physical access method. So SCSI and ATA, being physical access methods, are independent of format.
That's not to say that often by convention only certain formats are used with certain physical access methods.
It's a little more than convention in a normal PC. If you'd like to boot, you need a structure that the BIOS understands.
Les Mikesell wrote:
Mike McCarty wrote:
There is more than one partitioning scheme. Any of them can be used with either SCSI or ATA. Whether software to allow this exists, I don't know. Partitioning schemes are conventions for partitioning drives; they are part of the format. They are not part of the physical access method. So SCSI and ATA, being physical access methods, are independent of format.
That's not to say that often by convention only certain formats are used with certain physical access methods.
It's a little more than convention in a normal PC. If you'd like to boot, you need a structure that the BIOS understands.
The BIOS has nothing to do with the partition table layout. I have heard that some BIOS can lay down an MBR, but an MBR isn't even needed for boot.
Mike
Mike McCarty wrote:
The BIOS has nothing to do with the partition table layout. I have heard that some BIOS can lay down an MBR, but an MBR isn't even needed for boot.
It depends on the BIOS. Some BIOSes will refuse to boot off a hard drive that does not have one primary partition marked as active. (Or bootable, depending on the program setting the flag.) No, the BIOS should not care about how the drive is partitioned. It should just check the signature on the first sector of the drive for a valid boot loader. But some BIOSes will check the partition table, and refuse to boot if they don't like what they find.
Mikkel
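If you do run into one of those BIOSes, the fix is simply to mark a primary partition active; for example with parted (disk name hypothetical):

    parted /dev/sda set 1 boot on  # set the active/boot flag on partition 1
    # or interactively in fdisk: command "a", the partition number, then "w"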