Hello!
When I mount my XFS disk, I see these errors in dmesg:
Jun 27 11:14:38 softded kernel: XFS (dm-4): xfs_buf_get_uncached: failed to map pages
Jun 27 11:14:38 softded kernel: vmap allocation for size 1048576 failed: use vmalloc=<size> to increase size.
Jun 27 11:14:38 softded kernel: vmap allocation for size 1048576 failed: use vmalloc=<size> to increase size.
Jun 27 11:14:38 softded kernel: vmap allocation for size 1048576 failed: use vmalloc=<size> to increase size.
Jun 27 11:14:38 softded kernel: XFS (dm-4): xfs_buf_get_uncached: failed to map pages
Jun 27 11:14:38 softded kernel: vmap allocation for size 4194304 failed: use vmalloc=<size> to increase size.
Jun 27 11:14:38 softded kernel: XFS (dm-4): xfs_buf_get_uncached: failed to map pages
Jun 27 11:14:38 softded kernel: XFS (dm-4): xfs_buf_get_uncached: failed to map pages
Jun 27 11:14:38 softded kernel: XFS (dm-4): xfs_buf_get_uncached: failed to map pages
Jun 27 11:14:38 softded kernel: XFS (dm-4): Ending clean mount
I have more than 10 million inodes on the partition. My computer is a Pentium 4 at 3 GHz with 2 GB of memory, running Fedora 20 with kernel 3.14.7-200.fc20.i686+PAE.
What does "use vmalloc=<size> to increase size" mean? How can I change this parameter?
Thank you!
On 27.06.2014, Roman Kravets wrote:
Jun 27 11:14:38 softded kernel: vmap allocation for size 1048576 failed: use vmalloc=<size> to increase size.
What does "use vmalloc=<size> to increase size" mean? How can I change this parameter?
It is a kernel boot parameter, which you can add manually by editing /boot/grub2/grub.cfg. Try "vmalloc=256M".
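On Fedora it's cleaner to add it to GRUB_CMDLINE_LINUX in /etc/default/grub and then regenerate grub.cfg, so the setting survives kernel updates. A sketch from memory (check your own file first, your existing options will differ):

    # in /etc/default/grub, append vmalloc=256M to the existing options:
    GRUB_CMDLINE_LINUX="rhgb quiet vmalloc=256M"

    # then regenerate the config (BIOS path shown; EFI systems use a different path):
    grub2-mkconfig -o /boot/grub2/grub.cfg

After rebooting, "cat /proc/cmdline" should show the new parameter.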
On Sat, Jun 28, 2014 at 10:41 AM, Heinz Diehl htd@fritha.org wrote:
On 27.06.2014, Roman Kravets wrote:
Jun 27 11:14:38 softded kernel: vmap allocation for size 1048576 failed: use vmalloc=<size> to increase size.
What does "use vmalloc=<size> to increase size" mean? How can I change this parameter?
It is a kernel boot parameter, which you can add manually by editing /boot/grub2/grub.cfg. Try "vmalloc=256M".
I had quite a bit of trouble with vmalloc on 32-bit and XFS. I finally went to 64-bit (the vmalloc address space is much bigger; no issues once I switched), but that does not help you if your hardware is 32-bit only. If your Pentium can run 64-bit code and you have a 64-bit machine elsewhere, compiling a 64-bit kernel there and booting it on a fully 32-bit userspace is possible (I ran that setup for six months on Fedora 18), if you want to avoid a full 64-bit reinstall in the short term.
/proc/meminfo does show the vmalloc size and usage, so you can see what your current size and usage are and adjust from there. I think the vmalloc space comes out of the 1024 MB of low-memory address space, which is the reason for the limit on 32-bit.
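For example, if I read the fields right:

    $ grep Vmalloc /proc/meminfo

prints VmallocTotal (the size of the vmalloc address space), VmallocUsed (how much of it is currently allocated) and VmallocChunk (the largest free contiguous block). When VmallocChunk drops below what a caller asks for -- 1048576 bytes, i.e. 1 MB, in your log -- you get exactly the vmap failure you saw.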
Dear Roger and Heinz,
Thank you for your answers!
I added this parameter to grub.cfg and the problem is resolved.
On Sat, Jun 28, 2014 at 11:55:39AM -0500, Roger Heflin wrote:
<snip>
Dear Chris,
After I added the parameter vmalloc=256M to grub.cfg, the problem was resolved.
I don't know whether this parameter should be added to the default grub.cfg or not, but when I first used my XFS partition with the default vmalloc size I didn't see the problem. I was just curious, and that's why I asked the community.
On Mon, Jun 30, 2014 at 11:10:01AM -0600, Chris Murphy wrote:
On Jun 27, 2014, at 2:40 AM, Roman Kravets admin@softded.net wrote:
<snip>
I suggest also asking about this directly on the XFS list. XFS is going to be the default filesystem for Fedora 21 Server and, as far as I know, that will include the i686 arch. So we should try to squash as many related i686 bugs and non-graceful failures as possible.
Chris Murphy
On 06/30/2014 10:10 AM, Chris Murphy issued this missive: <snip>
...and non-graceful failures as possible.
You mean there is such a thing as a "graceful" failure? Wow! Gotta tell my boss about that!
(tongue planted firmly in cheek)

Rick Stevens, Systems Engineer, AllDigital  ricks@alldigital.com
On Jun 30, 2014, at 9:51 AM, Roman Kravets admin@softded.net wrote:
<snip>
It sounds like a bug somewhere. What are the mount options and fs info for this XFS file system?
mount | grep xfs
xfs_info <dev>
Chris Murphy
Allegedly, on or about 30 June 2014, Rick Stevens sent:
You mean there is such a thing as a "graceful" failure? Wow! Gotta tell my boss about that!
Ungraceful failure: instant hard crash with complete lock-up!
Graceful failure: you get an error message first, but you're still screwed.
;-p
[softded@softded ~]$ mount | grep xfs
/dev/mapper/encrypt-store on /mnt/store type xfs (rw,noatime,nodiratime,attr2,inode64,sunit=1024,swidth=2048,noquota)
[softded@softded ~]$ xfs_info /dev/mapper/encrypt-store
meta-data=/dev/mapper/encrypt-store isize=256    agcount=16, agsize=2440064 blks
         =                          sectsz=512   attr=2, projid32bit=0
         =                          crc=0
data     =                          bsize=4096   blocks=39041024, imaxpct=25
         =                          sunit=128    swidth=256 blks
naming   =version 2                 bsize=4096   ascii-ci=0 ftype=0
log      =internal                  bsize=4096   blocks=19064, version=2
         =                          sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                      extsz=4096   blocks=0, rtextents=0
On Mon, Jun 30, 2014 at 02:00:26PM -0600, Chris Murphy wrote:
<snip>
On Jun 30, 2014, at 9:44 PM, Roman Kravets admin@softded.net wrote:
<snip>
A 149 GiB file system, is that correct? Is this filesystem on hardware or software RAID?
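(Showing my work: 39041024 blocks * 4096 bytes is about 149 GiB. And the mount options report sunit/swidth in 512-byte sectors while xfs_info reports them in 4096-byte blocks, so sunit=1024,swidth=2048 sectors and sunit=128 swidth=256 blks describe the same 512 KiB stripe unit and 1 MiB stripe width.)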
Chris Murphy
Dear Chris,
Yes, that is correct.
I have a software RAID10 with 4x80 GB HDDs.
I store my mail archive on this RAID.
On Wed, Jul 02, 2014 at 02:50:10PM -0600, Chris Murphy wrote:
<snip>
On Jul 3, 2014, at 10:49 AM, Roman Kravets admin@softded.net wrote:
<snip>
OK, well, everything there looks normal. I'd still consider posting the experience to the XFS list, checking the XFS FAQ for what information to include in the report. The fact that you get this at mount time, and that it inhibits a normal mount, is pretty weird. It's not a big file system either, but maybe it has something to do with 10 million files versus the size of the file system.
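From memory, the FAQ asks a problem report to include roughly the following (double-check the FAQ itself before posting):

    uname -a              # kernel version
    xfs_repair -V         # xfsprogs version
    xfs_info /mnt/store   # filesystem geometry, as you posted above
    cat /proc/meminfo     # memory layout; the Vmalloc* lines matter here
    dmesg                 # the complete error messages

plus a description of the storage stack, which in your case sounds like XFS on dm-crypt on top of an md raid10 of four disks.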
Chris Murphy
I don't remember whether this error appeared immediately after I created the file system or only after I had created many files. But after I added the vmalloc parameter to grub.cfg, the problem was resolved.
On Thu, Jul 03, 2014 at 03:24:37PM -0600, Chris Murphy wrote:
<snip>