hi everyone,
I have a guest whose image resides on a Gluster volume; with SELinux I see:
virsh # start rhel-work2
error: Failed to start domain rhel-work2
error: internal error: qemu unexpectedly closed the monitor: (process:57641): GLib-WARNING **: gmem.c:482: custom memory allocation vtable not supported
[2016-12-16 14:35:31.748659] E [MSGID: 104007] [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:QEMU-VMs) [Invalid argument]
2016-12-16T14:35:32.728242Z qemu-kvm: -drive file=gluster://127.0.0.1/QEMU-VMs/rhel-work2.qcow2,format=raw,if=none,id=drive-virtio-disk0: Gluster connection failed for server=127.0.0.1 port=0 volume=QEMU-VMs image=rhel-work2.qcow2 transport=tcp: Permission denied
Attempting to catch sealert reports, I see only:
]$ ausearch -ts 14:28 | egrep -i '(virt|glust|qem)' | audit2allow
#============= svirt_t ==============
#!!!! WARNING: 'unlabeled_t' is a base type.
allow svirt_t unlabeled_t:dir write;
and probably a lot more. Before I start looking for silent denials: is there a boolean for libvirt + Gluster?
many thanks, L.
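On the "silent denials" part: those are typically hidden by dontaudit rules, and the usual way to surface them is a temporary policy rebuild. A sketch (semodule and ausearch are standard policycoreutils/audit tools; the workflow below is a suggestion, not something from this thread):

```shell
# Run as root on the hypervisor:
#   semodule -DB            # rebuild policy with dontaudit rules disabled
#   virsh start rhel-work2  # reproduce the failure
#   ausearch -m AVC -ts recent
#   semodule -B             # restore dontaudit rules when done
# Offline: confirm the egrep filter used in this thread also catches AVC
# records produced by such a run:
echo 'type=AVC msg=audit(1.0:1): avc: denied { read } for comm="qemu-kvm"' |
  egrep -i '(virt|glust|qem)'
```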
On 12/16/2016 03:47 PM, lejeczek wrote:
]$ ausearch -ts 14:28 | egrep -i '(virt|glust|qem)' | audit2allow
Please provide the output of ausearch | egrep without audit2allow. Raw AVC messages make the problem easier to understand, and an investigator can run audit2allow themselves.
and probably a lot more. Before I start looking for silent denials: is there a boolean for libvirt + Gluster?
Try the Red Hat Gluster Storage chapter [1].
[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
Petr
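On the boolean question: listing the virt-related booleans on the hypervisor is a cheap first check. A sketch, assuming the policy ships a `virt_use_fusefs`-style boolean for FUSE-mounted volumes (exact boolean names vary by selinux-policy version and are not confirmed in this thread):

```shell
# Real check on the hypervisor (run as root for -P to persist):
#   getsebool -a | grep -E '^virt_use'
#   setsebool -P virt_use_fusefs on   # assumption: FUSE-mounted Gluster volume
# Offline demo of the same filter over sample getsebool-style output:
printf '%s\n' 'virt_use_fusefs --> off' 'virt_use_nfs --> on' 'abrt_anon_write --> off' |
  grep -E '^virt_use'
```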
On 16/12/16 15:03, Petr Lautrbach wrote:
Please provide the output of ausearch | egrep without audit2allow. Raw AVC messages make the problem easier to understand, and an investigator can run audit2allow themselves.
0-GLUSTERs]$ ausearch -ts 15:45 | egrep -i '(virt|glust|vnc|spice|qxl|qem)'
type=VIRT_MACHINE_ID msg=audit(1481903143.572:23118): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf vm-ctx=system_u:system_r:svirt_t:s0:c444,c977 img-ctx=system_u:object_r:svirt_image_t:s0:c444,c977 model=selinux exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_MACHINE_ID msg=audit(1481903143.572:23119): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf vm-ctx=+107:+107 img-ctx=+107:+107 model=dac exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903144.648:23121): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=net reason=open vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf net=52:54:00:c6:99:da path="/dev/net/tun" rdev=0A:C8 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903144.671:23122): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=net reason=open vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf net=52:54:00:c6:99:da path="/dev/vhost-net" rdev=0A:EE exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903144.784:23126): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=cgroup reason=deny vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf cgroup="/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d20\x2drhel\x2dwork3.scope/" class=all exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903144.784:23127): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=cgroup reason=allow vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf cgroup="/sys/fs/cgroup/devices/machine.slice/machine-qemu\x2d20\x2drhel\x2dwork3.scope/" class=major category=pty maj=88 acl=rw exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=PROCTITLE msg=audit(1481903144.631:23120): proctitle="/usr/sbin/libvirtd"
type=SYSCALL msg=audit(1481903144.631:23120): arch=c000003e syscall=16 success=yes exit=0 a0=22 a1=89a2 a2=7f647dcf3110 a3=2 items=0 ppid=1 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="libvirtd" exe="/usr/sbin/libvirtd" subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 key=(null)
type=SYSCALL msg=audit(1481903146.990:23143): arch=c000003e syscall=42 success=no exit=-13 a0=17 a1=7fffa9c1d250 a2=6e a3=7fffa9c1cf70 items=0 ppid=1 pid=10614 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c444,c977 key=(null)
type=AVC msg=audit(1481903146.990:23143): avc: denied { write } for pid=10614 comm="qemu-kvm" name="nss" dev="dm-4" ino=806444624 scontext=system_u:system_r:svirt_t:s0:c444,c977 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=sock_file permissive=0
type=SYSCALL msg=audit(1481903146.992:23144): arch=c000003e syscall=2 success=no exit=-13 a0=7f58c3525580 a1=c2 a2=180 a3=1 items=0 ppid=1 pid=10651 auid=4294967295 uid=107 gid=107 euid=107 suid=107 fsuid=107 egid=107 sgid=107 fsgid=107 tty=(none) ses=4294967295 comm="qemu-kvm" exe="/usr/libexec/qemu-kvm" subj=system_u:system_r:svirt_t:s0:c444,c977 key=(null)
type=AVC msg=audit(1481903146.992:23144): avc: denied { write } for pid=10651 comm="qemu-kvm" name="tmp" dev="dm-4" ino=805700962 scontext=system_u:system_r:svirt_t:s0:c444,c977 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=dir permissive=0
type=VIRT_RESOURCE msg=audit(1481903149.303:23163): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=net reason=start vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf old-net="?" new-net="52:54:00:c6:99:da" exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903149.303:23164): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=dev reason=start vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf bus=usb device=555342207265646972646576 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903149.303:23165): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=dev reason=start vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf bus=usb device=555342207265646972646576 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903149.303:23166): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=mem reason=start vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf old-mem=0 new-mem=1048576 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_RESOURCE msg=audit(1481903149.303:23167): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm resrc=vcpu reason=start vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf old-vcpu=0 new-vcpu=1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=success'
type=VIRT_CONTROL msg=audit(1481903149.303:23168): pid=5637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:virtd_t:s0-s0:c0.c1023 msg='virt=kvm op=start reason=booted vm="rhel-work3" uuid=5501263b-181d-47ed-ab03-a6066f3d26bf vm-pid=-1 exe="/usr/sbin/libvirtd" hostname=? addr=? terminal=? res=failed'
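Of all the records above, only the two type=AVC lines are actual SELinux denials; the second one (svirt_t writing to a "tmp" directory labeled unlabeled_t) looks like the Gluster-side problem. One way to pull the interesting fields out of such a record (the record below is copied verbatim from the output above):

```shell
# One of the AVC records from the ausearch output, verbatim:
avc='type=AVC msg=audit(1481903146.992:23144): avc: denied { write } for pid=10651 comm="qemu-kvm" name="tmp" dev="dm-4" ino=805700962 scontext=system_u:system_r:svirt_t:s0:c444,c977 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=dir permissive=0'
# Keep only the source context, target context, and target class:
echo "$avc" | grep -oE '(scontext|tcontext|tclass)=[^ ]+'
# prints:
#   scontext=system_u:system_r:svirt_t:s0:c444,c977
#   tcontext=unconfined_u:object_r:unlabeled_t:s0
#   tclass=dir
```

The unlabeled_t target context is what the audit2allow warning further down is complaining about: something on the brick has no SELinux label at all.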
It seems glusterfs.x86_64 0:3.7.18-1.el7 (from oVirt) fixes it; the problem exists in glusterfs-3.7.16-1.el7.x86_64.
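If anyone else hits this, a quick way to check which side of that fix an installed package is on (version strings taken from this thread; sort -V is a GNU coreutils version comparison, which approximates RPM ordering for strings like these):

```shell
# Compare an installed glusterfs version against the build reported to fix it.
installed=3.7.16-1.el7   # e.g. from: rpm -q --qf '%{VERSION}-%{RELEASE}\n' glusterfs
fixed=3.7.18-1.el7
if [ "$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
  echo "at or past the fixed build"
else
  echo "needs upgrade"   # prints this for 3.7.16-1.el7
fi
```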
selinux@lists.fedoraproject.org