On 05/12/2018 at 21:43, Samuel Sieb wrote:
On 12/5/18 4:03 AM, François Patte wrote:
1- déc. 05 11:33:05 dipankar systemd[1]: systemd-modules-load.service: Main process exited, code=exited, status=1/FAILURE
déc. 05 11:33:05 dipankar systemd[1]: systemd-modules-load.service: Failed with result 'exit-code'.
déc. 05 11:33:05 dipankar systemd[1]: Failed to start Load Kernel Modules.
-- Subject: The unit systemd-modules-load.service has failed
This error occurs several times during the boot process.
You can ignore this. I think I get these messages on all systems.
2- A lot of lvm errors also appear:
déc. 05 11:33:06 dipankar lvm[1178]: 3 logical volume(s) in volume group "systeme" now active
déc. 05 11:33:06 dipankar lvm[1178]: WARNING: Device mismatch detected for debian/deb-racine which is accessing /dev/md127 instead of /dev/sda2.
déc. 05 11:33:06 dipankar lvm[1178]: device-mapper: reload ioctl on (253:0) failed: Périphérique ou ressource occupé  <-- busy
déc. 05 11:33:06 dipankar lvm[1178]: Failed to suspend debian/deb-racine.
déc. 05 11:33:06 dipankar kernel: device-mapper: table: 253:0: linear: Device lookup failed
déc. 05 11:33:06 dipankar kernel: device-mapper: ioctl: error adding target to table
This is where the problem is. Someone ran into a similar problem at work. I think it has to do with the RAID metadata type. lvm is detecting the volumes on the raw partition before the RAID is set up, so lvm tries to use the partition directly. Then it finds duplicates on the other RAID partition and doesn't like that.
Since all your lvm volumes are only on RAID volumes, you could try modifying the /etc/lvm/lvm.conf file. Find the global_filter option and add the following line:

    global_filter = [ "r|sd|" ]
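To illustrate what that filter does: in lvm.conf, "r|...|" is a reject pattern, and the regex is unanchored, so "r|sd|" rejects any device path containing "sd" (the raw /dev/sdXN RAID members), while paths that match no pattern (like /dev/md127) are still accepted. The shell function below is only a hypothetical sketch mimicking that classification, not LVM's actual matching code:

```shell
# Sketch of how global_filter = [ "r|sd|" ] classifies device paths.
# "r|sd|" rejects any path matching "sd"; unmatched paths are accepted.
classify() {
  case "$1" in
    *sd*) echo "rejected" ;;   # matches the r|sd| reject pattern
    *)    echo "accepted" ;;   # no pattern matched: accepted by default
  esac
}

classify /dev/sda2   # rejected  (raw RAID member, no longer scanned)
classify /dev/md127  # accepted  (assembled RAID device, scanned by lvm)
```

One caveat (an assumption worth checking on your system): a copy of lvm.conf is usually embedded in the initramfs, so after editing it you may also need to regenerate the initramfs (on Fedora, "dracut -f") for the filter to take effect during early boot.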
Magic! This is a magic filter: everything went fine --- including the /opt partition --- after adding this line in the /etc/lvm/lvm.conf file.
I don't understand why... but my system works again.
Thank you.