On Wed, 5 Apr 2017 12:28:11 +0200
François Patte <francois.patte(a)mi.parisdescartes.fr> wrote:
Le 04/04/2017 17:39, stan a écrit :
> On Tue, 4 Apr 2017 14:23:52 +0200
> François Patte <francois.patte(a)mi.parisdescartes.fr> wrote:
>> What is the meaning of these error/warning messages (f25) and how
>> to correct the config to get rid of them:
>> -- LVM
>> Daemon lvmetad returned error 104: 1 Time(s)
I don't know the internals, but this is probably because lvmetad is not
running.
>> WARNING: Failed to connect to lvmetad. Falling back to
>> scanning.: 2 Time(s)
When lvmetad is not used, LVM commands revert to scanning disks for LVM
metadata.
>> WARNING: lvmetad is being updated, retrying (setup) for 10
>> seconds.: 1 Time(s)
New LVM disks that appear on the system must be scanned before lvmetad
knows about them. If lvmetad does not know about a disk, then LVM
commands using lvmetad will also not know about it. When disks are
added or removed from the system, lvmetad must be updated.
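For reference, this is how lvmetad is typically refreshed after disks
change, using the pvscan(8) command from the LVM toolset (run as root;
the device name below is just an example, not from your system):

```shell
# Rescan all devices and push the resulting metadata into lvmetad
pvscan --cache

# Or update lvmetad for a single newly-appeared device
pvscan --cache /dev/sdb
```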
Moreover lvmetad.service is disabled, so I don't understand why it is
complaining, and about what.
Something, somewhere has been told that lvmetad.service is enabled, and
is trying to use it.
>> -- systemd
>> Failed unmounting /var.: 1 Time(s)
>> Requested transaction contradicts existing jobs: Transaction is
>> destructive.: 1 Time(s)
>> lvm2-lvmetad.socket: Failed to queue service startup job (Maybe the
>> service file is missing or not a non-template unit?): Transaction
>> is destructive.: 1 Time(s)
>> lvm2-lvmetad.socket: Unit entered failed state.: 1 Time(s)
But this one is running and not in a "failed state" when you enquire
about its status.
Perhaps this is the source of the above errors? It is trying to use
lvmetad even though it has been disabled?
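If the goal is to stop anything from trying to use lvmetad, one
plausible fix (a sketch, not verified on f25) is to disable both the
socket and the service, since socket activation can start lvmetad on
demand even when the service itself is disabled, and then tell the LVM
tools not to expect the daemon:

```shell
# Stop systemd socket activation from resurrecting lvmetad
systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service

# Then, in /etc/lvm/lvm.conf, tell the LVM tools not to use the daemon:
#     use_lvmetad = 0
```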
>> What does mean "Transaction is destructive"?
> My guess is that, if it runs, information will be lost from the
> system, or process sequence will be broken.
> The systemd message seems pretty clear in that regard; there are
> jobs running that are using /var. If it wasn't you that was trying
> to umount /var, then this seems like a system error, as the system
> shouldn't be trying to umount /var when jobs are running, except
> maybe at shutdown.
This message has appeared at shutdown for a while now: yes, systemd does
not shut down some services before unmounting the partitions used by
those services (so you can sometimes wait a while because a "job is
running"). How long will it take until these kinds of bugs are fixed? I
have an f21 system without this kind of bug, so it is a regression that
has lasted for some time now.... When you ask the systemd folks, they
reply that it is a bug in the distro!
In my case, this delay (it seemed to be 90 seconds) was caused by a
daemon that didn't respond properly to the kill -15 (SIGTERM) that
systemd sends to running processes at shutdown. If I kill that daemon
with kill -9 (SIGKILL) before I shut down, the shutdown happens quickly.
So, in that sense, the systemd folks are right. But I suspect they
changed the kill from -9 to -15 somewhere along the way, causing this
issue. -9 can't be caught, so it doesn't allow cleanup, while -15 lets a
process catch the signal and clean up before exiting, which is probably
why they did it, and why they won't be changing it back.
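To illustrate the difference (a minimal sketch; nothing here comes from
the actual daemon involved): a process can trap SIGTERM (-15) and run
cleanup code before exiting, but SIGKILL (-9) can never be caught:

```shell
logfile=$(mktemp)

# A stand-in "daemon": loops for a few seconds unless TERM arrives
# first, in which case its trap records the cleanup and exits.
(
    trap 'echo "cleaning up" >> "$logfile"; exit 0' TERM
    for i in 1 2 3 4 5; do sleep 1; done
) &
pid=$!

sleep 1
kill -15 "$pid"          # SIGTERM: the trap runs before the process exits
wait "$pid" 2>/dev/null

cat "$logfile"           # prints: cleaning up
```

Replace kill -15 with kill -9 and the log stays empty: the process dies
without ever getting the chance to run its trap.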
Are you running any non-standard daemons on your system? Daemons that
weren't packaged by Fedora?