Hi,
About 1 in 4 times, Fedora 14 hangs during shutdown on at least 4 of my systems. Looking at the shutdown messages (press ESC in the splash screen) and adding some debug statements to /etc/rc.d/rc0.d/S01halt, it hangs after the messages:
"Unmounting file systems" "init: Re-executing /sbin/init"
with the message:
"mount: you must specify the file system type."
Adding some debug, this appears after the following command is executed: "fstab-decode mount -n -o ro,remount /dev/sda1 /"
The file system is ext4 on all of the systems and that command looks ok.
Any ideas ?
Terry
On 01/29/2011 11:42 PM, JB wrote:
Terry Barnaby<terry1<at> beam.ltd.uk> writes:
...
Give us unedited outputs:
$ cat /etc/fstab
$ cat /etc/mtab
$ cat /proc/mounts
JB
The above files:
/etc/fstab =========================================
#
# /etc/fstab
# Created by anaconda on Fri Nov 26 19:45:43 2010
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=5f18c8b8-2817-47cd-90d0-44f4dcf063de /              ext4    defaults        1 1
UUID=450baf62-09d3-4618-bcd7-79bd440a6c71 swap           swap    defaults        0 0
tmpfs                       /dev/shm        tmpfs   defaults        0 0
devpts                      /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                       /sys            sysfs   defaults        0 0
proc                        /proc           proc    defaults        0 0
king.kingnet:/home          /home           nfs     defaults        0 0
king.kingnet:/data          /data           nfs     defaults        0 0
#king.kingnet:/data/video   /data/video     nfs     defaults        0 0
king.kingnet:/var/cache/yum /var/cache/yum  nfs     defaults        0 0
/dev/sdb1                   /datal          auto    defaults        1 2
/etc/mtab =========================================
/dev/sda1 / ext4 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs rw 0 0
/dev/sdb1 /datal ext3 rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
king.kingnet:/home /home nfs rw,addr=192.168.2.1 0 0
king.kingnet:/data /data nfs rw,vers=4,addr=192.168.2.1,clientaddr=192.168.2.2 0 0
king.kingnet:/var/cache/yum /var/cache/yum nfs rw,vers=4,addr=192.168.2.1,clientaddr=192.168.2.2 0 0
fusectl /sys/fs/fuse/connections fusectl rw 0 0
gvfs-fuse-daemon /home/dawn/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=dawn 0 0
/proc/mounts =========================================
rootfs / rootfs rw 0 0
/proc /proc proc rw,relatime 0 0
/sys /sys sysfs rw,relatime 0 0
udev /dev devtmpfs rw,relatime,size=506280k,nr_inodes=126570,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda1 / ext4 rw,relatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sdb1 /datal ext3 rw,relatime,errors=continue,barrier=0,data=ordered 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
king.kingnet:/home /home nfs rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.2.1,mountvers=3,mountport=48450,mountproto=udp,addr=192.168.2.1 0 0
king.kingnet:/data/ /data nfs4 rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.2,minorversion=0,addr=192.168.2.1 0 0
king.kingnet:/var/cache/yum/ /var/cache/yum nfs4 rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.2,minorversion=0,addr=192.168.2.1 0 0
/etc/auto.misc /misc autofs rw,relatime,fd=6,pgrp=1539,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs rw,relatime,fd=12,pgrp=1539,timeout=300,minproto=5,maxproto=5,indirect 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
gvfs-fuse-daemon /home/dawn/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1020,group_id=1020 0 0
Cheers
Terry
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
...
This is our offending script, with the section of interest to us. /etc/rc.d/rc0.d/S01halt
...
# Tell init to re-exec itself.
kill -TERM 1
# ################################################################
# debugging snapshot statements
# ----------------------------------------------------------------
date >> /halt.debug
cat /etc/mtab >> /halt.debug
cat /proc/mounts >> /halt.debug
# ################################################################
# Remount read only anything that's left mounted.
# echo $"Remounting remaining filesystems readonly"
mount | awk '{ print $1,$3 }' | while read dev dir; do
    fstab-decode mount -n -o ro,remount $dev $dir
done
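For what it's worth, a sketch of a more defensive variant of that loop (my own variant, not the shipped script): reading /proc/mounts in reverse mount order and skipping pseudo-filesystems avoids parsing the human-oriented output of "mount". It only prints the commands here; dropping the echo would execute them.

```shell
# Sketch only (assumes /proc is still mounted): print the remount commands
# that would be run, walking /proc/mounts bottom-up and skipping pseudo
# filesystems. Remove the "echo" to actually execute them.
tac /proc/mounts | while read dev dir fstype rest; do
    case $fstype in
        proc|sysfs|devpts|tmpfs|devtmpfs|rootfs) continue ;;
    esac
    echo fstab-decode mount -n -o ro,remount "$dev" "$dir"
done
```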
# If we left mdmon's running wait for the raidsets to become clean ...
Place these debugging snapshot statements in there to see what is in /etc/mtab prior to the 'mount' loop. The other two are helpers: 'date' documents the time, and /proc/mounts (normally equivalent to /etc/mtab) is even more useful, as it is generally more accurate.
Let it be there as long as needed to catch the difference between good and bad shutdowns (as you said, the bad one happens sporadically, every 4th shutdown or so).
Keep in mind that a system update may replace this script.
Just in case, this display would be of interest:
$ cat /proc/filesystems
JB
JB <jb.1234abcd <at> gmail.com> writes:
# ################################################################
# debugging snapshot statements
# ----------------------------------------------------------------
date >> /halt.debug
cat /etc/mtab >> /halt.debug
cat /proc/mounts >> /halt.debug
# ################################################################
I think a correction is needed, as /proc is not available any more: it was unmounted immediately prior to our debugging statements. So, remove this line:
cat /proc/mounts >> /halt.debug
JB
On 01/30/2011 02:11 PM, JB wrote:
...
I added the debug, and basically the output was the same when it shut down cleanly and when it failed.
# A bad one
Sun Jan 30 17:12:08 GMT 2011
/dev/sda1 / ext4 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
Mount:
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
fstab-decode mount -n -o ro,remount /dev/sda1 /
fstab-decode mount -n -o ro,remount proc /proc
fstab-decode mount -n -o ro,remount sysfs /sys
# A good one, / has been remounted ro and so the last two remount commands
# are not present
Sun Jan 30 17:18:16 GMT 2011
/dev/sda1 / ext4 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
Mount:
/dev/sda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
fstab-decode mount -n -o ro,remount /dev/sda1 /
I put a /bin/sh after this so I could have a look at the system's state at the point when the remount failed. The last few items of the "ps ax" listing are shown:
 1282 ?        S      0:00 [rpciod/1]
 1378 ?        S      0:00 [nfsiod]
 1381 ?        S      0:00 [lockd]
 1960 ?        D      0:00 [flush-0:19]
 2006 ?        Zl     0:00 [akonadi_control] <defunct>
 2008 ?        Z      0:00 [akonadiserver] <defunct>
 2010 ?        Zl     0:00 [mysqld] <defunct>
 2125 ?        Ds     0:00 [pulseaudio]
 2332 ?        Z      0:00 [gconf-helper] <defunct>
 2365 ?        D      0:00 [dcopserver]
 2448 ?        Ss     0:00 /bin/bash /etc/rc0.d/S01halt start
 3001 ?        S      0:00 /bin/sh
 3019 ?        R      0:00 ps ax
It looks like some processes are left over from the GUI (KDE). I suspect they have log files or something else open on / in write mode, and this is preventing the read-only remount from working. Running "mount -o remount,ro /" at this point fails with "/ is busy". They are probably waiting on /home, an NFS file system that was unmounted earlier in the shutdown process. I restarted the network and netfs services and these processes disappeared. After shutting netfs and network back down, along with the other leftover processes, the remount command worked fine and the system shut down.
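To pin down exactly which processes were holding / (or /home) busy at that point, something like this could be run from the debug shell. The helper name is hypothetical; it is just a rough stand-in for "fuser -vm /", which may no longer be usable that late in shutdown:

```shell
# Hypothetical helper (a sketch, not part of the halt script): print the PIDs
# that have a file open under the given directory, by resolving the
# /proc/<pid>/fd symlinks.
pids_using() {
    dir=$1
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case $target in
            "$dir"/*) pid=${fd#/proc/}; echo "${pid%%/*}" ;;
        esac
    done | sort -u
}
# usage from the debug shell (sketch): pids_using /home >> /halt.debug
```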
Note I am using the "network" not "NetworkManager" service. The NetworkManager service does not work well for me with systems using networked /home and other file systems.
I suspect an issue further up the shutdown chain: the system should wait for all of the processes to exit *before* unmounting the NFS file systems. I will have a look here; any ideas?
Terry
On 01/30/2011 05:55 PM, Terry Barnaby wrote:
...
I am guessing this is primarily a KDE problem (although the system should still shut down cleanly even if processes are still there waiting on NFS). I presume the KDE shutdown should wait for all of its processes to exit completely before it asks init to shut down the system ...
Terry
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
...
Firstly, I have to re-correct myself - my original debugging statements were correct. I checked on my machine and /proc/mounts is still available, so we should include it, as it has more info than /etc/mtab. It could give us a clue about any other mount-related things.
...
# ################################################################
# debugging snapshot statements
# ----------------------------------------------------------------
echo "date" >> /halt.debug
date >> /halt.debug
echo "cat /etc/mtab" >> /halt.debug
cat /etc/mtab >> /halt.debug
echo "cat /proc/mounts" >> /halt.debug
cat /proc/mounts >> /halt.debug
# ################################################################
...
Secondly, I have a few things to check with regard to all of this. Perhaps something will pop up.
JB
On 01/30/2011 06:51 PM, JB wrote:
...
I am fairly sure the problem is the akonadi/pulseaudio/gconf-helper/dcopserver processes that are still hanging around because the NFS mounts they are using have gone away.
As I said, remounting the NFS /home allows them to exit, which allows / to be remounted read-only and the system to shut down.
I think there are three bugs here:
1. KDE is not waiting for all of its session's processes to exit before telling init to halt the system.
2. The rc0 scripts are not making sure all processes using the NFS file systems have exited prior to unmounting them.
3. The final rc0 read-only remount of / should make sure all processes have been killed prior to issuing the remount command. (Will the kernel allow them to be killed while waiting on unmounted NFS? Kernel bug?)
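For bugs 2 and 3, a quick manual check along these lines might confirm the theory before the unmounts happen (my sketch, assuming ps is still usable at that point):

```shell
# List anything in uninterruptible sleep (state D) or defunct (Z) -- these
# are the processes that later block the read-only remount of /.
ps -eo pid,stat,comm | awk '$2 ~ /^[DZ]/ { print }'
```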
Cheers
Terry
On 01/30/2011 11:54 AM, Terry Barnaby wrote:
I am fairly sure the problem is the akonadi/pulseaudio/gconf-helper/dcopserver
I don't want to hijack the main thread, so I've changed the subject slightly. I've been wondering something and it finally got to the point that I had to ask: is it just me, or does anybody else look at the word "akonadi" and think it's Hebrew?
Joe Zeff <joe <at> zeff.us> writes:
...
Joe, I do not mean to be touchy, but ... your [Wandering OT] is really strange :-)
It is KDE related. That's all as far as what we are doing here in this thread. http://en.wikipedia.org/wiki/Akonadi
Be serious ... :-)
JB
On 01/30/2011 12:52 PM, JB wrote:
...
This is why I put it into a new thread and marked it as OT or, "Off Topic." The word itself looks like Hebrew to me and I wondered if anybody else saw it that way.
Joe Zeff <joe <at> zeff.us> writes:
...
This is what I found.
Akonadi. A Ghanaian oracular goddess who was worshipped by many West Africans. In Accra, she had a celebrated oracular shrine. Akonadi is also a deity associated with justice and the protection of women.
JB
On 01/30/2011 01:16 PM, JB wrote:
...
Thank you. Now I know where the word came from, but it still looks like Hebrew. Clearly, however, I'm the only one on this list who sees it that way, and that's what I wanted to find out.
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
...
Your analysis is very plausible. I remember from Slackware (many years ago ...) - it took explicit steps to TERM active processes, waited a reasonable time for them, and then killed them.
I tried to follow the selinux angle as well. Support for NFS home dirs caused problems in the past. You have a mix of NFSv3 and NFSv4, and the NFSv4 side may be buggy (some selinux- and mount-related features are scheduled to be ironed out in F15).
I would dive in, just for kicks, and try both cases:
- switch selinux to permissive mode; this may not be enough, so ...
- disable selinux entirely
You can do it on the kernel command line or in /etc/sysconfig/selinux - but you have to shut down twice in order to test the halt script.
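Concretely, something like this (a sketch, assuming root and the usual Fedora 14 paths):

```shell
# Permissive for the running system only:
#   setenforce 0
#
# Disabled across reboots (the alternative is selinux=0 on the kernel
# command line):
#   sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
#
# Verify the current mode:
#   getenforce
```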
JB
On 01/30/2011 08:40 PM, JB wrote:
...
Hi,
Thanks for the info. selinux is actually disabled on all of these systems. I'm not sure why /home uses NFSv3 while the others use NFSv4. They are from the same server and there is no specific config for 3 or 4, so on Fedora 14 I would have expected them all to be 4. The server is Fedora 14 as well.
I think it is the /home mount that is likely to be causing the problem (the GUI programs are probably accessing files in the user's home directory), and this uses NFSv3. So I wouldn't have expected the NFSv4 code to be much involved here.
Note this is a Fedora 14 issue. Fedora 12 has been running in this environment for more than a year with the same setup without this issue.
I could add a delay before unmounting the NFS file systems to see if this reduces the problem.
Terry
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
... Thanks for the info. selinux is actually disabled on all of these systems. I'm not sure why /home uses NFS 3 while the others use NFS4. They are from the same server and there is no specific config for 3 or 4, so on Fedora 14 I would have expected them to be 4. The server is Fedora 14 as well.
Poke around a little ... Do you have both nfs3 and nfs4 fs in /proc ?
# find /proc -iname "*nfs*"
# ls -al /proc/fs/
# cat /proc/fs/...
Use some nfs-specific tools to verify these things.
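For example (my suggestion: the negotiated version is visible on the client without any extra tools, and nfsstat from nfs-utils shows the same):

```shell
# The fs type column (nfs vs nfs4) and the vers= option in /proc/mounts show
# what was actually negotiated for each share:
grep -E ' nfs4? ' /proc/mounts | awk '{ print $2, $3, $4 }'
# "nfsstat -m" prints the same per-mount parameters, if nfs-utils is installed.
```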
JB
JB <jb.1234abcd <at> gmail.com> writes:
...
1. Add firewall log rules to ip(6)tables (both client and server) to capture any nfs3- or nfs4-related errors (some traffic may be unsolicited: error or control messages, NEW in iptables terms).
/etc/sysconfig/iptables:
/etc/sysconfig/ip6tables:

...
# log -----------------------------------------------------------------------
-A INPUT -m limit --limit 1/second --limit-burst 5 -j LOG --log-prefix "debug"
-A FORWARD -m limit --limit 1/second --limit-burst 5 -j LOG --log-prefix "debug"
# log -----------------------------------------------------------------------

-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
2. Restart nfs server and client (if possible). Let them offer and obtain nfs shares.
3. Scan /var/log/messages, and compare its time stamps to those in the halt debugging log file.
JB
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
... I'm not sure why /home uses NFS 3 while the others use NFS4. They are from the same server and there is no specific config for 3 or 4, so on Fedora 14 I would have expected them to be 4. The server is Fedora 14 as well. ...
The analysis above is subject to verification of any custom config on your server and client in /etc/sysconfig/nfs.
Now the analysis.
network fs services script: /etc/init.d/netfs

start
    ...
    < based on /etc/fstab >
    service rpcbind start
    action $"Mounting NFS filesystems: " mount -a -t nfs,nfs4
    action $"Mounting CIFS filesystems: " mount -a -t cifs
    action $"Mounting NCP filesystems: " mount -a -t ncpfs
    ...
    touch /var/lock/subsys/netfs
    action $"Mounting other filesystems: " mount -a -t nonfs,nfs4,cifs,ncpfs,gfs
Comments:
man mount.nfs or mount.nfs4 says:
...
mount.nfs4 is used for mounting NFSv4 file system, while mount.nfs is used to mount NFS file systems versions 3 or 2.
...
First it calls 'mount -a -t nfs,nfs4', then it calls 'mount -a -t nonfs,nfs4,...'
Why the second call (except for gfs)? Should there also be nonfs4, nocifs, noncpfs? This probably caused some confusion, as it might have attempted to overwrite the previous (first call) nfs assignment with nfs4.
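One note, if I read mount(8) correctly: a single leading "no" negates every type in the comma-separated list, so no separate nonfs4/nocifs/noncpfs should be needed. I may be missing something, but that would make the second call mean:

```shell
# "mount everything EXCEPT nfs, nfs4, cifs, ncpfs and gfs" -- i.e. it does
# not try to re-mount the nfs4 shares at all:
#
#   mount -a -t nonfs,nfs4,cifs,ncpfs,gfs
```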
We have a case where the client's /etc/fstab specified the "nfs" fs type for 3 shares obtainable from the same server (even the same Fedora 14 on both server and client), but obtained nfs3 for one share (/home) and nfs4 for the 2 others. Why?
You already have the nfs network set up for this. You should be able to debug it inside this script by placing debugging statements recording /etc/mtab AND /proc/mounts after each 'mount'.
If you confirm it, I will be able to request a fix in Bugzilla.
It would also indicate some bug in 'mount' itself, as I am not sure 'mount' should be allowed to overwrite one fs type with another, here nfs with nfs4. You can test it manually as well.
JB
Terry Barnaby <terry1 <at> beam.ltd.uk> writes:
... Note I am using the "network" not "NetworkManager" service. The NetworkManager service does not work well for me with systems using networked /home and other file systems. ...
Looks like the reason for this is:
# ls /etc/rc0.d/*
...
S00killall  S01halt

# cat /etc/rc0.d/S00killall
...
        # Networking could be needed for NFS root.
        [ $subsys = network ] && continue
...
The NetworkManager lock is set in /var/lock/subsys/NetworkManager, but it is not skipped as well, so the network is brought down.
Next, the halt script is executed, which will try to unmount external fs shares (e.g. nfs) and of course will fail.
The fix:
        # Networking could be needed for NFS root.
        [ $subsys = NetworkManager ] && continue
Will that make you use NetworkManager now ? :-)
JB
On 02/01/2011 06:30 PM, JB wrote:
...
No, it won't make me use NetworkManager!
I am not using NFS root, only /home and /data. When I used NetworkManager it had problems on my laptop and desktop PCs with the NFS mounts when going to and coming out of sleep. I can't remember the exact details, but it was thought to be unsolvable at the time. There is no need for NetworkManager on a home network anyway; it just adds complications, gets in the way and consumes resources.
I do use it on my laptop when roaming, however, to allow easy access to WiFi networks.
Cheers
Terry
On Thu, 2011-02-03 at 20:47 +0000, Terry Barnaby wrote:
There is no need for NetworkManager in a home network anyway, it just adds complications, gets in the way and consumes resources.
Maybe on *your* home network. But it's good for mine, and when I take the laptop around to visit other places. I just connect to it, be it wireless or ethernet. I don't have to fiddle around with reconfiguring the network for each one.
On 02/04/2011 12:21 PM, Tim wrote:
...
True, for laptops NetworkManager is good. And as I said, on my laptop I do have a separate init script that automatically chooses between NetworkManager and network depending on where I am (it does an iwlist | grep on the WiFi).
However, in my case, at home most of my systems are desktops, set-top boxes, kitchen radios etc. that don't have WiFi, just hard-wired Ethernet. They all, including the laptop, have /home and /data mounted over NFS and use NIS, NTP etc., and they all run from a local Linux server. So all the users and other data (videos, music, documents etc.) are on the server; the PCs just have the OS, and anyone can log in from anywhere.
I did try to use NetworkManager for this, but it had problems with NFS, NIS and sleeping, and in the end I could not use it (there are a few bug reports in Bugzilla which I suspect are still open).