My raid1 gets corrupted _every time_ I shut down an f19-kde-livecd-image. I used kernel.f19 and mdadm.f19 in an f17-livecd and everything works fine, so these two are not the problem.
What should I look at? Maybe dracut?
PS: Testing and experimenting isn't practical here because it takes almost 3 hours for the RAID to rebuild...
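For what it's worth, a write-intent bitmap should cut that resync from hours to seconds, since only the blocks that were in flight at the unclean stop get resynced. A sketch, assuming the array is /dev/md0 (adjust to your device):

  # Check whether the array already carries a bitmap
  mdadm --detail /dev/md0 | grep -i bitmap

  # Add an internal write-intent bitmap (works on a running array)
  mdadm --grow --bitmap=internal /dev/md0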
On Thu, Aug 8, 2013 at 6:15 PM, Joshua C. joshuacov@gmail.com wrote:
[...]
With Fedora-Live-Desktop-x86_64-19-1 installed to a vfat-formatted Live USB device, I find this report in /var/log/messages on each reboot:
Aug  8 17:24:09 localhost kernel: [    8.255350] FAT-fs (sdc1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Aug  8 17:24:09 localhost kernel: [   11.052845] bio: create slab <bio-1> at 1
Aug  8 17:24:09 localhost kernel: [   11.179108] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
Once unmounted, fsck reports that the dirty bit is set:
[root@localhost ~]# fsck.vfat -rv /dev/sdc1
fsck.fat 3.0.22 (2013-07-19)
fsck.fat 3.0.22 (2013-07-19)
Checking we can access the last sector of the filesystem
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
1) Remove dirty bit
2) No action
? 1
Boot sector contents:
System ID "SYSLINUX"
Media byte 0xf8 (hard disk)
512 bytes per logical sector
4096 bytes per cluster
32 reserved sectors
First FAT starts at byte 16384 (sector 32)
2 FATs, 32 bit entries
7798784 bytes per FAT (= 15232 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 15613952 (sector 30496)
1948715 data clusters (7981936640 bytes)
62 sectors/track, 247 heads
0 hidden sectors
15620218 sectors total
Checking for unused clusters.
Checking free cluster summary.
Perform changes ? (y/n) y
/dev/sdc1: 18 files, 644955/1948715 clusters
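For reference, the flag fsck is complaining about is the byte at offset 0x41 of the FAT32 boot sector. A read-only way to peek at it before changing anything (a sketch; substitute your own device for /dev/sdc1):

  # Dump the single flag byte at offset 0x41 (decimal 65);
  # bit 0 set means the volume was not cleanly unmounted
  dd if=/dev/sdc1 bs=1 skip=65 count=1 2>/dev/null | od -An -tx1

  # Or let fsck check without modifying the volume
  fsck.vfat -n /dev/sdc1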
I wonder if this may be due to a Bash shell not getting properly terminated at shutdown, as reported here: http://lists.freedesktop.org/archives/systemd-devel/2013-July/012307.html
--Fred
2013/8/9 Frederick Grose fgrose@gmail.com:
[...]
I suspected that systemd could be involved. Do you know if there is a patch for this?
Since I'm using a livecd image without a persistent overlay, there is no way to recover any logs from the shutdown process. This is very frustrating...
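About the only way I can see to get shutdown logs off a non-persistent image is to ship them over the network as they happen. A sketch, with placeholder addresses, interface, and ports (and note netconsole only forwards kernel messages, hence routing systemd's output to kmsg):

  # Append to the live image's kernel command line:
  #   systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M
  #   netconsole=6665@192.168.1.20/eth0,6666@192.168.1.10/
  # Then capture on the second machine (nc flags vary by netcat flavor):
  nc -u -l 6666 | tee shutdown.log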
2013/8/9 Joshua C. joshuacov@gmail.com:
[...]
I'll backport commits 82659fd7571bda0f3dce9755b89a23c411d53dda ("core: optionally send SIGHUP in addition to the configured kill signal") and a6c0353b9268d5b780fb7ff05a10cb5031446e5d ("core: open up SendSIGHUP property for transient units") to systemd-204 and turn this on in my test build. I hope this fixes the problem.
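For anyone who wants to try the same thing once the backport is in place, the knob those commits expose is SendSIGHUP=. A sketch of a drop-in, with a placeholder unit name since I don't yet know which unit is the right target:

  # /etc/systemd/system/example.service.d/sighup.conf  (unit name is hypothetical)
  [Service]
  SendSIGHUP=yes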
As I already said, it's annoying to rebuild the RAID after every reboot!
Has this behavior been reported on a real installation (not a livecd)?
2013/8/9 Joshua C. joshuacov@gmail.com:
[...]
I tested this with systemd git f535088ef72a92533f2c4270d06289c89737fa2a ("systemctl: add missing newline to --help output") as of 2013-08-09, without luck. On every shutdown my raid1 is marked dirty!
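For the record, this is how I'm checking after each boot (a sketch; md0 and the member partitions are placeholders):

  # Array-wide state: "clean" vs. "active, resyncing"
  mdadm --detail /dev/md0

  # Member superblocks: mismatched event counts are what force the resync
  mdadm --examine /dev/sda1 /dev/sdb1 | grep -E 'Events|State'

  # Resync progress
  cat /proc/mdstat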
On Fri, Aug 9, 2013 at 6:52 PM, Joshua C. joshuacov@gmail.com wrote:
[...]
It seems that dosfstools became more thorough in checking FAT volumes in early 2013.
See these commits: http://daniel-baumann.ch/gitweb/?p=software/dosfstools.git;a=blob;f=ChangeLo...
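A quick way to see whether the stricter checks are all that changed is to compare the dosfstools version on the two images and run a no-op check (a sketch):

  rpm -q dosfstools
  # -n answers "no" everywhere, so the volume is not modified
  fsck.vfat -n /dev/sdc1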
--Fred