Interesting problem with MD array creation
by Patrick Ale
Hi,
I have an interesting challenge, maybe one of you can help me :-)
I am creating an MD RAID 1 array consisting of two SATA disks
(currently /dev/sdb1 and /dev/sdc1).
Both disks use a Sun disk label, and the partitions used start
on cylinder 1.
Device Flag Start End Blocks Id System
/dev/sdb1 1 60788 488271577+ fd Linux raid autodetect
Device Flag Start End Blocks Id System
/dev/sdc1 1 60788 488271577+ fd Linux raid autodetect
When I run the mdadm command to construct the array (mdadm --create
/dev/md0 --level=1 --raid-devices=2 --run /dev/sdb1 /dev/sdc1)
everything goes fine and the array syncs up.
However, when you run blkid -c /dev/null you'll see that /dev/sdb1
and /dev/sdc1 share the same UUID.
/dev/sdb1: UUID="978635d6-52ab-c92b-76ea-5a12e49bd4b5" TYPE="linux_raid_member"
/dev/sdc1: UUID="978635d6-52ab-c92b-76ea-5a12e49bd4b5" TYPE="linux_raid_member"
This is a big problem since after a reboot /dev/sdb might become
/dev/sda and /dev/sdc might become /dev/sdb (this actually happens,
both on x86 and sparc).
In other words, without unique UUIDs there is no way you can
construct a proper /etc/mdadm.conf file and start the array
automatically on every boot.
Any ideas on a workaround?
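For what it's worth: blkid reports the *array* UUID on every member of
an MD array, so identical member UUIDs may be exactly what mdadm.conf
expects. A minimal sketch - the ARRAY line shown is illustrative, not
taken from this machine:

  # Append the ARRAY line that mdadm itself reports; it keys the array
  # on its UUID, independent of which /dev/sdX names the disks get at boot.
  mdadm --detail --scan >> /etc/mdadm.conf
  # yields something like:
  # ARRAY /dev/md0 level=raid1 num-devices=2 UUID=978635d6:52abc92b:76ea5a12:e49bd4b5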
Thanks,
Patrick
13 years, 8 months
F12 release plans?
by Jon Masters
Hi Folks,
This is mostly directed at dgilmore, but let me ask here so there's an
archived record of my question. I am interested to know what the release
timeline/plan is for F12 on SPARC. Not because I'm impatient, but
because I want to just keep the box automatically tracking "development"
and to know when to switch the "stable" image over. Will there be an
updated version of fedora-release that also changes the repos?
Jon.
13 years, 8 months
SATA support on an Ultra 5 system
by Jon Masters
Folks,
Over the weekend, I re-installed my Ultra 5 with the March 1st tree,
after hooking up a Compact Flash to IDE adapter for the on-board IDE.
The system recognizes a 4GB CF card as an IDE disk and uses that to boot
(since Open Boot PROM requires that the device controller provide Fcode
suited to booting, which in this case is only true of the on-board one),
then uses a new SATA PCI adapter to run from a 320GB SATA disk.
I configured the system thus:
4 GB CF IDE "disk:
1). /boot (1GB)
2). /boot (1GB)
3). Whole disk
4). Unused
320 GB SATA hard disk:
1). /boot1 (1GB) - not currently used, kept for various purposes[0].
2). LVM (100GB)
3). Whole disk
4). LVM (100GB)
5). LVM (rest)
The Open Boot PROM is configured to boot "rawhide" by default, via a
SILO with "partition-boot" set, running off the first partition. There
is also a regular "stable" Fedora install using another SILO on the
second partition. Thus I have two completely independent setups I can
trivially switch between. For now, they both pull from "development",
but I will switch the stable one over when possible.
I didn't install twice (Anaconda does not like the physical disk layout
once you have already done one install, even when partitioning manually;
at some point I can help figure out why - I know the Anaconda swap and
LVM handling needs attention). Instead, I dd'd the contents of the LV
from the first install to form the second, after imaging /boot. I then
changed the filesystem UUIDs with tune2fs, changed the hostname and IP,
recreated the ssh keys, etc. So they're two separate "hosts" on the same
system, and it's clear which one is running at any given time.
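A minimal sketch of that clone-and-relabel step (the LV names here are
illustrative, not my actual layout):

  # Copy the installed root LV to a second LV, then give the copy its
  # own identity so the two installs never collide on filesystem UUIDs.
  dd if=/dev/VolGroup/lv_rawhide of=/dev/VolGroup/lv_stable bs=4M
  tune2fs -U random /dev/VolGroup/lv_stable   # assign a fresh random UUID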
I recommend this approach. The SATA disk is drastically faster than the
very dated original IDE on the Ultra 5. The flash disk should be very
reliable (in theory, anyway), and I can even use eSATA if I want to, or
the faster IDE or SATA ports on the SATA PCI upgrade card.
Jon.
[0] As pale has noted and as should be well known, LVM takes over the
whole of the disk/partition on which a PV is created, so on Sun-labeled
disks partitions for LVM need to begin at cylinder 1 (otherwise the PV
clobbers the disk label in cylinder 0). I decided to have another 1GB
here for dumping purposes; my standard layout is similar.
13 years, 9 months
snd-sun-cs4231 and pulseaudio
by Jon Masters
Folks,
PulseAudio crashes in a repeating loop when I manually load the sound
driver on this system - why the driver is not loaded automatically is
something to fix later. Anyway, removing pulseaudio also removes
gnome-bluetooth and bluez due to rpm dependencies that need fixing. I
have filed a bug against gnome-bluetooth asking that it not explicitly
require pulseaudio.
The lack of bluetooth isn't really a huge issue :)
Jon.
13 years, 9 months
[BIG WARNING!!] LVM and Sun label
by Patrick Ale
Hi!
Saturday morning, rain, girlfriend doing her hair, what else to do :P
I tried to create an LVM volume group and run into something strange.
- With fdisk I created a new Sun label, which leaves you with three
default partitions. After changing slice 1 to type LVM you end up with this:
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb1 0 121595 976711837+ 8e Linux LVM
/dev/sdb2 u 121595 121601 48195 82 Linux swap
/dev/sdb3 0 121601 976760032+ 5 Whole disk
Create a PV with the command: pvcreate /dev/sdb1
[root@medusa /]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
Now create a volume group and you get into the strangeness:
[root@medusa /]# vgcreate test /dev/sdb1
Found duplicate PV 6DenBomNFcsol8rBNZh4eoWdSnXdS2y8: using /dev/sdb1
not /dev/sdb
Found duplicate PV 6DenBomNFcsol8rBNZh4eoWdSnXdS2y8: using /dev/sdb3
not /dev/sdb1
Found duplicate PV 6DenBomNFcsol8rBNZh4eoWdSnXdS2y8: using /dev/sdb1
not /dev/sdb3
Volume group "test" successfully created
Now when you type pvdisplay:
[root@medusa /]# pvdisplay
Found duplicate PV 6DenBomNFcsol8rBNZh4eoWdSnXdS2y8: using /dev/sdb1
not /dev/sdb
Found duplicate PV 6DenBomNFcsol8rBNZh4eoWdSnXdS2y8: using /dev/sdb3
not /dev/sdb1
--- Physical volume ---
PV Name /dev/sdb3
VG Name test
PV Size 931.46 GB / not usable 4.15 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 238454
Free PE 238454
Allocated PE 0
PV UUID 6DenBo-mNFc-sol8-rBNZ-h4eo-WdSn-XdS2y8
pvdisplay now reports the PV as /dev/sdb3, the whole-disk slice. Because
the LVM slice starts at cylinder 0 and overlaps both the Sun label and
the whole-disk slice, the partition/slice table gets corrupted and fdisk
has to build a new one...
[root@medusa /]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or
OSF disklabel
Building a new sun disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Command (m for help): p
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb1 0 121595 976711837+ 83 Linux native
/dev/sdb2 u 121595 121601 48195 82 Linux swap
/dev/sdb3 0 121601 976760032+ 5 Whole disk
So I guess: DO NOT put LVM on a Sun-labeled sd?1 that starts at cylinder
0, or you may end up with label corruption.
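A hedged workaround sketch (cylinder numbers illustrative): recreate
slice 1 starting at cylinder 1, so the PV header and metadata can never
touch the Sun label in cylinder 0:

  # In fdisk: delete slice 1, then recreate it starting at cylinder 1.
  fdisk /dev/sdb        # d 1, then n 1 with first cylinder 1 (not 0)
  pvcreate /dev/sdb1    # PV now begins past the disk label
  vgcreate test /dev/sdb1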
Patrick
13 years, 9 months
SPARC Status update
by Dennis Gilmore
Hi all,
I've pushed a Fedora 12 install tree to
http://secondary.fedoraproject.org/pub/fedora-secondary/releases/test/12-Alpha/Fedora/sparc/
which should also be on all the mirrors.
https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-rawhide&arch=sparc
will give you the list of mirrors.
Please test and report issues here.
To partition your system you need to break out to a shell, either in
rescue mode or via a tty in anaconda as soon as the install starts, and
create the partitions by hand. You should then choose manual
partitioning in anaconda and assign mount points and filesystems to the
partitions you have made. Formatting works just fine; creating the
partitions doesn't.
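A minimal sketch of that flow (the device name is illustrative):

  # On tty2 in anaconda (or from rescue mode), before the partitioning step:
  fdisk /dev/sda        # create the Sun-labeled slices by hand
  # Back in anaconda, pick manual/custom partitioning and assign mount
  # points and filesystems only to the slices that already exist.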
Some big fat warnings:
- Partitioning in anaconda does not work.
- Upgrades from previous releases are completely untested.
- prelink is broken and needs to be disabled post-install (see the
  sketch after this list).
- SELinux does not work in enforcing mode.
- You will need to use VNC for the install (and you need to select
  manual partitioning).
- Network booting the installer does not work and likely will not be
  supported at all (the images are far bigger than OBP allows; we need a
  new solution).
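A sketch of disabling prelink post-install, assuming the standard Fedora
sysconfig knob:

  # Turn prelinking off and undo what has already been prelinked.
  sed -i 's/^PRELINKING=.*/PRELINKING=no/' /etc/sysconfig/prelink
  prelink -ua    # undo all existing prelinking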
The UUID issue reported on this list seems to be solely due to artifacts
left over from a previous install that used LVM.
I've probably missed things.
Dennis
13 years, 9 months
Bug with UUIDs?
by Patrick Ale
Good morning!
Before filing a bug, I want to see what you have to say :)
I did an FC12 installation on my Ultra 5 and noticed something weird.
[root@medusa /]# blkid
/dev/sda1: UUID="5227bdcc-de97-4a1a-8d01-49e06e882771" TYPE="ext3"
/dev/sda3: UUID="5227bdcc-de97-4a1a-8d01-49e06e882771" TYPE="ext3"
/dev/sdb1: UUID="90a9428f-72c6-ae35-e4e6-37e93e5dd7b5" TYPE="linux_raid_member"
/dev/sdc1: UUID="7fd60fe3-8252-40e8-8ec4-8790287eb8e8" TYPE="ext4"
/dev/sdc2: UUID="e07227d5-8519-44b7-a046-75406f2598ce" TYPE="swap"
/dev/sdc3: UUID="7fd60fe3-8252-40e8-8ec4-8790287eb8e8" TYPE="ext4"
/dev/sdc4: UUID="01d42e64-932b-4abb-98f0-5c78807e8b06" TYPE="ext4"
/dev/sdc5: UUID="8e47e08c-4918-452d-a266-d26d8185c106" TYPE="ext4"
Note how the first and third slices on both my ATA disk (sda) and my
SATA disk (sdc) carry duplicate UUIDs.
This becomes very problematic during booting.
After installation, silo.conf by default uses
root=UUID="7fd60fe3-8252-40e8-8ec4-8790287eb8e8".
The kernel panics with a message along the lines of "you have to specify
a filesystem" and then sleeps forever.
My assumption is that, due to the duplicated UUIDs, it tries to mount
the whole-disk device, since /dev/sdc3 is the Sun/Solaris equivalent of
"Whole disk" and fdisk strongly recommends keeping /dev/sd?3 as the
"Whole disk" partition. (Solaris itself uses slice 2 as the whole disk.)
/etc/fstab has problems too. It tries to mount both / and /export with
the same UUID:
UUID=7fd60fe3-8252-40e8-8ec4-8790287eb8e8 /       ext4 defaults 1 1
UUID=7fd60fe3-8252-40e8-8ec4-8790287eb8e8 /export ext4 defaults 1 2
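A stopgap sketch until the duplicates are sorted out (this assumes
/export really lives on /dev/sdc4; device names can move between boots,
so this is a workaround, not a fix):

  # /etc/fstab - reference the real slices instead of the ambiguous UUIDs
  /dev/sdc1  /        ext4  defaults  1 1
  /dev/sdc4  /export  ext4  defaults  1 2
  # and in silo.conf: root=/dev/sdc1 instead of root=UUID=...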
Another problem is that /etc/fstab references filesystems by UUIDs that
don't exist anywhere on the system:
[root@medusa /]# grep /slowdrive /etc/fstab
#UUID=8bd7d37d-21fb-4da5-9865-c20cc07ca6be /slowdrive ext3 defaults 1 2
[root@medusa /]# blkid | grep 8bd7d37d-21fb-4da5-9865-c20cc07ca6be
[root@medusa /]#
Any ideas what is going on?
Patrick
13 years, 9 months
Fun with old machines and big disks
by Patrick Ale
Hi,
I put a Sil 3512 PCI SATA card in my Ultra 5 and added a 1TB SATA disk.
To answer the obvious question "why would you do that": "because I can" ;-)
When partitioning the hard drive with a DOS label, I can partition the
full 1TB.
DOS DISK LABEL output:
---------------------------------------
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x16b15aba
Device Boot Start End Blocks Id System
/dev/sdb1 1 121601 976760001 83 Linux
mkfs.ext3 with DOS LABEL output
----------------------------------------------------------
mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61054976 inodes, 244190000 blocks
12209500 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
df -h output with DOS LABEL
----------------------------------------------
[root@medusa patrick]# mount /dev/sdb1 /mnt
[root@medusa patrick]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 40G 2.4G 35G 7% /
/dev/sda1 1012M 54M 907M 6% /boot
/dev/sdc5 20G 172M 19G 1% /home
/dev/sdc4 9.9G 242M 9.2G 3% /var
tmpfs 248M 0 248M 0% /dev/shm
/dev/sdb1 917G 200M 871G 1% /mnt
Great so far, now for the issue in two steps.
fdisk /dev/sdb again, 's' for creating a new Sun Label.
fdisk output with SUN LABEL
----------------------------------------------
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb1 0 121595 976711837+ 83 Linux native
/dev/sdb2 u 121595 121601 48195 82 Linux swap
/dev/sdb3 0 121601 976760032+ 5 Whole disk
mkfs.ext3 output with SUN LABEL
------------------------------------------------------------
mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61046784 inodes, 244177958 blocks
12208897 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
7452 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
df -h output with SUN LABEL
-----------------------------------------------
[root@medusa patrick]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 40G 2.4G 35G 7% /
/dev/sda1 1012M 54M 907M 6% /boot
/dev/sdc5 20G 172M 19G 1% /home
/dev/sdc4 9.9G 242M 9.2G 3% /var
tmpfs 248M 0 248M 0% /dev/shm
/dev/sdb1 917G 200M 871G 1% /mnt
That was step 1, proving that both the DOS label and the Sun label are
capable of addressing the full 1TB. But now...
[root@medusa patrick]# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb1 0 121595 976711837+ 83 Linux native
/dev/sdb2 u 121595 121601 48195 82 Linux swap
/dev/sdb3 0 121601 976760032+ 5 Whole disk
Command (m for help): d
Partition number (1-8): 2
Command (m for help): d
Partition number (1-8): 1
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb3 0 121601 976760032+ 5 Whole disk
Command (m for help): n
Partition number (1-8): 1
First cylinder (0-56065): 0
Last cylinder or +size or +sizeM or +sizeK (0-56065, default 56065): 56065
Command (m for help): p
Disk /dev/sdb (Sun disk label): 255 heads, 63 sectors, 56065 cylinders
Units = cylinders of 16065 * 512 bytes
Device Flag Start End Blocks Id System
/dev/sdb1 0 56065 450342112+ 83 Linux native
/dev/sdb3 0 121601 976760032+ 5 Whole disk
Command (m for help): n
Partition number (1-8): 2
Other partitions already cover the whole disk.
Delete some/shrink them before retry.
And here is where it goes wrong. Suddenly fdisk can only partition about
half of the disk (450342112+ blocks, roughly 460GB). It doesn't make
sense to me, as the two methods above both address the full 1TB.
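A likely explanation, worked from the numbers above (hedged - this
assumes the Sun label stores its geometry in 16-bit fields): a cylinder
count above 65535 wraps around, and the original label was written for
121601 cylinders:

  $ echo $(( 121601 - 65536 ))
  56065                 # exactly the cylinder count fdisk now reports
  $ echo $(( 56065 * 16065 * 512 ))
  461150323200          # ~461GB, matching the 450342112+ 1K-block slice

So whatever wrote the original whole-disk slice recorded the full 1TB,
but once fdisk recomputes sizes from the wrapped cylinder count it can
only offer 56065 cylinders.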
mkfs + mount to confirm the above:
mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
28147712 inodes, 112585528 blocks
5629276 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
3436 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@medusa patrick]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 40G 2.4G 35G 7% /
/dev/sda1 1012M 54M 907M 6% /boot
/dev/sdc5 20G 172M 19G 1% /home
/dev/sdc4 9.9G 242M 9.2G 3% /var
tmpfs 248M 0 248M 0% /dev/shm
/dev/sdb1 423G 199M 402G 1% /mnt
Patrick
13 years, 9 months