When I boot into the installer, there is an error in the destination section.
I looked at the debug info in storage.log and there was an error saying sdb1 did not exist. But...
When I reboot to F24, then ...
cat /proc/mdstat
md126 : active raid1 sda2[2] sdb2[1]
      961261568 blocks super 1.2 [2/2] [UU]
      bitmap: 2/8 pages [8KB], 65536KB chunk

md127 : active raid1 sdb1[1] sda1[2]
      15368064 blocks super 1.0 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk
The section of ks.cfg for hard drive setup is as follows:
ignoredisk --only-use=sda,sdb
bootloader --location=mbr --boot-drive=sda

# Partition clearing information
clearpart --none --initlabel

# Disk partitioning information
part raid.6 --fstype=mdmember --noformat --onpart=sda1
part raid.27 --fstype=mdmember --noformat --onpart=sdb1
part raid.14 --fstype=mdmember --noformat --onpart=sda2
part raid.32 --fstype=mdmember --noformat --onpart=sdb2

raid / --device=root --fstype=ext4 --level=raid1 --useexisting
raid /home --device=home --fstype=ext4 --level=raid1 --noformat --useexisting
I currently have a RAID1 setup with two drives, sda and sdb.
Since I am using the option --useexisting, do I still need to use the part commands?
The last time I did an upgrade was to F24. I have not found anything that says the syntax has changed.
Any Ideas?
David
On 08/09/2017 11:52 AM, D&R wrote:
Uhm, when you're booting the install, is it possible that the CD/DVD you're booting from becomes /dev/sda? If so, then your first hard drive is /dev/sdb and the second is /dev/sdc and the
ignoredisk --only-use=sda,sdb
would block using the second hard drive, since it's /dev/sdc at this time. This is just a wild guess.
On Wed, 9 Aug 2017 12:00:00 -0700 Rick Stevens ricks@alldigital.com wrote:
I am booting from an iso file from another computer. As I recall that is what I did when I installed F24 over F22.
In the setup above it shows raid.<number> (i.e. raid.6). Do you know what the number represents? Can it be changed from one install to the next?
David
On 08/09/2017 12:08 PM, D&R wrote:
How are you booting an ISO file from another computer? Is this a network kickstart install, where the iso image is located on an NFS or CIFS server?
Whatever it is, can you boot it again without invoking kickstart? If you can, open up a command line window and do "fdisk -l", which should list the disks the system sees. Verify the devices are the ones you think they are. Remember that when you're booting F24 from the hard disk, you are absolutely making /dev/sda the first hard drive. When booting from the network, a CD/DVD or a bootp server, that may NOT be the case and your drive letters may be different, in which case the limits in your "ignoredisk" line would prevent finding the second drive.
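One way to sidestep the letter shuffling entirely (just a sketch, I haven't tested it on your setup) is to refer to the disks by their stable /dev/disk/by-id names instead of sda/sdb; as I understand it, kickstart accepts full device paths in ignoredisk and the other device options. The ids below are placeholders, check yours with "ls -l /dev/disk/by-id":

ignoredisk --only-use=disk/by-id/ata-EXAMPLE_DISK_1,disk/by-id/ata-EXAMPLE_DISK_2
bootloader --location=mbr --boot-drive=disk/by-id/ata-EXAMPLE_DISK_1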
In the setup above it shows raid.<number> (i.e. raid.6). Do you know what the number represents? Can it be changed from one install to the next?
The "raid" bit of the label simply means they're to be used in a software RAID. I have no idea why they're numbered in that manner rather than sequentially.
Right below those "part" definitions, you see "raid" definitions where those labels are normally used. In your case,
raid / --device=root --fstype=ext4 --level=raid1 --useexisting
tells the system to use the first two devices in the "part" section (/dev/sda1 and /dev/sdb1) as a RAID1, format it as ext4 and mount it at "/". Since no partitions are specified, it uses the first two in the "part" section. In reality, that line with all the bits specified would be:
raid / --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.27
If the partitions to use weren't sequential (e.g. you wanted to use the first and third partitions), you'd need to specify them explicitly at the end of the line:
raid / --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.14
You should be able to rename the labels in your ks.cfg if you wish, but again if your RAID definition doesn't use sequential partitions, make sure you specify them appropriately. The labels have no significance outside of Anaconda/kickstart as far as I know.
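Putting that together, a fully explicit version of the snippet from the top of the thread would look like this (a sketch reusing your own labels and the mapping described above; I haven't run it):

part raid.6 --fstype=mdmember --noformat --onpart=sda1
part raid.27 --fstype=mdmember --noformat --onpart=sdb1
part raid.14 --fstype=mdmember --noformat --onpart=sda2
part raid.32 --fstype=mdmember --noformat --onpart=sdb2

raid / --device=root --fstype=ext4 --level=raid1 --useexisting raid.6 raid.27
raid /home --device=home --fstype=ext4 --level=raid1 --noformat --useexisting raid.14 raid.32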
On 08/09/2017 02:27 PM, Rick Stevens wrote:
Right below those "part" definitions, you see "raid" definitions where those labels are normally used. In your case,
raid / --device=root --fstype=ext4 --level=raid1 --useexisting
tells the system to use the first two devices in the "part" section (/dev/sda1 and /dev/sdb1) as a RAID1, format it as ext4 and mount it at "/". Since no partitions are specified, it uses the first two in the "part" section.
Is that documented somewhere? I've never seen that behavior described in the kickstart documentation, and I was curious enough to test it. If I provide a "raid" specification with no partitions, installation of CentOS fails with an error that reads "Partitions required for raid".
I didn't test Fedora, but the documentation for the "raid" command in both appears to be the same.
On 08/09/2017 04:02 PM, Gordon Messmer wrote:
You have to have at least two "part raid.somenumber" lines to create a RAID1, and a "raid" line to define the type of RAID, filesystem type and mountpoint.
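For reference, a minimal from-scratch version would look something like this (a sketch for two blank disks; the sizes and label names are arbitrary):

part raid.01 --fstype=mdmember --size=15000 --ondisk=sda
part raid.02 --fstype=mdmember --size=15000 --ondisk=sdb

raid / --device=root --fstype=ext4 --level=raid1 raid.01 raid.02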
Have a look at:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
(that URL is all one line, your mail client may wrap it).
Scroll down to the "part" section and also the "raid" section. For a more advanced example:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
(again, all one line)
On 08/09/2017 06:14 PM, Rick Stevens wrote:
You have to have at least two "part raid.somenumber" lines to create a RAID1, and a "raid" line to define the type of RAID, filesystem type and mountpoint.
I did. I used a kickstart that was as close to D&R's snippet as possible.
Have a look at:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/htm...
Yeah, that's the platform I tested. It definitely does not work as you described. At least not in my tests. As far as I can tell, you *must* specify the partitions unless you are reusing an existing RAID device.
On Wed, 9 Aug 2017 14:27:07 -0700 Rick Stevens ricks@alldigital.com wrote:
How are you booting an ISO file from another computer? Is this a network kickstart install, where the iso image is located on an NFS or CIFS server?
Yes, it is NFS mounted. I have read and reread the doc; in one place it says to point to an install tree, and in another it says an iso or an install tree. I tried both and neither worked.
In fact, after I tried a number of changes as I understood the doc, I got worse results.
I then changed to using a flash drive attached to the computer I am upgrading and got to the installer before it crashed. Doing alt-f3 I printed out some info. It is as follows:
=============================================================
brw-rw---- 1 root disk 8,   0 Aug 10 19:04 /dev/sda
brw-rw---- 1 root disk 8,   1 Aug 10 19:04 /dev/sda1
brw-rw---- 1 root disk 8,   2 Aug 10 19:04 /dev/sda2
brw-rw---- 1 root disk 8,  16 Aug 10 19:04 /dev/sdb
brw-rw---- 1 root disk 8,  17 Aug 10 19:04 /dev/sdb1
brw-rw---- 1 root disk 8,  18 Aug 10 19:04 /dev/sdb2
brw-rw---- 1 root disk 8,  32 Aug 10 19:04 /dev/sdc
brw-rw---- 1 root disk 8,  33 Aug 10 19:04 /dev/sdc1
brw-rw---- 1 root disk 9, 126 Aug 10 19:04 /dev/md126
brw-rw---- 1 root disk 9, 127 Aug 10 19:04 /dev/md127

/dev/md:
total 0
lrwxrwxrwx 1 root root 8 Aug 10 19:04 home -> ../md127
lrwxrwxrwx 1 root root 8 Aug 10 19:04 root -> ../md126

lrwxrwxrwx 1 root root 8 Aug 10 19:04 /dev/md/root -> ../md126
lrwxrwxrwx 1 root root 8 Aug 10 19:04 /dev/md/home -> ../md127

NAME         MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0          7:0    0   2.2G  1 loop  /run/install/repo
loop1          7:1    0 392.4M  1 loop
loop2          7:2    0     2G  1 loop
|-live-rw    253:0    0     2G  0 dm    /
`-live-base  253:1    0     2G  1 dm
loop3          7:3    0   512M  0 loop
`-live-rw    253:0    0     2G  0 dm    /
sda            8:0    1 931.5G  0 disk
|-sda1         8:1    1  14.7G  0 part
| `-md126      9:126  0  14.7G  0 raid1
`-sda2         8:2    1 916.9G  0 part
  `-md127      9:127  0 916.7G  0 raid1
sdb            8:16   1 931.5G  0 disk
|-sdb1         8:17   1  14.7G  0 part
| `-md126      9:126  0  14.7G  0 raid1
`-sdb2         8:18   1 916.9G  0 part
  `-md127      9:127  0 916.7G  0 raid1
sdc            8:32   1  14.5G  0 disk
`-sdc1         8:33   1  14.5G  0 part  /run/install/isodir

-rwxr-xr-x 1 root root 2401239040 Jul  5 21:47 /run/install/isodir/Fedora-Server-dvd-x86_64-26-1.5.iso
-rwxr-xr-x 1 root root       6527 Aug 10 17:01 /run/install/isodir/ks.cfg
==============================================================================
It appears to have located all the drives and raid instances, as well as the iso file and the ks.cfg file.
Is there any other info that would be useful to get?
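Some things I could capture from that shell next time, if they would help (standard mdadm queries plus anaconda's log under /tmp):

mdadm --examine /dev/sda1 /dev/sdb1    # superblocks of the member partitions
mdadm --detail /dev/md126 /dev/md127   # state of the assembled arrays
grep -i sdb1 /tmp/storage.log          # the lines anaconda logged about sdb1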
David
On Thu, 10 Aug 2017 14:03:46 -0700 Gordon Messmer gordon.messmer@gmail.com wrote:
Good. Try removing the "ignoredisk", "clearpart", and "part" lines from your kickstart file.
Did as you suggested. It initialized the video and, after a couple of lines, stopped displaying anything on the screen. I waited several minutes before doing ctrl-alt-delete.
David
On Wed, 9 Aug 2017 14:27:07 -0700 Rick Stevens ricks@alldigital.com wrote:
Whatever it is, can you boot it again without invoking kickstart? If you can, open up a command line window and do "fdisk -l", which should list the disks the system sees. Verify the devices are the ones you think they are. Remember that when you're booting F24 from the hard disk, you are absolutely making /dev/sda the first hard drive. When booting from the network, a CD/DVD or a bootp server, that may NOT be the case and your drive letters may be different, in which the limits in your "ignoredisk" line would prevent finding the second drive.
Sorry it took so long to reply, I was out of town on vacation. However, I copied the Server isos for F24, F25, and F26 to the home directory on a second computer. The directory listing is:

-rw-r--r--. 1 root root 2401239040 Aug 17 21:33 /home/Fedora-Server-dvd-x86_64-26-1.5.iso
-rw-r--r--. 1 root root 2018508800 Aug 19 14:49 /home/Fedora-Server-dvd-x86_64-25-1.3.iso
-rw-r--r--. 1 root root 1868562432 Aug 19 16:28 /home/Fedora-Server-dvd-x86_64-24-1.2.iso
The grub.cfg is set up as:

menuentry 'Remote Install' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod diskfilter
        insmod mdraid1x
        insmod ext2
        set root='hd0,msdos1'
        echo 'Loading Linux'
#       linux16 /boot/vmlinuz-remote acpi=off audit=0 selinux=0 inst.repo=nfs:10.10.1.2:/home/Fedora-Server-dvd-x86_64-24-1.2.iso ramdisk_size=8192 panic=30
        linux16 /boot/vmlinuz-remote acpi=off audit=0 selinux=0 inst.repo=nfs:10.10.1.2:/home/Fedora-Server-dvd-x86_64-25-1.3.iso ramdisk_size=8192 panic=30
        echo 'Loading initial ramdisk ...'
        initrd16 /boot/initrd-remote.img
}
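(For the record, if I also wanted to pass the kickstart over NFS rather than picking it up from the flash drive, my understanding is it would be an extra inst.ks= argument on the same linux16 line, something like this, where the ks.cfg path is hypothetical:

linux16 /boot/vmlinuz-remote acpi=off audit=0 selinux=0 inst.repo=nfs:10.10.1.2:/home/Fedora-Server-dvd-x86_64-25-1.3.iso inst.ks=nfs:10.10.1.2:/home/ks.cfg ramdisk_size=8192 panic=30
)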
F24 came up in the installer with no error.
F25 came up in the installer with the error 'device already in tree'.
F26 came up in the installer with the error 'device already in tree'.
From an F25 install, fdisk -l:
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0009d086

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sda1  *        2048   30738431   30736384  14.7G fd Linux raid autodetect
/dev/sda2       30738432 1953523711 1922785280 916.9G fd Linux raid autodetect

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0009d086

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sdb1  *        2048   30738431   30736384  14.7G fd Linux raid autodetect
/dev/sdb2       30738432 1953523711 1922785280 916.9G fd Linux raid autodetect

Disk /dev/sdc: 7.2 GiB, 7743995904 bytes, 15124992 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc3072e18

Device     Boot Start      End  Sectors Size Id Type
/dev/sdc1          16 15124479 15124464 7.2G 83 Linux

Disk /dev/loop0: 1.9 GiB, 2018508800 bytes, 3942400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x50e78d4f

Device       Boot Start     End Sectors Size Id Type
/dev/loop0p1 *        0 3942399 3942400 1.9G  0 Empty
/dev/loop0p2      11236   21875   10640 5.2M ef EFI (FAT-12/16/32)

Disk /dev/loop1: 405 MiB, 424710144 bytes, 829512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 512 MiB, 536870912 bytes, 1048576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/live-rw: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/live-base: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/md127: 14.7 GiB, 15736897536 bytes, 30736128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md126: 916.7 GiB, 984331845632 bytes, 1922523136 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/loop4: 1.9 GiB, 2018508800 bytes, 3942400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x50e78d4f

Device       Boot Start     End Sectors Size Id Type
/dev/loop4p1 *        0 3942399 3942400 1.9G  0 Empty
/dev/loop4p2      11236   21875   10640 5.2M ef EFI (FAT-12/16/32)
David
On Sat, 19 Aug 2017 17:18:53 -0500 dwoody5654@gmail.com wrote:
Additional info:
Doing some more research I found the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1225184
I am unsure if it refers to the same problem I am having but I assume they are at least related.
One note: this computer has been running 32-bit F24 and I was planning to move to 64-bit. I have installed 64-bit F26 on about ten computers that had 32-bit F24 on them. Those installs worked with no problems, but they were also plain single-drive computers.
I did an install using NFS for F24 Server 64-bit with no problem.
I have tried the F25 and F26 versions of Server, netinstall, and Workstation, using NFS from another computer and a flash drive. None worked.
At this point I see two options: do a dnf upgrade from F24 to F25 and then another from F25 to F26, or install CentOS 7, which I would prefer not to do. Nothing against CentOS; I used it for four or five years but changed to Fedora because I wanted to deal with incremental changes instead of a pile of changes after running CentOS for 8-10 years.
Does anyone have other ideas or workarounds?
How solid is the dnf upgrade process?
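For reference, the path I would try is the documented system-upgrade plugin, one release at a time (commands as documented for the F24/F25 era; run as root, release number per hop):

dnf upgrade --refresh
dnf install dnf-plugin-system-upgrade
dnf system-upgrade download --releasever=25
dnf system-upgrade reboot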
Thanks for all the input,
David
On Mon, 21 Aug 2017 22:19:05 -0500 D&R dwoody5654@gmail.com wrote:
I did a dnf upgrade from F24 x86_64 to F25. All appeared to go well. On reboot it stopped after starting the command scheduler. Going to a console (F2) I could log in as my normal user and then run startxfce4; from there all looked good. But going back to F1, the boot process had not completed. I looked at grub.cfg and it looked correct. Is there any additional info I need to add?
Any thoughts?
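In case it helps, what I plan to check from the console the next time it stalls (standard systemd tools):

systemctl list-jobs     # jobs the boot is still waiting on
systemctl --failed      # units that failed outright
journalctl -b -p err    # errors logged during the current boot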
David
On Sat, 26 Aug 2017 13:06:43 -0500 D&R dwoody5654@gmail.com wrote:
I did a dnf upgrade to F26, with the same result as above. I looked at the altered config files with rpmconf -a and adjusted as appropriate. Rebooted, and it came up at the login screen as it should. There were some changes to /etc/lightdm/lightdm.conf that needed to be made to match the new config file.
David
On 08/09/2017 11:52 AM, D&R wrote:
I looked at the debug info in the storage.log and there was an error about sdb1 did not exist. But...
Switch to VT2 (where I assume you examined storage.log) and run "ls -l /dev/sd* /dev/md" or "lsblk" to see what block devices *do* exist.
You want to make sure that sda and sdb are the drives you expect, that they have the expected partitions, and that /dev/md/home and /dev/md/root exist.
ignoredisk --only-use=sda,sdb
bootloader --location=mbr --boot-drive=sda

# Partition clearing information
clearpart --none --initlabel
--initlabel is not required. It only makes sense in conjunction with --all.
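In other words, the two combinations that make sense (as I read the kickstart docs) are:

clearpart --none                # keep the existing partitions (the default)
clearpart --all --initlabel     # wipe the disks and write fresh disk labels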
Since I am using the option --useexisting do I still need to use the part commands?
As far as I know: no. However, reusing existing block devices is extremely prone to breaking and very difficult to troubleshoot, in my experience. You may need to experiment. Typically, I'll start with the anaconda-generated kickstart file from a manual installation and test each individual change, line by line, option by option, when I'm troubleshooting anaconda.
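For example (anaconda saves the kickstart corresponding to a manual install as /root/anaconda-ks.cfg, and pykickstart ships a syntax checker):

# after a manual install on the target machine:
cp /root/anaconda-ks.cfg ks.cfg
# then sanity-check each edit before the next test run:
ksvalidator ks.cfg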