On 2020-05-25 06:39, Patrick O'Callaghan wrote:
On Mon, 2020-05-25 at 05:34 +0800, Ed Greshko wrote:
> On 2020-05-25 05:20, Patrick O'Callaghan wrote:
>> On Mon, 2020-05-25 at 03:16 +0800, Ed Greshko wrote:
>>>>> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
>>>>> sda 8:0 0 50G 0 disk
>>>>> └─md0 9:0 0 50G 0 raid1
>>>>> sdb 8:16 0 50G 0 disk
>>>>> └─md0 9:0 0 50G 0 raid1
>>>>> sr0 11:0 1 1024M 0 rom
>>>>> vda 252:0 0 30G 0 disk
>>>>> ├─vda1 252:1 0 1G 0 part /boot
>>>>> └─vda2 252:2 0 29G 0 part
>>>>> ├─fedora_f31k-root 253:0 0 27G 0 lvm /
>>>>> └─fedora_f31k-swap 253:1 0 2.1G 0 lvm [SWAP]
>>>>> and it seems a bit more "sane" than your configuration.
>>>> Yours is using LVM, which I wanted to avoid. That may be the root of
>>>> the issue (though I've no idea why).
>>> ?????
>>>
>>> The RAID Array isn't using LVM.
>>>
>>> This is just an added pair of disks, with RAID.
>> Oops, I was looking at the vda[12] rather than sd[ab]
> OK. All things being equal, if I were in your shoes I'd go back and redo
> the RAID creation.
>
> When the "mdadm --create" is performed and mirroring of the drives begins,
> the array can still be used. You can proceed with mkfs on it simultaneously.
OK, I did this:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[de]
mdadm: /dev/sdd appears to be part of a raid array:
level=raid1 devices=2 ctime=Wed May 20 16:34:58 2020
mdadm: partition table exists on /dev/sdd but will be lost or
meaningless after creating array
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sde appears to be part of a raid array:
level=raid1 devices=2 ctime=Wed May 20 16:34:58 2020
mdadm: partition table exists on /dev/sde but will be lost or
meaningless after creating array
Continue creating array? y
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
And now I find:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
[...]
sdd 8:48 0 931.5G 0 disk
└─md0 9:0 0 931.4G 0 raid1
  └─md0p1 259:0 0 931.4G 0 part /run/media/poc/6cb66da2-147a-4f3c-a513-36f6164ab581
sde 8:64 0 931.5G 0 disk
└─md0 9:0 0 931.4G 0 raid1
  └─md0p1 259:0 0 931.4G 0 part /run/media/poc/6cb66da2-147a-4f3c-a513-36f6164ab581
So although the above message says the existing partition table will be
lost, for some reason I'm still getting a partition, while you
apparently didn't. I copied the --create command directly from the man
page. Is this not the "standard" way you mentioned in an earlier reply?
Finally, the /run/media/... etc. mounts now show my existing data. All
the same, the disk lights are busy and I expect them to be going all
night.
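[Editor's note: the lingering md0p1 is consistent with stale signatures. mdadm's default v1.2 metadata sits near the start of each member but leaves the old partition table and filesystem signatures in place, so the kernel can still probe them through /dev/md0. A hedged cleanup sketch, using the device names from this thread (/dev/sdd, /dev/sde) — these commands are destructive and are shown only as an illustration, not something run here:]

```shell
# Sketch: clear stale metadata before re-creating the array.
# WARNING: destroys all signatures and access to existing data.

mdadm --stop /dev/md0                       # deactivate the array first
mdadm --zero-superblock /dev/sdd /dev/sde   # erase old md superblocks
wipefs --all /dev/sdd /dev/sde              # erase partition-table/FS signatures
```

With the old signatures gone, a fresh `mdadm --create` followed by `mkfs` on /dev/md0 should not resurrect a partition inside the array.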
Well, I had 2 totally clean disks to start with.

I see I should not have used the word "redo". What I meant to convey is that
I'd destroy the existing ARRAY and start all over again.
I did...
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
followed by
mkfs.ext4 -F /dev/md0
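[Editor's note: the initial resync that keeps the "disk lights busy" runs in the background and can be watched. A hedged sketch of monitoring and persisting the array; the /etc/mdadm.conf path is an assumption based on Fedora's convention:]

```shell
# Sketch: watch the initial RAID1 resync and record the array.
# Read-only checks; assume /dev/md0 exists.

cat /proc/mdstat        # overall md state, including resync/recovery progress
mdadm --detail /dev/md0 # array state, sync status, member devices

# Optionally record the array so it assembles under the same name at boot
# (Fedora reads /etc/mdadm.conf; requires root):
mdadm --detail --scan >> /etc/mdadm.conf
```

The array is usable while the resync proceeds, as Ed notes above; the mkfs does not need to wait for it to finish.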
--
The key to getting good answers is to ask good questions.