RAID 1 badly detected as RAID 0 when one drive is missing


I'm learning about RAID, so this may be a basic question, but I haven't found it covered anywhere...



When I create a RAID 1 array, update /etc/mdadm/mdadm.conf as shown in [1], and run update-initramfs -u, I can reboot and mount it. Everything is fine. Now I remove one drive and reboot, to simulate a critical failure. The array is wrongly detected as raid0 (why?) and inactive (why? because we "just have half of a raid0"?), and as such it cannot be used. What I expected to see was an active, degraded array, not this fatal state. What's wrong? See [2] for a description of the error state.



Related question: why does mdadm.conf [1] contain devices=/dev/sdb1,/dev/sdc1 if supposedly all partitions (or those defined by the DEVICE variable) are scanned for the RAID UUID anyway? Why is this part generated, what is it used for, and why isn't a partition UUID used there instead? Could one even be used here?
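
For context, the devices= part seems to come from the way I generated the config (see the creation steps below); if I read the man page right, the plain scan output omits it (sketch, commands only):

mdadm --detail --scan              # ARRAY line with name/metadata/UUID only
mdadm --detail --scan --verbose    # additionally records devices=... as seen at scan time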



[1] mdadm.conf



cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR alfonz19gmail.com

MAILFROM vboxSystem

# definitions of existing MD arrays

# This configuration was auto-generated on Sun, 10 Feb 2019 09:57:56 +0100 by mkconf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=mmucha-VirtualBox1:0 UUID=16624299:11ed3af5:3a8acd02:cd24d4d0
devices=/dev/sdb1,/dev/sdc1
root@mmucha-VirtualBox1:~# cat /etc/mdadm/mdadm.conf


[2] erroneous state:



root@mmucha-VirtualBox1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb1[0](S)
5236719 blocks super 1.2

unused devices: <none>
root@mmucha-VirtualBox1:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent

State : inactive
Working Devices : 1

Name : mmucha-VirtualBox1:0 (local to host mmucha-VirtualBox1)
UUID : 16624299:11ed3af5:3a8acd02:cd24d4d0
Events : 19

Number Major Minor RaidDevice

- 8 17 - /dev/sdb1




UPDATE: creation steps



I wanted to share something non-interactive, but the sfdisk interface doesn't work for me: when I ask it to create a gpt disklabel and write it, it claims everything is OK but does nothing. So, sorry, you're getting fdisk commands here.
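
For completeness, the non-interactive attempt looked roughly like this (reconstructed from memory, so treat it as a sketch rather than the exact invocation):

# create a GPT disklabel and one Linux partition spanning the whole disk
sfdisk /dev/sdd <<'EOF'
label: gpt
,,L
EOF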



Description: I created two new disks for an existing Ubuntu 18.04 VM, set a GPT partition table on both, created one partition on each, created the RAID 1 array, created an ext4 filesystem on it, mounted it, created a test file, updated mdadm.conf, and ran update-initramfs -u. Reboot, verify, it works. Power off, remove the sde drive, boot. Same failure as before.



ubuntu release:



lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic


fdisk:



fdisk /dev/sdd
g        # create a new empty GPT disklabel
n        # new partition
1        # partition number 1 (Enter twice to accept the default first and last sectors)
t        # change the partition type
29       # type number as offered by this fdisk version
p        # print the table
w        # write changes and quit


Prints:



Disk /dev/sdd: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E16A3CCE-1EF7-3D45-8AEF-A70B45B047CC

Device Start End Sectors Size Type
/dev/sdd1 2048 10485726 10483679 5G Linux filesystem


same for /dev/sde:



Disk /dev/sde: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AEE480EE-DFA8-C245-8405-658B52C7DC0A

Device Start End Sectors Size Type
/dev/sde1 2048 10485726 10483679 5G Linux filesystem


raid creation:



mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sd[d-e]1

mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Thu Feb 21 08:54:50 2019
Raid Level : raid1
Array Size : 5236672 (4.99 GiB 5.36 GB)
Used Dev Size : 5236672 (4.99 GiB 5.36 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Thu Feb 21 08:55:16 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

Name : mmucha-VirtualBox1:1 (local to host mmucha-VirtualBox1)
UUID : 1c873dd9:87220378:fc4de07a:99db62ae
Events : 17

Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1


formatting and mounting:



mkfs.ext4 /dev/md1                                         # create the filesystem on the new array
mkdir /media/raid1
mount /dev/md1 /media/raid1/

mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf   # append the ARRAY definition (including devices=...)

update-initramfs -u                                        # so the initramfs picks up the updated config

cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR alfonz19gmail.com

MAILFROM vboxSystem

# definitions of existing MD arrays

# This configuration was auto-generated on Sun, 10 Feb 2019 09:57:56 +0100 by mkconf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=mmucha-VirtualBox1:0 UUID=16624299:11ed3af5:3a8acd02:cd24d4d0
devices=/dev/sdb1,/dev/sdc1
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=mmucha-VirtualBox1:1 UUID=1c873dd9:87220378:fc4de07a:99db62ae
devices=/dev/sdd1,/dev/sde1


And that's it. As described above, if you remove one drive now, you won't be able to assemble and mount the array:



sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Raid Level : raid0
Total Devices : 1
Persistence : Superblock is persistent

State : inactive
Working Devices : 1

Name : mmucha-VirtualBox1:1 (local to host mmucha-VirtualBox1)
UUID : 1c873dd9:87220378:fc4de07a:99db62ae
Events : 23

Number Major Minor RaidDevice

- 8 49 - /dev/sdd1




UPDATE 2: I tested the same commands (minus update-initramfs -u) on Arch and it worked without a hiccup. I then booted back into the Ubuntu VM, where I had two sets of 2-drive RAID 1 arrays. I removed one drive again, and this time it worked: clean, degraded. I hadn't run that VM even once since last time. So then I removed one drive from the other set. Now I should have two clean, degraded arrays on md0 and md1, but what I actually have is two clean, degraded arrays on md0 and md127. Fine, I know what that means: I have to stop md127, run mdadm --assemble --scan to get it back on md1, run update-initramfs -u, and after a reboot it should be good. But surprisingly it isn't. I have md0 and md1 as expected, each missing one drive, but one is in the state clean, degraded and the other is inactive with the wrong level. Stopping and reassembling fixes it again. All of this happened without a single modification of mdadm.conf.



It's deep magic.
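
For reference, this is the manual sequence that brings things back each time (device names as above):

mdadm --stop /dev/md127    # stop the wrongly numbered array
mdadm --assemble --scan    # re-assemble from the superblocks; it comes back as /dev/md1
update-initramfs -u        # refresh the copy of mdadm.conf inside the initramfs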










linux raid software-raid mdadm






asked Feb 11 at 7:32 by Martin Mucha, edited Feb 24 at 20:33













  • I'm not sure it is an actual configuration error, as the Arch wiki mentions this happening in the section "Mounting from a live CD"; sadly it doesn't explain at all why it happens or what it means: wiki.archlinux.org/index.php/RAID

    – Martin Mucha
    Feb 11 at 12:37











  • The actual configuration looks completely wrong. A raid0 made of just one device doesn't make any sense. Normally I would say this is an entirely different configuration from the one in the config file, but the matching UUID is intriguing.

    – Pavel Šimerda
    Feb 17 at 19:52











  • Well, technically, if you had configured RAID 0 and one disk went missing, you would end up in this state, which is understandable. The issue, however, is why something ignores the explicit hint that this is RAID 1 and not 0. The response should be an active, degraded array.

    – Martin Mucha
    Feb 17 at 20:16



















2 Answers
































I had the same problem, with an inactive RAID 1 array reported as raid0. I'm not sure why, but this fixed it for me:



mdadm --stop /dev/md0              # stop the incorrectly assembled, inactive array
mdadm --assemble /dev/md0 --run    # re-assemble it and start it even though it is degraded
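
To confirm it really came back as a degraded RAID 1 rather than an inactive raid0, check (same commands as in the question):

cat /proc/mdstat          # md0 should now be listed as an active raid1 with one member
mdadm --detail /dev/md0   # Raid Level should read raid1, State clean, degraded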




answered 2 mins ago by g.kovatchev
You must have done something wrong. Try the procedure again from the beginning and post all the steps you have done. It must work.






answered Feb 18 at 7:35 by Josef Jebavy
    • Will try; I'll be in touch shortly with the exact commands. But to be honest, where is the possibility of a mistake, if the RAID 1 was created, mounted and working (well, allegedly, according to mdadm, which starts misbehaving after the drive removal)? Anyway, I'll be back with specific commands.

      – Martin Mucha
      Feb 18 at 13:53











    • Sorry for the delay. I added the creation details. Does this show all the details you need?

      – Martin Mucha
      Feb 21 at 9:08











    • Hi, I don't know where the problem is, but if you remove the RAID you will also remove its configuration from /etc/mdadm.conf.

      – Josef Jebavy
      Feb 22 at 12:31






    • I'm not following. I did not remove the RAID in the virtualized operating system. I did nothing in the virtualized system, since we are simulating a critical failure; when there is a critical failure you do nothing in the system, right? I removed a virtual drive from the virtual machine in the VirtualBox configuration, which is equivalent to someone showing up at your PC and stealing one of your drives. That is what happened. The problem is that there was no software change; one HDD went missing. This shouldn't affect the functionality of a level 1 RAID.

      – Martin Mucha
      Feb 22 at 13:37










