I’m working on a script to automate the process of creating a RAID array and am currently trying to figure out how to get multiple machines to recognize the array so the data can be transferred after collection.
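The creation side of the script boils down to a standard mdadm --create plus a filesystem and a mount; roughly like this sketch (the drive names, filesystem type, and mount point here are just examples, not necessarily what the script uses):

# create the array from the member drives (names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdX /dev/sdY /dev/sdZ
# put a filesystem on it and mount it so data can be collected
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid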
The ultimate goal is to use a set of 5 SATA drives in a RAID 5 array, but I’m currently doing the proof of concept using 3 USB drives. I think I’m missing something in the process when stopping the RAID array on the original machine before attempting to assemble it on the other one. I copied the entries for the array from /etc/fstab and /etc/mdadm.conf from the original machine to the other one (I’m not sure if this was necessary… I’m assuming I could do it without this, just with a longer --assemble command). I had to change the /dev location because a /dev/md0 already existed on the second system, but otherwise made no other changes.

However, upon assembling it on the second machine, only two of the 3 drives reported as active, and the third drive was ignored because it reported one of the other drives as failed. When using --examine, the array state information is as follows:

/dev/sdd - Array State: AA.
/dev/sde - Array State: AA.
/dev/sdf - Array State: ..A
The /dev/sdf drive is the one that is being ignored. So it looks like the first two drives report sdf as missing, but sdf thinks the other two are missing.
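For reference, the assembly on the second machine looked roughly like this (I had to use a different md device since /dev/md0 was already taken there; /dev/md1 and the drive names are just examples):

# assemble under a new md device name, listing the members explicitly
mdadm --assemble /dev/md1 /dev/sdd /dev/sde /dev/sdf
# or, since the copied mdadm.conf describes the array, let mdadm find it
mdadm --assemble --scan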
To stop the array on the first system, I simply unmounted the RAID location, then used --fail and --remove on the drives and --stop on the RAID. Is there something else I forgot to do? Or is this confusion caused by my manually copying over the RAID config info? Or something else?
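Concretely, the teardown on the first machine was essentially this sequence (mount point and drive names are examples):

# teardown on the first machine before moving the drives
umount /mnt/raid                                    # unmount the RAID filesystem
mdadm /dev/md0 --fail /dev/sdX --remove /dev/sdX    # repeated for each member drive
mdadm --stop /dev/md0                               # stop the array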