MHDDFS tips on "JBOD" and Breaking RAID5 - Printable Version

+- Linux-Noob Forums (https://www.linux-noob.com/forums)
+-- Forum: Linux Noob (https://www.linux-noob.com/forums/forum-3.html)
+--- Forum: How Do I? (https://www.linux-noob.com/forums/forum-60.html)
+--- Thread: MHDDFS tips on "JBOD" and Breaking RAID5 (/thread-141.html)
MHDDFS tips on "JBOD" and Breaking RAID5 - chj - 2012-08-26

I need some comments in regards to my NAS. I have a Linux server running Ubuntu 10.04 (may upgrade), and I have a RAID5 for my storage array consisting of 4 x 1.5 TB HDDs. The OS runs on a separate disk, and I have a 1 TB USB drive that is currently empty.

I am thinking of "destroying" the RAID5 to gain additional storage space, and starting to use MHDDFS to mount multiple drives to one mount point as a "large array". Today I lose 1 x 1.5 TB of disk space to the RAID5. I do not want to run 1 large volume, because if I lose 1 disk, I lose it all.

Any comments in regards to using MHDDFS? I have never tried it before. Also, any comments in regards to breaking a RAID5 that is set up with mdadm?

My plan is to:

1. Move 1 TB of data off the 4 TB RAID5
2. Remove 1 of the drives from the RAID5 (a fail/remove sketch is at the end of this thread)
3. Format the "broken" drive
4. Move 1.5 TB off the degraded RAID5
5. Reshape the RAID5 with 3 drives
6. Remove 1 of the drives from the "new" RAID5
7. Format the 2nd "broken" drive
8. Move 1.5 TB off the new degraded RAID5, which should then be empty
9. Format the remaining 2 drives
10. Mount all drives using MHDDFS (a mount sketch is at the end of this thread)

Questions:

A. Is my plan viable (assuming that I have enough data space available)?
B. Am I risking all the data by breaking the drives (assuming no HW failures)?
C. Comments in regards to MHDDFS?
D. Can I mount multiple drives, with data, using MHDDFS to one mount point?

Any views and comments? BTW: I do know that I will not have any redundancy any more, but the main point is not to lose ALL data if A SINGLE drive fails. Thanks.


MHDDFS tips on "JBOD" and Breaking RAID5 - chj - 2012-09-09

OK, I solved the issue, thanks to even more googling and a failed drive that "forced" me into moving on with this. What solved it? Installing an experimental version of mdadm, which allows removing a drive from the array (shrinking the number of RAID devices).

NOTE: This is experimental, so please do not run it if you are not willing to risk your data.

Links:
- Blog Post with Howto
- Ubuntu forum post about upgrading
- Debian Package for mdadm
- How to Resize RAID (tips only)

Here is my "solution":

1. 4 drives in RAID5, 1 drive failed

Code:
cat /proc/mdstat

2. Forced 3 of 4 drives online

Code:
mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sde1 /dev/sdd1

3. Copied out data, leaving only 1.3 TB on the degraded RAID5

4. Resized the filesystem on /dev/md0 to 1.3 TB instead of 4 TB (a shrink sketch is at the end of this thread)

Code:
e2fsck -f /dev/md0

5. Installed experimental mdadm

Code:
# wget http://ftp.se.debian.org/debian/pool/main/m/mdadm/mdadm_3.2.5-3_i386.deb

6. Resized the array

Code:
# mdadm /dev/md0 --grow --array-size=1385120320

7. Decreased the number of drives

Code:
# mdadm /dev/md0 --grow --raid-devices=3 --backup-file=/tmp/mdadm.backup

8. Viewed the new RAID set

Code:
# mdadm --detail /dev/md0

9. Monitored progress

Code:
cat /proc/mdstat

Now I just need to wait 50 hours for the reshape to be done! Data is still accessible, so let's hope that nothing kills the data on the drives.


MHDDFS tips on "JBOD" and Breaking RAID5 - Dungeon-Dave - 2012-09-12

That's... a pretty comprehensive guide!

Yeah, I was surprised to see I could run "pvcreate" against my mirror array whilst it was still being built. I created LVs, put filesystems on them, mounted them and restored data... all whilst /dev/md0 was still synchronising. Fantastic!
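
For reference, here is a minimal sketch of the mhddfs pooling described in step 10 of the plan and in question D. The mount points and device names are assumptions, not taken from the thread; mhddfs is a FUSE filesystem, so the fuse and mhddfs packages need to be installed first. Existing data on each member disk stays visible through the pooled mount point.

Code:
# Mount each former RAID member on its own mount point first (paths are hypothetical)
mkdir -p /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/storage
mount /dev/sdb1 /mnt/disk1
mount /dev/sdc1 /mnt/disk2
mount /dev/sdd1 /mnt/disk3
mount /dev/sde1 /mnt/disk4
# Pool the four directories under one mount point
mhddfs /mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4 /mnt/storage -o allow_other

The equivalent /etc/fstab entry (same assumed paths) would be roughly:

Code:
mhddfs#/mnt/disk1,/mnt/disk2,/mnt/disk3,/mnt/disk4 /mnt/storage fuse defaults,allow_other 0 0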
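
Regarding steps 2-3 and 6-7 of the plan: a minimal sketch of how a member is typically failed out of an mdadm array and reformatted as a standalone disk. The device name /dev/sdb1 and the ext4 filesystem are assumptions for illustration; the solution post went a different route (shrinking the array with an experimental mdadm) after a drive had already failed on its own.

Code:
mdadm /dev/md0 --fail /dev/sdb1      # mark the member as failed
mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the (now degraded) array
mdadm --zero-superblock /dev/sdb1    # wipe the md metadata so it is a plain partition again
mkfs.ext4 /dev/sdb1                  # format it for standalone use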
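
Step 4 of the solution shows only the e2fsck run; for completeness, a rough sketch of the filesystem shrink it implies, assuming an ext3/ext4 filesystem sitting directly on /dev/md0. The 1300G target and the mount point are illustrative figures; the filesystem must end up no larger than the array size set in step 6, and it has to be unmounted before it can be shrunk.

Code:
umount /dev/md0              # shrinking ext3/ext4 requires the filesystem to be offline
e2fsck -f /dev/md0           # forced check, as in step 4
resize2fs /dev/md0 1300G     # shrink the filesystem below the intended new array size
mount /dev/md0 /mnt/raid     # remount (mount point is hypothetical)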
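
And a rough sketch of the LVM-on-md setup Dungeon-Dave describes, assuming a freshly created /dev/md0 mirror. The volume group name, logical volume size and name, and mount point are made up for illustration; the point is that none of these steps need to wait for the initial array synchronisation to finish.

Code:
pvcreate /dev/md0                         # turn the md array into an LVM physical volume
vgcreate vg_storage /dev/md0              # hypothetical volume group name
lvcreate -L 500G -n lv_data vg_storage    # hypothetical LV size and name
mkfs.ext4 /dev/vg_storage/lv_data         # filesystem on the new LV
mkdir -p /srv/data
mount /dev/vg_storage/lv_data /srv/data   # mount and start restoring data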