How to fail, add, stop, and remove software RAID devices in Linux

Posted: 13 Dec 2016 in RAID and LVM

In this tutorial we will learn how to manually fail a device used in a RAID array and how to add a new device to an existing array. We will also learn how to use a spare device, and how to stop and remove a software RAID device completely from a Linux machine. Before removing any device from an existing array, first check the topology of the RAID you built: check the number and names of the disks used as active devices, and find out which device is used as the spare, using the Linux command-line tools below.

 

[root@localhost ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3     29G  2.0G   26G   8% /
/dev/sda1     ext3     99M   12M   82M  13% /boot
tmpfs        tmpfs    506M     0  506M   0% /dev/shm
/dev/md0      ext3    950M   18M  885M   2% /linuxtiwary
[root@localhost ~]#

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Dec 12 14:18:02 2016
     Raid Level : raid1
     Array Size : 987840 (964.85 MiB 1011.55 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Dec 12 14:27:48 2016
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
           UUID : 7547df35:f740b7f7:06234e24:013d8e13
         Events : 0.4
    Number   Major   Minor   RaidDevice State
       0       8       19        0      active sync   /dev/sdb3
       1       8       35        1      active sync   /dev/sdc3
       2       8       49        -      spare   /dev/sdd1

Hence it is now completely clear from the above output which devices are active (/dev/sdb3 and /dev/sdc3) and which one is the spare (/dev/sdd1).
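As a quick cross-check, the same topology can be read from the kernel's RAID status file or from mdadm's scan mode; a minimal sketch (output omitted here, as it varies per system):

# one-line summary of every active array; spares are marked with (S)
cat /proc/mdstat

# terse ARRAY lines for each array, also useful for building /etc/mdadm.conf
mdadm --detail --scan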

 

Step 1: Now I am going to manually fail one of the active devices in the RAID 1 array, /dev/sdb3, and analyse what happens after a single disk failure.

Let’s see how to manually fail and then remove an active device from a RAID device.

Here I am going to fail the /dev/sdb3 device in the RAID array /dev/md0.


[root@localhost ~]# mdadm /dev/md0 --fail /dev/sdb3
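If you want to watch the spare take over in real time, you can poll /proc/mdstat while the rebuild runs; a small sketch:

# refresh the RAID status every 2 seconds; press Ctrl-C to quit
watch -n 2 cat /proc/mdstat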

 

Immediately after the above command I ran the command below, and you can see the spare device taking the place of the failed device.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Dec 12 14:18:02 2016
     Raid Level : raid1
     Array Size : 987840 (964.85 MiB 1011.55 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Dec 12 15:15:43 2016
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1
 Rebuild Status : 95% complete
           UUID : 7547df35:f740b7f7:06234e24:013d8e13
         Events : 0.6
    Number   Major   Minor   RaidDevice State
       2       8       49        0      spare rebuilding   /dev/sdd1
       1       8       35        1      active sync   /dev/sdc3
       3       8       19        -      faulty spare   /dev/sdb3

 

 

I cross-checked it once again using a different command and found the same result:

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd1[0] sdc3[1] sdb3[2](F)
      987840 blocks [2/2] [UU]
unused devices: <none>
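In production you will rarely be logged in at the moment a disk actually dies, so it is worth knowing about mdadm's monitor mode, which can send mail on failure and rebuild events; a minimal sketch, assuming local mail delivery to root works on this machine:

# watch all arrays in the background and mail root on failure/spare events
mdadm --monitor --scan --daemonise --mail=root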




 

Step 2: Now, if you want to remove the faulty device from the RAID array, use the command below.

 

[root@localhost ~]# mdadm /dev/md0 --remove /dev/sdb3
mdadm: hot removed /dev/sdb3


The above method is also known as hot removal.
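As a side note, the fail and remove operations can also be combined into a single mdadm invocation, which is convenient in scripts; a sketch of the equivalent one-liner:

# mark the device faulty and hot-remove it in one step
mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3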

Let’s check the output again after the hot removal of the faulty device (/dev/sdb3) from the array.

[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Dec 12 14:18:02 2016
     Raid Level : raid1
     Array Size : 987840 (964.85 MiB 1011.55 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Dec 12 15:30:54 2016
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
           UUID : 7547df35:f740b7f7:06234e24:013d8e13
         Events : 0.10
    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       35        1      active sync   /dev/sdc3
[root@localhost ~]#

 

You can clearly see that /dev/sdb3 no longer appears anywhere in the array.

 

Step 3: Now let’s learn how to manually add a disk to an existing array so that it can be used as a spare disk.

[root@localhost ~]# mdadm /dev/md0 --add /dev/sdb3
mdadm: added /dev/sdb3

 

Now check whether /dev/sdb3 has been added to the array as a spare device.
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Dec 12 14:18:02 2016
     Raid Level : raid1
     Array Size : 987840 (964.85 MiB 1011.55 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Dec 12 15:30:54 2016
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
           UUID : 7547df35:f740b7f7:06234e24:013d8e13
         Events : 0.10
    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       35        1      active sync   /dev/sdc3
       2       8       19        -      spare   /dev/sdb3
[root@localhost ~]#


You can clearly see that /dev/sdb3 is now being used as a spare device.
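Remember that a spare only takes over after a failure. If you instead wanted /dev/sdb3 to start mirroring immediately, mdadm can grow the member count of a RAID 1 array; a sketch, assuming a three-way mirror is really what you want:

# promote the spare to an active member by growing the array to 3 devices
mdadm --grow /dev/md0 --raid-devices=3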

 

Step 4: Now let’s see how to stop and remove the RAID device /dev/md0.

You must follow these steps to delete a RAID device cleanly from your Linux machine: first unmount it, then stop the RAID device, and then remove it.

[root@localhost ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3     29G  2.0G   26G   8% /
/dev/sda1     ext3     99M   12M   82M  13% /boot
tmpfs        tmpfs    506M     0  506M   0% /dev/shm
/dev/md0      ext3    950M   18M  885M   2% /linuxtiwary

First unmount it.

[root@localhost ~]# umount /linuxtiwary

Check whether it was unmounted using the df -Th command.

[root@localhost ~]# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3     29G  2.0G   26G   8% /
/dev/sda1     ext3     99M   12M   82M  13% /boot
tmpfs        tmpfs    506M     0  506M   0% /dev/shm

 

Now stop the RAID device first.

[root@localhost ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
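Stopping an array is non-destructive: the superblocks on the member partitions are left intact, so the array can be reassembled later if you change your mind; a sketch using the members from this tutorial:

# bring the stopped array back from its two active members
mdadm --assemble /dev/md0 /dev/sdc3 /dev/sdd1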

 

Then remove the RAID device.
[root@localhost ~]# mdadm --remove /dev/md0
[root@localhost ~]#

 

Now check whether the RAID device has been deleted.

[root@localhost ~]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[root@localhost ~]#

 

You can clearly see there is no active RAID array any more.

It means we have successfully deleted the software RAID device from our Linux machine.

Now you can delete the partitions that were used in the RAID array with fdisk, or put those partitions to a different use. It’s entirely up to you.
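If you plan to reuse those partitions, it is also wise to wipe the old md superblocks first, so the kernel does not try to auto-assemble the array at the next boot (and remember to delete any /dev/md0 entries you may have added to /etc/fstab or /etc/mdadm.conf); a minimal cleanup sketch:

# erase the md metadata from each former member; destroys only the RAID metadata
mdadm --zero-superblock /dev/sdb3 /dev/sdc3 /dev/sdd1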
