Re-assemble existing Linux logical volume array Ubuntu 10.04


Post by dedwards » Mon Oct 11, 2010 12:53 pm

Re-assembling and mounting an existing Linux logical volume array can be a bit tricky. Logical volumes often seem like more hassle than they are worth when it comes to maintaining and recovering arrays. Nevertheless, people seem hellbent on using logical volumes, so this guide will walk you through getting all your data back and mounted.

If you are planning on re-mounting an existing Linux array on a brand new Linux installation, the best piece of advice I can offer you is this: DO NOT install the new OS with the Linux array drives attached. Disconnect all your array drives, install/configure the OS on a SEPARATE drive, THEN shut down the machine, re-attach the array drives and bring the machine back up. This way you guarantee that the new install does NOT touch the existing array with all your valuable data.

1. Install all necessary components

Install mdadm:

Code: Select all

sudo apt-get install mdadm


Install Logical Volume Manager:

Code: Select all

sudo apt-get install lvm2


2. Identify the array drives and re-assemble the array.

Issue the following command to get the listing of all the drives detected in the system:

Code: Select all

sudo fdisk -l


This will list all the drives detected by the system. It will look similar to the output below:

Code: Select all

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001915c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      121442   975474688   83  Linux
/dev/sda2          121442      121602     1285121    5  Extended
/dev/sda5          121442      121602     1285120   82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      121602   976762583+  ee  GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121602   976762583+  ee  GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121602   976762583+  ee  GPT


In this particular example, drive "/dev/sda" is the boot drive where Linux is installed. You can easily identify it by the fact that it has a "Linux" partition and a "swap" partition on it, so logically this drive would not be part of the array. The remaining drives "/dev/sdb", "/dev/sdc" and "/dev/sdd" are part of the array. This is further confirmed by the fact that each of those drives carries a single partition of type "GPT" (which stands for "GUID Partition Table"), hence the fdisk warnings above.
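Before assembling, you can also confirm array membership directly by reading the md superblock on each partition. A quick sketch (the device names /dev/sdb1, /dev/sdc1 and /dev/sdd1 are from this example; substitute your own):

```shell
# Check each candidate partition for an md superblock; members of the
# same array will report the same UUID.
for part in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    echo "== $part =="
    sudo mdadm --examine "$part" | grep -E 'Raid Level|UUID'
done
```

Any partition that is not part of an md array will simply report that no superblock was found.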

3. Re-assemble the Linux array

Issue the following command to assemble the array:

Code: Select all

sudo mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1


One thing to note here is that you assemble the array using the partitions, NOT the whole drives. So in the example above, we assembled array "md0" using partitions "/dev/sdb1", "/dev/sdc1" and "/dev/sdd1". If you get no errors, your array should have been assembled. You can check the status of the array by issuing the following command:

Code: Select all

sudo mdadm --detail /dev/md0


You should get an output like below:

Code: Select all

Version : 00.90
  Creation Time : Thu Nov  6 19:19:28 2008
     Raid Level : raid5
     Array Size : 1953517824 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758912 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Oct 11 14:45:19 2010
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 381c3ef0:c71e02f7:4edd5491:507a3f5d
         Events : 0.316136

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       17        2      active sync   /dev/sdb1


Ensure that the "State" is "clean", and at the very bottom it should list the partitions you just used to assemble the array. Once your array has been re-assembled and is clean, proceed to the next step.
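For a quicker one-glance status of every assembled array, you can also read /proc/mdstat:

```shell
# /proc/mdstat lists all md arrays the kernel currently knows about,
# including sync/rebuild progress if any is in flight.
cat /proc/mdstat
```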

Now, enter your new array in the mdadm.conf file:

Code: Select all

sudo sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'


4. Mount the array you just re-assembled

a. First make a mount directory for your array. Usually I create mine under the "/mnt" directory:

Code: Select all

sudo mkdir /mnt/raid


b. Load the necessary modules to detect the logical volume group and volume name of your array:

Code: Select all

sudo modprobe dm-mod


c. Scan for volume groups:

Code: Select all

sudo vgscan


You should get an output like below:

Code: Select all

Reading all physical volumes.  This may take a while...
Found volume group "raid0" using metadata type lvm2


Make a note of your volume group which in this example is "raid0" but yours will most likely differ.
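If you want to double-check that the volume group really sits on top of the md array (and not on some other disk), listing the physical volumes shows the backing device. A sketch, assuming the names from this example:

```shell
# pvs lists each physical volume and the volume group it belongs to;
# here you would expect to see /dev/md0 backing the "raid0" group.
sudo pvs
```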

d. Activate the volume group of your array and list the logical volumes under that volume group:

Code: Select all

sudo vgchange -ay raid0
sudo lvs


You should get an output like below:

Code: Select all

  LV       VG     Attr   LSize Origin Snap%  Move Log Copy%  Convert
  volume0  raid0  -wi-ao 1.82t


As you can see from the example above, our array which has a volume group name of "raid0" has logical volume name of "volume0". Again, your group and logical volume names will be different.
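Under the hood, LVM exposes each logical volume both as /dev/&lt;group&gt;/&lt;volume&gt; and as a device-mapper node under /dev/mapper; the former is a symlink to the latter. With this example's names:

```shell
# The volume group gets a directory under /dev; each logical volume in it
# is a symlink to its device-mapper node (/dev/mapper/raid0-volume0 here).
ls -l /dev/raid0/
```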

e. Mount your array:

Code: Select all

sudo mount /dev/raid0/volume0 /mnt/raid/ -o rw,user


In the example above, we've just mounted the array under the "/mnt/raid" directory we created before and set it to read/write (rw) mode.

Switch to the directory you mounted your array on and ensure that all your files are there. Once you have verified your files are there, go to the next step.

f. Create an entry for your array in your "fstab" so it is mounted automatically at boot. Edit your "/etc/fstab" file:

Code: Select all

sudo vi /etc/fstab

Enter a new line similar to the one below, substituting your own volume group, logical volume name and, of course, the file system that was used on the array.

For ext3 file system array:

Code: Select all

/dev/raid0/volume0 /mnt/raid ext3 defaults 0 0


For xfs file system array:

Code: Select all

/dev/raid0/volume0 /mnt/raid xfs user,auto 0 0


If you don't know the file system that was used on the array, issue the following command while the array is mounted:

Code: Select all

sudo df -T


You will get an output similar to below:

Code: Select all

Filesystem    Type   1K-blocks      Used Available Use% Mounted on
/dev/sda1     ext4   960167320   1084592 910308996   1% /
none      devtmpfs      214512       208    214304   1% /dev
none         tmpfs      219552         0    219552   0% /dev/shm
none         tmpfs      219552       524    219028   1% /var/run
none         tmpfs      219552         0    219552   0% /var/lock
none         tmpfs      219552         0    219552   0% /lib/init/rw
/dev/mapper/raid0-volume0
              ext3   1922845496 1547739760 277430884  85% /mnt/raid


As you can see from this example, the filesystem for this array is "ext3".
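Before rebooting, you can verify that the new fstab entry actually works: unmount the array and remount everything in /etc/fstab. If `mount -a` completes without errors and the array reappears, the entry is good. A sketch using this example's mount point:

```shell
# Unmount the array, then remount everything listed in /etc/fstab.
sudo umount /mnt/raid
sudo mount -a
# Confirm the array came back.
mount | grep /mnt/raid
```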

Reboot your machine and ensure that your array is mounted automatically.