
Example 1: Logical Device With SVM

To illustrate what has been presented, this section describes a method for accessing a replicated set of data created on an enterprise-class subsystem: the StorEdge SE99x0. ShadowImage is used to replicate the LUNs. The data to access is configured as a metadevice using SVM patched to the latest level. The primary metadevice, d101, is a soft partition created on top of metadevice d100, a RAID-0 stripe of four SE99x0 LUNs. All of these metadevices belong to the metaset labset.

In this example, the primary and secondary volumes (metadevices) are accessed from two different hosts: the primary host is storage103 and the secondary host is storage26. The primary host has access to the primary LUNs only, and the secondary host sees only the secondary LUNs. This constraint forces you to reconstruct the metaset and metadevices on the secondary site before accessing the data. Importing or exporting the metaset from one host to the other is not possible, because taking and releasing ownership of a metaset implies that every disk is visible from both hosts.

SVM metaset:

    labset

SVM metadevices:

    d100, RAID-0 of four LUNs
    d101, soft partition on top of d100

Primary disks (LUNs visible from storage103):

    c8t500060E8000000000000ED1600000200d0
    c8t500060E8000000000000ED1600000201d0
    c8t500060E8000000000000ED1600000202d0
    c8t500060E8000000000000ED1600000203d0

Secondary disks (LUNs visible from storage26):

    c3t500060E8000000000000ED160000020Ad0
    c3t500060E8000000000000ED160000020Bd0
    c3t500060E8000000000000ED160000020Cd0
    c3t500060E8000000000000ED160000020Dd0


ShadowImage is set up so that the LUNs are paired, under the consistency group LAB, as follows:

    Primary Disk                           -->  Secondary Disk
    c8t500060E8000000000000ED1600000200d0  -->  c3t500060E8000000000000ED160000020Ad0
    c8t500060E8000000000000ED1600000201d0  -->  c3t500060E8000000000000ED160000020Bd0
    c8t500060E8000000000000ED1600000202d0  -->  c3t500060E8000000000000ED160000020Cd0
    c8t500060E8000000000000ED1600000203d0  -->  c3t500060E8000000000000ED160000020Dd0


As described in the previous section, to access the replicated data, you must ensure that every layer of the I/O stack is correctly set up. In this example, the steps are divided as follows:

  • Physical layer—ensure the replicated disks are consistent and accessible.

  • Driver layer—detect the replicated disks.

  • LVM Layer—reconstruct the replicated metasets and metadevices.

  • File system layer—make the replicated file system consistent.

  • Application layer—make the data ready for the application.

To Ensure Consistent and Accessible Replicated Disks in the Physical Layer

  • Suspend the replication.

    Before accessing the replicated LUNs, stop (suspend) the replication and make sure every LUN is in the suspended (psus) state:

    root@storage103 # pairsplit -g LAB 
    root@storage103 # pairevtwait -g LAB -s psus -t 1800

    The pairsplit command must be issued only when all the LUNs are already synchronized (state PAIR). Failing to do so results in corrupted secondary devices.
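The precondition check can be scripted. The sketch below is not part of the original procedure: it assumes a whitespace-delimited, pairdisplay-style listing with one header line whose tenth column is the pair status. The exact column layout depends on the CCI version and flags, so verify the field number against the actual output on your installation.

```shell
#!/bin/sh
# Hedged sketch: confirm that every pair in a listing is in the
# expected state before proceeding. The status is ASSUMED to be the
# tenth whitespace-delimited column of a "pairdisplay -g LAB" style
# report with one header line; adjust $10 to match the real output.
all_in_state() {
    want=$1
    awk -v want="$want" 'NR > 1 && $10 != want { bad++ } END { exit bad }'
}

# Example (on storage103, before the split):
# pairdisplay -g LAB | all_in_state PAIR || echo "not all synchronized"
```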

To Detect the Replicated Disks in the Driver Layer

  • Scan the disks and verify that they are all visible and accessible from the secondary host.

    This is achieved with the Solaris OS command devfsadm, which scans the I/O buses for new devices and reads the partition table of each disk:

    root@storage26 # devfsadm 
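To confirm that devfsadm produced device nodes for all four secondary LUNs, a small check loop can help. This is a sketch, not part of the original procedure; it assumes the conventional s2 (whole-disk) slice node under /dev/rdsk, and it takes the device directory as a parameter so the function can be exercised against any path.

```shell
#!/bin/sh
# Hedged sketch: verify that each expected LUN has a character device
# node. The s2 (whole disk) slice suffix and the /dev/rdsk location
# are Solaris conventions; the directory argument exists so the check
# can be pointed at another path.
check_luns() {
    dir=$1; shift
    missing=0
    for lun in "$@"; do
        if [ ! -e "$dir/${lun}s2" ]; then
            echo "missing: $dir/${lun}s2" >&2
            missing=$((missing + 1))
        fi
    done
    return $missing
}

# On storage26, after devfsadm:
# check_luns /dev/rdsk \
#     c3t500060E8000000000000ED160000020Ad0 \
#     c3t500060E8000000000000ED160000020Bd0 \
#     c3t500060E8000000000000ED160000020Cd0 \
#     c3t500060E8000000000000ED160000020Dd0
```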

To Reconstruct the Replicated Metasets and Metadevices in the LVM Layer

Modify the primary metaset configuration to reflect the new devices, and apply the modified configuration to a newly created metaset.

  1. Create a metaset on the secondary host:

     root@storage26 # metaset -s labset -a -h storage26 

  2. Populate the new metaset with the cloned disks:

     root@storage26 # metaset -s labset -a \ 
     c3t500060E8000000000000ED160000020Ad0 \ 
     c3t500060E8000000000000ED160000020Bd0 \ 
     c3t500060E8000000000000ED160000020Cd0 \ 
     c3t500060E8000000000000ED160000020Dd0 
  3. Create the new configuration for the secondary metaset. Start by obtaining the metadevice configuration from the primary host:

     root@storage103 # metaset -s labset -p 
     labset/d101 -p labset/d100 -o 1 -b 10485760 
     labset/d100 1 4 c8t500060E8000000000000ED1600000200d0s0 \ 
     c8t500060E8000000000000ED1600000201d0s0 \ 
     c8t500060E8000000000000ED1600000202d0s0 \ 
     c8t500060E8000000000000ED1600000203d0s0 -i 32b 
  4. On the secondary host, create a metadevice configuration file called /etc/lvm/md.tab containing the previous output with the correct secondary LUNs. The order of appearance must be respected:

     root@storage26 # cat /etc/lvm/md.tab
     labset/d101 -p labset/d100 -o 1 -b 10485760
     labset/d100 1 4 c3t500060E8000000000000ED160000020Ad0s0 \ 
     c3t500060E8000000000000ED160000020Bd0s0 \ 
     c3t500060E8000000000000ED160000020Cd0s0 \ 
     c3t500060E8000000000000ED160000020Dd0s0 -i 32b 
  5. Apply the metadevice configuration file on the replicated host:

     root@storage26 # metainit -s labset -a
     labset/d100: Concat/Stripe is setup
     labset/d101: Soft Partition is setup
     root@storage26 #
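Editing /etc/lvm/md.tab by hand is error-prone when many LUNs are involved. A hedged alternative, not part of the original procedure, is to rewrite the primary configuration mechanically, substituting each primary LUN for its ShadowImage partner from the pairing table:

```shell
#!/bin/sh
# Hedged sketch: translate the primary metaset configuration
# ("metaset -s labset -p" output captured on storage103) into the
# secondary md.tab by replacing each primary LUN name with its
# ShadowImage partner. The pairs mirror the table shown earlier.
translate_config() {
    sed -e 's/c8t500060E8000000000000ED1600000200d0/c3t500060E8000000000000ED160000020Ad0/g' \
        -e 's/c8t500060E8000000000000ED1600000201d0/c3t500060E8000000000000ED160000020Bd0/g' \
        -e 's/c8t500060E8000000000000ED1600000202d0/c3t500060E8000000000000ED160000020Cd0/g' \
        -e 's/c8t500060E8000000000000ED1600000203d0/c3t500060E8000000000000ED160000020Dd0/g'
}

# On storage26, with the primary output saved to labset.primary:
# translate_config < labset.primary > /etc/lvm/md.tab
```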

To Make the Replicated File System Consistent in the File System Layer

  1. Check the consistency of the file system:

     root@storage26 # fsck /dev/md/labset/rdsk/d101 

     fsck lists any corrupted files, and action must be taken to recover them. This check matters chiefly after a crash, when files that were being created or modified at the time might be corrupted.

  2. Mount the file system:

     root@storage26 # mount /dev/md/labset/dsk/d101 /mnt/LAB 
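The two steps above can be combined so that the mount happens only if fsck succeeds. This is a sketch under stated assumptions: the FSCK/MOUNT indirection is purely illustrative (it lets the function be exercised without real devices), and -y auto-answers fsck's repair prompts, which may not be what you want on a first inspection. The device paths and mount point are those used above.

```shell
#!/bin/sh
# Hedged sketch: run fsck on the raw metadevice and mount the block
# device only on success. FSCK and MOUNT are overridable (illustrative
# only); they default to the real commands. Note that "fsck -y"
# answers yes to all repair prompts.
FSCK=${FSCK:-fsck}
MOUNT=${MOUNT:-mount}

mount_replica() {
    rawdev=$1; blkdev=$2; mntpt=$3
    if $FSCK -y "$rawdev"; then
        $MOUNT "$blkdev" "$mntpt"
    else
        echo "fsck failed on $rawdev; not mounting" >&2
        return 1
    fi
}

# On storage26:
# mount_replica /dev/md/labset/rdsk/d101 /dev/md/labset/dsk/d101 /mnt/LAB
```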

To Make the Data Ready for the Application in the Application Layer

At this stage, you can consider the replicated data accessible. Some application-specific actions might still take place, such as modifying configuration files or links, or other cleanup and recovery processes.
