Tuesday, September 8, 2015

Bidirectional replication with Unison



About Unison


Unison is a file-synchronization tool for Unix and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.
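
To get a feel for how Unison reconciles two replicas, here is a minimal local-only sketch (both directories live on one host; the paths are made up purely for illustration):

mkdir -p /tmp/replicaA /tmp/replicaB
echo hello > /tmp/replicaA/test.txt
# propagate changes in both directions without prompting
unison /tmp/replicaA /tmp/replicaB -auto -batch
ls /tmp/replicaB     # test.txt has been propagated to the second replica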

For this implementation I use unison227-2.27.57-13.el6.x86_64.rpm on RHEL6.

The setup works under the following scenario:

1) Replication is between two hosts (node1 & node2)
2) The synced folder is the user's home directory - /home/rsftp
3) File transfer is done over passwordless SSH
4) Synchronization runs at a 1-minute interval via the cron scheduler

Step 1: Install unison rpm and add user

On both nodes, install the package and create the user:

rpm  -ivh  unison227-2.27.57-13.el6.x86_64.rpm
useradd rsftp 
echo  rsftp!@#  |  passwd  --stdin  rsftp

The last command sets rsftp!@# as the password for the rsftp user.

Step 2: Create SSH keys and exchange public keys between both nodes

As the rsftp user:

On node1:

[rsftp@node1 ~] $ ssh-keygen -t  dsa
[rsftp@node1 ~] $ ssh-copy-id  -i  /home/rsftp/.ssh/id_dsa.pub node2

On node2:

[rsftp@node2 ~] $ ssh-keygen -t  dsa
[rsftp@node2 ~] $ ssh-copy-id  -i  /home/rsftp/.ssh/id_dsa.pub node1


Once done, test by SSHing into the partner node; the expected result is a passwordless login.
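
A quick sanity check might look like this (run from node1; it should complete without any password prompt):

[rsftp@node1 ~] $ ssh node2 hostname
# prints node2's hostname with no password prompt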

Step 3: Modify unison configuration file

The configuration file is /home/rsftp/.unison/default.prf

On node1, put these lines

root = /home/rsftp
root = ssh://node2//home/rsftp
auto = true
batch = true

Do the same on node2:

root = /home/rsftp
root = ssh://node1//home/rsftp
auto = true
batch = true

Step 4: Create a cronjob on both nodes

As rsftp user

crontab -e

put this line

*/1 * * * * /usr/bin/unison &> /dev/null
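
Before relying on the cron job, it can help to run Unison once manually as rsftp so the initial reconciliation (and any host-key prompt) happens interactively; a sketch using the profile above:

[rsftp@node1 ~] $ /usr/bin/unison
# equivalent explicit form, matching default.prf:
[rsftp@node1 ~] $ /usr/bin/unison /home/rsftp ssh://node2//home/rsftp -auto -batch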


Step 5: Create a 10 MB sample file using the dd command

On node1,

[rsftp@node1 ~] $ cd  ~
[rsftp@node1 ~] $ dd if=/dev/zero of=file1.dat  count=1 bs=10M


After one minute has passed, list the files in the rsftp home directory on node2.

On node2,

[rsftp@node2 ~] $ cd  ~
[rsftp@node2 ~] $ ls -lah


You should see file1.dat with a size of 10 MB. Repeat the steps above: create a file2.dat on node2 and list it on node1, as sketched below. Also try other operations such as deleting and modifying a file.
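
A sketch of that reverse test (file name and size are arbitrary):

[rsftp@node2 ~] $ dd if=/dev/zero of=file2.dat count=1 bs=10M
# wait about a minute for the cron job, then on node1:
[rsftp@node1 ~] $ ls -lah /home/rsftp/file2.dat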

Thursday, September 3, 2015

iscsi target, initiator and multipath configuration


What is iSCSI?

iSCSI (Internet Small Computer System Interface) is a TCP/IP-based protocol for sending SCSI commands over IP-based networks. This allows iSCSI infrastructures to extend beyond the local LAN and be used over a WAN.

It is typically viewed as a low-cost alternative to a Fibre Channel SAN; it is, however, limited in speed by the network infrastructure. It is recommended to use a separate, dedicated link for iSCSI.

In this tutorial, I will demonstrate how to install and configure an iSCSI target (server), an initiator (client), and multipathing for redundancy.

This is only a brief explanation; I would suggest you read Redhat Multipath

The setup consists of two virtual machines running RHEL 6 (VMware Player) with a local repo; refer here for setting up local repositories.


A) host - iscsi target (server)

IP: 192.168.136.128 (represent 1st storage controller)
IP: 192.168.136.138 (represent 2nd storage controller)

B) node1 - iscsi initiator (client)

IP: 192.168.136.129


Step 1: Install packages and configure a backing store on the target

To create a target, we first need to install the SCSI target daemon and utility programs:

yum -y install scsi-target-utils

and create a backing store (which will be presented as a LUN to the client). This can be a regular file, a partition, a logical volume, or even an entire drive; for flexibility we will use LVM.

In the VM settings, I added a new 10 GB vdisk to the target server.

If the VM was running while you added the new vdisk, use the scsi-rescan command (part of the sg3_utils package) to rescan for the new disk, so no reboot is required.

scsi-rescan
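
If scsi-rescan is not available, the same rescan can usually be triggered through sysfs; host0 below is an assumption, check /sys/class/scsi_host/ for the right host number:

echo "- - -" > /sys/class/scsi_host/host0/scan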

From fdisk, I can see the new disk /dev/sdc:

fdisk -l |grep -i sd

Disk /dev/sdc: 10.7 GB, 10737418240 bytes
.
.
output truncated

With sdc available, we can proceed with the logical volume creation, using 100% of the disk size:

pvcreate /dev/sdc
vgcreate vgiscsi /dev/sdc
lvcreate -n lvol01 -l 100%FREE vgiscsi

We will specify the backing store in /etc/tgt/targets.conf. Edit this file and append the following at the end (Shift+G in vi jumps to the last line):


<target iqn.2015-09.serveritas.com.ansible:lun1>
        backing-store /dev/vgiscsi/lvol01
</target>
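
Optionally, access to the target can be restricted to the initiator's IP with an initiator-address line; a sketch using node1's IP from this setup (without it, the ACL defaults to ALL, as seen in the tgt-admin output below):

<target iqn.2015-09.serveritas.com.ansible:lun1>
        backing-store /dev/vgiscsi/lvol01
        initiator-address 192.168.136.129
</target>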


Enable and start tgtd daemon

chkconfig tgtd on ; service tgtd start 

Check the backing store created above

tgt-admin -s

Target 1: iqn.2015-09.serveritas.com.ansible:lun1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
            LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 10733 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vgiscsi/lvol01
            Backing store flags:
    Account information:
    ACL information:
    ALL

At this point, bring down the second NIC (eth1) on the target; we will bring it back up later when configuring multipath.
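
On the target, that is:

ifdown eth1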

Step 2: Install packages and configure a LUN on the initiator

yum -y install iscsi-initiator-utils

Enable and start iscsi daemon

chkconfig iscsi on ; service iscsi restart

Before we can start using a target, we must first discover it. Discovering a target stores configuration and discovery information for that target in /var/lib/iscsi/nodes.

iscsiadm -m discovery -t sendtargets -p 192.168.136.128
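
To confirm that the discovery records were stored, the node directory can simply be listed, for example:

ls -R /var/lib/iscsi/nodes/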






Let's have a look at the existing disks on node1 before adding the LUN.




Now let's use the LUN by logging in to the iSCSI target:

iscsiadm -m node -T iqn.2015-09.serveritas.com.ansible:lun1 [ -p 192.168.136.128 ] -l





From fdisk, the new disk appears as sdb. You could make a filesystem on it now, but let's continue with multipath.

Step 3: Install packages and configure multipathing on the initiator

Multipathing allows you to combine multiple physical connections between a server and a storage array into one virtual device. This can be done to provide a more resilient connection to the storage array.

To simulate the scenario above, we now bring the second interface on the target back up.
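
On the target (again assuming eth1 is the second controller link):

ifup eth1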

On the initiator, re-run the discovery, but this time against the second IP of the target:

iscsiadm -m discovery -t sendtargets -p 192.168.136.138

Log in to the target through the second IP address as well, so that both paths (representing the two storage controllers) are logged in:

iscsiadm -m node -T iqn.2015-09.serveritas.com.ansible:lun1 [ -p 192.168.136.138 ] -l

Find the iSCSI disk name

grep "Attached SCSI" /var/log/messages

Sep  3 02:29:27 node1 kernel: sd 35:0:0:1: [sdb] Attached SCSI disk
Sep  3 02:29:44 node1 kernel: sd 36:0:0:1: [sdc] Attached SCSI disk

The output above shows sdb and sdc; they are actually the same disk coming in over two paths.

Now install device-mapper-multipath on node1 and enable it.

yum -y install device-mapper-multipath
chkconfig multipathd on
service multipathd start

Monitor the status with the multipath command:

multipath -l
multipath -ll

If everything is good, the paths will show as active ready.

RHEL 6 supports multipathing through the dm-multipath subsystem, in which the kernel device mapper is used to create the virtual device.

Once device-mapper-multipath is installed, configured, and started, the device node will be listed in /dev/mapper. In this example the name is 1IET\x20\x20\x20\x20\x2000010001, which is not user friendly :(

To make it more human readable, we run:

mpathconf --user_friendly_names y

This puts the following lines in /etc/multipath.conf:

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}

The device name now becomes mpatha, but the WWID still contains spaces between 1IET and 00010001.
Remove the spaces for simplicity by adding this line to the defaults section of /etc/multipath.conf:

getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
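
For clarity, the resulting defaults section would look roughly like this (a sketch; it simply combines the callout with the user_friendly_names setting already present):

defaults {
        user_friendly_names yes
        getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
}

Restart multipathd afterwards (service multipathd restart) for the change to take effect.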











There are situations where we want to identify which device is used for what; for example, when several multipath disks come in as mpatha, mpathb, mpathc. Let's give ours an alias such as webdata.

Find the WWID of the disk:

scsi_id --whitelisted --device=/dev/sdb





Modify /etc/multipath.conf as below:

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names no
}

## Give an alias/name to a path so it is easy to identify
multipaths {
        multipath {
                wwid "1IET     00010001"
                alias "webdata"
        }
}

Restart multipathd and verify:

service multipathd restart
multipath -ll

Step 4: Path down simulation

On the target server, let's bring down eth1 (the second storage controller link):

ifdown eth1

On the initiator, check the multipath status:

multipath -ll

The path through the second disk (sdb) now shows as failed faulty.

Bring eth1 back up on the target; the status should return to active ready.

/var/log/messages indicates that the connection was lost and, finally, that once the link is ready the paths are active again.


Sep  6 23:26:07 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep  6 23:26:13 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep  6 23:26:19 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep  6 23:26:25 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep  6 23:26:32 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep  6 23:26:50 node1 multipathd: webdata: sdb - directio checker reports path is down
Sep  6 23:26:50 node1 iscsid: connection2:0 is operational after recovery (57 attempts)
Sep  6 23:26:56 node1 multipathd: webdata: sdb - directio checker reports path is up
Sep  6 23:26:56 node1 multipathd: 8:16: reinstated
Sep  6 23:26:56 node1 multipathd: webdata: remaining active paths: 2

For normal operation we will make a filesystem on the LUN and mount it somewhere on the server:

pvcreate /dev/mapper/webdata
vgcreate vgwebdata /dev/mapper/webdata
lvcreate -n lvol01  -l 100%FREE vgwebdata
mkfs.ext4 /dev/mapper/vgwebdata-lvol01
mount /dev/mapper/vgwebdata-lvol01 /var/www/html
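
To make the mount persistent across reboots, it would typically go in /etc/fstab with the _netdev option, so it is only mounted after the network and iSCSI services are up; a sketch:

/dev/mapper/vgwebdata-lvol01  /var/www/html  ext4  _netdev  0 0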