What is iSCSI?
iSCSI (Internet Small Computer System Interface) is a TCP/IP-based protocol for sending SCSI commands over IP networks. This allows iSCSI infrastructures to extend beyond the local LAN and be used over a WAN.
It is typically viewed as a low-cost alternative to a Fibre Channel SAN; it is, however, limited in speed by the network infrastructure, so it is recommended to use a separate, dedicated link for iSCSI.
In this tutorial, I will demonstrate how to install and configure an iSCSI target (server), an initiator (client), and multipath for redundancy.
For a brief explanation of multipathing, I would suggest reading the Red Hat Multipath documentation.
The setup consists of two virtual machines running RHEL6 (VMware Player) with a local repo; refer here for setting up local repositories.
A) host - iSCSI target (server)
IP: 192.168.136.128 (represent 1st storage controller)
IP: 192.168.136.138 (represent 2nd storage controller)
B) node1 - iSCSI initiator (client)
IP: 192.168.136.129
Step 1: Install packages and configure a backing store on the target
To create a target, we first need to install the SCSI target daemon and utility programs:
yum -y install scsi-target-utils
Next, create a backing store (it will be presented as a LUN to the client). This can be a regular file, a partition, a logical volume, or even an entire drive; for flexibility we will use LVM.
In the VM settings, I added a new 10 GB vdisk to the target server.
If the VM is running while you add the new vdisk, use the scsi-rescan command (part of the sg3_utils package) to rescan for the new disk so that no reboot is required.
scsi-rescan
From fdisk I can see the new disk /dev/sdc:
fdisk -l |grep -i sd
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
.
.
output truncated
With sdc available, we can proceed with logical volume creation, using 100% of the disk size.
pvcreate /dev/sdc
vgcreate vgiscsi /dev/sdc
lvcreate -n lvol01 -l 100%FREE vgiscsi
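As a quick sanity check that the logical volume was created as expected (optional):
lvs vgiscsi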
We will specify the backing store in /etc/tgt/targets.conf. Edit this file (in vi, Shift+G jumps to the last line) and append:
<target iqn.2015-09.serveritas.com.ansible:lun1>
    backing-store /dev/vgiscsi/lvol01
</target>
Enable and start tgtd daemon
chkconfig tgtd on ; service tgtd start
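If you later edit targets.conf while tgtd is already running, the configuration can usually be re-applied without a full restart (targets that are in use may be skipped):
tgt-admin --update ALL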
Check the backing store created above
tgt-admin -s
Target 1: iqn.2015-09.serveritas.com.ansible:lun1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10733 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vgiscsi/lvol01
            Backing store flags:
    Account information:
    ACL information:
        ALL
At this point, bring down the second NIC (eth1) on the target; we will bring it back up later when configuring multipath.
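For example, on the target (the same command used later in the path-down test):
ifdown eth1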
Step 2: Install packages and configure a LUN on the initiator
yum -y install iscsi-initiator-utils
Enable and start iscsi daemon
chkconfig iscsi on ; service iscsi restart
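Optionally, check the initiator's own IQN, which the package generates and stores in /etc/iscsi/initiatorname.iscsi (the exact name will differ on your system; it does not matter here since the target's ACL allows all initiators):
cat /etc/iscsi/initiatorname.iscsi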
Before we can start using a target, we must first discover it. Discovering a target stores configuration and discovery information for that target in /var/lib/iscsi/nodes.
iscsiadm -m discovery -t sendtargets -p 192.168.136.128
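The discovered target should be reported in a form similar to the following (3260 is the default iSCSI port; the portal group tag after the comma may differ):
192.168.136.128:3260,1 iqn.2015-09.serveritas.com.ansible:lun1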
Let's have a look at the existing disks on node1 before adding the LUN.
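The same quick check used on the target works here:
fdisk -l | grep -i sd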
Now let's use the LUN by logging in to the iSCSI target:
iscsiadm -m node -T iqn.2015-09.serveritas.com.ansible:lun1 [ -p 192.168.136.128 ] -l
From fdisk, the new disk appears as sdb. You could make a filesystem on it now, but let's continue with multipath.
Step 3: Install packages and configure multipathing on the initiator
Multipathing allows you to combine multiple physical connections between a server and a storage array into one virtual device. This can be done to provide a more resilient connection to the storage array.
To simulate the above scenario, we now bring up the second interface on the target.
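For example, on the target:
ifup eth1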
On the initiator, re-run discovery, this time against the second IP of the target:
iscsiadm -m discovery -t sendtargets -p 192.168.136.138
Log in to the target using both IP addresses (representing the two storage controllers):
iscsiadm -m node -T iqn.2015-09.serveritas.com.ansible:lun1 [ -p 192.168.136.138 ] -l
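You can confirm that two sessions are now established (one per portal) with:
iscsiadm -m session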
Find the iSCSI disk name
grep "Attached SCSI" /var/log/messages
Sep 3 02:29:27 node1 kernel: sd 35:0:0:1: [sdb] Attached SCSI disk
Sep 3 02:29:44 node1 kernel: sd 36:0:0:1: [sdc] Attached SCSI disk
The output above shows sdb and sdc; they are actually the same disk coming in over two paths.
Now install device-mapper-multipath on node1 and enable it.
yum -y install device-mapper-multipath
chkconfig multipathd on
service multipathd start
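Note: on RHEL6, multipathd will not start if /etc/multipath.conf is missing. If that is the case on your system, create a default configuration first with:
mpathconf --enable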
Monitor the status with multipath command.
multipath -l
multipath -ll
If everything is good, the paths will show as active ready.
RHEL6 supports multipathing using the dm-multipath subsystem, in which the kernel device mapper is used to create the virtual device.
Once device-mapper-multipath is installed, configured and started, the device node will be listed in /dev/mapper. In this example the name is 1IET\x20\x20\x20\x20\x2000010001, which is not user friendly :(
To make it more human readable, we run:
mpathconf --user_friendly_names y
This puts the following lines in /etc/multipath.conf:
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}
The device name now becomes mpatha, but the underlying WWID still contains the spaces between 1IET and 00010001.
Remove the spaces for simplicity by adding this line to the defaults section of /etc/multipath.conf:
getuid_callout "/lib/udev/scsi_id --replace-whitespace --whitelisted --device=/dev/%n"
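For the change to take effect, restart multipathd and re-check the device name (the same commands are used again later in this tutorial):
service multipathd restart
multipath -ll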
There are situations where we want to identify which device is used for what, for example when several multipath disks show up as mpatha, mpathb, mpathc, and so on. Let's give our disk an alias such as webdata.
Find the WWID of the disk:
scsi_id --whitelisted --device=/dev/sdb
Modify /etc/multipath.conf as below:
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names no
}
## Give an alias/name to a path so it is easy to identify
multipaths {
        multipath {
                wwid "1IET 00010001"
                alias "webdata"
        }
}
Restart multipathd:
service multipathd restart
multipath -ll
Step 4: Path down simulation
On the target server, let's bring down eth1 (the second storage controller link):
ifdown eth1
On the initiator, check the multipath status:
multipath -ll
The second disk (sdb) status now shows failed faulty.
Bring eth1 back up on the target; the status should return to active ready.
/var/log/messages shows the connection being lost and, once the link is ready again, the paths coming back to active:
Sep 6 23:26:07 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep 6 23:26:13 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep 6 23:26:19 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep 6 23:26:25 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep 6 23:26:32 node1 iscsid: connect to 192.168.136.138:3260 failed (No route to host)
Sep 6 23:26:50 node1 multipathd: webdata: sdb - directio checker reports path is down
Sep 6 23:26:50 node1 iscsid: connection2:0 is operational after recovery (57 attempts)
Sep 6 23:26:56 node1 multipathd: webdata: sdb - directio checker reports path is up
Sep 6 23:26:56 node1 multipathd: 8:16: reinstated
Sep 6 23:26:56 node1 multipathd: webdata: remaining active paths: 2
In normal operation we would create a filesystem on the LUN and mount it somewhere on the server:
pvcreate /dev/mapper/webdata
vgcreate vgwebdata /dev/mapper/webdata
lvcreate -n lvol01 -l 100%FREE vgwebdata
mkfs.ext4 /dev/mapper/vgwebdata-lvol01
mount /dev/mapper/vgwebdata-lvol01 /var/www/html
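To make the mount persistent across reboots, the /etc/fstab entry should use the _netdev option so the system waits for networking and the iSCSI service before mounting (a sketch based on the mount point used above):
/dev/mapper/vgwebdata-lvol01  /var/www/html  ext4  _netdev  0 0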