How to Connect to an iSCSI LUN in Linux
SSD Nodes' Managed Support Department can install and configure your iSCSI initiators.
Connecting from a single server to a single iSCSI LUN
Instruction notes:
Replace the "ISCSI_USERNAME" token with the "username" of the target volume.
Replace the "ISCSI_PASSWORD" token with the corresponding "password" of the target volume.
Replace the "ISCSI_TARGET" token with the corresponding host IP address of the target volume.Setup Instructions:
Install the open-iscsi package for your OS:
CentOS 5, CentOS 6:
yum install iscsi-initiator-utils
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
apt-get install open-iscsi
Debian/Ubuntu only: fix file paths for iscsiadm:
ln -s /etc/{iscsid.conf,initiatorname.iscsi} /etc/iscsi/
Back up the original iscsid.conf configuration file, then create the new configuration:
cp /etc/iscsi/iscsid.conf /etc/iscsi/iscsid.conf.backup
Open /etc/iscsi/iscsid.conf with a command line text editor and replace the contents with the following:
node.startup = automatic
node.session.auth.username = ISCSI_USERNAME
node.session.auth.password = ISCSI_PASSWORD
discovery.sendtargets.auth.username = ISCSI_USERNAME
discovery.sendtargets.auth.password = ISCSI_PASSWORD
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
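Because iscsid.conf now stores the CHAP credentials in plain text, it is generally a good idea (though not required by this guide) to restrict the file's permissions:
chmod 600 /etc/iscsi/iscsid.conf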
Start iscsid:
CentOS 5, CentOS 6:
/etc/init.d/iscsi start
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
/etc/init.d/open-iscsi restart
Run a discovery against the iSCSI target host:
iscsiadm -m discovery -t sendtargets -p ISCSI_TARGET
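As a purely hypothetical example, a discovery against a target at 10.0.0.50 would print the portal and the target IQN, along the lines of:
iscsiadm -m discovery -t sendtargets -p 10.0.0.50
10.0.0.50:3260,1 iqn.2001-05.com.equallogic:0-example-target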
Restart the iSCSI service (since node.startup was set to automatic in iscsid.conf, the initiator will automatically log in to the target host).
CentOS 5, CentOS 6:
/etc/init.d/iscsi restart
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
/etc/init.d/open-iscsi restart
You should now see an additional drive on the system. You can print out the drive device with the following command:
find /sys/devices/platform/host* -name block\* -exec ls -la '{}' \; | sed s#^.*../block/#/dev/#g
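As an alternative check (not required), comparing the list of disks before and after the login will also reveal the new device; for example:
fdisk -l | grep '^Disk /dev/'
cat /proc/partitions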
Connecting from multiple servers to a single iSCSI LUN
Connecting from multiple servers to a single iSCSI LUN is possible, but with special requirements.
Please note: The Linux ext3 and ext4 filesystems are NOT cluster aware. There are a number of clustering filesystems available, but for simplicity this guide will cover using OCFS2, the Oracle Cluster File System for Linux.
- The filesystem you use on the iSCSI mount must be cluster aware.
- Your applications must also be cluster aware to use this filesystem.
Requirements:
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, Ubuntu 12.04 servers updated (yum update or apt-get/aptitude)
iSCSI target (only one needed)
Each server should already be connected to the iSCSI target, using the first section of this guide above. You can verify this by running:
iscsiadm -m session
This should show you information like:
tcp: [1] 10.x.x.x (iqn.2001-05.com.equallogic:0-xxx-xxx-xxx-xxx)
Installation of Required Software (done on every server):
CentOS 5, CentOS 6:
Retrieve and install the OCFS2 RPMs. The OCFS2 kernel module must match the kernel version running on the server, so you will need the OCFS2 module RPM for your kernel as well as the ocfs2-tools RPM:
http://oss.oracle.com/projects/ocfs2/files/RedHat/
http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/
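As an illustration only (the file names below are placeholders; the real ones depend on the kernel version reported by uname -r and on the packages currently published at the URLs above), the install on CentOS would look something like:
uname -r
rpm -Uvh ocfs2-<kernel-version>-<module-version>.rpm ocfs2-tools-<tools-version>.rpm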
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
apt-get install ocfs2-tools ocfs2console
Configuration (done on every server):
CentOS 5, CentOS 6:
O2CB is disabled by default. We will run a simple perl script to enable it. This needs to be run on every server:
perl -pi.bak -e 's/O2CB_ENABLED=false/O2CB_ENABLED=true/' /etc/sysconfig/o2cb
You can verify this enabled O2CB by running:
grep O2CB_ENABLED /etc/sysconfig/o2cb
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
Open the file /etc/default/o2cb and change the following line:
O2CB_ENABLED=false
to
O2CB_ENABLED=true
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, Ubuntu 12.04:
Create the configuration directory, if it doesn't exist:
mkdir /etc/ocfs2
OCFS2 will then need to be configured. A sample of /etc/ocfs2/cluster.conf has been provided below:
node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node01.internal.example.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = node02.internal.example.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.0.0.3
        number = 2
        name = node03.internal.example.com
        cluster = ocfs2
cluster:
        node_count = 3
        name = ocfs2
Please note:
1) The node names should match the hostnames of each machine and resolve to private IPs. If they don't, edit /etc/hosts with the appropriate entries; an example is given below:
10.0.0.1 node01 node01.internal.example.com
10.0.0.2 node02 node02.internal.example.com
10.0.0.3 node03 node03.internal.example.com
2) The IP addresses should match the private IPs of each machine.
3) This file MUST be the same across all nodes in this cluster.
4) The cluster name needs to match the o2cb configuration file (ocfs2 is the default cluster name).
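Since the file must be identical on every node, one simple approach is to create it on the first node and copy it to the others, for example (substitute your own hostnames):
scp /etc/ocfs2/cluster.conf root@node02.internal.example.com:/etc/ocfs2/
scp /etc/ocfs2/cluster.conf root@node03.internal.example.com:/etc/ocfs2/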
Starting O2CB (On All servers)
Now you will need to start the O2CB daemons on each server to start the cluster. This is done by running:
/etc/init.d/o2cb start
Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
Partitioning and Formatting the disk (iSCSI Target)
This should only be done on ONE machine, since we're setting up only one target. Verify which disk device the iSCSI target appears as (/dev/sda in this case). If you select the wrong disk here, you will destroy data on another disk in your system.
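If you are unsure which device is the iSCSI LUN, you can re-run the find command from the first section of this guide, or, on most open-iscsi versions, print the session details and look for the attached disk:
iscsiadm -m session -P 3 | grep -i 'attached scsi disk'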
First, we will use parted to partition the disk (if this program isn't already installed, run yum install parted or apt-get install parted).
parted -s /dev/sda mklabel msdos
parted -s -- /dev/sda mkpart primary 0 -1
To view the new partition table, run:
fdisk -l /dev/sda
Disk /dev/sda: 8453 MB, 8453619712 bytes
255 heads, 63 sectors/track, 1027 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1               1        1028     8255487+  83  Linux
Now, we are going to format the disk with some sane default settings and give it a label of cluster-storage:
mkfs.ocfs2 -b 4k -C 32k -N4 -L cluster-storage /dev/sda1
mkfs.ocfs2 1.2.7
Filesystem label=cluster-storage
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=8453586944 (257983 clusters) (2063864 blocks)
8 cluster groups (tail covers 32191 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
If you require more than 4 nodes, change -N to a number higher than 4. This creates a file system with a 4096-byte block size and a 32768-byte (32k) cluster size.
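For example, a variant of the same format command sized for up to eight node slots (adjust the device and label to your own setup) would be:
mkfs.ocfs2 -b 4k -C 32k -N 8 -L cluster-storage /dev/sda1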
Mounting the new partition (All servers)
We now need to have the partition table updated on all the servers in the cluster. In this case all the servers have /dev/sda as the iSCSI target.
We will run the following to re-read the partition table:
blockdev --rereadpt /dev/sda
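If blockdev is not available on one of the nodes, partprobe (shipped with the parted package) is a common alternative for the same purpose:
partprobe /dev/sda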
Next, we will want to create a mount point on the servers for this cluster.
mkdir /cluster-storage
Once the mount point is created, we will mount the partition. We are going to do this using the label we set in the format command. This makes the command portable across all servers, since the label will be the same even though the device name might differ (/dev/sda1, /dev/sdb1, etc.).
mount -L cluster-storage /cluster-storage
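If you are unsure which device carries the label on a given server, blkid can usually confirm it:
blkid | grep cluster-storage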
Next, we will want to make sure it mounted.
mount | grep ocfs
This should display:
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /cluster-storage type ocfs2 (rw,_netdev,heartbeat=local)
This means the filesystem mounted on that server. Now, we will want to check the o2cb status on each server:
/etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active
This shows that the clustering is on and active. You will need to verify this on every machine in the cluster.
Does it work?
The servers now have the filesystem mounted and o2cb says they are working correctly, so we need to test it by writing files.
On each server, we will echo to a file on the shared filesystem to see if it works correctly. On the first server in the cluster, do the following:
echo testing1 >> /cluster-storage/test.txt
On the next machine, let's see if we can read that file:
cat /cluster-storage/test.txt
This should give you:
testing1
Now, let us echo to it again from the second machine:
echo testing2 >> /cluster-storage/test.txt
Then, let's echo from the third machine:
echo testing3 >> /cluster-storage/test.txt
On the first machine, let's read the file again to see if all the updates were written correctly:
cat /cluster-storage/test.txt
This should show:
testing1
testing2
testing3
Your filesystem is now clustering correctly.
Automatic Mounting (on bootup, on each machine)
CentOS 5, CentOS 6:
To have the cluster mount at startup, we only need to make a few additions to your system. The first is to make sure the netfs service is set to start at bootup:
chkconfig --list netfs
This should show:
netfs           0:off   1:off   2:off   3:on    4:on    5:on    6:off
If your current runlevel (more than likely runlevel 3) is listed as off, run the following:
chkconfig --level 3 netfs on
This will enable the netfs service to start at bootup on the server.
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, Ubuntu 12.04:
Next, we will want to modify the /etc/fstab file to add the mount point. Here is an example for our configuration:
LABEL=cluster-storage /cluster-storage ocfs2 _netdev,defaults 0 0
This tells the system to use the label on the partition to mount it at /cluster-storage, with the filesystem type given explicitly as ocfs2. The _netdev option tells the system not to try to mount it until the network is brought online; by then, iscsid should also be online, and the mount will succeed. The last two fields are the dump and fsck order fields, and 0 for both is a good default in this case. If the storage doesn't mount at boot, you may need to add this line to /etc/rc.local:
mount -a
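You can also sanity-check the new fstab entry without rebooting by unmounting the volume and letting mount -a pick it back up (avoid doing this while the storage is in use):
umount /cluster-storage
mount -a
mount | grep cluster-storage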