How to Connect to an iSCSI LUN in Linux

SSD Nodes' Managed Support Department can install and configure your iSCSI initiators.
 

Connecting from a single server to a single iSCSI LUN

 
Instruction notes: 
Replace the "ISCSI_USERNAME" token with the "username" of the target volume.
Replace the "ISCSI_PASSWORD" token with the corresponding "password" of the target volume.
Replace the "ISCSI_TARGET" token with the corresponding host IP address of the target volume.
 
Setup Instructions: 
Install the iSCSI initiator package for your OS:

CentOS 5, CentOS 6
yum install iscsi-initiator-utils 
 
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
apt-get install open-iscsi 
 
Debian/Ubuntu only: fix file paths for iscsiadm: 
ln -s /etc/{iscsid.conf,initiatorname.iscsi} /etc/iscsi/ 
 
Back up the original iscsid.conf configuration file:
cp /etc/iscsi/iscsid.conf /etc/iscsi/iscsid.conf.backup

Open /etc/iscsi/iscsid.conf with a command line text editor and replace the contents with the following:

node.startup = automatic 
node.session.auth.username = ISCSI_USERNAME
node.session.auth.password = ISCSI_PASSWORD
discovery.sendtargets.auth.username = ISCSI_USERNAME
discovery.sendtargets.auth.password = ISCSI_PASSWORD
node.session.timeo.replacement_timeout = 120 
node.conn[0].timeo.login_timeout = 15 
node.conn[0].timeo.logout_timeout = 15 
node.conn[0].timeo.noop_out_interval = 10 
node.conn[0].timeo.noop_out_timeout = 15 
node.session.iscsi.InitialR2T = No 
node.session.iscsi.ImmediateData = Yes 
node.session.iscsi.FirstBurstLength = 262144 
node.session.iscsi.MaxBurstLength = 16776192 
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536
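
If you prefer to fill in the ISCSI_USERNAME and ISCSI_PASSWORD placeholders non-interactively after pasting the file, a sed one-liner like the following can do it (myuser and mypassword are example values for illustration only; adjust the command if your password contains characters that are special to sed):

sed -i -e 's/ISCSI_USERNAME/myuser/g' -e 's/ISCSI_PASSWORD/mypassword/g' /etc/iscsi/iscsid.conf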
 
Start iscsid: 
CentOS 5, CentOS 6
/etc/init.d/iscsi start 

Debian 6, Ubuntu 10.04, Ubuntu 12.04:
/etc/init.d/open-iscsi restart 
 
Run a discovery against the iscsi target host: 
iscsiadm -m discovery -t sendtargets -p ISCSI_TARGET
 
Restart the iscsi service (since node.startup was set to automatic in iscsid.conf, the initiator will automatically log in to the target host).
CentOS 5, CentOS 6 
/etc/init.d/iscsi restart 

Debian 6, Ubuntu 10.04, Ubuntu 12.04: 
/etc/init.d/open-iscsi restart 

You should now see an additional drive on the system. You can print out the drive device with the following command: 
find /sys/devices/platform/host* -name block\* -exec ls -la '{}' \; | sed s#^.*../block/#/dev/#g 
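
Alternatively, most versions of iscsiadm can show the attached device directly; the exact output format varies by version, but the attached SCSI disk lines near the end of the output name the new device:

iscsiadm -m session -P 3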
 
 
Connecting from multiple servers to a single iSCSI LUN
 
Connecting from multiple servers to a single iSCSI LUN is possible, but with special requirements:
  • The filesystem you use on the iSCSI mount must be cluster aware.
  • Your applications must also be cluster aware to use this filesystem.
Please note: The Linux ext3 and ext4 filesystems are NOT cluster aware. There are a number of clustering filesystems available, but for simplicity this guide will cover using OCFS2, the Oracle Cluster File System for Linux. 
 
Requirements:
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, or Ubuntu 12.04 servers, fully updated (yum update or apt-get update && apt-get upgrade)
An iSCSI target (only one is needed)
 
Each server should already be connected to the iSCSI target (using the first section of this guide above). You can verify this by running:

iscsiadm -m session

This should show you information like:

tcp: [1] 10.x.x.x (iqn.2001-05.com.equallogic:0-xxx-xxx-xxx-xxx)
 
Installation of Required Software (done on every server):

CentOS 5, CentOS 6:
Retrieve and install the OCFS2 RPMs. The kernel module package must match the kernel version running on the server, so you will need the OCFS2 module RPM built for your kernel as well as the ocfs2-tools RPM.
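
A minimal sketch of this step, assuming the RPMs have already been downloaded from Oracle's OCFS2 download site (the exact file names depend on your kernel and the published versions, so treat the names below as placeholders):

uname -r
rpm -Uvh ocfs2-tools-<version>.rpm ocfs2-<kernel-version>-<version>.rpm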
 
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
apt-get install ocfs2-tools ocfs2console
 
 
Configuration (done on every server):
 
CentOS 5, CentOS 6:
O2CB is disabled by default. We will run a simple perl one-liner to enable it. This needs to be run on every server:

perl -pi.bak -e 's/O2CB_ENABLED=false/O2CB_ENABLED=true/' /etc/sysconfig/o2cb

You can verify this enabled O2CB by running:

grep O2CB_ENABLED /etc/sysconfig/o2cb
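
If O2CB was enabled successfully, this prints:

O2CB_ENABLED=true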
 
Debian 6, Ubuntu 10.04, Ubuntu 12.04:
Open /etc/default/o2cb and change the following line:
 
O2CB_ENABLED=false
to
O2CB_ENABLED=true
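
If you prefer to make this change non-interactively (mirroring the CentOS one-liner above), a sed command such as the following should work:

sed -i.bak 's/O2CB_ENABLED=false/O2CB_ENABLED=true/' /etc/default/o2cb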
 
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, Ubuntu 12.04:
 
Create the configuration directory, if it doesn't exist:
 
mkdir /etc/ocfs2
 
OCFS2 will then need to be configured. A sample /etc/ocfs2/cluster.conf is provided below (the parameter lines under each stanza must be indented; a single tab is the standard format):

node:
	ip_port = 7777
	ip_address = 10.0.0.1
	number = 0
	name = node01.internal.example.com
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = node02.internal.example.com
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.3
	number = 2
	name = node03.internal.example.com
	cluster = ocfs2

cluster:
	node_count = 3
	name = ocfs2
 
Please note:
1) The node names should match the hostnames of each machine and must resolve to their private IPs. If they don't, edit /etc/hosts with the appropriate entries; an example is given below:
 
10.0.0.1               node01 node01.internal.example.com
10.0.0.2               node02 node02.internal.example.com
10.0.0.3               node03 node03.internal.example.com
 
2) The IP addresses should match the private IPs of each machine.
3) This file MUST be the same across all nodes in this cluster.
4) The cluster name needs to match the name in the o2cb configuration file (ocfs2 is the default cluster name).
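
Because the file must be identical on every node (note 3 above), you can create it on one node and copy it to the others. For example, using the hostnames from the sample configuration:

scp /etc/ocfs2/cluster.conf root@node02.internal.example.com:/etc/ocfs2/
scp /etc/ocfs2/cluster.conf root@node03.internal.example.com:/etc/ocfs2/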
 
Starting O2CB (on all servers)

Now you will need to start the O2CB daemon on each server to bring up the cluster. This is done by running:

/etc/init.d/o2cb start

Loading module "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
 
Partitioning and Formatting the disk (iSCSI Target)

This should only be done on ONE machine, since we're setting up only one target. Verify which device the iSCSI target appears as (/dev/sda in this case). If you select the wrong disk here, you will destroy the data on another disk in your system.
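
If you are unsure which device the iSCSI LUN appeared as, you can re-run the device-listing command from the first section of this guide, or compare the output of fdisk -l before and after logging in to the target:

fdisk -l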

First we will use parted to partition the disk (if this program isn't already installed, run yum install parted or apt-get install parted)

parted -s /dev/sda mklabel msdos
parted -s -- /dev/sda mkpart primary 0 -1
 
To view the new partition table, run:

fdisk -l /dev/sda

Disk /dev/sda: 8453 MB, 8453619712 bytes
255 heads, 63 sectors/track, 1027 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot    Start    End    Blocks      Id  System
/dev/sda1          1    1028   8255487+    83  Linux
 
Now, we are going to format the disk with some sane default settings and give it a label of cluster-storage:

mkfs.ocfs2 -b 4k -C 32k -N4 -L cluster-storage /dev/sda1

mkfs.ocfs2 1.2.7
Filesystem label=cluster-storage
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=8453586944 (257983 clusters) (2063864 blocks)
8 cluster groups (tail covers 32191 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4

Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful

If you require more than 4 nodes, change -N to a number higher than 4. This creates a filesystem with a 4096-byte block size and a 32768-byte (32k) cluster size.
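
If you only discover later that you need more node slots on an already formatted volume, the tunefs.ocfs2 utility from ocfs2-tools can normally increase the slot count. A sketch, assuming a new target of 8 slots (check the man page for your version, and make sure the volume is unmounted on all nodes first):

tunefs.ocfs2 -N 8 /dev/sda1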
 
Mounting the new partition (All servers)

We now need to have the partition table updated on all the servers in the cluster. In this case, all the servers see the iSCSI target as /dev/sda.

We will run the following on each server to re-read the partition table:

blockdev --rereadpt /dev/sda

Next, we will want to create a mount point on the servers for this cluster.

mkdir /cluster-storage

Once the mount point is created, we will mount the partition. We are going to do this via the label we set in the format command. This makes the command portable across all servers, since the label is the same everywhere even though the device name might differ (/dev/sda1, /dev/sdb1, etc.).

mount -L cluster-storage /cluster-storage
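
If you ever need to mount by device node instead of by label (for example, while troubleshooting), the equivalent command on a server where the LUN appears as /dev/sda would be:

mount -t ocfs2 /dev/sda1 /cluster-storage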

Next, we will want to make sure it mounted.

mount | grep ocfs

This should display:

ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sda1 on /cluster-storage type ocfs2 (rw,_netdev,heartbeat=local)

This means the filesystem mounted on that server. Now, we will want to check the o2cb status on each server:

/etc/init.d/o2cb status

Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold: 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active

This shows that the clustering is on and active. You will need to verify this on every machine in the cluster.
 
Does it work?

The servers now have the filesystem mounted and o2cb reports they are working correctly, so we will test it by writing files.
On each server, we will append to a shared file to confirm that changes are visible everywhere. On the first server in the cluster, do the following:

echo testing1 >> /cluster-storage/test.txt

On the next machine, let's see if we can read that file:

cat /cluster-storage/test.txt

This should give you:

testing1

Now, let us echo to it again from the second machine:

echo testing2 >> /cluster-storage/test.txt

Then, let's echo from the third machine:

echo testing3 >> /cluster-storage/test.txt

On the first machine, let's read the file again to see if all the updates were made correctly:

cat /cluster-storage/test.txt

This should show:

testing1
testing2
testing3

Your filesystem is now clustering correctly.
 
Automatic Mounting (on bootup, on each machine)

CentOS 5, CentOS 6:
To have the cluster mount at startup, we only need to make a few additions to your system. The first is to make sure the netfs service is set to start at bootup:

chkconfig --list netfs

This should show:

netfs           0:off   1:off   2:off   3:on    4:on    5:on    6:off

If your current runlevel (more than likely runlevel 3) is listed as off, run the following:

chkconfig --level 3 netfs on

This will enable the netfs service to start at bootup on the server. 
 
CentOS 5, CentOS 6, Debian 6, Ubuntu 10.04, Ubuntu 12.04:
Next, we will want to modify the /etc/fstab file to add the mount point. Here is an example for our configuration:

LABEL=cluster-storage  /cluster-storage  ocfs2  _netdev,defaults  0 0

This tells the system to mount the partition identified by its label onto /cluster-storage as an ocfs2 filesystem. The _netdev option tells the system not to try to mount it until the network is brought online; by then, iscsid should also be online, and the mount will succeed. The last two fields are the dump and fsck order fields; 0 for both is a good default in this case.
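
You can test the new entry without rebooting by unmounting the volume and letting mount process /etc/fstab again:

umount /cluster-storage
mount -a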
 
If the storage doesn't mount at boot, you may need to add this line to /etc/rc.local:
 
mount -a
