Tag Archives: RHCS

SCSI-3 Persistent Reservations on Fedora Core 20 with targetcli over iSCSI and Red Hat Cluster

In this article, I’ll show how to set up SCSI-3 Persistent Reservations on Fedora Core 20 using targetcli, serving a pair of iSCSI LUNs to a simple Red Hat Cluster that will host a failover filesystem, for the purposes of testing the iSCSI implementation.

The Linux IO target (LIO) (http://linux-iscsi.org/wiki/LIO) has been the standard in-kernel Linux SCSI target since kernel version 2.6.38. It supports a rapidly growing number of fabric modules, and all existing Linux block devices as backstores; for the purposes of this demonstration, the important point is that it can operate as an iSCSI target. targetcli is the tool used to configure LIO. SCSI-3 persistent reservations are required by a number of cluster storage configurations for I/O fencing and failover/retakeover, so LIO can serve as the foundation for high-end clustering solutions such as Red Hat Cluster Suite. You can read more about persistent reservations here.
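Once the cluster is built, a quick way to see persistent reservations in action is the sg_persist utility from the sg3_utils package, which can query the registered keys and the current reservation on a LUN. A minimal sketch, assuming the iSCSI LUN shows up on a cluster node as /dev/sdb (a hypothetical device name):

    # Install sg3_utils on one of the cluster nodes (CentOS 6.5)
    yum install -y sg3_utils

    # List the registration keys currently held on the LUN
    sg_persist --in --read-keys /dev/sdb

    # Show the current reservation (holder key and reservation type), if any
    sg_persist --in --read-reservation /dev/sdb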

The nodes in the lab are as follows:

  • 10.1.1.103 – centos03 – Red Hat Cluster node 1 on CentOS 6.5
  • 10.1.1.104 – centos04 – Red Hat Cluster node 2 on CentOS 6.5
  • 10.1.1.108 – fedora01 – Fedora Core 20 storage node
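Name resolution between the machines can be handled with a few /etc/hosts entries replicated on all three hosts; a minimal sketch based on the addressing above (assuming DNS is not used in the lab):

    # /etc/hosts entries for the lab nodes
    10.1.1.103   centos03
    10.1.1.104   centos04
    10.1.1.108   fedora01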

Installation

I’ll start by installing targetcli onto fedora01:
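(A minimal sketch, assuming the stock Fedora 20 repositories and that the packaging provides a target.service unit to restore the LIO configuration at boot.)

    # Install targetcli and its dependencies
    yum install -y targetcli

    # If the packaging ships a target.service unit, enable it so the saved
    # configuration is restored at boot, and start it now
    systemctl enable target.service
    systemctl start target.service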

Let’s check that it has been installed correctly:
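(A sketch; the exact package version string will vary with the repositories in use.)

    # Confirm that the package is installed
    rpm -q targetcli

    # Run targetcli non-interactively and list the object tree;
    # a fresh install shows empty backstores and no iSCSI targets
    targetcli ls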

Before proceeding, make sure that any existing configuration is removed:
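(A sketch using targetcli’s clearconfig command, which deletes every backstore and target object; it refuses to run without the confirm flag.)

    # Wipe any existing LIO configuration so we start from a clean slate
    targetcli clearconfig confirm=True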

Continue reading

GFS2 Implementation Under RHEL

This article will demonstrate setting up a simple RHCS (Red Hat Cluster Suite) two-node cluster, with the end goal of having a 50GB LUN shared between two servers, thus providing clustered shared storage to both nodes. This will enable applications running on the nodes to write to a shared filesystem, perform correct locking, and ensure filesystem integrity.
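To give a sense of where this ends up, the shared filesystem itself is created with mkfs.gfs2, using DLM locking and one journal per node. A minimal sketch, assuming a hypothetical cluster name mycluster and multipath device /dev/mapper/mpatha (not necessarily the names used in the actual build):

    # Create a GFS2 filesystem: DLM locking, <clustername>:<fsname> lock table,
    # and two journals (one per cluster node)
    mkfs.gfs2 -p lock_dlm -t mycluster:shared_fs -j 2 /dev/mapper/mpatha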

This type of configuration is central to many active-active application setups, where both nodes share a central content or configuration repository.

For this article, two RHEL 6.1 nodes running on physical hardware (IBM blades) were used. Each node has multiple paths back to the presented 50GB SAN LUN, and multipathd will be used to manage path failover and recovery in the event of an interruption.
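As a rough idea of the multipath piece, a minimal /etc/multipath.conf sketch (the defaults shown are assumptions; the real configuration, including any vendor-specific device settings, comes later):

    # /etc/multipath.conf (minimal sketch)
    defaults {
        user_friendly_names yes
        path_grouping_policy failover
    }

After starting the daemon with service multipathd start (and chkconfig multipathd on to persist it across reboots), multipath -ll should show both paths to the 50GB LUN grouped under a single mpath device.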

Continue reading