Tag Archives: CentOS

Configuring GFS2 on CentOS 7

This article will briefly discuss how to configure a GFS2 shared filesystem across two nodes on CentOS 7. Rather than rehashing a lot of previous content, it presumes that you have followed the steps in my previous article to configure the initial cluster and storage, up to and including the configuration of the STONITH device – but no further. All other topology considerations, device paths/layouts, etc. are the same, and the cluster nodes are still centos05 and centos07. The cluster name is webcluster, and the 8GB LUN is presented as /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7, upon which a single partition has been created: /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1.

First, install the lvm2-cluster and gfs2-utils packages:
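
Something like the following, on both nodes:

    # install the clustered LVM and GFS2 userspace tools
    yum install -y lvm2-cluster gfs2-utils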

Enable clustered locking for LVM, and reboot both nodes:
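
On each node, something like:

    # switches LVM to clustered locking (locking_type = 3 in /etc/lvm/lvm.conf)
    lvmconf --enable-cluster
    reboot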

Create clone resources for DLM and CLVMD, so that they can run on both nodes. Run pcs commands from a single node only:
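
A sketch using the stock resource agents – the resource names dlm and clvmd are simply my choices here:

    pcs resource create dlm ocf:pacemaker:controld \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true
    pcs resource create clvmd ocf:heartbeat:clvm \
        op monitor interval=30s on-fail=fence clone interleave=true ordered=true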

Create an ordering and a colocation constraint, so that DLM starts before CLVMD, and both resources start on the same node:
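
Assuming the resource names above (the clones are named dlm-clone and clvmd-clone automatically):

    pcs constraint order start dlm-clone then clvmd-clone
    pcs constraint colocation add clvmd-clone with dlm-clone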

Check the status of the clone resources:
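
For example:

    # both clones should show as Started on both nodes
    pcs status resources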

Set the no-quorum-policy of the cluster to freeze, so that when quorum is lost the remaining partition will do nothing until quorum is regained – GFS2 requires quorum to operate.
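
Something like:

    pcs property set no-quorum-policy=freeze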

Create the LVM objects as required, again, from a single cluster node:
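
A sketch – the volume group and logical volume names (vg_gfs2, lv_gfs2) are my own choices:

    pvcreate /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1
    vgcreate -Ay -cy vg_gfs2 /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1
    lvcreate -l 100%FREE -n lv_gfs2 vg_gfs2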

Create the GFS2 filesystem. The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):
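
Continuing with the example names above (the filesystem name gfs2fs is arbitrary):

    mkfs.gfs2 -p lock_dlm -t webcluster:gfs2fs -j 2 /dev/vg_gfs2/lv_gfs2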

We will not use /etc/fstab to specify the mount, rather we’ll use a Pacemaker-controlled resource:
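
A sketch, assuming a mount point of /mnt/gfs2 and the LVM names used earlier:

    mkdir /mnt/gfs2    # on both nodes
    pcs resource create gfs2fs ocf:heartbeat:Filesystem \
        device="/dev/vg_gfs2/lv_gfs2" directory="/mnt/gfs2" fstype="gfs2" \
        options="noatime,nodiratime" \
        op monitor interval=10s on-fail=fence clone interleave=true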

This is configured as a clone resource so it will run on both nodes at the same time. Confirm that the mount has succeeded on both nodes:
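
For example, on each node:

    df -h /mnt/gfs2
    mount | grep gfs2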

Note the use of noatime and nodiratime, which will yield a performance benefit. As per the Red Hat documentation, SELinux should be disabled too.

Next, create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:
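
Again assuming the resource names used above:

    pcs constraint order start clvmd-clone then gfs2fs-clone
    pcs constraint colocation add gfs2fs-clone with clvmd-clone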

And we’re done.

We can even grow the filesystem online:
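
A sketch, assuming the underlying LUN and partition have first been extended:

    pvresize /dev/disk/by-id/wwn-0x60014055f0cfae3d6254576932ddc1f7-part1
    lvextend -l +100%FREE /dev/vg_gfs2/lv_gfs2
    gfs2_grow /mnt/gfs2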

Building a Highly-Available Apache Cluster on CentOS 7

This article will walk through the steps required to build a highly-available Apache cluster on CentOS 7. In CentOS 7 (as in Red Hat Enterprise Linux 7) the cluster stack has moved to Pacemaker/Corosync, with a new command line tool to manage the cluster (pcs, replacing commands such as ccs and clusvcadm in earlier releases).

The cluster will be a two-node cluster comprising nodes centos05 and centos07, and iSCSI shared storage will be presented from node fedora01. There will be an 8GB LUN presented for shared storage, and a 1GB LUN for fencing purposes. I have covered setting up iSCSI storage with SCSI-3 persistent reservations in a previous article. There is no need to use CLVMD in this example, as we will be utilising a simple failover filesystem instead.

The first step is to add appropriate entries to /etc/hosts on both nodes for all nodes, including the storage node, to safeguard against DNS failure:
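
Something along these lines – the IP addresses here are purely illustrative, so substitute your own:

    # /etc/hosts entries on both cluster nodes
    10.1.1.105   centos05
    10.1.1.107   centos07
    10.1.1.108   fedora01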

Next, bring both cluster nodes fully up-to-date, and reboot them:
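
For example:

    # on both nodes
    yum update -y && reboot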

Continue reading

Configuring and Deploying MCollective with Puppet on CentOS 6

The Marionette Collective (MCollective) is a server orchestration/parallel job execution framework available from Puppet Labs (http://docs.puppetlabs.com/mcollective/). It can be used to programmatically execute administrative tasks on clusters of servers. Rather than directly connecting to each host (think SSH in a for loop), it uses publish/subscribe middleware to communicate with many hosts at once. Instead of relying on a static list of hosts to command, it uses metadata-based discovery and filtering and can do real-time discovery across the network.

Getting MCollective up and running is not a trivial task. In this article I’ll walk through the steps required to set up a simple MCollective deployment. The middleware of choice, as recommended by the Puppet Labs documentation, is ActiveMQ. We’ll use a single ActiveMQ node for the purposes of this article; for a production deployment, you should definitely consider the use of a clustered ActiveMQ configuration. Again for the sake of simplicity, we will only configure a single MCollective client (i.e. our “admin” workstation). For real-world applications you’ll need to manage clients as per the standard deployment guide.

There are four hosts in the lab: centos01, which is our Puppet Master and MCollective client; centos02, which will be the ActiveMQ server and an MCollective server; and centos03 and centos04, which are both MCollective servers. All hosts run Puppet clients already, which I’ll use to distribute the appropriate configuration across the deployment. All hosts are running CentOS 6.5 x86_64.

Continue reading

SCSI-3 Persistent Reservations on Fedora Core 20 with targetcli over iSCSI and Red Hat Cluster

In this article, I’ll show how to set up SCSI-3 Persistent Reservations on Fedora Core 20 using targetcli, serving a pair of iSCSI LUNs to a simple Red Hat Cluster that will host a failover filesystem for the purposes of testing the iSCSI implementation. The Linux IO target (LIO) (http://linux-iscsi.org/wiki/LIO) has been the Linux SCSI target since kernel version 2.6.38. It supports a rapidly growing number of fabric modules, and all existing Linux block devices as backstores. For the purposes of our demonstration, the important fact is that it supports operating as an iSCSI target. targetcli is the tool used to perform the LIO configuration. SCSI-3 persistent reservations are required for a number of cluster storage configurations for I/O fencing and failover/retakeover. Therefore, LIO can be used as the foundation for high-end clustering solutions such as Red Hat Cluster Suite. You can read more about persistent reservations here.

The nodes in the lab are as follows:

  • 10.1.1.103 – centos03 – Red Hat Cluster node 1 on CentOS 6.5
  • 10.1.1.104 – centos04 – Red Hat Cluster node 2 on CentOS 6.5
  • 10.1.1.108 – fedora01 – Fedora Core 20 storage node

Installation

I’ll start by installing targetcli onto fedora01:
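
For example:

    yum install -y targetcli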

Let’s check that it has been installed correctly:
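
Something like:

    rpm -q targetcli
    targetcli ls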

Make sure that, before proceeding, any existing configuration is removed:
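
With a recent targetcli this can be done in one shot, e.g.:

    targetcli clearconfig confirm=True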

Continue reading

Building a Highly-Available Load Balancer with Nginx and Keepalived on CentOS

In this post I will show how to build a highly-available load balancer with Nginx and keepalived. There are issues running keepalived on KVM VMs (multicast over the bridged interface), so I suggest you don’t do that. Here, we’re running on physical nodes, but VMware machines work fine too. The end result will be a high-performance and scalable load balancing solution which can be further extended (for example, to add SSL support).

First, a diagram indicating the proposed topology. All hosts are running CentOS 6.5 x86_64.

As you can see, there are four hosts. lb01 and lb02 will be running Nginx and keepalived and will form the highly-available load balancer. app01 and app02 will be simply running an Apache webserver for the purposes of this demonstration. www01 is the failover virtual IP address that will be used for accessing the web application on port 80. My local domain name is .local.

Continue reading

Implementing Git Dynamic Workflows with Puppet

Puppet is the obvious choice for centralised configuration management and deployment, but what happens when things go wrong (or you have the need to test changes)? A typo in a manifest or module, or an accidental deletion, and all hell could break loose (and be distributed to hundreds of servers). What’s needed is integration with a version control system.

I thought about using Subversion, but instead I decided to get with the times and implement a git repository for version control of my Puppet manifests and modules. Whilst I was at it, I decided to make use of Puppet’s dynamic environment functionality. The end goal was to be able to take a branch of the master Puppet configuration, and have that environment immediately available via the --environment=<environment> option to the Puppet agent.

An example will help clarify. Suppose I’m working on a new set of functionality, and don’t want to touch the current set of Puppet modules and inadvertently cause change in production. I could do this:
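
A sketch of the sort of workflow I mean – the clone URL is illustrative, and the repository itself is set up later in this post:

    git clone ssh://puppetmaster/opt/git/puppet.git
    cd puppet
    git checkout -b testing
    # ... hack on modules and manifests ...
    git commit -am "new functionality under test"
    git push origin testing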

and then run my Puppet agent against this new testing code:
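
For example:

    puppet agent --test --environment=testing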

It would be a pain to have to update /etc/puppet/puppet.conf each time I create a new environment, so it is much easier to use dynamic environments, where a variable ($environment) is used in the configuration instead of static configuration. See the Puppet Labs documentation for more clarity.

First, edit /etc/puppet/puppet.conf – mine looks like this after editing – yours may be different:
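
A minimal sketch of the relevant settings – your [main] and [master] sections will contain other entries too:

    [main]
        environment = production

    [master]
        manifest    = $confdir/environments/$environment/manifests/site.pp
        modulepath  = $confdir/environments/$environment/modules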

As you can see, I set a default environment of production, and then specify paths to the manifest and modulepath directories, using the $environment variable to dynamically populate the path. Production manifest and modulepath paths will end up being $confdir/environments/production/manifests/site.pp and $confdir/environments/production/modules respectively. As new environments are dynamically created, the $environment variable will be substituted as appropriate.

Next, I moved my existing Puppet module and manifest structure around to suit the new configuration:
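
Something like the following, assuming the pre-existing layout was the default /etc/puppet/manifests and /etc/puppet/modules:

    mkdir -p /etc/puppet/environments/production
    mv /etc/puppet/manifests /etc/puppet/environments/production/
    mv /etc/puppet/modules /etc/puppet/environments/production/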

And restarted Apache (as I run my puppetmaster under Apache HTTPD/Passenger):
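
For example:

    service httpd restart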

I then ran a couple of agents to ensure everything was still working:
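
For example, on a couple of agent nodes:

    puppet agent --test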

They defaulted, as expected, to the Production environment.

Next, I installed git on my puppetmaster:
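
For example:

    yum install -y git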

After this I created a root directory for my git repository:
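
For example:

    mkdir -p /opt/git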

/opt is on a separate logical volume in my setup. Next, create a local git repository from the existing Puppet configuration:
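
A sketch, assuming the repository content is the production environment tree created earlier:

    cd /etc/puppet/environments/production
    git init
    git add .
    git commit -m "Initial import of Puppet manifests and modules"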

And clone a bare repository from this commit:
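
For example, continuing with the paths above:

    git clone --bare /etc/puppet/environments/production /opt/git/puppet.git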

This cloned repository is where people will clone their own copies of the code, make changes, and push them back to – this is our remote repository.

All of the people making changes are in the wheel group, so set appropriate permissions across the repository:
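
Something along these lines:

    chgrp -R wheel /opt/git/puppet.git
    chmod -R g+rwX /opt/git/puppet.git
    # make new objects inherit the group on directories
    find /opt/git/puppet.git -type d -exec chmod g+s {} \;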

We can now clone the repository, make changes, and push them back up to the remote repository. But we still need to add the real functionality. Two git hooks need to be added: an update hook, to perform some basic syntax checking of the Puppet code being pushed and reject the update if the syntax is bad; and a post-receive hook, to check the code out into the appropriate place under /etc/puppet/environments, taking into account whether the push is an update, a new branch, or a deletion of an existing branch. I took the update script from projects.puppetlabs.com and made a slight alteration (as it was failing on import statements), and took the Ruby from here and the shell script from here, plus some sudo shenanigans of my own, to come up with a working post-receive script.

Here is /opt/git/puppet.git/hooks/update:

And here is /opt/git/puppet.git/hooks/post-receive:

As previously discussed, all admins working with Puppet are members of the wheel group, so I made sure they could run commands as puppet so that the sudo commands in the post-receive hook would work:
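
One possible sudoers entry (added via visudo) that achieves this:

    %wheel ALL=(puppet) NOPASSWD: ALL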

I also removed my Puppet account from lockdown for this:

With all these changes in place, I can now work as expected, and dynamically create environments with all the benefits of version control for my Puppet configuration.

User, Group and Password Management on Linux and Solaris

This article will cover the user, group and password management tools available on the Linux and Solaris Operating Systems. The specific versions covered here are CentOS 6.4 and Solaris 11.1, though the commands will transfer to many other distributions without modifications (especially RHEL and its clones), or with slight alterations to command options. Check your system documentation and manual pages for further information.

Knowing how to manage users effectively and securely is a requirement of financial standards such as PCI-DSS, and information security management systems such as ISO 27001.

In this article, I will consider local users and groups – coverage of naming services such as NIS and LDAP is beyond its scope, but may be covered in a future article. This article also presumes some prior basic system administration experience with a UNIX-like operating system.

Continue reading

Clustering with DRBD, Corosync and Pacemaker

Introduction

This article will cover the build of a two-node high-availability cluster using DRBD (RAID1 over TCP/IP), the Corosync cluster engine, and the Pacemaker resource manager on CentOS 6.4. There are many applications for this type of cluster – as a free alternative to RHCS, for example. However, this example does have a couple of caveats. As this is being built in a lab environment on KVM guests, there will be no STONITH (Shoot The Other Node In The Head), a type of fencing. If this cluster goes split-brain, manual intervention may be required to tell DRBD which node is primary and which is secondary, and so on. In a production environment, we’d use STONITH to connect to ILOMs (for example) and power off or reboot a misbehaving node. Quorum will also need to be disabled, as this stack doesn’t yet support the use of quorum disks – if you want that, go with RHCS (and use cman with the two_node parameter, with or without qdiskd).

This article, as always, presumes that you know what you are doing. The nodes used in this article are as follows:

  • 192.168.122.30 – rhcs-node01.local – first cluster node – running CentOS 6.4
  • 192.168.122.31 – rhcs-node02.local – second cluster node – running CentOS 6.4
  • 192.168.122.33 – failover IP address

DRBD will be used to replicate a volume between the two nodes (in a Master/Slave fashion), and the hosts will eventually run the nginx webserver in a failover topology, with this example having documents being served from the replicated volume.

Ideally, four network interfaces per host should be used (1 for “standard” node communications, 1 for DRBD replication, 2 for Corosync), but for a lab environment a single interface per node is fine.

Let’s start the build …

Continue reading

Securing CentOS and Solaris 11 with Puppet

Puppet is system administration automation software from Puppet Labs (http://puppetlabs.com). It has gained a lot of popularity, and rivals other automation/orchestration software such as Chef and Ansible.

In this article, I will detail how security can be managed on CentOS 6.x and Solaris 11.1 hosts with Puppet 3.x. Some familiarity with Puppet or some other automation software, as well as a Linux/UNIX system administrator audience, is assumed.

The topology being used for the examples given in this article is shown in Figure 1.

Figure 1. Example Puppet topology

As you can see, centosa is the Puppet master. Four hosts will contact it for configuration, including itself. There are three CentOS hosts in total (centos[a-c]) and a single Solaris host (sol11test). We will start with server and agent installation, then move on to cover various Puppet configuration tasks, and develop our own security module to deploy a set of security configuration to the hosts.

Whilst this article has been written with CentOS 6.x and Solaris 11.1 in mind, the techniques utilised should translate to RHEL/OEL 6.x and Solaris 10 without many changes. In case of doubt, consult the relevant security guide for your operating system at http://cisecurity.org.

Continue reading

Adding Logging to IPTables under CentOS

Whilst troubleshooting some firewall issues with a CentOS host, I wanted to enable logging. Thankfully, there is a very customisable iptables target – LOG (funnily enough) – that will do this in a few steps.

First, add a new chain with a reasonable name. I chose LOGGING:
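
For example:

    iptables -N LOGGING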

Next, review the current iptables configuration to ensure that the chain has been created successfully:
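
For example:

    iptables -L -n -v --line-numbers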

Next, insert a rule at the appropriate point (hence my use of --line-numbers above). You could replace the existing REJECT at line 7 in its entirety, as its functionality will be moved into the LOGGING chain (where I change it to a DROP anyway):
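
Something like:

    # replace rule 7 in the INPUT chain with a jump to the LOGGING chain
    iptables -R INPUT 7 -j LOGGING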

Add the actual logging rule next. I also use the limit module to add some rate-limiting into the mix. The iptables man page documents both the LOG target and the limit module in great detail. Here I specify a limit of 10 messages per minute, which is ample for my testing. The log level is set to debug (as per standard syslog log levels).
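
A sketch – the log prefix is simply my choice:

    iptables -A LOGGING -m limit --limit 10/minute -j LOG --log-prefix "iptables-dropped: " --log-level 7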

Finally, we actually DROP the packet, whether or not it has been logged (it won’t have been if the rate limit was exceeded):
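
For example:

    iptables -A LOGGING -j DROP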

OK – let’s check our iptables configuration once again:
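
For example:

    iptables -L -n -v --line-numbers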

Everything looks good. Try telnetting to a bad port, or doing something else the firewall should block, and depending upon your syslog configuration, the DROP message should be logged, for example:

If nothing is broken, save your configuration:
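
On CentOS 6, for example:

    service iptables save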

As an aside, if (as with the default rsyslog configuration under CentOS 6.x) nothing is logged, you will need to configure rsyslog appropriately. We specified a --log-level of 7 – which is the debug syslog log level. So we need to configure rsyslog to send messages from the kern facility at log level 7 to somewhere useful. I chose /var/log/firewall.log:
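
One way to do this – a sketch assuming your rsyslog.conf includes files from /etc/rsyslog.d (the CentOS 6 default):

    # log kern facility messages at debug priority to a dedicated file
    echo 'kern.=debug    /var/log/firewall.log' > /etc/rsyslog.d/firewall.conf
    service rsyslog restart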

As a final tidy up, don’t forget to update your logrotate configuration, if required:
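
I added a simple stanza along these lines to /etc/logrotate.d/firewall – the rotation settings are just an example, so adjust to taste:

    /var/log/firewall.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }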