Tag Archives: nginx

Building a Highly-Available Load Balancer with Nginx and Keepalived on CentOS

In this post I will show how to build a highly-available load balancer with Nginx and keepalived. keepalived has known issues on KVM VMs (VRRP multicast over the bridged interface), so I suggest you avoid that combination. Here we're running on physical nodes, though VMware machines work fine too. The end result is a high-performance, scalable load-balancing solution that can be extended further (for example, to add SSL support).

First, a diagram indicating the proposed topology. All hosts are running CentOS 6.5 x86_64.

[Diagram: nginx load balancer topology]

As you can see, there are four hosts. lb01 and lb02 will run Nginx and keepalived and will form the highly-available load balancer. app01 and app02 will simply run an Apache webserver for the purposes of this demonstration. www01 is the failover virtual IP address that will be used to access the web application on port 80. My local domain name is .local.
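
To make the failover behaviour concrete, here is a minimal sketch of the kind of keepalived.conf lb01 could run. The interface name, VIP address and shared secret are placeholders for illustration, not values from this post:

```
# Minimal VRRP sketch for lb01 (the MASTER); lb02 would carry state BACKUP
# and a lower priority. Interface, VIP and auth_pass are placeholders.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER                # BACKUP on lb02
    interface eth0              # placeholder interface name
    virtual_router_id 51
    priority 101                # e.g. 100 on lb02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret        # placeholder shared secret
    }
    virtual_ipaddress {
        192.168.0.10            # placeholder VIP for www01
    }
}
EOF
service keepalived start
```

Whichever node holds the highest priority (and is alive) owns the VIP; when it fails, the BACKUP node takes the address over within a few advert intervals.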

Continue reading

Installing Nagios under Nginx on Ubuntu 14.04 LTS

Nagios is an excellent open source monitoring solution that can be configured to monitor pretty much anything. In this article, I’ll describe how to install Nagios under Nginx on Ubuntu 14.04 LTS.

First of all, check that the system is fully up to date:
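
On Ubuntu 14.04 that's the usual pair of commands:

```
sudo apt-get update      # refresh the package lists
sudo apt-get -y upgrade  # install any outstanding updates
```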

Next, install the build-essential package so that we can build Nagios and its plugins from source:
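
For example:

```
sudo apt-get install -y build-essential   # gcc, make, libc headers etc.
```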

Install Nginx, and verify that it has started:
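
Nginx is in the standard Ubuntu repositories, so something like:

```
sudo apt-get install -y nginx
sudo service nginx status     # should report that nginx is running
curl -I http://localhost/     # optionally confirm it answers on port 80
```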

Install libgd2-xpm-dev, php5-fpm, spawn-fcgi and fcgiwrap:
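
All four packages are available via apt:

```
sudo apt-get install -y libgd2-xpm-dev php5-fpm spawn-fcgi fcgiwrap
```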

Next, create a nagios user:
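
A standard system user will do:

```
sudo useradd nagios    # default home/shell are fine for a service account
```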

Issue the following commands to create a nagcmd group, and add it as a secondary group to both the nagios and www-data users:
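
For example:

```
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios     # -a appends, leaving existing groups intact
sudo usermod -a -G nagcmd www-data   # www-data is the Nginx/php5-fpm user on Ubuntu
```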

Download the latest Nagios Core distribution from http://www.nagios.org/download – at the time of writing, this was version 4.0.7.
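
Something along these lines – the exact download URL below is indicative, so take the current link from the downloads page:

```
cd /tmp
# URL indicative: copy the real link from http://www.nagios.org/download
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.0.7.tar.gz
tar zxvf nagios-4.0.7.tar.gz
cd nagios-4.0.7
```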

Continue reading

Clustering with DRBD, Corosync and Pacemaker

Introduction

This article will cover the build of a two-node high-availability cluster using DRBD (RAID1 over TCP/IP), the Corosync cluster engine, and the Pacemaker resource manager on CentOS 6.4. There are many applications for this type of cluster – as a free alternative to RHCS, for example. However, this example does have a couple of caveats. As it is being built in a lab environment on KVM guests, there will be no STONITH (Shoot The Other Node In The Head – a type of fencing). If the cluster goes split-brain, manual intervention may be required to recover: telling DRBD which node is primary and which is secondary, and so on. In a production environment, we'd use STONITH to connect to ILOMs (for example) and power off or reboot a misbehaving node. Quorum will also need to be disabled, as this stack doesn't yet support quorum disks – if you want that, go with RHCS (and use cman with the two_node parameter, with or without qdiskd).

This article, as always, presumes that you know what you are doing. The nodes used in this article are as follows:

  • 192.168.122.30 – rhcs-node01.local – first cluster node – running CentOS 6.4
  • 192.168.122.31 – rhcs-node02.local – second cluster node – running CentOS 6.4
  • 192.168.122.33 – failover IP address

DRBD will be used to replicate a volume between the two nodes (in a Master/Slave fashion), and the hosts will eventually run the nginx webserver in a failover topology, with documents served from the replicated volume in this example.
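
As a taste of the DRBD side, here is a minimal resource definition of the kind we'll end up with. The resource name, port and backing device (/dev/vdb1) are placeholders; only the host names and IP addresses come from the list above:

```
# Placed identically on both nodes; drbdadm will complain if the files differ.
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
    protocol C;                  # fully synchronous replication
    device    /dev/drbd0;
    disk      /dev/vdb1;         # placeholder backing device
    meta-disk internal;
    on rhcs-node01.local {
        address 192.168.122.30:7788;
    }
    on rhcs-node02.local {
        address 192.168.122.31:7788;
    }
}
EOF
```

Protocol C only acknowledges a write once it has hit both nodes, which is what you want when a failover must not lose committed data.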

Ideally, four network interfaces per host should be used (1 for “standard” node communications, 1 for DRBD replication, 2 for Corosync), but for a lab environment a single interface per node is fine.

Let’s start the build …

Continue reading

AWS: Ruby on Rails Deployment Part 2: Ruby, RubyGems, Rails, Thin and Nginx

The previous article in this series has left us with a minimally-configured Nginx installation running on an EBS-backed Ubuntu EC2 instance.

This article will pick up where we left off. The latest versions of Ruby and RubyGems will be downloaded and installed, followed by the Rails and Thin gems. Nginx will then have its final configuration changes applied to enable it to proxy through to the Thin workers. Thin is a lean Ruby web server designed to replace Mongrel (the standard Ruby web server until its development ceased); it reuses several components lifted from Mongrel – the parser, for example – giving us the same (or better) speed and security as Mongrel.
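
To give an idea of where this is heading, a sketch of the proxy wiring follows. The server name, document root, config path and the three Thin ports are assumptions for illustration, not the final configuration from this article:

```
# Indicative Nginx vhost proxying to three Thin workers on ports 3000-3002.
# The config path depends on where the source build put Nginx.
cat > /usr/local/nginx/conf/rails.conf <<'EOF'
upstream thin_cluster {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}
server {
    listen 80;
    server_name example.com;              # placeholder
    root /var/www/app/public;             # placeholder Rails public directory
    location / {
        try_files $uri @thin;             # serve static assets directly
    }
    location @thin {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://thin_cluster;
    }
}
EOF
```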

It’s been a while since I deployed Ruby on Rails – and that was using Mongrel – so let’s see how Thin matches up.

Continue reading

AWS: Ruby on Rails Deployment Part 1: Nginx Installation and Configuration

Over the course of this series of articles, I will cover the build and configuration of an Amazon EC2 instance capable of serving Ruby on Rails applications. The series will cover building and installing Nginx from source, virtual host and proxy configuration within Nginx, installation of Ruby and RubyGems, installation of the Rails and Thin gems, and the deployment of a set of clustered Thin workers. I chose Nginx over Apache HTTPD because it is renowned for performing very well both as a reverse proxy and as a server of static content, all while keeping a very low memory footprint. Plus, I'm always interested in looking at "alternative" software solutions to common problems.
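
By way of a preview, the source build itself boils down to a few steps. The version number and configure flags below are indicative only; the series walks through the exact choices:

```
# Build dependencies, then a minimal source build of Nginx.
sudo apt-get install -y build-essential libpcre3-dev zlib1g-dev libssl-dev
wget http://nginx.org/download/nginx-1.2.6.tar.gz   # version indicative
tar zxvf nginx-1.2.6.tar.gz && cd nginx-1.2.6
./configure --prefix=/usr/local/nginx --with-http_ssl_module
make && sudo make install
```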

Read my article on EC2 instance management via the ec2-api-tools if you'd like to provision your instance(s) from the command line; otherwise, just provision your instance(s) via the EC2 Management Console. This article presumes that you have an instance running and ready to go. I used ami-08df4961 (Ubuntu 12.10 i386 Server, EBS-backed). I'd have used a RHEL instance, but they aren't eligible for the free tier due to licensing; besides, the Ubuntu instances are very well supported by Canonical.
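
If you do go the command-line route, a hypothetical launch with the ec2-api-tools looks like this – the key pair and security group names are placeholders:

```
# Launch a t1.micro from the Ubuntu 12.10 AMI; -k and -g values are placeholders.
ec2-run-instances ami-08df4961 -t t1.micro -k my-keypair -g my-security-group
ec2-describe-instances    # watch for the instance to reach the 'running' state
```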

Continue reading