Category Archives: Puppet Module

Configuring and Deploying MCollective with Puppet on CentOS 6

The Marionette Collective (MCollective) is a server orchestration/parallel job execution framework available from Puppet Labs (http://docs.puppetlabs.com/mcollective/). It can be used to programmatically execute administrative tasks on clusters of servers. Rather than directly connecting to each host (think SSH in a for loop), it uses publish/subscribe middleware to communicate with many hosts at once. Instead of relying on a static list of hosts to command, it uses metadata-based discovery and filtering and can do real-time discovery across the network.

Getting MCollective up and running is not a trivial task. In this article I'll walk through the steps required to set up a simple MCollective deployment. The middleware of choice, as recommended by the Puppet Labs documentation, is ActiveMQ. We'll use a single ActiveMQ node for the purposes of this article; for a production deployment you should definitely consider a clustered ActiveMQ configuration. Again, for the sake of simplicity, we will only configure a single MCollective client (i.e. our "admin" workstation). For real-world applications you'll need to manage clients as per the standard deployment guide.

There are four hosts in the lab: centos01, which is our Puppet Master and MCollective client; centos02, which will be the ActiveMQ server and an MCollective server; and centos03 and centos04, which are both MCollective servers. All hosts already run the Puppet agent, which I'll use to distribute the appropriate configuration across the deployment. All hosts are running CentOS 6.5 x86_64.


Puppet Module: security::tcpwrappers with Hiera

Module: security::tcpwrappers

Purpose: This module configures TCP Wrappers on CentOS 6 and Solaris 10 hosts with Hiera.

Notes: This module does some pretty fancy things. It uses Hiera to provide lookup for the $hostsallow and $hostsdeny variables, and interfaces with inetadm and svccfg on Solaris. Let’s look at hiera.yaml first.

File: /etc/puppet/hiera.yaml
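
A minimal hiera.yaml along these lines gives the hierarchy described below (Hiera 1.x syntax, with :datadir: pointing at the /etc/puppet/hieradata directory used later):

    ---
    :backends:
      - yaml
    :hierarchy:
      - "%{::clientcert}"
      - "%{::operatingsystem}"
      - common
    :yaml:
      :datadir: /etc/puppet/hieradata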

As you can see, we first check the %{::clientcert}, then the %{::operatingsystem} before falling back to common. So, essentially you have hostname-specific control if you need it.

Under /etc/puppet/hieradata, I have the following:
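
Something like this (the common.yaml values are placeholders, and arrays are assumed for the two keys – the exact format depends on how the class consumes them):

    $ ls /etc/puppet/hieradata
    centosa.local.yaml  common.yaml  Solaris.yaml

    # /etc/puppet/hieradata/common.yaml
    ---
    security::tcpwrappers::hostsallow:
      - "sshd : 192.168.1.0/255.255.255.0"
      - "ALL : localhost"
    security::tcpwrappers::hostsdeny:
      - "ALL : ALL"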

Host centosa.local would use centosa.local.yaml (due to %{::clientcert} in the hierarchy) and pull in the values for security::tcpwrappers::hostsallow and security::tcpwrappers::hostsdeny from that file. Host centosb.local would fall through to common.yaml (unless there was a %{::clientcert}.yaml or CentOS.yaml), and a Solaris host would use Solaris.yaml.

File: security/manifests/tcpwrappers.pp

Notes: Uses Hiera to copy in appropriate files. On Solaris, it configures inetd-controlled services to use TCP Wrappers via inetadm, and enables TCP Wrappers for the RPC portmapping service via svccfg.
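
A sketch of such a class, in Puppet 3-era syntax (the template names and the exact Solaris exec guards are illustrative):

    # security/manifests/tcpwrappers.pp (sketch)
    class security::tcpwrappers (
      $hostsallow = hiera('security::tcpwrappers::hostsallow'),
      $hostsdeny  = hiera('security::tcpwrappers::hostsdeny')
    ) {

      # Render one entry per line from the Hiera-supplied arrays
      # (hypothetical templates under security/templates/)
      file { '/etc/hosts.allow':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        content => template('security/hosts.allow.erb'),
      }

      file { '/etc/hosts.deny':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        content => template('security/hosts.deny.erb'),
      }

      if $::operatingsystem == 'Solaris' {
        # Make inetd-controlled services honour TCP Wrappers
        exec { 'inetadm-tcp-wrappers':
          command  => '/usr/sbin/inetadm -M tcp_wrappers=TRUE',
          unless   => '/usr/sbin/inetadm -p | /usr/bin/grep tcp_wrappers=TRUE',
          provider => shell,
        }

        # Enable TCP Wrappers for the RPC portmapper
        exec { 'rpcbind-tcp-wrappers':
          command  => '/usr/sbin/svccfg -s rpc/bind setprop config/enable_tcpwrappers=true && /usr/sbin/svcadm refresh rpc/bind',
          unless   => '/usr/bin/svcprop -p config/enable_tcpwrappers rpc/bind | /usr/bin/grep true',
          provider => shell,
        }
      }
    }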

Adding PuppetDB to a CentOS 6.4-based puppetmaster

After configuring my puppetmaster to run under Passenger on Apache, the next step was to add PuppetDB so that I could take advantage of exported resources, as well as have a central store of facts and catalogs that I could query. I am configuring PuppetDB to run on the same host as my puppetmaster, on CentOS 6.4.

Whilst you can use the embedded HSQLDB database, I opted for a backend PostgreSQL database for the extra scalability; this is what Puppet Labs recommends for deployments of more than 100 nodes. MySQL is not supported, as it doesn't support recursive queries.

I installed PostgreSQL using my postgresql::server class (it also includes the postgres class, which does nothing more than install the PostgreSQL client package). Essentially, it covers installation of the postgresql-server package on the host, initialisation of the database cluster, ensuring an appropriate pg_hba.conf is installed (for example, replacing the localhost ident authentication methods with password), registering the service, and starting PostgreSQL.
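
A sketch of such a class for CentOS 6 (the pg_hba.conf source file under the module, and the exact resource names, are assumptions):

    # postgresql/manifests/server.pp (sketch)
    class postgresql::server {

      include postgres   # just installs the PostgreSQL client package

      package { 'postgresql-server':
        ensure => installed,
      }

      # Initialise the cluster once (creates /var/lib/pgsql/data)
      exec { 'postgresql-initdb':
        command => '/sbin/service postgresql initdb',
        creates => '/var/lib/pgsql/data/PG_VERSION',
        require => Package['postgresql-server'],
      }

      # pg_hba.conf with ident replaced by password authentication
      file { '/var/lib/pgsql/data/pg_hba.conf':
        ensure  => file,
        owner   => 'postgres',
        group   => 'postgres',
        mode    => '0600',
        source  => 'puppet:///modules/postgresql/pg_hba.conf',
        require => Exec['postgresql-initdb'],
        notify  => Service['postgresql'],
      }

      service { 'postgresql':
        ensure  => running,
        enable  => true,
        require => File['/var/lib/pgsql/data/pg_hba.conf'],
      }
    }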

To install, it was as simple as:
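
Something along these lines, assuming the class is pulled in via the puppetmaster's node definition and applied with an agent run:

    # site.pp (or equivalent)
    node 'sun.local' {
      include postgresql::server
    }

    # then, on the host
    puppet agent --test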

Next, I created a PostgreSQL user and database:
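
The commands were along these lines, run as the postgres user (the database name puppetdb is an assumption, matching the database.ini further down):

    sudo -u postgres createuser -D -R -S -P puppetdb
    sudo -u postgres createdb -O puppetdb puppetdb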

The user is created so that it cannot create databases (-D) or roles (-R) and doesn't have superuser privileges (-S); it'll prompt for a password (-P). Let's assume a password of "s3cr3tp@ss" has been used. The database is created and owned (-O) by the puppetdb user.

Access to the database can then be tested:
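
For example, connecting over TCP as the new user:

    # prompts for the password set above
    psql -h localhost -U puppetdb -W puppetdb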

Next, install the puppetdb and puppetdb-terminus packages. I installed from the Puppet Labs yum repository:
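
Assuming the puppetlabs-release repository RPM is already in place:

    yum install puppetdb puppetdb-terminus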

Configure /etc/puppetdb/conf.d/database.ini as appropriate (it ships configured for HSQLDB):
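
For the PostgreSQL setup above, the settings look something like this (host, port and database name are assumptions matching the earlier steps):

    [database]
    classname   = org.postgresql.Driver
    subprotocol = postgresql
    subname     = //localhost:5432/puppetdb
    username    = puppetdb
    password    = s3cr3tp@ss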

Create /etc/puppet/puppetdb.conf, as appropriate for the Jetty configuration on the host. You can check the Jetty configuration under /etc/puppetdb/conf.d/jetty.ini. Here’s puppetdb.conf (my puppetmaster runs on sun.local):
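
With PuppetDB running on the same host as the master and the default SSL port, it amounts to:

    [main]
    server = sun.local
    port = 8081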

Create /etc/puppet/routes.yaml:
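
This is the standard routes.yaml for the PuppetDB terminus, pointing the master's facts terminus at PuppetDB:

    ---
    master:
      facts:
        terminus: puppetdb
        cache: yaml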

Finally, update /etc/puppet/puppet.conf:
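
The additions go in the [master] section, enabling storeconfigs with the PuppetDB backend:

    [master]
        storeconfigs = true
        storeconfigs_backend = puppetdb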

Once all steps are complete, restart your puppetmaster. As I’m running under Passenger:
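
That means bouncing Apache (and making sure the puppetdb service itself is running first):

    # PuppetDB itself needs to be running too
    service puppetdb start
    service httpd restart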

Run a puppet agent --test from one of your nodes (or wait for your scheduled runs):

As you can see, I ran from venus.local. Let's check that the data is being stored in PuppetDB – by the time I got there, three more nodes had already completed an agent run:
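
One quick way to peek at the backend is via psql (the certnames table name assumes the PuppetDB 1.x schema):

    psql -h localhost -U puppetdb -W puppetdb
    puppetdb=> select name from certnames;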

Awesome! Let’s get something meaningful. Suppose we want to know what OS is running on all of our nodes:
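
One option is the facts endpoint on the cleartext Jetty port (8080, bound to localhost by default; the v2 path matches the PuppetDB 1.x API):

    curl -H 'Accept: application/json' 'http://localhost:8080/v2/facts/operatingsystem'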

An important note: ensure that all of your Puppet nodes can connect to the PuppetDB server on the appropriate SSL-enabled Jetty port (8081 by default) so they can report in.
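
On CentOS 6 that typically means an iptables rule on the PuppetDB host, along these lines:

    iptables -I INPUT -p tcp --dport 8081 -j ACCEPT
    service iptables save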

With PuppetDB now configured, I can start writing some reports and using exported resources.

Puppet Module: apache2: VirtualHost templates

Module: apache2

Purpose: This module shows how an ERB template can be used to create VirtualHost definitions.

File: apache2/manifests/init.pp

Notes: This is the base class. It can just be included within a node definition to install a stock httpd from yum. Also sets up a site-specific define – see comments in file.
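
A sketch of the base class, with an illustrative site-specific define (its name and signature are assumptions):

    # apache2/manifests/init.pp (sketch)
    class apache2 {

      package { 'httpd':
        ensure => installed,
      }

      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }
    }

    # Site-specific drop-in: lets a node ship an arbitrary .conf file into
    # /etc/httpd/conf.d for anything the vhost define doesn't cover.
    define apache2::site_conf ($source) {
      file { "/etc/httpd/conf.d/${name}.conf":
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        source  => $source,
        require => Package['httpd'],
        notify  => Service['httpd'],
      }
    }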

File: apache2/manifests/vhost.pp

Notes: Contains the virtual host definition. Supports only port 80 (else we’d need to ensure SELinux had the correct configuration for http_port_t, etc.), and fairly basic VirtualHost configuration. Anything more “fancy” can be implemented using the site-specific define set up in init.pp, or by using a custom $template.
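
A sketch of the define (parameter names such as $docroot and $serveraliases are assumptions; $template defaults to the ERB template below):

    # apache2/manifests/vhost.pp (sketch)
    define apache2::vhost (
      $docroot,
      $serveraliases = [],
      $template      = 'apache2/vhost-default.conf.erb'
    ) {
      # $name is used as the ServerName; port 80 only
      include apache2

      file { $docroot:
        ensure => directory,
        owner  => 'root',
        group  => 'root',
        mode   => '0755',
      }

      file { "/etc/httpd/conf.d/${name}.conf":
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        content => template($template),
        require => Package['httpd'],
        notify  => Service['httpd'],
      }
    }

Usage from a node definition would then be along the lines of:

    apache2::vhost { 'www.example.com':
      docroot => '/var/www/www.example.com',
    }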

File: apache2/templates/vhost-default.conf.erb

Notes: ERB template for the VirtualHost configuration.
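
A sketch matching the define above (under Puppet the parameters are available to ERB as @name, @docroot and @serveraliases):

    # apache2/templates/vhost-default.conf.erb (sketch)
    <VirtualHost *:80>
      ServerName <%= @name %>
    <% unless Array(@serveraliases).empty? -%>
      ServerAlias <%= Array(@serveraliases).join(' ') %>
    <% end -%>
      DocumentRoot <%= @docroot %>

      ErrorLog logs/<%= @name %>-error_log
      CustomLog logs/<%= @name %>-access_log combined
    </VirtualHost>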

Puppet Module: logging-server: Centralised rsyslog Server

Module: logging-server

Purpose: This module is used to configure my centralised rsyslog server, on CentOS 6.

File: logging-server/manifests/init.pp

Notes: This will refresh the rsyslog service if the configuration file is updated, and will also run restorecon to fix the SELinux context on the configuration file.
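
A sketch of the class (the restorecon exec is refreshonly, so it only fires when the file changes; note that newer Puppet releases would require an underscore rather than a hyphen in the class name):

    # logging-server/manifests/init.pp (sketch)
    class logging-server {

      file { '/etc/rsyslog.conf':
        ensure => file,
        owner  => 'root',
        group  => 'root',
        mode   => '0644',
        source => 'puppet:///modules/logging-server/rsyslog.conf',
        notify => [ Exec['restorecon-rsyslog'], Service['rsyslog'] ],
      }

      # Fix the SELinux context if Puppet replaced the file
      exec { 'restorecon-rsyslog':
        command     => '/sbin/restorecon -v /etc/rsyslog.conf',
        refreshonly => true,
      }

      service { 'rsyslog':
        ensure => running,
        enable => true,
      }
    }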

File: logging-server/files/rsyslog.conf

Notes: This configuration allows rsyslog to receive messages over both UDP and TCP.
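
On CentOS 6 (rsyslog 5) the listeners are enabled with the legacy directives:

    # Accept syslog messages over UDP and TCP on port 514
    $ModLoad imudp
    $UDPServerRun 514

    $ModLoad imtcp
    $InputTCPServerRun 514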