Tag Archives: puppet

Configuring and Deploying MCollective with Puppet on CentOS 6

The Marionette Collective (MCollective) is a server orchestration/parallel job execution framework available from Puppet Labs (http://docs.puppetlabs.com/mcollective/). It can be used to programmatically execute administrative tasks on clusters of servers. Rather than directly connecting to each host (think SSH in a for loop), it uses publish/subscribe middleware to communicate with many hosts at once. Instead of relying on a static list of hosts to command, it uses metadata-based discovery and filtering and can do real-time discovery across the network.
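For example, once a deployment is in place you can do things like this from the client workstation (mco is the MCollective command-line client):

    # Ping every MCollective server reachable via the middleware
    mco ping

    # Discover and target hosts by fact rather than by hostname
    mco find -W "operatingsystem=CentOS"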

Getting MCollective up and running is not a trivial task. In this article I’ll walk through the steps required to set up a simple MCollective deployment. The middleware of choice, as recommended by the Puppet Labs documentation, is ActiveMQ. We’ll use a single ActiveMQ node for the purposes of this article; for a production deployment, you should definitely consider a clustered ActiveMQ configuration. Again, for the sake of simplicity, we will only configure a single MCollective client (i.e. our “admin” workstation). For real-world applications you’ll need to manage clients as per the standard deployment guide.

There are four hosts in the lab – centos01, which is our Puppet Master and MCollective client; centos02, which will be the ActiveMQ server and an MCollective server; and centos03 and centos04, which are both MCollective servers. All hosts already run Puppet clients, which I’ll use to distribute the appropriate configuration across the deployment. All hosts are running CentOS 6.5 x86_64.

Continue reading

Implementing Git Dynamic Workflows with Puppet

Puppet is the obvious choice for centralised configuration management and deployment, but what happens when things go wrong (or you have the need to test changes)? A typo in a manifest or module, or an accidental deletion, and all hell could break loose (and be distributed to hundreds of servers). What’s needed is integration with a version control system.

I thought about using Subversion, but instead I decided to get with the times and look at implementing a git repository for version control of my Puppet manifests and modules. Whilst I was at it, I decided to make use of Puppet’s dynamic environment functionality. The end goal was to be able to take a branch of the master Puppet configuration and have that environment immediately available for use via the --environment=<environment> option to the Puppet agent.

An example will help clarify. Suppose I’m working on a new set of functionality, and don’t want to touch the current set of Puppet modules and inadvertently cause change in production. I could do this:
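Something like this (the clone URL is illustrative – it points at the bare repository we will set up shortly):

    # Clone the central repository and start a topic branch
    git clone ssh://puppetmaster/opt/git/puppet.git
    cd puppet
    git checkout -b testing

    # ...edit modules and manifests, then publish the branch...
    git commit -a -m "New functionality, not ready for production"
    git push origin testing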

and then run my Puppet agent against this new testing code:
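    # One-off run on a test node against the new environment
    puppet agent --test --environment=testing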

It would be a pain to have to update /etc/puppet/puppet.conf each time I create a new environment, so it is much easier to use dynamic environments, where a variable ($environment) is used in the configuration instead of static paths. See the Puppet Labs documentation for more detail.

First, edit /etc/puppet/puppet.conf - mine looks like this after editing – yours may be different:
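The relevant settings are:

    [main]
        # default environment for nodes that do not request one
        environment = production

    [master]
        manifest    = $confdir/environments/$environment/manifests/site.pp
        modulepath  = $confdir/environments/$environment/modules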

As you can see, I set a default environment of production, and then specify paths to the manifest and modulepath directories, using the $environment variable to dynamically populate the path. Production manifest and modulepath paths will end up being $confdir/environments/production/manifests/site.pp and $confdir/environments/production/modules respectively. As new environments are dynamically created, the $environment variable will be substituted as appropriate.

Next, I moved my existing Puppet module and manifest structure around to suit the new configuration:
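    # Create the production environment tree and move the existing content into it
    mkdir -p /etc/puppet/environments/production
    mv /etc/puppet/manifests /etc/puppet/environments/production/manifests
    mv /etc/puppet/modules   /etc/puppet/environments/production/modules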

And restarted Apache (as I run my puppetmaster under Apache HTTPD/Passenger):
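    service httpd restart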

I then ran a couple of agents to ensure everything was still working:
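    # On a couple of existing agent nodes
    puppet agent --test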

They defaulted, as expected, to the Production environment.

Next, I installed git on my puppetmaster:
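    yum install -y git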

After this I created a root directory for my git repository:
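    mkdir -p /opt/git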

/opt is on a separate logical volume in my setup. Next, create a local git repository from the existing Puppet configuration:
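    # The repository root is the content of an environment (manifests/ and modules/)
    cd /etc/puppet/environments/production
    git init
    git add .
    git commit -m "Initial import of the production Puppet configuration"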

And clone a bare repository from this commit:
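    git clone --bare /etc/puppet/environments/production /opt/git/puppet.git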

This cloned repository is where people will clone their own copies of the code, make changes, and push them back to – this is our remote repository.

All of the people making changes are in the wheel group, so set appropriate permissions across the repository:
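    # Hand the repository to the wheel group and keep it group-writable
    chgrp -R wheel /opt/git/puppet.git
    chmod -R g+w /opt/git/puppet.git

    # Keep objects created by future pushes group-writable too
    git --git-dir=/opt/git/puppet.git config core.sharedRepository group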

We can now clone the repository, make changes, and push them back up to the remote repository. But we still need to add the real functionality. Two git hooks need to be added: an update hook to perform basic syntax checking of the Puppet code being pushed, rejecting the update if the syntax is bad, and a post-receive hook to check the code out into the appropriate place under /etc/puppet/environments, taking into account whether the push is an update, a new branch, or a deletion of an existing branch. I took the update script from projects.puppetlabs.com and made a slight alteration (as it was failing on import statements), and combined the Ruby from here and the shell script from here, plus some of my own sudo shenanigans, to come up with a working post-receive script.

Here is the essence of /opt/git/puppet.git/hooks/update – a simplified sketch of the idea rather than the exact script mentioned above:
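    #!/bin/sh
    # Simplified sketch: reject pushes whose Puppet manifests fail "puppet parser validate"
    # update hook arguments: <refname> <oldrev> <newrev>
    refname="$1"
    oldrev="$2"
    newrev="$3"

    # Nothing to validate when a branch is being deleted
    if expr "$newrev" : '0*$' >/dev/null; then
        exit 0
    fi

    # On a brand-new branch there is no old revision to diff against
    if expr "$oldrev" : '0*$' >/dev/null; then
        files=$(git ls-tree -r --name-only "$newrev" | grep '\.pp$')
    else
        files=$(git diff --name-only "$oldrev" "$newrev" | grep '\.pp$')
    fi

    rc=0
    tmpdir=$(mktemp -d)
    for f in $files; do
        mkdir -p "$tmpdir/$(dirname "$f")"
        git show "$newrev:$f" > "$tmpdir/$f"
        if ! puppet parser validate "$tmpdir/$f"; then
            echo "Puppet syntax check failed on $f - rejecting push" >&2
            rc=1
        fi
    done
    rm -rf "$tmpdir"
    exit $rc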

And here is the idea behind /opt/git/puppet.git/hooks/post-receive – again a simplified sketch; the real script combines the Ruby and shell pieces mentioned above with the sudo handling:
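    #!/bin/sh
    # Simplified sketch: map each pushed branch to a Puppet environment.
    # Assumes /etc/puppet/environments is owned by the puppet user (hence the sudo).
    # post-receive reads "<oldrev> <newrev> <refname>" lines on stdin.
    REPO=/opt/git/puppet.git
    ENVDIR=/etc/puppet/environments

    while read oldrev newrev refname; do
        branch=${refname#refs/heads/}

        if expr "$newrev" : '0*$' >/dev/null; then
            # Branch deleted - remove the corresponding environment
            sudo -u puppet rm -rf "$ENVDIR/$branch"
        else
            # New or updated branch - (re)export its contents as an environment
            sudo -u puppet mkdir -p "$ENVDIR/$branch"
            git --git-dir="$REPO" archive "$branch" | sudo -u puppet tar -x -f - -C "$ENVDIR/$branch"
        fi
    done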

As previously discussed, all admins working with Puppet are members of the wheel group, so I made sure they could run commands as puppet so that the sudo commands in the post-receive hook would work:
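    # In /etc/sudoers (edit with visudo): let wheel members run anything as the puppet user
    %wheel ALL=(puppet) NOPASSWD: ALL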

I also removed my Puppet account from lockdown for this:

With all these changes in place, I can now work as expected, and dynamically create environments with all the benefits of version control for my Puppet configuration.

Securing CentOS and Solaris 11 with Puppet

Puppet is system administration automation software from Puppet Labs (http://puppetlabs.com). It has gained a lot of popularity, and rivals other automation/orchestration software such as Chef and Ansible.

In this article, I will detail how security can be managed on CentOS 6.x and Solaris 11.1 hosts with Puppet 3.x. Some familiarity with Puppet or other automation software is assumed, and the article is aimed at Linux/UNIX system administrators.

The topology being used for the examples given in this article is shown in Figure 1.


Figure 1. Example Puppet topology

As you can see, centosa is the Puppet master. Four hosts will contact it for configuration, including itself. There are three CentOS hosts in total (centos[a-c]) and a single Solaris host (sol11test). We will start with server and agent installation, then move on to cover various Puppet configuration tasks, and develop our own security module to deploy a baseline security configuration to the hosts.

Whilst this article has been written with CentOS 6.x and Solaris 11.1 in mind, the techniques utilised should translate to RHEL/OEL 6.x and Solaris 10 without many changes. In case of doubt, consult the relevant security guide for your operating system at http://cisecurity.org.

Continue reading

Installing Puppet Client on Solaris 11 with OpenCSW

The easiest way to install Puppet on Solaris is to obtain the packages from http://OpenCSW.org. OpenCSW uses a tool called pkgutil on top of the existing Solaris toolset to obtain, install and maintain OpenCSW packages.
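In short (package names here are illustrative – check what is current in the catalog with pkgutil -a puppet):

    # Bootstrap pkgutil, refresh the catalog, then install the Puppet client
    pkgadd -d http://get.opencsw.org/now
    /opt/csw/bin/pkgutil -U
    /opt/csw/bin/pkgutil -y -i puppet3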

Continue reading

Puppet Module: security::tcpwrappers with Hiera

Module: security::tcpwrappers

Purpose: This module configures TCP Wrappers on CentOS 6 and Solaris 10 hosts with Hiera.

Notes: This module does some pretty fancy things. It uses Hiera to provide lookup for the $hostsallow and $hostsdeny variables, and interfaces with inetadm and svccfg on Solaris. Let’s look at hiera.yaml first.

File: /etc/puppet/hiera.yaml
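It looks something like this:

    ---
    :backends:
      - yaml
    :yaml:
      :datadir: /etc/puppet/hieradata
    :hierarchy:
      - "%{::clientcert}"
      - "%{::operatingsystem}"
      - common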

As you can see, we first check the %{::clientcert}, then the %{::operatingsystem} before falling back to common. So, essentially you have hostname-specific control if you need it.

Under /etc/puppet/hieradata, I have the following:
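Roughly this layout, with one YAML file per hierarchy level that needs specific values:

    hieradata/
        centosa.local.yaml    # host-specific values for centosa (matches %{::clientcert})
        Solaris.yaml          # values for all Solaris hosts (matches %{::operatingsystem})
        common.yaml           # fallback values for everything else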

Host centosa.local would use centosa.local.yaml (due to %{::clientcert} in the hierarchy) and pull the values in for security::tcpwrappers::hostsallow and security::tcpwrappers::hostsdeny from that file. Host centosb.local would fall through to common.yaml (unless there was a %{::clientcert}.yaml or CentOS.yaml), and a Solaris host would use Solaris.yaml.

File: security/manifests/tcpwrappers.pp

Notes: Uses Hiera to copy in appropriate files. On Solaris, it configures inetd-controlled services to use TCP Wrappers via inetadm, and enables TCP Wrappers for the RPC portmapping service via svccfg.
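In outline, the class looks something like this (a sketch: the real module may source whole files rather than take inline content, but the Hiera-supplied parameters and the Solaris inetadm/svccfg commands are the key parts):

    class security::tcpwrappers (
      $hostsallow,   # supplied by Hiera: security::tcpwrappers::hostsallow
      $hostsdeny,    # supplied by Hiera: security::tcpwrappers::hostsdeny
    ) {

      file { '/etc/hosts.allow':
        ensure  => file,
        owner   => 'root',
        mode    => '0644',
        content => $hostsallow,
      }

      file { '/etc/hosts.deny':
        ensure  => file,
        owner   => 'root',
        mode    => '0644',
        content => $hostsdeny,
      }

      if $::operatingsystem == 'Solaris' {
        # Turn on TCP Wrappers for all inetd-controlled services
        exec { 'inetd-tcp-wrappers':
          command  => 'inetadm -M tcp_wrappers=TRUE',
          unless   => 'inetadm -p | grep tcp_wrappers=TRUE',
          path     => ['/usr/sbin', '/usr/bin'],
          provider => shell,
        }

        # Enable TCP Wrappers for the RPC port mapper and refresh the service
        exec { 'rpcbind-tcp-wrappers':
          command  => 'svccfg -s network/rpc/bind setprop config/enable_tcpwrappers=true && svcadm refresh network/rpc/bind',
          unless   => 'svcprop -p config/enable_tcpwrappers network/rpc/bind | grep true',
          path     => ['/usr/sbin', '/usr/bin'],
          provider => shell,
        }
      }
    }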


Running Puppet Master under Apache and Passenger – CentOS 6.4

I have been running my puppetmaster using the embedded WEBrick server for a while. I decided it was time to migrate to something a little more robust – namely Apache and Passenger. I loosely followed the documentation available on the Puppet site, although that covers Passenger 3.0.x and I’m using 4.0.x, so the supplied Apache configuration does not work as-is. There were also a few other changes I had to make along the way to suit my configuration requirements. My puppetmaster is running CentOS 6.4.

Continue reading

Adding Puppet DB to a CentOS 6.4-based puppetmaster

After configuring my puppetmaster to run on Passenger on Apache, the next step was to add Puppet DB so that I could take advantage of exported resources, as well as have a central store of facts/catalogs that I could query. I am configuring Puppet DB to run on the same host as my puppetmaster, on CentOS 6.4.

Whilst you can use the embedded HSQLDB database, I opted for the scale-out offered by a backend PostgreSQL database (which is recommended by Puppet Labs for deployments of more than 100 nodes). MySQL is not supported, as it doesn’t support recursive queries.

I installed PostgreSQL using my postgresql::server class. The class is sketched below (you can see it includes postgres, which does nothing more than install the PostgreSQL client package). Essentially, it covers the installation of the postgresql-server package on the host, initialisation tasks, ensuring an appropriate pg_hba.conf is installed (for example, replacing localhost ident authentication methods with password), registering the service, and starting PostgreSQL:
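    # Simplified sketch - representative rather than verbatim
    class postgresql::server {

      include postgres   # just installs the PostgreSQL client package

      package { 'postgresql-server':
        ensure => installed,
      }

      # Initialise the cluster on first run only
      exec { 'postgresql-initdb':
        command => '/sbin/service postgresql initdb',
        creates => '/var/lib/pgsql/data/PG_VERSION',
        require => Package['postgresql-server'],
      }

      # Replace localhost ident authentication with password authentication
      file { '/var/lib/pgsql/data/pg_hba.conf':
        ensure  => file,
        owner   => 'postgres',
        group   => 'postgres',
        mode    => '0600',
        source  => 'puppet:///modules/postgresql/pg_hba.conf',
        require => Exec['postgresql-initdb'],
        notify  => Service['postgresql'],
      }

      service { 'postgresql':
        ensure => running,
        enable => true,
      }
    }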

To install, it was as simple as:
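    # After adding "include postgresql::server" to the node's definition in site.pp
    puppet agent --test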

Next, I created a PostgreSQL user and database:
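    # As the postgres user
    su - postgres
    createuser -D -R -S -P puppetdb
    createdb -O puppetdb puppetdb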

The user is created so that it cannot create databases (-D), or roles (-R) and doesn’t have superuser privileges (-S) – it’ll prompt for a password (-P). Let’s assume a password of “s3cr3tp@ss” has been used. The database is created and owned (-O) by the puppetdb user.

Access to the database can then be tested:
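    psql -h localhost -U puppetdb puppetdb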

Next, install the puppetdb and puppetdb-terminus packages. I installed from the Puppet Labs yum repository:
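    yum install puppetdb puppetdb-terminus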

Configure /etc/puppetdb/conf.d/database.ini as appropriate (it ships configured for HSQLDB):
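Using the database and credentials created above, the relevant settings are:

    [database]
    classname = org.postgresql.Driver
    subprotocol = postgresql
    subname = //localhost:5432/puppetdb
    username = puppetdb
    password = s3cr3tp@ss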

Create /etc/puppet/puppetdb.conf, as appropriate for the Jetty configuration on the host. You can check the Jetty configuration under /etc/puppetdb/conf.d/jetty.ini. Here’s puppetdb.conf (my puppetmaster runs on sun.local):
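    [main]
    server = sun.local
    port = 8081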

Create /etc/puppet/routes.yaml:
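    ---
    master:
      facts:
        terminus: puppetdb
        cache: yaml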

Finally, update /etc/puppet/puppet.conf:
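    [master]
    storeconfigs = true
    storeconfigs_backend = puppetdb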

Once all steps are complete, restart your puppetmaster. As I’m running under Passenger:
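    # puppetdb itself must be running: service puppetdb start
    service httpd restart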

Run a puppet agent --test from one of your nodes (or wait for your scheduled runs):

As you can see, I ran from venus.local. Let’s check that the data is being stored in puppetdb – by the time I got there three more nodes had already had an agent run:
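One quick way to check is against the non-SSL Jetty port (8080, bound to localhost by default); adjust the /v3/ prefix to whatever API version your PuppetDB release supports:

    # List the nodes PuppetDB knows about
    curl http://localhost:8080/v3/nodes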

Awesome! Let’s get something meaningful. Suppose we want to know what OS is running on all of our nodes:
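    # The operatingsystem fact for every node, via the facts endpoint
    # (again, match /v3/ to your PuppetDB version)
    curl http://localhost:8080/v3/facts/operatingsystem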

An important note: ensure that all of your puppet nodes can connect to the puppetdb server on the appropriate SSL-enabled Jetty port (by default, 8081) so they can report in.

With Puppet DB now configured, I can start writing some reports, and using exported resources.

Puppet Module: apache2: VirtualHost templates

Module: apache2

Purpose: This module shows how an ERB template can be used to create VirtualHost definitions.

File: apache2/manifests/init.pp

Notes: This is the base class. It can just be included within a node definition to install a stock httpd from yum. Also sets up a site-specific define – see comments in file.
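A sketch of the shape of it (the site-specific define here, apache2::conf, is illustrative rather than the exact one from the module):

    class apache2 {

      package { 'httpd':
        ensure => installed,
      }

      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }

      # Illustrative site-specific define: drop a raw snippet into conf.d
      define conf ($content) {
        file { "/etc/httpd/conf.d/${name}.conf":
          ensure  => file,
          content => $content,
          require => Package['httpd'],
          notify  => Service['httpd'],
        }
      }
    }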

File: apache2/manifests/vhost.pp

Notes: Contains the virtual host definition. Supports only port 80 (else we’d need to ensure SELinux had the correct configuration for http_port_t, etc.), and fairly basic VirtualHost configuration. Anything more “fancy” can be implemented using the site-specific define set up in init.pp, or by using a custom $template.
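Roughly (a sketch; the $template parameter is what lets callers supply their own ERB, as noted above):

    define apache2::vhost (
      $docroot,
      $servername = $name,
      $template   = 'apache2/vhost-default.conf.erb',
    ) {

      include apache2

      file { $docroot:
        ensure => directory,
      }

      # Port 80 only, so the default SELinux http_port_t policy applies
      file { "/etc/httpd/conf.d/${name}.conf":
        ensure  => file,
        content => template($template),
        require => Package['httpd'],
        notify  => Service['httpd'],
      }
    }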

File: apache2/templates/vhost-default.conf.erb

Notes: ERB template for the VirtualHost configuration.
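Something along these lines (variable names match the sketch above):

    <VirtualHost *:80>
        ServerName   <%= @servername %>
        DocumentRoot <%= @docroot %>
        ErrorLog     logs/<%= @servername %>-error_log
        CustomLog    logs/<%= @servername %>-access_log combined
    </VirtualHost>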


Puppet Module: logging-server: Centralised rsyslog Server

Module: logging-server

Purpose: This module is used to configure my centralised rsyslog server, on CentOS 6.

File: logging-server/manifests/init.pp

Notes: This will refresh the rsyslog service if the configuration file is updated, and will also run restorecon to fix the SELinux context on the configuration file.
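The interesting resources look something like this (a sketch):

    file { '/etc/rsyslog.conf':
      ensure => file,
      owner  => 'root',
      group  => 'root',
      mode   => '0644',
      source => 'puppet:///modules/logging-server/rsyslog.conf',
      notify => [ Exec['rsyslog-restorecon'], Service['rsyslog'] ],
    }

    # Fix the SELinux context on the freshly deployed file
    exec { 'rsyslog-restorecon':
      command     => '/sbin/restorecon -v /etc/rsyslog.conf',
      refreshonly => true,
    }

    service { 'rsyslog':
      ensure => running,
      enable => true,
    }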

File: logging-server/files/rsyslog.conf

Notes: This configuration allows rsyslog to receive messages over both UDP and TCP.
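The listener portion of the file boils down to these directives (rsyslog v5 syntax, as shipped with CentOS 6):

    # Accept syslog messages over UDP and TCP on port 514
    $ModLoad imudp
    $UDPServerRun 514
    $ModLoad imtcp
    $InputTCPServerRun 514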