Configuring and Deploying MCollective with Puppet on CentOS 6

The Marionette Collective (MCollective) is a server orchestration/parallel job execution framework available from Puppet Labs (http://docs.puppetlabs.com/mcollective/). It can be used to programmatically execute administrative tasks on clusters of servers. Rather than directly connecting to each host (think SSH in a for loop), it uses publish/subscribe middleware to communicate with many hosts at once. Instead of relying on a static list of hosts to command, it uses metadata-based discovery and filtering and can do real-time discovery across the network.

Getting MCollective up and running is not a trivial task. In this article I’ll walk through the steps required to set up a simple MCollective deployment. The middleware of choice, as recommended by the Puppet Labs documentation, is ActiveMQ. We’ll use a single ActiveMQ node for the purposes of this article; for a production deployment, you should definitely consider a clustered ActiveMQ configuration. Again, for the sake of simplicity, we will only configure a single MCollective client (i.e. our “admin” workstation). For real-world applications you’ll need to manage clients as per the standard deployment guide.

There are four hosts in the lab: centos01, which is our Puppet master and MCollective client; centos02, which will be the ActiveMQ server and an MCollective server; and centos03 and centos04, which are both MCollective servers. All hosts already run Puppet agents, which I’ll use to distribute the appropriate configuration across the deployment. All hosts are running CentOS 6.5 x86_64.

Getting Started

As per the deployment guide, the first thing we need to do is get our numerous credentials and certificates in order. Thankfully, many of the required certificates are already part of the deployed Puppet infrastructure, so we can reuse those. Read the deployment guide for further detail on which keys are required for which components of the MCollective deployment.

Traffic between MCollective and ActiveMQ uses CA-signed X.509 certificates for encryption and verification. We can use the Puppet CA for this.

First, we need to decide on a username and password for connecting to ActiveMQ. I’ll use the username mcollective with password Passw0rd. I suggest you choose a much stronger password – for the lab, this is fine.

Next we need to verify the location of our CA certificate (that which has already signed some of the required certificates, and that we’ll use to sign some of our new certificates below). Run the following command on the Puppet master, and verify the paths:
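The exact settings available vary with your Puppet version; on open source Puppet 3 something along these lines will do:

    puppet config print ssldir certdir privatekeydir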

Our CA cert will be located at ${certdir}/ca.pem.

Generate a certificate for ActiveMQ on the Puppet master (we could choose to use an existing certificate here, but I opted for a new one):
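Since the files referenced below are named activemq.pem, a command along these lines does the job:

    puppet cert generate activemq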

The certificate is now available at ${certdir}/activemq.pem and the key at ${privatekeydir}/activemq.pem. We’ll need to transfer those to the ActiveMQ server later and create a truststore and a keystore.

We now need to generate a shared server keypair – again on the Puppet master:
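Along the same lines:

    puppet cert generate mcollective-servers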

The shared server certificate is now available at ${certdir}/mcollective-servers.pem and the key at ${privatekeydir}/mcollective-servers.pem.

We now need per-server certificates. Thankfully, every server node already has its own Puppet agent certificate, so we can re-use it instead of generating new server certificates. The certificate and key are located at ${certdir}/<HOSTNAME>.pem and ${privatekeydir}/<HOSTNAME>.pem respectively.

Finally, we need the client certificate. In the real world one of these would be generated for each admin user, so I specified my name when generating the keypair:
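Substitute your own name for toki:

    puppet cert generate toki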

The certificate is at ${certdir}/toki.pem and the key at ${privatekeydir}/toki.pem.

OK – all of the required certificates and credentials are taken care of.

ActiveMQ Configuration

I deployed ActiveMQ to centos02 via Puppet. My simple ActiveMQ Puppet module is located on centos01 at /etc/puppet/modules/activemq. I downloaded the sample activemq.xml single-broker configuration file to /etc/puppet/modules/activemq/files/activemq.xml as follows:
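At the time of writing the single-broker example ships with the MCollective source, so a fetch along these lines should work (check the MCollective deployment guide if the path has moved):

    wget -O /etc/puppet/modules/activemq/files/activemq.xml \
      https://raw.githubusercontent.com/puppetlabs/marionette-collective/master/ext/activemq/examples/single-broker/activemq.xml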

My /etc/puppet/modules/activemq/manifests/init.pp is as follows:
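A minimal class along these lines is sufficient (file modes and ownership here are illustrative):

    class activemq {
      package { 'activemq':
        ensure => installed,
      }

      # Deploy the broker configuration shipped with the module
      file { '/etc/activemq/activemq.xml':
        ensure  => file,
        owner   => 'activemq',
        group   => 'activemq',
        mode    => '0600',
        source  => 'puppet:///modules/activemq/activemq.xml',
        require => Package['activemq'],
        notify  => Service['activemq'],
      }

      service { 'activemq':
        ensure  => running,
        enable  => true,
        require => Package['activemq'],
      }
    }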

ActiveMQ is installed from the Puppet Labs repo, which I already have enabled (from when I installed the Puppet agents onto the nodes). If you don’t already have the repo enabled, you can do so as follows:
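The exact release RPM changes over time, but it will look something like:

    rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm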

Once the manifest was complete, I just ran puppet agent --test on centos02, and ActiveMQ was installed, up and running.

We now need to perform further configuration. On the Puppet master, edit activemq.xml with the credentials we decided upon earlier. You can see I also set an admin password, as well as the password for the mcollective user. Remember to make your passwords more secure than these:
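The users live in the simpleAuthenticationPlugin stanza inside <plugins>; roughly like this (both passwords here are placeholders):

    <simpleAuthenticationPlugin>
      <users>
        <authenticationUser username="mcollective" password="Passw0rd" groups="mcollective,everyone"/>
        <authenticationUser username="admin" password="AdminPassw0rd" groups="mcollective,admins,everyone"/>
      </users>
    </simpleAuthenticationPlugin>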

Edit the transportConnectors stanza so that stomp+nio+ssl is used as follows:
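Something like this, with 61614 being the Stomp-over-SSL port used throughout this article:

    <transportConnectors>
      <transportConnector name="stomp+nio+ssl" uri="stomp+nio+ssl://0.0.0.0:61614?needClientAuth=true"/>
    </transportConnectors>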

Next, we need to transfer the appropriate certificates and key over to centos02 from centos01:
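I staged the files in /tmp on centos02; adjust paths and destinations to taste:

    cd /var/lib/puppet/ssl
    scp certs/ca.pem              root@centos02:/tmp/ca.pem
    scp certs/activemq.pem        root@centos02:/tmp/activemq.pem
    scp private_keys/activemq.pem root@centos02:/tmp/activemq.key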

As you can see, we’ve copied the CA cert (ca.pem), the ActiveMQ certificate (activemq.pem) and the ActiveMQ key (which I carefully renamed activemq.key to avoid clobbering the certificate).

First, create a truststore:
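On centos02 (keytool will prompt you to set a store password and to trust the certificate):

    keytool -import -alias 'Puppet CA' -file /tmp/ca.pem -keystore truststore.jks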

Verify that the md5 certificate fingerprints match:
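Depending on your JDK, keytool may print an MD5 or SHA1 fingerprint; use the matching openssl digest flag:

    openssl x509 -in /tmp/ca.pem -fingerprint -md5 -noout
    keytool -list -keystore truststore.jks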

Next, create the keystore, ensuring that the same password is used for all steps:
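Roughly as follows (the PKCS12 bundle is just an intermediate step):

    cat /tmp/activemq.key /tmp/activemq.pem > /tmp/activemq-combined.pem
    openssl pkcs12 -export -in /tmp/activemq-combined.pem -out /tmp/activemq.p12 -name activemq
    keytool -importkeystore -destkeystore keystore.jks -srckeystore /tmp/activemq.p12 \
      -srcstoretype PKCS12 -alias activemq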

Again, verify the fingerprints:
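This time comparing the ActiveMQ certificate against the keystore entry:

    openssl x509 -in /tmp/activemq.pem -fingerprint -md5 -noout
    keytool -list -keystore keystore.jks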

Copy the stores into place:
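For example (the activemq user is created by the package):

    cp truststore.jks keystore.jks /etc/activemq/
    chown activemq:activemq /etc/activemq/*.jks
    chmod 0600 /etc/activemq/*.jks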

Next, update activemq.xml once again and add the following inside the broker stanza, substituting your passwords as appropriate:
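Something along these lines, with secret standing in for the store password chosen above:

    <sslContext>
      <sslContext
        keyStore="keystore.jks" keyStorePassword="secret"
        trustStore="truststore.jks" trustStorePassword="secret"/>
    </sslContext>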

Add a firewall rule to allow inbound access on port 61614:
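On CentOS 6 with iptables this is enough:

    iptables -I INPUT -p tcp -m tcp --dport 61614 -j ACCEPT
    service iptables save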

Finally, do a Puppet run:
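On centos02:

    puppet agent --test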

Your ActiveMQ configuration will be updated, and the middleware configuration will be complete.

MCollective Deployment

I used the following manifests to deploy MCollective. /etc/puppet/modules/mcollective/manifests/client.pp:
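A minimal version of the class might look like this (package names come from the Puppet Labs repository):

    class mcollective::client {
      # Core client plus the service and puppet client plugins
      package { ['mcollective-client',
                 'mcollective-service-client',
                 'mcollective-puppet-client']:
        ensure => installed,
      }

      file { '/etc/mcollective/client.cfg':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0640',
        source  => 'puppet:///modules/mcollective/client.cfg',
        require => Package['mcollective-client'],
      }
    }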

You can see that this manifest installs a couple of additional client plugins (service and puppet) so that I can query/restart system services remotely and manage remote Puppet agents. It also pulls down client.cfg, which is shown below. Remember – we’re running the client on the Puppet master, so paths can be specified directly to certificates within the Puppet /var/lib/puppet/ssl hierarchy:
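A client.cfg along these lines works for this lab; the centos01 file names assume the agent certname matches the short hostname, so adjust them to your actual certnames:

    # Collective and plugin directories
    main_collective = mcollective
    collectives = mcollective
    libdir = /usr/libexec/mcollective
    logger_type = console
    loglevel = warn

    # SSL security plugin: shared server public key plus my client keypair
    securityprovider = ssl
    plugin.ssl_server_public = /var/lib/puppet/ssl/certs/mcollective-servers.pem
    plugin.ssl_client_private = /var/lib/puppet/ssl/private_keys/toki.pem
    plugin.ssl_client_public = /var/lib/puppet/ssl/certs/toki.pem

    # ActiveMQ connector over TLS, authenticated with this node's Puppet certificate
    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = centos02
    plugin.activemq.pool.1.port = 61614
    plugin.activemq.pool.1.user = mcollective
    plugin.activemq.pool.1.password = Passw0rd
    plugin.activemq.pool.1.ssl = 1
    plugin.activemq.pool.1.ssl.ca = /var/lib/puppet/ssl/certs/ca.pem
    plugin.activemq.pool.1.ssl.cert = /var/lib/puppet/ssl/certs/centos01.pem
    plugin.activemq.pool.1.ssl.key = /var/lib/puppet/ssl/private_keys/centos01.pem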

For the servers, I first did the following to get the appropriate certificates (and shared server key) as well as the authorised client public key (in my case, toki.pem), into a deployable structure:
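The exact layout is up to you; something like the following puts everything under the module’s files directory (the .key suffix simply keeps the shared certificate and key from clobbering one another):

    mkdir -p /etc/puppet/modules/mcollective/files/ssl/clients
    cp /var/lib/puppet/ssl/certs/mcollective-servers.pem \
       /etc/puppet/modules/mcollective/files/ssl/mcollective-servers.pem
    cp /var/lib/puppet/ssl/private_keys/mcollective-servers.pem \
       /etc/puppet/modules/mcollective/files/ssl/mcollective-servers.key
    cp /var/lib/puppet/ssl/certs/toki.pem \
       /etc/puppet/modules/mcollective/files/ssl/clients/toki.pem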

The following class then deployed the MCollective server components (and populated facts.yaml, as per the deployment guide):
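A class roughly like this does the job; the facts.yaml inline_template is the snippet suggested by the deployment guide, and /etc/mcollective/ssl is populated from the files staged above:

    class mcollective::server {
      package { ['mcollective',
                 'mcollective-service-agent',
                 'mcollective-puppet-agent']:
        ensure => installed,
      }

      # Shared server keypair and authorised client certificates
      file { '/etc/mcollective/ssl':
        ensure  => directory,
        recurse => true,
        purge   => true,
        owner   => 'root',
        group   => 'root',
        source  => 'puppet:///modules/mcollective/ssl',
        require => Package['mcollective'],
        notify  => Service['mcollective'],
      }

      file { '/etc/mcollective/server.cfg':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0640',
        content => template('mcollective/server.cfg.erb'),
        require => Package['mcollective'],
        notify  => Service['mcollective'],
      }

      # Export node facts for metadata-based discovery and filtering
      file { '/etc/mcollective/facts.yaml':
        ensure  => file,
        owner   => 'root',
        group   => 'root',
        mode    => '0400',
        content => inline_template('<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>'),
      }

      service { 'mcollective':
        ensure  => running,
        enable  => true,
        require => Package['mcollective'],
      }
    }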

server.cfg.erb is as follows:
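Something along these lines; the only templated part is the node’s own certname, which is used to pick up its Puppet agent certificate and key:

    # Collective
    main_collective = mcollective
    collectives = mcollective
    libdir = /usr/libexec/mcollective
    logfile = /var/log/mcollective.log
    loglevel = info
    daemonize = 1

    # SSL security plugin: shared server keypair plus authorised client certs
    securityprovider = ssl
    plugin.ssl_server_private = /etc/mcollective/ssl/mcollective-servers.key
    plugin.ssl_server_public = /etc/mcollective/ssl/mcollective-servers.pem
    plugin.ssl_client_cert_dir = /etc/mcollective/ssl/clients

    # ActiveMQ connector over TLS, authenticated with this node's Puppet certificate
    connector = activemq
    plugin.activemq.pool.size = 1
    plugin.activemq.pool.1.host = centos02
    plugin.activemq.pool.1.port = 61614
    plugin.activemq.pool.1.user = mcollective
    plugin.activemq.pool.1.password = Passw0rd
    plugin.activemq.pool.1.ssl = 1
    plugin.activemq.pool.1.ssl.ca = /var/lib/puppet/ssl/certs/ca.pem
    plugin.activemq.pool.1.ssl.cert = /var/lib/puppet/ssl/certs/<%= scope.lookupvar('::clientcert') %>.pem
    plugin.activemq.pool.1.ssl.key = /var/lib/puppet/ssl/private_keys/<%= scope.lookupvar('::clientcert') %>.pem

    # Facts
    factsource = yaml
    plugin.yaml = /etc/mcollective/facts.yaml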

It is important to note that this configuration allows ALL users to perform ALL commands on ALL nodes, which is fine for my purposes, but may not be what you want. If it isn’t, you’ll need to look at a plugin such as ActionPolicy.

Run a puppet agent --test on all MCollective servers.

Testing

Now that MCollective has been rolled out, it’s time to test. The mco command-line tool is used to interact with MCollective from the client machine (centos01 in our case). Let’s try a simple ping test:
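Each connected server should answer with a round-trip time, followed by summary statistics:

    mco ping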

All three servers are responding. Let’s try obtaining a fact about our machines:
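For example, summarising the operatingsystemrelease fact from facts.yaml across the collective:

    mco facts operatingsystemrelease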

Let’s kick off a Puppet run:
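The puppet client plugin installed earlier provides this:

    mco puppet runonce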

All is working as expected.

This has only scratched the surface of the capabilities of MCollective, which is an amazingly rich tool for managing large-scale server deployments.