Posts Tagged ‘Ubuntu’

Since we started WireLoad, the number of websites we host has grown steadily. It’s a myriad of sites, ranging from product sites to websites belonging to friends and family. Many of them are WordPress sites (just like this one), while others are more complicated. Some use databases, while others don’t. Since we’re often in a rush when setting up a site, documentation often suffers, and we end up spending time later trying to decipher how things were set up.

This past week I finally had enough and decided to resolve this issue once and for all. What we really needed was a template-based system that could take care of everything and provide us with a user-friendly interface. Puppet felt like the best tool for the job, so I got busy writing a custom module for this.

The final result is a module I named Puppet-hosting. It allows you to manage all your websites using Puppet. To add a new site, all you need to do is add a few lines to your site.pp, and you’re all set.

Here’s an example of how it can look:

hosting::site { 'My Website':
  type        => 'wordpress',
  url         => '',
  url_aliases => ['mysite.*', ''],
  ip          => $ipaddress_eth0,
  contact     => 'Viktor Petersson <[email protected]>',
}

Not only does this make it much easier to manage all the sites, it also speeds things up and avoids human errors, such as typos (and if you do make a typo, it’s easy to spot, since each site is only a few lines in site.pp).

I love Munin. It’s a great monitoring tool, quick to set up, and doesn’t come with too many bloated requirements. It’s also very flexible and easy to write plugins for.

On Ubuntu, Munin comes with most of the (non-custom) plugins I use. Unfortunately, the Nginx plugins (nginx_status and nginx_request) are incorrectly documented. The header of each plugin states that the default URL is ‘http://localhost/nginx_status‘, which certainly makes sense. The plugin even documents how one would set this up. However, the plugin logic tells a different story. In nginx_request, we can see that it relies on ‘hostname -f’ to look up the hostname it connects to (which on 99.999% of all production servers isn’t ‘localhost’).
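To see the mismatch concretely, compare the URL the documentation claims with the one the plugin actually builds (a quick sketch; the FQDN on your server will obviously differ):

```shell
# What the plugin's header documents as the default:
echo "http://localhost/nginx_status"

# What the plugin actually polls, since it builds the host from 'hostname -f':
host=$(hostname -f)
echo "http://$host/nginx_status"
```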

The fix, however, is very easy. All you need to do is add the URL to Munin’s plugin configuration file. This can be done with the following command:

if [[ $(cat /etc/munin/plugin-conf.d/munin-node | grep "nginx") = "" ]]; then echo -e "\n[nginx*]\nenv.url http://localhost/nginx_status" >> /etc/munin/plugin-conf.d/munin-node; fi

This can be run as many times as you’d like, as it only appends the config-snippet if it’s not there already.
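Here is the same idempotent-append pattern, sketched against a temp file so it is safe to try anywhere; on a real host the target would be /etc/munin/plugin-conf.d/munin-node:

```shell
# Use a temp file as a stand-in for the real config file.
conf=$(mktemp)

append_once() {
    # Only append the snippet if no "nginx" section exists yet.
    if ! grep -q "nginx" "$conf"; then
        printf '\n[nginx*]\nenv.url http://localhost/nginx_status\n' >> "$conf"
    fi
}

append_once
append_once                  # second run is a no-op
grep -c "env.url" "$conf"    # prints 1
```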

In my case, I wanted to push this out using Puppet, so I have the following block in my Puppet Munin module:

package { 'munin-node':
	ensure => 'present',
}

service { 'munin-node':
	ensure     => 'running',
	hasrestart => true,
	hasstatus  => true,
	enable     => true,
}

file { 'nginx_request':
	ensure  => 'link',
	path    => '/etc/munin/plugins/nginx_request',
	target  => '/usr/share/munin/plugins/nginx_request',
	require => Package['munin-node'],
}

file { 'nginx_status':
	ensure  => 'link',
	path    => '/etc/munin/plugins/nginx_status',
	target  => '/usr/share/munin/plugins/nginx_status',
	require => Package['munin-node'],
}

# Fixes a bug in the plugin and configures it to poll using localhost
exec { 'activate_nginx_munin':
	command => 'bash -c \'if [[ $(grep "nginx" /etc/munin/plugin-conf.d/munin-node) = "" ]]; then echo -e "\n[nginx*]\nenv.url http://localhost/nginx_status" >> /etc/munin/plugin-conf.d/munin-node; fi\'',
	user    => 'root',
	require => [ File['nginx_request'], File['nginx_status'] ],
	path    => [ '/bin', '/usr/bin' ],
}

(I also use a Puppet template for Munin’s config file, but that’s beyond the scope of this article.)

Munin-plugin for Zendesk

June 23, 2012 7:53 am    Posted by Viktor Petersson    Comments (0)

Recently, I’ve really started to appreciate Munin. I’ve already deployed Munin in multiple architectures, and I’m still impressed every time by how easy it is to set up.

I also really like how easy it is to write plugins. For a crash-course in writing plugins for Munin, take a look at this page.

Since I first deployed Munin to monitor YippieMove‘s architecture, I’ve written a handful of custom plugins to visualize various datapoints. However, one thing I’d been wanting for some time was a tool to visualize the volume of support tickets. Since we use Zendesk for support, and they already have an API, all the data I needed was already accessible.

I started writing the plugin this morning, and a few hours later I had one plugin for plotting all tickets (with their state), and another for customer satisfaction.

If you want to take it for a spin, I’ve published it on GitHub.

Monitor Memcached with Munin (on Ubuntu)

April 14, 2012 11:41 am    Posted by Viktor Petersson    Comments (2)

Let me first admit that I am new to Munin. I’ve played around with most monitoring tools, but for some reason never Munin. I really don’t know why, since it appears to be a great tool. As a result, this might be obvious to seasoned Munin users, but it wasn’t to me.

It appears that most stock plugins are configured in /etc/munin/plugin-conf.d/munin-node. This isn’t the case for the Memcached plugin. Hence this post.

Start by grabbing the Memcached plugin from here. Copy these files to /usr/share/munin/plugins.

Next, install the required Perl Memcached module:

sudo apt-get install libcache-memcached-perl

Now to the strange part. In order for this plugin to work, we need to include the hostname and the port of Memcached in the filename. We do this by creating a symlink to the plugin that includes this info (e.g. ‘memcached_traffic_127_0_0_1_11211’ if Memcached is listening on 127.0.0.1:11211).

Since I deployed this on a few servers, I wrote a simple shell-script that does this:

for i in $(find /usr/share/munin/plugins/memcached*); do
	ln -s "$i" "/etc/munin/plugins/$(basename "$i")127_0_0_1_11211"
done
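To make the naming convention concrete, here is the same pattern run against temp directories with made-up plugin file names (the real plugin files live in /usr/share/munin/plugins and end in an underscore, as in the example above):

```shell
# Stand-in source and destination directories; plugin names are examples.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/memcached_rates_" "$src/memcached_traffic_"

# Symlink each plugin with the host/port suffix appended to its name.
for i in "$src"/memcached*; do
    ln -s "$i" "$dst/$(basename "$i")127_0_0_1_11211"
done

ls "$dst"   # lists memcached_rates_127_0_0_1_11211 and memcached_traffic_127_0_0_1_11211
```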

After you’ve installed the plugin, let’s make sure it worked.

for i in $(find /etc/munin/plugins/memcached*); do
	sudo munin-run "$(basename "$i")"
done

Assuming you didn’t get any errors from the script above, go ahead and restart munin-node (service munin-node restart).

Update: It appears that this is a pretty common approach to passing variables to Munin plugins.

Recently I wrote a post titled ‘Notes on MongoDB, GridFS, sharding and deploying in the cloud,’ in which I talked about various aspects of running MongoDB and how to scale it. One thing we really didn’t take into consideration was whether MongoDB performs differently on different operating systems. I naively assumed that it would perform relatively similarly. That was a very incorrect assumption. Here are my findings from testing the write performance.

As it turns out, MongoDB performs very differently on CentOS 6.2, FreeBSD 9.0 and Ubuntu 10.04, at least when virtualized. I tried to set up the nodes as similarly as possible: they all had a 2GHz CPU, 2GB RAM and used VirtIO for both disk and network. All nodes also ran MongoDB 2.0.2.

To test the performance, I set up a FreeBSD 9.0 machine (with the same specifications). I then created a 5GB file with ‘dd’ and copied it into MongoDB on the various nodes using ‘mongofiles.’ I also made sure to wipe MongoDB’s database before I started to ensure similar conditions.

For FreeBSD, I installed MongoDB from ports, and for CentOS and Ubuntu I used 10gen’s MongoDB binaries. The data was copied over a private network interface. I copied the data five times to each server (“mongofiles -hMyHost -l file put fileN”) and recorded the time for each run using the ‘time’-command. The data below is simply (5120MB)/(average of real time in seconds).
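The arithmetic is straightforward; with made-up timings (placeholders, not the article’s measured results) it looks like this:

```shell
# Average the five 'real' timings (seconds), then divide the file size by it.
# The numbers here are placeholders, not measured results.
times="120 118 122 119 121"
avg=$(echo "$times" | tr ' ' '\n' | awk '{s += $1; n++} END {print s/n}')
awk -v mb=5120 -v t="$avg" 'BEGIN {printf "%.1f MB/s\n", mb/t}'
# prints: 42.7 MB/s
```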

Puppet on Ubuntu 10.04

December 18, 2011 4:44 am    Posted by Viktor Petersson    Comments (2)

Yesterday I decided that it’s about time to learn Puppet. I’ve had my eye on both Puppet and Chef for some time now. Yesterday after reading this Quora-thread and this blog-post, I decided to go with Puppet.

After downloading their test VM and going through the tutorial, I pretty quickly fell in love with the simplicity and structure. Puppet is straightforward and rather intuitive.

One of the architectures I wanted to deploy Puppet on was running Ubuntu 10.04 LTS. Unfortunately, the version in Ubuntu’s repository is really old (0.25.4) and not compatible with many of the cool things you can do with Puppet.

Fortunately, PuppetLabs does provide its own repository, but the instructions for adding this repo weren’t really on par with the rest of their excellent documentation. Hence this post.

If you’re a die-hard Ubuntu/Debian-fan, this is probably pretty straight-forward, but if you’re not, here is what you need to do:

sudo su - 
echo -e "deb lucid main\ndeb-src lucid main" >> /etc/apt/sources.list
apt-key adv --keyserver --recv 4BD6EC30
apt-get update
apt-get install puppet

Ok, so that was pretty straightforward. The only tricky part was importing the keys, but now you know how to do that too.

Rebuilding a Linux software RAID array

June 18, 2011 9:49 am    Posted by Viktor Petersson    Comments (3)

The process is pretty straightforward, and I’m writing this as a note-to-self for future reference more than anything else. If anyone else finds it useful, that’s great.

Identify the broken drive

Start by identifying the device as the system knows it (i.e. /dev/sdX or /dev/hdX). The following commands should provide you with the information:

cat /proc/mdstat
mdadm --detail /dev/mdX
cat /var/log/messages | grep -e "/dev/hd" -e "/dev/sd"
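In /proc/mdstat, a failed member is marked with an (F) flag, which makes it easy to spot. As a quick illustration on a sample line (made up for this example, not from an actual array):

```shell
# Filter the failed member out of a sample mdstat line; mdadm appends
# (F) to devices it has marked as faulty.
echo "md0 : active raid1 sda1[0] sdb1[1](F)" | grep -o '[a-z]*[0-9]*\[[0-9]*\](F)'
# prints: sdb1[1](F)
```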

Once you’ve identified the drive, you want to know more about it, as /dev/sdX doesn’t really tell us what the drive looks like. In my case, I have three identical drives, so the following command didn’t help me much, but maybe it does for you.

hdparm -i /dev/sdX

That should give you the model, the brand, and in some cases even the serial number, which should be plenty to identify the drive physically.

Replace the drive

Not much to be said here. I assume you already know this, but you need a drive of equal size or larger.

Partition the new drive

If your system boots up in degraded mode, just boot it normally. If not, boot it off a Live CD (I used Ubuntu’s LiveCD in ‘Rescue mode’).

Once you’ve made it to a console, the first thing we need to do is partition the new hard drive. The easiest way to do this is to use sfdisk with one of the existing disks as the template.

sfdisk -d /dev/sdY | sfdisk /dev/sdX

(where sdY is a working drive in the array, and sdX is your new drive)

Rebuilding the array

The final step is to add the new drive to the array. Doing this is surprisingly easy. Just type the following command:

mdadm /dev/mdZ -a /dev/sdX1

(assuming you want to add the partition sdX1 to the RAID array mdZ)

If that went fine, the system will now automatically rebuild the array. You can monitor the status by running the following command:

cat /proc/mdstat

This solution is so ugly that I felt that I had to post it =).

I had a problem. Whenever I plugged in or rebooted a 3G modem on a Linux machine, it appeared on a different path (/dev/ttyUSBX). That creates some issues, as I’m using Wvdial to connect to the internet, and Wvdial uses a hardcoded path to the modem. To fix this, I had to manually edit the file every time the path changed. That’s very annoying. Add in the fact that this is sitting on a remote machine that I have little physical access to, and it becomes a real problem.

My initial approach was to turn to udev and write a rule for the modem that creates an automatic symlink, such as /dev/modem. Unfortunately, when you add usb_modeswitch into the mix, it breaks. For some reason, usb_modeswitch simply wouldn’t detect the modem with the rule in place, rendering it useless.

Instead, I figured that if I wrote a Bash script that automatically creates a symlink, that would take care of the issue. Of course, it is very ugly, but it does indeed work. Now I can simply run this script from Cron and know that I always have the correct path to the modem.

So how does this script look, you may ask. This is how:

MODEM=$(cat /var/log/messages |grep "GSM modem (1-port) converter now attached to" | tail -n 3 | head -n 1 | perl -pe "s/.*GSM modem \(1-port\) converter now attached to (ttyUSB.*)$/\1/g;")
CURRENT=$(basename "$(readlink /dev/modem)")

if [ "$CURRENT" != "$MODEM" ]; then
	rm /dev/modem
	ln -s "/dev/$MODEM" /dev/modem
fi

I never said it was pretty, but it does indeed work. If you wonder what the ‘head’ and ‘tail’ parts are all about, it’s because the system creates three paths, but only the first one works.

Update: Turns out it is a bad idea to run this from Cron, as the logs will rotate. Instead, launch it at boot from rc.local, but make sure you insert a ‘sleep 10’ or similar to allow the modem to settle.

Update 2: Turns out there is a far more elegant solution to the problem. The system automatically generates a symlink for you. In my case, the modem is accessible via:
This means that you can hard-code that path instead of having to run a silly script to generate a symlink for you.

Local management tools are critical for most Linux and Unix distributions. For instance, if you delete Python 2.6 from your Ubuntu installation, it becomes more or less unusable. This is because most local management tools are written in Python. I have no problem with this. On the contrary, I think it makes a whole lot of sense to write management tools in a high-level language, such as Python or Ruby.

The problem is that there are many circumstances where you would need to install a different version of these languages, because some other tool or application you’re using requires it. This is likely to cause problems. It is particularly true if you’re running an LTS version of Ubuntu, or CentOS/RHEL (which is still using Python 2.5). Yes, you can run multiple versions of Python on the same machine, but it’s quite likely that applications will be confused about which version to use. Also, which version should the command ‘python’ point to? Yes, you can call Python by its full name (i.e. python26), but ‘python’ is still what many scripts call.

Monitor Nginx and disk-usage with Monit

July 12, 2010 2:19 am    Posted by Viktor Petersson    Comments (2)

Yesterday I posted an article on how to monitor Apache and PostgreSQL with Monit. After setting that up I was amazed how simple and flexible Monit was, so I moved on with two more tasks: monitor Nginx and disk usage.