Posts Tagged ‘FreeBSD’

Munin-plugin for Zendesk

June 23, 2012 7:53 am    Posted by Viktor Petersson    Comments (0)

In recent times, I've really come to appreciate Munin. I've deployed it across multiple architectures already, and I'm still impressed every time by how easy it is to set up.

I also really like how easy it is to write plugins. For a crash-course in writing plugins for Munin, take a look at this page.
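To give a flavor of how simple the plugin protocol is, here is a minimal, generic sketch (not the Zendesk plugin itself; the field name and dummy value are placeholders). munin-node runs a plugin with the argument 'config' to learn about the graph, and with no argument to fetch values:

#!/bin/sh
# Minimal Munin plugin sketch. Called with "config", it describes
# the graph; called with no argument, it prints the current value.
case "$1" in
config)
    echo 'graph_title Example metric'
    echo 'graph_vlabel count'
    echo 'example.label example'
    exit 0
    ;;
esac
# Replace this placeholder with whatever fetches the real datapoint
# (for the Zendesk plugins, a call to their API).
echo 'example.value 42'

Drop the script into munin-node's plugin directory (or symlink it in), make it executable, and restart munin-node to pick it up.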

Since I first deployed Munin to monitor YippieMove‘s architecture, I’ve written a handful of custom plugins to visualize various datapoints. However, one thing I’d been wanting for some time was a tool to visualize our volume of support tickets. Since we use Zendesk for support, and they already have an API, all the data I needed was already accessible.

I started writing the plugin this morning, and a few hours later I had one plugin for plotting all tickets (with their state) and another for customer satisfaction.

If you want to take it for a spin, I’ve published it on Github.

Some time ago, I posted a question on Serverfault about how people manage and deploy ports in a large environment. Despite a fair number of comments, nobody seemed to really have the answer to this (or at least one that suited my needs). It simply appears that the built-in tools in FreeBSD (primarily portsnap and portupgrade) are not built for a shared ports-tree.

Having a local ports-tree on each node seemed both wasteful and inefficient to me, yet this is what I had to resort to.

With that decided, how do you optimize this new setup?
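For reference, the per-node baseline looks roughly like this (a sketch; the cron schedule is my own example, not from the post):

# One-time bootstrap of a local ports tree on each node:
portsnap fetch extract
# Keep it current from cron; "portsnap cron" sleeps for a random
# interval first so all the nodes don't hit the mirrors at once.
# /etc/crontab entry:
0 3 * * *  root  portsnap cron update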
more>>

The ‘pg_ha’ project has moved to Github

February 27, 2012 11:24 am    Posted by Viktor Petersson    Comments (0)

I recently posted a rather lengthy article titled “High availability with PostgreSQL, PGPool-II and FreeBSD.” The article was a by-product of actually setting this up; the blog-post was simply my own notes with some polish.

Little did I know when I started this that I would end up having to write this much code to make the system behave the way I wanted. While crafting the article, I decided that the most logical thing would be to create a Github repository with the scripts for easy maintenance.

After publishing the article, I realized that the instructions really should live with the code. Hence, I have now moved them over to the Github wiki.

Hopefully this will give the project more visibility and encourage other people to contribute.

Recently I wrote a post titled ‘Notes on MongoDB, GridFS, sharding and deploying in the cloud,’ in which I talked about various aspects of running MongoDB and how to scale it. One thing we really didn’t take into consideration was whether MongoDB performs differently on different operating systems. I naively assumed that it would perform relatively similarly. That was a very incorrect assumption. Here are my findings from testing write-performance.

As it turns out, MongoDB performs very differently on CentOS 6.2, FreeBSD 9.0 and Ubuntu 10.04, at least when virtualized. I tried to set up the nodes as similarly as possible: they all had a 2GHz CPU and 2GB RAM, and used VirtIO for both disk and network. All nodes also ran MongoDB 2.0.2.

To test the performance, I set up a FreeBSD 9.0 machine (with the same specifications). I then created a 5GB file with ‘dd’ and copied it into MongoDB on the various nodes using ‘mongofiles.’ I also made sure to wipe MongoDB’s database before I started to ensure similar conditions.

For FreeBSD, I installed MongoDB from ports; for CentOS and Ubuntu, I used 10gen’s MongoDB binaries. The data was copied over a private network interface. I copied the data five times to each server (“mongofiles -hMyHost -l file put fileN”) and recorded each run with the ‘time’ command. The data below is simply (5120MB)/(average real time in seconds).
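In shell terms, each benchmark amounted to something like this (the host address is a placeholder, and reading the test file from /dev/zero is my assumption; the post only quotes the mongofiles invocation):

# Create the 5GB test file (FreeBSD's dd takes a lowercase size suffix):
dd if=/dev/zero of=file bs=1m count=5120
# Copy it into GridFS five times, timing each run:
for i in 1 2 3 4 5; do
    time mongofiles -h 10.0.0.2 -l file put file$i
done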
more>>

I’m a big fan of FreeBSD. However, as painful as it is to admit, it isn’t always the best OS to run in the cloud. Compared to Linux, you get worse network and disk performance even with VirtIO installed. There are other issues too. For instance, it is likely that you won’t get CARP to fully work (while this works perfectly fine with OpenBSD’s CARP and Linux’s VRRP). I have written about workarounds for this issue in the past, but they do not seem to work equally well in FreeBSD 9.0.

Luckily, there is a userland implementation of CARP called UCARP that works better than the in-kernel CARP in this environment. It’s also very similar to CARP when it comes to configuration.
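To give a taste of the configuration, here is a minimal sketch (interface, addresses, password and script paths are all placeholders):

# Advertise the virtual IP 10.0.0.100 on vhid 1; run the mirror-image
# command (with its own --srcip) on the other node.
ucarp --interface=em0 --srcip=10.0.0.1 --vhid=1 --pass=mysecret \
      --addr=10.0.0.100 \
      --upscript=/usr/local/etc/vip-up.sh \
      --downscript=/usr/local/etc/vip-down.sh

The up and down scripts typically just add and remove the virtual IP, e.g. 'ifconfig em0 alias 10.0.0.100 netmask 255.255.255.255' and the matching '-alias'.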
more>>

In the previous post, I benchmarked three different virtual network drivers under FreeBSD. The clear winner was, perhaps not very surprisingly, the VirtIO network driver.

In this article I will do some further benchmarking and try to tune the driver. As in the last post, I will use two FreeBSD 9.0 boxes with 2GB RAM and a 2GHz CPU. Both nodes are set up with a private network and run in a public cloud (at CloudSigma).

As many of you might know, running tests in a public cloud is difficult. For instance, you can’t control the load other nodes put on the host’s resources and network architecture. To cope with this, I ran all tests five times with a 60 second sleep in between. This, of course, isn’t perfect, but it is at least better than a single run.
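The harness was essentially a loop of this shape (the peer address is a placeholder, and the plain client invocation relies on Iperf's defaults):

# Run the same test five times with a pause between runs:
for i in 1 2 3 4 5; do
    iperf -c 10.0.0.2
    sleep 60
done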
more>>

With the launch of FreeBSD 9, I was curious to learn how the VirtIO driver performed. I’ve seen a significant boost in disk performance, but how about the network driver?

Luckily, that’s rather easy to find out. I spun up two FreeBSD 9 nodes on CloudSigma and configured them with VirtIO (just like in this guide) and a private network. Once they were up and running, I installed Iperf and started testing away.
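Getting Iperf going takes all of a minute (a sketch; the addresses are placeholders):

# On both nodes:
cd /usr/ports/benchmarks/iperf && make install clean
# On the receiving node:
iperf -s
# On the sending node, pointing at the receiver's private IP:
iperf -c 10.0.0.2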

I had three different network drivers that I wanted to benchmark:

  • Intel PRO/1000 (Intel 82540EM chipset)
  • RealTek RTL8139
  • VirtIO (QEMU/KVM)

more>>

Some time ago, I wrote about how to use Virtio with FreeBSD 8.2. As I pointed out in that article, the performance was not nearly as good in FreeBSD 8.2 as it was in 9.0-RC1. Hence I wanted to move all my nodes over to 9.0 as soon as possible to take advantage of the massive boost in I/O performance.

In this article I will walk you through the process of updating an existing system from FreeBSD 8.2 (without Virtio) to 9.0 with Virtio.

If you’re just curious about how to get Virtio working on a fresh FreeBSD 9.0 installation, skip to Step 2.
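The crux of the disk switch, for the impatient (a sketch of the kind of change involved; device and partition names will vary with your layout): once the VirtIO block driver is loaded, the disk appears as vtbd0 instead of ad0/ada0, so /etc/fstab must be updated to match before rebooting.

# /etc/fstab: the old ATA device...
# /dev/ada0p2    /    ufs    rw    1    1
# ...becomes the VirtIO block device:
/dev/vtbd0p2     /    ufs    rw    1    1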
more>>

This really bugs me…

December 14, 2011 4:39 am    Posted by Viktor Petersson    Comments (0)

If you’ve ever tried to install ImageMagick on FreeBSD, you’ve probably run into this issue too. You have a headless box in some datacenter, and you don’t want to bloat the machine with X11.

You try to install the no-X11 version of ImageMagick:

cd /usr/ports/graphics/ImageMagick-nox11 && make install

The next thing you know, the dependency ‘print/ghostscript9-nox11’ gets installed. Notice that this is the ‘no-x11’ version. Yet, look at the fifth option from the top:
[Screenshot: the ports options dialog for ghostscript9-nox11, with an X11 option still listed]

Isn’t it pretty obvious that I don’t want X11 if I install the ‘nox11’ port? Why is that even an option?
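For what it’s worth, the workarounds I know of (my own suggestions, not from the post) are to answer all the option dialogs up front, or to pin the knob globally:

# Pre-answer the dialogs for the whole dependency tree:
cd /usr/ports/graphics/ImageMagick-nox11
make config-recursive
make install clean
# ...or disable X11 for every port via /etc/make.conf:
echo 'WITHOUT_X11=yes' >> /etc/make.conf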

How to use Virtio on FreeBSD 8.2+

October 20, 2011 9:08 am    Posted by Viktor Petersson    Comments (7)

In the past few years, virtualization has been the big topic everybody keeps talking about. There are good reasons for that, but one thing that really annoys me as a hardcore FreeBSD fan is how poorly FreeBSD performs virtualized.

For some time, the Linux community has been using the Virtio drivers to boost both I/O and network performance. Simply put, Virtio is a set of drivers written to cut out unnecessary device emulation on the host, which both reduces load on the host and improves performance.

Unfortunately, the FreeBSD community hasn’t been able to take advantage of this, as there was no port for it. Luckily, that just changed, and here’s how you enable it.

Just as a disclosure, I’ve only tried the I/O driver on CloudSigma, and it seems to be stable on both 8.2 and 9.0-RC1. According to this post, the network driver should work too. It should be said, however, that the I/O performance on 8.2 is significantly slower than on 9.0-RC1.
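In practice, enabling it boils down to building the kernel modules from the new port and loading them at boot (a sketch assuming the emulators/virtio-kmod port):

# Build and install the VirtIO kernel modules:
cd /usr/ports/emulators/virtio-kmod && make install clean
# Then load them at boot from /boot/loader.conf:
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"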
more>>