Posts Tagged ‘CloudSigma’
A few months back I wrote csconnect to make it easier for myself to log on to our CloudSigma nodes. It’s a pretty simple application. All it does is poll CloudSigma’s API and look up the IPs for the node(s) specified.
Yet, since this was one of the first Python programs I ever wrote (beyond Hello World), the state of the code was pretty terrible. Last night I took it upon myself to rewrite the program from the ground up.
It works more or less the same. The only differences are that:
- I feel a lot better knowing the code is prettier
- It now uses a config file to store the CloudSigma credentials (instead of keeping them in the script itself)
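As a rough sketch of what the config-file approach might look like (the file name `~/.csconnect.conf`, the `[cloudsigma]` section, and the key names are hypothetical examples, not necessarily what csconnect actually uses):

```python
# Minimal sketch of reading CloudSigma credentials from a config file
# instead of hard-coding them in the script. The path, section name,
# and keys below are illustrative assumptions.
import configparser
import os

def load_credentials(path="~/.csconnect.conf"):
    parser = configparser.ConfigParser()
    parser.read(os.path.expanduser(path))
    section = parser["cloudsigma"]
    return section["username"], section["password"]
```

Keeping the credentials out of the script also means the script can be shared or committed to version control without leaking secrets.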
You can grab csconnect from its GitHub repo.
In the last few years, we’ve spent a lot of time migrating away from all our physical servers and into the cloud. This has been a very interesting task that presented its own set of challenges, but it has certainly been worth it.
One of the issues with working in a public cloud environment, though, is that you don’t necessarily have the same static IP configuration as you do with dedicated hardware. When you power off a node and spin it back up (or clone a new server, for that matter), it’s likely that it will switch IPs. This is at least the case with CloudSigma, which is what we use for a large part of our server needs.
Normally, you’d have to log in to CloudSigma’s web interface to find out the IP for a given node. Needless to say, this gets old pretty fast when you just want to quickly SSH into a node.
To resolve this, I wrote csconnect.py.
This handy little script connects to CloudSigma’s API and resolves your IPs. It even keeps a local cache of your servers’ IPs for quick lookups.
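The cache idea can be sketched roughly like this. The actual CloudSigma API call is stubbed out as a placeholder (`fetch_ip_from_api`), and the cache path is an assumed example, so treat this as an illustration of the pattern rather than csconnect’s real implementation:

```python
# Sketch of an IP lookup backed by a local JSON cache: only hit the
# API on a cache miss, otherwise answer from disk.
import json
import os

CACHE_PATH = os.path.expanduser("~/.csconnect.cache")

def fetch_ip_from_api(node):
    # Placeholder for the real CloudSigma API request.
    raise NotImplementedError

def lookup_ip(node, fetch=fetch_ip_from_api, cache_path=CACHE_PATH):
    cache = {}
    if os.path.exists(cache_path):
        with open(cache_path) as fh:
            cache = json.load(fh)
    if node not in cache:  # cache miss: ask the API and remember the answer
        cache[node] = fetch(node)
        with open(cache_path, "w") as fh:
            json.dump(cache, fh)
    return cache[node]
```

The nice property is that repeated SSH sessions to the same node don’t touch the API at all until the cache is invalidated.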
In the previous post, I benchmarked three different virtual network drivers under FreeBSD. The clear winner was, perhaps not very surprisingly, the VirtIO network driver.
In this article I will do some further benchmarking and try to tune the driver for better performance. As in the last post, I will use two FreeBSD 9.0 boxes with 2GB RAM and 2GHz CPU. Both nodes are set up with a private network and run in a public cloud (at CloudSigma).
As many of you might know, running tests in a public cloud is difficult. For instance, you can’t control the load other nodes put on the host’s resources and network architecture. To cope with this, I ran all tests five times with a 60-second sleep in between. This, of course, isn’t perfect, but it is at least better than a single test.
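The test harness described above can be sketched as a small script. This is an illustrative sketch, not the one actually used: it assumes `iperf` is installed on the client and that an iperf server is already listening on the peer node.

```python
# Run iperf several times against the peer node, sleeping between runs
# to smooth out noisy-neighbor effects in a shared public cloud.
import subprocess
import time

def iperf_once(server_ip):
    """Run a single iperf client test and return its raw output."""
    out = subprocess.run(
        ["iperf", "-c", server_ip, "-f", "m"],  # report in Mbit/s
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def run_benchmark(server_ip, runs=5, pause=60, test=iperf_once):
    results = []
    for i in range(runs):
        results.append(test(server_ip))
        if i < runs - 1:
            time.sleep(pause)  # let the host settle between runs
    return results
```

Averaging the five results (or at least eyeballing their spread) gives a rough sense of how much noise the shared host is adding.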
With the launch of FreeBSD 9, I was curious to learn how the VirtIO driver performed. I’ve seen a significant boost in disk performance, but how about the network driver?
Luckily, that’s rather easy to find out. I spun up two FreeBSD 9 nodes on CloudSigma and configured them with VirtIO (just like in this guide) and a private network. Once they were up and running, I installed Iperf and started testing away.
I had three different network drivers that I wanted to benchmark:
- Intel PRO/1000 (Intel 82540EM chipset)
- RealTek RTL8139
- VirtIO (QEMU/KVM)
In the past few years, virtualization has been the big topic everybody keeps talking about. There are good reasons for that, but one thing that really annoys me as a hardcore FreeBSD fan is how poorly FreeBSD performs when virtualized.
For some time, the Linux community has been using the VirtIO drivers to boost both I/O and network performance. Simply put, VirtIO is a set of drivers written to cut out unnecessary emulation on the host, which both reduces the load on the host and improves performance.
Unfortunately, the FreeBSD community hasn’t been able to take advantage of this, as there was no port for it. Luckily, that just changed, and here’s how you enable it.
Just as a disclosure, I’ve only tried the I/O driver on CloudSigma, and it seems to be stable on both 8.2 and 9.0-RC1. According to this post, the network driver should work too. It should, however, be said that the I/O performance on 8.2 is significantly slower than on 9.0-RC1.
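For reference, enabling kernel modules like these on FreeBSD is typically a matter of a few lines in /boot/loader.conf. The module names below follow the usual VirtIO naming, but double-check them against what the port actually installs on your version:

```
# /boot/loader.conf — load the VirtIO modules at boot
virtio_load="YES"        # core VirtIO support
virtio_pci_load="YES"    # VirtIO PCI transport
virtio_blk_load="YES"    # block (disk) driver
if_vtnet_load="YES"      # network driver
```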
For a few months now, we’ve been working on migrating our physical architecture for YippieMove over to CloudSigma. We got everything up and running swiftly, with the exception of one thing: CARP.
As it turns out, FreeBSD’s CARP implementation doesn’t really work very well in a virtual environment. (For those curious about the details, please see this mailing list post.)
In order to get up and running with CARP on CloudSigma, you need to do the following:
- Download the FreeBSD kernel source
- Download this patch (mirror)
- Apply the patch (cd /usr/src/sys/netinet && patch -p0 < /path/to/esx-carp.diff)
- Recompile and install your kernel (make sure to include “device carp” in your kernel config)
- Add “net.inet.carp.drop_echoed=1” to /etc/sysctl.conf
- Reboot into the new kernel
That’s it. You should now be able to set up CARP as usual. For more information on how to configure CARP, please see my article Setting up a redundant NAS with HAST and CARP. That article also includes detailed instructions on how to download FreeBSD’s kernel source and how to compile your kernel.
As a technical side note, I got this working with FreeBSD 8.2 and the kernel source from the RELENG_8 branch.
Credits: Matthew Grooms for the patch and Daniel Hartmeier for great analysis.