Posts Tagged ‘Virtualization’
There's no doubt that virtualization and the cloud are here to stay. So you've migrated your entire architecture to the cloud and everyone is happy. Eventually, you'll reach a point where you start decommissioning servers.
If this were an on-premises server, all you'd have to do is power it off and perhaps put it to use elsewhere (or, if it were virtualized, simply delete it). In the cloud, it's tempting to do the same.
What people don't think about, however, is that most cloud vendors use regular magnetic disks. This means that when you delete a virtual drive, its disk blocks will eventually be provisioned to someone else. Normally, the next person who is provisioned your old disk blocks (or part of them) will simply format the volume and fill it with their own data.
However, if that person is malicious, they could recover what was written to those blocks, just as they could with a physical magnetic drive that has merely been formatted.
Therefore, before I decommission any drives in the cloud, this is what I do:
- Power off the system
- Change the boot device to a Live CD (most Linux distributions will do)
- Run shred on the device (see the example below)
- Power off the system and delete the drive
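As an illustration, the shred step looks roughly like this from a Linux live CD. The device name /dev/vdb below is only an assumption; what your virtual drive is called depends on the hypervisor and how the disk is attached, so verify it (for example with fdisk -l) before wiping anything.

```sh
# List the disks first so you shred the right device -- the name below is just an example.
fdisk -l

# Overwrite the whole device with three passes of random data,
# followed by a final pass of zeros (-z), printing progress (-v).
shred -v -n 3 -z /dev/vdb
```

If time is a concern, a single random pass (shred -n 1) already makes it very unlikely that the next tenant can recover anything useful.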
While shredding the drive will take a fair amount of time, we know that even if a malicious user is provisioned the same disk blocks, they won’t find any of your data.
In the previous post, I benchmarked three different virtual network drivers under FreeBSD. The clear winner was, perhaps not very surprisingly, the VirtIO network driver.
In this article I will do some further benchmarking and try to optimize the driver. As in the last post, I will use two FreeBSD 9.0 boxes with 2GB RAM and a 2GHz CPU each. Both nodes are connected over a private network and run in a public cloud (at CloudSigma).
As many of you might know, running tests in a public cloud is tricky. For instance, you can't control the load other nodes put on the host's resources or on the network. To cope with this, I ran every test five times with a 60 second sleep in between. This, of course, isn't perfect, but it is at least better than a single run.
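For reference, each test followed roughly this pattern. The benchmark tool and peer address here (iperf against 10.0.0.2) are placeholders for illustration; the point is simply the five repetitions and the pause between runs.

```sh
# Repeat the throughput test five times with a 60 second pause between runs.
# iperf and the peer address 10.0.0.2 are placeholders -- substitute whatever
# benchmark and private-network peer you are testing against.
for i in 1 2 3 4 5; do
    iperf -c 10.0.0.2 -t 30 >> results.txt
    sleep 60
done
```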
Some time ago, I wrote about how to use Virtio with FreeBSD 8.2. As I pointed out in that article, the performance was not nearly as good on FreeBSD 8.2 as on 9.0-RC1. Hence I wanted to move all my nodes to 9.0 as soon as possible to take advantage of the massive boost in I/O performance.
In this article I will walk you through the process of upgrading an existing system from FreeBSD 8.2 (without Virtio) to 9.0 with Virtio.
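To give a taste of what the switch involves, here is a sketch of the configuration that changes once the Virtio drivers take over: the disk and network devices get new names, so /etc/fstab and /etc/rc.conf have to follow. The exact partition and interface names below are assumptions and depend on how the original 8.2 system was set up.

```sh
# /etc/fstab -- the emulated ATA disk (ad0) shows up as a Virtio
# block device (vtbd0) once virtio_blk is driving it:
#   before:  /dev/ad0s1a    /    ufs    rw    1    1
#   after:   /dev/vtbd0s1a  /    ufs    rw    1    1

# /etc/rc.conf -- the emulated NIC (em0 in this example) becomes vtnet0:
#   before:  ifconfig_em0="DHCP"
#   after:   ifconfig_vtnet0="DHCP"
```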
In the past few years, virtualization has been the big topic everybody keeps talking about. There are good reasons for that, but one thing that really annoys me as a hardcore FreeBSD fan is how poorly FreeBSD performs when virtualized.
For some time, the Linux community has been using the Virtio drivers to boost both I/O and network performance. Simply put, Virtio drivers cut out unnecessary device emulation on the host, which both reduces the load on the host and improves performance in the guest.
Unfortunately, the FreeBSD community hasn't been able to take advantage of this, as there was no port for it. Luckily, that just changed, and here's how you enable it.
Just as a disclosure, I've only tried the I/O driver on CloudSigma, and it seems to be stable on both 8.2 and 9.0-RC1. According to this post, the network driver should work too. It should be said, however, that I/O performance on 8.2 is significantly slower than on 9.0-RC1.
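For the impatient, enabling the drivers essentially boils down to installing the port and loading the kernel modules at boot. The sketch below assumes the emulators/virtio-kmod port; leave out the modules you don't need (for instance the network driver if you only want the I/O boost).

```sh
# Build and install the Virtio kernel modules from ports
cd /usr/ports/emulators/virtio-kmod && make install clean

# /boot/loader.conf -- load the modules at boot
virtio_load="YES"        # core Virtio support
virtio_pci_load="YES"    # PCI transport
virtio_blk_load="YES"    # block (disk) driver
if_vtnet_load="YES"      # network driver (vtnet)
```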
My experience with VMware goes way back. I think the first version I ever used was VMware Workstation 4.0, back in '03. That's seven years ago. Back then it was really cool as a proof of concept, but not very useful, since the hardware (or at least my hardware) didn't have enough power (primarily RAM) to run multiple OSes simultaneously.
A few years ago I started to use VMware more seriously. VMware Server was great: it ran on Linux and was pretty flexible. It lacked a few features (such as multiple snapshots), but it did the job. When we first launched YippieMove, we actually ran the entire architecture on a few VMware Servers. It worked, but due to the budget hardware, it didn't perform as well as we would have liked. (We eventually switched to FreeBSD jails, and the article we wrote about it made it to Slashdot.)