Archive for the ‘Rambling’ Category
Ok, this is a long rant, but I just need to warn anyone else looking to buy this monitor. It's the worst piece of garbage I've ever owned. If you're thinking about buying one, just don't. Pretty much anything else you can get your hands on is better.
On paper, it's a pretty decent monitor: a regular 23″ panel with two HDMI inputs and one VGA. It seemed pretty ideal, since I was primarily looking to use it for my Raspberry Pi labs. For me, monitors are pretty disposable, and I don't pay much attention to them these days.
This monitor, however, must have been created either by a bunch of extremely incompetent engineers, or by skilled engineers with an extreme hatred for the human race.
For instance, the input detection is so shitty that when you plug a regular computer into it, you'll be well into the operating system's boot process before it even detects a signal (if at all). That means there's no way you'll be able to access the BIOS with this monitor (unless you know the exact key combination, and no, it's not just 'Delete' as it used to be in the good old days). When using this monitor with the Raspberry Pi, things get way worse.
In order for this piece of garbage to detect the Raspberry Pi, you need to carefully time powering up the Raspberry Pi relative to the monitor's own power-up cycle. If you power up the Raspberry Pi either too early or too late, the monitor won't detect it at all and will go to sleep.
Then there's the monitor's menu system. You'd imagine that switching between inputs on a monitor with three inputs would be a single key press. Nope, on the Acer S235HLBII you have to dive into the menu, and then into a submenu, to do it. Moreover, you can't even get to the menu if the monitor doesn't detect any input.
If you could at least force the monitor to a given input, I guess I could live with it, but you can't. It will automatically jump back and forth according to its own pathetic probing.
For the above reasons, I want to congratulate Acer for designing the world’s shittiest monitor.
There's no doubt that virtualization and the cloud are here to stay. So you migrated your entire architecture to the cloud and everyone is happy. Eventually, you'll come to a point where you start decommissioning servers.
If this were an on-premise server, all you had to do was power it off and perhaps put it to use elsewhere (or, if virtualized, simply delete it). In the cloud, it's tempting to do the same.
What people don't think about, however, is that most cloud vendors use regular magnetic disks. This means that when you delete a virtual drive, its blocks will be provisioned to someone else. Normally, the first thing the next person who gets provisioned your old disk blocks (or parts of them) would do is format them and fill them with their own data.
However, if this person is a malicious user, s/he could restore what was written to those disk blocks, just as s/he could with a magnetic drive that has been formatted.
Therefore, before I decommission any drives in the cloud, this is what I do:
- Power off the system
- Change the boot device to a Live CD (most Linux distributions will do)
- Run shred on the device
- Power off the system and delete the drive
While shredding the drive will take a fair amount of time, we know that even if a malicious user is provisioned the same disk blocks, they won’t find any of your data.
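For reference, here is a minimal sketch of the shred step from the live environment. I'm assuming the virtual drive shows up as /dev/sdb; verify the device name first, since shredding the wrong disk is unrecoverable.
# list the attached disks and double-check which one you're about to wipe
lsblk
# overwrite the whole device three times with random data, showing progress
shred -v -n 3 /dev/sdb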
I'm not sure if I'm the only one having issues with Time Machine, but it has been completely broken for me since I upgraded to Mountain Lion. Time Machine basically freezes and reports outrageous ETA statistics.
What is strange is that I did a fresh install. I do have FileVault 2 activated on both the Time Machine drive (which is brand new) and my internal drive. That should work fine, as I used the same setup on Lion.
At this point, I’m just waiting for 10.8.1 to be released (the developer preview is already out) and hoping that it will solve the issue. Other than that I’m out of ideas. I’ve formatted the Time Machine drive a few times just to ensure that it wasn’t a filesystem issue. Heck, I even ‘securely’ formatted it and wrote zeros all over the drive just to be safe, but still no change.
The one other thing that I'm thinking may be related is that the external drive is a USB 3.0 drive. Perhaps that is causing trouble for my MacBook Pro (MacBookPro8,2).
Update: Looks like I'm not alone in having this issue.
Update 2: After struggling with this annoying issue for a few weeks now, I decided to try using my NAS as the Time Machine target. Since you can encrypt remote backups as well, I thought I'd give it a shot. While it started out fine and copied about 1.5GB of data, it then stalled. I guess this proves that there wasn't anything wrong with my USB drive and that the issue is with Time Machine's engine.
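For reference, pointing Time Machine at a network share can also be done from the command line with tmutil (available since Lion). The host and share name below are just placeholders standing in for my setup:
# tell Time Machine to back up to an AFP share on the NAS
sudo tmutil setdestination afp://username@nas.local/TimeMachine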
Update 3: After a reboot and running 'Repair Disk Permissions' (in Disk Utility) on the system drive, I was able to successfully back up my drive to the NAS. I hate not being able to share a solution with all the other people who are having this issue, but it seems to work for me. As much as I'd love to see if it also works with the USB drive, I do not want to jeopardize anything right now.
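If you prefer a terminal over Disk Utility, I believe this is the equivalent on Mountain Lion-era OS X, though I only used the GUI myself, so treat it as an untested sketch:
# verify and then repair permissions on the boot volume
sudo diskutil verifyPermissions /
sudo diskutil repairPermissions /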
Update 4: It was probably unfair to blame this on Apple. It turns out that my SSD was giving up on me. Unlike with magnetic drives, there was no physical way to tell: no clicking noises, and no obvious write errors. Instead the disk appears to have failed silently, which caused Spotlight's indexing to fail. That in turn caused Time Machine to fail. I've now switched out the drive for a brand new one, and the issues appear to have gone away.
I'm a coffee junkie. Like many of my fellow geeks, I consume way more than the average person. On a normal day, I drink somewhere between 5 and 10 cups. How much is that in relation to the population at large?
To find the answer, let's turn to Wikipedia's List of countries by coffee consumption per capita. Let's assume that all the data in there is accurate. Let's also assume that one serving of coffee is about 6 grams of coffee beans.
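As a quick back-of-the-envelope calculation, assuming 7 cups a day as a middle-of-the-road figure for my 5-10 cup range:
7 cups/day × 6 g/cup × 365 days ≈ 15.3 kg of coffee beans per year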
If you've ever tried to install ImageMagick on FreeBSD, you've probably run into this issue too. You have a headless box in some datacenter, and you don't want to bloat the machine with X11.
You try to install the no-X11 version of ImageMagick:
cd /usr/ports/graphics/ImageMagick-nox11 && make install
The next thing you know, the dependency 'print/ghostscript9-nox11' gets installed. Notice that this is the 'no-x11' version. Yet, the fifth option from the top of its options dialog is X11.
Isn't it pretty obvious that I don't want X11 if I install the 'nox11' port? Why is that even an option?
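One way to avoid surprises like this (a sketch, not what the build above did) is to review the options for the port and all of its dependencies up front with the ports framework's config-recursive target, and untick X11 for ghostscript there before anything starts building:
cd /usr/ports/graphics/ImageMagick-nox11
# bring up the options dialog for this port and every dependency
make config-recursive
# then build and install as usual
make install clean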
Is it possible that Facebook knows more about how many visitors a given website has than Alexa or Hitwise? I'm not talking about people sharing a link on Facebook and then tracking outbound clicks. I'm talking about capturing all visitors. Both Alexa and Hitwise used to rely on browser add-ons to capture this data. Since only a small percentage of users will install such an add-on, they end up with really rough data (and skewed towards non-technical users).
So how can Facebook acquire more accurate data than these traditional companies? It's pretty simple. You know that Like button that is showing up all over the web these days? It turns out that Facebook hosts all those images (and you cannot host them yourself, as that would be a breach of their Terms of Service).
Since Facebook is hosting all those images, they know exactly (or at least could know) how many visitors you have on every page of your website that has a Like button. No estimates, but exact, up-to-date metrics. And if you're logged into Facebook when you visit a site with a Like button, they can also track that you, specifically, have visited.
The bottom line is that Facebook knows a lot more than you might think about the web as a whole.
If you ran one of the beta releases of Mac OS X Lion, you probably ran into a problem when trying to upgrade to the final release.
When I tried to upgrade one of my dev machines, I was prompted with an error saying "A newer version of this app is already installed on this computer".
Of course, you know that isn't the case, as OS X Lion just came out.
After some digging, I found the solution. It turns out that all you need to do is hold the Option key (⌥) while pressing Install. That will force Lion to install.
We all have our must-have apps. Whenever we re-install our system, these are the first apps we want to get in so that we can get back to business. From time to time we discover new apps that are really mind-blowing and we wonder how we survived without them. For me, 1Password and Visor are two apps like that.
Yesterday I read an article titled "Why Apple can't beat Android" over at VentureBeat. It was an interesting read, and Mr. Grim argues that Android is here to stay and will soon dominate the smartphone market. I do not disagree with him; it makes a whole lot of sense. In the opening of the article, Mr. Grim states that Windows is 'Big, ugly, buggy, clunky, and everywhere.' I think that's spot on, and I also think this is where Android is heading.
I got my hands on a Nexus One about six months ago, and I've been using it as my primary phone since. However, I've never really liked Android. Sure, there are some really awesome features in Android 2.2 (Froyo), like the WiFi hotspot feature, over-the-air contact/calendar sync, deep Google Voice integration, and pretty good Skype integration. Yet, I can only enjoy all of those features because the Nexus One runs pure Android. My network operator cannot restrict features on my device. If you, for instance, purchased an Android phone from a network operator (and managed, against all odds, to find one with Android 2.2), chances are the WiFi hotspot feature has been disabled.
In this article I'm going to try to explain why Android will never be on par with the iPhone (or rather, iOS). There are two parts to this argument: the first relates to Google, and the second relates to the network operators and hardware manufacturers. Let's start with Google itself.
When Apple announced Time Machine, I was overwhelmed and thought it was the best invention since sliced bread. I've been using it ever since, in setups with both a dedicated external hard drive and a Network Attached Storage (NAS) device.
Lately though, I've started to get more and more annoyed with Time Machine. It consumes a significant amount of resources, as it keeps backing up changes continuously, and it will keep filling the destination drive until it's completely full.
Another great thing about Time Machine is the ability to restore a backup from within the OS X installer. Simply boot from the installation disk, pick the Time Machine drive, and off you go. That's pretty awesome.
Unfortunately, the recovery takes many, many hours. I'm fine with the fact that it takes some time to copy my 200GB-ish backup over a gigabit network, but right now I'm staring at a screen saying the estimated time remaining is 26 hours. That's just bizarre. No, the NAS is not that slow; I can easily copy files from the NAS at ~20MB/sec, and at that rate 200GB should take roughly three hours, not 26. Yes, I know there are plenty of small files in the backup, but that doesn't explain speeds this slow.
I've done recoveries from external drives in the past, and I was equally surprised back then by how long they took.