Archive for the ‘slugworthy’ Category.

Top tips from #pyconau

Last weekend I was at PyCon-AU in Hobart. Plenty has been said, on twitter and elsewhere, about what a great conf it was, so I won’t go into that too much.

I will mention that my biggest complaint is that there were too many talks that I wanted to see, so I missed about 2/3rds of them simply through being unable to be in more than one place at once. Fortunately all the talks that I missed are available on YouTube so I’ll be gradually catching up on them as time permits.

I came away from the conference with, amongst other things, a new grab-bag of tools that I plan to be using shortly. Some of the most valuable are:

I’m already excited about next year. Terrifyingly, I’ve already started planning a couple of talks I’m going to propose.

If you’re going to compare Terms of Service, kindly do so based on facts.

ObDisc: I used to work for Google. I still have lots of friends at Google. I have a bias towards trusting Google that’s largely based on knowing people who work there and trusting them personally. I pay money for many Google services – several Apps domains, for instance. I am also a paying customer of both Dropbox and iCloud.

I had thought that the nonsense about Google’s Terms of Service and their impact on Google Drive was dead after Nilay Patel‘s comprehensive summary of the Dropbox, Skydrive, Google Drive, and iCloud Terms of Service, but then I saw this tweet and realised that the nonsense is continuing. The linked article (published on Trend Micro’s “Cloud Security” blog) was written on the same day as Nilay’s piece, so it’s not new – but apparently this nonsense is still being spread.

Trend Micro also have a similar service, called Safe Sync. In the footnotes below, I’ve included (as well as the comparison from Nilay’s article) the equivalent sections from Safe Sync’s own EULA for you to compare.

All of these Terms are fairly standard. Amongst their many similarities, each of them has a “You retain ownership” clause[1], and a “You grant us the right to” clause[2]. [3]

Without exception, every bit of FUD I’ve seen has been predicated around comparing the “You retain ownership” clauses from the other services with Google’s “You grant us the right to” clause. Today’s bit of nonsense does exactly the same thing: it lists the “You retain ownership” clauses from Skydrive and Dropbox against the “You grant us rights” clause from Google. This one goes one step further though: it first argues that the “You retain ownership” clauses in the other Terms are vital for establishing a Reasonable Expectation of Privacy under US law; then makes the explicit claim that Google’s terms destroy any argument that content uploaded to your cloud storage service has a reasonable expectation of privacy – implying (although never actually stating) that Google’s Terms, unlike the others, lack the vital “You retain ownership” clause.

Utter nonsense.

It’s quite possible that not having a “You retain ownership” clause might have consequences for a Reasonable Expectation of Privacy; but as Google’s Terms are equivalent to the others, this would apply equally to the other services. I don’t see how this could arise from a genuinely mistaken reading of Google’s Terms, either: the “You retain ownership” clause is quite literally in the sentence immediately before the one quoted in the Trend Micro article – I don’t see how any honest attempt at understanding the Terms could miss the clause. I don’t see how this can be anything other than a deliberate attempt to create FUD.

The Trend Micro article ends with a plug for their own product. Its penultimate sentence says:

Here’s hoping the EFF shames Google into at least being less evil.

Good news! Ars Technica got in touch with the EFF and asked them to read over Google’s policy. Was the result the shaming that Trend Micro were hoping would be bestowed on their competitors?

When Ars spoke to the Electronic Frontier Foundation about Google Drive’s terms of service, the EFF found little about them that was more suspicious than in any other similar cloud service.

I’m sure that Google is indeed positively burning with shame.

Edited to add: Just to be clear, I’m not intending to imply that there is no reason to be concerned about putting your private data on any of these services. Any time you decide to use any of these services (or any cloud Webmail service, or an online photo sharing site, or a social network…) you need to carefully balance the utility you get from the service against the very real privacy and security issues associated with the service. However, these decisions need to be based on *facts*: what the relevant Terms and Policies actually say. Spreading FUD about the contents of the policies doesn’t help anyone make a decision about which services to use (or not to use).

[1] The “You retain ownership” clauses:

Google Drive: Some of our Services allow you to submit content. You retain ownership of any intellectual property rights that you hold in that content. In short, what belongs to you stays yours.

Dropbox: By using our Services you provide us with information, files, and folders that you submit to Dropbox (together, “your stuff”). You retain full ownership to your stuff. We don’t claim any ownership to any of it.

Skydrive: Except for material that we license to you, we don’t claim ownership of the content you provide on the service. Your content remains your content.

iCloud: Except for material we may license to you, Apple does not claim ownership of the materials and/or Content you submit or make available on the Service.

Trend Micro Safe Sync: You are the owner of your files and are solely responsible for your conduct and content of your files, as well as any of the content contained in communications with other users of the Trend Micro Products/Services. Trend Micro does not claim any ownership rights in your files.


[2] The “You grant us the right to” clauses:

Google Drive: you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones.

Dropbox: We may need your permission to do things you ask us to do with your stuff, for example, hosting your files, or sharing them at your direction. This includes product features visible to you, for example, image thumbnails or document previews. It also includes design choices we make to technically administer our Services, for example, how we redundantly backup data to keep it safe. You give us the permissions we need to do those things solely to provide the Services.

Skydrive: You understand that Microsoft may need, and you hereby grant Microsoft the right, to use, modify, adapt, reproduce, distribute, and display content posted on the service solely to the extent necessary to provide the service.

iCloud: you grant Apple a worldwide, royalty-free, non-exclusive license to use, distribute, reproduce, modify, adapt, publish, translate, publicly perform and publicly display such Content on the Service solely for the purpose for which such Content was submitted or made available, without any compensation or obligation to you. You understand that in order to provide the Service and make your Content available thereon, Apple may transmit your Content across various public networks, in various media, and modify or change your Content to comply with technical requirements of connecting networks or devices or computers. You agree that the license herein permits Apple to take any such actions.

Trend Micro Safe Sync: In order to make the Trend Micro Products/Services available to you, you agree to grant Trend Micro a limited, nonexclusive, perpetual, fully-paid and royalty-free, sub-licensable and worldwide license: (i) to use, copy, transmit, distribute, store and cache files that you choose to sync; and (ii) to copy, transmit, publish, and distribute to others the files as you designate.

[3] All of the services have Privacy Policies which modify the Terms of Service in various ways. It’s interesting to compare these too.

  • Google’s Privacy Policy mostly limits what they can do with the rights you’ve granted them under the ToS. For instance, although the Terms of Service require that you grant Google the right to use your data for “the limited purpose of … promoting … our Services”, the Privacy Policy seems to restrict Google’s ability to actually do this – as far as I can tell, only data you have expressly chosen to make world-visible could ever be used in this way.
  • Dropbox’s Privacy Policy, by contrast, greatly *expands* the rights Dropbox have. For instance, the Terms say that “aside from the rare exceptions we identify in our Privacy Policy, no matter how the Services change, we won’t share your content with … law enforcement, for any purpose unless you direct us to”. On the surface, this seems much more restricted than Google’s equivalent terms – until you find this in the Privacy Policy: “We may disclose to parties outside Dropbox files stored in your Dropbox and information about you that we collect when we have a good faith belief that disclosure is reasonably necessary to (a) comply with a law, regulation or compulsory legal request.” In short, despite the misleading wording in the Terms, Dropbox can and will share your data with law enforcement just as readily as any other corporation.

Precise Pangolin install hints

My desktop at work is a Dell Precision T5500 – a fairly standard desktop, you’d think. My video card is an NVIDIA Quadro FX 580.

I recently spent most of 3 days trying to upgrade from Lucid to Pangolin. I’m not going to bore you with the details, but here are some things I wish I’d known.

  • I have two monitors, one on each of the DisplayPort outputs. The LiveCD will not use either of them *unless you have a third monitor plugged in to the DVI port*. My monitors happen to be able to handle Picture-by-Picture, so I can actually make one of them track both the DisplayPort and DVI inputs, which comes in handy.
  • Although the installer can use the graphics card just fine, the system it installs by default is broken. At the very first part of the installer (a purple screen with a keyboard and a human – at least, I think that’s what those two fuzzy blobs are meant to be), press any key. You’ll be asked to choose a language, then you’ll get a menu with options like “Try Ubuntu without installing” and “Install Ubuntu”. Press F6, arrow-down to “nomodeset”, and press x to activate it. This makes no difference at all to the installer, but does result in it installing a system that can use your graphics card later.
  • This part of the installer uses the DVI input to your monitor. Take the time to set up Picture-by-Picture so you can track the install as it flips back and forth between DVI and DisplayPort throughout the rest of the process.
  • Now choose “Try Ubuntu without installing”. Despite the misleading name, this gives you a chance to set your system up before running the installer.
  • The installer may now switch from DVI to DisplayPort, but then again, sometimes it won’t. Be glad you set up PbP so you can catch it wherever it appears. If it’s on DisplayPort, you probably didn’t set nomodeset correctly. Don’t waste your time continuing with the installer, even though it seems to be working fine – just restart it.
  • The standard LiveCD does not support LVM, so will not handle the LVM partitions already on your desktop (you do use LVM, right?). You can switch to a terminal by pressing alt+F1, and then:
    • sudo apt-get install lvm2
    • sudo vgchange -ay

    You can then use alt+F7 to get back to the GUI and kick off the installer.

  • The installer doesn’t seem to be able to cope with existing swap partitions – at least, not when you have several swap partitions. It does amusing things like popping up modal dialogs to tell you that creating the swap space failed – and then doesn’t let you dismiss the dialog, so your only option is a hard power-down. Don’t waste your time, just tell the partitioner not to use any swap partitions at all.
  • If you choose to use encrypted homedirs, the process that creates the homedirs assumes you have at least one swap partition. Because you’ve had to choose not to use any swap partitions, this will fail – but it does so in a recoverable way. Just use alt+F1 again, “sudo swapon /dev/sdXY”, (assuming that /dev/sdXY is your swap partition), then switch back to the GUI and click “Try Again” on the installer.
  • When you’re setting up your partitions, you will be asked to choose a device for boot loader installation. Choose your hard drive, not your USB stick. Near the end of the installer, sometimes the installer will try to install grub on the USB stick anyway. This will fail, but you will get the chance to pick another partition to install GRUB to. Pick your actual hard drive again.
  • If you cancel the installer, it will think something went wrong and ask if you want to send a message with details to Ubuntu developers. If you cancel this, the window never dies. If you start the installer again, it will eventually reach a point where it’s blocked waiting for the original window to go away. Switch back to the console and use “kill $(ps auxwww | grep [a]pport | awk '{print $2}')” to terminate the original process.
  • Even though you’ve manually installed lvm2 and are installing onto LVM, ubuntu won’t bother installing LVM into the system it creates. Make sure you follow step 7 of this guide before you reboot. If you forget, you can always boot up the LiveCD again and run through this step.
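The kill incantation in the apport bullet is easy to fumble. pkill does the same job by matching against the full command line; here’s a minimal sketch (the kill_stuck wrapper name is my own invention, not part of any Ubuntu tooling):

```shell
#!/bin/sh
# Kill any process whose full command line matches a pattern.
# pkill -f matches the whole command line, much like grepping ps output.
kill_stuck() {
  pkill -f "$1" || true   # not an error if nothing matched
}
```

So kill_stuck apport replaces the whole ps/grep pipeline; pkill sends SIGTERM by default, which should be enough to make the stuck window go away.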

Edited to add: One more tip: Once you’re done, unplug the DVI cable. If you leave it plugged in, a reboot will see the system using the DVI output and ignoring the two DisplayPort outputs again. If I ever do a reboot and the screens go to sleep, I’m going to try plugging in the DVI cable again. The system really seems to love that DVI output.

OpenAustralia/ScraperWiki hackfest: my first ruby code!

This weekend, I’ve been hanging out at my old office, taking part in the OpenAustralia/ScraperWiki “What are you up to next weekend?” hackfest. I’ve been to quite a few OA hackfests before, but always as a host – this is the first time I’ve been to one with the intent to code.

I’ve been meaning to learn Ruby for a while, and this seemed like a good opportunity, so I decided to write a scraper to get some more data into PlanningAlerts.

PlanningAlerts is a project of the OpenAustralia Foundation, and aims to provide you with email alerts of development applications near you. Development applications are scraped from council websites, alerts are sent (via RSS or email) to people who have requested notifications about applications in that area; and the site gives you a simple way to send your feedback back to the council.

Henare from OpenAustralia has written a guide to writing scrapers using the excellent ScraperWiki. Utilising that, cadging from some of his existing scrapers, and asking a few noob questions along the way, I created a scraper that pulls in information about development applications from the Redfern/Waterloo Authority site.

The good parts of the code I’ve scraped together come from the doc or from other samples; the ugly parts are my own invention.

When I started working, the provided sample code looked like this:

 if ScraperWiki.select("* from swdata where `council_reference`='#{record['council_reference']}'").empty?
   ScraperWiki.save_sqlite(['council_reference'], record)
 else
   puts "Skipping already saved record " + record['council_reference']
 end

This breaks on a couple of corner cases: if the swdata table doesn’t already exist, this will die. If you want to trample on your existing data, you have to manually comment out 4 lines of code. It also results in one select call per record – fine in small cases, but potentially a time-sink for larger ones.

While I was working on the code, the first problem was fixed by changing the first line to:

if (ScraperWiki.select("* from swdata where `council_reference`='#{record['council_reference']}'").empty? rescue true)

I expanded on that (and along the way taught myself a little bit about Ruby classes):

class Saver
  def initialize
    # If you want to trample on existing data, set this to true
    @trample_data = false
    # Pull the list of known references in one query; nil if swdata doesn't exist yet
    @references = (ScraperWiki.select("council_reference from swdata").map { |r| r['council_reference'] } rescue nil)
  end

  def save(record)
    if record
      if @trample_data || @references.nil? || !@references.include?(record['council_reference'])
        ScraperWiki.save_sqlite(['council_reference'], record)
      else
        puts "Skipping already saved record " + record['council_reference']
      end
    end
  end
end

This will only do one lookup, and can then do in-memory comparisons to decide if the database needs to be updated for each record. This handles the case where swdata doesn’t exist yet; and if you want to trample on the data, just one word needs to be changed.

There’s some real ugliness in other parts of the code though.

  • The entire page uses a tables-based layout, so to find the data I want I have to use the selector 'table table table table table table table table tr'.
  • Both DAs on the site right now have the same data items in the same order; but rather than assume this is consistent, I have my parser iterating over the rows and using a nasty big case statement to interpret the contents of the second cell based on the value of the first cell in the same row.
  • Each DA is on public exhibition from a specific date to another specific date. The two dates are expressed in compact form: if the month/year values are the same for both dates, they’ll only be expressed once, on the second date. There’s another nasty case block to handle the different possible values here and extract useful dates.
  • Every time the code encounters the start of a new record, it tries to save the old record. This leads to an attempt to save an empty record at the start of the parsing (hence the “if record” test in save); and a need to manually do One Last Save at the bottom of the code.

The complete code is available on ScraperWiki, and the data is already available on the PlanningAlerts site.

Running multiple instances of Chrome on Mac/Linux

Edit 2013/02/24 – This post is seriously out-of-date now. Chrome has built-in multi-profile support that’s much easier to use than these scripts. Use that instead!

Sometimes it’s handy to be able to have multiple browser instances open at once. For instance, Google’s Multiple Login only allows me to have 3 accounts signed in at once, which isn’t enough for me to have all the personal accounts I want to check plus my work account. Even if it could handle more, I like to keep my personal and work search and browsing histories separate, so that it’s easier for me to find something I vaguely remember seeing recently.

When doing web development, it’s often handy to have one browser signed into the site as an admin, another signed in as a regular user, and one not signed in. Chrome’s “Incognito Window” feature can help with one of these, but you can’t have two Incognito windows at the same time (at least, not on Mac/Linux – I hear tell that the Windows version may have supported multiple incognito sessions at some point, but I don’t know if that’s still the case).


I’ve created a little script. I call it chrome and it lives in ~/bin on all my machines. It detects the platform and calls the appropriate binary.

More importantly, it takes one (optional) parameter, which it uses to figure out which profile to run.

I usually start my day by running this script twice: once as chrome work and once as chrome personal. The order is significant, as clicking on urls in other applications will result in them being opened in the first profile that ran. So, while I’m at work I want most things to open in the work profile; if I’m not working I want a different default behaviour.

If you don’t pass a parameter, the script will invoke the default profile – the one that gets used if you don’t specify a profile at all.
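For the curious, the core of the script is tiny. The sketch below just builds the command line and prints it rather than exec’ing it; the binary paths and the profile-directory naming are illustrative guesses, not necessarily what the real script uses:

```shell
#!/bin/sh
# Sketch: pick the platform's Chrome binary and build the launch command
# for an optional named profile.
chrome_cmd() {
  case "$(uname)" in
    Darwin) bin="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" ;;
    *)      bin="google-chrome" ;;
  esac
  if [ -n "$1" ]; then
    # each named profile gets its own user data directory
    printf '%s --user-data-dir=%s/.chrome-%s\n' "$bin" "$HOME" "$1"
  else
    printf '%s\n' "$bin"   # no argument: Chrome's default profile
  fi
}
```

Chrome treats each --user-data-dir as a completely separate instance, which is why chrome work and chrome personal never share cookies, history, or logins.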

I’ve put the script on github for your amusement and pleasure (and hardcore forking action).

Deprecating your phone number made easy

18 months ago, I ended up with an Optus account – I was on a 12 month contract in order to receive an iPhone. For various reasons, I decided not to port the number I’d been using for almost a decade to Optus, but keep it active on another carrier instead. As of a few weeks ago, I’ve now migrated away from Optus, and I want to switch back to my original number. I want to keep the number I’d been using on Optus active for a while, but I don’t want to be answering it – I just want people who use it to be notified about my new number.

This is made easier by the fact that the SIM lives in my Nexus One (given to me by my employer as a Christmas gift last year, but this post, as always, is entirely my own opinion), which runs Android 2.2. Unlike on an iPhone, this means I can have all sorts of applications always running in the background – and those apps can access the SMS database, respond to incoming SMSes, and send outbound SMSes.

I tried a few apps, but ended up settling on Ultimate SMS. This app allows me to set an auto-response sent in reply to any incoming SMS (‘James does not use this number any more; he can be reached on 0407123456 instead’). This app also forwards a copy of the inbound SMS on to my new number – so I usually get it, and respond to it, while the person who messaged me is still reading my auto-reply.

One last special feature from Telstra makes this twice as useful: SMSes sent from their Message2Text service show the original caller’s number as the origin of the SMS. This means that if anyone calls me and leaves a message, they still get an SMS in response notifying them of my new number. Even better, Ultimate SMS includes the original number when it forwards that SMS to me – so even if their call was from a number that can’t receive SMS, I still get their message on the phone I do carry, and I know what number the message came from.

Update: Between drafting this and posting it, my Nexus One went missing. I’m now doing the same thing on my G1 running Android 1.6.

openwrt, dnsmasq, linuxigd, and Back To My Mac

Simple task: set up my wrt-54g (running openwrt) with linuxigd so that “Back To My Mac” works[1].

linuxigd: trivial. Click a few buttons to enable it, done. I tried miniupnpd first; although it initially looked good, I couldn’t get it to work consistently.

However, that’s when I start getting the MobileMe prefpane telling me that BTMM couldn’t start because “Your DNS server isn’t responding”. A little bit of searching on Google finds me pages like this one, which baldly state that “Back to My Mac isn’t compatible with dnsmasq.”

Well, dear internets, I’m here to tell you that you are wrong. BTMM is perfectly compatible with dnsmasq. Sure, openwrt’s default settings don’t work, but that doesn’t make the two incompatible.

It did take me a while to figure out what was going on. The clue also came from Apple’s forums, which told me to do this:

betelgeuse:~ james$ echo "show State:/Network/BackToMyMac" | scutil

<dictionary> {
  : <dictionary> {
    ExternalAddress :
    StatusMessage : GetZoneData failed:
    AutoTunnelExternalPort : 4500
    StatusCode : -65554
    LLQExternalPort : 5353
    RouterAddress :
    LastNATMapResultCode : 0
  }
}

The vital clue was the StatusMessage, which tells you exactly which DNS lookup failed. The important thing is that the hostname starts with an underscore.

Take a look at the dnsmasq man page, specifically the filterwin2k option. Once upon a time, SRV records (and records with underscores) really were a sign that you had win2k machines on your network. Once upon a time, “triggering dial-on-demand links” was actually something to be worried about. Those times are long past.

I turned this option off (vi /etc/dnsmasq.conf, add a # at the start of that line to comment the option out, save the file, and run /etc/init.d/S65dnsmasq to restart the service). As expected BTMM now works fine. Well, as fine as you could expect.
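If you’d rather script the edit than open vi, it amounts to a single sed call. A sketch (the disable_filterwin2k name is mine; /etc/dnsmasq.conf is where the option lives on my router, yours may differ):

```shell
#!/bin/sh
# Comment out dnsmasq's filterwin2k option in the given config file.
disable_filterwin2k() {
  sed -i 's/^filterwin2k/#filterwin2k/' "$1"
}
# on the router: disable_filterwin2k /etc/dnsmasq.conf, then restart dnsmasq
```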

[1] I’m ideologically opposed to all things UPnP, and BTMM in particular. What’s the point of having a firewall if you’re going to allow everything inside to poke so many holes in it it may as well not be there? There’s nothing BTMM can give me that a small firewall hole (to allow SSH on a non-standard port) + ssh portforwarding can’t give me in a more controlled way – and without shelling out $$$ to Uncle Steve, too. Nevertheless…

For all your expert travel advice


QNAP TS-409 Pro: initial setup from a non-windows (linux/mac) machine

I just bought myself a QNAP TS-409 Pro from Skycomp. Very happy with both the device and Skycomp so far.

However, the initial setup was a struggle.

The device has a very limited openwrt-style firmware. Very, very limited: it contains the bare minimum functionality to be able to bootstrap the device with a more capable OS once you have disks installed.

The documented way of doing this is via a “QuickInstall Wizard” that comes on a provided CD in Mac and Windows flavors. I only have Macs on my home network, so the Windows flavor wasn’t usable for me. The Mac flavor is… interesting. I ran into the problem described here: in short, the full firmware isn’t pushed until after the drives are initialized; but the Wizard gets stuck at the “Initializing drives” stage, so the full firmware is never pushed.

I got around it using these instructions – they’re described as being “For linux”, but as they just use basic tools like telnet and ftpd, they’ll work on any *nix.

Some notes:

  • Obviously, I had to enable file sharing via FTP on my Mac first, under the “Sharing” prefpane: “File Sharing”, “Share files and folders using FTP”. As the warning states, this involves transmitting your username and password in cleartext: only enable this if you’re confident you’ll only be transmitting them across a safe network. Better yet, use a username/password you created just for this purpose, which has no special privileges and which will be turned off as soon as you’re done.
  • Out of the box, the device listens for telnet connections on port 13131. Username and password are “admin”.
  • Once you’ve successfully updated the firmware and rebooted, you won’t find a telnetd on 13131 any more. THIS IS NOT AN ERROR, DON’T PANIC. Instead, you’ll find an sshd listening on port 22.
  • You’ll also find a web interface listening on port 8080. If you visit that, you can start the process of setting up the device.
  • It may be helpful to have let the wizard run to the “Initializing drives” stage at least once. After I thought I knew what I was doing, I switched to a new set of disks and tried again; this time the hard drives weren’t mounted at all, so I couldn’t go through the documented process.

It’s not clear from the documentation, but the device creates a 500MB RAID-1 segment on each disk you insert (/dev/md9 in my case), and mounts this on /mnt/HDA_ROOT. This is where configs for the device, packages you install, and so on are stored.

The device can handle multiple raidsets – although with only 4 disks to play with, you’re not likely to end up with more than 2 sets. In my case I currently have three 1TB drives in a RAID-5 set, and a single 500GB disk sitting on its own.

Laundry powder gets huge upgrade

I was in the supermarket getting some laundry powder last night and noticed something really strange: every single brand of concentrated laundry powder was advertising on their packaging the fact that they’re about to be relaunched in a new version. The new powders are all going to be 2x as concentrated, and most brands made a big deal out of the fact that the new packaging will therefore be half the size.

Golly. Every brand? All at once? All deciding to redo their formulation, redo their packaging, and retool their manufacturing plants, all with identical changes to formulation and packaging, all at the same time? Unpossible!

You’d almost think that every brand of powder was actually exactly the same, made at the same plant, and just packaged slightly differently. But that would surely never happen!