Below are the five most recent posts in my weblog. You can also see a chronological list of all posts, dating back to 1999.
The gig poster
On July 31st a friend and I went to see Maxïmo Park and support at a mini-festival day in Times Square, Newcastle. The key attraction for me was the top support band, Lush, who are back after a nearly 20-year hiatus.
Nano Kino 7"
I first heard of Lush quite recently, via the excellent BBC documentary Girl in a Band: Tales from the Rock 'n' Roll Front Line. They were excellent: the set drew heavily on material from their more dreampop/shoegaze albums, which is to my taste.
I also particularly enjoyed Warm Digits, motorik instrumental dance stuff that reminded me of Lemon Jelly mixed with Soulwax, who had two releases very reasonably priced on the merch stand; Nano Kino, in the adjacent "Other Rooms", also channelling dreampop/shoegaze; and finally Maxïmo Park themselves. I was there for Lush really, but I still really enjoyed the headliners. I've seen them several times, but I've lost track of what they've been up to in recent years. Both their earliest material and several newer songs were well received by the home crowd, and the atmosphere in the enclosed Times Square was excellent.
It's been four years since I last wrote about music players. In the meantime, two of my three Sansa Fuzes have broken, and the third does not have great Rockbox support. I've also been using a Sansa Clip+ (a leaving present from my last job, thanks again!) and a Sansa Clip Zip. Unfortunately SanDisk's newer Sansa devices (Sport, Jam - the only ones still in production) are not supported by Rockbox.
The Clips have been very reliable and sturdy players, but I have missed the larger display of the Fuze. Since I've been exploring HD audio, I've also been interested in something with a D/A converter that can handle it. I also still wish to carry my entire music library around with me, which limits my options.
I decided to try an iPod. The older iPods had a Wolfson-manufactured DAC which had a good reputation and supported (in headline terms at least) 24-bit/48kHz. The iPod colour (aka "4th gen") and subsequent models have a large colour display. The click-wheel interface is also very nice. Apple have now discontinued the "classic" iPod and their second-hand value has greatly increased, but I managed to get an older 5th-generation model ("video", with a base capacity of 30GB) whilst trading in some unwanted DVDs. The case was scratched to heck, but a replacement was readily and cheaply available from auction sites.
Rockbox support in these iPods is pretty good, and you can mod the hardware to support CF or SD cards with kits such as the iFlash boards by Tarkan Akdam. I picked one up, along with a new 128GB SD card.
Unfortunately I have found writing to the iPod to be very poor under Rockbox; it's fine for playback, though, and booting the iPod into the original firmware (OF) or DFU mode is very easy and works reliably.
Whilst Rockbox on the iPod works pretty well, installing it is far harder than on the SanDisk Sansa devices. The difficulty in my case was that Rockbox requires a PC-formatted (FAT32) iPod to install onto, and I had a Mac-formatted one. I couldn't find a way to convert the iPod to PC format using a Mac. I tried doing so on a PC, but for some reason the PC wasn't playing ball, so I gave up after a few hours. In the end I assembled the filesystem by hand on a Linux machine, using dumps of partition tables from other people's iPods. This was enough to convince iTunes on the Mac to restore its hidden partition and boot software without reverting back to a Mac disklabel format.
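For the curious, assembling such a layout on Linux looks roughly like the following. This is a hypothetical sketch rather than my exact steps: /dev/sdX stands in for the iPod's block device, and the partition size is illustrative.

```shell
# DANGER: destructive. Verify the device name with lsblk before running.
# A PC-formatted ("WinPod") iPod has an MBR label with a small firmware
# partition (type 0, "Empty") followed by a FAT32 data partition.
sudo sfdisk /dev/sdX <<'EOF'
label: dos
size=160MiB, type=0
type=b
EOF

# Rockbox requires the data partition to be FAT32:
sudo mkfs.vfat -F 32 /dev/sdX2
```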
For developing complex, real-world Docker images, there are a number of tools that can make life easier.
The first thing to realise is that the Dockerfile format is severely limited. At work, we have eventually outgrown it, and it has been replaced with a structured YAML document that is processed into a Dockerfile by a tool called dogen. There are several advantages to this, but I'll point out two: firstly, having data about the image available in a structured format makes automatically deriving technical documentation very easy. Secondly, some of the quirks of Dockerfiles, such as the ADD command respecting the environment's umask, are worked around in the generator.
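I won't reproduce our real descriptors here, but to give a flavour, a structured image description might look something like this (a hypothetical sketch, not dogen's actual schema; all names are invented):

```yaml
# Hypothetical image descriptor, illustrative only.
name: acme/app
version: "1.0"
from: centos:7
description: >
  Free-form prose like this can be extracted directly
  into generated technical documentation.
envs:
  - name: APP_HOME
    value: /opt/app
packages:
  - java-1.8.0-openjdk
cmd: ["/opt/app/run.sh"]
```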
We have a large suite of integration tests that we run against images to make sure that we haven't introduced regressions during their development. The core of this is the Container Testing Framework, which makes use of the Behave system.
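Behave tests are written in Gherkin; a trivial feature might look like this (the step phrasing below is illustrative, not necessarily CTF's exact step vocabulary):

```gherkin
Feature: image sanity

  Scenario: the container starts and serves traffic
    When container is started
    Then check that port 8080 is open
```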
Each command that is run in a Dockerfile generates a new Docker image layer. In practice, this can mean a real-world image has a great number of layers underneath it. Docker-dot-com have resisted introducing layer squashing into their tools, but with both hard limits on layers in some of the storage backends, and performance issues for most of the rest, this is a very real issue. Marek Goldmann wrote a squashing tool that we use to control the number of intermediate layers that are introduced by our builds.
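The tool is published on PyPI as docker-squash and is driven from the command line, roughly like this (the image names are placeholders, and the flags should be checked against docker-squash --help):

```shell
pip install docker-squash

# Squash all layers above the base image into a single layer,
# tagging the result:
docker-squash -f centos:7 -t acme/app:squashed acme/app:latest
```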
Finally, even with tools like dogen and ctf, we would like to have more sophisticated tools than shell scripts for configuring images, both at image build time and container run time. We want to do this without introducing extra dependencies inside the images which will not otherwise be used for their operation.
Ansible could be a solution for this, but there are practical issues with relying on it for runtime configuration in our situation. For that reason David Becvarik is designing and implementing Container Configuration Tool, or cct, a framework for performing configuration of containers written in Python.
It has become a bit traditional within Debian to announce these things in a geeky manner, so for now:

    # ed -p: /etc/exim4/virtual/dow.land
    :a
    holly: :fail: reserved for future use
    .
    :wq
    99
Last week, someone posted a request for help on the popular Server Fault Q&A site: they had apparently accidentally deleted their entire web hosting business, and all their backups. The post (now itself deleted) was a reasonably obvious fake, but mainstream media reported on it anyway, and then life imitated art and 123-reg went and did actually delete all their hosted VMs, and their backups.
I was chatting to some friends from $job-2 and we had a brief smug moment that
we had never done anything this bad, before moving on to incredulity that we
had never done anything this bad in the 5 years or so we were running the
University web servers. Some time later I realised that my personal backups
were at risk from something like this because I have a permanently mounted
/backup partition on my home NAS. I decided to fix it.
I already use Systemd to manage mounting the /backup partition (via a backup.mount file) and its dependencies. I'll skip the finer details of that here.
I planned to define some new Systemd units for each backup job that was previously scheduled via Cron, in order that I could mark them as depending on the /backup mount. I needed to adjust that mount definition by adding StopWhenUnneeded=true. This ensures that /backup will be unmounted when it is not in use by another job, and not at risk of a stray rm -rf.
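For illustration, the adjusted mount unit might look something like this; the device, filesystem, and label are invented, and only the StopWhenUnneeded= line is the point:

```ini
# /etc/systemd/system/backup.mount
# The unit name must match the mount point: backup.mount <-> /backup.
[Unit]
StopWhenUnneeded=true

[Mount]
What=/dev/disk/by-label/backup
Where=/backup
Type=ext4
```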
The backup jobs are all simple shell scripts that convert quite easily into services. An example:
    [Unit]
    Requires=backup.mount
    After=backup.mount

    [Service]
    User=backupuser
    Group=backupuser
    ExecStart=/home/backupuser/bin/phobos-backup-home
To schedule this, I also need to create a timer:
    [Timer]
    OnCalendar=*-*-* 04:01:00

    [Install]
    WantedBy=timers.target
To enable the timer, you have to both enable and start it:
systemctl enable backup-home.timer
systemctl start backup-home.timer
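You can then confirm the schedule, or kick off a one-off run of a job without waiting for its timer (unit names as per the example above):

```shell
# Show last and next activation times for the timer:
systemctl list-timers backup-home.timer

# Run the backup job immediately, independent of the timer:
systemctl start backup-home.service
```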
I created service and timer units for each of my cron jobs.
The other big difference to driving these from Cron is that by default I won't get any emails if the jobs generate output - in particular, if they fail. I definitely do want mail if things fail. The Arch Wiki has an interesting proposed solution to this, which I took a look at. It's a bit clunky, and my initial experiments with a derivation of it (using sendmail(1)) have not yet generated any mail.
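The Arch Wiki approach boils down to a templated email service hooked in via OnFailure=. A sketch, with a placeholder script path and address (the helper script is something you write yourself, typically wrapping sendmail):

```ini
# /etc/systemd/system/status-email@.service
# %i expands to the escaped name of the unit that failed.
[Unit]
Description=Send a status email for %i

[Service]
Type=oneshot
ExecStart=/usr/local/bin/systemd-email admin@example.org %i
```

Each backup service then gets OnFailure=status-email@%n.service in its [Unit] section.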
Pros and Cons
The Systemd timespec is more intuitive than Cron's. It's a shame you need a minimum of three more lines of boilerplate for the simplest of timers. I think WantedBy=timers.target should probably be an implicit default for all timer-type units. Here I think clarity suffers in the name of consistency.
start doesn't kick off the job: it really means "enable" in the context of timers, which is clumsy considering the existing enable verb, which seems almost superfluous but is necessary for consistency. I originally thought that units need to be enabled before they can be started. As Simon points out in the comments, this is not true. Rather, "enable" is needed for the timer to be active upon subsequent boots, but won't activate it in the current boot; "start" will activate it for the current boot, but not for subsequent ones.
Since I need a .service and a .timer file for each active line in my crontab, that's a lot of small files (twice as many as the number of jobs being defined), and they're all stored in a system-wide folder because of the dependency on the necessarily system-level units defining the mount.
It's easy to forget the After= line for the backup services. On the one hand, it's a shame that After= doesn't imply Requires=, so that you don't need both; or alternatively that there isn't a convenience option that does both. On the other hand, there are already too many Systemd options, and adding more conjoined ones would just make things even more complicated.
It's a shame I couldn't use user-level units to achieve this, but they could
not depend on the system-level ones, nor activate
/backup. This is a sensible
default, since you don't want any user to be able to start any service
on-demand, but some way of enabling it for these situations would be good.
I ruled out
systemd.automount because a stray
rm -rf would trigger the mount
which defeats the whole exercise.
Apparently this might be something you could solve with Polkit, as the Arch Wiki explains, which looks like it has promise.
I need to get mail-on-error working reliably.
Older posts are available on the all posts page.