Below are the five most recent posts in my weblog. You can also see a chronological list of all posts, dating back to 1999.

Fabien's proof copies

*proud smug face*

Today is Doom's 25th anniversary. To mark the occasion, Fabien Sanglard has written and released a book, Game Engine Black Book: DOOM.

It's a sequel of sorts to "Game Engine Black Book: Wolfenstein 3D", which was originally published in August 2017 and has now been fully revised for a second edition.

I had the pleasure of proof-reading an earlier version of the Doom book and it's a real treasure. It goes into great depth on the designs, features and limitations of PC hardware of the era, from the 386 that Wolfenstein 3D targeted to the 486 for Doom, as well as the peripherals available such as sound cards. It covers NeXT computers in similar depth. These were very important because Id Software made the decision to move all their development onto NeXT machines instead of developing directly on PC. This decision had some profound implications on the design of Doom as well as the speed at which they were able to produce it. I knew very little about the NeXTs and I really enjoyed the story of their development.

Detailed descriptions of those two types of personal computer set the scene at the start of the book, before Doom itself is described. The point of this book is to focus on the engine and it is explored sub-system by sub-system. It's fair to say that this is the most detailed description of Doom's engine that exists anywhere outside of its own source code. Despite being very familiar with Doom's engine, having worked on quite a few bits of it, I still learned plenty of new things. Fabien made special modifications to a private copy of Chocolate Doom in order to expose how various phases of the renderer worked. The whole book is full of full colour screenshots and illustrations.

The main section of the book closes with detailed descriptions of the architectures of the various home games consoles of the time to which Doom was ported, as well as the fate of each port: some were impressive technical achievements, others were car crashes.

I'm really looking forward to buying a hard copy of the final book. I would recommend this to anyone who has fond memories of that era, or is interested to know more about the low-level voodoo that was required to squeeze every ounce of performance possible out of the machines of the time.

Edit: Fabien has now added a "pay what you want" option for the ebook. If the existing retailer prices were putting you off, now you can pay him for his effort at a level you feel is reasonable. The PDF is also guaranteed not to be mangled by Google Books or anyone else.


I'm very excited to announce that I've moved roles within Red Hat: I am now part of the OpenJDK team!

I've been interested in the theory and practice behind compilers, programming language design and the interaction of the two for a long time¹. Before my undergrad I was fascinated by the work of Wouter van Oortmerssen, who built lots of weird and wonderful experimental languages and systems². During my undergrad, dissatisfied with the available choice of topics for my third year, I petitioned the Computer Science Department at Durham University to revive an older module, "Programming Language Design & Compiling". I'm eternally grateful to Dr. Paul Callaghan for being prepared to teach it to us³.

I've spent my time within Red Hat so far in "Cloud Enablement". Our mission was to figure out and develop the tools, techniques and best practices for preparing containerized versions of the Middleware product portfolio to run on OpenShift, Red Hat's enterprise container management platform. The team was always meant to be temporary, the end game being the product teams themselves taking responsibility for building the OpenShift containers for their products, which is where we are today. And so the other team members and I are dismantling the temporary infrastructure and moving on to other roles.

Within Cloud Enablement, one of my responsibilities was the creation of the Java Applications for OpenShift container image, which is effectively OpenJDK and integration scripts for OpenShift. I am going to continue maintaining and developing this image (or images) within the OpenJDK team.

Longer term, I'm looking forward to getting my teeth into some of the technical work within OpenJDK, such as the JVM itself: architecture ports, garbage collectors, and the JIT and AOT compilers within the runtime.

Earlier this year, I put together a private "bucket list" of things I wanted to achieve in the near-ish future. I recently stumbled across it, having not thought about it for a while, and I was pleasantly surprised to see I'd put on "compilers/lang design" as something to revisit. With my move to OpenJDK I can now consider that ticked off.


  1. I'm straying into this area a little bit with my PhD work (graph rewriting, term rewriting, etc.)
  2. one of which, WadC, I took over maintenance of ten years ago
  3. Paul has recently had some of his writing published in the book Functional Programming: A PragPub Anthology

Recently I filled up the storage in my iPod and so planned to upgrade it. This is a process I've been through several times in the past. My routine used to be to buy the largest-capacity SD card that existed at the time (usually twice the capacity of the current one) and spend around £90. Luckily, SD capacity has been growing faster than my music collection. You can buy 400G SD cards today, but I only bought a 200G one, and I only spent around £38.

As I wrote last time, I don't use iTunes: I move music on and off the iPod from any computer, and I choose music to listen to using a simple file manager. One drawback of this approach is that I tend to listen to the same artists over and over, and large swathes of my collection lie forgotten. The impression I get is that music managers like iTunes have various schemes to help you keep in touch with the rest of your collection, via playlists: "recently added", "stuff you listened to this time last year", or whatever.

As a first step in this direction, I decided it would be useful to build up playlists of recently modified (or added) files. I thought it would be easiest to hook this into my backup solution, and in case it's of interest to anyone else, I thought I'd share it. The backup scheme runs a shell script to perform the syncing, which now looks (mostly) like this:

date="$(/bin/date +%Y-%m-%d)"
plsd=/home/jon/pls

make_playlists()
{
    grep -v deleting \
        | grep -v '/\._' \
        | grep -E '(m4a|mp3|ogg|wav|flac)$' \
        | tee -a "$plsd/$date.m3u8"
}

# set the attached blinkstick LED to a colour indicating "work in progress"
# systemd sets it to either red or green once the job is complete
blinkstick --index 1 --limit 10 --set-color 33c280

# sync changes from my iPod onto my NAS; feed the output of files changed
# into "make_playlists"
rsync -va --delete --no-owner --no-group --no-perms \
    --exclude=/.Spotlight-V100 --exclude=/.Trash-1000 \
    --exclude=/.Trashes --exclude=/lost+found /media/ipod/ /music/ \
    | make_playlists

# sync all generated playlists back onto the iPod
rsync -va --no-owner --no-group --no-perms \
    /home/jon/pls/ /media/ipod/playlists/

Time will tell whether this will help.


Continuing a series of blog posts about Debian packages I have adopted (Previously: smartmontools; duc), in January this year I also adopted glBSP.

I was surprised to see glBSP come up for adoption; I found out when I was installing something entirely unrelated, thanks to the how-can-i-help package. (This package is a great idea: it tells you about packages you have installed which are in danger of being removed from Debian, or have other interesting bugs filed against them. Give it a go!) glBSP is a dependency of another of my packages, WadC, so I adopted it fairly urgently.

glBSP is a node-building tool for Doom maps. A map in Doom is defined in a handful of different lumps of data. The top-level, canonical data structures are relatively simple: THINGS is a list of things (type, coordinates, angle facing); VERTEXES is a list of points for geometry (X/Y coordinates); SECTORS define regions (light level, floor height and texture, …), etc. Map authoring tools can build these lumps of data relatively easily. (I've done it myself: I generate them all in liquorice, which I should write more about one day.)
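To give a feel for how simple the canonical lumps are, here is a rough Python sketch (not code from any of the tools mentioned; the helper names are mine, but the record layouts are the real on-disk formats: little-endian signed 16-bit fields throughout):

```python
import struct

def pack_vertexes(vertices):
    """VERTEXES lump: each vertex is two signed 16-bit ints (x, y)."""
    return b"".join(struct.pack("<hh", x, y) for x, y in vertices)

def pack_things(things):
    """THINGS lump: x, y, angle, type, flags -- five signed 16-bit ints."""
    return b"".join(struct.pack("<hhhhh", *t) for t in things)

# The four corners of a hypothetical one-room square map, plus a single
# player start (thing type 1, flags set for all skill levels).
verts = pack_vertexes([(0, 0), (256, 0), (256, 256), (0, 256)])
start = pack_things([(128, 128, 90, 1, 7)])

print(len(verts))  # 16 bytes: 4 vertices x 4 bytes each
print(len(start))  # 10 bytes: 1 thing x 10 bytes
```

Emitting these is just a matter of serialising arrays of fixed-size records, which is why authoring tools can build them easily; the hard work lies in the derived lumps described below.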

During gameplay, Doom needs to answer questions such as: the player is at location (X,Y) and has made a noise; can Monster Z hear it? Or: the player is at location (X,Y), facing angle Z°; which walls need to be drawn? These decisions needed to be made very quickly on the target hardware of 1993 (a 486 CPU) in order to maintain the desired frame rate (35fps). To facilitate this, various additional data structures are derived from the canonical lumps. glBSP is one of a class of tools called node builders that calculate these extra lumps. The name "node builder" comes from one of the lumps (NODES), which encodes a binary-space partition of the map geometry (and that's where "BSP" comes from).
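To make the speed-up concrete, here is a much-simplified Python sketch (not Doom's actual code; the tree layout and room names are hypothetical) of how a point query walks the NODES tree. Each node splits space with a partition line; each step classifies the point against that line via a cross product and descends, so the cost is proportional to the tree depth rather than the number of sectors in the map:

```python
def point_on_side(x, y, node):
    """0 = front of the node's partition line, 1 = back (cross-product sign)."""
    dx, dy = x - node["x"], y - node["y"]
    return 0 if dx * node["dy"] - dy * node["dx"] > 0 else 1

def locate(x, y, tree):
    """Walk down the BSP tree until a leaf (a subsector) is reached."""
    while isinstance(tree, dict):
        tree = tree["children"][point_on_side(x, y, tree)]
    return tree

# Hypothetical two-room map split by a vertical partition line at x=128,
# pointing in the +y direction, so the front side is east (+x).
root = {"x": 128, "y": 0, "dx": 0, "dy": 128,
        "children": ("east-room", "west-room")}
print(locate(200, 64, root))  # east-room
print(locate(32, 64, root))   # west-room
```

The same front/back classification, applied recursively while building the tree, is what a node builder spends its time on: choosing good partition lines and splitting the geometry along them.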

If you would be interested to know more about these algorithms (and they are fascinating, honest!), I recommend picking up Fabien Sanglard's forthcoming book "Game Engine Black Book: DOOM". You can pre-order an ebook from Google Play. It will be available as a physical book (and ebook) via Amazon on publication date, December 10, marking Doom's 25th anniversary.

The glBSP package could do with some work to bring it up to the modern standards and conventions of Debian packages. I haven't bothered to do that, because I'm planning to replace it with another node-builder. glBSP is effectively abandoned upstream. There are loads of other node builders that could be included: glBSP and Eureka author Andrew Apted started a new one called AJBSP, and my long-time friend Kim Roar Foldøy Hauge has one called zokumbsp. The best candidate as an all-round useful node-builder is probably ZDBSP, which was originally developed as an internal node-builder for the ZDoom engine, and was designed for speed. It also copes well with some torture-test maps, such as WadC's "choz.wl", which brought glBSP to its knees. I've submitted a package of ZDBSP to Debian and I'm waiting to see if it is accepted by the FTP masters. After that, we could consider removing glBSP.


duc

`duc`'s GUI view

Continuing a series of blog posts about Debian packages I have adopted (starting with smartmontools), in January this year I adopted duc ("Dude, where are my bytes?").

duc is a tool to record and visualise disk space usage. Recording and visualising are performed separately, meaning the latter is very fast. There are several visualisers available. The three most interesting ones are:

  • duc ui, a text terminal/ncurses-based hierarchical browser
  • duc gui, a GUI/X11 application
  • duc cgi, a CGI for access with a web browser

The GUI and CGI resemble the fantastic Filelight KDE tool, which I've always preferred to the similar tools available for GNOME, Windows or macOS. (duc itself works fine on macOS). The CGI could be deployed on my NAS, but I haven't set it up yet.

Indexing is performed via `duc index <path>` and seems very quick when compared to something like `du -sh`. The index is stored in a local database.

I adopted duc in sad circumstances after the prior maintainer decided to step down, in response to a discussion we had about a feature request for the Debian package. This wasn't the outcome I wanted, but it's a package I use regularly on several machines so I stepped up to adopt it.

