The Future Is Now

Exploring JVS

Everyone had their weird pandemic hobby, right? Well, mine is that I bought an arcade video game. Not the whole cabinet - just a board, to hook up to a TV and a game controller. If I can’t go to the arcade, at least I can bring a favourite game home, right?

As you might imagine, an arcade board isn’t just plug and play; you can’t just plug in a USB gamepad and call it a day. Finding out how to actually use it took some research. I looked up the different methods, and found I had more or less two options: a more expensive one involving repurposed arcade hardware, and an open source one using off-the-shelf parts. While I was figuring out whether the more expensive option was worth it, I decided I’d try out the open source one, OpenJVS. I started out as just a user, but ended up getting involved in development. Over the past few months, I’ve contributed a bunch of new features and bugfixes. In the process I learned a lot about the specification… and quashed my fair share of weird bugs.

So what is JVS anyway?

Imagine this: you’re an arcade machine manufacturer, and you need to make it easy for a machine operator to swap a new board into one of their cabinets. How do you make sure they don’t have to run dozens of wires to get the controls hooked up? Video is easy; you can just use VGA, DVI, or HDMI. Power is easy too. What else is left? Well, there are many different kinds of input so players can actually play the game; coin inputs and money bookkeeping; smart cards to communicate information to a game; in other words, all kinds of input from the player.

The solution? JAMMA Video Standard, or JVS, the standard used by most arcade games since the late 90s.1 It’s a serial protocol that makes arcade hardware close to plug and play: connect a single cable from your cabinet’s I/O board to your new arcade board, and you get all of your controls and coin slots connected at once.

If you want to use a JVS game at home, you could always put something together using an official I/O board, but - as it happens - the JVS spec is available2. There’s nothing stopping you from making your own I/O board, and it turns out a few different people have. There are several open source options, each intended to work in a different way. I use OpenJVS, written by Bobby Dilley, which transforms a Raspberry Pi running Linux into a JVS I/O board.

How does JVS work?

So that’s JVS, but how does it work? I won’t go into deep detail, but here’s the 10,000 meter view.

JVS defines communication between an I/O board and a video game board. It uses the RS-485 serial standard to communicate data, meaning that it’s possible to use a standard USB RS-485 device to send and receive commands.3 The I/O board handles tracking the state of all of the cabinet’s inputs, and it’s also responsible for all the coin management: it has its own count of how much money players have spent, and the game just reads that number instead of keeping track of it itself.

The game board communicates with the I/O board by sending commands, then receiving packets of information in return. The game board tells the I/O board how many bytes of data to expect, then sends one or more messages as a set of raw bytes. The I/O board reads those bytes, interprets them, then sends back a set of responses, one for each command.
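
To make that a little more concrete, here’s a rough sketch of how a request packet is framed, based on my reading of the spec. The constants and the helper function are mine for illustration - this isn’t OpenJVS code - and I’ve left out details like escaping of reserved bytes.

#include <stddef.h>
#include <stdint.h>

#define JVS_SYNC 0xE0

/* Build a request packet: sync byte, destination node, length (payload
 * plus one for the checksum byte), payload, then an additive checksum of
 * everything after the sync byte. Escaping of reserved bytes is omitted. */
size_t jvs_build_packet(uint8_t *out, uint8_t node,
                        const uint8_t *payload, uint8_t payload_len)
{
    size_t i = 0;
    uint8_t sum = 0;

    out[i++] = JVS_SYNC;
    out[i++] = node;
    out[i++] = (uint8_t)(payload_len + 1);
    sum += node;
    sum += (uint8_t)(payload_len + 1);

    for (uint8_t j = 0; j < payload_len; j++) {
        out[i++] = payload[j];
        sum += payload[j];
    }

    out[i++] = sum; /* checksum: sum of node, length, and data, mod 256 */
    return i;
}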

Open source JVS implementations don’t just emulate the things a player would normally touch in the arcade, either. They also simulate features like the service and test buttons, which are tucked away inside a cabinet where only arcade employees can normally use them. The service button allows access to the operator menu, where employees can change hardware and software settings, while the test button does exactly what it sounds like. Arcade players will never get to use these, but for home players it’s useful to be able to do things like change how many coins a play costs, turn on free play, or change the game language. OpenJVS supports both of these, and they seemed to work when I tested them. The test button did cause OpenJVS to log a message about an “unsupported command”, which seemed suspicious, but Bobby and I suspected this was just the board playing fast and loose with the spec so we ignored it. (This will come up again later.)

OpenJVS was already basically complete and implemented all of the common parts of the protocol, so I started out as just a user. As I ran into a few bugs, I started contributing bugfixes and then implementations of more parts of the protocol.

Let’s talk money

I mentioned coin management earlier. Obviously, in arcades, games are pay-to-play. Most game boards have a free play option so you can play as much as you want without paying, and a lot of home players will just switch their boards into the free play mode. There’s nothing stopping you from emulating a coin slot though. OpenJVS lets you map a button on your controller to inserting a coin for something closer to the authentic arcade feeling. It seems like this might not be a common use case, but the option is always there.

Before long, I noticed a really strange bug. I’d be playing games, and then notice that somehow I had over 200 credits in the machine. I definitely wasn’t mashing the coin button that much, so I knew something had to be wrong. After some experimentation, I eventually figured out it only happened if you did a few specific things in exactly the right order:

  • Insert one or more coins
  • Enter the service menu and then the input test menu
  • Exit the service menu

At this point, I realized something suspicious was happening. I was always getting 200+ coins… but it was more specific than that. I was ending up with exactly 256 minus the number of coins I had when entering the service menu. For example, if I started with 3 coins, I’d always have 253 coins after leaving the service menu. That was a pretty good sign I was seeing something in particular: integer underflow.

Experienced programmers, feel free to skip this paragraph. But for those who aren’t familiar: languages like C, which OpenJVS is written in, feature something called integer overflow and underflow. Number types have a maximum size which affects how many values they can represent. A 16-bit (or 2-byte) integer that can store only positive numbers, for example, can only represent numbers between 0 and 65535. Picture, for a moment, what happens if you ask a program to subtract 1 from a 16-bit integer that’s already at 0, or add 1 to a number that’s already at 65535. In C, and many other languages, it will underflow or overflow: subtract 1 from 0, and you get 65535; add 1 to 65535, and you get 0.
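
Here’s the effect in miniature, using a 16-bit unsigned integer in C. (The coin counters in OpenJVS aren’t necessarily this exact type; this just illustrates the wraparound.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t n = 0;

    n -= 1; /* underflow: wraps around to 65535 */
    printf("%u\n", (unsigned)n);

    n = 65535;
    n += 1; /* overflow: wraps around to 0 */
    printf("%u\n", (unsigned)n);

    return 0;
}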

Having figured out that I was probably seeing underflow, I took a look at the protocol. Since the JVS I/O board keeps track of the money balance, the protocol provides commands for dealing with that and it seemed like the most likely place I’d find the bug. When I dug into OpenJVS’s implementation of the “decrease number of coins” command, it wasn’t too hard to find the culprit. It was right here:

/* Prevent underflow of coins */
if (coin_decrement > jvsIO->state.coinCount[0])
    coin_decrement = jvsIO->state.coinCount[0];

for (int i = 0; i < jvsIO->capabilities.coins; i++)
    jvsIO->state.coinCount[i] -= coin_decrement;

When OpenJVS received the command to decrement the number of coins by a certain amount, it tried to prevent underflows. Unfortunately, the underflow protection itself was buggy: it checked the number of coins in the first coin slot, but then decremented the number of coins in every slot. If slot 2 has fewer coins than slot 1, then slot 2 will end up underflowing by whatever the difference is. I still wasn’t sure why it was trying to remove 256 coins, which seemed weird. I figured it must just be trying to clear the slot of all coins and trusting the I/O board to prevent underflows, and moved on.
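
The fix is conceptually simple: clamp the decrement against each slot individually instead of only against the first. Something along these lines, reusing the names from the snippet above and keeping the original loop structure - a sketch rather than the exact patch that landed in OpenJVS:

/* Sketch of per-slot clamping; not the exact patch that landed in OpenJVS. */
for (int i = 0; i < jvsIO->capabilities.coins; i++)
{
    int decrement = coin_decrement;

    /* Prevent underflow by checking the slot we're actually changing */
    if (decrement > jvsIO->state.coinCount[i])
        decrement = jvsIO->state.coinCount[i];

    jvsIO->state.coinCount[i] -= decrement;
}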

With that bug fixed, I decided to keep working at improving coin support. While I was working on that command, I noticed that OpenJVS was ignoring a similar command. While the board usually only needs to send commands to reduce the number of coins in the balance, like when the player starts a game, it can also send a command to increase the number of coins. I’d noticed that my game board was trying to send that command, but OpenJVS had only implemented a placeholder that logged the request and then moved on without doing anything. The quickest way to figure out what was going on was just to implement the command myself. The actual command in the spec is pretty simple:

Purpose                      Sample
Command code (always 0x35)   0x35
Coin slot index              0x01
Amount (first half)          0x00
Amount (second half)         0x01

Easy enough: you can identify that it’s the command to increase coins by the first byte, 0x35, and then it tells you which coin slot to act on and how many coins to add to it. But when I was replacing the old placeholder command, I noticed something funny:

case CMD_WRITE_COINS:
{
    debug(1, "CMD_WRITE_COINS\n");
    size = 3;
    outputPacket.data[outputPacket.length++] = REPORT_SUCCESS;
}
break;

The placeholder command just reported success without doing anything, then jumped ahead in the stream by the command’s length. But it jumped ahead 3 bytes, and according to the spec this command should be 4 bytes long. I tested OpenJVS with the command fully implemented, and noticed two things: pressing the test button now inserted 256 coins into the second coin slot, instead of doing nothing; and the “unsupported command” error I used to see went away. Why? When the buggy placeholder skipped ahead by three bytes, that left one byte in the buffer for OpenJVS to find and mistake for being a command. That byte, 0x01, was actually the last byte of the “insert coin” command that was being sent when the test button was activated. I hadn’t even set out to fix the “unsupported command” bug, but I fixed it anyway.

At this point I’d fixed all the bugs I set out to, but I was still seeing something that just didn’t feel right. When exiting the service menu, the game board now withdrew coins from the balance; when pressing the test button, the board now added coins. But the number of coins looked wrong: it was happening in increments of 256, instead of 1, and even for test commands that seemed unlikely. So I took another look at the spec, and realized the answer had been staring me in the face the entire time. Let’s take another look at the last part of that table from earlier:

Purpose                 Sample
Amount (first half)     0x00
Amount (second half)    0x01

The number of coins to add or remove is a two-byte, or 16-bit, value. Since JVS communicates through single bytes, any multi-byte values have to be split up into single bytes for transmission. The spec helpfully tells us how to decode them: the numbers are stored in big-endian, or most-significant-byte-first, format. But taking a look at OpenJVS’s code showed that anywhere it was decoding these values, it was decoding them in little-endian format - in other words, it was reading the bytes backwards. What’s 256 in little-endian format? It’s the bytes 0x00 and 0x01, in that order. What’s 1 in big-endian format? It’s those same two bytes in the same order. In other words, the game board hadn’t been trying to add or subtract 256 coins all this time: it was just trying to add and remove single coins.
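
In code, the whole difference comes down to which byte gets shifted. A quick self-contained sketch (the variable names are mine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The two "amount" bytes from the table above. */
    uint8_t data[2] = { 0x00, 0x01 };

    /* Big-endian, as the spec describes: most-significant byte first. */
    uint16_t big_endian = (uint16_t)((data[0] << 8) | data[1]);

    /* Reading the same bytes as little-endian gets it backwards. */
    uint16_t little_endian = (uint16_t)((data[1] << 8) | data[0]);

    printf("%u %u\n", big_endian, little_endian); /* prints "1 256" */
    return 0;
}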

Putting it all together, what exactly was happening when I saw coins being added and removed? It actually turns out to be pretty simple. Pressing the test button inserts a coin into the second coin slot. When the operator menu is active, it can be used to test the coin counter. When the operator leaves the menu, the game sends commands to remove all of the coins that were added by the test button; this should leave the balance the same as it was when they entered. The strange behaviour was all of the bugs working together: the test button never actually added any coins because that command wasn’t implemented, the game still tried to remove them on exit, and the buggy bounds checking in the “remove coins” handling let the count underflow and leave the player with hundreds of coins.

I originally set out to fix some pretty simple bugs, but every bug I fixed uncovered a new one. It was a lot of fun. I can’t say I ever thought I’d be writing software for arcade machines, but not only did I fix my own problems, I also got to learn more about how things work in a domain I might never otherwise have touched.


  1. This was actually the second JAMMA standard, following a simpler standard that was used internationally between 1985 and the late 90s.

  2. Including an excellent English translation by Alex Marshall!

  3. Mostly. JVS actually introduces an extra wire, known as the sync line, but I’m going to ignore that here to keep the explanation simple.

Bundle Install With Homebrew Magic Using Brew Bundle Exec

Has this ever happened to you? You’re writing a project in Ruby, JavaScript, Go, etc., and you have to build a dependency that uses a system library. So you bundle install and then, a few minutes later, your terminal spits up an ugly set of C compiler errors you don’t know how to deal with. After dealing with this enough times I decided to do something about it.

Homebrew already has a great tool in its arsenal for dealing with these problems. Homebrew needs to be able to build software reliably and robustly, after all - even if the user’s system has weird software installed on it or strange misconfigurations. The “superenv” build environment features intelligent automatic setup of build-related environment variables and PATHs based on just the requested dependencies, which filters out unrequested software and prevents a lot of common build failures that come from interfering software. It also uses shims for many common build tools to ensure that just the right arguments are passed through to the real tools.

So I thought to myself - we solved that problem for Homebrew builds already, right? Wouldn’t it be nice if I could just reuse that work for other things? So that’s what I did. Homebrew already provides the Brewfile dependency declaration format and the brew bundle tool to manage library dependencies with Homebrew, and as a result there’s already a great way to get the dependency information we’d need to produce a reliable build environment. Since brew bundle is a Homebrew plugin, it has access to Homebrew’s core code - including build environment setup. Putting these together, I wrote a feature called brew bundle exec. It takes the software you specify in your Brewfile and builds a dependency tree out of that, then sets up just the right build flags to let anything you want use them.

For example, say I want to gem install mysql2. Often, you get something like this:

$ gem install mysql2
Building native extensions. This could take a while...
# several dozen lines later...
linking shared-object mysql2/mysql2.bundle
ld: library not found for -lssl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [mysql2.bundle] Error 1

Ew, right? Let’s make that better.

By creating a Brewfile with the line brew "mysql", we can specify that we want to build against a Homebrew-installed MySQL and all of its dependencies. Just by running our command prefixed with brew bundle exec --, for example, brew bundle exec -- gem install mysql2, we can run that command in a build environment that knows exactly how to use its dependencies. Suddenly, everything works—no messing around with flags, no special options passed to gem install, and no fragile bundle config trickery.

$ brew bundle exec -- gem install mysql2
Building native extensions. This could take a while...
Successfully installed mysql2-0.5.3
Parsing documentation for mysql2-0.5.3
Installing ri documentation for mysql2-0.5.3
Done installing documentation for mysql2 after 0 seconds
1 gem installed

What exactly does brew bundle exec set? It sets a variety of flags that are useful for different compilers and build systems.

  • CC and CXX, the standard compiler environment variables, point to Homebrew’s compiler shims, which help ensure that the right flags are passed to the real compiler being used.
  • CFLAGS, CXXFLAGS, and CPPFLAGS ensure that C and C++ compilers know about the header and library lookup paths for all of the Brewfile dependencies.
  • PATH ensures that all of the executables installed by Brewfile dependencies will be found first, before any tools of the same name that may be installed elsewhere on your system.
  • PKG_CONFIG_PATH and PKG_CONFIG_LIBDIR ensure that the pkg-config tool finds Brewfile dependencies.
  • Build-system-specific flags, such as CMAKE_PREFIX_PATH, ensure that build systems can make use of the Brewfile dependencies.

So the next time you’re bashing your head against build failures in your project, give brew bundle exec a try. It might just solve your problems for you!

The Hunt for the Lost Phoenix Games

Among trashgame fans, Phoenix Games is one of the most infamous game publishers of all time. They’re famous for making games of the lowest possible quality for the lowest possible price; their most infamous “games” were just poorly-animated movies on a PS2 disc. They’re fascinatingly bad, the kind of bad you just can’t help but love.

In their last year of life, Phoenix started publishing Nintendo DS games. They had an aggressive release schedule; in 2008 they announced plans to release 20 games inside of a year, most of which never came out. Which games did or didn’t come out is still hard to figure out. Many online release lists take Phoenix Games’s estimated release dates at face value, including many games that probably never came out. I’ve put together a list of what I’ve been able to confirm in the hopes that anyone with more information can help out.

Out of the 20 games Phoenix announced, only five definitely came out under Phoenix themselves. There are confirmed physical copies of these games and ROM dumps on the internet. Three more were licensed to other publishers after Phoenix’s bankruptcy. The other twelve are less certain; I’ve divided these into three categories with explanations below. Phoenix declared bankruptcy in early 2009, and I haven’t been able to confirm that any games were released after the end of 2008. If you have any information, please contact me via email or on Twitter!

Definitely released:

  • 12 - NorthPole Studio, 2008
  • Adventures of Pinocchio - Unknown, 2008
  • Peter Pan’s Playground - Unknown, 2008
  • Polar Rampage - CyberPlanet, 2008
  • Valentines Day - NorthPole Studio, 2008

Released by other developers:

  • Coral Savior - NorthPole Studio; released as Bermuda Triangle: Saving the Coral by Storm City Games, in North America, 2010; and as Bermuda Triangle by Funbox Media, in Europe, 2011
  • Hoppie II - CyberPlanet; released as Hoppie, by Maximum Family Games, in North America, 2011
  • Veggy World - CyberPlanet; released by Maximum Family Games, in North America, 2011

Possibly released:

These games had finalized cover art and announced release dates, and they have been found in some online store catalogues, suggesting they may have either been released or at least been offered to stores.

  • Jungle Gang: Perfect Plans - Listed in an MCV article as having been released October 17, 2008. Listed in a few online store catalogues as sold out.
  • Love Heart - Reviewed by Polish site GRYOnline.pl, who gave it a 2008 release date.

Probably not released:

Included in release lists, but with no store pages to indicate they were ever actually released. Some of these games were given multiple release dates, probably because they were delayed and solicited multiple times.

  • Dalmatians 4 - Listed in a PlayStation Collecting forums catalogue as having been released on February 13, 2009.
  • Lion and the King 3 - Listed in a PlayStation Collecting forums catalogue as having been released on both October 24, 2008 and February 13, 2009.
  • Monster Egg II - Listed in a PlayStation Collecting forums catalogue as having been released on both November 28, 2008 and March 13, 2009.
  • Rat-A-Box - Listed in an MCV article as having been released October 17, 2008, but no store catalogue entries.

Definitely not released:

  • Cinderella’s Fairy Tale - Final cover art, but never included in any release lists.
  • Greatest Flood - No cover art or homepage.
  • Iron Chef II - Final cover art, but never included in any release lists.
  • Monster Dessert - No cover art or homepage.
  • Princess Snow White - No cover art or homepage.
  • War in Heaven - Final cover art, but never included in any release lists.

Recover From FileVault Doom

Just upgraded to Mojave? Getting the 🚫 of doom when you try to boot your Mac? Worried you’ve lost all your data? Don’t panic! Your Mac is recoverable, and this guide will help you get your drive back in shape.

This bug is caused by a new limitation introduced for FileVault encryption. As mentioned in the release notes, any APFS volume encrypted using Mojave’s implementation of FileVault becomes invisible to earlier versions of macOS until the encryption or decryption process completes. By itself, this is mostly harmless. However, this is rendered much worse by a bug1 that causes the Mojave installer to incorrectly trigger reencryption of an already-encrypted FileVault volume during the install process. Because full drive encryption cannot complete in a timely manner, the Mojave installer times out before encryption completes. The result is a macOS 10.13 (or older) install with a macOS 10.14 FileVault volume that hasn’t yet finished encrypting. This volume can’t be read by the startup manager, and so becomes unbootable.

In order to fix this issue, we’ll need to let your drive finish encrypting. This guide will walk you through the process. Before we get started, make sure you have the right supplies on hand. You’ll need a USB key with a capacity of at least 8GB and access to a working Mac.

1. Create a USB Mojave installer

Download the Mojave installer application on your working Mac using the App Store. Don’t worry, you won’t be using that to install - just to create a USB installer for your broken Mac. Then follow these instructions from Apple. This will erase your USB key and turn it into a bootable USB install drive.

2. Boot into the USB installer

Turn off the Mac that you’re upgrading to Mojave and insert the USB drive you just created. Turn the Mac back on, and immediately hold the option key until the Startup Manager appears. Select the drive you created.

3. Enter Recovery Mode

The installer should provide an option to enter Recovery Mode. This is a different version of Recovery Mode than the one that’s built into your Mac, and it will be able to see your hard drive.

4. Unlock and mount your hard drive

From the Utilities menu bar at the top of the screen, select Terminal. From the terminal prompt, type diskutil apfs list. In the output, you should see one volume whose FileVault status is listed as FileVault: in progress (or something similar). If it’s already listed as “unlocked”, then you’re good - proceed to step 5.

Unlock this drive by running diskutil apfs unlockVolume DISK, where DISK is the identifier listed after APFS Volume Disk for the volume you identified in the previous paragraph. For example, you might type diskutil apfs unlockVolume disk1s1. This command will ask you for a password; enter either your personal login password, or a FileVault recovery key.

Once you’ve completed this step, exit Terminal. You should be returned to the Recovery Mode main menu.

5. Create a new APFS volume

Open Disk Utility from the Recovery Mode main menu. Select your main hard drive, right-click it, and choose “Add APFS Volume”. Don’t worry, this isn’t repartitioning your drive - the new volume will live in the same APFS container as your main volume, and you will be able to delete it later. Name it “Temporary”, then click “Add”.

Once you’ve completed this step, exit Disk Utility. You should be returned to the Recovery Mode main menu. Return to the main Mojave installer menu.

6. Install Mojave on the new volume

Choose to install Mojave on the volume named “Temporary” that you created in the previous step. This will be a temporary fresh install, so don’t worry too much about it - go ahead and keep all the options at their defaults.

7. Allow drive encryption to finish

Once your install has finished and you’ve logged in, open Disk Utility. Unlock your normal hard drive by clicking on it and clicking the “mount” button in the toolbar. You will be asked for a password; just like in the terminal in step 4, you will need to provide either your login password (from your old Mac install) or your recovery key.

At this point, drive encryption will start back up again. You will need to wait for that to finish; this could take a few hours. If you want to check on the progress, you can open a terminal and look at the output of diskutil apfs list; encryption is finished when you run that command and see the status read FileVault: Yes.

8. Upgrade your main drive to Mojave

Now you’re finally (finally!) ready to let the Mojave install complete. Open up your USB drive in the Finder and run the macOS installer app. This time, instead of picking the temporary volume, pick your original Mac volume. Let the installer complete. This time, your Mac should boot up like normal. You’re saved!

9. Clean up

Now that your main Mac volume is bootable again, you can get rid of the temporary volume we created in step 5. To do that, open Disk Utility and right-click the volume named “Temporary”. Click “Delete APFS Volume”, then confirm by clicking “Delete” again.

You’re done!

With any luck, you should be all set at this point. Feel free to reach out to me if you’re still having trouble! Big thanks to @mikeymikey for the suggestion of using a second APFS volume to install the temporary macOS 10.14. Thanks also to my manager, who allowed me to use my internal blog post as the basis for this post.


  1. I wish I could provide more detail about this bug, but as far as I know no details have been published. I’ve filed a radar but haven’t received a response. All I know is that this occurs, seemingly at random, to a small percentage of people who try to upgrade from macOS 10.13.6 to macOS 10.14 when FileVault is already enabled.

Review: Undertale Vinyl Soundtrack

I got my copy of the Undertale vinyl soundtrack in the mail the other day. While I’ve seen a lot of excitement for it, I haven’t seen many people talk about what it’s like, so I decided to write up my impressions.

First, to start with the good news: the art design is beautiful, and gorgeously printed. The red-on-black front cover is stylish, and the mostly-new pixel artwork is very well-done. The gatefold opens to a house party scene with a ton of the game’s NPCs, which is adorable and fits the mood of the game really well too. Most of the printing is matte, with a few details in glossy ink that really stand out—the Undertale logo on the cover, and the coloured lights in the party scene.

Unfortunately the quality of the sleeve itself is subpar. The cardboard feels thin and flimsy compared to other double-albums I own; it lacks weight. The gatefold hinge is also poorly-folded, and doesn’t stay closed on its own when the records are in; it flops open awkwardly. The records also don’t slide comfortably into the sleeve when the gatefold is closed, which makes putting records back after a play more awkward than it has to be. (See right: the record is inserted as far as it gets when the gatefold is closed. The black thing poking out is the inner record sleeve.) If you open the sleeve to insert the records all the way, then they can’t be removed while the gatefold is closed, which is even more awkward. The overall feeling is surprisingly cheap for a $35 album.

The records themselves are also of mixed quality. The transparent red and blue coloured vinyl is classy, and the simple label design is attractive as well. It’s a minor detail, but the inner record sleeves are very attractive and easy to get the discs in and out of.

Unfortunately, my copy shipped with large scratches on sides A and B. (See the photo on the left—the scratches are very visible at full size.) Both discs were also covered in dust fresh out of the package. Everything I’ve seen suggests this isn’t an isolated issue—I know other people whose iam8bit records shipped with scratches before the first play, and from scanning their twitter feed, it looks like this is a common complaint with other customers. Needless to say, this is a huge issue, and I’m shocked it’s as common a problem as it is with them.

Given those defects, I was surprised when the discs sounded great. The mastering quality is very good, and leaves very little to complain about if you’re lucky enough to get undamaged discs. I did notice a mastering error that clips off the first note of Snowy on side one, but that was the only discernible error.

Given just how huge the Undertale soundtrack is, the entire thing was not going to fit on two discs. The vinyl release curates a selection of tracks to fit the available running time. They generally made a good set of calls, hitting all the major notes and popular moments, though by necessity your favourite deep cut is probably missing. Given the time constraints, I don’t see much to complain about in the selection. Pushing the soundtrack to three discs would probably have been a good call, even if that would have pushed up the price.

Despite the good sound quality, it doesn’t feel like iam8bit has the experience it takes to release a premium package like Undertale was meant to be. It’s pretty clear they have big ambitions, and I hope they learn from their mistakes so that their quality eventually lives up to those ambitions, but they’re not there yet. The stuff they’re good at—art and graphic design—is offset by poor physical design and unreliable QC. I can’t really recommend this as-is, especially with the high chance of getting a dud.

Aside from the main album, preorders came with a special dogsong single. This one-sided 7" single has an extended version of dogsong, and is pressed on transparent vinyl with an image of the annoying dog on the other side. It’s very cute, and for what it is it’s well-produced and pretty-looking.

Elegance

A few months ago, I wrote decoders for the PCM formats used by the Sega CD and Sega Saturn versions of the game Lunar: Eternal Blue1. I wanted to write up a few notes on the format of the Sega CD version’s uncompressed PCM format, and some interesting lessons I learned from it.

All files in this format run at a sample rate of 16282Hz. This unusual number is based on the base clock of the PCM chip. Similarly, the reason that samples are 8-bit signed instead of 16-bit is that this is the Sega CD PCM chip’s native format.

Each file is divided into a set of fixed-size frames; the header constitutes the first frame, and every group of samples constitutes the subsequent frames. These frames are set to a size of 2048 bytes—chosen because this is the exact size of a Mode 1 CD-ROM sector, and hence the most convenient chunk of data which can be read from a disc when streaming. This is also why the header takes up 2048 bytes, when the actual data stored within it uses fewer than 16 bytes.

Each frame of content contains 1024 samples. Because each sample takes up one byte in the original format, this means only half of a frame is used for content; the actual content is interleaved with 0 bytes. Despite seeming like a pointless, inefficient use of space, this serves a purpose: it allows for oversampling. The PCM chip’s datasheet lists a maximum frequency of 19800Hz, but this trick allows for pseudo-32564Hz playback.
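
In practice, that means a decoder reads the file one 2048-byte frame at a time and keeps every other byte. Here’s a rough sketch of my understanding; note that which byte of each pair holds the sample is an assumption on my part, and this isn’t my actual decoder:

#include <stdint.h>
#include <stdio.h>

#define FRAME_SIZE        2048
#define SAMPLES_PER_FRAME 1024

/* Read one 2048-byte frame and unpack its 1024 signed 8-bit samples,
 * skipping the interleaved padding bytes. Assumes the sample comes first
 * in each sample/padding pair. Returns 1 on success, 0 at end of file. */
int read_frame(FILE *in, int8_t samples[SAMPLES_PER_FRAME])
{
    uint8_t frame[FRAME_SIZE];

    if (fread(frame, 1, FRAME_SIZE, in) != FRAME_SIZE)
        return 0;

    for (int i = 0; i < SAMPLES_PER_FRAME; i++)
        samples[i] = (int8_t)frame[i * 2];

    return 1;
}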

The format used in Lunar: Eternal Blue supports stereo playback, though only two songs are actually encoded in stereo2. Songs encoded in stereo interleave their content not every sample, as in most PCM formats, but every frame; one 2048-byte frame will contain 1024 left samples, the next frame will contain the matching 1024 right samples, and so forth. This was likely chosen because the PCM chip expects samples for only one output channel at a time, and doesn’t itself implement stereo support.

The loop format used is highly idiosyncratic. Loop starts are measured in bytes from the beginning of the header; that is, a position to seek to from the start of actual content. The loop end time, however, uses a different unit; loop end is measured in terms of the sample at which the loop ends, not the byte. Because one sample is stored for every two bytes, and because left/right samples in a stereo pair are counted numerically as the same sample, this means that this count differs from the byte position at the end of the song by a factor of either 2 or 4. This is a quirk of how the PCM playback routine works; it’s more efficient to keep track of the number of samples played instead of the bytes played, and therefore storing the data in that format means that no extra math has to be performed to determine if the end of a loop has been reached. Similarly, the treatment of left/right samples as being the same sample is likely an artifact of what was the simplest way for the PCM playback code to handle this condition.
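
So a decoder that wants to compare its byte position against the loop end has to multiply the stored value back out. A tiny sketch of that conversion, based on my reading of the format:

/* Convert a loop end stored as a sample count back into a byte offset.
 * Each sample occupies two bytes on disc (sample plus padding), and in
 * stereo the left/right pair is counted as a single sample, so the stored
 * value is off from the byte position by a factor of 2 (mono) or 4 (stereo). */
long loop_end_to_bytes(long loop_end_samples, int stereo)
{
    return loop_end_samples * 2 * (stereo ? 2 : 1);
}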

Looking at these PCM files gave me a new appreciation for design, and helped me appreciate more how important it is to understand the reason behind design decisions. In a lot of ways this format feels like it should be “bad” design: it’s strange, it’s idiosyncratic, it’s internally inconsistent. But every single detail serves a purpose. Each of these idiosyncrasies was carefully chosen to solve a particular problem, or to ensure peak performance at a particular bottleneck. Given the restrictions of a 16-bit game console, all of these choices were necessary to be able to support constantly streaming audio like this in the first place. Aligning every single detail of a format or API with the job which needs to be done is its own kind of elegance—a kind of elegance I understand a little better now.


  1. I referenced a couple of open-source decoders for the Sega CD version’s format when writing my own (foo_lunar2 by kode54, and vgmstream), along with some notes given to me by kode54.

  2. The Pentagulia theme, and the sunken tower.

Flat Whites in Vancouver

This post is completely off-topic. Please indulge me.

I recently spent three lovely weeks in Melbourne, Australia. I’m a big coffee fan, and Melbourne is one of the best cities for coffee in the world, so I spent a lot of time in cafes acquainting myself with the native coffee—specifically the flat white. Now that I’m back in Vancouver, I find myself craving flat whites nostalgically; sipping a flat white in a nice place reminds me of the wonderful time I spent there. (That nostalgia may have something to do with spending so many of those days out with a particular girl.) Finding a good flat white locally is hard though, and while there isn’t a really authentic one anywhere in the city, there are a few good places I keep going back to.

There are three things that define a good flat white: milk volume, milk texture, and espresso. The right combination isn’t always easy to find on this side of the Pacific.

Milk

Volume

In my experience in Melbourne, most flat whites are served in roughly 190mL (6.5oz) cups; this is smaller than the average North American small latte (235mL, 8oz), and larger than the New Zealand flat white (160mL, 5.5oz). The relatively lower milk volume helps more of the espresso taste come through, which I find very nice.

Some Canadian coffeeshops I’ve been to will serve their flat whites in much smaller cups—smaller than both North American lattes and Australian flat whites, presumably intended to be closer to the size of a New Zealand flat white. Or perhaps they’ve simply heard that flat whites are served smaller and, having only too-small and too-large cups to choose from, they chose too-small.

Purely as a personal preference, I’ll take a bit too much milk over a bit too little, though 190mL as I had in Melbourne feels just right.

Foam

Flat whites are made using microfoam, a very fine, smoothly-textured, velvety foamed milk that gives the coffee the perfect texture. A good flat white also retains the crema from the espresso, merging the milk with the crema at the surface of the drink.

The espresso

Given the lower milk volume, it’s important for the coffee to not be overpowering. An espresso that’s too earthy or too bitter can ruin the drink for me; I like the flavour to be strong but not overly sharp. According to Wikipedia the kiwi flat white is usually served using a ristretto shot, and I find that can help cut the bitterness of certain beans; however I don’t feel like it’s a requirement for a good one. That said, a few of the Vancouver shops I’ve been to served me overly earthy, bitter flat whites that didn’t work for me. I make my own flat whites at home using ristretto shots.

The rankings

  1. Old Crow (New Westminster)

    Old Crow serves their flat white in an 8oz cup, though they’ll also do it in a 5oz glass on request. (I find it works better in the 8oz cup, personally.) They steam the milk very nicely, producing a good microfoam. Old Crow uses Bows X Arrows’s St. 66 beans, which is one of my favourite espresso roasts—pleasant, low bitterness, wonderful flavour with good acidity. At my request they started making flat whites using ristretto shots, which works beautifully with these beans. The result is a velvety, smooth coffee with a lovely natural sweetness. If they had 190mL cups to serve this in instead of 8oz, this could easily be mistaken for a Melbournian flat white.

  2. Continental Coffee (Commercial Drive)

    Continental also produces a very nice flat white. It comes in an 8oz cup, with decently-foamed milk and an excellent ristretto shot. The foam on the milk was a bit thick compared to Old Crow and Prado, but it was delicious.

  3. Prado (Commercial Drive)

    Just down the street from Continental, Prado also produces a nice flat white. The milk is textured beautifully, a bit better than Continental’s, though I found the coffee a bit too sharp in comparison.

  4. Milano (Gastown)

    Milano made their flat white with a very earthy coffee, which overpowered everything else for me—even given the higher milk volume (8oz). This wasn’t a bad drink, but it wasn’t great. Next time I’d try ordering it with a different espresso and see if that’s any better, or ask them to do a ristretto shot.

  5. Nelson the Seagull (Gastown)

    Nelson’s flat white suffers from basically the same problem as Milano’s, though it’s magnified by being served in a smaller cup. I also tried this a second time using their house almond milk; it has too much of a flavour of its own and ended up competing with the coffee.

  6. Revolver (Gastown)

    Revolver serves their flat white in a small 5oz (150mL) glass, which from what I’ve read is presumably closer to the kiwi style. The milk is steamed well, and the espresso is excellent—not that I’d expect anything less from Revolver.

    This doesn’t resemble anything I had in straya, but it is tasty. I’m only rating this low for failing to rekindle my nostalgia; it’s delicious taken on its own.

  7. Delany’s (West End)

    Delany’s smallest size is a 12oz (355mL)—nearly twice the size of an Australian flat white. The huge quantity of milk makes it hard to think of this as being a flat white at all.

  8. Bump N Grind

    Bump N Grind is one of my favourite shops so I had high hopes, but I was very disappointed in their flat white.

    Like Revolver, their flat white runs noticeably smaller than an Australian flat white; Bump N Grind serves theirs in a cappuccino cup. The coffee is nice, but the milk was steamed poorly and the foam at the surface was far too thick—not as thick as a cappuccino, but it’d be on the thick side for a latte, much less a flat white. The result was okay, but I wouldn’t call it a flat white at all. It’s basically a cappuccino with less foam.

Release: Asuka 120% Limit Over English Translation

Asuka 120% Limited was a 1997 fighting game for the Sega Saturn, the final1 game in a long-running series. The Asuka 120% games were always underdog favourites of mine; despite looking like throwaway anime fan-pandering, they were surprisingly deep, innovative games with unique mechanics that paved the way for later, more famous games like Guilty Gear and X-Men vs Street Fighter.

In 1999, an unofficial mod titled Asuka 120% Limit Over was released. Rumoured to have been made by the original developers, Limit Over is a surprisingly polished update with many refinements and new gameplay mechanics. It went ignored by the English internet for many years, until the patch was discovered in 2007 by Lost Levels forum posters; it’s now also circulating as a prepatched ISO.

Even though there isn’t much text, Limit Over is hard to play without knowing Japanese, so I’ve prepared a translation patch.

The patch

Asuka 120% Limit Over English patch, version 1.0

This patch is compatible with the final release of Limit Over for the Saturn2. In order to use it, you need to have already built or obtained a disc image of Limit Over. The patch includes two options: a PPF disc image patch, or individual patches for each file on the disc that was changed. Detailed installation instructions are included in the ZIP file.

For more on what’s been translated, and how it was done, read on.

Graphics

Limit Over contains very little text, almost all of it in English. Unfortunately, though, one critical thing is in Japanese: the character names. Since the barebones menus have no graphics, not even character portraits, it’s very difficult to actually play without knowing Japanese. Have any idea who you’re picking in the screenshot below? I don’t…

A few other minor bits of text are stored in Japanese: the “N characters beaten” text in ranking and deathmatch modes, and the round start and round end graphics. Their meaning is obvious without being able to read them, however, so I decided to leave them alone.

Finding the tiles

Like most games of this era, Asuka 120%’s graphics are stored as sets of fixed-size tiles with a fixed, non-true-colour palette. Since these are generally stored as raw pixel data without any form of header, it can be tricky to figure out where the tiles are and how they’re stored; fortunately, there are many good tile editors available that simplify the task.

I used a free Windows tile editor called Crystal Tile 2, pictured above, which has some very useful features, including presets for a large number of common tile formats, support for arbitrary tile size, and the ability to import and export tiles to PNG. Via trial and error, and with help from information gleaned via the Yabause emulator’s debug mode, pictured right, I was able to locate three copies of the character name tiles in the TTLDAT.SSP3, V_GAGE.SSP and VS_SPR.SSP files. The latter two files are used to store the in-battle user interface and the menus, respectively.

Each character name is a 4 bits per pixel, 72x24 tile4 and, fortunately, Crystal Tile’s “N64/MD 4bpp” preset supports them perfectly. After configuring the palette in Crystal Tile I exported every name to a set of 13 PNG files, like the tile to the left.
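
If you’re curious what the tile data itself looks like, 4bpp just means packed palette indices, two pixels per byte. Here’s a rough sketch of unpacking one name tile; the nibble order is my assumption based on common Mega Drive-style linear formats, so check it against real data before relying on it:

#include <stdint.h>

#define TILE_WIDTH  72
#define TILE_HEIGHT 24

/* Unpack one 4bpp name tile into one palette index per pixel.
 * Two pixels are packed into each byte; the high nibble is assumed to be
 * the left pixel. */
void unpack_tile(const uint8_t *src, uint8_t dst[TILE_HEIGHT][TILE_WIDTH])
{
    for (int y = 0; y < TILE_HEIGHT; y++) {
        for (int x = 0; x < TILE_WIDTH; x += 2) {
            uint8_t pair = src[(y * TILE_WIDTH + x) / 2];
            dst[y][x]     = pair >> 4;   /* high nibble: left pixel (assumed) */
            dst[y][x + 1] = pair & 0x0F; /* low nibble: right pixel */
        }
    }
}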

Editing

I redrew the text using a highly-edited version of a font lovingly stolen from the Neo-Geo game Pochi & Nyaa. Compared to other fonts I looked at, it had the advantage of being both attractive and variable-width—which is important since the English names (which take up at least twice as many characters as the original Japanese) were very hard to fit in a width of 72 pixels. I also expanded the size of the characters compared to the original.

I briefly experimented with a thin variation of the character names for use in the game’s menus, but abandoned it after determining legibility was poor; the menus are 480i, and the flicker inherent in an interlaced image on a CRT (or in a deinterlaced image) made the thinner lines harder to read than they needed to be. To the right are the thick and thin variations of the main character’s name.

Text

The rest of the game’s menu text is stored as ASCII strings in the main executable, and is completely in English. I did, however, make several changes to the character names displayed during loading screens. The one American character’s name, Cathy, was misromanized as “Cachy”. This is an easy mistake to make, since her name was rendered in Japanese as “きゃしい” (Kyashii)5. I also changed the romanization of several characters' names from Kunrei-shiki (Sinobu, Tetuko, Genitiro) to the more familiar Hepburn (Shinobu, Tetsuko, Genichirou).

What’s new in Limit Over?

It’s been many years since I’ve played Limited, so this list is based on my imperfect memory.

  • Every character has been rebalanced, and every single one of the core game mechanics has been refined.
  • Every character now has three strengths of normals and special moves instead of two.
  • A dodge button has been added, allowing characters to sidestep out of attacks.
  • Many characters have new special or super moves.

  1. The original developer, Fill-in-Cafe, went bankrupt after Limited was released in 1997, but a mediocre sequel and PC port were released in 1999 by another company.

  2. The only version readily available on the internet is dated “12/31”; I’ve heard there were earlier versions, but I’ve never seen them.

  3. This file is probably unused in Limit Over, since there is no graphical title screen.

  4. Except the tile for Genichirou, which is so long it’s allocated two tiles.

  5. Cathy’s name is rendered in Hiragana despite being a foreign name; it would more normally be rendered as “キャシー”.

Attack the Vector

One downside of working with ancient OSs is coming across bugs that will never be fixed.

In the early days of Tigerbrew, I had just started experimenting with GCC 4.2 as a replacement for Xcode 2.5’s GCC 4.0. Everything was going great until I built something that uses Cocoa, which blew up with this message:

In file included from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/DriverServices.h:32,
                 from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:125,
                 from /System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:21,
                 from test.c:2:
/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/MachineExceptions.h:115: error: expected specifier-qualifier-list before ‘vector’

A syntax error coming not from within the package I was building, but from within… Carbon?

The error actually comes from the Vector128 union within MachineExceptions.h, located in CoreServices’s CarbonCore framework. The very first section of this union had the offending code. Let’s play spot the bug:

Here’s what it looks like as of OS X 10.4.11:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __VEC__
 vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

And here it is in OS X 10.7.51:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __APPLE_ALTIVEC__
   vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

The bug comes from the use of the vector keyword. As pointed out in this MacPorts ticket, the issue is with the #ifdef that checks for __VEC__. __VEC__ is defined if AltiVec is on, without necessarily meaning that AltiVec syntactic extensions are enabled. The vector keyword is only available if either altivec.h is included, or the -mpim-altivec or -faltivec flags are passed. Since Tigerbrew always optimizes for the host CPU, G4 and G5 users were getting AltiVec enabled without forcing syntactic extensions. I fixed this in Tigerbrew by always passing -faltivec on PowerPC systems when building any package, regardless of whether AltiVec is being used.

As to why Apple never caught this, GCC 4.0’s behaviour seems to be different; it seems to enable AltiVec syntactic extensions whenever -maltivec is on. Apple did eventually fix the bug, as seen in the Lion header above. According to the MacPorts ticket linked previously, it was fixed in the CarbonCore header in 10.5 and in the 10.4u SDK packaged in Xcode 3 for Leopard. Since the 10.4u SDK fix was never backported to Tiger itself, Tiger users have to make do with the workaround.


  1. 10.7 may be Intel-only, but the relevant code’s still there. In fact, the same PowerPC unions and structs are still there as of 10.9.4.

Reevaluate

My least favourite backtrace is a backtrace that doesn’t include my own code.

Tigerbrew on OS X 10.4 uses Ruby 1.8.2, which was shipped on Christmas Day, 2004, and it has more than its fair share of interesting bugs. In today’s lesson we break Ruby’s stdlib class OpenStruct.

OpenStruct is a simple data structure that provides a JavaScript object-like interface to Ruby hashes. It’s essentially a hash that provides getter and setter methods for each defined attribute. For example:

os = OpenStruct.new
os.key = 'value'
os.key #=> 'value'

Homebrew uses OpenStruct instances in place of hashes in code which only performs reading and writing of attributes, without using any other hash features. For example, in the deps command, OpenStruct is used for read-only access to a set of attributes read from ARGV:

mode = OpenStruct.new(
  :installed?  => ARGV.include?('--installed'),
  :tree?       => ARGV.include?('--tree'),
  :all?        => ARGV.include?('--all'),
  :topo_order? => ARGV.include?('-n'),
  :union?      => ARGV.include?('--union')
)

if mode.installed? && mode.tree?
  # ...

The first time I ran brew deps in Tigerbrew, however, I was greeted with this lovely backtrace:

SyntaxError: (eval):3:in `instance_eval': compile error
(eval):3: syntax error
        def topo_order?=(x); @table[:topo_order?] = x; end
                       ^
(eval):3: syntax error
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `new_ostruct_member'
    from /usr/lib/ruby/1.8/ostruct.rb:51:in `initialize'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `each'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `initialize'

Given that the backtrace includes only stdlib code and nothing I wrote, I wasn’t sure how to interpret this until I saw “(eval)”. It couldn’t be, could it…? Of course it was.

Accessors for attributes of OpenStruct instances are methods, and they are defined by OpenStruct a) whenever a new attribute is assigned, and b) when OpenStruct is initialized with a hash. This is achieved using the method OpenStruct#new_ostruct_member1, which was defined like this in Ruby 1.8.2:

def new_ostruct_member(name)
  unless self.respond_to?(name)
    self.instance_eval %{
      def #{name}; @table[:#{name}]; end
      def #{name}=(x); @table[:#{name}] = x; end
    }
  end
end

Yes: OpenStruct dynamically defines method names by interpolating the attribute’s name into a string and evaluating that string in the context of the object. Unsurprisingly, this is very fragile. In our example, the attributes being defined end with a question mark; #installed? is a valid method name in Ruby, but #installed?= is not, and so a SyntaxError exception is raised inside eval.

This was eventually fixed2; Ruby 2.2.2’s definition uses the #define_singleton_method method instead. Methods defined this way aren’t subject to the parser’s naming restrictions, so the unusual setters are defined properly3.

def new_ostruct_member(name)
  name = name.to_sym
  unless respond_to?(name)
    define_singleton_method(name) { @table[name] }
    define_singleton_method("#{name}=") { |x| modifiable[name] = x }
  end
  name
end

Thankfully, the definition of the method from modern versions of Ruby is fully compatible with Ruby 1.8.2, so Tigerbrew ships with a backported version of OpenStruct#new_ostruct_member.


  1. This sounds like it should be a private method, and is documented as being “used internally”, but for some reason this was a public instance method right up until Ruby 2.0.

  2. Close to a year after Ruby 1.8.2 was released.

  3. These illegal method names can’t be called using the normal syntax, but they can be called via metaprogramming using the #send instance method, e.g. os.send "foo?=", "baz"