It has 64 bits, so it must be better!

Germs! Bugs!

These little chaps may LOOK cute, but trust me when I say that you absolutely don't want your software infested with them.

Since, oh, 1989 or so, I have been developing on 32 bit platforms. First it was the Amiga and then the PC. Even when 64 bit capabilities arrived, most stuff was 32 bit and I didn’t really need those extra bits for what I was doing. Indeed, for a great deal of the past decade, very few people have needed or would have noticed 64 bit data processing. In fact, 64 bits generally made people’s lives less pleasant: badly written software wouldn’t have benefited, and generally small things got bigger (nudge nudge, wink wink). The net result is that I have worked with high performance 32 bit software for absolutely yonks: over twenty years now. It is only recently that I have had any reason to consider 64 bit programming: memory. Finally, in the last decade, I created a piece of software that could actively benefit from accessing more than 4GB of memory.

Generally, though, I stick to 32 bits and assume 32 bits out of habit and sheer exposure. Where is this going, you may well quite reasonably ask? Well, when I was working up my iPhone app – the reference one, not the game – for submission to the app store, I needed to knock out a data converter utility. This little baby would take a raw data file and preprocess it into data that the app could load in the blink of an eye without any complex parsing.

It was such a straightforward piece of software that nothing could possibly have gone wrong. Indeed, if the thing compiled, it should work. There was absolutely no chance at all that any bugs, even those like the cute little guys in the picture, could trip me up for more than a second or three. Program, program, program-coffee-program, program, program-coffee-program, compile, job’s a good’un, crack open the vin rouge. If these sound like famous last words, then my work here is nearly complete.

I knocked this app out in Xcode on the Mac as a Unix command line tool. Previously I would have done this on the PC side of my machine with Visual Studio, knocking out a quick 32 bit command line application… but I like Xcode, and I couldn’t be arsed to reboot into Boot Camp repeatedly all day, so a Unix command line tool it was.

Oddly, even though the utility could encode and decode the file just fine, the same decoding code in the iPhone app didn’t work. This was enormously frustrating: it should have worked, but it didn’t. I looked at the code again and again. It absolutely should have worked. There was nothing complex there at all.

Then Me and Mr Debugger got down to a serious session that actually lasted all of a few minutes: one look at the loaded file header showed that the critical number I was reading was 4 bytes away from where it should have been, with an odd run of 4 unexpected zeros after the header code. You see, the C and C++ standards only guarantee minimum sizes for int and long; the actual sizes are implementation dependent. On 32 bit platforms, or with 32 bit software generally, both are 32 bits. On 64 bit platforms… well, you get the idea: in my 64 bit Mac build, long had quietly become 64 bits. My Mac utility was compiling as 64 bit and the iPhone was reading the file as 32 bit. Same source code, wildly different results. My mistake was getting so used to 32 bit platforms that I did not consider designing this kind of fault out of my development by using fixed size types like uint32_t (from the C99 specification). These nice types were also adopted for the new C++0x standard (although you will need to have the Boost libraries installed or grab this if you’re not using at least Microsoft’s Visual Studio 2010).
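
If you have never been bitten by this, a minimal sketch makes the point. Assuming a compiler that provides C99’s <stdint.h> or the C++ <cstdint> header, the following prints a different size for long depending on whether you build it as 32 bit or 64 bit, while the fixed width types stay put:

#include <cstdint>
#include <cstdio>

int main()
{
    // On my 64 bit Mac build, long is 8 bytes; in a 32 bit build
    // it is 4. Same source, different layout on disk.
    std::printf("sizeof(long)     = %u\n", (unsigned)sizeof(long));

    // The fixed width types are the same size on every conforming
    // platform, which is exactly what a file format needs.
    std::printf("sizeof(int32_t)  = %u\n", (unsigned)sizeof(std::int32_t));
    std::printf("sizeof(uint32_t) = %u\n", (unsigned)sizeof(std::uint32_t));
    return 0;
}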

I am a great believer in defensive programming: deliberately writing code in a way that ensures that common errors of general buffoonery cannot occur. I do this, for example:

if (1 == counter)

Rather than:

if (counter == 1)

Because it is too easy for me to miss one of the =s out and completely change the meaning and functionality of the code. Writing it my way means the compiler will spot the mistake immediately, before I have to tear my increasingly greying hair out hunting for a subtle bug caused by an easy mistype.
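
To make the point concrete, here is an illustrative fragment (not code from the converter). The assignment typo with the variable on the left compiles quite happily; with the constant on the left it refuses to build:

#include <cstdio>

int main()
{
    int counter = 1;

    // if (counter = 1)  // compiles: assigns 1 to counter, condition is always true
    // if (1 = counter)  // refuses to compile: you cannot assign to a constant

    if (1 == counter)    // the form I write: a dropped '=' here is a build error, not a bug hunt
        std::printf("counter is one\n");

    return 0;
}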

I have annoyed myself by falling for this bittage issue. It therefore falls into my list of things to get right, always, which includes:

  1. const correctness always
  2. absolutely every deadly error condition checked in code
  3. high quality error messages
  4. write non-platform specific standards-compliant code wherever possible
  5. constants always on the left, if possible, on any equality test
  6. correct use of signed and unsigned types
  7. make no assumptions
  8. write functional comments when you write the code paying careful attention to explaining the non-obvious
  9. prefer code readability generally
  10. no premature optimisation no matter how tempting it is

… a basic philosophy of “let the compiler work for you, and when it can’t, make sure you’re rolling in useful data for debugging”. To this list, I now add:

  1. when you want and expect 32 bit, use a bloody 32 bit data type, you tool, particularly in protocols, data file formats or anything else where 32 bits magically becoming something else will screw you over. I.e., if it’s nails you’re putting in, use a hammer, not a cement mixer and a drunken octopus: “implementation dependent” is not put into standards for a bit of a giggle, it actually means something.
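
For what it is worth, here is a minimal sketch of what I mean. The header and its field names are made up for illustration (not my app’s real format), and I am ignoring byte order, which is a separate worry:

#include <cstdint>
#include <cstdio>

// Hypothetical header for illustration: every field has an explicit width,
// so a 64 bit build and a 32 bit build agree on the layout.
struct FileHeader
{
    std::uint32_t magic;        // always 4 bytes, on the Mac and on the phone
    std::uint32_t recordCount;  // ditto: no surprise 8 byte longs creeping in
};

bool writeHeader(std::FILE* f, const FileHeader& h)
{
    // Writing field by field also sidesteps any struct padding surprises.
    return std::fwrite(&h.magic, sizeof(h.magic), 1, f) == 1
        && std::fwrite(&h.recordCount, sizeof(h.recordCount), 1, f) == 1;
}

bool readHeader(std::FILE* f, FileHeader& h)
{
    return std::fread(&h.magic, sizeof(h.magic), 1, f) == 1
        && std::fread(&h.recordCount, sizeof(h.recordCount), 1, f) == 1;
}

int main()
{
    std::FILE* f = std::tmpfile();          // scratch file for the demo
    if (!f)
        return 1;

    FileHeader out = { 0xC0DEFACEu, 42u };
    writeHeader(f, out);

    std::rewind(f);
    FileHeader in = { 0, 0 };
    if (readHeader(f, in))
        std::printf("magic %08x, %u records\n",
                    (unsigned)in.magic, (unsigned)in.recordCount);

    std::fclose(f);
    return 0;
}

Had the converter written and read its header like this from the start, the 64 bit Mac build and the 32 bit iPhone build would have agreed byte for byte.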

Anyhow, lesson learnt, but with an awfully large amount of code to convert.
