Generally, though, I stick to 32 bits and assume 32 bits out of habit and sheer exposure. Where is this going, you may quite reasonably ask? Well, when I was working up my iPhone app – the reference one, not the game – for submission to the App Store, I needed to knock out a data converter utility. This little baby would take a raw data file and preprocess it into data that the app could load in the blink of an eye without any complex parsing.
It was such a straightforward piece of software that nothing could possibly have gone wrong. Indeed, if the thing compiled it should work. There was absolutely no chance at all that any bugs, even those like the cute little guys in the picture, could trip me up for more than a second or three. Program, program, program-coffee-program, program, program-coffee-program, compile, job’s a good’un, crack open the vin rouge. If these sound like famous last words, then my work here is nearly complete.
I knocked this app out in Xcode on the Mac as a Unix command line tool. Previously in my life I would have done this on the PC side of my computer and used Visual Studio to knock out a quick 32 bit command line application… but I like Xcode, and I couldn’t be arsed to reboot into Boot Camp repeatedly all day to try it, so I figured I would write a Unix command line tool instead.
Oddly though, even though the utility was able to encode and decode the file just fine, the same decoding code in the iPhone app didn’t work. This was enormously frustrating: it should have worked, but it didn’t. I looked at the code again and again. It absolutely should have worked. There was nothing complex there at all.
Then Me and Mr Debugger got down to a serious session that actually lasted all of a few minutes, until I looked at the loaded file header: the critical number I was reading was 4 bytes away from where it should have been, and there was an odd set of 4 unexpected zeros after the header code. You see, according to the C and C++ standards, ints and longs have a guaranteed minimum size, but their actual size is implementation dependent and can easily change. Indeed, on 32 bit platforms, or with 32 bit software generally, they are 32 bits. On 64 bit platforms… well, you get the idea. My Mac utility was compiling as 64 bit and the iPhone was reading the file as 32 bit. Same software, wildly different results. My mistake was getting so used to 32 bit platforms that I did not consider designing this kind of fault out of my development by using fixed size types like uint32_t (from the C99 specification) for 32 bits. These nice types were also adopted for the new C++0x standard (although you will need to have the Boost libraries installed or grab this if you’re not using at least Microsoft’s Visual Studio 2010).
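To illustrate (a minimal sketch, not the actual header from my app, with made-up field names): write a long-based header out from a 64 bit build and read it back in a 32 bit one, and every field after the first lands in the wrong place, which is exactly the 4 byte shift and the mystery zeros I was staring at. A fixed size type takes that variable out of the equation entirely.

#include <stdio.h>
#include <stdint.h>

/* On the 64 bit Mac build (LP64) a long is 8 bytes; on the 32 bit
   iPhone build it is 4. The same struct, two different layouts. */
struct BadHeader {
    long headerCode;      /* 8 bytes on the Mac tool, 4 on the iPhone */
    long recordCount;
};

/* Fixed size version: identical layout on both sides. */
struct GoodHeader {
    uint32_t headerCode;  /* always exactly 4 bytes */
    uint32_t recordCount;
};

int main(void)
{
    printf("BadHeader:  %zu bytes\n", sizeof(struct BadHeader));
    printf("GoodHeader: %zu bytes\n", sizeof(struct GoodHeader));
    return 0;
}

Compile that once as 64 bit and once as 32 bit: BadHeader comes out as 16 bytes and then 8, while GoodHeader is 8 bytes both times.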
I am a great believer in defensive programming: deliberately writing code in a way that ensures that common errors of general buffoonery cannot occur. I do this, for example:
if (1 == counter)
Rather than:
if (counter == 1)
Because it is too easy for me to miss one of the =s out and completely change the meaning and functionality of the code. My first way of writing it means that the compiler will spot my error before I have to tear my increasingly greying hair out looking for subtle errors involving an easy mistype.
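To make that concrete, here is a tiny sketch reusing the counter from above; the interesting bit lives in the comment, because the broken versions quite rightly do not all compile:

#include <stdio.h>

int main(void)
{
    int counter = 0;

    /* The classic typo, if (counter = 1), assigns 1 to counter, compiles
       cleanly (a warning at best) and is always true. Written the other
       way round, if (1 = counter), the compiler refuses to assign to a
       constant and the typo is caught immediately. */
    if (1 == counter)
        printf("counter is one\n");
    else
        printf("counter is %d\n", counter);

    return 0;
}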
I have annoyed myself falling for this bittage issue. It therefore falls into my list of things to get right, always, which includes:
- const correctness always
- absolutely every deadly error condition checked in code
- high quality error messages
- write non-platform-specific, standards-compliant code wherever possible
- constants always on the left, if possible, on any equality test
- correct use of signed and unsigned types
- make no assumptions
- write functional comments when you write the code, paying careful attention to explaining the non-obvious
- prefer code readability generally
- no premature optimisation, no matter how tempting it is
… a basic philosophy of “let the compiler work for you, and when it can’t, make sure you’re rolling in useful data for debugging”. To this list, I now add:
- when you want and expect 32 bits, use a bloody 32 bit data type, you tool, particularly in protocols, data file formats or anything else where 32 bits magically becoming something else will screw you over (there is a quick sketch of what I mean just below). I.e., if it’s nails you’re putting in, use a hammer, not a cement mixer and a drunken octopus: “implementation dependent” is not put into standards for a bit of a giggle, it actually means something.
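By way of that quick sketch (minimal, with made-up field names and a made-up magic number rather than my real converter’s format, and assuming both ends are little-endian so a raw write and a raw read line up): a file header built entirely from fixed size types has exactly the same layout whether it is written by a 64 bit command line tool or read by a 32 bit iPhone app.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical header, every field a fixed size type, so a 64 bit
   writer and a 32 bit reader agree on every byte (three uint32_t
   fields, so no padding surprises either). */
typedef struct {
    uint32_t magic;        /* always exactly 4 bytes */
    uint32_t version;
    uint32_t recordCount;
} FileHeader;

int main(void)
{
    FileHeader out = { 0xC0DEFACEu, 1u, 42u };

    FILE *f = fopen("data.bin", "wb");
    if (NULL == f) { perror("fopen"); return 1; }
    if (1 != fwrite(&out, sizeof out, 1, f)) { perror("fwrite"); fclose(f); return 1; }
    fclose(f);

    FileHeader in;
    f = fopen("data.bin", "rb");
    if (NULL == f) { perror("fopen"); return 1; }
    if (1 != fread(&in, sizeof in, 1, f)) { perror("fread"); fclose(f); return 1; }
    fclose(f);

    printf("magic 0x%08X, version %u, records %u\n",
           (unsigned)in.magic, (unsigned)in.version, (unsigned)in.recordCount);
    return 0;
}

Build that as 32 bit or 64 bit and the bytes on disk are identical; swap the fields for plain longs and they are not.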
Anyhow, lesson learnt, but with an awfully large amount of code to convert.