Recently, while developing Gemini, I ran into a problem: I broke some functionality that had been working for a long time. Since I didn’t have any automated regression testing in place, I didn’t notice at first, and it took more effort to fix things than it should have. I decided then and there that Gemini needed regression testing, so I started looking for an iOS integration testing framework.
There were a few things that influenced my final choice of framework. First, Gemini is a platform for building OpenGL ES-based games, so I needed a testing framework that could validate the contents of an OpenGL render rather than the existence or state of some UI component. In practice, that means comparing a screen shot against a reference screen shot taken earlier. For normal apps, screen shot comparisons are too brittle: small, acceptable changes can alter the screen in ways an automated comparison will flag. For an OpenGL app, however, I don’t see much alternative.
Second, I needed a framework that could interact with Gemini at a low level. I needed to be able to generate touch events and touch moved events for arbitrary screen coordinates as well as accelerometer and other events, preferably with a mechanism to record and play these back.
The final thing that influenced my choice is that I am a fan of Behavior Driven Development. I use Cucumber for my Ruby on Rails work. I really like writing test steps in Ruby and having access to the functionality that the many Ruby gems provide. And I really didn’t want to have to work with two different testing frameworks when going from RoR code to iOS code.
With these criteria in mind I started looking at the various integration testing frameworks available to iOS developers.
The first framework I looked at was Frank. I had read a bit about it before and was very interested. It is a polished framework with a lot of momentum, and it is based on Cucumber, which is a big win. Unfortunately, it is designed to interact with UI elements using UISpec, and I couldn’t see any easy way to make its selectors work with rendered OpenGL ES content. Its touch simulation was also limited to simple taps.
Next I looked at Zucchini, a framework layered on UIAutomation with an approach based on Cucumber. This looked very promising and their home page is very impressive. Plus it’s based on screen shot comparisons. I followed a simple example and managed to get it working, screen shots and all.
I wanted to get my testing environment up to speed as quickly as possible, though, and Zucchini’s steps are written in CoffeeScript against the UIAutomation API, neither of which I wanted to take the time to learn. So I kept looking and came across Calabash (in case you’re wondering, a calabash is yet another oblong vegetable). Although a bit less polished-looking than the other projects, Calabash has a lot going for it. It supports Cucumber (though it is not limited to it), with test steps written in Ruby. It provides low-level access to touch events and other events. It also provides an irb-based console that lets you connect to your running application (on the simulator or a device) and interact with it using Ruby code. This is incredibly useful, as it allows you to explore your application and build test code that can then be used in step definitions.
In setting up Calabash for Gemini I had to solve one problem: screen shots taken from Calabash did not work for OpenGL screens. Work is underway to fix this, but in the meantime I developed a suitable workaround. Calabash provides a “backdoor” mechanism that allows you to invoke a method on your AppDelegate, passing in a string and getting a string back in return. For obvious reasons, this “backdoor” is only compiled into test builds and not into the final app distribution.
It was a simple matter to implement this method so that it takes a screen shot (using the snapshot method) and returns it as a Base64-encoded string. The string can then be decoded in Ruby and saved to a file or used directly. Since the test step is Ruby code, it was easy to use the RMagick gem to compare a new screen shot against a reference screen shot saved on disk, similar to what Zucchini does.
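To make the round trip concrete, here is a minimal Ruby sketch of what the decoding and comparison helpers might look like in a step-definition support file. The helper names, the comparison threshold, and the `screenshotBackdoor:` selector mentioned in the comments are my own assumptions for illustration, not Gemini’s actual code; `backdoor` itself is the helper calabash-ios provides.

```ruby
require "base64"

# Decode the Base64 string returned by the app's backdoor method into
# raw image bytes (e.g. PNG data).
def decode_screenshot(base64_string)
  Base64.decode64(base64_string)
end

# Save decoded screen shot bytes to disk, e.g. to capture a new
# reference image.
def save_screenshot(base64_string, path)
  File.binwrite(path, decode_screenshot(base64_string))
end

# Compare a fresh screen shot against a reference image on disk using
# RMagick. Returns true when the normalized mean error is below the
# threshold (0.01 here is an arbitrary illustrative value).
def screens_match?(base64_string, reference_path, threshold: 0.01)
  require "rmagick" # gem install rmagick; loaded lazily
  current   = Magick::Image.from_blob(decode_screenshot(base64_string)).first
  reference = Magick::Image.read(reference_path).first
  # difference returns [mean_error_per_pixel, normalized_mean_error,
  # normalized_maximum_error]
  _, normalized_mean, _ = current.difference(reference)
  normalized_mean < threshold
end

# In a Cucumber step definition this might be wired up as
# (selector name is hypothetical):
#
#   Then(/^the screen should match "(.*)"$/) do |name|
#     b64 = backdoor("screenshotBackdoor:", "")
#     raise "screen mismatch" unless screens_match?(b64, "references/#{name}.png")
#   end
```

The RMagick comparison mirrors the Zucchini approach: a reference image is blessed once, and subsequent runs fail when the pixel difference exceeds the tolerance.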
This backdoor functionality lends itself to other uses as well. Since the Ruby code can pass a string to the AppDelegate method, that string can specify what code to execute. So in addition to the screen shot functionality, I use this method to tell Gemini which Lua scene to load. This allows me to create a separate Lua file for each test scenario and invoke that scene during a test.
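The scene-loading side can be sketched the same way. Everything here is hypothetical — the `geminiBackdoor:` selector, the `loadScene:` command prefix, and the `"ok"` reply are names I have invented to illustrate the pattern of encoding an action in the single string the backdoor accepts:

```ruby
# Hypothetical protocol between the Ruby steps and the AppDelegate: the
# backdoor method receives one string, so the action is encoded in it.
def scene_command(scene_name)
  "loadScene:#{scene_name}"
end

# In a Cucumber step definition, `backdoor` (provided by calabash-ios)
# would forward the command string to the app; selector and reply value
# are assumptions:
#
#   Given(/^the "(.*)" scene is loaded$/) do |scene|
#     reply = backdoor("geminiBackdoor:", scene_command(scene))
#     raise "failed to load #{scene}" unless reply == "ok"
#   end
```

Keeping the string protocol in one helper makes it easy to extend with further commands later without scattering magic strings through the step definitions.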
Calabash provides other nice features as well, including the ability to record and play back a series of touches, and tests can be run on the simulator or on a device. Since it is essentially a Cucumber driver (the authors describe it as a sort of Selenium for iOS/Android), it was easy to get it running under Jenkins. Now my local copy of Jenkins monitors my GitHub repository for changes and runs the regression tests whenever I push.
So I would recommend that anyone looking to do integration testing of graphics code on iOS (or Android, though I haven’t tried it) give Calabash a try. The other frameworks I looked at are all extremely useful, but they don’t lend themselves to testing pure OpenGL apps the way Calabash does.