I will start this blog post with two non-code examples of "hard to test" scenarios (both about sight) and their solutions:
- How do blind people read when they cannot see? They touch the Braille letters.
- How do you find an invisible man? You can spray paint around the room until it covers him, use an infrared camera to spot him, or flood the floor with water so that when he walks you see where he is.
Instead of thinking "oh, I cannot see xxxxx, so I can't do it", the first scenario takes a different approach (a different sense), while the second transforms the situation into one that fits our needs (making the invisible man visible so we can use our sight to find him).
Now, applying this to software: in many cases things are hard to test because of code complexity. I don't want to dwell on that, as there are many articles and techniques on how to solve it, but sometimes there are scenarios that really do seem hard to test, or too fragile if tested traditionally.
I'm going to use an example that I read somewhere not long ago: imagine you want to test some UI that gets rendered into an HTML5 canvas element. Assume we have full control over the source code, meaning we build both the rendering and the associated JavaScript that handles interactions with the canvas.
This scenario poses a bit of a challenge because, once rendered, the canvas is just a container of pixels (or a byte array), so if we have buttons and other interactive items, how can we test them?
Well, there are three types of tests that come to mind:
I) Unit tests
First of all, if you control how the elements are generated, you can build a lot of tests around the generation and composition of those elements, without needing to care about the rendering/UI at all.
As I prefer not to repeat things already explained far better elsewhere, there is an Uncle Bob blog post about applying TDD to terrain generation, which shows how to test something as seemingly hard as a terrain generation algorithm. The "trick" is to focus not on the generated data but on the generators and actions. Terrain generation is deterministic: oversimplifying, the only things that change are the input parameters and the seed, so if you test those well enough, you can replicate any terrain inside a test (although the test could get big!).
As a real-world example, at a previous job I had to build a server-side avatar generation platform and wanted to add tests to it. By making the generation code a set of deterministic actions, it was really easy to test that, given actions A, B and C, the result was a "female avatar wearing a cape with a Chinese temple background". You can see a small example of what the PHP code looks like at this slide.
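As a minimal sketch of the idea (the function names and data shapes here are hypothetical, not the actual platform code), a test over deterministic generation actions could look like this:

```python
# Sketch of testing deterministic generation actions. `apply_action`,
# `generate` and the action/avatar shapes are hypothetical; the point is
# that no rendering is involved, only the composed description.

def apply_action(avatar: dict, action: dict) -> dict:
    """Applies one generation action and returns a new avatar description."""
    updated = dict(avatar)
    updated[action["slot"]] = action["value"]
    return updated


def generate(actions: list[dict]) -> dict:
    """Runs all actions in order; the same actions always yield the same avatar."""
    avatar: dict = {}
    for action in actions:
        avatar = apply_action(avatar, action)
    return avatar


def test_actions_compose_deterministically():
    actions = [
        {"slot": "gender", "value": "female"},
        {"slot": "clothing", "value": "cape"},
        {"slot": "background", "value": "chinese_temple"},
    ]
    # We assert on the composed description, never on pixels.
    assert generate(actions) == {
        "gender": "female",
        "clothing": "cape",
        "background": "chinese_temple",
    }
```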
II) Acceptance tests - method A
If you want to replicate the scenario of a user browsing to your page with the canvas-based UI and interacting with it, using for example Selenium, initially you're in trouble: Selenium is designed to work with standard HTML & CSS markup, navigating through DOM nodes and/or CSS selectors. You can also make it move to a certain coordinate and click (and we can easily keep that inside the canvas, since we at least know its placement and size), so... how do we test it?
Just think about why Selenium works: simply because the browser lets it know about the DOM elements and their CSS properties.
Well, since we control the whole process of generating the UI component, we could emit additional data useful for testing. We could make each component inform of the coordinates where a user can click, so a test could say "go to position (x, y) and click, then assert the webpage has navigated to the next screen".
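A sketch of how that could look with Selenium's Python bindings; the `window.__hitRegions` global and the region shape are assumptions, just one possible way for the rendering code to expose the data:

```python
# Sketch of method A: the canvas UI publishes its clickable regions
# (hypothetical `window.__hitRegions` global) and the test clicks them.
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/canvas-ui")  # placeholder URL

# Each region could look like {"id": "next-button", "x": .., "y": .., "w": .., "h": ..}
regions = driver.execute_script("return window.__hitRegions;")
button = next(r for r in regions if r["id"] == "next-button")
canvas = driver.find_element(By.TAG_NAME, "canvas")

# Click the centre of the button, with offsets relative to the canvas.
# Note: depending on your Selenium version, offsets are measured from the
# element's top-left corner (Selenium 3) or its centre (Selenium 4.3+).
ActionChains(driver).move_to_element_with_offset(
    canvas, button["x"] + button["w"] // 2, button["y"] + button["h"] // 2
).click().perform()

assert "/next-screen" in driver.current_url
```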
This is nothing new: games have been doing this for their menus since long ago (buttons are just rectangles that detect clicks on them), and, moving forward, even Google Chrome does it with what they call hit test zones:
> When the compositor thread sends an input event to the main thread, the first thing to run is a hit test to find the event target. Hit test uses paint records data that was generated in the rendering process to find out what is underneath the point coordinates in which the event occurred.
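The hit test itself can be tiny. A minimal sketch of the rectangle-based version games have used forever:

```python
# Minimal rectangle hit test: each interactive element records its bounds,
# and a click is resolved by asking which rectangle contains the point.

def hit_test(regions: list[dict], x: int, y: int) -> str | None:
    """Returns the id of the topmost region containing (x, y), if any."""
    # Iterate in reverse so regions drawn last (i.e. on top) win.
    for region in reversed(regions):
        if (region["x"] <= x < region["x"] + region["w"]
                and region["y"] <= y < region["y"] + region["h"]):
            return region["id"]
    return None


regions = [
    {"id": "background", "x": 0, "y": 0, "w": 800, "h": 600},
    {"id": "next-button", "x": 700, "y": 550, "w": 80, "h": 30},
]
assert hit_test(regions, 710, 560) == "next-button"
assert hit_test(regions, 10, 10) == "background"
```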
III) Acceptance tests - method B
(This method extends method A)
Another way to do tests, assuming you control the source code of the JavaScript event-handling code, is to combine the previous method of emitting coordinate data with making the event-handler code also emit events, like "I've clicked coordinates (x, y)". Combined with method A, this allows us to infer "I've clicked button A" or "I've clicked the background of the canvas", and to have fine-grained control over events. This is, for example, how Windows has handled messages and signalled events over the controls of programs, windows, etc. for ages.
This would allow for integration tests where you don't need to actually render anything: you can generate the elements, feed them to a mock canvas, link it to the event handler, and test its outcomes by simulating clicks at different coordinates (after all, a canvas is just a list of bytes).
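A sketch of such an integration test, with hypothetical names (`CanvasUI`, the event strings) and reusing the `hit_test` function from the sketch above; nothing is drawn, because the event handler only needs the hit-region data:

```python
# Sketch of method B as an integration test: no browser, no real rendering.
# A raw click is resolved into a semantic event via the hit-region data,
# so the test asserts directly on the emitted events.

class CanvasUI:
    def __init__(self, regions: list[dict]):
        self.regions = regions
        self.events: list[str] = []  # stands in for the real event sink

    def click(self, x: int, y: int) -> None:
        self.events.append(f"clicked coordinates ({x}, {y})")
        target = hit_test(self.regions, x, y)  # hit_test from the sketch above
        if target is not None:
            self.events.append(f"clicked {target}")


def test_click_on_button_emits_semantic_event():
    ui = CanvasUI([
        {"id": "background", "x": 0, "y": 0, "w": 800, "h": 600},
        {"id": "next-button", "x": 700, "y": 550, "w": 80, "h": 30},
    ])
    ui.click(710, 560)
    assert ui.events == [
        "clicked coordinates (710, 560)",
        "clicked next-button",
    ]
```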
A problem that we had long ago at a previous job was that Selenium tests were potentially slow and flaky when performing AJAX calls, because the only way to detect those was "wait for element to be present" active waits, so we would wait longer than needed (to avoid waiting too little and getting a false negative just because the call took longer but ended up succeeding). The frontend framework team solved this by making the JS MVC framework emit events like "I've started action xxx" and "I've finished action xxx", and tweaking Selenium to understand those and respond much more quickly to those AJAX loads.
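A sketch of how such event-based waits could look from the Selenium side, assuming a hypothetical `window.__actionLog` array that the framework appends entries to:

```python
# Sketch of an event-based wait: instead of polling for an element to
# appear, wait until the frontend logs that the action finished.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # placeholder URL

def action_finished(name: str):
    """Custom wait condition: true once the frontend logged the finish event."""
    def condition(d):
        log = d.execute_script("return window.__actionLog || [];")
        return f"finished:{name}" in log
    return condition

# ...click something that triggers an AJAX-backed "save" action...
WebDriverWait(driver, 10).until(action_finished("save"))
```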
Conclusion
I am far from an expert in testing, but I firmly believe that, given some time to think and some effort to implement, most scenarios can be reduced to simpler ones that can actually be tested.
As a small shameless plug: while reading a book about maze generation algorithms, I decided to port (and extend) the examples to Python, and along the way wrote some tests for the basic building blocks: generating the maze grid with its cells, calculating the neighbours of a cell, and calculating distances. The result is a set of small yet practical tests, and afterwards I can focus on implementing the book's algorithms; if something fails, I can be certain it is not a bug in a building block but in the algorithm being coded. Given time, I could now also easily test each algorithm.
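For illustration, a test of the neighbours calculation can be as small as this (names are illustrative and don't necessarily match the repository):

```python
# Sketch of a building-block test: a 2D grid where each cell knows its
# in-bounds orthogonal neighbours.

def neighbours(row: int, col: int, rows: int, cols: int) -> set[tuple[int, int]]:
    """Returns the in-bounds orthogonal neighbours of a cell."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return {(r, c) for r, c in candidates if 0 <= r < rows and 0 <= c < cols}


def test_corner_cell_has_two_neighbours():
    assert neighbours(0, 0, rows=4, cols=4) == {(0, 1), (1, 0)}


def test_inner_cell_has_four_neighbours():
    assert neighbours(1, 1, rows=4, cols=4) == {(0, 1), (2, 1), (1, 0), (1, 2)}
```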
Tags: Development