Model-Based Testing

Robert Binder’s Testing Object-Oriented Systems book sits permanently on my desk. At over 1,500 pages, it is almost a never-ending read, but from time to time I pause to read a few choice chapters.

Binder also wrote about the compliance testing of Microsoft’s court-ordered publication of its Windows client-server protocols in 2012. An interesting fact from the article is that instead of testing software against documentation, Microsoft had to do the reverse, because the code was already deployed and had to serve as the gold standard. Under scrutiny and tight deadlines, they managed to check that 60,000 pages of documentation matched the protocols exactly, all thanks to model-based testing (MBT).

Test fixtures

I try my best to avoid the Arrange-Act-Assert (AAA) pattern for unit tests. When multiple test methods need the same starting conditions, the ‘arrange’ code gets repeated over and over, which makes writing tests tedious and complicates later changes.

I prefer to initialise a test fixture to a common initial state from which each test method can start. Like a physical fixture, a test fixture holds the state steady while the tests are executed.

For example, a fixture to test ‘new customer’ scenarios can be written as a class NewCustomerTests, and a fixture to test ‘existing customer’ scenarios as a class ExistingCustomerTests.

Most test frameworks allow a method in a test class to be run before each test method is executed. In JUnit, the annotation @Before identifies this method; in MSTest, the attribute [TestInitialize] does the same.
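As a sketch of such a fixture (the Customer class and the test names are made up for illustration, and the main method stands in for the test runner so that the snippet is self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain class that the fixture sets up.
class Customer {
    final List<String> orders = new ArrayList<>();
    boolean isNew() { return orders.isEmpty(); }
}

class NewCustomerTests {
    private Customer customer;

    // In JUnit this method would carry @Before ([TestInitialize] in MSTest);
    // the runner calls it before each test method, restoring the common state.
    void setUp() {
        customer = new Customer();
    }

    void hasNoOrders() {
        if (!customer.orders.isEmpty()) throw new AssertionError("expected no orders");
    }

    void isFlaggedAsNew() {
        if (!customer.isNew()) throw new AssertionError("expected a new customer");
    }

    public static void main(String[] args) {
        NewCustomerTests tests = new NewCustomerTests();
        tests.setUp(); tests.hasNoOrders();     // fresh fixture for each test,
        tests.setUp(); tests.isFlaggedAsNew();  // exactly as a runner would do
        System.out.println("all tests passed");
    }
}
```

Each test method assumes only the common initial state established by setUp, so tests can be added or reordered freely.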

I believe that using test fixtures to organise tests around common initial conditions makes test code more organised and maintainable.

The unit in unit-testing

Interpreted strictly, unit testing means testing each class separately. However, complete isolation is difficult to achieve, because a class typically interacts with other classes, and for this definition of the unit to hold, collaborating classes have to be faked during testing. Using mocks for this purpose confronts developers with two main problems.

First, tests become tightly coupled to mocks because the latter must be set up to act precisely like the classes for which they stand in. Thus dependencies are also formed between the tests and classes that are not the actual test subjects.

Second, the class under test loses encapsulation as its implementation is revealed as a natural consequence of how mocks are configured to anticipate specific calls and to return results expected by the class.

Still, mocks remain useful in many cases and cannot, therefore, be eliminated. Fortunately, a developer can minimise their unpleasantness by taking certain approaches.

One way to avoid mocks is to carefully consider the value of each test. For example, is it necessary to test interactions in order to validate a given class? Could the correctness of the class be verified by other means (for example, by checking its state)?
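Sketching the state-based option (ShoppingCart is a hypothetical class, not one from the text): the test below concludes that the class works by examining its state, so no mock is involved.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class verified by checking its state rather than its interactions.
class ShoppingCart {
    private final List<String> items = new ArrayList<>();

    void add(String item) { items.add(item); }
    int size() { return items.size(); }
}

class StateVerificationExample {
    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book");

        // the state check tells us the class behaves correctly;
        // no interaction needs to be recorded or verified
        if (cart.size() != 1) throw new AssertionError("expected one item");
        System.out.println("cart holds " + cart.size() + " item");
    }
}
```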

If testing interactions is unavoidable, a developer must at least ensure that the test is sensible and that it checks the correct end of the interaction. As their name implies, mocks are fake, so they do not themselves require verification. A developer must therefore guard against inadvertently writing tests that target the mocks instead of the actual class under test.

An even better way is to interpret ‘unit’ more broadly as a cohesive set of classes, whether it is a set of just one class with no dependency or of many collaborating classes. This definition gives developers the liberty to test several classes together, thus reducing the need for mocks.
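A sketch of this broader unit (Order and PriceCalculator are invented names): the two classes are exercised together as one unit, and the real calculator makes a mock unnecessary.

```java
// Hypothetical collaborators: Order delegates its total to PriceCalculator.
class PriceCalculator {
    double totalOf(double unitPrice, int quantity) {
        return unitPrice * quantity;
    }
}

class Order {
    private final PriceCalculator calculator = new PriceCalculator();
    private final double unitPrice;
    private final int quantity;

    Order(double unitPrice, int quantity) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
    }

    double total() {
        return calculator.totalOf(unitPrice, quantity); // real collaborator, no mock
    }
}

class OrderUnitTest {
    public static void main(String[] args) {
        // the unit under test is Order together with its real PriceCalculator
        Order order = new Order(2.50, 4);
        if (order.total() != 10.0) throw new AssertionError("expected 10.0");
        System.out.println("total = " + order.total());
    }
}
```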


Learning BASE64 encoding

BASE64 encoding is so prevalent that it is worth learning how it works and how to code your own implementation.

The point of BASE64 is to communicate binary data as text, using only characters that are likely to exist on most computer platforms. These safe characters are known as the BASE64 alphabet: the letters A to Z and a to z, the numerals 0 to 9, and the characters / and +. There are other ways to represent bytes as text; for example, by converting them to hexadecimal strings made up of the characters 0 to 9 and A to F. But doing so means that every byte of the original data requires two hexadecimal characters, which doubles the size of the data.

The BASE64 alphabet consists of 64 characters, each one associated with an integer value. For example, the character A is represented by 0, the character Z by 25, and character / by 63. This means that to cover the range of integers from 0 to 63, the BASE64 word size must be six bits (because 2^6=64). As a consequence of this, during BASE64 encoding the original data must be laid out and padded to make its size — in bits — a number divisible by six.

The smallest number of bytes (or 8-bit words) that can be re-arranged in groups of 6-bit words is three (3×8 bits = 24 bits, which is divisible by six). In other words, data must be processed in groups of 24 bits, each group being equivalent to four 6-bit words (4×6 bits = 24 bits). The BASE64 character matching the value of each 6-bit word is then output as an 8-bit ASCII character. So for every three bytes of input, four bytes of output are generated, giving an inflation factor of 4:3 (which is a better compromise than the 2:1 ratio from hexadecimal encoding).

Data that cannot be split exactly into groups of 24 bits must be padded to make them so. For example, data that are one byte long (i.e. 8 bits) must be padded with two zero-value bytes (i.e. 8 bits + (2×8 bits) = 24 bits), and data that are 11 bytes long (i.e. 88 bits) must be padded with one zero-value byte (i.e. 88 bits + 8 bits = 96 bits = 4×24 bits). In other words, data must be padded to reach a size in bytes that is divisible by three.

With the theory out of the way, here is how BASE64 is implemented in Java, using the example ‘any carnal pleasure’.

First, encode the string as a series of bytes.

byte[] bytes = "any carnal pleasure".getBytes();

This results in an array of 19 bytes.

Next, pad the array with two zero-value bytes to make its size divisible by three.

byte[] padded = Arrays.copyOf(bytes, 21);

Then, convert each triplet of bytes into four 6-bit words and calculate the value of each. (Use bit shift operators.) Append the BASE64 character represented by each 6-bit value to a StringBuilder instance.

String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
int byteGroupSize = 3;
StringBuilder result = new StringBuilder();

for (int byteIndex = 0; byteIndex < padded.length; byteIndex += byteGroupSize) {

    // read the value of the 24-bit word starting at the current index
    // (mask each byte with 0xFF because Java bytes are signed)
    int wordOf24Bits = ((padded[byteIndex] & 0xFF) << 16) +
                       ((padded[byteIndex + 1] & 0xFF) << 8) +
                        (padded[byteIndex + 2] & 0xFF);

    // read the 24-bit word as four 6-bit word values
    int wordOf6Bits1 = (wordOf24Bits >> 18) & 63;
    int wordOf6Bits2 = (wordOf24Bits >> 12) & 63;
    int wordOf6Bits3 = (wordOf24Bits >>  6) & 63;
    int wordOf6Bits4 = (wordOf24Bits      ) & 63;

    // append the BASE64 character matching each 6-bit value
    result.append(alphabet.charAt(wordOf6Bits1));
    result.append(alphabet.charAt(wordOf6Bits2));
    result.append(alphabet.charAt(wordOf6Bits3));
    result.append(alphabet.charAt(wordOf6Bits4));
}

This yields the BASE64 string ‘YW55IGNhcm5hbCBwbGVhc3VyZQAA’.

Finally, replace the padding characters (“AA” in this example, resulting from the two zero-value bytes) with the same number of “=” characters. The “=” characters are used by the BASE64 decoding process (not covered in this post) to determine the amount of padding that was applied.

int paddingSize = padded.length - bytes.length; // 2 in this example

for (int i = result.length(); i > result.length() - paddingSize; i--) {
    result.setCharAt(i - 1, '=');
}

This gives the final result ‘YW55IGNhcm5hbCBwbGVhc3VyZQ==’.
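Putting the steps together, the whole procedure fits in one method. The class and method names below are my own packaging of the snippets above:

```java
import java.util.Arrays;

class Base64Demo {

    static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    static String encode(byte[] bytes) {
        // pad the input with zero-value bytes to a size divisible by three
        int paddingSize = (3 - bytes.length % 3) % 3;
        byte[] padded = Arrays.copyOf(bytes, bytes.length + paddingSize);

        StringBuilder result = new StringBuilder();
        for (int i = 0; i < padded.length; i += 3) {
            // assemble the 24-bit word (mask with 0xFF: Java bytes are signed)
            int word = ((padded[i] & 0xFF) << 16)
                     | ((padded[i + 1] & 0xFF) << 8)
                     |  (padded[i + 2] & 0xFF);

            // split it into four 6-bit values and map each to its character
            result.append(ALPHABET.charAt((word >> 18) & 63));
            result.append(ALPHABET.charAt((word >> 12) & 63));
            result.append(ALPHABET.charAt((word >> 6) & 63));
            result.append(ALPHABET.charAt(word & 63));
        }

        // overwrite the characters produced by the padding bytes with '='
        for (int i = result.length(); i > result.length() - paddingSize; i--) {
            result.setCharAt(i - 1, '=');
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // prints YW55IGNhcm5hbCBwbGVhc3VyZQ==
        System.out.println(encode("any carnal pleasure".getBytes()));
    }
}
```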

I know that there are at least two classes in the standard Java libraries that provide BASE64 operations. One of those is undocumented and subject to change, and the other is meant to be used by the mail library, which could cause confusion (or would be bad form?) if they were referenced in code that does not otherwise depend on the libraries where they reside. By writing my own implementation, I can avoid these unnecessary dependencies, and most importantly, I can do BASE64 in any language that does not have a built-in function for it.

How we use SQL Server Data Tools

This post describes the process that we use to develop databases with SQL Server Data Tools (SSDT) in Visual Studio.

For this process to work, the conventions below must be respected.

  • Use the live database as the gold standard for schema objects (and data).
  • Deploy only database projects that have been built successfully.
  • Deploy to a database that matches the schema of the live database.

At the beginning of a development iteration

  1. Restore a copy of the live database onto the development computer.
  2. Synchronise database project schema objects with the schema objects in the restored database.
  3. Remove pre-deployment and post-deployment scripts from the database project.
  4. Update the database project version number.
  5. Build the database project.
  6. If the build fails, fix the errors and rebuild.
  7. If the build completes, check in the changes.

During a development iteration

  1. Make changes to script files in the database project.
  2. If the changes might result in data loss, write pre-deployment and post-deployment scripts to migrate the data.
  3. Build the database project.
  4. If the build fails, fix the errors and rebuild.
  5. If the build succeeds, publish the changes onto the database on the development computer and test.

Interim releases to the test environment

  1. Restore a copy of the live database from backup.
  2. Build the database project.
  3. Publish the database project onto the test server.

Deployment to the live environment

  1. Back up the live database.
  2. Build the database project.
  3. Publish the database project onto the live server.


Three golden rules to tackle complexity

Tim Newing, the IT director of Camelot, shares three golden rules to manage complexity in IT projects.

  • Think of a collection of simple solutions instead of one complex project.
  • Manage outside the ‘business as usual’. In other words, set up a different structure so that the project team is not distracted by the normal business.
  • Give people a good reason to complete the project. This is not the same as motivating them to make the project a success; instead the objective is to convince them to finish the project when it is time to do so in order to avoid feature creep.

How to identify and fix an anaemic domain model

In CRUD applications, most classes consist of many accessor methods and few behaviour methods. While many developers recognise these classes as symptoms of an anaemic model, few know how to address the problem.

Anaemic classes are characterised by not having any responsibilities other than to write and to read attributes. Not only are the benefits of object-oriented programming wasted in such classes, but the amount of boilerplate code for managing these classes is also increased.

Unit-testing can help to identify when a domain model is anaemic.

Test-first development dictates that the ‘usage code’ be written before the implementation. By writing tests first, one can identify how the interfaces of a domain model will be used and can spot things that are out of place or redundant. For example, consider an Account class that has the following responsibilities:

  • represent a user account
  • hold information about a user (i.e. username, password, email address, status)
  • authenticate a user

Consider one possible unit test for user authentication as follows.
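The original listing did not survive, so the test below is a reconstruction of the kind of code meant here; the Account fields and getters are assumptions. Note how the password comparison sits in the test, i.e. in the calling code:

```java
// Reconstructed sketch: an anaemic Account exposes its data and the
// calling code performs the authentication check itself.
class Account {
    private final String username;
    private final String password;

    Account(String username, String password) {
        this.username = username;
        this.password = password;
    }

    String getUsername() { return username; }
    String getPassword() { return password; }
}

class AuthenticationTest {
    public static void main(String[] args) {
        Account account = new Account("alice", "secret");

        // the validity check happens here, outside Account
        boolean authenticated = account.getPassword().equals("secret");
        if (!authenticated) throw new AssertionError("expected authentication to succeed");
        System.out.println("authenticated = " + authenticated);
    }
}
```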


Here the responsibility for checking the validity of the input password is handled by the calling code, but really it belongs to the Account class.

This can be fixed by rewriting the unit test as follows.
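Again a reconstruction rather than the original listing; the hypothetical authenticate method now keeps the check inside Account:

```java
// Reconstructed sketch: Account owns the password check.
class Account {
    private final String username;
    private final String password;

    Account(String username, String password) {
        this.username = username;
        this.password = password;
    }

    boolean authenticate(String inputPassword) {
        // the validity check is encapsulated within Account
        return password.equals(inputPassword);
    }
}

class AuthenticationTest {
    public static void main(String[] args) {
        Account account = new Account("alice", "secret");
        if (!account.authenticate("secret")) throw new AssertionError("expected success");
        if (account.authenticate("wrong")) throw new AssertionError("expected failure");
        System.out.println("authentication behaviour verified");
    }
}
```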


Now the behaviour is encapsulated within the Account class, making the implementation of the authenticate user use-case more robust.

Such small refactorings can be performed iteratively in order to turn anaemic classes into richer classes.

In my experience, the following rules help to identify classes that can be refactored.

  • If calling code reads an attribute value from an object, tests it, then calls another method on the same object, the behaviour should be moved into the object.
  • Getters and setters must not be implemented mindlessly; instead, they should be written only when they appear in unit tests.
  • Daisy-chained calls to methods of the same object indicate that the object is anaemic and needs more behaviour methods.

How to switch off a laptop screen under Linux

This post describes how to save power on a laptop running Linux by switching off the screen backlight. The instructions are tested on a Dell Latitude C810 running Ubuntu Linux 5.10.

The first part is straightforward: set the correct governor for the CPU frequency with the following command.

echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

This is executed at run-level 2 from /etc/rc2.d/S30freq-scaling.

The second part is a little more involved. Out of the box, Breezy handles power management very well, but some problems remain due to incompatibilities between Linux and some laptops.

The first problem with the C810 is that ACPI events, such as the lid button being activated, are not reported by the kernel. To fix this, an additional option must be passed to the kernel at boot time by modifying GRUB’s menu.lst as follows.

title           Ubuntu, kernel 2.6.12-9-686 
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.12-9-686 root=/dev/hda1 acpi_irq_balance ro quiet splash resume=/dev/hda5
initrd          /boot/initrd.img-2.6.12-9-686

The second problem seems to be related to the NVIDIA Linux driver, which prevents the screen from switching off with xset dpms force off. The solution is to use vbetool, which the man page marks as liable to cause unexpected behaviour. Nevertheless, it works with the C810.

I created the following script to turn the screen on or off. (This actually controls the backlight and does not just blank/unblank the screen.)


#!/bin/sh
N=$(basename "$0")
case "$1" in
    on)  /usr/sbin/vbetool dpms on ;;
    off) /usr/sbin/vbetool dpms off ;;
    *)   echo "Usage: $N {on|off}" ;;
esac


Using this script, after Linux boots, the screen can be switched off by issuing the following command.

/etc/acpi/ off

Alternatively, screen de-activation can be automated by including this command in an appropriate run-level. The same script can also be used to improve the built-in ACPI-related scripts, for example, for suspend, sleep or hibernate power-saving.

Rapid web development

I am experimenting with the following technique to speed up web application development with Java. First, define an interface called Controller with a single method called handleRequest that takes an HttpServletRequest and an HttpServletResponse as arguments. The method should also throw an Exception.

package web;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public interface Controller {

    public void handleRequest(HttpServletRequest request, 
            HttpServletResponse response) throws Exception;
}
Then, create a hierarchy of JSP files and folders based on user stories. For example, say we are writing a page that allows users to register new accounts. We’ll create the following files in the /user/register folder.

  • index.jsp This is the entry point, containing our controller.
  • default.jsp This is the view that is shown by default.
  • success.jsp This is the view that is displayed when user registration is successful.

Now, in index.jsp, write the following:

<% web.Controller c = new web.Controller() {

    public void handleRequest(HttpServletRequest request,
            HttpServletResponse response) throws Exception {

        account = null;

        messages = new java.util.ArrayList();

        username = request.getParameter("username");
        password1 = request.getParameter("password1");
        password2 = request.getParameter("password2");
        email = request.getParameter("email");

        if (request.getParameter("register") != null) {
            registerActionPerformed(request, response);
        }

        request.setAttribute("messages", messages);

        request.setAttribute("username", username);
        request.setAttribute("email", email);

        if (account != null) {
            request.getRequestDispatcher("success.jsp").forward(request, response);
        } else {
            request.getRequestDispatcher("default.jsp").forward(request, response);
        }
    }

    protected void registerActionPerformed(HttpServletRequest request,
            HttpServletResponse response) throws Exception {

        if (validate()) {
            // NB: the repository method name and the message strings below are
            // reconstructed; the original listing was incomplete
            model.user.Account existing = model.user.AccountRepository.getByUsername(username);
            if (existing != null)
                messages.add("username is already taken");

            existing = model.user.AccountRepository.getByEmail(email);
            if (existing != null)
                messages.add("email address is already registered");

            if (messages.isEmpty()) {
                model.user.RegisterAccountService service = 
                        new model.user.RegisterAccountService();
                account = service.register(username, password1, email);
            }
        }
    }

    protected boolean validate() throws Exception {
        if (username == null || username.trim().length() == 0)
            messages.add("username is required");

        if (password1 == null || password1.trim().length() == 0)
            messages.add("password is required");

        if (password1 != null)
            if (!password1.equals(password2))
                messages.add("passwords do not match");

        if (email == null || email.trim().length() == 0)
            messages.add("email address is required");

        return messages.isEmpty();
    }

    private model.user.Account account;

    private java.util.ArrayList messages;

    private String username;
    private String password1, password2;
    private String email;
};

c.handleRequest(request, response); %>

Next, create the default.jsp and success.jsp pages. For example, default.jsp looks like this.

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<%@taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"%>

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

        <h1><fmt:message key="title.register_account"/></h1>

I am not sure how viable this is as a way to speed up web application development, but the following benefits are already obvious.

  • index.jsp serves as the controller.
  • There is no need to re-compile after adding controllers, since JSP pages are compiled automatically, which speeds up the code-test cycle.
  • There is no need for servlet mappings. The folder structure and the designation of index.jsp as a welcome file take care of that.
  • URLs are clean and clear.

There are certainly weaknesses in using this scheme, but as I wrote at the beginning, I am still experimenting with this.