Monday 20 April 2015

C++ template-based polymorphism

Templates are considered part of the expert's tool box. They look strange and are thought to play tricks on you. I really think they have been misunderstood, especially by those who learned C before C++.
When it comes to polymorphism, there seems to be only one tool: inheritance with virtual methods. This solution has been around for a very long time, so it's no wonder it is the first tool anybody reaches for. It has its own advantages and I don't advise against using it. But we can also achieve polymorphism through templates and duck typing, and this has its own advantages too. Very interesting ones, actually.

A good example to look at is the Visitor design pattern. Using the classical virtual-methods-based polymorphism, we have virtual methods everywhere. The Visitor interface (a.k.a. abstract class) declares a virtual method visit() for each element in the class hierarchy it can visit. Assuming this is a hierarchy for polygons, we might have the Polygon interface declaring the accept() method to "let the visitor in". We then implement two visitors, one pretending to print information on the console and the other to actually draw the polygon on an SVG canvas. The code would be roughly the following.

#include <iostream>
using namespace std;

struct Triangle;
struct Square;
struct Pentagon;

struct Visitor {
  virtual void visit(const Triangle &triangle) const = 0;
  virtual void visit(const Square   &square)   const = 0;
  virtual void visit(const Pentagon &pentagon) const = 0;
};

struct Polygon {
  virtual void accept(const Visitor& v) const = 0;
};

struct Triangle : Polygon {
  void accept(const Visitor& v) const override {
    v.visit(*this);
  }
};

struct Square   : Polygon { /* as above */ };
struct Pentagon : Polygon { /* as above */ };

struct LoggerVisitor : Visitor {
  void visit(const Triangle&) const override {
    cout << "Print triangle info" << endl;
  }
  void visit(const Square&) const override {
    cout << "Print square info" << endl;
  }
  void visit(const Pentagon&) const override {
    cout << "Print pentagon info" << endl;
  }
};

struct SvgVisitor : Visitor { /* as above */ };

Leaving aside stylistic factors and personal issues with the Visitor design pattern, this code should look pretty reasonable. If we decided to use template-based polymorphism instead, the code would be roughly:

struct Triangle {
  template<typename VISITOR>
  void accept(const VISITOR& v) const {
    v.visit(*this);
  }
};

struct Square   { /* as above */ };
struct Pentagon { /* as above */ };

struct LoggerVisitor {
  void visit(const Triangle&) const {
    cout << "Print triangle info" << endl;
  }
  void visit(const Square&) const {
    cout << "Print square info" << endl;
  }
  void visit(const Pentagon&) const {
    cout << "Print pentagon info" << endl;
  }
};

struct SvgVisitor { /* as above */ };

Here there are no virtual methods whatsoever. Plus, Polygon and Visitor are completely gone, because there is no need for pure interfaces (i.e. abstract classes), for better or worse. Obviously they wouldn't have gone if they had an actual method or data member, as opposed to only pure virtual methods. Because this code uses templates, it brings all the advantages of templates. The most important one is the optimisation the compiler can do, and we get polymorphism on top of that.

There are two typical use cases for polymorphism.
  1. A generic function accepting a pointer/reference to the root of the hierarchy, foo(Polygon&)
  2. A heterogeneous container of objects of that class hierarchy, vector<Polygon*>
Using virtual methods isn't really necessary in the former case. In fact, we can use templates there too, rather than passing the function a pointer/reference.

template<typename T>
void genericFunction(const T &polygon) {
  LoggerVisitor loggerVisitor;
  SvgVisitor    svgVisitor;

  polygon.accept(loggerVisitor);
  polygon.accept(svgVisitor);
}

This code calls the accept() method of the actual polygon passed to genericFunction(). It does not look into a vtable, because there isn't one. Using the template-based version may not always be achievable, though, or at least not without paying some cost somewhere else. For example, if it's the user (via some kind of input) deciding which polygon to apply genericFunction() to, then the virtual method approach may result in fewer lines of code, depending on the overall design of the application.
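
To make the trade-off concrete, here is a minimal sketch (the user input is hypothetical) of the extra code the template approach needs when the choice happens at run time:

// Hypothetical run-time dispatch: the user's choice must be mapped to a
// compile-time type by hand, one branch per concrete polygon.
void applyToUserChoice(int choice) {
  switch (choice) {
    case 0: genericFunction(Triangle{});  break;
    case 1: genericFunction(Square{});    break;
    case 2: genericFunction(Pentagon{});  break;
  }
}

With virtual methods, the same dispatch would be a single call through a Polygon reference.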

If instead we're dealing with heterogeneous containers, e.g. vectors containing a mix of Triangle, Square and Pentagon, then the template approach is just not applicable, because the compiler won't have any clue about the actual type of the i-th element. However, a different question should be asked in this case: why have a heterogeneous container in the first place? Heterogeneous containers may be more complex to manage and maintain in some cases. Separate homogeneous containers could make the code easier and would then enable template-based polymorphism.
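
For example, here is a minimal sketch of the homogeneous alternative, reusing the template-based classes above:

#include <vector>

// One container per concrete type: the static type stays visible, so the
// template-based accept() still applies to every element.
template<typename POLYGONS>
void drawAll(const POLYGONS &polygons) {
  SvgVisitor svgVisitor;
  for (const auto &polygon : polygons)
    polygon.accept(svgVisitor);
}

void drawScene(const std::vector<Triangle> &triangles,
               const std::vector<Square>   &squares) {
  drawAll(triangles);
  drawAll(squares);
}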

Another good reason to prefer templates to virtual methods is that classes with no virtual methods don't need a virtual destructor, which removes the risk of memory and resource leaks caused by destructors not being declared virtual.
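
As a reminder of the hazard this avoids, a minimal sketch:

// Deleting a derived object through a base pointer whose destructor is not
// virtual is undefined behaviour: typically ~Derived() never runs and the
// resources it should release are leaked.
struct Base { ~Base() {} };                // non-virtual destructor
struct Derived : Base { /* owns resources released in ~Derived() */ };

void leak() {
  Base *p = new Derived();
  delete p;  // ~Derived() is never called
}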

I think template-based polymorphism is really interesting and worth spending some time considering in place of virtual methods, next time there is the need for polymorphism.

Saturday 11 April 2015

Fuzzy Software Quality

When it comes to Software Quality, there are several tools that try to measure it: test coverage, cyclomatic complexity, static analysis, technical debt, etc. We try to make these numbers look good, but bugs get delivered, users get annoyed and engineers get frustrated. So was it all just for the sake of having good-looking numbers?

I think Software Quality isn't something you can imprison in one number. It isn't something that deserves the precision or strictness of numbers. It's more of a questionnaire kind of thing, where you ask a general question to the right stakeholder (i.e. the developer, the tester, the user, etc.) about the aspects of Software Quality you consider worth "measuring". The answer must be one of: strongly disagree, disagree, agree, strongly agree.
To me, the following are the questions worth asking ourselves and our customers or users:
  • Code Maintainability: As a developer, I am happy to make the next change. The code is in good shape and the time I spent on the last code-change was reasonably proportional to the complexity of the behavioral-change.
  • Code Quality: As a developer, when I need to make a code-change in an area I'm not an expert in, the time I spend reverse-engineering is reasonably proportional to the behavioral-complexity of that area.
  • Product Quality: As a user, I am overall satisfied with the software's stability, performance, ease of use and correctness.
There is something that probably needs a bit of clarification. I referred to code-change and behavioral-change. These aren't commonly used terms, but I believe they're pretty easy to understand. Code-change means the actual change to the source code, whereas behavioral-change is the feature itself, from the user's point of view.

For example, since we are in 2015, chances are that sending an e-mail to the user will be considered a trivial behavioral-change. If implementing this feature required a lot of code-changes and took 2 days, then the answer to the Code Maintainability question is likely to be disagree. To add insult to injury, it took another couple of days just to reverse-engineer how the users list is handled, so strongly disagree is the answer to Code Quality. Nevertheless, the users are happy with our product so far and don't seem to complain too much about the time it takes to add features, so they answer agree to Product Quality. This is how our "perception of Software Quality" would then look on a graph.
So what about all those fancy tools that measure test and branch coverage, cyclomatic complexity, do static analysis and a lot more? They're useful. Definitely worth using. Not to measure Software Quality directly, though, but rather to build our own confidence that we're writing good code. If the test coverage is bad, the cyclomatic complexity is skyrocketing and the compiler spits out tons of warnings, then I would answer strongly disagree to Code Quality, without even asking myself how long it takes to reverse-engineer a bit of code.

I'm not suggesting this as yet another Software Quality measuring tool. Software Quality is really a hard thing to measure precisely. There won't be silver bullets and there won't be magic questions or magic answers. Just ask the stakeholders what they think and build your own confidence on it.

Wednesday 3 December 2014

When C++ templates outperform C

A colleague recently came to me with a problem. He's writing some interesting stuff on an Arduino board, which ships with an AVR microcontroller. On this sort of platform, the vendor provides header files with #define-s along the lines of:

#define PORTA (*(volatile uint8_t*)0x1234)
#define PORTB (*(volatile uint8_t*)0x1235)

They're meant to be used to read and write device ports with simple code like this:

    PORTA = 0; /* clear device register */

Also, many devices may be connected to the same port if they need just a few bits instead of a whole word. So an LED can be seen as a device on bit 7 of PORTA and turned on with something like:

    PORTA |= 0x80;


His problem was something like: I'd like to have a C++ template to use as a type to declare my device on PORTx and bit N, so I can just set and clear bits without worrying about bit-shifts, something like:

    device<PORTA, 7> led;
    led.set(); // or alternatively led |= 1

However, I know that 

    PORTA |= 0x80;

translates to a single assembly instruction that sets just one bit, along the lines of

    sbi $0x1234, 7

I want the template to translate to exactly the same assembly code, so as not to lose performance.
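
A minimal sketch of one way to get there, under two assumptions of mine: the port is identified by its raw address (a macro like PORTA expands to a dereference expression, which cannot be a template argument), and the single instruction only materialises with optimisation enabled on avr-gcc.

#include <stdint.h>

// Sketch only: ADDRESS is the port's address, BIT the device's bit index.
template<uintptr_t ADDRESS, uint8_t BIT>
struct device {
  static volatile uint8_t &port() {
    return *reinterpret_cast<volatile uint8_t *>(ADDRESS);
  }
  static void set()   { port() |= (1u << BIT); }
  static void clear() { port() &= static_cast<uint8_t>(~(1u << BIT)); }
};

device<0x1234, 7> led;  // 0x1234 is the hypothetical address of PORTA

// led.set() should then compile down to the same single-bit instruction
// as PORTA |= 0x80.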

Monday 9 December 2013

An effective logging model

Logging is a really trivial thing. At least it looks so. However, the trivial part is just printing out a log line. What to print out, and which extra information the log should contain, is not that straightforward. We have all seen and used logging models where log statements are classified into several categories. For example, the Java logging utility defines 7 different levels. Another common classification seems to be:
  1. Fatal: something extremely bad and unrecoverable happened causing the system to stop or exit;
  2. Error: something bad and unrecoverable happened but not so bad to halt the system;
  3. Warning: something bad happened but it has been recovered;
  4. Info: some important event has been received or stage reached;
  5. Debug: used to print out any sort of internal data to help debugging.
Along with this classification model there also exist logging frameworks capable of fancy things like:
  • Turning on and off each log level dynamically at run-time;
  • Redirecting log statements to separate media (e.g. files, sockets, system's log, etc.) based on the log level or the class or file the log has been generated from.
All this sounds useful, but too often the outcome is an extremely noisy and untrustworthy log file containing too much stuff and, at times, too many errors, even when the system was just behaving itself. Furthermore, the application becomes unusable because 90% of the time it's doing a printer's job instead of its own. This is why all those fancy features exist: to try to filter out all this noise and restore some of the performance.

Logging is a really trivial thing and in my projects I keep it that way by using a really trivial logging model:
  1. Error: an error internal to the program occurred, and it shouldn't have; this means that it's unrecoverable. For example, we placed a try-catch block at some very high level and it caught some sort of exception. Conversely, if we were able to write code to recover from an error, then it wouldn't be an error, because it's something we can handle and dealing with it is part of the program's behavior.
  2. Warning: an error external to the program occurred. Being outside of our control, it's something we knew could occur, and so we wrote defensive code. For example, the server received a malformed message from a client.
  3. Info: any other event which is meaningful in the application's domain. For example, an embedded device discovered that a new software version is available. Conversely, printing out a temporary variable's value is probably not meaningful in the application's domain. (A minimal interface for this model is sketched right after this list.)
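
This model needs nothing fancy; here is a minimal sketch of the interface it implies (the names are hypothetical):

// Three levels are enough: internal errors, external errors, domain events.
public interface Log {
  void error(String message);   // internal, unrecoverable error
  void warning(String message); // external error we defended against
  void info(String message);    // meaningful domain event
}

// Usage, following the examples above:
//   log.error("Unexpected exception: " + exception);
//   log.warning("Malformed message from client " + clientId);
//   log.info("New software version available: " + version);
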
There's no need for lots of logging levels. For example, there's no need for debug logs which print out variables and other internal data just because "it could help". We should rather write unit tests. If they pass, then we know exactly how the program behaves.

There's no need for fancy stuff like turning logging on and off at run time, because logging is less than 10% of the job, not the other way round.

There's no reason to remove logging when releasing the product, for the same reason. The only context where it might still be reasonable to strip logging out is an embedded device with no communications at all, but this sounds like an extremely rare case; there's almost always a way to inspect or extract logs from a device.

In conclusion, logging is a trivial thing. We should keep it that way, without adding fancy features of any sort. However, what is worth logging, and which level it belongs to, is more an art than a science.

Monday 2 December 2013

The evil static methods

Most programming languages provide a way to define static methods. We all know what they are: functions of a class that can be invoked globally, without requiring an instance of that class. They do sound really handy, but the reality is different.
Let's have a look at the following code:

public class Person {
  static public boolean isSocialNumberValid(String socialNumber){
    // TODO: Check for valid prefixes and suffixes
    return  socialNumber.length() == 9                  &&
            Character.isLetter(socialNumber.charAt(0))  &&
            Character.isLetter(socialNumber.charAt(1))  &&
            Character.isDigit(socialNumber.charAt(2))   &&
            Character.isDigit(socialNumber.charAt(3))   &&
            Character.isDigit(socialNumber.charAt(4))   &&
            Character.isDigit(socialNumber.charAt(5))   &&
            Character.isDigit(socialNumber.charAt(6))   &&
            Character.isDigit(socialNumber.charAt(7))   &&
            Character.isLetter(socialNumber.charAt(8));
  }
}

The class Person provides a static method to check whether a given string is a valid social number (the UK National Insurance Number). This method looks so useful that we're tempted to use it literally everywhere while developing a hypothetical management system for public entities. It will soon become a weakness in that system.

Static methods kill OO's polymorphism

If we also sold the system to US public entities, we would need to extend that method, but we can't because it's static. The only workaround that avoids changing every single call to that function is to support both the UK NIN and the US Social Security Number, resulting in the following bad-looking code:

public class Person {
  static public boolean isSocialNumberValid(String socialNumber){
    return isUkNin(socialNumber) || isUsSsn(socialNumber);
  }
  static private boolean isUkNin(String socialNumber) {
    // TODO: Check for valid prefixes and suffixes
    return  socialNumber.length() == 9                  &&
            Character.isLetter(socialNumber.charAt(0))  &&
            Character.isLetter(socialNumber.charAt(1))  &&
            Character.isDigit(socialNumber.charAt(2))   &&
            Character.isDigit(socialNumber.charAt(3))   &&
            Character.isDigit(socialNumber.charAt(4))   &&
            Character.isDigit(socialNumber.charAt(5))   &&
            Character.isDigit(socialNumber.charAt(6))   &&
            Character.isDigit(socialNumber.charAt(7))   &&
            Character.isLetter(socialNumber.charAt(8));
  }
  static private boolean isUsSsn(String socialNumber) {
    return  socialNumber.length() == 9                &&
            Character.isDigit(socialNumber.charAt(0)) &&
            Character.isDigit(socialNumber.charAt(1)) &&
            Character.isDigit(socialNumber.charAt(2)) &&
            Character.isDigit(socialNumber.charAt(3)) &&
            Character.isDigit(socialNumber.charAt(4)) &&
            Character.isDigit(socialNumber.charAt(5)) &&
            Character.isDigit(socialNumber.charAt(6)) &&
            Character.isDigit(socialNumber.charAt(7)) &&
            Character.isDigit(socialNumber.charAt(8));
  }
}
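
For comparison, here is a minimal sketch of how instance methods would restore polymorphism (SocialNumberValidator and the two class names are hypothetical): each country's rule becomes an object, so a new country is a new class rather than a new branch.

public interface SocialNumberValidator {
  boolean isValid(String socialNumber);
}

public class UkNinValidator implements SocialNumberValidator {
  @Override
  public boolean isValid(String socialNumber) {
    // Two letters, six digits, one letter: equivalent to isUkNin() above
    return socialNumber.matches("[A-Za-z]{2}[0-9]{6}[A-Za-z]");
  }
}

public class UsSsnValidator implements SocialNumberValidator {
  @Override
  public boolean isValid(String socialNumber) {
    return socialNumber.matches("[0-9]{9}");
  }
}

Whoever validates a Person would hold a SocialNumberValidator and call isValid() on it; supporting a new country wouldn't touch any existing call site.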

Static methods shouldn't exist at all

Static methods can be invoked without an instance of the class they are declared in. This means that they don't deal with objects of that class, hence they shouldn't be part of the class at all. An exception might be factory methods, because they create instances of that class. However, not even those should be static, because we would never be able to leverage polymorphism to change the way those instances are created.

They hide dependencies

Take a look at the following method:

public class Database {
  public Person createPerson(String name, String address) {
    Person p = new Person();
    p.setName(name);
    p.setAddress(address);
    save(p);
    return p;
  }
}

It looks very clear. We call Database.createPerson() passing the name and the address of a person, and it will create a new Person object, save it into the database and return the new instance. It's really that trivial, isn't it? It isn't! Someone didn't know that static methods are evil and wrote the following code:

public class Person {
  public void setAddress(String address) {
    this.address = address;
    this.latlon = GoogleGeoCoder.convertAddress(address);
  }
}

The usage of the static method GoogleGeoCoder.convertAddress() is hiding the important fact that Database depends on GoogleGeoCoder!
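
A minimal sketch of the fix (GeoCoder and LatLon are hypothetical names): inject the geocoder, so the dependency shows up in the constructor signature instead of hiding in a method body.

public interface GeoCoder {
  LatLon convertAddress(String address); // LatLon: a simple coordinates value type
}

public class Person {
  private final GeoCoder geoCoder;
  private String address;
  private LatLon latlon;

  public Person(GeoCoder geoCoder) {
    this.geoCoder = geoCoder;
  }

  public void setAddress(String address) {
    this.address = address;
    this.latlon = geoCoder.convertAddress(address); // dependency is now explicit
  }
}

Now Database.createPerson() has to supply a GeoCoder when it builds the Person, and everyone can see the dependency.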

The Singleton anti-pattern

The only way to obtain the instance of a Singleton is to invoke the usual getInstance() static method. Hence, Singletons have the very same issues that static methods have:
  1. They can't be extended: If GoogleGeoCoder was a Singleton and we called GoogleGeoCoder.getInstance(), then we would always obtain an instance of a GoogleGeoCoder; never one of its subclasses.
  2. Singletons can be hidden everywhere, even behind the most innocent API like Person.setAddress().

They make testing difficult

This is another big problem caused by static methods and Singletons. We can't easily replace the real implementation with a fake one, as discussed in previous posts. This makes our life more complicated when testing.

The new operator is a static method

It might not be that evident, but the less we use the new operator, the better. Let's look at the following snippet of code:

public class UserRegistrationController {
  public void register(String email, String password) {
    User user = new User();
    user.setEmail(email);
    user.setPassword(password);
    _db.createUser(user);
    sendWelcomeMessage(user);
  }
  private void sendWelcomeMessage(User u) {
    Email email = new Email();
    email.to(u.getEmail());    
    email.setSubject("Welcome");
    email.setBody("Welcome!");
    _emailSender.send(email);
  }
}

Because of the new operator, UserRegistrationController is tightly coupled with Email. If we wanted to send an SMS or invoke a social network API to send a private message, we would need to write two new functions and add some ifs to choose which type of welcome message to send.

Not all the new operators are equivalent

In fact, new Email() is a new operator we really want to remove; because of it, UserRegistrationController depends on both Email and some sort of EmailSender. There is no real need for this. It would be better if it depended only on one more abstract Notifier. On the other hand, new User() isn't harmful, because User belongs to the application's domain.
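
A minimal sketch of that idea (Notifier is a hypothetical name for the abstraction):

public interface Notifier {
  void sendWelcomeMessage(User user);
}

public class UserRegistrationController {
  private final Database _db;
  private final Notifier _notifier;

  public UserRegistrationController(Database db, Notifier notifier) {
    _db = db;
    _notifier = notifier;
  }

  public void register(String email, String password) {
    User user = new User(); // fine: User belongs to the domain
    user.setEmail(email);
    user.setPassword(password);
    _db.createUser(user);
    _notifier.sendWelcomeMessage(user); // e-mail, SMS or social API behind the interface
  }
}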

Use factory objects but remember the Law of Demeter

Many of the new operators can be replaced by factory objects, which provide more extensibility and maintainability. However, keep in mind the Law of Demeter. In the following code we use an EmailFactory to create the welcome email, which the EmailSender will then send:

public class UserRegistrationController {
  public void register(String email, String password) {
    User user = new User();
    user.setEmail(email);
    user.setPassword(password);
    _db.createUser(user);
    Email welcomeEmail = _emailFactory.welcomeEmail(user);
    _emailSender.send(welcomeEmail);
  }
}

There is no real need for UserRegistrationController to know about the EmailFactory; it only wants to send an email it shouldn't even create. EmailSender should rather provide a sendWelcomeMessage() method taking User as an argument: it would take care of creating and sending the necessary Email (either by means of a factory, or manually by doing new Email(), or even in some other fancy way).
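
A minimal sketch of how that could look:

public class EmailSender {
  private EmailFactory _emailFactory; // injected, as before

  public void sendWelcomeMessage(User user) {
    Email email = _emailFactory.welcomeEmail(user); // or new Email(), or anything else
    send(email);
  }

  public void send(Email email) { /* as before */ }
}

public class UserRegistrationController {
  public void register(String email, String password) {
    User user = new User();
    user.setEmail(email);
    user.setPassword(password);
    _db.createUser(user);
    _emailSender.sendWelcomeMessage(user); // no Email, no factory in sight
  }
}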

Sunday 10 November 2013

Test Driven Education [part 4 of 4]

A full Test Driven Project to learn from.

Our simple Home Security system is almost complete. We've written most of the units it is made of, but to be fully complete and deployable we still need to connect the dots. In all the previous posts, we've written unit tests as we were writing units, i.e. the core parts of the application. Since we're now going to finish it, we need to write code that crosses the boundaries between the application and the environment it interacts with. We'll need to implement a concrete Database, a concrete Logger, a concrete Siren and the UI. This will give us the opportunity to understand the so-called Testing Pyramid. It is a concept by which an application's tests can be classified into three different categories, each layer containing more tests than the one above it (hence the pyramid shape):
  1. Unit tests. The base of the pyramid, where the greatest number of tests lie. They exercise the core part of the application: its units. A unit is definable as any piece of code we write (i.e. not the 3rd party libraries we use). A unit test is then a test which exercises only that single unit; every other unit it may interact with will be replaced by a test double.
  2. Integration tests. Fewer than unit tests, they exercise the interaction between two or more concrete units, or the interaction between a concrete unit and the operating system or, similarly, the application's execution environment.
  3. Component tests. The top of the pyramid, being the category with the smallest number of tests. They exercise the application as a whole. Every unit the application is made of will be real, and the tests can only drive the application's inputs and read its outputs.
Being a simple project, Domus will have the simplest database ever; it will read/write the PIN code from/to a file, and we'll call it FileDatabase. Our first requirement is: when FileDatabase is created, it has to create the file and write the factory PIN code into it.

public class FileDatabaseTest extends TestCase {
  
  protected void tearDown() { 
    File file = new File("pin.dat");
    assertTrue("File couldn't be deleted", file.delete());
  }

  public void testCreatePinFileIfNotExists() throws Exception {
    // setup
    FileDatabase db = new FileDatabase("pin.dat");
    // verify
    Scanner dbScanner = new Scanner(new File("pin.dat"));
    assertEquals("0000", dbScanner.next());
  }
}

This is an Integration Test, and the reason is that FileDatabase interacts directly with the execution environment (i.e. the JVM and a real file). Because of this interaction, integration tests are slower than unit tests. It might not sound like a big issue, but as the integration tests get more and more numerous, running all the tests will take longer and longer, until they eventually take so much time that we'll be annoyed by running them or, anyway, we'll spend a lot of time waiting rather than implementing features. This should not make us think that integration tests are bad. We need them. We just want to run them less frequently than unit tests, so they don't slow us down.
The following is the implementation of FileDatabase constructor:

  public FileDatabase(String pinFileName) throws IOException {
    FileWriter pinFile = new FileWriter(pinFileName);
    pinFile.write("0000");
    pinFile.close();
  }

To keep this short, I'll skip over the full implementation of FileDatabase (you can find all the code on git, including FileLogger and FileSiren). To launch Domus, we'll also need a Main class which will wire all the parts together. It will also implement a classic command-line interface, printing out a menu with all the choices, which for our simple project will be:
  • 0, to quit the program
  • 1, to add a sensor
  • 2, to engage the alarm
  • 3, to disengage the alarm
  • 4, to trigger a sensor
  • 5, to change the PIN code
The implementation of Main can be found on git. What's more important to look at is one component test for Domus. This test exercises the whole of Domus by adding a sensor, engaging the alarm and then triggering the sensor; the expected outcome is the activation of the siren, which is just a matter of writing 1 into the file siren.out (as often happens for /dev files on Unix platforms).

public class DomusTest extends TestCase {
  
  protected void tearDown() { 
    assertTrue(new File("pin.dat").delete());
    assertTrue(new File("log.txt").delete());
    assertTrue(new File("siren.out").delete());
  }

  public void testActivateSiren() throws Exception {
    // setup
    StringBuffer commands = new StringBuffer();
    commands.append("1\n"); /* Add a sensor */
    commands.append("2\n"); /* Engage the alarm */
    commands.append("0000\n");
    commands.append("4\n"); /* Trigger a sensor */
    commands.append("0\n");
    commands.append("0\n"); /* Quit the program */
    System.setIn(new ByteArrayInputStream(commands.toString().getBytes()));
    // exercise
    Main.main(new String[] {"pin.dat", "log.txt", "siren.out"});
    // verify
    Scanner sirenScanner = new Scanner(new File("siren.out"));
    assertEquals(1, sirenScanner.nextInt());
  }

}


Two other component tests can be written to verify that the user can:

  1. change the PIN code
  2. disengage the alarm.
You can find these tests on git. What is important to highlight is that these three tests are the only component tests needed. Looking at how many tests we've written so far, we can finally understand what the Testing Pyramid is:
  • 3 component tests
  • 8 integration tests
  • 16 unit tests
As expected, in a well tested codebase, the majority of the tests are unit tests, because they exercise all the workers of a system, proving that each one is doing its job; a lower number of integration tests proves that the peripheral parts of the system interact properly with the execution environment; finally, a small set of component tests proves that everything has been wired correctly.

Sunday 13 October 2013

Test Driven Education [part 3 of 4]

A full Test Driven Project to learn from.

In the last post, we've seen what Dependency Injection is and, particularly, how it makes unit testing easier: by injecting fake collaborators, we fully control the inputs and outputs of the class being tested. In this third part, we're going to discover five different ways of faking collaborators, commonly known as Test Doubles:
  • Dummy Object
  • Stub Object
  • Fake Object
  • Spy Object
  • Mock Object
As usual, we'll extend Domus to fulfill new requirements. In particular, we want to persist the PIN code into a database, to survive system reboots, and we want to produce logs reporting relevant events such as alarm engagement and disengagement. To satisfy these two requirements, we'll introduce a Database and a Logger.


Stub Object

The first Test Double to talk about is the Stub Object: its only responsibility is to return known values (fixed or configurable) to its callers; therefore, stubs suit very well the case where an object serves as an input to the class being tested.
We need a stub object because of a redefined requirement affecting all the tests dealing with the PIN code. The new requirement, in fact, is that SecurityAlarm shall validate the PIN entered by the user against the one persisted in the database (for brevity, here is only the code of testEngageAlarm()):

  public void testEngageAlarm() {
    // setup
    StubDatabase db = new StubDatabase();
    SecurityAlarm alarm = new SecurityAlarm(db);
    // exercise
    alarm.engage("0000");
    // verify
    assertTrue("Alarm not engaged", alarm.isEngaged());
  }

These tests don't compile anymore, as SecurityAlarm now needs a Database to be injected, so we define the new constructor:

  public SecurityAlarm(Database db) {
  }

All the previous tests now compile; however, the tests we didn't change (those not dealing with the PIN code) don't compile anymore, because SecurityAlarm no longer has an empty constructor. This gives us the opportunity to talk about dummies.

Dummy Object

Dummies are objects whose only responsibility is to shut the compiler up. They concretely implement their interfaces to pass the type checking at compilation time, but they shouldn't be invoked at all by the class being tested. Bear in mind that the null keyword is considered a dummy object, and it is more than welcome in unit tests, as it highlights that a particular collaborator is not involved in that particular unit test (hence, the cause of a failure should be sought elsewhere). However, the class being tested might not allow null to be passed; in these cases, a Null Object needs to be used. As said, it concretely implements its interface but either does nothing (i.e. empty methods) or makes the test fail, depending on the context and on taste.
To make our unit tests compile again, we'll use the null keyword wherever required, for example in testAlarmNotEngagedAtStartup():

  public void testAlarmNotEngagedAtStartup() {
    // setup
    SecurityAlarm alarm = new SecurityAlarm(null);
    // verify
    assertFalse("Alarm engaged", alarm.isEngaged());
  }

With these changes, the tests compile again. They still pass, but this shouldn't confuse us; SecurityAlarm is still satisfying its requirements:
  • Engaging/Disengaging the alarm
  • Activating the sirens
  • etc
The fact is: we're now doing a refactoring. When we refactor some code, we move from a software version that satisfies its requirements to a different version which still satisfies its requirements. So, let's keep refactoring SecurityAlarm and remove all references to the private variable _pin and the DEFAULT_PIN constant:

public class SecurityAlarm implements Sensor.TriggerListener {
  ...
  private Database _db;
  ...
  public SecurityAlarm(Database db) {
    _db = db;
  }

  private boolean isPinValid(String pin) {
    return pin.equals(_db.getPin());
  }

  ...

  public void changePinCode(String oldPin, String newPin) {
    if (isPinValid(oldPin)) {
      /* nowhere to store the new PIN yet */
    }
  }
  ...
}

Most of the tests are still passing, but others, like testChangePinCode(), are now failing, which means that something is going wrong. This brings us to a new requirement and a new Test Double.

Fake Object

Fakes are the next step toward the real implementation; they pretend very well to be what they claim to be. They're more clever than stubs, because they start having some degree of logic driving their return values, possibly a function of other properties that collaborators might even set. A FakeDatabase is then what we need to make testChangePinCode() pass again (please note that the Database interface has also been changed to offer the method setPin()):

public class SecurityAlarmTest extends TestCase {

  class StubDatabase implements Database {
    @Override
    public String getPin() { return "0000"; }
    @Override
    public void setPin(String pin) {
      /* Do nothing */
    }
  }

  class FakeDatabase implements Database {
    String pin = "0000";
    @Override
    public String getPin() { return pin; }
    @Override
    public void setPin(String pin) { this.pin = pin; }
  }
  ...
  public void testChangePinCode() { 
    // setup
    FakeDatabase db = new FakeDatabase();
    SecurityAlarm alarm = new SecurityAlarm(db);
    // exercise
    alarm.changePinCode("0000", "1234");
    alarm.engage("1234");
    // verify
    assertTrue("Alarm not engaged", alarm.isEngaged());
  }
  ...
  public void testCanChangePinCodeMoreThanOnce() { 
    // setup
    FakeDatabase db = new FakeDatabase();
    SecurityAlarm alarm = new SecurityAlarm(db);
    // exercise
    alarm.changePinCode("0000", "1234");
    alarm.changePinCode("1234", "5678");
    alarm.engage("5678");
    // verify
    assertTrue("Alarm not engaged", alarm.isEngaged());
  }
}

Spy Object

We've already seen spies in the previous post: they're useful when we're only interested in the outputs of the class being tested. They're usually not as clever as Fakes, because they have a different responsibility: they only keep track of which methods have been invoked (and possibly their arguments); the unit test will then query the Spy Object to assert that the expected methods have been invoked. Finally, as a courtesy, they might also return known values (fixed or configurable) to their callers, but there shouldn't be any logic behind these return values. In Domus, a Spy is exactly what we need for the Logger. Reflecting the events relevant in the Home Security domain, Logger is an interface exposing the following methods:
  • alarmEngaged()
  • alarmDisengaged()
To keep this post short, I'm going to show only the code for the engagement case (you can find the complete code on git). Like Database, Logger is a collaborator, and we'll inject it into SecurityAlarm's constructor. The requirement is very simple: when the alarm is engaged, a log should be produced. The following is the unit test:

public class SecurityAlarmTest extends TestCase {
  ...
  class SpyLogger implements Logger {
    private boolean _alarmEngagedLogged;

    @Override
    public void alarmEngaged() {
      _alarmEngagedLogged = true;
    }

    public void assertAlarmEngagedLogged() {
      Assert.assertTrue("Alarm engaged not logged", _alarmEngagedLogged);
    }
      
  }

  public void testLogWhenAlarmIsEngaged() {
    // setup
    StubDatabase db = new StubDatabase();
    SpyLogger logger = new SpyLogger();
    SecurityAlarm alarm = new SecurityAlarm(db, logger);
    // exercise
    alarm.engage("0000");
    // verify
    logger.assertAlarmEngagedLogged();
  }
}

Making this test pass should be quite easy, as it's just about invoking the logger when the alarm is engaged:

  public SecurityAlarm(Database db, Logger logger) {
    _db = db;
    _logger = logger;
  }
  ...
  public void engage(String pin) {
    if (isPinValid(pin)) {
      _engaged = true;
      _logger.alarmEngaged();
    }
  }

However, all the old tests fail if we pass null as a dummy logger. This is, in fact, a case where a Null Object is needed as a dummy.
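
A minimal sketch of such a Null Object (my version; the post's actual code is on git):

class NullLogger implements Logger {
  @Override
  public void alarmEngaged() { /* do nothing */ }
  @Override
  public void alarmDisengaged() { /* do nothing */ }
}

The old tests can then pass new NullLogger() where no logging is expected, for example:

  SecurityAlarm alarm = new SecurityAlarm(db, new NullLogger());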

Mock Object

We're not going to see this one in practice, because we should avoid Mock Objects as much as possible. A Mock Object is kind of a superset of a Spy. It keeps track of every single method the class under test calls (even indirectly), and of all its arguments, and makes the test fail if:
  • a method has been invoked unexpectedly
  • a method has not been invoked when it should have been
  • a method has been invoked too many or too few times
  • a method has been invoked with the wrong arguments
  • a method has been invoked in the wrong order with respect to another one
All these checks might sound good, but in most cases they're not. The reason to avoid Mocks is that, by their nature, they impose how the class being tested should be implemented rather than what it should achieve. As soon as the class being tested is extended to implement new features (which is a very good thing), all the tests relying on Mock Objects will very probably fail (which is a very bad thing). So, what is a Mock Object good for? It's good only when we need to test that a piece of code invokes an external API properly, which is something rare and for which component tests are probably better.