Ryan Greenhall

Thoughts on Software Development

Introducing Insight: A Dashboard to Collate Status information from Multiple Application Instances

without comments

The goal of Insight is to help teams have greater visibility of the state of the software they are deploying. Insight makes it easy to answer questions such as: Are all nodes running the same version? Are all nodes pointing at the correct database? Are all the instances running? In short, being able to answer these questions takes a lot of the stress out of performing a release.  I was introduced to this style of dashboard by Sam Newman and have used them extensively ever since.

insight dashboard

Configuration

Insight relies on web applications exposing their configuration properties as a resource represented as JSON.  For example, a request to http://mywebapp:8080/internal/status.json responds with:

{
    "username" : {
        "value": "user",
        "type" : "property"
    },
    "end.point": {
        "value" : "http://end.point.i.need.to.talk.to",
        "type" : "integration"
    },
    "number.of.requests.in.last.hour": {
        "value": "1678",
        "type" : "event"
    }
}
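The collation Insight performs can be sketched in a few lines. This is a minimal illustration only: the node names and the `version` property below are hypothetical, and in reality each payload would be fetched over HTTP from a node's status resource rather than inlined.

```python
# Sketch: collate status JSON from several nodes and flag mismatches.
# The payloads below stand in for responses from each node's
# /internal/status.json resource (hypothetical hosts and properties).

def collate(statuses):
    """statuses maps node name -> parsed status JSON (property -> {value, type})."""
    report = {}
    for node, props in statuses.items():
        for name, details in props.items():
            report.setdefault(name, {})[node] = details["value"]
    return report

def mismatches(report):
    """Return the property names whose value differs between nodes."""
    return sorted(name for name, values in report.items()
                  if len(set(values.values())) > 1)

statuses = {
    "node1": {"version": {"value": "1.2.0", "type": "property"}},
    "node2": {"version": {"value": "1.1.9", "type": "property"}},
}

print(mismatches(collate(statuses)))  # -> ['version']
```

A dashboard then only has to render the report, highlighting any property in the mismatch list.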

I started the project over a year ago as a breakable toy in order to learn Node.  It has proved useful at Forward and a recent facelift by Luke Williams has prompted me to advertise it for use by others.

Feedback most welcome.

Written by Ryan Greenhall

August 21st, 2011 at 3:28 pm

Posted in Uncategorized

Monitoring Hadoop Clusters using Ganglia

with 36 comments

I spent a couple of days this week working with my Forward colleague Abs configuring Ganglia to monitor our Hadoop cluster and automating the installation to our production servers. The goal of this article is to provide an overview of the Ganglia architecture combined with our experience of getting it to play nicely with Hadoop.

Ganglia Overview

Ganglia is comprised of three components:
  1. Ganglia Monitoring Daemon (gmond) – The Ganglia Monitoring Daemon (gmond) needs to be installed on each machine that you want to monitor.  In our case this included our slave and master Hadoop nodes. The gmond service collects server metrics and exposes them over TCP.
  2. Ganglia Meta Daemon (gmetad) – The meta daemon polls all of the available gmond data sources (over TCP) and makes the data available to the web interface. We decided to use a dedicated server for the collection and presentation of the gathered metrics.
  3. Ganglia Web Application – A PHP-based web app that presents various visualisations of server performance over various time periods.

ganglia-hadoop-configuration

Installing gmond on your Hadoop servers.

We found the following installation guide, Installing ganglia-3.1.1 on Ubuntu 8.04 Hardy Heron, helpful when installing gmond on our Hadoop servers.

We placed the gmond configuration in the default location: /etc/ganglia/gmond.conf and made the following changes to the defaults.
cluster {
    name = "hadoop"
    owner = "your company"
    latlong = "unspecified"
    url = "unspecified"
}

/* Specifies the port that gmond will receive data on */
udp_recv_channel {
  port = 8649
}

/* Specifies the port and host that this gmond service will send data to. Our gmond services post to themselves rather than gmond services on other machines */
udp_send_channel {
    host = your.hadoop.host.name
    port = 8649
    ttl = 1
}

/* Specifies the port that metrics can be retrieved from */
tcp_accept_channel {
  port = 8650
}
Start gmond using sudo gmond.  To ensure that gmond is collecting stats correctly use: telnet localhost 8650.  This should output a stream of XML containing collected stats.
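To sanity-check that stream programmatically, one could parse it. The snippet below works on a heavily trimmed, hypothetical sample of gmond's XML (real output carries many more elements and attributes, but HOST elements containing METRIC elements with NAME and VAL attributes are the essentials):

```python
import xml.etree.ElementTree as ET

# A trimmed-down stand-in for the XML stream gmond emits on its
# tcp_accept_channel. Host names and values here are made up.
sample = """
<GANGLIA_XML VERSION="3.1.1" SOURCE="gmond">
  <CLUSTER NAME="hadoop" OWNER="your company">
    <HOST NAME="slave1.hadoop" IP="10.0.0.11">
      <METRIC NAME="load_one" VAL="0.42" TYPE="float"/>
      <METRIC NAME="mem_free" VAL="512000" TYPE="float"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>
"""

def metrics_by_host(xml_text):
    """Map host name -> {metric name: value} from a gmond XML dump."""
    root = ET.fromstring(xml_text)
    return {host.get("NAME"): {m.get("NAME"): m.get("VAL")
                               for m in host.findall("METRIC")}
            for host in root.iter("HOST")}

print(metrics_by_host(sample)["slave1.hadoop"]["load_one"])  # -> 0.42
```

In practice you would read the XML from the TCP socket rather than a string, but the parsing is the same.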

Configuring Hadoop to send metrics to gmond

Fortunately for us, Hadoop provides gmond monitoring integration through org.apache.hadoop.metrics.ganglia.GangliaContext31, which is configured in hadoop-metrics.properties.  A restart of the tasktracker is required for Hadoop-specific metrics to appear in the Ganglia web app.
/etc/init.d/hadoop-tasktracker restart
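The relevant hadoop-metrics.properties entries take broadly the following shape (a sketch: the gmond host is a placeholder and the 10-second period is just a common choice; check the commented template shipped with your Hadoop distribution):

```properties
# Send dfs, mapred and jvm metrics to the local gmond (placeholder host).
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=your.hadoop.host.name:8649

mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=your.hadoop.host.name:8649

jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=your.hadoop.host.name:8649
```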

Ganglia Monitoring Server

We decided to install gmetad and the Ganglia web app on a standalone machine.  Once again we found Installing ganglia-3.1.1 on Ubuntu 8.04 Hardy Heron very helpful in installing these two components.  Once gmetad has been installed it needs to know which datasources to poll for metrics.  To do this we added the following entries into /etc/ganglia/gmetad.conf:
data_source "master" master.hadoop:8650
data_source "slave1" slave1.hadoop:8650
data_source "slave2" slave2.hadoop:8650
data_source "slave3" slave3.hadoop:8650
data_source "slave4" slave4.hadoop:8650
data_source "slave5" slave5.hadoop:8650
Finally, start gmetad to see server metrics in the Ganglia web app (http://your.ganglia.host/ganglia).
sudo gmetad

Written by Ryan Greenhall

October 22nd, 2010 at 2:04 pm

Posted in devops

Introducing Bumblebee: A JavaScript testing toolkit combining Ant, Rhino, Envjs and JSpec

with 12 comments

As JavaScript continues to gain prominence many teams are seeking to understand how TDD/BDD can be applied to JavaScript. To help people get started I have created a reference project, bumblebee, that combines Ant, Rhino, Envjs and JSpec so that specs can be executed in headless fashion, allowing easy integration into a CI build.  This project has been influenced by the excellent blue-ridge; the difference is that bumblebee is not coupled to a web framework such as Rails.

Written by Ryan Greenhall

July 7th, 2010 at 12:50 pm

Posted in BDD, JavaScript

Exposed Application Configuration

with 16 comments

Problem:

As a developer or operations person
I want easy access to the current configuration of a web application
So that I can diagnose configuration problems more effectively

Solution:

Expose application properties as a simple HTML page.  Using a URI such as: /internal/status allows the page to be hidden from end users through appropriate configuration of your web server.  For example:

status page example

In this example status page, each configurable property is listed alongside the configured value.  The page even provides the location of the properties file should modifications need to be made.

Teams can go one step further and expose “health checks” through such a page.  In this example the application has three
dependencies that need to be satisfied for correct operation:

1) Need to be able to access a HTTP endpoint;
2) Need a directory to exist (and have read/write permissions);
3) Need to be able to connect to a database.

For each of these properties we can check whether the dependency is satisfied. For example, does the directory exist?
Can we read from the directory?  Any failure can then be exposed visually, providing early warning signs immediately
after a deployment that the application is not healthy and requires further investigation.
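A minimal sketch of such checks, in Python for illustration only: the endpoint URL and directory paths are hypothetical, and the HTTP fetch is stubbed so the example is self-contained.

```python
import os
import tempfile

# Sketch of the three health checks described above.

def check_directory(path):
    """The directory must exist and be readable and writable."""
    ok = os.path.isdir(path) and os.access(path, os.R_OK | os.W_OK)
    return ("directory", path, ok)

def check_http(url, fetch):
    """`fetch` performs the GET and returns True on a 2xx response."""
    try:
        return ("http", url, fetch(url))
    except Exception:
        return ("http", url, False)

def render(checks):
    """One line per dependency, flagging failures for the status page."""
    return ["%s %s: %s" % (kind, target, "OK" if ok else "FAILED")
            for kind, target, ok in checks]

workdir = tempfile.mkdtemp()
checks = [check_directory(workdir),
          check_directory("/no/such/dir"),
          check_http("http://end.point/ping", lambda url: True)]  # stubbed fetch
for line in render(checks):
    print(line)
```

Rendering "FAILED" in red on the status page makes an unhealthy deployment obvious at a glance.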

For more information on this topic and many other techniques for smoothing the path from dev to production I highly recommend Sam Newman’s QCon 2010 presentation: From Development to Production

Written by Ryan Greenhall

June 3rd, 2010 at 2:41 pm

Posted in devops

QCon Day 2 Keynote: Working and Living with Aging Software

with 24 comments

The second day of QCon 2010 started with a keynote by Ralph Johnson (GoF author and refactoring pioneer) on the topic of aging software.  Ralph began by asking the audience to raise their hand if they had been involved in their current project since its inception.  Approximately 50 percent of people present raised their hands enthusiastically into the air.  This is a far higher percentage than I had expected, and Ralph pointed out that this number is unlikely to be representative of the industry as a whole.  Attendees at QCon are typically experienced developers, leads and architects: the type of folks who love to work on exciting new projects and move on to pastures new as projects enter their typical “business as usual” phase. According to Ralph, 98 percent of software developers are likely not to have been involved at the beginning of the projects they are working on.  As an industry we have long recognised that software often lasts much longer than originally anticipated.

He then moved on to discuss the word legacy and its negative connotations in the software world. Legacy code was described as:

  • Code with poor design;
  • Code that is difficult to understand and change;
  • Code developed with old/unsupported technology;
  • Knowledge of the original architectural design vision has been lost;
  • No tests.

The argument was made that clean code and a comprehensive suite of tests is not enough to document a large system.  I tend to agree with this view as I have seen large code bases where the tests do not always describe the behaviour of the application clearly.  I refer to such applications as modern legacy.  High level architecture diagrams (boxes and arrows) are invaluable for seeing the bigger picture when working with large systems.

The audience were treated to an entertaining story involving a large software organisation that spent huge amounts of time and effort obfuscating their code base before handing the source over to a partner.  The code base in question was developed over many years by thousands of developers and was extremely difficult to understand, even with the source code.  In fact, to cope with this challenge the most effective developers in the organisation created an informal support network, where over time people came to recognise who had knowledge of particular areas.  The most effective problem solvers were not necessarily great debuggers, but rather good communicators: they knew how to ask for help.  Ralph went on to joke that the best thing this organisation could do to get the upper hand over its competitors was to give them the source code.  He predicted that the competitors would go out of business long before they understood the software.

The next question considered was: what is the capital investment in software?  History has shown that as an industry matures, entry becomes more expensive to the point where it becomes prohibitive.  Taking the automotive industry as an example, it is extremely difficult for established firms to remain competitive, let alone for newcomers starting from scratch.  The same pattern has not held true for software. Whilst it would be extremely challenging to create a product to compete with the mighty Microsoft Word, our industry (putting aside the recent economic challenges for a moment) still supports start-ups and allows the creation of great products by small teams of talented people.  It therefore seems reasonable to think of the capital investment in software as being talented people and their knowledge.

Software maintenance was defined as “all the work you put into a software system when you decide it is going to be replaced” and the audience were encouraged to think about ongoing changes to an aging application as evolution.  The motivation behind this viewpoint is to avoid the stigma sometimes associated with maintenance roles in other fields, characterised by lower salaries and less privileged academic backgrounds.  Of course, this stigma soon disappears when one's washing machine breaks.  Few, if any, raised their hands to the question: “who is a maintenance developer?”.  Ralph suggested that once a piece of software truly enters its “maintenance” phase then perhaps it is acceptable to approach development with little thought about the future state of the codebase.  In the context of an application with a firm date for decommissioning this may well be a sound strategy.  However, the challenge we face as an industry is that even software with an end-of-life date has a nasty habit of lasting a lot longer, typically in response to the replacement system not being ready.

Refactoring was, of course, promoted as a technique to assist in the evolution of a software system through a series of small disciplined steps aimed at improving the design whilst maintaining application behaviour.  Ralph warned against the dangers of “refactoring projects”.  The main objection to this style of development is that it is completely counter to the original refactoring mindset, where small improvements are made on a daily basis to make new features easier to add.  The very notion of a refactoring project indicates that the design has been neglected over a long period of time to the point that features are becoming prohibitively expensive to add.  Ralph used the analogy of flossing daily versus root canal surgery and reminded us that as soon as senior management start talking about refactoring we know we are in trouble.  If you do find yourself taking part in a refactoring project, be sure to have a clear direction of where you are heading and work in small steps.

In summary, this keynote covered topics that were generally well understood by the type of audience QCon attracts.  However, it was great to hear thoughts on software evolution from one of the early refactoring pioneers.  My main takeaway from the keynote was that as a profession we should always seek to encourage healthy relationships between academia and industry.  The mainstream adoption of refactoring is the result of researchers (Bill Opdyke, Don Roberts) and passionate evangelists such as Fowler and Beck working together to bring these ideas to the masses with the classic Refactoring book.

One final thought from the keynote: in twenty years some programmers may be working on software older than they are.

Written by Ryan Greenhall

March 18th, 2010 at 3:15 pm

Posted in qcon, refactoring

Tired of Ant? Try Gant

with 16 comments

Ant has been a solid workhorse for building Java applications since its release in 2000, having many favourable properties: it is cross platform and mature, provides a large collection of useful tasks and, let's not forget, comprehensive documentation. However, Ant is showing its age and has been for some time. The use of XML to represent an external DSL is now generally regarded as a mistake, not to mention noisy due to all of those angle brackets.

Many in the Java community, especially those with experience with Rake from their Ruby adventures, have for some time been looking for alternative ways to build their applications. One noteworthy alternative is Gant (Groovy with Ant).

Gant allows Ant tasks to be defined using Groovy. This approach provides an internal DSL for describing build tasks, which happens to be much more readable than its XML counterpart. Furthermore, users of Gant require very little Groovy knowledge to become productive. One only has to look at an example Gant script to see how Ant task definitions map to Gant definitions and away you go.

One of the arguments I often hear against moving away from Ant is that almost every Java developer is familiar with Ant. Why would we want to move to something that fewer people know? With Gant this argument is not an issue as all of the Ant tasks that people have come to know and love are available. It is only the representation of those tasks that has changed.

If your project is currently using Ant and you are happy with the functionality provided by Ant tasks, but can no longer stand to define your build script with XML, Gant may be just what you have been looking for.

Here is an example Gant build file that performs a number of standard build operations, such as: compile, run tests, report test results and distribute as a jar.

Example Gant Build


sourceDirectory = 'src'
testDirectory = 'spec'
buildDirectory = 'build'

sourceClassesDirectory = buildDirectory + '/src'
specClassesDirectory = buildDirectory + '/spec'
testReportsDirectory = buildDirectory + '/junit-reports'
distributionDirectory  = buildDirectory + '/dist'

includeTargets << gant.targets.Clean
cleanPattern << '**/*~'
cleanDirectory << buildDirectory

libDir = 'lib'

def buildtimeDependenciesJars = ant.path(id: 'jars') {
    fileset(dir: libDir) {
        include(name: '*.jar')
    }
}

target(compile: 'Compile source to build directory.') {
    mkdir(dir: sourceClassesDirectory)
    javac(srcdir: sourceDirectory, destdir: sourceClassesDirectory, debug: 'on')
}

target(compileTests: 'Compile the examples') {

  depends(compile)

  mkdir(dir: specClassesDirectory)
  javac(srcdir: testDirectory, destdir: specClassesDirectory, debug: 'on') {
      classpath {
          path(refid: 'jars')
          pathelement ( location : sourceClassesDirectory )
      }
  }
}

target ( test : 'Runs the examples') {

  depends (compileTests)

  mkdir(dir: testReportsDirectory)
  junit ( printsummary : 'yes' , failureproperty : 'testsFailed' , fork : 'true' ) {
     formatter ( type : 'plain' )
        classpath {
           pathelement ( location : specClassesDirectory )
           pathelement ( location : sourceClassesDirectory )
           path(refid: 'jars')
        }

        batchtest ( todir : testReportsDirectory ) {
           fileset ( dir : testDirectory , includes : '**/*Behaviour.java' )
        }
    }
}

target ( distribute : 'Distributes the library as a jar.') {
  depends (clean, test)
  mkdir(dir: distributionDirectory)

  echo("Creating example-app.jar ...")
  jar ( destfile : distributionDirectory + '/example-app.jar' , basedir : sourceClassesDirectory )
}

setDefaultTarget(distribute)

Written by Ryan Greenhall

January 13th, 2009 at 4:58 pm

Posted in build

RSpec’s Scenario Runner is dead – Long Live Cucumber!

with 20 comments

Recently when browsing RSpec’s homepage I noticed the bold (in the markup sense) announcement that RSpec’s scenario runner has been deprecated in favour of Aslak Hellesøy’s Cucumber. I have always been a fan of RSpec’s scenario runner and love the plain text story support. I therefore felt compelled to see what Cucumber has to offer and I am pleased to report that it’s wonderful.

First Taste of Cucumber

Having previously used RSpec’s scenario runner to provide automated acceptance scenarios for a Ruby implementation of the Game of Life, I was keen to see how easy it was to migrate to Cucumber.

Cucumber can be installed with the following command:

  gem install cucumber

Feature Injection

Cucumber is built around features rather than stories and recommends the Feature Injection template for describing features.

  In order to [achieve value]
  As a [role]
  I need [feature].

I therefore revisited the create a cell story (originally found in the examples provided in the RSpec code base) and rephrased the requirement using the Feature Injection template.

Story Format

Story: Cell Creation

As a game producer
I want to create a cell
So that I can set the initial game state

Scenario:  ...

Feature Injection Format

Feature: Cell Creation

In order to set the initial game state
As a game player
I need to be able to create live cells.

Scenario: Empty Grid

Given a 3 x 3 game
Then the grid should look like "........."

Scenario: Create a single cell

Given a 3 x 3 game
When I create a cell at 1, 1
Then the grid should look like "....X...."

Convention over Configuration

Cucumber applies a healthy dose of convention over configuration to provide a scenario runner that runs out of the box. The convention is to keep textual descriptions of features, using a .feature extension, in a features directory. Scenario steps are mapped to application code using Ruby. Step classes live in a steps directory as a child of the features directory.

For example:

   /features/steps/game_of_life_steps.rb
   /features/create-a-cell.feature

When following the prescribed directory structure, scenarios can be executed with the following command:

   rake features

Pending Steps

Initially my game_of_life_steps.rb file did not define any steps. When executing the scenarios for the create cell feature, Cucumber kindly told me which steps I needed to provide and even suggested implementations. This feature made me smile and was clearly developed in response to the question: how can I reduce the effort required to implement steps?

       10 steps pending

  You can use these snippets to implement pending steps:

  Given /^a 3 x 3 game$/ do
  end

  Then /^the grid should look like$/ do
  end

  When /^I create a cell at 1, 1$/ do
  end

  When /^I create a cell at 0, 0$/ do
  end

  When /^I create a cell at 0, 1$/ do
  end

  When /^I create a cell at 2, 2$/ do
  end

Implementing Steps

Textual scenarios are mapped to code using a simple DSL that allows step patterns to be associated with the following step keywords: Given, When, Then. Each keyword accepts a Ruby block that will be executed when a step pattern is matched against an actual scenario step.

Steps can be parameterised using regular expressions, for example:

require "spec"

require "domain/game"
require "view/string_game_renderer"

Given /a (\d) x (\d) game/ do |x, y|  
    @game = Game.create(x.to_i, y.to_i)
end

When /I create a cell at (\d), (\d)/ do |x, y|  
    @game.create_cell_at(x.to_i, y.to_i)
end

Then /the grid should look like/ do |grid|
    StringGameRenderer.new(@game).render.should eql(grid)
end

Alternatively, steps can be represented as strings using the dollar symbol to prefix a parameter.

For example:

require "spec"

require "domain/game"
require "view/string_game_renderer"

Given "a $x x $y game" do |x, y|
    @game = Game.create(x.to_i, y.to_i)
end

When "I create a cell at $x, $y" do |x, y|
    @game.create_cell_at(x.to_i, y.to_i)
end

Then "the grid should look like" do |grid|
    StringGameRenderer.new(@game).render.should eql(grid)
end

Migrating away from RSpec’s Story Runner

Using RSpec’s story runner I used the following approach to execute my scenarios:

  1. GameOfLifeSteps class extending Spec::Story::StepGroup
  2. GameOfLifeStoryRunner class delegating to Spec::Story::Runner::PlainTextStoryRunner configured with the GameOfLifeSteps
  3. Story classes for each story that delegate to the GameOfLifeStoryRunner passing the filename of the story to execute
  4. A Rake task to execute all of my stories

At the time this did not seem too unreasonable although there was a significant learning curve figuring out how everything was configured. Cucumber solves the configuration problem using convention. Developers need to provide step definitions and Cucumber will handle the rest.

Migrating to Cucumber was largely an exercise in deleting code that was no longer required. The conversion from RSpec step definitions to Cucumber was painless as they are very similar.

Example RSpec Step Definition

require "spec/story"
require "spec"

require "domain/game"
require "view/string_game_renderer"

class GameOfLifeSteps < Spec::Story::StepGroup

  steps do |define|

      define.given("a $x x $y game") do |x, y|          
            @game = Game.create(x.to_i, y.to_i)
      end

      define.when("I create a cell at $x, $y") do |x, y|
            @game.create_cell_at(x.to_i, y.to_i)
      end

      define.then("the grid should look like $grid") do |grid|
           StringGameRenderer.new(@game).render.should eql(grid)
      end
   end
end

Readers will immediately appreciate how easy it is to convert from RSpec step definition format to the format required by Cucumber. Admittedly, my toy application is tiny in comparison to a typical production application, but I get the feeling that migrating a larger code base would not be too troublesome. The more adventurous may even wish to automate the migration process. More advice on migrating from RSpec scenarios can be found here.

Steady Evolution

It is very encouraging to see the tooling around BDD evolve so that the task of mapping textual scenarios to code is now extremely simple. Certainly much easier than the previous generations of BDD frameworks. The Java community are well served by JBehave and the Ruby community now have Cucumber. Now that the technical challenges in mapping scenarios to code have largely been solved, teams can focus their efforts on collaborating with stakeholders and fellow team members to define the desired behaviour of the system being developed. After all, isn’t that what BDD is all about?

Written by Ryan Greenhall

November 7th, 2008 at 6:37 pm

Posted in BDD

Exposed Scenario Implementation

with 15 comments

In my previous post I refactored a noisy scenario method so that it communicated the required scenario steps more clearly. Although this was a huge improvement in terms of readability, the SignInAcceptanceScenarios class still has a significant weakness: poor maintainability, due to the exposure of a number of implementation details.

Here is the current state of the sign-in scenario:

public class SignInAcceptanceScenarios {

   private WebDriver driver;

   @Before   
   public void setup() {
       driver = new HtmlUnitDriver();
   }

   @Test
   public void shouldPresentKnownUserWithTheWelcomePage() {

       Credentials credentials = new Credentials("ryangreenhall", "password");

       givenAnExistingUserWith(credentials);
       whenUserLogsInWith(credentials);
       thenTheUserIsPresentedWithTheWelcomePage();
   }

   private void givenAnExistingUserWith(Credentials credentials) {
       User user = new UserBuilder().withCredentials(credentials).build();

       UserRespository respository = new UserRespository();
       respository.create(user);
   }

   private void whenUserLogsInWith(Credentials credentials) {
       browseToHomePage();
       enterUsernameAndPassword(credentials);
       clickSignInButton();
   }

   private WebDriver browseToHomePage() {
       driver.get("http://www.example.com/sign-in");
       return driver;
   }

   private void enterUsernameAndPassword(Credentials credentials) {       
       driver.findElement(By.id("username")).sendKeys(credentials.getUserName());
       driver.findElement(By.id("password")).sendKeys(credentials.getPassword());
   }

   private void clickSignInButton() {
       driver.findElement(By.id("login")).submit();
   }

   private void thenTheUserIsPresentedWithTheWelcomePage() {
       Assert.assertEquals("Welcome", driver.getTitle());
   }
}

Violating the Single Responsibility Principle

Following the Single Responsibility Principle we know that a class should have only one reason to change. The SignInAcceptanceScenarios class is responsible for ensuring that the identified sign-in scenarios execute correctly. Let’s consider how many reasons it has to change:

  1. The sign in resource changes: e.g. /log-in;
  2. We want to replace Web Driver with another web testing framework;
  3. The parameter names for username and password change;
  4. The title of the welcome page changes;
  5. The behaviour of the application changes.

We have identified that our scenario class has five reasons to change! The only valid reason for this class to change is when the behaviour of the sign in process changes. For example, rather than presenting the user with the welcome page they are taken to their profile page, a common feature for most social networking sites these days.

Clearly with so many reasons to change, acceptance scenarios written in this style have the potential to require many changes throughout the lifetime of the application.

Abstraction and Encapsulation to the Rescue

Wouldn’t it be great if we could encapsulate our scenarios from implementation details such as the location of the sign-in resource and the parameter names used to communicate the users credentials? This would allow the implementation of the application to change without modifying our scenarios.

Currently when implementing the sign in scenario we are thinking in terms of the abstractions provided by the Web, for example, browse to the home page and submit parameters to a sign-in resource. Wouldn’t it be great if we could raise the level of abstraction and just sign in to the application using credentials, asking the resulting page if it is the welcome page?

Let’s briefly consider one possible approach.

Fluent Navigation

Rather than exposing the implementation details of how we are interacting with application we need a suitable abstraction. One such abstraction involves representing each page in the application as a Page Object. The Page Object Model, introduced to me by my colleague Dan Bodart and also recommended by the WebDriver team, nicely encapsulates the internal representation of a page and provides methods for appropriate interactions. Another nice feature of the Page Object Model is that we can make use of a fluent interface to allow seamless navigation through any number of pages.

The goal here is to demonstrate the value of introducing a suitable abstraction for interacting with the application. I will therefore omit detailed descriptions of the implementation in the interest of brevity.

SignInPage

Given that the scenario in the example involves signing in to the application we will need a SignInPage that allows credentials to be entered and submitted.

For example:

import com.example.domain.Credentials;

public class SignInPage implements Page {

    public SignInPage() {
    }

    public SignInPage with(Credentials credentials) {        
        // enter the username and password
        return this;
    }

    public Page submit() {
        // submit the username and password to the sign-in resource
        // and return a Page representation of the response
        return null; // implementation elided for brevity
    }

    public String getTitle() {
        // return the title of the rendered page
        return null; // implementation elided for brevity
    }
}

Application Facade

Now that we have a SignInPage how can our SignInAcceptanceScenarios class get hold of one? The SignInPage will be dispensed by a class called MyApp, which acts as a facade for the application, providing methods to access the various pages in the application. This approach nicely decouples the scenario from the URLs used to address pages in the application.

For example:

import com.example.web.pages.HomePage;
import com.example.web.pages.Page;
import com.example.web.pages.SignInPage;

public class MyApp {

    public static SignInPage signIn() {
        // GET the sign-in page and return a SignInPage object
        return new SignInPage();
    }
}

Matching Pages

We now have the ability to navigate to the sign-in page, enter the user's credentials and submit, receiving a resultant page. The only remaining functionality required by the sign-in scenario is to ensure that the page returned as a result of submitting user credentials is the welcome page. How can we encapsulate the sign-in scenario from knowing the internals of the welcome page? This sounds like a job for a Hamcrest matcher. We will create a WelcomePageMatcher that knows if a given page is the welcome page by inspecting the title of the page.

For example:

import org.hamcrest.BaseMatcher;
import org.hamcrest.Description;
import com.example.web.pages.Page;

public class WelcomePageMatcher extends BaseMatcher {

    public boolean matches(Object actualPage) {
        return "Welcome".equals(((Page)actualPage).getTitle());
    }

    public void describeTo(Description description) {
    }

    public static WelcomePageMatcher isWelcomePage() {
        return new WelcomePageMatcher();
    }
}

Bringing Everything Together

The application facade, SignInPage and WelcomePageMatcher can be combined to provide a very concise specification of the behaviour required when users sign-in to the example application.

import com.example.domain.builders.UserBuilder;
import com.example.domain.Credentials;
import com.example.domain.User;
import com.example.persistence.UserRespository;
import static com.example.web.MyApp.signIn;
import com.example.web.pages.Page;
import static com.example.web.pages.matchers.WelcomePageMatcher.isWelcomePage;
import org.junit.Test;

public class SignInAcceptanceScenarios {

   @Test
   public void shouldPresentKnownUserWithTheWelcomePage() {
       Credentials credentials = new Credentials("ryangreenhall", "password");

       givenAnExistingUserWith(credentials);
       Page pageAfterLogIn = whenUserLogsInWith(credentials);
       thenUserIsPresentedWithTheWelcomePage(pageAfterLogIn);
   }

   // Alternatively (renamed so that all three variants can coexist):
   @Test
   public void shouldPresentKnownUserWithTheWelcomePageFluently() {
       Credentials credentials = new Credentials("ryangreenhall", "password");

       // Given
       givenAnExistingUserWith(credentials);

       // When
       Page pageAfterLogin = signIn().with(credentials).submit();

       // Then
       ensureThat(pageAfterLogin, isWelcomePage());
   }

   // Alternatively we can combine the when and then steps in a single line:
   @Test
   public void shouldPresentKnownUserWithTheWelcomePageInOneStep() {
       Credentials credentials = new Credentials("ryangreenhall", "password");

       givenAnExistingUserWith(credentials);

       ensureThat(signIn().with(credentials).submit(), isWelcomePage());
   }

   private void givenAnExistingUserWith(Credentials credentials) {
       User user = new UserBuilder().withCredentials(credentials).build();

       UserRespository respository = new UserRespository();
       respository.create(user);
   }

   private Page whenUserLogsInWith(Credentials credentials) {
       return signIn().with(credentials).submit();
   }

   private void thenUserIsPresentedWithTheWelcomePage(Page pageAfterLogin) {
       ensureThat(pageAfterLogin, isWelcomePage());
   }
}

Summary

We have seen that the introduction of a simple internal DSL for interacting with the application has greatly improved the readability of this scenario. Furthermore, the SignInAcceptanceScenarios class is now completely decoupled from the implementation of the application, making it less susceptible to change.

We can now change the following concerns without modifying SignInAcceptanceScenarios:

  1. The location of the sign-in resource (encapsulated in the SignInPage class);
  2. The web testing framework (encapsulated in both the Application Facade and Page classes);
  3. The parameter names for username and password (encapsulated in the SignInPage);
  4. The title of the welcome page (encapsulated in the WelcomePageMatcher).

The SignInAcceptanceScenarios class now has only one responsibility: defining the behaviour of the application. It should therefore only require modification when the behaviour of the application changes.

Written by Ryan Greenhall

October 21st, 2008 at 8:13 am

Posted in testing

Noisy Scenario Methods

with 12 comments

This is the second entry in a series of posts that provide practical advice on how to avoid or refactor away from common functional testing smells.

Noisy scenario methods can generally be identified by long method bodies that fail to clearly highlight the important interactions with the application. The code driving the application is typically written at a low level of abstraction, adding to the noise. For web applications this involves requesting resources, posting parameters and ensuring that the expected response is returned. For example:

    public class SignInAcceptanceScenarios {

        @Test
        public void shouldPresentKnownUserWithTheWelcomePage() {

            User user = new UserBuilder().withUsername("ryangreenhall").withPassword("password").build();

            UserRepository repository = new UserRepository();
            repository.create(user);

            WebDriver driver = new HtmlUnitDriver();
            driver.get("http://www.example.com");

            driver.findElement(By.id("username")).sendKeys("ryangreenhall");
            driver.findElement(By.id("password")).sendKeys("password");
            driver.findElement(By.id("login")).submit();

            Assert.assertEquals("Welcome", driver.getTitle());
        }
    }

When reading scenarios I like to see clear descriptions of the steps taken when interacting with the application. The main problem with noisy scenario methods is that the reader has to filter out the noise to answer two questions: what is the starting state of the system, and what interactions occur? A further problem is that the reader is forced to think at a low level of abstraction. Rather than thinking about signing in to the application, we are forced to think in terms of posting parameters representing the username and password to a sign-in resource.

Discovering the Scenario Steps

Reading the example scenario we can see that ensuring a known user is presented with the welcome page requires the following steps:

  1. Create a user
  2. Navigate to the application's home page
  3. Fill in the sign-in form with known credentials
  4. Submit the sign-in form
  5. Ensure that the user is taken to the welcome page

Some would argue that the example scenario could be improved by introducing a comment before each step. However, I favour scenarios composed of small methods whose names clearly describe the starting state of the application, the interactions with the application and the expected outcomes. This allows the scenario to be expressed in terms of the domain without distracting the reader with implementation details.

The Given-When-Then scenario format popularised by Behaviour Driven Development is a good starting point for structuring scenarios.
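Written in the Given-When-Then format, the sign-in scenario above might read as follows (a sketch of the style, not output from any tool):

```
Given an existing user with known credentials
When the user signs in with those credentials
Then the user is presented with the welcome page
```

Each line maps naturally onto one small, well-named method in the scenario class.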

Making Your Code Read Like a Scenario

The following example shows how the original scenario has been improved using a series of Extract Method refactorings in order to communicate the important steps in the scenario, namely: create a new user, sign in with known credentials and ensure that the user is presented with the welcome page.

    public class SignInAcceptanceScenarios {

        private WebDriver driver;

        @Before
        public void setup() {
            driver = new HtmlUnitDriver();
        }

        @Test
        public void shouldPresentKnownUserWithTheWelcomePage() {

            Credentials credentials = new Credentials("ryangreenhall", "password");

            givenAnExistingUserWith(credentials);
            whenUserLogsInWith(credentials);
            thenTheUserIsPresentedWithTheWelcomePage();
        }

        private void givenAnExistingUserWith(Credentials credentials) {
            User user = new UserBuilder().withCredentials(credentials).build();

            UserRepository repository = new UserRepository();
            repository.create(user);
        }

        private void whenUserLogsInWith(Credentials credentials) {
            browseToSignInPage();
            enterUsernameAndPassword(credentials);
            pressSignInButton();
        }

        private void browseToSignInPage() {
            driver.get("http://www.example.com/sign-in");
        }

        private void enterUsernameAndPassword(Credentials credentials) {
            driver.findElement(By.id("username")).sendKeys(credentials.getUserName());
            driver.findElement(By.id("password")).sendKeys(credentials.getPassword());
        }

        private void pressSignInButton() {
            driver.findElement(By.id("login")).submit();
        }

        private void thenTheUserIsPresentedWithTheWelcomePage() {
            Assert.assertEquals("Welcome", driver.getTitle());
        }
    }

Structuring scenario methods in this style allows the reader to quickly understand the behaviour expected of the application in a given scenario. Furthermore, because they read like written scenarios in terms of the problem domain, they can be used to assist conversations with QAs and Business Analysts, in addition to serving as useful documentation for future maintainers.

Written by Ryan Greenhall

October 19th, 2008 at 2:34 pm

Posted in testing

Mind Mapping Behaviour Driven Development

with 37 comments

I recently presented an overview of Behaviour Driven Development at my current client. To help gather my thoughts I created the following mind map, and have made it available here as others may find it useful.

mind-mapping-bdd

This mind map distills information from the following sources:

Written by Ryan Greenhall

October 16th, 2008 at 4:26 pm

Posted in BDD