Friday, October 16, 2015

Test Driven Development Environment for Javascript

Episodes 1-8

Even though JS frequents the GUI slice of an architecture diagram, there is ample functionality that can be unit tested. (For an overview of the testing pyramid, the Agile Thoughts podcast has a nice episode on this topic.) The JavaScript environment has a rich history of unit testing tools.  JSUnit is the earliest that I know of and was part of the initial wave of xUnit test frameworks in the early 2000s.  Due to the explosion of JS frameworks in the last three years, it's time to update our knowledge of what tools to use for doing TDD in JavaScript.

The tool chains I evaluated were: NodeJS + Karma + Jasmine versus NodeJS + Karma + Mocha + Chai + Sinon.

Here is what they do:
NodeJS is a JavaScript runtime environment that will run our test tools.
Karma pushes our tests into different browsers and automates test launching.
Jasmine versus Mocha + Chai are two choices of test library for organizing our tests and building assertions.
Jasmine versus Sinon are choices for mocking.

Below you'll see the same unit test expressed in three different ways.
Jasmine's "toBe" is a nice incremental improvement on typical xUnit assertions.

Mocha and using Chai's "expect"

Using Chai's expect is a bit better than Jasmine's as it allows building of "chains of purpose."

Mocha and using Chai "Should"
I wanted to use Jasmine since it includes a lot of functionality out of the box, as opposed to installing Mocha + Chai + Sinon.  But Chai's "should" is really superior as it prominently shows what is being tested (translate in this case) and gets to the point about what's expected.  Notice how you're less likely to develop "parenthesis blindness."  Here is a good overview of mocha, chai, sinon.  Let's talk about how to pull these tools together into an environment.

Install and setup NodeJS, Mocha, Chai, Sinon


1) NodeJS

Install a JavaScript runtime and package manager.  We'll use NodeJS's package manager (npm) to install the remaining tools.  NodeJS's runtime will be used to operate our tools, which are also written in JavaScript, on our workstation.  Make a work directory in which to install your JavaScript test tools.  From this location, you'll configure the test tools to find your source code.

2) Karma and Friends

I opted for Karma to run JavaScript code in an assortment of browsers in order to execute the tests in those browser environments.  Karma does all this automatically by running a NodeJS server that controls those browser environments.

Install the Karma cross browser execution framework:
npm install karma --save-dev
npm -g install karma-cli
"-g" is used to do a "global" install, meaning get class paths setup so you can conveniently execute it.

The steps in that overview are pretty close but miss on the dependencies, as they've changed since being authored, and "npm init" is unnecessary.  So here is what to do:
npm install X --save-dev, where X =>{mocha, karma-mocha, chai, karma-chai, sinon, karma-sinon, karma-chrome-launcher}
Said another way:
npm install mocha  --save-dev
npm install karma-mocha  --save-dev
npm install chai --save-dev 
npm install karma-chai --save-dev
npm install sinon --save-dev   
npm install karma-sinon --save-dev
npm install karma-chrome-launcher --save-dev

(If an "npm install ..." warns about missing dependencies, respond by installing those dependencies explicitly as instructed.)

3) Initialize karma

karma init
"Karma init" will interrogate about the below to generate a boilerplate karma.conf.js.  You'll want to tell it the following:

  • select mocha test framework
  • Add Require.js which we'll use for loading dependencies.
  • Select what browser(s) you want to test against.
  • For location of source files, I used the below. After you're up and running and have written a few tests, you'll want to change the source code settings to point at your code under source control:
    • js/*.js
    • test/*.js
    • lib/*.js 
    • Exclude js/main.js if you have a main.js so that your application under test doesn't get control of the JavaScript boot loader.  You want your tests to be loaded and executed rather than your application, right?
  • Accept the defaults for the rest.

4) example karma.conf.js

My file looks like the below.  Note especially the frameworks setting, as you'll need to add chai.  If you messed up during the init interrogation, you can correct it by hand.
Check karma.conf.js to see that you've got the correct file filtering set up.  As of this writing, the Windows install of Karma does the wrong thing and filters out all your tests.  You want included: true.
 // list of files / patterns to load in the browser
    files: [
      {pattern: 'test/*.js', included: true},
      {pattern: 'lib/*.js', included: true},
      {pattern: 'js/*.js', included: true}
    ],
Here is my entire config file on Windows.
module.exports = function(config) {
  config.set({
    // base path that will be used to resolve all patterns (eg. files, exclude)
    basePath: '',
    // frameworks to use
    // available frameworks:
    frameworks: ['mocha', 'sinon-chai'],
    // list of files / patterns to load in the browser
    files: [
      {pattern: 'test/*.js', included: true},
      {pattern: 'lib/*.js', included: true},
      {pattern: 'js/*.js', included: true}
    ],
    // list of files to exclude
    exclude: [],
    // preprocess matching files before serving them to the browser
    // available preprocessors:
    preprocessors: {},
    // test results reporter to use
    // possible values: 'dots', 'progress'
    // available reporters:
    reporters: ['progress'],
    // web server port
    port: 9876,
    // enable / disable colors in the output (reporters and logs)
    colors: true,
    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,
    // enable / disable watching file and executing tests whenever any file changes
    autoWatch: true,
    // start these browsers
    // available browser launchers:
    browsers: ['Chrome'],
    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: false
  });
};

Test the environment

Type: karma start
If you read the startup messages carefully, you'll see that it didn't find any tests to execute.

If "karma start" fails, then the karma.conf.js likely has an error.  Read the message and see if you can figure out what it's asking for, then use "npm install" to install what's missing or fix the problem in karma.conf.js.  If you get no error, it just runs the karma process.

If Karma is running, you'll see a browser window launch. This is what Karma does: it uses a NodeJS server and client (a server that can access your tests in the configured test directory, and a client in each browser in which you want to execute the unit tests).

Let's write a test:

describe("This test suite will fail", function() {
  it('should fail', function() {
    throw new Error('failing on purpose');
  });
});

As soon as the test is saved, the part of Karma that is watching your test directory for changes will grab that test and ship it to the browser(s) it's controlling.  After running the test in the browser(s), each browser returns a report to the Karma controller running in your DOS prompt, which writes out the results.  In the DOS prompt where you launched Karma, you should see the failure reported.

What's Karma, What's Mocha, What's Chai?

Karma runs in NodeJS and looks for and executes test ".js" files.  It loads each .js file and executes any "describe(...)."  Mocha, the test framework, provides "describe(...)" and "it(...)" and handles reporting results; in the test above, Mocha is called via the "it(...)."  Chai is the library for investigating test results using "expect(...)" among many other library calls.


The Karma-chai-Sinon plugin is a good way to get everything you need installed in fewer steps. I'll update the above with this improvement. Then I'll put a nice mocking tutorial here.


Karma tips:

Trouble shooting

When Karma executes against a changed test, it returns "Executed 0 of 0 ERROR"

This happened because my karma.conf.js had, in the files section, "exclude=true."  This problem is also documented at:

After "karma start" returns: Delaying execution, these browsers are not ready:

Recently, this happened when I tried to use RequireJS as a framework in my karma.conf.  When I removed this, it worked.  I don't need RequireJS.  Lots of people do.  I'm sure there is a way to get this to work.  Please comment if you know how to fix that problem.

Thursday, April 9, 2015

Know thy Most Productive Mode (part 2 of Having Great Standups)

When working with new Scrum teams, team members often don't recognize what is impeding them: even though something is obvious to me, to them it's business as usual.

Working with these new teams, I've stumbled upon a way to structure their thinking around this retrospection: Identifying the Most Productive Mode. Once the team knows what their most productive mode is, they can see what is impeding their ability to reach that state. Once they "see it" then they can do something about it such as bringing it up at standup.


Teams under stress will feel that cutting out some of their new development practices will somehow make them go faster. This is a natural reaction to fear--going back to the old way. This is why it's so important for each team member to intellectualize (know) why they are doing the practices and how *not* doing them will slow them down.

It's better to deliver as much code as you can with your new good practices and communicate what is keeping you from reaching your most productive mode:

Characteristics of productivity
Focus-- focus on one task at a time, getting the artifacts finished and checked in before dealing with distractions (email, phone, meetings, lunch break). Do this work with a partner (pair programming) who can bring immediate feedback, mentorship, and a different perspective.

Quick Feedback-- create feedback loops so you know ASAP whether something is right or wrong:
writing tests and code in baby steps (Test Driven Development);
executing the unit tests at least once every fifteen minutes, and recognizing you've bitten off too much to do in one step when you can't (a coverage report is a medium-slow feedback mechanism, but will be a constant one once we get it fixed so it gives you feedback every time you run the tests in JUnit); and
getting feedback from the product owner as soon as a story is done.

When something gets in the way of you reaching your most productive modes, something is impeding you. (Examples: the unit tests take more than 30 seconds to run, the IDE is difficult to setup into a productive state, it's hard to setup a pairing station, people/email/IM keep taking you out of your most productive state, you've been pairing too long and you want to disagree with everything your partner is saying so you need a break, you come into work too tired to interact with people because you were up late developing code at home/work.) We're human, sometimes these can't be avoided. If most of the time you can't reach your most productive mode, then something is wrong.

* You should know what is on the sprint backlog. If you're confused about this, then something is dearly wrong! During standup, if you can't see how what someone is saying maps back to the backlog, then ask. If they don't know, then solve the problem: stop working on stuff that's not on the backlog.
* Get those definitions of done for a Sprint and a Release in a clear, easy to understand state. If it has 20 separate items and you need a lawyer, it doesn't help with clarity.
* You should know your velocity by sprint planning for the next sprint.
* At standup, the team members should all understand at some level what each person is saying. If you don't have a clue, say "what's that about?" so you know. Sometimes you'll realize it isn't helpful for person A to recount a blow-by-blow of a meeting about some other concern that has little to no impact to your project. In that case, ask them to parking lot it or follow up after standup with the one other person they are talking to.
* Standups > 15 minutes should be an anomaly. If they aren't, then something is wrong. Keep standup focused on the team talking to each other about the three questions. Make it clear when standup is over and you're going to shift into something else, so people can leave or understand that what you're about to go into is a hallway meeting. Otherwise, standups turn into 45-minute hallway meetings every day. Immediately raise impediments during standup about why standup is going too long (trying to do release planning, falling into a 5-minute discussion to solve a problem).
* Use the fist of five. It's simple and effective.
* Come to standup prepared. You may need to make some notes. You may need to meet with a subset of the team *before* standup (the ones which could be a 1-2 on a fist-of-five) and have those long discussions so you have some alignment so standup goes smoother.
* Use big visible charts. If your team has a problem, then make a chart. If your standups run too long, make a chart of the time. If the team sees the chart but refuses to adjust despite that, then you need another approach. (Setting an egg timer for two minutes and passing it around to each speaker really works because it provides a quick feedback mechanism. By day three most of the team will have figured out how to communicate effectively within that constraint.)

Wednesday, February 18, 2015

Automated Tests for Database Procedures

Why not future-proof your database procedures just as the middle tier and front-end developers have been future proofing theirs? Not only can automated unit tests be built for each procedure but Test Driven Development (the practice of writing a simple unit test that fails and forces you to implement some simple procedure to satisfy the failing test, and once the test passes you enhance the test or add a new unit test to grow the procedure further) can be done as well.

Why Unit Test Procedures?

Every time there's a change in a procedure or schema, unexpected errors can happen. To mitigate disaster, you've probably been doing some kind of testing: manual testing, automated testing, ...
An early adopter's mistake is to test every possible case you can imagine. For unit testing, this isn't your test target. Your goal is to have every line of a procedure be necessary to pass your test procedures. Some lines of code get tested as a side effect, and that's OK.  When constructing a unit test, focus on testing only the one procedure you're targeting.


Without unit tests whose job description is to test each stored procedure, you're gambling that the testing in other tiers will uncover problems. Why live with this uncertainty? If you have a test suite that confirms every line of every procedure works as expected, you'll have supreme confidence that your next release is truly an improvement (rather than new features with new bugs).

Fast feedback

Since we're writing unit tests that test each procedure in isolation from other procedures (or at least as isolated as possible), the tests will execute quickly. Likely you can test 100 procedures in a minute, which scales to thousands of procedures in thirty minutes. In fact, if you run these tests continuously (triggered by code check-ins) or whenever you feel some uncertainty, you can execute the test suite to ask the question, "do I have a regression?" and get back the answer in less than a minute.

The tests "tell you" the regression's location

With individual unit tests (test procedures) designed to test a stored procedure in isolation, you'll have signaling that shows which procedures are fine and which are failing. This means once there is a regression, the tests will indicate approximately which procedure or procedures to examine.

Tools for Unit Testing Database Procedures

Here are a few that I've either personally used, or whose documentation I've read through and that meet my minimum feature list (asserts, pre-test execution routines, post-test execution routines, outputs a report, allows easy execution of the entire test suite).


(PLUnit looked promising but isn't maintained any longer, which makes it unusable since it's closed source.)

Oracle SQL Developer has unit testing tools built in (view->unit tests).  It's a bit complicated.  The learning curve to use this tool is higher than PLUnit.  Here are some links about this tool:

PLUTo is a minimalist setup.  utPLSQL has a rich API.  The Oracle one is hard to get into because Oracle's tech writing is so dull and uninspiring, and the last time I tried to use it, the tool was disabled for some reason (license issues?).  My favorite is utPLSQL.  It gets popular mention on Steven Feuerstein's (of Oracle) blog as well.  You can install it in minutes and start with the examples.


T.S.T. Sql is a nice simple tool with some getting started videos and an InfoQ article.

Common Test Environment Configurations

To execute automated tests for a DB, you need a DB instance to hold the procedures and data for the procedures to execute against.  As for data/schema, this should be created (inserted) as part of the automated test.  To answer the question of how many DB instances, there is a range to this answer:  you need enough. :-)

Usually you need:
  • 1 instance for continuous integration, since you're going to want to execute your automated tests continuously.
  • At least one instance for someone to develop automated tests.  Typically, every developer creates an instance on their own system whenever they want to execute, debug, or create automated tests.  If for some number of reasons you can't have an instance for each developer, then you'll need to manage them as a pool.

Test Design

The above tools add some supporting procedures so you can focus on creating automated tests in the form of test procedures.  The job of unit tests is to confirm that the code under test operates as the developer intended.
Good automated unit tests have the following characteristics:
  • each test executes in microseconds
  • independent of execution of other tests (said another way, you should be able to execute the tests in any order you wish)
  • prefer having many simple test procedures to test a DB procedure (which are easy to understand and maintain) rather than a few test procedures that check many things.
  • use test data which is as simple as possible--just enough to make the procedure under test happy.
  • test data should be created by the test itself
  • test procedure names should express: what procedure they are testing and the test scenario
  • each of DB procedure under test should be tested in isolation (without dependency on other procedures)
System level tests are supposed to answer the question: is feature XYZ working as the user expects?  These tests will work with large data sets.  When developing system level tests:
  • each test executes seconds, minutes, or longer
  • independent of other tests
  • destructive tests should undo their "inserts" so as not to affect other tests (could be implemented by either refreshing the data afterwards or not committing the transaction).
  • prefer creating tons of non-destructive tests and few destructive tests so you don't need to often refresh the data which is slow
  • keep destructive tests which require a data refresh in a separate test suite so you have a "fast" suite for continuous builds and a "slow" suite for hourly or nightly builds.

Test Data Creation

Ways that test data is created are: 
  • restoring to the database a set of test data.
  • sql script that inserts the data.
  • sql code that creates test data that is used by your test procedures
Your DB instance creation along with schema should be automated as well so that any developer on the team can easily create a DB test environment and so your continuous integration system can recreate the test environment each time it executes.