Wednesday, December 3, 2014

Recommended Behavior Driven Development (BDD) Toolsets

As a consultant who works with many different teams on mostly .NET, Java, and JavaScript projects, let me recommend my favorites.

Cucumber versus JBehave
I don't have a clear favorite between these two.  The two tools seem to be at about the same level of maturity (or immaturity).  Both allow you to use JUnit as a test runner, and both support Gherkin syntax.  I give the edge to Cucumber in that it needs a little less hand-holding with configuration files to get things up and running.

Cucumber
Here is my favorite Cucumber setup which I'm using (as of Mar 2017):

If you just want to download cucumber jars the traditional way:
http://repo1.maven.org/maven2/info/cukes/cucumber-java/1.2.5/
http://repo1.maven.org/maven2/info/cukes/cucumber-junit/1.2.5/
http://repo1.maven.org/maven2/info/cukes/cucumber-core/1.2.5/
http://repo1.maven.org/maven2/info/cukes/gherkin/2.12.2/
http://repo1.maven.org/maven2/info/cukes/cucumber-jvm-deps/1.0.3/
http://repo1.maven.org/maven2/info/cukes/gherkin-jvm-deps/1.0.2/
http://repo1.maven.org/maven2/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar
http://repo1.maven.org/maven2/junit/junit/4.12/junit-4.12.jar

The above will let you develop and execute feature files and BDD automation.  It works great with Maven: add these external jars to your build so you can write step definitions in Java.  But no sane person would do this without a nice syntax-highlighting editor and "open declaration" goodies plugged into Eclipse.
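If you'd rather let Maven pull the jars for you, a pom.xml fragment along these lines matches the versions listed above (cucumber-core, gherkin, and the two -deps jars come in transitively):

```xml
<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-java</artifactId>
  <version>1.2.5</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-junit</artifactId>
  <version>1.2.5</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.12</version>
  <scope>test</scope>
</dependency>
```
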

Natural

Natural is a feature file editor that uses code assist to fill in Given, When, and Then, and lets you jump to the step definition.  Install Natural from the Eclipse Marketplace, found in Eclipse at Help->Eclipse Marketplace.  Natural itself wasn't tagged correctly in the marketplace, so you'll find it by searching for "cucumber," "jbehave," or several other BDD-related tags.  Install it.

Test it out by creating a plain text file with a .feature extension.  I find that when creating a new file with a ".feature" extension, I need to close the tab and re-open it so Eclipse hands off editing of the .feature file to Natural.
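Any minimal feature file will do for the smoke test; here's a made-up one:

```gherkin
Feature: Greeting
  As a user, I want a friendly greeting

  Scenario: Say hello
    Given a user named "Alice"
    When she opens the app
    Then she is greeted with "Hello, Alice"
```

Natural should syntax-highlight this immediately, and each step should get a caution mark until a matching Java step definition exists.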

What Natural does for you when editing a feature file:
  • content assistance in choosing Steps, 
  • F3 (open declaration) which shows the Java definition for the step (assuming one exists), 
  • Outline view, 
  • syntax highlighting, and
  • caution marks for Gherkin steps that don't have a Java definition.




Manual installation
Some time back, while traveling in Asia, the Eclipse Marketplace and the Natural repository kept timing out, so I had to install manually:
https://github.com/rlogiacco/Natural/wiki/Installation-Guide
  1. In Eclipse, install Xtext from this location: http://download.eclipse.org/modeling/tmf/xtext/updates/composite/releases/
  2. Download an Eclipse archive (.zip) of Natural from this location: https://github.com/rlogiacco/Natural/releases
  3. In Eclipse, under Help->Install New Software, click "Add...", this time select "Archive", and select the path to the zip file downloaded in the step above.

But Natural alone won't run your feature tests.  You need to install Cucumber-Eclipse or, as I prefer, run them via "cucumberized" JUnit test cases with @RunWith(Cucumber.class).

Cucumber-Eclipse
I don't recommend this tool, as I've never gotten it to work over the last 12 months (2016/2017).  But if you want to give it a go: as it's not in the Eclipse Marketplace for whatever reason, use Help->Install New Software, click "Add...", and enter the Cucumber-Eclipse update site URL.
Select Cucumber-Eclipse and install.

What Cucumber-Eclipse should do for you
The context menu will have "Run feature tests" and "Debug feature tests," which should use the Cucumber CLI to execute the selected feature file, showing the test results in the Eclipse Console.  As of 2016/2017, I've not gotten this feature to work.  I think it can be made to work by fiddling with run configurations, but I've given up on it and execute BDD tests via cucumberized JUnit test cases and JUnit suites.  (Naturally you can do the same with TestNG.)

Good Organization and Configuration
Ask yourself how many features you'll build this year.  Then ask yourself what those categories look like.  Now go into Eclipse and make a hierarchy, something like this: features-><category-1>, features-><category-2>.

So sprint by sprint, add your feature files into that hierarchy and grow the hierarchy as needed.  I suggest representing the hierarchy as a package in the source code alongside the application it's testing.  If your step definitions have a strong relationship with their features, then put them alongside the feature files.  If there isn't such a relationship, then don't.  (People have different feelings about 'global' namespaces for BDD steps.)

source/java/com/my/awesome/app
source/java/com/my/features

Most teams hook their JUnit runner to their BDD tests (using JUnit's @RunWith(...)).  Put that test class in the features directory and make it responsible for running all your features.
source/java/com/my/features/RunFeatureTests.java
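With the info.cukes 1.2.x jars, that runner is just an annotated, empty class.  Here's a sketch (it won't compile without the Cucumber jars on the classpath, and the package and glue paths are this example's assumptions; adjust them to your tree):

```java
package com.my.features;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Runs every .feature file found under com/my/features on the classpath;
// step definitions ("glue") are discovered in the same package.
@RunWith(Cucumber.class)
@CucumberOptions(
    features = {"classpath:com/my/features"},
    glue = {"com.my.features"},
    plugin = {"pretty"})
public class RunFeatureTests {
    // intentionally empty: the Cucumber runner does all the work
}
```
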

Using the above organization, follow the principle of "keep things that relate together" and place the step definitions alongside the feature files:
source/java/com/my/features/Foo/Foo.feature
source/java/com/my/features/Foo/FooSteps.java
source/java/com/my/features/ShoppingCart/Purchase.feature
source/java/com/my/features/ShoppingCart/PurchaseSteps.java
source/java/com/my/features/ShoppingCart/TakePayPal.feature
source/java/com/my/features/ShoppingCart/TakePayPalSteps.java
source/java/com/my/features/ShoppingCart/TakeVisa.feature
source/java/com/my/features/ShoppingCart/TakeVisaSteps.java

If you have global step definitions (don't worry about this when starting out), then put that library in the features directory.  If you're driving a UI, you'll need a place to put your page objects too.  (Please avoid testing through the UI unless you have to.  Also, follow the emergent design principle and develop your page objects as you need them.)
source/java/com/my/features/global_steps/*Steps.java
source/java/com/my/features/page_objects/Login.java
source/java/com/my/features/page_objects/ProductPage.java

SpecFlow
For .NET, all I ever seem to use is SpecFlow, and it works well enough.  Here are some pretty good directions.  (FYI, don't believe him that this will work for Express versions of VS.NET, as Microsoft turns off useful things like plug-in installation and debugging.)  The main thing is that you need to install two things: the SpecFlow libraries, and the SpecFlow templates (for editing .feature files).

Tuesday, July 8, 2014

Don't forget to AGILE your Test Plan when transitioning from Waterfall to Scrum

Goal: Repeatable quality through automated tests!

But we need people to develop them. Traditional organizations call these people testers.


Tester-> Automated Tests!

In traditional organizations, testers rarely do this because of the traditions of Waterfall.

Tester-> Test Plan -> Automated Tests!

And this is where the trouble starts. Due to the divide-and-conquer, handoff approach of Waterfall, it made sense to split the role of software development into programmer and tester (I'll not debate the pros and cons of doing this; the Agile comic SCRUM NOIR: A Silo to Hell! does a nice job). But in the Agile context, these testers struggle to integrate their work with a Scrum team: they can't write automated tests until late in the Sprint. This is a big problem when moving to a practice such as Acceptance Test Driven Development, where automated test development starts on the first day of the Sprint.

Iterative Testing Problems:
  1. Testers don't plan together with the developers during Sprint planning.
  2. Testers never check-in an automated test on the first day of the Sprint.
  3. Test Planning isn't done continuously (a little bit here, a little bit there, and then executed, then repeat) but instead is an upfront event that takes up 50% or more of the Sprint.
  4. Testers complain they don't have enough information to get started.
The above happen in organizations in transition to Agile, where people carry old habits and behaviors into their new roles, even if the behaviors aren't optimal. The Waterfall process was planning heavy, and everyone (testers, developers, ...) was encouraged to spend a few months writing documents and reviewing them, creating a false sense that we "planned well." (The security was demonstrably false, since the plans always changed between waterfall phases.) Since test and development are often in separate organizations, they coordinate around events set on calendars to deliver artifacts: Test Plans, among other things. The tester is responsible for the Test Plan and has dependencies on pretty near everything in order to produce it (development, architecture, business requirements, ...).

This habit is antithetical to the Scrum process since a Scrum team is doing incremental delivery of product and doesn't know until Sprint Planning what will be done during the next Sprint. This forces the Testers to produce Test Plans under immense stress if they preserve old habits.

This results in:
  1. Testers hating Agile.
  2. The Test part of the organization hates Agile and starts working against change initiatives.
  3. Testers complain that they have to drive everything because the tester's need becomes the event rather than a scheduled milestone.
  4. Many preconditions to creating a Test Plan: Testers demanding a lot of input documentation from other roles (design specs, arch specs, use cases, business case studies, hardware diagrams, call-flow, process flow,...) before they feel they can complete their Test Plan.
  5. Equating incremental testing with incomplete testing: Testers consider their Test Plans incomplete because they are only supposed to think about testing the functionality being delivered that Sprint rather than the entire release.
Points 1 and 2 are effects which are mitigated by showing testers how to be successful. Point 3 is discomfort with the culture change from Waterfall, where the process drives the events, to an Agile process, where individuals drive the process. Testers and those who support Test Plan creation get used to it after a few Sprints. Point 4 is a "process over working software" habit from Waterfall: if testers (or their management) feel rushed, they can buy more time by demanding more process and documentation from others as dependencies. Point 5 requires another change in thinking, which will happen after doing a few Sprints and witnessing that small, high-quality subfeatures sum up to a large, high-quality feature.

If you change your process without changing how you do things, then nothing's going to change (except the labels). So points 3 and 5 are natural to "storming" (Tuckman model) and must be allowed to happen. Once we get to the later stages, "norming" and "performing," in say 3 Sprints, the dangers of change rejection (points 1 and 2) will go away. Point 4 is a deep culture change which takes time, until people understand the Agile Manifesto values and principles, namely that working software is the primary measure of progress (automated tests are working software) rather than process checkpoints, milestones, and documentation completion.

No matter what, an organization in transition will have storming and that is healthy if conflicts are allowed to be exposed AND resolved (healthy stress/conflict drives change). To help "storm well," here are some activities.

How to Agile your Test Plan

The Scrum team containing programmers and testers must re-invent how they work together, be open to new ideas, and ditch some bad ones (the heavyweight, comprehensive Test Plan). Each story needs its own Test Plan. Look at solving this problem at two levels: Test Plan format and timing.

Making the format you already have, lightweight (format)

Rather than introduce any additional processes or methodologies, make what you already do lightweight. If your usual Test Plans are 10-20 pages for features completed during a multi-month release, try to do the same plan, but on one sticky note for one User Story.

How can one sticky note get results as good as multipage test plans? We're leveraging the fact (and would be foolish not to) that within a few days we'll be building small features, so we don't need to document so heavily.

Agile Context (and why Agile development really is different than Waterfall):
  • a lot of information is successfully retained in our tacit knowledge since we are working on a small team that interacts daily,
  • we are dedicated to one sprint backlog and become experts in its execution,
  • we'll start acting on our plan within days,
  • we're working on sub features and only need to test the sub features we are doing this Sprint, and
  • each User Story is independent of the others so each must be tested independently
  • the majority (if not all) of the tests will be automated and checked in and will be our best documentation and reporting system
If you argue that your automated tests cannot act as your documentation, then you'd better work on your test design because you've got a problem that must be resolved.

Limber the mind

To change the results of your work you need to change yourself. A lot of what stops us are habits learned in the Waterfall context, which need to be shed so we can develop sensible ones for the Agile context. This problem affects anyone changing work contexts: I'm a science fiction author who spends hours getting my words right, and I sometimes found myself spending hours getting an email right. That is pure waste for a one-off email. I had to learn to reel myself back when writing technical planning documents or emails, since polished prose isn't necessary there. It took conscious effort to shift gears, and doing pair work with another person helped.

To do so I had to:
  • decide it was important to change,
  • be open to new ideas, in fact, be open to trying something so crazy it couldn't possibly work and then go for it.
James Whittaker of Google has a great facilitation style called The 10 Minute Test Plan which breaks down mental barriers that prevent writing a good and quick Test Plan.

Acceptance Criteria (format)

Acceptance Criteria are the most lightweight and commonly used of all Test Plan formats: a simple bullet list of what the application should do before the PO will accept the User Story.
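For instance (a made-up story):

```text
User Story: As a shopper, I can pay for my cart with a credit card.

Acceptance Criteria:
  • a valid Visa or MasterCard is accepted
  • a declined card shows an error and leaves the cart intact
  • a receipt email is sent after a successful payment
```
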




Acceptance Criteria can also take the form of wireframes, call flows, non-functional requirements, ....
Each User Story should have acceptance criteria before going into Sprint Planning. More can be added at any time, but it's important to have a rough list before Sprint Planning or the meeting will become overloaded and slow.

4D Analysis (format)

4D Analysis was introduced to one of my teams by my friend and fellow coach XuYi. 4D adds three dimensions of analysis on top of simply using Acceptance Criteria. Adding more analysis isn't necessarily a good thing, because that's what got us the 20-page Test Plan. However, 4D is still one page, and maybe your team will find working through the first 3 Ds helpful before getting to the Acceptance Criteria. The 4D analysis is attached behind each user story.
The Process dimension is further broken down into 3 types:
  • User workflow (how it works from a User's perspective)
  • Business process (how it works from a Business Analysis perspective)
  • Technical flow (system engineer or architecture view)
The idea is that for an individual User Story you'll need one of the above process types; rarely will you need more than one.

Strangely enough, the feedback I've received from teams doing 4D Analysis is that despite the single sheet of paper, the amount of analysis is greater than they were used to from their traditional multipage test plan. Customize the 4D analysis to fit your team's needs.

BDD (process, format, and technology)

Get your PO adding Behavior Driven Development scenarios to your User Stories, or at a minimum work with your PO and get them written down. Good BDD tests make wonderful test plans AND automated tests AND traceability documents!

Scenario Outline: playing a playlist removes invalid entries
  Given there exist bad entries in <playlist>
  When trying to play this playlist
  Then the invalid playlist entries are removed

  Examples:
    |playlist|
    |Never Played|
    |My Top Rated|
    |Whole Library|
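The step definitions behind a scenario like this are plain Java methods with regex annotations. A sketch against the Cucumber 1.x (info.cukes) Java API; PlaylistPlayer is a hypothetical application class and the exact step wording is illustrative, so this won't compile without the Cucumber jars and your own system under test:

```java
package com.my.features;

import static org.junit.Assert.assertTrue;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class PlaylistSteps {

    // PlaylistPlayer is a made-up class, shown only to illustrate where
    // the system under test plugs in.
    private final PlaylistPlayer player = new PlaylistPlayer();
    private String playlist;

    @Given("^there exist bad entries in (.+)$")
    public void thereExistBadEntriesIn(String playlist) {
        this.playlist = playlist;
        player.addInvalidEntry(playlist, "missing-file.mp3");
    }

    @When("^trying to play this playlist$")
    public void tryingToPlayThisPlaylist() {
        player.play(playlist);
    }

    @Then("^the invalid playlist entries are removed$")
    public void theInvalidPlaylistEntriesAreRemoved() {
        assertTrue(player.invalidEntries(playlist).isEmpty());
    }
}
```

Each row of the Examples table runs the scenario once, with the row's value substituted into the Given step.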

More BDD articles: Well Written BEHAVIOR Driven Development Scenarios, and BDD Practices that Maximize Team Collaboration and Reduce Risk.

Timing

When should Test Plans be created? The two strategies are:
  • Create Test plans during the Sprint
  • Create Test plans before Sprint Planning.
During Sprint Planning, the testers and developers include in their estimates the effort to create Test Plans and implement tests. But they need to learn to move fast and test like a jazz band rather than a symphony, which has a conductor and a score organizing everything in advance.

Lightweight test plans can be created before the Sprint by leveraging Backlog Grooming. Here's an agenda that breaks backlog grooming into the following: a 30-minute kickoff, offline (outside of meeting) collaborative work, and a 1-hour grooming meeting.

Here's an example calendar for a three week Sprint:

MTWTF MTWTF MTWTF
         ^- Kickoff Preparation for Grooming
            ^- Groom results for 1 hour

Constraints:
  • a team spends no more than 10% of a Sprint preparing for the next Sprint
  • other than the Sprint Demo and Retro, avoid meetings during the last 2 days of Sprint crunch time
  • have Grooming far enough in advance of Sprint Planning to fix problems uncovered in Grooming
Kickoff Agenda (team, PO, SMEs in attendance) < 30 minutes
  • During meeting
    • PO brings proposed Sprint Backlog
    • Team members select stories to prep for grooming and are encouraged to collaborate with other team members
    • Decide what stories need SME deliverables/support and make that status visible
  • Offline, to be finished by Grooming Meeting start (Monday 2PM)
    • Fill out prep document
      • User Scenarios {Functional flow, User Experience behavior} (done by: programmer or tester)
      • Y/N needs SME deliverables (architecture view, etc.)  (roles: tester, SME)
      • Refine User Story Acceptance Criteria (roles: everyone)
      • Update Assumptions (roles: everyone)
    • PO and SME will visit each team member (individuals and interactions per the Agile Manifesto) before Grooming Meeting and see how they can help.
    • ScrumMaster will make sure the process is successful and that everyone comes to the Backlog Grooming meeting prepared.
Grooming Meeting (1 hour): review the results of the prep work as a group.
A team reviews User Stories and the grooming prep doc, standing up during the action moments for maximum collaboration. Stand-up meetings are 30% faster than their "sit down" counterparts.

During Sprint Planning, the team will adjust further and are expected to refine further even during the Sprint.

Summary

Goal: Automated Tests!

But we need people to develop them.

Ideal:
Tester->Automated Tests!

Usually, teams do at least a lightweight Test Plan: 
Tester-> Acceptance Criteria-> Automated Tests
Tester-> BDD-> Automated Tests
Tester-> 4D Analysis -> Acceptance Criteria -> Automated Tests 

Some teams need more analysis. Find a way to do it using only 10% of your current Sprint to plan for your next Sprint:
Tester->Test Plan->{Acceptance Criteria, BDD, 4D Analysis}-> Automated Tests

Remember you're doing Test Plans to have automated test cases to defend your product from regression. Your Test Plans should be designed to serve this purpose. Having large Test Plan documents was never the goal. If you can produce 2-6 automated test cases from a one page Test Plan, and spend no more than 10% of your Sprint doing Test Plans, then you're on the right track.

Monday, May 26, 2014

Why STATIC is the keyword your mama warned you about

In the beginning, there was machine code and all was good. Then there was assembly and, after that, a progeny of later cousins such as FORTRAN, COBOL, C, and BASIC, and these cousins had a GOTO command. They were procedural languages, and GOTO allowed the programmer to command the procedure to immediately jump to a different location.

Later, as more software was developed, programmers noticed that *understanding* what was happening during a GOTO/GOSUB was easily lost. GOTO was eventually recognized as a face of evil, so it was better to use procedures, which have a name, and this name provided meaning.  Adding to the bargain, procedures defined variable scope, so we didn't have to worry about creating unique variable names across the whole of the application. Everything was packed into a package of meaning, parameters, procedure, and scope.

Although programmers rejoiced, those who had been using assembly had trouble adapting to the new paradigm, so the new languages left a crutch behind in the form of GOTO. Many still built code with GOTO and big comment blocks to explain the intent of their GOTO blocks. As time passed, habits evolved and people made their GOTOs GTFO.

OOPS!

Later, actually much later in computer years, some smart people wanted more sophisticated control of variable scope and designed object oriented languages. These languages allowed grouping procedures together with scope and classifying the groups with meaningful names.  These languages dropped GOTO and its cousin GOSUB because, by this time, it was accepted that they did more damage than good.

But friends, nothing shakes confidence in a design more than seeing static. Unlike previous keywords such as GOTO and GOSUB, static begets more static design. Like a cult of evangelicals, static methods insist on converting everything they touch into static. One static method begets another, and that one begets another, until as far as the eye can see are static fields and static methods in a hulking procedural design that shuttles behaviorless data structures through static methods with long parameter lists, each static method "switch and casing" to discover state so a valid response can be made.  Where OO design encourages co-locating state with related procedures in a way that minimizes scope, static data structures global to the process are bandied about. They are convenient for communicating state between a few static methods, but they turn into a "what's good for the goose is good for the gander" mess shared across all kinds of activities, increasing coupling and making the application a devil to add features to or unit test.

When can using STATIC be righteous?

I most often use "final static" to create constants. Outside of that, I think really hard before I type the word static.  How do you know you're in trouble?
  • If at the top of many of your class declarations you have STATIC initialization blocks.
  • If at any moment in your code you could have a reference to a STATIC class that couples you to some heavy I/O services (DBs, network, filesystem) that'll screw you when you want to write some unit tests.

Static methods force a procedural design, a style that is as 1970s as brown wallboard, orange shag, and bitchen fros. And some dirty hippies always hitch a ride: static variables (as opposed to constants) and big global data structures. Although it's cool to throw on bell bottoms with an Atari T and tuck a comb in the fro, fatty procedures with comment blocks every 100 lines are about as cool as banging your junk against the knob-ridden dash of a classic Chevy Impala. It's only funny when it happens to somebody else. So unless all other avenues are exhausted, don't use static. And even then, minimize its linkage by using the Singleton pattern, which is more unit test friendly.
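To make the contrast concrete, here's a minimal sketch in Java (class names like PriceLookup and StaticDb are invented for illustration): a static-coupled version, shown as a comment, next to an instance-based version where the dependency can be swapped for a fake in a unit test.

```java
import java.util.HashMap;
import java.util.Map;

class PriceLookup {
    // Fine use of static: a true constant. Immutable, no hidden state.
    static final long TAX_PERCENT = 8;

    // The trouble pattern (shown as a comment): a static method reaching
    // into a static, process-global resource. Every caller is now coupled
    // to that resource, and a unit test can't substitute a fake.
    //
    //   static long priceCentsOf(String sku) {
    //       return StaticDb.query(sku);   // hits a real database
    //   }

    // Instance-based alternative: the dependency is passed in, so a test
    // can hand in an in-memory fake instead of a live database.
    interface PriceSource {
        long priceCentsOf(String sku);
    }

    private final PriceSource source;

    PriceLookup(PriceSource source) {
        this.source = source;
    }

    long priceWithTaxCents(String sku) {
        long cents = source.priceCentsOf(sku);
        return cents + cents * TAX_PERCENT / 100;
    }

    public static void main(String[] args) {
        // In a unit test, a fake source stands in for the database.
        Map<String, Long> fakeDb = new HashMap<>();
        fakeDb.put("book", 1000L);
        PriceLookup lookup = new PriceLookup(fakeDb::get);
        System.out.println(lookup.priceWithTaxCents("book")); // prints 1080
    }
}
```

The static version welds every caller to StaticDb; the instance version keeps the scope of that coupling to one constructor argument.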

And remember to tell Mom, "Thanks." She'll love you even if you use static, but your co-workers and sustaining team will love you a LOT less.

Other Sources

Technically correct article:
http://stackoverflow.com/questions/7026507/why-are-static-variables-considered-evil

Technically correct AND entertaining:
http://blog.goyello.com/2009/12/17/why-static-classes-are-evil/