Monday, July 1, 2013

BDD Practices that Maximize Team Collaboration and Reduce Risk


Maximizing team collaboration and reducing risk

Guiding Principles: KISS (keep it simple, silly) and incremental delivery

I’ll think about the future but only implement what I need this sprint, this day, this hour. I do that so I can start my implementation at the simplest level (but not too simple) and grow the feature minute by minute, hour by hour.
Because I work this way, I can check in often. Frequent check-ins give other team members visibility into what I'm doing, and they make integration (merges) cheaper for the team and for me. They also let me pull the latest changes from the source tree, so I receive new work from the team (and get visibility into what they're doing). Checking in often reduces risk and increases code reuse: I can't reuse a teammate's latest shared library if it isn't checked in. And if I get pulled away from the work (personal or fire drill), another team member can finish the task or story so the team is successful. That teammate has insight into my work because they've seen my many changes coming into their development environment, heard what I'm doing at a high level during standup, sat nearby and overheard me working with my pair, or maybe even paired with me.
Because the team is checking in often, we are releasing our best design efforts and reducing latency for others to review and use our work.

Test Strategies

Guiding Principles: Tests run independently of each other, give timely feedback, are maintainable, are deterministic, and test a feature only once.

It’s best that EACH test runs in a clean environment: clean data, clean server state, clean client state. This may not be practical if it makes the tests take too long to give timely feedback.
Common strategies are any combination of:
  • Test login only once and have the other tests skip login: share the Selenium driver across tests, or have a way to generate a session token so your tests don’t need to log in.
  • Don’t let data-destructive tests commit to the DB, or find a way to get the data "reset."
  • If you don’t mind your tests running slowly in some contexts, then make the above "workarounds" configurable so you get fast feedback in the dev branch, but run the "slow tests" that log in and clean all the state/data in the int branch (a sketch of this follows the list).
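As a sketch of how that configuration might look, assuming JUnit-style tests and Selenium WebDriver (the tests.cleanSession property, DriverFactory, and LoginHelper are invented for illustration):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public final class DriverFactory {
        // e.g. run with -Dtests.cleanSession=true in the int branch
        private static final boolean CLEAN_SESSION =
                Boolean.getBoolean("tests.cleanSession");

        private static WebDriver sharedDriver;

        public static synchronized WebDriver getDriver() {
            if (CLEAN_SESSION) {
                return new FirefoxDriver(); // fresh browser; the test logs in itself
            }
            if (sharedDriver == null) {
                sharedDriver = new FirefoxDriver();
                LoginHelper.login(sharedDriver); // log in once; later tests reuse the session
            }
            return sharedDriver;
        }
    }

    // hypothetical helper that encapsulates the one-time login
    final class LoginHelper {
        static void login(WebDriver driver) {
            driver.get("https://example.test/login"); // invented URL
            // fill in credentials and submit here
        }
    }

On the dev branch the shared, already-logged-in driver keeps feedback fast; on the int branch each test pays the login cost but starts from a clean client state.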
Managing Data:
  • Maintain a data dictionary: a single point where we declare what data in the DB we depend on. Three columns: a shorthand description that’s used in the code, the database id, and notes. The data dictionary reserves data and signals to the viewer that they shouldn’t write to or change that data, as there are tests relying on it.
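One way to keep the dictionary right next to the code is a small constants class; here's a minimal sketch (the entries are invented examples):

    // Each entry reserves a row that tests depend on: shorthand, database id, notes.
    public enum TestData {
        GOLD_CUSTOMER(10042, "read-only; checkout tests depend on it"),
        EXPIRED_ACCOUNT(10077, "do not modify; login tests rely on the expiry date");

        public final long databaseId;
        public final String notes;

        TestData(long databaseId, String notes) {
            this.databaseId = databaseId;
            this.notes = notes;
        }
    }

Steps can then reference TestData.GOLD_CUSTOMER.databaseId instead of a bare id, and a reader can see at a glance which rows are reserved.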

BDD in your Sprint

Behavior Driven Development implies that something is driving development. That something is failing or pending BDD tests. Immediately after Sprint planning, get those failing tests checked in and executing so that the failing/pending results are visible on your continuous build monitor. Why? So everyone can casually see the status of your project, just as you do when your commute takes you by a construction project.
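For example, assuming a JBehave-style framework (which the @Alias mention later in this post suggests), a step can be checked in right after planning and marked pending; the scenario text here is invented:

    import org.jbehave.core.annotations.Pending;
    import org.jbehave.core.annotations.Then;

    public class ReportSteps {
        @Pending
        @Then("the monthly report totals match the ledger")
        public void reportTotalsMatchLedger() {
            // intentionally unimplemented: shows up as PENDING on the build monitor
        }
    }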

Three levels of being driven:

1. Immediately get the entire sprint backlog's features checked in as pending tests. Then, before each feature is finished, implement the Steps to prove the feature works.
Risk: Too much automation work is left for the end of the sprint and we deliver features without automation. We don't discover requirements gaps early in the sprint because automation is postponed.
Value: Make visible what work is accomplished versus pending. Doing automation before implementation tests your understanding before you do it all wrong. :-)

2. Immediately get the entire sprint backlog's features checked in as pending tests. Next, implement the Steps to prove each feature isn’t working yet (imagining the UI elements are built, coding to IDs or button names that you make up in your head; see the Step sketch after this list). As each feature is implemented, its tests should pass.
Value: if you can automate the tests, you've proven a deep understanding of the requirements, or you've revealed problems early in the sprint when there is time to fix them through collaboration.
Also, you've made a lot of status visible on your build monitor: pending automation versus automation completed versus features completed.

3. At project start, enter all the pending features in the product backlog. Then, sprint by sprint, stakeholders see progress toward those features as the teams use level 2 above; as time passes, features are split or removed, or new ones are added.
Value: Makes visible how the project is doing at the release level.
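Here's what a level 2 Step might look like before the UI exists, assuming JBehave and Selenium (the scenario text and the "save-order"/"order-list" element IDs are invented; agree on the IDs with whoever builds the UI):

    import org.jbehave.core.annotations.Then;
    import org.jbehave.core.annotations.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import static org.junit.Assert.assertTrue;

    public class OrderSteps {
        private final WebDriver driver = DriverFactory.getDriver(); // from the earlier sketch

        @When("the user saves the order")
        public void saveOrder() {
            driver.findElement(By.id("save-order")).click(); // fails until the UI exists
        }

        @Then("the order appears in the order list")
        public void orderIsListed() {
            assertTrue(driver.findElement(By.id("order-list"))
                             .getText().contains("Order"));
        }
    }

The test fails today, and that's the point: when it starts passing, the build monitor announces that the feature is done.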

What happens if I get stuck automating?

  1. Get help from a team member: pairing keeps roughly 80% of our effort pointed at correctness, and together we come up with more ideas to try to solve the problem.
  2. Talk to the whole team about the issue, as no single individual owns quality.
  3. Escalate the issue to the organization.
  4. By Sprint end, maybe the team decides it’s not automatable and that they’ll test it manually.

Given When Then Design

These are BAD smells:
  • Too many GIVEN, GIVEN, AND, AND, WHEN, AND, AND, THEN.
  • Needing an engineer or power user to understand the GWT. (An end user, or an end user’s boss who’s never used the product, should be able to understand the GWT.)
  • UI language mentioned in the GWT.
  • A feature file that is many pages long.
  • GWT language that is not reusable; a lack of consistency of language.
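For instance (an invented example), "WHEN the user clicks the Cancel button on the Account page" drags UI language into the GWT, while "WHEN the customer cancels their subscription" states the same behavior at the business level, so even the end user's boss can follow it.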
    What if my "feature" has no behavior:
    • Go find it!!!
      • Talk to your PO and find out "why" the customer wants this.
    • If you’re delivering a component that is *part* of a feature, then
      • Go look at the feature’s GWT and understand how your component supports that feature.
      • Implement Steps for that feature to test that your component delivers that part of the feature.

Test Steps

These are NICE smells:
  • Step methods are about three lines long.
  • No @Alias(es).
  • Steps call into a shared library rather than calling interfaces on other Step classes (see the sketch below).
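Here's a sketch of those smells together (CheckoutService is an invented shared-library class):

    import org.jbehave.core.annotations.When;

    public class CheckoutSteps {
        // the shared library is reusable by every Step class
        private final CheckoutService checkout = new CheckoutService();

        @When("the customer checks out")
        public void customerChecksOut() {
            checkout.submitOrder(); // one short call into the shared library
        }
    }

    // invented shared-library class; in real life it wraps your domain/API calls
    class CheckoutService {
        void submitOrder() { /* drive the app's API or UI here */ }
    }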

Use Continuous Integration

It's hard to get started with CI if you aren't already using it. Here's how to get started.

Guiding Principles: you can’t automate what you can’t do manually, and you'll always find a better way to do it next week (but don't wait).

If you tackle all your technical difficulties head on, it's hard to make progress. So deliver your way to CI through evolving code, continuous learning, and incremental delivery. Because any CI server can execute whatever scripts you give it, write your scripts using methodologies that get the job done (for you). Start getting the value from your integration efforts as soon as possible.
1. Manual integration: integrate on a separate machine (using scripts that will later be executed continuously). Scripts should be able to find new tests at run time rather than rely on static lists (a sketch of run-time discovery follows this list). A script should be able to build and install/deploy. If you can't yet create scripts, do the steps manually until you learn how.
2. Continuous Integration: select CI software, install it on a server, and then automate the execution of the scripts in the CI.
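As a sketch of run-time discovery (the .story extension and directory layout are assumptions; adapt them to whatever your test files look like):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class StoryFinder {
        // Walk the source tree and pick up every story file, so a test checked in
        // this morning runs tonight without anyone editing a static list.
        public static List<String> findStories(Path root) throws IOException {
            try (Stream<Path> files = Files.walk(root)) {
                return files.filter(p -> p.toString().endsWith(".story"))
                            .map(Path::toString)
                            .sorted()
                            .collect(Collectors.toList());
            }
        }
    }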

If you have feedback, agree or disagree, feel free to comment; you'll be adding value!