Monday, June 17, 2019

Ansible Playbooks and Modules

Playbooks are the primary way to produce "infrastructure as code" when using Ansible.  Infrastructure as code is the practice of representing all (or as much as possible) of your infrastructure setup, configuration changes, monitoring, rollback, and tear down as automated code/scripts.  Code for the demos is in a GitHub branch here.

Playbooks

The goal of a "play" is, like on a sports team (let's say American football), to map players to their roles on the field and to an activity.  In the context of IT, the players are routers, servers, clouds, ....  Roles are routing, firewall, blog, CMS, ....  And activities could be: update, migrate, re-route, copy a file, ....

The IT professional writes playbooks to handle activities they care about.  Playbooks are checked into source control and become the system of record. 
Playbooks:
  • are human readable
  • combine the act of writing notes to document configuration with the acts of declaring configuration and confirming/testing configuration
  • describe a policy you want your remote systems to enforce
  • capture a set of steps in a general IT process
  • manage configuration and deployment to remote machines
  • (advanced) sequence multi-tier rollouts involving rolling updates, delegate actions to other hosts, and interact with monitors and load balancers
Although one could get by using Ansible to "execute" scripts against multiple hosts via ad hoc module calls, playbooks are a structured way to organize the interaction between equipment, their roles, and activities.  Playbooks are defined in a .yml file and start with "---". For each play in a playbook, you choose what to target in your infrastructure and which remote user executes the tasks.
The metaphors connect like this: playbook -> {play -> {tasks -> modules}*}*
Where a playbook may have zero, one, or many plays, and a play may have zero, one, or many tasks.  A task requires a call to a module in order to effect some change.

A single play is in the verify-apache.yml playbook:
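(The original post showed this as an image. Here is a sketch of what a single-play playbook like verify-apache.yml typically looks like, modeled on the Ansible documentation; the host group, remote user, and file paths are illustrative:)

---
- hosts: webservers
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
  - name: ensure apache is running
    service: name=httpd state=started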

Two plays are in this playbook:
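(Again a sketch in place of the original image; the second play's host group and module arguments are my own illustration:)

---
- hosts: webservers
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: name=httpd state=latest

- hosts: databases
  remote_user: root
  tasks:
  - name: ensure postgresql is at the latest version
    yum: name=postgresql state=latest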

The above examples show just one way to declare remote_user, after hosts. The remote_user can be declared in many different contexts.  Here is a short document that covers YAML syntax.

Modules

A task is a call to an Ansible module.  Here is how it connects to the metaphor: task -> module
In YAML, a task starts with "- name:".  The module call starts on the next line.

The YAML after the module name consists of key=value arguments, space delimited. "yum:", "template:", and "service:" are references to modules, which are packages of features you can use with Ansible. Ansible passes whatever follows the module name into the module as attribute and value pairs. In play 1, the "yum:" module will get the following as arguments: {name=httpd, state=latest}. The yum module documentation explains what it does with arguments such as "name" and "state."  When yum receives "state: latest" it will check that the latest httpd is installed and, if not, upgrade it.

Modules are a way of re-using code in Ansible.  They can also be directly invoked from the ansible command line via "-m":
$ ansible webservers -m service -a "name=httpd state=started"
$ ansible webservers -m ping
$ ansible webservers -m command -a "/sbin/reboot -t now"

The modules used above are: service, ping, and command.  Here is a reboot activity done in a playbook:
---
- hosts: webservers
  tasks:
    - name: reboot the servers
      action: command /sbin/reboot -t now

Rather than the generic "action" form, we can invoke the more specific module, "command:", directly:


---
- hosts: webservers
  tasks:
    - name: reboot the servers
      command: /sbin/reboot -t now

Here is the "restart httpd activity" done in a playbook:


---
- hosts: webservers
  tasks:
    - name: restart webserver
      service:
        name: httpd
        state: restarted

Notice how playbooks attach each activity to a name, so the playbook writer organizes their scripts by naming them.  "restart httpd" becomes "restart webserver."  Good names are important for maintainability and human readability.

Here are example playbooks that do "real world work."  However, I found the use of roles and how the files are organized confusing for getting started. And there is the real possibility that there is a better, less confusing way to organize an enterprise's inventory, roles, and files.

Playbooks are executed by the ansible-playbook command.
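For example (the inventory path here is illustrative):
$ ansible-playbook -i inventory/production verify-apache.yml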

Write a playbook

Let's build a "compliance" playbook that establishes the state "no errors should be in system logs."  The biggest complication I had in learning Ansible was writing YAML and understanding its implicit syntax between Lists and Dictionary. These problems go away with experience and since writing YAML is foundational, it's important to get this working "for you" rather than "against you." So to get some experience, I found slowly "growing" (iterating on) the simplest playbook into something useful, helped me learn how to work with YAML. Jumping straight to the example playbooks listed above didn't give me the experience with YAML that I needed.

Here is the simplest possible playbook:
---
...
Run it:
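(The output was a screenshot; the command itself, assuming the file is named simplest_playbook.yml as later in this article, is:)
$ ansible-playbook simplest_playbook.yml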

Let's connect to a host by adding it as a list item.
---
- hosts: blogs
...

From a YAML syntax perspective, "- " means a list item.  "hosts: blogs" is a key (hosts) and value (blogs).
Run it:

Since I didn't add the inventories, it couldn't resolve what "blogs" meant.  So I added the inventory file (-i) just like I did in the Enterprise Infrastructure as Code with Ansible article.
Now it resolves the definition of "blogs" to a hostname.  But it says the host is unreachable because I need to pass along the username via (-u).  (You can also embed that in the inventory or even the playbook.  I chose to use the command line to pass the username so I can put my files into public source control without exposing my username.)
Success! Our first end-to-end playbook execution.  You can get the code on GitHub at this branch.

In YAML, "-hosts ..." is how a list is declared (dash with a space).  Then "hosts : blogs" is a key and value pair.  So Ansible loaded the YAML and accesses the list and looks for the key named "hosts."  It then tried to resolve the value "blogs."  Since there isn't a hostname literally named blog, it couldn't ignored that host until an inventory was referenced which defined "blogs."

Let's start checking logs

As you develop a playbook, keep the Ansible module documentation handy.  Since we'll need to execute a shell script on our remote servers, let's do something easy to "kick the tires" of the shell module. (For tips on deciding between the command, shell, and script modules, see the module notes here and here.  In this case we could start with the command module, but since we'll eventually want to use pipes, we need the shell module.) Make the changes needed to have simplest_playbook.yml as the following:

---
- hosts: blogs
  tasks:
    - shell: echo "hello" > hello.txt
    - shell: grep "hello" hello.txt 
...
And test the file for errors:
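(ansible-playbook has a built-in syntax check for this:)
$ ansible-playbook simplest_playbook.yml --syntax-check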


It's happy with the YAML.

Syntax Sidebar: 
"- " declares a sequence.  "hosts: blogs" is a key and value pair. 
"tasks: " maps the tasks to whatever follows, a sequence of more key values: "- " shell: ...:
Ansible requires the key following "tasks: " to be either "name: " or a module.

Note about YAML style:
In YAML the following two playbooks are equivalent. Normally I advise people to use the most succinct style, but I had a lot of confusion separating the sequence marks "- " from the mappings.  Otherwise, at least to me, "- hosts: blogs" and "tasks:" don't seem to both be mappings. My brain keeps seeing the "- " and that keeps interfering with my understanding. If you agree with me, great.  If you don't, then reformat it to how you like.  This article about YAML describes how to work with sequences and mappings in a general sense, and it was less confusing than the others.

This:
---
- hosts: blogs
  tasks:
    - shell: echo "hello" > hello.txt
    - shell: grep "hello" hello.txt 
...

Versus this:
---
-
  hosts: blogs
  tasks:
    -
      shell: echo "hello" > hello.txt
    -
      shell: grep "hello" hello.txt
...
Are equivalent.

Let's execute this against the remote host by telling ansible-playbook about our inventory and user:
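(The console output was a screenshot. The command, reusing the inventory file from the earlier article and a placeholder username, looks like this:)
$ ansible-playbook -i lancer_kind_com.ini -u your_username simplest_playbook.yml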

Notice that it reports "OK."  Notice the TASK [shell] output, which echoes back which module executed. More information can be sent to the console by describing what is happening in each call to a module, using the "name:" attribute before the call:
---
-
  hosts: blogs
  tasks:
    -
      name: creating file
      shell: echo "hello" > hello.txt
    -
      name: confirming it worked
      shell: grep "hello" hello.txt
...

Notice the TASK output contains what was mentioned by the "name:" mapping.

The play can be more DRY (Don't Repeat Yourself) by declaring a variable.
---
-
  hosts: blogs
  vars:
    message: hello

  tasks:
    -
      name: creating file
      shell: echo {{message}} > hello.txt
    -
      name: confirming it worked
      shell: grep {{message}} hello.txt
...

Check server logs

Now we can write a new playbook that checks that our system logs are error free.  First, I developed the following by working and testing in a terminal window:
find /var/log -name "*.log" -type f -exec grep -i "error" {} +  | grep -v "error_log" | wc -l | grep "0"
If the shell module detects a command that returns a non-zero exit code, it will signal there was an error.
(See Ansible and error codes if you want the details.) The above is designed to return 0 if there aren't any errors in the log files, and non-zero if errors are found.

---
-
  hosts: blogs

  tasks:
      -
        shell: find /var/log -name "*.log" -type f -exec grep -i "error" {} +  | grep -v "error_log" | wc -l | grep "0"

And run it:
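(Assuming the playbook above is saved as check_logs.yml, a file name of my own choosing, the run looks like:)
$ ansible-playbook -i lancer_kind_com.ini -u your_username check_logs.yml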
If your logs aren't clean, then the PLAY RECAP will tell you the results, which in this case is "failed."  Ansible dumps a JSON blob containing what was sent to shell, what was sent to stderr, and so on.  After cleaning up the logs (or you can cheat and change the script to look for something more "unique" than error), run the command again.

When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the rest of the playbook. If things fail, simply correct the playbook file and rerun.
Adding a "name:" mapping would make the TASK output more sensible. "TASK [shell] ******" isn't a very useful message:

---
-
  hosts: blogs

  tasks:
      -
        name: Scanning Logs for error.
        shell: find /var/log -name "*.log" -type f -exec grep -i "error" {} +  | grep -v "error_log" | wc -l | grep "0"
...

Notice that the TASK line in the output now contains the more meaningful message.

File Organization

A very basic organization is a directory that's checked into source control and containing:
  • directory of staging inventory
  • directory of production inventory
  • playbooks
This is the organization used in the example code used in this tutorial.  Building on that idea, most IT departments will need to add:
  • directory of roles
  • directory of group_vars
  • directory of host_vars
And eventually, an organization with a mature use of Ansible will be creating their own modules for code reuse and will need directories for these files:
  • library  <- where the module code lives
  • module_utils  <- where common libraries for those modules live
  • filter_plugins
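Putting it all together, a layout along the lines of the Ansible best-practices documentation looks something like this (file names are illustrative):

production                <- inventory file for production servers
staging                   <- inventory file for the staging environment
group_vars/
host_vars/
library/                  <- custom modules
module_utils/             <- common code for custom modules
filter_plugins/           <- custom filter plugins
site.yml                  <- master playbook
webservers.yml            <- playbook for the webserver tier
roles/
    common/               <- a role, containing its own tasks/, handlers/, templates/, ...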
Modules "should be idempotent" meaning running the module many times should have the same affect as running it once. They should be designed to check that the final state has already been achieved and if so, exit without performing any actions. 


If you're curious to dig deeper into the more advanced levels of organization, there are more details here.

Next Level Ansible

Once you've got some experience making a few playbooks, these topics will take your Ansible work to the next level.  Each of the topics below is an article by itself.

Resources:

YAML syntax. The syntax doc is a short and sweet read.

Friday, May 31, 2019

Enterprise Infrastructure as Code with Ansible

Keeping up on enterprise network configuration with spreadsheets, CAB meetings, scripts, and a Swiss Army knife of configuration software isn't sustainable.  Normalizing network administration to a single tool brings focus and effective standardization.  Ansible is one of many ways to do this. Ansible is lightweight and easily extensible to administer any equipment that allows SSH.  I performed the steps below on MacOS Mojave.  Here is the code on GitHub.

Get Ansible on MacOS

To run Ansible on a Mac, install the following: Python 2 (version 2.7) or Python 3 (version 3.5 or higher), OpenSSL, and Ansible ( https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#control-node-requirements ).  This will allow your Mac to be a "control node" and send commands out to many machines.  MacOS is configured for a small number of open file handles by default, so if you're controlling a lot of nodes (roughly 15 forks or more) you may need to raise the limit.  (See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#control-node-requirements)
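(The Ansible install guide suggests raising the limit along these lines; adjust to taste:)
$ sudo launchctl limit maxfiles unlimited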

It's a bit "sporty" to build and install OpenSSL on a Mac.  If you use Brew or MacPorts, do one of the following and skip to install Ansible:

If you're like me and not a user of brew or MacPorts, then see the following steps.

Update Python 

The Python that comes stock on MacOS is likely pretty old. Go to the terminal and check:
(If you happen to accidentally type "python -v" use ctrl-D to close the REPL.)
$ python -V 
Or 
$ python3 -V

The latter option checks for a common name that the newer Python is installed as.  You can also check /usr/local/bin to see if there are other versions of Python already sitting there.  The stock version of Python is at /usr/bin.

If you don't have python 3.5 or greater, install a newer version.  Here are some instructions:
https://www.macobserver.com/analysis/how-to-upgrade-your-mac-to-python-3-2017-update/

Build and Install Open SSL

Go here and follow the instructions (https://github.com/openssl/openssl/blob/master/INSTALL) to build, execute tests, and install OpenSSL.  

Install Ansible

Follow the instructions for MacOS here: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#latest-releases-on-macos

Test it with:
$ ansible --version

Notice which version of Python it reports back. This is the version of Python your control node is using. If it's not using the Python 3 that you intended, take this into account before doing something important with Ansible. If you notice some Ansible commands not working, then you'd better attend to this.  (You can't simply change /usr/bin/python, as MacOS keeps this immutable for a number of good reasons.)

Your first configuration

Since we want to use network configuration as code, make a directory for storing configurations so they can be checked in.  In my case, I used Git for source control and VI for editing files:
$ mkdir ansible_repository
$ cd ansible_repository
$ git init
$ mkdir hosts
$ vi lancer_kind_com.ini

Here is background on creating inventory files.  For this example, it's simple.  https://docs.ansible.com/ansible/2.3/intro_inventory.html
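(The inventory contents appeared as a screenshot. A minimal inventory consistent with the ping test below might look like this; the alias and connection settings are my own illustration:)
macattack ansible_host=127.0.0.1 ansible_connection=local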

Test it by invoking the ping module:
$ ansible macattack -i lancer_kind_com.ini -m ping

Managing Remote Objects

Great.  You can administer your workstation with Ansible.  Time for the next level: managing a single remote site.

Configure SSH, Passwords, and Security

Let's all agree that it doesn't make any sense to type in SSH passwords as that's not a scalable or very useful automation strategy.  If you're new to using ssh public/private keys, and want to set it up by hand, here is a good article.  Once you've done that, test your configuration:
$ ssh username@hostname ls

(If, when you generated a key with ssh-keygen, you set a pass phrase, you'll still need to type in the pass phrase whenever ssh needs to work with the key you created.  If you don't like that, set the pass phrase to empty--press return.  You can use "ssh-keygen -p -f " to change the pass phrase of an existing file.)

To configure SSH for all your thousands of nodes in a scalable fashion, read how to do key distribution.
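(For a small number of hosts, the stock ssh-copy-id utility is one way to push your public key onto a machine; truly scalable distribution needs more than this:)
$ ssh-copy-id username@hostname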

Ansible

Add the remote host to the inventory file.  For more details on configuration file, see the Ansible docs: https://docs.ansible.com/ansible/2.3/intro_inventory.html.

[] is a grouping mechanism.  With [blogs], for example, many sites can be listed and managed as a group.  Execute the ping module on [blogs].  "-u " has been redacted; it is the SSH user that Ansible will use.
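(A sketch, with a placeholder host in the group and the redacted user shown as your_username:)
[blogs]
blog.example.com

$ ansible blogs -i lancer_kind_com.ini -u your_username -m ping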

Where to put your Ansible user name?

You can put the username in the inventory file, or in the roles (I'll go over this in an article about Ansible Playbooks), or at the command line.  I chose the command line so that I could check in files and still keep my ansible user name secret from the internet.  Here is a StackOverflow thread about these three options.

Executing ad hoc CLI

A simple yet powerful feature is that a command can now be sent to many nodes in a SIMD manner.  Since there is no "-m" argument, Ansible uses the default, which is the "command" module.  The command module takes a "-a" argument, and whatever follows "-a" is executed by the endpoint.  So "hostname" will be executed on every machine in the "blogs" group, in parallel.  Using -f tells Ansible to do them in sets of 20 rather than across ALL the endpoints grouped under "blogs" at once.
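(Reconstructing the command from the description above, with the same placeholders as before:)
$ ansible blogs -i lancer_kind_com.ini -u your_username -a "hostname" -f 20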

Conclusion

Ansible can be used to manage any system which can handle SSH (not only computers but routers, ...).  Keeping your private key secure and distributing keys is a bit more work.  What's left to learn is to organize your inventory files (or integrate an inventory service into Ansible) in a maintainable way.  The good news is that you can start small and grow your Ansible skills.

Here are two pathways for continued education:

  • Other Ansible utilities:
    • ansible  - the command used in this article
    • ansible-config - list configurations that ansible has access to
    • ansible-console - a REPL environment to practice your ansible commands
    • ansible-doc - list plugins and documents
    • ansible-galaxy - install modules from ansible galaxy
    • ansible-inventory - display or dump inventory information as ansible sees it
    • ansible-playbook - execute playbooks
    • ansible-pull - pulls a playbook from a VCS repo
    • ansible-vault - encrypt a structured data file used by Ansible.
  • Ansible for network administration

References

Nice getting started video: https://www.ansible.com/resources/videos/quick-start-video?extIdCarryOver=true&sc_cid=701f2000001OH7YAAW

Current suggested practices: https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html

Monday, April 8, 2019

Designers and Agile Teams

It's the WILD WEST out there when it comes to how designers work with Agile teams. Sometimes they are part of the team and go to all the team events such as daily standup and planning. Other designers work on their own as a "team of designers" and stretch themselves across multiple Agile teams.

I'm collecting experiences of these "guardians of the hip and beautiful" to learn what they found worked well and what didn't. If you'd like to add your own ideas, use the comments or contact the Agile Thoughts podcast and get on our interview schedule.

Here is a link to the Agile Thoughts episode archive.

Monday, February 4, 2019

Mocking with Google Application Scripts (Javascript)

This article covers why you need to do mocking and how to do it, using examples in Google Application Script (a form of JavaScript).  In each case, we'll start with a code example (not following TDD, so we can focus on mocking) and then write a micro test. From there, we'll focus on the steps to implement a mock object:
  1. find a way to inject the mock
  2. design a minimal mock object that achieves the necessary goals
  3. use the mock in an automated micro test

What is Mocking

Mocking is the practice of using simple objects in place of *real* objects in order to have full control of product code for which you want to write an automated test.  (Product code is the code you put into production as it gives you the functionality you or your users want.  Test code is Javascript code that tests the product code.)

Because Javascript is loaded and executed at runtime and isn't strict about types, it is an extremely flexible language. Yet, when writing micro tests there will be code that correctly does its job in production, yet needs to be mocked or return fake data for unit testing.

Situations that require a Mock object

Mock objects are used in place of real objects that do the following "no nos."

Micro Testing No Nos

  • write to our computer screen (ui widgets)
  • communicate with a network (databases, webservices, cloud services,...)
  • connect to a system service such as the system clock
  • operate the file system
  • in general: bring complication to your micro test that slows it down or makes it reliant on something that adds complexity.

Example:

Copy the below code into a GAS script file called Alert.gs:
function showAlert() {
  var ui = SpreadsheetApp.getUi(); // Same variations.
  var result = ui.alert(
     'Please confirm',
     'Are you sure you want to continue?',
      ui.ButtonSet.YES_NO);
  // Process the user's response.
  if (result == ui.Button.YES) {
    // User clicked "Yes".
    ui.alert('Confirmation received.');
  } else {
    // User clicked "No" or X in the title bar.
    ui.alert('Permission denied.');
  }
}

Run the function and you'll get the following in the script editor:
Note the script editor's prompt about: "Cancel Dismiss." What has happened is the widget is showing in the Google Docs application to which the script is attached.  In this case, I clicked on the tab for the Google Sheet.
Take a look at the above and think about what is important to check with an automated test. For this type of functionality, the below is a typical list of "checks" or test cases:

Checks for Alert

  1. The prompt text is correct. In this case, the prompt is "Please confirm."
  2. The question you're asking the user is correct. In this case "Are you sure you want to continue?"
  3. The dialog box is of the correct type.  In this case, it has a Yes and a No button.
  4. When Yes is clicked, a "yes message" is returned to the caller.  When No is clicked, a "no message" is returned to the caller.

If we test the above checks in an automated micro test, we avoid paying the biggest cost of software development (often put at 80%): the cost of maintenance.
  • No manual labor is used every iteration to confirm that this code is working as designed.
  • It takes essentially no time to execute a test.
  • We can use this test countless times before shipping our product to confirm that everything is working as we designed it.
  • No manual labor will be necessary to later debug the code to track down why it stopped working.
  • The micro test acts as documentation of our code. And if we break our code, our documentation will tell us there is a problem.
Create the following micro test in AlertTest.gs. Google Application Script Test (GAST) is used in this article (you'll need to install it or copy and paste the library into a script in your GAS project). It's an alright framework whose main advantage is that it's simple. QUnit for GAS is a more "feature-ful" alternative for doing Google Application Script (GAS) work. (Due to a unique situation, simpler was better for the GAS development I was involved with.)
function alertTest(){
  test('methodToTest and the test scenario.', function(assert){
    // Arrange what we must to get the test ready
    // Act on the code that must be tested
    // Assert what the results should be 
  })
}
Run alertTest and check the logs (in the script editor select View->Logs) for the output.  You should see the following:
If the above is essentially what you see, then your environment is in good shape.  Now we start the three steps to implementing a mock object.

Step 1, find a way to inject a mock

The fragment from Alert.gs has our mocking target highlighted.
function showAlert() {
  var ui = SpreadsheetApp.getUi(); 
...
Our code using the SpreadsheetApp global object (a singleton) could be a problem.  So I write down on a piece of paper: worry about SpreadsheetApp.  Since we don't know if this is a big deal yet, we continue reading the code.
  var result = ui.alert(
     'Please confirm',
     'Are you sure you want to continue?',
      ui.ButtonSet.YES_NO);
Now we find our UI code which is what we want to test.  So the question in my mind is: how to inject a mock ui object here?  It's being accessed via a variable called "ui."  Where did "ui" come from?


  var ui = SpreadsheetApp.getUi(); 
It came from our singleton.  OK.  On my paper I underline SpreadsheetApp and add the following comment.
  var ui = SpreadsheetApp.getUi();  // inject a mock for UI here!
Great!  Now how to do it?  It turns out that JavaScript, being a dynamic language, lets us do this in many ways. Simply refactoring the code to the following will not bother any of the callers of the showAlert function.
function showAlert(uiMock) {
  var ui = (uiMock == null ? SpreadsheetApp.getUi() : uiMock); 
...
As with any refactoring, you should test it. Since we don't yet have a micro test, you'll need to do a manual test: execute the showAlert function, switch back to the Google Sheet you embedded the script into so you can see the dialog, and click "yes" or "no."

Step 2, Design a minimal mock object that achieves the necessary goals 

We don't want our micro tests to be anywhere near as complicated as our product code, so they must test only one scenario and do it as simply as possible. It's best to not need mock objects at all, but we need SOMETHING to take the place of the real UI object which SpreadsheetApp normally returns.
So we are committed to that. Now, what would be the minimal functions that this mock object must provide?  Go read the code and see what is called upon the ui variable.
  var result = ui.alert(
     'Please confirm',
     'Are you sure you want to continue?',
      ui.ButtonSet.YES_NO);

  // Process the user's response.
  if (result == ui.Button.YES) {
    // User clicked "Yes".
    ui.alert('Confirmation received.');
  } else {
    // User clicked "No" or X in the title bar.
    ui.alert('Permission denied.');
  }
}
On a piece of paper I write down the items bolded above. Our mock ui needs to provide an alert function, a ButtonSet property, and a Button property. These are things we need to put into the Arrange section of our micro test. We are ready to implement our first micro test case.

Step 3, use the mock in an automated micro test

Let's build this micro test capability one simple test case at a time.  (Each function named "test" is a "test case.") Among our list of Checks for Alert, let's select the first, which is to check the prompt text.  Roughly, the following is what we want to do.
function alertTest(){
  test(' ', function(assert){
    // Arrange what we must to get the test ready
    var mockUI = {}
    // Act on the code that must be tested
    showAlert(mockUI)
    // Assert what the results should be
    // assert.equal(, 'Please confirm','')
  })
}
To finish what's bolded, the assert line, we need to get our mock object to "spy on" what happens when showAlert does the highlighted section.
We can implement a spy in our mock object and finish up our assert; a consolidated sketch appears below, after we work out the remaining mock properties.
Update the first argument to the test function (method_to_test and test scenario) with what we are testing: 'showAlert prompt message is correct.'
Go back and look at our list of items that are called upon our ui variable. Although we've only implemented one of the three, simply run the micro test and discover, via the error message or log, what else is needed.  (I run the micro test now rather than assume my list is correct, because sometimes I discover something new.)
I confirm it does need the ButtonSet implemented, and so I add it.
Running the test again fails with a TypeError because it's trying to read someone's YES property.  The logs reveal the "someone" is on line 10 of Alert which for me is the following:
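(For reference, that line is the comparison in showAlert:)
  if (result == ui.Button.YES) {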

So we add the "Button" property to our mock object.
Run the test and now it passes.  To confirm it is working correctly, inject a bug by changing the highlighted line in the product code and observe the test fail:


Go ahead and remove the bug, run the test and observe it passes.

Continue and Share

Go ahead and implement all the checks for this test by adding more asserts and spies.  At some point you'll need to add another "test" function because the "arrange" section you have will be focused on one scenario and you'll need to "arrange" a different scenario for some of the checks.

Share your solution and I'll be happy to give you feedback.  Take a picture of the test automation you built and tweet it to: @LancerKind.  I'll give feedback.  Go for it because your solution may be better than mine.

Cheers,
Coach






Wednesday, January 9, 2019

Learning JavaScript

JavaScript is the world's most popular programming language.  With the advent of ECMA standard 6 (ES6), the JavaScript language also got a pretty modern facelift.  Since JavaScript integrates with a lot of different concerns, from web pages to servers, there are a lot of ways to get started.

Here are a few pathways along with tutorials or readings

http://www.tothenew.com/blog/how-to-test-google-apps-script-using-qunit/
https://github.com/simula-innovation/qunit/tree/gas/gas

Useful Intro Reference for All JavaScript paths

Everyone works with strings, arrays, sets, and functions.  Become familiar with these libraries:

Google Docs Automation:

Google uses a version of JavaScript for automating Google applications.  This language has become known as GAS.
  • Your first time: https://www.benlcollins.com/spreadsheets/starting-gas/
  • https://developers.google.com/apps-script/overview
Here is a simple and easy to get started micro test library so you can do Test Driven Development or micro testing of your GAS code:
  • https://github.com/huan/gast

Advanced:

Sending your form data into a database:
  • https://www.youtube.com/watch?v=or78bBOeFU0
  • https://www.dataeverywhere.com/use-database-forms

Basic HTML and JavaScript route:

  • https://dev.to/programliftoff/create-a-basic-webpage-with-css-and-javascript--104i
  • https://medium.com/@blondiebytes/how-to-create-interactive-websites-with-javascript-627a6d998fed

Advanced Web App development

Pick a framework and start learning.  I recommend going "the long road" by investing in learning NodeJS instead of going straight to a web UI library such as ReactJS.  Learning how to build and create with NodeJS pays dividends for everything else you learn with javascript.  The information below suggests how to "discover" a link because most direct links I can provide will get out of date in twelve to twenty-four months.

NodeJS: for building a service

    ReactJS: for building a web UI
  • Install React JS environment 
  • Learn how to make a single page application.  
    • ReactJS getting started and get something working that creates a simple UI.  If you get stuck in installing your own development environment, you can also use online Playgrounds until you work out what's wrong with your own environment.
    • Another source of tutorials is to search youtube for "Single Page App React" and find something less than two years old.
  • Learn how to connect your single page application to a web service.  You can find tutorials on Youtube by searching for "react connect to webservice."  You can connect to existing web services.  NodeJS is a good way to build your own web service.
  • Learn how to connect your application to a datasource.  Search Youtube for : React connect to database getting started.
  • Redux for managing your view updates with a datasource.  Search Youtube for : React Redux getting started and find something less than two years old.

General JavaScript programming:

Unless you're writing Google Application Scripts, learn how to form your code with the latest standard (ES6 at this moment).  I'm a fan of using NodeJS for general JavaScript programming.

Sunday, January 7, 2018

Getting JUnit 5 into your Eclipse 4.7 Oxygen

Good Things Merit New Releases

Like Java, JUnit versions continue their onward march.  JUnit 5 is a more modularized version of JUnit 4, and supports and requires Java 8.

Here is what a bundle of JUnit 5 entails:

  • JUnit Platform – which enables launching testing frameworks on the JVM
  • JUnit Jupiter – which contains new features for writing tests in JUnit 5
  • JUnit Vintage – which provides support for running JUnit 3 and JUnit 4 tests on the JUnit 5 platform
https://www.eclipse.org/modeling/downloads/build-types.php

My situation was more complicated as I am using Spring Tool Suite rather than straight Eclipse.

I updated my build using the second bullet at: https://marketplace.eclipse.org/content/junit-5-support-oxygen#group-details
Notice the bit about clearing a checkbox regarding categories.
Update your Eclipse 4.7 build using this update site: http://download.eclipse.org/eclipse/updates/4.7-U-builds.
In Eclipse, go to Help > Install New Software... and uncheck "Group items by category". Select "Eclipse SDK" from the list of items and proceed with the installation.
If you were using Spring STS 3.9, your splash screen will change.  I couldn't use my usual MacOS icon and instead had to go to a terminal, navigate to my Spring install, and launch it with "./springsource/STS.app/Contents/MacOS/eclipse".  My STS was stored in my home directory, so navigate there and find the "springsource" directory.

It takes a few minutes for the IDE to move beyond the splash screen.  Be aware there may be a modal popup about updating your workspace hiding somewhere.
Don't be alarmed. It's still STS and as far as I could tell, everything was working with the exception of the Mac launch application and the splash screen change.


After doing this, I was able to install "JUnit 5 Support for Oxygen" plugin by using STS to go to the marketplace, and then dragging the "install button" from https://marketplace.eclipse.org/content/junit-5-support-oxygen#group-details and dropping it onto the marketplace.

And then:

 After rebooting, check that it was installed by going back to the Marketplace and selecting the installed filter.





Add a JUnit test case to your source and you'll see JUnit Jupiter below as one of the radio buttons.



Upon clicking Finish you'll be asked about adding JUnit 5 to your build path.


This is the template test case it inserted. Notice the org.junit.jupiter namespace.
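(The screenshot is gone; reconstructed below is the standard template Eclipse generates, with an illustrative class name:)

import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

class MyTest {

    @Test
    void test() {
        fail("Not yet implemented");
    }

}

Let's run it and see if it fails.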


Like you've always done with JUnit, you can select the project name or the single test case, then execute tests with the context menu.
 And you should see the below red bar since the test case template has a "fail()" statement.

By the way: regarding TDD

Check out the Agile Thoughts podcast.  This podcast gets Confessions Of An Agile Coach's endorsement (naturally, since the same people produce it) as quality material for developers and teams trying to get coding done in a way that avoids legacy issues such as bugs and hard-to-maintain code.  Agile Thoughts has a lot of great TDD conceptual content along with a radio drama about the difficulties of getting a TDD initiative started.  Click the podcast cover below or here for more information.


Agile Thoughts podcast

References

A Look at JUnit 5’s Core Features & New Testing Functionality https://stackify.com/junit-5/
JDT UI/JUnit 5 https://wiki.eclipse.org/JDT_UI/JUnit_5


Saturday, August 12, 2017

Developers, confess why you don't TDD

Dish me the dirt and give it to me straight. Don't hide anything. Just lay it out there: why don't you do TDD? TDD has been fighting for mainstream acceptance since the late nineties with the introduction of eXtreme Programming.  It's still not there yet. Today at least everyone knows the term TDD, but few actually know how to do it.
So I've put on a black shirt and twisted the collar on backwards, and I'm ready to take your confession.  Later, I'll cover your reasons, my dear developer, on Agile Thoughts podcast (https://agilenoir.biz/series/agile-thoughts/) where I'm doing a series on, brace yourself, "Why Devs don't TDD."

Update:

They are now live! Visit Agile Thoughts (敏捷理念播客) and listen to these insightful, easy to understand, and entertaining episodes about TDD.  At episode 14, I’m sure you’ll notice its, shall we say, singular style.  At Agile Thoughts, we work hard to be Portlandia friendly. ;-)
Like a good TDD priest, I'll keep the "actors" anonymous and discuss what I learn in an entertaining monologue.  So don't worry about your scandals and secrets.  What is your or your team's excuse?  What's the fear or problem? What's stopping you? 

  • 009 Introducing the Test Driven Development series
  • 010 Agile and TDD Neglect
  • 011 The Old Way isn't Sustainable
  • 012 An Example of doing TDD
  • 013 Developer Intent and the Bible
  • 014 Why Devs don’t TDD
  • 015 The TDD FUD Spreader
  • 016 Driving under the Influence of FUD
  • 017 The Architect Disses TDD
  • 018 TDD gets no Love from the PO
  • 019 A QA Professional Questions Micro Tests and TDD
  • ... and the series is still ongoing.


And hey, if you are doing TDD, then why? What got you going?

Drop me the info either on the Hacker News posting thread, or use the comment section below, or send me the "goods" via twitter: @LancerKind