GabbiPresentation

2015-04-19 20:48:51 cdent

The start of notes for a presentation about gabbi.

Exploring slides here (ask me for the password if you want in).

A new set of GabbiPresentationNotes, built from the slides. The process was:

  • make gabbi
  • make these notes
  • make the slides from these notes
  • make the GabbiPresentationNotes above from the slides, without referring to these notes

That seems like a good filtering mechanism.



Constraints

I'll be presenting the second half of a 40-minute presentation entitled "APIs Matter". The first half, from Jay Pipes, will be a narrative about the why of the title. In an email to Jay I said:

My thinking, so far but please feel free to redirect, is that we
essentially do two presentations in the space of the slot, with you
first. You'll do your bit and then I'll follow with a description
and demo of gabbi based on the stage you've set. I hope I'll be able
to tie to some of the things you've made relevant in your half
alongside a bit of "here's why I made it" and "here's how the
existence of gabbi might provide opportunities for you to make
contributions".

I will be unable to guarantee network access, so I need to have everything preinstalled. If I wish to use a live API I either need a VM running or need to establish whether any of the OpenStack APIs will load on my machine (I could use the big heap of Red Hat machine, but ow, so heavy). The Ceilometer gabbi tests run on OS X (requiring a MongoDB install). The Gnocchi tests do as well (requiring a MySQL install). So there are options.

It seems likely that a more straightforward API may be the right way to go, so as not to get caught up in details. However, using an existing API may save some trouble. And of course there is always TiddlyWeb.

From the speaker handlers:

  • prepared with 16:9 slides
  • VGA or DVI compatible laptop (will need Mini DisplayPort to VGA dongle, currently in laptop bag)
  • include your name, title, company name, and title of your presentation in your introductory slide
  • there will be ethernet on stage

I'll probably put my slides on slides.com as that seems okay (it's reveal.js under the covers).

Overview

The presentation abstract says that gabbi aims to clarify "how devs think about APIs as well as how we validate that our APIs are consistent, user friendly, and well structured", so the talk should probably be structured along those lines.

An API is a conversation in which there is a well known and fairly strict grammar. When that grammar is followed well many participants can engage in the conversation. Gabbi's job is to make sure the grammar is followed and to make it easier to learn the grammar.

Need summary statement of "the problem" here.

Metaphor

Problem

This is OpenStack (it turns out this image is licensed, so I won't be using it; got something better)


Solution

2289327791_b8031d218c_o.jpg (my image, my uppies)

The confusion and uppy images can provide a theme that contrasts the noise and disorder of Python-based tests with the clarity and balance of gabbi tests. Any given slide can give a visual cue about which kind of point it is trying to make by using one of the images, or neither.

Caveats

It may be relevant to warn that much of the development has been against Ceilometer and Gnocchi, which have their own sets of constraints. There are plans to support a looping call to deal with asynchronous APIs, but that is not yet implemented.

Why

  • why it was created
  • why it was made the way it was
  • way to get in the door
  • enable ecosystem
  • the difference of my background (as a web guy) and the expectation of diverse client ecosystems as the primary thing
  • Easy tool to evaluate (and hopefully correct) API health ("I don't know if framework X does Y")
  • Julien Danjou says: "I'd prefer we go full gabbi where possible. It's less verbose and less error prone than using the generic test framework." when discussing whether to replace existing webtest tests.

History

  • the timing out of a binary test
  • incredible difficulty learning ceilometer
  • obfuscation (by OO, unittest, testscenarios) of what's really happening
  • weirdness of openstack (notably tempest) traditions:

    Use of specific clients to manage DRYness, and the primacy of blessed client-side tools/SDKs, makes it hard to learn and explore.

  • had a similar (much less complex) tool in the past

  • TDD for APIs [1]

Revelations

  • being able to throw a test at something, in ignorance, leads to a lot of learning
  • the unittest, testr, subunit ecosystem is complex and powerful and kind of a pain in the ass, but also awesome
  • lots of tools are required to traverse JSON APIs in a way that is ignorant (of the API)
  • when testing against a live server, managing data can be challenging, especially when sharing live and intercept tests in the same YAML

Show me

  • request any URL via wsgi-intercept or a host/port
  • send any request headers
  • evaluate response headers (complete match or regex)
  • evaluate strings in the response body
  • evaluate JSONPath expressions
  • magic variable replacements
  • sourcing data from files
  • defaults
  • fixtures
  • live functional tests in the gate (a sketch combining several of these follows)
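
A minimal sketch of a gabbi YAML file exercising several of these features; the paths, headers, and values are invented for illustration:

    # A hypothetical gabbi test file; paths and values are invented.
    defaults:
        request_headers:
            accept: application/json

    tests:
        - name: create a resource
          url: /resources
          method: POST
          request_headers:
              content-type: application/json
          data:
              name: example
          status: 201

        - name: inspect the new resource
          # $LOCATION substitutes the location header of the
          # previous response
          url: $LOCATION
          status: 200
          response_headers:
              # a value wrapped in / / is treated as a regex
              content-type: /application\/json/
          response_strings:
              - example
          response_json_paths:
              $.name: example

Sourcing data from files is the same idea: a data value beginning with <@ names a file relative to the YAML.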

Architecture

  • unittest TestCases are dynamically created from YAML (see the sketch after this list)
  • each file is a single, ordered, linked TestSuite
  • if you request one test in the middle of a file, all the tests prior to it will be run (in order) to do whatever transaction is happening
  • fixtures are per file, so it is possible to use, e.g., a different data store per file
  • extensible response handlers
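
A sketch of the glue that builds those TestCases, using gabbi's driver via the load_tests protocol; the directory name and the trivial app are assumptions for illustration:

    import os

    from gabbi import driver

    TESTS_DIR = 'gabbits'  # hypothetical directory of YAML files


    def make_app():
        """A trivial WSGI app standing in for the app under test."""
        def app(environ, start_response):
            start_response('200 OK', [('content-type', 'text/plain')])
            return [b'hello']
        return app


    def load_tests(loader, tests, pattern):
        """Hand test discovery a suite built from the YAML files."""
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        # intercept takes a callable returning a WSGI app, enabling
        # wsgi-intercept instead of a live host/port
        return driver.build_tests(test_dir, loader,
                                  host=None, intercept=make_app)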

Contributing with it

  • Easy way to learn the API of a service while adding coverage
  • Clean and clear way to develop new API endpoints

Contributing to it

  • Needs better how-to docs
  • Additional response body handlers (a sketch of the extension point follows)
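
A hedged sketch of a new response handler, following the shape of gabbi's ResponseHandler extension point; the handler itself is invented, and the base class location may differ between gabbi versions:

    from gabbi import handlers


    class AbsentStringsHandler(handlers.ResponseHandler):
        """Hypothetical handler: assert strings are absent from the body.

        Enables a response_absent_strings key in the YAML.
        """

        test_key_suffix = 'absent_strings'
        test_key_value = []  # the YAML key takes a list of strings

        def action(self, test, item, value=None):
            # test.output holds the response body text
            test.assertNotIn(item, test.output)

A handler like this is registered by passing it in the response_handlers list accepted by the driver's build_tests.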

Future

  • Parse other files
  • Loop request until condition met
  • Better visibility over the test run and results

Antipatterns

  • Trying to test the entire body of a response in one test
  • Wanting to test deep in the code instead of just the API and its use of HTTP

Fun Patterns

A common experience with gabbi is to choose some section of an existing API's "routes" and generate tests by exploring with enforced ignorance. Pretend that you are a client attempting to use the API without documentation. Ask questions like:

  • Does path X exist?
  • What happens when I use an unusual accept header?
  • What about an unexpected request method?
  • If I give some bad data mixed in with good data (in a body) do things still work? Is the error response informative?

What often happens in these cases is that while the API is quite robust in the face of expected input, it is less so when confronted with fuzzy input. This then leads to the creation of new bug reports. Once actual data is being sent back and forth, trying to inspect that data in reliable (unambiguous) ways leads to requests for desirable features (e.g. better sorting). A sketch of this kind of probing follows.
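
For example, as gabbi YAML; the path and the expected statuses are assumptions about a hypothetical API:

    # Hypothetical enforced-ignorance probes.
    tests:
        - name: does the path exist
          url: /resources
          status: 200

        - name: unusual accept header
          url: /resources
          request_headers:
              accept: application/vnd.example+nonsense
          status: 406

        - name: unexpected request method
          url: /resources
          method: TRACE
          status: 405

        - name: bad data mixed with good
          url: /resources
          method: POST
          request_headers:
              content-type: application/json
          data:
              name: example
              bogus: does this get a clean 400?
          status: 400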

Required Patterns

It is quite likely that all tests will require some kind of ConfigFixture which sets up the persistence layer to be used during the run of the TestSuite. Such a fixture should create a unique datastore at the start and kill it off at the end. A sketch of such a fixture follows.
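
A sketch assuming gabbi's GabbiFixture start/stop hooks; the datastore helpers are hypothetical:

    from gabbi import fixture


    class ConfigFixture(fixture.GabbiFixture):
        """Hypothetical per-file fixture managing a unique datastore."""

        def start_fixture(self):
            # Runs once, before any test in the YAML file.
            self.conf = create_unique_datastore()  # hypothetical

        def stop_fixture(self):
            # Runs once, after every test in the file has finished.
            destroy_datastore(self.conf)  # hypothetical

The YAML file opts in with a top-level fixtures list naming the class, and the driver is told where to find it via its fixture module argument.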


How to do a demo

  • requirements
    • a wsgi app or a running web site
    • gabbi
    • a test runner that supports the load_tests protocol (subunit is the usual in OpenStack land, under testr, under tox)

  [1] So many assumptions or requirements to do this. We need a framework that is actually well behaved when it comes to all the many ways in which a 4xx response can be generated, and that does content negotiation reasonably. Are de facto standards for query parameter handling followed?