GabbiPresentationNotes

20150514223156 cdent  
  • Introduction

    • Hi, my name is Chris Dent, I work with Red Hat. I've been working on OpenStack for a year and four days. Before Red Hat I spent ten or so years building web APIs. Today I'm going to talk about a tool I've created which can help us make our existing OpenStack APIs better and create new ones that are great.
  • API as Conversation

    • I started using the internet before there was a web. When the web came along I was excited: Here was a tool that was going to allow people to engage in an unprecedented level of information sharing.
    • When the idea of the REST API gained popularity I looked on it as an opportunity for humans to extend their engagement with all that information and make it more interactive.
    • To me the best APIs are those that allow unexpected people to do unexpected things. They expand ecosystems and opportunities.
    • If we want to allow that expansion, then our APIs need to follow the standards and grammar of the web. And if we want our APIs to follow the rules, we need tests.
  • Confusion OpenStack

    • OpenStack is hard and confusing and everyone seems to know it.
  • At least it is for me

    • Maybe I'm over generalizing, but my brain on OpenStack is a naked confused guy sitting staring at a dead clown. What is everything? How does it fit?
  • How do we get from confusion to balance?

    • While trying to learn all this stuff over the past year I reached a point of frustration where I needed some way to actively inspect what I was working on and trying to improve, without the constant head-scratching. I had to reach a balance where the pace of learning was greater than the growth of confusion.
    • What I needed to do was think about the problem in terms of the elegance, inclusiveness and balance of the web and not in terms of the apparent chaos of OpenStack.
    • I set about identifying what I perceived to be the problems and thinking about the solutions.
  • Some of the problems are:

    • The existing projects and tests are difficult to use as a learning aid. They are hard to inspect.
    • What requests are being made, what are they doing? I do not know. How can I think about the API if I can't see it in the code?
    • The tests, while clever, overuse subclassing and mixins to such an extent that it is frequently hard to tell where the action is actually happening.
    • Some tests are driven by purpose-designed client code. That means they do not behave like HTTP clients in the wild: they just do what we want them to do and are blasé about headers and response codes. This can mask faults.
    • Tests are supposed to make things better, not just confirm what we know and prevent regression.
  • And some of the characteristics of something better:

    • A tool that's useful for APIs would allow us to do our evaluations with lower verbosity, greater transparency, and a greater focus on HTTP, while still fitting in with how testing already works.
    • It would also speed up the writing, reading and running of tests and encourage learning about the service.
    • And if we're lucky we might be able to get it to help us do a bit of test driven development so that we can take an API first approach to creating services.
  • Here are some test samples

    • Both of these code chunks are lies: there is lots of background code that is not shown here.
    • But what matters is what is emphasized in the foreground. In a TestCase we see a bunch of Python and data formatting and it's not actually clear what the URL is. The gabbi test has its own style of noise but at least it is the request that is in the foreground.
    • Gabbi doesn't get all the way to solving the problems, but it helps.
  • The Demo

    • A gabbi test is a data structure, currently formed as YAML, that represents a list of single HTTP requests and expected responses.
    • I'm going to show some sample tests and explain the features. These tests are running against a live server I have already started.

  • simple.yaml

    • Unless you say otherwise, the default request method is GET and the test passes if the response code is 200.
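    • A minimal sketch of what simple.yaml might contain (the actual file isn't reproduced in these notes, so treat the URL as an assumption):

        tests:
          - name: get the root
            url: /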
  • strings.yaml

    • A response can be evaluated in a variety of ways with code called response handlers. Some are built into gabbi, or you can add your own. This one checks for strings in the response body.
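    • Roughly what strings.yaml might look like; the checked string is an assumption:

        tests:
          - name: look for a string in the home document
            url: /
            response_strings:
              - "Welcome"  # assumed body content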
  • deletefront.yaml

    • Here we declare the request method to be used and the expected status.
    • We also expect an Allow header in the response.
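    • A sketch of deletefront.yaml, assuming the front page refuses DELETE with a 405 and advertises what it does allow:

        tests:
          - name: delete the front page is not allowed
            url: /
            method: DELETE
            status: 405
            response_headers:
              allow: GET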
  • create-container.yaml

    • Any test file can declare defaults that will be used for all the tests in the file (unless overridden).
    • This verbose flag is what causes the output you see after I show the test.
    • Here we are PUTting a container owned by sam. When the value of the data key is not a string, it is transformed into JSON. This is an object with a single key, owner.
    • $LOCATION is replaced with the value of the location header of the previous response.
    • One of the response handlers uses JSONPath. If you're not familiar with it, it is basically a way to access members of a JSON object. This example says "give me the value of the owner key on the root object".
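    • A sketch of create-container.yaml showing file-wide defaults (including the verbose flag), a PUT with structured data, and a follow-up GET using $LOCATION and a JSONPath check; the paths and status codes are assumptions:

        defaults:
          verbose: True
          request_headers:
            content-type: application/json

        tests:
          - name: create a shed owned by sam
            url: /shed
            method: PUT
            status: 201
            data:
              owner: sam

          - name: the shed has the right owner
            url: $LOCATION
            response_json_paths:
              $.owner: sam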
  • objects.yaml

    • Another example of a JSONPath: this is listing the objects in the shed. There aren't any yet.
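    • Something like objects.yaml; the path and the shape of the response are assumptions:

        tests:
          - name: the shed is empty
            url: /shed/objects
            response_json_paths:
              $.objects: []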
  • create-object.yaml

    • This evaluation of the location header shows two features:
      • Wrapping a header value in forward slashes makes it a regular expression.
      • $SCHEME and $NETLOC are replaced with the real values associated with those names in a URL.
    • So what we get here is a way of checking the full URL in the location header and validating that it ends with something shaped like a UUID.
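    • A sketch of the header check in create-object.yaml; the path and the loose UUID pattern are assumptions:

        tests:
          - name: create an object in the shed
            url: /shed/objects
            method: POST
            status: 201
            response_headers:
              # wrapping the value in slashes makes it a regular expression;
              # $SCHEME and $NETLOC are filled in at run time
              location: /^$SCHEME:\/\/$NETLOC\/shed\/objects\/[0-9a-f-]{36}$/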
  • put-object.yaml

    • When sending data to the server you can reference a file in the same directory.
    • This looks like it may be a kitten, let's check.
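    • Something like put-object.yaml, where <@ pulls the request body from a file alongside the YAML; the paths and content type are assumptions:

        tests:
          - name: put a kitten in the shed
            url: /shed/objects/kitten
            method: PUT
            status: 201
            request_headers:
              content-type: image/jpeg
            data: <@kitten.jpg

          - name: is it really a kitten
            url: /shed/objects/kitten
            response_headers:
              content-type: image/jpeg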
  • list-objects.yaml

    • This final example shows an additional feature: You can use JSONPath to access response data from the previous test in the current test. In this example we use it to create a URL, but it can also be used pretty much anywhere, including request headers, request data, response headers, and response handlers.
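    • A sketch of list-objects.yaml using $RESPONSE to reach back into the previous test's JSON body; the objects/ref structure is an assumption:

        tests:
          - name: list the objects in the shed
            url: /shed/objects

          - name: get the first object from the listing
            url: $RESPONSE['$.objects[0].ref']
            status: 200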

As you can see there's a lot of flexibility. The server used as the backend here was developed using gabbi as the test harness. Tests are loaded using a file that looks like this. Note the intercept: no actual server is run. When we run the tests concurrently they are grouped according to the name of the file, so all the tests from one file will be in the same process.
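A hedged sketch of such a loader file, using the standard load_tests protocol: driver.build_tests and its intercept argument are gabbi's real interface, but the shedapp module and its create_app factory are stand-ins for the demo application.

    import os

    from gabbi import driver

    # shedapp is a stand-in for the demo application; intercept wants a
    # callable that returns a WSGI application, which is then exercised
    # in-process instead of over the network.
    from shedapp import app

    TESTS_DIR = 'gabbits'


    def load_tests(loader, tests, pattern):
        """Build gabbi test suites from the YAML files in TESTS_DIR."""
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        return driver.build_tests(test_dir, loader, host=None,
                                  intercept=app.create_app)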

  • Architecture Overview

    • This all works because under the covers gabbi generates ordered sets of unittest TestCases in a special class of TestSuite. Fixtures are context managers that nest around the suite.
    • Each aspect of a response is evaluated by one of several handlers. As I've already mentioned the list of handlers can be extended with your own code to evaluate the response however you like.
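    • As a hedged sketch of what a custom handler can look like (the base class import path has moved between gabbi versions, and this lowercase-strings handler is hypothetical):

        from gabbi.handlers import base


        class LowerStringsHandler(base.ResponseHandler):
            """Hypothetical: check strings in the body, ignoring case.

            In a test file this appears as response_lower_strings.
            """

            test_key_suffix = 'lower_strings'
            test_key_value = []

            def action(self, test, item, value=None):
                # test.output holds the decoded response body
                test.assertIn(item.lower(), test.output.lower())

    • Custom handlers like this are handed to driver.build_tests via its response_handlers argument.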
  • Fun Patterns

    • Being fun is at the heart of why gabbi exists. It's trying to make something that is otherwise tedious easier. It can free you to experiment and engage the API in a conversation. If you play dumb and do the unexpected, you will be behaving like client code in the wild and you will break things and you will find bugs. Fixing those bugs is what it's all about.
  • Required Patterns

    • In the gabbi tests that have been created thus far for OpenStack projects it's common to need a fixture which sets the basic configuration for the API server, removes the Keystone middleware, and establishes an appropriate and unique connection to a storage layer. The same fixture can, when complete, clean up temp files and drop databases.
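    • A minimal sketch of such a fixture, assuming gabbi's GabbiFixture base class; the temp directory stands in for the real config and database work:

        import shutil
        import tempfile

        from gabbi import fixture


        class ConfigFixture(fixture.GabbiFixture):
            """Stand up per-suite state before any test runs."""

            def start_fixture(self):
                # A real project would set service configuration, strip
                # the keystone middleware, and create a unique database.
                self.tempdir = tempfile.mkdtemp()

            def stop_fixture(self):
                # Clean up temp files and, in a real project, drop the
                # database created above.
                shutil.rmtree(self.tempdir)

    • Test files opt in with a top-level fixtures list naming the fixture class.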
  • Anti Patterns

    • Writing tests with gabbi is so easy that there is a temptation to use it to test everything. This is probably a bad idea. You want to use it to test the API layer, not the storage layer. We should be able to assume that the storage layer is correct. Testing too much will make the tests hard to read and hard to use as an introduction to the API.
    • Similarly, avoid the temptation to validate everything about a response in one request. You don't want to replicate the entire body of a response in the test. The data that created the response ought to have been tested elsewhere. What we want to validate is that the response is there, is sane, and is couched in proper HTTP. A test should be relatively short and sweet.
  • Contributing With It

    • Gabbi is a great way to learn any API. It is easy to make repeatable experiments.
    • If you need to create a new endpoint, you can design the behavior in gabbi tests and then write the code to get those tests to pass.
    • The API working group is producing a lot of guidelines. Writing gabbi tests to validate those guidelines can identify areas that may need improvement.
  • Contributing To It

    • There's a lot to do to make gabbi great.
    • Like everything else, it could do with improved documentation, notably tutorials from different angles.
    • Additional response handlers, for processing responses more effectively, would be handy, as would ways to parse something besides YAML for input.
    • Most importantly, though, it needs input from other communities of users. The OpenStack universe has its own unique characteristics and goals. There are worlds out there with different characteristics and goals that can help evolve and improve gabbi.
  • Thanks

    • And that's all I've got to say about that. If people want to know more please find me in the hallways.
    • Or if you want to work together on adding gabbi to your projects, let's meet up on Friday afternoon.