GabbiCeiloPlan

2015-02-06 12:54 cdent

This page describes the high level plan for adding gabbi tests to both Ceilometer and Gnocchi.


The overall goal is to provide tests of the respective APIs that are easier for new or casual inspectors to discover and read without needing to sort through and understand a mass of base classes, mocks and associated Python cruft. Instead, each test file is an ordered list of tests that focus on the HTTP interaction and little else.

This goal was initially specified in the Declarative HTTP API Tests spec, which says:

This spec proposes a suite of "grey-box" tests for the Ceilometer HTTP API that are driven by one or several human-readable text files that declare URL requests alongside expected responses. These tests will provide end to end tests of API endpoints that in addition to confirming the functionality of the API will provide a cogent and readable overview of the HTTP API.
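As a concrete illustration of what such a declarative file looks like, here is a minimal sketch in gabbi's YAML format. The `url`, `method`, `status` and `response_strings` keys are gabbi's documented test schema; the specific endpoint and expected body are illustrative, not taken from the actual Ceilometer test suite.

```yaml
# A small, ordered sequence of HTTP requests with expected
# responses, readable top to bottom.
tests:

- name: list meters when none exist
  url: /v2/meters
  method: GET
  status: 200
  response_strings:
  - "[]"

- name: reject an unknown endpoint
  url: /v2/nothing-here
  method: GET
  status: 404
```

Because the tests run in order, later tests can build on state created by earlier ones, which is what gives a file its narrative quality.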

Gabbi was created after the spec. Initially the idea was to build something within the Ceilometer tree, but unwinding the framing needed to get dynamically generated test cases to work with subunit and testrepository was sufficiently hairy that extracting the problem into a separate tool made more sense. This has led to a generic tool which has been accepted into OpenStack global-requirements and should be useful in other contexts.


Because gabbi is a new tool, using it in real testing situations has exposed missing features and bugs. This is expected to continue as there is more use. Therefore, rather than delay adding any tests to target projects until there is a perfect tool, it is better to add tests in a phased fashion. The current phases are as follows, subject to change with experience:

  • Phase 0: In a throwaway branch do some exploration to determine what fixtures are necessary to provide the minimum configuration and storage to allow the API to respond to requests effectively.
  • Phase 1: On a new branch implement those fixtures and a basic confirmatory test or tests of the API existing and being healthy.
  • Phase 2: After phase 1 merges (or in patchsets that depend on phase 1), create multiple patches, each exploring one of the major classes of resources presented by the API. Traverse those resources, performing simple CRUD on them just to show the basics.
  • Phase 3: Additional patchsets which test complex queries, error conditions, authZ handling and whatever edge cases we can think of.
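A phase 1 confirmatory file might look something like the sketch below. The top-level `fixtures` key is gabbi's mechanism for naming Python fixture classes that set up configuration and storage before the tests run; the fixture names here are placeholders, not the actual Ceilometer or Gnocchi fixtures.

```yaml
# Phase 1 sketch: confirm the API exists and is healthy.
# ConfigFixture and StorageFixture are hypothetical names for the
# fixtures phase 0 identifies as necessary.
fixtures:
- ConfigFixture
- StorageFixture

tests:

- name: the api responds at the root
  url: /
  method: GET
  status: 200
```

Keeping phase 1 this small makes the fixture work reviewable on its own, before any real resource coverage lands.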

Some of these phases have started for both Ceilometer and Gnocchi:

  • Ceilometer phase 0 shook out a lot of issues but led to a reasonable phase 1. Phase 2 has only just started and is currently pitched as a work in progress to garner some discussion on how best to deal with warts in the pecan and wsme tools used to mount the API.
  • Gnocchi phase 0 was extremely productive: It led to a very useful refactoring of the way fixtures and wsgi-interception is handled. No further progress on Gnocchi has yet been made.


The plan of action is to fill in the blanks on the phases while simultaneously fixing bugs and misfeatures found in gabbi, pecan and wsme.

  • Ceilometer
    • Phase 0
    • Phase 1
    • Phase 2
      • alarms
      • meters
      • resources
      • events
    • Phase 3
  • Gnocchi
    • Phase 0 (just need to confirm existing WIP is aligned with expectations)
    • Phase 1
    • Phase 2
      • figure out what the resource chunks are
    • Phase 3

Phase 4

Find interested folk to do it to the rest of the OpenStack APIs.

Notes & Comments

One of the most interesting aspects of writing these tests, no matter the phase, is the way the process exposes what the API is actually doing. I find it is best to write them with as little foreknowledge as possible: figure out what the endpoints are, then call methods on them and inspect the results using intentionally failing tests. It's incredibly illuminating, far more than any documentation or inspection of the code (which is, for the most part, unreadable given the amount of action at a distance that is happening).
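An intentionally failing exploratory test of the kind described above might look like this sketch: declare an expectation that cannot be right, run the test, and read the real status and body out of the failure output. The endpoint and payload are illustrative.

```yaml
tests:

# Deliberately wrong expectation: 599 will never match, so the
# failure output shows what the API really returned. Replace the
# assertions once the actual behaviour is known.
- name: what does posting an empty alarm return
  url: /v2/alarms
  method: POST
  request_headers:
    content-type: application/json
  data: '{}'
  status: 599
```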

For the Ceilometer tests one trick I've used is parsing the WADL files used for the api-site to generate a YAML file containing the endpoints. This provides an initial suite of failing tests from which I can then fill in the blanks. It's likely that pecan and other frameworks (e.g. flask) can be inspected in a similar fashion to generate endpoints.
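The WADL-to-YAML trick can be sketched with nothing but the standard library. This is not the actual script used for the Ceilometer tests; the WADL snippet is a made-up miniature of the api-site files, and the emitted stubs deliberately assert an impossible status so every generated test starts out failing.

```python
# Sketch: walk a WADL description and emit stub gabbi tests that
# fail until filled in with real expectations.
import xml.etree.ElementTree as ET

WADL_NS = '{http://wadl.dev.java.net/2009/02}'

# A tiny illustrative WADL document, not a real api-site file.
SAMPLE_WADL = """<?xml version="1.0"?>
<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="/v2">
    <resource path="meters">
      <method name="GET"/>
      <resource path="{meter_name}">
        <method name="GET"/>
        <method name="POST"/>
      </resource>
    </resource>
  </resources>
</application>
"""


def walk(resource, prefix):
    """Yield (method, path) for every method under a resource tree."""
    path = prefix + '/' + resource.get('path')
    for method in resource.findall(WADL_NS + 'method'):
        yield method.get('name'), path
    for child in resource.findall(WADL_NS + 'resource'):
        for pair in walk(child, path):
            yield pair


def wadl_to_yaml(wadl_text):
    """Render every WADL endpoint as a failing gabbi test stub."""
    root = ET.fromstring(wadl_text)
    lines = ['tests:', '']
    for resources in root.findall(WADL_NS + 'resources'):
        base = resources.get('base', '')
        for resource in resources.findall(WADL_NS + 'resource'):
            for method, path in walk(resource, base):
                lines.append('- name: %s %s' % (method, path))
                lines.append('  url: %s' % path)
                lines.append('  method: %s' % method)
                # 599 never matches, so the failure output shows
                # what the API actually returned.
                lines.append('  status: 599')
                lines.append('')
    return '\n'.join(lines)


print(wadl_to_yaml(SAMPLE_WADL))
```

The same idea should transfer to pecan or flask: anything that can enumerate its routes can emit a suite of failing stubs to fill in.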