I'd like to start gathering some data on test validity in ceilometer, aodh and gnocchi. It's pretty clear that coverage is not great, and where coverage appears to be good, the cyclomatic complexity of the code means the tests are unlikely to be good enough. Where methods are too long as well, tests will be hard to get right.
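As a rough illustration of the kind of measurement involved, here is a minimal sketch of a McCabe-style cyclomatic complexity counter using Python's `ast` module (a real survey would use a proper tool such as radon or mccabe; the sample function and threshold here are made up for demonstration):

```python
import ast

# Node types that add a decision point to the control-flow graph.
# (A simplification of what radon/mccabe count.)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source, func_name):
    """Rough McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            return 1 + sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
    raise ValueError('function %s not found' % func_name)

# Hypothetical sample, not from the ceilometer codebase.
SAMPLE = """
def classify(n):
    if n < 0:
        return 'negative'
    for d in (2, 3, 5):
        if n % d == 0:
            return 'divisible by %d' % d
    return 'other'
"""

# 1 + if + for + if = 4 decision paths to cover in tests.
print(cyclomatic_complexity(SAMPLE, 'classify'))
```

The point of gathering this number per method is that each extra decision path is another case tests need to exercise; high complexity with apparently good line coverage is exactly the misleading situation described above.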
Recent activities:
- Failed to get rdo-manager working on Fedora 22 (libvirt too new)
- General care and feeding of ceilometer + devstack + grenade in the gate, most recently this bug with disable_service fixed by this change. In general the move to a plugin has been mostly smooth. The few problems it has revealed have been real problems in ceilometer, devstack or grenade, including breaking and then fixing neutron's gate.
- Reviewing coverage and complexity in the ceilo and related code to see how far away we are from having reliable tests, in prep for the "we need to do better at testing" session at summit.
- Trying to work with Ilya on influxdb but not making as much progress as desired.
- Gabbi refactorings and 2.0 prep.
- Much reviewing.
- gnocchi benchmark watching (what about swift and other scenarios?)
- disappointment over inability to attend to cross project things (e.g. gordc abandoned a cross project proposal for dealing with resource id cleanification because he couldn't moderate it; I would have, but I didn't know about it)
Queries:
- Downstream drivers to represent at summit?
Had a dig around in the nova code to learn about how it does api tests. Didn't get too far, but one good starting point is nova/tests/functional/test_servers.py. There's a ton of indirection and jiggery pokery that gabbi could clear away.
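For contrast, the kind of declarative test gabbi enables looks roughly like this (an illustrative sketch, not taken from nova or an existing gabbi suite; the paths and payload are hypothetical):

```yaml
tests:
- name: create a server
  url: /servers
  method: POST
  request_headers:
      content-type: application/json
  data:
      server:
          name: test-server
  status: 202

- name: list servers
  url: /servers
  method: GET
  status: 200
  response_json_paths:
      $.servers[0].name: test-server
```

Each entry is a plain HTTP request with declared expectations on status and response content, which is the indirection-free shape the nova functional tests could move toward.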