I need to write faster tests

This is not ideal:

----------------------------------------------------------------------
Ran 84 tests in 370.741s

OK

My tests take so long for two reasons. First of all, most of them use twill to simulate a browser walking through a version of the web app running on localhost. Second, my test code reads like a novel. Here’s an example, slightly embellished to make a point:

setup: connect to the database and find or create a hospital and an employee named “Nurse Ratched.” Find or create a bunch of open shifts in the emergency department. Find or create another nurse named Lunchlady Doris*.

test: Nurse Ratched wants to see what shifts are available to be picked up. So she logs into the app. Then she navigates to the “open shifts” screen, and then filters down to shifts in the emergency department over the next seven days. Then she wants to sign up for the shift starting at midnight on Saturday night. So, she clicks the “sign up” icon. The system verifies that this shift + her already-scheduled hours won’t push her into overtime, and she has no other flags on her account, so she is automatically scheduled.

Then the system sends her a confirmation message, which, according to her preferences, goes to her email address. Then the system queues an SMS message to be delivered an hour before the shift starts in order to remind her (also according to her preferences).

Finally, the test verifies that the shift is no longer listed as available by simulating Lunchlady Doris logging in and checking that same “open shifts” screen.

If everything checks out, print a dot, and move on to the next chapter.

teardown: Unassign Nurse Ratched from the shift she picked up.
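
In twill terms, that story comes out roughly like the sketch below. Every URL, form field, and link name here is invented for illustration; the real test is longer and checks more along the way.

    # Rough sketch of the end-to-end walk above, using twill's command API.
    # All URLs, form fields, and link texts are made up.
    from twill.commands import go, code, find, notfind, fv, submit, follow

    def test_nurse_picks_up_open_shift():
        # Nurse Ratched logs in.
        go("http://localhost:8080/login")
        fv("1", "username", "nratched")
        fv("1", "password", "secret")
        submit()
        code(200)

        # She filters the open-shifts screen to the ER over the next seven days.
        follow("Open shifts")
        fv("1", "department", "Emergency")
        fv("1", "days", "7")
        submit()
        find("Saturday 12:00 AM")

        # She signs up; the app checks her hours and schedules her.
        follow("sign up")
        find("You are scheduled")

        # Lunchlady Doris logs in and should no longer see that shift.
        go("http://localhost:8080/logout")
        go("http://localhost:8080/login")
        fv("1", "username", "ldoris")
        fv("1", "password", "secret")
        submit()
        follow("Open shifts")
        notfind("Saturday 12:00 AM")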

I think twill itself is fine; it’s the marching through a long series of pages that is problematic. I do this to set up conditions for testing later on, and as a side benefit I verify that everything checks out along the way.

On the plus side, I’m confident that all of these components do in fact play nice together. I don’t think it’s safe to abandon end-to-end testing like this, but I would like not to depend on it every time I want to make some slight change to a component. It would be nice to run these right before a commit, but only run some super-fast tests after each save.
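
If these ran under nose, one way to get that split would be to tag the slow end-to-end tests and leave them out of the everyday run. A minimal sketch, assuming nose’s attrib plugin (test names invented):

    from nose.plugins.attrib import attr

    @attr('slow')
    def test_full_shift_pickup_story():
        pass  # the twill walk sketched above goes here

    def test_overtime_calculation():
        pass  # split-second logic check: no browser, no database

Then "nosetests -a '!slow'" after each save, and a plain "nosetests" right before a commit.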


[*] People who understand this reference should reevaluate their priorities in life.

10 thoughts on “I need to write faster tests”

  1. I've found that two things you mentioned here tend to be problematic for functional testing.

    First is using an actual interface to set up the data you need to test with. Although this hides the implementation of how your data gets stored, which is nice, it is often slower than necessary. You just need data in the db, right? I actually created a module called fixture to load up just the data for such a test (sketch below): http://farmdev.com/projects/fixture/ 🙂

    Second, it's not a good idea to insert data in one test that a later test depends on. This is because your tests are no longer modular so you couldn't, for example, re-run only one test that is failing or use a grid solution (like http://selenium-grid.openqa.org/ or like http://code.google.com/p/python-nose/source/bro… — running tests in parallel with nose).
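
    The DataSet style looks roughly like this (table and column names are invented here, and the loader setup is left out – see the fixture docs for hooking it up to your ORM):

        from fixture import DataSet

        class EmployeeData(DataSet):
            class nurse_ratched:
                name = "Nurse Ratched"
            class lunchlady_doris:
                name = "Lunchlady Doris"

        class ShiftData(DataSet):
            class er_saturday_midnight:
                department = "Emergency"
                assignee = None

        # A Fixture loader matched to your ORM pushes these DataSets straight
        # into the database around each test, then cleans them up again.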

  2. Hi Kumar, thanks for the comment! I saw your fixture project earlier today on planetpython. Right now, my setup code doesn't go through the app to set things up. It makes a database connection directly and monkeys around.

    All my tests use the same setup and teardown, so each test starts in the same state. That still might not let me run tests in parallel, however, since they're all working on the same user objects. If I could run all these in parallel, that would likely cut the time way down. Maybe I could change my setup code so that each test gets objects created with names just for them (rough sketch at the end of this comment).

    I'll be the first one to admit that I have a lot more to learn in terms of testing. Right now, I feel a lot of frustration waiting for these things to finish execution.

    I'll check out fixture over the weekend. Thanks for the tips.
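
    For the per-test objects, I'm picturing something like this (the helper functions are made up; the point is just the unique names):

        import uuid
        from unittest import TestCase

        class ShiftPickupTest(TestCase):
            def setUp(self):
                # Tag this test's rows so parallel runs don't fight over the same objects.
                self.tag = uuid.uuid4().hex[:8]
                self.nurse = create_employee(name="Nurse Ratched " + self.tag)        # hypothetical helper
                self.shift = create_open_shift(department="Emergency", tag=self.tag)  # hypothetical helper

            def tearDown(self):
                delete_rows_tagged(self.tag)  # hypothetical helper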

  3. In the main application that I work on, we have both unit (py.test) and functional (Selenium) tests.

    The unit tests are quite fast and do all of the low-level library testing and stuff. I run them often.

    The Selenium tests take longer, but they really work the system over.

    I think that is something that you can't avoid, really. If you want to test your application like users are going to use it, it is bound to be slower, even with a computer at the wheel.

  4. +1 to separating unit and integration tests.

    Integration tests *should* use a real database (etc) – but will take longer and longer as your tests cover more things (although there are obviously techniques you can use to minimize this).

    Unit tests should mock out everything they can in terms of contact with the 'outside world' – as a result they can run much faster (see the sketch below).

    You can run your functional tests in a loop on your integration machine (using a fresh checkout each time and emailing you the failures – use something like CC.NET or just knock up an integration script tied to a post-commit hook yourself).

    You can run your unit tests whenever you make a change (preferably before to see the test fail and then afterwards to see it pass of course).
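
    For example, the unit under test can take its contact with the outside world as a dependency, and the unit test can hand it a fake – a rough sketch (all class and method names invented):

        class FakeNotifier(object):
            """Stands in for the real email/SMS notifier; records instead of sending."""
            def __init__(self):
                self.sent = []
            def send(self, recipient, message):
                self.sent.append((recipient, message))

        def test_signup_sends_confirmation():
            notifier = FakeNotifier()
            scheduler = Scheduler(notifier=notifier)  # hypothetical class under test
            scheduler.sign_up(nurse="Nurse Ratched", shift="ER Saturday midnight")
            assert len(notifier.sent) == 1  # no SMTP server, no SMS gateway touched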

  5. I like to think of tests in terms of cost/benefit.

    Longer (integration) tests cost more because they eat time, but in theory have greater benefit, because they reflect actual use more closely. ( http://googletesting.blogspot.com/2008/03/cost-…).

    Teeny unit tests are cheap, but don't verify as much as the longer integration tests.

    So running cheap tests all the time is ok, just like paying for electricity in your house all the time is ok. It doesn't cost much.

    Expensive tests can run less often but give better payoff, just like only buying a car every couple of years.

  6. Yeah, I think maybe there's nothing wrong with these narrative-based tests, but I would like to separate them from my other split-second tests.

  7. The other thing that unit tests won't catch is when one unit alters its interface but the second unit doesn't find out about it.

    Incidentally, I don't agree with the idea that interfaces shouldn't change. If I come up with a better way of doing something, I'm going to do it.


Comments are closed.