Running RSpec acceptance tests in TeamCity

At work we use TeamCity as our CI service to automate the build and deployment of our software to a number of pre-production environments for testing and evaluation. Since we’re already bottling up all the build and deployment steps for our software, I figured we could piggyback on this process and kick off a simple login test. It seems faster and easier to have an automated test tell us whether login works than to wait until someone stumbles across a broken environment. After all, who cares if a server has the latest code if you can’t log in to use it?

Note: I’m calling the test that attempts to log in to our system a sanity test. It could just as easily be described as a smoke test.

The strategy looked something like:

  • Make sure tests are segmented (at least one ‘sanity’ test)
  • Hook up tests to Jenkins as a proof of concept
  • Once the configuration works in Jenkins (and I’ve figured out any additional prerequisites), reconfigure the tests in TeamCity to run the “sanity” tests (selected by a tag)
  • If sanity tests prove stable, add additional tests or segments

Segmenting tests is a great way to run certain parts of the test suite at a time. Initially this would be a single login test, since logging in is a precursor to anything else we’d want to do in the app. For our test framework, RSpec, this was done by specifying a ‘sanity’ tag.
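For anyone unfamiliar with RSpec tags, here’s a rough sketch of what a tagged login spec could look like. The file name, field labels and environment variables are made up for illustration, and I’m assuming a Capybara-style driver for the browser interaction:

```ruby
# spec/login_spec.rb -- hypothetical example, not our actual spec.
require 'spec_helper'

# The :sanity symbol adds sanity: true to the example group's metadata,
# which is what lets us run just these specs later.
describe 'Login', :sanity do
  it 'lets a valid user sign in' do
    visit '/login'
    fill_in 'Email',    with: ENV.fetch('TEST_USER_EMAIL')
    fill_in 'Password', with: ENV.fetch('TEST_USER_PASSWORD')
    click_button 'Sign in'
    expect(page).to have_content 'Dashboard'
  end
end
```

Running `rspec --tag sanity` then picks up only the specs carrying that tag.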

There didn’t appear to be any guidelines or instructions on the interwebs on how you might configure TeamCity to run RSpec tests, but I found them for another CI server, Jenkins. Unlike TeamCity, Jenkins is super easy to set up and configure: download the war file from the Jenkins homepage, launch it from the terminal and create a job! Our test code is written in Ruby, which means I can kick off our tests from the command line using a rake file. Once a job was created and the command line details were properly sorted, I was running tests with the click of a button! (Reporting and other improvements took a little longer.) Note, we don’t use any special RSpec runner for this, just a regular old command line, although we do have a Gemfile with all relevant dependencies listed.
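For completeness, the rake side of that can be as small as a single task built on RSpec’s bundled rake task. This is only a sketch; the task name and options are my own choices rather than what our rake file necessarily uses:

```ruby
# Rakefile -- minimal sketch of a task that runs only the :sanity specs.
require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:sanity) do |t|
  t.pattern    = 'spec/**/*_spec.rb'
  t.rspec_opts = '--tag sanity --format documentation'
end

task default: :sanity
```

With the Gemfile in place, `bundle exec rake sanity` runs the tagged specs on any machine with Ruby and Bundler installed.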

Configuring TeamCity

Since I couldn’t find any guidelines on how to configure TeamCity to run RSpec acceptance tests, I’m hoping this helps. We already had the server running, so this assumes all you need to do is add your tests to an existing setup. After some trial and error, here’s how we got it to work:

  1. Created a new build configuration to run the sanity tests
  2. Added version control settings for the automation repo
  3. Within the build configuration added three steps (roughly sketched after this list):
    1. Install Bundler. This is a command line step that runs a custom script when the previous step finishes. It also handles the configuration information for Sauce Labs (our Selenium grid provider) and covers the first prerequisite.
    2. Bundle Install. Also a command line step running a custom script; the second prerequisite.
    3. Run Tests. The final command line step, using rake and my test configuration settings to run the actual tests.
  4. Added a build trigger to launch the sanity tests after successful completion of the previous step (the deploy to the server)
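Roughly speaking, the three command line steps boil down to something like this. It’s only a sketch; our actual custom scripts include the Sauce Labs configuration and other details I’ve left out:

```sh
# Step 1: Install Bundler (Sauce Labs credentials are supplied separately,
#         e.g. as build parameters / environment variables)
gem install bundler

# Step 2: Bundle Install
bundle install

# Step 3: Run Tests
bundle exec rake sanity
```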

After this was all put in, I manually triggered the configuration to see how well the process worked. There were quite a few hiccups along the way. One of the more interesting problems was finding out the TeamCity agent machine had outdated versions of Ruby and RubyGems. The version of RubyGems was so out of date it couldn’t be upgraded; it had to be re-installed, which is never much fun over an RDP session.

Once the execution went well, I triggered a failure. When the tests fail they print “tests failed” to the build log. Unfortunately the server didn’t seem to recognize when a failure occurred, so I went back and added a specific “failure condition” (another configuration option) looking for the text “tests failed” which, if found, marks the build as failed. Simple enough!
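If you’re wondering where a string like “tests failed” might come from, RSpec’s rake task has a failure_message option that prints a message when specs fail. Building on the earlier Rakefile sketch, it could look like this (again an illustration, not necessarily our exact script):

```ruby
# Rakefile -- print a marker string the TeamCity failure condition can match.
RSpec::Core::RakeTask.new(:sanity) do |t|
  t.rspec_opts      = '--tag sanity'
  t.failure_message = 'tests failed' # matched by the TeamCity failure condition
end
```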

What’s next?

We’ve been running this sanity test for a few months now and it’s quite valuable to know when an environment is in an unusable state, yet I think visibility is still a challenge. Although the failures report directly into a Slack channel, I’m not sure how quickly they are noticed and/or whether the information reported in the failed test is useful.

A few articles I’ve read suggest using the CI server to launch integration tests instead of the UI-level acceptance tests we are running. I think what we are doing is valuable and I’d like to expand it. What additional sanity tests or test segments should we add to this process? Are there more or better ways to do what we’re doing now? Please share your experiences!
