I'm used to running Selenium tests against Firefox locally (OS X Yosemite and now macOS Sierra), both from the command line using RSpec and from a REPL like IRB or Pry. I don't use Firefox often, so when I started having issues I couldn't remember how long it had been since things last worked. The problem was pretty obvious: the browser would launch, wouldn't accept any commands or respond to Selenium, and then would error out with the message:
Selenium::WebDriver::Error::WebDriverError: unable to obtain stable firefox connection in 60 seconds (127.0.0.1:7055) from /usr/local/rvm/gems/ruby-2.1.2/gems/selenium-webdriver-2.53.0/lib/selenium/webdriver/firefox/launcher.rb:90:in `connect_until_stable'
This occurred for Selenium-WebDriver versions 2.53.0 and 2.53.4. It also seemed to occur for Firefox versions 47, 48, and 49.
I'm not the first to post about these changes and the lack of support for Firefox 47+. I probably deserve the error message for not paying more attention to the upcoming changes, and I certainly look forward to implementing the new MarionetteDriver.
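For anyone stuck on the same error, here is a setup sketch of the interim fix, offered as an assumption since I haven't finished the migration myself: point selenium-webdriver at Firefox's new Marionette-based driver. It requires the geckodriver executable on your PATH and a real Firefox install, so treat it as a setup fragment rather than something to paste blindly:

```ruby
require 'selenium-webdriver'

# Assumption: selenium-webdriver >= 2.53 and geckodriver available on PATH.
# Turning on the marionette capability routes commands through geckodriver
# instead of the legacy extension-based FirefoxDriver that broke in FF 47+.
caps = Selenium::WebDriver::Remote::Capabilities.firefox(marionette: true)
driver = Selenium::WebDriver.for :firefox, desired_capabilities: caps

driver.get 'http://the-internet.herokuapp.com'
driver.quit
```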
tl;dr If you’ve ever wanted to learn Selenium but didn’t know where to start, The Selenium GuideBook is the place (doesn’t matter which edition you use, it’ll be good).
The challenge of trying to learn Selenium WebDriver (usually just called Selenium) or any relatively new technology is that the information is either scattered across the web, out of date (many of the books I found), or of poor quality. With Selenium, too many of the tutorials available to beginners had you using the Selenium IDE, which is a really poor option for writing maintainable and reusable test code. (See my previous post's video for more on this.) I've walked out on conference workshops that sought to teach people to use the Selenium IDE to start their automation efforts. It wasn't for me. I was going to do it right; I just had to figure out what that meant and where to start.
From the start I knew I wanted to learn about GUI test automation, and more specifically Selenium WebDriver. I had tried WATIR (Web Application Testing in Ruby) and a few other tools, but Selenium was open source and quickly becoming the go-to standard for automating human interaction with web browsers. For me, it was and still is the obvious choice.
Naturally I went searching the web for tutorials and examples, and stumbled across several, including Sauce Labs' bootcamp written by someone named Dave Haeffner. After struggling through the bootcamp series (and finding some bugs in the process) I found Dave also produced a tip series called Elemental Selenium. I signed up for the weekly Ruby newsletter tips and went through many of them. Satisfied that Dave was worth learning from (good quality, relevant code examples), I decided it was time to try his book, The Selenium GuideBook. I knew going in that I was going to be the person maintaining the test suite, and since I was more or less comfortable with Ruby, I was happy The Selenium GuideBook came in that language!
The package itself contains a lot of great information and a number of materials:
The Selenium GuideBook; the Ruby edition is roughly 100 pages
Ruby code examples broken out by chapter
Elemental Selenium Tips eBook in Ruby
The Automated Visual Testing Guidebook
The first time I went through the book and code examples, it seemed redundant to have different code for each chapter and example. It was only on a second pass, trying to apply and understand the differences, that I began to appreciate seeing the code change chapter by chapter and example by example. The code examples all target an open source application Dave created called The Internet. It seems simple enough, but many of the other books and materials I went through either used Google or some badly written, hurried example as their target.
Despite being less than 100 pages the Selenium GuideBook covers:
Defining a test strategy for your automation goals
Using page objects and base page objects to keep code clean
Locator strategies and tools
(Relying on good locators seemed like a smart way to design tests; I wanted to avoid record-and-playback tools and the poor locator strategies they often employ.)
How to write good acceptance tests
Writing reusable test code
Running on browsers locally and then in the cloud
Running tests in parallel
Taking your newly built tests, tagging them and getting them loaded into a CI server.
It's quite literally the whole package. In the preface Dave says the book is not full and comprehensive; it's more of a distilled, actionable guide. The book really shows you how to start thinking about and building a test suite. It's up to the reader to take what they learned and apply it to their own application. That's the fun part of the Elemental Selenium tips as well.
Applying the Book and Examples
After I had gone through all of the chapters and examples once, I went back through the chapters and examples relevant to me, doing the following:
Start with some very simple login tests.
The book starts out this way as well: writing tests that are not DRY or reusable but that eventually get there.
Continue through the code examples as they get a little more complicated, applying each to my own application.
As I built out tests and started to see commonly repeated patterns, abstract the repeated code into relevant page objects, eventually arriving at a base page object.
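The page-object shape that falls out of that process can be sketched in plain Ruby. Every name here (BasePage, LoginPage, the locators) is illustrative rather than lifted from the book's code:

```ruby
# Base page object: holds the driver and the wrappers every page shares,
# so specs never talk to the Selenium driver directly.
class BasePage
  def initialize(driver)
    @driver = driver
  end

  def visit(url)
    @driver.get(url)
  end

  def type(locator, text)
    @driver.find_element(locator).send_keys(text)
  end

  def click(locator)
    @driver.find_element(locator).click
  end
end

# A concrete page object: locators live in one place, and the page
# exposes behavior ("log in") rather than raw element fiddling.
class LoginPage < BasePage
  USERNAME = { id: 'username' }.freeze
  PASSWORD = { id: 'password' }.freeze
  SUBMIT   = { css: 'button[type="submit"]' }.freeze

  def login_with(username, password)
    type(USERNAME, username)
    type(PASSWORD, password)
    click(SUBMIT)
  end
end
```

Specs then talk only to LoginPage, so when a locator changes it changes in exactly one place.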
In hindsight, the hardest part of applying the book was trying to understand and apply a locator strategy within our application. While The Internet target application is great, it's also a bit simplistic: good for teaching, but it leaves a gap to bridge between the sample application and your own. Our single-page .NET application was far more complicated, and it took several attempts before I understood how my own strategy would work.
The transfer problem is always difficult: how do you take what you learned and apply it to a slightly more sophisticated problem? That's a hard problem in general, not really a criticism of this book. It's worth noting that whenever I had questions about what was written in the book, found a bug or two, or got stuck, I could email Dave and within a week or so get a helpful response back.
With the lessons in this book and the Elemental Selenium Tips I was, through some focused time and lots of iterations, able to get a fairly good set of automated acceptance tests running against our application.
In other words, I highly recommend you buy the book. It's slightly more expensive than similar books about Selenium, but it's far more effective. You are also directly supporting the author, and with free updates and email tech support I think it's well worth the cost.
Don’t believe me? Watch this video of Dave giving an overview of the GuideBook:
At work we use TeamCity as our CI server to automate the build and deployment of our software to a number of pre-production environments for testing and evaluation. Since we're already bottling up all the build and deployment steps for our software, I figured we could piggyback on this process and kick off a simple login test. It seems faster and easier to have an automated test tell us this than to wait until someone stumbles across a broken environment. After all, who cares if a server has the latest code if you can't log in to use it?
Note: I'm calling the test that attempts to log in to our system a sanity test. It could just as easily be described as a smoke test.
The strategy looked something like:
Make sure tests are segmented (at least one ‘sanity’ test)
Hook up tests to Jenkins as a proof of concept
Once the configuration works in Jenkins (and I’ve figured out any additional pre-reqs), reconfigure tests in TeamCity to run “sanity tests” (which is a tag)
If sanity tests prove stable, add additional tests or segments
Segmenting tests is a great way to run certain parts of the test suite at a time. Initially this would be a single login test, since logging in is a precursor to anything else we'd want to do in the app. For our test framework, RSpec, this was done by specifying a :sanity tag.
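In RSpec the tag is just metadata on an example or group. A minimal sketch; the page objects and assertion here are illustrative, not our real suite:

```ruby
# spec/login_spec.rb (sketch) -- :sanity is metadata RSpec can filter on
describe 'Login', :sanity do
  it 'grants access with valid credentials' do
    # @login and @secure are hypothetical page objects set up elsewhere
    @login.with('tomsmith', 'SuperSecretPassword!')
    expect(@secure).to be_displayed
  end
end
```

Running just the tagged slice is then a matter of `bundle exec rspec --tag sanity`.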
There didn't appear to be any guidelines or instructions on the web for configuring TeamCity to run RSpec tests, but I found them for another CI server, Jenkins. Unlike TeamCity, Jenkins is super easy to set up and configure: download the war file from the Jenkins homepage, launch it from the terminal, and create a job! Our test code is written in Ruby, which means I can kick off our tests from the command line using a Rakefile. Once a job was created and the command-line details were properly sorted, I was running tests with the click of a button! (Reporting and other improvements took a little longer.) Note: we don't use any special RSpec runner for this, just a regular old command line, although we do have a Gemfile with all relevant dependencies listed.
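That Rakefile boils down to something like the following configuration sketch (the task names are my assumptions; the rake task class itself ships with rspec-core):

```ruby
# Rakefile (sketch) -- wrap the suite in rake tasks so CI only ever
# shells out to `rake`, never to rspec directly
require 'rspec/core/rake_task'

# Runs only the examples tagged :sanity
RSpec::Core::RakeTask.new(:sanity) do |t|
  t.rspec_opts = '--tag sanity'
end

# Runs the whole suite
RSpec::Core::RakeTask.new(:all)

task default: :sanity
```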
Since I couldn't find any guidelines on how to configure TeamCity to run RSpec acceptance tests, I'm hoping this helps. We already had the server running, so this assumes all you need to do is add your tests to an existing installation. After some trial and error, here's how we got it to work:
Added version control settings for the automation repo
Within the build configuration, added three steps:
Install Bundler. This is a command line step that runs a custom script when the previous step finishes. It handles the configuration information for Sauce Labs (our Selenium grid provider) and the first prerequisite.
Bundle Install. Also a command line step running a custom script; the second prerequisite.
Run Tests. The final command line step, using rake and my test configuration settings to run the actual tests.
Added a build trigger to launch the sanity tests after a successful completion of the previous step (deploy to server)
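Spelled out as shell, the three build steps look roughly like this sketch (task names are assumptions from our setup; Sauce credentials arrive as TeamCity build parameters, never hard-coded in scripts):

```shell
# Step 1: Install Bundler on the agent
gem install bundler

# Step 2: Bundle Install (resolves the Gemfile's dependencies)
bundle install

# Step 3: Run Tests -- SAUCE_USERNAME / SAUCE_ACCESS_KEY and browser
# settings are injected as environment variables by TeamCity
bundle exec rake sanity
```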
After this was all put in place, I manually triggered the build configuration to see how well the process worked. There were quite a few hiccups along the way. One of the more interesting problems was finding out the TeamCity agent machine had outdated versions of Ruby and RubyGems. The version of RubyGems was so out of date it couldn't be upgraded; it had to be reinstalled, which is never much fun over an RDP session.
Once the execution went well, I triggered a failure. When the tests fail they print "tests failed" to the build log. Unfortunately the server didn't recognize on its own that a failure had occurred, so I went back and added a specific "failure condition" (another configuration option) looking for the phrase "tests failed" which, if found, would mark the build as a failure. Simple enough!
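TeamCity's failure condition boils down to a substring match on the build log, which is easy to mimic (and reason about) in a few lines of plain Ruby; the marker text matches what our runner prints on failure:

```ruby
# Returns true when the build log contains the failure marker our
# test runner prints, mirroring TeamCity's "failure condition" check.
def build_failed?(log_text)
  log_text.include?('tests failed')
end

build_failed?("42 examples, 3 failures\ntests failed")  # => true
build_failed?('42 examples, 0 failures')                # => false
```

The obvious caveat with substring matching is that any step that happens to echo the marker (say, a log replay) would also flag the build, so the marker string should stay unique to the runner's failure path.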
We've been running this sanity test for a few months now, and it's quite valuable to know when an environment is in an unusable state. Yet I think visibility is still a challenge: although failures report directly into a Slack channel, I'm not sure how quickly they are noticed or whether the information reported in the failed test is useful.
A few articles I've read suggest using the CI server to launch integration tests instead of the UI-level acceptance tests we are running. I think what we are doing is valuable and I'd like to expand it. What additional sanity checks or test segments should we add to this process? Are there more or better ways to do what we're doing now? Please share your experiences!
The main product I test was designed to follow a responsive web design layout so it could theoretically be used on anything from desktop computers to tablets and smartphones. Practically speaking, this means different viewable window sizes (viewport sizes) will result in the browser placing elements of our application in different locations on the screen. When running my Selenium acceptance tests I wanted to be able to specify different viewport sizes, both locally and remotely on Sauce Labs. While the sizes may not make a difference to Selenium, they give me another variable to control should I choose to do responsive testing. The examples I found weren't very helpful, so I decided to make my own for both cases.
Resizing your window. Locally, the browser can be resized to a specific width and height using the resize_to() command. For a window size of 1280×1024, the line of code we are looking for is:
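(Assuming, as in my helper, that the Selenium driver instance is stored in @driver.)

```ruby
@driver.manage.window.resize_to(1280, 1024)  # width, height in pixels
```

The same call works interactively from an IRB or Pry session against a live driver.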
If you don’t use a helper spec you can include this code in your setup method or right after you call WebDriver.
Setting screen resolution. Resizing your window works great locally but what if you want to run your tests remotely at Sauce Labs? How do we ensure our screen resolution is large enough to support a larger window size? Luckily Sauce Labs opens their browsers to the maximum window size so all we have to do is set the screen resolution.
If we go back to the example-selenium repo you'll see I'm actually using caps["screenResolution"] = ENV['resolution'] in the above spec_helper at line 11.
I'm setting a global variable for screen resolution so I can update it in the config_cloud file just as I might update other global settings like operating system or browser version. This is important because in some cases I may have to adjust the resolution, or, in the case of Safari, comment it out entirely. For some reason Sauce Labs doesn't offer many resolution options for Mac OS X, which is a bit annoying; the latest versions of OS X don't even support resolutions of 1280×1024.
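Pieced together, the relevant configuration looks roughly like this sketch. The variable names mirror my setup but the values are examples, and note that Sauce expects the resolution as a "WIDTHxHEIGHT" string:

```ruby
# config_cloud.rb (sketch) -- global settings read by the spec_helper
ENV['browser']    ||= 'firefox'
ENV['version']    ||= '48'
ENV['platform']   ||= 'Windows 7'
ENV['resolution'] ||= '1280x1024'  # comment out for Safari / OS X runs

# spec_helper.rb (sketch) -- build capabilities from the globals above
caps = Selenium::WebDriver::Remote::Capabilities.send(ENV['browser'])
caps.version  = ENV['version']
caps.platform = ENV['platform']
caps['screenResolution'] = ENV['resolution'] if ENV['resolution']
```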
I’ve been building a GUI acceptance test automation suite locally in Ruby using the RSpec framework. When it was time to get the tests running remotely on Sauce Labs, I ran into the following error:
RSpec::Core::ExampleGroup::WrongScopeError: `example` is not available from within an example (e.g. an `it` block) or from constructs that run in the scope of an example (e.g. `before`, `let`, etc). It is only available on an example group (e.g. a `describe` or `context` block).
occurred at /usr/local/rvm/gems/ruby-2.1.2/gems/rspec-core-3.2.2/lib/rspec/core/example_group.rb:642:in `method_missing'
It took a few minutes of debugging before I spotted the error:
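A common cause of this message, sketched here as an assumption: RSpec 3 removed the bare `example` method that RSpec 2 exposed inside hooks, so anything like a Sauce Labs job-naming hook has to accept the current example as a block argument instead:

```ruby
RSpec.configure do |config|
  # RSpec 3 passes the current example into the hook as a block argument.
  # Under RSpec 2 this was written with a bare `example.metadata[...]` call,
  # which now raises WrongScopeError.
  config.before(:each) do |example|
    # caps is a hypothetical Sauce Labs capabilities object built elsewhere
    caps[:name] = example.metadata[:full_description]
  end
end
```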