Today, test automation is an obvious choice for most teams and projects. Products keep growing more complex, leading to ever-bigger non-regression test plans. Running such a test plan manually requires many people dedicated to the task, and most projects cannot afford, or do not want to pay for, that many people.
Many teams have decided to automate their tests, and that’s a good thing! But too often, these attempts fail. Why is that? Here are the three biggest reasons why test automation attempts fail. The good news is that, now that you know them, you can avoid these pitfalls!
Pitfall #1: The team did not try hard enough
Teams that have just started automating their tests quickly find out that it is much harder than expected. Nothing comes cheap, and they are not willing to invest that much. As a result, many teams give up before they can take advantage of automated tests.
This is particularly prevalent when working on legacy code, as testing code that was not designed to be tested is the hardest task of all. Basically, you need to refactor legacy code to make proper testing possible.
Implications of this pitfall
As a result of this pitfall, there may not be any automated tests at all. The team may end up not writing even the simplest unit tests, even on new code.
An even worse situation is when the team has a set of old, out-of-date tests that they struggle to maintain and that steal their time on a daily basis. The worst part is that they probably don’t even care when the tests fail!
Bad smells revealing this pitfall
“This code cannot be tested.”
Did you really try? Perhaps what you meant was: “I don’t know how to make this code testable.”
“We spent so much time writing a single test, that we stopped.”
Listen, I know it’s not easy. Maybe you started too big, too fast?
“You know, we are working in <insert whatever industry/technology/team specifics here> so automating tests is not worth it for us.”
Usually heard from people who do not believe in Agile or Scrum either, and who use the same words on those topics. Bad faith is common here.
How to avoid this pitfall
So, having automated tests is not always easy. You do have to invest a lot at the beginning before you can reap the benefits. But it is clearly worth it. In fact, you hardly have any other option: unless your project is very simple, you cannot guarantee quality without testing it, and you cannot stay competitive by testing only manually. Even if most of your testing is done by people, you will save a lot by having a minimum set of automated tests checking that the basics work. OK, let’s stop the fairy tale; there is one other option: skipping quality altogether. This is what actually happens on projects where automated tests are nonexistent. Obviously, that is not sustainable.
Back to the subject: writing automated tests is hard, especially on legacy code. This is true mainly because it requires skills that your team may not have yet. So your team should train. I can only recommend books like “Working Effectively with Legacy Code”, which directly addresses this issue. Your team can also get help from outside Software Craftsmen, who will gladly help you or deliver training on how to write tests and how to refactor. As your team writes tests, they will get better at it; not only will they reap the benefits of having automated tests, but it will also become easier and easier to write more tests. Hang on!
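To make this concrete, here is a minimal sketch of the “subclass and override” technique described in that book. All the names here (ReportGenerator, fetch_rows) are invented for illustration: a hard-wired dependency is overridden in a test double so the surrounding logic can be tested without a database.

```python
# Hypothetical legacy class with a hard-wired dependency.
class ReportGenerator:
    def fetch_rows(self):
        # In the real legacy code this would hit a production database.
        raise RuntimeError("no database available in tests")

    def total(self):
        return sum(row["amount"] for row in self.fetch_rows())

# "Subclass and override": replace only the awkward dependency in a
# test double, leaving the production logic under test untouched.
class TestableReportGenerator(ReportGenerator):
    def fetch_rows(self):
        return [{"amount": 10}, {"amount": 32}]

assert TestableReportGenerator().total() == 42
```

The payoff is that `total()` is now verified by a fast test, even though the original class was never designed for testing.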
Benefits of having automated tests on your project
- Peace of mind: Pushing to production with no fear of regressions. Not being paralyzed by the fact that you don’t know everything about every part of the system: the test harness will warn you of any wrong assumption you had. Fewer bugs, meaning less time spent debugging and an improved reputation for quality. Releasing more often and significantly reducing the time needed to respond to an urgent request. All in all, the team becomes more predictable.
- Better functional design: When performing a functional change, the test harness will highlight any unforeseen impacts. As a result, you will be able either to understand the limitations of the change or to write a better functional change request.
- Better code design: Dependencies are what make a piece of code hard to test and maintain, and its tests slow and costly to run. So the main skill you’ll learn when writing tests is how to remove and minimize dependencies between pieces of code, so that writing your tests is easy and the resulting tests are efficient and maintainable. It just so happens that minimizing dependencies is one of the best ways to devise good code designs. In other words, writing tests will naturally get your code into good shape.
- Better productivity: This may not sound obvious at first, since you now write tests in addition to production code. But as your automated test coverage increases, your productivity will actually increase until it becomes stellar. This is due to the other benefits, but also because it significantly shortens the feedback loop: you know you broke something almost immediately, before you switch to another task. It means that what should be a 5-minute fix can indeed be fixed in 5 minutes.
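As a tiny illustration of the “better code design” point above, here is a sketch (all names invented) of how injecting a dependency, in this case a clock, turns a time-dependent rule into something trivially testable:

```python
import datetime

# Hypothetical example: the clock is injected instead of hard-coding
# datetime.date.today() inside the method, so tests can pin "today".
class TrialPeriod:
    def __init__(self, started_on, clock=datetime.date.today):
        self.started_on = started_on
        self.clock = clock  # injected dependency, overridable in tests

    def expired(self):
        return (self.clock() - self.started_on).days > 30

# In a test, the injected clock freezes time to a known date:
period = TrialPeriod(datetime.date(2024, 1, 1),
                     clock=lambda: datetime.date(2024, 3, 1))
assert period.expired()
```

Without the injected clock, testing expiry would mean waiting thirty days or monkey-patching the system time; with it, the rule is a plain, fast unit test.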
What I’d recommend is to start easy, for instance with automated acceptance tests on new code. Avoid legacy code at first if possible and address it later. Even though your pain comes mainly from legacy code, the team needs to learn the basics first. Also, depending on the kind of project you are working on, it may make little sense to cover code that is not meant to change.
On the other hand, if avoiding legacy code is really not possible (actually a pretty common case on projects that build new changes on top of existing legacy code), you may want to train first on non-production code. The goal would be to produce a complete sample project that enforces good coding practices and that everybody in the company can later use as a reference for how things should be done. Simply seeing that it is possible will, by itself, go a long way toward convincing people to keep trying until they succeed.
A good recommendation is to try to do things differently. Pair programming can be a real life-saver when writing tests for the first time. Swap people between teams for a few days. Why not gather several teams for a single day or afternoon to tackle the issues you encounter? All these initiatives will help you work differently, keep moving forward, think differently, and find a way out.
In short: do not give up. There is light at the end of the tunnel, and it is definitely worth it. The first steps may be incredibly hard, but each step after that will be easier than the last.
Pitfall #2: Keeping the same organization
A lot of teams do not realize at first, or ever, that this is not only about automating tests. You need to work differently to achieve a successful automated testing strategy, one that is sustainable and actually delivers the maximum benefits.
Implications of this pitfall
What’s wrong with this approach? A whole lot, actually.
Before, you have an army of testers running test cases by hand. This is so much work that they probably have no time left for exploratory testing and other activities that actually bring real value. After, you are doing just the same, except that instead of people running test cases by hand, you have people writing test automation code that runs those test cases. You are doing the same thing because you are still automating at the level of user activity, which leads to very slow and brittle automated tests. Writing and maintaining those tests is so much work that there is probably still no time left for exploratory testing and other activities that actually bring real value.
The big mistake here is that the team works the same way as before, except that instead of many manual test runners, you have many maintainers of automated test runners. Things may be better that way, but only marginally.
Sometimes things even get worse, for instance when badly designed tests give you false confidence that everything is fine. At that point you may end up both testing manually and maintaining the tests, two daunting tasks with major overlap: basically doing the same job twice. That may sound absurd, but it happens easily when two different teams are responsible for writing the production code and for ensuring its quality. The fact is that a separate QA department is a common organizational scheme, and testing the same thing twice is its everyday life.
Bad smells revealing this pitfall
“My job is to automate existing test cases.”
Welcome, silos! What is your job, really? I bet the ultimate goal of your job is to bring value to your customers, whatever the means to that end. People are usually not expected to restrict themselves to specific tasks. Agile and Scrum advocate the contrary: just get the people together in a team and let them make it happen in whatever way works best.
Another fundamental subject to put on the table: testers bring only marginal value when they are included solely at the end of the software development lifecycle, checking and validating specifications with no real added value. On the other hand, they can have a tremendous impact on the quality of the software if they are included from the very beginning of the project. Testers bring a system-wide point of view and a focus on the user, helping significantly in designing a good software solution. They make sure that any necessary testing facilities (be they tools or hooks in the software) are provisioned and available to check and improve quality, right from the start. They make it clear that quality is everyone’s job, and that developers are expected to put a lot of effort into it, beginning with comprehensive unit test coverage.
“Maintaining tests takes a lot of effort, they break all the time.”
You should investigate how you wrote your tests and check whether they were designed with test automation in mind. We want “Automation in Testing” over “Test Automation”. I bet you simply automated test cases designed to be run by hand!
“Running the regression test suite is so long that we cannot run it anytime we want.”
A classic mistake is to perform the bulk of test automation at the user level, on top of the actual application running its complete stack. Tests at this level are the slowest of all, in addition to being the most brittle and the most costly to maintain. They also require a lot of dependencies, tools and resources, which makes them complex to run at will.
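One way out, sketched below with invented names, is to extract the business rule from the UI and test it directly: the same check an end-to-end test would perform by filling a form in a browser runs in microseconds once the rule lives in a plain function.

```python
# Illustrative only: a password-strength rule extracted from the UI.
# No browser, no server, no complete stack needed to verify it.
def password_strong_enough(password):
    return (len(password) >= 12
            and any(c.isdigit() for c in password)
            and any(c.isupper() for c in password))

assert not password_strong_enough("short1A")      # too short
assert password_strong_enough("CorrectHorse42")   # long, digit, upper
```

A handful of user-level tests can still confirm the form is wired to this function; the many edge cases belong down here, where they are cheap to run at will.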
How to avoid this pitfall
The goal is not simply to automate tests. It is to write tests specifically designed for automation, and to write them at the proper technical level to get enough confidence at the smallest possible cost: the cost of writing, the cost of running and the cost of maintaining.
This approach embraces a mindset shift from black-box testing to white-box testing. Instead of trying to check that there is no interaction between two related but independent features, we open the box to gain confidence that the code is designed to guarantee that the two features are indeed independent, and we reduce the number of test cases accordingly, skipping those useless tests entirely.
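For instance, in this made-up sketch, two features implemented as pure functions with no shared state can each be tested alone, instead of black-box testing every combination:

```python
# Hypothetical independent features: each is a pure function with no
# shared state, so the code itself guarantees their independence.
def discount(total, has_coupon):
    return total * 0.9 if has_coupon else total

def shipping(weight_kg):
    return 5.0 if weight_kg <= 1 else 12.0

# Test each feature alone: 2 + 2 cases instead of 2 x 2 combinations.
assert discount(100, has_coupon=True) == 90.0
assert discount(100, has_coupon=False) == 100
assert shipping(0.5) == 5.0
assert shipping(3) == 12.0
```

With ten independent features the saving is dramatic: a linear number of focused tests replaces a combinatorial explosion of end-to-end scenarios.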
Overall, the role of testers or QA changes significantly. We want them to embody the Quality Engineer (QE) role: they are no longer there to check that features were properly implemented; they are there to help the rest of the team implement the features properly in the first place. They give their insight during backlog grooming sessions, they work hand in hand with the developers to help them cover features properly with automated tests, they help the PO use Specification by Example and Behavior-Driven Development tools to enable automated acceptance testing… And hopefully, they finally have enough time to perform exploratory testing and other activities that actually bring real value.
Pitfall #3: Believing test code matters less than production code
This is a common one. It is maybe the most important long-term root cause of failure.
Implications of this pitfall
Teams that write tests at the user level, for instance with the Selenium ecosystem, often completely dismiss the need to design a Page Object Model, that is, to design and write an abstraction that models the application from a business point of view instead of directly manipulating the technical components of the user interface. Note that this concept applies equally to any framework that drives an application; it is not restricted to Selenium. This good practice is well documented in its own right, but I prefer to go back to the roots: would you do that with production code? Would you write the same code again and again? Wouldn’t you add an abstraction layer, make use of object-oriented features, and make all of this maintainable? Don’t blame the framework when you are not doing your part!
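To make the pattern concrete, here is a minimal Python sketch. The page, its locators, and the stub driver are all invented; a real page object would wrap a Selenium WebDriver, stubbed here so the example runs without a browser.

```python
# Stub standing in for a Selenium-like driver, so the sketch runs
# without a browser. A real page object would hold a WebDriver.
class FakeDriver:
    def __init__(self):
        self.fields, self.submitted = {}, False
    def type_into(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        self.submitted = True

# The page object: business-level methods hide the UI locators, so
# tests read like user intent, and only this class changes when the
# page layout does.
class LoginPage:
    USER, PASSWORD, SUBMIT = "#user", "#password", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def log_in_as(self, user, password):
        self.driver.type_into(self.USER, user)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).log_in_as("alice", "s3cret")
assert driver.fields["#user"] == "alice" and driver.submitted
```

When the page layout changes, only the locators inside LoginPage change; every test that calls `log_in_as` stays untouched.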
A common process problem is that tests are not reviewed. Or they get reviewed, but people do not put much thought into it. Obsolete test cases are not deleted. Good practices, both coding and testing practices, are not respected, leading to tests that are brittle and costly to maintain.
Another possible issue is that it may be very hard to get the resources necessary to implement and run tests. For instance, it may be hard to get dedicated machines (physical or in the cloud) to run the automated test suites in a timely fashion. Front-end testing in particular can be very costly: browser testing, mobile device testing…
Bad smells revealing this pitfall
“The goal is to write this test once, and to not touch it again ever after.”
So we plan not to maintain the test? The first time it turns red because the assumptions changed, we just throw the test away instead of updating it?
“I just made sure that the tests pass, I did not review the content of the tests.”
Would you have done so for production code? Don’t you want the best code, design and architecture for production code? Why should it be any different for test code?
“Testing is not my job.”
So how do you make sure your code works as expected, or even at all?
How to avoid this pitfall
My motto: “Tests as first-class citizen.”
You would review production code, wouldn’t you? You should review test code too. And not only at the code level: ask yourself whether enough test cases have been added and whether existing test cases are still needed, and review the test cases themselves.
You would refactor production code to the highest standard, wouldn’t you? You should refactor test code to the highest standard too. See my rant about the lack of use of the Page Object Model design pattern.
Keep in mind that code that is never executed slowly degrades until it breaks, and you may not find out until the next push to production! That’s why you need to make sure that your production code gets exercised, and automated testing is the best tool for that purpose. Make sure these tests run often, to check that both tests and production code stay in a healthy state.
Tests are part of your product; you need to design tests as a core feature of your product.
I can only recommend reading the following books and online resources on the topic.
Succeeding in writing tests on Legacy Code
Michael Feathers starts “Working Effectively with Legacy Code” by clearly explaining why you need a test harness, even on legacy code (especially on legacy code!), before giving countless pieces of practical advice on how to refactor legacy code to get it into a test harness. The first chapters are gold even if your hands are not in the code, while the remaining chapters will help developers get started with legacy code refactoring. A skill worth acquiring and keeping!
Page Object Model
Martin Fowler explains the need for this pattern: PageObject
The official Selenium documentation explains this pattern and gives implementation hints: Test Design Considerations — Selenium Documentation
Common mistakes in test automation
Gojko Adzic, David Evans and Tom Roden show in “Fifty Quick Ideas to Improve Your Tests” many patterns that will impede your test automation efforts, and explain how to make them work instead. Much of the other advice is not directly related to test automation, but that does not make it any less useful. In short, this book reads quickly, is full of wonderful ideas to experiment with, and is highly entertaining.
Designing your organization around tests
Google engineers (the cover credits James A. Whittaker, Jason Arbon and Jeff Carollo as the main contributors, but the book is actually the work of many Google engineers) describe in “How Google Tests Software” how Google radically changed the way it works to ensure quality at all levels with minimal manual intervention. They expose Google’s way of working in a completely transparent and direct manner, so reading this book gives you a maximum of information for a minimum of reading time. Even if your organization has no hope of becoming Google-like (how many companies could claim that anyway?), you can still pick up and reuse many ideas here and there. At the very minimum, it should make you think about how you work and what you could try out.