Automated Testing Series - Part 1 - What and Why?

30 July 2015 - Testing

Over the past year or so I've become more and more interested in the various forms of automated testing. I've been a developer for verging on 20 years, and yet it's only in these past few years that I've seen a much larger interest in automated testing across the industry as a whole. It's obviously been around for a while, but it's never been as popular as it is today, especially now that the tools and libraries are getting better and better. In the past, it was a rare exception for a developer to write tests as a core part of their code. Now, it's frowned upon if you don't.

I thought it would make an interesting series of blog posts to pass on what I've learnt. For this first post, I'm going to talk about what automated testing is and why you should be doing it.

So what is it?

Okay, so "automated testing" is quite a generic umbrella term and can mean quite a lot of things. I'm going to quickly summarise here, but will go into much more detail in future posts.

In general, automated testing in software development is broken down into three types:

  • Unit tests
  • Integration tests
  • Acceptance / Functional tests

A unit test tests a very small unit of functionality - generally a single method in a class. The code the unit test is testing should not rely on any external state that isn't controlled by the test itself, and should not interact with any other functionality or services - e.g. logic in other classes, web service calls, database interaction, etc. Any dependencies that the code we're testing requires should be mocked or stubbed. I'll go into both of these concepts in much more detail in future posts in this series.
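
To make that concrete, here's a minimal sketch of a unit test written with NUnit (which we'll properly introduce in the next post). The PriceCalculator class and ITaxProvider interface are made-up examples for illustration - note how the tax lookup is replaced with a stub, so the test controls all of its own state:

    using NUnit.Framework;

    // Hypothetical dependency that would normally hit a web service
    // or database.
    public interface ITaxProvider
    {
        decimal GetTaxRate(string countryCode);
    }

    // A hand-rolled stub - always returns a fixed rate, so the test
    // doesn't depend on any external state.
    public class StubTaxProvider : ITaxProvider
    {
        public decimal GetTaxRate(string countryCode)
        {
            return 0.20m;
        }
    }

    // The hypothetical class under test.
    public class PriceCalculator
    {
        private readonly ITaxProvider _taxProvider;

        public PriceCalculator(ITaxProvider taxProvider)
        {
            _taxProvider = taxProvider;
        }

        public decimal CalculateTotal(decimal netPrice, string countryCode)
        {
            return netPrice * (1 + _taxProvider.GetTaxRate(countryCode));
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void CalculateTotal_AddsTaxToNetPrice()
        {
            var calculator = new PriceCalculator(new StubTaxProvider());

            var total = calculator.CalculateTotal(100m, "GB");

            Assert.AreEqual(120m, total);
        }
    }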

An integration test approaches things at a higher level and tests that the code integrates as expected with its dependencies. These kinds of tests can make web service calls or connect to data stores, though they don't tend to involve any user interfaces. They could be testing an API or a service, or just a larger scope than that of a unit test - for example, how a group of classes interact with each other.
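
As a hedged sketch of what that might look like, the test below exercises a hypothetical TaxRateRepository against a real test database, with nothing stubbed out. The repository class, table name, and connection string are all invented for illustration:

    using System.Data.SqlClient;
    using NUnit.Framework;

    // Hypothetical class under test - wraps the actual database access.
    public class TaxRateRepository
    {
        private readonly string _connectionString;

        public TaxRateRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public decimal GetTaxRate(string countryCode)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "SELECT Rate FROM TaxRates WHERE CountryCode = @country",
                connection))
            {
                command.Parameters.AddWithValue("@country", countryCode);
                connection.Open();
                return (decimal)command.ExecuteScalar();
            }
        }
    }

    [TestFixture]
    public class TaxRateRepositoryIntegrationTests
    {
        [Test]
        public void GetTaxRate_ForKnownCountry_ReadsRateFromDatabase()
        {
            // Unlike a unit test, nothing here is stubbed - this really
            // talks to a dedicated test database.
            var repository = new TaxRateRepository(
                "Server=localhost;Database=TaxRatesTest;Integrated Security=true");

            var rate = repository.GetTaxRate("GB");

            Assert.AreEqual(0.20m, rate);
        }
    }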

Finally, there are acceptance (also known as functional) tests. These generally do involve the UI, simulating what the user would be doing and validating that what the user sees and experiences is what's expected.
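
For example, an acceptance test might drive a real browser. Below is a minimal sketch using Selenium WebDriver - one popular browser automation option in the .NET world, and not something this post assumes you're using. The URL, element IDs, and expected message are entirely made up:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    [TestFixture]
    public class LoginAcceptanceTests
    {
        [Test]
        public void ValidLogin_ShowsWelcomeMessage()
        {
            // Launches a real browser (requires the chromedriver
            // executable) and simulates what a user would do.
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("http://localhost:5000/login");

                // Element IDs are hypothetical - in a real test they'd
                // match your application's markup.
                driver.FindElement(By.Id("username")).SendKeys("testuser");
                driver.FindElement(By.Id("password")).SendKeys("password123");
                driver.FindElement(By.Id("login-button")).Click();

                // Validate what the user actually sees.
                var welcome = driver.FindElement(By.Id("welcome-message")).Text;
                Assert.AreEqual("Welcome, testuser!", welcome);
            }
        }
    }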

Why go to all this trouble?

There's a lot of work associated with writing automated tests - especially when you're dealing with an existing codebase that hasn't had them from the very start. Then there's the additional business pressure to create a deliverable. You can understand why people don't bother. However, whilst there's this additional upfront work involved, the amount of time you save later is HUGE.

A bug found during development doesn't tend to cost a huge amount of time.

If that bug makes it to QA, then it costs much more time, as more people have to get involved. The tester who found the bug has to report it and communicate how to reproduce it to the developer. The developer then needs to try to reproduce it, fix it, and redeploy it to the test environment. The tester then needs to retest. Sometimes this can go back and forth many times.

If the bug isn't caught by QA and makes it to production, then we're now involving customers, testers, and project managers in that cycle. A simple bug that would initially have taken a few minutes to fix has now potentially cost us many days and involved many more people. Not only that, but we've risked losing customer confidence at this point.

Think about the time saved if we catch most of our bugs before they even leave the developer's machine and get committed to source control. To say it's HUGE is a bit of an understatement.

You'll also find that a codebase with high test coverage tends to be of much higher quality. Writing tests (especially unit tests) requires the code to be written in a certain way - forcing separation of concerns and the use of best practices, as well as giving us the confidence to heavily refactor as we go along. We'll go into this in much more detail in future posts in this series, when we start to talk about how the SOLID principles make our code more testable.

So far we've just talked about catching bugs earlier. But what about the repeatability of this testing? If a QA team is manually testing the product with no automated tests, then their testing is only applicable to that one build on the environment they're testing on. And even then, they can't test every little thing every time something changes. Automated tests can, however. Once written, they can run any time you like - many times a day, overnight, at weekends, on bank holidays - running thousands of tests per test run. They can run on triggers, so all your tests run every time a piece of code is committed to source control. This level of testing can't possibly be done manually. Not even close.

Summary

So, hopefully I've persuaded you of the extreme importance of automated testing. In this series, we'll be digging much deeper into the various aspects of automated testing. I'll start off focusing more on unit tests, then later on move onto testing user interfaces.

Whilst some of the things I talk about aren't language specific, my background is in .NET, so I'll be approaching everything from a .NET perspective. The examples will be in C#, and we'll be using .NET libraries. The next post will cover getting started with NUnit, a popular test framework that we'll be using for all our tests in this series.
