Continuous Testing in .NET

Most people are used to the idea of continuous integration environments running tests. However, I often find myself wanting the tests to run all the time. This practice is called Continuous Testing, and in the .NET space we currently don’t have a solution that supports it out of the box.

However, there is a small lifehack you can employ that will get you there (to some extent), provided you buy into the idea of using multiple computers for development. Why multiple computers? Well, the assumption is that some of the work needs to run ‘elsewhere’ to avoid interfering with the normal operation of the machine you’re developing on.

The Solution

So here’s how I do it. Let’s say I have a solution that I want to test continuously, i.e., I want the tests rebuilt and run so frequently that I always have that ‘dashboard’ feeling of knowing what’s going on. Here’s how I get there.

1. Place the source code in synchronized cloud storage

First, stick your code in cloud storage that syncs across machines. Dropbox is great – it gives you 2 GB of free space and syncs flawlessly on all your machines.

2. Optimize test-runner machine performance

Ensure that the machine that builds and runs the tests is optimized for just that. Do the lightest possible OS install (e.g., Windows XP). Also, make disk access lightning-fast: I keep the working copy on a RAM disk, and you can’t really get better speed than that. Seeing how most developers have 8–16 GB of RAM nowadays, allocating a bit of it to fit a solution in there isn’t such a big problem.

3. Set up the scheduler to build your solution

Pick a good time interval and set the Windows Task Scheduler to run your build script. This will result in your solution being rebuilt over and over and over. If you followed my advice about keeping the solution on a RAM disk (which, IMHO, fits nicely with the cloud storage practice), builds should be somewhat faster than builds from your Raptor or whatnot. Don’t forget to configure MSBuild to build in parallel (the /m switch).

Of course, if you have a cloud build solution (IncrediBuild, Electric Cloud, etc.), just use that.
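The scheduled rebuild really is just a timed loop around the build command. Here’s a minimal sketch of that loop in Python – purely illustrative, since on a real setup the Windows scheduler would invoke your build script directly; the MSBuild command line in the comment is a placeholder:

```python
import subprocess
import sys
import time

def run_periodically(cmd, interval_seconds, iterations):
    """Run a build command repeatedly, the way a crude scheduler would.

    In the real setup, cmd would be something like
    ["msbuild", "MySolution.sln", "/m"] (the solution name is a placeholder).
    """
    exit_codes = []
    for _ in range(iterations):
        completed = subprocess.run(cmd, capture_output=True, text=True)
        exit_codes.append(completed.returncode)  # 0 means the build succeeded
        time.sleep(interval_seconds)
    return exit_codes

# Demo with a harmless stand-in command instead of MSBuild:
codes = run_periodically([sys.executable, "-c", "print('build ok')"], 0.1, 3)
print(codes)  # [0, 0, 0]
```

A scheduler buys you the same thing without keeping a process alive, which is why I use it rather than a loop like this.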

4. Set up the test runner to rerun all tests whenever binaries change

That’s the easy part – get your runner to rerun the tests whenever your build artifacts change. That way, you will be constantly informed about how many tests you’ve caused to fail. Naturally, this implies you have a separate screen that shows you the tests and lets you navigate them. To navigate tests on a different computer, just use Synergy.
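Under the hood this trigger is just a file watcher over the build output. A minimal polling sketch in Python – the temp directory and fake Tests.dll are made up for illustration; a real runner would watch your bin folder and launch the test runner whenever the snapshot changes:

```python
import os
import tempfile
import time

def snapshot(directory):
    """Map each file under the directory tree to its last-modified time."""
    state = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            state[path] = os.path.getmtime(path)
    return state

def watch_once(directory, previous):
    """Return (changed?, new snapshot) -- call this from a polling loop."""
    current = snapshot(directory)
    return current != previous, current

# Demo against a temp directory standing in for the bin/ folder:
with tempfile.TemporaryDirectory() as bin_dir:
    before = snapshot(bin_dir)
    with open(os.path.join(bin_dir, "Tests.dll"), "w") as f:
        f.write("fake binary")  # simulate a rebuild dropping a new DLL
    changed, _state = watch_once(bin_dir, before)
    print(changed)  # True -- this is what would trigger a test rerun
```

Most test runners with a “watch” mode do essentially this, just with OS change notifications instead of polling.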


The approach I describe above is workable, but it’s far from perfect. Here are some issues you will encounter with it.

First, there’s no way to avoid rebuilding the whole solution, because currently no software can intelligently detect which parts of the system a particular test affects. And even if it could, that wouldn’t help here: we make a blanket call to a single build script no matter what changed, whereas smarter builds would require specifying explicitly which test assemblies need to be rebuilt.

Another problem is that a solution like Dropbox does not coordinate its syncing with our builds, which sometimes causes problems when we try to build while a file is being updated. Because we cannot reasonably predict when Dropbox will be syncing files, we just have to hope there’s no collision. Also, on a side note, I don’t recommend sharing a Dropbox folder between developers: use ordinary source control for that. If you try to use Dropbox as source control, developers will constantly interrupt each other’s work, leave the code uncompilable, and so on. In other words: the solution I present currently works best for a single developer.
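One way to soften the collision problem – not part of the setup above, just a sketch – is to retry the build when it fails on a file that’s mid-sync; the attempt count and delay below are arbitrary:

```python
import time

def build_with_retries(build, attempts=3, delay_seconds=0.1):
    """Call build() until it succeeds or the attempts run out.

    build() should raise (e.g. on a locked, half-synced file) to signal failure.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return build()
        except OSError as error:  # e.g. file locked by the sync client
            last_error = error
            time.sleep(delay_seconds)
    raise last_error

# Demo: a fake build that fails once (file still syncing), then succeeds.
calls = {"n": 0}

def flaky_build():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("file is locked by the sync client")
    return "build ok"

result = build_with_retries(flaky_build)
print(result)  # build ok
```

This doesn’t eliminate the race – a sync can still start mid-build – but it turns most transient collisions into a short delay instead of a red build.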

Finally, if you’re into things like coverage, you’re somewhat out of luck: you won’t be able to view coverage on your development machine, because the coverage calculations actually happened elsewhere. Of course, nothing prevents you from running code coverage analysis and simply presenting aggregated results, but those are more or less useless if you can’t actually tell which spots your tests miss.

Comments



  1. This article does not really describe a continuous testing solution. It describes a form of continuous integration with a focus on certain triggering mechanisms.

    Continuous Testing is about running unit tests continuously, with no effort, as you work in your IDE. This article describes an out-of-IDE solution without intelligent test integration, and without intelligent test run prioritization.

    Several solutions for “truly” continuous testing are available for .NET. Check out our continuous testing solution.

    • Hold on… I’m describing exactly what you’re describing. I mean, the goal of continuous testing is to constantly know which tests failed and which passed, right? Sure, you can throw in as much internal logic as you want, but the point is just to have an overview of what you broke and what you fixed.

      • Yes, the goal is to constantly know the state of your tests, and continuous integration gives us that to a certain extent. However, the point of (and the reason for the term) continuous testing is a tighter integration and more immediate feedback than a continuous integration approach can provide, which is why we make software specifically for this purpose.

        • You’re suggesting that what I am describing is continuous integration, and it isn’t. CI is when tests run on check-in; CT is when they run all the time. So my approach is, in fact, CT and not CI.

          As for integration, this point is arguable: the main thing is to know which tests pass/fail, right? With my approach, we get exactly that.

          • Triggering mechanisms are not what turns CI into CT, and CT is not only about knowing what passes/fails – you already have that knowledge with CI. We’re arguing semantics here, but again, what this article presents is just CI with an improved/more immediate triggering mechanism. “True” CT can offer a far better (and faster) feedback experience by running integrated with your environment and with smart test prioritization.

          • Well, what I’m going by is the definitions I found on the internet, specifically:

            Continuous testing uses excess cycles on a developer’s workstation to continuously run regression tests in the background, providing rapid feedback about test failures as source code is edited. It reduces the time and energy required to keep code well-tested, and prevents regression errors from persisting uncaught for long periods of time.

            What is continuous testing? It’s turning the knob on Test Driven Development up to 11, by automatically running the tests on every save. This has profound effects on the way that TDD is applied, and is likely to make you a more efficient and productive programmer.

            … and so on. The idea is this: run the tests on every save. That’s it. Nothing here says it has to be integrated with the IDE, and nothing says you have to prioritize tests in a particular way.

            Now, embellishments around this idea do make sense to an extent. But I believe the core intent is achieved regardless.

          • I’d like to jump in.
            CT is CI taken one step further. The idea is to get the fastest feedback possible.
            Ideally, by the time you finish typing a statement, you would already know the new test status.
            Integration with the IDE tends to make things faster in the sense that the results have better visibility.
            Smart prioritization also speeds things up.

            BTW, Rapid-Dev, the tool mentioned by Artyom, is designed to address your first point in the discussion:
            it executes only a subset of the tests on every compilation.

          • I use the Continuous Testing plugin by @ContinuousTest myself; it works great. The article is more about a CI way of reaching the goal of CT, but solutions like Continuous Testing and possibly also Rapid-Dev (haven’t tried it) make this a lot easier.

          • I think the idea of things running as you finish writing a statement is very nice, but also unrealistic given how slow compilation actually is (even if you are compiling a Hello World project).