Comments by "" (@ContinuousDelivery) on "Testing Strategy for DevOps: What to Test and When" video.
I suppose it depends on how far you take unit testing and what you mean by the percentages. For a feature I'd generally expect to create a handful of acceptance criteria and an automated "Acceptance Test" for each. If you take my approach, most of these "executable specifications" will reuse lots of test infrastructure code and will usually add little new code. The test case itself is usually a few lines of code written in your test DSL.
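To make that concrete, here is a minimal sketch, in Python, of what such an executable specification can look like. Every name in it (StoreDsl, place_order, the in-memory FakeStore standing in for the system under test) is invented purely for illustration; in a real pipeline the DSL would drive the actual deployed system rather than a fake.

    # Sketch: the test case reads as an executable specification, while the
    # DSL class hides all the set-up and plumbing. All names are invented.

    class FakeStore:
        """Stand-in for the deployed system under test (illustration only)."""
        def __init__(self):
            self.orders = []
        def submit_order(self, customer, item, quantity):
            self.orders.append((customer, item, quantity))
        def orders_for(self, customer):
            return [o for o in self.orders if o[0] == customer]

    class StoreDsl:
        """Test infrastructure: translates domain language into calls on the system."""
        def __init__(self, system):
            self.system = system
        def place_order(self, customer, item, quantity):
            self.system.submit_order(customer, item, quantity)
        def assert_order_recorded(self, customer, item):
            assert any(o[1] == item for o in self.system.orders_for(customer))

    # The acceptance test itself is only a few lines in the language of the problem domain.
    def test_customer_can_place_an_order():
        dsl = StoreDsl(FakeStore())
        dsl.place_order("Alice", item="coffee", quantity=2)
        dsl.assert_order_recorded("Alice", item="coffee")

    test_customer_can_place_an_order()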
Unit testing is driven, for me, by TDD, so I'd create unit tests to support nearly all of my code. That means I'd have quite a lot more code in unit tests than in acceptance tests, though the test infrastructure code for acceptance tests will be more complex.
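For contrast, a sketch of the kind of unit test TDD drives out, again against an invented function rather than anything from a real codebase; it sits right next to the code and needs no test infrastructure at all.

    # A trivial invented function under test, plus the unit tests that drove it out.

    def apply_discount(price, percent):
        """Reduce price by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

    def test_discount_reduces_price():
        assert apply_discount(200, 25) == 150.0

    def test_invalid_discount_is_rejected():
        try:
            apply_discount(200, 150)
            assert False, "expected a ValueError"
        except ValueError:
            pass

    test_discount_reduces_price()
    test_invalid_discount_is_rejected()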
On that basis, in terms of effort, something like 70% unit vs 10% acceptance is probably about right, though that's a guideline rather than a rule to stick to.
If you count tests, then I think it is harder to generalise. Some features may already exist by accident, so you write an acceptance test to validate the feature but don't need to write any additional code or unit tests. Unusual, but I have seen it happen. Other code may need only a simple acceptance test but loads of work, and so loads of unit tests, to accomplish.
I confess that I am not as big a fan of the test pyramid as some other people, in part for these kinds of reasons. I think that it can constrain people's thinking. However, if you see it as a rough guide, then it makes sense. I would expect, on average over the life of a project, for there to be more unit tests than acceptance tests, lots more.
The danger, and a trap that I have fallen into on my own teams, is that the acceptance tests are more visible and more understandable, so there is a temptation to write more of them. QA people, for example, often say to me "we can't see what the devs do in unit tests, so we will cover everything in acceptance tests". This is wrong on multiple fronts: 1) it isn't the QAs' responsibility to own the testing or the gatekeeping, 2) it's an inefficient way to test, and 3) it skews the team in the wrong direction; if the QAs test "everything" in acceptance tests it will be slow, flaky and inefficient, and it will nevertheless tempt the devs to relax their own testing and abdicate responsibility to the QAs.
Ultimately I think that unit testing is more valuable as a tool, but that acceptance testing gives us insight and a viewpoint that we would miss without it.
No, I prefer to run them all the time, but not necessarily on every commit. I divide deployment pipelines into 3 phases: commit, acceptance and production.
The commit phase runs on every commit and is optimised to give fast feedback. Its job is to keep development flowing.
Acceptance is about determining releasability; the tests are more complex, more "whole system", and so slower.
Prod is about release and monitoring and gathering feedback from production.
Let's imagine commit takes 5 minutes and acceptance 50. So during a single acceptance test run, we may have processed 10 commits. When the acceptance run finishes, the acceptance stage looks for the newest successful release candidate (generated by a successful commit run), deploys that, and runs all the tests. So acceptance testing "hops over" commits, which is OK because commits are additive: the 10th commit includes all the new features of the previous 9, so acceptance testing is testing everything as frequently as possible.
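A rough sketch of that selection logic in Python, under the assumption of a hypothetical artifact store that records the release candidates produced by successful commit-stage runs (none of these names come from any real tool):

    import time

    def acceptance_stage(artifact_store, deploy, run_acceptance_tests):
        """When the previous acceptance run finishes, pick up the NEWEST release
        candidate the commit stage has produced, skipping any candidates in
        between, and test that. All names here are invented for illustration."""
        last_tested = None
        while True:
            candidate = artifact_store.newest_successful_candidate()
            if candidate is None or candidate == last_tested:
                time.sleep(30)                 # nothing new yet, wait for the commit stage
                continue
            deploy(candidate)                  # deploy into the acceptance environment
            run_acceptance_tests(candidate)    # the slow, whole-system tests (~50 min)
            last_tested = candidate            # intermediate candidates were "hopped over"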