Documentation: Distribution of Tests (Testing Shapes)

 

When starting on a new project, it’s very common for me to be the first and/or only tester.  I often find myself documenting the same things, rewriting them from scratch each time.  To help with this, I’ve decided to include some of that documentation on my own blog, so I no longer have to start from scratch.  I’ll probably edit and improve these posts over time.  You can view all posts in the documentation series.

 

 

Note: This page includes lots of testing terminology and concepts. Please see Documentation: General Quality and Testing Concepts for explanations of these terms.

 

Testing can be performed at different layers, on different targets. It can be helpful to visualise how tests are distributed across each of these layers or targets. For that, we can use models, which often show the distribution as a shape.

 

 

Test Automation Pyramid Model (“Ideal”)

 

Pyramid depicting automated unit tests at the bottom, integration tests in the middle, and system integration tests on top

 

You’ve probably seen some variation of the test automation pyramid before. The “automation” part is often dropped from the name, which glosses over the fact that this model doesn’t account for any testing performed by humans.

 

This model can depict either the testing layer or the testing target, so when using it, it’s important to understand which is meant. The version pictured refers to the testing layer: the most automated tests should exist at the unit level; there should be fewer (but still quite a lot) at the integration level; and the system integration level should have the fewest. This takes advantage of the fact that the lower the level of a test, the more efficient it is, both in how long it takes to run and in how quickly it pinpoints exactly where an issue lies. The higher the level, the more layers of the system architecture are exercised, and the more variables are in play.
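To make the layers concrete, here’s a minimal sketch (not from the original post) using a hypothetical price calculator. The names and logic are invented for illustration: the unit test exercises one pure function, so a failure points straight at it, while the integration test exercises the wiring between components, so a failure could originate in either collaborator.

```python
def apply_discount(price: float, percent: float) -> float:
    """Pure business logic -- the 'unit' under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class PriceService:
    """Combines the unit with a data source -- the 'integration'."""

    def __init__(self, discount_source):
        self.discount_source = discount_source

    def final_price(self, price: float, customer_id: str) -> float:
        percent = self.discount_source.discount_for(customer_id)
        return apply_discount(price, percent)


# Unit test: one function, no collaborators, fast and precise.
def test_apply_discount_unit():
    assert apply_discount(100.0, 25.0) == 75.0


# Integration test: uses a fake collaborator, so more of the
# architecture is exercised and more can go wrong.
class FakeDiscountSource:
    def discount_for(self, customer_id: str) -> float:
        return 10.0


def test_final_price_integration():
    service = PriceService(FakeDiscountSource())
    assert service.final_price(50.0, "customer-42") == 45.0
```

Running both with a test runner such as pytest illustrates the efficiency argument: the unit test needs nothing but the function itself, whereas the integration test already needs a (here faked) second component.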

 

This model is often referenced as the “ideal” distribution to aim for. However, what is ideal is always dependent on context.

 

 

Spinning Top Model (Status Quo Example)

 

Model showing some automated unit tests at the bottom, more integration tests above, very few system integration tests, then human testing on top, consisting of some validation tests and a lot of verification tests

 

You’ve probably never seen this model before. That’s because I created it to reflect an example of how tests might currently be distributed. This model focusses on the testing target, as opposed to the testing layer, but we can still compare it to the test automation pyramid. The difference in distribution can be seen immediately.

 

In terms of automated tests, we do have unit tests, but not enough, compared to how many we want. We have the most on the integration level. A system integration test suite does exist, and is designed to test the team’s SUT along with other connecting systems that might exist; for example, in a company with multiple teams working on multiple related products. In this example, the team doesn’t have any “internal only” tests on the GUI level.

 

In this example, a large portion of the testing has not been automated at all. Most of these tests focus on verification of requirements and bug fixes. In general, much less validation testing is being done.

 

 

Party Hat Model (Goal?)

 

Model showing lots of automated unit tests at the bottom, fewer integration tests above, fewer E2E tests, fewer system integration tests, then human testing on top, consisting of lots of validation tests and some verification tests

 

This model may be familiar to you, or you might have seen something similar in the form of an “ice cream cone” model, which literally turns the test automation pyramid on its head and adds in human testing.

 

The “party hat” model also refers to testing targets, and balances out the “spinning top” to be more in line with the “ideal” distribution shown in the test automation pyramid, most notably including more automated unit tests. It also redistributes the balance between verification and validation tests, on the basis that more of the former would be automated.

 

This model also introduces automated E2E tests, which represent tests designed to cover complete user workflows, slicing through all layers of the system. The reason this is (crudely) greyed out is because, depending on your context and SUT, you might not want or need these tests. Just because you can doesn’t mean you should. It’s important to think about whether tests on this level are necessary at all, and if so, whether the value they provide would be worth the implementation and maintenance costs of automating them.
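One pragmatic way to treat E2E coverage as the opt-in it is described as here is to gate those tests behind an explicit switch, so they only run when a team has decided they’re worth the cost. This is a sketch of one possible approach, assuming Python’s standard `unittest` module and an invented `RUN_E2E` environment variable; the test body is a placeholder, since a real E2E test would drive a browser or API client against a deployed system.

```python
import os
import unittest

# Hypothetical flag: teams that decide E2E coverage is worth the
# implementation and maintenance cost opt in by setting RUN_E2E=1;
# everyone else skips this layer entirely.
RUN_E2E = os.environ.get("RUN_E2E") == "1"


@unittest.skipUnless(RUN_E2E, "E2E tests disabled (set RUN_E2E=1 to enable)")
class CheckoutWorkflowE2ETest(unittest.TestCase):
    """Would cover a complete user workflow, slicing through all
    layers of the system. Placeholder body only."""

    def test_complete_purchase(self):
        self.fail("replace with a real end-to-end workflow")
```

With the flag unset, the whole class is reported as skipped rather than silently absent, which keeps the cost/value decision visible in every test run.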

 


 

Find this useful?  I’m happy for you to use it as a basis for your documentation too.  Please just add appropriate attribution (e.g., linking to this post).
