Thoughts: Testing Strategies in a Microservice Architecture – Takeaways and Reflections


This post is based on some research we did about hypothetically moving to a microservice architecture/SOA.

This first post is a reflection on an article by Martin Fowler.
My personal problem with these testing approaches to a microservice architecture is that much of the testing is automated and in isolation. Automation is good because it lets us verify we are building the code right, but we need manual testing much earlier to make sure we are building the right code.
There are many benefits to this approach, such as the ability to independently deploy, scale and maintain each component, and to parallelize development across multiple teams. However, once these additional network partitions have been introduced, the testing strategies that applied to monolithic, in-process applications need to be reconsidered.

Testing Pyramid – How to focus the testing effort 

This is actually not that different from other automation test strategies I have read.

Aim to provide coverage at each layer and between layers, with the challenge being to stay lightweight.
The first three layers are best tested in isolation where possible. Component and integration tests may need to call on other services, but these can be stubbed to keep tests fast and maintainable.
Any external service that is out of the team’s control should be stubbed/mocked in order to test failure scenarios.
One key characteristic of this type of architecture is graceful failure! The system must fail gracefully, as it is prone to failure in many places.
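As a minimal sketch of testing graceful failure (all names here are illustrative, not from the article): stub the external dependency so it raises a network error, then assert that the service degrades to a safe fallback rather than propagating the failure.

```python
from unittest import mock

# Hypothetical product service that degrades gracefully when an
# assumed downstream recommendations dependency is unavailable.
FALLBACK = ["bestseller-1", "bestseller-2"]

def get_recommendations(user_id, fetch_remote):
    """Return personalised recommendations, or a safe fallback if the
    downstream service fails."""
    try:
        return fetch_remote(user_id)
    except ConnectionError:
        # Fail gracefully: serve a generic list instead of an error page.
        return FALLBACK

# Stub the out-of-our-control service to simulate a network failure.
broken_service = mock.Mock(side_effect=ConnectionError("service down"))
assert get_recommendations(42, broken_service) == FALLBACK
```

Because the stub is under the test's control, the failure scenario is reproducible on every run, which a real flaky network would never give us.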

Testing microservices in isolation:
Fast feedback, as there are no end-to-end tests.
Controlled environment – minimizes moving parts.
Unit tests can help to identify areas that are too tightly coupled, as the unit under test needs to be isolated from its collaborators.
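A small sketch of that isolation point, with hypothetical names: if the collaborator (here a payment gateway) can be injected and replaced with a test double, the unit is loosely coupled; if it cannot, the test itself has exposed the coupling.

```python
from unittest import mock

# Hypothetical order service; the payment gateway collaborator is
# injected so a unit test can swap it for a test double.
class OrderService:
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, order_id, amount):
        receipt = self.payment_gateway.charge(order_id, amount)
        return {"order": order_id, "paid": receipt is not None}

# The mock isolates the unit under test from its collaborator.
gateway = mock.Mock()
gateway.charge.return_value = "receipt-123"
service = OrderService(gateway)

result = service.place_order("o-1", 9.99)
gateway.charge.assert_called_once_with("o-1", 9.99)
assert result == {"order": "o-1", "paid": True}
```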

Resources for testing:
Expose internal resources such as logs, database commands and metrics so that the system can be tested from another layer.
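One way to picture this (a sketch with made-up names, not anything prescribed by the article): the service keeps counters and recent log lines and serialises them on demand, as an internal `/metrics`-style endpoint might, so a test at another layer can observe internal state without reaching into the process.

```python
import json

# Illustrative internal diagnostics resource: counters plus the most
# recent log lines, exposed as JSON for out-of-process inspection.
class Diagnostics:
    def __init__(self):
        self.metrics = {"orders_processed": 0}
        self.logs = []

    def record_order(self):
        self.metrics["orders_processed"] += 1
        self.logs.append("order processed")

    def snapshot(self):
        # Roughly what an internal metrics endpoint might return.
        return json.dumps({"metrics": self.metrics,
                           "last_logs": self.logs[-5:]})

diag = Diagnostics()
diag.record_order()
snapshot = json.loads(diag.snapshot())
assert snapshot["metrics"]["orders_processed"] == 1
```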

Business value is not achieved unless it is proven that the services work together to fulfill a larger business process, so testing in isolation will not suffice.

End-to-end tests:
Automate end-to-end tests using the GUI and the API, and also perform manual end-to-end tests. External services may still need to be stubbed/mocked. With more moving parts the system is more likely to fail, so write fewer end-to-end tests at the GUI level to avoid expensive maintenance and long feedback times. Are these story acceptance tests?
We need to define how long we are happy to wait for these to give feedback – 5 minutes?
The best tests would be data independent – not relying on any data having been set up beforehand, as that makes tests brittle.
In which sort of environment do we run these tests? Is running in production behind a feature toggle feasible? Do we keep a production-like environment running, with the downside of double the overhead of technically having two live environments?

Exploratory manual testing:
These are manual tests performed with a mission statement in mind. Ideally they are session based and time-boxed, from 45 to 135 minutes. What sort of environment is feasible for exploratory tests? A production environment with a feature toggle?

Have you solved the conundrum of testing microservices sufficiently in a continuous integration world? Please send me any resources or thoughts – I would be very interested to read them! 🙂

