Me starting the walk along the Causeway
While pondering the future of how we will be developing, and therefore testing, our software, we did a lot of research. I shared my first post about this the other day here.
I have since had a chat with Amy, whose talk on testing in a continuous delivery world I mention here.
Re-reading that post and the chat with her gave me some great insights and I am trying to relay parts of our conversation below. (Thanks for taking the time for the chat Amy!) 🙂
To mock or not to mock
One of my issues was the emphasis, also pushed internally, on testing microservices in isolation, with mocking in place.
Whereas in my mind you want to be able to integration test your services and make sure they play nicely together BEFORE they are live in production.
So how can we create end-to-end tests that touch as much of the stack as possible and mock as little of it as possible?
And if we do need to resort to mocking, why?
Our reasons for mocking would be two-fold:
1. Cost: the cost of resources during development may mean we cannot have every service that talks to the others up and running at once.
2. Third-party applications: we need to integrate with third-party software for which we have no sandbox environments, so those integrations would need to be mocked.
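To make the second reason concrete, here is a minimal sketch of mocking only the third-party boundary while keeping our own code real. The names (`PaymentGateway`, `checkout`) are hypothetical stand-ins, not anything from our actual stack, and the example assumes Python's standard `unittest.mock` library.

```python
from unittest.mock import Mock


class PaymentGateway:
    """Stand-in for a third-party client we have no sandbox for."""

    def charge(self, amount_pence: int) -> str:
        # In real life this would hit the vendor's live API,
        # which we must never do from a test.
        raise RuntimeError("no sandbox environment available")


def checkout(gateway: PaymentGateway, amount_pence: int) -> str:
    """Our own code under test, which depends on the gateway."""
    receipt = gateway.charge(amount_pence)
    return f"paid:{receipt}"


def test_checkout_with_mocked_gateway() -> None:
    # Mock only the third-party edge; everything else stays real,
    # so the integration between our own services remains honest.
    gateway = Mock(spec=PaymentGateway)
    gateway.charge.return_value = "txn-123"

    assert checkout(gateway, 500) == "paid:txn-123"
    gateway.charge.assert_called_once_with(500)


test_checkout_with_mocked_gateway()
```

The `spec=PaymentGateway` argument keeps the mock honest: calling a method that does not exist on the real client fails the test, which limits how far the mock can drift from the real integration.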
Amy shared a great question with me which I really need to use to help us decide where we invest in front-end automation tests and where we don't.
“What must never break vs. what can we break as long as we fix it quickly?”
This helps her team to decide what needs testing vs monitoring.
Keeping the business happy and keeping devs happy
As testers we can find out who is worried by frequent releases, find out what it will take to make them less worried, and base some tests around those worries as well.
Understanding your business’ needs is key.
But not only that, do not neglect the dev team. What annoys them? Slow build times, maybe? Can providing them with an SSD help? (I know we are lobbying for faster machines).
Be careful with metrics
Metrics can be useful but be careful how you use them. This was also a theme at the Lean Software Testing course I attended and wrote about here.
Production bugs will always be a good measure of whether your tests are in the right place and providing the value you hoped for.
If too many bugs get out into the wild then you need more tests (at some level).
At the beginning of transitioning to a service-oriented architecture, manual testing may be quite intensive, because you need to start building confidence in the product and in the tests that are being run.
Once you have a reasonable number of tests running then you’ll hopefully find fewer bugs during manual testing and can relax a little.
I have some action items from the chat:
1. Find the key people, assess what worries them about releases, and work out how to give them confidence in the release process.
2. Push for unit and integration tests.
3. Push for more manual testing time to begin with, then define the key journeys to run Selenium tests against.
EDIT: Since the conversation this great post has appeared: a real-world story about microservices, which ends with what sounds like a change of heart from Martin Fowler.
“…my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.”
So we will see where we end up in practice! But for our proof of concept I have a lot to do right now!
P.S: There is a Brighton Tester meetup on the 27th of May about BDD, with free food and drink! RSVP here so I can get enough! 😉