Yes, developers should verify their bug fixes, but …

First of all, thank you to everyone who has spoken to me about this on Twitter, on the blog, in person and even in the aisle of Tesco! 😉

This is part 2 of the conversations about my blog post “Should developers verify their own bug fixes?”. Thank you to all participants!

The main consensus is “Yes, developers should verify their bug fixes.” And now come the many “buts” (is that even a word?).

Yes, developers should verify their bug fixes, but…

  1. … still use the 4 eyes principle, or two heads are better than one. It does not necessarily have to be a tester who helps with the bug verification; it could be another developer, a product owner/manager, scrum master or designer. -Basically, most have stated that another pair of eyes makes the team feel comfortable that the bugs were fixed correctly.
  2. should this be a discussion about quality? I expect devs to replicate the bug and test it’s fixed as part of fixing it, regardless of whether there’s another step in the process or not. –I would like to add here that I find the pointer towards quality interesting and want to explore that more. I think both point 1 and 2 also point towards the fact that you should own your work and make sure it gets the attention it needs.
  3. clear bug reports help build trust. –I totally agree. If you can rely on your team members to communicate issues effectively and clearly, this builds trust, and you know that if you follow the steps indicated you can then verify whether the bug is now fixed.
  4. if we fall back to “aren’t the AC enough?” then why test at all? Aren’t the AC enough for a dev to verify all their work? –I think this went in a slightly different direction. I still feel that testers’ time is well spent exploring stories and testing the implementation of features. In the process they may find bugs and report these (hopefully clearly). And the steps and expected results in those bug reports should be enough for anyone to verify the bug is fixed.
  5. is verifying a fix not more than that? Is it not also retesting a full feature and doing a regression of the system’s area? Hence it should be a tester doing the verification. -Well, I think they are different tasks and exercises, at least in my current context. If the fixes to bugs are so large that the whole feature or area needs regression testing, then maybe there are other process issues to think about.
  6. there should be testing expertise available if needed. -Yes!
  7. in certain contexts the risk is too high not to involve testers in bug verification. -This was generally the consensus from people working in financial industries. I totally agree and think this makes sense. These are often large legacy systems and you never know what you may find as a result of a bug fix. I mostly deal with newish code (less than 2 years old).

 

So this is my second post on this subject. I think a lot of my thinking comes down to team sizes and speed of working. I am trying to work out if we have enough testers across the projects and how we may have to change the way we work if more projects drop on us in the near future. One of these changes may involve the way we think about testing and what a tester’s role is on the team. Will their role still involve verifying all bug fixes? I think I’d like to push for No and see what happens. More on this if I get somewhere, or not. 🙂

So far this has been great in getting an idea of how people may react and how it may affect the projects we are working on. Thank you all!

Agile Testing: Frustrations can make us creative

  

[Image: creative space design of a restaurant]

I love the fact that I have worked at a few different companies and hence have gotten to know so many awesome people.

It also has the great benefit that I still get to take part in different conversations around Agile Software Development practices, especially when the focus is on testing.

The other day I was forwarded a little article by Mountain Goat Software which focused on the fact that in an agile team, at the end of the sprint, everyone tests. Some developers may not like manual testing, so they will do automation testing or pair with testers to find efficiencies.

“At the end of the sprint, everyone tests.”

I am wondering a few things around this line of text. Why does everyone only test at the end of the sprint?

I know there have been many discussions that “testing” means manual testing being done by a human. But I do think that developers are often using Test Driven Development to write code and hopefully are also running the code they write to test that it does what they intended and maybe also what it does not (depending on experience, mindset, time pressures and learned habits).
Nevertheless the above quote is important for agile teams. I have experienced that as a tester you can easily become a bottleneck because the developers think their part is done. Ideally they will have written the code and written automation tests as well. Now the manual testers get to test that the new feature works and hasn’t broken anything.

But this leads to the problem that the developers are sitting there twiddling their thumbs or starting more work, adding it to the bottleneck until the pipeline completely collapses.

So I like the fact that there was an emphasis on everyone testing, as the team as a whole is responsible for shipping the work.

I do hope though that testers and developers in a team get to test early as well, or maybe even designers or product managers, to make sure that the feedback loop is as fast as possible. The statement runs the risk of being interpreted to mean that testing should only happen at the end of the iteration. (I don’t think that is actually what they are saying.)

The excerpt I read actually states that the better the agile team, the later the testing can occur. This seems wrong to me, but maybe I am taking it out of context and it is referring to the time needed for the whole team to be testers.

 

“Establishing a mindset of we’re all testers at the end, does not mean that programmers need to execute manual test scripts, which is something they’ll probably hate.”

 

The article went on to say that developers may dislike using manual test scripts for testing and hence could maybe focus on other things such as automation scripts.

I actually think that it would be really beneficial for developers to do manual testing: not necessarily following test scripts, but exploring the features they wrote instead.

By all means make sure your code is well tested using automation tools, but you will not know if you have written the right code if you do not manually test it. You will only know that the code is written in the right way.
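
To make that distinction a bit more concrete, here is a tiny, made-up sketch (the function, the requirement and the discount rule are all hypothetical, not from the article or my projects): the automated test passes, so the code is written in the right way, yet only exploring the feature by hand raises the question of whether it is the right code at all.

```python
# Hypothetical sketch: the requirement was "give members a 10% discount on
# orders over £50", but the developer read it as "10% off every member order".

def discounted_price(price: float, is_member: bool) -> float:
    """Apply the discount the developer *intended* to implement."""
    if is_member:
        return round(price * 0.9, 2)
    return price

def test_member_gets_discount():
    # The automated test encodes the same misunderstanding, so it passes.
    assert discounted_price(20.00, is_member=True) == 18.00

if __name__ == "__main__":
    test_member_gets_discount()
    print("Automated check passes, yet manually exploring a £20 member order "
          "would show the discount should not apply at all.")
```

The automation happily confirms the code matches the developer’s intent; it takes someone actually using the feature to notice the intent itself was wrong.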

 

Frustrations can make us creative

I recently watched a video that Chris recommended on how frustrations can make us creative, and it highlighted how doing something you maybe don’t like can actually lead you to find ways to make it better, much more readily than doing stuff you enjoy all the time. So developers definitely can really gain from doing some manual testing. And not just developers but hopefully the whole team.

A sort of related example of this is that at Songkick we try to learn about what each department does and are encouraged to help the support teams. This has actually led to improvements being made to the admin systems when developers used them to help solve support cases. It is OK here to JFDI when an improvement can really help someone.

A small example was that someone added extra information to be pulled in on one screen, so that you no longer need 2 tabs open to find all the information you need to close a support case. This was a huge time saver for everyone and was only achieved because a developer used the tools he had created for the support team.

So I encourage everyone to test, collaborate and try something that frustrates you and see if you can make it less frustrating to do.