Yes, developers should verify their bug fixes, but …

First of all, thank you to everyone who has spoken to me about this on Twitter, on the blog, in person and even in the aisle of Tesco! 😉

This is part 2 of the conversations about my blog post “Should developers verify their bug fixes?”. Thank you to all participants!

The main consensus is “Yes, developers should verify their bug fixes.” And now come the many “buts” (is that even a word?).

Yes, developers should verify their bug fixes, but…

  1. … still use the four-eyes principle, or two heads are better than one. It does not necessarily have to be a tester who helps with the bug verification; it could be another developer, a product owner/manager, scrum master or designer. – Basically, most have stated that another pair of eyes makes the team feel comfortable that the bugs were fixed correctly.
  2. should this be a discussion about quality? I expect devs to replicate the bug and test it’s fixed as part of fixing it, regardless of whether there’s another step in the process or not. – I would like to add here that I find the pointer towards quality interesting and want to explore it more. I think points 1 and 2 also point towards the fact that you should own your work and make sure it gets the attention it needs.
  3. clear bug reports help build trust. – I totally agree. If you can rely on your team members to communicate issues effectively and clearly, this builds trust, and you know that if you follow the steps indicated you can verify whether the bug is now fixed.
  4. if we fall back to “aren’t the AC enough?”, then why test at all? Aren’t the AC enough for a dev to verify all their work? – I think this went in a slightly different direction. I still feel that testers’ time is well spent exploring stories and testing the implementation of features. In the process they may find bugs and report these (hopefully clearly). And the steps and expected results in those bugs should be enough for anyone to verify the bug is fixed.
  5. is verifying a fix not more than that? Is it not also retesting a full feature and doing a regression of that area of the system? Hence it should be a tester doing the verification. – Well, I think they are different tasks and exercises, at least in my current context. If the fixes to bugs are so large that the whole feature or area needs regression testing, then maybe there are other process issues to think about.
  6. there should be testing expertise available if needed. – Yes!
  7. in certain contexts the risk is too high not to involve testers in bug verification. – This was generally the consensus from people working in financial industries. I totally agree and think this makes sense. These are often large legacy systems and you never know what you may find as a result of a bug fix. I mostly deal with newish code (less than two years old).


So this is my second post on this subject. I think a lot of my thinking comes down to team sizes and speed of working. I am trying to work out if we have enough testers across the projects and how we may have to change the way we work if more projects drop on us in the near future. One of these changes may involve the way we think about testing and what a tester’s role is on the team. Will their role still involve verifying all bug fixes? I think I’d like to push for “no” and see what happens. More on this if I get somewhere, or not. 🙂

So far this has been great in getting an idea of how people may react and how it may affect the projects we are working on. Thank you all!

Thoughts: Accessibility testing for the web



Emma and I recently organised an intro to accessibility testing with the awesome people at Test Partners.

They are really up for doing talks or tutorials with companies and are super friendly and knowledgeable. Do reach out to them if you have any questions on how to get started.

Why is accessibility important?

Up to 20% of the UK population has a disability, and what’s more, research shows that 57% of the UK population benefit from accessibility features:

    • 20 million people over 50 in the UK – diminishing eyesight
    • 6 million+ people with dyslexia in the UK
    • 1.5 million people with arthritis in the hand or wrist – who may prefer keyboard shortcuts over a mouse
    • Anyone can have a temporary accessibility need – like a broken arm or hand


I have been aware of accessibility testing and really enjoyed Michael Larsen’s talk on the subject at the end of last year, but I did not look into it any further.

This is a shame really, because the internet is meant to be for everyone and therefore inclusive, but many websites are not.

Do we as testers do enough, or even know enough, about accessibility testing to point these things out? Are we knowledgeable enough to have these conversations about why we are excluding certain members of the population from using our websites or services?

Have you ever had these conversations?

At one place I worked, I had just hired a new tester who came from an external testing services provider. He knew quite a bit about accessibility and started to raise bugs against our website, for example for images missing alt text.

The things he was raising were really just good practice to have. But why had we not raised them before?

I never knew what to look for. I was more focused on functionality for able-bodied users, maybe because I could easily be that sort of user.

The positive effect the other tester had was that the designers and front-end developers became more interested in how to make the website more accessible and started to reform from within. They started to design good patterns in accordance with the Web Content Accessibility Guidelines (WCAG) 2.0.

I think this is the hard thing when it comes to accessibility: you can raise bugs, and therefore awareness, about missing accessibility features or keyboard navigation not working, but if the designers and front-end developers, and maybe more importantly the product team, are not considering accessibility from the start, then it is really hard to add it back in.
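
To give a flavour of the kind of bugs he was raising, here is a minimal sketch of an automated check for images missing alt text. This is purely an illustration under my own assumptions (Python with the beautifulsoup4 library, and a made-up HTML snippet), not the tool or process we actually used:

```python
# Minimal sketch: flag <img> elements with missing or empty alt text.
# Assumes beautifulsoup4 is installed (pip install beautifulsoup4).
# The HTML below is invented purely for the example.
from bs4 import BeautifulSoup

html = """
<main>
  <img src="/img/hero.png" alt="Two testers pairing at a laptop">
  <img src="/img/spacer.gif">
  <img src="/img/sales-chart.png" alt="">
</main>
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        # No alt attribute at all: a screen reader may fall back to the file name.
        print(f"Missing alt attribute: {img.get('src')}")
    elif not alt.strip():
        # An empty alt is fine for purely decorative images,
        # but it should be a conscious decision rather than an accident.
        print(f"Empty alt (decorative?): {img.get('src')}")
```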

Why do we not learn about these things when we start our testing careers? Was I just blind to it? The ISTQB certainly did not tell me about it.

If you are on the dojo, there is a great resource to get you started. Maybe there will be a 30-day testing challenge for accessibility testing soon as well. I would love that.

One thing that Steve and Paul from Test Partners showed us was this tool. You can add the bookmarklets to your browser’s bookmarks toolbar and then click on them to get visual feedback on how your website is built.

This helps with understanding the semantics of your page: how it is built and structured. For example, headings are usually large and lists usually have bullet points. However, the semantic structure also needs to be conveyed programmatically by means of tags in the source code, and these bookmarklets highlight where something has been tagged as a list or an image, for example.

This is a good starting point, I feel, to understand what flaws your page may have for screen readers. Screen readers use the tags in your source code to tell the user what each item on the page is and does.
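
In the same spirit, here is a rough sketch of pulling that programmatic structure out yourself, roughly what the bookmarklets visualise and what a screen reader has to work with. Again, this is just an illustration under my own assumptions (Python, beautifulsoup4, an invented HTML snippet):

```python
# Rough sketch: print the heading outline and simple element counts of a page.
# Assumes beautifulsoup4; the HTML snippet is invented for illustration.
from bs4 import BeautifulSoup

html = """
<body>
  <h1>Our products</h1>
  <div class="big-bold-text">Pricing</div>  <!-- styled like a heading, but not tagged as one -->
  <h2>Support</h2>
  <ul><li>Email</li><li>Phone</li></ul>
</body>
"""

soup = BeautifulSoup(html, "html.parser")

# The heading outline, roughly as a screen reader would let a user skim it.
for heading in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
    level = int(heading.name[1])
    print("  " * (level - 1) + f"{heading.name}: {heading.get_text(strip=True)}")

# The "Pricing" div never appears above because it is only styled, not tagged,
# as a heading - exactly the kind of gap the bookmarklets make visible.
print("lists:", len(soup.find_all(["ul", "ol"])))
print("images with alt text:", len(soup.find_all("img", alt=True)))
```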

I would love to pair with some users of accessibility-related tools at some point. On a similar note, this podcast on a blind architect was incredible for finding out how blind people actually use screen readers. TL;DR: at a way faster setting than you think.

Have you ever paired with a blind user? Or someone who cannot use a mouse? Or maybe someone who uses a magnifier?


Thoughts: Pair Testing…sort of


At the beginning of the year I attended TestBash in Brighton, and there was one talk that just stuck with me because of its practical application.

This was the talk by Katrina Clokie on Pair Testing. She even outlined how she trialled it in her job.

I recently got a buddy at work and we were talking about sharing knowledge. I really wanted to try pair testing, so we did a version of it.

Step 1 – Finding the right task

The team did some work on a tool that I wasn’t too familiar with, and as part of our development process we created a testing mind map.

During coding, the developer will use this mind map to test his code and, depending on risk (and, sadly, often on time), the tester will also do some testing using the mind map.

In my pair testing example, the developer had done some testing and so had I, noting down some questions before involving my buddy. I then walked him through what we had tested so far and how the application was working.

Step 2 – What happened

Simply due to his different experience and knowledge, he asked some valuable questions which helped my testing go a bit deeper and got us thinking about other testing types such as performance and database behaviour.

For me this was an invaluable experience, as I learned to think about other testing types, techniques and scenarios, and I think it was also a great experience for my buddy, as he got to see bits of the company’s product catalogue he wouldn’t necessarily get to see on a day-to-day basis.

I want to try and make this a more regular thing, and also try it before I test something and let the other tester drive.

The other side of this is that we have a weekly test team meeting, and I will try to show features or functionality that the other members may not necessarily see but may have to pick up when I go on holiday. During this session we can also ask valuable questions about the new feature or product, which is easier than listening to a monologue when handing something over. I think!

As testers, I feel we generally want to know about the company’s products and all the things they can do for customers and for ourselves, so that is also why I envision this knowledge share being a good thing.

Do you do regular pair testing sessions? How do you structure them? Do you do knowledge sharing sessions with other team members?