Thoughts: Pair Testing…sort of

Pair Testing

At the beginning of the year I attended TestBash in Brighton, and there was one talk that stuck with me because of its practical application.

This was the talk by Katrina Clokie on Pair Testing. She even outlined how she trialled it in her job.

I recently got a buddy at work and we were talking about sharing knowledge. I really wanted to try pair testing, so we did a version of it.

Step 1 – finding the right task

The team did some work on a tool that I wasn’t too familiar with and as part of our development process we created a testing mind map.

During coding the developer uses this mind map to test their code, and depending on risk (and, sadly, often time) the tester will also do some testing using the mind map.

In my pair testing example, the developer had done some testing and so had I, noting down some questions before involving my buddy. I then walked him through what we had tested so far and how the application was working.

Step 2 – What happened

Because his experience and knowledge differ from mine, he asked some valuable questions which helped my testing go a bit deeper and got us thinking about other testing types, such as performance and database behaviour.

For me this was an invaluable experience, as I learned to think about other testing types, techniques and scenarios. I think it was also a great experience for my buddy, as he got to see bits of the company’s product catalogue he wouldn’t necessarily get to see on a day-to-day basis.

I want to try to make this a more regular thing, and also to try it before I test something and let the other tester drive.

The other side of this is that we have a weekly test team meeting, where I will try to show features or functionality that the other members may not necessarily see but may have to pick up when I go on holiday. During these sessions we can also ask valuable questions about the new feature or product, which is easier than listening to a monologue when handing something over. I think!

As testers I feel we generally want to know about the company’s products and about all the things they can do for customers and ourselves, so that is also why I envision this knowledge share to be good.

Do you do regular pair testing sessions? How do you structure them? Do you do knowledge sharing sessions with other team members?

Thoughts: Encouraging change when you are the only tester

Sometimes things come from small conversation snippets.

Some of you may know that I did a testinginthepub episode not too long ago. In it I talk a lot about what is awesome in my job now: being valued as a team member, being invited to kick-off meetings (these help us talk about what we are building and why), and teams asking me to mind map the testing areas.

However, this was not always the case. It is important to remember that anyone can encourage change. I was not here before the changes, so I will base my advice on what I did in my previous role as the first and only tester.

I did a talk on this last year at Tiny TestBash as well. The recording is available through the Dojo subscription.

 

Encouraging change when you are the only tester.

In my previous job, I joined a company which had a successful product that was starting to make them money, but no testing department. When I questioned how they tested their product, I got the following answer:

“We don’t have any testers. Our product owners and stakeholders do the manual testing and we have a vast suite of front end automation tests.”

Nevertheless they decided to hire me, and subsequently a team of testers. How this came about, and some of the learning along the way, is what I would like to share below.

 

The problem:

So I mentioned that the company had a successful product, people testing it and a vast suite of automation tests. So what was the problem? The company was six years old at this point, had moved to its own version of agile software development, and the need for a fast feedback loop from the product owners and stakeholders was becoming apparent.

However, these individuals generally already had a full-time job and could not attend stand-ups, reviews or planning meetings, let alone test the product. More often than not, the stakeholders were shown a feature at the review, and only then would they find out that it may not have been what they asked for.

So the problems they were having were:

  1. Slow or no feedback from stakeholders
  2. No dedicated feature testing of new or existing features
  3. Front end automation tests that back to back would take over 60 hours, and run in parallel would take 4 hours (these were also fragile)
  4. No trust in the Selenium tests – the tests were being fixed rather than the code
  5. No unit tests
  6. Some business logic tests but they relied on experts being available to check them
  7. No stakeholder involvement in meetings
  8. No-one is thinking like the user

 

How did I approach this list?

Slow or no feedback from stakeholders:

Well this wasn’t so easy, but I tried to do a couple of things.

First of all, it became clear very quickly that the business side and the development side of the company actually did not know what each other did. To be a representative of the user, I had to understand who our users are and what they do.

Engage with the whole company:

So my first port of call was to spend time with the business – not the managers, but the actual people who use the systems from an admin point of view – to understand the user pains.

This helped in several ways. I could understand where their frustrations came from, and in turn the users’ frustrations, and also see how the system was actually being used. In some places it wasn’t being used the way the development team thought; instead, workarounds were in place to get around bugs we didn’t even know existed.

This in turn led me to propose that we talk and communicate with each other more. We set up dedicated projects for internal bugs to be raised, with a dedicated team working on those, improving and maintaining the current systems rather than purely focusing on getting new features out. The project had two spokespeople from the business, who ran teams of account managers or accountants, so both sides of the system were represented. They owned a backlog and could prioritise their bug fixes and discuss them, as well as progress, in weekly meetings.

The important thing here was to engage with everyone across the business. Try to find out what everyone’s job role is, what they love about their job, what isn’t going too well, and why.

I did get pushback, though, when I asked the teams to forward issues to their managers to be logged in the dedicated project. I heard things such as “it has always been like that” and “I tried to tell someone before.”

We as a development team even offered to automate some data inputs, but people are always wary when it comes to change.

Changes:

Once we had started engaging, I tried to create regular sessions on what testing is, what development is, and so on.

So we had learned from the business, but now we needed to see if they were willing to understand what we do. We set up fun little coding dojos over lunchtimes, where we used a language called Processing and tried to illustrate how the numbers and letters create systems (or code).

Furthermore, we set up a brown bag lunch on agile testing. When I first joined I was asked if I would be writing the unit tests. This got me thinking, and I tried to collaborate instead: I am no coder or automation tester, but I can pair on writing those tests and give feedback.

In this way I tried to illustrate what I do, and I decided to make my commitments public to the team. This came about because I was constantly being asked if I would write the automation tests, such as unit and Selenium tests, for the devs – which wasn’t my role. I stumbled upon the Tester’s Commitments from James Bach around that time and wrote my own version, covering how I provide a service, etc.

http://www.satisfice.com/blog/archives/652

Alongside this I made suggestions about what the business/stakeholders could focus on for their UAT, held a brown bag lunch about how a manual tester fits into an agile team working on two-week sprints, and did some pairing with developers to understand how they work.

Consequences of change:

As much as this driving of change and engagement with the business had a positive effect – we were now on the road to better unit test coverage and a shared understanding of testing – it also ended up backfiring a little. I started to be seen as a quality gatekeeper, with a reliance on manual regression testing before a release, so testing became a bottleneck, especially as the team of developers grew and more streams of work were happening. More streams of work also meant a lot of context switching: from regression testing to bug fixes, to new features, to new architecture.

Note to self: do not let anyone call you QA, gatekeeper, etc. when you first start somewhere, if that is not your role.

The effect of this was the thinking that we needed a tester embedded in each product team, as well as dedicated product owners. A stakeholder or business person with a full-time job was not going to cut it any more if we wanted to get back up to speed and stay focused.

So we wanted to work towards integrating a tester into each team, able to test small iterations of work frequently, and to aim for full-time product owners so we could shorten the feedback loop and spend less time chasing stakeholders.

Hiring and building a team:

I was mainly involved with the hiring of the testers.

I have discussed the problem briefly above, and why hiring was a solution. We were trying to find testers who could integrate into the various teams and be the testing professional there. As the product teams were quite different, it was important to keep this in mind and not just hire three of the same type of tester.

For the hiring process I had a good idea of what I wanted: a good fit, an experienced self-starter, and not an automation tester but someone who would be a front end tester, with skills complementary to my own – as I was keen for the devs to own the automation tests.

For the actual process we did phone interviews and face-to-face interviews which included a live testing problem, just to see how someone would test and tackle a task, and whether they asked questions or not. How proactive are they? There are some very succinct and good resources on the ins and outs of hiring testers – I am mainly thinking of the ones by Rob Lambert – if you want to have a further read and get some more ideas.

Once we had a team (I counted two testers as a team, but we grew to four in total within my first year), I wanted the team to be engaged and constantly learning about testing.

Time-boxed sprints give you a great opportunity to try new things and experiment with testing techniques, and I didn’t want anyone to feel boxed in, while still having some consistency. Each tester was also working on a different product, which would need different testing activities and tools based on its context anyway.

This meant we had bi-weekly knowledge sharing sessions covering new features, testing techniques – you name it.

I also set up monthly one-to-ones and tried to share a blog post a week (or whenever I saw a good one) with the team. Furthermore, we added testing-related books to the library, which had the nice side effect that developers would see the titles of the books and have a chat with you, so you could sell them more on the subject.

There was an especially good session around Jerry Weinberg’s Perfect Software – a dev walked past and challenged me that perfect software doesn’t exist. Of course, the full title is “Perfect Software and Other Illusions About Software Testing”.

Also, as an organiser of the Brighton tester meet-up, I encouraged the team to attend events or even speak at them. One thing I learned from this was that not everyone wants to engage as much as I maybe do – at least not on the surface. And this is OK. Just keep doing your thing and don’t get downhearted. It can seem frustrating, like you are not getting through, but I would just persevere.

We did manage to have developers attend the tester meet up and even speak about testing as well.

So what does my story tell you?

The main thing I did was to communicate.

Find like minded people in your organisation and start sharing ideas, product knowledge, skills and experience and collaborate with them.

Start creating a library of resources and learning. This could be a physical library of books, or a wiki page with suggested reading/watching or regular events that you host inside and/or outside of the company.

Pair

Seek out people in different roles within your organisation and learn about their work. Try to understand what works for them and what does not.

But don’t forget people in the same role as you. Start pairing with other people in the same role in an effort to learn more about what they do, but also to help to build relationships.

Start socialising your improvement ideas with anyone who will listen. In my experience, planting the seed of an idea can make discussions about change easier later down the line. Get feedback on the ideas. Talk to people about how the future could be different. Listen to others.

I was passionate about what I do and tried to stay positive in the face of change, and when driving it.

Start gathering interesting trends about the work you’re doing – and then socialise these with the rest of the business.

Agile Testing: Frustrations can make us creative

  

creative space design of a restaurant

I love the fact that I have worked at a few different companies and hence have gotten to know so many awesome people.

It also has the great benefit that I still get to take part in different conversations around Agile Software Development practices and especially when the focus is on testing.

The other day I was forwarded a little article by Mountain Goat Software which focused on the idea that in an agile team, at the end of the sprint, everyone tests – and that some developers may not like manual testing, so they will do automation testing or pair with testers to find efficiencies.

“At the end of the sprint, everyone tests.”

I am wondering a few things around this line of text. Why does everyone only test at the end of the sprint?

I know there have been many discussions about whether “testing” means manual testing done by a human. But I do think that developers are often using Test Driven Development to write code, and hopefully are also running the code they write to test that it does what they intended – and maybe also how it does not (depending on experience, mindset, time pressures and learned habits).

Nevertheless, the above quote is important for agile teams. I have experienced that as a tester you can easily become a bottleneck because the developers think their part is done. Ideally they will have written the code and written automation tests as well. Now the manual testers get to test that the new feature works and hasn’t broken anything.

But this leads to the problem that the developers are sitting there twiddling their thumbs, or starting more work and adding it to the bottleneck until the pipeline completely collapses.

So I like the fact that emphasis was put on everyone testing, as the team is responsible for shipping the work together.

I do hope, though, that testers and developers in a team get to test early as well – or maybe even designers or product managers – to make sure that the feedback loop is as fast as possible. The statement is in danger of being interpreted to mean that testing should only happen at the end of the iteration. (I don’t think that is actually what they are saying.)

The excerpt I read actually states that the better the agile team, the later the testing can occur. This seems wrong to me, but maybe I am taking it out of context and it is referring to the time needed for the whole team to become testers.

 

“Establishing a mindset of we’re all testers at the end, does not mean that programmers need to execute manual test scripts, which is something they’ll probably hate.”

 

The article went on to say that developers may dislike using manual test scripts for testing and hence could maybe focus on other things, such as automation scripts.

I actually think it would be really beneficial for developers to do manual testing – not necessarily following test scripts, but exploring the features they wrote instead.

By all means make sure your code is well tested using automation tools, but you will not know if you have written the right code unless you manually test it. You will only know that the code is written in the right way.

 

Frustrations can make us creative

I recently watched a video that Chris recommended on how frustrations can make us creative. It highlighted how doing something you maybe don’t like can actually lead you to find ways to make it better, far more readily than only doing stuff you enjoy. So developers can really gain from doing some manual testing – and not just developers, but hopefully the whole team.

A sort-of-related example of this is that at Songkick we try to learn about what each department does, and we were encouraged to help the support teams. This has actually led to improvements being made to the admin systems when developers used them to help solve support cases. It is OK here to JFDI when an improvement can really help someone.

A small example: someone added extra information to be pulled into one screen, so that you no longer need two tabs open to find all the information needed to close a support case. This was a huge time saving for everyone, and it was only achieved because a developer used the tools he had created for the support team.

So I encourage everyone to test, collaborate and try something that frustrates you and see if you can make it less frustrating to do.

 

 

Don’t get stuck, try something different

 

View when trying a new running route – I saw a double rainbow!


If you don’t know by now, 5 months ago (5 months!!!) I joined Songkick.

One of the many things that appealed – apart from working with Amy, of course – was experiencing another way to develop software, namely Continuous Delivery.

So this post is a bit about not getting stuck in your job, routine or thought processes – try something different when you can, or when what you are doing doesn’t feel right!

How did I get into this topic?

Songkick has had quite a few new hires in the tech team over the past few months, and as part of the on-boarding there is an intro to how we test at Songkick.

For 2016 Amy has been mixing this up by making the session super interactive and it really works.

As part of this we spoke about exploratory testing over test plans and how your emotions can drive your journey through the system when you are hunting for information and bugs.

All these “huh!?”, “oh!”, “OH!?”, “ah” and “erm” moments can drive your test journey through the application, as a tester or a developer.

The flip side of this is boredom. Boredom is another emotional state to avoid. Question why you are bored. Have you exhausted your testing journeys? Do you need to mix it up by using different devices?

Boredom and feeling a little stuck was what drove me to try something different.

First a quick back story.

As part of a merger in early 2015, Songkick is doing some re-aligning of technologies – to keep it as vague as possible. I have been part of one of the teams working towards one platform. This has been super exciting and provided a great learning experience, as I could learn about both companies’ technological histories.

But this means we have some (or a lot of) manual regression testing we do as a team. This is generally unusual for projects there, but it is a way of providing confidence for the team and the whole business.

Now where am I going with this?

My boredom (don’t tell my team – I keep telling them that all testing is fun) came from the fact that I was regression testing something we already had confidence in, over and over again on different devices and browsers. But it is quite a risky project, so we would rather test twice, in person alongside automation.

In the retrospective I was surprised how concerned everyone was that the testing was left to one person while the rest of the team tried to finish off other tasks before the release date.

One of our team members had the great idea to block out some time and get everyone to test for an hour together.

Go with the energy.

Straight away I jumped at this and expanded on the idea. We discussed that this one-hour session would involve all team members and that we would test on different devices and browsers.

I was, however, not keen on providing a list of testing tasks or test plans, as I wanted everyone to feel free to explore. So I created some user scenarios for everyone to have and try. They were meant as inspiration for what to test and a way of visualising how the software might actually be used.

This seemed to work well. The volume of scenarios I supplied initially frightened some people, but by emphasising that these were ideas and did not have to be ticked off, we got somewhere really quickly.

Make it as easy as possible.

One major thing with testing systems can be the test data. It can be time consuming to create and manage, putting people off spending time on testing. I took this away from the team and provided them with different test data options for the various scenarios. Everyone got the same selection of test data, meaning we also had a mini performance test of several users accessing the same data. Being in ticket sales, though, we generally deal with short, high spikes of traffic, which we test using automation. 🙂
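As a rough sketch of how that hand-out could work (all names and numbers here are made up for illustration): pre-build one shared pool of test data and deal a ready-made option to each participant, so nobody burns session time creating data.

```python
import itertools

# A pre-created, shared pool of test data (illustrative values only).
TEST_EVENTS = [
    {"event": "Concert A", "tickets_left": 5},
    {"event": "Concert B", "tickets_left": 0},    # sold out
    {"event": "Concert C", "tickets_left": 500},
]

def assign_scenarios(testers):
    """Deal each tester a ready-to-use test data option, cycling the
    pool so several people land on the same data."""
    pool = itertools.cycle(TEST_EVENTS)
    return {tester: next(pool) for tester in testers}

# Four testers, three data options: the first and fourth share an event.
assignments = assign_scenarios(["ana", "ben", "cat", "dev"])
```

Because the pool cycles, two testers end up on the same event at the same time – which is exactly where the accidental mini performance test comes from.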

Entice them

This can be as simple as buying Jaffa Cakes or bringing in some home-made cookies. Always works a treat!

 

Trying a new recipe!


Results?

We had a focused testing session and logged several issues – none of which will block our release, thank god, but some of which would have taken ages for one person to find.

We also logged usability improvements, as our designers and usability experts took part and got to experience the app first hand in a repetitive situation.

Overall this was a success: we got many eyes and devices used at the same time, and it was a great team experience!

So don’t get stuck – do try something new! You never know what you may achieve.

MeetUp: Brighton’s #Testactually at MatchboxMobile

Shameless plug about our next Tester meet up in Brighton. RSVP here to make sure we have enough food and drink!

It is on Weds 11th Nov, and this time the food, drink and venue are sponsored by Matchbox Mobile.

The talk, by Emma Keaveny, will be on a really interesting subject: Dark Patterns.

Have you ever found yourself downloading a toolbar you didn’t want? How about suddenly receiving emails because you accidentally signed up for a mailing list?

Well, if you have, then you have been whacked with a Dark Pattern! These patterns are designed to fool you into applying for or buying things you had no intention of getting. In this presentation I will go through the different types of dark patterns that are out there, how we should approach these as testers (is there a right way or a wrong way to deal with them?), as well as covering some pros and cons of these controversial, barely legal techniques that are used more frequently than you would think.

Speaker:

Emma Keaveny is an enthusiastic, eager and always-learning tester. For the last year she has been working for Interica on archive and retrieval software.

She is also co-organiser of BrighTest Actually, a Brighton based testing meetup where fellow testers get together either for a few drinks, games or some interesting talks.

Events: BrightTest Take 2 – BDD: Testing Requirements

Brighton Eye

Shameless brief plug about the next Brighton tester meet up.

This time it is all about BDD, with a presentation by Alan Parkinson (and, I think, a game!).

The venue is provided by Rakuten Attribution, and food and drinks by Crunch Accounting.

Hopefully see you there. RSVP Here.

P.S: If you want to speak in Brighton let me know. 🙂

WHAT:

We will focus on Behaviour Driven Development with a presentation by Alan Parkinson!

Beers, Soft drinks and Pizza sponsored by Crunch.

Summary:

BDD – Testing Requirements

Many software projects suffer from a hidden requirements crisis. These are the projects where the client says “That’s not what I wanted”, or where testers discover functionality and business value problems after the code has been written. The common root cause of these problems is miscommunication, ranging from cognitive bias to the different domain terminology used by stakeholders, business analysts, developers and testers.

Behaviour Driven Development (BDD) solves the problem by enhancing user stories with examples called scenarios. Each scenario is an example of how the requirement should behave in a real-world situation, and scenarios aid communication by allowing team members to ask questions to clarify their understanding.

Alan will demonstrate and discuss the different causes of requirements miscommunication and how BDD can bridge the communication gap using examples.

WHO:
Alan is CEO and Co-founder of Hindsight Software, a start-up focused on supporting BDD in the Enterprise. He has worked in the industry for over 15 years and in a wide range of industry sectors, including embedded real-time systems, safety critical systems, e-commerce, and Algorithmic trading systems.

He is a passionate believer in finding talented engineers and works with a “Do Tank”, the New Engineering Foundation, to influence the UK government and educational bodies on STEM education. Alan is also involved in a number of open source projects, including Karma, sbt and Cucumber-JVM.

WHERE:

RAKUTEN ATTRIBUTION

42 Frederick Place, Brighton, East Sussex, BN1 4EA

WHEN:

Turn up from 7pm.

Talk(s) start from 7:30 pm.

SPONSORS:

Venue: Sponsored by Rakuten Attribution

http://marketing.rakuten.com/attribution

Food and Drink:

Beers, soft drinks and Pizza sponsored by Crunch Ltd.

https://www.crunch.co.uk/

Follow Up: Testing Strategies in a microservice Architecture

Me starting the walk along the Causeway



While pondering the future of how we will be developing and therefore testing we did a lot of research. I shared my first post about this the other day here.


I have since had a chat with Amy, whose talk on testing in a continuous delivery world I mentioned here.
Re-reading that post and chatting with her gave me some great insights, and I am trying to relay parts of our conversation below. (Thanks for taking the time for the chat, Amy!) 🙂


To mock or not to mock


One of my issues was the emphasis, also pushed internally, that microservices would be tested in isolation, with mocking in place.
Whereas in my mind you want to be able to integration test your services and make sure they play nicely together BEFORE they are live in production.
So how can we create end-to-end tests that touch as much of the stack as possible and do not mock most of it?
And if we do need to resort to mocking, why?


Our reasons for mocking would be two-fold:
1. Cost – the cost of resources during development may mean we cannot have all the services that talk to each other up and running at once.
2. Third-party applications – the need to integrate with third-party software for which we do not have sandbox environments, so it would need to be mocked.
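To sketch the second case (`PaymentGateway`-style names like `checkout` and `charge` here are purely illustrative, not our real services), Python’s `unittest.mock` lets us stand in for a third-party client we have no sandbox for, while still exercising our own integration code for real:

```python
from unittest.mock import Mock

def checkout(gateway, amount):
    # Our integration code: call the (real or mocked) third-party
    # gateway and translate its response into our own result.
    response = gateway.charge(amount)
    return "paid" if response["status"] == "ok" else "failed"

# In a test, stand in for the sandbox we don't have:
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

result = checkout(gateway, 100)              # our code runs for real
gateway.charge.assert_called_once_with(100)  # the expected call was made
```

The mock only covers the boundary we cannot control; everything on our side of that boundary still runs, which is exactly the trade-off being weighed here.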


Amy shared a great question with me which I really need to use to help us decide where we invest in front end automation tests and where we don’t:
“What must never break vs. what can we break as long as we fix it quickly?”
This helps her team decide what needs testing vs. monitoring.



Keeping the business happy and keeping devs happy


As testers we can find out who is worried by frequent releases, find out what it would take to make them less worried, and base some tests around those worries as well.
Understanding your business’s needs is key.

But not only that – do not neglect the dev team. What annoys them? Slow build times, maybe? Could providing them with an SSD help? (I know we are lobbying for faster machines.)



Be careful with metrics


Metrics can be useful but be careful how you use them. This was also a theme at the Lean Software Testing course I attended and wrote about here.


Production bugs will always be a good measure of whether your tests are in the right place and providing the value you hoped for.
If too many bugs get out into the wild, then you need more tests (at some level).


At the beginning of a transition to a service-oriented architecture, manual testing may be quite intensive, because you need to start building confidence in the product and in the tests that are being run.
Once you have a reasonable number of tests running, you’ll hopefully find fewer bugs during manual testing and can relax a little.



I have some action items from the chat:

  1. Find the key people, assess what worries them about releases and how to provide them with confidence of the release process
  2. Push for unit and integration tests
  3. Push for more manual testing time to begin with, and then define the key journeys for Selenium tests to run against.


EDIT: Since the conversation, this great post has appeared – a real-world story about microservices, which ends with what sounds like a change of heart from Martin Fowler:
“…my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.”
So we will see where we end up in practice! But for our proof of concept I have a lot to do right now!

P.S: There is Brighton Tester meet up on the 27th May about BDD with free food and drink! RSVP here so I can get enough! 😉