Testing Handbook


Please see here for the latest testing handbook - https://github.com/openfoodfoundation/openfoodnetwork/wiki/Feature-Testing-Handbook

Spree Testing - Crowdtesting

Thanks Sally. This looks very clear and comprehensive. Pinging @mags, who has offered to do some testing.


@sstead hi, is there any smoke test definition or longer test plan that can be used as a reference, outlining the basic steps to test a decent coverage of an OFN install?


@pmackay - I’ll generally place an order in a shop to check that checkout is working, a kind of smoke test? Typically I’ll test the functionalities that have been modified, not the whole site. In some cases, it’s necessary to test the site more broadly, in which case it’s a matter of doing all the basic things (creating an enterprise and running an order cycle) and then also testing the more ‘advanced features’ (like inventory, E2Es, tagging, etc). Sorry, this isn’t really a formal test plan. You can use the user guide as a kind of checklist for functionalities to test and ensure are working as described.


Note: when testing PinPayments and PayPal

PinPayments won’t process a payment if the amount is too low; $2 works.
PayPal: if the Solution is ‘Sole’ then the Landing Page must be ‘Billing’. If the Solution is ‘Mark’ then the Landing Page must be ‘Login’.
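For quick reference, here is a minimal sketch encoding those two constraints as code. This is not Spree or OFN code; the constant and method names are made up, and the PinPayments threshold is illustrative ($2 is simply known to work).

```ruby
# Hypothetical helper (not OFN/Spree code) capturing the note above:
# PayPal's 'Sole' solution requires the 'Billing' landing page and
# 'Mark' requires 'Login'; PinPayments rejects very small amounts.
VALID_PAYPAL_PAIRS = { "Sole" => "Billing", "Mark" => "Login" }.freeze
PINPAYMENTS_SAFE_MINIMUM = 2.00 # dollars; illustrative threshold

def paypal_config_ok?(solution, landing_page)
  VALID_PAYPAL_PAIRS[solution] == landing_page
end

def pinpayments_amount_ok?(amount)
  amount >= PINPAYMENTS_SAFE_MINIMUM
end

puts paypal_config_ok?("Sole", "Billing") # true
puts paypal_config_ok?("Sole", "Login")   # false
puts pinpayments_amount_ok?(1.00)         # false
```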


Hey @sstead, we just copied the whole handbook into a GitHub wiki page: https://github.com/openfoodfoundation/openfoodnetwork/wiki/Feature-Testing-Handbook

We want to make it easy for new testers to refer to. However, while discussing it we noticed a couple of problems with the way we currently report testing results.

Firstly, adding a long list of screenshots and paragraphs to an already crowded PR makes it hard to follow.

Secondly, it’s hard for the devs to know whether the PR passed the test run, and which test cases failed and why.

To that end, we suggest starting simple: share a link to a Google Doc with the PR’s testing results, but in a more structured form. We’d provide a table at the top of the file with the name of each test case and its status: success or failure. Then, below, we can go into detail and share the screenshots and their descriptions as we do now.

What do you think?


@sstead I would suggest a slight modification to this.
Every issue should (in the upcoming GitHub non-bug issue template) list the “acceptance criteria”: what the tester needs to check in order to accept the PR. It shouldn’t be the dev who tells the tester what to test, but the product owner who opened the issue and will ultimately accept what is built (probably after the speccing session).
BUT the dev should warn the tester about any dependency found while working on it (like: “I also had to touch the product section because of this, so make sure nothing has broken there”).
What do you think about that?

Also, on this: speccing is supposed to tackle that, not the tester, no? If the issue has not been well specced, and the result is not easy to use and we have to redo it, we have a problem with our processes. It’s too late to catch that at the testing stage.

Same comment: it’s not the tester who should do that, IMHO; it should already be in the acceptance criteria written by the Product Owner in the issue after the speccing session. Of course the tester can report if something is not practical, user friendly, etc., but that should probably become a separate issue to improve usability, no?


Thanks @sauloperez and @MyriamBoure for the feedback.

I agree that in some cases, the test notes are long and complicated and I’m happy to move the notes to a google doc. Here’s a draft template, please make adjustments/comments - https://docs.google.com/document/d/16UZXJdemEI3EmcpFzJeuLchzkWKaaLoiJThr4cROig0/edit?usp=sharing

In other test cases, the feedback is simple and I don’t see the need for writing long formatted notes in a Google Doc. For example, this PR: https://github.com/openfoodfoundation/openfoodnetwork/pull/1923. Do you think that for simple PRs I can leave my notes in the PR thread? Or would you prefer everything in a Google Doc?

I’ll make some updates to the ‘testing handbook’ to reflect your comments.


Hi @sstead! I requested access to the template :slight_smile: Feel free to put it on the global drive, maybe; there should be a testing section, and if there isn’t, please create one.
For simple feedback I guess it could be done in the thread, but my feeling is that it’s not much longer to put it in a Google Doc, so I would say: let’s agree on a process and stick to it (I’m sure @sauloperez will love that jajajaja!). Also, in this example there is no indication of which environment you tested in, for instance.

We had some questions yesterday when discussing with @Rachel; she is going to set up a call so that we can discuss some broader questions and kickstart the organization of our broader testing team. For instance: is it enough to test in one browser only? What are our promises in terms of browser and version compatibility? Should we also test mobile responsiveness in all our tests? Can we test in any language, or should all the tests be done in English? Whom should each tester ask for staging? If we want testers to be able to stage branches themselves (do we want that, to save some dev time and gain responsiveness?), how could we easily do that? These are some questions that need to be discussed and that we can all start thinking about :wink:


Sorry about that; it’s in the global drive, under tech, then testing notes.
I’ll put all future test notes in Google Docs, and we’ll see how that goes.
Yes, happy to discuss the testing protocol. It would be good to have the developers involved; they can tell us what info they need from the testers.


@sstead IMO with the new testing process we document the whole thing in a separate document, but still give a bullet-point summary in the thread of what remains to be fixed before the PR can be merged. From what we discussed with Enrico, Pau and Rob on case 1967, which I dealt with, that’s what I understood, and it makes sense to me: the dev wants to know what they still need to fix for their PR to be merged. If it’s not clear and they need details, they can go to the full doc.


Agree on inviting the devs, btw. @Rachel, I suggest we invite people from Europe+Aus to start with, to make it simpler for time zones: that means Lynne, Sally, Rob, Maikel, Pau, Enrico, and you and me. It would be great if at least one dev from those invited is there :wink:


I see a remaining bottleneck we need to remove in order to have a fluid process: we are two testers in France, Rachel and I, and there are 4 PRs in test-ready that need staging before we can test them. @paco is pretty quick to stage, but he also has his day job, so he can’t stage 4 PRs in the same day as soon as we are ready to test the next one. How can we make that smoother? @lin-d-hop suggested that testers could learn to stage, but we discussed it with @paco and @rachel and it doesn’t seem so easy, at least with the current French staging setup… Also, testers who are not near devs with access to a staging server might not know whom to ask for staging, etc. (like @tschumilas, for instance). Any ideas on how to make that smoother? @sauloperez @maikel @enricostn @oeoeaio
Something like: we click a button, the branch is automatically staged, and we can test. That would be a dream, of course :slight_smile:


Yeah, that’s something I dream of for our other Coopdevs projects as well. I think we can start using other staging environments for now.

So I suggest that, from the PR itself, you ask any of the devs to deploy to staging. Then we reply telling you where it was deployed. This way other devs won’t have to bother if someone has already done it, and you’ll know where to test.


Thanks @sstead I added my feedback on the doc.


Do you think that for simple PRs I can leave my notes in the PR thread? Or would you prefer everything in a Google Doc?


For simple feedback I guess it could be done in the thread, but my feeling is that it’s not much longer to put it in a Google Doc, so I would say: let’s agree on a process and stick to it (I’m sure @sauloperez will love that jajajaja!). Also, in this example there is no indication of which environment you tested in, for instance.

Although I understand that simple small bug fixes have short test results, I’d prefer to follow my engineering mindset and stick to always having a doc and sharing its link.

While this has the not-so-big “burden” of having to copy and share the link in the PR, we will all know that the PR has a doc we can refer to, regardless of its size. Less thinking, fewer mistakes.

Of course, this is just a first iteration. We will refine this process along the way.

Do we agree @sstead @MyriamBoure?


Ok I will take your advice and do this. We can see how it goes for a few months.


Hi all! So, complementary to the testing notes and the Handbook, I would like to push the testing handbook forward and into more detail.
This echoes some discussions we had on Discourse but also on Slack.

As @luisramos0 put it there, we need something in between the current guide and automated tests like https://github.com/openfoodfoundation/openfoodnetwork/blob/master/spec/features/admin/variant_overrides_spec.rb

The purpose would be to develop a guide for new testers to run sanity checks, release validations and PR validations, for several reasons:

  1. Help onboard more testers and share knowledge. It’s awesome that Sally and Myriam have so much testing experience that they know, when testing one thing, that another thing has to be tested as well. But I don’t want to be the next one to accumulate this knowledge, with everyone fearing that my leaving the testing work would damage the project. So I need a place to put this knowledge.
  2. Avoid some recurring bugs. E.g. https://github.com/openfoodfoundation/openfoodnetwork/issues/3335. This might have been avoided if, when testing fees, we had known that A. it is better to stage them on a server that enables this context and B. we should test commas.
  3. Gather some testing tools in more detail: how to set up a PayPal sandbox? (I only discovered this recently thanks to Maikel… so far Sally and Myriam were testing with real money.)
  4. Help non-testers check the production side of their instance from time to time, to spot bugs before their users do (the guide would need a section on how to do a sanity check).
  5. Use it to feed some automated tests?
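On point 2, here is a minimal sketch of the comma issue, assuming “test commas” refers to decimal separators (locales that write “1,50” instead of “1.50”). The helper name and behaviour are illustrative, not OFN code:

```ruby
# Hypothetical normalizer, assuming the recurring fee bug involves
# comma decimal separators: "1,50" breaks naive parsing, so convert
# the comma before building a BigDecimal. Deliberately naive: it
# handles a single decimal separator only, no thousands separators.
require "bigdecimal"

def parse_fee_amount(input)
  BigDecimal(input.to_s.strip.tr(",", "."))
end

puts parse_fee_amount("1,50") == BigDecimal("1.5") # true
puts parse_fee_amount("2.00") == BigDecimal("2")   # true
```

A test case along these lines (enter a fee as “1,50”, check the order total) is exactly the kind of step the guide could capture so every tester exercises it.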

Would this be redundant with the user guide? In parts, maybe. The way I see it, for new features, the testing scenarios (which should be written beforehand) would feed the user guide. I don’t see screenshots or context sentences in the testing guide: just plain testing steps, such as Kristina is writing in her PR (which shouldn’t be her task, btw, but that’s another topic).
Don’t we have testing scenarios in the testing notes? Yes we do, but searching for past tests of a particular page is a Google Drive nightmare :slight_smile:
Would it be difficult to maintain? Maybe. But if it is, that would mean that nobody uses it, so it would just die, and that’s not a bad thing. After almost a year of testing, I feel I need this document, so I’m willing to take the risk of writing a first version.

I have created a first doc structure here with an example on checkout:

A Google Doc would be better for a quick first version, but I guess we could also use GitBook and create a tester handbook…

Before I go forward with this, are there any concerns, or maybe even a veto, @MyriamBoure @sstead @luisramos0 @Kirsten @sauloperez @lin_d_hop?


This is superb @Rachel


Sounds like a great plan Rachel


That sounds great @Rachel! I’m with you and happy to iterate on it once you manage to extract @sstead’s knowledge out of her head before she leaves :slight_smile: Maybe what we could do is open the testing notes and extract from them the cases we have tested. Btw, it could be an occasion to delete all testing notes older than 3 months, maybe? The less data on servers, the happier the climate :slight_smile: