Automated visual testing

We have good automated testing, but it’s not visual: it only checks for certain elements and text on the page. We also always perform manual testing, which is time-consuming and still doesn’t catch every little display error.

One visual testing tool is https://percy.io/, from a Californian company. It works by comparing screenshots of predefined pages. Any page that changes in appearance has to be approved by one of our developers or testers. Unexpected side effects are more likely to be noticed at this stage.

Advantages:

  • Can reduce manual testing time.
  • Can decrease shipped bugs and display errors.
  • Can make testing more fun.

Disadvantages:

  • Requires a tester to decide which pages would be best to track.
  • Requires a developer to mark the pages to track. This is a one-off task in the beginning; the markers are then maintained along with the rest of the automated tests.
  • The service has a monthly fee.
  • It still can’t catch all bugs, since we can’t capture every page and some pages depend on production data.
  • Testers have to be careful not to rely too heavily on the tool, since it can’t catch every case.
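To make the “mark the pages to track” task concrete, here is a minimal sketch of what a spec helper could look like in a Rails/Capybara suite like ours. The helper name, the `:skipped` return value, and the `PERCY_TOKEN` guard are my own assumptions; `Percy::Capybara.snapshot` is the call provided by the percy-capybara gem.

```ruby
# Hypothetical spec helper: take a Percy snapshot only when the service is
# configured, so local runs and unconfigured CI builds are unaffected.
module VisualSnapshots
  def visual_snapshot(page, name)
    # Percy's client authenticates via PERCY_TOKEN; without it we no-op.
    return :skipped unless ENV["PERCY_TOKEN"]

    # Provided by the percy-capybara gem; uploads the rendered page for diffing.
    Percy::Capybara.snapshot(page, name: name)
  end
end

# Usage inside a feature spec (path helper is illustrative):
#
#   scenario "shop front renders" do
#     visit main_app.shop_path
#     visual_snapshot(page, "Shop front")
#   end
```

Keeping the guard in one helper means the suite behaves identically with or without Percy, so developers only touch it when choosing which screens to monitor.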

The basic version costs $149 per month and includes 10,000 screenshots per month. Last month we had around 250 automated test builds, which means we could define about 40 pages to take screenshots of. We can start with a 14-day trial. Update: we could get a free plan, see below.
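The screenshot-budget arithmetic above works out as follows (all numbers are from the paragraph above):

```ruby
# Screenshot budget on Percy's basic plan, using last month's build count.
screenshots_per_month = 10_000 # included in the $149/month plan
builds_per_month      = 250    # automated test builds we ran last month

# Every build snapshots every tracked page, so the budget divides evenly.
pages_we_can_track = screenshots_per_month / builds_per_month
puts pages_we_can_track # => 40
```

Note this assumes every build snapshots every tracked page; fewer builds in a quiet month would leave headroom.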

I asked Percy’s support about a discount for charities or free software. Their reply:

Yes! We’d be happy to make something work for you. What is the project?

Our open source plan is pretty new, but we plan to sponsor some open source applications if they meet some qualifications, namely not-for-profit and without corporate sponsorship. If the organization has some reasonable way to pay for Percy, we’d ask that they do so and we’d offer a discounted plan.

We don’t actively advertise it, but we do have a “free” plan for tiny users—once you complete the 14-day trial of the Growth plan, things will keep working and you’ll be moved to our limited-use Free plan that includes 500 visual diffs per month.

So, feel free to start a trial and give it a go! If you decide you want to use Percy in your open source app at the end, you might fit into the free plan or we can set something up for you depending on your usage.

“We have good automated testing”

Honestly, I don’t quite share that opinion. I don’t have scientific evidence yet, but based on the code I’ve seen so far, I believe we have poor test coverage. In most of the PRs I’ve worked on, I had to add missing tests or refactor existing ones before touching any code. Things like reports, for instance, are very poorly tested. I’ll try to provide some data soon.

Not only that, but it’s also well known that our feature tests fail in CI every now and then, forcing us to re-run the builds. We should fix them before jumping into implementing another testing layer.

As for visual testing itself, we also have to consider the cost of setting up the tests and maintaining them. It’s very common to get a considerable number of false positives, which have to be manually retested and resolved. Also, with every UI change (an element moved 5px, for instance) we need to update the visual tests.

So I think these kinds of tests pay off for very big projects, where their maintenance cost is far below the QA cost they save. For us, however, I think they would make us spend more time than we currently invest in testing.

IMHO this money would be better invested in improving the quality of our integration tests and our test suite in general, which have a much lower maintenance cost than visual tests. Then, once we have a robust and reliable test suite, we could take another look at visual testing, as long as the benefits outweigh the costs.

Thank you. I actually share your view on the test suite; I find the unreliable feature tests especially frustrating.

Regarding the maintenance of visual tests, I think it’s mainly that extra step of going through all the screens that changed. There is no spec describing what’s in a screenshot, so there is no code to maintain unless we want to change which screens we monitor.
The biggest time commitment is the human who has to sign off on every changed screen for every pull request. That time should be saved later during manual testing. But that is just a theory.

Hey all :)

We probably need to have a bigger discussion on standards for the different kinds of testing we do, and incorporate the potential of visual testing into it. As more people get involved in developing the platform, and more hands dip into the codebase and make changes, it is becoming evident that we need to be more deliberate about how we proactively cover quality assurance…

…however, we have such limited manual testing coverage that adding human effort beyond what the devs do is going to be difficult, because (1) we only have Sally and Oliver doing testing, and (2) we don’t have a lot of money to spend and have traditionally spent most of it on developers.

What would be great to see is some kind of outline of a QA strategy / proposed standard we want to reach, alongside a view of where we are starting from and what steps we need to take (project-wise and/or ongoing-human-effort-wise) to reach that standard.

Getting an understanding of test coverage (integration tests, test suite, manual testing, all the testing things!) and where we ideally want to go will help to make this a more strategic approach to quality that could be funded via co-budget and owned by the entire OFN community.

< / $0.02 >
