Proposal: Pipeline & Release Management, Expert Reviewers and Expert Testers

Hi all,

There’s been chatter around the traps, as non-AU-led development becomes more and more prevalent (yay!), about how the AU team can move away from being a single point that everything must go through, and how we can empower people outside the AU team to fulfil the same role.

The following is the outcome of a discussion that @RohanM, @oeoeaio and I had last week about how we could better manage getting issues that are picked up and worked on through code review, testing, merging, and then release. It’s high level and lacking in some detail, which we are hoping y’all can assist with :slight_smile:

Setting up a global pool of Expert Reviewers

At the moment all PRs need a final review by @oeoeaio, @maikel or @RohanM before they can be pushed to master. This creates bottlenecks for other teams and also incurs a fee for the hours used to complete this work.

It makes good sense to look at setting up people outside of the AU team to be able to fulfil this role within their local OFN dev community. However, this role requires a level of qualification and understanding of many many things within the OFN codebase, and so we believe there needs to be a “certification” process to qualify as an “Expert Reviewer” (placeholder name, no doubt a better title can be thought up).

This certification process would be a set of criteria that the candidate would need to meet, things around their knowledge of the code base, their knowledge of how to best set up GitHub branches, merging experience, etc etc.

**Input required:** What do you think the criteria for this accreditation should be?

Setting up a global pool of Expert Testers

There is also a bottleneck for testing new features, functionality tweaks and bug fixes, where @sstead is required to do a final test before things go live, and this is a service that also incurs a charge from OFN AU. We think it makes sense to look at certifying testers for the different teams around the globe, the same as setting up Expert Reviewers.

What we haven’t figured out is the criteria that would qualify someone to fulfil this role. @sstead and @Oliver, perhaps the two of you have thoughts on this?

**Input required:** What do you think the criteria for this accreditation should be?

Using a Release Manager to manage the process

Our final thought was that, in the same way I manage the AU team’s work to ensure that issues and PRs get done, there is merit in having someone coordinate pushing PRs through an expert reviewer and tester to done, and then coordinating releases. This role would also be responsible for gardening tasks on GitHub, ensuring that old PRs are closed and everything is running smoothly.

I’ve been doing some of this role up to now, and would be willing to take on the rest of it, along with setting up and kicking off the Reviewer and Tester pools. However, it also makes sense to have a back-up for times when I’m away, or if I ever (unfortunately) get hit by a bus :grin:

OK, so that’s all our thinking. Note that it only covers the second half of the pipeline, where issues are already defined, picked up and developed, and a PR is ready for review. There’s definitely a need for similar thinking around the front end of the process (specifying what we are to build, creating issues, designing, getting input, etc. Ping @enricostn), but we’re biting off the easier bit first :slight_smile:

Now over to y’all. Thoughts?

Ping @oeoeaio @maikel @RohanM @Kirsten @lin_d_hop @NickWeir @stveep @Matt-Yorkley @CynthiaReynolds @sigmundpetersen and everyone else with a vested interest :slight_smile:


Many thanks for creating this thread @danielle, @RohanM and @oeoeaio :clap:

I think that clear and documented processes are really important in general, but especially when there are different teams involved, different TZs, and different levels of knowledge of the code base and the product itself.

There is generally this idea of core contributors, denoting a group of people who know the code and the product best. It seems that the weight of this knowledge falls heavily on the AU team right now, and we must find a way to spread this knowledge and transfer it to other teams.

As for the code review part, I think this group of core contributors should dedicate some time to involving other people while following the development of a feature, reviewing the code, etc. Long comments on GitHub don’t help much IMHO, and some small meetings could help.

One thing that I’ve seen help a lot is splitting big tasks into smaller ones. I know that it seems rhetorical, but once you learn how to do that you can focus the discussion on a small part of the application without having to explain all the OFN history every time.

This helps code review, knowledge transfer, quality assurance (testing), etc., and bit by bit we’ll end up with more people joining the core contributors team.

Regarding Testing
For me, it took some time to get a full grasp of all the different elements of OFN’s functionality, so that I could easily predict desired behavior and spot when something was not right. Knowing that someone is ready to be a final tester mostly comes down to how confident they are that they know what to look for. It’s difficult to describe what someone needs to show before they can be a tester, but I would say they need to have spent a lot of time playing with the different features (from the perspective of different types of users), be comfortable reading documentation (user guide, past GitHub issues, etc.) and/or have past experience doing software testing. There are different pathways to becoming a confident tester, and it will depend on the individual. @Oliver, I’d be interested to know how you got yourself up to speed before you started testing, and how you think we could empower more testers.


Hi Sally,
@sstead
I think in my case it was just a matter of using OFN lots on a daily basis.
Isn’t end-user testing (we haven’t called it that here) fairly simply a case of “give it to the user and see if they can break it”? I wonder, therefore, whether rather than getting one user to a certain level of knowledge and ability, it might be more beneficial to get 2 or 3 people testing the same changes. Maybe then one will pick up things the others have missed?

@RohanM @oeoeaio @maikel thoughts on this?

What do you think the criteria for this accreditation should be?

Here is a technical perspective. A reviewer or a tester takes a pull request and produces an answer: “yes”, “no” or “maybe, I don’t know either”. We want to know whether a person performs as well as another person doing that task, so we can look at pull requests that both people reviewed or tested. If a sufficient number of the results are the same, then we call them equally qualified for this task.

There are some parameters in here. How many pull requests need to be reviewed by two people? We can simplify this to groups of people. So how many pull requests need to be reviewed by New Guy and one of the already “qualified” people? And how many of these have to be successful?

This sounds a bit harsh, like exams, but it’s just looking objectively at past pull requests and how well things have gone so far. If there were things someone didn’t know, it will just take a little more time until they do.

A simple process could look like this: every time a “qualified” person reviews or tests a pull request and comes to the same conclusion as the person before, that person earns a point. If changes are requested, the person loses a point. Once a person has, let’s say, at least 10 points, we can call that person qualified. We can track that somewhere in the forum. People who would like to become qualified put their name down, and then we look at the recent pull requests and start counting. The number that qualifies has to be estimated with some experience, I guess.
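
To make the counting concrete, here is a minimal sketch of how that tally could work (Python just for illustration; the data format, verdict labels and the 10-point threshold are placeholders, and in practice the verdicts would come from the actual review history on GitHub):

```python
# Minimal sketch: tally agreement points for a candidate reviewer.
# Each record pairs the candidate's verdict on a PR with the verdict of an
# already-qualified reviewer on the same PR ("approve" or "request_changes").

QUALIFICATION_THRESHOLD = 10  # hypothetical: points needed to be called qualified

def tally_points(review_pairs):
    points = 0
    for candidate_verdict, qualified_verdict in review_pairs:
        if candidate_verdict == qualified_verdict:
            points += 1  # same conclusion as the qualified reviewer
        else:
            points -= 1  # the qualified reviewer came to a different conclusion
    return points

def is_qualified(review_pairs):
    return tally_points(review_pairs) >= QUALIFICATION_THRESHOLD

# Example: 11 agreements and 2 disagreements gives 9 points, so not qualified yet.
history = [("approve", "approve")] * 11 + [("approve", "request_changes")] * 2
print(tally_points(history), is_qualified(history))  # -> 9 False
```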

I don’t know if this is too technical. If a person feels ready to become a final reviewer or tester, then we could just look at their recent pull requests and judge with our gut feeling. That’s much easier and takes the bigger picture into account instead of just counting points. We could require something like at least three people looking at the case and agreeing that the person does a good job. Done.

I think I prefer the less technical approach… but it would be good to have some set of metrics that could be used as a guide. Maybe just a couple of hurdles: I think we’d want to see at least 10 PRs from a given person, and probably at least the same number of “successful” reviews of other people’s work, before we’d even start looking at them as a candidate for core contributor. I suppose it would also be preferable if those 10 PRs included at least a couple of bigger pieces of work that demonstrated an ability to understand the interaction between multiple parts of the codebase…


Another point of view (which does not include suggestions for what to do when a contributor keeps making the same mistakes over and over).
I am thankful for all the comments on my PRs, as I was rusty when I joined and they helped me get up to speed, but sometimes it was getting OTT :slight_smile:
Also, most of my code changes were made by taking examples from the current code base, which is not perfect (that doesn’t mean it can’t be improved).

  • So, there is no perfect code. It is possible to tweak code forever, and every developer can contribute to that.
    That’s why there should be a time limit for reviewing a pull request; if nobody reviews it within that time, it can be merged. If a reviewer is too late and wants to make changes, he/she should change the code and contact the contributor with an example of how it should be done. All such changes need to be added to the “PR requirements/style guide”.
    Also, there should be a limit on the number of reviewers. If one person understands the code and you do not, maybe you are not putting in enough effort?

  • If there is a comment in a pull request about style, that style needs to be added to the “PR requirements/style guide” (if it isn’t there yet).

  • Make your comments in one go. Switching between tasks is very time consuming.

  • I am not sure how to quickly explain decisions that were made based on insights gained while writing the code, which the reviewer can’t see when reviewing it.
    Maybe a chat on Mumble is a good idea.

  • Raising a short question on the code is not a good idea either. If you already know the answer or have a suggestion, just write it.

  • Do keep in mind that there is business value and there are requirements behind the code as well (though that’s no excuse to write bad code).

There will always be special cases and exceptions to all of these rules, so do not take my thoughts too literally :slight_smile:

Hey @softgust, thanks for adding your thoughts. They are appreciated :slight_smile:

Here’s some thoughts in response:

I don’t agree with this approach because it’s too black and white. Think of a scenario where a completely new dev comes in and creates a PR, no one has time to look at it within the designated period, and so it gets pushed live and breaks the entire system. Not a good result. There needs to be a discussion within the community about how to improve the flow of code reviews so that there are fewer bottlenecks, but I don’t think that means just allowing anything through. The Linux founder still checks every single PR before it’s merged… and at least we’re looking at how to extend the capability of final review and merge beyond the AU team.

Does this then impact the discussion and community involvement that comes from different devs coming in and looking at things? It’s a great way to get to understand the codebase and what others are doing, so I don’t see a problem with multiple people reviewing, especially while we’re trying to get more expert reviewers up and running.

Could you bring these up at the next dev catchup? Worthy of discussion with the group.

Cheers!

Good point! My comment was too black and white, and I didn’t mean that everything should go through. But what if the code doesn’t break anything, and just needs some styling which is “nice to have”, or can be done in spare time, or via the next issue/PR? And we have test coverage, Continuous Integration and staging servers, which should cover the “broken” cases. And good code is very time consuming and/or expensive. My point is that maybe we are spending too much time on discussions and on making the code “shiny”. Otherwise the code base would be way better (or maybe there is another reason which I am not aware of).
And I don’t agree with the comparison of Linus Torvalds with OFN, as the latter can’t deliver everything through volunteers anymore (not to mention that there are currently 514 issues in GitHub); there are business demands which require deadlines and professional developers, who btw have to find motivation despite such a slow pace (I am just moaning :slight_smile:).
But hopefully everything will be resolved with time :slight_smile:

I share the same opinion. Besides, GitHub already gives us the tools: Contributors to openfoodfoundation/openfoodnetwork · GitHub. By digging into the Insights section we can probably get the information we need.
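
For what it’s worth, here is a rough sketch of pulling that information programmatically from the public GitHub API rather than clicking through the Insights page. Note the assumptions: this only counts commits (not reviews or tests), and unauthenticated requests are rate-limited, so treat it as a starting point only.

```python
# Sketch: list commit counts per contributor for the OFN repo via the GitHub API.
# Unauthenticated calls are rate-limited; pass an auth token for heavier use.
import requests

url = "https://api.github.com/repos/openfoodfoundation/openfoodnetwork/contributors"
response = requests.get(url, params={"per_page": 100})
response.raise_for_status()

for contributor in response.json():
    print(f"{contributor['login']}: {contributor['contributions']} commits")
```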