How long would it take to clear the OSPR backlog?

Hey everyone! :waving_hand:

Prompted by Xavier’s questions from one of the Core Contributor Sprint Retros, I recently ran a little analysis to get a better understanding of:

  1. The monthly volume of hours required to review OSPRs
  2. The current size of the OSPR backlog (in hours)

… and to see how much time would be required per CC/maintainer to clear the backlog.

The results were interesting:

Monthly volume

In Q3-Q4 of 2025, the average number of OSPRs opened per month was 440.

Assuming an average review time of 2-4h per PR, the number of hours required per maintainer/CC to process them all would be 8-15h/month [1].
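As a quick sanity check, the arithmetic behind these figures can be sketched in a few lines (the 2-4h review range and the 114-reviewer count are the assumptions stated in this post, not independently verified numbers):

```python
# Back-of-the-envelope check of the monthly load figures above.
prs_per_month = 440        # average OSPRs opened per month, Q3-Q4 2025
review_hours = (2, 4)      # assumed engineering review time per PR (low, high)
reviewers = 114            # coding CCs + maintainers (see footnote 1)

low = prs_per_month * review_hours[0] / reviewers   # lower bound, hours/month
high = prs_per_month * review_hours[1] / reviewers  # upper bound, hours/month
print(f"{low:.1f}-{high:.1f} h/month per reviewer")
```

Rounding the bounds gives the 8-15h/month range quoted above.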

Current backlog size

I tried a few different filters on the Contributions board to determine backlog size:

Again, if we assume that engineering review [2] takes 2-4h per PR on average, the number of hours required per maintainer/CC to clear the backlog would be:

  • 1-3h
  • 0.5-1h
  • <0.5h, respectively.
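The same arithmetic generalizes to any backlog count. A small sketch, with made-up backlog sizes purely for illustration (the actual per-filter PR counts are in the linked data set, not reproduced here):

```python
# Hours each maintainer/CC would spend to clear a backlog of `backlog_prs` PRs,
# under the same 2-4h-per-PR and 114-reviewer assumptions as above.
def hours_per_reviewer(backlog_prs, hours_per_pr, reviewers=114):
    return backlog_prs * hours_per_pr / reviewers

# Hypothetical backlog sizes for illustration only -- NOT the real filter results.
for backlog in (150, 75, 25):
    lo = hours_per_reviewer(backlog, 2)
    hi = hours_per_reviewer(backlog, 4)
    print(f"{backlog} PRs: {lo:.1f}-{hi:.1f} h per reviewer")
```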

What this means

Of course, maintainers and CCs have different areas of expertise, and are more suited to reviewing OSPRs against some repos than others. This is an aspect that the numbers above don’t take into account.

However, they do seem to suggest that the amount of effort required to clear the OSPR backlog doesn’t vastly exceed what the group of existing maintainers and CCs could reasonably handle.


More info

You can find the complete data set, as well as notes on how it was compiled, here.


[1] Based on data from GitHub and the wiki, the Open edX project had a total of 114 coding CCs and maintainers at the time I ran the analysis.

[2] This analysis focuses on engineering review only. It might be interesting to collect similar data for product review in the future.


It seems harder than it should be to figure out how many coding CCs there are. I wonder why not everyone is added to the “committers” team on GitHub? (55 people)

I see only 100 people listed on the wiki as “Code Contributor” and I suspect that many of them do not regularly review OSPRs. So I’m not sure the 114 number is correct?

Of course, maintainers and CCs have different areas of expertise, and are more suited to reviewing OSPRs against some repos than others.

It’s not just expertise - most CCs are only able to approve PRs for certain repos, although they can theoretically review PRs in any repo, which is still helpful.

That’s a good question, maybe @feanil could weigh in here?

That number is the sum of

(as of Jan 20, 2026).

The latter aren’t listed on the wiki; I had to reference data from a couple of other sources to establish a count of them.

Yep, even without merge rights CCs can help maintainers out a lot by reviewing OSPRs that touch code they’re familiar with.

I don’t see this being done frequently on the repos that I track as OSPR manager. Not sure if it’s a knowledge issue (are CCs simply not aware that they have this option?), a matter of capacity, or something else :thinking:

@itsjeyd Thanks for crunching the numbers on this! That might be helpful context for @e0d when finalizing a scale of contribution levels for partners, as part of the Partners as maintainers proposal.

Given the overall promised core contributor capacity and the number of PRs merged each month, it does sound like it should be quite doable to catch up on the backlog… And yet we don’t. Expertise and rights to merge are definitely a factor; another is that many core contributors don’t get the committed hours to contribute from their organization. And I wonder if there are also different types of PRs to consider.

I would guess that teams able to do their own reviews are a big part of the large number of merge requests merged each month. It’s a good thing that teams can work like that, but that’s not going to be representative of the experience of new or more casual contributors. I wonder if you could isolate the merge requests from contributors who aren’t part of the team maintaining the repository, and see how they differ?

Imho, at first it isn’t easy to feel legitimate commenting on MRs in an unfamiliar codebase. There are benefits to it, though, both for the reviewer, who gets to learn about the codebase in meaningful and manageable chunks (a PR), and for the maintainers, who get the low-hanging fruit and nits pointed out for them. Reading the maintainers’ comments after having already done a pass is also a way to learn: we see what we missed and get more context on how the maintainers think. But that’s not necessarily obvious before attempting it…

Maybe we could set up some sort of sprint, like an “Open edX Contributions Reviews Spring Cleaning”, where all core contributors are invited to review pending contributions - maybe with points for each review on a pending MR where the core contributor doesn’t have merge rights, and a leaderboard? It could be easier to feel legitimate if others are also doing it.


That sounds like a nice idea to try! :+1:

Can you say more about the kinds of data that you’d be interested in here? It sounds like you might be asking about comparing the number of PRs by maintainers that get merged each month to the number of merged PRs coming from other community members, but I wanted to confirm.

I’ve updated that team with all the known coding core contributors. We’ll have to keep it up to date - that’s in our runbooks, but I’ll add something to do a regular re-sync to keep from missing anything.


Great, thank you @feanil :+1:

@itsjeyd

Alright, I have created a ticket for it then: Contributions Review Spring Cleaning · Issue #163 · openedx/wg-governance · GitHub - I’ve listed some elements we will need to figure out to better define it, and see if it’s worth doing.

Almost, but if we only consider the PRs authored by maintainers, we ignore the not-uncommon scenario where the maintainer reviews PRs from their own team - the other team members might not be maintainers or even core contributors themselves.

It’s probably a bit hard to properly identify those, as teams can span multiple organizations (e.g. on blended projects). So looking at the PRs from maintainers vs. the rest could already be interesting, even though it might understate the difference: some of the maintainers’ teamwork would be counted as occasional/external contributions.
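For what it’s worth, the maintainers-vs-rest split discussed above could be sketched roughly like this. The maintainer set and sample records below are hypothetical placeholders; in practice the PR list would come from the GitHub REST API (e.g. `GET /repos/{owner}/{repo}/pulls?state=closed`, keeping only entries whose `merged_at` is non-null):

```python
# Sketch: partition merged PRs by whether the author is on the maintaining team.
def split_by_maintainership(prs, maintainers):
    """Return (team, external) lists of PRs, keyed on the author's login."""
    team = [pr for pr in prs if pr["user"]["login"] in maintainers]
    external = [pr for pr in prs if pr["user"]["login"] not in maintainers]
    return team, external

# Made-up sample data in the shape the GitHub API returns.
sample_prs = [
    {"user": {"login": "alice"}, "merged_at": "2026-01-05T12:00:00Z"},
    {"user": {"login": "bob"}, "merged_at": "2026-01-07T09:30:00Z"},
]
team, external = split_by_maintainership(sample_prs, {"alice"})
print(len(team), len(external))  # 1 1
```

As noted, author-based partitioning would still count a maintainer’s non-maintainer teammates as external, so the real gap may be somewhat larger than this split suggests.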