Pull Request Review Delays

Yep, I was describing an ideal scenario, and to achieve it we’d need to get around the problem that you mentioned.

Agreed, this could be a way to bootstrap increased CC coverage for individual repos.

That’s what I was trying to express, yes :slightly_smiling_face:

If I were to summarize the current situation I’d say that both the CC program and the maintainership pilot are great initiatives. There just seems to be a bit of a disconnect between the two. We could probably gain a lot by implementing a process that would bring the two groups closer and enable them to support each other better, as well as by increasing CC coverage.

Gathering some data about where the longest delays are currently happening will probably yield additional and/or more specific insights.

As mentioned here, @Michelle_Philbrick, @gabrieldamours and I are planning to meet next week to talk about reporting needs for highlighting and better understanding delays and impediments in PR review workflows.

CC @e0d @nedbat @sarina @Kelly_Buchanan @Felipe


There have been various discussions in different places related to this topic since the last post, so I’m giving a quick update here, for those who might not have seen them.

Maintainer program: getting edx-platform fully maintained

The maintainer pilot program is now entering phase 3, and one piece of good news is that getting edx-platform fully maintained is being considered as one of its goals, as it has been identified as one of the main current blockers for PR reviews. Thanks for the input on this @Michelle_Philbrick & @itsjeyd !

Core contributors as backup reviewers

Blocker - automated deployments on edx.org

On the front of getting core contributors involved as a backup when maintainers aren’t available, one of the blockers discussed with @Kelly_Buchanan is the fact that currently, trusting someone to merge on the project equates to giving them deployment rights on edx.org – so it’s definitely much harder to trust someone with merge rights, given the implications.

The usual solution discussed for this is forking, but many are wary of the impact on project contributions and the amount of drift that would end up accumulating on the fork – understandably so imho.

Alternative to edx.org fork

It’s still an important blocker to resolve though, as it will hinder collaboration and trust within the project until we can solve it. There is another option though that would not require a fork: keep the automated deployments of the project master as they currently are, but introduce an approval/review step for deployments to edx.org when the code to be deployed has been merged without anyone from 2U being involved:

  • If a PR merges on master that is from someone from 2U or has been approved by someone from 2U => automated deployment, like now
  • If a PR merges on master without anyone from 2U involved => automated deployment stopped, until someone double-checks the merge, and greenlights the deployment

Deployments could remain automatic for most PRs; the only addition would be an approval step for code deployments by someone from 2U, i.e. blocking deployments until someone has checked that the merged code is safe to deploy on edx.org (or has worked on fixes/reverts to allow the build queue to resume). This way you’re sure that no code that hasn’t been reviewed by 2U ever reaches edx.org, without any of the risks of an actual fork, and with a strong incentive to quickly double-check the community’s independent review work. At the same time, merges by other community members become less risky, because they don’t immediately go to production without anyone from 2U looking.
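To make the proposed rule concrete, here is a minimal sketch of the gate the deployment pipeline would apply – purely illustrative: the function, the placeholder usernames, and the way 2U membership is looked up are all assumptions, not actual Open edX tooling.

```python
# Hypothetical sketch of the proposed deployment gate.
# Assumes the pipeline can see a merged PR's author and approvers,
# and has a list of 2U GitHub usernames (placeholders below).

TWO_U_MEMBERS = {"alice-2u", "bob-2u"}  # placeholder usernames

def deployment_allowed(pr_author, approvers, deploy_greenlit=False):
    """Return True if the merged PR may be deployed to edx.org.

    - PRs authored or approved by a 2U member deploy automatically,
      as they do today.
    - Otherwise the pipeline pauses until someone from 2U explicitly
      greenlights the deployment (deploy_greenlit=True).
    """
    involved = {pr_author, *approvers}
    if involved & TWO_U_MEMBERS:
        return True  # 2U already reviewed: keep automated deployment
    return deploy_greenlit  # block until a post-merge double-check

# A community-only merge is held until greenlit:
print(deployment_allowed("community-dev", ["cc-reviewer"]))        # False
print(deployment_allowed("community-dev", ["cc-reviewer"], True))  # True
# A merge with 2U involvement deploys automatically:
print(deployment_allowed("community-dev", ["alice-2u"]))           # True
```

The same check is what the deployment tooling would need to implement to "recognize the origin of the changes" – in practice that means inspecting the merged PR's author and approver lists rather than anything about the commits themselves.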

@jristau1984 @nedbat @Kelly_Buchanan What do you think? Would that help with trusting core contributors more easily with reviews?


TBH, I don’t see how this reduces risk or increases capacity. It delays the review until post-merge when the deployment pipeline is now blocked. Will the review then be somehow quicker than a review pre-merge? I don’t see how it would. But now it would be under the pressure of getting something else out the door. The overall capacity to review changes going to production hasn’t changed. The reviews have simply been delayed. Maybe I’m missing some subtlety here.

This also requires that the deployment tooling be changed to recognize the origin of the changes in the pipeline to be deployed.

If we can do quick reviews of community review work, let’s do it pre-merge, and keep things simple.

It doesn’t delay the review until post-merge – to be clear, the review would be the one done by the core contributor who would :+1: the merge, and it would replace the maintainer’s review when maintainers aren’t available. The merge is what decides whether a specific piece of code is accepted into the codebase and the project. What would change is the edx.org-specific part, where you would get a second chance to do a review before the code gets deployed to edx.org, and to monitor the deployment itself – to avoid code being pushed to production without anyone from 2U being aware or there to handle issues/revert if needed.

Of course, it’s better if a maintainer reviews before the merge itself - and there would still be an opportunity to do that pre-merge. What you would gain is a second opportunity before deployment, even if the code review is passed on to someone who isn’t from 2U.

It might also be useful for areas of the code where 2U isn’t the maintainer, or where core contributors already merge without anyone from 2U being involved – the instructions at https://openedx.atlassian.net/wiki/spaces/COMM/pages/3334635570/Merge+Guidelines+for+Coding+Core+Contributors show that this is also creating constraints there, not just in the current discussion of review backups/escalation.

The 2U review might not be quicker than a review pre-merge, but it would allow the rest of the community to be unblocked in the meantime. When we have to wait for months for a review, it means having to keep rebasing the work against the other evolutions of the codebase, and it prevents other community members from benefiting from the contributed work. The time & energy cost of a delayed review is borne entirely by the contributor.

Having to deal with the backlog of unreviewed upstream changes when wanting to push new changes isn’t ideal, but it would balance the burden in these cases, and realign everyone’s incentives around getting all reviews done within a reasonable time.

Happy to contribute time/work to help with that. :slight_smile:

@antoviaque @e0d @gabrieldamours Following yesterday’s conversation about PR reporting I added two date fields to the Contributions board:

  • Date opened
  • Ready for review since
    • Note that review is short for engineering review here.

For the repos that I’m responsible for, these new fields have already been populated. (@Michelle_Philbrick I’m out of time to spend on this this week, but I could help update the fields for your repos starting next week.)

So you can now filter OSPRs based on how long they’ve been open/in the queue for engineering review.
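To illustrate what those two date fields make possible, here is a hypothetical snippet that computes how long each OSPR has been waiting for engineering review and surfaces the most stalled ones – the PR names, dates, and field names below are made up for the example, not data from the actual Contributions board.

```python
from datetime import date

# Made-up OSPR records mirroring the board's two new date fields.
osprs = [
    {"pr": "example/repo#1", "date_opened": date(2023, 1, 10),
     "ready_for_review_since": date(2023, 1, 12)},
    {"pr": "example/repo#2", "date_opened": date(2023, 3, 1),
     "ready_for_review_since": date(2023, 3, 5)},
]

def days_in_review_queue(ospr, today):
    """Days a PR has been waiting for engineering review."""
    return (today - ospr["ready_for_review_since"]).days

today = date(2023, 3, 20)
# Sort so the PRs that have waited longest come first.
stalled = sorted(osprs, key=lambda o: days_in_review_queue(o, today),
                 reverse=True)
for o in stalled:
    print(o["pr"], days_in_review_queue(o, today), "days in queue")
# example/repo#1 67 days in queue
# example/repo#2 15 days in queue
```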

For the record, I also renamed the Notes field to Last PM check to make the contents of that field easier to understand for people less familiar with the Contributions board.

It didn’t take a lot of time to populate the new fields retroactively. So if we wanted to be able to filter OSPRs by how long it took for them to get merged/closed, I think we could add another field for recording the merge date and populate that as well.

CC @nedbat

Syntax for date filters for reference.


Thanks, @itsjeyd!

Also to add, we’d like to collect more use cases for reporting in order to know where we should focus reporting creation/effort.

We’ve added a page here for people to add use cases to, so anyone here who would like to contribute, please do!


Thanks for getting the wiki page set up @Michelle_Philbrick :+1:

I added some more use cases to the list.

CC @e0d @antoviaque @gabrieldamours @nedbat


This has been a long thread with many different proposals to improve the project’s management of pull request reviews, so I thought it would be useful to summarize them in a list:

If anyone would like to help, mention it here, or on one of the tasks above.

I’ve summarized some of the points and potential steps in each of the task descriptions - it’s rough, so don’t hesitate to expand/split/add.

@antoviaque Thanks for summarizing all these points and creating GitHub issues for them :+1: :slightly_smiling_face:

I can give it a shot, yep. Just noting that it might take a few sprints to complete, as I’ll have to timebox the work to leave enough time for OSPR management itself.

CC @Michelle_Philbrick @e0d


@itsjeyd @Michelle_Philbrick Thanks for the work on Faciliate monitoring and guiding of OSPRs · Issue #103 · openedx/wg-coordination · GitHub! It’s great to be able to review the stuck PRs.

And kudos for the follow-ups on the PRs themselves – it looks like the pile has shrunk quite a bit since we last discussed, right? Do you think it could be reduced that much because we can now focus on the stuck PRs? Or was there also a better turn-around time generally, and/or when you would ping people?

For the PRs currently stuck, it seems to be mostly e-commerce-related PRs, correct? I see three of them: 1, 2, 3. Do we know why they are stuck? And do we know of any core contributor with expertise on the e-commerce repo? It could be a good opportunity to suggest that the team responsible for the reviews delegate them to a CC, as an experiment?

Also, I have heard that @ndespujol has had trouble with PR delays on enterprise PRs recently - is this related?


Hi @antoviaque!

Tim and I have been escalating things in the last few weeks, and the teams have been helping us chip away at stalled PRs, which is great!

  • The “e-commerce” PRs are in a bit of a limbo, as 2U might take those off our hands completely. They weren’t a priority for 2U, and I think e-comm in general is a bit in flux for them at the moment. I’ve escalated these before, and last week they were escalated again. I’m not sure when we’ll have a decision on what happens with them, but I will follow up and keep you posted :grinning:

  • For the “enterprise” PRs for @ndespujol, I escalated some of these to 2U last week, but I don’t think they were related to the ones you may be referring to. @ndespujol - would you mind letting me know the specific PR(s) that need attention? I can look into it for you.


Thank you very much @antoviaque and @Michelle_Philbrick.

@Felipe from Edunext knows the exact PRs that are going slower than anticipated. I am going to tell him to speak with you, @Michelle_Philbrick.


Process changes to speed up OSPR reviews

Following recent developments at 2U (1, 2) the new Maintenance Working Group discussed implications for the OSPR review process, and made the following decisions to keep PRs from stalling indefinitely (3):

  • CC reviews: Instead of requiring approval from a 2U member by default, and only considering OSPRs eligible for CC review upon explicit confirmation from 2U, it is now possible for

    … any CC [to] take on Reviews and Merges that they can manage.

  • Maintenance coverage: To speed up the process of finding/replacing maintainers for Open edX repositories, requirements for becoming a maintainer were temporarily changed – in a nutshell, for now there is

    … No need to have contributed to a repo if you are a CC in another Open edX Repo

    See this thread for full context and important details.

    • This change is relevant in the context of OSPRs because maintainership involves triaging and reviewing incoming community contributions, or routing them to other reviewers as necessary based on availability and/or expertise. (An expectation that was carried over from the Maintainership Pilot program.) So a larger number of actively maintained repositories means less risk of the OSPR review process being throttled to the speed with which OSPR managers can get through the entire set of open PRs on the Contributions board.

CC @feanil @antoviaque


@itsjeyd Could you summarize the changes to OSPR review and Coding Core Contributor expectations in a top-level post, and then email it around to the coding core contributors? (Reach out in Slack if you don’t have that address.) I think it would be good to circulate this. Thanks!


@sarina Sure, I’ll take care of that in the next OpenCraft sprint (starting Tuesday).
