Generative AI and Open edX code

Hi all!

I’m writing to share a new policy on using generative AI tools to help write code contributed to Open edX. It outlines some important contributor responsibilities when using these tools, so everyone who contributes code should take a look. The policy is here: https://openedx.atlassian.net/wiki/spaces/COMM/pages/5022416899/Open+edX+Policy+for+Generative+AI+Tools

This is in addition to the GitHub Copilot code review project, which is currently limited to a few people while we trial it. Please feel free to ask questions, preferably on the docs so everyone can learn from the answers.

Are there any exemplary PRs that we can use for reference?

I think this PR is a great example. We’re not trying to make this an onerous process; we’re just trying to understand what work was done by the human vs. the machine.

I just want to give this thread a bump to remind folks to please review the Open edX Policy for Generative AI Tools, even if you’ve read it before. The list of approved tools has recently been expanded to:

  • Microsoft Copilot (all models and versions)
  • Anthropic Claude (all models and versions)
  • OpenAI (all models and versions)
  • Amazon Web Services (AWS) (Kiro and all other models and versions)
  • Google (all models and versions)

Also, please note the Transparency section:

Contributors who submit code that was supported by GAI tools must (1) identify the tool they used and (2) briefly describe the work they did on top of the tool to ensure human intervention.

This is really helpful during review, because “human misunderstanding” and “LLM hallucination” can play out very differently and cause reviewer confusion. Also, even briefly describing how you used the LLM can help educate others and advance the state of AI usage within our community. Over time, the pull request review process can become a way to review how we use the tools to generate the output, as much as the output itself.
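As an illustration of what such a disclosure might look like in practice, here is a hypothetical transparency note for a PR description (the tool name, file names, and details below are invented, not taken from the policy):

```markdown
## GAI Tool Disclosure

- **Tool used:** Claude (via an IDE assistant), for the first draft of the
  pagination logic in `views.py`.
- **Human work on top:** I rewrote the query to use `select_related` to avoid
  an N+1 issue the generated code introduced, added the edge-case tests in
  `test_views.py` myself, and verified the behavior manually against a local
  devstack.
```

Something this brief satisfies both parts of the requirement: it names the tool, and it makes clear where human judgment and verification came in.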

Thank you!