Open edX AI Contribution Policy

Hi everyone,

A quick note to all contributors - please review the Open edX AI Contribution Policy. This policy is broken into two sections:

For Contributors

Covers all guidelines for GitHub contributions, including pull requests and issue creation. Non-code contributors should also review this guide and practice responsible disclosure when using AI to create other artifacts, such as designs, product requirement documents, and bug reports.

For Reviewers

Covers guidelines for reviewers, including how to handle low-quality contributions or contributions where the author doesn’t engage appropriately.

Maintainer Note

If you’d like to add a link to this policy in your repo’s README, please follow the pattern from this openedx-platform PR. You may tag me, @sarina, for review & merge.

This was very much needed. Thanks a lot, @sarina!

but only the following tools:

  • Microsoft Copilot (all models and versions)
  • Anthropic Claude (all models and versions)
  • OpenAI (all models and versions)
  • Amazon Web Services (AWS) (Kiro and all other models and versions)
  • Google (all models and versions)

This feels a bit restrictive. All of these are commercial services run by mega-corporations. LLMs have become a commodity thanks to the release of open-weight models by multiple companies - Meta, Google, Mistral, DeepSeek, and Alibaba, to name a few. These models can be run locally on consumer hardware or via a third-party host like OpenRouter.

Why can’t we use them? The policy states:

We only allow the use of listed tools because they have a sufficient reputation
for proper training.

What does “proper training” mean? Is it specialized training around “AI safety”? While that’s a valid concern for general LLM use, the impact on code contributions and reviews is minimal, especially when we follow:

Review all AI-generated output before submitting.
Verify the accuracy of AI-generated material.