Let's talk about our E2E Testing strategies and instruments

The main goal of this topic is to start pulling our test strategy together in one place, identify issues, and collect ideas on how to improve and evolve our instruments.

I will share some of my thoughts about automating manual testing and a bit about the current test suite distribution, and I'd appreciate any corrections to my conclusions.

As a member of the build-TEST-release working group, I'm starting to think more about the smallest (top) layer of the test pyramid: User Interface or E2E tests. There is an idea that we can do the TEST part more efficiently and quickly.


There are different Test Pyramid interpretations and different distribution models for test types.
I'm not focusing on the Inverted Pyramid, Cupcake, or Dual Pyramid here.

So we can rely on one of the standard test distribution models for Prism's Pyramid:

  • 50% unit tests, 30% integration tests, 20% e2e tests
  • 70% unit tests, 20% integration tests, and 10% end-to-end tests,
    – where the 10% splits into automated E2E (8%) and manual (exploratory?) testing (2%)

edX platform tests.

The Juniper release codebase has 20k+ unit and integration tests.
There are also acceptance tests in common/test/acceptance/tests backed by the bok-choy framework, but the total number of bok-choy tests is much less than the 2k implied by a 10% distribution. I prefer to use the 8% figure for this calculation, so we ought to have something like 1.5k bok-choy tests. I haven't assessed the total number of acceptance tests or their quality yet, so this is just an assumption. If anyone has done such an investigation, it would be great to have it for analysis.
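For reference, bok-choy is a thin page-object layer on top of Selenium. A minimal test looks roughly like this (a sketch only; the URL and CSS selector below are made up for illustration):

```python
from bok_choy.page_object import PageObject
from bok_choy.web_app_test import WebAppTest


class DashboardPage(PageObject):
    # Hypothetical LMS URL; a real page would build this from settings.
    url = "https://lms.example.com/dashboard"

    def is_browser_on_page(self):
        # Hypothetical selector used to confirm the page has loaded.
        return self.q(css=".my-courses").present


class DashboardTest(WebAppTest):
    def test_dashboard_loads(self):
        DashboardPage(self.browser).visit()
```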

Apart from the synthetic CI tests, there is another testing strategy, probably not highly related to the edx.org testing strategy but relevant for the Open edX (OeX) community and OeX providers: manual or automated regression testing (or at least smoke testing).
It is a required activity because there is no single installation (as there is for edx.org) but a different customized deployment for each new customer, and each installation may be deployed to a different cloud provider.
Also, non-default deployments such as Tutor or any custom Kubernetes installation can have integration issues that can't be identified with unit, integration, or synthetic acceptance tests.


So we come to the E2E Test Suite.

The main issue with the repo is that it has a small number of test cases: it simply doesn't cover all critical test scenarios, so we can't rely on it for regression testing.

Another issue is a lack of flexibility: some deployments only need a simple SmokeSuite, so full regression testing can be overkill. Ideally, the SmokeSuite should be implemented as read-only scenarios that don't change persistent data (mainly so that a roll-back approach remains possible).
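One simple way to get that flexibility in a pytest-based suite is to tag the read-only scenarios with a marker and select them at run time. A sketch, assuming hypothetical `browser` and `lms_url` fixtures:

```python
import pytest


# Read-only check: it only loads a public page, so it is safe to run
# against any deployment without mutating persistent data.
@pytest.mark.smoke
def test_course_catalog_is_reachable(browser, lms_url):
    browser.get(f"{lms_url}/courses")
    assert "courses" in browser.title.lower()
```

Then `pytest -m smoke` runs only the read-only subset, while a plain `pytest` run executes the full regression suite (the `smoke` marker would need to be registered in pytest.ini).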

Other non-critical issues/questions come to mind (and, to be honest, the major one has already been fixed):

  1. Lack of an out-of-the-box ability to test on different browsers; one idea to discuss is using Selenoid (see the sketch after this list).
  2. The approach of using pure Selenium feels outdated :slight_smile:
  3. In the past, the deployment process was complicated because we had to clone the full edx-platform repo just for the PageObjects, but that is now fixed: all required page objects are copied into the e2e repo.
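On point 1: Selenoid speaks the standard WebDriver protocol, so pointing existing Selenium tests at it is mostly a matter of using a Remote driver. A rough sketch (the hub address and `enableVNC` flag are assumptions about a typical local Selenoid setup, and the deployment URL is hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Selenoid-specific capabilities live under the "selenoid:options" key.
options.set_capability("selenoid:options", {"enableVNC": True})

driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",  # assumed Selenoid endpoint
    options=options,
)
driver.get("https://lms.example.com")  # hypothetical deployment URL
print(driver.title)
driver.quit()
```

Switching browsers then only requires requesting different capabilities from the hub, with no driver binaries on the client side.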

Cypress and other Selenium alternatives.

Cypress is a good alternative to Selenium, especially for MFEs, but I'm not aware of current real-world use cases (either in the CI flow or as automation of manual testing). I probably need to dig deeper into the roadmap and the current coverage of real scenarios.

Another topic to raise is the newer frameworks on the market (Playwright, Puppeteer).
As far as I understand, we are moving forward with Cypress, so I'm not sure whether it is appropriate to talk about other Selenium alternatives here.
Personally, I'm interested in Playwright as it provides an easy way to do cross-browser testing, easy theme-testing abilities, etc. :slight_smile:
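To illustrate why the cross-browser story is attractive, here is roughly what it looks like with Playwright's Python API (the URL and assertion are hypothetical placeholders):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # One loop covers all three engines Playwright bundles;
    # no separate driver binaries or grid are needed.
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://lms.example.com/login")  # hypothetical URL
        assert page.title()  # placeholder check; a real test asserts page content
        browser.close()
```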


Human-readable scenarios using GWT style.

Actually, I'm very optimistic about this representation and implementation of E2E scenarios. When designed properly, it can save development time when constructing new test cases, since they can be assembled from previously implemented steps.
However, there are commits in edx-platform transforming GWT-style scenarios into simple docstring notation (particularly in the Video XBlock acceptance tests).
So I'm wondering about the architectural decision not to use this approach, and I would appreciate any information about it.
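For context, here is roughly what the GWT style buys us when steps are reusable, sketched with pytest-bdd (the feature text, selectors, and `browser` fixture are all hypothetical):

```python
from pytest_bdd import given, scenario, then, when
from selenium.webdriver.common.by import By

# video.feature (hypothetical) would contain:
#
#   Scenario: Learner plays a video
#     Given I am viewing a video component
#     When I click the play button
#     Then the video starts playing


@scenario("video.feature", "Learner plays a video")
def test_learner_plays_a_video():
    pass


@given("I am viewing a video component")
def video_component(browser):
    browser.get("https://lms.example.com/video-unit")  # hypothetical URL


@when("I click the play button")
def click_play(browser):
    browser.find_element(By.CSS_SELECTOR, ".video_control.play").click()


@then("the video starts playing")
def video_is_playing(browser):
    video = browser.find_element(By.CSS_SELECTOR, ".video")
    assert "is-playing" in video.get_attribute("class")
```

Each `@given`/`@when`/`@then` step can then be reused across scenarios, which is exactly the time-saving property mentioned above.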


P.S. I hope this is the proper place to start the discussion.


I would love to have end-to-end tests that were easy to run against an Open edX installation. I don’t know how to use the edX e2e tests, and it doesn’t seem easy to just point them at a server and let them run.

Playwright looks very capable, so I would be interested to see what it offers over existing tools.

I don’t have experience writing these kinds of tests, so for example, I don’t know what the best strategies are for resetting the server to known states.

Have others written these kinds of test suites before?