How do edX developers run edx-platform unit tests?

I recently started a development task on edx-platform, and it’s been a while since I last submitted a PR there that was more than a simple bug fix. So I find myself running the unit tests locally – as opposed to just waiting for Jenkins to run them for me.

Unless I’m mistaken, the official recommendation has always been to run the tests with paver test_* commands. But these days the documentation is a bit fuzzy on the topic, and I could not find an up-to-date recommendation.

Anyway, I tried to run the tests with paver, but this command was clearly not designed for humans (as opposed to CI). I found the following issues:

  1. It takes a very long time to run, as it performs many test bootstrapping tasks, such as checking Python prerequisites, cleaning xml reports, etc.
  2. Failed test results and stacktraces are not printed in the console, but instead stored in xml reports.
  3. Running unit tests on the current master branch causes many deprecation warnings to be printed in the console. Here is an excerpt:
  /edx/app/edxapp/edx-platform/lms/djangoapps/lti_provider/ SysPathHackWarning: Importing lti_provider.models instead of lms.djangoapps.lti_provider.models is deprecated
    from lti_provider.models import LtiUser

  /edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/sorl/thumbnail/conf/ RemovedInDjango30Warning: The DEFAULT_CONTENT_TYPE setting is deprecated.
    setattr(self, attr, getattr(obj, attr))

  /edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/sorl/thumbnail/conf/ RemovedInDjango31Warning: The FILE_CHARSET setting is deprecated. Starting with Django 3.1, all files read from disk must be UTF-8 encoded.
    setattr(self, attr, getattr(obj, attr))

  /edx/app/edxapp/edx-platform/lms/djangoapps/bulk_email/ SysPathHackWarning: Importing bulk_email.models instead of lms.djangoapps.bulk_email.models is deprecated
    from bulk_email.models import COURSE_EMAIL_MESSAGE_BODY_TAG, CourseAuthorization, CourseEmailTemplate

  /edx/app/edxapp/venvs/edxapp/lib/python3.5/site-packages/django/test/ RemovedInDjango31Warning: `TestCase.multi_db` is deprecated. Databases available during this test can be defined using lms.djangoapps.dashboard.tests.test_sysadmin.SysadminBaseTestCase.databases.
    warnings.warn(msg, RemovedInDjango31Warning)


There are so many of these deprecation warnings that the test output scrolls past the end of my console history.

I managed to address issue 1 by running:

NO_PREREQ_INSTALL=1 paver test_system --fail-fast --fasttest --skip-clean --system=lms

But clearly this is not a practical command for humans to remember. Also, it does not resolve issue 2.

Then I realized that the paver commands are equivalent to simple pytest commands. So instead, I now run:

pytest --ds=lms.envs.test --exitfirst openedx/core/

This command is simple enough, and it addresses issues 1 and 2. Of course, issue 3 will require dedicated development work to resolve.

The PythonTestSuite in edx-platform was created in 2014, and has changed little since then. It probably had its merits at the time, but I wonder whether we should now stop recommending it and move to a pytest-based set of commands?

I’m curious to hear how the rest of the community runs unit tests in edx-platform; maybe I’m doing this the wrong way?


Personally, I avoid using paver for anything. The supposed benefit is that it always re-runs a bunch of set-up steps that can burn you if you forget them (installing requirements, building/collecting static assets, etc.), but I’d rather just accept the risk of doing these manually instead of waiting literal minutes to run a single test. I’ve been advocating to rip out pavelib/ for a while – let’s keep being loud about this and maybe it’ll happen.

Anyways, I’ve always had good luck with:

pytest some/folder/or/file -k 'TestClass and test_function'

to run all tests named like TestClass.test_function within some/folder/or/file. You can omit the folder/file argument, but it will drive up the test collection time. I think setup.cfg will make sure that lms.envs.test is used, so you can omit the --ds= argument.

You can also get fairly complex with the -k argument:

pytest lms/djangoapps/program_enrollments -k 'EnrollmentTest or EnrollmentsTest and not test_unauthenticated'
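One gotcha worth knowing with complex -k expressions: and binds tighter than or. The sketch below approximates how a -k expression selects tests by name (this is an illustration, not pytest’s actual implementation, and the test id is made up):

```python
import re

def k_matches(expr: str, test_id: str) -> bool:
    """Rough sketch of pytest -k matching.

    Each identifier in the expression becomes a substring test against
    the test id, and the result is evaluated with Python's boolean
    operators, giving the precedence pytest uses: 'not' > 'and' > 'or'.
    """
    def replace(match):
        word = match.group(0)
        if word in ("and", "or", "not"):
            return word  # keep boolean operators as-is
        return str(word in test_id)  # identifier -> substring test

    return eval(re.sub(r"[\w.]+", replace, expr))

# 'and' binds tighter than 'or', so the expression above means
# "EnrollmentTest, or (EnrollmentsTest and not test_unauthenticated)".
print(k_matches(
    "EnrollmentTest or EnrollmentsTest and not test_unauthenticated",
    "tests/test_api.py::EnrollmentTest::test_unauthenticated",
))  # True: the 'not' clause only applies to the EnrollmentsTest branch
```

So if the intent is to exclude test_unauthenticated from both branches, the expression needs explicit parentheses around the or part.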

In terms of the issues you mentioned:

  1. It takes a very long time to run, as it performs many test bootstrapping tasks, such as checking Python prerequisites, cleaning xml reports, etc.
  2. Failed test results and stacktraces are not printed in the console, but instead stored in xml reports.

Avoiding paver should fix both of these.

  3. Running unit tests on the current master branch causes many deprecation warnings to be printed in the console.

Yeahh, there’s a lot there, and it’s likely been exacerbated by some recent version upgrades and clean-up work. In the past, we have hit a point where we say “this is out of control” and then devote time to cleaning them up. It sounds like we’re nearing this point again. Furthermore, I believe the Django 3.0 upgrade is in the pipeline, which will fix many of them.

I would hesitate to advocate for silencing these warnings for unit test runs on master. They are, in fact, issues that need to be fixed, and seeing them fly by will ideally lead us to fixing them. I do believe that we ought to significantly reduce the noise before the next open release, though. You can locally cut down on noise by modifying setup.cfg:

filterwarnings =
    ignore:No request passed to the backend, unable to rate-limit:UserWarning
    ignore:some other message pattern to ignore
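For context, those ini lines use the same action:message:category filter syntax as Python’s own warnings machinery, where the message part is a regex matched against the start of the warning text. A stdlib sketch of the same effect (the second warning message here is made up):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Equivalent of the setup.cfg line:
    # ignore:No request passed to the backend, unable to rate-limit:UserWarning
    warnings.filterwarnings(
        "ignore",
        message="No request passed to the backend, unable to rate-limit",
        category=UserWarning,
    )
    warnings.warn("No request passed to the backend, unable to rate-limit", UserWarning)
    warnings.warn("Some other deprecation-style warning", UserWarning)

# Only the unfiltered warning is recorded.
print([str(w.message) for w in caught])
```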

Hope this helps and/or adds some clarity.


Adding to the comment by Kyle, I personally prefer using pytest to run only a selective set of tests. In either the LMS or Studio shell, depending on where the test is located, simply use

pytest folder/app/tests/

to run all the tests in the selected folder or file. If you want to run a specific test suite within a file, add ::TestSuite after the file name, e.g.

pytest cms/djangoapps/contentstore/views/tests/<test_file>.py::VideosHandlerTestCase

would run the VideosHandlerTestCase tests only. If you add ::test_name after the test suite name, it will run that specific test only.

This runs tests quickly, and you can debug them rather easily this way. This approach addresses issues 1 and 2; I am not sure it will be much help with issue 3. Instead of running all tests, run only the ones relevant to your change.
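For comparison, the "run one class / one method" selection that pytest expresses with :: can be sketched with stdlib unittest (a toy example; the class body here is made up, with the class name echoing the example above):

```python
import unittest

class VideosHandlerTestCase(unittest.TestCase):
    def test_upload(self):
        self.assertTrue(True)

    def test_delete(self):
        self.assertTrue(True)

# Build a suite containing only one method of one class -- the same
# selection pytest expresses as file.py::VideosHandlerTestCase::test_upload.
suite = unittest.TestSuite([VideosHandlerTestCase("test_upload")])
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 1
```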


I’ve used both of the aforementioned testing approaches; however, something to note is that the pytest path/to/test/file::TestName syntax won’t work for any test that uses ddt’s @data decorator. The pytest path/to/test/file -k 'TestName' syntax will.

This is because the decorator creates as many new methods as there are data items passed to it, each with an expanded name that includes the position of the data argument and the data itself. That causes a name mismatch when using the file::TestName approach, but since file -k 'TestName' runs all tests whose names contain the string you pass (as Kyle pointed out), the expanded method names will all still match.

Here are the pytest docs for specifying tests and ddt api docs for further reading.
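The name expansion described above can be sketched with a toy decorator (this is an illustration, not ddt’s real implementation, which mangles names slightly differently):

```python
import unittest

def data(*values):
    """Toy stand-in for ddt's @data: stash the data items on the method."""
    def wrapper(func):
        func._data = values
        return func
    return wrapper

def ddt_like(cls):
    """Toy stand-in for @ddt: replace each @data method with one expanded
    method per data item, each with the value baked into the name."""
    for name, func in list(vars(cls).items()):
        values = getattr(func, "_data", None)
        if values is None:
            continue
        delattr(cls, name)  # the original method name disappears
        for i, value in enumerate(values, 1):
            expanded = f"{name}_{i}_{value}"
            setattr(cls, expanded, lambda self, v=value, f=func: f(self, v))
    return cls

@ddt_like
class MyTest(unittest.TestCase):
    @data("a", "b")
    def test_thing(self, value):
        self.assertIn(value, "ab")

# 'test_thing' no longer exists; only the expanded names do, which is why
# file::MyTest::test_thing fails while -k 'test_thing' still matches.
print(sorted(n for n in dir(MyTest) if n.startswith("test_")))
# ['test_thing_1_a', 'test_thing_2_b']
```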


Hey everyone,

This is tremendously useful information, thanks a lot. Does anyone feel like adding this to the Open edX documentation? Potential destinations are:

I would hesitate to advocate for silencing these warnings for unit test runs on master.

I agree with you @kmccormick. These warnings provide an incentive to fix them :slight_smile:

Hey all,

A lot of useful information. Thanks, @Jeff_Chaves @kmccormick @Dawoud_Sheraz
I want to add my +1 to the sad story :slight_smile: of discovering paver’s strange behavior, and I agree with the intention to move toward the pytest-based approach.

Notes for the documentation – actually, the existing testing docs are a good enough place to start:

  • they describe the way to select tests using the pytest -k option
  • they describe how to run tests in parallel with the --processes=2 option
  • they describe how to re-run failed tests with the --failed option

Still, I see ways to improve:

  1. Fix the lack of proper test separation by subsystem – currently we have to run the tests for cms like this:
pytest cms/djangoapps/ cms/djangoapps/api cms/djangoapps/cms_user_tasks cms/djangoapps/contentstore cms/djangoapps/course_creators cms/djangoapps/maintenance cms/djangoapps/models cms/djangoapps/pipeline_js cms/djangoapps/xblock_config common/djangoapps/course_action_state common/djangoapps/course_modes common/djangoapps/database_fixups common/djangoapps/edxmako common/djangoapps/entitlements common/djangoapps/pipeline_mako common/djangoapps/static_replace common/djangoapps/status common/djangoapps/student common/djangoapps/terrain common/djangoapps/third_party_auth common/djangoapps/track common/djangoapps/util common/djangoapps/xblock_django openedx/core/djangoapps/ openedx/core/djangoapps/ace_common openedx/core/djangoapps/api_admin openedx/core/djangoapps/auth_exchange openedx/core/djangoapps/bookmarks openedx/core/djangoapps/cache_toolbox openedx/core/djangoapps/catalog openedx/core/djangoapps/ccxcon openedx/core/djangoapps/certificates openedx/core/djangoapps/commerce openedx/core/djangoapps/common_initialization openedx/core/djangoapps/common_views openedx/core/djangoapps/config_model_utils openedx/core/djangoapps/content openedx/core/djangoapps/content_libraries openedx/core/djangoapps/contentserver openedx/core/djangoapps/cors_csrf openedx/core/djangoapps/course_date_signals openedx/core/djangoapps/course_groups openedx/core/djangoapps/coursegraph openedx/core/djangoapps/courseware_api openedx/core/djangoapps/crawlers openedx/core/djangoapps/credentials openedx/core/djangoapps/credit openedx/core/djangoapps/dark_lang openedx/core/djangoapps/debug openedx/core/djangoapps/django_comment_common openedx/core/djangoapps/embargo openedx/core/djangoapps/enrollments openedx/core/djangoapps/external_user_ids openedx/core/djangoapps/geoinfo openedx/core/djangoapps/header_control openedx/core/djangoapps/heartbeat openedx/core/djangoapps/lang_pref openedx/core/djangoapps/models openedx/core/djangoapps/monkey_patch openedx/core/djangoapps/oauth_dispatch 
openedx/core/djangoapps/olx_rest_api openedx/core/djangoapps/password_policy openedx/core/djangoapps/plugin_api openedx/core/djangoapps/plugins openedx/core/djangoapps/profile_images openedx/core/djangoapps/programs openedx/core/djangoapps/safe_sessions openedx/core/djangoapps/schedules openedx/core/djangoapps/self_paced openedx/core/djangoapps/service_status openedx/core/djangoapps/session_inactivity_timeout openedx/core/djangoapps/signals openedx/core/djangoapps/site_configuration openedx/core/djangoapps/system_wide_roles openedx/core/djangoapps/theming openedx/core/djangoapps/user_api openedx/core/djangoapps/user_authn openedx/core/djangoapps/util openedx/core/djangoapps/verified_track_content openedx/core/djangoapps/video_config openedx/core/djangoapps/video_pipeline openedx/core/djangoapps/waffle_utils openedx/core/djangoapps/xblock openedx/core/djangoapps/xmodule_django openedx/core/djangoapps/zendesk_proxy openedx/tests/ openedx/tests/completion_integration openedx/tests/ openedx/tests/util openedx/tests/xblock_integration openedx/core/lib/ openedx/core/lib/api openedx/core/lib/blockstore_api openedx/core/lib/ openedx/core/lib/celery openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/django_require openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/gating openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/license openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/ openedx/core/lib/tests openedx/core/lib/ openedx/core/lib/ openedx/core/lib/x_forwarded_for openedx/core/lib/xblock_builtin openedx/core/lib/xblock_pipeline openedx/core/lib/xblock_utils cms/lib/ cms/lib/xblock

Using markers it can be simplified to:

pytest -m cms
  2. We can mark heavy/slow and fast tests, so that the fast tests can be run as often as possible (--fasttest is not for this; it skips collectstatic, i.e. compiling Sass to CSS) and the slow tests (in effect, the full test suite) for major PRs. This could also be useful in CI optimization work (e.g. to implement a fail-early strategy that does not block a pipeline executor).

  3. Return to the BDD style for our acceptance tests (or at least add more human-readable output). I’m not sure why some old BDD tests were rewritten as regular unit tests, with the BDD scenarios moved into docstrings.
    The current Karma tests (karma-bdd) are a great example of how the behavior style can help developers and testers understand system behavior:

    ✓ is unfinished by default
    ✓ is not uploading by default
    ✓ is valid by default
    ✓ is valid for text files by default
    ✓ is valid for PNG files by default
    ✓ can accept a file type when explicitly set
    ✓ can accept a file format when explicitly set
    ✓ can accept multiple file types
    ✓ can accept multiple file formats
      ✓ returns a list of the uploaders file types
      ✓ returns a map of formatted file types and extensions
      ✓ does not format with only one mime type
  4. There is an opportunity to improve the developer experience by using docker features to avoid unnecessary setup steps (like DB setup or dependency installation). For day-to-day developer work this is probably not as critical as for the CI flow, which can be optimized.
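The subsystem and slow/fast markers suggested in points 1 and 2 would first need to be registered in the pytest configuration, roughly like this (a sketch; the marker names here are hypothetical):

```ini
[tool:pytest]
markers =
    cms: tests for Studio (CMS) applications
    lms: tests for LMS applications
    slow: tests that take a long time to run
```

Tests would then be tagged with @pytest.mark.cms or @pytest.mark.slow, selected with pytest -m cms, and deselected with pytest -m "not slow".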

To push all these ideas forward I’ve created a very initial roadmap issue (it’s named as a CI improvement issue, but I believe it should cover the day-to-day developer experience too).
It should also be described in more detail, with exact proposals – I will add some.

@regis many thanks for raising all these questions :metal: