As part of my recent investigations into mass-deployment of Open edX with Tutor and thinking about how we deploy MFEs, I want to propose a method for deploying them on Docker that I haven’t seen discussed yet.
Challenge:
- There are already around 20 MFEs, and there are going to be more.
- If I deploy 100 Open edX instances onto a Kubernetes cluster, each with a different theme, and each instance has ~40 MFEs, and each MFE requires a custom image to be themed, that would require 4,000 containers running just for MFEs, each with a different image.
I don’t think that’s an ideal approach, especially for those of us who deploy dozens or hundreds of instances.
Proposal:
The `openedx-frontend` Docker container would be a new Docker image, versioned and released along with the other Open edX components. This one container image would contain all standard Open edX MFEs, Node.js, and a webserver (like nginx) to serve them statically at different URL prefixes / virtual hosts. Each MFE has its own `node_modules`, and the image is built with all dependencies for each MFE pre-installed, but no static files generated. So the layout of the image is something like this:
```
/edx/frontend/frontend-app-learning/
/edx/frontend/frontend-app-learning/node_modules/   <-- pre-populated
/edx/frontend/frontend-app-learning/dist/           <-- empty at first
...
/edx/frontend/frontend-app-gradebook/
/edx/frontend/frontend-app-gradebook/node_modules/  <-- pre-populated
/edx/frontend/frontend-app-gradebook/dist/          <-- empty at first
...
```
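For context, here is roughly what the image-build step could look like (a hypothetical sketch; the repository list, URLs, and paths are illustrative, and a real build would cover all of the standard MFEs):

```sh
# Run once at image build time (e.g. from a Dockerfile RUN instruction).
# The two repos below stand in for the full list of standard MFEs.
for app in frontend-app-learning frontend-app-gradebook; do
  git clone --depth 1 "https://github.com/openedx/$app.git" "/edx/frontend/$app"
  (cd "/edx/frontend/$app" && npm ci)   # pre-populate node_modules; dist/ stays empty
done
```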
The entry point of this container would be a script which (sketched below):

- checks environment variables to determine which MFEs should be enabled
- for each enabled MFE, runs `npm install @edx/brand@$FRONTEND_THEME_PACKAGE` and `npm run build` (all in parallel); and then
- starts nginx
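A rough sketch of what that script might do (`ENABLED_MFES` and `FRONTEND_THEME_PACKAGE` are my assumed variable names, not a settled interface, and a production version would also check each build's exit status):

```sh
#!/bin/sh
set -e

for app in $ENABLED_MFES; do                # e.g. ENABLED_MFES="learning gradebook"
  (
    cd "/edx/frontend/frontend-app-$app"
    if [ -n "$FRONTEND_THEME_PACKAGE" ]; then
      # Swap the default @edx/brand package for the operator's theme
      npm install "@edx/brand@$FRONTEND_THEME_PACKAGE"
    fi
    npm run build                           # emit static assets into dist/
  ) &
done
wait                                        # builds run in parallel; block until all finish

exec nginx -g 'daemon off;'                 # serve every dist/ under its URL prefix
```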
What this gets us is a single image that can be used by almost everyone to deploy whichever subset of MFEs they want to use, with custom branding. Because the entrypoint builds the MFEs, whatever environment variables you want (site name, LMS URL, brand/theme, feature toggles, etc.) only need to be set once for the container and will be applied to all MFEs.
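For illustration, standing up the whole frontend for one instance might then look like this (every variable name, value, and the image tag are assumptions matching the sketch above, not a settled interface):

```sh
docker run -d -p 8080:80 \
  -e ENABLED_MFES="learning gradebook" \
  -e SITE_NAME="My Open edX Site" \
  -e LMS_BASE_URL="https://lms.example.com" \
  -e FRONTEND_THEME_PACKAGE="npm:@mycompany/brand@1.0.0" \
  openedx-frontend:juniper.1
```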
The pros of this approach as I see it are:
- You would rarely need to build a new container image; everyone in the community could use the exact same `openedx-frontend` image (versioned for each named release)
- You only need to have a single frontend container running for each Open edX instance, and it has a single set of environment variables to control theming, feature toggles, LMS URLs, and so on
- Doing things like changing the theme, changing feature toggles, or enabling a new MFE is as simple as changing an environment variable and restarting the container / rolling out a new version (see the example after this list)
- A Kubernetes startup probe would ensure that, as you roll out new configurations/versions of the frontend image, the previous version keeps serving traffic until the new version is fully built
- It’s easy to put a CDN in front of the container for faster performance
- It provides a forcing function to encourage all MFEs to use standardized environment variables for configuration.
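To make the configuration-change workflow concrete, here is what re-theming one instance on Kubernetes might look like (the deployment name, variable name, and package value are all illustrative):

```sh
# Point the instance at a new brand package; editing the pod template
# triggers a rolling update of the deployment.
kubectl set env deployment/openedx-frontend \
  FRONTEND_THEME_PACKAGE="npm:@mycompany/brand@1.1.0"

# Watch the rollout; with a startup probe configured, the old pod keeps
# serving traffic until the new pod has finished rebuilding its MFEs.
kubectl rollout status deployment/openedx-frontend
```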
The cons are:
- The container takes a while to start up during initial deployment, because it has to use webpack to build each MFE (though the builds can run in parallel on as many CPUs as your k8s nodes have). However, you have to do this build on initial deployment of a new Open edX instance in any case. This is only really different from other approaches if you accidentally shut down your frontend container(s) in production and weren't using a CDN; in that case it would be much slower to start back up.
- The container is not entirely immutable, and because it installs a theme package in its entrypoint, it could behave differently across restarts depending on how strictly you pin the version of the theme package.
However, for anyone who needs the extra reliability and startup speed of a fully immutable container, there are two very simple options that fully remove these cons (both sketched below): (1) create a derived image from the base container (after every MFE has been built with your customizations) and deploy that, or (2) just use this container to build your MFEs, then copy the compiled files to S3 and don't use the container for deployment at all. Either way, it's much simpler than dealing with dozens of separate containers (I think).
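A rough sketch of both options, again with illustrative names, tags, and bucket paths:

```sh
# Option 1: run the container once, let the entrypoint build everything,
# then snapshot the result as a fully immutable derived image.
docker run -d --name mfe-build \
  -e ENABLED_MFES="learning gradebook" \
  -e FRONTEND_THEME_PACKAGE="npm:@mycompany/brand@1.0.0" \
  openedx-frontend:juniper.1
# ...wait for the builds to finish (e.g. tail the logs), then:
docker commit mfe-build mycompany/openedx-frontend:juniper.1-themed

# Option 2: don't deploy the container at all; copy the built assets out
# and host them from S3 (or any static file host).
docker cp mfe-build:/edx/frontend/frontend-app-learning/dist ./learning
aws s3 sync ./learning "s3://my-mfe-bucket/learning/"
```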
Thoughts?