Using a CDN to speed up MFEs

I’ve been playing around with the idea of using a CDN service to speed up the MFE pages; in the end it’s just a JavaScript bundle, so it seems like a good target for this.

I’ve done this with edxapp simply by using CloudFront and setting the LMS as the origin server. I’ve found this approach pretty straightforward: I just need to configure my CloudFront distribution and add `STATIC_URL = "https://d1234567890.cloudfront.net/static/"` to my settings.

I tried to do something similar with the MFEs, but ran into some blockers. The approach that seemed most similar to the previous one was to use `output.publicPath` and set it to the CDN URL. The problem is that it doesn’t seem particularly easy to modify `publicPath` in the tutor-mfe plugin at the moment. Even after applying manual changes and setting `ENV PUBLIC_PATH='https://d1234567890.cloudfront.net/{{ app_name }}/'`, I was able to load the assets from CloudFront but got a blank page when accessing the app. Upon further investigation, the error seems to be related to the history object, which expects an actual path instead of the full URL.

In the end, modifying frontend-build to use an additional variable when configuring `output.publicPath` did the trick.
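For context, the idea can be sketched roughly like this. This is a minimal sketch, not the actual frontend-build code, and the names here are illustrative assumptions:

```javascript
// Hypothetical sketch of the idea behind the frontend-build change:
// let an environment variable override webpack's output.publicPath,
// falling back to a root-relative path so the in-app router (history)
// keeps seeing plain paths. Names are illustrative, not the real
// frontend-build API.
function resolvePublicPath(env, appName) {
  // env.PUBLIC_PATH may be a full CDN URL such as
  // "https://d1234567890.cloudfront.net/learning/"
  return env.PUBLIC_PATH || `/${appName}/`;
}

// In webpack.config.js this would be used roughly as:
//   output: { publicPath: resolvePublicPath(process.env, 'learning') }
```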

One of my teammates faced the same issue and actually opened a PR (frontend-build#398).

I was wondering what kind of deployment strategies you use for higher-traffic installations, in particular those of you using Tutor. I think this approach is quite straightforward once the frontend-build changes land; you would need a new MFE image, but that doesn’t seem like a big deal.

I would like to push that PR forward; maybe we could discuss it at the next BTR meeting?


This looks like a great initiative, and somewhat similar to some of the work by @ghassan. Do you have some numbers you can share in terms of performance improvement?

Yes, this relates to an initiative I suggested in the DevOps meeting back at the conference; it’s detailed here: Serving MFEs through Cloudflare · Issue #14 · openedx/wg-devops · GitHub.

I suggested using Cloudflare, but it really doesn’t matter (AWS CloudFront, Fastly, etc.), assuming the end goal is to serve the MFE assets from a CDN.

It’s worth noting that, on average, an MFE’s assets are ~5 MB (~1 MB compressed), so the effect of this on performance would depend on the user’s connection speed/latency and the load on the server. This is just an educated guess; I haven’t tested it yet.

I think there is a slight difference between what you are suggesting and what I had in mind. I wanted not only to serve the MFE assets through a CDN but also to simplify the workflow for deploying an MFE: each MFE would be built/deployed by an external service, and the URL of each MFE would be unique, e.g. learning.myopenedx.com instead of app.myopenedx.com/learning. With this approach you no longer need to modify the publicPath. The only catch is that it requires more configuration when setting up the platform.

Regarding how the assets get uploaded to CloudFront: did you use a webpack plugin to do that? Maybe this one: GitHub - matrus2/webpack-s3-uploader: Upload all your assets to AWS S3 during webpack build.

Do you have some numbers you can share in terms of performance improvement?

I only ran a small test in a k8s cluster that retrieved the static files used by the learning MFE (the ones listed in the network tab when you access a wrong route, /learning/nogood) with X simultaneous connections. Looping through these files for 1 minute with 20 connections, I made 3.1k requests and, grouping the files, got a p95 response time of 2.35 s. Doing the same test with 80 connections, I made 4k requests and the p95 increased to 9 s. Repeating the 80-connection test through CloudFront is obviously way faster: 20k+ requests with a p95 of 700 ms.
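For reference, p95 here means the 95th-percentile response time. A minimal nearest-rank implementation looks like this (my own sketch, not the tool I used for the test):

```javascript
// Nearest-rank percentile helper, i.e. the p95 quoted in the numbers
// above. Assumes latency samples in milliseconds.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: smallest value covering p% of the samples.
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// e.g. percentile([100, 200, 300, 400, 500], 95) → 500
```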

This amount of load is obviously excessive in most scenarios, but it does show degradation in performance, and using a CDN seems like an easy fix. I’m not that concerned about speeding up delivery for users in ideal conditions where their own connection is the bottleneck, but rather when my service is under heavy load (an exam, for example).

Regarding how the assets get uploaded to CloudFront: did you use a webpack plugin to do that? Maybe this one: GitHub - matrus2/webpack-s3-uploader: Upload all your assets to AWS S3 during webpack build.

What I did was set the MFE domain (apps.myopenedx.com) as the origin for my distribution. At first CloudFront hits my server for the files, but after that it caches them and serves them to the users directly.
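That pull-through behavior can be illustrated with a toy sketch (purely conceptual; a real CDN obviously adds TTLs, cache keys, invalidation, and so on):

```javascript
// Toy model of a pull-through CDN cache: the first request for a path
// misses and fetches from the origin; subsequent requests are served
// from cache. Purely illustrative, not CloudFront's actual behavior.
function makePullThroughCache(fetchFromOrigin) {
  const cache = new Map();
  return function get(path) {
    if (!cache.has(path)) {
      cache.set(path, fetchFromOrigin(path)); // miss → origin request
    }
    return cache.get(path); // hit → served from cache
  };
}
```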

I think there is a slight difference between what you are suggesting and what I had in mind. I wanted not only to serve the MFE assets through a CDN but also to simplify the workflow for deploying an MFE: each MFE would be built/deployed by an external service, and the URL of each MFE would be unique.

I would like to eventually land on a more sophisticated setup that allows independent deployments for each MFE, especially since, when we work on a single fork, we currently have to rebuild all of them. But I think this approach offers minimal friction at the moment.

I also have a use case for hosting the MFEs on certain paths: we want to run multiple sites, and serving the MFE at lms.myopenedx.com/learning is really handy.