My understanding of how Tutor uses uWSGI isn't great, so please pardon me if I'm completely off base here…
While I was looking into something unrelated, someone pointed me to the Tutor uWSGI config file for the LMS, and it looks like we're serving all of our static assets through uWSGI. Even if individual assets can be sent relatively quickly, doesn't that cause worker starvation, since we're tying up our relatively small number of high-memory uWSGI workers serving dozens of static assets when they could be processing Django app requests instead? Particularly when a user hits a courseware page that throws 50+ requests at the server?
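For reference, uWSGI's static file handling comes down to a couple of `static-map` lines. A sketch of what that kind of config looks like (the module name and paths here are illustrative, not copied from Tutor's actual file):

```ini
[uwsgi]
; Illustrative entries, not Tutor's exact config.
module = lms.wsgi:application
; A small pool of heavyweight Django workers.
processes = 4

; static-map tells uWSGI to serve these URL prefixes from disk itself.
; By default the transfer happens inside a regular worker, so each
; /static/* request occupies a worker slot.
static-map = /static=/openedx/staticfiles
static-map = /media=/openedx/media
```

If `offload-threads` isn't configured (uWSGI's mechanism for handing static transfers to non-blocking threads), this is exactly the starvation scenario I'm describing.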
Folks running larger sites: Do you run into this issue? Do you use nginx or Caddy to serve these assets directly instead? Or does it usually not matter for you because you put everything behind a CDN anyway?
We have seen performance issues related to static asset serving on our larger installations. We mostly avoid the issue by setting STATIC_URL to a CloudFront endpoint that uses the LMS as an origin.
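If it helps anyone else, the change really is just pointing Django's STATIC_URL at the CDN, something like this (the CloudFront domain below is a placeholder):

```python
# Placeholder domain: a CloudFront distribution configured with the
# LMS as its origin. Templates then render asset URLs against the CDN,
# so the uWSGI workers only see cache misses instead of every asset
# request.
STATIC_URL = "https://dexample123.cloudfront.net/static/"
```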
When we originally launched one of our larger sites in k8s, we experienced really poor performance (with courseware being, of course, the worst offender). In our case the main factor was an exceedingly high number of filesystem I/O operations, which seemed to perform worse in a containerized context. That led to this PR: perf: add lru_cache to improve performance with multiple themes by Alec4r · Pull Request #31090 · openedx/edx-platform · GitHub. For us it was even worse, because our custom theme had a really large number of child templates.
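For anyone who doesn't want to click through: the gist of the fix is memoizing the repeated filesystem lookups. A simplified sketch of the idea (the function name is hypothetical, not the actual edx-platform code):

```python
import os
from functools import lru_cache

# Hypothetical, simplified version of the idea in the PR: cache theme
# template lookups so each process stats the filesystem once per
# template instead of on every request.
@lru_cache(maxsize=None)
def find_themed_template(theme_dir, template_name):
    candidate = os.path.join(theme_dir, "templates", template_name)
    return candidate if os.path.exists(candidate) else None
```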
I think one of the main appeals of serving the static files through uWSGI is that it kind of encapsulates edx-platform in a single artifact: the Tutor openedx image.
Oh, right. I remember that discussion thread too. @ztraboo: Just a heads up that this might be another source of issues for you folks, since you’ve been looking into production performance lately. @Alecar’s fix for this was in the Olive release, though it’s a very small patch if you want to try to use it as well.
I can see the appeal of that. I guess the alternative would be to have each service build its production assets out to a shared volume, and have a separate container running Caddy to serve them. It's a little more cumbersome, but I think it could have a lot of upsides. Serving from a different domain would mean we wouldn't have to send cookies with every asset request, for one.
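Concretely, I'm imagining a dedicated site block in the static container's Caddyfile, mounted on the shared volume. Something like this, where the domain and root path are made up for illustration:

```
# Hypothetical static-assets container config; the domain and root
# path are placeholders. No sessions or cookies involved, just files.
static.example.com {
    root * /openedx/staticfiles
    file_server
    # Aggressive caching like this assumes hashed/versioned filenames.
    header Cache-Control "public, max-age=31536000, immutable"
}
```

Since browsers scope cookies to the LMS domain, requests to a separate static domain like that would go out without them.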