Deploying Sumac with Tutor on Multiple Servers Without Kubernetes

Hi everyone,

I’m planning to deploy the new Sumac release of Open edX to production using Tutor. However, I’m wondering how to configure Tutor to run across multiple servers with a single domain, especially for the LMS and MFE services.

I understand that Kubernetes is often recommended for this kind of setup, but unfortunately, Kubernetes is not an option in my current environment.

The system is expected to support approximately 10,000 users, with around 1,500–2,000 concurrent users. I’d like to know:

  • Is it possible to separate LMS, CMS, MFE, and Worker services onto independent servers?
  • If so, what’s the recommended approach to achieve this using Tutor without Kubernetes?

Any advice or guidance would be greatly appreciated.
Thanks in advance!

As far as I understand, what you’re asking isn’t officially supported or documented outside of a k8s deployment. You might find some useful information in the tutorials documentation under Running Open edX at scale; in particular, the Offloading data storage section discusses running resources such as MySQL, MongoDB, Redis, etc. on separate dedicated systems.
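To give a concrete idea, the offloading approach from that tutorial amounts to disabling Tutor’s bundled data services and pointing the configuration at external hosts. A rough sketch below — the hostnames and password are placeholders, and you should double-check the exact setting names against the Tutor docs for your release:

```shell
# Sketch: offload data storage to dedicated servers.
# Hostnames and credentials here are placeholders.

# Disable the bundled MySQL container and point at an external server
tutor config save \
  --set RUN_MYSQL=false \
  --set MYSQL_HOST=mysql.example.com \
  --set MYSQL_PORT=3306 \
  --set MYSQL_ROOT_USERNAME=root \
  --set MYSQL_ROOT_PASSWORD=changeme

# Same idea for MongoDB and Redis
tutor config save \
  --set RUN_MONGODB=false \
  --set MONGODB_HOST=mongo.example.com \
  --set RUN_REDIS=false \
  --set REDIS_HOST=redis.example.com

# Re-render the environment and apply the changes
tutor local launch
```

This only moves the databases/cache off the application server; the LMS, CMS, MFE, and workers still run together on one host.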

There is a very similar conversation that happened here: Horizontal Scaling - CMS/LMS and Workers
My takeaway from it is that this would be a very complex, unsupported configuration that you would be expected to develop and maintain on your own if you got it working.
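For what it’s worth, the unsupported approach discussed in that thread boils down to giving every host an identical Tutor config (same secrets, all pointing at the same external MySQL/MongoDB/Redis) and then starting only a subset of services on each host. A hedged sketch — the service names match Tutor’s local deployment, but wiring a single domain across these hosts (the load balancer in front) is entirely on you:

```shell
# Sketch: one shared config.yml on every host, external data stores,
# then start only the services each host should run.

# Host A: LMS only
tutor local start -d lms

# Host B: CMS only
tutor local start -d cms

# Host C: async workers only
tutor local start -d lms-worker cms-worker
```

Again, this is exactly the kind of setup the thread concludes is complex and unsupported, so treat it as a starting point for experimentation rather than a recipe.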


Thanks, the information you provided is helpful.

@vuthehuyht
We decided against manually setting up our own load balancers and went with k8s instead, using Grove, an open-source solution that has been available to the community for a while now. It deploys changes through a GitLab CI/CD pipeline triggered by Git commits.

You fork the Grove Template into a private repo, where you can further customize it for your site (i.e. your Grove instance). Multiple sites can be set up from this same repo.

Documentation for Grove is here.

There is a bit of a learning curve with Grove and k8s; however, it will spin up a load balancer, an auto-scaling group, and multiple EC2 instances when running on AWS. That is everything we had been doing manually, but with Grove the infrastructure is managed through Terraform changes.