A good deployment story for most Open edX hosting platforms

In discussing an event bus, @regis posted a statement that was both reasonable and thought-provoking for me:

We need a good deployment story for most Open edX hosting platforms, and not just edX.org.

I realized that the definition of a “good” story here is probably very relative, and I’m curious to know what others think.

  • What is a good deployment story?
  • How much is it just about ease of deployment?
  • How much is it about the minimum number of resources you need to run it (CPUs, memory, machines, etc.)?
  • What other factors could go into it, and what are people’s thresholds for those factors?

Hey Feanil, thanks for picking up on this :slight_smile:

When Ned first got in touch with me with the idea of creating a build/test/release working group, I thought long and hard about what its purpose would be. I followed a very similar train of thought when I started working on Tutor, the goal of which was to improve the experience of deploying Open edX. At the time, my goal was to make Open edX deployment more reliable. And by that, I meant: approachable, repeatable, fast.

In other words:

  1. It should be easy to understand and modify the deployment process. In particular, frequently performed tasks should be intuitive, and there must be extensive documentation to cover less frequent use cases.
  2. Deploying Open edX should be a consistently successful task.
  3. Deploying Open edX should be fast, such that people can easily play with the knobs and features of the installation.

So I think that reliability is definitely one of the core components of a “good deployment story”. But it’s not the whole story.

As a software and DevOps engineer, I tend to use many different pieces of technology. And the ones that I like the most are the ones that I can simply forget about. They “just work”. Upgrades are transparent. When I need to perform administration tasks, the CLI is easy to understand. Error messages are informative. The documentation is well written.

Examples of such pieces of software include Django, Redis, GitLab, and Discourse.

In that sense, a piece of software that uses a lot of computing resources is not so great: it increases the total cost of the project, migrating it from one server to another takes time, running it locally requires expensive laptops, and it’s probably not possible to run multiple instances of it on the same machine. To summarize: I won’t really be able to “forget about it”.

And I think that these properties (reliability & “forgettability”) of good software are not specific to a certain class of users. It’s not just users with little technical expertise who benefit from these properties. Expert users who deploy high-availability, scalable apps on large clusters also appreciate software that is easy to use, provided that the software makes it easy to override the default settings that work for the majority of users but that some advanced users may want to change.
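
To make that concrete, here is a minimal, hypothetical sketch of how a deployment tool can ship sensible defaults while still letting advanced operators override them. It is loosely modelled on the filter-based plugin API that recent Tutor versions expose (`tutor.hooks`); the plugin settings and values below are made up for illustration, not taken from a real plugin.

```python
# Hypothetical Tutor plugin: ship defaults that work for most users,
# while leaving an explicit escape hatch for advanced deployments.
# Assumes the filter-based plugin API available in recent Tutor releases.
from tutor import hooks

# Defaults that "just work" for the majority of users; operators can still
# change them later, e.g. with `tutor config save --set MYPLUGIN_WORKER_COUNT=4`.
hooks.Filters.CONFIG_DEFAULTS.add_items(
    [
        ("MYPLUGIN_WORKER_COUNT", 2),
        ("MYPLUGIN_ENABLE_METRICS", False),
    ]
)

# Hard overrides for settings this plugin needs to control in every
# environment it manages (illustrative values only).
hooks.Filters.CONFIG_OVERRIDES.add_items(
    [
        ("LMS_HOST", "learn.example.com"),
    ]
)
```

Whatever the exact API, the layering is the point: one obvious place for defaults aimed at non-experts, and a documented override mechanism for experts, so that neither group gets in the other’s way.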

My bottom line is that by making the software easier to use for the least technical class of users, you also make it easier to use for the more advanced class of users. The reverse is also true: making the software harder to use for non-experts also makes it harder to use for experts.

I understand that I’m not giving straight, precise answers to your questions, Feanil :innocent: Anyway, I’m glad that we can have an open conversation about this.
