Introducing: Frontend Plugin Framework library

Hey everyone!

I’m Jason, a software engineer on the Aperture team at 2U. In recent weeks, following the latest Frontend Pluggability Summit, the community reached a consensus to support the frontend-plugin-framework library (or FPF, as we so lovingly call it). The project originated from work started by @djoy. My team (@MaxFrank, @deborahgu, @jhynes, @Kelly_Buchanan) picked it back up as the open-source community started to see a greater need to make frontend components more pluggable and customizable without having to hard-fork every time.

FPF lets you insert plugins into your Micro-frontend application (MFE) by reading from a JavaScript configuration. In production, the JS-config is independent of the MFE source code, which enables developers to keep business-specific code in private repos/packages while still making use of the open-source MFE code.

Note that FPF requires the MFE to have JS-based configuration enabled in order to pass non-string configuration values, namely arrays, objects, React components, or any imported functions/packages. Examples of this implementation can be found in the Learner Record and Learner Dashboard MFEs, and steps to apply it to other MFEs can be found in FPF’s README. As for having Tutor handle JS-configs in production, there’s an issue ticket open to discuss it!
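To make that concrete, here is a minimal, hypothetical sketch of what a JS-based config enables. The slot name `my_slot`, the `MyWidget` component, and the exact shape of the `plugins` entries are placeholders for illustration; see FPF’s README for the real schema:

```javascript
// env.config.js -- a minimal, hypothetical sketch.
// A React function component is just a function; a stand-in is defined
// inline here so the example stays self-contained.
const MyWidget = () => 'Hello from a plugin'; // stands in for real JSX output

const config = {
  // Plain string settings work the same as in .env-based config...
  LMS_BASE_URL: 'http://localhost:18000',
  // ...but only a JS config can also carry non-string values such as
  // arrays, objects, and component references:
  pluginSlots: {
    my_slot: {
      keepDefault: true, // keep the slot's default content alongside plugins
      plugins: [{ widget: MyWidget }],
    },
  },
};

module.exports = config;
```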

And recently, the project was packaged up and published to npm for public use (thanks @arbrandes!), with contributions from @Leangseu_Kim to minimize the package size and simplify running the example apps.

I know some groups have already used FPF in their production environments, which is pretty exciting! If you’re interested in playing around with the framework, you can start by taking a look at the README, with more documentation around troubleshooting and implementation examples coming soon. In the meantime, you can take a look at some of the PRs that’ve been made in the Learning MFE and Learner Record.

Consider this the official announcement for the launch of the Frontend Plugin Framework. Many thanks and congratulations to all who support and continue to support this project!


It’s great that we are making progress on MFE extensibility! I’m more of a backend developer, so I don’t have a very informed point of view. Still, I’d like to comment on the API of the FPF. Everything in the FPF is provisional, but the API is going to be here to stay, so it’s imperative that we get it right from the start – or at least, “as right as we can”.

I hope I’m not way off the mark here. Also, please let me know if this is the right place to have this conversation, I’m happy to move it elsewhere if need be.

Plugin declaration

If I understand correctly, MFE developers will have to advertise their extensible components by adding them to config.pluginSlots in env.config.js. Am I getting this right? If so, this solution will not scale to a great number of plugin components (remember that having a large number of extensible components is the actual desired outcome). We will end up with either very large and complex configuration files or a very large number of configuration files.

declarative vs imperative API

Declaring plugin arguments as keys in a dictionary seems rather inadequate to me (but maybe this is a JavaScript thing?). My impression is that, as a developer, I’d rather write something like sidebar.insertDirectPlugin(id, SocialMediaLink, {optional args including priority here}).

Insert/Wrap/Modify

Unless I’m mistaken, these three operations actually boil down to just one, and that is “Modify”. “Modify” is the equivalent of the “filter” hook in Wordpress, Open edX and Tutor. As a reminder, a filter typically works like this:

// 1. declare filter callbacks
function my_callback(plugin_component) {
    // modify plugin_component
    ...
    // return the modified object
    return plugin_component;
}
// insert the callback into the list of callbacks, at a position that depends on the priority
const priority = 10;
filterCallbacks.splice(priority, 0, my_callback);

// 2. then, apply all filter callbacks
let component = ...;
for (const callback of filterCallbacks) {
    component = callback(component, extra_args);
}

// 3. finally, do something with component, such as rendering it

With this definition in mind, instead of defining three different API endpoints, just one may be enough:

  • insert is actually modify on the list of PluginSlot.
  • wrap is actually a modify callback which returns the wrapped component.

In other words, insert and wrap are actually thin wrappers, or syntactic sugar, on top of modify; which really reinforces the idea that we should be using a functional API, and not hash keys.
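To make the argument concrete, here is a hedged sketch (all names hypothetical, taken neither from FPF nor Wordpress) of insert and wrap expressed as filter callbacks over a slot’s widget list:

```javascript
// A slot is modeled as a plain list of widgets.
// "modify" is the primitive: each callback maps a widget list to a new one.
const applyFilters = (widgets, callbacks) =>
  callbacks.reduce((ws, cb) => cb(ws), widgets);

// "insert" expressed as a modify callback:
const insert = (widget, position) => (widgets) => {
  const ws = [...widgets];
  ws.splice(position, 0, widget);
  return ws;
};

// "wrap" expressed as a modify callback: replace a widget, by id, with a
// wrapped version of itself.
const wrap = (id, wrapper) => (widgets) =>
  widgets.map((w) => (w.id === id ? { ...w, component: wrapper(w.component) } : w));

// Usage sketch:
const callbacks = [
  insert({ id: 'social', component: 'SocialWidget' }, 0),
  wrap('social', (c) => `StaffOnly(${c})`),
];
const rendered = applyFilters([{ id: 'courses', component: 'CourseList' }], callbacks);
// rendered[0].component === 'StaffOnly(SocialWidget)'
```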


(I’m not dead yet! I think I’ll go for a walk or express some thoughts!)

Plugin Declaration

MFE developers will need to advertise their slots in documentation on the MFE, and maintain a stable, predictable contract/interface for each of those slots so plugin developers know what they can use. MFE plugin developers will use that documentation in order to configure - in env.config.js - their plugins into the slots. iframe plugins could also conceivably be configured via the MFE config API.

declarative vs imperative API

I think the rationale for defining the configuration declaratively, instead of imperatively, is primarily that because we’re doing it in a config file, there’s no library support to give us functions (sidebar.insertDirectPlugin, etc.) to call. We could build that to put some guard rails around the config doc schema; we could also use TypeScript types or do error checking on the doc. I think any of those would get the job done, but they’d require more work. It also feels like a virtue to me that the config is decoupled from the actual plugin slots/code in the MFE, but I don’t feel super strongly there.

Insert/Wrap/Modify

I do feel like they’re separate things - even if we called them ‘modify’ here, we’d still need to break it down into sub-types to understand how it affects the component hierarchy. An operation of “modify” doesn’t tell us how to insert the plugin into the DOM on its own.

I think my concern here is more that we’re adding a lot of API surface area by starting with all these operations. If we have concrete use cases for them, great, but Wrap/Modify seem like the sorts of things we could add later if not. The priority thing seems a bit confusing to me; I’m not sure what it means (or why it’s ‘priority’ instead of ‘order’), or how it’s different from the order of elements in the array defining the plugins.

As @regis put it, once we decide on this API it’s really hard to change. We should build things into it as we have need, and be careful not to get too far ahead of ourselves until we know what we need. Adding net-new capabilities in the schema to make it more expressive is, I think, easier than modifying the schema if we get it wrong.


OK, I had misunderstood the docs. I thought that MFE developers would have to declare new extensible components by adding them to env.config.js.

Still, we cannot expect plugin developers to all put their changes in a single env.config.js file. This approach is convenient for a single organization, like 2U, that needs to make centralized changes to an MFE. But as a platform administrator, I need to install plugin1 from org1 and plugin2 from org2, and thus I need some self-discovery mechanism to load both plugins.

My point exactly: I think we should have a stable library API and not a complex settings file. If Open edX history has taught us anything, it’s that config-driven APIs lead to unmanageable complexity.

From my experience with the Tutor plugin API, automated type-checking is a valuable addition. But that’s not the only benefit of an imperative API: functions are easier to maintain, document, improve, and upgrade. By defining custom configuration settings, we are basically inventing a new DSL where there is no need for one.

I do think that it would be worth it in the long run to put in the work :slight_smile:

Well, the parent (“host”) MFE is in charge of inserting its children in the DOM, right? So the “modify” action applied to the list of children should be enough.

The point that I’m trying to get across is that many very different people, running very different pieces of software such as Wordpress or Open edX, have all landed on the same common definition of a plugin API, which boils down to just two things: filters and actions. Do we really need to do things differently for MFEs? If yes, why?

Yes, I think that priority is just the order of the elements.

Correct - you declare the plugin slots throughout the MFE codebase, much like you can declare i18n message strings inline in Python code using _("text"). Changing the config is only necessary when using the MFE, and only if one wishes to insert a new plugin into the slot or otherwise change the plugins.

The config file is essentially a list of plugins that you want to use on your instance. You can still import packages. I tried a self-discovery process for “Pages & Resources”, and based on what I learned from that, I do not recommend using a self-discovery mechanism. That is, I don’t think plugins should be “activated” merely by npm-installing them. For one thing, the way that webpack handles this is quite complex and very difficult to reason about when things go wrong. For another, it’s impossible to ever know if the “right” plugins are loaded or not: if you specify the plugins in the config and one isn’t found, we can display a build error, but if you somehow don’t install the package, you won’t see any error, because the MFE doesn’t know the plugin “should” be installed.

Though it’s not quite what I recommend, you can write something like this (which works perfectly well with the current frontend plugin framework implementation):

import { config as PluginOneConfig } from '@myorg/our-mfe-plugins/PluginOne';
import { config as PluginTwoConfig } from '@myorg/our-mfe-plugins/PluginTwo';
import { config as AspectsPluginConfig } from '@openedx/aspects-mfe-plugin';

export const config = {
  ...
  pluginSlots: {
    dashboardSidebar: {
      keepDefault: true,
      plugins: [
        PluginOneConfig,
        PluginTwoConfig,
        AspectsPluginConfig,
      ],
    },
  },
};

or if you really wanted to, you could even do this (again this works perfectly fine today and is absolutely an option you can choose):

import { registerPluginOne, registerPluginTwo } from '@myorg/our-mfe-plugins';
import { register as registerAspectsPlugin } from '@openedx/aspects-mfe-plugin';

export const config = {
   BASE_URL: '...',
   ...
};
registerPluginOne(config);
registerPluginTwo(config);
registerAspectsPlugin(config);

While that’s true, I think it pushes a lot of complexity into each configuration file. Personally I don’t want to see code like filterCallbacks.splice(priority, 0, my_callback); in my list of plugins. In my experience using frontend plugins so far, I’d say 95% of the use cases are: insert a new widget, remove an existing widget, or conditionally show an existing widget (which requires “wrap”).

What I think the config should look like is something fairly simple:

import { PLUGIN_OPERATIONS as ops } from '@openedx/frontend-plugin-framework';
import PluginOne from '@myorg/our-mfe-plugins/PluginOne';
import PluginTwo from '@myorg/our-mfe-plugins/PluginTwo';
import AspectsPlugin from '@openedx/aspects-mfe-plugin';
import ShowOnlyToStaff from '@openedx/frontend-platform/plugins';

export const config = {
  pluginSlots: {
    dashboardSidebar: {
      keepDefault: true,
      plugins: [
        // Insert PluginOne into the dashboard's sidebar, above the default widgets:
        {op: ops.Insert, widget: PluginOne, order: 10},
        // Insert PluginOne into the dashboard's sidebar, at the bottom:
        {op: ops.Insert, widget: PluginTwo, order: 900},
        // Hide the "Social" sidebar widget from students while we're testing out the new forums with staff 
        {op: ops.Wrap, widgetId: 'social', wrapper: ShowOnlyToStaff},
      ],
    },
  },
};

So I do think we could probably do without Hide (instead just Wrap with a function that returns null) and Modify (not sure there’s a use case).

Almost all of the software related to building frontend stuff uses config files (sometimes declarative, sometimes imperative) where you explicitly list plugins. In the frontend world, I’d say that’s the mainstream approach, rather than filters and actions. (Though of course once you list plugins in the config file, they often can use filters or actions internally.)

What’s more, an imperative API that just leaves you to write code like some_list.splice(...) is not as self-documenting as a declarative configuration block. Look at what happens when you have a very simple configuration format defined using TypeScript:

Literally as you type any key or any value anywhere in the config, you see the contextual documentation rendered in Markdown that tells you what is going on, provides a list of valid options, highlights errors, and provides links to more detailed documentation. For some reason, the Open edX frontend community is not in the habit of including this kind of documentation using TypeScript, which I find causes a major drop in productivity compared to other frontend codebases I work with - I’m trying to help change that.
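For illustration, here is a hedged sketch of what that self-documentation could look like. The schema itself is hypothetical; in TypeScript this would be an interface, and plain JavaScript gets a very similar hover experience from JSDoc annotations, which VS Code renders as Markdown:

```javascript
// Hypothetical sketch of a self-documenting config schema via JSDoc.

/**
 * Configuration for a single plugin slot.
 * @typedef {Object} SlotConfig
 * @property {boolean} keepDefault - Whether to keep the MFE's default
 *   content in the slot alongside any configured plugins.
 * @property {Array<{widget: Function, order?: number}>} plugins - Plugins
 *   to render in the slot, in ascending `order`.
 */

/** @type {{pluginSlots: Object<string, SlotConfig>}} */
const config = {
  pluginSlots: {
    // Hovering 'keepDefault' or 'plugins' in an editor shows the docs above.
    dashboardSidebar: {
      keepDefault: true,
      plugins: [],
    },
  },
};

module.exports = config;
```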

(Also, I know the Tutor Python API is similarly well-documented, which is fantastic, but Python’s support for this is not as robust as TypeScript’s. For example, in my IDE, hovering tutor_hooks.Filters shows documentation, but tutor_hooks.Filters.COMPOSE_MOUNTS has only a type, and the nice documentation you wrote is somehow not picked up by VS Code.)

So I think what I’d personally prefer to see is a super simple, well-typed and well-documented declarative format that just provides Insert and Wrap operations.

I’d be fine with renaming it to order if that’s more clear. Or again I think the most common needs are: insert at the front, insert at the end, or insert before/after an existing item - we could just use those four options instead of an order number.
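A sketch of what those four options might look like (a hypothetical helper; none of these names come from the actual FPF API):

```javascript
// Hypothetical sketch: positional insertion without numeric priorities.
// 'start' | 'end' | 'before' | 'after' cover the common cases described above.
const insertAt = (widgets, widget, placement) => {
  const ws = [...widgets];
  switch (placement.at) {
    case 'start':
      ws.unshift(widget);
      break;
    case 'end':
      ws.push(widget);
      break;
    case 'before':
    case 'after': {
      // Insert relative to an existing widget, found by id.
      const i = ws.findIndex((w) => w.id === placement.relativeTo);
      ws.splice(placement.at === 'before' ? i : i + 1, 0, widget);
      break;
    }
  }
  return ws;
};

// Usage sketch:
const slot = [{ id: 'courses' }, { id: 'social' }];
const result = insertAt(slot, { id: 'recommendations' }, { at: 'before', relativeTo: 'social' });
// result ids: ['courses', 'recommendations', 'social']
```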


I stand by all of @braden’s points. A flexible, declarative env.config.js is a good solution for plugin slot configuration given our current constraints, dictated to one extent or another by the micro-frontend architecture and tech stack (Node, Webpack, etc) the frontend code is written with. In particular, it is a better solution than the imperative approach.

That said, we can and should rely on type-checking for documentation of the configuration (and for everything else type-checking is good for). This should help alleviate @regis’s very valid concern.

On a possible redundancy in the plugin operations, I don’t have strong feelings either way. As long as the full gamut of outcomes we want to support is possible, the details of how to achieve them are just that: implementation details.

However, I do have two important things to bring up:

1. It’s likely too late to change FPF significantly for Redwood

… except if we decide to postpone plugin slots entirely. Which is exactly the opposite of what we agreed to try and do. Plus, there are already FPF implementations in the wild (in draft mode) that are going to be very interesting for us as a community if we can get them to land, such as customizable sidebar navigation in the Learning MFE.

This doesn’t mean we shouldn’t postpone plugin slots, if sustainable objections are made. Which is to say, I think it’s great that @regis raised some potential ones! I just want to be clear what the implications are to course correct now.

2. FPF is not (the whole of) the Frontend Plugin API

FPF is only a part of what the Open edX Frontend Plugin API is actually going to be. The PluginSlot ids, their places on the page, the props that they take, these are also things operators are going to expect to be documented properly and to be stable from one release to the next.

I’d argue that it is even more important to get them right than FPF itself! And it’s not going to be easy. For example:

  • Let’s make everything a plugin slot! Oh, wait, now you can’t change anything from one release to the next without (potentially) breaking everybody’s plugins.
  • Just make each slot’s interface be completely generic, then! Well, now we’re back in Comprehensive Theming territory, and we all know how great it was to have to track upstream changes and port them to your customized version of the learner landing page. In other words, a fork by another name.

And so on and so forth.

The good news is that we can start small and consider each slot very carefully, having the “do we really want a slot here?” conversation on a case-by-case basis. I’ve already started doing just that for the upcoming slots I’m aware of.

By the way, it’s also on my plate to propose an OEP (or ADR) that includes some guidelines around how these decisions are to be made.

I feel like most of my comments were misunderstood; probably because I tried to address too many items at the same time. In particular, I am definitely not arguing in favor of a low-level approach that would make use of *.splice calls; neither am I suggesting that we should automatically load all npm-installed plugins.

I’ll rephrase my comments by focusing on just two questions.

  1. What would a typical MFE plugin with multiple customized components look like? Do we have an example plugin repository that is not just a toy example? Once we have such a repo, we can start discussing whether we have the right API. OP mentions example implementations in the Learner Record and Learner Dashboard MFEs, but these are “host” MFEs, not plugins, right?
  2. If the filters and actions paradigm is good enough for Wordpress, why isn’t it good enough for microfrontends? Are we really inventing something that is better than filters and hooks? If yes, how? For instance, the current FPF does not implement actions at all, unless I’m mistaken, so it would seem that it’s more limited than the Wordpress-inspired hooks framework.

You can infer an answer to this from the example app in FPF. A plugin can export an arbitrary number of components, which can then be individually imported and assigned to slots in the configuration file.

Yes, the example components in the example app only export one component per file, but you could package that whole directory as an NPM module and call it a plugin. Alternatively, you could export many components from a single file.

The important thing is that it is not up to the plugin codebase to say where it will be slotted. Neither is it up to the host MFE. This is left to the MFE user via the configuration file.

Generally speaking, I dare say that this is comparing apples and oranges. As I understand it (though I’m no expert), Wordpress is much more akin to Django and the template system we had in edx-platform, or even Tutor itself, the difference being that it’s written in PHP, and so are its plugins. Which is to say, the template engine lends itself well to a hook system that applies to both backend and frontend.

Micro-frontends, especially ones based on React, are a different matter. To start with, they’re decoupled from the backend code. Writing a single hook system to encompass both would be very complicated - I’d argue too complicated to be worth it, at least for the moment. I suggest we instead keep the paradigms separate, and let the MFEs have their own system.

Otherwise, from what I can tell it’s possible to compare the Wordpress filter hooks to what we’re doing with FPF more directly. I suspect the only differences are:

  • FPF requires declarative configuration, where Wordpress filters are imperative
  • FPF only “filters” specific UI slots

I stand by Braden’s argument that declarative configuration is actually better in the micro-frontend/Node/webpack world.

As for the second point, it’s just a reduction in scope. Like Wordpress’ actions, I can imagine having more generic hooks in FPF in the future, but I’d argue that they would make more sense when/if MFEs are more standardized in their data flow than they are now.

Caveat: Admittedly I’m just learning about the details of Wordpress hooks/filters/actions, and so may not have a great grasp of them yet. This is based on my impressions from what I’m reading, so I may have some details wrong.

Forgive me while I talk through this a bit: I see a few fundamental differences between WP hooks and what we’re building, and I think they’re solving a different problem.

EDIT: I also didn’t see Adolfo’s response to Régis when I wrote this and am maybe saying some of the same stuff. :sweat_smile:

What WP hooks solve

Hooks are created by adding code to the WP instance at pre-defined places to “hook into” the build/runtime processes. WP was built in a different era for different use cases; you’re expected to own and modify your codebase - in our parlance, your own fork. Then, on top of that, they built in ways for you to overlay customizations on top of that in a way that (presumably) makes it easier to migrate your instance to newer releases when it’s time.

We have a different architecture

One of our fundamental assumptions is that forking is difficult to maintain and we want ways of customizing and extending the platform that don’t require maintaining your own downstream version of the core code. That assumption is at odds with the WP approach. We could have gone the WP way, but we were also burned by the complexity of maintaining comprehensive theming and went in a different direction. I can’t say whether it was the right choice, but here we are.

Another difference is that we have independently built and deployed micro-frontends, and we want to compose those UIs together at runtime if possible. Our micro-frontends are also frontend-only and have no server to run code for us - that backend is the rest of the platform, and it does have a filters/hooks mechanism.

So we have 1) independent MFEs, 2) that we don’t want to fork, 3) that we can’t share code between easily, 4) and which are primarily full of UI components which we want to override or extend.

Our primary use case is different

Given all that, our primary use case is adding/modifying UI, not changing data flow. Yes, changing data with filters and actions could allow us to modify UI, but that’s a pretty indirect way of doing it that’d require a more complex system behind what we’ve already built in FPF. Should we allow filters/actions on the client? Maybe, but we’d need some use cases to justify it. I can absolutely see adding middleware to some of the data processing, but I haven’t heard that as a pain point yet.

That’s a bit rambling, and based on a quick read-through of WP filters/hooks, but I do expect our primary frontend pluggability use cases and needs are a bit fundamentally different than what WP’s solution was meant to solve.

There’s certainly virtue in using industry standard/familiar concepts, but I also think we’d be shoehorning them in on the frontend a bit, given what we think we need today. And also, we totally do have filters/hooks on the backend just like Wordpress. :smile:


This :point_up:

Another quick thought - we could absolutely have “data plugins” or non-UI plugins. If we did, I’d fully expect those to adopt the same hooks/filters/actions/etc. that we see on Open edX’s backend/WP.

Firstly, the distinction you are both making between backend and frontend code is incorrect. In both cases, filters and actions are not changing data flow. They are: (1) changing data, and (2) triggering events. All changes to UI fall into the first category, and thus they could be implemented in the form of filters. In my understanding, FPF is a particular implementation of the filters/actions paradigm, but with fewer features and a less consistent API.

Second, I’m looking at the example plugin app in the FPF repo. Are we looking at this env.config.js file and saying this is a good developer API? Are we saying that this five-level-deep hash table with duplicated key names is developer-friendly? If yes, then I’m ready to admit that all my opinions on MFE architecture are moot.

Yes!

Yes! I defend that this is a good thing, at this point. We can always add more later.

And here we’re back to declarative vs imperative. I mean, we can abstract some of this out, but I suspect we’d just be making things more opaque.

Feel like we’re arguing semantics here a bit. I muddied the waters by using “changing data flow” as shorthand for doing stuff with data: specifically, yeah, changing data and triggering events. That’s on me for being imprecise.

Fundamentally, the plugin configuration is just that: configuration. We load it once, it’s generally immutable after that, and the React component hierarchy uses it to render the frontend. I’m not sure how useful it is to try to express that configuration as a set of filters, which would presumably take the config in and spit out a modified version of it before passing it on to the next filter (which the JS folks generally call middleware).

The runtime half of this is based on our use cases - and today that’s inserting components into the React component hierarchy. That means components that know how to read that configuration and respond to it. As I was saying earlier, if our use case involved manipulating state at runtime, or letting plugins respond to events as they happen, then we’d have filters and actions/events, certainly. But we’re not changing state at runtime by configuring plugins; we’re just using it to render the UI layer after it was defined in config. Again, this feels a bit like semantics and agreeing on a common vernacular, but this is how I’ve always thought about it. To solve this use case, we need a more expressive config doc.

I agree that we could improve on the developer API by adding some safety nets here: it’d be a virtue to add type safety to the process of defining that configuration, whether through helper functions that generate it, TypeScript, or a linter. Minimally, the schema needs good documentation. The good news is that this is a JavaScript file; we can create helpers, and people can incrementally adopt them. Until those exist, their output would be this blob of configuration.

Let’s just address this before I decide if I should withdraw from coding entirely, retire to a monastery and never type on a keyboard again.

This huge hash table is parsed by some unknown function in a hard-to-find repo (I’m guessing it’s frontend-platform, based on another comment you made); we don’t really know how its values are loaded or in which order; but because it’s a familiar data structure, we consider it less “opaque” than a regular function call?

I trust your judgement Adolfo, I really do, and if you tell me I’m wrong then I’m 100% onboard with your approach. It’s just that this goes so much against everything I’ve learned so far about software engineering that I need some explicit confirmation.

Is this not true of Tutor plugin API as well? If I’m looking at this:

hooks.Filters.ENV_TEMPLATE_TARGETS.add_items(
    [
        ("indigo", "build/openedx/themes"),
    ],
)

then that is essentially opaque to me until I look up the ENV_TEMPLATE_TARGETS docs. Before that documentation existed, I had to dive into Tutor’s core to figure out how the heck ENV_TEMPLATE_TARGETS worked. So, in both the case of FPF and Tutor, the ease of use of the API is going to hinge on how well it is documented.

Allow me to triple down on this. Adding net-new capabilities in the schema to make it more expressive is absolutely easier than modifying the schema if we get it wrong :slightly_smiling_face: Unnecessary breaking changes are the bane of site operators’ existence.

Is this not possible?

import { insertDirect, insertIFrame } from '@openedx/frontend-plugin-framework';

...

const config = { ... };  // just settings

// these would modify the `config` dictionary in-place
insertDirect(config, "slot_with_insert_operations", PluginDirect)
insertIFrame(config, "slot_with_insert_operations", 'http://localhost:8081/plugin_iframe', 'The iFrame title');

I’m not saying that’s a great API; rather, I’m trying to better understand the declarative/imperative debate, which to me seems like a syntactic distinction rather than a fundamental decision. Declarative and imperative config APIs are ultimately both means to determine the state of your configuration.

Perhaps the crux of it is that imperative config APIs lend themselves better to composability of overrides (ie, multiple config files). For declarative config APIs, you either mandate a single config file, or you define rules for how your declarative config files stack upon one another, which is not straightforward.
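To sketch that last point (a hypothetical merge helper; FPF does not define one), stacking two declarative fragments forces you to pick an explicit rule, e.g. “concatenate each slot’s plugin list; the later fragment wins on other keys”:

```javascript
// Hypothetical sketch: stacking two declarative config fragments.
// The merge rule chosen here is one possible convention -- it is NOT
// defined by FPF, which is exactly the point: declarative stacking needs
// such a rule to be spelled out somewhere.
const mergeConfigs = (base, override) => {
  const slots = new Set([
    ...Object.keys(base.pluginSlots ?? {}),
    ...Object.keys(override.pluginSlots ?? {}),
  ]);
  const pluginSlots = {};
  for (const slot of slots) {
    const a = (base.pluginSlots ?? {})[slot] ?? {};
    const b = (override.pluginSlots ?? {})[slot] ?? {};
    pluginSlots[slot] = {
      ...a,
      ...b,
      // Concatenate plugin lists instead of letting one clobber the other.
      plugins: [...(a.plugins ?? []), ...(b.plugins ?? [])],
    };
  }
  return { ...base, ...override, pluginSlots };
};

// Usage: two independently authored fragments stack predictably.
const org1 = { pluginSlots: { sidebar: { keepDefault: true, plugins: ['plugin1'] } } };
const org2 = { pluginSlots: { sidebar: { plugins: ['plugin2'] } } };
const merged = mergeConfigs(org1, org2);
// merged.pluginSlots.sidebar -> { keepDefault: true, plugins: ['plugin1', 'plugin2'] }
```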


Could FPF land in Redwood with a big “V0/EXPERIMENTAL – EXPECT BREAKING CHANGES IN SUMAC” warning, allowing us all to make some plugins, refine the API over the next few months, and then release it as V1 in Sumac?

(If the V0 API is kept simple enough, then we might even be able to seriously reshape it between V0 and V1 without breaking changes. If we did want hooks, you can imagine a V1 hooks-based API which is a generalization of the V0 slots-based API, much like Tutor’s V1 hooks API was a generalization of its V0 YAML API.)

On the dict vs imperative declaration debate: I do want to point out that Tutor used to have a dict-based configuration strategy. In some ways, it was very similar to the config-based API we are discussing here. A few releases ago, we decided this v0 was not sufficient for Tutor, and we migrated to a more extensive “v1” API based on filters and actions. I think we should skip this intermediate step and jump straight to v1 for MFEs.

@regis,

Let me first get something out of the way: I don’t think what you’re defending is wrong. As a matter of fact, I think it would probably be a Good Thing to have a zero- (or low-) configuration plugin API based on hooks… provided we made sure not to run into pitfalls such as the ones @braden alluded to earlier.

Do I think it’s doable? Probably, yes. But we don’t yet know exactly how to do it, where all the pitfalls are, or even what all the plugin use cases are going to be. This is going to require data on how frontend plugins are used by the community, and then work to implement whatever changes we decide on - much like the process you went through with the Tutor v0 plugin API.

The thing is, our frontend codebase is different enough from Wordpress and Tutor that I believe we don’t have enough data to make that decision yet. But it just so happens we do have a way to get that data: the current state of frontend-plugin-framework, and the slots that are already being added with it.

The point is that we have two choices:

To stop and/or revert the work linked above before Redwood.

We still have the option of stopping now and reworking the plugin API, thus avoiding a potential breaking change in Sumac, but the following things will happen as a result:

  • There will be significant pressure to add 2U-specific code to master (and thus, to Redwood) that would otherwise live in a plugin. The worst offender would probably be the “AI Translations” code.
  • There will be less of an effort to remove 2U-specific code from other MFEs before Redwood, for the simple fact that there would be no alternative but for them to fork.
  • We will not have released any PluginSlots by Redwood, which means we will not have real-world plugin usage data to inform the development of Sumac.

To move ahead with v0 and improve the API post Redwood

If we do this it’s possible we’ll have to deal with deprecation (or even removal) of v0 of the plugin API in Sumac or later. This could be painful (like it was for Tutor), but:

  • We will have real-world data (including developer and operator feedback) on this version of the API, letting us iterate on the next one with more confidence
  • We will have less 2U-specific code in our repositories than otherwise
  • We will have fewer feature flags and more PluginSlots, which I argue is a good thing

In case it wasn’t clear, I’m arguing for this second option, and unless I hear more objections, it’s probably what we’re going to do. Not because I’m saying so, but because it is the outcome of a few months of work since the first Pluggability Summit, where we all agreed to try a couple of experimental implementations before we settled on one for Redwood. There were three; FPF as it stands now is the one that won, as per the second Pluggability Summit. I just wish you had brought up these ideas back then, and maybe gotten your org to produce an example implementation yourselves!

Again: I’m not saying we won’t take these ideas to heart. We most assuredly will! (Hint: we already ran into a need for at least one type of hook.)

Yes, if it boils down to these two options then of course it’s the best course of action.

That’s fair. I said multiple times that we (Edly and me) would come up with an extension mechanism and we didn’t deliver.

Still, this process reinforces my feeling that technical reviews should not be performed during live meetings. I have the same regrets concerning the original MFE implementation: @Felipe and I joined a meeting for the presentation of the original micro-frontends design, back in the day, and we were unable to detect the flaws that were, in hindsight, quite obvious. In retrospect, we failed to figure out the limitations because:

  • Understanding oral explanations is harder than reading words: there’s the language barrier, the difficulty of highlighting important parts, the variable capacity of the presenter to explain things clearly, the inability to edit past words, etc.
  • Discussions don’t leave much time to think – usually no more than a few minutes of clear thinking for a 1-hour meeting.
  • It’s harder to ask questions in a discussion than in a PR.
  • People who are absent from a meeting (again: my bad) don’t have a chance to comment.

Lazy consensus only works if people are given enough time to think about the decisions and respond. In this particular case, both extension mechanisms were developed prior to the meeting, and participants were given ample time to review them prior to the 2nd extensibility summit. But as I watched the video I was really under the impression that participants discovered the FPF implementation during that meeting – and I was as well. Experience tells me that this is not a great way to make architecture decisions in Open edX.

Then again, maybe all I’m saying here does not matter much, because no one else is voicing these concerns. So it’s fine if we agree to disagree. I just wish you had taken my bet, but maybe you’re not such a big fan of piña colada :cocktail:
