Micro Learning: Perform task upon user completing each Unit

In our platform, learners earn credit for each individual unit they complete rather than waiting until the entire course is completed. As such, there are tasks we need to process each time a unit is completed. The database is already updated to reflect the user’s incremental progress within a course, and we are looking to inject additional processing steps into that same event.

We have thought of a few approaches for this granular level of custom processing, but would like to solicit input from the community to evaluate the feasibility of, and preference among, these approaches.

Option 1:

Listen for a hook that gets fired upon unit completion. Does this hook exist? If not, is it reasonable to add? If it exists, this approach seems relatively straightforward and would be preferred.

Option 2:

Add handling to a custom frontend template so that, each time a unit is completed, the browser triggers the processing tasks before proceeding to the next unit.

Option 3:

I recently learned of xAPI, but I am brand new to its usage and functionality. Would this be an appropriate scenario for considering a solution utilizing xAPI?

Option 4:

I would love to hear if you have any better suggestions.


@Felipe I see that you have written a few posts related to hooks. Could you possibly advise about viability of Option 1?

There is a Django signal that is fired whenever the completion status of any XBlock is updated. The completion aggregator listens to that signal and uses it to know when to update the overall completion percentages for a section, subsection, unit, and course. That event fires more often than you need, but when it fires you could check the completion of the related unit and do something when that happens. And yes, I think it would be nice to make this into a general-purpose hook.
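For anyone finding this later, the receiver pattern described above might be sketched like this. This is a hedged illustration: the model and signal names in the comments come from edx-platform / the `completion` package, while the filtering helper and its set of block types are invented for the example.

```python
# Minimal sketch of a receiver for completion updates. The real wiring
# (shown only as comments) would use edx-platform's models at runtime:
#
#   from django.db.models.signals import post_save
#   from completion.models import BlockCompletion
#   post_save.connect(on_block_completion, sender=BlockCompletion)
#
# The filtering logic below is plain Python and illustrative only.

def should_process(block_type, completion):
    """Return True when a completion update warrants custom processing.

    The signal fires on every XBlock completion change, so filter down
    to fully-completed blocks of the types we care about (the set of
    "interesting" types here is an assumption).
    """
    interesting = {"problem", "video", "html"}
    return block_type in interesting and completion >= 1.0

print(should_process("problem", 1.0))  # True: a fully-completed problem
print(should_process("video", 0.5))    # False: only half watched
```

The receiver itself would unpack `instance.block_key.block_type` and `instance.completion` from the saved `BlockCompletion` row and hand them to a helper like this before kicking off any heavier processing.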


Hi @Jeff_Cohen,

From the options you outlined, I believe that #1 is the most robust and the simplest to maintain, so I would go with that. However, the particular event that you need has not yet been created in the public repository of events. See: openedx-events/signals.py at main · openedx/openedx-events · GitHub

The signal that @braden is mentioning would be the ideal candidate for this application.

Moving forward, you could use that signal as is, or you could convert it into a public event in the events repo. That would add the benefit that you can test your code without depending on edx-platform, and you would get a promise that if the signal ever goes away, you will have enough notice and a stable deprecation path.


@braden and @Felipe thank you so much for your thoughtful responses and guidance. I may start with the Django signal directly to get rolling, and consider expanding to the public event implementation as I get comfortable. Thanks again!


One follow-up question, if you don’t mind. I have the receiver working by following the pattern of manually registering it using post_save.connect, as done in completion-aggregator. However, I wonder whether this case is now supported by the plugin_settings PluginSignals.CONFIG approach, and whether it is preferred to align with that standard. Or does that config approach only support the public events?

Yes, connecting via the plugin_settings works just as well.

You can see examples at: eox-tenant/apps.py at master · eduNEXT/eox-tenant · GitHub
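To make the shape of that config concrete, here is a hypothetical `plugin_app` declaration for a plugin’s AppConfig (in `apps.py`), following the PluginSignals pattern as I understand it from edx-django-utils. The key names and values below are assumptions to verify against the eox-tenant example linked above; `handlers`, `on_block_completion`, and the `dispatch_uid` are invented for illustration.

```python
# Hypothetical plugin_app declaration (would live on an AppConfig
# subclass in the plugin's apps.py). Shown here as a plain dict.
plugin_app = {
    "signals": {
        "lms.djangoapp": {
            # Module (relative to the app) where the receiver lives.
            "relative_path": "handlers",
            "receivers": [
                {
                    "receiver_func_name": "on_block_completion",
                    "signal_path": "django.db.models.signals.post_save",
                    "sender_path": "completion.models.BlockCompletion",
                    "dispatch_uid": "my_plugin.on_block_completion",
                },
            ],
        },
    },
}

receiver = plugin_app["signals"]["lms.djangoapp"]["receivers"][0]
print(receiver["receiver_func_name"])  # on_block_completion
```

With this in place, the platform’s plugin machinery connects the receiver for you, so the manual `post_save.connect(...)` call is no longer needed.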


Thank you. Got it. Will give that a try.

Hi @Jeff_Cohen!

@braden and @Felipe have a very deep understanding of the platform, and I agree with their assessment on how to grab that event info and the long term goal of adding it to openedx-events.

But one thing I would caution you about is that defining what it means to “complete” a unit can be surprisingly difficult, and the approximation that the LMS makes may not line up with your expectations. We don’t have an explicit “I’m Done With This Part” button for the user to hit, though the idea comes up occasionally, and we sort of had one a long time ago. Because there is no such button today, the system has to infer what completion means based on various content-dependent heuristics. Is that problem “complete” if you tried it but got it wrong and still have a couple of tries available? Did you really “complete” a unit with a video if you only watched the first half? (That one’s tunable, by the way–there’s a setting for it.)

I don’t remember all the rules we use to make those determinations, but I just want to make sure that you’re aware of the fuzziness there. If you’re looking for something more straightforward like “they got the problems correct” or something, you might need to look at something in the grades API.

Good luck. :smile:


@dave Thank you for calling my attention to this. I will monitor activity as we get deeper into our development to confirm that the logic aligns with our expectations.

It appears that a BlockCompletion post_save signal is triggered after each individual component within a vertical unit is completed, but no signal seems to be emitted when the vertical as a whole is completed. For example, the unit below contains two questions. A BlockCompletion object is saved the first time each question is answered. I only want to give credit once both questions are answered and that unit is complete, but I do not see a signal when the vertical is complete. Do you have any further advice for how to handle that level of activity?

/learning/course/course-v1:edX+DemoX+Demo_Course/block-v1:edX+DemoX+Demo_Course+type@sequential+block@workflow/block-v1:edX+DemoX+Demo_Course+type@vertical+block@vertical_ac391cde8a91
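For concreteness, the check I’m hoping to trigger might look like this pure-Python sketch. The child list and completion values are stubbed as plain data here; in practice they would come from the course structure (e.g. the modulestore) and from BlockCompletion rows, so treat the wiring as an assumption.

```python
# Sketch of a "vertical complete" check, since no signal appears to
# fire for the vertical itself: on each BlockCompletion save, recompute
# whether every child of the parent vertical is now complete.

def unit_is_complete(child_block_ids, completions):
    """Return True once every child of the vertical is fully complete.

    completions maps block id -> completion fraction (0.0 to 1.0),
    mirroring what BlockCompletion records per component.
    """
    return all(completions.get(bid, 0.0) >= 1.0 for bid in child_block_ids)

# A vertical with two problems, as in the example unit above:
children = ["problem_1", "problem_2"]
print(unit_is_complete(children, {"problem_1": 1.0}))  # False: one left
print(unit_is_complete(children, {"problem_1": 1.0,
                                  "problem_2": 1.0}))  # True: both done
```

The idea would be to run this from the receiver and only fire the credit-granting task on the transition from incomplete to complete, so the processing happens exactly once per unit.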