New analytic - usage / time on task?

We have a need to generate some usage data indicating how long a learner has been interacting with the course. In our first use case we need this information in order to determine, in a subscription scenario, whether a learner has accessed the course for a sufficient amount of time in a month to trigger a usage event that would result in a royalty payment. We could also potentially use this for engagement analysis, learning-efficiency analysis, and predicting the time it will take to complete a unit, sub-section, section, etc.

There doesn't appear to be a reliable way to generate such a metric for all content types. Video generates detailed viewing metrics, but other content types do not.

Completion events can't be relied upon. In particular, a completion event is triggered for every HTML block in a vertical about 4s after the page is rendered, regardless of how many blocks are present, how much text each block contains, and whether the content ever appeared in the viewport.

I know this has been a topic of some interest in the learning analytics community for some time. I'm interested to hear from this community how this metric could be reliably generated for all courses and, if a definition can be agreed upon, whether it would be possible to add it to Aspects.

Thanks,

Scott.


Howdy! I've looked into this a fair amount, from my PhD thesis to more recent research, so I feel like I can answer pretty authoritatively. If you want to skip the reasoning, scroll down to the bold part.

You are correct that completion events cannot be relied on.

The amount of time between when someone loads one page and when they load the next page is not something you want to rely on either. You don't know whether someone has read a page for 20 minutes, or whether they read it for 10 minutes and then visited the restroom. When they loaded a problem and answered it an hour later, were they working on the problem or did they watch YouTube?

You can remove some obvious outliers (27 hours between page loads means they went away and came back the next day), but what amount of each outlier do you want to count? 10 minutes of it? 5% of it? None of it? Are you going to keep track of the actual length of the content, the grade level of the writing, and the learner's language proficiency when determining what's an outlier?
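To make those judgment calls concrete: the usual proxy computation looks roughly like the sketch below, where you cap the gap between consecutive events at some arbitrary threshold. The event shape and the 10-minute cap here are illustrative assumptions, not a recommendation.

```python
from datetime import timedelta

# Illustrative only: cap the gap between consecutive tracking events at an
# arbitrary threshold and sum the capped gaps as "time on task". The cap
# (and whether to count any portion of long gaps at all) is exactly the
# judgment call described above.
GAP_CAP = timedelta(minutes=10)

def proxy_time_on_task(event_times):
    """event_times: sorted datetimes of one learner's events for one day."""
    total = timedelta()
    for prev, curr in zip(event_times, event_times[1:]):
        total += min(curr - prev, GAP_CAP)  # a 27-hour gap counts as only 10 minutes
    return total
```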

Video metrics also cannot be relied on. You do not know that the learner actually watched the video, only that the video played. Someone might have pressed "play", made a sandwich, and skipped to the next page. There are learners who do this for a variety of reasons, most of them involving the Treasured Green Checkmark of Warm Fuzzy Feelings. Other motivations for not actually watching a video include a disinterest in that particular topic, or the fact that they already know it and just want to get the green check and move on.

If it sounds like I'm saying that you can't measure time on task, that is correct. **You cannot measure time on task.**

You can measure proxies for time on task, but the relationship between those proxies and actual time is dependent on the individual and on their out-of-platform activities that day. If you tell me that one individual learner spent more time in a course than another learner, I'm not going to believe you, because you cannot measure that.

Now, if you tell me that the time on task for an entire course is, on average, longer than another course, then I'll believe you. We can make proxy measurements, and the huge variability in individual measurements can be averaged out pretty well at the whole-course level. I might believe it for cohorts within a course, depending on how large those cohorts are. "How large does it have to be?" is probably a research question. "How close is the average of a particular proxy to the average actual time?" is another good research question, one that will require tracking actual time spent in some reliable manner (not eye tracking or online proctoring).

I'm not against having a proxy measure in Aspects, but it would need to be labeled as a proxy and not counted in hours or minutes. It is better to make an arbitrary decision and A/B test it than it is to make a decision on the basis of faulty data, and I would rather not have us mislead people.

Long answer, I know.


Thanks for the comprehensive response, Colin. I'm not too surprised by your comments. Our primary use case is determining when the 'usage' of content by a student crosses a time threshold that would trigger a royalty payment. In that case, if the content was playing in the background while the user made a coffee, that's on them; it's not really so different from any other digital content. For that use case there's a much lower bar for calculating an appropriate metric. It would have been ideal if the same metric could also be useful for determining learning efficiency, but I guess I was being naive. I acknowledge this will have much lower interest as a general metric available via Aspects, although I suspect we'd probably add it as an extension even if the community has no interest in it.

I think for our use case a variation on what Régis had described would probably meet our needs and is sufficiently defensible. So maybe break the day up into X-minute blocks and, for each non-video event that occurs in a block, assume Y% of X minutes as usage. Sum all the blocks for a day and add the video metrics to get a 'usage' value. X and Y could be tweaked per course.
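As a rough sketch of how I'm picturing that calculation (the block size, the credit fraction, and the reading that an active block is credited once rather than once per event are all placeholder assumptions to be tuned per course):

```python
# Illustrative sketch only; the values of X and Y below are arbitrary placeholders.
BLOCK_MINUTES = 15           # X
BLOCK_CREDIT_FRACTION = 0.5  # Y%

def daily_usage_minutes(non_video_event_times, video_watch_minutes):
    """non_video_event_times: datetimes of a learner's non-video events for one day.
    video_watch_minutes: total watch time reported by the video events for that day.
    Credits each X-minute block containing at least one non-video event with Y% of X."""
    active_blocks = {(t.hour * 60 + t.minute) // BLOCK_MINUTES
                     for t in non_video_event_times}
    block_usage = len(active_blocks) * BLOCK_MINUTES * BLOCK_CREDIT_FRACTION
    return block_usage + video_watch_minutes
```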

Relatively simple, and likely defensible for the purpose of determining whether there should be a royalty payment. The user is already paying a fixed subscription, so they won't be impacted by the calculation being overly 'aggressive' in determining usage.


If you can get them to agree to that sort of usage value, it does seem fairly reasonable to do it that way. "X number of image loads" or "Y number of clicks" is pretty standard in advertising.

Honestly part of the problem is that the community has quite a bit of interest in measuring time on task. :confused: The proxies we have for time on task aren't as bad as using self-reported learning gains as proxies for actual learning gain, but some of them are close.


Displaying how long a learner has been interacting with a course is unfortunately out of scope for Aspects at this time.

The time an individual learner spent engaging with the content on a specific unit or page would be interesting data to capture, and I'd expect a wide range of users would want to get their hands on it, but as Colin pointed out it is impossible to calculate reliably (how long someone spends on a page does not equal how long they spent meaningfully interacting with the content on that page).

I understand that your use case is distinct from this - you want to know how long a learner spent with course material (whether that time was active engagement or time spent stepping away to make lunch) for royalty/pricing considerations.

Being able to calculate and display this would require sizeable effort on our end, and this particular use case is not one I've come across as a common one. If other members of the community do have a similar need for this - please do chime in! I will certainly make note of this request and continue to keep my ear to the ground for similar needs across the community as we continue to build out Aspects in the future.

Thanks, Chelsea. To be clear, while it would have been nice if the Aspects devs had jumped on this and developed it this quarter(!), that's not the request. We're looking to develop a metric that could also be used as a proxy for course activity. We need it for a specific business purpose, but if we can agree on a definition that has more general use without significantly increasing the scope of the effort, then my preference would be to develop that and contribute it. So I'm looking for input on what such a metric might look like.

What I have so far is to break the day into time buckets and, for each enrollment, if there is any activity in a bucket, log usage equivalent to the size of the bucket. Presumably the smaller the bucket, the more 'accurate' the measure, and the more expensive the calculation and storage.
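Roughly something like the sketch below (the bucket size and event shape are placeholders, and the real implementation would presumably be an aggregation in the Aspects data pipeline rather than Python):

```python
from collections import defaultdict

# Illustrative sketch only: any activity in a bucket counts the whole bucket.
BUCKET_MINUTES = 15

def daily_usage_by_enrollment(events):
    """events: iterable of (enrollment_id, timestamp) pairs for one day,
    where timestamp is a datetime. Returns usage minutes per enrollment."""
    buckets = defaultdict(set)
    for enrollment_id, ts in events:
        buckets[enrollment_id].add((ts.hour * 60 + ts.minute) // BUCKET_MINUTES)
    return {eid: len(b) * BUCKET_MINUTES for eid, b in buckets.items()}
```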

Absent additional community input we'll likely test this approach on some historical data to see if it yields reasonable results.

Thanks,

Scott.