Discussion Grader XBlock – A New Way to Automate Grading for Student Participation in Discussions

Hello Educators,

OpenCraft is proposing an XBlock-based solution to automate the grading of student participation in discussions on the Open edX platform. The goal is to track engagement metrics (e.g., post count, length) and integrate them with Open edX’s grading system. We aim to develop a foundational version (MVP) that can evolve with feedback. For the full proposal, click here :arrow_right:
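For anyone curious how metric-based grading might work mechanically, here is a minimal sketch in Python (the language XBlocks are written in). The function name, thresholds, and scoring rule are illustrative assumptions on my part, not the actual proposed implementation:

```python
# Hypothetical sketch only: one way a participation grade could be
# derived from post count and post length. All names and thresholds
# here are illustrative, not from the proposal.

def participation_score(posts, min_posts=3, min_words=20):
    """Return a 0.0-1.0 score from a list of post bodies.

    A post only counts toward the requirement if it meets a minimum
    word count, to discourage one-word "great idea!" replies.
    """
    qualifying = [p for p in posts if len(p.split()) >= min_words]
    return min(len(qualifying) / min_posts, 1.0)

posts = [
    "Great idea!",  # too short: does not count
    "I think the rubric should also weigh how often a learner replies "
    "to classmates rather than only starting new threads.",
    "One risk with counting posts is that learners will game the metric "
    "by splitting a single thought across several short replies.",
]
print(participation_score(posts))  # -> 0.6666666666666666 (2 of 3 posts qualify)
```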

We’re currently seeking funding for this feature, and would love your input:

Will this tool add value to your courses?

  • Yes
  • No

Let me know if there are additional features that would make this tool more effective for your needs. Some future ideas include:

  • Automated word counts
  • AI-powered quality checks to flag low-effort posts
  • Automated peer assessment criteria (e.g., respond to “X posts”)
  • Automated thread count and reply count
  • Automated instructor alerts and analytics for low participation
  • Automated grading windows and late submission handling
  • Manual grading adjustments for more nuanced evaluation
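Several of these ideas could be prototyped with plain heuristics before reaching for AI. As a rough illustration of the "flag low-effort posts" idea, here is a minimal sketch; the phrase list, word threshold, and function name are my own assumptions, not part of the proposal:

```python
# Hypothetical heuristic sketch (no AI involved): flag posts that are
# just a stock reply, or too short to say much. Thresholds and the
# stock-phrase list are illustrative assumptions.

STOCK_REPLIES = {"great idea", "i agree", "nice post", "thanks for sharing"}

def looks_low_effort(post: str, min_words: int = 15) -> bool:
    """Flag a post that is only a stock phrase or below a word threshold."""
    text = post.strip().lower().rstrip("!. ")
    return text in STOCK_REPLIES or len(text.split()) < min_words

print(looks_low_effort("Great idea!"))   # -> True
print(looks_low_effort(
    "Counting posts alone rewards volume over substance, so any metric "
    "should probably be paired with an instructor-facing review queue."
))  # -> False
```

A heuristic like this is obviously easy to fool, but it can catch the most common low-effort patterns cheaply and leave the harder judgment calls to an instructor.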

Thank you for your time and feedback!

cc. @Fox_Piacenti @john_curricume


Hi there! I see some value here, but the quality issue needs to be addressed. Elements such as the number of “likes”, clicks on “follow thread”, or the number of replies to a discussion topic could be considered quality elements to assess.


Thanks for the insights @cabralpedrofccn - just curious, do you create online courses? And if so, would you use this XBlock as it stands?

Right now we just need to build the foundational features so we can expand this further.

Hi @Cassie.
I used to be an instructional designer (mostly for Moodle and edx.org). Currently, I’m an invited professor at an Open University (that uses Moodle), where I assess my students’ discussion performance with a rubric. I also manage a national MOOC platform (an Open edX instance, Redwood release).
If the assessment were based only on the number of threads and replies created, I wouldn’t use it. I would use it if quality could be included. :slight_smile:


@cabralpedrofccn Thanks so much for sharing your thoughts :slight_smile: it’s great to hear from someone with hands-on experience in both Moodle and Open edX (as well as actively running a MOOC platform). Feedback like yours helps us think more clearly about how to design features that actually make a difference.

With this in mind, could you check out this post: What’s on Your Open edX Wishlist? I think, with your background, you’ll definitely be able to contribute valuable thoughts there.

cc. @Fox_Piacenti @xitij2000 @ali_hugo


I know there’s a lot of demand for this feature (has been for years), but it’s going to get gamed heavily. Back when I was an instructor for online courses on other platforms, you could tell who was just there to type in “great idea!” and move on; but there’s no good automatable way to see whether someone’s actually contributing to a discussion or just doing the minimum to tick a box.

For learners who take it seriously, I could see it increasing participation in the forums, but I don’t imagine it’ll improve the signal-to-noise ratio.


@colin.fredericks I hear you - and this is definitely something we plan to tackle once the core functionality is in place. Do you think adding features like the ones below would help reduce the chances of students gaming the system?

  • Automated word counts
  • AI-powered quality checks to flag low-effort posts
  • Automated peer assessment criteria (e.g., respond to “X posts”)
  • Automated thread count and reply count
  • Automated instructor alerts and analytics for low participation
  • Automated grading windows and late submission handling
  • Manual grading adjustments for more nuanced evaluation

@cabralpedrofccn I’d love your perspective too.

  • Automated word counts - No, this is the most gameable item here.
  • AI-powered - I don’t particularly trust AI for this. It needs to properly flag low-effort responses and writing quality for both native and non-native speakers. For Open edX used outside the English-speaking world, it needs to handle other languages. It needs to recognize copy-pasta from Wikipedia. My take is that there’s a high risk of false positives and false negatives. The monetary cost could also be extremely high depending on what model/provider you’re using - the more accurate it is, the more it’s going to cost and the worse the environmental impact will be. AI can do the same quality of job as a disengaged TA with one foot out the door.
  • Respond to X posts - Easy to game, just make more cheap posts.
  • Thread count - Same
  • Instructor alerts - This just collects other items on this list, so it’s equally gameable.
  • Grading windows - I’m not sure how this would be different from regular deadlines in the course.
  • Manual grading adjustments - This is the good way to do graded discussions: manually. Naturally, no one wants to do it because it’s a lot of effort, but :person_shrugging:

This is really helpful! Thank you for your detailed response :star: We’ll definitely need to gather more feedback from the community to see if any form of discussion grading will be useful or just feel like extra noise.

It’s interesting though because platforms like Canvas and Blackboard already include discussion grading, so we’re trying to figure out whether the Open edX platform needs this feature to stay competitive. I’d be curious to know how many community members use tools like Yellowdig to grade discussions within the platform. @ehuthmacher is there any way to get this type of information?

Thanks again @colin.fredericks, this kind of feedback is really helpful as we figure out whether this is worth building.