Each time a user submits a problem answer, two records are stored in the Courseware Student Module History table. This appears to happen because the StudentModule record's fields are updated in two separate saves, each of which triggers a post_save signal that inserts a new history record.
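To make the mechanism concrete, here is a minimal, self-contained simulation of the behavior described above. The class and function names are illustrative stand-ins, not the actual edx-platform code; the point is only that two `save()` calls per submission yield two history rows, the first with a stale `state`:

```python
# Minimal simulation of the double-save behavior (illustrative names only).

class StudentModule:
    """Stand-in for the courseware StudentModule model."""
    def __init__(self):
        self.grade = None
        self.max_grade = None
        self.state = "{}"
        self.history = []  # stand-in for the CSMH table

    def save(self):
        # In Django, a post_save receiver appends a history row on every save.
        self.history.append(
            {"grade": self.grade, "max_grade": self.max_grade, "state": self.state}
        )

def set_score(module, grade, max_grade):
    # Save #1: only the grade fields are updated; state is still stale.
    module.grade = grade
    module.max_grade = max_grade
    module.save()

def handle_submit(module, new_state, grade, max_grade):
    set_score(module, grade, max_grade)  # first save (inconsistent snapshot)
    module.state = new_state             # the runtime then persists full state
    module.save()                        # second save (consistent snapshot)

module = StudentModule()
handle_submit(module, '{"attempts": 5}', 1.0, 1.0)
print(len(module.history))          # 2 history rows for one submission
print(module.history[0]["state"])   # stale "{}" in the first row
```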
First, capa_module.ProblemBlock.submit_problem publishes a grade event, which ultimately updates the StudentModule instance in courseware.model_data.set_score. At this point only the grade and max_grade fields are updated, which leaves the StudentModule instance internally inconsistent: the state field still reflects the data from the previous submission, while the grade field reflects the status of the new submission.
Then the submit_problem function completes, and the standard xblock.runtime.Runtime.handle function saves the instance again. This time all of the StudentModule's fields are updated, and the record ends up in the correct state.
However, in the process the student module history table collects extra records. For example, in the submission history below, both submissions 13 and 14 show attempts: 4. Note that in submission 14 the header score is 1.0 / 1.0, but in the JSON, score.raw_earned still shows 0 and the correct_map still says incorrect. This is because the state is left unchanged from the earlier submission 13. Only when submission 15 is saved one second later does the header score match score.raw_earned.
#15: 2022-08-12 19:27:32 UTC
Score: 1.0 / 1.0
{
"attempts": 5,
"correct_map": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {
"answervariable": null,
"correctness": "correct",
"hint": "",
"hintmode": null,
"msg": "",
"npoints": null,
"queuestate": null
}
},
"done": true,
"input_state": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {}
},
"last_submission_time": "2022-08-12T19:27:31Z",
"score": {
"raw_earned": 1,
"raw_possible": 1
},
"seed": 1,
"student_answers": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": [
"choice_3"
]
}
}
#14: 2022-08-12 19:27:31 UTC
Score: 1.0 / 1.0
{
"attempts": 4,
"correct_map": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {
"answervariable": null,
"correctness": "incorrect",
"hint": "",
"hintmode": null,
"msg": "",
"npoints": null,
"queuestate": null
}
},
"done": true,
"input_state": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {}
},
"last_submission_time": "2022-08-12T19:27:20Z",
"score": {
"raw_earned": 0,
"raw_possible": 1
},
"seed": 1,
"student_answers": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": [
"choice_0",
"choice_1"
]
}
}
#13: 2022-08-12 19:27:22 UTC
Score: 0.0 / 1.0
{
"attempts": 4,
"correct_map": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {
"answervariable": null,
"correctness": "incorrect",
"hint": "",
"hintmode": null,
"msg": "",
"npoints": null,
"queuestate": null
}
},
"done": true,
"input_state": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": {}
},
"last_submission_time": "2022-08-12T19:27:20Z",
"score": {
"raw_earned": 0,
"raw_possible": 1
},
"seed": 1,
"student_answers": {
"9cee77a606ea4c1aa5440e0ea5d0f618_2_1": [
"choice_0",
"choice_1"
]
}
}
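For what it's worth, the duplicate pairs are detectable from the data alone: two consecutive history rows whose state reports the same attempts count (like #13 and #14 above) mark an extra snapshot. A hedged sketch of such a check, with a hypothetical helper name and the row format assumed to be (timestamp, state-JSON) ordered oldest-first:

```python
import json

def find_duplicate_attempts(rows):
    """Return timestamps of rows whose 'attempts' repeats the previous row's.

    rows: list of (created, state_json) tuples, ordered oldest-first.
    """
    dupes = []
    prev_attempts = None
    for created, state_json in rows:
        attempts = json.loads(state_json).get("attempts")
        if attempts is not None and attempts == prev_attempts:
            dupes.append(created)
        prev_attempts = attempts
    return dupes

rows = [
    ("19:27:22", '{"attempts": 4}'),  # submission #13
    ("19:27:31", '{"attempts": 4}'),  # submission #14 (duplicate pair)
    ("19:27:32", '{"attempts": 5}'),  # submission #15
]
print(find_duplicate_attempts(rows))  # ['19:27:31']
```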
Do you have any advice on how to avoid collecting these extra records and uphold the credibility of our learner performance dashboards?
Thank you!