Question about sharing a random value between two Advanced Component Problems

Hello,

I often program custom Python-Evaluated Input problems. To keep students from cheating by passing answers to each other, I use a random number to select a variation of the problem. From that variation, Python calculates both the question and its answer. The problem is that I need a way to pass this random number on to the next component (Blank Problem).
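
For illustration, a single component of the kind I mean looks roughly like this (just a sketch; the variation list and the check logic are placeholders, not my actual exercise):

```xml
<problem>
  <script type="loncapa/python">
import random

# Pick one of a few prepared variations; the question text and the
# expected answer are then derived from it.
variation = random.randint(0, 3)

# Placeholder: in the real problem this would build the boolean function
# and its truth table for the chosen variation.
expected_answer = ["0110", "1001", "1110", "0011"][variation]

def check(expect, answer):
    return answer.strip() == expected_answer
  </script>

  <customresponse cfn="check">
    <textline size="20"/>
  </customresponse>
</problem>
```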

Here is an example:

  1. Exercise → Fill in the truth table for the given function
  2. Exercise → Calculate the disjunctive normal form from the truth table using a KV map
  3. Exercise → Calculate the conjunctive normal form from the truth table using a KV map
  4. Calculate the number of links for the disjunctive and the conjunctive form and pick the option best suited for implementation

If you don’t know what these terms mean, don’t worry. The only thing that really matters is that you need to have completed step 1 to do steps 2 and 3, and steps 2 and 3 to do step 4. I don’t want to do this in a single component, as it would be too long and the students couldn’t check whether their previous step was correct.

If I were able to pass the variation along, I could calculate the right answer in each component, which would make the students’ lives much easier.

So I wonder: is this possible in any way, or is Open edX simply not able to do it? If it is possible, a thorough explanation would be very helpful.

PS: It’s my first time posting here; I hope whoever reads this understands my question well enough.

Thanks in advance


Hi @fynnleer and welcome to the community!

I love your use case; it’s really interesting. I don’t know how to achieve that in Open edX, though. @dave, would you have any idea?

I’m moving this to the Educators > Authoring category, since I suspect that some folks in the authoring community have already achieved this effect.

If not, I can think of a couple of hacky things that might work, but I haven’t tested them.

Sharing information between pure Python problems isn’t something I’ve cracked. However, there is a way to do it with custom JavaScript problems that are on the same page.

The basic idea is that you use the problems’ parent frame to store the data you want to pass. You can set variables on window.parent.window, and all of the problems on the page will be able to access them. Because you can store the HTML files for the custom JS problems in edX’s Files page, there are no cross-site scripting issues between them and the rest of the page.

If you still want to use Python as the randomizer, you’ll have to do a little extra work to read the Python-randomized value off the page, but it’s certainly doable. To keep the grading code concealed from the learners, you’d then pass their answer and the random value into your custom grader, which all happens server-side.
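
Roughly, the server-side half of that might look like the sketch below (untested; it assumes the standard JSInput pattern where the check function receives the JS state as a JSON string, and the field names plus the expected_for_seed helper are made up):

```python
import json

def check(expect, ans):
    # With a JSInput-based customresponse, `ans` is a JSON string whose
    # "answer" field holds whatever the page's JavaScript grade function
    # returned. Here we assume the JS packed both the learner's response
    # and the shared random value into that string.
    payload = json.loads(json.loads(ans)["answer"])  # e.g. {"response": "0110", "seed": 2}

    seed = int(payload["seed"])
    response = payload["response"].strip()

    # Recompute the expected answer from the shared seed on the server,
    # so the grading logic itself never reaches the browser.
    expected = expected_for_seed(seed)  # hypothetical helper defined elsewhere in the script

    return {"ok": response == expected, "msg": ""}
```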

There is a way to do it if they’re not on the same page (see my feature request for a learner data store), but it’s currently a pain to set up. An easier method would be to create a pseudo-random number in JavaScript, seeding it with the learner’s user ID (via either %%USER_ID%% or analytics or similar things), which you can put into an invisible div so it stays out of view of the learner.


Hi,
thanks for your answer. I have actually found a way to get the user ID in Python and then use an algorithm to generate a random number from the ID.
I get the user ID in the Python script with this line:
globals().get("anonymous_student_id")
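
For example (a rough sketch; hashing the ID down to a handful of variations is just how I’d do it, nothing about the scheme is required):

```python
import hashlib

# The anonymous student ID is available in the script context of a
# Python-evaluated input problem, so it can be read from globals().
anon_id = globals().get("anonymous_student_id", "")

# Hash the ID into a stable variation index: the same learner always
# gets the same variation, different learners usually get different ones.
NUM_VARIATIONS = 4
variation = int(hashlib.sha256(anon_id.encode("utf-8")).hexdigest(), 16) % NUM_VARIATIONS
```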

The problem is that the ID disappears when using a customresponse cfn: when you click the Submit Answer button, the Python code is re-run WITHOUT access to that variable. To work around this, you need to put the user ID in the expect argument of the customresponse.

Now you need a way to make the Show Answer button work, as it would otherwise only show the user ID (or whatever number is in the expect argument). To do this, you need at least two textlines and pass the answer to the correct_answer argument of each textline.
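
Put together, the problem XML ends up looking roughly like this (a sketch; I’m assuming the usual $variable substitution works for these attributes, and the answer values are placeholders):

```xml
<problem>
  <script type="loncapa/python">
anon_id = globals().get("anonymous_student_id", "")

# Derive the variation and the expected answers from the ID
# (e.g. with the hashing sketch above); these two are placeholders.
answer_1 = "0110"
answer_2 = "1001"

def check(expect, answers):
    # `expect` carries the expect attribute, so the ID survives the
    # re-run that happens when the learner clicks Submit Answer.
    # (The return format for two inputs is shown in the next snippet.)
    ...
  </script>

  <!-- expect smuggles the user ID back into the cfn at grading time -->
  <customresponse cfn="check" expect="$anon_id">
    <!-- correct_answer is what the Show Answer button displays -->
    <textline correct_answer="$answer_1"/>
    <textline correct_answer="$answer_2"/>
  </customresponse>
</problem>
```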

If you only need one textline, you still have to have at least two, but one of them can be hidden. The cfn then receives the answers as a list, where one entry will be empty.

To grade correctly, you need to return a dict with two gradings in its input_list. Both gradings need to be the same value (True, False, or 'partial') in order to award the correct number of points.
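
The check function then looks something like this (again just a sketch; compute_answers stands in for whatever per-variation calculation the exercise needs):

```python
def check(expect, answers):
    # `expect` is the value of the expect attribute (the user ID),
    # `answers` is a list with one entry per textline, in order.
    expected_1, expected_2 = compute_answers(expect)  # hypothetical helper using the ID/seed

    # The hidden textline arrives as an empty string, so only grade it
    # when it actually contains something.
    ok = (answers[0].strip() == expected_1
          and (not answers[1] or answers[1].strip() == expected_2))

    # One grading per input; both carry the same value (True, False,
    # or 'partial') so the points are awarded correctly.
    return {
        "input_list": [
            {"ok": ok, "msg": ""},
            {"ok": ok, "msg": ""},
        ]
    }
```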

Very cool! Do you have a demo of this anywhere? And what else is in the “globals” function?