LTI debugging and Juniper

Can anyone recommend a way to debug LTI?

Under Ironwood I could access one of our courses through http://ltiapps.net/test/tc.php and get the result back in the gradebook.

Under Juniper I can still access the same course but the result is not propagated back to the gradebook.

Here are some extracts from the logs, but I can’t figure out why I get a 400 response back… Any ideas? Any suggestions?

Jan 21 11:05:48 ip-10-0-0-71 [service_variant=lms][celery.worker.strategy][env:sandbox] INFO [ip-10-0-0-71 19192] [user None] [strategy.py:60] - Received task: lms.djangoapps.grades.tasks.recalculate_subsection_grade_v3[34db3fa2-9a5d-4a14-b84d-ca36f9fc105e] eta:[2020-01-21 16:05:50.049101+00:00]

Jan 21 11:05:48 ip-10-0-0-71 [service_variant=lms][celery_utils.logged_task][env:sandbox] INFO [ip-10-0-0-71 19166] [user 131970] [logged_task.py:31] - Task lms.djangoapps.grades.tasks.recalculate_subsection_grade_v3[34db3fa2-9a5d-4a14-b84d-ca36f9fc105e] submitted with arguments None, {'course_id': 'course-v1:UMontreal+TEST_LTI_01+E2016', 'event_transaction_id': '666a496c-55ca-4734-97dc-f49037c82e72', 'usage_id': 'block-v1:UMontreal+TEST_LTI_01+E2016+type@problem+block@5753b0b25f2443c6867b348275a3818d', 'force_update_subsections': False, 'event_transaction_type': 'edx.grades.problem.submitted', 'score_deleted': None, 'only_if_higher': None, 'expected_modified_time': 1579622748, 'anonymous_user_id': None, 'user_id': 131970, 'score_db_table': 'csm'}

Jan 21 11:05:48 ip-10-0-0-71 [service_variant=lms][celery.worker.strategy][env:sandbox] INFO [ip-10-0-0-71 19192] [user None] [strategy.py:60] - Received task: lti_provider.tasks.send_composite_outcome[c616c30c-1f1f-4511-9d6e-3568f6eb51d1] eta:[2020-01-21 16:08:48.090865+00:00]

Jan 21 11:05:51 ip-10-0-0-71 [service_variant=lms][lms.djangoapps.grades.course_grade_factory][env:sandbox] INFO [ip-10-0-0-71 19853] [user None] [course_grade_factory.py:225] - Grades: Update, Course: course_key: course-v1:UMontreal+TEST_LTI_01+E2016, version: 5db092fc6a2caa0b6573e212, edited_on: 2019-10-23 17:50:52.091000+00:00, grading_policy: z9TLT0kWbxYf2HFm6Xr2N1QJBbg=, User: 131970, Course Grade: percent: 1.0, letter_grade: Pass, passed: True, persisted: False

Jan 21 11:05:51 ip-10-0-0-71 [service_variant=lms][celery.worker.job][env:sandbox] INFO [ip-10-0-0-71 19192] [user None] [job.py:402] - Task lms.djangoapps.grades.tasks.recalculate_subsection_grade_v3[34db3fa2-9a5d-4a14-b84d-ca36f9fc105e] succeeded in 0.5742135950004013s: None

Jan 21 11:08:48 ip-10-0-0-71 [service_variant=lms][lms.djangoapps.grades.course_grade_factory][env:sandbox] INFO [ip-10-0-0-71 19853] [user None] [course_grade_factory.py:225] - Grades: Update, Course: course_key: course-v1:UMontreal+TEST_LTI_01+E2016, version: 5db092fc6a2caa0b6573e212, edited_on: 2019-10-23 17:50:52.091000+00:00, grading_policy: z9TLT0kWbxYf2HFm6Xr2N1QJBbg=, User: 131970, Course Grade: percent: 1.0, letter_grade: Pass, passed: True, persisted: False

Jan 21 11:08:49 ip-10-0-0-71 [service_variant=lms][edx.lti_provider][env:sandbox] ERROR [ip-10-0-0-71 19853] [user None] [outcomes.py:160] - Outcome Service: Failed to update score on LTI consumer. User: qmUwUMD8Ds6jANodGFPnlV254Btyz1, course: course-v1:UMontreal+TEST_LTI_01+E2016, usage: block-v1:UMontreal+TEST_LTI_01+E2016+type@sequential+block@1c40b4c92c58421da885bea7a63992e1, score: 1.0, status: <Response [400]>, body: Unknown

Jan 21 11:08:49 ip-10-0-0-71 [service_variant=lms][celery.worker.job][env:sandbox] INFO [ip-10-0-0-71 19192] [user None] [job.py:402] - Task lti_provider.tasks.send_composite_outcome[c616c30c-1f1f-4511-9d6e-3568f6eb51d1] succeeded in 0.33479275300032896s: None
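
For context, the call that produces this 400 is an LTI 1.1 Basic Outcomes `replaceResultRequest`: a POX XML body POSTed to the consumer's outcome service URL with an OAuth1 body-hash signature. Here is a minimal sketch of how that payload is built, which can help when replaying the request by hand against the consumer (the sourcedId and score below are placeholders I made up, not values from my logs):

```python
# Build an LTI 1.1 Basic Outcomes "replaceResult" POX body.
# The sourcedId is a placeholder; the real value is the
# lis_result_sourcedid the consumer sent at launch time.
import uuid

POX_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<imsx_POXEnvelopeRequest xmlns="http://www.imsglobal.org/services/ltiv1p1/xsd/imsoms_v1p0">
  <imsx_POXHeader>
    <imsx_POXRequestHeaderInfo>
      <imsx_version>V1.0</imsx_version>
      <imsx_messageIdentifier>{message_id}</imsx_messageIdentifier>
    </imsx_POXRequestHeaderInfo>
  </imsx_POXHeader>
  <imsx_POXBody>
    <replaceResultRequest>
      <resultRecord>
        <sourcedGUID>
          <sourcedId>{sourced_id}</sourcedId>
        </sourcedGUID>
        <result>
          <resultScore>
            <language>en</language>
            <textString>{score}</textString>
          </resultScore>
        </result>
      </resultRecord>
    </replaceResultRequest>
  </imsx_POXBody>
</imsx_POXEnvelopeRequest>
"""

def build_replace_result(sourced_id, score):
    """Render the replaceResult body; the score must be 0.0..1.0 per the spec."""
    assert 0.0 <= score <= 1.0
    return POX_TEMPLATE.format(
        message_id=uuid.uuid4().hex, sourced_id=sourced_id, score=score
    )

body = build_replace_result("placeholder-sourcedid", 1.0)
```

A consumer returning 400 usually means it either rejected the OAuth signature or could not parse or match the sourcedId, so replaying a known-good body is a quick way to split those cases.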

What I find interesting is that my first attempt was made with our fork of edx-platform.

I tried again on a clean open-release/juniper.alpha1 install on a single server: I imported my course and generated a new LTI consumer key and secret.

Exact same problem. I can therefore rule out our fork as the source of the problem. I would have preferred a successful result, to be honest… Now I just don’t know where to look for an answer.

Jan 21 18:11:05 ip-10-0-0-139 [service_variant=lms][lms.djangoapps.grades.course_grade_factory][env:sandbox] INFO [ip-10-0-0-139  4181] [user None] [course_grade_factory.py:225] - Grades: Update, Course: course_key: course-v1:UMontreal+TEST_LTI_01+E2016, version: 5e273afff5037a1057e3af08, edited_on: 2020-01-21 17:55:11.692000+00:00, grading_policy: z9TLT0kWbxYf2HFm6Xr2N1QJBbg=, User: 17, Course Grade: percent: 1.0, letter_grade: Pass, passed: True, persisted: False
Jan 21 18:11:05 ip-10-0-0-139 [service_variant=lms][edx.lti_provider][env:sandbox] ERROR [ip-10-0-0-139  4181] [user None] [outcomes.py:152] - Outcome Service: Failed to update score on LTI consumer. User: uSJpaTAlvgf8j3vLA776oo6BlimrZ5, course: course-v1:UMontreal+TEST_LTI_01+E2016, usage: block-v1:UMontreal+TEST_LTI_01+E2016+type@sequential+block@1c40b4c92c58421da885bea7a63992e1, score: 1.0, status: <Response [400]>, body: Unknown
Jan 21 18:11:05 ip-10-0-0-139 [service_variant=lms][celery.worker.job][env:sandbox] INFO [ip-10-0-0-139  2356] [user None] [job.py:402] - Task lti_provider.tasks.send_composite_outcome[41e5c23b-7dc7-4d48-8792-6d1f6910c7ec] succeeded in 0.24040627600015796s: None

@sambapete Have you checked whether this issue exists in the latest master branch of edx-platform? It’s possible there were issues with the Python 3 upgrade, which changed how we treat strings, but that they were then fixed by the Django 2 upgrade, which did a better job of enforcing string usage.

@nimisha I didn’t try with the latest master branch. To be honest, our team tries to stay as close as possible to the open-release branch without moving ahead of it, except when there is something we can cherry-pick for our fork. So I was waiting, and planning to test again with the next (alpha or beta) release of Juniper whenever @nedbat makes it available to the open-source community.

We have a series of manual tests we run to check that key components still work with our fork after each new open release. LTI is the first thing we test after we rebase our fork, reinstall a new test environment, and migrate our data. That’s because our internal Moodle installation uses a lot of LTI components originally developed for courses that ran, or still run, in our Open edX instance, with a lot of grading going back to Moodle.

If I find the time, I might install a new instance from master early next week, just to test your hypothesis and import one of our LTI courses on it, but no promises on that front. I will definitely try again with the next “release” of Juniper when it becomes available.

Thanks @sambapete. We hope to start cutting the Juniper release in a week or two - we’re now very close to completing the Django upgrade.

@sambapete I’ve had a look at the lti_provider code diff between open-release/ironwood.2 and open-release/juniper.alpha1, and the differences are really minimal:

  • This code was added for Blockstore.
    This change is related to the signal handler for “weighted problem score changes”. The effect of the change is to avoid sending a score update to the consumer if the course_id sent with the signal is not a valid course ID. This check is needed for Blockstore because it has blocks which are not connected to courses, but this should have no effect for normal courses.
    Furthermore, since we are seeing the log message above, the score update is being sent to the consumer, but the consumer is returning an error in response.
  • The remaining changes are related to Python 3: absolute imports and unicode strings.
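
For what it’s worth, one classic way a py2-to-py3 string change can break this flow is in the `oauth_body_hash` that signs the POX body: hashing now requires an explicit encode, and any mismatch between the bytes that are hashed and the bytes that are sent makes the consumer reject the request. This is the generic OAuth body-hash recipe, sketched as an illustration of that failure mode, not a quote from the lti_provider code:

```python
import base64
import hashlib

def oauth_body_hash(body):
    """base64(SHA-1(body)), as used by the oauth_body_hash extension.

    Under Python 3 the body must be encoded to bytes before hashing;
    the exact bytes hashed here must match the bytes sent on the wire,
    otherwise the consumer will reject the signature.
    """
    if isinstance(body, str):
        body = body.encode("utf-8")
    return base64.b64encode(hashlib.sha1(body).digest()).decode("ascii")

# The same text, passed as str or as bytes, must hash identically.
assert oauth_body_hash("<imsx_POXEnvelopeRequest/>") == oauth_body_hash(
    b"<imsx_POXEnvelopeRequest/>"
)
```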

All of the rest of the LTI Provider code, including what generates the XML, and what does the OAuth1 authentication, looks exactly the same as it was in Ironwood.

Do you have any logs from your LTI Consumer side? The 400 Bad Request that’s logged here is a response from the consumer, so maybe there are some clues there?
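
If consumer-side logs are hard to get at, one low-tech alternative is to point the outcome service URL at a throwaway local endpoint that dumps the raw request, so the provider’s POST can be inspected byte for byte. A standard-library-only sketch (the port and canned response are my own placeholders, not part of any edX tooling):

```python
# A throwaway stand-in "consumer" that logs the raw outcome POST
# (headers + body) and always answers with a success envelope.
from http.server import BaseHTTPRequestHandler, HTTPServer

SUCCESS_BODY = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    "<imsx_POXEnvelopeResponse>"
    "<imsx_POXHeader><imsx_POXResponseHeaderInfo>"
    "<imsx_statusInfo><imsx_codeMajor>success</imsx_codeMajor></imsx_statusInfo>"
    "</imsx_POXResponseHeaderInfo></imsx_POXHeader>"
    "<imsx_POXBody/></imsx_POXEnvelopeResponse>"
)

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and print exactly what the provider sent.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("--- headers ---\n%s" % self.headers)
        print("--- body ---\n%s" % body.decode("utf-8", "replace"))
        # Always reply with a 200 success envelope.
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(SUCCESS_BODY.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8099), DumpHandler).serve_forever()
```

Comparing the captured Authorization header and body against what Ironwood sent would show immediately whether the signature, the body, or something else changed between releases.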

Also, let me know if I can help by setting up a master sandbox for you to try, since that’s the closest we have to the next juniper branch right now.

Hi @jill Thanks for looking into this. I greatly appreciate it.

I tried a native installation of master myself late Friday evening, after my discussion with @nimisha. It failed. I was told recently by @billatedx that there are current “issues” with the configuration repository that prevent master from being installed right now. @nedbat confirmed it to me yesterday (April 14th, 2020). If you are still able to set one up, let me know. I might try again when the “issues” are resolved or when @nedbat cuts the next Juniper release.

Thanks again.

Sure thing @sambapete, I can set one up for you tomorrow.

BTW, @billatedx’s configuration PR fixes the major issues for us, so you’re welcome to try it too if you like: configuration#5723

@jill @sambapete That PR is now merged. Please let me know if there are other issues you run into.

Thanks @billatedx. I tried installing from master this evening and still encountered a few issues.

One issue was with COMMON_CONFIG_NO_LOGGING, which was set to True and caused problems in the edx_service role.

While trying to work around it, I received a new error, "msg": "AnsibleUndefinedVariable: 'COMMON_OIDC_ISSUER' is undefined", still while running the edx_service role. I could only find one instance of COMMON_OIDC_ISSUER in the configuration repo: it is used in playbooks/roles/edx_django_service/defaults/main.yml but not assigned anywhere.

I guess master is still not installing as of Wednesday night. I’ll wait.

PS: I worked around it by installing through /var/tmp/configuration and skipping ecommerce and edx_notes, for example. I got something running. I’ll check whether it is Django 2 and whether I can access LTI in the next few days.

PS2: After any modification I make to /edx/etc/lms.yml or /edx/etc/studio.yml, I can no longer recompile the assets. Even when I put the original files back, I still can’t.

Build failed running pavelib.assets.update_assets: Subprocess return code: 1

I’m using an m5.xlarge, so I should have enough CPU and RAM. That definitely worked with juniper.alpha1. I’ll try again in a few days.

So awesome, thank you for getting that PR merged @billatedx!

@sambapete You’re welcome to use this sandbox if it helps. It’s based off current master, so the closest we have to a juniper.beta right now.

Let me know if you need me to elevate a user for you.

For reference:

configuration_version: eda5f35a57b8f7bb62a326c42c1245b46c8fc210
edx_platform_version: d0decc3da275e5fb7d0874d6081eef5a8e567c37
forum_version: cd2675988d1452e49cece483df548828d1067046