Studio's having trouble saving your work

Hello everyone!

I’m having this issue when I try to update a section of a course: “Studio’s having trouble saving your work. This may be happening because of an error with our server or your internet connection. Try refreshing the page or making sure you are online.”

When I look at tail -f /edx/var/log/cms/edx.log, I see this:

[service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30510] [django.py:198] - Sent course_published signal to <function trigger_update_xblocks_cache_task at 0x7fd88d844e60> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: [Errno 104] Connection reset by peer

But when I press F5 to refresh the page, the change I made is there!

Does anyone have ideas?

Sep 11 20:55:53 ip-172-20-1-213 [service_variant=cms][openedx.core.djangoapps.content.course_overviews.models][env:sandbox] INFO [ip-172-20-1-213 30514] [models.py:150] - Updating course overview for course-v1:Empresa+curso01+2019.
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function listen_for_course_publish at 0x7fd88da937d0> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: None
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function _listen_for_course_publish at 0x7fd88db63668> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: None
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function _listen_for_course_publish at 0x7fd88db63758> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: None
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function listen_for_course_publish at 0x7fd88db63b18> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: [Errno 104] Connection reset by peer
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function update_block_structure_on_course_publish at 0x7fd88d8dc1b8> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: [Errno 104] Connection reset by peer
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function _listen_for_course_publish at 0x7fd88d8dc2a8> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: None
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function update_discussions_on_course_publish at 0x7fd88d8aa0c8> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: [Errno 104] Connection reset by peer
Sep 11 20:59:34 ip-172-20-1-213 [service_variant=cms][xmodule.modulestore.django][env:sandbox] INFO [ip-172-20-1-213 30514] [django.py:198] - Sent course_published signal to <function trigger_update_xblocks_cache_task at 0x7fd88d843e60> with kwargs {'course_key': CourseLocator(u'Empresa', u'curso01', u'2019', None, None)}. Response was: [Errno 104] Connection reset by peer

Hi!

I can think of two things to verify:

  1. Check MongoDB, since it dies when it runs out of memory (see the quick checks sketched below)
  2. Reduce the number of LMS and CMS workers
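
If it helps, a quick way to check both on a standard native (Ansible) install looks roughly like this; adjust service names and paths if your setup differs:

# Is mongod still up, or did the kernel's OOM killer take it down?
# (the service may be called "mongodb" on older installs)
sudo systemctl status mongod
dmesg | grep -iE 'out of memory|killed process'

# How much memory is left right now?
free -m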

Hi, @sbernesto! Thanks for replying!

I think it is saving to Mongo, because when I refresh the page, the modification is there… My MongoDB is on an EC2 t3.medium; should I increase the instance type?

How do I reduce the number of workers?

You need to edit lms_gunicorn.py and cms_gunicorn.py, on the line that starts with workers, and then restart all the LMS/CMS workers.
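
On a native install those files usually live under /edx/app/edxapp/; something like the following, though your paths may differ:

# Locate the current worker setting (paths assume a native install)
grep -n "^workers" /edx/app/edxapp/lms_gunicorn.py /edx/app/edxapp/cms_gunicorn.py

# After lowering the number, restart the services so gunicorn picks it up
sudo /edx/bin/supervisorctl restart lms cms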


@sbernesto,

I’ve changed the number of workers to:

workers = (multiprocessing.cpu_count()-1) + 1

But I still face the same problem =/

Could it be a timeout problem in the app? I've changed cms.auth.json and lms.auth.json to:

"connectTimeoutMS": 20000
"socketTimeoutMS": 30000

But it didn’t work =/

If that hasn't worked for you, I recommend checking the logs in the supervisor subdirectory, as well as the Mongo ones, to see what is happening.
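
For example (paths assume the usual native-install layout; your Mongo log may live elsewhere):

# Studio/LMS process logs managed by supervisor
sudo tail -f /edx/var/log/supervisor/*.log

# MongoDB's own log (location varies by install)
sudo tail -f /var/log/mongodb/mongodb.log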

@sbernesto,

Looking into /edx/var/log/supervisor/cms_high_1-stderr.log, it seems that I have a problem with rabbitmq too; not sure if this is the cause of all that mess:

[2019-09-12 16:44:05,450: ERROR/MainProcess] consumer: Cannot connect to amqp://celery:**@127.0.0.1:5672//: [Errno 104] Connection reset by peer.
Trying again in 32.00 seconds...

Restart the rabbitmq service.
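
Something along these lines (the exact service name can vary between releases):

# Restart the broker and confirm it answers
sudo service rabbitmq-server restart
sudo rabbitmqctl status

# Also worth checking that the user and vhost the platform expects actually exist
sudo rabbitmqctl list_users
sudo rabbitmqctl list_vhosts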

I had this rabbitmq problem a long time ago (2015) after the IP address of my Open edX server changed.

I was told by someone at edX to try the following commands to reinstall rabbitmq:

sudo bash
. /edx/app/edx_ansible/venvs/edx_ansible/bin/activate
cd /edx/app/edx_ansible/edx_ansible/playbooks/
ansible-playbook -c local -i 'localhost,' ./run_role.yml -e "role=rabbitmq" \
-e@/edx/app/edx_ansible/server-vars.yml

After reinstalling rabbitmq, you might want to look at the /etc/rabbitmq/rabbitmq-env.conf file. In my case it had the new IP address of the server for RABBITMQ_NODE_IP_ADDRESS. I had to change it from 10.0.0.35 to 127.0.0.1 and restart the server.
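
In other words, the relevant line ended up looking like this (illustrative; check what your own file actually contains):

# /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=127.0.0.1
# then restart rabbitmq (or reboot, as above) so the node rebinds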

Maybe this solution won't work for you, but it helped me fix the following error I had at the time:
Jan 15 18:32:55 ip-10-0-0-35 [service_variant=lms][celery.worker.consumer][env:sandbox] ERROR [ip-10-0-0-35 1588] [consumer.py:796] - consumer: Cannot connect to amqp://celery@127.0.0.1:5672//: [Errno 111] Connection refused. Trying again in 32.00 seconds...

This may or may not work in your case. It’s a solution I had to use each time my Open edX server had its local IP address change.


Thank you very much, @sbernesto and @sambapete!! I was close to giving up, but your help made me keep trying!

The problem was with rabbitmq… there was no celery user created! This post helped me create the user: https://open-edx.readthedocs.io/en/latest/amazon_snapshot_bug.html
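
For anyone who hits the same symptom, the fix boils down to recreating the missing broker user, roughly like this (the password is a placeholder; it has to match the CELERY_BROKER_PASSWORD value in your lms.auth.json / cms.auth.json):

# Recreate the celery user rabbitmq is missing (password is a placeholder)
sudo rabbitmqctl add_user celery YOUR_BROKER_PASSWORD
sudo rabbitmqctl set_permissions -p / celery ".*" ".*" ".*"

# Restart the edxapp workers so celery reconnects
sudo /edx/bin/supervisorctl restart all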

\o/


Hello, @Scarback @sbernesto @sambapete, I have the same error as you. Did you make any progress with it? I can't duplicate my course, and it shows the Studio saving error. When I check the CMS error log, it doesn't show anything from today; the last entries are from May 10. This is my CMS error log. Please help me.