Tutor: Upgrade to Nutmeg - Discovery plugin init finishes with error

Initialization of Discovery on tutor local quickstart finishes with an error:

Running init task: discovery/tasks/lms/init

[snip]

Creating index 'learner_pathway_20220611_122350'
2022-06-11 12:24:00,090 WARNING 23 [elasticsearch] /openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/base.py:293 - PUT http://elasticsearch:9200/learner_pathway_20220611_122350 [status:N/A request:10.011s]
Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/usr/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/http_urllib3.py", line 251, in perform_request
    response = self.pool.urlopen(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 525, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='elasticsearch', port=9200): Read timed out. (read timeout=10)
2022-06-11 12:24:00,090 WARNING 23 [elasticsearch] /openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/base.py:293 - PUT http://elasticsearch:9200/learner_pathway_20220611_122350 [status:N/A request:10.011s]
Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/usr/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/http_urllib3.py", line 251, in perform_request
    response = self.pool.urlopen(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 525, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='elasticsearch', port=9200): Read timed out. (read timeout=10)
Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse
    response.begin()
  File "/usr/lib/python3.8/http/client.py", line 316, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.8/http/client.py", line 277, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/http_urllib3.py", line 251, in perform_request
    response = self.pool.urlopen(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/util/retry.py", line 525, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/openedx/venv/lib/python3.8/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='elasticsearch', port=9200): Read timed out. (read timeout=10)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./manage.py", line 15, in <module>
    execute_from_command_line(sys.argv)
  File "/openedx/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/openedx/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 413, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/openedx/venv/lib/python3.8/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/openedx/venv/lib/python3.8/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/openedx/discovery/course_discovery/apps/edx_elasticsearch_dsl_extensions/management/commands/update_index.py", line 87, in handle
    self._update(models, options)
  File "/openedx/discovery/course_discovery/apps/edx_elasticsearch_dsl_extensions/management/commands/update_index.py", line 102, in _update
    alias, new_index_name = self.prepare_backend_index(index)
  File "/openedx/discovery/course_discovery/apps/edx_elasticsearch_dsl_extensions/management/commands/update_index.py", line 231, in prepare_backend_index
    registered_index.create(using=backend)
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch_dsl/index.py", line 279, in create
    return self._get_connection(using).indices.create(
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/client/utils.py", line 168, in _wrapped
    return func(*args, params=params, headers=headers, **kwargs)
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/client/indices.py", line 123, in create
    return self.transport.perform_request(
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/transport.py", line 415, in perform_request
    raise e
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/transport.py", line 381, in perform_request
    status, headers_response, data = connection.perform_request(
  File "/openedx/venv/lib/python3.8/site-packages/elasticsearch/connection/http_urllib3.py", line 265, in perform_request
    raise ConnectionTimeout("TIMEOUT", str(e), e)
elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='elasticsearch', port=9200): Read timed out. (read timeout=10))
Error: Command failed with status 1: docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local -f /srv/tutor/env/local/docker-compose.jobs.yml -f /srv/tutor/env/local/docker-compose.jobs.tmp.yml run --rm discovery-job sh -e -c make migrate

# Development partners
./manage.py create_or_update_partner  \
  --site-id 1 \
  --site-domain discovery.domain-redacted.tld:8381 \
  --code dev --name "Open edX - development" \
  --lms-url="http://lms:8000" \
  --studio-url="http://cms:8000" \
  --courses-api-url "http://domain-redacted.tld:8000/api/courses/v1/" \
  --organizations-api-url "http://domain-redacted.tld:8000/api/organizations/v1/"

# Production partner
./manage.py create_or_update_partner  \
  --site-id 2 \
  --site-domain discovery.domain-redacted.tld \
  --code openedx --name "Open edX" \
  --lms-url="http://lms:8000" \
  --studio-url="http://cms:8000" \
  --courses-api-url "https://domain-redacted.tld/api/courses/v1/" \
  --organizations-api-url "https://domain-redacted.tld/api/organizations/v1/"

./manage.py refresh_course_metadata --partner_code=openedx
./manage.py update_index --disable-change-limit

Update - The problem went away after doing the following:

  1. tutor plugins disable ecommerce discovery
  2. tutor local quickstart
  3. tutor plugins enable ecommerce discovery
  4. tutor local init
  5. tutor local start -d

Seems “tutor local init” did the trick somehow…


Update 2: Running “tutor local quickstart” again throws the same error mentioned in my first post here, so there’s definitely something wrong.

Update 3: Now there are also problems with the forum search index… it looks like something is very wrong with the elasticsearch module. This happened after I disabled both the ecommerce and discovery plugins and ran “tutor local quickstart” again:

Running init task: forum/tasks/forum/init
docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local -f /srv/tutor/env/local/docker-compose.jobs.yml -f /srv/tutor/env/local/docker-compose.jobs.tmp.yml run --rm forum-job sh -e -c bundle exec rake search:initialize
bundle exec rake search:rebuild_indices
Starting tutor_local_mongodb-permissions_1       ... done
Starting tutor_local_elasticsearch-permissions_1 ... done
Starting tutor_local_mongodb_1                   ... done
Starting tutor_local_elasticsearch_1             ... done
Waiting for mongodb/elasticsearch...
2022/06/12 09:14:56 Waiting for: tcp://mongodb:27017
2022/06/12 09:14:56 Waiting for: http://elasticsearch:9200
2022/06/12 09:14:56 Connected to tcp://mongodb:27017
2022/06/12 09:14:56 Received 200 from http://elasticsearch:9200
/app/cs_comments_service/models/constants.rb:2: warning: already initialized constant COURSE_ID
/app/cs_comments_service/lib/tasks/db.rake:28: warning: previous definition of COURSE_ID was here
/app/cs_comments_service/lib/tasks/deep_search.rake:7: warning: already initialized constant ROOT
/app/cs_comments_service/lib/tasks/kpis.rake:7: warning: previous definition of ROOT was here
/app/cs_comments_service/lib/tasks/flags.rake:6: warning: already initialized constant ROOT
/app/cs_comments_service/lib/tasks/deep_search.rake:7: warning: previous definition of ROOT was here
W, [2022-06-12T09:14:59.103484 #18]  WARN -- : Overwriting existing field _id in class User.
W, [2022-06-12T09:14:59.139637 #18]  WARN -- : MONGODB | Unsupported client option 'max_retries'. It will be ignored.
W, [2022-06-12T09:14:59.139701 #18]  WARN -- : MONGODB | Unsupported client option 'retry_interval'. It will be ignored.
W, [2022-06-12T09:14:59.139745 #18]  WARN -- : MONGODB | Unsupported client option 'timeout'. It will be ignored.
I, [2022-06-12T09:14:59.163384 #18]  INFO -- : Skipping initialization. Indices already exist. If 'rake search:validate_indices' indicates a problem with the mappings, you could either use 'rake search:rebuild_indices' to reload from the db or 'rake search:initialize[true]' to force initialization with an empty index.
/app/cs_comments_service/models/constants.rb:2: warning: already initialized constant COURSE_ID
/app/cs_comments_service/lib/tasks/db.rake:28: warning: previous definition of COURSE_ID was here
/app/cs_comments_service/lib/tasks/deep_search.rake:7: warning: already initialized constant ROOT
/app/cs_comments_service/lib/tasks/kpis.rake:7: warning: previous definition of ROOT was here
/app/cs_comments_service/lib/tasks/flags.rake:6: warning: already initialized constant ROOT
/app/cs_comments_service/lib/tasks/deep_search.rake:7: warning: previous definition of ROOT was here
W, [2022-06-12T09:15:01.800033 #26]  WARN -- : Overwriting existing field _id in class User.
W, [2022-06-12T09:15:01.831644 #26]  WARN -- : MONGODB | Unsupported client option 'max_retries'. It will be ignored.
W, [2022-06-12T09:15:01.831707 #26]  WARN -- : MONGODB | Unsupported client option 'retry_interval'. It will be ignored.
W, [2022-06-12T09:15:01.831748 #26]  WARN -- : MONGODB | Unsupported client option 'timeout'. It will be ignored.
warning: 299 Elasticsearch-7.10.1-1c34507e66d7db1211f66f3513706fdf548736aa "Parameter [boost] on field [title] is deprecated and will be removed in 8.0"
I, [2022-06-12T09:16:06.990257 #26]  INFO -- : New indices ["comments_20220612091501838", "comment_threads_20220612091501838"] are created.
rake aborted!
Faraday::TimeoutError: Net::ReadTimeout
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:158:in `block in request_via_request_method'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:152:in `request_via_request_method'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:137:in `request_with_wrapped_block'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:128:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:70:in `block in call'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter.rb:60:in `connection'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:68:in `call'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/rack_builder.rb:153:in `build_response'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/connection.rb:492:in `run_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/http/faraday.rb:40:in `block in perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/base.rb:287:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/http/faraday.rb:37:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/client.rb:176:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-api-7.8.0/lib/elasticsearch/api/actions/bulk.rb:70:in `bulk'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/importing.rb:170:in `block in import'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:93:in `block in __find_in_batches'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:705:in `yield_document'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:134:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/collection/view/iterable.rb:46:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/collection/view/iterable.rb:45:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/query_cache.rb:227:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:133:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual.rb:20:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:92:in `each_slice'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:92:in `__find_in_batches'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/importing.rb:161:in `import'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model.rb:115:in `import'
/app/cs_comments_service/lib/task_helpers.rb:34:in `block in rebuild_indices'
/app/cs_comments_service/lib/task_helpers.rb:31:in `each'
/app/cs_comments_service/lib/task_helpers.rb:31:in `rebuild_indices'
/app/cs_comments_service/lib/tasks/search.rake:17:in `block (2 levels) in <top (required)>'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/rake-12.3.3/exe/rake:27:in `<top (required)>'
/app/.gem/ruby/2.5.0/bin/bundle:23:in `load'
/app/.gem/ruby/2.5.0/bin/bundle:23:in `<main>'

Caused by:
Net::ReadTimeout: Net::ReadTimeout
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:158:in `block in request_via_request_method'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:152:in `request_via_request_method'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:137:in `request_with_wrapped_block'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:128:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:70:in `block in call'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter.rb:60:in `connection'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/adapter/net_http.rb:68:in `call'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/rack_builder.rb:153:in `build_response'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/faraday-1.0.1/lib/faraday/connection.rb:492:in `run_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/http/faraday.rb:40:in `block in perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/base.rb:287:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/transport/http/faraday.rb:37:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-transport-7.8.0/lib/elasticsearch/transport/client.rb:176:in `perform_request'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-api-7.8.0/lib/elasticsearch/api/actions/bulk.rb:70:in `bulk'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/importing.rb:170:in `block in import'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:93:in `block in __find_in_batches'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:705:in `yield_document'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:134:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/collection/view/iterable.rb:46:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `block in each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/cursor.rb:121:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongo-2.5.3/lib/mongo/collection/view/iterable.rb:45:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/query_cache.rb:227:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual/mongo.rb:133:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/mongoid-7.0.5/lib/mongoid/contextual.rb:20:in `each'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:92:in `each_slice'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/adapters/mongoid.rb:92:in `__find_in_batches'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model/importing.rb:161:in `import'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/elasticsearch-model-7.1.0/lib/elasticsearch/model.rb:115:in `import'
/app/cs_comments_service/lib/task_helpers.rb:34:in `block in rebuild_indices'
/app/cs_comments_service/lib/task_helpers.rb:31:in `each'
/app/cs_comments_service/lib/task_helpers.rb:31:in `rebuild_indices'
/app/cs_comments_service/lib/tasks/search.rake:17:in `block (2 levels) in <top (required)>'
/app/cs_comments_service/vendor/bundle/ruby/2.5.0/gems/rake-12.3.3/exe/rake:27:in `<top (required)>'
/app/.gem/ruby/2.5.0/bin/bundle:23:in `load'
/app/.gem/ruby/2.5.0/bin/bundle:23:in `<main>'
Tasks: TOP => search:rebuild_indices
(See full trace by running task with --trace)
Error: Command failed with status 1: docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local -f /srv/tutor/env/local/docker-compose.jobs.yml -f /srv/tutor/env/local/docker-compose.jobs.tmp.yml run --rm forum-job sh -e -c bundle exec rake search:initialize
bundle exec rake search:rebuild_indices

Note: I have an elasticsearch instance for another service, and it always gives a timeout after a reboot; I did whatever I could find on the internet to try to remedy this, but after a reboot I still always need to start that elasticsearch instance manually and then restart the dependent service!
I think something similar is happening here: elasticsearch reports that it is ready without actually being so, so the other modules calling the elasticsearch API URL get a timeout and finish with an error.
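If that is really what is happening, one workaround worth trying (just a sketch; judging by the init logs, the jobs only wait for a 200 from http://elasticsearch:9200, not for cluster health) is to wait on the cluster health endpoint before kicking off the init tasks:

# Wait until Elasticsearch reports at least "yellow" cluster health; the plain
# HTTP check can return 200 while the cluster is still recovering.
tutor local exec elasticsearch \
  curl -s "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=120s&pretty"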

I was having the same issue when I wanted to run Publisher.

I haven’t had time to fix it, but I strongly believe the issue is related to the connections between containers.

For example, the refresh_course_metadata command is fired from the discovery container, right?

It needs to connect to various other containers, but internally it can’t do that. I was planning to change the hosts, or to set up the connections according to the Docker containers.

Worth a shot.
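A quick way to test that theory (assuming the discovery plugin’s service is simply named “discovery” and that curl is available in the image):

# Open a shell in the discovery container, then try the docker-compose
# hostnames that the settings point at.
tutor local exec discovery bash
# inside the container:
curl -sS http://elasticsearch:9200
curl -sS http://lms:8000/heartbeat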

Hey Albert,
Can you please check out the logs from the elasticsearch container? Maybe they contain a clue as to why the connection cannot be established. For instance, it may be that there isn’t enough free disk space to write new data.

tutor local logs -f elasticsearch

Hi Regis,

The disk where the Tutor data is stored has 1.25 TB free, so that can’t be the problem. Here are the exceptions from Saturday that I see, as I can’t attach the whole log here. But I don’t think these are the main cause; the main problem, in my opinion, is that other containers cannot connect to the elasticsearch container.

elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:52,927Z", "level": "INFO", "component": "o.e.d.DiscoveryModule", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "using discovery type [single-node] and seed hosts providers [settings]" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:53,329Z", "level": "WARN", "component": "o.e.g.DanglingIndicesState", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:53,737Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "initialized" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:53,737Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "starting ..." }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:53,928Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:56,968Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "cluster UUID [LA9x0JsnRAuRqZiJM9byRA]" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:59,262Z", "level": "INFO", "component": "o.e.c.c.JoinHelper", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "failed to join {ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} with JoinRequest{sourceNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, minimumTerm=272, optionalJoin=Optional[Join{term=273, lastAcceptedTerm=272, lastAcceptedVersion=32945, sourceNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, targetNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}]}",
elasticsearch_1              | "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [ef963cc5e897][172.18.0.2:9300][internal:cluster/coordination/join]",
elasticsearch_1              | "Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 273 does not match current term 274",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.Optional.ifPresent(Optional.java:176) ~[?:?]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:377) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:364) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$1(JoinHelper.java:140) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) [x-pack-security-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) [x-pack-security-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:809) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
elasticsearch_1              | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:42:59,288Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "elected-as-master ([1] nodes joined)[{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 274, version: 32946, delta: master node changed {previous [], current [{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:43:00,447Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "master node changed {previous [], current [{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{7STNxqwnQk-gM0gF5Yeuzw}{172.18.0.2}{172.18.0.2:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 274, version: 32946, reason: Publication{term=274, version=32946}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:43:00,474Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:43:00,474Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "started", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:43:01,460Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "license [ad36d1c0-aa9a-47a7-8acf-994e4572defd] mode [basic] - valid", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-10T12:43:01,462Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }

Another:

elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:20,522Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "starting ..." }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:20,724Z", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "publish_address {172.18.0.5:9300}, bound_addresses {0.0.0.0:9300}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:24,482Z", "level": "INFO", "component": "o.e.c.c.Coordinator", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "cluster UUID [LA9x0JsnRAuRqZiJM9byRA]" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:25,701Z", "level": "INFO", "component": "o.e.c.c.JoinHelper", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "failed to join {ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} with JoinRequest{sourceNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, minimumTerm=278, optionalJoin=Optional[Join{term=279, lastAcceptedTerm=278, lastAcceptedVersion=34235, sourceNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}, targetNode={ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}]}",
elasticsearch_1              | "stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [ef963cc5e897][172.18.0.5:9300][internal:cluster/coordination/join]",
elasticsearch_1              | "Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: incoming term 279 does not match current term 280",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.CoordinationState.handleJoin(CoordinationState.java:225) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.handleJoin(Coordinator.java:1013) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.Optional.ifPresent(Optional.java:176) ~[?:?]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.processJoinRequest(Coordinator.java:532) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.lambda$handleJoinRequest$7(Coordinator.java:496) ~[elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.action.ActionListener$1.onResponse(ActionListener.java:63) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:377) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:364) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.Coordinator.handleJoinRequest(Coordinator.java:483) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$1(JoinHelper.java:140) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler$1.doRun(SecurityServerTransportInterceptor.java:257) [x-pack-security-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$ProfileSecuredRequestHandler.messageReceived(SecurityServerTransportInterceptor.java:315) [x-pack-security-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.transport.TransportService$8.doRun(TransportService.java:809) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:737) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
elasticsearch_1              | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:25,725Z", "level": "INFO", "component": "o.e.c.s.MasterService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "elected-as-master ([1] nodes joined)[{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 280, version: 34236, delta: master node changed {previous [], current [{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:27,269Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "master node changed {previous [], current [{ef963cc5e897}{z7qASFBYRQihSwbx4u6wPQ}{fcJXpuLJT4e_vlHvDj0amg}{172.18.0.5}{172.18.0.5:9300}{cdhilmrstw}{ml.machine_memory=67364216832, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, term: 280, version: 34236, reason: Publication{term=280, version=34236}" }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:27,343Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "publish_address {172.18.0.5:9200}, bound_addresses {0.0.0.0:9200}", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:27,344Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "started", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:28,871Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "license [ad36d1c0-aa9a-47a7-8acf-994e4572defd] mode [basic] - valid", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-11T11:15:28,873Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "openedx", "node.name": "ef963cc5e897", "message": "Active license is now [BASIC]; Security is disabled", "cluster.uuid": "LA9x0JsnRAuRqZiJM9byRA", "node.id": "z7qASFBYRQihSwbx4u6wPQ"  }

Please let me know if you want me to run other tests and which logs you need; if necessary I’ll send them to you by PM if they are too big to attach here.

Regards, Albert

A small part of the forum logs, with a lot of “connection refused” messages and retries:

forum_1                      | `Rack::Timeout.timeout=`: class-level settings are deprecated. See README for examples on using the middleware initializer instead.
forum_1                      | I, [2022-06-12T10:13:17.744359 #16]  INFO -- : listening on addr=0.0.0.0:4567 fd=12
forum_1                      | I, [2022-06-12T10:13:17.744458 #16]  INFO -- : worker=0 spawning...
forum_1                      | I, [2022-06-12T10:13:17.745737 #16]  INFO -- : worker=1 spawning...
forum_1                      | I, [2022-06-12T10:13:17.746484 #24]  INFO -- : worker=0 ready
forum_1                      | I, [2022-06-12T10:13:17.746686 #16]  INFO -- : worker=2 spawning...
forum_1                      | I, [2022-06-12T10:13:17.747412 #27]  INFO -- : worker=1 ready
forum_1                      | I, [2022-06-12T10:13:17.747894 #16]  INFO -- : worker=3 spawning...
forum_1                      | I, [2022-06-12T10:13:17.749287 #30]  INFO -- : worker=2 ready
forum_1                      | I, [2022-06-12T10:13:17.749819 #16]  INFO -- : master process ready
forum_1                      | I, [2022-06-12T10:13:17.750401 #33]  INFO -- : worker=3 ready
forum_1                      | Waiting for mongodb/elasticsearch...
forum_1                      | 2022/06/12 10:53:55 Waiting for: tcp://mongodb:27017
forum_1                      | 2022/06/12 10:53:55 Waiting for: http://elasticsearch:9200
forum_1                      | 2022/06/12 10:53:55 Connected to tcp://mongodb:27017
forum_1                      | 2022/06/12 10:53:55 Problem with request: Get http://elasticsearch:9200: dial tcp 172.18.0.11:9200: connect: connection refused. Sleeping 5s
forum_1                      | 2022/06/12 10:54:00 Problem with request: Get http://elasticsearch:9200: dial tcp 172.18.0.11:9200: connect: connection refused. Sleeping 5s
forum_1                      | 2022/06/12 10:54:05 Problem with request: Get http://elasticsearch:9200: dial tcp 172.18.0.11:9200: connect: connection refused. Sleeping 5s
forum_1                      | 2022/06/12 10:54:10 Problem with request: Get http://elasticsearch:9200: dial tcp 172.18.0.11:9200: connect: connection refused. Sleeping 5s
forum_1                      | 2022/06/12 10:54:15 Received 200 from http://elasticsearch:9200
forum_1                      | W, [2022-06-12T10:54:18.135867 #17]  WARN -- : Overwriting existing field _id in class User.
forum_1                      | W, [2022-06-12T10:54:18.168038 #17]  WARN -- : MONGODB | Unsupported client option 'max_retries'. It will be ignored.
forum_1                      | W, [2022-06-12T10:54:18.168118 #17]  WARN -- : MONGODB | Unsupported client option 'retry_interval'. It will be ignored.
forum_1                      | W, [2022-06-12T10:54:18.168146 #17]  WARN -- : MONGODB | Unsupported client option 'timeout'. It will be ignored.
forum_1                      | I, [2022-06-12T10:54:18.194991 #17]  INFO -- : Passed: Index 'comments' exists with up-to-date mappings.
forum_1                      | I, [2022-06-12T10:54:18.195141 #17]  INFO -- : Passed: Index 'comment_threads' exists with up-to-date mappings.
forum_1                      | I, [2022-06-12T10:54:18.195959 #17]  INFO -- : Refreshing Gem list
forum_1                      | Loading config.ru.
forum_1                      | `Rack::Timeout.timeout=`: class-level settings are deprecated. See README for examples on using the middleware initializer instead.
forum_1                      | I, [2022-06-12T10:54:18.203233 #17]  INFO -- : listening on addr=0.0.0.0:4567 fd=12
forum_1                      | I, [2022-06-12T10:54:18.203355 #17]  INFO -- : worker=0 spawning...
forum_1                      | I, [2022-06-12T10:54:18.204982 #17]  INFO -- : worker=1 spawning...
forum_1                      | I, [2022-06-12T10:54:18.205805 #25]  INFO -- : worker=0 ready
forum_1                      | I, [2022-06-12T10:54:18.207696 #17]  INFO -- : worker=2 spawning...
forum_1                      | I, [2022-06-12T10:54:18.208767 #17]  INFO -- : worker=3 spawning...
forum_1                      | I, [2022-06-12T10:54:18.208824 #28]  INFO -- : worker=1 ready
forum_1                      | I, [2022-06-12T10:54:18.209970 #31]  INFO -- : worker=2 ready
forum_1                      | I, [2022-06-12T10:54:18.211417 #17]  INFO -- : master process ready
forum_1                      | I, [2022-06-12T10:54:18.212651 #34]  INFO -- : worker=3 ready

I don’t have any experience with Elasticsearch, but I do have a lot with Apache Solr. That runs on an embedded Jetty server (I think it has since been rebranded under Eclipse; I don’t remember the exact name), and I had to do some tweaking of the Jetty config, as the newer Jetty versions have some rather low limits on maximum memory usage, request timeout, maximum request size, etc. Maybe that’s the case here?
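In case Elasticsearch is hitting similar limits, a quick sanity check of the JVM heap the container actually runs with (just a sketch; Tutor also has a heap-size setting, ELASTICSEARCH_HEAP_SIZE if I remember the name correctly, which could be raised with “tutor config save --set”):

# Show the heap size the Elasticsearch node was started with.
tutor local exec elasticsearch \
  curl -s "http://localhost:9200/_nodes/jvm?pretty&filter_path=nodes.*.jvm.mem"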

The forum and discovery services cannot connect to Elasticsearch because the elasticsearch container is unavailable. It is unavailable because it is crashing with the “failed to join … incoming term does not match current term” error shown in your logs above.

Are you running a cluster of multiple elasticsearch instances?

You can view your cluster health by running:

tutor local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true

The only other ES instance I have is the one running outside Docker; I can stop that service and test again if needed. Here is the output of the health check:

$ tutor local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "openedx",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 387,
  "active_shards" : 387,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 387,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

My ES cluster is not looking very healthy, I think, so I will follow some instructions I found for cleaning up unassigned shards:

UPDATE - Did the following:

  1. Open a bash shell inside the running ES container:
    docker exec -it xxxxxxxxxx bash
  2. Delete all unassigned shards, as per those instructions (note that this pipeline actually deletes every index that has an unassigned shard):
    curl -XGET http://localhost:9200/_cat/shards | grep UNASSIGNED | awk {'print $1'} | xargs -i curl -XDELETE "http://localhost:9200/{}"
  3. Got a “green” health status from:
    curl http://localhost:9200/_cluster/health?pretty

After this, I ran tutor local quickstart to completion and without errors, but now I have a “yellow” health status again:

tutor local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "openedx",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 14,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 14,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
Trying to find out more:
tutor local exec elasticsearch curl -XGET localhost:9200/_cluster/allocation/explain?pretty
docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local exec elasticsearch curl -XGET localhost:9200/_cluster/allocation/explain?pretty
{
  "index" : "learner_pathway_20220613_122513",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2022-06-13T12:25:14.012Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "z7qASFBYRQihSwbx4u6wPQ",
      "node_name" : "ef963cc5e897",
      "transport_address" : "172.18.0.13:9300",
      "node_attributes" : {
        "ml.machine_memory" : "67364216832",
        "xpack.installed" : "true",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[learner_pathway_20220613_122513][0], node[z7qASFBYRQihSwbx4u6wPQ], [P], s[STARTED], a[id=N5wMKWXpQa-xnRwOe3RGmQ]]"
        }
      ]
    }
  ]
}
It has to do with the new Nutmeg learner_pathway index, but I have no idea how to solve this.
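(For reference: "primary" : false together with the same_shard decider means the unassigned copy is a replica shard, and the primary of learner_pathway_20220613_122513 is already started on this single node. The per-index replica configuration can be listed like this, for example:)

# List indices with their primary (pri) and replica (rep) shard counts; on a
# single node, every index with rep >= 1 stays yellow because the replica
# cannot be placed on the same node as its primary.
tutor local exec elasticsearch curl -s "http://localhost:9200/_cat/indices?v&h=index,health,pri,rep"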

Another update:

  1. Stopped tutor
  2. Deleted the /srv/tutor/data/elasticsearch folder
  3. Deleted the tutor_elasticsearch container
  4. Deleted the elasticsearch Docker image
  5. Ran tutor local quickstart

After:

 tutor local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
docker-compose -f /srv/tutor/env/local/docker-compose.yml -f /srv/tutor/env/local/docker-compose.prod.yml -f /srv/tutor/env/local/docker-compose.tmp.yml --project-name tutor_local exec elasticsearch curl http://localhost:9200/_cluster/health?pretty=true
{
  "cluster_name" : "openedx",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 14,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 14,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

So no “green” status, lots of UNASSIGNED shards …
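(The unassigned shards, and the reason they are unassigned, can also be listed directly; a sketch:)

# Show only the unassigned shards; an "r" in the prirep column marks replicas.
tutor local exec elasticsearch curl -s "http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED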

Exception:

elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:36,937Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "adding index template [logs] for index patterns [logs-*-*]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:37,859Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "adding index template [synthetics] for index patterns [synthetics-*-*]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:39,285Z", "level": "ERROR", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "failed to set monitoring pipeline [xpack_monitoring_7]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA" ,
elasticsearch_1              | "stacktrace": ["org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-pipeline-xpack_monitoring_7) within 30s",
elasticsearch_1              | "at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$0(MasterService.java:143) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.ArrayList.forEach(ArrayList.java:1511) [?:?]",
elasticsearch_1              | "at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1(MasterService.java:142) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
elasticsearch_1              | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:39,461Z", "level": "ERROR", "component": "o.e.x.m.e.l.LocalExporter", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "failed to set monitoring pipeline [xpack_monitoring_6]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA" ,
elasticsearch_1              | "stacktrace": ["org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-pipeline-xpack_monitoring_6) within 30s",
elasticsearch_1              | "at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$0(MasterService.java:143) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.ArrayList.forEach(ArrayList.java:1511) [?:?]",
elasticsearch_1              | "at org.elasticsearch.cluster.service.MasterService$Batcher.lambda$onTimeout$1(MasterService.java:142) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:678) [elasticsearch-7.10.1.jar:7.10.1]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
elasticsearch_1              | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
elasticsearch_1              | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:39,589Z", "level": "INFO", "component": "o.e.c.m.MetadataIndexTemplateService", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "adding index template [metrics] for index patterns [metrics-*-*]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:41,176Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "adding index lifecycle policy [ml-size-based-ilm-policy]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA"  }
elasticsearch_1              | {"type": "server", "timestamp": "2022-06-13T13:49:41,991Z", "level": "INFO", "component": "o.e.x.i.a.TransportPutLifecycleAction", "cluster.name": "openedx", "node.name": "7313fc4fd6e1", "message": "adding index lifecycle policy [metrics]", "cluster.uuid": "c2-eFnFMQX2lHLayTDcYrw", "node.id": "gphHj-UiSBepk0I35g5ktA"  }

Hi @insad,
Sorry about the delay… It’s perfectly normal for the Elasticsearch cluster to be in the “yellow” state; it just means that some replica shards cannot be allocated, and that’s expected because you are running a single instance of Elasticsearch, so there is no second node to hold the replicas.
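(If a “green” status is nevertheless wanted on a single-node cluster, the replica count of the existing indices can be dropped to zero; a minimal sketch, run from a shell inside the elasticsearch container as in the earlier step. Indices created later will still default to one replica.)

# Set the replica count of all existing indices to 0 so a single-node cluster
# reports "green" instead of "yellow".
curl -s -X PUT "http://localhost:9200/_all/_settings" \
  -H "Content-Type: application/json" \
  -d '{"index": {"number_of_replicas": 0}}'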

Hi Régis,

Thanks for this information! I’m not a regular Elasticsearch user, as you can see :slight_smile:
