Run Script on Docker Container Stop/Terminate – CMS/LMS

Background
I’ve been working from the article below to run logrotate and s3cmd to upload rotated logs to S3. For Step 4 – Sending Logs on Shutdown, it uses systemd to set up a service that handles the upload. From what I’ve been reading online, installing systemd inside a Docker container is not recommended, so I’m looking at alternative solutions such as Docker ONBUILD, HEALTHCHECK, and --init.

How To Use Logrotate and S3cmd to Archive Logs to Object Storage on Ubuntu 16.04 | DigitalOcean

Potential Solution (In-Progress)
Has anyone figured out how to run a bash script when a Docker container closes/terminates? I ran into an article that explains how to do that, but I was wondering whether anyone had alternative solutions. That article talks about using Docker ONBUILD, HEALTHCHECK, and --init for this.

I’d like to run a script when the CMS/LMS/worker containers close to perform a logrotate and back up the compressed tracking logs to S3.

I would completely de-correlate log collection and image building. I suggest running a script on the host. For instance:

import docker  # pip install docker


def main():
    client = docker.from_env()
    # Stream events from the Docker daemon; decode=True yields dicts
    for event in client.events(decode=True):
        if event.get("status") == "stop":
            attributes = event.get("Actor", {}).get("Attributes", {})
            # Only react to the "lms" service of this Tutor project
            if (
                attributes.get("com.docker.compose.project.working_dir")
                == "/home/regis/.local/share/tutor/env/local"
                and attributes.get("com.docker.compose.service") == "lms"
            ):
                print("lms stopped: ", event)


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        pass

You can even run this script in a container if you bind-mount /var/run/docker.sock.
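As an aside, the matching logic in the script can be pulled out into a small helper function, which makes it easy to unit-test against a decoded event dict without a running daemon. This is just a sketch; the project path and service name are the same values used in the script above, so adjust them for your own install:

```python
# Values from the script above; adjust for your own Tutor install.
TUTOR_PROJECT_DIR = "/home/regis/.local/share/tutor/env/local"
SERVICE = "lms"


def is_service_stop(event: dict) -> bool:
    """Return True when a decoded Docker event is a 'stop' of the target compose service."""
    if event.get("status") != "stop":
        return False
    attributes = event.get("Actor", {}).get("Attributes", {})
    return (
        attributes.get("com.docker.compose.project.working_dir") == TUTOR_PROJECT_DIR
        and attributes.get("com.docker.compose.service") == SERVICE
    )
```

You can also narrow the stream on the daemon side with something like `client.events(decode=True, filters={"event": "stop"})`, so the Python loop only ever sees stop events.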

@regis
I appreciate you mentioning decoupling log collection from image building. I was trying to make each CMS/LMS container responsible for rotating its own logs and pushing them back to a central storage location like S3. I created separate directories based on container hostname and IP to avoid filename clashes, even though a timestamp is added during log rotation.

I was considering this approach because of the ephemeral nature of the containers in local or k8s installs.

If I were to continue down the route of log collection at the container level, the article below helps with that. It uses the Docker ENTRYPOINT instruction to listen for signals like SIGTERM.

Your recommendation of collecting the logs at the host level helps keep the events stored in tracking.log in historical order.

I noticed that in a k8s setup, when I load a video, the play and pause events can end up in separate LMS container logs if more than one LMS pod is running. Would your script above work for k8s too, or just for a local tutor install?

The script would work as-is only for local installations, but I’m sure you could adopt a similar approach with Kubernetes, perhaps by setting up a dedicated operator.

@regis
I was trying to find one solution for both local and k8s tutor installations by doing it at the container level, but that may be too much to handle at the moment and, as you suggest above, might not be ideal due to the tight coupling of logrotate and the image build.

For now, I will set aside k8s installations, focus on the local install, and use your recommended approach of performing logrotate at the host level rather than in the container. We’re still running tutor local at the moment, but we’re looking to add a load balancer so we can extend the LMS containers across multiple EC2 instances on AWS if necessary.

@regis Why do we need to check whether com.docker.compose.project.working_dir matches that path? Does it matter?

@regis I noticed that the volume mount of /var/run/docker.sock works with your script for the root account. However, I’m looking to do this with a non-root account inside the container, and I came across this article, which seems helpful:
Docker in Docker access for non-root users - Jon Friesen

I realize that you mentioned running this script in the host environment, but what are your suggestions for doing this from within a new container?

I did get the non-root user to work. Here are some Docker events displayed from the new container.

I have no idea… but I’ll be interested to hear about your findings!