Currently I'm hosting a bunch of static HTML files (with CSS, JS, images, ...) inside a plain nginx container. Mostly nginx:alpine, but sometimes also openresty/openresty:alpine and other similar images, to be precise. The odd thing is, this container is either behind an nginx reverse proxy, which could serve the files directly, or on AWS ECS with a custom subdomain, where the files could simply be dropped into S3 and served through the CloudFront CDN. Both options seem much simpler, so why am I hosting the files in an nginx Docker container instead?
Originally I wanted to write down several reasons, but in the end I realized it can all be summarized under one big topic:
Leverage Existing Infrastructure
I don't have just nginx containers with static files. Mostly I'm running applications with databases and other dependencies. Since I'm already hosting those in Docker containers, I had to create scripts and tools to deploy and update them easily, and even to automate such tasks. By hosting my static files the same way I host other services, I can leverage those already existing tools without creating anything new.
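To make that concrete: the whole static-file "service" can be a two-line Dockerfile that any existing container build and deploy scripts handle exactly like an application image (the `./public` directory name here is an assumption, not a path from my actual setup):

```dockerfile
# Copy the built static site into the image nginx serves from by default.
FROM nginx:alpine
COPY ./public /usr/share/nginx/html
```

From the tooling's point of view there is nothing special about this image; it builds, ships and deploys like everything else.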
The next point is versioning of those static files, but again it comes down to leveraging existing infrastructure. Just as with hosting and deployment, I'm working on a team and we want every commit on master to be potentially deployable to production. That doesn't mean we deploy every one, but we have the means to do so if we think a commit is good enough. To achieve this, we package every master build into a container image and automatically deploy it to staging. If staging seems stable enough, we push the current version to production. If we were mistaken, we roll back to the previous version. With containers it's super simple to version your static files. It works the same way as with other services: you just build the image and tag it with the corresponding commit on master. How would I do it without containers? I'd still need to keep those versions around somewhere, in an additional service or store that isn't used for anything else.
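A minimal sketch of that tagging scheme, assuming a hypothetical registry name (the docker commands are echoed here so the sketch runs anywhere; drop the `echo`s in a real pipeline script):

```shell
#!/bin/sh
# Version the static-site image by its master commit hash.
# Falls back to "dev" outside a git checkout so the sketch stays runnable.
COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
IMAGE="registry.example.com/static-site:$COMMIT"

# Build and publish one immutable image per commit.
echo docker build -t "$IMAGE" .
echo docker push "$IMAGE"
```

Rolling back is then just deploying an older tag; no separate artifact store is needed.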
The last point is a little bit different. Although those containers sit behind an nginx reverse proxy, I can still customize each of them specifically for the static files it is hosting. I can fine-tune the cache headers it returns, and in the case of openresty I can even add custom Lua scripts that execute on the server. All this is very lightweight and efficient compared to using Node.js, Python or other higher-level tools to host my files and provide a little bit of added functionality. At the same time, I'm not messing up the configuration of my main reverse proxy, which is used by pretty much all of my services, and I don't need to worry about installing the correct libraries and bindings for Lua.
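For instance, per-container cache tuning can be a few lines in that container's own nginx config, without touching the shared proxy (the durations below are illustrative, not the values from my actual setup):

```nginx
# Cache static assets aggressively, HTML conservatively.
location ~* \.(css|js|png|jpg|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}

location / {
    expires 5m;
    add_header Cache-Control "public, must-revalidate";
}
```

If one site needs different rules, only that one container's config changes.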
You see, Docker can make your life easier in so many ways. Sure, reading back what I wrote, it sounds like Docker is my hammer and every problem looks like a nail. That's probably true for my not very performance-critical use cases. If I needed to serve files to thousands of users every minute, I might build a more special-purpose setup. Until then, this is totally fine, quick and efficient.