Docker Hub Registry is down

(status.docker.com)

116 points | by 0vermorrow 1654 days ago

11 comments

  • alpb 1654 days ago
    Regular reminder that Docker Hub is not really an enterprise registry with an SLA. You should use pretty much anything else for serious applications that rely on pulling images in the hot path (such as auto-scaling up).
    • LiamPa 1654 days ago
      I wish I'd known this 2 years ago; going to have to migrate to something that isn't down every month.
    • FpUser 1654 days ago
      Being paranoid helps. My pipelines never pull images from the hub, I always store those locally.
      • gchamonlive 1654 days ago
        Do you have some kind of maintenance routine that pulls image updates? You can end up with ancient Docker Hub images, because without --pull, docker build won't pull base image updates by default.
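
        For example, a minimal sketch (the image names are placeholders):

          # force a fresh pull of the base image at build time
          docker build --pull -t myapp:latest .

          # or refresh pinned base images on a schedule (e.g. from cron)
          docker pull ubuntu:20.04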
        • FpUser 1654 days ago
          That's right. I update on an as-needed basis only. Don't fix what's working ;)
  • danielecook 1654 days ago
    It seems like Docker Hub requires a lot of bandwidth... lots of people pulling gigabytes' worth of images every day. Does anyone know anything about the economics behind this? How can they offer it for free?
    • meritt 1654 days ago
      > Does anyone know anything behind the economics behind this?

      You use VC money and run at a loss while focusing on marketing and tech evangelism, getting more and more startups and hopefully established companies using your software. As the cracks begin to show, those growing organizations have too much tied to your system; they can't afford outages and need to scale. So they pay you for the Enterprise version of your software, where you actually fix all of the flaws present in the community version.

      Look at MongoDB if you need a good case study. It was incredibly hyped from about 2009-2015, people would defend it in heated online arguments, and today it's rarely considered for greenfield projects. But they're making about $100M/qtr selling subscriptions to Enterprise & Atlas, servicing the technical debt established during that hype cycle.

    • ptomato 1654 days ago
      Likely the traditional model of taking a large amount of VC money, putting it in a pile, and then setting it on fire and waiting until they stop adding more, at which point the company ends.
    • rtempaccount1 1654 days ago
      Yeah, there's a variety of "free at point of use" services driving the Internet, and sooner or later it seems likely there will need to be a change in how they're funded.

      It's not just Docker Hub; there are services like the various programming language package repos (npm, rubygems, etc.) and the Linux distro package repos.

      I would have put GitHub in that category, but now it's owned by MS; presumably they don't have that kind of funding problem...

      • toomuchtodo 1654 days ago
        GitHub used to be in their own colo on their own bare metal. I'm not sure if they've been pushed into Azure as part of the MS acquisition, but either way GitHub isn't paying retail cloud prices ($$$) for their bandwidth, and it's likely sustainable.
  • bluedino 1654 days ago
    What's the best way around this kind of outage?
    • LethargicStud 1654 days ago
      1) You can run your own pull-through cache[0] (rough sketch after the links below)

      2) You can use a different registry

      3) Run something like kraken[1] so machines can share already-downloaded images with each other

      4) If you need an emergency response, you can docker save[2] an image on a box that has it cached and manually distribute it/load it into other boxes

      0: https://docs.docker.com/registry/recipes/mirror/

      1: https://github.com/uber/kraken

      2: https://docs.docker.com/engine/reference/commandline/save/
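
      For (1) and (4), a minimal sketch, assuming the stock registry:2 image and default daemon config paths (image names are placeholders):

        # (1) run a local pull-through mirror of Docker Hub
        docker run -d -p 5000:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
          --name mirror registry:2
        # then point the daemon at it via /etc/docker/daemon.json:
        #   { "registry-mirrors": ["http://localhost:5000"] }

        # (4) emergency path: export from a box that already has the image...
        docker save myapp:1.2.3 | gzip > myapp.tar.gz
        # ...copy it across, then load it on the target box
        gunzip -c myapp.tar.gz | docker load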

    • toomuchtodo 1654 days ago
      Have the ability to build containers on demand from source as well as host your own repo.
      • ownagefool 1654 days ago
        Realistically, it's probably a cache/mirror.

        If you can't build and deploy a new version of your app, you can probably live with it and grab a cup of coffee.

        If your server fails over and the new server can't pull the current image, your app is potentially down, and that's a lot worse.

        The math you do here is the cost of the wasted time versus the cost of running your own registry with better uptime.

        • toomuchtodo 1654 days ago
          If you're an enterprise, you're likely already running Nexus, Artifactory, or some other artifact manager. The additional overhead to store containers in these systems is so close to zero, we can round down for our purposes. It's all blobs and SHA hashes anyway. Storage is cheap.

          If you're not an enterprise, fall back to your cloud provider's container registry (which is likely backed by highly durable and reliable storage; AWS ECR on S3, for example). You're likely already using Jenkins or some other runner to build your own containers (and if you're not set up to do so in your CI pipeline, you should be); it's trivial to support caching to your in-house cloud provider registry as part of the container build/retrieval/deployment process. Based on my experience, this functionality is a handful of properly written Jenkins jobs.
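
          A rough sketch of that caching step, assuming AWS CLI v2 with ECR; the account ID, region, and image names are placeholders:

            # log the Docker client into your ECR registry
            aws ecr get-login-password --region us-east-1 \
              | docker login --username AWS --password-stdin \
                123456789012.dkr.ecr.us-east-1.amazonaws.com

            # build once, then retag and push so your own registry holds the canonical copy
            # (assumes the myapp repository already exists in ECR)
            docker build -t myapp:1.2.3 .
            docker tag myapp:1.2.3 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.2.3
            docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.2.3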

          I'm not trying to avoid going to get a coffee while you wait for external provider interfaces to settle when your systems are nominal. I'm trying to avoid those moments where you absolutely need to deploy an existing or updated container and can't (which you mention in your comment). Critical infra requires redundancy, and container storage is critical infra when it's part of a deployment process. In most environments, one cannot say, "Sorry engineering, that hotfix can't go out until the registry is back up. See you in 30," and exit stage right to the coffee shop.

          EDIT: Also consider Docker's finances are precarious and it's possible they're going to suddenly go dark. Plan accordingly.

          Disclaimer: Previously infra/devops engineer.

          • ownagefool 1653 days ago
            I don't disagree.

            The opinion I gave pretty much would help you define what is critical.

            Every single smart enterprise should be able to build from source, push to a registry, and deploy via a pipeline defined in source control. This likely gives you most of the DR you need, and it also helps you work as part of a team.

            However, when you take these decisions you should be able to quantify why. You can always spend more money on more 9s and tighter SLAs, but at some point you need to draw a line in the sand and call it good enough.

            For a small startup, that line might fall before running their own registry; for a large e-commerce website, it's probably after. Humorously, a tech-first startup would likely do it, whilst a sales-first established business probably won't, because neither of them is really quantifying their efforts.

            Disclaimer: Also do infra and devops. These are concepts from the SRE book, though I don't use the SRE terms because I can't remember them off the top of my head. Interestingly, the book describes how Google injects errors into some products so that people don't come to rely on unsustainably high reliability.

    • rodgerd 1654 days ago
      Maintaining your own registry as a cache.
    • dweomer 1654 days ago
      pull-through mirror with tons of disk
  • nelsonmarcos 1654 days ago
    If only we had listened to our sysadmin...
  • beilabs 1654 days ago
    So my CI environment requires access to other docker images, all hosted on Docker Hub.

    Seems like the tech giants should load balance these images for the good of the Internet to provide some decent redundancy and for my sanity at 11.30pm.

    • treve 1654 days ago
      Whenever one of our essential 3rd-party services goes down, I can only shrug and hope they figure it out quickly. They provide a good service, and nobody has 100% uptime. It's still better than solving it internally, which is even more likely to have downtime.

      Partial failure is just a fact of life. If this is a major issue for your process, it might be better to find ways to alter your process so this isn't an issue. Alternatively, mirror locally.

      • beilabs 1654 days ago
        You're absolutely right.

        Being honest, no build is worth losing sleep over. We are piggybacking on their service and bandwidth. For us to start building the infrastructure to cache their images doesn't make financial sense; we deploy daily, and their uptime always allows for that.

    • thresh 1654 days ago
      No, you should rewrite your CI not to depend on external stuff, if you want sane evenings.
    • geggam 1654 days ago
      If you are getting paid to do CI, you are simply doing it wrong.

      Rule #1: Host your own stuff; never rely on others.

      Rule #2: Automate everything

  • driverdan 1654 days ago
    This has broken fresh containerized deploys on Heroku, which is surprising since they run their own registry. They should be proxying Hub; it'd save them a ton of bandwidth.
  • popotamonga 1654 days ago
    What a coincidence: the same minute, all my Lightsail instances became unresponsive and then got stuck on "stopping" for 20 minutes.

    Launched a new one... docker pull, bam, error. Customer unsatisfied.

  • dpix 1654 days ago
    Looks like it's back up now, whew!
  • bluedino 1654 days ago
    It went from orange to red.

    Incident Status: Full Service Disruption

  • tekno45 1654 days ago
    uuugh. was just about to do some testing
  • tryphan 1654 days ago
    More confidence for the folks at Docker Inc.