14 comments

  • giobox 1984 days ago
    Interesting. Those that decide to self-host this stuff, do you ever worry about the maintenance burden? While obviously this doesn’t help privacy concerns and I’m sure certain other things, for those of us with less serious OpSec needs this looks like a _lot_ of stack to update, patch against vulnerabilities etc.

    For me personally, I’m happy to cede some small degree of control and whatever else to a good quality third party provider who has a team that is paid to actively maintain and secure the product, leaving me to spend more of my limited free time with my family rather than debug why the letsencrypt certificate for my self hosted mail server/VPN/cloud store/whatever hasn’t auto-renewed correctly etc.

    I’m sure there will be plenty of “I’ve run my own mailserver since the ’90s and nothing has ever broken” wizards here, but everything works until such time as it doesn’t. I’d be curious to hear from people who have had trouble.

    • mStreamTeam 1984 days ago
      I'm a developer who's been taking a crack at the convenience problem of selfhosting. I've been hosting my own services with varying success for the last few years, and maintenance has never been an issue. Once the software is running, it's pretty trivial to update it.

      The biggest pain point is installation. Most selfhosted software has a ton of dependencies to install first, and after that there's usually some configuration to do before it will work. A complicated installation is enough to drive even tech-savvy users away.

      I've had some luck with my own software by targeting Windows users as well. Most people don't want to set up a Linux box just to selfhost a single piece of software.

      • orange222 1983 days ago
        Why not just go with a Docker image? No manual installation of individual dependencies necessary.
        • weberc2 1983 days ago
          Most interesting apps run as multiple containers (e.g., a database), and then you need to provision volumes for the application's data and configuration files. It's not clear to me that this is strictly simpler than a local installation.

          The real wins from Docker (for this use case) are:

          1. Docker is a better process supervisor than systemd and friends

          2. Simple, fast deployment (no managing Ansible scripts or rebuilding/rebooting a machine image)

          3. Built-in, standard logging

          • orange222 1983 days ago
            I think Helm (helm.sh) solves that problem. Helm is basically the package manager for Kubernetes. To install any app, as long as there is a Helm chart for it, you simply run `helm install myapp` and Helm installs the app on your Kubernetes cluster.
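            Concretely, the workflow looks something like this (chart and release names are illustrative, and this is Helm 2-era syntax; the stable chart repository has moved around since):

            ```sh
            $ helm repo add stable https://kubernetes-charts.storage.googleapis.com
            $ helm repo update
            $ helm install stable/nextcloud --name my-nextcloud
            $ helm status my-nextcloud
            ```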
            • weberc2 1983 days ago
              Right, but now you're running Kubernetes for a single server, which is the very definition of overkill. Installing Kubernetes isn't easy, at least not once you consider DNS, ingress/load balancing, logging, etc.
              • orange222 1983 days ago
                Scenario #1: Installing kubernetes, helm and then installing your app

                1. Spend maybe 2-3 full days installing Kubernetes and Helm.

                2. Spend maybe 3-4 hours installing your app through Helm, because you're new to installing things in Kubernetes.

                3. The next app that you want to install on your server is only 20 minutes away, now that you understand how kube and helm work.

                Scenario #2:

                1. Install the app directly on the server, hunt down dependencies and other weird things; the whole installation probably takes at least a day.

                2. The next app that you want to install will take the same amount of time again.

                I'd go with Scenario #1 as it is more scalable if I want to install more apps on my server.

                • tlrobinson 1983 days ago
                  Wait, what happened to just using a published Docker image? No need to hunt down dependencies.

                  I've found using docker-compose is a nice way to do basic orchestration for "self-hosted" type apps.
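                  A minimal sketch of what that can look like (image names, ports, and volume names here are illustrative):

                  ```yaml
                  # docker-compose.yml -- one app plus its database, with named
                  # volumes so data survives container upgrades.
                  version: "3"
                  services:
                    app:
                      image: nextcloud          # example application image
                      ports:
                        - "8080:80"
                      volumes:
                        - app-data:/var/www/html
                      depends_on:
                        - db
                    db:
                      image: postgres:11
                      environment:
                        POSTGRES_PASSWORD: changeme
                      volumes:
                        - db-data:/var/lib/postgresql/data
                  volumes:
                    app-data:
                    db-data:
                  ```

                  One `docker-compose up -d` then brings the whole stack up.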

                • berti 1983 days ago
                  > Install app directly on server, hunt down dependencies and other weird things, probably takes 1 day at least, to do the whole installation.

                  What distro are you running? Either I am very spoilt with Arch (+AUR) or this is way off the mark.

                  • snazz 1983 days ago
                    Really depends on the package. I’ve had programs that take about a day to install, because the source wasn’t really portable and I really needed to get it to build, and I’ve had programs that are just five commands and it runs. It’s more work if you get the source from upstream than if you get it from ports or Portage or the AUR.
              • the_duke 1983 days ago
                It's still not trivial, but with kubeadm [1] it is much, much easier than it used to be.

                [1] https://kubernetes.io/docs/setup/independent/create-cluster-...

                • weberc2 1983 days ago
                  My experience was with kubeadm. If that’s easy mode, I shudder to think of how things were before!
      • h1d 1984 days ago
        I suppose those "ton of dependencies" are usually handled via the package manager in a single line mentioned in the docs. I've run servers for 15 years, but if something asks me to install tons of stuff manually, I won't even look at it.

        Initial config surely varies from a wall-of-text config file to a simple web-based setup, but popular apps usually have decent docs or googleability to keep you out of the maze easily.

        • GordonS 1984 days ago
          Alas, in my long experience that's seldom the case.

          Invariably you end up having to add GPG keys for a bunch of weird and wonderful 3rd party package sites, then fight with your package manager to get it to use them, spend hours scouring the web for source code tarballs, spend an age fiddling with compiler flags to get things to build, and... ach! Eventually you're bound to wish you'd never started in the first place!

          • h1d 1984 days ago
            What kind of niche product are you using? I wouldn't add 'weird' 3rd-party package sites, only official repositories from upstream. You don't compile stuff unless you're on something like OpenBSD and the binary doesn't exist. That is a very weird experience if you're using popular apps.
            • GordonS 1983 days ago
              I have this kind of problem regularly, especially if I want to run versions of software newer than 3 years old.

              Also, compiling software is quite normal on Linux. Often it's just `make && sudo make install` and everything works, but sometimes you've got to go on a wild goose chase to fulfil dependencies, tinker with compiler flags, etc.

            • ccmcarey 1983 days ago
              You compile stuff if the binary doesn't exist, regardless of OS.

              It's very normal on Arch Linux at least to pull build files from the Arch User Repository and build the software locally.

      • aklemm 1984 days ago
        Maintenance and updates as the software changes is arguably a bigger inconvenience than the initial setup.
    • zimablue 1984 days ago
      I have what I think is almost a hack for security; if it doesn't make sense, maybe someone here can tell me why and I'll go rip down a lot of services.

      I use an AWS Lambda running Flask, behind a password, to expose a website where I can whitelist the requesting IP on any of my other servers. It sends me an email when an IP is added, and I periodically clear them all (I try to do this when leaving if on public wifi).

      It's very low-tech, but I think it's a very decent extra first layer of security: none of my servers are visible to anyone most of the time, but I can hit them whenever.

      • GordonS 1984 days ago
        I actually quite like this idea - I may steal it!
      • oarsinsync 1983 days ago
        Can you publish this solution somewhere? It sounds grand!
        • zimablue 1983 days ago
          Sorry, I'm too ashamed; I was learning basically all the technology involved. I can give a recipe:

          * Follow the first 4 pages of this: https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial... but swap out/skip the sqlite3 part for a single hardcoded user class (it's just me!)

          * Throw in boto3 routes that add/remove/clear IP rules and start/stop instances, with a list of checkboxable instances pulled from EC2

          * Deploy using Zappa: https://github.com/Miserlou/Zappa

          Some things that can trip you up:

          * For the IP there's a stupid mistake I made: when testing locally you can request.get(myip) to get which IP to use, but remotely you pull it off of request.remote_addr

          * Your Lambda will need a specialised IAM role to have rights to do all this; it's configurable in the Zappa config, and I just clicked through creating one in the AWS management console

          * This is in the Zappa docs but I didn't RTFM: I built everything using conda, which doesn't work, then with a virtualenv with the same name as the project, which also doesn't work.
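          The boto3 routes map onto the same EC2 operations the AWS CLI exposes; a hedged sketch of the core whitelist/clear step (the security group ID and IP are made-up examples):

          ```sh
          # Let the requesting IP through, then revoke it when clearing.
          $ aws ec2 authorize-security-group-ingress \
              --group-id sg-0123456789abcdef0 \
              --protocol tcp --port 22 --cidr 203.0.113.7/32
          $ aws ec2 revoke-security-group-ingress \
              --group-id sg-0123456789abcdef0 \
              --protocol tcp --port 22 --cidr 203.0.113.7/32
          ```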

    • tlrobinson 1984 days ago
      For me, most of the things I self-host work better running on my local network, namely home automation and home theater type stuff.

      Also, I don't expose any of it directly to the internet.

      IMHO Docker has been a godsend for deploying self-hosted software.

      I haven't tried it yet, but watchtower (https://github.com/v2tec/watchtower) can be used to automatically update running Docker containers.
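      If you try it, the usual pattern is to run watchtower itself as a container, with the Docker socket mounted so it can pull and restart the others (flags as in that repo's README; hedged):

      ```sh
      $ docker run -d --name watchtower \
          -v /var/run/docker.sock:/var/run/docker.sock \
          v2tec/watchtower
      ```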

    • holri 1984 days ago
    I stick strictly to Debian stable. If a piece of software is not in Debian stable I will not use it, no matter how "good" it might be. Therefore there is seldom any manual maintenance necessary beyond the initial configuration.
      • drybjed 1984 days ago
      If you are looking for a similar project based on Debian Stable, you can check out DebOps: https://debops.org/ (disclaimer: I'm the author and main contributor).

        This is also a set of Ansible playbooks and roles that use a Debian Stable net install as a base and let you pick and choose the services you want to configure. It supports multi-host deployments, provides an internal CA by default and is designed to be used as a base for your own applications - you are encouraged to use DebOps roles in your own playbooks to integrate with existing service configuration like firewall, web server, etc.

      You can check out the list of Ansible roles available in the project [1]; there's also a Getting Started guide [2]. The project has its own IRC channel, #debops on Freenode, as well as a mailing list [3], if you need support.

        [1]: https://docs.debops.org/en/master/ansible/role-index.html

        [2]: https://docs.debops.org/en/master/debops-playbooks/guides/ge...

        [3]: https://lists.debops.org/mailman/listinfo

        • holri 1983 days ago
          sorry, could not find DebOps in Debian stable, so nothing for me ;-)
          • drybjed 1983 days ago
            Touché. The `debops` package is in Debian Experimental, but it is a very old version. Before re-introducing it, I would like to clean up the code of the roles first and bring everything up to date, but this will take time, definitely not before Buster becomes Stable.
      • stevekemp 1983 days ago
        I'm a Debian guy too, but there are times when you need something more recent than Debian supports, or something which is not packaged at all.

        I've backported packages for myself a few times, and in the other cases I'll install binaries beneath `/opt`, where they'll be out of the way.

    • amyjess 1984 days ago
      > Those that decide to self-host this stuff, do you ever worry about the maintenance burden?

      Given that the name is "HomelabOS", I'm sure that this is aimed at people who are doing this because they find it fun and/or to learn new technologies, not people who firmly believe this is the most efficient way to run production software.

      • iamdbtoo 1984 days ago
        This is part of it for me. I self-host a lot of services in my home and the maintenance is kind of part of the reason I do it. It's like a digital garden I can tend to and learn from.
    • syshum 1983 days ago
      >>>but everything works until such time as it doesn’t.

      Including these fabled "cloud services" that everyone believes have no downtime... yet I get alerts monthly about unplanned outages of this service or that for the various "cloud" products my company uses.

      Take major companies like Microsoft, which has all kinds of issues with its Office 365 product, on top of the endless Windows 10 and Office activation-server issues...

      While it looks good on the marketing brochure to have a "good quality third party provider who has a team that is paid to actively maintain and secure the product," the reality is often very, very different.

      I prefer the control, and the responsibility. Others prefer to lose control and have the ability to "blame the vendor".

    • znpy 1983 days ago
      > the maintenance burden

      The key to avoid the burden is to restrict the scope of your services to what you actually need.

      Let me elaborate.

      I've been hosting my own email (plus something else) at home for the last five years or so. So far, so good: sign-ups for various services, job-related email exchanges, no problems.

      I rarely touch my home server, and when I do it's mostly software upgrades. I run Postfix, and when configuring virtual users on virtual domains, I invested an afternoon to actually go through the documentation and came up with a reasonably simple configuration where all the data about users and domains is stored in simple text files: no SQL database to manage (not even SQLite).

      Such a solution is easy to back up and easy to interact with.
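      For anyone curious, a hedged sketch of what a text-file-only Postfix virtual setup typically looks like (domains and paths are examples, not my actual config):

      ```
      # /etc/postfix/main.cf (excerpt)
      virtual_mailbox_domains = example.com, example.org
      virtual_mailbox_base    = /var/mail/vhosts
      virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
      virtual_alias_maps      = hash:/etc/postfix/virtual
      ```

      The vmailbox and virtual files are plain text, one mapping per line, compiled with `postmap`; exactly the sort of thing that's trivial to back up.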

      For more adventurous testing, I just have another machine.

    • scarecrowbob 1983 days ago
      "everything works until such time as it doesn’t"

      That's true. I was running email, a personal web server, and a VPN with Sovereign, and that worked really well for years, until it didn't.

      Still, that is a long time of pretty functional work.

      At this point, I know enough professionally to run my stuff just fine, so I do. Seriously though, while I will never get back the 3-5 hours it took me to debug my mailserver setup, that time was spent almost 5 years ago and I haven't really had to touch it since.

      A lot of this stuff really does run just fine once you have it setup. I didn't find the maintenance burden all that high even when I knew less about these systems than I do now.

    • WrtCdEvrydy 1984 days ago
      Docker with Watchtower has made a lot of this easier...

      I am often surprised by how often Docker images are updated and auto-spun up in my home environment.

      • giobox 1983 days ago
        This approach is probably closest to one I would use personally, but you gotta have a lot of faith in the image maintainer to trust unattended auto-updating of an image you don’t maintain yourself. One stolen set of credentials later and your mailserver image is probably farming crypto-currency tokens for someone...

        Also, this assumes that the container run arguments for the container don’t change over time. Good container maintainers are sensible about this, but you are still leaving your uptime to the chance some stranger cocks something up.

        For certain homelab-style applications this might even be an annoyance: if I'm in the middle of a film I probably don't want the Plex container updating itself.

        • WrtCdEvrydy 1983 days ago
          Now, mind you, I'm pretty religious about only using linuxserver.io stuff, or containers that are old enough not to have continuous updates.
    • kop316 1984 days ago
      Honestly it's under 30 minutes per week for manual updating. I usually get notifications by mail if I need to update things (e.g. Nextcloud, FreeNAS, pfSense), and I have Debian doing auto-updates. I have had Nextcloud break a couple of times on a major update, and that did get annoying.

      It did take me quite a bit of time to set up though, and understand what I was doing. I would be incredibly hesitant to run what was in the link, as I have no understanding of how it works, security issues, etc.

    • yjftsjthsd-h 1984 days ago
      At least my stuff is usually just "weekly or so, run `apt-get update; apt-get upgrade` and restart as needed". I also have a script that checks for updates and emails me to apply them, and a WIP script to automatically determine if a reboot is needed. I'm gradually working towards auto-patching, but don't quite feel comfortable with it yet.
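      The reboot-needed check can be a one-liner around Debian's flag file (the path is the Debian/Ubuntu convention; it's parameterised here only so the function is testable):

      ```shell
      #!/bin/sh
      # Debian/Ubuntu touch /var/run/reboot-required when an upgrade
      # (e.g. a new kernel) needs a restart. Optional path override for tests.
      reboot_needed() {
          [ -f "${1:-/var/run/reboot-required}" ]
      }

      if reboot_needed; then
          echo "reboot required"    # a cron job could pipe this to mail(1)
      fi
      ```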
      • cellularmitosis 1984 days ago
        Are you familiar with the unattended upgrade packages available on Debian? https://wiki.debian.org/UnattendedUpgrades
        • yjftsjthsd-h 1984 days ago
          Yeah, I looked at that. If memory serves, I didn't end up using it for reasons that amounted to wanting finer control than it offered, and not being convinced that I definitely wanted to install updates automatically. (I've been working under the model where the system checks for updates and, if any are found, automatically downloads them and notifies me that they need to be installed.)
      • kop316 1984 days ago
        So, one thought to try: I have ZFS root on Debian, with a daily snapshot and auto-patching enabled. If something were to break, I can roll back to the snapshot and everything is fine.
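        The mechanics are roughly this (pool and dataset names are examples):

        ```sh
        # Snapshot before patching; roll back only if the upgrade breaks something.
        $ zfs snapshot rpool/ROOT/debian@pre-upgrade
        $ apt-get update && apt-get -y upgrade
        $ zfs rollback rpool/ROOT/debian@pre-upgrade
        ```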
        • cellularmitosis 1984 days ago
          Do you run separate partitions for OS vs. user data? (even if not, losing one day of data on a personal stack probably isn't that big of a deal if rollbacks are rare)
          • kop316 1984 days ago
            Yes I do, for that exact reason. If you want to know more I am happy to share; I just don't want to write a dissertation that falls on deaf ears.
            • oarsinsync 1983 days ago
              I would like to know more! :-)
              • kop316 1983 days ago
                Certainly! I use FreeNAS as a backend (file server) and Debian as an app server. I mount any data paths via NFS from FreeNAS. This is primarily for Nextcloud and Plex, so if I want to reinstall either elsewhere, I can do that without needing to restore a backup, and it will just come up. Both Nextcloud and FreeNAS run on ZFS with snapshots, and Debian auto-updates and alerts me if a restart is needed.

                FreeNAS also allows for VMs, so I can run VMs on my FreeNAS and give them the same treatment (I use regular ext4 for the VMs, as they are mounted on a zvol and get snapshotted). Right now they are all encrypted, so if something restarts, I have to physically be there to decrypt. I think I can use dropbear SSH to avoid this, but I am okay with the limitation for now.

                If you have specific questions please let me know.

                • kop316 1983 days ago
                  I should also say, ZFS is fairly flexible as well: you can split the filesystem into datasets with their own snapshots, so you can revert certain parts of the filesystem without reverting everything.
  • davestephens 1984 days ago
    This is awesome!

    I run a similar project called Ansible-NAS, which was borne out of FreeNAS being a pain in the ass to manage and upgrade. https://github.com/DaveStephens/ansible-nas

    • barrystaes 1982 days ago
      Do you know of Unraid? A web interface for NAS duties and for installing Docker containers in just a few clicks. Simple and versatile. And no need for RAID shenanigans, hence the name.
    • NickBusey 1984 days ago
      Wow, nice! That is definitely the most similar project to HomelabOS that I've seen so far.
  • mnutt 1984 days ago
    This is an interesting effort and reminds me a bit of https://github.com/sovereign/sovereign, though HomelabOS has significantly more apps and sovereign hasn't been touched in a while.
    • NickBusey 1984 days ago
      I used Sovereign for a while before making this. It just didn’t scratch my particular set of itches.
    • ireflect 1983 days ago
      I'm still using it, 5 years later. I spend an hour every few months to run updates and stuff, but otherwise it's been pretty easy going.
  • INTPenis 1984 days ago
    I was more interested in the list of software than the Ansible playbooks. Minio was especially interesting, as I just started doing cloud development and have yet to figure out how to do it offline, since my app integrates fully with S3.
    • FooHentai 1984 days ago
      Minio is a really interesting piece of software to play with. The docs have some fairly important pieces missing, and the scale-out/resiliency capabilities are mostly missing (deliberately, I think, guided by the way the authors describe its intended uses). As a result it ends up more of a toy/dev tool than something to rely on for persistent 'home production' backing of services.

      That said, for transient development purposes it's perfect. As an on-prem replica of object storage/S3, it's tantalizingly close but not quite there. Maybe it's improved in the last year or so since I last used it...

      • INTPenis 1983 days ago
        Unfortunately it does not mirror the S3 admin API which I need for my app because I let users create their own credentials.

        I'm going to try Zenko next.

  • etbusch 1984 days ago
    This looks really nice, and is similar to my homelab, except that mine took many tens of hours to setup and tune.
  • alaq 1984 days ago
    Super interested in this, will try it out. Thank you for building it.

    A couple of questions:

    - Can I deploy this to a Digital Ocean droplet or similar? (I am assuming it's the case, but just checking.)

    - There's OpenVPN, and there is Pi-hole. Can I assume that if I connect a device to the VPN, I'll also get ad blocking via Pi-hole as a bonus, or do I have to edit my DNS servers on the device separately?

    A couple of software suggestions:

    - I'd love to see WireGuard instead of OpenVPN. The setup/speed is just amazing.

    - I'd love to have Matrix (https://matrix.org/blog/home/) as a messaging option.

    • NickBusey 1984 days ago
      You can definitely deploy this to a DO droplet.

      Pi-hole out of the box support is a bit wonky at the moment, I've been working on it, but it's not quite to the point you described just yet. Contributions encouraged!

      Those both sound great to me, and again, Merge Requests are highly encouraged. :)

  • foolinaround 1984 days ago
    This is an interesting effort. How does this compare with the Sandstorm project? TIA!!
    • NickBusey 1984 days ago
      I have actually used the hosted version of Sandstorm in the past with some success, but did not realize at the time that they also offered self-hosting for it.

      In general though it looks like they are taking a very different approach to deployments as a whole. They describe some of those differences here: https://sandstorm.io/news/2014-08-19-why-not-run-docker-apps

      While I won't get into the specifics of the pros and cons of each of their bullet points, I will say HomelabOS arose (as some of the other commenters have pointed out) as a way for people interested in this sort of thing to experiment with it. Sandstorm looks more geared toward being usable by 'anyone', which is an admirable goal, if perhaps a bit ambitious in my mind.

    • brian_herman__ 1984 days ago
      Yes! I would like to know also!
  • neuromantik8086 1984 days ago
    Maybe I'm being obtuse, but doesn't using a configuration management tool to deploy black-box Docker containers eliminate many of the advantages of using config management in the first place?
    • NickBusey 1984 days ago
      So you’re asking why not simply use Ansible to deploy all this software? Because that would be anything but simple, and would negate almost all the benefits of Docker, like easy updates and immutability. This is the best of both worlds in my opinion: Ansible handles deploying the configuration that Docker then uses.

      Additionally the plan is to move to Kubernetes soon for multiple node deployment, and that wouldn’t really be possible without Docker.

      And to be clear, some software is installed directly by Ansible, where it makes sense to do so.

    • antocv 1984 days ago
      Yes, lol.
  • indigodaddy 1984 days ago
    Can't get to the URL (GitLab appears down?). However, does this support, or preferably incorporate, a reverse proxy like HAProxy or Nginx to handle SSL (auto/LE preferably) and domain/ACL-based backends? (E.g., instead of having a bunch of different front-end ports with a single-domain entry point.)
    • dsumenkovic 1983 days ago
      Some users may have had issues connecting to GitLab due to an issue with our upstream provider's IP addresses being routed to other service providers. More info can be found at https://status.cloud.google.com/incident/cloud-networking/18....
      • NickBusey 1983 days ago
        How often do the GitLab trending repositories update? HomelabOS has doubled from 130 to 260 stars in under a day, but the 'Trending' page mostly shows a bunch of repos with 10-30 stars.
        • dsumenkovic 1981 days ago
          That's a really interesting question. I don't think it's anything regular, though, so there's no fixed multiplier. It may double or multiply even more; there's no rule :-).
    • NickBusey 1984 days ago
      It uses Traefik to handle reverse proxying and by default sets up a subdomain per service.
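      For anyone unfamiliar with the pattern, Traefik (1.x at the time of writing) discovers backends from Docker labels; a hedged sketch with an example service and domain:

      ```yaml
      # docker-compose snippet: Traefik routes nextcloud.example.com
      # to this container based on the labels alone.
      services:
        nextcloud:
          image: nextcloud
          labels:
            - "traefik.enable=true"
            - "traefik.frontend.rule=Host:nextcloud.example.com"
            - "traefik.port=80"
      ```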
      • voltagex_ 1983 days ago
        How hard is it to add in a different reverse proxy? I'm currently working with sslh + Caddy to handle this kind of thing.
    • CompelTechnic 1984 days ago
      I agree that gitlab appears to be down at the moment.
  • joh6nn 1984 days ago
    Interesting. Does this handle SSO at all, and if so how? Semi-relatedly, does it support multiple users?
    • NickBusey 1984 days ago
      Not yet. The plan is to use LDAP for this, but it hasn't been tackled yet. https://gitlab.com/NickBusey/HomelabOS/issues/20

      Regarding multiple users, it really just sets up an admin user for most services. Any multiple user support is then up to each service individually. But if you're asking if it does automated separate instances of services for different users, then no, it does not do that.

  • sigstoat 1984 days ago
    Related: I'd like to see some software that could configure AWS services to implement Zapier-like functionality for you.

    And/or implement/deploy simple personal web services (RSS reader, wiki, maybe even webmail?) on top of API Gateway/Lambda plus other services as necessary.

  • barrystaes 1982 days ago
    For getting started with Docker containers on your home server, try Unraid OS. It's terrific and takes all the pain away. I consider myself a Docker power user now, but I still use Unraid at home because it works great.
  • alexnewman 1983 days ago
    I'm curious what the largest installation is that's managed with Ansible. Every time I've tried using it at scale it was incredibly difficult vs. k8s, CFEngine, Salt.
  • sudovancity 1977 days ago
    Awesome this is what I have been looking for!