Ask HN: What do you self-host?

I know this has been posted before, but that was a few years ago, so I wanted to restart the discussion, as I love hearing about what people host at home.

I am currently running an Unraid server with some docker containers; here are a few of them: Plex, Radarr, Sonarr, Ombi, NZBGet, Bitwarden, Storj, Hydra, Nextcloud, NginxProxyManager, Unifi, Pihole, OpenVPN, InfluxDB, Grafana.

538 points | by aeleos 1656 days ago

114 comments

  • mavidser 1656 days ago
    I reworked my servers a while ago to host literally everything through docker, managed via terraform.

    All web-services are reverse-proxied through traefik
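
    For the curious, the traefik side of that is mostly just container labels. A minimal sketch (shown as a plain docker run rather than terraform for brevity, with a hypothetical service name and hostname, and assuming traefik v2 label syntax):

        # route jellyfin.example.com to this container via a network traefik also joins
        docker network create web
        docker run -d --name jellyfin --network web \
          --label "traefik.enable=true" \
          --label 'traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)' \
          --label "traefik.http.services.jellyfin.loadbalancer.server.port=8096" \
          jellyfin/jellyfin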

    At home:

        loki + cadvisor + node-exporter + grafana + prometheus
        syncthing
        tinc vpn server
        jackett + radarr + sonarr + transmission
        jellyfin
        samba server
        calibre server
    
    On a remote server:

        loki + cadvisor + node-exporter + grafana + prometheus
        syncthing
        tinc vpn server
        dokuwiki
        firefox-sync
        firefox-send
        vscode server
        bitwarden
        freshrss
        znc bouncer + lounge irc client + bitlbee
        an httptunnel server (like ngrok)
        firefly iii
        monicahq
        kanboard
        radicale
        syncthing
        wallabag
        tmate-server
    • tnsittpsif 1656 days ago
      How much do you spend on the remote server on a monthly basis? Also, what's the hardware you use for the home server?
      • mavidser 1655 days ago
        Remote server's a 20USD/month DigitalOcean droplet with 4GB memory. Though even half of that would also have specified for these services.

        Home server's a Raspberry Pi 4.

        • vaxman 1654 days ago
          Prefab system images From Russia With Love, including password managers and surfing proxies, spun up on a VPS operated by totally unknown people (probably remoted to the actual DC from some place with bad water)...security nightmare. When I see the Statue of Liberty sticking up out of the water on the shoreline, imma scream like Charlton Heston! Need Congress/FTC to set guidelines. In the meantime, know that you don't get all the benefits of that stack for "free", you're burning down future hours that will be spent in disaster recovery mode.
          • dkmb 1653 days ago
            I uhh.. what?
            • BrandoElFollito 1642 days ago
              I think it means he does not do any self-hosting.
              • vaxman 1641 days ago
                opposite of correct
        • mavidser 1655 days ago
          s/specified/sufficed
          • vaxman 1654 days ago
            ..and note rPis don't have error-correcting (ECC) memory and have disk errors all the time
            • unixhero 1643 days ago
              Hmmm... With which filesystems?
              • vaxman 1641 days ago
                it's an electrical issue (contacts on SD cards, voltage/thermal spikes, etc.)
                • unixhero 1641 days ago
                  Sure. I still wonder, and it would be interesting to find out, which filesystem is the most resilient under the conditions you describe.
    • nerdponx 1656 days ago
      Was it hard to set up Firefox-Sync/Send? Last I checked, self-hosting these was undocumented and difficult.
    • dmos62 1655 days ago
      I see you're using Bitwarden.

      Does anyone have recommendations for password+sensitive-data management?

      I'm currently using Keepass and git, but I have one big qualm. You cannot choose to not version-control that one big encrypted (un-diff-able) file.

      • johntash 1654 days ago
        You might like Pass [0] or GoPass [1], which had more features the last time I looked.

        They both store passwords/data in gpg-encrypted files in a git repo. I'm not sure what the state of GUIs/browser plugins are for it, but I'm pretty sure there are some out there.

        You can also set up your git config to be able to diff encrypted .gpg files so that the files are diff-able even though they're encrypted.
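
        For reference, here's roughly how that diff setup goes (a sketch assuming a GnuPG-encrypted store inside a git repo; pass itself configures something similar when you run `pass git init`, if I recall correctly):

            # mark .gpg files as using a custom "gpg" diff driver
            echo '*.gpg diff=gpg' >> .gitattributes
            # tell git how to turn the encrypted files into diffable plaintext
            git config diff.gpg.binary true
            git config diff.gpg.textconv 'gpg --decrypt --quiet --batch'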

        [0]: https://www.passwordstore.org/

        [1]: https://github.com/gopasspw/gopass

        • dmos62 1653 days ago
          Yeah, I like Pass the most in this space, but it doesn't encrypt the index of logins/items that you're keeping. I.e. it's a folder tree of encrypted files, so anyone with access to the store can see the sites, logins, and other things I'm using. That's kind of a deal breaker for me, though I'm pondering whether I'm being practical or just overly cautious.
      • monotux 1654 days ago
        Bitwarden can be self-hosted and its server is open source (and security-audited, for what it's worth). I've used it for a few years or so and I've had no issues so far.

        One other alternative to keepass is pass[1].

        [1]: https://www.passwordstore.org/

      • erulabs 1655 days ago
        Vault or Bitwarden are great for projects once they get serious. Unfortunately there isn't a one-size-fits-all solution that doesn't suck in one way or another. Setting up Vault is fairly non-trivial.
    • captn3m0 1656 days ago
      Our stacks look so similar, it’s creepy. Thankfully, not running Syncthing now.
      • mavidser 1655 days ago
        Yeah, I too have noticed that. Haven't seen a lot of terraform usage for personal services.

        What are the issues with syncthing?

        • captn3m0 1653 days ago
          Now running NextCloud
          • ekianjo 1648 days ago
            That is not an issue?
      • drakenot 1656 days ago
        What do you use instead?
        • pickdenis 1656 days ago
          Not him, but I'm gonna use this as a chance to plug unison[1]. I've been using it for more than a year now to keep files synced across more than 3 computers and it works flawlessly. It gets a tad slow to start propagating changes if you have too many files and a weak server (around 150k files, server has an Atom N2800), but it's not more than 15 seconds.

          One nifty thing is that you don't need to run unison on the server ever, just have it installed. I have systemd units that I enable on my client machines and that does all of the syncing; unison connects to the server with ssh and does all the work there over that.
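
          As a rough illustration (hypothetical paths and hostname), the client side can be a single command fired from a systemd timer or cron job:

              # sync ~/docs with the same directory on the server, non-interactively;
              # unison only needs to be installed on the server, ssh does the rest
              unison ~/docs ssh://user@myserver//home/user/docs -batch -auto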

          [1]: https://www.cis.upenn.edu/~bcpierce/unison/index.html

          • equalunique 1655 days ago
            I've been wanting to give OCaml a try, and the Unison source code seems to be one of the most popular reference applications for it.
    • emit_time 1656 days ago
      Have you considered moving from tinc to Wireguard?
      • mavidser 1656 days ago
        Yes, I've been meaning to give it a go for a while now. Couldn't use it initially because of (then) lack of availability on BSD.
      • masterfooo 1656 days ago
        I use both, and one thing I found sucky about WG is that it does not work well with the Windows firewall. I need to give an app full permission for it to be able to access IP addresses routed by WG. Tinc does not have this problem.
        • LinuxBender 1653 days ago
          WG also doesn't do dynamic mesh routing. With tinc, I can have a network path down, and my mesh will find its way around it. Tinc is slower than WG, but I will take that hit for the benefit of availability. (my preference anyway)

          One thing I noticed with tinc is that it does not take advantage of sysctl network tuning. I had to increase the network buffers so that the dynamic routing didn't cause as noticeable a slowdown.

              Cipher = aes-128-cbc
              ClampMSS = yes
              UDPRcvBuf = 81920000
              UDPSndBuf = 81920000
              Compression = 0
    • ekianjo 1648 days ago
      How do you like bitlbee?
  • teddyh 1655 days ago
    “Self-host” is such a weird word. Hosting your own stuff yourself should be the default, should it not? I mean, you don’t “self-drive” your car, nor “self-work” your job. The corresponding words instead exist for the opposites: You can have a chauffeur and you can outsource your job.

    I think the problem is entirely caused by the US having absolutely abysmal private internet speeds and capacity. Since you can’t then have your own server at home, you are forced to have it elsewhere with sensible internet connections.

    It’s as if, in an alternate reality, no private residences had parking space for cars; no garages, no street parking. Everyone would be forced to use public transport, taxis, or chauffeur services to get anywhere. Having a private vehicle would be an expensive hobby for the rich and/or enthusiasts, just like having a personal server is in our world.

    • IanCal 1655 days ago
      Hmm. I get other people to build my car, grow my food, generate my electricity, extract and refine my petrol, clean my water, I've rented cars, I get others to fly planes for me that I don't own, I use trains others drive and own.

      Between doing everything myself and doing little to nothing, there's no reasonable default as to where the line is other than a cost/benefit comparison.

      For me, for many years owning a car was far more expensive than renting or getting taxis when needed. Owning a car absolutely would have been an expensive hobby, and the same is true for many in cities.

      Having a personal server is exceptionally cheap. I recently noticed a VPS I'd forgotten to cancel which cost about 10 dollars per year. That's about one minimum-wage hour where I live. If you mean literally a personal server, a Raspberry Pi can easily run a bunch of things and costs about the same as a one-off.

      It's time, and the upfront cost of software. If I want updates, and I do (security at least), I need some ongoing payments for those, and then I need to manage a machine. That management is better done by people other than me (as even if they earned the same as me they'd be faster and better), and they can manage more machines without a linear increase in their time.

      So why self host? Sometimes it'll make sense, but the idea it should be the default to me doesn't hold. Little needs to be 100% in house, and sharing things can often be far more efficient. Software just happens to be incredibly easy to share.

      • teddyh 1655 days ago
        > So why self host? Sometimes it'll make sense, but the idea it should be the default to me doesn't hold.

        You can’t outsource your privacy. Once you’ve given your information to a third party, that third party can and will probably use it as much as they can get away with. And legal protection from unreasonable search and seizure is also much weaker once you’ve already given out your information to a third party.

        To generalize, and to also answer your other comments in a more general sense, you can’t outsource your freedom or civic responsibility. If you do, you turn yourself into a serf; someone with no recourse when those whom you place your trust in ultimately betray you.

        (Also, just like “owning” a timeshare is not like owning your house, having a VPS is not self-hosting.)

        • IanCal 1654 days ago
          This is of course part of the cost/benefit thing you need to look at - but again it doesn't mean it is obviously the default. People can do things with my image if I leave the house, but going out in CCTV dazzling outfits with a fully covered face is not "the default".

          > If you do, you turn yourself into a serf;

          I'm really not sure I follow. This is about self-hosting services; I can't really see the link between (e.g.) hosting my data on github.com and turning myself "into a serf".

          > someone with no recourse when those whom you place your trust in ultimately betray you.

          There's obviously recourse - as an EU citizen (at least currently) it's possible that companies can lose 4% of their global turnover for misusing data.

          > (Also, just like “owning” a timeshare is not like owning your house, having a VPS is not self-hosting.)

          You can see from my post that I also put up the price of running an rPi in case you didn't count a VPS as self-hosting, which I absolutely would, because to me it's about what services I run vs what services I pay others to run.

      • rovr138 1655 days ago
        > I recently noticed a VPS I’d forgotten to cancel

        I found one on my Linode account last weekend. It’s been up since 2010 running Debian 5, with no updates because the repos are archived. Couple of PHP sites on there which I don’t control the domains of (but the sites were active).

        Last email I have from the people there is from 2012, a backup. The company apparently is not in business anymore (I know the domain registrar was on the personal account of the owner. He might have auto-renew on).

        Backed up everything there and shut it down.

    • bhauer 1655 days ago
      > I think the problem is entirely caused by the US having absolutely abysmal private internet speeds and capacity. Since you can’t then have your own server at home, you are forced to have it elsewhere with sensible internet connections.

      The trend definitely traces to the advent and eventual domination of asymmetric Internet connectivity. My first DSL connection was symmetric, so peer-to-peer networking and running servers ("self-hosting") were just natural. Since then, asymmetric bandwidth has ruled the US.

      It's not so much that connectivity technology in the US is strictly poor—many cities have options providing hundreds of megabits or a gigabit or more of aggregate bandwidth. It's that the capacity allocation of some shared delivery platforms (e.g., cable) is dramatically biased toward download/consumption, and against upload/share/host. And there's no way for consumers to opt for a different balance. I'd gladly take 500/500 versus 1000/50. Even business accounts, which for their greatly increased costs are a refuge of symmetric connectivity and static IPs, are more commonly asymmetric today.

      I think that this capacity imbalance and bias toward consumption snowballs and reinforces the broader assumptions of consumption at the edge (why make a product you self-host when most people don't have the proper connectivity?). This in turn means more centralization of services, applications, and data.

      Nevertheless, even with mediocre upload speeds (measured in mere tens of megabits), I insist on self-hosting data and applications as much as I can muster. All of my devices are on my VPN (using the original notion of "VPN," meaning quite literally a virtual private network; not the more modern use of VPN to mean "encrypted tunnel to an Internet browsing egress node located in a data center"). For example, why would I use Dropbox when I can just access my network file system from anywhere? To me, it's a matter of simplicity. Everything I use understands a simple file system.

    • maxerickson 1655 days ago
      If you take a broader lens, having a private vehicle is an expensive hobby for the rich.

      And most people actually do outsource their jobs. They are employees rather than working for themselves…

      • avl999 1654 days ago
        > If you take a broader lens, having a private vehicle is an expensive hobby for the rich.

        That might be true if you are in SF, NY, Toronto, London, or some other major metropolitan area with a good public transportation network. However, for a large number of places in North America, including metro areas like LA, San Diego, Minneapolis, and Dallas, having a car is practically a necessity, as that is the only way to get around the city without spending half a day on public transit.

      • ekianjo 1649 days ago
        > having a private vehicle is an expensive hobby for the rich.

        Having a car is not a hobby when you live outside of a very dense city center. That's just the tool that enables you to live.

      • tbrownaw 1655 days ago
        I tend to think of "expensive hobby" as meaning you do it for fun rather than for practical reasons.

        While I know that some car owners do just have it for fun, I think a lot more are because it's useful.

    • dillonmckay 1655 days ago
      So, it would be interesting to note who here is in the EU self-hosting w/ their symmetric, low-cost, high-speed ISPs, versus in the US, paying $600/mo on a 5-year contract for a 10/10 Mbit DIA setup (anecdote).
      • stiray 1655 days ago
        I am paying 76 euros/month for a 500/100 fiber connection (this is the max achievable; latency is around 6 ms after replacing their crappy router (12 ms+) with a MikroTik; throughput can be lower, but mostly it is throttled by the source) + 1 phone (50 GB download, LTE, with 80% country coverage) + the max IPTV scheme with HBO + a static IPv4 IP and reverse resolve. I would love to hear what the prices are around the world.

        (edit: forgot to state country, Slovenia)

        • BrandoElFollito 1642 days ago
          France

          50€ for 950/300 (fiber by Orange). I could get 10G/1G (fiber by Free) for 100€, but I could not use my own router instead of the provided one.

        • GrayShade 1655 days ago
          In Romania there's an ISP with a 300/150 Mbps plan for 6.3 EUR/mo and a 940/450 Mbps plan for 8.4 EUR/mo. They also have similarly dirt-cheap phone plans (from 2 EUR/mo).
        • 0xAF 1654 days ago
          In Bulgaria, for business, you can get 1 Gbps/1 Gbps dual-line fiber (in case one line breaks) with 16 IPs for about 200-250 EUR/month. But I guess the price is negotiable and will differ. For home you can get 100 Mbps/100 Mbps (no reverse DNS) for between 7-15 EUR/month.
        • dillonmckay 1655 days ago
          I have DSL in US, and get 12Mbps down and less than 1Mbps up, for $55/mo.

          Shared LTE phone and data plan (2 people) w/ 22GB/mo total is $160.

          And I also pay about $800/mo for health insurance for 2 people.

          • BrandoElFollito 1642 days ago
            We also pay for health insurance in France. I would say about 100€/mo for a family of four. It is directly taken off my pay slip so I am not sure about the exact number.
    • kraftman 1655 days ago
      Reminds me of "wild" camping. Used to just be called camping...
    • kleer001 1653 days ago
      >> Everyone would be forced to either use public transport, taxis and chauffeur services to get anywhere.

      Saudi is like this, I hear, Jakarta too. I assume there's more.

    • ianthiel 1655 days ago
      "self-driving" ones car may become the common parlance before we die
      • teddyh 1655 days ago
        “Where can I get a hire car? Self-drive.”

        “No self-drive. Only taxis.”

        The Prisoner, 1967

  • cyphar 1655 days ago
    I self-host the following at home. Everything is running under LXD (and I have all of the scripts to set it up here[1]):

      * nginx to reverse-proxy each of the services.
      * NextCloud.
      * Matrix Homeserver (synapse).
      * My website (dumb Flask webapp).
      * Tor (non-exit) relay.
      * Tor onion service for my website.
      * Wireguard VPN (not running in a container, obviously).
    
    All running on an openSUSE Leap box, with ZFS as the filesystem for my drives (simple stripe over 2-way mirrors of 4TB drives).

    It also acts as an NFS server for my media center (Kodi -- though I really am not a huge fan of LibreELEC) to pull videos, music, and audiobooks from. Backups are done using restic (and ZFS snapshots to ensure they're atomic) and are pushed to BackBlaze B2.
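
    (If anyone wants to copy the backup part, here's a rough sketch of the restic-from-a-ZFS-snapshot idea, with hypothetical pool/bucket names; the B2 credentials go in the B2_ACCOUNT_ID / B2_ACCOUNT_KEY environment variables and the repo needs a one-time `restic init` first.)

        # take an atomic snapshot, back up its contents via the .zfs directory, then drop it
        zfs snapshot tank/data@restic
        restic -r b2:my-backups:homeserver backup /tank/data/.zfs/snapshot/restic
        zfs destroy tank/data@restic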

    I used to run an IRC bouncer but Matrix fills that need these days. I might end up running my own Gitea (or gitweb) server one day though -- I don't really like that I host everything on GitHub. I have considered hosting my own email server, but since this is all done from a home ISP connection that probably isn't such a brilliant idea. I just use Mailbox.org.

    [1]: https://github.com/cyphar/cyphar.com/tree/master/srv

    • douglascoding 1645 days ago
      > * Wireguard VPN (not running in a container, obviously).

      I plan to use Wireguard too, so I shouldn't run it in containers? Can you elaborate on that?

      • BrandoElFollito 1642 days ago
        From the little research I did, I think you need a customized kernel on the host to do that.

        I run it on the host.

    • mwcampbell 1655 days ago
      > It also acts as an NFS server for my media center [...] to pull videos, music, and audiobooks from.

      This is a bit tangential, but to clarify, do you mean that you listen to audiobooks on your TV using Kodi? Do you also have a way of syncing them to a more portable device, like your phone?

      • cyphar 1655 days ago
        > This is a bit tangential, but to clarify, do you mean that you listen to audiobooks on your TV using Kodi?

        Sometimes, though not very often -- I work from home and so sometimes I'll play an audiobook in my living room and work at the dinner table rather than working from my home office.

        > Do you also have a way of syncing them to a more portable device, like your phone?

        Unfortunately not in an automated way (luckily I don't buy audiobooks very regularly -- I like to finish one before I get another one). I really wish that VLC on Android supported NFS, but it doesn't AFAIK (I think it requires kernel support).

    • big_chungus 1655 days ago
      Why SUSE over another OS? I've used it and like it, though I see more ubuntu, debian, centos among servers. Any particular distinguishing factor/advantage, or just preference?
      • cyphar 1655 days ago
        I've worked for SUSE for quite a few years now, and so I've gotten fairly used to running openSUSE on all my machines (and I do quite like things like the Open Build Service and other openSUSE projects). I'm a package maintainer for a bunch of openSUSE packages (most of the container-related ones and a few others) -- so I might as well use them myself to make sure they work properly.
    • mwcampbell 1655 days ago
      I'm curious about why you're using lxd. Is it just that you wanted to try something different from Docker and its rivals? Or is there a reason you think lxd is better for your setup? For a service per container, I figured minimal, immutable containers, rather than containers running full distros, would be better.
      • cyphar 1655 days ago
        The primary reason is that LXD has an indisputably better overall security policy than Docker. They support isolated user namespaces (containers running with different userns mappings), user namespaces are the default, they make use of far more new kernel hardening features than Docker, and so on. If I'm going to self-host something at home and expose it to the internet, I'm simply not going to use Docker.

        I used to run Docker containers several years ago, but I found them far more frustrating to manage. --restart policies were fairly hairy to make sure they actually worked properly, the whole "link" system in Docker is pretty frustrating to use, docker-compose has a laundry-list of problems, and so on. With LXD I have a fairly resilient setup that just requires a few proxy devices to link services together, and boot.autostart always works.
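
        To give a feel for it, a proxy device plus autostart is a couple of commands (a sketch with hypothetical container and port names):

            # forward port 8080 on the host to port 80 inside the container
            lxc config device add nextcloud web proxy \
              listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80
            lxc config set nextcloud boot.autostart true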

        Personally, I also find it much simpler to manage a couple of services as full-distro containers. Having to maintain your own Dockerfiles to work around bugs (and missteps) in the "official library" Docker images also added a bunch of senseless headaches. I just have a few scripts that will auto-set up a new LXD container using my configuration -- so I can throw away and recreate any one of my LXD containers.

        [Note: I do actually maintain runc -- which is the runtime underneath Docker -- and I've contributed to Docker a fair bit in the past. So all of the above is a bit more than just uneducated conjecture.]

        • gnfurlong 1654 days ago
          Have you taken a look at podman / buildah? My understanding is that podman resolves all of the security concerns you highlight above while mostly maintaining compatibility with the docker cli and existing docker images. It gets rid of the docker daemon so your containers (and restart policy) can just be managed by your existing service manager.

          I only just recently discovered podman and I've been pretty excited. Having never used LXD and only understanding the high level differences between the two, I'm curious how it compares with regards to security and usability.

          • cyphar 1654 days ago
            I'm a little bit too familiar with podman. LXD is more mature and actually implements all of the hardening features I mentioned. podman could implement them in theory, but doesn't. Its default security posture is very similar to (though not the same as) Docker's. Don't get me wrong, I do want to see podman succeed -- but I don't like the amount of unneeded hype around it. It's effectively a Docker rewrite by Red Hat (and other folks) that has some fairly important improvements, but it's not a revolutionary new concept. As for buildah, I am too biased to respond to that question.

            Oh, and most of the Docker CVEs found in recent years -- including those I've found -- have also impacted podman. The most brazen example is that podman was vulnerable to a trivial symlink attack that I fixed in Docker 5 years ago[1,2]. It turns out that both Docker and podman were vulnerable to a more complicated attack, but the fact that podman didn't do any special handling of symlinks is just odd.

            [Disclaimer: The above is my personal opinion.]

            [1]: https://github.com/containers/libpod/pull/3214

            [2]: https://github.com/moby/moby/pull/5720

            • gnfurlong 1653 days ago
              I have to admit, the biggest selling point to me for podman is the removal of the central docker daemon. For my use case (personal workstation and home lab), it seems strange to me that I need essentially another service manager for these processes just because I want to slap them in a container. It definitely makes sense that there would still be some gaps though as it's a less mature product.

              You've definitely convinced me to take a good look at LXC/LXD though. Thanks for the thorough response!

              • cyphar 1653 days ago
                It should be noted that (unlike Docker), LXD can be safely killed and upgraded without your containers dying -- which is the main problem most people have with Docker's container liveness model (even with Docker's --live-restore there are many issues). The main reason why LXD has a daemon is that it supports lots of management features (such as live migration and clustering) which cannot easily be done without a daemon.

                You can use LXC directly if you want to avoid a long-running daemon.

  • sdan 1656 days ago
    I host a bunch of docker containers plus Traefik to route everything. It runs on a cheap GCP instance (more on this here: https://sdan.xyz/sd2)

    Overleaf: https://sdan.xyz/latex

    A URL Shortener: https://sdan.xyz

    All my websites (https://sdan.xyz/drf, https://sdan.xyz/surya, etc.)

    My blog(s) (https://sdan.xyz/blog, https://sdan.xyz/essays)

    Commento commenting server (I don't like disqus)

    Monitoring (https://sdan.xyz/monitoring, etc.)

    Analytics (using Fathom Analytics) and some more stuff!

    • djsumdog 1655 days ago
      I run netdata too, but I keep that behind my VPN. I'd suggest the same for you. No reason to have that exposed to the entire world.

      I wrote this to set up my web server, mail server, and VPN server, and auto-generate all my VPN keys.

      https://github.com/sumdog/bee2

      • sdan 1655 days ago
        You're 100% right. Actually was a bit concerned myself when I realized hundreds of people were peering into how my server is doing.

        But at the same time, I understand the security risks, and if I have to I can just stop netdata's container and add some more security to it before turning it on again. (I'm not running some SaaS startup, so security isn't a huge concern, and I don't think you can do anything with my netdata that would affect or expose anything else that would make me prone to attack.)

      • rovr138 1655 days ago
        Any reason to have it behind a VPN?
        • AdamGibbins 1655 days ago
          Reduces surface area of attack, you never know when a 0day is going to be found. Exposing monitoring/metrics is particularly interesting as it exposes a lot of information to an attacker, if they're trying to starve your machine of a resource or whatever.
          • sdan 1655 days ago
            Exactly. They have direct access to your vitals and can push certain buttons to figure out how your system is running to brute-force that attack, ultimately ruining whatever they intended to do.

            I'm probably going to change how publicly accessible my monitoring view is soon, but for now, it seems pretty cool for everyone to see.

            • lma21 1654 days ago
              Indeed it was cool.

              Would love to get a link to a screenshot of your system's resource monitoring. The description of each panel & each metric was quite useful!

              • pm7 1654 days ago
                It's still public as of now.
    • RulerOf 1655 days ago
      It’s a relatively popular choice but I’ll ask you about it...

      I see a lot of people putting their home stuff behind CloudFlare, but when I reviewed their free tier, I didn’t actually see any security benefit to outweigh the privacy loss, and I didn’t see that covered on your blog post.

      • tbyehl 1655 days ago
        > I didn’t actually see any security benefit to outweigh the privacy loss

        The main thing is being able to hide your origin IP address. That turns many types of DDoS attacks into CloudFlare's problem, not yours, and it doesn't matter that you're on the free tier[0]. If you firewall to only allow traffic from CF[1], then you can make your services invisible to IP-based port scans / Shodan.

        CloudFlare isn't a magic-bullet for security, but, used correctly, they greatly reduce the attack surface.

        Whether any of that is worth the privacy / security risk of letting CloudFlare MITM your traffic is up to you.
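
        If it helps anyone, the "firewall to CF's IPs" part can be as simple as the following sketch (assuming ufw with a default-deny incoming policy; IPv4 shown, there's an equivalent ips-v6 list, and the ranges do change occasionally):

            # allow HTTPS only from CloudFlare's published ranges
            for net in $(curl -s https://www.cloudflare.com/ips-v4); do
                sudo ufw allow from "$net" to any port 443 proto tcp
            done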

        [0] https://news.ycombinator.com/item?id=21170847

        [1] https://www.cloudflare.com/ips/

      • sdan 1655 days ago
        Thanks for the read!

        1. This is hosted on GCP. Actually was thinking of using Cloudflare Argo once my GCP credits expire so that I can truly self host all this (although all I have is an old machine).

        2. For me, Cloudflare makes my pages load faster. Security-wise, I have pretty much everything enabled... like always-on HTTPS, etc., and I have some strict restrictions on SSHing into my instance (also note that none of my IP addresses are exposed thanks to Cloudflare), so really I'm not sure what security risk there may be.

        3. How am I losing privacy? Just curious, not really understanding what you're saying there.

        • tbyehl 1655 days ago
          > Actually was thinking of using Cloudflare Argo

          I'd suggest that Argo is a waste of money if you have control of your router, you don't need to secure unencrypted HTTP traffic, and your ISP isn't port-blocking. Block all traffic except from CF's IPs, configure Authenticated Origin Pulls, and use SSL for your CF<->Origin traffic (your own cert or CF's).

          If you don't meet all of those conditions, a cheap VPS as a VPN server is probably a better value (plus you get a VPS to do other stuff with).

          • sdan 1655 days ago
            Great idea. Didn't think of that. Maybe I'll do that in the future.
        • oarsinsync 1655 days ago
          You lose the end to end encryption that you’d get by HTTPS directly to your home instead of proxying via CF, as CF will MITM all of your sessions.

          Browser <> CF, CF<> source server. Two distinct TCP sessions. Both potentially encrypted, but there’s no E2E encryption anymore.

          • sdan 1655 days ago
            I get what you're saying, but here's some benefits of using CF:

            1. Been using it for 3+ years. Whenever I'm making a site, there's nothing better than easily making some DNS records and making sure they're all always-on HTTPS.

            2. A little bit of the first part: it's a hassle to set up my own certs, etc. I feel that Cloudflare "protecting" my IP from DDoS and other attacks is far better than anything that I can set up easily (at least from my experience, I think they know what they're doing)

            3. Maybe in the future when I have some time and money I'll do everything on my own and ensure I have E2E encryption. At the moment, anything I'm running isn't mission-critical and isn't used by hundreds of people; I'm not making a SaaS startup. I understand your concern, but the ease of use of Cloudflare is something I value.

            4. Analytics. I've come not to trust Google Analytics at all. I'm not sure what they're doing, but most if not 100% of tech-savvy people have adblock, which blocks GA. My VPN from AlgoVPN blocks GA and anything related to GA, FB, Twitter, etc. So I'm not really sure how much I can trust GA's analytics as opposed to Cloudflare giving me the exact numbers on how many people requested or visited my site. (I'm going to make my own analytics soon since Fathom has turned profit-only and is no longer open source.)

            • oarsinsync 1655 days ago
              Apologies for not being clear, but I was simply addressing the third point in your previous post:

              > 3. How am I losing privacy? Just curious, not really understanding what you're saying there.

              I understand the benefit of CF, and it’s for each person to decide for themselves what they consider acceptable or not.

              > Cloudflare giving me the exact numbers on how many people requested or visited my site. (I'm going to make my own analytics soon since Fathom has turned to profit only and not open source).

              We could do this in the 90s with our web server logs. It didn’t involve third parties or paid tools or centralising logging and sacrificing privacy. Tooling for simple stats has existed for literally more than two decades.

          • asdkhadsj 1655 days ago
            > HTTPS directly to your home

            Can you elaborate on this? Maybe I misunderstand you, but is there a good way to get HTTPS from your home?

            • notkaya 1655 days ago
              Don't use CF. Route everything directly to a reverse proxy on your home server.
    • pm7 1654 days ago
      You may want to consider adding the netdata user to the docker group. It will let netdata show Docker container names instead of numeric IDs.

      Of course, it would simplify privilege escalation if someone successfully attacks the netdata service. If you want a public dashboard, streaming is supposed to be quite safe (there's no way to send instructions to a streaming instance of netdata).
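
      Roughly this, assuming netdata runs on the host under its own user rather than in a container:

          # let the netdata user query the Docker socket for container names
          sudo usermod -aG docker netdata
          sudo systemctl restart netdata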

    • _emacsomancer_ 1656 days ago
      How difficult is Overleaf to self-host?
  • whalesalad 1656 days ago
    Bums me out when I see people putting so many resources into running/building elaborate piracy machines. Plex, radarr, sonarr, etc... (you note some of these services but /r/homelab is notorious for this)

    Here’s my home lab: https://imgur.com/a/aOAmGq8

    I don’t self host anything of value. It’s not cost effective and network performance isn’t the best. Google handles my mail. GitHub can’t be beat. I use Trello and Notion for tracking knowledge and work, whether personal or professional. Anything else is on AWS. I do have a VPN though so I can access all of this when I’m not home.

    The NAS is for backing up critical data. R720 was bought to experiment with Amazon Firecracker. It’s usually off at this point. Was running ESXI, now running Windows Server evaluation.

    The desktop on the left is the new toy. I’m learning AD and immersing myself 100% in the Microsoft stack. Currently getting an idiomatic hybrid local/azure/o365 setup going. The worst part about planning a MS deployment is having to account for software licensing that is done on a per-cpu-core basis.

    • Marsymars 1656 days ago
      It bums me out when I see corporations putting so many resources into monopolizing copyright and preventing media from entering the public domain, which leads to consumers putting resources into purchasing media that would otherwise be in the public domain.

      The status quo is radically anti-consumer, IMO, as radical as abolition of all copyright would be.

      • zanny 1656 days ago
        It more generally burns me out that we as a society still feel it is necessary to construct and reinforce so arbitrary an apparatus as copyright to substantially stymie the tremendous potential information exchange of computer networks.

        Of all the ways to try to promote creativity in the 21st century, making information distribution illegal by default and then using force of law to restrict said distribution unless authorized is pretty wack.

        • dkarras 1655 days ago
          >it is necessary to construct and reinforce so arbitrary an apparatus as copyright to substantially stymie the tremendous potential information exchange of computer networks.

          It makes sense when you consider that information is generated in the first place for an incentive, and that incentive is only possible when copyright guards it. People are more than free to create public information if they choose to do so (and they do), but some people generate valuable information mostly for the purpose of profiting from it and the copyright framework tries to ensure that it will be worth their time when they attempt to create such information. Would you rather they didn't have the option which would result in the effort not being expended to generate such information? With copyright, you at least have the option to obtain it if you deem the price tag (set by the creator) fits the value you'll get from it.

          There is no central authority that copyrights information that people generate. You make it sound like there is some evil force in the world that prevents people from creating freely accessible information. There isn't. You're free to create freely accessible information. There are creators that choose to limit access to the information that they generate and I don't understand how someone can argue that it is unfair that they have an option to do so if they choose.

          • cannonedhamster 1655 days ago
            US copyright laws have proliferated around the world. Copyright was originally intended to be a limited-time monopoly which allowed consumers the ability to trust creators, and creators the ability to share without worry that their idea would be stolen by other businesses. It was never intended to limit the rights of consumers; it's been warped into that by Disney, which rewrote the laws to protect Mickey. Copyright as it is goes against the very purpose of creation, preventing new works from ever being created.
            • dkarras 1654 days ago
              >ability to share without worry that their idea would be stolen by other businesses.

              Presumably so because consumers stealing the final products was already prohibited / a crime. The final product was generally physical, and consumers would have to physically break into stores to get their copy of whatever was produced and it was already illegal. And even if the consumers obtained their copy by legitimate means, them sharing with others would mean they would lose their own copy. For consumable-ish items (things you only need to experience once to get the value out of the product) this is still a problem of course but there is no easy way of preventing it - but the idea is still there and the limits can be enforced. With digital information, the barrier for entry for such theft is greatly reduced. You don't lose your copy when you share, and stealing is a lot easier too. Doesn't mean it is right or it is in line with the spirit of what we thought ownership meant back in the day.

              >Copyright as it is goes against the very purpose of creation preventing any new works from ever being created.

              Again, I don't get this. EVERYONE has the OPTION to create works for public domain. Why is this not enough for you? Everything you want is already there. It's just that there is another option for others that don't want to create works for public domain. Why does that bother you?

              My guess is that if you were entirely happy with what people create without a motive for profit, you wouldn't care that other people had a copyright option. But you are not happy with what that economic model (free, copyleft etc.) produces by itself. You are aware that that economic model doesn't work. You want free access to information that people with economic incentives create with a price tag attached to it, because you know information generated with profit in mind tends to be more valuable.

              • cannonedhamster 1653 days ago
                Personally, I'm not against the idea of copyright, but the length of copyright has completely perverted its purpose. I do generally create for the public domain, but when you can lock up parts of culture you're stealing from the public. Once you share something, it's no longer just yours. The fact that copyright has gone from 7 years to the near-perpetual state it's in now means that there were years where almost nothing entered the public domain through copyrights expiring. Something from almost 100 years ago will only enter the public domain this year. That's wrong and a perversion of copyright, and it has stolen something from the public for years because of retroactively changing the copyright rules.

                https://www.smithsonianmag.com/arts-culture/first-time-20-ye...

          • zanny 1655 days ago
            This is a chicken and egg fallacy. You can't tell what would or would not be made in the absence of an incentive that has never not been provided.

            > Would you rather they didn't have the option which would result in the effort not being expended to generate such information?

            Yes, I would argue absolutely that valuable information would still be made because those that would benefit just from the information existing - not from the potential sale of said information - would still fund its creation. Someone that wants a painting will still pay for it whether or not they can sell the finished work. The think tank researching a cure for cancer will still have ample funding sources from those who think not having cancer would be beneficial regardless of those sponsors' ability to profit off said cure.

            > I don't understand how someone can argue that it is unfair that they have an option to do so if they choose.

            It's largely a problem because it's both default and implied. It's in the same class of problem as if the government tried to restrict air - you had to pay to breathe and were charged per month, despite the air being "free" and "everywhere". It's a tough analogy to write though, because there is no true analog to the modern miracle of information propagation being infinite and endless - we truly have nothing else worth so little as a copy of a number to compare it to.

            But fundamentally it's having your cake and eating it too - if you want to monetize your creations, you make them for free (at expense to yourself) to try to monetize something that has no value (copies of it). It's so abjectly opposed to reality and true scarcity that it subconsciously drives people to feel no serious shame in piracy despite them "stealing theoretical profits from the rightsholder". But that's really all you are taking. In another light, a random stranger is offering you something for free and without recompense that they have, just because it's so cheap to store, transmit, and replicate. That is magical. We take this modern miracle of technology and bind it in chains to try to perpetuate a model of profit that doesn't make any actual sense in actual reality given the scarce inputs (creative capability, motivation, and efforts) and infinite outputs (information) involved.

            • dkarras 1654 days ago
              >You can't tell what would or would not be made in the absence of an incentive that has never not been provided.

              You can though. Precisely because there are countries (past and present) that have / had no legal framework for such incentives, and there are others where the framework was there but the law is not enforced. And we can observe how it is working for them, compare and contrast. I live in one of them (lived here all my life) but work for countries that provide such a protection (precisely because my intellectual property would not be respected here, so my own country, my homeland, does not get to benefit from my work) so it is easy for me to look at both sides of the coin - though it is not strictly necessary. It doesn't take hands-on experience to observe that the most productive and innovative countries occupying planet earth are those that have strong intellectual property rights.

              There is a lot of low-hanging fruit where I live that would double the GDP of the country in a few years but no one is doing it, because they are either capital intensive or time intensive (or both), but without any protections there to make it worth your while it doesn't make sense to attempt those - for anyone. It makes more sense to pitch any innovative ideas to countries that will protect you so that you get a return proportional to the value you generate for the rest of the society. If I have an idea that has the potential to shave 1 hour off of millions of people's work every day, that is enormous value generated for everyone and I should be rewarded proportionally. Not to mention the risk I'd have to take to attempt that: by attempting it I'm doing this instead of doing something else with my only life, so of course the incentive HAS TO be there.

              >The think tank researching a cure for cancer will still have ample funding sources from those who think not having cancer would be beneficial regardless of those sponsors ability to profit off said cure.

              I think this is a far too naive way of looking at it. You are taking risk entirely out of the equation. If I'm attempting to do something that is of value to other people, merely by attempting it, I am taking a RISK. My time is my most valuable asset, I only live once, and I'm willing to invest my time and capital into this endeavor instead of doing something else with them. The resources of human beings are not unlimited, so they need to have a heuristic / algorithm to ration their resources. Their survival / wellbeing is dependent on this. So something becomes a viable risk only if there is a possible return that makes sense. Like you wouldn't play a coin flip for 1.1x or nothing, right? It must at the very least be 2x or nothing to break even.

              So we think we'd like to do good for no possibility of personal return, but behavioral economics shows that humans do not operate that way (even though they'd like to think that they would) because each and every human being has their own life, responsibilities, family, wants, wishes, and only limited resources. To make ANYTHING, the returns must be congruent with your heuristic about how you'd like to divide your limited resources.

              • zanny 1654 days ago
                > without any protections there to make it worth your while

                You are still approaching it from the "make it for free, charge for the result" model incompatible with reality. If there is something worth doing that will radically improve society you should be soliciting the funding first before undertaking the labor.

                Yes, risk is involved. You take risks when you pay someone to build a house for you. You take risks when you buy sushi at the store that may or may not still be good. We have, as a society, very effectively structured and produced buyer protections for services rendered, ranging from guarantees to a total free-for-all. You would, like with every other transaction, budget and account for risks and pay to mitigate them if warranted.

                > It makes more sense to pitch any innovative ideas to countries that will protect you

                It's more like foreign nations with IP laws are offering you a magic money machine that nations without them don't, and the pile of gold is a tempting proposition compared to attempting alternative funding models.

                > So something becomes a viable risk only if there is a possible return that makes sense

                I don't think we are in any disagreement here. I'm never arguing that you abolish IP and expect all further scientific advancement, art, programming, etc to be done by people who cannot seek compensation for making it. I'm arguing that the US IP regime smothers any potential alternative with how exploitative having the government constrain information by law is.

                Your story runs a similar thread to the one seen in tax evasion and money laundering - the rich will gravitate their wealth towards wherever they can keep the most of it. That is where Swiss Bank Accounts and Latin American cartels get their power. Likewise businesses want lower taxes and thus gravitate towards the countries that offer the lowest tax rates - even if those lower tax rates have abject demonstrable harm to the citizenry through reduced social programs, etc. The result is that there is a global race to the bottom economically - to appeal maximally to wealth to attract it, or see your nation rot as it constantly flees your borders for greener pastures. IP is a massive pile of money to be had, hence nations without it see creators flee to nations with it, but might does not make right - just because the existence of IP represents untenable profit by any other means does not justify its existence, especially when it is so contrarian to baseline reality. It's a perversion of normality meant to attract investment the same way having low or no wealth and income taxes or no corporate tax attracts the wealthy and businesses.

                > So we think we'd like to do good for no possibility of personal return

                The personal return on funding the cure for cancer is having the cure for cancer exist.

                > To make ANYTHING the returns must be congruent with your heuristic about how you'd like to divide your limited resources.

                And macroeconomically we are all constituted of limited resources - limiting information compels most to partition their scarce resources towards affording the IP regime, not for any physical necessity, and this reduces the buying power of everyone involved. IP falls under the same purview of economic cancers as advertising, health insurance, and the military - bureaucracy and a race to the bottom that has no abject benefit but siphons productivity away to rent seekers and middle men.

          • dillonmckay 1655 days ago
            A lot of us think 75 years + life of the creator is a bit excessive.
            • dkarras 1654 days ago
              As long as the intellectual property framework is there, recognised and enforced, I'm more than open to discussing the specifics of how it should be done. The argument above is about abolishing the idea of intellectual property as a shackle around humanity's creative output - something which I disagree with. They are saying that since making a copy of something is essentially free in the digital era, there should be no protections for copying and distribution of the data - and that people would generate data of similar quality regardless of the incentive of profit proportional with the value of data. I think that is an absurd claim. And we can easily see that it is not the case because there are countries with lax or non-existent legal intellectual property frameworks and they lack productivity growth and they don't innovate.
    • LeoPanthera 1656 days ago
      I use Plex. It virtually exclusively contains rips of blu-rays and DVDs that I have bought. I do not consider this piracy. I do not think format shifting is unethical. I do think DRM is unethical.
      • cannonedhamster 1655 days ago
        Listen here, just because you bought a piece of plastic doesn't mean you own what's on the piece of plastic. It's like a car, just because you bought a car doesn't mean you own the steering wheel or the seats.... Oh wait... /s
    • esotericn 1655 days ago
      > The worst part about planning a MS deployment is having to account for software licensing that is done on a per-cpu-core basis.

      > Bums me out when I see people putting so many resources into running/building elaborate piracy machines.

      These two comments are rather at odds to me.

      That said, IME generally the type of person who's big into self hosting isn't a Microsoft guy. I work with MS stuff at work at the moment. The entire thing is set up for Enterprise and Regulations. It's hugely overcomplicated for that specific goal only.

      At home I don't care about Regulations(tm). The only reason I can see for someone to bother with it is if they want to train out of hours for a job at an MS shop.

    • input_sh 1656 days ago
      As soon as I find a DRM-free way to purchase my shows/movies/books, I'll be glad to do so. Until then, yarr.
      • ptman 1654 days ago
        There are DRM-free ebooks available. https://www.defectivebydesign.org/guide/ebooks (and there seems to be other stuff as well: https://www.defectivebydesign.org/guide/audio ). But I mostly agree with you. I hope watermarks get used instead of DRM.
        • input_sh 1654 days ago
          Ebooks are the easiest of the bunch, but I don't purchase a book simply because it's DRM-free. I find the book I want to read, I look for a DRM-free version of it (I'm fine with watermarks as well), and, if I can't find it, I either strip away the DRM or pirate it.

          I specifically didn't mention music because it's easy to get it DRM-free. Pretty much every online music store is DRM-free.

    • saagarjha 1656 days ago
      Plex can be and often is used for hosting content that you own the rights to.
      • DominoTree 1656 days ago
        I have well over 2,000 publicly-available cybersecurity talks on my Plex and I'm currently watching one right now.

        I also have piracy.

      • whalesalad 1656 days ago
        Sure, in the same way that BitTorrent can be used to download Linux ISOs :)
        • reificator 1655 days ago
          I pull in about two batches of 5-30 torrents every month or two for content I paid for on Humble Bundle.

          People who use bittorrent legally do exist, or at least there's one of us.

          • cannonedhamster 1655 days ago
            There's tons of people who use BitTorrent legally. Some companies use it on their servers to keep them in sync.
        • berti 1655 days ago
          It's an excellent way to download Linux (or BSD) ISOs. Much faster than the nearest HTTP mirrors.
    • aklemm 1656 days ago
      Not to be rude, but you are very much cozied up to “the man” ya know?
    • shakna 1656 days ago
      > Bums me out when I see people putting so many resources into running/building elaborate piracy machines.

      How would _you_ suggest I handle the 2TB of public domain media I have, then?

    • gcj 1650 days ago
      I'm sorry to bum you out, but I recently built a Raspberry Pi piracy box and it's amazing :D
    • dillonmckay 1655 days ago
      Tell me about the wood shelving. Did you build that?

      It seems to hold rack-mounted gear quite well.

    • futhey 1656 days ago
      Oh man, I have the same shelf from Ikea, never thought of using it for a couple rack-mountables. I like the look!
      • dillonmckay 1655 days ago
        I was wondering about that shelf. It is the perfect width.
  • tbyehl 1656 days ago
    In colo:

      nginx
      Plex
      Radarr / Sonarr / SABnzbd / qBittorrent / ZeroTier -> online.net server
      FreeNAS x2
      Active Directory
    
    At home:

      nginx
      vCenter
      urbackup
      UniFi SDN, Protect
      Portainer / unms / Bitwarden
      Wordpress (isolated)
      Guacamole
      PiHole
      InfluxDB / grafana
      Active Directory
      Windows 10 VM for Java things
      L2TP on my router
    
    Everything I expose to the world goes through CloudFlare and nginx with Authenticated Origin Pulls [0], firewalled to CF's IPs [1], and forced SSL using CF's self-signed certs. I'm invisible to Shodan / port scans.

    Have been meaning to move more to colo, especially my Wordpress install and some Wordpress.com-hosted sites, but inertia.

    [0] https://support.cloudflare.com/hc/en-us/articles/204899617-A...

    [1] https://www.cloudflare.com/ips/
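
    A minimal sketch of the "firewalled to CF's IPs" part above, assuming ufw and Cloudflare's published ranges (the exposed port and policy are whatever you actually run):

      # allow HTTPS in only from Cloudflare's published IPv4 ranges (repeat with ips-v6 for IPv6)
      for net in $(curl -s https://www.cloudflare.com/ips-v4); do
        sudo ufw allow from "$net" to any port 443 proto tcp
      done
      # everything else hits this catch-all deny (ufw evaluates rules in the order they were added)
      sudo ufw deny 443/tcp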

    • NewDimension 1656 days ago
      Do you have a static IP at home? How does your cloudflare setup work?
      • bpye 1656 days ago
        I've done similar. You firewall your home network off from all IPs other than Cloudflare's. You can use a Cloudflare-provided certificate for HTTPS - they will MITM and use a trusted cert for outward connections. You can update Cloudflare DNS records via their API - the typical dynamic DNS tools work fine. It works well.
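
        A rough sketch of the dynamic DNS half with plain curl against the Cloudflare v4 API (the token, zone/record IDs and hostname are placeholders):

          #!/bin/bash
          # push the current public IP into an existing A record via the Cloudflare API
          IP="$(curl -s https://api.ipify.org)"
          curl -s -X PUT \
            "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
            -H "Authorization: Bearer $CF_API_TOKEN" \
            -H "Content-Type: application/json" \
            --data '{"type":"A","name":"home.example.com","content":"'"$IP"'","ttl":120,"proxied":true}'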

        I've never been able to pull this off completely, as I always want a way to SSH into my home network - but maybe there is a better way to get this sort of 'break glass' functionality.

        • tbyehl 1656 days ago
          > I always want a way to SSH into my home network

          Guacamole (sorta) gives me that. If CloudFlare or nginx or Guacamole have problems then I'm hosed... but I work from home so remote access isn't a huge concern.

          And I've got nothing terribly "household critical" at home, just the PiHole needs to be running to keep everyone happy. I do wish that PiHole had an HA solution. I've been tempted to set up a pfSense / pfBlockerNG HA pair but that's a lot of overhead just for DNS.

          • rovr138 1655 days ago
            > I do wish that PiHole had a HA solution

            You could run 2 Pis, or a Pi and a container on another always-on machine, for example. Then just point your router's primary DNS to the Pi and its secondary to the other instance.

          • bpye 1656 days ago
            That's not a terrible solution. I've just been looking at possibly forwarding SSH over WebSocket - then I can put that behind CloudFlare. Latency would suffer, however.
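
            For what it's worth, a rough, untested sketch of that with websocat (hostnames and ports are placeholders, and the WebSocket path has to be proxied through nginx/Cloudflare):

              # home box: expose sshd as a WebSocket endpoint for the reverse proxy to pick up
              websocat -b ws-l:127.0.0.1:8022 tcp:127.0.0.1:22
              # client: turn the WebSocket back into a local TCP port, then ssh to it
              websocat -b tcp-l:127.0.0.1:2222 wss://example.com/ssh &
              ssh -p 2222 user@127.0.0.1
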
        • jlgaddis 1656 days ago
          > ... want a way to SSH into my home network ...

          IMO, using a Tor hidden service is a (damn near) perfect solution for this.
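
          Roughly, with the stock Debian paths (adjust to taste):

            # /etc/tor/torrc on the home box:
            #   HiddenServiceDir /var/lib/tor/ssh/
            #   HiddenServicePort 22 127.0.0.1:22
            sudo systemctl restart tor
            sudo cat /var/lib/tor/ssh/hostname    # prints the generated .onion address
            # from any client with a local tor daemon running:
            torsocks ssh user@youronionaddress.onion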

          • sterlind 1655 days ago
            aren't jitter and latency still major problems with this approach? plus connection resets, though maybe long-lived flows are more reliable than I remember, and I suppose you could do multipath (if Tor doesn't handle that already, not sure.)

            have you made it work? my Tor career ended in college after running an exit node - no visits from the FBI, just got auto-klined from every IRC server since I was on the list of proxies.

    • Pmop 1656 days ago
      Wait. What? Windows VM for Java?
      • throwaway8941 1656 days ago
        It's most likely client-side stuff. Probably some crappy banking client, or an authentication client for some government websites, or something like that.

        I use one for the sites below. It is written in Java/Kotlin, but barely works anywhere except Windows.

        https://egov.kz/cms/en

        https://cabinet.salyk.kz/

        ...

      • tbyehl 1656 days ago
        Mostly for old shitty IPMI.
    • snagglegaggle 1656 days ago
      vCenter but no hosts? Why VMware stuff?
      • tbyehl 1656 days ago
        Colo: Three Hyper-V hosts on R620s. Goofball Quanta and Foxconn hardware for FreeNAS bare metal. All 2xE5v2 w/ 160-256GB RAM.

        Home: Two VMware hosts on Hyve Zeus (Supermicro, 2xE5 64GB), one on an HP Microserver Gen8 (E3-1240v2 16GB). PiHole bare metal on a recycled Datto Alto w/ SSD (some old AMD APU, boots faster than a Pi and like 4w). Cloud Key G2 Plus for UniFi / Protect.

        VMware because it's what I'm used to. Hyper-V because it's not. Used to have some stuff on KVM but :shrug:

  • zelly 1656 days ago
    SSH: for git and tunneling literally everything: VNC, sftp, Emacs server, tmux, ....

    Docker running random stuff

    Used to run Pihole until I got an Android and rooted it. Used to mess with WebDAV and CalDAV. Nextcloud is a mess; plain SFTP fuse mounts work better for me. My approach has gone from trying to replicate cloud services to straight up remoting over SSH (VNC or terminal/mosh depending on connectivity) to my home computer when I want to do something. It's simple and near unexploitable.

    This is the way it should always have been done from the start of the internet. When you want to edit your calendar, for example, you should be able to do it on your phone/laptop/whatever as a proxy to your home computer, actually locking the file on your home computer. Instead we got the proliferation of cloud SaaSes to compensate for this. For every program on your computer, you now need >1 analogous but incompatible program for every other device you use. Your watch needs a different calendar program than your gaming PC than your smart fridge, but you want a calendar on all of them. M×N programs where you could have just N, those on your home computer, if you could remote easily. (Really it's one dimension more than M×N when you consider all the backend services behind every SaaS app. What a waste of human effort and compute.)
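
    In practice, the remoting workflow mentioned above boils down to a couple of commands, something like (hostnames are placeholders):

      # tunnel the home box's VNC display to a local port, then point a VNC client at localhost:5901
      ssh -f -N -L 5901:localhost:5901 user@home.example.com
      # or, on a flaky connection, a roaming-friendly terminal session
      mosh user@home.example.com -- tmux new -A -s main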

    • dmos62 1655 days ago
      I sympathize. My meditations on this have led me to think about waste as well.

      Why a computer at home, though? For someone who moves around a lot and doesn't invest in "a home", this would be bothersome. Not to mention it's more expensive, in terms of energy and money. I think third-party data centers are fine for self-hosting.

      • zelly 1655 days ago
        There's really no difference. Mainly I use a machine at home instead of a data center VM because that's just one less bill to pay. I have two GPUs in there which would be very expensive on public cloud.

        I guess one reason people might gravitate to home hosting is owning your own disks, the tinfoil hat perspective. You can encrypt volumes on public cloud as well, but it's still on someone else's machine. They could take a snapshot of the heap memory and know everything you are doing.

        • dmos62 1654 days ago
          I speculate that in the future the trust aspect of using third-party hardware might be solved technologically. And I agree that today the tinfoil sentiment is not baseless.
    • oarsinsync 1655 days ago
      I like to be able to view and edit my calendar when I’m offline. This is remarkably often, regardless of whether I’m in London (UK), New York (USA), or some other country entirely.
  • ricardbejarano 1656 days ago
    On my home server (refurbished ThinkPad X201 with a Core i5-520M, 8GB of memory, 1TB internal SSD sync'd nightly to an external 1TB HDD) I run a single-node Kubernetes cluster with the following stuff:

    * MinIO: for access to my storage over the S3 API, I use it with restic for device backups and to share files with friends and family

    * CoreDNS: DNS cache with blacklisted domains (like Pihole), gives DNS-over-TLS to the home network and to my phone when I'm outside

    * A backup of my S3-hosted sites, just in case (bejarano.io, blog.bejarano.io, mta-sts.bejarano.io and prefers-color-scheme.bejarano.io)

    * https://ideas.bejarano.io, a simple "pick-one-at-random" site for 20,000 startup ideas (https://news.ycombinator.com/item?id=21112345)

    * MediaWiki instance for systems administration stuff

    * An internal (only accessible from my home network) picture gallery for family pictures

    * TeamSpeak server

    * Cron jobs: dynamic DNS, updating the domain blacklist nightly, recursively checking my websites for broken links, keeping an eye on any new release of a bunch of software packages I use

    * Prometheus stack + a bunch of exporters for all the stuff above

    * IPsec/L2TP VPN for remote access to internal services (picture gallery and Prometheus)

    * And a bunch of internal Kubernetes stuff for monitoring and such

    I still have to figure out log aggregation (probably going to use fluentd), and I want to add some web-based automation framework like NodeRED or n8n.io for random stuff. I'd also like to host some password manager, but I still have to study that.

    I also plan on rewriting wormhol.org to support any S3 backend, so that I can bind its storage to MinIO.

    And finally, I'd like to move off single-disk storage and get a decent RAID solution to provide NFS for my cluster, as well as a couple more nodes to add redundancy and more compute.

    Edit: formatting.

    • whycombagator 1656 days ago
      > * CoreDNS: DNS cache with blacklisted domains (like Pihole), gives DNS-over-TLS to the home network and to my phone when I'm outside

      I would be _very_ interested in a write up/explanation of this set up

      • ricardbejarano 1655 days ago
        There you go!

        Essentially, this setup achieves 5 features I wanted my DNS to have:

        - Confidentiality: from my ISP; and from anyone listening to the air for plain-text DNS questions when I'm on public WiFi. Solution: DNS-over-TLS[1]

        - Integrity: of the answers I get. Solution: DNS-over-TLS authenticates the server

        - Privacy: from web trackers, ads, etc. Solution: domain name blacklist

        - Speed: as in, fast resolution times. Solution: caching and cache prefetching[2]

        - Observability: my previous DNS was Dnsmasq[3]; AFAIK Dnsmasq doesn't log requests, only gives a couple of stats[4], etc. Solution: a Prometheus endpoint

        CoreDNS ticks all of the above, and a couple others I found interesting to have.

        To set it up, I wrote my own (better) CoreDNS Docker image[7] to run on my Kubernetes cluster; mounted my Corefile[8] and my certificates as volumes, and exposed it via a Kubernetes Service.

        The Corefile[8] essentially sets up CoreDNS to:

        - Log all requests and errors

        - Forward DNS questions to Cloudflare's DNS-over-TLS servers

        - Cache questions for min(TTL, 24h), prefetching any domains requested more than 5 times over the last 10 minutes before they expire

        - If a domain resolves to more than one address, it automatically round-robins between them to distribute load

        - Serve Prometheus-style metrics on 9153/TCP, and provide readiness and liveness checks for Kubernetes

        - Load the /etc/hosts.blacklist hosts file (which has just short of 1M domains resolved to 0.0.0.0), reload it every hour, and skip reverse lookups for performance reasons

        - Listen on 53/UDP for regular plain-text DNS questions (LAN only), and on 853/TCP for DNS-over-TLS questions, which I have NAT'd so that I can use it when I'm outside

        The domain blacklist I generate nightly with a Kubernetes CronJob that runs a Bash script[9]. It essentially pulls and deduplicates the domains in the "safe to use" domain blacklists compiled by https://firebog.net/, as well as removing (whitelisting) a couple hosts at the end.

        That's pretty much it. The only downside to this setup is that CoreDNS takes just short of 400MiB of memory (I guess it keeps the resolve table in memory, but 400MiB!?) and lately I'm seeing some OOM restarts by Kubernetes, as it surpasses the 500MiB hard memory limit I have on it. A possible solution might be to keep the resolve table in Redis, which might take up less memory space, but I have yet to try that out.

        [1] Which I find MUCH superior to DNS-over-HTTPS. The latter is simply an L7 hack to speed up adoption; the correct technical solution is DoT, and operating systems should already support it by now (AFAIK, the only OS that supports DoT natively is Android 9+).

        [2] It was when I discovered CoreDNS' cache prefetching that I convinced myself to switch to CoreDNS.

        [3] http://www.thekelleys.org.uk/dnsmasq/doc.html

        [4] It gives you very few stats. I also had to write my own Prometheus exporter[5] because Google's[6] had a fatal flaw and no one responded to the issue. In fact, they closed the Issues tab on GitHub a couple months after my request, so fuck you, Google!

        [5] https://github.com/ricardbejarano/dnsmasq_exporter

        [6] https://github.com/google/dnsmasq_exporter (as you can see the Issues tab is no longer present)

        [7] https://github.com/ricardbejarano/coredns, less bloat than the official image, runs as a non-root user, auditable build pipeline, compiled from source during build time. These are all nice to have, and they let me comply with my non-root PodSecurityPolicy. I also like to run my own images just so that I know what's under the hood.

        [8]

          local:65535 {
            ready
            health
          }
        
          (global) {
            log
            errors
        
            cache 86400 {
              prefetch 5 10m 10%
            }
            dnssec
            loadbalance
        
            prometheus :9153
          }
        
          (cloudflare) {
            forward . tls://1.1.1.1 tls://1.0.0.1 {
              tls_servername cloudflare-dns.com
            }
          }
        
          (blacklist) {
            hosts /etc/hosts.blacklist {
              reload 3600s
              no_reverse
              fallthrough
            }
          }
        
          .:53 {
            import global
            import blacklist
            import cloudflare
          }
        
          tls://.:853 {
            import global
            import blacklist
            import cloudflare
            tls /etc/tls/fullchain.pem /etc/tls/privkey.pem
          }
        
        [9]

          #!/bin/bash
        
          HOSTS_FILE="/tmp/hosts.blacklist"
          HOSTS_FILES="$HOSTS_FILE.d"
        
          mkdir -p "$HOSTS_FILES"
          download() {
            echo "download($1)"
            curl \
              --location --max-redirs 3 \
              --max-time 20 --retry 3 --retry-delay 0 --retry-max-time 60 \
              "$1" > "$(mktemp "$HOSTS_FILES"/XXXXXX)"
          }
        
          # https://firebog.net/
          ## suspicious domains
          download "https://hosts-file.net/grm.txt"
          download "https://reddestdream.github.io/Projects/MinimalHosts/etc/MinimalHostsBlocker/minimalhosts"
          download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/KADhosts/hosts"
          download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.Spam/hosts"
          download "https://v.firebog.net/hosts/static/w3kbl.txt"
          ## advertising domains
          download "https://adaway.org/hosts.txt"
          download "https://v.firebog.net/hosts/AdguardDNS.txt"
          download "https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt"
          download "https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt"
          download "https://hosts-file.net/ad_servers.txt"
          download "https://v.firebog.net/hosts/Easylist.txt"
          download "https://pgl.yoyo.org/adservers/serverlist.php?hostformat=hosts;showintro=0"
          download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/UncheckyAds/hosts"
          download "https://www.squidblacklist.org/downloads/dg-ads.acl"
          ## tracking & telemetry domains
          download "https://v.firebog.net/hosts/Easyprivacy.txt"
          download "https://v.firebog.net/hosts/Prigent-Ads.txt"
          download "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-blocklist.txt"
          download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.2o7Net/hosts"
          download "https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/hosts/spy.txt"
          ## malicious domains
          download "https://s3.amazonaws.com/lists.disconnect.me/simple_malvertising.txt"
          download "https://mirror1.malwaredomains.com/files/justdomains"
          download "https://hosts-file.net/exp.txt"
          download "https://hosts-file.net/emd.txt"
          download "https://hosts-file.net/psh.txt"
          download "https://mirror.cedia.org.ec/malwaredomains/immortal_domains.txt"
          download "https://www.malwaredomainlist.com/hostslist/hosts.txt"
          download "https://bitbucket.org/ethanr/dns-blacklists/raw/8575c9f96e5b4a1308f2f12394abd86d0927a4a0/bad_lists/Mandiant_APT1_Report_Appendix_D.txt"
          download "https://v.firebog.net/hosts/Prigent-Malware.txt"
          download "https://v.firebog.net/hosts/Prigent-Phishing.txt"
          download "https://phishing.army/download/phishing_army_blocklist_extended.txt"
          download "https://gitlab.com/quidsup/notrack-blocklists/raw/master/notrack-malware.txt"
          download "https://ransomwaretracker.abuse.ch/downloads/RW_DOMBL.txt"
          download "https://ransomwaretracker.abuse.ch/downloads/CW_C2_DOMBL.txt"
          download "https://ransomwaretracker.abuse.ch/downloads/LY_C2_DOMBL.txt"
          download "https://ransomwaretracker.abuse.ch/downloads/TC_C2_DOMBL.txt"
          download "https://ransomwaretracker.abuse.ch/downloads/TL_C2_DOMBL.txt"
          download "https://zeustracker.abuse.ch/blocklist.php?download=domainblocklist"
          download "https://v.firebog.net/hosts/Shalla-mal.txt"
          download "https://raw.githubusercontent.com/StevenBlack/hosts/master/data/add.Risk/hosts"
          download "https://www.squidblacklist.org/downloads/dg-malicious.acl"
        
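          # merge everything, strip existing IPs/comments/whitespace, drop IPv6 and empty lines,
          # prefix each host with 0.0.0.0, dedupe, then whitelist a couple of hosts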
          cat "$HOSTS_FILES"/* | \
          sed \
            -e 's/0.0.0.0//g' \
            -e 's/127.0.0.1//g' \
            -e '/255.255.255.255/d' \
            -e '/::/d' \
            -e '/#/d' \
            -e 's/ //g' \
            -e 's/  //g' \
            -e '/^$/d' \
            -e 's/^/0.0.0.0 /g' | \
          awk '!a[$0]++' | \
          sed \
            -e '/gamovideo.com/d' \
            -e '/openload.co/d' > "$HOSTS_FILE"
        
          rm -rf "$HOSTS_FILES"
        • tptacek 1655 days ago
          DoH isn't an "L7 hack to speed up adoption". It's a DNS privacy mechanism that can't easily be disabled by network administrators, unlike DoT. You may have lots of good reasons to want to disable DNS privacy on your own network, and by all means use DoT to do that. But DoH is superior for end-users.
          • ricardbejarano 1655 days ago

              s/mechanism/hack/g
            
            DoH is a hack: the use of a suboptimal protocol to achieve the same goal.

            DoT is a protocol explicitly designed for its purpose.

            If brickhead sysadmins block DoT it's their problem, and if you have to work around that then it is, in fact, a hack (or a "workaround", doesn't matter).

            It's not that DoT or DoH is superior to the other; it's that DoT is "DNS in TLS", and DoH is "DNS in HTTP in TLS". Doesn't that raise a red flag for you?

            • tptacek 1655 days ago
              You don't seem to follow. Millions of end-users get access to the Internet through major ISPs that monitor, log, monetize, and manipulate DNS. I'm on AT&T, and they absolutely do this. The purpose of DoH is to add a privacy mechanism that AT&T, or coffee shop wireless networks, or airplane wireless, or whatever, can't trivially disable. That's why it exists and why it's tunneled through HTTPS. Meanwhile, the reason network operators have a meme now about how much better DoT is comes down to the fact that they have middleboxes on their networks that passively monitor DNS, and they themselves (and, more importantly, the vendors that sell those boxes) want to hold back DNS privacy --- at least on their networks --- to keep those boxes working. They prefer a DNS privacy mechanism that has a kill switch that the network controls, not the user.

              The idea that end-users should give a shit about any of this "L7" "purpose built" "control plane" "layering violation" nonsense, and opt themselves into a version of DNS privacy that their network operators can turn off for them without end-user consent, is lunacy; bamboozlement.

              • ricardbejarano 1655 days ago
                Nothing keeps an end-user from rescinding their ISP contract as soon as the ISP even slightly crosses the line by filtering a single packet.

                I agree that end-users shouldn't give a dime about DNS privacy, it should be private by default, but it is up to us to promote the correct protocol over the "hacky" one.

                If DNS-over-HTTPS is superior, then why don't we shove everything down 443/TCP? Or better yet, why don't we get rid of TCP altogether and send everything over a port-less, encrypted, dynamically-reliable transport protocol? Surely middlemen couldn't distinguish between traffic.

                Ports are there for a reason. The fact that they are used with anti-end-user intent doesn't make them (or any protocol that runs on them) inherently bad. Yet one thing that makes a protocol better than another, given a set of requirements, is efficiency.

                By the way, if I were to switch my DoT server from 853/TCP to 443/TCP, the port wouldn't be a problem anymore. Per your standards, now DoT would be better than DoH, wouldn't it? Same results, smaller payloads.

                • tptacek 1655 days ago
                  Why would any end-user care about what the "correct" protocol was, when the choice was between a privacy protocol with an ISP-owned kill switch and one without?
                  • ricardbejarano 1655 days ago
                    You haven't answered my question. Your supposed "kill switch" wouldn't exist in that scenario.

                    I gave you an apples-to-apples protocol comparison. If you tell me there's a single bit that lets you distinguish between HTTPS traffic and DoT traffic when both run on 443/TCP, then I'll buy your "kill switch" argument.

                    And even if you do, nothing keeps me from saying farewell to my ISP as soon as they press that switch.

                • juliusmusseau 1648 days ago
                  My ssh server runs on port 443. That way I can connect to it when I'm on the Tim Hortons WIFI.
        • atomi 1655 days ago
          I'm using Unbound running on a dd-wrt router for a lot of this same functionality. There are some things I don't do only because they aren't a priority for me. But I certainly got DNS over TLS and DNS blocking going.
          • ricardbejarano 1655 days ago
            Didn't know unbound had DoT.

            I learnt about CoreDNS because Kubernetes uses it for service discovery, and once I read about its "chaining plugins" philosophy I wanted to try it out.

            And it was so refreshing coming from Dnsmasq that I fell in love with it.

        • whycombagator 1655 days ago
          Thank you for going into such depth! This is really helpful
    • devthane 1656 days ago
      You might take a look at https://github.com/grafana/loki if you haven’t seen it yet for logs. It’s still really new, but it’s been working great for me.
      • ricardbejarano 1656 days ago
        Thanks for the heads up, I'll check it out!
    • bluegreyred 1655 days ago
      nice to see somebody using a thinkpad as a homeserver.

      I remember comparing low-power home servers, consumer NAS boxes and a refurb ThinkPad, and the latter won when considering price/performance and idle power consumption (<5W). You also get a built-in screen & keyboard for debugging and an efficient DC-UPS if you're brave enough to leave the batteries in. That's of course assuming you don't need multiple terabytes of storage or run programs that load the CPU 24/7, which I don't. These days an rPi 4 would probably suffice for my needs, but I still think the refurb ThinkPad is a smart idea.

      • ricardbejarano 1655 days ago
        I don't overload the CPU and my storage requirements are low. 95% of my used storage is stuff I wouldn't care if it got lost, but just nice to have around. I only have around 2GB of data I don't want to lose.

        I do leave the batteries in. Is it dangerous? I read some time ago that it is not dangerous, but that the capacity of the battery drops significantly; I don't care about capacity, and safe shutdowns are important to me.

        In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind selling as I don't use it), but I had to find a solution for the noise. And power consumption worked out to around 18EUR at my local EUR/kWh rate.

        Cramming what ran on 12 cores and 48GiB of RAM onto a 2-core, 4GiB machine (I only upgraded the memory 2 months ago) was a real challenge.

        The ThinkPad cost me 90EUR (IBM refurbished); we bought two of them, and the other one burnt out. The recent upgrades (8GiB kit + Samsung Evo 1TB) cost me around 150EUR. Overall a really nice value, both in compute per EUR spent and in compute per Wh spent. Really happy with it, I just feel it is not very reliable as it is old.

        • bluegreyred 1654 days ago
          >I do leave the batteries in. Is it dangerous? I read some time ago that it is not dangerous, but the capacity of the battery drops significantly, I don't care about capacity, and safe shutdowns are important to me.

          It's not necessarily dangerous, but lithium batteries have a chance of failing and in very rare cases even exploding, making them a potential fire hazard. I'm not an expert; maybe someone else can expand on this. If I were to run an old laptop of unknown provenance with a LiIon battery 24/7 completely unattended, I'd at least want to make sure that it is on a non-flammable surface without any flammable items nearby.

          >In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind selling as I don't use it), but I had to find a solution for the noise. And power consumption was at around 18EUR for my EUR/kWh.

          Yes, I am surprised how many people leave power consumption out of the equation. These days you can rent a decent VPS for the power cost of an old refurb server alone.

          • ricardbejarano 1654 days ago
            > It's not necessarily dangerous but lithium batteries have a chance to fail and in very rare cases even explode, making them a potential fire hazard.

            Well, I'm removing the battery and the pseudo-UPS logic right now. The battery looks fine, but I'm not taking any risks, since it's on top of the DL380 but under a wooden TV stand.

            Thanks for the heads up! You might have prevented a fire.

        • Havoc 1655 days ago
          >I do leave the batteries in. Is it dangerous?

          Should be fine if they're not swollen / getting very hot

      • Havoc 1655 days ago
        Using your previous-generation gear as a server when upgrading works very well too. Even a 5-year-old i5 or whatever has plenty left for server duty.
    • bpye 1656 days ago
      Hey - I'm not the only one using CoreDNS like this! I'm just abusing the hosts plugin - do you have something more elegant?
      • ricardbejarano 1655 days ago
        I'm throwing it the hosts blacklist as a file. For performance reasons I turn off reverse lookups and limit reloading to once every hour:

            (blacklist) {
              hosts /etc/hosts.blacklist {
                reload 3600s
                no_reverse
                fallthrough
              }
            }
        
            .:53 {
              import blacklist
        
              ... (more config)
            }
    • jamieweb 1656 days ago
      Good job for using MTA-STS. :)
    • mpnordland 1656 days ago
      I've been wanting to do a Kubernetes setup with my home server, but most tutorials are aimed at multi-node clusters. Do you have any links on how to set up such a system?
      • gravypod 1656 days ago
        If you follow a multi-node cluster tutorial, all you should need to do is remove the master taint from the node, and normal pods will spawn on it.
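
        Something like this, depending on the Kubernetes version (older releases taint the node "master", newer ones "control-plane"):

          # let normal workloads schedule on the single control-plane node
          kubectl taint nodes --all node-role.kubernetes.io/master- 2>/dev/null || true
          kubectl taint nodes --all node-role.kubernetes.io/control-plane- 2>/dev/null || true
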
        • ricardbejarano 1656 days ago
          That's essentially it, yes.

          FYI: the control plane takes about 150m (milli-cpu) and ~1.5GiB of memory in a host with my specs.

      • lbotos 1656 days ago
        I'm trialling k3s right now. https://k3s.io/
        • ricardbejarano 1655 days ago
          I've seen it, never tried it.

          The thing is, I don't use Kubernetes for convenience or because I need it; I use it to learn it.

          I was just fine with Docker Swarm before switching, but I wanted to learn Kubernetes as a valuable skill, and I know no better way of learning something than using it every day.

          And the thing about Kubernetes distros is that they usually all apply a new layer of "turning Kubernetes' complexity into a turn-key process", and I don't want that.

          If you know the ins and outs of K8s, sure, use any distro you like, but if you want to learn something, better learn the fundamentals first. It's like learning Linux internals instead of learning how Ubuntu works: one applies to a single distro, and the other applies to every distro ever.

          • lbotos 1655 days ago
            I was like you, I knew about the "concept" of a pod, and nothing more.

            k3s is not very far from the fundamentals. It's really just "one binary" instead of many for the space savings/ simple deployment.

            That said, consider Kubernetes in Action by Manning. I'm about 75% done now, was a great help, and I'm continuing with k3s after doing it.

            • ricardbejarano 1655 days ago
              At this point I consider myself pretty knowledgeable on Kubernetes.

              I bought Kubernetes Up & Running a year ago; I was disappointed to see it is a very high-level overview that doesn't get into details.

              I skimmed over Kubernetes in Action a couple months ago. Nothing really caught my eye there either.

              The last one I read was Kubernetes Security by Liz Rice. Either there's not that much to securing Kubernetes or that book is very introductory too.

              The only parts of K8s I don't know a lot about are storage (haven't got past the NFS driver yet), CRDs and distributions like OpenShift. But then again, I'm lacking storage expertise outside of Kubernetes too.

    • magicfractal 1656 days ago
      Could you share a bit about your picture gallery?
      • ricardbejarano 1656 days ago
        Sure! Here's the source code: https://github.com/ricardbejarano/pyctures

        I could set up a demo if you want to.

        It's a cheap Flask app that scans a given "library" directory for "album" subdirectories, which contain the pictures you want to display.

        It has a big issue with image size (16 images per page, my phone takes 5MB pictures, 80MB per page is HUUUGE). Thumbnailing would be great. I'm open for PRs ;)!

        If anyone knows about a better alternative... I set this up when we got back from one vacation for my relatives to easily see the pictures (without social media).

    • thequailman 1656 days ago
      How are you doing multi tenant MinIO?
      • ricardbejarano 1656 days ago
        I don't :)

        Right now I have public (read-only) and private buckets only, and I'm the only one who writes into any of them.

        Public buckets contain files I didn't even create myself and that friends might find useful (Windows ISO, movies, VirtualBox VMs...). Privates have, well, private data, and can only be accessed using my admin account's credentials.

        IIRC MinIO has access control through users, but I'm still very new to MinIO to the point where I discover new features every time I use it.

        If I were to give someone else their own buckets I'd probably run a second instance to keep things separate, though. I'm even considering running another one myself to keep private buckets only accessible from my home network... (right now the entire instance is reachable from WAN, regardless of whether they are public or not).

  • itm 1656 days ago
    I eat my own food:

    https://github.com/epoupon/lms for music

    https://github.com/epoupon/fileshelter to share files

    Everything is packaged on Debian buster (amd64 and armhf) and runs behind a reverse proxy.

    • djsumdog 1655 days ago
      Huh, interesting. I usually have full copies of my music collection where I need them (512gb microsd in my phone and on the work laptop) but it would be nice to just have a web interface if I'm at someone's house or so they can play off their phone. I think I was using subsonic until they changed all their licensing.

      One UI question: is there a reason you left off volume controls? That's something that still annoys me about Bandcamp, and I had submitted a patch to Mastodon to create a volume control for their video component.

      • itm 1655 days ago
        This is a good question. On mobile, I just use the phone buttons to control volume. On desktop, I just use the media keys, since I'm not listening to anything else. But since you are not the first to ask, I can add a volume slider on large devices.
      • z3t4 1655 days ago
        About volume control: when there is a volume control on the TV, the TV box, and the receiver, it feels a bit unnecessary to also have one in the software.
    • EvanAnderson 1655 days ago
      Both of those projects look great! I could definitely see using both of them. I've been thinking about writing something similar to lms for a while now.
  • kstenerud 1656 days ago
    Everything I run is in a deterministic, rebuildable LXC container or KVM virtual machine.

    I have around 10 desktops that run in containers in various places for various common tasks I do. Each one has a backed up homedir, and then I have a ZFS-backed fileserver for centralized data. I connect to them using chrome remote desktop or x2go. I've had my work machine die one time too many, so with these scripts I can go from a blank work machine to exactly where I left off before the old one died, in a little over an hour. None of my files are stuck to a particular machine, so I can run on a home server, and then when I need to travel, transfer the desktop to a laptop, then transfer it back again when I get home. Takes about 10 minutes to transfer it.

    https://github.com/kstenerud/virtual-builders

    I also run most of my server apps this way:

    https://github.com/kstenerud/virtual-builders/tree/master/ma...

  • letstrynvm 1656 days ago
    I have a cheap dedicated server running outgoing Postfix mail forwarding with SASL auth, NSD for the domains, and a few web services over TLS. Git server via gitolite + git-daemon. Mailman.

    Incoming mail points directly to an RPi at home on DSL... Postfix + Dovecot IMAP. It's externally accessible; my dedicated server does the dynamic DNS to point to the RPi, and the domain MX points to that. Outgoing mail forwards through the dedicated server, which has an IP with good reputation and DKIM.

    This gets me a nice result that my current and historical email is delivered directly to, and stays at, home, and my outgoing mail is still universally accepted. There's no dependency on google or github. There's no virtualization, no docker, no containers, just Linux on the server and on the rpi to keep up to date. It uses OS packages for everything so it stays up to date with security updates.
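
    The relay leg on the home box is only a handful of Postfix settings, roughly (hostname and credentials are placeholders):

      # relay all outgoing mail through the dedicated server, authenticated and over TLS
      postconf -e 'relayhost = [relay.example.com]:587'
      postconf -e 'smtp_sasl_auth_enable = yes'
      postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
      postconf -e 'smtp_sasl_security_options = noanonymous'
      postconf -e 'smtp_tls_security_level = encrypt'
      echo '[relay.example.com]:587 user:password' > /etc/postfix/sasl_passwd
      postmap /etc/postfix/sasl_passwd && systemctl reload postfix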

  • rolleiflex 1656 days ago
    I’m pretty vanilla compared to most people here: I just host my own email, and I have a Synology box that provides a few utilities like an Evernote replacement.

    I also host Aether P2P (https://getaether.net) on a Raspberry Pi-like device, so it helps the P2P network. But I’m biased on that last one, it’s my own software.

  • thegeekbin 1655 days ago
    I changed my hardware around recently. I used to have a 5U colo that I've now downsized for financial reasons, migrating everything onto one box called Poof. On Poof I'm running:

        - matrix home server
        - xmpp server
        - websites for wife and I (Cloudlinux, Plesk, Imunify360)
        - nextcloud
        - jellyfin + jackett + sonarr + radarr
        - rutorrent
        - CDN origin server (bunnycdn pulls files from this)
        - znc bouncer
        - freeipa server
        - Portainer with pihole, Prometheus, grafana and some microservices on them
        - Gitea server
        - spare web server I use as staging environment
    
    All of this is behind a firewall. I've been fortunate enough to have a /27 assigned to me, so there are more than enough IP addresses available; I'm using all but about 5 or 6 of them, but plan to change that soon. I'm going to assign dedicated IPs to every site I host (3 total) and put my XMPP server on its own VM with its own IP instead of sharing one with Matrix.

    I blog about this stuff if anyone’s interested: https://thegeekbin.com/

  • Youden 1655 days ago
    I used to have things in a colo, but now that I have fiber at home, just about everything is on a single giant machine, complete with a graphics card for a gaming VM:

      VM management: libvirt (used to host gaming PC and financial applications)
      Container management: Docker (used to be k8s but gave up)
      Photo gallery: Koken, Piwigo, Lychee
      Media acquisition: Radarr, Sonarr, NZBGet, NZBHydra
      Media access: Plex
      Monitoring: InfluxDB, Grafana, cAdvisor, Piwik, SmartD, SmokePing, Prometheus
      Remote data access: Nextcloud
      Local data access: Samba, NFS
      Data sync: Syncthing
      WireGuard
      Unifi server
      IRC: irssi, WeeChat, Glowing Bear, Sopel (runs a few bots)
      Finance: beancount-import, fava
      Chat: Riot, Synapse (both Matrix)
      Databases: Postgres, MariaDB, Redis
      Speed test: IPerf3
    
    I also have a seedbox for high-bandwidth applications.
  • h1d 1656 days ago
    Just a piece of advice, but I suggest you host publicly facing services and privately hosted services on different instances.

    You don't want a less-tested web app to expose some security hole that lets someone start snooping on your traffic toward Bitwarden after SSL termination.

    If you don't want an extra box at home, you can always get a $5/mo cloud instance for the public stuff, where you don't have to worry about a DDoS spiking your CPU and raising your electricity bill, or choking your home network.

  • DominoTree 1656 days ago
    I self-host a fairly big Plex and several personal websites along with a NextCloud instance to sync calendars/reminders/etc across devices. Pretty much everything forward-facing is behind CloudFlare.

    On the front end I have two 1Gbit circuits (AT&T and Google) going into an OPNSense instance doing load-balancing and IPS, running on a Dell R320 with a 12-thread Xeon and 24GB of RAM.

    Services are hosted on a Dell R520 with 48GB RAM and two 12-thread Xeons running Ubuntu and an up-to-date ZFS on Linux build.

    Media storage handled by two Dell PowerVault 1200 SAS arrays.

    Back-end is handled by a Cisco 5548UP and my whole apartment is plumbed for 10Gbit.

    • Havoc 1655 days ago
      >On the front end I have two 1Gbit circuits (AT&T and Google)

      Holy hell. How did that come about?

  • menssen 1655 days ago
    Nothing. I self-host nothing. My entire home networking infrastructure consists of a more powerful WiFi router than the one built into the modem that the cable company provides so that it reaches to the back of my apartment. I pay money for GitHub, Dropbox, iCloud, Apple Music, Netflix, Hulu, HBO, Amazon Prime, a VPN to spoof my location occasionally, and Google Apps (or I would, if I were not grandfathered into the free tier). When I want to spin up a personal project, I do it on Heroku.

    I live in a stable first-world democracy. Or, since it seems to be getting less stable recently, maybe a better way to put it is: I participate in a stable global economy. If "the cloud" catastrophically fails to the point where I lose all of the above without warning, I will likely have bigger problems than never being able to watch a favorite tv show again.

    I wonder if this exposes two kinds of people: those who value mobility, and are more comfortable limiting the things that are important to them to a laptop and a bug-out bag, and those who value stability, and are inclined to build self-sufficient infrastructure in their castles.

    • elagost 1655 days ago
      There's a third kind of person - one who doesn't want their personal data beholden to a bunch of faceless for-profit companies who have proven they care less about security and privacy than they do about money.

      I don't self-host a lot of services (and the ones I do could go away tomorrow without hurting me much), but I only have one cloud resource - email. It kind of has to be that way for various reasons; I'd self-host it if I could reasonably do so. I also think I value my $75/mo more than I value an endless stream of entertainment.

      (edit: just wanted to say, thanks for posting this. It is a valuable discussion point.)

      • CarelessExpert 1655 days ago
        Not to mention a fourth kind of person - one who just wants services that work better than what the cloud offers.

        By definition, self-hosting means the service is under my control, doing what I need, customized for my use cases. And because I use only open source stacks, I can (and have) even modify the code to customize even further.

        And that's ignoring the fact that free, self-hosted options can often provide features that third-party services cannot for legal, technical, or support reasons.

        For example, my TT-RSS feed setup uses a scraper to pull full article content right into the feed. A service would probably land in legal trouble if they did this. And while it works incredibly well, like, 90% of the time (thank you Henry Wang, author of mercury-parser-api!), if it was a service, that 10% could result in thousands of support emails or an exodus of subscribers.

        • acolumb 1655 days ago
          Could you elaborate on how you got TTRSS to scrape?
          • CarelessExpert 1653 days ago
            I installed the Mercury parser plugin:

            https://github.com/HenryQW/mercury_fulltext

            The directions there are pretty clear. You've gotta set up the mercury parser API service (I used docker) and then enable the plugin for the feeds you want to apply it to.

            Alternatively you could use the Readability plugin that ships with tt-rss, but I have no idea how effective it is as I never tried it.

            Finally, you could stand up the RSS full text proxy:

            https://github.com/Kombustor/rss-fulltext-proxy

            That service stands between your RSS feed reader of choice and the RSS feed supplier and does the scraping and embedding.

      • em-bee 1655 days ago
        yup, i keep my data at home for that reason. except email. but i don't host any services, just plain ssh access, and recently i started using syncthing to share some files among my devices.
    • mantoto 1655 days ago
      I host my own stuff for fun and exercise
  • olalonde 1656 days ago
    I recently installed docker-simple-mail-forwarder[0] to use a custom domain email address with Gmail. It's a one-line install.

    [0] https://github.com/huan/docker-simple-mail-forwarder

  • folkhack 1656 days ago
    I'll bite because I love threads like this! I run an Intel NUC with an i5, 8G RAM, an OS SSD, and an external USB3 4TB RAID attached. OS is Debian 9. I've always run a "general utility" Debian server at home due to projects and SSH tunneling (yeah yeah, I should set up a proper VPN, I know ha).

    * It's a target for my rsync backups for all my client systems (most critical use); Docker TIG stack (Telegraf, InfluxDB, Grafana) which monitors my rackmount APC UPS, my Ubiquiti network hardware, Docker, and just general system stats; Docker Plex; Docker Transmission w/VPN; Docker Unifi; A custom network monitor I built that just pings/netcats certain internal and external hosts (not used too seriously but it comes in handy); and finally a neglected Minecraft server.
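
    The backup-target part is just plain rsync over SSH from each client on a cron schedule, something like (paths and hostnames are placeholders):

      # nightly push of a client's home directory to the NUC
      rsync -az --delete -e ssh /home/me/ backup@nuc.local:/mnt/raid/backups/laptop/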

    I went for low power consumption (and fanless) since it's an always-on device and power comes at a premium here. I highly suggest the NUC, as it's a highly capable device with plenty of power if upgraded a bit!

    • ekianjo 1656 days ago
      I guess this is a recent i5? In case you want a low-consumption alternative, an older Celeron-based NUC is also a very capable machine (much better than a Raspberry Pi 4 for about the same price used nowadays) and idles at a few dozen watts.
      • folkhack 1655 days ago
        Not sure how recent, but I picked it up about a year ago. I looked into the Celeron ones, and although they're impressive, I decided to go with something a bit beefier due to how many containers I planned to run/experiment with =)
  • kissgyorgy 1655 days ago
    Wallabag! It changed my reading habits: https://wallabag.org/en
    • mackrevinack 1655 days ago
      ive been eyeing that up for quite a while now. being able to run it on an eink reader seems like a great idea. i set up the email-an-article-to-your-kindle thing a few years ago but it's still too much effort.
  • yankcrime 1655 days ago
    In colo (a former nuclear bunker, no less!) I have a small OpenStack 'cloud' deployment cobbled together from spare hardware in partnership with a friend of mine. I wrote a bit about it here if anyone's interested:

    https://dischord.org/2019/07/23/inside-the-sausage-factory/

    At home I have:

      A Synology DS412+ with 4 x 4TB drives
      An ancient HP Microserver N36L with 16GB RAM and 4 x 4TB drives running FreeBSD
      Ubiquiti UniFi SG + CloudKey + AP
      An OG Pi running PiHole
    
    The DS412+ is my main network storage device, with various things backed up to the Microserver. Aside from the OEM services it also runs Minio (I use this for local backups from Arq), nzbget, and Syncthing in Docker containers.
  • Mister_Snuggles 1656 days ago
    At home I have:

    FreeBSD server running various things:

    * Home Assistant, Node-RED, and some other home automation utilities running in a FreeBSD Jail.

    * UniFi controller in a Debian VM.

    * Pi-Hole in a CentOS VM.

    * StrongSwan in a FreeBSD VM.

    * ElasticSearch, Kibana, Logstash, and Grafana running in a Debian VM.

    * PostgreSQL on bare metal.

    * Nginx on bare metal, this acts as a front-end to all of my applications.

    I also have:

    * Blue Iris on a dedicated Windows box. This was a refurbished business desktop and works well, but my needs are starting to outgrow it.

    * A QNAP NAS for general storage needs.

    Future plans are always interesting, so in that vein here are my future plans:

    Short term:

    * Move my home automation stuff out of the FreeBSD Jail into a Linux VM. The entire Home Assistant ecosystem is fairly Linux-centric and even though it works on FreeBSD, it's more pain than I'd really like. Managing VMs is also somewhat easier than managing Jails, though I'm sure part of this is that I'm using ezjail instead of something more modern like iocage.

    * Get Mayan-EDMS up and running. I hate paper files, this will be a good way to wrangle all of them. I've used it before, but didn't get too deep into it. This time I'm going all-in.

    Medium term:

    * Replace my older cameras with newer models.

    * Possibly upgrade my Blue Iris machine to a more powerful refurbished one.

    * Create a 'container VM', which will basically be a Linux VM used for me to learn about containers.

    Long term:

    * Replace my FreeBSD server with new hardware running a proper hypervisor (e.g., Proxmox, VMware ESXi). This plan is nebulous as what I have meets my needs, this is more about learning new tools and ways of doing things.

  • boredpenguin 1656 days ago
    Currently not much. On the home server:

    • Apache: hosting a few websites and a personal (private) wiki.

    • Transmission: well, as an always-on torrent client. Usually I add a torrent here, wait for it to download and then transfer it via SFTP to my laptop.

    • Gitea: mostly to mirror third party repos I need or find useful.

    • Wireguard: as a VPN server for all my devices and VPS, mostly so I don't need to expose SSH to the internet. Was really easy to set up and it's been painless so far.
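
    A minimal single-peer sketch of that WireGuard setup (keys, addresses and port are placeholders):

      # generate the server keypair
      wg genkey | tee server.key | wg pubkey > server.pub
      # /etc/wireguard/wg0.conf:
      #   [Interface]
      #   Address = 10.0.0.1/24
      #   ListenPort = 51820
      #   PrivateKey = <contents of server.key>
      #   [Peer]
      #   PublicKey = <client public key>
      #   AllowedIPs = 10.0.0.2/32
      wg-quick up wg0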

  • 0x0aff374668 1656 days ago
    Why are so many folks here running media servers? Are you really streaming your own video / audio libraries, or is there something else they're useful for? I'd be rather shocked to learn people still store digital media locally.
    • Santosh83 1656 days ago
      If you don't have it locally, you don't have it at all. Repeatedly streaming video & audio over the Internet is not only wasteful of bandwidth but also prone to connection quality issues and of course arbitrary takedowns by the content provider. If you purchase media you should have the right to remove DRM and have at least one local copy and one backup copy for your personal use.
    • Jaruzel 1656 days ago
      I do. I have over 1,000 CDs losslessly ripped on my media server. I stream these around the house into good quality amps and speakers, and also down convert them to mp3 for portable devices. I don't like being held hostage by streaming services who one day may suddenly remove all the music that I like just because not enough people are listening to it. That'll never happen to my CDs - they are mine and no-one, bar a burglar, can take them away from me.

      I also used to have all my DVDs ripped onto my media server, but I never really watched any of them, so now they are just gathering digital dust on some offline disks.

    • mackrevinack 1655 days ago
      im making moves to go back to storing music myself. I used to tell myself that I couldn't afford all the music I listen to, but now, looking at the stupid amount of money ive given spotify over the years, I'm thinking that maybe I could.

      the other thing that is bothering me is that songs keep disappearing from my playlists every once in a while

      people keeping their own movie library makes perfect sense, as there are still no services today, that I know of, that have access to all the movies a certain person might want, or if they do, the service is bastardised by some region lock

    • AYBABTME 1655 days ago
      I use it for my boat (offgrid) & for travels, so we're not stuck behind regional content filters - like being mid-way through a series and arriving in a new country where it's not on Netflix.
      • 0x0aff374668 1654 days ago
        This is the coolest response. :)

        (You didn't by any chance sail around Cape horn in 2016? I met this really cool older couple in Central America who had been living at sea for 17 years.)

        Reading all of the replies I realize that sometime between 2007 and 2012 I just gave up entirely on storing media locally. I don't watch movies (e.g. no cable or netflix), but I've been using spotify for a decade maybe? One response makes a good point: it is a waste of overall bandwidth to stream content.

    • Youden 1655 days ago
      I want a single platform that allows me to conveniently access all the media I want to consume, now and forever. There's no other way to have one; it's that simple.

      Sure I could buy DRM-laden stuff from some online store but there's no guarantee I can access it forever. I could buy a bunch of Blu-Rays or DVDs and stick them on a shelf but that's not convenient. I could pay for a subscription service but not a single one has anything close to everything I want to watch.

    • cannonedhamster 1655 days ago
      Plex automatically backs up my entire family's photos to my RAIDed server, with a Backblaze off-site backup that's been tested. It's also useful for audio books, and for having access to my media when I travel abroad and things like Hulu, Amazon Prime, and Netflix don't really have much available. I also have a ton of old DVDs saved on there that aren't available to watch online or on television anymore.
  • stiray 1655 days ago
    I am a bit old school but this fills all my needs.

    - httpd

    - nextcloud (mostly for android syncing; for normal file operations I prefer sftp). Nextcloud is great, but the whole js/html/browser thing is clumsy.

    - roundcube (again mostly imap, but just to have an alternative when my phone isn't available - I haven't used it for ages)

    - postfix

    - dovecot

    - squid on a separate fib with a paid vpn (MITMing all the traffic, removing all internet "junk" from my connections; all my devices, including android, use it over an ssh tunnel).

    - transmission, donating my bandwidth to some OSS projects

    - gitolite, all my code goes there

    I think this is it.

    Everything is running on a mITX board with 16GB of RAM, 3x 3TB Toshiba HDDs in zraid and an additional 10TB Hitachi disk. FreeBSD. 33 watts.

  • jasonkester 1656 days ago
    S3stat, Twiddla, Unwaffle, the Expat site, and a dozen other old projects all still run on a single box in a Colo.

    It costs about $800/month for the half cage and all the hardware in it, when you amortise it out. And there's plenty of performance overhead for when one project gets a lot of attention or I want to add something new.

    Pretty much the only thing I use cloud computing for is the nightly job for S3stat, because it fits the workload pattern that EC2 was designed for. Namely, it needs to run 70-odd hours of computing every day, and gets 3 hours to do it in.

    For SaaS sized web stuff, self hosting still makes the most sense.

  • kemenaran 1655 days ago
    I like self-hosting, but I want it to work without having to do sysadmin work. Especially the upgrades: most hosting providers have one-click tools to install self-hosted instances of something, but very few have working upgrade scripts to keep up with new versions.

    So I set up Yunohost [0] on a small box, and now I install self-hosted services whenever I need them. Installing a new service is a breeze - but more importantly, upgrading them is a breeze too.

    For now I self host Mattermost, Nextcloud, Transmission.

    [0] https://yunohost.org

    • simplehuman 1655 days ago
      For a paid alternative, look into Cloudron or Unraid.
  • hendry 1656 days ago
    FreeNAS + voidlinux nuc running grafana + prometheus.

    Tbh I run hot and cold about self-hosting, since after work I really, really want to be able to relax at home.

    Not wonder why the hell my NUC hasn't come up after a reboot. Or why it is so hard to increase the disk space on my FreeNAS: https://www.ixsystems.com/community/threads/upgrading-storag...

  • dcchambers 1654 days ago
    A household wiki. Contains all kinds of information about our house and lives. We used to track stuff like this in a google doc but it was getting unwieldy.

    I wasn't happy with any of the free wiki hosting solutions available, so I ended up self-hosting a mediawiki site. It's been...challenging...to convince my wife and family to adapt to and use wiki markup.

    I've been considering switching to something that uses standard markdown instead since it's easier to write with.

    • slavox 1654 days ago
      I also had issues with the mediawiki/wiki editors and their clumsy nature.

      For me I'm just after a simple pure text knowledge-base.

      Currently I use vuepress https://vuepress.vuejs.org/

      The positives with vuepress for me were:

      * Plain Markdown (With a little bit of metadata)

      * Auto generated search (Just titles by default)

      * Auto Generated sidebar menus

      The negatives:

      * No automatic site contents; I mostly use the search to move around docs

      * Search is exact not fuzzy

      * The menu settings are in a hidden folder

    • danesparza 1654 days ago
      +1 for even having a conversation with family about wiki markup. You are a baller.
  • Jaruzel 1656 days ago
    Hyper-V host:

        Active Directory (x2)
        Exchange Server 2013
        MS SQL 
        Various Single Purpose VMs providing automation
        Debian for SpamAssassin
        Debian for my web domains
        Custom SMTP MTA that's in front of SpamAssassin and Exchange
    
    Raspberry Pis:

        TVHeadEnd
        Remote Cameras
    
    Plus a Windows Server hosting all my files/media.

    I used to self-host a lot more, but have been paring back recently.

  • canada_dry 1656 days ago
    Calendar: (https://radicale.org)

    Home automation/security system + 'Alexa': completely home grown using python + android + arduino + rpi + esp32

  • dnate 1655 days ago
    I self host a flask app on my raspberry pi, soldered to the garage door opener.

    I have hosted media folders/streaming applications for friends and family, but this has been by far my most used and most useful hack.
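
    For anyone curious what the glue might look like, here's a minimal sketch, assuming a relay wired to GPIO pin 17 and the Flask + RPi.GPIO packages (the pin number, route, and pulse length are my assumptions, not the poster's actual setup):

        # Minimal sketch: pulse a relay wired to the opener's button terminals.
        # GPIO pin 17 and the 0.5s pulse are assumptions, not the actual setup.
        import time
        from flask import Flask
        import RPi.GPIO as GPIO

        RELAY_PIN = 17
        app = Flask(__name__)

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

        @app.route("/toggle", methods=["POST"])
        def toggle():
            # Momentarily close the relay, like pressing the wall button.
            GPIO.output(RELAY_PIN, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(RELAY_PIN, GPIO.LOW)
            return "ok"

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)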

  • Macha 1656 days ago
    So far I have a home server with:

    * Unbound for dns-over-tls and single point of config hostnames for my home network

    * Syncthing for file sync

    * offlineimap to backup my email accounts

    * Samba for a home media library

    * cron jobs to backup my shares

    * Unifi controller

    On my todo list:

    * Scheduled offsite backup (borg + rsync.net being the top contender currently)

    * Something a bit more dedicated to media streaming than SMB. Some clients like VLC handle it fine, others do not.

    * Pull logs for my various websites locally

  • vermilingua 1656 days ago
    On a bit of a tangent, hopefully not an inappropriate question:

    What do you all spend on this sort of thing? Whether hosting remotely or on local hardware, what would you say is the rough monthly/annual cost to move your Netflix/Spotify/etc equiv to a self-hosted setup (excluding own labor)?

    • Havoc 1655 days ago
      Home server - nothing recurring. Repurposed an old gaming laptop. Only cost was some USB3 HDD bays. Plus probably extra electricity since I run BOINC on it.

      Websites - nothing. Using GCP's free server tier. About to move it to Oracle's free VMs though, thanks to GCP's IPv4 shenanigans and Oracle's free offering being better (higher IO & you get two VMs).

    • Youden 1655 days ago
      Depends on the size of the collection. You can get a small (1-2TB) dedicated server for ~$5/month these days. When you get bigger you're looking at something like 64EUR/month for 40TB.

      Personally I have a home server which has minimal monthly costs. I just buy disks every now and then.

    • input_sh 1656 days ago
      ~30 euros for an external dedicated server, and for the home stuff it's mostly a one-time fee for a NAS (+ an occasional hard drive replacement).
    • ehnto 1656 days ago
      ~80 AUD a month for two VMs. One hosts a Tekkit server.
  • chrissnell 1655 days ago
    At home, I run:

    - A weather station that lives on a pole in the yard. Powered by GopherWX https://github.com/chrissnell/gopherwx

    - InfluxDB for weather station

    - Heatermeter Barbecue controller

    - oauth2_proxy, fronted by Okta, to securely access the BBQ controller while I'm away. This proxy is something that everyone with applications hosted on their home network should look into. Combined with Okta, it's much easier than running VPN.

    In the public cloud, I host nginx, which runs a gRPC proxy to the gopherwx at home. I wrote an app to stream live weather from my home station to my desktops and laptops and show it in a toolbar.

    nginx in the cloud also hosts a public website displaying my live weather, pulled as JSON over HTTPS from gopherwx at home.

  • ohiovr 1656 days ago
    I'm testing some self hosted apps including Nginx reverse proxy with letsencrypt, nextcloud with either onlyoffice document server or collabora, onlyoffice community server with mail, gitea, lychee, osclass, guacamole, wireguard vpn, searx, and a few others.
  • dmclamb 1656 days ago
    Boring and predictable, but openvpn and pihole on a raspberry pi.

    I have a second raspberry pi running a version of Kali Linux. I only hack my own stuff for learning.

    Once upon a time I ran a public facing website and quake server, and published player stats. No time these days for much play.

  • zzo38computer 1656 days ago
    On my computer I host HTTP (with Apache), SMTP (with Exim), NNTP (with sqlnetnews), QOTD (TCP only, no UDP), and Gopher. I might add others later, too (e.g. IRC, Viewdata, Telnet, Finger, etc). And on the HTTP server I host several Fossil repositories.
    • vageli 1656 days ago
      > On my computer I host HTTP (with Apache), SMTP (with Exim), NNTP (with sqlnetnews), QOTD (TCP only, no UDP), and Gopher. I might add others later, too (e.g. IRC, Viewdata, Telnet, Finger, etc). And on the HTTP server I host several Fossil repositories.

      Man, at my last job in a large enterprise, I WISH they were running fingerd. Would have made for some pretty cool, lightweight integrations.

    • h1d 1656 days ago
      It felt like I opened a 1999 thread.
  • geek_at 1655 days ago
    Open Trashmail so I can use throwaway emails with my own (sub)domains and keep my data private

    https://github.com/HaschekSolutions/opentrashmail

  • hanklazard 1656 days ago
    - raspi running pi-hole

    - synology NAS for storage and backups; it runs an Ubuntu VM for a wireguard vpn server

    - for music, Volumio on a raspi as a server with snapcast; 4 other amped raspi's with speakers in other parts of the house as clients, synced up via snapcast (check out hifiberry amp if you're interested in this sort of thing)

    - an older Mac mini now running an Ubuntu server with hassio virtualized. Lights, hvac, music controls, etc. controlled through the hassio front end

    - print server on a pi zero

    (I guess these may not really be “self-hosted” since I don’t make them publicly accessible through ports ... I just vpn in to my home network)

  • yogsototh 1656 days ago
    On a scaleway (about 20€/month):

    - my websites with nginx

    - IRC (ngircd)

    - ZNC

    - espial for bookmarks and notes

    - node-red to automate RSS -> twitter and espial -> pinboard

    - transmission

    - some reddit bots manager I’ve written in Haskell+Purescript.

    - some private file upload system mostly to share images in IRC in our team

    - goaccess to self host privacy respecting analytics

    At home, Plex.

  • moutansos 1656 days ago
    Raspberry Pi 3: OpenVPN

    Dell PowerEdge R720 running VMware ESXi with:

    - Ubuntu Docker host: Plex, blog site, TeamCity, Minecraft servers (Java and Bedrock), GitLab, ElasticSearch, Kibana, Resilio Sync, PostgreSQL

    - Manjaro Linux VM

    - Windows Server 2019 VM

    - 3-node Kubernetes cluster: a couple of side projects running on it

    Basically all the stuff I don't want to pay a cloud provider to host.

    Overall, the R720 with 48GB of RAM has been one of my best buys, hands down. Down the road I plan on grabbing a second server and a proper NAS or Unraid setup.

  • nilsandrey 1656 days ago
    - Syncthing (folders across devices)

    - docker (mostly a dev environment with a lot of images; almost everything I can is tested in there, and sometimes used there too. I only use a VM if it's a desktop gadget or app)

      - generic web

      - some stacks: Rails, nodejs, php

      - ...
    
    - Calibre

    - Windows Media share feature for remote videos on devices and the TV (I don't really like it; it messes with subtitles, and I'll look for a dockerized OSS alternative)

    Wish list:

    - wallabag

    - firefox-sync (still stuck on Chrome for now; no alternative found for this)

    - email sync

    It's not much for now. I'm looking through this thread for contacts and calendar options (currently handled by the classic cloud providers).

  • ehnto 1656 days ago
    Edit: sorry, I misunderstood the question. The below is referring to software development.

    Everything. I keep infrastructure simple because I found that, as a developer, infrastructure configuration, dependency issues, and updates took an extraordinary amount of time while providing zero benefit for small-to-medium products. I do have a plan in place should I need to scale, but it is not worth maintaining an entirely different stack full of dependencies on the off chance I get a burst of traffic I can't handle.

  • notinventedhear 1655 days ago
    # 2GB linode instance ($10/month)

      nginx
      mailinabox (email, nextcloud)
      gogs
      6 static websites
      3 (dumb) little personal web-projects
      selfoss
      mumble
      openvpn
    
    # rpi-3 at home

      osmc (kodi) + 8TB of raided HDDs
      nginx
      chorus-2 in kodi publicly available (behind htpasswd) updated w/ dynamic DNS
      a nightly cron job rsyncs from the linode instance
    
    # another rpi-3 in garden shed

      8TB of raided HDDs
      nightly cron of the other rpi-3
    • mxuribe 1655 days ago
      Curious, why the other rpi-3 in the garden shed? Is that for "off-site" backups?
      • anderspitman 1655 days ago
        Probably in case the house burns down.
  • k_sze 1656 days ago
    On a Linode instance (OS being Ubuntu Server 18.04):

    - mail server in Docker container

    - ZNC in Docker container

    - Shadowsocks server

    - Wekan as a Snap

    - My blog, statically generated using Pelican, served from nginx

    At home, I only have a Synology NAS that is exposed to the internet.

  • munmaek 1656 days ago
    On my FreeNAS server: gitea, plex, openvpn (w/ ExpressVpn), Mayan EDMS

    I am unhappy with the complexity of Mayan EDMS. I'm debating moving to Paperless. All I want is a digital file system that 1) looks at directories and automatically handles files, 2) has user permissions/personal files so I can let my family use it, and 3) has a web form for uploads.

    I am planning to switch from Gitea to Sourcehut, for both the git service and builds.

    Any ideas for things a raspberry pi 3 & 4 could be useful for?

  • Fiahil 1655 days ago
    Like most folks here, I'm running the pihole/media/torrent suite. Hardware is a Rock64 soon to be colocated with a few Raspberry pi 4. Everything is dockerized and scheduled on k3s. Using kubernetes is a real life changer. I can unplug one of the SBCs and things are automatically balanced and rescheduled. It also makes the whole setup completely portable.

    I use NFS on the NAS for the storage unit. It's the only thing I need to backup.

  • bob1029 1655 days ago
    Nothing right now, but I am looking at spinning my own stack back up, either "on-prem" (aka at home) and/or with some bare-metal hosting provider.

    Relying on streaming providers, cloud email services, etc. has left me in a very foul mood lately, and I feel like I need to take back control. My biggest trigger was when I purchased an actual physical audio CD (this year; because NONE of the popular streaming providers offer the album), ripped it to FLAC, and then realized I had no reliable/convenient way to expose this to my personal devices. I used to have a very elaborate setup with Subsonic doing music hosting duty, and all of my personal devices were looped in on it. This was vastly superior to Spotify et al., but the time it took to maintain the collection and services was perceived to be not worth it. From where I am sitting now, it's looking like it's worth it again.

    How long until media we used to enjoy is squeezed completely out of existence because a handful of incumbent providers feel it's no longer "appropriate", for whatever money-grabbing reasons?

    • thegagne 1655 days ago
      It may rub you the wrong way but Google Music allows you to upload your own music.
  • kixiQu 1652 days ago
    I self-host stuff for fun! (I'm counting my EC2 instance as self-hosting.)

    * Pleroma/Mastodon - I had been using Pleroma, but I'm not happy about a few things, so I bit the bullet to upgrade to a t3.small and am now running Mastodon. I love all the concepts of the fediverse, though the social norms are still being ironed out.

    * Write Freely (https://writefreely.org/) at https://lesser.occult.institute for my blog (right now mostly holds hidden drafts)

    * Matrix (Synapse) and the Riot.im frontend for a group chat. I'm a little conflicted, because right now the experience around enabling E2EE is very alarming for low-tech users and a pain for anyone who signs in from many places, and if it isn't enabled I have better security just messaging my friends with LINE. That said, I really want to write some bots for it. Group chats are the future of social networking, they all say...

  • greenyouse 1655 days ago
    I'm just starting out with building a virtual workstation system for myself with Eclipse Che. My home desktop has always been much more powerful than my laptop so I've always thought it would be ideal to have mainframe style development. I learned about Che 7 this week and figured that it was worth a shot. Using containers for everything sounds like an interesting idea to try out too!

    Surprisingly (at least to me), there are some really big companies like Microsoft, IBM/RedHat, and others pushing this workflow. The editor is supposed to basically be VSCode in browser and compatible with most extensions.

    I'm using my RPi as a jump box and have some commands to turn on my home desktop + mount the file system and that kind of stuff when connecting. I've used it in the past and it's worked nicely.
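
    The wake-up part is easy to script. A rough sketch of the Wake-on-LAN "magic packet" approach (the MAC and broadcast addresses are placeholders, and it assumes WoL is enabled in the desktop's BIOS/NIC):

        # Wake-on-LAN sketch: 6 x 0xFF followed by the target MAC repeated 16 times,
        # sent as a UDP broadcast. The MAC and broadcast address are placeholders.
        import socket

        def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
            mac_bytes = bytes.fromhex(mac.replace(":", ""))
            packet = b"\xff" * 6 + mac_bytes * 16
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(packet, (broadcast, port))

        wake("aa:bb:cc:dd:ee:ff")  # then ssh in and mount the desktop, e.g. via sshfs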

    I got k8s running but got blocked by some bugs when installing Che. Looks neat though. It would be cool to have a 2007 macbook with the computing power of a 2990WX workstation :).

  • winrid 1655 days ago
    I wrote my own orchestrator that I deploy my personal projects (pixmap, watch.ly, etc) with.

    The orchestrator can now deploy itself! All declarative service configuration with autoscaling etc. It manages the infra and service deployment for me. Thinking about open sourcing.

    Nginx/nchan, NodeJS, static sites (vanilla/angular/react deployments), nfs, MongoDB, Redis

  • pjc50 1656 days ago
    I used to host email and a blog. I even had a server in a rack on which I let people have shell accounts.

    I still have the email domain, because it's easier to run it forever than to migrate all the things you signed up for. But actually running my own email is too much of an obligation, and I'd need to keep up with all the anti-spam measures.

  • holri 1656 days ago
    Freedombox, Apache, Exim4, prosody (XMPP), rsync, rsnapshot, ssh all on 2 identical, redundant, interchangeable Olimex A20 Mini Server with ssd (1 at home, 1 colo) and one more powerful x86 separate X2Go (Desktop usage) & File (sftp) Server at home. Everything on pure, plain Debian stable and unattended-upgrades.
  • bluedino 1656 days ago
    I bought a used Lenovo P50 for $450 and added another SSD; it has 48GB and an i7, so it's overkill.

    VMware ESXi, with VMs for Squid, DNS, MySQL, Nginx, Apache, a basic file server, GitLab, and one that's basically for irssi.

    Strongly considering just moving everything to Debian with containers for everything; they're easier to manage than VMs.

  • minimaul 1656 days ago
    As much as I can, currently:

    On colo’d hardware:

    - off-site backup server (Borg backup on top of zfs) - this is a dedicated box

    - a mix of VMs and docker containers - mostly custom web apps

    - email (it’s easier than you think)

    At home:

    - file server using zfs

    - Nextcloud

    - more custom web apps

    - tvheadend

    - VPN for remote access (IKEv2)

    - gitlab

    - gitlab ci

    Also run an IPSec mesh between sites for secure remote access to servers etc

    While my workplace uses AWS a massive amount, I still prefer to run my own hardware and software. Cloud services are not for me.

  • fractalf 1656 days ago
    Gitea for easy GUI access to git repos with personal/sensitive data, and Resilio for backing up my phone
  • harlanji 1655 days ago
    I built a setup called TinyDataCenter on a RasPi and run it hybrid with AWS and S3FS for unlimited media storage. On it I built iSpooge Live to host and syndicate my livestreams to YouTube and Twitch, and wrote some ffmpeg scripts to turn videos into HLS with adaptive-rate playback via VideoJS. Also on it is my portfolio site, and in progress are imported copies of all my social media archives like Twitter and IG. Auth happens via JWTs from Auth0, but I've an email magic-link system to bolt in soon. There's an XMPP server that isn't integrated yet. Email is hosted 3rd party, but I may try Mail-in-a-Box. The theme is decentralized with syndication. This has been going on, and live streamed regularly, for about 2 years. All my scripts are open source, same username on GitHub.
  • mmcnl 1654 days ago
    Besides some self-hosted applications, this is some stuff that is very useful to me:

    * Nextcloud - your own Dropbox! Amazing stuff.

    * VPN - simple Docker service that is super reliable and easy to set up (docker-ipsec-vpn-server)

    * Ghost - a very nice lean and mean blogging CMS

    * MQTT broker for temperature sensors

    * Samba server

    * Deluge - Torrent client for local use

    * Sabnzbd - NZB client

    * Gitea - my own Git server

    * Mail forwarder - very handy if you just want to be able to receive email on certain addresses without setting up a mailbox

    * Pihole - DNS ad-blocking

    * Jellyfin - self-hosted Netflix

    It's become sort of a hobby of mine to self-host these kinds of things. I use all of these services almost daily and it's very rewarding to be able to fully self-host them. I also really love Docker; self-hosting truly entered a new era thanks to readily available Docker images that make it very easy to experiment and run things in production without having to worry about breaking stuff.

  • conradfr 1655 days ago
    I actually self-host a silly Phoenix LiveView game at work on my MacBook Pro; I'm not sure you can self-host more than that ;) As an anecdote, the devops folks tried to DDoS it, but the app kept working as if it weren't being flooded with requests.

    Of course, you can't even tell macOS not to suspend wifi (or whatever it does) when you close the lid on battery, so now I'm trying to move it to a Raspberry Pi 4. But I've got an obscure SSL error with OTP 22 on it while querying an API, so I'm trying to debug that instead ... oh, the joy.

    All my side projects and some clients are hosted old-school style on dedicated servers. I do overpay, because it's been the same price and machine since 2013, and yet it's still way cheaper than any cloud offering, especially because of hosted database pricing.

  • CarelessExpert 1656 days ago
    At home:

    TT-RSS + mercury-parser + rss-bridge + Wallabag to replace Feedly and Pocket.

    Syncthing + restic + rclone and some home grown scripting for backups.

    Motion + MotionEye for home security.

    Deluge + flexget + OpenVPN + Transdroid.

    Huginn + Gotify for automation and push notifications.

    Apache for hosting content and reverse proxying.

    Running on a NUC using a mix of qemu/kvm and docker containers.

    • shostack 1656 days ago
      What sort of things do you use Huginn and Gotify for?
      • CarelessExpert 1656 days ago
        I'm using Gotify for receiving push notifications on my phone for things where, in the bad old days, I might've used email. So things like: when my offsite backups complete, if my VPN goes down, on torrent add/complete events, and when motion is detected on my security cameras.

        Huginn came into being because I wanted a way to republish some of my emails as an RSS feed that I could subscribe to with TT-RSS (e.g. Matt Levine's newsletter), and for that purpose alone it's justified its existence.

        I've also used it as the plumbing that connects my various services to Gotify (Huginn makes a Webhook available and the event gets routed to Gotify). This is, admittedly, entirely unnecessary; I could just hit Gotify directly. But putting Huginn in the middle could give me some flexibility later... and it's there, so, why not use it? :)
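
        For anyone who does want to hit Gotify directly, it's a single authenticated POST to the server's /message endpoint; a rough sketch (the server URL and app token are placeholders):

            # Rough sketch of pushing a notification straight to a Gotify server.
            # The server URL and application token below are placeholders.
            import requests

            GOTIFY_URL = "https://gotify.example.com/message"
            APP_TOKEN = "replace-with-your-app-token"

            def notify(title: str, message: str, priority: int = 5) -> None:
                resp = requests.post(
                    GOTIFY_URL,
                    params={"token": APP_TOKEN},
                    json={"title": title, "message": message, "priority": priority},
                    timeout=10,
                )
                resp.raise_for_status()

            notify("Backup finished", "Offsite backup completed without errors")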

  • ekianjo 1656 days ago
    many things:

    - Nginx

    - Nextcloud (with Calendar/Contacts on it)

    - IRC client (thelounge)

    - IRC server

    - DLNA server

    - Ampache server

    - video and photo library thru NFS (locally only)

    - OpenVPN

    - Shiori for bookmarks

    - Gitea for private projects

    - Syncthing (to keep a folder synchronized across my devices)

    - Jenkins

    • kuzimoto 1656 days ago
      Glad to see another Ampache user!
      • ekianjo 1656 days ago
        Yeah after trying many other options it is the one that works the best so far!
        • kuzimoto 1655 days ago
          Agreed! There's not much it can't do!
  • Spivak 1656 days ago
    Plex, Bitwarden, Nextcloud, Unifi, Pihole, OpenVPN, IPSec VPN, Gitea, OpenLDAP, Portainer, My Personal Site, Cloud Torrent, TTRSS, Grafana, Loki, FreeRADIUS, Kanboard, Dokuwiki, SMTP, Gotify, php*Admin, Container Registry, Python registry, Matomo, PXE Server.
    • h1d 1656 days ago
      Is the IPSec VPN IKEv2 on either LibreSwan or StrongSwan, connected to FreeRADIUS for authentication? What clients do you connect from? Is it stable?
    • munmaek 1656 days ago
      What do you use gotify for/with?

      What do you feed into Grafana?

      I have a home server + some raspberry pis lying around that I want to start using.

  • Zash 1655 days ago

      * Email (postfix + dovecot)
      * XMPP (prosody + biboumi for IRC gateway)
      * Static websites
      * Mercurial code hosting (mercurial-server + hgweb)
      * File storage (sftp, mostly accessed via sshfs)
    
    Some on a HP microserver somewhere, some on a VPS.
  • gargron 1655 days ago
    I don't host anything at home, but I think it still counts as self-hosting if you run an independent service. In that sense, I self-host Mastodon: https://mastodon.social
  • platz 1656 days ago
    For my bookmarks, I self-host Espial, an open-source, web-based bookmarking server. https://github.com/jonschoning/espial
  • theshrike79 1654 days ago
    Fastmail handles my mail, Newsblur for RSS, iCloud for calendar. My blog is hosted on Netlify.

    The only things I host are either just hobbies or non-essentials:

    At home:

    - Node-red for home automation

    - PiHole for ad filtering on the local network

    - Plex on my NAS for videos

    - A Raspi for reading my Ruuvitags and pushing the info to MQTT (rough sketch below)

    On Upcloud, DigitalOcean and a third place:

    - Unifi NVR (remote storage for security cameras)

    - Flexget + Deluge for torrents

    - InfluxDB + Grafana for visualizing all kinds of stuff I measure

    - Mosquitto for MQTT
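
    For the MQTT-pushing part, a rough sketch with paho-mqtt (the broker host, topic, and hard-coded reading are placeholders; the real setup reads Ruuvitags over Bluetooth):

        # Rough sketch: publish a sensor reading to an MQTT broker with paho-mqtt.
        # Broker host, topic, and the hard-coded reading are placeholders.
        import json
        import paho.mqtt.publish as publish

        reading = {"temperature": 21.4, "humidity": 38.0}  # stand-in for a Ruuvitag reading
        publish.single(
            "home/ruuvi/livingroom",
            payload=json.dumps(reading),
            hostname="mqtt.local",   # placeholder broker address
            qos=1,
        )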

  • _b8r0 1655 days ago
    Online:

    - Nextcloud

    - Mailu.io

    - Huginn

    - Gotify

    - Airsonic

    - Gitea

    All on a dedicated box. Planning to add password sync, wallabag, syncthing, a VPN, and a few other features. Other boxes of mine run various things, from DNS to backup MXes and a WriteFreely instance on OpenBSD.

    Internally I host a ton of stuff, mostly linked to a Plex instance.

  • algaeontoast 1656 days ago
    File server and plex, that’s about it. I have another server I’ll occasionally run a Kubernetes cluster on, otherwise I don’t really bother with self hosting - I hate dev ops shit for a reason...
    • shantly 1656 days ago
      > I hate dev ops shit for a reason...

      I notice I was a lot more keen on hosting a bunch of crap myself before I knew how to do it "right", and before devops, orchestration ("you mean running scripts in remote shells?"), cloud, or containers or any of that were things. And yet it all worked just fine back then—time spent fixing problems from my naïve "apt-get install" or "emerge" set-up process wasn't actually that bad, compared with the up-front cost of doing it all "right" these days. A couple lightly-customized "pet" servers were fine, in practice. Hm.

      • shostack 1655 days ago
        As a beginner programmer this is something I wonder about. Having worked with many amazing engineers, I have some sense of the effort that goes into "doing it right" and the fear of god put into me for the consequences of not doing it right.

        So then I look at home projects and wonder if I know enough to self-host things, or to host them on GCP, in a manner that won't just invite getting hacked, running up a ridiculous bill, or leaking my private sensitive data.

        Any guidance to offer?

        • shantly 1655 days ago
          1) Just pay a flat fee for a VPS, unless you're trying to learn how to use a "true" cloud provider. Their web interfaces usually make recovery from the worst failure modes ("I can't even ping the box...") trivial and they'll cut you off if usage goes too high (which is what you want if you're trying to avoid insane bills). They may also have DNS and such in one place, again in an easy pointy-clicky interface, which is nice.

          2) A lot of what people do is chasing nines that you don't need (and a lot of the time they don't either, but "best practices" don't you know, and no-one wants to have not been following best practices, even if doing so was more expense and complexity than it was worth for the company & project, right?) so just forget about failover load balancers and rolling deploys and clustered databases and crap like that. All of that stuff can be ignored if you just accept that you may have trouble achieving more than three nines.

          3) If it's just for you, consider forgetting any active monitoring too. That can really kill your nines of reliability, but if it's mostly just you using it, that may be fine, and you won't get alerts at 3:00 AM because some router somewhere got misconfigured and your site was unreachable for two minutes for reasons beyond your control. Otherwise, use the simplest thing that'll work. You can get your servers to email you resource warnings pretty easily. A ping test that messages you when it can't reach your service for X of the last Y minutes (do not make it send immediately the first time it fails; the public Internet is too unreliable for that to be a good idea) is probably the fanciest thing you need. Maybe you can even find a free tier of a monitoring service to do that for you and forget about it. (A rough sketch of that ping-test idea follows this list.)

          4) If you can mostly restrict yourself to official packages from a major distro, and maybe a few static binaries, it's really easy to just write a bash script that builds your server from scratch with very high reliability. Maybe use docker if you're already comfortable with it, but otherwise, frankly, avoid it if you can and just use official distro packages instead, as it'll complicate things a lot (now you have a virtual network to route to/from/among, probably need a reverse proxy, you may have a harder time tracking down logs, and so on). Test it locally in Vagrant or just plain ol' VirtualBox or whatever, then let it loose on a fresh VPS. If you change anything on the VPS, put it in the script and make sure it still works. If you're feeling very fancy, learn Ansible, but you'll probably be fine without it.

          5) For security, use an SSH key, not a password, and change your SSH port to something non-default (put that in your setup script) just to cut down on failed login noise, if you feel like it. You could add fail2ban but if you've changed the port and are using a key it's probably overkill.

          6) Forget centralized logging or any of that crap. If you have a single digit count of VPSen then your logging's already centralized enough. If one becomes unreachable and can't be booted again and you can't find any way at all to read its disk, and that happens more than once, consider forwarding logs from just that one to another that's more reliable if you wanna troubleshoot it. You can do this with basic logging packages available on any Linux distro worth mentioning, no need to involve any SaaS crap.

          7) Backups. The one ops-type thing you actually have to do if your data's not throwaway junk is backups. Backups and strictly-used build-the-server-from-scratch + restore-from-backup scripts are kinda sorta all most places actually need, despite all the k8s and docker chatter and such.

          8) Cloudflare exists, if you have any public-facing web services.

          [EDIT] mind none of this will help you get a job anymore since everyone wants a k8s wizard AWS-certified ninja whether they need 'em or not, so don't bother if your goal is to learn lucrative job-seeking skills, but it's entirely, completely fine for personal hosting and... hate to burst anyone's bubble... an awful lot of business hosting, too. Warning: if you learn how to run servers like this you may need to invest in some sort of eye clamp to prevent unwanted eye-rolling in server-ops-related meetings at work, depending on how silly the place you work is.
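
          To make point 3 concrete, a self-contained sketch of the "alert after X failures in the last Y checks" idea (the URL, thresholds, and alert mechanism are all placeholders; swap in email, Gotify, whatever):

              # Alert only when the service has failed X of the last Y checks,
              # so a single blip on the public internet stays quiet.
              # URL, thresholds, and the alert mechanism are placeholders.
              import time
              import urllib.request
              from collections import deque

              URL = "https://example.com/healthz"   # placeholder
              CHECK_EVERY = 60                      # seconds between checks
              WINDOW = 10                           # Y: recent checks to remember
              FAILURES_TO_ALERT = 5                 # X: failures that trigger an alert

              def is_up(url: str) -> bool:
                  try:
                      with urllib.request.urlopen(url, timeout=10) as resp:
                          return resp.status < 500
                  except Exception:
                      return False

              def alert(msg: str) -> None:
                  print(msg)  # placeholder: send an email, hit Gotify, etc.

              results = deque(maxlen=WINDOW)
              while True:
                  results.append(is_up(URL))
                  if len(results) == WINDOW and results.count(False) >= FAILURES_TO_ALERT:
                      alert(f"{URL} failed {results.count(False)}/{WINDOW} recent checks")
                      results.clear()  # avoid re-alerting every minute
                  time.sleep(CHECK_EVERY)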

    • Marsymars 1656 days ago
      > I hate dev ops shit for a reason...

      I've mostly fallen into it at my job because the alternative to me pushing dev services to SAAS offerings and maintaining the glue myself is a pile of poorly-maintained IT-provided Server 2008 R2 boxes.

  • apple4ever 1655 days ago
    In DO:

    4 Ubuntu 16.04 servers:

    - Nginx/PHP for Wordpress

    - MySQL

    - Redis

    - Mail

    Planning to expand the Nginx/PHP servers to at least two, and add load balancers. All certs are provided by an Ansible script using Let's Encrypt (yuck).

    At home:

    Proxmox running on two homebuilt AMD FX 8320 servers with 32GB each, with drives provided by FreeNAS on a homebuilt Supermicro server with about 10TB of usable space (on both HDDs and SSDs)

    Ubuntu 16.04 Servers:

    - 2x DNS

    - 2x DHCP

    - GitLab

    - Nagios

    - Grafana

    - InfluxDB

    - Redmine

    - Reposado

    - MySQL

    Other:

    - Sipecs

    All set up via Ansible.

    Next, I'll set up a Kubernetes cluster (probably as far as I'll get with containers).

  • DrAwdeOccarim 1655 days ago
    I host everything internal where if I need the resource I VPN in from outside. They all run on Raspberry Pis.

    > Resilio Sync for iPhone pictures backups and "drop box" file access

    > Transmission server

    > SMB share of NAS to supply OSMC boxes on every TV

    > Nighthawk N7000 running dd-wrt with a 500gb flash drive attached as storage for my Amcrest wifi cameras

    > Edgerouter Lite running VPN server

    > Hassbian for my zwave home automation stuff

    > A pi with cheap speakers that I can log into and play a phone ringing sound so my wife will look at her phone!

  • HellfireHD 1655 days ago
    I wasn't going to post mine until I realized that I'm hosting some stuff that I haven't seen mentioned yet.

        Appveyor
        Gitea
        Graylog + Elastic Search
        Minecraft/Pixelmon
        Nodered
        ruTorrent
        Taiga
        Tiny Tiny RSS
        Ubooquity* 
        WikiJS
        Zulip (chat/IM)
        
    *I hate it, but haven't found something better

    Also, kudos to those brave souls who are running Tor exit nodes!

    Edit: Forgot a bunch

  • preid24 1655 days ago
    Intel NUC7i5BNK with coreos running the following in a single node docker swarm:

      - Traefik (reverse proxy)
      - Git Annex
      - Gitea
      - Drone (CI)
      - Docker Registry
      - Clair (security scanning for docker images)
      - Selfoss (RSS reader)
      - Grafana / Prometheus / Alertmanager (overkill really)
      - A few custom applications...
    
    Turris Omnia running transmission under lxc
  • lostmsu 1656 days ago
    I tried to host OwnCloud, but could not figure out how to make fully automatic updates work (including for the host OS, e.g. Ubuntu).

    Now I only host my own project: http://billion.dev.losttech.software:2095/

    Also regular Windows file sharing which I use for media server and backups.

    Though I'd like to expand that. Maybe a hosted GitLab.

    • fractalf 1656 days ago
      Try Gitea instead of Gitlab if all you need is a simple online GUI for git. It's like a lightweight version of GitHub, super sweet.
  • javitury 1656 days ago
    I run a small server with node-red. Right now I use it to scrape university websites looking for paid PhD scholarships.

    I also use it to find flats when I need to.
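
    The poster does this with node-red flows, but the core check boils down to "fetch the page, look for keywords, report matches"; an illustrative Python sketch (the URL and keywords are placeholders):

        # Illustrative only: fetch a listings page and flag keyword matches.
        # The URL and keywords are placeholders, not the poster's actual targets.
        import urllib.request

        PAGES = ["https://university.example/phd-vacancies"]
        KEYWORDS = ("funded", "scholarship", "stipend")

        for url in PAGES:
            with urllib.request.urlopen(url, timeout=15) as resp:
                html = resp.read().decode("utf-8", "replace").lower()
            hits = [kw for kw in KEYWORDS if kw in html]
            if hits:
                print(f"{url}: matched {', '.join(hits)}")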

    • mosselman 1656 days ago
      That flats thing sounds cool. Do you have anything written up or available somehow where I can read up on it? Or would you care to elaborate here?
  • absc 1655 days ago
    I have one OpenBSD VM running on vultr with:

    - Mail server (OpenSMTPD)

    - IMAP (Dovecot)

    - CVS server for my projects.

    - httpd(8) for my website.

    I still need to add rspamd for spam checking, but so far I've received just one spam email.

    • mdaniel 1655 days ago
      > - CVS server for my projects.

      Out of curiosity, do you genuinely prefer CVS or just haven't migrated from a historical repo?

  • dvko 1655 days ago
    Email, using mailinabox.email. Highly recommend it.

    Also NextCloud (files, contacts and calendar), few WordPress websites and Fathom for website analytics.

  • frgotmylogin 1656 days ago
    Currently nothing but Hass.io on a raspberry pi with an assortment of z-wave and zigbee sensors and a few wifi enabled light bulbs.
  • jorijn 1653 days ago
    Synology NAS:

      Unifi controller
      Miniflux
      CouchPotato
      DSMR Reader (software that logs smart electricity meter data)
      Gitea
      Deluge
      MySQL
      PostgreSQL
      Cloud Storage mirror (for Google Drive backup)
    
    Intel NUC:

      Full Bitcoin node
      Bitcoin lightning node
    
    Remote (Digital Ocean):

      Trading Software
      Various PHP websites
  • pnutjam 1655 days ago
    OpenSUSE Leap, which acts as a NAS for my other computers.

    At home: Borg backups, Jellyfin, x2go

    Cloud (time4vps 1TB storage node): Borg, Calibre, AdGuard

    Backup flow: the home server's data drive rsyncs to an internal data drive (XFS to btrfs), the btrfs drive takes a snapshot and unmounts when not in use, then the important stuff is rsynced to my VPS. Home drives are backed up with Borg for encryption.

  • rukuu001 1656 days ago
    Syncthing as a Dropbox alternative

    I keep looking at hosting my own mail server, but get scared off by tales of config/maintenance dramas.

  • Artemix 1655 days ago
    I self-host the following services:

        syncthing
        nfs server
        UPnP server, connected to my media NAS
        gitea server, for my personal projects
        droneci, linked to my gitea server, for building websites and releases I publish
        A few locally hosted services, such as DevDocs, draw.io or Asciiflow, for convenience.
  • psic4t 1655 days ago
    On cheap cloud instances at Hetzner:

      - postfix/dovecot for mailing
      - searx instance
      - synapse for matrix
      - unbound for DoT
      - nginx for my blog
      - gophernicus for old times sake
    
    At home:

      - nextcloud
      - monero full node
      - unbound backup instance
      - fhem for home automation
      - restic for backup
  • BigBalli 1650 days ago
    Pretty much everything I develop (excluding most databases) is hosted on my cloud server. Best $5/mo I ever spent!
  • p0d 1655 days ago
    I run test environments for my SaaS products, GitLab, and Nextcloud on a dual-core box (an HP dc7900) in my roofspace. The OS runs off an SSD, and there are two old spinning disks in software RAID.

    All my business backups go to the same box. I have a Pi and an encrypted USB drive copying my backups from my house to my shed.

  • sahoo 1656 days ago
    Plex on raspberry Pi2 with deluge remote client/server. I don't think it can handle more than that.
    • sahoo 1655 days ago
      I do plan to upgrade the pi or get another one for a pihole.
  • zelon88 1656 days ago
    Everything.

    PiHole, HRCloud2, HRScan2, HRConvert2, my WordPress blog, a KB, and a few other knick-knacks. Currently working on a noSQL share tool (for auth-less large file sharing), and then maybe an idea that's been floating around my head for a Linux update server, like WSUS for Linux.

  • wildduck 1656 days ago
    nodejs, nginx, apache2, postgresql, mysql, nextcloud, jvm/rhino/ringojs, mattermost, wekan, wikimedia, nextERP, a nodejs WebRTC signaling server, a nodejs push notification server, a STUN server, mumble, Asterisk, git, Haraka, etherpad
  • nikisweeting 1656 days ago
    Zulip, archivebox, codimd, mailu, plex, radarr, sonarr, jackett, transmission, matomo, kiwix, minecraft, nextcloud, unifi controller, unifi CRM, pihole, wireguard, zfs, glusterfs, freenas, autossh, swarmpit, netdata, syncthing, duplicati, elk stack, nomad, a bunch of static sites, a bunch of wordpress sites, a bunch of assorted django apps (including a large consumer-facing one), custom dyndns and tls renewal cron jobs, and many many more that have come and gone over the years.
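
    One of the smaller items above, the custom dyndns cron job, is roughly this pattern; a sketch with a placeholder DNS-update step (api.ipify.org is just one public "what's my IP" service):

        # Dyndns-style cron job sketch: check the current public IP and, if it
        # changed since the last run, push it to the DNS provider (placeholder step).
        import pathlib
        import urllib.request

        STATE_FILE = pathlib.Path("/var/tmp/last_public_ip")  # placeholder path

        def current_ip() -> str:
            with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
                return resp.read().decode().strip()

        def update_dns(ip: str) -> None:
            # Placeholder: call your DNS provider's API (e.g. Cloudflare) here.
            print(f"would update the A record to {ip}")

        ip = current_ip()
        if not STATE_FILE.exists() or STATE_FILE.read_text().strip() != ip:
            update_dns(ip)
            STATE_FILE.write_text(ip)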

    All on a few Vultr + Digitalocean droplets, 2 raspis + 1 atomic pi, a couple of HP i5 mini desktop machines, and a Dell r610 rack server with 24 cores and 48GB of ram (with about 36TB of assorted shucked and unshucked USB hard drives attached in a few GlusterFS / ZFS pools). I have a home-built UPS with about 1.5kWh worth of lead-acid batteries powering everything, and it's on cheap Montreal power anyway, so I only pay $0.06/kWh + $80/mo for Gigabit fiber. It's a mix of stuff for work and personal because I'm CTO at our ~9 person startup and I enjoy tinkering with devops setups to learn what works.

    All organized neatly in this type of structure: https://docs.sweeting.me/s/an-intro-to-the-opt-directory

    Some examples: https://github.com/Monadical-SAS/zervice.elk https://github.com/Monadical-SAS/zervice.minecraft https://github.com/Monadical-SAS/ubuntu.autossh

    Ingress is all via CloudFlare Argo tunnels or nginx + wireguard via bastion host, and it's all managed via SSH, bash, docker-compose, and supervisord right now.

    It's all built on a few well-designed "LEGO block" components that I've grown to trust deeply over time: ZFS for local storage, GlusterFS for distributed storage, WireGuard for networking, Nginx & CloudFlare for ingress, Supervisord for process management, and Docker-Compose for container orchestration. It's allowed me to be able to quickly set up, test, reconfigure, backup, and teardown complex services in hours instead of days, and has allowed me to try out hundreds of different pieces of self-hosted software over the last ~8 years. It's not perfect, and who knows, maybe I'll throw it all away in favor of Kubernetes some day, but for now it works really well for me and has been surprisingly reliable given how much I poke around with stuff.

    TODOs: find a good solution for centralized config/secrets management that's less excruciatingly painful than running Vault+Consul or using Kubernetes secrets.

  • pasxizeis 1655 days ago
    What do people use to provision their Raspberries? Ansible or something?
  • IceWreck 1655 days ago
    On my VPS

      * My Website
      * Seafile
      * FreshRSS
      * RSSBridge for making rss feed for websites that don't have one
      * Dokuwiki
      * A Proxy
      * Multiple Telegram and Reddit bots
  • asdkhadsj 1655 days ago
    On this note, I've got a few services I'd like to set up locally. I'm curious if I could set them up in a Docker-like fashion, where it's super easy to manage the individual container images, and then run it all on some type of home "cloud". I debated reaching for Docker Swarm, but I'm curious:

    What might be the easiest way to achieve this? Running a Kube cluster is insane for my needs; I imagine I'd be perfectly happy with a few Pis running various Docker containers. However, I'm unsure what the easiest way to manage this semi-cloud environment would be.

    edit: Oh yea, forgot Docker Compose existed. That may be the easiest way to manage this, though I've never used it.

    • dillonmckay 1655 days ago
      You can run a single node k8s setup.
      • detaro 1655 days ago
        Is k8s worth it on just a single node if you aren't prototyping or learning for a larger setup?
  • jimmcslim 1656 days ago
    For folks that are reverse proxying I have a few questions...

    1) Do you identify the reverse proxy by host or by path?

    e.g. <service>.yourdomain.com or yourdomain.com/<service>

    2) Do you still run everything over a VPN?

    • CarelessExpert 1655 days ago
      Subdomain as well.

      External services I need are directly accessible via a local reverse proxy that's publicly visible over IPv6.

      For IPv4-only scenarios I proxy through a linode instance (that also hosts a few things, including my blog) which sends the traffic in over v6.

      Obviously this is all fronted by a traditional firewall.

      And before you ask: it's surprising how often v6 connectivity is available these days. Mobile phone providers have moved to v6 en masse, and even terrestrial internet providers are starting to get religion.

      It's still not available in my workplace (surprise surprise), but other than that, much to my surprise, v6 is my primary mode of connectivity.

    • bpye 1656 days ago
      1) By subdomain

      2) No - but I do use Cloudflare to proxy inbound traffic

  • gorkemcetin 1655 days ago
    Self hosting Balsa Knowledgebase (https://getbalsa.com), an alternative to Notion and Evernote.
  • carc1n0gen 1655 days ago
    I host my blog on a Raspberry Pi under my desk. At some point I'll get around to moving my Gitea instance there too; it's currently on DigitalOcean.
  • CaptainJustin 1654 days ago
    Running a few different containers in Docker at home.

    - Hand-rolled Go reverse proxy with TLS from LE.

    - Several Pg DBs for development.

    - VPN server.

    - Chisel for hosting things "from home" while running on my laptop remotely.

    - Etcd

    - Jenkins

    - Gitea

    - Pi-hole

    - A few different development projects

  • danielparks 1656 days ago
    Postfix, Dovecot, Amavis/Spamassassin, Bind, NGINX.

    So, mail, DNS, and a few web sites. I’ve been running something like this for more than 15 years now.

  • Mave83 1656 days ago
    powerdns, wireguard, gitlab, nginx, pgsql, mariadb, zabbix, nextcloud, Grafana, graphite, prometheus, haproxy, postfix, and a lot more
    • zamadatix 1656 days ago
      Curious what made you choose powerdns over bind, I've never tried it out before.
  • awat 1656 days ago
    Tiny Tiny RSS - https://tt-rss.org/
  • vbezhenar 1655 days ago
    I have a home server hosting a samba share for my needs; it also hosts video files so I can watch them on my TV.
  • KajMagnus 1655 days ago
    I self host Talkyard, a cross between StackOverflow, Slack, HackerNews. https://github.com/debiki/talkyard (I'm developing it)

    And SyncThing, https://syncthing.net/

  • johnx123-up 1650 days ago
    Restyaboard (for trello alternative), GitLab (for GitHub alternative)
  • hanniabu 1656 days ago
    Ethereum archival node
  • jtthe13 1656 days ago
    Not much: Plex server, and Pi-Hole in a docker.
  • scorown 1656 days ago
    Bitwarden, Unifi, PiHole

    It all started with hosting subsonic

  • danielovichdk 1655 days ago
    Windows 2000, IIS 5, FTP, SQL Server 2000
  • nirav72 1656 days ago
    plex, gitea, deluge + VPN, nzbget, radarr, sickchill, jackett, grafana, pihole, openvpn server, unifi controller
  • dbeley 1655 days ago
    - Nextcloud

    - Ampache

    - Shaarli

    - Dokuwiki

    - Deluge

    - Hugo blog

    Everything running on a cheap server from kimsufi.

  • gramakri 1655 days ago
    I self-host using Cloudron (obviously). My list is:

    * Gogs

    * WordPress

    * Wallabag

    * Ghost

    * Minio

    * Email (yes, this is my primary and only email)

    * TinyTinyRSS

    * NextCloud

    * Meemo

    * MediaWiki

  • bribri 1656 days ago
    Calibre web
  • sharma_pradeep 1655 days ago
    Blog
  • nonamestreet 1655 days ago
    bitcoin full node