I am currently running an Unraid server with some docker containers; here are a few of them: Plex, Radarr, Sonarr, Ombi, NZBGet, Bitwarden, Storj, Hydra, Nextcloud, NginxProxyManager, Unifi, Pihole, OpenVPN, InfluxDB, Grafana.
All web-services are reverse-proxied through traefik
At home:
On a remote server:
Home server's a Raspberry Pi 4.
The config I ended up using - https://0bin.net/paste/gnWY4+Tn-jZ2UMZm#RgQfZ3uD7MIlK7nWKLLX...
It's deployed on docker, proxied through traefik.
Does anyone have recommendations for password+sensitive-data management?
I'm currently using Keepass and git, but I have one big qualm: your only option is to version-control one big encrypted (un-diff-able) file.
They both store passwords/data in gpg-encrypted files in a git repo. I'm not sure what the state of GUIs/browser plugins is for it, but I'm pretty sure there are some out there.
You can also set up your git config to be able to diff encrypted .gpg files so that the files are diff-able even though they're encrypted.
[0]: https://www.passwordstore.org/
[1]: https://github.com/gopasspw/gopass
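On the diff-able encrypted files point: a minimal sketch of the git side of that (the exact gpg flags may need adjusting for your keyring/agent setup):

```
# .gitattributes in the repo root: send *.gpg files through a custom diff driver
echo '*.gpg diff=gpg' >> .gitattributes

# tell git how that driver converts the encrypted blob into diffable text
git config diff.gpg.textconv "gpg --quiet --batch --decrypt"
```

With that in place, `git diff` and `git log -p` show plaintext diffs while the files on disk (and in history) stay encrypted.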
One other alternative to keepass is pass[1].
[1]: https://www.passwordstore.org/
What are the issues with syncthing?
One nifty thing is that you never need to run unison on the server; you just have to have it installed. I have systemd units that I enable on my client machines, and they do all of the syncing; unison connects to the server with ssh and does all the work there over that connection.
[1]: https://www.cis.upenn.edu/~bcpierce/unison/index.html
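For illustration, the client-side systemd user units for a unison-over-ssh sync might look roughly like this (paths, hostname and interval are made up, not the parent poster's actual setup):

```
# ~/.config/systemd/user/unison-sync.service
[Unit]
Description=Sync documents with the server via unison over ssh

[Service]
Type=oneshot
ExecStart=/usr/bin/unison -batch %h/Documents ssh://server.example//home/me/Documents

# ~/.config/systemd/user/unison-sync.timer
[Unit]
Description=Run the unison sync periodically

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now unison-sync.timer`; the server side only needs sshd and the unison binary installed.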
One thing I noticed with tinc is that it does not benefit from sysctl network tuning by default. I had to increase the network buffers so that its dynamic routing didn't cause as noticeable a slowdown.
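For anyone hitting the same thing, the tuning in question is bumping the kernel's socket buffer limits, along these lines (values are illustrative, not a recommendation):

```
# /etc/sysctl.d/90-vpn-buffers.conf -- example values only, tune for your link
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
```

Apply with `sudo sysctl --system` and re-test throughput over the tunnel.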
I think the problem is entirely caused by the US having absolutely abysmal private internet speeds and capacity. Since you can’t then have your own server at home, you are forced to have it elsewhere with sensible internet connections.
It’s as if, in an alternate reality, no private residences had parking space for cars; no garages, no street parking. Everyone would be forced to either use public transport, taxis and chauffeur services to get anywhere. Having a private vehicle would be an expensive hobby for the rich and/or enthusiasts, just like having a personal server is in our world.
I don't do everything myself in my life, and there's no reasonable default for where the line is other than a cost/benefit comparison.
For me, for many years, owning a car was far more expensive than renting or getting taxis when needed. Owning a car absolutely would have been an expensive hobby, and the same is true for many in cities.
Having a personal server is exceptionally cheap. I recently noticed a VPS I'd forgotten to cancel, which cost about 10 dollars per year. That's about one minimum-wage hour where I live. If you mean literally a personal server, a Raspberry Pi can easily run a bunch of things and costs about the same as a one-off.
The real cost is time, plus the upfront cost of software. If I want updates, and I do (security at least), I need some ongoing payments for those, and then I need to manage a machine. That management is better done by people other than me (even if they earned the same as me, they'd be faster and better), and they can manage more machines without a linear increase in their time.
So why self host? Sometimes it'll make sense, but the idea it should be the default to me doesn't hold. Little needs to be 100% in house, and sharing things can often be far more efficient. Software just happens to be incredibly easy to share.
You can’t outsource your privacy. Once you’ve given your information to a third party, that third party can and will probably use it as much as they can get away with. And legal protection from unreasonable search and seizure is also much weaker once you’ve already given out your information to a third party.
To generalize, and to also answer your other comments in a more general sense, you can’t outsource your freedom or civic responsibility. If you do, you turn yourself into a serf; someone with no recourse when those whom you place your trust in ultimately betray you.
(Also, just like “owning” a timeshare is not like owning your house, having a VPS is not self-hosting.)
> If you do, you turn yourself into a serf;
I'm really not sure I follow. This is about self-hosting services; I can't really equate (e.g.) hosting my data on github.com with turning myself "into a serf".
> someone with no recourse when those whom you place your trust in ultimately betray you.
There's obviously recourse - as an EU citizen (at least currently) it's possible that companies can lose 4% of their global turnover for misusing data.
> (Also, just like “owning” a timeshare is not like owning your house, having a VPS is not self-hosting.)
You can see from my post that I also gave the price of running an RPi if you didn't count a VPS as self-hosting, which I absolutely would, because to me it's about what services I run vs what services I pay others to run.
I found one on my Linode account last weekend. It's been up since 2010 running Debian 5, no updates because the repos are archived. A couple of PHP sites on there whose domains I don't control (but the sites were active).
The last email I have from the people there is from 2012, a backup. The company apparently is not in business anymore (I know the domain registrations were on the owner's personal account; he might have had auto-renew on).
Backed up everything there and shut it down.
The trend definitely traces to the advent and eventual domination of asymmetric Internet connectivity. My first DSL connection was symmetric, so peer-to-peer networking and running servers ("self-hosting") were just natural. Since then, asymmetric bandwidth has ruled the US.
It's not so much that connectivity technology in the US is strictly poor—many cities have options providing hundreds of megabits or a gigabit or more of aggregate bandwidth. It's that the capacity allocation of some shared delivery platforms (e.g., cable) is dramatically biased toward download/consumption, and against upload/share/host. And there's no way for consumers to opt for a different balance. I'd gladly take 500/500 versus 1000/50. Even business accounts, which for their greatly increased costs are a refuge of symmetric connectivity and static IPs, are more commonly asymmetric today.
I think that this capacity imbalance and bias toward consumption snowballs and reinforces the broader assumptions of consumption at the edge (why make a product you self-host when most people don't have the proper connectivity?). This in turn means more centralization of services, applications, and data.
Nevertheless, even with mediocre upload speeds (measured in mere tens of megabits), I insist on self-hosting data and applications as much as I can muster. All of my devices are on my VPN (using the original notion of "VPN," meaning quite literally a virtual private network; not the more modern use of VPN to mean "encrypted tunnel to an Internet browsing egress node located in a data center"). For example, why would I use Dropbox when I can just access my network file system from anywhere? To me, it's a matter of simplicity. Everything I use understands a simple file system.
And most people actually do outsource their jobs. They are employees rather than working for themselves…
That might be true if you are in SF, NY, Toronto, London or some other major metropolitan area with a good public transportation network. However, for a large number of places in North America, including metro areas like LA, San Diego, Minneapolis, and Dallas, having a car is almost a necessity, as it is the only way to get around the city without spending half a day in public transit.
Having a car is not a hobby when you live outside of a very dense city center. That's just the tool that enables you to live.
While I know that some car owners do just have it for fun, I think a lot more are because it's useful.
(edit: forgot to state country, Slovenia)
50€ for 950/300 (fibre by Orange). I could get 10G/1G (fibre by Free) for 100€ but I could not use my own router instead of the provided one.
Shared LTE phone and data plan (2 people) w/ 22GB/mo total is $160.
And I also pay about $800/mo for health insurance for 2 people.
Saudi is like this, I hear; Jakarta too. I assume there are more.
“No self-drive. Only taxis.”
— The Prisoner, 1967
It also acts as an NFS server for my media center (Kodi -- though I really am not a huge fan of LibreELEC) to pull videos, music, and audiobooks from. Backups are done using restic (and ZFS snapshots to ensure they're atomic) and are pushed to BackBlaze B2.
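A rough sketch of what an atomic restic-to-B2 run over a ZFS snapshot can look like (dataset, bucket and credentials are placeholders, not the poster's actual setup):

```
#!/usr/bin/env bash
set -euo pipefail

export B2_ACCOUNT_ID=xxxx B2_ACCOUNT_KEY=yyyy RESTIC_PASSWORD=zzzz

# snapshot first so restic backs up a frozen, consistent view of the dataset
zfs snapshot tank/media@restic

# ZFS exposes snapshots read-only under <mountpoint>/.zfs/snapshot/<name>
restic -r b2:my-backup-bucket:media backup /tank/media/.zfs/snapshot/restic

# drop the snapshot once the backup has finished
zfs destroy tank/media@restic
```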
I used to run an IRC bouncer but Matrix fills that need these days. I might end up running my own Gitea (or gitweb) server one day though -- I don't really like that I host everything on GitHub. I have considered hosting my own email server, but since this is all done from a home ISP connection that probably isn't such a brilliant idea. I just use Mailbox.org.
[1]: https://github.com/cyphar/cyphar.com/tree/master/srv
I plan to use Wireguard too, so I shouldn't run it in containers? Can you elaborate on that?
I run it on the host.
This is a bit tangential, but to clarify, do you mean that you listen to audiobooks on your TV using Kodi? Do you also have a way of syncing them to a more portable device, like your phone?
Sometimes, though not very often -- I work from home and so sometimes I'll play an audiobook in my living room and work at the dinner table rather than working from my home office.
> Do you also have a way of syncing them to a more portable device, like your phone?
Unfortunately not in an automated way (luckily I don't buy audiobooks very regularly -- I like to finish one before I get another one). I really wish that VLC on Android supported NFS, but it doesn't AFAIK (I think it requires kernel support).
I used to run Docker containers several years ago, but I found them far more frustrating to manage. Making sure --restart policies actually worked properly was fairly hairy, the whole "link" system in Docker is pretty frustrating to use, docker-compose has a laundry list of problems, and so on. With LXD I have a fairly resilient setup that just requires a few proxy devices to link services together, and boot.autostart always works.
Personally, I also find it much simpler to manage a couple of services as full-distro containers. Having to maintain your own Dockerfiles to work around bugs (and missteps) in the "official library" Docker images also added a bunch of senseless headaches. I just have a few scripts that will auto-set up a new LXD container using my configuration -- so I can throw away and recreate any one of my LXD containers.
[Note: I do actually maintain runc -- which is the runtime underneath Docker -- and I've contributed to Docker a fair bit in the past. So all of the above is a bit more than just uneducated conjecture.]
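For reference, the two LXD features mentioned above (boot.autostart and proxy devices) look roughly like this; container name and ports are hypothetical:

```
# start the container automatically when the host boots
lxc config set webapp boot.autostart true

# proxy device: forward host port 80 to the service listening on 8080 inside the container
lxc config device add webapp http proxy \
    listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:8080
```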
I only just recently discovered podman and I've been pretty excited. Having never used LXD and only understanding the high level differences between the two, I'm curious how it compares with regards to security and usability.
Oh, and most of the Docker CVEs found in recent years -- including those I've found -- have also impacted podman. The most brazen example is that podman was vulnerable to a trivial symlink attack that I fixed in Docker 5 years ago[1,2]. It turns out that both Docker and podman were vulnerable to a more complicated attack, but the fact that podman didn't do any special handling of symlinks is just odd.
[Disclaimer: The above is my personal opinion.]
[1]: https://github.com/containers/libpod/pull/3214
[2]: https://github.com/moby/moby/pull/5720
You've definitely convinced me to take a good look at LXC/LXD though. Thanks for the thorough response!
You can use LXC directly if you want to avoid a long-running daemon.
Overleaf: https://sdan.xyz/latex
A URL Shortener: https://sdan.xyz
All my websites (https://sdan.xyz/drf, https://sdan.xyz/surya, etc.)
My blog(s) (https://sdan.xyz/blog, https://sdan.xyz/essays)
Commento commenting server (I don't like disqus)
Monitoring (https://sdan.xyz/monitoring, etc.)
Analytics (using Fathom Analytics) and some more stuff!
I wrote this to setup my web server, mail server and VPN server, and auto-generate all my VPN keys.
https://github.com/sumdog/bee2
But at the same time, I understand the security risks, and if I have to I can just stop netdata's container and add some more security to it before turning it on again. (I'm not running some SaaS startup, so security isn't a huge concern, and I don't think my netdata exposes anything that would make me more prone to attack.)
I'm probably going to change how publicly accessible my monitoring view is soon, but for now, it seems pretty cool for everyone to see.
Would love to get a link to a screenshot of your system's resource monitoring. The description of each panel & each metric was quite useful!
I see a lot of people putting their home stuff behind CloudFlare, but when I reviewed their free tier, I didn’t actually see any security benefit to outweigh the privacy loss, and I didn’t see that covered on your blog post.
The main thing is being able to hide your origin IP address. That turns many types of DDoS attacks into CloudFlare's problem, not yours, and it doesn't matter that you're on the free tier[0]. If you firewall to only allow traffic from CF[1], then you can make your services invisible to IP-based port scans / Shodan.
CloudFlare isn't a magic-bullet for security, but, used correctly, they greatly reduce the attack surface.
Whether any of that is worth the privacy / security risk of letting CloudFlare MITM your traffic is up to you.
[0] https://news.ycombinator.com/item?id=21170847
[1] https://www.cloudflare.com/ips/
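A hedged sketch of the firewalling in [1], here with ufw (adapt to iptables/nftables as needed; the published ranges change occasionally, so refresh them periodically):

```
# allow HTTPS to the origin only from Cloudflare's edge ranges
for range in $(curl -fsSL https://www.cloudflare.com/ips-v4) \
             $(curl -fsSL https://www.cloudflare.com/ips-v6); do
  sudo ufw allow proto tcp from "$range" to any port 443
done
sudo ufw default deny incoming
sudo ufw enable
```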
1. This is hosted on GCP. Actually was thinking of using Cloudflare Argo once my GCP credits expire so that I can truly self host all this (although all I have is an old machine).
2. For me, Cloudflare makes my websites load faster. Security-wise, I have pretty much everything enabled... like always-on HTTPS, etc., and I have some strict restrictions on SSHing into my instance (also note that none of my IP addresses are exposed, thanks to Cloudflare), so really I'm not sure what security risk there may be.
3. How am I losing privacy? Just curious, not really understanding what you're saying there.
I'd suggest that Argo is a waste of money if you have control of your router, you don't need to secure unencrypted HTTP traffic, and your ISP isn't port-blocking. Block all traffic except from CF's IPs, configure Authenticated Origin Pulls, and use SSL for your CF<->Origin traffic (your own cert or CF's).
If you don't meet all of those conditions, a cheap VPS as a VPN server is probably a better value (plus you get a VPS to do other stuff with).
Browser <-> CF, CF <-> source server. Two distinct TCP sessions. Both potentially encrypted, but there's no E2E encryption anymore.
1. Been using it for 3+ years. Whenever I'm making a site, there's nothing better than easily making some DNS records and making sure they're all always-on HTTPS.
2. A little bit of the first part: it's a hassle to set up my own certs, etc. I feel that Cloudflare "protecting" my IP from DDoS and other attacks is far better than anything I can set up easily (at least from my experience; I think they know what they're doing).
3. Maybe in the future when I have some time and money I'll do everything on my own and ensure I have E2E encryption. At the moment, anything I'm running isn't mission-critical and isn't used by hundreds of people; I'm not making a SaaS startup. I understand your concern, but the ease of use of Cloudflare is something I value.
4. Analytics. I've come not to trust Google Analytics at all. I'm not sure what they're doing, but most if not 100% of tech-savvy people have adblock, which blocks GA. My VPN from AlgoVPN blocks GA and anything related to GA, FB, Twitter, etc. So I'm not really sure how much I can trust GA's analytics, as opposed to Cloudflare giving me the exact numbers on how many people requested or visited my site. (I'm going to make my own analytics soon since Fathom has turned profit-only and not open source.)
> 3. How am I losing privacy? Just curious, not really understanding what you're saying there.
I understand the benefit of CF, and it’s for each person to decide for themselves what they consider acceptable or not.
> Cloudflare giving me the exact numbers on how many people requested or visited my site. (I'm going to make my own analytics soon since Fathom has turned to profit only and not open source).
We could do this in the 90s with our web server logs. It didn’t involve third parties or paid tools or centralising logging and sacrificing privacy. Tooling for simple stats has existed for literally more than two decades.
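Even today that can be a one-liner against the access log; for example, assuming the common/combined log format:

```
# crude "unique visitors" count straight from the web server's own log
awk '{print $1}' /var/log/nginx/access.log | sort -u | wc -l

# or a full self-hosted report with GoAccess
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html
```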
Can you elaborate on this? Maybe I misunderstand you, but is there a good way to get HTTPS from your home?
Of course, it would simplify privilege escalation if someone successfully attacked the netdata service. If you want a public dashboard, streaming is supposed to be quite safe (there's no way to send instructions to a streaming instance of netdata).
https://github.com/dantuluri/sd2/blob/master/docker-compose....
You'll need Mongo and Redis (last I remember) as well (which I believe are the two images that follow the sharelatex image).
Here’s my home lab: https://imgur.com/a/aOAmGq8
I don’t self host anything of value. It’s not cost effective and network performance isn’t the best. Google handles my mail. GitHub can’t be beat. I use Trello and Notion for tracking knowledge and work, whether personal or professional. Anything else is on AWS. I do have a VPN though so I can access all of this when I’m not home.
The NAS is for backing up critical data. R720 was bought to experiment with Amazon Firecracker. It’s usually off at this point. Was running ESXI, now running Windows Server evaluation.
The desktop on the left is the new toy. I’m learning AD and immersing myself 100% in the Microsoft stack. Currently getting an idiomatic hybrid local/azure/o365 setup going. The worst part about planning a MS deployment is having to account for software licensing that is done on a per-cpu-core basis.
The status quo is radically anti-consumer, IMO, as radical as abolition of all copyright would be.
Of all the ways to try to promote creativity in the 21st century, making information distribution illegal by default and then using force of law to restrict said distribution unless authorized is pretty wack.
It makes sense when you consider that information is generated in the first place for an incentive, and that incentive is only possible when copyright guards it. People are more than free to create public information if they choose to do so (and they do), but some people generate valuable information mostly for the purpose of profiting from it and the copyright framework tries to ensure that it will be worth their time when they attempt to create such information. Would you rather they didn't have the option which would result in the effort not being expended to generate such information? With copyright, you at least have the option to obtain it if you deem the price tag (set by the creator) fits the value you'll get from it.
There is no central authority that copyrights information that people generate. You make it sound like there is some evil force in the world that prevents people from creating freely accessible information. There isn't. You're free to create freely accessible information. There are creators that choose to limit access to the information that they generate and I don't understand how someone can argue that it is unfair that they have an option to do so if they choose.
Presumably so, because consumers stealing the final products was already prohibited / a crime. The final product was generally physical, and consumers would have had to physically break into stores to get their copy of whatever was produced, which was already illegal. And even if consumers obtained their copy by legitimate means, sharing it with others would mean losing their own copy. For consumable-ish items (things you only need to experience once to get the value out of the product) this is still a problem of course, and there is no easy way of preventing it - but the idea is still there and the limits can be enforced. With digital information, the barrier to entry for such theft is greatly reduced. You don't lose your copy when you share, and stealing is a lot easier too. That doesn't mean it is right, or in line with the spirit of what we thought ownership meant back in the day.
>Copyright as it is goes against the very purpose of creation preventing any new works from ever being created.
Again, I don't get this. EVERYONE has the OPTION to create works for public domain. Why is this not enough for you? Everything you want is already there. It's just that there is another option for others that don't want to create works for public domain. Why does that bother you?
My guess is that if you were entirely happy with what people create without a motive for profit, you wouldn't care that other people had a copyright option. But you are not happy with what that economic model (free, copyleft etc.) produces by itself. You are aware that that economic model doesn't work. You want free access to information that people with economic incentives create with a price tag attached to it, because you know information generated with profit in mind tends to be more valuable.
https://www.smithsonianmag.com/arts-culture/first-time-20-ye...
> Would you rather they didn't have the option which would result in the effort not being expended to generate such information?
Yes, I would argue absolutely that valuable information would still be made, because those who would benefit just from the information existing - not from the potential sale of said information - would still fund its creation. Someone who wants a painting will still pay for it whether or not they can sell the finished work. The think tank researching a cure for cancer will still have ample funding sources from those who think not having cancer would be beneficial, regardless of those sponsors' ability to profit off said cure.
> I don't understand how someone can argue that it is unfair that they have an option to do so if they choose.
It's largely a problem because it's both default and implied. It's in the same class of problem as if the government tried to restrict air: you had to pay to breathe and are charged per month, despite the air being "free" and "everywhere". It's a tough analogy to write, though, because there is no true analogue to the modern miracle of information propagation being infinite and endless; we truly have nothing else worth so little as a copy of a number to compare it to.
But fundamentally it's having your cake and eating it too: if you want to monetize your creations, you make them for free (at expense to yourself) and then try to monetize something that has no value (copies of it). It's so abjectly opposed to reality and true scarcity that it subconsciously drives people to feel no serious shame in piracy despite them "stealing theoretical profits from the rightsholder". But that's really all you are taking. In another light, a random stranger is offering you something they have, for free and without recompense, just because it's so cheap to store, transmit, and replicate. That is magical. We take this modern miracle of technology and bind it in chains to try to perpetuate a model of profit that doesn't make any actual sense in reality, given the scarce inputs (creative capability, motivation, and effort) and infinite outputs (information) involved.
You can though. Precisely because there are countries (past and present) that have / had no legal framework for such incentives, and there are others where the framework was there but the law is not enforced. And we can observe how it is working for them, compare and contrast. I live in one of them (lived here all my life) but work for countries that provide such a protection (precisely because my intellectual property would not be respected here, so my own country my homeland does not get to benefit from my work) so it is easy for me to look at both sides of the coin - though it is not strictly necessary. It doesn't take hands on experience to observe that the most productive and innovative countries occupying planet earth are those that have strong intellectual property rights.
There is a lot of low-hanging fruit where I live that would double the GDP of the country in a few years, but no one is doing it, because those things are either capital intensive or time intensive (or both), and without any protections there to make it worth your while it doesn't make sense to attempt them - for anyone. It makes more sense to pitch any innovative ideas to countries that will protect you, so that you get a return proportional to the value you generate for the rest of society. If I have an idea that has the potential to shave 1 hour off of millions of people's work every day, that is enormous value generated for everyone and I should be rewarded proportionally. Not to mention the risk I'd have to take to attempt that: by attempting it I'm doing this instead of doing something else with my only life, so of course the incentive HAS TO be there.
>The think tank researching a cure for cancer will still have ample funding sources from those who think not having cancer would be beneficial regardless of those sponsors ability to profit off said cure.
I think this is far too naive a way of looking at it. You are taking risk entirely out of the equation. If I'm attempting to do something that is of value to other people, merely by attempting it I am taking a RISK. My time is my most valuable asset, I only live once, and I'm investing my time and capital into this endeavor instead of doing something else with them. The resources of human beings are not unlimited, so they need a heuristic / algorithm to ration their resources. Their survival / wellbeing depends on this. So something becomes a viable risk only if there is a possible return that makes sense. You wouldn't play a coin flip for 1.1x or nothing, right? It must at the very least be 2x or nothing to break even.
So we think we'd like to do good with no possibility of personal return, but behavioral economics shows that humans do not operate that way (even though they'd like to think that they would), because each and every human being has their own life, responsibilities, family, wants, wishes and only limited resources. To make ANYTHING, the returns must be congruent with your heuristic for how you'd like to divide your limited resources.
You are still approaching it from the "make it for free, charge for the result" model incompatible with reality. If there is something worth doing that will radically improve society you should be soliciting the funding first before undertaking the labor.
Yes, risk is involved. You take risks when you pay someone to build a house for you. You take risks when you buy sushi at the store that may or may not still be good. We have, as a society, very effectively structured and produced buyer protections for services rendered, ranging from guarantees to total free-for-all. You would, like with every other transaction, budget and account for risks and pay to mitigate them if warranted.
> It makes more sense to pitch any innovative ideas to countries that will protect you
It's more like foreign nations with IP laws are offering you a magic money machine that nations without them don't, and the pile of gold is a tempting proposition over attempting alternative funding models.
> So something becomes a viable risk only if there is a possible return that makes sense
I don't think we are in any disagreement here. I'm never arguing that you abolish IP and expect all further scientific advancement, art, programming, etc to be done by people who cannot seek compensation for making it. I'm arguing that the US IP regime smothers any potential alternative with how exploitative having the government constrain information by law is.
Your story runs a similar thread to tax evasion and money laundering: the rich gravitate their wealth towards wherever they can keep the most of it. That is where Swiss bank accounts and Latin American cartels get their power. Likewise, businesses want lower taxes and thus gravitate towards the countries that offer the lowest tax rates, even when those lower rates do demonstrable harm to the citizenry through reduced social programs, etc. The result is a global race to the bottom economically: appeal maximally to wealth to attract it, or see your nation rot as it constantly flees your borders for greener pastures. IP is a massive pile of money to be had, hence nations without it see creators flee to nations with it, but might does not make right. Just because the existence of IP represents untenable profit by any other means does not justify its existence, especially when it is so contrarian to baseline reality. It's a perversion of normality meant to attract investment the same way having low or no wealth and income taxes or no corporate tax attracts the wealthy and businesses.
> So we think we'd like to do good for no possibility of personal return
The personal return on funding the cure for cancer is having the cure for cancer exist.
> To make ANYTHING the returns must be congruent with your heuristic about how you'd like to divide your limited resources.
And macroeconomically we are all constituted of limited resources - limiting information compels most to partition their scarce resources towards affording the IP regime, not for any physical necessity, and this reduces the buying power of everyone involved. IP falls under the same purview of economic cancers as advertising, health insurance, and the military - bureaucracy and a race to the bottom that has no abject benefit but siphons productivity away to rent seekers and middle men.
> Bums me out when I see people putting so many resources into running/building elaborate piracy machines.
These two comments are rather at odds to me.
That said, IME generally the type of person who's big into self hosting isn't a Microsoft guy. I work with MS stuff at work at the moment. The entire thing is set up for Enterprise and Regulations. It's hugely overcomplicated for that specific goal only.
At home I don't care about Regulations(tm). The only reason I can see for someone to bother with it is if they want to train out of hours for a job at an MS shop.
I specifically didn't mention music because it's easy to get it DRM-free. Pretty much every online music store is DRM-free.
I also have piracy.
People who use bittorrent legally do exist, or at least there's one of us.
How would _you_ suggest I handle the 2TB of public domain media I have, then?
It seems to hold rack-mounted gear quite well.
Looks like the IKEA IVAR storage system. https://www.ikea.com/kr/en/catalog/categories/departments/li...
Have been meaning to move more to colo, especially my Wordpress install and some Wordpress.com-hosted sites, but inertia.
[0] https://support.cloudflare.com/hc/en-us/articles/204899617-A...
[1] https://www.cloudflare.com/ips/
I've always been unable to pull this off completely as I always want a way to SSH into my home network - but maybe there is a better way I can pull off this sort of 'break glass' functionality.
Guacamole (sorta) gives me that. If CloudFlare or nginx or Guacamole have problems then I'm hosed... but I work from home so remote access isn't a huge concern.
And I've got nothing terribly "household critical" at home, just the PiHole needs to be running to keep everyone happy. I do wish that PiHole had an HA solution. I've been tempted to set up a pfSense / pfBlockerNG HA pair but that's a lot of overhead just for DNS.
You could run 2 Pis, or a Pi and a container on another always-on machine, for example. Then just point your router's primary DNS to the Pi and the secondary to the other instance.
IMO, using a Tor hidden service is a (damn near) perfect solution for this.
have you made it work? my Tor career ended in college after running an exit node - no visits from the FBI, just got auto-klined from every IRC server since I was on the list of proxies.
I use one for the sites below. It is written in Java/Kotlin, but barely works anywhere except Windows.
https://egov.kz/cms/en
https://cabinet.salyk.kz/
...
Home: Two VMware hosts on Hyve Zeus (Supermicro, 2xE5 64GB), one on an HP Microserver Gen8 (E3-1240v2 16GB). PiHole bare metal on a recycled Datto Alto w/ SSD (some old AMD APU, boots faster than a Pi and like 4w). Cloud Key G2 Plus for UniFi / Protect.
VMware because it's what I'm used to. Hyper-V because it's not. Used to have some stuff on KVM but :shrug:
Docker running random stuff
Used to run Pihole until I got an Android and rooted it. Used to mess with WebDAV and CalDAV. Nextcloud is a mess; plain SFTP fuse mounts work better for me. My approach has gone from trying to replicate cloud services to straight up remoting over SSH (VNC or terminal/mosh depending on connectivity) to my home computer when I want to do something. It's simple and near unexploitable.
This is the way it should always have been done from the start of the internet. When you want to edit your calendar, for example, you should be able to do it on your phone/laptop/whatever as a proxy to your home computer, actually locking the file on your home computer. Instead we got the proliferation of cloud SaaSes to compensate for this. For every program on your computer, you now need >1 analogous but incompatible program for every other device you use. Your watch needs a different calendar program than your gaming PC than your smart fridge, but you want a calendar on all of them. M×N programs where you could have just N, those on your home computer, if you could remote easily. (Really it's one dimension more than M×N when you consider all the backend services behind every SaaS app. What a waste of human effort and compute.)
Why computer at home though? For someone who moves around a lot and doesn't invest into "a home", this would be bothersome. Not to mention it's more expensive, in terms of energy and money. I think third-party data centers are fine for self-hosting.
I guess one reason people might gravitate to home hosting is owning your own disks, the tinfoil hat perspective. You can encrypt volumes on public cloud as well, but it's still on someone else's machine. They could take a snapshot of the heap memory and know everything you are doing.
* MinIO: for access to my storage over the S3 API, I use it with restic for device backups and to share files with friends and family
* CoreDNS: DNS cache with blacklisted domains (like Pihole), gives DNS-over-TLS to the home network and to my phone when I'm outside
* A backup of my S3-hosted sites, just in case (bejarano.io, blog.bejarano.io, mta-sts.bejarano.io and prefers-color-scheme.bejarano.io)
* https://ideas.bejarano.io, a simple "pick-one-at-random" site for 20,000 startup ideas (https://news.ycombinator.com/item?id=21112345)
* MediaWiki instance for systems administration stuff
* An internal (only accessible from my home network) picture gallery for family pictures
* TeamSpeak server
* Cron jobs: dynamic DNS, updating the domain blacklist nightly, recursively checking my websites for broken links, keeping an eye on any new release of a bunch of software packages I use
* Prometheus stack + a bunch of exporters for all the stuff above
* IPsec/L2TP VPN for remote access to internal services (picture gallery and Prometheus)
* And a bunch of internal Kubernetes stuff for monitoring and such
I still have to figure out log aggregation (probably going to use fluentd), I want to add some web-based automation framework like NodeRED or n8n.io for random stuff. I'd also like to host some password manager but I still have to study that.
I also plan on rewriting wormhol.org to support any S3 backend, so that I can bind its storage with MinIO.
And finally, I'd like to move off single-disk storage and get a decent RAID solution to provide NFS for my cluster, as well as a couple more nodes to add redundancy and more compute.
Edit: formatting.
I would be _very_ interested in a write up/explanation of this set up
Essentially, this setup achieves 5 features I wanted my DNS to have:
- Confidentiality: from my ISP; and from anyone listening to the air for plain-text DNS questions when I'm on public WiFi. Solution: DNS-over-TLS[1]
- Integrity: of the answers I get. Solution: DNS-over-TLS authenticates the server
- Privacy: from web trackers, ads, etc. Solution: domain name blacklist
- Speed: as in, fast resolution times. Solution: caching and cache prefetching[2]
- Observability: my previous DNS was Dnsmasq[3], AFAIK Dnsmasq doesn't log requests, only gives a couple stats[4], etc. Solution: a Prometheus endpoint
CoreDNS ticks all of the above, and a couple others I found interesting to have.
To set it up, I wrote my own (better) CoreDNS Docker image[7] to run on my Kubernetes cluster; mounted my Corefile[8] and my certificates as volumes, and exposed it via a Kubernetes Service.
The Corefile[8] essentially sets up CoreDNS to (a rough sketch follows the list):
- Log all requests and errors
- Forward DNS questions to Cloudflare's DNS-over-TLS servers
- Cache questions for min(TTL, 24h), prefetching any domains requested more than 5 times over the last 10 minutes before they expire
- If a domain resolves to more than one address, it automatically round-robins between them to distribute load
- Serve Prometheus-style metrics on 9153/TCP, and provide readiness and liveness checks for Kubernetes
- Load the /etc/hosts.blacklist hosts file (which has just short of 1M domains resolved to 0.0.0.0), reload it every hour, and skip reverse lookups for performance reasons
- Listen on 53/UDP for regular plain-text DNS questions (LAN only), and on 853/TCP for DNS-over-TLS questions, which I have NAT'd so that I can use it when I'm outside
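A minimal Corefile sketch along those lines (illustrative only -- the actual file is what [8] was meant to show; cert paths and limits here are placeholders):

```
(common) {
    log
    errors
    loadbalance                      # round-robin answers with multiple addresses
    hosts /etc/hosts.blacklist {
        reload 1h                    # re-read the blacklist hourly
        no_reverse                   # skip reverse lookups for performance
        fallthrough                  # non-blacklisted names fall through to forward
    }
    cache 86400 {
        prefetch 5 10m               # prefetch names asked >=5 times in the last 10 minutes
    }
    forward . tls://1.1.1.1 tls://1.0.0.1 {
        tls_servername cloudflare-dns.com
    }
}

.:53 {                               # plain DNS, LAN only
    import common
    prometheus :9153                 # metrics for Prometheus
    health                           # liveness endpoint
    ready                            # readiness endpoint
}

tls://.:853 {                        # DNS-over-TLS, NAT'd for use from outside
    tls /etc/coredns/tls/cert.pem /etc/coredns/tls/key.pem
    import common
}
```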
The domain blacklist I generate nightly with a Kubernetes CronJob that runs a Bash script[9]. It essentially pulls and deduplicates the domains in the "safe to use" domain blacklists compiled by https://firebog.net/, as well as removing (whitelisting) a couple hosts at the end.
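Roughly, such a script (not the actual one from [9]; the list URLs and whitelist entries below are placeholders) boils down to:

```
#!/usr/bin/env bash
set -euo pipefail

# blacklist sources, e.g. from the firebog.net "safe to use" collections
lists=(
  https://v.firebog.net/hosts/AdguardDNS.txt
  https://v.firebog.net/hosts/Easyprivacy.txt
)
whitelist=(example-false-positive.com)

tmp=$(mktemp)
for url in "${lists[@]}"; do curl -fsSL "$url"; done \
  | sed 's/\r$//' \
  | grep -vE '^\s*(#|$)' \
  | awk '{print $NF}' \
  | sort -u > "$tmp"

# remove (whitelist) a couple of hosts at the end
for host in "${whitelist[@]}"; do
  grep -vxF "$host" "$tmp" > "$tmp.new" && mv "$tmp.new" "$tmp"
done

# resolve everything to 0.0.0.0 in hosts-file format
awk '{print "0.0.0.0 " $0}' "$tmp" > /etc/hosts.blacklist
```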
That's pretty much it. The only downside to this setup is that CoreDNS takes just short of 400MiB of memory (I guess it keeps the resolve table in memory, but 400MiB!?) and lately I'm seeing some OOM restarts by Kubernetes, as it surpasses the 500MiB hard memory limit I have on it. A possible solution might be to keep the resolve table in Redis, which might take up less memory space, but I have yet to try that out.
[1] Which I find MUCH superior to DNS-over-HTTPS. The latter is simply an L7 hack to speed up adoption; the correct technical solution is DoT, and operating systems should already support it by now (AFAIK, the only OS that supports DoT natively is Android 9+).
[2] It was when I discovered CoreDNS' cache prefetching that I convinced myself to switch to CoreDNS.
[3] http://www.thekelleys.org.uk/dnsmasq/doc.html
[4] It gives you very few stats. I also had to write my own Prometheus exporter[5] because Google's[6] had a fatal flaw and no one responded to the issue. In fact, they closed the Issues tab on GitHub a couple months after my request, so fuck you, Google!
[5] https://github.com/ricardbejarano/dnsmasq_exporter
[6] https://github.com/google/dnsmasq_exporter (as you can see the Issues tab is no longer present)
[7] https://github.com/ricardbejarano/coredns, less bloat than the official image, runs as non-root user, auditable build pipeline, compiled from source during build time. These are all nice to have and to comply with my non-root PodSecurityPolicy. I also like to run my own images just so that I know what's under the hood.
[8]
[9]
DoT is a protocol explicitly designed for its purpose.
If brickhead sysadmins block DoT it's their problem, and if you have to work around that then it is, in fact, a hack (or a "workaround", doesn't matter).
It's not that DoT or DoH are superior to one another, it's that DoT is "DNS in TLS", and DoH is "DNS in HTTP in TLS", doesn't that raise a red flag for you?
The idea that end-users should give a shit about any of this "L7" "purpose built" "control plane" "layering violation" nonsense, and opt themselves into a version of DNS privacy that their network operators can turn off for them without end-user consent, is lunacy; bamboozlement.
I agree that end-users shouldn't have to care about DNS privacy; it should be private by default. But it is up to us to promote the correct protocol over the "hacky" one.
If DNS-over-HTTPS is superior, then why don't we shove everything down 443/TCP? Or better yet, why don't we get rid of TCP altogether and send everything over a port-less, encrypted, dynamically-reliable transport protocol? Surely middlemen couldn't distinguish between traffic.
Ports are there for a reason. The fact that they are used with anti-end-user intent doesn't make them (or any protocol that runs on them) inherently bad. Yet one thing that makes a protocol better than another, given a set of requirements, is efficiency.
By the way, if I were to switch my DoT server from 853/TCP to 443/TCP, the port wouldn't be a problem anymore. Per your standards, now DoT would be better than DoH, wouldn't it? Same results, smaller payloads.
I gave you an apples to apples protocol comparison. If you tell me there's a single bit that lets you distinguish between HTTPS traffic and DoT traffic running both on 443/TCP, then I'll buy your "kill switch" argument.
And even if you do, nothing keeps me from saying farewell to my ISP as soon as they press that switch.
I learnt about CoreDNS because Kubernetes uses it for service discovery, and once I read about its "chaining plugins" philosophy I wanted to try it out.
And it was so refreshing coming from Dnsmasq that I fell in love with it.
I remember comparing low-power home servers, consumer NAS boxes and a refurb ThinkPad, and the latter won on price/performance and idle power consumption (<5W). You also get a built-in screen & keyboard for debugging and an efficient DC-UPS if you're brave enough to leave the batteries in. That's of course assuming you don't need multiple terabytes of storage or run programs that load the CPU 24/7, which I don't. These days an RPi 4 would probably suffice for my needs, but I still think the refurb ThinkPad is a smart idea.
I do leave the batteries in. Is it dangerous? I read some time ago that it is not dangerous, but the capacity of the battery drops significantly, I don't care about capacity, and safe shutdowns are important to me.
In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind selling as I don't use it), but I had to find a solution for the noise. And power consumption came to around 18EUR a month at my EUR/kWh rate.
Cramming down what ran on 12 cores and 48GiB of RAM on a 2-core, 4GiB (I only upgraded the memory 2 months ago) machine was a real challenge.
The ThinkPad cost me 90EUR (IBM refurbished), we bought two of them, the other one burnt. The recent upgrades (8GiB kit + Samsung Evo 1TB) cost me around 150EUR. Overall a really nice value both in compute per EUR spent and in compute per Wh spent. Really happy with it, I just feel it is not very reliable as it is old.
It's not necessarily dangerous but lithium batteries have a chance to fail and in very rare cases even explode, making them a potential fire hazard. I'm not an expert, maybe someone else can expand on this. If I were to run an old laptop of unknown provenance with a LiIon battery 24/7 completely unattended I'd at least want to make sure that it is on a non-flammable surface without any flammable items nearby.
>In the past I used an HP DL380 Gen. 7 (which I still own, and wouldn't mind selling as I don't use it), but I had to find a solution for the noise. And power consumption came to around 18EUR a month at my EUR/kWh rate.
Yes, I am surprised how many people leave power consumption out of the equation. These days you can rent a decent VPS for the power cost of an old refurb server alone.
Well, I'm removing the battery and the pseudo-UPS logic right now. The battery looks fine, but I'm not taking any risks, since it's on top of the DL380 but under a wooden TV stand.
Thanks for the heads up! You might have prevented a fire.
Should be fine if they're not swollen / getting very hot
FYI: the control plane takes about 150m (milli-cpu) and ~1.5GiB of memory in a host with my specs.
The thing is, I don't use Kubernetes for convenience or because I need it; I use it to learn it.
I was just fine with Docker Swarm before switching, but I wanted to learn Kubernetes as a valuable skill, and I know no better way of learning something than using it every day.
And the thing about Kubernetes distros is that they usually all apply a new layer of "turning Kubernetes' complexity into a turn-key process", and I don't want that.
If you know the ins and outs of K8s, sure, use any distro you like, but if you want to learn something, better to learn the fundamentals first. It's like learning Linux's internals instead of learning how Ubuntu is put together: one applies to a single distro and the other applies to every distro ever.
k3s is not very far from the fundamentals. It's really just "one binary" instead of many for the space savings/ simple deployment.
That said, consider Kubernetes in Action by Manning. I'm about 75% done now, was a great help, and I'm continuing with k3s after doing it.
I bought Kubernetes Up & Running a year ago, I was disappointed to see it is a very over-the-top view, without getting into details.
I skimmed over Kubernetes in Action a couple months ago. Nothing really caught my eye either.
The last one I read was Kubernetes Security by Liz Rice. Either there's not that much to securing Kubernetes or the book is very introductory too.
The only parts of K8s I don't know a lot about are storage (haven't got past the NFS driver yet), CRDs and distributions like OpenShift. But in the same way I'm lacking storage expertise outside of Kubernetes.
I could set up a demo if you want to.
It's a cheap Flask app that scans a given "library" directory for "album" subdirectories, which contain the pictures you want to display.
It has a big issue with image size (16 images per page, my phone takes 5MB pictures, 80MB per page is HUUUGE). Thumbnailing would be great. I'm open for PRs ;)!
If anyone knows about a better alternative... I set this up when we got back from one vacation for my relatives to easily see the pictures (without social media).
Right now I have public (read-only) and private buckets only, and I'm the only who writes into any of them.
Public buckets contain files I didn't even create myself and that friends might find useful (Windows ISO, movies, VirtualBox VMs...). Privates have, well, private data, and can only be accessed using my admin account's credentials.
IIRC MinIO has access control through users, but I'm still very new to MinIO to the point where I discover new features every time I use it.
If I were to give someone else their own buckets I'd probably run a second instance to keep things separate, though. I'm even considering running another one myself to keep private buckets only accessible from my home network... (right now the entire instance is reachable from WAN, regardless of whether they are public or not).
https://github.com/epoupon/lms for music
https://github.com/epoupon/fileshelter to share files
Everything is packaged on Debian buster (amd64 and armhf) and runs behind a reverse proxy.
One UI question: is there a reason you left off volume controls? That's something that still annoys me about Bandcamp, and I had submitted a patch to Mastodon to create a volume control for their video component.
I have around 10 desktops that run in containers in various places for various common tasks I do. Each one has a backed up homedir, and then I have a ZFS-backed fileserver for centralized data. I connect to them using chrome remote desktop or x2go. I've had my work machine die one time too many, so with these scripts I can go from a blank work machine to exactly where I left off before the old one died, in a little over an hour. None of my files are stuck to a particular machine, so I can run on a home server, and then when I need to travel, transfer the desktop to a laptop, then transfer it back again when I get home. Takes about 10 minutes to transfer it.
https://github.com/kstenerud/virtual-builders
I also run most of my server apps this way:
https://github.com/kstenerud/virtual-builders/tree/master/ma...
Incoming mail points directly to an RPi at home on dsl... Postfix + Dovecot IMAP. It's externally accessible, my dedicated server does the dynamic dns to point to the RPi; the domain MX points to that. Outgoing mail forwards through the dedicated server, which has an IP with good reputation and DKIM.
This gets me a nice result that my current and historical email is delivered directly to, and stays at, home, and my outgoing mail is still universally accepted. There's no dependency on google or github. There's no virtualization, no docker, no containers, just Linux on the server and on the rpi to keep up to date. It uses OS packages for everything so it stays up to date with security updates.
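For anyone curious, the relaying half of that is only a few lines of Postfix main.cf on the RPi, along these lines (hostname and credential paths are placeholders):

```
# /etc/postfix/main.cf on the home RPi: send all outbound mail via the dedicated server
relayhost = [relay.example.net]:587
smtp_tls_security_level = encrypt
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
```

The dedicated server then sends it out from its well-reputed IP with DKIM signing.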
I also host Aether P2P (https://getaether.net) on a Raspberry Pi-like device, so it helps the P2P network. But I’m biased on that last one, it’s my own software.
I blog about this stuff if anyone’s interested: https://thegeekbin.com/
You don't want a less-tested web app to expose some security hole that lets someone start snooping on your traffic toward Bitwarden after SSL termination.
If you don't want an extra box at home, you can always get a $5/mo cloud instance for public stuff, where you don't have to worry about an increased electricity bill, or about a DDoS spiking the CPU or choking your home network.
On the front end I have two 1Gbit circuits (AT&T and Google) going into an OPNSense instance doing load-balancing and IPS running on a Dell R320 with a 12-thread Xeon and 24GB of RAM
Services are hosted on a Dell R520 with 48GB RAM and two 12-thread Xeons running Ubuntu and an up-to-date ZFS on Linux build.
Media storage handled by two Dell PowerVault 1200 SAS arrays.
Back-end is handled by a Cisco 5548UP and my whole apartment is plumbed for 10Gbit.
Holy hell. How did that come about?
I live in a stable first-world democracy. Or, since it seems to be getting less stable recently, maybe a better way to put it is: I participate in a stable global economy. If "the cloud" catastrophically fails to the point where I lose all of the above without warning, I will likely have bigger problems than never being able to watch a favorite tv show again.
I wonder if this exposes two kinds of people: those who value mobility, and are more comfortable limiting the things that are important to them to a laptop and a bug-out bag, and those who value stability, and are inclined to build self-sufficient infrastructure in their castles.
I don't self-host a lot of services (and the ones I do could go away tomorrow without hurting me much), but I only have one cloud resource: email. It kind of has to be that way for various reasons; I'd self-host it if I could reasonably do so. I also think I value my $75/mo more than I value an endless stream of entertainment.
(edit: just wanted to say, thanks for posting this. It is a valuable discussion point.)
By definition, self-hosting means the service is under my control, doing what I need, customized for my use cases. And because I use only open source stacks, I can (and have) even modify the code to customize even further.
And that's ignoring the fact that free, self-hosted options can often provide features that third-party services cannot for legal, technical, or support reasons.
For example, my TT-RSS feed setup uses a scraper to pull full article content right into the feed. A service would probably land in legal trouble if they did this. And while it works incredibly well, like, 90% of the time (thank you Henry Wang, author of mercury-parser-api!), if it was a service, that 10% could result in thousands of support emails or an exodus of subscribers.
https://github.com/HenryQW/mercury_fulltext
The directions there are pretty clear. You've gotta set up the mercury parser API service (I used docker) and then enable the plugin for the feeds you want to apply it to.
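In practice the Docker part is a single docker run along these lines (the image name and port are assumptions from memory -- check the plugin's README for the exact ones it expects):

```
# run the Mercury parser API locally; point tt-rss's mercury_fulltext plugin at http://<host>:3000
docker run -d --name mercury-parser --restart unless-stopped \
    -p 3000:3000 wangqiru/mercury-parser-api
```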
Alternatively you could use the Readability plugin that ships with tt-rss, but I have no idea how effective it is as I never tried it.
Finally, you could stand up the RSS full text proxy:
https://github.com/Kombustor/rss-fulltext-proxy
That service stands between your RSS feed reader of choice and the RSS feed supplier, and does the scraping and embedding.
[0] https://github.com/huan/docker-simple-mail-forwarder
* It's a target for my rsync backups for all my client systems (most critical use); Docker TIG stack (Telegraf, InfluxDB, Grafana) which monitors my rackmount APC UPS, my Ubiquiti network hardware, Docker, and just general system stats; Docker Plex; Docker Transmission w/VPN; Docker Unifi; A custom network monitor I built that just pings/netcats certain internal and external hosts (not used too seriously but it comes in handy); and finally a neglected Minecraft server.
I went for low power consumption since it's an always-on device and power comes at a premium here + fanless. I highly suggest the NUC as it's a highly capable device and with plenty of power if upgraded a bit!
https://dischord.org/2019/07/23/inside-the-sausage-factory/
At home I have:
The DS412+ is my main network storage device, with various things backed up to the Microserver. Aside from the OEM services it also runs Minio (I use this for local backups from Arq), nzbget, and Syncthing in Docker containers.
FreeBSD server running various things:
* Home Assistant, Node-RED, and some other home automation utilities running in a FreeBSD Jail.
* UniFi controller in a Debian VM.
* Pi-Hole in a CentOS VM.
* StrongSwan in a FreeBSD VM.
* ElasticSearch, Kibana, Logstash, and Grafana running in a Debian VM.
* PostgreSQL on bare metal.
* Nginx on bare metal, this acts as a front-end to all of my applications.
I also have:
* Blue Iris on a dedicated Windows box. This was a refurbished business desktop and works well, but my needs are starting to outgrow it.
* A QNAP NAS for general storage needs.
Future plans are always interesting, so in that vein here are my future plans:
Short term:
* Move my home automation stuff out of the FreeBSD Jail into a Linux VM. The entire Home Assistant ecosystem is fairly Linux-centric and even though it works on FreeBSD, it's more pain than I'd really like. Managing VMs is also somewhat easier than managing Jails, though I'm sure part of this is that I'm using ezjail instead of something more modern like iocage.
* Get Mayan-EDMS up and running. I hate paper files, this will be a good way to wrangle all of them. I've used it before, but didn't get too deep into it. This time I'm going all-in.
Medium term:
* Replace my older cameras with newer models.
* Possibly upgrade my Blue Iris machine to a more powerful refurbished one.
* Create a 'container VM', which will basically be a Linux VM used for me to learn about containers.
Long term:
* Replace my FreeBSD server with new hardware running a proper hypervisor (e.g., Proxmox, VMware ESXi). This plan is nebulous as what I have meets my needs, this is more about learning new tools and ways of doing things.
• Apache: hosting a few websites and a personal (private) wiki.
• Transmission: well, as an always-on torrent client. Usually I add a torrent here, wait for it to download and then transfer it via SFTP to my laptop.
• Gitea: mostly to mirror third party repos I need or find useful.
• Wireguard: as a VPN server for all my devices and VPS, mostly so I don't need to expose SSH to the internet. Was really easy to set up and it's been painless so far (a rough config sketch follows this list).
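A rough sketch of that kind of WireGuard server config (keys, addresses and port are placeholders):

```
# /etc/wireguard/wg0.conf on the server
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# laptop
PublicKey  = <laptop-public-key>
AllowedIPs = 10.8.0.2/32

[Peer]
# phone
PublicKey  = <phone-public-key>
AllowedIPs = 10.8.0.3/32
```

Bring it up with `wg-quick up wg0`; each device gets its own [Interface] plus a single [Peer] pointing back at the server.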
I also used to have all my DVDs ripped onto my media server, but I never really watched any of them, so now they are just gathering digital dust on some offline disks.
The other thing that is bothering me is that songs keep disappearing from my playlists every once in a while.
People keeping their own movie library makes perfect sense, as there are still no services today (that I know of) that have access to all the movies a certain person might want, and when they do, the service is bastardised by some region lock.
(You didn't by any chance sail around Cape horn in 2016? I met this really cool older couple in Central America who had been living at sea for 17 years.)
Reading all of the replies I realize that sometime between 2007 and 2012 I just gave up entirely on storing media locally. I don't watch movies (e.g. no cable or netflix), but I've been using spotify for a decade maybe? One response makes a good point: it is a waste of overall bandwidth to stream content.
Sure I could buy DRM-laden stuff from some online store but there's no guarantee I can access it forever. I could buy a bunch of Blu-Rays or DVDs and stick them on a shelf but that's not convenient. I could pay for a subscription service but not a single one has anything close to everything I want to watch.
- httpd
- nextcloud (mostly for android syncing, for normal file operations I prefer sftp). Nextcloud is great but the whole js/html/browser is clumsy.
- roundcube (again mostly imap, but just to have an alternative when the phone isn't available - I haven't used it for ages)
- postfix
- dovecot
- squid on separate fib with paid vpn (mitming all the traffic, removing all internet "junk" from my connections, all my devices, including android are using it over ssh tunnel).
- transmission, donating my bandwidth to some OSS projects
- gitolite, all my code goes there
I think this is it.
Everything is running on mitx board, with 16gb of ram, 3x 3tb toshiba hdds in zraid and additional 10tb hitachi disk. FreeBSD. 33 watts.
it costs about $800/month for the half cage and all the hardware in it, when you amortise it out. And there's plenty of performance overhead for when one project gets a lot of attention or I want to add something new.
Pretty much the only thing I use cloud computing for is the nightly job for S3stat, because it fits the workload pattern that EC2 was designed for. Namely, it needs to run 70 odd hours of computing every day, and gets 3 hours to do it in.
For SaaS sized web stuff, self hosting still makes the most sense.
So I set up Yunohost [0] on a small box, and now I install self hosted services whenever I need them. Installing a new service is a breeze–but more importantly, upgrading them is a breeze too.
For now I self host Mattermost, Nextcloud, Transmission.
[0] https://yunohost.org
Tbh I run hot and cold about self-hosting since, after work, I really really want to be able to relax at home.
Not wonder why the hell my nuc hasn't come up after a reboot. Or why is it so hard to increase the disk space on my FreeNAS https://www.ixsystems.com/community/threads/upgrading-storag...
I wasn't happy with any of the free wiki hosting solutions available so I ended up self-hosting a mediawiki site. It's been...challenging...to convince my wife and family to adapt and use wiki markup.
I've been considering switching to something that uses standard markdown instead since it's easier to write with.
For me I'm just after a simple pure text knowledge-base.
Currently I use vuepress https://vuepress.vuejs.org/
The positives with vuepress for me were:
* Plain Markdown (With a little bit of metadata)
* Auto generated search (Just titles by default)
* Auto Generated sidebar menus
The negatives:
* No automatic site contents, I mostly use the search to move around docs
* Search is exact not fuzzy
* The menu settings are in a hidden folder
I used to self-host a lot more, but have been paring back recently.
Home automation/security system + 'Alexa': completely home grown using python + android + arduino + rpi + esp32
I have hosted media folders/streaming applications for friends and family, but this has been by far my most used and most useful hack.
* Unbound for dns-over-tls and single point of config hostnames for my home network
* Syncthing for file sync
* offlineimap to backup my email accounts
* Samba for a home media library
* cron jobs to backup my shares
* Unifi controller
On my todo list:
* Scheduled offsite backup (borg + rsync.net being the top contender currently)
* Something a bit more dedicated to media streaming than smb. some clients like vlc handle it fine, others do not.
* Pull logs for my various websites locally
What do you all spend on this sort of thing? Whether hosting remotely or on local hardware, what would you say is the rough monthly/annual cost to move your Netflix/Spotify/etc equiv to a self-hosted setup (excluding own labor)?
Websites - nothing. Using a GCP free server. About to move it to Oracle's free VMs though, thanks to GCP's IPv4 shenanigans and Oracle's free offering being better (higher IO & you get two VMs).
Personally I have a home server which has minimal monthly costs. I just buy disks every now and then.
- A weather station that lives on a pole on the yard. Powered by GopherWX https://github.com/chrissnell/gopherwx
- InfluxDB for weather station
- Heatermeter Barbecue controller
- oauth2_proxy, fronted by Okta, to securely access the BBQ controller while I'm away. This proxy is something that everyone with applications hosted on their home network should look into. Combined with Okta, it's much easier than running VPN.
In the public cloud, I host nginx, which runs a gRPC proxy to the gopherwx at home. I wrote an app to stream live weather from my home station to my desktops and laptops and show it in a toolbar.
nginx in the cloud also hosts a public website displaying my live weather, pulled as JSON over HTTPS from gopherwx at home.
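The consumer side of that can be tiny; a sketch assuming a hypothetical endpoint and field names (the real gopherwx API may differ):

    # Sketch: poll a public weather JSON endpoint and print current conditions.
    import time
    import requests

    WEATHER_URL = "https://weather.example.com/current.json"  # placeholder URL

    def poll(interval=60):
        while True:
            try:
                data = requests.get(WEATHER_URL, timeout=5).json()
                # Field names are assumptions; adjust to whatever the station emits.
                print(f"{data.get('temperature')} deg, wind {data.get('wind_speed')}")
            except requests.RequestException as exc:
                print(f"fetch failed: {exc}")
            time.sleep(interval)

    if __name__ == "__main__":
        poll()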
I have a second raspberry pi running a version of Kali Linux. I only hack my own stuff for learning.
Once upon a time I ran a public facing website and quake server, and published player stats. No time these days for much play.
Man, at my last job in a large enterprise, I WISH they were running fingerd. Would have made for some pretty cool, lightweight integrations.
https://github.com/HaschekSolutions/opentrashmail
(I guess these may not really be “self-hosted” since I don’t make them publically accessible through ports ... just vpn in to my home network)
- my websites with nginx
- IRC (ngircd)
- ZNC
- espial for bookmarks and notes
- node-red to automate RSS -> twitter and espial -> pinboard
- transmission
- some reddit bots manager I’ve written in Haskell+Purescript.
- some private file upload system mostly to share images in IRC in our team
- goaccess to self host privacy respecting analytics
At home, Plex.
Basically all the stuff I don't want to pay a cloud provider to host.
Overall the R720 with 48GB of RAM has been one of my best buys hands down. Down the road I plan on grabbing a second server and a proper NAS or Unraid setup.
- docker (just a dev env with a lot of images; almost everything I can is tested in there, and maybe used there too. Only in a VM if it's a desktop gadget or app)
- Calibre
- Windows Media share feature for remote videos on devices and TV (don't really like it, it messes with subtitles, and I'll look for an OSS Docker alternative)
Wish list:
- wallabag
- firefox-sync (still stuck on Chrome, haven't found an alternative for this yet)
- email sync
It's not so great for now. I'm looking through this thread for contacts and calendar options (currently using the classic cloud providers).
Everything. I keep infrastructure simple as I found as a developer, infrastructure configuration, dependency issues and updates took an extraordinary amount of time while providing zero benefit for products of a small to medium size. I do have a plan in place should I need to scale, but it is not worth maintaining an entirely different stack full of dependencies for the off chance I get a burst in traffic I can't handle.
- mail server in Docker container
- ZNC in Docker container
- Shadowsocks server
- Wekan as a Snap
- My blog, statically generated using Pelican, served from nginx
At home, I only have a Synology NAS that is exposed to the internet.
I am unhappy with the complexity of Mayan EDMS. I'm debating moving to Paperless. All I want is a digital file system that 1) looks at directories and automatically handles files 2) has user permissions/personal files so I can let my family use it 3) has a web form for uploads.
I am planning to change Gitea to Sourcehut - the git service as well as builds.
Any ideas for things a raspberry pi 3 & 4 could be useful for?
I use NFS on the NAS for the storage unit. It's the only thing I need to backup.
Relying on streaming providers, cloud email services, etc., has left me in a very foul mood lately and I feel like I need to take back control. My biggest trigger was when I purchased an actual physical audio CD (this year; because NONE of the popular streaming providers offer the album), ripped it to FLAC, and then realized I had no reliable/convenient way to expose this to my personal devices. I used to have a very elaborate setup with subsonic doing music hosting duty, and all of my personal devices were looped in on it. This was vastly superior to Spotify, et al., but the time it takes to maintain the collection and services was perceived to be not worth it. From where I am sitting now, it's looking like it's worth it again.
How long until media we used to enjoy is squeezed completely out of existence because a handful of incumbent providers feel it's no longer "appropriate" for whatever money-grabbing reasons?
* Pleroma/Mastodon - I had been using Pleroma, but I'm not happy about a few things, so I bit the bullet to upgrade to a t3.small and am now running Mastodon. I love all the concepts of the fediverse, though the social norms are still being ironed out.
* Write Freely (https://writefreely.org/) at https://lesser.occult.institute for my blog (right now mostly holds hidden drafts)
* Matrix (Synapse) and the Riot.im frontend for a group chat. I'm a little conflicted, because right now the experience around enabling E2EE is very alarming for low-tech users and a pain for anyone who signs in from many places, and if it isn't enabled I have better security just messaging my friends with LINE. That said, I really want to write some bots for it. Group chats are the future of social networking, they all say...
Surprisingly (at least to me), there are some really big companies like Microsoft, IBM/RedHat, and others pushing this workflow. The editor is supposed to basically be VSCode in browser and compatible with most extensions.
I'm using my RPi as a jump box and have some commands to turn on my home desktop + mount the file system and that kind of stuff when connecting. I've used it in the past and it's worked nicely.
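The "turn on my home desktop" part is most likely wake-on-LAN; a minimal sketch, with a placeholder MAC address (it assumes the Pi and the desktop share a LAN and WOL is enabled in the desktop's firmware):

    # Sketch: send a wake-on-LAN magic packet, which is 6 bytes of 0xFF followed
    # by the target MAC repeated 16 times, broadcast over UDP.
    import socket

    def wake(mac, broadcast="255.255.255.255", port=9):
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    if __name__ == "__main__":
        wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the home desktop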
I got k8s running but got blocked by some bugs when installing Che. Looks neat though. It would be cool to have a 2007 macbook with the computing power of a 2990WX workstation :).
The orchestrator can now deploy itself! All declarative service configuration with autoscaling etc. It manages the infra and service deployment for me. Thinking about open sourcing.
Nginx/nchan, NodeJS, static sites (vanilla/angular/react deployments), nfs, MongoDB, Redis
I still have the email domain, because it's easier to run it forever than migrate all the things you signed up for. But actually running my own email is too much of an obligation, and I'd need to keep up on all the anti-spam measures.
VMware ESXi, with VM's for Squid, DNS, MySQL, Nginx, Apache, basic file server, Gitlab, and one that's basically for IRSSI
Strongly considering just moving everything to Debian with containers for everything, easier to manage than VM's.
On colo’d hardware:
- off-site backup server (Borg backup on top of zfs) - this is a dedicated box
- a mix of VMs and docker containers - mostly custom web apps
- email (it’s easier than you think)
At home:
- file server using zfs
- Nextcloud
- more custom web apps
- tvheadend
- VPN for remote access (IKEv2)
- gitlab
- gitlab ci
Also run an IPSec mesh between sites for secure remote access to servers etc
While my workplace uses AWS a massive amount, I still prefer to run my own hardware and software. Cloud services are not for me.
* Nextcloud - your own Dropbox! Amazing stuff.
* VPN - simple Docker service that is super reliable and easy to set up (docker-ipsec-vpn-server)
* Ghost - a very nice lean and mean blogging CMS
* MQTT broker for temperature sensors
* Samba server
* Deluge - Torrent client for local use
* Sabnzbd - NZB client
* Gitea - my own Git server
* Mail forwarder - very handy if you just want to be able to receive email on certain addresses without setting up a mailbox
* Pihole - DNS ad-blocking
* Jellyfin - self-hosted Netflix
It's become sort of my hobby to self-host these kinds of things. I use all of these services almost daily and it's very rewarding to be able to fully self-host it. I also really love Docker; self-hosting truly entered a new era thanks to readily available Docker images that make it very easy to experiment and run things in production without having to worry about breaking stuff.
Of course, you can't even tell macOS not to suspend wifi (or whatever) if you close the lid while on battery, so now I'm trying to move it to a Raspberry Pi 4. But I've hit an obscure SSL error with OTP 22 on it while querying an API, so I'm trying to debug that instead... oh, the joy.
All my side projects and some clients are hosted old-school style on dedicated servers. I do overpay because it's been the same price and machine since 2013, and yet it's still way cheaper than any cloud offering, especially because of hosted database pricing.
TT-RSS + mercury-parser + rss-bridge + Wallabag to replace Feedly and Pocket.
Syncthing + restic + rclone and some home grown scripting for backups.
Motion + MotionEye for home security.
Deluge + flexget + OpenVPN + Transdroid.
Huginn + Gotify for automation and push notifications.
Apache for hosting content and reverse proxying.
Running on a NUC using a mix of qemu/kvm and docker containers.
Huginn came into being because I wanted a way to republish some of my emails as an RSS feed that I could subscribe to with TT-RSS (e.g. Matt Levine's newsletter), and for that purpose alone it's justified its existence.
I've also used it as the plumbing that connects my various services to Gotify (Huginn makes a Webhook available and the event gets routed to Gotify). This is, admittedly, entirely unnecessary; I could just hit Gotify directly. But putting Huginn in the middle could give me some flexibility later... and it's there, so, why not use it? :)
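Hitting Gotify directly really is just one HTTP POST; a sketch, with the server URL and application token as placeholders (check your Gotify server's API docs if the /message endpoint differs on your version):

    # Sketch: push a notification straight to a Gotify server.
    import requests

    GOTIFY_URL = "https://gotify.example.com"       # placeholder instance
    APP_TOKEN = "replace-with-application-token"    # placeholder token

    def notify(title, message, priority=5):
        resp = requests.post(
            f"{GOTIFY_URL}/message",
            params={"token": APP_TOKEN},
            json={"title": title, "message": message, "priority": priority},
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        notify("Huginn event", "New item matched a TT-RSS rule")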
- Nginx
- Nextcloud (with Calendar/Contacts on it)
- IRC client (thelounge)
- IRC server
- DLNA server
- Ampache server
- video and photo library thru NFS (locally only)
- OpenVPN
- Shiori for bookmarks
- Gitea for private projects
- Syncthing (to keep a folder synchronized across my devices)
- Jenkins
What do you feed into Grafana?
I have a home server + some raspberry pis lying around that I want to start using.
The only things I host are either just hobbies or non-essentials:
At home:
- Node-red for home automation
- PiHole for ad filtering on the local network
- Plex on my NAS for videos
- A Raspi for reading my Ruuvitags and pushing the info to MQTT
On Upcloud and DigitalOcean and a third place:
- Unifi NVR (remote storage for security cameras)
- Flexget + Deluge for torrents
- InfluxDB + Grafana for visualizing all kinds of stuff I measure
- Mosquitto for MQTT
- Nextcloud
- Mailu.io
- Huginn
- Gotify
- Airsonic
- Gitea
All on a dedicated box. Planning to add password sync, wallabag, syncthing, a VPN, and a few other features. Other boxes I have run various things from DNS to backup MXes and a WriteFreely instance on OpenBSD.
Internally I host a ton of stuff, mostly linked to a Plex instance.
I notice I was a lot more keen on hosting a bunch of crap myself before I knew how to do it "right", and before devops, orchestration ("you mean running scripts in remote shells?"), cloud, or containers or any of that were things. And yet it all worked just fine back then—time spent fixing problems from my naïve "apt-get install" or "emerge" set-up process wasn't actually that bad, compared with the up-front cost of doing it all "right" these days. A couple lightly-customized "pet" servers were fine, in practice. Hm.
So then look at home projects and I wonder if I know enough to self host things, or host them on GCP in a manner that won't just invite getting hacked, running up a ridiculous bill, or leaking my private sensitive data out.
Any guidance to offer?
2) A lot of what people do is chasing nines that you don't need (and a lot of the time they don't either, but "best practices" don't you know, and no-one wants to have not been following best practices, even if doing so was more expense and complexity than it was worth for the company & project, right?) so just forget about failover load balancers and rolling deploys and clustered databases and crap like that. All of that stuff can be ignored if you just accept that you may have trouble achieving more than three nines.
3) If it's just for you, consider forgetting any active monitoring too. That can really kill your nines of reliability, but if it's mostly just you using it, that may be fine, and you won't get alerts at 3:00AM because some router somewhere got misconfigured and your site was unreachable for two minutes for reasons beyond your control. Otherwise use the simplest thing that'll work. You can get your servers to email you resource warnings pretty easily. A ping test that messages you when it can't reach your service for the last X of Y minutes (do not make it send immediately the first time it fails, the public Internet is too unreliable for that to be a good idea) is probably the fanciest thing you need. Maybe you can find some free tier of a monitoring service to do that for you and forget about it, even. (A rough sketch of such a check follows after this list.)
4) If you can mostly restrict yourself to official packages from a major distro, and maybe a few static binaries, it's really easy to just write a bash script that builds your server from scratch with very high reliability. Maybe use docker if you're already comfortable with it but otherwise, frankly, avoid it if you can and just use official distro packages instead, as it'll complicate things a lot (now you have a virtual network to route to/from/among, probably need a reverse proxy, you may have a harder time tracking down logs, and so on). Test it locally in Vagrant or just plain ol' VirtualBox or whatever, then let it loose on a fresh VPS. If you change anything on the VPS, put it in the script and make sure it still works. If you're feeling very fancy learn Ansible, but you'll probably be fine without it.
5) For security, use an SSH key, not a password, and change your SSH port to something non-default (put that in your setup script) just to cut down on failed login noise, if you feel like it. You could add fail2ban but if you've changed the port and are using a key it's probably overkill.
6) Forget centralized logging or any of that crap. If you have a single digit count of VPSen then your logging's already centralized enough. If one becomes unreachable and can't be booted again and you can't find any way at all to read its disk, and that happens more than once, consider forwarding logs from just that one to another that's more reliable if you wanna troubleshoot it. You can do this with basic logging packages available on any Linux distro worth mentioning, no need to involve any SaaS crap.
7) Backups. The one ops-type thing you actually have to do if your data's not throwaway junk is backups. Backups and strictly-used build-the-server-from-scratch + restore-from-backup scripts are kinda sorta all most places actually need, despite all the k8s and docker chatter and such. (A minimal backup script is also sketched after this list.)
8) Cloudflare exists, if you have any public-facing web services.
[EDIT] mind none of this will help you get a job anymore since everyone wants a k8s wizard AWS-certified ninja whether they need 'em or not, so don't bother if your goal is to learn lucrative job-seeking skills, but it's entirely, completely fine for personal hosting and... hate to burst anyone's bubble... an awful lot of business hosting, too. Warning: if you learn how to run servers like this you may need to invest in some sort of eye clamp to prevent unwanted eye-rolling in server-ops-related meetings at work, depending on how silly the place you work is.
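To make points 3 and 7 concrete: first, a rough sketch of the "alert only after X of the last Y checks failed" monitor, with the target URL and mail addresses as placeholders (it assumes a local MTA is listening on localhost):

    # Sketch: check a URL once a minute and email only after repeated failures,
    # so a single blip on the public internet doesn't page you.
    import collections
    import smtplib
    import time
    from email.message import EmailMessage
    from urllib.request import urlopen

    TARGET = "https://example.com/health"   # placeholder service to watch
    WINDOW = 5                              # Y: checks remembered
    THRESHOLD = 3                           # X: failures that trigger an alert

    def check():
        try:
            with urlopen(TARGET, timeout=10) as resp:
                return resp.status == 200
        except Exception:
            return False

    def alert(failures):
        msg = EmailMessage()
        msg["Subject"] = f"{TARGET} failed {failures} of last {WINDOW} checks"
        msg["From"] = "monitor@example.com"
        msg["To"] = "you@example.com"
        msg.set_content("Time to go look at the server.")
        with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
            smtp.send_message(msg)

    def main():
        history = collections.deque(maxlen=WINDOW)
        while True:
            history.append(check())
            if len(history) == WINDOW and history.count(False) >= THRESHOLD:
                alert(history.count(False))
                history.clear()  # don't re-alert every minute
            time.sleep(60)

    if __name__ == "__main__":
        main()

And for point 7, a nightly backup can be as simple as "dump, tar, ship"; the paths, database name, and remote host below are placeholders:

    # Sketch: dump the database, tar the app data, and rsync both offsite.
    import datetime
    import subprocess

    STAMP = datetime.date.today().isoformat()
    REMOTE = "backup@backup.example.com:/srv/backups/"  # placeholder target

    def run(*cmd):
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run("pg_dump", "-Fc", "-f", f"/var/backups/app-{STAMP}.dump", "appdb")
        run("tar", "czf", f"/var/backups/data-{STAMP}.tar.gz", "/srv/app/data")
        run("rsync", "-az", "/var/backups/", REMOTE)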
I've mostly fallen into it at my job because the alternative to me pushing dev services to SaaS offerings and maintaining the glue myself is a pile of poorly-maintained IT-provided Server 2008 R2 boxes.
4 Ubuntu 16.04 servers:
- Nginx/PHP for Wordpress - MySQL - Redis - Mail
Planning to expand the Nginx/PHP servers to at least two, and add load balancers. All certs are provided by an Ansible script using Let's Encrypt (yuck).
At home:
Proxmox running on two homebuilt AMD FX 8320 servers with 32GB each, with drives provided by FreeNAS on a homebuilt Supermicro server with about 10TB of usable space (on both HDDs and SSDs)
Ubuntu 16.04 Servers:
- 2x DNS - 2x DHCP - GitLab - Nagios - Grafana - InfluxDB - Redmine - Reposado - MySQL
Other:
- Sipecs
All set up via Ansible.
Next will set up a Kubernetes cluster (probably as far as I’ll get with containers).
> Resilio Sync for iPhone pictures backups and "drop box" file access
> Transmission server
> SMB share of NAS to supply OSMC boxes on every TV
> Nighthawk N7000 running dd-wrt with a 500gb flash drive attached as storage for my Amcrest wifi cameras
> Edgerouter Lite running VPN server
> Hassbian for my zwave home automation stuff
> A pi with cheap speakers that I can log into and play a phone ringing sound so my wife will look at her phone!
Also, kudos to those brave souls who are running Tor exit nodes!
Edit: Forgot a bunch
Now I only host my own project: http://billion.dev.losttech.software:2095/
Also regular Windows file sharing which I use for media server and backups.
Though I'd like to expand that. Maybe a hosted GitLab.
Also, I use it to find flats when I need to.
- Mail server (OpenSMTPD)
- IMAP (Dovecot)
- CVS server for my projects.
- httpd(8) for my website.
I still need to add rspamd for spam checking. But so far, I've received just one spam e-mail.
Out of curiosity, do you genuinely prefer CVS or just haven't migrated from a historical repo?
Also NextCloud (files, contacts and calendar), few WordPress websites and Fathom for website analytics.
cloud (time4vps 1TB storage node): borg, calibre, AdGuard
-- The home server data drive rsyncs to an internal data drive (XFS to btrfs); the btrfs drive takes a snapshot and unmounts when not in use, then the important stuff is rsynced to my VPS.
--- Home drives are backed up with borg for encryption.
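Strung together, that chain is just a handful of commands; a sketch with placeholder mount points, VPS address, important-files path, and borg repo (it assumes current/ is a btrfs subvolume):

    # Sketch: rsync the data drive to the btrfs drive, snapshot it read-only,
    # push the important subset to the VPS, run borg, then unmount.
    import datetime
    import subprocess

    SRC = "/mnt/data/"                         # home server data drive
    DST = "/mnt/btrfs-backup"                  # internal btrfs backup drive
    VPS = "user@vps.example.com:/srv/backup/"  # placeholder VPS target
    BORG_REPO = f"{DST}/borg"                  # placeholder encrypted borg repo

    def run(*cmd):
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        stamp = datetime.date.today().isoformat()
        run("mount", DST)
        try:
            run("rsync", "-a", "--delete", SRC, f"{DST}/current/")
            run("btrfs", "subvolume", "snapshot", "-r",
                f"{DST}/current", f"{DST}/snapshots/{stamp}")
            run("rsync", "-az", f"{DST}/current/important/", VPS)
            run("borg", "create", f"{BORG_REPO}::{stamp}", f"{DST}/current")
        finally:
            run("umount", DST)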
I keep looking at hosting my own mail server, but get scared off by tales of config/maintenance dramas.
All my business backups go to the same box. I have a Pi and an encrypted USB drive copying my backups to my shed from my house.
PiHole, HRCloud2, HRScan2, HRConvert2, my wordpress blog, a KB, and a few other knick-knacks. Currently working on a noSQL share tool (for auth-less large file sharing) and then maybe this idea that's been floating around my head for a Linux update server. Like WSUS for Linux.
All on a few Vultr + Digitalocean droplets, 2 raspis + 1 atomic pi, a couple HP i5 mini desktop machines, and a Dell r610 rack server with 24 cores and 48GB of ram (with about 36TB of assorted shucked and unshucked USB hard drives attached in a few GlusterFS / ZFS pools). I have a home-built UPS with about 1.5 kWh worth of lead-acid batteries powering everything, and it's on cheap Montreal power anyway, so I only pay about $0.06/kWh for electricity, plus $80/mo for Gigabit fiber. It's a mix of stuff for work and personal because I'm CTO at our ~9 person startup and I enjoy tinkering with devops setups to learn what works.
All organized neatly in this type of structure: https://docs.sweeting.me/s/an-intro-to-the-opt-directory
Some examples: https://github.com/Monadical-SAS/zervice.elk https://github.com/Monadical-SAS/zervice.minecraft https://github.com/Monadical-SAS/ubuntu.autossh
Ingress is all via CloudFlare Argo tunnels or nginx + wireguard via bastion host, and it's all managed via SSH, bash, docker-compose, and supervisord right now.
It's all built on a few well-designed "LEGO block" components that I've grown to trust deeply over time: ZFS for local storage, GlusterFS for distributed storage, WireGuard for networking, Nginx & CloudFlare for ingress, Supervisord for process management, and Docker-Compose for container orchestration. It's allowed me to be able to quickly set up, test, reconfigure, backup, and teardown complex services in hours instead of days, and has allowed me to try out hundreds of different pieces of self-hosted software over the last ~8 years. It's not perfect, and who knows, maybe I'll throw it all away in favor of Kubernetes some day, but for now it works really well for me and has been surprisingly reliable given how much I poke around with stuff.
TODOs: find a good solution for centralized config/secrets management that's less excruciatingly painful than running Vault+Consul or using Kubernetes secrets.
What might be the easiest way to achieve this? Running a Kube cluster is insane for my needs; I imagine I'd be perfectly happy with a few Pis running various Docker containers. However, I'm unsure what the easiest way to manage this semi-cloud environment would be.
edit: Oh yea, forgot Docker Compose existed. That may be the easiest way to manage this, though I've never used it.
1) Do you identify the reverse proxy by host or by path?
e.g. <service>.yourdomain.com or yourdomain.com/<service>
2) Do you still run everything over a VPN?
External services I need are directly accessible via a local reverse proxy that's publicly visible over IPv6.
For IPv4-only scenarios I proxy through a linode instance (that also hosts a few things, including my blog) which sends the traffic in over v6.
Obviously this is all fronted by a traditional firewall.
And before you ask: it's surprising how often v6 connectivity is available these days. Mobile phone providers have moved to v6 en masse, and even terrestrial internet providers are starting to get religion.
It's still not available in my workplace (surprise surprise), but other than that, much to my surprise, v6 is my primary mode of connectivity.
2) No - but I do use Cloudflare to proxy inbound traffic
- Hand-rolled Go reverse proxy with TLS from LE.
- Several Pg DBs for development.
- VPN server.
- Chisel for hosting things "from home" while running on my laptop remotely.
- Etcd
- Jenkins
- Gitea
- Pi-hole
- A few different development projects
So, mail, DNS, and a few web sites. I’ve been running something like this for more than 15 years now.
And SyncThing, https://syncthing.net/
It all started with hosting subsonic
- Ampache
- Shaarli
- Dokuwiki
- Deluge
- Hugo blog
Everything running on a cheap server from kimsufi.
* Gogs
* WordPress
* Wallabag
* Ghost
* Minio
* Email (yes, this is my primarily and only email)
* TinyTinyRSS
* NextCloud
* Meemo
* MediaWiki