Main author of tinc here: it's great to see this get to the front page of HN! Development has been quite slow the last few years. Feel free to contribute to tinc!
It would be great to get version 1.1 out the door, and then focus on the future. One possibility that is very tempting would be to use Wireguard as a back-end for the end-to-end communication between peers, but have tinc manage the whole mesh, to get the best of both worlds.
As others note, it's trivial with tinc to set up mesh networks, with both direct and indirect routing among peers. It's the basis of the CCC's ChaosVPN. It also works well as a Tor v3 onion service, and so provides an alternative to OnionCat, which will become unusable when the Tor Project deprecates the v2 onion protocol next year.
It made more sense in 1998 when tinc got its name. It refers to the Internet cabal (https://en.wikipedia.org/wiki/There_Is_No_Cabal), and three-letter agencies which would snoop your network traffic and send in unmarked vans and black helicopters if you did something they didn't like. Suggestions for a new logo are welcome!
Ah thanks for explaining this! I never knew that's what tinc stood for.
Fun fact though: In 1998 there was no worldwide traffic snooping. That only happened with the reorganisation of US and NATO intelligence after 9/11 :) But good future prediction. I do think it would have happened either way.
I just feel like tinc undermines itself with this logo. It's hard to take something seriously that doesn't take itself seriously. Even though it's an excellent project. I think something more generic like the logo on the recent android app would do (though probably a bit less generic than that!!)
How do you manage your wireguard setup? Mine breaks pretty much every time I upgrade because it depends on a kernel module. Maybe this goes away when I can upgrade to the latest kernel with native support?
Here's where we're at with WireGuard distro kernel shipping support, as of writing (July 4, 2020):
- Ubuntu Focal 20.04 LTS: native built-in
- Ubuntu Eoan 19.10: native built-in
- Ubuntu Bionic 18.04 LTS: native built-in
- Ubuntu Xenial 16.04 LTS: dkms :(
- Ubuntu Trusty 14.04 LTS: dkms :(
- Debian: native built-in
- Fedora: native built-in
- Mageia: native built-in
- Arch: native built-in
- OpenSUSE: native built-in
- SUSE Linux Enterprise: native built-in
- Alpine: native built-in
- Gentoo: native built-in
- Exherbo: native built-in
- NixOS: native built-in
- RHEL/CentOS: dkms and elrepo kmod :(
- Void: native built-in
- Adélie: native built-in
- Source Mage: native built-in
- Buildroot: native built-in
The rule of thumb here is: distros with kernel ≥ 5.6 have it native built-in, plus a few distros that have backported it, like Ubuntu, Debian, and SUSE. I'm in the process of working with other distros to get it backported; we'll see if I'm successful. I'm also maintaining a 5.4.y backport for distros who ship this LTS kernel (like Oracle's UEK), to make backporting it easier: <https://git.zx2c4.com/wireguard-linux/log/?h=backport-5.4.y>. There are instructions for each distro on <https://www.wireguard.com/install/>.
If you're presently having "update troubles", make sure you're using the latest variant of any of the "native built-in" distros written above.
If you're using distribution packages for WireGuard (whether they're in the official repos or not), they should be rebuilt with each kernel upgrade and so you shouldn't be having any issues with stability. But yes, if you upgrade to a kernel where WireGuard is part of the main kernel package (even if it's backported by your distribution) then you wouldn't have those issues either.
I use tinc both at work and personally. I recently built a hybrid k8s cluster using tinc as the flannel backend; it's reliable and easy to maintain. I also use tinc's switch mode, which works at L2, so DHCP works — WireGuard only works at L3.
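For anyone curious, switch mode is a one-line change in tinc.conf. A minimal sketch (the network name "mynet" and node names here are made up):

```
# /etc/tinc/mynet/tinc.conf
Name = node1
Mode = switch        # operate at L2 (Ethernet frames), so broadcast traffic like DHCP and mDNS passes
ConnectTo = node2
```

The default (Mode = router) forwards only IP packets, which is why DHCP and mDNS don't work there.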
Similar issue here - I want functional mDNS so I'm sticking with tinc for the foreseeable future. I'd really like to see tinc evolve to be able to use pluggable transports so that WG could form the backbone though.
WireGuard is like a modernized version of the SPTPS protocol --- SPTPS is a denatured variant of TLS --- that tinc uses, (ordinarily) coupled to the Linux networking stack. It is something you would build a modern version of tinc on top of, not a competitor to tinc. See Tailscale for an example of something that looks a lot like tinc, but built on WireGuard.
Tinc's security track record has not been especially great†, and while WireGuard and tinc are both written in C, tinc is a great ghastly blob of C, and WireGuard was written defensively by a vulnerability researcher to minimize attack surface --- the whole thing is about 4000 lines of code, and can be run without memory allocation.
So if you were just comparing SPTPS to WireGuard, it'd be no contest at all: you'd always, always prefer WireGuard. And that's what most people should do, because most people run simple access VPNs that don't need elaborate mesh routing features. For the minority that do, for now, there's Tailscale and Tinc; maybe Tinc can do a 2.0 on top of WireGuard, with its userland components in a memory-safe language.
The Linux & Android clients are fully open source. The Mac GUI is closed, but the "Linux" client runs on macOS, sans GUI. (Also runs on BSDs, and I believe Windows)
At that point, only the iOS and Windows GUI app is closed, but they're both super thin wrappers around the open source Go implementations.
But, yes, it's true that we're not entirely open. We'd planned to release a simple server implementation when we got it cleaned up, but then Headscale beat us. That seems like the most important missing bit.
I started off using tinc before wireguard existed, and now use both. Tinc provides a guaranteed full connectivity graph with indirect routing, NAT punching, and local subnet discovery as needed. Wireguard is much more performant (e.g. it can be used to secure NFS without Kerberos). Eventually someone will write a daemon for wireguard that provides tinc's features, but until then they are complementary. I use wireguard between routers to carry full subnets, and tinc is set up on individual hosts to provide fallback access.
Wireguard does more than 1:1 connections: every computer can connect to as many other computers as you like, and you can have computer A reach computer C via computer B (and vice versa, and in longer chains).
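A sketch of how that relaying is expressed, using made-up names, keys, and addresses: on A you put C's VPN address inside B's AllowedIPs, so WireGuard's cryptokey routing sends C-bound packets to B.

```
# wg0.conf on computer A (illustrative)
[Interface]
PrivateKey = <A's private key>
Address = 10.0.0.1/24

[Peer]
# computer B, which relays traffic onward
PublicKey = <B's public key>
Endpoint = b.example.com:51820
AllowedIPs = 10.0.0.2/32, 10.0.0.3/32   # B itself, plus C reached via B
```

B also needs C in its own peer list and IP forwarding enabled (net.ipv4.ip_forward=1). The catch versus tinc is that you maintain this routing by hand; nothing discovers or rearranges the paths for you.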
In my experience this is basically a turnkey setup with Tinc though as it handles all the key sharing transparently, and in your example will only use computer b to do the initial handshake and then let computer a talk directly to computer c. This is especially useful when you have large mesh networks across various LANs and WAN.
Years ago when I was building out a Docker hosting startup that never went anywhere, it was largely based on using Tinc to create a VPN so that your applications in different containers could all talk together.
2 lines in a config file. Create a key (one command). Put the server pubkey on the client. Put the client pubkey on the server. Restart both. Done. The network handles distribution of the other clients' pubkeys in case they need to mesh.
And it does this really well: I used to travel a lot in my last job. I'd stream movies from my server, and the first few secs it was stopping/starting and suddenly it would go smoothly. Eventually I became curious and did a wireshark. Guess what, it saw the high traffic and meshed both clients together even though they were both behind NAT! Really nice.
Edit: Oh yeah, you also have the tinc-up/tinc-down scripts, but they're always exactly the same except for the IP. Really, I don't find it hard at all.
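To make that concrete, here's roughly what the whole client-side setup looks like for tinc 1.0 (network name "mynet" and node names are placeholders):

```
# /etc/tinc/mynet/tinc.conf — the "2 lines"
Name = laptop
ConnectTo = server

# generate the keypair; the pubkey is appended to hosts/laptop
# (run as root)
#   tincd -n mynet -K
#
# then: copy hosts/laptop to the server's hosts/ directory,
# copy the server's hosts/server file here, restart both.
```

After that, host files for other nodes propagate over the metaconnection, which is the part that makes meshing hands-off.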
Having said all that I was looking at Zerotier, especially because the firewall rules you can set per tag. That's really nice. With tinc you have to handle that on each client. But I don't really like that the VL1 planet is always run by a third party. You can run your own Network Controller at VL2 level but not your own VL1 planet if I understand it correctly. I know, it can't do anything with that, but I just prefer having no dependencies on anyone.
Nevertheless I will give it a try.. I don't see it being as simple as tinc, config-wise though! It is more powerful but that also makes it more complex.
Yes they're there! They're just not in /etc on a Mac - this is because you can't write there anymore in the latest macOS. Apple has locked that down a lot even before Catalina (system integrity protection)
I think they're under /usr/local/opt or something.. This is a homebrew thing, nothing to do with tinc really. I'm not on my Mac now so I can't check, but they're definitely somewhere.
Tinc 1.1 is actually fairly easy to set up - it's only a couple of commands for defining the name of the network, getting a link nodes can use to join etc. The only problem is that it's technically not a full release, so most distros don't carry it. Compiling isn't difficult (a simple ./configure && make && sudo make install), but it could be better. My only major issue with Tinc is lack of any central authority making it really niche - revoking keys seems to involve deleting them on each connected machine. Not fun.
From what I can tell ZeroTier seems nice as long as you're okay with using ZeroTier's servers for things (a curious trend I've noticed — so-called decentralised services are always great until you want fully independent servers). Sure, you can find GitHub issues telling you it's possible to set up your own planets, but the software seems somewhat complex and there's no documentation for it, and moons (what is with this lame terminology anyway?) will ping ZeroTier by default.
Oh thanks, I wasn't aware of that change in Tinc 1.1, I'll try it out. I'm indeed using the distro version; I compiled 1.1 once to have that network view (where you see which nodes are connected) but it wasn't that useful so I didn't bother with it. Didn't realise they simplified the joining too.
Though I never thought the joining and revocation was all that difficult anyway. I just have 2 central servers which carry every key, and the others just need to have the server keys. Everything else gets distributed automatically. So keys for clients you can keep on your servers only. You don't have to delete it from all of your clients!
I know tinc doesn't really have a client/server concept but I consider clients those devices that aren't publicly reachable (behind NAT) and not used in a ConnectTo statement.
And yeah the centralised VL1 "Planet" server in ZeroTier also bothers me. I know it plays no part in the actual access rights to the network and it can't see the traffic content, but still. I just want to run it myself.
yup, `sudo tinc -n %VPNNAME% invite %CLIENTNAME%` will generate an invite URL on the server side and `tinc join %INVITEURL%` will let you join it. It's definitely really easy now, but unfortunately it's marked as a pre-release, which sets tinc back a bit imho.
As for revocation, I have a similar setup and I agree. My worry is that in a theoretical situation an attacker could get access to the network and then spread their key to the entire network, and there's little you can do about it. For personal use it's fine (I use it), but because of this, I would be wary of using Tinc for some sort of production use (although I've heard of people doing it). Even if it's a big IF, since you need to actually have access to a node to generate an invite, the attack surface is still there and there's no good way to undo it.
Yeah, 1.1 should really go into production at some point, it's been in beta for years.
It's really a good point you have about a hacker adding themselves to the network. I never thought of that. That could happen with every client so the attack surface is pretty big. It would be great if this feature could be reserved to only certain trusted devices (like the ones I have designated 'servers').
So maybe there is a good point to at least monitor the network activity with this 1.1 feature. Hmm... Thanks for getting me thinking about this!
I agree, but the actual simplicity (no meta server etc.) is amazing.
I've found it to be incredibly reliable and low overhead. (I've been using it for years for MySQL replication and Docker across low-latency WAN connections.)