BTFS: BitTorrent Filesystem

(github.com)

386 points | by pyinstallwoes 13 days ago

21 comments

  • raffraffraff 13 days ago
    So is there a server program to partner this with? Something that acts as a torrent file builder, tracker and simple file server for the torrent? I can imagine that in a large org you could store a gigantic quantity of public data on a server that creates a torrent whenever the data changes, serves the .torrent file over HTTP and also acts as a tracker. You could wrap the FUSE client in some basic logic that detects newer torrents on the server and reloads/remounts.
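
    Something like this minimal sketch could do it (the URL, paths and polling interval are made up for illustration; it assumes btfs and fusermount are on the PATH):

        # Hypothetical poller: remount a BTFS mountpoint whenever the server
        # publishes a new .torrent. All names here are placeholders.
        import hashlib, subprocess, time, urllib.request

        TORRENT_URL = "https://example.org/data.torrent"  # served over HTTP by the "server"
        MOUNTPOINT = "/mnt/data"
        last_digest = None

        while True:
            blob = urllib.request.urlopen(TORRENT_URL).read()
            digest = hashlib.sha1(blob).hexdigest()
            if digest != last_digest:
                # torrent changed: unmount the old one, mount the new one
                subprocess.run(["fusermount", "-u", MOUNTPOINT], check=False)
                with open("/tmp/data.torrent", "wb") as f:
                    f.write(blob)
                subprocess.run(["btfs", "/tmp/data.torrent", MOUNTPOINT], check=True)
                last_digest = digest
            time.sleep(60)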

    Many moons ago I created a Linux distribution for a bank. It was based on Ubuntu NetBoot with minimal packages for their branch desktop. As the branches were serverless, the distro was self-seeding. You could walk into a building with one of them and use it to build hundreds of clones in a pretty short time. All you needed was wake-on-LAN and PXE configured on the switches. The new clones could also act as seeds. Under the hood it served a custom Ubuntu repo on nginx and ran tftp/inetd and wackamole (which used libspread; neither has been maintained for years). Once a machine got built, it pulled a torrent off the "server" and added it to Transmission. Once that completed, the machine could also act as a seed, so it would start up wackamole, inetd, nginx, tracker etc. At first you could seed 10 machines reliably, but once they were all up, you could wake machines in greater numbers. Across hundreds of bank branches I deployed the image onto 8000 machines in a few weeks (most of the delays were due to change control and the staged rollout plan). Actually the hardest part was getting the first seed downloaded to the branches via their old Linux build, and using one of them to act as a seed machine. That was done in 350+ branches, over inadequate network connections (some were 256kbps ISDN).

    • mdaniel 13 days ago
      This may interest you, although as far as I know only AWS's S3 implements it: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjec...

      I've never actually used it, so I don't know whether AWS puts its own trackers in the resulting .torrent or what.

      • raffraffraff 13 days ago
        hmmm, pretty cool. Of course you have to pay the egress tax, but still...
    • elevation 12 days ago
      > You could wrap the FUSE client in some basic logic that detects newer torrents on the server and reloads/remounts.

      Wouldn't creating new torrents for each update to a dataset cause clients to retransfer data that hasn't changed?

      • raffraffraff 11 days ago
        From memory, with a standard torrent client, when it loads a new torrent file it verifies hashes on existing data, and if they match it leaves the data alone. That's how it worked on my old distribution. Not sure how this would work with BTFS, but if it's lazy-loading data and handles existing data the same way, it shouldn't be an issue. I think this would suit infrequently changing data anyway.
      • nine_k 11 days ago
        I suppose blocks with the same hash as existing won't get re-requested.
  • apichat 13 days ago
    This tool should be upgraded to use BitTorrent v2's new features.

    https://blog.libtorrent.org/2020/09/bittorrent-v2/

    Especially Merkle hash trees, which enable:

    - per-file hash trees

    - directory structure

    • extraduder_ire 11 days ago
      Ever since I learned of BitTorrent v2, I've been hoping for some per-file index to show up online. I'd love to be able to tag most or all of the bulk media files on my drives with how many copies exist in BitTorrent swarms at any time. That way, I could more easily pick out big files for deletion, knowing I could easily get another identical copy back later.

      I tried doing something like this in the past using ipfs, but I wasn't very successful and didn't find any duplicates.

    • bardan 13 days ago
      somebody check the test functions...
      • logrot 13 days ago
        Especially the corrupt torrent test file.
  • Kerbonut 13 days ago
    I dream of having a BTFS that will fix my "damaged" media files: e.g. ones I've media-shifted, where the disc was scratched and portions are missing, or where the codec options I picked suck. It could download the "damaged" portions of my media and fix them seamlessly.
    • kelchm 13 days ago
      Not the same as what you are talking about, but your comment reminded me of AccurateRip [1] which I used to make extensive use of back when I was ripping hundreds of CDs every year.

      1: http://www.accuraterip.com/

      • a-french-anon 12 days ago
        Pretty sure AccurateRip is only a collection of checksums to validate your rips. http://cue.tools/wiki/CUETools_Database actually improved on it to provide that healing feature (via some kind of parity, I guess?).

        Related, I use and recommend https://github.com/cyanreg/cyanrip on modern UNIXes.

      • neolithicum 13 days ago
        Do you have any tricks you can share on how to rip a large library of CDs? It would be nice to semi-automate the ripping process, but I haven't found any tools to help with that. Also, the MusicBrainz audio tagging database (the only open one I'm aware of?) almost never has tags for my CDs that don't need to be edited afterwards.
        • mavhc 13 days ago
          https://github.com/whipper-team/whipper

          https://github.com/thomas-mc-work/most-possible-unattended-r...

          Finding a good CD drive to rip them is the first step.

          https://flemmingss.com/importing-data-from-discogs-and-other...

          IME Discogs had the track data most often.

          And obviously, rip to FLAC.

          • neolithicum 13 days ago
            Great suggestions, I'll have to try these out. Thank you!
        • kelchm 12 days ago
          I’ll be honest, this was around 2005-2008 — it was a long time ago and at the time I really enjoyed the ritual of it all.

          The main advice I can give you is to use ripping software that integrates with AccurateRip (XLD, EAC, etc) and use a widely supported lossless format (like FLAC).

          Also — I can’t remember all the details, but there’s a way to store a CUE file, along with some metadata alongside your rip such that you can recreate an exact copy of the original physical media.

          At least for now, I’ve moved on to streaming services, but I’m happy to know that I have a large library of music that I ripped myself to fall back to using instead, should I ever choose to.

        • eddieroger 13 days ago
          This project still seems alive, to my pleasant surprise.

          https://github.com/automatic-ripping-machine/automatic-rippi...

          I never had it fully working because the last time I tried, I was too focused on using VMs or Docker and not just dedicating a small, older computer to it, but I think about it often and may finally just take the time to set up a station to properly rip all the Columbia House CDs I bought when I was a teen and held on to.

          • neolithicum 13 days ago
            Nice, I might install this on my Raspberry Pi.
        • jonhohle 13 days ago
          In the distant past, iTunes was great at this (really). Insert a disc, its metadata is pulled in automatically, it's ripped and tagged using whatever codec settings you want, and when it's done the disc is ejected.

          Watch a show or do some other work, and when the disc pops out like toast, put a new one in.

          Ripping DVDs with HandBrake was almost as easy, but it wouldn’t eject the disc afterwards (though it could have supported running a script at the end, I don’t recall).

          • cheap_headphone 13 days ago
            It really was. In the early 2000s I had a stack of Mac laptops doing exactly this. Made some decent cash advertising locally to rip people's CD collections!
        • bayindirh 13 days ago
          I was ripping my CDs with KDE's own KIO interface, which also does CDDB checks and embeds the original information into ID3 tags. Passing them through MusicBrainz Picard always gave me good tags, but I remember fine-tuning it a bit.

          Now, I'll start another round with DBPowerAmp's ripper on macOS, then I'll see which tool brings the better metadata.

    • tarnith 12 days ago
      Why not run a filesystem that maintains this? (ZFS exists, storage is cheap)
    • gosub100 13 days ago
      Another use of this is to share media after I've imported it into my library. If I voluntarily scan hashes of all my media, a smart torrent client could offer just those files (a partial torrent, since I always remove the superfluous files), which would help seed a lot of rare media files.
      • Stephen304 13 days ago
        This happens to be one of the pipe dream roadmap milestones for bitmagnet: https://bitmagnet.io/#pipe-dream-features

        I used to use magnetico and wanted to make something that would use crawled info hashes to fetch the metadata and retrieve the file listing, then search a folder for any matching files. You'd probably want to pre-hash everything in the folder and cache the hashes.
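
        A minimal sketch of that pre-hashing step (paths, chunk size and the cache filename are made up; a real matcher would hash at the torrent's piece boundaries for v1, or compare per-file Merkle roots for v2, rather than whole-file digests):

            # Walk a folder once and cache (size, sha256) per file so later
            # lookups against crawled torrents don't have to re-read everything.
            import hashlib, json, os

            def hash_file(path, chunk=1 << 20):
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    while block := f.read(chunk):
                        h.update(block)
                return h.hexdigest()

            def build_cache(root, cache_path="hash-cache.json"):
                cache = {}
                for dirpath, _, names in os.walk(root):
                    for name in names:
                        p = os.path.join(dirpath, name)
                        cache[p] = {"size": os.path.getsize(p), "sha256": hash_file(p)}
                with open(cache_path, "w") as f:
                    json.dump(cache, f, indent=2)
                return cache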

        I hope bitmagnet gets that ability; it would be super cool.

      • jonhohle 13 days ago
        I've done a lot of archival of CD-ROM based games, and it's not clear to me this is possible without a lot of coordination and consistency (there are like 7 programs that use AccurateRip, and those only deal with audio). I have found zero instances where a bin/cue I've downloaded online perfectly matches (hashes) the same disc I've ripped locally. I've had some instances where different pressings of the same content hash differently.

        I've written tools to inspect content (say, in an ISO file system), and those will hash to the same value (so different sector data but the same resulting file system). Audio converted to CDDA (16-bit PCM) will hash the same as well.

        If audio is transcoded into anything else, there’s no way it would hash the same.

        At my last job I did something similar for build artifacts. You need the same compiler, same version, same settings, the ability to look inside the final artifact and avoid all the variable information (e.g. time). That requires a bit of domain specific information to get right.

        • gosub100 13 days ago
          Sorry I think you misunderstood me. I mean when I download a torrent called "Rachmaninov Complete Discography" I copy the files to the Music/Classical folder on my NAS. I can no longer seed the torrent unless I leave the original in the shared folder. But if I voluntarily let a crawler index and share my Music folder, it could see the hash of track1.flac and know that it associates with a particular file in the original torrent, thus allowing others to download it.
    • pigpang 13 days ago
      How will you calculate the hash of a file to look it up, when the file is already broken?
      • rakoo 13 days ago
        You have all the hashes in the .torrent file. All you need is a regular check with it

        (but then the .torrent file itself has to be stored on storage that resists bit flipping)
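
        A rough sketch of that check for the single-file v1 case (assuming the third-party bencodepy package for parsing; multi-file torrents need pieces mapped across file boundaries):

            # Verify a local file against the SHA-1 piece hashes stored in its
            # .torrent (v1, single-file case only).
            import hashlib
            import bencodepy  # third-party: pip install bencodepy

            def find_bad_pieces(torrent_path, file_path):
                with open(torrent_path, "rb") as f:
                    info = bencodepy.decode(f.read())[b"info"]
                piece_len = info[b"piece length"]
                hashes = info[b"pieces"]          # concatenated 20-byte SHA-1 digests
                bad = []
                with open(file_path, "rb") as f:
                    for i in range(len(hashes) // 20):
                        piece = f.read(piece_len)  # last piece may be shorter
                        if hashlib.sha1(piece).digest() != hashes[i * 20:(i + 1) * 20]:
                            bad.append(i)          # this piece needs re-fetching
                return bad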

        • arijun 13 days ago
          If you’re worried about bit-flipping, you could just store multiple copies of the hash and then do voting, since it’s small. If you’re worried about correlated sources of error that helps less, though.
        • Dibby053 13 days ago
          >storage [...] bit flipping

          As someone with no storage expertise I'm curious: does anyone know the likelihood of an error resulting in a bit flip rather than an unreadable sector? Memory bit flips during I/O are another thing, but I'd expect a modern HDD/SSD to return an error if it isn't sure about what it's reading.

          • halfcat 13 days ago
            Not sure if this is what you mean, but most HDD vendors publish reliability data like “Non-recoverable read errors per bits read”:

            https://documents.westerndigital.com/content/dam/doc-library...

            • Dibby053 13 days ago
              Thanks for the link. I think that 10^14 figure is the likelihood of the disk's error correction failing to produce a valid result from the underlying media, returning a read error and adding the block to pending bad sectors - a typical read error that is caught by the OS and prompts the user to replace the drive.

              What I understand by a bit flip is corruption that gets past that check (i.e. the flips "balance themselves" and produce a valid ECC) and returns bad data to the OS without producing any errors. Only a few filesystems that make their own checksums (like ZFS) would catch this failure mode.

              It's one reason I still use ZFS despite the downsides, so I wonder if I'm being too cautious about something that essentially can't happen.

      • everfree 13 days ago
        Just hash it before it's broken.
        • jonhohle 13 days ago
          Maybe this is a joke that’s over my head, but the OP wants a system where damaged media can be repaired. They have the damaged media so there’s no way to make a hash of the content they want.
          • OnlyMortal 12 days ago
            How far would error correction go?
      • alex_duf 13 days ago
        If you store the Merkle tree that was used to download it, you'll be able to know exactly which chunk of the file got a bit flip.
      • 01HNNWZ0MV43FF 13 days ago
        You could do a rolling hash and say that a chunk with a given hash should appear between two other chunks of certain hashes
        • arijun 13 days ago
          That seems like a recipe for nefarious code insertion.
      • selcuka 13 days ago
        Just use the sector number(s) of the damaged parts.
    • Fnoord 13 days ago
      Distribute parity files together with the real deal, like they do on Usenet? Usenet itself is pretty much this anyway. Not sure if the NNTP filesystem implementations work. Also, there's nzbfs [1]

      [1] https://github.com/danielfullmer/nzbfs

    • drlemonpepper 13 days ago
      Storj does this.
  • pyinstallwoes 13 days ago
    Submitting because I'm surprised this isn't used more... couldn't we build virtual machines/OSes as an overlay on BTFS? Seems like an interesting direction.
    • Jhsto 13 days ago
      Just the other week I used Nix on my laptop to derive PXE boot images, uploaded those to IPFS, and netbooted my server in another country over a public IPFS mirror. The initrd gets mounted as read-only overlayfs on boot. My configs are public: https://github.com/jhvst/nix-config

      I plan to write documentation of the IPFS process, including the PXE router config, later at https://github.com/majbacka-labs/nixos.fi -- we might also run a small public build server for the Flake configs of people interested in trying out this process.

      • __MatrixMan__ 13 days ago
        I laughed when I saw that your readme jumps straight into some category theory. FYI others might cry instead.

        You're doing some cool things here.

      • vmfunction 13 days ago
        >A prominent direction in the Linux distribution scene lately has been the concept of immutable desktop operating systems. Recent examples include Fedora Silverblue (2018) and Vanilla OS (2022). But, on my anecdotal understanding of the timelines concerning Linux distros, both are spiritual successors to CoreOS (2013).

        Remember, in the late '90s booting a server off a CD-ROM was the thing.

        • hnarn 13 days ago
          Booting off USB sticks is still done all the time, and it's (almost) literally the same thing. Doing it in combination with encrypted persistence support, as available in Debian for example, can be really nice.
      • jquaint 13 days ago
        This is really cool. Plan to take some inspiration from your config!
    • infogulch 13 days ago
      It's not an overlay provider itself, but uber/kraken is a "P2P Docker registry capable of distributing TBs of data in seconds". It uses the bittorrent protocol to deliver docker images to large clusters.

      https://github.com/uber/kraken

      • XorNot 13 days ago
        The problem with being a Docker registry is that you still have to double-dip: distribute to the registry, then docker pull.

        But you shouldn't need to: you should be able to do the same thing with a Docker graph driver, so there is no registry - even the daemon should perceive the local registry as "already available", even though in reality it's going to just download the parts it needs as it overlay-mounts the image layers.

        Which would actually potentially save a ton of bandwidth, since the stuff in an image is usually quite different from the stuff any given application needs (i.e. I usually base off Ubuntu, but if I'm only throwing a Go binary in there, plus maybe wanting debugging tools available, then in most executions the actual image data pulled to the local disk would be very small).

      • phillebaba 13 days ago
        Kraken is sadly a dead project, with little work being done. For example, support for containerd is non-existent or just not documented.

        I created Spegel to fill the gap but focus on the P2P registry component without the overhead of running a stateful application. https://github.com/spegel-org/spegel

      • apichat 13 days ago
        https://github.com/uber/kraken?tab=readme-ov-file#comparison...

        "Kraken was initially built with a BitTorrent driver, however, we ended up implementing our P2P driver based on BitTorrent protocol to allow for tighter integration with storage solutions and more control over performance optimizations.

        Kraken's problem space is slightly different than what BitTorrent was designed for. Kraken's goal is to reduce global max download time and communication overhead in a stable environment, while BitTorrent was designed for an unpredictable and adversarial environment, so it needs to preserve more copies of scarce data and defend against malicious or bad behaving peers.

        Despite the differences, we re-examine Kraken's protocol from time to time, and if it's feasible, we hope to make it compatible with BitTorrent again."

    • retzkek 13 days ago
      CVMFS is a mature entry in that space, heavily used in the physics community to distribute software and container images, allowing simple and efficient sharing of computing resources. https://cernvm.cern.ch/fs/
    • idle_zealot 13 days ago
      I'm not sure I see the point. A read-only filesystem that downloads files on-the-fly is neat, but doesn't sound practical in most situations.
      • crest 13 days ago
        It can be an essential component, but for on-site replication you need to coordinate your caches to make the most of your available capacity. There are efforts to implement this on top of IPFS, with mutually trusted nodes electing a leader that decides who should pin what, to ensure you keep enough intact copies of everything in the distributed cache; but like so many things IPFS, it started out interesting and then died from feature creep and "visions" instead of working code.
      • pyinstallwoes 13 days ago
        Imagine that any computation is addressed by a hash; then everything becomes memoized, with no distinction between data and code. As a consequence you get durability, caching, security to an extent, and verifiability through peers (which could be trusted, or degrees away from peers you trust).
        • gameman144 13 days ago
          Is every computation worth memoizing? I can think of very few computations I do that others would care about, and in those cases there's already a much more efficient caching layer fronting the data anyway.
          • pyinstallwoes 13 days ago
            Why not? I think there is some interesting research here at the computational/distributed level that could lead to some interesting architectures and discoveries.

            Fully distributed OSes/virtual machines/LLMs/neural networks.

            If LLMs are token predictors for language, what happens when you do token prediction for computation across a distributed network? Then run a NN on the cache and the clustering itself? Lots of potential use cases.

      • cyanydeez 13 days ago
        If you were a billionaire and you wanted some software update, you could log into your supercomputer and have every shell mount the same torrent, and it should be the fastest upload.
    • thesuitonym 13 days ago
      Every once in a while, someone reinvents Plan 9 from Bell Labs.
  • Maakuth 13 days ago
    This is the perfect client for accessing Internet Archive content! Each IA item automatically has a torrent that has IA's web seeds. Try Big Buck Bunny:

    btfs https://archive.org/download/BigBuckBunny_124/BigBuckBunny_1... mountpoint

    • haunter 13 days ago
      I don't know the internal workings of IA and the BitTorrent architecture, but if an archive has too many items the torrent file won't have them all. I see this all the time with ROM packs and magazine archives, for example: 1000+ items, but the torrent will only have the first ~200 or so available.
      • sumtechguy 13 days ago
        I think for some reason IA limits the torrent size. I have seen as low as 50 with a 1000+ item archive.
    • rnhmjoj 13 days ago
      Even better, try this:

          btplay https://archive.org/download/BigBuckBunny_124/BigBuckBunny_124_archive.torrent
  • dang 13 days ago
    Related:

    BTFS – mount any .torrent file or magnet link as directory - https://news.ycombinator.com/item?id=23576063 - June 2020 (121 comments)

    BitTorrent file system - https://news.ycombinator.com/item?id=10826154 - Jan 2016 (33 comments)

  • sktrdie 13 days ago
    Or even better, store the data as an SQLite file with a full-text-search index. Then you can full-text search the torrent on demand: https://github.com/bittorrent/sqltorrent
  • ChrisArchitect 13 days ago
  • mdaniel 13 days ago
    The top comment <https://news.ycombinator.com/item?id=23580334> by saurik (yes, that one) on the previous 121-comment thread back in 2020 sums up my feelings about the situation: BTFS is a "one CID at a time" version of IPFS.

    I do think IPFS is awesome, but is going to take some major advances in at least 3 areas before it becomes something usable day-to-day:

    1. not running a local node proxy (I hear that Brave has some built-in WebTorrent support, so maybe that's the path, but since I don't use Brave I can't say whether they are "WebTorrent in name only" or what)

    2. related to that, the swarm/peer resolution latency suffers in the same way that "web3 crypto tomfoolery" does, and that latency makes "browsing" feel like the old 14.4k modem days

    3. IPFS is absolutely fantastic for infrequently changing but super popular content, e.g. wikipedia, game releases, MDN content, etc, but is a super PITA to replace "tip" or "main" (if one thinks of browsing a git repo) with the "updated" version since (to the best of my knowledge) the only way folks have to resolve that newest CID is IPNS and DNS is never, ever going to be a "well, that's a good mechanism and surely doesn't contribute to one of the N things any outage always involves"

    I'm aware that I have spent an inordinate amount of words talking about a filesystem other than the one you submitted, but unlike BTFS, which I would never install, I think that those who click on this and are interested in the idea of BTFS may enjoy reading further into IPFS, but should bear in mind my opinion of its current shortcomings

  • anacrolix 13 days ago
    https://github.com/anacrolix/torrent has had a FUSE driver since 2013. I'm in the early stages of removing it. There are WebDAV, third-party FUSE, and HTTP wrappers of the client, all doing similar things: serving magnet links, infohashes, and torrent files like an immutable filesystem. BitTorrent v2 support is currently in master.
  • skeledrew 13 days ago
    Pretty cool on the surface, but two things I can think of right now mostly kill it for me (and maybe many others): 1) the things one usually uses BitTorrent for tend to need the complete files to be present to be useful, and internet speed is a limiter in that regard; 2) seeder counts tend to drop off pretty quickly, as people delete or even just move the files elsewhere for whatever reason, so availability falls.
    • elevenz 13 days ago
      A file index tool could be added to torrent clients, letting them scan all files in a user-selected folder and tag them with torrent info. Then how would it find the torrent info for a given file? Maybe we need a central server; maybe some hacker will invent a distributed reverse torrent search network to list which playlists have a specific song. I don't know if someone has already invented this. I think it could be the final answer to any distributed file system driven by user uploads.
      • skeledrew 13 days ago
        IPFS has resolved the indexing issue with content addressing. Overall I'd say it has all the pros of BitTorrent and fewer of the cons, from a technical perspective. However, it's more complex, which may be a reason why it isn't more widely adopted.
      • WhiteOwlLion 13 days ago
        Aren't torrents block-based, so file boundaries aren't observed without hacks?
        • rakoo 13 days ago
          Torrents are file-based, but in v1 the edge of a file doesn't line up with the edge of a piece, so you can't easily derive a file's hashes.

          In v2 this is solved: it is possible to easily know the hash of each file in the torrent, so you can search for it in other torrents.
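
          For example, a sketch based on BEP 52's "file tree" layout (again assuming the third-party bencodepy package): each file's 32-byte "pieces root" can be pulled out of a v2 .torrent and compared across torrents:

              # Walk the "file tree" of a BitTorrent v2 .torrent (BEP 52) and
              # collect each file's SHA-256 Merkle root ("pieces root"), which
              # identifies the file's content regardless of which torrent it's in.
              import bencodepy  # third-party: pip install bencodepy

              def file_roots(torrent_path):
                  with open(torrent_path, "rb") as f:
                      info = bencodepy.decode(f.read())[b"info"]

                  def walk(tree, path):
                      for name, node in tree.items():
                          if name == b"":  # leaf entry describing a file
                              yield "/".join(path), node.get(b"pieces root", b"").hex()
                          else:
                              yield from walk(node, path + [name.decode()])

                  return dict(walk(info[b"file tree"], []))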

  • stevefan1999 13 days ago
    I think it might be quite useful for huge LLMs, since those are now hosted on BitTorrent. Of course it is not going to be as practical as IPFS, since IPFS is content-addressable and easier to do random access on.
    • kimixa 13 days ago
      Is there much use for a partially resident LLM though?

      If not, I don't see much of an advantage over just vanilla BitTorrent, if you realistically need a full local copy to even start working anyway.

  • bArray 13 days ago
    Couldn't help but think it would be epic if it went the other way - you throw files into a folder, it's set read-only, and then that's shared as a torrent.
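
    A minimal sketch of that direction, using the third-party torf library (the folder path and tracker URL are made up for illustration):

        # Build a .torrent from a folder so it can be seeded as a read-only share.
        from torf import Torrent  # pip install torf

        t = Torrent(path="/srv/shared-folder",
                    trackers=["http://tracker.example.org:6969/announce"],
                    comment="read-only share of /srv/shared-folder")
        t.generate()                       # hash all the pieces
        t.write("/srv/shared-folder.torrent")
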
  • rwmj 13 days ago
    Here's a similar idea for block devices: https://libguestfs.org/nbdkit-torrent-plugin.1.html

    It lets you create a block device (/dev/nbd0) backed by a torrent. Blocks are fetched in the background, or on demand (by prioritizing blocks according to recent read requests).

    In practice it works - you can even boot a VM from it - but it's quite slow unless you have lots of seeds. There's a danger, particularly with VMs, that you hit timeouts waiting for a block to be read, unless you adjust some guest kernel parameters.

    There are some bootable examples in that page if you want to try it.

    • efrecon 11 days ago
      That's really cool
  • Cieric 13 days ago
    I've thought about using this for my media server in the past, but in the end I ran into too many issues trying to automate it. Then there are all the normal issues: slow downloads can wreak havoc on programs expecting the whole file to be there, and I couldn't move files around without breaking connections. It was interesting to mess with, but in the end I just decided it would be a fun challenge to write my own in Zig so I could have something "easy" to hack on in the future.
  • SuperNinKenDo 13 days ago
    Cool concept. I assume that it seeds if and while the files are present on your device? Tried to read the manpage but unformatted manpage markdown on a phone was too difficult to read.
  • ceritium 13 days ago
    I did something similar some years ago, https://github.com/ceritium/fuse-torrent

    I had no idea what I was doing; most of the hard work is done by the torrent-stream node package.

  • eternityforest 12 days ago
    For a long time I've been thinking there should be a P2P alternative to the snap package store. This looks like it's already got 90% of the work on the technical side to implement it.
  • xmichael909 13 days ago
    No security at all?
    • blueflow 13 days ago
      What would be something that needs to be protected against?
    • rakoo 13 days ago
      What security would be interesting here ?
    • mixmastamyk 13 days ago
      Give her some funked-up muzak, she treats you nice

      Feed her some hungry reggae, she'll love you twice

  • 090rf 13 days ago
    [dead]
  • Solvency 13 days ago
    So is this like a Dropbox alternative using the BitTorrent protocol?
    • VyseofArcadia 13 days ago
      No, this is being able to interact with a .torrent file as if it were a directory.

      The only use case I see for this is as an alternative to a more traditional BitTorrent client.

      • sumtechguy 13 days ago
        This could be semi-interesting for some torrents if it had a CoW local store, so you could still write to it but keep your changes local, and it would still look like a coherent directory.
      • Solvency 13 days ago
        Got it. But now I kind of want a torrent-based Dropbox. I have five workstations; it would be great to be able to utilize them as my own miniature distributed file system without a corporate server.
        • ggop 13 days ago
          • jszymborski 13 days ago
            Syncthing is great (I use it daily!) but I'm not sure it does the Dropbox/NextCloud thing that BTFS does where you can see remote files and download them on access. Syncthing rather just syncs folders as far as I can tell.
          • jtriangle 13 days ago
            The Syncthing experience is greatly improved if you also host your own discovery server, and if you can port forward.

            Pretty minor to do, but, it's a big speed increase.

        • vineyardmike 13 days ago
          There have been a bunch of projects that tried to do some variant of this, and I'd love for it to exist, but I'd almost posit it's an impossible problem. Projects either find a way to handle the content-addressing but then fail at coordinating nodes, or can coordinate but can't choose placement efficiently, or are just vaporware. I think the hard part is that most personal computers are too unreliable to trust, and centralization, even for a homelab experimental user, is just too easy.

          A few projects that tackled some version of this...

          Nextcloud, ownCloud, and generally just a NAS can be "Dropbox but self-hosted", but it's centralized.

          IPFS, Perkeep, Iroh, and hypercore (npm) focus on content-addressed information, making cataloging and sharing easy, but fail to really handle coordination of which node the data goes on.

          Syncthing, Garage, and of course BitTorrent, and a few others can coordinate but they all duplicate everything everywhere.

          "Bazil.org", and (dead link) "infinit.sh" both sought to coordinate distribution and somehow catalog the data, but they both seem to have died without achieving their goals. I used infinit.sh when it was alive ~2016 but it was too slow to use for anything.

          I'd love for something like this to exist, but I think it's an impossible mission.

          • Solvency 13 days ago
            ...wow, kind of fascinating. Wish there were a post-mortem on failed attempts at this. I would not expect this, of all problems, to be unsolvable in 2024.
        • zed716 13 days ago
          There used to be something that worked exactly in this manner, called AeroFS -- there was a website portion that worked just like Dropbox did, and then a client you could install on your systems, and it would distribute items in a torrent-like manner between clients. It had a lot of neat features. It's a shame that it didn't end up really going anywhere (in a very crowded field at the time), because it worked great and they had an on-prem solution that worked really well.
        • kaetemi 13 days ago
          Resilio Sync (previously BitTorrent sync) is what you're looking for.
        • r14c 13 days ago
          tahoe-lafs might do what you need!
        • baobun 13 days ago
          There are families of distributed filesystems. The most famous would be Ceph and GlusterFS, and there are many newer ones - maybe one of them would fit your use case?
        • Zambyte 13 days ago
          IPFS?
        • throawayonthe 13 days ago
          [dead]