11 comments

  • rektide 948 days ago
    Tribler is O.G. legit. Created in 2005, originally as part of a research paper, I believe, around trying to create "fairer" allocation of bandwidth in BitTorrent (itself from 2001). Words like "incentivization" and "reputation" have dotted their work for many years.

    Tribler has continued to be one of the most leading-edge pieces of p2p software on the planet. It went on to pioneer p2p search, streaming of partially completed videos, live-streaming, a range of privacy/security enhancements, tagging, and moderation.

    In 2012 they dubbed their efforts "4th Generation P2P", encompassing a couple of these goals, many of which were already underway. I believe they've succeeded on all these fronts, & have only continued pushing further since. https://www.tribler.org/4thGenerationP2P/

    The team has been doing ongoing cutting-edge research for a long time. Their ability to create p2p search is still basically unparalleled in this world.

    • 1vuio0pswjnm7 948 days ago
      IIRC someone from the Tribler team regularly comments on HN. His comments on p2p are usually interesting.
      • synctext 948 days ago
        Tribler founder here, great to see people still care about cute old P2P file sharing.

        My lab has been trying to get sharing, searching, and crowdsourcing to scale towards millions for 16 years now.

        The fundamental science is making progress towards re-decentralising The Internet. Decades of work left, obviously. Lots of progress on creating non-profit versions of all Big Tech services (well, in principle, that is). Even Amazon can be decentralised; see a recent PhD thesis from the lab: https://repository.tudelft.nl/islandora/object/uuid%3Aa4f750...

        The European Commission might come in and allow open source clients for Facebook, Instagram, Twitter, Amazon, etc. Our trustchain technology is specifically designed to be the superstructure for this. So the EU might break down the defensive moats around $10T of market cap. The legal foundation is already there: the new eIDAS Regulation is designed to _enforce_ login using the upcoming EU digital passport for all of Big Tech. So: an Open Source EU Metaverse, connecting all Big Tech protocols into a single repo and identity management solution with true privacy protection...

        • rektide 948 days ago
          > The legal foundation is already there: the new eIDAS Regulation is designed to _enforce_ login using the upcoming EU digital passport for all of Big Tech. So: an Open Source EU Metaverse, connecting all Big Tech protocols into a single repo and identity management solution with true privacy protection...

          Legally mandating a single user-identity system seems like the worst possible scenario I ever could have imagined for cyberspace. Even imagining trusting such a system is enormously difficult. But more so, to let the mold set on cyberspace, to create a single way the internet has to work, & deny new frontiers, new possibilities, new creativities... that seems monstrous. Beyond imagining.

          I do like the idea of the possibility of interop.

          • tribler 948 days ago
            Apologies for the poor formulation. It's another login system you are required to support. People can create personas, and if you try to de-anonymize them across services you're in gross violation of EU law. So more like a privacy-friendly must-have, right?
    • omasque 948 days ago
      Nice try, Steve Tribler
    • srgpqt 948 days ago
      Astroturfing a wee bit too much there.
      • dang 947 days ago
        You broke the site guidelines badly with this. If you'd please review them and stick to the rules in the future, we'd be grateful.

        https://news.ycombinator.com/newsguidelines.html

      • rektide 948 days ago
        No affiliation: not the submitter, nor connected to them, just an old & ongoing fan.
        • omasque 948 days ago
          Just kidding, this was actually a really informative comment that's prompted me to go and have a proper look, appreciate you taking the time.
  • jancsika 948 days ago
    > Tribler follows the Tor wire protocol specification and hidden services spec quite closely, but is enhanced to need no central (directory) server.

    If Tribler has no central directory server then how does Tribler route around suspicious/malicious relays?

    Edit: Also-- has Tribler passed the Scihub Test yet[1]?

    1: a test to probe whether Scihub is currently being served over a proposed privacy overlay.

    • mlinksva 948 days ago
      > Scihub Test

      Did you just coin that? Well done. A quick web and HN search turned up nothing closer than https://news.ycombinator.com/item?id=27805134

      • jancsika 948 days ago
        Yes. :)

        It seems to be easier and less prone to flame wars than asking:

        * how easy is it for a newcomer to discover content on the privacy overlay?

        * what metadata gets leaked when someone searches, downloads, or hosts content?

        * how does the privacy overlay perform compared to the regular old web/internet?

        * what's the user experience like (e.g., do the devs even care whether non-technical people use their software or not)?

        * what happens when some mildly interested party attacks the network?

        * probably other bullets I'm forgetting

        It's way easier to just know if Sci-hub is there. Because if it is then a) they've solved the most important problems of a privacy overlay and b) they are probably actively being attacked so nobody has to speculate about their defenses.

        • 8eye 947 days ago
          solid points.
      • synctext 948 days ago

        Scihub test :-)

        Note our scientific progress towards "The Global Brain": https://github.com/Tribler/tribler/issues/3615#issuecomment-...

        Specifically designed to improve science.

        -Tribler team

        • jancsika 948 days ago
          So: currently failing the Sci-hub test, but already planning work in the direction of passing it. That sounds promising to me.

          But what about the issue of filtering out malicious relays? How do you achieve that without a centralized directory server?

          • synctext 947 days ago
            Malicious relays have the option to give you bogus content or real content. The block-level hashes of BitTorrent prevent anything bad getting through, so you just avoid relays which are not giving you good blocks. At another level, if a relay is honest-but-curious (e.g. spying), you need to randomly select several relays and use them all in a long relay path. Pioneered by the Tor team; it results in excessive bandwidth usage.
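
            To illustrate the first point, here's a minimal sketch of the per-piece check a BitTorrent client performs (names are illustrative; the expected digests come from the torrent's metainfo):

              import hashlib

              def piece_is_valid(piece: bytes, expected_sha1: bytes) -> bool:
                  # Each piece's SHA-1 digest is fixed in the torrent metainfo,
                  # so a relay cannot substitute bogus data without failing
                  # this check.
                  return hashlib.sha1(piece).digest() == expected_sha1

              # A client discards blocks that fail verification and can simply
              # stop using the relay that delivered them.
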
    • cced 948 days ago
      Can you explain [1] and the implications thereof?
  • marcodiego 948 days ago
    Tried it from source. I had to install the following with pip3:

      - yappi
      - pydantic
      - Faker
      - sentry-sdk
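
    In one command, assuming pip3 points at the Python you run Tribler with:

      pip3 install yappi pydantic Faker sentry-sdk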
    
    Its search works. Gives a nice nostalgic feeling of old p2p software. Definitely should be more popular.
    • krageon 948 days ago
      Any kind of telemetry service inside a program purporting to care about avoiding centralised avenues of control should be an indicator to never, ever use it. Either the developers don't understand what they're talking about, or they are disingenuous. It doesn't really matter which to the end user; both mean that your use-case is not being served :)
    • aleph- 948 days ago
      I am curious what type of data they're sending to Sentry.

      Could be some leakage there; I don't think Sentry does PII scrubbing of any kind either, iirc.

      • zacmps 948 days ago
        It tries to scrub passwords and secret keys based on some text filters by default, but it can be configured to scrub arbitrary data (via a hook in the SDK).
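
        For instance, a minimal sketch using the Python SDK's before_send hook (the DSN and the scrubbed fields are placeholders):

          import sentry_sdk

          def scrub_event(event, hint):
              # Strip user and request context before the event leaves the
              # machine; returning None instead drops the event entirely.
              event.pop("user", None)
              event.pop("request", None)
              return event

          sentry_sdk.init(
              dsn="https://publickey@example.ingest.sentry.io/0",  # placeholder
              before_send=scrub_event,
          )
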
    • junon 948 days ago
      The use of Sentry is a huge red flag here...
      • synctext 948 days ago
        Crashes are optionally reported to Github for fixing. What could we use instead of Sentry to understand bugs and causes?

          - Tribler team
        • junon 948 days ago
          Define "optionally", please.
          • synctext 947 days ago
            The crash reporter uses Sentry. We don't want to know anything else about our users. When Tribler crashes, the crash report (e.g. the Python traceback) can be inserted as a Github issue, using Sentry. Requires the user to opt in by clicking "send" on the crash reporter.
        • trevyn 947 days ago
          A language that attempts to move detection of logic errors to compile-time instead of run-time. :-)
  • EVa5I7bHFq9mnYK 948 days ago
    I am old enough to remember when all torrenting was decentralized (edonkey, imesh, napster and a million others). Then, all of a sudden, everyone switched to torrents, which rely on [centralized] web sites for search. I guess convenience always beats privacy.
    • thesausageking 948 days ago
      Napster, KaZaA and most of the initial wave of p2p sharing had centralized servers which were required for coordination. That's why they got shut down: the record companies sued the companies behind them and won, forcing them to shut down their networks.

      The second wave of Gnutella and all of the various BitTorrent clients was decentralized. No single entity controlled the network, so there was no single point of failure. Record companies and movie studios came after individual nodes, but were never able to shut down the network.

      However, no one ever invented a great way to do search and discovery on top of BitTorrent, so we've always had centralized servers for that piece.

    • nathanlied 948 days ago
      It doesn't (strictly) need to have centralized search; you can passively collect info from the DHT swarm and build up an index over time of torrents people are sharing.

      It's not as convenient as something like napster, but we've also got this draft <http://bittorrent.org/beps/bep_0051.html> to make it a bit better.
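
      As a toy illustration of the passive approach (not BEP 51), a sketch that logs info-hashes from whatever DHT queries happen to arrive; a real indexer must also answer pings and maintain a routing table so peers keep talking to it, and bencodepy is an assumed third-party dependency:

        import socket
        import bencodepy  # third-party: pip install bencode.py

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 6881))

        while True:
            data, addr = sock.recvfrom(2048)
            try:
                msg = bencodepy.decode(data)
            except Exception:
                continue  # not a well-formed KRPC message
            if not isinstance(msg, dict):
                continue
            # get_peers and announce_peer queries both carry an info_hash
            args = msg.get(b"a", {})
            if b"info_hash" in args:
                print(addr[0], args[b"info_hash"].hex())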

      • mrmuagi 948 days ago
        > you can passively collect info from the DHT swarm and build up an index over time

        For anyone wanting to pursue this, I feel like I can share: I used https://github.com/boramalper/magnetico recently, and people share database dumps regularly. I found that ~4 DB dumps and merging scripts are all you need to get up and running.
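
        A minimal merge sketch, assuming magnetico's SQLite schema (a torrents table with a UNIQUE info_hash column; verify against your dumps, and note the files table would also need its torrent ids remapped, which this skips):

          import sqlite3

          con = sqlite3.connect("main.sqlite3")
          con.execute("ATTACH DATABASE 'dump2.sqlite3' AS other")
          # The UNIQUE constraint on info_hash makes INSERT OR IGNORE skip
          # torrents the main database already knows about.
          con.execute("""
              INSERT OR IGNORE INTO torrents (info_hash, name, total_size, discovered_on)
              SELECT info_hash, name, total_size, discovered_on
              FROM other.torrents
          """)
          con.commit()
          con.close()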

    • whatever1 948 days ago
      Because we needed curation. A very large proportion of the p2p content was viruses/malware etc., or fake (you thought you were downloading A, but in fact you were downloading B).

      With links on centralized websites and fora we at least had a little more confidence in the safety/quality of the content.

      • synctext 948 days ago
        this.

        To make decentralised search work you need to solve the trust problem: strangers sharing content with you without spam, decoys, and malware. You need to create trust without any central coordinator. Crawling the DHT for content is not going to get you Google-quality search. The Tribler lab has been working on the web-of-trust problem and decentralised (ledger) accounting since pre-Bitcoin days. It's a stubborn problem, but we're making consistent progress. Latest peer-reviewed science: https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1263.pdf

          -Tribler team
      • EVa5I7bHFq9mnYK 943 days ago
        You generally can't catch viruses from mp3 or video files. As for games, I very much doubt a torrent web site can check hundreds of gigabyte-sized .exe files for trojans. So I wouldn't recommend torrenting games on the same computer you store your Bitcoin wallet on.
    • crtasm 948 days ago
      edonkey and napster required bootstrap nodes, I think? And sending your search terms out to other peers on the network could be considered less private, depending on your view.

      You can search bittorrent via DHT in a similar manner (I'm unsure if the bootstrap nodes there are strictly required), or search an indexer website from one IP and then use the magnet link to download from another IP.

    • marcodiego 948 days ago
      Privacy is not the only drawback of centralization. A centralized network is also easier to take down, by simply disabling the central component. Although not used as much as during the 90's, some decentralized networks like Gnutella and Kademlia are still around to this day.
    • stiltzkin 948 days ago
      Jumping from edonkey/emule to torrents was a night-and-day difference in speed and convenience.
    • nostoc 948 days ago
      Those were all peer-to-peer services, but none of them were torrenting.
  • fsflover 948 days ago
    If you want to not worry about censorship at all, consider using torrents in i2p: https://geti2p.net
  • parsecs 948 days ago
    Using this opportunity to ask: has anyone here used torrentpanda before? Imo it was the best torrent search site, with a big database of crawled torrents. They seem to have disappeared a few months ago and I haven't found any working domains so far.

    The old site is torrentpanda.org, but going to it now gives an error page and redirects to softcore porn.

  • 1MachineElf 948 days ago
    Name reminds me of the retro-styled Illumos distro Tribblix: http://www.tribblix.org/
  • Barrin92 948 days ago
    There's a pretty big caveat in the anonymity section on the website about seeding not being anonymous by default (although there is an option for hidden seeding as well, which apparently slows speeds quite a bit). But that aside, does this mean that downloading is anonymous to ISPs by default? That's pretty big; weird that I've never seen this before.
    • fnord77 948 days ago
      seeding is the big risk from lawyers, right?

      weird that hidden seeding is not the default

    • Haemm0r 948 days ago
      Seeding is anonymous by default afaik.

      However, the integrated search is not, so your ISP could know what you are searching for.

      So you would need to get your magnet links over e.g. Tor and add them manually to stay anonymous.
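
      E.g. a sketch of fetching an indexer page through a local Tor SOCKS proxy (assumes requests with its socks extra installed, Tor listening on 9050, and a placeholder URL):

        import requests  # third-party: pip install "requests[socks]"

        # socks5h resolves hostnames through Tor as well, avoiding DNS leaks.
        proxies = {
            "http": "socks5h://127.0.0.1:9050",
            "https": "socks5h://127.0.0.1:9050",
        }
        page = requests.get("https://indexer.example/search?q=...", proxies=proxies)
        print(page.status_code)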

  • squarefoot 948 days ago
    Very interesting. On Debian the Ubuntu .deb package installs fine without asking for other dependencies.

    I wonder if it could be split into a core+GUI pair for those of us who like to have p2p software run on small boxes (RasPi, other ARM boards, etc). I'm so used to running Transmission as a service on my NAS and then using the GUI from PCs when they're turned on, which is extremely convenient.

  • sergiotapia 948 days ago
    Can someone use this to download a lot of LINUX ISOS safely and avoid having to pay for a VPN or seedbox?
    • ghostly_s 948 days ago
      what is unsafe about downloading Linux ISOs over BT currently?
      • no_time 948 days ago
        Nothing. Copyright enforcement bootlickers briefly join the swarm to grab all the IPs, and that's it. They (in theory) only have claims on hashes they are payrolled to monitor. Also, they need the tracker to cooperate, so that rules out private sites with competent opsec.

        Edit: the moment I hit send I realized I fell for a common innuendo. Feeling a bit dumb now.

      • walterbell 948 days ago
        As a new downloader of Linux ISOs, please start an FAQ when you find the answer.
  • antocv 948 days ago
    Tribler is a honeypot at best. The papers it was based on were below high-school quality.
    • debarshri 948 days ago
      I know the guys from TU Delft who worked on it (the Parallel and Distributed Systems group).

      It is one of the most respected CS groups in the Netherlands. With your comment you are actually insulting people who have worked for years on this. With your one-liner you are discrediting many theses and much hard work, without any credentials or proof on your side.

      • krageon 948 days ago
        > With your comment you are actually insulting people

        It's not a personal insult to say a paper that someone produced was of poor quality (I don't know whether or not it was; I have not read it). Someone can produce something bad without it saying anything about their character.

        • debarshri 948 days ago
          You can find it here [1].

          I am not talking about personal insults either; it is insulting their work and profession. Producing papers is their work. Producing papers and delivering a project that makes an impact is rare and should be appreciated.

          It is like having a successful open-source project and somebody saying you are a really bad programmer, without any context.

          [1] https://research.tudelft.nl/en/publications/tribler-a-social...

          • krageon 947 days ago
            It's not an insult to say that work was done badly. Nobody deserves appreciation just because they did something, and nobody deserves to be told to essentially shut up when they give a value judgement you do not agree with.

            Thanks for the link, I will read it.

            Edit: Given that the ideas presented in the paper are somewhat novel (and there is more than one such idea), I would guess the original post was complaining about the grammatical errors in it. I've definitely read papers that purported to be more "important" but eventually (after many words saying essentially nothing of value) turned out to have far less novel information in them, so I would say that in the realm of compsci papers that invent new relevant (as opposed to purely academic) algorithms, this one is above average.

          • zibzab 948 days ago
            Well, I wish my high-school reports were Scopus-indexed with almost 200 citations.

            Jokes aside, the Wiley link is dead, which is kind of funny given the Scihub talk above.

    • rand846633 948 days ago
      Don’t you owe it the audience her to back up your claims with facts or at least details?