Fiber-optic data transfer speeds hit a rapid 301 Tbps

(livescience.com)

105 points | by Brajeshwar 30 days ago

12 comments

  • tombert 30 days ago
    I know basically nothing about computer engineering or electronics, so bare with me on this, but conceivably could tech like this be used inside a computer?

    As in, instead of just using fiber to make different computers talk to each other really fast, could we use it for like a RAM bus or something? Is there too much latency associated with it compared to the copper we've been using? 301 Tbps seems like insane speeds, even inside a computer.

    • morphle 30 days ago
      Yes, basic physics shows [1] that we can use (free space) optics instead of wires inside chips. This will improve energy use and speeds by 3 orders of magnitude. Next to a transistor you put a photon detector. You can flip the transistor with the voltage from the photon detector by sending 10000 photons (or less). Pictures of such systems in the slides [2].

      We can beam billions of optical channels with different frequencies in parallel across chips, reaching exabits (zettabits, yottabits) per second.

      We will not compute with photons though [4][5]: the optical structures are too large and it would only work for very specific types of computations.

      We design wafer-scale integrations (very large chips); this way we can start making these fast on-chip interconnects around 2027 if we invest a few billion today in free-space optics. A layman's introduction is in my talk here [3].

      [1] Stanford Seminar - Saving energy and increasing density in information processing using photonics - David B. Miller https://www.youtube.com/watch?v=7hWWyuesmhs

      [2] https://www.researchgate.net/profile/David-Miller-65/publica...

      [3] Smalltalk and Self Hardware https://vimeo.com/731037615

      [4] D. A. B. Miller, “Are optical transistors the logical next step?” Nature Photonics, vol. 4, pp. 3–5, 2010. https://www.researchgate.net/profile/David-Miller-65/publica...

      [5] Attojoule Optoelectronics for Low-Energy Information Processing and Communications – a Tutorial Review https://arxiv.org/pdf/1609.05510.pdf

      • freedomben 30 days ago
        Amazing, thank you!

        How much parallelization is required? Any idea how fast a single channel could get with optical transport and photon-detecting transistors?

        As an Elixir dev (where parallelization is relatively easy), I think there is a lot of potential for parallelization that isn't being used by most programs, but for serial algorithms where multi-core can't be used, I wonder what the ceiling will be.

        • morphle 30 days ago
          > Any idea how fast a single channel could get with optical transport and photon-detecting transistors?

          Terabits per second per channel would be possible, but it would require too many high-energy SerDes (serialiser-deserialiser) circuits. It will be more energy efficient to have more parallel optical channels (bundled), switching at the low-power optimal speed of the transistors, around 1-2 GHz, instead.
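
          A rough back-of-the-envelope sketch of that trade-off (all figures below are illustrative assumptions, not measured numbers):

            # Aggregate bandwidth from many slow optical channels vs a few fast SerDes lanes.
            # All numbers are illustrative assumptions, not actual design figures.
            target_bps = 301e12        # 301 Tbps aggregate, as in the headline
            channel_hz = 2e9           # ~2 GHz, the low-power optimal switching rate above
            bits_per_symbol = 1        # simple on-off keying assumed

            channels = target_bps / (channel_hz * bits_per_symbol)
            print(f"{channels:,.0f} parallel optical channels")          # ~150,500 channels

            serdes_lane_bps = 1e12     # a hypothetical 1 Tbps SerDes lane
            print(f"{target_bps / serdes_lane_bps:,.0f} SerDes lanes")   # ~301 lanes

          The point is only that the channel count, not the per-channel rate, does the heavy lifting once you stay at the transistors' efficient switching speed.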

          >I wonder what the parallelisation ceiling will be? How much parallelization is required?

          There is no ceiling, no limit. For an "existence proof", consider that there are around a hundred trillion cells in your body performing billions of computations in parallel, chemically and with DNA processing by ribosomes. No limit. The 8 billion such bodies on Earth could theoretically learn to work together with the aid of the internet and personal computers. There are 10^24 stars in the universe.

          Your thinking, your imagination, your mental model of parallel systems is the ceiling, the limit. But you can learn, experiment and improve over time, so your ability to think up better ways to parallelise computation will improve. Humanity could dedicate itself to the open-ended creation of knowledge (of how to compute in parallel with photons without limits) [1].

          Right now our computation is limited by our knowledge (of manufacturing at atomic scales), the energy output of the sun and the number of atoms in the solar system we could rearrange [4]. We should fund our scientists to create the knowledge we need to push those limits outward [5]. I hope you'll fund me as well :-)

          > Elixir development

          Smalltalk, LISP, Erlang, Elixir and Actor Language are some of the best message-passing programming languages for massively scaling parallelism.

          Alan Kay [2][3] has great lectures to get you started in thinking better (including about (computational) parallelism, scaling and message passing). A few others have written some papers as well (see links in my HN comments the last 12 weeks). I can teach you a bit too, write to morphle73 at gmail dot com.

          [1] Chemical scum that dream of distant quasars https://www.ted.com/talks/david_deutsch_chemical_scum_that_d...

          [2] Alan Kay lecture: putting Turing to work https://www.heidelberg-laureate-forum.org/video/lecture-putt...

          [3] Is it really "Complex"? Or did we just make it "Complicated"? https://www.youtube.com/watch?v=ubaX1Smg6pY&t=2557s

          [4] https://gwern.net/doc/ai/scaling/hardware/1999-bradbury-matr...

          [5] https://internetat50.com/references/Kay_How.pdf

          • freedomben 30 days ago
            Thank you! Minor clarification: the performance ceiling I was wondering about was for serial (not parallel).
            • morphle 30 days ago
              The sequential-process performance ceiling will be set by physical limits. Photons at high frequencies have too much energy, gamma rays for example.

              The practical ceiling will be set by manufacturing limitations for the next few decades: can we build structures atom by atom? [1]

              [1] Richard Feynman "Tiny Machines" Nanotechnology Lecture - aka "There's Plenty of Room at the Bottom" https://www.youtube.com/watch?v=4eRCygdW--c&t=1390s

    • Aurornis 30 days ago
      Optical interconnects within a system are very much a possibility, but they are prohibitively expensive.

      Going from copper to optics and back again at the other end is significantly more expensive than using mechanical connectors. The complexity of optical fibers and their interconnects also adds a lot more assembly difficulty and failure opportunity.

      We still have a lot of headroom in copper interconnects, but it's getting more and more difficult to squeeze bandwidth out of longer-distance interconnects like those between your CPU and your GPU slot. Next-generation systems might need retimer chips at the halfway point to basically rebuild and retransmit the signal so it can make it the full distance. We also have to use more expensive PCB materials to reduce the loss. There may come a day when we have to connect the CPU and GPU optically, but it's going to be a while before we get there.

    • 486sx33 30 days ago
      The biggest problem I see from 10,000 feet is that you'd need to constantly be converting electricity to light and back again… there is a penalty for that in speed… so far, putting RAM on-die with the CPU has been the best way to reduce that particular form of latency…
      • op00to 30 days ago
        There is also a heat penalty.
    • tw04 30 days ago
      This specific tech perhaps not. But yes, silicon photonics is a thing.

      https://www.tomshardware.com/news/intel-demoes-8-core-528-th...

    • bgnn 30 days ago
      Theoretically yes, though it makes zero sense and no difference in practice. The conversion between electrical and optical isn't straightforward, and it isn't cheap to implement now. But let's say we've resolved this. We are still limited by the same channel latency, i.e. the speed of light is roughly the same in fibre optics as in the metal interconnects we use on chip now for electrical signals. Meaning, the electrical signals we use are about as fast as light. So, no delay advantage.

      The biggest advantage of optical is lower loss, which makes a huge difference for a long channel. By this I mean really long, like meters. Up to that point it has literally no advantage while being so much more complicated.
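
      For a feel of the scale, a minimal sketch (the propagation speeds here are rough rule-of-thumb assumptions: about c/1.5 in glass fibre, about 0.5c on a copper PCB trace):

        # Propagation delay over short links: optical vs electrical is basically a wash.
        C = 299_792_458  # m/s, speed of light in vacuum

        for dist_m in (0.05, 0.30, 2.0, 100.0):
            t_fibre_ns = dist_m / (C / 1.5) * 1e9   # glass fibre, ~c/1.5 assumed
            t_copper_ns = dist_m / (C * 0.5) * 1e9  # PCB trace, ~0.5c assumed
            print(f"{dist_m:6.2f} m: fibre {t_fibre_ns:8.2f} ns, copper {t_copper_ns:8.2f} ns")

      Either way you are looking at fractions of a nanosecond per centimetre, so only once loss rather than delay dominates does optical start to pay off.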

      • nsteel 29 days ago
        I think the original question was about links between devices within a computer, rather than within a single chip/package. Optical interconnects within high-end servers/routers are being designed today for use in the next generation. We previously reached the limits of copper PCB traces, coax has lots of problems, and optical is next. It's not about latency, it's about signal integrity.

        Ironically, the heat produced by the optical transceivers is one of the biggest problems.

        • bgnn 29 days ago
          Yeah, for within a computer, maybe. But then we're still using inefficient standards like DDR for memory access. There's sooooo much more we can do with copper interconnects while more viable optical tech is being developed. Think about denser modulation: PAM4 is already in, and PAM8 and PAM16 are being looked at. 10Gbit Ethernet uses a symbol density of 128 over a 100m-long channel. We have the digital horsepower for encoding/decoding the data on-chip. I also see people implementing full-duplex channels, which will double the density. Again, Ethernet has been doing this for 30 years already, but no chip-to-chip standard like PCIe, DDR etc. is doing these yet.
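
          To make the modulation-density point concrete, a quick sketch (the symbol rate is an illustrative value, not any particular standard):

            # Bits per symbol for NRZ/PAM-N signalling, and the resulting raw line rate
            # at a fixed symbol rate. The 50 Gbaud figure is purely illustrative.
            import math

            baud = 50e9
            for levels in (2, 4, 8, 16, 128):   # NRZ, PAM4, PAM8, PAM16, 128-level
                bits = math.log2(levels)
                print(f"{levels:3d} levels -> {bits:4.1f} bits/symbol -> {baud * bits / 1e9:6.1f} Gbps raw")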

          I'd think optical will replace the back-plane side, but not anything within the motherboard itself, in the coming 10 years. Copper has a lot of juice left in it.

          • nsteel 23 days ago
            I'm not aware of anyone actually planning to use PAM8 or PAM16 for high-frequency links. The signal integrity is a mess and the corresponding FEC requirements undo all the benefits.

            I suppose a relatively high-speed chip-to-chip standard in use within consumer computers today is GDDR. If you want something with SerDes then it's obviously PCIe, which has been using NRZ and now PAM4. Then there's Interlaken, which has been around for yonks in HPC and networking. More recently NVLink as well. None of these point-to-point SerDes solutions can give you latency anywhere near DDR. And when they get to 200Gbps, copper backplanes are dead. It's cable now, and optical is coming. On the PCB, copper channels are thankfully holding out.

    • alex_duf 30 days ago
      • Workaccount2 30 days ago
        I almost get nostalgic reading about optical computing; it's something I remember reading about 20 years ago as a promising field for future development. As far as I can tell, though, it still hasn't gotten out of the lab.
    • shutupnerd0000 30 days ago
      > bare with me

      I would prefer to keep my clothes on, thanks.

      (It's spelled "bear with me")

      • tombert 30 days ago
        Ugh, for some reason my brain can never remember which version to use for that. It's always a 50/50 shot and I guess I came up tails this time.
        • 0_____0 30 days ago
          Language is funny. Some of the best engineers I know have terrible spelling.
  • ksec 30 days ago
    > Japan breaks world record ( 319 terabits per second ) for fastest internet speed (freethink.com) 2021. https://news.ycombinator.com/item?id=28673726

    So nothing really new. I am still looking forward to 1 Pbps, and to more undersea cables being built.

    • bolp 30 days ago
      At the end of the article they mention the world record of 22.9 Pbps was set by a team at NICT in November 2023.

      See this link for more info: https://www.nict.go.jp/en/press/2023/11/30-1.html

    • stevenjgarner 29 days ago
      I think the important difference in this case is that it can use existing optical infrastructure, "just" replacing the electronics at each end.
  • judge2020 30 days ago
    The University's article: https://www.aston.ac.uk/latest-news/aston-university-researc...

    Although even the university article doesn't link the paper...

  • falsandtru 30 days ago
    The latest world record has since been raised to 378.9 Tbps by the same research group. Probably not yet published in English.

    https://www.nict.go.jp/press/2024/03/29-1.html

    • virtuallynathan 30 days ago
      Pretty wild for a single fiber, but super involved, using five different types of doped amplifiers. Current systems use just EDFAs (Erbium). These guys used Erbium, Thulium, Bismuth, and others.
      • foobiekr 30 days ago
        The latest one is also just more lambdas. That's impressive, but DWDM itself took a decade+ after the gear was introduced.

        Most of the time it’s easier to just add another few dozen fibers when laying cables.

  • larodi 29 days ago
    This magazine contains a ridiculous amount of ads - more ads than actual content in a single screen span. I wonder whether we really need to get news this way, especially on a renowned aggregator like HN.

    Note: ad blockers don’t work on iOS.

    • kalleboo 29 days ago
      Ad blockers work just fine on iOS; I don't see any ads at all on that page on my iPhone in Safari. Just using 1Blocker, nothing fancy.
    • FractalHQ 29 days ago
      Mullvad VPN iOS app successfully blocks the ads on that page (they appear as empty white space).
    • malnourish 29 days ago
      You could use a service like NextDNS
  • kyleleelarson 30 days ago
    I am curious why the term "fiber optic" seems to have declined in popularity, at least when it comes to big companies' annual reports: see https://searchsecdata.com/search?stockindex=S%26P+500&search...
  • 0cf8612b2e1e 30 days ago
    How congested are undersea network cables? I naively assumed that the deployment was so expensive that the cable diameter (strands?) would be hilariously over-provisioned, so that the available bandwidth would far eclipse present needs.
    • supertrope 30 days ago
      Deployment is most of the cost, so based on that you'd think they'd choose 288 strands. But amplification requirements go up with strand count, so the solution used is a single-digit number of strands with many wavelengths.
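
      As a toy capacity sketch of that trade-off (fibre-pair count, wavelength count and per-wavelength rate are all illustrative assumptions):

        # Cable capacity: fibre pairs x DWDM wavelengths per fibre x rate per wavelength.
        fibre_pairs = 8
        wavelengths_per_fibre = 100
        gbps_per_wavelength = 200

        total_tbps = fibre_pairs * wavelengths_per_fibre * gbps_per_wavelength / 1000
        print(f"{total_tbps:.0f} Tbps per direction")   # 160 Tbps with these assumptions

      Adding wavelengths scales capacity without adding the amplifiers that every extra strand would need.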
    • okdood64 30 days ago
      I had understood that past a certain point diameter doesn't add bandwidth?
      • lukevp 29 days ago
        I think they mean diameter of multiple strands, routed together in the same jacket
  • sjm 30 days ago
    Does this mean anything for actual latency, or only bandwidth?

    e.g. the speed of light could mean a ~40ms ping between LA and Sydney, but the best we get today is probably around 150ms?

    • cycomanic 30 days ago
      This is only throughput. The latency is set by the speed of light in the fibre (~c/1.5). That said, Microsoft bought a company that develops hollow-core fibre, which yields a factor-of-1.5 improvement in latency. They just presented their latest result, a loss of 0.11 dB/km. This is actually the biggest result from the conference, because it is a massive improvement over regular fibre, which has been hovering at about 0.15 dB/km loss for the last 40 years, with improvements below 1% over that time.
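
      The arithmetic behind the LA-Sydney example above, as a minimal sketch (the ~12,000 km path length is an assumed figure):

        # One-way and round-trip propagation time over an assumed ~12,000 km path.
        C = 299_792_458          # m/s, speed of light in vacuum
        path_m = 12_000e3        # assumed LA-Sydney fibre path length

        for name, v in (("hollow core, ~c", C), ("solid glass fibre, ~c/1.5", C / 1.5)):
            one_way_ms = path_m / v * 1e3
            print(f"{name}: one-way {one_way_ms:.0f} ms, RTT {2 * one_way_ms:.0f} ms")

      So the ~40 ms figure is roughly the one-way time at vacuum speed; in solid fibre the round trip is already ~120 ms before any routing hops are added.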
      • foobiekr 30 days ago
        I’m not so sure it will matter much. The earth is 42ms across at light speed, 66ms if traversing a great circle.

        Network hops are notoriously slow. In the datacenter the best I have ever seen is 200ns or so per packet, which is very rare; most in-DC hops are closer to 3-9 µs (especially modular chassis); then you hit the routers. With moderate congestion your routing hops are going to be twice that or more, ignoring queuing, and you are likely at least six hops between two points in each direction.

        The hollow-core stuff mostly will not help, since it gains with distance, but distance means more hops on average. So we are talking about an application where low latency is required but distances are high (where the improvement applies), yet the minimum latency achievable is still tens of ms.

        It is interesting technology but I think it’s more interesting for hypothetical materials savings than for latency improvement.

    • greggyb 30 days ago
      The speed of light in fiber (or of an electrical signal in copper) is less than the speed of light in vacuum.

      There are (very small) delays converting the signal from electrical to light on one end of the fiber and back to electrical on the other end. For this reason, DAC tends to have measurably lower latency compared to fiber for in-rack networking.

      The length of an undersea cable is greater than both the straight line and the great circle distance between two points on the earth's surface.

      These things do not explain all (probably not even most) of the difference between the latency you suggested and that in the real world, but I hope they help to suggest why the naïve calculation is not achievable.

      • denotational 30 days ago
        > both the straight line and the great circle distance

        Curious what you mean by “straight line”; Rhumb line?

        • greggyb 30 days ago
          I meant the equivalent of boring a very deep and long tunnel (: I should have been more specific.
          • denotational 29 days ago
            Ah, you're literally thinking in a higher dimension than me!
  • nubinetwork 29 days ago
    At what distance? If it's only like 10 feet then that's practically useful to nobody.
  • rkagerer 30 days ago
    Is the 1 really important here? I would have rounded down the headline.
  • qwertyuiop_ 30 days ago
    Yet my ComcastUniversalNBC Cable sputters at 30 Megabits per second.
    • ejb999 30 days ago
      Yea, but at least you get to pay thru the nose for it. </sarc>