Don't Use Iperf3 on Windows

(techcommunity.microsoft.com)

76 points | by thepuppet33r 13 days ago

14 comments

  • klabb3 13 days ago
    > The average [of iperf3] across multiple tests was about 7.5 Gbps.

    > Ntttcp averaged about 12.75 Gbps […] Ntttcp does something called pre-posting receives, which is unique to this tool.

    This is TCP we're talking about, and with a large enough per-syscall buffer (128k on my iperf3 by default - that's good) you should be able to saturate even a thick line, provided auto-tuning is enabled correctly. And to be sure it's not the TCP stack, why not just add -P 16 to parallelize? Did they even try a workaround before suggesting an extremely limited and proprietary tool that nobody is familiar with? Or better yet, help the project out by contributing native code, if the open source community is so bad at Windows? At the very least, make a wire-compatible version.
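
    A minimal sketch of that workaround, for reference (the server address is a placeholder):

        # bump the per-connection socket buffer on a single stream
        iperf3 -c 203.0.113.1 -w 4M

        # or rule out per-connection limits with 16 parallel streams
        iperf3 -c 203.0.113.1 -P 16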

    Iperf is a very boring and wonderful tool that just works. Importantly, it’s a networking tool, which means you need interop across different systems. Any networking tool that’s single-platform is dead in the water, as far as I’m concerned.

    Edit: To be fair, I see they have a Linux version, how generous. Last updated in 2022 and many open issues though.

    • distortedsignal 13 days ago
      After reading the article, it sounds like iPerf3 on Windows goes through an emulation layer (Cygwin), and I kinda get why it's slower if that's the case. Isn't syscall translation always going to add overhead?
    • raggi 13 days ago
      It’s a shame they don’t talk about it in the article, but a big part of the drive toward next-generation tools is to study the interfaces needed and used by quic implementations, which can’t “just be solved by” the flags you’re talking about, and iperf has no support for the relevant features (offloads).
      • klabb3 13 days ago
        Yeah, to be fair I’m assuming TCP both on the iperf side (which is all I use it for), and the alternative ntttcp tool does have TCP in the name, although it appears to have UDP support as well(?)

        I know UDP and QUIC come with a ton of novel bottlenecks throughout the stack (even down to the NICs, no?), and at the very least tons of knobs, which warrants more custom tooling and keeping up with the latest in terms of OS/platform support. I can totally see how iperf isn’t suited for that.

        Still, it’s a pretty strong and different message that Microsoft is sending on an official channel, which is unfortunate if it is indeed the case that iperf works just fine on Windows with pretty standard flags (note that even on typical Linux, recv buffers are constrained to ~6MiB, so -P is standard for WAN saturation tests).
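
        A quick way to sanity-check that ceiling on a given box (stock sysctl shown; the third field is the auto-tuning max, and exact values vary by distro):

            $ sysctl net.ipv4.tcp_rmem
            net.ipv4.tcp_rmem = 4096 131072 6291456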

        (Also good to see your relentless attention to engineering details as always. We used to work together some years back)

        • raggi 12 days ago
          It largely depends what you mean by "works just fine". If all you want is "high throughput on a low-latency link" then sure, whack up the buffer and throw it at the wall, it'll stick. But you can just as well use curl for that.

          To study higher end network stacks or more modern protocol dynamics, or cross network flows, the other tools are providing much better insights.

          The need for -P for basic tests is also telling, and part of the problem: if the tool can't outpace a browser, then even for these "basic cases" it's showing its inadequacy for the kinds of things the bulk of users care about even today, even with old protocols.

  • eqvinox 13 days ago
    > What Does Microsoft Recommend

    > Microsoft maintains two synthetic network benchmarking tools, ntttcp (Windows NT Test TCP) and ctsTraffic.

    Okay, let's see which of these is compatible with the iperf3 available on my locked-down commercial router (or, alternatively, the test endpoint my ISP provides) …

    …it's neither…

    …aaaaand we're back to iperf3 on Windows. :-(

    As tongue-in-cheek as this comment may be, a recommendation for 2 incompatible tools is worth nothing. iperf is the de facto standard; if anything, the problem is that sometimes you have iperf2 around for some reason. If, Microsoft, you want better Windows network stack benchmark results, you'll need to make something compatible with iperf3 (or fix iperf3).

    (It does not matter that ntttcp runs on Linux. I can't install ntttcp on an Ubiquiti router, or ask my ISP to add another test endpoint.)

  • proactivesvcs 13 days ago
    "Go search for “iPerf3 on Windows” on the web. Go ahead, open a tab, and use your search engine of choice. Which I am certain is Bing with Copilot."

    For many people it won't be by choice; it'll be because Windows' plethora of dark patterns and anti-consumer features have tricked them into using it, or taken away their previous explicit choice.

    • Joker_vD 13 days ago
      I personally thought the part about Bing with Copilot was a subtle joke, because giving people the benefit of the doubt is just the polite thing to do, and even Microsoft employees deserve it. Otherwise the sentiment contained in that sentence is indeed quite outrageous.

      Anyhow, even when searching for "iPerf3 on Windows" in both Google and DuckDuckGo, the first result is still iperf.fr.

      • 38 13 days ago
        Even with the most charitable interpretation, it's a tasteless joke. Bing has a history of shoving Copilot down people's throats, and Microsoft has a LONG history of shoving Bing down people's throats. They built it into the God damn Start menu, and have repeatedly changed the method to disable it. So even people who have explicitly disabled it will see it pop back up from time to time. Fuck Microsoft.
        • iforgotpassword 12 days ago
          What else do you expect a Microsoft employee to say? Google it? I think that would be awkward, but the author is fully aware nobody is using Bing, so they made a joke. It seemed somewhat sarcastic to me so I had to chuckle.
          • nikanj 12 days ago
            Nobody? Tons of people are using Bing, because their IT-skilled family members only reset it to Google on Christmas and Thanksgiving, but Microsoft resets it to Bing every other Edge update
      • mrguyorama 12 days ago
        >because giving people the benefit of doubt is just a polite thing to do and even the Microsoft employees deserve it

        This is just woefully naive when we have court documents from Microsoft's history of open source attacks. Microsoft lost the benefit of the doubt ages ago, never mind that you should not be giving "the benefit of the doubt" to a legal entity whose only incentive is to extract more wealth from the world.

  • molticrystal 13 days ago
    Perhaps Microsoft should work with OpenWRT to get the Linux version of ntttcp incorporated and built for routers as a package. I use iperf2/iperf3 because they are provided by opkg.
    • luma 13 days ago
      I think this is the key problem with their recommendation: it presumes that I'm running Windows on both sides of the test. iperf is neat because it runs on just about anything that can run a binary, which is really handy in datacenter troubleshooting situations.
    • pseudosavant 13 days ago
      My first thought was "I bet getting ntttcp on x86 with Ubuntu/CentOS is ok, but I doubt it will be simple for something like OpenWRT."

          root@router:~# opkg find ntttcp
          root@router:~# opkg install ntttcp
          Unknown package 'ntttcp'.
          Collected errors:
           * opkg_install_cmd: Cannot install package ntttcp.
          root@router:~#
      
      
      Microsoft would be so much better served putting resources into proper Windows support for iperf3, instead of creating their own tools and convincing the world to switch.
      • rixthefox 12 days ago
        I would suspect Microsoft's corporate culture is to blame for that. If you make a tool that "already works on Windows" actually work, you don't get any extra praise. If, however, you come up with a brand new thing, you'll get a slap on the back and a raise!
      • suprjami 12 days ago
        There are only packages for Debian, Ubuntu, and NetBSD. Not even include-everything Arch or Gentoo has ntttcp apparently.
  • SushiHippie 13 days ago
    The article is actually right that I had downloaded the old version for Windows (this would not happen on another OS, as there are maintained packages in most package managers).

    But I just checked with the newer version and I didn't see a difference when testing on my 2.5GbE LAN (still 2.3 Gbit up and 1.9 Gbit down).

    • Joker_vD 13 days ago
      Do you really want Microsoft to maintain a centralized software repository from which you, yes, you, are supposed to install almost all of the software you'd like to use? Do you, really?

      It's always such a bizarre experience when people advocate for centralized repositories for Windows. Windows, unlike Linux, comes in very few flavours/versions (and with a decent backwards-compatibility story), so application developers can very well be (and in practice, well, are) expected to build their apps for the Windows versions they care about and then use their web sites to distribute them.

      In this case with iPerf3 we just have a sad story of two forks, where the group that owns the namesake website for some reason could not be bothered after 2016 to re-upload the other group's releases. Well, similar things happen in "official" Linux distros as well, with severely outdated packages.

      • zamadatix 13 days ago
        I, really, wouldn't mind installing most of my software from Microsoft-maintained repositories. What drives me nuts is when these are the ONLY place you can get things and you can't just add other repos like you can on any Linux setup. Managing distro variations doesn't really come into why I like to use centralized repositories.

        If winget had appeared a decade or two earlier it might have replaced the need for every Windows app to ship an auto-updater.
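
        The basic verbs are already there today; for example (whether a given tool such as iperf3 is actually packaged is the real caveat):

            winget search iperf
            winget upgrade --all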

      • mschuster91 13 days ago
        > Do you really want Microsoft to maintain a centralized software repository from which you, yes, you, are supposed to install almost all of the software you'd like to use? Do you, really?

        I'd actually love that, yes. Just google "download VLC": the top link does refer to VLC's official site (probably because they paid for, or were donated, the top spot), the second one goes to an IT newspaper, the third one to something uBlock flags as a badware risk, the fourth one to another newspaper, the fifth one to someone offering VLC with a newsletter subscription, and finally the official page.

        And that's the reality for a lot of popular Windows software. Part of that is obviously the responsibility of Google, which can't even keep the top software search results free of advertising grifters (the newspapers) or malware, but the larger part is on Microsoft for not offering a centralized store for decades.

        In contrast, on Ubuntu a simple `sudo apt-get install vlc` is enough, and (barring supply-chain corruption) I can be reasonably sure that I get a stable, unmodified version of VLC. Oh, and when I run `sudo apt-get remove --purge vlc` it is gone... in contrast to a lot of Windows software with badly written uninstallers (because MSI is an utter PITA to develop against) that leave remnants everywhere across the system.

        • Joker_vD 13 days ago
          Well, good news: VLC is in the Microsoft Store [0]. Go there and get it straight from the horse's mouth! And the best thing is, it has been there for a decade already.

          [0] https://apps.microsoft.com/detail/xpdm1zw6815mqm?hl=en-us&gl...

          • oarsinsync 10 days ago
            Well, bad news: the VLC team is pretty explicit about their negative views on app stores in general, as well as the Microsoft Store specifically.

            VLC: App Stores Were a Mistake (twitter.com/videolan) https://news.ycombinator.com/item?id=39798565

            More here: https://mjtsai.com/blog/2024/04/19/vlc-vs-the-app-stores/

            > Currently, we cannot update VLC on Windows Store, and we cannot update VLC on Android Play Store, without reducing security or dropping a lot of users…

            > If you do wonder why we don’t update VLC on the Windows Store or why VLC/iOS can’t connect properly to OneDrive shares, it’s because Microsoft Kafkaïesque bureaucracy refuses to help us.

            So maybe you don’t want to use centralised app stores with commercial control, and instead want to download software directly from the authors.

        • dmz73 12 days ago
          Except that in Ubuntu and Debian (and likely other major distros) you don't get the unmodified version of VLC; you get a crippled version with missing features. I recently spent a day trying to use VLC on Debian and Ubuntu to stream video from a network camera (which works on Windows), only to find that the Linux distros removed some VLC library that was required for this to work. So I used the AppImage version instead, which did have the missing library. Linux distro packages are no longer the easiest or best way to get working software.
          • oarsinsync 10 days ago
            > Linux distro packages are no longer easiest or best way to get working software.

            Debian explicitly ships free software. For the software they package, they make no apologies for the impact on functionality when authors depend on non-free elements. This has always been true, and is always a source of complaint from people who prioritise functionality over freedom.

          • SushiHippie 11 days ago
            Seems like the 'problem' was non-free code:

            https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=981439

    • jtriangle 13 days ago
      Just an aside: when I'm forced to use Windows, installing programs with choco is a much better experience.

      It's not quite Linux package management, but it beats the pants off any other way.

      • darknavi 13 days ago
        I am a big fan of winget these days.
        • alyandon 13 days ago
          Winget definitely has its warts but is useful enough for me that I use it almost exclusively now to install and update programs.
          • jtriangle 12 days ago
            I've found that it occasionally just..... fails, and then works on the second attempt for, well, reasons? No idea.
    • drbawb 13 days ago
      I was able to get up to 35-37 Gbps between host and guest with a virtio NIC using the outdated iperf3.exe, which I did indeed acquire from the site mentioned in the article. That seems like plenty of perf to me, considering the fattest link I could possibly have in my house would be ~35Gbps if I aggregated all the ports on my switch. (... and filled my Threadripper system with nothing but NICs, I guess?)

      I actually get ~7 Gbps, not 10Gbps, out to my real network. I haven't dug into what the issue is but when hitting my real network the guest clearly becomes bottlenecked by CPU, so I doubt iperf3 is to blame. (The host does not have this problem despite being on the same bridge, so I'm guessing there is some host-guest optimization I'm hitting in the virtio driver in the former case.)

      Now that's a low-latency, high-bandwidth link. If you're testing "high latency, high-bandwidth" as purported in the article, and that link is apparently ~40Gbps or fatter, you're probably running on a "real server" in a "real datacenter" somewhere. I can tell you I wouldn't be burning a Windows Server license just to verify my L2/L3 connectivity is configured correctly.

      I am sure MS would love if I bought two licenses of their proprietary operating system just to use this proprietary network testing client, but my pockets are not infinitely deep. (As I already spent all that money on the Cisco-branded optics. /s)

      • banish-m4 12 days ago
        Windows Server is barely used anywhere anymore outside of Azure and niche IT operations that demand it. CentOS Linux, Amazon Linux, and Debian Linux run on hundreds of millions to a billion boxes.
  • PaulKeeble 13 days ago
    iperf3 is popular because it's in the router software stack and a good way to check raw performance. I know from my own experience that iperf3 on Windows performs fairly similarly to Linux on a modern machine, all the way up to 10 Gbit/s. It's a theoretical problem, not a practical one, but if Microsoft has a better solution then they can open source it, do the work to include it in the open source router software, and it will get used.
    • nullindividual 13 days ago
      If you read the article, the author shows iperf3 is slower than Microsoft's open source tools.
      • rixthefox 12 days ago
        Nothing wrong with it being faster, but it's not impossible to make tools that are compatible with other standard testing programs. By intentionally ignoring the tools network engineers are already using every day, Microsoft is really shooting themselves in the foot and showing that they are tone-deaf to the needs of users outside of their walled garden.

        How many routers do you know off the top of your head that run Windows?

      • supertrope 12 days ago
        Using v3.16 of iperf3.exe I got 9.2 Gbps between Windows and Ubuntu, over a single TCP connection. No -P flag needed.
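
        In other words, the plain default invocation, which opens a single stream (the address is a placeholder):

            iperf3 -c 192.168.1.10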
  • extraduder_ire 13 days ago
    I wonder how it performs under WSL. I assume worse than the suggested benchmarking tools, but far better than the outdated, Cygwin-based iperf3 version that the page references.
    • Dalewyn 13 days ago
      Given the goal is benchmarking and there are native options available, having emulation/thunking (WSL1) or virtualization (WSL2) in the pipeline would defeat the purpose unless you very specifically want such things.
      • TiredOfLife 13 days ago
        Even enabling WSL2 or other such features puts Windows itself into virtualized mode.
    • nullindividual 13 days ago
      If you read the comments on the article, the author goes into WSL2 perf.
  • suprjami 12 days ago
    ntttcp only has packages for Debian, Ubuntu, and NetBSD:

    https://pkgs.org/download/nttcp

    No user of the other commercial Linux distros (RHEL, SLES) is going to install ntttcp from source or random binaries.

    Of course, the Windows kernel team could support POSIX system calls that iperf and other network applications require, instead of removing that feature altogether:

    https://learn.microsoft.com/en-us/archive/technet-wiki/10224...

    But hey, why provide customers a good product when you can just write a blog post with a useless recommendation instead?

    Microsoft is a big company with many teams on many products. Some of those teams really understand interoperability with modern developers and applications. This team doesn't.

    • pjmlp 12 days ago
      Actually, had Windows NT provided better POSIX support, I would never have bothered with dual-booting into BSD or Linux during the 1990s.

      In hindsight, not doing that is what opened the door to FOSS on PCs in the first place, so beware what one wishes for.

  • rjmcmahon 11 days ago
    Just some clarification: iperf 2 is different from the iperf3 found at https://github.com/esnet/iperf. Each can be used to measure network performance; however, they DO NOT interoperate. They are completely independent implementations with different strengths, different options, and different capabilities. Both are under active development.

    The current release of iperf 2 is 2.2.0, but 2.2.1 with bug fixes will be out soon.

    Comparison table here: https://iperf2.sourceforge.io/IperfCompare.html

  • jandrese 13 days ago
    Yet again this article shows how Microsoft programmers don't dogfood their own console applications.

    Compare the invocations of these command-line apps:

    Linux app:

        iperf3 -s
    
    Windows app:

        .\ctstraffic.exe -listen:* -Buffer:"$(128KB)" -Transfer:"$(1TB)" -ServerExitLimit:1 -consoleverbosity:1 -TimeLimit:60000
    
    Who the hell wants to type all of that out?
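
    The client side is the same story. A sketch, assuming ctsTraffic's documented -target flag and mirroring the server-side options above (the address is a placeholder):

        iperf3 -c 203.0.113.1

        .\ctstraffic.exe -target:203.0.113.1 -Buffer:"$(128KB)" -Transfer:"$(1TB)" -consoleverbosity:1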
    • banish-m4 12 days ago
      Steve Jobs might've been an opinionated dick, but he was right that Microsoft "has no taste in a big way". CLI UX with sane defaults and shorter options wins.
    • cchance 12 days ago
      Are really none of those defaulted? I kinda figured that example was just setting its defaults to match the iperf defaults.
  • rjmcmahon 11 days ago
    Also, with iperf 2, -c localhost seems to perform the same on Windows as it does on Linux. So I'm skeptical that the Cygwin overhead is very large.

    A binary is here: https://sourceforge.net/projects/iperf2/files/
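
    For anyone wanting to reproduce that loopback check, it's the classic iperf 2 pair, one terminal each:

        iperf -s
        iperf -c localhost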

  • jiggawatts 13 days ago
    Over the years I've used a variety of network test tools, including ntttcp, iperf, and hrPing, and I've even whipped up a couple of custom ones.

    I've gravitated towards using custom test tools that use the same pattern as the application that will run on the network. So instead of testing maximum UDP throughput or with dozens of TCP threads with artificially high window sizes, I test with "whatever it is" that the client/server apps are likely to do.

    I still use hrPing, because it behaves a lot like a large category of common server apps, such as Linux tools ported to Windows.

    • suprjami 12 days ago
      The problem there is commercial users want trusted software from their support providers.

      A company providing a network application should be providing a custom benchmark tool whose behaviour mimics the application. If the application provider is so incompetent that they don't do that, then tools like yours become useful, if they are allowed by organisational security policy. In my experience, few are going to compile some bloke's random testing tool from source or run scripts off your GitHub.

      Most commercial users just want to plug in 10Gbps NICs, run iperf and see "9.x Gbps", tick a box and move onto the next item.

    • eqvinox 13 days ago
      Good recommendation if you need to test for some target application's performance.

      However, when the service you provide is the network itself, e.g. to a customer who just bought XYZ performance from A to B, you need usage-agnostic test tools.

  • probably_jesus 13 days ago
    [dead]
  • tzury 13 days ago
    [flagged]
    • kryptiskt 13 days ago
      Because I'm a professional and not a cultist, I'm game for using whatever environment as long as I'm appropriately compensated. Windows, QNX, Wind River, bring it on.
    • John23832 13 days ago
      You obviously haven't witnessed software development in Asia (particularly India, and until recently, China). Windows is everywhere.

      .Net and C# are drivers. Plus, people just have Windows computers.

    • GuB-42 13 days ago
      Because their customers use Windows, and at some point, even when their app is multi-platform, they need to work on the platform their customers are using.

      Amateurs can do as they like, but professionals need to follow the money, and sometimes, Windows is where the money is.

      Edit:

      Windows has a few strong points; I particularly like the Visual Studio debugger, and there is no equivalent on Linux (my platform of choice) that I am aware of. But really, since you specifically mentioned professionals, the answer is simple: money.

    • nikanj 13 days ago
      Because despite what HN says, the vast majority of actual paid work is still done on Windows PCs using a locally-running application.

      If you want to sell a Windows Desktop application to your customers, you probably want to write it on Windows.

    • LordN00b 13 days ago
      Because Visual Studio is Windows-only.
      • SpaghettiCthulu 13 days ago
        Why do professional software developers use Visual Studio in the first place?
        • jasode 13 days ago
          >Why do professional software developers use Visual Studio in the first place?

          - to build C++/C# executables that are deployed to Windows computers. Examples are the millions of corporate desktops that already use Microsoft Office suite and home pcs with Windows for running AAA games.

          - to build ASP.NET web apps because the corporation chose the Microsoft stack of WindowsServer+SQLServer+C# instead of Linux+MySQL+PHP.

          Why do companies continue to use Windows instead of Linux? Often because they have lots of other desktop software that only runs on Windows. E.g. CAD, analytics, etc.

          If the target audience is already on Windows and the implementation language is C++/C# instead of Javascript/Electron, the easiest path for the developer is to also use Microsoft Visual Studio on Windows.

        • high_na_euv 13 days ago
          Because it is a world-class IDE, which is by far ahead of the competition (except Rider) for C#?

          I wish every language had as good an IDE as VS.

          • josephg 13 days ago
            IntelliJ seems pretty close, at least in the languages I’ve used it in.

            Xcode used to be good as well, but now it seems slow and crashes all the time. It’s a terrible ambassador for the engineering culture at Apple.

            • vbezhenar 13 days ago
              Xcode is a joke compared to VS or Idea.
            • SpaghettiCthulu 13 days ago
              Intellij is ahead in my experience
          • ecmascript 13 days ago
            I did C# dev for a couple of years. I think VS is a shitty IDE. It's slow and laggy; so much so that I started to use Visual Studio Code for some of the C# development I did. It's way worse than, for example, JetBrains products, by a large margin.

            What exactly about VS is so great? Maybe I didn't use that feature...

            • high_na_euv 13 days ago
              Debugger, changing code on the fly, expression evaluator.
          • madeofpalk 13 days ago
            It's embarrassing how much better Rider is as a C# IDE.
            • high_na_euv 13 days ago
              Pure VS or VS with extensions? Because things like Roslynator enhance VS significantly while not slowing it down like ReSharper.
              • madeofpalk 13 days ago
                The product that Microsoft makes vs the product that Jetbrains makes.
                • high_na_euv 12 days ago
                  Extensions are a part of the ecosystem.
          • SpaghettiCthulu 13 days ago
            Having used Visual Studio and Rider myself, I can say with confidence that you are correct: Rider is miles ahead in terms of stability and usability.
            • high_na_euv 13 days ago
              Consider using free extensions like Roslynator then.
          • KETHERCORTEX 13 days ago
            I wish they hadn't killed MonoDevelop.
          • prmoustache 13 days ago
            Why do professional developers use C# in the first place?
            • vbezhenar 13 days ago
              It's pretty simple chain:

              1. Most consumers use Windows. That might not be the case in the USA, but it's definitely the case in most of the world.

              2. To develop for a platform, the primary choice would be the platform creator's recommendation. In this case it's obviously .NET from Microsoft and Visual Studio, which contain extensive UI libraries and components.

              There are alternatives, of course: you can use Qt or Electron or just a web app. But you would need a good reason to move off the primary path, and for many projects there's no good reason, so you naturally end up using .NET to write GUI applications for Windows.

              3. To develop .NET GUI applications, you need Windows. Any other option is inferior.

              That was the case 10-20 years ago. So plenty of projects were created and plenty of developers were taught .NET and Windows.

              Today web apps are more popular. But .NET provides excellent support to create web apps, so there's no reason to switch from Windows.

              That's why many professional developers use C# even today.

              Of course, different people take different paths, but I think for the majority that's the way: .NET GUI frameworks, and maybe SharePoint, created a huge number of Windows developers.

              • neonsunset 12 days ago
                Today, there is no reason to use Windows specifically when developing back-end or GUI applications in .NET (unless you're stuck with WPF and co).

                Though the argument makes for a convenient straw-man since it used to be true until 2016.

              • prmoustache 13 days ago
                I was just following the running gag, but thanks anyway. I did a bit of C# when, as a sysadmin, I was asked to build some kind of frontend that would hide the Pocket PC interface, so that employees in a hospital could use those Pocket PCs to order menus for the patients and not end up using them to browse shit or install unwanted apps.

                Security by obscurity in all its glory, but they used to do the same with a very easily escapable panel replacing the default Windows shell in Win95 and Win98. :)

            • josephg 13 days ago
              Because it’s a good language.

              Fast enough for most applications. Memory-safe enough. Easy enough to learn. Good enough type system. And it helps that it's integrated into Unity and Windows. Also, the ecosystem, compiler and IDE support are exceptional.

              It’s not perfect. It’s slower than native code. It has a worse type system than rust, Swift and typescript. (Burn the nulls!) And C# needs a runtime - which makes it annoying to deploy on Mac & Linux.

              I’d put it as an A tier language.

              • neonsunset 13 days ago
                ARC and shared_ptr are net more expensive than GC; in throughput scenarios you will see that C++ often offers no meaningful performance advantage. Also, Swift is significantly slower due to the upfront ARC cost and defaulting to virtual dispatch in many places, more so than .NET with interface spam (Dynamic PGO takes care of that anyway).

                Nulls have stopped being an issue in practical terms, since you specify nullability explicitly, e.g. string? vs string.

                C# does not need a runtime installed on the host. You can produce JIT or AOT executables which include one. It needs a runtime on Windows just as much (if you don't include it or the host doesn't have it installed). Only .NET Framework was preinstalled, but no one (sane, that is) chooses that legacy target for new code.
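
                A sketch of the publish commands in question (standard dotnet CLI flags; the runtime identifiers are examples, and PublishAot requires .NET 7+):

                    dotnet publish -c Release -r win-x64 --self-contained true
                    dotnet publish -c Release -r linux-x64 -p:PublishAot=true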

                • josephg 12 days ago
                  Do you have some benchmark examples where C# beats the throughput of C++ using shared_ptr?
            • high_na_euv 13 days ago
              One of the best-designed languages, with by far the most sane ecosystem.

              I did C, C++, and C# for money, and that's my experience.

              C# has the lowest amount of WTFs per LOC.

              • KronisLV 13 days ago
                Here's a recent comment of mine that talked about some of the ups and downs of choosing C# and .NET for a webdev project of mine: https://news.ycombinator.com/item?id=40022126

                In short, nowadays it's a pretty strong contender and can fill in similar niches to Java (decent type system, good runtime, good tools, nowadays available on most platforms), with downsides that to me don't seem like dealbreakers (not as big of an ecosystem, which leads to breakages along the way when tools and libraries aren't as well maintained).

                I've also used Node, Python, Ruby, PHP, a bit of Go and some others on the back end. I'd say that it's a bit slower to develop in than many of those, but the performance and maintainability of the code (especially the refactoring you can do in the IDEs) feel worth it. Maybe not for quick MVPs, but probably for multi-year projects.

              • SpaghettiCthulu 12 days ago
                > I did C, C++, and C# for money, and that's my experience.

                > C# has the lowest amount of WTFs per LOC.

                Sure, C# looks great when compared with C or C++, but that's a very low bar to pass. Compare it with Java, and it's on par at best. Compare it with Rust, and C# might as well be C.

          • repelsteeltje 13 days ago
            Why do professional software developers use an IDE in the first place?

            (vim and a bash shell are all you need)

            • josephg 13 days ago
              Because once a program is large enough, you can’t fit the whole thing in your head at once. Tools which help you bounce around and refactor code en masse help keep complexity under control and make you more productive.
              • AlexandrB 13 days ago
                In my experience, IDEs often result in code that's more complex in the first place. Instead of keeping complexity under control, they introduce it since it's easier to work with more complex code if you have an IDE.
                • josephg 13 days ago
                  IDEs don’t create complexity. But they do remove some of the natural downward pressure on complexity that you get without an IDE.

                  I’ve been programming for 30 years and I still feel very mixed about that. For some projects, I figure who cares if it took us 30k lines to solve the problem. It’s just that complex, and more code = more optimised data structures and faster runtimes. On other days I’m reminded of how productive I can be with tiny programs or libraries that just solve their own niche exceptionally well. The best small libraries basically never need to be modified or updated. They just work and keep working forever.

                  I don’t think this conflict can ever be solved once and for all. I’m sad that Chrome and Linux are around 20-30M lines of code. Would they be better or worse programs if they were smaller? It’s so hard to know.

                  • pipo234 13 days ago
                    My worry with large code bases is not so much about performance, or bloat. It's: bugs.

                    One of the first lessons I learned, some 30 years ago in CS class, was that invariably you'll find at least 3 bugs/kloc, regardless of where those 1000 lines of code came from. Highly critical, battle-tested code tended to have fewer, but even code where those 3 bugs had been fixed would, given enough time, ultimately turn out to have yet another 3 bugs (on average). That's scary!

                    My professor suggested 2 ways to mitigate:

                    1. create robust systems that can handle malfunctions

                    2. never write a single line of code that doesn't need to be written

                    PS: I don't think it's fair to attribute Linux's vast code base to IDEs.

                  • repelsteeltje 13 days ago
                    > more code = more optimised data structures and faster runtimes.

                    Not sure if I agree; can you elaborate? It seems a more modest claim, "more code allows for more optimized data structures and faster runtimes", would be more accurate(?)

                    At least, in the Pentium era it used to be that instructing the compiler to optimize for size often resulted in faster code than optimizing for speed. That was of course the result of relatively small (text segment) caches and the often underestimated effects of temporal and spatial locality.

                    • josephg 12 days ago
                      > Seems a more modest claim "more code allows for more optimized ds and faster runtimes" would be more accurate(?)

                      Yeah; I’m assuming the code in question is produced at a similar skill level. Hence mentioning Linux and Chrome.

                      But you’re right; large projects are often large because they’re written poorly. Personally I think there’s a special place in hell for people who bloat programs unnecessarily. Making if statements into class hierarchies. Adding generic interfaces to things that don’t need them. Factories and factory factories. This stuff reduces your team’s productivity with no corresponding benefit. I hate working with people who argue for this stuff.

            • mlfreeman 13 days ago
              change "professional software developers" to "real programmers" and this is like watching https://xkcd.com/378/ come to life.
            • Hikikomori 13 days ago
              Do you compile with bash or vim?
              • repelsteeltje 13 days ago
                Nope, I use a compiler to compile, a makefile to drive the compiler, an editor to edit, and bash to control them all.

                Do you compile with an IDE?

                There used to be tools that built compiler, runtime, debugger and editor all into a single monolithic hammer, like 4D (FourthDimension) in the late '90s. There is Excel, of course. Nice for prototyping, but very clunky if you want to build and maintain professional software packages that are easy to test and to deploy.

                If Visual Studio were the only way to use Visual C++, we'd have long ago abandoned cross-platform development.

            • prmoustache 13 days ago
              It is funny how people don't understand running jokes and just downvote in anger.
        • jimcsharp 13 days ago
          Just to make a bunch of money.
    • mixermachine 13 days ago
      Company policy and old habits
    • guardian5x 13 days ago
      Because it works pretty well and offers basically all the development tools, and WSL2 as well. And some developers like the Windows desktop. (I know this might be an unpopular opinion.)
    • delta_p_delta_x 13 days ago
      I swear, these sort of snarky drive-by comments are below HN's standards. I'll bite anyway.

      - Windows has a 72-75% market share of desktop OSs.

      - About 95% of non-console video games are developed either for or on Windows. And video games are the most lucrative entertainment market—more than music, film, television, and adult entertainment together.

      - A large majority of internal sysadmin is run with Windows Server 2022, including things like Active Directory, Exchange, SMB, etc.

      - About a fifth to a quarter of public web servers are also running Windows Server.

      These are the financial factors.

      - Windows is a decent development environment—sue me, UNIX greybeards.

      - Visual Studio 2022 blows everything that the FOSS world has come up with out of the water. GDB, perf, LLDB, dtrace: I've used them all, and they all suck compared to what VS offers. At some point, you want to stop screwing with 'my OS is an IDE', and just click through your source code setting red circles, press the green play button, and then smash F5 jumping through code. And it works.

      - During debug, I get flame graphs, memory and CPU profiling with live graphs, stack traces, exception traces, conditional breakpoints, thread tree views, local, global, and 'auto' variable values, highlights when variable values change, and even GPU debugging.

      - For a 'closed-source' OS, debugging and profiling on Windows is still better than on Linux[1].

      - Windows comes with a fantastic GC runtime—.NET—and an equally powerful shell (heh, maybe that's why they called it PowerShell) that can directly call methods and classes in said runtime. I'd like to see the UNIX shell with an equivalent (there's a one-liner example after this list).

      - Windows doesn't have fork(), and all the baggage that comes with it. It is possible to back-target Windows XP and even Windows 2000 from a computer running Windows 11 and VS 2022. I'd like to see what Linux can do barring 'let's spin up an old CentOS Docker'.

      These are the development factors.
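
      The one-liner promised above, calling a .NET class directly from PowerShell (example.com is just a placeholder host):

          PS> [System.Net.Dns]::GetHostAddresses('example.com')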

      As a user OS, it's getting enshittified, I fully concur. I despise the encroachment of ads and 'Copilot' everywhere. But as an OS, I'll daresay it is every bit as productive as Linux is, if not more so.

      [1]: https://www.reddit.com/r/rust/comments/hrqr36/cpu_profiling_...

    • threePointFive 13 days ago
      The gravity of ASP.NET, SQL Server, AD, etc. is hard to break out of.
    • pjmlp 13 days ago
      Because professional software developers have Windows customers.

      Developer !== UNIX.

    • cqqxo4zV46cp 13 days ago
      I haven’t consensually used Windows in almost two decades, but this still made me roll my eyes.

      Your industry peers aren’t your comrades in this weird 2000s holy war.

    • throwaway2990 13 days ago
      [dead]