19 comments

  • jug 1655 days ago
    .NET Core 3.0 was a major achievement for Microsoft, and I like the energy surrounding the project and related efforts like Visual Studio Code and the Windows Subsystem for Linux.

    The teams working on Windows development tooling are on a roll these days, and it's both sad and a little mysterious that Windows 10 itself is still struggling with QA issues, decisions like dismantling its internal testing teams, dual control panels, and inconsistent design.

    I've heard Windows devs saying off the record that it is hell to work on, with so much boilerplate and ceremony both in practice and in politics, and with few people understanding all the ins and outs of building for the Windows 10 desktop, many resorting to copy & pasting because "it's just supposed to be like that". That makes it a little easier to understand why the aforementioned old control panel is still around even after five years of work on a new one.

    I wish they could somehow restructure and streamline, and come back to share the kind of excitement the .NET Core team is generating right now. Maybe if Windows 10X is a success? Maybe Windows really is getting too big for its own good, and the Win32 apps are best put into one fat virtualized environment for backwards compatibility, like in that upcoming evolution of Windows.

    Then write the Windows user land in .NET Core.

    • gnud 1655 days ago
      I love it every time I get the 'old' control panel. It means I know where the switches are, and more importantly, what they do.

      The new UIs change with every minor update. Layout, labels and semantics. It's truly horrible. Please don't encourage Microsoft to mess it up even more.

      • nick_ 1655 days ago
        I call it the Windows control fractal. Each new version of Windows requires clicking through a totally new UI layer to get to the previous version's control panel. Finally you arrive at the Windows 95 settings program you were trying to get to in the first place.
      • simonh 1655 days ago
        >It means I know where the switches are, and more importantly, what they do

        Or even more importantly than that, that they even exist in there somewhere.

        • AndrewDavis 1655 days ago
          My goodness.

          Recently I went to play a game on my laptop, something I don't normally do. I'm not even normally a Windows user.

          It has this 'feature' where if a key is pressed, all mouse clicks are disabled for X seconds. This is to prevent your palm from tap-clicking on the touchpad while typing. Absolutely makes sense.

          Only... I had a mouse plugged in. It disabled a physical USB mouse from clicking after a keypress. Even after disabling the touchpad entirely it continued this behaviour.

          According to some Googling, there should have been a switch in the modern Windows 10 style mouse config. But it wasn't there.

          More Googling: "go edit this registry key", but I didn't have that registry key.

          More Googling, the registry key depends on your touchpad driver.

          More Googling finally found a way to disable it globally with regedit.

          #1 Why was the switch not available in the mouse settings? #2 Why was it applied to a USB mouse? #3 Why did the registry key differ depending on the touchpad driver?

          Just a nightmare.

          • Someone1234 1655 days ago
            This one is mostly the touchpad driver being poorly written and/or outdated. Let me guess, Synaptics touchpad?

            For some inexplicable reason their newer touchpads (in newer laptops) support the Settings configuration, whereas their older ones (in older laptops) only work via the Mouse Control Panel applet (and sometimes not even then).

            I've also had their touchpad driver cause keyboard input latency (it tries to hook hotkeys), with uninstalling it entirely and going without a touchpad being the only solution. Their stuff is just bad, and they should feel bad about it.

            I doubt they do feel bad as they have a seeming monopoly on Windows laptop touchpads.

          • DennisP 1655 days ago
            I've got a Macbook Pro and it doesn't seem to have or need this feature in the first place.
            • wlesieutre 1654 days ago
              It does have an analogous feature, the MBP's trackpad is huge and you probably touch it by accident all the time.

              It's just that Apple's system tries to separate deliberate touches from accidental ones, rather than disabling the trackpad input completely.

              You used to be able to turn this on and off (it was called "Ignore accidental trackpad input" in the trackpad preference pane) but now it's an always on feature. The trackpad has extended so far into the palm rests that it would be very difficult to use without it, and the feature works well enough that you don't even realize it's happening.

              https://support.apple.com/en-us/HT201822

              • AbortedLaunch 1654 days ago
                It may work quite often without me knowing, but ever since I got a 2017 Mac I have multiple events per day where my typing ends up in a random location due to phantom clicks. Perhaps I should disable tap to click, even though I really dislike clicking via the force push.
              • DennisP 1654 days ago
                Yes, I touch the pad by accident all the time. But why is Apple so good at detecting which touches are accidental, while so many other manufacturers are not?
                • the_pwner224 1654 days ago
                  On Linux this works similarly. ~5 years ago my old laptops did the 'disable touchpad when typing' thing, now with libinput the touchpad has excellent and completely invisible (to the user) anti-false-touch functionality.

                  Apple has invested the time to make great trackpad drivers. People have invested time to make libinput a great driver for Linux (in this regard - it does lack customization possible in older drivers).

                  On Windows the manufacturers don't care. IIRC MS is doing a thing where manufacturers can add some property to their touchpad HW/drivers which tells Windows to handle the touchpad entirely on its own, and the result has been that touchpads have been improving significantly over the last few years since MS is investing time in making good drivers. But that depends on your specific touchpad allowing Windows to do its magic, instead of using crappy OEM drivers.

                  • DennisP 1653 days ago
                    That is fantastic news about Linux. I bought a System76 four or five years ago and the touchpad was terrible. It wasn't just false touches; it was really hard to accurately click on something, the pointer kept making little jumps. Would you say that's better too?

                    If touchpads are good now, then I might go dedicated Linux again when it's time to upgrade from my aging MBP.

                    • the_pwner224 1653 days ago
                      Try making an Ubuntu/Fedora USB and booting from it once on your MBP, that should let you see what it's like. I had to explicitly disable the 'disable touchpad while typing' libinput setting but aside from that everything works great out of the box.
                      • DennisP 1653 days ago
                        Cool, thanks!
      • x3haloed 1655 days ago
        > It means I know where the switches are, and more importantly, what they do

        Does that mean you would never be comfortable with a complete UI overhaul?

        > The new UIs change with every minor update

        I completely agree that this is a problem, and it's part of my biggest issue with Windows 10. They're constantly reworking the new UI for what seem like micro-optimizations, but they're mostly ignoring the old UI. Not only is it a headache for users to keep up with new paradigms every 6 months, but there continue to be two areas in Windows to control system settings, one of which is hidden pretty well, and the location of settings between the two is continually changing. It's a nightmare.

        .NET Core, on the other hand, is pure gold. It's fast, it runs everywhere, it's well documented, and it's a joy to work with.

        Which leads us to Visual Studio, which is becoming like Windows: constant tweak updates that break things. I stopped installing '.0' releases, because they're consistently broken.

        • Faark 1654 days ago
          > Does that mean you would never be comfortable with with a complete UI overhaul?

          I'd like the old interface to still be around, so I can click through the nav layers like I always have. At the end, instead of the information/setting I'm looking for, I'd be fine with info on where to now find that info/setting. This "tutorial" should obviously be kept up to date with later UI changes. It also requires the info/setting to still be available... so no dumbing-down, please.

      • tonyedgecombe 1655 days ago
        Not only that but the new interface is becoming just as complex as the old. By the time they have finished they will be back where they started.
        • supernovae 1654 days ago
          Nah, it's pretty simple these days and people who love scripting can use powershell for everything
          • tonyedgecombe 1650 days ago
            Nah, it’s getting just as complex as the old interface, there is nobody with the ability to say no, this is too much.

            I’m a big fan of PowerShell, I have embedded it in two products. However it’s not a good justification for a poor user interface.

      • cm2187 1655 days ago
        And also items from the old control panel don't appear any more when you search in the start menu.

        Which would be ok if they had been replaced by an equivalent in the new menu. But they have not. So what is the point in hiding them?

        • velox_io 1654 days ago
          They're replacing the pre-Windows 10 Control Panel items with a new Settings interface. The problem is that it's far worse:

          Take tasks such as changing network adapter settings or sound settings. I regularly switch between three different audio devices on my desktop PC, and simple tasks such as toggling volume levelling aren't in the new interface, while digging into the old one is becoming harder and harder, with more clicks every revision. Just yesterday I went to add a firewall rule, and even that was hard to find. I'm an MCSE, so you would think I could change a firewall setting!

          It definitely doesn't feel like an improvement, it feels like they're forcing an abstraction. Plus they've recently removed the Control Panel when you right-click the Start menu. That was about the only 'Safe Space' left.

          I feel for anyone who has to support Windows 10. They're definitely not listening to the users. This is the Fisher-Price interface all over again, but there's no stopping it this time around.

        • nikanj 1655 days ago
          See a few comments up this very comment chain. They’re hidden because they look dated, regardless of their usability
        • gexla 1655 days ago
          Let me have their pretty UI. If you want to geek out, there's Powershell.
          • LIV2 1654 days ago
            PowerShell is shit for discovery & not useful at all for the majority of Windows consumers, i.e. regular users. You shouldn't need to learn PowerShell to switch your audio output
      • pantalaimon 1655 days ago
        Not to mention that the 'old' control panel has a lot more options!
      • rkagerer 1654 days ago
        Agreed. Most of my time trying to adjust settings in Windows 10 is spent trying to locate the 'old' control panel dialogs. What's sad is that all their attempts at redesigns and tweaks have not resulted in something I want to use more.
      • celticmusic 1655 days ago
        same here. I still prefer the old control panel. They hid everything behind "usability". I'm sure for a lot of people it's more usable, but for me it's just more painful.
      • samus 1654 days ago
        Funny, I always thought the same things about the control panel of various other window environments like Gnome and KDE.
      • jug 1655 days ago
        Yes, if things keep changing around or are hard to find, that's annoying, but I see this as a separate problem from the bad pace and inconsistency. Nothing here in particular hinders good, well-researched design.
        • y4mi 1655 days ago
          I could find stuff in the old panel without knowing where it is. I can't in the new one. Either because the options are missing entirely or because they're too well hidden.

          My biggest gripe is not being able to directly open the old settings... Always having to open "control panel" and then clicking on the correct category gets old fast, but is nonetheless more effective than fruitlessly searching in the new panels until I give up.

    • dvfjsdhgfv 1655 days ago
      I can sum up the situation with two sentences:

      I hate writing Win32 apps, but I love using them. I love writing .NET apps but I hate using them.

      The problem with Win32 apps is that although they look ugly, in general they're blazing fast and work basically everywhere. The problem with .NET apps is that although they're aesthetically pleasing, they require the relevant version of the framework to be installed. I remember fighting with a .NET-based driver installer that insisted on installing an ancient version of .NET that Windows refused to install. Win32 support is still universal.

      • gameswithgo 1655 days ago
        With .NET Core 3, you can deploy them standalone, and performance is much improved. So you might have a happy middle ground available now.

        And when I say performance is improved, there are two fronts to that. First, the compiler/jit are just better now, core library functions are sped up a ton, so just running ported old .NET Framework code will be a lot faster. But also, C# has added new features in recent years that give you further control over memory usage/layout, and now even SIMD Intrinsics, so the performance ceiling is much higher if you want to optimize your own code.

        • huzaif 1654 days ago
          Piggybacking on this a bit..

          You can now publish a .NET Core 3.0 app that is: 1. Ready to Run: a partially AOT-compiled app for the platform. 2. Trimmed: tree-shaken down to only the necessary bits of the framework. 3. Single File: a self-contained app.

          This combination of features provides a really decent balance between size, performance and distribution.
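          The three options above map to a handful of MSBuild properties. A hedged sketch of what the relevant .csproj fragment looks like (the runtime identifier is just an example; the property names are the documented .NET Core 3.0 ones):

          ```xml
          <!-- Illustrative .csproj fragment combining the three publish features -->
          <PropertyGroup>
            <RuntimeIdentifier>win-x64</RuntimeIdentifier>   <!-- example RID -->
            <PublishReadyToRun>true</PublishReadyToRun>      <!-- 1: partial AOT -->
            <PublishTrimmed>true</PublishTrimmed>            <!-- 2: tree shaking -->
            <PublishSingleFile>true</PublishSingleFile>      <!-- 3: one self-contained file -->
          </PropertyGroup>
          ```

          `dotnet publish -c Release` then produces the trimmed, single-file, ready-to-run output.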

      • platz 1655 days ago
        Core apps can be deployed standalone
    • mumblemumble 1655 days ago
      This oldie seems relevant: http://moishelettvin.blogspot.com/2006/11/windows-shutdown-c...

      1 year to build Windows Vista's new shutdown menu, with a total of 43 people somehow involved in its design. If this is how Windows developers still have to work, it's no wonder.

      That said, I would readily believe that the roots of this story have more to do with the complexity of modern OSes than any particulars of Microsoft culture. In terms of software quality, OS X peaked about 15 years ago, and has been steadily getting flakier ever since. And the Linux community has been struggling for virtually its entire existence, without success, to deliver a well-polished desktop experience.

      • crazysim 1655 days ago
        ChromeOS? Is that a success? Lots of concessions there I guess though.
    • pjmlp 1655 days ago
      Windows userland has been progressively rewritten in COM since Vista, as the Windows team picked up the Longhorn ideas and reimplemented them in COM after winning the WinDev vs DevTools politics; I doubt they will change route.

      The best we can hope for is that with the new AOT/JIT infrastructure, it gets more equal footing with C++/WinRT in platform APIs.

      After all, it has won the UI; MFC is legacy, and XAML/C++ doesn't have much uptake, as most companies would rather do C++/COM and then consume it from .NET for the UI part.

      • Crinus 1655 days ago
        > Windows userland is being rewritten in COM since Vista

        I am not sure "rewritten" is the proper word, as this implies replacement, but in reality nothing gets replaced (at least as far as the user/developer-facing stuff goes) and the new stuff is added on top of (or alongside, depending) the existing stuff.

        Also I'm not sure if the whole "exposing new stuff in COM" thing is a good idea overall. The classic Win32 GUI APIs might not provide much functionality on their own, but the plain C APIs make them both understandable by pretty much everyone and fairly easy to use from pretty much anywhere. I haven't seen much COM use in the wild since the turn of the millennium (outside of the new UWP stuff in the Windows Store, but really UWP/Windows Store seems to be a different world of its own that got itself attached to the regular desktop Windows world... and even then I do not really know of anyone who uses anything from there). The most common comment I read about COM is using it to control Office, but that's about it (technically the DirectX APIs are also COM, but in practice you do not really use much of it).

        As an example, Windows had a COM API for creating ribbon-like interfaces ever since Windows 7/Vista SP2 yet pretty much every framework out there (including MFC) reimplements their own ribbon using Win32 primitives instead of using the native one.

        • snagglegaggle 1655 days ago
          COM is kind of like GObject and something like it has a place.

          COM usage is fairly limited because using it easily requires compiler support, which only very recently was included in the Community editions of Visual Studio. You can also access COM via language bindings; most .NET code you write is COM-enabled, for example, and is accessed from non-CLR code as COM. MinGW has stuff that lets you use and generate COM classes, but it's not great. Improving it would help FOSS dev on Windows.

          There are other languages that use COM also. The lead designer of Delphi was poached by Microsoft to design .NET.

          • pjmlp 1655 days ago
            Anders went to Microsoft to design J++. .NET only came up a couple of years later, as a follow-up to Sun's lawsuit.

            Ext-VOS design document clearly refers to Java and J++.

            Secondly, I have been developing for Windows since Windows 3.0 and wonder which compiler support features you mean. Even the old Express editions had the required Windows SDK tooling for COM.

            • snagglegaggle 1654 days ago
              No, the guy I'm thinking of was hired on about the time .NET was being started. This was after Borland.

              The tools may have just been for COM object generation. I found the option to generate COM code in VS2017 one day, followed up on it, and found it was the first time they had been released. The bare headers were always there but the code generators were not, at least in most current releases.

      • zaphirplane 1655 days ago
        Hey, for people who don't follow Windows internals and politics closely, what did DevTools champion? What are the Longhorn ideas that are getting into Windows?
        • pjmlp 1655 days ago
          Long story short, it's one that you kind of have to assemble from blog posts, forum comments, between-the-lines articles, and so on.

          Longhorn was to be built on top of .NET and there were lots of issues, regarding performance, stability and what not.

          Apparently it was more a thing of WinDev (owner of Windows and C++) and DevTools (owner of VS and .NET) each pulling to their side instead of collaborating.

          As proven later by Singularity and Midori, it was an achievable goal when everyone on the team believed in the end goal.

          Instead Sinofsky's team took over, several ideas were dropped, others were reborn in .NET 3.x like WPF, and Vista came to be.

          Many of the .NET (Longhorn) libraries reappeared in Vista and Windows 7 as COM libraries.

          See Project Hilo sample, https://blogs.msdn.microsoft.com/jasonz/2010/06/06/project-h...

          Later WinRT was born, which is an improvement over COM, using IInspectable alongside IUnknown, and .NET Metadata instead of the old COM type libraries.

          Curiously similar in concept to Ext-VOS, which was in the genesis of .NET, before CLR came to be.

          https://blogs.msdn.microsoft.com/dsyme/2012/07/05/more-c-net...

          So WinRT (Win 8.0), eventually became UAP (Win 8.1) and now it is UWP (Win 10).

          Contrary to what many Windows 10 haters believe, UWP isn't tied to the store, and each Windows 10 release brings more "legacy" Win32 APIs into UWP.

          It's the next-generation COM-based runtime, with interoperability across C++, .NET, JavaScript, Delphi and every other language capable of understanding COM projections.

          Ah, and thanks to the contributions initially started by Kenny Kerr, who ended up joining the Windows team, the C++/CX language extensions got replaced by C++/WinRT, a C++17 framework for UWP.

          • zerkten 1655 days ago
            I think Sinofsky was still in Office when this was happening. He comes into the picture with Windows 8 since Microsoft wanted Windows to be as successful as Office had become. Jim Allchin (https://en.wikipedia.org/wiki/Jim_Allchin) was in charge at the time of Vista.

            Office had stuck with the regular Windows platform and was focusing on UI and on getting into a regular release rhythm under Sinofsky. People forget, but Office releases used to be a real crapshoot, and Microsoft needed to move to a true subscription model at some point (the original activation wasn't it). To do this you need several years of regular updates before you can even get close to what's needed for a subscription business. Sinofsky seems to have been a real success here, but not so much in Windows.

            Looking back, the expectations for Vista could never meet the reality. Windows was fundamentally not ready as a product or team to deliver on any of the promises. This was not much different from Cairo (https://en.wikipedia.org/wiki/Cairo_(operating_system)) as it turned out. If the focus had been on fundamentals like regular releases, they might have achieved some of what was promised, with the right compromises. I also wouldn't discount some of the hardware changes in the period: we were getting multi-core, and there was a transition to 64-bit.

            • pjmlp 1655 days ago
              Thanks for the correction regarding Sinofsky.

              Undoubtedly there were multiple issues at play there.

              However, having been part of such "rebuild the world" projects, I do believe politics also played an important role, as many of the signs that reached the outside world feel similar.

            • teddyuk 1655 days ago
              I'm not sure that is right. Sinofsky was brought in to fix Windows dev after the Vista fiasco; he did a good job on Win 7, then went off piste with the Metro UI in Win 8, and that was it for him.
      • fauigerzigerk 1655 days ago
        One of the key problems of our time (besides antibiotic resistance, climate change, lack of affordable housing, poverty, the rise of nationalism and totalitarian ideologies, mass surveillance, etc.) seems to be the impossibility of reliably generating reference-counted code from code written for a tracing GC.
        • WorldMaker 1654 days ago
          But why should reference counting code be the "default" over a tracing GC in the first place? If the "problem" is embedding tracing GC inside of reference counted worlds, but tracing GC itself has little to no problem embedding reference counted worlds, why not invert the stack?

          Lisp machines in the 60s were doing tracing GC in hardware. It's not like we don't have the technology to move tracing GCs further down the stack. Sure, reference counting is "simpler", but it's also easier to get wrong. It's harder to get tracing GC wrong. (Sure, we can debate for days whether it's harder to get tracing GC performant, but it can and has been done, including, as mentioned, directly in hardware.)

        • pjmlp 1655 days ago
          Nah, that key problem will eventually be sorted out with a couple of generational changes.
          • fauigerzigerk 1655 days ago
            I'm serious (even if it may not sound like it). COM is a fantastic platform, because it provides simple and efficient interoperability between language runtimes.

            But code written for tracing GCs cannot be automatically transformed into COM's (or any) reference counting scheme. And that's why we have this endless tug of war between those who want to build on top of COM versus those who want to build on top of .NET.

            It's not just a Microsoft problem either. Our industry has been wasting a ton of resources on duplicating everything for a bunch of different languages/runtimes and it's all either because of GC incompatibility or license incompatibility.

            What I'm saying is that it's not a minor problem, even if there are more important problems in the grand scheme of things.

            • Const-me 1655 days ago
              I’ve been writing C++ COM objects for use in .NET for years now. Some are for performance-critical parts, like manually vectorized SIMD code. Another use case is interop heavy code, using Direct3D, media foundation, Eigen, other C and C++ libraries.

              It works even on Linux: https://github.com/Const-me/ComLightInterop On Linux I also have these kinds of parts, especially the interop: libraries like drm, kms, gles, udev, and many other kernel APIs are only usable from C or C++.

              Had very few issues with IUnknown-based interop. On Linux, my interop library retains .NET objects with GCHandle while they are referenced by C++ code. On Windows, it’s done automatically by the runtime. C++ implemented objects are automatically retained on both OSes.

              What exactly do you think is the hard problem with such interop? Leaks due to refcount cycles are possible, but I'm not sure it's such a huge deal. There are many good reasons to keep the interop API surface small, with obvious "who owns what" relations.
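              For readers unfamiliar with the model being discussed, here is a toy sketch of IUnknown-style manual reference counting (illustrative only, not the actual ComLightInterop code): whoever takes a reference must release it, and the object frees itself when the count hits zero.

              ```cpp
              // Toy sketch of IUnknown-style manual reference counting.
              // The side that calls AddRef must eventually call Release,
              // and the object deletes itself at count zero.
              #include <atomic>
              #include <cassert>

              struct RefCounted {
                  std::atomic<int> refs{1};     // creator holds the first reference
                  static inline int alive = 0;  // live-object counter, for demonstration only
                  RefCounted()  { ++alive; }
                  ~RefCounted() { --alive; }
                  void AddRef()  { refs.fetch_add(1); }
                  void Release() { if (refs.fetch_sub(1) == 1) delete this; }
              };

              int main() {
                  auto* obj = new RefCounted(); // count = 1
                  obj->AddRef();                // a second owner, count = 2
                  obj->Release();               // count = 1, still alive
                  assert(RefCounted::alive == 1);
                  obj->Release();               // count = 0, object destroys itself
                  assert(RefCounted::alive == 0);
              }
              ```

              The "who owns what" discipline mentioned above is exactly the discipline of pairing every AddRef with a Release.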

              • fauigerzigerk 1654 days ago
                >What exactly do you think is a hard problem about such interop?

                The problem is that .NET libraries cannot be used from C++ or Rust without depending on the entire .NET runtime, including its tracing GC, which is obviously extremely undesirable.

                And that is probably why Microsoft dropped their plans to write Windows userland components in .NET and decided to write them in C++ instead (exposing COM interfaces).

                As I see it, the key theoretical issue is that neither CIL nor arbitrary code written in languages like C#, Java, JavaScript or Python contains enough static information to generate purely reference counted code.

                You're seeing the same on Unix. All truly reusable code is written in languages that don't require a tracing GC. Everything that is written in other languages is duplicated for each language/runtime environment.

                • Const-me 1654 days ago
                  > cannot be used from C++ or Rust without depending on the entire .NET runtime

                  You can compile .NET into native code if you want: .NET Native, Unity's IL2CPP, etc. These aren't some research projects; every Unity3D game shipped on iOS compiles with IL2CPP, and most Windows Store apps are built with .NET Native. The GC is probably still included in the output binary somewhere, but it does much less work than usual.

                  > which is obviously extremely undesirable

                  Depends on use case. You wouldn’t want to bring the runtime for a small dependency, but for a large and complex one, like PDF writer or embedded web server, it can be OK. The 2.2 runtime binaries are 25-28MB depending on the platform, not a huge deal. These days, people bring 150MB+ Electron dependency just to render a few strings and colored rectangles :-)

                  > the key theoretical issue is that neither CIL nor arbitrary code written in languages like C#, Java, JavaScript or Python contains enough static information to generate purely reference counted code.

                  C# and Java are strongly typed and have a lot of static information. I think the main reason why it's impossible to reliably recompile these languages into non-GC code is circular references.
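                  The circular-reference point is easy to demonstrate with any plain refcounting scheme; a minimal C++ sketch using std::shared_ptr as a stand-in:

                  ```cpp
                  // Why refcounting alone can't reclaim cycles: two nodes pointing
                  // at each other never reach count zero. A tracing GC would collect
                  // them; a refcount scheme needs extra lifetime information, such
                  // as knowing which edge should be weak.
                  #include <cassert>
                  #include <memory>

                  struct Node {
                      std::shared_ptr<Node> next;   // strong edge
                      static inline int alive = 0;  // live-node counter, for demonstration
                      Node()  { ++alive; }
                      ~Node() { --alive; }
                  };

                  int main() {
                      {
                          auto a = std::make_shared<Node>();
                          auto b = std::make_shared<Node>();
                          a->next = b;
                          b->next = a;          // cycle: a <-> b
                      }                         // a and b go out of scope...
                      assert(Node::alive == 2); // ...but both nodes leak

                      {
                          auto a = std::make_shared<Node>();
                          a->next = std::make_shared<Node>();   // no cycle
                      }
                      assert(Node::alive == 2); // the acyclic pair was freed; only the leaked pair remains
                  }
                  ```

                  Deciding which edge to make weak is exactly the ownership information that CIL bytecode doesn't carry.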

                  • fauigerzigerk 1654 days ago
                    >C# and Java are strongly typed and have a lot of static information.

                    They may have a lot of information, but they don't have the static information required to generate reference counted COM. It's information about the lifetimes and ownership of objects.

                    >I think the main reason why it’s impossible to reliably recompile these languages into non-gc code — circular references.

                    Exactly. The information to resolve those is missing.

                    >Depends on use case.

                    This was a debate about Microsoft's decision not to build Windows userland on top of .NET after all. So it would have affected all use cases of all Windows developers.

            • vbezhenar 1655 days ago
              Maybe I don't understand something, but how does this differ from, e.g., handling file handles? For example, in Java you're supposed to use try-finally to close a file handle. You should do the same with reference-counted objects: just manage their lifetime manually. And finalizers will close it for you if you forget (that should be considered a bug, but not a very serious one).
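              For comparison, the deterministic pattern described here (release at a well-defined point, rather than waiting on a GC) is what C++ destructors provide automatically; a small illustrative sketch:

              ```cpp
              // Illustrative sketch of deterministic lifetime management: the
              // destructor plays the role of Java's try-finally block, releasing
              // the resource at a well-defined point even on early return.
              #include <cassert>
              #include <cstdio>

              struct FileHandle {
                  FILE* f;
                  explicit FileHandle(const char* path) : f(std::fopen(path, "w")) {}
                  ~FileHandle() { if (f) std::fclose(f); }  // always runs at scope exit
              };

              int main() {
                  {
                      FileHandle h("/tmp/demo.txt");        // acquired here
                      assert(h.f != nullptr);
                  }                                         // released here, deterministically
              }
              ```

              Reference-counted COM objects follow the same shape, with Release taking the place of the destructor call.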
              • fauigerzigerk 1655 days ago
                It doesn't differ from that, but in languages that assume the presence of a tracing GC, that pattern is only used for resources other than memory.

                And therefore you can't (as far as I know anyway) compile arbitrary C#/Java/JavaScript libraries down to reference counted COM in order to make those libraries available to C++ or Rust code.

                • pjmlp 1655 days ago
                  You can also use that pattern for memory management in .NET.

                  For example stack allocations and unmanaged memory handles.

                  .NET COM interop take care of exposing .NET classes to other COM aware languages.

                  https://docs.microsoft.com/en-us/dotnet/standard/native-inte...

                  • fauigerzigerk 1655 days ago
                    You can, but in order to translate arbitrary .NET code into reference counted COM you would need a guarantee that _all_ code is written like that or that you can statically infer the correct reference counting for _all_ code that isn't written like that.
                    • youdontknowtho 1654 days ago
                      I haven't seen the reluctance to use the runtime in interop scenarios. Is this just a personal interest or do you have a particular use case? I'm genuinely interested.
                      • fauigerzigerk 1654 days ago
                        This debate is about Microsoft's decision some 14 years ago to build Windows userland components on top of C++ COM technology instead of .NET as some within Microsoft had wanted.

                        Those inside Microsoft who didn't want to force everyone to use a rather heavyweight runtime including a tracing GC won the debate. It would have affected almost everyone, not just those with use cases that can tolerate the significant additional resource usage.

                        I merely made the observation that the difficulty of inferring correct reference-counting information from C#/Java code is what caused this rift, and a lot of unproductive duplicated effort in our industry.

                • vbezhenar 1655 days ago
                  Off-topic, but I think Rust's memory model makes it a perfect consumer of COM services :) Maybe it'll be integrated into other languages as an additional type system.
          • simonh 1655 days ago
            Said someone in the 1960s ;)
      • ryuukk_ 1655 days ago
        > After all, it has won the UI

        LOL, you are stuck in 2004 my friend, .NET is a huge bloated piece of garbage technology stuck trying to catch up with Java

        They had the balls to show us a simple calculator app written in .NET Core 3.0 and WPF that was JUST 140 MB

        Now please, stop trying to make windows even more bloated, and maybe take a look at macOS, and understand why NOBODY WANTS JIT garbage collected shit in UI

        Windows became a huge pile of dogshit because people like you promote the WIN32 and now UWP enterprise bloatware

    • shawnz 1655 days ago
      The old control panel can never be removed, because a significant amount of legacy software integrates itself into the old control panel and therefore wouldn't be usable if they removed it.
      • teddyuk 1655 days ago
        They also contain decades of bodges and quick fixes to get stuff to just work - working out what they need to keep and throw away must be a nightmare.
        • cptskippy 1655 days ago
          Microsoft is gradually porting what's necessary out of the Control Panel in a controlled manner. Rather than offering 1:1 functionality with existing items in the Control Panel, they're examining and rethinking everything. The new Settings app is a cross-platform, legacy-free future.

          Certain elements, like Phone and Modem, will likely never be ported but will remain forever in the Control Panel. And the Control Panel will probably be a platform-specific feature of x86/AMD64-based Windows installs.

    • mtgx 1655 days ago
      Nadella has made it pretty clear, whether explicitly or implicitly, that Windows is a dead-end product for them. You could've also deduced this from how they selected their CEOs (cloud guy vs product/software guy or even a sales guy like Ballmer was).

      When you've already decided a product is a dead end, you ignore it and don't spend as much on QA. You also try to squeeze your customers for all they're worth, even if that increasingly pisses them off, because you know the product has no future anyway, so you might as well maximize revenue at all costs - which is also what Nadella has been doing with all the Windows 10 tracking.

    • donedealomg 1655 days ago
      There are rumors that the Windows NT kernel will be replaced in the future with a Linux kernel, with the Windows desktop code running on top of it.

      Can't wait for it. I know 2020 is going to be the year of the Linux Desktop... but....

  • jeswin 1655 days ago
    Given that our deployment platform was Linux (for a .Net Core 3.0 project), I was determined to use Linux and VS Code for development. That was a fail; the verbose nature of C# and the Framework APIs make it impossible to be productive without significant help from a full-fledged IDE like Visual Studio. One might think the verbosity can be reduced by clever coding (and adopting a functional style), but that's not so easy. If you're writing a Web App/Service, the Framework APIs and most popular libraries will nudge you strongly to use Dependency Injection throughout the app and implement everything as a class and to extract interfaces out of it.

    My wishlist for C# is short:

    - Allow functions outside of classes

    - Structural typing

    - Files and directories already provide excellent namespacing. Adopt it instead of forcing namespace declarations

    Btw, Linux as a deployment platform works quite well.

    • SideburnsOfDoom 1655 days ago
      > Allow functions outside of classes

      Why? You can put them in a static class, and if C# ever gets this feature, it will just be syntactic sugar for a hidden static class.

      But that would mean optimising for "hello world" and other small script scenarios, which isn't a goal of the language design.

      > Adopt (files and dirs) instead of forcing namespace declarations

      Likewise.

      > Structural typing

      Like "dynamic" in C# 4? Been there, done that, and it didn't really take off. It's useful for some interop cases and that's about it. Strong types with type inference are more suited to the language design.

      https://stackoverflow.com/questions/2690623/what-is-the-dyna...

      https://docs.microsoft.com/en-us/dotnet/csharp/programming-g...

      • goto11 1655 days ago
        Structural typing is different from dynamic typing. C# supports structural typing in the form of tuples and anonymous types. But you can't return an anonymous type from a method. This is a significant limitation.
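The limitation is easy to demonstrate (names here are illustrative): a tuple type can be written in a method signature, but an anonymous type has no name you could put there.

```csharp
static class Shapes
{
    // Tuples are structural: any (double, double) matches, and the
    // type can appear in the method signature.
    public static (double X, double Y) Midpoint((double X, double Y) a, (double X, double Y) b)
        => ((a.X + b.X) / 2, (a.Y + b.Y) / 2);

    // An anonymous type can't be named, so it can't be a return type:
    // public static ??? MakePoint() => new { X = 1.0, Y = 2.0 };
    // The best you can do is return "object", which loses the members.
}
```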

        As for "free" functions: C# already recognizes the usefulness of this in the "using static" directive. This is useful for a lot more than "hello world".

        • moron4hire 1655 days ago
          You can't return an anonymous type, but the syntactic difference between an anon type and a tuple is two characters: parens instead of curly braces.
          • Faark 1654 days ago
            And the little "new" in front of it. I actually preferred the curly-braces version and would have loved for them to improve it instead of creating a replacement.

            And well, I'd expect them to get it right eventually. It is at least their third evolution of that feature... there was a System.Tuple as well. Wonder if they'll ever come out and officially deprecate their earlier attempts.

            • SideburnsOfDoom 1654 days ago
              > Wonder if they ever come out and officially deprecate their earlier attempts.

              That (mostly) doesn't happen.

              I would say that .NET Core was the sole attempt to deprecate a lot of the framework. And most of it came back anyway; see the title of the parent article.

            • goto11 1654 days ago
              I don't think C# has ever deprecated language features. You can still use the old "delegate{}" syntax even though we have the more concise arrow syntax.
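Both forms still compile side by side; nothing was ever removed:

```csharp
using System;

static class DelegateSyntax
{
    // Old anonymous-method syntax from C# 2.0, still supported:
    public static readonly Func<int, int> OldStyle = delegate (int x) { return x * 2; };

    // The more concise lambda (arrow) syntax that superseded it:
    public static readonly Func<int, int> NewStyle = x => x * 2;
}
```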
        • manigandham 1654 days ago
          Why are "free" functions useful? What actual real difference does it make compared to putting an extra word in front to call the method?
          • goto11 1654 days ago
            You don't need to put the extra word in front since "using static" was introduced. But you still need to define a useless static class.
      • jeswin 1655 days ago
        > But that would mean optimising for "hello world" and other small script scenarios, which isn't a goal of the language design.

        Fairly complex programs can be written with just plain functions, all of them elegantly starting at column 0 instead of being indented inside namespaces and classes. It works especially well with directory-structure-based namespaces, e.g. Microsoft's own TypeScript.

        > Like "dynamic" in C# 4 ?

        No not dynamic, I wouldn't use that very much either. https://en.wikipedia.org/wiki/Structural_type_system

        • nbevans 1655 days ago
          It sounds like you are using the wrong language. F# is what you want to be using.
        • SideburnsOfDoom 1655 days ago
          > No not dynamic

          Ok, I see that structural typing is a third type system option.

          But honestly, I don't want C# and .NET to contain every feature and paradigm ever created. It would be bloated and hard to use. Already there is significant legacy and different ways of doing the same thing, and it's likely to get even worse.

          If the language that you ideally want is built around structurally typed free-floating functions, then it probably isn't C#. It probably exists, it's likely possible on the .NET platform, but it doesn't work and think like C#.

      • pjc50 1655 days ago
        > But that would mean optimising for "hello world" and other small script scenarios, which isn't a goal of the language design.

        That's what the PowerShell integration is for.

    • gtsteve 1655 days ago
      If you'd like to develop on Linux, I suggest trying Jetbrains Rider. It's as good as VS in most cases, and better in several. I've used it as my primary IDE for most of this year.
      • jmkni 1655 days ago
        +1 for Rider. It's basically ReSharper.
        • pjmlp 1655 days ago
          Including resource usage! Like anything on top of IntelliJ.
          • lultimouomo 1655 days ago
            JetBrains stuff does feel sluggish and occasionally hangs on my 2016 XPS-13 with 8G of ram. It gets the job done, but it's not pleasant.

            On my new Ryzen 3600 with 32G of ram it runs smooth. So if you go for JetBrains, I definitely advise not to cheap out on the hardware.

            Rider on Linux was very good for me, working on C# azure functions.

            • pjmlp 1655 days ago
              32G?!? Where does one find such employers???

              Not to mention how crazy it sounds having 32GB for an IDE.

              Visual Studio, Netbeans and Eclipse run perfectly fine with 8 GB.

              • simonh 1655 days ago
                I had a 32GB dev machine in my last job at a bank. 8GB is barely enough to run a full-on IDE, but if you also need to run Wireshark, Geneos, Outlook, big-ass Excel sheets, several browsers with over a dozen tabs each, a bunch of communications tools, other office apps etc., I'd regularly go above 20GB memory usage.

                I was working app support really, with a fair bit of dev work as well, so needed to run a ton of different tools.

                • avgDev 1654 days ago
                  I'm not even at a tech company and requested 32GB, as I sometimes have VS and Android Studio open, running mobile emulators and checking what is going on at the API end. Add VMs to the mix and one really needs 32GB. My company had no issues providing the RAM. Plus, it is insanely cheap right now.
              • lultimouomo 1655 days ago
                I am self-employed, and invested the lump sum of 800€ (+ a screen that I already had) in a reasonably beefy desktop machine. I'm not saying that 32G is the minimum required to run JetBrains IDEs well; I'm saying 8G is not enough and 32G is definitely enough to run them alongside a bunch of containers and other stuff.

                I agree that JetBrains stuff is resource hungry (maybe excessively so); to me it is still worth using it for Java, C#, Python, iOS and Android. For C++ I prefer Qt Creator (though I must say that the idea of switching to CLion and having a consistent IDE experience across all platforms that I work on is tempting).

              • jmkni 1655 days ago
                In my current role, I was given a laptop with 8GB. An additional 24GB mysteriously found its way in there after a couple of days...
              • MrGilbert 1655 days ago
                It depends on your use-case.

                Add Docker, VM, development database server etc. to the bill, and a 32 GB machine is pretty reasonable for a dev.

              • stoobs 1655 days ago
                All the Dev machines where I work have been 16GB for years, I wouldn't be surprised if we jumped to 32GB in the next hardware refresh.
              • rafaelvasco 1655 days ago
                These days 32GB is starting to become more of a necessity as apps take more and more RAM. And if you're gaming, 8GB doesn't cut it anymore for some games;
          • jen20 1655 days ago
            On a brand new machine (just verified), Visual Studio cannot keep up with my typing - a problem that has existed since the VS .NET days. IntelliJ may take a little extra time to start (though, not a huge amount more), but it has never suffered from that problem, even on much lower end boxes.
          • thrower123 1655 days ago
            One of the nice things about Rider, though, as opposed to ReSharper, is that it can be run as 64-bit, and you can control the size of the Java memory heap, rather than being jailed inside the 32-bit devenv.exe process.

            Visual Studio with Resharper chokes and dies pretty badly when you get into large solutions with 100k+ LOC and many projects. For years I've been seeing problems when memory usage gets close to 2GB, and the editor and intellisense starts getting laggy and unresponsive. For whatever reason, it is really bad if you have web applications with JS in the mix, to the point where I have resorted to just running WebStorm in parallel to do any front-end work.

            • teddyuk 1655 days ago
              Something I have noticed with VS 2019 is that it creates lots of processes, presumably to try and give itself more than a single 32-bit address space.
      • daniel-levin 1655 days ago
        Seconded. As another commenter on this thread said, make sure you have a powerful enough machine. I work with Angular a lot. For me, Rider's killer feature is Angular integration [1]. It is far more powerful than Visual Studio's.

        [1] Other paid-for Jetbrains products such as IntelliJ Ultimate have the same integration

    • jayd16 1655 days ago
      The closest you'll get to free-floating functions is methods in a static class. The static classes essentially become namespaces.

      Then with static using directives, you don't need to spell out the namespaces.

      https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

      You can even put every global function into a single class but still organize your code across multiple files by making them all part of the same partial class.

      I personally wouldn't organize my code this way, but it seems like you can achieve what you want today with minimal effort.
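A minimal sketch of that arrangement (the class and namespace names are made up; in a real project the two partial declarations would sit in separate files):

```csharp
namespace MyGlobals
{
    // Part 1 -- imagine this in StringFns.cs
    public static partial class Fns
    {
        public static string Shout(string s) => s.ToUpperInvariant() + "!";
    }

    // Part 2 -- imagine this in MathFns.cs; the compiler merges both parts
    public static partial class Fns
    {
        public static int Twice(int x) => x * 2;
    }
}

// A caller adds "using static MyGlobals.Fns;" and can then call
// Shout("hi") or Twice(21) with no class prefix at all.
```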

      • jeswin 1655 days ago
        > Then with static using directives, you don't need to spell out the namespaces.

        Static using directives! Certainly a useful tool, thank you. I had been away from the .NET universe for a while and I think I have some more catching up to do. Some of the newer changes in C# help a lot with reducing code heft, even though they make the language more complex.

        However, the Framework seems to be headed in the opposite direction when it comes to simplicity. Here's the IOptions pattern, for example: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/co...

        • noderat 1655 days ago
          I really wish the documentation for things like DI, IOptions and ILogger were moved out of the ASP.NET documentation. Trying to decipher from the documentation how to use these in a non-ASP.NET project is needlessly complicated.
        • thomasz 1655 days ago
          The ASP.NET configuration thing is complicated but powerful. If you don't want that, you can just use Environment.GetEnvironmentVariable and be done with it.
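For the simple route, a sketch (the variable name and fallback string are hypothetical):

```csharp
using System;

static class SimpleConfig
{
    // Read a setting straight from the environment, with a fallback
    // for local development when the variable isn't set.
    public static string ConnectionString =>
        Environment.GetEnvironmentVariable("MYAPP_DB_CONNECTION")
            ?? "Host=localhost;Database=dev";
}
```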
          • shhsshs 1655 days ago
            Plus it’s barely any extra work to load settings from environment variables. And it lets you write code agnostic to where the settings are coming from, and then choose to load them from environment variables, a file, hard coded values, etc.

            Makes testing easier too because you don’t have to write a fixture around your settings. Just build the class with known setting values without writing code purely for testing.

      • bob1029 1655 days ago
        I have found that large partial classes declared across many physical source files create lots of issues with IntelliSense throughput. At 50+ partials, you are looking at solid 10+ second delays in the IDE between when you typed something naughty and when you are informed of it. This behavior has been replicated across many different computers and versions of Visual Studio, so I think it's just a fundamental limitation.

        We have some solid use cases for free floating functions (e.g. business rules that need to be able to arbitrarily invoke each other) which we ultimately decided to scope to one public function per class. This allowed for some other interesting advantages in terms of consolidating tightly-coupled business logic that was formerly spread across the codebase (e.g. items used only by the one function - mappers, request/response models, private utility methods, etc). We even explicitly duplicated out some logic in order to enable completely isolated verticals of BL.

      • rafaelvasco 1655 days ago
        Don't get why people want floating functions. It makes it easier to do bad coding and architecture imo. At least you must namespace them;
        • goto11 1655 days ago
          Consider something like the Math class. In practice it just works like a namespace for math-related functions. The only reason to have a class is because the language requires it.

          It would certainly be cleaner without having to define a useless class.

          • rafaelvasco 1653 days ago
            Well, a static Math class always made perfect sense to me. Never bothered me. Even after I discovered languages that allow you to define functions anywhere, I still prefer static classes; as long as there's no performance penalty, I'm good. In fact, static class method calls are really fast;
          • jayd16 1654 days ago
            Yeah, except I like that Log() isn't already a reserved function name.
            • goto11 1654 days ago
              I don't think anyone is suggesting the function names should be reserved names. They would still exist in a namespace.
              • dragonsngoblins 1653 days ago
                I don't really understand how this is significantly different than having a static class with the function
    • surban 1655 days ago
      > My wishlist for C# is short: [...]

      F# provides all of them, is shipped with .Net Core 3.0 and also works great on Linux.

      • tonyedgecombe 1655 days ago
        It doesn't give you structural typing, it's still nominal:

        https://stackoverflow.com/a/3137561/57094

      • jeswin 1655 days ago
        Works for personal projects and is certainly a joyful experience. But recruiting developers is near impossible - which is key for the kind of projects/companies I work with.
        • jen20 1655 days ago
          I've heard this before, and frankly, it is not something I recognise.

          A few years back, I was the VP of Engineering at a (now well-known) startup that had chosen to use .NET (on Azure, which is a whole other story). Everything was written in F# by default, _occasionally_ branching out to C# or C++ if it was proven that F# was unsuitable for the job.

          We managed to hire a very large team in comparatively little time - and because people who apply for F# jobs either know it (indicating someone interested in looking forwards in the industry), or were interested in learning it (we provided training), quality was better and waste was substantially less overall than I have seen elsewhere in a rapidly scaling team.

          Despite what Microsoft may think, F# is the crown jewel of the .NET world, and it is a mistake to sideline it over fears such as this.

        • FpUser 1655 days ago
          What's the problem with asking an experienced developer to learn F#? That is, of course, assuming they don't refuse the work.
          • jeswin 1655 days ago
            Team dynamics in large companies can be quite challenging. In addition, whoever made that decision might become responsible for project delays, inability to hire, people writing bad code, destroying work-life balance etc. And not just you, everyone up the chain will get blamed for choosing a programming language with a near-zero market share.

            Generally, large companies and enterprises are resistant to change, and my advice would be to just ask for enough room to get the project executed somewhat within budget and schedule.

            • gameswithgo 1655 days ago
              At Olo, which is primarily a C# shop, people have been excited to learn F# and able to pick it up on an as-needed basis. We are pretty large and it has been fine. It helps quite a bit that it is a different language but part of the same ecosystem. It is quite a bit easier to manage than it would be adding, say, Go or Rust into the company, with entirely new tooling and libraries. For instance, a Visual Studio solution can contain C# and F# projects, and they can refer to each other.
            • FpUser 1655 days ago
              Your original point was about inability to recruit developers. That is why I asked. From my experience it was never a problem for a seasoned developer to get a grip on a new language, barring a few very exotic cases. I completely understand the other factors in enterprise.
      • Scarbutt 1654 days ago
        F# is a nightmare if you want to use ASP.NET Core.
        • akra 1653 days ago
          In a previous role some time back I used F# on top of ASP.NET Core. It isn't that bad, and people thought the code was quite clean in the end; we had devs thinking that going back to C# would be a downgrade, even with F# using vanilla ASP.NET. There are ways to mitigate the pain of the C#-specific API. We went the vanilla ASP.NET Core route (for Swashbuckle) and found that only the Startup class (which could be replaced by functions in hindsight) and the controllers (which were still clean code-wise) needed to be classes. The rest (the majority) of the program was typical F# code, with an interfaced object usually created via an F# object expression put into ASP.NET's dependency injection. It helped to separate the web controllers from the logic layer anyway and keep the ASP.NET code to a minimum. Most of the program was written in F# style, with ASP.NET used basically just as a web server for functions.

          There are just a few public examples of how to do this cleanly, so people kinda have to work it out themselves, which I think could be improved.

        • TomasJansson 1654 days ago
          What do you base that on? I have had no problem with F# and ASP.NET Core. Just look at the SAFE stack; it uses Saturn or Giraffe, which in turn run on ASP.NET. https://safe-stack.github.io/
          • Scarbutt 1654 days ago
            That these abstractions exist on top of ASP.NET Core to make things more ergonomic proves my point.
            • Horusiath 1653 days ago
              No, it only proves that C# and F# are two different languages.
    • denisw 1655 days ago
      If you install VS Code's C# extension, you should be getting autocomplete and quick fixes such as adding import statements automatically.

      https://code.visualstudio.com/docs/languages/csharp

      • wokwokwok 1655 days ago
        Oh no, don’t do that.

        The vscode integration is barely beyond autocomplete, and it uses more memory than rider does.

        The c# story for vscode still has a long way to go before it’s even close to feature parity.

        (...and yes, if you cobble together enough plugins you can get more features, but you also get more crashes and even more memory slurping. :/)

        • manigandham 1654 days ago
          What do you mean? The official C# support is from the Omnisharp plugin [1].

          Omnisharp is the official .NET cross-platform development support system and is used across many different interfaces (VSCode, Atom, Sublime, etc). It's powered by Roslyn [2], the C# compiler platform, and can use the same packages for analyzers and autofixes that VS uses. It also has official debugging support. This is way beyond autocomplete.

          1. https://github.com/OmniSharp/omnisharp-vscode

          2. https://github.com/OmniSharp/omnisharp-roslyn

          • wokwokwok 1654 days ago
            I don’t know what to say; except, yes, and the experience on atom, sublime and the other tools that use omnisharp is terrible.

            It’s not unusable, but for example, I can’t refactor my code base without it crashing, and forget any advanced features like jump to symbol or extract class. The only feature that works reliably for me is autocomplete.

            I can only speak for personal experience; omnisharp on Windows and Mac, on .net core (or heaven forbid you use something non standard like unity3d) ...

            It’s just terrible.

            Perhaps your combination of platform / project is a different story, but I try it again every few months, and bluntly, rider is just better at everything.

            Visual studio on windows is also excellent, and the tooling is usually tighter (rider usually lags when new stuff like .net core 3 comes out, and the azure function integration is a bit spotty because it uses the core tools).

            /shrug

            Tldr; my experience, on multiple platforms on multiple projects has been simply negative every time I’ve used it.

            Ymmv.

      • samuell 1655 days ago
        I can second that. It worked better on Xubuntu/VSCode than on Windows/VSCode even. Really smooth in fact, with IntelliSense/autocompletion and more just working. That said, I've only used it on small CLI apps so far.
      • james_s_tayler 1655 days ago
        It's still nowhere near the same level of experience that Rider or VS offer.
        • wayneftw 1655 days ago
          Thank goodness that VSCode is a different experience! VS and Rider are a pain in the ass to set up and maintain, and VSCode provides the essential UX better than just about anything else at this time.

          I can have VSCode installed and running with my personal settings on any OS within 2 minutes flat. I don't have to find a secret license key, login with any account, or pick through a list of 100 features in the installer. Then, once it's installed, well VSCode is just a better experience for the basic act of editing source code. Multiple cursors, quickly opening files by typing the name (without having to type a greater than sign followed by a space first every time like you do in VS), sane default keyboard shortcuts for managing/splitting documents and so on... Plus, it's got way better facilities for working with front-end web code than VS ever had and some other major features such as remote editing oh and one other little thing: it's fully cross-platform.

          And that's the reason for VSCode's wild success. Let's hope Microsoft never fucks it up by trying to glom the rest of their Azure/Microsoft Account crap onto it too much.

          Luckily, I don't have to work on anything that needs a designer anymore, like WinForms, but if I did I would certainly install VS. Even then - I think I'd only use VS for the designer and go back to VSCode for everything else. (And I love Windows and I used to love VS, having used both of them for 30 years and 15 years respectively.)

          But the VSCode team also covered a lot of ground faster than VS ever did, adding feature after feature. So, what do you want for C#? I'm certain that we'll be getting it soon.

          • com2kid 1654 days ago
            > Luckily, I don't have to work on anything that needs a designer anymore, like WinForms, but if I did I would certainly install VS. Even then - I think I'd only use VS for the designer and go back to VSCode for everything else. (And I love Windows and I used to love VS, having used both of them for 30 years and 15 years respectively.)

            The WinForms designer is wonderful. I can throw up an ugly CRUD app that works so insanely fast with it. The web-equivalents are horrible. Everyone is trying to make a rich web app to write rich web apps and IMHO they are all targeting way too low on the "technical ability" scale for their users.

            Though web is harder, since apps need to be responsive, and cross plat, and automating code gen of CSS that does that cannot be easy, and having a form designer UI expose settings for how a UI resizes itself doesn't sound like a UX challenge I'd want to tackle.

          • james_s_tayler 1654 days ago
            >Thank goodness that VSCode is a different experience! VS and Rider are a pain in the ass to setup and maintain and VSCode provides the essential UX better than just about anything else at this time

            I find it's the opposite. VSCode is just a very barebones editor with an ecosystem of plugins that may or may not be maintained. I always need to search through a list of hundreds of plugins to figure out which ones I need for a particular tech stack.

            I started using Rider this year and it's been much, much better than VS and VSCode for me. I can use it in both Windows and Linux. Search experience in Rider is my favorite part of the UX. I find VSCode clunky in comparison.

            I use VSCode when doing Vue stuff. I like it for that. But I still miss the nice search UX of Rider. Maybe I need to dive deeper to see if there is a setting or another plugin that can get me that same feeling.

          • apk-d 1655 days ago
            I've been using VSCode for C# daily for a year or two. I'm considering switching to a non-Electron editor because of how slow and sluggish it is: IntelliSense, refactoring, syntax highlighting, and the latency and frame rate of actual rendering (a huge drain on the battery if you have one). The editor doesn't (seem to) support language-integrated syntax highlighting, so ambiguous language features don't get recognized, and there are plenty of bugs (such as around the C# 7 ValueTuple syntax). The C# extension tends to freeze a couple of times a day. One feature that excels out of nowhere is search, which greps entire folders almost instantly, probably because the work is delegated to a separate (native, I presume) process. JSON-with-schema-based configuration is something more software should have. The editor also starts pretty fast, but most features are delegated to extensions which load (ultra)lazily and aren't usable for the first few (5-30) seconds.
            • wayneftw 1655 days ago
              Just curious: are you on a Mac or Windows? I never experienced any freezing at all on Linux.
              • apk-d 1654 days ago
                Windows, although my experience with VSCode on Ubuntu wasn't very smooth or stable either (worked with TypeScript for a few months a while ago).
        • rraghur 1655 days ago
          I use VSCode with the C# extension. The only thing I missed was the additional analysis and quick fixes that full VS provides. However, that got sorted out by installing Roslynator in VS Code (works on Linux too) and I couldn't be happier.
    • tonyedgecombe 1655 days ago
      I keep coming back for a look every time there is a major release, but it always seems quite cumbersome to me despite the fact that I work in C# every day. I ran through the Razor Pages tutorial yesterday and I was surprised how heavy it all felt; I was expecting something like Flask or Sinatra.

      I'm a big fan of IDEs, I couldn't imagine using C# without one.

      • thrower123 1655 days ago
        I've been quite happy using Nancy for the past several years. At the time we made the switch, it wasn't possible to use WebApi or MVC in a self-hosted OWIN web server, and our application had lifecycle constraints that made trying to run it out of IIS impractical.

        But I've come to really like the way Nancy has explicit API routes, compared to the implicit conventional style, or slapping attributes all over everything. You can avoid a lot of the magical action-at-a-distance middleware behavior that seems to be increasingly common as well.

    • louthy 1655 days ago
      > and adopting a functional style

      Although this won't solve all of your problems, if you can't use F# and want to use a more functional style, then this will help [1].

      disclaimer: I'm the author

      https://github.com/louthy/language-ext

    • mariusmg 1655 days ago
      >Btw, Linux as a deployment platform works quite well.

      What are you using in production for Linux? Kestrel + Nginx? Any problems encountered with this setup?

      • manigandham 1655 days ago
        We run Kestrel directly facing the internet serving http2 traffic across billions of requests per day with no issue. No need for Nginx.
        • deskamess 1654 days ago
          Which Linux distro are you going with? Any recommendations for/against particular distributions?
      • enlyth 1655 days ago
        I am using nginx as a reverse proxy to kestrel in production on one of my projects and it seems to be working great so far
        • Topgamer7 1655 days ago
          Yeah, I do the same; it lets me do seamless deploys. I spin up the new version, change the nginx config, then tear down the old one.
        • sword_smith 1655 days ago
          This setup is also what Google Cloud is using for their App Engine.
      • jeswin 1655 days ago
        Yes, this is what we use. So far it seems alright, and we did some load testing last week.
    • moron4hire 1655 days ago
      You can get most of the way to functions outside of a class with "using static". I often do "using static System.Math;" to have bare math functions. I also often do the same with System.Console.

      I'm not sure what your thoughts are on Structural Typing, but I've found liberal use of extension methods on interfaces to be a really useful way to get lots of functionality on a wide variety of types.

      As for your comment on namespaces, I cannot agree. Again, extension methods wouldn't be anywhere near as useful if one couldn't arbitrarily add code to any namespace without needing to adhere to a file system structure.
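      For example, a minimal sketch of what "using static" buys you (the class and values are just illustrative):

```csharp
using static System.Math;
using static System.Console;

class UsingStaticDemo
{
    static void Main()
    {
        // Sqrt, Pow and WriteLine resolve without any class prefix.
        double hypotenuse = Sqrt(Pow(3, 2) + Pow(4, 2));
        WriteLine(hypotenuse); // prints 5
    }
}
```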

    • manigandham 1655 days ago
      Are you making interfaces for everything? Just register and use the class with constructor injection. All those layers of indirection just waste time.

      What would change with functions outside of classes? You already have anonymous functions/lambdas, static classes and extension methods if it's about instantiation overhead, and local (nested) functions. Can you explain a scenario where this doesn't work for you?

      For namespace declarations, you can just use the same namespace on all classes if you want to keep it simple, and that kind of flexibility isn't available if you use file structure.
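      A hedged sketch of two of those alternatives together, with hypothetical names (`Pricing` and `ApplyVat` are made up for illustration):

```csharp
using System;

// A static class used purely as a holder for functions.
static class Pricing
{
    public static decimal Total(decimal[] prices)
    {
        // Local (nested) function: a helper scoped to this method only,
        // with no class ceremony of its own.
        static decimal ApplyVat(decimal amount) => amount * 1.20m;

        decimal sum = 0;
        foreach (var p in prices) sum += p;
        return ApplyVat(sum);
    }
}

class Program
{
    static void Main() => Console.WriteLine(Pricing.Total(new[] { 10m, 5m })); // 18.00
}
```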

      • sword_smith 1655 days ago
        It would seem to me that you wouldn't be able to mock all the classes if you followed this strategy.
        • manigandham 1655 days ago
          Well, do you really need to mock everything? And if so, how is that different from any other stack?

          Most of the time this is just "enterprise patterns" without any thought about whether it's worth it. I've seen too many small LOB apps that have 5 tiers of code for no reason.

          • arethuza 1655 days ago
            I once "inherited" a project where I counted more than thirty levels between the web pages and the send/receive on a queue to the system that actually did the work.

            Funnily enough they had a huge team (and over 30 thousand classes) and a lot of time (over two years) but they couldn't finish it.

            Edit: Should have said it was J2EE rather than .Net

          • pathartl 1655 days ago
            I agree. I've spent the past 6-9 months cleaning up a .NET MVC project and one of the first things I did was to rip out all interfaces that weren't being used the way I expect interfaces to be used.

            We have the base MVC project that references services for DI. We load in another DLL via Ninject to add customization on top of those services (custom parsing of some magnetic card swipes for different institutions we deploy to, and such). I discovered that while all of our services (~20) had an interface, only one of them was being customized/overridden. This meant that in order to do something as simple as a parameter change, it had to be changed in way too many places: the definition, the implementation, and the usages.

            Why have that extra layer? If we need to override in the future, we can add more interfaces. For now, I just want to maintain my sanity.

          • FpUser 1655 days ago
            I see it all the time when consulting for companies. Their developers just love taking relatively small things and turning them into multilayered monsters with an insane number of dependencies for code, build pipelines, and deployment.
            • gnud 1655 days ago
              I see that too - but I also see plenty of completely un-testable code, because the controller constantly reads from AppSettings, or creates a new database connection directly, or similar issues.

              For small applications, instead of going full on DI, I sometimes just make a few simple public static properties, that I initialize on program startup. Then you can initialize them differently in your test harness. This is a lot easier to understand if you're not an expert in the chosen DI framework or app framework.
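              A minimal sketch of that pattern, assuming a single `Func`-typed setting reader (all names here are hypothetical):

```csharp
using System;

// "Poor man's DI": a static dependency assigned once at startup and
// replaced with a fake in the test harness.
public static class AppServices
{
    public static Func<string, string> GetSetting { get; set; }
}

public class Greeter
{
    // Reads config through the seam instead of AppSettings directly.
    public string Greet() => "Hello, " + AppServices.GetSetting("name");
}

class Program
{
    static void Main()
    {
        // In Main (or a test fixture), wire up the real or fake dependency.
        AppServices.GetSetting = key => key == "name" ? "world" : "";
        Console.WriteLine(new Greeter().Greet()); // Hello, world
    }
}
```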

            • sword_smith 1655 days ago
              I am designing financial software for the crypto industry and I definitely prefer the ability to 1) have a high test coverage, and 2) quickly be able to write regression tests. Without dependency injection with mockable objects, that would not be possible.
          • sword_smith 1655 days ago
            Probably not, but in my opinion it is very nice to have the ability to mock everything if and when you need it.
            • manigandham 1655 days ago
              You always have the ability. It's just writing code. But you should only do it when you need to.
        • gnud 1655 days ago
          You can mock classes with virtual methods/properties.

          I prefer making interfaces instead, though. I find it useful to make the boundaries between components explicit.
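          For instance, a sketch of both options using the Moq library (names are illustrative; assumes the Moq NuGet package is referenced):

```csharp
using System;
using Moq;

public class PriceService
{
    // Mocking a concrete class only works for virtual members.
    public virtual decimal GetPrice(int id) => throw new NotImplementedException();
}

public interface IPriceService
{
    decimal GetPrice(int id);
}

class MockDemo
{
    static void Main()
    {
        // Class mock: Moq subclasses PriceService and overrides the virtual method.
        var classMock = new Mock<PriceService>();
        classMock.Setup(s => s.GetPrice(1)).Returns(9.99m);

        // Interface mock: no virtual keyword needed, and the boundary is explicit.
        var interfaceMock = new Mock<IPriceService>();
        interfaceMock.Setup(s => s.GetPrice(1)).Returns(9.99m);

        Console.WriteLine(classMock.Object.GetPrice(1)); // 9.99
    }
}
```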

        • caseymarquis 1654 days ago
          I'm pretty sure you could do this via reflection and virtual public methods, the same way Entity Framework creates its proxy classes.
        • taco_emoji 1655 days ago
          Which strategy? If your dependencies are all interfaces, then they're all mockable.
    • pjmlp 1655 days ago
      I find the verbosity complaint rather strange.

      Then again, I'd rather use maintainable programming languages than write hieroglyphs.

    • contravariant 1655 days ago
      I find structural typing is mostly necessary when someone messed up and used a class where an interface should have been used.

      I'm now currently in the position of having to decide whether it's better to accept defeat or use T4 or similar to write the 1000+ line wrapper class to undo the mistake... You'd think Microsoft would at least remember to do dependency inversion through interfaces in their own C# code.

      • LeonB 1654 days ago
        I wrote a tool, NimbleText, which is free, lightweight, and good for these one-off 1000-line code generation scenarios. See https://nimbletext.com/live or email me for help (support at nimbletext)
    • commanderjroc 1654 days ago
      > Given that our deployment platform was Linux (for a .Net Core 3.0 project), I was determined to use Linux and VS Code for development. That was a fail; the verbose nature of C# and the Framework APIs make it impossible to be productive without significant help from a full-fledged IDE like Visual Studio.

      I have had, and continue to have, the opposite experience. I use Ubuntu with VS Code and a few C# plugins that have let me navigate the .NET Core framework and write code with minimal references.

      Even with Visual Studio (unless you use R#), you will always run into issues where you aren't sure where a function or class lives. That's why you go read the documentation or ask SO.

      The more you write web APIs, the easier it gets.

      > Structural Typing

      Dynamic was the closest to it, but it has significant performance issues. Tuples and structs also exist.

      > Allow functions outside of classes

      No.

      > Verbosity in C#

      ????

      > most popular libraries will nudge you strongly to use Dependency Injection throughout the app and implement everything as a class and to extract interfaces out of it.

      You do realize C# is primarily a strongly object-oriented language, which is why they (the libraries) urge you to do that.

      If you want functional so much use F#.

    • Digit-Al 1655 days ago
      I don't believe you will ever get functions outside of classes in C#. It is antithetical to the entire ethos of the environment.
      • rafaelvasco 1655 days ago
        Thank god we won't. Functions outside classes never made sense to me. People love to hate classes sometimes. For me, everything starts with classes and structs.
        • gameswithgo 1654 days ago
          In C# you can make a static class and then put functions in it. When you do this, the class is nothing more than a namespace. But the class is already in a namespace, so you now have a redundant namespace. It isn't a big deal, but it's a bit silly to have to do that when writing static functions.
          • rafaelvasco 1653 days ago
            Yeah, I agree. I really like static classes, but I've seen them being overused. As long as it doesn't cause performance problems I'm good with it. In fact, calling static methods is faster.
      • gameswithgo 1655 days ago
        You can just make a public static class FUNCTIONS, add a using static at the top of each file, and then pretend.
    • jopx 1655 days ago
      > Allow functions outside of classes

      Mads Torgersen made a proposal related to this; it's in the C# version planning as version X.X, so maybe in the future we'll have those capabilities.

      https://github.com/dotnet/csharplang/issues/2765

    • hudo 1655 days ago
      > - Allow functions outside of classes

      use a delegate for this:)

      Btw, for dev on Mac I use JetBrains Rider; it's like VS with ReSharper, or even better. It's also available on Linux. Otherwise, for smaller (micro)services, VS Code works fine. As soon as you start fighting with Code, maybe your solution is too big?

    • txdv 1655 days ago
      Can't you just put static functions inside a static class and use them with "using static StaticClassName"?
    • codeulike 1655 days ago
      - Allow functions outside of classes

      What do you mean? Isn't that just anonymous functions?

      • oblio 1655 days ago
        He wants top level functions. He wants to not have to write classes if he thinks the domain doesn't need modelling as objects.
        • arethuza 1655 days ago
          You can have static classes that contain nothing but static members: effectively a namespace, with no instantiation of that class allowed, so no objects.
          • oblio 1655 days ago
            I know, but many people dislike boilerplate code.
        • manigandham 1655 days ago
          You can use a static class though. It's 1 class that then just behaves like a namespace and doesn't need any object modeling.
        • jmkni 1655 days ago
          You could create an F# class library, put your functions in there, and then call them from C#?
        • e12e 1654 days ago
          They probably want to avoid writing classes when they think the domain doesn't need modelling as classes.

          Objects are just initialized memory with a vtable, or actors that sends and receives messages...

  • samuell 1655 days ago
    Can I use the occasion to recommend a VERY good book on modern C# programming: "Functional Programming in C#" by Enrico Buonanno:

    https://www.goodreads.com/book/show/31550964-functional-prog...

    I'm only a few chapters in, but it has already transformed my C# writing in many ways, and I have a ton of practical ideas on how to better structure my programs in a functional way as I go on.

    • LordN00b 1655 days ago
      The first half is an excellent book, buuuuut be careful as you get into the second half; it really becomes a book pimping his own library.
  • mariusmg 1655 days ago
    I tried to port an MVC project to .NET Core 2 a while ago; it was pretty painful, mainly due to the lack of @helper syntax in views (everything which relied on @helper had to be changed).

    Also, from what I saw, nobody is actually in a rush to "move" to .NET Core; most big shops still rely on .NET Framework. I still do some occasional work on a project which is using .NET Remoting :)

    • EnderMB 1655 days ago
      There are some similarities to Python 3 in .NET Core adoption.

      I know plenty of people that would love nothing more than to be up to date and to start using .NET Core, but many of them rely on certain libraries that simply aren't there yet. I know a load of Umbraco devs that are eager to make the jump, but until their CMS supports it, they're kinda stuck if it's a dependency.

      • oaiey 1655 days ago
        The difference is that there's no 2-vs-3 culture split here. Everyone agrees that .NET Core is the future and should be used. Universal cloud adoption (e.g. AWS Lambda), containerization, and Linux deployments are huge selling points for .NET Core.

        People are held back by deprecated tech and because no one touches a running system without need. And maybe the dark-matter developers just don't know yet ;).

        • EnderMB 1655 days ago
          For reference, I'm mostly talking about brand-new projects, and purely from my perspective as a former .NET dev that still spends time with the local .NET community.

          I've not met a single .NET dev that doesn't want to use .NET Core. Hell, if anything, a lot of them would love the opportunity to use .NET on a Unix system, and to use established tooling not available on Windows. The problem is that the tools they use aren't ported yet.

          Umbraco is a key example, as the most popular .NET CMS in use today. It's a great CMS, but we're at least 1-2 years away from a .NET Core implementation, and probably even longer if we're hoping for a first-class Postgres/MySQL backed implementation. Until then, as you've rightly said, there's no need to switch to .NET Core.

          • scarface74 1655 days ago
            I don’t care about being able to develop .Net apps on Linux. Being able to deploy to Linux is a game changer especially in the cloud. Anytime that you add Windows to a cloud environment you get hit with the triple whammy of slow startup times, increased licensing costs and increased resource requirements.
      • sebazzz 1655 days ago
        Many libs already support it. But the end users, the web applications, are the problem. If you haven't properly separated business logic from UI, which might very well be the case if your project budgets aren't too high, or for other reasons, you will have a rewrite on your hands instead of a port. A rewrite which, especially on low budgets, is not worth it.
    • manigandham 1655 days ago
      I find the opposite to be true. Many companies are moving to .NET Core, especially the independent dev shops. Cross-platform, easier to develop, faster and cheaper to run, and now supports desktop APIs.
    • asp_hornet 1655 days ago
      In my experience, it’s the opposite. If you’re not migrating your apps to core your hiring pool is going to be getting significantly smaller and smaller over the next couple of years. Think Swift vs Objective C or old jquery heavy websites vs transpiled ES6.
    • Someone1234 1655 days ago
      They're working on bringing back @helper in .NET Core 3 for back-compat:

      https://github.com/aspnet/AspNetCore/issues/5110

    • mirekrusin 1655 days ago
      I think this news marks the point from which we may see the move happening.
      • WorldMaker 1654 days ago
        The marketing move of stopping .NET Framework at 4.x, and branding the next version of Core to just .NET 5 with no "Core" in sight will probably also push a lot more companies to move ahead with the migration (if for no other reason than the silly baseline that 5 > 4 and what manager wants to be one behind).
  • LandR 1655 days ago
    My biggest issue with .NET Core, and .NET in general (although .NET Core seems worse), is NuGet and the issues it causes with binding redirects.

    It seems like on every project I waste hours trying to figure out the mess that NuGet creates.

    • marsrover 1655 days ago
      I really don't understand how you're having binding redirect issues with .NET core.

      They're usually caused by a mixup between local packages and the GAC, which .NET core doesn't use.

      When you publish a .NET core application, you can publish as stand alone and see right there in the publish directory all the DLLs that are being used.

      • lazulicurio 1654 days ago
        There's still plenty of ways reference resolution with NuGet can go wrong, even without the GAC.

        For example, because NuGet allows packages to import .props and .targets files you can have packages that add arbitrary references that don't match your target framework or runtime. Now you could say "oh, that's the package's fault, not NuGet's", but often the reason a package includes .props or .targets files is to work around other shortcomings in NuGet.

    • fstopmick 1654 days ago
      Same here. I've tried moving my "boilerplate side project" .NET Framework template over to the latest and greatest about once a year for the past four years, and every time I hit the eight-hour mark, I give up. Either my dependencies aren't supported yet, or there's some wonky versioning incompatibility, or there's some other undocumented frustration. I'm not dealing with anything wildly complex here: just an n-tier architecture supporting MVC rendered views, forms authentication, API endpoints, an EF middle tier, and Azure SQL. About as simple as it gets for a web app.

      Has anyone here migrated from .NET framework > .NET core lately? Think it's time for me to give it another go?

      • zmj 1654 days ago
        If your .NET Framework projects are in the old csproj style, first convert them to SDK style, with no changes to the target runtimes or dependency versions. For ASP.NET, WinForms, or WPF apps, this is usually the hard part.

        Once the app is working with SDK-style projects, then try changing the runtime to netcoreapp. Often it just works; if it doesn't, you'll get useful error messages.
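        A minimal illustration of what the SDK-style file looks like (the package reference below is a placeholder; keep your own dependencies and versions unchanged):

```xml
<!-- A minimal SDK-style project file (illustrative). Keep the
     TargetFramework at net472 for the first step; only switch it to
     netcoreapp3.0 once the SDK-style build works. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
    <OutputType>Exe</OutputType>
  </PropertyGroup>
  <ItemGroup>
    <!-- Placeholder dependency to show the PackageReference format. -->
    <PackageReference Include="Newtonsoft.Json" Version="12.0.2" />
  </ItemGroup>
</Project>
```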

    • autechr3 1655 days ago
      .NET Core is definitely better than .NET Framework in regard to NuGet issues, in my experience.
    • moron4hire 1655 days ago
      .NET Standard 2 relaxed the versioning requirements that led to so many binding redirects.
  • asplake 1655 days ago
    > ...increased the number of .NET Framework APIs ported to .NET Core to over 120k, which is more than half of all .NET Framework APIs

    I don't know .NET at all and the numbers there seem mind boggling. Can someone put this into some kind of context?

    • denisw 1655 days ago
      The crazy high numbers are due to a fairly Microsoft-specific definition of "API" in this context. What they count here is class members; so if they ported a class with 15 methods and 3 properties, they'd count this as "18 APIs" (or perhaps 19; not sure if the class itself counts as an "API" as well).

      That being said, I'm sure there was indeed an impressive amount of code that had to be ported.

      • sytelus 1655 days ago
        All public members are legitimate APIs. Would you be happier if every property had get* and set* methods so it counted as two?
        • ptx 1655 days ago
          I think the point was that often "an API" means a larger collection of functions, classes, properties, etc. Like the COM API, the DirectX API, the MFC API and so on.
          • denisw 1655 days ago
            Yes, that's what I meant. The use of the term "API" for a single public code element is something I have not come across in any other language community, hence the clarification for those who are not familiar.
            • tasogare 1655 days ago
              I've seen it before in Apple communication. I hate this usage. It doesn't cost a lot of keystrokes to write "API members" which is more precise.
    • manigandham 1655 days ago
      It refers to the API surface area (classes, methods, namespaces available in the standard library) from .NET Framework, the Windows-only runtime that has been developed for the last 15 years.

      .NET Core is the new cross-platform runtime and now supports more than half of all the API surface available from the older framework. The number itself isn't anything special but just denotes how big the standard library is.

    • sytelus 1655 days ago
      It's massive and includes all kind of things from localization, networking, file system, threading, compiler, graphics, cryptography, web, text processing, XAML, OS etc APIs.

      https://docs.microsoft.com/en-us/dotnet/api/?view=netcore-3....

  • samuell 1655 days ago
    .NET Core is fantastic. What a surprise to find software from Microsoft that works a LOT smoother on Linux than on Windows* :D

    * VSCode/Linux vs VSCode/Windows. Full VS on Windows worked OK.

    • autechr3 1655 days ago
      This has been my experience as well. JetBrains Rider on my mac is nice. I prefer it to VSCode. I'd recommend it if you can afford it.
  • Dolores12 1655 days ago
    >With .NET Core 3.0, we’re at the point where we’ve ported all technologies that are required for modern workloads

    .NET's HttpClient is based on an outdated cookies RFC; RFC 6265 (which is 8 years old) is yet to be supported [1]. And what can you do today without a good HTTP library?

    [1] https://github.com/dotnet/corefx/issues/29651

    • manigandham 1655 days ago
      HttpClient is fast and efficient, and the built-in cookie container handles all the standard functionality, although many users just read and handle the cookie header directly.

      This RFC seems to be all about rejecting certain cookies under some very specific security rules. How impactful is this really? Is this affecting your app somehow?

      • Dolores12 1655 days ago
        Well, you expect the HTTP library to be as good as everywhere else, like Python's requests. When a library is not updated due to app-compatibility issues, I would not call it modern. It is good enough for basic use, though.
        • manigandham 1655 days ago
          HttpClient supports http2, the latest protocol features, has been rewritten in managed code with sockets, and includes advanced handling to balance connection lifetimes with DNS updates. It's about as modern as it gets and we run 10 billion requests per day through this code without issue.

          How does a lack of support for an RFC (which is still being revised) that outlines where a cookie should not be accepted in edge cases mean the HTTP library is not modern? Where are you running into this issue in a real app?

          • Dolores12 1655 days ago
            > Where are you running into this issue in a real app?

            I had discovered a few bugs in cookie handling myself, compared the behavior to competitors, and moved on. If cookie handling is not enough, I also came across inconsistent behavior on different platforms: on Windows it uses WinHttpHandler, on Linux it's libcurl (if I remember correctly). And they both handle edge cases differently.

            • manigandham 1655 days ago
              HttpClient has used sockets in managed code since .NET Core 2.1, which was released in May 2018: https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotne...

              There shouldn't be any platform issues unless you're using an even older version or manually forcing it. What bugs in cookie handling did you encounter? This is very widely used code and the team is responsive so if you can document the issues then they can help.

          • Rapzid 1655 days ago
            Yeah, comparing it to requests which is:

            A.) Not even stdlib

            and

            B.) Doesn't support async/await

            Is an odd choice.

        • daniel-levin 1655 days ago
          Well, requests is a third-party library. I don't like Python's standard library HTTP APIs; I think that's a common sentiment. Similarly, you can get a solid third-party HTTP client for .NET [1]. .NET Core certainly has its shortcomings. I'm going to rattle off a list of other high-quality libraries that address the .NET Core shortcomings for me personally:

          1) Dapper - https://github.com/StackExchange/Dapper

          2) DbUp - https://dbup.readthedocs.io/en/latest/

          3) Autofac - https://autofac.org/

          The Kestrel source code expanded my view of what was possible in C# with respect to low level operations on memory (check out the fixed statement [2]).

          A part of me doesn't want to like Microsoft because of their anti-competitive behavior in the past and crappy developer experiences from 10 years ago. Now, I sound like a Microsoft shill because of how good .NET core is.

          [1] https://github.com/restsharp/RestSharp

          [2] https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

      • merb 1655 days ago
        especially since you need to enable the cookie container functionality manually, and you can override it yourself
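        A hedged sketch of that wiring (the URL and cookie values below are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Http;

class CookieDemo
{
    static void Main()
    {
        // Supplying your own CookieContainer lets you inspect, seed,
        // or replace cookie handling instead of relying on defaults.
        var cookies = new CookieContainer();
        var handler = new HttpClientHandler
        {
            UseCookies = true,
            CookieContainer = cookies
        };
        var client = new HttpClient(handler);

        // Cookies are stored per domain; seed one manually as a demo.
        cookies.Add(new Uri("https://example.com"), new Cookie("session", "abc"));
        Console.WriteLine(cookies.GetCookies(new Uri("https://example.com"))["session"].Value); // abc
    }
}
```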
        • Dolores12 1655 days ago
          or use a different language with a good library that supports the 8-year-old RFC.
          • manigandham 1655 days ago
            C# is a language but .NET is a runtime and standard library. If this issue is really such a problem, you can use any of the dozens of open-source libraries available or just implement your own cookie container that follows this RFC in about a day.

            It sounds like you're picking on a strange issue to disparage .NET without any real experience in it.

            • Dolores12 1655 days ago
              No, I just didn't like the wording in the original post mentioning all 'modern workloads' while something as ubiquitous as cookie handling in HttpClient still doesn't follow an 8-year-old standard. Does it make sense now?
              • saberience 1655 days ago
                You are being over the top and quite frankly ridiculous. I've been using .NET Core in production since its initial release, first at a famous games company (which you've probably heard of) and now at a multi-billion-dollar FinTech firm, and the whole time I have been using HttpClient, or libraries built on HttpClient, for connecting to services/APIs.

                If your whole reason for not using .Net Core is based around this weird and honestly unimportant edge case, all that tells me is that you have a strange bias against the framework for some other reason and looking to nitpick.

              • manigandham 1655 days ago
                Modern workloads refers to the kind of applications commonly built today, like http/json/grpc webservices running in containers.

                And yes the HttpClient is modern. Your entire argument so far has been a single RFC issue that is labeled up-for-grabs because it's so minor and doesn't really affect anyone. Claiming the http functionality and the whole stack isn't modern because of it just doesn't add up.

                • merb 1654 days ago
                  especially since the cookie jar is kinda useless; you can't use it with more than a single server anyway..
              • nbevans 1654 days ago
                That "8 year old standard" you linked to is just a draft and proposed standard. Nobody implements draft specs in production grade libraries/frameworks. Especially not Microsoft.
    • romanovcode 1655 days ago
      The HTTP library is super easy to use and is very fast. I would not say that this issue makes the library bad.
      • sebazzz 1655 days ago
        Agreed. You could also use Curl bindings for .NET as an alternative.
    • victorNicollet 1655 days ago
      Using cookies in HttpClient strikes me as a rare situation.
  • merb 1655 days ago
    Sadly, .NET Core still lacks a good PDF library that isn't priced over the top (and that at least supports building PDFs and creating PDF/A-3s).
    • UglyToad 1655 days ago
      I'm currently building an open-source (not copyleft) PDF library for .NET Standard [0] and I'd be interested to hear more about what you need on the document-generation side.

      The current generation API for my library is extremely limited because I've never needed one but you are the perfect market research participant. It's an API I'm actively looking to improve.

      PDF/A compliance is probably quite a way off though.

      [0]: https://github.com/UglyToad/PdfPig [1]: https://github.com/UglyToad/PdfPig#document-creation

      • SamuelAdams 1655 days ago
        In the past I have frequently needed to merge several existing PDFs together. The idea is this: we have a list of objects, Foo, in a grid that is available to an end user. Each Foo item has a PDF that can be printed; some of our users really like to just look at the paper printout of everything.

        So we created a "bulk print" option. They tick a bunch of checkboxes for the Foo items, then click the Print button.

        Internally we merge all those PDFs together and send one print job to the printer's queue. We used DevExpress [1] to accomplish this, and it worked very well. The only problem: it's expensive.

        Multi-billion dollar companies don't mind paying the price tag, but smaller shops sometimes think twice. If a free or near-free alternative existed, that would be fantastic.

        [1]: https://github.com/DevExpress-Examples/how-to-merge-document...

        • UglyToad 1655 days ago
          A large chunk of PdfPig started as a port of PdfBox (Java), so it might be worth considering containerising or otherwise wrapping the PdfBox functionality to get Apache-2-licensed PDF merging [0], if that works for your scenario.

          It's useful to know that this is a real use case too, I always assumed it was implemented 'just because' but the scenario you describe makes sense.

          [0]: https://pdfbox.apache.org/docs/2.0.1/javadocs/org/apache/pdf...

          Edit: another option is this pdfium wrapper for NET Core though it merges one pair at a time: https://github.com/GowenGit/docnet

      • merb 1655 days ago
        Wow, that looks really nice. We actually mostly do HTML to PDF, but maybe I can look into your project and try to add it (if there is a nice HTML/DOM library like Java has with jsoup).

        Basically we don't need a lot: mostly switching fonts/text sizes/images (generated barcodes, logos), and of course PDF/A-3(a/b/u) for invoices. So APIs that translate the HTML into PDF layout are the bigger problem.

        • numo16 1654 days ago
          For html/dom parsing, I would recommend looking at either AngleSharp[0] or HtmlAgilityPack[1], as they tend to be the most popular in the community.

          [0]: https://github.com/AngleSharp/AngleSharp [1]: https://html-agility-pack.net/

        • UglyToad 1655 days ago
          Thank you for the response.

          Yeah, HTML to PDF is a tricky one; presumably wkhtmltopdf/pechkin doesn't work out because of licensing/interop issues? Other than that, the only other one I'm aware of is Aspose, which is expensive as you say.

          Images (along with font subsetting and fixing the gzip implementation) are the next things I plan to implement, so it's helpful to know it's a real requirement.

          • merb 1655 days ago
            Actually, wkhtmltopdf/pechkin both don't support PDF/A-3 and have not-so-nice output. Aspose is actually really cheap compared to others; unfortunately their support is trash: https://forum.aspose.com/t/html-to-pdf-pdf-net-fonts-error-o... i.e. PDF/A-3 does not work on Mac (I didn't try Linux, but I guess it has similar problems)

            Btw, iText is a really great library; unfortunately it has problematic licensing, and I thought they were joking when they gave me prices.

      • ThrowMeAwayOkay 1655 days ago
        I have a .NET Core web app that needs PDF creation abilities. Can I be your market research participant as well?
        • UglyToad 1655 days ago
          Of course. Most of the current API focuses on data extraction because that was my area of work when I started the project, but I think creation is the more common problem, so it's good to hear as many use cases as possible.
    • Const-me 1655 days ago
      I’ve just built iTextSharp LGPL for .NET Core 2.2: https://github.com/schourode/iTextSharp-LGPL It only needed a single dependency to build on .NET Core, System.Drawing.Common from NuGet, for the Image and Matrix classes.

      That version has a few unfixed bugs, e.g. merged table cells are sometimes broken, but the code quality is OK on average, so it's relatively easy to fix them if you need to.

      • merb 1654 days ago
        btw. we are using it to read PDFs, unfortunately there's no pdf/a-3 creation support
    • kgwxd 1654 days ago
      I'm going to need this very soon (not PDF/A 3 specifically, just PDF). A few months ago I did some research and I figure Skia[1](Sharp)[2] would end up being the base for any .NET core PDF libs. At the time, I was easily able to make a basic functioning demo for a project that ended up dying. Anyone using it in production?

      [1]https://skia.org/user/sample/pdf [2]https://github.com/mono/SkiaSharp/

    • kelvin0 1655 days ago
      Well we've used Reportlab for most our PDF generation needs. There is also an open-source version of the tool/framework.

      https://www.reportlab.com/dev/opensource/

      This is a Python-implemented API/framework. However, you could use IronPython (which compiles to the CLR and .NET) and import Reportlab. I've never used it that way, but it's worth a try!

      Hope this helps.

    • Volrath89 1655 days ago
      Have you tried itextsharp? https://www.nuget.org/packages/iTextSharp/5.5.13.1

      I've successfully used it both in .net 4.7 and .net core 2.2 to generate complex pdf structures, and it has been a great experience.

      • merb 1654 days ago
        AGPL or over 4000€ per server.
    • Yusho 1653 days ago
      DinkToPDF works well to render the HTML produced by Views into PDFs
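
      For context, DinkToPdf usage looks roughly like the following sketch (based on the library's documented pattern; it wraps the native wkhtmltopdf binary, which must be available at runtime, and the HTML string here is a placeholder for rendered view output):

      ```csharp
      using DinkToPdf;

      class Demo
      {
          static void Main()
          {
              // One converter instance should be shared for the app's lifetime.
              var converter = new SynchronizedConverter(new PdfTools());

              var doc = new HtmlToPdfDocument
              {
                  GlobalSettings = { PaperSize = PaperKind.A4 },
                  Objects =
                  {
                      // HtmlContent could be the rendered output of a Razor view.
                      new ObjectSettings { HtmlContent = "<h1>Hello PDF</h1>" }
                  }
              };

              byte[] pdf = converter.Convert(doc);
              System.IO.File.WriteAllBytes("out.pdf", pdf);
          }
      }
      ```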
    • morrbo 1655 days ago
      I've extensively used itext with no issues on core
      • merb 1655 days ago
        itext is extremely overpriced. btw. you can't use the AGPL version in any closed-source project.
        • e12e 1654 days ago
          This is one of those cases where distribution really matters; for quite a lot of the work we do, AGPL would probably be fine (to-order in-house business tools; the client pays for development and gets the source anyway).

          While selling those tools under AGPL certainly would be both Free and open source (but not gratis!) - the only real impact of the AGPL would be on our client - in that they'd be guaranteed the four freedoms (but they generally specify that in the contract anyway - they want the possibility to continue development in house, or with a possible new partner down the road).

          However, the products are proprietary in the sense that our client only ever uses them in-house and doesn't distribute them or expose them as public-facing services. So no redistribution.

          So you're technically correct (the best kind of correct!) - I just think the distinction is important, as you certainly could sell software with AGPL components.

          • merb 1654 days ago
            I never said that I can't sell AGPL software. just that I can't use AGPL in a closed source project.
            • e12e 1654 days ago
              I didn't mean to imply you did, just that this distinction is particularly important here, as I've seen quite a few "closed" projects that do need a pdf library, and would be fine with the AGPL.
  • mjfisher 1655 days ago
    Can anyone closer to the .NET community than I am comment on the wider adoption of .NET core on standard line of business applications?

    The few places I know that work with .NET are a long way from migrating yet. Are there any similarities with the Python 2/3 port? I imagine language level compatibility makes the transition much easier.

    • manigandham 1655 days ago
      Last 5 startups have used .NET, all of them have migrated and all new projects are on .NET Core. Every major dev shop I know starts new projects on Core.

      It's mostly legacy and enterprise apps but it wasn't until .NET Core 3.0 (released last month) that it was viable to migrate everything perfectly anyway so it'll take time for that to filter through.

      This is nothing like Python 2/3. .NET Standard has been out for a while and all the major libraries are compatible with both runtimes, and that now includes desktop APIs too.

      • sebazzz 1655 days ago
        If you have one product (like startups) it is easy to port. You have, in relative terms, nearly unlimited budget and incentive to stay on the latest and greatest.

        If you work in a place where you develop many products for many clients, budgets are lower and you have much to maintain and develop. Porting is in my experience almost never done, because there is no business case to justify it.

    • moksly 1655 days ago
      > adoption of .NET core on standard line of business applications

      This is rather anecdotal, but in my little area of the world there is little. .Net is moving too fast and is too impractical to really be worth it in the enterprise these days. The main problem for us is the libraries for handling things like data formats.

      Linq-to-sql is still better than Entity Framework from a time-to-market perspective, but both of them are really, really slow. XML handling reminds me of learning Java back in the early 00’s: it executes well enough, but you need two million lines of code to do what Python does in 20. Working with SOAP and SAML often requires you to build additional parts on top of fairly terrible APIs, and some older stuff that used to be in .Net has simply disappeared into unmaintained third-party libraries. Connecting up Microsoft’s own System Center (2012) has gone from being a simple service reference to building your own library, because Microsoft moved to Azure. Even official libraries for AD integration are half-finished and require you to build your own service APIs to look up things like unique IDs.

      .Net Core is really good at building CRUD services and really bad at everything you actually need in Enterprise situations because almost nothing in the real world actually requires that.

      Luckily both Microsoft and Azure are treating things like Python as first class citizens both on Windows Servers and in Azure. Their own Powershell has become a much more powerful tool than C# has as well, so it’s not like I dislike Microsoft at all, it’s just that .Net has spent the past decade becoming less and less useful for what we need it to do.

    • kqr 1655 days ago
      I work in a .NET shop that has done a massive push to port all of their .NET Framework code to .NET Core. I joined in at the end of it, and only helped out porting 1–2 of the central applications, so I don't know what the internal struggles were like during the main effort.

      What I do know is that everyone is relieved now and incredibly happy that they did it. Hosting the applications on Linux is so much easier, and they appear to have twice the performance as well. Still not sure why, but it's definitely correlated with hosting them on Linux.

    • saberience 1655 days ago
      I've been using .Net Core in production since its initial release. First company was a well known AAA games studio running backend micro-services in docker containers using .Net Core. Currently at a unicorn FinTech (2B valuation) where literally every backend service is .Net Core 2.2 (soon 3) running in docker containers and the containers are orchestrated in AWS ECS. Works amazingly.
    • pjmlp 1655 days ago
      For our customers, we still don't bother with it beyond a couple of toy projects.

      Most of our customers are doing transitions to .NET Framework 4.7.1 and 4.7.2 from 4.5 and such, .NET Framework 4.8 still isn't a viable option for them, and plenty of in-house, and third party libraries they depend on aren't yet on .NET Core.

      By .NET 5, they might start moving into Core infrastructure.

      And for those teams the multi-platform part of .NET Core isn't that appealing, because UNIX deployments are already covered by the Java teams anyway, with products whose .NET counterparts are yet to be 1:1 available.

      • jmkni 1655 days ago
        Serious question, what makes 4.7.1/4.7.2 viable, but 4.8 not? Major breaking changes?
        • pjmlp 1655 days ago
          Validation through IT images.

          Until IT certifies server images with .NET Framework 4.8, it doesn't get green light for new projects.

          So is the nature of enterprise computing, I know a couple of customers still using Red-Hat Enterprise 5 on their servers.

          • jmkni 1655 days ago
            Ah gotcha! Fun times!
    • darklajid 1655 days ago
      I recently switched companies from one .Net shop to (kinda, sorta) another.

      The previous one had a big product based on a COM-turned-WCF architecture for years. WCF is dead/not a thing on .Net core.

      The one I'm currently looking at just recently (last year-ish?) migrated from idk what to .. a big pile of WCF. I don't think that will ever migrate either.

      (yes, there are ways to replace WCF if you ignore transaction support - and these examples don't require it - but it's a nightmare to migrate as far as I can tell .. and you end up with a product that looks exactly the same to the customer)

      I've dabbled in .Net core since its inception and would love to use it more, but the whole WCF story was usually a deal breaker for usage at work unfortunately.

    • bob1029 1654 days ago
      We went from NetFramework->NetCore 2.x->NetCore 3.x with very few difficulties. The hardest phase was NetFramework->NetCore 2.x because of the differences in dependency resolution rules (e.g. no binding redirects allowed).

      We ultimately restructured our projects a bit to dramatically reduce the number of touch points for managing 3rd party nuget versions. We now have a common platform core (targeting NetStandard 2.1), with specific application implementations consuming it and targeting NetCore 3.0. Mercifully, we were able to capture 100% of our 3rd party dependencies into this common platform core project, so we no longer have to worry about mismatches between our own internal projects or nugets. The picture for us is: (3rd party libs)=>(shared platform library)=>(specific application). This seems to work out really well, and with the newer AOT/linker capabilities, we are resting easier knowing that we can shake off some unused bytes if our distributions start getting a little chunky due to this approach.

      Also, the .NET compatibility pack is a life-saver for anyone stuck using System.Drawing or DirectoryServices. Those are our only 'difficult' windows dependencies, and we expect to be able to replace these with cross-platform alternatives next year.

    • jayd16 1655 days ago
      We're using it for a game server that interfaces with a Unity client. Works damn well. As someone who makes games, that's a standard business application for me, but YMMV.
      • bob1029 1654 days ago
        Have you run into any difficulties with the GC, or are you already playing around with some of the configuration options here?
        • jayd16 1654 days ago
          No issues, but the use case is an internet-hosted dedicated server, so as far as the client is concerned network variance masks any GC concerns.
    • jongalloway2 1654 days ago
      Here's the customer showcase on the .NET site (with links to more at the bottom of the page). It lists some of the biggest enterprise adopters: https://dotnet.microsoft.com/platform/customers

      Of course, lots of Microsoft services run on .NET Core. Last year, the Bing team talked about how their move to .NET Core 2.1 gave them big performance jumps here: https://devblogs.microsoft.com/dotnet/bing-com-runs-on-net-c...

      (disclaimer: Microsoft employee, .NET team)

    • marsrover 1655 days ago
      I've been using .NET Core since RC1 and first deployed to production when version 1 was released.

      From what I've seen, all new projects are being done in .NET Core, and legacy apps are not really being migrated over. .NET Core 3 might fix a lot of these issues, but in the past there were a lot of libraries unavailable in Core that you could only use in Framework.

      I haven't had a recruiter mention Framework in over a year now, I don't believe. It's all Core now.

    • scarface74 1655 days ago
      We are mostly a web SaaS that was traditionally written in C# .Net Framework/ASP.Net MVC. All of our new code is .Net Core/React, and we are trying to move toward Docker and Fargate (AWS serverless Docker). We really want to get off our dependency on Windows.

      In general, .Net framework is legacy. I see very little green field development being done with .Net framework.

      Language level compatibility doesn’t help. There are so many differences in ASP.Net and EF6 between .Net Framework and .Net Core that it isn’t an easy lift. But MS did the right thing by not worshipping at the altar of backwards compatibility for a change.

    • breakingcups 1655 days ago
      We're using .NET Core for a few new projects. Our situation is a bit different as we build bespoke software for a wide range of clients. We're pretty satisfied with it.
    • blntechie 1655 days ago
      We have 3 .NET Core apps already in production in an enterprise environment but all were new projects.

      We have not migrated the 4-5 .NET apps we have and don't see a pressing need yet.

    • drawkbox 1655 days ago
      We are currently working on a social game app and proptech for a startup, both in .NET Core.

      The game is Unity so C# on the app and .NET Core on the backend is nice to match up, common code.

      The client on the proptech is Microsoft focused so they chose Azure/dotnet/C#/Xamarin.

      Initially the proptech drawing/document app was on .NET Framework 4.6 and had the app running with Xamarin on Android + iOS when the app was to run on Android or iOS tablets/pads.

      However, now the project uses Surface Books with touch/pen and Windows Ink so we went to .NET Core for UWP and that is the main app target now though it still runs on iOS/Android. Xamarin is quite nice for business apps that have to integrate to dotnet or any backend really. Lots of great developing and testing tools.

      We have updated from .NET Core 1 to 2 to 2.1 and 2.2 and will be going to .NET Core 3 soon, by end of year.

      The web app, apis and app are all .NET Core + Xamarin and it is nice. I have been doing .NET apps since 2000 along with Python, PHP and Node apps for interactive / promotional / game / drawing / rendering products and .NET Core is pretty clean and I love the nix implementations as well.

      Updating from .NET Core 1 to .NET Core 2 was a little rough, reminiscent of the initial .NET 1.0 to 1.1 and 2.0 transitions, with lots of breaking changes, but .NET Core 3.0 is pretty solid and slowing down on massive changes; libraries and frameworks are almost all on it for what we use, including IdentityServer4, SignalR, Xamarin (Standard) and more. The list of breaking changes is small this time [2].

      There is the typical .NET base-library depth, with random deep errors you have to hunt down, and the thousands of libs/packages, but for the most part it has been not much more than speed bumps. .NET Core comes with lots of great security features for APIs/apps as well, which helped us with compliance/security scans [1].

      Now that .NET Core is pretty solid, and the linux/nix implementations and tools are far along, I expect with the ease of Azure, for Microsoft to really take some ground.

      Microsoft ecosystem is a complete setup, from Visual Studio/Code to Azure to dotnet to Xamarin and UWP to Teams/Office and Surface Books with Windows Ink and even CI/github. They have retooled quite nicely, I left .NET in 2007-2010ish when developers took a backseat and they got it handed to them with mobile, but they are developer focused fully again.

      Side note: I also love the Surface Book with the disconnecting screen and Windows Ink. My current machines are massive custom PC, massive Mac Pro (cheesegrater from 2013 that I love) and Macbook but seriously considering going Surface Book and probably just going to be updating/building on a Mac Mini for our Unity games instead of upping to the new cheese grater as it is the cost of a car or major home renovation project for the specs I want.

      [1] https://docs.microsoft.com/en-us/aspnet/core/security/?view=...

      [2] https://docs.microsoft.com/en-us/ef/core/what-is-new/ef-core...

  • thrower123 1655 days ago
    While I'm happy to hear this on one level, as it likely means that .NET Core is finally going to be approaching some levels of stability that allow it to be used in earnest for production usage, I'm somewhat dismayed. There's still a lot of things missing from it that the 4.X full framework version had, and this feels like it is the door slamming shut on hopes that existing code could be seamlessly upgraded without a lot of rework.

    Time will tell how many existing libraries are fully ported to Core. For the foreseeable future, I expect that I'll still have to be working with the legacy framework, as there are so many SDKs that I require to interface with different products that will never receive the investment to bring them in line.

    • mumblemumble 1655 days ago
      FWIW, the decisions on what not to port that I'm aware of seem to make sense.

      WCF, for example: It's a legacy technology that is also horribly complex and never really took off. It was never really a good solution if you wanted cross-platform RPC, and therefore has a userbase that doesn't overlap much with the people they're trying to attract with .NET Core. If you're trying to migrate to .NET Core in order to go cross-platform, you probably want to be migrating off of WCF anyway. And if that's not true of you, then .NET Framework 4.8 probably still suits your needs, anyway.

      • WorldMaker 1654 days ago
        Worse than that, too, was that WCF tried to be an "all worlds" solution, so it was an okay-not-great RPC toolkit, and an okay-not-great REST API toolkit, and an okay-not-great IPC toolkit, (and it tried to be a terrible P2P communications toolkit for several years), and so forth.

        Replacing WCF won't be easy in a lot of cases not so much because there's a lot of WCF-specific code, but simply figuring out where on the flowchart of possible concerns to migrate to:

        Were you using WCF for RPC? Try gRPC, unless you really need SOAP support or worse WS-* support (I'm sorry) and then, uh, good luck. (Though SOAP libraries for .NET Core do exist and turn up in search results, depending of course how far down the WS-* rabbit hole you need to go.)

        Were you using WCF for REST API? Try ASP.NET directly now instead of indirectly. Need client tools support? Take a look at Swagger (now aka OpenAPI) to replace WCF's odd extensions to SOAP's WSDL for REST APIs. Or maybe look into GraphQL (or less commonly Falcor) if you want something really new and wild.

        (Were you using WCF for P2P communications? That hasn't been officially supported since Vista and whoops.)

        WCF was actually pretty well built for exactly this sort of migration (the focus on interface-first design, data contracts, etc; in some cases it's just writing interface implementations where there were none before), and even the most WCF heavy applications were far more about fiddly giant bits of config files than actual code specific to supporting WCF. I really do think that half the challenge in migrating away from WCF has more to do with figuring out which tool (or tools, given you might have been using WCF for multiple things) to migrate to, as much as any actual code migration.

  • privateSFacct 1655 days ago
    Question:

    I haven’t tried building a windows app in 10 years.

    That said, in the past I found it darn easy to wire an interface up quickly.

    I recently downloaded visual studio and could not quickly figure out how to get a GUI going (design view would not show).

    What is the recommended approach with this new stuff?

    Some buttons and textboxes on a form with an onChange method and a data bound grid?

    This used to be pretty easy.

    • scott00 1655 days ago
      If you were trying WinForms, the designer does not yet work for .NET Core apps. The WPF designer is supposed to work with .NET Core, but there was a bug in it in the first VS release that might have been the cause of your issue[0].

      I also had problems doing anything other than toy WPF projects. IMO the .NET Core support for the GUI frameworks is not ready for prime time. Building GUI apps in .NET Framework is still a great experience though.

      [0] https://developercommunity.visualstudio.com/content/problem/...

      • privateSFacct 1655 days ago
        Very helpful. I thought WPF / Core was the recommended / modern approach.

        I just fired up an attempt using Windows Forms / Net Framework after scrolling down to that combo. It looks good so far.

        • scott00 1655 days ago
          Yeah, I think the Microsoft messaging would naturally lead you to that conclusion, though generally they avoid saying it explicitly. I think in a year WPF and WinForms will probably work great under .NET Core, but I wouldn't use either for real work right now. My recommendation would be to use .NET Framework for GUI work right now, but do it in such a way that the upgrade path is easy. The way to do that is to create the project under .NET Core, and then hand edit the project file to change the <TargetFramework> element from "netcoreapp3.0" to "net472". That will give you the new project file format and make it easy to upgrade when they finish getting the bugs out. The other main thing you should do is to create any libraries as .NET Standard 2.0 libraries, which work with both .NET Framework and .NET Core.
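
          For reference, the hand-edited project file described above might look roughly like this (a sketch; the WindowsDesktop SDK name and the UseWPF property are as in the .NET Core 3.0-era tooling):

          ```xml
          <Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
            <PropertyGroup>
              <OutputType>WinExe</OutputType>
              <!-- Created as "netcoreapp3.0", hand-edited to "net472" to stay on
                   .NET Framework while keeping the new SDK-style project format. -->
              <TargetFramework>net472</TargetFramework>
              <UseWPF>true</UseWPF>
            </PropertyGroup>
          </Project>
          ```

          Flipping the single TargetFramework value back to "netcoreapp3.0" later is then the bulk of the upgrade.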
          • 1wd 1654 days ago
            Can you put WPF code in .NET Standard 2.0 libraries somehow?
        • WorldMaker 1654 days ago
          > I thought WPF / Core was the recommended / modern approach.

          The recommended / modern approach for 100% new applications would probably still be UWP (WinRT XAML) / Core, except getting that up-to-date with the latest .NET Core improvements is currently delayed until .NET 5.

        • oaiey 1655 days ago
          Await the November update. If I remember right, they will update the designer by then.
  • k_ 1655 days ago
    Shouldn't this link to https://github.com/dotnet/corefx/issues/41769 instead since it's redirecting there for discussion?
  • pknopf 1654 days ago
    You know what's funny? ASP.NET Core 2 previews dropped full framework. The community freaked out, warning of a Python 2/3 split, and Microsoft backtracked.

    Fast forward to 3.0, Microsoft did it again, and nobody seems to care.

    • manigandham 1654 days ago
      True, it's another lesson in change management and perceptions. Looks like people have finally figured out .NET Core is the clear future and it's about time to move on from the legacy framework.
  • jcmontx 1655 days ago
    I never understood why they didn't port IQueryable. I never really updated my Azure Functions from runtime v1 to v2 because of that. Dealing with Table Storage without it is a pain in the ass.
  • Rapzid 1655 days ago
    It would be great if app domains or some other form of sandboxing came back, perhaps with faster communication between domains...
    • WorldMaker 1654 days ago
      Depending on your needs, of course, AssemblyLoadContext supports unloading, if the goal is simply loading then unloading plugins.

      (Security sandboxing is obviously a different matter, but Microsoft hasn't seemed too keen on the old .NET 1.0 CAS model for over a decade now, and has mostly recommended against it.)

      Example code: https://github.com/dotnet/samples/tree/master/core/tutorials...
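
      The plugin load/unload case sketched above looks roughly like this with a collectible AssemblyLoadContext (the plugin path is a placeholder; real plugin hosts usually also override Load with an AssemblyDependencyResolver):

      ```csharp
      using System;
      using System.Reflection;
      using System.Runtime.Loader;

      // A collectible load context: assemblies loaded through it can be
      // unloaded once nothing references them any more.
      class PluginLoadContext : AssemblyLoadContext
      {
          public PluginLoadContext() : base(isCollectible: true) { }
      }

      class Host
      {
          static void Main()
          {
              var context = new PluginLoadContext();

              // "plugin.dll" is a placeholder path for this sketch.
              Assembly plugin = context.LoadFromAssemblyPath(
                  System.IO.Path.GetFullPath("plugin.dll"));

              // ... use the plugin via reflection or a shared interface ...

              // Initiate unload; collection completes once all references to
              // the context's assemblies are gone and the GC has run.
              context.Unload();
          }
      }
      ```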

    • josteink 1654 days ago
      What other options do we have? Dedicated processes and some sort of (non-WCF) IPC?
  • donedealomg 1655 days ago
    Microsoft acting to be a good open-source citizen. Did hell freeze over?

    Congratulations. I hope Blazor kills Javascript.

  • pts_ 1655 days ago
    And yet it's not used for rockstar projects (change my mind).
    • oblio 1655 days ago
      What's a "rockstar project"? :-)
      • rodgerd 1655 days ago
        One that gets high, trashes the room, drives an expensive car into the hotel swimming pool, and breaks up the band to release a mediocre solo project, then has to go back to touring with the band when the money runs out.
    • Volrath89 1655 days ago
      .NET is an enterprise framework, mostly used in B2B software, which is rarely "rockstar"

      Anyway, who cares about it, when the bulk of job vacancies are offered by the enterprise. It's a lot easier to find a junior dev role for a .NET dev than for a Ruby dev for example: https://www.youtube.com/watch?v=ZUgNy-okDQ4

    • rafaelvasco 1655 days ago
      It will be. Just a matter of time as things stabilize. Unity Engine will certainly use it moving forward; they could already be using it, I'm not certain. It just doesn't make sense to start a new project with the old .NET Framework from now on.
      • pjmlp 1655 days ago
        Until everyone has put their golden eggs in the .NET Core basket, it makes plenty of sense.
        • rafaelvasco 1655 days ago
          Absolutely. It depends on the project and situation, but it'll start making less and less sense. Personally I moved on at .NET Core 2 and never looked back, but my area is game development. Most people on .NET are doing web development; there it could still make sense to stay on the old and proven .NET Framework.
    • cyptus 1655 days ago
      dot.net and bing.com are running under aspnet core 3.0
      • pts_ 1655 days ago
        Um dogfooding doesn't count.
    • bishala 1654 days ago
      Tencent - pretty rockstar in China.
    • darklajid 1655 days ago
      You don't use StackOverflow, ever?
      • pts_ 1655 days ago
        That's pretty much it.