If this reaches production status (e.g. with a new Hyper-V Hardware Qualification List and HHQL logo program like WHQL), it could be the biggest news for Linux since Dell started shipping a laptop with pre-installed Linux.
Such a logo program could provide OEMs with a test suite for Hyper-V with Linux hardware drivers, which means Microsoft could start contributing test cases to upstream Linux projects like CKI.
Most importantly, a Hyper-V for Linux logo qualification program could require that OEM/ODMs pass HHQL before shipping supported hardware, when they still have engineering resources allocated for system and device firmware fixes.
In short, Hyper-V for Linux has the potential for positive ripple effects throughout the supply chain for Linux "secured core" hardware. When combined with WSL2, DirectX paravirtualized graphics for Linux, and Azure Sphere (based on OpenEmbedded/Yocto), it's a major endorsement for Linux and the flexibility of Type-1, CPU-assisted virtualization pioneered by open-source Xen.
> This talk will cover ... device security from the chip to the Linux kernel, user application isolation, network communication, cloud interaction, and what it takes to keep a system secure for 13 years.
Even if primarily adopted by existing Microsoft customers familiar with Hyper-V, this would benefit Xen, KVM and any Linux distro running on the same hardware targets.
If a theoretical HHQL motivates OEMs to prove that Linux drivers work in a Hyper-V root partition on new hardware, with driver fixes upstreamed to the Linux mainline kernel, then everyone wins.
There is also ongoing work for nested virtualization, to enable KVM, Xen or Hyper-V to be a bare-metal (L0) or a nested (L1) hypervisor, e.g. in a cloud environment where the bare-metal hypervisor cannot be changed by customers.
These interoperability improvements increase hardware support for hypervisors, which can then compete at multiple architectural layers. Users can choose whichever L0-L1-VM-app stack is best optimized for their specific workload.
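On Linux hosts today, whether the L0 hypervisor exposes nested virtualization to guests can be checked from the KVM module parameters; a minimal sketch (the sysfs paths are the standard KVM module-parameter locations; kvm_intel applies on Intel hosts, kvm_amd on AMD):

```shell
# Report whether the host's KVM module is configured to expose nested
# virtualization (i.e. whether an L1 hypervisor can run inside a guest).
for mod in kvm_intel kvm_amd; do
  f="/sys/module/$mod/parameters/nested"
  if [ -r "$f" ]; then
    echo "$mod nested: $(cat "$f")"   # Y or 1 means nesting is enabled
  fi
done
```

A value of `Y` or `1` means an L1 hypervisor such as KVM, Xen, or Hyper-V can itself run VMs inside a guest on that host.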
>> This sounds promising, something closer to OSX would give apple competition for devs like myself that want a linux based system and a fluffy "it-just-works" gui
No, don't. That seems to be what they want. It's not about running Linux apps on Windows. It's about blending the two so that even Linux apps are dependent on Microsoft APIs and such. What is the point of bringing native DX12 to Linux if not that? BTW, they floated that a while back and the kernel folks said no.
It's the "extend" phase they are in now. Once they've extended Linux and you've jumped on with that GUI, the work begins to extinguish the other options; or they just don't care, because they get to take in money for every user of an MS/Linux hybrid.
This is painfully obvious and a bunch of people on here are saying "STFU" unless you have proof of their intentions, which is a stupid argument given the history and current efforts to "extend".
> What is the point of bringing native DX12 to linux if not that?
Oh, easy: it’s so games written for DX12 on Windows—that are currently run on Linux using Wine—can have a direct, low-overhead GPU driver path, rather than Wine needing to translate DX12 calls into OpenGL/Vulkan calls first.
Of course, if you’re a game-engine developer, you know you can achieve the same by just writing your engine to target Vulkan instead of DX12.
But if you’re e.g. Steam, you’d love to see more games “automatically made” multi-platform (with comparable performance), so that you can sell them to your Linux userbase.
Microsoft themselves are—maybe surprisingly—in the same boat as Steam here. The various game studios they own publish multi-platform games. Those platforms don’t necessarily include Linux, but they do often include Android. And merging code into the upstream Linux kernel is one way to get it to appear in Android.
This change would also allow the graphics of Linux programs running in a VM on Windows to be accelerated through virtualization (i.e. having the X/Wayland server on Linux act as a Windows DX12 client.) It’s analogous to the reason that some VM software offers special-purpose “host-guest filesystem” drivers to the guest, that allows the guest to just pass through filesystem requests at a high level, rather than needing a virtual block device or a networked-filesystem protocol.
This is the reason Microsoft themselves offered. I feel like there’s a lot more money on being able to easily port Windows games to Android, though.
Sure, but it's been, like, thirty years. The biggest advancement Linux gaming specifically has had in that time is a highly optimized DX translation layer from Valve (Proton, an extension of WINE).
The Linux community seems to believe that the world will just naturally bias toward open systems given enough time. Maybe Vulkan will own the world in a hundred years; we'll wait it out. But there's no evidence that this is the case. People just want their problems solved; they want working games, or a freakin' powerful office suite, or whatever. That's why Linux succeeded on the server in the first place: it wasn't (mostly) its libre licensing; it was just better at a lot of things, which led software to be written for it, which entrenched its position.
We can solve problems and push for openness at the same time. Linux can have DX and Hyper-V and whatever, while we still make open solutions better. Vulkan may actually own the gaming world one day; it's very good. But I doubt LibreOffice will; Microsoft Office is just too good. Why not try to do both?
I said above: because Microsoft (or rather, their Xbox division) owns a bunch of gaming studios, and acts as their publisher. And one of the roles of a publisher, is to get the games the publisher has rights to, ported to more systems, to get them into the hands of more people, to make more money.
Make no mistake: any time an Xbox title isn’t explicitly chosen to be a “console exclusive” for marketing purposes, Microsoft would love nothing more than to port it to every system imaginable. Look at what they did with Minecraft.
You don't make games multi-platform by bringing an API from a closed platform to an open one. You do it either by writing to multiple APIs (extra work) or by using an API that is already multi-platform.
Microsoft proprietary APIs are not and can not be part of a free software platform, that is a contradiction. They can be part of a new MS-Linux platform masquerading as the real thing until support for the real thing dries up.
The "you" here is nebulous. Games are made by a studio, but are often ported by that studio's publisher, who will often then subcontract the porting work to some other development house.
(Why? Because the original studio is busy making other new games. Or because the publisher got the game produced as a one-off work-for-hire, so the original studio has no ongoing contractual relationship for the publisher to lean on. Or, sometimes, because the studio is defunct, but customers are still interested in getting new ports of the game, so "you gotta do what you gotta do.")
Publishers can't change what engine a game is written in; and there's certainly no positive ROI in a ground-up rewrite. They just have to cope with the game's codebase as-is, relying on combining small tweaks with techniques like emulation/virtualization to get the port shipped. Pushing to get the target platform to natively support the APIs the game uses, is just another such strategy.
> Microsoft proprietary APIs
What is a "proprietary API"? APIs aren't IP. Even in Oracle vs. Google, the copyright case had to focus on plagiarism of header files, not of the API.
The moment there's more than one (popular) implementation of an API, its original creator loses de-facto control over it. Third parties interested in using that API will almost always focus on the lowest-common denominator subset of the API supported by both implementations.
Which is to say that, if anything, Linux implementing DX12 would be bad for Microsoft's control over DX12, since it means that Linux could "hold DX12 features hostage" by refusing to implement them. Just like Chrome and/or Safari are currently holding a lot of HTML5 features hostage from appearing in most web-apps, by being a hold-out on implementing them.
> or just dont care because they get to take in money for every user of MS/linux hybrid.
Oh my god, the horror. Are you suggesting that I could, I dare to think it, pay Microsoft some money for a working operating system that meets all of my needs? Someone needs to call the EU and get this shut down, this is unacceptable behavior.
Look, this fear out of the Linux community is why I'm hesitant to immerse myself in it. Let's say I decide to build a Linux app that is dependent on Microsoft APIs, as you fear: what's stopping you from just not using it? If Microsoft allows this, and we're legal with the licensing, what gives you the right to (1) control how I build the software I want to build, and (2) control what kinds of software my users want to use?
Well, I think if we dig deep enough, the reason is some variant of "because then developers will take the easy route and use the generally pretty good and well-supported Microsoft APIs, and the open Linux APIs will flounder". Which is a weird combination of "not invented here" and self-loathing from the Linux community that is startlingly unhealthy. Microsoft is making an effort toward openness and interoperability here; we can argue their intentions all day, maybe they're good or bad, but it's (some members of) the Linux community who are saying "no, we don't want to be open, we want to live in Linux world and pretend like no one else exists." Sounds a lot like 90s-00s Microsoft.
And what's the worst-case scenario? Microsoft suddenly removes their mask and admits they were the evildoer we all feared them to be, like a Scooby-Doo villain? Ok? They tried pulling that with the browser ecosystem, and Office, and Java, and the world is still spinning. Linux is healthier than ever.
>> Oh my god, the horror. Are you suggesting that I could, I dare to think it, pay Microsoft some money for a working operating system that meets all of my needs?
You are free to do that now. Maybe Linux meets your needs, or maybe Windows, or something else.
>> Lets say I decide to build a linux app that is dependent on Microsoft APIs, as you fear: what's stopping you from just not using it? If Microsoft allows this, and we're legal with the licensing, what gives you the right to (1) control how I build the software I want to build, and (2) control what kinds of software my users want to use?
Nothing gives me that right. The problem isn't about you or your users. The problem is that longer term, developers like that end up using MS APIs on Linux and then all Linux users are forced to use MS APIs and we all end up paying for it. I don't pay MS anything these days, I use Apple and Linux systems and I don't want MS and other developers to ruin one of my options.
Fortunately the Linux kernel folks are well aware of this type of thing and don't allow GPL-licensed shims that serve no purpose other than supporting proprietary binary blobs.
As someone who obviously cares about open software, how do you feel about Apple's increasingly consumer hostile actions and their walled garden ecosystem? Haven't they "embraced, extended and extinguished" Linux in a very real way?
> This is painfully obvious and a bunch of people on here are saying "STFU" unless you have proof of their intentions, which is a stupid argument given the history and current efforts to "extend".
It's even more stupid when you consider that Microsoft is getting into the data-collection business like Facebook/Google/etc. Not just their ever-expanding "telemetry", taking more and more control away from the users, but their acquisitions like LinkedIn, GitHub, etc. (which are fundamentally data-extraction companies).
The amount of "love" Microsoft/Bill Gates/etc. gets here is rather disappointing. I'm sure it's partly paid PR and partly people making a living on the Windows/Microsoft stack, but still sad.
Billionaires are evil, but don't you dare say anything bad about Saint Gates. He is here to save the world. Eerie. Facebook, Google, etc. are evil and people should stop using them, but I need Microsoft in my life. Strange.
So what's wrong with Linux if you want a Linux-based system? I don't remember the last time I got a Linux desktop which didn't "just work"; of course I try to avoid exotic hardware components, but the truth is that you can run into driver problems and issues too when running operating systems you pay money for, like Windows.
Lack of support for GUI apps such as MS Office, Adobe Creative Suite, etc. It was actually workable with wine a few years back, but that was before everything went to an auto-updating subscription model.
>> Excel is, in and of itself, a single reason alone for many people in various industries to own or operate a computer.
I had a client that suffered an Excel/Windows ransomware attack. It started with a macro or something in an Excel document that, for some reason, everybody seemed to open. My inbox was flooded with "do not open anything" emails. Then the CEO called me to check on the data I had stored on my systems. "Remember when I said I didn't use Windows? Your data, at least the copy I have, is safe."
I know a lot of people use Excel and can't live without it. Anyway, the point is that we can blame Linux for kernel or driver issues, its thousands of desktop environments for their bugs, etc., but I don't think it's fair to say that Linux "doesn't work" just because Microsoft will never release Linux versions of their desktop software.
I have a working Linux desktop and I also use spreadsheets in my day to day work.
"Then you haven’t really used excel beyond loading a CSV into a table."
Bad example. I've brought LibreOffice to its knees after importing a CSV file: went to adjust a column width and it became non-responsive. And this was within the last year. They have done a lot of performance work on LibreOffice and I appreciate that, but it is a long way away from Excel for serious work.
My favorite snag was on a plot with the X-axis labeled with date text at an angle. I could have horribly aliased ugly fast text, or I could have nicely anti-aliased text that slowed moving around in the sheet to a crawl. But I couldn't have nice text on a spreadsheet I could still work in.
I've crashed Excel (2016 for Mac) trying to preview CSV I was about to import.
To Microsoft's credit, they eventually fixed it.
What still doesn't work (in the Mac version) is the ODBC driver for PostgreSQL (and maybe for other databases; it ships with the MSSQL driver only). It will also crash Excel. So to work with psql data I have to export it into a file and then import that into Excel. I cannot have a query saved in the spreadsheet and just refresh it.
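A common way to script that export step is to do it on the psql side; a sketch (the connection string and query are illustrative placeholders, not from the original comment):

```shell
# Export a query result as CSV with psql's client-side \copy, then import
# the resulting file into Excel manually.
psql "host=db.example.com dbname=reports user=me" -c \
  "\copy (SELECT * FROM orders WHERE created_at > now() - interval '7 days') TO 'orders.csv' CSV HEADER"
```

Because `\copy` runs on the client, it needs no server-side file permissions, unlike SQL `COPY`.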
I'm getting downvoted, but the fact is that up until about 10 years ago I vastly preferred Calc over Excel. It had a lot less stupidity built in causing me to lose my work. Excel would clear my undo history on save, or close ALL worksheets when I tried to close one (completely different behavior from Word), and I would miss which file it was warning me about.
What's odd is that nobody ever decided to build a reasonable spreadsheet package, or imaging package, or heck, any of a hundred decent packages that are available on OS X and Windows. My day-to-day job is on Linux (and has been for the last three years), but that's because I live on a CLI, and my needs for spreadsheets/presentations are (A) both very rudimentary, and (B) more often than not done with Google Docs anyway.
Anybody who had, as their day job, working with spreadsheets would never do it with LibreOffice unless they (A) weren't really using them with a lot of data and complexity, (B) had a bit of a masochistic streak, or (C) were completely dedicated to open software (which is a fair reason, I guess).
The two things that will permanently hold back Linux on the desktop (and, remember, I say this as someone who has worked only on Linux for 3+ years now) are (A) the amazing number of inconsistencies in the desktop environment (depending on the application, I have 3 separate ways I need to initiate a "copy": Ctrl+C, Ctrl+Shift+C, or Ctrl+Right-Click and select copy) and (B) the lack of most decent desktop software (Chrome and Slack excepted; they are pretty much flawless on Linux).
Bringing this back to the original thread: what I would love from Microsoft is to make Linux a first-rate citizen on Windows. Right now it's 80% of the way there with WSL. Still missing a bunch of networking stuff, and still doesn't run cron/init and friends. But it's getting closer.
Then, we have the best of both worlds - I can do all my work in Linux, and still have access to the rich desktop environment of Windows. So - count me as one person who is really happy that Microsoft is putting effort into advancing their Linux offerings by submitting these Hyper-V support patches into the Linux Kernel.
Yes, I understand your position. I tend to disagree, because for me, the Windows world is not a better world: I don't specially like the Windows UI, I don't like how updates are done, I value using open-source software and I don't like my vendor to be constantly trying to make me use its software, be it Internet Explorer, Edge or Bing. But if that is the best of both worlds for you, that's great.
However, I feel WSL means using Linux to enhance a product (Windows) while at the same time preventing the growth of that same system in its standalone fashion. Because WSL means you won't be using a lot of the usual Linux desktop stuff, you'll be a pure Windows user in a lot of statistical data, and you won't be helping to make Linux a better OS. Not to say that it can get worse once Microsoft starts releasing Windows-only Linux components for WSL, as (I think) started happening already.
Actually, I think we agree a lot. I'm not a big fan of how updates are shoved down your throat and the system rebooted (though I do like the Fast/Slow/Preview/Full Release approach toward development, which provides weekly releases if you want to live dangerously, and cautious, well-baked pre-releases of features 3-4 months before the public beta/full release; much higher-quality releases than the OS X once-a-year release). I definitely value using open-source software, not only because I find it to be higher quality for what I do (data engineering), but also for the control and freedom I get from it. I am absolutely opposed to the operating-system vendor forcing stuff down my throat (one reason why I don't like Snap on Ubuntu; it's kind of cruddy and forces us to use their store).
Remember - I use Linux all day for a reason - it's the better tool for the job I do.
But Linux is incredibly lacking in highly polished commercial software. I constantly miss the dozens of beautifully crafted packages on OS X, and their somewhat less crafted variants on Windows. That final finish and careful craftsmanship that comes from making every button, every font, every round-rect flawless is something that, for the most part, is completely absent on Linux.
I am a contrarian when it comes to Microsoft and Linux: I genuinely believe that they are going to do the right thing (and I say this as a former Netscape employee who knows all too well their EEE pattern). In particular, if we get to use the full desktop experience in WSL, then we'll know they are really on the right path.
But - by and large, I'm guessing we agree way more than disagree on stuff.
Enticing developers to use their platform isn’t “extending their walled garden.” In a healthy free market, competition is good. Microsoft is encouraging using Linux, just through a VM instead of dual booting (or even removing Windows).
Vendor lock-in, OTOH, would be a valid criticism. But again, this is not that.
Exactly. All these software companies could probably make their apps run on Linux at minimal development cost. They just don't want to: Microsoft because they want people to use Windows, Adobe for whatever reason.
The whole 'online' thing is Google's soft entry into the OS war by diminishing the role of the OS: the OS becomes like a bootloader for the browser. The browser takes over the responsibilities of the OS shell and the main applications (mail, drive, docs, chat...). So that's a slightly different topic.
That is not really Linux's fault though. If you all keep buying this auto-updating subscription crap, you are sending the signal to the companies involved that you bought into their view. The only way to make them reconsider is to stop using it.
No one was given the choice. Or the choice was to "upgrade" to a subscription or throw away the technical investment in the platform, starting from scratch with new, unfamiliar, and usually less capable software.
When I really need to run a Windows application, I fire up a VM and use it. KVM's desktop experience is lacking, but VirtualBox is excellent.
And the VMs are nicely contained, disk volumes can have copy-on-write snapshots, and the volumes can even be made immutable: you shut the VM down and, when you restart, it's back to its immutable state. This is becoming less than practical thanks to various self-updating things; you need to make them mutable, let the self-updates happen, and then you can make them immutable again. Otherwise they'll self-update a lot on every boot.
It's a trade-off. It's a bit slower, but not much, and you'll need to have a little extra memory, say 4 gigabytes, if you want to do everything inside the VM and just have some breathing room outside it. OTOH, you'll get the isolation I mentioned and the safety of doing backups with full volume snapshots.
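The immutable-volume pattern described above can be approximated with qemu-img backing files if you're on KVM/QEMU rather than VirtualBox; a sketch (image names and sizes are examples):

```shell
# Keep a pristine base image and boot the VM from a throwaway copy-on-write
# overlay; deleting and recreating the overlay rolls the guest back to the
# base state.
qemu-img create -f qcow2 base.qcow2 20G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
# ...boot the VM from overlay.qcow2; to reset, recreate the overlay:
rm -f overlay.qcow2
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
```

To let a round of self-updates through, boot from the overlay, let them finish, then `qemu-img commit overlay.qcow2` to fold the changes back into the base image.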
> On the other hand, not having Office is kind of a feature, it makes you learn better tools
Replace “Office” with Photoshop, Autodesk, Altium, etc. and it becomes very clear why this isn’t true. The Big Guys(tm) aren’t popular because they’re Windows and macOS programs; they’re popular because they do practically everything and no alternative comes close.
Sure, there is the bit about market dominance which leads to people learning just that tool which leads to further dominance (this is true of Photoshop especially). There’s also the bit about tools not getting better if no one uses them. But one would be hard pressed to find programs as capable as those mentioned earlier.
My exotic hardware can only run on linux (wifi hacking dongles, SDR bits, very old data devices). One of the big reasons I switched to linux as a kid was so that I could play around with tools that windows hates, like enabling monitor mode on a wifi card.
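For context, the monitor-mode dance that Windows drivers generally refuse is only a few commands on Linux; a sketch (the interface name wlan0 is an example, and it needs root plus a chipset whose driver supports monitor mode):

```shell
# Put a wifi interface into monitor mode so it captures raw 802.11 frames
# instead of behaving as a managed client.
ip link set wlan0 down
iw dev wlan0 set type monitor
ip link set wlan0 up
iw dev wlan0 info    # should now report "type monitor"
```

Reverse it with `iw dev wlan0 set type managed` after bringing the link down again.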
I really feel this. I manage about a hundred computers for scientists/students. Total free rein. Some want to use Linux. I’m sorry, you will never have working audio. Oh, it looks like the available Linux drivers don’t play nice with this Sandy Bridge mobo and the display manager will fail to initialize. The list goes on.
Never have I seen lack of hardware support on Windows. I think that’s the essence of “just works” here.
One example I found recently is Bluetooth headsets:
On Windows it by default uses the high-quality audio output profile, but switches to the "headset" profile when an app requires a microphone and picks the headset.
On Linux (Ubuntu, to be precise) you have to manually pick the profile somewhere deep in PulseAudio, making use of the microphone pretty hard, as you first have to figure out how to change it and then need to manually change it every time you get into and out of a call. I quickly gave up and now use a separate mic.
It's all kinds of small things like this that make Linux grind just a little bit more for me than Windows. Not saying Windows doesn't have its issues, just a lot fewer.
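For what it's worth, the manual profile switch can at least be scripted with pactl once you've dug the card name out (the bluez card name below is a placeholder; get the real one, and the exact profile names, from the first command):

```shell
# List audio cards, then flip a Bluetooth headset between the high-quality
# A2DP profile (no mic) and the headset profile (mic works, lower quality).
# Profile names vary by PulseAudio version; `pactl list cards` shows them.
pactl list cards short
pactl set-card-profile bluez_card.00_11_22_33_44_55 headset_head_unit  # before a call
pactl set-card-profile bluez_card.00_11_22_33_44_55 a2dp_sink          # after the call
```

Bound to two hotkeys, this gets close to the automatic switching Windows does.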
With the advent of more people working at home, with a greater variety of headphone/headset devices it's even more of a hassle to use linux for daily work tasks. Not that you can't, obviously, but if you're not a developer or sysadmin-inclined, it's a mess trying to get a linux desktop lashed together.
That is not due to drivers, and nobody denies how bad Windows 10 is and that it becomes worse with every update. Still, that doesn't address the fact that your dGPU probably wouldn't work on Linux at all, or, more probably, would stay on all the time.
The only laptops that old that I know of still running just fine are all Macs; props for keeping that thing alive. I usually give laptops about 6 years tops before I consider them dead (they usually die on their own for various reasons).
I don't know for other vendors, but at least for nvidia, there's parity with the Windows drivers, according to the benchmarks and the general stability. In fact, I remember when sometimes a feature had to be removed from the Linux version of the nvidia drivers just to keep "parity with the Windows one".
> I don't remember the last time I got a Linux desktop which didn't "just work";
I can. I have a laptop with a high-DPI screen. It can output to a second monitor. Setting that second monitor up is relatively easy (though not as simple as it is on Windows or Mac), but getting the desktop to play nicely with a high-DPI screen and a non-high-DPI screen at the same time doesn't just work. It basically becomes unusable at that point. The laptop was sold with Linux installed by default.
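The usual X11 workaround for that mixed-DPI setup is to render the non-HiDPI output at 2x and scale it, so both screens end up at a consistent logical DPI; a sketch (output names and layout are examples; check `xrandr -q` for yours):

```shell
# Drive a HiDPI laptop panel at native resolution and scale up a 1080p
# external monitor so UI elements are roughly the same size on both.
xrandr --output eDP-1 --auto \
       --output HDMI-1 --auto --scale 2x2 --right-of eDP-1
```

It works, but fractional scaling via `--scale` can blur text and cost GPU time, which is part of why this "doesn't just work" out of the box.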
> but the truth is that you can run into driver problems and issues too when running operating systems you pay money for, like Windows.
Yes. Though, I haven't had that problem in years. And when I last had that problem, my solution was literally just upgrading the driver.
Listen, I've hand coded XFree86 config files back in the day to get three monitors up and running on a Slackware system. It's gotten better. But it doesn't "just work."
Old fingerprint readers were reversed. Newer stuff is more tightly locked down. People have reversed the communication protocol, but various other bits are harder to get around due to signatures or other DRM like mechanisms.
As a Kubuntu user (and I appreciate this isn't strictly a Linux problem), I've found most of the window managers quite buggy on Linux: the system clock freezes, menus crash, settings occasionally reset themselves randomly.
You allude to driver issues which certainly can happen on Windows, but I've had quite a few in Linux - particularly with network adapters.
Let's not even get into gaming on Linux... I do it, but it certainly doesn't 'just work' for many things.
I think there's something to be said for the consistent experience that Apple and Microsoft achieve by being the arbiter of their entire stack.
This sounds promising, something closer to OSX would give apple competition for devs like myself that want a linux based system and a fluffy "it-just-works" gui
I switched from macOS to Linux. GNOME and others are pretty good these days. I did try Windows + WSL2 out of curiosity, but my brain is just fundamentally incompatible with non-unix systems. Moreover, I was really annoyed that when Windows installed drivers automatically, it also installed a lot of Realtek and Intel crapware with it. Why?
It would be really nice though if Microsoft could make a Linux version of Office (even if it was just Wine-based), because the web version is too limited. Oh, and the Affinity suite would be nice as well ;).
I have to say I find the settings / control panel dichotomy issue overblown. I rarely need to go into control panel anymore, and pretty much never without it being a clickthrough from the new settings app.
I usually do too, but then again I don't change settings all the time, so it's mostly initial set-up where I spend a while in the settings app or control panel and this is where the problems come together:
1. at least for my preferences there's just too much happening out of the box, e.g. "Explore", "Create", the whole Xbox business, all the notifications, the colors and design - so there's a lot I have to change so I'm not distracted all the time.
2. the number of menus and different places I need to visit to get my setup the way I want it, and how inconsistent these are.
I could probably set up a custom image, but I prefer to get the most recent build when setting up, and would appreciate it being as clean and minimalistic as a fresh install of macOS (yes, there are some things there too, but you have to look for them) or Linux. The plain Windows setup feels like how preinstalled systems used to be, with all sorts of "helpful" apps that automatically start and make the system look "nice".
I primarily care about how terrible the new settings window is and that it's a step back from the Control Panel in too many ways to ignore. What's with all the wasteful white space? What's with the inability to multi-task settings?
Agreed on the non-availability of multi-window Settings, but the UI of Settings seems to mimic iOS and Android settings, which, in my opinion, is modern, simple, and familiar. The Control Panel UI seems so complex nowadays.
I don't think having a UI similar to mobile is very good for the user, since the input mode is very different.
I think macOS does way better here with a minimal use of white space because I can easily have a browser open on the side to look up what I'm trying to change - though some things do appear a bit old school (e.g. networking).
Seems like you've never used it with a non-FHD screen. Or as a window. Because it absolutely wastes a ton of space. The sidebar on the right will disappear to the bottom, which is unnecessarily cut off because of the crazy large margins around everything. I don't understand why every page needs to be scrolled when the Control panel fit twice as much functionality into half or even a quarter of the space.
Looking at the AMD and nVidia software, I see it surfacing a lot of hardware-/vendor-specific options and I don't see how it could be done any differently. Then there's Realtek, where I've never seen their software add anything useful that doesn't already exist in system settings. But then again, there are drivers like those for the Xonar DX or other soundcards where a bunch of useful features and configuration options are surfaced.
I think there are pros and cons. PulseAudio could possibly be half the incomprehensible monstrosity that it is if it didn't have to take over the responsibilities of every audio driver out there. On the other hand, Realtek will arbitrarily disable/hide features with no way (that I've found) to do anything about as a user.
The ecosystem is already there for devs, except for those interested in directly interacting with their laptop's hardware.
I migrated from an MBP to Win10 WSL three months ago and it's like a breath of fresh air for general $DayJob backend CRUD-like work. I'd still prefer a Linux laptop, but WSL is so much closer to a real Linux box.
I don't think I touched PowerShell or any .BAT stuff even once.
It doesn't matter how the hypervisor is arranged; it is still just a virtual machine. Whether it happens to run on a type-1 vs. type-2 hypervisor doesn't mean much; just go ask around whether Xen is any faster than KVM these days.
The filesystem integration is not without flaws though; it doesn't seem to implement full POSIX semantics.
Many times in WSL2 I've had a git rebase fail because it couldn't overwrite existing files, and every time it could be fixed by just running git rebase --continue. The actual problem seems to be that WSL2 system calls return before the NTFS operation completes, so quick successions of create-after-unlink or write-after-rename can have unexpected results.
For this reason I still keep a WSL1 environment, it doesn't have the same issues (and doesn't appear to be any slower for filesystem operations).
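The write-after-rename case is easy to probe. This sketch is not from the original comment, but on a fully POSIX filesystem the final read always sees the new file, while the behavior described above suggests WSL2's NTFS bridge can briefly disagree:

```shell
# Rename a file away and immediately recreate the old name; POSIX semantics
# guarantee the subsequent read sees the freshly created file.
set -e
dir=$(mktemp -d)
echo old > "$dir/a"
mv "$dir/a" "$dir/b"   # rename the original away...
echo new > "$dir/a"    # ...then recreate the old name right away
cat "$dir/a"           # prints "new" on a POSIX-compliant filesystem
rm -rf "$dir"
```

Run in a tight loop on the 9p-backed /mnt/c side, a sequence like this is the kind of thing that would surface the race.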
> For this reason I still keep a WSL1 environment, it doesn't have the same issues (and doesn't appear to be any slower for filesystem operations).
It's definitely a great deal slower for filesystem operations inside the Linux system. Filesystem operations outside the VM's disk image go through 9p, which gives you basically the same performance for both.
"Windows programs for Linux"? It doesn't really have any better filesystem integration. In WSL2 the filesystem is stored in a disk image, as in any other VM. You can mount other host filesystems from inside the VM in any hypervisor I know of.
Ack on XQuartz being crap (though it does have Retina support these days, at least for non-rootless mode), but this is not really a problem of the VM.
Isn't that just a binary loader, like you can do with wine/qemu on any Linux? Surely it's mostly just proxying the pipes, replicating the working directory, and maybe keeping some of the environment variables. The filesystem integration seems like the trickiest part of that altogether.
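That "proxy the pipes, replicate the working directory, keep some of the environment variables" job can be sketched in a few lines. This is only an illustration of the concept; WSL's real interop layer works at the binary-loader level, and the `keep_env` whitelist here is an invented example:

```python
# Minimal sketch of a cross-environment command proxy: run a command with the
# caller's working directory and a whitelisted subset of environment
# variables, wiring stdout/stderr back through pipes. Purely illustrative.
import os
import subprocess

def proxy_run(cmd, keep_env=("PATH", "HOME", "LANG")):
    # Replicate only selected environment variables, not the whole env.
    env = {k: os.environ[k] for k in keep_env if k in os.environ}
    return subprocess.run(
        cmd,
        cwd=os.getcwd(),       # replicate the working directory
        env=env,               # keep a subset of the environment
        capture_output=True,   # proxy the pipes
        text=True,
    )

result = proxy_run(["echo", "hello from the other side"])
print(result.stdout.strip())
```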
You've got it completely wrong. WSL1 was a system-call translation layer, and everything ran directly on the Windows kernel, much as Windows programs do under WINE.
WSL2 is a standard Hyper-V virtual machine, no more, no less, with a custom Linux kernel build and an ext4 root filesystem stored in a .vhdx image. Every process runs as a native Linux process under Microsoft's Linux kernel, _inside_ the VM. What makes it more convenient than simply opening the Hyper-V manager and creating a VM is ease of installation, maintenance, and integration with Windows (using the 9p protocol to allow file access).
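A rough way to see the difference from inside the guest: WSL kernels advertise "microsoft" in the kernel release string, and the Microsoft-built WSL2 kernel typically includes "WSL2" as well, while WSL1 presents a faked release like `4.4.0-19041-Microsoft`. A hedged heuristic sketch (release-string conventions are an assumption and could change):

```python
# Heuristic sketch: distinguish WSL1 / WSL2 / non-WSL from the kernel release
# string. The substrings checked here are conventions observed in practice,
# not a documented contract.
import platform

def wsl_flavor():
    rel = platform.uname().release.lower()
    if "microsoft" not in rel:
        return "not WSL"          # a regular Linux (or other) kernel
    return "WSL2" if "wsl2" in rel else "WSL1"

print(wsl_flavor())
```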
So use Linux if you want an 'it just works' GUI. It just works. I've been using Linux as my daily driver since just about forever and it works just fine. Very, very rarely do I hit the limitations of the system, and usually that's when trying to do something weird, such as running 8 sound cards in the same machine or some other strange hardware feat. Other than that, it's very hard to find flaws in day-to-day use, and for more mundane use cases 'it just works' accurately describes the state of affairs.
If you have to develop for Windows then that's of course another matter.
I've been using Manjaro Linux as my daily driver for 2 years and I can't complain at all about it. It absolutely just works. Installing software is easier than any other OS and Manjaro's default desktop environment, XFCE, does a Windows-style UI better than Windows 10 does IMO and with more features, like being able to middle-click taskbar items to close them just like Chrome tabs - a feature I used to have to hack Windows to enable. Even when I've had to write small scripts for XFCE to add features, I don't feel like I'm hacking because everything I'm doing is supported.
Of course I keep a few separate Windows PCs around to do things that Windows can only do (when I need to do them, which hasn't happened for quite some time now) like SQL Server Management Studio, Visual Studio and some games/video streaming services.
Also, I keep a Mac around to do Mac things like debugging an iOS app or helping some junior developers who only know how to use a Mac.
I would second this. I finally got rid of my MacBook Pro after all kinds of hardware problems and moved to Linux as my daily driver. I made a point of buying a new laptop built by a System76-type company in Europe to avoid any weird hardware compatibility issues, and it really does work perfectly for me. I'm using Pop!_OS 20.04, for what it's worth.
Ok, so are you seriously suggesting if we did a random sampling of 60+ aged people in Europe and the US (who use computers) we would find Linux being used by the majority of them? Because if you are I would suggest you're living in a fantasy-land. I would be surprised if 5% of computer-using 60+ year olds use Linux.
Yes, I used an anecdote, but it's an anecdote that contains a massive grain of truth. I mean, I work for a tech company full of 30-somethings and 20-somethings, and only one of the engineers in a group of about 320 that I know and work with requested a Linux laptop versus a Mac or Windows. If this is the case with young engineers, it's absolutely going to be even worse (for Linux) when looking at non-technical older people.
I recently had an issue with my Linux Mint desktop where a power outage somehow introduced a hard disk error that was not easy to resolve (unless one knows exactly what to look for in the logs and knows the correct `fsck` incantation from memory). Linux is definitely still worth it, but the "just works" factor is just significantly less than MacOS.
If that had happened to you on Windows, instead of doing an fsck you might have had to re-install from scratch. And I've had OS X helpfully suggest 'initializing' a hard drive with some perfectly good data and a borked boot block on it.
Fixing that took a lot more magic than just an fsck, no matter what the incantations. Once you have trouble at that level any OS will be tricky to get going again because a lot of the underlying assumptions have failed.
For me it feels quite the opposite. OS X is built upon a Unix core and shares a common heritage with it. There is consistency in the design throughout all OS levels. Windows and GNU/Linux, on the contrary, are IMHO two very incompatible architectures. The business rationale for Microsoft behind this is clear, but hybrids mostly represent a temporary transition state. So the question is: what comes next, and where is the journey going?
I cannot speak for the other poster, but there are several levels where consistency is significantly better. The GUI is of course the most noticeable, especially as the “iOS-ification” of macOS continues. But for a developer, the methods you interact with are more consistent across platforms and apps. Porting an app between iOS and macOS can be as simple as changing a few method names and setting a new target in Xcode. For the most part one can assume things like app bundle layouts and where files will be dropped on the system. Most of this consistency lies parallel to where Apple enforces it, which comes with its own downsides.
That consistency isn’t absolute though, rough spots like the boundary between Mach and the BSD components still exist.
However, MS seems to be falling back on its old bad habits, and the "it-just-works" GUI has been less true for a while now (I would say a year): we had some major pains after every Windows update at work, to the point where the admin blocked them in the firewall.
I think their release schedule (twice a year) is way too fast, and they should focus more on stability.
There are two of us managing IT tasks in the company (my real qualification is software dev, though). We don't have access to Microsoft anything (too expensive), so we use the licenses that come with the computers we (rarely) buy.
I still haven't managed to convert all our workstations from Win 7 to Win 10 (and yes, we still have a couple of XPs); there are special apps on them that need the intervention of one of our providers, so it's complicated.
Last year one of our providers sent us some machines with LTSB 2016 installed. They're already at their EOL.
Edge is not available, and there's not even an image viewer on this OS; MS Photos is impossible to install (and is crap anyway; it regularly fails to show an image that any other viewer, including Paint, can open).
LTSB (or LTSC) isn't designed for general-purpose web/office machines. This is one of the reasons it's difficult to get LTSC: it's only available via enterprise licensing, and it's designed for a very specific, non-changing, long-term environment. It will never have the most up-to-date drivers, or the latest and greatest web browser, or the Microsoft Store.
LTSB is an amazing fit for things like kiosks, POS systems, and appliances that are designed for a single purpose only. Uptime and stability are the goal, not end-user focus. You should never use it outside of a large enterprise where you can support it.
From what I can gather, China is hedging away from Windows and toward UOS, a Linux distro based on Deepin.
I wouldn't be surprised if Microsoft is also making some preparations for that movement, so they can still offer software for the Chinese market.
>something closer to OSX would give apple competition for devs like myself that want a linux based system
Apple has been consistently removing native *nix tools from the system with every iteration. I was recently surprised to see it still had SFTP built in (although I'm not sure how long that will last).
But, homebrew still does save the day. I wonder whether binary notarization requirements will affect that too.
I'm not sure Windows 10's GUI still qualifies as a fluffy "it-just-works" GUI.
Windows 7 made a lot of sense, but configuring Windows 10 has become confusing. There are two versions of the configuration tools, the legacy one and the new HTML-like one, and the latter seems almost unusable because of how it's structured. Not to mention that the HTML variant doesn't work when you VNC into a laptop whose lid is closed and whose attached monitor is turned off; then it's not possible to scroll, because the pages are broken, as if the HTML renderer is unable to obtain proper display metrics.
I've seen a comment on Reddit some time ago that said, "I would use Linux if it had all the programs I use day to day in Microsoft Windows". Though I can understand this, Microsoft providing support for Linux inside Windows is far from the right approach; it's only going to make users stay on Windows. Rather, software makers should release their products for Linux as well. Microsoft is working toward the goal of making Windows the de facto platform for all kinds of users.
People - users, developers, artists, anyone - should see this beyond software. They should see the philosophy that drives it: the "Free Software" movement paved the way for an ecosystem where "Knowledge Freedom" was more important.
Sorry for the ignorance, but what does this mean? I understand from the article that the patch allows the full Hyper-V stack to run on a linux-based machine (as opposed to previously needing windows to run the root partition). But what does _that_ mean to have the ability to run Hyper-V entirely on linux? Is it just a "good to have options" thing?
Hyper-V already is a bare metal (type 1) hypervisor. Admittedly it does use one (or more) of the guests, called the "parent partition", to handle some of the work for it. This change allows that parent partition to be Linux instead of Windows, but it does not make it any more of a bare metal hypervisor than it was before, unless you know something in addition to what the article says.
Please don't FUD like this. Proprietary systems on top of Linux are nothing new and Microsoft certainly isn't the first company to have done it. If you have some real proof of their business plans then let's hear that rather than encouraging speculation and fearmongering.
> I feel like taking an extremely cautious stance in regards to Microsoft is very fair
As someone who wrote their first code for Linux in 1998 (a research project, not upstreamed), the whole "Microsoft loves Linux" thing is almost exactly like "Magneto loves the X-Men". I mean, I'm glad they're not calling us a cancer anymore, but I don't believe for a second they're a fundamentally different company; and I don't trust them one bit.
But the great thing about open-source is that bitter enemies can collaborate when it makes sense to do so.
To be clear, the GP wasn’t making a claim about Microsoft’s strategy; what they’re claiming is that Linux is robust against such threats. Linux is constantly “Embraced” and “Extended” by all sorts of people, but it’s far too large a tent at this point for any entity, no matter how large, to gain the control necessary to “Extinguish” it.
(In this specific case, it’s because Linux has many in-kernel hypervisor systems, not just Microsoft’s. And at no point would they get rid of any of them, even if Microsoft’s is “better” in some sense. That’s not how Linux works: those systems are there to allow various groups to scratch their own itches, not as some central ideologically-driven top-down design. Microsoft can certainly add to Linux—anyone can—but nobody can force Linux to take away alternatives until it becomes dependent in some way on their particular code.)
Whatever Embrace, Extend, Extinguish Microsoft is doing here, all the patches were open source and licensed under the GPL. If VMware's patches to make Linux run on VMware's closed-source hypervisor got mainlined, why not these?
That strategy has also failed them numerous times and resulted in complete flops. Sorry I am just really sick of the senseless Microsoft bashing that tends to pervade Linux communities. If you have some real reason to criticize them based on current behavior then let's hear that. Anything else is speculation.
In my experience there is no MS love/hate from the Linux community anymore; since the rise of greater evils, MS has flown under the radar and even come across as hip. Sadly, MS is on course to join Google and Facebook in this regard.
I think it is uncharitable to assert that a company that is 45 years old has a strategy that is "never-changing". There is no denying that Microsoft has been hostile to Linux and open-source efforts before, but that doesn't mean this would never change. Last I checked, Microsoft was the largest contributor to open-source projects, and Satya Nadella seems to embrace FOSS.
How are you seeing this perceived strategy in LinkedIn, GitHub, Azure, .NET Core, Docker, Surface, etc?
These are all things in which Microsoft is either the owner or a strong participant. They aren't the same company they used to be. They no longer have one single product (Windows) that everything revolves around. They were losing too much (things like mobile) to keep that strategy.
Or maybe the Hyper-V team sees Windows being the only host that Hyper-V supports as a liability. Look at the reported build times: you need 16 hours to compile Windows on a 64-core superfast workstation with hundreds of GB of memory, which would be frustrating and annoying. If they just want to improve and test Hyper-V, working with Linux would be much faster, since it compiles within minutes on a standard dev machine with adequate RAM.
Windows contains not only the kernel but also many components like standard libraries, daemons, graphics, basic apps, and so on. So you should compare against the compile time for Linux plus glibc, systemd, X/Wayland, GNOME, wpa_supplicant, PulseAudio, and so on.
Perhaps the Hyper-V team is already tired of dealing with Windows. I mean, look at this quote: "...it takes approximately 16 hours to compile Windows on a 64 cores super fast server-class machine optimized for the job, and with hundreds of GB of memory, and that time does not include running tests."
This is awesome. Although Hyper-V itself has been great for me, there have been certain things that have been a royal pain (copying/unzipping a compressed image and integrating into our monitoring system come to mind) which don't work with the stripped-down Windows version they have. I do wonder, though: will the Linux version be backwards compatible with the Microsoft Hyper-V console? Or will there be something similar? It's very nice to be able to fire up a GUI over a VPN and right-click to add/configure a new machine.
Similarly with installing new images. If anyone has any insight into any of this, I'd be really interested to know!
The only reason I am still using windows is MS office.
I might give it a try myself... My home machine has been running Arch Linux flawlessly for more than two years already. It's just that my work is still stuck with a lot of document sharing with other people, and I need MS Office programs, which have no alternative in the open-source world.
Well, office.com looks promising these days, and I hope to switch completely ASAP.
Well, according to TFA, the patches were submitted as an RFC.
Short version: it's not in Linux, this is basically the very first step in getting it into Linux, it's still gonna be probably several months at least before it makes it into the mainline kernel, and then however long after that for it to make it into your distro's kernel.
FWIW, I think it was a little over 3 years from the first "RFC" for Wireguard until it was merged into mainline (part of that, though, is because some existing things in the kernel had to be "re-worked" first).
If we assume this is technically sound and so on, is there a political dimension that will be considered before merging this? Such as "this will decrease Linux marketplace penetration in this-and-this niche"? I know that Linux is not a company etc so a direct comparison is awkward, but there can still be such considerations weighing in, perhaps...?
Anyone who's hosting Hyper-V VMs on Windows can now move to Linux and host those VMs under the same Hyper-V environment they were before. I'm not sure how many do this and, frankly, I'm quite surprised Microsoft does it on Azure.
Of course, you can run those same VMs under KVM under a Linux OS, so it's not some new capability.