I get the sentiment here, it's very annoying for developers (including me). Establishing trust is a very hard problem, though.
Let's move this to a productive conversation though. What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
Giving out free code-signing certificates also makes it easier for malware to get legitimate certificates. This is akin to LetsEncrypt for certs -- https://yourbank.real-secure-website.xy can have a valid cert but it doesn't mean it's legitimate. What's the equivalent to the "URL bar" for software? What's the equivalent to the ACME domain validation challenge?
The SmartScreen stuff is another attempt at this -- software that's not frequently seen is flagged as a potential problem. As a developer, this annoys me greatly. As the de-facto support person for family that don't understand computers... I don't mind so much. Without this, malware gets executed directly and now you're dependent on (very imperfect) anti-virus software.
I guess the Store is another way to have "trusted" applications, but you only have to look at the Google Play or iOS store to see how well this ultimately works out (for both malware and legitimate authors).
Note this isn't even about admin vs non-admin installations. Obviously malware running as admin can do more damage that's harder to recover from, but non-admin malware is just as capable of doing bad things (think: stealing credentials, running cryptocurrency miners, ransomware), and after being hit by a ransomware attack I doubt your "typical" user is going to really care much about the distinction between their account vs the entire computer being trashed.
> What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
Stop. Reasons for doing it at all are bullshit. If you want to evaluate the software, do that. I'll happily hand you the source code from GitHub and the build chain on AppVeyor so you can watch the executables being compiled. If you want to give me a way to mark my applications so that users know that they're picking up what I'm putting down, great. I also care about my users, so I'll happily mark my applications.
But you don't need to take $100/year from me for the right to not have thugs block access.
> Giving out free code-signing certificates also makes it easier for malware to get legitimate certificates.
Malware exists to make money. Therefore malware authors can easily pay for certificates as a cost of business. Megacorp software fucks people over all the time. It exfiltrates their browsing history, MITMs their secure connections, installs rootkits and backdoors. Please don't pretend that this increases security.
Having gone through the process myself, it’s not trivial.
We first tried through GoDaddy who shouldn’t even offer the service to South Africans because they “required” a photograph of a company director holding a government issued photo identity document with physical address included. There is no such type of document here. We offered affidavits, lawyers letters, but they were unbending.
We then tried and managed to succeed through Digicert. Their process involved a few checks including checking local government mandated company registry and using the telephone number from that registry to phone a director to confirm they were aware of the certificate request.
Bad guys can certainly get this done, but it raises the bar very substantially, and once they’ve burned their credibility of the company the certificate is issued against, they have to use a different company.
Oh, yes, so we're supposed to believe that malware outfits bringing in millions of dollars a year in illicit profit can't afford to set up front organizations.
And it doesn't even have to be fake! You could be Zoom! Or Avast! Or Trend Micro! Or Sony! Or Lenovo!
The only person in this story who doesn't have a business address is me.
This is in line with my experience. I've worked in cybersecurity in the past and the "attackers" used to be huge black hat companies that had much more money and power than all white hat companies combined. We joked that only the bad guys wore suits to work, since they were very powerful and well organized :)
I know there's a smiley in your comment, but this is now a very genuine issue, and working in cybersecurity I feel helpless to do anything about the situation.
My biggest worry is the endless "telemetry", "customer experience feedback", "licensing", "opt-out", "tracing", "quality improvement programs", etc. On and on. There are now a thousand hooks into every data centre by just about every vendor involved. Every one of them is exfiltrating information. Every one of them is a data leak waiting to happen. Every one of them could take control at any time.
Huawei isn't doing anything different. The western governments just don't like it that they're copying them.
> malware outfits bringing in millions of dollars a year
With that kind of money they can pretty much bypass any measure an OS manufacturer could reasonably put in place without completely sacrificing usability.
You should see most security measures as the lock on your door. It doesn't take an expert to crack but still stops most from even attempting it. No security measure is 100% effective but it still raises the bar for a successful attack.
You seem to have a bone to pick with MS's choice, so I'm genuinely curious which direction you would go in. And keep in mind you have a billion users and a truckload of baggage to work with.
So they should stop (discontinue) every measure that can be bypassed by an outfit with this kind of money? That doesn't leave many measures, if any. And it lowers the bar for a successful attack to someone with a couple of hours to spare. SmartScreen prompts are just like any HTTP error in a browser. The cert certifies an identity, that's it. It doesn't magically clean any malware inside, that you know the identity, or that you have to trust it.
> it hurts without helping
It helps me verify that the software I use comes from the developer I expected. It may not be a catch-all but I find value in this.
Don't forget, your door lock can be opened in seconds and the person it inconveniences is mostly you, especially if you lock your keys inside. Your argument would be far more convincing if you took no precautions whatsoever because someone with millions to spend can bypass them.
> It helps me verify that the software I use comes from the developer I expected.
Nobody needs to charge money for me to be able to sign my executables with something like PGP.
> The cert certifies an identity, that's it. It doesn't magically clean any malware inside, that you know the identity, or that you have to trust it.
And yet the message is very clearly "THIS IS UNSAFE" because "WE don't know the guy". It isn't "Don't run"/"Run anyway thanks." It's "DON'T RUN" in BOLD in an eye-catching bright box in a prominent location where buttons go and "(more info)" in body text somewhere else.
Your initial argument was against SmartScreen, not against paying. Now it's that things should be free. A cert costs because it gives you access to an already established infrastructure, integrated with everything, where you can easily verify the identity being certified.
Business opportunity for you: set up a similar infra for PGP but free. Something that can be integrated with an OS and can instantly show me what a certificate would. Then make sure your infra doesn't become a point of failure. You can also implement a reputation system where a new developer gets a warning, but one that's not intrusive, almost easy to miss... I will gladly use it (and I'm sure OS makers will too once you prove your solution is at least as solid as PKI) because I want the functionality, not the particular implementation.
> because "WE don't know the guy"
I don't know the guy either, because there's no kind of identification attached to the package. Clear identification and reputation systems definitely help. Both are widely used on the internet today because they're pretty much the only way anyone ever came up with to discourage offenders. I will say it again and again: it raises the bar for a successful attack.
> It isn't "Don't run"/"Run anyway thanks." It's "DON'T RUN"
Which is almost identical to a browser security warning. It stands to reason that protecting your own machine should have an even more prominent message.
If you jump from one line of argumentation to another you'll never make a point. And your "solution" was to suggest that any security measure which can be bypassed by someone with millions of dollars to spend on this should not be implemented at all. Which is not even worthy of discussion.
Registering an offshore company with a fake passport will take a whole $300 out of a malware distributor's pocket. Or maybe $500 if they need a random homeless guy to pass the interview for them.
As someone who used to make fake IDs it would be very easy to pass EV checks. It's like putting a padlock on a gate. Anyone who wants to get past it can easily do so.
I have a hard time believing this. Based on my experience, a part of the EV check was checking of the business entity with the state and sending snail mail to the address of record. Since our registered address was a law firm, I had to go through all kinds of hoops to get that letter as it contained the string needed to proceed to the next step in the EV verification.
You can recruit a clueless person to be a “virtual assistant” or similar, give them some random admin tasks (to make it feel legitimate) before sending the letter to their address and asking them to read/scan the contents for you.
A similar approach is successfully used by scammers to recruit money mules.
You're assuming this is in the US; some countries have different requirements and it's much easier to set up a business fraudulently. Not to mention, given the current situation with fraudulent covid-related unemployment/benefit claims in the US I am pretty sure there are ways to set up a business fraudulently in the US too.
Digressing a bit, that's believable, but my registered agent (a specialized law firm) just scans and uploads any mail to my account on their portal. It's not a problem most businesses spend much time on, but you can probably find one that lives in this century. If I'm paying them a tiny annual retainer it's the least they can do, and it has already saved me a lot of tax compliance hassle being able to pull old docs that were handled by someone else.
If you were deliberately trying to set up a few shells for obfuscation it would not be hard, particularly if you had some means of buying fake IDs.
That's easy with a fake ID. You can get a personal mailbox (eg UPS Store) with fake IDs and register a company on the state's website with an anonymous prepaid debit card.
> What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
Microsoft can recognize that pay-to-play certificates are merely assets owned by developers, not developer identities. Microsoft needs to manage identity relationships with developers in a way that is independent of certificate acquisition and expiry.
The certificates themselves are merely an implementation detail of identity.
With that said, I don't condone the corporate gate-keeping of software development and recommend developers not contribute to any platform that requires indie developers to pay to become a part of the ecosystem. Long live open source.
Microsoft can give out free code-signing certificates like LetsEncrypt, but bind publisher reputation to the domain name rather than to a particular public key or certificate. That way, malware makers won't be able to build up enough reputation, because they will have to switch domain names often (and legit software publishers won't be subject to extortion by Code Signing CAs). Regarding the domain expiry problem, this is solved by short-lived certificates (same as for LetsEncrypt) and a timestamping server.
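A minimal sketch of how that domain-bound reputation could work (purely illustrative, assuming reputation is a simple clean-download counter; the domains and threshold are made up, and this is the proposal above, not anything Microsoft actually implements). The point is that certificates can rotate freely while reputation sticks to the domain, so a freshly registered malware domain always starts from zero:

```python
# Hypothetical domain-keyed publisher reputation: reputation accrues
# against the domain named in the certificate, not the certificate
# itself, so short-lived certs (LetsEncrypt-style) cost nothing to
# rotate while a brand-new domain still triggers the warning.

class DomainReputation:
    def __init__(self, threshold=1000):
        self.downloads = {}          # domain -> clean-download count
        self.threshold = threshold   # downloads needed to skip the warning

    def record_clean_download(self, cert_domain):
        self.downloads[cert_domain] = self.downloads.get(cert_domain, 0) + 1

    def should_warn(self, cert_domain):
        # Warn only while the domain is still "unknown".
        return self.downloads.get(cert_domain, 0) < self.threshold


rep = DomainReputation(threshold=3)
for _ in range(3):
    rep.record_clean_download("example.org")

print(rep.should_warn("example.org"))            # established: no warning
print(rep.should_warn("freshly-registered.xy"))  # brand-new domain: warn
```

Switching domains to escape a bad reputation means abandoning the accumulated counter, which is exactly the cost the scheme is meant to impose.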
This enforces a "developers must own, and continue to own, a domain" requirement. Not saying that's bad, but it needs to be considered. For example, a lot of software these days is built and served entirely from GitHub.
You could even end up re-using domain-name-signals from existing spam datasets (whois, hosting provider, age etc).
Yes. Domains, long lived private keys, developer accounts, etc. are all better than the current system. If timestamp countersigning were replaced with some kind of domain based validation we could have short lived keys and the whole thing could be tied back to a digital identity that’s more trustworthy than Malware Inc.’s EV code signing certificate.
So what? We are developers, we are affected, we are making ourselves heard. We aren't our users (designer's motto).
It took me about 15 minutes to get my Android phone infected by malware while downloading apps only from Google Play. If Google is failing at such a basic task for "ordinary people", all MS accomplishes is to annoy regular developers, making their apps "suspicious" to the public.
I don't believe any computer-illiterate people use Windows. Tablets and phones have completely replaced PCs in ordinary people's lives, with macbooks serving for the tiny percentage of tasks that need a keyboard. Anyone trying to use Windows as a "normal person" quickly runs into a bunch of random issues and edge cases, e.g. I recently realised that my microphone doesn't work right because I was plugging it into a USB3 port instead of a USB2 one. Microsoft drivers, everyone. You have nVidia drivers leaving all their previous versions installed on every update requiring use of a third party tool to clean your constantly growing C drive. You have just random driver crashes from random device manufacturers. You have at least two different control panels for every thing. And on and on. It is nearly impossible to use Windows unless you're a power user and know its workings, or have someone else administer your PC and only run MSOffice and Outlook.
That said, Smart Screen is a useful feature, I do want to know if the program I downloaded was not signed by the developer. What I would like from it however is that it should show up on every install with either "This program is signed by Malbolge Inc." or "This program does not have a signature" (in bold on bright red background with big exclamation marks border); "Do you wish to install?". Every installer gets this popup, nobody is spared. For the unsigned case, maybe require two clicks - a checkbox and a button. Because I always want to know who signed the installer, just in case I downloaded a compromised installer signed by a certificate different from what I expect. It also makes the unsigned installer popup less scary in comparison, though it must be kept visually distinct from the signed installer popup. For individual portable programs that are not installed, you will also have a "do not ask again" checkbox. Again, this will pop up for every program at least once.
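The uniform prompt proposed above could be sketched as a simple decision function (purely illustrative; "Malbolge Inc." comes from the comment, and the field names and two-click rule are invented for the sketch):

```python
# Hypothetical "every installer gets a prompt" policy: signed installers
# show the publisher name, unsigned ones get a visually distinct red
# warning that requires an extra confirmation (checkbox + button).

def install_prompt(signer=None):
    if signer is not None:
        return {
            "message": f"This program is signed by {signer}.",
            "style": "neutral",
            "confirmations": 1,   # single "Install" button
        }
    return {
        "message": "This program does not have a signature!",
        "style": "red-warning",
        "confirmations": 2,       # checkbox plus button
    }

print(install_prompt("Malbolge Inc."))
print(install_prompt())
```

Because nobody is spared the prompt, the signed case doubles as a check against a compromised installer signed with an unexpected certificate, which is the scenario the comment is worried about.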
> Giving out free code-signing certificates also makes it easier for malware to get legitimate certificates. This is akin to LetsEncrypt for certs
IMO that would still be good.
This would help protect against dangerous middlemen, all those massive sites hosting thousands of pieces of software, many with added ad- or other badware. If CCleaner (just a random example) could sign their executables for free, then there'd be very justified suspicion if a copy isn't signed.
Not to mention, does the existence of a cert really make a piece of (proprietary) software trustworthy?
Yes, true. But it would be an additional hurdle and any weirdness like that is visible. Plus MS or AV vendors could blacklist actual signers that misbehave.
> What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
They can keep SmartScreen in place, but soften the language and make it more obvious that you can still run it if you are sure you got it from a good source.
Sure, Bonzi Buddy might not be the best example, but you get his point. People just click through dialogs to close them without thinking about what they are clicking.
You don't get to handwave away a clear fallacy with "ok but you get the point". _My_ point is that the point is false, not that the example is bad. If Bonzi Buddy can just buy their way into your computer, then this does not save you. From anything. Because real malicious actors like Bonzi Buddy can just buy their way in!
I don't think this is just a hypothetical either; if I remember rightly, a lot of the really obnoxious unwanted software from that era was signed with valid, purchased signing certificates in order to encourage people to install it via ActiveX.
A trojan that steals banking information is going to be able to afford a certificate, but they aren't going to buy one because they would have to identify themselves.
In order to understand this, you really need to start thinking like a criminal. "If I had money, power, maybe guns, and no conscience or care for anyone else, what could I get away with?"
You're imagining some mastermind criminal rather than some kid or for-hire hacker who finds an exploit kit and gets to work piecing together something malicious.
>but soften the language and make it more obvious that you can still run it if you are sure you got it from a good source
I'm not sure whether that helps. The Dunning-Kruger effect will invariably cause novice users to skip those warnings because they think they "know computers" and "wouldn't fall for scams".
For open source apps: Perhaps have a build service (Azure+github) that builds the software from source, then automatically signs it. Perhaps have some kind of application + manual review + requirements (like Github 2FA) process. Scanning for malicious source code would help a small bit as well.
For individuals: Some countries have an electronic identity system (eID), that would allow verification of name + address when code signing while being as secure/more secure than the usb sticks one gets for an EV certificate + the EV address/identity verification process. Because there is no standard this would be a lot of per-country work. And some countries don't have a (good) eID system (or an ID system at all). I only know the technical details of the German one and that one would work.
For businesses: EV certificates shouldn't be too much of a cost there. Though I am annoyed by the lack of competition driving the prices down in this area. Maybe make it easier for new authorities to enter the market?
To start with, MS should make it easy for authors to get shared libraries (DLLs, SOs, whatever) into an official package distribution channel. For Windows that happens to be the 'Microsoft Store', and they should be free (as in beer).
That would then let MS worry about only the business logic and other glue using that already trustworthy or at least trust-rated code.
That too, if it's free of charge software, should have ways of being published that make it easier to verify something is safe, and if it's questionable, then engage in more of a process.
> software that's not frequently seen is flagged as a potential problem
Which means it’s not frequently seen, which means it’s flagged, which means it’s not frequently seen, which means...
One simple solution is don’t flag “not frequently seen” with the “interfere with installation” flag. Ditto developer certificates.
Does it open up a potential attack vector? Yes. But it removes an algorithmic, uncompromising, and artificial hurdle for independent developers, which is better for users IMO.
What did we do before we needed corporations to tell us what is safe and unsafe for consumption? Even as late as 2010 I cannot recall some e-nanny telling me that I downloaded a naughty executable image. I do not recall anyone having a life-altering problem due to lack of such a system, but I only know so many people.
Any way you slice this it's more about profit than it is about security. There is no amount of locking-down of Windows that will prevent my friends and family from figuring out how to completely fuck up the machine anyways and everyone involved knows it.
We were literally just talking about how awesome it would be to dumpster Apple's iOS ecosystem in favor of a .NET/Surface ecosystem, but if its going to be the same enterprise signing certificate hell, then Microsoft can fuck directly off too. I am so tired of this shit. We have spent the last 2 weeks fighting Apple certificate nonsense on behalf of several of our customers.
Apple, Microsoft, and others who would force code signing identities on developers: Kindly fuck off with your nonsense. We would like to get back to writing code and delivering excellent customer experiences. Trust can be built in many other ways. You are not the sole arbiters of trust. I would much more likely follow advice from a green username on HN than I would your smartscreen technology or any ridiculous certificate chain backing it.
> Let's move this to a productive conversation though. What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
They could prioritize the radical concept that the owner of the hardware also takes responsibility for the software that runs on it, and they could make efforts to inform, educate, and empower the owner, rather than wrest power away from them and from independent developers.
They could copy Debian and other orgs that solved many distribution and trust problems decades ago while remaining free for both users and developers, with only volunteer resources, while MS users were still stuck with random .exe's from random websites and shrinkwrapped software as the only options.
And those of us who really ought to know better could stop acting like this is some best-effort genuine desire to solve truly hard problems on behalf of users, rather than the shameless power-grab it actually is.
There is no solution to the problem except a curated list.
EV certs should be a one-time payment thing, reasonable price ($100, for example) and require paperwork to link it to a business or individual. Same as the "blue tick" in social apps.
That doesn’t stop everything, but at least it is not the wild west either.
> What's the equivalent to the "URL bar" for software? What's the equivalent to the ACME domain validation challenge?
It's quite ironic that JavaScript has been practically a native language on Windows since Windows 8. Either it's indeed not possible to use a LetsEncrypt-style model for that, or they marketed it wrong to developers.
I'm amazed how MS is often the first one to push out new Desktop technology a decade before everyone else (HTML on the desktop, AJAX) but it's usually others that take full advantage of those on other platforms...
Allow self-signed certs and build reputation as usual. This would still require effort from indie devs to build reputation but as soon as more people start using your app the dialogs disappear. You don't need domain name, anyone can generate self-signed certs.
One variation of this scheme is making Microsoft the CA that issues certs for free with some issuance limits.
For the record Google uses self-signed certs for Android apps.
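A sketch of that self-signed model, where identity is just "same key as last time" and reputation hangs off the key's fingerprint rather than a CA-issued name (the keys and threshold here are placeholders, and this is the commenter's proposal, not actual Windows behavior):

```python
# Hypothetical reputation keyed to a self-signed key's SHA-256
# fingerprint: no domain or CA required. Continuity of the key
# is the identity, Android-style.

import hashlib

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

reputation = {}  # fingerprint -> users running software signed by that key

def record_install(public_key_bytes):
    fp = fingerprint(public_key_bytes)
    reputation[fp] = reputation.get(fp, 0) + 1

def show_warning(public_key_bytes, threshold=100):
    return reputation.get(fingerprint(public_key_bytes), 0) < threshold

dev_key = b"indie-dev-public-key"   # placeholder for a real public key
for _ in range(150):
    record_install(dev_key)

print(show_warning(dev_key))        # enough users: dialogs disappear
print(show_warning(b"new-key"))     # never-seen key: warn
```

As with the domain variant, going rogue means burning the key and starting the reputation climb over from zero.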
> What can Microsoft do, as an alternative, that doesn't result in an identical or worse situation?
Do what Apple does -- allow me as a user to set a setting that allows unsigned code to run, or allows signed but not recognized code to run, but with a warning that it is signed but not recognized.
They could also fix their warning screen to better explain that it's not necessarily malware.
Windows already works the way you describe. There's a Developer Mode that relaxes some restrictions, and SmartScreen lets you run unsigned code even in its default configuration (you have to click More Info to show the button, which is annoying but hardly unique to Windows; security error bypass in most browsers has been this way for ages).
SmartScreen's current behaviour is more like if browsers pretended HTTP is a security error (It is not. It is just insecure, and labeled as such) and required the same bypass workflow.
Just do what Apple does. Own the certification and recognize trustworthy developers. Allow new devs to easily distribute, and ban/block them if they turn malicious. It's not a perfect solution but it is a step in the right direction IMHO.
Application signing is a mafia protection racket, plain and simple.
If you aren't signed by an "authority", every user is told by default automatically that your code is unsafe until you pay money.
It is 100% analogous to thugs walking into your store saying "It would be a real shame if something were to happen to scare people away."
The message is "We Protected You" and "Unsafe". WHY? Because "WE don't recognize" it.
Application signing certificates cost money. Always. And if you're making something for free either out of the goodness of your heart or because you like making things, that money has to come out of your pocket just so the thugs don't stand in front of your door with bats. Nobody should be ok with that. AND FUN FACT: malicious or incompetent actors can and do also pay money.
The problem isn't the rent. It may be an inconvenience for an indie developer, but to Microsoft, Apple, etc. the cost of a yearly developer account is peanuts. They are not doing it hoping to make excessive amounts of money. Even if Apple has a million developers paying $100, $100M is barely a blip on their radar.
It is all about control. They get to decide whose software gets to run with and whose without annoyances.
Yea, I did not speak out. I yelled. But the opposition shouted even louder.
Just butting heads doesn't seem to be getting us any results, there are probably better ways to achieve fairness, even if it is not exactly in the form we want (e.g. popup for every program instead of just unsafe ones).
I think a major difference here is that LetsEncrypt relies on an already existing third-party authority, that generally has some scrutiny to it: DNS registrars.
His point is that with webtrust certificates (what most people think of as "ssl certificates"), it's easy to validate the identity (DNS name), either by sending a email or validating a DNS record. For code signing certificates, they're not issued to DNS names, they're issued to legal entities (natural persons or corporations), which you can't easily validate.
Code signing certificates could be issued to domain names, and that might provide a lower bar to clear while still offering a good way to establish a reputation.
They should be issued to domain names. I bet there are close to 100 domains I’d recognize from small or open source developers and I couldn’t tell you the name or company name of a single one of them.
It’ll never happen though because Microsoft, Google, and Apple are doing everything they can to de-emphasize domains and online namespaces so they can become the gatekeepers of all content.
The entire point of Authenticode wasn't to protect users; it was to make sure software wasn't modified in flight (a la what SourceForge did), and to make those binaries accountable.
Now we're getting to the point where this feels like a protection racket. It is, at least in theory, possible for individuals to get EV certificates for websites. Worst case, you can set up a one-man business for the paperwork.
A lot of viruses can hit via either remote code execution, or exploiting a bug when loaded through a data file. Neither of those scenarios is stopped by SmartScreen. At best, it stops someone from clicking "WannaCry.exe".
MSFT is basically doing everything to make you use the Store, and it reeked back with Windows RT, and it reeks even more now.
Unfortunately, it seems MSFT is incapable of creating a version of Windows that doesn't have live tiles, constantly tracking what applications you run. I switched to Linux years ago, but I realize that most people live in a Windows ecosystem, and that they're subject to the whims of MSFT.
I am one of the creators of SmartScreen application reputation. SmartScreen is a reputation-based safety feature that allows 'known' downloaded software to run friction-free but interrupts the execution of 'unknown' downloaded software with a 'stranger-danger' warning. SmartScreen application reputation was first launched in IE9 (2010/2011) and was then integrated into Windows (starting with Windows 8 in 2012). It has been a few years since I worked on SmartScreen, but I thought providing some context might be interesting.
Windows 7 made great strides in addressing software vulnerabilities. And bad guys quickly moved from software vuln exploits to socially engineered attacks. They tricked users into downloading and running spurious programs (using a variety of techniques, ranging from SEO to scare-ware websites that tricked people into thinking their machines were compromised to running sophisticated ad campaigns). The problem with the traditional anti-virus (AV) approach was that by the time the AV analysts could get their hands on a new binary, analyze it, classify it as malicious, write a signature, and distribute it, it was generally too late. The bad guys monetized the latency between when they published a new binary on the internet and when the AV vendors were able to effectively detect it as malware.
SmartScreen tried to flip that dynamic by using reputation. By volume, the majority of downloads were benign; there was no point in warning users when the likelihood of future infections from such downloads was nearly zero. However, software that had never been seen before carried significant risk - all the yet-undetected malware resided in that set (depending on the situation and context, the risk of future malware infection could range from 25% to 75%). So, for 'known' software, SmartScreen eliminated the mostly-useless warning (that everyone had grown used to clicking through); for 'unknown' software (which the AV companies still hadn't deemed malicious; yet-undetected malware would be 'unknown'), SmartScreen showed a 'stranger-danger' warning. This was incredibly effective in stemming socially engineered malware attacks. In the overwhelming majority of cases, users did the right thing: they chose not to run the downloaded programs that were later detected as malicious. When SmartScreen launched, nearly 7% of all downloads were later detected as malware by AV. Within a couple of years, the incidence of socially engineered malware had dropped by many orders of magnitude (most bad actors changed their business model to bundleware - bundling unwanted, non-malicious software with popular downloads). An important factor was that SmartScreen had expansive coverage of (knew about) all executable binaries downloaded from the Internet, in order to mitigate the risk of users getting used to ignoring warnings they saw too often. Most users saw one or two SmartScreen warnings in a year, and when they did, the risk was significant - and users made the right choice in those cases (not running the downloaded program). SmartScreen provided highly effective 0-hour protection against socially engineered malware by helping users make the right trust decisions.
There was some friction - for developers and for advanced users (who more frequently downloaded esoteric, not-very-commonly-downloaded software). For developers, reputation came in two forms: either each individual binary that they published could 'acquire' reputation, or, if they had a code signing certificate, that certificate could 'acquire' reputation (and any binary signed with the certificate would inherit it, which was the better option). In either case, if the developer went rogue or published a program that was malicious, it was straightforward to deal with: SmartScreen could instantaneously revoke the reputation of the certificate or the binary. There is a meaningful cost (time, behavior) that a developer incurred in order to get reputation - it is hard, expensive, and not scalable for bad actors to 'acquire' reputation - and if they did end up behaving badly after acquiring it (e.g. signing malware with code signing certs that had acquired reputation), that reputation would be lost very quickly, really hurting their ROI.
So yes - there was some friction. In many or most cases, new executables and publishers 'acquired' reputation after a short period (typically a few days), and many advanced users understood why they would occasionally see the SmartScreen warning and made the right choice. But the benefits to the larger ecosystem were incredibly significant.
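The gating described above amounts to a simple decision rule sitting on top of a huge telemetry dataset. A minimal sketch of that rule (all names and sets here are hypothetical stand-ins; the real service is a server-side black box):

```python
# Hypothetical sketch of reputation-gated download verdicts.
# known_good / known_bad / trusted_certs stand in for SmartScreen's
# server-side telemetry; none of these names are real APIs.

def download_verdict(file_hash, cert_id, known_good, known_bad, trusted_certs):
    if file_hash in known_bad:
        return "block"       # already detected as malicious
    if file_hash in known_good:
        return "run"         # the binary itself has reputation
    if cert_id in trusted_certs:
        return "run"         # inherits the signing cert's reputation
    return "warn"            # unknown: show the 'stranger danger' prompt

# A brand-new binary signed with an established cert sails through...
print(download_verdict("new-hash", "acme-cert", set(), set(), {"acme-cert"}))  # run
# ...while the same binary unsigned gets the warning.
print(download_verdict("new-hash", None, set(), set(), {"acme-cert"}))         # warn
```

This also makes the revocation story clear: pulling one cert out of `trusted_certs` instantly downgrades every binary that relied on it.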
With all due respect, you may have built an effective tool to suppress malware, but I would be more excited had you addressed the complaint in the article.
It remains difficult if not impossible for a brand new developer to coordinate a launch, or for an open source project to release unsigned binaries, without triggering SmartScreen's rather opaque and user-hostile reputation block. At first glance, the dialog gives the user no indication that it is even possible to run the program. Of course it stops malware. It brings the average Windows user to a dead stop!
Based on the number of regular people I help on a daily basis who download completely legitimate, "esoteric, non-very-commonly-downloaded software" I must say that I do not find your arguments very compelling. This tool could have easily remained in your browser where it belongs.
Scam companies are making several hundred dollars per victim, so the ROI on an EV code signing certificate is pretty good in that context. I can’t count the number of PCs I’ve helped people fix because they’re infected with some fake antivirus that came in with an EV code signing certificate.
And who cares if you guys crush 10 or 50 or 100 small developers for every malware distributor that gets stopped, right?
SmartScreen is a bad solution. The underlying issue is the pathetic identity validation industry where $ = reputation. All SmartScreen does is add popular = reputation on top of that. Both suck!
What we need for modern software development is a proper identity validation system that doesn’t cost an arm and a leg and lets us tie the validation to our developer accounts and long lived digital identities.
Code signing is good for the rent seekers charging a fortune to provide terrible service. SmartScreen is an awful black box that you think is good because you worked on it and are privy to the internals.
Maybe it was well intentioned when it started, but now SmartScreen is a non-issue for industrial-sized malware distributors while being devastating for small, independent developers.
What this really does is train users to bypass protections. As anyone who has owned a mac can attest, I now basically think nothing of the potential perils of doing "right click -> open -> ok" because I have to do it ALL THE TIME.
It's getting to the point where it's the same on windows. More info -> run anyway is becoming a useless annoyance.
I would note that at one point SSL certificates were kind of in the same boat. Users clicked through web warnings as a matter of course. The thing that changed is that we were able to reduce the financial barrier to certs to literally $0. If the same can be done for this nonsense, we will be in much better shape.
I have already made my choice on risk when I chose to download the software from the developer and execute it. A prompt for a first time trust might be the limit of what I'm OK with including, but any stronger warning or making it difficult to proceed is really only reasonable to do if some 3rd party is compromising a choice I have already made.
Dear GitHub CEO reading this message on HN: a cool feature would be a GitHub action to sign binaries for free.
It's ridiculous that my open source code is hosted on GitHub, the binary is created with an action but I have to pay for a certificate and manually sign it.
Yes, this please! I recently found out about the hoops one must jump through to sign code; I've basically given up until one of the CI services provides a way.
I guess since Microsoft through GitHub controls the build process, couldn't they also include the signed source as well as a combined signature?
Idea being that one could, in theory, download the git repo, check out the relevant commit and verify that this was the source code and this was the resulting binary, and that matches the exe file I just downloaded.
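Even without free signing, CI could at least publish a checksum next to the artifact so the download can be verified against what the action produced. A rough sketch (file names are hypothetical; note this alone doesn't prove which commit produced the binary - that takes reproducible builds or a trusted attestation from the CI provider):

```python
# Verify a downloaded artifact against the SHA-256 a CI run published
# alongside it. This only proves the bytes match what CI uploaded, not
# that the build itself was honest.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, published_hex):
    # published_hex would be fetched over HTTPS from the release page
    return sha256_of(path) == published_hex.strip().lower()
```

In practice you'd reject the file on mismatch and re-download; pairing this with a reproducible build is what would actually tie the .exe to a specific commit.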
Yes! I bet there are numerous ways this could be implemented to make things easier for developers and end users.
With the current setup GitHub has more control on the resulting bits than I do.
Also, current Microsoft signing rules require a hardware dongle to sign the bits (all non-hardware certificates will be deprecated in time).
So, I'm supposed to take the .EXE that GitHub produced on one of their VMs running actions then certify it's legit by signing it with my certificate.
But what am I actually certifying? Well, that to the best of my knowledge this EXE is the build output of this Git commit that triggered the action.
I recently went through this pain for an Electron app. A non-EV code signing cert kept throwing SmartScreen warnings for most users. Acquiring an EV cert forced me to register a company and pay a hefty cert price per year, overall costing nearly $1500 - not to mention the delays and added costs along the way.
Meanwhile my $100/y Apple Dev subscription was enough to package the app and distribute outside App Store.
At this point I am fairly certain EV certs are nothing but rackets supported by MS.
Spent an afternoon comparing different (very sketchy yet somehow the best in Windows code sign cert land) sites, finally picked one, signed the app and downloaded it on another machine.
I was immediately greeted with SmartScreen, and learned I needed to shell out HUNDREDS more to get rid of it.
What a racket. And something Apple do automatically for you (provided you pay for their developer program).
Oof. Next time you need to renew, check the links from the msft authenticode dev website for ev certs. A couple providers are much less than $1500/yr.
Also, when I got my renewal bill for something like $1500/yr for PhotoStructure, I sent an email to their support asking to continue the low original fee, and they agreed.
I'm not seeing any requirement to have a registered company? DigiCert's checkout process says to use your own legal name if you don't have a legal business name.
I am guessing that they'd want you to distribute your program through their "App Store" as a UWP, which is likely subject to sandboxing and will never show the SmartScreen prompts.
Making raw Win32 .exe distribution as user-unfriendly as possible is very likely to be a goal of MS.
Honestly I am all for sandboxing by default and scary prompts for anything that doesn't subject itself to strict sandboxing or that tries to break out of it. That doesn't mean apps necessarily have to come from an MS curated store either, just that by default they run in a sandboxed context with access to dick squat on my system.
I absolutely hate when I install something like Adobe Acrobat and it installs 18 scheduled tasks, a startup job to ensure those tasks exist, a tray icon to monitor its services, and all the other crap. No thanks.
I also don't want to have to worry about installing some random app I found on SourceForge that purports to do what I want but might come with the SF equivalent of News Gator side loaded.
Lock it all down in a virtual sandbox by default, and if it needs legitimate access to my files, prompt me to grant it.
MSIX tries to be exactly that. It supports the gamut of Win32 apps, so long as they can be minimally sandboxed. Win32 apps in MSIX still have access to everything and can't be entirely trusted, but the installer will warn you about that a lot less "scarily" than the SmartScreen Defender prompt this author doesn't like.
MSIX can be sideloaded by default in every supported version of Windows, in addition to or instead of Store install (supported versions include 7, 8, 8.1, and 10 after the Anniversary Update). MSIX supports auto-updating even when sideloaded; all it takes is an HTTPS server with some simple manifests (for those that remember ClickOnce, it is reminiscent but a lot easier/cleaner), or for enterprises a file share will do.
The only caveat is that MSIX requires a code signing certificate for sideloading. (Note that this makes a Store install the "cheaper" option at $99/year, with Microsoft handling the package signing for you as part of the approval process.)
My biggest gripe with WinGet as a project (and no, the article is wrong it's not intended to replace the Store) is that it isn't doing way more to bootstrap MSIX packages instead of "downloading and running EXEs like always before".
APPX sideloading has been enabled by default since at least the Anniversary Update. Not every MSIX will sideload if just renamed to APPX, but many will (including many Win32 applications). Microsoft has stated that every actively supported Windows 10 feature update from the Anniversary Update forward will receive updates to enable the rest of MSIX and the MSIX file extension, including sideloading.
You're both partly right - it was enabled by default in Home and Pro editions since 2016, but is just now being enabled by default in the Enterprise edition in the new version 2004 update.
> That doesn't mean apps necessarily have to come from an MS curated store either, just that by default they run in a sandboxed context with access to dick squat on my system.
This is absolutely possible. You can distribute your UWP app as an MSIX package[0], which allows side-loading without having to go through the store. The whole experience is actually quite nice and straightforward, UI-wise.
Does that mean they're sandboxed by default? I personally don't care if the package is MSIX, MSI, EXE, or whatever. I just want sandboxed by default.
I really hate worrying what an App like Chrome is doing to my system and I'm reminded every time I launch it because all my Thunderbolt connected monitors black out for a second when it starts up.
Win32 MSIX packages are minimally sandboxed by default. Mostly just in that it virtualizes some file system and Registry changes to ensure the package is clean to uninstall.
It doesn't stop Win32 apps from doing crazy Win32 things (unfortunately), but it's still a step up from MSI/EXE installs in general.
MSIX only enforces a sandbox if an application doesn’t elect to use the restricted capabilities that allow it to run without. File system and registry virtualization can be disabled quite easily with a few lines in the package manifest, as well as a host of other isolation features.
This is a good thing, and it makes MSIX a usable packaging format for pretty much any application under the sun - but nobody should assume a MSIX package is safe just because it’s a MSIX package.
Now that Project Reunion is making it official that both worlds are getting merged, I look forward to the minimal Win32 sandbox acquiring full sandboxing capabilities over time.
I see current MSIX limitations for Win32 just as means to incrementally steer developers into it, while not introducing too many breaking changes (as WinRT happened to be).
Not the case at all. It's about the fact that they build reputation off installation data from all Windows users, and the catch-22 that can create for new developers just starting out with their first app.
I work for a company that develops audio software. The market itself isn’t that huge so our products triggered even on Windows 7.
Once we got the first reports, we bought a code signing certificate.
But still... if your software isn’t common... you’ll trigger the SmartScreen.
Even if your app manages to pass SmartScreen, you'll soon discover that many users run anti-virus products built on the same assumption: an uncommon app is a bad app.
I've ended up contacting a lot of anti-virus companies to whitelist our installers on their end as well.
While the process itself is much simpler than notarizing with Apple, on Windows you need to contact many more parties to make your app “just work” for the non-tech-savvy user.
The current code signing situation on Windows is painful. It's especially galling how the code signing requirements apply to modern sandboxed Windows applications as well - an MSIX-packaged application with minimal capabilities is subject to the same requirements as an EXE that can do anything.
I think there's an argument to be made that sandboxing shouldn't exempt you from security restrictions. Unless the sandbox is entirely impossible to penetrate (making apps borderline useless), a malicious app still has many opportunities to trick the user or exploit security vulnerabilities. Revocable signing certificates are a useful tool for fighting back against hostile actors.
The certs should be much cheaper, though, and it would make sense for sandboxed applications to perhaps be easier to sign.
That makes a lot of sense - I think making signing easier should be the goal, not removing signing requirements entirely.
Part of my frustration comes from this paradox: it's harder than it used to be to share a native application with others, even though we now have sandboxing that makes it significantly safer.
It feels like most of the pieces are in place, MSIX has matured to the point where it's pretty easy to distribute an application that runs in a sandbox with auto-updating. If Microsoft can just make the signing part easier, I think we'll see a much more vibrant community around native app development.
This is why we need WASM and web-first development. If the platforms won't empower us to develop and distribute without levying their tax, we need to take our business back to the open web and spend our money on making it the best platform.
Besides, it's a nightmare to have to develop for every single platform when the web is universal. Platforms should pay the cost of making things work on their devices and operating systems, not tens of thousands of independent developers and engineers paying this price N-many times for each walled garden.
We also need to get Google's claws out of the open web (AMP, standards balkanization, etc.)
> This is why we need WASM and web-first development.
This is only going to make things worse. If everything is a web application, why would users even need access to their hardware or be able to modify their operating system? Boot from a locked bootloader that only loads a FAANG-signed shim OS, that only loads a FAANG-signed web browser. Next, disallow extensions and ad blockers and we are in the brave new web world where a small number of companies more powerful than nation states continuously monitor your behavior for ad impressions.
Sure, now you can install a web browser on your Linux or BSD machine. But once they are in that position, they will make sure that it is a miserable experience without the necessary DRM, disabling or not investing in hardware acceleration, breaking random things on 'unsupported combinations', etc. Moreover, as usual on the web, any competitive player that becomes a threat will be bought by the increasingly large FAANG companies, ending their 'incredible journeys'.
Even though it will probably never happen, what we need is Linux on the desktop, with good native applications, plus a truly open mobile platform.
> This is why we need WASM and web-first development.
This is why we have Linux. Personally, I don't want to live in a web-first world. I don't want everything I do on a computer to have to have internet access. And I certainly don't want the browser to be my OS.
The barrier to entry for integrating with the web ecosystem is way higher than native. I can interop trivially with basically any native app on my machine, meanwhile even basic tasks like 'export my data from this web service' are often difficult or impossible because vendors don't care.
If you are building a custom product from scratch with no need to integrate with users' existing data and services, the barrier to entry is low. But most users have lots of existing data to use and it lives on other services, so you have to integrate... if you can
The problem isn't WASM and DOM, it's that in the web environment because the storage and expressiveness are so limited most of your data lives on a third-party server and now you have to integrate with that.
My Outlook and Thunderbird inboxes are files on my local disk, so if I'm determined enough I can dig through those files with custom software. If I want to process my gmails no amount of time in a hex editor will help me because that data lives on a server somewhere. In practice, Thunderbird acts as my interop tool here, because the developers of that program did the work to keep up with whatever method was offered to pull content out of gmail. In this case there are standardized APIs (network protocols) for talking to mail servers, and lots of software that implements those. What's the equivalent if you want to do this in a webapp? Is there a common API that I can use in my webapp to access a user's gmail, apple mail, outlook mail, etc?
In practice you need to do 1-off integrations with almost every given web service a customer's data might live on, and some of those services don't have any APIs so it's screen-scraping or nothing. It's as if every user's photos were stored in a mix of 50 different custom image file formats.
Code-signing and certificates are still necessary even in a WASM environment. Once all your software moves into the web browser, all your important data lives there too and the risk profile of all the third-party code loaded into your websites is incredibly high, just like it is for Chrome and Firefox extensions.
Right now if you're a Windows dev you get harassed by SmartScreen, but if you're a web dev you get to either reinvent the entire stack from the ground up (to control it) or pull in 20 different third-party dependencies, none of which are code-signed, without any easy methods to ensure integrity other than manually listing specific hashes in a fashion that causes your website to break any time a vendor pushes new code. Don't get me started on software that currently only works if deployed as an extension - being an extension developer is one of the worst development experiences on earth
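For the record, the "manually listing specific hashes" mechanism is Subresource Integrity: you pin each third-party script to a hash in the `integrity` attribute, and the browser refuses to run anything that doesn't match. Computing the value is trivial - which is exactly why it's so brittle, since any new vendor bytes invalidate it:

```python
# Compute a Subresource Integrity (SRI) value for a script body, as used in
# <script src="https://cdn.example/lib.js" integrity="sha384-..."></script>.
# (URL is a made-up example.) If the vendor changes even one byte, the
# browser blocks the script until the page's hash is updated.
import base64
import hashlib

def sri_sha384(content: bytes) -> str:
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

print(sri_sha384(b"console.log('hi');"))
```

So the integrity tooling exists on the web; what's missing is anything resembling publisher identity or revocation for those dependencies.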
Web-first means you don't have an immutable application on your machine anymore; it's always loaded from a server on every use. They can change their terms at any time, and if you granted them access to a directory tree via the filesystem API, they can change the code under your nose and start exfiltrating those files.
Native applications can be forced to run without network access with a single command (e.g. firejail or unshare). The web is always-online.
At least for standard apps, the user can bypass the warning. For drivers, it gets much worse.
On 64 bit versions of Windows, there is no way for me to permanently allow unsigned drivers. Every time I want to install an unsigned driver, I have to reboot my computer while holding down shift, select advanced startup options, and then hit "7" on a USB keyboard which I lug out of the closet because my Bluetooth keyboard won't work. The effect only lasts until the next time I restart.
I understand that the kernel is sensitive, but this is overkill. Please let me install the software I want on my own machine!
This works for some drivers and not others! EDID overrides, for example, require the rigamarole I described. (The command you mentioned does need to be run regardless, or even drivers installed when rebooted into the special mode will actually stop working after the next reboot.)
If anyone understands more about the differences, I'd be very curious to learn more...
Please note that EV code signing certificates are not the magic bullet they first appear. SmartScreen is reputation based. An EV code signing certificate does not grant immediate access. You will have to build reputation regardless.
There is no pay to win solution. I've learnt this the hard way.
You can however place the executable in a Zip archive to bypass SmartScreen. An unzipped file won't bear the mark of the web.
Wait, really? How do you unzip your files? I thought files unzipped with Windows's built-in unzipper still keep the attachment manager metadata that Windows stores?
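For context (as far as I understand it - propagation behavior varies by extraction tool and Windows version): the "mark of the web" is an NTFS alternate data stream named `Zone.Identifier` attached to the downloaded file, holding INI-style text. On Windows you can read it by opening `path + ':Zone.Identifier'` as a pseudo-file, and clear it with PowerShell's `Unblock-File`. The stream's format is simple enough to parse:

```python
# Parse the ZoneId out of a Zone.Identifier stream ("mark of the web").
# ZoneId 3 = Internet zone, which is what triggers the SmartScreen check
# on launch. On Windows the stream is read via something like:
#   open(r"app.exe:Zone.Identifier").read()

def parse_zone_id(stream_text):
    for line in stream_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip().lower() == "zoneid":
            return int(value)
    return None  # no ZoneId -> the file is not marked

print(parse_zone_id("[ZoneTransfer]\nZoneId=3"))  # 3
```

Whether an extractor copies that stream onto the extracted files is exactly the question here - the built-in Explorer unzipper does, while many third-party tools historically did not.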
> “Developers, developers, developers!” was a cry from Steve Ballmer and one of the speeches that defined him as CEO of Microsoft. These infamous words were uttered back in 2006.
The famous YouTube video of it was uploaded in 2006, but my sources say the event in question was from Microsoft's 25th Anniversary Event in September 2000.
I could use a more authoritative source than knowyourmeme, though, so someone else is free to chime in there. There's a newsletter referencing it as early as 2001[0], so that seems to suggest well ahead of 2006.
This racket and Microsoft's hostility to independent developers (and apparent attempt to force them to use their Windows store?) is one of the reasons that I just can't go along with people who think that Microsoft is a "champion of open source" now or benign actor in the business world. They are fundamentally corrupt and cutthroat.
I recently wiped Windows and loaded up Pop! OS. It works great. And I never get a message saying "this software is suspicious" just because it's not in an official Deb package or something. If I add a PPA or download a binary or whatever I do, there are basic permissions required, but it does not automatically say "this is probably dangerous so I stopped it" just because the distro doesn't recognize the software. That is ridiculous.
Eh, honestly, that's what getting an SSL cert used to be like 15 years ago. They can knock it all they want, but really, the process functioned pretty much as expected.
Like a lock on your front door, the purpose is not to prevent unwanted people from ever getting inside (no lock will ever accomplish that), but to both make it take long enough the likelihood of being noticed is high, and to put enough hurdles in place that the attacker looks for a more lucrative target.
This is done through time, capability, and money. It takes time to jump through the steps. This presents risk to an attacker, as it leaves them exposed for much longer and susceptible to being tracked down in various ways as they expose themselves. It takes capability to jump through the steps, requiring the attacker to have compromised the target in multiple different ways (phone system, employee, etc). This increases the complexity of the attack and thus the risk to the attacker. It takes money. This helps the providing company recoup the cost of verification (and if it's anything like web SSL certs, is also about insurance), but money also provides another path to tracking down an attacker, and imposes a cost on them that isn't just their time. It also makes it less feasible to attempt a bunch of these attacks at once, as it's expensive.
People can complain about the inconvenience all they want, but the inconvenience is part of the point, so I doubt you'll see it change all that much.
Funny how Let’s Encrypt is so popular because it doesn’t inconvenience people, and as a result more websites than ever have been secured, to the betterment of society.
That's because the purpose of SSL encryption changed. It used to be primarily to protect payment information and sometimes also used for logins, but it was considered too heavy for all traffic in the past. Now it's used primarily to ensure all traffic is secure, and payment is along for the ride.
In the past you used to be fairly sure a company was somewhat legitimate and responsible if they had an SSL cert, and could supply card information to them without too much worry, as someone had vetted them as a real company (to some degree). These days, that doesn't happen, and whether I'm willing to give a company my credit card has nothing to do with whether they are secured with a cert (which is a bare minimum to use the site). That cert doesn't really imply much anymore. These days, I'll only enter my card into large reputable sites, or sites using a payment service I trust (PayPal, Amazon, Square, etc), as they've presumably done a lot of the vetting that the SSL companies traditionally did.
There is a huge difference of what kind of identity is established: Let's Encrypt tells you basically that you are actually talking to the right server, whereas EV certs for signing binaries tell you which company that actually exists in the real world owns the certificate.
Also, for a lot of sites they've outsourced their payment systems (which was the main reason for SSL certs in the past), so you can place less importance on the SSL cert if the payment gateway is PayPal, or Amazon, or Squarespace, and they are essentially providing the same assurance (and are a responsible party you can contact to reverse charges if you suspect fraud).
At this point, I'm very hesitant to enter any credit card info into a site itself, even if it's secured through an SSL cert. It doesn't really signify what it used to.
And aside from signing certs being a racket, it makes sense for LE to be easier than getting a code signing cert! If the server I land on is a malicious website, I can close the tab. If the app I installed turns out to be malicious odds are good that I have to reformat my machine and restore an old backup, plus change all my security credentials because the malware exfiltrated them.
When I tried to get one for a project I was working on, it was a horrible convoluted hell.
They needed all kinds of things scanned and sent to them. They wanted 2 government IDs, 2 financial documents and 2 non-financial documents. Everything had to be notarized. I had to get my financial documents notarized on a different day than the other documents, and had to have a different notary do those. They would not accept two different notaries that worked in the same office, or for the same organization. (In my case, my local banking branch).
Finally I was able to get everything sent in, and then they asked me to notarize and sign an additional document called the "face-to-face". So I got that in too. All told, it took them over a month to verify the certificate. They sent me the link and key to download it. In the email, they said that I could use IE, Firefox or Chrome. I used Chrome. The certificate was corrupted upon download. I told them, and they said it was because I used Chrome. I forwarded them the email they had sent me that specifically stated Chrome could be used. They refused to own up to it and re-issue, and because it had been 30 days since purchase, I could not get a refund.
It was such a horrible pain that I just gave up on my project and never published it, rather than go through the whole process again. There has got to be an easier way to verify someone is in fact a human.
> There has got to be an easier way to verify someone is in fact a human.
Yeah, there is: digital signatures. Estonia started with its digital signatures back in the early 2000s; a physical signature is practically considered caveman-tier tech there. Just recently the EU created the eIDAS regulation.
I really can't predict how long it'll take for the US to catch up though :/
This is really hurting Microsoft's ecosystem, I'm trying to add the latest version of QuickLook [0] to the winget package manager at the moment but because the msi installer isn't signed my pull request is doomed to failure[1].
If Microsoft want Windows to be an awesome developer platform this kind of friction with the tooling helps no-one.
I always tell users to mark the file as trusted before running it, which is easier to do and less subject to difficult-to-navigate dialogs. I use a message like this:
> To avoid getting security warnings each time you launch the application, right click and select "Properties". Click "Unblock" towards the bottom of the page, and click "OK".
> By this time most users have deleted the .exe already thinking it is a malware, but SmartScreen can be bypassed by clicking on “More info” then “Run anyway”.
What typically happens is people don't even click on more info. They redownload the files and then give up and the files are now collecting bit dust. Source: me.
Also, I think it has been going on for more than a year now.
It might be the best way, but distributing your malware as executable - maybe bundled with some other app - is easy. Plenty of bad actors out there who can’t pull the fancy tricks.
Smartscreen may protect against cases where somebody has for example repackaged WinSCP and distributes it with malware.
We have been bitten by this one multiple times over the years, and it seems to have gotten worse.
Our otherwise-SaaS product requires an installable component on our customers' PCs, from an installation package (MSI and/or EXE) that we generate on the fly on a per-customer basis in order to customize data inside the installer for each tenant. For this reason, the timestamp of the digital signature varies between packages, as does the hash [of the content], and for _months_ after we renew our signing certificate we get support messages about SmartScreen "scary warnings" as well as flags from some AV products - despite the number of downloads and runs for the packages signed with one and the same signature.
As the article mentions, it does not matter if you have had a previous certificate; each renewal (technically a new certificate) starts this reputation process from zero. What's worse, since the signing happens on the [Windows] server as part of the product itself, we really cannot use EV certificates either, as those require physical USB dongles to be attached to the machine doing the signing. So we are left only with the option of using a regular certificate that gets this treatment. Sure, a 10-year certificate would postpone the issue for a long time, but for security purposes we actually want to recycle those signing certificates at one- to two-year intervals, so the problem always resurfaces regularly.
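To illustrate why per-tenant packaging is such a problem for hash-based reputation: two installers that differ only in an embedded customer ID are completely unrelated as far as file hashes go, so every generated package starts from zero. (Illustrative bytes, not a real MSI:)

```python
# Each per-tenant installer gets a unique hash, so file-level reputation
# never accumulates across customers -- only certificate reputation can.
import hashlib

template = b"...installer bits...{TENANT}...more bits..."
pkg_a = template.replace(b"{TENANT}", b"customer-a")
pkg_b = template.replace(b"{TENANT}", b"customer-b")

print(hashlib.sha256(pkg_a).hexdigest() == hashlib.sha256(pkg_b).hexdigest())  # False
```

Which is why the only lever left is certificate reputation - and that resets on every renewal.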
The part that is insane to me is the idea that there is no way as a developer to get a "this product was reviewed and certified clean" label to guarantee a clean installation UX, even after buying a code signing certificate.
SmartScreen seems like a great idea for protecting the end user, but without ways to remediate "reputation" problems with some form of prior review, this makes the software ecosystem shitty especially for individuals and small software outfits.
As is typical for Microsoft, they again are trying to copy Apple, but failing to get the details right, turning it into a disaster. This behavior used to just be limited to their UI, but now it seems to have passed into their business practices.
I suspect this happened because they were trying to be more secure, but all the Microsoft employees have free access to signing certificates that bypass the checks, so they never ran into these problems.
Can someone break down how this compares to what Apple does with Mac?
The basic gist I have is for Apple you have a yearly fee of $100(USD) but that covers all of your apps.
Where Microsoft is per app and looks to be about $100 a year (depending on how far in advance you pay) and then also have another piece on top of it regarding how often its used.
Is there more to what Microsoft is doing or areas where what Apple is doing is shady?
One big difference is that this $100/yr (ish) on the MS side of things doesn't actually bypass SmartScreen on its own. SmartScreen stops showing up once enough people install and use your app (it's a black box). If you want to bypass it out of the gate, you need an EV cert, which costs hundreds more (further down, this article quotes one at $700) and requires more effort on the business side of things.
Interesting that they aren't doing it the same way as Apple, where you pay to be part of the program and they provide all the certificates. Certainly seems cheaper at $100/yr compared to the prices of these certs.
Also why are these certs so much more expensive compared to a normal SSL certificate?
Microsoft provides an option to do exactly that. Microsoft Store will handle all the certificates for you for $100/year and their Store cut is smaller than Apple's if you decide to sell through the Store.
> Also why are these certs so much more expensive compared to a normal SSL certificate?
There is some manual intervention, you usually end up with phone calls and emailing documents to and fro before the certificate is issued.
I last renewed mine in 2017, I'm dreading doing it next year as I don't have a public phone number which always causes problems. Sticking sharpened pencils in my eyes would probably be a more pleasurable experience.
I like to think that these stupid corporations would try harder at solving the problems they create if whenever those problems are mentioned, those corporations didn't get dozens of apologists. Or maybe I'm just frustrated.
Microsoft doesn't need oddly loyal developers defending them, nor the free publicity they provide. Developers affected by stuff like this don't need some guy asking them what they would do better. It's not indie developers' job to fix Microsoft; it's Microsoft's.
I agree in general with what SmartScreen tries to do. There are a lot of non-technical folks clicking on stuff willy-nilly so adding barriers to prevent spyware / virus infections for totally unknown binaries is a good thing.
A "Let's Encrypt" for code signing may be a good idea, but the cost of a certificate is itself a barrier for spyware distributors, so I'm conflicted on that... not sure what the right fix is.
> but the cost of a certificate is itself a barrier for spyware distributors
Is it? If spyware and malware wasn't profitable, people wouldn't take the risks to create and distribute it. An up front cost of a few hundred dollars could be made back with a single successful ransomware attack, and the certificate would last years.
This "smart screen" thing, as well as Google's endless warnings when you try to install an app outside of their store on Android are not for protecting the user; they're there to prevent competition.
The barrier is actually bigger than just money: for an EV certificate you need to prove that you are a registered business. So there is a name and an address behind it, which is kind of a barrier for malware.
Walled gardens' ever-increasing assault on independent developers never stops. Seems Microsoft is following Apple's trend of shitting on indies. My take: folks, develop your apps to be browser-based if you can. If Figma can be a browser app, a lot of apps could be. The ones that need to deal with the OS and files etc. might be non-starters. Take your talents to open platforms.
If hypothetically everyone moved to the web then this very same game would again be played by browser vendors. Some browser APIs only work on secure sites (e.g. service workers, EME) and they could start blacklisting specific certificates the same way.
Want to use web payments API? Nope, sorry, your DV certificate doesn't look trustworthy enough, maybe you should get an EV certificate.
It is a web app. You can run it fully in the browser; the local executable is just an Electron app, IIRC. I don't have it installed on my work machine, but I frequently access and manipulate Figma files from Chrome just fine.
I developed some absolutely free Windows software as a hobby. I specifically didn't want to charge for it because then it's a job. SmartScreen was definitely a problem; end users were absolutely afraid of my software.
Eventually a user who was also software developer offered to sign the binaries for me with his certificate and that's how I've operated since.
Lmao. We should all pitch in $20 and set up a shell company to buy 1 code signing certificate to share. It would definitely pass SmartScreen if a couple hundred apps were signing with it.
I don't think it's such a bad system either, but the certificates themselves should be much cheaper, if not free. I mean, they don't even do anything until a critical mass of users lets the app through smart-screen.
All the open source developers/publishers here should get together and pitch in to get one long lasting certificate to sign all of their respective binaries (of course, really important that they vet each other's code, so has to be open source)
Edit: typo.
AppGet or Chocolatey are already on the system and (I think) could unblock binaries from SmartScreen. AFAIK user-level programs could then run with zero prompts.
Show clear steps with images on the download page that direct them to click "More Info" to run the app. Eventually the app builds up reputation (at least until the cert expires). How effective would this approach be?
You can but you'd need to account for every OS/update combination to make sure you get the UX right. Many apps do something similar for teaching people how to click through the trust dialogs when running the installer when downloading with chrome/ie/edge/firefox/chrome-edge/etc.
Not impossible but it's definitely a hassle and probably non-trivial to get right.
> The price range is wide but a certificate only valid for a year will typically go for about $100.
This is hypocrisy; it smells like Apple-fanboy trolling. MS stimulates competition in code signing certificates, and deploying through their app store is free. And you can bypass SmartScreen right from the warning.
Certificates cost $59.00 at codesigncert[0], not "about $100.00". Established developers with reputation don't need certificates.
Now compare that with Apple's (much more walled) approach: only Apple sells the certificate, at the price of $99.00, even to deploy on their Mac store (where they'll also take a cut), and it is not trivial to bypass their screen.
$59/year only if you pay for 3 years, so actually $177 up front, or $75 for one year. And again, this doesn't actually help, because after X years your app will need to build a reputation again.
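To spell out the arithmetic in this subthread (all figures are the ones commenters quote above, not official prices):

```python
# Quick cost comparison of the code-signing prices mentioned in this thread.
# Every number here comes from the comments above, not from any vendor.

def total_upfront(price_per_year: float, years: int) -> float:
    """Total paid up front for a multi-year certificate commitment."""
    return price_per_year * years

# The "$59/yr" deal only applies when you prepay 3 years:
three_year_deal = total_upfront(59, 3)   # $177 up front
one_year = total_upfront(75, 1)          # $75 for a single year
apple_program = total_upfront(99, 1)     # Apple's flat yearly developer fee

print(three_year_deal, one_year, apple_program)
```

So the headline "$59" certificate actually costs more up front than two years of Apple's program, before reputation is even considered.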
This is ridiculous and as others have said is meant to funnel devs to using the broken Microsoft Store.
How am I nitpicking? I am just highlighting the fact that when I signed my app using Apple-provided certs and notarization, there is no additional warning for the user, while on the MS side, unless I get an EV cert, I will not be able to remove the warning.
Why not add a protective layer to the OS, kinda like AppArmor on Linux? Because of the monopoly status of the Windows platform: instead of making a better product, you invent things to make more money.
Completely agree. Just a rebuild of an MSI the other day turned up a false positive in Windows Defender, and it's still stuck in SmartScreen today despite MSFT fixing the AV definitions.
Look at this from another angle: on macOS it is worse. You can "buy" certificates/approval only from Apple. If Apple doesn't like you, you are screwed!
OTOH: On macOS, you pay $100/yr and you're set -- any apps you notarize will launch with a simple confirmation dialog. On Windows, you can spend hundreds of dollars on a security certificate and still get blocked by SmartScreen and/or antivirus software.
I'm not aware of any situations where Apple has declined to offer a developer certificate to a developer, outside of situations where they are legally barred from doing so (e.g, for developers in countries subject to US economic sanctions, who would probably be unable to obtain a software signing certificate either).
> You are wrong. As described in the OP if you buy EV certificate($70) [0] you are all set.
That's an EV SSL certificate, not an EV code signing certificate, which costs significantly more and still doesn't prevent an application from being blocked by antivirus software.
Sorry for the wrong link. Below is the correct one to an EV Code Signing Certificate offer. For the dumb people who can't find a better price it's just $300. Enjoy!
How long will it be before Google, Apple, and Microsoft start pulling certificates for apps governments deem unacceptable, such as organizing protests or communicating?
While there are some similarities, the key difference is that Apple doesn't impose a "reputation requirement." Signed apps and installers work the same no matter how many other users have run them.
This is clearly a warning, among others, that the bad Microsoft is back.
Using certificates and app signing is OK as long as it is done properly, with as much clarity as possible, and an inexpensive way to get our apps signed.
Back, how? I don't disagree with the gist of this article, but that SmartScreen window has looked like that for many, many years. It's not new, nothing is back. Microsoft was never fantastic.
The only thing that's new about this article is the bit about WinGet, which is so beta that it's hard to assume that its "SmartScreen means malware" behavior will remain like that.
I do agree with it btw, it's just like how Gmail's spam filters make it super hard to self host email. This is monopolistic behavior under the guise of protecting users, plain and simple.
> but that SmartScreen window has looked like that for many, many years
It changed a year or two ago. The "Run Anyways" button used to be on the main dialog. Now the user has to click the "More Info" button to see that option.
That particular dialog has been the same since Windows 10's launch, though it is "adaptive" in that as the reputation slowly builds it will show the Run Anyways button at something like "medium reputation".
Ahh, that explains it. I had a screenshot of the SmartScreen dialog on my download page. After releasing a new version last fall, I noticed the dialog changed to the hidden Run Anyways option. My error, apparently, was including the major version number in the setup file name. When that number rolled over, the reputation reset.
The interesting thing from Microsoft's point of view is if they make it increasingly difficult to install applications then Windows will have little value beyond offering a web browser.
I call BS. This might be hurting some hobbyist projects, but anybody who earns their living publishing software, and gets even a small amount of those earnings from Windows apps, can afford $100/year for a code signing certificate.
There are a lot of people trying really hard to trick users into installing software they don't want. There's big money in it. Authenticode strikes me as an entirely reasonable step to provide more signal to the systems trying to identify and block crapware.
But why even require payment for these certificates, since, according to the article, Microsoft doesn't actually place any value on the certificate itself? You still have to build up reputation separately. In that case, it seems entirely unnecessary to require a paid certificate.
Because it costs money to do the identity verification? It's not like a DV certificate, where the volume is very high and the verification isn't hard to automate.
I paid $400/yr for my employer's first Authenticode certificate back when they were a new thing and you could only get them from Verisign. Now there's a handful of providers and competition has brought the price down, but the volume is lower than DV certs, and verification is harder, so the prices won't go down to zero.
Why do identity verification at all, you could ask? A reasonable question; if all I need to start earning reputation is a private key, the costs could go to zero. But letting the scammers make as many identities as they like at no cost changes the prior on a previously unseen identity. It seems MS isn't granting a very strong prior anyway though, so... my argument certainly is weak here.
> There are a lot of people trying really hard to trick users into installing software they don't want. There's big money in it. Authenticode strikes me as an entirely reasonable step to provide more signal to the systems trying to identify and block crapware.
Maybe that should be up to the machine owners to decide what to install themselves (crazy idea I know).
Yes. I'll accept that that's a lot of money in some parts of the world, but that's less than half a day of time for a skilled Windows developer no matter where they live.
But you don't need to take $100/year from me for the right to not have thugs block access.
> Giving out free code-signing certificates also makes it easier for malware to get legitimate certificates.
Malware exists to make money. Therefore malware authors can easily pay for certificates as a cost of business. Megacorp software fucks people over all the time. It exfiltrates their browsing history, MITMs their secure connections, installs rootkits and backdoors. Please don't pretend that this increases security.
We first tried through GoDaddy who shouldn’t even offer the service to South Africans because they “required” a photograph of a company director holding a government issued photo identity document with physical address included. There is no such type of document here. We offered affidavits, lawyers letters, but they were unbending.
We then tried and managed to succeed through Digicert. Their process involved a few checks including checking local government mandated company registry and using the telephone number from that registry to phone a director to confirm they were aware of the certificate request.
Bad guys can certainly get this done, but it raises the bar very substantially, and once they’ve burned their credibility of the company the certificate is issued against, they have to use a different company.
Not a good plan to distribute malware, then, without also going through the effort of faking a legitimate business.
And it doesn't even have to be fake! You could be Zoom! Or Avast! Or Trend Micro! Or Sony! Or Lenovo!
The only person in this story who doesn't have a business address is me.
My biggest worry is the endless "telemetry", "customer experience feedback", "licensing", "opt-out", "tracing", "quality improvement programs", etc. On and on. There are now a thousand hooks into every data centre by just about every vendor involved. Every one of them is exfiltrating information. Every one of them is a data leak waiting to happen. Every one of them could take control at any time.
Huawei isn't doing anything different. The western governments just don't like it that they're copying them.
The main purpose is a paper trail leading the software back to you. If you aren't malicious, it's a few hours task to arrange.
https://www.microsoft.com/en-us/microsoft-365/business-insig...
With that kind of money they can pretty much bypass any measure an OS manufacturer could reasonably put in place without completely sacrificing usability.
You should see most security measures as the lock on your door. It doesn't take an expert to crack but still stops most from even attempting it. No security measure is 100% effective but it still raises the bar for a successful attack.
You seem to have a bone to pick with MS's choice so I'm genuinely curious in which direction would you go. And keep in mind you have a billion users and a truck load of baggage to work with.
Exactly. That's why this should just stop. Because it hurts without helping.
So they should stop (discontinue) every measure that can be bypassed by an outfit with this kind of money? That doesn't leave many measures, if any. And it lowers the bar for a successful attack to someone with a couple of hours to spare. SmartScreen prompts are just like any HTTP error in a browser. The cert certifies an identity, that's it. It doesn't magically clean any malware inside, that you know the identity, or that you have to trust it.
> it hurts without helping
It helps me verify that the software I use comes from the developer I expected. It may not be a catch-all but I find value in this.
Don't forget, your door lock can be opened in seconds and the person it inconveniences is mostly you, especially if you lock your keys inside. Your argument would be far more convincing if you took no precautions whatsoever because someone with millions to spend can bypass them.
Nobody needs to charge money for me to be able to sign my executables with something like PGP.
> The cert certifies an identity, that's it. It doesn't magically clean any malware inside, that you know the identity, or that you have to trust it.
And yet the message is very clearly "THIS IS UNSAFE" because "WE don't know the guy". It isn't "Don't run"/"Run anyway thanks." It's "DON'T RUN" in BOLD in an eye-catching bright box in a prominent location where buttons go and "(more info)" in body text somewhere else.
Your initial argument was against SmartScreen, not against paying. Now it's that things should be free. A cert costs money because it gives you access to an already established infrastructure, integrated with everything, where you can easily verify the identity being certified.
Business opportunity for you: set up a similar infra for PGP, but free - something that can be integrated with an OS and can instantly show me what a certificate would. Then make sure your infra doesn't become the point of failure. You can also implement a reputation system where a new developer gets a warning, but one that's not intrusive, almost easy to miss... I will gladly use it (and I'm sure OS makers will too, once you prove your solution is at least as solid as PKI) because I want the functionality, not the particular implementation.
> because "WE don't know the guy"
I don't know the guy either, because there's no kind of identification attached to the package. A clear identification and reputation system definitely helps. Both are widely used on the internet today because they're pretty much the only ways anyone has come up with to discourage offenders. I will say it again and again: it raises the bar for a successful attack.
> It isn't "Don't run"/"Run anyway thanks." It's "DON'T RUN"
Which is almost identical to a browser security warning. It stands to reason that protecting your own machine should have an even more prominent message.
If you jump from one line of argumentation to another you'll never make a point. And your "solution" was to suggest that any security measure which can be bypassed by someone with millions of dollars to spend on this should not be implemented at all. Which is not even worthy of discussion.
Second to that, it's that people set the information up so that they'll mislead your verification processes just enough to pass.
Stolen certificates are way down the list from that.
A similar approach is successfully used by scammers to recruit money mules.
If you were deliberately trying to set up a few shells for obfuscation it would not be hard, particularly if you had some means of buying fake IDs.
This is less than a day's worth of work to setup.
Microsoft can recognize that pay-to-play certificates are merely assets owned by developers, not developer identities. Microsoft needs to manage identity relationships with developers in a way that is independent of certificate acquisition and expiry.
The certificates themselves are merely an implementation detail of identity.
With that said, I don't condone the corporate gate-keeping of software development and recommend developers not contribute to any platform that requires indie developers to pay to become a part of the ecosystem. Long live open source.
You could even end up re-using domain-name-signals from existing spam datasets (whois, hosting provider, age etc).
I'll come out and say it's bad... but also much better than the current situation. So as a compromise I'm all for it!
The identity could also be tied to Github accounts, among other options.
I think it's easy for the HN crowd to inadvertently forget what computers / phones / tablets / TVs look like to ordinary people.
I would encourage people to come up with actual use-cases and pro-con scenarios rather than 100% "emotion mind" opinions.
It took me about 15 minutes to get my Android phone infected by malware downloading apps just from Google Play. If Google is failing at such a basic task for "ordinary people", all MS accomplishes is annoying regular developers, making their apps look "suspicious" to the public.
That said, SmartScreen is a useful feature: I do want to know if the program I downloaded was not signed by the developer. What I would like from it, however, is for it to show up on every install with either "This program is signed by Malbolge Inc." or "This program does not have a signature" (in bold, on a bright red background, with a border of big exclamation marks), followed by "Do you wish to install?".

Every installer gets this popup; nobody is spared. For the unsigned case, maybe require two clicks - a checkbox and a button. I always want to know who signed the installer, in case I downloaded a compromised installer signed by a certificate different from what I expect. It also makes the unsigned-installer popup less scary in comparison, though it must be kept visually distinct from the signed-installer popup. For individual portable programs that are not installed, you would also have a "do not ask again" checkbox. Again, this pops up for every program at least once.
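A minimal sketch of the "always prompt, always show the signer" idea above (the function name, severity labels, and click counts are all invented for illustration; this is not how SmartScreen actually behaves):

```python
# Toy model of a universal install prompt: every installer triggers a
# dialog, and the only difference is what it says about the signer.
from typing import Optional


def install_prompt(signer: Optional[str]) -> dict:
    """Return the dialog every installer would get, signed or not."""
    if signer:
        return {
            "message": f"This program is signed by {signer}. Do you wish to install?",
            "severity": "info",
            "clicks_required": 1,   # a single confirm button
        }
    return {
        "message": "This program does not have a signature! Do you wish to install?",
        "severity": "warning",
        "clicks_required": 2,       # checkbox + button, as suggested above
    }


print(install_prompt("Malbolge Inc.")["message"])
print(install_prompt(None)["severity"])
```

The design point is that the signed and unsigned paths differ only in tone and friction, never in whether the user is asked, so nobody learns to treat the prompt as an anomaly.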
IMO that would still be good.
This would help protect against dangerous middlemen, all those massive sites containing thousands of pieces of software, many with added ad- or other badware. If CCleaner (just an random example) could sign their executables for free, then there'd be a very justified suspicion if it isn't signed.
Not to mention, does the existence of a cert really make a piece of (proprietary) software trustworthy?
but if everyone can get their certificate, they can always just sign it with their own...
Nope. I'd even go as far as to say the majority of malware and other crap you don't want on your PC is signed these days.
They can keep SmartScreen in place, but soften the language and make it more obvious that you can still run it if you are sure you got it from a good source.
False. https://en.wikipedia.org/wiki/Identity_fraud
In order to understand this, you really need to start thinking like a criminal. "If I had money, power, maybe guns, and no conscience or care for anyone else, what could I get away with?"
I'm not sure whether that helps. The Dunning-Kruger effect will invariably cause novice users to skip those warnings because they think they "know computers" and "wouldn't fall for scams".
For individuals: some countries have an electronic identity system (eID) that would allow verification of name + address when code signing, while being as secure as or more secure than the USB sticks one gets for an EV certificate plus the EV address/identity verification process. Because there is no standard, this would be a lot of per-country work. And some countries don't have a (good) eID system (or an ID system at all). I only know the technical details of the German one, and that one would work.
For businesses: EV certificates shouldn't be too much of a cost there. Though I am annoyed by the lack of competition driving the prices down in this area. Maybe make it easier for new authorities to enter the market?
That would then let MS worry about only the business logic and other glue using that already trustworthy or at least trust-rated code.
That too, if it's free of charge software, should have ways of being published that make it easier to verify something is safe, and if it's questionable, then engage in more of a process.
Which means it’s not frequently seen, which means it’s flagged, which means it’s not frequently seen, which means...
One simple solution is don’t flag “not frequently seen” with the “interfere with installation” flag. Ditto developer certificates.
Does it open up a potential attack vector? Yes. But it removes an algorithmic, uncompromising, and artificial hurdle for independent developers, which is better for users IMO.
Any way you slice this it's more about profit than it is about security. There is no amount of locking-down of Windows that will prevent my friends and family from figuring out how to completely fuck up the machine anyways and everyone involved knows it.
We were literally just talking about how awesome it would be to dumpster Apple's iOS ecosystem in favor of a .NET/Surface ecosystem, but if its going to be the same enterprise signing certificate hell, then Microsoft can fuck directly off too. I am so tired of this shit. We have spent the last 2 weeks fighting Apple certificate nonsense on behalf of several of our customers.
Apple, Microsoft, and others who would force code signing identities on developers: Kindly fuck off with your nonsense. We would like to get back to writing code and delivering excellent customer experiences. Trust can be built in many other ways. You are not the sole arbiters of trust. I would much more likely follow advice from a green username on HN than I would your smartscreen technology or any ridiculous certificate chain backing it.
They could prioritize the radical concept that the owner of the hardware also takes responsibility for the software that runs on it, and they could make efforts to inform, educate, and empower the owner, rather than wrest power away from them and from independent developers.
They could copy Debian and other orgs that solved many distribution and trust problems decades ago while remaining free for both users and developers, with only volunteer resources, while MS users were still stuck with random .exe's from random websites and shrinkwrapped software as the only options.
And those of us who really ought to know better could stop acting like this is some best-effort genuine desire to solve truly hard problems on behalf of users, rather than the shameless power-grab it actually is.
EV certs should be a one-time payment thing, reasonable price ($100, for example) and require paperwork to link it to a business or individual. Same as the "blue tick" in social apps.
That doesn’t stop everything, but at least it is not the wild west either.
It's quite ironic, given that JavaScript has been practically a native language on Windows since Windows 8. Either it's indeed not possible to use LetsEncrypt for that, or they marketed it wrong to developers.
I'm amazed how MS is often the first one to push out new Desktop technology a decade before everyone else (HTML on the desktop, AJAX) but it's usually others that take full advantage of those on other platforms...
One variation of this scheme is making Microsoft the CA that issues certs for free with some issuance limits.
For the record Google uses self-signed certs for Android apps.
Do what Apple does -- allow me as a user to set a setting that allows unsigned code to run, or allows signed but not recognized code to run, but with a warning that it is signed but not recognized.
They could also fix their warning screen to better explain that it's not necessarily malware.
Decentralise the signing authority so there are multiple different independent sources and they lose editorial control and monopoly pricing.
I would rather Windows didn't become another walled garden.
If you aren't signed by an "authority", every user is told by default automatically that your code is unsafe until you pay money.
It is 100% analogous to thugs walking into your store saying "It would be a real shame if something were to happen to scare people away."
The message is "We Protected You" and "Unsafe". WHY? Because "WE don't recognize" it.
Application signing certificates cost money. Always. And if you're making something for free either out of the goodness of your heart or because you like making things, that money has to come out of your pocket just so the thugs don't stand in front of your door with bats. Nobody should be ok with that. AND FUN FACT: malicious or incompetent actors can and do also pay money.
Then they came for the Windows developers, and I did not speak out, because I wasn't a Windows developer...
I sure wish I could distribute the software I write without paying some rent-collector (at least) $100 for the privilege.
It is all about control. They get to decide whose software gets to run with and whose without annoyances.
Yes and no. I'm making free software for the world, not running a business, and I don't appreciate getting shaken down by a megabillion$ corporation.
My guess is the cost significantly reduces the volume of malware.
Just butting heads doesn't seem to be getting us any results, there are probably better ways to achieve fairness, even if it is not exactly in the form we want (e.g. popup for every program instead of just unsafe ones).
It’ll never happen though because Microsoft, Google, and Apple are doing everything they can to de-emphasize domains and online namespaces so they can become the gatekeepers of all content.
Now we're getting to the point that this feels like a protection racket. It, at least in theory, is possible for individuals to get EV certificates for websites. Worst case scenario, you can get a one man business for the paperwork.
A lot of viruses can hit via either remote code execution, or exploiting a bug when loaded through a data file. Neither of those scenarios is stopped by SmartScreen. At best, it stops someone from clicking "WannaCry.exe".
MSFT is basically doing everything to make you use the Store; this reeked back in the Windows RT days, and it reeks even more now.
Unfortunately, it seems MSFT is incapable of creating a version of Windows that doesn't have live tiles, constantly tracking what applications you run. I switched to Linux years ago, but I realize that most people live in a Windows ecosystem, and that they're subject to the whims of MSFT.
Windows 7 made great strides in addressing software vulnerabilities. And bad guys quickly moved from software vuln exploits to socially engineered attacks. They tricked users into download and running spurious programs (using a variety of techniques - ranging from SEO to scare-ware website that tricked people into thinking that their machines were compromised to running sophisticated ad campaigns). The problem with traditional anti-virus (AV) approach was that by the time the AV analysts could get their hands on a new binary, analyze it, classify it as malicious, write a signature, and distribute it - it was generally too late. The bad guys monetized the latency between when they published a new binary on the internet and when the AV vendors were able to effectively detect it as malware.
SmartScreen tried to flip that dynamic - by using reputation. By volume, the majority of downloads were benign; there was no point in warning users when the likelihood of future infections from such downloads was nearly zero. However, for software that had never been seen before - there was a significant risk associated with it - all the yet-undetected malware resided in that set (depending on the situation and context - the risk of future malware infection could range from 25% to 75%). So, for 'known' software, SmartScreen eliminated the mostly-useless warning (that everyone had grown used to clicking through); for 'unknown' software (which the AV companies still hadn't deemed malicious; yet-undetected malware would be 'unknown'), SmartScreen showed a 'stranger-danger' warning. This was incredibly effective in stemming socially engineered malware attacks. In the overwhelming majority of cases, users did the right thing; they chose not to run the downloaded programs that were later detected as malicious. When SmartScreen was launched, nearly 7% of all downloads were later detected as malware by AV. In a couple of years, the incidence of socially engineered malware had dropped significantly (By a ton! By many orders of magnitude. Most bad actors changed their business model to bundleware - bundling unwanted, non-malicious software with popular downloads). An important factor was that SmartScreen had expansive coverage of (knew about) all executable binaries downloaded from the Internet - in order to mitigate the risk of users getting used to ignoring the warnings if they saw them too often. Most users saw one or two SmartScreen warnings in a year. And, when they saw the warnings, the risk was significant - and users did the right thing in those cases (not run the downloaded program). SmartScreen provided highly effective 0-hour protection against socially engineered malware by helping users make the right trust decisions.
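Those base rates imply the 'stranger-danger' math directly. A quick Bayes sketch - the 7% figure comes from the comment above; the other two numbers are assumptions picked purely for illustration:

```python
# Illustrative back-of-the-envelope Bayes calculation. base_rate comes from
# the comment above; the other numbers are invented assumptions.
base_rate = 0.07             # fraction of all downloads later flagged as malware
p_unknown_given_mal = 0.99   # assume nearly all malware is "never seen before"
p_unknown = 0.10             # assume 10% of downloads have no reputation yet

# P(malware | unknown) via Bayes' rule
p_mal_given_unknown = base_rate * p_unknown_given_mal / p_unknown
print(f"P(malware | unknown) = {p_mal_given_unknown:.0%}")  # 69%

# P(malware | known): the leftover malware spread over the other 90% of downloads
p_mal_given_known = base_rate * (1 - p_unknown_given_mal) / (1 - p_unknown)
print(f"P(malware | known)   = {p_mal_given_known:.3%}")
```

With these (assumed) inputs the 'unknown' bucket lands at ~69% malicious - squarely inside the 25%-75% range quoted above - while the 'known' bucket is below a tenth of a percent, which is why suppressing the warning there costs almost nothing.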
There was some friction - for developers and advanced users (who downloaded esoteric, non-very-commonly-downloaded software more frequently). For developers, reputation came in two forms - either each individual binary that they published could 'acquire' reputation - or, if they had a code signing certificate, that certificate could 'acquire' reputation (and any binary signed with the certificate would inherit it; which was the better option). In either case, if the developer went rogue or published a program that was malicious - it was straightforward to deal with that problem: SmartScreen could instantaneously revoke the reputation for the certificate or the binary. There was a meaningful cost (time, behavior) that the developer incurred in order to get reputation - and it is hard, expensive, and not scalable for bad actors to 'acquire' reputation - and if they did end up behaving badly after acquiring it (e.g. signing malware with code signing certs that had acquired reputation), that reputation would be lost very quickly - really hurting their ROI.
So yes - there was some friction. In many / most cases, new executables and publishers 'acquired' reputation after a short period (typically a few days) and many advanced users understood why they would occasionally see the SmartScreen warning and would make the right action choice. But the benefits to the larger ecosystem were incredibly significant and impactful.
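The mechanism described above - per-binary reputation, cert reputation with inheritance, instant revocation - can be caricatured in a few lines. This is a hypothetical sketch: the set names and verdict strings are invented, and the real service is a server-side black box:

```python
# Hypothetical sketch of a SmartScreen-style reputation gate.
# All names and data are invented for illustration.
KNOWN_GOOD_CERTS = {"CN=Contoso Ltd"}   # certs that have earned reputation
KNOWN_GOOD_HASHES = {"ab12cd34"}        # individual binaries with reputation
REVOKED = set()                         # reputation can be yanked instantly

def smartscreen_verdict(sha256, signer=None):
    """Return 'run', 'warn', or 'block' for a downloaded executable."""
    if sha256 in REVOKED or signer in REVOKED:
        return "block"                  # known-bad: hard stop
    if signer in KNOWN_GOOD_CERTS:
        return "run"                    # cert reputation is inherited by the binary
    if sha256 in KNOWN_GOOD_HASHES:
        return "run"                    # per-binary reputation
    return "warn"                       # unknown binary: stranger danger
```

The revocation property is the interesting part: adding one cert to `REVOKED` flips every binary signed with it to "block" at once, which is the ROI-destroying effect described above.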
It remains difficult, if not impossible, for a brand-new developer to coordinate a launch, or for an open source project to release unsigned binaries, without triggering SmartScreen's rather opaque and user-hostile reputation block. At first glance, the dialog gives the user no indication that it is even possible to run the program. Of course it stops malware. It brings the average Windows user to a dead stop!
Based on the number of regular people I help on a daily basis who download completely legitimate, "esoteric, non-very-commonly-downloaded software" I must say that I do not find your arguments very compelling. This tool could have easily remained in your browser where it belongs.
And who cares if you guys crush 10 or 50 or 100 small developers for every malware distributor that gets stopped, right?
SmartScreen is a bad solution. The underlying issue is the pathetic identity validation industry where $ = reputation. All SmartScreen does is add popular = reputation on top of that. Both suck!
What we need for modern software development is a proper identity validation system that doesn’t cost an arm and a leg and lets us tie the validation to our developer accounts and long lived digital identities.
Code signing is good for the rent seekers charging a fortune to provide terrible service. SmartScreen is an awful black box that you think is good because you worked on it and are privy to the internals.
Maybe it was well intentioned when it started, but now SmartScreen is a non-issue for industrial-sized malware distributors, yet devastating for small, independent developers.
The friction is still there. Have you read the article?
> When SmartScreen was launched, nearly 7% of all downloads were later detected as malware by AV.
What about the other 93%? The vast majority gets unnecessarily blocked!
It's getting to the point where it's the same on Windows. "More info" -> "Run anyway" is becoming a useless annoyance.
I would note that at one point SSL certificates were kind of in the same boat. Users clicked through web warnings as a matter of course. The thing that changed is that we were able to reduce the financial barrier to certs to literally $0. If the same can be done for this nonsense, we will be in much better shape.
I have already made my choice on risk when I chose to download the software from the developer and execute it. A prompt for a first time trust might be the limit of what I'm OK with including, but any stronger warning or making it difficult to proceed is really only reasonable to do if some 3rd party is compromising a choice I have already made.
It's ridiculous that my open source code is hosted on GitHub, the binary is created with an action but I have to pay for a certificate and manually sign it.
Idea being that one could, in theory, download the git repo, check out the relevant commit and verify that this was the source code and this was the resulting binary, and that matches the exe file I just downloaded.
With the current setup GitHub has more control on the resulting bits than I do.
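The verification idea is mechanically simple: hash the downloaded artifact and compare it against a hash published alongside the tagged commit. A sketch - the file path and the source of `published_hash` are hypothetical:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large binaries don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_release(downloaded_exe, published_hash):
    """Hypothetical check: the hash published with the tagged commit must
    match the .exe the user actually downloaded."""
    return sha256_of(downloaded_exe) == published_hash.lower()
```

This is exactly what a reproducible-build pipeline would automate: anyone can rebuild the commit (or fetch the CI artifact) and confirm the bits match, no paid certificate involved.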
Also, current Microsoft signing rules require a hardware dongle to sign the bits (all non-hardware certificates will be deprecated in time).
So, I'm supposed to take the .EXE that GitHub produced on one of their VMs running actions then certify it's legit by signing it with my certificate.
But what am I actually certifying? Well, that to the best of my knowledge this EXE is the build output of this Git commit that triggered the action.
Meanwhile my $100/y Apple Dev subscription was enough to package the app and distribute outside App Store.
At this point I am fairly certain EV certs are nothing but rackets supported by MS.
Spent an afternoon comparing different (very sketchy yet somehow the best in Windows code sign cert land) sites, finally picked one, signed the app and downloaded it on another machine.
Was immediately greeted with Smart screen, and learned I needed to shell out HUNDREDS more to get rid of it.
What a racket. And something Apple do automatically for you (provided you pay for their developer program).
Also, when I got my renewal bill for something like $1500/yr for PhotoStructure, I sent an email to their support asking to continue the low original fee, and they agreed.
Making raw Win32 .exe distribution as user-unfriendly as possible is very likely to be a goal of MS.
I absolutely hate when I install something like Adobe Acrobat and it installs 18 scheduled tasks, a startup job to ensure those tasks exist, a tray icon to monitor its services and all the other crap. No thanks.
I also don't want to have to worry about installing some random app I found on SourceForge that purports to do what I want but might come with the SF equivalent of News Gator side loaded.
Lock it all down in a virtual sandbox by default and, if it needs legitimate access to my files, prompt me to grant it access.
MSIX can be sideloaded by default in every supported version of Windows, in addition to or instead of Store installs (supported versions of Windows include 7, 8, 8.1, and 10 after the Anniversary Update). MSIX supports auto-updating even when sideloaded; all it takes is an HTTPS server with some simple manifests (for those that remember ClickOnce it is reminiscent, but a lot easier/cleaner), or for enterprises a file share will do.
The only caveat is that MSIX requires a code signing certificate for sideloading. (Note that this makes Store install the "cheaper" option at $99/year, and Microsoft handles the package signing for you as part of the approval process.)
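For reference, the "simple manifest" for sideloaded auto-update is an `.appinstaller` XML file along these lines. This is a sketch from memory: the names and URLs are placeholders, and the schema namespace/version should be checked against the current MSIX docs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Placeholder names/URLs; verify the schema against the MSIX docs. -->
<AppInstaller
    xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
    Uri="https://example.com/MyApp.appinstaller"
    Version="1.0.0.0">
  <MainPackage
      Name="ExampleCompany.MyApp"
      Publisher="CN=ExampleCompany"
      Version="1.0.0.0"
      Uri="https://example.com/MyApp.msix" />
  <UpdateSettings>
    <!-- Check the hosted manifest for a newer version on app launch -->
    <OnLaunch HoursBetweenUpdateChecks="24" />
  </UpdateSettings>
</AppInstaller>
```

Serve both files over HTTPS and Windows handles the update check itself, which is the ClickOnce-but-cleaner experience mentioned above.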
My biggest gripe with WinGet as a project (and no, the article is wrong: it's not intended to replace the Store) is that it isn't doing way more to bootstrap MSIX packages instead of "downloading and running EXEs like always before".
Is that true? I thought MSIX sideloading was only enabled by default in the recent Win10 2004 release (better late than never!). https://www.windowslatest.com/2019/08/12/windows-10-20h1-imp...
This is absolutely possible. You can distribute your UWP app as an MSIX package[0], which allows side-loading without having to go through the store. The whole experience is actually quite nice and straightforward, UI-wise.
[0]: https://docs.microsoft.com/en-us/windows/msix/
I really hate worrying what an App like Chrome is doing to my system and I'm reminded every time I launch it because all my Thunderbolt connected monitors black out for a second when it starts up.
It doesn't stop Win32 apps from doing crazy Win32 things (unfortunately), but it's still a step up from MSI/EXE installs in general.
This is a good thing, and it makes MSIX a usable packaging format for pretty much any application under the sun - but nobody should assume a MSIX package is safe just because it’s a MSIX package.
I see current MSIX limitations for Win32 just as a means to incrementally steer developers into it, while not introducing too many breaking changes (as WinRT happened to be).
But still... if your software isn’t common... you’ll trigger the SmartScreen.
Even if your app manages to get past SmartScreen, you'll soon discover that many users run anti-virus products that make the same assumption: an uncommon app is a bad app.
I've ended up contacting a lot of anti-virus companies to whitelist our installers on their end as well.
While each individual process is simpler than notarizing with Apple, on Windows you need to contact many more parties to make your app "just work" for the non-tech-savvy user.
The current code signing situation on Windows is painful. It's especially galling how the code signing requirements apply to modern sandboxed Windows applications as well - an MSIX-packaged application with minimal capabilities is subject to the same requirements as an EXE that can do anything.
The certs should be much cheaper, though, and it would make sense for sandboxed applications to perhaps be easier to sign.
Part of my frustration comes from this paradox: it's harder than it used to be to share a native application with others, even though we now have sandboxing that makes it significantly safer.
It feels like most of the pieces are in place, MSIX has matured to the point where it's pretty easy to distribute an application that runs in a sandbox with auto-updating. If Microsoft can just make the signing part easier, I think we'll see a much more vibrant community around native app development.
This is why we need WASM and web-first development. If the platforms won't empower us to develop and distribute without levying their tax, we need to take our business back to the open web and spend our money on making it the best platform.
Besides, it's a nightmare to have to develop for every single platform when the web is universal. Platforms should pay the cost of making things work on their devices and operating systems, not tens of thousands of independent developers and engineers paying this price N-many times for each walled garden.
We also need to get Google's claws out of the open web (AMP, standards balkanization, etc.)
This is only going to make things worse. If everything is a web application, why would users even need access to their hardware or be able to modify their operating system? Boot from a locked bootloader that only loads a FAANG-signed shim OS, that only loads a FAANG-signed web browser. Next, disallow extensions and ad blockers and we are in the brave new web world where a small number of companies more powerful than nation states continuously monitor your behavior for ad impressions.
Sure, now you can install a web browser on your Linux or BSD machine. But once they are in that position, they will make sure that it will be a miserable experience without the necessary DRM, disabling or not investing in hardware acceleration, breaking random things on 'unsupported combinations', etc. Moreover, as usual on the web, any competitive player that becomes a threat will be bought by the increasingly large FAANG companies, ending their 'incredible journeys'.
Even though it will probably never happen, what we need is Linux on the desktop, with good native applications, plus a truly open mobile platform.
This is why we have Linux. Personally, I don't want to live in a web-first world. I don't want everything I do on a computer to have to have internet access. And I certainly don't want the browser to be my OS.
If you are building a custom product from scratch with no need to integrate with users' existing data and services, the barrier to entry is low. But most users have lots of existing data to use and it lives on other services, so you have to integrate... if you can
You must have different native apps than most users.
In any case, there's nothing special about WASM and DOM that makes it less interoperable than native stuff.
My Outlook and Thunderbird inboxes are files on my local disk, so if I'm determined enough I can dig through those files with custom software. If I want to process my gmails no amount of time in a hex editor will help me because that data lives on a server somewhere. In practice, Thunderbird acts as my interop tool here, because the developers of that program did the work to keep up with whatever method was offered to pull content out of gmail. In this case there are standardized APIs (network protocols) for talking to mail servers, and lots of software that implements those. What's the equivalent if you want to do this in a webapp? Is there a common API that I can use in my webapp to access a user's gmail, apple mail, outlook mail, etc?
In practice you need to do 1-off integrations with almost every given web service a customer's data might live on, and some of those services don't have any APIs so it's screen-scraping or nothing. It's as if every user's photos were stored in a mix of 50 different custom image file formats.
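For the mail case specifically, the standardized protocol Thunderbird leans on is IMAP, and it is directly scriptable. A minimal sketch - the host and credentials are placeholders, and the pure parsing helper is split out from the network code:

```python
import imaplib

def parse_search_response(data):
    """Parse the data half of an IMAP SEARCH reply, e.g. [b'1 2 3'] -> [1, 2, 3]."""
    ids = []
    for chunk in data:
        ids.extend(int(n) for n in chunk.split())
    return ids

def unread_ids(host, user, password, folder="INBOX"):
    # Placeholder host/credentials. Any IMAP provider works here, which is
    # exactly the interop property the comment above is pointing at.
    with imaplib.IMAP4_SSL(host) as conn:
        conn.login(user, password)
        conn.select(folder, readonly=True)
        status, data = conn.search(None, "UNSEEN")
        return parse_search_response(data)
```

The contrast with the web-service world is that this one function works against Gmail, Outlook, or a self-hosted server; there is no equivalent single API for "all webmail providers" from inside a webapp.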
Right now if you're a Windows dev you get harassed by SmartScreen, but if you're a web dev you get to either reinvent the entire stack from the ground up (to control it) or pull in 20 different third-party dependencies, none of which are code-signed, without any easy methods to ensure integrity other than manually listing specific hashes in a fashion that causes your website to break any time a vendor pushes new code. Don't get me started on software that currently only works if deployed as an extension - being an extension developer is one of the worst development experiences on earth
Native applications can be forced to run without network access with a single command (e.g. firejail or unshare). The web is always-online.
Best way to kill all alternative OSes.
On 64 bit versions of Windows, there is no way for me to permanently allow unsigned drivers. Every time I want to install an unsigned driver, I have to reboot my computer while holding down shift, select advanced startup options, and then hit "7" on a USB keyboard which I lug out of the closet because my Bluetooth keyboard won't work. The effect only lasts until the next time I restart.
I understand that the kernel is sensitive, but this is overkill. Please let me install the software I want on my own machine!
And, practically speaking, I need to do it because it's the only way to get certain weird game controllers to work.
bcdedit -set testsigning on and bcdedit -set nointegritychecks on
If anyone understands more about the differences, I'd be very curious to learn more...
There is no pay to win solution. I've learnt this the hard way.
You can however place the executable in a Zip archive to bypass SmartScreen. An unzipped file won't bear the mark of the web.
I've been using 7zip. It does clear the flag. Windows built-in unzipper does not.
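The flag in question is the "mark of the web": an NTFS alternate data stream named `Zone.Identifier` attached to the downloaded file. Its contents are a tiny INI-style blob; a sketch of parsing it (the zone table follows the Windows security-zone convention):

```python
# Zone ids as used by the Windows security-zone model.
ZONES = {0: "Local machine", 1: "Intranet", 2: "Trusted",
         3: "Internet", 4: "Restricted"}

def parse_zone_identifier(text):
    """Parse Zone.Identifier stream contents, e.g.
       '[ZoneTransfer]\nZoneId=3'
    Returns the zone name, or None if no ZoneId line is present."""
    for line in text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "ZoneId":
            return ZONES.get(int(value.strip()), "Unknown")
    return None

# On Windows the stream rides on the file itself and can be read as
#   open(r"app.exe:Zone.Identifier").read()
# "Unblock" in the file's Properties dialog simply deletes this stream;
# archivers that don't propagate it leave the extracted file unmarked.
```

ZoneId 3 ("Internet") is what triggers the SmartScreen check, which is why an extractor that drops the stream bypasses it entirely.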
The famous YouTube video of it was uploaded in 2006, but my sources say the event in question was from Microsoft's 25th Anniversary Event in September 2000.
I could use a more authoritative source than knowyourmeme, though, so someone else is free to chime in there. There's a newsletter referencing it as early as 2001[0], so that seems to suggest well ahead of 2006.
[0]http://www.ntk.net/2001/08/03/
I recently wiped Windows and loaded up Pop! OS. It works great. And I never get a message saying "this software is suspicious" just because it's not in an official Deb package or something. If I add a PPA or download a binary or whatever I do, there are basic permissions required, but it does not automatically say "this is probably dangerous so I stopped it" just because the distro doesn't recognize the software. That is ridiculous.
https://twitter.com/hanspagel/status/1262317935898566658
Like a lock on your front door, the purpose is not to prevent unwanted people from ever getting inside (no lock will ever accomplish that), but to both make it take long enough the likelihood of being noticed is high, and to put enough hurdles in place that the attacker looks for a more lucrative target.
This is done through time, capability, and money. It takes time to jump through the steps. This presents risk to an attacker as it leaves them exposed for much longer, and susceptible to being tracked down in various ways as they expose themselves. It takes capability to jump through the steps, requiring the attacker to have compromised the target in multiple different ways (phone system, employee, etc). This increases the complexity of the attack and thus the risk to the attacker. It takes money. This helps the providing company recoup the cost of verification (and if it's anything like web SSL certs, it is also about insurance), but money also provides another path to tracking down an attacker, and imposes a cost on them that isn't just their time. It also makes it less feasible to attempt a bunch of these attacks at once, as it's expensive.
People can complain about the inconvenience all they want, but the inconvenience is part of the point, so I doubt you'll see it change all that much.
In the past you could be fairly sure a company was somewhat legitimate and responsible if they had an SSL cert, and you could supply card information to them without too much worry, as someone had vetted them as a real company (to some degree). These days, that doesn't happen, and whether I'm willing to give a company my credit card has nothing to do with whether they are secured with a cert (which is a bare minimum to use the site). These days, I'll only enter my card into large reputable sites, or sites using a payment service I trust (PayPal, Amazon, Square, etc), as they've presumably done a lot of the vetting that the SSL companies traditionally did.
At this point, I'm very hesitant to enter any credit card info into a site itself, even if it's secured through an SSL cert. It doesn't really signify what it used to.
They needed all kinds of things scanned and sent to them. They wanted 2 government IDs, 2 financial documents and 2 non-financial documents. Everything had to be notarized. I had to get my financial documents notarized on a different day than the other documents, and had to have a different notary do those. They would not accept two different notaries that worked in the same office, or for the same organization. (In my case, my local banking branch).
Finally I was able to get everything sent in, and then they asked me to notarize and sign an additional document called the "face-to-face". So I get that in. All told, it takes them over a month to verify the certificate. They send me the link and key to download it. In the email they send, they say that I can use IE, Firefox or Chrome. I use Chrome. The certificate is corrupted upon download. I tell them and they say it's because I used Chrome. I forward them the email they had sent me that specifically stated Chrome could be used. They refused to own up to it or re-issue, and because it had been 30 days since purchase, I could not get a refund.
It was such a horrible pain that I just gave up on my project and never published it, rather than go through the whole process again. There has got to be an easier way to verify someone is in fact a human.
Yeah, there is: digital signatures. Estonia started with its digital signatures back in the early 2000s; a physical signature is practically considered caveman-tier tech. Just recently the EU created the eIDAS regulation.
I really can't predict how long it'll take for the US to catch up though :/
If Microsoft want Windows to be an awesome developer platform this kind of friction with the tooling helps no-one.
[0] https://github.com/QL-Win/QuickLook [1] https://github.com/microsoft/winget-pkgs/pull/1241
> To avoid getting security warnings each time you launch the application, right click and select "Properties". Click "Unblock" towards the bottom of the page, and click "OK".
I was working for mid-level corp writing a C# Windows program that would be used by a few hundred people tops.
Even after going through the signing process, there were still false positives from some major anti-virus vendors.
I had to submit and resubmit to multiple vendors to get the program off the false-positives. Some smaller vendors still triggered.
Microsoft Defender was one of those that I had trouble with even after the signing.
So if you write software for a smaller audience you are going to get screwed.
What typically happens is people don't even click on more info. They redownload the files and then give up and the files are now collecting bit dust. Source: me.
Also, I think it has been going on for more than a year now.
The workaround for now:
Right click .exe → Properties → at bottom of the General tab check “Unblock”
Microsoft – same wolf, different clothing.
Smartscreen may protect against cases where somebody has for example repackaged WinSCP and distributes it with malware.
Our otherwise-SaaS product requires an installable component on our customers' PCs, from an installation package (MSI and/or EXE) that we generate on the fly on a per-customer basis in order to customize the data inside the installer for each tenant. For this reason, the timestamp of the digital signature varies between packages, as does the hash [of the content], and for _months_ after we have renewed our signing certificate we get support messages about SmartScreen "scary warnings" as well as from some AV products. This happens regardless of the number of downloads or runs of the package(s) signed with one and the same certificate.
As the article mentions, it does not matter if you have had a previous certificate; each renewal (technically a new certificate) starts this reputation process from zero. What's worse, since the signing happens on the [Windows] server as part of the product itself, we really cannot use EV certificates either, as those require physical USB dongles to be attached to the machine doing the signing.. so we are left only with the option of using a regular certificate that gets this treatment. Sure, a 10-year certificate would postpone the issue for a long time, but for security purposes we actually want to recycle those signing certificates at one- to two-year intervals, so the problem always resurfaces regularly.
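The reputation reset follows directly from how hashing works: any per-tenant byte in the package produces a different digest, so each generated installer looks brand new to a hash-keyed reputation system. A toy illustration - the installer bytes and tenant names are stand-ins:

```python
import hashlib

# Stand-in for the real MSI/EXE bytes.
base_installer = b"MZ...common installer bytes..."

def tenant_build(tenant_id):
    """Stand-in for the real customization step: any per-tenant byte
    changes the package hash, so each build starts with zero reputation."""
    return base_installer + tenant_id.encode()

digests = {t: hashlib.sha256(tenant_build(t)).hexdigest()
           for t in ("acme", "globex")}
# Every tenant gets a unique digest even though 99% of the bytes are shared.
assert len(set(digests.values())) == len(digests)
```

This is why certificate-keyed reputation (which all these packages share) would serve this scenario far better than per-binary reputation - and why the certificate resetting on renewal hurts so much.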
SmartScreen seems like a great idea for protecting the end user, but without ways to remediate "reputation" problems with some form of prior review, this makes the software ecosystem shitty especially for individuals and small software outfits.
I suspect this happened because they were trying to be more secure, but all the Microsoft employees have free access to signing certificates that bypass the checks, so they never ran into these problems.
The basic gist I have is for Apple you have a yearly fee of $100(USD) but that covers all of your apps.
Whereas Microsoft charges per app, which looks to be about $100 a year (depending on how far in advance you pay), and then there's also another piece on top of it regarding how often it's used.
Is there more to what Microsoft is doing or areas where what Apple is doing is shady?
Also why are these certs so much more expensive compared to a normal SSL certificate?
https://docs.microsoft.com/en-us/windows/uwp/publish/account...
There is some manual intervention, you usually end up with phone calls and emailing documents to and fro before the certificate is issued.
I last renewed mine in 2017, I'm dreading doing it next year as I don't have a public phone number which always causes problems. Sticking sharpened pencils in my eyes would probably be a more pleasurable experience.
Microsoft doesn't need some oddly loyal developers defending them nor the free publicity they provide. Developers affected by stuff like this don't need some guy asking them what they would do better. It's not indie developers' job to fix microsoft, it's microsoft's.
Is it? If spyware and malware wasn't profitable, people wouldn't take the risks to create and distribute it. An up front cost of a few hundred dollars could be made back with a single successful ransomware attack, and the certificate would last years.
This "smart screen" thing, as well as Google's endless warnings when you try to install an app outside of their store on Android are not for protecting the user; they're there to prevent competition.
https://www.bleepingcomputer.com/news/software/chrome-and-fi....
Want to use web payments API? Nope, sorry, your DV certificate doesn't look trustworthy enough, maybe you should get an EV certificate.
Eventually a user who was also software developer offered to sign the binaries for me with his certificate and that's how I've operated since.
Seems like a dev w/ a new certificate could post a build of said application to a dev-friendly space to "bootstrap" the process.
Certainly not practical in many scenarios, but that's why I was curious about how many it takes.
Show clear steps with images on the download page that direct them to click "More Info" to run the app. Eventually the app builds up reputation (at least until the cert expires). How effective would this approach be?
Not impossible but it's definitely a hassle and probably non-trivial to get right.
> The price range is wide but a certificate only valid for a year will typically go for about $100.
This is hypocrisy, smells like Apple-fanboy trolling. MS stimulates competition on the code signing certificates and it is free to deploy through their appstore. And you can bypass Smartscreen right from the warning.
Certificates cost $59.00 at codesigncert[0], not "about $100.00". Established developers with reputation don't need certificates.
Now compare that with Apple's (much more walled) approach: only Apple sells the certificate at the price of $99.00, even to deploy on their Mac store (where they'll also take a cut), and it is not trivial to bypass their screen.
[0] https://codesigncert.com/comodo-individual-code-signing
Edit: I don't like MS codesigning implementation, but they are not the worst on this game.
This is ridiculous and as others have said is meant to funnel devs to using the broken Microsoft Store.
Cost me three days.
Edit: possibly also the MLK one and police v journalists one...
I'm not aware of any situations where Apple has declined to offer a developer certificate to a developer, outside of situations where they are legally barred from doing so (e.g, for developers in countries subject to US economic sanctions, who would probably be unable to obtain a software signing certificate either).
You are wrong. As described in the OP, if you buy an EV certificate ($70) [0] you are all set.
>> I'm not aware of any situations where Apple has declined to offer a developer certificate
I can see that. Still there are such cases. Monopoly on the approval process is bad anyway. Even if acting in good faith, mistakes happen.
[0] https://www.ssl2buy.com/comodo-positivessl-ev.php
That's an EV SSL certificate, not an EV code signing certificate, which costs significantly more and still doesn't prevent an application from being blocked by antivirus software.
A code signing certificate is indeed more expensive (around $300). Still, I would rather pay more than have a single provider (Apple).
https://comodosslstore.com/code-signing/comodo-ev-code-signi...
Is it this hard to publish on the Microsoft Store? Is this a play to make the Store more relevant?
Using certificates and app signing is OK as long as it is done properly, with as much clarity as possible, and an inexpensive way to get our apps signed.
The only thing that's new about this article is the bit about WinGet, which is so beta that it's hard to assume that its "SmartScreen means malware" behavior will remain like that.
I do agree with it btw, it's just like how Gmail's spam filters make it super hard to self host email. This is monopolistic behavior under the guise of protecting users, plain and simple.
It changed a year or two ago. The "Run Anyways" button used to be on the main dialog. Now the user has to click the "More Info" button to see that option.
There are a lot of people trying really hard to trick users into installing software they don't want. There's big money in it. Authenticode strikes me as an entirely reasonable step to provide more signal to the systems trying to identify and block crapware.
I paid $400/yr for my employer's first Authenticode certificate back when they were a new thing and you could only get them from Verisign. Now there's a handful of providers and competition has brought the price down, but the volume is lower than DV certs, and verification is harder, so the prices won't go down to zero.
Why do identity verification at all, you could ask? A reasonable question; if all I need to start earning reputation is a private key, the costs could go to zero. But letting the scammers make as many identities as they like at no cost changes the prior on a previously unseen identity. It seems MS isn't granting a very strong prior anyway though, so... my argument certainly is weak here.
Maybe that should be up to the machine owners to decide what to install themselves (crazy idea I know).
BTW I believe a skilled Windows developer can't earn $100 in half a day in some countries, maybe not even in a full day.