Websites don't require JavaScript; what really needs JavaScript are single-page web apps with anti-patterns like infinite scrolling. You get two things in return: a "super duper" fast web and a more secure web browser. Amazon, for example, supports usage without JavaScript very well. Stack Overflow is a different experience: things like the preview and the syntax highlighting don't work. The highlighting could be added with server-side code, but that costs some CPU time - and it's their CPU time, not yours. There is one tiny feature I would appreciate in HTML engines, a "copy link to location" that doesn't need JavaScript - but there is the usable hierarchical address bar at the top (which Google tries to hide) which already serves this purpose.
I'm not against JavaScript. JavaScript is just a tool, but I ask whether good websites require it. Hacker News uses 134 lines of JavaScript, which is already nearly nothing. Can you imagine using Hacker News without JavaScript?
My web browser (WebKitGtk) provides a permission panel for every website:
* advertisements
* notifications
* password
* location
* microphone
* webcam
* media
A first step would be adding JavaScript there, too. Maybe some nasty cookie dialogs would disappear as a consequence. But that's not a loss, is it?
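As a sketch of the trade-off mentioned above - highlighting done on the server's CPU time so the client needs no JavaScript - here is a toy keyword highlighter. The keyword list and the `kw` class name are made up for illustration; a real site would use a proper highlighting library.

```javascript
// Toy server-side syntax highlighter: the server spends its own CPU
// time wrapping keywords in <span> tags, and the client needs no
// JavaScript to see the result.

function escapeHtml(src) {
  return src
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

function highlight(src) {
  const keywords = ["function", "return", "const", "if", "else"];
  const pattern = new RegExp(`\\b(${keywords.join("|")})\\b`, "g");
  // Escape first, then wrap keywords; escaping never produces these words.
  return escapeHtml(src).replace(pattern, '<span class="kw">$1</span>');
}

const html = highlight("function add(a, b) { return a + b; }");
console.log(html);
```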
I'm using uBlock Origin more or less in nightmare mode (https://github.com/gorhill/uBlock/wiki/Blocking-mode:-nightm...): all 3rd party stuff blocked, inline and 1st party scripts blocked. Some websites work out of the box and I love them, but too many still want me to enable 1st party scripts and 3rd party resources to at least display correctly even though most of the content I'm consuming is text.
This is also one of the reasons I'm hoping more people write in Gemini, because it's just text.
> Can you imagine using Hackernews without JavaScript?
It definitely works; only a very few small features need JavaScript. The most important for me is the ability to collapse sub-threads, which is mandatory when reading threads with lots of comments.
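For what it's worth, collapsing sub-threads is one feature that can work without client-side JavaScript: HTML's native `<details>`/`<summary>` elements toggle open and closed on their own. A minimal server-side rendering sketch - the `{author, text, replies}` shape is an assumption, and HTML escaping is omitted for brevity:

```javascript
// Render a comment tree as nested <details> elements. The browser's
// built-in <details> toggle collapses each sub-thread without any
// client-side JavaScript.

function renderThread(comment) {
  const replies = (comment.replies || []).map(renderThread).join("");
  return (
    `<details open><summary>${comment.author}: ${comment.text}</summary>` +
    replies +
    `</details>`
  );
}

const thread = {
  author: "alice", text: "root comment",
  replies: [
    { author: "bob", text: "first reply", replies: [] },
    { author: "carol", text: "second reply",
      replies: [{ author: "alice", text: "nested", replies: [] }] },
  ],
};

console.log(renderThread(thread));
```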
Hacker News would not really work in Gemini, though, as the limitations on URL length are too strict and the only way to send data is by appending it to the URL.
I agree, but HN isn't really a "website", it's closer to an application: you don't really have documents. The ideal interface for HN's functionality is Usenet, or maybe mailing lists: you have many people exchanging messages about specific subjects, quoting and replying to each other, all in very hierarchical threads. It's only because HTTP is so widespread that HN uses it, but if it weren't, I would bet we wouldn't even be talking about JavaScript.
It‘s simply like having an automatic transmission in an ICE car.
The motor type is an implementation detail the user does not want to care about.
I want to press the pedal and the car accelerates. What happens in the background does not concern me.
Same for websites.
If I see a list that is larger than the height of my screen, I want to be able to keep scrolling until I decide to stop.
I do not care that the creators of the website have structured their database so that the current DB would return 573738 entries.
I also do not care that the way the developers decided to present the application to me (via a browser) means that every row entry is a HTML element and showing 573738 HTML nodes at once would slow down the browser.
I just want to keep scrolling (and will stop after a few hundred rows anyways).
Having these kinds of ivory-tower fringe opinions like „infinite scroll bad" is gate-keeping by middle-aged backend devs who panic at the speed and impact of the frontend world, plain and simple.
I wouldn't call it an antipattern either - it literally does exactly what the programmer wants, but it's definitely a dark pattern.
Most of us humans have an urge to finish things, but infinite scroll by definition never finishes. This increases user engagement by exploiting their psychology, which is basically the definition of a dark pattern.
There is an argument to be made that it also increases usability, so its inception probably wasn't malicious. But it's definitely another product of today's addiction-driven design philosophy if you just look at the psychological effect it has on users.
Imagine an email program that, knowing how many total items there are in the inbox, gives you finite scroll! Even if it only streams metadata for nearby items, so if you suddenly drag the bar it has to wait for a network round-trip to show content, one of the problems with infinite scroll is that the scrollbar no longer shows a reliable absolute position.
A better case for infinite scrolling would be chat history, where you'd want an entirely separate UI widget for "jump to date", bidirectional infinite scroll that unloads distant content, and a way to grab permalinks. Missing any one of those features - or any guarantee that messages will never be re-ordered for marketing purposes - infinite scrolling becomes a hindrance.
A paginated inbox can be extremely useful. Pagination gives you a "You are here" context: you are on page 3 of 20. That context is great in use cases such as "I need to review everything in this inbox". If you just finished page 3 of 20, you know you are roughly 15% done. If for some reason you are interrupted and need to close the app and come back, you know that you can probably start right where you left off, on page 4 of 20.
Finite scroll is a compromise where you get some amount of "you are here" context from the scrollbar thumb's size and position. If you are interrupted, you can generally visually guess about where you left off, maybe.
Infinite scroll is a terrible tool for an inbox because you no longer have such context. Scroll too far and the scrollbar thumb changes size because new items came into view. Most infinite scrolls entirely hide the scrollbar thumb precisely because it is so useless. Try to resume where you were in your inbox after an interruption and you have no idea how far to scroll down, and you can't just use the scrollbar thumb to hit an approximate spot. You have no context for where you are; you are lost in an infinitely scrolling maze of items, all alike.
Even if infinite scroll weren't a dark pattern (and it is; there are enough behavioral psychology studies now showing that infinite scroll feeds addictive behaviors), it is a terrible tool to work with if you are trying to get stuff done. It has none of the context of "you are here" and "here's how you can come back to where you were if you need to leave". Both of those things contribute to why it is such a dark pattern, and both are strong reasons why it is an anti-pattern anywhere you expect people to get stuff done. Pagination is great for getting work done and infinite scroll is awful; finite scroll is an alright compromise between the two for some apps.
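The "you are here" arithmetic described above is trivial to provide when the total is known - a small sketch, where the page size and totals are illustration values:

```javascript
// "Page 3 of 20" context: roughly 15% reviewed, and resuming after an
// interruption is just "go to page 4".

function paginationContext(page, pageSize, totalItems) {
  const totalPages = Math.ceil(totalItems / pageSize);
  const percentDone = Math.round((page / totalPages) * 100);
  return { page, totalPages, percentDone, resumeAt: page + 1 };
}

const ctx = paginationContext(3, 50, 1000); // 1000 items, 50 per page
console.log(`page ${ctx.page} of ${ctx.totalPages}, ~${ctx.percentDone}% done`);
```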
Imagine an email program that isn't built with web technology. It could actually display all the items in your mailbox, with a normal and usable scroll bar, without "infinite scroll". I distinctly remember applications being able to display lists in the before-times.
"Dark pattern" would probably be a better name. Infinite scroll works, you're right - for example in every file browser or mail program, which shows you the visible portion on screen while scrolling through all items.
On the web, infinite scroll is often used to load and reload stuff, and you often cannot link to an item directly.
What form of book do you prefer to read? One with pages, or a scroll that leaves you confused at all times about what part of the thing you are on or have read?
In fact I do prefer to read books as a large seamless HTML file rather than a paginated PDF file that has cuts (new pages) at arbitrary positions.
Now, of course the ideal way to read a book is probably a navigation/outline sidebar on the left, and then the content of the current chapter in the main pane.
So you have no problem with a scrollbar that jumps to the next chapter when you shift it by a couple of pixels, then?
Of course, why would you have a problem with that; the only navigation one should perform while reading is changing chapters. Who wants to go to arbitrary paragraphs?
I'm 100% on board with this line of thinking, but the edge case that the business keeps complaining about is being able to push async events to the client.
How would we accomplish this sort of UX without javascript?
I think the argument is that while a good amount of my daily web browser usage might like that feature (I do in fact use the browser as an application sandbox), a vanishingly small number of the separate websites I visit on any given day (which are all one-off pieces of content I am accessing) need that functionality, and I can whitelist them by turning JS on for them, as they are pretty much the same few websites every day.
I believe you can utilize partial HTTP response streaming that never stops, and there is even a way to replace HTML fragments. I'm sure I've seen this, but I'm having no luck finding anything about it.
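One old mechanism matching that description is HTTP server push via `multipart/x-mixed-replace`, where the server holds a single response open and each new body part replaces the previously rendered one - no client-side JavaScript involved. Modern browser support for it in top-level documents is limited, so treat this as a sketch of the wire format rather than a recommendation; the boundary string is arbitrary.

```javascript
// Sketch of the multipart/x-mixed-replace wire format: the server keeps
// one response open, and each new part replaces the previously rendered
// one, so the page updates without client-side JavaScript.

const BOUNDARY = "frame";

function headers() {
  return `Content-Type: multipart/x-mixed-replace; boundary=${BOUNDARY}\r\n\r\n`;
}

function part(html) {
  return `--${BOUNDARY}\r\nContent-Type: text/html\r\n\r\n${html}\r\n`;
}

// A real server would write these chunks over time, one per event.
const stream =
  headers() + part("<p>0 new messages</p>") + part("<p>1 new message</p>");
console.log(stream);
```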
That ... is an interesting question, but not really a problem for someone who wants to browse the web without JavaScript - who is almost the only person whose choice really matters when it comes to user security.
Some sites I've accidentally browsed without NoScript make me question how people access sites with JS enabled at all. There are some shockers out there.
For this edge case I suggest handling JavaScript like webcam access: if a website wants to use it, it can, but it needs to convince the user. Then we have an agreement between both sides :)
I myself foster some weird JavaScript to allow hardware access. But I think business itself should not be the driving force for technical development and society.
I need a WYSIWYG editor, a very simple one for forum use. And some sort of Ajax or Pjax function.
I think that covers 90% of my needs, if we could get that without enabling JS. Some functions could move back to the server at the expense of more CPU cycles.
Although pushing more features to HTML isn't exactly a great idea either.
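The "move functions back to the server" idea above can be sketched as a plain form round trip: the browser submits urlencoded data via `<form method="post">` and the server parses it and re-renders the whole page, no client script needed. A minimal sketch - the `comment` field name is an assumption, and output escaping is omitted for brevity:

```javascript
// No-JS round trip: a plain HTML form submits urlencoded data, the
// server parses it and re-renders the full page.

function handlePost(body) {
  // URLSearchParams handles application/x-www-form-urlencoded,
  // including "+" decoding to a space.
  const fields = new URLSearchParams(body);
  const comment = fields.get("comment") || "";
  // Real code would HTML-escape `comment` before interpolating it.
  return (
    `<html><body><p>You wrote: ${comment}</p>` +
    `<form method="post"><textarea name="comment"></textarea>` +
    `<button>Post</button></form></body></html>`
  );
}

console.log(handlePost("comment=hello+world"));
```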
What I find most interesting as a JavaScript developer is that almost no open source projects will touch the stupidity of giant SPA frameworks. Seriously, if you are writing software on your own time why burn that time away digging in the trash?
On the contrary JavaScript developers that don’t contribute to open source cannot write two lines of code without some disgustingly bloated framework and a million dependencies to do half their job for them. You can’t even get hired without completely caving into the stupidity. It feels like cult membership lighting money on fire.
Yeah dude this overgeneralisation is not cool.
I'm a JS dev, and you're insinuating that since I don't write FOSS, I am a brainless monkey who glues frameworks together. Classic elitism, and it shows your hand as a gatekeeper. Not cool.
Not exactly. I am insinuating you are a brainless monkey if you need a ginormous SPA framework to build a simple web page with a couple click events.
> Classic elitism, and it shows your hand as a gatekeeper. Not cool.
Cry me a river. I am not being elitist by suggesting you should learn to do your job. I really don't care how uncool it sounds or how many tears you shed.
I am not sure what you or the downvote brigade want here. Do you want me to feel sorry for you?
This is actually really interesting. Bravo to the Edge team.
And if I remember correctly, writable memory pages were the main reason why iOS banned browsers like Firefox from embedding their own rendering engines.
Perhaps this kind of approach could address such concerns and enable other rendering engines.
Ironically, in the early days Apple was redirecting developers to the web to build apps for the iPhone. But then it seems they discovered a money minting model.
This reminded me of my experience handling the frontend side of Varnish using a RISC-V emulator. The most important thing is that the emulator is quick to bring up and quick to tear down. Almost zero syscall (10ns) and vmcall (4ns) overhead. So, if nobody is doing any heavy computation, the most important things are getting the base overheads down. For example, the RISC-V emulator could handle the full frontend pipeline side in less than 1 microsecond. That's going to be hard to beat. I wonder if the same could apply to a standard website for a simpler kind of WebAssembly emulator instead of JavaScript? And by that, I mean replacing JavaScript with WebAssembly completely, and just make all the tooling necessary to make it nice and easy.
I'm surprised at how few regressions there were in the tests they ran, given they completely disabled the JIT. This could be very useful as a default mode for websites, with the JIT able to be turned on for trusted websites if the user would like more performance.
They did note that JavaScript benchmark scores dropped by up to 58%, while noting that users generally won't notice the difference.
I would be interested to see how this affects the performance of websites that make use of complex JavaScript for things like charting/visualization (like the D3.js demos, or online formulae graphing tools), audio waveform rendering/processing, games, and other complex uses of JavaScript (including things like vue, react, bootstrap or other JavaScript UI frameworks).
Because "defaults are forever" or something like that.
Most people won't alter those settings, so whatever is "trusted by default" will run faster. The average user will just note that some sites are very fast, while others are very slow.
Basically what you see for instant messengers on Android: phones usually come with battery savings exceptions for WhatsApp, so when people install Signal, it looks bad for not delivering messages as reliably as WhatsApp.
Everything’s a trade off, but this one is worth trying imho.
There are lots of things that could be done to even the playing field, e.g. require all browsers to come "out of the box" with zero sites trusted.
This would incentivise regular sites not to use heavy JS, if they knew it wouldn't be JITed by default.
And if you use, say, Salesforce, by all means trust the site. But that tiny bit of friction is a good thing imho, analogous to running 'chmod +x' on Unix.
In general, I think it's time to say that browsers should have a more refined security model, and that letting every darn site on the internet run code on your computer is maybe not a great idea.
I don't think it's quite so dire. Remember that big sites like Facebook at least used to display warnings in the developer console, akin to "DO NOT PASTE THINGS IN HERE YOU RECEIVED FROM STRANGERS"?
There is a sizeable subset of people who are curious and do care, and who would be eager to try that "one weird trick that speeds up <hot web property du jour> 200%" spreading through their Telegram group.
But for the most part, non-technologically inclined people seem to have a Hindu-cow-like frustration tolerance when it comes to technology. If Windows takes twelve minutes to boot and your browser's viewport has shrunk to the size of a postage stamp due to toolbars, then that's just the way it is.
I would wager that for them, site Y running half as fast as site X matters a lot less than you think.
Or maybe instead of trusted websites they'll move to a model where you need digital signatures on your JavaScript code to enable high-performance mode, just like you need to code sign Windows applications to avoid scary warnings about what they might do to your computer.
So… JIT doesn’t actually improve performance. Nor does virtual DOM. What next? It seems like web browsers and frameworks are full of this cargo cult magic that doesn’t actually work, but must be used because everything else is built on top of it. When can we call this interpreted DOM experiment over and go back to compiled programs on a window API? It seems we’re headed there anyway with WebASM and WebGL, but every program has to carry its own UI services, kind of like video games in DOS, which is also right about where performance is today too.
I have enabled this setting yesterday and I am unable to tell any difference when browsing my normal websites.
I wonder why JIT was put in in the first place, if removing it has little to no end-user impact. JIT sounds like a great deal of doing nothing except creating 50% of all bugs.
Is this a case where JIT was meaningful when first introduced, but over time advances in other areas have made it obsolete/redundant?
Web pages where compilation costs and benefits are short may be one thing. But what about cached code? And what about real applications running in the browser?
We have come a long road to establish transparent protocols to run stuff in our browsers (HTML, JavaScript, CSS, JSON, the HTTP protocol), yet we are moving towards compiled binary code, towards a black-box scheme.
Users will have less power over the content in their browsers with compiled code, and something like ad blockers would become challenging to implement again. I'm not really a big fan of this trend.
For example Google Docs is being rewritten to use canvas and who knows what it actually does behind the scenes.
I wonder if this change will make it much harder to exploit sandbox escape vulnerabilities, which IIUC are in the main browser process, not the renderer process. What is the impact of those vulnerabilities compared to ones in the renderer process?
If your JavaScript is no longer getting compiled down to machine code, any kind of speculative execution attack is going to be soooooooo much harder to pull off that I think disabling the JIT is probably the most effective Spectre-like mitigation a browser could do.
They did mention it in the long description. It's off for now, but they are planning on turning it on.
I'd guess it's safer than JIT because the translation to assembly is simple, or can be simple. It's not trying to do the complicated process of analyzing a dynamically typed language and applying different ways of optimizing.
I bet they'll end up using a pure bytecode interpreter for WebAssembly as well, one that just runs the operations one by one instead of converting them to machine code. It'd be the same "slower but safer" trade-off that the mode uses for regular JavaScript.
Your problem is with pointless, dark-pattern-ridden, attention-grabbing social media using that tool (and rightfully so!).
Now imagine a useful application, like an email program.
Infinite scroll is useful there.
Imagine having a paginated inbox!
The pagination is a nice break and lets me know I am done, everything on back pages has been previously processed.
That algorithm breaks on infinite scroll.
Kind of like the Rust book: https://doc.rust-lang.org/book/
Edit: here it is https://news.ycombinator.com/item?id=16319248
Facebook buying two of their competitors resulting in crickets.
https://knowyourmeme.com/memes/oopsie-woopsie
For something we do have networking protocols.
I know, I know! "Super Duper Secure Mode For Professional Enterprise Datacenter Unlimited Seats"
Disabling CPU's speculative execution on untrusted code (browser) too
> Disables the JIT and enables new security mitigations to provide a more secure browsing experience - Windows