Performance Matters (2019)

(hillelwayne.com)

153 points | by azhenley 1246 days ago

18 comments

  • pjkundert 1246 days ago
    Software can be bad in arbitrary ways, and still be useful and not too agonizing.

    But, software that ignores, delays or discards user inputs? Absolutely F*$#ing Unacceptable.

    Unfortunately, most user-interface software (including the ePCR software described in this article) commits this "Unpardonable Sin" of UI software. So, the EMTs, being insulted by the software developer, just don't use it.

    If you write UI software, you have one job. To utterly cherish user input, painstakingly preserve it, and lovingly and promptly provide user output. That's it. Almost anything else is optional, and can be forgiven.

    • greggman3 1246 days ago
      Almost all software that has a login seems to throw away input. It's at the "forgot my password" prompt. I type in my email to log in. I type a password. It doesn't work. I click "forgot my password" and I have to type my email again. On desktop it's not too annoying, but on mobile I find it super frustrating.
    • ajnin 1246 days ago
      Unfortunately Firefox is guilty of that Unpardonable Sin. Often I will open a new window to search for something and start typing my request right away, since the address/search bar should be focused on new windows. If the window takes a little bit of time to appear for some reason, the first few characters will be dropped. Frustrating. Chrome on the other hand does not exhibit this defect, even if I start typing before the window actually appears, my input is preserved.
      • ALittleLight 1246 days ago
        I find the autocomplete in jira constantly doing absolutely preposterous things - e.g. I'll be typing out a query like "project != TheirProject" and, when I've finished typing "TheirProject" and hit enter, jira has just barely managed to raise an autocomplete prompt such that "TheirProject", which is correctly spelled, gets "autocompleted" to something like "!=" making my entire query "project != !=". I have no idea what's happening in this case as autocomplete seems to take a completed correct term and turn it to something that isn't even semantically valid. It happens to me in some variety at least once a day.
      • kzrdude 1246 days ago
        MS Teams. Be in a call with a person and write something long in the chat message box, without sending it yet. It disappears when the call ends, even though it could just be transferred to the message box in the chat view with that person.
      • TeMPOraL 1246 days ago
        Firefox has too many abstraction layers. The other day I managed to break the multi-account containers because I pressed Enter (to confirm opening a site in a different container) faster than the confirmation page finished loading, which yielded a) an empty tab with no site, and b) a confirmation page with no URL in it that did nothing.

        Software should never break because a flesh-and-bone human user is "too fast" for it.

        • minipci1321 1246 days ago
          I used to work on drivers and firmware for consumer electronics remote controls, and the empirical threshold we used at the time, was that an average human cannot press a button two times in a row much faster than with a 200 ms delay. (So, if the driver was getting same key presses spaced by less than that, it could safely assume those come from auto-repeat in the RC.)
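          That heuristic is easy to sketch in Python (the key names, the monotonic-clock plumbing, and the class shape here are illustrative, not the actual firmware):

```python
import time

AUTO_REPEAT_THRESHOLD = 0.200  # seconds; the empirical human double-press limit

class KeyFilter:
    """Classify a repeated key code as auto-repeat when it arrives
    faster than a human could plausibly press the button twice."""

    def __init__(self, threshold=AUTO_REPEAT_THRESHOLD):
        self.threshold = threshold
        self.last_key = None
        self.last_time = None

    def classify(self, key, now=None):
        now = time.monotonic() if now is None else now
        is_repeat = (
            key == self.last_key
            and self.last_time is not None
            and (now - self.last_time) < self.threshold
        )
        self.last_key, self.last_time = key, now
        return "auto-repeat" if is_repeat else "press"
```

          Same key inside the window is treated as the remote's auto-repeat; anything slower, or a different key, counts as a deliberate press.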
          • TeMPOraL 1246 days ago
            200ms sounds surprisingly high. I can imagine it being hard to press a button two times in a row in <200ms with a cold start, but if you already have a controller in your hand and it's even minimally ergonomic, pressing the same button 5 times with average <200ms delay between each should be relatively easy for most kids and adults.

            My guess is, whoever came up with that threshold has never played any console game.

      • throwaway_pdp09 1246 days ago
        So it does. Not if you open a new tab, but it does for a new window (just tried it on Pale Moon, an FX fork). As I never open new windows, I never hit it.

        I suggest filing a bug report.

    • austincheney 1246 days ago
      As a front-end developer my professional experience tells me that in practice the ONLY (cannot stress this enough) thing that matters is developer convenience in the code. Everything else be damned.

      That said if this UI failure was easier for the developers to code, such as no code and download from NPM, then it is most correct even if both the business and end user are both catastrophically harmed immediately.

      • 8ytecoder 1246 days ago
        There were solid good practices from way back when Struts was a thing. Even when a server-side validation fails, you can redirect the user to a page where the form prefills the inputs (except the password). This worked across page refreshes. What happened to all those hard-learned lessons?
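        The core of that pattern is framework-agnostic; a minimal Python sketch of the idea (field names are made up):

```python
SENSITIVE_FIELDS = {"password", "password_confirm"}

def rerender_context(form_data, errors):
    """On a failed server-side validation, build the template context so
    the form comes back prefilled with everything the user typed,
    except sensitive fields, which are deliberately dropped."""
    prefill = {k: v for k, v in form_data.items() if k not in SENSITIVE_FIELDS}
    return {"values": prefill, "errors": errors}
```

        The template then renders each field's `value` from `values`, so a validation round-trip costs the user nothing but the password.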
        • sriku 1246 days ago
          These days you don't even get that on most occasions, even in a "forgot password" flow. You still have to key in your email address or username again pretty much everywhere.
      • saagarjha 1246 days ago
        This heavily depends on field, I believe.
    • johnisgood 1245 days ago
      Make animation optional at least! Just clicking into an input field can cause a system hangup.
  • outworlder 1246 days ago
    This matters more than one would think.

    I once worked on a system that was objectively slow. Some actions would take seconds to complete. It's not like people could refuse to use the system; it was the only way to do their jobs. The public had no choice, either.

    Initially, I didn't think it was such a big deal. Yes, it was a bit slow, but nothing _terrible_, and why did it matter, since there were so many other procedures that took way longer than the software step? It was a government facility, lots of red tape. Surely optimizing some of the paper processes would be more beneficial.

    But we optimized it anyway. Not because the UI was slow (which we didn't even measure directly), but because it was slow for a reason - backend processes were taking time, and resources. If we could optimize, we thought, we would save resources (and also increase the runway until we had to expand again, buy hardware, and the like).

    First round of optimization was completed. Lots of low-hanging fruit (much of it database-related). Testing indicated that the backend would be faster by almost an order of magnitude. Deployed.

    THE SYSTEM BECAME SLOWER.

    How was that possible? Maybe we had missed something. We found more bottlenecks, optimized those. Deployed again.

    Everything is slow again. What's happening?!

    We went to the 'field' - as in, talked to the users. Well, this is what we discovered: we saw the backend working harder - because the users were more productive! Instead of staying after the facilities were closed to the public and then catching up on whatever paperwork they were unable to enter in the system, they could process everything almost as fast as people came in. Which means that they could go home early.

    What we saw as "the system being slow again" was due to bad metrics - we were watching backend load, not UI response times. Because, admittedly, that didn't bother us directly. Not until we talked to the poor souls that had to use the system. Lots of async processes got triggered and queues filled up, but it didn't really matter to the users, as long as the UI said things were submitted and they could move on.

    After that, we optimized the heck out of all the UI interactions we could find.

    • doctor_eval 1246 days ago
      I had a similar experience using my own web app over a satellite link (750ms+ latency) many years ago. What took minutes at the office literally took hours over the air, which I found really surprising. I knew it would take longer but it was much more than 10x longer.

      This was when I first realised that UI delays have a non-linear effect on productivity. The productivity loss from a 10x latency increase is more than 10x. I don’t know how to explain it other than by example, but if you have multiple 1000ms delays you’re more likely to be distracted - to check your email or get a cup of tea or answer a Slack message - than if you’re just ploughing through the work. That was my experience anyway.

      A few years later I saw the problem in a different context (ETL), so I drew up an informal scale from small to long delays using this kind of human centred thought process, and I realised that, for example, a 1 hour delay to process ETL data is effectively identical to a 6-12 hour delay, because the user will set the process off and go do something else productive for the rest of the work day, which had a huge impact on project delivery (of course).

      Once I had seen this in action, I couldn’t unsee it, and I spent the next few years getting rid of a pile of bottlenecks to make the UI performance and feedback loops as fast as possible, for example by refactoring complex operations. My proudest moment was reducing a very complex, 8 hour process down to a 5 minute (perceived) process by doing most of the work in advance, so the data was almost entirely ready before the user needed it. Our users thought the system had broken because it finished so quickly!

      I’m sure there are formal studies about this phenomenon, but it was so obvious once I’d experienced it, and our focus on UI latency made a huge difference to both the quantity and quality of the work we and our customers could get done in a given time period.

    • zwaps 1246 days ago
      As a user forced to use many apps and websites with a terrible user experience, I conclude that you can certainly have well-communicated loading times without me hating the product.

      However, I will always avoid - if I can - using a product where the UI itself is slow, sluggish or bad. This includes many modern websites and also quite a few apps.

      Responsiveness is the key. If I click on something, then something needs to happen. People can deal with a loading bar, but everyone gets terribly confused and/or annoyed by sluggish drop-down menus, buttons that seem to fire only after a couple of seconds, or, my favorite, a scrolling action that is either sluggish or triggers load processes that interrupt it.

      • greggman3 1246 days ago
        I am shocked at how bad Apple is at this. They used to harp on it. It was in their Interface Guidelines in the 80s. Now every time they do a major OS update I get greeted by a bunch of setup screens, like to enable Siri or iCloud etc, and clicking on Ok or Cancel there is often no response for 2-5 seconds. No idea if I did it correctly or anything. I expect that from bad ATMs or gas pumps, but not from Apple.
    • eyelidlessness 1246 days ago
      This is great for two reasons (and probably more):

      1. It highlights that user impact determines success more than resource metrics.

      2. It shows that assumptions creep in even when you think you’ve accounted for them.

      Honorable mention: don’t just prove the abstraction and call it done. It’s got to prove itself in real world scenarios before you can trust its claims!

    • noisy_boy 1246 days ago
    The UI should be the last thing that's slow, relatively speaking. The database could be slow, the network could have some issues, the backend code could be inefficient, and so on. But if the UI is objectively slow, discounting the above things, then that should be addressed ASAP. Because if that continues to be the issue, it doesn't matter how quick the backend/database is - the user just won't see the performance behind the scenes.
  • victorronin 1246 days ago
    What would happen if you replace "performance" in this article with "not contrast enough UI" or "bad autocorrect" or "annoying background music"? Should this article have been "Good autocorrect matters"?

    The problem here isn't performance. The problem here is that the company building this software is so remote from the end user that they don't hear feedback.

    If they knew that performance was the problem and a multimillion-dollar contract with some huge network of hospitals was at risk, I can bet you these performance issues would have been fixed really fast.

    • eyelidlessness 1246 days ago
      It isn’t as simple as that, but I still want to recognize and applaud this response. You’re right, performance isn’t the fundamental issue; distance from user need and user pain is. Unfortunately, even where that distance is short, mitigation can be hard to achieve. Understaffed and underfunded orgs can be wildly aware of, and even obsessed with, shortcomings in their offerings and still fail to deliver improvements. I’ve worked in such orgs; I’ve seen overworked, deeply caring, empathetic, and dedicated engineers trying to move mountains to fix things (and have habitually been one myself).

      More to the point: unfortunately, the software needs of the world, and the ways the world is underserved by software, are competing for resources and organizing talent with organizations that fundamentally don’t serve those needs. And both categories are competing in a limited pool, because software’s capabilities have outpaced the available talent to take advantage of those capabilities.

      Good for my bank account, I guess. But bad for people and the world we (I) inhabit. And bad for me too, probably even more than I know.

      Anyway this is far afield of parent comment’s point, but I felt it was a good place to add a little depth as an engineer who gives a damn but rarely sees the opportunity to apply damns.

      • xupybd 1246 days ago
        I can't agree more. Everyone wants amazing software yesterday. They don't want to spend money to do it. The company that wins the bid tends to over-promise on delivery dates and under-supply the resources to do it.
    • greggman3 1246 days ago
      It certainly is frustrating when there is no way to get your message to the devs. There's a game I play. Sometime in late July it started exhibiting a game breaking bug. AFAIK there is no way to contact the devs. I went through the "proper channels" and silence.

      I remember back when FogBugz was a thing, and Joel claimed that the correct way to provide customer service was to track bugs, have one owner, and only allow the original reporter of the bug to mark it as closed. I'm sure that's not completely feasible, but it's surprising to me that I know of ZERO software developers who follow anything even remotely close to this practice, except for a few open source projects.

      Want to report a bug on Windows? Photoshop? Almost any game ever? Good luck finding out if a dev ever saw your report and that it didn't just get dropped by some underpaid customer service center rep.

      • TeMPOraL 1246 days ago
        > Want to report a bug on Windows? Photoshop? Almost any game ever? Good luck finding out if a dev ever saw your report and that it didn't just get dropped by some underpaid customer service center rep.

        The current industry standard is directing such reports to official "support forums", where users try to help each other and nobody with any relevant expertise is present. After all, why would the crew of a modern and enlightened software project stoop so low as to talk with actual users, where extensive telemetry provides all the information they need?

        /s, but only slightly.

    • TeMPOraL 1246 days ago
      "Bad autocorrect" is a problem (notable examples include macOS helpfully autoincorrecting the names of prescription medicines in some clinics). "Not contrast enough" and "annoying background music" can be worked around. Bad performance can't.
    • appleflaxen 1246 days ago
      end users don't know how to give this feedback

      in the case of EMS, it sounds like they could recognize it, but probably didn't report it back to the company. Why bother? They have used their product; they see it as "good enough".

      But very often, end users get frustrated by things like input latency, and can't express what is making them frustrated in specific terms. So they tell their IT department that "it's slow", and IT goes back and starts hammering on the __ team (networking, server farm, whatever).

      It's remarkably powerful if you can help your users develop the vocabulary to recognize and report what they experience. (It's also flipping hard.)

  • MaxBarraclough 1246 days ago
    A slight nitpick:

    > The ambulance I shadowed had an ePCR. Nobody used it. I talked to the EMTs about this, and they said nobody they knew used it either. Lack of training? No, we all got trained. Crippling bugs? No, it worked fine. Paper was good enough? No, the ePCR was much better than paper PCRs in almost every way. It just had one problem: it was too slow.

    That is a crippling bug. The UI is a soft real-time system, [0] and it's doing such a poor job of meeting its deadlines that the user considers the system to be unusable.

    If you're writing an autopilot system, it's not enough for the system to eventually make the right decision, it must arrive at the right decision before the deadline. Failure to do so would by definition qualify as a bug.

    > Most of us aren’t writing critical software. But this isn’t critical software, either: nobody will suddenly die if it breaks.

    Won't they? If the software corrupts the patient data, someone could die, right? Elsewhere the article essentially says as much:

    > An error might waste valuable time as nurses chase invisible problems or ignore obvious ones. Worst case, it leads to the wrong treatment. In emergency situations these mistakes can be fatal.

    [0] https://en.wikipedia.org/wiki/Real-time_computing#Soft

    edit: I see brundolf's comment already makes some of these points

    • jspash 1246 days ago
      I would argue that the 250ms lag mentioned in the article was not the reason the ePCR system was perceived as being slow.

      Consider the new trend in login forms these days whereby you are forced to enter your username or email, THEN press a button, THEN enter your password, THEN press another button. What used to be simple is no longer simple. Why is it like this? To accommodate the x% of people who seem to get confused in some manner when presented with "too many choices". I forget the reasoning now, and disagree 1000%, but don't want to sidetrack my argument any further.

      Anyhow... compare the original form to an imagined GUI, since we're not presented with the software to make a proper comparison. If you needed to fill out the paper form quickly, you could tick..tick...tick..tick..write a bit.tick...tick..read..write.etc.

      Now with the software, does it use a mouse? Probably not. It's most likely a touch screen, possibly a tablet. So now every choice requires more interaction. More choices. Opening a dropdown? You mean to say the developers have decided to hide important information until you request it? The paper version has everything you ever need to know at a glance. One side of a sheet, no need to even flip it over.

      This is all conjecture, but I experience this type of thing all the time when a software "solution" comes to fix a real-world "problem". It can be done well. Most UK Govt websites are incredibly well-done IMHO. But usually they're not.

      • TeMPOraL 1246 days ago
        > Why is it like this? To accommodate the x% of people who seem to get confused in some manner when presented with "too many choices".

        FWIW, this is done to accommodate SSO (single sign-on), which matters for any software that's going to be used in corporate or governmental environments. You have to submit your login first, because it's used to determine what authentication method and provider to select.

        That said, I hate this flow too, and there must be a better way. It also doesn't excuse products that do not support SSO that still implement such split flow anyway.
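        A rough Python sketch of why the identifier has to come first (the provider table and URLs are invented for illustration):

```python
# Hypothetical mapping from email domain to identity provider
SSO_PROVIDERS = {
    "corp.example": "https://sso.corp.example/saml",
    "agency.gov": "https://login.agency.gov/oidc",
}

def auth_step_for(identifier):
    """First step of a split login flow: the submitted identifier decides
    where authentication happens. SSO users get redirected to their
    identity provider and never type a password into this app at all."""
    domain = identifier.rsplit("@", 1)[-1].lower()
    provider = SSO_PROVIDERS.get(domain)
    if provider:
        return ("redirect", provider)
    return ("password_prompt", None)
```

        Until the app knows the identifier, it can't know whether to show a password field or bounce you to your IdP - which is exactly why the flow is split.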

        • setr 1245 days ago
          Couldn’t you just accept the username and password

          Determine the Auth provider from the username

          Feed both username & password to the provider

          And be done with it?

          I’m not clear why the user needs to be made aware of the SSO setup..

          • detaro 1245 days ago
            No, because part of the point of those systems is that they redirect you to your SSO provider to enter the password so the app can't see it.
      • phkahler 1246 days ago
        What you describe is very common. The best example I can think of is a map app with pinch zoom vs an old one with pan and zoom buttons. People want to interact with data, not fiddle with UI controls, menus, and popups. The best UI looks like no UI.
  • unnouinceput 1246 days ago
    Of course performance always matters. Testing your software under real-world usage also helps you understand its performance.

    Story time: I got hit by performance issues just these past few days. The app I'm developing for one of my clients has to process images from a USB camera. Under my development setup everything is dandy. Works like a charm, images get processed, and when the user hits the on-screen button the image gets stored in the database as part of the entire process. Neat, yeah?

    Well, it turns out my client is using a cheap tablet from last decade (I'm a purist, so this decade will end on 31st Dec 2020) that, due to processing 30 frames per second from the USB camera, has little time left for actual GUI responsiveness. And the app feels sluggish, with a 1-second delay between my tap and the combobox firing up. Turns out, it didn't need to actually process all 30 frames each second. One per second will suffice, hence I've implemented processing only one frame per second. More than enough for the customer's needs, and now the app is also flying on that old hardware.
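    The throttling boils down to something like this (a simplified sketch in Python; the interval and the clock handling are illustrative, not the actual app):

```python
import time

class FrameThrottle:
    """Process at most one camera frame per `interval` seconds and drop
    the rest, leaving CPU time for the GUI on weak hardware."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self.last_processed = float("-inf")  # so the very first frame passes

    def should_process(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_processed >= self.interval:
            self.last_processed = now
            return True
        return False
```

    At 30 fps this keeps roughly 1 frame in 30 and discards the rest before any expensive processing runs.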

    My 2 cents.

    • Const-me 1246 days ago
      > hence I've implemented processing only one frame per second

      That's merely a workaround, not a fix. Next day someone will use an 8K camera on the same slow tablet. Another day someone will run your app in parallel with some other process consuming all CPU cores.

      A fix would be making so that however slow the computer is, processing frames from the camera doesn't affect GUI latency, at least not by much.

      You're probably gonna need multithreading for that. And if that 2010 tablet only has a single CPU core without hyperthreading, you might need to adjust the priority of the camera's thread. But it's all doable.
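      A minimal sketch of that shape in Python (a real app would hook the platform's camera callback and thread-priority APIs; the queue-of-one frame-dropping policy here is one illustrative choice, and the swap in `submit_frame` is not strictly race-free):

```python
import queue
import threading

# The capture callback feeds this; the UI thread never touches frames.
frame_queue = queue.Queue(maxsize=1)  # keep only the freshest frame

def submit_frame(frame):
    """Called from the capture side. If the worker is busy, replace the
    stale queued frame instead of blocking or piling up a backlog."""
    try:
        frame_queue.put_nowait(frame)
    except queue.Full:
        try:
            frame_queue.get_nowait()  # drop the stale frame
        except queue.Empty:
            pass
        frame_queue.put_nowait(frame)

def worker(process, results):
    """Runs on its own (optionally low-priority) thread."""
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel: shut down
            break
        results.append(process(frame))
```

      However slow the processing is, the GUI thread only ever does a non-blocking put, so input latency stays flat.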

      • unnouinceput 1246 days ago
        I have bad news for you, my friend. Everything is a workaround. You work around your family, you work around your kids, and you work around your health.

        You have seat-belts in your car? Is that a fix or a workaround? Because a fix would be a car that doesn't crash at all. But that would not be economically viable.

        You have plastic insulation around your electricity wires to prevent you from getting an electric shock. Is that a fix or a workaround? Because a fix would be to run power lines on direct current at max 12V. But that's not economically viable.

        You have kids going to school and strangers are educating your kids a good portion of their life, molding them sometime against your values. Is that a fix or a workaround? Because a fix would be to have them home-schooled under your eye. But that's not economically viable.

        I can do this all day.

        • abraae 1246 days ago
          > You have plastic insulation around your electricity wires to prevent you from getting an electric shock. Is that a fix or a workaround? Because a fix would be to run power lines on direct current at max 12V. But that's not economically viable.

          Your examples are wanting.

          Wires are also insulated to stop them from touching each other, not just to prevent electrocution. A 12V wire touching another can definitely result in sparks and fire. It can even amputate your finger if you bridge it with your wedding ring.

          In virtually all situations wire needs insulation - it's a feature not a workaround.

          • unnouinceput 1246 days ago
            Did you know that when they first implemented electricity, they just buried the wires, pinned with nails to wood, no insulation? Back then, insulating wires was more expensive than just eating the energy losses from the running current.

            Like I said, it all comes down to economics.

            • mikewarot 1246 days ago
              I'd really like to see evidence of this... if you bury wires directly, the voltage will speed corrosion to the point you won't have wires after a few weeks at best.
        • nullsense 1246 days ago
          >You have seat-belts on your car? Is that a fix or a workaround? Because a fix would be to actually have a car that doesn't crash at all. But would not be economically viable.

          Lmao. I like this perspective. I guess all of life is basically a workaround to not dying.

        • Const-me 1246 days ago
          Multithreaded software is all over the place. The process on my PC implementing this browser tab has 38 threads.

          For an experienced developer using the right tools/architecture, it's not even that expensive to develop.

    • NicoJuicy 1246 days ago
      Are you separating the UI thread from your processing thread?
      • unnouinceput 1246 days ago
        Not on this app. Also, the tablet in question has a single-core processor, so implementing multithreading would actually be even worse. As a detail I left out of the above story, this is one of the first Surface tablets running Windows 10. I mean, it needs W10 because of security-update reasons, but it was not designed to run such a demanding OS. And then comes my app that requires even more processing power from hardware already filled up to the guts. Hence the brakes on demanding too much processing.
  • momokoko 1246 days ago
    I think this is missing the point.

    Did they ask the people who would actually use the software whether they would use it? All the way from design, to implementation, to the team that purchased the software. This is what happens when no one checks with the person who will actually use the software every day.

    You end up with checked boxes and wasted resources.

  • liquidify 1246 days ago
    In my limited software engineering experience, most things are slow because they are built on other things that are slow. Generally people do not build software from the ground up. They simply reuse endlessly until someone high up gets so frustrated that they demand a complete rebuild. Then we start over by downloading a bunch of libraries and the same process begins again.
    • sim_card_map 1246 days ago
      Or because we created languages like Python that are 10x slower than compiled languages, and then write code bases with millions of lines of code in them.
      • oddity 1246 days ago
        I'm not sure this is a useful summary of the problem. I've seen an awful lot of code written in C and C++ that pisses away any and all advantage of using either. In general, I think software devs (and the software market) tend to value convenience over performance until it's too late, but the reasons why are so complicated and different per sector they'd make for a full PhD thesis and then some.
      • hansvm 1246 days ago
        It's 10x slower at a lot of things, but even vanilla cpython is every bit as compiled as something like C#. Python is slow because of an extraordinary amount of runtime dynamism, because it doesn't have a JIT or many optimization passes, and because performance isn't a top concern of the project.
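        You can see that compilation step with the stdlib `dis` module - CPython has already turned the function into bytecode before `dis` prints anything:

```python
import dis

def add(a, b):
    return a + b

# The function object already carries compiled bytecode; dis just decodes it.
ops = [ins.opname for ins in dis.Bytecode(add)]
```

        The opcode names vary a little between CPython versions (e.g. BINARY_ADD became BINARY_OP in 3.11), but the point stands: the slowness isn't "it's interpreted source text", it's what each bytecode has to do at runtime.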
        • charliebreslau 1246 days ago
          Higher abstraction -> fewer bugs. Forget low-level, I beg you.
      • rubatuga 1246 days ago
        PyPy is actually very fast, ~10x faster than standard python.
  • luckystarr 1246 days ago
    I worry most people don't get the point of Knuth's quote "Premature optimization is the root of all evil".

    Here is the full quote: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."

    What most people do is ALSO pass up optimizing that critical 3%, so software gets slower and slower. What he (probably) really meant is: "don't micro-optimize what a compiler can do better; optimize your algorithms and data structures".
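    A toy example of the distinction (illustrative, not from Knuth): the same question answered with a worse and a better algorithm, no micro-tuning involved:

```python
def has_duplicates_quadratic(items):
    # O(n^2): the kind of "fast enough" code that quietly dominates runtime
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): same answer, a better data structure, zero micro-optimization
    seen = set()
    for a in items:
        if a in seen:
            return True
        seen.add(a)
    return False
```

    No compiler will turn the first into the second; that's the 3% that's on us.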

    Some common excuses for bad code:

    - Shuffling data around needlessly? Not a problem. It's fast enough.

    - The algorithm is bad. Not a problem. It's fast enough.

    - This is obviously bad. Not a problem. It already takes minutes, so a few more won't matter.

    - The code is slow. Not a problem. It's only used internally.

  • eyelidlessness 1246 days ago
    One thing I don’t see mentioned here (haven’t looked at the past comments link, but it’s worth surfacing) is that the requirements for many public RFPs, health care being right up there, are written by people who aren’t intimately familiar with software, so success and failure cases tend to refer to the intersection of tech and regulation. They don’t necessarily think about usability except in terms of a11y. Uptime, yes. Human experience interacting with it? Likely only if it resembles workflows that are similar, and painful, for them. This isn’t faulting the people defining these requirements! It’s a call to all the types of HN readership to put yourself more in the position to bring the knowledge that informs your contribution into the requirements process. You might never know that the success of the project you work on depends on your knowledge of color accuracy or time zones or particular patterns of abuse... until you get yourself a seat at the “first things first” table.
  • joe_the_user 1246 days ago
    I'd just mention another scenario where performance matters more than is imagined. It's common for people to reference Rich Sutton's The Bitter Lesson of Machine Learning [1], which essentially says that brute force has always beaten "clever algorithms". I'd outline an alternative view to this. The development of "clever algorithms" in the GOFAI days, or pre-neural-net computer vision days, often didn't follow an approach where the different parts could be combined in a fashion that was performant. It seems like the higher you go in CS theory, the more speed gets abstracted away. And the problems with that might be the real lesson here.

    [1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

  • noisy_boy 1246 days ago
    > It wasn’t even that slow. Something like a quarter-second lag when you opened a dropdown or clicked a button. But it made things so unpleasant that nobody wanted to touch it.

    Considering the number of fields on that form, with that kind of lag, I'd be ditching it too. I mean, most of it is check-boxes; just replicate the damn thing in HTML. Though without knowing more about the setup, I would guess that hardware with terrible input handling is also involved.

  • VBprogrammer 1246 days ago
    I've recently been knee-deep in optimising some old Python multiprocessing code. I threw everything at it, from trying to optimise the SQL queries it made to pre-fetching multiple requests for its child processes. In the end I broke out pyinstrument and found some logging in a loop which was taking up a big chunk of the parent's time. Fixing that gave a 2x performance increase.

    I went back and tested the various changes I'd made to that point and thankfully found that most of them were needed to sustain that performance improvement, but it was a good reminder to break out the profiler first, not last.
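    With just the stdlib, "profiler first" can be as simple as this sketch (pyinstrument works too; this uses cProfile so it runs anywhere, and the helper name is made up):

```python
import cProfile
import io
import pstats

def profile_first(fn, *args, **kwargs):
    """Measure before optimizing: run `fn` under cProfile and return its
    result plus the top entries by cumulative time, so the real bottleneck
    (like logging in a hot loop) shows up instead of the one we guessed at."""
    prof = cProfile.Profile()
    result = prof.runcall(fn, *args, **kwargs)
    out = io.StringIO()
    pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(10)
    return result, out.getvalue()
```

    Ten lines of harness, and the "surprising" hotspot is usually sitting right at the top of the report.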

  • brundolf 1246 days ago
    > But this isn’t “critical software”, either

    I strongly disagree. Anyone on the project would have (hopefully) known what it was being used for and how critical timing is for that task. Heck, the entire reason the project exists is to expedite the task. That translates directly to minimum-latency being a project constraint, so this to me just sounds like a project that failed to meet its constraints.

    If you're an application developer, you're generally going to have a clear idea of what the priorities are for your project and what does and doesn't matter (in terms of performance and otherwise).

    Maybe there's a case here for systems/library programmers erring on the side of performance because they genuinely don't know who will be using their code for what. But even then, if a library isn't fast enough (put differently: if the library's priorities diverge from the application's priorities), the application developer should know that and not use that library for their application.

    This trend of universalist performance-puritanism is exhausting. Performance is one of many priorities that a project may have. Know your target user, set your priorities accordingly, and develop your code in line with those priorities. No single priority gets to be so important across an entire field that everything else takes a back-seat.

  • obviouslynotme 1246 days ago
    I'm old enough to remember Win32 programming in C/C++ because it was the only thing that worked. There were problems though: memory management, a shitty standard library, and a non-existent packaging system. Java came around and gave you something that worked. Memory management? The runtime does it. Want to do X? Here's a library for X. These, combined with academia moving over to Java, killed new C++ development hard. It seemed good at the time. Java sucked, but Widgets and MFC sucked harder. Qt wasn't that popular yet.

    There was only one problem with Java. Even on new machines, you would get noticeable random pauses doing simple things. This never happened in C++ apps even with full MFC bloat. The GC just wasn't that good yet. Programmers would also include FAR more code and libraries because it sped development up. Even today, C++ applications only include libraries if they absolutely have to because of how painful it is.

    All of C++'s disadvantages become advantages when you look at them through the lens of performance. Manual memory management means that even if you are slow, you are consistently slow. You don't get the peaks and valleys that really irritate humans, who are incredibly sensitive to rhythm. Memory leaks are honestly not even a problem unless you have long-running servers, and even then you could just restart. The near-complete lack of standard libraries and the complete lack of a package system greatly reduced the amount of fluff you had. If you wanted something in your software, you had to write it yourself. This led to very simple, non-pretty, static interfaces that just tried to look like Word without the Toolbar of Death. The best optimization has always been to Do Less Stuff.

    It's only going to get worse too. The same companies who think C++ is too complex for their programmers are going to laugh at Rust. People in Java or .NET shops aren't going to move over. "Native" is increasingly becoming a JavaScript space which has all the terrible performance of JavaScript usually combined with the slowness of internet connections. For good and ill, package management is now standard.

    Microsoft Office used to run fast on a 486 with megabytes of RAM. Think about that. Are you writing a more complex or intensive program than Word? I bet that ePCR machine has far higher specs, and the program running on it isn't inherently more complex. It's literally a form filler, yet it goes unused because of how ridiculously bloated even the most basic software has become. In the 90's, you could have built that machine by installing Windows and writing a VB program in a few weeks that hooked up to a printer. It would have been more responsive than the state-of-the-art ePCR that sits unused today.

    • MaxBarraclough 1246 days ago
      > The GC just wasn't that good yet.

      Also last-minute loading and verification of classes. Interpretation overhead and JIT compilation wouldn't help either, especially on a single-core machine. Many aspects of Java make more sense for servers than for desktop UIs. (This might finally be changing, with recent progress on ahead-of-time compilation of Java without requiring an obscure proprietary JVM.)

    • hansvm 1246 days ago
      > Memory leaks are honestly not even a problem unless you have long running servers.

      Not your main point, but having to kill a program or an OS and restart whatever I was working on because some process can't be bothered to give back memory it doesn't need (and that I now do need) is a major pain.

      • ReactiveJelly 1246 days ago
        Luckily, the same programs that leak memory are probably the ones that handle crashes poorly
    • bsder 1246 days ago
      > Memory leaks are honestly not even a problem unless you have long running servers.

      Firefox is a counterexample to this statement.

      Rust appeared because experienced Firefox developers simply couldn't manage C memory manually, C memory with garbage collection, C++ memory manually, or C++ memory with garbage collection.

      • ReactiveJelly 1246 days ago
        Yeah, kinda. Rust is meant to solve memory unsafety, which is an even worse problem than memory leaks, especially when you're running untrusted code.
      • renox 1245 days ago
        Uh? While Rust provides memory safety, it doesn't prevent memory leaks.
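
        This is demonstrable in a few lines: a reference cycle between two `Rc`s compiles and runs in entirely safe Rust, yet neither allocation is ever freed. (A sketch; the `Node` type is invented for illustration.)

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[allow(dead_code)]
struct Node {
    next: Option<Rc<RefCell<Node>>>,
}

// Build a two-node reference cycle and return each node's strong count.
// Each Rc keeps the other alive, so neither destructor ever runs: a
// memory leak, with no `unsafe` anywhere in sight.
fn leaky_cycle() -> (usize, usize) {
    let a = Rc::new(RefCell::new(Node { next: None }));
    let b = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&a)) }));
    a.borrow_mut().next = Some(Rc::clone(&b));
    (Rc::strong_count(&a), Rc::strong_count(&b))
}

fn main() {
    let (ca, cb) = leaky_cycle();
    println!("strong counts: {} {}", ca, cb);
}
```

        `std::mem::forget` and `Box::leak` are likewise safe APIs: leaking is explicitly outside the scope of Rust's safety guarantees, which target use-after-free and data races, not resource exhaustion.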
      • Scarbutt 1246 days ago
        And still, Firefox is less efficient and less secure than Chrome or Safari.
  • JshWright 1246 days ago
    "If 0.1% of PCRs have mistakes that waste an hour of a doctor’s time, that’s 30,000 doctor-hours not spent on other patients."

    There is a 0.0% chance that this happens (in my opinion as a paramedic). If I had to guess, 0.1% is probably the rough order of magnitude of the frequency with which doctors look at the PCR at all. We give a verbal report which covers the important details. I would be hard-pressed to come up with an error that could possibly waste an hour of someone's time.

    "Did that quarter-second lag kill anyone? Was there someone who wouldn’t have died if the ePCR was just a little bit faster, fast enough to be usable? ... It could have saved the person the EMTs couldn’t get to because they lose an hour a week from extra PCR overhead."

    Similarly, no. There is no situation in EMS where 250ms is the difference between life and death. It's not like the tones drop for a call and we say 'gee, I wish I could go help that person, but I still have this chart to write...'.

    Performance absolutely matters (ironically, I'm a software developer for an EMR at my day job), but a little lag in an ePCR isn't going to kill anyone.

    • hxtk 1246 days ago
      > There is no situation in EMS where 250ms is the difference between life and death. It's not like the tones drop for a call and we say 'gee, I wish I could go help that person, but I still have this chart to write...'.

      That wasn't how I read it. I may be incorrect, but my interpretation was that the 250ms lag didn't make the software too slow to be effective, but rather too slow to be user-friendly.

      The idea isn't that it directly killed people by being slow, but rather that it was not adopted because people didn't like using it, and on the paper alternative, mistakes were made that would have been impossible to make on the computer.

      • JshWright 1246 days ago
        I don't have a choice about whether I use an ePCR or not, New York State (and, AFAIK, most other states) mandates it.

        The only place where PCR errors matter is on the witness stand... The PCR is certainly important, but any critical information is conveyed in a number of redundant ways, and I would be very surprised if there's been a single instance of a PCR error resulting in a patient's death.

        PCRs are not treated with a great deal of trust, nor should they be (I say this as someone who has written thousands of them). We're dealing with incomplete and conflicting information, patients that actively lie (or, more charitably, forget) about their medical history, and in the case of critically sick patients (the ones that are at risk of dying in the first place) I am more focused on the acute management of their condition. No one expects a PCR to be accurate in every detail. That doesn't mean it's useless, but it also means the system expects there to be errors and has redundancies in place to account for that.

  • jmnicolas 1246 days ago
    I remember a comment here in a similar discussion about performance.

    The author made such a compelling case for performance that I was half ready to learn ASM to code all my apps ... then he admitted he was using an Electron app because it was more full-featured than the native one ...

  • legulere 1246 days ago
    For reference, this is how fast you should be:

    - animations: produce each frame in under 10 ms
    - reacting to user interactions: under 50 ms

    https://web.dev/rail/#focus-on-the-user

  • kuharich 1246 days ago
    • JshWright 1246 days ago
      Oh man, 2020 has been a long year... I totally forgot this had been posted before and I managed to repeat myself in the comments...