"general computer responsiveness" at this point is 100% on software/OS - QNX for example was perfectly responsive in the 90s on Pentium II class hardware (and you can probably find earlier examples with weaker CPUs like BeOS on early PPC, but these were just the first to come to my mind - someone will probably chime in below with Amiga anecdotes or something).
I refuse to believe at current high end intel/amd levels (i7-9 & ryzen 7-9) & even mid-range that lack of responsiveness is due to the CPU rather than windows/mac.
> I refuse to believe at current high end intel/amd levels (i7-9 & ryzen 7-9) & even mid-range that lack of responsiveness is due to the CPU rather than windows/mac.
You are right, of course, that the fault lies with software. But holding the software constant, the only way to improve responsiveness is to increase your core speed and IPC.
It's not as if the average user can email Microsoft or the Chrome browser team and ask them to make the OS/browser more responsive on their older hardware. But they _can_ go to the store and buy a faster CPU, most of the time.
The situation was probably reversed a few decades ago, when hardware was actually expensive and multi-core was not a thing.
Sure it was. Back then screens had 16+ times fewer pixels (multiply that by 2-8 for text mode), Linux (the kernel) source code was still relatively small, and the games people played had 10 2D levels of a few million pixels each.
I'm not talking about games, I'm talking about general computer-use responsiveness. The difference in computing resources between a Pentium II @ 266 MHz and a modern-day CPU is much bigger than 16x, and even back then Win95 was a lot slower than it should have been (we were saying basically the same thing: "why is this 266 MHz PII not feeling any faster than my old 8 MHz Amiga?").
Again, the "poster boy" for responsive interaction is probably BeOS: the original BeBox used dual PPC 603 CPUs at 66 MHz. If you factor in IPC, clock speed & core count, a modern CPU probably has 1000x the computational power at its disposal, and RAM is also generally 1000x more plentiful (we have as many GBs now as we used to have MBs back then). I'll bet that with GPUs the difference is even bigger.
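A rough back-of-envelope for that 1000x claim (every constant here is an assumption for illustration, not a measurement):

```python
# Crude estimate, not a benchmark; all figures are assumed round numbers.
bebox_ips = 2 * 66e6 * 1      # BeBox: 2 cores x 66 MHz x ~1 instruction/cycle
modern_ips = 8 * 4e9 * 4      # modern desktop: 8 cores x ~4 GHz x ~4 instr/cycle

ratio = modern_ips / bebox_ips
print(f"~{ratio:.0f}x")       # ~970x, i.e. roughly the 1000x ballpark
```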
A single 4k screen @ 60 Hz requires a transfer rate of 3840x2160x3 bytes, 60 times per second, which is ~1.5 GB/s just to copy pixels without any logic applied.
A typical RAM module of around 2000 (DDR-266) could only provide ~2.1 GB/s, leaving almost no headroom to perform any compute alongside a single 4k screen, and simply insufficient to drive two of them.
PC66, from the mid-90s, could only do ~0.5 GB/s.
DDR3, still commonly paired with integrated GPUs, does ~13 GB/s per module.
What I am getting at is that merely copying pixels to a screen became proportionally harder as RAM progressed: modules got ~26x faster since PC66 and ~6x faster since DDR-266, while a screen became ~60x more work since the 256-color days and ~10x more work since 1999-era resolutions.
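The figures above can be reproduced in a few lines (decimal GB, a 64-bit memory bus, and simplified peak numbers; real sustained bandwidth is lower):

```python
GB = 1e9

def screen_rate(w, h, bytes_per_px, hz):
    # bytes/s needed just to copy every pixel of every frame
    return w * h * bytes_per_px * hz

def module_rate(megatransfers):
    # peak bytes/s for a 64-bit (8-byte-wide) module: MT/s x 8
    return megatransfers * 1e6 * 8

print(screen_rate(3840, 2160, 3, 60) / GB)  # ~1.49 -> the ~1.5 GB/s above
print(module_rate(66) / GB)                 # PC66:      ~0.53 GB/s
print(module_rate(266) / GB)                # DDR-266:   ~2.13 GB/s
print(module_rate(1600) / GB)               # DDR3-1600: ~12.8 GB/s
```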
And that is just screen rendering. The same happened to source code, documents, images, everything.
Back then, to write a pixel you did, at most, 3 memory writes. Now you need to write those bits to a bitmap and hand it to the GPU, which combines it with all the rest of the stuff needed to make your screen happen and, hopefully, you'll see something within a couple of screen refresh cycles.
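A toy sketch of the two paths (pure simulation with bytearrays; no real framebuffer, GPU, or compositor involved):

```python
W, H, BPP = 640, 480, 3
vram = bytearray(W * H * BPP)            # stand-in for memory-mapped video RAM

def put_pixel_direct(x, y, rgb):
    # old path: write straight into video memory; the pixel is "on screen" now
    off = (y * W + x) * BPP
    vram[off:off + BPP] = bytes(rgb)

backbuf = bytearray(W * H * BPP)         # the app draws here, not to the screen

def put_pixel_buffered(x, y, rgb):
    off = (y * W + x) * BPP
    backbuf[off:off + BPP] = bytes(rgb)

def composite():
    # modern path: the compositor copies (and would normally blend) whole buffers
    vram[:] = backbuf

put_pixel_buffered(0, 0, (255, 0, 0))    # invisible until...
composite()                              # ...the next composite pass
```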
> but tons of useful software is still bottlenecked by single core speed
It's interesting to see approaches like Apple's where more and more of the computationally heavy work is moved onto what are essentially special-purpose ASICs (I think you can generously expand the definition to even include GPUs). Specific examples include video decoding, graphics, and increasingly ML computation. Once those things become "free," what is left? I'd say mostly just many layers of abstraction atop basic computation.
As more and more software stacks become ergonomic w.r.t. multithreading & multiple execution contexts, single core becomes less of a bottleneck. IMO that's the ultimate solution to the hard physical constraints of Moore's Law. Short of a new type of compute substrate.
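For instance, Python's stdlib makes fanning work out across cores close to a one-liner these days (a sketch; `heavy` is just a placeholder CPU-bound workload):

```python
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    # placeholder for CPU-bound work that used to pin a single core
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:            # one worker per core by default
        results = list(pool.map(heavy, [200_000] * 8))
    print(len(results))                            # 8 results, computed in parallel
```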
One thing that is maybe helpful to consider in the modern era is that Moore's law's meaning has unraveled as we have reached smaller scales (hence the basic arguments over what it even means).
Originally the law related to the transistor density of an IC, but that was strongly correlated with power consumption, clock speed, and a bunch of other metrics. As we reach smaller scales these parameters are no longer tightly coupled. I recall seeing a plot to the effect that power efficiency tapped out at 14 nm. Likewise, clock speed has not been increasing at the original clip for the better part of a decade (it went up ~50% in 10 years, which in any other area of engineering would be astounding, but I could sure use a 32 GHz processor).
Anyway, having trade-offs makes things more interesting, and perhaps we are going to see an era of more cleverness in chip architecture soon.
Moore's Law, strictly speaking, no longer holds. It was a very specific claim that we have not managed to achieve for a while. People often use the phrase "Moore's Law" just to refer to the continuing shrinkage of transistors, though, which I think is what you mean.
5 nm node products are being released to consumers this year, so we're still on track. I think at this point the next node is always questionable, because it takes a pretty big breakthrough in manufacturing techniques, resist chemistry, and node design to shave off another nanometer.