If one is already well-versed in multiple areas of software technology (especially development and database administration), this is an excellent book. It surveys the landscape of software data storage technologies and talks, at a modest level of depth, about some of the theory behind things like quorums in distributed database systems, resiliency and redundancy strategies for surviving data loss, and a host of other interesting topics.
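To make the quorum idea concrete: the core of it is a simple overlap condition. This is a hedged sketch (function and parameter names are mine, not the book's): with N replicas, if writes wait for W acknowledgements and reads consult R replicas, reads are guaranteed to see the latest write whenever R + W > N.

```python
# Sketch of the standard quorum-overlap condition for a replicated store.
# With n_replicas copies of the data, a write acknowledged by write_acks
# nodes and a read that consults read_replicas nodes must intersect in at
# least one node whenever read_replicas + write_acks > n_replicas.
def has_quorum_overlap(n_replicas: int, write_acks: int, read_replicas: int) -> bool:
    """Return True if every read set intersects every write set."""
    return read_replicas + write_acks > n_replicas

# A common configuration: N=3, W=2, R=2 -- every read overlaps every write.
assert has_quorum_overlap(3, 2, 2)
# With W=1, R=1 a read can land entirely on stale replicas.
assert not has_quorum_overlap(3, 1, 1)
```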
I'd place its level of depth somewhere in the middle between specialist books and 10,000-foot overviews. I recommend it to anyone who has been a software developer or DBA for 5+ years, as I think they'd get the most value out of it.
Reading High Scalability and watching talks about how Netflix, Google, Spotify and friends deal with their massive loads can be entertaining, but for the vast, vast majority of us it is only that: entertainment.
Google's solution is almost certainly not right for you. Netflix's solution is almost certainly not right for you.
I've never had the privilege of working on a project that large, but I imagine that even if you are at that scale their solutions are still not useful, because at that scale and complexity everyone's needs are different.
I don't agree fully. My understanding of the Google level scale is that you are forced to deal in good abstractions. You need to pick the right interface, nothing can ever be responsible for more than one thing, and you need to make things extremely composable. With a foundation like that, you get the ability to model your system using even simpler substitute components, making it possible to reason about behaviour in conditions which are difficult – if not impossible – to actually test.
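A minimal sketch of what "simpler substitute components" can look like in practice (the names `BlobStore` and `FlakyFake` are illustrative, not from any real library): a narrow interface plus an in-memory stand-in that injects failures, so you can reason about and exercise error paths that are hard to produce against a real backing store.

```python
from typing import Protocol

class BlobStore(Protocol):
    """One responsibility only: key -> bytes storage."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class FlakyFake:
    """In-memory substitute that raises on every Nth call, so callers'
    retry and error-handling logic can be tested deterministically."""
    def __init__(self, fail_every: int = 3) -> None:
        self._data: dict[str, bytes] = {}
        self._calls = 0
        self._fail_every = fail_every

    def _maybe_fail(self) -> None:
        self._calls += 1
        if self._calls % self._fail_every == 0:
            raise ConnectionError("injected failure")

    def put(self, key: str, data: bytes) -> None:
        self._maybe_fail()
        self._data[key] = data

    def get(self, key: str) -> bytes:
        self._maybe_fail()
        return self._data[key]
```

Anything written against `BlobStore` runs unchanged against the fake, which is what makes the system modelable with the real component swapped out.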
If the ideas sound familiar, that's because they're basically just good software design. Except if you don't design things well from the start, you have to either kill it off or spend years fixing it.
I think a lot of this could be exactly the right mindset for many of us. Sure, there's something to be said for an MVP, but with some experience in designing composable abstractions, you can create an MVP that scales up to real workloads in half the time of your competitors, who basically have to rebuild their thing from scratch.
You need to _operate_ your software. The way a company with 10,000 software engineers operates their software is very different than how a company with 5 software engineers does it.
Microservices are the prime example of something you should never do at small scale. They're a solution to the organizational problem of coordinating large teams. Typically each team owns a single microservice... and then 5-person companies go all-in on microservices, support 20 services with one team, and get confused about why it's not going well.
Others have already posted some fantastic, practical resources for creating technical systems.
But truly first-time system designers who haven't had much exposure to systems thinking before might find the work of Donella Meadows useful. Meadows focused on environmental sciences and economics, so the specific examples and anecdotes may not be of interest to you. But she is also well known for her work in systems thinking, and the mental frameworks and general systems-thinking principles sprinkled throughout are just as applicable to designing complex software systems as they were to her work. They also make it significantly easier to evaluate the practical resources others have provided and interpret them in the context of your own needs.
Two decades ago, when I was learning to code, the career path for engineers was to eventually become a "software architect". The architect was to be the god among mortals who would "design" large systems and dictate how these things fit together.
Fast-forward a few years and it turned out the "architects" were the biggest waste of time and money. The best system designs came from the low-level engineers who were actually building the individual components.
In my humble opinion, the best way to learn big system design is to just put in your 10,000 hours of coding. The principles necessary for multi-thread concurrency are not so different from those of multi-datacenter concurrency. I suspect there are thousands of subtle design patterns that one can perhaps never fully articulate.
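One example of that carry-over, sketched under my own assumptions: the lost-update anomaly. Two writers read the same value, both modify it, and one update silently vanishes. In a single process the fix is a mutex around the read-modify-write; between replicas or datacenters the very same anomaly reappears, and the fix becomes a transaction or an atomic compare-and-set rather than a lock.

```python
import threading

# Shared counter updated by several threads. Without the lock, the
# read-modify-write in `counter += 1` can interleave and lose updates --
# the in-process version of the same anomaly distributed databases
# guard against with transactions or compare-and-set.
counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:          # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 400_000   # no lost updates with the lock held
```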
I would start with the MIT OCW course 6.033, "Computer System Engineering" (old videos are also available):
This class covers topics on the engineering of computer software and hardware systems. Topics include techniques for controlling complexity; strong modularity using client-server design, operating systems; performance, networks; naming; security and privacy; fault-tolerant systems, atomicity and coordination of concurrent activities, and recovery; impact of computer systems on society.