> Go is a pretty good programming language. I have long held that this is not attributable to Google’s stewardship, but rather to a small number of language designers and a clear line of influences which is drawn entirely from outside of Google — mostly from Bell Labs. pkg.go.dev provides renewed support for my argument: it has all the hallmarks of Google crapware and none of the deliberate, good engineering work
> that went into Go’s design.
Putting aside the criticisms, I think a new word is needed to describe anti-Google sentiment of this flavor: the idea that Google is incapable of doing anything positive, so anything “good” coming out of Google must be attributable to influences outside of Google.
I’d like to think about it as a “no true Googler” argument. The Go team are not true Googlers, because they produce something useful. When they built that module proxy, however, they were clearly doing so as Googlers. (As someone who works at Google, I can humorously imagine that when I’m acting as a true Googler, my only goal is to foist new crapware onto the world, get promoted, and kill it off.)
Or maybe we recognize that companies are collections of individuals, and all individuals bring their own history and influences. The culture shapes them and they shape it back. Playing semantic games to divide and attribute isn’t useful.
Maybe when you feel the need to defend your employer from such criticisms often enough to define a new term for it, you should wonder about your employer more, and whether or not the criticism comes from a place of truth.
Your attempt to disarm the argument against Google's volatile engineering essentially boils down to: "people are all different, and people make up Google, so you can't say Google is one way or another".
That would apply equally to quite literally any collective/group/company/society.
How does such poor reasoning end up voted so highly on HN?
This is not a "no true Googler" argument, but a factual statement supported by the history of Go. The design of Go is heavily inspired by Plan 9, Inferno, and Limbo, none of which came from within Google. In fact, name any of the flagship features of Go and I'll draw a line for you from that feature to its historical influences. Goroutines, channels, garbage collection, GOPATH... none of these were invented by Go.
Right, but Go itself, which builds on those ideas, was built within Google. I have no objection to attributing ideas, but you seem to want to attribute some general “quality of engineering” attributes to “not Google”.
The people doing things you like are just as much Googlers as all the people doing things you don’t like, and they all have an influence on the company and vice versa. In many cases, it’s even the same people.
So the distinctions you are trying to draw are silly. Just let the criticisms stand on their own.
FWIW, Drew's position on Go is hardly even unique. Someone who is quite fond of Go was telling me last week about how I should consider learning it despite it being from Google, because it is not a very Google-like project. And that wasn't the first time I'd heard it either.
(Myself, I am usually just trying to build stuff people wrote in Go, and I usually end up fighting the tooling for a while and getting mystified by the error messages and having no idea where to look to solve them.)
I avoided Go for years because it was from Google until I quite accidentally landed a Go job several years ago, and I haven't looked back. Amusingly, I am now using Go to build a competitor for Google Analytics. Heh.
It was an important lesson in a way, as it taught me that Google – or indeed, any organisation comprised of thousands of people – is not a single monolithic entity.
Also, feel free to drop me a line if you've got problems with a specific error, and I can perhaps point you in the right direction.
I did get the thing last week to build! :) My Go-recommending friend seemed to think the script I was using was pulling in Go 1.11 where modules were not as well supported as they are now? But I got around it.
It was just that the error message I got really gave me nothing useful to search for. Literally was a "permission denied" error... while "go build"ing stuff with effective superuser. It wasn't really pointing me at what it didn't have permission to do or how I would go about getting it.
"Permission denied" usually happens when it tries to build stuff in "GOROOT", that is, the standard library installed in /usr/lib/go or some such. This normally shouldn't happen, so there's probably something wrong with your installation or setup. Hard to be 100% sure though, that's just the usual cause in my experience.
I think Debian ships with Go 1.11 in current stable (and by extension, Ubuntu too, I assume?). The entire GOPATH-to-modules switch is messy, ugly, broke a lot of stuff, and is confusing for newcomers, even though modules are clearly better. I think we're finally nearing the end of it though.
You probably want to just use Go 1.14 (the latest version) if you can. The language itself is very stable (just not the surrounding tooling).
I can definitely say it's the tooling that throws me every time. And I do think we pull from Debian for that script currently, though we're going to update it soon. (I think I was told 1.15 was near anyways?)
Overall I find that the tooling is actually really nice, and better than anything I've worked with previously. It's just this weird "old method vs. new method" schism that makes it hard :-/ It's restricted to just tooling, but kind of Go's Python 3 moment.
And yeah, I believe Go 1.15 should be released in August. rc1 was released last week or so.
It's about complexity, about design and process. I don't think anyone can seriously deny that Google, its engineering processes, and its resources have a vast impact on what a documentation or package server ends up looking like.
If all you have been building for the past 20 years are web services and you are surrounded by Kubernetes-but-the-internal-version and Postgres-but-the-internet-scale-version, is it any surprise that what comes out at the other end is invariably a centralized web service running on a bunch of Google proprietary infrastructure? And is it really that difficult to see how that is very different from the principles in Go, the language?
Git is very much the same thing. There is no doubt that in a world without Git, version control inside Google looks a lot more like Team Server than it does decentralized Merkle tree.
Can you please tone down the inflammatory rhetoric, in general? I used to like reading your blog but it's getting really tiring to see every technical discussion you make sidetracked by a rant against someone or something.
Some features being absent is not the same as what you said in the article. "Any GitLab instance other than gitlab.com [..] none of these are going to work unless every host is added and the list is kept up-to-date manually." makes it sound like they don't work at all, but in fact it's just a few minor features that don't work. sr.ht has always worked on pkg.go.dev from day 1.
Actually, pkg.go.dev needs to have some way of figuring out where to link to, so it's not necessarily easy to automatically make links that work for 100% of the sites. That's not "a blatant failure to comprehend the fundamentally decentralized nature of git hosting", that's having to deal with a large ecosystem spread out over thousands of domains. Go's model is a lot more decentralized than most other package systems (which also makes stuff like this harder).
You suggested many code hosting sites don't work, while in fact they do work except for some (IMHO minor, but we can perhaps disagree on that) features. Wouldn't you agree that's not a full and accurate representation of the state of things?
None of that is relevant; you can feel the Google engineering approach is the worst in the history of humanity – all fine with me – but that does not mean you get to be misleading about facts. You can – and should – state that one feature doesn't work, rather than say that none of it works. Because the former is true, and the latter is not.
What is the goal of your post? To inform your readership, or to convince them that Google is bad? If it's the former, then you have done your readership a great disservice. If it's the latter – which seems to be the case – then congratulations: you've fooled your readership – or at least part of it – into believing something that's not true.
How do you propose this problem be solved? I was partially responsible for the go-source meta tag thing in the first place. Is there some other mechanism to find a web-based view of the source of a git repo?
Well, it's complicated, and we could brainstorm a few ideas. The problem seems to be that this brainstorming session never happened.
One problem is that the import path doesn't really give you enough information to totally figure out how to fetch the package. It's treated like a URL, and then the go-source tag comes in to relate it to something VCS-controlled. If we encoded the VCS into the import, we could e.g. have "git+https://git.sr.ht/~sircmpwn/getopt" as the import path, or we could move the import details into a separate file (like go.mod).
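For reference, the existing mechanism is an HTML meta tag served from the import path's URL; a minimal sketch of the two tags under discussion (the domain and repo names here are purely illustrative):

```html
<!-- Served at https://example.org/myrepo?go-get=1 -->
<!-- go-import: tells the go tool which VCS and clone URL to use -->
<meta name="go-import"
      content="example.org/myrepo git https://git.example.org/myrepo">
<!-- go-source: tells documentation sites how to link to browsable source
     (prefix, home page, directory template, file/line template) -->
<meta name="go-source"
      content="example.org/myrepo
               https://git.example.org/myrepo
               https://git.example.org/myrepo/tree/master{/dir}
               https://git.example.org/myrepo/tree/master{/dir}/{file}#L{line}">
```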
But if we assume that the import path and go.mod formats are fixed, then we still have more things we could do. For example, why is the tag named "go-source" when it could be "source-browser" or something similar? I'm sure many projects other than Go would be very happy to have features like this standardized and available for all of them to use, but the Go team sees no further than its own nose when designing this sort of thing.
And furthermore, with the specific case of hardcoded software hosts on pkg.go.dev, why isn't it using the go-source meta tags to look up how to create links to files & line numbers? You guys forced this upon us and then don't even use it? There's no reason to hard-code regexes for various git domains when you could just fetch it like godoc.org does.
I agree that it would have been better to design these a little differently.
In the first case the go-import meta tag is totally unnecessary. The go tool and module proxy can discover git repos from an import path just fine. It’s only required for “custom” import paths where the path is some domain but the code lives somewhere else, like github.
In the latter case the decision to use go-source was actually made by the original author of godoc.org, not a google employee, and was done in coordination with another non-google initiative (gopkg.in) to make their source links work. So nothing to do with the go team really, sorry.
If pkg.go.dev isn’t respecting go-source meta tags then that should probably be fixed. It would also imo be worth considering devising a more general, well-known mechanism for doing this. Worth proposing I think!
The way this works (or rather, worked) is that you first need to "go get" a module and then it gets picked up by the pkg.go.dev system. This works like that for every site, and was kind of weird and confusing, although as you mentioned this was improved on recently. Either way: not related to that module being on sr.ht; that's just coincidental.
That is why they decided to turn it into an internal/source package inside pkgsite first to have test cases for a later go-source-2. If your site does not show source links on pkg.go.dev then add a comment to issue 40477 above and an exception will be added.
These are fair criticisms. Still though, there are some pretty sweeping generalizations about Google being made here. The company is as big as a city. Not all of their engineering culture is like this.
Take what I'm saying with a grain of salt; slightly biased since I worked there for a bit in the past.
As a counter-point, I worked at Google for two years and what the author is describing is completely consistent with my experience with the engineering culture and the systems it produces.
Fundamentally there is a hostility towards the philosophical aspect of software engineering which involves a careful consideration of the essential nature of the system as a whole as it relates to some ultimate end and the nature of logical components of the system and how they interact. Note that these considerations can occur prior to any code being written and also as part of an iterative process as the product and understanding of the product develops.
I believe this stems from three sources:
1. The criteria and process for getting promoted are very focused on shipping new and complex things, which in practice comes at the exclusion of all other considerations. This means that improving or maintaining existing systems, or producing systems which are well architected at all, is not as clearly valued. The promo committee is comprised of people who don't know the specifics of the systems you're working on, so the easiest thing to convey is that feature X or product Y was shipped.
2. There is a hyper-rationalist, hyper-modernist, data-driven, anti-intuitionist view which pervades Google and completely discounts any sort of philosophical/qualitative thinking. People get really uncomfortable when you try to start even the most anodyne discussion about the product or service as it relates to some broader goal, since those involve qualitative judgements that can't be reduced down to a set of metrics. Some of this may be attributed to the CYA culture among management.
3. The interview process doesn't select for software engineering or software architecture skills (aka people who have developed an intuitive understanding of the art of software engineering) so there are many people within Google who lack the ability to effectively evaluate the architecture choices that are being made around them or that they themselves are making.
With all that said I did meet and work with many brilliant engineers and others who were very talented and did think deeply about things but those were relatively rare. And yes it is a big company with many different teams and subcultures but what the author and I have described are the patterns that one sees broadly across Google.
It’s pretty uncool to use just two years of experience at a place as a credential to shit on the culture of that place (and through implication, the engineers there).
I’ve been at Google for six years, and I’ve never encountered a “hostility towards the philosophical aspect of software engineering”. What a bunch of hooey.
There are certainly problems with the promotion system (and incentives it creates), but it’s laughable to say that it rewards shipping “at the exclusion of all other considerations”. In two years, I imagine that you didn’t yet serve on promo committees (or see many successful and failed promotions of colleagues), so I’m not sure your criticisms are coming from a deep understanding of how they work.
I’m sorry that you had a bad experience, or that you felt your specific working group had some kind of CYA culture or anti-engineering vibe. But it upsets me to see people indirectly shitting on or commenting about tens of thousands of engineers that they had never worked with or even interacted with. I don’t know why it’s fashionable to do this with Google right now, but it just shouldn’t happen with any company.
If you want to describe your specific, concrete experience, I’m sure that would be much more helpful and constructive than saying “the culture is X”, “Google engineers are Y” without any connection to specifics. My 2c.
If you've been at Google for 6 years then you know that my point about the promo system is the dominant opinion held by engineers there. Not that Memegen is the best barometer, but you should check it out around promo time, since it does comport with the opinions of engineers I talked with in other contexts. I had thought that would be the least controversial point.
When I was there they added "Landing a product" as perf criteria as opposed to "Launching a product" and they added some lip service towards maintenance tasks being perfable which is a reflection of the problems I laid out.
I don't know you or your situation and there are always exceptions (e.g. Bram Moolenaar, of course, and others not as famous) but in my experience the more tenure an engineer had the worse they were. The best engineers I've met in my career were ex-Googlers who were only there for a few years (some of them became Xooglers while I was there).
My assertion is that generally speaking the people who stay are the ones who don't see the problems in the system because they don't have the talent, experience, or perspective OR they have other priorities in their life (raising a family) and Google makes it easier for them.
What I encountered was that many long time engineers just weren't aware of what else was out there, the advances in the state of the art, and how things could be better because they were cozily ensconced within the Google ecosystem for so long. It's an incredibly ossified environment.
I don’t know how many people you worked with closely in two years, but I strongly suspect it’s not enough to properly conclude that “the more tenure an engineer had the worse they were”.
I’ll point out the irony of having this discussion on this post. Many of the Go team engineers have a long tenure, but the author wanted to attribute “good” things only to “not Google” influences.
Yes, the promo system has issues. Yes, there are widely held beliefs about it. But not all widely held beliefs are accurate: there’s usually some truth, but as always, it’s complicated.
Yes, there are some echo chamber issues. But engineers are much more aware of the outside world than you're implying.
I just think your assertions are pointlessly mean by virtue of being overly broad (in terms of who and what). I like to see specific criticisms of any company, even if backed by an anecdote, because you can learn something.
Seeing this was so ironic because, of all the engineers at Google, Rob actually came and gave a talk to our team about NgRx, and he was one of the few engineers I encountered who really got engineering. Having him come give a talk to our team was like a breath of fresh air. I have no doubt there was some insane political BS he was subjected to that forced him to leave.
As far as my assertions being "mean" because they are overly broad, the nature of speaking about anything requires some degree of generalization which I've tried to couch (and you're aware that I have). What a silly argument.
Another concrete example I can give you is that the people who run the Angular team also own support for Typescript at Google and they disabled .tsx support for Typescript (literally just a compiler flag that does simple desugaring) despite a ton of protest from a lot of teams using it. They did that to kill off and discourage React (which many people want to use) to consolidate political power. I had many private messages with new engineers that were upset when they got shut down for asking if they could use React. Obviously these people were trying to bring in practices from outside of Google.
Why would six years as an individual contributor be enough to make sweeping assertions about an entire company when two years is not?
googthrowaway's experience more closely matches my experience at Google, fwiw, but it's certainly the case that your experience depends on which org you're in. My org had a notoriously incompetent VP running things and it hemorrhaged staff to companies like Apple and Microsoft.
You missed the point: I didn’t make any sweeping generalizations. I’m advocating against that. The only generalization I made is that the existing generalization about anti-engineering is hooey.
And I don’t know why you’re assuming I’m an IC; I didn’t say that either.
I don’t have any problems with people sharing their specific personal experience. I have a problem when people start claiming good engineers are “rare” — you’re just insinuating bad things about tens of thousands of people you have never even met.
As a meta comment, I will say that the rhetoric you're using is representative of the kind of rhetoric you see on the internal mailing lists, in case anyone reading who's never worked at Google wants to get a taste of what that's like. It literally reminds me of being back at Google.
If it's so meaningless, now that Drew has complained they can just make the alternative the default for their software, right? Right? It is MEANINGLESS, after all.
When faced with a decision, someone chose the option that took extra operational expenses to keep services online. If you're going to call that meaningless, then you're either a huckster or you've got quite a lot to learn, my friend.
They might be if they were true, but they're not: the only thing that was broken about sr.ht was that the links to the source code didn't work, but other than that it worked fine. The other sites mentioned like codeberg.org work just fine today, just without those links (a minor feature I almost never use). The whole "it needs meta tags" thing is also not true, I don't know where he got that from.
Quite a few points in this article are highly misleading.
Ah sorry, you're correct. I checked but forgot about the "?go-get=1" parameter it adds.
The entire thing is a bit hacky, but as mentioned in other comments godoc.org started as an independent project outside of Google and pkg.go.dev built on what was already in place. Was that the right decision? I don't know, I haven't looked at the matter and possible problems in-depth; all I do know it's far more nuanced than "Google being bad at engineering".
I don't understand what this comment is saying about Hacker News or Drew DeVault. Are the points any less valid just because they do not praise Google? And just because something is unsurprising does not mean that it is good or acceptable, or that we should not urge them to do better.
Remember, Go is BSD-licensed. If things become too bound to Google, it can and will be forked. People thought it was the end of the world when multiple Java SDKs arose, didn't they? (Not to say that didn't hurt Java, but the language survived.)
There is some complexity here. Go is one of the few programming languages with a fully-decentralized module system. I can upload a module to example.com, and you can use a module from example.com with no centralized coordination. This is very different from systems like Node/npm or Perl/CPAN, where if you want to write a module, you have to beg them for permission first. (This is a moneymaking opportunity, too. You can charge people for your package system if they want to keep their code private. And npm does! With Go, you can have private modules for free!)
There end up being some problems with this model. Popular modules might be hosted on some server that can't stand the load, and if you can't pull dependencies, the entire module system breaks down. So, you probably actually want the modules on a CDN, because high availability is important for CI systems. Before Go automatically pulled from Google's module proxy, I had my own. I had to, because Github rate-limited our pulls (and when it wasn't rate-limiting us, it was slow).
Setting that up was a few hours I had to spend for no good reason -- it didn't make my product better, except by making some incidental step between idea and deployment slightly faster. You can of course vendor your modules (now your git clone is slow), or add caching to your builds (go's caching is very good, but doesn't play well with Docker). All of them take your time and mental energy to do something unimportant. So, Google made a module proxy that Go pulls from by default. It works every time. It probably costs them almost nothing. There is no business value in it. Google doesn't need Go for anything. They just have some employees that want to improve the lives of average programmers. You can turn it off. You can host your own -- go/x has an example proxy, and there are full-fledged open source projects that add more features (Athens). It's actually a very good situation. You can be as centralized or as decentralized as you desire.
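To make the "as centralized or as decentralized as you desire" point concrete, that choice comes down to a single setting; a sketch of the common configurations (the Athens URL below is hypothetical):

```shell
# Default: Google's proxy, falling back to direct VCS fetches
go env -w GOPROXY=https://proxy.golang.org,direct

# Fully decentralized: always fetch straight from the origin host
go env -w GOPROXY=direct

# Self-hosted cache, e.g. an Athens instance (hypothetical URL)
go env -w GOPROXY=https://athens.internal.example,direct
```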
Going back to centralized package databases, they do have some really useful features. The main one I use is looking at users of a package that I'm reading documentation for. No example code in the repository? No problem. You can go see how other people are using it. The problem there is getting a list of all possible packages. You can poll common repository hosts for them. You can convince everyone to use a certain proxy that gets the names of the modules you are using so you can go index them. That's what Google does, and the user experience is great. You trade Google the information "this IP address uses github.com/foo/bar" to get the knowledge of every Go module in the world. Seems like a good trade. And you can easily fork go and tell it to use your proxy, and make your own package database. The code is open. The license allows it. Go for it!
Would it make more sense for Go to be owned by something like the CNCF instead of Google? Sure. But they did make it, so it's kind of their choice. It is pointless for a large company to make a brand new programming language, but they did it anyway, and it's nice that we have it. They give us lots of free stuff and ask for nothing in return. The fact that your CI system pulled a Go module is not something that advertisers are salivating over. If you want to hurt Google because you don't like them, ask your lawmakers to make advertising illegal. That would show them! But caching Go modules is not a big moneymaker for Google, it's just kind of a nice thing they do because a few employees thought it would be good. I appreciate it. If you don't, set up Athens and forget about them. I am sure you can render godoc just as nicely as pkg.go.dev with only a few hours of work. The thing is, it's boring and nobody would care.
So they do offer private package hosting as a business like GitHub offers private git repos side by side with public plans? I’m not seeing what you’re trying to point out. They’ve built something and they’re charging money for it. You’re welcome to use any other private npm registry.
Why not make the proxy an opt-in fallback by default? I like my fails to be catastrophic. I don't want to touch any host I don't have to. Ever. And doesn't it leak metadata if you aren't careful? I remember a Go page to request data removal. A package named github.com/microsoft/clippy already makes my skin crawl.
If you don’t want to use the proxy you can turn it off, trivially. For the overwhelming majority of users the proxy is a huge benefit (installing and updating dependencies is way faster and the checksum database provides security benefits).
Note that most other packaging ecosystems use centralised services, where you don’t even have the option of merely turning off the proxy.
I know you know this, but gosumdb and goproxy are two very different entities, consumed separately.
Let's not muddy conversations about the default centralized goproxy with the existence of a global TOFU service. gosumdb would be perfectly possible to query without even revealing a module import path you're asking for (just like browser-side safe browsing can do).
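To illustrate that the two are separate knobs, here's a sketch of mixing and matching them (both GOPROXY and GOSUMDB are real go env settings; the combinations shown are just examples):

```shell
# Skip the proxy but keep checksum verification against the public log:
go env -w GOPROXY=direct GOSUMDB=sum.golang.org

# Or the reverse: use the proxy but trust only your local go.sum:
go env -w GOPROXY=https://proxy.golang.org,direct GOSUMDB=off
```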
Not using a proxy by default causes people a lot of pain -- it's the common case to not care that the network is in use if it makes your builds faster and more reliable. If you don't want to accidentally leak things to the network, that's kind of your OS's or IT department's jobs. The tools are available and they will work with things that aren't explicit about needing the network. (What I'm saying is, if this concerns you, Go is one worry of many.)
I have a concrete example about why module caching is good. I have an example repository that shows what happens when upstreams exercise poor release hygiene. Clone github.com/jrockway/evil-module-user. Run "go run main.go". (It's not actually evil, it just prints a message.) You will find that with the Go module proxy turned on, it compiles and prints "This is a good module!". With the proxy turned off, though, you will find that the checksum in go.sum doesn't match for the upstream github.com/jrockway/evil-module, and that evil-module has been silently compromised to become evil. What happened here is that evil-module released v0.0.1 and the module cache cached it. Then someone edited the code and force-pushed the v0.0.1 tag to the repository, pointing at the new commit. When go get reaches out to GitHub to clone github.com/jrockway/evil-module, it gets code that doesn't match the checksum in github.com/jrockway/evil-module-user's go.sum, because it's not the v0.0.1 that the developer of github.com/jrockway/evil-module-user used when writing the code.
Obviously this example is intentionally contrived to demonstrate this exact problem. But when I first started using Go modules early on, this happened all the time. Different engineers got different "version 0.0.1" from many repositories, and didn't notice there was a problem until CI. They then loudly complained that go modules suck, when in fact the problem is that upstream authors were very fast and loose about retagging releases. They are sending your release system code that you've never even seen! That's good that the module system detects that. But annoying to fix :) The problem all went away when Google started caching the modules -- you get the old version of the code, but at least everyone gets the same version. (As an author, you can always release v0.0.2 to bust the cache.)
I actually thought until I investigated this in great detail recently that everyone had just embraced immutable version tags and were being smart about releases, as they learned more about Go modules. Turned out I was wrong -- people are as bad as ever, but they don't break your CI build because there is now a centralized cache that gives your workstation, your coworkers, and your CI system the same code. The upstream modules authors aren't doing that. They are force-pushing with abandon.
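For anyone following along, the pin that catches this lives in go.sum, one line per module version plus one for its go.mod file (the hashes below are shortened placeholders, not real values):

```text
github.com/jrockway/evil-module v0.0.1 h1:Abc123...=
github.com/jrockway/evil-module v0.0.1/go.mod h1:Def456...=
```

If the v0.0.1 tag is force-pushed, the freshly downloaded code no longer hashes to the pinned value, and the build fails with a checksum mismatch instead of silently using the new code.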
goproxy and gosumdb are two separate services, you can get the TOFU benefits without goproxy, and a checksum database would be possible to query without even revealing what import path you are talking about (just like browser safe browsing).
I think the proxy is a bigger benefit than the checksum database. For the CI use case, it lets you get your modules from a server that's under less load than github (which has to support writes). Your go.sum already contains the checksums, so the sum database is not relevant there, but the module cache adds a big reliability and speed boost to your workflow.
The checksum database is of course extremely important (you want to get a known-good copy the first time you use a module), but I stand by my assertion that it's beneficial for both to be turned on by default. Yes, it's a small privacy leak. But not one that is particularly personally embarrassing or useful to adversaries. So I think the go team made the right call.
A little off-topic. I like Go but haven't used it in a while.
While offline I remember a couple go command operations were surprisingly giving me proxy.golang.org or network unavailable errors. In the sense that it had no reason to touch the network. Can you refresh my memory what they were? Or if they are still present? I can't for the life of me find or remember the answer.
It was what made me discover some of the things the article mentions, like the package index.
If you download a package that supports go mod and try to build it, it'll automatically try to fetch dependencies through proxy.golang.org
You can override the proxy, and there are open source implementations of the module datastore, but it's not made clear front and centre to the end user that building software will call a Google owned service.
It was a source of contention for some when the default toolchain moved in this direction.
I understand the complaints, but Go is Google’s language just like Ruby on Rails is Basecamp’s framework. This is not a secret! Also, I think this is fine? If your interests align with the owners, then you can really benefit. If not, then you are better off choosing a different solution.
This article names what I have had as a feeling for a long time: "Oh, it's created by Google … ah, this will suck." Take whatever you want: GMail, YouTube, Google Search, Google's online office products (they are so, so basic that even the most basic user can handle them, with no way to unlock greater functionality), Google Hangouts, you name it. It all sucks. You are the product: used, not a user.
Mostly, the reason is probably that this kind of software is written only with the most basic and common user in mind, not the one who wants to customize everything to their own personalized needs, or the user who does not need big corp to tell them "what they want". Google targets another group of people.
So I think the wording "crapware" pretty much covers it.
Of course ethical questions like the ones raised in the article … I guess they don't usually even enter the equation at Google.