Smalltalk is a tight design. So is Lisp. So is Forth. So is APL. C's design just feels like a pile of... stuff. Arrays are almost but not quite the same as pointers. You can pass functions but not return them. There's a random grab-bag of control flow keywords, too many operators with their own precedence rules, and too many keywords given over to a smorgasbord of different integer types with arbitrary rules for which expressions magically upconvert to other ones.
Since lmm said "You can pass functions", they probably meant "function pointers" — that's the only way to pass something that could be referred to as a function in C.
EDIT: At least last I heard. The example I gave is C89, and I know C99 and some GNU extensions. I don't know if a new standard introduced something else that could be referred to as a function that can be passed in but not returned.
A quick google for "c lambda" and "c closure" as well as looking for those keywords in the Wikipedia articles of C11 and C18 turned up nothing, so I guess not.
GNU C has "downward funarg" closures, like Pascal. They have the same representation as function pointers, which requires the compiled code to generate trampolines (little pieces of machine code that find a PC-relative environment pointer) on the stack. Invoking them requires executable stacks (a linker-level executable option often now turned off by default in major GNU/Linux distros).
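A minimal sketch of what that looks like (GNU C only — this will not compile under strict ISO C, and taking the nested function's address is what forces GCC to build the stack trampoline; the function names are illustrative):

```c
#include <stdio.h>

/* apply() takes an ordinary function pointer. */
static void apply(void (*f)(int), int n) {
    for (int i = 0; i < n; i++)
        f(i);
}

int sum_first(int n) {
    int sum = 0;
    /* GNU C nested function: it captures 'sum' from the enclosing
       frame (a "downward funarg").  Passing its address to apply()
       makes GCC emit a trampoline on the stack, which is why an
       executable stack is required. */
    void add(int i) { sum += i; }
    apply(add, n);
    return sum;     /* sum_first(5) yields 0+1+2+3+4 = 10 */
}
```

The closure is only valid "downward" — once `sum_first` returns, the trampoline and the captured frame are gone, which is exactly why these can be passed but not returned.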
Part of me wonders if a lot of that baggage comes from C's background compared to Lisp's. C's origins were in systems programming on the PDP-11, so I wonder if the goals and ideas behind each language contributed to why they are the way they are.
> Arrays are almost but not quite the same as pointers.
There's a big misunderstanding here. Arrays and pointers are entirely different things. An array is a chunk of elements that lie next to each other in memory. A pointer is just a pointer.
The only thing that C does that causes confusion is pointer decay. If you use an array in an expression context, other than the sizeof operator, that expression evaluates to a pointer to the first element of the array.
And while it confuses the heck out of many people who never learned it properly, it's very useful. For example, string literals are arrays, too! Would you prefer to write printf(&"Hello, World\n")?
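To make the decay rule concrete, here's a small sketch (the helper names are made up for illustration):

```c
#include <stddef.h>

/* sizeof is one of the few contexts where an array stays an array. */
size_t whole_array_size(void) {
    int a[4] = {1, 2, 3, 4};
    return sizeof a;        /* size of the whole object: 4 * sizeof(int) */
}

/* In nearly every other expression, the array decays to a pointer
   to its first element. */
int third_element(void) {
    int a[4] = {1, 2, 3, 4};
    int *p = a;             /* same as: int *p = &a[0]; */
    return p[2];            /* 3 */
}

/* String literals are arrays of char and decay the same way, which
   is why printf("Hello, World\n") works without any & spelling. */
```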
Funny, I got the same feeling from Smalltalk and Lisp.
I own both "Common Lisp: The Language" and "Smalltalk-80: The Language and its Implementation", and while there are many ways those languages could be described as 'tight' (tightly-coupled, perhaps), at no point can you look at the C language and say "This could be smaller" without significantly removing functionality. Ok, perhaps there are some things around array/pointer syntax, etc. but the room for removing things from the language is very small.
Lisp and Smalltalk are both 'kitchen-sink' languages. As I understand it (i.e. unless I misread something or skipped a page), for an implementation to be a proper spec-conforming instance of Smalltalk-80, a screen and graphics server is required. Indeed, Smalltalk-80 requires a very specific model of graphics display that is no longer appropriate today. Steele's Lisp has a number of functions that one could strip out without anybody caring or noticing very much.
On the other hand, all of the C that is there serves a purpose.
Perhaps the only thing in your list that does feel like a tight design, in addition to C, is FORTH. But FORTH puts the burden on the programmer to remember what is on the stack at any given time. It has some beauty, indeed, but all of the abstractions seem inherently leaky. I haven't programmed in FORTH, however, so I can't really say more about how that plays out in practice.
If "there is nothing else to remove" does not resonate with you, then the perspective of the OP, myself, and others when we call C a "small"/"tight" language is essentially this: C was born out of the necessity to implement a system. Conversely, the 'batteries included' aspect of Smalltalk and Lisp more or less presumes the existence of an operating system to run on. It feels like the designers often did not know where to stop adding things.
Most of the library functions in C can be implemented trivially in raw C. Indeed, much of K&R is just reinventing the library 'from scratch'; there is no need to pull out assembly, or to make any assumptions about the machine beyond "the C language exists". Whereas a lot of the libraries of Smalltalk and Lisp seem bound to the machine. Not to harp on too much about the graphics subsystem of Smalltalk, but you couldn't really talk about implementing it without knowing the specifics of the machine. And while much of Lisp originally could be implemented in itself, Common Lisp kind of turned that into a bit of a joke: half the time when using it, it is easier and faster to reimplement something than to find out whether it already exists.
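As a sketch of that "reinvent the library in raw C" point, here is strlen written with no assembly and no machine assumptions (my_strlen is just an illustrative name):

```c
#include <stddef.h>

/* strlen in plain C: walk forward until the NUL terminator. */
size_t my_strlen(const char *s) {
    const char *p = s;
    while (*p)
        p++;
    return (size_t)(p - s);
}
```

Most of string.h can be rebuilt in exactly this style, which is part of what K&R spends its pages doing.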
Apologies if this is repetitive or does not make much sense.
I agree with you, but perhaps you are reading “tight” slightly differently than the way the original poster intended it?
To me, ANSI C is “tight” in the sense that it is made up of a small set of features, which can be used together to get a lot done. But the design of the features, as they relate to each other, can feel somewhat inelegant. Those different features aren’t unified by a Simple Big Idea in the way that they are in Lisp or Smalltalk.
Lisp and Smalltalk, then, have “tight” designs (everything is an s-expression/everything is an object) which result in minimal, consistent semantics. But they also have kitchen sink standard libraries that can be challenging to learn.
(Although to be fair, Smalltalk (and maybe Common Lisp to a lesser extent) was envisioned as effectively your whole OS, and arguably it is a “tighter” OS + dev environment than Unix + C...)
FWIW, I am learning Scheme because it seems to be “tight” in both senses.
It sounds like you're talking about the standard library rather than the language? The examples I gave have a very small language where you really can't remove anything, whereas in C quite a lot of the language is rarely-used, redundant, or bodged: the comma operator surprises people, for and while do overlapping things, braces are mandatory for some constructs but not for others, null and void* are horrible special cases.
Standard libraries are a different matter, but I'm not too impressed by C there either; it's not truly minimal, but it doesn't cover enough to let you write cross-platform code either. Threading is not part of the pre-C11 language spec, so you're completely reliant on the platform to specify how threads interact with... everything. Networking isn't specified. GUI is still completely platform-dependent. The C library only seems like a baseline because of the dominance of Unix and C (e.g. most platforms will support BSD-style sockets these days).
I'm actually most impressed by the Java standard library; it's not pretty, but 20+ years on you can still write useful cross-platform applications using only the Java 1.0 standard library. But really the right approach is what Rust and Haskell are doing: keep the actual standard library very small, but also distribute a "platform" that bundles together a useful baseline set of userspace libraries (that is, libraries that are just ordinary code written in the language).
> Steele's Lisp, has a number of functions that one could strip out and nobody would care or notice very much.
Common Lisp is kind of a language and a library. Parts of the library could be made optional. Some later standardization efforts (EuLisp, R6RS, ...) tried to define a Lisp-like language in layers/modules/libraries.
I think one's love of/comfort with C depends on one's path into software development. It was my first language (aside from BASIC). It has its quirks, and you can do some really bad/dangerous things when writing large apps.
That said, given my comfort level and tendency to be more explicit rather than tricky, my C coding ends up surprising me a week or so later when I go back and realize "even at 2am" I did the right thing.
It's also a fun language for messing with junior programmers.
Hi, I vouched for this comment, so people can see it. You're shadowbanned, it seems. I took a glance at your comments and I can't really see a reason for it, many of your comments are borderline but I've seen worse here by 'top rated' commenters.
UB is not what makes C fast. A good (and fast) program does not contain undefined behaviour, at least ideally.
The only speed advantage of Fortran, to my knowledge, comes from pointer aliasing information that C compilers have a harder time inferring. But that's more a consequence of the programming domain. Fortran is not a systems programming language. Fortran has specialized data structures for scientific computing built in (I think??). It's an apples and oranges comparison.
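C99's `restrict` is the usual way to hand the compiler the no-aliasing guarantee that Fortran dummy arguments carry by default; a sketch (the function name is made up for illustration):

```c
/* Without 'restrict', the compiler must assume dst and src might
   overlap, which can force it to reload src[i] on every iteration.
   'restrict' (C99) promises they don't alias, roughly recovering
   the freedom a Fortran compiler has by default. */
void scale(double *restrict dst, const double *restrict src,
           double k, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```

The catch, of course, is that the promise is unchecked: call `scale` with overlapping arrays and you're in undefined-behavior territory.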
Plus the experience of all of us who had access to early C compilers on CP/M and home micros.
IBM already had an LLVM-like toolchain for their RISC research, with PL.8 and a corresponding OS.
Only later did they switch to UNIX due to the market they were after.
Surviving mainframes are still written in their own systems languages.
As for Fran Allen point of view:
"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming
Because C guys keep spreading the false argument that C was some kind of language sent by God to solve all performance issues, while those of us in the trenches when the language sprang into existence know it was never the case.
Not to mention the fact that other programming languages, on the mainframe and workstation space, were already starting to collect the benefits of whole program optimizers.
>To me seemed like C was such a tight, perfect little design. Only thirty keywords, and simple consistent semantics.
Except that history clearly showed it wasn't enough, and we ended up with about 50 million (and counting) different meanings for "static", for instance. I like C, but its simplicity is almost by accident more than by design. It's pretty far from "perfect" in my book.
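For the record, a sketch of a few of those meanings of `static` in one file (the third form is C99):

```c
/* 1. At file scope: internal linkage -- visible only in this
      translation unit. */
static int counter = 0;

/* 2. At block scope: the variable persists across calls. */
int bump(void) {
    static int calls = 0;
    return ++calls;
}

/* 3. In a C99 array parameter: a promise that the caller passes a
      pointer to at least that many elements. */
double sum3(double a[static 3]) {
    return a[0] + a[1] + a[2];
}
```

Three unrelated jobs, one keyword — which is exactly the complaint.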
There are so many weird features of the language that can bite your ass for no good reason. Why doesn't switch break by default, since that's what you want it to do the overwhelming majority of the time? (Answer: if you generate the corresponding assembly jump table by hand, "fall through" is the easiest and simplest case, so they probably kept it that way.)
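The classic illustration, assuming the usual day numbering (the function name is made up):

```c
/* Without 'break', control falls through to the next case.  Here
   the fall-through is intentional; forget a 'break' elsewhere and
   you get a silent bug instead of a compile error. */
const char *day_kind(int day) {      /* 0 = Sunday .. 6 = Saturday */
    switch (day) {
    case 0:                          /* falls through to case 6 */
    case 6:
        return "weekend";
    default:
        return "weekday";
    }
}
```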
Why do we have this weird incestuous relationship between pointers and arrays? It might seem elegant at first (an array is a pointer to the first element or something like that) but actually it breaks down all over the place and can create some nasty unexpected behavior.
Why do we need both . and -> ? The compiler is always able to know which one makes sense from the type of the variable anyway.
String handling is a nightmare, due to the choice of NUL-terminated strings and string.h being so barebones that you could reimplement most of it in under an hour.
Some of the operator precedences make little sense.
Writing hygienic macros is an art more than a science which usually requires compiler extensions for anything non-trivial (lest you end up with a macro that evaluates its parameters more than once).
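The canonical double-evaluation trap, plus the usual workaround — which, as noted, needs GNU extensions (statement expressions and `__typeof__`, supported by GCC and Clang but not ISO C):

```c
/* Looks like a function, but evaluates each argument twice: */
#define MAX(a, b)  ((a) > (b) ? (a) : (b))
/* MAX(i++, j) increments i twice whenever i > j. */

/* The usual fix needs GNU statement expressions and __typeof__
   to evaluate each argument exactly once: */
#define MAX_SAFE(a, b)              \
    ({ __typeof__(a) _a = (a);      \
       __typeof__(b) _b = (b);      \
       _a > _b ? _a : _b; })
```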
Aliasing was very poorly handled in earlier standards and they attempted to correct that in more modern revisions while still striving to let old code build correctly and run fast. So you have some weird rules like "char can alias with everything" for instance. Good luck explaining why that makes sense to a newbie without going through 30+ years of history.
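A sketch of what the "char can alias everything" rule buys you in practice (the expected bit pattern in the test assumes IEEE 754 floats, which is near-universal but not guaranteed by the standard):

```c
#include <stdint.h>
#include <stddef.h>

/* UB under strict aliasing:
       uint32_t bits = *(uint32_t *)&f;   -- don't do this.
   Legal: char / unsigned char may alias any object, so copying
   bytes through unsigned char * is the sanctioned escape hatch
   (in practice you'd just call memcpy, which compiles to the
   same thing). */
uint32_t float_bits(float f) {
    uint32_t u = 0;
    const unsigned char *src = (const unsigned char *)&f;
    unsigned char *dst = (unsigned char *)&u;
    for (size_t i = 0; i < sizeof f; i++)
        dst[i] = src[i];
    return u;
}
```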
The comma operator.
Undefined function parameter evaluation order.
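Two quick sketches of those last two gripes (the helper names are made up for illustration):

```c
/* The comma operator evaluates its left operand, discards the
   result, and yields the right operand: */
int comma_demo(void) {
    int i = 0;
    int x = (i = 5, i + 1);   /* x == 6, not 5 */
    return x;
}

static int order_log[2];
static int order_n;
static int trace(int v) { order_log[order_n++] = v; return v; }

/* Function-argument evaluation order is unspecified: trace(3) and
   trace(4) may run in either order, so code whose side effects
   depend on that order is non-portable. */
int order_demo(void) {
    order_n = 0;
    return trace(3) + trace(4);   /* 7 either way; log order varies */
}
```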
I suspect that with modern PL theory concepts you could make a language roughly the size of C with much better ergonomics. I'm also sure that nobody would use it.
> Why do we need both . and -> ? The compiler is always able to know which one makes sense from the type of the variable anyway.
I've often wondered this myself. The best I can come up with is that the underlying code generation includes an additional dereferencing step with ->, so having both . and -> makes the compiler a little more transparent.
Of course, in C++ you really need both because overloading -> is nice for e.g. objects that want to look like pointers.
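For anyone following along, the two spellings side by side — `->` is just shorthand for dereference-then-select (function names are illustrative):

```c
struct point { int x, y; };

/* Member of a value: plain selection. */
int direct_x(struct point p) {
    return p.x;
}

/* Member through a pointer: q->x is exactly (*q).x, with the
   extra dereference (the "additional load") spelled out. */
int pointer_x(const struct point *q) {
    return q->x;
}
```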
This was solved in rust by approaching it from the other side. You need to be explicit everywhere that a value is "borrowed", so while you never need a `->`, you'll still know it's dereferencing in the more natural place - before the variable, not as an implied step after.
> Why do we have this weird incestuous relationship between pointers and arrays? It might seem elegant at first (an array is a pointer to the first element or something like that) but actually it breaks down all over the place and can create some nasty unexpected behavior.
> Why do we need both . and -> ? The compiler is always able to know which one makes sense from the type of the variable anyway.
You're contradicting yourself in a way.
Maybe K & R thought: "Why can't arrays decay to pointers? The compiler is always able to know which one makes sense from context anyway."
I agree with your opinion about arrays/pointers, but disagree about ./->. A programmer reading the code might mistake a pointer for a non-pointer if you conflate . and ->.
K&R C didn't have ‘undefined behaviour’, and I've become convinced that the original ANSI committee didn't intend to create the monster they did.
The reason is that Dennis Ritchie wrote a submission to the committee¹ that described the ‘noalias’ proposal as “a license for the compiler to undertake aggressive optimizations that are completely legal by the committee's rules, but make hash of apparently safe programs”, and, that “[i]t negates every brave promise X3J11 ever made about codifying existing practices, preserving the existing body of code, and keeping (dare I say it?) ‘the spirit of C.’”
Those comments describe what ‘undefined behaviour’ turned in to. The only reason dmr and others didn't make the same objections to it is that nobody realized at the time what the committee wording implied.
Same here. K&R C is probably still my favorite programming book. I will admit that I had to stop and think about most of the examples; they all seemed 'cleverly' written. But in each case, the pause was worth it, and each one taught me a new, often simpler way.
At the time it was written, there was a good chance that the person reading it would not be doing so at a computer. Their computer access was quite likely via a terminal on a time sharing system, that they had to share with others. They couldn't hog a terminal while reading a book and typing in examples from it.
So books of that day were written so you could learn a lot just from reading the book, thinking seriously about the examples, and working out the exercises in your head or on paper.
It helped if every couple of days or so you could get some computer time and try out the things you learned, but it wasn't actually necessary. In the case of K&R, you could go through the whole book without touching a computer, and then one or two sessions afterwards trying things out could be enough to correct your misunderstandings.
I wish more books were like that today. Nowadays, they assume you are at a computer the whole time you are reading, and often depend on that when writing the text.
The 'K' in K&R actually wrote a book on Go, "The Go Programming Language". I haven't used it personally but it has good reviews, so it might be a good resource for people who are curious about Go and know they like that writing style.
C is great for some things, but not at others. For instance, the same simplicity you laud makes it very difficult to work with strings or serialization—both are highly manual operations. So—I agree generally with your sentiment, but it's very far from a perfect language. If you're working with word-sized integers it's pretty damn good.
Also, ironically, I feel like C presents an excellent case for an evolved language, not a designed language. Its very name is derived (via B) from BCPL, as are many of its semantics and notation.
I just started digging into C a few years ago and was struck by the same. It's amazingly simple and the only "flaw" that leaps out at me is the precedence of & and | being higher than comparison operators. Other than that, and maybe the macro system, everything frustrating about learning it was due to the frustration of dealing with the machine rather than anything C itself imposed on me.
The modern aggressive undefined-behavior-based optimizations ruin any remaining appeal of the "simplicity" of C for me. The extremely broad definition of undefined behavior may allow the compiler to, for example, silently delete explicit checks for signed integer overflow, among other things, and still claim standards compliance, but I don't think that programming model can reasonably be described as simple.
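The classic instance of that check deletion, under a typical optimizing compiler (function names are illustrative):

```c
#include <limits.h>

/* Signed overflow is UB, so the compiler may assume x + 1 never
   wraps, reduce 'x + 1 < x' to 'false', and delete the check
   entirely at higher optimization levels: */
int overflow_check_broken(int x) {
    return x + 1 < x;              /* may compile to 'return 0;' */
}

/* The well-defined version tests against the limit *before*
   performing the addition: */
int overflow_check(int x) {
    return x == INT_MAX;
}
```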
It many ways Go feels like an updated C. It retains most of the basic abilities of C, while cleaning up some things (headers, etc), having more basic libraries included, and choosing some defaults around things like out-of-bounds array access that are probably more appropriate for the majority of most programs (outside of particular hot loops, etc) in an era where software is often connected to the network and security is a bigger concern.
Lots of programs don't need efficiency in "hot loops". You want efficient code size everywhere (usually smaller is faster), and realtime programs like games are CPU-bound everywhere not just in one loop.
Undefined behavior is a great way to let the language help you; for instance, without it, most loops would look potentially infinite to the compiler. All you need to do is test with UBSan to see if you're dynamically undefined anywhere.
>realtime programs like games are CPU-bound everywhere not just in one loop
Isn't this a simplification? Even for games, I'd expect most often to be GPU bound, then memory bandwidth bound, and only then CPU bound. Although I get that the hot loop thing/10% rule is not universally true for every CPU bound program (especially after code has already been heavily optimized).
But I'm not sure what you are disagreeing with. I didn't mean to totally deny the usefulness of UB in C++ (although I think the core rendering code of AAA games are more performance sensitive than "the majority of most programs"). I was just pointing out that C is certainly not as simple as it first appears, and many people new to it are not familiar with the rules around UB.
Go's able to design around some of the performance aspects without UB (using the native word size for ints by default can help some of the signed integer stuff, range loops can elide bounds checks, etc), but probably can't generate as optimal code as C can without dropping to assembly language.
C can be quite complex once you get into all the abstractions it's using. And even for newbies I think the whole syntax around pointers is unnecessarily confusing (or at least people are often confused by it in a way they usually aren't when learning assembly). Not to mention how many keywords and symbols are overloaded and depend on their context for meaning.
Yeah, on reflection you're right about that. At the time when the language matched the hardware the abstraction was much more literal. It was only when they diverged that optimizing compilers were needed and all the complexity was introduced in order to maintain the fiction that the hardware was still as simple as PDP-11.
To piggyback on this comment: does anyone know of any companies building something interesting mainly with the C programming language (or maybe even with Rust)? And that are usually open to interns/entry-level programmers?
At the moment, I don't know. Do you know of any resources where one might find something interesting in the C programming language domain? I've not thoroughly looked into compilers (i.e. LLVM) or operating systems, but IIRC they both went over my head.
I have been using Go in production since 2015 and can honestly say that other than the ternary operator, none of these have been a major issue for me. Granted, I am doing mostly REST API development so my use cases may be different, but I have never had an issue with capitalization or which interfaces are implemented. The tooling is by far some of the best I have used in a language. Paired with a good editor (I personally use VS Code on Ubuntu 18.04 as my main setup) and I have yet to miss exceptions or wonder what my code is capable of doing.
That being said, Go isn't perfect by any stretch. Sometimes panics in goroutines can be very hard to trace. The transition to Go modules has been challenging for larger projects with versions beyond 1.x (when I started it was Glide, then dep, now go mod, which has been a bit frustrating). However, I wouldn't go back to Java, C#, PHP, or NodeJS if I had the choice.
My go to in the server space is Go and Elixir. I don't feel a desire or need for anything else, but again, that's me. You know your use case better than a stranger on the internet :)
The "Capitalization feature" is actually objectively worse because of the reason OP mentioned, of having to rename all semi-local usages when visibility changes. But good IDEs can help mitigate the difficulty of this.
But even more significant is that the Go authors cannot seem to grasp the importance of pre-existing conventions. Almost every language I've used in the past decade allows and encourages the variable, Class, CONSTANT convention.
It reminds me of people who want everyone to use CE and BCE instead of AD and BC, ignoring the inertia and relevance of the latter and almost seeming like we live in a vacuum where fresh ideas have as equal weight as old ones, and history doesn't matter at all.
I don't know how to explain this better, but this is basically the core reason I don't like Go, above and beyond any specific features or lack of features.
I once heard Go described as "what if we took the good ideas of C and started from scratch?" But it feels like they take that very literally, as if they were saying, "what if it was actually 1970 right now, and we didn't have C, and the next 49 years never happened?"
(This is a separate reply than my other one because someone already upvoted that.)
WRT renaming, I find that refactoring code is often such an oversight from language designers. I consider C# to be an elegant language in this way. Public fields and properties look the same in C# so you can effortlessly refactor between them, for example. I wish more languages thought about this stuff.
I liked a similar aspect of Ruby: attributes and 0-args methods had the same syntax, so you could easily refactor a static field into a method that returns a dynamically calculated value. Overall I don't like that feature, but that was a very handy aspect of it.
TBF that's pretty directly inherited from the Smalltalk ancestry: just like Smalltalk, Ruby simply doesn't have public (data) fields. Although it does provide shortcuts for automatically generating accessors which I don't think Smalltalk did / does.
For both C# and Go I use editors that understand the language and can rename all references to a variable with one keyboard shortcut. C# has at least 3-4 of these editors available, and Go has at least 2.
I realize that not everyone's preferred editor may support every language yet, but still I think the solution to this problem is probably just having the language designers provide libraries that editors can use for these refactoring features, instead of through specific capitalization conventions.
> But even more significant is that the Go authors cannot seem to grasp the importance of pre-existing conventions. Almost every language I've used in the past decade allows and encourages the variable, Class, CONSTANT convention.
What "pre-existing conventions" do you see in the Go code that you've had to refactor? The fact that conventions from Java or C++ aren't the same simply reinforces the fact that Go is a different language, and that's A Good Thing.
That's really interesting. How do you use Go with Elixir? I've never really gotten into Go, partially because I feel like there's a lot of overlap with what it does and what Elixir does.
I could see using it for CLIs and various Unix scripting, but I've been doing that with Ruby, for the most part. I also considered learning Go for creating some native binaries for a few critical paths, but Rust seems like the best choice for that. A NIF that crashes is one of the few things that will take down the Erlang VM, so Rust's strong safety guarantees (might) make it worth the steep learning curve.
Sorry if I wasn't clear. I don't really combine the two and use whichever makes the most sense. In my day to day, we are primary a Go microservices shop and are exploring Elixir for some video stuff. I've used some Elixir on side projects. Sorry I wasn't clear that I am not combining them, although I imagine if I was going to it would either be over some sort of gRPC implementation or using something like rabbit to hand over tasks.
Note that I wasn't asking why one would use Go instead of Elixir for a given task. If it's CPU bound, Go would be a better choice.
My question was how the poster was using Go with Elixir. Since they have a big overlap in what they're commonly used for, and since NIFs (Native Implemented Functions) on the BEAM really need to be fail-safe, it's not as clear how to use these two together as it is how one would use C in the same stack as Python, for example.
If the article had gone up to 250K connections, Elixir would have fallen down similarly to what happened to Node... although the reasons for Node being unable to keep up were unclear in the article, and I think warranted further investigation that the article didn't do.
The plaintext benchmark is about as far from a computationally intensive task as you can get... it should basically be network-bound, supposedly an ideal task for the BEAM VM, yet the results are clear.
If you turn on additional languages and frameworks, you'll note that Elixir or Erlang are significantly faster than, say, Rails, but Elixir or Erlang are a good 5x to 25x slower than Go.
People have built massive, successful companies on Ruby on Rails, such as GitHub, so don't think that I'm discounting these languages wholesale. You just have to accept that you will be paying substantially higher infrastructure costs if your infrastructure needs begin scaling beyond a single server. If you think that Elixir or Ruby or whatever else is the secret sauce to make your company successful, go for it! But those are a lot slower than Go or Rust or other very fast languages.
Elixir along with OTP and BEAM is sitting at a significantly higher abstraction level than Go. Everything is built around the distributed stateful soft-real time problem domain. While maintaining fault-tolerance.
You can have that in Go as well; you just need to build all the clustering mechanisms, OTP behaviors, tooling, actor model, embedded monitoring services and such from scratch. You may well find Go significantly slower than Elixir by the time you finish baking all that into it.
Developer hours are significantly more expensive than hosting. I'd happily pay double for hosting if it meant I could need half the people to accomplish the same task.
You're right about the abstraction, but Elixir is slow because the runtime is slower, due to immutability among many other reasons. If you don't use all the message-passing features and just do pure CPU computation, you will find it's still pretty slow.
I've never used Kubernetes so I don't know what it provides. But I doubt it could provide the mechanisms for two application nodes to connect to each other and automatically share real-time state. Or the monitoring services to the underlying virtual machine's green threads.
Actually which of the above does Kubernetes really provide as it is in the Erlang VM?
In my experience, having clearer cpu metrics is more useful than any hypothetical latency benefits.
However, the hypothetical benefit would also depend on the cpu model; it made a little more sense when they introduced it than it does now. Back then, CPUs took a significant amount of time to change power states, so going to sleep and waking up for an event that comes shortly after could involve quite a bit of delay. Even if the processor didn't fully sleep, it may reduce the clock frequency, and not increase it until you've done a substantial amount of work.
With more recent processors, these delays are much smaller, and perhaps it would have made more sense to control the power states in another way, but there was some justification.
After a long stint in enterprise Java land, Go was a really big adjustment. Mostly about letting go of unnecessary complexity. I didn't realize how much I didn't miss that complexity until I recently went back into Java. If you learn the golang way of doing things, the issues this guy mentions really are not something you run into.
I think Go will suffer the same fate as Java. Any enterprise language that becomes too popular will slowly devolve into an enterprise monster.
If you come to Java with a blank slate and try using it with simplicity in mind, there's not much wrong with it. But once people add design patterns, layers upon layers, reflection, annotations, etc., you get a monster. And somehow the enterprise world seems to always create those.
The problem with Java is that there isn't enough of it to have anything wrong with it. The programs have all those design patterns because the language lacks so much power that you have to write it all yourself. For instance, having value types would really help.
The one thing they should take away is namespaces. You can't write unreadable enterprise software if you can't put every 10-line class six packages deep for no reason.
- Compared to Java: Ecosystem is way over-engineered. You might get along just fine without writing a bunch of boilerplate and factories and XML configs, but sooner or later you're probably going to pull in some dependency that does and have to deal with a bunch of clunky APIs and other annoyances.
- Python: Possibly the only language that's even worse at dependency management than Go. Installs packages globally. The solution is to create a "virtual environment" which is code for hacking up your PATH.
I like Go because:
- I can compile to a single statically-linked binary and cross-compile for other platforms
- I can choose to either vendor dependencies or use go-dep to create a dependency lockfile. I can have multiple GOPATHs with minimal magic.
- The "go" CLI handles pretty much everything I need without an explicit config file like pom.xml or package.json.
- The language lacks features. There is no magic. Way more so than even Python, there is one obvious way to do things and there's even an official style guide, so it's easy to jump into someone else's code and understand exactly what's going on immediately. After working with Scala professionally for a few years, I firmly believe this to be a feature, not a flaw.
- The standard library is incredibly comprehensive. It's completely possible to write a web service without importing a single external dependency. There has been a lot of thought put into library APIs for things like byte Readers/Writers and HTTP handlers, to the point that external libraries still usually stick to these standard interfaces. Contrast that with a language like Python where urllib sucks so everyone uses requests.
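As a minimal sketch of that stdlib-only point (the route and port here are arbitrary choices): a complete JSON-over-HTTP service with zero external imports.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// helloBody builds the JSON payload; kept as a separate function so
// it is easy to test in isolation. A single-key map always marshals,
// so the error from json.Marshal can safely be ignored here.
func helloBody() []byte {
	b, _ := json.Marshal(map[string]string{"message": "hello"})
	return b
}

func main() {
	// The entire server: routing, HTTP, and JSON all come from the stdlib.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		w.Write(helloBody())
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

`go build` on this produces a single static binary with no framework, no config file, and no dependency manifest at all.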
Obviously completely get that this is subjective, whatever language you're happiest and most productive in is the correct choice for you, etc, but if I might address your points on JS/node perhaps you'll find something useful there:
> == vs ===
Just never use ==, and you're done. This is pretty much a de facto standard. It essentially has no legitimate use case that couldn't be expressed more explicitly with at most one extra line, and should just be ignored.
> undefined vs null
I've only very occasionally run into this being an issue. The standard seems to be to use the falsy nature of both undefined and null to check for them (!x rather than x === null, for example), so it doesn't really come up much in practice, but anyway...
> TypeScript may solve some of these issues
Yes, it solves both the above :-)
> Dislike dealing with promises / callbacks
Firstly, callbacks are hardly seen anywhere anymore. And if they are, certainly no more than one level deep, and that's assuming you don't just promisify the callback function anyway, which is trivial to do.
For promises, async/await makes dealing with them syntactically much nicer in Node. It reads pretty much like synchronous code, and behaves like it too.
I think that when GP said "ecosystem is over-engineered", they didn't mean that it's too rich; they meant that it's needlessly complex.
Which, I'm not sure there's a more durable reputation in all of informatics than Java's reputation for over-engineering things.
There are cleaner Java libraries for most things nowadays, but the culture around the language is also such that, in many companies, it is politically much easier to use another language entirely than it is to use Java but choose a REST framework that doesn't implement JAX-RS.
It's not always clear how to do that. Sure, in theory it's possible to manually download JARs, configure your classpath, and run javac. In practice, the wisdom seems to be to replace the old ecosystem with a new ecosystem like Maven -> Gradle or Spring -> Spring Boot.
In comparison, the entire standard build process for a Go program is:
$ export GOPATH="/path/to/repo" && go get && go build && ./main
No config files with some DSL to learn or handwritten XML, no weird class loader behavior to track down, no tuning memory limits or any of that stuff that is just par for the course in Java.
And then I get a jar. Or maybe a war. Or maybe a zip? And maybe it's a fat jar with all dependencies bundled. Or maybe not and I need to fetch a bunch of deps and configure a classpath. And then I need the right JRE wherever I'm going to run it. And then I can run it like 'java -jar'. Or maybe I need a server like Tomcat. Or maybe something else.
And every time you jump into an existing codebase, you need to scrutinize the docs that are hopefully there to figure out particulars of the build/deploy process.
This is any language in any code base. Even in Go I will need to figure out if go modules, dep, or something else is being used. In rust, does this project produce a bin or rlib? Having options isn't a bad thing.
These complaints are usually by people who have not been using modern Java, or who have just been reading or hearing unsubstantiated claims about it. Or who haven't worked in large golang projects to see all the mess it brings with it because of how underpowered it is.
Look up libraries like Spark or Javalin and you get something quite light weight. Or DropWizard if you need something more holistic.
That being said, the moment you need something more involved, say validation, DB access, pre- or post- endpoint call processing (e.g. for authorization), then golang completely falls on its face. There is nothing in golang that compares to Jooq for instance, and because golang doesn't have annotations, you can't do automatic validation or authorization, and you end up having to do everything manually in a verbose and error prone manner.
For static compilation in Java, GraalVM is supposed to be quite good.
You are right that big (especially enterprise) projects in Go are also a mess. But one of the main reasons is that people have a Java (or C++) background and try to replicate all sorts of complexity.. not because Go is "underpowered".
Also if you compare those codebases to Java (again enterprise) projects of a similar size they seem quite readable instantly..
> You are right that big (especially enterprise) projects in Go are also a mess. But one of the main reasons is that people have a Java (or C++) background and try to replicate all sorts of complexity.. not because Go is "underpowered".
That hasn't been my experience at an employer. Their devs mainly had Python and NodeJs experience and similar languages (which is what the first version of the code base was written in), and disliked "enterprisey" code. Somehow, the decision to move to golang was made.
Yet, they somehow managed to come up with their own mess, and yes it is mainly because how underpowered golang is. I keep thinking about how much simpler the code base would be if it were written in Java, let alone something like Kotlin.
I've used Spark in prod. Worked great and was the highest-QPS service in the company I worked at. It was pushing >5GB/hour of JS over ~50 running containers at more than ~1000 QPS/container, each doing a bunch of crypto (AES) and data wrangling (JDBC). The only thing that was difficult was serving SSL, but that was the Java SSL-key ecosystem's fault. Ideally SSL would come from an edge load balancer, but this company had a strange requirement that there be no port-80 traffic.
Not OP, but in my experience, it's harder to know how to get from System.out.println("hello world") to GET / in a browser showing "<b>hello world</b>", without adopting some intensely documented framework and infrastructure, along with obscure XML configuration files. I liked that aspect of Clojure, but I wouldn't have a clue how to achieve the same thing in pure Java.
Thanks for sharing these links! My point was that I didn't know how to do this, not knowing these frameworks existed and unable to find them on my own search. You solved that issue for me. If these frameworks get more attention, maybe it'll be solved for more people!
Aside from what others have said, the single tiny binary and cross-environment compilation is a huge plus. I can compile my largest service in seconds for Windows, Linux, and Mac (we still use Docker, but not for the cross-environment reasons). It's simple, concise, and fast out of the box. I also have been bitten far too many times by the JVM being RAM hungry and, frankly, I focus on startups which don't have time or resources to spend tweaking VM variables. Granted, I am sure Java and the JVM have come a long way since I last used it heavily in 2013, but I haven't found a single reason to want to go back.
As far as Python, same kind of reasons. Single binary makes deployments easy, static analysis and built in tooling makes life easier, and I just find it more enjoyable, which is completely subjective.
1. Java - did not want to adopt the entire ecosystem. This is very much a "you wanted a banana but got a gorilla holding the banana and the entire jungle" type of story.
2. Python - dynamic. Don't want that. Go's minimal typing is perfect. It's easy to deploy (binaries). It's fast. It can scale well. It's opinionated (love this).
Python and Java both encourage and allow developers to flex creative solutions that are hard to maintain long-term. Sure, seniority helps with that, and being part of a good team; however using Go, you just run into that less due to the conciseness of the language and strong idioms.
Python with type annotations and use of the mypy typechecker is really darn great.
I work on Golang stuff at work where we made the switch after the troubles associated with refactoring Python. Recently though, I 'typed' a personal project of mine that was fairly large, and it's become a pleasure to work on.
IDE integrations of mypy warn you as soon as type errors occur. The fact that the type annotations are first-class features of the language and not embedded in comments also makes it great. The compromise of type-safety at the boundaries where you interface with 3rd party APIs that don't provide type annotations (the major ones do) does not get in the way too often contrary to what I expected.
> mypy makes Python a pleasure to work with again.
This is my experience as well. I love that I have the option to use Python typed and untyped. For little scripts, fiddles, prototypes, etc. it's often convenient to omit type annotations. For solid software that runs in production it's nice to have them.
Another thing that I'm excited about is Nuitka, a Python-to-C compiler that lets you create binaries from Python code. It cut the start-up time of a command-line tool I compiled with it in half.
You can, but you can't deploy them. If you really want to keep your services separate, you need a separate JVM installation and jars for each service. Go gives you this automatically with a simple binary. Go's standard library is an order of magnitude (at least) better than Java's, much more comprehensive, much more cohesive, and much more capable.
I've been entirely satisfied with Go when using it to interact with kubernetes and AWS and do what doesn't amount to much more than CRUD and logging. I've never implemented any non-trivial algorithms with it though, which is what the article appears to be about.
If there's anything I'd draw the most attention to from this, it'd be the conclusion:
"If your program is small and can mostly be described by what it does... then Go is fine. If it’s large, if it has non-trivial data structures... or if it will be dealing with a lot of data from the outside, then the type system will fight you for no benefit and you’re better off using a different static language (where the type system helps) or a dynamic language (where it doesn’t get in your way)."
That's a much better way to express my sense after almost a year of working with it that I'd ended up with a Pascal-like subset of Java with most of the liabilities of the former. And I wanted to like it going in ("hey, some people love it, and it's more or less Gosling's statement about Java w/o classes come to life!").
This article is from 2016, but it's still relevant. Even in medium sized projects I've been bitten by many of these issues, and for a language that's supposed to eschew magic, there's an awful lot of wizardry going on.
Stopping compilation with an error for every unused import and variable is particularly annoying, so much so that I've patched the Go compiler to treat them as warnings instead.
Ultimately, I think the problem boils down to a language that outgrew its original design, and hasn't evolved elegantly (yet).
Half of these issues are solved by linters which are also well integrated into virtually all common editors/IDEs.
Also regarding warnings/errors: time-travelling back to the '90s, it was normal that C/C++ code - also in open-source projects - was full of warnings and often not portable across compiler/library versions. That was a major pain; I remember experimenting a lot with compiler flags (-Weffc++, -strict, etc.) for my own code to prevent stepping into common pitfalls.
Yes indeed. Hatred of projects full of warnings is one of the hallmarks of a good programmer.
Except that it only happens in C and C++.
Why? Because C and C++ were designed to have sharp edges for you to cut yourself. The default behaviors are absolutely bizarre and stupid, and the breadth of things that are undefined behavior guarantees that any given program is non-conformant. Warnings and -Werr and linters were the only possible ways to keep any semblance of sanity and code quality.
But we're not talking about those dangerous languages and their sordid history. We're talking about the here and now and modern languages, where an unused import NEVER causes bugs, and where unused values aren't flagged as errors in any other language for good reason: we debug code far more often than we write it, and slowing the debug process for the sake of cleanliness is stupid.
And it still astonishes me how unused values are treated as errors in a language that happily allows variable shadowing that is literally one colon away (= vs :=), and doesn't allow file-level variable and function isolation. Those are FAR more dangerous and bug inducing.
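That one-colon hazard is easy to demonstrate. A deliberately buggy sketch (names invented): the := inside the block silently declares a new err, and the compiler says nothing.

```go
package main

import (
	"errors"
	"fmt"
)

func fetch() error { return errors.New("boom") }

// shadowBug compiles without complaint: the := inside the if block
// declares a NEW err that shadows the outer one, so the failure from
// fetch never reaches the caller.
func shadowBug() error {
	var err error
	if true {
		err := fetch() // one colon away from the intended "err = fetch()"
		if err != nil {
			fmt.Println("saw error:", err)
		}
	}
	return err // always nil: the outer err was never assigned
}

func main() {
	fmt.Println(shadowBug()) // prints <nil> even though fetch failed
}
```

An unused variable stops the build, but this silent loss of an error compiles cleanly (go vet can catch some shadowing, but the compiler itself does not).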
Not sure; I mean, there are clear workarounds for the debugging stage. Before pushing to a public branch one would probably remove those the same way one would remove other ad hoc debugging code. Of course there are official reasons for this: https://golang.org/doc/faq#unused_variables_and_imports
Speaking of myself, in various programming languages both ancient and modern I've run into the situation where debugging is difficult because of hard to read code. Code written by others (or written by myself 6+ months ago) is always more difficult to read. Even more so when there are unused artefacts ("dead code").
So often it happens that people have to work under a lot of time pressure and write 300 lines for something that can be solved in 30 lines in the same language. It doesn't matter so much usually but when there is a difficult bug, sometimes the only way to solve it is to thoroughly understand the code. For me it's often easiest to just remove dead code/compactify existing code so there is less code I have to reason about.
Probably a lot of Go's design is about simplicity, its spec can be read in 1-2 afternoons if you're already familiar with the language.
I must admit it's not perfect. What I usually do is add
_ = unusedVar
and then do the debugging stuff. Not sure why, but over time I've run into this procedure far less often. I do very pedantic error checking, like
return fmt.Errorf("foo(%v): %v", fn, err)
and sometimes putting in checks that can return custom errors. So in my log I often end up with messages like "bar: baz: foo(/data.txt): open: file not found". Debugging usually means for me replacing a plain
return err
with a more verbose
return fmt.Errorf("...: %v", err)
and adding mentioned checks and/or additional log messages. Either with the log package or logrus. (And possibly adding unit tests/factoring out code into separate functions.) This gives me information content like in a full Java stack trace but in one line.
But YMMV, probably it also depends what kind of software one writes.
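A runnable sketch of that wrapping style, with invented function names, producing exactly the kind of one-line breadcrumb trail described above:

```go
package main

import (
	"errors"
	"fmt"
)

// Each layer prefixes its own context with fmt.Errorf, so the final
// message reads like a one-line stack trace.
func open(fn string) error {
	return errors.New("open: file not found")
}

func foo(fn string) error {
	if err := open(fn); err != nil {
		return fmt.Errorf("foo(%v): %v", fn, err)
	}
	return nil
}

func baz() error {
	if err := foo("/data.txt"); err != nil {
		return fmt.Errorf("baz: %v", err)
	}
	return nil
}

func bar() error {
	if err := baz(); err != nil {
		return fmt.Errorf("bar: %v", err)
	}
	return nil
}

func main() {
	fmt.Println(bar()) // bar: baz: foo(/data.txt): open: file not found
}
```

(Since Go 1.13 you can use the %w verb instead of %v to keep the wrapped error inspectable with errors.Is/errors.As; the message format is the same.)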
Warnings are part of the compiler not the code, and if your project is complicated enough you'll always hit false warnings in some version of the compiler, because some gcc contributor just snuck in a false positive, or a library you're using deprecated something.
Everything being an error makes this worse not better; if -Wdeprecated turns into an error you're just not going to be able to fix everything in someone else's code. Unused code being an error is easy to fix, but it's so disrespectful. I shouldn't get compile errors as I'm editing when I haven't done anything wrong.
> Stopping compilation with an error for every unused import & variable is particularly annoying
It's been a long while since I last touched Golang, but I recall the process of learning it.
My coworker and I were tasked with creating an interface for our employer (a cloud service provider) to allow Rancher (or clients of Rancher - I forget) to use our backend system (which was built out of a combination of PHP, Java, and Bash - among other parts) to provision servers.
Neither my coworker nor I had ever touched Go; we were PHP developers. But Rancher used it and had examples, and we set out to learn it (while we were employed as PHP developers, we both had extensive prior experience with a number of other languages).
It took about a week until we were comfortable enough to begin our implementation, and about a month later we had a working library written in Go that we understood and had documented well, with tests.
But the process: Exactly like you noted! We railed, we gritted our teeth! We shook our fists at heaven and loudly proclaimed "WHY?!"
Because we were so used to leaving things lying about in PHP; for experimentation, debugging, and other reasons. But Go wouldn't let us - no-siree-bob! - you had to make it just so before it would successfully compile, and we tore our hair out over it. Day after day, week after week...and then:
Something changed. We understood. We realized why the designers of Go did what they did, and we also started to wish fervently that PHP could be the same way. We also noticed that by these changes, we no longer needed to leave this "cruft" around, this "dead code" that could possibly cause us to trip up, or wonder if it was needed later, or whatever. Yes, it made development more difficult - but we came to recognise that it helped greatly to prevent errors, or future problems, and kept things very maintainable.
But like I said - I haven't used Go in years since that time; there hasn't been a call or need for it, but it is a language that I keep in my "back pocket" just in case I ever need it. Maybe things have changed a lot since then, or maybe they haven't. Regardless, I learned from that experience that sometimes you have to power and struggle through things before the revelation appears. I got to experience a fairly unique language, and I feel that I'm better as a developer for it.
This is a solved problem in every compiled language I've ever used - at least 10 of them. Unused variables are a warning, and your CI build compiles with a -Werror / --warnings-as-errors flag. That's it. All the benefits you mention, without the "slowing down development" that you mention.
Yes, I think the idea is good. Maybe less annoying if there could be a lax dev mode (-d) that didn't complain about such things until you were ready and in the cleanup stage. Then you'd run format and it would compile in normal mode.
> Stopping compilation with an error for every unused import & variable is particularly annoying
I think the worst part of that decision is that I've never seen a bug arise from unused imports and variables (even in languages with ubiquitous import side effects like Python, imports and import ordering can cause issues, but usually you need the import anyway; you can't just nuke it).
Meanwhile variable shadowing / overwriting or unused return values which do cause issues are perfectly fine by the compiler.
Your fork of Go promotes poor code quality. How’s this different from driving a car with an unbuckled seatbelt draped surreptitiously over the shoulder? The feeling of smug superiority must be intoxicating, but the risk of a violent ejection after a crash and subsequent death is still there.
> Capitalization also restricts visibility to two levels (package and completely public). I frequently want file-private identifiers for functions and constants, but there isn’t even a nice way for Go to introduce such a thing now.
To me this is more of an issue with not understanding some of the conventions the language pushes you towards. An example of this common stylistic mistake is when some repos will have a `lib` package with a large number of utility functions spread across different files. It runs into the author's issue of function and constant collisions. The solution is to group together utility functions into many small packages, which has the added benefit of changing the code at the call site from something like this:
To something terse like this:
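The elided examples presumably looked something like this (package and function names are purely illustrative):

```go
// Before: everything dumped into a catch-all "lib" package;
// the call site is verbose and the package is collision-prone.
d, err := lib.ParseRetryDuration(s)

// After: a small, focused package; the package name carries the context.
d, err := retry.ParseDuration(s)
```

This is the pattern the standard library itself follows: many small packages (strings, bytes, bufio) rather than one big utility grab-bag.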
I've found that Go has lots of these sorts of nudges — dependency cycles generating a compiler error, for example — where the language enforces good style and organization.
However, if the Go compiler is going to enforce these rules, maybe it could be more helpful by suggesting the design tweaks that would help to avoid them, rather than forcing developers new to Golang to go through a kind of trial and error process.
> Structs do not explicitly declare which interfaces they implement. This is done implicitly by matching the method signatures. This design makes a fundamental error: It assumes that if two methods have the same signature, then they have the same contract.
Isn't this just duck typing? Don't other languages renowned for their type systems do this?
> There’s no ternary (?:) operator. Every C-like language has had this, and I miss it every day that I program in Go. The language is removing functional idioms right when everyone is agreeing that these are useful.
An expression-level if/else is much more readable. Of course, since Go doesn't have that either, removing the ternary operator is a bad call.
> The tried and true approach of providing a compare method works great and has none of these drawbacks.
Hmm... except in anonymous cases. This author clearly has some Java Stockholm syndrome.
> The append() function modifies the array in-place when it can, and only returns a different array if it has no place left.
> Don't other languages renowned for their type systems do this?
No. In languages with good type systems, nominal types are very valuable. E.g. "known-immutable list" and "read-only view onto a mutable list" are very different types, but offer the same set of methods.
> Isn't this just duck typing? Don't other languages renowned for their type systems do this?
It's "structural subtyping", which is the type-safe equivalent of duck typing. It's a feature that allows implementations to exist without needing to know exactly every interface they implement. TFA's concern is purely theoretical.
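A minimal illustration of that structural subtyping (types invented): both types satisfy the interface without ever naming it.

```go
package main

import "fmt"

// Describer is satisfied implicitly by anything with a matching
// method set; there is no "implements" declaration anywhere.
type Describer interface {
	Describe() string
}

type Dog struct{}

func (Dog) Describe() string { return "a dog" }

type Server struct{}

func (Server) Describe() string { return "an HTTP server" }

// Neither Dog nor Server mentions Describer, yet both can be
// passed wherever a Describer is expected.
func announce(d Describer) string { return "this is " + d.Describe() }

func main() {
	fmt.Println(announce(Dog{}))
	fmt.Println(announce(Server{}))
}
```

This is what lets a third-party type satisfy an interface you define after the fact, with no changes to the third-party code.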
> That's horrifying.
append() doesn't operate on arrays, it operates on slices. Arrays are fixed-length, contiguous blocks of memory that can't be appended to. Slices are backed by arrays, and if you append to a slice whose backing array is full, it will "grow" by allocating a bigger array elsewhere and copying the original data into it. This is a pretty standard data structure in most languages.
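A small sketch of both behaviors: append reuses the backing array while there is spare capacity (which can mutate other slices sharing it), and reallocates once it is full.

```go
package main

import "fmt"

func main() {
	s := make([]int, 0, 2) // len 0, cap 2

	a := append(s, 1) // fits within capacity: backing array reused
	b := append(a, 2) // still fits: same backing array, now full

	c := append(b, 3) // capacity exceeded: new, larger backing array
	c[0] = 99
	fmt.Println(b[0], c[0]) // 1 99: b still points at the old array

	// The surprising in-place case: b[:1] shares b's backing array,
	// so appending through it overwrites b[1].
	d := append(b[:1], 7)
	fmt.Println(b[1], d[1]) // 7 7
}
```

Whether an append aliases or copies depends on the slice's capacity at that moment, which is exactly why the "possibly both" behavior trips people up.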
> This is a pretty standard data structure in most languages.
Yes, vectors are common, but they don't typically require compiler modifications to avoid mis-using them. In this case I'd make it super clear whether the method mutates the existing value or returns a new one, and "possibly both" seems like the worst of both worlds. If this behavior is preferred, it seems like `append` should take a handle to a slice pointer, possibly to a slice rooted in the stack, and write to it if replacing the underlying slice.
> append() doesn't operate on arrays, it operates on slices
If you go with most languages definition of "slice", that would be pretty horrible just by itself.
But, as you said, Go's definition is different (which is also bad, just not as much):
> Slices are backed by arrays, and if you append to a slice whose backing array is full, it will "grow" by allocating a bigger array elsewhere and copying the original data into it
Which would be pretty much like Java's ArrayList, or Python's list if it were designed in a sane way. So just the name would be non-standard... But the container actually migrates when it is reallocated! That's a broken design in any language.
> If you go with most languages definition of "slice", that would be pretty horrible just by itself.
Most languages don't have a definition of "slice" at all.
> Which would be pretty much like Java's ArrayList, or Python's list if it were designed in a sane way. So just the name would be non-standard... But the container actually migrates when it is reallocated! That's a broken design in any language.
It's not broken, it's just fast and simple. But yes, it does require you to know that Go's slices are not exactly Python's lists (which do gratuitous copies). Java's implementation does an insert on the original list (both surprising and an unnecessary copy).
I think we're going to have to disagree there, then - yes that oneliner might reduce your LOC count and help you win at code golf, but it doesn't actually make things more readable and certainly isn't worth breaking from the rest of the community.
The ecosystem-wide consistency of go fmt is its biggest strength. Adding switches to disable features would eliminate that.
It has one amazing property not shared by any other auto-formatter I'm aware of - it is included with the compiler by default and so is the only formatter in common use. Yes, there are formatters for C++ or Java - but the plural there is not a good thing.
Out of curiosity, what's the motivating case for using Go in 2019's programming landscape?
Edit: I didn't mean this as an insult, I meant it as a genuine question. I don't know much about the Go ecosystem, and I meant to find out what Go's killer feature is in 2019, when other languages now have strong and elegant concurrency, C-like performance with higher-level syntax and without manual memory management, etc.
Go's star feature is straightforwardness. Some people call this simplicity. The upshot is that it is very easy to use Go within teams of inconsistent expertise. Since there is a fairly low ceiling to cleverness, Go rewards just getting started rather than spending time making the code smaller, or more general.
Of course, this is not as good for having fun programming, unless the programmer feels having written something is more fun than writing it.
This seems similar to one of the main virtues of using Java - there's pretty much just one way to do any given thing and there's a "low ceiling to cleverness". Of course, Go's other features stand out much more when we're comparing to Java specifically.
Productivity. There just aren't other languages that let me build things quickly with confidence that those projects will also scale well into the future. You don't need to employ people to set up the build system, you don't need to recruit for people who already know the language, you don't need to quibble about which features to use or what style to use or what IDE to use, deployment is a breeze, etc.
I happen to love the C language, and my brain is very much tuned to how it works and the world view that it embodies. Go feels to me like veteran C language experts made a thoughtful evolutionary step forward. It's certainly not on the cutting edge of programming language theory, but it has solidly moved the baseline ahead. It's a highly opinionated language, but if you approach those opinions with an open mind, you'll find it has a sense of internal coherence that is both tasteful and pragmatic.
Fast compilation speed, easy integration with C, and the "one large static binary" are probably my favourite features, although I do wish those binary sizes weren't quite so large...
- memory footprint management
- cpu speed
- that nebulous feeling I want a program etched in stone (strongly typed)
I've written a lot of API-based clients. Read from a queue and update AWS permissions. Receive a callout from something and pass it along to somewhere else. Come to think of it, I've written a lot of queue-reading workers; some replaced existing Ruby workers for CPU or concurrency reasons.
I think in the space of having computers talk to computers, Go is highly successful because it handles nearly everything with the standard library. So I find a lot of SDKs for specific platforms written in Go that can get something going with the standard library and a single package for the platform.
Kotlin and Swift are both currently encroaching on Go's server-side use cases. For example, Go's star feature, goroutines, has recently been cloned by Kotlin. They are at least as equally pleasant to program in, and they're hitched to major client platforms which means they improve faster and will definitely have a pool of trained developers in the long term.
Java is getting a fiber implementation by means of project Loom, which will make golang even less appealing. The good thing is that because the fiber implementation will be built into the JVM, Kotlin and other languages can use them seamlessly.
Points taken. I've been full time on Go for a few years now; before that I was 100% Java, and I haven't really kept up to date with Java land.
But from my experience with Java, Go, and Python, I find Go a very versatile programming language. Being able to compile binaries means it's great for infrastructure deployments and CLI tools. Goroutines, concurrency primitives, etc. make it a great choice for server/microservice use cases as well. Some would say that not having a JVM is a plus as well.
However, developers do have to be disciplined with error handling and dependency management. But one can argue that this is the case for software development in general.
I'd propose Rust - it's also a static, stack-favoring, non-class-based language with OO features that compiles down to native binaries. Where it differs is a wonderful type system and top tier package management. If you can work through lifetimes and the borrow checker, you might enjoy it.
I have been using Go in production since 2012 and I find it a fantastic tool. Except for point 8 ("The sort.Interface approach is clumsy") the rest are features for me.
I am so grateful for the Go creators for having made it as it is. What worries me is the recent changes: modules (never had a problem with GOPATH) and Go 2 proposals. I hope they are able to keep their vision as it started.
I actually fall into the opposite camp. I never felt comfortable with GOPATH, but found modules extremely natural, perhaps because it's somewhat analogous to how NPM works: go mod init -> npm init; go get -> npm install...
These are still very niche languages. Go has a better shot, albeit still long, with the weight of Google behind it, but there is an unbelievable amount of inertia. Rust certainly hasn't replaced C/C++ to any real degree yet.
Maybe time will tell, but I'd lay money that there will be a lot of people still working on C, C++ and Java codebases in fifty years, just like there is a boatload of COBOL and even MUMPS software still out there in the wild today.
If you get a pebble stuck in your shoe, you can usually get over it within a couple minutes, and start to ignore the minor annoyance. But if it stays there all day, it could eventually get pretty painful. It all depends on how much walking you do.
I used Java as my primary language for about a decade and, to borrow your analogy, experienced a shoe that had quite a few pebbles in it from the outset.
What happened over time was that we developed a way of using Java that was less painful. And it turned out that the bones of Java are actually quite good. If you strip away a lot of the cruft people glom onto their code in the belief that it saves them time, adopt sensible ideas from how other languages want you to think, and keep things very minimal and consistent, Java can actually be quite pleasant.
But this requires you to have someone in your company who can teach people. Often people with many years of experience who will initially insist on not changing their ways.
(The main reason I don't like Java isn't down to the language itself. It is because I don't trust Oracle. It took me nearly a decade to ditch Java as my primary language, but Go provided me with a viable option. Sure, it'll take some time still to reach the level I was at in Java, but after about 3 years, things still look good)
I thought you had misinterpreted it, so I was trying to help clarify by extending the metaphor. In case we're still misunderstanding each other: the pebble is the language's warts, which you can't remove, only avoid or work around. Every language has its warts, but we're talking about Go's here.
I wasn't being rude, and I understood your point. I was pointing out that the metaphor can as easily be used to support Go's position as the critics' position. The metaphor lacks substance. Criticism != rude.
To be fair, in a professional setting that is often not an option when it comes to programming languages. Usually you will have a situation where the language has already been chosen when you join a project. When you start a new project at a company, the language chosen is usually dictated by either what the company uses already or some process for reaching a consensus on what to use. What you don't really want in a professional setting is to use a lot of languages. For a large number of very good reasons.
Picking a language involves compromise.
I think what I perhaps reacted to was the somewhat superficial nature of the blogger's complaints. Especially when claiming to have done non-trivial projects in Go. Perhaps as a programmer, but it doesn't sound like someone who has led development and had to make hard decisions.
Non-trivial projects usually give you plenty of other things to worry about. For instance how good the language is for collaboration, how easy it is to model things, how clear is the code, how does it make use of the machine resources, will the language still have a following 10 years from now, what does the tooling look like, what quality are the open source contributions, is the standard library usable, do the third party libraries cover enough of what you need?
And then there are the developers. You need to be able to recruit developers beyond your circle of friends. You want that pool to be as big as possible. This is why I avoid languages that have too little mainstream appeal. It doesn't help me much if I can hire some Common Lisp guru to build a lovely piece of software if I can't find anyone to maintain it once he or she leaves.
Being the one to decide what language to use for a project is a thankless job because you will always disappoint someone. Not everyone will or can take in the larger perspective. As a manager I seriously do not want to hire developers that always want to program in some other language, often one with a small following. Or who want to change languages every 2 years. Because if they can't learn to live with one of a selection of mainstream languages that are "80% okay", why would I think they are any better at dealing with difficulties in other languages? I expect good programmers to adapt practices - to make languages work for them. To be professionals.
When we switched from Java to Go in my department it was a team decision. As a manager I let my team decide what to use, HOWEVER, I made it clear that everyone will make an effort to learn whatever is chosen and I don't want any whining or excuses. If they found Go to be the wrong thing, fine, we have Java which is the devil we know to fall back on. But only after everyone has made a real effort.
That was 3 years ago. We now have somewhere in the neighborhood of 100kLOC running in AWS and it has been astonishingly productive. Sure there has been _some_ whining about small details, but on the whole adopting Go was surprisingly fast and surprisingly pain free.
I totally get all of these points and understand them, and they're really why I do like Go. Implicit interfaces, for example, means you don't need to change code for something to implement this interface. It's a powerful thing that I find really great, especially coming from years of Java having to update everything I want to implement the new interface. If I don't have access to the code, then it's more boilerplate to do that. With Go, I create the interface I want and anything that implements it immediately works.
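To make that concrete, here is a minimal sketch of implicit interface satisfaction; the names `Stringish`, `Point`, and `show` are made up for illustration:

```go
package main

import "fmt"

// Stringish is an interface we define ourselves, after the fact.
type Stringish interface {
	Describe() string
}

// Point stands in for "someone else's code": it never mentions
// Stringish, yet it satisfies the interface simply by having a
// method with the right signature.
type Point struct{ X, Y int }

func (p Point) Describe() string {
	return fmt.Sprintf("point at (%d, %d)", p.X, p.Y)
}

// show accepts anything that happens to implement Stringish.
func show(s Stringish) string { return s.Describe() }

func main() {
	fmt.Println(show(Point{1, 2}))
}
```

No `implements` declaration is needed anywhere, which is exactly what makes retrofitting interfaces onto third-party types painless.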
I understand why people might dislike a particular language and that's cool. That's why there are many languages and we have the ability to choose what works best for us.
Most of these seem reasonable, but they are not big enough for me to actively dislike the language.
The capitalisation is maybe the most annoying for me. I think it was designed with an IDE in mind (which could automatically update all references), but I still find it a flawed design that changing something from private to public can require touching many places across a lot of files.
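For readers unfamiliar with the rule being complained about, here is a minimal sketch; the type and field names are invented:

```go
package main

import "fmt"

// In Go, visibility is encoded in the identifier itself:
// upper-case initial = exported, lower-case = package-private.
// config here is unexported, and so is its retries field.
type config struct {
	Timeout int // exported field (visible if config itself were)
	retries int // unexported field
}

func defaultConfig() config { return config{Timeout: 30, retries: 3} }

func main() {
	c := defaultConfig()
	fmt.Println(c.Timeout, c.retries)
	// Promoting retries to public means renaming it to Retries,
	// which touches every file in the package that mentions it.
}
```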
The problem with err has also bitten me a few times. I don't like exceptions very much, but a solution like he suggests, where a variable can only be ignored explicitly, would solve it.
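A sketch of the `err` trap being referred to, with made-up step functions: because `err` already exists, the second `:=` quietly reassigns it, and nothing forces the second check.

```go
package main

import (
	"errors"
	"fmt"
)

func step1() (int, error) { return 1, nil }
func step2() (int, error) { return 0, errors.New("boom") }

func run() error {
	a, err := step1()
	if err != nil {
		return err
	}
	b, err := step2() // err silently reassigned here...
	// ...but the `if err != nil` check was forgotten.
	// This compiles without a single complaint.
	fmt.Println("got", a, b)
	return nil
}

func main() {
	fmt.Println(run()) // step2's error has vanished
}
```

The compiler rejects a *never-used* variable, but a reassigned-and-unchecked `err` sails straight through, which is the failure mode the article describes.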
Go is not perfect, but for me there is nothing else better for microservices that I have come across.
I'm still pretty new to it and I've never used Go, but Rust seems to address most if not all of these problems, while serving many of the same original design goals of Go. Probably its biggest disadvantage is the learning curve/iteration speed: it doesn't make it easy to just hack things together, especially when you're first learning.
Yeah, iteration speed is a pretty big blocker for adoption in a lot of domains. Go isn’t perfect, but it lets me get things done today. That said, I appreciate that Rust exists for safety/performance critical domains, and as Rust matures (e.g., as its async story solidifies and consensus emerges around various HTTP libraries, etc), it will be more competitive for general purpose application development. But I suspect there will always be a significant productivity gap.
I suspect that once I've better-internalized the borrowing mechanisms I'll be able to move at a reasonable clip in Rust. The build/package management environment is pretty batteries-included and "just works", and the syntax isn't particularly verbose, just strict. Once getting things to compile is no longer a debugging process for me personally, I don't really see any other intrinsic bottlenecks.
If you don't want to use an IDE to rename, consider the excellent `gorename` tool. It is integrated with plugins for most editors (vim, vscode and atom off the top of my head), or is easy to invoke from the command line.
Golint can enforce the "don't ignore errors" rule. I think it comes that way out of the box?
Most of the items explained are non-issues and part of why Golang is what it is today. Let's hope we can keep the language as-is and don't go back on old decisions and remove features, like Scala has been doing in recent years...
Personally I like Scala more than Go because it is pushing the boundaries on what a language can do. Sometimes there are mistakes but in general, I would rather work with a forward looking technology than with a recreation of the 70s.
They've also significantly simplified the collections classes, removing quite a bit of abstraction. That was always one of the things that struck me as most ludicrous about Scala, so I was pleasantly surprised by that.
Indeed, I've been impressed by the Scala team's recent willingness to remove complex features generally. I didn't see that coming.
I'm happy all of the things you list are going away.
The first one is unreadable. If I saw that in a pull request at work, I would ask the author to change it. (But I wouldn't have to; the people I work with know better.)
For the second, it's confusing to have two different ways to write the same thing; arguably you should always be writing type ascriptions for function return types (even though scalac doesn't require them) both for readability and avoiding bugs.
For the third, argument list adaptation is a great way to write bugs by accident and not notice. Good riddance.
The fourth is really a style thing, and a pretty weak one at that. I don't want random untypeable unicode characters in my code when ones I can type will do.
Biggest issues for me here were generics & the dependency situation, both of which are being fixed. But even back then, I loved using Go, mostly for what it DIDN'T have. Even ternaries, while convenient, inevitably lead to someone using them to write some really stupid shit that's hard to read.
Overall, I'm a fan of losing a few helpful abstractions in order to get rid of all the bad ones, whether in the language itself or in its ecosystem. And Go delivers on that.
If you work for a while at a larger company that enforces a style guide, it starts to make total sense. If your company's style guide already insists that identifiers should be named with some convention that matches their visibility, then by making that part of the language, you are simplifying the system.
I would not be surprised if specifically the Google C++ style guide influenced this decision.
It becomes very annoying and tedious to refactor when you need to change visibility. Suddenly, a single line change (e.g. changing private to public) needs an IDE to refactor it and make sure it gets all instances (which is quite ironic given that golang proponents generally shun IDEs). Now depending on how many instances changed, you would need to split up your diff for readability, or clutter your diff with needless changes.
At the risk of courting controversy, I'll confess that I haven't even so much as considered using Golang on the basis that it's a language ostensibly controlled by Google.
I have reason to believe that my moral stance against Google is shared by many in the development community, but I don't recall ever seeing this sentiment extended to Golang.
The anti-Oracle sentiment is thick and heavy in the development community, mainly surrounding Oracle's predatory business practices and how they leverage them via Java. I've completely avoided investing in Golang for that very reason: I don't want to ever become beholden to Google's services in order to maintain further development in this language.
Not to mention you are mutating an already assigned variable. You run into problems when you want to know the type of serializeType and you look at the first assignment but don't see that there is a second one.
Go's certainly not perfect but I usually find it more productive than Java or C++ for equivalent problems.
For 2, I hate this example. Code that returns or takes booleans that requires reading a comment to understand the value is super error-prone to use. Don't write textually ambiguous interfaces in any language. I just haven't seen this particularly be a problem.
#3 Re: checking errors, we wrote a trivial go vet plugin in house that checks for that.
sort.Slice is probably new since this was written and is way better than sort.Interface if you're sorting a slice.
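A rough side-by-side sketch of the two approaches; the sort-by-length ordering and the `byLen` name are arbitrary examples:

```go
package main

import (
	"fmt"
	"sort"
)

// The sort.Interface way: three methods, even for a one-off ordering.
type byLen []string

func (s byLen) Len() int           { return len(s) }
func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }

func sortWithInterface(a []string) {
	sort.Sort(byLen(a))
}

// The sort.Slice way (Go 1.8+): only the comparison is needed.
func sortWithSlice(a []string) {
	sort.Slice(a, func(i, j int) bool { return len(a[i]) < len(a[j]) })
}

func main() {
	a := []string{"banana", "fig", "apple"}
	b := []string{"banana", "fig", "apple"}
	sortWithInterface(a)
	sortWithSlice(b)
	fmt.Println(a)
	fmt.Println(b)
}
```

For ad-hoc sorting of a slice, the closure version saves a named type and two boilerplate methods.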
Better vendoring and generics are finally acknowledged to be issues and solutions are coming. Hooray.
I created a new language that is very similar to Go, but it fixes many things people often complain about, including all of the points in this article (except #2, but I don't think it's a drawback, really).
It got lots of attention in a couple of months since the public release, and I often hear people say that they feel like this is "Go done right".
The ultimate test is to read someone else's code and try to figure out which concrete struct is actually being used. You will see that you often have to jump through many, many files to get a sense of what is really in play. This problem becomes even harder when people start nesting generics.
Take a look at associated types in Rust. I think it is a better way to do generics because it forces you to specify the type early. It also prevents nested types that are hard to read like <T<A, Cat<B>>> (and this is nothing)
Being able to ignore return values is something that C had before they had function prototypes. The compiler didn't know if something returned a value, so it couldn't check. There's no good reason for that misfeature in a newer language.
Go apparently does it that way to make "defer" work.
It's a mediocre language, in fact the most mediocre language I have ever seen, that is being pushed by managers who want lower hosting costs and faster time to market with good runtime performance.