In "The Soul of a New Machine," there is an engineer who spends months debugging nanosecond-level glitches in their new CPU, snaps, and runs away after leaving a note: "I am going to a commune in Vermont and will deal with no unit of time shorter than a season."
As a software engineer: no, you did not (unless you actually suck at it). Part of the problem with software development as a profession is that the people responsible for resource allocation are often out of touch, and the overall process is complex enough that you can take on a lot of technical debt for a short-term gain. This makes bad managers look good while creating a lot of problems in the long term. Since acquiring technical debt doesn't really break anything in the short term, it's really hard to create a budget line item that allocates resources to it. This continues until something gives, and then you have a crisis somebody can fix and take credit for.
Until there are leaders 'woke' enough to proactively do the right thing, I don't see anything changing in the computing industry.
Lol, dude, I'm 31. I've been doing this for around 10 years now. And I'm alright at it :P
Have you seen how happy homesteaders are?
On a serious note: I'm aware. That's why I started my own company with a friend. It cuts out a lot of bullshit I no longer have to deal with. As (essentially) a contractor, it's easy to tear people a new asshole in a meeting and get away with it. I can't imagine an employee being able to.
First off, people need to stop worshiping the likes of Jobs, Musk and Bezos. They're just money makers, just like Rockefeller, Morgan and Carnegie. None of them are visionaries; all their ideas are easy to spot in history or other people. Like the hyperloop... only about 80 years too late to call that one unique. Ooooo, you put colored pieces of plastic on a computer and made a small digital music player. The Zune was WAY better! The other guy just sold books and moved on to other shit. Congrats on the better mousetrap. But it's not like they did it themselves.
Maybe a homestead on a tropical island... I like coconuts.
“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?” in The Elements of Programming Style (p10 in the 2nd edition).
> If you’re capable of understanding `finalised virtual hyperstationary factory class', remembering the Java class hierarchy, and all the details of the Java Media Framework, you are (a) a better man than i am (b) capable of filling your mind with large chunks of complexity, so concurrent programming should be simple by comparison. go for it.
> ps. i made up the hyperstationary, but then again, it’s probably a design pattern.
Reminds me of a great many programmers in the banking world, especially those who used Spring. Their software often failed, but their knowledge of Java design patterns never did!
What are some valid complaints about XML? I was talking to one of the older IT guys at my company a while ago, and he was all about it because it made serializing data structures very simple. I'm not sure if that's a valid use case or an example of the kind of monstrosity that XML-haters hate.
If all you need to do is serialize data structures, we solved that problem 30 years ago—including all the schema-validation and querying stuff—with ASN.1 (and then re-solved it with Protobufs and Thrift, and solved it again in half-assed manners with JSON, YAML, etc.)
XML is a markup format, a regularization/minimization of the syntax of SGML. It's good at being a markup format—the paired named open+close tags allow for corrupted-stream repair in a way that e.g. Markdown just doesn't. XML is great in, say, DocBook.
But XML gets used for pretty much everything except actual markup. And for everything else, it's not solving those problems well.
It's very complicated, leading to severe security problems, which means you should basically avoid any untrusted, client-supplied XML if you can at all avoid it. It is a step backward from s-expressions, being harder to parse for both machines and humans while being less expressive. Whitespace handling is comically bad. It doesn't have a working comment syntax. The list goes on. The only excusably bad thing about XML is namespaces. The design is not a success, but it springs from a list of desiderata which look quite sensible at first sight.
XML conflates a bunch of stuff (XSLT, XPath, Schemas, namespaces, etc....) making it overly complicated
Then it has 3 different kinds of data: elements, attributes, and content. In other words, <a c="foo">bla<b/>bla</a> is a mess to deal with. Now replace the "bla"s with spaces, and suppose <a> is only supposed to have <b> as a child; in reality it will have 3 children: the text node before <b>, <b> itself, and the text node after.
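A minimal sketch of that surprise using Python's stdlib DOM parser (the fragment itself is the hypothetical one from this comment) shows that the "single-child" element really does have three children:

```python
from xml.dom.minidom import parseString

# The <a> element from the comment, with the "bla"s replaced by spaces
doc = parseString('<a c="foo"> <b/> </a>')
a = doc.documentElement

# Despite <b> being the only element child, the DOM sees three
# children: a whitespace text node, <b>, and another text node.
print(len(a.childNodes))                          # 3
print([child.nodeName for child in a.childNodes])  # ['#text', 'b', '#text']
```

Any code that naively iterates over children has to filter out these whitespace text nodes, which is exactly the kind of incidental complexity being described.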
I think this is why JSON won over XML. It seems like all those extra features of XML would be a benefit, but in reality they just give you more places to hang yourself.
It does not actually solve a problem. OK, I cannot disagree with derefr's comment here regarding text markup. When it comes to serialization... it does not define how to serialize data items. But it has different mechanisms to group them: attributes and nesting. Nesting is usually wrong (one reason being that it only works for 1:n relations, so you need named cross-references (a.k.a. foreign keys in a saner world) anyway). And there are multiple obligatory escaping schemes (quoted attributes, body text, and maybe we could also count tag names and attribute keys).
The hierarchy stuff is so complicated and wrong that I've personally never witnessed a colleague actually using a data schema. Which means data does not actually get validated.
I have to second this; I do not understand why XML receives so much hate. Sure, it is verbose, and editing it without, at the very least, good syntax highlighting can be a bit of a pain. And namespaces. But just use a good XML editor and the pain is gone. And in return you get a mature and powerful ecosystem where communicating data formats and validating data with XSDs or transforming data with XSLTs is done easily. And you have to write almost no code if your language has a decent XML library. Serialization and deserialization are usually easy, and there are tools for generating matching classes for your schemas. I am not really following the development of other serialization formats like JSON, but as far as I can tell, the JSON ecosystem is essentially coming up with analogous ideas like JSON Schema to solve the same problems that XML has already solved.
I find XML good for configuring pipelines since there's a clear "parent" node. Most other things I use JSON for. In other words, XML is good when a human is going to be editing it, but the ROI isn't enough to make a GUI for.
I don't think it's true that XML - in itself - makes serializing data structures very simple. An easy to use XML library might. But that library could use any structured data file format at all, and still make serializing data structures very simple.
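A small Python sketch of that point (the `Point` dataclass and element names are made up for illustration): the one-call convenience comes from the library's mapping between language values and the format, not from the format itself. JSON's library ships that mapping; for XML you have to invent one.

```python
import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass, asdict

@dataclass
class Point:
    x: int
    y: int

p = Point(3, 4)

# JSON: the stdlib library already defines how dicts map to the format
json_text = json.dumps(asdict(p))  # {"x": 3, "y": 4}

# XML: the format prescribes no mapping, so we choose one ourselves
# (fields as child elements; attributes would be equally "valid")
root = ET.Element("Point")
for key, value in asdict(p).items():
    ET.SubElement(root, key).text = str(value)
xml_text = ET.tostring(root, encoding="unicode")  # <Point><x>3</x><y>4</y></Point>
```

A different XML library (or a different team) could just as well have picked attributes instead of child elements, which is part of why "XML makes serialization simple" really means "this particular library's conventions make it simple."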
Years ago there was a list of top 10 lists I had printed out, but I haven't been able to find them. One of the lists was "Top 10 signs you're a Microsoft programmer" and #1 or 2 was something along the lines of: "You think human teleportation will eventually be possible, and XML will be the transport."
Hrm, my experience is quite the opposite. I can read a 2000 page manual which will take weeks because there's no way I can concentrate on all of it and where I probably still won't actually find the answer to my question which is an edge case OR I can just try the edge case and see what happens.
If I knew exactly where to look in the manual then the quote might fit but it rarely does.
It combines all the worst aspects of C and Lisp: a billion different sublanguages in one monolithic executable. It combines the power of C with the readability of PostScript.
> To me perl is the triumph of utilitarianism.
So are cockroaches. So is `sendmail'.
— jwz [http://groups.google.com/groups?selm=33F4D777.7BF84EA3%40netscape.com]
~ and ~
If the designers of X Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles – but you’d be able to shift gears with your car stereo. Useful feature that.
I'm sure if you _just_ use the right experimental language extensions and strangle your program under a mountain of indecipherable nonsense, performing arithmetic at the type-level, you can trivially prevent that though!
This is the greatest sarcastic comment I've seen all year. You nailed the Haskell zealot to a tee. Meanwhile the Forth zealot is getting IBS just hearing about the code complexity from Haskell and the myriad language extensions that make it closer to actual mathematics.
Don't get me wrong, Haskell is really, really cool and some brilliant coders use it. One day I'll actually make it through the Haskell book.
A (data) type is a set, and a value is an element in that set. So you're arguing that set versus element is a false dichotomy, which is preposterous. Even a set which contains just one element is still distinct from that element itself.
Rather, the whole idea behind dependent types (if dependent types can be glibly summarized by a "whole idea") is that logical predicates give rise to sets. E.g. "all integers that are squares" or "prime numbers". Or all points <x, y> such that x^2 + y^2 <= r^2.
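As a sketch in Lean 4 (the names `Squares`, `Disc`, and `nine` are illustrative, not from any library), both predicates from this comment become types via subtypes: a value of the type carries both the data and the proof that the predicate holds.

```lean
-- "All integers that are squares" as a subtype of Int
def Squares : Type := { n : Int // ∃ m : Int, n = m * m }

-- "All points ⟨x, y⟩ with x² + y² ≤ r²" as a subtype of Int × Int
def Disc (r : Int) : Type :=
  { p : Int × Int // p.1 * p.1 + p.2 * p.2 ≤ r * r }

-- An inhabitant pairs the value 9 with a proof (witness m = 3)
def nine : Squares := ⟨9, ⟨3, by decide⟩⟩
```

This is the "predicates give rise to sets" idea made concrete: the proposition doesn't replace the integer, it refines which integers are allowed.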
A proposition is not the same thing as the domain of that proposition's parameters; but it does denote a subset of that domain.
A dependent type can contain a large number of values and that will often be the case.
If we have a situation in the program where we must calculate a value that is a member of some dependent type containing billions of values, and only one value is correct, there is the threat that we calculate a wrong value which is still in that set. That dependent type will not save our butt in that case.
We have to isolate the interesting value to a dependent type which only contains that value. "correct value" is synonymous with "element of that set which contains just the correct value".
Still, that doesn't mean that the set is the same thing as that value.
A big problem with dependent types is that if the propositions on values which define types can make use of the full programming language, then the dependent type system is Turing complete. This means that it has its own run-time; the idea that the program can always be type checked at compile time in a fixed amount of time has gone out the window. Not to mention that the program can encode a self reference like "My type structure is not correct". If that compiles, then it must be true, but then the program should not have compiled ...
> A (data) type is a set, and a value is an element in that set.
Nope, a type is not a set. Type theories are separate formal systems designed to be (among other things) computationally aware alternatives to set theory, and they are defined by "terms", "types" and "rewrite rules". The whole point of a dependent type theory is to be able to encode things like sets and propositions in the same formal system.
The rest of your comment is wildly off based on this misunderstanding.
What the GP was probably referring to is the perspective on dependent types from the lambda cube, where you have three different extensions to the simply typed lambda calculus's regular functions from value to value:
1. Parametric polymorphism: functions from type to value
2. Type operators: functions from type to type
3. Dependent types: functions from *value to type*
Since dependent types allow functions from value to type, this kind of erases the artificial barrier between the type level and the value level. In reality it doesn't really erase it so much as replace it with an infinite hierarchy of levels to avoid Girard's paradox.
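A minimal Lean 4 sketch of (3), using the standard length-indexed vector example (a hypothetical `Vec`, not a library type): the type's index is an ordinary value.

```lean
-- A family of types indexed by a *value*: Vec α n is a list of
-- exactly n elements, so Vec α is a function from Nat to Type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- head only accepts vectors whose length value is n + 1, so calling
-- it on an empty vector is a type error, not a runtime crash.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```

The hierarchy of levels mentioned above shows up here too: `Vec α n : Type`, while `Type : Type 1`, and so on upward.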
Their (3) is the only debatable point (aside from the inherent subjectivity of statements involving the word "should", which applies equally to you), since dependently typed languages tend to put a lot of the burden of proof onto the programmer. The other points are practically true by definition.
If types and values were identical, there would be no need for a runtime in any program, and that is simply not possible. There are many things which can only be modeled as values and not as types. (Randomness, for instance.) Dependently typed languages allow a little bit of value-level feedback into the type system, but they certainly do not eliminate the distinction.
And certainly none of this is free. Indirection has costs, and trying to model the entire behavior of a program in a compile-time system is going to have costs. Some of those costs will push solvable problems into an unsolvable state, and then things break down. (For another example of this: software can try to emulate hardware, but there are things that hardware can do that software cannot and vice versa, so it wouldn't make sense to say they are identical.)
>The other points are practically true by definition.
"More guarantees" and "more assumptions" are not definable terms. You can't count the number of theorems that need to be true for some model to be accurate.
You seem to be operating from an overly narrow perspective. Type theories were invented for mathematical logic and only co-opted for programming later. It doesn't make sense to try to prove things about type systems by referring to a runtime. The rest of your objections show similar misunderstandings of the question. While I said myself a dependent type system is not free, its costs have nothing to do with "indirection", whether you mean type system complication or pointer chasing (the latter is more likely to be reduced). Randomness is about computational purity, which is nearly orthogonal to the expressiveness of your type system. I'm not completely sure what you're trying to say about hardware, but I'm pretty sure it's a red herring, at least from the theoretical perspective relevant to characterizing dependent types.
I don't know what you're trying to say. The point I am trying to refute is "the whole type-level and value-level is a false dichotomy", and the existence of dependent types does not support that claim; they only provide a narrow domain where the distinction is blurred.
> Randomness is about computational purity, which is nearly orthogonal to the expressiveness of your type system.
If types and values are to be considered identical, then random types should be just as useful as random values. They are not. That alone should indicate that values and types have a well-motivated distinction.
Additionally, dependent types offer no additional capabilities, only convenience. Having a type defined in terms of a value communicates nothing more to a compiler than would a functionally equivalent assertion, though it would certainly be easier to use and work with in some situations.
I'm not trying to say anything about dependent types here. I'm trying to say that the goal of producing a language which makes no distinction between type and value is neither useful nor possible.
To be quite blunt, what I'm saying is that you don't have enough context to have an informed opinion on the relationship between dependent types and the distinction between types and values. It's pretty clear you don't have a clear idea of what a dependent type system actually is. People who study type theory and logic are pretty much on the same page about this.
I don't think the person you're responding to is implying that.
Rust eliminates entire classes of bugs at compile time that plague other languages, usually out in production. "NoneType has no attribute 'enough'", "KeyError: this dictionary never has the key you want", etc. Multiple times per day I'll have a website fail to render simply b/c I blocked cookies (and they try to use localStorage, and the call fails, the failure isn't handled, and it bubbles all the way out, crashing the site).
Sure, unit tests could help here, but that's the thing about a typechecker: it's a unit test (of sorts) that you can't avoid (which prevents laziness), and I feel it is easier in the long run (writing the equivalent unit tests in Python requires more work). IME, laziness usually wins out, and sloppy code winds up in production.
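A tiny Python sketch of the failure mode being described (the `find_role` function and its data are made up): the unhandled-`None` bug only surfaces at runtime, whereas a static checker like mypy, or a compiler like Rust's with `Option`, would reject the unguarded call before the code ever ran.

```python
from typing import Optional

def find_role(name: str) -> Optional[str]:
    # Hypothetical lookup; returns None for unknown users
    roles = {"alice": "admin"}
    return roles.get(name)

role = find_role("bob")

# role.upper() here would raise AttributeError at runtime, the Python
# equivalent of "NoneType has no attribute". The guard below is the
# handling that a typechecker would force you to write.
display = role.upper() if role is not None else "unknown"
print(display)  # unknown
```

The point is not that the guard is hard to write; it's that nothing in plain Python forces you to write it, so the lazy path ships.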
The Clemenceau quote is not grammatically correct; it should be: "Le meilleur moment _de la_ programmation". Also, I've seen "développement" used more often than "programmation", even if both terms are valid.
Still, where does it come from? The only Georges Clemenceau I've heard of died in 1929.
Downvoter, please get a life. It's hard not to feel soured by such miserable gestures as yours. Mine was the first comment on here; I thought maybe there wouldn't be any comments. It was a great collection, and I wanted people to check it out and to express my gratefulness; not sure what else I should've done. I guess I should just accept that such spiteful downvotes happen, but it's a shame. Maybe you wouldn't even have seen the page had I not upvoted and commented...
Pity the list isn't updated anymore, or uriel would surely have put these gems by Charles Forsyth in it:
[B]y treating "compiling C" as a subset problem of "compiling C++", gcc and especially clang are vast whales whistling at the wrong frequency for the problem.
Plan 9 C implements C by attempting to follow the programmer’s instructions, which is surprisingly useful in systems programming. The big fat compilers work hard to find grounds to interpret those instructions as ‘undefined behaviour’.
Dennis Ritchie, in his ‘noalias must go’ essay, described that proposal as “a license for the compiler to undertake aggressive optimizations that are completely legal by the committee's rules, but make hash of apparently safe programs.”
(That he did not write something similar about ‘undefined behavior’ makes it clear that no one at the time intended or even guessed at the mess it would make of the language.)
Often the generalized parsing/lexing/codegen/etc. algorithm that exists in service of -O1 through -O3 can be turned "down to a trivial level" with -O0, but not "off"—as that would require an entirely separate code-path.
I believe a more appropriate noun is "contempt" :-)
I found this page some time ago. It is kind of long and heavy and not super quotable, but maybe some of you will find some gems in there.
Top of the page is Xah Lee, a lovable, and sometimes a bit weird, cranky master of contempt:
> The basis of computer languages' merit lies in their mathematical properties.
> It is this metric, that we should use as a guide for direction.
> As an analogy, we measure the quality of a hammer by scientific principles:
> ergonomics, material (weight, hardness. . .), construction, statistical
> analysis of accidents/productivity/… …etc., not by vogue or lore.
> If we go by feelings and preferences, hammer's future will deviate
> and perhaps become dildos or maces. (Xah Lee in comp.lang.lisp, July 2000)
I think Xah is cranky but often on to something. For instance, point 6 in his list of "Python Doc Idiocies" is hard to disagree with :-).
I don't value contempt. I don't think that contempt is strongly associated with programming or computer science. I would consider that to be a problem. I can't condone the expression of contempt in the public discourse related to computer science. I do not typically have much free money for donations, but I would on principle refuse to financially support contempt. I would wonder that anyone would consider that a positive trait. I certainly think that this attitude would discourage newcomers to the field, and I think it's worth working to promote a different image and mindset.
I'm not suggesting that you support "contempt", but the person behind the writings, and only if you enjoy the writing style.
Also, Xah has all sorts of other documents where he carefully tries to avoid the mistakes he criticizes in other documents, and where contempt is not the main driver. I'm sure he spent countless hours writing those materials, and he's made those documents available for absolutely free!
The best intellectual and artistic endeavors are deeply flavored by the emotions of their authors, which gives the works a measure of authenticity. I personally can appreciate that, the same way I can also appreciate a lot of forms of dark humor.
Also, I'm OK with others not sharing that appreciation.
I've heard about this guy before. Reading a few of his posts in the second link (particularly the one about children), it seems like he was a deeply unhappy individual. Rants and anger can be fun, but wallow in them too long and it begins to feel like there's nothing else.
He was clearly a very talented programmer though, and as someone who reads and writes Go regularly I have a lot to thank him for.
Programming is tough. That doesn't necessarily mean that we need to emphasize the negative aspects. However, the more concerning element to me is that there are a fair number of gratuitous insults. I am as prone to making and/or enjoying a witty jab as the next person, but I recognize that this is not a wholly positive trait, and try to resist the impulse. I feel like the distilled genius of this profession has something finer to offer than Perl-bashing, or at least, I hope so.
One interpretation: I think there's a dictum (due to Dijkstra, maybe?) that says something like programming = data structures + algorithms. If you remove the 'algorithms' part, then you're left with data structures on the right, and, presumably, an impoverished programming language on the left.
Thanks! Wirth was going to be my second guess. For some reason, possibly because I had the order switched (or because the '+' itself, rather than just the words, is key to the phrase), I couldn't find the phrase by Googling.
It means that a data structure is an object which supports a very limited range of algebraic operations. E.g. for a stack, you just have push and pop. That's the whole set of actions for that data structure. You could argue that it constitutes a tiny self-contained language within any host language that supports the data structure.
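A minimal Python sketch of that "tiny self-contained language" (the class is illustrative, not from any library): the stack exposes exactly its two algebraic operations and nothing else.

```python
class Stack:
    """A stack's whole 'language' is push and pop."""

    def __init__(self):
        self._items = []  # internal list, hidden from the interface

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2  (last in, first out)
print(s.pop())  # 1
```

Everything you can legally say about a stack is a sentence built from those two verbs, which is the sense in which the data structure is a small language embedded in the host.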
One thing it brings to mind is this old argument: a configuration language is just a scripting language where you can't do anything useful to decrease repetition. (Or: Lisp programs don't contain config parsers.)
Instead of a "stupid" fixed set of declarative configuration options, just expose your program's high level abstractions as a set of macros and functions, and then let the user "configure" the program by just... writing a program, which uses those high-level macros and functions to do what they want.
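A hypothetical Python sketch of that style (the `pipeline` combinator and the stage functions are made up): the program exposes one high-level abstraction, and the user's "config file" is just a few lines of code composing it.

```python
# The program exposes this combinator instead of a fixed config schema.
def pipeline(*stages):
    """Compose stage functions left to right into one callable."""
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

# The user's "configuration": ordinary code wiring up exposed functions.
process = pipeline(str.strip, str.lower)
print(process("  Hello World  "))  # hello world
```

Adding a stage, deduplicating repeated setup, or computing a stage conditionally all come for free, because the user has a real language rather than a declarative option list.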