Processing parentheses is still parsing. It's kindergarten-level parsing, but it's still parsing.
I've written thousands of lines of Scheme code preprocessed to eliminate most parentheses. I'm convinced Lisp self-asphyxiated by fighting a last stand on parentheses, which is so far from the point of Lisp. With modern editor language-server support and syntax coloring, languages can shape-shift into any appearance they want, separate from meaning.
While I love the poetic look of parenthesis-sparse Lisp, Bill Joy got it right when he said that code density on a screen affects programmer productivity. I love Haskell, both for this and because it lets one reason about the activity of programming as if one were doing algebra.
While Lisp more easily modifies itself than any other language, macros are bolted on. No one can tell me with a straight face that with a thousand runs of the civilization simulation, our Lisp macros would be the best run. I'd love to see a language where manipulating the language is the core strength, and all ease with data structures follows as a consequence. Lisp is not that language; peek into some of the other simulation runs.
The parenthesis issue is one that I simply cannot grasp, but I have to acknowledge it as real simply because so many programmers have voiced it. As far as I can tell there is no correlation between who has an issue with them and who does not. New programmers vs. experienced, young vs. old, bad vs. good: it does not matter. Some programmers love them, and for others they are a showstopper.
For me, until you demonstrate the same level of structural editing support and an equally powerful macro system without S-expressions, you can pry my parentheses from my cold, dead hands.
Some say that with a "proper IDE" or "proper editing support" this is possible for other languages too, but I ultimately doubt it for any language that does not mark the beginning and end of its expressions, and for any language that consists of a mix of statements and expressions. The point is that parentheses make it unmistakably clear where something starts and ends, which means an editor has no problem figuring it out. If a language has statements and blocks and expressions and whatever else, but these are not delimited by tokens like parentheses, then there will be ambiguity. That ambiguity cannot be removed, and it limits how good an editor can be at selecting exactly the portion of code you want to select and then move, wrap in something, or do whatever with.
Some languages replace such parenthesis tokens with "begin" and "end" keywords, for example, but imagine what it would look like if you had to use "begin" (5 characters) and "end" (3 characters) for absolutely everything that parentheses are used for in lispy languages. Now that would be a syntax nightmare.
Portraying widely held opinions as propaganda is quite counterproductive, bordering on denial. Since you provide no reasoning one way or the other, I cannot say more about your personal views, but...
Denying Python's simplicity is surprising. Clarity of syntax, avoidance of superfluous tokens, and ordering of code in speak-order are a few of the reasons for its simplicity. Interestingly, Python fails at syntactic clarity exactly where it adopts functional-style constructs: list comprehensions, for example.
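To make the list-comprehension complaint concrete with a made-up example: once a comprehension nests and filters, the output expression is written first but executes last, and the loops read outside-in rather than in speak-order.

```python
# A nested, filtered comprehension: the output expression `x * x`
# comes first in the source, but it executes last.
matrix = [[1, 2, 3], [4, 5, 6]]
flat_evens = [x * x for row in matrix for x in row if x % 2 == 0]

# The equivalent loops read in execution order ("speak-order"):
flat_evens_loop = []
for row in matrix:
    for x in row:
        if x % 2 == 0:
            flat_evens_loop.append(x * x)

assert flat_evens == flat_evens_loop == [4, 16, 36]
```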
Like it or not, a language like Lisp, which insists on Polish notation, is just unnatural, at least for most Western speakers. (I don't know of any spoken language that would map to placing verbs first in a sentence.)
This is silly. All programming is already in that same order for anything that isn't math. And even math has plenty of prefix usage: sigma, gamma, etc.
So, I should use a term other than "propaganda," as that one is negatively loaded. Consider it, rather, effective marketing.
Edit: and apologies for dodging the point about Python being simple. I really don't know how to respond to that. I still struggle heavily to read most Python I am exposed to, as most of the ETL I see would be far easier to express in SQL.
This feels like a somewhat dubious metric, all told. For one, the complexity of a language says little about the complexity of programs made in that language. That is usually influenced more by the age of the code base and how many people are contributing to it.
Still, if you find the article, I'd be delighted to see the argument being made.
> This feels like a somewhat dubious metric, all told.
The claim was: Python is easy for beginners. Certainly a simpler, more consistent grammar is easier for beginners.
For me, I left Perl for exactly that reason. Python just seemed to "fit in my head" much more easily than Perl. Scalar vs List context is a PITA.
> For one, complexity of language says little about complexity of programs made in language.
True. But I can also tell you that I could come back to my Python code a couple months later and still understand it. That never happened with my Perl code.
And, this really got put to the test when Python code I wrote 15 years ago came back home to roost. Yeah, I cringed at some of the ways I did things in the code (although probably I should be kinder to my younger self--it was Python 1 code chewing through nasty binary formats), but I could still understand it and debug it.
> I'm convinced Lisp self-asphyxiated by fighting a last stand on parentheses, which is so far from the point of Lisp.
I think the reason Lisp self-asphyxiated is because people don't get it.
Most spend a little bit of time programming in it as if it were C/Java/Python/whatever else they used on their last project. They just see a language that can do what they need but is all wonky, so why bother?
Compared to other languages, to get the benefits of using Lisp you actually need to understand what the benefits are and what causes them.
You can switch from C to Python and reap benefits of Python without having some kind of deep understanding.
That's because programming in Python is basically the same thing as programming in C, just more efficient. There is less stuff to set up (unless you run into compatibility problems), less stuff to take care of (begone, pesky pointers and memory allocation!), and better APIs for repetitive tasks.
On the other hand, programming in Lisp is different from every other language. But you can also program in it as if you were programming in any other language. So you get confused about what makes Lisp different, and if you don't stick it out you might think you've learned Lisp and wonder "what is all the fuss about?"
I think the move from regular programming to Lisp should be presented as the same step change as going from not programming to programming.
A person who has never programmed and never heard about programming might have a lot of trouble understanding why you would even want to program (in the end it is just adding numbers -- why would that accomplish anything? I am not an accountant).
For me it's that Lisp just sets up the language to be so easily used at multiple levels of abstraction, which is at once a truism and also not nearly enough to communicate what it is like to use.
At this stage there are likely many languages that will let you work at a similar number of abstraction levels, including syntax, but I'd be surprised if many feel so much like idiomatic code. Writing macros has some constraints, but it's quite possible to internalise them and start writing s-exp-manipulating code as if it were just any standard problem.
To be able to utilize Lisp well you need to be able to structure your application, but that requires a lot of prior experience and exposure to well-structured applications.
Whereas if you are a Java developer you get structure from the people who designed Spring, and now this is how you fill in a controller, this is how you write a service method, this is how you build your database layer... all these decisions were already made for you, and you just need to follow along to get a passable result.
Freedom is a double-edged sword. If you don't know how to use it it may very well be worse than not having it in the first place.
> That's because programming in Python is basically same thing as programming in C, it is just more efficient.
Depends on what you're doing. If you're writing an operating system, programming in Python is not at all the same thing as programming in C. Ditto if you're writing an embedded system, especially if it uses memory-mapped IO.
Well, I meant from the point of view of the act of programming.
Those strengths and weaknesses frequently come down to some pretty arbitrary choices the author made and have nothing to do with the language.
It clicked for me only after getting significantly far into SICP back in the day, doing all the exercises and creating things inspired by the book. Before that I just thought Lisps were some weird Emacs thing. While this was a long time ago for me, I think it will still work for you to feel that 'click'.
For me it clicked after I read Practical Common Lisp, ANSI Common Lisp, On Lisp, and Let Over Lambda, and after I did a couple of projects in Common Lisp.
One day I looked at various REST API clients generated by JHipster. The clients generated for all languages had huge amounts of boilerplate.
"Well, that's fair for the ability to generate it from a DSL," I thought to myself.
Then I looked at the code generated for Clojure client and it was many, many times smaller and looked as if somebody wrote it by hand. It was nice and neat.
Basically, the calls to the APIs were macros, and rather than generating a stack of code to be put in the repository, the macros generated everything at runtime. The DSL was translated 1:1 into calls to macros, and all the complexity of the generated code was completely hidden.
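A rough Python analogy for the runtime-generation approach (the endpoint names and DSL shape here are invented for illustration): instead of emitting a file of near-identical client functions into the repository, build the callables from the spec at runtime, so the DSL maps 1:1 onto a generator call.

```python
# Hypothetical DSL: each entry describes one REST endpoint.
API_SPEC = {
    "get_user":    ("GET", "/users/{id}"),
    "list_orders": ("GET", "/orders"),
}

def make_client(base_url, spec):
    """Build one callable per endpoint instead of generating source code."""
    def make_call(method, path):
        def call(**params):
            url = base_url + path.format(**params)
            return (method, url)  # a real client would perform the request here
        return call
    return {name: make_call(m, p) for name, (m, p) in spec.items()}

client = make_client("https://api.example.com", API_SPEC)
assert client["get_user"](id=42) == ("GET", "https://api.example.com/users/42")
```

The boilerplate never exists as source text at all; it only exists as closures built from the spec, which is roughly what the Clojure macros achieved.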
This caused me to spend considerable time thinking about the nature of difference between lisp and all those other languages and at some point it just clicked.
Actually, I learned by programming assembly for a long-gone platform, 25 years ago. I learned Python only after many years of programming professionally in C, Perl, Java, and Common Lisp.
Experience with Lisp gave me a certain way of looking at programming languages. Rather than "oh, how fun!", I am more like "let's see which subset of Lisp this one implements".
The only really interesting language I have learned in recent years is Rust, which truly does something new, and it recalibrated my thinking about the utility of making further subsets of Lisp. It seems there is still a lot of room for improvement at the language level, and a lot of that improvement may come from restricting the developer from doing certain things rather than granting unrestricted freedom.
> I'm convinced Lisp self-asphyxiated by fighting a last stand on parentheses
Lisp self-asphyxiated because the teaching materials were garbage^W poorly focused for normal programmers from 1985-1995.
Stupidly useful things like "Hash Tables" and "Imperative Linear Loops" were always an afterthought/footnote in Lisp books, as opposed to "Recursion and the Y Combinator," which made up 70%+ of the books and was 90+% useless for daily programming.
Perl stood up and said "Regexes and hashes are important, useful on a daily basis, and easy to use." And consequently it wiped the floor with the non-systems programming community until Python.
As I understand it, a macro sits at a metalanguage level above the current code (e.g. C macros vs. "normal" C code). In Lisp, this metalanguage is itself written in Lisp forms, and thus it is parsed by... the very same Lisp parser as the code. Therefore the whole language is available to you at this meta level, and at all the meta levels above it (i.e. for macros that build other macros), so it's Lisp all the way up.
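The "same parser, same language" point can be sketched in Python-ish terms (a toy illustration using nested Python lists, not how any real Lisp is implemented): a macro is just an ordinary function from code-as-lists to code-as-lists.

```python
# Code is represented as nested lists, e.g. ["unless", cond, body].
# A "macro" is then an ordinary function that rewrites one list into another.
def expand_unless(form):
    _, condition, body = form
    return ["if", ["not", condition], body]

source = ["unless", ["ready?"], ["launch!"]]
expanded = expand_unless(source)
assert expanded == ["if", ["not", ["ready?"]], ["launch!"]]
# Because a macro is a plain function over lists, a macro can itself be
# produced by another macro -- hence "lists all the way up".
```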
I only know Lisp at a really basic introductory level, but knowing a different homoiconic language, my understanding was that "macros" aren't even actually a meta-level thing in these languages - it's just the language being able to manipulate itself. Calling this use "meta-level" or "macros" just keeps a cleaner separation for developers to reason about.
It feels more bolted on than in, say, REBOL, where there aren’t separate macros because functions can selectively take any or all of their arguments unevaluated, as a value but not fully evaluated, or fully evaluated.
(Which isn’t to say the Lisp way is worse; there’s strengths and weaknesses of both.)
> I'd love to see a language where manipulating the language is the core strength,
This is the appeal of Julia, where the community uses macros to do things like auto-differentiation (which itself feeds into modeling engines), create domain-specific languages, and to customize compiler behavior (auto-vectorization, target GPU compute shaders).
Homoiconicity is a start. But the killer feature is simplicity. In Lisp there are only symbols and parentheses; in Prolog, only predicate heads, bodies, and atoms. This is what makes those two languages so suitable for syntactic macros, while other languages that claim homoiconicity but have more syntactic irregularity make syntactic macros impractical (although certainly possible). IMO, semantic AST-level macros are certainly a better choice in non-homoiconic, syntactically irregular languages.
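Python's `ast` module is one example of that AST-level alternative: you can transform code before it runs, but you work against a large, irregular zoo of node types rather than plain lists. A minimal sketch of such an AST-level "macro":

```python
import ast

class SwapAddToMul(ast.NodeTransformer):
    """Rewrite every `a + b` into `a * b` -- a toy AST-level transform."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform nested expressions first
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

tree = ast.parse("result = 3 + 4")
tree = ast.fix_missing_locations(SwapAddToMul().visit(tree))
namespace = {}
exec(compile(tree, "<macro>", "exec"), namespace)
assert namespace["result"] == 12
```

The transform itself is a dozen lines, but a serious version must handle every statement and expression node Python defines, which is exactly the irregularity the comment is pointing at.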
For me, there is a mindset of how to read the code that is unlocked by not having to specially parse so much of it in my head. It really can be seen as a list of code. And I can more easily see ways of transforming the code.
Granted... I actually like the LOOP and FORMAT forms a lot. They somewhat break the easy-to-parse property, but often they work as a coherent unit of code by themselves.
Even without macros, it's useful for languages to have a format that's "readable without parsing", since new language versions can introduce keywords that break existing tooling.
The worst case I experienced was a project to perform static analysis on Haskell code. Some optional features (e.g. LambdaCase) alter the way files must be parsed, and I recall at least one causing an ambiguity. This made parser libraries pretty useless, so I had to use the GHC compiler directly. This was around 2015, when GHC's API was much harder to use programmatically; in particular it would often crash (saying "the impossible happened") if it wasn't given exactly the right "DynFlags". Those were hard to guess, so they had to be passed through as command-line arguments, essentially re-creating the GHC command!
Sometimes those features and flags are toggled in a "pragma" comment at the top of a file, but it is also very common for projects to specify them in a separate config file for the Cabal build system. Those files are in a non-standard format, so we had to parse them using the Cabal library. Hence I ended up essentially re-implementing the Cabal command as well, as a way to pass the right arguments to my re-implemented GHC command.
That still struggled to parse Haskell code, due to widespread use of the C preprocessor (CPP).
I ended up using the vanilla GHC and Cabal commands, but added my own optimisation pass as a GHC plugin. This pass is a function accepting and returning "GHC Core" (one of GHC's intermediate representations); the only side-effects it permits are for error reporting. Hence my pass just returned its input unmodified, whilst also printing it to stderr (as s-expressions, for simplicity).
This "GHC Core" representation was usable for my project, but I still find it incredible that I wasn't able to reliably parse Haskell code (especially given how well-suited Haskell is for parsing!).
The reason you don't need to parse text is very much because of homoiconicity.
Homoiconicity = same representation
It means that the way you represent code in a language is the same way you represent some data-structure in that language. This applies both syntactically and semantically.
(+ 1 2)
This is the Lisp syntax for a list of three elements. Parentheses denote a list, and each word inside them denotes an element of that list. The above list has the elements +, 1, and 2.
So the above code is the syntax for a list, and when you deserialize it (read, in Lisp parlance), you get a list of three elements. This is the semantic part. Now you have a list object of three elements. You can loop over this list, access its elements, change their order, append or prepend more elements, remove elements, and do whatever else you can do with a list data structure.
Finally, it happens that code is represented this way as well. So syntactically if you want to add 1 and 2 together, you'd also write:
(+ 1 2)
And semantically, the interpreter/compiler will not take text as input; it will take a list of three elements as input, because code is assumed to be modeled as a list of elements. Not some special CompilerList, but the standard list data structure you use for any other list.
Which means to execute code, you take text which models code using the syntax used to represent lists, and you parse/deserialize it (read it) into a list data-structure, and then you feed this list to the evaluator which will execute it as code.
Thus the textual representation for code and for a list is the same, and the in-memory representation of code and of a list is the same, and finally this list can be executed as code or simply processed as a list. This is homoiconicity.
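The read → list → evaluate pipeline described above can be sketched in Python (a toy, using nested Python lists to stand in for Lisp lists):

```python
def read(text):
    """Parse '(+ 1 2)'-style text into nested lists -- the 'read' step."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(pos):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = parse(pos)
                items.append(item)
            return items, pos + 1
        tok = tokens[pos]
        return (int(tok) if tok.lstrip("-").isdigit() else tok), pos + 1
    return parse(0)[0]

def evaluate(form):
    """The evaluator consumes the very same list structure."""
    if isinstance(form, int):
        return form
    op, *args = form
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[op](*[evaluate(a) for a in args])

code = read("(+ 1 (* 2 3))")
assert code == ["+", 1, ["*", 2, 3]]   # it's an ordinary list...
assert evaluate(code) == 7             # ...and it's also runnable code
```

Between `read` and `evaluate` you can manipulate `code` with ordinary list operations, which is the window where Lisp macros live.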
What you gain from the flexibility of having macros, it seems like you lose by being unable to do static analysis at any level other than parenthesis matching?
In particular, consider trying to figure out all usages of a function without executing code. I don't think you can even resolve imports without possibly getting fooled by macros that you didn't hard-code an understanding of?
(But on the other hand, it's not going to be 100% in languages that support reflection either. We just assume that's rare.)
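The reflection caveat can be illustrated in Python (the function name here is invented): a "find all usages" tool that searches for the identifier will miss this call site entirely, because the name is only assembled at runtime.

```python
def process_order():
    return "processed"

def dispatch(name):
    # Resolved only at runtime: no identifier `process_order` appears
    # at the call site, so static usage search cannot see it.
    return globals()[name]()

assert dispatch("process_" + "order") == "processed"
```

Macro-heavy Lisp code poses the same problem to static tooling, just routed through macro expansion instead of name lookup.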
One problem is that macro expansion is arbitrary code execution, which means your static analysis is potentially insecure. But I suppose it could run in a sandbox.
A second issue is that the macro-expanded code is a lowered representation compared to what the user sees. This tends to make user-friendly error reporting harder.
Another problem might arise when macro expansion is intermixed with normal execution, making it harder to expand macros without executing code. Then you can still do a lot in a debugger, but that's quite different from static code analysis.