The conversation Bill's touching on -- the tradeoffs between different notations -- is really valuable but I think it misses something a lot of us desire in a syntax: compositionality. Take the provided "onion" notation, loops, and the "new" syntax. They all look something like this:

yeah to me this just hollers for the kind of "pour it through the functions" approach we see in functional languages or even with Ramda.pipe in JS. Ezpz.
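For the record, here's a minimal sketch of that "pour it through the functions" style — in Python rather than Ramda, and the `pipe` helper is my own, not a library function:

```python
from functools import reduce

def pipe(*fns):
    """Compose functions left to right: pipe(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

# Sum of i^2 for i in 1..5 with i mod 3 == 1, written as a pipeline.
result = pipe(
    lambda xs: (i for i in xs if i % 3 == 1),  # keep i where i mod 3 == 1
    lambda xs: (i * i for i in xs),            # square each survivor
    sum,                                       # fold to a single number
)(range(1, 6))
# result == 1 + 16 == 17
```

Each stage is an ordinary function, so you get the "putting pieces together" feeling for free.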

> "that’s as close as I can get in wordpress but you know what it looks like"

Um, I still don't know what it looks like. I can see a mod, what seems to be a range of numbers, and an i squared, but I really have no idea what the other symbols mean, or whether their position/layout matters. I could probably reverse-engineer what the mathematician wrote by reading the code, but I'm still confused as to why so many articles about software assume complete fluency in mathematical notation.

Agreed, I think the syntax used is not that common. (This is a problem, I think, with math notation -- everyone's notation means different things, google "substitution notation history" for the worst.) They definitely meant to write:

The author of the post is the co-inventor of the Lucid programming language. In an earlier blog post he describes a Python interpreter for a version of Lucid that he named pyLucid.

List comprehensions in Haskell are probably the best approach so far, imho. And once you're used to these tools, map, reduce, and filter are easy to understand, so the terseness and clarity don't come at a cost.
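The two styles really do say the same thing — a quick sketch in Python (variable names mine) comparing the comprehension reading with the explicit map/filter/reduce one:

```python
from functools import reduce

nums = range(1, 6)

# Comprehension style: filter, transform, and aggregate read in one line.
via_comprehension = sum(i * i for i in nums if i % 3 == 1)

# map/filter/reduce style: same result, each step named explicitly.
via_mfr = reduce(lambda acc, x: acc + x,
                 map(lambda i: i * i,
                     filter(lambda i: i % 3 == 1, nums)),
                 0)
# via_comprehension == via_mfr == 17
```

Once you can read one form, the other is a mechanical rewrite.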

Stream would technically be better; however, given the discussion is about map/reduce, the only thing Stream has in common with map/reduce is that it's lazy. If you wanted something comparable (mapping done in parallel, and reducing as well, just over partitions), you'd want the Flow[1] library. It does the same thing as Stream.map |> Enum.reduce, just parallelized/partitioned, and what's great is that the Flow module is more or less a drop-in replacement for Enum/Stream (with a few caveats, like calling Flow.partition before Flow.reduce). With just some quick and dirty benchmarks you can see Flow outperforms Stream on all but the smallest data set (range 1..100):

with_stream = fn range ->
  range
  |> Stream.filter(&(rem(&1, 3) == 0))
  |> Stream.map(&(&1 * &1))
  |> Enum.reduce(0, &Kernel.+/2)
end

with_flow = fn range ->
  range
  |> Flow.from_enumerable()
  |> Flow.filter(&(rem(&1, 3) == 0))
  |> Flow.map(&(&1 * &1))
  |> Flow.partition()
  |> Flow.reduce(fn -> [0] end, fn val, [acc | _] ->
    [Kernel.+(val, acc)]
  end)
  |> Enum.sum()
end
iex(4)> Benchee.run(
...(4)>   %{"stream" => with_stream, "flow" => with_flow},
...(4)>   inputs: %{"small" => 1..100, "medium" => 1..10_000, "large" => 1..10_000_000}
...(4)> )
Operating System: macOS
CPU Information: Intel(R) Core(TM) i5-5257U CPU @ 2.70GHz
Number of Available Cores: 4
Available memory: 8 GB
Elixir 1.9.4
Erlang 22.2.1
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 0 ns
parallel: 1
inputs: large, medium, small
Estimated total run time: 42 s
Benchmarking flow with input large...
Benchmarking flow with input medium...
Benchmarking flow with input small...
Benchmarking stream with input large...
Benchmarking stream with input medium...
Benchmarking stream with input small...
##### With input large #####
Name           ips        average    deviation    median      99th %
flow           0.0994     10.06 s    ±0.00%       10.06 s     10.06 s
stream         0.0782     12.78 s    ±0.00%       12.78 s     12.78 s

Comparison:
flow           0.0994
stream         0.0782 - 1.27x slower +2.72 s

##### With input medium #####
Name           ips        average    deviation    median      99th %
flow           83.87      11.92 ms   ±20.48%      11.30 ms    25.53 ms
stream         74.88      13.35 ms   ±32.02%      12.32 ms    30.22 ms

Comparison:
flow           83.87
stream         74.88 - 1.12x slower +1.43 ms

##### With input small #####
Name           ips        average    deviation    median      99th %
stream         4.98 K     0.20 ms    ±87.16%      0.169 ms    0.56 ms
flow           0.70 K     1.42 ms    ±21.58%      1.35 ms     2.52 ms

Comparison:
stream         4.98 K
flow           0.70 K - 7.06x slower +1.22 ms

If you wanted more trickery you could play with transducers and compose them with "comp".
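A rough sketch of the transducer/"comp" idea — in Python rather than Clojure, with the machinery boiled down to its core (a transducer is a function from reducing function to reducing function; all names here are mine):

```python
from functools import reduce

def mapping(f):
    # Transducer: wrap a reducing step so each input is transformed first.
    def xform(step):
        return lambda acc, x: step(acc, f(x))
    return xform

def filtering(pred):
    # Transducer: wrap a reducing step so non-matching inputs are skipped.
    def xform(step):
        return lambda acc, x: step(acc, x) if pred(x) else acc
    return xform

def comp(*xforms):
    # Compose transducers; the leftmost runs outermost, as in Clojure's comp.
    def composed(step):
        for xf in reversed(xforms):
            step = xf(step)
        return step
    return composed

xf = comp(filtering(lambda i: i % 3 == 1), mapping(lambda i: i * i))
result = reduce(xf(lambda acc, x: acc + x), range(1, 6), 0)
# result == 1 + 16 == 17
```

The nice property: `xf` is built once, independent of both the input collection and the final reducing step.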

> Clear? As mud

> the lambdas and the functions obscure what is being computed.

Simply use J!

g =: +/ @: *: @ (] * =&1@(3&|))

Or:

f =: +/ @: *: @ (#~ (=&1@(3&|)))
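For readers who don't speak J, assuming both trains are meant to compute the same sum of squares: `f` filters the matching elements and then squares, while the `g` style zeroes out the non-matching elements first (so the zeros contribute nothing). A rough Python rendering of both readings (my own, over the 1..5 example):

```python
# f-style: filter the matches, square, sum.
f_style = sum(i ** 2 for i in range(1, 6) if i % 3 == 1)

# g-style: multiply each i by an indicator (i mod 3 == 1), square, sum;
# non-matching elements become 0 and drop out of the total.
g_style = sum((i * (i % 3 == 1)) ** 2 for i in range(1, 6))
# both == 1 + 16 == 17
```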

I just remembered Haskell's range operator ".." can also encode a step size (you write the first two elements, and the step is their difference).

['a', 'd' .. 'z'] => "adgjmpsvy"
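Python has no stepped range literal for characters, but the same sequence falls out of `range` with an explicit step of 3 (since 'd' - 'a' == 3):

```python
# 'a' is code point 97; step by ord('d') - ord('a') == 3, up to and including 'z'.
stepped = ''.join(map(chr, range(ord('a'), ord('z') + 1, 3)))
# stepped == "adgjmpsvy"
```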


https://latex.codecogs.com/gif.latex?\sum_{\substack{i\in[1,...

"sum of all i² where i is in [1,5] and i mod 3 = 1".

I'm not sure what the third line is supposed to look like, but I guess it's a filter. So: Sum i^2 where i is in the set {1,2,3,4,5} and i % 3 = 1.
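That reading is a one-liner in plain code (Python here):

```python
# Sum i^2 for i in {1..5} where i mod 3 == 1, i.e. i in {1, 4}.
total = sum(i ** 2 for i in range(1, 6) if i % 3 == 1)
# total == 1 + 16 == 17
```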

https://github.com/jedie/PyLucid https://github.com/yelantingfeng/pyLucid

let mod3 n = if n `mod` 3 == 1 then n else 0

sum [ i^2 | i <- [ mod3 n | n <- [1..5] ] ]

> I think the best choice is to make them vectors (arrays)

In Haskell they are lazy linked lists! Which in theory can be more space-efficient than vectors, since under lazy evaluation the intermediate lists need never be fully materialized.
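The space point can be illustrated even in Python: a generator pipeline never materializes the intermediate sequence, so its footprint stays small regardless of input size (a loose analogy to Haskell's lazy lists, not a claim about GHC internals):

```python
import sys

n = 100_000
materialized = [i * i for i in range(n) if i % 3 == 1]  # whole list lives in memory
streamed = (i * i for i in range(n) if i % 3 == 1)      # produces one element at a time

list_bytes = sys.getsizeof(materialized)  # grows with n
gen_bytes = sys.getsizeof(streamed)       # small and constant
same_total = sum(streamed) == sum(materialized)  # same result either way
```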