This post shows a Haskell-ish definition of Functor, then attempts to show the same thing in TypeScript.

  interface Functor<A> {
    map<B>(f: (a: A) => B): Functor<B>;
  }

But the TypeScript definition loses something important: the point of a Functor is that you get back the same data type -- there's a single `f` in the Haskell definition, in both the argument and the return type, while this TS definition can give you back any data type that happens to implement Functor. E.g. the implementation of Array.map could give you back an Either.
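To make the failure concrete, here's a toy sketch (my own class names, not from the post) where a perfectly type-checking `map` escapes into a different functor:

```typescript
// The interface from the post:
interface Functor<A> {
  map<B>(f: (a: A) => B): Functor<B>;
}

// A Maybe-like functor:
class Maybe<A> implements Functor<A> {
  constructor(readonly value: A | null) {}
  map<B>(f: (a: A) => B): Functor<B> {
    return new Maybe<B>(this.value === null ? null : f(this.value));
  }
}

// A Box whose map "escapes" into Maybe. This type-checks, which is
// exactly the problem: the interface can't forbid it.
class Box<A> implements Functor<A> {
  constructor(readonly value: A) {}
  map<B>(f: (a: A) => B): Functor<B> {
    return new Maybe<B>(f(this.value)); // a different functor comes back!
  }
}
```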

I don't mean this comment to be a random nitpick of the post. Trying to think these things through is hard, and trying to use things you know to help understand the new idea is not unreasonable. But in particular with PL each thing has so much associated baggage (here in TS, subtyping) that reasoning "by metaphor" often means you end up missing the critical point.

fp-ts [1] contains an implementation of Higher Kinded Types, which TypeScript doesn’t support natively (the idea for emulating higher kinded types in TypeScript is based on "Lightweight higher-kinded polymorphism" [2])

That gets closer to the problem, but the solution is further away. You don't want to return "Functor" (which is like a vtable, the interface itself) but instead the thing itself which is implementing the Functor interface.

So you end up with

  interface Functor<A> {
    map<B>(f: (a: A) => B): Self<B>
  }

where `Self` needs to recognize that the type being defined to implement this interface has a "slot". This tends to make things tough.
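A minimal sketch of the "lightweight HKT" trick (my own toy names, not fp-ts's actual API): a type-level registry maps a tag back to the concrete type, so the "slot" can be recovered:

```typescript
// Toy container:
class Maybe<A> {
  constructor(readonly value: A | null) {}
}

// A registry from a type-level tag to the concrete type,
// so the "slot" can be filled back in:
interface TypeLookup<A> {
  Maybe: Maybe<A>;
  Arr: A[];
}
type URI = keyof TypeLookup<unknown>;
type Kind<F extends URI, A> = TypeLookup<A>[F];

// Now a Functor dictionary can insist that map preserves the slot:
interface FunctorDict<F extends URI> {
  map<A, B>(fa: Kind<F, A>, f: (a: A) => B): Kind<F, B>;
}

const arrayFunctor: FunctorDict<'Arr'> = {
  map: (fa, f) => fa.map(f), // must return an array, not a Maybe
};
```

The cost is exactly the "tough" part: every type has to be registered in the lookup table before it can participate.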

I'd like to +1 that "tends to make things tough." Not sure how it is today, I think it was remedied in Swift 4.1 or something... but holy-shit, that "slot" problem gave me endless grief trying to implement functors in swift for composing UI structs (I was ahead of the curve, heh).

You've lost me. Returning an interface isn't returning a concrete thing. It's returning the thing that implements the interface

I'm a very low level functional programmer. I'm big on immutability, big on not using loops and instead using map/flatMap/filter/fold, I tend to roll my own either and option implementations when they don't exist because it's the tidiest way of handling errors I've come across, etc etc. But when it comes to stuff like functors I don't get what it's buying me. What interesting stuff can I do if I know that both an option and a list can be mapped over?

I really need to look more deeply into it at some stage. I might be missing out on some powerful tools. Or it might be a bunch of stuff that's theoretically interesting but practically useless.
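As a small sketch of the kind of thing it buys (toy names, nothing authoritative): you write a function once against "whatever map is", and reuse it across containers.

```typescript
// Toy Maybe, plus its map:
type Maybe<A> = { tag: 'none' } | { tag: 'some'; value: A };

const mapMaybe = <A, B>(m: Maybe<A>, f: (a: A) => B): Maybe<B> =>
  m.tag === 'none' ? m : { tag: 'some', value: f(m.value) };

// A function written once against *some* map, not a particular container:
const describeLengths = <FA, FB>(
  fa: FA,
  map: (fa: FA, f: (s: string) => number) => FB
): FB => map(fa, s => s.length);

// The same logic reused for a list and for an optional value:
describeLengths(['hi', 'there'], (fa, f) => fa.map(f)); // [2, 5]
describeLengths({ tag: 'some', value: 'hey' } as Maybe<string>, mapMaybe);
```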

I think you may have misunderstood their point. They're just saying that specifying `Functor` in the return type of `map` isn't enough to resolve the issue because you could still have a case of `List#map` returning a `Maybe` or `Either` since they are all implementations of `Functor`.

Sorry, you're right. I wasn't thinking with subtyping. Rockostrich demonstrated the other weakness of this approach, but I'll say it outright: if you return a supertype then you've lost information.

It's very important for the type of `fmap f x` to be identical to `x` except in that its "inner" type has been modified. Without that, these kinds of interfaces lose all of their value.

The functor it returns should be the same functor it was called on, i.e. calling map on a Maybe should return a Maybe. The parameterized type can change so map can bring you from a Maybe<Int> to a Maybe<String> which is why in the above description we have Functor<B> instead of A. But what typescript is lacking is a way to describe that the functor returned should be the same.

Actually, neither the Haskell nor the TypeScript definition communicates the concept of a functor well, especially with the way it's used.

The interface only requires you to implement the mapping between morphisms, but a mathematical functor also includes a mapping of the objects of one category to objects of the next.

Additionally fmap is actually mapping

(a -> b) to (f a -> f b)

but the way it's typically used in haskell breaks this intuition. Most people see it as

((a -> b), f a) to (f b)

as a result of how it's used in FP. Technically the usages are isomorphic but using it this way blocks your intuition from fully understanding the true nature of a functor.
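The two readings can be made concrete (a toy sketch for the array functor only, names mine): the "lifting" reading hands you a new function before any container is in sight.

```typescript
// fmap read as a lifting: take a plain function a -> b, return a new
// function between the wrapped types, f a -> f b.
const liftArray =
  <A, B>(f: (a: A) => B) =>
  (fa: A[]): B[] =>
    fa.map(f);

// The lifted function exists on its own, before any container shows up:
const lengthOfEach = liftArray((s: string) => s.length);
lengthOfEach(['to', 'be']); // [2, 2]
```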

The following would be a more accurate type class for a functor:

  class Functor f where
    fmap :: (a -> b) -> (f a -> f b)
    tmap :: a -> f a

Functors don't include tmap. Yes, a functor maps objects from one category to objects of the second, but it does so on the sets of objects. There might not be a corresponding morphism between "a" and "f a".

The typeclass you created would be a functor along with a natural transformation from the identity functor to f. I don't think these always exist, but they do for applicative functors (pure/return).

The object mapping is reflected at the type level. "f a" is the object that "f" sends "a" to.

In Haskell, functors are all endofunctors. In math, functors can be between categories, and in such a case a -> f a might not make any sense because there are no morphisms between categories.

Using some quasi-Agda syntax, here would be a full definition of a functor between two categories. Maybe you could overload "->" so that "X -> Y" could mean "Mor X Y" for the set of morphisms between X and Y in C or D (depending on context). Curly brackets mean optional parameters, which Haskell approximates via typeclasses.

  record Functor where
    field
      C, D : Cat
      F : Ob C -> Ob D
      fmap : {X Y : Ob C} -> Mor{C} X Y -> Mor{D} (F X) (F Y)
      fmapId : {X : Ob C} -> fmap id == id
      fmapComp : {X Y Z : Ob C} -> {f : Mor X Y} -> {g : Mor Y Z}
                 -> fmap g . fmap f == fmap (g . f)

>and in such a case a -> f a might not make any sense because there are no morphisms between categories

In CAT, the category of small categories, objects are categories and morphisms are functors. Haskell types can be thought of as a small category, so I don't think this statement is correct? I'm no expert, but can you clarify? The definition on Wikipedia says that functors are morphisms between categories in CAT.

>There might not be a corresponding morphism between "a" and "f a".

Hmm. If Haskell types make up a small category, that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them. Can you give a specific example?

>The typeclass you created would be a functor along with a natural transformation from the identity functor to f.

I'm confused about this. Isn't the natural transformation from identity to another category (at least for small categories) equivalent to a functor? In my head they look identical.

If you look at the commutative diagram on this page and replace F(X) and G(X) with identity X and G the natural transformation diagram looks just like a diagram for a functor.

btw I'm not an expert in haskell (let alone agda) or category theory so forgive me if the stuff I talk about is totally off.

Are you familiar with how the word "vector" changes meaning depending on which vector space you're talking about? The same goes for "morphism," which depends on the category. In your example of "tmap : a -> f a", the only way this would make sense is as a morphism from object 'a' of the first category to an object 'f a' of the second, but this is a semantic error (a categorical error, if you will).

It is true that functors are morphisms between categories, but that's a morphism in CAT.

> I'm confused about this.

That's how these things go :-) Anyway, natural transformations are between two functors C -> D. This is an example of a 2-morphism in a 2-category. It's hard to come up with something to say other than they're just different, but think about this: a natural transformation is a consistent choice of morphisms F X -> G X (in D) for each object X (in C), but a functor needs to know where the morphisms of C go, too.

(An intuition: a natural transformation is kind of like a continuous path transforming one functor into another. This might just make things confusing, though.)

> Hmm. if haskell types make up a small category that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them.

This is irrelevant. The question isn't whether a function exists; it's whether the functor determines one. (Given any sets A and B there is always a function from A to B unless B is empty and A is nonempty. But that's not very helpful, is it?)

> I'm confused about this. Isn't the natural transformation from identity to another category (at least for small categories) equivalent to a functor?

This doesn't mean anything. A natural transformation is from one functor F:C->D to another functor G:C->D. Not from a functor to... a category? Huh?

>(Given any sets A and B there is always a function from A to B unless B is empty and A is nonempty. But that's not very helpful, is it?)

Can you clarify this? I can't define a -> f a because the functor can't determine one?

What about like for list functor?

Int -> [Int]

Doesn't this work?

>This doesn't mean anything. A natural transformation is from one functor F:C->D to another functor G:C->D. Not from a functor to... a category? Huh?

I'm still kind of confused. Yeah you're right it doesn't make sense. But then the parent poster is saying that

a -> f a

is the natural transformation from the identity functor to f, it seems off to me. It looks to me like it's just a functor from a to f a. So I'm confused by the nomenclature here: a is a category and f a is another category, so how is a mapping between categories a natural transformation?

Isn't this type signature the natural transformation from identity to f?:

(a -> a) -> f a

--edit:

I think I see where I'm confused. Your last comment on the parent helped. Thanks.

> is the natural transformation from identity functor

Sorry I confused you here. I meant, if you had a function of that type (a -> f a), you would want a naturality relation to hold for it, like how 'return' for monads is meant to relate to fmap.

As it is, just having a map "a -> f a" is not a natural transformation.

(I think you might be confusing objects of Hask with the category Hask itself. Remember that "a -> f a" means "for each object 'a' of Hask, a function that takes an element of 'a' to an element of 'f a'". The functor is f : Hask -> Hask, an endofunctor. (In Haskell, Hask is replaced by stars.) "f a" is the object of Hask that f sends "a" to.)
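That naturality relation can be checked concretely. A toy sketch for the list "singleton" (name mine): wrapping then mapping agrees with applying the function then wrapping.

```typescript
// A candidate eta : a -> f a for the list functor ("singleton"):
const singleton = <A>(a: A): A[] => [a];

// Naturality says: for every f, map(f) after eta equals eta after f.
const f = (n: number) => n + 1;
const left = singleton(3).map(f); // wrap 3, then map
const right = singleton(f(3));    // apply f, then wrap
// both are [4]: the square commutes
```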

> Can you clarify this? I can't define a -> f a because the functor can't determine one?

In mathematics, it's important to distinguish between "an X such that there exists a Y" and "an X together with a Y".

We are discussing, what data determines a functor? You seem to be asserting that part of this is something you're calling tmap, which for a specific functor F would have type a -> F a.

This is mistaken; no such thing is part of the definition of a functor.

But you are confusing the issue of whether such a map is part of the definition of a functor (or is determined by the functor and so could be included in the definition with no change in the actual meaning; let's just group these two cases together as "part of the functor"), with the issue of whether such a map exists.

You pointed out that such a map exists -- as if that made it part of the functor. I pointed out that such a map does exist, but that's not helpful, because this has no relation to the functor, and that it's not part of the functor.

Your reply confuses these issues again, suggesting that because it's not part of the functor like I said (true), therefore it doesn't exist (false). No. It does exist, but it's not part of the functor.

Because it's not part of the functor -- not determined by the functor -- that means that, if you were to tack it on to the definition of the functor -- let's call this new object you're defining a functor' -- then, for any one functor, there would be multiple possible functor' that result, because of the different choices that can be made. This demonstrates that a functor and a functor' are not in fact the same thing; the functor' contains extra information that the functor does not, information that cannot be determined from the functor alone.

> Isn't this type signature the natural transformation from identity to f?:
> (a -> a) -> f a

No. You are mixing things in ways that do not make sense.

Look, apologies, but I think you're in over your head here. And I'm afraid your notation is confusing the issue by mixing things inappropriately. There are too many errors here to fix individually; if you want to get this right, you're going to have to take it from the top. I could try to write an explanation that I'd hope you might understand, but my point is that that's what I'd have to do, write a whole explanation from the top. I don't see this conversation going anywhere otherwise.

It's fine, no need to come from the top, thanks for the explanation. I mean if you want to take it from the top I'll read it really carefully, just saying I'm grateful that you both even take the time to explain what you guys already have.

I'm out of responses here on HN so I'll respond tomorrow.

I'm not a mathematician, so yeah I am a bit over my head.

> Hmm. if haskell types make up a small category that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them. Can you give a specific example?

This is incorrect in a bunch of ways. First, Haskell types aren't truly just sets. As a simple, practical example you might have

  {-# LANGUAGE DeriveFunctor #-}
  module Container (makeInt, Container, getValue) where

  data Container a = Container { getValue :: a }
    deriving Functor

  makeInt :: Int -> Container Int
  makeInt = Container

And now we've got (externally) a type which is equivalent to the identity functor in structure, but has a weird interface which disallows tmap.

More powerfully, we're only actually interested in discussing an interface in and of itself. A type may instantiate many interfaces of varying levels of power, but each interface needs to be well-defined and well-behaved on its own. Then their compositions need to be "glued together" properly.

So even if all Haskell Functors were actually TMapFunctors, it's still important to note that the interface TMapFunctor is a sub interface to Functor which allows more operations and has more laws.

An even stronger example of this is the fact that due to the way Haskell arrows work, all Haskell Functors are actually "strong" functors, in the sense that we cannot even truly specify a non-strong functor.

Strength is a way that fmap and products (or, really, category tensors) interact.

  strength :: Functor f => (a, f b) -> f (a, b)
  strength (a, fb) = fmap (\b -> (a, b)) fb

This seems completely obviously possible, but it's only because we can crack open products in building general anonymous arrows. That's not supported in every category and a generic functor should not be expected to satisfy it.
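The same definition goes through for the array functor in TypeScript, for the same reason it does in Haskell: the lambda passed to map can close over `a`. A toy transcription:

```typescript
// The Haskell `strength` above, transcribed for the array functor:
// pair a fixed value with every element inside the container.
const strength = <A, B>(a: A, fb: B[]): Array<[A, B]> =>
  fb.map((b): [A, B] => [a, b]);

strength('x', [1, 2]); // [['x', 1], ['x', 2]]
```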

>An even stronger example of this is the fact that due to the way Haskell arrows work, all Haskell Functors are actually "strong" functors, in the sense that we cannot even truly specify a non-strong functor.

Can you explain this? What is a strong functor, and what does it have to do with arrows? Also, what do you mean by a non-strong functor?

A strong functor is one which supports the strength operation above. Haskell's (->) arrows, which can be built from arbitrary lambda terms, are rich enough to make all Haskell functors strong. But not all functors are!

What's hard about that? C and D are Hask. F_Q maps objects A to objects Const Q A, and fmap takes functions in Hom(a, b) to Hom(Const Q a, Const Q b). We identify that Const Q a is just a little subset of Hask, so Hom(Const Q a, Const Q b) is Hom(Q, Q) and fmap works by mapping all functions to id_Q. Now fmapId and fmapComp are satisfied trivially.

a1369209993 was answering my "I don't think [natural transformations from the identity functor] always exist" with "no, they don't."

The natural transformation would be a function "a -> Const q a" for all a. There can't be a way to do this for all q that is somehow natural in q, since this would have to construct a value of q to put into the Const constructor. Worse, (even though vanilla Haskell doesn't allow it), q might be the empty type, so there might be no functions "a -> Const q a" whatsoever. I think the point of the example is that keeping q polymorphic is a simulation of the empty type.

Allowing empty types, I'm happy with the example. However, this is technically a natural transformation in Haskell:

Ah, I misunderstood! And yes, absolutely. An even easier example

data Nullity a

This is a perfectly valid functor, too. Ignoring undefined, there are no values of it, and fmap cannot be used to create values of the mapped type, only to manipulate existing ones.

Other commenters have already pointed out some of the ways you're confused about categories, but let me explicitly answer your question of, why doesn't the functor include the mapping on the objects and not just on the morphisms?

The answer is it does. In this context, the types are the objects. So if F is your functor, then F itself -- which does not have a type, but has kind * -> * -- is the mapping on the objects.

I put this in another comment, but I think you're thinking 'a' is a category, but it's an object in a category. The type for a functor (in Haskell) is Hask -> Hask, which is represented by the star notation Sniffnoy uses. That is, it sends objects of Hask to objects of Hask. The object mapping for a functor in Haskell is defined by its "data" or "type" definition.

The notation "a -> f a" means a function that sends elements of 'a' to elements of 'f a', where 'a' is an object of the category Hask. (Functors don't look into the objects at the element level.)

No. It's not a -> f a. It's * -> *. There is no a -> f a here. You are simply mistaken in thinking that any such thing is part of the definition of a functor.

As for the quoted definition, I agree that if this is your first exposure to abstract algebra, you'll need a moment to unpack the sentence, follow some links, and especially, read on. But that's not just Wikipedia; the blog post also takes considerably more than one sentence to explain what it is trying to say. You can't explain everything in one sentence.

For whatever it's worth, as a data point relating to a recent discussion on whether a university CS education makes you a better programmer or not: We literally started learning about algebraic structures in the first math class on the first morning of the first year of university. If you're going to program in a setting where algebraic structures (or data types!) are relevant, this university knowledge will help.

Finally, this blog looks very nice visually, but the "broken typewriter" font effect makes the code examples much too hard to read. It would be great if the ribbon in that typewriter could be replaced.

> For whatever it's worth, as a data point relating to a recent discussion on whether a university CS education makes you a better programmer or not: We literally started learning about algebraic structures in the first math class on the first morning of the first year of university.

My problem with this argument is that this isn't a universal experience. I went to Waterloo, arguably one of the best CS schools in Canada, and the CS program didn't even cover algebraic structures.

I learned about them, because I spent all my electives taking extra math classes, but the vast majority of my classmates never needed to learn any math beyond basic combinatorics and some introductory complexity analysis.

I believe that these topics are incredibly important, but I'm not convinced that a college degree is a reliable way to be exposed to them. If your school covers this stuff, great! But at least in my experience, I think it's disingenuous to act like all computer science programs will.

I have an MA in math and I was somewhat exposed to category theory and algebraic structures in my time at the university. I find knowing math generally is useful for programming but I haven't found it useful for understanding functional programming in particular.

I think functional programming uses category theory in a different fashion than even fairly advanced mathematics. Opening Hungerford's Algebra text to the chapter on category theory, he writes, "A significant part of the elementary theory of categories is the attempt to generalize as many concepts as possible from well-known categories (for example, sets or modules) to arbitrary categories". Which is a rather different approach than building programs.

Which is to say that for most mathematics, categories are tools for generalization or for providing a firmer foundation for existing mathematical structures. For understanding monads as used in FP, the description of them as "little languages" seemed the best: it's a way of avoiding side effects by using functions to incrementally construct output instead of doing output in the middle of a computation.

Much of software design is about composition, which is where category theory shines, just at the most shallow levels.

If I dream up a way to tackle a problem, have I made a category? If I have two modules, can I compose them into one? Is there an initial or final module? Is composition associative? If there's some underlying structure, what's the free module based on it?

Those questions can help make more durable designs.

Just another data point: first time I encountered algebraic structures during my education in the Czech Republic was last year of grammar school, in a seminar for students who took maths as part of the exit exam (https://en.wikipedia.org/wiki/Matura#Maturita_in_the_Czech_R...). And then in the second year of CS at the university.

Also, we did functional, logic and also imperative programming in the first year. I really did live under the impression that CS education is meant to give you all these foundations, and it's a TIL today that it's not universally true.

I think Wikipedia is wrong here. I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma! An algebra has much more structure than any generic set with operations.

Does this really happen? Do people really say generic "algebra" for any set with operations?

(And as an aside, I don't like the "abstract algebra" moniker. It sounds so immature and undergraddy. There isn't an ordinary algebra and an abstract algebra. It's all just algebra.)

> I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma!

If I have a structure S with an associative operation, and another structure G with an associative operation and a neutral element, I will say that S and G are different algebras, not "different magmas". Others looking at S or G will not ask "oh, what kind of magma do you have there", they will ask what kind of algebra.

So... Yes, these are both (special cases of) magmas, but the general term used for them is "algebra" or "algebraic structure". Don't you agree?

But you added extra structure: associativity. Cohn's definition doesn't. It just says "set with operations" (well, finitary operations, but that's kind of always implicit in the definition of "operation").

So you're saying people shorten the phrase "algebraic structure" to "algebra"; this hasn't been my experience.

Of course I added extra structure since I wanted to make a point about different kinds of algebraic structures which are all subsumed by the term "algebraic structure" or "algebra". And it's only possible to distinguish kinds of algebraic structures by differences in structure.

But adding extra structure in one example doesn't mean that I somehow exclude magmas from the definition. Here is the example again, extended to include a component with no extra structure:

If I have a structure M with nothing but an operation, a structure S with an associative operation, and another structure G with an associative operation and a neutral element, I will say that M and S and G are different algebras, not "different magmas". Others looking at M or S or G will not ask "oh, what kind of magma do you have there"; they will ask what kind of algebra.

Of course this extension by M doesn't change anything about the validity of the example. Magmas are just as included in the term "algebraic structure" as semigroups, groups, rings, and fields are.

> So you're saying people shorten the phrase "algebraic structure" to "algebra"; this hasn't been my experience.

>So... Yes, these are both (special cases of) magmas, but the general term used for them is "algebra" or "algebraic structure". Don't you agree?

Algebraic structure, sure, but _algebra_, absolutely not.

An algebra is a module with a compatible multiplication which has an identity element. If I had a magma and you asked about my "algebra" I would be very confused about where you were seeing all the extra structure.

> Algebraic stucture, sure, but _algebra_, absolutely not.

As someone else pointed out elsethread, the term seems to be overloaded in different branches of mathematics.

https://en.wikipedia.org/wiki/Universal_algebra: "Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. [...] In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A."

> I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma! An algebra has much more structure than any generic set with operations.

I think this comes from competing definitions of the term 'algebra'. There's what, for want of anything better than the terrible term, I'll call the 'algebraist's algebra', which is (at least) a ring that is compatibly a module over some other ring; and there's what, as smadge (https://news.ycombinator.com/item?id=21443587) mentions, could be called the 'universal algebraist's algebra', which is a model for a certain signature, of which the algebraist's algebra is just one special case. EDIT: I see that matt_noonan (https://news.ycombinator.com/item?id=21443892) made this point several hours ago, but I leave this post in case the links below are convincing.

A magma is a set with one binary operation. There can be any number of operations in an algebra, with any arity. For example, one of the simplest algebras has one unary operation. You can specify additional structure of an algebra beyond just the operations using axioms. Universal algebra is where I first learned this idea.

Algebra is pretty overloaded. In the context of the study of "abstract algebra" the term takes on the more generic representation. You can also see it being defined in places like Lawvere Theories and F-Algebras. It's a context-dependent thing.

Unfortunately terms are a bit overloaded. When some people say algebra they might mean algebra over a field. A field alone adds a lot of structure to your set and is powerful enough to define vectors on. Algebra over a field adds even more.

But that is different from an algebraic structure. An algebraic structure is just some set with some operations and laws. It can be magmas, monoids, groups etc.

And "abstract algebra" is also a necessary term to differentiate it from the "ordinary" algebra taught in middle and high schools, where algebra only means using letters to stand in for numbers.

What structure does an algebra require to merit the name? I've been confused by examples such as the "algebra of a monad", which as I understand it arises from adding an "unwrap" operation in addition to monadic "return" and "join". Most of the references to this only ever refer to specific cases such as an algebra over a monad or an algebra over a functor or a division algebra, etc., and I haven't seen anyone be clear about what makes something an algebra.

I would expect an algebra to be at least a ring, i.e. an algebra over a field, a vector space with a bilinear product. If you just say plain algebra, I think matrices and vectors or similar.

Most of the structures called algebras (e.g. boolean algebras) have at least two operations and these operations frequently interact via distributivity or something like that. Other examples are ring-like, like a sigma algebra in measure theory.

I've never really encountered anyone saying "algebra" for a generic algebraic structure. I think with this definition Cohn was trying to start a trend that didn't catch on.

There are two different notions of algebra in common usage. One is the definition you gave (vector space with a bilinear product, as in Lie algebras, Clifford algebras, etc). The other definition, as used in the blog, comes from universal algebra. https://en.m.wikipedia.org/wiki/Universal_algebra

Which one(s) you have been exposed to just depends on which mathematical subcultures you’ve interacted with.

Mathematics Wikipedia is sometimes very biased towards certain points of view (usually more undergraddy, as evidenced by its insistence with the "abstract algebra" term), so I think it's being a bit too pushy with its "algebra" definition here. I've edited the term to say that this usage of "algebra" is a universal algebra thing.

Calling these things algebraic structures might help you win some confidence, but the communities which talk about algebraic structures aren't going to be helpful for learning how to program with these things. Mathematicians love algebraic structures (and non-algebraic ones).

The advantage really plays out more with the first-order structures, too. Things like monoid, semiring, torsor, group. You also have nice ones in more standard data structures: a balanced tree is an excellent example of a structure where the laws exist to cut out unbalanced trees.

In my opinion, there are two things to study here:

First, the practice of thinking about abstract structures that apply to concrete data. For this, practice thinking of there being a type (or multiple interrelated ones) which offers some set of "constructors" which create the type or augment existing values (gluing new items into a tree, merging two trees, etc) and some set of "laws" which all values of that type must uphold.

It turns out that you can do a lot of analysis of the behavior of these structures in the abstract and then apply it wholesale throughout programs. Many concrete values you work with are the combination of multiple structures in natural ways. Sometimes you can replace whole APIs with hundreds of calls, each named uniquely to this implementation of this type, with just a small set of nicely orthogonal methods with completely standard names.

Second, the use of higher order structures like Functor, Applicative, Monad. These get a LOT of airtime because they're both challenging and offer important capabilities. But they're also in a lot of senses their own realm of study. Not only are they developed very uniquely in programming communities (as opposed to what you'll find if you read about the category theoretic definitions) but they are also "higher order" in that they involve functions between types.

This higher-order nature both makes their own equations much more complex, but it also means that to see them applying (in languages other than Haskell and its ilk where purity drives this) you have to get really good at seeing languages in an abstract fashion. It's a great skill to develop, but ramps up the difficulty greatly.

Master the "first order" ones first. Master concrete, interesting types like Either, Maybe, List (as a source of non-determinism) first. Then come back and see if you can see how the skills you develop with the "first order" structures apply to these higher order, computationally minded types.

> Functor isn’t the only algebraic structure either.

I actually wouldn't call it an algebraic structure at all. A functor really isn't just a set with some finitary operations. A quick search online [1] tells me I'm not alone.

Very much agree... but also want to be a little soft here. It's a very specific line of reasoning needed to discuss the difference between a "structured set", an algebra, a "theory" (algebraic or geometric or otherwise), et al.

Generally, we're all talking about the same sort of thing. But also, generally, there isn't one good formal apparatus for discussing the whole of these things without inducing just tons of complexity. Why? Because we want to talk about specific and complex things and we often want to use specific and complex language as opposed to working through 8 levels of encoding every time.

So anyway, right there with you! But also happy to guide people down that path step-by-step.

I guess this is an easy point of confusion. Are specific instances of functor algebras, then? (What about f-algebras?) What would be the more appropriate word for the category theoretical things the author is trying to refer to here, functor and monad and so on?

A monad (on Set) is an "algebraic structure" (eg. the notion of a group). An algebra for that monad is an "algebra" (eg. one individual group).

This is the original use for monads in universal algebra, before they were interpreted as computational effects. An algebraic structure like a group or monoid is traditionally given by a signature: a list of what operations it has and what rules the operations need to follow. You can generalize from a signature Sig to a monad M. If a is a set, then Ma is "the set of expressions with constants from a": the set of formal expressions built from elements of a and the operations in Sig and where two expressions are regarded as equal if you can manipulate one into the other using the rules from Sig. The list monad for example corresponds to the signature for monoids.

The monad laws can thus be read as expressing "how to do algebra" at the most general level (ie. the level that is common to all algebraic structures). I would gloss them as "the order you evaluate an expression does not matter".
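The list/monoid correspondence can be sketched very roughly in TypeScript (this is an illustration, not the full construction): a nested list plays the role of a formal product of formal products, and the monad's join (flatten) evaluates one layer of it. Associativity of the monoid is exactly what makes the order of evaluation irrelevant.

```typescript
// join evaluates one layer of "formal products". For lists this is
// just flattening; the monoid laws guarantee the result doesn't depend
// on how the expression was grouped.
const join = <A>(xss: A[][]): A[] => xss.flat();

// The formal expression (1) . (2 . 3) . (empty) as nested lists:
const expr = [[1], [2, 3], []];
```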

edflsafoiewq's answer is good, but they didn't really mention functors. If you want an umbrella to put functors under, I'd say "higher order structure". It's a map between structures that respects structure.

I know it's not for everybody, but go back and read Backus' Turing Award lecture introducing FP: "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs" https://dl.acm.org/ft_gateway.cfm?id=1283933&type=pdf

The two main points (they're in the title) are eliminating the "von Neumann bottleneck" between the CPU and RAM, and the algebra of [FP] programs, the potential to manipulate programs as one manipulates mathematical formulas.

I just skimmed that paper but I don't see a section on how to avoid the bottleneck when your functional program is trapped on a von Neumann machine. It seems to me that functional constructs have to be mapped down to structures that will suffer the von Neumann bottleneck if they are being executed on a von Neumann machine. But there isn't any apparent discussion of an alternative machine architecture, just the computer language.

Indeed, one of the complaints you sometimes see about functional programming is the amount of memory churn it produces on a von Neumann architecture.

I'm actually working on that. I've come up with a way to dynamically create dataflow graphs on banks of hardware interconnected by latching sort-nets and programmed by a simple, pure, functional, "concatenative" programming language called Joy.

I don't think it's useful to differentiate "functional" programming from "function-level" programming as if they are different paradigms, especially in light of e.g. Conal Elliott's work "Compiling to categories": http://conal.net/papers/compiling-to-categories/ where he has built a compiler extension for Haskell that converts it into a point-free form very similar to Backus' "function-level" style of programming.

I don’t have a need right now to master a new programming paradigm in order to leverage my business.

However, if I happen to bump into this need, I'd focus first on Logic and Array-Oriented programming languages. They seem more valuable to my industry (finance).

> But the one on the left will be slower and use a lot more memory.

Is it really true?
I mean, GC will clean the intermediate array, won't it?
And the speed won't be significantly slower.
It's still linear complexity anyway.

Dragging a large data structure through the cache only once rather than twice can be beneficial. Also, GC will clean the intermediate array, but (depending on the specifics of the language and the data types involved) it might first have to make yet another scan through the data structure. So yes, it's still linear, but possibly 2-3x slower.
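The difference under discussion looks roughly like this (a TypeScript sketch; `f` and `g` stand in for the two per-element operations):

```typescript
const f = (x: number) => x * 2;
const g = (x: number) => x + 1;

// Two passes: allocates an intermediate array and walks the data twice.
const twoPass = (xs: number[]) => xs.map(f).map(g);

// Fused: one pass, one result allocation, no intermediate array.
const fused = (xs: number[]) => xs.map((x) => g(f(x)));
```

Both produce identical results; the question in this subthread is only about the constant factors of the extra traversal and allocation.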

Hard to tell without more information, and probably more runs. Did the code run often enough for the JIT compiler to warm up sufficiently? If you're running in interpreted mode, or only baseline compiled mode, you have other overheads. Also, and this is admittedly something I should not have glossed over: The reading and writing costs of fused maps might be sped up by the factors I mentioned, but in your loops you also have allocations, which have their own costs and can complicate things. And the computation is not free either, though at sufficiently large sizes the memory accesses should dominate sin and tan, I think.

It also matters how this code is compiled exactly. The C# version (I know nothing about C# or how good its compiler is) looks like it must first allocate some kind of dynamic stream, and only when ToArray() is called can it allocate the final array, so there might be extra copying. Maybe the compiler is smart enough to optimize a sequence of arr.Select().ToArray() to allocate a target array of the size of arr right away, I don't know.

Also, the JavaScript version uses a smaller array than the C# version, is that on purpose? 1000000 unboxed doubles are only 8 MB, which is not very big: On the machine I'm typing this on, L3 cache is 6 MB.

My advice would be to run the JavaScript version many times, for many more than 8 iterations, and with sizes increasing stepwise up to a GB or so. Also try replacing the maps with preallocated arrays and hand-written loops that contain only the computations, not the allocations. I know this sounds like I'm trying to give you homework, which I'm not, but benchmarking is hard, and there are many factors to take into account.

> And the computation is not free either, though at sufficiently large sizes the memory accesses should dominate sind and tan, I think.

Looks like I was wrong about this! You might want to retry your experiments with cheaper operations than sin and tan.

I wrote a little C benchmark to test this more:

    #include <stdio.h>
    #include <time.h>
    #include <math.h>

    extern void sinTanSeparate(double *a, double *b, int n) {
        for (int i = 0; i < n; i++) {
            b[i] = tan(a[i]);
        }
        for (int i = 0; i < n; i++) {
            b[i] = sin(b[i]);
        }
    }

    extern void sinTanFused(double *a, double *b, int n) {
        for (int i = 0; i < n; i++) {
            b[i] = sin(tan(a[i]));
        }
    }

    #define N (128 * 1024 * 1024)
    #define RUNS 5

    double a[N];
    double b[N];

    int main(void) {
        clock_t start, end;
        printf("will do %d runs over %zu MB of data\n\n",
               RUNS, sizeof a / (1024 * 1024));
        for (int i = 0; i < RUNS; i++) {
            start = clock();
            sinTanSeparate(a, b, N);
            end = clock();
            printf("separate: %f sec\n", ((double) end - start) / CLOCKS_PER_SEC);
        }
        printf("\n");
        for (int i = 0; i < RUNS; i++) {
            start = clock();
            sinTanFused(a, b, N);
            end = clock();
            printf("fused: %f sec\n", ((double) end - start) / CLOCKS_PER_SEC);
        }
        return 0;
    }

Compiling this with gcc -O3 gives:

    will do 5 runs over 1024 MB of data

    separate: 1.461349 sec
    separate: 1.020120 sec
    separate: 1.019002 sec
    separate: 1.019888 sec
    separate: 1.018454 sec

    fused: 1.014774 sec
    fused: 1.014724 sec
    fused: 1.013895 sec
    fused: 1.016440 sec
    fused: 1.013729 sec

So almost no difference, though with enough runs I think this would be significant. Interestingly, although C is not JIT compiled, even here there is a "warmup" effect. I guess these are initial page faults or something.

But if we now comment out <math.h> and instead use some cheap "fake" implementations of sin and tan:

Imagine that the size of the intermediate array is half of your computer's memory. It matters because by composition it reduces the memory footprint from 2x to 1x.

I want to talk about the three criteria necessary to have an algebraic structure:

1. It must be a type/class.
2. It must implement methods with a specific type signature.
3. It must obey laws.

Criteria 1 and 2 have an implementation in programming languages, but criterion 3 doesn't. You can't check whether the implementation of a Group abstract class satisfies the three group axioms the way you can check whether it extends the Group class or whether it implements the wrong type signature. That means this goes back to Dijkstra's "proof of correctness" paradigm of programming, which is a bad idea.
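The closest mechanical substitute for criterion 3 is spot-checking the laws on sample values, property-testing style; passing is evidence, not proof, which is exactly the point above. A hand-rolled sketch (the `Group` interface and helper names here are illustrative, not from any library):

```typescript
// Criteria 1 and 2 are enforced by the compiler; criterion 3 is not.
interface Group<A> {
  empty: A;
  concat(x: A, y: A): A;
  invert(a: A): A;
}

// Checks identity, inverse, and associativity on every combination of
// the given samples. This can refute a bogus instance but never proves
// a correct one.
function checkGroupLaws<A>(g: Group<A>, samples: A[]): boolean {
  return samples.every((a) =>
    g.concat(g.empty, a) === a &&
    g.concat(a, g.invert(a)) === g.empty &&
    samples.every((b) =>
      samples.every(
        (c) => g.concat(g.concat(a, b), c) === g.concat(a, g.concat(b, c))
      )
    )
  );
}

const addGroup: Group<number> = {
  empty: 0,
  concat: (x, y) => x + y,
  invert: (a) => -a,
};
```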

99% of problems one encounters while programming can be solved in C++ with std::vector and functions taking a vector in and producing a vector out. That's my main problem with FP and many other languages making bold claims: oversell. The simple fact is that for most computing tasks, you don't need much more than simple types.

Honestly, you don't need much more than a really big chalk board to solve most problems one encounters when programming.

Even that's probably strictly optional if you get enough people to double check the work.

Sarcasm aside, what you're often doing when you make these transformations of std::vectors are structured algorithms. The structures of these algorithms can often be factored through operations performed on types—even if they're just all different names for the same std::vector!

These structures exist to give words and patterns to the stuff we do. To make it easier to talk about, share, reflect upon, improve. To make it easier to judge how different approaches to the same end relate and can improve upon one another.

You don't have to go use Haskell to get a LOT of benefit out of algebraic and equational reasoning. And I feel pretty unsure what to think about a philosophy where one would avoid a nice tool merely because it's possible to get along without it?

I imagine you're a fan of that Primitive Technology YouTube channel?

I agree with you to some extent, but picking it apart:

1. I think there is value in immutable data structures. Languages like Haskell and Elm make everything immutable by default. A big problem I find in C# and JavaScript (and I'm sure you'd get it in C++) is that if I return a List of something, even if the list is readonly, the consumer could modify the items, unless I go to some tedious lengths to make sure the list only contains immutable objects. Also, if I compute something based on a reference to an object, I cannot be sure that what the reference represents hasn't changed later in the program.

2. Algebraic data types produce very nice, tight data structures and make it easier to create data structures that can't hold invalid values. And where this is not possible you can use techniques to hide/protect the data - usually by hiding the constructor and requiring a function to make the type. You can do this in OO with classes, but the fact that any reference can be null causes issues; also, if you return anything from your class that isn't immutable then you are passing out a potential backdoor to f' your state. See React and the hoops you have to jump through to keep things immutable.
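As a small illustration of point 2 (a hypothetical sketch, with made-up names): a discriminated union makes the invalid state "has both a value and an error" unrepresentable, so there is nothing to defensively check.

```typescript
// A request result is exactly one of these shapes; "success with an
// error attached" simply cannot be constructed.
type RemoteData<E, A> =
  | { tag: "loading" }
  | { tag: "failure"; error: E }
  | { tag: "success"; value: A };

// The compiler forces every case to be handled; no null checks, no
// impossible states to defend against.
function describe<E, A>(r: RemoteData<E, A>): string {
  switch (r.tag) {
    case "loading":
      return "loading";
    case "failure":
      return `failed: ${r.error}`;
    case "success":
      return `got: ${r.value}`;
  }
}
```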

You can absolutely get stuff done without FP, but I like the abstractions and guarantees it brings. Less to go wrong, and less to think about overall (once you have carefully planned your types).

What I don't like about FP, or Haskell in particular is the very complex types people dream up and all the crazy GHC extensions which are hard to understand and produce the most unhelpful of error messages if you make a mistake. Elm is more my kind of thing - very simple type system but still algebraic, no forced nulls etc. Gives you the 80% benefit of FP for 20% of the effort.

When I read an Elm program from someone else I don't have to think much. Reading a Haskell program I need to learn a lot to understand it. Maybe Haskell is OK if you are doing it full time and can commit all that stuff to long term memory.

Yeah.... You're probably right. Let's not talk about the meta-structures of our programs, but just use those structures.

Sure. I might be using a monad and the bind function every single day, but acknowledging any useful repetition of patterns is never useful. That's why no successful programmer has ever even given a thought to design patterns.

/s

Why is it that people are alright with identifying and naming the pattern of a single global instance of a type - i.e. a singleton - but as soon as you identify a pattern of type signatures you're instantly thought of as loony and overselling?

C++ is not a purely functional language, but the type-system is (caveat WIP things like concepts). Even if all you're doing is "vector-in, vector-out", the types of those vectors matters immensely. This is especially true in languages like C++ where the type-system is very deep but also very hard to debug or change at varying levels of abstraction.

If you seek to simply solve a problem then there are a number of ways to do so.

If you seek to solve a problem with a solution that has the best design, the least technical debt, and almost no bugs, then FP with ADTs is the closest thing I've encountered to such a solution.

Unfortunately, the author doesn't actually understand functional programming; they think that it has to do with loops and mutability. Also, they cannot get outside of their "I worked really hard for my PhD" mindset for long enough to consider programming as it actually is, rather than as they want to imagine it to be.

May I humbly recommend you read the article before leveling a complaint about its content, as at least some of this was clearly addressed in the second section, starting above the fold.

This post shows a Haskell-ish definition of Functor, then attempts to show the same thing in TypeScript:

    interface Functor<A> {
      map<B>(f: (a: A) => B): Functor<B>;
    }

But the TypeScript definition loses something important: the point of a Functor is that you get back the same data type -- there's one `f` in the Haskell defn, in both the argument and return type, while this TS definition can give you back any random data type. E.g. the implementation of Array.map could give you back an Either.

I don't mean this comment to be a random nitpick of the post. Trying to think these things through is hard, and trying to use things you know to help understand the new idea is not unreasonable. But in particular with PL each thing has so much associated baggage (here in TS, subtyping) that reasoning "by metaphor" often means you end up missing the critical point.

fp-ts [1] contains an implementation of Higher Kinded Types, which TypeScript doesn’t support natively (the idea for emulating higher kinded types in TypeScript is based on "Lightweight higher-kinded polymorphism" [2])

[1] https://github.com/gcanti/fp-ts [2] https://www.cl.cam.ac.uk/~jdy22/papers/lightweight-higher-ki...

Haven't had a coffee yet so go easy on me - doesn't this solve your issue?

You map over an option, you get an option. You map over an either, you get an either. Etc etc.

That gets closer to the problem, but the solution is further away. You don't want to return "Functor" (which is like a vtable, the interface itself) but instead the thing itself which is implementing the Functor interface.

So you end up with

    interface Functor<A> {
      map<B>(f: (a: A) => B): Self<B>
    }

where `Self` needs to recognize that the type being defined to implement this interface has a "slot". This tends to make things tough.

I'd like to +1 that "tends to make things tough." Not sure how it is today, I think it was remedied in Swift 4.1 or something... but holy-shit, that "slot" problem gave me endless grief trying to implement functors in Swift for composing UI structs (I was ahead of the curve, heh).

You've lost me. Returning an interface isn't returning a concrete thing. It's returning the thing that implements the interface

I'm a very low level functional programmer. I'm big on immutability, big on not using loops and instead using map/flatMap/filter/fold, I tend to roll my own either and option implementations when they don't exist because it's the tidiest way of handling errors I've come across, etc etc. But when it comes to stuff like functors I don't get what it's buying me. What interesting stuff can I do if I know that both an option and a list can be mapped over?

I really need to look more deeply into it at some stage. I might be missing out on some powerful tools. Or it might be a bunch of stuff that's theoretically interesting but practically useless.

I think you may have misunderstood their point. They're just saying that specifying `Functor` in the return type of `map` isn't enough to resolve the issue because you could still have a case of `List#map` returning a `Maybe` or `Either` since they are all implementations of `Functor`.

This gets handled by the Cats library in Scala (https://typelevel.org/cats/typeclasses/functor.html) by defining the type the functor is being defined for as an abstract type on the functor itself.

Sorry, you're right. I wasn't thinking with subtyping. Rockostrich demonstrated the other weakness of this approach, but I'll say it outright: if you return a supertype then you've lost information.

It's very important for the type of `fmap f x` to be identical to `x` except in that its "inner" type has been modified. Without that, these kinds of interfaces lose all of their value.

The functor it returns should be the same functor it was called on, i.e. calling map on a Maybe should return a Maybe. The parameterized type can change so map can bring you from a Maybe<Int> to a Maybe<String> which is why in the above description we have Functor<B> instead of A. But what typescript is lacking is a way to describe that the functor returned should be the same.
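The fp-ts-style workaround mentioned upthread can be sketched roughly like this (illustrative names, not fp-ts's actual API): a type-level dictionary maps a brand name to its concrete type, so `map` can demand the same brand in its argument and its result.

```typescript
type Maybe<A> = { tag: "none" } | { tag: "some"; value: A };

// The type-level dictionary: one entry per "type with a slot".
interface URItoKind<A> {
  Maybe: Maybe<A>;
  List: A[];
}
type URIS = keyof URItoKind<unknown>;
type Kind<F extends URIS, A> = URItoKind<A>[F];

// F is fixed across the whole signature, so a List functor's map
// cannot return a Maybe: same brand in, same brand out.
interface Functor<F extends URIS> {
  map<A, B>(fa: Kind<F, A>, f: (a: A) => B): Kind<F, B>;
}

const maybeFunctor: Functor<"Maybe"> = {
  map: (fa, f) =>
    fa.tag === "some" ? { tag: "some", value: f(fa.value) } : { tag: "none" },
};

const listFunctor: Functor<"List"> = {
  map: (fa, f) => fa.map(f),
};
```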

Actually, both the Haskell and TypeScript definitions don't communicate the concept of a functor well, especially with the way it's used.

The interface only requires you to implement the mapping between morphisms, but a mathematical functor also includes mappings between the objects from one category to the next.

Additionally fmap is actually mapping

    fmap :: (a -> b) -> (f a -> f b)

but the way it's typically used in Haskell breaks this intuition. Most people see it as

    fmap :: (a -> b) -> f a -> f b

as a result of how it's used in FP. Technically the usages are isomorphic, but using it this way blocks your intuition from fully understanding the true nature of a functor.

The following would be a more accurate type class for a functor:

See here: https://en.wikipedia.org/wiki/Functor

Note that there are TWO axioms for functor, I added the second axiom to complete the definition.

I think the reason why tmap doesn't exist is because it's trivial? Not sure maybe some expert can pitch in as I'm certainly not an expert in haskell.

Functors don't include tmap. Yes, a functor maps objects from one category to objects of the second, but it does so on the sets of objects. There might not be a corresponding morphism between "a" and "f a".

The typeclass you created would be a functor along with a natural transformation from the identity functor to f. I don't think these always exist, but they do for applicative functors (pure/return).

The object mapping is reflected at the type level. "f a" is the object that "f" sends "a" to.

In Haskell, functors are all endofunctors. In math, functors can be between categories, and in such a case a -> f a might not make any sense because there are no morphisms between categories.

Using some quasi-Agda syntax, here would be a full definition of a functor between two categories. Maybe you could overload "->" so that "X -> Y" could mean "Mor X Y" for the set of morphisms between X and Y in C or D (depending on context). Curly brackets mean optional parameters, which Haskell approximates via typeclasses.

>and in such a case a -> f a might not make any sense because there are no morphisms between categories

In CAT, the category of small categories, objects are categories and morphisms are functors. Haskell types can be thought of as a small category, so I don't think this statement is correct? I'm no expert, but can you clarify? The definition on Wikipedia says that functors are morphisms between categories in CAT.

>There might not be a corresponding morphism between "a" and "f a".

Hmm. if haskell types make up a small category that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them. Can you give a specific example?

>The typeclass you created would be a functor along with a natural transformation from the identity functor to f.

I'm confused about this. Isn't the natural transformation from identity to another category (at least for small categories) equivalent to a functor? In my head they look identical.

https://en.wikipedia.org/wiki/Natural_transformation

If you look at the commutative diagram on this page and replace F(X) and G(X) with identity X and G the natural transformation diagram looks just like a diagram for a functor.

btw I'm not an expert in haskell (let alone agda) or category theory so forgive me if the stuff I talk about is totally off.

Are you familiar with how the word "vector" changes meaning depending on which vector space you're talking about? The same goes for "morphism," which depends on the category. In your example of "tmap : a -> f a", the only way this would make sense is as a morphism from object 'a' of the first category to an object 'f a' of the second, but this is a semantic error (a categorical error, if you will).

It is true that functors are morphisms between categories, but that's a morphism in CAT.

> I'm confused about this.

That's how these things go :-) Anyway, natural transformations are between two functors C -> D. This is an example of a 2-morphism in a 2-category. It's hard to come up with something to say other than they're just different, but think about this: a natural transformation is a consistent choice of morphisms F X -> G X (in D) for each object X (in C), but a functor needs to know where the morphisms of C go, too.

(An intuition: a natural transformation is kind of like a continuous path transforming one functor into another. This might just make things confusing, though.)

> Hmm. if haskell types make up a small category that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them.

This is irrelevant. The question isn't, does there exist a function, it's does the functor determine one. (Given any sets A and B there is always a function from A to B unless B is empty and A is nonempty. But that's not very helpful, is it?)

> I'm confused about this. Isn't the natural transformation from identity to another category (at least for small categories) equivalent to a functor?

This doesn't mean anything. A natural transformation is from one functor F:C->D to another functor G:C->D. Not from a functor to... a category? Huh?

Also smallness is irrelevant to all of this.

>(Given any sets A and B there is always a function from A to B unless B is empty and A is nonempty. But that's not very helpful, is it?)

Can you clarify this? I can't define a -> f a because the functor can't determine one?

What about like for the list functor?

Doesn't this work?

> This doesn't mean anything. A natural transformation is from one functor F:C->D to another functor G:C->D. Not from a functor to... a category? Huh?

I'm still kind of confused. Yeah, you're right, it doesn't make sense. But then when the parent poster says that

    a -> f a

is the natural transformation from the identity functor to f, it seems off to me. It looks to me like it's just a functor from a to f a. So I'm confused with the nomenclature here. a is a category and f a is another category; how is a mapping between categories a natural transformation?

Isn't this type signature the natural transformation from identity to f?:

    (a -> a) -> f a

--edit: I think I see where I'm confused. Your last comment on the parent helped. Thanks.

> is the natural transformation from identity functor

Sorry I confused you here. I meant, if you had a function of that type (a -> f a), you would want a naturality relation to hold for it, like how 'return' for monads is meant to relate to fmap.

As it is, just having a map "a -> f a" is not a natural transformation.

(I think you might be confusing objects of Hask with the category Hask itself. Remember that "a -> f a" means "for each object 'a' of Hask, a function that takes an element of 'a' to an element of 'f a'". The functor is f : Hask -> Hask, an endofunctor. (In Haskell, Hask is replaced by stars.) "f a" is the object of Hask that f sends "a" to.)

> Can you clarify this? I can't define a -> f a because the functor can't determine one?

In mathematics, it's important to distinguish between "an X such that there exists a Y" and "an X together with a Y".

We are discussing, what data determines a functor? You seem to be asserting that part of this is something you're calling tmap, which for a specific functor F would have type a -> F a.

This is mistaken; no such thing is part of the definition of a functor.

But you are confusing the issue of whether such a map is part of the definition of a functor (or is determined by the functor and so could be included in the definition with no change in the actual meaning; let's just group these two cases together as "part of the functor"), with the issue of whether such a map exists.

You pointed out that such a map exists -- as if that made it part of the functor. I pointed out that such a map does exist, but that's not helpful, because this has no relation to the functor, and that it's not part of the functor.

Your reply confuses these issues again, suggesting that because it's not part of the functor like I said (true), therefore it doesn't exist (false). No. It does exist, but it's not part of the functor.

Because it's not part of the functor -- not determined by the functor -- that means that, if you were to tack it on to the definition of the functor -- let's call this new object you're defining a functor' -- then, for any one functor, there would be multiple possible functor' that result, because of the different choices that can be made. This demonstrates that a functor and a functor' are not in fact the same thing; the functor' contains extra information that the functor does not, information that cannot be determined from the functor alone.

> Isn't this type signature the natural transformation from identity to f?:
> (a -> a) -> f a

No. You are mixing things in ways that do not make sense.

Look, apologies, but I think you're in over your head here. And I'm afraid your notation is confusing the issue by mixing things inappropriately. There are too many errors here to fix individually; if you want to get this right, you're going to have to take it from the top. I could try to write an explanation that I'd hope you might understand, but my point is that that's what I'd have to do, write a whole explanation from the top. I don't see this conversation going anywhere otherwise.

It's fine, no need to come from the top, thanks for the explanation. I mean if you want to take it from the top I'll read it really carefully, just saying I'm grateful that you both even take the time to explain what you guys already have.

I'm out of responses here on HN so I'll respond tomorrow.

I'm not a mathematician, so yeah I am a bit over my head.

> Hmm. if haskell types make up a small category that means 'a' and 'f a' are sets. I can't imagine a case where two sets cannot have a mapping between them. Can you give a specific example?

This is incorrect in a bunch of ways. First, Haskell types aren't truly just sets. As a simple, practical example you might have

And now we've got (externally) a type which is equivalent to the identity functor in structure, but has a weird interface which disallows tmap.

More powerfully, we're only actually interested in discussing an interface in and of itself. A type may instantiate many interfaces of varying levels of power, but each interface needs to be well-defined and well-behaved on its own. Then their compositions need to be "glued together" properly.

So even if all Haskell Functors were actually TMapFunctors, it's still important to note that the interface TMapFunctor is a sub interface to Functor which allows more operations and has more laws.

An even stronger example of this is the fact that due to the way Haskell arrows work, all Haskell Functors are actually "strong" functors in the sense that we cannot even truly specify a non-string functor.

Strength is a way that fmap and products (or, really, category tensors) interact:

    strength :: Functor f => (a, f b) -> f (a, b)

This seems completely obviously possible, but it's only because we can crack open products in building general anonymous arrows. That's not supported in every category and a generic functor should not be expected to satisfy it.

> An even stronger example of this is the fact that due to the way Haskell arrows work, all Haskell Functors are actually "strong" functors in the sense that we cannot even truly specify a non-string functor.

Can you explain this? What is a strong functor, and what does it have to do with arrows? Also, what do you mean by non-string functor?

A strong functor is one which supports the strength operation above. Haskell -> arrows being based on any lambda term are rich enough to make all Haskell functors strong. But not all functors are!

And “non-string” was just a typo.

FWIW,

is a perfectly viable Functor on a. Good luck converting a a into one of those.

What's hard about that? C and D are Hask. F_Q maps objects A to objects Const Q A, and fmap takes functions in Hom(a, b) to Hom(Const Q a, Const Q b). We identify that Const Q a is just a little subset of Hask, so Hom(Const Q a, Const Q b) is Hom(Q, Q) and fmap works by mapping all functions to id_Q. Now fmapId and fmapComp are satisfied trivially.
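The Const example translates to a TypeScript sketch too (illustrative names; `A` here is a phantom parameter): map is perfectly lawful but never touches `f`, and there is no generic way to produce a `Const<Q, A>` from an `A` without already having a `Q` in hand.

```typescript
// Const stores a Q and merely carries A at the type level. Mapping
// changes the phantom parameter without ever applying f, which is why
// no natural "a -> Const<Q, a>" can exist in general.
type Const<Q, A> = { value: Q };

const mapConst = <Q, A, B>(fa: Const<Q, A>, _f: (a: A) => B): Const<Q, B> => ({
  value: fa.value,
});
```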

a1369209993 was answering my "I don't think [natural transformations from the identity functor] always exist" with "no, they don't."

The natural transformation would be a function "a -> Const q a" for all a. There can't be a way to do this for all q that is somehow natural in q, since this would have to construct a value of q to put into the Const constructor. Worse (even though vanilla Haskell doesn't allow it), q might be the empty type, so there might be no functions "a -> Const q a" whatsoever. I think the point of the example is that keeping q polymorphic is a simulation of the empty type.

Allowing empty types, I'm happy with the example. However, this is technically a natural transformation in Haskell:

since every type contains bottom (undefined).

Ah, I misunderstood! And yes, absolutely. An even easier example

This is a perfectly valid functor, too. Ignoring undefined, there are no values of it, but fmap cannot be used to create values of the mapped type, just manipulate them.

I wrote something to refute your example, but then I realized that you're right.

Other commenters have already pointed out some of the ways you're confused about categories, but let me explicitly answer your question of, why doesn't the functor include the mapping on the objects and not just on the morphisms?

The answer is it does. In this context, the types are the objects. So if F is your functor, then F itself -- which does not have a type, but has kind * -> * -- is the mapping on the objects.

So you're saying the constructor of the type itself is a -> f a. Right?

I think I get it. Thanks.

I put this in another comment, but I think you're thinking 'a' is a category, but it's an object in a category. The type for a functor (in Haskell) is Hask -> Hask, which is represented by the star notation Sniffnoy uses. That is, it sends objects of Hask to objects of Hask. The object mapping for a functor in Haskell is defined by its "data" or "type" definition.

The notation "a -> f a" means a function that sends elements of 'a' to elements of 'f a', where 'a' is an object of the category Hask. (Functors don't look into the objects at the element level.)

Thanks for the clarification. I think I get it despite the incorrect notation. You guys are talking about some sort of type of types?

Yeah, the "type of types" is called a

kindin Haskell https://wiki.haskell.org/Kind https://en.wikipedia.org/wiki/Kind_(type_theory)Caveat: there seems to be some reason not to think of kinds as actually being the types of datatypes, but I don't understand why that is.

No. It's not a -> f a. It's * -> *. There is no a -> f a here. You are simply mistaken in thinking that any such thing is part of the definition of a functor.

The https://en.wikipedia.org/w/index.php?title=Algebraic_structu... link in footnote 1 is broken, it takes me to a page saying 'The revision #898454436132 of the page named "Algebraic structure" does not exist.'.

As for the quoted definition, I agree that if this is your first exposure to abstract algebra, you'll need a moment to unpack the sentence, follow some links, and especially, read on. But that's not just Wikipedia; the blog post also takes considerably more than one sentence to explain what it is trying to say. You can't explain everything in one sentence.

For whatever it's worth, as a data point relating to a recent discussion on whether a university CS education makes you a better programmer or not: We literally started learning about algebraic structures in the first math class on the first morning of the first year of university. If you're going to program in a setting where algebraic structures (or data types!) are relevant, this university knowledge will help.

Finally, this blog looks very nice visually, but the "broken typewriter" font effect makes the code examples much too hard to read. It would be great if the ribbon in that typewriter could be replaced.

> For whatever it's worth, as a data point relating to a recent discussion on whether a university CS education makes you a better programmer or not: We literally started learning about algebraic structures in the first math class on the first morning of the first year of university.

My problem with this argument is that this isn't a universal experience. I went to Waterloo, arguably one of the best CS schools in Canada, and the CS program didn't even cover algebraic structures.

I learned about them, because I spent all my electives taking extra math classes, but the vast majority of my classmates never needed to learn any math beyond basic combinatorics and some introductory complexity analysis.

I believe that these topics are incredibly important, but I'm not convinced that a college degree is a reliable way to be exposed to them. If your school covers this stuff, great! But at least in my experience, I think it's disingenuous to act like all computer science programs will.

I have an MA in math and I was somewhat exposed to category theory and algebraic structures in my time at the university. I find knowing math generally is useful for programming but I haven't found it useful for understanding functional programming in particular.

I think functional programming uses category theory in a different fashion than even fairly advanced mathematics. Cracking open Hungerford's Algebra text, in the chapter on Category Theory he writes, "A significant part of the elementary theory of categories is the attempt to generalize as many concepts as possible from well-known categories (for example, sets or modules) to arbitrary categories". Which is a rather different approach than building programs.

Which is to say that for most mathematics, categories are tools for generalization or for providing a firmer foundation for existing mathematical structures. For understanding monads as used by FP, the description as "little languages" seemed the best - it's a way of not having side effects, by using functions to incrementally construct output instead of doing output in the middle of computation.

Much of software design is about composition, which is where category theory shines, just at the most shallow levels.

If I dream up a way to tackle a problem, have I made a category? If I have two modules, can I compose them into one, is there an initial or final module, is composition associative? If there's some underlying structure, what's the free module based on it?

Those questions can help make more durable designs.

Just another data point: first time I encountered algebraic structures during my education in the Czech Republic was last year of grammar school, in a seminar for students who took maths as part of the exit exam (https://en.wikipedia.org/wiki/Matura#Maturita_in_the_Czech_R...). And then in the second year of CS at the university.

Also, we did functional, logic and also imperative programming in the first year. I really did live under the impression that CS education is meant to give you all these foundations, and it's a TIL today that it's not universally true.

I think Wikipedia is wrong here. I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma! An algebra has much more structure than any generic set with operations.

Does this really happen? Do people really say generic "algebra" for any set with operations?

(And as an aside, I don't like the "abstract algebra" moniker. It sounds so immature and undergraddy. There isn't an ordinary algebra and an abstract algebra. It's all just algebra.)

> I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma!

If I have a structure S with an associative operation, and another structure G with an associative operation and a neutral element, I will say that S and G are different algebras, not "different magmas". Others looking at S or G will not ask "oh, what kind of magma do you have there", they will ask what kind of algebra.

So... Yes, these are both (special cases of) magmas, but the general term used for them is "algebra" or "algebraic structure". Don't you agree?

But you added extra structure: associativity. Cohn's definition doesn't. It just says "set with operations" (well, finitary operations, but that's kind of always implicit in the definition of "operation").

So you're saying people shorten the phrase "algebraic structure" to "algebra"; this hasn't been my experience.

> But you added extra structure: associativity.

Of course I added extra structure, since I wanted to make a point about different kinds of algebraic structures, which are all subsumed by the term "algebraic structure" or "algebra". And it's only possible to distinguish kinds of algebraic structures by differences in structure. But adding extra structure in one example doesn't mean that I somehow exclude magmas from the definition. Here is the example again, extended to include a component with no extra structure:

If I have a structure M with no structure but an operation, a structure S with an associative operation, and another structure G with an associative operation and a neutral element, I will say that M and S and G are different algebras, not "different magmas". Others looking at M or S or G will not ask "oh, what kind of magma do you have there", they will ask what kind of algebra.

Of course this extension by M doesn't change anything about the validity of the example. Magmas are just as included in the term "algebraic structure" as semigroups, groups, rings, and fields are.

> So you're saying people shorten the phrase "algebraic structure" to "algebra"; this hasn't been my experience.

<shrug> It has been mine. Wikipedia has lots of uses of the phrase "the algebra of": https://en.wikipedia.org/w/index.php?search=%22the+algebra+o..., always meaning something like "the algebraic structure of set X with operations f, g, and h".

>So... Yes, these are both (special cases of) magmas, but the general term used for them is "algebra" or "algebraic structure". Don't you agree?

Algebraic structure, sure, but _algebra_, absolutely not.

An algebra is a module with a compatible multiplication which has an identity element. If I had a magma and you asked about my "algebra" I would be very confused about where you were seeing all the extra structure.

> Algebraic stucture, sure, but _algebra_, absolutely not.

As someone else pointed out elsethread, the term seems to be overloaded in different branches of mathematics.

https://en.wikipedia.org/wiki/Universal_algebra: "Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures. [...] In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A."

> I never knew that anyone referred to any generic set with operations as an "algebra". That's a magma! An algebra has much more structure than any generic set with operations.

I think this comes from competing definitions of the term 'algebra'. There's what, for want of anything better than the terrible term, I'll call the 'algebraist's algebra', which is (at least) a ring that is compatibly a module over some other ring; and there's what, as smadge (https://news.ycombinator.com/item?id=21443587) mentions, could be called the 'universal algebraist's algebra', which is a model for a certain signature, of which the algebraist's algebra is just one special case. EDIT: I see that matt_noonan (https://news.ycombinator.com/item?id=21443892) made this point several hours ago, but I leave this post in case the links below are convincing.

For sources maybe more authoritative than Wikipedia, you might consult Springer's Encyclopedia of Mathematics (https://www.encyclopediaofmath.org/index.php/Variety_of_univ...) or the nLab (https://ncatlab.org/nlab/show/universal+algebra).

A magma is a set with one binary operation. There can be any number of operations in an algebra, with any arity. For example, one of the simplest algebras has one unary operation. You can specify additional structure of an algebra beyond just the operations using axioms. Universal algebra is where I first learned this idea.

Algebra is pretty overloaded. In the context of the study of "abstract algebra" the term takes on the more generic meaning. You can also see it being defined in places like Lawvere theories and F-algebras. It's a context-dependent thing.

Okay, given this, I'm going to rewrite Wikipedia to say "in universal algebra", to disambiguate the context in which it's quoting Cohn.

Unfortunately terms are a bit overloaded. When some people say algebra they might mean algebra over a field. A field alone adds a lot of structure to your set and is powerful enough to define vectors on. Algebra over a field adds even more.

But that is different from an algebraic structure. An algebraic structure is just some set with some operations and laws. It can be magmas, monoids, groups etc.

And abstract algebra is also a necessary term to differentiate from "ordinary" algebra that is taught in middle schools and high schools, where algebra only means using letters to substitute numbers.

It's unfortunate.

What structure does an algebra require to merit the name? I've been confused by examples such as the "algebra of a monad", which as I understand it arises from adding an "unwrap" operation in addition to monadic "return" and "join". Most of the references to this only ever refer to specific cases such as an algebra over a monad or an algebra over a functor or a division algebra, etc., and I haven't seen anyone be clear about what makes something an algebra.

I would expect an algebra to be at least a ring, i.e. an algebra over a field, a vector space with a bilinear product. If you just say plain algebra, I think matrices and vectors or similar.

Most of the structures called algebras (e.g. boolean algebras) have at least two operations and these operations frequently interact via distributivity or something like that. Other examples are ring-like, like a sigma algebra in measure theory.

I've never really encountered anyone saying "algebra" for a generic algebraic structure. I think with this definition Cohn was trying to start a trend that didn't catch on.

There are two different notions of algebra in common usage. One is the definition you gave (vector space with a bilinear product, as in Lie algebras, Clifford algebras, etc). The other definition, as used in the blog, comes from universal algebra. https://en.m.wikipedia.org/wiki/Universal_algebra

Which one(s) you have been exposed to just depends on which mathematical subcultures you’ve interacted with.

Okay, thanks.

Mathematics Wikipedia is sometimes very biased towards certain points of view (usually more undergraddy, as evidenced by its insistence with the "abstract algebra" term), so I think it's being a bit too pushy with its "algebra" definition here. I've edited the term to say that this usage of "algebra" is a universal algebra thing.

I second the comment on the site design. Very beautiful indeed. In fact, I just spent some time looking at the css/scss files.

Calling these things algebraic structures might help you win some confidence, but the communities which talk about algebraic structures aren't going to be helpful for learning how to program with these things. Mathematicians love algebraic structures (and non-algebraic ones).

The advantage really plays out more with the first-order structures, too. Things like monoid, semiring, torsor, group. You also have nice ones in more standard data structures: a balanced tree is an excellent example of a structure where the laws exist to cut out unbalanced trees.

In my opinion, there are two things to study here:

First, the practice of thinking about abstract structures that apply to concrete data. For this, the practice of thinking of there being a type (or multiple interrelated ones) which offers some set of "constructors" which create the type or augment existing values (gluing new items into a tree, merging two trees, etc) and some set of "laws" for which all values of that type must uphold.

It turns out that you can do a lot of analysis of the behavior of these structures in the abstract and then apply it wholesale throughout programs. Many concrete values you work with are the combination of multiple structures in natural ways. Sometimes you can replace whole APIs with hundreds of calls, each named uniquely to this implementation of this type, with just a small set of nicely orthogonal methods with completely standard names.
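The "small set of orthogonal methods with completely standard names" idea can be sketched in TypeScript. The Monoid interface and helper names below are illustrative, not from any particular library:

```typescript
// A hypothetical Monoid: one associative combining operation plus an identity.
interface Monoid<A> {
  empty: A;
  concat: (x: A, y: A) => A;
}

const sumMonoid: Monoid<number> = { empty: 0, concat: (x, y) => x + y };
const listMonoid = <A>(): Monoid<A[]> => ({ empty: [], concat: (x, y) => [...x, ...y] });

// One generic fold serves every monoid, replacing a zoo of bespoke
// "sumAll" / "concatAll" / "mergeAll" helpers with a single standard name.
const fold = <A>(m: Monoid<A>, xs: A[]): A => xs.reduce(m.concat, m.empty);

console.log(fold(sumMonoid, [1, 2, 3]));                // 6
console.log(fold(listMonoid<number>(), [[1], [2, 3]])); // [ 1, 2, 3 ]
```

The laws (associativity, identity) are what let you reason about fold in the abstract: they guarantee, for instance, that chunking the input and folding the chunks in parallel gives the same answer.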

Second, the use of higher order structures like Functor, Applicative, Monad. These get a LOT of airtime because they're both challenging and offer important capabilities. But they're also in a lot of senses their own realm of study. Not only are they developed very uniquely in programming communities (as opposed to what you'll find if you read about the category theoretic definitions) but they are also "higher order" in that they involve functions between types.

This higher-order nature both makes their own equations much more complex, but it also means that to see them applying (in languages other than Haskell and its ilk, where purity drives this) you have to get really good at seeing languages in an abstract fashion. It's a great skill to develop, but it ramps up the difficulty greatly.

Master the "first order" ones first. Master concrete, interesting types like Either, Maybe, List (as a source of non-determinism) first. Then come back and see if you can see how the skills you develop with the "first order" structures apply to these higher order, computationally minded types.

For the more beginners of us, I love this blog post / cheatsheet by Julie Moronuki on the matter of Algebraic Structures https://argumatronic.com/posts/2019-06-21-algebra-cheatsheet...

> Functor isn’t the only algebraic structure either.

I actually wouldn't call it an algebraic structure at all. A functor really isn't just a set with some finitary operations. A quick search online [1] tells me I'm not alone.

https://www.quora.com/Why-is-functor-considered-to-be-an-alg...

Very much agree... but also want to be a little soft here. It's a very specific line of reasoning needed to discuss the difference between a "structured set", an algebra, a "theory" (algebraic or geometric or otherwise), et al.

Generally, we're all talking about the same sort of thing. But also, generally, there isn't one good formal apparatus for discussing the whole of these things without inducing just tons of complexity. Why? Because we want to talk about specific and complex things and we often want to use specific and complex language as opposed to working through 8 levels of encoding every time.

So anyway, right there with you! But also happy to guide people down that path step-by-step.

A very nuanced take, certainly. Thank you.

I guess this is an easy point of confusion. Are specific instances of functor algebras, then? (What about f-algebras?) What would be the more appropriate word for the category theoretical things the author is trying to refer to here, functor and monad and so on?

A monad (on Set) is an "algebraic structure" (eg. the notion of a group). An algebra for that monad is an "algebra" (eg. one individual group).

This is the original use for monads in universal algebra, before they were interpreted as computational effects. An algebraic structure like a group or monoid is traditionally given by a signature: a list of what operations it has and what rules the operations need to follow. You can generalize from a signature Sig to a monad M. If a is a set, then Ma is "the set of expressions with constants from a": the set of formal expressions built from elements of a and the operations in Sig and where two expressions are regarded as equal if you can manipulate one into the other using the rules from Sig. The list monad for example corresponds to the signature for monoids.

The monad laws can thus be read as expressing "how to do algebra" at the most general level (ie. the level that is common to all algebraic structures). I would gloss them as "the order you evaluate an expression does not matter".
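A rough TypeScript illustration of "the list monad corresponds to the signature for monoids": nested arrays play the role of formal monoid expressions, and join evaluates one layer. This is a sketch of the idea, not a formal construction:

```typescript
// A doubly-nested list is a "formal expression of formal expressions".
// join evaluates one layer; the monad law join . join == join . map(join)
// says the order of evaluation doesn't matter -- associativity in disguise.
const join = <A>(xss: A[][]): A[] => xss.flat();

const xsss: number[][][] = [[[1], [2, 3]], [[4]]];

const outerFirst = join(join(xsss));     // evaluate the outer layer first
const innerFirst = join(xsss.map(join)); // evaluate the inner layers first

console.log(outerFirst, innerFirst); // both [ 1, 2, 3, 4 ]
```

Either way you flatten, you reach the same fully evaluated "expression", which is the gloss above: the order you evaluate an expression does not matter.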

edflsafoiewq's answer is good, but they didn't really mention functors. If you want an umbrella to put functors under, I'd say "higher order structure". It's a map between structures that respects structure.

As a mathematician first (and only an amateur programmer), I wholeheartedly agree.

I know it's not for everybody, but go back and read Backus' Turing Award lecture introducing FP: "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs" https://dl.acm.org/ft_gateway.cfm?id=1283933&type=pdf

The two main points (they're in the title) are eliminating the "von Neumann bottleneck" between the CPU and RAM, and the algebra of [FP] programs, the potential to manipulate programs as one manipulates mathematical formulas.

I just skimmed that paper, but I don't see a section on how to avoid the bottleneck when your functional program is trapped on a von Neumann machine. It seems to me that functional constructs have to be mapped down to structures that will suffer the von Neumann bottleneck if they are being executed on a von Neumann machine. But there isn't any apparent discussion of an alternative machine architecture, just the computer language.

Indeed, one of the complaints you sometimes see about functional programming is the amount of memory churn it produces on a von Neumann architecture.

I'm actually working on that. I've come up with a way to dynamically create dataflow graphs on banks of hardware interconnected by latching sort-nets and programmed by a simple, pure, functional, "concatenative" programming language called Joy.

Didn't the MIT LISP machines achieve this back in the 80s?

you may want to read this

https://en.wikipedia.org/wiki/Function-level_programming

it's different

I don't think it's useful to differentiate "functional" programming from "function-level" programming as if they are different paradigms, especially in light of e.g. Conal Elliott's work "Compiling to categories" (http://conal.net/papers/compiling-to-categories/), where he has built a compiler extension for Haskell that converts it into a point-free form very similar to Backus' "function-level" style of programming.

I don't have a need right now to master a new programming paradigm in order to leverage my business.

However, if I happen to bump into this need, I’d focus first on Logic and Array-Oriented programming languages first. They seem more valuable to my industry (finance).

For example: Prolog and J.

Gotta say I love the style/aesthetics of that site.

You must have good vision, because it's annoying for me to read.

I had to turn off the background and change the fonts of the code samples. It was impossible otherwise.

I found the code blocks very interesting looking and authentic, but that's true for me too: they were somewhat difficult to read.

> a.map(g).map(f) ≣ a.map(x => f(g(x)))

> But the one on the left will be slower and use a lot more memory.

Is it really true? I mean, GC will clean the intermediate array, won't it? And the speed won't be significantly slower. It's still linear complexity anyway.

Dragging a large data structure through the cache only once rather than twice can be beneficial. Also, GC will clean the intermediate array, but (depending on the specifics of the language and the data types involved) it might first have to make yet another scan through the data structure. So yes, it's linear, but possibly 2-3x slower.
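For what it's worth, the equivalence itself (performance aside) can be written out in TypeScript. This is a sketch of the semantics, not a benchmark:

```typescript
const g = (x: number) => x * 2;
const f = (x: number) => x + 1;

const a = [1, 2, 3];

// Left side: two passes over the data, plus an intermediate array from a.map(g).
const twoPasses = a.map(g).map(f);

// Right side: one pass with the composed function, no intermediate array.
const onePass = a.map(x => f(g(x)));

console.log(twoPasses, onePass); // both [ 3, 5, 7 ]
```

The results are always identical (that's the functor composition law); the debate above is only about how much the extra pass and the intermediate allocation cost in practice.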

Then why do I have the following results?

Hard to tell without more information, and probably more runs. Did the code run often enough for the JIT compiler to warm up sufficiently? If you're running in interpreted mode, or only baseline compiled mode, you have other overheads. Also, and this is admittedly something I should not have glossed over: the reading and writing costs of fused maps might be sped up by the factors I mentioned, but in your loops you also have allocations, which have their own costs and can complicate things. And the computation is not free either, though at sufficiently large sizes the memory accesses should dominate sin and tan, I think.

It also matters how this code is compiled exactly. The C# version (I know nothing about C# or how good its compiler is) looks like it must first allocate some kind of dynamic stream, and only when ToArray() is called can it allocate the final array, so there might be extra copying. Maybe the compiler is smart enough to optimize a sequence of arr.Select().ToArray() to allocate a target array of the size of arr right away, I don't know.

Also, the JavaScript version uses a smaller array than the C# version, is that on purpose? 1000000 unboxed doubles are only 8 MB, which is not very big: On the machine I'm typing this on, L3 cache is 6 MB.

My advice would be to run the JavaScript version many times, for many more than 8 iterations, and with sizes increasing stepwise up to a GB or so. Also try replacing the maps with preallocated arrays and hand-written loops that contain only the computations, not the allocations. I know this sounds like I'm trying to give you homework, which I'm not, but benchmarking is hard, and there are many factors to take into account.

> And the computation is not free either, though at sufficiently large sizes the memory accesses should dominate sind and tan, I think.

Looks like I was wrong about this! You might want to retry your experiments with cheaper operations than sin and tan.

I wrote a little C benchmark to test this more:

Compiling this with gcc -O3 gives almost no difference, though with enough runs I think this would be significant. Interestingly, although C is not JIT compiled, even here there is a "warmup" effect. I guess these are initial page faults or something.

But if we now comment out <math.h> and instead use some cheap "fake" implementations of sin and tan:

we get very different behavior: here the computation is so cheap that it's really other effects that dominate, and you get a 2x difference.

Imagine that the size of the intermediate array is half of your computer's memory. It matters because by composition it reduces the memory footprint from 2x to 1x.

Then it is 8Gb of pointers. I doubt it's practical to worry about such extreme cases.

I want to talk about the three criteria necessary to have an algebraic structure:

1. It must be a type/class.
2. It must implement methods with a specific type signature.
3. It must obey laws.

Criteria 1 and 2 have an implementation in programming languages, but criterion 3 doesn't. You can't check whether the implementation of a Group abstract class satisfies the three group axioms the way you can check whether it extends the Group class or whether it implements the wrong type signature. That means this goes back to Dijkstra's "proof of correctness" paradigm of programming, which is a bad idea.
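Criterion 3 can't be enforced by the type checker, but it can at least be spot-checked at runtime. A minimal TypeScript sketch, testing associativity and identity on sample values for a hypothetical Monoid interface (the names are illustrative; === restricts this to primitive carrier types):

```typescript
interface Monoid<A> {
  empty: A;
  concat: (x: A, y: A) => A;
}

// Spot-check the monoid laws on sample values. Passing is evidence, not a
// proof: the type system checked criteria 1 and 2 but can never check 3.
function checkMonoidLaws<A>(m: Monoid<A>, samples: A[]): boolean {
  for (const x of samples)
    for (const y of samples)
      for (const z of samples)
        if (m.concat(m.concat(x, y), z) !== m.concat(x, m.concat(y, z)))
          return false; // associativity violated
  return samples.every(x => m.concat(m.empty, x) === x && m.concat(x, m.empty) === x);
}

const addition: Monoid<number> = { empty: 0, concat: (x, y) => x + y };
// Type-checks fine, but subtraction is not associative, so it is no monoid.
const subtraction: Monoid<number> = { empty: 0, concat: (x, y) => x - y };

console.log(checkMonoidLaws(addition, [1, 2, 3]));    // true
console.log(checkMonoidLaws(subtraction, [1, 2, 3])); // false
```

Both instances satisfy criteria 1 and 2 identically; only the law check distinguishes them, which is the point being made above.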

YAGNI

99% of problems one encounters while programming can be solved in C++ with std::vector and functions taking a vector in and producing a vector out. That's my main problem with FP and many other languages making bold claims: oversell. The simple fact is that for most computing tasks, you don't need much more than simple types.

You don't. Of course.

Honestly, you don't need much more than a really big chalk board to solve most problems one encounters when programming.

Even that's probably strictly optional if you get enough people to double check the work.

Sarcasm aside, what you're often doing when you make these transformations of std::vectors are structured algorithms. The structures of these algorithms can often be factored through operations performed on types—even if they're just all different names for the same std::vector!

These structures exist to give words and patterns to the stuff we do. To make it easier to talk about, share, reflect upon, improve. To make it easier to judge how different approaches to the same end relate and can improve upon one another.

You don't have to go use Haskell to get a LOT of benefit out of algebraic and equational reasoning. And I feel pretty unsure what to think about a philosophy where one would avoid a nice tool merely because it's possible to get along without it?

I imagine you're a fan of that Primitive Technology YouTube channel?

I agree with you to some extent, but picking it apart:

1. I think there is value in immutable data structures. Languages like Haskell and Elm make everything immutable by default. A big problem I find in C#, JavaScript (and I'm sure you'd get in C++) is if I return a List of something, even if the list is readonly, the consumer could modify the items, unless I go to some tedious lengths to make sure the list only contains immutable objects. Also if I compute something based on a reference to an object, I cannot be sure that what the reference represents hasn't changed later in the program.

2. The algebraic data type produces very nice tight data structures and makes it easier to create data structures that can't hold invalid values. And where this is not possible you can use techniques to hide/protect the data - usually by hiding the constructor and requiring a function to make the type. You can do this in OO with classes, but the fact that any reference can be null causes issues; also, if you return anything from your class that isn't immutable then you are passing out a potential backdoor to f' your state. See React and the hoops you have to jump through to keep things immutable.
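The "can't hold invalid values" point can be sketched in TypeScript with a discriminated union and a smart constructor (the type and function names here are made up for illustration):

```typescript
// A value is either an error or a success -- never both, never neither.
type Result<E, A> =
  | { tag: "err"; error: E }
  | { tag: "ok"; value: A };

// A list that provably has at least one element, encoded in its shape.
type NonEmpty = { head: number; tail: number[] };

// Smart constructor: the only way to obtain a NonEmpty is via this check,
// so downstream code never needs to re-validate or handle null.
function toNonEmpty(xs: number[]): Result<string, NonEmpty> {
  return xs.length === 0
    ? { tag: "err", error: "empty list" }
    : { tag: "ok", value: { head: xs[0], tail: xs.slice(1) } };
}

console.log(toNonEmpty([]).tag);     // "err"
console.log(toNonEmpty([1, 2]).tag); // "ok"
```

Any code that holds a NonEmpty can read head without a null check, because the invalid state (an empty NonEmpty) simply cannot be constructed.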

You can absolutely get stuff done without FP, but I like the abstractions and guarantees it brings. Less to go wrong, and less to think about overall (once you have carefully planned your types).

What I don't like about FP, or Haskell in particular is the very complex types people dream up and all the crazy GHC extensions which are hard to understand and produce the most unhelpful of error messages if you make a mistake. Elm is more my kind of thing - very simple type system but still algebraic, no forced nulls etc. Gives you the 80% benefit of FP for 20% of the effort.

When I read an Elm program from someone else I don't have to think much. Reading a Haskell program I need to learn a lot to understand it. Maybe Haskell is OK if you are doing it full time and can commit all that stuff to long term memory.

Yeah.... You're probably right. Let's not talk about the meta-structures of our programs, but just use those structures.

Sure. I might be using a monad and the bind function every single day, but acknowledging any useful repetition of patterns is never useful. That's why no successful programmer has ever even given a thought to design patterns.

/s

Why is it that people are alright with identifying and naming the pattern of a single global instance of a type - i.e. a singleton, but as soon as you identify a pattern of type signatures you're instantly thought of as looney and overselling?

Heck, even a turing machine could solve most of these problems.

How highfalutin! It's x86 assembly for me!

x86 is more complex than a Turing machine.

C++ is not a purely functional language, but the type-system is (caveat WIP things like concepts). Even if all you're doing is "vector-in, vector-out", the types of those vectors matters immensely. This is especially true in languages like C++ where the type-system is very deep but also very hard to debug or change at varying levels of abstraction.

100% of problems one encounters while programming can be solved with C with goto. What's your point?

If you seek to simply solve a problem then there are a number of ways to do so.

If you seek to solve a problem with a solution that has the best design, the least technical debt, and almost no bugs, then FP with ADTs is the closest thing I've encountered to such a solution.

Unfortunately, the author doesn't actually understand functional programming; they think that it has to do with loops and mutability. Also, they cannot get outside of their "I worked really hard for my PhD" mindset for long enough to consider programming as it actually is, rather than as they want to imagine it to be.

May I humbly recommend you read the article before leveling a complaint about its content, as at least some of this was clearly addressed in the second section, starting above the fold.

You might be interested to see https://jrsinclair.com/articles/2019/what-i-wish-someone-had... where he puts those things under the header "Wrong Assumptions".