Implement with Types, Not Your Brain

(reasonablypolymorphic.com)

103 points | by Tehnix 1738 days ago

13 comments

  • quickthrower2 1738 days ago
    > I don't need to use my brain to do programming

    ...

        hoistStateIntoStateT
            :: Sem (State s ': r) a
            -> S.StateT s (Sem r) a
        hoistStateIntoStateT (Sem m) = m $ \u ->
          case decomp u of
            Left x -> S.StateT $ \s ->
              liftSem . fmap swap
                      . weave (s, ())
                              (\(s', m') -> fmap swap
                                          $ S.runStateT m' s')
                              (Just . snd)
                      $ hoist hoistStateIntoStateT x
            Right (Yo Get z _ y _)     -> fmap (y . (<$ z)) $ S.get
            Right (Yo (Put s) z _ y _) -> fmap (y . (<$ z)) $ S.put s
    
    
    ...

    I definitely need to use my brain to read that code though.

    • munchbunny 1738 days ago
      I have a similar experience with mathematics. "The differential form expressions for Maxwell's equations are incredibly simple!" Yup, they're simple after you've done all of the work to wrap your mind around differential forms in the first place. I did it at some point. It was rewarding, and it was elegant, and it was even fun, but it was anything but easy on my brain.

      Haskell feels a lot like that. That code snippet gave me the same feeling of having to climb a conceptual mountain to appreciate the metaphorical view at the top. I have no doubt that there is something beautiful there, but of all of the conceptual, programming related mountains I need to climb, the language abstraction one is pretty low priority, because the language I am using really is not my limiting factor. I appreciate that others who care spend a lot of time thinking about this problem, but Haskell's got a long way to go before I'm convinced that it's worth using in everyday contexts.

      • schwurb 1738 days ago
        Note that Haskell makes very few assumptions about its users; the quality and complexity of Haskell code varies greatly. The author of this blog is dealing with an abstract topic, resulting in somewhat arcane-looking code (I am sure that after some refactoring it would read nicer). It is however very possible to write really easy-to-understand Haskell code by ignoring its fancier features and sticking to the most used abstractions: Monads, Applicatives and Functors. They alone are enough to write great code; no category theory, effect library, type magic or whatever is needed. Simple, real-world Haskell exists, it is just not often blogged about.
        • munchbunny 1738 days ago
          Fair enough, I just disagree with the premise that heavily relying on the type system actually lets you think less hard about how to implement your code. In my experience, defining the types you'll use at module boundaries is the actual hardest part. The stuff in the middle is usually straightforward until you realize you need sub-modules, at which point you're defining module boundary types again.
          • seanmcdirmid 1738 days ago
            A good type is like a heads up display about where you are in your implementation. Haskell types are often far away from being that, however.
      • quickthrower2 1738 days ago
        Absolutely, and I have put a decent amount of time into learning Haskell. But I've been doing imperative programming since I was 8, and as much as I love Haskell I can't help finding it hard to understand. I also find it unmemorable - I could get my head around that code if I tried, but 2 weeks later I will have forgotten it.

        I also find that code quite abstract; it looks to be solving the kind of problem that usually the language designer (if, say, you are in C#) would be solving. Haskell seems to leak its mechanics into your code, whereas C# keeps them, for the most part, nicely separated from your code, albeit with the effect of giving the programmer less power. No typeclasses etc.

    • sameerds 1737 days ago
      That criticism looks unfair to me because you did not quote the very next sentence which was relevant:

        "Gee, that’s complicated! I must be really smart to have
         written such a function, right?
      
         Wrong! I just have a trick!"
      
      Seems like the author deliberately picked an example that looks hard but isn't once you understand what he wants to say.
    • stcredzero 1738 days ago
      > I definitely need to use my brain to read that code though.

      Nothing new under the sun. How about APL?

      https://medium.com/@gordonguthrie/the-beam-needs-an-apl-y-la...

      A professor of mine used to write commercial software in APL. (Including business apps!)

      • mamcx 1738 days ago
        I think APL fits well for business apps. Now, if you told me some lunatic used it to make AAA games...
    • nine_k 1738 days ago
      I take a new unfamiliar thing and I have no idea how to work with it.

      I take an old familiar tool and I instantly remember how to use it.

      What gives?

      When I was a boy, wise practitioners told me that being a programmer is a path of constant learning. They were right. And since then, I take learning as a natural part of the profession. I suspect it also has something to do with a high salary it commands.

      • munchbunny 1738 days ago
        Personally, the issue isn't learning vs. not learning. The issue is where to put the energy and time that I do have for learning. For example, in my current job, the top four items in my list are all about conceptual mastery in a specific domain (cybersecurity) so that I can design better systems. For programming languages, the usual suspects are good enough. The real limitation is the intersection of human behavior interacting with a cripplingly complex technical system, and you can make multiple careers studying just that.

        Somewhere down closer to #15 is "learn Haskell," which comes after learning Rust/Go. Learning a "real" functional language would be much higher, but I already did that early in my career.

        • nine_k 1738 days ago
          Indeed, one does not always need to learn new tools when available tools suffice; the learning energy can be more effectively spent elsewhere.

          I only meant to notice that there's no impenetrable barrier to mastery of that, when desired. I don't think that to learn Haskell at a high level is harder than to do the same with C++. The latter just had more adoption for last 25 years, for a number of reasons.

          In any case, some (cursory) knowledge of Haskell improved my Java, Python, and JavaScript quite a bit, and is directly applicable to Scala in many aspects.

      • EdwardDiego 1738 days ago
        > Programs must be written for people to read

        - Harold Abelson

        Looking at that code, it looks like it's working on a tree of some sort, but what kind? Something to do with Sem and State and Yo, I guess. At a wild guess, it's something to do with ASTs and DSLs or similar. But code that is dramatically inscrutable without metric shit tonnes of context is bad code.

    • tathougies 1738 days ago
      No you don't. That's the point. The type of the function completely determines the operation. Don't read the code. Read the type, and then you know what the code has to do.
      • md224 1738 days ago
        > The type of the function completely determines the operation.

        Really? I must be misunderstanding what you mean by "operation", or maybe this is the part of functional programming that hasn't clicked for me yet... my understanding is that the type signature tells you what kinds of things the function works with, but it doesn't tell you how the function maps the input to the output. When I hear "operation", I think of the latter. What are you referring to?

        • derefr 1738 days ago
          Sometimes there's only one possible function of a given typing.

          For example, if a function's type is `a -> b -> (a, b)`, there's really only one function fitting that shape: cons. What else could a (pure) function possibly do, with that typing, other than "exactly what cons does"?
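The claim above can be written out as a minimal sketch (the name `pair` is mine, not from the thread):

```haskell
-- The only total, pure implementation of this signature must
-- return its two arguments unchanged: the function knows
-- nothing about a or b, so it cannot invent new values of
-- either type.
pair :: a -> b -> (a, b)
pair x y = (x, y)
```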

          • etbebl 1738 days ago
            Map x : a and y : b to (5x, y-3)?
            • chas 1738 days ago
              In a Haskell context, that function’s type would be something like ‘(Num a, Num b) => a -> b -> (a, b)’, where ‘Num’[0] is the type class of types with defined numerical operators. If there is no fat arrow (=>), you can only perform operations which don’t depend on any aspects of the types in question.

              There is, however, a gotcha along these lines: Haskell has a value called ‘undefined’ (along with some other issues collectively called “bottom” for CS theory reasons[1]) which can take any type. So ‘foo x y = (x, undefined)’ is a legal implementation of that function that will compile, but crash if you try to do anything with the ‘undefined’ result.

              In practice, knowing your function has essentially one implementation because it’s sufficiently polymorphic (no concrete types) is still a great trick for getting the compiler to enforce certain kinds of correctness.

              [0] https://www.haskell.org/tutorial/numbers.html

              [1] https://andre.tips/wmh/brief-note-undefined/
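The gotcha described above can be sketched concretely (the names `foo` and `safeUse` are mine, for illustration):

```haskell
-- Type-checks, but the second component is a bomb:
foo :: a -> b -> (a, b)
foo x _ = (x, undefined)

-- Fine as long as nobody forces the undefined part,
-- thanks to laziness:
safeUse :: Int
safeUse = fst (foo 42 "ignored")
-- snd (foo 42 "ignored"), by contrast, would crash at runtime.
```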

            • Jtsummers 1738 days ago
              Then you’ve forced the types a and b to be numerics or integers and the function type would be: Num -> Num -> (Num, Num) or similar and not the original type.
              • md224 1738 days ago
                I'm not the person you're replying to, but my guess is that their example was an attempt to raise the same question that I have: how do we know the values (?) of the a-type and b-type variables weren't modified within the function in some way before being outputted as (a, b)? Doesn't the type signature leave an infinite amount of ambiguity as to the actual operation of the function, even if it specifies the types of the input and output? Variables of type "a" and "b" could have (I assume) many different values (just as a variable of type Num could be 1 or 1000), so how do you know the function doesn't change their values? Or is there a strict relationship between type and value that I'm missing here?

                Do functional programmers consider the type of a variable to be more important (in some sense) than its value? If so, that could explain my confusion here, and that would be a pretty important thing to understand.

                Edit: This idea -- "the type of the function completely determines the operation" -- seemed counter-intuitive to me at first, but I think that's actually a sign that it's a key concept I need to understand to really grok functional programming in a deeper way. (I think this is the case for counter-intuitiveness more generally... it's a sign that you've found the missing link in your understanding of something.) Thanks for the replies, this has been helpful.

                • BoiledCabbage 1738 days ago
                  In that specific case, how do you modify it? What function can you use? You don't know that a & b are numeric, so you can't use arithmetic; you don't know that they're strings; you have actually put zero constraints on the types of a and b, so there are almost no sufficiently generic functions you could apply to them. You could pass them to the "identity" function to "modify" them, but that's not really changing them. Just about the only thing you can do to match the types is construct a tuple.

                  A (somewhat faulty) analogy: imagine a method that takes in two variables of type Object and returns a tuple of Objects. You can call any instance method on the Object class, and any static method, but can't reference any class variables. And you can do any casts to or from Object. There is almost nothing left you can do but return a tuple of them. The types constrained your implementation.

                • TheAsprngHacker 1738 days ago
                  The submission mentions "theorems for free," which answers your question. The a and b type variables can be any type. Because the code has to work for anything, there are not too many operations that you can do on the values of types a or b. For example, if x :: a, you can't do x - 1 because that would require the constraint Num a.
                  • taneq 1738 days ago
                    Still seems to me (and I feel that the volume of discussion on this point confirms it) that this approach has a higher cognitive load than doing things slightly less abstractly and slightly more explicitly.
                    • avmich 1738 days ago
                      I guess the volume of discussion can be attributed to the fact that there are enough Haskellers and enough non-Haskellers.

                      Haskellers read some texts and got used to the idea of type manipulations. They know, for example, there is a type "Void" which doesn't contain any values - that is, by definition no object has this type - and therefore any function which has this type of argument can't be invoked. Because to invoke the function, it has to be passed a value of the given type, and there aren't any.

                      Haskellers got used to the idea that if we have a function from type a to the same type a, and there are absolutely no restrictions on what the type a is, then it must be the identity function - they just sat with this idea, pondered it for some time and came to this conclusion. For non-Haskellers it's not clear - and even the idea that types can restrict code so tightly as to make it unique (up to isomorphism) can be novel to them.

                      Does it make a bigger cognitive load? Probably not for those who got used to type properties like this. Is it hard for non-Haskellers to see this feature and try to use it? Well, many Haskellers passed through this stage, so it ought to be doable. Most importantly, can they agree on this?..
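Both ideas mentioned above can be written down directly; `impossible` and `myId` are my names for them, using `Data.Void` from base:

```haskell
import Data.Void (Void, absurd)

-- No value of type Void exists, so this function can never
-- actually be applied; absurd is its canonical implementation.
impossible :: Void -> a
impossible = absurd

-- A fully polymorphic a -> a has exactly one total
-- implementation: the identity function.
myId :: a -> a
myId x = x
```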

                    • mrkeen 1738 days ago
                      Abstraction is about eliminating the details, not ignoring them.

                      If I have ((a,b)->b), it behaves precisely the same way regardless of what those variables represent.

                      Do I need to do a null check? Irrelevant. b is being reused; do I do a deep or a shallow copy? Does it have a copy constructor? Irrelevant.
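That type can be sketched directly; parametricity is exactly what makes those details inexpressible (the name `second` is mine):

```haskell
-- The only total implementation of (a, b) -> b is projecting
-- the second component; null checks and copy semantics cannot
-- even be written at this level of abstraction.
second :: (a, b) -> b
second (_, y) = y
```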

                • Twisol 1738 days ago
                  The function definition must work unchanged for any choice of types. In order to “change” their values, it would have to know what other values are possible and how to get them.

                  Frame it as a game: give me a function definition and I will give you a pair of types for which that function is statically ill-defined. You must give me a function that works, without changing how it’s defined, for whatever pair of types I tell you.

                • Chris_Newton 1738 days ago
                    > Doesn't the type signature leave an infinite amount of ambiguity as to the actual operation of the function, even if it specifies the types of the input and output?

                  Counter-intuitively, it’s actually because of ambiguity that you don’t have as much choice in how your function is implemented. If you know nothing about your types a and b, there is nothing you can do with values of those types to create new values (excluding oddities with “undefined” and the like, which I’ll ignore for the rest of this comment). The only things you can do are “structural” operations that use only some or all of the inputs you’ve been given, verbatim.

                  For example, given one value each of types a and b, you could have a function with a return type of a that just chose that input value and gave it back, but what other function of type a -> b -> a could you possibly have?

                  Now, extend that idea. Suppose you have a function of type a -> b -> (a, a). This needs to return a pair of a values, but we still only have one such value that we know. We don’t know how to make another a from an a, because we don’t know anything specific about that type. Similarly, we don’t know how to make an a from a b, or from an a and a b for that matter. So the only possible implementation of this function (with the caveat above) would be the one that returns a pair where each element is the a it was given as input.

                  Let’s extend that idea again. This time, suppose we have input values of types a and b, but we also have an input function of type b -> c. That is, we start with two values of ambiguous types, but we do know how to turn one of those values into a third type. In this case, we could write a function of type a -> b -> (b -> c) -> (a, c), for example, because although we don’t have an input value of type c directly, we do have a b and we know how to make a c from it. But, we have exactly one way we know to do that, so there is still only one possible implementation of a function of this type: we must take the a we started with, and we must take the b we started with and convert it to a c using the b -> c function we started with, and then we return a pair with the two final values.

                  When can we have more than one possible implementation? Essentially, when we have been given more than one way to make at least one of the required output values. For example, suppose we have a function of type a -> b -> (a -> c) -> (b -> c) -> (c, c). This time, we start with an a and a b, but we also know how to convert either of them to a c. We need two c values in our output, but nothing here says they have to be the same, so each of the output values could come from either of the a and b inputs, giving four possible implementations (but only four).

                    As a final example, as surprising as it might seem at first sight, there is no possible implementation at all of a function of type a -> b (again, excluding funny games with “undefined” and the like). We simply don’t have any way to make a b without knowing anything about the type itself and without being supplied with either a b value directly or some way to make one.

                    If you found that interesting and really want to blow your mind, you might enjoy the Curry-Howard isomorphism. The Wikipedia page isn’t a good introduction if you’re trying to understand it, IMHO, but Wikibooks has a nice introduction:

                  https://en.wikibooks.org/wiki/Haskell/The_Curry%E2%80%93Howa...
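The four implementations counted in the second-to-last example above can be spelled out concretely (the names v1..v4 are mine):

```haskell
-- All four total implementations of this signature: each c in
-- the output pair can come from either (a -> c) applied to the
-- a, or (b -> c) applied to the b. Four combinations, no more.
v1, v2, v3, v4 :: a -> b -> (a -> c) -> (b -> c) -> (c, c)
v1 x _ f _ = (f x, f x)
v2 x y f g = (f x, g y)
v3 x y f g = (g y, f x)
v4 _ y _ g = (g y, g y)
```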

          • mcphage 1738 days ago
            Do you frequently find yourself having to redefine cons when programming?
            • posco 1738 days ago
              No, but surprisingly frequently this property holds: if your function is generic enough, there is only one (or a small number) of possible implementations that type check.

              Leveraging this rules out bugs.

            • danharaj 1738 days ago
              If you define a custom generic data type, for example if you need a special kind of tree for a particular algorithm, then you're going to be writing a lot of generic functions like cons for that data type.
              • EdwardDiego 1738 days ago
                In what language? I thought Haskell and OCaml for example, had generics.
                • mrkeen 1738 days ago
                  The situation is a bit better than just generics (as they are in Java). For instance, if I have a List<Int>, I can treat it like a List<a> (generic in the element), but why can't I treat it like an f<Int> (generic in the container)? Or an f<a>?
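In Haskell terms, that extra flexibility is higher-kinded polymorphism; a minimal sketch (function names are mine, for illustration):

```haskell
-- Generic in the element, like Java's List<A>:
len :: [a] -> Int
len = length

-- Generic in the container: the "f<Int>" case that Java's
-- generics cannot express.
sumInts :: Foldable f => f Int -> Int
sumInts = sum

-- Generic in both container and element, the "f<a>" case:
size :: Foldable f => f a -> Int
size = length
```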
      • quickthrower2 1738 days ago

            hoistStateIntoStateT
                :: Sem (State s ': r) a
                -> S.StateT s (Sem r) a
        
        ?

        Sorry even that'll get my brain CPU on 100% for a few minutes.

        • lalaithion 1738 days ago
          This code takes a Sem monad with a State effect in it, and turns it into a StateT monad with an inner monad of Sem and all the other effects.

          I have never used Polysemy, any effects library, or type-level lists before, but I'm reasonably sure my guess is right, despite not understanding any of the code.

        • nilkn 1738 days ago
          I suspect you probably just need more experience with the base language. And I don't mean that in a bad way -- Haskell is a bit of a weird language, and it's something that you'll need to spend some time with. But indeed that's a fairly standard sort of type signature that you'll process at a glance if you've used the language a fair bit.
        • tathougies 1738 days ago
          Takes a Sem with a state effect and turns it into the state monad transformer over Sem with the remaining effects?
    • pxtail 1737 days ago
      My god, this is horrible; it looks more like ASCII art than a usable piece of code. Personally I don't even try to fiddle around with languages or libraries which try to get too fancy or too "clever" with their syntax.

      The reason is simple: as a developer looking at code I wrote myself a week ago in a "normal" language, I'm able to somewhat recognize what's going on; after a month or two it looks like something produced by aliens. With the code depicted above, how far does that window shrink? Hours? Days? And it takes extra effort for the developer himself, or anyone else, to decrypt it after a while.

  • tempsolution 1738 days ago
    Sorry but this is just crazy. I always wonder if people writing blog posts like that have ever worked in a big company. You need to spend at least 2-3 hours a day doing code reviews, and most of the rest of the time you are writing code, which subsequently involves reading the code you wrote yourself and that others have written.

    Now we currently use Java here. I worked with other programming languages in companies too, for instance C and C++, which were a nightmare with no equal. Don't get me wrong, when I was in college I loved creating crazy shit with template metaprogramming and preprocessor metaprogramming. Once you mature out of it, you realize it's just about the craziest thing you could ever do.

    The reason is simple. Even plain, stupid Java is already incredibly hard to verify during a code review. Spend 2 hours in a row trying to find problems with other people's well-written Java code and your brain feels like it explodes. Now with C++ that same task is almost impossible and will likely require 8 hours of your day for the same code volume. With Haskell I would go out on a limb here and say it would require 24 hours of your daily time to handle the same code volume...

    Not to mention that most programmers even at Google and Amazon would not even be able to either read or write such code and people like me, who could and did in the past would just shake their heads and move on. Nobody wants to deal with such code outside of university...

    • mabbo 1738 days ago
      > most programmers even at Google and Amazon would not even be able to either read or write such code

      Hello, I am at Amazon. I spend a decent portion of my time working with an in-house Clojure library (not entirely dissimilar to Haskell) which handles some mission critical business logic my team owns.

      It terrifies me. I hate it.

      For every hour I spend working with it, I spend another hour trying to convince everyone we need to rewrite it in Java, immediately, or replace it with a different library. None of us have a clue how it works. We can't debug it. The guy who wrote it quit three years ago.

      I mean, it works. It works very well. But the day it doesn't work, we're not going to have a clue why that is. And then we're in real trouble.

      • rfrey 1738 days ago
        You could use those hours currently spent lobbying for a rewrite to, you know, learn clojure.
        • tempsolution 1738 days ago
          Yet another myth that comes from not working in big companies... Learning a language is mostly useless for several reasons: 1) you will be one of the only people willing to learn it; 2) you will have no capable reviewers, and usually you need about 2 for each CR; 3) your team has no production knowledge and muscle memory for that language, so pretty much everything you do will be garbage for a very long time and a huge operational risk; 4) other teams at your company will not know this language either.

          Bottom line: Pick the language most teams at your company are familiar with. At Amazon that simply is Java for anything that is not frontend, period. Even getting into Kotlin presents a major obstacle. Clojure? Yeah right.

          Edit: Just to clarify. Working in a big company is not meant as a restriction. In fact, having worked in several startups before, these would do well to adopt the same principles for programming and prohibit this "university graduate" way of developing software. In the end, if your startup is going to survive, it's going to become a big company itself someday.

          • ulucs 1738 days ago
            Learning a language that is almost completely useless for your company is still useful for you. It adds to your skill set, increases your market value and keeps you learning.

              Only learning corporate-approved languages is a bit shortsighted. Tech companies, development paradigms and programming languages all have shorter lifespans than humans. COBOL and Fortran, once very successful and popular languages, have stopped being mainstream for a while. And those two were the survivors of their batch. Who knows how many then-popular languages have faded away to obscurity?

            Also you have to keep learning even if you are to stick to the same language. Java is not 1.6 anymore. Languages keep evolving by borrowing good ideas from other languages. So why not use your learning time to stay ahead of the curve?

            Edit: All languages have their places, even in big companies. Could Whatsapp and Discord have grown at the same pace while using Java/C#? BEAM is the right tooling for them and it has paid off.

            • apta 1736 days ago
              > Could Whatsapp and Discord have grown at the same pace while using Java/C#?

              Arguably, yes. The JVM and .NET CLR are both superior platforms when it comes to performance. They also both offer actor models with supervision (e.g. Akka/Akka.NET).

        • mabbo 1738 days ago
          Oh, I am learning clojure. As best I can and as quickly as I can. But that isn't the problem so much as clojure's ability to make code so succinct that to read it, you have to sit down and really, really read it.

          But see, someday I won't be on this team anymore. And then all the time I spent learning how this beast works will be wasted, and some new person will be scratching their head at it.

        • sk0g 1738 days ago
          The vast majority of devs can be (by some definition) productive in general languages like C#, Java, Python etc. in a day or a few. As in, able to grasp code, spot and fix logic faults.

          That is far from accurate for more left-field languages. Should every new hire spend three months meditating and becoming one with the functional gods? Seems a bit silly, almost as silly as trying to hire a programmer who is a fit for the company, knows the relevant domain, AND knows the programming language/paradigm dictated by someone long gone.

      • 0815test 1738 days ago
        > working with an in-house Clojure library (not entirely dissimilar to Haskell)

        Clojure is actually very different from Haskell. The pervasive use of "dynamic" features, including the lack of a satisfactory type system, does mean that Clojure code can be expected to be inherently less reviewable, surveyable and auditable, which is exactly what you found.

    • chachachoney 1738 days ago
      >> Sorry but this is just crazy.

      The article was about "virtues of Haskell's strong type system" but you seem to have read it as "Why you should use Haskell and Type driven development in place of your existing C++ or Java Stack."

      The author has published a computer science book. He probably doesn't work in a big company, but lots of people work in big companies and still appreciate FP / type theory / computational theory and forays into those realms.

      Not to mention... A little bit of stuff like this and a lot of Rust makes more sense.

      • jose_zap 1738 days ago
        He has worked for Google and Facebook, the big company argument is not very relevant, IMO.
  • kazinator 1738 days ago
    > Which means you can slowly use type holes to chip away at a difficult implementation, without ever really knowing what you're doing.

    Yeah, look! I started with a function signature and a little _. Lo and behold, fifty seven holes later, I have a graph-coloring optimal register allocator for MIPS for my compiler back end. Don't ask me how it works, but the type system assures it.

    • state_less 1738 days ago
      This type system, is it powered by AI?
      • mrkeen 1738 days ago
        There's nothing fuzzy going on. If I said f(x) = 2, you could quickly work backwards from that and state: 2 is an Int. x isn't used so it could be any type, let's say a. That means f has type (a -> Int).
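That backwards reasoning is exactly what GHC's type inference does mechanically; a minimal sketch (the signature below is the one GHC would infer from the bare definition, modulo numeric defaulting):

```haskell
-- From f _ = 2 alone, inference concludes: 2 is numeric
-- (defaulting to Int here), and the argument is never used,
-- so it can have any type a.
f :: a -> Int
f _ = 2
```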
    • mrkeen 1738 days ago
      This, but unironically.
  • hardwaresofton 1738 days ago
    This post is not for beginners/non-haskellers, I don't understand why it's being posted here. Is this intentional or are people in the haskell community this tone deaf? It's like we're determined to prevent new people from joining.

    Dear everyone, code like this is not where Haskell starts to get useful, it's useful much earlier. I won't reproduce it here but I just wrote a post with some practical code (and how haskell helped) taken from a project of mine: https://news.ycombinator.com/item?id=20260095.

    If you are using Coyoneda in your code, please do not use it to extol the virtues of Haskell, except if you're sure your audience is expert haskellers/mathematicians/researchers/others who can actually gain something from the post, or if the writing is so sublime that it has reduced the complexity to near zero.

    Please OP, you're actively hurting the adoption of Haskell by posting stuff like this in the wrong places/context.

    The author of the article (Sandy Maguire) created a library called polysemy that improved on the efficiency of using a concept called Free Monads in Haskell -- their main benefit is that they let you write functions that deal only in your business domain. A rough analogue would be a dependency-injected function that cannot do anything outside of what the injected pieces do, but that also allows different dependency implementations to be mocked out entirely within the language (i.e. without the XML/annotation hell that is Spring DI magic).

    • danharaj 1738 days ago
      > This post is not for beginners/non-haskellers, I don't understand why it's being posted here. Is this intentional or are people in the haskell community this tone deaf? It's like we're determined to prevent new people from joining.

      There are plenty of submissions on HN that are aimed at specialists of a field. They get to the front page because people are curious about specialized fields. So, even though I disagree from the pits of my soul with the content of the submission (as a professional Haskell developer, I should add), I think you're being very unfair to say that stuff like this shouldn't be shared with a broader audience.

      • hardwaresofton 1738 days ago
        You're right, my comment was a bit of a knee-jerk. A few things:

        - You're absolutely right about specialist-aimed submissions on HN, it's a large part of the reason I keep coming back to HN, I should have thought about that a bit more. I think this post was a little too far into the very deep end of what haskell can do.

        - Stoking curiosity about Haskell is a wanted/welcome result, and I think part of my annoyance was that this article was very unlikely to do that, in this context.

        - I agree with the fact that deeply technical posts like this should be shared with a broader audience -- I didn't mean to imply that all posts shouldn't be shared, but the audience needs to be taken into account a little more. r/haskell is a very different audience from HN.

        While we're here, I'd love to get your opinion on polysemy -- it looks like the best thing since sliced bread for free(r) monads, the combination of the usability of freer-simple and the efficiency gains of fused-effects.

        I've long secretly held the belief that mtl-style and free monads are roughly equivalent in power and that efficiency parity was only a matter of time, but I'm really drawn to the free approach because effect algebras seem to be a fantastic tool -- they compose more cleanly than mtl does.

        • danharaj 1738 days ago
          I sympathize!

          > While we're here, I'd love to get your opinion on polysemy -- it looks like the best thing since sliced bread for free(r) monads, the combination of the usability of freer-simple and the efficiency gains of fused-effects.

          > I've long secretly held the belief that mtl-style and free monads were roughly equivalent in power and that efficiency parity was only a matter of time, but I am really drawn to the free approach because effect algebras seem to be a fantastic tool -- they compose more cleanly than mtl does.

          I've not tried polysemy and I don't think I will unless it comes up on the job. I'm inured to effect systems after the fifth or sixth flavor. I've heard it's nice from people who've wanted to use it though. I've written my share of production Haskell and I haven't found mtl to be as bad about composition as people tell me it is.

      • xelxebar 1738 days ago
        Oh man, I'd absolutely love to work in Haskell professionally. Mind if I ask for your advice? As someone with ~4yrs in industry as a mobile developer and a master's in math, what are some routes I should be looking into?
        • danharaj 1738 days ago
          So, I got my first job by going to my local Haskell meetup and chatting with someone who would end up being my boss, and I've stuck with that crew since. I think that's rather serendipitous but it definitely doesn't hurt to go hang out with the Haskellers in your area. There's a regular meetup in many major cities, including Boston, SF, NYC, London and Zurich off the top of my head. Our shop has also hired people who respond to our job postings (on Reddit, for example), as well as people who participate in the community on IRC and Discord either by word of mouth or recommendation.

          Now, as someone who evaluates resumes and interviews people, the only Haskell specific thing I care about is making sure the candidate knows how to use the core of the language, which is basically Haskell2010 + some indispensable extensions. I expect fancier stuff to require training, and to be honest it's sometimes better to properly contextualize that stuff for someone instead of having them come in already thinking they know how to use the footguns :^). My company asks candidates to attach or link code samples if they have them. If they don't, we ask them to write a small Reflex [0] app, because we're primarily a Reflex shop.

          Now, every shop is going to be different and expect different things from their candidates. That's one reason why getting involved in the community is so useful. If you hear directly from people who employ Haskellers in your area what they want, then you can build your credentials for that. Besides that, you just gotta apply. If you're a strong candidate in another tech stack, you're a strong candidate for a Haskell stack too if you can demonstrate you can use the language. There's currently a remote job posting on r/haskell if you want to give it a shot. I wish you the best of luck :)

          [0] https://reflex-frp.org/

        • hardwaresofton 1738 days ago
          Some resources:

          - https://www.fpcomplete.com

          - https://haskellweekly.news

          - https://wiki.haskell.org/Jobs

          - https://angel.co/haskell/jobs (if you're into startups)

          - r/haskell on reddit

          I do not work in haskell at my day job but I've found writing side projects with it has done wonders for my understanding -- I've been able to work through the trends in how people are being productive with haskell (mtl, readert, free monads, vinyl, datakinds shenanigans), just by trying to make stuff with haskell.

        • dllthomas 1738 days ago
          Apply at LeapYear?
    • crispinb 1738 days ago
      > you're actively hurting the adoption of Haskell by posting stuff like this in the wrong places/context

      If someone's put off learning Haskell because they are (terrifyingly!) exposed to a single obviously non-beginner's article (on HN not the Daily Mail!) -- well, they must be too incurious to be worth recruiting to the language anyway.

      I know nothing about Haskell (and am unlikely ever to learn it for unrelated reasons), but I read the article, understood little of it, and survived with my interest if anything somewhat piqued.

      • hardwaresofton 1738 days ago
        It kind of sounds like using curiosity as a gatekeeping mechanism. There's also nothing to suggest that someone who isn't curious today can't become more curious as time goes on. Maybe someone writes their first program with `unsafePerformIO` everywhere without understanding how to properly use monadic contexts, but that's likely not where they're going to stop -- that person may go on to properly understanding monads, monad stacks, then MTL, then free monads, etc.

        Great things happen when barriers to entry are low, why would you want to make the barriers seem higher (despite the fact they didn't actually change)? You're going to scare off someone who could have contributed a simple driver/client/whatever (the kind of libraries that make thriving ecosystems) that doesn't require any of this type tomfoolery (I use that term jokingly not pejoratively), because they wrote off haskell early.

        • crispinb 1738 days ago
          > It kind of sounds like using curiosity as a gatekeeping mechanism

          I don't know about 'using'. I think it just is one - learning is possible but terribly inefficient in its absence.

          > There's also nothing to suggest that someone who isn't curious today can't become more curious as time goes on

          Very true & thanks for reminding me. That we can & do change (and must leave space for each other to) is one of the drums I like to bang now and then.

          • hardwaresofton 1738 days ago
            > I don't know about 'using'. I think it just is one - learning is possible but terribly inefficient in its absence.

            I definitely agree there -- it's definitely part of the barrier to entry; people have to start looking for other ways of doing things before they'll land on Haskell, since it isn't mainstream.

            The things from Haskell that are trickling out into the mainstream should (hopefully) point people back at Haskell, if it's at least mentioned in passing.

            > Very true & thanks for reminding me. That we can & do change (and must leave space for each other to) is one of the drums I like to bang now and then.

            Yeah, I really felt this when I watched Sandy's talk on polysemy (https://www.youtube.com/watch?v=-dHFOjcK6pA0).

            There were parts he didn't quite fully understand, and he mentions it.

            Also to temper my previous comment, I want to be clear that I don't necessarily want every single able bodied coder to necessarily join the haskell ecosystem -- I am at least a little cynical in that I believe if there exist good contributors (that you do want) there are probably bad ones (that you don't want).

      • EdwardDiego 1738 days ago
        > they must be too incurious to be worth recruiting to the language anyway.

        Is Haskell a language or a cult?

    • hardwaresofton 1738 days ago
      A follow up to this comment... Please, haskell people, read what other people are writing:

      > Sorry but this is just crazy. I always wonder if people writing blog posts like that have ever worked in a big company. You need to spend at least 2-3 hours a day doing code reviews and most of the rest you are writing code, which subsequently involves reading the code you wrote yourself and others have written.

      > Now we currently use Java here. I worked with other programming languages in companies too, for instance C and C++ which were a nightmare with no equal. Don't get me wrong, when I was in college I just loved creating crazy shit with template metaprogramming and preprocessor metaprogramming. Once you mature out of it, this is just the most craziest thing you could ever do.

      These impressions of Haskell are wrong. Haskell is not crazy; it's actually insanely reasonable -- so reasonable, in fact, that it will not let you compile code that is crazy.

      This is the message getting lost in crazy type signatures (which admittedly often do cool/useful things if you understand them) and overly verbose/complex examples.

      > Hello, I am at Amazon. I spend a decent portion of my time working with an in-house Clojure library (not entirely dissimilar to Haskell) which handles some mission critical business logic my team owns.

      > It terrifies me. I hate it.

      I do not use/encourage use of Clojure any more, and this comparison scares me -- Haskell is not Clojure, but it's getting lumped together with it, and this really worries me because it should not be.

      Haskell helps you write clear code -- type signatures, pure functions, immutable values, typeclasses, and algebraic data types are all tools to make your code clearer and more obviously correct. If this isn't getting across, Haskell is in trouble/has a big marketing problem.

    • nilkn 1738 days ago
      I don't really understand. I thought this was an extremely interesting post, and it was very easy to follow. It's clearly not written for a complete beginner, but I don't think this is intended for experts either -- not even close. This kind of material is the sort of thing I come to HN to find.

      I also remember reading your comment the other day, and I thought it was great too.

      • hardwaresofton 1738 days ago
        Yeah I clarified up top -- I think the title "Implement with Types, Not Your Brain" does a bad job of setting expectations.

        Personally, I clicked through expecting to find out about some new way haskell can help me use my types to verify how implementations work (the title is fantastic in this sense, and I saw the domain), but I was instantly hit with that bit of code that's somewhat inscrutable and doesn't do enough to set up the context of what's happening and why. Then the general point of the article basically was to use polymorphism and type holes to work your way to what you need. I felt like this wasn't what I signed up for, and wondered what others thought, only to go back and see upvoted comments bashing haskell based on this one example.

        This is what led me to make my comment -- I think this article is too advanced to post here without some really good prose/writing. Not blaming Sandy, as of course he doesn't control how his content spreads, but I thought it just wasn't right for this context, and sharing something that makes Haskell seem harder than need be is damaging, IMO.

        The case that "Haskell will tell you what types it expects" is so much easier to make without the monstrous signatures in that post, or without type-level computation (the `':`).
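        For what it's worth, that "Haskell will tell you what it expects" workflow can be demonstrated on something tiny. A toy sketch of mine (not from the article):

```haskell
-- Toy illustration of the type-hole workflow (not the article's code).
-- If you write the body as `apply f x = _`, GHC rejects the program with
-- a message along the lines of "Found hole: _ :: b" together with the
-- relevant bindings (f :: a -> b, x :: a). The only way to produce a `b`
-- is to apply f, so the hole tells you the implementation.
apply :: (a -> b) -> a -> b
apply f x = f x
```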

    • mrkeen 1738 days ago
      Really?! Where exactly should we post write-ups about complicated concepts like the y-combinator?
    • gridlockd 1738 days ago
      > Please OP, you're actively hurting the adoption of Haskell by posting stuff like this in the wrong places/context.

      I think what's really hurting the adoption of Haskell is Haskell. This blog post represents Haskell quite well. Haskell will never be "popular", just like category theory will never be "popular".

    • stcredzero 1738 days ago
      > Is this intentional or are people in the haskell community this tone deaf? It's like we're determined to prevent new people from joining.

      It happens with small "misunderstood" groups. People in such groups start doing things to purposely set themselves apart and distinguish themselves from the outside majority. It seems partly subconscious or instinctual, though it's also expressed as conscious actions and can involve quite a bit of thought and rationalization.

      In particular, it happens with programming communities.

    • Tehnix 1738 days ago
      I don’t have any particular need to defend the posting, but I disagree with your take, as I think you’re not aware of how subjective your opinion is, and there are more target groups than what you seem to focus on.

      > This post is not for beginners/non-haskellers, I don't understand why it's being posted here.

      Plenty of interesting topics related to specific languages get posted here all the time, I am under no assumption that 100% of HN is haskellers, but there is still enough that people find it worthwhile to share and upvote on here.

      > Is this intentional or are people in the haskell community this tone deaf?

      Why would you blame the community on this? If anything, the need to create rifts with comments like yours, is something the community deeply needs to move on from.

      The reason for the posting is that it’s an interesting exploration of type holes and their usefulness. As you may have noticed in several places in this thread, the concept that the more polymorphic the type, the more restricted you actually are seems to have piqued interest and opened some eyes to new concepts.

      > It's like we're determined to prevent new people from joining.

      Not at all. What people find interesting is very subjective, and one of the things I sorely miss from daily working in other languages, is being able to work with the compiler as a pal that helps you out. This post demonstrates one way to make Haskell work for you, and if you’ve honestly read the post, Sandy does a good job of explaining the concept on simpler functions.

      > If you are using Coyoneda in your code, please do not use it to extol the virtues of Haskell, except if you're sure your audience is expert haskellers/mathematicians/researchers/others who can actually gain something from the post, or if the writing is so sublime that it has reduced the complexity to near zero.

      Sorry, but I’m starting to feel like you didn’t read the article, but maybe just the comments here. At no point did the post care about Coyoneda. It started out by demonstrating how a complex piece of code could be reached with the method (type holes), and then went on to demonstrate the method on simpler code, removing the need to know anything else. If you focused on understanding the first piece of code, then you’ve completely missed the author’s intent.

      > Please OP, you're actively hurting the adoption of Haskell by posting stuff like this in the wrong places/context.

      That’s a fair opinion to have, but not one I can agree with at all. With a wealth of languages to pick from, why should anyone pick Haskell? Some of what drives people’s interest is these demonstrations of what is possible when you have a more expressive type system. There are tons of other reasons too, but as a “selling point” it’s not “look how simple Haskell code can look”, because that is sure as sh*t lying to people straight to their face -- that’s not where Haskell differentiates itself.

      I’ll close out with adding that Polysemy is very cool, and I’m personally looking forward to seeing it ready for usage :)

      • hardwaresofton 1737 days ago
        Even if I'm 100% completely wrong, please compare the comments here versus the comments on r/haskell. The difference is night and day.

        https://www.reddit.com/r/haskell/comments/c5dfuw/implement_w...

        Surely this is the difference of (more) appropriate context & audience.

        Maybe this is just a case of the loudest people on HN being the most disapproving of haskell somehow, but just about all the top level comments except for mine and the bottom one are critical of haskell because of the perceived complexity of the code that was posted.

  • sfink 1738 days ago
    This reminds me of solving physics problems by getting the units right.

    Blah blah ...15 meters per second... blah blah ...after 10 minutes... blah blah ...how far... blah blah => aha! I probably want to multiply 15 by (10 × 60).
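    In a language with cheap wrapper types, the same trick can be pushed into the type checker. A minimal sketch with made-up newtypes (illustrative, not a real units library):

```haskell
newtype MetersPerSecond = MetersPerSecond Double
newtype Seconds         = Seconds Double
newtype Meters          = Meters Double deriving (Eq, Show)

-- Mirrors the "get the units right" heuristic: the only sensible way to
-- produce Meters from these inputs is v * t.
distance :: MetersPerSecond -> Seconds -> Meters
distance (MetersPerSecond v) (Seconds t) = Meters (v * t)

-- distance (MetersPerSecond 15) (Seconds (10 * 60)) is Meters 9000.0;
-- swapping the arguments is a compile error rather than a wrong answer.
```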

    • tome 1737 days ago
      Yes, very much the same flavour! In each case although it doesn't guarantee the right answer it gets you closer.
  • jrochkind1 1738 days ago
    Are they suggesting writing code that you can't later understand or debug?

    Does this actually work out for Haskellers? I mean, I guess I'll believe you if you tell me it does, Haskell is different enough from the kind of programming I do that I don't think I know anything at all about it.

    • jeremyjh 1738 days ago
      Some of the type-machinery code (hint: it has words like "hoist" or "lift" that describe type operations, not data operations) is like that, and it's really not a problem, because it does the only thing it can do given the types. The things it does have nothing to do with your business domain or I/O, so it's never going to change unless a library you are using changes. When that happens it's annoying, but you just play type tetris again until the red squiggly lines go away, and then you are fine for another few years. It isn't something I'd really brag about, though, as it's really hard to explain the benefits of this stuff to non-Haskellers, and you can end up wasting infinite amounts of time polishing these abstractions instead of getting work done.
      • jrochkind1 1737 days ago
        > because it does the only thing it can do given the types

        Okay, this may be (probably is) a stupid question, because I know no Haskell whatsoever, but... if there's really only one thing it can do given the types, why does the code have to be there at all? Why can't you just write "Do the only thing you can do given these types"? If it's code humans aren't meant to read anyway, and it really is the only thing that could be done... why can't the compiler just do it without you having to put the non-human-intelligible code in your source files?

        • syrak 1737 days ago
          This is an interesting question.

          Technically there is more than one possible implementation, but you would also have to go out of your way to get it wrong. Automatically determining what constitutes "going out of your way" so that the code can be generated automatically is an active area of research (program synthesis), but as this post already shows, for "simple" cases, we're getting pretty close.

          As the post mentions, there are certain dead giveaways of an incorrect implementation, such as unused variables. Conversely one may still have some rough idea of the desired code to guide the implementation. The point is not about entirely removing one's mind from the process, but to allow oneself to only think about the choices that matter, which are few.

          Alternatively, another way to look at the problem is that these types, while already being quite precise, are still not as precise as they could be, because the type system is not sufficiently expressive. And even with the ability to describe the desired properties to uniquely determine the implementation:

          - it may not be obvious that the implementation is in fact uniquely determined;

          - a solution, unique or not, may not be easy to find automatically. Program synthesis is essentially the same problem as proof search (cf. the Curry-Howard correspondence); knowing that there is a proof of Fermat's last theorem is not sufficient to construct an actual proof of it from scratch.

          For these reasons, it may still be desirable to nail down the implementation explicitly even if it is hard to read.

          It's also worth considering the fact that polysemy (the package that the controversial snippet at the beginning of the post comes from) is very much an implementation of a state-of-the-art effect system using state-of-the-art features of Haskell's type system, so it can be expected that the abstractions to make this code more digestible (for the right audience) are still missing, because no one has ever thought of how to express them yet.

    • pavas 1738 days ago
      This comes down to the concept of 'proof'. How do you know that the code you wrote does what you think it does? You construct some sort of proof in your mind, but for any non-trivial code you have to follow certain operational rules[1], because you can't hold the whole code in your mind all at once. That is, you work on smaller parts of it, proving to yourself that they work like you expect, and put those parts together using operational rules that you just have to assume are correct. Like if you have {f(a,b) => a+b}, you can conclude in your mind that {f(1,2) == 3} (but you're really doing a whole lot of rule-application under the covers).

      What if the code you wrote gave you guarantees about the things that it did, as long as you trust the compiler/proof-checker/other tools that used to verify the correctness of the code?

      This is basically what testing does: it gives you proof. As long as the things you test are "pure" or functional, in the sense that they will do the same thing every time given the same inputs, you have proof that the code is correct.

      The problem with testing is that you can't test everything because of combinatorial explosion. However, there's a trick to this where you can collapse states together and prove something using that collapsed state. For example, you collapse all 32-bit positive integers {1,2,3,...} into the state {N} and now as long as you prove something that holds for {N} you also proved something that holds for all 2^31-1 positive integers. You just have to be very careful and very precise about what you're doing, and that's where your compiler comes in to help.

      (So this next part is kinda butchering the math behind it and all sorts of programming language theory, apologies in advance.) You can kind of think of the state {N} as a type [2], and {1,2,3,...} as instantiations of that type. You can do something like "prove" that if you're given {f(a,b) => a+b} and {a=1, b=2} then you can conclude {f(1,2) == 3}, and throw that under some test. The test instantiation where you run 'assertEquals(f(1,2), 3)' is basically a concrete instance of your proof that 'f' does what it should. You just have to trust that your runtime environment is behaving correctly (no bugs).

      You can also consider functions that can be defined in your language as types (e.g. f: int -> int, a function that takes an int and gives you an int). That's the 'function signature' or 'function declaration' -- its type. And you can consider instantiations of this type to be the same thing as a function definition -- where you define what the function does. As long as your function definition doesn't throw any compiler errors (that is, it passes your type-checker and there are no bugs in your toolchain), then you have provided an instance of a proof that the function definition is of the type of the function declaration. This might not sound like much ("great, the function definition has a certain type... didn't we already know that?"), but as long as your function definitions are pure functions (no external state change), you've just guaranteed type safety in your program [3]. Your program will never crash because you input a string when you should've input an int (hello JavaScript...). The type-checker won't allow you to write such a program.
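      To make the running f(a,b) => a+b example concrete in a typed language (a sketch, with the same caveat about trusting the toolchain):

```haskell
-- The signature is the "declaration" (a type); the body is one
-- "instantiation" of it that the type checker verifies. An ill-typed
-- call such as `f 1 "two"` is rejected at compile time, before any
-- test ever runs.
f :: Int -> Int -> Int
f a b = a + b

-- The runtime analogue of assertEquals(f(1,2), 3):
check :: Bool
check = f 1 2 == 3
```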

      You can get a lot more guarantees than what I just mentioned, and there is a ton of active research in this area [4].

      So to answer your first question, yes, they are suggesting writing code that you don't necessarily understand, because you "offload" parts of your understanding to the compiler (type-checker). It's like when you do an automated refactor -- if your code and the auto-refactoring tool are written correctly, you can just trust that it did the right thing and doesn't cause any bugs.

      As for your second question, I can't answer it because I haven't used Haskell.

      [1] https://en.wikipedia.org/wiki/Operational_semantics

      [2] https://en.wikipedia.org/wiki/Type_theory#Basic_concepts

      [3] https://en.wikipedia.org/wiki/Type_safety

      [4] https://en.wikipedia.org/wiki/Static_program_analysis

  • mpweiher 1738 days ago
    This piqued my interest, because that's the same way TDD works for me.

    And then it starts with "Let's go through an example together. Consider the random type signature that I just made up:"

    Hmm...I never have a type signature as a starting point, random or not. I have some sort of requirement, some sort of functionality I want out of the code. Soft fuzzy requirements.

    And the types rarely if ever tell me what the function does. Heck, a function that's int x int -> int could be just about anything. How about string -> int? Does it convert the string to an integer or count the vowels?
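    Both readings really do inhabit that signature -- here are two equally well-typed toy definitions (mine, for illustration), which is exactly the ambiguity being described:

```haskell
-- Two very different inhabitants of String -> Int:
parseInt :: String -> Int
parseInt = read   -- partial: errors at runtime on non-numeric input

countVowels :: String -> Int
countVowels = length . filter (`elem` "aeiou")
```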

    • codebje 1738 days ago
      I usually start with types. A signature saying I'm going from a pair of ints to one int doesn't say why I want to do that, I trust that the function name will, but it does help guide me to a function that's more likely correct than one without a type signature. More complex types offer more benefit, as they constrain the set of correct implementations further. Consider just the value of saying Int x Int -> Nat.

      But mostly I like Haskell's type system for the high degree of confidence it gives me while refactoring. Type Tetris is a side benefit on that, IMO.

    • golergka 1738 days ago
      > Heck, a function that's int x int -> int could be just about anything. How about string -> int?

      That's because using types like "int" and "string" is not a real, valid use of a good type system. Instead, you'd typically have a function type of `newton -> sqMeter -> pascal`, or `username -> accessLevel`. All these types would be implemented on top of basic int and string types, but declaring them explicitly, with very limited conversions in between, would actually use the type system to verify the correctness of your code.
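      A sketch of what that might look like (illustrative newtypes, not a real library):

```haskell
newtype Newton  = Newton Double
newtype SqMeter = SqMeter Double
newtype Pascal  = Pascal Double deriving (Eq, Show)

-- Pressure = force / area. All three are Doubles underneath, but the
-- wrappers stop you from passing, say, a Pascal where a Newton belongs.
pressure :: Newton -> SqMeter -> Pascal
pressure (Newton f) (SqMeter a) = Pascal (f / a)
```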

      • mpweiher 1738 days ago
        So you're going to not have arithmetic or string manipulation?
        • tathougies 1738 days ago
          Not really. I may have vector length manipulation and address formatting though. But vector lengths are not integers (or natural for that matter) and an address is certainly not a string. Perhaps you could use these primitives to represent these things but we should not confuse representation with equality.
          • AgentOrange1234 1738 days ago
            Yes so much.

            An integer can represent all sorts of things, so forever programmers have been using integers to represent all sorts of things.

            If instead of “int age, int size, int color” you have “age_t age, size_t size, color_t color”, and a type system that helps keep these apart, so much confusion can be avoided...

        • codebje 1738 days ago
          You're probably not going to need a separate function for a binary operator.
        • golergka 1737 days ago
          Arithmetic of adding up kilograms and meters together is wrong.

          Arithmetic of adding up kilograms and kilograms together is automatically inherited when you declare a derived type.
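          In Haskell, that "automatically inherited" arithmetic can be spelled with GeneralizedNewtypeDeriving -- a sketch:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Kilograms inherit (+), (*), etc. from Double, but only among
-- themselves:
newtype Kilograms = Kilograms Double deriving (Eq, Show, Num)
newtype Meters    = Meters    Double deriving (Eq, Show, Num)

totalMass :: Kilograms
totalMass = Kilograms 2 + Kilograms 3   -- fine

-- Kilograms 2 + Meters 3              -- rejected by the type checker
```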

  • noncoml 1738 days ago
    I really wanted to get into the promised land of Haskell, and tried hard to, but my enthusiasm went south when I found out I could hit bugs with lazy IO that the compiler wouldn't warn me about.
    • jose_zap 1738 days ago
      Lazy IO was an unfortunate historical accident, one the Haskell community now widely preaches against. Alternative standard libraries that remove this accident abound in the ecosystem.
  • slifin 1738 days ago
    I've never read a code base where I thought the type system was doing a good job of being a DSL for describing business requirements

    Types are great, IMO, for checking against primitives, because the semantics of int, string, array, etc. are strong. But most languages encourage wrapping primitives in some user-defined wrapper, and then the semantics of the wrapper are implicit, brittle, and known only to the author.

    The semantics of your Bob class are more likely to change, as it's used in many different contexts, than the int type is. If the int type is being swapped out, it's usually because the wrong data is being used -- which is what we are using types to protect against -- not usually because there's a problem with what int is.

    • syrak 1737 days ago
      > I've never read a code base where I thought the type system was doing a good job of being a DSL for describing business requirements

      Would refinement types help in that respect? For example, the F* language. https://www.fstar-lang.org/

      I don't have any experience with "business" code, but my naive understanding is that application-level code is typically very monomorphic, which is exactly where refinement types are useful, whereas the OP leverages polymorphism to a large extent (though not necessarily), and that works well for general-purpose libraries, which don't and must not care about their users' data.

  • dan-robertson 1738 days ago
    I read the argument in the article as something like: “strong advanced type systems like Haskell’s are good because instead of manually writing hard functions, one can just write the type (i.e. the meaning of the function) and do a bunch of menial compiler-directed work to build up one’s function in little steps. This is great because one doesn’t need to ‘actually’ write such horrible functions.”

    I find it slightly silly because one still has to read such functions so it is still hard to know that they do the right thing. For example, consider the function in the article which has type:

         Sem (State s ': r) a
      -> S.StateT s (Sem r) a
    
    It seems pretty obvious what this morally should do.

    In some cases it is possible for polymorphism to force a value of a certain type to always behave a certain way. E.g. a value of type [a] must be the empty list, because that is the only value with that type. On the other hand, there are multiple values of type [[a]] (i.e. an empty list, or an infinite list of empty lists, or anything in between). A more complicated example is that the only thing a function with the signature of compose ((b -> c) -> (a -> b) -> (a -> c)) can do is compose functions, whereas a function of type (Int -> Int) -> (Int -> Int) -> (Int -> Int) can do just about anything. Outside of abstract Haskell libraries, code tends to look more like the second case than the first.
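    That contrast is easy to make concrete (ignoring bottom):

```haskell
-- Parametricity: up to bottom, composition is the only total inhabitant
-- of this signature -- there is no other way to conjure a `c` from an `a`.
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = f . g

-- The monomorphic version admits endless implementations; the types
-- constrain almost nothing:
weird :: (Int -> Int) -> (Int -> Int) -> (Int -> Int)
weird _ _ = \x -> x + 42
```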

    In this case it is hard to be sure that the function must behave in the correct way because of its type. I think it hinges on whether it is possible for Put to change the type of the state. If it can, then the types force you to always take the newly put value and return it with Get; otherwise it would be possible for Put to, e.g., only sometimes work. So if one has to check that the code is correct, one must either very carefully read and check the code, or very carefully read the type signature and deduce that the code is correct.

    In the first case the fact that the compiler made the code easy to write doesn’t really help much. Perl 5 taught us that just because a program was easy to write does not mean it will be easy to read or check.

    In the second case, why do we have a program at all? If there is only one correct program we could have written then it seems to me that the program is in fact the type and the compiler ought to have been more clever and written it itself. (There are of course plenty of issues with that statement).

    It seems to me that claiming “Haskell is great because it helps automatically write difficult functions which are impossible to check” or “Haskell is great because it makes me do the manual steps in writing a function that can only ever do one thing and mixes that pointless code in with the thing I care about” is a bit silly.

    This all being said, I do think holes are a useful feature but I don’t think they are useful for entirely writing a function for you. They are particularly useful in proof assistants like Agda where one must manually exhibit a million trivial propositions and holes help fill in all the boring gaps. Agda doesn’t really suffer from the “multiple things a type could mean” issue.

    • syrak 1737 days ago
      > In this case it is hard to be sure that the function must behave in the correct way because of its type.

      I would argue that in this case it is quite easy to ensure that it does the right thing, because even if the implementation is not unique, the number of possibilities is extremely limited. In the case of `Put` you can return either the new state or the old one. It takes a single unit test to ensure it's putting the new one, with parametricity to generalize from one case to all cases, still without looking at the implementation.
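      A hedged sketch of this point, using mtl's plain State rather than the article's polysemy effects (the function names here are made up for illustration): the type allows a put-then-get to surface either the new state or the old one, and a single test tells them apart.

```haskell
import Control.Monad.State

-- The correct choice the type allows: Get observes the new state.
putThenGet :: s -> State s s
putThenGet s' = put s' >> get

-- The other type-correct choice: Get observes the stale state.
putThenGetStale :: s -> State s s
putThenGetStale s' = do
  old <- get
  put s'
  pure old

main :: IO ()
main = do
  print (evalState (putThenGet 1) (0 :: Int))      -- 1
  print (evalState (putThenGetStale 1) (0 :: Int)) -- 0
```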

      > In the second case, why do we have a program at all? If there is only one correct program we could have written then it seems to me that the program is in fact the type and the compiler ought to have been more clever and written it itself. (There are of course plenty of issues with that statement).

      Program synthesis being basically proof search, even after the right automation is developed it may still be more practical to write the program by hand.

      > They are particularly useful in proof assistants like Agda where one must manually exhibit a million trivial propositions and holes help fill in all the boring gaps.

      I think the same mechanisms and benefits are at play here in a non-dependently-typed setting, even considering "entirely writing a function for you" as an unwarranted exaggeration.

  • Gormisdomai 1738 days ago
    I would really love to see an IDE that auto suggests / autocompletes code based on typed holes like this.

    Does such a thing exist?

    • gcommer 1738 days ago
      Holes in Agda are a good bit more powerful than in Haskell, and Agda's emacs mode has a bunch of very powerful commands for working with them, e.g. listing possible values, automatically filling them in, splitting them by case, etc.

      For example, for this post's "jonk" example I just had to copy the type into emacs, reformat it a bit into Agda syntax, then press C-c C-a and it automatically figured out the solution that the author worked through manually: λ z z₁ z₂ → z₁ (λ z₃ → z₂ (z z₃))

      See the full list of commands at https://agda.readthedocs.io/en/v2.5.2/tools/emacs-mode.html
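      Assuming the post's type for jonk, that Agda term transcribes back into Haskell (with the machine-generated binders given readable names) as:

```haskell
-- λ z z₁ z₂ → z₁ (λ z₃ → z₂ (z z₃)), with z = f, z₁ = k, z₂ = g
jonk :: (a -> b) -> ((a -> Int) -> Int) -> ((b -> Int) -> Int)
jonk f k g = k (g . f)

main :: IO ()
main = print (jonk length (\h -> h "abc") (* 10))  -- 30
```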

    • Jtsummers 1738 days ago
      Check out Idris and the book Type Driven Development. It has an emacs mode that does that for you. I’m not sure about other languages. It was a very nice experience though I just went through it for the learning experience and haven’t tried to apply it to any problems since then.
    • derefr 1738 days ago
      Why even have the code? The type is all the compiler needs to spit out a definition. Your source can just be the type.
    • hardwaresofton 1738 days ago
      Idris does this with a REPL-like interface:

      https://www.youtube.com/watch?v=mOtKD7ml0NU

    • pirocks 1738 days ago
      IntelliJ can sorta do this for Java if you do Ctrl+Shift+Space.
  • wellpast 1738 days ago
    > Once you really understand the typesystem, most of the time, types really are the best documentation --- they often tell you exactly what the function does, in a way that English comments never will.

    Snake Oil.

    Apply this claim to any set of real world functions, however well designed they are.

    This is classic Haskellism. Obsessed with the lattice of types in the room. Door closed. Real world outside.

    • quickthrower2 1738 days ago
      Not completely. Types are documentation. When going from JS -> TS you suddenly have the ability to create types that allow you to easily discover what is in an object. You can now confidently say foo.bar.replace(...) and know it'll work at runtime.

      Otherwise you hope that someone has written in a doc somewhere that bar is a string. And judging by npm docs I've seen - that is very unlikely. So you debug at runtime, head over to the github repo, or do a console.log(typeof(... hmm!

      • Scooty 1738 days ago
        My biggest problem with this claim is typescript doesn't include any support for runtime type checking.

        My experience with TS has been great until I need to deal with data that comes from an external system (API, client, database). Then I either have to cast the type (which can lead to bugs where compile and run time types don't match and IMO isn't an option server-side) or I have to effectively duplicate my types by writing type guard functions.

        • kyllo 1738 days ago
          Speaking of TS, here is a good argument for static typing:

          >Change a front-end React prop, then follow static type errors through the statically typed API, into the backend, all the way down into the database where they force a change in the Postgres schema. The system won't even compile unless React can talk all the way down to Postgres.

          https://twitter.com/garybernhardt/status/1140695491685933060

          • wellpast 1738 days ago
            You’ve just described hell.
            • wellpast 1738 days ago
              Okay, let me be more specific. I've built complex React apps full-stack down to Postgres and maintained them over ever-evolving/increasing product requirements. To schedule. With a tiny team, part-time dedicated. And little maintenance/defects/outages in between feature pushes.

              The success here lay in using various tools, applied pragmatically at each step according to the given context. Probably the most important tool is defensive coding and building up a system from decoupled, almost visibly-correct components. Then some nice technical tools thrown in: dynamic development with hot loading, some modest storybooking of components, a repl, good backend libraries, conscious and measured unit testing, strong build/deploy/release tools.

              The ability to turn on a dime and incrementally refactor things was paramount to the success and efficiency of this development.

              Then someone says we need to use strong static typing. All the way. No judicious application of this tool; we'll use it everywhere. We'll make everything cohere to the type system. This is no pragmatic choice. This is religion. You know it's religion because it's an ideology that pervades the whole system. There is no judicious application. The extreme tax of contending with types as your system evolves is high. I've only ever seen strong-typing enthusiasts deny this truth. Will one enthusiast be honest here? You can literally stand over a strong-typing enthusiast's shoulder as they spend an hour running all over their system adjusting their types for one minor change in business requirements, and they'll still say, "No, no, types don't cost anything..."

              • kyllo 1738 days ago
                You sound smart, but I don't want to have to be that smart when I'm refactoring code. I want the compiler to do as much thinking for me as possible. I want to be able to change it in one place and have type errors to remind me of any other places in the codebase I forgot to change.

                Static analysis doesn't solve all your problems, but it solves enough of them to be a very useful technique.

                • wellpast 1737 days ago
                  Static typing cannot ensure about the code what the coder can and should ensure about it themselves.

                  And often -- very often -- especially in strongly typed languages -- the static checker will reject code that is otherwise valid. First point.

                  Second point: Statically typed PLs (especially strongly typed ones) enforce the verifications across the code, with no (at least non-ridiculous) way for the coder to be judicious about things.

                  Third point: statically typed languages have weaker runtime features/abilities: polymorphism, homoiconicity, true REPL (ie true read->eval), etc.

                  All of these points add up to MORE cognitive complexity for the coder who is trying to develop non-trivial business applications -- not less cognitive complexity.

                  It's not about being smart. It's about learning. Once you learn algebra, say, many classes of problems that would otherwise be too much cognitive complexity become tractable.

                  Type systems are focused on one thing: type coherence. But the real world demands runtime dynamics, and those dynamics are what the customer is paying for; being as close to that need as possible IS lower cognitive impedance.

                  • kyllo 1737 days ago
                    "enforce the verifications across the code" is the point.

                    I've worked with a ton of Python code that looks like this:

                        def foo(bar):
                            # do stuff
                            return baz(bar)
                    
                    What does `foo()` return? I have to go read the body (or docstring if I'm lucky) of `baz()` to know that. And `baz()` might be another level of indirection to `quux()`. If I change what `baz()` returns, now I need to grep my codebase for all call sites of `baz()` and verify that the new return type is acceptable at each of them, and make changes if not. This is super time-consuming and error-prone, especially when a compiler (especially with an IDE refactoring tool) could take care of it for me in seconds. It's easier if I have a good test suite, but that means I'm just implementing static type checking with runtime tests, which is more code that I have to maintain.
                    • wellpast 1737 days ago
                      You're looking at this Python code and blaming its deficiencies on a lack of types. Its deficiencies are not a lack of types, but a lack of separation of stable vs volatile code, a lack of contract specification and documentation, and a lack of refactoring and versioning idioms (see https://www.youtube.com/watch?v=oyLBGkS5ICk).

                      Why does the blame always go to lack of types?

                      Whether or not you have types you're still going to suffer if you don't have the other things I mention. And if you have the other things I mention then static typing become much more of a nuisance. Ergo...

            • taneq 1738 days ago
              Yeah, it's much better when you have to change something, and you miss an important step in the guts of the code somewhere, and the compiler DOESN'T warn you, and you continue merrily on your way, unaware that six months from now when that code path executes in the wild, it'll take your company offline for six hours while someone (probably you) sweats their box off trying to figure out what the hell just happened.
              • wellpast 1738 days ago
                This scares the hell out of me when I hear people say stuff like this, because if you think a type system is going to save you from your outages then that means you're not aware of a whole other more comprehensive, efficient and dynamic set of programming idioms and tools that will actually save you.

                Here's how you know what I'm saying is true. Because you can go across the landscape of systems in the real world and see both crappy systems falling over and rock-solid systems humming along happily -- and the differentiation between these two categories is NOT a type system. It's one thing: it's who wrote it and what was their level of experience and what was their value system.

                • taneq 1738 days ago
                  It's not going to automatically save me any more than a motorbike helmet will save me if I crash my bike. That's not going to stop me from wearing one, though.

                  I actually agree with your second paragraph, and I like to think the systems I build fall in the latter category, but I'm still not too proud to take whatever help the compiler can give me.

                  • wellpast 1737 days ago
                    You would stop wearing a bike helmet if:

                       - It made you top heavy and more likely to crash
   - It cut your speed in half
                    
                    The tacit implication (or elephant in the room) when someone argues for types and the presumed safety they bring is the severe cost they impose. Now they save some costs too, for sure. But the balance that I see is far, far in the direction that they save much less than they cost. (When applied universally as they almost always are when applied.)
        • thegeomaster 1738 days ago
          Check out https://github.com/woutervh-/TypeScript-is. It works as a compile-time transformer which emits the type guard code for a given type and narrows down the type for you. You just call `is<T>` (returns a bool and narrows if true) or `assertType<T>` (narrows or throws an error). We've used it in production with great success.
        • munchbunny 1738 days ago
          It would be really nice to be able to coerce/validate incoming data (JSON I assume) at runtime against an interface definition. However, I feel like something like this goes against TypeScript's general aim of providing low-cost abstractions on top of JS.

          That said, even if I have to write type guards at the edges (which I'd do in JS anyway, just because you aren't using types doesn't mean you don't need to enforce schemas), I think being able to rely on compiler typechecking inside the codebase boundary is really nice for making module interactions more predictable.

      • wellpast 1738 days ago
        Types alone are severely insufficient documentation. That’s the refutation of the original poster’s claim.
        • dllthomas 1738 days ago
          Types are incomplete documentation. They may or may not be sufficient depending on the context.

          There are some really great things about types as documentation, though:

          1) It's machine-checked, so it's probably not out of date or mistaken, which can be worse than no documentation.

          2) The interaction with my code is checked, so if I misunderstood the compiler often has my back.

          3) With inference, I can often ask for docs for code I just wrote.

          None of this is to say that there aren't usually other forms of documentation that should be produced.

        • perfmode 1738 days ago
          What are all of the things this function could possibly do to its arguments?

          (a -> b -> c) -> b -> a -> c

          • quickthrower2 1738 days ago
            What about String -> String -> String?
            • dodobirdlord 1738 days ago
              Unless what you're doing is content-agnostic String manipulation (i.e. a function to concatenate arbitrary Strings), this type signature wouldn't make sense. And since there are only a small number of meaningful content-agnostic string manipulations, I've already got a solid guess that this is probably doing concatenation! It could also possibly be doing substitution, or a few other things, but not much before it becomes a gaping code flaw that the types in use are still 'String'. Concatenating a person's name with a country name doesn't make sense. Nor does searching for a street address in a month. A type signature like String -> String -> String should only show up down in the implementation details of helper functions, or on the signatures of content-agnostic library functions.
              • mpcjanssen 1737 days ago
                This could equally well be a function which takes a string and a seed and returns the hash.

                The more concrete the type signatures become the less they become documenting.

                A signature of a -> a defines the id function because the type is so generic. A function of String -> String can do pretty much anything.

                • dodobirdlord 1737 days ago
                  But that's exactly my point. Why would something that is a randomization seed have type "String"? Why would a hash have type "String"? Why not type Seed and type Hash?

                  A type signature of Seed -> String -> Hash is pretty enlightening!
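                  A minimal sketch of that suggestion (the newtypes follow the comment above; the hash itself is a toy polynomial hash, purely for illustration, not a real hash function):

```haskell
import Data.Char (ord)
import Data.List (foldl')

-- Newtype wrappers make the signature self-documenting:
-- Seed -> String -> Hash says far more than String -> String -> String.
newtype Seed = Seed Int
newtype Hash = Hash Int deriving (Eq, Show)

hashWithSeed :: Seed -> String -> Hash
hashWithSeed (Seed seed) = Hash . foldl' step seed
  where
    step acc c = acc * 31 + ord c  -- toy Java-style rolling hash

main :: IO ()
main = print (hashWithSeed (Seed 7) "abc")
```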

          • wellpast 1738 days ago
            Infinite.
            • quickthrower2 1738 days ago
              3 things:

              1. Call the function
              2. error
              3. infinite recursion

              Technically correct - there are infinite possible error messages.

              • im3w1l 1738 days ago
                It's my understanding that we have to involve a function call as a fallback, as a function call is the only way we have of producing a value of type c in the general case.

                But we could potentially have some list of exceptions.

                So one valid implementation would be: if c is String we always return "hello world"; else if a, b and c are integers we add the two ints; else if a and b are the same type we call the function with the argument order swapped. And finally, if none of these are the case, we call the function with the arguments as given.

                • danharaj 1738 days ago
                  In a type system like Haskell's, a function with that signature has to be unconditionally polymorphic in its variable types, so you can't have a list of exceptions.
                  • dllthomas 1737 days ago
                    If you do want to be able to list exceptions in Haskell, that's what Typeable gets you. But then the type signature is different.
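                    A hypothetical sketch of what that looks like (the function name is made up; note the extra Typeable constraints, which is why the signature is different):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Typeable (Typeable, cast)

-- With Typeable in scope, the implementation can special-case
-- concrete types, e.g. return "hello world" whenever c is String.
applyOrHello :: forall a b c. (Typeable a, Typeable b, Typeable c)
             => (a -> b -> c) -> b -> a -> c
applyOrHello f b a =
  case cast "hello world" :: Maybe c of
    Just s  -> s       -- c turned out to be String: the "exception" fires
    Nothing -> f a b   -- otherwise, fall back to applying f

main :: IO ()
main = do
  print (applyOrHello (\x y -> x + y :: Int) 1 2)       -- 3
  putStrLn (applyOrHello (\x _ -> x :: String) "b" "a") -- hello world
```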
                • Twisol 1738 days ago
                  Types don’t exist at runtime in the way you’re describing. It isn’t possible to express a program that asks what type a value has.

                  Dynamic types are sometimes called “tags” to distinguish the case in which this information _does_ exist at runtime.

              • wellpast 1738 days ago
                You need to rethink your #1.

                How about calling the function derivative?

                Sounds like you could have used real documentation.

                • dllthomas 1737 days ago
                  > How about calling the function derivative?

                  How would you write this with no constraints on the types involved?

                  • wellpast 1737 days ago
                    I hope you're not suggesting that it's impossible to write this function without such constraints. If so, you should jettison your PL as fast as you can.

                    In any case this is precisely the trouble I believe the type enthusiasts get into. You can keep pulling this yarn, refining your types to match the world as it exists today; meanwhile the dynamic typers forgot about this function days ago and are eating your lunch in productivity.

                    • dllthomas 1737 days ago
                      > I hope you're not suggesting that it's impossible to write this function without such constraints.

                      To be concrete about it, that's exactly what I'm suggesting.

                      Please demonstrate, in any programming language, a derivative function that is parametrically polymorphic - that is, the same machine code will handle absolutely any type that's thrown at it - floating point, integral, string, function, map, set, graph, etc. I do not believe it's possible, unless you have some very different notion of derivative in mind. I would be delighted to learn it is possible, and mildly pleased to learn of a reasonable use of the phrase "derivative function" that's other than what I'm thinking and which makes sense of what you've written above.

                    • dllthomas 1737 days ago
                      Or... other people working in other languages you (obviously) don't understand have their own effective ways of working which you also don't understand.

                      It's a lesson that could be better understood by parts of the Haskell community, to be sure, but it's not always clear how useful a tool can be when you don't really know how to use it.

                      And here, you have repeatedly shown that you really don't know what you're talking about - take the opportunity to learn.

          • wellpast 1738 days ago
            f (+) 1 2 = “cookie”

            For one.

            Now tell me: are types the best documentation?

            • TheAsprngHacker 1738 days ago
              Maybe I'm misunderstanding what you're saying, but how on Earth would a function

                  f :: (a -> b -> c) -> b -> a -> c
              
              result in

                  f (+) 1 2
              
              evaluating to

                  "cookie"
              
              ? For one, the types (obviously, to me at least) don't work out.
              • wellpast 1738 days ago
                Sounds like you could use some real documentation.
            • dllthomas 1738 days ago
              The types tell me that that's not the function in question. Type variables in Haskell mean the caller of the function gets to pick the types. That means the implementation doesn't get to - and the compiler makes sure of it.

              You're pattern matching on a number and returning a string. The function might be handed a function and expected to return a file handle for all you know from inside the function.

            • nilkn 1738 days ago
              a, b, and c are generic types. You need to supply an implementation of f that doesn't know anything about a, b, or c. How many such implementations are there? Can you supply at least two, both of which finish running in a finite amount of time and neither of which produces any error?
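              For what it's worth, parametricity answers this: setting aside non-termination and errors, the signature admits essentially one implementation, Haskell's flip. A minimal sketch:

```haskell
-- The implementation knows nothing about a, b, or c, so the only
-- way to produce a c is to apply the given function to the two
-- arguments in swapped order.
myFlip :: (a -> b -> c) -> b -> a -> c
myFlip f b a = f a b

main :: IO ()
main = print (myFlip (-) 1 10)  -- 10 - 1 = 9
```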