Elm and Haskell with Flavio Corpa

Flavio Corpa joins us to discuss the similarities and differences between Elm and Haskell, and how learning Haskell can help you write better Elm code.
May 8, 2023


Hello Jeroen.
Hello Dillon.
You know, I've always wanted to map Haskell concepts to Elm and well I guess it would be f-mapping then, wouldn't it?
Well, it really depends on whether you have a list of things that you want to learn or if you have other things you want to learn.
I guess you could say maybe this episode would be like a type of class, you could say.
Well, today we've got Flavio Corpa joining us. Flavio, thanks so much for coming on the show.
Hello, my pleasure.
So Flavio, you wrote a really nice set of blog posts about Haskell for Elm and you named the series Giving Names to Stuff.
And I like that name and if we could just start with why did you title the series Giving Names to Stuff?
It seems like there's something important to you that you wanted to convey to Elm developers with your Haskell background about names and how names relate to type classes.
So why do you think names are important and why are they important for your blog series?
That's a very interesting question.
I always wanted for someone to ask me about the name of the series.
And I was also waiting for your puns at the beginning of the episode.
I keep forgetting that he's going to do some until he says them, and I'm like, oh yeah, I didn't prepare anything.
Get your cringe phrase ready.
It's a pretty recent thing. He didn't do this in the beginning. And now he just can't stop.
So it's a new thing or it's a modern thing in the podcast then, the puns.
I'd say so.
So related to Giving Names to Stuff, I first learned Haskell and it took me a while because it's quite a big language to learn and a bit wide.
And then at some point I had some front-end experience and JavaScript experience prior to that.
At some point I realized that I wanted to try out Elm and that I already knew Elm because the syntax is so similar to Haskell.
And the Elm compiler is made in Haskell.
So when you learn Haskell, they sort of teach you the other way around.
They teach you the names first and then the patterns that you use.
But in Elm, it's amazing because you don't really need to know the names of things.
You just use them and you gain an intuition for them.
And you see them in practice. How are they useful? How do they make you a better programmer, or why would you need such patterns?
So while trying to explain a little bit of Haskell to my Elm friends, I just noticed you just need to know the names for stuff you already use.
That's why I chose the name, basically.
I love that.
So did you think it was important for Elm developers to know the names for these things if they want to try Haskell?
Or do you think that there's also value to giving names to these things for an Elm developer, even just working with Elm?
Yeah, that's a good question, because what happened to me is that, before even learning Haskell, as I mentioned, I was a JavaScript developer.
So I tried to learn ReasonML, this new language that Facebook was creating.
You know, the creator of React, Jordan Walke.
And then I started to learn some Reason and underneath it was OCaml and I did not know about it.
And I knew that or I learned that Reason had something called functors.
But when I tried to learn what these functors were, they were basically different from the Haskell functors or the JavaScript functors I already had read about.
So it was really, really confusing to me.
This name collision in my head was making me feel anxious, so to speak.
So I'd say that learning the names or at least getting familiar with the semantics or what they try to convey makes sense.
Even if you want to jump to F#, OCaml, or whatever other functional language, it is useful in itself.
But the same things are sometimes called different names, unfortunately, in different languages, and so it's a bit confusing, all of it.
And naming these things is so hard because I think what I've started to realize is that our initial inclination may be...
I imagine many people initially learn an imperative or an object-oriented language.
And we tend to think of concepts in terms of what you do with them.
Like, what's the inheritance chain?
Or this is like a list or this is a sequence or this is a thing that you await that can be done sequentially.
And it's like, well, what if we weren't talking about the kind of data something was or the kind of thing something represented,
but rather how you transform data in something, right?
So actually, you know, we can talk about a list and a task and a JSON decoder in the same way.
What do those things have in common?
Well, you can manipulate data in a similar way.
How do you put a name to that idea?
It's very difficult and it's very abstract, but it's a very productive way to work with things.
And once you gain that intuition, you can be really productive, it feels like.
But it's hard to develop that intuition.
And it's an interesting question whether somebody develops that intuition faster when they're given the names first,
learning Haskell, or when they're not given the names and just use these concepts in Elm.
Yeah, definitely.
I remember while doing or learning React in 2013, 2014,
I was coming from AngularJS.
So, for example, I just wanted to do a very simple thing.
I wanted to iterate on a list.
And you used to have these ngFor, ngRepeat directives, you know, in the code.
I was looking for that in React.
And then obviously I learned about the function map, you know, and it's like, oh, so there is no special directive to do this.
It's just that you use a language construct, so to say.
And then you do what you want with the list.
But this concept, learning it as a JavaScript developer, was so useful.
And then it replicated in all the following languages I learned after that.
And it was really, really practical and useful to get acquainted with that.
Now I really want to know what was different with the Reason functor.
Oh, yeah.
So in OCaml and by extension in Reason,
so basically one file is not one module, like it is in Elm, and in Haskell as well.
You can have multiple modules in one file.
So there is a case in which you want to parametrize a module and call it with a specific generic type and use that module with a different type.
So these parametrizable modules, so to speak, are what they call OCaml functors.
But it's a completely different thing as in what we know in Elm and Haskell as a functor.
So it was kind of, yeah, mind-boggling to learn this.
Gotcha. I can understand the confusion.
Yeah, it was very confusing.
Okay. So, well, let's dive into it.
In your blog series, you wrote about basically three different type classes.
You want to kind of enumerate those for us?
Sure. Well, I started with the easiest one, I thought.
We've been talking about functors for a while now.
But I started with functor because I think it's one of the most intuitive to think about once you understand the concept of mapping.
And we use them every day in Elm in all sorts of code.
As soon as you map any kind of data structure, you know that you are using functors underneath.
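To make that concrete, here is a small Haskell sketch (the function names are just for illustration): `fmap` is the generalized map Flavio is describing, playing the role of Elm's `List.map`, `Maybe.map`, and so on.

```haskell
-- fmap generalizes Elm's List.map, Maybe.map, etc.:
-- anything with a Functor instance can be mapped over
-- with the same function.
doubleAll :: Functor f => f Int -> f Int
doubleAll = fmap (* 2)

listExample :: [Int]
listExample = doubleAll [1, 2, 3]        -- [2, 4, 6]

maybeExample :: Maybe Int
maybeExample = doubleAll (Just 10)       -- Just 20
```

The same `doubleAll` works on lists, Maybes, and any other Functor, which is exactly the intuition Elm builds without naming it.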
So the next thing that you kind of get to or need to understand is that there is something called applicative functors, for when you need map2 or map3, map4, etc.
I read a really nice post by, I think it was Joël, who was also a guest in a previous episode.
It was on this topic.
Exactly. So what happens when you run out of maps, right?
When you don't have enough mapN numbers.
And then you can build it yourself using this andMap thing.
You can basically use the pipe operator and andMap, and chain as many as you want.
And the concept behind these things, the concept that you can apply functors to each other, are called applicative functors.
And it's kind of weird.
The intuition is not as easy to understand as with functors.
But when you learn about them, you start seeing them everywhere.
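As an illustration of the pattern being described (the `User` type and function names are made up): in Haskell, `f <$> a <*> b` chains arbitrarily many arguments, much like an Elm `map2` or an `andMap` pipeline.

```haskell
-- An applicative pipeline over Maybe: User <$> mName <*> mAge
-- corresponds to Elm's Maybe.map2 User mName mAge, but the
-- <*> operator extends to any number of arguments.
data User = User { name :: String, age :: Int }
  deriving (Eq, Show)

buildUser :: Maybe String -> Maybe Int -> Maybe User
buildUser mName mAge = User <$> mName <*> mAge

found :: Maybe User
found = buildUser (Just "Ada") (Just 36)   -- Just (User "Ada" 36)

missing :: Maybe User
missing = buildUser Nothing (Just 36)      -- Nothing
```

If any piece is missing, the whole combined value is Nothing, which is the short-circuiting behavior Elm developers already know from mapN.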
And it's interesting because it's an intermediate step that you need to learn to go to the third step, which is monads.
The thing everyone talks about and there are a thousand blog posts about this scary monad thing.
Except in the Elm community where we basically don't talk about it.
We don't use the M word.
In fact, we'll beep out every use of the word monad in the text.
We will censor it so no one hears the M word anymore.
Keep our PG rating.
Actually, the only people who talk about monads are the ones who try to convey that information from Haskell to Elm.
Yeah, but it's funny because it's also such a common pattern that happens all the time as soon as you want to work with effects.
Monads are everywhere, even if we ignore them.
But when I became a more serious Elm developer, I understood why Evan would choose to just ignore those names.
Because they tend to be confusing.
And you don't need to know the names to use the patterns.
Yeah, I find it confusing. Even the term fmap. Does that mean functor map? Is that why it's called fmap?
Yeah. Since Haskell started as a research language, it didn't even have a way to do I-O at the beginning.
It lived in the world of purity, so you could literally do nothing useful with it, according to its author.
So they started to develop some patterns to call things.
And sometimes things have a weird name.
For example, there's a map function, but it only works for lists.
So then once they generalized the functor pattern, so to say, or the functor type class, they said,
Hey, map is already taken for lists. So what do we do?
Okay, we will just call it fmap. Or something along those lines.
Actually, do you know what the history is here? Is it like, these terms come from mathematics, and they were ported to Haskell?
Or were they mostly found in Haskell or other ancestor languages of it?
And then people say, Oh, it's just mathematics.
Yeah, I can be wrong answering this, but I will give you what I know from my understanding.
There are a few words that we don't want to say in Haskell, just as we don't want to say them in Elm, which are category theory.
You know.
Oh, really?
Do you know about category theory?
Yes. I mean, I know of it.
I don't know about it.
What do you mean Haskell developers don't want to talk about that?
That's, isn't that just the whole thing around functors and monads and all those things?
Exactly. Yeah, but we try to speak the least possible about category theory, because obviously it influences all of these crazy names.
But sometimes, for example, if you talk to a mathematician who happens to be an expert in category theory, they can tell you that the functor defined in Haskell or the monad defined in Haskell is not actually 100% right.
So that's why I don't want to get too deep into that forest.
So Haskell is not pure enough for a category theory purist. I see.
And Elm is not pure enough for a Haskell purist. Okay, I see.
And OCaml was right the whole time.
Yeah, actually, I think there's an argument to be made for the OCaml functor. It's just that I don't know the argument.
But someone with more experience than me will probably explain why it's called a functor and what it has to do with a category theory functor.
Yeah, I'm guessing they didn't come up with that word out of nowhere. Like, oh, let's find a word for this. Functor sounds nice.
I think only one person did that ever. And then that's the functor that we hear about everywhere.
Yeah, I think while researching or while developing Haskell and learning about all these patterns, obviously, you know that FP has a solid foundation in math.
So you turn to it to look up names, right? And they said, hey, there is this really abstract branch of mathematics called category theory that happens to have all those fancy and cool names we're looking for.
Why not? We can start using them right away. That's my guess. My wild guess.
Yeah, I mean, there certainly are useful ideas here. It's just that it feels like there's such a large barrier to entry.
But, you know, I have thought, like, sometimes you'll see an Elm API that maybe has map2 and map3, but it doesn't expose an andMap, which you can derive, as Joël talks about in his post Running out of Maps, from those building blocks.
But we sometimes don't have the conventions for which of these things should go together. Like if something is applicative, what building blocks should we have?
You know, the idea has crossed my mind: what if we had, like, an elm-review package, some community standard, and came up with a set of conventions?
And we said, okay, well, like what? Because really, sometimes even just the terms that we use for these ideas, like, is it called concatMap? Or is it called andThen, or whatever?
Exactly. And you find instances of these different terms in different packages. And you have to look at the type signatures and know this is sort of the same type of thing that I can do, or have a little bit of a background in Haskell or category theory or whatever, and kind of squint your eyes at it.
And you see it's the same thing. But wouldn't it be interesting if we had a community standard and an Elm review package that said, well, what's the name for this group of things? What are the standard function names and signatures?
I'm not sure about elm-review. Would elm-review be able to look at the type signatures of something in order to do that?
Yeah, you can do that.
Okay.
You can look at the type signatures as long as they're written in the code.
Yeah. So then it should be possible to say, like, this is a mappable thing. And then it could give you an error if it doesn't have something called map that has the appropriate signature, right?
Yeah. There's also the opposite thing, where, for instance, in elm-review's Simplify rule, I try to simplify code that uses maps. But if I don't know what the type is, then I can't do the simplification. Like if I see List.map used in a certain way, like on an empty list, then I can know, okay, well, this is useless, I can just simplify it to an empty list.
But if I have a map function whose type I don't know, well, I have no clue whether it will, in fact, respect the laws of functors as they do in Haskell. So we have a name that is pretty standard, but we don't have all the guarantees that go with it.
Like potentially, it's just a map that creates something, or maybe it's a map that has the same signature, but it does something entirely different. And that's a bit annoying. We don't have that information of: does this follow those rules, those laws? Because with it we could do a few things that would be quite nicer, I think.
But that is the case in Haskell, right? So some type says, well, I'm implementing or deriving, or whatever the term is, a functor. And therefore, does this map now have to prove that it follows some rules? Or does it not have to?
Yeah, you can check if it follows some rules. Well, one of the most used libraries is QuickCheck, for property-based testing. And it comes with checkers, for example, for the functor laws or the applicative laws or the monad laws, and you can check.
But if you use, for example, the deriving mechanisms from GHC, from the main Haskell compiler, whatever the GHC derives as a functor instance, you can be pretty sure that it's going to be a correct functor.
Unless there's more than one way to implement something, which happens with some other instances, like with monoid instances, for example. For certain operations, there can be more than one valid implementation, and then you need to disambiguate. But it's not so common with functors, I would say.
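For illustration, the two functor laws under discussion can be spot-checked by hand on sample values; a real test suite would use QuickCheck properties instead, and the helper names here are made up.

```haskell
-- The identity law: fmap id == id.
identityLaw :: (Functor f, Eq (f a)) => f a -> Bool
identityLaw x = fmap id x == x

-- The composition law: fmap (g . h) == fmap g . fmap h.
compositionLaw :: (Functor f, Eq (f c)) => (b -> c) -> (a -> b) -> f a -> Bool
compositionLaw g h x = fmap (g . h) x == (fmap g . fmap h) x

-- A few spot checks on lists and Maybes.
lawsHold :: Bool
lawsHold =
  identityLaw [1, 2, 3 :: Int]
    && identityLaw (Just 'x')
    && compositionLaw (+ 1) (* 2) [1, 2, 3 :: Int]
    && compositionLaw show (+ 1) (Just (5 :: Int))
```

This is the kind of guarantee Jeroen is pointing at: in Elm, a function named map carries no such obligation, while a derived Haskell Functor instance is expected to satisfy these laws.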
Yeah. So one thing that I was wondering about: you made this suite of blog posts, Haskell for Elm developers. Why specifically Elm developers? Is it because all of your colleagues are Elm developers? Or is it because, like, we're doing something wrong? Or, well, we are the closest ones to be able to reach for this information? Like JavaScript developers, they're not close enough, but Elm developers, they just have to know the names.
And it's good.
Yeah, exactly. I think knowing Elm makes you closer to Haskell than knowing any other language. Because you could consider to some point Elm to be some kind of subset of the Haskell language, like a very small subset. And then I'm a little lazy as a functional engineer, you know, and then I was wondering, hey, I really want to teach Haskell. What's the best audience to teach Haskell to?
Because they have very little to learn, actually, if they want to learn Haskell. And it also happened that, yes, my previous job was Elm. And I was finding more and more Elm developers on Twitter, and especially on Slack, interested in the Haskell language. I was, for example, part of the Haskell channel in the Slack. And there were many questions. So I could see there was some kind of community or a pool of people that were trying to learn Haskell. And I thought, okay, I'm going to try to teach them.
And I thought, hey, I might have something to say in this domain that might be useful for someone else. And it got me excited, because I love Haskell. And I love Elm. And I said, hey, if whatever I struggled to learn all those years is useful for somebody else who wants to have a head start on Haskell, I'm happy to write those, you know.
Do you think with hindsight, that you would have preferred learning Elm before Haskell? Would that have made it easier? Yeah.
Yeah, for sure. Because it's like starting small, and then using stuff and getting proficient with things, and then making the next jump. But in retrospect, I cannot regret my decision too much, because I'm hearing from some Elm developers that the gap is actually bigger than I expected when they try to actually learn Haskell.
So going the hardcore path first, of trying to learn Haskell, paid dividends in the end, you know. It was worth it, because I learned Haskell, I learned Elm, and I also got to understand OCaml and Reason and many other functional languages. F#, for example, I was trying F# for some time.
And I could feel that the hardest part of the language I already understood, which was incredible for me. So it was time well spent, definitely. Gotcha.
I've always found Haskell to be intimidating, personally. And I mean, I think some of it is like, some of the names and operators seem cryptic to me. And it just is like a lot of concepts that are hard to make sense out of. But I'd be curious to hear your thoughts on like, for an Elm developer, what types of things might they do with Haskell?
And if they, you know, if they paid that cost of learning Haskell, what are the cool things that you can build with Haskell?
Except an academic career.
Exactly. So, you know, now there's all this fuss going on about probably having Elm in the backend. We don't know yet, right? But there's some expectation. But before that, we don't actually know what's going to happen. Hopefully, we will have some news regarding that.
But if you wanted to experience this feeling of you being in control of your code, of the compiler assisting you, of checking every possible case of your code, etc. If you wanted to have the same experience you have with Elm, but in the backend, or for something, for a more general purpose programming language that could pretty much do anything, then you needed to look for alternatives.
For example, Haskell is one of them. And yeah, even though the learning curve is a little bit different, you could do what NoRedInk does, for example: use a small subset of Haskell as well, and have packages and imports look like Elm code. Their Haskell pretty much resembles Elm. So that's a very interesting approach as well.
Is it called like cherry prelude? Is that what their helper is called that gives you like an Elm-like core?
I think I've looked at nri-prelude, which is what they use, a Haskell package that turns your Haskell prelude into an Elm-ish kind of looking prelude. So that's interesting, but maybe they do something else as well.
Yeah, Terezka has a package called cherry-core, which is a set of basic functions, but maybe they are influenced by each other. I think at some point they may have been using Terezka's cherry-core.
Interesting. First time I hear about it.
So one of the things that always seemed really cool to me, but also a little bit cryptic is the do notation. So maybe we should first talk a little bit about monads. We have talked about them on previous episodes, but do you want to do your best monad elevator pitch?
In 10 words or less.
Wow, that's gonna be difficult. Well, basically monads allow you to conditionally chain computations. So when you need something like an fmap or a mapping function, but you need to have more power over what to do next, you end up having to use monads.
That's why Elm has something called andThen, which reads quite nicely and is easy to understand. But surprisingly, for the list data type, it's called concatMap.
So Elm was not consistent with that andThen name, just for lists, because it's easier to understand that you might want to flat map or concatMap a list. But it's the only case in which it is not consistent.
Yeah, Elm does tend to use more domain terms for naming things when possible. But then for certain things like mapping, sometimes you can have a more domain-like term, but usually it's just map. In the case of map, it's a pretty clear case where you just say, yeah, you're just mapping this data.
Let's just learn the concept that you can map things. But yeah, in the case of, like, andThen-ing a list, that could be confusing. I will say, since our conversation with Joël, where we talked about some of these category theory ideas, and I expressed my difficulty with reasoning about andThen-ing a Maybe type, I now feel more confident doing that and more comfortable. It feels more intuitive.
So having the discussion about some of these things has helped me a little bit with building up an intuition for this a little bit more. So I think there's definitely value to it.
So yeah, in Elm, you can andThen a JSON decoder, and you then have access to the actual decoded value in the chain so far and can do something based on that. You can succeed or fail or run another JSON decoder conditionally based on that. Or with a task, you can use Task.andThen, and then now you have, say, the time.
So Elm developers will be familiar with that general concept of andThen-ing something.
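As a sketch of the correspondence (function names here are illustrative): Elm's `andThen` is Haskell's bind operator, `>>=`, where each step can inspect the previous result and short-circuit the rest of the chain.

```haskell
-- Elm's Maybe.andThen is Haskell's (>>=): the next step gets
-- the previous result and decides what to do with it.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Divide 100 by y, then divide that result by z; any failure
-- short-circuits the whole chain, just like chained andThens in Elm.
chained :: Int -> Int -> Maybe Int
chained y z = safeDiv 100 y >>= \q -> safeDiv q z
```

So `chained 5 2` succeeds, while `chained 0 2` fails at the first step and never runs the second, exactly the conditional chaining Flavio describes.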
Yeah, and interestingly, it comes up a lot while trying to use effects, right? Like side effects. For example, when Elm developers use the Task data type, they probably are going to eventually end up using andThen, Task.andThen, and that happens very frequently as well.
I'll want to go back to talking about effects afterwards, but I think Dillon wants to continue on the do notation.
Sure, no problem.
Well, I wonder if you can demystify that for me a little bit because I've always whenever I see a little Haskell snippet and it uses the do notation, I'm always thinking like that seems very helpful and interesting, but I don't quite understand what does it do, you might say. And I wonder if you could demystify that for me.
Yeah, without showing code is kind of difficult, but I will try my best.
Basically, when you want to andThen a lot of things, and this happens also in some Elm code, right? For example, I'm looking at the implementation of map5 for Task. It has, like, Task x a, Task x b, a bunch of tasks. And when you want to chain those with andThen, you end up having lots of nesting. And what do you call the indentation?
Indentation hell.
Indentation, yeah. You have like a callback hell, like an indentation hell.
Right, right, totally.
And you cannot skip it. And I remember, for example, Martin Janiczek opening an elm-format issue saying, hey, is there some way to prevent elm-format from doing this indentation thing? And it's not in the scope of elm-format to fix that, because it's actually code that needs to be indented semantically, right?
So the nice thing about do notation is that it's basically just syntactic sugar. Instead of having to indent every single lambda expression after the bind operator, it gives you a way of opening a block of code. And F# has this as well.
It is literally like you open a block of code in Haskell, it starts with do, and then you start to bind things to names that are going to be the result of applying that bind operator and the lambda expression.
So instead of all your code going deeper and deeper into indentation, it allows you to have flatter, vertically aligned code that basically does the same underneath. The Haskell compiler will desugar this into all the bind and lambda expression function calls.
But it just looks more natural or easier for the developer to look at the code with do notation, sometimes, because apparently in the Haskell community some people are not big fans of do notation.
That's a completely different topic.
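To illustrate the desugaring being described (the example names are made up): here is the same Maybe pipeline written once with explicit binds and nested lambdas, and once with do notation; GHC desugars the second form into the first.

```haskell
-- Explicit binds: each step indents under the previous lambda,
-- the "indentation hell" discussed above.
halveTwiceBind :: Int -> Maybe Int
halveTwiceBind n =
  halve n >>= \a ->
    halve a >>= \b ->
      Just b
  where
    halve x = if even x then Just (x `div` 2) else Nothing

-- Do notation: the same chain, flat and vertically aligned.
halveTwiceDo :: Int -> Maybe Int
halveTwiceDo n = do
  a <- halve n
  b <- halve a
  pure b
  where
    halve x = if even x then Just (x `div` 2) else Nothing
```

Both functions behave identically; only the surface syntax differs.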
So is the do notation something that you would like to see in Elm as well? Or do you think it's actually better not to have it?
Well, that's a good question. I think it would not hurt to have it, because it's like a syntactic choice. But you know that Elm is so simple that it doesn't often offer you various ways to do the same thing.
It prefers one single approved way of doing things. And this makes things simpler in the end. But people like, for example, Martin Janiczek, building this elm-in-elm compiler and having all these nested lambda expressions, these people would argue that it might be useful.
But being an Elm developer myself, I've only come across very few occasions where I'd say, hey, I would really love to have do notation right now.
It's like you don't often nest your code that much. I know there's a case to be made in favor of people really needing this. So I think it wouldn't hurt. But I understand why it's not included in the language.
As someone who has to analyze Elm source code, the fewer options available, the nicer it is for me.
So exactly. The less work.
So I will actively give Evan money to not implement a new feature like that.
Some people would say a happy medium would be a formatter that allows you to write a chain of andThen lambdas without the indentation increasing every time you nest a lambda.
But yeah, the do notation gives you a way to sequence things, if you're comfortable using the term sequence. Monads don't necessarily always mean a sequence, because they could be a list or whatever. But if it were tasks, you're basically saying task A returns this value a, task B returns this value b, task C returns this value c.
So given the return values for each of these, do task A, then give me the return value of it. Do task B, then give me the return value of it. And then you can combine all of those after the do.
So you're basically just grabbing a return value from a sequence of andThen-able things. And then you can combine them in an expression at the end of that.
Now, one thing that does stick out to me, and actually you mentioned the implementation of Task's map5.
When I think of map5 for tasks, I actually think, well, if it's map5, wouldn't it be nice if I didn't have to andThen each of them?
Wouldn't it be nice if I could do it applicatively? Because with an applicative chain, like andMap, you think parallel, because you can do a sequence of things and then combine them at the end.
Whereas with andThen, you get the value and then you can continue with another value of the same type. So if you use Task.andThen, you need to resolve that task before you can know how to continue.
Whereas in an applicative pipeline, you don't need that. So is that a point of controversy with do, or a downside of do?
Yeah, that's a very good point, because in Haskell there is a language extension called ApplicativeDo.
So just as you said, if you toggle on this extension, you can use do notation with applicatives, without needing monads.
And I know lots of people, when they do not need the sequencing aspect of it, like they don't need to make sure that one thing happens after the other,
will just turn on ApplicativeDo and have do blocks with just applicatives, which is also very useful.
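A minimal sketch of what the extension enables (the example is illustrative): with ApplicativeDo, a do block whose statements don't depend on each other only needs an Applicative constraint, because GHC desugars it to `<$>` and `<*>` instead of `>>=`.

```haskell
{-# LANGUAGE ApplicativeDo #-}

-- The two statements below are independent: b never mentions a.
-- With ApplicativeDo enabled, GHC desugars this do block to
-- (,) <$> fa <*> fb, so Applicative f suffices; without the
-- extension, do notation would demand Monad f.
pair :: Applicative f => f a -> f b -> f (a, b)
pair fa fb = do
  a <- fa
  b <- fb
  pure (a, b)
```

This is exactly the map5-style situation Dillon raises: independent steps that only need to be combined at the end.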
Hmm, interesting. Okay.
Kind of feels like a cheat, like, yes, I can't do this thing in Haskell. Well, you can just enable this extension.
Exactly. That's totally how doing Haskell feels: you want to do all sorts of fancy or tricky type things.
You ask someone around, and they tell you, oh, actually, you know, you can do that with that syntax.
You think it should be doable, but you need to enable this and this language extension and then it works.
It happens really often.
Now we're back at the JavaScript Babel days, where everybody has their own JavaScript syntax.
Well, this is actually something I was wondering about.
Are all the language extensions a problem for Haskell? Is it something that introduces a lot of complexity, and that people try to reduce their usage of?
Or is it like, no, everything enable as many as you can.
You will thank me later.
Yeah, this is a really controversial topic because there are hundreds of extensions and some of them are widely used.
And most Haskell developers are using the same, let's say, 30 to 40 extensions.
So interestingly, coming from being a JS developer: Haskell is going the JavaScript way now, in the sense that they're gathering a committee to decide on a standard per year.
For example, the committee gathered and came up with a default language, rather, GHC2021,
that enables this list of like 20 extensions.
For all files?
Yes, for all files.
And you can enable this in your cabal file, which is like your package.json file in Haskell, and then you can basically remove all these language extension pragmas from your files.
Because now you're using that version of Haskell, so to speak, with all those language extensions enabled by default.
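As an illustration (these two extensions are chosen as examples that are part of the GHC2021 set; the module contents are made up): per-file pragmas like these become redundant once `default-language: GHC2021` is set in the cabal file.

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE ScopedTypeVariables #-}

-- Before GHC2021, extensions were switched on per file with
-- pragmas like the two above; with default-language: GHC2021
-- in the .cabal file, both are on by default and the pragmas
-- can be dropped.
data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Functor, Show, Eq)

treeSum :: Tree Int -> Int
treeSum Leaf = 0
treeSum (Node l x r) = treeSum l + x + treeSum r
```

The derived Functor instance is exactly the kind of compiler-generated, law-abiding instance Flavio mentioned earlier.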
But it is an issue because it adds complexity and some of them are a bit frowned upon if you use them.
Because, for example, there is one extension called AllowAmbiguousTypes, which sometimes is not what you want.
It doesn't sound great.
Yeah, it doesn't sound great.
It's just for some crazy compiler logic. People who use it know what they're doing, but I would not know what I was doing by enabling that extension.
And all these language features, is that coming from the academia world?
So, from what I understand, Haskell was developed in academia, where people were trying out a lot of things like how would the language work if we did this, if this feature was in there, and all those kinds of experiments.
And so Haskell is just, by default, a language that is supposed to be extensible so that researchers can try out things.
Is that correct?
Yeah, this is what is happening. It started in academia, but then it made its way to industry.
And now it's a language that has a really difficult path ahead because it needs to accommodate both communities.
So it's still used in academia, for example, it's used for researching algebraic effects and dependent types and really crazy things like the next generation of programming languages.
But also there are companies like mine, Scrive, which is using it to digitally sign contracts, and it needs to work and it needs to be 100% correct.
So you need to accommodate both audiences in the same language.
But it's a nice challenge.
So we talked about the I-O monad briefly, we touched on that. Could you tell us a little bit about the role of I-O in Haskell and what that is?
So you mentioned that originally the Haskell language, you couldn't really do anything in the real world because it was just a pure thing.
And then I-O changed that. So what does I-O introduce that lets you do things in the real world?
Well, I-O is like a very special monad and it's used for input, output and performing side effects, basically.
So once your code is in the I-O monad, you know that things can go wrong.
It's like when you see the Cmd part of your Elm application: it's that special hole in the language that allows you to do things that can crash.
And for example, people use it expecting it to be like a pure monad, but suddenly there are exceptions in Haskell and they can throw exceptions within the I-O monad.
And people do not expect a language to be able to throw exceptions. But as soon as you are in the I-O monad realm, many, many things can happen.
So it's interesting. On one side, you cannot write a program that does anything without input/output, so you eventually need to use the I-O monad.
But as soon as you start doing things with it, you know that you need to be extra careful because that's where the whole realm of side effects and unwanted things can happen.
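As a tiny illustration of that boundary (my own example, not from the conversation): the I-O in a type signature is the marker that effects can happen, while a plain function type stays pure:

```haskell
-- A pure function: no IO in the type, so it cannot perform side effects.
shout :: String -> String
shout s = s ++ "!"

-- An effectful action: the IO in the type signals that running it
-- touches the outside world (here, printing to stdout).
main :: IO ()
main = putStrLn (shout "hello")
```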
Right, right. Yeah. So there's this kind of advice with all of these things, whether you're using the terms of monads and functors or not, just using an Elm application.
There's this advice to do as much work as you can in the land of simple pure functions that don't know about these complicated things, because the effectful world is where failures can happen.
I mean, really, it's the same idea as we talked about in a previous episode about Richard's talk, Scaling Elm Applications, the general idea that the more guarantees you can have about a function, the better.
So the fewer things that something can return, and the more narrowly scoped the inputs, the less your brain has to worry about what could go wrong.
But I-O, it can do anything, including catastrophically failing.
Yeah, so you use it with care. But now when you say exceptions, do you mean, of course, the first thing many people will think with exceptions would be something like a JavaScript exception where it's a control flow mechanism.
It changes the way the language works. You can throw and you can catch. What does an exception mean in the context of Haskell and I-O?
Yes, it means exactly the same. So you have the control flow of your application. You can deliberately throw and catch exceptions.
But for example, because of Haskell's legacy, the head function on lists is partial, meaning that unlike in Elm, it doesn't return a Maybe.
If you accidentally call head on an empty list, it will throw an exception. It will crash.
So this is an issue. And Haskell has in the prelude a few partial functions that will crash if you're not careful.
And that's why people also created alternative preludes, standard preludes that don't have these issues, and that provide you, for example, with a non-empty list data type and all sorts of useful stuff.
But yeah, if you come to the language thinking about, oh, this is everything going to be pure and perfect, then you're up for a surprise.
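A small sketch of the difference being described, using the NonEmpty type that ships in base (the helper names here are mine):

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- The Prelude's `head` is partial: `head []` throws an exception at runtime.
-- A total alternative returns a Maybe, just like Elm's List.head:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Or rule the empty case out at the type level with NonEmpty:
firstOf :: NonEmpty a -> a
firstOf = NE.head  -- total: a NonEmpty list always has a first element
```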
Right. I mean, Haskell is the other pure language, right?
And then you find out: no, it's not that pure.
And I guess PureScript is also somewhere in that realm, but I don't know how it works. I know even less of PureScript compared to Haskell.
Yeah, I know very little about PureScript as well. But interestingly, one thing I like, which is quite mind boggling about Haskell compared to Elm, is that Elm is eager and Haskell is lazy.
Ah, I was going to ask about that.
So I think it's the only lazy functional programming language that is actually used and it's not useless, you know.
But the whole concept of it being lazy and having different algorithms for benefiting from this laziness, it's very mind boggling as well, very surprising to me.
Okay. Can you explain what lazy is and what eager is as well? And yeah, what can you do with it or why is it useful?
Wow, it's a very philosophical question and hard to explain, but I will do my best.
So basically in Haskell, you can define functions that are recursive.
For example, you can define the Fibonacci sequence recursively, and you can define things that look like they should never terminate because they depend on each other.
If the compiler tried to evaluate all of that eagerly, it would absolutely crash, but it's lazy in the sense that it doesn't compute things until you explicitly request it to do so.
Elm chose to be eager for some reason; I think technically it was needed for HTML on the front end, but Haskell is not eager.
So for example, you define your lazy Fibonacci sequence and you say take 5 on it, and you're asking it to compute only up to the fifth element.
It will not compute the rest, it will not crash, and it will not recurse until infinity.
So, yeah, from what I understand is you can create infinite lists, like all the integers, for instance, which is impossible in practice.
But because those are actually not computed eagerly, you can do computations on it and just hope that you're not reaching out to infinity because then you have a problem.
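The classic demonstration of this (a standard Haskell idiom, not something spelled out verbatim in the episode) is the self-referential Fibonacci list:

```haskell
-- An infinite list defined in terms of itself. Laziness means only the
-- elements we actually demand are ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 5 fibs)  -- prints [0,1,1,2,3]; the rest is never built
```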
And I'm guessing there's some performance overhead with that as well.
Yes, there are some functions, for example foldl and foldr.
So depending on whether you want to fold from the left or the right.
And in Haskell, foldl is known to have a performance impact, because being lazy, what it will do is just create thunks and thunks and thunks of unevaluated code.
And because it doesn't evaluate them until the very end, it can cause very huge space leaks.
Yes, so a thunk is a delayed computation, right?
And presumably it's a closure, meaning it carries its context with it.
So it prevents those things from being released from memory.
So there are performance implications to that.
Yes, so that's why by default Haskellers tend to use foldr more, surprisingly, when they're just folding lists and they don't care about the associativity.
But there are also stricter versions of the function, named very weirdly, like foldl', which won't cause a space leak.
So they're very bad at naming things.
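A minimal sketch of the distinction (foldl' lives in Data.List; the numbers are arbitrary):

```haskell
import Data.List (foldl')

-- Lazy foldl builds a chain of thunks before evaluating anything:
--   foldl (+) 0 [1,2,3]  ~>  ((0 + 1) + 2) + 3
-- On a large list, that unevaluated chain itself is the space leak.
-- foldl' forces the accumulator at each step, running in constant space.
total :: Integer
total = foldl' (+) 0 [1 .. 1000000]
```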
I'm kind of confused, because you say people tend to not like foldl because it creates all those thunks and performance issues.
So they prefer foldr. But if you do that on an infinite list, then are you trying to get to infinity before you get back to element one?
Or is it just like people won't do that in that case?
Yeah, obviously, foldr on an infinite list will not work.
But for finite lists of things, it definitely will work.
But if you want an explanation, for example, there's a blog post from Lexi Lambda, which is really great.
She was explaining to a colleague why foldl is so dangerous, and it ended up becoming a full blog post.
And it's so interesting the way this laziness thing enables this to happen and why Haskell is the way it is.
So I will probably send you a link afterwards so that you can have a look at it.
It's really interesting.
Yeah, we'll put it in the show notes.
So what are other pitfalls that you can have with laziness and also maybe like cool things that you can do that we can't benefit from in Elm?
So from my experience, for example, when you are learning algorithms and you see like the mathematical implementation of an algorithm that deals with infinite lists, for example, or infinite sequences of things.
If you see the equations, you can pretty much translate those straight from math notation into Haskell and it will work, which is amazing.
But it's not really doable in many other languages.
But yeah, I think the huge or the main con against laziness is the space leaks thing.
But it also enables a whole different way of thinking about algorithms. It's a special case for lazy languages: they need to think of different algorithms to solve the same problems that we're used to solving with eager algorithms, interestingly.
Okay, so if you go from JavaScript to Haskell, you first have to learn about immutability, you have to learn about recursion, and now you also have to think about laziness too.
Okay, yeah, that sounds like a lot of things to learn.
And also, yeah, coming back to the Elm question: while doing Elm, you learn about currying and partial application and higher-order functions.
And these are functional concepts that are great, and you learn them and you use them instantly.
But if you come from JavaScript, you might need to learn all those plus, as you said, the laziness and all the crazy Haskell stuff.
So you have like a huge pile of things to learn when you want to get into Haskell from JavaScript.
The one place in Elm where I can think of this eagerness coming into play is in tests, where you say test, give it the test name as a string, and then, usually with a left pipe to avoid parentheses, you give it a lambda that takes unit.
And that's just a way of creating laziness explicitly, and that's why the API is shaped that way: if your test cases weren't wrapped in a little lambda with a unit argument,
then the test runner wouldn't be able to defer execution of a test, or choose to parallelize tests by selectively running different ones on different threads.
There are optimizations like that.
And also, if you have a test that has an infinite loop in it, with laziness you can control which tests you want to run and say: well, I just want to try running this one. Okay, this one doesn't have an infinite loop. This one is green. Try running these other ones. These ones are red. They halt. They complete.
And oh, I tried running this one and it loops infinitely. You wouldn't be able to do that if it was eager, because you'd run your test suite, they'd all just start running into the infinite loop, and you wouldn't get any feedback.
The one place where I would see this be very useful, at least where I would use it a lot, is when I want to potentially compute something only once. If I have a value that I need to compute that is pretty expensive.
I have the choice in Elm to either compute it once, and then it's done forever, but at the risk of it being computed unnecessarily: if this value is passed to a map, for instance, then it's computed once, even if no one will ever reach for it.
The other option is to compute it every time that I need it, which means that if I never need it, I will never compute it. But if I do need it multiple times, then I will compute it multiple times.
I feel like laziness here really helps because it will only be computed once it's needed. So that feels quite nice.
This is something that I've thought about suggesting for the Elm language as well, but I have not come up with a good enough proposal.
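In Haskell that compute-at-most-once behavior falls out of lazy evaluation plus sharing. A small sketch using Debug.Trace (a debugging utility in base) to make the evaluation visible:

```haskell
import Debug.Trace (trace)

-- `expensive` is a thunk: nothing runs until someone demands the value,
-- and once demanded, the result is shared. The trace message shows the
-- computation happens at most once, however many times we use it.
expensive :: Integer
expensive = trace "computing..." (sum [1 .. 100000])

main :: IO ()
main = do
  print (expensive + 1)  -- forces the thunk; "computing..." appears once
  print (expensive + 2)  -- reuses the shared result; no second message
```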
Interesting. Also, maybe Evan has explained somewhere why Elm is eager by default; I'm sure there are some technical reasons why he chose that, and it would be interesting to read about those.
I'm guessing it's because it compiles to JavaScript, that's a big part. JavaScript is eager as well. So if you want to make it lazy, then you have to add a whole lazifying layer.
Yeah. And a lot of the Elm philosophy is reducing the barrier to entry and being very simple and obvious. And lazy is just, it's like a powerful thing, but a sharp knife that comes with a lot of caveats and things to be careful about.
And that's just kind of counter to the Elm philosophy. So I think it fits in the Elm philosophy to be avoiding that. Because I've heard a lot of people talking about gotchas with lazy.
So it comes with its fair share of caveats and things to be careful about.
Okay. Effects. So you mentioned effects before. I know that there are some effects in Elm as well, like in the implementation of things like Platform.Cmd, maybe of Task as well.
And people who do Haskell seem to be familiar with that. And I have no clue what it is. I do think there was a blog post explaining that a few years ago for Elm, which was really good, but I've forgotten everything that it said.
I should reread it. But yeah, can you explain to me what effects are and what are they for?
Well, actually it's kind of an interesting topic because for me, effects are just like things or computations that happen on, for example, like on an input output or on a side effect kind of world.
But I know that when people refer to effects, they are mostly talking about either effect systems or algebraic effects, which is something that also looks to me really scary and abstract.
And in Haskell, I know there are a few libraries that try to implement this concept of algebraic effects, but to my knowledge, none of them has a hundred percent solved all the edge cases and things to deal with in those effect systems.
But yeah, for example, there is one effects library called Effectful. It's gaining some traction in the Haskell community and the creator happens to work in the same company I do.
So my company is fully switching: instead of using monad transformers, which is a way of trying to compose and combine monads and is the de facto standard in Haskell,
they are switching from this monad transformer stack to Effectful, to try to use this effect system in a simple way, and in a way that the code looks actually readable and usable without understanding all the machinery behind it.
But to be honest, since I haven't delved too much into it, I am not an expert in effects at all. So I feel as lost as you both.
That's fair. That's fair.
Is it the same general idea as it is when we use the term effect in Elm, where it's creating a data type that represents a possible effect without executing it?
Like a command, if you pass it to update or init in Elm, a command will do something, whereas an effect needs to be given a perform function to turn it into a command.
Is it the same concept or is it a different concept?
Yes, I know there is something along those lines, like declaratively dealing with effects and computations. But it's a term that is also used with the same name in different languages for different things. So it's quite confusing as well.
Oh no, OCaml, what have you done again?
So on the topic of effects, back to I-O a little bit. One thing that I've wondered, so I've been implementing some effect-y, I-O-y things in Elm Pages.
There's a backend task similar to an Elm task, but in Elm Pages v3, it's full stack, so you can do things in a backend context.
And so one of the things as I was designing it was I was thinking about Haskell I-O, and I was like, why is there one type variable in I-O?
So in Haskell, you can chain I-O to, I guess, like read from a file, and then given the thing you read from a file, you can map that into something.
But if an error happens, you don't know what type of error data, if I'm understanding correctly, that is erased from the type.
The possible error data is erased in the I-O type, whereas if you look at the Elm core Task type, it's Task error value, and you can mapError.
And Task.perform says it takes a task that never errors, and you can recover from errors in this, you know, I don't want to say monadic style,
because I know it's not technically following the monad laws, but that sort of explicit way of chaining things together and knowing the types based on the type signature.
So what's the deal with that in Haskell? Why does I-O not have an error type represented in its type variables?
So I know that I-O has this type variable because obviously a monad needs to have one.
Right? Because if you think about Maybe or Either, which is called Result in Elm, it needs to have something to wrap around, because it's a wrapper in the end.
That's where all the memes about monads being burritos come from.
But basically, the simplest example of people using the I-O monad to perform side effects, or to do stuff like printing to the command line:
they're using I-O and then open and close parentheses, the unit type.
So it's an I-O action that performs an effect and returns nothing. And you will see this in type signatures all across many Haskell code bases.
Why it doesn't have more than one type variable? I really don't know, but it's an interesting question.
Yeah. Well, so if you can throw an exception, like when you catch an exception, do you know the type of the exception?
Do you catch it? Do you catch it through the monad, the I-O monad interface?
Or is it just a completely separate thing that is not specific to I-O to deal with exception handling?
Yes, it's specific to I-O, and all exceptions are typed.
So when you catch an exception in the I-O realm, you get lots of information from that exception.
You can then use it to your needs.
Interesting. Okay. Well, that's nice that they're typed.
That's always the most frustrating thing for me in TypeScript: you catch an exception and it has completely forgotten what might have happened along the chain.
You know nearly nothing about it, right?
Right, right. Are you forced to acknowledge that something could throw an exception?
And I'm guessing no, anything can just throw an exception and you don't have to annotate it or allow it to fail in any way.
Yeah, that's the thing, the Haskell etiquette, or the standards that we aim for when we do pull request reviews and all that.
Obviously, you should not do things without signifying them in the type signature.
But it's kind of a paradox, or strange, that some prelude functions, for legacy or inherited reasons, do throw exceptions when you don't expect it.
But it's like, okay, there's a few catches.
People know about those functions. They are known to be dangerous.
So either they use them with care or they don't use them at all and they use some safe alternative from a well-established library.
But obviously, yeah, you are doing Haskell for the same reason you would like to do Elm.
You would like to trust 100% in purity and not having to worry about strange stuff.
But for example, there is a placeholder called undefined, which is like the Debug.todo thing in Elm, where you can just give something a name and not implement it.
So that's a keyword called undefined.
Undefined takes a string. Is that it?
No, no. Undefined is a value on itself.
But it's not a function.
Yeah, no, it's not a function.
Undefined is not a function in Haskell.
But there is an undefined, and it's considered to be the bottom value. You know, you're not meant to ship code using it.
But you can signal that something is not implemented by just putting an undefined there in the code base.
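A sketch of that workflow (the record and field names are made up for illustration). Laziness is what makes the placeholder harmless until something actually forces it:

```haskell
-- `undefined :: a` inhabits every type, so it can stand in for any
-- unfinished piece. Forcing it crashes; leaving it unevaluated is fine.
data Report = Report { title :: String, body :: String }

draft :: Report
draft = Report { title = "Q3 numbers", body = undefined }  -- body: TODO

main :: IO ()
main = putStrLn (title draft)  -- fine: `body` is never forced
```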
Gotcha. OK. Yeah, that's useful.
You're not meant to ship code with that, but you could ship code with that.
Exactly. I think so.
Like there's no checking the package registry or something that prevents you from shipping code with undefined.
It's so interesting, because in a way you would tend to think of Haskell as the purist, not Elm.
But in a way like Elm really the thing that Elm is a purist about is purity.
And Haskell is like less of a purist about that, but maybe more of a purist about like category theory.
I mean, I guess to some extent it's because of its roots in academia, right?
And researchers, they try to design a language with language extensions,
where they don't have to think about all the edge cases.
They care mostly about some area of computation, some area of research.
And whether you handle all the edge cases doesn't really matter.
So it's fine to have a few undefineds in a language if you're only trying to answer a specific question where this is not going to happen.
And Elm is really more pragmatic or meant for production.
So it makes sense that we have it there, though, I guess some people would say, well, if you can't just call JavaScript through FFI, then you're not very pragmatic.
But that's a topic.
Yeah. Also, I think there is some kind of check in the Elm package registry that prevents you from publishing code with it.
Right. So it's not happening, basically.
So there are security guards against that.
Yeah. Thankfully.
I don't know when it appeared, though. I'm guessing it was pretty recent.
But you could at least have applications with Debug.crash in 0.18 or 0.17.
Right. Yeah, I remember that, actually.
Yeah. But I don't know if you could have them in packages. That I don't remember.
I was worried about it, because recently I contributed to elm-format.
I added a few lenient parser things.
And then someone told me: hey, you are parsing this fat arrow into this slim arrow lambda thing.
But prior to Elm 0.19, someone could have defined an infix operator as a fat arrow.
And I was so scared about my PR not being correct, because the only version of Elm I've used is just 0.19 onwards.
And it's like, wow. So I really do need to extend now my PR and add a test for this specific case that I wasn't expecting.
I remember the convention was to define the fat arrow as creating a tuple.
So you could have key value pairs.
And instead of doing, you know, Dict.fromList with a literal list,
where you give it a list of tuples, so Dict.fromList with (some key, some value), (some key, some value),
you would write a list of key fat-arrow value, key fat-arrow value.
So it would be like Tuple.pair?
Yep, it was exactly like that.
Wasn't there the dollar sign for the same purpose?
Like, I remember that people wrote model, dollar sign, Cmd.none, or something, quite a lot.
Oh, gosh, I'd forgotten about that.
It's completely left my memory now.
But there was something about getting something with no command or whatever.
Actually, I think, Flavio, in one of your blog posts, you say something like, oh, in Haskell, you can use the comma operator to make a tuple.
Well, I think it was there in Elm, and it was the dollar sign,
which I guess was even more confusing for people who use Haskell, because the dollar sign there is something very different.
I think it's the function composition.
The dollar sign is the left pipe in Elm.
Oh, it's a pipe, yeah.
Yeah, dollar is used a lot.
To prevent the usage of parentheses.
Right, right.
Yeah, and there also used to be the parentheses comma parentheses operator to create a tuple in Elm.
As well?
Or with two commas to create a triple.
That used to be there, and now it's Tuple.pair.
But yeah, so all these things, I mean, it is a very different, like, what makes idiomatic Haskell is so different.
And it definitely seems like in the Haskell ecosystem, people really like being able to express things tersely with operators.
And in Elm, we shy away from that.
Yeah, we don't use those Greek letters as well.
One thing you mentioned in your blog post on functors, Flavio, is function composition, and how any chain of functions, or any sort of chain of maps, can be turned into a single map with the composition operator.
Like the greater-than greater-than operator, or the less-than less-than operator, if you're into that sort of thing.
But some people in Elm I've noticed shy away from the composition operator.
And sometimes I myself find it difficult to reason about things in a pipeline versus a lambda.
So I'll opt for doing a lambda.
What are your thoughts on this sort of like point-free style?
From what I understand, that's very common in Haskell.
Yeah, even within Haskell there is a fight, you know, between people who prefer terse programming, or tacit programming.
Some of them prefer point-free and some of them don't.
But personally, I'm a bit of a point-free freak.
So I need to restrain myself from using too much point-free code.
And I have this issue with Elm as well.
So I reach for the composition operators very often, and only when I get some complaints from my Elm colleagues, saying this is unreadable or this is not maintainable or blah, blah, blah,
then I will change the code to use pipelines, for example.
But yeah, the thing I wanted to say, or to convey in my post, and that I think I failed at a little bit: you cannot talk about functors or applicatives or monads without speaking about laws.
So they must fulfill certain laws.
And one of the very simple laws for functors is that they need to preserve composition.
So if you compose things that do nothing, for example, if you compose with the identity function, nothing should happen.
It's equal to doing basically nothing.
And this little law is what makes it possible to either map n times, or compose n times and then map once, which in JavaScript tends to be a performance optimization.
But it depends on how your language implements these things.
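That composition law is easy to state in code (the example functions here are mine):

```haskell
-- Functor composition law for lists:
--   map g . map f  ==  map (g . f)
-- This is what justifies fusing a chain of maps into a single pass.
double :: Int -> Int
double n = n * 2

describe :: Int -> String
describe n = "n = " ++ show n

twoPasses, onePass :: [Int] -> [String]
twoPasses = map describe . map double  -- traverses the list twice
onePass   = map (describe . double)    -- traverses it once; same result
```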
Right. Right.
So what would be your pitch to somebody to consider using point-free style or something closer to point-free style more often?
Well, it depends; it's a case-by-case scenario.
But, for example, when you're passing mapping functions, or functions that return a Boolean, like a condition, to some other function, sometimes I feel like adding a lambda with an extra argument adds a little bit of noise.
And if you get used to composition... for example, in Elm the composition operators read really nicely, because you can do either left-to-right composition or right-to-left composition.
And when you see the functions, you can clearly read in the code: oh, we are running this function, then the next one, and then the next one.
And I don't really care what the name of the argument of the lambda is, because it's a function that expects a function.
So I find that sometimes it reads a little bit better, but it's totally a personal choice, a matter of usage, and a matter of personal intuition.
So I would not force it on my colleagues when they are learning functional programming, for example.
You need to be very, very soft and not strict about it.
I see. So you think that giving names to parameters in anonymous functions is pointless or point free?
It's pointless.
Only if you're a point freak.
Exactly. All the puns were coming.
Yeah, it depends on the scenario. But yeah, I think sometimes the lambdas add a little bit of noise, and they are a bit pointless.
So then I prefer point free.
I see. That's interesting. Yeah, I think that makes sense. I would say on the other side of that, you know, sometimes I think about like certain designs and idioms resist refactoring.
I remember in my Ruby days, people talked a lot about this with certain language constructs that like using unless instead of if.
And it's just like, well, OK, you can use unless, but you're introducing potentially a lot of churn if that condition changes in the future.
Or if you add an else clause, now are you going to have an unless else? That's just weird.
So some people just say, let's just not use unless because it's prone to churn.
And one thing that I've found is that using the composition operator, greater-than greater-than, is convenient in some cases.
And in some cases I do reach for it.
But I do find myself flipping that to a lambda with an explicitly named argument pretty often, and it creates a little bit of churn in the code when I'm refactoring.
So that would be one argument against it. Also, there's a certain point where you're like: well, I'm having to think pretty hard about what this is doing, when you're chaining enough things together.
And you're like: if this was just a lambda, I wouldn't have to think quite so hard.
So I think like that's one argument I've heard in the opposite direction.
I think there's a reason for both sides.
Maybe the right answer is somewhere in the middle.
Yeah, for me, there's a talk called Point-Free or Die: Tacit Programming, or something like that, that was given at Strange Loop some years ago.
And it is more reasonable than the title makes you think.
But it is basically Haskell code.
And for example, you're used to seeing JavaScript code where people will just open a lambda, say x, and then console.log of x.
Right? So, functions with one point: you see that and you say, oh, the point is pointless.
So the x is pointless, and you can just pass console.log as a function directly.
Right. But where it gets crazier in Haskell is when you do not have one function argument, you have two.
And they're passed straight through, giving a and b to the next function as a and b.
And you cannot do point-free the same way there.
But there is an operator called the blackbird operator, which I just learned about like a month ago, that allows you to pass this two-argument function to the next function.
But the code starts to get really, really messy and really crazy.
And that's why the talk says: you know what, do what's readable.
If tacit programming reads better, sometimes just use it.
If it doesn't, please don't overuse it, basically. But it's a really great talk.
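For the curious, the blackbird is short to define; this sketch shows the pointful and point-free spellings side by side (the function names are mine):

```haskell
-- The "blackbird": compose a one-argument function after a two-argument one.
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(.:) = (.) . (.)

-- Pointful version: both arguments are named.
sumShowPointful :: Int -> Int -> String
sumShowPointful x y = show (x + y)

-- Point-free version: the two arguments flow through (.:) invisibly.
sumShow :: Int -> Int -> String
sumShow = show .: (+)
```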
Right. Yeah. I mean, at the point that you start reaching for flip, then you might consider just naming that argument in a lambda.
Yeah. Yeah.
I miss flip a lot of times in Elm.
Yeah, but it's because of me being a pointless freak.
I guess that's why curry and uncurry and flip were removed from the prelude of Elm, you know.
Right. Now, OK, why don't you go ahead and define those for those of us who haven't used them?
I don't think I ever used any of those, actually, even back in the 0.18 days when they were around.
I'm not sure I did either. But I basically started with 0.19 as well, even though I dabbled a little bit back in 0.16.
Flip is a function that expects a function that takes two arguments, say a and b, and returns a result c.
But sometimes the order of arguments that a certain function expects is b and then a.
And you cannot change the order, because of the way your code works or whatever.
So this flip function is very convenient, because it lets you flip the arguments of the function you're calling.
And it's also a mathematical concept because you can also flip functions mathematically.
But it is kind of magic that this works. If you see the function's implementation, though, it's not magic at all.
You know, right. So flip does that. And then there's uncurry.
You know that all functions are curried in Elm, which means that they all receive just one argument and return a function that also expects one argument, and so on.
And there is a case in which you might want to uncurry a function so that it receives all of its arguments at the same time.
And the only way to do this in Elm is to use uncurry. Now, instead of expecting arguments one by one, the function will just expect a tuple of arguments.
And then it will give you a result.
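Both utilities are one-liners in Haskell's Prelude; a quick sketch of what they do (the example names are mine):

```haskell
-- flip :: (a -> b -> c) -> b -> a -> c
-- It swaps the first two arguments of a curried function.
minusTen :: Int -> Int
minusTen = flip (-) 10  -- minusTen 25 == 25 - 10 == 15

-- uncurry :: (a -> b -> c) -> (a, b) -> c
-- It turns a curried function into one that takes a tuple.
addPair :: (Int, Int) -> Int
addPair = uncurry (+)   -- addPair (3, 4) == 7
```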
Right. Interesting. And mostly useful for chaining things in a point-free style.
Yes, I tend to reach for those basically when you're doing crazy point-free stuff.
You find that you haven't defined your functions in the best order possible, because for composition to work properly, the functions need to be data-last.
Right. And for functions that expect more than one argument, you sometimes didn't choose the argument order that you were expecting.
And you end up flipping lots of functions.
Yeah. Things like Dict.union or Set.diff. Probably more the diff than the union.
Diff is really tricky, because you think that you're doing the same thing, but depending on the order, you might be diffing two completely different things and getting a different result.
Yeah. So with that one I definitely try to avoid using composition, because it's difficult enough already.
Yes, it's true.
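A rough sketch of what flip and uncurry do, written in Haskell (with primed names to avoid clashing with the Prelude versions; Elm 0.18's Basics.flip and Basics.uncurry worked the same way):

```haskell
-- flip' swaps the order of a two-argument function's parameters.
flip' :: (a -> b -> c) -> b -> a -> c
flip' f b a = f a b

-- uncurry' turns a curried two-argument function into one that
-- takes both arguments at once, as a tuple.
uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' f (a, b) = f a b

-- Subtraction is order-sensitive, so flipping it matters:
-- flip' (-) 2 10 == 8   (i.e. 10 - 2)
-- uncurry' (+) (3, 4) == 7
```

Seeing the one-line implementations makes it clear there is no magic involved: flip just reorders the arguments before applying the function.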
So maybe to round off, we've seen that Elm and Haskell are very similar languages that look a lot like each other.
So it's pretty easy to jump from Elm to Haskell, relatively speaking, compared to other languages at least. And Haskell has quite a large learning curve.
So we've talked about category theory. We've talked about laziness. We've talked about language extensions.
Well, we kind of did talk about category theory and you also told us to shut up about that.
But yeah, what other concepts are useful to learn in Haskell? Or for someone who goes from Elm to Haskell, what would they need to learn on top of these?
Or in other words, what could your next blog post be about?
Nice. So there are a couple of things. For example, we talked a little bit, well, I talk in my posts about infix operators.
Elm has some of those, but you cannot user-define them anymore.
So the most famous ones in Haskell are the ones in the Lens library, which is very famous because it spawned a whole set of Lens libraries for many other languages.
A Lens is basically a functional getter and setter that allows you to manipulate data in a deeply nested data structure.
And they come from Haskell, and they are useful in JavaScript, for example, for traversing really huge nested JSONs and doing things at the level you want.
And unfortunately we don't have Lenses per se in Elm, but I know that there are people who build their own Lens libraries, or who are building code generators for Elm that make it possible to use Lenses in Elm.
But it's not a common pattern that is used in the Elm community.
There are a few packages out there.
At least a couple, right?
At least a couple, which is a lot for the Elm community.
In terms of packages and for something that is sometimes frowned upon.
You have your set of choices.
So Lenses would be one thing.
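To make the "functional getter and setter" idea concrete, here is a minimal, hand-rolled lens sketch in plain Haskell. This is not the real lens library's encoding (which uses functors and infix operators); the Lens type, the compose helper, and the Person/Address records are all made up for illustration:

```haskell
-- A lens is a paired getter and setter focused on one part of a structure.
data Lens s a = Lens { view :: s -> a, set :: a -> s -> s }

-- Lenses compose: a lens into s, then a lens into its focus, gives
-- a lens into the deeply nested part.
compose :: Lens s a -> Lens a b -> Lens s b
compose outer inner = Lens
  { view = view inner . view outer
  , set  = \b s -> set outer (set inner b (view outer s)) s
  }

data Address = Address { city :: String } deriving (Show, Eq)
data Person  = Person  { name :: String, address :: Address } deriving (Show, Eq)

addressL :: Lens Person Address
addressL = Lens address (\a p -> p { address = a })

cityL :: Lens Address String
cityL = Lens city (\c a -> a { city = c })

-- A lens reaching two levels deep, built purely by composition.
personCity :: Lens Person String
personCity = compose addressL cityL

-- view personCity (Person "Ada" (Address "London")) == "London"
-- set personCity "Paris" … returns a Person with the nested city updated.
```

The real lens library generalizes this so that getting, setting, and traversing all share one representation, but the getter/setter pair captures the core idea.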
Well, getting acquainted with a lot of crazy infix operators is another thing.
And also just knowing that there are many ways to do the same thing.
Like, for example, there are two or three ways of doing pattern matching in Haskell, whereas in Elm there's only one.
The case ... of syntax.
There are pattern guards and pattern synonyms, which are things that are meant to help you simplify pattern matches.
And many other constructs that we don't have, but I would bet you don't really actually need in Elm.
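As a small taste of one of those extra forms, here is what pattern guards look like in Haskell; the classify and lookupLength functions are made-up examples:

```haskell
-- Guards let a branch match only when an extra condition holds.
classify :: Int -> String
classify n
  | n < 0     = "negative"
  | n == 0    = "zero"
  | even n    = "positive even"
  | otherwise = "positive odd"

-- A pattern guard (the <- form) matches a pattern against the result
-- of an expression, falling through to the next guard if it fails.
lookupLength :: String -> [(String, [Int])] -> Int
lookupLength key table
  | Just xs <- lookup key table = length xs
  | otherwise                   = 0
```

In Elm you would express both of these with a case ... of plus if expressions, so nothing is lost, but the guard syntax can read more compactly.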
Also, I'd like to mention something that is not language-related but ecosystem-related.
Because of Haskell's academic nature, there is no formatter that comes by default with the language, as we have with elm-format.
And with the build tools, there's also a war: there's one called Cabal and one called Stack.
And both of them work, but both of them have serious deficiencies.
So it's such a blessing to have something like the Elm compiler, something that is curated and that comes by default with a formatter with one single style to rule them all.
And it's something I really, really, really miss in Haskell sometimes.
Maybe I'd be invited on a Haskell podcast to teach Elm for Haskellers.
Flavio would be a much better person to be on that podcast episode as well.
I mean, the episode would be pretty short.
Like, take Haskell, remove everything, and you've got Elm, basically.
And please use a formatter.
You know functions? Yeah, we have those.
Functions over data? Yeah, that's Elm.
I guess something I hear a lot is GADTs, but I don't know if that's a language extension.
Yeah, it is a language extension.
Generalized algebraic data types.
Which is like our custom types, but more powerful.
It's a custom type, but it's a polymorphic custom type that you can parametrize just as OCaml functors.
So I haven't really reached for GADTs in Haskell,
because they come up when you really need to do really abstract, really complex stuff.
But for many library authors and compiler creators as well,
I know they reach for GADTs very often.
And I know there are some features similar to GADTs in OCaml as well.
And people are really crazy about GADTs.
But for the average programmer, even the concept of it, even for me, it's crazy.
It's very hard to grasp, very hard to use, and very hard to understand.
But yeah, there are GADTs.
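For a sense of what the extra power buys you, here is the classic GADT example: a tiny expression language where each constructor records its result type, so ill-typed expressions (like adding a Bool to an Int) are rejected at compile time. The Expr type and eval function are illustrative, not from any particular library:

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT lets each constructor pick the type parameter of the result.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines the type: in the IntE branch, the compiler
-- knows a ~ Int, so returning n is well-typed.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e
```

With a plain Elm custom type, eval would have to return something like a Value union and handle impossible cases at runtime; the GADT moves that checking to the type checker.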
I'm sure Dillon would use GADTs in elm-pages or in elm-graphql, for sure.
I've definitely thought about some of these things.
There are definitely certain compromises to Elm's simplicity.
And it's actually impressive how often we can get by without running into limitations
with the number of language features that Elm gives us and their simplicity.
Like a custom type is such a simple thing.
And there's an extreme elegance to just how little there is to that concept.
But I definitely have thought as a package and framework author,
at times I'm like, hmm, yeah, if there was some way to have a more general way
to deal with this, like to be able to let users have a more customizable
custom markdown block type.
We don't have a lot of tools for extensibility.
So we end up using a lot of code generation as framework and tooling authors
to create extension points or very clever design.
But we've actually managed to figure out a lot of clever tricks to do these things
and give a pretty good experience to the user at the end of the day.
But it takes a lot of love as a designer of these tools.
I'm guessing in your case, a Haskell person would come in and say,
hey, I have a language extension for this.
Or it's just baked in that a type class would solve this problem for you maybe.
Exactly. Yes. How could I forget about type classes?
This is the main thing that you need to learn for Haskell if you are coming from Elm, basically.
And it's also the crown of Haskell, right? Type classes.
And it's an idea that I think was born with Haskell,
and it extended to traits in Rust, and it greatly influenced many other languages.
Because it's such a great thing as well.
Right. So would a type class allow you to say that this function takes a data structure
that you can index into, but you could use an ordered dictionary or an unordered dictionary,
and it would support operations on both because it needs a certain class of functions?
Exactly. Yeah. Some people compare them to interfaces in object-oriented programming,
but they're a little bit more powerful than interfaces.
So, for example, we've discussed about monad and functor and applicative functor.
Those are basically type classes in Haskell.
You can define them, defining the methods a type needs to implement
to be considered an instance of a certain type class.
And also we have type class laws and property tests that we can check against
to make sure that a type really is a semigroup, a monoid, or whatever type class.
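A minimal sketch of defining a type class and an instance in Haskell. The Container class and Box type are hypothetical names; the shape mirrors how Functor is defined in Haskell's base library:

```haskell
-- A type class declares the methods a type must implement.
-- Here, Container mirrors the real Functor class from base.
class Container f where
  cmap :: (a -> b) -> f a -> f b

-- A trivial wrapper type...
data Box a = Box a deriving (Show, Eq)

-- ...and its instance: how to map a function over a Box.
instance Container Box where
  cmap f (Box x) = Box (f x)

-- cmap (+ 1) (Box 41) == Box 42
```

Any function written against Container works for every instance, which is exactly the kind of polymorphism Elm approximates by passing functions explicitly or generating code.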
Yeah. So much stuff to discuss.
So if somebody wants to do a deeper dive into Haskell,
do you have any favorite resources to point them to?
I think there's the Elm Guide, which is a good introduction.
Actually, for me, the ideal learning path would be, for example, starting from JavaScript.
And then, for example, I read Richard Feldman's Elm in Action,
which I think does a pretty good job of teaching Elm to a JavaScript mindset.
And then from Elm to Haskell, well, I obviously would recommend my blog posts,
my series of blog posts.
But as well, this Haskell book, Haskell Programming from First Principles,
is an excellent resource, even though it's a little bit long.
But it just does a perfect job of explaining from almost ground zero what you need to know.
It almost teaches you what a string is and what a function is
before diving deep into Haskell.
Is it one of those books where Hello World is chapter 12?
Exactly. Yes. Or chapter 20. Yes.
I think Functors is chapter 12,
and then Applicatives is 13, and then Monads is chapter 16 or something like that.
I have to confess, I read chapter one of this book years ago,
like six years ago, about lambda calculus.
And I diligently went through...
Skip chapter one. You can skip it.
I diligently went through all of it.
I had my pencil and paper and I did those lambda calculus exercises
and I understood lambda calculus a little better.
And then I'm like, okay, I think that's all.
That's it.
Exhausted my energy for going through it.
Yeah, I think I kind of understand why they decided to put lambda calculus in the first chapter.
But I know that many people haven't gone through the first chapter.
So it's a terrible entry barrier.
You can pretty much read the book without reading the first chapter.
But it's basically like what we do in Elm: programming with expressions, not statements.
So this is an invaluable skill, in my opinion, that will get you really, really far,
understanding the whole expressions thing.
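The "expressions, not statements" idea can be shown in a couple of lines of Haskell (the same code is nearly valid Elm); absDesc is a made-up example:

```haskell
-- if/else and let are expressions: they produce values directly,
-- so they can be nested and bound rather than executed as statements.
absDesc :: Int -> String
absDesc n =
  let m = if n < 0 then negate n else n
  in "magnitude is " ++ show m
```

There is no early return or mutation; every branch simply evaluates to a value, which is the habit both Haskell and Elm train you into.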
Right. Yeah. Well, great stuff.
And Flavio, thanks again for joining us.
And thanks for your blog posts.
And looking forward to hearing more in the future.
My pleasure. Thank you. Thank you both for having me.
Thank you for coming.
And Jeroen, until next time.