Elm's Universal Pattern

Guest Joël Quenneville shares his wisdom on transforming and mapping in Elm, and how it applies across many Elm data types.
June 7, 2021


Some common metaphors for Elm's Universal Pattern (Applicative Pattern).


  • Random generators
  • Apply mapping functions to vanilla value functions to keep things clean
  • Record constructors

Some more blog posts by Joël that relate to Elm's Universal Pattern:

Joël's journey to building a parser combinator:

  • Nested cases -
  • Extracted Result functions -
  • Introducing a Parser alias and map2 -
  • Re-implementing with elm/parser -
  • Getting Unstuck with Elm JSON Decoders - because mapping is universal, you can solve equivalent problems with the same pattern (described in this post)


Hello, Jeroen.
Hello, Dillon.
And today, once again, we've got another guest joining us, Joël.
Thanks for joining.
Hi, everyone.
Thanks for having me on the show.
It's a pleasure.
I've been hoping to have you for some topic.
And finally, we got a nice topic to discuss with you that popped up.
And yeah, you're sort of, I think of you as somebody who explains things in a way that
a beginner hears your explanation and a light bulb goes off and an expert and an Elm veteran
hears your explanation and they say, oh, I never thought of it that way.
You've sort of got a great philosophical way of breaking down fundamentals, which I really appreciate.
Well, thank you.
That's really the goal of mine when I speak or write or teach.
I usually like to be right on that boundary of something that's practical and teaching
how to do a task and solve a problem, but also venture a little bit in the philosophical
world of like, why is this a useful solution?
And is there a bigger concept at work?
So speaking of bigger concepts at work, what is that concept today?
You want to introduce it for us, Joël?
Elm's universal pattern.
What does that mean?
So I think I'll just open this by saying that I think my favorite function in Elm is probably map2.
There's a bunch of different modules that implement this, as a Maybe.map2, a Json.Decode.map2, a Random.map2, and all of those, they're probably my favorite.
If you were in a desert island and you could only bring one function, would it be map2?
It probably would be map2.
Technically, I should probably say andThen, because you can use it to implement map2, and
then you'd get, like, it's like wishing for more wishes.
It's kind of cheating.
But yeah, if I'm only allowed to take one, it would be map2.
I guess you would also want to bring some data with you because map2 without any data
doesn't have any value.
That's true.
There's some bad pun here that can be made about date palms or something like that, but
I can't make it.
If it was another language, you'd bring rescue or something like that.
I don't know.
So Elm's universal pattern.
So what exactly are we talking about here when we're talking about a universal pattern?
A universal pattern for what?
What do you use this pattern to do?
I think the universal part of it is just the idea that map2 exists for multiple different types.
It's actually very common to see it implemented for different types, both in core and in third-party libraries, because it's such a useful function.
And at its most basic level, I think of it as a way to combine two things of the same type.
So to be more concrete, if we talk about, say, Maybe: I have two Maybe values that I
would like to combine, and I have a two argument function I would like to combine them with,
map2 would be the way to do that.
So I think of it as a way to say two argument function, two maybes, how can I combine all
those things together?
And then there's more functions as a map3, a map4, map5, et cetera, if you want to scale
that pattern up to a three argument function with three maybes, or a four argument function
and four maybes, and so on.
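To make that concrete, here's a small sketch in Elm (the name `fullName` and its inputs are hypothetical, just for illustration):

```elm
-- Combining two Maybe Strings with a two-argument function
fullName : Maybe String -> Maybe String -> Maybe String
fullName maybeFirst maybeLast =
    Maybe.map2 (\first last -> first ++ " " ++ last)
        maybeFirst
        maybeLast

-- fullName (Just "Joël") (Just "Quenneville")
--     == Just "Joël Quenneville"
-- If either argument is Nothing, the result is Nothing.
```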
It's really interesting because in the Elm community, we don't tend to talk about these
things with these category theory terms because it can be confusing.
And often, you hear this like, all right, chapter 10 of this book on Haskell is when
you finally have gotten past the introductions of category theory concepts, and then you
write your hello world or something.
And in Elm, we go the opposite direction where if you get to these concepts at all, it's
after chapter 10.
So sometimes we go to the point that we don't want to put terms on these different categories
and concepts, but it is helpful to have some way to think about them somehow.
So sometimes I know people in that category theory world talk about things in boxes, that
it's going between these different worlds: you have a value that you can do something
with, and then something that you can't reach, like a random value.
If you have a random generator of type int, you can't go and touch that int and add a
number to it and multiply it.
So you need to apply something to it in the box.
And that's kind of what mapping is conceptually.
It's like reaching into the box with a function. So you have this operator that can multiply,
or some function that can take the absolute value of a number.
And you want to apply that function to the value that's in that box, that random generator.
Yeah, I guess there's a few different mental models you could use to think about what mapping
functions do.
I'd mentioned one earlier, the idea of combining.
Another one that's particularly helpful with, say, types like Maybe or Result is the idea
of abstracting over this really common pattern that you might have, which might be unwrap
a value, apply a function and then rewrap.
So like a maybe if you want to do an operation on it, you might say, well, unwrap it if it's
present, do my operation, but because it might not be present, we need to return nothing.
Therefore, we also need to rewrap at the end.
And really the unwrap rewrap part is just a boilerplate.
We have to do this all the time.
And so a map function allows us to abstract over that pattern.
I think there's also maybe a sense where you can think of mapping functions as a way to
sort of translate functions into ones that operate on your sort of wrapper type.
So I have a two argument function and I want to turn it from a function that works on integers
to a function that works on maybe integers.
I can use map2 to convert it.
I think the fancy functional programming term there would be lifting where you say I have
this two argument function.
I will sort of lift it into the world of maybes.
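As a sketch of that lifting idea in Elm (the function names here are just for illustration):

```elm
-- A plain two-argument function on integers
add : Int -> Int -> Int
add x y =
    x + y

-- Partially applying Maybe.map2 "lifts" add into the world of Maybes
maybeAdd : Maybe Int -> Maybe Int -> Maybe Int
maybeAdd =
    Maybe.map2 add

-- maybeAdd (Just 1) (Just 2) == Just 3
-- maybeAdd (Just 1) Nothing  == Nothing
```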
So yeah, those are sort of three different ways of looking at the same concept.
And I think sometimes it can be really hard to get a good grasp on what this concept is.
And so having multiple mental models can be really helpful.
Particularly because some of them don't work quite as well for some types.
So you mentioned the idea of a box earlier.
And I think that's very concrete when looking at something like maybe because it's like,
yes, I have a number and it's wrapped inside of it maybe and I can unwrap it.
That feels like a box.
Something like random isn't quite a box in that it's a future value that you might get.
Like a decoder is kind of similar.
It's a mailbox, in a way.
Something will be delivered to it in the future and when it's delivered, you wrap it in something.
I found that Maybe is probably one of the easier types to use to understand some of
these concepts because it's really concrete.
Most Elm developers are familiar with how that type works.
And you can deconstruct it and you can pattern match on it, do a case expression and see
what's inside at any point.
And you can reimplement your own map, map2, map3, et cetera, pretty easily in a way
that you couldn't for say the random generator.
That's a good point.
Yeah, because the actual internals under the hood of the thing you're mapping can get a
lot more abstract than with a maybe.
As you say, it's to the point where it's tempting to just do a case statement all over the place
with maybes.
I find that one thing that I look out for sometimes is if a case statement is happening
too often and if functions are dealing with these wrapped types or these, you know, if
you have a function that's dealing with random generator types or maybe types rather than
ints or whatever underlying data type, would you say that that's generally a smell?
Like I often think if I have a lot of case statements around maybes or if I'm passing
these wrapped values, things tend to work really nicely when you have like functions
that deal with sort of vanilla values and then you apply these map functions to combine them.
I would agree, yes.
In general, the way I tend to write code and Elm code in particular, I like to separate
what I might call branching code or deciding code from doing code.
So if I were to say case on a maybe, I would have one function that cases and branches
and then it would just call another function that's that sort of doing function.
And so even if I had the case expression, I would have a separate function that acts
on the inner integer or whatever it is, which is just generally, I think, easier to read
and understand.
That also makes it nice to refactor later if you realize, wait, this case expression
could be a map.
I don't have to separate the business logic inside.
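A sketch of that separation in Elm (the pricing example is made up for illustration):

```elm
-- "Doing" code: operates on a plain Int, no Maybe in sight
applyDiscount : Int -> Int
applyDiscount price =
    price * 90 // 100

-- "Branching" code: decides what to do about the Maybe,
-- then delegates the actual work
describePrice : Maybe Int -> String
describePrice maybePrice =
    case maybePrice of
        Just price ->
            String.fromInt (applyDiscount price)

        Nothing ->
            "price unknown"

-- Because the logic is already separated, the Just branch
-- could later be refactored to use Maybe.map applyDiscount
```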
Yeah, I find that like with if you have a remote data value, for example, often code
starts out wanting to do too much and doing like a case statement on the remote data type
that if it's successfully loaded or loading or you kind of render these different views
in line.
But it turns out to be a lot to wrap your head around to parse out the rendering logic
for the successful view and the error view and all these pieces in one place.
And it's really this general concept, which actually you have a nice blog post on this,
I think, about staying at one level of abstraction.
And in a way, when you're kind of unwrapping and then dealing with the unwrapped thing,
by definition, you're dealing with two different levels of abstraction right there.
And I think that separation of sort of deciding code versus doing code, those are sort of
two abstractions that you want to keep separated.
Well, let's talk about some examples of this universal pattern.
With these different examples, you were describing how different analogies might be more intuitive
for different ones.
It's also interesting, like in a way, there are almost different semantics for these.
Like for maybe, if you're combining maybes, the semantics are almost like and semantics,
where it almost like short circuits.
If any of the maybe values are nothing, then it just short circuits through and the whole
thing is nothing.
But if like, for example, with a JSON decode value, I guess it's a similar concept that
it almost short circuits with a JSON decoding error if there's an error anywhere.
But that error carries information, so it could carry information from any given decoder.
Similarly, I think you could say that with something like result, where the error has
more context about where it failed, why it failed, rather than maybe it's just we don't
have a value.
It's kind of interesting that it's not like, I guess it's not a very common pattern to
just take multiple errors and group them together.
But I suppose it could just as well be.
But I guess you can't really proceed because it assumes that it has the needed information
in order to proceed in certain contexts.
But like with a decoder, it's not going to attempt the other decoders; the first one
that fails short circuits.
There's no reason you couldn't accumulate errors.
I think later if we talk about parsers, that might be something that comes up.
Okay, so let's talk about where some places this pattern occurs.
So we've touched on maybe Elm JSON, random generators.
It might be worth talking a little bit more about Elm JSON because I think that's maybe
one of the places where it's particularly useful.
Sounds great.
I think for me, the metaphor or the mental model that works best here is the idea of combining.
So when we're parsing JSON, typically we're pointing to a particular path in the JSON document.
We're saying in this field, decode this value as a string or integer or something like that.
But usually we want to read more than one value out of the JSON.
So we want to say at this field, read this integer.
At this other field, read this string.
At this third field, read a Boolean.
And then give me all three values back and let me combine them into some custom Elm value.
And so we can write individual decoders for each of the pieces of data, but then we need
a way to combine all three together.
And that's where the mapping functions come in.
If we're combining three pieces, it would be a map3.
And yeah, for me, this mental model thinking of them as combining functions, I think is
most apt when thinking about decoders.
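A sketch of what that looks like with elm/json (the field names and the User type are hypothetical):

```elm
import Json.Decode as Decode exposing (Decoder)

type alias User =
    { id : Int
    , name : String
    , isAdmin : Bool
    }

-- Three individual decoders, each pointing at one field,
-- combined into one User with map3
userDecoder : Decoder User
userDecoder =
    Decode.map3 (\id name isAdmin -> User id name isAdmin)
        (Decode.field "id" Decode.int)
        (Decode.field "name" Decode.string)
        (Decode.field "admin" Decode.bool)
```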
Scott Wlaschin has this concept of, like, railway oriented programming, I think he calls it.
And he talks about this pattern for like a decoder or mapping things together that you
have these sort of split tracks.
If you picture a fork in the railroad where you can split off between these two different
directions and one of the directions is sort of an error direction and the other one is
like a green success direction.
So you map together a JSON decoder that picks off five different fields from a user and
it expects them to be non null and of these specific types.
And as it picks them off, it's going along the green railroad.
And if any of those is null unexpectedly, now it can take that other track and go to
the red track.
And you can imagine, each time you apply a map, it's branching off and there's
another sort of new green track for it to branch off of.
But it can always go down that red track.
And the red track, it's just following along a continual path.
So instead of applying more data and combining it together, you just get that error straight
through that short circuited error data.
I love the visual metaphor that he uses.
You should definitely link to the talk because it's worth looking at it with the slides.
Yes, I agree.
It takes a concept that's a little bit arcane and sort of pulls in a lot of different ideas
from functional programming, not just this mapping idea and strips away the really sort
of academic language and really puts it in a metaphor that's easy to follow.
And so I think we're sort of so entrenched in this Elm world here that it's easy to forget.
It's easy to take these things for granted.
But if we sort of step back from it and talk about how would we deal with these things
otherwise, dealing with throwing exceptions.
And it's actually really wonderful dealing with data in this sort of composable way because
you can think about something as a unit and you can combine these things.
And so I mean, I'm not sure if it's just a reminder to appreciate what we've got or if
there are implications for how we design our code there.
But I think that's a good thing to keep in mind.
Yeah, how would you do that?
Would you go with plenty of case expressions?
There's a sense maybe where in say a more dynamic language, a bunch of maps in Elm
might be more or less equivalent to some kind of optional chaining.
So like Ruby has what they call the lonely operator.
JavaScript has the question mark where you do this sort of optional chaining.
Null coalescing, something like that.
I think that's a separate concept.
Right, right.
This new JavaScript question-mark-dot operator, and you see this in different languages.
And I guess before that operator existed in JavaScript, it would be, you know, user && user.something.
Right, which gets really clunky if you have a long chain because then you have to check
every step along the way.
And so one of the things about that pattern that I've noticed is people say like, wow,
this like question mark dot operator in JavaScript makes code so much nicer, which it certainly
cleans things up.
But then what if you're not dealing with something that may be null?
What if you're dealing with something that may represent some kind of error or how do
you change different types of things?
So, you know, Elm doesn't have these sort of type classes for these
different types of things where you use a single operator to do it.
But it is so baked into the core libraries and the ecosystem and the ethos of Elm that
you sort of apply these patterns and also the language itself because it doesn't have
sort of exceptions that can just bubble up somewhere and be caught.
And so you've got to sort of flow data through and you've got to prove to the compiler before
you can just unwrap values and that sort of thing.
So it's it's sort of baked into the language in a way.
And things do compose together so nicely because this is not just taking five maybes and mapping
them together, but, you know, then chaining that along and turning that maybe value that
you derive into a result type because you need to combine it with another result type from
another place.
And then you combine those to build some value.
And that at that point, things really compose together in a way that it feels totally different
than just using question mark dot operators in JavaScript.
Things really compose with all these other libraries and chains.
There's also, I think, the really key distinction is that in a language like JavaScript, things
are nullable by default unless you check them.
And then you can have confidence that they're not null, whereas Elm values are guaranteed
present unless they're explicitly wrapped in maybe.
So we can sort of trust by default and then we sort of mark the areas that are untrustworthy
and the compiler will force a check.
That's a great point.
And this pattern in a way, it's like intimately tied to this quality of the Elm compiler and
the Elm language that you're sort of deriving data of different types as you apply functions
to it.
So, you know, if you have a pipeline and you do, you know, you pipe it to List.singleton.
Now you take a thing that was not a list and you make it a list.
And then you, you know, combine that together with something else.
So this is one of the things with this sort of applicative pattern.
We haven't used that term yet, but you know, you have a pipeline and you're applying these
functions and it's sort of modifying the type as you go.
So with list based APIs, which you also find in Elm, like, you know, Elm HTML, you create
a div and you give attributes and children.
You're not changing the type as you add HTML attributes to that list in the div.
You add a class, you add an ID, but when you're doing a, you know, Json.Decode.succeed User,
and then you're piping that to andMap or some, you know, pipeline operator, you're
modifying that value from the starting point.
And you start with this constructor that takes five arguments, and then you pipe it through
with applying five different times.
And it goes from a function that takes five arguments to a function that takes four arguments
to a function that takes three arguments.
And in that way, the applicative pattern is really nice with Elm libraries, because
it allows you to sort of transform the types based on what you're applying.
If you pass in a decoder that decodes an int, a decoder that decodes a maybe string, it's
going to expect that to be matching up with the constructor you started with as you apply them.
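Here's a sketch of that pipeline in Elm. Note that `andMap` doesn't ship with elm/json itself; it's commonly defined in terms of map2 (or taken from a package like elm-community/json-extra):

```elm
import Json.Decode as Decode exposing (Decoder)

-- The pipelineable version of map2
andMap : Decoder a -> Decoder (a -> b) -> Decoder b
andMap =
    Decode.map2 (|>)

type alias User =
    { name : String
    , age : Int
    }

userDecoder : Decoder User
userDecoder =
    Decode.succeed User                                -- Decoder (String -> Int -> User)
        |> andMap (Decode.field "name" Decode.string)  -- Decoder (Int -> User)
        |> andMap (Decode.field "age" Decode.int)      -- Decoder User
```

Each step applies one more argument, so the type in the comments shrinks by one argument per line.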
Well, and one of the things that you're saying here that I think you're hinting at is the
idea that in functional programming, the entire way we structure programs is as a series of
data transformations.
So we start with one or more input values, and we slowly convert them into it could be
the same type, it could be a different type, but we're slowly converting them until we
eventually get the output that we want.
And that's how we structure programs in functional programming.
And it's just like when you're writing code, it's like this little puzzle that you're like,
I know I need a value of this type, and how do I build it?
I mean, we just talked about this recently in our debugging episode, Jeroen, of this
process of debugging types when the types aren't quite fitting together and how you
figure out what type to put in the type hole.
Sometimes it's really helpful to just break out these little puzzles and say, oh, this,
I know I need a value of this type.
Let me pull this out into a let and break it out into a sub puzzle, give it a type annotation.
The type annotation proves that that type would solve the puzzle in that chain of applications.
And you don't yet have a value of that type that you promised with your annotation.
So now that's your next puzzle to solve.
Yeah, there's this, you know, not only is functional programming about transforming data,
but another key concept in at least structuring functional programs is breaking down larger
transformations into smaller steps.
You might call that decomposition, each of which are smaller transforms, some of which
might be reusable.
And that's where we get into all the fun stuff; the deeper functional programming concepts are
generally just patterns that we can use to do that, to break down a larger transformation into
smaller pieces.
Yeah, I wrote this blog post, combinators, inverting top down transforms, where I kind
of talked about, like, the difference of thinking about a problem as these sort of composable
sub problems or decomposable, I don't know.
These little breaking down into sub problems where you say, like, I know how to decode
a user, but I mean, where am I going to decode the user from?
What data is it going to be hanging off of?
Is it going to be nested under a bunch of fields?
Is it going to be continuing off of something?
Or am I going to be decoding it based on if the role is admin or whatever it may be, but
you can think about these sort of parts of it independently, and then compose them together.
Whereas that's like this sort of bottom up way of thinking about things, whereas this
top down way is just sort of reaching in and grabbing data from a JSON blob, which in my
experience is what tends to happen when I've worked in JavaScript code bases is it's so
easy to just pull in data from a big JSON blob, and then you've got this big JSON
blob, you pass it through a transformation function that changes a bunch of data, but
you're dealing with this like monolithic object, and it's really difficult to think about.
But with these sort of combinators, it's just you can think about this one piece, but then
you can take that piece and this other piece and build them up into one thing.
So this sort of like universal pattern, I'm not sure if it's like inseparable from this
concept of a combinator, but it seems like there's a link there.
So we've been using the term universal pattern because I used that as the title of a blog post.
In that blog post, I was talking about map2, map3, map4, and so on functions.
Those functions are combinators because as we sort of talked earlier, one of the mental
models for what those functions do is they give us a way to combine values together.
And so it might allow us to combine three maybes or I mentioned also earlier that it
was a really helpful mental model for myself for thinking about JSON decoding.
Say I can decode three different pieces of data, and I
want to combine them all into one more complex piece.
And so now I need a combinator.
And that's really when we look at a library like the JSON decode library.
At its most basic level, it really only provides us with two types of things, some sort of
primitive decoders like int and string, and then a few combining functions.
And that's basically it.
And we can use those building blocks then to decode anything we want into any Elm structure
that we want.
Because a really key thing about JSON decoding in Elm that I think is not obvious to people
who are new to the language is that your Elm structure and your JSON structure don't need
to be mirrors of each other.
And in fact, you probably don't want your Elm structure to mirror the JSON.
So I typically will design my Elm structure first to match my needs for my program to
eliminate impossible states and all that good stuff.
And then say, OK, given this Elm type and given this JSON that I have, how do I bridge
the gap?
And that's where I will then pull out all the JSON decoder tricks to say, how can I
translate between the JSON I have and the Elm structure I want?
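As a sketch of that gap-bridging (the JSON shape and the Person type are made up for illustration), note how the Elm type is flatter than the JSON it comes from:

```elm
import Json.Decode as Decode exposing (Decoder)

{- Incoming JSON:
   { "profile": { "first": "Ada", "last": "Lovelace" } }
-}

-- The Elm structure we actually want for our program:
type alias Person =
    { fullName : String }

-- The decoder bridges the gap between the two shapes
personDecoder : Decoder Person
personDecoder =
    Decode.map2 (\first last -> { fullName = first ++ " " ++ last })
        (Decode.at [ "profile", "first" ] Decode.string)
        (Decode.at [ "profile", "last" ] Decode.string)
```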
I'm curious about one thing.
What do you think of the names, map, map2, map3?
Like for me, map is about transforming one thing to another.
And map2 is, as you say, combining.
So would it make more sense to call it combine2, combine3?
Or is that even what you have in your mind every time you talk about map2, map3?
That's a good question.
For the base map, there's a sense in which you could call it map1.
It's just a sort of continuation of that pattern where you can take a one-argument function
and one maybe, and I guess you're only combining one.
You're combining one maybe and just applying a function to it.
I do notice that I often use it as the first tiny step if I'm doing a refactoring to a
different data type.
Let's say I've got a value that I'm just decoding a user from some HTTP response, and that's
stored in my model.
But now I actually want it to be a bunch of metadata, and user is one of those pieces
of metadata.
And I've got some other bits of metadata in there.
And so the first thing I'll do is I'll wrap it in a metadata, which has
a single field user of type user.
So now I've done this map1; I've done it with a record.
Now I'm wrapping it in a record.
And that's a preparatory step for the next step, which is it's going to be JSON.decode.map2,
and I'm going to add another field to that metadata field.
So in a way, it does feel like map1, even though you can use it for just sort of transforming one value.
There is an elegance to the fact that you can change a map to a map2.
Yeah, it really feels like a continuation of this pattern.
And there's also the sense in which it is a transformation, it's just a transformation with two inputs.
So you might have, say, two integers coming in, but a string coming out.
So it is still a transformation, but it's less of a transform one item into another,
because now you have multiple inputs.
Yeah, if you think about it with Maybe, it's very natural. Like, imagine when Maybe
is created and we have this Maybe type and we're doing case statements all over the place,
and we say: case, Just, take that value, I want to apply some function to it.
And we're like, this is really inconvenient.
Wouldn't it be nice if I could just pass in the function I wanted to apply when I wanted
to turn this string to uppercase, I could just pass in a string to upper function.
And so we create a map function.
And then we say, well, I actually, I want to combine two maybe values.
And then we say, okay, well, I mean, how would I combine two maybe values?
Well, if either of them are nothing, then I can't combine it into a single maybe value.
So let's just, you know, turn it into nothing if any of them are nothing.
And otherwise, we'll pass in those two Just values that we have to the function that takes
two values.
Specifically, if you're trying to combine them with a two argument function.
Because it's a two argument function, you need both values to be present.
So if they're present, apply the two argument function to the two values.
Otherwise just return nothing.
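That description translates almost directly into an implementation. A sketch of writing map2 for Maybe yourself:

```elm
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 combine maybeA maybeB =
    case ( maybeA, maybeB ) of
        -- Both values present: apply the two-argument function
        ( Just a, Just b ) ->
            Just (combine a b)

        -- Either one missing: short circuit to Nothing
        _ ->
            Nothing
```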
I think your question, Jeroen, is really interesting.
If we look at what Haskell has done, they've chosen to not name this function map2, map3, and so on.
They've called it liftA2, liftA3, liftA4.
And they've sort of gone with this other metaphor that I talked about, this idea of lifting.
You could think of it as translating functions into the world of some other type.
So you could transform the add function to one that works on integers to one that works
on maybe integers.
So liftA2 would be map2.
Is there a liftB2?
So liftA, the A here stands for applicative, which is a term I think that we've sort of
been dancing around a little bit.
It's sort of the fancy functional term, but we haven't really gotten into it and defined it.
All right.
Let's do it.
Maybe we should do that.
At its core, really, what you need for something to be considered applicative is you need some
kind of constructor.
And then you need one of two things.
You either need map2 or you need what in Elm we often call andMap, which is sort of a pipelineable
version of map2.
And then?
It doesn't ship in the core libraries.
That would be similar to the JSON decode pipeline required function, right?
So the JSON decode pipeline required function is a combination of what you might call andMap
and then also allowing you to plug in the field name for convenience.
So given either of those, you can describe a type as being applicative.
So because Maybe has a constructor, which is Just, and it has a map2, we can describe
it as applicative.
And the interesting thing with map2 and andMap is there are sort of two different ways of
expressing the same thing.
And so given either of those, we can implement the other.
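A sketch of that interdefinability for Maybe (using `Just` as the constructor):

```elm
-- andMap defined in terms of map2
andMap : Maybe a -> Maybe (a -> b) -> Maybe b
andMap =
    Maybe.map2 (|>)

-- map2 defined in terms of the constructor and andMap
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 f maybeA maybeB =
    Just f
        |> andMap maybeA
        |> andMap maybeB
```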
I feel like this is like MacGyver skills for functional programming.
All right.
I need a stick of gum, a twig, or if you don't have that, I need...
But if you're missing one of those, you have nothing.
So yeah, I find that the map2 is much more concrete, more easy to understand as someone
who's exploring these ideas.
And from my own personal journey into some of these more philosophical concepts, it is
much easier to understand with something like map2.
Partly because you can deconstruct it more easily.
You can implement it yourself with a type like Maybe and grasp pretty easily what it does.
We mentioned earlier, right?
A map2 for maybe is just checking are both values present?
If so, apply this function; else, return nothing.
AndMap is a little bit more mind-bending because it plays with sort of partial application
and some pipelines.
And there are more concepts you need to understand in order to work with it.
Yeah, it's the kind of thing that you sort of copy paste from the docs for a library
to build up a pipeline.
But you don't always fully think about exactly what it's doing under the hood because it
would hurt your brain a little bit too much.
So you have to like...
I think that's why to a certain extent it's helpful to have some high level concepts of
how to think about these things, because you don't always have to think about the low-level details.
The high level is, well, I want to sort of apply a high-level combination of these things.
And so you sort of associate andMap with that concept and you don't need to understand
all the internals.
I think one thing that maybe we should mention too, something that can trip people up is
the record constructors feel like this magical thing.
So to sort of, I like to demystify that by just explaining exactly what it is.
So if you do maybe.map2, let's say you've got like a first name and a last name and
you expect them to both be there, but you've got some user input fields.
So you've got maybe values.
So you could have type alias User equals first String, last String, and
you could pass in that user constructor, capital U User, to Maybe.map2, and then
your maybe first, maybe last.
And so what is that doing?
Well, it would be equivalent to passing a function that takes a first and a last, which
are both String, and then builds a record with a field called first and a field called last.
But what happens is, this is just a part of the Elm language that applies when you define a
type alias of a record type specifically.
It doesn't happen if you define a type alias of an Int, it doesn't happen if you define a
type alias of a custom type; it's only for a type alias of a record type specifically.
It will give you a constructor function that takes the arguments of the type of each of
the fields in that exact order and returns a record with exactly those fields and types.
So that's an important thing to understand.
And so I think it's a good exercise to just write that with an anonymous
function or a named function, whichever you prefer: write an anonymous
function that takes first and last as arguments and returns first equals first, last
equals last, and convince yourself that passing the constructor you get from
type alias User = { first : String, last : String } is exactly equivalent to passing
that anonymous function.
So that's, I think, a really good thing to demystify, because it feels like magic otherwise.
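To make that exercise concrete, here's a small sketch of the two equivalent forms (the names are just illustrative):

```elm
type alias User =
    { first : String
    , last : String
    }

-- Passing the generated constructor...
user1 : Maybe User
user1 =
    Maybe.map2 User (Just "Jane") (Just "Doe")

-- ...is exactly equivalent to passing an anonymous function
-- that builds the record explicitly.
user2 : Maybe User
user2 =
    Maybe.map2
        (\first last -> { first = first, last = last })
        (Just "Jane")
        (Just "Doe")
```

Both evaluate to `Just { first = "Jane", last = "Doe" }`, and both become `Nothing` if either input is `Nothing`.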
I think this confusion is maybe the fault of a lot of the tutorials that are out there.
And if you read Elm code in the wild, you will see that people use that constructor,
because that's kind of what it's there for.
But if you're just learning, say, JSON decoders, and you see something that says Decode.map2
and then capital-U User, and you see that type alias defined above, a very reasonable
assumption might be: oh, I'm giving it that User type, and map2 is doing
some sort of reflection or metaprogramming or something like that based off of that type,
and knows to just magically construct a User out of the fields that I give it.
And really map two doesn't want to be given a type.
It wants to be given a function.
So what I've started doing in my own writing, even when giving examples on the
Elm Slack, is trying to always show the anonymous function.
It's a little bit more verbose.
And often it's not necessarily the concept I'm trying to teach.
But I think it's useful to show it there just to avoid that misconception.
So that it's very clear.
Oh, map2, map3, whatever, takes a function, not a type, as its first argument.
And that avoids some misconceptions.
And then you say: by the way, there's a shorthand for this function.
Did you know that when you define a type alias for a record, you get a constructor function
that has the same name as your type, and you could then make your decoder
a little bit terser by using that?
I think one reason why it feels like something weird is that we never use that function
anywhere other than inside an applicative.
You rarely see User applied on its own.
You always see a record with first and last specified.
I've asked around, and people really don't like using the record alias name as a function
outside of an applicative.
Because you can get the names of the fields mixed up.
Because if you have type alias User = { first : String, last : String },
and then you create a user by saying user = capital-U User, passing a first-name
string and then a last-name string:
now if you were to change the order of first and last in the record alias (you probably
wouldn't, but if you did), you're passing the strings swapped, and you don't get a compiler error.
And basically you've created a layer of indirection between what the field name is and the value
that's being passed to it.
Whereas if you just said user equals literal record first equals string, last equals string,
there's no getting it mixed up.
And so you can avoid that confusion.
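A hedged sketch of the hazard being described; the constructor version silently depends on field order, the record literal does not:

```elm
type alias User =
    { first : String
    , last : String
    }

-- Constructor call: arguments are matched by position, so if the
-- alias is later reordered to { last : ..., first : ... }, this
-- still compiles, but "Jane" quietly ends up in the `last` field.
jane : User
jane =
    User "Jane" "Doe"

-- Record literal: fields are labeled, so reordering the alias
-- can never mix the values up.
janeSafe : User
janeSafe =
    { first = "Jane", last = "Doe" }
```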
For myself, I think it's less about being afraid of changing the field order, because I
pretty much never change the order.
It's more just the readability.
If you see the user constructor and then two strings, it's not immediately obvious which
one is the first, which one is the last.
And so it's really nice for readability to have the field names as labels.
That's usually less important if you're, say, doing a decoder, because when you look at
all the little individual field decoders below it, you'll see the JSON
field names.
And generally you can tell what the fields are from the JSON field names,
or from the decoders.
So it's pretty obvious looking at the decoder what's the first name and what's the last name,
even though the JSON names might not be exactly the same as the Elm field names.
And that's one of the really nice things about decoders.
The JSON doesn't need to match, but you can probably tell what the fields are.
And so it feels a little bit redundant to copy that into an anonymous function.
And it also gets really long and verbose for larger records.
If you have 10 or 20 keys in the record, that can get really verbose, which maybe
leads us really nicely into andMap, a version of this sort of applicative pattern.
We've talked a lot about map2, map3, and so on, but those are going to be finite.
Every Elm library you use is going to have map up to map8, or however far they
want to go.
And eventually it's going to stop.
I've yet to run into that limit for something like maybe.
I don't think I'm combining that many optional values, but I do run into this all the time
on JSON decoding, because it's not uncommon to say, I want to read 20 fields out of a JSON
object and combine them into some Elm value.
And so that's where this sort of pipeline approach becomes really helpful, because now
you don't rely on something finite.
The beauty of andMap (it's not the inverse of map2; it's more the corollary to map2,
and there's a fancy term we could use for it) is that it's another formulation of what
map2 does, but you can chain it infinitely.
So if you want the equivalent of map100, you could start with a hundred-argument
function and then just do a hundred pipes to andMap.
Yeah, right.
And so to a certain extent it's a matter of personal taste, but why don't we talk about
some of the more objective pros and cons between the mapN functions (map2, map3, map4)
versus andMap?
So I think the big one, which we've talked about already, is that you will run out of mapN
at some point.
Although if you need a map17, you can always implement it in terms of
andMap, because it's equivalent.
So if you find it easier to read your code that way, you could just implement your own map17
using andMap and then use that map17 in your code.
If that's a style that you prefer, you could use some code generation to create maps all
the way up to a hundred.
That would actually be pretty easy.
Yeah, it would.
Would it be a good idea?
Who can say?
Could you generate a map17 from map2 also, or from map3?
Well, you can generate all of these from map2.
Map2 is the one.
If you have map2, then you can build all of these things.
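As a sketch of that idea, here's how andMap, and then any higher-arity map, can be derived from map2 for Maybe (the same shape works for decoders, generators, and so on):

```elm
-- (|>) applies a value to a function, so map2 (|>) applies a
-- wrapped value to a wrapped function: that's andMap.
andMap : Maybe a -> Maybe (a -> b) -> Maybe b
andMap =
    Maybe.map2 (|>)

-- map3 (or map17...) is then just a chain of andMaps.
map3 : (a -> b -> c -> d) -> Maybe a -> Maybe b -> Maybe c -> Maybe d
map3 f ma mb mc =
    Just f
        |> andMap ma
        |> andMap mb
        |> andMap mc
```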
I've built so many libraries that have this at this point.
Basically, I started by going to the NoRedInk json-decode-pipeline library
and looking at the source code and being like, how do they implement these things?
And how do the types line up?
And then you see, it's like a decoder of a-to-something.
The signature is mind bending.
It still hurts my brain to think about it.
And I've implemented it in libraries so many times now.
You get decoders of functions, right?
Something like that?
And it's like applying one of the values as you go through in the pipeline.
You have a function wrapped in a decoder.
You have a, like a concrete value wrapped in a decoder and you're saying apply that
value as an argument to that function.
The function might be a 10-argument function, so you apply just one argument to it.
And then you get back a new decoder wrapping a nine-argument function,
which you can then apply to another concrete value decoder to apply the next argument.
And now you get back an eight-argument function decoder, and so on.
And a decoder of a function really doesn't make any sense on its own.
It really makes sense in this context of an applicative.
And that's where it's incredibly helpful.
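Sketching the types can help here; this mirrors what the pipeline libraries define (the step-by-step comment shows how the wrapped function loses one argument per andMap):

```elm
import Json.Decode as Decode exposing (Decoder)

andMap : Decoder a -> Decoder (a -> b) -> Decoder b
andMap =
    Decode.map2 (|>)

-- Each step applies one argument to the wrapped function:
--   Decode.succeed User       : Decoder (String -> Int -> User)
--       |> andMap nameDecoder : Decoder (Int -> User)
--       |> andMap ageDecoder  : Decoder User
```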
So the error messages can be confusing.
And that's one of the challenging things: the mapN error messages, when you get
something wrong, are very clear and precise.
The compiler isn't able to give information as precisely if you're doing andMap, because
it doesn't know exactly how many things you plan to apply.
So it can't give you the same precision.
So that's one of the trade offs.
So if you've got an error in, say, a map2 or a map3, because it's unwrapping all
of them first and then saying, here's a three-argument function, apply all three of these
arguments, it can immediately tell you: oh, argument two of three is incorrect.
Whereas with andMap, you're slowly applying arguments one at a time, and the
process of applying them one at a time is converting a 10-argument function
into a nine-argument function, then an eight-argument function, then a
seven-argument function, and so on.
The error that you're going to get is something like, oh, on step five, I expected a function
with this signature, but the signature here is not quite right.
And it can be really a head scratcher if you don't understand under the hood what's going
on, if you're not familiar with the concept of partial application.
So that's definitely the downside.
It takes some deciphering, even if you're very familiar with it.
But it gives you enough of a clue that you're like, something's off with my chain.
And at that point, sometimes it's helpful to just... sometimes, in my pipeline
of andMaps, I'll just put a Debug.todo as one of the things in the pipeline.
Like, all right, let's just pretend that this one is whatever you want it to be to satisfy
the compiler, and see whether the problem is there or somewhere else.
If it's still giving you an error, the problem wasn't where you put the Debug.todo.
And if it's not giving you an error, then you know exactly
where to look.
One advantage of the andMap pipeline approach is that you can then combine it
with other functions to create almost like a domain-specific variation.
And we've mentioned a few times the NoRedInk json-decode-pipeline.
What they've done is they've taken this andMap function and combined it with a few
of the helpers from the JSON decode library for finding fields at a particular location.
And so you can say I have this required field or have this required nested path.
And those can all just be piped one to another.
And it becomes very nice to read.
I think someone who doesn't understand what the pattern does under the hood could still
understand what the code does.
Because you could say, oh, construct a user using a required first name and a required
last name nested under these sets of keys.
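Using NoRedInk/elm-json-decode-pipeline, that reads roughly like this (the JSON field names here are made up):

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Pipeline exposing (required, requiredAt)

type alias User =
    { first : String
    , last : String
    }

userDecoder : Decoder User
userDecoder =
    Decode.succeed User
        |> required "first_name" Decode.string
        -- a required field nested under a path of keys
        |> requiredAt [ "profile", "last_name" ] Decode.string
```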
So one reason I feel like people may sometimes just reach for these andMap or pipeline
functions to start with is that the workflow of changing from map to map2 to map3
and back to map2 as you adjust things is a little bit clunky.
I find myself using Ctrl-A and Ctrl-X in Vim, which are increment-number and decrement-number,
all the time for this.
Because what happens is, you go up to the line where there's a map2 (you can be anywhere
in the line before the 2 in map2), you press Ctrl-A, and it increments that to map3.
So that's a little trick that I use.
And I actually personally tend to use the mapN functions when I'm dealing with lots
of small composable bits.
But I think it's a matter of personal preference.
And there's good reason to just say, you know what, I don't want to deal with this workflow
of changing the N in the mapN every time I add something; I just want to use
andMap every time.
Yeah, but now that you know that shortcut, you don't have any excuse anymore.
So I'm very curious about one thing, because we've seen this pattern happen in a lot of
the core libraries, or core concepts that have spread out all over, like parsers.
But when would you reach for this pattern?
Like when you're building something new?
In what cases, in what situations, would you say it would be nice to have a combinator
for this API?
I think that need often arises organically.
You'll start working with your type and realize, oh, actually, I need a way to combine these.
And that's when this sort of thing will arise.
More generally, this sort of thing is usually only needed for types that have a type variable
in them.
So if you have a concrete type, some kind of enum-style value or something like
that, you're not going to need a map2, because there is no inner
value to transform.
Yeah, but you could still want to combine two elements to be a single element, like
a list of two things turned into a list of one thing.
Well, now it's the combinators on the list, not the item itself.
There are some rules here.
We've been talking a lot about how all you need is a map2 function
for this to count as applicative, and also a constructor.
Technically, there's also a set of rules that the map2 function needs to follow in
order to be considered a legitimate map2 for these purposes.
You can't just invent some function that's like, oh, this is string concatenation;
I'm going to call it map2 and, hey, I'm applicative.
What's the term for it?
Lawful, right?
I think so.
I like that.
It makes it sound like an outlaw.
It almost is like, sounds like it would be cool to like not follow those rules.
Be an outlaw.
I just think of a D&D alignment chart now.
Yeah, right.
That is the only definition I have in mind for lawful.
So if you can explain, please do.
So there's a few properties that have to be, and I don't know them off the top of my head,
but basically it's like, oh, if you map the identity function, then the output must be
the same or there's a few rules like that.
They're called the applicative laws.
So if you look that up, that's what we'll show.
But more generally, the signature for map2 is going to have type variables in it.
It's going to be a two-argument function, a to b to c, then your type with variable
a, your type with variable b, and in the end it will create your type with variable c.
And so if your type doesn't have a type variable, then it probably doesn't need these functions.
That's interesting.
Like what if you have an opaque type, like some sort of money type?
And I mean, I'm just trying to think of a concrete use case.
In that case, there is like a thing in this box.
Like you can't directly do anything with money types.
You need to expose an interface to deal with those.
But that said, do you want to expose some Money.map where you can then multiply
it by a million or something?
Maybe what you really want to do is expose Money.sum, or some sort of domain-specific
functions for dealing with the money.
That's exactly the path.
Actually, I've been down this path.
I think that was actually probably one of the areas where I first really understood mapping.
I was creating a money type, and it was not parameterized.
It was just a wrapper around probably an integer or a float.
And then I realized, wait, but it's annoying to always wrap and unwrap these things.
This kind of looks like mapping.
What if I created a map2?
And then anytime I want to add two dollar amounts, I can just map2 the plus
function, and that just works.
Or map2 the times function.
What does that mean?
Which like, I think as programmers, we play very fast and loose with math.
And if you were in a more, say, if you're working with physics, you know that if you
multiply two numbers that have a unit, then you also have to multiply the unit.
And so if you're multiplying dollars times dollars, what you get back is dollars squared.
That sounds great to me.
Sign me up.
Which in most applications is probably a nonsensical unit type.
So you probably don't want to allow arbitrary operations on the value.
Another really interesting thing is that normally, with a map2 function, you pass a
two-argument function to it to say, hey, combine these two using this function.
And that function you give it can have any two input types and any output type, because you
can combine any values together.
With something like, say, a dollar wrapper, you can't do that because the value inside
is always an integer.
And so the two inputs for your two argument functions must be an integer.
And because you're creating a new dollar value as the output, the output value also must
be an integer.
So rather than having a generic function passed into your map2 that's a to b to c, it's
actually going to be Int to Int to Int, which is possibly OK.
There's the concept of distinguishing between, I'm going to throw some fancy terms out here,
polymorphic versus monomorphic versions of these functions.
Polymorphic meaning many shapes, monomorphic meaning single shape.
So if it's just always an integer, then that's a monomorphic version of map or map2.
And those can be lawful under certain circumstances.
But in general, when people are talking about things like applicative, they mean the
polymorphic version.
So what would you call the monomorphic version, then?
You'd have to make one up.
We actually decided not to give it a name like map2, and instead gave it a domain-specific name.
As Dillon mentioned, for something like a dollar type, you might want to just create some
domain-specific functions like add, rather than something generic.
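A sketch of that opaque-type approach (a hypothetical Money module): the monomorphic combinator stays private, and only domain operations that make sense are exposed.

```elm
module Money exposing (Money, add, fromCents, toCents)

type Money
    = Money Int -- stored as cents

fromCents : Int -> Money
fromCents =
    Money

toCents : Money -> Int
toCents (Money cents) =
    cents

-- Exposed: addition is meaningful for money.
add : Money -> Money -> Money
add =
    map2 (+)

-- Not exposed: the monomorphic map2. Exposing it would let users
-- write map2 (*), producing nonsensical "dollars squared".
map2 : (Int -> Int -> Int) -> Money -> Money -> Money
map2 f (Money a) (Money b) =
    Money (f a b)
```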
I think where this comes up maybe a little bit more frequently is if you have some opaque
type that has a string in it or maybe it's even a record or something and you say, oh,
I want to map over this user's name, but you can't reach into the name directly.
You have to, because it's opaque, so you have to have some sort of function that does that
for you.
And you might be tempted to call it map or map name or something like that.
But because it is more monomorphic and it doesn't really work in the same way, I found
it's useful to just go all in on the domain specific idea and just give it a name that
describes what it does.
So call it update name and that better describes what it's going to do and doesn't confuse
people with a more general concept of mapping.
So it's a better experience for the users of your code, probably just easier to read
it in general.
I think one exception to this is actually in the Elm core library, and that is String.map,
because Elm allows you to map over strings.
Strings don't have a map2, but there is String.map, and it is monomorphic, because the
function you pass in has to be character to character.
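That signature makes the monomorphism visible: String.map fixes the element type to Char.

```elm
-- String.map : (Char -> Char) -> String -> String
shout : String -> String
shout =
    String.map Char.toUpper

-- shout "hello" == "HELLO"
-- A Char -> Int function would not type-check here.
```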
Good trivia.
But I think people are so used to mapping as this idea of traversing a collection and
transforming the values along the way that that one probably doesn't confuse people.
People probably even just use it and be like, oh, of course it's character to character
and never thought of like, oh, what if I wanted to do character to int?
Why doesn't that work?
So you may have used String.map and never realized it was different from all the other
maps in the Elm world.
So one other pattern I noticed emerging when I'm dealing with building up pipelines of
things, I mean, it happens all the time.
It's not necessarily just like building a JSON decoder or a random number generator.
I'm often doing these pipelines and sometimes there are these pipelines where rather than
just dealing with one specific thing, and we talked about this idea of dealing with
one level of abstraction at a time.
So often you're dealing with one level of abstraction where it's just decoding a bunch
of stuff, just building up a JSON decoder.
But sometimes you're running a decoder and then that gives you a result and then you're
turning that into a particular type of error that you're combining with another thing,
for example.
So in these sort of higher-level pipelines, you're deciding how to process
something rather than doing all the detailed processing.
Often I want to build something up into a particular type of value.
If I need to take one type of error and turn it into, combine it with, maybe there's an
HTTP error that may have happened in one result and another type of error that might have
happened in another result.
And then I need to combine those and pull in some other data.
So those types of pipelines, I tend to see a few different types of patterns emerging.
One is, sometimes I need to coerce things into the same type.
So maybe I have a Maybe value, and I have a Result with an HTTP error, and a Result
with another error type.
So I might need to do Result.mapError to get the two result types to have the same error.
And then I might need to take the Maybe and do Result.fromMaybe, giving it an error
for the Nothing case.
So those are sort of like some higher level patterns for combining things that I find
come up a lot.
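Those coercions might look something like this (the Problem type and names are illustrative):

```elm
import Http

type Problem
    = Network Http.Error
    | MissingName

-- Coerce both inputs into Result Problem, then combine with map2.
combined : Result Http.Error Int -> Maybe String -> Result Problem ( Int, String )
combined httpResult maybeName =
    Result.map2 Tuple.pair
        (Result.mapError Network httpResult)
        (Result.fromMaybe MissingName maybeName)
```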
And another one that I see coming up a lot in code that I write is that I'll want to
compose together ways of mapping things.
So I'll have, say, a Maybe of a list, or a JSON decoder of a list.
And I want to map the inner list inside of that.
In those cases, I'll do a map of a map: map the outer structure, and then a List.map
inside it to apply something in there.
So those are sort of two higher level patterns that I've noticed emerging a lot.
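For example, mapping inside a nested structure is just two maps composed:

```elm
-- A Maybe of a list: map the outer Maybe, then the inner List.
upperAll : Maybe (List String) -> Maybe (List String)
upperAll =
    Maybe.map (List.map String.toUpper)

-- upperAll (Just [ "a", "b" ]) == Just [ "A", "B" ]
-- upperAll Nothing == Nothing
```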
I think those are probably a little bit separate from this applicative concept and that they're
just tips for working with pipelines in general.
I would typically not combine those with, say, an andMap pipeline.
result, I would probably have a separate function that handles the JSON decoder and just find
this is how you decode the JSON and then call that from a different pipeline that's managing
the results.
Again, back to that idea of a single level of abstraction.
I have a function that defines here's how we interact with JSON and the other one that
says here's how we then like read the JSON and handle the errors.
You get to some really interesting patterns with sort of deriving these things too.
So another thing I've noticed is that a lot of these sort of mappable APIs will...
I mean, if you have map2, what can you derive from it?
You mentioned that you can...
That is magical.
The moment you introduce map2, so many things become possible.
It's pretty neat.
One of the things that I've been doing: I've got this Elm Markdown parsing library, and I've
had a lot of fun building up transformations, because it's really fun in a typed language
to deal with any sort of abstract syntax tree, whether it's Markdown or something else.
So I'm finding myself doing operations where you want to count the number of headings or
you want to take all of the level two headings and capture those.
So that's like, you might want to do like a fold left over them and that's just derived
from map2 or you might want to...
Well, the fold is not derived from map2.
It is its own thing, but there's a combination of fold and map2 that becomes really, really powerful.
I think that's what you're pointing towards.
So this typically goes under the name of sequence or combine in various libraries.
But if you have, say, a list of Maybes and you don't want to check each of them individually,
you say: give me back just one single Maybe that's either Nothing, if any of the items
were missing, or Just the list of all the present values if they were all present.
And that's where you would fold with map2.
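That fold-with-map2 combination is essentially what Maybe.Extra.combine does; a sketch:

```elm
combine : List (Maybe a) -> Maybe (List a)
combine =
    List.foldr (Maybe.map2 (::)) (Just [])

-- combine [ Just 1, Just 2 ] == Just [ 1, 2 ]
-- combine [ Just 1, Nothing ] == Nothing
```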
I've used it using the remote data pattern.
It's really useful to know if all of the...
If you have a list of remote data values, it's useful to see are they all successful
or are any of them pending or failed.
So it gives you like an aggregate status of all these independent remote datas.
And if they're all successful, then you get a list of all the successful values.
So it's super, super convenient.
Combine is an awesome helper.
And I imagine you're using it in your markdown parser, where you say, oh, I have a list of parsers.
Can you sort of turn that into a parser that gives me a list of things?
You know, I don't think I expose a combine function for that.
But maybe I should.
But in elm-pages, I have this DataSource API, which used to be StaticHttp, which is
sort of like a declarative description of getting HTTP data.
So it's not a command.
It's something that you can sort of just have when you load a page.
And anyway, combine is super helpful there because you'll have all these different data
sources that you want to combine into a list.
And that's a super handy function.
So I'm confused.
Is this combine, or is it sequence?
Which one is usual?
I think he's saying it's a synonym.
The two names are used in Elm.
So for example, in the core library, there's a task dot sequence, which takes a list of
tasks and just sort of squashes it down into a single task that will succeed if all the
child tasks succeed.
But you might see, say... RemoteData I think uses fromList, and then result-extra and
maybe-extra use combine.
I think that's maybe one of the disadvantages of not having type classes is that it allows
the same function to have different names, which sometimes is nice because a more domain
specific name might make more sense in the context of one library, but it makes it maybe
a little bit harder to see some of these patterns across multiple modules.
Well having type classes wouldn't prevent you from adding a new function that does the
same thing anyway.
But it would enforce that if something is applicative, it must have this function with
this name.
And so I think it would be really interesting to explore having a sort of community
resource of an elm-review rule, where you can have some little @-directive in a
doc comment in a module and say, this is applicative, or whatever term we want to use.
And it would just remind you: oh, but you don't expose a combine-style function for this.
Maybe you meant to do that.
So another thing that map2 allows, and we already touched a little bit on it with
pipeline APIs and combining, is that it can just be a really powerful way of cleaning up code.
And I had this magical experience a while back.
I was helping somebody else on a JavaScript project where they needed to parse sort of
like Excel style formulas, which are more or less just like prefix functions that can
be nested arbitrarily.
And we came up with something that's a little bit clunky.
I think it might've been some kind of recursive function that would consume a string and try
to build a tree out of it.
And I wondered if I could do something in Elm that would be nicer.
And I started with just re implementing the same approach that we had in JavaScript in
Elm, where I'm parsing a string.
But I also had introduced the idea of a Result type, just because Elm has that and JavaScript doesn't.
So each sort of step I would try to parse a chunk of the string and then return a result
if it was bad and otherwise keep going.
And it was this giant nested case expression, which my second step was saying, okay, well,
there's a bunch of steps where I can say it can either be a function name, like add or something.
It can be an open parenthesis.
It can be an inner expression.
It can be a closed parenthesis.
And those were all nested case expressions.
What if I broke them out into functions?
And so I broke them all out into functions with this really tedious signature where it's
like string to tuple of remaining string and result of like the type we've parsed so far.
It was just really tortuous.
But at least it flattened my case expressions a little bit because now all of the bodies
were broken out into functions, which is that rule of abstraction I talked about earlier,
separate doing code from branching code.
And then I started realizing, wait a minute, this signature of string-to-this-awful-tuple
shows up all the time.
And if we think about it, that's effectively what a parser is.
It's turning some less structured value, in this case a string, into a more
structured value, and possibly an error, which is why I had that Result.
And in this case, I had to keep track of the remaining string because you don't parse everything
all at once.
And so I took that and aliased it to parser and just cleaned up all the signatures.
And it looked a lot nicer, but I was still having to do all this casing to sort of combine
the things together.
This is where the light bulb starts going off.
I'm like, wait a minute, I'm doing all this casing on the results, all on this like tuple
result thing to see, can I combine these different pieces together?
Wouldn't it be nice if I had a way to just, now that they're called parsers, just combine
two parsers together?
Hmm, could I define a map2 function?
And it's a little bit mind bending to define a map2 function over like functions of tuples
of results.
But because I had aliased it to just parser A, I knew, oh, I know how to define a map2
over parser A. I defined that, and that's when the magic happened.
Because all of a sudden, I could eliminate all those case expressions and just very cleanly
combine all those extracted functions that I had together in a fairly flat way.
And then of course, knowing that I can use map2 to implement andMap, I did that to give
myself a parsing pipeline API, which turned out to be really, really nice.
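A simplified sketch of the parser type and map2 Joël describes (the real elm/parser is more sophisticated, but the shape is the same):

```elm
-- A parser consumes a string and returns the remaining input plus
-- either an error or a parsed value.
type alias Parser a =
    String -> ( String, Result String a )

-- Run the first parser, thread the leftover string into the second,
-- and combine the two results with f.
map2 : (a -> b -> c) -> Parser a -> Parser b -> Parser c
map2 f parseA parseB input =
    case parseA input of
        ( rest, Err e ) ->
            ( rest, Err e )

        ( rest, Ok a ) ->
            case parseB rest of
                ( rest2, Err e ) ->
                    ( rest2, Err e )

                ( rest2, Ok b ) ->
                    ( rest2, Ok (f a b) )
```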
And then sort of in the vein of what the JSON decode pipeline does, where you can sort of
layer on a little bit of extra behavior or meaning on top of that.
When you're parsing, sometimes you want to parse a value and then like actually create
a value out of it.
And sometimes you just want to make sure that something is there in the string, but you
want to move on.
So you might want to consume a value or you might want to actually like parse something
out of it.
I don't know if you might call that keep and consume or something like that.
So that's effectively what I did.
I had domain-specific variations on andMap.
And I think I called them keep and consume that allowed me to have a very flat pipeline
that was just like, oh, start by attempting to parse a function name, then just consume
an open parenthesis, then sort of recursively attempt to parse another expression, and then
try to consume a closing parenthesis.
And just it all fell into place from this really tangled nested mess of case expressions
and nested functions into this beautiful API.
It's all because of map two.
And that was a very iterative approach that I took.
I was aware of some of these concepts because I've used a lot of JSON decoders before,
but I wasn't really comfortable with parsing strings.
But that experience of sort of stumbling into what I guess you might call parser combinators,
which I guess all of a sudden made so much sense as a term for me, because I had
these parsers already: little functions I had already extracted to parse the string
into a function name or a parenthesis.
And then I implemented map two and a couple other functions that allowed me to combine
parsers together and boom, all of a sudden I had in, I don't know, probably less than
100 lines of Elm built a parsing library.
That was really magical and mind blowing.
And then just for fun, I checked the elm/parser library, which actually has a pipeline API.
It uses special operators, but it's effectively andMap.
That's pipe-dot and pipe-equals (|. and |=), which are equivalent to my specialized
operators for keep and consume.
And it was basically the same code.
So I had stumbled into something that was very similar to the official elm/parser library.
So it was a really fun exercise for me.
I learned a lot.
I feel like I learned how parsing works.
I got way more comfortable with some new facets of map2, and with the idea of combinators in general.
I think I gained a new level of understanding the combination of parsers and combinators
as like two pieces that really play well together.
Yeah, it was a really magical experience.
I stayed up late into the night and it was just like, oh, another light bulb moment.
I think I created probably four or five Ellies, one for each step in that process.
And it was amazing.
I'm now very curious, did you backport that to the JavaScript version?
I didn't directly, but I was helping somebody else on their project and I shared the Elm
equivalents, which got the other person interested in looking up at JavaScript parsing or parser
combinator libraries, which they were then able to refactor our original solution into
something using the JavaScript parser combinator library that was very similar to what I ended
up with in Elm.
That's cool.
I think like combinator is such an intimidating word, but really the concept is something
that like, I mean, if you've spent a lot of time using some of the basic tools that Elm
gives us like decoders, it's a very familiar concept of breaking down a problem into small
sub problems and then building it up into something more complex by using these sort
of combining functions.
That's all it is.
And it's funny because when you start to think about the internals and definitions, it
seems so complicated, but when you do it, it's so natural and so easy to do well.
Cause it basically like the conclusion I've come to is that it's basically the difference
between like imperative transformations and declarative transformations.
That's basically what a combinator is, is it's like a declarative way of describing
a transformation, which can then be built out of like basically they're the primitive
transformation building blocks and these compound ones where you can combine them together.
That's all it is.
And it's a very natural pattern.
Anytime you have two pieces of data that you want to work with and you're wondering,
oh, I need to combine them.
It might be, I have two Maybes, and I want to do some operation on both of those to get
a new Maybe back.
That might be one way to do it.
That would be a combinator.
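That two-Maybes case is exactly what elm/core's Maybe.map2 does. A small illustration:

```elm
-- Maybe.map2 combines two Maybe values: if both are Just,
-- it applies the function to the wrapped values.
total : Maybe Int
total =
    Maybe.map2 (+) (Just 1) (Just 2)
    -- Just 3


-- If either value is Nothing, the whole result is Nothing.
missing : Maybe Int
missing =
    Maybe.map2 (+) (Just 1) Nothing
    -- Nothing
```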
I think that the two types of combinator that I needed to implement for this parsing library,
one was a way to combine two pieces together and say, I want to do parse this piece of
data and also this other piece of data.
But also sometimes you want to say, attempt to parse it this way, or if that fails, also
attempt to parse it this other way.
If you've done JSON decoding, you'll be familiar with oneOf, where you give it a list
of decoders and it will try all of them, and whichever succeeds first is the one that's used.
And I implemented one of those for my little parser, and that's also a form of combinator.
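As a concrete picture of that second kind of combinator, here is a sketch using elm/json's Json.Decode.oneOf (the flexibleInt name and the int-or-string scenario are made up for illustration):

```elm
import Json.Decode as Decode exposing (Decoder)


-- Try decoding a raw Int first; if that fails, fall back to
-- decoding a String and parsing an Int out of it. Whichever
-- decoder in the list succeeds first wins.
flexibleInt : Decoder Int
flexibleInt =
    Decode.oneOf
        [ Decode.int
        , Decode.string
            |> Decode.andThen
                (\str ->
                    case String.toInt str of
                        Just n ->
                            Decode.succeed n

                        Nothing ->
                            Decode.fail "expected an int"
                )
        ]
```

So `Decode.decodeString flexibleInt "42"` and `Decode.decodeString flexibleInt "\"42\""` would both succeed with 42.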
We talked about this on our Elm parser episode, but there is a really interesting thing that
when you're coming from experience using JSON decode and then you go use the Elm parser
library, it's kind of counterintuitive, because you use oneOf and you're like, wait a minute,
the oneOf just failed on the first thing in my oneOf that had three different options.
And so it's interesting because the semantics are different between oneOf in JSON decode
and oneOf in Elm parser.
Because you have the idea of committing versus backtracking.
Which is like another layer to learn.
And I don't think that's necessarily a flaw of a particular pattern, but it just goes
to show that you can sort of follow these same patterns but have slightly
different semantics.
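To make the committing-versus-backtracking distinction concrete, here is a sketch with elm/parser (the grammar and names are invented for illustration): in elm/parser, once a oneOf branch consumes input, a later failure in that branch fails the whole oneOf, unless the branch is wrapped in backtrackable.

```elm
import Parser exposing ((|.), (|=), Parser, backtrackable, int, oneOf, succeed, symbol)


-- Parses "(1)" style input: a single int in parentheses.
parenInt : Parser Int
parenInt =
    succeed identity
        |. symbol "("
        |= int
        |. symbol ")"


-- Parses "(1,2)" style input: a pair of ints in parentheses.
point : Parser ( Int, Int )
point =
    succeed Tuple.pair
        |. symbol "("
        |= int
        |. symbol ","
        |= int
        |. symbol ")"


-- Both branches start by consuming "(", so on input "(1,2)" the
-- first branch consumes "(1" before failing at ",". Wrapping it
-- in backtrackable lets oneOf rewind and try the second branch;
-- without it, the whole oneOf would fail, unlike JSON decoding's
-- always-try-the-next-option oneOf.
value : Parser ( Int, Int )
value =
    oneOf
        [ backtrackable (Parser.map (\n -> ( n, n )) parenInt)
        , point
        ]
```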
Well, I think we've we've covered applicatives pretty well.
I'm sure there's more we could say.
But Joël, thanks again for joining us.
And if people want to get some more of your good knowledge, where can they follow you
and where can they learn more?
So they can follow me on Twitter, @joelquen, J-O-E-L-Q-U-E-N.
They can also go to the Thoughtbot blog.
That's a place I work at.
I have a lot of articles there talking about Elm and also other things.
So that would be slash blog slash authors slash joel dash quenneville.
That's probably easier to link than to try to spell it.
If you click around to some tags or search, you'll find it too.
There are a lot of great Elm blog posts.
They're definitely worth checking out.
A lot of them probably connect to the topic we talked to today because there's so many
sort of foundational aspects that overlap into this topic of applicatives.
And so there's a lot of articles I've written over time that connect to this.
You also gave a really great talk about random generators that might be relevant here for
people curious to learn more.
There's also a talk about random generators and map2 and how that works there.
I've given a talk about Maybe and how map2 works there.
So yeah, maybe this whole time I was just trying to get everyone to be excited about
map2.
It was the best function.
Well, it worked for me.
I'm amped up.
Well, thank you so much again.
And Jeroen, have a good one.
Have a good one.