
Elm's Universal Pattern

Guest Joël Quenneville shares his wisdom on transforming and mapping in Elm, and how it applies across many Elm data types.
June 7, 2021
#32

Metaphors

Some common metaphors for Elm's Universal Pattern (Applicative Pattern).

Examples

  • Random generators
  • Apply mapping functions to functions on vanilla values to keep things clean

Tips

Record constructors
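A quick sketch of the record constructors tip (the `User` type here is a made-up example): declaring a record alias in Elm also declares a curried constructor function, which is what slots so neatly into the mapN functions discussed in the episode.

```elm
-- A record alias also declares a constructor function for free:
-- User : String -> Int -> User
type alias User =
    { name : String
    , age : Int
    }

-- The curried constructor can be called like any other function:
alice : User
alice =
    User "Alice" 42
```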

Some more blog posts by Joël relating to Elm's Universal Pattern:

Joël's journey to building a parser combinator:

  • Nested cases - https://ellie-app.com/b9nGmZVp9Vca1
  • Extracted Result functions - https://ellie-app.com/b9qtqTf8zYda1
  • Introducing a Parser alias and map2 - https://ellie-app.com/b9MwZ3y4t8ra1
  • Re-implementing with elm/parser - https://ellie-app.com/b9NZhkTGdfya1
  • Getting Unstuck with Elm JSON Decoders - because mapping is universal, you can solve equivalent problems with the same pattern (described in this post)

Transcript

[00:00:00]
Hello, Jeroen.
[00:00:01]
Hello, Dillon.
[00:00:02]
And today, once again, we've got another guest joining us, Joël.
[00:00:07]
Welcome.
[00:00:08]
Thanks for joining.
[00:00:09]
Hi, everyone.
[00:00:10]
Thanks for having me on the show.
[00:00:11]
It's a pleasure.
[00:00:12]
I've been hoping to have you for some topic.
[00:00:15]
And finally, we got a nice topic to discuss with you that popped up.
[00:00:19]
And yeah, you're sort of, I think of you as somebody who explains things in a way that
[00:00:25]
a beginner hears your explanation and a light bulb goes off and an expert and an Elm veteran
[00:00:31]
hears your explanation and they say, oh, I never thought of it that way.
[00:00:35]
You've sort of got a great philosophical way of breaking down fundamentals, which I really
[00:00:39]
appreciate.
[00:00:40]
Well, thank you.
[00:00:41]
That's really the goal of mine when I speak or write or teach.
[00:00:47]
I usually like to be right on that boundary of something that's practical and teaching
[00:00:53]
how to do a task and solve a problem, but also venture a little bit in the philosophical
[00:00:58]
world of like, why is this a useful solution?
[00:01:02]
And is there a bigger concept at work?
[00:01:04]
Right.
[00:01:05]
So speaking of bigger concepts at work, what is that concept today?
[00:01:10]
You want to introduce it for us, Joël?
[00:01:12]
Elm's universal pattern.
[00:01:13]
What does that mean?
[00:01:14]
So I think I'll just open this by saying that I think my favorite function in Elm is probably
[00:01:20]
map2.
[00:01:21]
There's a bunch of different modules that implement this as a maybe map2, a JSON decode
[00:01:26]
map2, random map2, and all of those, they're probably my favorite.
[00:01:31]
If you were on a desert island and you could only bring one function, would it be map2?
[00:01:36]
It probably would be map2.
[00:01:38]
Technically, I should probably say andThen, because you can use it to implement map2 and
[00:01:44]
then you'd get like, it's like wishing for more wishes.
[00:01:47]
It's kind of cheating.
[00:01:50]
But yeah, if I'm only allowed to take one, it would be map2.
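Joël's "wishing for more wishes" point, as a sketch: `map2` can indeed be written in terms of `andThen` (plus `map`), shown here for `Maybe`.

```elm
-- map2 derived from andThen and map: unwrap the first Maybe,
-- then map the partially applied function over the second.
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 f maybeA maybeB =
    maybeA
        |> Maybe.andThen (\a -> Maybe.map (f a) maybeB)
```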
[00:01:53]
Nice.
[00:01:54]
Yeah.
[00:01:55]
I guess you would also want to bring some data with you because map2 without any data
[00:01:59]
doesn't have any value.
[00:02:03]
That's true.
[00:02:04]
There's some bad pun here that can be made about date palms or something like that, but
[00:02:08]
I can't make it.
[00:02:10]
If it was another language, you'd bring rescue or something like that.
[00:02:13]
I don't know.
[00:02:14]
Okay.
[00:02:15]
So Elm's universal pattern.
[00:02:18]
So what exactly are we talking about here when we're talking about a universal pattern?
[00:02:24]
A universal pattern for what?
[00:02:27]
What do you use this pattern to do?
[00:02:29]
Yes.
[00:02:30]
I think the universal part of it is just the idea that map2 exists for multiple different
[00:02:37]
types.
[00:02:38]
It's actually very common to see different types both in core and in third party libraries
[00:02:44]
implemented because it's such a useful function.
[00:02:46]
And at its most basic level, I think of it as a way to combine two things of the same
[00:02:54]
type.
[00:02:55]
So to be more concrete, if we talk about, say, maybe, I have two maybe values that I
[00:02:58]
would like to combine, and I have a two argument function I would like to combine them with,
[00:03:04]
map2 would be the way to do that.
[00:03:05]
So I think of it as a way to say two argument function, two maybes, how can I combine all
[00:03:10]
those things together?
[00:03:11]
And then there's more functions as a map3, a map4, map5, et cetera, if you want to scale
[00:03:18]
that pattern up to a three argument function with three maybes, or a four argument function
[00:03:23]
and four maybes, and so on.
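A quick sketch of what that looks like for `Maybe` (the values and functions here are made up for illustration):

```elm
greeting : Maybe String
greeting =
    -- (++) is a two-argument function; map2 combines two Maybes with it
    Maybe.map2 (++) (Just "Hello, ") (Just "world")
    -- Just "Hello, world"

rgb : Maybe ( Int, Int, Int )
rgb =
    -- Scaling the pattern up: a three-argument function and three Maybes
    Maybe.map3 (\r g b -> ( r, g, b )) (Just 255) (Just 0) Nothing
    -- Nothing, because one of the inputs is missing
```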
[00:03:24]
Yeah.
[00:03:25]
It's really interesting because in the Elm community, we don't tend to talk about these
[00:03:30]
things with these category theory terms because it can be confusing.
[00:03:37]
And often, you hear this like, all right, chapter 10 of this book on Haskell is when
[00:03:43]
you finally have gotten past the introductions of category theory concepts, and then you
[00:03:48]
write your hello world or something.
[00:03:50]
And in Elm, we go the opposite direction where if you get to these concepts at all, it's
[00:03:56]
after chapter 10.
[00:03:58]
So sometimes we go to the point that we don't want to put terms on these different categories
[00:04:04]
and concepts, but it is helpful to have some way to think about them somehow.
[00:04:09]
So sometimes I know people in that category theory world talk about things in boxes, that
[00:04:16]
it's going between these different worlds of you have a value that you can do something
[00:04:21]
with and then something that's you can't reach, like a random value.
[00:04:27]
If you have a random generator of type int, you can't go and touch that int and add a
[00:04:34]
number to it and multiply it.
[00:04:36]
So you need to apply something to it in the box.
[00:04:41]
And that's kind of what mapping is conceptually.
[00:04:43]
It's like reaching into the box with a function. So you have this operator that can multiply
[00:04:49]
or some function that can take the absolute value of a number.
[00:04:53]
And you want to apply that function to the value that's in that box, that random generator.
[00:04:58]
Yeah, I guess there's a few different mental models you could use to think about what mapping
[00:05:03]
functions do.
[00:05:05]
I'd mentioned one earlier, the idea of combining.
[00:05:07]
Another one that's particularly helpful with, say, types like Maybe or Result is the idea
[00:05:13]
of abstracting over this really common pattern that you might have, which might be unwrap
[00:05:18]
a value, apply a function and then rewrap.
[00:05:21]
So like a maybe if you want to do an operation on it, you might say, well, unwrap it if it's
[00:05:26]
present, do my operation, but because it might not be present, we need to return nothing.
[00:05:31]
Therefore, we also need to rewrap at the end.
[00:05:34]
And really the unwrap rewrap part is just a boilerplate.
[00:05:38]
We have to do this all the time.
[00:05:39]
And so a map function allows us to abstract over that pattern.
[00:05:44]
I think there's also maybe a sense where you can think of mapping functions as a way to
[00:05:51]
sort of translate functions into ones that operate on your sort of wrapper type.
[00:05:57]
So I have a two argument function and I want to turn it from a function that works on integers
[00:06:03]
to a function that works on maybe integers.
[00:06:05]
I can use map2 to convert it.
[00:06:08]
I think the fancy functional programming term there would be lifting where you say I have
[00:06:13]
this two argument function.
[00:06:15]
I will sort of lift it into the world of maybes.
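The lifting view in code, with a made-up `add` function: partially applying `map2` translates a plain two-argument function into one that works on `Maybe`s.

```elm
add : Int -> Int -> Int
add x y =
    x + y

-- Partially applying map2 "lifts" add into the world of Maybes:
maybeAdd : Maybe Int -> Maybe Int -> Maybe Int
maybeAdd =
    Maybe.map2 add
```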
[00:06:19]
So yeah, those are sort of three different ways of looking at the same concept.
[00:06:22]
And I think sometimes it can be really hard to get a good grasp on what this concept is.
[00:06:28]
And so having multiple mental models can be really helpful.
[00:06:32]
Particularly because some of them don't work quite as well for some types.
[00:06:37]
So you mentioned the idea of a box earlier.
[00:06:40]
And I think that's very concrete when looking at something like maybe because it's like,
[00:06:44]
yes, I have a number and it's wrapped inside of a Maybe and I can unwrap it.
[00:06:48]
That feels like a box.
[00:06:50]
Something like random isn't quite a box in that it's a future value that you might get.
[00:06:59]
Like a decoder is kind of similar.
[00:07:02]
It's a mailbox in a way.
[00:07:06]
Something will be delivered to it in the future and when it's delivered, you wrap it in something.
[00:07:11]
I find that Maybe is probably one of the easier types to use to understand some of
[00:07:17]
these concepts because it's really concrete.
[00:07:22]
Most Elm developers are familiar with how that type works.
[00:07:25]
And you can deconstruct it and you can pattern match on it, do a case expression and see
[00:07:30]
what's inside at any point.
[00:07:33]
And you can reimplement your own map, map two, map three, et cetera, pretty easily in a way
[00:07:39]
that you couldn't for say the random generator.
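Because `Maybe` is an ordinary custom type you can pattern match on, reimplementing `map2` yourself takes only a few lines (something you couldn't do for an opaque type like `Random.Generator`):

```elm
-- A hand-rolled map2 for Maybe, pattern matching on both inputs at once
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 f maybeA maybeB =
    case ( maybeA, maybeB ) of
        ( Just a, Just b ) ->
            Just (f a b)

        _ ->
            Nothing
```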
[00:07:43]
That's a good point.
[00:07:44]
Yeah, because the actual internals under the hood of the thing you're mapping can get a
[00:07:50]
lot more abstract than with a maybe.
[00:07:53]
As you say, it's to the point where it's tempting to just do a case statement all over the place
[00:08:00]
with maybes.
[00:08:02]
I find that one thing that I look out for sometimes is if a case statement is happening
[00:08:08]
too often and if functions are dealing with these wrapped types or these, you know, if
[00:08:14]
you have a function that's dealing with random generator types or maybe types rather than
[00:08:21]
ints or whatever underlying data type, would you say that that's generally a smell?
[00:08:26]
Like I often think if I have a lot of case statements around maybes or if I'm passing
[00:08:32]
these wrapped values, things tend to work really nicely when you have like functions
[00:08:37]
that deal with sort of vanilla values and then you apply these map functions to combine
[00:08:43]
them.
[00:08:44]
I would agree, yes.
[00:08:45]
In general, the way I tend to write code and Elm code in particular, I like to separate
[00:08:52]
what I might call branching code or deciding code from doing code.
[00:08:56]
So if I were to say case on a maybe, I would have one function that cases and branches
[00:09:03]
and then it would just call another function that's that sort of doing function.
[00:09:07]
And so even if I had the case expression, I would have a separate function that acts
[00:09:14]
on the inner integer or whatever it is, which is just generally, I think, easier to read
[00:09:20]
and understand.
[00:09:21]
That also makes it nice to refactor later if you realize, wait, this case expression
[00:09:25]
could be a map.
[00:09:26]
I don't have to separate the business logic inside.
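A sketch of the separation Joël describes, with hypothetical names: the branching function only decides which case applies, and delegates the actual work.

```elm
-- Branching code: decides which case we're in, then delegates
viewAge : Maybe Int -> String
viewAge maybeAge =
    case maybeAge of
        Just age ->
            formatAge age

        Nothing ->
            "age unknown"

-- Doing code: acts on the plain Int, easy to test and reuse
formatAge : Int -> String
formatAge age =
    String.fromInt age ++ " years old"
```

Because the `Just` branch is now just a function call, later refactoring the case expression into a `Maybe.map formatAge` is mechanical.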
[00:09:29]
Yeah, I find that like with if you have a remote data value, for example, often code
[00:09:35]
starts out wanting to do too much and doing like a case statement on the remote data type
[00:09:42]
that if it's successfully loaded or loading or you kind of render these different views
[00:09:48]
in line.
[00:09:49]
But it turns out to be a lot to wrap your head around to parse out the logic of the
[00:09:56]
rendering logic for the successful view and the error view and all these pieces in one
[00:10:00]
place.
[00:10:01]
And it's really this general concept, which actually you have a nice blog post on this,
[00:10:06]
I think, about staying at one level of abstraction.
[00:10:10]
And in a way, when you're kind of unwrapping and then dealing with the unwrapped thing,
[00:10:16]
by definition, you're dealing with two different levels of abstraction right there.
[00:10:20]
Yes.
[00:10:21]
And I think that separation of sort of deciding code versus doing code, those are sort of
[00:10:27]
two abstractions that you want to keep separated.
[00:10:30]
Well, let's talk about some examples of this universal pattern.
[00:10:35]
With these different examples, you were describing how different analogies might be more intuitive
[00:10:40]
for different ones.
[00:10:41]
It's also interesting, like in a way, there are almost different semantics for these.
[00:10:46]
Like for maybe, if you're combining maybes, the semantics are almost like and semantics,
[00:10:53]
where it almost like short circuits.
[00:10:55]
If any of the maybe values are nothing, then it just short circuits through and the whole
[00:11:00]
thing is nothing.
[00:11:01]
But if like, for example, with a JSON decode value, I guess it's a similar concept that
[00:11:07]
it almost short circuits with a JSON decoding error if there's an error anywhere.
[00:11:13]
But that error carries information, so it could carry information from any given decoder.
[00:11:19]
Right.
[00:11:20]
Similarly, I think you could say that with something like result, where the error has
[00:11:25]
more context about where it failed, why it failed, rather than maybe it's just we don't
[00:11:30]
have a value.
[00:11:32]
It's kind of interesting that it's not like, I guess it's not a very common pattern to
[00:11:36]
just take multiple errors and group them together.
[00:11:40]
But I suppose it could just as well be.
[00:11:43]
But I guess you can't really proceed because it assumes that it has the needed information
[00:11:49]
in order to proceed in certain contexts.
[00:11:51]
But like with a decoder, it's not going to attempt the other decoders; the first one
[00:11:56]
that fails, it short circuits.
[00:12:00]
Right.
[00:12:01]
There's no reason you couldn't accumulate errors.
[00:12:03]
I think later if we talk about parsers, that might be something that comes up.
[00:12:09]
Interesting.
[00:12:10]
Okay, so let's talk about where some places this pattern occurs.
[00:12:14]
So we've touched on maybe Elm JSON, random generators.
[00:12:18]
It might be worth talking a little bit more about Elm JSON because I think that's maybe
[00:12:22]
one of the places where it's particularly useful.
[00:12:25]
Sounds great.
[00:12:26]
I think for me, the metaphor or the mental model that works best here is the idea of
[00:12:33]
combining.
[00:12:34]
So when we're parsing JSON, typically we're pointing to a particular path in the JSON
[00:12:41]
tree.
[00:12:42]
We're saying in this field, decode this value as a string or integer or something like that.
[00:12:48]
But usually we want to read more than one value out of the JSON.
[00:12:52]
So we want to say at this field, read this integer.
[00:12:54]
At this other field, read this string.
[00:12:57]
At this third field, read a Boolean.
[00:12:59]
And then give me all three values back and let me combine them into some custom Elm value.
[00:13:05]
And so we can write individual decoders for each of the pieces of data, but then we need
[00:13:10]
a way to combine all three together.
[00:13:14]
And that's where the mapping functions come in.
[00:13:16]
If we're combining three pieces, it would be a map three.
[00:13:20]
And yeah, for me, this mental model thinking of them as combining functions, I think is
[00:13:27]
most apt when thinking about decoders.
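A sketch of that combining, with hypothetical field names: three small decoders, each pointing at one field, combined into a custom Elm value with `map3` and a record constructor.

```elm
import Json.Decode as Decode exposing (Decoder)

type alias User =
    { id : Int
    , name : String
    , isAdmin : Bool
    }

-- Three individual decoders, combined via the User constructor:
userDecoder : Decoder User
userDecoder =
    Decode.map3 User
        (Decode.field "id" Decode.int)
        (Decode.field "name" Decode.string)
        (Decode.field "is_admin" Decode.bool)
```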
[00:13:30]
Scott Wlaschin has this concept of, like, railway oriented programming, I think he calls it.
[00:13:36]
And he talks about this pattern for like a decoder or mapping things together that you
[00:13:42]
have these sort of split tracks.
[00:13:44]
If you picture a fork in the railroad where you can split off between these two different
[00:13:50]
directions and one of the directions is sort of an error direction and the other one is
[00:13:55]
like a green success direction.
[00:13:58]
So you map together a JSON decoder that picks off five different fields from a user and
[00:14:05]
it expects them to be non null and of these specific types.
[00:14:09]
And as it picks them off, it's going along the green railroad.
[00:14:13]
And if any of those is null unexpectedly, now it can take that other track and go to
[00:14:18]
the red track.
[00:14:19]
And suddenly, and you can imagine each time you apply a map, it's branching off and there's
[00:14:26]
another sort of a new green track for it to branch off of.
[00:14:31]
But it can always go down that red track.
[00:14:34]
And the red track, it's just following along a continual path.
[00:14:37]
So instead of applying more data and combining it together, you just get that error straight
[00:14:43]
through that short circuited error data.
[00:14:45]
I love the visual metaphor that he uses.
[00:14:49]
You should definitely link to the talk because it's worth looking at it with the slides.
[00:14:54]
Yes, I agree.
[00:14:56]
It takes a concept that's a little bit arcane and sort of pulls in a lot of different ideas
[00:15:01]
from functional programming, not just this mapping idea and strips away the really sort
[00:15:07]
of academic language and really puts it in a metaphor that's easy to follow.
[00:15:13]
And so I think we're sort of so entrenched in this Elm world here that it's easy to forget.
[00:15:22]
It's easy to take these things for granted.
[00:15:24]
But if we sort of step back from it and talk about how would we deal with these things
[00:15:29]
otherwise, dealing with throwing exceptions.
[00:15:32]
And it's actually really wonderful dealing with data in this sort of composable way because
[00:15:39]
you can think about something as a unit and you can combine these things.
[00:15:44]
And so I mean, I'm not sure if it's just a reminder to appreciate what we've got or if
[00:15:49]
there are implications for how we design our code there.
[00:15:52]
But I think that's a good thing to keep in mind.
[00:15:54]
Yeah, how would you do that?
[00:15:56]
Would you go with plenty of case expressions?
[00:15:58]
There's a sense maybe where in say a more dynamic language, a bunch of maps in Elm
[00:16:05]
might be more or less equivalent to some kind of optional chaining.
[00:16:09]
So like Ruby has what they call the lonely operator.
[00:16:11]
JavaScript has the question mark where you do this sort of optional chaining.
[00:16:16]
Null coalescing, something like that.
[00:16:18]
I think that's a separate concept.
[00:16:21]
Possible.
[00:16:22]
Yeah.
[00:16:23]
Right, right.
[00:16:24]
Yeah.
[00:16:25]
This new JavaScript question mark dot operator and you see this in different languages.
[00:16:29]
Yeah.
[00:16:30]
And I noticed this, the sort of, I guess, precursor before that operator in JavaScript,
[00:16:36]
it would be, you know, user && user dot.
[00:16:40]
Right, which gets really clunky if you have a long chain because then you have to check
[00:16:46]
every step along the way.
[00:16:48]
Right.
[00:16:49]
And so one of the things about that pattern that I've noticed is people say like, wow,
[00:16:54]
this like question mark dot operator in JavaScript makes code so much nicer, which it certainly
[00:17:00]
cleans things up.
[00:17:01]
But then what if you're not dealing with something that may be null?
[00:17:06]
What if you're dealing with something that may represent some kind of error or how do
[00:17:10]
you change different types of things?
[00:17:12]
So, you know, Elm doesn't Elm doesn't have, you know, these sort of type classes for these
[00:17:19]
different types of things where you use a single operator to do it.
[00:17:22]
But it is so baked into the core libraries and the ecosystem and the ethos of Elm that
[00:17:29]
you sort of apply these patterns and also the language itself because it doesn't have
[00:17:33]
sort of exceptions that can just bubble up somewhere and be caught.
[00:17:38]
And so you've got to sort of flow data through and you've got to prove to the compiler before
[00:17:44]
you can just unwrap values and that sort of thing.
[00:17:46]
So it's it's sort of baked into the language in a way.
[00:17:50]
And things do compose together so nicely because this is not just taking five maybes and mapping
[00:17:56]
them together, but, you know, then chaining that along and turning that maybe value that
[00:18:03]
you derive into a result type because you need to combine it with another result type from
[00:18:08]
another place.
[00:18:09]
And then you combine those to build some value.
[00:18:12]
And that at that point, things really compose together in a way that it feels totally different
[00:18:19]
than just using question mark dot operators in JavaScript.
[00:18:23]
Things really compose with all these other libraries and chains.
[00:18:27]
There's also, I think, the really key distinction is that in a language like JavaScript, things
[00:18:35]
are nullable by default unless you check them.
[00:18:37]
And then you can have confidence that they're not null, whereas Elm values are guaranteed
[00:18:42]
present unless they're explicitly wrapped in maybe.
[00:18:46]
So we can sort of trust by default and then we sort of mark the areas that are untrustworthy
[00:18:51]
and the compiler will force a check.
[00:18:53]
That's a great point.
[00:18:54]
And this pattern in a way, it's like intimately tied to this quality of the Elm compiler and
[00:19:00]
the Elm language that you're sort of deriving data of different types as you apply functions
[00:19:06]
to it.
[00:19:07]
So, you know, if you have a pipeline and you do, you know, you pipe it to List.singleton.
[00:19:15]
Now you take a thing that was not a list and you make it a list.
[00:19:18]
And then you, you know, combine that together with something else.
[00:19:22]
So this is one of the things with this sort of applicative pattern.
[00:19:27]
We haven't used that term yet, but you know, you have a pipeline and you're applying these
[00:19:31]
functions and it's sort of modifying the type as you go.
[00:19:35]
So with list based APIs, which you also find in Elm, like, you know, Elm HTML, you create
[00:19:42]
a div and you give attributes and children.
[00:19:45]
You're not changing the type as you add HTML attributes to that list in the div.
[00:19:52]
You add a class, you add an ID, but when you're doing a, you know, Json.Decode.succeed User,
[00:20:02]
and then you're piping that to andMap or some, you know, pipeline operator, you're,
[00:20:08]
you're modifying that value from, from the starting point.
[00:20:11]
And you start with this constructor that takes five arguments, and then you pipe it through
[00:20:16]
with applying five different times.
[00:20:19]
And it goes from a function that takes five arguments to a function that takes four arguments
[00:20:23]
to a function that takes three arguments.
[00:20:24]
And in that way, the applicative pattern is really nice with Elm libraries, but because
[00:20:29]
it allows you to sort of transform the types based on what you're applying.
[00:20:35]
If you pass in a decoder that takes an int, a decoder that takes a maybe string, it's
[00:20:40]
going to expect that to be matching up with the constructor you started with and applying
[00:20:45]
those.
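A sketch of that shrinking-function idea. `elm/json` doesn't ship an `andMap` itself, but one can be defined with `map2` (the pipeline libraries wrap the same trick); each pipe then applies one argument of the constructor, and the type comments show the function shrinking.

```elm
import Json.Decode as Decode exposing (Decoder)

-- Not part of elm/json itself, but definable with map2:
andMap : Decoder a -> Decoder (a -> b) -> Decoder b
andMap =
    Decode.map2 (|>)

type alias Point =
    { x : Float
    , y : Float
    }

pointDecoder : Decoder Point
pointDecoder =
    -- succeed Point : Decoder (Float -> Float -> Point)
    Decode.succeed Point
        -- after one application : Decoder (Float -> Point)
        |> andMap (Decode.field "x" Decode.float)
        -- after two applications : Decoder Point
        |> andMap (Decode.field "y" Decode.float)
```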
[00:20:46]
Well, and one of the things that you're saying here that I think you're hinting at is the
[00:20:50]
idea that in functional programming, the entire way we structure programs is as a series of
[00:20:56]
data transformations.
[00:20:57]
So we start with one or more input values, and we slowly convert them into it could be
[00:21:04]
the same type, it could be a different type, but we're slowly converting them until we
[00:21:08]
eventually get the output that we want.
[00:21:10]
And that's how we structure programs in functional programming.
[00:21:14]
Right?
[00:21:15]
Yeah.
[00:21:16]
And it's just like when you're writing code, it's like this little puzzle that you're like,
[00:21:19]
I know I need a value of this type, and how do I build it?
[00:21:24]
I mean, we just talked about this recently in our debugging episode, Jeroen, of this
[00:21:28]
process of debugging types when the types aren't quite fitting together and how you
[00:21:35]
figure out what type to put in the type hole.
[00:21:38]
Sometimes it's really helpful to just break out these little puzzles and say, oh, this,
[00:21:42]
I know I need a value of this type.
[00:21:44]
Let me pull this out into a let and break it out into a sub puzzle, give it a type annotation.
[00:21:49]
The type annotation proves that that type would solve the puzzle in that chain of applications.
[00:21:56]
And you don't yet have a type of that value that you promised with your annotation.
[00:22:01]
So now that's your next puzzle to solve.
[00:22:03]
Yeah, there's this, you know, not only is functional programming about transforming data,
[00:22:09]
but another key concept in at least structuring functional programs is breaking down larger
[00:22:15]
transformations into smaller steps.
[00:22:18]
You might call that decomposition, each of which are smaller transforms, some of which
[00:22:24]
might be reusable.
[00:22:25]
And that's where we get into all the fun deeper functional programming concepts are generally
[00:22:30]
just patterns that we can use to do that, to break down a larger transformation into
[00:22:37]
smaller pieces.
[00:22:38]
Right.
[00:22:39]
Yeah, I wrote this blog post, combinators, inverting top down transforms, where I kind
[00:22:43]
of talked about, like, the difference of thinking about a problem as these sort of composable
[00:22:51]
sub problems or decomposable, I don't know.
[00:22:55]
These little breaking down into sub problems where you say, like, I know how to decode
[00:23:00]
a user, but I mean, where am I going to decode the user from?
[00:23:04]
What data is it going to be hanging off of?
[00:23:06]
Is it going to be nested under a bunch of fields?
[00:23:08]
Is it going to be continuing off of something?
[00:23:12]
Or am I going to be decoding it based on if the role is admin or whatever it may be, but
[00:23:19]
you can think about these sort of parts of it independently, and then compose them together.
[00:23:25]
Whereas that's like this sort of bottom up way of thinking about things, whereas this
[00:23:29]
top down way is just sort of reaching in and grabbing data from a JSON blob, which in my
[00:23:35]
experience is what tends to happen when I've worked in JavaScript code bases is it's so
[00:23:41]
easy to just pull in data from a big JSON blob that and then you've got this big JSON
[00:23:49]
blob, you pass it through a transformation function that changes a bunch of data, but
[00:23:54]
you're dealing with this like monolithic object, and it's really difficult to think about.
[00:23:58]
But with these sort of combinators, it's just you can think about this one piece, but then
[00:24:04]
you can take that piece and this other piece and build them up into one thing.
[00:24:07]
So this sort of like universal pattern, I'm not sure if it's like inseparable from this
[00:24:14]
concept of a combinator, but it seems like there's a link there.
[00:24:17]
So we've been using the term universal pattern because I use that in a as a title of a blog
[00:24:22]
post.
[00:24:23]
In that blog post, I was talking about the map2, map3, map4, and so on functions.
[00:24:29]
Those functions are combinators because as we sort of talked earlier, one of the mental
[00:24:34]
models for what those functions do is they give us a way to combine values together.
[00:24:41]
And so it might allow us to combine three maybes or I mentioned also earlier that it
[00:24:47]
was a really helpful mental model for myself for thinking about JSON decoding.
[00:24:51]
Say I can decode three different pieces of data and I
[00:24:55]
want to combine them all into one more complex piece.
[00:25:00]
And so now I need a combinator.
[00:25:02]
And that's really when we look at a library like the JSON decode library.
[00:25:06]
At its most basic level, it really only provides us with two types of things, some sort of
[00:25:12]
primitive decoders like int and string, and then a few combining functions.
[00:25:19]
And that's basically it.
[00:25:21]
And we can use those building blocks then to decode anything we want into any Elm structure
[00:25:27]
that we want.
[00:25:28]
Because a really key thing about JSON decoding in Elm that I think is not obvious to people
[00:25:33]
who are new to the language is that your Elm structure and your JSON structure don't need
[00:25:39]
to be mirrors of each other.
[00:25:40]
And in fact, you probably don't want your Elm structure to mirror the JSON.
[00:25:45]
So I typically will design my Elm structure first to match my needs for my program to
[00:25:52]
eliminate impossible states and all that good stuff.
[00:25:55]
And then say, OK, given this Elm type and given this JSON that I have, how do I bridge
[00:26:00]
the gap?
[00:26:01]
And that's where I will then pull out all the JSON decoder tricks to say, how can I
[00:26:05]
translate between the JSON I have and the Elm structure I want?
[00:26:09]
I'm curious about one thing.
[00:26:11]
What do you think of the names, map, map2, map3?
[00:26:15]
Like for me, map is about transforming one thing to another.
[00:26:19]
And map2 is, as you say, combining.
[00:26:22]
So would it make more sense to call it combine2, combine3?
[00:26:26]
Or is that even what you have in your mind every time you talk about map2, map3?
[00:26:31]
That's a good question.
[00:26:33]
There's a sense of the base map.
[00:26:35]
There's a sense in which you could call it map1.
[00:26:38]
It's just sort of continuation of that pattern where you can take a one argument function
[00:26:42]
and one maybe, and I guess you're only combining one.
[00:26:47]
You're combining one maybe and just applying a function to it.
[00:26:50]
I do notice that I often start as the first tiny step if I'm doing a refactoring to a
[00:26:57]
different data type.
[00:26:58]
Let's say I've got a value that I'm just decoding a user from some HTTP response, and that's
[00:27:07]
stored in my model.
[00:27:08]
But now I actually want it to be a bunch of metadata, and user is one of those pieces
[00:27:13]
of metadata.
[00:27:14]
And I've got some other bits of metadata in there.
[00:27:17]
And so the first thing I'll do is I'll wrap it in a Json.Decode.map Metadata, which has
[00:27:25]
a single field user of type user.
[00:27:29]
So now I've done this map1.
[00:27:30]
I've done Json.Decode.map with a record.
[00:27:33]
Now I'm wrapping it in a record.
[00:27:35]
And that's a preparatory step for the next step, which is it's going to be Json.Decode.map2,
[00:27:41]
and I'm going to add another field to that metadata field.
[00:27:45]
So in a way, it does feel like map1, even though you can use it for just sort of transforming
[00:27:51]
things.
[00:27:52]
There is an elegance to the fact that you can change a map to a map2.
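Joël's refactoring step in code, with hypothetical types: first the "map1" move, wrapping the existing decoder in a one-field record, which sets up the later bump to `map2`.

```elm
import Json.Decode as Decode exposing (Decoder)

type alias User =
    { name : String }

userDecoder : Decoder User
userDecoder =
    Decode.map User (Decode.field "name" Decode.string)

type alias Metadata =
    { user : User }

-- Step 1: the "map1" move, wrapping the existing decoder in a record
metadataDecoder : Decoder Metadata
metadataDecoder =
    Decode.map Metadata (Decode.field "user" userDecoder)

-- Step 2, later: grow the record and bump map to map2, e.g.
-- Decode.map2 Metadata
--     (Decode.field "user" userDecoder)
--     (Decode.field "timestamp" Decode.int)
```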
[00:27:57]
Yeah, it really feels like a continuation of this pattern.
[00:28:00]
And there's also the sense in which it is a transformation, it's just a transformation with two inputs.
[00:28:06]
So you might have, say, two integers coming in, but a string coming out.
[00:28:11]
So it is still a transformation, but it's less of a transform one item into another,
[00:28:16]
because now you have multiple inputs.
[00:28:17]
Yeah, if you think about it with maybe, maybe.map is a very natural, like, imagine when maybe
[00:28:25]
is created and we have this maybe type and we're doing case statements all over the place
[00:28:29]
and we say, case just, take that value, I want to apply some function to it.
[00:28:34]
And we're like, this is really inconvenient.
[00:28:35]
Wouldn't it be nice if I could just pass in the function I wanted to apply when I wanted
[00:28:40]
to turn this string to uppercase, I could just pass in a string to upper function.
[00:28:47]
And so we create a map function.
[00:28:49]
And then we say, well, I actually, I want to combine two maybe values.
[00:28:54]
And then we say, okay, well, I mean, how would I combine two maybe values?
[00:28:58]
Well, if either of them are nothing, then I can't combine it into a single maybe value.
[00:29:03]
So let's just, you know, turn it into nothing if any of them are nothing.
[00:29:09]
And otherwise, we'll pass in those two just values that we have to the function that takes
[00:29:15]
two values.
[00:29:16]
Specifically, if you're trying to combine them with a two argument function.
[00:29:20]
Yes.
[00:29:21]
Because it's a two argument function, you need both values to be present.
[00:29:25]
Yes.
[00:29:26]
So if they're present, apply the two argument function to the two values.
[00:29:29]
Otherwise just return nothing.
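That description is essentially the whole implementation. A hand-rolled version, equivalent to what Maybe.map2 in elm/core does:

```elm
-- If both values are present, apply the two-argument function;
-- otherwise return Nothing.
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 f maybeA maybeB =
    case ( maybeA, maybeB ) of
        ( Just a, Just b ) ->
            Just (f a b)

        _ ->
            Nothing

-- map2 (+) (Just 1) (Just 2) == Just 3
-- map2 (+) (Just 1) Nothing  == Nothing
```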
[00:29:31]
I think your question, Jeroen, is really interesting.
[00:29:35]
If we look at what Haskell has done, they've chosen to not name this function map2, map3,
[00:29:42]
map4.
[00:29:43]
They've called it liftA2, liftA3, liftA4.
[00:29:48]
And they've sort of gone with this other metaphor that I talked about, this idea of lifting.
[00:29:54]
You could think of it as translating functions into the world of some other type.
[00:30:01]
So you could transform the add function that works on integers into one that works
[00:30:07]
on Maybe integers.
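Concretely, "lifting" here is just partially applying map2:

```elm
add : Int -> Int -> Int
add x y =
    x + y

-- Lifting add into the world of Maybe:
maybeAdd : Maybe Int -> Maybe Int -> Maybe Int
maybeAdd =
    Maybe.map2 add

-- maybeAdd (Just 1) (Just 2) == Just 3
-- maybeAdd Nothing (Just 2)  == Nothing
```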
[00:30:09]
So liftA2 would be map2.
[00:30:12]
Correct.
[00:30:13]
Okay.
[00:30:14]
Is there a liftB2?
[00:30:15]
So liftA, the A here stands for applicative, which is a term I think that we've sort of
[00:30:21]
been dancing around a little bit.
[00:30:25]
It's sort of the fancy functional term, but we haven't really gotten into it and defined
[00:30:29]
it.
[00:30:30]
All right.
[00:30:31]
Let's do it.
[00:30:32]
Maybe we should do that.
[00:30:33]
At its core, really, what you need for something to be considered applicative is you need some
[00:30:38]
kind of constructor.
[00:30:39]
And then you need one of two things.
[00:30:41]
You either need map2 or you need what in Elm we often call andMap, which is sort of a pipelineable
[00:30:48]
version of map2.
[00:30:51]
And then?
[00:30:52]
AndMap.
[00:30:53]
It doesn't ship in the core libraries.
[00:30:55]
That would be similar to the JSON decode pipeline required function, right?
[00:31:01]
Yes.
[00:31:02]
Yes.
[00:31:03]
So the JSON decode pipeline required function is a combination of what you might call andMap
[00:31:07]
and then also allowing you to plug in the field name for convenience.
[00:31:13]
So given either of those, you can describe a type as being applicative.
[00:31:18]
So because Maybe has a constructor, which is Just, and it has a map2, we can describe
[00:31:24]
it as applicative.
[00:31:26]
And the interesting thing with map2 and andMap is there are sort of two different ways of
[00:31:29]
expressing the same thing.
[00:31:31]
And so given either of those, we can implement the other.
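That equivalence can be made concrete for Maybe. Given map2 you get andMap, and given andMap (plus the constructor) you get map2 back:

```elm
-- Given map2, define andMap (pipelineable argument order):
andMap : Maybe a -> Maybe (a -> b) -> Maybe b
andMap =
    Maybe.map2 (|>)

-- Given andMap and the constructor Just, recover map2:
map2 : (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
map2 f maybeA maybeB =
    Just f
        |> andMap maybeA
        |> andMap maybeB
```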
[00:31:33]
I feel like this is like MacGyver skills for functional programming.
[00:31:38]
All right.
[00:31:40]
I need a stick of gum, a twig, or if you don't have that, I need...
[00:31:46]
But if you're missing one of those, you have nothing.
[00:31:53]
So yeah, I find that the map2 is much more concrete, more easy to understand as someone
[00:32:00]
who's exploring these ideas.
[00:32:03]
And from my own personal journey into some of these more philosophical concepts, it is
[00:32:07]
much easier to understand with something like map2.
[00:32:10]
Partly because you can deconstruct it more easily.
[00:32:13]
You can implement it yourself with a type like maybe and grasp pretty easily what it
[00:32:17]
does.
[00:32:18]
We mentioned earlier, right?
[00:32:19]
A map2 for Maybe is just checking: are both values present?
[00:32:22]
If so, apply this function; otherwise return Nothing.
[00:32:26]
And andMap is a little bit more mind-bending because it plays with sort of partial application
[00:32:31]
and some pipelines.
[00:32:34]
And there are more concepts you need to understand in order to work with it.
[00:32:39]
Yeah, it's the kind of thing that you sort of copy paste from the docs for a library
[00:32:46]
to build up a pipeline.
[00:32:49]
But you don't always fully think about exactly what it's doing under the hood because it
[00:32:54]
would hurt your brain a little bit too much.
[00:32:56]
So you have to like...
[00:32:57]
I think that's why to a certain extent it's helpful to have some high level concepts of
[00:33:02]
how to think about these things because you don't always have to think about the low level
[00:33:06]
things.
[00:33:07]
The high level is, well, I want to sort of apply a high level combination of these things.
[00:33:14]
And so you sort of associate andMap with that concept and you don't need to understand
[00:33:21]
all the internals.
[00:33:22]
I think one thing that maybe we should mention too, something that can trip people up is
[00:33:27]
the record constructors feel like this magical thing.
[00:33:32]
So to sort of, I like to demystify that by just explaining exactly what it is.
[00:33:38]
So if you do Maybe.map2, let's say you've got like a first name and a last name and
[00:33:45]
you expect them to both be there, but you've got some user input fields.
[00:33:49]
So you've got Maybe values.
[00:33:50]
So you could pass in, you could have type alias user equals first string, last string,
[00:33:57]
and you could pass in that User constructor, capital-U User, to Maybe.map2 User and then
[00:34:03]
your Maybe first, Maybe last.
[00:34:06]
And so what is that doing?
[00:34:07]
Well, it would be equivalent to passing a function that takes
[00:27:07]
a first and last, which are both String, and then builds a record with a field called first
[00:34:20]
and a field called last.
[00:34:21]
But what happens is this is just a part of the Elm language that when you define a type
[00:34:27]
alias of a record type specifically, it doesn't happen if you define a type alias of
[00:34:32]
int, it doesn't happen if you define a type alias of a custom type, only for specifically
[00:34:37]
a record type alias.
[00:34:39]
It will give you a constructor function that takes the arguments of the type of each of
[00:34:44]
the fields in that exact order and returns a record with exactly those fields and types.
[00:34:50]
So that's an important thing to understand.
[00:34:52]
And so I think it's a good exercise to just write that Maybe.map2 with an anonymous
[00:34:58]
function or a named function, doesn't matter which you prefer, but write Maybe.map2 with an anonymous
[00:35:04]
function that takes first and last as arguments and then returns first equals first, last
[00:35:11]
equals last, and convince yourself: oh, the constructor I get from type alias User equals
[00:35:17]
first String, last String is doing exactly the same thing, and passing it instead of that anonymous
[00:35:23]
function is exactly equivalent.
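That exercise, written out (the names here are the hypothetical first/last example from the conversation):

```elm
type alias User =
    { first : String, last : String }

-- The alias gives you a constructor: User : String -> String -> User

viaConstructor : Maybe User
viaConstructor =
    Maybe.map2 User (Just "Joël") (Just "Quenneville")

-- ...which is exactly equivalent to spelling out the anonymous function:
viaAnonymous : Maybe User
viaAnonymous =
    Maybe.map2
        (\first last -> { first = first, last = last })
        (Just "Joël")
        (Just "Quenneville")
```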
[00:35:25]
So that's I think that's a really good thing to demystify because it feels like magic otherwise.
[00:35:30]
I think this confusion is maybe the fault of a lot of the tutorials that are out there.
[00:35:35]
And if you read Elm code in the wild, you will see people use that constructor
[00:35:40]
because that's kind of what it's there for.
[00:35:42]
But if you're just learning, say, JSON decoders and you see something that says Decode.map2
[00:35:48]
and then capital-U User, and you see that type alias defined above, a very reasonable
[00:35:54]
assumption might be: oh, I'm giving it that User type, and map2 is doing
[00:36:00]
some sort of reflection or metaprogramming or something like that based off of that type
[00:36:06]
and knows to just magically construct a user out of the fields that I give it.
[00:36:11]
And really, map2 doesn't want to be given a type.
[00:36:14]
It wants to be given a function.
[00:36:17]
So what I've started doing in my own writing, even when giving examples on the
[00:36:22]
Elm Slack, is trying to always show the anonymous function.
[00:36:27]
It's a little bit more verbose.
[00:36:29]
And it's often not necessarily the concept I'm trying to teach.
[00:36:31]
But I think it's useful to show it there just to avoid that misconception.
[00:36:35]
So that it's very clear.
[00:36:36]
Oh, map2, map3, whatever, takes a function, not a type, as its first argument.
[00:36:42]
Right.
[00:36:43]
And that avoids some misconceptions.
[00:36:45]
Right.
[00:36:46]
And then you say, by the way, there's a shorthand for this function.
[00:36:49]
Yes.
[00:36:50]
Did you know that when you define a type alias for a record, you get a constructor function
[00:36:56]
that has the same name as your type, and you could then clean up or make your decoder
[00:37:04]
a little bit terser by using that.
[00:37:06]
I think one reason why it feels like something weird is because we never use that function
[00:37:12]
anywhere other than in an applicative.
[00:37:15]
You never, or you rarely, see something like User "Jeroen" "Engels" written out.
[00:37:20]
You always see a record with first and last specified.
[00:37:26]
Or I've asked around and people really don't like using the record alias name as a function
[00:37:33]
outside of an applicative.
[00:37:35]
Right.
[00:37:36]
Because you can get the names of the fields mixed up.
[00:37:40]
Because if you change the name, if you have type alias User equals first String, last
[00:37:46]
String, and then you create a user by saying user equals capital-U User, and then
[00:37:52]
a first-name string and then a last-name string.
[00:37:55]
Now if you, you know, you probably wouldn't, but if you were to change the order of first
[00:37:59]
and last in the record alias, now you're passing the strings in the wrong slots and you don't get a compiler error.
[00:38:05]
And basically you've created a layer of indirection between what the field name is and the value
[00:38:12]
that's being passed to it.
[00:38:13]
Whereas if you just said user equals literal record first equals string, last equals string,
[00:38:20]
there's no getting it mixed up.
[00:38:21]
And so you can avoid that confusion.
[00:38:24]
For myself, I think it's less about being afraid of changing the field names, because I
[00:38:28]
pretty much never change the order.
[00:38:31]
But it's more just the readability.
[00:38:33]
If you see the user constructor and then two strings, it's not immediately obvious which
[00:38:38]
one is the first, which one is the last.
[00:38:40]
And so it's really nice for readability to have the field names as labels.
[00:38:45]
That's usually less important if you're say doing a decoder, because when you look at
[00:38:51]
all the little individual field decoders below it, you'll see the JSON
[00:38:56]
field names.
[00:38:57]
And generally you can tell from the JSON field names what they are.
[00:39:00]
Yeah.
[00:39:01]
Or of the decoders.
[00:39:02]
Right.
[00:39:03]
So it's pretty obvious looking at the decoder what's the first name and what's the last name,
[00:39:06]
even though the names might not be exactly the same in the
[00:39:10]
JSON.
[00:39:11]
And that's one of the really nice things about decoders.
[00:39:13]
The JSON doesn't need to match, but I could probably tell what they are.
[00:39:20]
And so it feels a little bit redundant to copy that into an anonymous function.
[00:39:26]
And it also gets really long and verbose for larger records.
[00:39:31]
You have 10, 20 keys in the record, then that can get really verbose, which I guess
[00:39:37]
maybe leads us really nicely into andMap, a version of the sort of applicative pattern.
[00:39:45]
We've talked a lot about map2, map3, and so on, but those are going to be finite.
[00:39:51]
Every Elm library you use is going to have, you know, map up to map8 or however far they
[00:39:56]
want to go.
[00:39:57]
And eventually it's going to stop.
[00:39:58]
I've yet to run into that limit for something like Maybe.
[00:40:02]
I don't think I'm combining that many optional values, but I do run into this all the time
[00:40:07]
on JSON decode because it's not uncommon to say, I want to read 20 fields out of a JSON
[00:40:12]
and combine them into some Elm object.
[00:40:16]
And so that's where this sort of pipeline approach becomes really helpful because now
[00:40:23]
you don't rely just on something finite, because the beauty of andMap, which is sort of,
[00:40:30]
I don't know if you'd want to say, not the inverse of map2, but the
[00:40:34]
corollary to map2.
[00:40:35]
There's a fancy term that we can use for this, but it's another formulation of what map2
[00:40:41]
does, but you can sort of chain it infinitely.
[00:40:44]
So if you want the equivalent of map100, you could start with a hundred-argument
[00:40:51]
function and then just 100 pipes to andMap.
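A short sketch of that pipeline style for JSON decoding (andMap isn't in elm/json itself, but it falls out of map2; the field names here are made up):

```elm
import Json.Decode as Decode exposing (Decoder)

andMap : Decoder a -> Decoder (a -> b) -> Decoder b
andMap =
    Decode.map2 (|>)

type alias Person =
    { name : String, age : Int }

-- Pipe on as many fields as the record needs;
-- there is no mapN ceiling to run into.
personDecoder : Decoder Person
personDecoder =
    Decode.succeed Person
        |> andMap (Decode.field "name" Decode.string)
        |> andMap (Decode.field "age" Decode.int)
```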
[00:40:55]
Yeah, right.
[00:40:56]
And so to a certain extent it's like a matter of personal taste, but why don't we talk about
[00:41:02]
some of the more objective pros and cons between the mapN functions, map2, map3, map4,
[00:41:10]
versus andMap.
[00:41:12]
So I think the big one, which we've talked about already, is you will run out of mapN
[00:41:19]
at some point.
[00:41:20]
Although you can always, if you need a map17, implement it in terms of
[00:41:29]
andMap, because it's equivalent.
[00:41:31]
So if you find it easier to read your code, you could just implement your own map17 using
[00:41:37]
andMap and then use the map17 in your code.
[00:41:40]
If that's a style that you prefer, you could use some code generation to create mapN all
[00:41:46]
the way up to map100.
[00:41:46]
That would actually be pretty easy.
[00:41:49]
Yeah, it would.
[00:41:50]
Would it be a good idea?
[00:41:52]
Who can say?
[00:41:53]
Could you generate a map17 from map2 also, or map3?
[00:41:58]
Well, you can generate all of these from map2.
[00:42:01]
Yeah.
[00:42:02]
Map2 is the one.
[00:42:03]
If you have map2, then you can build all of these things.
[00:42:06]
I've like, I've built so many libraries that have this at this point.
[00:42:10]
Basically, like, I started by going to the NoRedInk Json.Decode.Pipeline library
[00:42:15]
and looking at the source code and being like, how do they implement these things?
[00:42:18]
And like, how did the types line up?
[00:42:21]
And then you see, like, there's, you know, it's like a Decoder of (a -> something).
[00:42:29]
The signature is mind bending.
[00:42:31]
It still hurts my brain to think about it.
[00:42:33]
And I've implemented it in libraries so many times now.
[00:42:37]
You get decoders of functions, right?
[00:42:40]
Something like that?
[00:42:41]
Yes.
[00:42:42]
Yeah.
[00:42:43]
And it's like applying one of the values as you go through in the pipeline.
[00:42:46]
You have a function wrapped in a decoder.
[00:42:49]
You have a, like a concrete value wrapped in a decoder and you're saying apply that
[00:42:54]
value as an argument to that function.
[00:42:57]
Yeah.
[00:42:58]
Like the function might be a 10 argument function.
[00:43:00]
So you only apply one argument to it.
[00:43:02]
And then you get back a new decoder that's another, that's now a nine argument function
[00:43:07]
decoder, which you can then apply to another concrete value decoder to apply argument nine.
[00:43:15]
And now you get back an eight argument decoder and so on.
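A toy example of that argument-peeling, using Decode.map2 (|>) as the andMap step so the types are visible at each stage (decoding Int twice from the same value, purely to keep the sketch small):

```elm
import Json.Decode as Decode exposing (Decoder)

-- A function wrapped in a decoder, with two arguments still to apply:
twoArgsLeft : Decoder (Int -> Int -> Int)
twoArgsLeft =
    Decode.succeed (+)

-- Apply one decoded value; one argument remains:
oneArgLeft : Decoder (Int -> Int)
oneArgLeft =
    Decode.map2 (|>) Decode.int twoArgsLeft

-- Apply the last decoded value; a plain value decoder remains:
noArgsLeft : Decoder Int
noArgsLeft =
    Decode.map2 (|>) Decode.int oneArgLeft
```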
[00:43:17]
Yeah.
[00:43:18]
And a decoder of a function really doesn't make any sense on its own.
[00:43:22]
It really makes sense in this context of an applicative.
[00:43:25]
Yes.
[00:43:26]
Yes.
[00:43:27]
And that's where it's incredibly helpful.
[00:43:29]
Right.
[00:43:30]
So the error messages could be confusing.
[00:43:33]
And that's one of the challenging things is like the mapN error messages when you get
[00:43:39]
something wrong are very clear and precise.
[00:43:43]
The compiler isn't able to give information as precisely if you're doing andMap because
[00:43:49]
it doesn't know exactly how many things you plan to apply.
[00:43:53]
So it can't give you the precision.
[00:43:55]
So that's one of the trade offs.
[00:43:56]
So if you've got an error in a say a map two or a map three because it's unwrapping all
[00:44:00]
of them first and then saying, here's a three argument function, apply all three of these
[00:44:05]
arguments, it can immediately tell you, oh, argument two of three is incorrect.
[00:44:09]
Whereas with andMap, because you're slowly applying arguments one at a time, the
[00:44:15]
process of applying them one at a time is: convert a 10-argument function
[00:44:20]
into a nine argument function, then convert it into an eight argument function, then a
[00:44:23]
seven argument function and so on.
[00:44:25]
The error that you're going to get is something like, oh, on step five, I expected a function
[00:44:31]
with this signature, but the signature here is not quite right.
[00:44:36]
And it can be really a head scratcher if you don't understand under the hood what's going
[00:44:40]
on, if you're not familiar with the concept of partial application.
[00:44:44]
So that's definitely the downside.
[00:44:46]
It takes some deciphering, even if you're very familiar with it.
[00:44:51]
But it gives you enough of a clue that you're like, something's off with my chain.
[00:44:56]
And at that point, sometimes it's helpful to just, sometimes I'll just put in my pipeline
[00:45:01]
of andMaps.
[00:45:02]
I'll just put, like, a Debug.todo as one of the things in the pipeline, like: all right,
[00:45:07]
let's just pretend that this one is whatever you want it to be to satisfy the compiler,
[00:45:12]
and see if the problem is there or somewhere else.
[00:45:14]
And then it'll tell you if it's still giving you an error, the problem wasn't where you
[00:45:18]
put the Debug.todo.
[00:45:20]
If it's not giving you an error, if it's not giving you an error, then you know exactly
[00:45:24]
where to look.
[00:45:25]
One advantage of the sort of andMap pipeline approach is that you can then combine that
[00:45:31]
with other functions to create almost like a domain specific variation.
[00:45:38]
And we've mentioned a few times the NoRedInk Json.Decode.Pipeline library.
[00:45:42]
And what they've done is they've taken this andMap function and combined it with a few
[00:45:46]
of the helpers from the Json.Decode library for finding fields at a particular location.
[00:45:53]
And so you can say I have this required field or have this required nested path.
[00:45:59]
And those can all just be piped one to another.
[00:46:02]
And it becomes very nice to read.
[00:46:04]
I think someone who doesn't understand what the pattern does under the hood could still
[00:46:07]
understand what the code does.
[00:46:09]
Because you could say, oh, construct a user using a required first name and a required
[00:46:17]
last name nested under these sets of keys.
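That reads roughly like this with NoRedInk/elm-json-decode-pipeline (the JSON field names and nesting are hypothetical):

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Pipeline exposing (required, requiredAt)

type alias User =
    { first : String, last : String }

userDecoder : Decoder User
userDecoder =
    Decode.succeed User
        |> required "first_name" Decode.string
        |> requiredAt [ "profile", "last_name" ] Decode.string
```

Someone who has never seen andMap can still read this as "construct a User from a required first name and a required nested last name".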
[00:46:20]
Right.
[00:46:21]
So one reason I feel like people may sometimes just reach for these sort of andMap or pipeline
[00:46:28]
functions to start with is just that the workflow of changing from map to map2 to map3
[00:46:35]
back to map2 as you sort of adjust things is a little bit clunky.
[00:46:40]
I find myself using Ctrl-A and Ctrl-X in Vim, which are increment number and decrement
[00:46:46]
number, all the time for this.
[00:46:48]
Because what happens is you go up to the line where there's a map2; you can be anywhere
[00:46:53]
in the line, at the beginning of the line, anywhere before the 2 in the map2; you do Ctrl-A
[00:46:58]
and it increments that to map3.
[00:47:01]
So that's a little trick that I use.
[00:47:04]
And I actually personally tend to use the mapN functions when I'm dealing with lots
[00:47:10]
of small composable bits.
[00:47:12]
But I think it's a matter of personal preference.
[00:47:14]
And there's good reason to just say, you know what, I don't want to deal with this workflow
[00:47:18]
of changing the N in the mapN every time I add something, I just want to deal with
[00:47:22]
andMap every time.
[00:47:23]
Yeah, but now that you know that shortcut, you don't have any excuse anymore.
[00:47:28]
So I'm very curious about one thing, because we've seen this pattern happen in a lot of
[00:47:34]
the core libraries or core concepts that have been spread out all over, like parsers,
[00:47:39]
JSON.
[00:47:40]
But when would you reach for this pattern?
[00:47:43]
Like you're building something new?
[00:47:46]
In what cases?
[00:47:47]
What situations would you say it would be nice to have a combinator for this API?
[00:47:54]
I think that need often arises organically.
[00:47:56]
You'll sort of start working with your type and realize, oh, actually, I need a way to
[00:48:01]
combine.
[00:48:02]
And that's when this sort of thing will arise.
[00:48:05]
More generally, this sort of thing is usually only needed for types that have a type variable
[00:48:10]
in them.
[00:48:11]
So if you have a concrete type that's some kind of enum-style value or something like
[00:48:16]
that, you're not going to need a map2 because there's no sort of inner
[00:48:21]
value to transform.
[00:48:22]
Yeah, but you could still want to combine two elements to be a single element, like
[00:48:29]
a list of two things turned into a list of one thing.
[00:48:32]
Well, now it's the combinators on the list, not the item itself.
[00:48:37]
There are some, we've been talking a lot about, you know, all you need is a map2 function
[00:48:42]
for this to count as applicative, and also a constructor.
[00:48:45]
Technically, there's also a set of rules that the map2 function needs to follow in
[00:48:51]
order to be considered a legitimate map2 for these purposes.
[00:48:54]
You can't just invent some function that's like, oh, this is a string concatenation.
[00:48:58]
I'm going to call it map2 and hey, I'm applicative.
[00:49:00]
What's the term for it?
[00:49:01]
Lawful?
[00:49:02]
Lawful, right?
[00:49:03]
I think so.
[00:49:04]
Yes.
[00:49:05]
I like that.
[00:49:06]
It makes it sound like an outlaw.
[00:49:07]
It almost is like, sounds like it would be cool to like not follow those rules.
[00:49:10]
Be an outlaw.
[00:49:11]
I just think of a D&D alignment chart now.
[00:49:14]
Yeah, right.
[00:49:17]
That is the only definition I have in mind for lawful.
[00:49:20]
So if you can explain, please do.
[00:49:23]
So there's a few properties that have to be, and I don't know them off the top of my head,
[00:49:27]
but basically it's like, oh, if you map the identity function, then the output must be
[00:49:32]
the same or there's a few rules like that.
[00:49:34]
They're called the applicative laws.
[00:49:37]
So if you look that up, that's what we'll show.
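A couple of those properties, sketched for Maybe (the full set goes under the name "applicative laws": identity, homomorphism, interchange, and composition):

```elm
-- Mapping the identity function must change nothing:
-- Maybe.map identity x == x

-- Mapping a composition must equal composing two maps:
-- Maybe.map (f << g) x == Maybe.map f (Maybe.map g x)

-- Combining two plain wrapped values must just apply the function:
-- Maybe.map2 f (Just a) (Just b) == Just (f a b)
```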
[00:49:41]
But more generally, the signature for map2 is going to have variables in it.
[00:49:47]
So it's going to be a two-argument function, a -> b -> c, and then your type with a variable
[00:49:54]
a, your type with variable b, and then it will in the end create your type with variable
[00:49:59]
c. And so if your type doesn't have a type variable, then it's probably not needing these
[00:50:05]
functions.
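That common shape shows up across the core libraries; these are the published signatures:

```elm
-- Maybe.map2       : (a -> b -> c) -> Maybe a     -> Maybe b     -> Maybe c
-- Result.map2      : (a -> b -> c) -> Result x a  -> Result x b  -> Result x c
-- Json.Decode.map2 : (a -> b -> c) -> Decoder a   -> Decoder b   -> Decoder c
-- Random.map2      : (a -> b -> c) -> Generator a -> Generator b -> Generator c
```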
[00:50:06]
That's interesting.
[00:50:07]
Like what if you have an opaque type, like some sort of money type?
[00:50:12]
And I mean, I'm just trying to think of a concrete use case.
[00:50:15]
In that case, there is like a thing in this box.
[00:50:18]
Like you can't directly do anything with money types.
[00:50:20]
You need to expose an interface to deal with those.
[00:50:23]
But that said, do you want to expose some Money.map where you can then multiply
[00:50:31]
it by a million or something?
[00:50:33]
Maybe what you really want to do is expose Money.sum or some sort of domain-specific
[00:50:40]
functions for dealing with the money.
[00:50:43]
That's exactly the path.
[00:50:44]
Actually, I've been down this path.
[00:50:47]
I think that was actually probably one of the areas I first really understood mapping
[00:50:52]
functions.
[00:50:53]
I was creating a money type and it was not parameterized.
[00:50:57]
It was just a wrapper around probably an integer or a float.
[00:51:01]
And then I realized, wait, but like it's annoying to always wrap and unwrap these things.
[00:51:05]
This kind of looks like mapping.
[00:51:07]
What if I created a map2?
[00:51:09]
And then I can say anytime I want to add two dollar amounts, I can just map2 the plus
[00:51:15]
function and it just works.
[00:51:18]
Or map2 the times function.
[00:51:20]
Yeah.
[00:51:21]
What does that mean?
[00:51:23]
Yeah.
[00:51:24]
Which like, I think as programmers, we play very fast and loose with math.
[00:51:30]
And if you were in a more, say, if you're working with physics, you know that if you
[00:51:35]
multiply two numbers that have a unit, then you also have to multiply the unit.
[00:51:42]
And so if you're multiplying dollars times dollars, what you get back is dollars squared.
[00:51:50]
That sounds great to me.
[00:51:52]
Sign me up.
[00:51:53]
Which in most applications is probably a nonsensical unit type.
[00:51:57]
So you probably don't want to allow arbitrary operations on the value.
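A minimal sketch of that design (the module and names are hypothetical): keep the wrapper opaque, keep the monomorphic map2 private, and expose only operations that make sense for money.

```elm
module Money exposing (Money, add, fromCents, toCents)

-- An opaque wrapper; the Int inside is an amount in cents.
type Money
    = Money Int

fromCents : Int -> Money
fromCents =
    Money

toCents : Money -> Int
toCents (Money cents) =
    cents

-- A monomorphic map2: the wrapped values are always Int,
-- so the combining function must be Int -> Int -> Int.
-- Deliberately not exposed.
map2 : (Int -> Int -> Int) -> Money -> Money -> Money
map2 f (Money a) (Money b) =
    Money (f a b)

-- Addition makes sense for money; exposing only domain-specific
-- operations like this avoids nonsense like "dollars squared".
add : Money -> Money -> Money
add =
    map2 (+)
```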
[00:52:03]
Another really interesting thing is that normally with a map2 function, you pass a two-argument
[00:52:09]
function to it to say, hey, combine these two using this function.
[00:52:13]
And that function you give it can have any two inputs and any output type because you
[00:52:18]
can combine any values together.
[00:52:21]
With something like, say, a dollar wrapper, you can't do that because the value inside
[00:52:27]
is always an integer.
[00:52:28]
And so the two inputs for your two-argument function must be integers.
[00:52:32]
And because you're creating a new dollar value as the output, the output value also must
[00:52:37]
be an integer.
[00:52:38]
So rather than having a generic function being passed into your map2 that's a, b, c, it's
[00:52:43]
actually going to be int, int, int, which is possibly OK.
[00:52:49]
There's the concept of distinguishing between, I'm going to throw some fancy terms out here,
[00:52:54]
polymorphic versus monomorphic versions of these functions.
[00:52:59]
Polymorphic meaning many shapes, monomorphic meaning single shape.
[00:53:02]
So if it's just always an integer, then that's a monomorphic version of map or map2.
[00:53:07]
And those can be lawful under certain circumstances.
[00:53:12]
But in general, when people are talking about things like applicative, they mean the
[00:53:19]
polymorphic version.
[00:53:20]
So how would you call the monomorphic version then?
[00:53:23]
You have to make one.
[00:53:27]
We actually decided not to give it a name like map2 and instead give it a domain-specific
[00:53:35]
name.
[00:53:36]
As Dillon mentioned, for something like a dollar type, you might want to just create some domain
[00:53:40]
specific functions like add rather than something generic.
[00:53:44]
I think where this comes up maybe a little bit more frequently is if you have some opaque
[00:53:51]
type that has a string in it or maybe it's even a record or something and you say, oh,
[00:53:55]
I want to map over this user's name, but you can't reach into the name directly.
[00:54:01]
You have to, because it's opaque, so you have to have some sort of function that does that
[00:54:06]
for you.
[00:54:07]
And you might be tempted to call it map or map name or something like that.
[00:54:11]
But because it is more monomorphic and it doesn't really work in the same way, I found
[00:54:18]
it's useful to just go all in on the domain specific idea and just give it a name that
[00:54:22]
describes what it does.
[00:54:23]
So call it update name and that better describes what it's going to do and doesn't confuse
[00:54:28]
people with a more general concept of mapping.
[00:54:31]
So it's a better experience for the users of your code, probably just easier to read
[00:54:34]
it in general.
[00:54:35]
I think one exception to this is actually in the Elm core library, and that is String.map,
[00:54:43]
because Elm allows you to map over strings.
[00:54:47]
It doesn't have a map2, but there is a String.map, and it is monomorphic, because when you
[00:54:52]
map, the function you pass in has to be character to character.
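For example:

```elm
-- String.map : (Char -> Char) -> String -> String
-- The function must be Char -> Char; Char -> Int would not compile.
shout : String -> String
shout =
    String.map Char.toUpper

-- shout "elm" == "ELM"
```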
[00:54:56]
Good trivia.
[00:54:57]
But I think people are so used to mapping as this idea of traversing a collection and
[00:55:02]
transforming the values along the way that that one probably doesn't confuse people.
[00:55:07]
People probably even just use it and be like, oh, of course it's character to character
[00:55:10]
and never thought of like, oh, what if I wanted to do character to int?
[00:55:14]
Why doesn't that work?
[00:55:15]
So you may have used String.map and never realized it was different from all the other
[00:55:19]
maps in the Elm world.
[00:55:22]
So one other pattern I noticed emerging when I'm dealing with building up pipelines of
[00:55:29]
things, I mean, it happens all the time.
[00:55:31]
It's not necessarily just like building a JSON decoder or a random number generator.
[00:55:36]
I'm often doing these pipelines and sometimes there are these pipelines where rather than
[00:55:42]
just dealing with one specific thing, and we talked about this idea of dealing with
[00:55:48]
one level of abstraction at a time.
[00:55:50]
So often you're dealing with one level of abstraction where it's just decoding a bunch
[00:55:55]
of stuff, just building up a JSON decoder.
[00:55:58]
But sometimes you're running a decoder and then that gives you a result and then you're
[00:56:03]
turning that into a particular type of error that you're combining with another thing,
[00:56:08]
for example.
[00:56:09]
So these sort of higher level pipelines where you're deciding you're sort of processing
[00:56:13]
something rather than doing all the detailed processing.
[00:56:17]
Often I want to build something up into a particular type of value.
[00:56:22]
If I need to take one type of error and turn it into, combine it with, maybe there's an
[00:56:28]
HTTP error that may have happened in one result and another type of error that might have
[00:56:33]
happened in another result.
[00:56:34]
And then I need to combine those and pull in some other data.
[00:56:37]
So those types of pipelines, I tend to see a few different types of patterns emerging.
[00:56:43]
One is I tend to see, sometimes I need to coerce something into the same type of thing.
[00:56:48]
So maybe I have a maybe value and I have a result of an HTTP error and I have a result
[00:56:53]
of another error type.
[00:56:55]
So I might need to do Result.mapError to get the two result types to have the same error.
[00:57:02]
And then I might need to take the Maybe and do Result.fromMaybe and give it an error
[00:57:08]
type if it's nothing.
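A sketch of that coercion step (the Error type and its variants are hypothetical; Result.mapError and Result.fromMaybe are the real elm/core functions):

```elm
import Http

type Error
    = RequestFailed Http.Error
    | NameMissing

-- Coerce both failure modes into the same Error type,
-- then combine with map2:
combined : Result Http.Error Int -> Maybe String -> Result Error ( Int, String )
combined httpResult maybeName =
    Result.map2 Tuple.pair
        (Result.mapError RequestFailed httpResult)
        (Result.fromMaybe NameMissing maybeName)
```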
[00:57:10]
So those are sort of like some higher level patterns for combining things that I find
[00:57:16]
come up a lot.
[00:57:17]
And another one that I see coming up a lot in code that I write is I'll want to sort
[00:57:21]
of compose together ways of mapping things.
[00:57:24]
So I'll have, like, a Maybe of a list or a JSON decoder of a list.
[00:57:31]
And I want to map the inner list inside of that.
[00:57:34]
And in those cases, I'll do, like, Maybe.map, List.map, and then apply something in there.
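Composing the two maps looks like this:

```elm
doubleAll : Maybe (List Int) -> Maybe (List Int)
doubleAll =
    Maybe.map (List.map (\n -> n * 2))

-- doubleAll (Just [ 1, 2, 3 ]) == Just [ 2, 4, 6 ]
-- doubleAll Nothing == Nothing
```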
[00:57:41]
So those are sort of two higher level patterns that I've noticed emerging a lot.
[00:57:45]
Yes.
[00:57:46]
I think those are probably a little bit separate from this applicative concept, in that they're
[00:57:49]
just tips for working with pipelines in general.
[00:57:52]
I would typically not combine those with a, say, andMap pipeline.
[00:57:59]
If I'm say doing some JSON decoding and then I want to combine the errors with some other
[00:58:04]
result, I would probably have a separate function that handles the JSON decoder and just find
[00:58:09]
this is how you decode the JSON and then call that from a different pipeline that's managing
[00:58:14]
the results.
[00:58:15]
Again, back to that idea of a single level of abstraction.
[00:58:18]
I have a function that defines here's how we interact with JSON and the other one that
[00:58:21]
says here's how we then like read the JSON and handle the errors.
[00:58:27]
You get to some really interesting patterns with sort of deriving these things too.
[00:58:31]
So another thing I've noticed is that a lot of these sort of mappable APIs will...
[00:58:38]
I mean, if you have map2, what can you derive from it?
[00:58:43]
You mentioned that you can...
[00:58:44]
That is magical.
[00:58:45]
The moment you introduce map2, so many things become possible.
[00:58:50]
Yeah.
[00:58:51]
It's pretty neat.
[00:58:52]
One of the things that I've been doing, I've got this elm-markdown parsing library and I've
[00:58:59]
had a lot of fun building up transformations because it's really fun in a typed language
[00:59:05]
to deal with any sort of abstract syntax tree, whether it's Markdown or something else.
[00:59:11]
So I'm finding myself doing operations where you want to count the number of headings or
[00:59:17]
you want to take all of the level two headings and capture those.
[00:59:24]
So that's like, you might want to do like a fold left over them and that's just derived
[00:59:29]
from map2 or you might want to...
[00:59:32]
Well, the fold is not derived from map2.
[00:59:36]
It is its own thing, but there's a combination of fold and map2 that becomes really, really
[00:59:42]
powerful.
[00:59:43]
I think that's what you're pointing towards.
[00:59:46]
So this goes typically under the name of sequence or combine in various libraries.
[00:59:52]
But if you have say a list of maybes and you don't want to check each of them individually,
[00:59:59]
you say, give me back just one single maybe that's either nothing if any of the items
[01:00:05]
were missing or just the list of all the present values if they were all present.
[01:00:10]
And that's where you would fold with map2.
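For the Maybe case just described, that fold-plus-map2 derivation is only a couple of lines; this is a sketch, not a particular library's implementation:

```elm
-- sequence/combine for Maybe, derived from foldr plus map2
combine : List (Maybe a) -> Maybe (List a)
combine =
    List.foldr (Maybe.map2 (::)) (Just [])

-- combine [ Just 1, Just 2 ] == Just [ 1, 2 ]
-- combine [ Just 1, Nothing, Just 3 ] == Nothing
```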
[01:00:12]
I've used it using the remote data pattern.
[01:00:15]
It's really useful to know if all of the...
[01:00:17]
If you have a list of remote data values, it's useful to see are they all successful
[01:00:22]
or are any of them pending or failed.
[01:00:24]
So it gives you like an aggregate status of all these independent remote datas.
[01:00:30]
And if they're all successful, then you get a list of all the successful values.
[01:00:33]
So it's super, super convenient.
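A sketch of that aggregate status with a hand-rolled RemoteData type; the real krisajenkins/remotedata package has its own helpers, and the precedence choices in this map2 are ours, not necessarily the package's:

```elm
type RemoteData e a
    = NotAsked
    | Loading
    | Failure e
    | Success a

-- Combine two remote values: Success only if both succeeded;
-- otherwise surface a Failure or Loading state.
map2 : (a -> b -> c) -> RemoteData e a -> RemoteData e b -> RemoteData e c
map2 f ra rb =
    case ( ra, rb ) of
        ( Success a, Success b ) ->
            Success (f a b)

        ( Failure e, _ ) ->
            Failure e

        ( _, Failure e ) ->
            Failure e

        ( Loading, _ ) ->
            Loading

        ( _, Loading ) ->
            Loading

        _ ->
            NotAsked

-- The aggregate: Success with all values only if every request succeeded
fromList : List (RemoteData e a) -> RemoteData e (List a)
fromList =
    List.foldr (map2 (::)) (Success [])
```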
[01:00:36]
Yeah.
[01:00:37]
Combine is an awesome helper.
[01:00:40]
And I imagine you're using it in your markdown parser where you say, oh, I have a list of
[01:00:44]
parsers.
[01:00:45]
Can you sort of turn that into a parser that gives me a list of things?
[01:00:48]
You know, I don't think I expose a combine function for...
[01:00:51]
But maybe I should.
[01:00:52]
But I am in elm-pages.
[01:00:55]
I have this DataSource API, it used to be StaticHttp, which is sort of like a declarative
[01:01:01]
description of like getting HTTP data.
[01:01:03]
So it's not a command.
[01:01:05]
It's something that you can sort of just have when you load a page.
[01:01:10]
And anyway, combine is super helpful there because you'll have all these different data
[01:01:14]
sources that you want to combine into a list.
[01:01:16]
And that's a super handy function.
[01:01:18]
So I'm confused.
[01:01:19]
Is this combine or is it sequence?
[01:01:20]
Which one is usual?
[01:01:21]
I think he's saying it's a synonym.
[01:01:23]
Yeah.
[01:01:24]
The two names are used in Elm.
[01:01:26]
So for example, in the core library, there's Task.sequence, which takes a list of
[01:01:30]
tasks and just sort of squashes it down into a single task that will succeed if all the
[01:01:35]
child tasks succeed.
[01:01:37]
But you might see, in say the remote data library...
[01:01:40]
Actually, RemoteData I think uses fromList, and then result-extra and maybe-extra
[01:01:45]
use combine.
[01:01:46]
Yeah.
[01:01:47]
I think that's maybe one of the disadvantages of not having type classes: it allows
[01:01:53]
the same function to have different names, which sometimes is nice because a more domain
[01:01:59]
specific name might make more sense in the context of one library, but it makes it maybe
[01:02:03]
a little bit harder to see some of these patterns across multiple modules.
[01:02:09]
Well having type classes wouldn't prevent you from adding a new function that does the
[01:02:13]
same thing anyway.
[01:02:14]
But it would enforce that if something is applicative, it must have this function with
[01:02:19]
this name.
[01:02:20]
And so I think it would be really interesting to explore like having a sort of community
[01:02:26]
resource of an Elm review rule where you can sort of have some little at directive in a
[01:02:33]
doc comment in a module and say this is applicative or whatever term we want to use.
[01:02:40]
But just to sort of have it remind you, oh, but you don't expose a function named this.
[01:02:45]
Maybe you meant to do that.
[01:02:47]
So another thing that the map to allows and we already touched a little bit on it with
[01:02:51]
pipeline APIs and combining, but it can just be a really powerful way of cleaning up code.
[01:02:57]
And I had this magical experience a while back.
[01:03:01]
I was helping somebody else on a JavaScript project where they needed to parse sort of
[01:03:06]
like Excel style formulas, which are more or less just like prefix functions that can
[01:03:12]
be nested arbitrarily.
[01:03:13]
And we came up with something that's a little bit clunky.
[01:03:15]
I think it might've been some kind of recursive function that would consume a string and try
[01:03:20]
to build a tree out of it.
[01:03:23]
And I wondered if I could do something in Elm that would be nicer.
[01:03:26]
And I started with just re implementing the same approach that we had in JavaScript in
[01:03:31]
Elm, where I'm parsing a string.
[01:03:35]
But I also had introduced the idea of a Result type just because Elm has that and JavaScript
[01:03:40]
doesn't.
[01:03:41]
So each sort of step I would try to parse a chunk of the string and then return a result
[01:03:47]
if it was bad and otherwise keep going.
[01:03:51]
And it was this giant nested case expression, which my second step was saying, okay, well,
[01:03:56]
there's a bunch of steps where I can say it can either be a function name, like add or
[01:04:02]
subtract.
[01:04:03]
It can be an open parenthesis.
[01:04:04]
It can be an inner expression.
[01:04:06]
It can be a closed parenthesis.
[01:04:08]
And those were all nested case expressions.
[01:04:10]
What if I broke them out into functions?
[01:04:12]
And so I broke them all out into functions with this really tedious signature where it's
[01:04:16]
like string to tuple of remaining string and result of like the type we've parsed so far.
[01:04:24]
It was just really tortuous.
[01:04:27]
But at least it flattened my case expressions a little bit because now all of the bodies
[01:04:32]
were broken out into functions, which is that rule of abstraction I talked about earlier,
[01:04:37]
separate doing code from branching code.
[01:04:39]
And then I started realizing, wait a minute, this signature of like string to this awful
[01:04:46]
tuple shows up all the time.
[01:04:49]
And if we think about it, that's effectively what a parser is.
[01:04:53]
It's turning some less structured value, in this case a string, into a more
[01:04:59]
structured value and possibly an error, which is why I had that result.
[01:05:05]
And in this case, I had to keep track of the remaining string because you don't parse everything
[01:05:09]
all at once.
[01:05:11]
And so I took that and aliased it to Parser and just cleaned up all the signatures.
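That alias, plus one small primitive in its shape, might look something like this sketch (the tuple order and error messages here are illustrative):

```elm
-- A parser consumes part of the input string and returns the
-- remaining string plus either an error or a parsed value.
type alias Parser a =
    String -> ( String, Result String a )

-- A small primitive in that shape: match one expected character.
char : Char -> Parser Char
char expected input =
    case String.uncons input of
        Just ( c, rest ) ->
            if c == expected then
                ( rest, Ok c )

            else
                ( input, Err ("expected " ++ String.fromChar expected) )

        Nothing ->
            ( input, Err "unexpected end of input" )
```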
[01:05:16]
And it looked a lot nicer, but I was still having to do all this casing to sort of combine
[01:05:22]
the things together.
[01:05:23]
This is where the light bulb starts going off.
[01:05:24]
I'm like, wait a minute, I'm doing all this casing on the results, all on this like tuple
[01:05:30]
result thing to see, can I combine these different pieces together?
[01:05:34]
Wouldn't it be nice if I had a way to just, now that they're called parsers, just combine
[01:05:38]
two parsers together?
[01:05:40]
Hmm, could I define a map2 function?
[01:05:45]
And it's a little bit mind bending to define a map2 function over like functions of tuples
[01:05:50]
of results.
[01:05:51]
But because I had aliased it to just Parser a, I knew, oh, I know how to define a map2
[01:05:57]
over Parser a. I defined that, and that's when the magic happened.
[01:06:01]
Because all of a sudden, I could eliminate all those case expressions and just very cleanly
[01:06:09]
combine all those extracted functions that I had together in a fairly flat way.
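A map2 over that alias could be sketched like this, threading the remaining string from the first parser into the second and short-circuiting on the first error:

```elm
type alias Parser a =
    String -> ( String, Result String a )

-- Run parseA, feed its leftover input to parseB, and combine
-- the two values only if both parsers succeeded.
map2 : (a -> b -> c) -> Parser a -> Parser b -> Parser c
map2 combine parseA parseB input =
    case parseA input of
        ( rest, Ok a ) ->
            case parseB rest of
                ( rest2, Ok b ) ->
                    ( rest2, Ok (combine a b) )

                ( rest2, Err e ) ->
                    ( rest2, Err e )

        ( rest, Err e ) ->
            ( rest, Err e )
```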
[01:06:15]
And then of course, knowing that I can use map2 to implement andMap, I did that to give
[01:06:22]
myself a parsing pipeline API, which turned out to be really, really nice.
[01:06:28]
And then sort of in the vein of what the JSON decode pipeline does, where you can sort of
[01:06:32]
layer on a little bit of extra behavior or meaning on top of that.
[01:06:36]
When you're parsing, sometimes you want to parse a value and then like actually create
[01:06:41]
a value out of it.
[01:06:43]
And sometimes you just want to make sure that something is there in the string, but you
[01:06:46]
want to move on.
[01:06:47]
So you might want to consume a value or you might want to actually like parse something
[01:06:52]
out of it.
[01:06:53]
I don't know if you might call that keep and consume or something like that.
[01:06:56]
So that's effectively what I did.
[01:06:57]
I had like domain specific variations on andMap.
[01:07:00]
And I think I called them keep and consume that allowed me to have a very flat pipeline
[01:07:05]
that was just like, oh, start by attempting to parse a function name, then just consume
[01:07:15]
an open parenthesis, then sort of recursively attempt to parse another expression, and then
[01:07:23]
try to consume a closing parenthesis.
[01:07:27]
And just it all fell into place from this really tangled nested mess of case expressions
[01:07:33]
and nested functions into this beautiful API.
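The keep/consume pipeline described above could be derived from map2 along these lines; this is a sketch, and the usage parsers at the bottom (parseName, openParen, and so on) are hypothetical:

```elm
type alias Parser a =
    String -> ( String, Result String a )

-- Lift a plain value into a parser that consumes nothing.
succeed : a -> Parser a
succeed value input =
    ( input, Ok value )

map2 : (a -> b -> c) -> Parser a -> Parser b -> Parser c
map2 combine parseA parseB input =
    case parseA input of
        ( rest, Ok a ) ->
            case parseB rest of
                ( rest2, Ok b ) ->
                    ( rest2, Ok (combine a b) )

                ( rest2, Err e ) ->
                    ( rest2, Err e )

        ( rest, Err e ) ->
            ( rest, Err e )

-- keep: run the next parser and feed its value into the pipeline
keep : Parser a -> Parser (a -> b) -> Parser b
keep next pipeline =
    map2 (<|) pipeline next

-- consume: run the next parser but keep only the pipeline's value
consume : Parser ignored -> Parser a -> Parser a
consume next pipeline =
    map2 (\value _ -> value) pipeline next

-- Usage sketch:
-- succeed Call
--     |> keep parseName
--     |> consume openParen
--     |> keep parseExpression
--     |> consume closeParen
```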
[01:07:36]
It's all because of map2.
[01:07:38]
And that was a very iterative approach that I took.
[01:07:42]
I was aware of some of these concepts because I've used a lot of JSON decoders before,
[01:07:46]
but I wasn't really comfortable with parsing strings.
[01:07:50]
But that experience of sort of stumbling into what I guess you might call parser combinators,
[01:07:57]
which I guess really all of a sudden that term made so much sense for me because I had
[01:08:02]
these parsers already.
[01:08:03]
These are little functions I had already extracted to parse the string into a function name
[01:08:07]
or a parenthesis.
[01:08:08]
And then I implemented map two and a couple other functions that allowed me to combine
[01:08:14]
parsers together and boom, all of a sudden I had in, I don't know, probably less than
[01:08:19]
100 lines of Elm built a parsing library.
[01:08:21]
That was really magical and mind blowing.
[01:08:24]
And then just for fun, I checked the elm/parser library, which actually has a pipeline
[01:08:28]
syntax.
[01:08:29]
It uses special operators, but it's effectively andMap.
[01:08:32]
That's like |. and |=, which are equivalent to my specialized operators
[01:08:38]
for like parse and consume.
[01:08:41]
And it was basically the same code.
[01:08:43]
So I sort of stumbled into something that was very similar to the official elm/parser
[01:08:48]
library.
[01:08:49]
So it was a really fun exercise for me.
[01:08:51]
I learned a lot.
[01:08:52]
I feel like I learned how parsing works.
[01:08:56]
I got way more comfortable with some new facets of map2, the idea of combinators in general.
[01:09:04]
I think I gained a new level of understanding the combination of parsers and combinators
[01:09:09]
as like two pieces that really play well together.
[01:09:11]
Yeah, it was a really magical experience.
[01:09:14]
I stayed up late into the night and it was just like, oh, another light bulb moment.
[01:09:19]
I think I created probably four or five Ellies, like one for each step in that
[01:09:25]
process.
[01:09:26]
And it was amazing.
[01:09:27]
Yeah.
[01:09:28]
I'm now very curious, did you backport that to the JavaScript version?
[01:09:33]
I didn't directly, but I was helping somebody else on their project and I shared the Elm
[01:09:40]
equivalents, which got the other person interested in looking up at JavaScript parsing or parser
[01:09:47]
combinator libraries, which they were then able to refactor our original solution into
[01:09:52]
something using the JavaScript parser combinator library that was very similar to what I ended
[01:09:58]
up with in Elm.
[01:09:59]
That's cool.
[01:10:00]
Yeah.
[01:10:01]
I think like combinator is such an intimidating word, but really the concept is something
[01:10:06]
that like, I mean, if you've spent a lot of time using some of the basic tools that Elm
[01:10:12]
gives us like decoders, it's a very familiar concept of breaking down a problem into small
[01:10:18]
sub problems and then building it up into something more complex by using these sort
[01:10:24]
of combining functions.
[01:10:26]
That's all it is.
[01:10:27]
And it's a very, it's funny because when you start to like think about the internals and
[01:10:31]
definitions, it seems so complicated, but when you do it, it's so natural and it's so
[01:10:37]
easy to do it well.
[01:10:38]
Cause it basically like the conclusion I've come to is that it's basically the difference
[01:10:42]
between like imperative transformations and declarative transformations.
[01:10:46]
That's basically what a combinator is, is it's like a declarative way of describing
[01:10:50]
a transformation, which can then be built out of like basically they're the primitive
[01:10:55]
transformation building blocks and these compound ones where you can combine them together.
[01:11:00]
That's all it is.
[01:11:01]
And it's a very natural pattern.
[01:11:03]
Anytime you want, you have like two pieces of data that you'd want to work with and you're
[01:11:07]
wondering, Oh, I need to combine them.
[01:11:09]
It might be, I have two maybes.
[01:11:10]
I want both of, I want to do some operation on both of those to get a new maybe back.
[01:11:15]
That might be one way to do it.
[01:11:17]
That would be a combinator.
[01:11:19]
I think that the two types of combinator that I needed to implement for this parsing library,
[01:11:24]
one was a way to combine two pieces together and say, I want to do parse this piece of
[01:11:28]
data and also this other piece of data.
[01:11:30]
But also sometimes you want to say, attempt to parse it this way, or if that fails, also
[01:11:35]
attempt to parse it this other way.
[01:11:37]
If you've done JSON decoding, you'll be familiar with like the oneOf where you give it a list
[01:11:40]
of decoders and it will try all of them and whichever succeeds first is the one that's
[01:11:45]
used.
[01:11:45]
And I implemented one of those for my little parser and that's also a form of combinator.
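Over the same Parser alias, that alternation combinator could be sketched like this; note these are backtracking, JSON-decode-like semantics, not the committing semantics of elm/parser's oneOf discussed next:

```elm
type alias Parser a =
    String -> ( String, Result String a )

-- Try each parser against the same input; the first success wins.
oneOf : List (Parser a) -> Parser a
oneOf parsers input =
    case parsers of
        [] ->
            ( input, Err "no parser matched" )

        parser :: rest ->
            case parser input of
                ( remaining, Ok value ) ->
                    ( remaining, Ok value )

                _ ->
                    oneOf rest input
```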
[01:11:52]
Yeah.
[01:11:53]
We talked about this on our Elm parser episode, but there is a really interesting thing that
[01:11:58]
when you're coming from experience using JSON decode and then you go use the Elm parser
[01:12:03]
library, it's kind of counterintuitive because you use oneOf and you're like, wait a minute,
[01:12:08]
the oneOf just failed on the first thing in my oneOf that had three different options.
[01:12:14]
And so it's interesting because the semantics are different between oneOf in JSON decode
[01:12:19]
and oneOf in elm/parser.
[01:12:21]
Right.
[01:12:22]
Because you have the idea of committing versus backtracking.
[01:12:24]
Exactly.
[01:12:25]
Which is like another layer to learn.
[01:12:28]
Yes.
[01:12:29]
And I don't think that's necessarily a flaw of a particular pattern, but it just goes
[01:12:36]
to show that you can sort of perhaps like follow these same patterns, but have slightly
[01:12:42]
different semantics.
[01:12:43]
Well, I think we've covered applicatives pretty well.
[01:12:48]
I'm sure there's more we could say.
[01:12:49]
But Joël, thanks again for joining us.
[01:12:53]
And if people want to get some more of your good knowledge, where can they follow you
[01:12:58]
and where can they learn more?
[01:13:00]
So they can follow me on Twitter, joelquen, J-O-E-L-Q-U-E-N.
[01:13:05]
They can also go to the Thoughtbot blog.
[01:13:10]
That's a place I work at.
[01:13:12]
I have a lot of articles there talking about Elm and also other things.
[01:13:16]
So that would be thoughtbot.com/blog/authors/joel-quenneville.
[01:13:22]
That's probably easier to link than to try to spell it.
[01:13:26]
If you click around to some tags or search, you'll find it too.
[01:13:29]
Yeah.
[01:13:30]
There are a lot of great Elm blog posts.
[01:13:32]
They're definitely worth checking out.
[01:13:34]
A lot of them probably connect to the topic we talked about today because there's so many
[01:13:39]
sort of foundational aspects that overlap into this topic of applicatives.
[01:13:45]
And so there's a lot of articles I've written over time that connect to this.
[01:13:49]
Yeah.
[01:13:50]
Yeah.
[01:13:51]
You also gave a really great talk about random generators that might be relevant here for
[01:13:55]
people curious to learn more.
[01:13:57]
Yeah.
[01:13:58]
Yeah.
[01:13:59]
There's also a talk about random generators and map2 and how that works there.
[01:14:03]
I've given a talk with Maybe and how map2 works there.
[01:14:07]
So yeah, maybe this whole time I was just trying to get everyone to be excited about
[01:14:11]
map2.
[01:14:12]
It was the best function.
[01:14:14]
Well, it worked for me.
[01:14:15]
I'm amped up.
[01:14:16]
Well, thank you so much again.
[01:14:18]
And Jeroen, have a good one.
[01:14:19]
Have a good one.