
Category Theory in Elm with Joël Quenneville

Joël Quenneville joins us to help us distill down Category Theory patterns and explore what value it brings us as Elm developers.
March 14, 2022
#52


Transcript

[00:00:00]
Hello, Jeroen.
[00:00:02]
Hello, Dillon.
[00:00:03]
And once again, we're joined by friend of the show,
[00:00:06]
Joël Quenneville.
[00:00:08]
Thanks so much for coming back, Joel.
[00:00:09]
Thanks for inviting me onto the show.
[00:00:11]
So today we are talking about a bit of a hot topic.
[00:00:14]
Should Elm devs care about category theory?
[00:00:17]
Forgive us if we use that term a bit imprecisely,
[00:00:20]
but I think that that's what Elm developers
[00:00:23]
often think about when they think about
[00:00:28]
the area that we're going to be touching on.
[00:00:30]
So let's dive into it.
[00:00:31]
So what do we really want to talk about here?
[00:00:34]
We're kind of talking about some patterns
[00:00:37]
that we use as Elm developers, or could use.
[00:00:39]
How would you sort of describe overall
[00:00:42]
this sort of concept of category theory
[00:00:45]
or how it might be useful for an Elm developer?
[00:00:47]
I think there's value in recognizing patterns
[00:00:50]
in our code and when they repeat.
[00:00:55]
And then use things we know in one context
[00:00:59]
and apply them in another.
[00:01:01]
Right.
[00:01:03]
Right.
[00:01:04]
So basically what we would hope to come away with here
[00:01:08]
is a better understanding of some tools for abstraction
[00:01:11]
that we can use as Elm developers.
[00:01:13]
And maybe just a clearer understanding
[00:01:17]
of some abstractions we're already using.
[00:01:19]
We're probably already familiar with these
[00:01:24]
but maybe don't recognize them as a recurring group of patterns.
[00:01:29]
So is that how you think about them as a group of patterns
[00:01:33]
that you can use to identify common abstractions?
[00:01:37]
Yes, I think I like the term pattern here.
[00:01:40]
In fact, we used it in a previous episode
[00:01:42]
where we talked about one of these
[00:01:45]
what we called universal patterns.
[00:01:47]
Yes, and listeners should definitely check out that episode
[00:01:52]
for a refresher. We covered a lot of stuff there,
[00:01:54]
a lot of it talking about what you might call
[00:01:57]
the applicative pattern.
[00:01:58]
It's episode 32, if I remember correctly.
[00:02:01]
You should actually check that.
[00:02:02]
Yes, episode 32.
[00:02:04]
So I've been trying to understand
[00:02:07]
why is it so hard to wrap our brains around this topic?
[00:02:11]
I think, I don't know, I imagine a lot of Elm developers
[00:02:14]
can relate to this. To a certain extent,
[00:02:16]
we shy away from using some of this academic language,
[00:02:21]
using these terms in the Elm community.
[00:02:23]
And to a certain extent, that makes it feel more accessible
[00:02:26]
for a lot of people, that they don't need to understand
[00:02:29]
all these terms before they feel like they can jump in
[00:02:32]
and create a web application.
[00:02:34]
But why is it so hard to wrap our brains around it?
[00:02:37]
And I was thinking about, it's almost like these patterns,
[00:02:41]
they don't have semantic ideas to them,
[00:02:44]
they don't have domain ideas, and that's sort of the point,
[00:02:49]
they don't have any ideas.
[00:02:52]
It's almost like if you were to explain the term
[00:02:54]
organization to somebody who didn't know what an organization was.
[00:02:58]
And if they were saying, well, what does it do?
[00:03:00]
Well, it could do anything.
[00:03:02]
What's its purpose? Its purpose could be anything.
[00:03:06]
But there's some sort of set of coherent ideas
[00:03:10]
you could talk about of how do people organize work.
[00:03:12]
Are they self organizing teams?
[00:03:17]
Are they individual?
[00:03:19]
You can talk about these patterns of how people organize,
[00:03:22]
even if you don't know what they're organizing to do.
[00:03:25]
And in a sense, these sort of functional patterns,
[00:03:28]
it's a similar thing.
[00:03:29]
They have sort of metasemantics almost.
[00:03:33]
Do you think that's a fair description?
[00:03:35]
I would agree.
[00:03:36]
These things often, I think, are so abstract,
[00:03:41]
it's hard to describe them outside of their very formal definitions.
[00:03:46]
When we get a little bit more accessible,
[00:03:48]
we end up having to hand wave a little bit.
[00:03:52]
And then we have an approximation,
[00:03:55]
which works in many, but not all cases.
[00:03:59]
It's something that if you want to explain,
[00:04:01]
you usually have to go with a concrete example
[00:04:04]
right from the start to get it across.
[00:04:07]
And then you try to generalize, oh, you've got this pattern.
[00:04:12]
This is how it works for lists.
[00:04:14]
And then, oh, you can also apply it to maybe,
[00:04:16]
or you can also apply it to results.
[00:04:19]
I think as a teaching method, that is absolutely the way to go.
[00:04:22]
It's going to take a while before you can start seeing
[00:04:25]
what the greater pattern is and what is part and not part of the pattern.
[00:04:30]
So for example, you're looking at andThen on maybes
[00:04:33]
and then on results, and you might think,
[00:04:35]
oh, andThen is a pattern for following the happy path.
[00:04:40]
But then you look at andThen on a random generator
[00:04:44]
and there is not really the concept of a happy path.
[00:04:47]
And so now your approximation has kind of broken.
[00:04:49]
So now you have to find something else
[00:04:51]
that can include the way random does it.
[00:04:53]
And then you have to find a way that includes the way list does it.
[00:04:56]
And every time you learn a different one,
[00:04:58]
you start broadening a little bit that understanding.
[00:05:00]
And I think it's okay to have approximations
[00:05:02]
that are good in certain cases and not in others,
[00:05:07]
because that's how we build our understanding,
[00:05:09]
how we build mental models.
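As a sketch of the contrast being described here, these two uses of andThen have the same shape but very different meanings (the function names and the specific generators are illustrative, not from the episode):

```elm
import Random

-- On Maybe, andThen reads like "keep following the happy path":
parsePositive : String -> Maybe Int
parsePositive input =
    String.toInt input
        |> Maybe.andThen
            (\n ->
                if n > 0 then
                    Just n

                else
                    Nothing
            )

-- On Random.Generator there is no failure to short-circuit;
-- andThen means "use the first generated value to decide
-- what to generate next":
coinFlips : Random.Generator (List Bool)
coinFlips =
    Random.int 1 6
        |> Random.andThen
            (\n -> Random.list n (Random.uniform True [ False ]))
```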
[00:05:10]
For lists, andThen would be concatMap, right?
[00:05:14]
Yes, that's correct.
[00:05:15]
Yeah, I feel like Result.andThen
[00:05:17]
and List.concatMap
[00:05:21]
do two things very differently.
[00:05:24]
The type signature looks the same,
[00:05:26]
and the laws around it, but otherwise
[00:05:31]
those are pretty much the only things that are the same.
[00:05:35]
But what it does and why it does it are very different.
[00:05:39]
So I feel like it's hard to explain like,
[00:05:41]
why would you reach for this?
[00:05:44]
And it's more like, well, this is useful function
[00:05:47]
and it has this type signature.
[00:05:49]
And then, well, result has the same one.
[00:05:52]
You just don't use it for the same thing.
[00:05:55]
But it behaves somewhat alike.
[00:06:00]
And there are behaviors that are true across all of these.
[00:06:02]
And that's where some of your intuitions can come across really nicely.
[00:06:07]
And you can build other things on top of them,
[00:06:10]
regardless of whether it's on maybe or list,
[00:06:13]
because certain things hold true.
[00:06:15]
So a classic one that you'll see is, for example, flattening a nested maybe.
[00:06:21]
You would do andThen identity.
[00:06:23]
That is the thing that is true for any andThen function.
[00:06:28]
So if you want to do a list and you want to flatten it,
[00:06:30]
you can do concatMap identity.
[00:06:32]
You want to flatten a result, andThen identity.
[00:06:35]
So that holds true for all of them.
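The flattening trick just described, written out for all three types:

```elm
-- Flattening a nested structure is andThen identity
-- (or concatMap identity for lists):

flattenMaybe : Maybe (Maybe a) -> Maybe a
flattenMaybe =
    Maybe.andThen identity

flattenResult : Result x (Result x a) -> Result x a
flattenResult =
    Result.andThen identity

flattenList : List (List a) -> List a
flattenList =
    List.concatMap identity

-- flattenMaybe (Just (Just 1)) == Just 1
-- flattenList [ [ 1, 2 ], [ 3 ] ] == [ 1, 2, 3 ]
```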
[00:06:37]
Right. That's like a law for that pattern.
[00:06:39]
If we were to get into the jargon.
[00:06:42]
Yes, yes.
[00:06:43]
And I think one thing that's interesting to note with these things
[00:06:45]
is that their really formal definitions
[00:06:49]
typically are a combination of some base description.
[00:06:54]
Maybe it includes a type.
[00:06:55]
I forget if you have to have a type.
[00:06:57]
But typically it's one or two functions.
[00:06:59]
And then what they call laws.
[00:07:01]
So things that must hold true for that function,
[00:07:04]
regardless of what you pass into it.
[00:07:06]
Yeah. So things like associativity and identity
[00:07:09]
and sort of like laws of logic and mathematics.
[00:07:12]
And these things have their foundations in these more.
[00:07:16]
They're not just for writing computer programs.
[00:07:18]
They're for logic and reasoning and mathematics.
[00:07:23]
Understanding where the relationship with mathematics comes from.
[00:07:27]
Because like you've got mathematics which works with algebra
[00:07:32]
and set theories, I guess.
[00:07:35]
And then you've got maybes and andThen.
[00:07:40]
And like there's some connection somewhere that I'm missing
[00:07:45]
and don't know where it is or what it is.
[00:07:47]
So the whole world of mathematics
[00:07:52]
is translating pure numbers.
[00:07:54]
So even if you talk about sets,
[00:07:56]
a function is a transformation from one set to another.
[00:07:59]
Right. So from numbers to strings and strings to numbers.
[00:08:03]
And for instance.
[00:08:04]
Exactly.
[00:08:05]
And now once you start thinking of transformations between sets,
[00:08:10]
that's where some of these more category style things come in.
[00:08:15]
I want to say, and this is getting to the edges of my knowledge
[00:08:20]
that it's sort of a layer of abstraction
[00:08:22]
on top of the idea of sets and functions as transformation between sets.
[00:08:27]
So there are other mathematical,
[00:08:31]
I'm going to call them objects,
[00:08:33]
that can be transformed into each other
[00:08:36]
with a similar style of relationship that functions and sets have.
[00:08:40]
And category theory is sort of like a meta framework
[00:08:43]
for being like, oh, I don't know, groups are to something.
[00:08:48]
I'm just using random terms here.
[00:08:50]
These things relate to each other in a similar way
[00:08:53]
as functions and sets relate to each other.
[00:08:57]
It's my very fuzzy understanding
[00:08:59]
of what mathematical category theory is.
[00:09:05]
My understanding is also that in the programming world,
[00:09:09]
we've kind of diverged a little bit from that
[00:09:12]
and that what we tend to call category theory
[00:09:17]
isn't quite mathematical category theory.
[00:09:20]
Do you have examples where it doesn't map cleanly?
[00:09:23]
No, because I don't actually understand
[00:09:26]
mathematical category theory.
[00:09:28]
Okay, that's fair.
[00:09:31]
One of the things I wonder about is
[00:09:33]
how much would these things naturally emerge
[00:09:36]
if there was no such thing as this sort of
[00:09:40]
history of category theory and all these terms and concepts
[00:09:45]
that have been around over the years?
[00:09:47]
And all you had was the Elm language,
[00:09:49]
and you had those basic building blocks
[00:09:51]
for creating programs.
[00:09:53]
Would the same patterns emerge naturally,
[00:09:56]
or are they somehow tied to the fact
[00:09:59]
that we have all these ideas that we developed before
[00:10:02]
and we try to apply them to this new context?
[00:10:05]
So someone said, let there be Elm,
[00:10:07]
and then you invent the future?
[00:10:12]
My guess is that they would still emerge,
[00:10:15]
particularly something like a map.
[00:10:17]
It's such a common style of operation
[00:10:20]
that someone would write that abstraction
[00:10:22]
because they're tired of writing a case expression
[00:10:25]
to unwrap, do a thing, rewrap.
[00:10:28]
Yep, exactly.
[00:10:30]
That's my suspicion as well.
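The case expression someone would get tired of writing, and the abstraction they would extract from it, might look like this (the `increment` example is illustrative):

```elm
-- The case expression we get tired of writing:
increment : Maybe Int -> Maybe Int
increment maybeN =
    case maybeN of
        Just n ->
            Just (n + 1)

        Nothing ->
            Nothing

-- The abstraction someone would eventually extract:
map : (a -> b) -> Maybe a -> Maybe b
map fn maybeA =
    case maybeA of
        Just a ->
            Just (fn a)

        Nothing ->
            Nothing

-- Now increment is just map (\n -> n + 1),
-- with the unwrap / transform / rewrap hidden away.
```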
[00:10:32]
And if you think about things like,
[00:10:35]
your way of doing control flow is,
[00:10:40]
you have primitive expressions and case expressions
[00:10:42]
and function calls, basically.
[00:10:45]
You don't have any exception handling
[00:10:48]
or that sort of thing.
[00:10:50]
Well, okay, so then you need some sort of way to do andThen,
[00:10:53]
and it's going to pretty naturally emerge
[00:10:55]
from those basic ingredients, it seems.
[00:10:58]
And I think that that gets at the conversation today,
[00:11:02]
is how can we sort of understand
[00:11:07]
the pattern rather than just,
[00:11:09]
all right, I do andThen, and I do that over and over.
[00:11:12]
How can we understand that as the relationship
[00:11:15]
between how you do that with a list
[00:11:17]
and how you do that with maybe
[00:11:19]
and how you do that with a result?
[00:11:21]
I think what Jeroen shared earlier
[00:11:23]
about working with each individual type,
[00:11:26]
and then as you work with them,
[00:11:28]
you start understanding or seeing the greater pattern.
[00:11:33]
That's the way to learn it.
[00:11:36]
And then after that, where it gets interesting is,
[00:11:38]
once you've done three or four,
[00:11:40]
you can start thinking when you move to a different domain
[00:11:44]
that has these same functions.
[00:11:46]
I already know how andThen works on maybes and lists
[00:11:51]
and JSON decoders.
[00:11:53]
I wonder if it works the same on tasks.
[00:11:57]
And then at that point,
[00:12:02]
I think that's when the understanding really grows.
[00:12:04]
And I've had that sort of magical experience
[00:12:06]
of helping out someone in the Elm Slack.
[00:12:10]
I think this was with the test fuzzers
[00:12:15]
for writing fuzz tests.
[00:12:17]
And I'd never used the library
[00:12:19]
that someone was asking for help with.
[00:12:20]
And I was able to solve their problem
[00:12:23]
even though I'd never used the library,
[00:12:24]
did not know how fuzzers worked
[00:12:26]
because I knew these patterns.
[00:12:31]
And I've been using JSON decoders.
[00:12:34]
Yes, it may have been JSON decoders.
[00:12:36]
It may have been random generators.
[00:12:39]
I forget which one I compared it to in my mind.
[00:12:42]
Right.
[00:12:43]
It's a pretty neat mapping
[00:12:45]
from a test fuzzer in Elm to a random generator
[00:12:49]
because I'm pretty sure
[00:12:51]
it's the same thing under the hood.
[00:12:52]
I think you can just pass in a random generator, right?
[00:12:55]
But the same overall pattern applies, of course.
[00:13:00]
I read a couple of explanations of
[00:13:03]
there was one sort of explain it like I'm five type posts
[00:13:07]
that I found kind of interesting.
[00:13:09]
But one of the things it was talking about
[00:13:10]
was it was saying it's a messy world out there.
[00:13:13]
And what if we wanted to create
[00:13:16]
a sort of nice, happy place
[00:13:19]
where we could not think about that messiness,
[00:13:21]
where we could just think about simpler things
[00:13:24]
and shield ourselves from it?
[00:13:29]
and so it was saying like,
[00:13:32]
if you have a result or like a JSON decoder,
[00:13:35]
there are all these messy things that are happening
[00:13:38]
with the outside world in a JSON decoder.
[00:13:42]
All these ways that it could fail
[00:13:43]
and whether it's failed
[00:13:45]
and the state of whether it's gone through
[00:13:48]
and tried to do something that wasn't available,
[00:13:51]
that's very messy and it's stressful
[00:13:56]
and we should just forget about that
[00:13:58]
and happily just give it a function
[00:14:01]
for transforming our data and call map,
[00:14:04]
call Json.Decode.map
[00:14:06]
and pass in a plain old function
[00:14:09]
that takes a value and pretends
[00:14:11]
that everything is nice and happy
[00:14:13]
and shields away that complexity
[00:14:14]
of the ugly world outside.
[00:14:17]
So I don't know, that sort of clicked with me.
[00:14:19]
It felt like a pretty good way to describe
[00:14:24]
that.
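A small sketch of that "happy place" idea (the `"name"` field is a made-up example): the decoder owns all the ways the JSON could be missing or malformed, and the function handed to map gets to pretend the value simply exists.

```elm
import Json.Decode as Decode

-- The plain function passed to map never sees failure;
-- the decoder shields it from the messy outside world:
shoutedName : Decode.Decoder String
shoutedName =
    Decode.field "name" Decode.string
        |> Decode.map String.toUpper
```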
[00:14:25]
I think this goes into what I was mentioning earlier
[00:14:27]
about approximations and that that sort of picture
[00:14:30]
assumes that you're talking about situations
[00:14:33]
where there are potential failures.
[00:14:36]
So maybe, result, to a certain extent,
[00:14:39]
even JSON decoder or task.
[00:14:40]
That's not really what's happening
[00:14:44]
with a random generator
[00:14:46]
where mapping is not trying to ignore failures.
[00:14:51]
It kind of does something different.
[00:14:53]
Right, I sort of got into the specifics
[00:14:55]
of failure with JSON decode,
[00:14:57]
but the sort of explanation was talking more
[00:15:00]
about messiness as a general concept
[00:15:02]
and it was saying that like that messiness means
[00:15:06]
maybe the thing exists, maybe it doesn't
[00:15:09]
or that messiness could be
[00:15:11]
you will have this thing in the future
[00:15:12]
but you don't have it now
[00:15:14]
or the messiness could be
[00:15:15]
it depends on something from the outside world
[00:15:20]
and so there is something in common there
[00:15:22]
that I don't want to think about some sort of messiness
[00:15:25]
whether it's like uncertainty or happening in the future
[00:15:29]
or there's something connecting all of those things
[00:15:32]
even though semantically there are a bunch
[00:15:34]
of different things, results and presence or absence,
[00:15:39]
but somehow it's abstracting some sort of messiness
[00:15:42]
of the world and letting you deal with it
[00:15:47]
I think what you're referring to as messiness here
[00:15:49]
is what in slightly more formal context
[00:15:52]
people will refer to as context.
[00:15:54]
So people often refer to a parameterized type
[00:15:57]
like maybe or random generator
[00:15:59]
as context wrapped around a value
[00:16:03]
and that context might be the fact
[00:16:05]
that it's present or not.
[00:16:07]
It might be the fact that it's a future value
[00:16:10]
that you may or may not have,
[00:16:15]
but it's very kind of hand wavy
[00:16:17]
and I think context is such a broad word
[00:16:19]
that it's not always useful to use,
[00:16:21]
but it's that thing that you've got your type A
[00:16:25]
oftentimes your type is adding some kind of extra context
[00:16:29]
about the A.
[00:16:30]
Right.
[00:16:31]
And naturally if you're dealing with these things,
[00:16:35]
I mean in a way, if you're dealing with them
[00:16:37]
in a more declarative way,
[00:16:39]
whereas if you're dealing with things in an imperative way,
[00:16:44]
you're dealing with a lot of these things
[00:16:46]
and you don't necessarily need these kinds of abstractions
[00:16:49]
in the same way.
[00:16:50]
The abstractions that emerge look different I think.
[00:16:52]
But if you're dealing with things in a declarative way
[00:16:55]
essentially what you've got with a lot of these things
[00:16:57]
that can be transformed in these ways
[00:16:59]
is sometimes you can't directly apply a transformation.
[00:17:04]
You know, perhaps with something like maybe you could
[00:17:07]
under the hood, because it's just a wrapped data type,
[00:17:12]
but not with a random generator or a JSON decode value, or something
[00:17:16]
with context that's able to read things from the outside world,
[00:17:20]
or that sort of represents a future value, tasks, things like that.
[00:17:23]
You can't just map the thing because it doesn't necessarily
[00:17:27]
just exist for you to interact with it directly right away.
[00:17:31]
So what you would have to end up doing
[00:17:33]
if you were dealing with these things without these abstractions
[00:17:35]
is you would have to basically unwrap, apply a function, and rewrap
[00:17:40]
over and over all over the place.
[00:17:42]
So it really does feel like on a desert island
[00:17:45]
a lot of these same patterns would emerge
[00:17:47]
if you were working with these basic tools.
[00:17:49]
And you wanted to do things in a declarative style.
[00:17:51]
I think you've done an episode on the concept of combinators, right?
[00:17:55]
We briefly touched on combinators when we had you on last
[00:17:58]
but we haven't done a full episode on specifically that topic.
[00:18:03]
Okay, well you've written an article about combinators.
[00:18:08]
Since many of these category-theory-ish things
[00:18:11]
could be described as universal combinators
[00:18:15]
I'm hand waving a little bit because this is not quite true.
[00:18:20]
But and then and map and other functions like this
[00:18:25]
I guess they're not always combining two values
[00:18:28]
but they're sort of in that world
[00:18:30]
and they're sort of common utilities
[00:18:35]
on many different data types.
[00:18:38]
Also going back to something you mentioned earlier
[00:18:40]
about different laws and how they're often like
[00:18:42]
very kind of mathematical logical sounding things
[00:18:45]
a lot of them have sort of very practical implications
[00:18:49]
even though when you read the law
[00:18:50]
it sounds like reading a math theorem.
[00:18:53]
So for example the map function has a law that says
[00:18:58]
that mapping the identity function should equal
[00:19:03]
the identity function.
[00:19:04]
Basically mapping identity does not change anything
[00:19:06]
and a really interesting implication of that
[00:19:08]
is that your map function can change items inside your type
[00:19:16]
but it cannot modify the general structure of your
[00:19:20]
what you've been calling the context.
[00:19:22]
So for example a list you cannot add or remove items
[00:19:26]
from the list using map.
[00:19:31]
Otherwise your mapping identity shouldn't change anything
[00:19:33]
but if you drop items from the list
[00:19:35]
that thing would be violated.
[00:19:37]
You can also not reverse the list
[00:19:39]
if you want it to be a map, because otherwise mapping identity wouldn't be the identity function.
[00:19:43]
Yes that's right.
[00:19:44]
So a map makes sure that the list
[00:19:48]
or whatever collection stays in the same order
[00:19:51]
regardless of whether you called it with identity or not.
[00:19:54]
Right and similarly you can't change a just to a nothing
[00:19:59]
as part of a map or okay into an error or anything like that.
[00:20:05]
You can only modify the wrapped value
[00:20:08]
which is a really interesting thing that comes out of that
[00:20:10]
and if you had implemented as a test suite or a fuzz test
[00:20:15]
you would find that if your implementation does modify
[00:20:20]
the overall structure your fuzz test will start failing.
[00:20:23]
Yeah when you do implement these kinds of functions
[00:20:28]
of the type that you made do you write fuzz tests
[00:20:31]
or do you write anything to make sure that those laws are applied and true?
[00:20:36]
I like to write fuzz tests that follow the laws.
[00:20:39]
That's a good way to find out that you've implemented
[00:20:42]
the function according to spec.
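A sketch of what such law-checking fuzz tests look like with elm-explorations/test, using List.map (the doubling and incrementing functions are arbitrary choices):

```elm
module MapLaws exposing (suite)

import Expect
import Fuzz
import Test exposing (Test, describe, fuzz)

-- Fuzz tests pinning down the functor laws for List.map;
-- swap in any type's map (and a fuzzer for it) to check it
-- the same way.
suite : Test
suite =
    describe "List.map laws"
        [ fuzz (Fuzz.list Fuzz.int) "mapping identity changes nothing" <|
            \list ->
                List.map identity list
                    |> Expect.equal list
        , fuzz (Fuzz.list Fuzz.int) "mapping twice equals mapping the composition" <|
            \list ->
                (list |> List.map ((*) 2) |> List.map ((+) 1))
                    |> Expect.equal (List.map ((*) 2 >> (+) 1) list)
        ]
```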
[00:20:45]
Because there's nothing at least in Elm saying that
[00:20:49]
oh this type has a map function
[00:20:52]
there's nothing saying that this respects the laws of the map function.
[00:20:57]
Correct.
[00:20:58]
I wonder, is it useful to call it map
[00:21:03]
if you don't know whether the laws hold or not?
[00:21:07]
For instance someone could have a map on a list like structure
[00:21:12]
but that reverses the list because of some bug.
[00:21:16]
Would you then still call it a map?
[00:21:18]
Would you just call it something else?
[00:21:20]
Do you just report a bug?
[00:21:21]
I think I would probably report a bug on that
[00:21:26]
to the library maintainer.
[00:21:28]
If it is intentional then I might recommend that they name it something else
[00:21:31]
because this sort of violates what people expect of the map function.
[00:21:37]
Yeah even though there's nothing saying
[00:21:39]
hey, this is a map function in the functor sense.
[00:21:46]
Sorry for dropping the f word.
[00:21:50]
I don't know if we've been avoiding saying these words for now
[00:21:55]
but there you go.
[00:21:57]
Now we can say them if we feel like it
[00:22:00]
or avoid them if we want to.
[00:22:03]
To clarify functor is the fancy
[00:22:07]
category theory term for
[00:22:11]
a type where we've implemented a map function
[00:22:13]
that fulfills I think two or three laws
[00:22:16]
one of which is the one we mentioned about identity.
[00:22:19]
Yeah, I think the other one is composition.
[00:22:24]
It's a really fun one that I actually take advantage of pretty frequently.
[00:22:28]
The idea that instead of
[00:22:30]
if I have two functions I want to map
[00:22:32]
let's say I have a list of
[00:22:35]
I don't know a list of integers and I want to double them
[00:22:38]
and then I want to add one
[00:22:41]
instead of mapping once through the list to double them
[00:22:44]
and then mapping a second time to add one
[00:22:47]
I can map once and double and add one at the same time.
[00:22:52]
That's actually a refactor that I do pretty frequently in my own code
[00:22:56]
and that some fancy compilers can actually do automatically for you
[00:23:00]
as a performance optimization.
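The refactor being described, as a sketch:

```elm
-- Two passes over the list:
doubleThenIncrement : List Int -> List Int
doubleThenIncrement numbers =
    numbers
        |> List.map (\n -> n * 2)
        |> List.map (\n -> n + 1)

-- Fused into one pass; the composition law guarantees
-- the result is identical:
doubleAndIncrement : List Int -> List Int
doubleAndIncrement numbers =
    List.map (\n -> n * 2 + 1) numbers

-- doubleThenIncrement [ 1, 2, 3 ]
--     == doubleAndIncrement [ 1, 2, 3 ]
--     == [ 3, 5, 7 ]
```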
[00:23:02]
I tried working on that with elm-optimize-level-2.
[00:23:06]
I actually don't remember how far I got,
[00:23:09]
I probably have a branch somewhere.
[00:23:11]
It's clearly something that could be done
[00:23:13]
but the issue is then
[00:23:18]
whether this map function is trying to respect those rules or not
[00:23:22]
which we know for the core functions but not for the rest.
[00:23:26]
For the most part the community has a good intuition of how they should work.
[00:23:29]
Yeah I agree.
[00:23:31]
And if they don't you probably get some
[00:23:34]
an issue or a PR on your repo.
[00:23:37]
Yeah I would imagine that the fact that
[00:23:40]
Elm is a pure functional language would eliminate
[00:23:45]
a lot of the ways that things would not follow these laws.
[00:23:49]
I feel like it would be pretty easy to not follow a law
[00:23:52]
if you could accidentally throw an exception or depend on some outside state.
[00:23:57]
I feel like there's a whole reason we have these laws in the functional world.
[00:24:01]
They're all functional things like
[00:24:03]
these things must be commutative or the identity function must apply in this way.
[00:24:06]
So they're not about not throwing errors or mutating state
[00:24:09]
it's just when you build your pure function here
[00:24:14]
those aren't specific criteria.
[00:24:16]
Right but wouldn't, I mean if you throw an error then it's definitely not...
[00:24:21]
That kind of breaks, I think, other parts of functional programming,
[00:24:24]
but not the laws.
[00:24:26]
Okay.
[00:24:27]
Yeah it's kind of like when you play Monopoly and you tell your child
[00:24:32]
hey if you roll a six twice then you need to go to prison
[00:24:37]
and then the kid just throws the table.
[00:24:42]
Right.
[00:24:44]
That's a good analogy.
[00:24:46]
So one really interesting thing that comes out of this very broad definition of
[00:24:51]
here's some laws and a few functions that must follow those laws
[00:24:54]
is that for any given type there may be more than one valid implementation
[00:24:59]
that follows the laws and the signatures and all this stuff.
[00:25:03]
So there is not necessarily a single,
[00:25:06]
for example applicative implementation for lists.
[00:25:11]
So applicative being like map2, right?
[00:25:13]
Yes, map2.
[00:25:15]
So in Elm when we map2 with two lists we're effectively zipping them together,
[00:25:20]
but it's also possible to do a map2 in a way that's doing a Cartesian product,
[00:25:25]
so all the possible combinations from the two lists,
[00:25:28]
and that's what Haskell's map2 does,
[00:25:32]
and both of these are completely valid.
[00:25:35]
In fact, Martin Janiczek
[00:25:40]
built a package that has a List.Zip
[00:25:44]
and a List.Cartesian module to showcase the fact that both of these are
[00:25:49]
different ways of having map2 that work in different contexts.
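The two behaviors side by side; elm/core's List.map2 gives the zip version, and a hand-rolled Cartesian version (written here for illustration, not taken from Martin's package) is just as lawful:

```elm
-- elm/core's List.map2 zips pairwise:
-- List.map2 (+) [ 1, 2 ] [ 10, 20 ] == [ 11, 22 ]

-- A Cartesian-style map2 combines every pairing, like
-- Haskell's applicative instance for lists:
cartesianMap2 : (a -> b -> c) -> List a -> List b -> List c
cartesianMap2 f xs ys =
    List.concatMap (\x -> List.map (f x) ys) xs

-- cartesianMap2 (+) [ 1, 2 ] [ 10, 20 ] == [ 11, 21, 12, 22 ]
```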
[00:25:54]
Yeah and the proof for that is that they respect those laws and that they have
[00:25:58]
that special type signature right?
[00:26:01]
Yes.
[00:26:02]
Those are the only criteria.
[00:26:07]
Do they need to match maybe?
[00:26:10]
In Elm no, the name doesn't have to match.
[00:26:12]
Yeah no, because there's no concept of functors or anything.
[00:26:16]
Right.
[00:26:17]
In Martin's package what he's done is just created two modules
[00:26:23]
that expose functions that operate on the same data type,
[00:26:26]
that way he can reuse the name in a different namespace.
[00:26:29]
So you've got List.Cartesian.map2
[00:26:34]
and List.Zip.map2.
[00:26:36]
Gotcha, I thought he had done like a mapCartesian or a mapZip.
[00:26:41]
Yeah, no in this case he just used the namespace and used the same name
[00:26:46]
and they both act on the same core list type.
[00:26:49]
So it's not like each of them has that, there's not the concept of a cartesian list and a zip list.
[00:26:53]
Gotcha.
[00:26:54]
This also happens a lot for one of the simpler patterns called Monoid
[00:26:59]
which is built around this idea of like combining items.
[00:27:04]
So I think what you need is you need some sort of combining function
[00:27:09]
that can combine two values of a type and then you need a sort of empty item
[00:27:14]
that when you combine it with something else is equivalent to identity.
[00:27:18]
And for example looking at numbers you can add two numbers together
[00:27:23]
that's a form of combining and if you add zero you don't change your number.
[00:27:28]
And so addition for numbers is a monoid
[00:27:33]
but it is not the only one because you could do the same thing with multiplication
[00:27:38]
except in this case your sort of base element that when multiplied doesn't change is not zero
[00:27:43]
it is one because any number multiplied by one is itself.
[00:27:48]
So and I actually like monoid as a sort of example
[00:27:53]
in that we often tend to think of these sort of categories as inherent to the type
[00:27:58]
but it's not really about the type itself it's like a thing you can layer on
[00:28:03]
and so what you need is like sort of several different component pieces that all come together
[00:28:08]
and if they're all present then you can talk about the collection as being this category
[00:28:13]
in this case monoid. So you can see if you have numbers
[00:28:18]
and zero and multiplication now you have a monoid
[00:28:23]
but you need all three of those. Numbers by themselves are not inherently monoid
[00:28:28]
that's not a thing about the type.
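The monoid pieces described above, sketched with List.foldl (whose arguments are exactly a combine function and an empty element):

```elm
-- A monoid is three things together: a type, a combine
-- function, and an empty element that combine leaves alone.

-- Int with (+) and 0:
sumOfAll : List Int -> Int
sumOfAll =
    List.foldl (+) 0

-- Same type, different monoid: Int with (*) and 1:
productOfAll : List Int -> Int
productOfAll =
    List.foldl (*) 1

-- String with (++) and "" forms one too:
joined : List String -> String
joined =
    List.foldl (\s acc -> acc ++ s) ""

-- sumOfAll [ 1, 2, 3 ] == 6
-- productOfAll [ 1, 2, 3 ] == 6
-- joined [ "a", "b", "c" ] == "abc"
```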
[00:28:29]
Right like you could have some special kind of numbers that you cannot add.
[00:28:37]
In fact you can define a number type and just not implement the addition function.
[00:28:42]
Exactly yeah and then it wouldn't be a monoid.
[00:28:47]
Exactly the interesting thing is that someone can provide a type in a package
[00:28:52]
and not write some of these functions and so it is not a functor or a monad or any of these other things
[00:28:57]
but then I pull down this package and in my application I write one of these functions
[00:29:02]
to act on this third party type and all of a sudden within the context of my code
[00:29:07]
it is a functor because now we have all the pieces.
[00:29:12]
That's a great point.
[00:29:12]
If you have the building blocks to implement that function.
[00:29:17]
Yes so for example I could create a maybe type
[00:29:22]
and publish it without the map function on it and then you could decide
[00:29:27]
well I need to map over it I'm going to write a map function for your maybe type.
[00:29:32]
Obviously.
[00:29:32]
And now, within the context of your code, my maybe is a functor,
[00:29:37]
or rather the combination there is.
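A sketch of that scenario (the `MyMaybe` type and its constructors are invented for illustration):

```elm
-- Say a package published this maybe-like type without a map:
type MyMaybe a
    = MySome a
    | MyNone

-- Writing the map ourselves, in our own application, completes
-- the functor within the context of our code:
map : (a -> b) -> MyMaybe a -> MyMaybe b
map fn myMaybe =
    case myMaybe of
        MySome a ->
            MySome (fn a)

        MyNone ->
            MyNone
```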
[00:29:42]
I sometimes find it is an easy approximation to talk about these things as adjectives rather than nouns.
[00:29:50]
So I can say this type is monadic not this type is a monad because it has these functions
[00:29:55]
so this is like an attribute of it rather than being an inherent quality.
[00:30:00]
Yeah it's not like you create a type and you say okay this needs to be monadic
[00:30:05]
this needs to be blah blah blah it's you add the function and therefore it is monadic.
[00:30:10]
Therefore it is a monad.
[00:30:15]
I think you can talk about it in a noun sense to say oh I have the maybe monad
[00:30:20]
but at that point you're not talking about maybe the type you're talking about this sort of
[00:30:25]
the fact that I have a type and some functions and they all follow the laws.
[00:30:30]
And I think that gets confusing to people when you talk about the x pattern.
[00:30:35]
So there's an interesting example that when I realized this it sort of hurt my brain a little bit
[00:30:40]
which is that the Elm GraphQL selection set is inherently not monadic
[00:30:45]
because a selection set represents both the request for saying
[00:30:55]
I would like this data and the decoding to say okay I'm going to turn this into an Elm type from the server
[00:31:02]
but because of that you can't say get this user's role
[00:31:07]
and then make a follow up selection set because inherently
[00:31:12]
well I mean I guess I guess that would represent multiple requests to the server
[00:31:17]
right but inherently a selection set represents a single request to the server
[00:31:22]
so it is inherently not monadic not just because it's not exposed
[00:31:27]
but that operation doesn't really fit in that abstraction
[00:31:32]
particularly because in the rules that you set up for what a selection set does
[00:31:37]
you don't want multiple requests to be done.
[00:31:42]
Exactly. Everything has to be done within a single request.
[00:31:47]
Yes. And by the monad laws, and then would imply a second request, which is a thing you don't want to do.
[00:31:52]
Exactly. Exactly. So the semantics that it would imply for the outside world
[00:31:57]
would not fit into the desired semantics for the library's goal
[00:32:02]
the whole point is you don't want to make repeat requests to the server
[00:32:07]
you want to make a single request to say exactly what you need.
[00:32:12]
Just to clarify that is not an issue right?
[00:32:17]
No it's not. It's not like because there is no and then, therefore there's a problem with your library.
[00:32:22]
Right. Well I mean it's just a constraint
[00:32:27]
and it's just one building block that you can't use and sometimes it is quite convenient
[00:32:32]
like I don't know when I think about these patterns
[00:32:37]
in sort of layman terms I think about I'd really like to have this concrete data
[00:32:42]
not this wrapped data thing I want give me the actual data you've got
[00:32:47]
and then let me continue on and do something with it or I mean of course as we've discussed
[00:32:52]
these analogies fall apart because sometimes it's continuing sometimes it's dealing with presence or absence
[00:32:57]
sometimes it's dealing with failure or success but you know
[00:33:02]
timey wimey you get the idea something along those lines but you can't do that
[00:33:07]
you can't say I'd like to take this thing and then and then I'd like to perform this check
[00:33:12]
and fail if I don't have this thing you just can't do that you have to find other ways to solve it
[00:33:17]
so like it just changes the way that you approach solving certain problems
[00:33:22]
I think there's a really interesting thing here that goes back to the why is it useful for
[00:33:27]
Elm programmers to learn these concepts, and we can kind of get it just by the signature
[00:33:32]
but given the fact that someone might know the definition of what and then does
[00:33:37]
in general, or be familiar with its signature, and that they know that in your library
[00:33:42]
a selection set represents both a request and parsed data. With those two facts
[00:33:47]
I can tell you that and then is going to involve multiple requests.
[00:33:52]
That's something that I think came out in the conversation
[00:33:57]
we were having earlier, but that might not be obvious,
[00:34:07]
you'd have to dig through if this was named something else but because it's named and then
[00:34:12]
and you know how and then works then you can quickly
[00:34:17]
sort of apply it to the domain that it's in and quickly pick up some information
[00:34:22]
So I feel like all those functions are mostly there for familiarity
[00:34:27]
If you see a map function you expect it to behave this way
[00:34:32]
you expect it to follow those laws that we mentioned and I feel like that's
[00:34:37]
pretty much the only reason to have those for familiarity
[00:34:42]
at least in Elm. Well they're just common operations that you end up doing
[00:34:47]
common transformations that we end up doing on data. But the reason why you would name them
[00:34:52]
that way or that you would have the arguments in that order
[00:34:57]
Yes. But yeah obviously like if you want if you have a selection set which contains some data
[00:35:02]
then map makes sense whether you call it that way or whether you make it
[00:35:07]
follow those laws or not is a different consideration
[00:35:12]
but I think it makes sense to have them follow those laws and have that name because it makes it
[00:35:17]
very familiar to people who know list.map who know maybe.map etc.
[00:35:22]
So I feel like it's all about familiarity and just making it easier to
[00:35:27]
to grok what a piece of code does based on your knowledge of what other pieces of code do
[00:35:32]
I would say that's the general value of the concept of patterns
[00:35:37]
Yeah I guess. Even like in the object oriented world right the idea of design patterns or
[00:35:42]
standardized ways of solving common problems it's much easier for me to look at some code and be like oh that's a decorator
[00:35:47]
rather than being like huh okay so there's an object that's wrapping another object
[00:35:52]
and there seems to be some kind of delegation happening here and instead of like really digging into the implementation
[00:35:57]
I can look at it and recognize the pattern and then immediately I know it's one of those big building blocks
[00:36:02]
that I've seen in 50 other contexts. So what about where Elm diverged, like
[00:36:07]
List.concatMap? It's not called List.andThen but it has the same signature
[00:36:12]
and vice versa with things that use andthen
[00:36:17]
so it seems like in many cases Elm opts for
[00:36:22]
using more domain language than category theory language
[00:36:27]
but there are trade offs there. I mean like the
[00:36:32]
you could imagine a world where we just say let's be consistent with this naming
[00:36:37]
if it follows these laws let's pick a term for it that we're going to use in the Elm community
[00:36:42]
and let's stick with that in all of the core libraries at least so let's call it List.andThen
[00:36:47]
or let's use concatMap elsewhere whatever it is let's stick with it. But it does get difficult to sort of
[00:36:52]
because again the semantics can feel so different even if the pattern
[00:36:57]
is the same. Yes I think in fact lists are really tricky
[00:37:02]
because it's often where you're first introduced to a lot of these things particularly
[00:37:07]
something like map and if you know your only example that you've seen is list map
[00:37:12]
you're going to think map is about iteration and list traversal and so when you see
[00:37:17]
Maybe.map or Json.Decode.map and it's not about traversing a list
[00:37:22]
you get really confused. Or at least I did.
[00:37:27]
Yeah especially coming from JavaScript where the only map function that I know is on arrays
[00:37:32]
so collections. Yes although fun fact
[00:37:37]
the promise then method is effectively map on a promise
[00:37:42]
but it's also an and then. Yes it is also an and then.
[00:37:47]
Right.
[00:37:52]
So is it... what kind is it? It's monadic
[00:37:57]
but the exception part isn't really well behaved
[00:38:02]
the errors don't really map cleanly, I don't know if that breaks any of the monad laws. It kind of does yes
[00:38:07]
as an approximation you can say that promise then is
[00:38:12]
like a similar to a monad it's similar to a functor or similar to map similar to and then
[00:38:17]
because you can do promise dot resolve to get just the unit value
[00:38:22]
and you can do then on the promise to continue.
[00:38:27]
Well it gets interesting if you compare it to say elm task which is very similar to JavaScript's promise
[00:38:32]
where elm task has two very separate functions for map and then if you're looking at some code
[00:38:37]
and you see that it applies a task map you know that
[00:38:42]
let's assume these tasks are just HTTP requests you know it's only making one request and then
[00:38:47]
applying a transformation to the result of that request. Map cannot make a second request
[00:38:52]
because of the functor laws but. Can you clarify why?
[00:38:57]
Because if you made a second request
[00:39:02]
it would end up
[00:39:07]
let's see, I think if you made a second request you'd end up with like a doubly nested
[00:39:12]
task. Yes you would have a task of a task. Right which is actually why
[00:39:17]
some languages instead of calling it and then will call it flat map because it's
[00:39:22]
doing a map but with another task, and then you've got this nested thing and you have to flatten it down
[00:39:27]
and so because of the way the signatures and the laws work out when you
[00:39:32]
see a map you know it's a sort of a local operation it's a safe thing as opposed to if you
[00:39:37]
see an and then you know okay it's doing some logic potentially and then making another
[00:39:42]
request whereas if you see a chain of thens in JavaScript you don't know
[00:39:47]
did we make one request and then just do a bunch of local transformations or did we
[00:39:52]
do 10 requests and now we get back to another of the functor laws
[00:39:57]
which is I think what you were mentioning earlier that you were trying to
[00:40:02]
do with elm-optimize-level-2 where all of those like a chain of maps can be like
[00:40:07]
flattened down and what you can do is all those functions that were in each of the maps just pull
[00:40:12]
them out into a separate named function. So in Elm if I see task
[00:40:17]
and then just task map task map task map like five of these I can just look at all those five functions
[00:40:22]
create my own named function pull all of that out and say there is one
[00:40:27]
transformation that gets applied at the end of my request. In JavaScript I
[00:40:32]
can't know to do that because if some of them are making extra requests that's not just a pure
[00:40:37]
transformation. So then I'd have to say okay well these two can be pulled out but then I
[00:40:42]
need another I actually need a then to make that second request and then some more
[00:40:47]
transformations. So you have to really check each item at a time.
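That refactoring rests on the functor composition law, `Task.map f >> Task.map g == Task.map (f >> g)`; a sketch with a hypothetical request:

```elm
import Task exposing (Task)

-- A chain of maps describes exactly one request followed by pure
-- transformations, so it can always be collapsed into a single map.
chained : Task x Int -> Task x String
chained request =
    request
        |> Task.map (\n -> n * 2)
        |> Task.map String.fromInt

-- Equivalent by the functor composition law: one named transformation
-- applied at the end of the request.
collapsed : Task x Int -> Task x String
collapsed request =
    request
        |> Task.map ((\n -> n * 2) >> String.fromInt)
```

A chain of `then`s in JavaScript offers no such guarantee, because any step might be producing another promise.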
[00:40:52]
So you could with a promise if it didn't behave as an and then
[00:40:57]
if it only behaved as a map then you could potentially right. Yes if you saw
[00:41:02]
if it only behaved as a map and you saw a chain of thens you could reduce it down to a single then and then just
[00:41:07]
pull out all of those extra things into one function. But it's not safe to do that
[00:41:12]
if you are potentially making more requests.
[00:41:17]
And I guess I mean naturally you would reach for the simplest tool you could in a lot of
[00:41:22]
cases and try to use map where possible. Kind of like, I mean, Jeroen and I
[00:41:27]
often come back to Richard's Scaling Elm Apps talk where one of the
[00:41:32]
main takeaways is if a function doesn't need more data give it less
[00:41:37]
data. If a function doesn't need to pass back more data to the
[00:41:42]
calling code then don't return as much data just constrain things more so
[00:41:47]
that you can more easily reason about what does this code depend on and what can this code do.
[00:41:52]
And that same principle I think applies from what you're saying here if you
[00:41:57]
don't need and then if you can do it with a map that's less cognitive complexity for you to hold in your
[00:42:02]
head when you're looking at the code or when you're debugging and trying to find where an issue is coming from.
[00:42:07]
Yes. And in fact I've often seen people use and then when they only need a map
[00:42:13]
and the classic thing that you'll see them do is and then some pure function
[00:42:18]
then like adding an explicit wrapper. Right. I want to say, is there an
[00:42:23]
Elm review rule for that? No, but there definitely could be.
[00:42:28]
Yeah. So like a classic case would be something like maybe and then some pure function
[00:42:33]
and then just wrap the result in just. But that's basically just saying wrap it in just and then
[00:42:38]
unwrap it because and then is you can think of it as a flat map
[00:42:43]
and so it's just going to apply map but you wrap it in just it's going to flatten
[00:42:48]
it back down and you've done extra work for nothing. I might have something
[00:42:53]
similar to that in Elm review simplify. OK. And if not someone could create an issue I'm sure.
[00:42:58]
Or a pull request. Yeah. Or pull requests while listening to this episode
[00:43:05]
because our audience or listeners are very quick.
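For reference, the anti-pattern described above and its `map` equivalent look like this (`maybeName` is a made-up example value):

```elm
-- andThen where map suffices: wrap in Just only for andThen to
-- immediately flatten it back down. Extra work for nothing.
redundant : Maybe String
redundant =
    Maybe.andThen (\name -> Just (String.toUpper name)) maybeName

-- The simplification: a pure function only ever needs map.
simpler : Maybe String
simpler =
    Maybe.map String.toUpper maybeName

maybeName : Maybe String
maybeName =
    Just "Joël"
```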
[00:43:11]
But I guess to come back to the sort of original question that we're asking right should Elm devs care about category
[00:43:16]
theory or why should we care about it. One thing that really helps is to
[00:43:21]
because you can recognize some of these larger building blocks and patterns you can immediately gain some
[00:43:26]
much deeper insights into how the program works and what it does or doesn't do just by looking at some of these
[00:43:31]
function names. Right. Yeah. That's a great takeaway.
[00:43:36]
I feel like in the Elm Slack, like, I've seen some conversations which I will not be able to remember so I can't give an example
[00:43:41]
where someone asked like is it possible to have a
[00:43:46]
function that does this or does that with this type
[00:43:51]
and people say well no that is not possible because you're missing
[00:43:56]
some constraints on your type or on your functions or it is possible and
[00:44:01]
here is the type signature that you will see in other places. So I feel like some people are able to recognize
[00:44:07]
when something is possible when something is not possible very quickly thanks to this.
[00:44:12]
I'm thinking like I don't know if I'm pulling this out of thin air but like
[00:44:17]
contravariance could be one example maybe. I don't even know where that one is.
[00:44:22]
But yeah where you have like B to A as a function
[00:44:27]
and then you can construct something. I'm going to stop here because I'm
[00:44:32]
winging it. There's definitely cases where you can use it to say well here are standard solutions to a particular problem that follows one of these patterns or sometimes even built on top of these patterns.
[00:44:41]
So you have a list of maybes and you want to accumulate them into a single
[00:44:46]
maybe with a list of values. That is a solved problem,
[00:44:51]
and what's really interesting about this
[00:44:56]
is that you can actually generalize it
[00:45:02]
to pretty much any type
[00:45:07]
as long as it implements map2.
[00:45:12]
So you know as long as your type is applicative then you can do this
[00:45:18]
kind of sequencing of values together or combining.
[00:45:23]
You can either sequence or combine for this kind of operation.
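For the `Maybe` case the whole solution is a fold over `map2`; a sketch:

```elm
-- Accumulate a list of maybes into a single maybe of a list,
-- failing if any element is Nothing. Only map2 and a base case are
-- needed, which is why the same shape works for any type with map2.
combine : List (Maybe a) -> Maybe (List a)
combine maybes =
    List.foldr (Maybe.map2 (::)) (Just []) maybes
```

So `combine [ Just 1, Just 2 ]` gives `Just [ 1, 2 ]`, while a single `Nothing` anywhere collapses the whole result to `Nothing`.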
[00:45:28]
So this might be a big ask Joël but can you try to break down a few of these patterns like you just did with this sort of idea of like OK when you see map2 you know you can combine things.
[00:45:39]
What are some other patterns and some other things that you know you can do when you have those patterns.
[00:45:45]
One that I like and again this is an approximation it's not always true but I typically think of
[00:45:52]
monads as being serial or sequential and applicatives as being parallel.
[00:45:59]
So if you're doing a map2 or map3 or map4 the important thing is that all of the things you're combining together are all independent of each other.
[00:46:10]
So in theory if you're doing a map4 with four different HTTP requests those could be executed in parallel.
[00:46:18]
Elm does not for the moment. That's an implementation detail. They've chosen to do it in sequence.
[00:46:25]
But in theory they are independent and can be executed in parallel in a way that doing a bunch of and thens for make one request and then make another request and then make another request.
[00:46:35]
They must be sequential because you can't make the second request until the first request resolves because you need data from the first request in order to even construct the second one.
[00:46:45]
So there's an inherent dependency chain in and thens that you don't have with map2 or map3.
[00:46:53]
Now there's a little bit of fuzziness there in that that's only true with regards to the type variable that you're working with.
[00:47:03]
So for example parsers they're all independent of each other for the thing that they're parsing.
[00:47:08]
So the output of one parser won't impact the value of the second parser but they're all operating sequentially on the underlying string.
[00:47:17]
So there is a sort of inherent sequentiality to them but they are independent of each other.
[00:47:23]
So that serial versus parallel model falls apart in some cases.
[00:47:27]
But for many or most of the and thens and map2s that I've experienced it's a useful distinction to keep in mind.
[00:47:38]
Yeah I mean at the very least you know with and then just based on the type signature you know you're going to be able to take the concrete value and you're going to be able to have that and then give something else back.
[00:47:51]
That's like what the signature says.
[00:47:53]
So if it is something where it's performing operations you know it's going to need to be sequential because you actually have a resolved value at that point.
[00:48:02]
But the naming does get interesting there right because the word then does imply time and you know maybe with a list we're not dealing with time and so maybe we call it concat map and it gets a little fuzzy doesn't it?
[00:48:16]
Right right and even something like maybe right you're not dealing with something that's in the future.
[00:48:24]
Right.
[00:48:25]
There is an implied sort of fact that the second maybe is going to be resolved after the first one because there is that order that's there.
[00:48:33]
So maybe then kind of makes sense there.
[00:48:36]
Right, but the general thing you're doing is you're taking some somehow resolved or concrete or present value, or whatever that fuzzy thing represents, and you're able to return a new one of those with and then. So okay, cool, so that's sort of this.
[00:48:54]
When you have a monad you have and then that's what you can do. What are some of these other patterns?
[00:49:00]
So we talked a little bit about monoid earlier the idea of we have some combining function and we have sort of a base case that always does nothing when combined with another value.
[00:49:11]
Is there a usual name for the combining function? Do you know?
[00:49:15]
What is it? The... oh I forget what it is.
[00:49:19]
The base case is sometimes called mempty in Haskell and they have a name for the other one.
[00:49:26]
I don't know if those are standard names or if that's just the Haskell names for it, but what's interesting about this is that it may have actually kind of set off a light bulb in your mind.
[00:49:36]
The idea of a combining function and a base case is more or less what folding does.
[00:49:42]
And so if you have a monoid, then you can inherently fold collections of that or lists of that type using that monoid.
[00:49:54]
So most of the classical sort of fold function examples will show like summing a list is effectively just applying that monoid to things.
[00:50:04]
But you can also multiply things.
[00:50:06]
Something like asking are all values in a list true according to this predicate or are any of them true?
[00:50:15]
Now you're effectively folding the Boolean operator or or the Boolean operator and.
[00:50:21]
On Booleans, and is a combining function, and anding a value with false returns, sorry, anding a value with true returns whatever your value was.
[00:50:37]
So anding values is a monoid and same thing with or.
[00:50:41]
Orring a value with true returns self and or is a way to combine two Booleans.
[00:50:46]
Orring a value with false returns self, orring a value with true returns true.
[00:50:51]
Yeah, yeah, I got it reversed.
[00:50:54]
Right, right. Yeah, no, that's a 50-50 chance.
[00:51:00]
And I got it wrong with both.
[00:51:04]
You were very unlucky.
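Spelled out in Elm, the two Boolean monoids are exactly what folding gives you; these helpers mirror what `List.all identity` and `List.any identity` do:

```elm
-- and: combining with True (the identity) leaves a value unchanged.
allTrue : List Bool -> Bool
allTrue =
    List.foldl (&&) True

-- or: combining with False (the identity) leaves a value unchanged.
anyTrue : List Bool -> Bool
anyTrue =
    List.foldl (||) False
```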
[00:51:08]
So like would a set be another example of that?
[00:51:11]
You can have an empty set and you can union a set.
[00:51:14]
Yeah, you right.
[00:51:17]
Yeah, you can union with that.
[00:51:19]
You cannot insert an empty element, but you can union two sets, right?
[00:51:23]
So you could form a monoid for sets with union as your combining function and empty set as your, I guess, identity value.
[00:51:33]
You could. I'm not sure for some of the other set operations if intersection works or not.
[00:51:38]
I mean, I think effectively you can you can think of union like or an intersection like and.
[00:51:46]
Yes. So it's probably possible.
[00:51:51]
Yeah, I have to think about that.
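The union case sketches out directly; intersection is the tricky one, since its identity element would have to be a set containing every possible value, which you can't construct generically:

```elm
import Set exposing (Set)

-- Set.union as the combining function, Set.empty as the identity:
-- a list of sets folds down to a single set.
unionAll : List (Set comparable) -> Set comparable
unionAll sets =
    List.foldl Set.union Set.empty sets
```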
[00:51:55]
Where this is really interesting is that functions are themselves monoids because you can combine two functions with function composition.
[00:52:05]
Right. And you can combine them with identity.
[00:52:10]
Yes. And I've actually used this on.
[00:52:15]
In fact, this is not even like a functional programming thing.
[00:52:17]
This is a Ruby project.
[00:52:19]
And I had to do this complex search form with a bunch of search parameters.
[00:52:25]
And the way I'd often solve this in Ruby was just like a bunch of like if else statements like if this parameter is there,
[00:52:32]
try to filter your results through this way. But then like all the combinations get really weird and it gets really messy.
[00:52:38]
So instead, what I ended up doing was creating a bunch of query objects, which are you can think of them almost like functions,
[00:52:47]
but they operate on the database and made them composable.
[00:52:52]
And then based on the parameters in the URL, I would just construct a bunch of these and then fold through them to combine them all together,
[00:53:00]
effectively just to compose them with the empty query as my base value.
[00:53:07]
That's how I built up my search.
[00:53:10]
At the time, I didn't know about monoid. I didn't really know that much about folding even.
[00:53:15]
It just kind of worked out. But years later, when I learned about monoid, I was like, oh, that like really cool search thing I did in a Rails project once.
[00:53:23]
That was monoid plus folding.
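In Elm terms, that search-building trick is a fold over composable functions, with `identity` as the empty element (a sketch, with list filters standing in for the query objects):

```elm
-- Functions from a type to itself form a monoid: (>>) combines
-- them and identity is the element that does nothing.
composeAll : List (a -> a) -> (a -> a)
composeAll filters =
    List.foldr (>>) identity filters

-- Hypothetical usage: build up a search from whichever filters apply.
search : List Int -> List Int
search =
    composeAll
        [ List.filter (\n -> n > 0)
        , List.sort
        ]
```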
[00:53:25]
It's worth noting that folding doesn't always have to be on a monoid. I think that's a common misconception.
[00:53:30]
You can fold things that are not monoids because there are some operations that don't follow the monoid laws where your base case isn't necessarily the identity.
[00:55:39]
And so folding is a broader set of things, but monoids are always foldable.
[00:53:43]
Yeah. So I'm currently thinking that if you provide a functor, so if you provide a map function, and it is a monoid, would you say it is monoidal?
[00:53:58]
Sure. Yeah, we can say it's monoidal.
[00:54:00]
Yeah. So if it is a monoid and it has a map function, so it's a functor, then you can implement like map reduce strategies, right?
[00:54:10]
Oh, that's interesting.
[00:54:11]
And now I'm realizing that in Elm Review, when I ask authors to create the rule, I ask them to provide something that actually looks like give me an initial value, kind of a neutral value.
[00:54:31]
For the context?
[00:54:33]
Yeah, for the context, the project context.
[00:54:35]
And I also ask them to fold them, a way to fold them. And internally, then I do like map reduce to combine these together.
[00:54:45]
So multiple project contexts into one project context, which allows me to do pretty nifty things under the hood.
[00:54:52]
So I think this may be a little loose with the terminology here.
[00:54:55]
When I'm talking about that monoids can be folded, it's not the monoid itself that can be folded.
[00:55:00]
It's a collection of them.
[00:55:02]
So, yes, numbers are not foldable, but a list of numbers is foldable.
[00:55:07]
Yeah, two numbers are foldable together into one.
[00:55:11]
Well, two numbers can be combined monoidally with addition.
[00:55:15]
A list of numbers can be folded using the addition monoid or folded with the multiplication monoid or however you want, whichever one you want to use.
[00:55:25]
Right. Given that you have an operation that allows you to combine them, you can fold them.
[00:55:30]
Given that you have a monoid for some values, a list of those values can be folded.
[00:55:36]
So to your example earlier, it's your collection, your list that's a functor and the items in the list that are monoids.
[00:55:44]
And then you can do your map reduce.
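A tiny sketch of that map-reduce shape: map each item into a value you know how to combine, then fold them together:

```elm
-- Map each element into a monoidal value (Int with (+) and 0),
-- then reduce: a minimal map reduce.
totalLength : List String -> Int
totalLength names =
    names
        |> List.map String.length
        |> List.foldl (+) 0
```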
[00:55:46]
Yeah, this gets really powerful once you start noticing some of these pieces.
[00:55:50]
And sometimes you notice like, oh, there's an operation that would be really nice if these things were monoids, but they're not.
[00:56:00]
But also, I know that they could be if I implemented one function.
[00:56:05]
And so it's like you implement one function, then everything else falls into place and you can write a really nice program.
[00:56:12]
Right. That is sort of the beauty of patterns, isn't it?
[00:56:14]
Is you don't have to reinvent the wheel.
[00:56:16]
It's sort of like this is going to pop up again and again and I'm going to have to think really hard.
[00:56:22]
How am I going to solve this problem?
[00:56:24]
And then I'm going to come up with the same solution.
[00:56:26]
If only I sort of understood that pattern better.
[00:56:29]
So that is something I would like to strive towards, to be a little more versed in these patterns so that I can sort of pull them out as needed more and more easily and add them to APIs as needed.
[00:56:42]
And sometimes you already have like all the machinery in place and all you're missing is that one cog.
[00:56:47]
But if you don't know that it's that one cog you're missing, you're going to rebuild the whole machine.
[00:56:52]
Right.
[00:56:53]
Instead of being like, oh, I just need to implement this one little function and then everything else is just going to work.
[00:56:59]
Right. Yeah, I wonder, like, you know, I mean, when I built Elm GraphQL, I was very new to all of these things.
[00:57:07]
In fact, like I literally read an article, like I found an article explaining what a phantom type was.
[00:57:13]
I'm like, that's what I need. And that day added that as I was like exploring building this API.
[00:57:19]
But yeah, it took me a while. It would have been nice to know that, hey, having the ability to just have an empty thing that when combined with anything else is itself, that would be cool.
[00:57:31]
And then you have a selection set that's empty and you can.
[00:57:35]
So I suppose that would be a monoid because you can take an empty selection set and combine it with another thing.
[00:57:43]
You could fold over that. That's useful knowledge.
[00:57:47]
So what is your take on names? So there's a lot of discussion around this.
[00:57:53]
So if something has and then do you call it and then able or do you call it a monad or how would you teach it around names?
[00:58:04]
Oh, that's a really hard question because I think it is useful to have a name for things that have and then.
[00:58:13]
I think that having a name like monad is not useful for beginners, people who are learning.
[00:58:20]
And in fact, even talking about it in terms of some larger pattern is not useful for beginners.
[00:58:26]
Because they haven't grasped it in their head yet.
[00:58:29]
And now you're trying to see like what are the things in common, what are the patterns?
[00:58:34]
And it's just so abstract that you tell them, oh, it's just an and then function that follows these laws.
[00:58:41]
And it's like, well, that's too simple. What does it actually mean?
[00:58:45]
And then you get stuck in all these like really high level things that just kind of make you spin in circles as opposed to just being like, oh, here's how I combine.
[00:58:54]
Maybe here's how I combine results and then over time building up an intuition for how it works.
[00:58:59]
Eventually, you're going to want a name for all of these things.
[00:59:02]
Either just to recognize the pattern, like put a name on a pattern you've seen sort of emerge, but also to have conversations with other people or with yourself to say, oh, this thing here, if I make it and then, all of a sudden a bunch of other pieces fall into place.
[00:59:19]
Or you're doing review for someone and you're saying, you're doing a lot of work here and it looks like you're chaining things in a monadic sort of way.
[00:59:29]
And I've used this with people who know what the term means in nonfunctional programming contexts because it is a very useful term.
[00:59:37]
I guess there's also just the fact that if you want to know more about the pattern, if you want to research more, then googling and then is not going to give you much, but googling monads will give you a lot of resources.
[00:59:52]
Yes. Unfortunately, most of those are not going to be helpful.
[00:59:59]
That's a different thing.
[01:00:01]
Yes. Yes. I think also what's sometimes tricky with these things is that a lot of the literature around these terms is written in a very kind of formal language.
[01:00:13]
Like it's written almost by academics for academics.
[01:00:16]
I remember I gave a talk last fall on folding and a particular sort of style of folding, which I didn't mention in the talk, but it's called a catamorphism.
[01:00:27]
And anything that I read on folding and catamorphisms got really deep.
[01:00:33]
Even the ones that are claiming to be like written for programmers in an accessible way.
[01:00:38]
And you get a few paragraphs in and we're talking about how F algebras compose and arrows.
[01:00:43]
I'm just completely lost.
[01:00:45]
So sometimes when you dig in on the terms, most of the material that's out there is not written for people like us who are trying to write programs in the real world.
[01:00:55]
And I wish there were more tutorials out there that try to give more of a practical introduction to some of these concepts.
[01:01:04]
And I feel like you've written a few.
[01:01:07]
I've tried. The talk I gave about folding was exactly that.
[01:01:12]
I was trying to say, let's take these concepts and make them accessible to people and not worry too much about the formal definition of what a catamorphism is, but instead, how is it useful?
[01:01:24]
My example was working with trees and particularly inverting a binary tree on a whiteboard in one line of Elm.
[01:01:32]
That was my my selling point.
[01:01:34]
Yeah, we'll definitely link to that talk.
[01:01:36]
Yeah, it is. It would be so nice to just have it written in plain English in one place.
[01:01:42]
Like, here are all these kind of common patterns.
[01:01:45]
Here's what they look like in Elm.
[01:01:47]
Here's here are the kinds of problems you can solve with them.
[01:01:51]
This conversation has definitely been illuminating. So thank you so much for coming on to illuminate these things for us and dive in.
[01:01:59]
It's been a pleasure. Thanks for bringing me on.
[01:02:02]
Are there any concepts that you would recommend people learn first?
[01:02:06]
And do you have any resources? Because you said it's very hard to find resources.
[01:02:10]
So if you have good links, that would be useful.
[01:02:14]
I'm a big fan of just digging into Maybe.
[01:02:18]
I think it's a great place to learn, particularly because Maybe exposes its constructors.
[01:02:23]
So you can manually unwrap it and manipulate it and sort of see how it works under the hood.
[01:02:29]
An exercise I've done with people before is actually to have them implement their own map and andThen on a Maybe to see how that works.
[01:02:39]
I also have an article called The Mechanics of Maybe that looks at it less from a patterns perspective and more from a beginner's perspective.
[01:02:49]
You're new to Maybe? Here are some common situations that you run into.
[01:02:53]
You would write a case expression like this. And because it's such a common situation, the library provides the helpers for you, which end up being things like map and andThen.
[01:03:03]
But the intuition, even for just the difference between map and andThen.
[01:03:08]
I think you touched on it a little bit, Dillon: the idea that map only transforms the value inside and does not produce a new Maybe, whereas andThen does, and sort of what the implications of that can be.
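The exercise described here, re-implementing map and andThen by pattern matching on Maybe's exposed constructors, might look roughly like this (a sketch mirroring the signatures in elm/core):

```elm
module MaybeExercise exposing (andThen, map)

-- A hand-rolled version of Maybe.map: transform the value inside
-- without changing whether it is present.
map : (a -> b) -> Maybe a -> Maybe b
map transform maybe =
    case maybe of
        Just value ->
            Just (transform value)

        Nothing ->
            Nothing


-- A hand-rolled version of Maybe.andThen: the callback itself decides
-- whether the result is present, so it returns a new Maybe.
andThen : (a -> Maybe b) -> Maybe a -> Maybe b
andThen callback maybe =
    case maybe of
        Just value ->
            callback value

        Nothing ->
            Nothing
```

For example, `map String.fromInt (Just 3)` gives `Just "3"`, while `andThen List.head (Just [])` gives `Nothing`: the callback passed to andThen can collapse a present value into absence, which map never can.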
[01:03:20]
Right. I really like this idea of using Maybe as an exercise to build these things up from scratch.
[01:03:28]
One thing that's a little bit elusive is that Maybe doesn't have a succeed function.
[01:03:33]
I mean, it does. It's just not called that. And it's not a standard function, it's a constructor function. It's Just.
[01:03:39]
And so you have to really squint your eyes to see those patterns there.
[01:03:44]
Yeah. And the reason you bring it up is because the strict definition for what a monad is requires an andThen function and some form of constructor.
[01:03:54]
Right. We've sort of not mentioned that out loud on the episode, but that's what the formal definition says.
[01:04:02]
And some types that don't expose the internals of the type, for example Task or JSON decoders, will often expose a succeed function, which is a constructor.
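The parallel between Just and those succeed functions shows up when you line up the signatures; these are real functions from elm/core and elm/json:

```elm
-- All of these lift a plain value into the "wrapped" type;
-- the monad literature calls this `return` or `pure`.

Just : a -> Maybe a                    -- a constructor doubling as "succeed"
Ok : value -> Result error value       -- same idea for Result
Json.Decode.succeed : a -> Decoder a   -- opaque type, so an explicit function
Task.succeed : a -> Task x a
```

Because Maybe exposes its constructors, Just plays both roles; Decoder and Task are opaque, so the library has to hand you succeed explicitly.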
[01:04:15]
Right. Exactly. Yeah. So, Dillon, I'm curious if you used any of these patterns and concepts when you were designing the elm-graphql library.
[01:04:25]
You know, I wish I even knew what these things were back then. They were much fuzzier in my mind at the time, like just conceptually in category theory and those things.
[01:04:36]
But even just the patterns themselves were less clear in my mind. So I had no idea about these things.
[01:04:43]
And the original elm-graphql, which was called Graphqelm, had these core concepts of, you know, making it safe to query for your GraphQL data, and having phantom types to guarantee that you're querying in the right context, and those kinds of things.
[01:05:01]
But it really didn't have any of these properties that we've been talking about, of monoids and functors and all these things. So there was a selection set and there was a field.
[01:05:15]
And those were two different concepts.
[01:05:17]
Did you have that sort of nice pipeline API back then? Like SelectionSet.with to build things out?
[01:05:25]
There was. There was SelectionSet.with.
[01:05:28]
And so that, I guess, just to clarify for our listeners, is applicative. So selection sets could be combined applicatively.
[01:05:36]
But you also have this concept of fields which could not be combined applicatively.
[01:05:40]
Well, now here's the thing. I would venture to guess that it would not be applicative because it wasn't applicatively combining it with its own type.
[01:05:51]
You would build up a selection set. So you would say, here's a selection set, and I can say with field, with field, with field, and it tacks it on to the selection set.
[01:06:02]
You're right. That would not be applicative.
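The distinction being drawn here, combining with a different type versus your own type, is visible in the signatures. These are illustrative sketches, not the library's exact API (the real signatures also carry a phantom type parameter):

```elm
-- Pipeline style: `with` combines two *different* types,
-- a Field and a SelectionSet, so it is not applicative on its own.
with : Field a -> SelectionSet (a -> b) -> SelectionSet b

-- Applicative style: map2 combines two values of the *same* type.
map2 : (a -> b -> c) -> SelectionSet a -> SelectionSet b -> SelectionSet c
```

With map2 (and succeed), any two selection sets can be merged into one, which is what makes fragments unnecessary later in the conversation.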
[01:06:04]
Right. So the reason it did that was because I needed to make sure I had a unique way of querying for each field.
[01:06:16]
So in GraphQL, in a selection set, you can say, give me, you know, the avatar for user one, two, three.
[01:06:26]
You can pass arguments to your fields in GraphQL selection sets.
[01:06:31]
So you could say, well, I want the large thumbnail and I want the small thumbnail.
[01:06:35]
Well, those would both be thumbnail, the thumbnail field.
[01:06:39]
And that comes back under a JSON key of thumbnail.
[01:06:44]
But those both come back under the same JSON key.
[01:06:47]
So I need a way to uniquely be able to get each field that you're requesting.
[01:06:52]
And because of that, I would say, well, if I'm adding on a field that already exists, so I have thumbnail and you're adding thumbnail again.
[01:07:02]
So when you say with thumbnail small and then with thumbnail large, now it's going to add thumbnail2.
[01:07:10]
It's going to add an alias the second time you add something, because otherwise it's no longer unique.
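In raw GraphQL, requesting the same field twice with different arguments needs an alias, because both results would otherwise land under the same JSON key. A hypothetical query illustrating the collision (field and argument names are made up for the example):

```graphql
{
  user(id: 123) {
    thumbnail(size: SMALL)
    thumbnail2: thumbnail(size: LARGE)  # alias keeps the JSON keys unique
  }
}
```

Generating an alias like `thumbnail2` deterministically from the field name and arguments is what lets the library merge selection sets in any order.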
[01:07:15]
And so because of that implementation detail, really in hindsight,
[01:07:21]
I was trying to figure out how to obey these laws, building up a selection set where I could combine things, where I could do map2.
[01:07:29]
And I knew that I wanted map2, I guess, because I knew that I could do nice things with JSON decoders.
[01:07:37]
And so that was an obvious parallel, but I didn't see it at the time.
[01:07:41]
Yeah. So in hindsight, that's really what I was trying to do, was trying to figure out how to obey those laws.
[01:07:47]
Interesting. It sounds like you came at it more from a developer experience perspective.
[01:07:51]
But somehow that brought you exactly in line with some of these classic category theory constructs.
[01:07:58]
Exactly. Yeah. And in hindsight, that is really, really cool. I wrote a blog post about it. I think it's called
[01:08:06]
How Elm Guides You Towards Simplicity, about how I essentially had to have a deterministic way to uniquely hash these fields when I added them in.
[01:08:19]
And that made it so it was order independent. So you can have a selection set where you add a thumbnail, and then you can add another thumbnail later.
[01:08:28]
And it doesn't care because it's already uniquely created a field alias.
[01:08:33]
So it's never going to collide. That's really cool. And it goes back to that one property we discussed earlier, that applicatives tend to be order independent.
[01:08:42]
Right. With a little asterisk. There are some edge cases, like parsers.
[01:08:49]
I feel like there's a lot of asterisks around those laws. Not the laws, but the functions and how you use them.
[01:08:55]
Right. Right. Because order independent is not part of the laws or at least not order independent in the way that we think of it.
[01:09:04]
I think there are laws around things like commutativity, which effectively mean order independence in a certain strict sense of the term.
[01:09:12]
But yes, in general, it's easy to think of applicatives as being order independent. And that's exactly the behavior you wanted.
[01:09:19]
Right. Yeah, exactly. And things just bloomed at that point. Compared to the whole library before that, it became so much nicer to work with.
[01:09:31]
And now, so there used to be this concept of a fragment, of a fragment selection set.
[01:09:37]
So GraphQL has this concept of a fragment, and you could build these fragment selection sets.
[01:09:44]
And I don't even remember exactly how you did that. It was the kind of thing where you had to build up a fragment and then explicitly combine it, and it would duplicate these things.
[01:09:55]
And it was a very difficult problem because how do you combine two selection sets when you've already sort of built up the decoder that's expecting to be querying for the JSON that comes back from GraphQL?
[01:10:09]
But now you've already sort of built up this decoder. Now you add some more things to the selection set.
[01:10:16]
But now you need to change the way you're querying for that data. So it's very messy.
[01:10:20]
So suddenly the whole concept of a fragment went away and you could just say, well, all you've got is a selection set.
[01:10:27]
There's no such thing as a field. There's no such thing as fragments.
[01:10:30]
You've got a selection set which can be empty, can be a single field or it can be a collection of fields.
[01:10:36]
And suddenly it became so much simpler to work with and the fragment just becomes expressed as a selection set.
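To make the "everything is a selection set" idea concrete, here is a rough sketch; the internal representation shown is illustrative, not elm-graphql's actual definition, and the `Field` type is assumed:

```elm
-- Illustrative sketch only, not the library's real internals.
-- One type covers the empty case, a single field, and many fields,
-- carrying along the decoder for the JSON that will come back.
type SelectionSet decodesTo
    = SelectionSet (List Field) (Json.Decode.Decoder decodesTo)


-- With that representation, a "fragment" is nothing special:
-- it is just another SelectionSet you combine in with map2.
map2 : (a -> b -> c) -> SelectionSet a -> SelectionSet b -> SelectionSet c
map2 combine (SelectionSet fieldsA decoderA) (SelectionSet fieldsB decoderB) =
    SelectionSet
        (fieldsA ++ fieldsB)
        (Json.Decode.map2 combine decoderA decoderB)
```

The order-independence discussed above is exactly what makes the `fieldsA ++ fieldsB` merge safe: aliases guarantee the combined field lists never collide.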
[01:10:45]
So I wonder what questions you could have asked yourself to come to the same conclusion earlier.
[01:10:51]
Hmm. Yeah, that's a great question. Like, what would have made me realize these things?
[01:10:57]
I mean, looking back on it now, I guess if I had some more knowledge of these category theory ideas, in a way that I could understand and apply to helping users solve problems.
[01:11:10]
And that might have pushed me sooner to kind of find ways to follow these laws.
[01:11:17]
That said, there were also certain technical things where I didn't have the insight of how to make it deterministic,
[01:11:24]
so that it doesn't matter what order you combine things in.
[01:11:28]
But maybe that's a takeaway: if you're having trouble combining things, figure out how to make it order independent.
[01:11:38]
And if I had just said, OK, I'm sure I can find a way to do this, and focused on that, maybe I would have gotten there sooner.
[01:11:46]
I think it's interesting. You mentioned you wrote a blog post about how Elm guides you towards simplicity.
[01:11:51]
And you asked a question at the beginning of this episode where you said, if we didn't have these category concepts and we just started with Elm, would we sort of independently rediscover them?
[01:12:03]
Would they sort of always emerge? And I think with the answer you just gave here, you're saying yes.
[01:12:08]
That's my intuition. Now, that said, in a way, I had a blueprint. I had this obvious point of comparison.
[01:12:13]
Right. Because a GraphQL selection set is kind of like, you know, it's sort of a JSON decoder on steroids.
[01:12:22]
It's like a super-powered JSON decoder. So there's an obvious API to try to match and say, hmm, JSON decoders can map2, JSON decoders can succeed.
[01:12:36]
Wouldn't that be nice if I could do that? So it's hard to say what the API would have been if it wasn't for the existing example of JSON decoders.
[01:12:45]
But I do imagine that a lot of the same patterns would have emerged.
[01:12:48]
I mean, I guess in some way, this is what happened at some point in the history of computer science.
[01:12:54]
Right. Anyway, so. Right.
[01:12:57]
Us being here talking about this is proof that this is the way it would have gone, potentially.
[01:13:06]
I think the biggest insight was when I realized that the concept of a field and a selection set could be the same.
[01:13:14]
And that the thing I needed to be able to implement that was to not care about order.
[01:13:21]
And when I just sort of stubbornly focused on that and said, there's got to be a way, I'm going to try to find a way to do that.
[01:13:29]
And I'm going to explore every option. For a while, I was exploring, do I need to sort of hold some context of what fields have been queried and lazily build up the decoders?
[01:13:41]
And can I even do that? But eventually I just kind of held to that goal, and it took me there.
[01:13:49]
What about Jeroen? What about Elm Review? Are there any places you think you could have sped up the way you developed certain parts of that API if you'd had these patterns in mind?
[01:13:59]
Probably a few. I know that I have something called a context creator, which is basically an applicative, I think.
[01:14:09]
Yeah, it's an applicative, because you start with a function and you say, I want this, I want that, I want this, and then something to combine all that information to construct it.
[01:14:22]
I remember that was a bit tricky to implement. I'm not even sure that it turned out exactly how I wanted it to be.
[01:14:30]
But yeah, that would have been helpful at least. And then internally, I'm pretty sure I've implemented plenty of category theory things that are just not exposed, but that would have been helpful for me.
[01:14:44]
It's quite easy to even sort of accidentally implement these things, right? Because they're such useful types of functions.
[01:14:53]
There's almost like a gravitational pull towards those sort of optimal points. So the odds of you implementing a map, even if you didn't know what map was, are pretty high.
[01:15:05]
Yeah, it's also because we don't have that many ways of building things. I mean, we don't have mutation, so that reduces a lot of what we can do.
[01:15:15]
That's true.
[01:15:16]
We don't have for loops. So how do you transform the things in a list? Well, yeah, you're going to reinvent mapping at some point.
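Without mutation or for loops, mapping over a list really does fall out of folding almost inevitably; a sketch of reinventing List.map in terms of a fold:

```elm
-- Reinventing List.map via List.foldr: rebuild the list,
-- transforming each element on the way through.
map : (a -> b) -> List a -> List b
map transform list =
    List.foldr (\element acc -> transform element :: acc) [] list
```

For example, `map ((*) 2) [ 1, 2, 3 ]` gives `[ 2, 4, 6 ]`, matching `List.map`. Using foldr rather than foldl keeps the result in the original order without a reverse.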
[01:15:26]
Yeah, maybe that would be a good write-up or something: exploring which of these patterns the APIs that we've built inadvertently implement.
[01:15:41]
I'm sorry, I invented applicatives again.
[01:15:46]
That's a great commit message right there.
[01:15:51]
Whoops, I created a monad.
[01:15:55]
Is this like the phenomenon of things becoming crab-like in the natural world?
[01:16:03]
I'm not familiar with this word.
[01:16:05]
I have become familiar with that one recently and I keep hearing about it ever since.
[01:16:15]
Basically, the idea that all animal species tend to evolve into something that very much looks like a crab.
[01:16:24]
So basically, a crab is like kind of a perfect being in a way, an optimal being, and everything just gravitates towards that.
[01:16:34]
Is it a corollary that all programs will eventually be migrated over to Rust?
[01:16:44]
Very probable.
[01:16:48]
It's all making sense now. All right.
[01:16:52]
They chose their logo well, at least.
[01:16:55]
I really like that idea of getting to the basics and trying to derive these things by solving a real problem.
[01:17:04]
Are there any references for these more formal things?
[01:17:07]
Do you sort of do what we all do and click a bunch of links and have a thousand tabs open in your browser?
[01:17:13]
I've written a few articles digging into that.
[01:17:17]
I've written one on two perspectives on map functions.
[01:17:22]
So sort of two mental models that I hold.
[01:17:24]
I've written one about Elm's universal pattern.
[01:17:27]
We use that as an inspiration for a previous episode.
[01:17:30]
So those are all, I think, more practical entryways into these concepts.
[01:17:35]
I also made one on what happens when you run out of map functions, which is digging into what applicative is in a very kind of concrete use case.
[01:17:47]
So those are all, I think, useful articles for getting some understanding of what these things do in a very practical and in this case Elm centered environment.
[01:18:01]
Yeah. Yeah. We'll link to all those. Those are great resources.
[01:18:04]
Awesome. And where can people follow you and learn more?
[01:18:08]
And any other links you want to share?
[01:18:10]
Yes, they can follow me on Twitter.
[01:18:12]
I'm at joelquen, J O E L Q U E N. And I publish a lot of articles on the thoughtbot blog.
[01:18:22]
So that's thoughtbot, T H O U G H T B O T. I think I spelled that right.
[01:18:30]
Slash blog. And then if you want to find mine specifically, slash authors slash joel dash quenneville.
[01:18:37]
It might be better to have an actual link then.
[01:18:39]
Yes, we will add that.
[01:18:41]
Look at the show notes for the link. Great. Well, thanks again, Joël, for coming on. A pleasure as always.
[01:18:47]
Thank you. Great to talk with you all again.
[01:18:50]
And Jeroen, until next time. Until next time.