Tune in to the tools and techniques in the Elm ecosystem.
If It Compiles It Works
What do we really mean when we say "if it compiles it works?" And how do we make sure our apps stay that way?
January 17, 2022
Dillon's blog posts
If It Compiles It Works
When It Compiles But Doesn't Work
Running code paths to check it works vs checking assumptions at the gate
Types Without Borders
Shotgun surgery
Parse, Don't Validate episode
You can trust your tests
Keeping what you depend on to a minimum helps make things trustworthy
Semantics can make it harder to predict (like parser)
Json.Decode.maybe is almost never what you intend
Scaling Elm Apps
talk by Richard Feldman
Opaque types episode
So, what are we talking about today?
Today we're talking about a sentence that we hear a lot in Elm, which is: when it compiles, it works.
And it is a good question.
If it compiles, does it work?
Well, I just said so.
There we go.
End of the episode.
End of the episode.
That was a quick one.
So I posted an article about this, and there were some interesting conversations, and it
seemed to spark some ideas about, you know, I mean, it's a bold statement to say, if it
compiles, it works.
And obviously, if you write a main.elm, and then you get it compiling, is whatever application
you were trying to build now suddenly a completely functioning application with zero bugs and
all of your logic implemented?
If you type, text, hello world, is it the new Facebook?
Sorry, the new meta.
So, clearly, there are limits to what that actually means when we say, if it compiles, it works.
Well, what does it mean then?
There's the interesting question.
So I think that, I think we can agree that it doesn't mean that it's impossible to have
bugs and that your business logic will be working flawlessly.
But I think we can also agree that it does mean that, for one thing, your types are correct.
There's no way around that.
Unless you want to be really tricky.
No, I was going to say, like, the types are correct, but it doesn't mean that it works.
Well, it depends on what we mean by works, doesn't it?
But what I've been thinking is one thing that, like, when we think about that actual feeling,
that emotion of it works, that you get a little, you know, clean compiler message.
The screen is, you know, the compiler error that was on your dev window goes away, and
suddenly, the application is there.
You get that little dopamine hit because it's compiling again.
And you try it out, and it seems to behave exactly like you expected?
Yeah, what do we mean by that when it works?
Because we, you know, a lot of us in the Elm world have experience with a different programming
language where perhaps we run it, we don't have any syntax errors, but we have to go
through all of this tedium to manually test things and, you know, hopefully automate a few of those tests.
But still, we don't feel confident that we've covered every case because we have to exercise
all these code paths to trust them.
So to me, that says a lot about wiring.
I trust wiring, whereas in certain other programming languages, I don't trust wiring until I see
it for myself, until I've actually exercised that code path.
So I'd say that wiring is a big one.
If we stick to the definition, my definition is like, whenever you change Elm code, either
by doing refactoring, which usually means that the behavior has not changed, or when
you add new behavior, or when you change the behavior, then once your code compiles, it
behaves like you expected it to.
Doesn't mean that there's no bugs, but at least it does what you expected it to when
you sit down to write it.
That's my take on "when it compiles, it works."
That means, so if you do a large scale refactoring, that when you finally get to the end of the
list of errors that you initially received, and your code compiles, then it does work
exactly like it did before.
I like that train of thought.
If I could be a bit pedantic here, just to poke a little fun.
You said if, when we do a large refactoring, which usually means that we didn't change
the behavior, I would say it always means that if you use the word refactoring, because
that's what refactoring is, often people use that term imprecisely to mean changing the
code and the behavior.
But refactoring means changing code without changing behavior.
I absolutely agree.
That's why I added that qualifier.
And the reason I indulged in being a little bit pedantic there, for one thing, is as a
public service announcement, don't mix your refactorings with your behavior changes.
It's a really, I believe, a very important practice.
But another reason is I think that might illuminate a little bit about this experience you're
talking about where you go in and you refactor code or you change behavior separately from
refactoring and it does something that you expected.
I think that when things like wiring and some of these other things that we can get into
get out of the way, when you know you can trust those things, then there's less to think about.
Just like when you're refactoring, or when you're only changing behavior, and you're taking
small steps, you can reason about what's happening.
Like if you're in a math class and somebody says, hey, here's this giant formula and here's
another way we can write this and they just wave their hands.
You can't do that.
They say show your work.
And what does show your work mean?
It means break it down into manageable steps so that I can trust each step and then see
how it arrives at that result.
And to me, that's what a good refactoring looks like too.
It's showing your work, doing each small step in a way that it's manageable for your brain
and you can follow what's going on.
Yeah, it's verifiable as well by others.
It's verifiable and you can trust it.
And so to me, because of these characteristics in Elm, it can get out of our way and we can
think about a change because we're not thinking about wiring.
We're not thinking about, wait a minute, I changed this thing, but am I passing the right
type through here?
We're not thinking about that.
And that frees us up to think about these other things.
So I think removing that noise is a really big thing.
So you know that I'm also a proponent of tiny steps, but I don't think it really matters
all that much when we talk about "when it compiles, it works," in the sense that the big aha moments
that I had with "when it compiles, it works" were when I did not do tiny steps.
When I did large-scale refactorings, the things that took me hours before I learned how to
do tiny steps, and it still worked.
But tiny steps definitely help, that's for sure.
Right, right, right.
I know what you mean.
And I do those big steps sometimes too.
I try to manage making small steps, but sometimes you do a refactoring and you can sort of see it through in one go.
And sometimes it's just very obvious.
I mean, sometimes you're just going to change the underlying data model of something and
you just go through and you change it in a whole bunch of places and you know it's just
going to work.
You were passing in, you know, a list before and now you're passing in a non empty list
with a tuple of value and a list of values or something and you just wire that through
a whole bunch of places and then it just works.
And there are certain ways to do those sweeping changes, but as a small step.
And I think I think you can still count that as a small step in a way where it touches
a lot of places.
Well, yeah, I guess it's when it's the smallest possible step.
Right, right, right.
And sometimes that can be like a focused change that affects a lot of places.
So what is it that makes those changes feel safe?
So do you want to talk about how the "it works" part works?
You know?
What is it that makes that feel like something we can trust?
So the reason why it works when it compiles is, first of all, because there is a compiler
that blocks you as long as there are errors.
In other languages, you make a lot of changes, and the thing is, you don't know when you're done.
In Elm, or with compiled languages with good compilers, let's say, you cannot stop
earlier. Maybe slightly early, but then you run into a lot of issues, especially, as you said,
in other languages.
And I think that's probably the main thing.
Then why it is that the compiler gives us that is a different question.
That's a great point.
So there's, yeah, it's almost like the difference between it compiles so it works or it runs
so it works.
And, you know, I mean, I talked about this in my Types Without Borders talk: that you can defer
problems by picking off raw values and passing them through places.
And what Elm decoders make you do is validate, and parse the results of those validations
into data types.
So you're checking those assumptions at the gate.
And if those assumptions are not true, you're finding out early.
So I think that does have something to do with it.
Now if you, you know, if you use this types without borders approach, the idea is that
we can move that even further to the left of instead of knowing as soon as you receive
a server JSON response, whether it's valid, you know, at compile time.
But either way, moving that further to the left means that you don't have to run every
single code path to test those assumptions.
Whereas if you're just picking off raw JSON values and passing them through, that means
that's like a deferred problem that you're finding you're passing values through and
you didn't actually validate them upfront.
So the errors can trickle through the system.
And that makes it very hard to trust code unless you actually run through the full code path.
So I think that's an important piece of it.
Otherwise, you only notice at the end that you should have decoded or handled it differently.
So having to deal with those things upfront is really makes it feel different working
with Elm code.
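A minimal sketch of what that gate looks like, with a hypothetical `User` shape: the decoder either produces a fully validated value or fails immediately, so nothing downstream ever sees raw JSON.

```elm
import Json.Decode as Decode exposing (Decoder)

-- A hypothetical User type: the decoder turns raw JSON into
-- this concrete shape, or fails with a descriptive error.
type alias User =
    { id : Int
    , name : String
    }

userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "id" Decode.int)
        (Decode.field "name" Decode.string)

-- Running the decoder checks every assumption up front:
-- a malformed response fails here, at the gate, instead of
-- trickling bad values through the rest of the program.
decoded : Result Decode.Error User
decoded =
    Decode.decodeString userDecoder """{"id": 1, "name": "Jane"}"""
```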
And it gives you a lot more confidence when you get that compiling screen, when the compiler
error goes away.
Whereas in another language, the syntax errors go away and the server live reloads, and I'm looking at a page of running code.
It's not a feeling of relief.
It's a feeling of, all right, time to go through and see if this actually works and time to
run a bunch of tests and write a bunch of new tests.
Like the syntax errors are like the, Oh, okay.
I know that I still have errors.
I know where to find them and how to fix them.
But once you're past that, now it's time to find the tedious errors.
And so now, I mean, what's the point of talking about this?
Are we trying, you know, like people understand we like Elm.
So, it may come as a shock, but we like Elm.
So why is this conversation meaningful at all?
I think it's interesting because how can we do more of that?
How can we understand what makes, what gives us that feeling of relief when it's compiling
and how can we do more of that?
And to understand that better, I think it's interesting to break down how can we, how
can we get around that in Elm and not feel confident when it compiles?
What makes that happen?
What are the foot guns in Elm?
Things like deferred validation, you know, the shotgun surgery thing that we talked about
in our Parse, Don't Validate episode.
In Elm, you don't just reach for raw JSON; Elm forces you to validate types before you can use them.
But it's also a convention that you can avoid.
I mean, if you wanted to, you could pass around a JSON object, and you
could run a JSON decoder and deal with a Result type, and you could do Result.withDefault.
And you could do that if you wanted to, right?
Or you could just have a dictionary.
And you could do that in Elm, but you're probably not going to.
So what I'm pointing out here, I think, or touching on is that there are things that
the compiler knows when things are static.
So records, for instance, are static, and the compiler can help you figure out issues with them.
So if you want to rename a field, it will tell you all the places where the field is
used, or used incorrectly, I guess.
But if you switch to a JSON object and decoding it on the fly or dealing with a dictionary,
then the compiler doesn't help you.
So in a way it's having dynamic or sometimes magic values can create some issues, create
some problems and uncertainty in how the program will behave.
Yeah, I think that the example of dealing with Json.Decode.Value, passing it through
and running individual decoders lazily, just in time... obviously, maybe nobody's
ever done that before.
Maybe somebody's done it.
It's a large world out there, but that's not a realistic problem.
Yeah, because it's just too tedious to do.
But that said, I think we can still learn something about...
Because I think default cases happen a lot, where we have a Maybe and we say, okay, well, let me just
do a Maybe.withDefault.
I think that the farther you can move those checks up in the code...
I think one rule of thumb that I try to follow is I want code dealing with the happy path
all the way down and I want my error handling code around the edges so it doesn't pollute
my core business logic as much as possible, which is basically the pattern of a JSON decoder.
You run a JSON decoder, you validate everything, you say that not only is this a string, but
it's a nonempty string and it's a string that I can parse into a date type or it's a valid
username and it matches the username format and I can parse it into that type, and all of that.
And if anything fails, you go down a different path, the sad path, and if all of those things
succeed, then you go to the happy path.
And now you've got this set of functions that are blissfully unaware of the sad path.
And I think that that's something we can all benefit from turning up to 11 and doing a
little bit more of, even if we're not going to the other extreme, even if it's not dialed
down to one with passing around JSON decode values and decoding them just in time.
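As a sketch of that pattern, a decoder can fold a domain rule (here, a hypothetical non-empty check) into the parse itself, so everything after it only ever sees the happy path:

```elm
import Json.Decode as Decode exposing (Decoder)

-- Hypothetical example: decode a string field, but only accept
-- it if it satisfies our domain rule (non-empty). Anything else
-- takes the sad path inside the decoder, so the rest of the
-- code never has to think about it.
nonEmptyString : Decoder String
nonEmptyString =
    Decode.string
        |> Decode.andThen
            (\raw ->
                if String.isEmpty raw then
                    Decode.fail "Expected a non-empty string"

                else
                    Decode.succeed raw
            )
```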
I keep thinking about it and it just feels so painful.
Just so painful.
Well that's good.
Then it worked.
Then I'm scaring people into dealing with errors upfront and getting that sad path out
of the way.
So there are other ways that the compiler helps you to avoid issues, which leads
to "when it compiles, it works."
One of them, for instance, is pattern matching or rather the exhaustiveness of pattern matching.
So as you said, you can always have a default case, like a wildcard, and those should be avoided.
Because as soon as you use those, you lose some kind of guarantees, and you can forget to
handle some cases.
But when you don't, well, the compiler gives you all the reminders that you need.
So you add a new variant to a custom type.
Then the compiler tells you: hey, you need to handle it here.
In other languages, you don't know when you're done.
In Elm, you get a reminder.
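A small illustration, with a made-up `Status` type: because there is no wildcard branch, adding a new variant turns every unhandled case expression into a compile error.

```elm
type Status
    = Loading
    | Loaded String
    | Failed

-- No wildcard branch: if a new variant is added to Status,
-- the compiler points at this case expression, and every
-- other one that needs updating.
statusText : Status -> String
statusText status =
    case status of
        Loading ->
            "Loading..."

        Loaded body ->
            body

        Failed ->
            "Something went wrong"
```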
Which is sort of the premise of my violins and Vuvuzelas analogy that...
You're going to need to explain that one.
So Vuvuzelas, you're going to have a hard time making beautiful sounds on a Vuvuzela.
Now if you want to prove me wrong, I'm open minded about that, but I'm pretty sure it's
not possible to create beautiful music with a Vuvuzela.
I'm not actually sure you really want to hear people play the vuvuzela as a counterargument.
But a violin, you can create screeches on it, but you can create the most beautiful
music you've ever heard.
I think that Elm is similar in that the expressive power is there, but it doesn't come for free.
You need to be able to take advantage of it.
And that's our job as Elm developers is to understand the tools we're given with its
expressive power and how to use those.
So those tools are, it gives us exhaustive case statement or case expressions.
That's a powerful tool if you use it, but you can opt out of it if you want to.
Just like you can opt out of safe JSON handling and that sort of thing.
It's hard to opt out of, because you have to be very explicit about everything, but you can.
So, but it's our job as Elm developers to use those tools to model the constraints of
our system so we can get guarantees about our domain, because Elm doesn't know about our domain.
It knows what an int is and a string is and JSON is, and it knows how to model constraints
about that, but it doesn't know how to model constraints about your specific business domain.
Yeah, thankfully it's easier to learn about these than to learn how to play the violin.
It doesn't take years of practice.
Well, at least hopefully.
I think it's probably easier to write a beautifully written Elm application.
Do you think I can learn the violin in like two days?
I should try that.
I'm sure my family will love it.
And your neighbors.
So, another thing that is part of the equation is that Elm doesn't have any side effects.
And the consequence of that is that the order of operations does not really matter.
So if you compute A and then B, that will always give you the same results as if you
compute B and then A. In other languages, you can get spooky action at a distance, and that assumption could be false.
And that is one of the trickier things when you do refactoring.
You move things around and that actually changes the behavior.
In Elm, that is not a problem, and people move things around all over the place all the time.
And when you're testing, that is an incredible quality, because you can trust what you're testing.
So I think that's huge.
And testing in Elm, to me, is such a no-brainer.
You've got these nice pure functions, and you just run them and check the output. It's
so much easier to test in Elm than it is in other contexts.
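As a sketch, using elm-test with a hypothetical `slugify` function: no setup, no mocks, no state to reset between tests, just input and expected output.

```elm
import Expect
import Test exposing (Test, test)

-- A hypothetical pure function: same input, same output, always.
slugify : String -> String
slugify title =
    title
        |> String.toLower
        |> String.replace " " "-"

-- Testing it is just calling it and checking the result.
slugifyTest : Test
slugifyTest =
    test "slugify lowercases and replaces spaces with dashes" <|
        \_ ->
            slugify "If It Compiles It Works"
                |> Expect.equal "if-it-compiles-it-works"
```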
Now, a thing I've been thinking about.
So like let's let's drill into this a bit more about spooky effects at a distance and
things being predictable and depending on implicit things.
Can you do that in Elm?
I kind of think you can.
And it's subtle, but I've felt that feeling before.
Doing spooky effects at a distance?
You can do that in Elm?
You can't directly do spooky effects at a distance in the sense that you can't do...
You can't depend on environment, global variables, implicitly defined values.
But what if you jam a whole bunch of context into some type and depend on it in a way that
has confusing semantics and depends on lots of stuff?
You can start to get back at that in a way.
So if you have very complex business logic, for instance?
Well, I would say if you depend on too much.
If you depend on too much and pull in too many things instead of having something expressed
in the simplest possible terms of the fewest things that can depend on and having things
sort of sliced down in that way, it can start to feel like...
So one thing I've been thinking about is the semantics of APIs.
So we think of...
When you do Json.Decode.oneOf, it feels extremely intuitive, in the sense that if you do oneOf
with a decoder for a list of strings, or a single string, or null, then it's going to fall through
to any of those cases, and it's pretty intuitive what it's going to do.
If you're writing a parser in Elm, we talked about this in our parser episode, the semantics
of backtracking and committing in a parser, so you chomp a specific character and now
you've committed down that path unless you make it backtrackable, right?
Now it's a very elegantly designed package and those are important tools for writing
a parser to be able to commit or make something backtrackable.
But they're confusing semantics and I would certainly say when I'm writing Elm parsers,
I do not think that if it compiles it works.
I think that if my giant test suite passes, it works.
Well, I think that's kind of the same with JSON decoders.
So when the types match, it doesn't give you a real sense of guarantees.
So I'm thinking maybe that's because the types are not expressive enough in a way.
Like, for instance, if you do something like Json.Decode.maybe compared to Json.Decode.nullable:
they have, I think, the same type, but they have different meanings.
I think Simon Lydell might have turned me on to this: that essentially Json.Decode.maybe
is never what you want.
Like basically you should never use it because it's basically saying if anything at all fails,
then do this.
It's not saying: if it's null, then use this value, but if it's a differently shaped object
than I expected, then fail.
It's actually never going to fail.
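The difference can be sketched like this (field name hypothetical): `nullable` only tolerates a literal null, while `maybe` swallows any failure at all.

```elm
import Json.Decode as Decode exposing (Decoder)

-- Decode.nullable succeeds with Nothing only when the value
-- is literally null; a wrongly shaped value still fails loudly.
age : Decoder (Maybe Int)
age =
    Decode.field "age" (Decode.nullable Decode.int)

-- Decode.maybe swallows *any* failure: if "age" is a string,
-- or the field is misspelled, this quietly returns Nothing
-- instead of failing.
ageSilent : Decoder (Maybe Int)
ageSilent =
    Decode.maybe (Decode.field "age" Decode.int)
```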
I think a big reason why "when it compiles, it works" is because of all the types that indicate
what values are possible and what you can do with the values.
But the decoders and parsers, they're not expressive enough or they lose a lot of detail
and therefore type checking helps but not entirely.
It's not enough.
That's my gut feeling, at least.
I mean, if you're writing a parser or a decoder... in a sense, we were talking about spooky
effects at a distance and having access to global state or large amounts of state.
Well, I mean, if you're writing a parser or a JSON decoder, you have access to all of the input.
I mean, if you do Json.Decode.field, you don't exactly have access to all of the state, quote
unquote, you have access to that context.
So that does scope your reasoning to a certain extent.
But it's also like a very wide open thing that you can grab data from wherever you want
and do all sorts of things with it and squelch errors.
I would say for JSON decoders, the issue is that a lot of things depend on the parents
or whoever uses decoders.
So if you do Json.Decode.nullable and then pass it to the decoder, then that decoder depends
on the parents.
And that does not always come through in the types, so the types don't help you do exactly what you want.
I mean, the semantics are really important, and I think that Json.Decode.maybe is
a good example of how important the semantics of the API are.
And Elm doesn't give you that for free.
So Json.Decode is like a core package, but when we're writing Elm code, we're building
our own suite of functionality and giving them semantics.
And it's really important to consider how the semantics you're defining will give you
a pit of success or a pit of failure.
And things like Json.Decode.maybe can make it very tempting to do something that you
probably don't want to do.
I think it's also, when you do like a Json.Decode.field with some field name, and then you pass that
to another Json.Decode.field, then that one depends on the parent, and that can be kind of tricky to follow.
I wouldn't call it spooky action at a distance, but I get your point that it's not as obvious
as it could be or as other APIs are.
And I mean, I guess the bottom line is: if you can constrain the data that you're passing
through, and make it easier to reason about what it's going to do, and make it depend on
fewer things, then you should. Which is basically what Richard Feldman's talk... I can't remember
what the name of the talk is, but one of his Elm talks.
Scaling Elm apps?
Scaling Elm apps.
That was like the core message: hey, if you can depend on less data, then pass in less data.
In a way, I think that's what functional programming is pretty good at.
And yeah, like if you just pass in your whole model all over the place and you can change
it anywhere, that can sort of give you that feeling of spooky effects at a distance.
And maybe a more controversial one, I don't have a fully solidified opinion on this one,
but you can do things like defining your message type in a message module and then use that
message rather than injecting those message constructors and message variants as needed
so you can explicitly trace it.
There are pros and cons there, but any of those things can take you toward that feeling of spooky action
at a distance.
You just pass in your model everywhere, give everything access to messages.
So you have to be aware of what state you're passing in everywhere because you can sort
of Elm by default gives you these nice pure functions that take data and return data.
But if you just balloon that up to have access to everything and be able to change everything
from everywhere, you start to lose that sense of if it compiles, it works because it's unmanageable.
And it starts to feel like spooky action at a distance if you're not too careful there.
I think what you're saying is that it becomes hard to predict what the code will do, because
the code is just so complex, because you made some spaghetti code or some very hard-to-maintain code.
So writing maintainable code helps you do what you expect it to do.
So it's like in a sense, there's if it compiles, it works.
But then there's what you were talking about earlier, which is if I make a change, did
the behavior change in the way I expect it to?
Or not change in a way I expect it to.
And I think that's an important point, too.
When you're trying to change behavior or when you're trying to not change behavior, how
easy is it to do that in an expected way?
No, I feel like I never get to the state where my code is so complex that I don't understand
what it does because I have this ease of refactoring because when it compiles, it works.
That I just do it all the time.
And therefore, my code is relatively maintainable always.
And I improve it when I need to because I can refactor all the time.
It never gets all that complex.
And also because there's the Elm architecture, which is now deeply ingrained in my mind so
I know how an Elm app works.
Another thing that I think is worth mentioning, even if it's a tiny thing, is that you cannot
have issues because of shadowing.
So shadowing in Elm is forbidden.
So that is when you have two different variables with the same name in the same scope.
So you define a variable name and somewhere else underneath it, another variable name,
name, name, name, name, name, name.
And that is not allowed, because the issue is that it's fairly easy to remove one of
the names, and then all the references that pointed to the removed one now point to the other one.
I think you're illustrating how confusing it is pretty well.
So because that is not a problem in Elm.
In Elm 0.19 plus, yeah.
I remember going through code bases during the 0.18 to 0.19 migration, going through and changing
a lot of cases that had shadowing violations.
By the way, I'm super happy that shadowing is not there, because it makes elm-review so much simpler.
But yes, so if you don't have shadowing, you don't have the problem where, oh, if I remove
a variable, then the values point to something else.
Now it's just like, hey, this name, this value, this name variable does not exist anymore.
Please make it exist or change the reference.
Again, one kind of reminder that the compiler gives you so you don't forget to do what you
were supposed to do originally.
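For illustration, here is the kind of code Elm 0.19 rejects outright; this sketch intentionally will not compile, which is the point.

```elm
-- Not allowed in Elm 0.19+: `name` is defined twice in the
-- same scope, so this is a compile error rather than a
-- silent re-binding that changes behavior at a distance.
greeting : String -> String
greeting name =
    let
        name =
            String.toUpper name
    in
    "Hello, " ++ name
```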
That's a great point.
It's like another case where if you make a change, the resulting behavior change is going
to be predictable.
Whereas if you rename a variable and suddenly the behavior changes, that's unexpected.
That shadowing rule of Elm 0.19 plus fixes that.
I mean, similarly, there are sort of language semantics.
If you add a string and a number, or subtract a string from a number, or lean on truthy values...
Well, it's just void.
It's just like returning undefined.
And now you have an undefined somewhere.
I have an undefined.
And I'm like, what the heck?
It's running this code.
I put a log in there.
I see it's running.
And like, I've been through that way too many times.
Those little foot guns are removed by the core Elm semantics.
But again, we're the violinists.
We've got this very expressive tool, but it's only as expressive as we're able to take advantage
of those tools.
So similar to this concept of the semantics of truthy values, that can be a liability.
Sometimes you see people using the Boolean constructor to turn something into an explicit
Boolean, and things like that, to try to be safer.
But you have to go out of your way.
Also things like saying if, with the exclamation mark before a variable; people often do that to make
sure that it's not null or undefined.
But then it's zero.
Or it's empty string.
Semantics are important.
And we may have fixed that issue in the core semantics, because truthiness is not a concept in Elm.
It's very explicit, in terms of concatenating values of the same type together and checking
conditions with real Booleans and that sort of thing.
But we still have to define semantics for our domain again.
So I think having these value types is very important.
This primitive obsession code smell of passing strings all over the place instead of a username.
If something represents a username, just wrap it in a type, make it an opaque type.
If you need to validate it, then use that pattern which we talked about in depth in
our opaque types episode that you conditionally return that type if it's validated.
This sort of parse don't validate.
You can say username.from string and return a maybe.
And the only way you can get it is if it checks that validation.
So now you have this type whose semantics are actually meaningful, because they
tell you something about a validation that's been performed on that type, or the origin
of that type, because you can only receive it from running an HTTP request, or things like that.
So using these type semantics and creating these value types I think is huge.
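A sketch of that opaque-type pattern, with a hypothetical Username module: since the constructor is not exposed, holding a Username is proof that the validation ran.

```elm
module Username exposing (Username, fromString, toString)

-- The Username constructor is not exposed, so the only way
-- to obtain a Username from outside this module is through
-- fromString, which performs the validation.
type Username
    = Username String

-- Hypothetical rule: usernames must be non-empty.
fromString : String -> Maybe Username
fromString raw =
    if String.isEmpty raw then
        Nothing

    else
        Just (Username raw)

toString : Username -> String
toString (Username raw) =
    raw
```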
I really like creating types for a user ID, specific types of user ID versus product ID,
things like that.
Yeah, in general, any technique that makes errors impossible, just like making impossible
states impossible, writing tests, using the type system, using Elm review, all those reduce
the number of things that you will have to check manually once your code compiles and
your test pass.
So all the tools that you can add to your test suite just reduce the amount of manual
work that you will have to do and therefore the missed expectations that you can have.
Yeah, it's kind of like in Richard's Scaling Elm Apps talk, you know: put yourself
in the shoes of going through and debugging an issue, and you say, all right, what do I know here?
Putting on your detective hat.
I'm at this point in the code.
I have this data.
I didn't expect to have this data.
What do I know about the context of where I am?
And well, given that you're in the context of an Elm application, at a baseline, you know
that your types are correct.
You know that your conditionals were running on Booleans.
You didn't accidentally pass null in somewhere and get the falsy path.
But if you're getting an int argument, does that represent the right thing?
So if you put yourself in the shoes of trying to understand what you know in a given piece of code:
well, if you're receiving an Int instead of a UserId, that's a piece of information that
you don't have, that you can't be sure of.
So that's one more thing that you can't add to the "if it compiles, it works" list of things that
you can trust.
So we talked a lot about wiring, but we didn't really get into it.
So wiring is yeah.
What is wiring?
When I talk about wiring, I just have the word boilerplate in my head, because that's
what newcomers or people from other languages have in mind.
Like, hey, this has a lot of boilerplate.
You have to do a lot of wiring yourself.
And I have to admit, I just tried thinking of examples of boilerplate or of code wiring.
And I'm like, I cannot think of any in the sense that I don't see them that way anymore.
Because for me, they're super useful.
I mean, it's certainly, it can be tedious to be verbose.
Like, I mean, one thing that I find particularly tedious sometimes is having to wrap things
in custom types or, you know, combine together multiple possible types, something could receive
in a custom type or things like that.
It's verbose, you know, it's not as verbose as creating an abstract user factory in an
anonymous class to implement abstract user factory in Java.
You can do better.
You can do longer Dillon.
Yeah, I could if I tried.
Maybe it's a single int, too, I don't know.
But yeah, I mean, there is a cost to a wrapper.
If you have a user ID, there's a cost, you know, it's like, oh, it's so much easier to
just pass an Int.
And, you know, if you need to prove that something was a user ID... but, you know, to a certain
extent, it's a question of: are you going to optimize for changing code or for creating code?
Because we read and change code and debug code far more often, we spend far more time
and energy doing that than we do writing code.
And also, if you like how much of the time and effort in writing code can be attributed
to trying to make sure that you're doing it correctly and not making any mistakes.
So if it can reduce that problem, I think that it's a very good trade off to have to
be a little bit more explicit, a little bit more verbose, to have this clarity in your code.
You also often talk about moving wrapping and unwrapping to the extremities.
Wrap early, unwrap late.
The problem with user IDs, instead of just Int, is that you need to wrap it somewhere,
maybe parse it, which can be annoying because you need to handle the error case.
And you need to unwrap it, which is also more syntax.
But all the places in between, it's just value passing.
It's not more annoying than dealing with an Int, and you don't have to wonder: hey, is this user ID null?
Hey, is this user ID zero, which has some special meaning?
While it is a user ID, it is actually very simple to use, I think.
So the cost is offset somewhat.
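As a sketch of that wrap-early, unwrap-late shape, here is a hypothetical opaque `UserId` module (the positivity check in `fromInt` is just an assumed validation rule, not anything from the episode):

```elm
module UserId exposing (UserId, fromInt, toInt)

-- Opaque type: the UserId constructor is not exposed,
-- so the only way to get one is through fromInt.


type UserId
    = UserId Int


-- Wrap early: validate once, at the boundary.
-- (The positivity check is an assumption for illustration.)
fromInt : Int -> Maybe UserId
fromInt raw =
    if raw > 0 then
        Just (UserId raw)

    else
        Nothing


-- Unwrap late: only convert back when talking to the outside world.
toInt : UserId -> Int
toInt (UserId raw) =
    raw
```

Everywhere in between `fromInt` and `toInt`, the value is just passed along, which is the "it's just value passing" point above.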
Yeah, for sure.
And I think it takes experience to become comfortable navigating those things, knowing
when to create those abstractions and how to manage, oh, I have this opaque type.
I need to use it in this place.
I need to make sure that it can only receive this type of thing and be used in this type of place.
That takes experience.
But on the whole, I think it's more maintainable, but it might feel more sluggish.
It might feel like boilerplate.
And Elm is very explicit.
So if you're doing Browser.application, you can't not pass in subscriptions.
And if you made the API in such a way that you could pass nothing, well, now you have
to wrap that in a just if you have the actual value.
Yeah, so you cannot forget to pass in things that are necessary to some extent.
And that's part of this.
That's part of that feeling of if it compiles, it works.
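As an illustration of that explicitness, the `Browser.application` record requires every field; a minimal sketch, assuming the usual `init`, `view`, `update` functions and `Msg` constructors (`ClickedLink`, `ChangedUrl` here are hypothetical names) are defined elsewhere:

```elm
main : Program () Model Msg
main =
    Browser.application
        { init = init
        , view = view
        , update = update

        -- you cannot omit this field; "no subscriptions" must be said explicitly
        , subscriptions = \_ -> Sub.none
        , onUrlRequest = ClickedLink
        , onUrlChange = ChangedUrl
        }
```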
And it's so I mean, the fact that Elm only has these tagged union types for the custom
types, there's no way to just say, oh, I'll accept a string here or you can pass me a
record with these fields or it's more verbose, but it's very predictable.
And it might be tedious, but it's easy and straightforward.
So there's a bit of a trade-off: Elm always chooses explicit and predictable and straightforward over frictionless and easy and low-boilerplate or, you know, sleek.
Those aren't the choices that Elm makes.
So if we avoid boilerplate that usually works by doing dynamic accesses, dynamic writes,
or even just magic things like method overloading in object oriented programming, which I always
find surprising, like, hey, the behavior of this thing changes if you define an X method.
Very hard to detect.
Whereas if you did that in Elm, like if you remove a method that you thought no one uses,
then you would get a compiler error.
Python is where I had a bad experience with that.
If you do that, well, your behavior changed and you don't know why, because you had to look deep, deep somewhere in the Django docs, somewhere that I still haven't found, but a colleague told me it exists.
That was always, you know, in my Ruby on Rails days, that was my first job out of college
working at a Ruby on Rails shop.
And it was a struggle for me to keep those things in my head with the implicitness and
the magic and the method missing automatically resolving things dynamically when you call
a function that doesn't exist or method that doesn't exist.
And Rails depended on that quite a bit.
And it would include something in the method name that really should be an argument.
You know, when you have something like path_to_users, it would be far better to have path_to with the string "users"; that should be an argument.
So yeah, Elm chooses the verbosity in those cases, but I think that does contribute to that feeling of predictability, that if it compiles, it works.
So you've never gotten your code to a point where you felt like it started to become unpredictable
Elm code that you couldn't really get a grasp on?
Well, yeah, for complex business logic, for instance.
But that's usually where I write heavy tests.
Like actually some Elm review rules are very complex.
And I look at them every few months or so because someone reported an issue.
And I'm like, I don't know how this works anymore, but I'm super happy that I have an
armada of tests.
But at least if I try to change something like the type of a field or I change the context
or slash model, then the compiler has my back and changes become a lot smoother.
I've noticed, like when I'm writing sort of framework code, this happens a lot in elm-pages, that I'm creating these building blocks and I often have to work with the lowest common denominator.
Where something is unconstrained, like the DataSource API: you can andThen, you can go get glob data and all these things.
And it can become harder to keep the semantics in your head.
Like, for example, if you have a decoder, you can do Decode.andThen, or you can do Decode.fail.
Now suddenly you've got these possible failure paths.
It's really nice to be able to just fail fast in a JSON decoder, but it's almost like less
explicit in the way that throwing an exception is less explicit.
Like you may not know when something failed.
The ways that it might fail.
Like how can a JSON decoder fail?
What's the error type of a JSON decoder?
It's kind of an untyped error.
It's just a string basically.
It's not like a nice custom type that says this is a result type and these are the possible
ways it could fail.
Here are these five variants with some information associated.
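For example, with elm/json, the failure channel really is just a string handed to `Decode.fail` (the `ageDecoder` itself is a made-up illustration):

```elm
import Json.Decode as Decode exposing (Decoder)


-- Decode.andThen lets the decoder fail fast, but the failure
-- is just a string, not a custom error type with named variants.
ageDecoder : Decoder Int
ageDecoder =
    Decode.int
        |> Decode.andThen
            (\age ->
                if age >= 0 then
                    Decode.succeed age

                else
                    Decode.fail "age must be non-negative"
            )
```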
So I think now when you're building frameworks in these building blocks, these are really
powerful tools to write this lowest common denominator that can do a lot of powerful
But again, this sort of Scaling Elm Apps idea: don't use the lowest common denominator.
Use the simplest thing that could possibly work and be as explicit as possible.
So I think those are the times when I think about Elm code getting unwieldy and hard to
predict what it's going to do.
I think a lot of it comes down to having a lot of state and a lot of possible ways to
fail that things are boiling down to the lowest common denominator.
Sometimes I find it hard to think about: like, if I do Maybe.andThen, that's a simple thing, semantically, but it hurts my brain just a little bit.
But does that cause your changes to not work when it compiles?
It causes me to not trust that it will work when it compiles.
Like maybe I just put a Maybe.andThen and I know it's going to compile, but I have to go in and check it manually or run some tests around it.
Then it works.
It still works.
You don't trust it, but it works.
It's just that you're getting back to the roots of your Elm adventure.
You don't trust the compiler yet, but it still works.
But the semantics could surprise me.
And I think semantics are very important.
Like, I don't know, if I tried to explain the semantics of Maybe.andThen to you, I couldn't explain it clearly.
I couldn't write out a truth table of what it does exactly very clearly.
I'm sure if I thought about it for a couple of minutes, I could do that.
But that's sort of like a semantic concept in my brain that's not quite clear.
Just like Decode.maybe, the semantics are a little misleading.
But like the Maybe.Extra.or sort of helpers around the logical or and and, those things, that I find intuitive.
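Since the semantics of `Maybe.andThen` came up, here is its small "truth table" spelled out, with a hypothetical `firstChar` helper for illustration:

```elm
-- The "truth table" for Maybe.andThen:
--   Just x  |> Maybe.andThen f  ==  f x   (which may itself be Nothing)
--   Nothing |> Maybe.andThen f  ==  Nothing


-- Hypothetical helper: the first character of a string, if any.
firstChar : String -> Maybe Char
firstChar string =
    String.uncons string |> Maybe.map Tuple.first


-- List.head [ "", "hi" ] is Just "", and firstChar "" is Nothing,
-- so the whole chain short-circuits to Nothing.
example : Maybe Char
example =
    List.head [ "", "hi" ]
        |> Maybe.andThen firstChar
```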
Sometimes I think semantics are really important, even if the wiring you can trust.
So is it the name of andThen that you find confusing?
That's just it.
Well then I'm going to give you the task to just make that better.
Maybe then we'll have some good results.
I'm not going to give you a list of all the things to do though.
So what can people do to improve their chances of when it compiles it works?
So we mentioned do tiny steps because tiny steps have less risk to them.
You are less likely to forget to do something or to do it incorrectly when you break the work down.
So tiny steps, definitely one.
And we talked about sort of wildcards and squelching errors, or Maybe types, just giving them default values.
And more broadly, dealing with the sad path from the start, at the gate.
Move that as far left as you can, ideally all the way to the left, to compile time, if your types' constraints allow it.
Move it to the elm-review step, to static analysis time, if you can't move it to compile time.
But if you can move it to compile time, types without borders: check things at compile time.
But yeah, I think that that's going to, I mean, if you've written JSON decoders for
an API, you know that it can compile and not work.
But I think the issue of when it compiles, it doesn't work, is for me that I forget to do something.
So usually it means that, in Elm at least, because I'm reminded to do most things, when it doesn't work, it's because I intentionally or unintentionally didn't do something.
Like I returned zero instead of a real value and added a comment saying TODO, blah, blah, blah.
So I think it's useful to make those clear: add TODO comments, make them easy to find, make your editor or elm-review report those to you.
Use Debug.todo, add tests, and you can also add TODOs for those.
So yeah, give yourself reminders, and help the compiler give you reminders.
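A quick sketch of `Debug.todo` as a compiler-enforced reminder (the `OrderKind` type and discount logic are hypothetical):

```elm
type OrderKind
    = Regular
    | Bulk


discount : OrderKind -> Float
discount kind =
    case kind of
        Regular ->
            0

        Bulk ->
            -- Crashes if reached at runtime, and elm make --optimize
            -- refuses to build while any Debug usage remains,
            -- so this placeholder cannot silently ship to production.
            Debug.todo "handle bulk discount"
```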
So exhaustiveness, don't use wildcards, don't use defaults, stuff like that.
And if you're going to ignore errors, you know, which when you're trying to get something
on the screen, you want to do the simplest thing that could possibly work.
You want to get your tests to green.
There's nothing wrong with that.
Like that's, that's a good approach to start with getting the happy path wired through.
But like you said, when you have those kind of default values passed through deep within
the recesses of your logic, and your sort of data processing code and decoding code
and that sort of thing, then when you do go to handle those errors, which you'll need
to once you get past the first few steps, now you've made that job a lot more difficult, because you've covered them up, which you might forget about. Like you said, you
want to give yourself reminders, you want to give Elm the opportunity, like Elm's ability
to remind you to handle every possible case is only as useful as your ability to be honest
about what those possible cases are.
So if you're not honest in your data modeling, or, you know, we were human, we make mistakes,
and we can come back and revisit things.
But when we do do that in a way where we're acknowledging the reality in our data types,
rather than covering them up.
So acknowledge the reality and if you have an error, handle it at the top level, not
in, you know, in the recesses of your code, because at the very least, now you say,
okay, bubble up this error, you have the information in the central point, things are wired up,
and you're just saying, I'm just not going to handle that right now.
But at least you're acknowledging that it's there at the top level.
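One way to sketch that "bubble it up and acknowledge it at the top level" idea (the `Model` shape and message text here are just assumptions):

```elm
import Html exposing (Html)
import Json.Decode


-- Keep the error in the model instead of substituting a default deep down.
type alias Model =
    { users : Result Json.Decode.Error (List String) }


view : Model -> Html msg
view model =
    case model.users of
        Ok users ->
            Html.ul [] (List.map (\u -> Html.li [] [ Html.text u ]) users)

        Err _ ->
            -- Acknowledged at the top level, even if not handled in detail yet.
            Html.text "Something went wrong loading users."
```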
Yeah, because if you handle the error at a lower level, and not at the top level, then
at some point, you will need to cover it up, necessarily.
Yeah, I mean, that's sort of like the feeling I get when I'm working with languages that have these control flow mechanisms for exceptions, which is most languages.
When I'm dealing with exceptions, I don't know where a problem could come from, I don't know when I might hit a snag and a problem might occur, because there's nothing explicitly telling me an exception may occur here.
And I don't know where that's being handled.
I don't get the opportunity to say, hey, run this thing.
And if there's an error, I want to opt into doing something about it, because something else is, you know, deciding what to do about that.
So it makes it feel a little bit more magical and unpredictable when you do those types of shortcuts in your own code.
By the way, raising or throwing an error also causes some code not to be executed.
And when they have side effects, that can lead your code to have different behavior.
So not having the exceptions makes Elm code easier in that regard as well.
So there's no special mechanism for control flow where maybe some code won't be executed.
I mean, we're still just computing a value.
I mean, there was a consensus decades ago that goto statements were not helpful in high-level programming languages.
But essentially, an exception is not exactly a goto statement.
But it is a sort of special case in control flow that creates this whole new mental model
that you have to hold in your head.
The control flow becomes implicit.
It's implicitly jumping to this spot.
So it's still too close to a goto, in my opinion.
Yeah, I think it feels similar.
And where it goes is actually not clear.
Compared to a goto.
Yeah, I would say the biggest takeaway, the biggest bang for your buck that people could get from listening to this, is our advice on not squelching errors and of dealing
with the sad path at the top, not in all of the leaf nodes, and having nice semantic types
and taking advantage of opaque types.
Like I would really recommend going back and listening.
I think it was episode three, our opaque types episode.
Well, it's true.
Episode two, see, that's how important it was.
It was more important than Elm review.
Don't say that.
People are going to believe it.
Hey, it's up there in your hierarchy.
If you can do it at the API level.
Yeah, I know.
I love opaque types, so I can't say anything bad about it.
I'm still very surprised that when it compiles, it works is so true to Elm.
Even with all that we said, and mostly just because there's no side effects and there's
a good, complete, exhaustive type system or sound one.
And I find that so surprising.
I mean, it's not even a very complex type system, right?
Which I think is a big part of it because its simplicity makes it very predictable.
Right, but its simplicity also limits some constraints that it could express.
It doesn't say, oh, this number is always between one and five, or in the range of this
list or this array's length.
So yeah, I'm really surprised.
To be honest, I still find it hard to explain why just having this type system makes so
much of a difference.
But I'm super happy it does.
And I think it can be attributed as much to the Elm language as it can to the Elm philosophy
and how much Elm packages in the ecosystem embody that philosophy of modeling things
with APIs that are very oriented on constraints and giving you a minimal API.
Because you can write all sorts of APIs that allow you to do something, but don't really constrain you.
You can make impossible states possible.
And there's nothing in Elm stopping you from doing that in a published package or in your own code.
All my APIs are functions that take a Dict.
They receive a Json.Decode.Value, and they could give a Result anywhere.
So it could go wrong.
Maybe it won't actually happen, but just to be sure.
Every function returns a result.
You could do that.
And that's a testament to the philosophy and how well it's caught hold in the Elm ecosystem.
Well, I think we've covered if it compiles, it works.
I think so.
Well, happy to hear that.
Well, well, happy, happy coding.
Enjoy those beautiful Elm compiler guarantees, and Jeroen, until next time.
Until next time.