Tune in to the tools and techniques in the Elm ecosystem.
Building Trustworthy Tools
We discuss how to earn users' trust by giving meaningful feedback and giving predictable results.
July 4, 2022
Error messages should give three pieces of info
Why it's a problem
How to go forward
Needing to do magic incantations to get things into a good state adds cognitive load and makes debugging harder
Make tools more predictable
Clear mental model (avoid inconsistencies and leaky abstractions)
Doing an operation through different means should consistently arrive at the same result
Tweet about layering platforms properly
Extensible web manifesto
Avoid boy who cried wolf (becoming blind to noise from errors or warnings or other feedback)
And what are we talking about today?
Today, we're talking about building trustworthy tools, which is something that we really enjoy,
both you and me, I'm sure.
That is true.
I'd say we enjoy building and using trustworthy tools.
Using them even more, yeah.
What a coincidence that Elm fans would enjoy trustworthy tools.
Who did that?
Well, you say that, but a lot of other communities don't enjoy them as much as we do.
Or at least they don't seem to put as much of an emphasis on them.
It's an interesting question.
I mean, is it that they're seeking something besides trustworthiness, like maybe convenience, or performance, or a rich ecosystem?
Or is it that they have a different set of criteria for what they consider to be trustworthy
in a different ecosystem?
I don't know.
I think it's more about like they value reliability, which is trust in a sense, but they don't
know what they could have.
So if they don't know what is possible, then they don't ask for it, they don't request
it from their tools.
I mean, maybe somebody who's working with TypeScript gets IntelliSense that shows them the thing they were looking for and catches some bugs, like something was null that they didn't check for.
And they're like, wow, that's a reliable tool.
So that's great.
That's absolutely great.
And I wonder, is there a continuum of trade-offs between trustworthiness and sort of handholding?
Or, you know, I think that's how a lot of people would look at it.
But as Elm developers, I think we would tend to say, yes, we do have strong guarantees.
But also, we don't feel like we're fighting against the compiler.
We feel like the compiler is pushing us along.
And the things that it's preventing us from doing are things that we're glad we weren't
able to do.
It's not like, oh, if only the compiler would let me do this thing.
When you say handholding, do you mean like, if you're trying to cross the street while
holding the hand of your parents, and then they stop you because there's a car?
You mean like people don't like that?
You know, it's ironic.
In America, you'll have, like, if there's a big drop-off at an old historic building or something, where there's a big cliff.
You're walking out to see a historic site, and there's a big cliff.
So people go and explore.
And in America, there will often be a big fence preventing you from going off the cliff.
It's like, this is an impossible state, going off the cliff.
We will make it impossible.
We'll put a fence there.
And at a lot of European historical sites, you're just like, whoa, there's just a cliff there.
Like someone could just walk off the cliff.
But it's a different approach.
So I guess there's like, is there a balance between making the tool trustworthy and trusting the user?
You know, is that a dimension?
I think there is.
I mean, the annoying part with hand holding is when you get yanked back on the sidewalk.
And I feel like something that Elm does quite well, for instance, the compiler is when it
yanks you back, it does it in a gentle way.
So it pulls you back slightly in a nice gentle way.
Whereas the C++ compiler, from my experience back 15 years ago, is like, bam, it yanks you back and throws you on the curb, and you figure out for yourself what you did wrong or something.
That's a good distinction.
Whereas the compiler tells you, hey, this was dangerous because there was a car coming.
This car was driving at this speed, and if you walked forward, then it would have hit you, which would have been bad because we're in the US and the bill would have been huge.
Or something like that.
Someone would have gotten sued.
So yeah, I feel like maybe the problem with hand holding is not so much the hand holding itself as the communication around it.
If you explain the problems correctly, if you explain the reasoning behind it, then things are nicer.
So for instance, if I want to walk across the street, I need to make sure that there's
no car in sight or that it's sufficiently far away.
And if that's not the case, then I should be blocked if I want to stay alive.
And to a certain extent, it comes down to: you're going to trust your tool more if it's preventing you from doing things that, once you figure it out, you're actually glad it prevented.
Maybe you see a compiler error and you're like, what?
No, this is fine.
This should work.
And then you sort through it and you're like, oh, I see.
Yeah, I did need to handle a case, or I did need to get these types to line up.
So a tool can earn your trust by not giving you false alarms and not preventing you from
doing things that would be reasonable to do.
Yeah, the tool should definitely not give you false alarms.
And when it gives you an alarm, a real one, there are three pieces of information that it should give you.
It should indicate what the problem is, what you did wrong or what you almost did wrong, and why it is a problem, like what is the reasoning behind the issue.
So I pulled you back from the road because otherwise you would have been hit.
And if you don't give that information, then people will think that you got stopped for
a stupid reason.
Like people are going to get frustrated.
Like this tool doesn't want me to do this.
And then they're going to be frustrated.
They're going to complain.
So I think explaining the reasoning behind a problem, behind an error is very valuable.
And then you need to, the third piece of information is like, how can you go forward?
How can you unblock yourself?
Like when is it okay to cross the road?
Once you've taken a look at the road and seen that there's no car around, then you can walk, but not in the other cases.
So like, yeah, when you have an Elm compiler error, you want to know what the problem is.
Like, oh, why is this wrong?
What did I write wrong here?
And the compiler will tell you, well, you had an if branch, an if expression.
And in the first branch you had the type number, and in the second branch you had the type String.
And that's wrong.
And then why is it a problem?
Well, for Elm to make sure that you have something that is working, it requires you to have the
same type in both branches.
And then it could go on, like explain why that is necessary for it potentially.
And then you need to explain how to move forward.
Like how can I solve this problem?
Do I need to convert one of these to an integer, for instance?
And I actually have an example for that, this exact example.
And Elm gives you all of this information and more, which is really useful.
So if you just read the error message in full, you know all of this information, you don't
get frustrated and you know how to unblock yourself.
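That exact scenario can be sketched in a few lines. This is a hypothetical example, with made-up names, of an if expression whose two branches have different types:

```elm
-- Hypothetical example: the two branches of this if have different types.
label : Int -> String
label count =
    if count > 1 then
        count

    else
        "just one"
```

Elm points at the mismatched branches, shows that one branch is a number while the other is a String, and one way to move forward is to convert with String.fromInt count.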
Yeah, it makes me think like if somebody is crossing the street and you yell stop, that's
actually kind of hard to trust because you haven't given clear information why.
But if somebody is starting to cross the street and you say car, now you've actually, just as concisely, given them not only the reason why it's important, but something more actionable.
Because if you say stop, are you calling out stop because you're getting a phone call and
you need to wait in that area and it would be fine for them to like cross the street
and sit on a bench and stop for you there?
So saying stop doesn't help you in an actionable way where you know in what way to stop.
Like should I stop?
I'm in the middle of the road, should I stop in the middle of the road?
If you say car, then oh I should probably remove myself from the road, check my surroundings
and see which direction the car is coming from.
So in the same way, if an error message just says something went wrong, it can be very
frustrating for the user and it doesn't earn their trust that they should listen to us.
It's almost just unfairly demanding their attention without giving them an actionable
way to deal with that information.
It's not nice to tell somebody, pay attention, do this thing, without giving them the motivation.
I think it's a kinder thing to do to users or other human beings in general to bring
them into that process and tell them why.
It's important to share why.
Yeah, it's kind of the same thing in the news and politics, like saying this policy is a terrible idea.
Well, that's great, but I'm not sure I'm going to trust you.
This policy is a bad idea because, and then your argument: that's a lot more helpful.
Or it can also be helpful to know why the person thinks that it's a bad policy and then
you can counter arguments.
Counter argue that.
So if the compiler says, this says it's a Maybe String but you need a String, you're like, compiler, can we talk for a minute?
Because I disagree.
Just reason with it.
I just have a string.
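For instance, a minimal sketch of that disagreement, with hypothetical names: you're holding a Maybe String, and the compiler wants you to say what happens in the Nothing case before it will let you treat the value as a String:

```elm
greeting : Maybe String -> String
greeting maybeName =
    case maybeName of
        Just name ->
            -- Here the compiler agrees: name really is a String.
            "Hello, " ++ name

        Nothing ->
            -- The case the compiler was worried about.
            "Hello, stranger"
```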
This has come up for me in Elm pages when I've been working on the user experience for
the dev server.
One of the things I noticed, I was trying to pay attention to how it felt interacting
with the dev server when I build things with it.
And one of the things that I noticed was, if I'm on a 404 page in the dev server (and Elm pages has a file-based router), if it just says 404 page not found in the dev server, then I'm saying, why is it 404?
Why can't you find it?
So you're trying to go to a page and for some reason it couldn't find it.
The dev server, yeah.
Even though you were actually trying to work on that page.
So you were kind of expecting to find that page, right?
I was expecting to find that page.
And now what I found was what that made me want to do is restart the dev server, hard
refresh the browser.
In other words, not trust the tool.
It made me not trust the dev server.
So I found that even when I had earned the trust, when the 404 pages were only shown as an accurate reflection of reality, the routes were reloaded accurately whenever they needed to be, and it was always showing accurate information, if I didn't present the user with more context, then they still wouldn't trust it.
And me as a proxy for the user, I experienced that.
Yeah, you doubted the tool just in case.
Because it could potentially solve the issue.
So you need more context.
So what I did, I mean, in hindsight, it sounds very simple, but I don't actually know of other dev servers that do this.
So what I changed it to is, it will say either: I didn't find a route matching the URL you're visiting; here are the routes that I have, and it lists out all the different routes.
So if you go and you create a new route module, that's going to add a new route to your project, and you will see it live reload with the new route showing up.
And then there's another scenario in Elm pages where you can have a set of statically rendered pages, and the page might not be one of them.
So you could have like slash posts slash slug, but the particular slug you're rendering is not one of them.
So in that case, I show a different error message.
I say, hey, I matched this route.
This route exists, but this slug is not one of the slugs that you told me to render.
So I don't know how to handle that.
And then it will say, these are the slugs that I do know how to render.
And so the user might be able to go there and say, oh, I see the typo that I made, or
I expected to add a route like this and I expected it to be handled this way, but it
gives them feedback so they can kind of fine tune what went wrong.
It's all the information that you kind of wish you had, but you often don't.
The scary part is when you don't trust the tool, and you rerun it, and then it actually works.
That's where you don't want to end up, because then people think, oh yeah, this tool, sometimes it doesn't work.
It doesn't reload correctly.
So sometimes you need to exit it and restart it again.
And if you hit that point, then people are just going to do that sometimes just in case.
Oh, just restart your IDE, oh, just restart your computer.
I mean, that's a huge time waste.
Like we're seeing this with rebooting your computer.
Like if it doesn't work, it sucks so much.
And if it works, you know it's going to suck the next time you encounter the problem.
And when users become superstitious like that with their tools, it's a well-founded superstition, because when they do magic incantations, it does solve the problem sometimes.
But when you build that kind of relationship with users, then think about the cognitive
load that's going on.
So the problem solving capacity that could be put towards debugging their specific problem
is now spent on debugging the tool and whether it's doing the thing it's supposed to be doing,
which is a very bad state of things.
And if you compound that with a large number of tools that the user doesn't trust, now
think of all the different permutations of incompatible, untrustworthy states that these
tools could get into.
Now suddenly the user, you know, or the developer, whoever's using the tool, is spending a huge amount of their brain capacity just to think through these possible scenarios and how to do the magic incantations to fix them.
I know that I personally feel like I'm very bad at doing anything related to infrastructure
or computer infrastructure.
So for instance, like using Docker, setting up an architecture where you have a Redis,
you have MongoDB or whatever and all those things.
I really have a hard time, and I feel like it's because there's a lot of information, and potentially explanation, that I wish was there, because I never got good at it.
And these are really hard to find, for me.
Like for instance, with Docker, I think you can talk to things that are executed inside of Docker through ports.
And then you have to open that port on the container side, which has one number.
And then you have a port on the host side, something like that.
And which ones are open is not available information to me.
At least I don't know how to get it.
But basically that's all those pieces of information that I wish I had.
And if I don't, I'm going to try everything until I just quit or succeed.
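For what it's worth, the two numbers being described are the host port and the container port, and Docker can list the current mappings. A sketch, assuming a hypothetical image name:

```shell
# Map port 8080 on the host to port 80 inside the container.
docker run -p 8080:80 my-web-app

# Show running containers along with their port mappings.
docker ps --format '{{.Names}}\t{{.Ports}}'
```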
And it's the same thing with installing things.
Like I feel like I have very bad luck with installing things.
Oh, install Node.js.
Oh, well, now it doesn't work when I run this program.
And there's a reason behind it.
Like there's a reason why it doesn't work, but I don't have it.
So I'm going to try everything.
I'm going to try and reboot my computer multiple times.
At some point, hopefully it works.
You hope your snowflake doesn't melt, because it's going to be hard to create a new one.
What is that?
You don't know the snowflake term.
Like people talk about snowflakes in terms of like reproducible environments.
Like every snowflake is unique, right?
It's like a fingerprint.
Like no two snowflakes are alike.
It is the term for a non-reproducible environment.
I'm not sure if snowflakes melting is a thing, but I think it should be.
I mean, in terms of water.
Then it's more uniform.
But if you mean in environments, then I don't know.
I do feel like my environments sometimes are like a smudge of everything.
So maybe a mud snowflake ish.
So as you said, if you want to make a tool that people can trust, then you actually need to make sure that it does the right thing, as expected.
Like, it needs to do the job that it advertises.
So if you have a watch mode or a live dev server running, then it actually needs to reload everything as it should, which can be really hard, especially with watching files on your system, which has plenty of tricky issues.
But then, yeah, you also need to communicate a lot of useful information that the tool has available and that the user might want to have, or at least to instill trust in the user.
I'm realizing that a lot of this, I think, comes down to making the tool predictable
for the user.
And so like giving the user context is one way to make things more predictable, making
it do the same thing, whether you create a new route this one way or change this thing
this other place or delete a route module or however you arrive at a particular state
of things, having it behave the same way is another thing that makes it more predictable.
Another thing that makes tools more predictable is having like a clear, understandable mental model.
If it has a lot of magic in the mental model, or if it has leaky abstractions or inconsistencies in the mental model, it's less predictable for the user.
Well, using this function is not enough.
You also need to have called this other function previously, otherwise it won't work.
And I wish that was written in the documentation or in the error message: never call this function after you've done this thing.
I saw a tweet recently that I thought was very interesting.
It's actually longer than a tweet because it's a tweet of an image with text.
But if I...
If you don't mind humoring me to read something a bit longer than a tweet.
So we'll link to this, but it says layering platforms properly.
One of the most important goals of a rationalized platform, I hadn't heard that term, is that
it is well layered in multiple senses.
One type of layering is ensuring that higher layers are more opinionated than lower layers.
So that was interesting.
Another important layering property is that as much as possible, higher layers should
be explainable in terms of lower layers.
So in a sense, it's talking about like almost like composable mental models where your mental
model at one level builds on top of your mental model at another level.
So it says: that is, the higher level API should express semantics that are a reasonably thin or opinionated combining of semantics exposed by layers that are strictly lower in the stack.
Yeah, I'm thinking is it finished?
The last part says layers that do not follow this property are likely either too thick
or rely on too much magic.
That's a layering smell that implies that there are things that developers might want
to do that they won't be able to break apart and use separately, i.e. the platform is not
composed of bricks held together loosely.
It also makes the layer harder to reason about.
So I thought that was quite interesting.
Yeah, the first thing that I'm thinking about is programming languages, where you have bytecode and assembly at the bottom, binary even lower, and you have C++ and Rust and those compiling to assembly.
To assembly, not WebAssembly.
Yeah, although they break a lot of mental models about performance characteristics like
we talked with Robin about.
Yes, there's some magic, and there's some things that are really hard to explain, at least with my understanding of how the engine works.
They rely on too much magic.
So yeah, I feel like that tweet is pretty good.
And like Elm Review, for example, it's actually just an Elm project.
So that's actually pretty cool, right?
Because it means that anything you can use to reason about an Elm project applies: there's an elm.json, you install dependencies with elm install, you can open it up in your editor, and all of your assumptions about an Elm project apply to an Elm Review folder.
The review folder is just an Elm project.
You can run elm make on it and other Elm tooling.
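Concretely, the configuration for elm-review lives in review/src/ReviewConfig.elm and is itself plain Elm. The rule shown here comes from the jfmengels/elm-review-unused package, picked as one example:

```elm
module ReviewConfig exposing (config)

import NoUnused.Variables
import Review.Rule exposing (Rule)

-- The configuration is an ordinary Elm value: a list of rules.
config : List Rule
config =
    [ NoUnused.Variables.rule ]
```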
There is a Node.js part, but let's skip that part.
Right, but see, that is a strong opinion that is easily explainable in terms of the lower layers.
So that would, I think, follow this tweet's advice about how to layer platforms.
Because you could have an Elm Review.json and it's a set of the rules that you install.
And if you put a particular special JSON key in there, it's going to automatically install
those rules and there's JSON config for those rules.
So there are all sorts of ways that Elm Review could be designed where it doesn't build off
of that mental model that Elm developers already have.
Yeah, and there was some exploration of how to do that for quite a while.
I could have had review dependencies in the elm.json file, which was what I was going for at the beginning.
And then it turned out that the Elm compiler just removes them when you install another package.
So like, well, I guess that doesn't work as well as expected.
I really like the end result.
It's really nice to say, well, it's this whole new application, but it has some very familiar concepts.
So I'm happy with it.
I think it worked out very well.
And another thing that comes up in trustworthy tools, that I think Elm developers are particularly tuned into, is invalid configurations or invalid ways to use a package.
And in general, I really like this idea of eliminating caveats.
Just like if I find myself explaining something, hey, here's this new design.
Here's this new package.
How do you use it?
Well, you do this.
But there's one thing I should explain.
It's like, okay, hold on.
What was that caveat?
Can you design it so that it doesn't require that caveat?
And just keep massaging the caveats out of the design.
I mean, you do need to explain the mental model that you might need to have.
You might need to explain the bigger picture.
But yeah, like, oh, by the way, you should never use this function.
Otherwise, everything's going to break.
Can we remove that function?
Well, no, because, oh, well, nah.
I like that distinction.
Because if it feels like you're explaining a mental model, then it's a consistent mental model.
If you're explaining a caveat, why do we not call that a mental model?
I mean, technically it is, but it's a caveat within the mental model.
It's an inconsistency in that mental model.
So I think that is the distinction.
But you definitely do need to, when you're building packages, APIs, tools, whether it's
internal within a company or an external tool, you are going to need to learn some mental model.
And that's OK.
But is it going to be a predictable mental model and a consistent mental model?
And noticing those asterisks is a really good way to look for inconsistencies in the mental model.
Do you have an example of a caveat like in Elm, in the Elm language, or Elm tooling?
I mean, we've talked about this design change in Elm GraphQL, actually from when it went from being called Graphqelm to Elm GraphQL, when I found a way to turn a fragment and a selection set into the same concept.
A selection set with zero items, one item, or more than one item in it, it's just the same concept.
That was a caveat.
That was like, how do you...
That was an inconsistency in the mental model that you couldn't think of those things in
the same way.
And so I massaged that out.
And it took work in the internal implementation detail.
It doesn't come for free to massage out the caveats.
You have to do it through careful design.
But when you do, the user can think about it with a less complex mental model that's consistent.
Now it comes for free for the user.
But yeah, I mean, in general, I think we're keenly aware of caveats in Elm design.
And in a way, Elm makes us be so explicit about types in our APIs and how many arguments
does it take that it makes them glaringly obvious.
And also just Elm developers think so much about making impossible states impossible
and parse don't validate and things like that, that it sticks out like a sore thumb.
And, oh no, now this function needs to return a Maybe.
It makes you feel like, how can we prevent this, to make this nicer?
I'm thinking about the example of restarting Elm pages.
Imagine you had to run the Elm compiler like two or three times before you can be quite
certain that it works, that there's no error or...
That would suck.
Or you have to remove elm-stuff, or clear your ELM_HOME.
That would be such a waste of time.
It puts quite a burden on the user to need to think about those things.
It's all the things that you don't have to think about anymore.
I've been thinking about linters for quite a while, as you might know.
And for instance, you know my love hate relationship with disabled comments.
Yes, I do.
Mostly a hate relationship.
So, in case people find that term confusing: disable comments are those comments that you add to your code to disable linter warnings.
It's almost like a caveat.
It is almost like a caveat.
It's like a little asterisk in your code that says, this rule is enforced everywhere, except
for these few places.
Except for this case, yeah.
So you have like an eslint-disable comment.
I'm just giving the example of ESLint because that's the one I'm most familiar with.
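For anyone who hasn't seen one, a disable comment looks something like this (no-unused-vars is a real ESLint rule; the function itself is made up):

```javascript
// eslint-disable-next-line no-unused-vars -- keeping this until the feature lands
const legacyHelper = (x) => x * 2;
```

The comment silences the unused-variable warning for the next line only, and the text after the `--` is an optional description of why.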
And you've contributed to the ecosystem quite a bit.
I could have given the example of Elm review disable comments, but those don't exist.
Except for known tail recursion.
In a way, yeah.
Those are more hints to the rule.
There's a caveat to Elm Review's lack of the caveat of a disable comment.
I mean, I'm not saying that's what I made.
It's a reasonable one.
It's a good trade off.
It's a good trade off.
The thing I've been noticing is, linters are actually among the least trustworthy of the software tools that are available.
They actually have a very bad reputation of just reporting way too many false positives.
And false positives are annoying because they block you.
You need to solve them if you want to go forward.
And if you go back to the three pieces of information that I gave before: what is the problem, what is the reasoning behind it, and how do I unblock myself.
A lot of times those pieces of information are not given.
The main reason for that is that these tools usually give you a one-line error message.
Which is like, that's not a lot of information for me to go on.
And that can be very frustrating.
So when you don't have all the information, then you will often think about, well, how
can I unblock myself?
And the easiest way for that is to use a disable comment.
So if you really don't know what to do, then you're going to reach for that.
That's almost a certainty.
You can ask maybe a coworker, but otherwise you're going to reach for that.
And that's made even worse if you're used to having false positives: then, oh well, this case is very likely to be a false positive again.
I'm just going to ignore it.
So you add a disable comment and you don't think about it anymore.
So for example, if you have like an ESLint rule for unused functions, but then you have a disable comment for one function, where you're like, well, yeah, but I'm going to use this one later.
It breaks the mental model of: we don't have unused functions.
That's an exception.
The thing is, if you have that option of using disable comments, you're going to ask yourself: should I use a disable comment here?
Every time you see an error, should I use it?
Can I use it?
Is it okay to use it?
Those are questions that you don't ask when you use Elm Review, because you don't have that option.
You don't think about those unless you know that it's wrong for some reason and then please
open an issue.
But the usual exception for that is, people want to keep their code around because they haven't finished a feature or something, but there are better options, in my opinion, than disable comments.
So yeah, there's another, what did you call it, nudge or incentive if you have those disable comments, which is: well, I'm going to write a rule, and if it turns out to be wrong or impractical to apply the advice, then people can just add a disable comment.
Meaning that as a rule author, you're more inclined to write bad rules, which is not the case in Elm Review, because you can't disable things willy-nilly.
So therefore the quality has to be higher, and in some cases the bar is too high for the rule to even exist, which might be a bad trade-off in some cases.
I wish we could have more of those, but I haven't found a good way to make the tool trustworthy and still be able to detect things like code smells.
But yes, something that I've noticed is, the tool needs to communicate well, and it needs to not give you all those easy escape hatches to reach for.
Otherwise people are just going to use them and yeah, the result is not that great.
The result is sometimes like you enable a rule, you're going to think, okay, well this
critical problem will never appear anymore because we now have a review rule to make
sure it doesn't happen again.
And then someone has a disable comment, and like, well, now I can't trust that that guarantee holds.
Like potentially someone disabled it.
So you need to go and check whether someone added a disable comment somewhere, which is just like restarting your tool again: oh, well, I think it's right, but I'm just going to check again just to be sure.
And then you want a tool for a tool.
And people are going to add escape hatches to that one and then you need a tool for a
tool for a tool.
Yeah, I mean, in a way, an escape hatch... obviously we're fans of eliminating escape hatches in the Elm community, and some people find that frustrating, but you know, it's also a big part of what we love about Elm.
But I wonder is like, is an escape hatch a type of caveat that breaks the mental model?
It's just like another case of a caveat, you know?
Depends on the definition of it breaks your mental model.
It's just like there's an exception to your mental model that is made possible because
of the escape hatch.
Just like saying this is a sphere and it's perfectly round and then you actually have
an escape hatch on there, meaning at that exact location it is not round.
So it's an almost round balloon.
Yeah, I mean it's more, well I guess it's two things.
It's like number one, if there's an escape hatch then there's something that could have
been more consistent in the mental model possibly.
I mean it takes careful design of course and that's easier said than done.
But secondly like it might be layering things like this tweet talked about in terms that
aren't like a cohesive layer that you can think of as like an opinionated layer and
it's sort of leaking between two different layers of abstraction.
So how would you prevent a layer from doing too much magic, for instance?
How do you make sure a layer can be explained by what is underneath it?
So also like for example, I've been working on a new form API for the Elm pages v3 release
I'm working on and I've been heavily inspired as we've talked about by Remix's approach
which is kind of trying to take this concept of progressively enhancing forms where the
browser has built in forms.
We have an existing kind of low level mental model of a primitive that the web provides.
Just like the web provides a primitive of links and you go to a new page and we have
single page apps and they sort of hook into that mental model.
So you go to a new page, but it doesn't actually use the browser's page navigation; it intercepts it and takes you to a new page.
Well similarly, what if we use progressive enhancement to take that foundational mental
model of something that exists in the web, forms, a way of having key value pairs that
you can send to a server and in response to that data you send to the server, the server
can do something and then go to a new page.
Well what if we take that same mental model and build off of it?
So by doing that you're essentially creating a set of higher level opinions on top of that
idea that can take that as the foundational concept.
You can do it in a way where you don't break that mental model but you can add additional
concepts to that mental model.
So like, if you have a button in a form, then that button will submit the form, and if that button is disabled, then clicking on it won't submit the form.
All these things that we understand in our mental model about forms, you can piggyback on, and that reduces the learning burden for the user, because they may be familiar with forms already, and if not, it's portable knowledge that they can use in different contexts, and it's not going to be contradicted by caveats in other areas.
So it's just more sort of robust portable knowledge and so you could take that model,
build on top of it and then progressively enhance it to actually do client side server
requests to progressively enhance the vanilla form posts and that's sort of the basis of
the Elm Pages form API.
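The progressive enhancement idea described here can be sketched roughly like this. The helper names are hypothetical, not the actual Elm Pages API, and TypeScript stands in for Elm to keep the sketch short:

```typescript
// Sketch of building on the browser's form mental model (assumed
// helper names, purely illustrative). Without JavaScript, the browser
// itself sends the key/value pairs and navigates; the enhanced path
// builds the exact same request, just sent with fetch instead.

// Encode key/value pairs as a URL-encoded form body
// (spaces become %20 here; real form posts use + for spaces).
function encodeFormBody(fields: Array<[string, string]>): string {
  return fields
    .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
    .join("&");
}

// Describe the request a vanilla form submission would make, so the
// progressively enhanced path stays consistent with the no-JS path.
function formRequest(
  action: string,
  fields: Array<[string, string]>
): { url: string; method: string; headers: Record<string, string>; body: string } {
  return {
    url: action,
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: encodeFormBody(fields),
  };
}
```

An enhancement layer would intercept the form's submit event, build this same request, and send it with fetch, updating the page client-side instead of doing a full navigation.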
Yeah so you can always explain it using whatever is underneath or by saying something is almost
like whatever is underneath.
Right, it picks up where that left off and you can actually also use the low level thing
and it will work.
You can use a vanilla form but if you use this opinionated API that builds on top of
that, it adds a couple of things to that mental model.
Yeah, but if you do reason with this layer system, that does mean that a layer that comes on top of another one will necessarily be able to do less than the layer underneath.
At least in terms of runtime.
Which is why this ethos is about having the lower level pieces being less opinionated.
It basically is saying it should get more opinionated as you go up on the stack and
we've talked about this concept in the podcast in the past about the extensible web manifesto
which really influenced my thinking in Elm Pages to build with building blocks that can
be composed together and if you have an opinionated piece of a building block, you can still,
you know, I mean like for example, Elm CSS is expressed in terms of Elm HTML.
It's like a more opinionated version of that low level building block but having the Elm
HTML package be low level is a good basis where it has fewer opinions which means you
can build more opinionated things on top of it and not be constrained to a particular
set of paths.
Elm CSS is actually built on top of Elm Virtual DOM.
Yeah, good point.
But very close.
And then, yeah, Elm Virtual DOM is based on some kind of magic, it feels like.
You don't know how it works.
Although that said it's also because no one explained to me how Virtual DOM works or at
least in sufficient detail like if you explain it then it's probably okay.
But I guess that's the same for anything that is reasonably complex like you won't get it
unless you dig in or someone explains it to you.
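For what it's worth, the core virtual DOM idea fits in a deliberately tiny sketch (TypeScript, purely illustrative; real implementations like Elm's handle attributes, children, keyed nodes, and much more):

```typescript
// Minimal virtual DOM idea: the UI is described as plain data, two
// successive descriptions are diffed, and only the differences are
// turned into patches for the real DOM.
type VNode = { tag: string; text: string };

function diff(oldNode: VNode, newNode: VNode): string[] {
  if (oldNode.tag !== newNode.tag) {
    // Different element type: replace the whole node.
    return [`replace <${oldNode.tag}> with <${newNode.tag}>`];
  }
  if (oldNode.text !== newNode.text) {
    // Same element, new content: patch only the text.
    return [`set text to "${newNode.text}"`];
  }
  return []; // nothing changed, so the real DOM is left alone
}
```
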
Yeah, I mean, the Extensible Web Manifesto talks about this: if you're building a platform, basically the idea is that platforms should be low level.
They shouldn't be custom tailored to specific use cases because then you're preventing higher
level layers from being built on top of them.
So like service workers, they shouldn't really have strong opinions about what use cases
will be handled.
They should have strong opinions about security and corrupted cache states and things like that.
Those are the concerns at that level of the stack, just like, whatever, you know, TCP and UDP
are concerned about transporting data, not whether you're building, you know, a social
media application or an ecommerce website.
It doesn't care.
It's not like it says: I'm not going to transmit this message because it's for that social media website.
Maybe the world would be a better place, who knows?
But then we wouldn't have this tweet to talk about.
I mean, I'm pretty sure that there's no IDP for Google Plus anymore.
So maybe it is opinionated in a certain way.
Do you think that, I mean, I guess you've kind of touched on this, but I think sometimes
I think about like having a sacred space for users.
So like, for example, you know, the obvious one that everybody is familiar with would
be like being spammed with notifications and things like that, right?
Nobody wants, you don't want the boy who cried wolf.
You don't want a high amount of noise.
You want a good signal to noise ratio.
And the same is true.
I mean, boy who cried wolf effect, it means you're going to start ignoring things and
it's hard to trust something if you're ignoring it.
So ignoring something means you're probably not trusting it.
So are you talking about linter warnings?
Linter warnings, false alarms, error messages that aren't meaningful.
And there's a balance, right?
Like, I mean, in Elm test, if you have a describe with no test cases, then your test run fails.
And that's a trade off, right?
It's a trade off: it's doing something that is slightly annoying because it's trying to give you feedback early.
So there's a balance there, right?
I mean, it's, if you do too much of that, if you're stopping the world and giving a
warning about every possible thing, then people can start ignoring it.
So that's a delicate balance.
I know, for instance, that for Elm Review rules, you can't compile your project, or your rule, if it doesn't have any visitors.
So basically, if you can detect that a rule will never report anything, then that's treated as an error.
And people have found that to be quite annoying, because then, if you want to work in TDD mode, you necessarily need to write some dummy code for it to compile before you can actually start writing tests, and all those sorts of things.
So that's something that I'm thinking of removing because that's not all that helpful in practice.
I mean, yeah, people will probably notice it if they have a rule that does nothing.
There's a user experience aspect to this: give them meaningful errors, not noise.
And there's also like a big case of what is a meaningful error, right?
So for instance, Elm review has a testing module that is going to check for a lot of
things to make sure that your rule reports what is expected, that it does the right things,
and that it doesn't introduce bad automatic fixes.
And I feel like those are all, they can be annoying, especially if you have written rules
in other frameworks, which require a lot less checking.
But there's a reason for them to be there.
And I try to explain it in the error message as well.
That is: I'm reporting these problems because otherwise the user will have a bad experience, or will not understand the error, or it will make their code not compile anymore, stuff like that.
So there's definitely a fine line between when something is too nitpicky and when it's useful, but then you still need to explain it carefully so that people don't get frustrated and actually understand it.
Yeah, because if you use the Phantom Builder pattern to do too much handholding, it can
start to feel nitpicky.
So at the same time, it is freeing for the user to have certain scenarios that they don't
have to think about because they're impossible.
And so as with any of these API design considerations, you need to really weigh the tradeoffs.
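The Phantom Builder pattern mentioned here is usually shown in Elm; as a rough TypeScript analogue (hypothetical builder, purely illustrative), a phantom type parameter can track which steps have happened, so an invalid call order fails to type-check rather than at runtime:

```typescript
// Phantom states: these types exist only at compile time.
type Missing = { readonly _tag: "missing" };
type Provided = { readonly _tag: "provided" };

class RequestBuilder<State> {
  // Phantom marker so the two states are distinct types; never set at runtime.
  private readonly _state?: State;

  private constructor(private readonly url: string | null) {}

  static create(): RequestBuilder<Missing> {
    return new RequestBuilder<Missing>(null);
  }

  // Only callable while the URL is still missing.
  withUrl(this: RequestBuilder<Missing>, url: string): RequestBuilder<Provided> {
    return new RequestBuilder<Provided>(url);
  }

  // Only callable once the URL has been provided, so
  // RequestBuilder.create().build() is a compile-time error.
  build(this: RequestBuilder<Provided>): { url: string } {
    return { url: this.url as string };
  }
}

// Valid: the required step was performed before building.
const request = RequestBuilder.create().withUrl("/api/users").build();
```

The handholding tradeoff discussed above shows up directly here: every extra phantom constraint removes a class of mistakes, but it is also one more thing the user has to satisfy before their code compiles.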
Yeah, in this case, it's kind of like handholding when you're trying to cross the road, and forbidding you to cross when there's a car within a mile of you.
Like, sure, that's going to make sure that you don't die, but it's a bit overly protective.
And yeah, so what is overly protective?
That's the question.
Like for the Elm compiler, the Elm language, you have no shadowing and that annoys a lot
of people and they think it's not warranted or it's not necessary.
And I can see the point.
I'm personally convinced otherwise, but yeah, it's something that if you want to get across
well, you need to explain well, which I think is done correctly, but it's still something
that people need to be convinced of.
There's also something that we didn't mention before, I think, which is that the tool just needs to do the job it advertises, and do it well.
So like for instance, if you promise that you're going to build an Elm project, if you're
going to create an Elm pages application, well, you need to do it correctly, right?
You should not crash at all.
You should not give bad error messages when things do go wrong.
And that's a lot of work, and there's no magic to it.
It's just trying out the thing, finding edge cases yourself, and when things do go wrong, either presenting a nice error message with all the context that you want to provide to the user, or fixing the issue.
This is a lot of work for tooling authors in my opinion, but in practice, it's worth
it for everyone.
Yeah, that's obviously huge for trust, making tools or packages that are in fact dependable.
And I think another thing that really helps there is just, I mean, having a small and
focused tool or API, having a smaller API surface area, there's just less that can go
wrong, less edge cases for you to think about and for the user to think about, less mental
model to create.
It's just like, so if you can find simpler ways to express things, you can create less
work for yourself.
Now I'm wondering whether I'm doing too much with Elm Review, but...
I don't think so, but...
You have not seen my plans, Dillon.
I don't want to stop you.
It sounds enticing.
I think back sometimes to, like, I wonder if our sense of creating reliable software
has degraded at all.
I don't know, maybe that's something every generation thinks, that things are deteriorating.
But I think about the term software engineering, which I just recently learned was coined by Margaret Hamilton, who was working on the Apollo program.
You didn't know that?
I mean, the robustness of the systems they had on the Apollo missions was pretty remarkable.
And I mean, even when they did the Apollo 11 moon landing, there were some program alarms going off, you know?
That was because they were putting the systems under some strain they hadn't been under before, but the amount of drilling they did prepared them for these different scenarios.
Basically, they'd take these, you know, lunar module simulators and run all these training scenarios, trying to create edge cases where they have to choose whether to abort the mission or how to handle it, and all this stuff.
And man, it's impressive, the amount of consideration that goes into designing these systems.
And so Margaret Hamilton created this term to capture the amount of care that goes into
building this software that is really like an engineering discipline in the sense of
we need to build robust systems.
It's not just winging it.
And I mean, really it's incredible that there were these program alarms, 1201, 1202 program
alarms going off in the Apollo landing that they had never encountered before in the simulations,
but the code worked.
The code, you know, the program alarms were saying that they were having to shove nonessential tasks off the processor, because they had very limited processor power, but those code paths completely worked.
That's how robust it was.
So I don't know what we have to learn from that other than, I mean.
We stand on the shoulders of giants.
I think there's something to be said for, you know, what do they call it?
You know, that doctor's oath to do no harm.
I think first do no harm, like Asimov's laws for robots, or is that doctors?
First do no harm.
I think it's part of the Hippocratic oath.
I'm not a doctor.
But we need something like that for engineering.
I feel like Elm does equip us with tools for carefully considering use cases.
I'm not talking about upfront design here, but I'm talking about really examining holistically
the experience you're giving to users, the things that can go wrong when somebody uses
your tool, whether it's an end user, a developer, internal colleagues, whatever it might be.
I feel like the oath just starts with something like first do not seek escape hatches.
You might be onto something here.
Because that actually like requires you, oh, well I need to understand the thing.
I need to read documentation.
I need to dig in or something.
That is: use the tool for what it was made for.
Create not superstitions in your users' minds.
And then hackers might think like, yeah.
I mean, it certainly does feel like, I mean, when you look at Lamdera, Evergreen migrations
and things like this, it does feel like a different way of thinking about problem solving
where you can really think about, am I handling every case, in a way that is harder to do in other languages.
And in fact, a language that requires you to handle all the error cases is really nice for making tools, right?
Because all that remains is to make it nice, to give the user a nice error message basically.
As long as you're in Elm world.
It's like, I mean, user experience and developer experience are kind of one and the same.
Like you want to give users a good, predictable, robust experience.
I mean, I see developer experience as the same, just like my users are developers, right?
And they're serving other users and you want to help them serve those users better.
Which in turn might also be serving users.
It's users all the way down.
And maybe somewhere at the bottom there's a developer and then it's a circle or cycle
or a graph.
It's a complex system we live in Dillon.
It certainly is.
There's another case of which I am thinking of and that is when the tool is doing something
without telling you or without giving you the information.
So the example that I have is, for instance, automatic fixes that the linter gives you.
So if you look at ESLint, ESLint has a fix mode.
So you run eslint --fix, and that's going to run all the rules and it's going
to apply all the automatic fixes that it finds.
And then it's going to save that to the file system.
And the problem with that is that at the end you don't see what has happened.
You don't know what has changed.
The only thing you're left with is the files on your system, and maybe a git diff if you have git, which you should.
You look at the diff and you're like, well, I don't know why this thing changed.
You're missing some information like which rules were triggered.
How did they change my code?
Is that change safe?
Is it something that I should worry about?
Is it going to change how it behaves?
Maybe for the better, but is it going to change how it behaves?
And what is the reasoning for the errors that occurred and the reason for this change?
That's a lot of information that you basically have to dig for; the only thing you have is a diff.
That I've always found to be a bit of a problem.
And that's why in Elm Review, when you run it in fix mode, every time you encounter an error, you get a prompt where you see all the error messages and details, and you can say okay or no to the suggested fix, which you can preview.
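The contrast with a blanket --fix can be sketched like this (hypothetical types and names, not Elm Review's or ESLint's actual API): each fix carries the rule name, the message, and a preview, and a decision function stands in for the interactive prompt:

```typescript
// A suggested fix, with enough context to judge it.
type Fix = {
  rule: string;    // which rule produced it
  message: string; // why the change is suggested
  before: string;  // snippet to replace (the preview of the change)
  after: string;   // replacement snippet
};

// Apply each fix only if the user accepts it, and keep a record of
// what was applied and what was skipped, instead of silently
// rewriting files the way a blanket fix mode does.
function applyFixesInteractively(
  source: string,
  fixes: Fix[],
  decide: (fix: Fix) => boolean
): { result: string; applied: string[]; skipped: string[] } {
  const applied: string[] = [];
  const skipped: string[] = [];
  let result = source;
  for (const fix of fixes) {
    if (decide(fix)) {
      result = result.replace(fix.before, fix.after);
      applied.push(fix.rule);
    } else {
      skipped.push(fix.rule);
    }
  }
  return { result, applied, skipped };
}
```

Whatever the exact shape, the point is that the user ends the run knowing which rules fired, what changed, and why, rather than being left with only a diff.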
And like, yeah, if you don't have that information, it's going to be very hard to trust the tool, right?
High stakes situations like that are a very fast way to lose trust.
What do you mean with high stakes?
Like if you do a money transaction where a user has money, where it's not clear that
money is being spent, you will quickly lose trust.
If you overwrite users' files when it's not clear to them that they're doing a destructive operation, one that is going to potentially change or remove files, you will quickly lose their trust.
And so in those high stakes scenarios, you have to be very careful about communicating the intent, letting the user know what's going on, and letting them opt into it.
Kind of feels like filing your taxes to me.
Like you have a bunch of numbers and at the end you have this amount to pay and like,
well, I don't know how these all fit together, but that's the way it is.
So just like with ESLint or with taxes, either you accept it as it is, like, okay, I guess that's how it is, or you've got to complain and call the support center or whatever.
And then you might learn that they were actually correct the entire time.
But yes, if you don't understand what is going on, like it's not helpful for you.
And as a maintainer, you're going to have a lot of complaints as well.
Like, Hey, this is not doing what I wanted.
And once you lose that trust, people will never fully trust the tool or product again, you know.
It's going to take at least a few months, depending on your tool and how often people use it.
But yeah, it's going to take a while for people to say, okay, well, it's really been a long time since I've had this problem occur.
You'll need a rebrand.
You'll have to change company names, change your tool's name, put up a sign that says under new management.
I think in Elm 0.19.0, there were cases where the compiler didn't check some things and you had to remove the elm-stuff directory.
It's been a while.
And there were like the map crashes and things like that.
So those were annoying, but the fact that it didn't do the proper checks when you needed
it was annoying.
You know, just like with, you know, we've talked about the difference between 99% guarantees
and 100% guarantees and how different that experience feels working with abstractions
that give you those full guarantees.
And similarly, like the difference between 99% trust and 100% trust.
I mean, obviously we don't 100% trust anything really, but you know what I mean?
Like once you lose a little bit of faith in a tool and start to second guess it, you no
longer trust it.
You know, like, if you can fully depend on a tool, it's a totally different experience.
And again, having more transparent abstractions, like an opinionated layer building off of an existing layer, with some light abstractions with strong opinions: that's one way that helps you do that.
Having like a focused small tool.
If you're going to be more ambitious, then it's harder to earn that trust and it's harder
to prove to users that your tool is trustworthy.
Or you might be trustworthy in one feature, but not trustworthy in a new feature that you add.
And I feel like there's a bias where, if you know that someone is good at something, an expert at something, then you're going to assume that they're going to be an expert at something else as well, which will often not be the case.
Like if I'm starting to make videos about knitting, and if you only know me from Elm,
like, oh, he's good at Elm.
He must be good at knitting.
No, I'm not.
There is a name for that bias.
I don't remember what it is.
If I find it, I will add it to the show notes.
Jeroen, let's take a stab at a Hippocratic oath for tooling developers.
So tooling developers, not developers.
Then I would like to start with first, do not seek escape hatches.
Be dependable and let the user know that you're dependable through feedback.
I like that.
Don't leave the user confused.
Wait, did you already prepare this?
I am looking at some notes.
No, you're cheating.
I don't have anything to offer here.
Don't make them refresh.
Don't make them refresh yet.
We already have, oh no, caveats.
Make sure your tool doesn't crash.
Give a clear mental model.
Tell the user what you're going to do.
Let the user opt in.
Also, don't organize surprise parties.
Don't offer invalid ways to configure or use your tool.
And then I think we should end with saying use Elm.
Elm adoption is going to skyrocket after this tweet goes viral about our Hippocratic oath.
That's pretty good.
I think that's definitely a tweet.
If not a tweet, then a screenshot of some texts that we can tweet.
In multiple tweets.
A tweet thread.
One image with one of those lines per tweet.
That's how you make...
With relaxing background images.
Well, if we've missed anything for our Hippocratic developer oath, then tweet it at us.
Reply to our tweet.
Let us know.
And, Jeroen?
Until next time.
Until next time.