
Building Trustworthy Tools

We discuss how to earn users' trust by giving meaningful feedback and predictable results.
July 4, 2022
#60
  • Error messages should give three pieces of info
    • The problem
    • Why it's a problem
    • How to go forward
  • Needing to do magic incantations to get things into a good state adds cognitive load and makes debugging harder
  • Make tools more predictable
    • Clear mental model (avoid inconsistencies and leaky abstractions)
    • Doing an operation through different means should consistently arrive at the same result
    • Give context
    • Avoid caveats
  • Tweet about layering platforms properly
  • Extensible web manifesto
  • Avoid boy who cried wolf (becoming blind to noise from errors or warnings or other feedback)
  • Halo Effect

Transcript

[00:00:00]
Hello, Jeroen.
[00:00:01]
Hello, Dillon.
[00:00:02]
And what are we talking about today?
[00:00:03]
Today, we're talking about building trustworthy tools, which is something that we really enjoy,
[00:00:11]
both you and me, I'm sure.
[00:00:12]
That is true.
[00:00:13]
I'd say we enjoy building and using trustworthy tools.
[00:00:18]
Oh, yeah.
[00:00:19]
Using them even more, yeah.
[00:00:20]
Absolutely.
[00:00:21]
Yeah.
[00:00:22]
What a coincidence that Elm fans would enjoy trustworthy tools.
[00:00:26]
Who did that?
[00:00:27]
Yeah.
[00:00:28]
Well, you say that, but a lot of other communities don't enjoy them as much as we do.
[00:00:35]
Or at least they don't seem to put as much of an emphasis on them.
[00:00:39]
Right.
[00:00:40]
It's an interesting question.
[00:00:42]
I mean, is it that they're seeking something besides trustworthiness, like maybe convenience
[00:00:50]
or performance, or a rich ecosystem?
[00:00:54]
Or is it that they have a different set of criteria for what they consider to be trustworthy
[00:01:01]
in a different ecosystem?
[00:01:02]
I wonder.
[00:01:03]
I don't know.
[00:01:04]
Yeah.
[00:01:05]
I think it's more about like they value reliability, which is trust in a sense, but they don't
[00:01:16]
know what they could have.
[00:01:18]
So if they don't know what is possible, then they don't ask for it, they don't request
[00:01:25]
it from their tools.
[00:01:26]
Right.
[00:01:27]
Right.
[00:01:28]
If you have, I mean, maybe somebody who's working with TypeScript, they get IntelliSense
[00:01:35]
that shows them the thing they were looking for and catches some bugs where something was
[00:01:42]
null that they didn't check for.
[00:01:45]
And they're like, wow, that's a reliable tool.
[00:01:47]
So that's great.
[00:01:49]
That's absolutely great.
[00:01:50]
And I wonder, is there a continuum of trade offs between trustworthiness and sort of handholding
[00:01:59]
or you know, I think that that's how a lot of people would look at it.
[00:02:03]
But as Elm developers, I think we would tend to say like, yes, we do have strong guarantees
[00:02:08]
in Elm.
[00:02:10]
But also, we don't feel like we're fighting against the compiler.
[00:02:14]
We feel like the compiler is pushing us along.
[00:02:17]
And the things that it's preventing us from doing are things that we're glad we weren't
[00:02:23]
able to do.
[00:02:24]
It's not like, oh, if only the compiler would let me do this thing.
[00:02:27]
Yeah.
[00:02:28]
When you say handholding, do you mean like, if you're trying to cross the street while
[00:02:33]
holding the hand of your parents, and then they stop you because there's a car?
[00:02:39]
You mean like people don't like that?
[00:02:41]
You know, it's ironic.
[00:02:42]
In America, there are a lot of, you know, you'll have like, if there's a big drop off,
[00:02:47]
like an old historic building or something, where there's like a big cliff, suddenly,
[00:02:53]
you're like walking out to see a historic site, and there's a big cliff.
[00:02:58]
So people go and explore.
[00:02:59]
And in America, there will be often like a big fence preventing you from going off the
[00:03:04]
cliff.
[00:03:05]
It's like, this is an impossible state going off the cliff.
[00:03:09]
We will make it impossible.
[00:03:10]
We'll put a fence there.
[00:03:12]
And a lot of like European historical sites, you're just like, whoa, there's just a cliff
[00:03:17]
there.
[00:03:18]
Like someone could just walk off the cliff.
[00:03:20]
But it's a different approach.
[00:03:23]
So I guess there's like, is there a balance between making the tool trustworthy and trusting
[00:03:29]
the user?
[00:03:30]
You know, is that a dimension?
[00:03:33]
I think there is.
[00:03:34]
I mean, the annoying part with hand holding is when you get yanked back on the sidewalk.
[00:03:45]
And I feel like something that Elm does quite well, for instance, the compiler is when it
[00:03:50]
yanks you back, it does it in a gentle way.
[00:03:54]
So it pulls you back slightly in a nice gentle way.
[00:03:59]
Whereas the C++ compiler, from my experience back 15 years ago, is like, bam, it yanks you
[00:04:07]
and throws you on the curb and you figure out for yourself what you did wrong or something.
[00:04:15]
Right.
[00:04:16]
That's a good distinction.
[00:04:17]
Yeah.
[00:04:18]
Whereas the compiler tells you, hey, this was dangerous because there was a car coming
[00:04:22]
there.
[00:04:23]
This car was driving at this speed and if you walked forward, then it would have hit
[00:04:29]
you, which would have been bad because we're in the US and the bill would have been huge.
[00:04:33]
Right.
[00:04:34]
Or something like that.
[00:04:35]
Someone would have gotten sued.
[00:04:36]
Yeah.
[00:04:37]
So yeah, I feel like maybe the problem with hand holding is not so much the hand
[00:04:44]
holding itself as the communication around it.
[00:04:47]
If you explain the problems correctly, if you explain the reasoning behind it, then
[00:04:54]
things are nicer.
[00:04:55]
So for instance, if I want to walk across the street, I need to make sure that there's
[00:05:00]
no car in sight or that it's sufficiently far away.
[00:05:04]
And if that's not the case, then I should be blocked if I want to stay alive.
[00:05:10]
Right.
[00:05:11]
Yeah.
[00:05:12]
And to a certain extent, it comes down to you're going to trust your tool more if it's
[00:05:16]
preventing you from doing things that you're actually, you know, once you figure it out,
[00:05:22]
you say, oh, maybe you see a compiler error and you're like, what?
[00:05:26]
No, this is fine.
[00:05:27]
This should work.
[00:05:28]
What's wrong?
[00:05:29]
And then you sort through it and you're like, oh, I see.
[00:05:32]
Yeah, I did.
[00:05:33]
I did need to handle a case or I did need to get these types to line up.
[00:05:37]
So a tool can earn your trust by not giving you false alarms and not preventing you from
[00:05:45]
doing things that would be reasonable to do.
[00:05:48]
Yeah, the tool should definitely not give you false alarms.
[00:05:52]
And when it gives you an alarm, a real one, there are three pieces of information that
[00:05:55]
it should give you.
[00:05:57]
It should indicate what the problem is, what you did wrong or what you almost
[00:06:03]
did wrong; why it is a problem, like what is the reasoning behind the issue.
[00:06:08]
So I pulled you back from the road because otherwise you would have been hit.
[00:06:12]
And if you don't give that information, then people will think that you got stopped for
[00:06:17]
a stupid reason.
[00:06:18]
Like people are going to get frustrated.
[00:06:20]
Like this tool doesn't want me to do this.
[00:06:23]
And then they're going to be frustrated.
[00:06:25]
They're going to complain.
[00:06:27]
So I think explaining the reasoning behind a problem, behind an error is very valuable.
[00:06:33]
And then you need to, the third piece of information is like, how can you go forward?
[00:06:39]
How can you unblock yourself?
[00:06:41]
Like when is it okay to cross the road?
[00:06:44]
Once you've seen it, looked at the, once you've taken a look at the road and seen that there's
[00:06:48]
no car around, then you can walk, but not in the other cases.
[00:06:52]
So like, yeah, when you have an Elm compiler error, you want to know what the problem is.
[00:06:58]
Like, Oh, why is this wrong?
[00:07:00]
What did I write wrong here?
[00:07:04]
And a compiler will tell you, well, you had an if branch, an if expression.
[00:07:09]
And in the first branch you had the type number and in the second branch the type String.
[00:07:15]
And that's wrong.
[00:07:16]
And then why is it a problem?
[00:07:18]
Well, for Elm to make sure that you have something that is working, it requires you to have the
[00:07:25]
same type in both branches.
[00:07:26]
And then it could go on, like explain why that is necessary for it potentially.
[00:07:33]
And then you need to explain how to move forward.
[00:07:35]
Like how can I solve this problem?
[00:07:38]
Do I need to convert one of these to an integer, for instance?
[00:07:42]
And I actually have an example for that, this exact example.
[00:07:48]
And Elm gives you all of this information and more, which is really useful.
[00:07:53]
So if you just read the error message in full, you know all of this information, you don't
[00:07:58]
get frustrated and you know how to unblock yourself.
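For instance, this kind of branch mismatch (a hypothetical snippet; the compiler's output is paraphrased in the comments, not quoted verbatim):

```elm
-- Hypothetical example: the two branches of this `if` disagree.
statusLabel : Bool -> String
statusLabel isOk =
    if isOk then
        "All good"

    else
        404 -- a number where a String is expected

-- Elm's TYPE MISMATCH error covers all three pieces of information,
-- roughly:
--   * the problem: the branches produce different types
--     (1st branch: String, 2nd branch: number)
--   * why it's a problem: both branches of an `if` must produce
--     the same type of value
--   * how to go forward: a hint, e.g. to use String.fromInt to
--     convert the number to a String
```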
[00:08:00]
Yeah, it makes me think like if somebody is crossing the street and you yell stop, that's
[00:08:09]
actually kind of hard to trust because you haven't given clear information why.
[00:08:14]
But if somebody is starting to cross the street and you say car, now you've actually just
[00:08:20]
as concisely given not only the reason why it's important, but more
[00:08:30]
actionable information.
[00:08:31]
Because if you say stop, are you calling out stop because you're getting a phone call and
[00:08:38]
you need to wait in that area and it would be fine for them to like cross the street
[00:08:44]
and sit on a bench and stop for you there?
[00:08:46]
So saying stop doesn't help you in an actionable way where you know in what way to stop.
[00:08:54]
Like should I stop?
[00:08:55]
I'm in the middle of the road, should I stop in the middle of the road?
[00:08:57]
If you say car, then oh I should probably remove myself from the road, check my surroundings
[00:09:04]
and see which direction the car is coming from.
[00:09:07]
So in the same way, if an error message just says something went wrong, it can be very
[00:09:12]
frustrating for the user and it doesn't earn their trust that they should listen to us.
[00:09:19]
It's almost just unfairly demanding their attention without giving them an actionable
[00:09:25]
way to deal with that information.
[00:09:30]
It's not nice to tell somebody pay attention, do this thing without giving them the motivation
[00:09:37]
behind it.
[00:09:38]
I think it's a kinder thing to do to users or other human beings in general to bring
[00:09:45]
them into that process and tell them why.
[00:09:49]
It's important to share why.
[00:09:50]
Yeah, it's kind of the same thing in the news and politics, like saying this policy
[00:09:56]
is a terrible idea.
[00:09:59]
Well, that's great, but I'm not sure I'm going to trust you.
[00:10:03]
This policy is a bad idea because, and then your argument is that's a lot more helpful.
[00:10:10]
Or it can also be helpful to know why the person thinks that it's a bad policy and then
[00:10:15]
you can counter their arguments.
[00:10:16]
Yes.
[00:10:17]
Counter argue that.
[00:10:18]
Right.
[00:10:19]
So if a compiler says this says it's a maybe string but you need a string, you're like
[00:10:25]
compiler can we talk for a minute because I disagree.
[00:10:32]
Just reason with it.
[00:10:33]
I just have a string.
[00:10:39]
This has come up for me in Elm pages when I've been working on the user experience for
[00:10:45]
the dev server.
[00:10:46]
One of the things I noticed, I was trying to pay attention to how it felt interacting
[00:10:52]
with the dev server when I build things with it.
[00:10:55]
And one of the things that I noticed was if I'm on a 404 page in the dev server, and Elm
[00:11:03]
pages has a file based router.
[00:11:05]
So if it just says 404 page not found in the dev server, then I'm saying why is it 404?
[00:11:12]
Why can't you find it?
[00:11:14]
So you're trying to go to a page and for some reason it couldn't find it.
[00:11:18]
Yep.
[00:11:19]
The dev server, that is.
[00:11:21]
Even though you were actually trying to work on that page.
[00:11:24]
So you were kind of expecting to find that page, right?
[00:11:27]
I was expecting to find that page.
[00:11:29]
And now what I found was what that made me want to do is restart the dev server, hard
[00:11:37]
refresh the browser.
[00:11:39]
In other words, not trust the tool.
[00:11:41]
It made me not trust the dev server.
[00:11:43]
So I found that even when I earned the trust by having the 404 pages only be shown as an
[00:11:56]
accurate reflection of reality, that the routes were reloaded accurately whenever they needed
[00:12:03]
to be and it was always showing accurate information.
[00:12:05]
If I didn't present the user with more context, then they still wouldn't trust it.
[00:12:11]
And me as a proxy for the user, I experienced that.
[00:12:14]
Yeah, you doubted the tool just in case.
[00:12:18]
Yeah.
[00:12:19]
Because it could potentially solve the issue.
[00:12:21]
Right.
[00:12:22]
So you need more context.
[00:12:23]
So what I did is I mean, in hindsight, it sounds very simple, but I don't actually know
[00:12:30]
of other dev servers that do this.
[00:12:34]
So what I changed it to is it will say either I didn't find a route matching the URL you're
[00:12:41]
on.
[00:12:42]
Here are the routes that I have and it lists out all the different routes.
[00:12:46]
So if you go and you create a new route module that's going to add a new route to your project,
[00:12:53]
you will see it live reload with a new route showing up.
[00:12:58]
And then there's another scenario in elm-pages where you can have a set of statically rendered
[00:13:04]
pages and it might not be one of them.
[00:13:06]
So you could have like /posts/:slug, but the particular slug you're rendering is
[00:13:14]
not one of them.
[00:13:15]
So in that case, I show a different error message.
[00:13:16]
I say, hey, I matched this route.
[00:13:19]
This route exists, but this slug is not one of the slugs that you told me to render.
[00:13:25]
So I don't know how to handle that.
[00:13:27]
And then it will say, these are the slugs that I do know how to render.
[00:13:31]
And so the user might be able to go there and say, oh, I see the typo that I made, or
[00:13:38]
I expected to add a route like this and I expected it to be handled this way, but it
[00:13:43]
gives them feedback so they can kind of fine tune what went wrong.
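As a rough sketch (not elm-pages' actual code), the two messages described here only need information the dev server already has; the names are made up for illustration:

```elm
-- A minimal sketch, assuming the dev server already knows its route
-- list and its pre-rendered slugs.
noMatchingRoute : String -> List String -> String
noMatchingRoute url routes =
    "I didn't find a route matching `"
        ++ url
        ++ "`. Here are the routes that I have:\n"
        ++ String.join "\n" (List.map (\route -> "  - " ++ route) routes)

unknownSlug : String -> String -> List String -> String
unknownSlug route slug knownSlugs =
    "I matched the route `"
        ++ route
        ++ "`, but `"
        ++ slug
        ++ "` is not one of the slugs you told me to render:\n"
        ++ String.join "\n" (List.map (\s -> "  - " ++ s) knownSlugs)
```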
[00:13:47]
Yeah.
[00:13:48]
It's all the information that you kind of wish you had, but you often don't.
[00:13:54]
The scary part is when the tool, you don't trust it and you rerun it and then it actually
[00:14:01]
works.
[00:14:02]
Yeah, exactly.
[00:14:03]
That's where you don't want to end up because then people think, oh yeah, this tool, sometimes
[00:14:10]
it doesn't work.
[00:14:11]
It doesn't reload correctly.
[00:14:14]
So sometimes you need to exit it and restart it again.
[00:14:18]
And if you hit that point, then people are just going to do that sometimes just in case.
[00:14:23]
Oh, just restart your IDE, oh, just restart your computer.
[00:14:27]
Exactly.
[00:14:28]
I mean, that's a huge time waste.
[00:14:30]
Like we're seeing this with rebooting your computer.
[00:14:33]
Like if it doesn't work, it sucks so much.
[00:14:36]
And if it works, you know, it's going to suck for the next time that you're going to encounter
[00:14:42]
this problem.
[00:14:43]
Yes.
[00:14:44]
And when users become superstitious like that with their tools, and it's warranted,
[00:14:53]
it's a well founded superstition because when they do magic incantations, it does solve
[00:14:58]
the problem sometimes.
[00:15:01]
But when you build that kind of relationship with users, then think about the cognitive
[00:15:07]
load that's going on.
[00:15:08]
So the problem solving capacity that could be put towards debugging their specific problem
[00:15:17]
is now spent on debugging the tool and whether it's doing the thing it's supposed to be doing,
[00:15:22]
which is a very bad state of things.
[00:15:23]
And if you compound that with a large number of tools that the user doesn't trust, now
[00:15:30]
think of all the different permutations of incompatible, untrustworthy states that these
[00:15:36]
tools could get into.
[00:15:37]
Now suddenly the user, you know, or the developer, whoever's using the tool,
[00:15:43]
is spending a huge amount of their brain capacity to just think through these possible scenarios
[00:15:49]
and how to do the magic incantations to fix them.
[00:15:53]
Yeah.
[00:15:54]
I know that I personally feel like I'm very bad at doing anything related to infrastructure
[00:15:59]
or computer infrastructure.
[00:16:01]
So for instance, like using Docker, setting up an architecture where you have a Redis,
[00:16:08]
you have MongoDB or whatever and all those things.
[00:16:12]
I really have a hard time and I feel like it's because there's a lot of information
[00:16:17]
that I wish was there and potentially explanation because I never got good at it.
[00:16:22]
And these are really hard to find to me.
[00:16:25]
Like for instance, with Docker, I think you have, you can talk to things that are executed
[00:16:31]
inside of Docker through ports.
[00:16:33]
And then you have to open that port on the Docker side, which has one number.
[00:16:39]
And then you have a port on the outgoing side, something like that.
[00:16:44]
And which ones are open is not available information to me.
[00:16:48]
At least I don't know how to get it.
[00:16:51]
But basically that's all those pieces of information that I wish I had.
[00:16:55]
And if I don't, I'm going to try everything until I just quit or succeed.
[00:17:02]
Hopefully succeed.
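(For what it's worth, Docker can surface exactly that missing information; a quick sketch, with a made-up container name:)

```sh
# Map host port 8080 to container port 80 (the order is host:container).
docker run -d -p 8080:80 --name web nginx

# List running containers along with their port mappings.
docker ps

# Or ask for a single container's mappings directly.
docker port web
# 80/tcp -> 0.0.0.0:8080
```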
[00:17:03]
Yeah.
[00:17:04]
And it's the same thing with installing things.
[00:17:06]
Like I feel like I have very bad luck with installing things.
[00:17:09]
Oh, it's all Node.js.
[00:17:11]
Okay.
[00:17:12]
Oh, well now it doesn't work when I run this program.
[00:17:16]
Oh, oh.
[00:17:18]
And there's a reason behind it.
[00:17:20]
Like there's a reason why it doesn't work, but I don't have it.
[00:17:23]
So I'm going to try everything.
[00:17:24]
I'm going to try and reboot my computer multiple times.
[00:17:27]
Yeah.
[00:17:28]
At some point, hopefully it works.
[00:17:31]
You hope your snowflake doesn't melt because it's going to be hard to create a new one
[00:17:36]
from scratch.
[00:17:37]
What is that?
[00:17:38]
You don't know the snowflake term.
[00:17:39]
Like people talk about snowflakes in terms of like reproducible environments.
[00:17:42]
Like every snowflake is unique, right?
[00:17:45]
It's like a fingerprint.
[00:17:46]
Like no two snowflakes are alike.
[00:17:48]
So okay.
[00:17:49]
Yeah.
[00:17:50]
It is the term for a non-reproducible environment.
[00:17:53]
Yeah.
[00:17:54]
I'm not sure if snowflakes melting is a thing, but I think it should be.
[00:17:57]
I mean, in terms of water.
[00:18:01]
Then it's more uniform.
[00:18:03]
Yeah, true.
[00:18:04]
But if you mean in environments, then I don't know.
[00:18:08]
Yeah.
[00:18:09]
I do feel like my environments sometimes are like a smudge of everything.
[00:18:12]
So maybe a mud snowflake ish.
[00:18:16]
Yeah.
[00:18:18]
So as you said, like you actually, if you want to make a tool that you can trust, then
[00:18:24]
you actually need to make sure that it does the right thing as expected.
[00:18:28]
Like it needs to do the job that it advertises.
[00:18:31]
So if you have a watch mode or a live dev server running, then it actually needs to
[00:18:37]
reload everything as it should, which can be really hard, like especially with watching
[00:18:42]
files on your system, which has plenty of tricky issues.
[00:18:47]
But then yeah, you also need to communicate a lot of useful information that the tool
[00:18:52]
has available that the user might want to have, or at least to instill trust in the
[00:19:00]
relationship.
[00:19:01]
Right.
[00:19:02]
I'm realizing that a lot of this, I think, comes down to making the tool predictable
[00:19:08]
for the user.
[00:19:09]
And so like giving the user context is one way to make things more predictable, making
[00:19:17]
it do the same thing, whether you create a new route this one way or change this thing
[00:19:23]
this other place or delete a route module or however you arrive at a particular state
[00:19:30]
of things, having it behave the same way is another thing that makes it more predictable.
[00:19:36]
Another thing that makes tools more predictable is having like a clear, understandable mental
[00:19:42]
model.
[00:19:43]
Yeah.
[00:19:44]
If it has a lot of magic in the mental model, or if it has like leaky abstractions or inconsistencies
[00:19:51]
in the mental model, it's less predictable for the user.
[00:19:54]
Well using this function is not enough.
[00:19:56]
You also need to have called this function previously, and otherwise it won't work.
[00:20:02]
I wish it was written in the documentation or in the error message.
[00:20:08]
Never call this function after you've done this thing.
[00:20:15]
I saw a tweet recently that I thought was very interesting.
[00:20:19]
It's actually longer than a tweet because it's a tweet of an image with text.
[00:20:24]
But if I...
[00:20:25]
Cheating.
[00:20:26]
If you don't mind humoring me to read something a bit longer than a tweet.
[00:20:30]
So we'll link to this, but it says layering platforms properly.
[00:20:33]
One of the most important goals of a rationalized platform, I hadn't heard that term, is that
[00:20:38]
it is well layered in multiple senses.
[00:20:40]
One type of layering is ensuring that higher layers are more opinionated than lower layers.
[00:20:45]
So that was interesting.
[00:20:47]
Another important layering property is that as much as possible, higher layers should
[00:20:51]
be explainable in terms of lower layers.
[00:20:55]
So in a sense, it's talking about like almost like composable mental models where your mental
[00:21:01]
model at one level builds on top of your mental model at another level.
[00:21:06]
So it says, that is, the higher level API should express semantics that are a reasonably thin
[00:21:12]
or opinionated combining of semantics exposed by lower layers that are strictly lower in
[00:21:18]
the stack.
[00:21:19]
Yeah, I'm thinking is it finished?
[00:21:21]
The last part says layers that do not follow this property are likely either too thick
[00:21:26]
or rely on too much magic.
[00:21:28]
That's a layering smell that implies that there are things that developers might want
[00:21:33]
to do that they won't be able to break apart and use separately, i.e. the platform is not
[00:21:38]
composed of bricks held together loosely.
[00:21:41]
It also makes the layer harder to reason about.
[00:21:44]
So I thought that was quite interesting.
[00:21:45]
Yeah, the first thing that I'm thinking about is programming languages where you have bytecode
[00:21:54]
assembly at the bottom, binary even lower, and you have C++ and Rust and those compiling
[00:22:00]
to WebAssembly, to assembly, not WebAssembly.
[00:22:05]
And then you have languages like JavaScript that kind of compile to that.
[00:22:11]
Yeah, although they break a lot of mental models about performance characteristics like
[00:22:15]
we talked with Robin about.
[00:22:17]
Yes, there's some magic and there's some things that are really hard to explain, at least
[00:22:24]
with my understanding of the engine, how it works.
[00:22:30]
And then you've got languages like Elm, which can be explained through JavaScript.
[00:22:35]
Oh, well, this function simply compiles to this JavaScript function, which usually doesn't
[00:22:41]
rely on too much magic.
[00:22:43]
So yeah, I feel like that tweet is pretty good.
[00:22:47]
Yeah.
[00:22:48]
And like Elm Review, for example, it's actually just an Elm project.
[00:22:52]
So that's actually pretty cool, right?
[00:22:55]
Because it means that everything you use to reason about an Elm project applies: there's an
[00:23:01]
Elm.json, you install dependencies with Elm install, you can open it up in your editor
[00:23:08]
and all of your assumptions about an Elm project apply to an Elm Review folder.
[00:23:13]
The Review folder is just an Elm project.
[00:23:17]
You can run Elm make on it and other Elm tooling.
[00:23:24]
There is a Node.js part, but let's skip that part.
[00:23:28]
Right, but see, that is a strong opinion that is easily explainable in terms of the lower
[00:23:36]
layers.
[00:23:37]
So that would, I think, follow this tweet's advice about how to layer platforms.
[00:23:42]
Because you could have an Elm Review.json and it's a set of the rules that you install.
[00:23:50]
And if you put a particular special JSON key in there, it's going to automatically install
[00:23:56]
those rules and there's JSON config for those rules.
[00:24:00]
So there are all sorts of ways that Elm Review could be designed where it doesn't build off
[00:24:06]
of that mental model that Elm developers already have.
[00:24:10]
Yeah, and there was some exploration of how to do that for quite a while.
[00:24:16]
I could have had review dependencies in the Elm.json file, which was what I was going
[00:24:21]
for at the beginning.
[00:24:22]
And then it turned out that the Elm compiler just removes them when you install another
[00:24:27]
dependency.
[00:24:28]
So like, well, I guess that doesn't work as well as expected.
[00:24:32]
I really like the end result.
[00:24:36]
It's not really nice to say, well, it's this whole new application, but it has some very
[00:24:41]
nice properties.
[00:24:42]
So I'm happy with it.
[00:24:44]
I think it worked out very well.
[00:24:45]
Yeah.
[00:24:46]
And another thing that comes up in trustworthy tools, that I think Elm developers are particularly
[00:24:55]
tuned into is invalid configurations or invalid ways to use a package.
[00:25:02]
And in general, I really like this idea of eliminating caveats.
[00:25:09]
Just like if I find myself explaining something, hey, here's this new design.
[00:25:15]
Here's this new package.
[00:25:17]
Oh, cool.
[00:25:18]
How do you use it?
[00:25:19]
Well, you do this.
[00:25:20]
But there's one thing I should explain.
[00:25:22]
It's like, okay, hold on.
[00:25:24]
Capture that.
[00:25:25]
What was that caveat?
[00:25:27]
Can you design it so that it doesn't require that caveat?
[00:25:29]
And just keep massaging the caveats out of the design.
[00:25:34]
Yeah.
[00:25:35]
I mean, you do need to explain the mental model that you might need to have.
[00:25:39]
You might need to explain the bigger picture.
[00:25:42]
But yeah, like, oh, by the way, you should never use this function.
[00:25:45]
Otherwise, everything's going to break.
[00:25:48]
Can we remove that function?
[00:25:50]
Well, no, because, oh, well, nah.
[00:25:53]
Sure.
[00:25:54]
I like that distinction.
[00:25:56]
Because if it feels like you're explaining a mental model, then it's a consistent mental
[00:26:02]
model.
[00:26:03]
If you're explaining a caveat, why do we not call that a mental model?
[00:26:05]
I mean, technically it is, but it's a caveat within the mental model.
[00:26:09]
It's an inconsistency in that mental model.
[00:26:12]
So I think that is the distinction.
[00:26:14]
But you definitely do need to, when you're building packages, APIs, tools, whether it's
[00:26:21]
internal within a company or an external tool, you are going to need to learn some mental
[00:26:26]
model.
[00:26:27]
And that's OK.
[00:26:29]
But is it going to be a predictable mental model and a consistent mental model?
[00:26:34]
And if you're noticing those asterisks is a really good way to look for inconsistencies
[00:26:40]
in the mental model.
[00:26:41]
Do you have an example of a caveat like in Elm, in the Elm language, or Elm tooling?
[00:26:46]
Yeah.
[00:26:47]
I mean, we've talked about this design change where in Elm GraphQL, actually when it went
[00:26:54]
from being called Graphqelm to Elm GraphQL, when I found a way to turn a fragment and
[00:27:01]
selection set into the same concept and a selection set with zero items, one item, more
[00:27:07]
than one item in it, it's just the same concept.
[00:27:11]
That was a caveat.
[00:27:12]
That was like, how do you...
[00:27:15]
That was an inconsistency in the mental model that you couldn't think of those things in
[00:27:18]
the same way.
[00:27:20]
And so I massaged that out.
[00:27:22]
And it took work in the internal implementation detail.
[00:27:26]
It doesn't come for free to massage out the caveats.
[00:27:30]
You have to do it through careful design.
[00:27:31]
But when you do, the user can think about it with a less complex mental model that's
[00:27:37]
more predictable.
[00:27:38]
Now it comes for free for the user.
[00:27:41]
Right.
[00:27:42]
Yep.
[00:27:43]
But yeah, I mean, in general, I think we're keenly aware of caveats in Elm design.
[00:27:48]
And in a way, Elm makes us be so explicit about types in our APIs and how many arguments
[00:27:55]
a function takes that it makes them glaringly obvious.
[00:28:00]
And also just Elm developers think so much about making impossible states impossible
[00:28:04]
and parse, don't validate, and things like that, that it sticks out like a sore thumb.
[00:28:09]
Yeah.
[00:28:10]
And oh no, now this function needs to return a maybe.
[00:28:12]
Right.
[00:28:13]
Exactly.
[00:28:14]
Because it might feel like, how can we prevent this to make this nicer?
[00:28:18]
Yeah.
[00:28:19]
Yeah, absolutely.
[00:28:20]
Yeah.
[00:28:21]
I'm thinking about the example of restarting Elm pages.
[00:28:25]
Imagine you had to run the Elm compiler like two or three times before you can be quite
[00:28:30]
certain that it works, that there's no error or...
[00:28:34]
Yes.
[00:28:35]
That would suck.
[00:28:36]
Or you have to remove elm-stuff.
[00:28:38]
Right.
[00:28:39]
Or clear your ELM_HOME. That would be such a waste of time.
[00:28:45]
It puts quite a burden on the user to need to think about those things.
[00:28:49]
Yeah.
[00:28:50]
Yeah, exactly.
[00:28:51]
It's all the things that you don't have to think about anymore.
[00:28:54]
I've been thinking about linters for quite a while, as you might know.
[00:29:01]
And for instance, you know my love-hate relationship with disable comments.
[00:29:08]
Yes, I do.
[00:29:10]
Mostly a hate relationship.
[00:29:11]
Yeah.
[00:29:12]
So, in case people find that term confusing:
[00:29:16]
disable comments are those comments that you add to your code to disable linter warnings.
[00:29:23]
It's almost like a caveat.
[00:29:26]
It is almost like a caveat.
[00:29:27]
It's like a little asterisk in your code that says, this rule is enforced everywhere, except
[00:29:33]
for these few places.
[00:29:34]
Except for this case, yeah.
[00:29:35]
So you have, like, an ESLint disable comment.
[00:29:39]
I'm just giving the example of ESLint because that's one I'm most familiar with.
[00:29:42]
And you've contributed to the ecosystem quite a bit.
[00:29:46]
I could have given the example of Elm review disable comments, but those don't exist.
[00:29:51]
Except for known tail recursion.
[00:29:55]
In a way, yeah.
[00:29:57]
Those are more hints to the rule.
[00:29:59]
There's a caveat to Elm Review's lack of the caveat of a disable comment.
[00:30:04]
Yes.
[00:30:05]
I mean, I'm not saying that's what I made.
[00:30:08]
It's a reasonable one.
[00:30:09]
It's a good trade off.
[00:30:10]
It's a good trade off.
[00:30:11]
So far.
[00:30:12]
Yeah.
[00:30:13]
Yeah.
[00:30:14]
The thing I've been noticing is like, when you, so linters are actually one of the tools
[00:30:21]
that are the least trustworthy among the software tooling that is available.
[00:30:28]
They actually have a very bad reputation of just reporting way too many false positives.
[00:30:32]
And false positives are annoying because they block you.
[00:30:35]
You need to solve them if you want to go forward.
[00:30:39]
And if you go back to the three pieces of information that I gave before, like
[00:30:44]
what is the problem, what is the reasoning behind it and how to unblock myself.
[00:30:48]
A lot of times those pieces of information are not given.
[00:30:52]
The main reason for that is that these tools usually give you a one-line error message.
[00:30:59]
Which is like, that's not a lot of information for me to go on.
[00:31:05]
And that can be very frustrating.
[00:31:07]
So when you don't have all the information, then you will often think about, well, how
[00:31:14]
can I unblock myself?
[00:31:15]
And the easiest way for that is to use a disable comment.
[00:31:19]
So if you really don't know what to do, then you're going to reach for that.
[00:31:24]
That's almost a certainty.
[00:31:25]
You can ask maybe a coworker, but otherwise you're going to reach for that.
[00:31:30]
And that's going to be even made even worse if you're used to having false positives,
[00:31:35]
then, oh, well, this case is very likely to be a false positive again.
[00:31:40]
I'm just going to ignore it.
[00:31:41]
So you add a disable comment and you don't think about it anymore.
[00:31:46]
So for example, if you have like an ESLint rule for unused functions, but then you have
[00:31:51]
a disable comment for one function that you're like, well, yeah, but I'm going to use this
[00:31:56]
or it breaks the mental model of we don't have unused functions.
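As a concrete illustration (ESLint's directive syntax is real; the function is made up):

```js
/* eslint no-unused-vars: "error" */

// The rule is enforced everywhere... except here, so the guarantee
// "we have no unused functions" quietly no longer holds.
// eslint-disable-next-line no-unused-vars
function futureFeature() {}
```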
[00:32:02]
Yeah.
[00:32:03]
Yeah.
[00:32:04]
That's an exception.
[00:32:05]
The thing is, if you have that option of using disable comments, you're going to ask
[00:32:11]
yourself, should I use a disable comment here?
[00:32:14]
Every time you see an error, should I use it?
[00:32:16]
Can I use it?
[00:32:17]
Is it okay to use it?
[00:32:20]
Those are questions that you don't ask yourself when you use Elm Review because
[00:32:23]
you don't have that option.
[00:32:24]
You don't think about those unless you know that it's wrong for some reason and then please
[00:32:29]
open an issue.
[00:32:30]
But the usual exception for that is like people want to keep their code around because they
[00:32:38]
haven't finished a feature or something, but there are better options in my opinion than
[00:32:41]
disable comments.
[00:32:43]
So yeah, like there's another, what did you call it?
[00:32:47]
Nudge or incentive if you have those disable comments is that, well, I'm going to write
[00:32:52]
a rule and if it turns out to be wrong or impractical to apply the advice, then people
[00:32:59]
can just add a disable comment.
[00:33:01]
Meaning that as a rule author, you're more inclined to write bad rules, which is not
[00:33:08]
the case in Elm Review because you can't disable things willy nilly.
[00:33:13]
So therefore the quality has to be higher and in some cases it needs to be too high
[00:33:18]
for the rule to even exist, which might be a bad trade off in some cases.
[00:33:25]
I wish we could have more of those, but I haven't found a good way to make the tool
[00:33:29]
trustworthy and still be able to detect things like code smells.
[00:33:34]
But yes, something that I've noticed is like we need to, the tool needs to communicate
[00:33:39]
well and it needs to not give you all those easy escape hatches to reach for.
[00:33:46]
Otherwise people are just going to use them and yeah, the result is not that great.
[00:33:51]
The result is sometimes like you enable a rule, you're going to think, okay, well this
[00:33:55]
critical problem will never appear anymore because we now have a review rule to make
[00:34:00]
sure it doesn't happen again.
[00:34:02]
And then someone adds a disable comment and like, well now I can't trust that that guarantee
[00:34:08]
is true.
[00:34:09]
Like potentially someone disabled it.
[00:34:11]
So you need to go and check whether someone added a disable comment somewhere, which
[00:34:16]
is just like restarting your tool again: oh, well, I think it's right, but I'm
[00:34:23]
just going to check again just to be sure.
[00:34:28]
And then you want a tool for a tool.
[00:34:33]
And people are going to add escape hatches to that one and then you need a tool for a
[00:34:36]
tool for a tool.
[00:34:37]
Yeah, I mean in a way an escape hatch, I mean obviously we're pretty, we're fans of eliminating
[00:34:49]
escape hatches in the Elm community and some people find that frustrating, but, you
[00:34:56]
know, it's also a big part of what we love about Elm.
[00:34:58]
But I wonder is like, is an escape hatch a type of caveat that breaks the mental model?
[00:35:06]
It's just like another case of a caveat, you know?
[00:35:10]
Depends on the definition of it breaks your mental model.
[00:35:12]
It's just like there's an exception to your mental model that is made possible because
[00:35:17]
of the escape hatch.
[00:35:18]
Right.
[00:35:19]
Just like saying this is a sphere and it's perfectly round and then you actually have
[00:35:26]
an escape hatch on there, meaning at that exact location it is not round.
[00:35:31]
Yeah, yeah.
[00:35:32]
Right.
[00:35:33]
So it's an almost round balloon.
[00:35:35]
Yeah, I mean it's more, well I guess it's two things.
[00:35:41]
It's like number one, if there's an escape hatch then there's something that could have
[00:35:47]
been more consistent in the mental model possibly.
[00:35:50]
I mean it takes careful design of course and that's easier said than done.
[00:35:54]
But secondly like it might be layering things like this tweet talked about in terms that
[00:36:02]
aren't like a cohesive layer that you can think of as like an opinionated layer and
[00:36:10]
it's sort of leaking between two different layers of abstraction.
[00:36:14]
So how would you prevent a layer from doing too much magic for instance?
[00:36:18]
How do you make sure one can be explained by what is underneath it?
[00:36:23]
Right.
[00:36:24]
So also like for example, I've been working on a new form API for the Elm pages v3 release
[00:36:33]
I'm working on and I've been heavily inspired as we've talked about by Remix's approach
[00:36:39]
which is kind of trying to take this concept of progressively enhancing forms where the
[00:36:44]
browser has built in forms.
[00:36:46]
We have an existing kind of low level mental model of a primitive that the web provides.
[00:36:52]
Just like the web provides a primitive of links and you go to a new page and we have
[00:36:57]
single page apps and they sort of hook into that mental model.
[00:37:01]
So you go to a new page, it doesn't actually use the browser's page navigation, it intercepts
[00:37:07]
that and progressively enhances it and uses JavaScript to pull in new data and change
[00:37:15]
to a new page.
[00:37:16]
Well similarly, what if we use progressive enhancement to take that foundational mental
[00:37:22]
model of something that exists in the web, forms, a way of having key value pairs that
[00:37:27]
you can send to a server and in response to that data you send to the server, the server
[00:37:33]
can do something and then go to a new page.
[00:37:37]
Well what if we take that same mental model and build off of it?
[00:37:41]
So by doing that you're essentially creating a set of higher level opinions on top of that
[00:37:48]
idea that can take that as the foundational concept.
[00:37:52]
You can do it in a way where you don't break that mental model but you can add additional
[00:37:57]
concepts to that mental model.
[00:37:59]
So like if you have a button in a form then that button will submit and if that button
[00:38:06]
is disabled then clicking on that button won't submit the form and all these things that
[00:38:10]
we understand in our mental model about forms, you can piggyback on that and that reduces
[00:38:16]
the learning burden of the user because they may be familiar with forms and if not, it's
[00:38:25]
portable knowledge that they can use in different contexts and it's not going to be contradicted
[00:38:30]
by other caveats in other areas.
[00:38:32]
So it's just more sort of robust portable knowledge and so you could take that model,
[00:38:39]
build on top of it and then progressively enhance it to actually do client side server
[00:38:45]
requests to progressively enhance the vanilla form posts and that's sort of the basis of
[00:38:52]
the Elm Pages form API.
[00:38:54]
Yeah so you can always explain it using whatever is underneath or by saying something is almost
[00:39:01]
like whatever is underneath.
[00:39:03]
Right, it picks up where that left off and you can actually also use the low level thing
[00:39:09]
and it will work.
[00:39:10]
You can use a vanilla form but if you use this opinionated API that builds on top of
[00:39:16]
that, it adds a couple of things to that mental model.
[00:39:19]
Yeah but if you do reason with this layer system, that does mean that a layer that comes
[00:39:25]
on top of another one will necessarily be able to do less than the layer underneath
[00:39:31]
it.
[00:39:32]
So Elm will never be able to do anything that JavaScript isn't able to do.
[00:39:36]
True.
[00:39:37]
At least in terms of runtime.
[00:39:39]
Which is why this ethos is about having the lower level pieces being less opinionated.
[00:39:47]
It basically is saying it should get more opinionated as you go up on the stack and
[00:39:51]
we've talked about this concept in the podcast in the past about the extensible web manifesto
[00:39:57]
which really influenced my thinking in Elm Pages to build with building blocks that can
[00:40:04]
be composed together and if you have an opinionated piece of a building block, you can still,
[00:40:11]
you know, I mean like for example, Elm CSS is expressed in terms of Elm HTML.
[00:40:17]
It's like a more opinionated version of that low level building block but having the Elm
[00:40:23]
HTML package be low level is a good basis where it has fewer opinions which means you
[00:40:30]
can build more opinionated things on top of it and not be constrained to a particular
[00:40:35]
set of paths.
[00:40:36]
Elm CSS is actually built on top of Elm Virtual DOM.
[00:40:40]
That's true.
[00:40:41]
That's true.
[00:40:42]
Yeah, good point.
[00:40:43]
But very close.
[00:40:44]
Yeah, yeah.
[00:40:45]
And then yeah, Elm Virtual DOM is based on some kind of magic it feels like.
[00:40:50]
You don't know it.
[00:40:52]
Although that said it's also because no one explained to me how Virtual DOM works or at
[00:40:57]
least in sufficient detail like if you explain it then it's probably okay.
[00:41:03]
But I guess that's the same for anything that is reasonably complex like you won't get it
[00:41:08]
unless you dig in or someone explains it to you.
[00:41:12]
Yeah.
[00:41:13]
Yeah, I mean like the like the Extensible Web Manifesto talks about like if you're building
[00:41:19]
a low level platform, like basically the idea is that platforms should be low level.
[00:41:24]
They shouldn't be custom tailored to specific use cases because then you're preventing higher
[00:41:30]
level layers from being built on top of them.
[00:41:33]
So like service workers, they shouldn't really have strong opinions about what use cases
[00:41:39]
will be handled.
[00:41:40]
They should have strong opinions about security and corrupted cache states and things like
[00:41:46]
that, right?
[00:41:47]
Those are the concerns at that level of the stack just like whatever, you know, TCP, UDP
[00:41:53]
are concerned about transporting data, not whether you're building, you know, a social
[00:42:01]
media application or an ecommerce website.
[00:42:05]
It doesn't care.
[00:42:07]
I'm not going to transmit this message if it's that social media website.
[00:42:15]
Maybe the world would be a better place, who knows?
[00:42:17]
But then we wouldn't have this tweet to talk about.
[00:42:20]
I mean, I'm pretty sure that there's no API for Google Plus anymore.
[00:42:25]
So maybe it is opinionated in a certain way.
[00:42:31]
Do you think that, I mean, I guess you've kind of touched on this, but I think sometimes
[00:42:35]
I think about like having a sacred space for users.
[00:42:39]
So like, for example, you know, the obvious one that everybody is familiar with would
[00:42:43]
be like being spammed with notifications and things like that, right?
[00:42:47]
Nobody wants, you don't want the boy who cried wolf.
[00:42:49]
You don't want a high amount of noise.
[00:42:54]
You want a good signal to noise ratio.
[00:42:57]
And the same is true.
[00:42:58]
I mean, boy who cried wolf effect, it means you're going to start ignoring things and
[00:43:04]
it's hard to trust something if you're ignoring it.
[00:43:07]
So ignoring something means you're probably not trusting it.
[00:43:11]
So are you talking about linter warnings?
[00:43:15]
Linter warnings, false alarms, error messages that aren't meaningful.
[00:43:19]
And there's a balance, right?
[00:43:20]
Like, I mean, in like Elm test, if you have a describe with no test cases, then your test
[00:43:27]
suite fails.
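For example (a minimal sketch of the behavior described):

```elm
import Test exposing (Test, describe)

suite : Test
suite =
    -- elm-test fails the run here rather than silently passing with
    -- zero tests: an empty describe is treated as a mistake.
    describe "String.reverse" []
```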
[00:43:28]
Yep.
[00:43:29]
And that's a trade off, right?
[00:43:30]
It's a, it's a trade off that it's kind of doing something that is slightly annoying
[00:43:36]
because it's trying to give you feedback early.
[00:43:40]
So there's a balance there, right?
[00:43:41]
I mean, it's, if you do too much of that, if you're stopping the world and giving a
[00:43:46]
warning about every possible thing, then people can start ignoring it.
[00:43:51]
So that's a delicate balance.
[00:43:53]
I know for instance that for Elm Review rules, you cannot compile your project or
[00:44:00]
your rule if it doesn't have any visitors.
[00:44:04]
So basically if the, if you can detect that it will not report anything ever, then that
[00:44:11]
won't compile.
[00:44:12]
And people have found that to be quite annoying because then you need to, if you want to do
[00:44:17]
like in TDD mode, then you necessarily need to write some dummy code for it to compile
[00:44:23]
before you can actually start writing tests and all those sorts of things.
[00:44:28]
So that's something that I'm thinking of removing because that's not all that helpful in practice.
[00:44:33]
I mean, yeah, people will probably notice it if they have a rule that does nothing.
[00:44:39]
There's a user experience aspect to this, like, give them meaningful errors, not
[00:44:45]
annoying ones.
[00:44:46]
And there's also like a big question of what is a meaningful error, right?
[00:44:52]
So for instance, Elm review has a testing module that is going to check for a lot of
[00:44:57]
things to make sure that your rule reports what is expected, that it does the right things,
[00:45:03]
that it doesn't introduce bad automatic fixes.
[00:45:08]
And I feel like those are all, they can be annoying, especially if you have written rules
[00:45:13]
in other frameworks, which require a lot less checking.
[00:45:18]
But there's a reason for them to be there.
[00:45:21]
And I try to explain it in the error message as well.
[00:45:24]
That's I'm reporting these problems because otherwise the user will have a bad experience
[00:45:32]
or will not understand the error or it will make their code not compile anymore, stuff
[00:45:39]
like that.
[00:45:40]
So there's definitely a fine line between when something is too nitpicky and when it's
[00:45:47]
useful, but then you still need to explain it carefully so that people don't get frustrated
[00:45:52]
and they actually understand it.
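A small sketch of what those checks look like with elm-review's Review.Test module (the rule under test, message, and details here are made up):

```elm
import Review.Test
import Test exposing (Test, test)

tests : Test
tests =
    test "reports the unused function" <|
        \() ->
            """module A exposing (used)
used = 1
unused = 2"""
                -- `rule` is the hypothetical rule under test.
                |> Review.Test.run rule
                |> Review.Test.expectErrors
                    [ Review.Test.error
                        { message = "`unused` is never used"
                        , details = [ "You can safely remove it." ]
                        , under = "unused"
                        }
                    ]
```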
[00:45:54]
Right, right.
[00:45:56]
Yeah, because if you use the Phantom Builder pattern to do too much handholding, it can
[00:46:04]
start to feel nitpicky.
[00:46:06]
So at the same time, it is freeing for the user to have certain scenarios that they don't
[00:46:14]
have to think about because they're impossible.
[00:46:16]
And so as with any of these API design considerations, you need to really weigh the tradeoffs.
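For reference, a minimal phantom builder sketch (a hypothetical API): the phantom type parameter tracks what has been configured, so misuse becomes a compile error instead of a runtime check.

```elm
module Button exposing (..)

-- `state` is a phantom type parameter: it appears only in
-- signatures, never in the runtime value.
type Button state
    = Button { label : String }

type NeedsLabel
    = NeedsLabel

type Ready
    = Ready

button : Button NeedsLabel
button =
    Button { label = "" }

withLabel : String -> Button NeedsLabel -> Button Ready
withLabel label (Button _) =
    Button { label = label }

-- Forgetting `withLabel` is a compile error rather than a runtime
-- surprise: `view` only accepts a fully configured button.
view : Button Ready -> String
view (Button { label }) =
    "<button>" ++ label ++ "</button>"
```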
[00:46:22]
Yeah, in this case, it's kind of like handholding when you're trying to cross the road and forbidding
[00:46:29]
you to cross the road when there's a car in the next mile around you.
[00:46:35]
Like sure, that's going to make sure that you don't die, but it's a bit overly protective.
[00:46:45]
And yeah, so what is overly protective?
[00:46:48]
What isn't?
[00:46:49]
That's the question.
[00:46:51]
Like for the Elm compiler, the Elm language, you have no shadowing and that annoys a lot
[00:46:57]
of people and they think it's not warranted or it's not necessary.
[00:47:04]
And I can see the point.
[00:47:05]
I'm personally convinced otherwise, but yeah, it's something that if you want to get across
[00:47:11]
well, you need to explain well, which I think is done correctly, but it's still something
[00:47:17]
that people need to be convinced of.
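For example, this snippet is rejected at compile time (deliberately broken to show the rule):

```elm
name : String
name =
    "outer"

greeting : String
greeting =
    let
        -- Compile error: `name` shadows the top-level `name` above.
        -- Elm asks you to rename one of the two instead.
        name =
            "inner"
    in
    "Hello, " ++ name
```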
[00:47:21]
There's also something that we didn't mention before, I think, which is that the tool just needs
[00:47:26]
to do the job that it advertises well.
[00:47:29]
So like for instance, if you promise that you're going to build an Elm project, if you're
[00:47:34]
going to create an Elm pages application, well, you need to do it correctly, right?
[00:47:40]
You should not crash at all.
[00:47:44]
You should not give bad error messages when things do go wrong.
[00:47:49]
And that's a lot of work. There's no magic.
[00:47:51]
It's just trying out the thing, finding edge cases yourself and when things go wrong, either
[00:48:00]
present a nice error message with all the context that you want to provide to the user or
[00:48:06]
fix the issue.
[00:48:08]
This is a lot of work for tooling authors in my opinion, but in practice, it's worth
[00:48:13]
it for everyone.
[00:48:15]
Yeah, that's obviously huge for trust, making tools or packages that are in fact dependable.
[00:48:23]
And I think another thing that really helps there is just, I mean, having a small and
[00:48:28]
focused tool or API, having a smaller API surface area, there's just less that can go
[00:48:35]
wrong, less edge cases for you to think about and for the user to think about, less mental
[00:48:40]
model to create.
[00:48:42]
It's just like, so if you can find simpler ways to express things, you can create less
[00:48:49]
work for yourself.
[00:48:50]
Now I'm wondering whether I'm doing too much with Elm Review, but...
[00:48:54]
I don't think so, but...
[00:48:55]
You have not seen my plans, Dillon.
[00:49:02]
I don't want to stop you.
[00:49:04]
It sounds enticing.
[00:49:06]
I think back sometimes to, like, I wonder if our sense of creating reliable software
[00:49:13]
has degraded at all.
[00:49:15]
I don't know if that's something that every generation thinks that things are deteriorating,
[00:49:21]
right?
[00:49:22]
But I think about, like, how the term software engineering, I just recently learned, was created by Margaret
[00:49:29]
Hamilton who was working on the Apollo program.
[00:49:32]
And you didn't know that?
[00:49:35]
I mean, the robustness of the systems they had on the Apollo missions were pretty remarkable.
[00:49:46]
And I mean, even like, so when they did the Apollo 11 moon landing, there were some program
[00:49:54]
alarms going off, you know?
[00:49:57]
But because they were putting the systems under some strain that it hadn't been under
[00:50:02]
before, but the amount of drilling they did prepared them for these different scenarios.
[00:50:07]
And they had like every, like basically they take these, you know, lunar module simulators
[00:50:14]
and do all these training scenarios and try to create these edge cases where they have
[00:50:21]
to choose whether to abort the mission or how to handle it and all this stuff.
[00:50:25]
And man, it is, it's impressive the amount of consideration that goes into designing
[00:50:31]
these systems.
[00:50:32]
And so Margaret Hamilton created this term to capture the amount of care that goes into
[00:50:40]
building this software that is really like an engineering discipline in the sense of
[00:50:45]
we need to build robust systems.
[00:50:47]
It's not just winging it.
[00:50:48]
And I mean, really it's incredible that there were these program alarms, 1201, 1202 program
[00:50:55]
alarms going off in the Apollo landing that they had never encountered before in the simulations,
[00:51:01]
but the code worked.
[00:51:02]
The code, you know, the program alarms were saying that they were having to shove off
[00:51:07]
nonessential tasks from the processor because they had very limited processor power, but
[00:51:13]
it worked.
[00:51:14]
Those code paths completely worked.
[00:51:16]
That's how robust it was.
[00:51:17]
So I don't know what we have to learn from that other than, I mean.
[00:51:23]
We stand on the shoulder of giants.
[00:51:26]
Yeah.
[00:51:27]
I think there's something to be said for, you know, what do they call it?
[00:51:31]
You know, that doctor's oath to do no harm.
[00:51:39]
I think first do no harm, like an Asimov rule for robots, or is that doctors?
[00:51:44]
No, no.
[00:51:45]
First do no harm.
[00:51:46]
I think it's part of the Hippocratic oath.
[00:51:49]
Okay.
[00:51:50]
Yeah.
[00:51:51]
I'm not a doctor.
[00:51:52]
Yeah.
[00:51:53]
But we need something like that for engineering.
[00:51:54]
I feel like Elm does equip us with tools for carefully considering use cases.
[00:52:01]
I'm not talking about upfront design here, but I'm talking about really examining holistically
[00:52:08]
the experience you're giving to users, the things that can go wrong when somebody uses
[00:52:12]
your tool, whether it's an end user, a developer, internal colleagues, whatever it might be.
[00:52:20]
I feel like the oath just starts with something like first do not seek escape hatches.
[00:52:26]
Oh wow.
[00:52:29]
You might be onto something here.
[00:52:30]
Yeah.
[00:52:31]
Because that actually like requires you, oh, well I need to understand the thing.
[00:52:35]
I need to read documentation.
[00:52:37]
I need to dig in or something.
[00:52:39]
Yeah.
[00:52:40]
That's: use the tool for what it was intended for.
[00:52:43]
Create not superstitions in your users' minds.
[00:52:47]
And then hackers might think like, yeah.
[00:52:53]
I mean it certainly does feel like, I mean when you look at Lamdera, Evergreen migrations
[00:52:59]
and things like this, it does feel like a different way of thinking about problem solving
[00:53:04]
where you can really think about, am I handling every case in a way that is harder to do in
[00:53:10]
other contexts.
[00:53:11]
Yeah.
[00:53:12]
Yeah.
[00:53:13]
And the fact, for instance, that Elm requires you to handle all the error cases is really nice
[00:53:18]
for making tools, right?
[00:53:20]
Because all that remains is to make it nice, to give the user a nice error message basically.
[00:53:26]
As long as you're in Elm world.
[00:53:28]
Yeah.
[00:53:29]
It's like, I mean user experience and developer experience are kind of one and the same.
[00:53:34]
Like you want to give users a good, predictable, robust experience.
[00:53:39]
Yeah.
[00:53:40]
I mean, I see developer experience as the same, just like my users are developers, right?
[00:53:46]
Yeah, exactly.
[00:53:47]
And they're serving other users and you want to help them serve those users better.
[00:53:52]
Which in turn might also be serving users.
[00:53:54]
That's true.
[00:53:55]
It's users all the way down.
[00:53:58]
Yeah.
[00:53:59]
And maybe somewhere at the bottom there's a developer and then it's a circle or cycle
[00:54:06]
or a graph.
[00:54:08]
Infinite loop.
[00:54:09]
Directed graph.
[00:54:13]
It's a complex system we live in Dillon.
[00:54:14]
It certainly is.
[00:54:16]
There's another case of which I am thinking of and that is when the tool is doing something
[00:54:22]
without telling you or without giving you the information.
[00:54:26]
So the example that I have is for instance, automatic fixes that the linter give you.
[00:54:32]
So if you look at ESLint, ESLint has a fix mode.
[00:54:36]
So you do eslint --fix and that's going to run all the rules and it's going
[00:54:40]
to apply all the automatic fixes that it finds.
[00:54:45]
And then it's going to save that to the file system.
[00:54:47]
And the problem with that is that at the end you don't see what has happened.
[00:54:53]
You don't know what has changed.
[00:54:56]
The only thing you're left with is the files on your system and maybe a git diff if you have
[00:55:00]
git, which you should.
[00:55:03]
You look at the diff and you're like, well, I don't know why this thing changed.
[00:55:11]
You're missing some information like which rules were triggered.
[00:55:15]
How did they change my code?
[00:55:18]
Is that change safe?
[00:55:20]
Is it something that I should worry about?
[00:55:23]
Is it going to change how it behaves?
[00:55:25]
Maybe for the better, but is it going to change how it behaves?
[00:55:28]
And what is the reasoning for the errors that occurred and the reason for this change?
[00:55:34]
That's a lot of information that you basically have to dig for; the only thing you have is a diff.
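As a sketch of what's missing, here's a hypothetical record in Elm bundling the context a user would need to judge an automatic fix. The type and field names are ours, not part of ESLint or elm-review.

```elm
module FixReport exposing (FixReport)

-- A hypothetical shape for what a fix tool could report alongside the diff.
type alias FixReport =
    { ruleName : String -- which rule was triggered
    , message : String -- what the problem was
    , details : List String -- the reasoning behind the change
    , behaviorPreserving : Bool -- is the change safe, or could behavior differ?
    , diff : String -- how the code actually changed
    }
```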
[00:55:41]
That I've always found to be a bit of a problem.
[00:55:45]
And that's why with elm-review, when you run it in fix mode, every time
[00:55:50]
you encounter an error, you get a prompt where you see all the error messages and details,
[00:55:56]
and you can say okay or no to the suggested fix, which you can preview.
[00:56:03]
And like, yeah, if you don't have that information, it's going to be very hard to trust the tool,
[00:56:09]
right?
[00:56:10]
Yeah.
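For a sense of the rule author's side of this, here's a toy elm-review rule, sketched to the best of our reading of the elm-review API, that attaches an automatic fix to an error. The message and details are exactly what shows up in that prompt; the rule itself is invented for illustration.

```elm
module NoNotTrue exposing (rule)

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Fix as Fix
import Review.Rule as Rule exposing (Rule)


-- A toy rule: report `not True` and offer a fix simplifying it to `False`.
rule : Rule
rule =
    Rule.newModuleRuleSchema "NoNotTrue" ()
        |> Rule.withSimpleExpressionVisitor expressionVisitor
        |> Rule.fromModuleRuleSchema


expressionVisitor : Node Expression -> List (Rule.Error {})
expressionVisitor node =
    case Node.value node of
        Expression.Application [ fn, arg ] ->
            case ( Node.value fn, Node.value arg ) of
                ( Expression.FunctionOrValue [] "not", Expression.FunctionOrValue [] "True" ) ->
                    [ Rule.errorWithFix
                        { message = "`not True` is just `False`"
                        , details = [ "Simplifying makes the intent clearer." ]
                        }
                        (Node.range node)
                        -- This fix is what elm-review previews and asks you
                        -- to accept or refuse in fix mode.
                        [ Fix.replaceRangeBy (Node.range node) "False" ]
                    ]

                _ ->
                    []

        _ ->
            []
```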
[00:56:11]
High-stakes situations like that are a very fast way to lose trust.
[00:56:17]
What do you mean by high stakes?
[00:56:19]
Like if you handle a money transaction for a user, and it's not clear that their
[00:56:28]
money is being spent, you will quickly lose trust.
[00:56:31]
If you overwrite users' files when it's not clear to them that they're doing a destructive
[00:56:38]
operation that is going to potentially change or remove files, you will quickly lose their
[00:56:43]
trust.
[00:56:44]
And so in those high-stakes scenarios, you have to be very careful about communicating the
[00:56:50]
intent, letting the user know what's going on, and letting them opt in.
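Here's a minimal sketch of that opt-in pattern, with all names invented: the destructive action first becomes a pending state that spells out what will happen, and only an explicit confirmation carries it out.

```elm
module ConfirmDelete exposing (Model, Msg(..), update)


type Msg
    = DeleteRequested String
    | DeleteConfirmed
    | DeleteCancelled


type alias Model =
    { files : List String
    , pendingDeletion : Maybe String
    }


update : Msg -> Model -> Model
update msg model =
    case msg of
        DeleteRequested file ->
            -- Don't delete yet: record the intent so the UI can show
            -- exactly what is about to happen.
            { model | pendingDeletion = Just file }

        DeleteConfirmed ->
            -- Only an explicit confirmation performs the destructive step.
            case model.pendingDeletion of
                Just file ->
                    { model
                        | files = List.filter ((/=) file) model.files
                        , pendingDeletion = Nothing
                    }

                Nothing ->
                    model

        DeleteCancelled ->
            { model | pendingDeletion = Nothing }
```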
[00:56:55]
Kind of feels like filing your taxes to me.
[00:56:59]
Like you have a bunch of numbers and at the end you have this amount to pay and like,
[00:57:05]
well, I don't know how these all fit together, but that's the way it is.
[00:57:11]
So just like with ESLint or with taxes, either you accept it as it is, like, okay, I guess
[00:57:21]
it's right.
[00:57:22]
Or you've got to complain and call the support center or whatever.
[00:57:28]
And then you might learn that they were actually correct the entire time.
[00:57:34]
But yes, if you don't understand what is going on, like it's not helpful for you.
[00:57:39]
And as a maintainer, you're going to have a lot of complaints as well.
[00:57:42]
Like, Hey, this is not doing what I wanted.
[00:57:45]
And yeah.
[00:57:46]
Yeah.
[00:57:47]
And once you lose that trust, people will never fully trust a tool or a product, you
[00:57:53]
know?
[00:57:54]
It's going to take at least a few months, depending on your tool and how often people
[00:57:59]
use it.
[00:58:00]
But yeah, it's going to take a while for people to know, okay, well it's really been a long
[00:58:03]
time since I've had this problem occur.
[00:58:06]
Yeah.
[00:58:07]
You'll need a rebrand.
[00:58:08]
You'll have to change company names, change your tools name, put up a sign that says under
[00:58:15]
new management.
[00:58:16]
I think in Elm 0.19.0, there were cases where the compiler didn't check some things and
[00:58:24]
you had to remove the elm-stuff folder.
[00:58:27]
It's been a while.
[00:58:28]
Well, yeah.
[00:58:29]
And there were like the map crashes and things like that.
[00:58:31]
Also yeah.
[00:58:32]
So those were annoying, but the fact that it didn't do the proper checks when you needed
[00:58:38]
them was annoying.
[00:58:39]
Yeah.
[00:58:40]
You know, we've talked about the difference between 99% guarantees
[00:58:46]
and 100% guarantees and how different that experience feels working with abstractions
[00:58:52]
that give you those full guarantees.
[00:58:55]
And similarly, like the difference between 99% trust and 100% trust.
[00:58:59]
I mean, obviously we don't 100% trust anything really, but you know what I mean?
[00:59:05]
Like once you lose a little bit of faith in a tool and start to second guess it, you no
[00:59:10]
longer trust it.
[00:59:11]
Whereas if you can fully depend on a tool, it's a
[00:59:17]
totally different experience.
[00:59:18]
And again, having more transparent abstractions, an opinionated layer
[00:59:28]
built off of an existing layer with some light abstractions and strong opinions.
[00:59:36]
That's one way that helps you do that.
[00:59:37]
Having a small, focused tool.
[00:59:41]
If you're going to be more ambitious, then it's harder to earn that trust and it's harder
[00:59:45]
to prove to users that your tool is trustworthy.
[00:59:48]
Yeah.
[00:59:49]
Or you might be trustworthy in one feature, but not trustworthy in a new feature that
[00:59:55]
you're developing.
[00:59:56]
Right.
[00:59:57]
And I feel like there's a bias where, if you know that someone is good at something,
[01:00:02]
is an expert at something, then you're going to assume that they're going to be experts
[01:00:07]
at something else as well, which will often not be the case.
[01:00:12]
Like if I'm starting to make videos about knitting, and if you only know me from Elm,
[01:00:18]
like, oh, he's good at Elm.
[01:00:19]
He must be good at knitting.
[01:00:20]
No, I'm not.
[01:00:24]
That has a name.
[01:00:25]
I don't remember what it is.
[01:00:27]
If I find it, I will add it to the show notes.
[01:00:31]
So okay.
[01:00:32]
Jeroen, let's take a stab at a Hippocratic oath for tooling developers.
[01:00:38]
Okay.
[01:00:39]
So tooling developers, not developers?
[01:00:44]
For developers.
[01:00:45]
For developers.
[01:00:46]
Okay.
[01:00:47]
Then I would like to start with first, do not seek escape hatches.
[01:00:51]
Okay.
[01:00:52]
Yes.
[01:00:53]
Be dependable and let the user know that you're dependable through feedback.
[01:00:57]
I like that.
[01:00:59]
Don't leave the user confused.
[01:01:01]
Wait, did you already prepare this?
[01:01:04]
I am looking at some notes.
[01:01:06]
No, you're cheating.
[01:01:07]
I don't have anything to offer here.
[01:01:13]
Don't make them refresh.
[01:01:14]
Don't make them refresh yet.
[01:01:21]
Eliminate caveats.
[01:01:23]
We already have, oh no, caveats.
[01:01:27]
Make sure your tool doesn't crash.
[01:01:30]
Give a clear mental model.
[01:01:32]
Tell the user what you're going to do.
[01:01:35]
Let the user opt in.
[01:01:36]
Also, don't organize surprise parties.
[01:01:44]
Don't offer invalid ways to configure or use your tool.
[01:01:49]
And then I think we should end with saying use Elm.
[01:01:56]
Elm adoption is going to skyrocket after this tweet goes viral about our Hippocratic oath
[01:02:02]
for developers.
[01:02:07]
That's pretty good.
[01:02:08]
I think that's definitely a tweet.
[01:02:10]
If not a tweet, then a screenshot of some text that we can tweet.
[01:02:14]
In multiple tweets.
[01:02:16]
A tweet thread.
[01:02:19]
One image with one of those lines per tweet.
[01:02:25]
That's how you make...
[01:02:26]
With relaxing background images.
[01:02:30]
Well, if we've missed anything for our Hippocratic developer oath, then tweet it at us.
[01:02:36]
Reply to our tweet.
[01:02:38]
Let us know.
[01:02:39]
And Jeroen...
[01:02:40]
Until next time.
[01:02:41]
Until next time.