
Debugging in Elm

We talk about our favorite debugging techniques, and how to make the most of Elm's guarantees when debugging.
May 10, 2021
#30

Elm Debugging techniques

  • Debug.todo
  • Frame then fill in
  • Annotation let bindings
  • Using nonsense names as a step
  • elm-review rule to check for nonsense names

Hardcoded values vs Debug.todo

  • Todos don't allow you to get feedback by running your code
  • TDD
  • Fake it till you make it
  • Simplest thing that could possibly work
  • Joël Quenneville's article Classical Reasoning and Debugging
  • Debugging is like pruning a tree

Breaks

  • Take a walk. Step away from the keyboard when you're grinding on a problem
  • sscce.org (Short, Self Contained, Correct (Compilable), Example)
  • Create a smaller reproduction of the problem
  • Reduce the variables, you reduce the noise and get more useful feedback
  • Reasoning by analogy from Joël's post
  • Elm Debug.log browser extension
  • node --inspect
  • elm-test-rs

Debug.log in unit tests

Transcript

[00:00:00]
Hello, Jeroen.
[00:00:01]
Hello, Dillon.
[00:00:02]
And, well, what are we talking about today?
[00:00:05]
Today we're talking about debugging apps in Elm.
[00:00:09]
Debugging makes me think of rubber ducks.
[00:00:12]
Is it just me?
[00:00:13]
Uh, no.
[00:00:14]
I was wondering whether it was a pun on debugging in Elm or something.
[00:00:20]
Well, ducks might eat bugs, but I don't know their diets too well.
[00:00:25]
But I do know that rubber ducking is an incredibly useful debugging technique.
[00:00:29]
Yeah.
[00:00:30]
I usually use a plush toy.
[00:00:32]
Oh, that's good.
[00:00:34]
But because I don't have rubber ducks, but I do have a few plushes.
[00:00:39]
That comes in handy.
[00:00:40]
Yeah, like the nice thing with plushes is that even when you can't figure the problem
[00:00:44]
out with the rubber duck, you can at least hug it, and then it makes you feel better.
[00:00:49]
So you can't go wrong.
[00:00:51]
It's a win win.
[00:00:52]
Yeah, exactly.
[00:00:53]
I mean, a cat seems like it would be the ideal debugging partner.
[00:00:56]
If it doesn't leave.
[00:00:57]
If it doesn't walk away.
[00:00:58]
That would just be sad.
[00:01:01]
Yeah.
[00:01:03]
So basically, you don't need debugging in Elm because if you write it, it works as long
[00:01:09]
as it compiles, right?
[00:01:10]
Yeah.
[00:01:11]
I mean, debugging means bugs, right?
[00:01:12]
Yeah, it means bugs.
[00:01:14]
So as long as you get it to compile.
[00:01:16]
So we should really just talk about how to make an Elm application compile, I think.
[00:01:19]
Yeah, exactly.
[00:01:20]
Yeah, this is going to be a very short episode.
[00:01:22]
So we might as well not do it.
[00:01:25]
And so see you later, Dillon.
[00:01:28]
You know, I do think that there is, I don't know if it falls under the same category,
[00:01:34]
but I do find that there's a certain art to being able to solve type puzzles in Elm, almost
[00:01:41]
like debugging why your types aren't lining up.
[00:01:44]
And I'm not sure if like, we should squeeze that into this episode, or if it merits an
[00:01:48]
episode of its own.
[00:01:50]
It might be a big enough topic to have its own episode.
[00:01:53]
But that's an interesting process in Elm as well.
[00:01:57]
Absolutely.
[00:01:58]
Well let's start with the more vanilla standard debugging.
[00:02:03]
So when I think about debugging, a word that comes to mind for me is assumptions.
[00:02:11]
And when I say assumptions, what assumptions can I make?
[00:02:14]
What assumptions am I making that might be preventing me from seeing what's going on?
[00:02:20]
Like one of the most common things that happens to me when I'm banging my head against a wall
[00:02:25]
debugging for a long time is I'm making an assumption and I'm not considering that area.
[00:02:33]
You know, it's like if you're looking for your keys: the first
[00:02:38]
place you look, you're like, I always keep them in this drawer.
[00:02:42]
And you look there and you're like, oh, it's not there.
[00:02:44]
And then you go looking all throughout the house, you pull up all the cushions on the
[00:02:47]
couch, you look all over the place.
[00:02:49]
And then you're like, you know what?
[00:02:50]
I assumed that I was done looking in that drawer, but did I fully search that drawer?
[00:02:56]
So you've created this assumption, which has blinded you to one of the obvious things,
[00:03:01]
which is you check that box.
[00:03:03]
Obviously it's not in that drawer.
[00:03:04]
Then you go back and you look in the drawer after searching all over the house and there
[00:03:08]
it is.
[00:03:09]
You move over some stuff in your drawer and you see your keys.
[00:03:11]
And that happens with debugging all the time.
[00:03:13]
Like you're making that one assumption.
[00:03:15]
So I think that being, just thinking about it as assumptions, I think is really helpful
[00:03:20]
because you can then examine your assumptions and at least put them out there.
[00:03:24]
And I think that's why rubber ducking is very helpful because like if, if a person or a
[00:03:30]
rubber duck walks up to you and says, Hey, what's going on?
[00:03:34]
And you have to explain the problem to them, then it forces you to enumerate your assumptions
[00:03:39]
because then you say, well, I know it's not in the drawer because I checked there and
[00:03:44]
you're like, wait a minute, but did I fully check there?
[00:03:47]
So, you know, the equivalent in code would be, I know that this data is getting initialized
[00:03:52]
correctly because I looked at that code and it gets initialized with the right state.
[00:03:56]
Wait a minute, but does it get initialized in that right state for this code path?
[00:04:00]
Yeah.
[00:04:01]
Yeah.
[00:04:02]
That makes me think of the confirmation bias.
[00:04:05]
Yes.
[00:04:06]
There's a video by Veritasium on YouTube, which I found to be pretty compelling where
[00:04:11]
he basically goes to people on the street and asks, I have a rule, I'm going to give
[00:04:17]
you three numbers and you need to find the rule.
[00:04:20]
So here are my numbers one, two, four, try to guess the rule.
[00:04:25]
And people say, well, you double it every time.
[00:04:27]
And that is not the rule, but you can now guess other numbers and I will tell you whether
[00:04:32]
they match the rule and you can then try to find other rules to see whether they're matching.
[00:04:39]
And what seems to happen a lot is that people try to confirm what they were assuming before.
[00:04:46]
So one, two, four.
[00:04:47]
So you're doubling the numbers every time.
[00:04:49]
No, that's not the rule.
[00:04:51]
Okay.
[00:04:52]
Two, four, eight.
[00:04:53]
Yes, that fits the rule, but that is not doubling the number.
[00:04:56]
16, 32, 64.
[00:04:58]
You're trying again and again to validate your initial assumption, but that is not what
[00:05:04]
you should do.
[00:05:05]
What you should do is try to ponder, is my assumption correct by saying the opposite
[00:05:12]
or something else?
[00:05:13]
So does one, two, three fit?
[00:05:15]
And in this case, yes, it did.
[00:05:18]
Which is not doubling the number.
[00:05:19]
Yeah.
[00:05:20]
That's a great example.
[00:05:21]
I love that idea of instead of trying to confirm your assumptions, trying to completely deviate
[00:05:29]
from your assumptions.
[00:05:30]
And I remember the first ever computer science lecture in my first ever computer science
[00:05:38]
class and I will never forget it.
[00:05:41]
And my teacher said, all right, here's a setup.
[00:05:45]
You're trying to guess a number between one and 100 and you have however many guesses
[00:05:52]
and you guess a number and I'll tell you if you're too high, too low or guessed correctly.
[00:05:57]
So what's your first guess?
[00:05:58]
And then he was having someone in the audience like make guesses.
[00:06:02]
And this one student was like 17, 23.
[00:06:06]
He just wanted to get the right number.
[00:06:10]
And he was explaining like, well, what happens if you guess 17 and I say that's too low,
[00:06:17]
what do you now know?
[00:06:18]
And what information does that illuminate and eliminate?
[00:06:22]
And so what would be the answer that would give you the most information?
[00:06:26]
And as people who've learned about binary searching.
[00:06:31]
Git bisect.
[00:06:32]
You do a bisect, you try to eliminate as much information.
[00:06:35]
So you split it in the middle.
[00:06:36]
That gives you the most information in that situation.
[00:06:39]
And that is a very real thing that happens in debugging is like if you poke at something,
[00:06:45]
what will tell you the most information?
[00:06:47]
That is a really important skill.
[00:06:49]
When you said it before about trying to make the compiler happy or to fix the compiler errors,
[00:06:55]
I kind of do that same technique with trying to figure out where there's a compiler error.
[00:07:00]
So if I have a list and one of them is of a different type, but I can't seem to figure
[00:07:05]
out which one because I'm not looking at the type error sufficiently or because it's not
[00:07:10]
clear, I tend to remove half of them and see whether problematic ones were in that half.
[00:07:18]
And then do a bisect in that way.
[00:07:21]
So we're getting into it.
[00:07:23]
So here we go.
[00:07:24]
Let's do it.
[00:07:25]
Okay.
[00:07:26]
So debugging compiler errors, debugging types that don't line up.
[00:07:31]
Some of the things that I really like to use for figuring out how to solve a type puzzle
[00:07:37]
are... well, Debug.todo is a very useful tool.
[00:07:41]
So we're talking about Debug.todo before Debug.log.
[00:07:44]
Okay.
[00:07:45]
Sure.
[00:07:46]
Debug.todo is, you know, I mean, it's the equivalent of like the TypeScript any type,
[00:07:54]
except that you can't get it into production code because you can't run elm make
[00:07:58]
--optimize with Debug.todo.
[00:08:01]
So the thing is, it is an extremely helpful tool for testing assumptions.
[00:08:05]
And the basic idea is you're using it to ask a question.
[00:08:10]
You're saying you can say, would any type satisfy the compiler here?
[00:08:16]
And what that allows you to do is that sort of binary searching that you're talking about
[00:08:19]
where you just say like, ignore all of this other stuff.
[00:08:22]
But if I did, you know, pipe this to List.map this function, Maybe.withDefault
[00:08:29]
this value, etc, etc.
[00:08:31]
If I started with a Debug.todo at this one point, would everything else line up?
[00:08:35]
Is there a possible path where that would line up?
[00:08:39]
Or in other words, if it doesn't, that tells you that the compiler problem or the type
[00:08:44]
problem is later down the line.
[00:08:46]
Exactly, exactly.
[00:08:48]
You've bisected it into two categories.
[00:08:51]
The problem either exists within the thing you've replaced with
[00:08:56]
a Debug.todo, there was a fault there, or everything that comes after is inconsistent.
[00:09:03]
Because if you put a Debug.todo in the thing that starts a pipeline that goes
[00:09:08]
through List.map some function, Maybe.withDefault, and it doesn't compile, then you know, okay, whether
[00:09:14]
or not I get the value I'm looking for in the spot where I have the Debug.todo, everything
[00:09:19]
else is not consistent, because there's no value that would satisfy the compiler as that
[00:09:24]
starting point.
[00:09:25]
But if it compiles, then you know, okay, I know that there exists a value that would
[00:09:30]
satisfy the compiler here.
[00:09:32]
So the remaining code appears to be consistent, at least.
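To make that concrete, here is a rough sketch of the kind of thing being described (all names and the pipeline itself are hypothetical): start a pipeline with Debug.todo and ask the compiler whether any value could work there.

    -- If this compiles, everything downstream of the Debug.todo is consistent
    -- with *some* starting value; if it doesn't, the type problem is further
    -- down the pipeline.
    shoutedNames : List String
    shoutedNames =
        Debug.todo "would any starting value work here?"
            |> Maybe.withDefault []
            |> List.map String.toUpper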
[00:09:35]
So what kind of value could I put here?
[00:09:38]
And at that point, I really like to use a let with a type annotation, and just extract
[00:09:44]
things.
[00:09:45]
I wrote a blog post about this that I sent out on my mailing list, but I should really
[00:09:50]
publish it on my public facing blog.
[00:09:52]
I've been publishing them both on my public blog and to my mailing list lately, but my
[00:09:57]
earlier ones, I didn't maybe I'll go dig that up and publish it.
[00:10:01]
But I wrote about a technique that I called "frame then fill in", kind of like basically
[00:10:06]
talking about this technique of like, if you're solving a puzzle, you start with the easy
[00:10:12]
things that you can figure out.
[00:10:13]
So you start around the edge of the puzzle: if you can find the edge pieces and the corners, there
[00:10:18]
are four corner pieces.
[00:10:20]
So if you can find the four corner pieces, now, you have some easy assumptions that you
[00:10:25]
can start with, and then you can start fitting things into that.
[00:10:28]
But using like an annotated let binding, and for people who may not know, you can
[00:10:35]
create a let, you know, variable in Elm, and you can put a type annotation above it just
[00:10:40]
like in a top level value,
[00:10:42]
which you should do, in my opinion.
[00:10:44]
But yeah, absolutely.
[00:10:45]
I mean, I think a lot of people just don't realize that you can do that.
[00:10:48]
Yeah, I've noticed that too, as well.
[00:10:50]
And if you're using IntelliJ Elm, it can often help you out: you just hit like
[00:10:55]
Option+Enter and say add type annotation.
[00:10:58]
And often it gives you a pretty helpful annotation there.
[00:11:01]
Yeah, it works generally.
[00:11:03]
Yeah, it's pretty good.
[00:11:04]
So that's a really helpful technique.
[00:11:06]
Also I think people may not know, it took me a while to discover this, that the type
[00:11:14]
variables you have in your top level type annotation.
[00:11:17]
So if you have something that says this is a list of a, if you use a lowercase a, if
[00:11:25]
you say this top level value, my list is a list of a, and then you do a let binding and
[00:11:31]
you annotate something as a lowercase a, that is now the same a as the top level annotation
[00:11:38]
had.
[00:11:39]
So it's sort of bound in that same context.
[00:11:41]
So that can come in handy.
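A small sketch of that scoping behavior (the function itself is hypothetical): the lowercase a in the let annotation refers to the same a as the top-level annotation.

    firstOrDefault : a -> List a -> a
    firstOrDefault default list =
        let
            -- this `a` is the same `a` as in the top-level annotation above
            fallback : a
            fallback =
                default
        in
        List.head list
            |> Maybe.withDefault fallback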
[00:11:43]
But so frame then fill in, basically the idea is you keep locking in these things that you
[00:11:49]
know with type information.
[00:11:51]
You put like a type annotation, you're like, okay, I know this needs to be this type.
[00:11:55]
This is consistent if I do this, I put a Debug.todo here and things line up and they're consistent.
[00:12:00]
Now I just need to get a value of this type.
[00:12:03]
And then you start extracting sub problems and saying, well, if I had a value of this
[00:12:08]
type and a value of this type and a function with this annotation, wouldn't that be helpful?
[00:12:13]
Write those annotations with Debug.todos, and then you start to fill in those sub problems.
[00:12:17]
So it's almost like taking a puzzle, getting the corner pieces, but then you like get a
[00:12:23]
sub puzzle out of that and you can get new corner pieces by getting those new sort of
[00:12:27]
core assumptions that you need to work towards.
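A rough sketch of "frame then fill in" with annotated let bindings and short-lived Debug.todos as placeholders (everything here is hypothetical):

    import Html exposing (Html)

    viewPage : List String -> Html msg
    viewPage items =
        let
            -- the "frame": lock in the types you know you need
            header : Html msg
            header =
                Debug.todo "render the page header"

            rows : List (Html msg)
            rows =
                Debug.todo "render one row per item"
        in
        Html.div [] (header :: rows)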
[00:12:30]
Yeah.
[00:12:31]
Debug.todo is a really easy way to hard code values, right?
[00:12:35]
Because if you say this function just needs to take two arguments and returns a string,
[00:12:42]
you can just say that this function returns an empty string, but the Debug.todo will
[00:12:48]
at least give you, one, a reminder to remove it.
[00:12:53]
And two, it works for more complex values because strings are easy to create, but if
[00:12:58]
you need to create a user, which could be impossible to create because of opaque types
[00:13:02]
or stuff like that, Debug.todo is really helpful in that manner.
[00:13:07]
That's a very good point.
[00:13:08]
Also like, so yeah, that's a good distinction between like Debug.todo and hard coding values,
[00:13:14]
because they can play a similar role and serve a similar function.
[00:13:19]
But one thing that hard coding can never do is say, I don't know what the type is here,
[00:13:26]
but is there any type that would be consistent and work for all these things?
[00:13:31]
So if you do like, if you call like String.toUpper on something, and then you call like
[00:13:37]
times two on something, you can put Debug.todo as the value you're applying those transformations
[00:13:44]
to, and it's not internally consistent because there is no value that can both be uppercased
[00:13:51]
as a string and multiplied by two.
[00:13:54]
So it's not consistent.
[00:13:55]
But Debug.todo can answer the question: are there any values where these things are consistent?
[00:14:01]
So if you don't know what the type is going to be, Debug.todo can be helpful.
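A tiny sketch of that check: this intentionally fails to compile, and that failure is the answer Debug.todo gives you.

    -- This deliberately does NOT compile: String.toUpper produces a String,
    -- and (*) 2 needs a number, so no starting value can make the pipeline
    -- consistent.
    broken : Int
    broken =
        Debug.todo "is any starting value consistent here?"
            |> String.toUpper
            |> (*) 2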
[00:14:06]
If you do know what the type is going to be, then sometimes hard coding is the way to go.
[00:14:10]
But as you say, it doesn't necessarily give you a reminder to remove it.
[00:14:15]
So sometimes it is helpful to like have kind of standard nonsense names that you use.
[00:14:20]
Yeah.
[00:14:21]
Yeah.
[00:14:22]
So you can find them.
[00:14:23]
You can use elm-review to find your standard nonsense terms of choice in your code
[00:14:30]
base.
[00:14:31]
Yep.
[00:14:32]
You can totally create a rule that says, hey, any function named
[00:14:37]
replaceMe_123.
[00:14:38]
I often call things like thingy, I'll call it thingy or something.
[00:14:44]
And
[00:14:45]
I tend to go with foobar.
[00:14:47]
And that's good too.
[00:14:49]
Some people don't like it.
[00:14:50]
But for me, at least it's a reminder that I need to rename it.
[00:14:53]
I think it's very valuable to have like some go to terms that you can quickly scan and
[00:14:59]
be like, oh, I'm not done with this.
[00:15:01]
I need to come back and fix something.
[00:15:04]
We did a workshop together with the Duen and Falco.
[00:15:08]
Yes.
[00:15:09]
Someone else.
[00:15:10]
And we chose the name nonsense as a nonsense name.
[00:15:15]
Yeah.
[00:15:16]
It's a good nonsense name.
[00:15:17]
Yeah.
[00:15:18]
Yeah.
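As a very rough sketch of such a rule (hedged: the exact elm-review API details may differ between versions, and the rule name and message are made up), it might look something like this:

    module NoNonsenseNames exposing (rule)

    import Elm.Syntax.Declaration as Declaration exposing (Declaration)
    import Elm.Syntax.Node as Node exposing (Node)
    import Review.Rule as Rule exposing (Error, Rule)


    -- Flags any top-level function whose name contains "nonsense",
    -- so leftover placeholder names can't sneak into a commit.
    rule : Rule
    rule =
        Rule.newModuleRuleSchema "NoNonsenseNames" ()
            |> Rule.withSimpleDeclarationVisitor declarationVisitor
            |> Rule.fromModuleRuleSchema


    declarationVisitor : Node Declaration -> List (Error {})
    declarationVisitor node =
        case Node.value node of
            Declaration.FunctionDeclaration function ->
                let
                    nameNode : Node String
                    nameNode =
                        function.declaration |> Node.value |> .name
                in
                if String.contains "nonsense" (String.toLower (Node.value nameNode)) then
                    [ Rule.error
                        { message = "Leftover nonsense name"
                        , details = [ "Rename this before shipping it." ]
                        }
                        (Node.range nameNode)
                    ]

                else
                    []

            _ ->
                []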
[00:15:19]
And one thing to keep in mind is that with hard coded values, you can at least run your
[00:15:24]
application; if you have a Debug.todo, it will crash.
[00:15:27]
Exactly.
[00:15:28]
Exactly.
[00:15:29]
And that's actually why I actually tend to prefer hard coded values when possible to
[00:15:34]
Debug.todos.
[00:15:36]
As a rule of thumb, I like to have Debug.todos be extremely short lived to just help
[00:15:42]
me answer those questions and binary search my compiler errors.
[00:15:47]
But then as quickly as possible, I want to replace it with hard coded or real values.
[00:15:53]
So like one example of using like a hard coded value.
[00:15:56]
This is a debugging technique that I find really helpful.
[00:15:59]
Sometimes like recently, I was debugging something where there was like, I was tracing the problem
[00:16:08]
back.
[00:16:09]
There was like, I mean, it was pretty obvious that there was like a dictionary lookup that
[00:16:13]
was a miss.
[00:16:14]
It was not getting a value back.
[00:16:16]
So you know, that was pretty easy to tell because it's like, couldn't find this thing.
[00:16:20]
You look at the page and that's what you see, right?
[00:16:23]
At least that was your assumption.
[00:16:25]
Well, that's true.
[00:16:27]
But that did seem like a pretty safe assumption that like, it didn't find this thing and you
[00:16:32]
trace it back here and it's like, it gets that value from a dictionary lookup.
[00:16:36]
So then you test that assumption, right?
[00:16:38]
And at that point, I actually draw a lot.
[00:16:42]
It's actually kind of hard for me to separate my test driven development sort of toolkit
[00:16:49]
and ideas from my debugging toolkit because really there's so much overlap.
[00:16:55]
It's almost like the same process for me.
[00:16:57]
But one of the things I'm trying to do is to get feedback as quickly as possible.
[00:17:03]
And hard coding is a very useful technique there, right?
[00:17:05]
So like with test driven development, we talked about this in our test driven development
[00:17:08]
episode, you can fake it till you make it.
[00:17:12]
You pass in a hard coded value and you get the test green in the quickest, dirtiest way
[00:17:17]
you know how.
[00:17:19]
And then you go and remove the hard coding.
[00:17:20]
Well, similarly, like you've got this bug, you've got your assumption is a dictionary
[00:17:25]
lookup is failing.
[00:17:26]
You can test that assumption by like for a specific case, try to initialize that dictionary
[00:17:34]
with those specific values.
[00:17:36]
Or actually, if you want to like if you want to increase your odds of success, start closer
[00:17:41]
to the actual point of failure, because you don't know what transformation something has
[00:17:47]
gone through.
[00:17:48]
So you basically want to get feedback about your assumption as quickly as possible.
[00:17:53]
If you if you say at the end on this screen, a value is missing, I know that therefore
[00:18:00]
it must not be getting retrieved from this dictionary.
[00:18:04]
That's like another assumption, therefore, it must not be getting inserted into this
[00:18:08]
dictionary.
[00:18:09]
You actually don't know that.
[00:18:11]
That's like a bigger leap, because what happens to that dictionary through the course of its
[00:18:15]
lifetime?
[00:18:16]
It gets initialized to something get removed at some point, does the dictionary get swapped
[00:18:21]
out in your model where there's an update and the value gets replaced or re initialized?
[00:18:27]
You don't know.
[00:18:28]
So you want to test your assumption and get that feedback as soon as possible.
[00:18:32]
That's your your first goal.
[00:18:33]
Yeah, what I would do here is to replace the Dict.get or Dict.member call with a Just value
[00:18:43]
that makes sense.
[00:18:45]
And see if I can reproduce the error.
[00:18:48]
That's a great one.
[00:18:49]
Yes, right.
[00:18:50]
That would be a great way to test that assumption.
[00:18:53]
If you do that, and you still reproduce the error, then this is not the problem.
[00:18:59]
If it is, then at least your assumption is kind of validated.
[00:19:04]
Yes, right.
[00:19:05]
And just to emphasize, this is not code that's going to get committed.
[00:19:11]
This is temporary code to test your assumption.
[00:19:14]
So yeah, I think that's a great first step.
[00:19:16]
You put a just value instead of the value coming from the dictionary lookup.
[00:19:21]
Does the thing you want show up on the screen?
[00:19:23]
Yes.
[00:19:24]
Okay, good.
[00:19:25]
Then wind it back a step further.
[00:19:27]
Revert that thing where you're taking the dictionary lookup value and putting in a hard
[00:19:31]
coded Just value, and instead put a hard coded dictionary.
[00:19:36]
So what if I had a hard coded dictionary and it had this value with this key?
[00:19:40]
Would it be able to look that up?
[00:19:42]
And if the answer is yes, it was able to look that up and get the correct value, then...
[00:19:47]
And again, if you're using Debug.todos here, it would tell you if the types work, but it
[00:19:52]
wouldn't allow you to execute it and get that feedback, which is important.
[00:19:56]
It also wouldn't allow you to run your tests, which is also important.
[00:19:59]
So then you work backwards from there, and then finally, like, okay, well,
[00:20:06]
at the point where the dictionary is initialized, what if, instead of that hard coded dictionary
[00:20:10]
I had right at the point of failure, where the dictionary lookup happens,
[00:20:14]
what if I use that hard coded dictionary when I initialize the dictionary?
[00:20:18]
Now, you know, if your code now works, you've now shown that the problem is
[00:20:24]
in the initialization of the dictionary.
[00:20:27]
So if you fix the dictionary initialization, your code will work.
[00:20:31]
So now you've narrowed down exactly where the problem is.
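A rough sketch of that progression (the types and names here are hypothetical): each step moves the hard-coded value one step closer to where the data actually comes from.

    import Dict exposing (Dict)

    type alias User =
        { name : String }

    -- Step 1: bypass the lookup entirely. Does the right thing show up on screen?
    lookupUser : String -> Dict String User -> Maybe User
    lookupUser userId users =
        -- Dict.get userId users
        Just { name = "Hard-coded test user" }

    -- Step 2: keep the lookup, but hard-code the dictionary it reads from.
    -- If this works, the lookup itself is fine and the problem is upstream,
    -- in how the real dictionary gets initialized.
    lookupUser2 : String -> Dict String User -> Maybe User
    lookupUser2 userId _ =
        Dict.get userId (Dict.fromList [ ( userId, { name = "Hard-coded test user" } ) ])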
[00:20:34]
You may have created new issues, but that's a different issue.
[00:20:40]
But at least you have tests for that, right?
[00:20:42]
Well, yeah, exactly.
[00:20:43]
And tests are.
[00:20:44]
Yeah, tests are important.
[00:20:46]
Like if I when I'm working through a bug, I can't emphasize how much like I essentially
[00:20:52]
like I've got my full dopamine hit for fixing that issue.
[00:20:56]
When I get my failing test case, if I reproduce it in a failing unit test, then I've got my
[00:21:01]
dopamine ready to go.
[00:21:03]
I understand that.
[00:21:04]
Yep.
[00:21:05]
Because that's the hard part.
[00:21:07]
If you've got that now you can iterate and test things very rapidly.
[00:21:10]
And sure, it might still be a tricky problem.
[00:21:12]
But tests are not a, you know, a magic bullet.
[00:21:17]
The way that you're writing your tests matters a lot.
[00:21:20]
And if you've, you know, not all tests are created equal.
[00:21:23]
You might have a test that really like is exercising a unit that's not like a clear,
[00:21:29]
well defined unit.
[00:21:30]
And at that point, so it really...
[00:21:33]
You mean if it's integrating multiple pieces?
[00:21:36]
Yes.
[00:21:37]
If it's sort of like an integration test, then it becomes really difficult to like find
[00:21:42]
the precise failure that you're dealing with and fix that cleanly and get that feedback
[00:21:48]
because you're not getting this like very precise feedback about where a problem is
[00:21:52]
happening.
[00:21:53]
So a good unit test suite and getting that, reproducing that failure in a test is incredible.
[00:21:58]
For all the reasons that we described about testing, like if the problem ever arises again,
[00:22:04]
then your test will fail.
[00:22:07]
And yeah, as you said, you get a dopamine rush when you write the test because now you
[00:22:13]
have something that says red, this is not working, which is kind of what you're expecting.
[00:22:17]
And when you fix it, you get a big green, which is the dopamine rush of now it's working
[00:22:24]
as expected.
[00:22:25]
Yes.
[00:22:26]
And I mean, I think that's why I can't separate in my mind my like debugging toolkit from
[00:22:31]
my test driven development toolkit, because really it's all about feedback.
[00:22:35]
So like what we're talking about with like this technique of hard coding with putting
[00:22:40]
a Just for the dictionary lookup value just to be like, does this do the right thing?
[00:22:45]
Right?
[00:22:46]
So it's like writing a failing test and getting it green as quick and dirty as you can.
[00:22:52]
The reason you do that is just to get that feedback so that you're not wasting time building
[00:22:58]
on top of faulty assumptions.
[00:23:00]
So it's all about feedback.
[00:23:02]
The better your feedback loops are, the more quickly you will be able to test your assumptions
[00:23:08]
and debugging is all about testing assumptions.
[00:23:10]
So let's go back to the dict problem that we had before.
[00:23:15]
Would you really start with inlining Just something, or would you test your assumption
[00:23:25]
by adding a Debug.log to start with?
[00:23:25]
That's what I tend to do just maybe because I'm too much of a rookie or something, but
[00:23:31]
I do use debug log a lot.
[00:23:35]
That's a really interesting question.
[00:23:37]
I use debug a lot as well.
[00:23:39]
I don't think that there's anything inherently better or worse about the technique of hard
[00:23:46]
coding values to get feedback versus debugging.
[00:23:48]
I mean, really, they're both forms of feedback and sometimes a debug log, if there's like
[00:23:58]
an intermediary value and you want to figure out, I think I would tend to start with just,
[00:24:05]
well, if you have no idea what's going on and it could be coming from a lot of places,
[00:24:10]
then yeah, I mean, pulling up, like doing a debug.log or pulling up the Elm debugger,
[00:24:15]
like that's a great place to start because the cost of doing that is so low.
[00:24:19]
So you can just be like, this thing isn't showing up on the screen.
[00:24:23]
Why?
[00:24:24]
Well, what does the model look like and what do I expect it to look like?
[00:24:28]
And maybe like if I have a known point of code before, if I git stash or go back to
[00:24:34]
a previous commit, what did it look like then?
[00:24:37]
Those are really interesting questions and I don't think that there's inherently like
[00:24:42]
a better option between those two.
[00:24:44]
I think they're both great ways to get feedback.
[00:24:47]
Yeah.
[00:24:48]
I think debug.log is much easier to set up because like if you take the example of the
[00:24:53]
dictionary again, it is much easier to inspect whether the dictionary was empty or whether
[00:24:58]
you got something from it rather than constructing a value, which might be very complex or impossible
[00:25:05]
to create.
[00:25:06]
Yes, that's right.
[00:25:07]
I think that's a good way to put it.
[00:25:09]
And I mean, there is a real advantage to like the hard coding in that you can actually prove
[00:25:16]
that it works by seeing it working.
[00:25:18]
You can actually say like, I had this value here, it would work.
[00:25:22]
So that's extremely valuable to just know that, I mean, basically it's like, it's all
[00:25:27]
about choosing where to spend your time.
[00:25:30]
And if you can, there was a, Joël had an article on the Thoughtbot blog recently about
[00:25:37]
classical reasoning and debugging.
[00:25:39]
It's a really nice post and he talks about this analogy of looking at debugging as pruning
[00:25:45]
a tree.
[00:25:46]
I think that's a really nice way to look at it.
[00:25:48]
You're basically, you know, because of the nature of trees, the number of paths, you
[00:25:53]
know, grows exponentially.
[00:25:55]
And so if you can cut off one of those branches and stop examining it, prune that tree, then
[00:26:01]
the gains are huge.
[00:26:02]
So that's really your goal.
[00:26:04]
And if you can, if you can debug.log, it helps you figure out which branches to examine or
[00:26:12]
where to focus your efforts.
[00:26:14]
If you can put a hard coded value and prove that it works, if I do this thing, because
[00:26:20]
I can actually run it and see it working with this hard coded thing.
[00:26:24]
Now you've actually pruned a branch.
[00:26:26]
So I would say that those are, just be aware of the purpose that those two different techniques
[00:26:31]
serve.
[00:26:32]
Yeah.
[00:26:33]
Kind of like with the guess a number from one to a hundred.
[00:26:36]
Like if you say just 23 and then it happens to be the right number.
[00:26:40]
Great.
[00:26:41]
But if it isn't, then good luck.
[00:26:45]
Right.
[00:26:46]
And a debug.log can definitely help give you a clue because at a certain point, I mean,
[00:26:52]
kind of like your Veritasium example, like you want, you need to get your bearings of
[00:26:56]
like, where do I even start looking and just printing out a bunch of information or inspecting
[00:27:02]
the model is a good place to sort of pick where to start.
[00:27:06]
If you need to pick where to start, then just sort of like looking at a bunch of information
[00:27:11]
and seeing if you notice anything interesting, can sort of get your brain thinking a certain
[00:27:16]
direction.
[00:27:17]
It may also get you thinking with some tunnel vision and getting some confirmation bias.
[00:27:23]
So you, you, you know, that's why you want to validate your assumptions.
[00:27:26]
Once you sort of get a place to start and make sure you are explicit about your assumptions,
[00:27:31]
lay them out, tell a rubber duck or a plush toy about them, or a cat. Taking a walk...
[00:27:38]
Okay.
[00:27:39]
Let's talk about breaks.
[00:27:40]
Cause that's, that's a really good one.
[00:27:41]
I think breaks are definitely underrated as a debugging technique.
[00:27:47]
And walking in particular is really good for like getting your mind off of like just grinding
[00:27:52]
on a problem, which if you are, um, if you are grinding on a problem, then be aware that
[00:27:59]
you, you won't want to tear yourself away from the code because you're like, I want
[00:28:04]
to fix the problem.
[00:28:05]
Your brain is in that mode.
[00:28:06]
And often that's exactly the time when you should get away from the code, get away from
[00:28:10]
the keyboard and take a break because you're going further down that tunnel vision where
[00:28:16]
you're like, I just want to see this thing through.
[00:28:19]
You're burning yourself out and just step away from the keyboard for a little bit.
[00:28:23]
I'm thinking of cases where I wouldn't want to move away from it because it is very complex
[00:28:28]
and like, especially if you're dealing with, um, a lot of mutating code where things happen
[00:28:34]
and impact other code and you have to create a mental model of everything, which is very
[00:28:41]
complex and you want to stay, um, in that state.
[00:28:45]
But that also probably means that you have a problem, that your system is too complex.
[00:28:50]
Not that you need to resolve it now.
[00:28:53]
You haven't created a small enough system to reason about.
[00:28:58]
So we often talk about SSCCEs in Elm, which, ah, what does it mean again?
[00:29:06]
Short?
[00:29:07]
Isn't there like a website that you can point people to?
[00:29:09]
Yeah.
[00:29:10]
Yeah.
[00:29:11]
SSCCE.org: Short, Self Contained, Correct Example.
[00:29:15]
Yeah.
[00:29:16]
Did you also learn that word from Elm?
[00:29:18]
Yes.
[00:29:19]
Okay.
[00:29:20]
Um, so if you can create a smaller example, maybe not minimal, but at least small and
[00:29:28]
well, that makes it much easier to know where the problem is exactly.
[00:29:33]
And often by doing this, you have found where the problem is because you are kind of pruning
[00:29:39]
that tree while saying, oh, well this information that actually doesn't impact the, this code
[00:29:45]
doesn't impact the results.
[00:29:46]
I'm still getting the bug.
[00:29:48]
Oh, but when I change this, then bug is resolved.
[00:29:51]
So this is part of the problem.
[00:29:53]
Right, right.
[00:29:54]
So yeah, absolutely.
[00:29:56]
Because it's really, I mean, again, I keep coming back to it cause I think this is just
[00:30:00]
like such a fundamental thing about programming that it applies universally, but it's about
[00:30:07]
feedback.
[00:30:08]
And when you reduce down the problem to a simplified version of the problem, it allows
[00:30:14]
you to, you're reducing the variables.
[00:30:16]
And that means when you get feedback, it there's less noise to that feedback.
[00:30:21]
So you're getting better, faster feedback.
[00:30:24]
I'm thinking, if you managed to reduce it to an SSCCE, you fix it and then you still have
[00:30:30]
the problem.
[00:30:31]
Then you have actually found two bugs.
[00:30:34]
Yeah, Joël also talks about, he refers to this as reasoning by analogy in his blog post
[00:30:41]
here, which is, I like the way that he breaks down, you know, these different reasoning
[00:30:48]
techniques and how they applied in the context of coding.
[00:30:51]
And so with like debug logging and using like the interactive Elm debugger, like are there
[00:30:57]
any specific techniques that you find helpful there?
[00:31:01]
Not specifically.
[00:31:02]
I would attempt to try to pinpoint where I need to put my debug log and try to get the
[00:31:07]
most precise information because otherwise I get that big block of text.
[00:31:12]
Like if I want to know whether a Dict.get is a hit or a miss, I Debug.log that instead
[00:31:19]
of the key and the dict.
[00:31:22]
But when I know that I actually need to look at those, then I do a debug log on both.
[00:31:27]
And actually when you have a big debug log block of text, there's one extension that
[00:31:34]
is very useful that was made by Tomáš Látal, also known as kraklin, which I probably pronounce
[00:31:40]
better, which is an extension that you can add to basically any browser, which when you
[00:31:46]
turn it on, you can have a nicer view of the debug log, like actually interactive where
[00:31:52]
you can open a record to see the fields in it.
[00:31:56]
You can open a list to see the elements in it and so on.
[00:32:01]
It feels a lot like if you do like console log of a JavaScript object in the browser
[00:32:06]
and you can kind of inspect and expand pieces of it.
[00:32:10]
But nicer with colors.
[00:32:11]
So instead of a big block of text, that is very useful.
[00:32:14]
I actually am a bit sad because most of the debugging I do, I feel like, I do it in
[00:32:19]
elm-review.
[00:32:20]
So it's done with elm-test and I can't use this.
[00:32:25]
This is so annoying.
[00:32:26]
You actually probably could if you use node --inspect.
[00:32:31]
Yeah.
[00:32:32]
So with node --inspect, you can actually execute it, because you know, elm-review is running
[00:32:38]
in a node runtime, not like a Chrome browser runtime.
[00:32:42]
I was actually thinking with Elm test, but yeah, otherwise, yeah.
[00:32:45]
Oh yeah.
[00:32:46]
With Elm test, true.
[00:32:47]
Yeah.
[00:32:48]
It's not going to give you, it's not going to give you that.
[00:32:50]
I mean, unless you pulled down the elm-test runner code and did node --inspect with that,
[00:32:56]
but yeah.
[00:32:57]
But do give me the information.
[00:32:59]
I'm still interested.
[00:33:00]
It can be helpful.
[00:33:01]
Like if you do node --inspect and you're running your node program, which happens
[00:33:07]
to execute some Elm code, then you can, you get your console log messages in a Chrome
[00:33:14]
or whatever window and you just like connect to this node session and you can like analyze
[00:33:24]
the memory and see everything's going on as if it were running in Chrome.
[00:33:28]
That's quite handy.
[00:33:30]
So for debug logging, there are a lot of different techniques that I find myself using.
[00:33:34]
Like one technique I use often is just, oh, and I wanted to mention one other thing, which
[00:33:39]
is Matthieu Pizenberg's.
[00:33:42]
I don't know how to pronounce his name properly.
[00:33:44]
You pronounce it better.
[00:33:45]
Matthieu Pizenberg.
[00:33:46]
There you go.
[00:33:47]
That's what I meant to say.
[00:33:48]
Probably.
[00:33:49]
I mean, French names can always be pronounced in different ways, especially last names.
[00:33:55]
Interesting.
[00:33:56]
Well then I don't feel too bad.
[00:33:59]
But yeah, he has this elm-test-rs tool.
[00:34:05]
It's like a Rust test runner for Elm unit tests, which is just, you know, a drop-in replacement
[00:34:11]
for the elm-test CLI.
[00:34:12]
And one of the main features that he has for that is that he like captures the debug log
[00:34:19]
output for specific tests.
[00:34:21]
So a technique that I use quite often is I will use Debug.log in tandem with
[00:34:29]
elm-test.
[00:34:29]
And because that's a really great fast feedback mechanism, I don't need to like run through
[00:34:34]
the steps in the browser and then try to reproduce the case and make sure I'm reproducing it
[00:34:39]
the same way every time and context shift between reproducing and adding log statements,
[00:34:45]
right?
[00:34:46]
But so that's pretty cool.
[00:34:47]
In Elm test RS, it captures the log output and associates it with an individual test
[00:34:53]
case.
[00:34:54]
I do have to say, though, in practice, what I tend to do anyway is I tend to add an only
[00:35:01]
yeah, on a specific test anyway.
[00:35:03]
Yeah, because you would still have a lot of debug logs for all the other tests.
[00:35:09]
That's right.
[00:35:10]
Exactly.
[00:35:11]
Even though you're associating the log output with a specific test run instead of just printing
[00:35:17]
the logs with it without a specific ordering, it's still a lot of noise.
[00:35:21]
And it's nice to reduce down the noise in a lot of cases and say like, I know I'm reproducing
[00:35:27]
the issue here.
[00:35:29]
Give me log output to help me understand what's going on.
[00:35:32]
Yeah, I tend to use test.only also quite a lot.
[00:35:36]
Yeah, only is very helpful.
[00:35:37]
And you can use only on a specific test case or on a describe.
[00:35:43]
I tend to just use it on a specific test case in practice most of the time, at least when
[00:35:47]
I'm debugging.
[00:35:48]
Yeah, totally.
[00:35:49]
I tend to do it on describe sometimes when I'm working on stuff.
[00:35:53]
Right.
[00:35:54]
But then when I add it to a single test underneath it, there's an issue where it still runs the
[00:35:59]
whole describe.
[00:36:00]
Right.
[00:36:01]
I mean, if you have multiple onlys, it doesn't know which only you want.
[00:36:05]
So I think it does them all.
[00:36:07]
If you do an only inside of a describe.only.
[00:36:10]
Yeah, that's true.
[00:36:12]
Then I would argue you need to only run that test, but no, it doesn't.
[00:36:17]
That's reasonable.
[00:36:18]
Yeah, that's reasonable.
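A sketch of narrowing a run down to one test while debugging (the tests themselves are hypothetical; this assumes elm-explorations/test):

    import Dict
    import Expect
    import Test exposing (Test, describe, test)

    suite : Test
    suite =
        describe "Dict lookups"
            [ -- Only this test will run while the `only` is in place;
              -- elm-test also marks the run incomplete to remind you to remove it.
              Test.only <|
                test "finds the user that was just inserted" <|
                    \() ->
                        Dict.get "id-1" (Dict.fromList [ ( "id-1", "Alice" ) ])
                            |> Expect.equal (Just "Alice")
            , test "this one is skipped while debugging" <|
                \() -> Expect.pass
            ]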
[00:36:20]
So another logging technique that I like to use is I'll run an only with a specific test
[00:36:27]
case and again, like one of the most useful tips is just try to get your red failing unit
[00:36:35]
test.
[00:36:36]
If you can do that, you're in really good shape.
[00:36:38]
But I also like to, often I will just put like a Debug.log, so you can do an underscore
[00:36:47]
in a let binding, you can do underscore equals Debug.log.
[00:36:51]
And that's important because Elm will not evaluate the debug log statement if you just
[00:36:57]
say foo equals Debug.log "string" 123.
[00:37:01]
So you need to do it as underscore and then it will evaluate that whether or not it needs
[00:37:06]
it for any of the other values.
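A small sketch of that pattern (the function is hypothetical): binding the log to an underscore so it runs even though nothing uses its result.

    double : Int -> Int
    double n =
        let
            -- bound to _ purely for the side effect of logging
            _ =
                Debug.log "double was called with" n
        in
        n * 2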
[00:37:09]
Just as a side note, if you do not see your debug log, don't forget to add two arguments.
[00:37:16]
That's another yes.
[00:37:17]
The description between quotes and then the value that you're trying to show.
[00:37:23]
And if you're not showing, not trying to show anything, just trying to see whether the code
[00:37:28]
passed through this path, then add a unit or something.
[00:37:31]
Yes, exactly.
[00:37:32]
That would be an interesting elm-review rule.
[00:37:34]
I don't know if there is one for that, but there isn't.
[00:37:37]
Yeah, that would be a, that would probably save some people.
[00:37:40]
Yeah, it's already reporting that, hey, you should not use debug log.
[00:37:43]
So I don't think people will actually look at it.
[00:37:46]
That's fair.
[00:37:47]
Right.
[00:37:48]
That's a good point.
[00:37:49]
But yeah, so if you, so I sometimes find myself trying to understand which code path it's
[00:37:55]
taking, like in a case statement or an if, if expression or whatever.
[00:38:00]
So what I'll often do there is I'll put an underscore equals Debug.log, like a little
[00:38:07]
let in each branch of the if or case expression.
[00:38:11]
And then I'll just put, one, two, three for each of the log statements.
[00:38:15]
And then I know which branch it's going down.
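A sketch of tagging each branch so the console shows which path actually ran (hypothetical example, logging unit because there is nothing interesting to show):

    describeUser : Maybe { name : String } -> String
    describeUser maybeUser =
        case maybeUser of
            Just user ->
                let
                    _ =
                        Debug.log "branch 1: Just" ()
                in
                user.name

            Nothing ->
                let
                    _ =
                        Debug.log "branch 2: Nothing" ()
                in
                "anonymous"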
[00:38:17]
Yeah.
[00:38:18]
Don't forget to always give a reasonable description to know that, to make it easy to find the
[00:38:25]
debug log again.
[00:38:26]
So Oh, I found debug log two.
[00:38:29]
Oh wait, where did I put that one?
[00:38:32]
So we usually do something like before this function call and after this function call,
[00:38:38]
something like that, or one, two, three, four.
[00:38:41]
That's less important if you are running it with a reproducible unit test, because if
[00:38:46]
you're like, where is this log coming from?
[00:38:48]
Then you find all the places you're logging that and change one of them.
[00:38:52]
Whereas if you're like, if this is a very hard to reproduce bug and you have, and you
[00:38:57]
can only reproduce it one out of 10 times, cause you don't know exactly what you did
[00:39:00]
clicking through these steps to reproduce it.
[00:39:03]
Then you want to be really sure that you can get all the logging information in that one
[00:39:07]
spot.
[00:39:08]
Yeah.
[00:39:09]
So you have to understand through which paths the code is going through.
[00:39:13]
Yeah, exactly.
[00:39:14]
That's, that's another kind of pruning strategy to help you identify which branches you can
[00:39:19]
stop examining and which ones you can focus your efforts on.
[00:39:23]
So how often do you use the Elm browser debugger?
[00:39:27]
So the one that looks like it comes built in with Elm, but it's actually part
[00:39:33]
of elm/browser, and it appears in the bottom right corner.
[00:39:39]
Yeah.
[00:39:40]
I use it, I actually use it fairly often to just, um, inspect what's going on with the
[00:39:46]
model.
[00:39:47]
I don't, and I mean, it's used, it's interesting to see like which messages are coming in and
[00:39:52]
then which, but often it's enough for me to just know like, what is the current state
[00:39:57]
or like, or to toggle.
[00:39:58]
Um, actually I find myself pretty often like clicking back.
[00:40:01]
So if you like expand a particular part of your model, you can like expand and collapse
[00:40:06]
portions of it.
[00:40:07]
It retains that if you click between states that this message came in, this message came
[00:40:12]
in, you can click back and forth between those states.
[00:40:15]
So it can be really helpful to like toggle between them and see what changed if you expand
[00:40:19]
the portion that you're interested in examining.
[00:40:22]
So you can see exactly how it changed in a particular step.
[00:40:24]
So often that's like easier than the debug.log.
[00:40:27]
Yeah.
[00:40:28]
And then when you combine it with some debug logs, you can know why it changed the way
[00:40:33]
it did.
[00:40:34]
Right.
[00:40:35]
So you can like find the specific message that triggered it.
[00:40:38]
And that's another, um, so I was kind of thinking about like, what is unique about debugging
[00:40:43]
in Elm?
[00:40:44]
I think that's like, I think that's an interesting topic to talk about, you know, not just for
[00:40:48]
telling someone why Elm is interesting, but if you're using Elm, how do you make the most
[00:40:53]
of these cool properties in Elm to, you know, to do, to, to debug more effectively.
[00:41:00]
And so one of those ways that Elm is really unique is that you have this pinch point of
[00:41:06]
changing your model and, you know, initiating side effects, and that's update and, and
[00:41:12]
init, I think of them almost as like the same thing, but you've got, uh, you've
[00:41:18]
got these messages that you can examine and that's really cool because you can, um, you
[00:41:23]
can look through and say, is this message getting triggered at the right time or what
[00:41:28]
happens when this message is happening?
[00:41:30]
And another unique thing about Elm that you can take advantage of when you're debugging
[00:41:35]
is types.
[00:41:36]
So if you can look at the update, you know, clause for this message, and it's not
[00:41:43]
triggering any side effects and it's running this function, which could only possibly change
[00:41:50]
these values, that helps you prune the tree and narrow down your search space even more.
[00:41:56]
Yeah. And also reduce the mental model that you need to keep in your head.
[00:42:00]
Yes.
[00:42:01]
Right.
[00:42:02]
Yeah.
[00:42:03]
We, we talked about that in our sort of book club about, uh, scaling Elm applications,
[00:42:07]
Richard Feldman's talk.
[00:42:09]
And, uh, uh, I think that's a really, that's a really important point is like understanding
[00:42:14]
how you can narrow down your search space through Elm types.
[00:42:19]
Also like I think parse don't validate is very relevant here too.
[00:42:23]
Like if you, if you are passing a Maybe down all over the place, you have to constantly
[00:42:28]
be asking yourself, is this a Just or a Nothing?
[00:42:32]
And that's more cognitive load for you.
[00:42:34]
And I understand that's more code paths for you to consider.
[00:42:37]
So parse don't validate allows you and refer back to our parse don't validate episode.
[00:42:43]
It's one of my favorite episodes actually, that allows you to narrow down information
[00:42:48]
and basically track your assumptions through types.
[00:42:51]
That's that's kind of how I think of what parse don't validate allows you to do.
[00:42:54]
So instead of passing down a Maybe all over the place, you want to, um, you want to have
[00:43:00]
that value that keeps track of the fact that actually if it goes down this code path, it
[00:43:04]
won't be Nothing.
[00:43:05]
You have a value.
[00:43:06]
And it will reduce the code.
[00:43:08]
Yes.
[00:43:09]
The amount of code anyway.
[00:43:10]
So it will look nicer.
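A rough sketch of the idea (the types here are hypothetical): parse the raw value into a richer type once at the edge, so the rest of the code never has to ask "is this Nothing?" again.

    type Quantity
        = Quantity Int

    -- Validation happens exactly once, at the edge.
    parseQuantity : String -> Maybe Quantity
    parseQuantity raw =
        String.toInt raw
            |> Maybe.andThen
                (\n ->
                    if n > 0 then
                        Just (Quantity n)

                    else
                        Nothing
                )

    -- Downstream code takes a Quantity, not a Maybe Int, so there is one
    -- less code path to consider while debugging.
    lineTotal : Quantity -> Int -> Int
    lineTotal (Quantity count) unitPrice =
        count * unitPrice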
[00:43:11]
Yeah.
[00:43:12]
Yeah.
[00:43:13]
Right.
[00:43:14]
Yeah.
[00:43:15]
Using types, running tests, doing tiny steps also I think is really useful.
[00:43:20]
Just like all those techniques to not have bugs in the first place, or yeah, the fewer
[00:43:26]
bugs you have to start with, the less debugging you have to do.
[00:43:29]
So invest in your future, you know?
[00:43:31]
Right.
[00:43:32]
And again, coming back to the analogy of the pruning the tree, thanks again to Joël for
[00:43:38]
a really good analogy.
[00:43:40]
That is, you know, uh, types help you prune the tree and, oh, tiny steps help you prune
[00:43:46]
the tree.
[00:43:47]
But if you're taking tiny steps, your search space is smaller because you know, so I mean,
[00:43:53]
there are a few different cases to consider here, right?
[00:43:55]
There's like debugging something that you just introduced and then there's debugging
[00:44:00]
something that a user found in production, right?
[00:44:04]
And, but if you're debugging something that you just introduced, which, you know, we're
[00:44:08]
constantly debugging in that regard, right?
[00:44:11]
You introduce a change, you're like, why is this not working?
[00:44:13]
If you're taking tiny steps, your search space is a lot smaller.
[00:44:17]
And also if you want to figure out a compiler error, stash your code and do the same change
[00:44:22]
in tiny steps; that makes the compiler error much easier to read and to understand and to fix.
[00:44:27]
So back to the Elm debugger, I tend not to use it a lot.
[00:44:32]
One reason being that a lot of the debugging I do is for elm-review.
[00:44:37]
So I don't have access to it, but even in my work application, we don't use it because
[00:44:42]
we have a lot of events going through all the time because we were showing graphs and
[00:44:49]
analytics and so we have constant messages coming in and that makes it very hard to find
[00:44:57]
which step failed and just trying to make the rolling of messages stop.
[00:45:03]
So one thing I could see us doing, but we haven't done yet is to make it easier for
[00:45:10]
us to use the debugger by not triggering some events, mostly subscriptions.
[00:45:17]
If you have a Time.every subscription that fires every frame, for instance, you
[00:45:24]
will get a lot of messages and the debugger will be unusable.
[00:45:29]
And it creates performance problems because it's keeping track of all of that state in
[00:45:33]
memory and explodes.
[00:45:35]
But if you somehow remove those, like if you remove this subscription, the code might not
[00:45:41]
work as well, but maybe that's not important for this particular use case or this particular
[00:45:47]
debugging session.
[00:45:48]
So you could do yourself a favor by removing it, either by removing it from the code or
[00:45:54]
by running the code with a certain configuration or adding a shortcut that disables that fetching
[00:46:02]
of the time.
[00:46:03]
And by doing that, you will have a lot less noise.
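A sketch of gating a noisy subscription behind a flag (the model field and message are hypothetical), so the Elm debugger isn't flooded while you're poking around:

    import Time

    type alias Model =
        { animationsEnabled : Bool }

    type Msg
        = Tick Time.Posix

    subscriptions : Model -> Sub Msg
    subscriptions model =
        if model.animationsEnabled then
            -- roughly one message per frame: fine for the app,
            -- overwhelming for the debugger
            Time.every 16 Tick

        else
            Sub.none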
[00:46:07]
And similarly, what we do do is, that sounded bad, but what we do do, but is not a do do,
[00:46:16]
is actually kind of invest in our dev tools.
[00:46:19]
So in all of the pages, we have what we call a dev panel, which we open up with a shortcut,
[00:46:26]
and then it gives us buttons to put the application in a certain state.
[00:46:31]
So if you have a list of items that we're showing, like a, for us, it's a dashboard
[00:46:37]
list because we can have several.
[00:46:40]
So what would the page look like if you had 100 or 1000?
[00:46:45]
Would it slow down?
[00:46:46]
Would it still look okay?
[00:46:47]
Would the CSS be okay?
[00:46:49]
So we just have a button that says, add 100 dashboards or empty the dashboards or mock
[00:46:56]
a request failure so that you can see the error page, the error state.
[00:47:03]
And when you do all those things, it becomes much easier to test your application in very
[00:47:10]
specific scenarios.
[00:47:11]
Even just visually, even if it's only for CSS or styling, being able to easily put your
[00:47:19]
application into the state where it shows whatever you need to style is very useful.
[00:47:24]
It's very time saving.
[00:47:26]
That's great.
[00:47:27]
That's a great tip.
[00:47:28]
So that's something that you need to add to your application?
[00:47:30]
Right.
[00:47:31]
Invest in building your own tools.
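A rough sketch of such a dev panel (every name here is hypothetical): a hidden view with buttons that send messages to force the app into specific states.

    import Html exposing (Html)
    import Html.Events exposing (onClick)

    type DevAction
        = Add100Dashboards
        | EmptyDashboards
        | MockRequestFailure

    type Msg
        = DevActionClicked DevAction

    viewDevPanel : Html Msg
    viewDevPanel =
        Html.div []
            [ Html.button [ onClick (DevActionClicked Add100Dashboards) ] [ Html.text "Add 100 dashboards" ]
            , Html.button [ onClick (DevActionClicked EmptyDashboards) ] [ Html.text "Empty the dashboards" ]
            , Html.button [ onClick (DevActionClicked MockRequestFailure) ] [ Html.text "Mock a request failure" ]
            ]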
[00:47:32]
Yeah, I think that if you have, I always think about like, what is the sort of macroscopic
[00:47:39]
effect of improving feedback mechanisms?
[00:47:43]
I think that you see the quality and level of like innovation around a particular area
[00:47:51]
directly affected by those types of things.
[00:47:54]
So that's really cool.
[00:47:56]
I think if you want to make something really good, invest, I mean, it's like, they have
[00:48:02]
like chaos monkey and those things at Netflix, right?
[00:48:06]
So they'll just reproduce an outage in an AWS region or whatever it might be.
[00:48:13]
And by having those feedback mechanisms, it makes it more resilient and robust because
[00:48:20]
instead of dealing with those issues when they happen to happen, which is rare, they
[00:48:26]
can trigger them at any time and they have to build in that resilience to it.
[00:48:31]
So I think that design tools that bring that feedback out so that you're responding to
[00:48:37]
it instead of, otherwise it's just going to, it's not going to be front of mind.
[00:48:42]
Yeah.
[00:48:43]
It makes me think of fuzz tests.
[00:48:45]
Yeah.
[00:48:46]
Yeah.
[00:48:47]
You're not going to test when that integer is minus one, but having a fuzz test will
[00:48:53]
force you to do it.
[00:48:55]
Yes.
[00:48:56]
Yes.
[00:48:57]
And it doesn't have the same biases that you have unless you encode your biases subconsciously,
[00:49:00]
which you probably will to some extent.
[00:49:04]
It does have some biases, some very weird ones like sometimes, oh yeah, negative numbers.
[00:49:10]
I like those.
[00:49:11]
Oh sure.
[00:49:12]
Well that's, yeah, that's by design, right?
[00:49:15]
Like it, it, it knows that there is a certain meaning to like an empty list and a list with
[00:49:20]
one item.
[00:49:21]
It's like those are, let's not be random about those because those specific areas might be,
[00:49:27]
let's like be more likely to produce those random values or let's always produce them.
[00:49:32]
I don't know which it is, but fuzz testing is another interesting one that can sort of
[00:49:37]
get you out of your cognitive biases.
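A small sketch of a fuzz test (hypothetical test, assuming elm-explorations/test): the fuzzer will happily try negative numbers, zero, and other values you might not have picked by hand.

    import Expect
    import Fuzz
    import Test exposing (Test, fuzz)

    clampStaysInRange : Test
    clampStaysInRange =
        fuzz Fuzz.int "clamping always stays within 0 and 100" <|
            \n ->
                clamp 0 100 n
                    |> Expect.all
                        [ Expect.atLeast 0
                        , Expect.atMost 100
                        ]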
[00:49:39]
That's another good, good tip for debugging.
[00:49:42]
There are almost like two different modes for debugging.
[00:49:44]
There's like the exploration phase and the fixing phase or, or like narrowing down phase.
[00:49:51]
So like one is expanding when you're exploring like what is going on, what might be causing
[00:49:56]
it.
[00:49:57]
You're trying to expand your options and where to look and one is narrowing within that.
[00:50:02]
Which we've talked about mostly.
[00:50:03]
Yeah.
[00:50:04]
Sometimes people talk about like exploratory testing, which is a type of manual testing.
[00:50:09]
I I'm not an expert about it.
[00:50:11]
I don't know very much about it, but I think the general idea is to sort of overcome those
[00:50:16]
biases by sort of poking at a system instead of saying, here's our test plan.
[00:50:21]
This is what we test every time we do a release.
[00:50:23]
It's more how can we overcome our cognitive biases and explore things that might not be
[00:50:30]
a code path that we've examined before.
[00:50:32]
Okay.
[00:50:33]
I've never, never seen it in action or I'm not sure what it is.
[00:50:37]
I haven't read a lot about it and I haven't practiced it myself, but I know
[00:50:43]
a lot of sort of automated testing advocates often talk about, you know, ideally almost
[00:50:49]
all of your testing should be automated, but there's this thing called exploratory testing,
[00:50:54]
which is really helpful because basically like you're automating the known knowns, not
[00:51:00]
the unknown unknowns or something like that.
[00:51:02]
Right.
[00:51:03]
It's to try to flush those things out, which your automated tests aren't necessarily going
[00:51:08]
to do a good job of. Fuzz tests are perhaps an interesting exception.
[00:51:13]
Is exploratory testing kind of a way to figure out where to add tests?
[00:51:19]
As in, where should I add automated tests?
[00:51:20]
I mean, I'm sure you can. Let's see, how to do exploratory testing: categorize
[00:51:25]
common types of faults found in past projects, analyze the root cause, find the risk to develop.
[00:51:30]
Yeah.
[00:51:31]
There's a whole area here that I need to dig into more, but there
[00:51:36]
could be some interesting concepts to glean there about how to put on the right
[00:51:42]
lens when you're looking at debugging.
[00:51:44]
I'll drop a link in case people want to read about it more.
[00:51:46]
Yeah.
[00:51:47]
All right.
[00:51:48]
Anything else you want to add?
[00:51:49]
I didn't bring up my mantra of wrap early, unwrap late.
[00:51:55]
I think that's another good one to mention here.
[00:51:58]
You know, if you can, deal with possible failures at the periphery.
[00:52:03]
I mean, Elm bakes in some of those things with, like, decoders: sort of finding
[00:52:08]
out about issues when you get the payload rather than when you use
[00:52:14]
the data.
[00:52:15]
But I think that's another really good practice: you want to enforce
[00:52:19]
contracts as soon as possible.
[00:52:21]
You want to use types to represent those.
[00:52:22]
You want to wrap it in a meaningful type as quickly as possible and retain it as
[00:52:28]
that ideal representation of the data as long as possible until you need to get some, you
[00:52:34]
know, built in data type that you need to pass as JSON.
[00:52:37]
You don't want to serialize it prematurely, because that makes it harder
[00:52:42]
to debug and understand what's going on.
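A minimal sketch of what "wrap early, unwrap late" can look like with a decoder; the `EmailAddress` type and the naive "@" check are made-up examples, not something from the episode:

```elm
module EmailAddress exposing (EmailAddress, decoder, toString)

import Json.Decode as Decode exposing (Decoder)


type EmailAddress
    = EmailAddress String


-- Enforce the contract as soon as the payload arrives: a bad value
-- fails here, in the decoder, not later when the data is used.
decoder : Decoder EmailAddress
decoder =
    Decode.string
        |> Decode.andThen
            (\raw ->
                if String.contains "@" raw then
                    Decode.succeed (EmailAddress raw)

                else
                    Decode.fail ("Not an email address: " ++ raw)
            )


-- Unwrap as late as possible, for example only when re-serializing.
toString : EmailAddress -> String
toString (EmailAddress raw) =
    raw
```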
[00:52:45]
Basically any technique that we covered during the other 29 episodes is probably worth
[00:52:51]
looking into.
[00:52:52]
Yeah.
[00:52:53]
Is there anything else that makes Elm unique for debugging?
[00:52:57]
I guess we didn't talk about like pure functions, but that's pretty, pretty huge.
[00:53:02]
Yeah.
[00:53:03]
Just, just the fact that you don't have any spooky action at a distance as we've mentioned
[00:53:07]
before.
[00:53:09]
One thing that sets Elm apart from the others is that there is no debugger, in quotes.
[00:53:15]
The step debugger.
[00:53:16]
Yeah.
[00:53:17]
Step by step debugger.
[00:53:18]
Maybe there will be at some point, but there isn't at the moment.
[00:53:21]
I've heard people mention that they think that would be useful, which I
[00:53:25]
can sort of see.
[00:53:26]
Yeah.
[00:53:27]
Because it's like putting a debug log on the fly.
[00:53:30]
Right.
[00:53:31]
Right.
[00:53:32]
Yes.
[00:53:33]
Right.
[00:53:34]
If you put a Debug.log, you need to know where you want to extract information.
[00:53:38]
But you can sort of explore freely if you have a stepwise debugger and find something
[00:53:43]
you weren't necessarily looking for.
[00:53:45]
Now that said, I think that, I think that you and I would both agree.
[00:53:50]
Like we would be happier to debug in Elm than in any other language.
[00:53:53]
Yeah.
[00:53:54]
Yeah.
[00:53:55]
We've got it pretty good there.
[00:53:58]
Like even just the fact that nothing happens without a message is wonderful.
[00:54:03]
Like, if you don't get a message, it will not change.
[00:54:06]
You get a message, it might change and you know exactly how it has changed.
[00:54:10]
So yeah, that is in a way a step by step debugger.
[00:54:15]
The Elm debugger in the browser, I mean.
[00:54:17]
Not in the same.
[00:54:18]
Right.
[00:54:19]
Not the same level of granularity of step, but it is steps.
[00:54:22]
Yeah.
[00:54:23]
And yeah, I guess in a way that's like a big bisect already, you could say.
[00:54:29]
Yeah, absolutely.
[00:54:30]
You can say the problem is only happening between this step and this step.
[00:54:35]
And that already narrows down the search a lot.
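As a rough sketch of how those message "steps" can be traced and bisected, here is a hypothetical counter `update` that logs every incoming message with `Debug.log` (`Model` and `Msg` are placeholder names, not from the episode):

```elm
module Counter exposing (Model, Msg(..), update)

-- Because nothing happens without a message, logging each message in
-- `update` gives a step-by-step trace of every state change.


type alias Model =
    { count : Int }


type Msg
    = Increment
    | Decrement


update : Msg -> Model -> Model
update msg model =
    let
        -- Debug.log returns its second argument unchanged; this only
        -- prints something like `msg: Increment` to the console.
        _ =
            Debug.log "msg" msg
    in
    case msg of
        Increment ->
            { model | count = model.count + 1 }

        Decrement ->
            { model | count = model.count - 1 }
```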
[00:54:38]
Right.
[00:54:39]
Right.
[00:54:40]
And you want to take advantage of the unique characteristics of your application and your
[00:54:46]
programming language to prune that tree.
[00:54:50]
So that's why I think it's worth thinking about this question of what's unique about
[00:54:55]
debugging in Elm because that helps you make assumptions safely.
[00:54:59]
And so you should double down on using those assumptions.
[00:55:05]
Like that's going to make you more effective at debugging your Elm code.
[00:55:09]
If you're thinking about, okay, what do I know?
[00:55:12]
Since this is Elm code, is there a message I can look for? Like, say it's a problem with
[00:55:17]
state.
[00:55:18]
Okay.
[00:55:19]
Well then what are the relevant messages?
[00:55:20]
Right.
[00:55:21]
You can start with that because that's an assumption you can make because of Elm, not
[00:55:25]
because of your domain or the way you've structured your code, which is really cool.
[00:55:29]
And then with types, you can make certain assumptions about what data types are possible.
[00:55:34]
What can happen here?
[00:55:35]
Could there be an error or not?
[00:55:37]
So take advantage of those things and keep them in mind when you're writing your code
[00:55:42]
and when you're debugging your code, and use those to further prune the tree.
[00:55:47]
And this is really important.
[00:55:49]
We've mentioned this before, when you find a bug, ideally try to fix it by improving
[00:55:54]
the types if possible.
[00:55:55]
If you can prevent the bug in the future through types, do that.
[00:55:59]
Now that doesn't necessarily have to be your first step in order to get your unit test
[00:56:05]
green.
[00:56:06]
In fact, it likely shouldn't be your first step because you want to do the simplest thing
[00:56:09]
that could possibly work, but do swing back around and make sure it doesn't happen again
[00:56:13]
by improving the types.
[00:56:14]
If you find a bug related to types.
[00:56:16]
Yeah.
[00:56:17]
There are several ways you can make sure that the problem doesn't happen again, changing
[00:56:22]
your types so that you make impossible states impossible and other things like that.
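For example, a minimal sketch of that kind of type improvement, using made-up names rather than anything from the episode:

```elm
module ImpossibleStates exposing (RemoteData(..), User)

{- Before, a record like
       { isLoading : Bool, error : Maybe String, user : Maybe User }
   allowed nonsense such as isLoading = True while user = Just someone.
   A custom type keeps only the states that can actually occur.
-}


type alias User =
    { name : String }


type RemoteData
    = Loading
    | Failure String
    | Success User
```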
[00:56:26]
You can write a test, which you probably already did while fixing this.
[00:56:30]
If you had a good test suite.
[00:56:32]
If you didn't, it's always a reminder that investing in your unit test suite is a good thing.
[00:56:38]
And sometimes when those don't work out, maybe code generation is a good solution.
[00:56:44]
Elm review rules are a solution.
[00:56:46]
You have so many techniques at your disposal and you should use the one that is most appropriate.
[00:56:53]
Basically probably not Elm review.
[00:56:55]
Right.
[00:56:57]
The closer to the metal you can get, the better. If you can do it through the Elm compiler, then go
[00:57:01]
for that.
[00:57:02]
If you can't, then keep going down.
[00:57:04]
Then code generation, then Elm review.
[00:57:09]
Try to make it so that it doesn't happen again, because otherwise it will happen again at some
[00:57:12]
point and you will have to do the same debugging you just did or someone else will.
[00:57:18]
Right.
[00:57:19]
Even worse.
[00:57:20]
Yeah.
[00:57:21]
Basically there's a direct correlation between writing maintainable code and debugging: the techniques
[00:57:27]
you use to write maintainable code are similar to the thought process you use to narrow down
[00:57:33]
your search space when you're debugging.
[00:57:35]
The more you're doing those techniques to write maintainable code, the more you're able
[00:57:40]
to narrow down your search space when you're debugging and the less you're going to run
[00:57:43]
into bugs in the first place because your general overall space is more narrow because
[00:57:48]
there are fewer impossible states that can be represented, et cetera.
[00:57:53]
Well I think we've covered debugging pretty well.
[00:57:56]
Yeah, not so much about debugging compiler errors in the end.
[00:58:00]
I expected more.
[00:58:03]
Maybe there could be another episode on the horizon for that topic.
[00:58:05]
That sounds good.
[00:58:06]
Let us know what you think.
[00:58:08]
And also thank you to John Doe, Doe spelled D O U G H, for submitting this question.
[00:58:17]
We neglected to mention in the beginning that this is another listener question, and we really
[00:58:20]
appreciate getting these.
[00:58:21]
It's really fun to get listener questions.
[00:58:25]
You can submit a question by going to elm dash radio dot com slash question.
[00:58:31]
Don't forget to leave us a review on Apple Podcasts and tell a friend if you enjoy the
[00:58:35]
podcast and until next time, have a good one.
[00:58:38]
Have a good one.