
elm-test

We discuss the fundamentals of test-driven development, and the testing tools in the Elm ecosystem.
August 10, 2020
#10

elm-test Basics

TDD Principles

Fuzz Testing

When to Use Types or Tests

Should you test implementation details?

Higher-Level Testing in Elm

elm-program-test

Martin Janiczek's elm Europe talk on testing Msgs with ArchitectureTest

Transcript

[00:00:00]
Hello, Jeroen. Hello, Dillon. I really hope we meet people's expectations with today's
[00:00:07]
episode. That goes off on a good start. You want to tell people the episode topic so they
[00:00:17]
can appreciate how bad that pun was just now? Yeah, well, I think they got it from the episode
[00:00:22]
title, but today we're going to talk about elm-test, which deals with expectations.
[00:00:28]
Or do you want me to explain the pun? It's always funnier when you explain the joke.
[00:00:34]
If there's one thing I've learned, always explain the joke. Okay, so you said talking
[00:00:40]
about meeting people's expectations, and in elm-test, you write expectations for a test
[00:00:46]
to pass. See, if people weren't on the floor laughing the first time, now they're definitely
[00:00:51]
laughing really hard. You're welcome. Okay, so let's quickly move on from that and give
[00:01:01]
people a little introduction to elm-test. So like, for somebody who has never encountered
[00:01:08]
it, what the heck is it? How do you even get set up with it in the first place? Yeah,
[00:01:14]
so elm-test is the official library and tool to write tests in Elm and for Elm code. So
[00:01:23]
you have two parts to it. You have one part that is the library, which is
[00:01:30]
elm-explorations/test. And there's the CLI, which runs on Node. So to install it, you do,
[00:01:37]
well, the name is elm-test. And to install it you do npm install elm-test. That's pretty
[00:01:45]
much it. Oh, and then actually, there's an elm-test init command, which is kind of helpful
[00:01:52]
for setting things up, because there's a section in your elm.json
[00:01:58]
which has test-dependencies. And it needs to make sure it installs the, you know, proper
[00:02:04]
version of elm-explorations/test, the package that allows you to make assertions
[00:02:09]
and declare test cases. Yeah, so you have your production code or your source code.
[00:02:17]
And next to that, you usually have a tests folder in which you put all the tests. And
[00:02:23]
elm-test is going to read all the tests, find all the tests that are available in that folder.
[00:02:30]
With a little bit of magic, it builds something using your test
[00:02:36]
dependencies that are in your elm.json, right, and runs them. Yeah, it's kind of hard to
[00:02:43]
wrap your brain around it at first. But what it does is it looks for any values that are
[00:02:48]
exposed in your tests folder that are of the type Test. You could expose multiple top-level values
[00:02:55]
in a module of type Test, and it will run those. You could expose a list. Well, you create
[00:03:01]
like a list of tests with a describe. A describe is just a function that
[00:03:06]
takes a list of tests, and the title for the whole group. Yep, exactly. And really, what
[00:03:13]
a test is in Elm is: you just invoke one of the Expect helper functions, for example,
[00:03:20]
Expect.equal, and that takes two values. So if you say Expect.equal one two, it's
[00:03:26]
going to say, I expected... let's see, actually, usually you pipe it. So it's hard to think
[00:03:32]
about which one is expected and which is actual. If you say one, pipe (|>), Expect.equal
[00:03:39]
two, it's going to say, I got one, but I expected it to be two. Yeah, exactly. So ultimately,
[00:03:49]
that's what an Elm test case is. It's literally just an expectation. So one of the things
[00:03:57]
that's really beautiful about the design of Elm is the fact that it's so testable, because
[00:04:03]
it's just people bend over backwards. I think we might have talked about this on another
[00:04:07]
episode, but people bend over backwards in other languages trying to make all of these
[00:04:14]
side effects and trying to assert on all of these impure things and trying to make non
[00:04:22]
deterministic things deterministic. So you have some code that depends on time. You have
[00:04:28]
some environment stuff that it's picking out and reading global variables and things like
[00:04:34]
that. And you have to do all of this setup. And some of it's implicit, some of it's required,
[00:04:40]
some of it's not. You're not quite sure which of the pieces are required or not. You're
[00:04:44]
not sure which pieces are deterministic or not. You might get tests randomly failing
[00:04:49]
on the first of the month. With Elm, you don't have any of those problems. And I can't tell
[00:04:54]
you how, I mean, I've got a background in doing agile technical coaching. So I've spent
[00:05:01]
a good deal of time trying to sort of coach people on some of these best practices around
[00:05:06]
testing. And so many of these are just built into the Elm language because of the purity,
[00:05:11]
because testing is so natural when it's just input output. And so that's all an Elm test
[00:05:18]
is. It's like you have this input, you get this output, and then state a few things that
[00:05:24]
you expected about that output. That's all it is. And it's so nice to test in Elm. So
[00:05:30]
please do it. Please write tests in Elm.
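For reference, a minimal test module using describe, test, and Expect.equal might look something like this (the module name and test cases are just examples):

```elm
module ExampleTest exposing (suite)

import Expect
import Test exposing (Test, describe, test)


suite : Test
suite =
    describe "String.reverse"
        [ test "reverses a known string" <|
            \() ->
                String.reverse "hello"
                    |> Expect.equal "olleh"
        , test "reversing twice gives back the original" <|
            \() ->
                "radar"
                    |> String.reverse
                    |> String.reverse
                    |> Expect.equal "radar"
        ]
```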
[00:05:33]
That is actually one of the things that is always quite hard with tests is the setting
[00:05:37]
up. And so when you have complex functions, complex environments, to get the thing testable
[00:05:46]
in a testable state, you need to use a lot of techniques, inversion of control, mocking,
[00:05:53]
to get it testable. And in Elm, since you're only dealing with pure functions, a lot of
[00:06:00]
those problems go away directly.
[00:06:02]
Yes. Yeah. And on the topic of testable design, I think this is one of the things that maybe
[00:06:10]
somebody who is more new to test driven development might not be familiar with this concept, but
[00:06:15]
I think it's a really important one. Actually, many people consider test driven development
[00:06:19]
to be a design practice rather than a testing practice. Because what test driven development
[00:06:27]
does, you know, when you're writing a unit test, before you write the code that makes
[00:06:33]
that unit test pass, what you're doing is you're being deliberate about what you want
[00:06:38]
the design to look like, rather than letting the implementation guide the design. You're
[00:06:43]
letting your intentionality guide the design. And by definition, you're making it testable
[00:06:52]
because you're writing it in a way that's nice to test and nice to use before you write,
[00:06:59]
you know, the implementation. So naturally, if you write the implementation first, what's
[00:07:05]
going to tend to happen is it's going to be a design that's very difficult to test. And
[00:07:10]
there's this inherent quality with testable design that it's decoupled. And similarly,
[00:07:19]
if you have not test driven your design, it tends to get coupled. So test driven development
[00:07:24]
is a good technique for designing your code in a way that is nicely decoupled and nice,
[00:07:30]
you know, nice to maintain whether like independent of the tests, it's just tends to be nicer
[00:07:36]
designed code.
[00:07:37]
Definitely. So as you just said, when you do TDD, you use the functions, the library
[00:07:45]
before you've finished designing it. And when you use it, that's when you find the flaws
[00:07:51]
in your API design. So if you try to make the API look good after the fact, it wouldn't
[00:07:59]
get you the same result as if you did it over the course of the writing.
[00:08:07]
Yeah. So before we go into too many more details of like the specifics of Elm test, maybe let's
[00:08:18]
introduce like the basic concepts of test driven development. For example, red green
[00:08:24]
refactor. So red green refactor is, it's just a very liberating way to work actually, because
[00:08:34]
the thing is, it spreads out a large difficult task over many small individual steps. And
[00:08:42]
so I find it a lot more enjoyable to work in that style, rather than trying to get everything
[00:08:48]
working at once, which is very overwhelming and kind of gives you this analysis paralysis.
[00:08:54]
Test driven development allows you to say, how do I get this one case working, you know,
[00:08:59]
if I'm doing something that transforms a list in some way, what if I give it an empty list?
[00:09:06]
You know, does it give me the correct result for an empty list? And then you you write
[00:09:10]
a test case that does that you make it pass. And in the process, you get several, several
[00:09:15]
things wired up. And so it's actually moving you forward where you're making sure that
[00:09:21]
you get the wiring because you've got a fully working thing at each step. So the red part
[00:09:26]
is you write a failing test, you have an assertion that fails. Now, that was the red part, the
[00:09:33]
first part, that's the red part. And in Elm, a compiler error could be part of that failure
[00:09:40]
process, right? So you, you may write a red test where there's a compilation error, you
[00:09:46]
fix that there's an expectation failure, and then you fix that. And that's okay. But the
[00:09:54]
thing that's important is that you're following the failures you're getting when you run your
[00:09:58]
test. So you write an assertion, you run the test, it prints out the current problem, which
[00:10:05]
may be from the Elm compiler saying it doesn't compile. But you're fixing what running the
[00:10:11]
test tells you you need to fix. And that's the test driven part. So that's the red part.
[00:10:17]
The green part is you fake it till you make it you do the simplest thing you could possibly
[00:10:21]
do to get it green. Usually the stupidest solution is the good one. If your inputs are
[00:10:29]
like multiply six by seven, just hard code 42. It's stupid, but it works. And to make
[00:10:35]
sure that that doesn't stay, you write a different test later that says multiply two by three,
[00:10:41]
and then 42 will not be the correct answer for that one. So there you have to generalize.
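As a rough sketch of that fake-it-till-you-make-it progression (the function and tests here are just illustrative):

```elm
module MultiplyTest exposing (multiply, multiplyTests)

import Expect
import Test exposing (Test, describe, test)


-- Step 1 (green by faking it): hard-code the answer the first test expects.
-- multiply : Int -> Int -> Int
-- multiply _ _ =
--     42


-- Step 2 (generalize): a second test forces the real implementation.
multiply : Int -> Int -> Int
multiply a b =
    a * b


multiplyTests : Test
multiplyTests =
    describe "multiply"
        [ test "six times seven" <|
            \() -> multiply 6 7 |> Expect.equal 42
        , test "two times three" <|
            \() -> multiply 2 3 |> Expect.equal 6
        ]
```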
[00:10:46]
Yeah, exactly. There are a lot. There are so many really elegant principles at play here
[00:10:51]
that I find are just very, very useful ideas in programming, like Yagni, you ain't gonna
[00:10:58]
need it. So that's the idea that you think you're gonna need to handle this case, you
[00:11:03]
think you're gonna need this functionality, but write it when you when you do need it
[00:11:08]
when when you do have concrete evidence that you need it. You know, in the case of like,
[00:11:14]
building out a product, that concrete evidence might be that you observe users using it and
[00:11:19]
see that they're running into a problem or get feedback from them. In the case of a test,
[00:11:24]
the you ain't gonna need it, you know, you prove that you need it by writing a test.
[00:11:29]
And then now you need that design, but don't design things in anticipation of I'm gonna
[00:11:34]
need to generalize this. This design discipline of fake it till you make it doing the stupidest
[00:11:40]
thing that could possibly work. Or as some people like to say, the simplest thing that
[00:11:44]
could possibly work that keeps you honest about not over designing and anticipating
[00:11:51]
what you're going to need. And it also our brains are much better equipped to solve one
[00:11:58]
case than to like, have the generalized solution to something. It's so much easier to just
[00:12:04]
think about one case at a time. So I hope that helps break it down.
[00:12:08]
So we got the red, we got green, what is a refactor?
[00:12:12]
The refactor, that's the one that that people forget to do a lot of the time. Refactor is,
[00:12:20]
well, it's quite delightful to do in Elm. Refactor is when the tests are green, refactor.
[00:12:27]
Now you know, I've been I've been thinking a lot lately about refactoring in Elm, refactoring
[00:12:32]
tools. You know, I've been I've been doing some work on making some contributions to
[00:12:38]
IntelliJ Elm for some automated refactorings. I know you also are thinking about these types
[00:12:43]
of things, Jeroen, with elm-review.
[00:12:45]
Yep, definitely.
[00:12:46]
The idea with the refactor step in TDD is now you have a test that kind of demonstrates
[00:12:52]
the behavior that you want is happening. So now you can safely refactor. So one of the
[00:12:57]
things in TDD also is that you don't want to you don't want to start refactoring when
[00:13:03]
you're in a red state because then you make a refactoring and you don't have that green
[00:13:08]
to tell you everything went well.
[00:13:11]
Yeah. So if you started refactoring, either stash the new test, or stash whatever refactoring
[00:13:19]
you were attempting, just do one or the other, not both at the same time.
[00:13:24]
That's a great tip. Yeah, I find I find myself in practice using that technique quite a bit.
[00:13:29]
You just you're writing a new test and then you're like, wait a minute, I really need
[00:13:34]
to refactor this thing to build this thing that the new test case wants me to build.
[00:13:40]
And as you say, you comment out the new test, you skip it, but now you're back at a green
[00:13:44]
state and you can refactor all you want. And it's this is another really great principle
[00:13:49]
of test driven development, which is make the change easy, then make the easy change.
[00:13:55]
How is that related to testing?
[00:13:59]
Because it's part of the refactor step. So you want as much as possible to rely on. So
[00:14:07]
sometimes that means that you you have a new test case. So red, green refactor, we kind
[00:14:12]
of introduced the different steps, but you kind of iterate on that cycle, adding new
[00:14:17]
cases. So you do red green refactor, you can refactor anytime it's green, it's really more
[00:14:21]
of a state machine than like a discrete sequence, right? Yeah, then you add a new test case
[00:14:26]
as needed when when there's more behavior you need to add, you add a test case to prove
[00:14:31]
you need that new behavior handled. Yeah. And then as you say, you find that I really
[00:14:38]
need to generalize this piece of code or extract this function or add a parameter here or whatever
[00:14:43]
it might be, in order to make this next step easier, like this step is going to be too
[00:14:48]
big, unless I do a refactoring step first. And so as you pointed out, you
[00:14:54]
skip the new test you were adding, or you comment it out, you stash it, whatever
[00:15:00]
you need to do to get back to a green state. But now you know, okay, I am going to need
[00:15:04]
to do this refactoring to make the next step easier. And so you make the change easy. And
[00:15:10]
then you make the easy change. So you want to do as much of the heavy lifting as possible
[00:15:15]
as a set of refactoring steps. So that when you make the behavior change, it's dead simple.
[00:15:22]
And the search space for where a problem could happen is much smaller.
[00:15:27]
I'm curious, do you refactor at every step or every few steps? I usually do every few, but
[00:15:33]
I don't know if that's the best way to do it.
[00:15:36]
I think it totally depends the way I think about refactoring, which really, we could
[00:15:42]
definitely do several episodes just about that topic. And I'm sure we will. But the
[00:15:48]
way I think about refactoring is we're constantly reading code, we're reading code more often
[00:15:53]
than we're writing code. So you want it to be very inexpensive to read code, because
[00:16:00]
we do it often. So we want to optimize for reading code, and for changing code, right?
[00:16:04]
Those are if we if you can make reading and changing code extremely inexpensive and efficient,
[00:16:11]
then you're going to have a good code base to work with your you know, you're in good
[00:16:15]
shape. So refactoring helps you do that. But as you read the code, you start to understand
[00:16:23]
things about the code, you start to see certain patterns, you start to realize like, okay,
[00:16:28]
this variable is called accumulator. But really, this is like the
[00:16:33]
concatenation of a list of strings. Yeah, this is like the the admin users. That's
[00:16:39]
really what this accumulator represents. So you're reading the you know, you're reading
[00:16:43]
through your code, and you see some variable and and you're like, what is this doing, right?
[00:16:48]
Because that's what happens when you're reading code. You're like, I need to add this feature.
[00:16:53]
What the heck is this doing? You read it, you sort of understand it. And then when you
[00:16:58]
have that insight refactor. So that's one cue you can use to refactor is when you start
[00:17:04]
to understand something, take that understanding out of your head and put it back in the code.
[00:17:08]
And now the next time you or someone else is reading that code, they don't have to do
[00:17:12]
that extra step of processing it in their brain to get the understanding it's more readily
[00:17:17]
available or sometimes that understanding might not be like a variable name, but it
[00:17:21]
might be extracting a particular function that that represents some operation, you know,
[00:17:26]
grouping something together, having something in a certain module. But I find that helpful
[00:17:32]
to think about as you're reading code, just refactor something if you see something that
[00:17:37]
can change, just do it. And I think people sometimes think about refactoring as something
[00:17:42]
that you take a month on a branch, and just do a giant rewrite. And that's like a lot
[00:17:48]
of people's notion of refactoring. And that's called summer. But when all the colleagues
[00:17:55]
are on vacation, go refactor.
[00:17:58]
And I mean, it's one thing to like dedicate a large chunk of time to refactoring, but
[00:18:04]
it's another thing to just do it as one giant step, and you want to break it down
[00:18:10]
into a lot of tiny steps whether or not you spend a long time refactoring. And also like,
[00:18:15]
yeah, so anyway, I mean, I like to keep myself honest about making tiny changes where there's
[00:18:23]
almost zero risk that I've changed behavior unintentionally.
[00:18:28]
One thing I would like to point out or to emphasize is that during your refactor, you
[00:18:35]
should not support new cases. So if something needs and deserves a test written for it,
[00:18:45]
and doesn't work at the moment, and refactoring the way you do it makes a test pass, then
[00:18:52]
you should first write the test and then refactor it. Otherwise, you lose one good loop, one
[00:19:01]
good cycle. And this cycle brings a lot of good things.
[00:19:09]
I do find that sometimes I like to generalize as part of refactoring step. I'm not sure
[00:19:17]
if that's compatible with what you're saying right now or not. But so there's this one
[00:19:22]
I guess not.
[00:19:23]
I mean, I guess I would clarify that I don't call that refactoring. I call that generalizing.
[00:19:30]
And sometimes people use the word refactoring there, right? So if you're refactoring, it
[00:19:35]
means you're not changing the behavior, right? But if you suddenly fix a bug during your
[00:19:40]
refactoring, it means it wasn't a refactoring step. And that, you know, it's not that you
[00:19:44]
shouldn't fix bugs or change your code in a way that resolves bugs. It's just, no, that
[00:19:49]
it's not a refactoring. And there's a particular role that a low or zero risk refactoring plays
[00:19:56]
in the development process. It's a very helpful technique.
[00:20:00]
It's good to have the word generalizing in mind for both steps.
[00:20:05]
Exactly. Exactly. They're like different modes of operation. So know which one you're in.
[00:20:11]
There are different approaches to test driven development. You can do this technique called
[00:20:16]
triangulation where you sort of every time you want to generalize your code, you write
[00:20:22]
a failing test that proves that it needs to be generalized first. So like basically, you
[00:20:27]
only generalize code by responding to a failing test. I find in practice, sometimes that gives
[00:20:36]
you like, you know, you hard code a case like you you're doing fizzbuzz and you say fizzbuzz
[00:20:43]
for for one is one. And then you just hard code fizzbuzz to return the string one. And
[00:20:50]
now you say fizzbuzz of two is two, you have to write a failing test in order to remove
[00:20:57]
the hard coding of the number one from the return value. Yeah, but really, to me, it
[00:21:03]
seems just noisy to have two test cases that are testing, turning a regular number into
[00:21:10]
a string. And so sometimes I'll do that as a generalization step. Kent
[00:21:21]
Beck, in Test Driven Development by Example, which is a really nice book, talks about
[00:21:28]
this idea as removing duplication between the test case and the production code. So
[00:21:28]
anyway, that's a technique that I sometimes reach for. Sometimes it can become
[00:21:33]
a little bit dogmatic to just follow triangulation at every step. And it doesn't feel natural
[00:21:40]
in a context. So I recommend that people try out these approaches, it can be nice to do
[00:21:46]
some exercises, some some kata, we can link to some code kata. But it's really nice to
[00:21:51]
just practice code kata, and try out these different techniques to get experience with
[00:21:57]
them. And then you have to use your own judgment in a real code base to figure out which techniques
[00:22:04]
make sense for you. But if you've at least experienced them, then you have a better sense
[00:22:09]
of of how they help you write better code. Now you know everything about test driven
[00:22:15]
development. Okay, well, shall we talk about some some more of the details of the specific
[00:22:21]
elm-test package? Yeah, we can do that. We didn't say how you write a test. Maybe
[00:22:29]
we should go into that. Let's do it. The way you write a test is you import the
[00:22:36]
Test module. You call the test function from the module. So test, followed by the
[00:22:46]
title of the test. And I find that one pretty important, because if the test fails,
[00:22:54]
you know that whatever you wrote in the title is the thing that is not working anymore.
[00:23:00]
The failing test message is like part of the thing you're building when you write a failing
[00:23:06]
test. Because if it ever fails in the future, that's going to be what's guiding somebody
[00:23:11]
to fix the broken thing. Yeah, so the title and the expectation error. So exactly, use
[00:23:18]
both. Exactly. So yeah, you got the title. And then usually what you do is you pipe, using the
[00:23:25]
left pizza (<|), followed by an anonymous function, which takes as its argument the empty tuple.
[00:23:34]
So a unit. Yep. And then inside of that, you write the setup of the function, setup of
[00:23:43]
the test, followed by the expectations that you want to make sure that happen. The reason
[00:23:50]
why there is an anonymous function is for performance reasons. So if you only want to run
[00:23:58]
tests from one file, from one specific location, all the other tests are not executed
[00:24:08]
uselessly. Because Elm does eager evaluation. So yeah, if you make it
[00:24:15]
a lambda, then you have to pass an argument for it to evaluate that. Whereas if you don't
[00:24:21]
call that lambda, it's not going to evaluate the body of that lambda until you pass in
[00:24:26]
the unit to that. Yeah, which makes everything much, much faster. Yeah, and it allows the
[00:24:33]
test framework to give you feedback as it's running things rather than just blocking on
[00:24:39]
evaluating everything at once and then Yeah, giving you feedback. And I think another thing,
[00:24:45]
I guess there could have been a different design for this. But that unit argument of
[00:24:51]
that lambda, it becomes a value if you're doing fuzz testing. Yeah. Shall we get into
[00:24:57]
fuzz testing now then? We may as well. Let's do it. So what is fuzz testing Dillon? Well,
[00:25:05]
fuzz testing is also known as property testing or property-based testing. And I think actually
[00:25:11]
property based testing is a good term for it too, because it kind of gives you this
[00:25:17]
sense that you're testing a general class of data. So you you're making assertions about
[00:25:23]
properties of that class of data, rather than in the traditional unit testing style, making
[00:25:30]
assertions about one specific case, one specific function call, and the output of that. So
[00:25:36]
what fuzz testing does is it uses a random generator with a random seed, you can build
[00:25:42]
up either simple or complex data types. You could do Fuzz.int to have a
[00:25:47]
fuzzer that gives you an int, you can build up a fuzzer very much like you build up a
[00:25:54]
decoder. And get a complex data type, you could build your own custom data type. And
[00:26:01]
then you know, you can get a list of those fuzzed values and compose them together much
[00:26:05]
like a decoder. And then you you make assertions about those randomly generated values. And
[00:26:11]
if you want to reproduce a particular failure, then you can copy the random seed and pass
[00:26:19]
that in as a flag when you run elm-test on the command line. Yeah. And because when elm-test
[00:26:24]
fails, you get a big error message saying, hey, if you want to reproduce it exactly like
[00:26:30]
the way I just ran it, run this command and it contains a seed flag and the list of files.
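A basic fuzz test looks something like this; the list-reversal property here is just an example:

```elm
module ReverseFuzzTest exposing (reverseTwiceIsIdentity)

import Expect
import Fuzz
import Test exposing (Test, fuzz)


reverseTwiceIsIdentity : Test
reverseTwiceIsIdentity =
    fuzz (Fuzz.list Fuzz.int) "reversing a list twice gives back the original list" <|
        \list ->
            list
                |> List.reverse
                |> List.reverse
                |> Expect.equal list
```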
[00:26:37]
Exactly. And and the reason that it's that it's running it with a random seed. The point
[00:26:44]
is, like conceptually, you're not running the test against one value, you're running
[00:26:49]
it against an infinite sized set. And you're using a random sample of that. But every time
[00:26:56]
your test runs in your CI or your local environment, it's running on a different random sample.
[00:27:01]
So as you run the tests more, you approach running the tests on the infinite sample of
[00:27:08]
everything and asserting that property about that whole data set. Yes. I'm not sure if
[00:27:14]
we actually said it, but the idea is that you write your test, your first test, and
[00:27:21]
then it will run a lot of times, usually 100 times. Yeah, I think that's the default with
[00:27:27]
different inputs. Yes, exactly. So yeah, whenever you run your test, you get 100 or more
[00:27:36]
if you want, tests generated with values that make sense or that don't make sense. But
[00:27:42]
your setup will be run with a lot of different inputs. So if something goes wrong, you will
[00:27:48]
know which values it went wrong for. And you will generally have coverage for a lot more
[00:27:55]
cases than you would have thought of by yourself. Yes. Yeah, because you're basically testing
[00:28:03]
an infinite sized set because every time you run it, it runs it with a different sample.
[00:28:09]
Yeah. Another cool feature of fuzz testing is that it will shrink down the result set
[00:28:17]
to give you the simplest failing test case. So if you're using a string fuzzer, for example,
[00:28:25]
and your string fuzzer... so, like, a common example is a palindrome, right?
[00:28:32]
Or just any sort of reversible operation. You know, if I encode this to a JSON value
[00:28:38]
and then decode that JSON value, I should have the same thing I started with or if I
[00:28:44]
take a string and check if it's a palindrome, then the reverse
[00:28:51]
of that string should also be a palindrome. For example, that's like kind of property
[00:28:56]
that we're talking about when talking about property based testing. It's a behavior property
[00:29:01]
of the whole system. Exactly. So if that assertion were to fail on empty string,
[00:29:08]
maybe the you know, maybe that property also fails on a 200 character long string. But
[00:29:18]
the failure it's going to give you is the empty string. So that's called shrinking that
[00:29:21]
it reduces down the failures to find the the simplest failure it can produce. Yeah, it's
[00:29:28]
kind of like a very magical concept. It's kind of cool that that feature is just built
[00:29:33]
into it.
[00:29:34]
Yeah, the way I think it works is, I think it's going to try that string with 200 characters
[00:29:39]
first, or we'll probably try the empty string first. But at some point, it will try the
[00:29:44]
200 character long string. And if it finds that it fails, it will try to simplify
[00:29:50]
it, that's the shrinking part, by generating a few simpler cases than the 200-character one. So maybe 190
[00:29:59]
characters or 199. So it'll generate a few of those, and it'll run the test on each of them.
[00:30:07]
And if one of those fails, then it will try over and over and over again with those values,
[00:30:14]
until it finds the simplest thing, which could be an empty string or which could be something
[00:30:20]
else depends what your problem is.
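That encode-then-decode idea is the classic roundtrip property. Here's a sketch of what it might look like for a small, hypothetical User record with its own encoder and decoder:

```elm
module UserRoundtripTest exposing (encodeDecodeRoundtrip)

import Expect
import Fuzz exposing (Fuzzer)
import Json.Decode as Decode
import Json.Encode as Encode
import Test exposing (Test, fuzz)


type alias User =
    { name : String, age : Int }


encode : User -> Encode.Value
encode user =
    Encode.object
        [ ( "name", Encode.string user.name )
        , ( "age", Encode.int user.age )
        ]


decoder : Decode.Decoder User
decoder =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)


userFuzzer : Fuzzer User
userFuzzer =
    Fuzz.map2 User Fuzz.string Fuzz.int


encodeDecodeRoundtrip : Test
encodeDecodeRoundtrip =
    fuzz userFuzzer "decoding an encoded user gives back the original" <|
        \user ->
            encode user
                |> Decode.decodeValue decoder
                |> Expect.equal (Ok user)
```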
[00:30:22]
Right. Martin Janiczek has been working on, I guess it's pronounced Minithesis. I don't
[00:30:30]
know, because it looks like mini thesis, but it's actually like a mini version of something
[00:30:36]
called Hypothesis. And Hypothesis is like another approach to this idea of like property
[00:30:46]
based shrinking. But it allows you to do property based testing in a way where you can do andThen.
[00:30:53]
The basic short description, from what I understand from the readme of the project,
[00:31:01]
is that there's like the random fuzz values, there's like the actual concrete values,
[00:31:07]
and then there's like the seed that determines those values. And the idea of like hypothesis
[00:31:13]
and Minithesis is...
[00:31:16]
I'm just going to say Minithesis also.
[00:31:18]
I know, I know. It keeps track of the underlying seeds that generated those values. And so
[00:31:25]
it allows you to do andThen. Whereas the Elm Fuzz library doesn't provide andThen.
[00:31:32]
But those are all like fun sort of academic details. But in practice, fuzzing is just
[00:31:36]
a really cool technique that's at your disposal when you write tests with Elm test, it's built
[00:31:41]
in. I've used it like a few months ago, I was testing some logic around a money module
[00:31:50]
in a code base. And it was really nice to do some property based testing because with
[00:31:55]
money you want to make sure that there aren't any bugs. And so it was quite nice to be able
[00:32:01]
to like to say if I take the difference of two sums of money, then it should be zero
[00:32:10]
if I'm taking the difference of the same value of money. Or like it parses negative money
[00:32:16]
correctly. So if there's a negative sign in front, whatever the actual value of the dollars
[00:32:21]
and cents are, when there's a negative value in front, the value will be negative, right?
[00:32:26]
That's like a property you can assert across the whole set of data. And it's just an extra
[00:32:33]
set of confidence. Sometimes it's nice to be able to also reason about like concrete
[00:32:38]
values with a plain unit test, but it's nice to have both at your disposal.
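A sketch of what one of those money properties might look like, using a hypothetical Money type just for illustration:

```elm
module MoneyTest exposing (differenceOfEqualAmountsIsZero)

import Expect
import Fuzz
import Test exposing (Test, fuzz)


-- Hypothetical Money type: an amount in cents.
type Money
    = Cents Int


difference : Money -> Money -> Money
difference (Cents a) (Cents b) =
    Cents (a - b)


differenceOfEqualAmountsIsZero : Test
differenceOfEqualAmountsIsZero =
    fuzz Fuzz.int "the difference of two equal amounts of money is zero" <|
        \cents ->
            difference (Cents cents) (Cents cents)
                |> Expect.equal (Cents 0)
```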
[00:32:44]
The thing I have found very difficult to use with property testing is building the data
[00:32:49]
sets. So usually what I find is, and this is why I don't use it much, actually, pretty
[00:32:57]
much never, is because there's always something where you don't know what to accept as input.
[00:33:05]
For instance, you say if you put a minus sign before the value, they should be negative.
[00:33:12]
But that is not the case if the value is zero. So how do you say I want a float or an integer,
[00:33:21]
but not zero? Because for zero, it will fail. And it will definitely try zero at some point.
[00:33:26]
Right. I know. I agree. That's a really tricky part of it. One way you can do that is you
[00:33:32]
could do like fuzz.int and then you could do fuzz.map int plus one.
[00:33:39]
Then what if it generated minus one? Oh, isn't there like a positive or you could
[00:33:45]
do like absolute value. You have to get clever with it. Yeah. I guess there's not a positive
[00:33:52]
int one, but you could make a positive int fuzzer by taking the absolute value
[00:33:57]
and then adding one. Yeah. And then you can give yourself this building block of a positive
[00:34:03]
int fuzzer. But it is a little bit awkward. I think there's a way to like say that a particular
[00:34:10]
fuzz value to exclude it, but it can also lead to like stack overflow issues with the
[00:34:17]
fuzzing. Yeah. Or you could also ignore the test, say Expect.pass, meaning the test will
[00:34:25]
just pass if the value is not valid. But if your values are rarely valid, then the test
[00:34:33]
is not worth much. Right. Oh, there is an int range. There's Fuzz.intRange. So you can give
[00:34:39]
it a range. There's Fuzz.floatRange. And you can get clever, like sometimes with characters.
[00:34:47]
Like you could create... there's also Fuzz.constant and Fuzz.oneOf. So like you could make like
[00:34:55]
a vowel fuzzer and you can build up all sorts of complex things, but you do have to get
[00:35:00]
a little clever. And I mean, much like Elm, sometimes the way that you want to express
[00:35:05]
it isn't the way it's going to be natural to express it. And so you have to go a different
[00:35:09]
route. But once you do, you can find an elegant solution.
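Here's a small sketch of those fuzzer-building tricks (the fuzzer names are just examples):

```elm
module CustomFuzzers exposing (positiveInt, smallInt, vowel)

import Fuzz exposing (Fuzzer)


-- A positive int fuzzer, built by taking the absolute value and adding one.
positiveInt : Fuzzer Int
positiveInt =
    Fuzz.map (\n -> abs n + 1) Fuzz.int


-- An int in a fixed range, using the built-in range fuzzer.
smallInt : Fuzzer Int
smallInt =
    Fuzz.intRange 1 100


-- A vowel fuzzer, built from constants and oneOf.
vowel : Fuzzer Char
vowel =
    Fuzz.oneOf (List.map Fuzz.constant [ 'a', 'e', 'i', 'o', 'u' ])
```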
[00:35:15]
Yeah. Jeroen, do you ever test views in Elm? No. Do you know the reason why? Tell me. Because he mostly writes elm-review
[00:35:23]
and there's no view in Elm review. Well, except in the name, but that's it. No, I don't. I
[00:35:31]
know there's a module or several modules to test the HTML in elm-test, or elm-explorations/test,
[00:35:39]
but I've never tried them. Have you? I've used them a tiny bit, but I find that
[00:35:47]
if I'm doing unit testing, what I find is that usually what I want to do is I want to
[00:35:53]
create a data type that represents what the view is going to be and make assertions on
[00:35:58]
that data type. And then pass that like sometimes in an object oriented context, people talk
[00:36:04]
about like view objects, you know, that you can have something that represents everything
[00:36:09]
in a formatted way and the view is just templating it. It's just picking off those little pieces
[00:36:14]
of that view object, right? So that's what makes a lot of sense to me is just have all
[00:36:19]
of that data formatted in a particular way. And if I want to make assertions about that
[00:36:24]
format, I don't need to grab it out with CSS selectors in the HTML output. I just make
[00:36:31]
assertions about this data type. And then I pass that data type to be rendered as my
[00:36:36]
view. But all it's doing is picking off these values and presenting them directly. It's
[00:36:41]
not manipulating them or doing any logic on them.
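A rough sketch of that "view object" idea, with a hypothetical record that the view would only read from, so the test asserts on the record rather than on rendered HTML:

```elm
module ProfileViewTest exposing (profileViewTest)

import Expect
import Test exposing (Test, test)


-- Hypothetical pre-formatted data for a profile view.
type alias ProfileView =
    { displayName : String
    , memberSince : String
    }


-- The logic under test: turn raw data into the exact strings the view will show.
profileView : { name : String, joinedYear : Int } -> ProfileView
profileView user =
    { displayName = String.toUpper user.name
    , memberSince = "Member since " ++ String.fromInt user.joinedYear
    }


profileViewTest : Test
profileViewTest =
    test "formats the member-since label" <|
        \() ->
            profileView { name = "jeroen", joinedYear = 2019 }
                |> .memberSince
                |> Expect.equal "Member since 2019"
```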
[00:36:47]
Okay, but then you don't test your view much. Right, exactly. So to me, doing a unit test at the view level never feels like it's
[00:36:56]
adding value and giving me confidence. It just feels like a pain and it feels like it's
[00:37:01]
coupling me to some things that I don't want to be coupled to. And it doesn't feel like
[00:37:06]
it makes me feel more confident that I got things right in my view. If I want to change
[00:37:11]
my view, then I change my view. I want to test the underlying logic. Now, that said,
[00:37:17]
that changes if we're talking about a more high level test rather than a unit level test.
[00:37:22]
If we're talking about an end to end test, then it's very valuable. Then you say, okay,
[00:37:27]
I click on the login button. I type this into this input field. I click on this button.
[00:37:32]
I navigate to this page. I should see this show up. Then in that case, making assertions
[00:37:38]
about what's on the page is great. So that's the distinction I would make. Yeah, if you
[00:37:44]
can write scenarios, which bring a lot more value because you test a lot more things,
[00:37:48]
make sure that things don't break. Whereas with unit tests, you only test one tiny thing
[00:37:56]
and that could work, but not the whole thing. Exactly. Yeah, the sort of spectrum from unit
[00:38:06]
to end to end testing is, you know, the lower level would be unit tests and the higher level
[00:38:13]
would be end to end tests. So on the lower level, you're more tied to the implementation.
[00:38:19]
You're less confident about the pieces fitting end to end. You're less realistic, right?
[00:38:24]
But they're faster to run. They're easier to write. Unit tests are great for exhaustively
[00:38:29]
checking corner cases. Especially with property based testing. Yeah, property based testing
[00:38:35]
is a great way to do that because they're very fast to run and it's easy to write a
[00:38:40]
lot of them. But if you think about exercising corner cases in an end to end scenario, you're
[00:38:45]
like, log into this page as this user. Now you have this combinatoric explosion where
[00:38:52]
you're like, okay, well you log in as this user and this user and you log in as a guest
[00:38:57]
user, you log in as an admin user, you log in as a regular user, and then you test out
[00:39:03]
the corner case for all of those. It doesn't make any sense. You don't have 10,000 end
[00:39:08]
to end tests in your project? Yeah, it gets insane, right? So that's the way I think about
[00:39:17]
it is the role of a unit test is to exhaustively check corner cases, but it's not to give you
[00:39:22]
confidence that the entire system is working together. So there's this notion of like the
[00:39:28]
testing pyramid where like if you split off the pyramid into three sections where the
[00:39:33]
very top triangle is one part of the pyramid, then there's like the middle slice of the
[00:39:39]
pyramid and then there's like the bottom part of the pyramid. The bottom part of the pyramid
[00:39:43]
is the biggest chunk. That's your unit tests. You want a lot of unit tests. The middle chunk
[00:39:49]
is like integration tests which don't exercise the full system, but they piece together some
[00:39:54]
parts of it. And then the top most chunk is like end to end tests or like smoke tests
[00:40:00]
and that sort of thing. And you want to have few of those, but they give you confidence
[00:40:06]
that everything is working out together. So, you know, to be honest, I hear more and more
[00:40:11]
people talking about the value of end to end testing and I find myself writing more and
[00:40:16]
more end to end tests as I go along because ultimately I want to have confidence that
[00:40:22]
I can hit the deploy button and just be confident that things are working. It depends on the
[00:40:27]
use case, but it's good to have like a sense of the role that these different types of
[00:40:31]
tests play and then you kind of have to use your own judgment.
[00:40:34]
Yeah, I think you need all of them. Yeah, you need to balance the speed and how many
[00:40:42]
you want for each one. So that it remains maintainable and fast and so that it doesn't
[00:40:51]
hinder you more than it helps you.
[00:40:54]
Right, exactly. What I've seen a lot of with people being hindered by their test suites
[00:41:00]
and it becoming a burden is like in Ruby on Rails shops, people end up with like all of
[00:41:06]
these sort of integration tests that are mocking things like crazy. So they're doing a mixture
[00:41:12]
of like they're actually executing database requests, you know, they're actually like
[00:41:17]
performing database queries and they're sort of rendering HTML and making assertions about
[00:41:23]
the HTML and like stubbing out HTTP requests, mocking certain like reaching in and stubbing
[00:41:30]
out certain functions. So they return something and then making a mock to assert that this
[00:41:35]
method gets called on this one thing. So you don't know what's real and what's fake. And
[00:41:40]
to me, that kind of test has so little value because for one thing, it's very coupled to
[00:41:46]
the actual implementation. So if I change the implementation, you can end up with either
[00:41:50]
false positives or false negatives. So you can have a failing test when everything as
[00:41:55]
far as the user is concerned is working perfectly. You have a passing test when everything is
[00:42:00]
completely broken for the user. And so all I can say is I'm very happy that Elm doesn't
[00:42:06]
have or need mocking.
[00:42:08]
Yeah. That makes me think of another very cool part of Elm with regard to testing. You
[00:42:15]
know what one of the worst things with tests is? It's when you have tests that depend
[00:42:21]
on each other, when you set up something in one test and the second test depends on the
[00:42:28]
first one to have been run. And that is so awful to debug and to run. But it doesn't
[00:42:42]
work when you run that one test only. Like, what?
[00:42:42]
I'd blocked that out of my memory, but you brought it back. Yeah. I think you don't have
[00:42:47]
that in Elm because everything's immutable.
[00:42:50]
Exactly. And deterministic. There's like some test helper. I don't remember if it's in RSpec
[00:43:01]
or minitest, but like I think in some Ruby thing, there's like a method that you
[00:43:01]
can call and it makes it so the order is deterministic. So the order is like always in the same order
[00:43:08]
rather than randomizing the order of the tests. For exactly the type of thing you're describing,
[00:43:14]
they run the tests in a random order, but there's some method you can call that makes
[00:43:19]
the order fixed. And the method name is like, I'm a horrible person and I don't know, I
[00:43:27]
kill puppies or something like that. It's like some awful name. It's like if you really
[00:43:34]
want to do it, you at least have to admit that you're a terrible person first.
[00:43:38]
I need to find that one.
[00:43:43]
We'll link to it in the show notes just for fun.
[00:43:48]
That's better than those React hidden functions that are like, please don't use me, or something.
[00:43:56]
Oh my goodness. So I was writing a test in Ruby the other week and I actually like shipped
[00:44:05]
some code and there was a method missing error. And I'm like, what? How on earth did this
[00:44:10]
test pass? And then there was this exception in production. And then I was looking into
[00:44:17]
it and someone actually pointed it out. They're like, oh yeah, RSpec monkey patches the global
[00:44:23]
variable context. So there was some undefined thing, context, but RSpec was monkey patching
[00:44:30]
it so it wasn't giving a method missing exception. So that was a fun one. Suffice it to say,
[00:44:37]
consider yourself very lucky to be working in Elm and testing is so much nicer than it
[00:44:42]
is in other languages. So please write tests.
[00:44:45]
Yeah.
[00:44:46]
Jeroen, how do you think about this sort of question of what do you test in a typed functional
[00:44:52]
language like Elm and when do you rely on types or perhaps, you know, certain properties
[00:45:00]
that something like Elm review or tooling can provide you?
[00:45:03]
Oh, usually I try to make impossible states impossible. I stop when I can't find a way
[00:45:12]
to make something impossible or when it becomes very unusable. I try to balance usability
[00:45:19]
and correctness. So I still don't know when I would use Elm review for something like
[00:45:26]
that. It really depends on the case. I think there's a lot of places where you could use
[00:45:30]
Elm review kind of as a test in a way, something that tests the contents of your code base.
[00:45:37]
Yeah. I mean, you can certainly use it to make assertions about literal values.
[00:45:42]
Yeah, exactly. So I made a blog post at one point saying here's a safe way to write regex
[00:45:52]
in an unsafe way, in a way that looks unsafe.
[00:45:55]
Yes.
[00:45:56]
And that kind of writes the test for you, without you having to write it.
[00:46:01]
Right. Because you could, yeah, like let's say you were writing, like there's a similar
[00:46:07]
thing, like maybe: is something a valid username? And so you could write tests that say it returns
[00:46:14]
nothing if I pass empty string and it returns a just username value, like a custom type
[00:46:22]
that proves that it's a valid username if I pass it this other value. So that is something
[00:46:28]
you can test. And that's in a way you're leveraging the type system, but you're letting your test
[00:46:34]
suite give it the stamp of approval that this type does indeed represent a valid check.
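A sketch of what that might look like, with a hypothetical Username module whose tests give the type its stamp of approval:

```elm
module UsernameTest exposing (usernameTests)

import Expect
import Test exposing (Test, describe, test)


-- Hypothetical opaque type: a Username can only be obtained via fromString.
type Username
    = Username String


fromString : String -> Maybe Username
fromString raw =
    if String.isEmpty (String.trim raw) then
        Nothing

    else
        Just (Username raw)


usernameTests : Test
usernameTests =
    describe "Username.fromString"
        [ test "rejects the empty string" <|
            \() -> fromString "" |> Expect.equal Nothing
        , test "accepts a non-empty name" <|
            \() -> fromString "dillon" |> Expect.equal (Just (Username "dillon"))
        ]
```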
[00:46:40]
You could even do fuzz testing on that, right?
[00:46:42]
Yeah.
[00:46:43]
If you're only passing literal values to that, you could also use Elm review to help you
[00:46:47]
with that.
[00:46:48]
Yeah. Because, you know, never in the code base do I use a constant that is the empty string.
[00:46:54]
Mm hmm. I mean, ultimately these are all just verification methods, you know, whether it's
[00:47:00]
static code analysis, which is, you know... essentially Elm review is a static code analysis
[00:47:05]
tool.
[00:47:06]
Definitely.
[00:47:07]
It could be, you know, a compiler, you know, types. It could be a unit test or an end-to-end
[00:47:14]
test. They're all just tools at our disposal to verify our code.
[00:47:17]
Yeah. So what would I use tests for otherwise? It's usually business logic. So things that
[00:47:25]
I can't represent with types. So if I try to have a function that says if the value
[00:47:33]
is X, then do this. And if I have something else, give me something else that is something
[00:47:40]
that will always be valid type-wise. And that's where I would write a unit test to make
[00:47:48]
sure that in the first case you always get X. In the other cases you always get something
[00:47:52]
else.
[00:47:54]
You don't have to test the wiring like you do in JavaScript or maybe in Ruby. I don't
[00:48:00]
know Ruby that much. So there's a lot less tests that you have to write.
[00:48:05]
Right. Exactly. Yeah. There are a lot of tests in Ruby about, I mean, like a lot of, a lot
[00:48:11]
of API's in Ruby. It's like if you pass in a string, then you call it like this. If you
[00:48:17]
pass in a list of strings, then you call it like this. If you pass in a regex, then it's
[00:48:21]
going to run it like this. If you pass in a hash and it has a key called this, then
[00:48:29]
it's going to run it like this. And like, and of course you have to check for nil. And
[00:48:33]
if this thing is nil, then it's going to interpret it as this or as that. And it's so much simpler
[00:48:39]
in Elm just having, having the confidence that those things are going to be wired up
[00:48:43]
correctly both for the caller and you know, your, your test cases are exercising all the
[00:48:50]
different paths. And you also just have the simplicity that you can't overload functions
[00:48:55]
like in other languages. And in some cases maybe it would be convenient, but it really,
[00:49:00]
it's, it's a, it's a very nice quality of Elm that keeps things very easy to think about
[00:49:05]
and test.
[00:49:06]
Yeah. In the testing pyramid, I would actually put the Elm compiler at the, at the base of
[00:49:12]
it. Yes. Like a tree or something.
[00:49:16]
I like this. I like this. Well, but you want more, you want even more.
[00:49:20]
Yeah.
[00:49:21]
Right. More than unit tests.
[00:49:22]
Yeah. I would also put Elm review between any Elm compiler and unit tests maybe. So
[00:49:28]
Elm compiler, Elm review, unit tests, integration tests, end to end tests. Maybe Elm review
[00:49:34]
could be after, it could be somewhere else, but.
[00:49:36]
Yeah. It might depend on the context too. Cause there are some things that Elm review
[00:49:41]
can make really great assertions about with static analysis. And there are some things
[00:49:46]
that it can't do good static analysis on. If it's looking at literal
[00:49:50]
values, then it can do a lot. If it's looking at user input values, then it can't necessarily
[00:49:55]
make as many guarantees about that.
[00:49:57]
Yeah. And that's where you would use types or unit tests, probably unit tests in this
[00:50:02]
case.
[00:50:03]
Right.
[00:50:04]
Or end to end tests.
[00:50:07]
I like this extended, extended pyramid. Yeah.
[00:50:11]
Use whatever you can to, to create confidence in your system.
[00:50:16]
Exactly.
[00:50:17]
So we get all these tools, like even Elm GraphQL gives you a lot of guarantees. I don't know
[00:50:23]
where you would put it in the pyramid, but it gives you some kind of confidence.
[00:50:27]
I mean, I would put that as part of the Elm compiler, you know, it's just extending the
[00:50:34]
number of guarantees that the Elm compiler can make for you.
[00:50:37]
Yeah, exactly. So this question came up of how or whether you should test internals of
[00:50:44]
your Elm modules. Do you have thoughts on that?
[00:50:47]
Yeah. So usually I try not to test implementation details. But I find that what you test in
[00:50:56]
unit tests are kind of like implementation details of end to end tests.
[00:51:02]
That's right. They totally are.
[00:51:04]
Everything is an implementation detail, an internal of something else.
[00:51:09]
Yep.
[00:51:11]
So I think it's best to try to test at the highest possible level, even in unit tests,
[00:51:18]
when you can. But if something gets very complicated or impossible to test, because you think something
[00:51:24]
might get into some kind of state, but you don't know how, then testing the implementation
[00:51:32]
could be useful.
[00:51:33]
Yeah.
[00:51:34]
It's not something I do often, though.
[00:51:37]
I totally agree with you that unit tests are testing the implementation. When you look
[00:51:43]
at it, it's, I mean, from one perspective, any unit test is the implementation, as you
[00:51:49]
said, of the end to end story. So you are testing implementation, and you are actually
[00:51:54]
coupling yourself to specific implementation by writing a unit test. So that's why some
[00:51:59]
people really double down on end to end tests more, because they say, well, now I can change
[00:52:06]
the implementation, and my tests keep passing.
[00:52:09]
So I think one question to keep in mind is, how much extra setup am I having to do? How
[00:52:14]
much noise is there in the test where I can't tell whether the thing I care about in this
[00:52:20]
test is being exercised and is working or not?
[00:52:24]
So if you have to do a bunch of setup, like we mentioned at the beginning, if you have
[00:52:29]
to log in as this kind of user, that kind of user, that kind of user, and then you test
[00:52:32]
100 edge cases on this one part of the page, but you have to navigate in 10 pages deep
[00:52:39]
to test that. Maybe there's a unit test where you can really thoroughly exercise the edge
[00:52:45]
cases of if it's a guest user or an admin user or whatever, what's the visibility? What
[00:52:54]
are the visibility permissions?
[00:52:55]
And then you have one end to end test or a couple of end to end tests that exercise that
[00:53:00]
and make sure you indeed can only see pages that you have permission to view. You can't
[00:53:05]
see the admin panel if you're a guest or a regular user or whatever, but you want to
[00:53:09]
thoroughly exercise all of those permutations in unit tests. That's what unit tests are
[00:53:13]
great for. And if the implementation changes somehow, you can throw those tests away. It's
[00:53:19]
not a big deal, but it's nice to have some confidence that the user is going to see things
[00:53:25]
working correctly because it's actually exercising that code that your unit test is testing.
[00:53:30]
But in terms of should you test internals of Elm modules, I very much agree that I think
[00:53:36]
it's a feature that you can't write Elm unit tests of internal private things. I don't
[00:53:43]
think that approach makes sense to me. I think of it as if you find yourself wanting to test
[00:53:48]
the internals of one module, then what that's saying is it belongs as its own responsibility.
[00:53:54]
I think about code often in terms of responsibility. Should this really be the job of the admin
[00:54:00]
privileges, like the admin module to know whether I have access to this or not? Or should
[00:54:07]
it be its own module that tells what you have access to? Maybe that belongs as its own responsibility.
[00:54:14]
And maybe the fact that I'm trying to test this function like admin has access to that's
[00:54:21]
like a private function in this module means that it wants to be its own responsibility
[00:54:26]
and it wants to be tested separately and in a separate module. I think a lot of test driven
[00:54:31]
development is about, it's not a magic bullet. It doesn't fix your design. It doesn't make
[00:54:37]
your code nice or make your code work.
[00:54:39]
Unfortunately.
[00:54:40]
Unfortunately, but it does expose issues and then you have to pay attention to the signals
[00:54:46]
it's giving you. So if something is uncomfortable, that might be a design smell. That might mean,
[00:54:52]
Hey, this code is hard to test. Well, what's that telling me about my design? We talked
[00:54:57]
about end to end testing, but we didn't really talk about how you would do that in Elm. I
[00:55:02]
think sometimes when people think about testing Elm, like we're so used to living within the
[00:55:08]
Elm ecosystem and we don't want to go outside of it, but I think it's worth just stating
[00:55:13]
that it's okay to use non Elm tools to test Elm code. For example, you can use Cypress
[00:55:21]
to exercise, you know, to pull up a browser and start clicking around and you know, you
[00:55:27]
might have to write some JavaScript and make some assertions there, but it's okay. Like
[00:55:33]
use the best tool for the job.
[00:55:34]
Yeah. I love Cypress. I find it very good. I'm very sorry that I've never had the chance
[00:55:40]
to use it at work. I did have the chance to try it out, set it up, and then it got forgotten.
[00:55:47]
But that's what often happens with end to end tests. In my experience, they get left
[00:55:55]
out.
[00:55:56]
Oh, but they can be so helpful. Yeah. And then there's also elm-program-test. So I
[00:56:04]
would characterize elm-program-test as more of an integration test than an end to end
[00:56:08]
test because it's not, it's not running end to end, right? It's not running in a browser.
[00:56:15]
Yeah. Yeah, exactly. It's, it's simulating putting pieces together rather than actually
[00:56:21]
doing that.
[00:56:22]
Yeah. You're writing scenarios like I have this application or this element. When it
[00:56:29]
gets displayed, the user clicks on something and then this action gets triggered or this
[00:56:35]
command gets triggered. This message. Why? What did I say? All of those other things.
[00:56:42]
And then you cycle and try other things and do expectations all the way around.
[00:56:49]
Right. And then it simulates HTTP responses coming back with certain statuses and bodies
[00:56:55]
and stuff.
[00:56:56]
But I think that could be the topic of another episode.
[00:57:00]
Definitely could be the topic of another episode.
[00:57:01]
Let us know if you want that.
[00:57:04]
I think that could be a fun one. I've been getting a lot of value out of Elm program
[00:57:07]
test. I think it's a really cool tool. Definitely, definitely check it out. And, you know, one,
[00:57:13]
one point to mention on that topic is testing effects. So like effects are opaque in Elm.
[00:57:21]
You can't inspect a command, right? You can't like look at the command that's being returned
[00:57:26]
and make assertions about it. So one of the things that Elm program test has you do, which
[00:57:31]
is kind of a pain, but you get a lot of value and it's not the worst design idea. It's a
[00:57:37]
reasonable design. You have to create a custom type that's a sort of intermediary value that
[00:57:43]
represents all the possible effects that you can have in your domain. And then you have
[00:57:48]
to write a function that turns that effect custom type that you define into an Elm command,
[00:57:55]
which is an opaque thing. So you can make assertions about the commands that you're
[00:58:00]
receiving. So yeah, I mean, basically Elm program test is it's actually you, you kind
[00:58:05]
of give it your init and update and view, and it ties those pieces together and simulates
[00:58:10]
some of the commands and stuff. So that's quite a cool technique.
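The effect pattern described here looks roughly like the following sketch; it shows the general shape rather than the exact elm-program-test API, and the effect names are just examples:

```elm
module Effect exposing (Effect(..), Msg(..), perform)

import Http


type Msg
    = GotUser (Result Http.Error String)


-- A custom type describing every effect the app can perform,
-- which tests can pattern match on and inspect.
type Effect
    = NoEffect
    | FetchUser String


-- The only place the opaque Cmd values are created.
perform : Effect -> Cmd Msg
perform effect =
    case effect of
        NoEffect ->
            Cmd.none

        FetchUser userId ->
            Http.get
                { url = "/api/users/" ++ userId
                , expect = Http.expectString GotUser
                }


-- update would return an Effect instead of a Cmd, so tests can assert on it;
-- the real program maps it through perform:
-- update : Msg -> Model -> ( Model, Effect )
```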
[00:58:14]
Martin Janiczek had an Elm Europe talk that he gave about this tool he built called ArchitectureTest, and
[00:58:21]
Richard Feldman built like a similar tool for basically testing your update function.
[00:58:25]
So that's another kind of interesting concept. I haven't played around with those particular
[00:58:29]
tools too much, but there's so much you can play around with testing in Elm. It's really
[00:58:34]
it's purity makes it really fun to play around with testing things.
[00:58:37]
Yeah. I think the biggest hurdle is getting started. We like to write code and we like
[00:58:46]
to refactor, but writing tests is a bit of a pain. But once you get into the hang of
[00:58:53]
it, and once you have that coverage, then it feels very good.
[00:58:59]
Exactly. Yeah. Yeah. And it is. Yeah. I mean, I really think that it's a great way to slice
[00:59:06]
up a large task into small chunks because when you're looking at implementing something,
[00:59:11]
you're like, where do I even begin? Like, what do I even want it to look like when I call
[00:59:15]
the function? What do I even want the API to look like? Well, you know, with the test,
[00:59:20]
that's the first thing you do, the red step: you write what might it look like. And
[00:59:24]
then before you even run the test, you're like, is this really what I want it to look
[00:59:28]
like? And you can think about that without worrying about the implementation a little
[00:59:31]
bit. So it's a habit. And I think I think the way you get started, as with any habit
[00:59:36]
is try it out, and then try it out some more and then try it out some more experiment.
[00:59:41]
Soon, you'll you'll get Stockholm syndrome and you'll learn to love it.
[00:59:45]
All right. Well, I think that gives people enough to get started. And maybe we'll we'll
[00:59:51]
circle back another time with some some deeper dives on Elm program test, and maybe some
[00:59:57]
other testing techniques, some refactoring techniques. But hopefully that gives people
[01:00:02]
a good place to start. Yeah, I think it does. I hope it does. Let us know if it does. Let
[01:00:08]
us know if it meets your expectations.
[01:00:10]
Well, it didn't. Until next time. Until next time. Bye bye.