
Types vs. Tests

We discuss the role of types versus tests, and whether they complement each other or make the other obsolete.
November 7, 2022
#69

Transcript

[00:00:00]
Hello, Jeroen.
[00:00:02]
Hello, Dillon.
[00:00:03]
So are you the type of person that likes tests?
[00:00:07]
Because we're going to have a face off between those two.
[00:00:10]
I might be strongly typed.
[00:00:12]
I might be...
[00:00:14]
Why do we have to choose?
[00:00:15]
Why do we have to fight over this?
[00:00:17]
Can't we just all agree that they're both good?
[00:00:21]
We can.
[00:00:22]
And let's do that.
[00:00:23]
So maybe that's the episode already.
[00:00:26]
Episode done.
[00:00:27]
Yeah.
[00:00:28]
Types versus tests.
[00:00:29]
It's not a competition.
[00:00:30]
Just do both.
[00:00:31]
I have that little girl's face in my head now.
[00:00:38]
And like, yep, yep.
[00:00:40]
That's our episode.
[00:00:41]
That's the episode.
[00:00:43]
Yeah, I mean, I actually do want to like investigate that with you a little bit.
[00:00:48]
Like, why is it?
[00:00:49]
Well, so first of all, we're talking about types versus tests today.
[00:00:53]
If you hadn't gathered that.
[00:00:55]
Why does it have to be a war between these two things?
[00:00:59]
Like, why is there any notion that one would take away from the other or that one would supplant the other?
[00:01:06]
Like, I find that very interesting.
[00:01:08]
Yeah, well, my first instinct to this question is like, in a lot of languages, you only have one, right?
[00:01:16]
You only have tests.
[00:01:17]
For all the languages that are dynamically typed, it doesn't feel like you have types.
[00:01:22]
Or at least they don't help you write your code in the way that we think about when we talk about types versus tests.
[00:01:29]
So therefore, you have to use tests, right?
[00:01:33]
If you want to make sure that your code is correct.
[00:01:37]
At least I hope they do.
[00:01:38]
God, I hope they do.
[00:01:43]
So if you don't have types, then you're going to go for tests.
[00:01:48]
But at some point, you're going to ask yourself the question, should we add types?
[00:01:52]
Like, should we switch from JavaScript to TypeScript?
[00:01:55]
Should we use mypy for Python, etc.?
[00:01:59]
Or like, oh, we're using Java or some other typed language, but we don't make very good use of our types.
[00:02:09]
Or at least we don't use them to the effect that other languages do like Haskell and obviously Elm.
[00:02:15]
And then I think that's the part where you ask yourself the question, well, types or tests?
[00:02:21]
Right. I think you're right.
[00:02:23]
I think there can be a very strong testing culture in a lot of these communities.
[00:02:28]
I think the Ruby community created a very strong testing culture, which was definitely a big influence on me,
[00:02:35]
the testing culture in the Ruby on Rails community.
[00:02:38]
Which I think is great and is a very strong part of that community.
[00:02:44]
Interestingly enough, DHH, the creator of Rails, has had some very controversial talks now where he says TDD is dead
[00:02:52]
and that it deteriorates your code quality and makes your code worse.
[00:02:56]
I haven't watched those talks. I have heard of that, but I haven't seen the points that were raised.
[00:03:03]
There are some interesting talks where, you know, Martin Fowler and some of these, you know,
[00:03:08]
pioneers of testing practices sit down and do some video calls with DHH and try to explain it to him.
[00:03:15]
Essentially, I believe that the way that DHH is testing things makes things very painful.
[00:03:22]
And I think that's a huge part of it is that not all tests are created equal.
[00:03:26]
I think that not all types are created equal.
[00:03:29]
And I think that that's part of the root of it is so when what you're talking about
[00:03:34]
where maybe people are sprinkling types in, maybe they're adding something that helps add a little bit of types into an untyped language.
[00:03:45]
You know, maybe adding some TypeScript or something through some comments or something like Elixir that has types you can sprinkle in,
[00:03:56]
but it's not really a core part of the language.
[00:03:58]
Yeah, or simply avoiding primitive obsession by reimplementing or wrapping some primitives into new types.
[00:04:06]
That's also a good use case, I think.
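A minimal sketch of that wrapping idea, in Elm terms and with invented names (in a dynamic language the same move would be a small runtime wrapper rather than a compile-time type, which is what comes up next):

    -- Wrapping a bare Int so it cannot be confused with other Ints.
    type UserId
        = UserId Int

    userIdToString : UserId -> String
    userIdToString (UserId id) =
        String.fromInt id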
[00:04:09]
Oh, that's interesting because that's yeah, that's types without a compilation step.
[00:04:14]
And you're right. That's another dimension to it.
[00:04:16]
And I think that there can be a little bit of a sense that when you're using types in this way, when you sprinkle them in,
[00:04:24]
then it's just like maybe it feels like just another test that you're just like, hey, this is another thing that sort of checks something.
[00:04:32]
And it sort of gives us a little boost in confidence.
[00:04:36]
But it fundamentally feels different, to me, when you have a sound type system versus a sprinkling of types that sort of help you out a little bit.
[00:04:48]
And that sprinkling feels more like a test, where you can test: if I pass in a string here, it behaves this way.
[00:04:55]
If I pass in undefined here, then it returns undefined or raises an invalid type exception or whatever.
[00:05:02]
And you say, well, my type system, I can do that through a type. I can do that through a test.
[00:05:07]
And if I've been following a really strong test driven development practice where I am actually doing red green refactor,
[00:05:16]
then I would have built up all these test cases. But the thing is, like, did you test every possible input type?
[00:05:23]
Because tests only cover what you write, and types work almost in the opposite way.
[00:05:31]
Like if you define a union type, then a union type only has something when you add something to it,
[00:05:38]
whereas a test only gives you confidence by working the other way around.
[00:05:44]
Like with tests, you only constrain the behavior by adding more of them.
[00:05:49]
Whereas with a union type, you only allow a possibility when you add something to it.
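A tiny sketch of that, with invented names: the union type only gains a possibility when a variant is added, and the compiler then points at every case expression that has to handle it.

    type PaymentMethod
        = Card
        | Cash

    -- Adding "| Invoice" above is the only way to allow that possibility,
    -- and a case expression like this one would then fail to compile
    -- until it handles the new variant.
    label : PaymentMethod -> String
    label method =
        case method of
            Card ->
                "Card"

            Cash ->
                "Cash"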
[00:05:54]
So you mentioned TDD, right? To go back to why is this question of types versus tests even there in the first place?
[00:06:03]
There is such a thing as TDD, right? There is test driven development and there's a lot of articles, a lot of videos, a lot of books on it.
[00:06:12]
I think mostly pushed by extreme programming, I think, and agile coaches.
[00:06:20]
Kent Beck sort of created the process, and XP, extreme programming, was Kent Beck's early agile process where he introduced those techniques.
[00:06:29]
Yeah. So TDD is big. It's really big.
[00:06:33]
But I think it mostly originated in C style languages like C, Java, C#.
[00:06:40]
On the other side, you've got the academic languages like Haskell, where they are big on, like, hey, see all the things that we can do, or at least prevent, using types.
[00:06:52]
And don't quote me on any of this. This is my understanding, at least.
[00:06:56]
And there's one side where they go, they make code work through tests.
[00:07:02]
And there's another side where they say, hey, look at all these types that are so useful.
[00:07:07]
And I think these two have not been melded together quite well.
[00:07:14]
There's also just... there's no word for it. You mentioned type driven development in one of your talks.
[00:07:21]
But like there's nothing as catchy as TDD.
[00:07:25]
And there's no practice that I know of that has that name of like, hey, let's improve our code by changing our types.
[00:07:32]
We call it making impossible states impossible, make illegal states unrepresentable, but there are no books with that name.
[00:07:41]
Right. Yeah. I also wonder if the workflows are at odds, where some people like to write a type and model out their types.
[00:07:53]
We've talked about this process. We both enjoy this workflow of sketching out a bunch of custom types to
[00:08:01]
wrap our heads around a concept. And that's in a way that sort of an upfront design process in a sense.
[00:08:10]
It can be. Now, with the caveat, it is a sketch. So you can throw that sketch away.
[00:08:18]
And then test driven development is more of a pull model. You don't do it up front.
[00:08:23]
You pull things in as needed. You use the simplest thing that could possibly work.
[00:08:28]
I tend to think, you know, I've talked in the past about this concept of spikes.
[00:08:34]
And to me, that's really essential in TDD, being able to do spikes. Can you remind me what you mean by spikes exactly?
[00:08:43]
Yes. A spike is essentially writing code that you can throw away for the purpose of learning.
[00:08:49]
So the deliverable of a spike is learning, not production code.
[00:08:54]
And that's important because in TDD, you always write a failing test first and then kind of pull code into existence as needed to satisfy that failing test.
[00:09:07]
But sometimes you want to just explore something. And so a spike gives you a space to explore where you sort of put that discipline to the side for a second.
[00:09:16]
But you're not writing production code. You're writing throwaway code.
[00:09:19]
And so it allows you to just explore without being constrained by that workflow where you're doing things just in time.
[00:09:26]
Yeah. You're thinking of spikes that last like a few minutes, right? Not spikes that last half a day or two days.
[00:09:33]
Right. Absolutely. So to me, sketching out types can be a really nice type of spike.
[00:09:40]
And I do really like that workflow where, you know, I mean, I imagine you do too with not having Elm Review saying this is an unused union type constructor for a long time where it's just sitting there collecting dust because you think you're going to use it at some point in the future.
[00:09:58]
So it is a little bit difficult to piece these two workflows together in a sense because one is just in time and one is a more upfront design process.
[00:10:10]
Yeah. So in TDD, there's this red green refactor cycle.
[00:10:16]
But I don't know of such a popular cycle or routine that you have to do for types.
[00:10:23]
It's just like, well, you write the type and then do something like follow the compiler errors, change the type again if needed, follow the compiler errors again, et cetera, et cetera.
[00:10:37]
There's a lot more process around TDD that has been written down in things that are easy to teach to beginners, but not for type programming.
[00:10:49]
Yeah. Yeah. See, I don't even have a term for this.
[00:10:53]
Right. Right. Exactly. Yeah. It's kind of like domain modeling with types.
[00:10:57]
But yeah, like what are the steps? At what point in your process do you do it? At what point in your process do you update the types? And how do you decide how to update your types?
[00:11:06]
So let's imagine you're doing both. You're doing TDD.
[00:11:09]
So TDD, there's red, you write a failing test, green, you make the code work, and then you refactor. Is refactor the phase where you change your types? Or is it in red? Is it in green?
[00:11:23]
It's definitely not in red. I think it's in refactor.
[00:11:26]
Well, keep in mind in TDD, I think this is a subtle point that's very important when doing TDD in a typed context.
[00:11:35]
A compilation error is red, is a red step in TDD. And that's very important.
[00:11:41]
So you would consider changing the type to something that would... Actually, you would just consider changing the types to be another cycle, just like tests, but it's not a test related cycle. It's just for types.
[00:11:55]
Potentially. So like, for example, in an untyped context, if you're writing a unit test in Ruby, you might say like, you expect calling this method to return this value.
[00:12:11]
And then you get a failing test because what does it say? It says that method doesn't exist. Right?
[00:12:17]
And what kind of thing is that? It's a runtime error, which the test runner says, oh, this test failed. Runtime error.
[00:12:24]
And so you fix that. Whereas in Elm, suddenly you have a compilation error. Things aren't working.
[00:12:30]
And you... But it still is red, even though it couldn't actually run your test.
[00:12:36]
So it feels like, well, how is it a red test? It's not even running the tests yet.
[00:12:40]
But the point is that you're pulling along just as much as you need. And so you're ensuring that you're exercising things through tests because they're coming into existence by that process where the test tells you, I need this thing in order to continue or in order to pass.
[00:13:00]
And you're giving it the simplest thing you could to satisfy that, which means you're getting test coverage of everything. Right?
[00:13:05]
That's the elegance of that process. And it allows you to split work where you can work on one small slice at a time.
[00:13:13]
And that's why the process works that way. So if you call a function in Elm and it doesn't exist, it's a compiler error.
[00:13:19]
Now you have to write that function and you can give it a Debug.todo or you can give it a fake value or whatever.
[00:13:26]
So now there are certain things that you have to do. Debug.todo is a little bit different than returning nil or null.
[00:13:38]
And you have to work within more constraints.
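A hedged sketch of that red step in Elm, with made-up names: the failing test pulls the function into existence, and Debug.todo lets everything compile while still crashing if the stub is ever run.

    import Expect
    import Test exposing (Test, test)

    -- The simplest thing that satisfies the compiler for now.
    greet : String -> String
    greet name =
        Debug.todo "implement greet"

    -- The failing test that pulled greet into existence.
    greetTest : Test
    greetTest =
        test "greets by name" <|
            \() ->
                greet "Jeroen"
                    |> Expect.equal "Hello, Jeroen!"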
[00:13:42]
It feels a little more formal. And working in a typed context, you know, working in Java versus Elm, feels very different than working in TypeScript.
[00:13:53]
But if you're if you're working in Elm, there are certain things that you have to do upfront to just get the compiler happy.
[00:14:01]
And I've been really interested to hear some of the things that Richard has been exploring with Roc here, where he's playing around with the idea, for example, of allowing the compiler to execute code until it gets to a point where there's a compilation issue.
[00:14:21]
Basically compiling everything it can in a Roc program.
[00:14:26]
And then when it hits a dead end where it says there's a compiler error here, essentially it puts a Debug.todo in there for you.
[00:14:34]
So if it executes that in debug mode, then it raises an exception.
[00:14:39]
And that is very interesting for testing workflow, because it lends itself a little bit more to this just in time process where you don't have to do everything upfront.
[00:14:48]
So I think there's a little bit of a push and pull with those two different mindsets.
[00:14:53]
There's also a concept that I've been kind of keenly following in Roc, which is, you know, Roc's approach to tags. I forget the term, but these global tags that you can reference without explicitly defining them.
[00:15:10]
And it infers the different tags that are possible given your usage of these named tags without you having to define upfront.
[00:15:21]
Here's a custom type. Here are all the variants.
[00:15:23]
Yeah. If something returns A in one branch, or B in another branch, then the result will be either A or B.
[00:15:31]
Right. Of none either.
[00:15:34]
Yes. And if you think about that in the context of TDD, it's kind of interesting because now you can write a failing test that says, I expect this to return this variant without defining all of the possibilities for that variant upfront.
[00:15:51]
So it enables a different type of workflow.
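For contrast, here is roughly what the Elm side of that looks like, as a hedged sketch with invented names (this is Elm, not Roc syntax): every variant has to be declared up front before any branch can return it, whereas the Roc behavior described above is inferred from usage.

    -- In Elm the full set of variants must exist before use.
    type LoginResult
        = LoggedIn
        | WrongPassword

    login : String -> LoginResult
    login password =
        if password == "hunter2" then
            LoggedIn

        else
            WrongPassword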
[00:15:54]
So, I mean, is that good or bad? I don't know.
[00:15:57]
Definitely, it's very subtle. And maybe, as you say, we do need to more concretely define a process just like TDD has this very clear, easy to teach set of steps for red green refactor.
[00:16:11]
Maybe we need something similar for like TDD in a typed context.
[00:16:16]
So, yeah, as to why people tend to draw these stark dividing lines between tests and TDD, or, sorry, tests and types.
[00:16:28]
I think somehow like these different mindsets are looking at the problem of gaining confidence in your system in a different way where one is saying, you know, when we're thinking about types, we're thinking about guarantees and proofs.
[00:16:46]
And when we're thinking about tests, we're thinking of specific scenarios, right?
[00:16:53]
Exactly. We're exercising some specific scenarios and gaining confidence through, I don't know, a type of automated check.
[00:17:01]
It's less formal. It feels like a less formal process than narrowing down these constraints with proofs.
[00:17:08]
And somehow I feel like people get into one mindset or the other.
[00:17:11]
But to me, it's like, why wouldn't these two different things play two different roles?
[00:17:17]
Like if I'm trying to say, you know, I don't know, I have a game that has a die that can be a number one through six.
[00:17:28]
Then isn't it nice to model parts of the game through that?
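A hedged Elm sketch of that die, with invented names: the type rules out a seventh face entirely, and tests are left to cover how the game actually behaves.

    type DieFace
        = One
        | Two
        | Three
        | Four
        | Five
        | Six

    -- Rolling, scoring, and the rest of the game's behavior are what
    -- tests would exercise; the type just makes invalid faces unrepresentable.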
[00:17:33]
And, you know, yes, you want to capture the behavior of the game through tests because the way that things interact and the behaviors are very subtle.
[00:17:42]
And you can't model that all through your types.
[00:17:46]
And I know we want to as people who love types.
[00:17:49]
We want to make all impossible states impossible through the type system, but you can't.
[00:17:53]
And even if you could, isn't that what a theorem prover is about?
[00:17:58]
Like languages like Coq and Agda, I think, and Idris.
[00:18:03]
Yeah. You write your code and specifically your types in such a way that it proves that your code does exactly what it is meant to.
[00:18:12]
That is, it follows some kind of specification, through types mostly, I think, and some implementation, but not necessarily through tests.
[00:18:21]
That said, I don't know how to work with those. So, right.
[00:18:24]
I've definitely seen walkthroughs of this type of thing where the types are the proof, like.
[00:18:31]
And the fact that you have an executable program that fulfills that is a proof, which is pretty fascinating and incredible stuff.
[00:18:40]
Yeah. In practice, you know, you want simple, easy to reason about types.
[00:18:46]
And so you want the best tool for the job. Those types are very hard to read.
[00:18:51]
So certainly, I think it does depend on what you're trying to model.
[00:18:59]
And perhaps if you're doing something that's going to be on a Mars rover or something like that, you know, yeah.
[00:19:07]
Perhaps there would be an appropriate place to use one of these proof tools.
[00:19:14]
And, you know, that's great. But oftentimes, like if you're writing a game, then what's the cost benefit of that?
[00:19:24]
And I think a lot of the time, the cost of writing a type to narrow down a few basic constraints about the primitive things you're dealing with.
[00:19:34]
Not primitive as in primitive language values, but the primitives of your domain to sort of clearly define the constraints of those core pieces.
[00:19:43]
And then having some tests that talk about how those pieces interact works very well for a cost benefit.
[00:19:50]
You know, I think that's what it comes down to at the end of the day: if you can write very simple types, then you can write very simple tests.
[00:20:00]
You don't have to write tests that exercise every possible type you could throw at it and that it fulfills these contracts.
[00:20:08]
And so they do their job very well when you just use types in a simple way, getting the sort of 80/20, like, you know, what is it?
[00:20:20]
I think it's also called the Pareto principle, the 80/20 rule, which is...
[00:20:25]
Yeah, yeah, you do the 20 percent of the work that gives you 80 percent of the benefit because it's diminishing returns at a certain point.
[00:20:33]
And I think that's really the case with types. And I think if you're saying, oh, we don't need types, we have tests.
[00:20:41]
I think you also sort of get to an 80/20 where you're getting diminishing returns for your tests when types would be the best tool for that.
[00:20:50]
So in languages like Coq, Agda, and Idris, where you prove something through these types, the types are really hard to read.
[00:20:58]
And I do wonder, like, how do you test that those types are proving the right thing as well?
[00:21:05]
Right. Because if they're complex, then it's some sort of code, right? And you want to test code.
[00:21:13]
So how do you test that? I'm sure they have some techniques or tools for that.
[00:21:18]
But, yeah, yes, right. And I think it's easy to get into the headspace of, like, well, if it doesn't give me a 100 percent guarantee, or how do I prove this?
[00:21:29]
But I think getting practical, you say, like, listen, how easy is it to reason about the correctness of this system?
[00:21:39]
How easy is it to spot a failure? How easy is it to fit the behavior of the system into my head with these pieces?
[00:21:48]
And if you have some very simple types, that's a great tool for reasoning about your system.
[00:21:54]
And that's like a tool that really lets you work confidently with maintaining some code and updating the constraints as they change.
[00:22:03]
Tests are very good for looking at what is the current behavior. And if something goes wrong or if you're adding some new behavior, you write some tests to exercise that and make sure it does what you expect.
[00:22:15]
And so I think it's using the right tool for the right job. Also, actually, I suspect we may have both watched the same talk in preparation for this.
[00:22:26]
Types versus tests. There was one at a Scala conference that I watched. What is the name of the talk?
[00:22:33]
Because I'm not sure. So I watched this talk called Types versus Tests, an Epic Battle by Amanda Laucher.
[00:22:41]
Yeah, me too. OK, cool. Yeah, I had a feeling that we both watched the same one. It was very interesting.
[00:22:47]
Yeah, I wanted to watch a talk that you probably wouldn't watch. And we watched the same one. That one was very good. Yeah.
[00:22:56]
Which surprised me a bit because it was from someone who is from that TDD world, that agile world, at a Scala conference, still.
[00:23:04]
They're like, huh, she does a good job joining those two worlds together.
[00:23:10]
I thought so, too. Somewhat contradicting what I said previously, where I said that those worlds didn't mingle.
[00:23:17]
Well, they do mingle, but not as much as I would like them to be. Right.
[00:23:21]
Absolutely. Which I think is why it stood out as a really good talk, too, because it doesn't get discussed that often.
[00:23:28]
Usually people are in one space or the other. And unfortunately, you know, a lot of people who are very focused on talking about types don't talk about craftsmanship principles and test driven development as much.
[00:23:39]
And vice versa. But they really pair very nicely together.
[00:23:42]
I have to admit, like whenever I hear about craftsmanship coding, I always hear that in the back of my mind, it means TDD, it means agile practices, tiny steps, refactoring.
[00:23:57]
But it doesn't mean types at all. So. Right. Do you have the same feeling as well?
[00:24:02]
Yeah, I would say I associate it with types not necessarily being a core piece of that, unfortunately. I mean, in the most common practice, it's yeah, I agree.
[00:24:14]
So one thing that makes me think is also like you can do TDD in every language. As soon as you have a test framework, you can do it in any language.
[00:24:23]
You might not have property based testing. You might not have advanced levels of testing like end to end testing.
[00:24:30]
But types, you can only do that in some languages. Right. Yes. So, yeah, it makes sense that there's a lot more writing about TDD than about types, especially since most popular languages are not all that big into defining your own custom types and having opaque types and stuff like that.
[00:24:53]
So I think it makes a lot of sense.
[00:24:55]
And a lot of them. I mean, I learned a ton about test driven development from Kent Beck's excellent book, Test Driven Development by Example.
[00:25:05]
It's a very nice, simple little book. And from what I recall, it used Java for all the examples.
[00:25:10]
And, you know, the thing is, like when you're using Java for the examples, you're not using it as a tool for giving you guarantees.
[00:25:20]
Right. Because you still have casting and all these possibilities to circumvent the type system.
[00:25:28]
So it's not really operating in the same way where you're able to rely on it.
[00:25:32]
And also it doesn't have the features that we love so much, like sum types, which are coming.
[00:25:39]
Yes. Right. But in the time when it was written, that that certainly wasn't a feature.
[00:25:45]
And it certainly wasn't the idiomatic approach to solving problems that would have been used in a book like Test Driven Development by Example.
[00:25:54]
So one thing that this types versus tests talk, I felt, missed.
[00:26:00]
It did a very good job covering a lot of like a lot of the core things that I wanted to hit upon in our discussion.
[00:26:07]
But one of the things that it didn't talk about that I think is very important is how types and tests fit together and work in tandem,
[00:26:16]
which is our favorite topic, which if you're playing Elm Radio Bingo, you can go ahead and cross off that square.
[00:26:25]
Opaque types. I was going to go for Elm Review. I lost.
[00:26:32]
I'd put that on our Elm Radio Bingo card as well. It's a solid, solid choice.
[00:26:37]
So what I mean by these two things working together is, I think, let's say you write a unit test for a function and, you know, I don't know.
[00:26:48]
I mean, you know, one of my go to opaque type examples, you have a function that checks the validity of a username.
[00:26:57]
So now you have, you know, isValidUsername returns a Bool. And, OK, you've used test driven development for that.
[00:27:05]
And it's fully tested code, right? It's fully tested code that takes a string and gives you a boolean.
[00:27:12]
Well, you have a function or a method that says this username is valid.
[00:27:18]
Right. Yep. And it's fully tested, 100 percent done through TDD. But is it used appropriately everywhere in the code base?
[00:27:27]
Is every string that truly represents a username run through that function to make sure it checks that bool?
[00:27:35]
Of course, we are good coders, right? We'll never forget to do that.
[00:27:40]
But to me, this is so, so core to how I think about craftsmanship principles is being able to sort of narrow down my thinking about something into a nice, neat, well tested concept where the knowledge lives in one place.
[00:27:59]
So I'm able to not only organize that logic into a single place, which, you know, don't repeat yourself.
[00:28:06]
It's about knowledge. It's not about not repeating code. It's about not repeating knowledge. There's a single authoritative place where any piece of knowledge lives.
[00:28:15]
Well, an opaque type is a great way to represent that single authoritative place because, well, you can't create it outside of that thing.
[00:28:23]
So it is authoritative because it's the only way you can create a username. And so you use that username type.
[00:28:29]
And, you know, sure, you could still pass strings somewhere, but it gives you more confidence that you're using that well tested unit in the appropriate places.
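A hedged sketch of that opaque type in Elm; the module and function names are just for illustration. Because the constructor is not exposed, this module is the single authoritative place that can produce a Username, and the unit tests for fromString cover the validation once.

    module Username exposing (Username, fromString, toString)

    type Username
        = Username String

    -- The only way to obtain a Username from the outside.
    fromString : String -> Maybe Username
    fromString raw =
        if isValid raw then
            Just (Username raw)

        else
            Nothing

    toString : Username -> String
    toString (Username raw) =
        raw

    -- Internal, well tested validation logic (rule invented for the example).
    isValid : String -> Bool
    isValid raw =
        String.length raw >= 3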
[00:28:39]
So to me, types and tests work very well together. And in this types versus test talk, they were kind of discussing this in the talk and a little bit in the Q&A as well.
[00:28:51]
They were talking about, you know, how you can make impossible states impossible through your type system.
[00:28:59]
But certain things you can't represent in your type system. But then, like, you have this opaque type.
[00:29:06]
You test that opaque type. And now that gives you more guarantees. So they work together, you know.
[00:29:12]
So we do need more formal write ups about these processes, as you hinted at. I really think you're spot on with that, that we need some kind of...
[00:29:24]
What was the name of Kent Beck's book? Test Driven Development by Example. Right.
[00:29:30]
OK, well, Type Driven Development by Example. That's what we need.
[00:29:34]
Absolutely. Or, you know, test driven development aided by types, by example.
[00:29:43]
Maybe, you know. Oh, no, no, no, no. Now it seems like types are less good than tests.
[00:29:49]
Like, come on. Like, tests are not better than types.
[00:29:57]
So I think it's important to understand, like, when are types the good abstraction and when are tests the right tool?
[00:30:08]
So the way I tend to see it... maybe first let's talk about it. Usually when people say, well, you don't need...
[00:30:15]
Well, whenever people say that you don't need tests, what they mean is that you don't need to write as many tests when you have types.
[00:30:26]
Right. So, for instance, the common example: well, what if the argument that you pass to isValidUsername is undefined or nil or whatever?
[00:30:36]
Well, you don't need to check for those if your types say that it needs to be a non null string or whatever.
[00:30:46]
And you also don't need to check that this function returns anything other than a Boolean.
[00:30:51]
So when you have types, it very much limits the wiggle room that a function has between its inputs and outputs.
[00:31:03]
The inputs say what is available to the function and the output says what it is allowed to return.
[00:31:10]
Like in this enormous space of potential implementations, of potential values to return and to receive.
[00:31:18]
What can you do? So whenever you add types, you constrain what you can receive, what you can return, what you can write as the implementation.
[00:31:28]
And I think that's what people who write a lot of types, but don't write a lot of tests, tend to think of.
[00:31:36]
And that's partially my case, because I don't write that many tests in practice. It's that there's so little wiggle room
[00:31:45]
if you have good types that you don't really need to test those. Like, for instance, if you have an enum of four things as an input and you return a Boolean as an output, then you have very few implementation possibilities.
[00:31:59]
I think you have like eight or something, anyway very few. So it's going to be hard for me to make a mistake here.
[00:32:07]
I'm going to make some at some point, probably, and therefore it's still useful to have tests. But the wiggle room is a lot less than if we wrote it in a dynamic language where the inputs and the outputs are any value there.
[00:32:23]
They can be undefined. They can be JSON, functions, whatever.
[00:32:27]
So just restricting the wiggle room that you have to something very tiny makes it much more likely that you're going to have the correct implementation or a correct implementation or somewhat correct.
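For what it's worth, a four-variant input and a Bool output admit 2^4 = 16 possible total implementations (ignoring crashes and non-termination), so the guess above is in the right ballpark. A hedged sketch with invented names:

    type Direction
        = North
        | South
        | East
        | West

    -- Only 16 distinct total functions have this signature, so the type
    -- already rules out most mistakes; a test or two covers the rest.
    isVertical : Direction -> Bool
    isVertical direction =
        case direction of
            North ->
                True

            South ->
                True

            East ->
                False

            West ->
                False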
[00:32:41]
Right, right. And people put a lot of thought and effort into those practices of using types very well.
[00:32:50]
And then they focus on that one thing and then, oh, it turns out like, well, do I really need to test for this workflow?
[00:32:58]
And I think, I mean, I don't know, if you take the username example, like testing a valid username, like types are probably not a great tool for that.
[00:33:10]
Like, sure, you could say that like these are the valid characters and this first character can be this and the second character and characters after can be that.
[00:33:20]
But like, it's probably not necessary. But tests are a very good tool for that.
[00:33:25]
And to me, at the core of this is the habits to build in these practices, and also that not all tests are created equal.
[00:33:36]
So I think if we dig into it, a lot of people view types as a hindrance to their productivity.
[00:33:46]
Which they can be depending on the language and tools that you have at your disposal.
[00:33:51]
Right. And depending on how you use them. Right. And a lot of people's experiences with Java where they're not getting strong guarantees, you know, probably from a time when optional wasn't really baked into it.
[00:34:04]
They're getting null exceptions all over the place and array index out of bounds, runtime exceptions, casting exceptions.
[00:34:10]
So not all type systems are created equal. But also, if you're just using strings everywhere in your type system and not really leveraging types, then it's going to feel like nothing but a burden.
[00:34:22]
It's going to feel like nothing but this thing that's forcing me to implement an abstract, you know, instance of this factory.
[00:34:31]
And so I have to make like an anonymous class to satisfy this thing. And, OK, great. What safety did that give me?
[00:34:39]
Yeah, it's only going to give you limited benefits, because if everything is nullable anyway, then the biggest problem is still that everything is nullable and you have to check for null everywhere, which, I mean, you're going to have to write tests for.
[00:34:55]
Right. And if you still are using primitive types all over the place and you're not really making impossible states impossible and you're not really, you know, using, you know, union types or don't have that functionality in your language.
[00:35:09]
So I think that's a lot of people's experience with it. So the way that you use your type system matters a lot, as I think a lot of our listeners will, you know, will be preaching to the choir there.
[00:35:20]
But similarly, I think the way that you write tests matters a lot. And I think that, you know, in my opinion, after watching Kent Beck, or watching DHH, the creator of Rails, talk about TDD is dead, and watching these sort of TDD experts talking him through it and asking, why do you think TDD is dead and how do you practice TDD?
[00:35:45]
And what I gathered from all of that is the way he practices TDD is very focused on doing a lot of integration tests, sort of these in between tests. They're not end to end tests. They're not unit tests that are exercising one small unit of behavior.
[00:36:03]
They're integration tests. And from my experience in my Rails development days, there is a lot of that in testing culture in the Rails community and, you know, these controller tests. So it's not really giving you confidence that your full system works end to end because it's not opening up a browser and running through a user workflow and giving you confidence end to end.
[00:36:25]
It's not thoroughly exercising all of the possible ways to call one method either. So it gets very messy and you're doing a lot of mocking and stubbing, creating a lot of fake values.
[00:36:38]
And that is like very important because you're testing your mocks. You're testing your fake value producers, not your system under test. And so when you change something and nothing breaks, well, oh, I guess I was mocking that.
[00:36:58]
And what happens is your tests become extremely coupled to your code, but don't actually give you confidence about it. So it's the worst of both worlds. And I think that's why DHH had this whole TDD is dead thing, besides just being a provocateur and liking to say controversial things.
[00:37:17]
Were you pulling your hair out when you were watching the talk?
[00:37:20]
Yeah, yeah, well, it was just very on brand and it's like, all right, you know, I think people are going to continue to think what they think about testing and if they already thought that testing was a waste of time, then that will reinforce that opinion.
[00:37:34]
And if they thought TDD was great, then that will reinforce that opinion and have them think about why they disagree. But I think it's an interesting conversation.
[00:37:43]
To me, the takeaway is it really matters how you write your tests. And now, you can't mock in Elm, particularly, but you can write good or bad tests, useful or not useful tests.
[00:37:57]
Yeah, if you write tests with elm-program-test, you have to mock somehow.
[00:38:01]
Right, right.
[00:38:03]
But you're not going to have those spies or the exact same kind of mocks. You're going to have test data, you're not going to have mocks. And, like, actually, you can't even do white box testing in Elm, right?
[00:38:17]
So black box testing is when you give inputs and you make assertions on the outputs, but you can't do white box. I'd say not. You can influence the internals by passing inputs. Well, yeah.
[00:38:33]
I mean, it depends. Like with Elm, a lot of the time... I mean, you know, when I was doing technical coaching in non Elm companies, I was spending a lot of my time trying to teach people to follow these practices that Elm forces you to do, like dependency inversion and, you know, dependency injection.
[00:38:57]
And, you know, so instead of mocking things, you can pass in the value, so you have control over it. Instead of mocking time.now or whatever, right, you pass in the time, and in Elm, you have to do that because you can't just go get the side effect.
[00:39:15]
So you don't have to mock the current date because you have to have that as an explicit dependency, as an argument, with dependency injection essentially. So Elm does help with a lot of those practices, but nonetheless, you can write useful tests or not.
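A hedged sketch of that, with invented names: the current time arrives as an argument instead of being fetched inside the function, so a test just passes a fixed value and there is nothing to mock.

    import Time

    -- The current time is an explicit dependency, not a hidden side effect.
    isExpired : Time.Posix -> { expiresAt : Time.Posix } -> Bool
    isExpired now session =
        Time.posixToMillis now > Time.posixToMillis session.expiresAt

    -- In a test:
    --     isExpired (Time.millisToPosix 2000) { expiresAt = Time.millisToPosix 1000 }
    --         == True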
[00:39:32]
You can, and you can scope things into meaningful units or not. And I think the way you organize your code and extract things into modules and opaque types is a big part of that.
[00:39:42]
The way I see it, whenever I think of TDD, I mostly think of unit tests because that's what people push towards. That's where you will see the most benefits. And integration tests are always a bit slower and a bit more clunky, especially if you need to do mocks and spies and those kinds of things, which we can't do.
[00:40:03]
As you just said, we can't do those anyway. But also, integration tests are about connecting multiple things, right? And that's actually where types shine. So in a unit test, you're going to give some input and assert something on the output.
[00:40:19]
If the types are not exactly what is expected, that might not be too much of a problem. I mean, your tests are going to cover that. But types are contracts, right? They say, well, this thing takes this as an input and it will return this type as an output.
[00:40:35]
And then that can only be used in specific ways, just like you said with the username and other non primitive types. And well, whoever's going to use those types as inputs or outputs, they're going to have to do it in a correct way because the type checker will validate that for you.
[00:40:53]
But that's going to be something that is very hard for your unit tests to verify, right? You're going to have to write multiple unit tests. You're going to have to write a lot of scenarios to figure out where and when it fails to do it correctly, which you're then going to fix.
[00:41:10]
But basically you're going to do like tests based on, yeah, scrap that, scrap the last part.
[00:41:16]
So yeah, whenever I think about integration tests, I feel like that's where types are better suited, or I view the whole thing as a unit test. So unit tests all the way, and for big things still considered as unit tests, but for the implementation inside, that's where types shine.
[00:41:36]
Right. I totally agree. I think that fitting pieces together is wiring. Like, in my Ruby on Rails development days, I thought a lot about wiring, and with Elm, you just let the compiler think about it.
[00:41:53]
You know, I mean, you think about designing how the pieces will fit together and then you trust it once you've sort of designed how you want that to work. But you have to think about that in your testing process, as you say, with writing mocks and things as you're writing integration tests in your Rails applications.
[00:42:11]
And it's very challenging and it takes a lot to gain confidence through test driven development about your wiring, whereas it's trivial in a type system. That's what type systems really shine at. Like they're so good at doing that.
[00:42:26]
And yes, they're good at making impossible states impossible too. And that's great. But the wiring, it's just, you can't go wrong. And if your wiring is just this function takes a string, this function takes an int, you're missing out.
[00:42:40]
You're missing out on what you can do with your type system. But yeah, a lot of these integration tests go away. But I think the way that you organize your code is very important. So, I think the cost of change is very important to this types versus tests question, because I think a lot of people will feel that tests slow them down, or they'll feel that types slow them down, or, you know, one or both, whichever they hate more.
[00:43:06]
Yeah. Possibly they'll feel that both slow them down. But if so, maybe they're not listening to Elm Radio. But I think the way that you write your tests will affect how much it slows you down or not. And you can write Elm code in a way where things get very tangled up with each other.
[00:43:30]
And it feels like making a change, you have to change all of these tests and throw things away. But if you are kind of organizing things into nice encapsulated, opaque types that have well defined areas of knowledge and responsibility, and you pull out these clean leaf nodes that are responsible for this one area of work, then things aren't coupled in an awkward way, right?
[00:43:56]
So you can't separate the conversation about how you couple your system from the maintainability of your types and tests. And so it's essential. So I would encourage people, if they are feeling like either types or tests are slowing them down, to think about how, like, are you leveraging them to maximum effect?
[00:44:20]
Like, are you actually getting something meaningful and useful out of them? And are you coupling them in a way that makes the cost of change difficult, just like DHH is talking about, a lot of tests with tons of mocks that are not really giving a lot of confidence and coupling all these things to the internals of the system?
[00:44:40]
Well, you can kind of couple things in a way where your tests are very hard to maintain and change in your Elm application. So the way you couple and organize your code is essential for that.
[00:44:52]
So maybe let's talk about when you need to test. So types are going to check for things that are very general, very generic; they're never going to provide a lot of detail.
[00:45:07]
So if you say that a function returns an integer, well, the type checker will prove that it will always be an integer, otherwise it wouldn't compile. But it doesn't tell you which one it will be.
[00:45:17]
That's when you want to test: you want to assert that in a specific scenario it has a specific value, or, if you use property based testing, that it is always constrained by a specific rule.
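A hedged elm-test sketch with an invented function and property: the type only promises that an Int comes back, and the fuzz test pins down the rule that should hold for whichever Int it is.

    import Expect
    import Fuzz
    import Test exposing (Test, fuzz)

    -- Hypothetical function under test.
    clampPercentage : Int -> Int
    clampPercentage n =
        clamp 0 100 n

    clampTest : Test
    clampTest =
        fuzz Fuzz.int "result always stays between 0 and 100" <|
            \n ->
                clampPercentage n
                    |> Expect.all
                        [ Expect.atLeast 0
                        , Expect.atMost 100
                        ]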
[00:45:37]
So basically, you're going to want to write tests whenever you can't prove something with the type system. So in my mind, there are at least two things for that. One, when you want to verify specific values among the wiggle room that you have, right?
[00:45:52]
And two, whenever you want to prove things, things that can't be proven by the type system, including side effects, when your type system doesn't convey the information of which side effects are returned, then you're going to have to write the test for that.
[00:46:08]
And also, if your type system is unsound, it's giving you some guarantees, but not everything. Like if everything is nullable, like in Java, then you will want to have tests that prove that things are never null, things like that.
[00:46:24]
So I think it's mostly for those things: when you want to test specific values, when you want to test side effects, and when you want to test things that can't be proved by your type system.
[00:46:37]
Right. Yeah, I mean, types, to me are all about constraints. They describe and enforce constraints, and tests are about behavior. And there's an interplay. There are times when you'll want to capture a constraint in a test because it's very hard to express in the type.
[00:46:58]
But yeah, a lot of the time, you describe your constraints and your types, and then you test the behavior in the tests. You just can't test the behavior with types. So yeah, I think to me, it's your business logic.
[00:47:15]
Right. Now, I'm usually not going to be writing view tests, because... Martin Janiczek recently gave an Elm Online talk where he was showcasing how he uses Elm Book to do this sort of storybook driven development style of writing view components in his Elm app.
[00:47:34]
And I thought that was very cool. Like, to me, that's pretty sufficient for testing visual elements, and I don't find much value in writing unit level tests for that. I think it's very valuable to have end to end tests, not integration tests that are faking things out, but end to end tests that are actually opening a browser, Cypress tests, things like that, to give you confidence in the system.
[00:48:01]
Oh, sorry. What if you have view code that has quite a bit of logic that returns HTML?
[00:48:06]
Great point. So in cases like that, I would tend to already want that as a separate testable unit that is invoked by my view logic, but not spread out all over in my view code.
[00:48:21]
Okay, so it's kind of like, wrap early, unwrap late. You unwrap to HTML as late as possible, but whatever logic you have, you want to do it with specific types or maybe even just primitives, but not HTML, which is harder to test.
[00:48:39]
Yeah, and, like, to me, I want to split out my business logic from my view logic and templating. To me, if I'm writing a test that's testing my templating, it's just writing the same thing twice and coupling me in a way where I change this thing, this thing breaks. It just feels like brittle tests.
[00:49:05]
It doesn't feel valuable to me. It doesn't feel like it's preventing me from causing bugs. It just feels like it's slowing me down from making changes in the system and making it more frustrating.
[00:49:14]
Yeah, absolutely. I was mostly thinking of like, if you have branching conditionals in your view code.
[00:49:20]
Right, right. So, for example, you could have some complex logic for how you render names based on, you know, whether it's a guest login, or whether there's a last name or a username, and you pick this display name to use. You know, you could have complex logic with lots of branching and lots of complexity.
[00:49:49]
That feels like business logic, right? The key thing is, if it feels like business logic, then I'm going to want to encapsulate that and invoke it from my view template.
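A hedged sketch of pulling that naming logic out, with made-up fields: the branching lives in one unit-testable function, and the view only renders whatever it decided.

    import Html exposing (Html)

    type alias User =
        { isGuest : Bool
        , lastName : Maybe String
        , username : String
        }

    -- Business logic: unit test this function directly.
    displayName : User -> String
    displayName user =
        if user.isGuest then
            "Guest"

        else
            case user.lastName of
                Just lastName ->
                    lastName

                Nothing ->
                    user.username

    -- The view stays dumb.
    viewGreeting : User -> Html msg
    viewGreeting user =
        Html.text ("Hello, " ++ displayName user)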
[00:50:00]
But I want my view template to be kind of dumb. And I don't really want to test that through unit tests. But I do want to unit test my business logic. And I think this is really essential: it's just a skill and a habit.
[00:50:15]
It's kind of hard to learn without just sitting next to somebody who's done this a lot and probably sat next to somebody else who had done it a lot. You know, you go up the chain enough, and then there's probably somebody that sat next to Kent Beck and did it a lot and Kent Beck came up with this discipline, right?
[00:50:34]
But that's kind of like the easiest way to learn how to decide what to separate out as business logic. Like, it's this design sense that is hard to just learn naturally by thinking it through. But to me, that's how I think about it.
[00:50:50]
Business logic, I want to encapsulate that out somewhere as a unit and then test that unit. So with unit testing, figuring out what the units are is very hard.
[00:50:59]
Okay, I'm starting to think we're getting at the point where we need to answer the question. Dillon, types or tests?
[00:51:09]
Tweps.
[00:51:10]
God, goddammit.
[00:51:13]
I actually don't even know. Are we allowed to swear on this radio?
[00:51:17]
No, I don't think we ever have.
[00:51:20]
That elicited a very strong response.
[00:51:28]
Yeah, let's imagine you only had the choice between writing tests or writing types. What would you prefer?
[00:51:35]
All I have to say is, if you had modeled your constraints better in your possible responses, then I wouldn't have been able to respond that way.
[00:51:45]
Yeah, but everything you say is just stringly typed.
[00:51:50]
That's true. Our whole podcast is.
[00:51:52]
I'm working against your interface and you only return strings.
[00:51:58]
Maybe strings?
[00:52:00]
Maybe strings, because you could keep silent.
[00:52:03]
Exactly. It's hard for, I mean, I genuinely can't make up my mind. I really can't. Like, you can't make me choose between my children. I love them both.
[00:52:17]
You don't have any children.
[00:52:20]
And more importantly, you don't need to choose one or the other. Like, how does one take away from the other? They enhance each other.
[00:52:32]
You could write JavaScript and not have access to types.
[00:52:36]
Right. Well, you could, or you could write Elm.
[00:52:39]
Well, that's a bold choice.
[00:52:42]
So I think, do you have an opinion? There's something I want to mention, but before I do, what would you choose?
[00:52:48]
I would totally choose types.
[00:52:50]
I suspected as much. Why do you think that is?
[00:52:53]
Yeah, because I also said, like, I don't write that many tests because I feel like I constrained that wiggle room enough for me to not mess up too often.
[00:53:05]
But also, like, the experience between writing unit tests and types is so different, because for writing unit tests, you need to have a somewhat clear understanding of the API, where if you change it, you're going to have to update your unit tests.
[00:53:25]
But also, the experience of finding issues is going to be very different when you have test failures versus when you have compiler errors.
[00:53:33]
So if you change your code, your types or your tests, no, your production code or your types, then unit tests will just start breaking, right?
[00:53:45]
They will say, hey, you've got to, this is not returning the correct thing or it crashes for some reason.
[00:53:51]
And you're going to have to figure out yourself where the problem lies.
[00:53:55]
But when you change a type, the compiler will tell you, hey, you've got a problem here and you go fix it. You got a problem here, you go fix it and so on and so on.
[00:54:04]
You don't have to figure out where the problem lies. The compiler tells you. And that is just so much more useful, I think.
[00:54:12]
So I know you're going to say, like, oh, you didn't have to think too much about your API.
[00:54:18]
You can change it and that's fine. But it's just like another type error.
[00:54:22]
Right. No, I mean, I think, yeah, these are great points. And I think that in a way, I suspect that some of this types versus tests conversation, again, as I said earlier, I think that often when people are dealing with type systems that aren't sound type systems,
[00:54:42]
they're not working with guarantees from types.
[00:54:45]
They're working with checks from types, which feels a lot like a test. A test checks one thing. You run a test, it checks one thing.
[00:54:53]
You run a, you know, fuzz test, and it checks things and runs within the constraints you build up for it to check.
[00:55:01]
But it's finite, whereas types work the other way, constraining things and giving you guarantees, not checking one thing and saying, yes, that one thing does what you think.
[00:55:12]
One kind of testing we haven't talked about, which is writing assertions in your code.
[00:55:19]
That is something we don't have in Elm and also isn't that big in JavaScript.
[00:55:25]
But I know that some ecosystems are bigger on that. Maybe Rust, maybe C like languages where, oh, you're going to write an assertion checking that some list is non empty, for instance.
[00:55:39]
Invariants. Yeah, invariants, but writing those in your code. And if that is ever false in a specific behavior, a specific scenario, then it's going to crash, I guess.
[00:55:51]
But again, you need to write unit tests to find those in your CI. Right.
[00:55:57]
Right. And yeah, it's interesting, but it feels like something that, you know, impossible states or opaque types or these different techniques should be able to help you with.
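A hedged sketch of that idea with an invented type: instead of asserting at runtime that a list is non empty, the shape of the data makes emptiness unrepresentable, so the head is always there.

    -- The first element is always present; no runtime assertion needed.
    type NonEmptyList a
        = NonEmptyList a (List a)

    head : NonEmptyList a -> a
    head (NonEmptyList first _) =
        first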
[00:56:08]
So, yeah. So I think the way a lot of people are used to working with type systems, it feels more like testing because it's just checking one thing rather than giving you guarantees.
[00:56:21]
But when you're working with it as guarantees, it feels qualitatively different.
[00:56:26]
As you're saying, when you have types with guarantees, tools can help you. The compiler can point you to, hey, here's what's wrong in this specific spot. Here's what you can do to fix that.
[00:56:40]
Also, static analysis tools, maybe Elm Review, cross that off your bingo card.
[00:56:45]
Actually, Elm Review can infer some things from your code. Like, we don't do much, but it could potentially. But figuring out things from your tests, that would be interesting and a lot harder.
[00:56:58]
Exactly. Because that's just the whole point to me. It's the bare metal. Like, types are bare metal. They're this very low level thing. That's the whole point.
[00:57:07]
They are not as expressive as code. And that's why they're useful. It's not arbitrary code.
[00:57:14]
What do you mean they're not as expressive as normal code?
[00:57:17]
They're not an unconstrained thing where you're writing arbitrary code. You're saying, you know, type user equals guest or admin, you know, admin details or regular user details.
[00:57:32]
Like, you're not saying, oh, and if this conditional checks these runtime conditions. You look at it and you can fit it in your head all at once without imperatively running through the code and all the interactions of a complex system.
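Written out, the type just described looks something like this (the detail records are invented placeholders): every possible shape of a user is visible at a glance, without running any code.

    type User
        = Guest
        | Admin AdminDetails
        | RegularUser RegularUserDetails

    type alias AdminDetails =
        { permissions : List String }

    type alias RegularUserDetails =
        { username : String }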
[00:57:50]
That's what makes them interesting. And that's what makes them useful for tools. So static analysis tools, optimizations as well.
[00:57:59]
Optimizers also. Yeah, absolutely. Optimizations, IDEs, code completion, stuff like that. Yeah, absolutely.
[00:58:09]
So I just feel like we're barely scratching the surface of what we can do with tools, with constraints, with very strong guarantees.
[00:58:17]
Right. Like, you know, automatic code solvers. Right. Like GitHub Copilot is, you know, pretty popular these days.
[00:58:25]
Well, what if there was something that took a completely different approach to GitHub Copilot and used its understanding of the constraints baked into the language and the types to suggest possible solutions?
[00:58:38]
Right. Like I saw a talk that was showcasing this, like, you know, automatic function generator that takes like all of the values that are in scope and tries to infer like here are 10 possible functions I can create with this.
[00:58:54]
They use these different values because this is a list and this is a function that takes a value and returns this.
[00:59:00]
And then I can fold over this list. And so here are 10 different things you could do with this.
[00:59:05]
And a lot of the time it auto generates code that you're like, oh, yeah, that was what I wanted to do with these inputs.
[00:59:11]
Tests can't really fulfill those types of possibilities. So types are very compelling in that regard.
[00:59:18]
But, you know, you can use tests to do a worse job at being a type system and checking constraints.
[00:59:30]
So I think tests are just so good. And I wish that people in the type community would embrace them a little bit more.
[00:59:39]
But maybe we need to give some really good resources for how to do that.
[00:59:43]
Yeah. To make good use of tests, you do need to be, I'm not going to say a good developer, but you need to have some good habits.
[00:59:50]
Yeah, some good habits, because if you don't have the habit of running your tests, then you don't have any guarantees at all.
[00:59:59]
Running the type checker is a lot less effort, in my opinion.
[01:00:04]
And also, one thing that is interesting to figure out and to notice is that it is actually quite easy to ignore a failing unit test, because you can delete the unit test.
[01:00:16]
Right. Or, you know, you can, with big air quotes, "forget" to write the unit tests.
[01:00:23]
Right. But with a type checker, that won't be possible.
[01:00:26]
So type checkers are much more general, also in the sense that they will look at the whole code base.
[01:00:35]
So they're going to find a lot more issues, especially if you don't have good habits.
[01:00:41]
Test-driven development is only as good as the culture and the habits that it's operating within.
[01:00:47]
Yeah. I wrote a blog post a while back called Relentless Tiny Habits.
[01:00:51]
I think that test-driven development is fascinating in the sense that it's not necessarily any one particularly difficult skill.
[01:01:02]
It's more just like, yeah, it is that simple, but you just do it all the time and don't not do it.
[01:01:09]
And it's a habit, but it's hard to build that habit.
[01:01:12]
And I think there needs to be like a mindset shift.
[01:01:15]
I think when you have that mindset shift, you go from thinking that it's something that slows you down to thinking that it's something that speeds you up.
[01:01:26]
I think that's essential.
[01:01:27]
But it's also something that you need to do yourself.
[01:01:32]
You need to get better. Right. But also your whole team needs to do the same thing.
[01:01:38]
Because as soon as someone doesn't adhere to this philosophy of writing tests first, of doing TDD the right way.
[01:01:48]
Well, then the whole system will not work as effectively.
[01:01:53]
Whereas one person could just add tests, add types, and, with a lot of work, improve everyone's lives.
[01:02:01]
But yeah, I mean, someone can take that nice opaque type and expose the constructors and just build it directly, or not use the Username opaque type and just start passing strings around somewhere, too.
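A minimal sketch of how that guarantee gets bypassed; the Username module here is hypothetical.

```elm
-- Keeping the constructor hidden means fromString is the only way in:
module Username exposing (Username, fromString)

-- Exposing "Username(..)" instead would let callers build a Username
-- directly and skip the validation below.


type Username
    = Username String


fromString : String -> Maybe Username
fromString raw =
    if String.isEmpty (String.trim raw) then
        Nothing

    else
        Just (Username raw)
```

As long as fromString is the only way to get a Username, every value of that type has been validated; expose the constructor, or fall back to passing raw strings, and that guarantee disappears.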
[01:02:14]
Right. So, like, of course, again, there are cultural things to both.
[01:02:23]
And it's not a coincidence that extreme programming is heavy on test driven development and pair programming.
[01:02:33]
Right. Because it's cultural and it's about habits and spreading knowledge and spreading cultural ideas.
[01:02:39]
And if you don't do that, then it's not very useful.
[01:02:44]
I still imagine that if you have a team of 10 developers who are keen on doing TDD, except for two people, I feel like those two will always pair together because they're going to be less annoyed by the other person.
[01:02:58]
Like, oh, yeah, you and me make a good team because we don't say, oh, please write a test first.
[01:03:05]
I think a lot of it comes down to the paradigm, like the lens that we look through.
[01:03:12]
And if you see types as a burden, then you'll use types only where you have to, and if you can avoid using them, then maybe not at all.
[01:03:22]
But the way you use types will look very different. If you're writing TypeScript, you're probably going to use a lot of anys or just rely on inferred types.
[01:03:29]
Also inference apparently is a big thing.
[01:03:32]
Type inference. And you're probably going to just JSON.parse and get your any type and pass it around without, you know, using something that does something similar to JSON decoding in Elm, where it gives you guarantees about the JSON values you're getting.
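For contrast, here is a tiny sketch of what those guarantees look like on the Elm side; the User shape is made up for illustration.

```elm
import Json.Decode as Decode exposing (Decoder)


-- A made-up record shape, just to illustrate the idea.
type alias User =
    { name : String
    , age : Int
    }


userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)


-- Decoding either succeeds with a fully typed User or fails explicitly:
-- Decode.decodeString userDecoder """{ "name": "Jeroen", "age": 30 }"""
--     == Ok { name = "Jeroen", age = 30 }
```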
[01:03:47]
But if you perceive it as something that gives you value, to check those types and be able to work with those types, giving you confidence and guarantees, it's going to change the way that you leverage types.
[01:03:59]
If you're working in Elm, you're going to be making impossible states impossible and you're going to be using parse, don't validate and all these things that let you get more value out of types instead of saying, oh, this is just a burden.
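And a classic sketch of the impossible-states idea, with made-up names: instead of tracking a request with independent flags, the type only allows states that can actually occur.

```elm
-- Instead of a record of flags like
--   { isLoading : Bool, error : Maybe String, user : Maybe { name : String } }
-- which allows nonsense such as isLoading = True while a user is already present,
-- a custom type only admits the states that can actually happen:
type RequestState
    = Loading
    | Failed String
    | Loaded { name : String }
```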
[01:04:11]
And I think it's the same with tests. Like, if you view tests as a burden and a moral responsibility, which I really don't think it's constructive to talk about these things as a moral responsibility or a professional shortcoming or something.
[01:04:26]
If you don't follow these practices, in my opinion, that's just very counterproductive. To me, it's like, hey, this is a tool that allows you to work in a much more enjoyable and safe way, where you're just flowing through your code and you don't have to keep manually testing this thing.
[01:04:48]
And is it working now? Is it working now? Is it working now? It's very satisfying to be working with this auto test runner. And you want to refactor? Oh, no problem. Let me just go and rip things apart.
[01:05:01]
Like, if you like the feeling of refactoring Elm code without tests, then if you have a well-tested code base that is, you know, nicely abstracted, with nice units that have nicely defined responsibilities and aren't heavily coupled together...
[01:05:20]
That feels really good. But it's a paradigm shift, and a lot of people view tests as a burden. And I think that's the hump to get over. Culturally, the first step is seeing that in action.
[01:05:35]
But I think a really good way to build this up is to do it outside of production code. So, you know, there's this concept of code katas, which is code that you don't ship to production, where the purpose of it is learning. So you take a simple exercise: Roman numerals, FizzBuzz, things like that.
[01:05:53]
And you use all of the techniques that you're trying to learn, and you only do very disciplined red-green-refactor. Because when you're working in a large production system, you can build up bad habits. You can take steps where you're actually coupling things in a way that makes the code harder to work with. When you're working on these simple problems, you can experiment and learn and develop these habits.
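As a sketch of what one red-green-refactor step in a FizzBuzz kata might look like with elm-test; the module and function names are just for illustration.

```elm
module FizzBuzzTest exposing (suite)

import Expect
import Test exposing (Test, describe, test)


-- The function under test; in a real kata it would live in its own module.
fizzBuzz : Int -> String
fizzBuzz n =
    if modBy 15 n == 0 then
        "FizzBuzz"

    else if modBy 3 n == 0 then
        "Fizz"

    else if modBy 5 n == 0 then
        "Buzz"

    else
        String.fromInt n


suite : Test
suite =
    describe "fizzBuzz"
        [ test "multiples of 3 become Fizz" <|
            \_ -> fizzBuzz 3 |> Expect.equal "Fizz"
        , test "multiples of 5 become Buzz" <|
            \_ -> fizzBuzz 5 |> Expect.equal "Buzz"
        , test "multiples of 15 become FizzBuzz" <|
            \_ -> fizzBuzz 15 |> Expect.equal "FizzBuzz"
        , test "other numbers are unchanged" <|
            \_ -> fizzBuzz 7 |> Expect.equal "7"
        ]
```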
[01:06:21]
Oh, I always feel like you can play with Elm code more than other languages. But yeah, absolutely. One thing that just came to my mind is that we didn't mention compilation times. Because some languages have very long compilation times.
[01:06:39]
So if you want the type checker, the compiler, to tell you, hey, there's a problem here, there's a problem there, or to make guarantees for you, well, how fast the compiler runs will impact the experience that you will have.
[01:06:57]
Yes.
[01:06:58]
And if the tests run a lot faster than the type checker, then yeah, I think it makes a lot of sense that you will write more unit tests.
[01:07:06]
In Elm's case, like, both are really fast. I would say, well, the type checker is much faster than the unit tests, because the more tests you add, the slower they will run. But yeah, they're both fast in our case. So we're pretty lucky.
[01:07:19]
Unit tests are quite fast. Yeah.
[01:07:21]
Unit tests are quite fast in Elm.
[01:07:23]
Depends on how many you have. If you have 10,000, it's gonna take a few seconds, probably.
[01:07:28]
Depending on what you test, right?
[01:07:30]
Even so, that's not so bad. So what tends to happen a lot in, like, Ruby on Rails shops is you get a lot of integration tests that are spinning up a database in an integration test. There's some mocking, but there's some spinning up of a database, and they get very slow.
[01:07:49]
And that is painful, because now you have to run a subset of your tests because the feedback loop is too slow, but then they're not giving you full confidence.
[01:08:02]
And they're flaky, because spinning up a database sometimes gives you non-deterministic results. And sometimes it's depending on time, giving you a result based on when you run it.
[01:08:11]
Or worse, depending on other tests being run.
[01:08:15]
Right. Exactly. The order of the tests being run. And so you wonder why DHH says TDD is dead, right? It's not a big surprise.
[01:08:24]
Yeah.
[01:08:26]
I feel like we have done a pretty good round of it. We have done other episodes on testing, on opaque types.
[01:08:34]
I actually wonder, did we say opaque types enough? Have we hit our quota or should we say it a few more times?
[01:08:41]
I'm not sure. We could always say it a little bit more. But if you haven't listened to our opaque types episode, as we've said, mandatory listening, that should have been episode number one of Elm Radio.
[01:08:51]
That should have been our most listened to episode. It actually isn't.
[01:08:57]
Yeah, that's true. We'll keep pestering people until it becomes our number one listened to episode. And people, remember to subscribe to our podcast. Some people are not subscribed and getting every episode.
[01:09:11]
Subscribe in your podcast feed. Give us a rating on Apple Podcasts and follow us on Twitter. And Jeroen, until next time.
[01:09:19]
Until next time.