Tune in to the tools and techniques in the Elm ecosystem.
Types vs. Tests
We discuss the role of types versus tests, and whether they complement each other or make the other obsolete.
November 7, 2022
Is TDD Dead?
(TDD is Dead and associated video discussions with TDD experts)
Roc-lang's tagged unions
XP (Extreme Programming)
80-20 rule (i.e. Pareto Principle)
Amanda Laucher's ScalaDays talk
Types vs Tests
Kent Beck's book
TDD by Example
Martin Janiczek's Elm Online talk on Storybook-driven development
Talk with demo of Relentless Tiny Habits
So are you the type of person that likes tests?
Because we're going to have a face off between those two.
I might be strongly typed.
I might be...
Why do we have to choose?
Why do we have to fight over this?
Can't we just all agree that they're both good?
And let's do that.
So maybe that's the episode already.
Types versus tests.
It's not a competition.
Just do both.
I have that little girl's face in my head now.
And like, yep, yep.
That's our episode.
That's the episode.
Yeah, I mean, I actually do want to like investigate that with you a little bit.
Like, why is it?
Well, so first of all, we're talking about types versus tests today.
If you hadn't gathered that.
Why does it have to be a war between these two things?
Like, why is there any notion that one would take away from the other or that one would supplant the other?
Like, I find that very interesting.
Yeah, well, my first instinct to this question is like, in a lot of languages, you only have one, right?
You only have tests.
In all the languages that are dynamically typed, it doesn't feel like you have types.
Or at least they don't help you write your code in the way that we think about when we talk about types versus tests.
So therefore, you have to use tests, right?
If you want to make sure that your code is correct.
At least I hope they do.
God, I hope they do.
So if you don't have types, then you're going to go for tests.
But at some point, you're going to ask yourself the question, should we add types?
Should we use mypy for Python, etc.?
Or like, oh, we're using Java or some other typed language, but we don't make very good use of our types.
Or at least we don't use them to the effect that other languages do like Haskell and obviously Elm.
And then I think that's the part where you ask yourself the question, well, types or tests?
Right. I think you're right.
I think there can be a very strong testing culture in a lot of these communities.
I think the Ruby community, and especially the Ruby on Rails community, created a very strong testing culture, which was definitely a big influence on me and is a very strong part of that community.
Interestingly enough, DHH, the creator of Rails, has had some very controversial talks now where he says TDD is dead
and that it deteriorates your code quality and makes your code worse.
I haven't watched those talks. I have heard of that, but I haven't seen the points that were raised.
There are some interesting talks where, you know, Martin Fowler and some of these, you know,
pioneers of testing practices sit down and do some video calls with DHH and try to explain it to him.
Essentially, I believe that the way that DHH is testing things makes things very painful.
And I think that's a huge part of it is that not all tests are created equal.
I think that not all types are created equal.
And I think that's part of the root of it. So, what you're talking about,
where maybe people are sprinkling types in, maybe they're adding something that brings a little bit of types into an untyped language.
You know, maybe adding some TypeScript, or something through comments, or something like Elixir that has types you can sprinkle in,
but it's not really a core part of the language.
Yeah, or simply avoiding primitive obsession by re-implementing or wrapping some primitives in new types.
That's also a good use case, I think.
Oh, that's interesting because that's yeah, that's types without a compilation step.
And you're right. That's another dimension to it.
And I think that there can be a little bit of a sense that when you're using types in this way, when you sprinkle them in,
then it's just like maybe it feels like just another test that you're just like, hey, this is another thing that sort of checks something.
And it sort of gives us a little boost in confidence.
But to me, it fundamentally feels different when you have a sound type system versus a sprinkling of types that sort of helps you out a little bit more.
And it feels more like a test where like you can test like if I pass in a string here, it behaves this way.
If I pass in undefined here, then it returns undefined or raises an invalid type exception or whatever.
And you say, well, my type system, I can do that through a type. I can do that through a test.
And if I've been following a really strong test driven development practice where I am actually doing red green refactor,
then I would have built up all these test cases. But the thing is, like, did you test every possible input type?
And because tests only cover what you write, and types work almost in the opposite way.
With a test, you only gain confidence by adding to it: you constrain the behavior test by test.
Whereas with a union type, you only allow a possibility when you add something to it; everything you haven't listed is ruled out by default.
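A small sketch of that asymmetry (the `Payment` type here is a hypothetical example, not from the episode): adding a variant to a union type *widens* what is allowed, and the compiler then forces every `case` over it to handle the new possibility.

```elm
-- A hypothetical payment type: only these two states can exist.
type Payment
    = Cash
    | Card


-- Adding a `| Voucher` variant above would *allow* a new possibility,
-- and the compiler would flag every `case` expression that misses it.
describe : Payment -> String
describe payment =
    case payment of
        Cash ->
            "paid in cash"

        Card ->
            "paid by card"
```

So a test suite grows confidence additively, while a union type starts from "nothing is allowed" and only opens up what you explicitly list.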
So you mentioned TDD, right? To go back to why is this question of types versus tests even there in the first place?
There is such a thing as TDD, right? There is test driven development and there's a lot of articles, a lot of videos, a lot of books on it.
I think mostly pushed by extreme programming, I think, and agile coaches.
Kent Beck sort of created the process, and XP, Extreme Programming, was Kent Beck's early agile process where he introduced those techniques.
Yeah. So TDD is big. It's really big.
But I think it mostly originated in C-style languages like C, Java, and C#.
On the other side, you've got the academic languages like Haskell, where they are big on, hey, see all the things that we can do, or at least prevent, using types.
And don't quote me on any of this. This is my understanding, at least.
And there's one side where they go, they make code work through tests.
And there's another side where they say, hey, look at all these types that are so useful.
And I think these two have not been melded together quite well.
There's also just no vocabulary for it. You mentioned type-driven development in one of your talks.
But there's nothing as catchy as TDD.
And there's no practice that I know of that has that name of like, hey, let's improve our code by changing our types.
We call it making impossible states impossible, or making illegal states unrepresentable, but there are no books with those names.
Right. Yeah. I also wonder if the workflows are at odds, where some people like to write a type and model out their types.
We've talked about this process. We both enjoy this workflow of sketching out a bunch of custom types to
wrap our heads around a concept. And that's in a way that sort of an upfront design process in a sense.
It can be. Now, with the caveat that it is a sketch. So you can throw that sketch away.
And then test driven development is more of a pull model. You don't do it up front.
You pull things in as needed. You use the simplest thing that could possibly work.
I tend to think, you know, I've talked in the past about this concept of spikes.
And to me, that's really essential in TDD, being able to do spikes. Remind me what you mean by spikes, exactly?
Yes. A spike is essentially writing code that you can throw away for the purpose of learning.
So the the deliverable of a spike is learning, not production code.
And that's important because in TDD, you always write a failing test first and then kind of pull code into existence as needed to satisfy that failing test.
But sometimes you want to just explore something. And so a spike gives you a space to explore where you sort of put that discipline to the side for a second.
But you're not writing production code. You're writing throwaway code.
And so it allows you to just explore without being constrained by that workflow where you're doing things just in time.
Yeah. You're thinking of spikes that last like a few minutes, right? Not spikes that last half a day or two days.
Right. Absolutely. So to me, sketching out types can be a really nice type of spike.
And I do really like that workflow where, you know, I imagine you do too, you don't have elm-review saying "this is an unused union type constructor" for a long time while it's just sitting there collecting dust because you think you're going to use it at some point in the future.
So it is a little bit difficult to piece these two workflows together in a sense because one is just in time and one is a more upfront design process.
Yeah. So in TDD, there's this red green refactor cycle.
But I don't know of such a popular cycle or routine that you have to do for types.
It's just like, well, you write the type and then do something like follow the compiler errors, change the type again if needed, follow the compiler errors again, et cetera, et cetera.
There's a lot more process around TDD that has been written down in things that are easy to teach to beginners, but not for type programming.
Yeah. Yeah. See, I don't even have a term for this.
Right. Right. Exactly. Yeah. It's kind of like domain modeling with types.
But yeah, like what are the steps? At what point in your process do you do it? At what point in your process do you update the types? And how do you decide how to update your types?
So let's imagine you're doing both. You're doing TDD.
So in TDD, there's red, you write a failing test; green, you make the code work; and then you refactor. Is refactor the phase where you change your types? Or is it in red? Is it in green?
It's definitely not in red. I think it's in refactor.
Well, keep in mind, in TDD, I think this is a subtle point that's very important when you do TDD in a typed context.
A compilation error is red, is a red step in TDD. And that's very important.
So you would just consider changing the types to be another cycle, just like tests, but it's not a test-related cycle. It's just for types.
Potentially. So like, for example, in an untyped context, if you're writing a unit test in Ruby, you might say like, you expect calling this method to return this value.
And then you get a failing test because what does it say? It says that method doesn't exist. Right?
And what kind of thing is that? It's a runtime error, which the test runner says, oh, this test failed. Runtime error.
And so you fix that. Whereas in Elm, suddenly you have a compilation error. Things aren't working.
And you... But it still is red, even though it couldn't actually run your test.
So it feels like, well, how is it a red test? It's not even running the tests yet.
But the point is that you're pulling along just as much as you need. And so you're ensuring that you're exercising things through tests because they're coming into existence by that process where the test tells you, I need this thing in order to continue or in order to pass.
And you're giving it the simplest thing you could to satisfy that, which means you're getting test coverage of everything. Right?
That's the elegance of that process. And it allows you to split work where you can work on one small slice at a time.
And that's why the process works that way. So if you call a function in Elm that doesn't exist, it's a compiler error.
Now you have to write that function, and you can give it a Debug.todo or you can give it a fake value or whatever.
So there are certain things that you have to do. Debug.todo is a little bit different than returning nil or null.
And you have to work within more constraints.
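A minimal sketch of that "compiler error as the red step" idea (the `slugify` function is a made-up example): a test references a function that doesn't exist yet, and `Debug.todo` lets you satisfy the compiler while keeping the test red, since it crashes if evaluated.

```elm
-- Step 1: a test (or caller) references `slugify`, which doesn't
-- exist yet. That compiler error *is* the "red" step in typed TDD.

-- Step 2: satisfy the compiler with a placeholder. `Debug.todo`
-- crashes at runtime if evaluated, so the test stays red until
-- you write a real implementation in the "green" step.
slugify : String -> String
slugify title =
    Debug.todo "implement slugify"
```

Unlike returning nil or null in a dynamic language, `Debug.todo` is an explicit, compiler-visible placeholder, and Elm will refuse to build an optimized production bundle while one remains.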
It feels a little more formal. And there are differences within typed contexts too: working in Java versus Elm feels very different than working in TypeScript.
But if you're working in Elm, there are certain things that you have to do upfront just to get the compiler happy.
And I've been really interested to hear some of the things that Richard has been exploring with Roc here, where he's playing around with the idea, for example, of allowing the compiler to execute code until it gets to a point where there's a compilation issue.
Basically compiling everything it can in a Roc program.
And then when it hits a dead end where it says there's a compiler error here, essentially it puts a Debug.todo in there for you.
So if it executes that in debug mode, then it raises an exception.
And that is very interesting for testing workflow, because it lends itself a little bit more to this just in time process where you don't have to do everything upfront.
So there's I think there's a little bit of a push and pull with those two different mindsets.
There's also a concept that I've been keenly following in Roc, which is Roc's approach to tags, I forget the term, these global tags that you can reference without explicitly defining them.
And it infers the different tags that are possible given your usage of these named tags, without you having to define upfront:
Here's a custom type. Here are all the variants.
Yeah. If something returns A in one branch and B in another branch, then the result will be either A or B.
Right, one or the other.
Yes. And if you think about that in the context of TDD, it's kind of interesting because now you can write a failing test that says, I expect this to return this variant without defining all of the possibilities for that variant upfront.
So it enables a different type of workflow.
So, I mean, is that good or bad? I don't know.
I definitely... it's very subtle. And maybe, as you say, we do need to more concretely define a process, just like TDD has this very clear, easy-to-teach set of steps with red-green-refactor.
Maybe we need something similar for like TDD in a typed context.
So, yeah, as to why people tend to draw these stark dividing lines between tests and TDD, or sorry, between tests and types.
I think somehow like these different mindsets are looking at the problem of gaining confidence in your system in a different way where one is saying, you know, when we're thinking about types, we're thinking about guarantees and proofs.
And when we're thinking about tests, we're thinking of specific scenarios, right?
Exactly. We're exercising some specific scenarios and gaining confidence through, I don't know, a type of automated check.
It's less formal. It feels like a less formal process than narrowing down these constraints with proofs.
And somehow I feel like people get into one mindset or the other.
But to me, it's like, why wouldn't these two different things play two different roles?
Like, if I'm trying to say, you know, I don't know, I have a game that has a die that can be a number one through six,
then isn't it nice to model parts of the game through that?
And, you know, yes, you want to capture the behavior of the game through tests, because the way that things interact and the behaviors are very subtle.
And you can't model all of that through your types.
And I know, as people who love types,
we want to make all the impossible states impossible through the type system, but you can't.
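The die example might look something like this (a sketch, not code from the episode, and `opposite` is an invented function): the type rules out impossible values entirely, while behavior that the type can't express is left for tests.

```elm
-- Only six values of this type can ever exist: no 0, no 7, no -3.
-- The type system makes those states impossible.
type Die
    = One
    | Two
    | Three
    | Four
    | Five
    | Six


-- A fact about dice, "opposite faces sum to seven", is behavior,
-- not structure, so a unit test is the right tool to pin it down.
opposite : Die -> Die
opposite die =
    case die of
        One -> Six
        Two -> Five
        Three -> Four
        Four -> Three
        Five -> Two
        Six -> One
```

The type does the cheap work (no invalid faces), and a handful of tests over `opposite` cover the subtle part.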
And even if you could, isn't that what a theorem prover is about?
Like languages like Coq and Agda, I think, and Idris.
Yeah. You write your code, and specifically your types, in such a way that it proves that your code does exactly what it is meant to.
That is, follows some kind of specification, through types mostly, I think, and some implementations, but not necessarily through tests.
That said, I don't know how those work. So, right.
I've definitely seen walkthroughs of this type of thing, where the types are like the proof.
And the fact that you have an executable program that fulfills them is a proof, which is pretty fascinating and incredible stuff.
Yeah. In practice, you know, you want simple, easy to reason about types.
And so you want the best tool for the job. Those types are very hard to read.
So it certainly, I think, does depend on what you're trying to model.
And perhaps if you're doing something that's going to be on a Mars rover or something like that, you know, yeah.
Perhaps there would be an appropriate place to use one of these proof tools.
And, you know, that's great. But oftentimes, like if you're writing a game, then what's the cost-benefit of that?
And I think a lot of the time, the cost of writing a type to narrow down a few basic constraints about the primitive things you're dealing with,
not primitive as in primitive language values, but the primitives of your domain, to clearly define the constraints of those core pieces,
and then having some tests that talk about how those pieces interact, works very well for the cost-benefit.
You know, I think that that's what it comes down to at the end of the day is it's like if you can write very simple types and then you can write very simple tests.
You don't have to write tests that exercise every possible type you could throw at it and that it fulfills these contracts.
And so they do their job very well when you just use types in a simple way, getting the sort of 80-20, like, you know, what is it?
I think it's also called the Pareto principle, the 80-20 rule.
Yeah, yeah, you do the 20 percent of the work that gives you 80 percent of the benefit, because it's diminishing returns at a certain point.
And I think that's really the case with types. And I think if you're saying, oh, we don't need types, we have tests,
I think you also sort of get to an 80-20 where you're getting diminishing returns for your tests when types would be the best tool for the job.
So in languages like Coq, Agda, and Idris, where you prove something through these types, the types are really hard to read.
And I do wonder, like, how do you test that those types are proving the right thing as well?
Right. Because if they're complex, then it's some sort of code, right? And you want to test code.
So how do you test that? I'm sure they have some techniques or tools for that.
But, yeah, yes, right. And I think it's easy to get into the headspace of, well, if it doesn't give me a 100 percent guarantee, or how do I prove this?
But I think getting practical, you say, like, listen, how easy is it to reason about the correctness of this system?
How easy is it to spot a failure? How easy is it to fit the behavior of the system into my head with these pieces?
And if you have some very simple types, that's a great tool for reasoning about your system.
And that's like a tool that really lets you work confidently with maintaining some code and updating the constraints as they change.
Tests are very good for looking at what the current behavior is. And if something goes wrong, or if you're adding some new behavior, you write some tests to exercise that and make sure it does what you expect.
And so I think it's using the right tool for the right job. Also, actually, I suspect we may have both watched the same talk in preparation for this.
Types versus tests. There was one at a Scala conference that I watched. What is the name of the talk?
Because I'm not sure. So I watched this talk called Types vs Tests: An Epic Battle by Amanda Laucher.
Yeah, me too. OK, cool. Yeah, I had a feeling that we both watched the one. It was very interesting.
Yeah, I wanted to watch a talk that you probably wouldn't watch. And we watched the same one. That one was very good. Yeah.
Which surprised me a bit, because it was from someone who is from that TDD world, that agile world, and at a Scala conference still.
I was like, huh, she does a good job joining those two worlds together.
I thought so, too. Somewhat contradicting what I said previously, where I said that those worlds didn't mingle.
Well, they do mingle, but not as much as I would like them to. Right.
Absolutely. Which I think is why it stood out as a really good talk, too, because it doesn't get discussed that often.
Usually people are in one space or the other. And unfortunately, you know, a lot of people who are very focused on talking about types don't talk about craftsmanship principles and test driven development as much.
And vice versa. But they really pair very nicely together.
I have to admit, whenever I hear about software craftsmanship, I always hear in the back of my mind that it means TDD, it means agile practices, tiny steps, refactoring.
But it doesn't mean types at all. So. Right. Do you have the same feeling as well?
Yeah, I would say I associate it with types not necessarily being a core piece of that, unfortunately. I mean, in the most common practice, it's yeah, I agree.
So one thing that makes me think is also that you can do TDD in every language. As soon as you have a test framework, you can do it in any language.
You might not have property-based testing. You might not have more advanced levels of testing, like end-to-end testing.
But types, you can only do that in some languages. Right. Yes. So, yeah, it makes sense that there's a lot more writing about TDD than about types, especially since most popular languages are not all that big into defining your own custom types and having opaque types and stuff like that.
So I think it makes a lot of sense.
And a lot of it... I mean, I learned a ton about test-driven development from Kent Beck's excellent book, Test Driven Development by Example.
It's a very nice, simple little book. And from what I recall, it used Java for all the examples.
And, you know, the thing is, like when you're using Java for the examples, you're not using it as a tool for giving you guarantees.
Right. Because you still have casting and all these possibilities to circumvent the type system.
So it's not really operating in the same way where you're able to rely on it.
And also it doesn't have the features that we love so much, like sum types, which are coming.
Yes. Right. But in the time when it was written, that that certainly wasn't a feature.
And it certainly wasn't the idiomatic approach to solving problems that would have been used in a book like Test Driven Development by Example.
So one thing that this Types versus Tests talk, I felt, missed.
It did a very good job covering a lot of the core things that I wanted to hit upon in our discussion.
But one of the things that it didn't talk about that I think is very important is how types and tests fit together and work in tandem,
which is our favorite topic, which if you're playing Elm Radio Bingo, you can go ahead and cross off that square.
Opaque types. I was going to go for Elm Review. I lost.
I'd put that on our Elm Radio Bingo card as well. It's a solid, solid choice.
So what I mean by these two things working together is, let's say you write a unit test for a function and, you know, I don't know.
I mean, you know, one of my go to opaque type examples, you have a function that checks the validity of a username.
So now you have, you know, isValidUsername returns a Bool. And, OK, you've used test-driven development for that.
And it's fully tested code, right? It's fully tested code that takes a string and gives you a boolean.
Well, you have a function or a method that says this username is valid.
Right. Yep. And it's fully tested, 100 percent done through TDD. But is it used appropriately everywhere in the code base?
Does every string that truly represents a username go through that function to make sure it checks that Bool?
Of course, we're we are good coders, right? We'll never forget to do that.
But to me, this is so, so core to how I think about craftsmanship principles is being able to sort of narrow down my thinking about something into a nice, neat, well tested concept where the knowledge lives in one place.
So I'm able to not only organize that logic into a single place, which, you know, don't repeat yourself.
It's about knowledge. It's not about not repeating code. It's about not repeating knowledge. There's a single authoritative place where any piece of knowledge lives.
Well, an opaque type is a great way to represent that single authoritative place, because, well, you can't create it outside of that module.
So it is authoritative because it's the only way you can create a username. And so you use that username type.
And, you know, sure, you could still pass strings somewhere, but it gives you more confidence that you're using that well tested unit in the appropriate places.
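A sketch of that opaque-type pattern (the module, the length rule, and the character rule here are all illustrative assumptions, not the episode's actual code): the variant constructor is not exposed, so the only way to obtain a `Username` is through the well-tested validating function.

```elm
module Username exposing (Username, fromString, toString)

-- The `Username` variant is NOT exposed (we expose `Username`,
-- not `Username(..)`), so code outside this module can only get
-- a value of this type via `fromString`.


type Username
    = Username String


-- A hypothetical validation rule: at least 3 alphanumeric chars.
-- This is the single, unit-testable place the knowledge lives.
fromString : String -> Maybe Username
fromString raw =
    if String.length raw >= 3 && String.all Char.isAlphaNum raw then
        Just (Username raw)

    else
        Nothing


toString : Username -> String
toString (Username raw) =
    raw
```

Any function taking a `Username` instead of a `String` then gets the validation guarantee for free; the tests cover `fromString` once, and the type carries that confidence everywhere.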
So to me, types and tests work very well together. And in this types versus test talk, they were kind of discussing this in the talk and a little bit in the Q&A as well.
They were talking about, you know, how you can make impossible states impossible through your type system.
But certain things you can't represent in your type system. But then you have this opaque type.
You test that opaque type. And now that gives you more guarantees. So they work together, you know.
So we do need more formal write-ups about these processes, as you hinted at. I really think you're spot on with that.
What was the name of Kent Beck's book? Test Driven Development by Example. Right.
OK, well, Type-Driven Development by Example. That's what we need.
Absolutely. Or, you know, Test-Driven Development Aided by Types, by Example.
Maybe, you know. Oh, no, no, no, no. Now it seems like types are less good than tests.
Like, come on. Like, tests are not better than types.
So I think it's important to understand when types are the right abstraction and when tests are the right tool.
So the way I tend to see it... So maybe first let's talk about when people say, well, you don't need tests.
Whenever people say that you don't need tests, what they mean is that you don't need to write as many tests when you have types.
Right. So, for instance, the common example: well, what if the argument that you pass to isValidUsername is undefined or is nil or whatever?
Well, you don't need to check for those if your types say that it needs to be a non-null string or whatever.
And you also don't need to check that this function returns anything other than a Boolean.
So when you have types, it very much limits the wiggle room that a function has, between its inputs and outputs.
The inputs say what is available to the function and the output is what is available to return.
In this enormous space of potential implementations, of potential values to return and to receive, what can you do?
So whenever you add types, you constrain what you can receive, what you can return, and what you can write as the implementation.
And I think that's what people who write a lot of types, but don't write a lot of tests, think of.
And that's partially my case, because I don't write that many tests in practice: there's so little wiggle room,
if you have good types, that you don't really need to test those. Like, for instance, if you have an enum of four things as an input and you return a Boolean as an output, then you have very few implementation possibilities.
I think you have like sixteen or something, very few anyway. So it's going to be hard for me to make a mistake here.
I'm going to make some at some point, probably, and therefore it's still useful to have tests. But the wiggle room is a lot less than if we wrote it in a dynamic language where the inputs and the outputs can be any value.
They can be undefined. They can be JSON, functions, whatever.
So just restricting the wiggle room that you have to something very tiny makes it much more likely that you're going to have the correct implementation, or a correct implementation, or a somewhat correct one.
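To make that "wiggle room" concrete (the `Season` type and `isWarm` function are made-up examples): a function from a four-variant enum to `Bool` has exactly 2^4 = 16 possible implementations, since each of the four inputs independently maps to one of two outputs, versus infinitely many for `String -> Bool`.

```elm
type Season
    = Spring
    | Summer
    | Autumn
    | Winter


-- Only 16 total functions of this shape exist: 2 choices of Bool
-- for each of the 4 inputs. The type has already ruled out almost
-- every wrong program; a test or two pins down which of the 16
-- implementations you meant.
isWarm : Season -> Bool
isWarm season =
    case season of
        Spring ->
            True

        Summer ->
            True

        Autumn ->
            False

        Winter ->
            False
```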
Right, right. And people put a lot of thought and effort into those practices of using types very well.
And then they focus on that one thing and then, oh, it turns out like, well, do I really need to test for this workflow?
And I think, I mean, I don't know, if you take the username example, like testing a valid username, like types are probably not a great tool for that.
Like, sure, you could say that like these are the valid characters and this first character can be this and the second character and characters after can be that.
But like, it's probably not necessary. But tests are a very good tool for that.
And to me, at the core of this are the habits to build in these practices, and also, like, not all tests are created equal.
So if we dig into it, I think a lot of people view types as a hindrance to their productivity.
Which they can be depending on the language and tools that you have at your disposal.
Right. And depending on how you use them. And a lot of people's experience is with Java, where they're not getting strong guarantees, you know, probably from a time when Optional wasn't really baked into it.
They're getting null exceptions all over the place and array index out of bounds, runtime exceptions, casting exceptions.
So not all type systems are created equal. But also, if you're just using strings everywhere in your type system and not really leveraging types, then it's going to feel like nothing but a burden.
It's going to feel like nothing but this thing that's forcing me to implement an abstract, you know, instance of this factory.
And so I have to make like an anonymous class to satisfy this thing. And, OK, great. What safety did that give me?
Yeah, it's only going to give you limited benefits, because if everything is nullable anyway, then the biggest problem is that you have to check for null everywhere, which, I mean, you're going to have to write tests for.
Right. And if you're still using primitive types all over the place, and you're not really making impossible states impossible, and you're not using union types, or don't have that functionality in your language.
So I think that's a lot of people's experience with it. So the way that you use your type system matters a lot, though I think, for a lot of our listeners, we're preaching to the choir there.
But similarly, I think the way that you write tests matters a lot. And, in my opinion, after watching Kent Beck, and watching DHH, the creator of Rails, talk about how TDD is dead, and watching these sort of TDD experts talking him through it and asking, why do you think TDD is dead, and how do you practice TDD?
And what I gathered from all of that is the way he practices TDD is very focused on doing a lot of integration tests, sort of these in between tests. They're not end to end tests. They're not unit tests that are exercising one small unit of behavior.
They're integration tests. And from my experience in my Rails development days, there is a lot of that in testing culture in the Rails community and, you know, these controller tests. So it's not really giving you confidence that your full system works end to end because it's not opening up a browser and running through a user workflow and giving you confidence end to end.
It's not thoroughly exercising all of the possible ways to call one method either. So it gets very messy and you're doing a lot of mocking and stubbing, creating a lot of fake values.
And that is a real problem, because you end up testing your mocks. You're testing your fake value producers, not your system under test. And so when you change something and nothing breaks, well, oh, I guess I was mocking that.
And what happens is your tests become extremely coupled to your code, but don't actually give you confidence about it. So it's the worst of both worlds. And I think that's why DHH had this whole TDD is dead thing, besides just being a provocateur and liking to say controversial things.
Were you pulling your hair out when you were watching the talk?
Yeah, yeah, well, it was just very on brand and it's like, all right, you know, I think people are going to continue to think what they think about testing and if they already thought that testing was a waste of time, then that will reinforce that opinion.
And if they thought TDD was great, then that will reinforce that opinion and have them think about why they disagree. But I think it's an interesting conversation.
To me, the takeaway is it really matters how you write your tests. Now, you can't mock in Elm, particularly, but you can write good or bad tests, useful or not useful tests.
Yeah, if you write Elm program tests, you have to mock somehow.
But you're not going to have those spies or the exact same kind of mocks. You're going to have test data; you're not going to have mocks. And actually, you can't even do white box testing in Elm, right?
So black box testing is when you give inputs and you make assertions on the outputs, but you can't do white box. I'd say not. You can influence the internals by passing inputs. Well, yeah.
I mean, it depends. With Elm, a lot of the time... I mean, when I was doing technical coaching in non-Elm companies, I was spending a lot of my time trying to teach people to follow these practices that Elm forces you to do, like dependency inversion and, you know, dependency injection.
And so instead of mocking things, you can pass in the value, so you have control. Instead of mocking Time.now or whatever, right? You pass in the time. And in Elm, you have to do that, because you can't just go get the side effect.
So you don't have to mock the current date, because you have to have that as an explicit dependency, as an argument, with dependency injection essentially. So Elm does help with a lot of those practices, but nonetheless, you can write useful tests or not.
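As a small hypothetical sketch of that style (the `greeting` function and its rule are made up for illustration), the current time becomes an ordinary argument instead of a side effect:

```elm
module Greeting exposing (greeting)

import Time


-- The current time is an explicit argument, not a side effect,
-- so a test can pass in any fixed Time.Posix value it likes.
greeting : Time.Zone -> Time.Posix -> String
greeting zone now =
    if Time.toHour zone now < 12 then
        "Good morning"

    else
        "Good afternoon"
```

A test can then call `greeting Time.utc (Time.millisToPosix 0)` directly, with no mocking machinery involved.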
You can, and you can scope things into meaningful units or not. And I think the way you organize your code and extract things into modules and opaque types is a big part of that.
The way I see it, whenever I think of TDD, I mostly think of unit tests, because that's what people push towards. That's where you will see the most benefits. And integration tests are always a bit slower and a bit more clunky, especially if you need to do mocks and spies and those kinds of things, which we can't do.
As you just said. But also, integration tests are about connecting multiple things, right? And that's actually where types shine. In a unit test, you're going to give some input and assert something on the output.
If the tests don't cover exactly what is expected, that might not be too much of a problem; your tests are going to cover most of that. But types are contracts, right? They say, well, this thing takes this as an input and it will return this type as an output.
And then that can only be used in specific ways, just like you said with the username and other non primitive types. And well, whoever's going to use those types as inputs or outputs, they're going to have to do it in a correct way because the type checker will validate that for you.
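A minimal sketch of such a contract, using a made-up `Username` module as an example, is an opaque type whose only entry point validates the raw string:

```elm
module Username exposing (Username, fromString, toString)


-- Opaque type: the Username constructor is not exposed, so the
-- only way to obtain a value is fromString, which enforces the
-- rule. Every Username in the program is therefore valid.
type Username
    = Username String


fromString : String -> Maybe Username
fromString raw =
    let
        trimmed =
            String.trim raw
    in
    if String.length trimmed >= 3 then
        Just (Username trimmed)

    else
        Nothing


toString : Username -> String
toString (Username name) =
    name
```

The minimum-length rule here is an arbitrary example; the point is that callers can't bypass it, so the type checker guarantees the invariant everywhere a `Username` flows.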
But that's going to be something that is very hard for your unit tests to verify, right? You're going to have to write multiple unit tests, a lot of scenarios, to figure out when it fails to do it correctly, which you're then going to fix.
So yeah, whenever I think about integration tests, I feel like that's where types are better suited. Or I view the whole thing as a unit test. So unit tests all the way, with big things still considered as units, and for the implementation inside, that's where types shine.
Right. I totally agree. I think that fitting pieces together, wiring... in my Ruby on Rails development days, I thought a lot about wiring, and with Elm, you just let the compiler think about it.
You know, I mean, you think about designing how the pieces will fit together, and then you trust it once you've designed how you want that to work. But you have to think about that in your testing process, as you say, with writing mocks and things as you're writing integration tests in your Rails applications.
And it's very challenging and it takes a lot to gain confidence through test driven development about your wiring, whereas it's trivial in a type system. That's what type systems really shine at. Like they're so good at doing that.
And yes, they're good at making impossible states impossible too. And that's great. But the wiring, it's just, you can't go wrong. And if your wiring is just this function takes a string, this function takes an int, you're missing out.
You're missing out on what you can do with your type system. But yeah, a lot of these integration tests go away. And I think the way that you organize your code is very important. I think the cost of change is very important to this types versus tests question, because a lot of people will feel that tests slow them down, or that types slow them down, or one or both, whichever they hate more.
Yeah. Possibly they'll feel that both slow them down. But if so, maybe they're not listening to Elm Radio. I think the way that you write your tests will affect how much it slows you down or not. And you can write Elm code in a way where things get very tangled up with each other.
And it feels like making a change, you have to change all of these tests and throw things away. But if you are kind of organizing things into nice encapsulated, opaque types that have well defined areas of knowledge and responsibility, and you pull out these clean leaf nodes that are responsible for this one area of work, then things aren't coupled in an awkward way, right?
So you can't separate the conversation about how you couple your system from the maintainability of your types and tests. It's essential. So I would encourage people, if they are feeling like either types or tests are slowing them down, to think about how, like, are you leveraging them to maximum effect?
Like, are you actually getting something meaningful and useful out of them? And are you coupling them in a way that makes the cost of change difficult, just like DHH is talking about with a lot of tests with tons of mocks that are not really giving a lot of confidence, and coupling all these things to the internals of the system?
Well, you can couple things in a way where your tests are very hard to maintain and change in your Elm application. So the way you couple and organize your code is essential for that.
So maybe let's talk about when you need to test. Types are going to check for things that are very general, very generic; they're never going to provide a lot of detail.
So if you say that a function returns an integer, well, the type checker will prove that it will always be an integer, otherwise it wouldn't compile. But it doesn't tell you which one it will be.
That's when you want to test. You want to assert that in a specific scenario, or, if you use property-based testing, in general, the result either has a specific value or is constrained by a specific rule.
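For instance, a property-based test with elm-test's fuzzers could look like this sketch (the property itself, reversing twice is identity, is just a classic illustration):

```elm
module ExampleTest exposing (suite)

import Expect
import Fuzz
import Test exposing (Test, fuzz)


-- The fuzzer generates many random lists; the assertion states a
-- rule that must hold for all of them, not one hand-picked value.
suite : Test
suite =
    fuzz (Fuzz.list Fuzz.int) "reversing twice gives back the original list" <|
        \list ->
            list
                |> List.reverse
                |> List.reverse
                |> Expect.equal list
```

This sits nicely between the two worlds: the type says the function returns a list, and the fuzz test pins down which list, across the whole input space the fuzzer explores.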
So basically, you're going to want to write tests whenever you can't prove something with the type system. In my mind, there are at least two cases for that. One, when you want to verify specific values within the wiggle room that you have, right?
And two, whenever you want to prove things that can't be proven by the type system, including side effects. When your type system doesn't convey the information of which side effects are returned, then you're going to have to write tests for that.
And also, if your type system is unsound, it's giving you some guarantees, but not all of them. If everything is nullable, like in Java, then you will want to have tests that prove that things are never null, things like that.
So I think it's mostly those things: when you want to test specific values, when you want to test side effects, and when you want to test things that can't be proven by your type system.
Right. Yeah, I mean, types, to me are all about constraints. They describe and enforce constraints, and tests are about behavior. And there's an interplay. There are times when you'll want to capture a constraint in a test because it's very hard to express in the type.
But yeah, a lot of the time, you describe your constraints in your types, and then you test the behavior in the tests. You just can't test the behavior with types. So yeah, to me, it's your business logic.
Right. Now, I'm usually not going to be writing view tests. Martin Janiczek recently gave an Elm Online talk where he was showcasing how he uses elm-book to do this sort of storybook-driven development style of writing view components in his Elm app.
And I thought that was very cool. To me, that's pretty sufficient for testing visual elements, and I don't find much value in writing unit-level tests for that. I think it's very valuable to have end-to-end tests, not integration tests that are faking things out, but end-to-end tests that are actually opening a browser and running through a user workflow, Cypress tests, things like that, to give you confidence in the system.
Oh, sorry. What if you have view code that has quite a bit of logic that returns HTML?
Great point. So in cases like that, I would tend to already want that as a separate testable unit that is invoked by my view logic, but not spread out all over in my view code.
Okay, so it's kind of like, wrap early, unwrap late. You unwrap to HTML as late as possible, but whatever logic you have, you want to do it with specific types or maybe even just primitives, but not HTML, which is harder to test.
Yeah. And, like, to me, I want to split out my business logic from my view logic and templating. If I'm writing a test that's testing my templating, it's just writing the same thing twice, and it's coupling me in a way where I change this thing, that thing breaks. It just feels like brittle tests.
It doesn't feel valuable to me. It doesn't feel like it's preventing me from causing bugs. It just feels like it's slowing me down from making changes in the system and making it more frustrating.
Yeah, absolutely. I was mostly thinking of like, if you have branching conditionals in your view code.
Right, right. So, for example, you could have some complex logic for how you render names based on, you know, whether it's a guest login, or whether there's a last name or a username, and you pick this display name to use. You could have complex logic with lots of branching and lots of complexity.
That feels like business logic, right? The key thing is, if it feels like business logic, then I'm going to want to encapsulate that and invoke it from my view template.
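As a hypothetical sketch of pulling that display-name logic out of the view into a unit-testable function (the `User` shape here is invented for illustration):

```elm
module DisplayName exposing (User(..), displayName)


-- Invented shape for illustration: the branching logic lives in
-- a plain function, so unit tests never have to touch HTML.
type User
    = Guest
    | LoggedIn { username : String, fullName : Maybe String }


displayName : User -> String
displayName user =
    case user of
        Guest ->
            "Guest"

        LoggedIn details ->
            case details.fullName of
                Just name ->
                    name

                Nothing ->
                    details.username
```

The view then just renders `Html.text (displayName user)` and stays dumb, while the tests exercise `displayName` directly.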
But I want my view template to be kind of dumb. And I don't really want to test that through unit tests, but I do want to unit test my business logic. And I think this is really essential: it's just a skill and a habit.
It's kind of hard to learn without just sitting next to somebody who's done this a lot and probably sat next to somebody else who had done it a lot. You know, you go up the chain enough, and then there's probably somebody that sat next to Kent Beck and did it a lot and Kent Beck came up with this discipline, right?
But that's kind of the easiest way to learn how to decide what to separate out as business logic. It's this design sense that's hard to just learn naturally by thinking it through. But to me, that's how I think about it.
Business logic, I want to encapsulate that out somewhere as a unit and then test that unit. So unit testing... figuring out what the units are is very hard.
Okay, I'm starting to think we're getting at the point where we need to answer the question. Dillon, types or tests?
I actually don't even know. Are we allowed to swear on this radio?
No, I don't think we ever have.
That elicited a very strong response.
Yeah, let's imagine you only had the choice between writing tests or writing types. What would you prefer?
All I have to say is, if you had modeled your constraints better in your possible responses, then I wouldn't have been able to respond that way.
Yeah, but everything you say is just stringly typed.
That's true. Our whole podcast is.
I'm working against your interface and you only return strings.
Maybe strings, because you could keep silent.
Exactly. It's hard. I mean, I genuinely can't make up my mind. I really can't. Like, you can't make me choose between my children. I love them both.
You don't have any children.
And more importantly, you don't need to choose one or the other. Like, in what way does one take away from the other? They enhance each other.
Right. Well, you could, or you could write Elm.
Well, that's a bold choice.
So I think, do you have an opinion? There's something I want to mention, but before I do, what would you choose?
I would totally choose types.
I suspected as much. Why do you think that is?
Yeah, because I also said, like, I don't write that many tests because I feel like I constrained that wiggle room enough for me to not mess up too often.
But also, the experience of writing unit tests versus types is so different, because for writing unit tests, you need to have a somewhat clear understanding of the API, and if you change it, you're going to have to update your unit tests.
And the experience of finding issues is also going to be very different when you have test failures versus when you have compiler errors.
So if you change your production code or your types, then unit tests will just start breaking, right?
They will say, hey, you've got to, this is not returning the correct thing or it crashes for some reason.
And you're going to have to figure out yourself where the problem lies.
But when you change a type, the compiler will tell you, hey, you've got a problem here and you go fix it. You got a problem here, you go fix it and so on and so on.
You don't have to figure out where the problem lies. The compiler tells you. And that is just so much more useful, I think.
So I know you're going to say, like, oh, then you don't have to think too much about your API upfront.
You can change it and that's fine; it's just another type error.
Right. No, I mean, these are great points. And I suspect that some of this types versus tests conversation... again, as I said earlier, I think that often when people are dealing with type systems that aren't sound type systems,
they're not working with guarantees from types.
They're working with checks from types, which feels a lot like a test. A test checks one thing. You run a test, it checks one thing.
You run a fuzz test, and it checks things within the constraints you build up for it to check.
But it's finite. Whereas types work the other way, constraining things and giving you guarantees, not checking one thing and saying, yes, that one thing does what you think.
One kind of testing we haven't talked about is writing assertions in your code.
I know that some ecosystems are bigger on that. Maybe Rust, maybe C-like languages, where you're going to write an assertion checking that some list is non-empty, for instance.
Invariants. Yeah, invariants, but writing those in your code. And if that is ever false in a specific scenario, then it's going to crash, I guess.
But again, you need to write unit tests to find those in your CI, right?
Right. And yeah, it's interesting, but it feels like something that impossible states or opaque types or these different techniques should be able to help you with.
So, yeah. So I think the way a lot of people are used to working with type systems, it feels more like testing because it's just checking one thing rather than giving you guarantees.
But when you're working with it as guarantees, it feels qualitatively different.
As you were saying, when you have types with guarantees, tools can help you. The compiler can point you to: hey, here's what's wrong in this specific spot, here's what you can do to fix that.
Also static analysis tools, maybe elm-review... cross that off your bingo card.
Actually, elm-review can infer some things from your code. We don't do much, but it could potentially. But figuring out things from your tests, that would be interesting and a lot harder.
Exactly. Because that's the whole point to me. Tests are bare metal. They're this very low level thing.
Types are not as expressive as code, and that's why they're useful. It's not arbitrary code.
What do you mean, types are not as expressive as normal code?
They're not an unconstrained thing where you're writing arbitrary code. You're saying, you know, type User equals Guest, or Admin AdminDetails, or Regular UserDetails.
You're not saying, oh, and if this conditional checks some runtime condition... You look at it and you can fit it in your head all at once, without imperatively running through the code and all the interactions of a complex system.
That's what makes them interesting. And that's what makes them useful for tools. So static analysis tools, optimizations as well.
Optimizers also. Yeah, absolutely. Optimizations, IDEs, code completion, stuff like that. Yeah, absolutely.
And I just feel like we're barely scratching the surface of what we can do with tools, with constraints, with very strong guarantees.
Right. Like, you know, automatic code solvers. GitHub Copilot is pretty popular these days.
Well, what if there was something that took a completely different approach to GitHub Copilot, and used its understanding of the constraints baked into the language and the types to suggest possible solutions?
Right. Like, I saw a talk that was showcasing this automatic function generator that takes all of the values that are in scope and tries to infer: here are 10 possible functions I can create with this.
It uses these different values, because this is a list and this is a function that takes a value and returns this.
And then I can list out fold over this. And so here are 10 different things you could do with this.
And a lot of the time it auto generates code that you're like, oh, yeah, that was what I wanted to do with these inputs.
Tests can't really fulfill those types of possibilities. So types are very compelling in that regard.
But, you know, you can use tests to do a worse job at being a type system and checking constraints.
So I think tests are just so good, and I wish that people in the types community would embrace them a little bit more.
But maybe we need to give some really good resources for how to how to do that.
Yeah. To make good use of tests, you do need to be... I'm not going to say a good developer, but you need to have some good habits.
Yeah, some good habits, because if you don't have the habits of running a test, then you don't have any guarantees at all.
Running the type checker is a lot less effort, in my opinion.
And also, one thing that is interesting to notice is that it is actually quite easy to ignore a failing unit test, because you can delete the unit test.
Right. Or, you know, you can "forget", with big quotes, to write the unit tests.
Right. But with a type checker, that won't be possible.
So type checkers are much more general, also in the sense that they will look at the whole code base.
So it's going to be better to find a lot more issues, especially if you don't have good habits.
Test-driven development is only as good as the culture and the habits that it's operating within.
Yeah. I wrote a blog post a while back called Relentless Tiny Habits.
I think that test-driven development is fascinating in the sense that it's not necessarily any particular difficult skill.
It's more just like, yeah, it is that simple, but you just do it all the time and don't not do it.
And it's a habit, but it's hard to build that habit.
And I think there needs to be like a mindset shift.
I think when you have that mindset shift, when you go from thinking that it's something that slows you down to thinking that it's something that speeds you up.
I think that's essential.
But it's also something that you need to do yourself.
You need to get better. Right. But also your whole team needs to do the same thing.
Because as soon as someone doesn't adhere to this philosophy of writing tests first, of doing TDD the right way.
Well, then the whole system will not work as effectively.
Whereas one person could just add tests, add types and improve everyone's lives with a lot of work.
But yeah, I mean, someone can take that nice opaque type and expose the constructors and just build it directly, or not use the Username opaque type and just start passing strings somewhere, too.
Right. So, I'd like to curse again. But there are cultural aspects to both.
And it's not a coincidence that extreme programming is heavy on test driven development and pair programming.
Right. Because it's cultural and it's about habits and spreading knowledge and spreading cultural ideas.
And if you don't do that, then it's not very useful.
I still imagine that if you have a team of 10 developers who are keen on doing TDD, except two people, I feel like those two will always pair together, because they're going to be less annoyed by the other person.
Like, oh, yeah, you and me make a good team because we don't say, oh, please write a test first.
I think a lot of it comes down to the paradigm, like the lens that we look through.
And if you see types as a burden, then the way that you use types will look very different, if you use them at all, and maybe not at all if you can avoid them. If you're writing TypeScript, you're probably going to use a lot of any's or inferred types.
Also inference apparently is a big thing.
Type inference. And you're probably going to just JSON.parse and get your any type and pass it around, without using something that does what JSON decoding does in Elm, where it gives you guarantees about the JSON values you're getting.
But if you perceive it as something that gives you value, to check those types and be able to work with those types, giving you confidence and guarantees, it's going to change the way that you leverage types.
If you're working in Elm, you're going to be making impossible states impossible and you're going to be using parse, don't validate and all these things that let you get more value out of types instead of saying, oh, this is just a burden.
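A small sketch of that boundary in Elm, with a made-up `User` record: decode the JSON once into a well-typed value, instead of passing raw parsed data around.

```elm
module Api.User exposing (User, userDecoder)

import Json.Decode as Decode exposing (Decoder)


-- Decode once at the boundary; everything downstream gets a
-- typed record and never re-checks that the fields exist.
type alias User =
    { id : Int
    , name : String
    }


userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "id" Decode.int)
        (Decode.field "name" Decode.string)
```

Running `Decode.decodeString userDecoder` on malformed JSON yields an `Err` you have to handle at the boundary, rather than an `any` value that fails somewhere deep in the program later.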
And I think it's the same with tests. If you view tests as a burden and a moral responsibility... which I really don't think it's constructive to talk about these things as a moral responsibility or a professional shortcoming.
If you don't follow these practices... in my opinion, that framing is just very counterproductive. To me, it's like, hey, this is a tool that allows you to work in a much more enjoyable and safe way, where you're just flowing through your code and you don't have to keep manually testing this thing.
And is it working now? Is it working now? It's very satisfying to be working with this auto test runner. And you want to refactor? Oh, no problem, let me just go and rip things out.
Like, if you like the feeling of refactoring Elm code without tests, then a well tested code base that is nicely abstracted, with nice units that have nicely defined responsibilities that aren't heavily coupled together...
That feels really good too. But it's a paradigm shift, and a lot of people view tests as a burden. I think that's the hump to get over. And culturally, the first step is seeing that in action.
But I think a really good way to build this up is to do it outside of production code. So, you know, there's this concept of code katas, which is code that you don't ship to production, where the purpose of it is learning. So you take a simple exercise: Roman numerals, fizzbuzz, things like that.
And you use all of the techniques that you're trying to learn; you only do very disciplined red, green, refactor. Because when you're working in a large production system, you can build up bad habits, and you can take steps where you're actually coupling things in a way that makes it harder to work with. When you're working on these simple problems, you can experiment and learn and develop these habits.
Oh, I always feel like you can play with Elm code more than other languages. But yeah, absolutely. One thing that just came to my mind is that we didn't mention compilation times, because some languages have very long compilation times.
So if you want the type checker, the compiler, to tell you, hey, there's a problem here, there's a problem there, or to make guarantees for you... well, how fast the compiler runs will impact the experience that you have.
And if the tests run a lot faster than the type checker, then yeah, I think it makes a lot of sense that you would write more unit tests.
In Elm's case, both are really fast. Well, I would say the type checker is much faster than the unit tests, because the more tests you add, the slower they run. But yeah, they're both fast in our case. So we're pretty lucky.
Unit tests are quite fast. Yeah.
Unit tests are quite fast in Elm.
Depends on how many you have. If you have 10,000, it's gonna take a few seconds, probably.
Depending on what you test, right?
Even so, that's not so bad. What tends to happen a lot in Ruby on Rails shops is you get a lot of integration tests that are spinning up a database. There's some mocking, but there's also some spinning up a database, and they get very slow.
And that is painful because now you have to run a subset of your tests because it's too slow, the feedback loop, but they're not giving you full confidence.
And they're flaky, because spinning up a database sometimes gives you non-deterministic results. And sometimes a test depends on time, giving you a result based on when you run it.
Or worse, depending on other tests being run.
Right. Exactly. The order of the tests being run. And so you wonder why DHH says TDD is dead, right? It's not a big surprise.
I feel like we have done a pretty good round of it. We have done other episodes on testing and on opaque types.
I actually wonder, did we say opaque types enough? Have we hit our quota or should we say it a few more times?
I'm not sure. We could always say it a little bit more. But if you haven't listened to our opaque types episode, as we've said, mandatory listening, that should have been episode number one of Elm Radio.
That should have been our most listened to episode. It actually isn't.
Yeah, that's true. We'll keep pestering people until it becomes our number one listened-to episode. And people, remember to subscribe to our podcast; some people are not subscribed and getting every episode.
Subscribe in your podcast client, give us a rating on Apple Podcasts, and follow us on Twitter. And Jeroen, until next time.
Until next time.