Developer Productivity

We share our productivity and workflow tips, and how they change the way we write Elm code.
April 11, 2022


Hello, Jeroen.
Hello, Dillon.
And what are we talking about today?
Today we're talking about developer productivity.
One of my favorite topics.
Yeah, right?
So we're going to talk about how to become a 10x developer.
Only 10x.
Elm can make you a 100x developer.
So how do you become a 100x developer, Dillon?
That's it.
All right, that's the show today.
Until next time.
You're joking, but yeah, we both know that it does make you faster in the long run for a big project where you need to maintain it for a long time.
I mean, I think no one would be surprised that we believe that.
Of course, not everyone would agree.
Maybe let's say that'll get you 2x, but we've still got a ways to go.
I think probably the big, you know, whenever I hear people talking about 10x developer, I'm always thinking like, it's not about being a 10x developer.
It's about being like a 0.1x developer where you do one tenth of the things that are the most important things.
Wait, you're a 0.1x developer and you do one tenth of the work?
Do less of the work, but it's more valuable work.
And so you deliver more valuable things through less effort and therefore you can produce more value.
But you're not necessarily producing more code.
You're not necessarily being more productive.
You're being more effective, right?
It's sort of that distinction between effectiveness and productivity.
But then you'd have to do less than one hundredth of the work if you want to be a 10x developer.
Otherwise you're just a 1x developer, which is fine.
I mean, we're lazy.
We have a lazy job, right?
We try to be lazy.
So being a 1x developer just by doing less work is fine.
If you can do more, then even better.
And "10x developer" sounds like a term that calls to mind overworking, burning yourself out, being the hero of the team who finishes all the tasks while no one else understands the code that person is working on.
And these are all extreme anti-patterns, the types of things that I've spent a lot of my career trying to disentangle, because on the surface they seem like they're helpful, and people get really excited about the team heroes and everything.
But that's actually detracting from everybody else's ability to contribute.
That's creating code that only one person can understand.
So taking breaks is, I think, an important part of the process.
And for every one task that you say yes to, say no to ten others. I think that's a really important part of the process.
Then you'll be a 10x developer.
So I guess the most important question to answer is, what do you want to be?
What is the best?
To be a 10x developer, a rock star, or a ninja?
Because those questions have not been answered.
Are they the same thing?
What about a pirate?
It's extremely simple.
You want to be a 10x rock star ninja.
How can you be a rock star if you try to be sneaky?
This is a good point.
It's the rock star ninja paradox.
The famous one.
Didn't Einstein call that a paradox?
I'm sure of it.
I read that on the internet.
We talked about incremental steps, or tiny steps, in a previous episode.
Yes, we talk about that pretty often.
I think that's one way of making sure you're developing in a more efficient manner.
I believe that's it.
You can go back to that episode if you want.
Do you want to summarize that?
I'm often thinking about keeping code in a continuous state of being valuable, working, functioning, green tests.
The longer I'm in a working, functioning state, the better.
A non-incremental step would be building up a lot of partially working code without any feedback loop in that process.
That's a risk.
By cutting off that feedback, you're not going to be able to catch problems as they come up.
You're going to be building a lot of things on top of unverified assumptions.
The way I think about it is you want to set up your feedback loops before, so you can have the benefit from those feedback loops as you do the work, not after.
If you write a test after you write the code, is that a good thing?
People will have different opinions on whether the test itself will be an effective test.
For example, it means that you could have introduced code that you didn't test.
If the only code you write is in response to a failing test, that means that you necessarily have tested all the things that you built.
That was the only reason that you wrote code, was in order to make a failing test pass.
Putting that aside, you've also done the work of writing this test, but you didn't get the benefit of it.
You didn't get to use it as a feedback cycle to help you build the feature.
Why not do that first?
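As a rough sketch of what writing the test first can look like in Elm with elm-test — the `slugify` function and its behavior here are invented for illustration, not taken from the episode:

```elm
module SlugifyTest exposing (suite)

-- Test-first sketch: the test below is written before slugify exists,
-- and the implementation is only added afterwards to make it pass.

import Expect
import Test exposing (Test, describe, test)


suite : Test
suite =
    describe "slugify"
        [ test "lowercases words and joins them with hyphens" <|
            \_ ->
                slugify "Developer Productivity"
                    |> Expect.equal "developer-productivity"
        ]


-- Written only in response to the failing test above, so every line
-- of it is exercised by a test.
slugify : String -> String
slugify title =
    title
        |> String.toLower
        |> String.words
        |> String.join "-"
```

The point of the ordering is exactly what's described above: the test exists while the feature is being built, so it acts as the feedback loop rather than as an afterthought.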
If you get feedback 10 times more often, do you become a 10x developer?
Is that part of the solution in your opinion?
I think so. I think there's a lot of waste, in the sense that lean thinking uses the term.
We create it by building features upfront, not oxbow dead code, but dead code that we create because we're writing a lot of unused code just in case, without getting feedback.
I do think that that is crucial for getting more out of your time.
And again, the same goes for the feedback loops higher up in your whole production pipeline, for seeing that the things you're creating are valuable.
If you're not doing frequent releases, you're not getting feedback on whether you're shipping valuable features.
We're kind of being tongue in cheek about the whole 10x developer thing,
but that's what it would really look like to be more effective, because it's more about effectiveness than productivity.
You want those feedback loops at all levels or at least several levels.
Yeah. Another thing I think is really important.
Teams will often start slowing down because of technical debt, and there can be these boom and bust cycles, feast and famine of getting…
What are those?
Well, the idea of feast and famine, it's like you have these spoils one day that you can feast on, and then the next day you don't have anything to eat.
You want a more sustained, healthy level all along the way.
Yeah, you don't want hurricanes followed by droughts followed by hurricanes.
Exactly. You just want good weather all the time.
We want the same thing in our code quality.
We don't want to have code quality that is significantly slowing us down and making it hard to make changes without introducing bugs,
making it hard to ship new features or do architectural changes, things like that.
Then suddenly we say, okay, we need to go and take a few months and do this refactoring and going back and forth in these cycles.
I think it's really important to bake in quality to the work itself.
Quality is not a separate thing that you add on.
Quality is something that you bake into the process.
If you don't, then you will have to do it after the fact at some point.
Exactly. That's the thing. You have the context.
This is a lean concept as well. The concept is called built-in quality.
It's a concept from the Toyota manufacturing process, but it very much applies in software, I believe.
I think there's a lot of great nuggets of wisdom in that thinking.
The idea is you've got context and do the things while you have context rather than as a separate step.
I really do believe that quality is not something that you can tack on after the fact.
You have to build it into the process itself.
This is generally true of any sort of...
There's a tendency to want to go add quality after the fact, go do big refactorings after the fact,
which granted, sometimes we do need to do big technical changes.
I'm not saying that that doesn't exist,
but I think that often teams will say, we don't have time to refactor, we have to ship this feature.
But I think you have to bake it into the process of building things, that you build things with quality.
There are different reasons why you would do refactorings.
Sometimes you notice that code is bad according to some metric or gut feeling.
And then you have some refactorings that you do because the things that you need to do
are different from what you're expecting originally.
The company said we need to build A, and then they say we need to build B.
And some of it is shared with what was A, but you still need to adapt it.
Then you do need to make some refactorings to make B possible.
You didn't know that when you first worked on the task.
So those are refactorings that you will have to do, and you can't avoid them.
You can't predict the future unless you do premature abstraction and optimization,
which have their own problems.
But yeah, if you know the context well, then you can build the quality while you're working on it.
Right. And to a certain extent, quality just means not building those premature abstractions.
Sometimes it's okay to have some duplication.
Jumping onto that and creating an abstraction right away can leave you with overly abstracted code, or code with abstractions that don't solve a real need or pain point.
So I think always doing these things in response to pain points really leads to more sustainable, maintainable code.
Another thing that I think there's a tendency to try to do after the fact is putting the pieces together.
So you build one system, you build another system, you build the front end piece, you build the back end piece.
And actually putting the pieces together is the really hard part.
Like integrating with some external service, that's where the risks are.
So start with that risky unknown part.
You want to get that out of the way as fast as possible so that you're not doing things with faulty assumptions.
So you don't want to build on a shaky foundation of bad assumptions,
but you can't really figure out if those assumptions are bad or not unless you actually do the work.
So I think what's really essential is understanding that the biggest risks are in fitting the pieces together.
And that's why I think it's so valuable to get full end to end slices as quickly as possible.
So one example that would be, you have a client and you need to connect to the back end and show the data
or even process the data.
So you would build a very simple HTML page which only makes the request to the back end and only shows the data unprocessed.
But then for that, you need a client which is super simple and you need a back end that returns anything.
Right. Yeah, that would be a full slice.
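A minimal version of that slice in Elm might look like the following sketch — the `/api/data` endpoint is made up, and the page just shows whatever string the backend returns, completely unprocessed:

```elm
module Main exposing (main)

-- Thinnest possible end-to-end slice: one request to an assumed
-- backend endpoint, raw response rendered as-is.

import Browser
import Html exposing (Html, text)
import Http


type Model
    = Loading
    | Loaded String
    | Failed


type Msg
    = GotData (Result Http.Error String)


init : () -> ( Model, Cmd Msg )
init _ =
    ( Loading
    , Http.get
        { url = "/api/data" -- hypothetical endpoint; anything the backend returns will do
        , expect = Http.expectString GotData
        }
    )


update : Msg -> Model -> ( Model, Cmd Msg )
update msg _ =
    case msg of
        GotData (Ok body) ->
            ( Loaded body, Cmd.none )

        GotData (Err _) ->
            ( Failed, Cmd.none )


view : Model -> Html Msg
view model =
    case model of
        Loading ->
            text "Loading..."

        Loaded body ->
            text body

        Failed ->
            text "Request failed"


main : Program () Model Msg
main =
    Browser.element
        { init = init
        , update = update
        , subscriptions = \_ -> Sub.none
        , view = view
        }
```

Everything else, styling, parsing the data, computations, can then be layered on top of a slice that already proves the client and backend talk to each other.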
Yeah. And then you already have built some of the complex parts, like actually building a server
or choosing your tech stack maybe.
Right. Right. Yeah.
Which is weirdly enough, pretty hard.
Unless you're biased like we are and you just choose Elm for the front end, that's easy.
But I don't know what I would choose for the back end.
So for me, that would be already a few questions that would be solved then.
Yeah, all those decisions play off of each other.
And so if you go too far down one path without having made all those other decisions,
then you might find that the pieces don't fit together once you say,
okay, well, I've built all the pieces. Now I just have to put them together.
That creates a lot of extra work.
Yeah. And then every piece can be updated individually, like making the page beautiful,
actually doing some computations on the front end or the back end.
Right. Because at that point, you have a feedback loop built in.
You're actually doing a thing.
Yeah. So now another thing we talk about a lot is tiny commits.
I thought it might be good to break that down a little bit because you can do tiny commits a lot of different ways.
And so I think it's important to kind of clarify a few things about tiny commits.
Tiny commits is about actually working in tiny chunks, tiny slices,
and doing these types of meaningful, coherent pieces.
I mean, if you do a refactor, like extracting a function, that can be a tiny commit.
That's an obvious one because separating out refactoring steps from feature steps is extremely valuable for a lot of reasons.
One is for me, it's just less cognitive load to keep in your brain
when you're thinking about what in progress changes do I have right now?
Yeah. When people review your code or your pull request and they go through the pull request commit by commit,
then the refactoring steps don't add to the noise.
Right. Exactly.
If someone extracts a function or moves a function, that is really simple.
No need to review it almost.
But if you review the entire thing or a bigger chunk of change,
then all those things are mangled together and it's hard to know what was refactored and what was feature.
Exactly. Yeah. And if you break something midway through, I mean, it's like playing a video game
and you haven't made a save checkpoint in a long time, right?
And now you have to go all the way back.
Yeah. I often say commit as if you thought you were going to mess up very soon.
Because sometimes you get into arguments like, hey, should I commit now?
Yeah. Because what you have right now is working.
You don't know what the future will bring.
Because if you mess something up and it's potentially going to be hard to debug,
then one of the easiest ways to fix the issue is to remove everything and start all over again.
But that's more work if you erase a lot of code, a lot of changes.
But if you committed like 30 seconds ago, well, you know what changed
and you can redo that just by typing on your keyboard again.
Exactly. Yeah.
And if you're looking through the git diff to say, what did I do wrong?
There's less to look through.
So you're reducing that search space.
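Sketched as a shell session, the tiny-commits workflow might look like this — the file name and commit messages are made up, and it runs in a throwaway repository so it's self-contained:

```shell
set -e

# Work in a throwaway repository so the sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

# Commit 1: a pure refactor, nothing else mixed in.
echo "calculateTotal helper" > Checkout.elm
git add Checkout.elm
git commit -q -m "Extract calculateTotal helper (refactor, no behavior change)"

# Commit 2: the feature change, on top of the clean refactor.
echo "bulk discount" >> Checkout.elm
git add Checkout.elm
git commit -q -m "Apply bulk discount to large orders"

# If the next step breaks something, 'git diff' is a tiny search
# space, and discarding the in-progress change loses seconds of work.
git rev-list --count HEAD
```

Each commit is a save checkpoint: reviewers can skim the refactor commit, and a bad experiment only costs the 30 seconds since the last one.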
And yeah, if there's one thing that maybe all of us Elm developers have in common,
it's maybe a mistrust of our ability to do things.
We don't trust ourselves to do side effects and mutations anywhere we want.
I mean, I don't trust myself to do big changes.
And I'm proud of that.
I wear that as a badge of honor.
I think that that's just an insight that I have about our lack of ability to...
It's okay to accept that we're not going to do hard things well.
So let's just make them easier.
That's fine.
Just make things easier and you'll do a better job.
Well, I sometimes trust myself and sometimes I'm wrong.
It does happen that sometimes I'm right.
In which case I celebrate and I still feel bad that I did it the way I did.
I should have done tiny commits.
I really like a quote from Kent Beck.
Kent Beck created test driven development and a lot of these technical practices that we are familiar with.
And in his book, Test Driven Development by Example, he talks about this analogy of changing gears in a car.
And he says, taking smaller steps is like dropping into a lower gear.
And he says, do you always need to be in that low gear?
Not necessarily, but it should be available when you need it.
So you should have the skill set that's needed to do things in these small, cohesive increments, these tiny steps.
But sometimes you might not need it and that's okay, but you should be able to drop in and out of it as needed.
And I would say more often than not, we think we don't need a tiny step when we actually do.
I mean, you're rarely going to buy a car that only has park and fifth gear.
So I hear people sometimes talking about how do I ship my side project?
Or how do I make sure that I don't have a bunch of dead side projects lying around?
What's been your experience with trying to make sure that you're shipping things when you're working on side projects?
Oh, I just make more side projects and release some of them.
And then people think that I release all of them.
I don't know. My focus is not always there.
So I tend to switch from one project to another pretty quickly.
And then sometimes when the project is not done or it reaches a hard part, then I put it away for a while.
And next time I work on it, I forgot everything.
And that gives me more reasons to put it away.
So I'm in that part of the community where I create side projects and don't publish them as often as I'd like to.
Oh, okay.
Or just branches for new features in projects that I own.
It's mostly that actually.
Well, are they small experiments that you're testing the waters and they don't pan out?
Or are they things that you've nearly completed and they don't quite get shipped?
It's usually like I have it working for some parts, but then I hit something that is pretty hard to solve.
For instance, the one that I have in mind is detecting misuses of Html.lazy, which is super interesting and would be super useful.
So it's easy to a certain extent.
And then the next milestone is super hard.
And I don't know if people would want to trust that rule if it doesn't report a lot of the errors that they're actually having.
I see.
You shouldn't trust it.
Oh, Elm Review doesn't say anything.
So I have no issues.
No, you have a lot of false negatives because I didn't finish it.
I see.
That is interesting.
That's a very good point.
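For context, the usual Html.lazy pitfall looks something like this sketch — the view functions are invented, but the general rule holds: `lazy` only skips re-rendering when it receives the same function and argument references as on the previous render:

```elm
module CommentList exposing (bad, good)

import Html exposing (Html, li, text, ul)
import Html.Lazy exposing (lazy)


viewComments : List String -> Html msg
viewComments comments =
    ul [] (List.map (\c -> li [] [ text c ]) comments)


-- Effective use: a top-level function is the same reference on every
-- render, so lazy can skip work when `comments` hasn't changed.
good : List String -> Html msg
good comments =
    lazy viewComments comments


-- Misuse: this lambda is a brand-new value on every render, so the
-- reference check always fails and lazy never skips anything. This is
-- the kind of silent problem a rule like the one described would flag.
bad : List String -> Html msg
bad comments =
    lazy (\cs -> viewComments cs) comments
```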
So for people who want to ship side projects, I'll at least share the tips from my experience that have worked best for me.
One of them, well, we already talked about it: set up feedback loops.
I wouldn't ever be able to ship side projects if I didn't. Like I think we've talked about on this podcast before, the end-to-end test suite that I set up for Elm GraphQL gives me extreme confidence in features, because I generate the code for several different GraphQL schemas, including all of the examples folders.
And I know that I haven't broken something.
So it makes it like a lot less intimidating to go work on a new feature because it's far easier to know whether I've broken something.
So setting up those feedback mechanisms is very important for me to continue to have that motivation and not get overwhelmed as I'm getting close to releasing something.
You also know what to work on because you know which tests are still failing.
Exactly. Yeah. Yeah.
It gives you these little wins that you can move towards that keep you motivated.
And it just makes it more fun when there's a feedback loop.
Like I don't get bogged down as quickly.
Another thing that's really important for me to be able to ship side projects is having things in a constant state of completion.
So rather than going, you know, like we were talking about earlier, long stretches of time where things are in a partially working state.
I try to keep things in a working state, building the most valuable piece that I need next in the smallest slice that I can.
And then I can ship it at any time and I can step away.
If I step away from a project and get burnt out on it or get some new shiny thing that's more interesting or whatever it might be, which happens.
That's just something that I know happens.
I think for many, many of us who work on side projects that happens, so I know that I need to keep things always in a state of completion, where things are shippable at all times, because it could happen at any point.
And if I get distracted by some other shiny thing, I say, OK, well, I guess I'm going to ship it like this and I can.
But another thing that I do is I try to pick things that I know are going to be valuable and easy, as much as possible.
For example, with Elm GraphQL, there are far harder problems than generating code for a type-safe schema.
As far as the value-to-difficulty ratio goes, I'd say it's pretty high for that.
And I try to pick problems that are like that because I know it's going to be more motivating than it is difficult.
One of the bigger problems that I have... but I don't know if I want this to turn into an open source therapy session.
Isn't every episode you're in?
I don't know.
It's like when I make something, I want it to be of high quality.
I want it to be very useful to everyone or to some people at least.
And I want to write a very nice blog post explaining the problem, explaining how it solves the issue or what the underlying issue is in detail.
So that's often what I do in the blog post, like the one about tail-call recursion.
I explained how it actually works in Elm, and then how Elm Review is now able to detect issues regarding that.
And I think that makes for a nice blog post, at least in my opinion.
And writing is actually for me a pretty hard part of it.
It's like motivationally, it's hard.
Absolutely. Writing is hard. Writing release notes is hard.
Writing change logs and blog posts and announcement posts.
And yes, it's far easier to write code. And it's almost like code is the easy part.
You get to the end and you're like, well, I did all the hard work. I've been waiting for this moment.
You're like, oh, now I have to write something about it, though.
There have been projects that I postponed by two months just because of that.
I think I heard someone put it really well once. It was something like, writing is something that I love having done.
I am actually considering asking someone to write the blog post for me.
I would almost be ready to pay someone.
But I was like, yeah, they wouldn't do it the way that I want it done.
You could get a ghostwriter: published under your name, but secretly written by someone else.
That would be good. No one needs to do it like I do, right?
Whether it's open source or professional stuff, when you sit down to start your day, how do you keep track of what you want to work on for the day?
I don't have any special methods, to be honest.
You just kind of know the important thing you're working on for the day and you just sit down and work on that?
Yeah. And sometimes I just don't know. And then I'll figure it out.
Sorry to disappoint you with this super useful advice.
If it works for you, it works for you.
I find it very useful to collect my thoughts by just writing a lot of to do's on notes.
I use a markdown app called Bear Notes on Mac to track my notes.
I try to get all my open loops, all the things on my mind that I know I need to do.
You know, like we talked about in our most recent episode on dead code, if you don't trust yourself to keep track of the things you need to remember to do,
then you might find yourself writing code to make sure you don't forget.
And so I find that it's really good to just do a brain dump and write down all the things that are floating around in my brain, because keeping them there slows me down.
If I have those things on my brain, my brain keeps trying to hold on to those things.
So I try to get them down on paper or into my notes app.
I don't know how you do that because I totally agree about the to do list.
Like the most productive parts of my life were when I wrote a lot of to do's just for what I had to do in the day.
And when I don't do that, I'm a lot less productive.
But yeah, like even when I'm trying to do a brain dump of what I have in mind, my thoughts are much faster than my ability to type.
Yeah, yeah, yeah, yeah.
Maybe I should learn to type faster or I should learn to think less.
I don't know which one I should choose.
Right. I can relate to that.
I know what you mean with your brain fluttering with ideas.
But as I mentioned in our last episode, I've been very influenced by the Getting Things Done productivity methodology.
And I think it's got some great nuggets of wisdom.
And one of the ideas is having a process for capturing things, which means, whether it's pen and paper, a digital note system, dictation to your phone's reminders or notes app, whatever it might be, having some way to quickly capture things.
I think having a quick way to capture things is really valuable.
And then another really valuable idea in this capturing concept is: when you capture a note, it could be that something pops into your head, or it could be that you start your morning and say, I'm going to do a brain dump of all the things floating around my head.
Just write down as fast as you can.
And don't try to get the details right and figure out exactly what everything means.
Just make sure you don't forget anything and write everything down.
But I want to remember the details.
Well, you can write some details, but don't try to refine them. It's sort of like the difference between writing and editing. When you draft something, you can just write it all out; if you try to draft and edit at the same time, your brain fights against itself.
Your drafting mind wants to spit out a million ideas, and your editing mind wants to say: I don't know if that idea fits here, is that idea good, make sure you don't forget this idea, make sure you write it that way, oh, this doesn't quite fit there, I should rearrange this, I should edit this.
So those two parts of your brain are fighting, those two modes. It works much better if you separate them.
So when you draft, turn off your editing brain, the part of your brain that is critiquing and asking: could I write this better, could I write this more clearly, does this belong here? Just draft, brain dump. And then turn on your editing brain, go through as an editor, and say: this could be clearer.
This should be moved up to the front of the article. It's the same thing with capturing: just be in your drafting mode, dump out all of your ideas from your brain as quickly as possible, trying to capture everything.
Some of them you might realize you know what, this actually isn't important. But by writing it down, you're able to look at it and see that and then release it from your brain. And then you go through and in your editing mode, with all of your captured tasks or ideas, you can go through and decide what it means.
And now, this is for me a very important step: turn it into something actionable. Because sometimes I'll look at something on my task list, and it'll say, you know: does it handle this case? Is it accessible? Does it handle input from this type of device? And it's like, okay, well, what do I do about that?
That was something I needed to capture to make sure I remember to do that. But what do I want to do about it? And then you turn that into something you can actually take action on. Do I need to go read this relevant checklist for accessibility? Do I need to test it out with a screen reader?
So that's deciding what it means. That's the editing part of your brain. And if you don't go through your to-do list with that editing part to figure out what each item means, this is an idea from Getting Things Done, your brain is going to be repelled by it, because it's not clear what you need to do about it.
So your brain doesn't like looking at it because it feels overwhelmed by it. That's definitely my experience. I'm also a really big fan of the Pomodoro method. Have you ever tried that? Or are you familiar with it?
I'm familiar, but maybe you can still explain it.
Yeah, it's basically the idea of picking something to work on in a time box. You set a 25 minute timer, and usually you have some sort of audio cue that gets your brain into a focused mode. The origin of the name: pomodoro means tomato in Italian, after those Italian tomato-shaped kitchen timers.
Yeah, that's what I had in mind as well.
Yeah, that's the origin of the term. And when you turn on those kitchen timers, you've got that little clicking noise. And that's part of it, because it cues your brain to focus; that habit gets triggered.
So you always have like a ticking noise when you're working?
Yeah, what I do is I've been doing this for many years. Actually, I have a rain white noise sound. I use this sound from rainy, but I actually have it automated where it plays on my computer.
So my setup is, I actually just recently started using an app that I've really been enjoying. It's called Centered App. We'll link to it.
And the pro version has a feature where you can have it run scripts. So I run a script to have it start playing my rain white noise and stop playing the rain white noise when I end my session.
And then it starts playing music.
But Pomodoro is quite nice for getting you focused on doing a small task to completion in a short period and then moving on and saying okay what's the next valuable thing I want to work on.
So that's kind of the Pomodoro method.
Yes, it's something that I've tried, but not very thoroughly. So it never stuck with me, at least.
Yeah, I think there are a few things that are pretty important to make it work effectively.
One of them is, so you do a 25 minute session of focused work, turn off all distractions, notifications, that sort of thing.
And then you alternate between 25 minute session and 5 minute break. And the break is not optional. The break is an important part of it.
You're able to rest your mind so that you can come back and do a focused session. So that's a really important part of it.
Another important part of it is defining your 25 minutes of work and focusing on that chunk.
Yeah, defining what you're going to focus on, right? What the tasks are that you're going to focus on.
Yeah, to me that is so valuable because it keeps me moving through tasks and keeping in mind what do I really need to be focused on.
Like the minimum viable product or the important tasks that I need to be working on without getting distracted by things that are less important.
So it helps me stay on task. Because sometimes after 25 minutes there's something else I was thinking of doing around a task.
But I realize, you know what, I did the most valuable part of this and I can actually leave it here.
If I'm going to start another 25 minute session, I actually want to work on this more important thing now that I've done that first part.
So do you also use Pomodoro when you're exploring something like debugging Elm code or any code?
Or when you're trying to find a good API for something?
That's a good question. I actually usually do. And I tend to be more productive and analytical in the morning.
And my ideas are more creative in the afternoon, evening.
And so usually I find it harder to do Pomodoros and have that intense focus in the afternoon hours.
But in the morning I can just crank through items.
So usually I will do the more clearly defined analytical tasks in the morning doing back to back Pomodoros for almost the whole morning.
And then in the afternoons my brain is better at coming up with associations and ideas.
And I try to embrace that and not force myself to try to be into that intense analytical mode.
And so I try instead to be open to those ideas and explore.
And so sometimes that does mean exploring an API idea or something like that without a Pomodoro running.
And what I try to do is write down a lot of notes as I'm doing that, because that captures any concrete, actionable ideas that come out of it, which my more analytical, focused brain can work on the next morning.
So when you're sketching, what does that look like for you writing Elm code that you're doing in an exploratory sketching mode?
Does it look a lot different than if you're building a feature and you know exactly what you're going to build?
So sketching in your definition means trying to figure out a solution or I'm trying to figure out an API or trying to figure out something?
Yeah, well, I mean, that's another question: do you think of those as two separate modes, or do you just code?
I'm just trying to understand what you mean by sketching.
Well, for me, I think a lot about sketching when there's not a clear path, when I don't know how I'm going to approach something.
Like, I don't know, I want to have some way for Elm pages to be able to use something like data sources in Elm review.
I don't know what that would look like. Would that mean a fork of Elm review?
Would that mean Elm pages knowing about Elm review specifically?
Would that mean Elm pages being able to run arbitrary command line tools and output data that it can import from some other Elm file?
So I need to just explore a lot of different things and try out ideas and sketch things similar with designing an API where it could take many different forms.
I think I tend to sketch by building what I already know and trying to figure out like what are the hard parts by encountering them.
So basically what you said about having an entire end to end slice and then figure out like what do I need more?
Like what is preventing me from having this end to end slice working?
That's in the case where I really don't know how to work on the task otherwise.
Like this is just about exploring like what is going to be hard.
That's a good point. I like that.
I also find myself doing both things that you're touching on here. One is just, like, build the obvious things, because if you just do the obvious things, then the things that are less clear might come into focus and become more clear.
But then on the other hand sometimes you want to front load things that are unclear to get a sense of them because they might inform the things that you thought were obvious.
Or it might turn out that a particular approach is not feasible or not the best way to approach it.
And so you want to front load those riskier unknowns and get those out of the way.
Yeah I guess I'm not sure exactly how I pick between the two but I guess that just comes with experience of knowing when is an unknown important to figure out soon because it's potentially going to drastically change the way you approach something.
And when is an unknown actually encapsulated and self contained enough that it's not going to drastically change the way you do things so you can just go do the things around it and put it off and that it'll become easier once you do the other things.
I think tests are useful as well.
Often when you have a problem, you need to figure out a use case for the problem, or a user for the problem. And by writing tests, you use it, so you're going to notice: oh, I'm going to need this piece of information to do the task, so I'm going to need that as an argument or something.
And you figure that out while writing the tests.
And you start with a very simple function that takes one argument and returns a constant, and then you get the ball rolling.
Yeah, someone was asking recently about how do you know when to split out a module, which is a topic that we've talked about a lot as well.
But I realized that one of the best hacks for knowing when to extract a module is doing test driven development. Because what are you going to test? Are you going to test something in, like, Main dot, you know, Main.isValidUsername?
Or are you going to test Username.isValid, or Username.fromString returning a Maybe Username, or whatever, right? It makes it very obvious what you need to extract.
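To make that concrete, here's a minimal sketch of the kind of test that forces the extraction (the Username module and its functions are illustrative, not from a real codebase):

```elm
module UsernameTest exposing (suite)

import Expect
import Test exposing (Test, describe, test)
import Username


suite : Test
suite =
    describe "Username.fromString"
        [ test "accepts a simple alphanumeric name" <|
            \() ->
                Username.fromString "dillon123"
                    |> Expect.notEqual Nothing
        , test "rejects the empty string" <|
            \() ->
                Username.fromString ""
                    |> Expect.equal Nothing
        ]
```

Writing `Username.fromString` in the test, rather than reaching into `Main`, is exactly what makes the module boundary obvious.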
Yeah.
And then there's the question about testing the internals. Like, you might move something to a separate module, and in practice you're only going to test the internals. That may make some sense in some cases, like if you're doing something that is very detailed.
Yeah, otherwise you want to test the more general module, right? But yeah, naming is also a good way of noticing when to split out a module. What I mean is, if you have a fromString function, and then you have a userIdFromString, like, okay,
well, now you have the same name structure for different things, for a new data type. So probably that should be moved to another module.
Right. Although if that happens, what are the odds that you had a unit test around that? Because a lot of the time, like what my process would be around that is like, I'd be like, Well, all right, I need to make sure this is a valid username.
Well, how do I know if it's a valid username? Well, I need to write a function to check that and give me this like, how am I going to use it? Well, let me write a test because it's a very easy thing to test it like doesn't require a lot of context or just like an easy, easily separable piece.
And I just open up, like, you know, it's just so easy to write a test for these little separable modules that that's how I start. Speaking of naming, I can't remember if we've talked about this on the podcast before, but the way I approach naming is very similar to the process that I talked about for sort of the drafting versus editing mindset.
I will come up with a name, often I will just name something foo or thing or thingy or something. And if you see something called thing or thingy in my code, I apologize. I do sometimes ship those. I usually try to rename them before I commit.
Yeah, you just try to keep reminders for that. Like if you see foo, you know, yeah, yeah, this rename this.
Exactly. Exactly. And see, that's like a feature, not a bug, that it is obviously a nonsense name. And there's a really nice blog post about this called Naming is a Process. It's actually a blog post series by someone named Arlo Belshee.
I highly, highly recommend it. It's a great read. But he talks about nonsense names specifically as step one. He often uses applesauce. And I probably should write an Elm review rule for myself, or use the no forbidden words rule, for thingy. Can I do that? Or is that just for comments?
I think it's only for comments and strings. Yeah, I could very easily make a rule that says: if I have something named thingy... But what I typically like to do, because the idea is: you start with some unnamed code that's all just inline.
If you realize that it belongs as a group, then giving it a name, even if it's a nonsense name, is actually a step for the better. And you don't need to make everything better at once. And often your brain will get hung up on, well, what is the right name, and that slows you down.
So I try as much as possible to reduce that kind of friction and just say: just get a name out there. Just say thingy. Like, I know that I want this chunk to be a thing, I'm just going to call it let thingy equal this. That's a step for the better.
And I can actually loop back around and give it a better name when I think of one, or when I have more context on it. And that's okay.
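A tiny sketch of that naming progression (the Order type here is just an illustration):

```elm
type alias Order =
    { isPaid : Bool, total : Int }


-- Step one: group the inline chunk under any name at all,
-- even a nonsense one, just so it becomes a thing.
thingy : List Order -> Int
thingy orders =
    orders
        |> List.filter .isPaid
        |> List.map .total
        |> List.sum

-- Step two, later, once there's more context: rename it to what
-- it turned out to be, e.g. paidOrdersTotal. The nonsense name
-- was still a step for the better, because it grouped the code.
```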
Yeah, because usually the thing that prevents you from giving a good name is because you don't know exactly what it is or what it's going to be used for.
So when you have done enough work in this context, you will know more. You will know how it's used and what it is for. And that will be a very good basis for giving a really good name. And until then, foo is a reasonable name.
Yeah, exactly.
Right. As reasonable as it can be. Like, you can give it a better name if you can think of one. But it's better to have a nonsense name than to have a wrong name.
Exactly. Yeah, it does depend on your ability to trust yourself to come back around and change the name when you think of a better one. And I think that is essential. A lot of people are in the habit of trying to get everything right the first time.
Try to get the right abstraction, try to get the feature right, try to handle all the cases at once, try to handle the error case and the edge case and the happy path and all these things.
And you just write that code and you say, oh, now I should probably write some tests. But what I try to do is I try to say, like, all right, let me write a test.
Do the simplest thing to get this one case working, you know, get the happy path working, write a test case for this error path, write a test case for this edge case, handle those separately, introduce a variable.
I'll think of a good name later. OK, now I thought of a better name. Doing these constant tiny steps. And when you work in that way, you have time because you've reduced the barrier to doing things because you're constantly doing these small, low friction things.
So extracting a function, inlining a function, renaming a variable, renaming a function. These are all very low friction tasks and operations, because you're making tiny commits.
You're working on a single case at a time, not every single corner case all at once. You have feedback loops set up.
Yeah. And you're also not scared of doing those, because it helps you out there.
Exactly, exactly. And you have these sorts of tools to support your work: end to end tests, unit tests, you know, a continuous integration server that is helping you write quality code, Elm review, and, you know, making sure you have elm-format and all these types of things.
Yeah. One big aspect that we haven't talked about is like tooling. You should have good tools, right?
Yes, absolutely.
Which is why we're writing tools.
Make our own lives a lot easier.
Yes. Yeah, tools do matter. I went full time doing Elm work when I was coaching a client and helping them do, you know, more robust automated testing.
And, you know, a lot of these technical practices that we talk about often on the show, smaller steps and customer driven features and things like that.
But I was sitting down to pair with somebody, and they said: oh, how would I do this in a test driven way? And it was jQuery code.
It's like, if you're having trouble feeling confident about this basic thing of putting text on the page, writing a test isn't going to help that much.
I mean, we can write an end to end test and get some more confidence about that. But it just felt like at a certain point you want tools that really help you get great feedback.
And that's when I was like, you know what, like I really just want to focus on helping people use tooling that I really believe in.
And so I'm going to go all in on on helping people with Elm.
So you mentioned edge cases before. So we also mentioned the end to end test slice, like just going from just doing the happy path and not handling all the edge cases.
So what do you do to make Elm happy in the meantime? Do you use, like, Debug.todo? Is that it?
Excellent question. Excellent question. Yeah, this is a really fun topic.
So I think that it's not really end to end if there's a Debug.todo, because you can't run it.
You can't run it against tests. You can't run it in a browser. So you're cut off from certain types of feedback.
I mean, you can, but it will crash. So you can partially run it.
Yeah. I mean, you can use it in the parts where...
Oh, are you talking about like the...
Yeah, like: oh, this is an error case; use Debug.todo to make the compiler happy.
Absolutely. Yes, I absolutely do. Yeah, Debug.todo. That is a good question, actually.
I have the best questions.
I do reach for Debug.todo there sometimes. Sometimes I use hard coded values.
I guess in a way it's not shippable with Debug.todo, you could argue.
So it is in a more shippable state with a hard coded value, even if it's not the case you're actively exercising.
So I would say I still only reach for Debug.todo as a last resort, not the first resort. But I do use it.
So you use it when you have something more complex to build than just a hard coded value that you can reach for.
Yeah, I do use it when I just am like, when I am trying to get things into a compiling state.
So it's just like, you know, we've talked before about how ideally we would have more tools for doing atomic, you know,
architectural changes to our code, atomic refactorings, so that it wouldn't be multiple steps.
Because, you know, conceptually it is just one atomic step. It's just that in practice we have to do the tedious manual multi step process.
But when I'm doing that sort of tedious multi step manual process that is, you know, conceptually really just one atomic step,
sometimes you get stuck on something where you just are. It's an unmanageable step.
It's difficult to figure out how to get something compiling.
That's when I'll use debug.todo because I'm already in a non compiling state. So using debug.todo isn't making things less shippable.
It's just as shippable as it was before. And actually it gets me closer to a compiling state.
So I'd basically maybe we've got Jeroen's hierarchy. Maybe we also need Dillon's hierarchy.
Of shipping? Shipping constraints?
Yeah, like, I would rather have things be shippable than runnable, and I'd rather have things be runnable than not compiling.
So like if something has debug.todo, it's not shippable, let's say. If something is not compiling, then it's not runnable.
So I would rather. So if I am already in a non compiling state, debug.todo can help me get to a compiling state faster.
And I would rather be in a runnable state than a non compiling state. So I will use that tool in that case.
But once I do that, I put in some debug.todo's. Now I've got it in a compiling state, but it's not in a shippable state.
So now I want to get it in a shippable state. And the way I do that is with hard coded values.
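Roughly, that hierarchy might look like this in code (the Discount type and the pricing function are made up purely for illustration):

```elm
type Discount
    = NoDiscount
    | PercentOff Int


-- Step 1: mid-refactor, Debug.todo gets the code compiling again,
-- but the app would crash if this branch ran. Compiling, not shippable.
priceWithDiscount : Discount -> Int -> Int
priceWithDiscount discount price =
    case discount of
        NoDiscount ->
            price

        PercentOff _ ->
            Debug.todo "handle percentage discounts"

-- Step 2: swap in a hard coded value. The PercentOff case still
-- isn't what you ultimately intend, but the app runs, and the
-- NoDiscount path works, so you're back in a shippable state:
--
--     PercentOff _ ->
--         price
```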
Yeah, just so we're clear, when you say shippable state is like it's usable.
Some people could use it. It's going to be subpar, but it's going to be usable.
And it's not something that you would make people pay for if it was a... I mean, it doesn't have to be high quality, is what I mean.
It's a good question. So I would say that I would actually put it differently.
I would say that to be shippable, it should be high quality, but not necessarily comprehensive.
So I mean, of course, it's a bit of an arbitrary distinction, because, you know, are you going to ship something that doesn't have error handling?
Right. But you want to do things so that, like, maybe not every single step is literally shippable, but it's as close to that as possible.
So if something has debug.todo, it is very much not shippable.
If something has if it has a hard coded value, then it's shippable, but it's not going to do what you intended to do in some scenario.
That's right. Some edge case. Yes, absolutely.
But people can still use it. Right. And that's preferable.
And in addition to that, you can map out a path of building features where you are saying: could I literally ship this? If I map my path,
like, if I start with this feature and then go to this feature, like these tiny little things, could I ship it at each of these points?
And there's a way to map that path where it stays shippable for as long as possible.
And I'm not talking about like error handling and things like that here, but I'm talking about, you know, like if if a user can't log in, then they can't use the system.
So could I could I ship a system where all you can do is log in? Yes.
Is it going to be a feature rich application? No.
Literally all you can do is log in, but it's usable.
But if I ship this other thing and you can't log in, you can't do anything with it.
If I ship login, I can actually ship something that someone can log into.
Obviously, that's, like, in the very first steps of an app, so not the best case to illustrate that, because more realistically it would be, you know...
But you get what I mean, like you're building the subset of features that can be usable as quickly as possible.
So you say you're looking to stay shippable for as long as possible.
Yes. Yeah. Is that the same thing as shippable as early as possible?
Yes, absolutely. I think of it as early and often: ship early and often. No, what I mean is, like...
So when you're starting a new feature, are you going to break it, like, put it in a non shippable state early on, and then fix it as soon as possible, in very tiny feedback loops if possible?
Or are you going to add to it in a way that never breaks the shippable state?
Right. Right. You see the difference? Yeah, I totally see what you're saying.
Yeah. So I I do try to practice keeping it shippable.
And this is sort of... so, there's this whole area of thinking that people sometimes refer to as trunk based development. The idea is that, you know, often
you need to build things differently in order to make this possible.
But so, for example, if you introduce a feature flag, you can ship something very quickly.
It's in a shippable state, right? So that's, like, a technique that you wouldn't have to use if you were building things in a way where you didn't care how soon you could ship it.
So it's this idea that like why bother building something in with a feature flag?
Well, for one thing, you don't get this drift where you have multiple people working on the same area of code, but the code has not been merged.
Yeah. The term for it that's often used is delayed integration.
So you have this delayed integration. There's this latent risk sitting out there, waiting to come in and introduce a cost as things deviate on these branches.
They get stale. Yeah. We call them way too big.
Way too big pull requests. Right. Right. Yeah.
So trunk based development would be that you're literally getting that merged into the main branch from the very beginning of the feature, behind a feature flag, perhaps.
But you can turn on that feature flag in production and test it out in a production context.
It's interacting with all the same code. There's no big merge conflict after weeks of working on it.
Ideally, it shouldn't take weeks to work on a feature either. But that's a separate question.
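A feature flag in Elm can be as simple as a Bool threaded into the view. All names here are hypothetical, and in practice the flag might come from server config or program flags:

```elm
import Html exposing (Html)


type alias FeatureFlags =
    { newCheckoutEnabled : Bool }


-- Model, Msg, viewNewCheckout, and viewLegacyCheckout are assumed
-- to exist elsewhere in this hypothetical application.
view : FeatureFlags -> Model -> Html Msg
view flags model =
    if flags.newCheckoutEnabled then
        -- The new, in-progress feature: merged to main early,
        -- but only visible when the flag is turned on.
        viewNewCheckout model

    else
        viewLegacyCheckout model
```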
Yeah. I guess if you're working on a CLI tool or something, it can be behind a hidden flag.
Yeah. A hidden flag, again. Right. Absolutely. Yeah. You can just not document it.
But as long as it doesn't break what is existing, then it's fine for people to just use the product as they used to.
And if they want to then enable that feature flag or that new feature. Right. Right.
Yeah, exactly. Because then you're avoiding having this big integration work at the end where you have to make sure everything fits in.
It's as if it's just been released. Like, if you have an Elm review feature and, like you're saying, you ship it behind a hidden feature flag that's unpublished and doesn't show up in the help documentation or whatever.
Now you can actually ask for feedback and say, hey, could you try this out and see if it worked?
Does it work on Windows? Does it work for your use case?
And you can get feedback and you don't have this like integration step at the end where you have to fit it in with all the other pieces.
Yeah, there's a feature that I really like in Elm review or that I'm super happy that I built is the template flag,
which allows you to run an Elm review configuration that is defined somewhere on GitHub.
So every Elm review package, at least the ones that I made, has an example folder, and they also have a preview folder.
The example folder is for where you use the version of the package that has been published, like, that is actually on the package registry.
And then you have a preview folder, where you can test out the version of the package that has not been published.
So you can test new rules or you can test bug fixes.
So very often what I do is, when someone opens an issue, I fix it in a branch, and then I ask them: can you run elm-review --template with this argument?
And people can just try it out and tell me whether this fixed their issue.
And that is such a nice workflow, in my opinion. Like, I'm so happy I made that.
Yeah. Yes. It's incredibly nice because I don't want to publish something that will create new issues.
And it's sometimes a bit hard to tell. Thankfully, it's always been. Yeah. OK, this works basically.
Or at least it works better than before.
Well, I think the bottom line is to be a 10x developer workflow matters, but so do your tools.
Yes, very much. But you also need to prepare yourself to have that feedback and to have those tools like you mentioned your end to end tests, which are super useful to your workflow.
Well, you need to build those. You need to invest some time in building those.
I needed to invest some time in creating that template feature, which has other useful applications, but it also has the one where I can test out things very easily.
Yeah. So I think that feedback is sort of like compound interest.
You know, like, the earlier you can build in that feedback, the more it'll pay dividends over time.
And that's why no one releases products once a year anymore.
Right. Right. Like, that used to be the case for video games, but now they have patches.
Yeah. And games are just generally unfinished nowadays anyway.
They just fix and improve the game after a while. Right.
Yeah. Generally, the advice that I hear is: don't play a game in the first few days.
Huh? Yeah. Don't try an Apple update in the first few days.
Also, if you have mechanisms, like NPM packages have, to mark a version as beta or alpha, that also helps.
That is probably the biggest thing at the top of my Elm package management to do list.
I wish I could publish beta versions. So, do you use Debug.todo much in your workflow?
I usually use it as a reminder to do something later. Not to do it.
Yeah, I usually use it as a way to remind myself to do something. Not as a way to.
If I use a hard coded value, then I lose the natural reminder that Debug.todo is.
Right. But if I use Debug.todo, then I can still run it, if it's only used in some branches.
But I do get a reminder from Elm review from crashes, from the compiler, that I need to do something somewhere.
I mean, it's in the name, right? It's to do.
That's a good point. Yeah.
Well, I guess that's sort of maybe a good way to bring it back to the beginning where we were talking about managing the work so you don't forget. Right.
So like I think what I what I try to do is to write those to do's in a place that's right in front of me as I'm working,
like a notes page that's always open on a side monitor when I'm working and not in my code.
I do put to do's in code sometimes, but usually to do comments.
But I try to take those things that are in my head that I don't want to forget, manage them in my notes and my to do list,
and then put hard coded values instead of to do so I can ship it and then write a test case for the next case when I get there.
For things that I know I will want to look at very soon, and that I put a hard coded value in for,
I tend to add a comment or something that Elm Review will remind me about.
So there's the Elm Review rule from Elm Review, no forbidden words,
which reminds you when you use things like replace me or to do or actually you can set it up.
It's configurable. So for all the Elm Review rules that you create using elm-review new-package or elm-review new-rule,
you get this rule preconfigured with replace me.
So every time I put a new to-do in a string or comment, to remind me to do something,
I use REPLACEME, because I know that at some point Elm Review is going to tell me that I need to do something there.
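As a sketch, a review configuration using that rule might look like this (assuming the sparksp/elm-review-forbidden-words package; check its docs for the exact API and defaults):

```elm
module ReviewConfig exposing (config)

import NoForbiddenWords
import Review.Rule exposing (Rule)


config : List Rule
config =
    [ -- Flag any comment or string containing these markers, so
      -- elm-review reminds you before a placeholder ships.
      NoForbiddenWords.rule [ "REPLACEME", "TODO", "thingy" ]
    ]
```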
Right, right.
But often I just figure it out right away, like: oh, I see a big REPLACEME in all caps. Go tackle it right now.
Yeah, I'm also a fan of more and more. I'll leave notes in my code for that sort of thing.
I mean, sometimes it's like something long term to consider and those types of things can get stale.
But I think there's been a sort of false dichotomy of this like different school of thought of like self documenting code versus using comments and code.
And in my opinion, the concept of self documenting code is it's not that you should never have comments in your code.
It's that comments shouldn't be saying this is a function that takes the user name and returns a user if it's valid or something like it's like put that in the code.
Right. Like, that comment is probably indicating either it's just exactly the same as the name of your module and functions, or it's telling me something that the names in the code, the function names and everything, are not telling me.
And they should be refactored to tell me. Right. So it should be more expressive code.
So I actually came across a tweet recently that that was kind of giving a few rules and there's an article I'll link to these.
I thought it was a really good summary. It said, like: code comment rules. Comments should not duplicate the code. Good comments do not excuse unclear code.
If you can't write a clear comment, there may be a problem with the code. Comments should dispel the confusion, not cause it.
But some things that code comments can do: explain unidiomatic code. That's something I find very useful.
Unidiomatic? Yeah.
So, for some reason, like, normally Elm code would use List.map, but I'm doing this here for performance reasons, because it caused some memory issues when I did it that way, or whatever.
Provide links to the original source of copied code. Include external references; like, often I'll link to MDN resources in my code.
I think that's a great use of code comments. Add comments when fixing bugs. So, you know, you could describe, like: this has been a recurring issue, or this vendor has this API that needs to be used in this particular way, even though you wouldn't expect to call it this way.
This is why we're doing this. And then the last one here, it says: to mark incomplete implementations.
So, yeah, that list resonated with me.
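For the "explain unidiomatic code" case, the comment carries the why that the code alone can't. The scenario below is invented purely to illustrate the shape of such a comment:

```elm
-- NOTE: we deliberately avoid the more idiomatic List.map here.
-- Profiling in this (hypothetical) app showed the hand-rolled
-- recursion below performed better on our very large lists.
-- Don't "clean this up" without re-measuring.
doubleAll : List Int -> List Int
doubleAll numbers =
    case numbers of
        [] ->
            []

        n :: rest ->
            (n * 2) :: doubleAll rest
```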
The first thing I thought about when you said don't repeat what the code says was JSDoc, where you basically have to. So, the comments that you add above JavaScript functions, where you have to repeat all of the arguments, the types, and what they do.
And very often it's just listing the arguments and repeating what the argument name means. So it's very much not helpful, or not any more helpful than what the names already gave you.
And I feel like in Elm we tend to do that less. First of all, because we don't have a clear way of documenting arguments, but also because we have a type annotation that is a comment anyway.
It's just the nice thing about type annotations is that they're always up to date.
Right, right. Yeah. Another thing that can stay up to date is Elm verify examples, examples in the code. I actually wish there was a way to have it check that an example compiles, not just gives the correct output.
But at the moment, there's no way to just easily say: check that this code compiles.
Right. If you don't have a value that you want to evaluate.
Exactly. But you can turn it into a unit test where it asserts that the return value is what the example shows, which is an amazing way to keep your documentation up to date.
And I mean, as a user reading the documentation, you can trust it more. I think this applies to internal APIs, too. It's like a great tool.
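A doc comment in the elm-verify-examples style looks like this: the `-->` lines get turned into unit tests, so the examples can't silently go stale (the Username module here is illustrative):

```elm
module Username exposing (Username(..), fromString)


{-| A wrapper for validated usernames (illustrative).
-}
type Username
    = Username String


{-| Parse a raw string into a username.

    fromString "dillon"
    --> Just (Username "dillon")

    fromString ""
    --> Nothing

-}
fromString : String -> Maybe Username
fromString input =
    if String.isEmpty input then
        Nothing

    else
        Just (Username input)
```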
Yeah, for Elm review rules, I want the examples to be checked. Usually I have a success and a fail section: this code would not be reported, this code would be reported.
And I would love to have an Elm verify examples alternative to that.
Yeah, maybe we can make a pull request to help with it. I don't think it would be that much more difficult than what it's already doing.
For compiling? Yeah, yeah, probably. Yeah, some issues to fix, but yeah.
Yeah, but I know that when I write an Elm review rule, I'm always copy pasting some example to start just to get the type signatures right and the arguments and the parameter names and that sort of thing.
That's just how people use code, you know? Yeah, I use examples from the documentation all the time. Super happy that some guy wrote it before.
Right, absolutely. All right, well, any final words of wisdom or resources we should point people to for productivity tips?
So you mentioned that you ship very often and ship unfinished things as well, but you don't ship all that many things.
Right. I know you do a lot of work, but I guess you could ship things more often, right?
Right, yeah, it's a good question. So, well, first of all, I think I have a pretty good track record of shipping tiny things very frequently.
So that's one thing. But for large things, let's say like a big Elm pages release, like Elm pages 2.0, I'm working on Elm pages 3.0 right now.
These are long, big release cycles. Like, it's actually not that long since Elm pages 2.0, but they're big releases, and I'm not doing 2.1, 2.2, 2.3, 2.4, you know.
Or version 3, version 4, version 17.
But I think that these considerations are a little bit different between publishing libraries and packages versus publishing SaaS applications or that sort of thing.
I think so when you're publishing a library, it's sort of like it's not solving a use case for one group of users.
It is a meta use case for a meta group of users, right, that are shipping many SaaS applications.
Right. So some of these things about not doing too much upfront design, not thinking about every possible use case, but kind of shipping things, getting feedback on it and adapting based on that.
Some of those things, you do have to, like, say: well, you know what, I actually do need to consider more than just my use case, because it's a library. That's sort of what it does: it handles a large variety of users' use cases, which are in turn being used for a large variety of end users.
So I do think that changes the calculus. And there is an interesting interplay between this sort of how you think about a minimum viable product or a minimum viable package, if you will.
Also, I think if I could release a beta version in the packages, that would change things to some extent, but also to a certain extent, like you want to batch breaking changes as a cohesive set.
You know, like Evan talks about this idea of recognizing multiple problems and finding one way to solve all of those problems.
Yeah. Rather than playing whack a mole and building feature, feature, feature, feature to address each problem.
Yeah. So it's creating a lot of solutions or a lot of bug fixes for things that are inherently related.
Exactly. Exactly. So sometimes it does take time to step back and say, what is the bigger picture?
And you have to sit with something, you have to explore, you have to, you know, we did our sort of API design lessons talk, our episode on that and talked through a lot of our lessons that we've learned about that.
Like I talked about hammock driven development and going on walks to let ideas simmer in the background.
So I think that it is very exploratory and it's just a different art to design libraries and packages than it is to ship code to users.
You have very different standards as well for packages and for applications.
For applications, the bar seems to be very much lower.
Which is a bit weird in a way.
Well, I wouldn't put it that way. I, again, I don't think it's about shipping low quality code and unpolished code.
And I think a lot of people, those terms come to mind when they think about like shipping in an iterative agile way.
And I very much do not think of it in those terms.
I think the opposite.
I think it's about shipping high quality code that is adaptable and it is about not solving every problem at once, but solving those problems very well.
And it's about doing things that are custom tailored to solve specific problems and needs, and to solve those problems well, rather than designing upfront for many perceived needs.
See, often people use the word polish to talk about large designs that have a lot of bells and whistles, which may not actually hit at the actual problem and solve it well.
And that's what I associate with this big upfront design approach.
And what I see as the way to do this sort of more iterative approach to shipping products is, again, you're not shipping bells and whistles. You're not shipping features that will never be used, which big upfront design approaches often end up shipping: many features that will not be used, or that will be used,
but where there could have been a simpler way to solve that same problem.
It's finding the simplest way to get out something that solves the user's problem, that meets their need, in a very high quality way.
So you don't ship things that aren't quality. You don't ship things that aren't tested.
And again, refactoring is building in the quality; it's not a separate step.
Testing is not done as a separate step either.
The test is written before the code.
If you have an end-to-end test suite, that's part of the process of building the feature, not done as a separate step.
If there are specific security or legal concerns, you don't wait until after and do all those things at once.
You do that as you ship the feature.
So all of the things that define quality for your product are built into it and shipped.
And that's part of shippable.
It's just that you're trying to get to that shippable as soon as possible every time you deliver a feature rather than building a whole bunch of features,
some of which may or may not be needed, some of which may not be the simplest way to achieve something or may not be necessary for the user or solving a real problem that they have.
And then polishing all of those at once and waiting until all of that's done to then go and release a big batch of things every year.
So again, it's not about lowering the standards for quality.
It's about reducing scope, but having high quality for that small scope and doing that over and over.
So for packages and libraries, I think that the equation does change a little bit, because you don't want to ship breaking changes.
So you don't want a lot of churn and you don't want to introduce a lot of breaking changes all at once.
And you want to solve a set of problems in a cohesive way.
So I think the art of designing packages is a little bit different in that way.
But that doesn't change the fact that I work in a way where I keep things shippable as I'm iterating on a package.
Yeah, in the back of your mind, you will still have a to do list of features that you want to build to prevent an additional major version.
But you could ship it technically.
Exactly, exactly.
And there's a difference between shipping and releasing.
So shipping, you could ship something with the feature flag turned off.
And then you say, OK, we've got this suite of features.
Maybe you had a round of beta feedback with some beta users for a SaaS product and you've sort of validated it.
And maybe it's related to marketing: they say, all right, we've got this conference and we're releasing this set of features around this conference.
We're announcing it there and now we're turning on the feature flag or whatever it may be.
And ideally, you don't want it to turn into a big yearly release of all your things with all your feature flags turning on at once.
But there's some nuance to that, right?
You can separate those two ideas.
And that's one of the ideas of continuous delivery is that you can separate releases from shipping and feature flags are an important part of that equation.
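To make that concrete, here's a minimal sketch of the feature-flag idea in Elm. The flag and function names are invented for illustration; they're not from any particular feature-flag library.

```elm
module Checkout exposing (view)

import Html exposing (Html, div, text)


-- Hypothetical flags record; in a real app these might be
-- decoded from your backend or a config service at startup.
type alias Features =
    { newCheckoutFlow : Bool }


view : Features -> Html msg
view features =
    if features.newCheckoutFlow then
        -- The new code is deployed to production either way;
        -- it's only "released" once the flag flips on.
        div [] [ text "New checkout" ]

    else
        div [] [ text "Legacy checkout" ]
```

Because the new code path is compiled, tested, and deployed behind the flag, shipping and releasing become independent decisions.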
So I would say it's similar with like, you know, authoring a package where you can say, OK, well, we're doing a big breaking release.
You know, with Elm pages, I'm working on something that has server rendered pages and it can manage cookies and, you know,
send arbitrary HTTP responses and parse incoming HTTP requests and things like that.
Now, I am building that in a way where I'm saying open up a Cypress test, write a test.
It hits this page, it logs in, it does these things, it sets the cookie, and this stuff works.
I'm not like, oh, let's see at the end if this works.
Some people have been doing alpha testing and giving me feedback along the way.
There have been very few issues coming up in the alpha testing because like I've been testing it very thoroughly along the way.
It's like pretty close to shippable.
Well, you've been shipping it just not through the usual means.
You haven't released anything yet.
Yes, that's right. That's right.
And I also, you know, the site will probably be running on Elm pages 3.0 before Elm pages 3.0 is officially shipped, for example.
So that's something I do often: I maintain these other projects with a vendored version of the early release to get feedback.
So let's say for the Elm pages 3.0 incoming HTTP request parsing, where you can say, you know,
I want to match requests that are a form post with these form fields or I want to match something that's an HTTP post method with a JSON body that can be parsed with this JSON decoder or whatever it may be.
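The JSON-body case, for example, leans on an ordinary elm/json decoder. A request matcher would accept a decoder like this one; the record shape here is made up for illustration, not taken from Elm pages itself.

```elm
module Login exposing (LoginRequest, loginDecoder)

import Json.Decode as Decode exposing (Decoder)


-- Hypothetical shape of an incoming login request body.
type alias LoginRequest =
    { username : String
    , password : String
    }


-- A standard elm/json decoder; a request-matching API could
-- take something like this to parse a JSON POST body.
loginDecoder : Decoder LoginRequest
loginDecoder =
    Decode.map2 LoginRequest
        (Decode.field "username" Decode.string)
        (Decode.field "password" Decode.string)
```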
I don't want to just ship, you know, the package version 10.0, 11.0, 12.0, iterating on those ideas.
And I don't want users to go depend on these packages and have constant breaking releases.
I want to reduce the amount of churn there, but get feedback along the way.
So it's just the nature of it that it takes some extra design consideration. You do have to do a little bit of upfront design, or at least consider many use cases, in a way that you don't have to when designing a single product.
And you want to kind of get that feedback up front and sit with that and look at the big picture and say, OK, I built this API.
Is there a better way to approach this? Does it really solve people's problems? Do I see any patterns I could simplify?
I'll drop some links to all the things we talked about.
If you're interested, definitely try out the Pomodoro method.
Read up on Getting Things Done. Lots of great stuff in there.
And Jeroen, until next time.
Until next time.