
Developer Productivity

We share our productivity and workflow tips, and how they change the way we write Elm code.
April 11, 2022
#54

Transcript

[00:00:00]
Hello, Jeroen.
[00:00:02]
Hello, Dillon.
[00:00:03]
And what are we talking about today?
[00:00:05]
Today we're talking about developer productivity.
[00:00:09]
One of my favorite topics.
[00:00:11]
Yeah, right?
[00:00:13]
So we're going to talk about how to become a 10x developer.
[00:00:16]
Only 10x.
[00:00:19]
Elm can make you a 100x developer.
[00:00:21]
So how do you become a 100x developer, Dillon?
[00:00:26]
Elm.
[00:00:27]
That's it.
[00:00:28]
All right, that's the show today.
[00:00:30]
Until next time.
[00:00:33]
You're joking, but yeah, we both know that it does make you faster in the long run for a big project where you need to maintain it for a long time.
[00:00:44]
I mean, I think no one would be surprised that we believe that.
[00:00:48]
Yes.
[00:00:48]
Of course, not everyone would agree.
[00:00:53]
Maybe let's say that'll get you 2x, but we've still got a ways to go.
[00:00:59]
I think probably the big, you know, whenever I hear people talking about 10x developer, I'm always thinking like, it's not about being a 10x developer.
[00:01:07]
It's about being like a 0.1x developer where you do one tenth of the things that are the most important things.
[00:01:15]
Wait, you're a 0.1x developer and you do one tenth?
[00:01:20]
Do less of the work, but it's more valuable work.
[00:01:24]
And so you deliver more valuable things through less effort and therefore you can produce more value.
[00:01:31]
But you're not necessarily producing more code.
[00:01:35]
You're not necessarily being more productive.
[00:01:37]
You're being more effective, right?
[00:01:38]
It's sort of that distinction between effectiveness and productivity.
[00:01:42]
Right.
[00:01:42]
But you do have to do less than one hundredth if you want to be a 10x.
[00:01:47]
Otherwise you're just a 1x developer, which is fine.
[00:01:52]
I mean, we're lazy.
[00:01:55]
We have a lazy job, right?
[00:01:56]
We try to be lazy.
[00:01:58]
So being a 1x developer just by doing less work is fine.
[00:02:04]
Yeah.
[00:02:05]
If you can do more, then even better.
[00:02:08]
Right.
[00:02:09]
Yeah.
[00:02:14]
And that sounds like a term that calls to mind overworking, burning yourself out, being the hero of the team that finishes all the tasks and no one else understands the code that person is working on.
[00:02:23]
And these are all extreme anti-patterns, the types of things that I've spent a lot of my career trying to disentangle, because they seem on the surface like they're helpful, and people get really excited about the team heroes and everything.
[00:02:40]
But that's actually detracting from everybody else's ability to contribute.
[00:02:45]
That's creating code that only one person can understand.
[00:02:50]
So taking breaks is, I think, an important part of the process.
[00:02:55]
And for every one task that you say yes to, you're saying no to others. I think that's a really important part of the process.
[00:03:00]
Then you'll be a 10x developer.
[00:03:03]
Yeah.
[00:03:05]
So I guess the most important question to answer is, what do you want to be?
[00:03:12]
What is the best?
[00:03:15]
To be a 10x developer, a rock star, or a ninja?
[00:03:20]
Because I think that's the most important thing.
[00:03:30]
Because those questions have not been answered.
[00:03:33]
Are they the same thing?
[00:03:35]
What about a pirate?
[00:03:37]
It's extremely simple.
[00:03:39]
You want to be a 10x rock star ninja.
[00:03:41]
How can you be a rock star if you try to be sneaky?
[00:03:44]
This is a good point.
[00:03:46]
It's the rock star ninja paradox.
[00:03:48]
Classic.
[00:03:49]
The famous one.
[00:03:54]
Didn't Einstein call that a paradox?
[00:03:57]
Absolutely.
[00:03:58]
I'm sure of it.
[00:03:59]
I read that on the internet.
[00:04:02]
We talked about incremental steps, or tiny steps, in a previous episode.
[00:04:07]
Yes, we talk about that pretty often.
[00:04:09]
I think that's one way of making sure you're developing in a more efficient manner.
[00:04:14]
I believe that's it.
[00:04:19]
You can go back to that episode if you want.
[00:04:23]
Do you want to summarize that?
[00:04:25]
Sure.
[00:04:26]
I'm often thinking about keeping code in a continuous state of being valuable, working, functioning, green tests.
[00:04:31]
The longer I'm in a working, functioning state, the better.
[00:04:36]
A non-incremental step would be building up a lot of partially working code and not having any feedback loop in that process.
[00:04:41]
That's a risk.
[00:04:45]
By cutting off that feedback, you're not going to be able to catch problems as they happen.
[00:04:50]
You're going to be doing a lot of work without knowing whether it's right.
[00:04:55]
The way I think about it is you want to set up your feedback loops before, so you can have the benefit from those feedback loops as you do the work, not after.
[00:05:11]
If you write a test after you write the code, is that a good thing?
[00:05:16]
People will have different opinions on whether the test itself will be an effective test.
[00:05:21]
For example, it means that you could have introduced code that you didn't test.
[00:05:26]
If the only code you write is in response to a failing test, that means that you necessarily have tested all the things that you built.
[00:05:31]
That was the only reason that you wrote code, was in order to make a failing test pass.
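To make that concrete, here is a minimal sketch of what test-first looks like with elm-test; the Username module and its isValid function are hypothetical examples, not something from the episode:

```elm
module UsernameTest exposing (suite)

import Expect
import Test exposing (Test, describe, test)
import Username

-- This test is written first and fails until Username.isValid exists
-- and handles the empty string; code is only added to make it pass.
suite : Test
suite =
    describe "Username.isValid"
        [ test "rejects the empty string" <|
            \() ->
                Username.isValid ""
                    |> Expect.equal False
        ]
```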
[00:05:36]
Putting that aside, you've also done the work of writing this test,
[00:05:41]
but you didn't get the benefit of it.
[00:05:46]
You didn't get to use it to help you with the feedback cycle for building the feature.
[00:05:55]
Why not do that first?
[00:06:00]
If you get 10 times more feedback, or feedback 10 times more often, do you become a 10x developer?
[00:06:05]
Is that part of the solution in your opinion?
[00:06:08]
I think so. I think there's a lot of waste, in the lean thinking sense of the word.
[00:06:13]
We create waste by building features upfront, by creating dead code.
[00:06:18]
Not oxbow code, but dead code that we create because we're writing a lot of unused code just in case, and we're not getting feedback on it.
[00:06:35]
I do think that that is crucial for getting more out of your time.
[00:06:43]
You also want these feedback loops in your whole production pipeline, for seeing that the things you're creating are valuable.
[00:06:52]
If you're not doing frequent releases, you're not getting feedback on whether you're shipping valuable features.
[00:06:57]
We're kind of being tongue in cheek about the whole 10x developer thing,
[00:07:02]
but that's what it would really look like to be more effective. It's more about effectiveness than productivity.
[00:07:12]
You want those feedback loops at all levels or at least several levels.
[00:07:18]
Yeah. Another thing I think is really important.
[00:07:22]
Teams will often start slowing down because of technical debt, and there can be these boom and bust cycles, feast and famine of getting…
[00:07:33]
What are those?
[00:07:35]
Well, the idea of feast and famine, it's like you have these spoils one day that you can feast on, and then the next day you don't have anything to eat.
[00:07:49]
You want a more sustained, healthy level all along the way.
[00:07:55]
Yeah, you don't want hurricanes followed by droughts followed by hurricanes.
[00:07:59]
Exactly. You just want good weather all the time.
[00:08:03]
We want the same thing in our code quality.
[00:08:06]
We don't want to have code quality that is significantly slowing us down and making it hard to make changes without introducing bugs,
[00:08:16]
making it hard to ship new features or do architectural changes, things like that.
[00:08:21]
Then suddenly we say, okay, we need to go and take a few months and do this refactoring, and we go back and forth in these cycles.
[00:08:30]
I think it's really important to bake quality into the work itself.
[00:08:34]
Quality is not a separate thing that you add on.
[00:08:37]
Quality is something that you bake into the process.
[00:08:40]
If you fail, then you will have to do it after the fact at some point.
[00:08:45]
Exactly. That's the thing. You have the context.
[00:08:48]
This is a lean concept as well. The concept is called built-in quality.
[00:08:52]
It's a concept from the Toyota manufacturing process, but it very much applies in software, I believe.
[00:09:01]
I think there's a lot of great nuggets of wisdom in that thinking.
[00:09:05]
The idea is you've got context and do the things while you have context rather than as a separate step.
[00:09:12]
I really do believe that quality is not something that you can tack on after the fact.
[00:09:17]
You have to build it into the process itself.
[00:09:21]
This is generally true of any sort of...
[00:09:25]
There's a tendency to want to go add quality after the fact, go do big refactorings after the fact,
[00:09:32]
which granted, sometimes we do need to do big technical changes.
[00:09:36]
I'm not saying that that doesn't exist,
[00:09:39]
but I think that often teams will say, we don't have time to refactor, we have to ship this feature.
[00:09:45]
But I think you have to bake it into the process of building things, that you build things with quality.
[00:09:50]
There are different reasons why you would do refactorings.
[00:09:53]
Sometimes you notice that code is bad according to some metric or gut feeling.
[00:10:01]
And then you have some refactorings that you do because the things that you need to do
[00:10:06]
are different from what you're expecting originally.
[00:10:10]
The company said we need to build A, and then they say we need to build B.
[00:10:16]
And some of it is shared with what was A, but you still need to adapt it.
[00:10:24]
Then you do need to make some refactorings to make B possible.
[00:10:28]
You didn't know that when you first worked on the task.
[00:10:32]
So those are refactorings that you will have to do, and you can't avoid that.
[00:10:37]
You can't predict the future unless you do premature abstraction and optimization,
[00:10:42]
which have their own problems.
[00:10:45]
But yeah, if you know the context well, then you can build the quality while you're working on it.
[00:10:53]
Right. And to a certain extent, quality just means not building those premature abstractions.
[00:10:59]
Sometimes it's okay to have some duplication.
[00:11:03]
Like jumping onto that and creating an abstraction right away,
[00:11:08]
like having overly abstracted code or code with abstractions that don't solve a real need or pain point.
[00:11:13]
So I think always doing these things in response to pain points really leads to more sustainable, maintainable code.
[00:11:20]
Another thing that I think there's a tendency to try to do after the fact is putting the pieces together.
[00:11:27]
So you build one system, you build another system, you build the front end piece, you build the back end piece.
[00:11:33]
And actually putting the pieces together is the really hard part.
[00:11:37]
Like integrating with some external service, that's where the risks are.
[00:11:43]
So start with that risky unknown part.
[00:11:46]
You want to get that out of the way as fast as possible so that you're not doing things with faulty assumptions.
[00:11:51]
So you don't want to build on a shaky foundation of bad assumptions,
[00:11:56]
but you can't really figure out if those assumptions are bad or not unless you actually do the work.
[00:12:03]
So I think that's really essential is understanding that the biggest risks are fitting the pieces together.
[00:12:11]
And that's why I think it's so valuable to get full end to end slices as quickly as possible.
[00:12:17]
So one example that would be, you have a client and you need to connect to the back end and show the data
[00:12:25]
or even process the data.
[00:12:27]
So you would build a very simple HTML page which only makes the request to the back end and only shows the data unprocessed.
[00:12:37]
But then for that, you need a client which is super simple and you need a back end that returns anything.
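A minimal sketch of that thinnest end-to-end slice in Elm; the /api/data endpoint is a hypothetical placeholder, and the page just shows the raw, unprocessed response body:

```elm
module Main exposing (main)

import Browser
import Html exposing (Html, text)
import Http

type Model
    = Loading
    | Loaded String
    | Failed

type Msg
    = GotData (Result Http.Error String)

init : () -> ( Model, Cmd Msg )
init _ =
    ( Loading
      -- The whole slice: one request to a placeholder backend endpoint.
    , Http.get { url = "/api/data", expect = Http.expectString GotData }
    )

update : Msg -> Model -> ( Model, Cmd Msg )
update (GotData result) _ =
    case result of
        Ok body ->
            ( Loaded body, Cmd.none )

        Err _ ->
            ( Failed, Cmd.none )

view : Model -> Html Msg
view model =
    case model of
        Loading ->
            text "Loading..."

        Loaded body ->
            -- Deliberately unprocessed: the point is proving the
            -- client and backend fit together, not the presentation.
            text body

        Failed ->
            text "Request failed"

main : Program () Model Msg
main =
    Browser.element
        { init = init
        , update = update
        , view = view
        , subscriptions = \_ -> Sub.none
        }
```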
[00:12:45]
Right. Yeah, that would be a full slice.
[00:12:48]
Yeah. And then you already have built some of the complex parts, like actually building a server
[00:12:53]
or choosing your tech stack maybe.
[00:12:55]
Right. Right. Yeah.
[00:12:57]
Which is weirdly enough, pretty hard.
[00:13:00]
Unless you're biased like we are and you just choose Elm for the front end, that's easy.
[00:13:06]
But I don't know what I would choose for the back end.
[00:13:08]
So for me, that would be already a few questions that would be solved then.
[00:13:13]
Yeah, all those decisions play off of each other.
[00:13:16]
And so if you go too far down one path without having made all those other decisions,
[00:13:22]
then you might find that the pieces don't fit together once you say,
[00:13:26]
okay, well, I've built all the pieces. Now I just have to put them together.
[00:13:30]
That creates a lot of extra work.
[00:13:32]
Yeah. And then every piece can be updated individually, like making the page beautiful,
[00:13:39]
actually doing some computations on the front end or the back end.
[00:13:43]
Right. Because at that point, you have a feedback loop built in.
[00:13:46]
You're actually doing a thing.
[00:13:49]
Yeah. So now another thing we talk about a lot is tiny commits.
[00:13:55]
I thought it might be good to break that down a little bit because you can do tiny commits a lot of different ways.
[00:14:01]
And so I think it's important to kind of clarify a few things about tiny commits.
[00:14:07]
Tiny commits is about actually working in tiny chunks, tiny slices,
[00:14:12]
and doing these types of meaningful, coherent pieces.
[00:14:17]
I mean, if you do a refactor, you extract a function that can be a tiny commit.
[00:14:23]
That's an obvious one because separating out refactoring steps from feature steps is extremely valuable for a lot of reasons.
[00:14:33]
One is for me, it's just less cognitive load to keep in your brain
[00:14:38]
when you're thinking about what in progress changes do I have right now?
[00:14:43]
Yeah. When people review your code or your pull request and they go through the pull request commit by commit,
[00:14:51]
then the refactoring commits don't add to the noise.
[00:14:56]
Right. Exactly.
[00:14:58]
Someone extracts a function or moves a function. Like, that is really simple.
[00:15:03]
No need to review it almost.
[00:15:05]
But if you review the entire thing or a bigger chunk of change,
[00:15:11]
then all those things are mangled together and it's hard to know what was refactored and what was feature.
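For illustration, a refactor-only tiny commit might contain nothing but an extraction like this sketch; viewPrice and formatPrice are hypothetical names:

```elm
module ProductView exposing (viewPrice)

import Html exposing (Html, text)

-- Before this commit, the formatting logic lived inline in viewPrice.
-- After, it's extracted into formatPrice with no behavior change, so
-- the commit stands alone and is almost trivial to review.
viewPrice : Float -> Html msg
viewPrice price =
    text (formatPrice price)

formatPrice : Float -> String
formatPrice price =
    "$" ++ String.fromFloat price
```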
[00:15:17]
Exactly. Yeah. And if you break something midway through, I mean, it's like playing a video game
[00:15:24]
and you haven't made a save checkpoint in a long time, right?
[00:15:28]
And now you have to go all the way back.
[00:15:31]
Yeah. I often say commit as if you thought you were going to mess up very soon.
[00:15:38]
Exactly.
[00:15:40]
Because sometimes you get into arguments like, hey, should I commit now?
[00:15:48]
Yeah. Because what you have right now is working.
[00:15:52]
Exactly.
[00:15:53]
You don't know what the future will bring.
[00:15:55]
Exactly.
[00:15:56]
Because if you mess up something and that's going to potentially be hard to debug,
[00:16:00]
then the easiest way you could fix the issue is by removing everything and starting all over again.
[00:16:08]
But that's more work if you erase a lot of code, a lot of changes.
[00:16:14]
But if you committed like 30 seconds ago, well, you know what changed
[00:16:19]
and you can redo that just by typing on your keyboard again.
[00:16:23]
Exactly. Yeah.
[00:16:24]
And if you're looking through the git diff to say, what did I do wrong?
[00:16:28]
There's less to look through.
[00:16:30]
So you're reducing that search space.
[00:16:32]
And yeah, if there's one thing that maybe all of us Elm developers have in common,
[00:16:37]
it's maybe a mistrust of our ability to do things.
[00:16:43]
We don't trust ourselves to do side effects and mutations anywhere we want.
[00:16:49]
I mean, I don't trust myself to do big changes.
[00:16:53]
And I'm proud of that.
[00:16:56]
I wear that as a badge of honor.
[00:16:57]
I think that that's just an insight that I have about our lack of ability to...
[00:17:05]
It's okay to accept that we're not going to do hard things well.
[00:17:09]
So let's just make them easier.
[00:17:11]
That's fine.
[00:17:12]
Just make things easier and you'll do a better job.
[00:17:15]
Well, I sometimes trust myself and sometimes I'm wrong.
[00:17:19]
It does happen that sometimes I'm right.
[00:17:25]
In which case I celebrate and I still feel bad that I did it the way I did.
[00:17:31]
I should have done tiny commits.
[00:17:36]
I really like a quote from Kent Beck.
[00:17:39]
Kent Beck created test driven development and a lot of these technical practices that we are familiar with.
[00:17:46]
And in his book, Test Driven Development by Example, he talks about this analogy of changing gears in a car.
[00:17:54]
And he says, taking smaller steps is like dropping into a lower gear.
[00:18:00]
And he says, do you always need to be in that low gear?
[00:18:04]
Not necessarily, but it should be available when you need it.
[00:18:07]
So you should have the skill set that's needed to do things in these small, cohesive increments, these tiny steps.
[00:18:15]
But sometimes you might not need it and that's okay, but you should be able to drop in and out of it as needed.
[00:18:20]
And I would say more often than not, we think we don't need a tiny step when we actually do.
[00:18:27]
I mean, you're rarely going to buy a car that only has park and fifth gear.
[00:18:36]
Right.
[00:18:38]
So I hear people sometimes talking about how do I ship my side project?
[00:18:44]
Or how do I make sure that I don't have a bunch of dead side projects lying around?
[00:18:51]
What's been your experience with trying to make sure that you're shipping things when you're working on side projects?
[00:18:58]
Oh, I just make more side projects and release some of them.
[00:19:05]
And then people think that I release all of them.
[00:19:10]
Interesting.
[00:19:11]
I don't know. My focus is not always there.
[00:19:14]
So I tend to switch from one project to another pretty quickly.
[00:19:19]
And then sometimes when the project is not done or it reaches a hard part, then I put it away for a while.
[00:19:30]
And next time I work on it, I forgot everything.
[00:19:33]
And that gives me more reasons to put it away.
[00:19:36]
So I'm in that part of the community where I create side projects and don't publish them, more often than I'd like.
[00:19:46]
Oh, okay.
[00:19:48]
Or just branches for new features in projects that I own.
[00:19:53]
It's mostly that actually.
[00:19:55]
Well, are they small experiments that you're testing the waters and they don't pan out?
[00:20:01]
Or are they things that you've nearly completed and they don't quite get shipped?
[00:20:06]
It's usually like I have it working for some parts, but then I hit something that is pretty hard to solve.
[00:20:16]
For instance, the one that I have in mind is detecting misuses of Html.Lazy, which is super interesting and would be super useful.
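One known pitfall a rule like this could catch (an assumption about its scope, not a description of the actual rule): Html.Lazy.lazy compares the view function by reference, so passing a fresh lambda on every render silently defeats the caching. A sketch:

```elm
module View exposing (view)

import Html exposing (Html, li, text, ul)
import Html.Lazy

-- Bad: a new lambda is allocated on every render, so lazy's
-- same-function reference check always fails and no work is skipped.
view : { items : List String } -> Html msg
view model =
    Html.Lazy.lazy (\items -> viewItems items) model.items

-- Better: pass the stable top-level function itself:
-- Html.Lazy.lazy viewItems model.items

viewItems : List String -> Html msg
viewItems items =
    ul [] (List.map (\item -> li [] [ text item ]) items)
```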
[00:20:25]
So it's easy to a certain extent.
[00:20:30]
And then the next milestone is super hard.
[00:20:35]
And I don't know if people would want to trust that rule if it doesn't catch a lot of the errors that they're having.
[00:20:43]
I see.
[00:20:44]
You shouldn't trust it.
[00:20:46]
Oh, Elm review doesn't say anything.
[00:20:49]
So I have no issues.
[00:20:51]
No, you have a lot of false negatives because I didn't finish it.
[00:20:55]
I see.
[00:20:56]
That is interesting.
[00:20:57]
That's a very good point.
[00:20:59]
So for people who want to ship side projects, from my experience with it, I'll at least share my favorite tips that have worked for me.
[00:21:12]
One of them is, well, we talked about one of them, set up feedback loops.
[00:21:18]
I wouldn't ever be able to ship side projects if I didn't. You know, like I think we've talked about on this podcast before, there's the end to end test suite that I set up for Elm GraphQL, which gives me extreme confidence in features because, you know, I generate the code for a particular GraphQL schema, with several different schemas, including all of the examples folders.
[00:21:41]
And I know that I haven't broken something.
[00:21:43]
So it makes it like a lot less intimidating to go work on a new feature because it's far easier to know whether I've broken something.
[00:21:52]
So setting up those feedback mechanisms is very important for me to continue to have that motivation and not get overwhelmed as I'm getting close to releasing something.
[00:22:02]
You also know what to work on because you know which tests are still failing.
[00:22:06]
Exactly. Yeah. Yeah.
[00:22:07]
It gives you these little wins that you can move towards that keep you motivated.
[00:22:11]
And it just makes it more fun when there's a feedback loop.
[00:22:13]
Like I don't get bogged down as quickly.
[00:22:16]
Another thing that's really important for me to be able to ship side projects is having things in a constant state of completion.
[00:22:24]
So rather than going, you know, like we were talking about earlier, long stretches of time where things are in a partially working state.
[00:22:32]
I try to keep things in a working state, building the most valuable piece that I need next in the smallest slice that I can.
[00:22:39]
And then I can ship it at any time and I can step away.
[00:22:43]
If I step away from a project and get burnt out on it or get some new shiny thing that's more interesting or whatever it might be, which happens.
[00:22:52]
That's just something that I know happens.
[00:22:54]
I think for many, many of us who work on side projects that happens, I know that I need to keep things always in a state of completion where things are shippable at all times because it could happen at any point.
[00:23:07]
And if I get distracted by some other shiny thing, I say, OK, well, I guess I'm going to ship it like this and I can.
[00:23:13]
But another thing that I do is I try to pick things that I know are going to be valuable and easy, as much as possible.
[00:23:21]
Like, for example, with Elm GraphQL, I don't know, there are far harder problems than generating code for a type safe schema.
[00:23:33]
Like as far as the amount of value to difficulty ratio, I'd say it's pretty high for that.
[00:23:40]
And I try to pick problems that are like that because I know it's going to be more motivating than it is difficult.
[00:23:46]
This is one of the bigger problems that I have, but I don't know if I want this to turn into open source therapy.
[00:23:55]
Isn't every episode you're in?
[00:23:58]
I don't know.
[00:24:01]
It's like when I make something, I want it to be of high quality.
[00:24:06]
I want it to be very useful to everyone or to some people at least.
[00:24:11]
And I want to write a very nice blog post explaining the problem, explaining how it solves the issue or what the underlying issue is in detail.
[00:24:19]
So that's often what I do in the blog post, like the one for tail-call recursion.
[00:24:25]
I explained like how it actually works in Elm and then how Elm Review is now able to detect issues regarding that.
[00:24:34]
And I think that makes for a nice blog post, at least in my opinion.
[00:24:37]
And writing is actually for me a pretty hard part of it.
[00:24:41]
It's like motivationally, it's hard.
[00:24:44]
Absolutely. Writing is hard. Writing release notes is hard.
[00:24:48]
Writing change logs and blog posts and announcement posts.
[00:24:53]
And yes, it's far easier to write code. And it's almost like code is the easy part.
[00:24:58]
You get to the end and you're like, well, I did all the hard work. I've been waiting for this moment.
[00:25:02]
You're like, oh, now I have to write something about it, though.
[00:25:06]
There have been projects that I postponed by two months just because of that.
[00:25:11]
Yeah.
[00:25:13]
I think I heard someone put it really well once. It was something like, writing is something that I love having done.
[00:25:23]
I am actually considering asking someone to write the blog post for me.
[00:25:29]
I would almost be ready to pay someone.
[00:25:33]
But I was like, yeah, they wouldn't do it the way that I wanted to.
[00:25:38]
You're going to get a ghostwriter published under your name, but it's secretly written by someone else.
[00:25:46]
That would be good. No one needs to do it like I do, right?
[00:25:50]
Right.
[00:25:52]
Whether it's open source or professional stuff, when you sit down to start your day, how do you keep track of what you want to work on for the day?
[00:26:03]
I don't have any special methods, to be honest.
[00:26:07]
You just kind of know the important thing you're working on for the day and you just sit down and work on that?
[00:26:12]
Yeah. And sometimes I just don't know. And then I'll figure it out.
[00:26:16]
Sorry to disappoint you with this super useful advice.
[00:26:22]
If it works for you, it works for you.
[00:26:24]
I find it very useful to collect my thoughts by just writing a lot of to do's on notes.
[00:26:30]
I use a markdown app called Bear Notes on Mac to track my notes.
[00:26:36]
I try to get all my open loops, all the things on my mind that I know I need to do.
[00:26:41]
You know, like we talked about in our most recent episode on dead code, if you don't trust yourself to keep track of the things you need to remember to do,
[00:26:51]
then you might find yourself writing code to make sure you don't forget.
[00:26:55]
And so I find that it's really good to just do a brain dump and write down all the things that are floating around in my brain, because otherwise they slow me down.
[00:27:03]
If I have those things on my brain, my brain keeps trying to hold on to those things.
[00:27:07]
So I try to get them down on paper or into my notes app.
[00:27:10]
I don't know how you do that because I totally agree about the to do list.
[00:27:15]
Like the most productive parts of my life were when I wrote a lot of to do's just for what I had to do in the day.
[00:27:23]
And when I don't do that, I'm a lot less productive.
[00:27:27]
But yeah, like even when I'm trying to do a brain dump of what I have in mind, my thoughts are much faster than my ability to type.
[00:27:35]
Yeah, yeah, yeah, yeah.
[00:27:36]
Maybe I should learn to type faster or I should learn to think less.
[00:27:41]
I don't know which one I should choose.
[00:27:43]
Right. I can relate to that.
[00:27:45]
I know what you mean with your brain fluttering with ideas.
[00:27:49]
But so I've been as I mentioned in our last episode, I've been very influenced by the getting things done productivity methodology.
[00:27:58]
And I think it's got some great nuggets of wisdom.
[00:28:01]
And one of the ideas is having a process for capturing things, which means, you know, whether it's pen and paper, or a digital note system, or dictation to your phone's reminders or notes app, or whatever it might be, some way to quickly capture things.
[00:28:17]
I think having a quick way to capture things is really valuable.
[00:28:20]
And then another really valuable idea in this capturing concept is, when you capture a note, it could be that something pops into your head, or it could be that you start your morning and say, I'm going to do a brain dump of all the things floating around my head.
[00:28:35]
Just write down as fast as you can.
[00:28:38]
And don't try to get the details right and figure out exactly what everything means.
[00:28:42]
Just make sure you don't forget anything and write everything down.
[00:28:45]
But I want to remember the details.
[00:28:47]
Well, you can write some details, but don't try to refine them. It's sort of like the difference between writing and editing. When you draft something, you can just write it all out. But when you edit at the same time, your brain can fight against itself.
[00:29:04]
If you're trying to draft and edit at the same time, your drafting mind wants to spit out a million ideas, and your editing mind wants to say: I don't know if that idea fits here. That idea is good. Make sure you don't forget this idea. Make sure you write this idea that way. Oh, this doesn't quite fit there, I should rearrange this, I should edit this.
[00:29:24]
So those two parts of your brain are fighting those two modes. So it works much better if you separate those two modes.
[00:29:30]
So draft: turn off your editing brain, turn off the part of your brain that is critiquing and saying, could I write this better? Could I write this more clearly? Does this belong here? Just draft, brain dump, and then turn on your editing brain, go through and be an editor and say, this could be more clear.
[00:29:51]
This should be moved up to the front of the article. So it's the same thing with capturing: just be in your drafting mode, dump out all of your ideas from your brain as quickly as possible, just trying to capture everything.
[00:30:03]
Some of them you might realize you know what, this actually isn't important. But by writing it down, you're able to look at it and see that and then release it from your brain. And then you go through and in your editing mode, with all of your captured tasks or ideas, you can go through and decide what it means.
[00:30:22]
And now, this is for me a very important step: turn it into something actionable. Because sometimes I'll look at something on my task list, and it'll say, you know, does it handle this case? Is it accessible? Does it handle input from this type of device? And it's like, okay, well, what do I do about that?
[00:30:42]
That was something I needed to capture to make sure I remember to do that. But what do I want to do about it? And then you turn that into something you can actually take action on. Do I need to go read this relevant checklist for accessibility? Do I need to test it out with a screen reader?
[00:30:59]
So that's deciding what it means. That's the editing part of your brain. And if you don't go through your to do list with that editing part to figure out what it means, and this is an idea from Getting Things Done, your brain is going to be repelled by it, because it's not clear what you need to do about it.
[00:31:17]
So your brain doesn't like looking at it because it feels overwhelmed by it. That's definitely my experience. I'm also a really big fan of the Pomodoro method. Have you ever tried that? Or are you familiar with it?
[00:31:27]
I'm familiar, but maybe you can still explain it.
[00:31:30]
Yeah, it's basically the idea of picking something to work on in a time box. You set a 25 minute timer, and usually you have some sort of audio cue that gets your brain into a focused mode. As for the origins, pomodoro means tomato in Italian, because of those Italian tomato-shaped kitchen timers.
[00:31:50]
Yeah, that's what I had in mind as well.
[00:31:52]
Yeah, that's the origin of the term. And when you turn on those kitchen timers, you've got that little clicking noise. And that's part of it, because it cues your brain to focus, because that habit is triggered.
[00:32:07]
So you always have like a ticking noise when you're working?
[00:32:12]
Yeah, what I do is, I've been doing this for many years actually, I have a rain white noise sound. I use this sound from rainymood.com, but I actually have it automated where it plays on my computer.
[00:32:26]
So my setup is, I actually just recently started using an app that I've really been enjoying. It's called Centered App, centered.app. We'll link to it.
[00:32:36]
And the pro version has a feature where you can have it run scripts. So I run a script to have it start playing my rain white noise and stop playing the rain white noise when I end my session.
[00:32:48]
And then it starts playing music.
[00:32:51]
But Pomodoro is quite nice for getting you focused on doing a small task to completion in a short period and then moving on and saying okay what's the next valuable thing I want to work on.
[00:33:03]
So that's kind of my Pomodoro method.
[00:33:06]
Yes, it's something that I've tried, but not very thoroughly. So it never stuck with me, at least.
[00:33:13]
Yeah, I think there are a few things that are pretty important to make it work effectively.
[00:33:20]
One of them is, so you do a 25 minute session of focused work, turn off all distractions, notifications, that sort of thing.
[00:33:28]
And then you alternate between 25 minute session and 5 minute break. And the break is not optional. The break is an important part of it.
[00:33:36]
You're able to rest your mind so that you can come back and do a focused session. So that's a really important part of it.
[00:33:42]
Another important part of it is defining your 25 minute work and focusing on that chunk.
[00:33:48]
Yeah, defining what you're going to focus on, right? What the tasks are that you're going to focus on.
[00:33:54]
Yeah, to me that is so valuable because it keeps me moving through tasks and keeping in mind what do I really need to be focused on.
[00:34:04]
Like the minimum viable product or the important tasks that I need to be working on without getting distracted by things that are less important.
[00:34:14]
So it helps me stay on task. Because sometimes after 25 minutes there's something else I was thinking of doing around a task.
[00:34:23]
But I realize, you know what, I did the most valuable part of this and I can actually leave it here.
[00:34:28]
If I'm going to start another 25 minute session, I actually want to work on this more important thing now that I've done that first part.
[00:34:35]
So do you also use Pomodoro when you're exploring something like debugging Elm code or any code?
[00:34:43]
Or when you're trying to find a good API for something?
[00:34:47]
That's a good question. I actually usually do. And I tend to be more productive and analytical in the morning.
[00:34:57]
And my ideas are more creative in the afternoon, evening.
[00:35:02]
And so usually I find it harder to do Pomodoros and have that intense focus in the afternoon hours.
[00:35:11]
But in the morning I can just crank through items.
[00:35:15]
So usually I will do the more clearly defined analytical tasks in the morning doing back to back Pomodoros for almost the whole morning.
[00:35:23]
And then in the afternoons my brain is better at coming up with associations and ideas.
[00:35:30]
And I try to embrace that and not force myself to try to be into that intense analytical mode.
[00:35:37]
And so I try instead to be open to those ideas and explore.
[00:35:43]
And so sometimes that does mean exploring an API idea or something like that without a Pomodoro running.
[00:35:51]
And what I try to do is I try to write down a lot of notes as I'm doing that because that helps me the next morning capture any concrete actionable ideas that come out of it that my more analytical focused brain can do the next morning.
[00:36:08]
So when you're sketching, what does that look like for you writing Elm code that you're doing in an exploratory sketching mode?
[00:36:17]
Does it look a lot different than if you're building a feature and you know exactly what you're going to build?
[00:36:23]
So sketching in your definition means trying to figure out a solution or I'm trying to figure out an API or trying to figure out something?
[00:36:32]
Yeah, well, I mean, that's another question is like, do you think of those as two separate modes or do you just code?
[00:36:40]
And that's just trying to understand what you mean with sketching.
[00:36:43]
Well, for me, I think a lot about sketching when there's like not a clear like I don't know how I'm going to approach this.
[00:36:52]
Like, I don't know, I want to have some way to have Elm pages be able to use, like, data sources from Elm review.
[00:37:03]
I don't know what that would look like. Would that mean a fork of Elm review?
[00:37:07]
Would that mean Elm pages knowing about Elm review specifically?
[00:37:11]
Would that mean Elm pages being able to run arbitrary command line tools and output data that it can import from some other Elm file?
[00:37:21]
So I need to just explore a lot of different things and try out ideas and sketch things similar with designing an API where it could take many different forms.
[00:37:30]
I think I tend to sketch by building what I already know and trying to figure out like what are the hard parts by encountering them.
[00:37:41]
So basically what you said about having an entire end to end slice and then figure out like what do I need more?
[00:37:51]
Like what is preventing me from having this end to end slice working?
[00:37:54]
That's in the case where I really don't know how to work on the task otherwise.
[00:37:59]
Like this is just about exploring like what is going to be hard.
[00:38:03]
That's a good point. I like that.
[00:38:05]
I also find myself doing both things that you're touching on here. One is just, like, build the obvious things, because if you just do the obvious things, then the things that are less clear might come into focus and become more clear.
[00:38:22]
But then on the other hand sometimes you want to front load things that are unclear to get a sense of them because they might inform the things that you thought were obvious.
[00:38:32]
Or it might turn out that a particular approach is not feasible or not the best way to approach it.
[00:38:39]
And so you want to front load those riskier unknowns and get those out of the way.
[00:38:44]
Yeah I guess I'm not sure exactly how I pick between the two but I guess that just comes with experience of knowing when is an unknown important to figure out soon because it's potentially going to drastically change the way you approach something.
[00:39:01]
And when is an unknown actually encapsulated and self contained enough that it's not going to drastically change the way you do things so you can just go do the things around it and put it off and that it'll become easier once you do the other things.
[00:39:16]
I think tests are useful as well.
[00:39:18]
Often when you have a problem, you need to figure out a use case for the problem, or a user for the problem, and by writing tests you use it, so you're going to notice, like, oh, I'm going to need this piece of information to do the task, so I'm going to need that as an argument or something.
[00:39:39]
And you figure that out while writing the tests.
[00:39:42]
Yes.
[00:39:43]
And you start with a very simple function that takes one argument and returns a constant, and then you get the ball rolling.
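A sketch of that starting point, reusing the hypothetical Username example from the earlier test sketch; this first version takes one argument and returns a constant, which is enough to make the first test pass:

```elm
module Username exposing (isValid)

-- Step one: fake it. This passes the first failing test
-- ("rejects the empty string") and nothing more; later tests
-- drive out the real validation logic.
isValid : String -> Bool
isValid _ =
    False
```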
[00:39:51]
Right.
[00:39:52]
Yeah, someone was asking recently about how do you know when to split out a module, which is a topic that we've talked about a lot as well.
[00:40:01]
But I realized that one of the best hacks for knowing when to extract a module is doing test driven development. Because what are you going to test? Are you going to test something in, like, Main, you know, Main.isValidUsername?
[00:40:18]
Or are you going to test Username.isValid, or that Username.fromString returns a Maybe Username, or whatever, right? It makes it very obvious what you need to extract.
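Following that thread, the module TDD nudges you to extract might look like this sketch; the opaque type and the minimum-length rule are illustrative assumptions, not a real validation policy:

```elm
module Username exposing (Username, fromString, isValid)

-- Opaque type: code outside this module can't construct
-- an invalid Username directly.
type Username
    = Username String

-- Placeholder rule for illustration only.
isValid : String -> Bool
isValid raw =
    String.length raw >= 3

fromString : String -> Maybe Username
fromString raw =
    if isValid raw then
        Just (Username raw)

    else
        Nothing
```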
[00:40:31]
Yeah.
[00:40:33]
And then there's the question about testing the internals. Like, you might move something to a separate module, and in practice, only testing the internals may make some sense in some cases, like if you're doing something that is very detailed.
[00:40:49]
Yeah, otherwise you want to test the more general module, right? But yeah, naming is also a good way of noticing when to split out a module. What I mean is, if you have a fromString function, and then you have a userIdFromString, like, okay,
[00:41:08]
well, now you have the same function, the same name structure, for different things, for a new data type. So probably that should be moved to another module.
[00:41:21]
Right. Although if that happens, what are the odds that you had a unit test around that? Because a lot of the time, like what my process would be around that is like, I'd be like, Well, all right, I need to make sure this is a valid username.
[00:41:39]
Well, how do I know if it's a valid username? Well, I need to write a function to check that. And, like, how am I going to use it? Well, let me write a test, because it's a very easy thing to test. It doesn't require a lot of context, it's just an easy, easily separable piece.
[00:41:54]
And I just open up, like, you know, it's so easy to write a test for these little separable modules that that's how I start. Speaking of naming, I can't remember if we've talked about this on the podcast before, but the way I approach naming is very similar to the process that I talked about for the drafting versus editing mindset.
[00:42:18]
When I come up with a name, often I will just name something foo or thing or thingy or something. And if you see something called thing or thingy in my code, I apologize. I do sometimes ship those. I usually try to rename them before I commit.
[00:42:36]
Yeah, you just try to keep reminders for that. Like if you see foo, you know, yeah, rename this.
[00:42:45]
Exactly. Exactly. And see, that's like a feature, not a bug, that it's obviously a nonsense name. And there's a really nice blog post about this called Naming is a Process. It's actually a blog post series by someone named Arlo Belshee.
[00:43:01]
I highly, highly recommend it. It's a great read. But he talks about nonsense names specifically as step one; he often uses applesauce. And I probably should write an Elm review rule, or use the no forbidden words rule, for myself, for thingy. Can I do that? Or is that just for comments?
[00:43:23]
I think it's only for comments and strings. Yeah, I could very easily make a new rule that says if I have something named thingy, flag it. But what I typically like to do, because the idea is, you have some unnamed code that's all just inline.
[00:43:43]
If you realize that it belongs as a group, then giving it a name, even if it's a nonsense name, is actually a step for the better. And you don't need to make everything better at once. And often your brain will get hung up on, well, what is the right name, and that slows you down.
[00:44:01]
So I try as much as possible to reduce that kind of friction and just say, just get a name out there. Just say thingy, like, I know that I want this chunk to be a thing. I'm just going to call it, let thing equal this. That's a step for the better.
[00:44:17]
And I can actually loop back around and give it a better name when I think of one, or when I have more context on it. And that's okay.
[00:44:25]
Yeah, because usually the thing that prevents you from giving a good name is because you don't know exactly what it is or what it's going to be used for.
[00:44:34]
Right.
[00:44:35]
So when you have done enough work in this context, you will know more. You will know how it's used and what it is for. And that will be a very good basis for giving a really good name. And until then, foo is a reasonable name.
[00:44:52]
Yeah, exactly.
[00:44:54]
Right. As reasonable as it can be. Like, you can give it a better name if you can think of one. But it's better to have a nonsense name than to have a wrong name.
[00:45:06]
Like, exactly. Yeah, it does depend on your ability to trust yourself to come back around and change the name when you think of a better one. And I think that is essential. I think that a lot of people are in the habit of trying to get everything right the first time.
[00:45:25]
Try to get the right abstraction, try to get the feature right, try to handle all the cases at once, try to handle the error case and the edge case and the happy path and all these things.
[00:45:36]
And you just write that code and you say, oh, now I should probably write some tests. But what I try to do is I try to say, like, all right, let me write a test.
[00:45:46]
Do the simplest thing to get this one case working, you know, get the happy path working, write a test case for this error path, write a test case for this edge case, handle those separately, introduce a variable.
[00:46:00]
I'll think of a good name later. OK, now I thought of a better name. Doing these constant tiny steps. And when you work in that way, you have time because you've reduced the barrier to doing things because you're constantly doing these small, low friction things.
[00:46:15]
So extracting a function, inlining a function, renaming a variable, renaming a function. These are all very low friction tasks and operations because you're making tiny commits.
[00:46:28]
You're working on a single case at a time, not every single corner case all at once. You have feedback loops set up.
[00:46:35]
Yeah. And you're also not scared of doing those, because the tooling helps you out there.
[00:46:40]
Exactly, exactly. And you have these sorts of tools to support your work: end to end tests, unit tests, you know, a continuous integration server that is helping you write quality code, Elm review, and, you know, making sure you've run elm-format, and all these types of things.
[00:46:59]
Yeah. One big aspect that we haven't talked about is like tooling. You should have good tools, right?
[00:47:05]
Yes, absolutely.
[00:47:07]
Which is why we're writing tools.
[00:47:09]
Right.
[00:47:10]
Make our own lives a lot easier.
[00:47:14]
Yes. Yeah, tools do matter. I went full time doing Elm work when I was coaching a client and helping them do, you know, more robust automated testing.
[00:47:27]
And, you know, a lot of these technical practices that we talk about often on the show, smaller steps and customer driven features and things like that.
[00:47:37]
But I was sitting to pair with somebody and they were they said, oh, how would I do this in a test driven way? And it was jQuery code.
[00:47:48]
It's like if you're having trouble feeling confident about this basic thing that putting text on the page, like writing a test isn't going to help that much.
[00:48:01]
I mean, we can write an end to end test and get some more confidence about that. But it just felt like at a certain point you want tools that really help you get great feedback.
[00:48:13]
And that's when I was like, you know what, like I really just want to focus on helping people use tooling that I really believe in.
[00:48:21]
And so I'm going to go all in on on helping people with Elm.
[00:48:26]
So you mentioned edge cases before. So we also mentioned the end to end test slice, like just going from just doing the happy path and not handling all the edge cases.
[00:48:39]
So what do you do to make Elm happy in the meantime? Do you use, like, Debug.todo? Is that it?
[00:48:46]
Excellent question. Excellent question. Yeah, this is a really fun topic.
[00:48:50]
So I think that it's not really end to end if there's Debug.todo, because you can't run it.
[00:48:58]
You can't run it against test. You can't run it in a browser. So you're cut off from certain types of feedback.
[00:49:04]
I mean, you can, but it will crash. So you can partially run it.
[00:49:09]
Yeah. I mean, you can use it in the parts where...
[00:49:13]
Oh, are you talking about like the...
[00:49:15]
Yeah, like, oh, this is an error case, and you use Debug.todo to make the compiler happy.
[00:49:21]
Absolutely. Yes, I absolutely do. Yeah, Debug.todo. That is a good question, actually.
[00:49:28]
I have the best questions.
[00:49:30]
I do reach for Debug.todo there sometimes. Sometimes I use hard coded values.
[00:49:37]
I guess in a way it's not shippable with Debug.todo, you could argue.
[00:49:42]
So it is in a more shippable state with a hard coded value, even if it's not the case you're actively exercising.
[00:49:49]
So I would say I still only reach for Debug.todo as a last resort, not the first resort. But I do use it.
[00:49:57]
So you use it when you have something more complex to build than just a hard coded value that you can just reach out for.
[00:50:05]
Yeah, I do use it when I just am like, when I am trying to get things into a compiling state.
[00:50:12]
So it's just like, you know, we've talked before about how ideally we would have more tools for doing atomic, you know,
[00:50:21]
architectural changes to our code, atomic refactorings, so that it wouldn't be multiple steps.
[00:50:27]
Because, you know, conceptually it is just one atomic step. It's just that in practice we have to do the tedious manual multi step process.
[00:50:36]
But when I'm doing that sort of tedious multi step manual process that is, you know, conceptually really just one atomic step,
[00:50:44]
sometimes you get stuck on something where it's just an unmanageable step.
[00:50:49]
It's difficult to figure out how to get something compiling.
[00:50:52]
That's when I'll use Debug.todo, because I'm already in a non compiling state. So using Debug.todo isn't making things less shippable.
[00:51:03]
It's just as shippable as it was before. And actually it gets me closer to a compiling state.
[00:51:08]
So basically, maybe we've got Jeroen's hierarchy. Maybe we also need Dillon's hierarchy.
[00:51:15]
Of shipping? Shipping constraints?
[00:51:18]
Yeah, like, I would rather have things be shippable than just runnable, and I'd rather have things be runnable than not compiling.
[00:51:29]
So like if something has Debug.todo, it's not shippable, let's say. If something is not compiling, then it's not runnable.
[00:51:37]
So if I am already in a non compiling state, Debug.todo can help me get to a compiling state faster.
[00:51:46]
And I would rather be in a runnable state than a non compiling state. So I will use that tool in that case.
[00:51:53]
But once I do that, I put in some Debug.todos. Now I've got it in a compiling state, but it's not in a shippable state.
[00:52:02]
So now I want to get it in a shippable state. And the way I do that is with hard coded values.
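A sketch of that hierarchy on a hypothetical discount function: Debug.todo gets the file compiling (runnable, not shippable), and a hard coded value gets it shippable, just incomplete for the case you haven't built yet:

```elm
module Discount exposing (Customer(..), discount)

type Customer
    = Regular
    | Premium

discount : Customer -> Float
discount customer =
    case customer of
        Regular ->
            0

        Premium ->
            -- Compiling but not shippable: this crashes at runtime
            -- if a Premium customer ever reaches it.
            Debug.todo "handle premium discount"

-- The shippable-but-incomplete version replaces the todo with a
-- hard coded value, e.g. `0`, until the real rule is built.
```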
[00:52:06]
Yeah, just so we're clear, when you say shippable state, it's like, it's usable.
[00:52:12]
Some people could use it. It's going to be subpar, but it's going to be usable.
[00:52:17]
And it's not something that you would make people pay for. I mean, it doesn't have to be high quality, is what I mean.
[00:52:26]
It's a good question. So I would say that I would actually put it differently.
[00:52:30]
I would say that to be shippable, it should be high quality, but not necessarily comprehensive.
[00:52:37]
So I mean, of course, it's a bit of an arbitrary distinction because, you know, I mean, are you going to ship something that doesn't have error handling?
[00:52:45]
Right. But you want to do things so that, like, not every single step is literally shippable, but it's as close to that as possible.
[00:52:56]
So if something has Debug.todo, it is very much not shippable.
[00:53:01]
If it has a hard coded value, then it's shippable, but it's not going to do what you intended it to do in some scenario.
[00:53:11]
That's right. Some edge case. Yes, absolutely.
[00:53:15]
But people can still use it. Right. And that's preferable.
[00:53:19]
And in addition to that, you can map out a path of building features where you are saying, could I literally ship this at each point along my path?
[00:53:33]
Like, if I start with this feature and then go to this feature, these tiny little things, could I ship it at each of these points?
[00:53:40]
And there's a way to map that path where it stays shippable for as long as possible.
[00:53:45]
And I'm not talking about, like, error handling and things like that here, but I'm talking about, you know, like, if a user can't log in, then they can't use the system.
[00:53:56]
So could I ship a system where all you can do is log in? Yes.
[00:54:00]
Is it going to be a feature rich application? No.
[00:54:03]
Literally all you can do is log in, but it's usable.
[00:54:06]
But if I ship this other thing and you can't log in, you can't do anything with it.
[00:54:11]
If I can log in, I can actually ship something that somebody someone can log into.
[00:54:16]
Obviously, that's like, you know, the sort of very first step of an app, so not the best case to illustrate that, because more realistically, it would be, you know, if I...
[00:54:27]
But you get what I mean, like you're building the subset of features that can be usable as quickly as possible.
[00:54:33]
So you say you're looking for what is shippable as long as possible.
[00:54:38]
Yes. Yeah. Is that the same thing as shippable as early as possible?
[00:54:44]
Yes, absolutely. I think of it as early and often, ship early and often.
[00:54:51]
No, what I mean is, like, when you're starting a new feature, are you going to break it, making it a non shippable state early on, and then fix it as soon as possible, in very tiny feedback loops if possible?
[00:55:07]
Or are you going to add to it in a way that never breaks the shippable state?
[00:55:14]
Right. Right. You see the difference? Yeah, I totally see what you're saying.
[00:55:18]
Yeah. So I I do try to practice keeping it shippable.
[00:55:22]
And this is sort of, there's this whole area of thinking that people sometimes refer to as trunk based development, which is the idea that, you know, often
[00:55:34]
you need to build things differently in order to make this possible.
[00:55:38]
But so, for example, if you introduce a feature flag, you can ship something very quickly.
[00:55:43]
It's in a shippable state, right? So that's like a technique that you wouldn't have to use if you were building things in a way where you didn't care how soon you could ship it.
[00:55:53]
So there's this idea of, like, why bother building something with a feature flag?
[00:55:59]
Well, for one thing, you don't get this drift where you have multiple people working on the same area of code, but this code has not been merged.
[00:56:07]
That's the sort of thing, yeah. The term for it that's often used is delayed integration.
[00:56:12]
So you have this delayed integration. There's this latent risk sitting out there that's waiting to come in and introduce a cost as things deviate in these branches.
[00:56:23]
They get stale. Yeah. We call them way too big.
[00:56:27]
Way too big pull requests. Right. Right. Yeah.
[00:56:31]
So trunk based development would be that you're literally getting that merged into the main branch from the very beginning of the feature, behind a feature flag, perhaps.
[00:56:44]
But you can turn on that feature flag in production and test it out in a production context.
[00:56:50]
It's interacting with all the same code. There's no big merge conflict after weeks of working on it.
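A minimal sketch of what a feature flag might look like in Elm, with hypothetical names; the flag is merged to the main branch early but stays off until the feature is ready:

```elm
module FeatureFlags exposing (newCheckout)

-- Merged to the main branch from day one, but switched off in
-- production until the feature is ready. Flipping it to True (or
-- reading it from config) enables the feature with no big merge.
newCheckout : Bool
newCheckout =
    False

-- At a call site, both paths are compiled and integrated early:
--
--     view model =
--         if FeatureFlags.newCheckout then
--             viewNewCheckout model
--         else
--             viewOldCheckout model
```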
[00:56:56]
Ideally, it shouldn't take weeks to work on a feature either. But that's a separate question.
[00:57:01]
Yeah. I guess if you're working on a CLI tool or something like that, it can be behind a hidden flag.
[00:57:08]
Yeah, a hidden flag again. Right. Absolutely. Yeah. You can just not document it.
[00:57:13]
But as long as it doesn't break what is existing, then it's fine for people to just use the product as they used to.
[00:57:23]
And if they want to then enable that feature flag or that new feature. Right. Right.
[00:57:30]
Yeah, exactly. Because then you're avoiding having this big integration work at the end where you have to make sure everything fits in.
[00:57:38]
It's just been released. Like if you have an Elm review feature and, like you're saying, you know, you ship it behind a hidden feature flag that's unpublished and doesn't show up in the help documentation or whatever.
[00:57:49]
Now you can actually ask for feedback and say, hey, could you try this out and see if it worked?
[00:57:55]
Does it work on Windows? Does it does it work for your use case?
[00:57:59]
And you can get feedback and you don't have this like integration step at the end where you have to fit it in with all the other pieces.
[00:58:08]
Yeah, there's a feature that I really like in Elm Review, that I'm super happy I built: the --template flag,
[00:58:17]
which allows you to run an Elm Review configuration that is defined somewhere on GitHub, even an unpublished one.
[00:58:28]
So every Elm Review package, at least the ones that I made, has an example folder, and they also have a preview folder.
[00:58:37]
So the example folder is where you use the version of the package that has been published, the one that is actually on the package registry.
[00:58:45]
And then you have a preview folder where you can test out the version of the package that has not been published.
[00:58:55]
So you can test new rules or you can test bug fixes.
[00:58:58]
So very often what I do is, when someone opens an issue, I fix it in a branch, and then I ask them: can you run elm-review --template with this argument?
[00:59:09]
And people can just try it out and tell me whether this fixed their issue.
[00:59:14]
And that is such a nice workflow, in my opinion. Like, I'm so happy I made that.
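For reference, that workflow looks roughly like this on the command line; the --template flag is real elm-review CLI, but the repo path and branch name here are illustrative:

```sh
# Run the published example configuration of a rules package:
npx elm-review --template jfmengels/elm-review-unused/example

# Try an unpublished fix from a branch (repo path and branch are made up):
npx elm-review --template jfmengels/elm-review-unused/preview#fix-issue-branch
```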
[00:59:19]
Yeah. Yes. It's incredibly nice because I don't want to publish something that will create new issues.
[00:59:26]
And it's sometimes a bit hard to tell. Thankfully, it's always been: yeah, OK, this works, basically.
[00:59:32]
Or at least it works better than before.
[00:59:35]
Well, I think the bottom line is: to be a 10x developer, workflow matters, but so do your tools.
[00:59:42]
Yes, very much. But you also need to prepare yourself to have that feedback and to have those tools, like you mentioned your end-to-end tests, which are super useful to your workflow.
[00:59:56]
Well, you need to build those. You need to invest some time in building those.
[01:00:01]
I needed to invest some time to create that --template feature, which has other useful applications, but it also means that I can test out things very easily.
[01:00:13]
Yeah. So I think that feedback is sort of like compound interest.
[01:00:17]
You know, the earlier you can build in that feedback, the more it'll pay dividends over time.
[01:00:25]
And that's why no one releases products once a year anymore.
[01:00:30]
Right. Right. Like that used to be the case for video games, but now they have patches.
[01:00:36]
Yeah. And games are just generally unfinished nowadays anyway.
[01:00:42]
They just fix and improve the game after a while. Right.
[01:00:48]
Yeah. Generally, the advice that I hear is don't play a game on the first few days.
[01:00:54]
Huh? Yeah. Don't try an Apple update in the first few days.
[01:01:00]
Also, if you have mechanisms, like npm packages do, to mark a version as beta or alpha, that also helps.
[01:01:10]
That is probably the biggest thing at the top of my Elm package management to do list.
[01:01:17]
I wish I could publish beta versions. So do you use Debug.todo much in your workflow?
[01:01:23]
I usually use it as a reminder to do something later. Not to do it.
[01:01:29]
Yeah, I usually use it as a way to remind myself to do something. Not as a way to...
[01:01:37]
If I use a hard-coded value, then I lose the natural reminder that Debug.todo is.
[01:01:44]
Right. But if I use Debug.todo, then I can still run the code if it's only used in some branches.
[01:01:50]
But I do get a reminder, from Elm Review, from crashes, from the compiler, that I need to do something somewhere.
[01:01:59]
I mean, it's in the name, right? It's to do.
[01:02:02]
That's a good point. Yeah.
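A small sketch of that workflow, with a hypothetical function: Debug.todo compiles and only crashes if the unfinished branch is actually evaluated, so it acts as a reminder the compiler and tooling won't let you forget.

```elm
module Sign exposing (describe)

-- Hypothetical example: the program compiles and runs, and only
-- crashes if the unfinished EQ branch is actually hit. The compiler
-- (and review rules that flag Debug usage) keep pointing at the
-- Debug.todo until it's done.


describe : Int -> String
describe n =
    case compare n 0 of
        LT ->
            "negative"

        EQ ->
            Debug.todo "decide what to say about zero"

        GT ->
            "positive"
```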
[01:02:06]
Well, I guess that's sort of maybe a good way to bring it back to the beginning where we were talking about managing the work so you don't forget. Right.
[01:02:17]
So, like, I think what I try to do is to write those TODOs in a place that's right in front of me as I'm working,
[01:02:27]
like a notes page that's always open on a side monitor when I'm working and not in my code.
[01:02:35]
I do put TODOs in code sometimes, but usually as TODO comments.
[01:02:38]
But I try to take those things that are in my head that I don't want to forget, manage them in my notes and my to-do list,
[01:02:47]
and then put hard-coded values instead of Debug.todo so I can ship it, and then write a test case for the next case when I get there.
[01:02:57]
For things that I know I will want to look at very soon, where I put a hard-coded value,
[01:03:05]
I tend to add a comment or something that Elm Review will remind me about.
[01:03:12]
So there's the Elm Review rule NoForbiddenWords,
[01:03:18]
which reminds you when you use things like REPLACEME or TODO. And actually, you can set it up however you want.
[01:03:24]
It's configurable. So for all the Elm Review rules that you create using elm-review new-package or new-rule,
[01:03:32]
you get this rule preconfigured with REPLACEME.
[01:03:36]
So every time I put a note in a string or a comment to remind me to do something,
[01:03:43]
I use REPLACEME, because I know that at some point Elm Review is going to tell me that I need to do something there.
[01:03:50]
Right, right.
[01:03:51]
But often it's just, like: oh, I see a big REPLACEME in all caps, go tackle it right now.
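A sketch of what that review configuration can look like; my assumption here is that the NoForbiddenWords rule comes from the sparksp/elm-review-forbidden-words package, and the exact word list is just an example:

```elm
module ReviewConfig exposing (config)

import NoForbiddenWords
import Review.Rule exposing (Rule)


-- Report comments that contain these marker words, so a stray
-- REPLACEME or TODO fails the review instead of being forgotten.
config : List Rule
config =
    [ NoForbiddenWords.rule [ "REPLACEME", "TODO" ]
    ]
```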
[01:03:58]
Yeah, I'm also a fan of that. More and more, I'll leave notes in my code for that sort of thing.
[01:04:05]
I mean, sometimes it's like something long term to consider and those types of things can get stale.
[01:04:10]
But I think there's been a sort of false dichotomy between these different schools of thought, like self-documenting code versus using comments in code.
[01:04:20]
And in my opinion, the concept of self-documenting code is not that you should never have comments in your code.
[01:04:29]
It's that comments shouldn't be saying "this is a function that takes the user name and returns a user if it's valid" or something like that. Put that in the code.
[01:04:43]
Right. Like, that comment is probably indicating either that it's just exactly the same as the names of your module and functions, or that it's telling me something that the names in the code, the function names and everything, are not telling me.
[01:04:58]
And they should be refactored to tell me. Right. So it should be more expressive code.
[01:05:02]
So I actually came across a tweet recently that was kind of giving a few rules, and there's an article too. I'll link to these.
[01:05:10]
I thought it was a really good summary. It said, like, code comment rules: comments should not duplicate the code. Good comments do not excuse unclear code.
[01:05:18]
If you can't write a clear comment, there may be a problem with the code. Comments should dispel the confusion, not cause it.
[01:05:25]
But some things that code comments can do: explain unidiomatic code. That's something I find very useful.
[01:05:33]
Unidiomatic? Yeah.
[01:05:35]
So, for some reason, like, normally Elm code would use List.map, but I'm doing this here for performance reasons, because it caused some memory issues when I did it that way, or whatever.
[01:05:48]
Provide links to the original source of copied code. Include external references: like, often I'll link to MDN resources in my code.
[01:05:57]
I think that's a great use of code comments. Add comments when fixing bugs: so, you know, you could describe how this has been a recurring issue, or how this vendor has an API that needs to be used in this particular way, even though you wouldn't expect to call it this way.
[01:06:11]
This is why we're doing this. And then the last one here, it says to mark incomplete implementations.
[01:06:16]
So, yeah, that list resonated with me.
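As an illustration of the "explain unidiomatic code" rule mentioned above, a comment like this tells the reader why the obvious List.map was avoided; the performance scenario here is invented:

```elm
module Doubles exposing (doubleAll)


doubleAll : List Int -> List Int
doubleAll numbers =
    -- Normally this would just be `List.map ((*) 2)`, but we hit memory
    -- issues on very large lists doing it that way (hypothetical scenario),
    -- so we build the result with a single foldr instead.
    List.foldr (\n acc -> (n * 2) :: acc) [] numbers
```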
[01:06:19]
The first thing I thought about when you said comments shouldn't repeat what the code says was JSDoc, where you basically have to. So, the comments that you add above JavaScript functions, where you have to repeat all of the arguments, the types, and what they do.
[01:06:38]
And very often it's just, like, listing the arguments and repeating what the argument name means. So it's very much not helpful, or not any more helpful than what the names already gave you.
[01:06:53]
And I feel like in Elm we tend to do that less. First of all, because we don't have a clear way of documenting arguments, but also because we have a type annotation that is a comment anyway.
[01:07:08]
It's just the nice thing about type annotations is that they're always up to date.
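For comparison, a hypothetical Elm function: the annotation already states what a JSDoc block would repeat, and unlike a comment the compiler checks it, so it cannot go stale.

```elm
module User exposing (User, fromName)


type alias User =
    { name : String }


-- The annotation documents the argument and result types; no JSDoc-style
-- block repeating "takes a name, returns a user if valid" is needed.
fromName : String -> Maybe User
fromName name =
    if String.isEmpty name then
        Nothing

    else
        Just { name = name }
```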
[01:07:13]
Right, right. Yeah. Another thing that can stay up to date is elm-verify-examples: examples in the code. I actually wish there was a way to have it check that an example compiles, not just that it gives the correct output.
[01:07:28]
But at least right now, there's no way to just easily say: check that this code compiles.
[01:07:35]
Right. If you don't have a value that you want to evaluate.
[01:07:39]
Exactly. But you can turn it into a unit test where it asserts that the return value is what the example shows, which is an amazing way to keep your documentation up to date.
[01:07:52]
And I mean, as a user reading the documentation, you can trust it more. I think this applies to internal APIs, too. It's like a great tool.
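A minimal sketch of an elm-verify-examples doc comment, with a hypothetical function: the indented example with the --> marker is extracted and run as a unit test, so the docs stay correct.

```elm
module MyMath exposing (double)

{-| Double a number.

    double 21
    --> 42

-}
double : Int -> Int
double n =
    n * 2
```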
[01:08:00]
Yeah, for Elm Review rules, I want the examples to be checked. Usually I have a success and a fail section: this code would not be reported, this code would be reported.
[01:08:14]
And I would love to have an elm-verify-examples alternative for that.
[01:08:20]
Yeah, maybe we can make a pull request to help with it. I don't think it would be that much more difficult than what it's already doing.
[01:08:27]
For compiling? Yeah, yeah, probably. Yeah, some issues to fix, but yeah.
[01:08:36]
Yeah, but I know that when I write an Elm Review rule, I'm always copy-pasting some example to start, just to get the type signatures right, and the arguments and the parameter names and that sort of thing.
[01:08:51]
That's just how people use code, you know? Yeah, I use examples from the documentation all the time. Super happy that someone wrote it before me.
[01:09:00]
Right, absolutely. All right, well, any final words of wisdom or resources we should point people to for productivity tips?
[01:09:09]
So you mentioned that you ship very often, and ship unfinished things as well, but you don't actually release all that many things.
[01:09:19]
Right. I know you do a lot of work, but I guess you could ship things more often, right?
[01:09:27]
Right, yeah, it's a good question. So, well, first of all, I think I have a pretty good track record of shipping tiny things very frequently.
[01:09:38]
So that's one thing. But for large things, let's say like a big Elm pages release, like Elm pages 2.0, I'm working on Elm pages 3.0 right now.
[01:09:49]
These are long, big release cycles that like it's actually like not that long since Elm pages 2.0, but they're big releases and I'm not doing 2.1, 2.2, 2.3, 2.4, you know.
[01:10:02]
Or version 3, version 4, version 17.
[01:10:08]
But I think that these considerations are a little bit different between publishing libraries and packages versus publishing SaaS applications or that sort of thing.
[01:10:19]
I think, when you're publishing a library, it's sort of like... it's not solving a use case for one group of users.
[01:10:32]
It is a meta use case for a meta group of users, right, that are shipping many SaaS applications.
[01:10:40]
Right. So some of these things about not doing too much upfront design, not thinking about every possible use case, but kind of shipping things, getting feedback on it and adapting based on that.
[01:10:56]
Some of those things, you do have to, like, say: well, you know what, I actually do need to consider more than just my use case, because it's a library. That's sort of what it does: it handles a large variety of users' use cases, which in turn serve a large variety of end users.
[01:11:14]
So I do think that changes the calculus. And there is an interesting interplay between this sort of how you think about a minimum viable product or a minimum viable package, if you will.
[01:11:27]
Also, I think if I could release a beta version in the packages, that would change things to some extent, but also to a certain extent, like you want to batch breaking changes as a cohesive set.
[01:11:42]
You know, like Evan talks about this idea of recognizing multiple problems and finding one way to solve all of those problems.
[01:11:54]
Yeah. Rather than playing whack a mole and building feature, feature, feature, feature to address each problem.
[01:12:00]
Yeah. So it's creating a lot of solutions or a lot of bug fixes for things that are inherently related.
[01:12:08]
Exactly. Exactly. So sometimes it does take time to step back and say, what is the bigger picture?
[01:12:14]
And you have to sit with something, you have to explore, you have to, you know, we did our sort of API design lessons talk, our episode on that and talked through a lot of our lessons that we've learned about that.
[01:12:29]
Like I talked about hammock driven development and going on walks to let ideas simmer in the background.
[01:12:36]
So I think that it is very exploratory and it's just a different art to design libraries and packages than it is to ship code to users.
[01:12:46]
You have very different standards as well for packages and for applications.
[01:12:51]
Right.
[01:12:52]
For applications it seems to be very much lower.
[01:12:55]
Right.
[01:12:56]
Which is a bit weird in a way.
[01:12:58]
Well, I wouldn't put it that way. Again, I don't think it's about shipping low-quality code and unpolished code.
[01:13:06]
And I think for a lot of people, those terms come to mind when they think about shipping in an iterative, agile way.
[01:13:13]
And I very much do not think of it in those terms.
[01:13:17]
I think the opposite.
[01:13:19]
I think it's about shipping high quality code that is adaptable and it is about not solving every problem at once, but solving those problems very well.
[01:13:30]
And it's about doing things that are custom-tailored to solve specific problems and needs, and to solve those problems well, rather than designing upfront for many perceived needs.
[01:13:46]
See, because often people use the word polish to talk about large designs that have a lot of bells and whistles, which may not actually hit at the actual problem and solve it well.
[01:13:58]
And that's what I associate with this big upfront design approach.
[01:14:02]
And what I see as the way to do this sort of more iterative approach to shipping products is, again: you're not shipping bells and whistles. You're not shipping features that will never be used, which is what big upfront design approaches often end up doing, shipping many features that will not be used, or that will be used.
[01:14:27]
But there could have been a simpler way to solve that same problem.
[01:14:31]
But it's about finding the simplest way to get something out that solves the user's problem, that meets their need, in a very high-quality way.
[01:14:41]
So, like, you don't ship things that aren't quality. You don't ship things that aren't tested.
[01:14:48]
And again, refactoring, building in the quality, is not a separate step. Same with testing:
[01:14:54]
The test is not done as a separate step.
[01:14:56]
The test is written before the code.
[01:14:58]
If you have an end to end test suite, that's part of the process of building the features not done as a separate step.
[01:15:05]
If there are specific security or legal concerns, you don't wait until after and do all those things at once.
[01:15:12]
You do that as you ship the feature.
[01:15:14]
So all of the things that define quality for your product are built into it and shipped.
[01:15:19]
And that's part of shippable.
[01:15:21]
It's just that you're trying to get to that shippable as soon as possible every time you deliver a feature rather than building a whole bunch of features,
[01:15:30]
some of which may or may not be needed, some of which may not be the simplest way to achieve something or may not be necessary for the user or solving a real problem that they have.
[01:15:40]
And then polishing all of those at once and waiting until all of that's done to then go and release a big batch of things every year.
[01:15:48]
So again, it's not about lowering the standards for quality.
[01:15:52]
It's about reducing scope, but having high quality for that small scope and doing that over and over.
[01:16:00]
So for packages and libraries, I think that the equation does change a little bit because you do need to go through.
[01:16:10]
You don't want to ship breaking changes.
[01:16:12]
So you don't want a lot of churn and you don't want to introduce a lot of breaking changes all at once.
[01:16:21]
And you want to solve a set of problems in a cohesive way.
[01:16:25]
So I think the art of designing packages is a little bit different in that way.
[01:16:30]
But that doesn't change the fact that I work in a way where I keep things shippable as I'm iterating on a package.
[01:16:39]
Yeah, in the back of your mind, you will still have a to do list of features that you want to build to prevent an additional major version.
[01:16:50]
But you could ship it technically.
[01:16:53]
Exactly, exactly.
[01:16:55]
And there's a difference between shipping and releasing.
[01:17:00]
So shipping, you could ship something with the feature flag turned off.
[01:17:05]
And then you say, OK, we've got this suite of features.
[01:17:10]
Maybe you had a round of beta feedback with some beta users for a SaaS product and you've sort of validated it.
[01:17:18]
And maybe it's, like, related to marketing, where they say: all right, we've got this conference, and we're releasing this set of features around this conference.
[01:17:27]
We're announcing it there and now we're turning on the feature flag or whatever it may be.
[01:17:31]
And ideally, you don't want it to turn into a big yearly release of all your things, with all your feature flags turning on.
[01:17:39]
But there's some nuance to that, right?
[01:17:41]
You can separate those two ideas.
[01:17:43]
And that's one of the ideas of continuous delivery: you can separate releases from shipping, and feature flags are an important part of that equation.
[01:17:52]
So I would say it's similar with like, you know, authoring a package where you can say, OK, well, we're doing a big breaking release.
[01:18:01]
You know, with Elm pages, I'm working on something that has server rendered pages and it can manage cookies and, you know,
[01:18:09]
send arbitrary HTTP responses and parse incoming HTTP requests and things like that.
[01:18:16]
Now, I am building that in a way where I'm saying open up a Cypress test, write a test.
[01:18:22]
It hits this page, it logs in, it does these things, it sets the cookie, and this stuff works.
[01:18:28]
I'm not like, oh, let's see at the end if this works.
[01:18:31]
Some people have been doing alpha testing and giving me feedback along the way.
[01:18:34]
There have been very few issues coming up in the alpha testing because like I've been testing it very thoroughly along the way.
[01:18:41]
It's like pretty close to shippable.
[01:18:44]
Well, you've been shipping it just not through the usual means.
[01:18:48]
Right.
[01:18:49]
You haven't released anything yet.
[01:18:52]
Yes, that's right. That's right.
[01:18:54]
And I also, you know, the Elmradio.com site will probably be Elm pages 3.0 before Elm pages 3.0 is officially shipped, for example.
[01:19:03]
So that's something I do often: I maintain these other projects with a vendored version of the early release to get feedback.
[01:19:10]
But you want to like so let's say for the Elm pages 3.0 incoming HTTP request parsing where you can say, you know,
[01:19:20]
I want to match requests that are a form post with these form fields or I want to match something that's an HTTP post method with a JSON body that can be parsed with this JSON decoder or whatever it may be.
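To give a feel for the idea, here is a purely hypothetical sketch of matching an incoming request; this is not the actual elm-pages 3.0 API, just the shape of the problem being described:

```elm
module RequestSketch exposing (matchJsonPost)

import Json.Decode as Decode


type Method
    = Get
    | Post


type alias Request =
    { method : Method
    , body : String
    }


-- Succeed only for a POST whose body parses with the given JSON decoder;
-- all the names here are invented for illustration.
matchJsonPost : Decode.Decoder a -> Request -> Maybe a
matchJsonPost decoder request =
    case request.method of
        Post ->
            request.body
                |> Decode.decodeString decoder
                |> Result.toMaybe

        Get ->
            Nothing
```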
[01:19:35]
I don't want to just ship, you know, the package version 10.0, 11.0, 12.0, iterating on those ideas.
[01:19:45]
And I don't want users to go depend on these packages and have constant breaking releases.
[01:19:51]
I want to reduce the amount of churn there, but get feedback along the way.
[01:19:56]
So it's just the nature of it that it takes some extra design consideration, because you're building something where you do have to do a little bit of upfront design, or at least consider many use cases, in a way that you don't have to when designing a single product.
[01:20:15]
And you want to kind of get that feedback up front, and sit with that, and look at the big picture and say: OK, I built this API.
[01:20:26]
Is there a better way to approach this? Does it really solve people's problems? Do I see any patterns I could simplify?
[01:20:32]
I'll drop some links to all the things we talked about.
[01:20:36]
If you're interested, definitely try out the Pomodoro method.
[01:20:38]
Try out, or read up on, Getting Things Done. Lots of great stuff in there.
[01:20:42]
And, Jeroen: until next time.
[01:20:44]
Until next time.