API Design Lessons

We share what we've learned from designing Elm APIs and how it applies when building applications and tools.
July 5, 2021


  1. Avoid unexpected or silent behavior
  2. Give context/feedback when things go wrong so the user knows their change was registered, to enhance trust
  3. Good errors aren't just for beginners - the Curb Cut Effect
  4. Sandi Metz - code has a tendency to be duplicated - be a good role model - we're influenced by precedent
  5. Matt Griffith - API design is holistic. It's a problem domain. Rethink from the ground up.
  6. Learn from the domain and terms, but don't limit yourself to it when you can create better abstractions.
  7. Linus Torvalds' definition of elegance/good taste - recognize two code paths as one. Reduce the number of concepts in your API when you can treat two things as one, then things compose more easily. How Elm Guides Towards Simplicity.
  8. You don't need a direct mapping of your domain, but start with the spec and terms. Leverage existing concepts, and have an economy of concepts. Tereza's talk: elm-plot The Big Picture
  9. API design is a tool to help you solve problems.
  10. There's a qualitative difference when you wire up feedback mechanisms up front, before you do the thing.
  11. Avoid toy examples, use meaningful use cases to direct your design.
  12. Design for concrete use cases, and drive changes through feedback from concrete use cases (legal standing). "Better to do it right than to do it right now" - Evan's concept from the Elm philosophy. If you don't have a motivating use case, then wait. Extract APIs from real-world code. It's okay for there to be duplication. Premature abstraction is the root of all evil. Simplicity is the best thing you can do to anticipate future API design needs.
  13. Come up with an API with the most benefits and the least pain points.
  14. If there's something that you want to make really good, invest in giving it a good feedback mechanism.
  15. Rich Hickey's talk Hammock Driven Development. We don't design APIs, our extremely creative subconscious designs APIs - let your conscious brain do the hard work to put all the information in front of your subconscious so it can do what it does best. elm-pages 2.0 screencast with Jeroen and Dillon.
  16. Pay no attention to the man behind the curtain. Parse, Don't validate at the high level, but under the hood you may need a low level implementation.
  17. Have a clear message/purpose - whether it's an API, or an internal module.
  18. Take responsibility for user experiences.


Hello, Jeroen.
Hello, Dillon.
What are we talking about today?
Today we're talking about API design.
API design?
That's not a topic that we talk about.
Never, never.
So we figured we'd go out on a limb
and talk about something entirely different than usual.
Topics we know nothing about,
topics we don't care about.
I think this is going to be a fun one.
I think we're just going to share some stories
from our API design experience
and try to draw out some lessons learned from them.
And I think we're going to be building up
a little API design manifesto in the process.
So maybe we should...
I'm wondering if I should update
the idiomatic Elm package guide readme
because we'll see how it goes.
Does the manifesto say that you should use
the Elm package starter as well?
Oh, I haven't linked to that in there yet.
I totally should.
But hopefully this episode will be interesting
for library authors as well as application authors.
Because if there's one thing we've learned
from making this podcast, Jeroen,
it's that lessons about API design for library authors
are also very relevant to API design
for application designers.
Because when you're writing an application,
you're in a way building plenty of smaller libraries
that you can compose together
and that create your application.
So building tools and packages are very useful
for learning how to build your application.
Yeah, it's almost like designing...
Let's call them external APIs versus internal APIs.
And the process of building external APIs
is this highly condensed form
because basically you have more users,
more use cases you're supporting.
So you rapidly go through this design process
because you're getting a lot of feedback
at once in a short time.
Whereas internal API design,
you don't have to be as intentional about it.
But whether you know it or not,
the principles are going to help you out.
But you can just sort of write code
without having clear APIs.
Whereas when you build an external package,
you have to put that extra thought into it.
So it forces you to be more deliberate
about those design choices.
With an internal API,
there's also the scope.
You know exactly what you're building the library for,
more so than with an external API:
the use case you're thinking of,
maybe a few others.
And there's always the question,
is that thing over there something
that I should support as well?
If so, does that change how I should design my API?
For an internal API,
you also have those kinds of questions,
but less so, a tiny bit less so.
I mean, you can be fuzzy
about what problem a module solves,
or a set of related modules.
And it's interesting because you can publish a package
that's a little bit fuzzy about that.
And you can also have modules
in an internal application
that are a little bit fuzzy
about what their purpose is,
what their reason for existence is.
And what happens a lot
in application design
is you have all of these functions
whose reason for existing
would align them together in one module,
but they're sort of scattered all over,
and everything has to be concerned with them.
So the process of building external APIs
forces you to create
this condensed external cohesive set of things.
But really, those are good lessons to learn
for internal application design as well.
So do you have API design stories and lessons?
So there's one design lesson
which I don't think is necessarily all that obvious
or one that people point towards.
But unexpected behavior is something
that I try to avoid in my API.
Mm hmm.
Or silent behavior.
So when you use a function
and it does not what you expect it to,
or it unexpectedly does nothing,
that I think you should try to avoid
because that creates surprises.
Mm hmm.
And people don't like surprises.
People, when they are surprised by the API,
they will create issues.
They will create bug reports.
And that requires time for you to spend on support.
So that's one thing that I try to avoid in my APIs.
And I've done so pretty successfully, I think,
since I rarely get reports of bugs in the API.
Uh huh.
So I'm happy about that.
So there's one, for instance, that I'm thinking about.
I'm going to talk about Elm Review mostly.
So do you know how to create an error in Elm Review?
There's the tuple where you can do
sort of the equivalent of commands, like we talked about:
you would return an error in that tuple,
the equivalent of the update function
in the Elm Architecture.
So instead of returning a list of commands, or a command,
you return a list of errors.
And those errors, you can create them using several functions.
One of them is the error function.
So the error function, you pass it a title,
a message, and details, and a location,
for the current module.
There's also another one for the Elm JSON file,
and another one for the README.
And there's another function to create an error
for a different module,
using something like the navigation key,
where you say, oh, I have this module key
because I have visited that module previously.
Oh, okay.
Now create an error for that module.
So you need proof of having visited a module
in order to report an error on that module.
That's cool.
Yeah, so the reason why I did that is because
if people pass in just a path to a file,
or a module name, and it's not right,
then they will probably not see an error anywhere,
and things would fail silently.
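As a rough sketch of the tuple shape described here, where a visitor returns errors alongside its context the way update returns commands alongside the model (the `Context` alias and `isForbidden` predicate are made up for illustration; the real elm-review signatures differ slightly):

```elm
import Elm.Syntax.Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule

type alias Context =
    ()

-- Visitor: returns errors in a tuple with the updated context, much
-- like `update` in the Elm Architecture returns commands with the model.
expressionVisitor : Node Expression -> Context -> ( List (Rule.Error {}), Context )
expressionVisitor node context =
    if isForbidden node then
        ( [ Rule.error
                { message = "This expression is forbidden"
                , details = [ "Explain why, and how to fix it." ]
                }
                (Node.range node)
          ]
        , context
        )

    else
        ( [], context )

-- Placeholder predicate, purely illustrative.
isForbidden : Node Expression -> Bool
isForbidden _ =
    False
```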
But that is actually not what I wanted to talk about.
The error function creates an error for the current module
that you're visiting.
But you can create errors for the Elm JSON file,
or in the final evaluation, where you have seen all files,
and you are not in the context of a module anymore,
but where you can still create errors.
So what happens if you call error?
There is no local module.
But the API, the types still say,
I have an error.
So if I didn't try to be very smart about that,
people could report issues because,
hey, I reported an error here, or I returned an error,
and nothing's happening.
So that was a bit tricky to solve,
and one of the thoughts I had is,
this is tricky, but I can determine this at runtime,
so I can detect this in tests.
So I'm going to, in my test checker,
I'm going to look at whether people call that function
outside of the module visitor,
and people are going to be happy
as long as they create a test for that.
And that would have worked,
but maybe I would have had those bug reports,
and if they didn't write a test,
then the user would not see what they were supposed to see.
So what I did is I added a phantom type
to the error type.
The error function returns an error
whose phantom type is the empty record,
and all the other error functions
return an error with an unbound phantom type,
so you can basically return them from wherever.
And in the final evaluation,
in the elm.json visitor,
or anywhere that is not
in the context of a module visitor,
I say you need to return an error with a specific type.
With which type, exactly?
I was going to guess never,
but you've got your own specific type?
I have a specific type.
It is of type Error of a record with a property
useErrorForModule : ().
Nice, nice, nice.
So basically I'm saying,
if you have this error,
you should use ErrorForModule,
which is the appropriate function most of the time.
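A minimal, simplified sketch of that phantom-type trick (the real elm-review functions also take messages, details, and ranges; the types here are pared down to the essentials):

```elm
-- The phantom `scope` parameter records where an error may be returned.
type Error scope
    = Error String

type ModuleKey
    = ModuleKey

-- Only usable where `Error {}` is accepted, i.e. inside a module visitor.
error : String -> Error {}
error message =
    Error message

-- Unbound phantom type, so it can be returned from anywhere, but it
-- demands a ModuleKey as proof that you visited the module.
errorForModule : ModuleKey -> String -> Error scope
errorForModule _ message =
    Error message

-- The final evaluation requires this type. Returning the result of
-- `error` here fails to compile, and the resulting type error
-- literally names the fix: useErrorForModule.
finalEvaluation : () -> List (Error { useErrorForModule : () })
finalEvaluation _ =
    []
```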
I'm going to go ahead and call that a Phantom error.
It's a bad thing that happens
when your continuous integration server
is failing intermittently.
So maybe that's not a good term for it.
Oh, is that a thing?
Yeah, people do talk about Phantom build failures,
which is not a positive term.
I know about CI pain,
and I know about Phantom pain.
So that could be CI Phantom pain.
So yeah,
it took me a while to find this solution,
but once I found it,
I'm like,
I don't even need to handle
this in my tests anymore,
and people can't have this error
because the compiler will check for it.
And I have never gotten an error report for that.
That's cool.
So the lesson is,
don't let people do things
with undefined behavior
or with silent behavior.
Like when you do,
when you run Elm test
and I can hear Richard's voice
because I'm pretty sure
he wrote that original message.
It's like,
there aren't any tests.
Let's define some
with an exclamation point,
which is great.
I've been thinking
about this principle as well,
and I think that as a community,
we sort of develop
these principles together
and kind of come up
with what's important
to us as a community,
what's worth doubling down on.
So like I've been working on,
for the Elm pages 2.0 beta,
I've been working on
improved 404 pages
for the dev server.
And basically,
like I'm thinking about
when you go to a 404 page,
like most static site frameworks,
whether it's Next.js or Nuxt
or whatever popular tool,
the 404 page will just say
here are some pages
and you can search through
and here's how you create a page,
which is better,
gives you more feedback.
But I've been thinking
what is the user experiencing
when they're seeing that page?
So if you've just created a page,
you've got this sort of
file based routing system,
what are you thinking?
And I actually know
from my experience,
what I'm thinking is
did it pick that up?
And like did I do something wrong
or did it do something wrong
or is there some weird
environment thing or do I need
to restart the dev server?
Did I put the file
in the right folder?
Yeah, is there some weird
configuration thing?
And you feel like
you don't have visibility
into what's actually happening.
So in the 404 page,
I'm like, well,
the 404 error should tell you,
if you go to /article/hello:
hey, there's no matching route.
Here are some routes:
there's /blog/:slug,
there's a root route,
there's an about route.
And then you can look at that
and you can say, oh, right,
I typed /article
instead of /blog.
Or, if you just tried to add a route
and it's not working,
you can look and see
that it picked it up.
So you get that feedback
that I just added a thing
and oh, it has the thing,
but I'm typing it in a way
that it's not pulling up
the right thing.
So you understand
why it's not working,
and you trust the system better,
because you can see
it's picking up changes:
when you add something,
the dev server reloads the page
and you can see
whether anything changed.
So this is one of those principles
that I think we have built into
our ethos in the Elm community,
which is that good errors
aren't just for beginners.
And this kind of makes me think
of what Tessa was telling us about
in our last episode
of the curb cut effect,
where in Berkeley,
they cut the sidewalk curbs
so if you're in a wheelchair,
you can more easily
access those sidewalks.
But also if you have a stroller,
then it's easier
to access those sidewalks,
and it becomes better
for everybody.
And I think in the same way,
good error messages aren't just
about making things accessible
for beginners.
They're a type of feedback
and feedback is really good
for experts too
because we recognize the value
of great error feedback.
I'm going to go on a bit
of a tangent question here,
but I wonder why the documentation
for Elm packages and Elm tools
is usually so good.
I'm wondering whether
it's just something like
the people who were there
before us, mostly Evan
and other core contributors,
they did a very good job
of having good documentation
and we are just copying them
or honoring their legacy in a way
because we want to contribute
at the same level as they did.
I do think that there's,
actually Sandi Metz talks about this
in the context of code
in one of her books,
this idea that,
I can't remember what she calls it,
like replicability
or whatever it is,
but the idea is that code
has a tendency to be replicated.
When you're building a new page,
what's the first thing you do?
You go and look at another page,
you copy paste it
or you see how it's done.
I don't know about you, Jeroen,
but if I'm like writing
a new test module,
I know how to write
a test module.
I know how to say
describe, a string, a list of tests;
test, a string, left arrow.
I know how to do that from scratch.
But copying is convenient,
probably a similar amount of time,
and besides,
is there some pattern in this code base?
Is it importing
some helper functions?
You want to replicate
the styles and patterns
in an existing code base.
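For reference, a test module written from scratch with elm-test's describe, taking a string and a list of tests as mentioned above, looks like this:

```elm
module ReverseTest exposing (suite)

import Expect
import Test exposing (Test, describe, test)

-- A describe block: a String label and a list of tests.
suite : Test
suite =
    describe "String.reverse"
        [ test "reverses a simple string" <|
            \_ ->
                String.reverse "hello"
                    |> Expect.equal "olleh"
        ]
```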
Yeah, totally.
People say copy pasting is bad,
especially if you don't understand
the code,
and there are some truths to that.
I can just copy paste a test,
and then I know what I should
and should not change.
Even when I make a mistake,
I usually have some good feedback
from Elm test or from the compiler.
I copy paste code a lot.
I think that's just
what we do.
We look at how has
it been done before.
That's just how our minds work.
Because maybe there's a
convention in this code base
where it maps over a list
of test data for each test case,
or it uses some special
test helper functions.
We tend to follow conventions.
We tend to follow precedents.
I think we're very influenced
by APIs that have been out there.
I know like,
Matt Griffith's talk
on style elements at the time,
which is now Elm UI,
really changed the way
I thought about API design
of this idea that rethinking
the thing you're doing
from the ground up.
Basically this idea that
API design is more holistic.
It's not just,
how do I write a function
that does this?
It's not a set of functions.
It's a problem domain,
and you can actually
change the domain
in order to make a simpler API.
You can change the rules
of the game.
If you don't like the game,
you're making the game up,
so change the rules of the game.
It made me rethink that.
You look at the docs,
and they had pictures
of, here's what it looks like
if you use a row or a column
or an el.
We're very influenced
by having the mechanism
for package docs
where the types are in line.
Elm has such a nice,
elegant type system,
and you put the types
in the doc comments above it.
It lends itself
to really good documentation.
I agree, when I go to documentation
in other communities,
I feel a little lost sometimes.
It's so easy to navigate packages
in the Elm ecosystem.
That actually reminds me
of a couple of stories
from my experience,
which is this change
in Elm GraphQL
where I entirely removed
this field type,
and everything became
a selection set.
It is before my time,
because I learned Elm GraphQL
with the selection sets.
Only the selection set,
so I have no clue
what you're talking about.
Yeah, so back then
the package was called Graphqelm,
which I know,
it's extremely funny.
Yeah, yeah.
I know what you're thinking,
Dillon, you're so clever,
and I can't argue with that.
I would love to open a shop
just for the sake
of having a pun in its name.
We used to have a cheese store
in my hometown here
called C'est Cheese,
like C apostrophe E-S-T,
because when you're taking
a picture of someone,
you say "say cheese,"
and in French,
c'est means "it is."
It's very clever.
So this field type,
there used to be a field type
and a selection set,
because in GraphQL,
if you look at all the concepts
in the GraphQL spec,
there are selection sets,
there are fields.
A field would be like,
if you want the first and last
name from a user,
then first and last are fields,
and you're building
a selection set
where you're selecting
first and last from a user.
And so it's a natural mapping
that you would,
the obvious thing
when you're mapping
those concepts of GraphQL
into an Elm API would be
to have a selection set.
So you would differentiate
the properties or the fields
and whatever contains it, right?
Is that it?
And so the breakthrough was,
well, a field is actually
a selection set with one item.
And if you can combine together
two different fields,
if you can combine
two selection sets,
then you can have a field
that's just a selection set.
So if you just say every field
is just a selection set of one,
then when you combine
them together,
it's the same API experience
of building selection sets,
except you could build
a selection set by just saying,
I only want this field.
Now you have a selection set.
Or you could build
a selection set by saying,
here are these three fields
which happen to be
of type selection set.
Oh, and I can use the pipeline API
to do that.
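A simplified sketch of that unification (pared-down types, not the full dillonkearns/elm-graphql API): a field is just a SelectionSet that selects one thing, so a single combining function covers fields, fragments, and whole selections alike.

```elm
import Json.Decode as Decode exposing (Decoder)

-- A selection set carries the fields to request and a decoder for the
-- response; the phantom `scope` names the GraphQL object it applies to.
type SelectionSet decodesTo scope
    = SelectionSet (List String) (Decoder decodesTo)

-- A single field is just a selection set of one.
field : String -> Decoder a -> SelectionSet a scope
field name decoder =
    SelectionSet [ name ] (Decode.field name decoder)

-- Combining concatenates the requested fields and pairs the decoders.
-- Because fields, fragments, and selections all share one type, they
-- all compose through this same function (or a pipeline built on it).
map2 : (a -> b -> c) -> SelectionSet a scope -> SelectionSet b scope -> SelectionSet c scope
map2 combine (SelectionSet fieldsA decoderA) (SelectionSet fieldsB decoderB) =
    SelectionSet (fieldsA ++ fieldsB) (Decode.map2 combine decoderA decoderB)
```

With this shape, combining two single-field selections yields another selection set, ready to be combined further, which is the one-concept composition described above.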
So what was it
that spawned this change?
Was it because it felt
a bit hard or unnatural
or painful to have to query
the individual fields
compared to what was
contained in them?
Like, was there...
So when you use
selection sets,
everything is the same.
So you have one API
for everything.
So before,
did you have two APIs
and it was a bit hard
to get the right one?
One thing is,
if you wanted to do
what's called a fragment
in GraphQL,
so GraphQL has these fragments
which are just
little composable units.
It's almost like a variable
you can sort of
store something off.
A fragment is just,
oh, here are these things
and I want to select
this selection of things
that I want to use,
share between code.
It's like a little
subselection set
that you can pull
into a selection set.
So it's like a higher order
selection set sort of.
And using the previous API
where things were fields,
it was a lot more clunky
to pull in a selection set
to a selection set.
There had to be whole new
APIs for dealing with that.
Essentially, one of my favorite
little tidbits
of API design wisdom
that I've heard
is Linus Torvalds was
talking about
his coding principles
and how one of the things
that he looks for in good code
is this concept of elegance.
And what he defined elegance as
was a sensibility
where you're able to see
two different code paths as one.
So like, you know,
if you say,
if the list is empty, do this;
if the list is not empty,
do this.
If you can get the exact
same output in a single code path,
and see the way to treat it
as a single code path,
that's what he calls elegance.
And I think that same
principle of elegance
applies in API design.
So there's an elegance
to reducing the number
of concepts
because then you can treat
different things interchangeably
and they just compose
together more elegantly.
So another key lesson
from that design
was again, like, going outside
of the boundaries
and rethinking how you're
approaching the problem
from the ground up.
And in order to do
that API design,
I wrote a blog post about this
called How Elm Guides
Towards Simplicity.
And I talk about how I had
to introduce this technique
of using hashes for the field aliases
in the GraphQL queries under the hood.
And the reason is because Elm
GraphQL builds up decoders
under the hood, but what's
it going to decode if...
Because you can have
collisions in fields.
You can select multiple fields
with the same name
but with slightly different
arguments passed to them,
in which case it's actually
a GraphQL error.
So the old API would combine
them together
and it would add numbers
as you added them,
and it knew how many had been
added before, so it knew
what field alias to use
as you added new ones.
But the breakthrough idea
I came up with was,
what if I made it deterministic
by taking all of the things
that make a field unique
and then building hash with that
and always using that hash?
And which I get questions
about a lot because people are
like, why are there hashes in here?
But it ends up giving you this
much more high level API.
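A hypothetical sketch of that deterministic-alias idea, with a stand-in `hashString` helper (the real elm-graphql hashing differs): everything that makes a field unique feeds the hash, so combining selection sets in any order yields the same alias.

```elm
-- Deterministic alias: the same field name with the same arguments
-- always produces the same alias, regardless of how many fields were
-- added before it or how selection sets get combined.
fieldAlias : String -> List ( String, String ) -> String
fieldAlias name arguments =
    if List.isEmpty arguments then
        name

    else
        name
            ++ String.fromInt
                (hashString
                    (name ++ String.join "," (List.map (\( key, value ) -> key ++ ":" ++ value) arguments))
                )

-- Stand-in hash (a simple polynomial rolling hash), purely illustrative.
hashString : String -> Int
hashString input =
    String.foldl (\char acc -> modBy 100003 (acc * 31 + Char.toCode char)) 5381 input
```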
And another principle that I
learned building Elm GraphQL
is that you don't necessarily
need to have a direct mapping
of your domain to your library.
That said, when I'm working
on a library, the first,
the last, the middle, the second,
the fifth thing I'm doing
is looking at the spec,
if there is a spec,
and looking at the terms
that are used in that spec.
Because I do want to leverage
existing concepts.
If the GraphQL spec calls
something a field alias,
then even if it's under the
hood internally in my code,
I'm going to try to use that
concept of the field alias.
But at a certain point,
sometimes you break out
of those concepts and you say,
actually, I want to decouple
myself from this particular
concept, I want to abstract
this concept away.
Then you have to be able
to do both, I think.
You have to be able to take
the domain, understand it
extremely well, and then use
the concepts and terms from
that domain as much as possible,
except where deviating from
them would give you a better
design or allow you to simplify
or have a more elegant design
where multiple concepts
fold together into one.
And having an economy of
concepts is so valuable
because people need to learn
those different concepts.
People need to have a mental
model of how they work, so the
fewer concepts you have
operating, the better.
So with Elm GraphQL, I've tried
to abstract away a lot of things
about GraphQL and also make a
lot of impossible states
impossible through the API.
Yeah, you want to hide all the
confusing parts of the spec
from the public API.
That's why when we are writing
JavaScript, we don't refer to
the same names as the ECMAScript
spec tells us, right?
That would be hell.
Yeah, Tereza gave a really cool
talk at Elm Europe one year
about building a chart library.
And I thought it was really cool
because it was kind of like
about how do you most effectively
present information?
And that really hammered home
the point that it's not about
API design, it's about solving
problems, and API design is this
tool to help people solve problems.
So how did you notice the problem
for the selection sets?
Like, did you create a lot of
examples or were you just using
Elm GraphQL everywhere and
noticed that there was something
to be improved?
Well, one of the, I'm a big fan
of feedback of all kinds, and I
actually give a talk about this
concept of basically, and this is
actually one of my principles,
is that there's a qualitative
difference when you wire up
feedback mechanisms before you
do the thing.
So if you can get user feedback
before you finalize an API or,
you know, 1.0 the API, if you
can write examples before you
write the code, right?
That's like when writing examples
is an afterthought, it's
qualitatively different than when
writing examples comes first.
When you write tests as an
afterthought, the tests become
qualitatively different than when
you write the tests first.
And there are a lot less of them.
Yeah, yeah.
The coverage is way lower.
And it just doesn't, you don't
get the design benefits from it
in the same way.
And the same is true for, you
know, getting user feedback.
And I really like to write
examples first as a way of
getting feedback.
So that's like one of my main
ways of getting feedback when
I'm playing around with an API
is I use it in tests, I use it
in an examples folder and see
how it feels.
And I try really hard to have
meaningful examples, not toy
examples, because toy examples
don't give you that benefit.
I try to avoid, you know, foo,
bar, baz in my examples.
I try to avoid it in my tests.
I try really hard when I'm writing
unit tests to have meaningful,
relevant examples, because I know
that those examples are actually
going to direct the design that
I come up with.
And I don't want foo, bar, baz
directing my design.
I want useful examples that my
API is designed to solve to be
directing the design direction.
So those are sort of the main things.
I often try, like, I really like
hearing how people are using
APIs in Slack.
I ask people frequently.
When people post an
issue, the first thing I do is
if it's not clear from the issue,
I try to understand what were
you trying to accomplish when you
ran into this problem or this
confusing piece of the API or
whatever it might be.
What were you expecting?
What were you expecting and what
were you trying to accomplish?
And let's see if
we can accomplish that using the
existing API and then examine it
and understand your use case.
And then maybe we find, oh,
actually, for the particular
thing you're trying to solve, it
would be much more elegant if we
had these new pieces in the API.
There's a concept in law called standing.
I can't remember if I brought
this up on the podcast before.
This doesn't ring a bell.
So this is an analogy that I
that I think of sometimes that
so in law, you can't go and sue
somebody or, you know, in
American law, you can't like go
to the Supreme Court and say,
hey, you can't limit free speech
in this particular way that I
think you're limiting it.
I can't just go do that if it
wasn't done to me.
I don't have legal standing.
So you need to find a defendant
or a victim.
Yeah, not a defendant.
You need to find somebody who
has standing in order to bring
a case to court.
And you can't just say, this is wrong.
You have to say, you wronged me,
in order to bring a case to court.
And in the same way, I think
that I'm not so interested to
hear people's feedback about how
they think an API should be.
I'm more interested to hear how
this API is serving you or not.
That's what I really want to hear.
I feel like my job as an
API designer is to come up with
the designs.
And what users can really help
me do is not tell me what the
designs could be, because I mean,
they're going to have a different
vision and different ways things
could be.
What I really want users to help
me understand is what's
confusing to you when you're
trying to use it.
What problem are you trying to solve?
What problem can you solve with it?
What problem can you not solve with it?
What's intuitive about it?
Because I can't come up with
those things.
I need people to help me
understand it.
So I want people with standing
who actually are trying to solve
something with my API to give me feedback.
Yeah, you basically don't want
people to come up with ideas
like, hey, it would be cool if
we could do this without having
a proper use case for it.
Yes, because those things are
just not as solid.
The API designs that come out of
thin air from the abstract are
not as useful.
So design for concrete use cases
and drive your designs from
feedback from concrete use cases.
Yeah, there's one request that
I've gotten a few times for Elm Review,
which is, hey,
Elm Review is great.
I think it would be cool if we
could ignore rules with using
comments, just like ESLint does.
And the few times I got asked
about it, I replied with, sure, do
you have a good use case?
Why would you use this?
And I never got a good reply.
And this is one of
those cases where it's, well,
all the other tools have it,
surely there must be a good
reason. Which is something that
we in the Elm community are a
bit wary of: we should wait
before we implement this.
So there's this, as you said,
like, there needs to be some
standing, there needs to be a
proper use case.
And if we don't, then we're just
not going to implement the feature.
And we're going to wait for it
to pop up.
Which is exactly how the legal
system is designed.
I mean, it's flawed in some ways,
but it is a really interesting
concept that let's not
preemptively create laws or, you
know, legal rulings.
Let's let people respond to
specific cases because we can't
really do these things in the abstract.
We need concrete things.
And fortunately, it's much more
clear cut in the API design world.
Did you know that it was not
legal to adopt a Martian?
But how am I going to adopt a Martian?
There's no standing for it.
It has never happened.
There's no law for it.
I'm guessing.
Well, there go all my dreams.
Is that the kind of kid that you
want with your wife?
I guess he would be unique.
He would be special.
Well, you will just have to tell
him that he's special, just like
my parents did.
Yeah, there's this principle that
Evan talks about in his little
tweet thread that I often look
back on about the sort of Elm
philosophy, that it's better to
do it right than to do it right now.
And I think that maybe this is a
good refinement or addition or
complementary concept to that,
that it's like if you don't have
a use case, a motivating use
case, then wait.
Like don't preemptively make API designs.
And I think this is very much
true for application code too.
For products or for application code?
Yeah, for application code.
I think extracting APIs from
real world code is a great idea.
Like duplication is okay.
It's okay to allow there to be
some duplication.
It's okay to allow there to be
some code that doesn't quite
feel right.
You just have to be able to trust
yourself and your process:
when you've gathered enough information,
like, oh, this doesn't quite feel
right, but I don't have clarity on
what it needs to be yet.
And then another case comes up.
Oh, it doesn't feel right here either.
But now I have another data point
and now I'm starting to get
clarity on what direction it can
go, then move in that direction.
But don't anticipate it.
Yeah, the reason why you want to
wait is because if we implement it
preemptively, then there's a good
chance that we are doing it in a way
that will not support the use case
We're going to do it wrong.
We're going to do it slightly off
or simply this will never come up
because it's not a good use case.
It's not something that people will need.
Yes, exactly.
Premature optimization is the root
of all evil.
Or sorry, as you say,
premature abstraction is the root
of all evil.
So if you create an abstraction
preemptively, then what happens is
you create that abstraction.
And as you say, that abstraction
is for an imagined problem.
Then when the real problem comes up,
there's now a second place
where you need to
solve that problem.
Now you have more clarity on what
the abstraction should have been.
Now you need to back out that
abstraction that was the wrong
abstraction because it was
designed in the abstract and then
create a new abstraction.
So you've created an extra step
for yourself.
So you want to be in a state of readiness.
And simplicity is the best thing you can do to be ready for future design needs.
For elm-review, I don't have as many tests as I would like to.
But what I do have, and what serves as tests, are a lot of rules.
So when I started off writing elm-review, I wrote both the library and the CLI and the rules that I would like to see implemented.
So if I know that one of them is failing, then I probably have some issue that I should fix.
Surprisingly, that happens not very often.
So I don't feel the need to add more tests.
But in retrospect, I should have added more tests, obviously.
So I work a lot not by examples, but by use cases.
And I really tried running a lot
of rules.
One reason is because it would
give me a lot of feedback because
that would be the more important
and more instant feedback that I
would get to have without having
to talk to users or other people
who would write rules.
And what that also allowed me to do was to push the boundaries of what I wanted to support.
I was like, OK, I now have this rule that reports unused variables.
But would it be interesting if I could detect unused dependencies?
Oh, well, that would be interesting.
Let me try and see how that would work.
And then I would try to write the rule.
And I would notice: oh, well, it wouldn't work, because I would need to have the elm.json file as a string or as an AST.
And by doing this over and over again, I got to know what cases I wanted to support, and I got a lot of feedback.
And when you do that a lot, what also happens is that you feel the pain points a lot more.
So I went through a lot of versions of the elm-review API before it got to 1.0.
And a lot of them were painful to use.
So I had a version where you would say: these are my visitors; I have a default visitor, and you override the expression one, which by default does nothing; and you have another one for the import visitor.
Or then I went to a list of visitors.
And after a while, you notice this doesn't work out for elm-review.
So you try a few other things.
And ultimately, you come up with an API where you have the most benefits and the least pain points for the use case that you're working on.
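As a hedged sketch of where that evolution ended up (the rule name and visitor logic here are invented for illustration, but the builder-style calls reflect the elm-review API as I understand it), a rule schema starts empty and you attach only the visitors you need:

```elm
module NoDebug exposing (rule)

-- Sketch of elm-review's builder-style rule API: instead of overriding
-- a record of default visitors, you start from a schema and attach only
-- the visitors your rule actually needs.

import Elm.Syntax.Expression as Expression exposing (Expression)
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error, Rule)


rule : Rule
rule =
    Rule.newModuleRuleSchema "NoDebug" ()
        |> Rule.withSimpleExpressionVisitor expressionVisitor
        |> Rule.fromModuleRuleSchema


expressionVisitor : Node Expression -> List (Error {})
expressionVisitor node =
    case Node.value node of
        -- Report any reference into the Debug module
        Expression.FunctionOrValue [ "Debug" ] _ ->
            [ Rule.error
                { message = "Remove the use of `Debug` before shipping"
                , details = [ "Debug functions should not end up in production code." ]
                }
                (Node.range node)
            ]

        _ ->
            []
```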
So if you can write tests, that's great.
If you can use examples, that's good.
But if you can write actual proper applications of your library, that is the best.
Because then you are both a user
and a designer.
And you get to have all the
benefits of all positions.
They play different roles.
If you have real world use cases... so, examples can be really good for showing off an API.
Like, look at how you do this.
This is how you do this thing.
Maybe here's a simpler example to teach this concept, or to show off how the API makes this easier.
But it's some sort of motivating example that's demonstrating the API.
So that can be really good for asking: what is the simplest thing, and can we make the simplest thing simpler?
That can help drive the design.
That's the feedback loop that's telling you: hey, even in the simplest case, there are a lot of things that it seems like we could still simplify.
Real application code that's solving real problems is the best feedback.
Whether it's doing that well, or whether it's painful to do certain things in that context, that's the whole point.
And so that's the most real kind of feedback.
But then if you don't have a great test suite, when you get all that feedback about how things could be simpler, less confusing, or able to solve types of problems it currently can't, you can't safely change your code to act on it.
And I often think about this: if there's something that you want to make really good, invest in giving it a great feedback mechanism.
Like if you build a great feedback
mechanism for something, then
it's going to evolve over time to
give you even better feedback.
And the design is going to evolve
over time because you're going to
be able to more easily experiment
with it.
So I think that's like that's one
of the most important things.
So there's also... I really liked this Rich Hickey talk, Hammock Driven Development.
And Rich Hickey in that talk talks about how, when there's more work to be done, you're not in the hammock; you're working, you're figuring out how to solve the API problem.
So, like, for example, how do other libraries solve this problem?
Or, you know, collecting all the use cases you're looking at, or finding all the code examples and comparing them, finding common threads.
That's the API design process, where there's active work to do.
And in a way, your job there is to put useful information in front of your brain.
Because we don't design APIs; our extremely creative subconscious designs APIs, right?
But our sort of conscious minds can go do the hard work of gathering prior art.
How did other things design this?
Understanding the spec, understanding the problems, understanding how the current API designs fall short; gather all that information, put it in front of your subconscious brain.
And then when you've hit a wall
and there's nothing more you can
do there, all you can do is sit
in a hammock.
You can't work harder to come up
with API design breakthroughs.
You have to go sit in a hammock
at that point.
But if you haven't given your subconscious mind all these great bits of information to consider first, then it's not going to come up with anything useful.
So that's sort of something I think about often.
Now, that doesn't necessarily literally mean I'm going to go sit in a hammock, although I often do go for a walk if I'm noticing that I'm not getting clear ideas and I need to come up with some API concept.
But also, sometimes I'll just put things on the back burner.
You know, I'll think as hard as I can about an API, make sure I get all the information I need.
And then I'll work on a different thing and give that some time.
What I like to do is to write a lot of notes.
So: oh, I want to figure out a nice API for this, or I have this pain point, and I just throw all the thoughts I have into a note-taking application.
And while you're writing, you're thinking about the problem and you're getting new thoughts, or at least I am.
I'm going to say I instead of you now.
And at some point, yeah, I am out of ideas.
And I will just leave it at that.
And I will think about other things, other things I would like to add support for.
And I will wait a few days.
I will come back to the notes every so often and reread them.
And I'm like, no, no, actually this doesn't make sense.
Or, oh, you know what could be nice...
So I think it's the same idea.
Like, yeah, absolutely: write down all the thoughts you have.
You reread them sometimes, and your subconscious will figure things out for you at some point.
And this can take a while, but this can be very effective too.
So I noticed that I'm a lot more creative when I do this, because I need to go back to the notes sometimes.
When I go on Twitter too much and I'm doomscrolling, I feel like I'm not productive.
But instead, when I have some time and I go to the notes application, then I think about things a lot more and I make a lot more progress.
Yeah, I think there's an art to
assigning work to your background
processing mind.
But the thing is you can't
micromanage it.
You can't give it a deadline.
You have to be a good manager and
trust your background processing
and set it up for success as much
as possible and say, like, all
right, here's the problem I'm
trying to solve.
Here are the things I've tried.
Here are my goals.
And I trust you to figure
something out and I'm not going
to pressure you to do it quickly
because that's not how our
background processing works.
But you do need a good system for
taking notes, capturing these
concepts and organizing your
thoughts in a way that's going to
trigger these creative ideas in
your mind.
And yeah, for elm-pages 2.0, there were problems that I knew I was trying to solve.
I'd need to look back at the specific timeline.
But for over a year, I've been
trying to solve these problems,
if not longer.
For example, the problem of being able to have pages based on external data sources, like a content management system, a CMS, like a blog management system; or creating an arbitrary mapping of files to pages, right?
Like, I never really considered
Elm pages to be a proper 1.0.
I knew I wouldn't consider it a
proper 1.0 until that
functionality existed where you
could say, hey, here's an API and
take all this data from this API.
And these are the pages you're
going to have.
And I thought about so many
different ways to approach that.
I have so many notes.
I had so many interesting
conversations with users with,
you know, with Elm friends.
And I have sketches of API
design ideas.
And what I did is I sort of wrote out: here are the problems, here are some potential solutions, and here's why this solution doesn't quite work.
Here's what I want a solution to do.
The main challenge is that, since elm-pages hydrates a page, it's not enough to have all the data that the page depends on when you're pre-rendering the HTML for that page.
You then need to have just the data that was needed available to hydrate the Elm app.
Because maybe you have, like, a listing of all of your blog posts.
Well, in your Elm app, you need to have access to that data, because you need to be able to say: OK, display the blog posts with this sorting.
But if you want to sort them by most popular, you can click this thing, and it's a full Elm app that has that data.
So the data needs to be pulled in.
Yeah, it needs to be accessible to the Elm application once it's hydrated, and it needs to be optimized.
So you can't just pull in all the data that you ever used.
You have to have a way to narrow it down to just the data that you need, because if you pull down a giant API response, you can't just say: OK, well, let's shove that in the app bundle.
Now you just have all this data.
It doesn't work.
You need to strip it down to the minimum amount of data.
So I thought so hard about that problem, but I gave my background processing all the information it needed.
I looked at all these different... I mean, really, I watched a lot of Next.js tutorials.
I've never written a full Next.js app, but I know a lot about Next.js because I've watched so many tutorials and conference talks.
I've looked at how people thought about these new features as they were being designed, through conference talks and RFCs in their GitHub issues.
I read through issues about minutiae, and release notes.
I followed the same with SvelteKit.
And so that's the prior art I'm putting in front of my brain, so it can do some creative thinking about it.
I just looked at a bunch of different ideas, wrote down all the ways I could imagine doing it.
And finally, all those things sort of mixed together.
And, you know, Ryan's elm-spa really gave me a lot of great material there too.
And we had some conversations early on, when I did this sort of beta version of these template modules.
But the template modules, I tried
this design.
It was in beta.
I never actually merged it into
the main release.
It was always just there as a
beta, but that was an important
step too.
So I did this experimental beta for template modules, where you could have these different types of pages, and for each type of page, a blog post type of page, an author information type of page, you could have a specific sort of mini Elm Architecture type thing for those pages.
But it didn't quite come together.
Still, that was an important step, to have that experiment.
And finally, I realized that this abstraction of metadata and the rendered body was the thing that was holding me back.
And I rethought the model to be a pull model rather than a push model.
So instead of: hey, you need to define your metadata for your page types.
Because I had this idea of, what would an Elm static site generator be like if it was more Elm? It would be based around a custom type, which seems natural.
And then you have, like, okay, well, so what's the custom type?
Well, each page is just a custom type variant.
It's like, a blog post, and here's the information for that blog post.
But then what about your root route, just your landing page?
There's really no data associated with that.
It's just a route.
There's no metadata describing that page.
It's just, oh, this is what my root page looks like.
So the abstraction falls apart on these one-off pages.
And the more I pulled on that abstraction, the more it didn't work out elegantly.
And I finally had the realization... I'd been wanting to remove that abstraction for a while, but I didn't know how.
And I finally realized: if I do it with file-based routing, and then, based on the route parameters in your file-based router, you tell me what static data to go fetch for that page, then it solves all of the problems.
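A rough sketch of that pull model might look like this (the module and helper names here are hypothetical, loosely modeled on elm-pages 2.0's prerendering API as I understand it): the route's parameters drive everything, so the framework can ship only the data that one page needs.

```elm
module Page.Blog.Slug_ exposing (page)

-- Hypothetical sketch of a pull-model route module. The framework asks
-- which parameter sets exist, and then, for one set of parameters,
-- which static data that page needs.

import DataSource exposing (DataSource)


type alias RouteParams =
    { slug : String }


page =
    Page.prerender
        { -- which pages exist: one RouteParams per blog post,
          -- discovered from the file system or an external API
          pages = blogPostSlugs |> DataSource.map (List.map RouteParams)

        -- what static data this one page needs, derived from its
        -- parameters; only this data ships with the hydrated app
        , data = \routeParams -> blogPostData routeParams.slug
        , head = head
        }
```

Note how a one-off page like the landing page simply becomes a route module with no parameters and no forced metadata, which is where the push-model custom type fell apart.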
But it was so hard to get to that point.
I can describe the breakthrough, and the design that solved it, in a couple of sentences now.
But it took so long to get there.
And that was your preview of the
Elm pages 2.0 episode.
That's our teaser.
That's our teaser.
So designing elm-pages 2.0 has been so interesting, because there was that big insight.
And suddenly, with that design insight, it opened up all of these new possibilities in the design space.
And so I've designed more APIs in the last couple of months than in the rest of my coding career.
I built a glob API.
I built an API for accessing query parameters the way I want to.
I built an API for the routing, because Elm's URL parser only works for a single-level route.
It doesn't let you say slash star for a route.
There's no way to do something like that.
And I wanted a way to do that in my routing API.
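To illustrate the limitation, here is a sketch using elm/url's Url.Parser (the Route type is invented for the example): each combinator consumes exactly one path segment, and there is no catch-all combinator for the rest of a path.

```elm
import Url.Parser exposing ((</>), Parser, map, oneOf, s, string)


type Route
    = BlogPost String -- /blog/<one-segment-slug>


routeParser : Parser (Route -> a) a
routeParser =
    oneOf
        [ -- `string` matches exactly one segment, so this parses
          -- /blog/hello but not /blog/2021/07/hello
          map BlogPost (s "blog" </> string)

        -- There is no combinator like a "rest of the path" wildcard
        -- that would capture all remaining segments, which is the
        -- gap described above.
        ]
```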
So that's been a really
interesting experience.
So one of the things I've encountered along the way, building some of these APIs, like the glob API, for example... a glob is just a way of describing patterns that match certain files on the file system.
Kind of like a regex.
Yeah, regex-like: you can say something like **/*.txt, and that gives you all your nested .txt files.
It was really interesting designing that, because there's this existing concept of globs, right?
And I looked for prior art as
much as possible.
I looked in Haskell and Rust and Python and Unix, and all these different APIs; I looked at the documentation, looked at the terms they used and the APIs they exposed.
But for the most part, it's pretty much all the same.
There was no globbing API that had this concept of pulling out pieces of the glob.
And I knew I wanted that.
So I had to come up with some way to do that.
But it's interesting because, under the hood, it's actually executing the glob pattern using this Node.js package to match files, because you can't just do that in Elm.
But then I have Elm code that builds up a regex.
So the API simultaneously builds up a regex and a glob pattern.
It sends out the glob pattern to Node.js, gets the matching files, and you get the full file paths back.
Then, using Elm, it matches the regex it built up, gets the captured parts, and gives them to your applicative glob pipeline.
So when you use Glob and you capture a part of it... like we did in our screencast recently, you can do Glob.capture, and then you can capture the slug part in a pattern like **/*.md.
You can pull out that one piece of it.
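A hedged sketch of that applicative capture style, based on the elm-pages Glob module as I understand it (treat the exact names as approximate): you build the pattern piece by piece, marking which parts to ignore and which to capture into structured data.

```elm
import DataSource exposing (DataSource)
import DataSource.Glob as Glob

-- Matches files like content/blog/my-first-post.md and captures just
-- the "my-first-post" part as the slug, instead of handing you a raw
-- file path to re-parse yourself.
blogPostSlugs : DataSource (List String)
blogPostSlugs =
    Glob.succeed identity
        |> Glob.match (Glob.literal "content/blog/")
        |> Glob.capture Glob.wildcard
        |> Glob.match (Glob.literal ".md")
        |> Glob.toDataSource
```

Each `match` contributes to the pattern without producing data, while each `capture` both contributes to the pattern and feeds a value into the function given to `succeed`; that is the "simultaneously builds a glob and a matcher" trick described above.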
So anyway, one of the interesting realizations I had doing this: there's this concept of parse, don't validate.
Which we have talked about before.
Yeah, I mean, it's a brilliant concept, and something I think about often now.
So, you know, having an API where you can use that applicative pattern to pick out pieces of it into structured data, that's sort of a parse, don't validate kind of thing, right?
But under the hood... so I had an interesting conversation with Philip about this: you don't actually have that guarantee, because I'm building up a regex.
So under the hood, the code is actually, technically, not really parse, don't validate.
It's just a hacked-together thing.
But that's an interesting thing: at some level, you have to be doing something clever.
Maybe it's at like the machine
code level that something clever
is happening.
But you give these high-level guarantees and abstractions, and sometimes you just need API design to do that.
I mean, it's kind of like: does the type make impossible states impossible, or does the API make impossible states impossible?
And as long as it works, the user doesn't care, right?
I mean, sometimes you have to leverage, like in this case, I'm leveraging these properties of a glob pattern to say: OK, I know that if I limit the API to being able to describe these types of globs, then I know that I can build an equivalent regex that is guaranteed to match the data in the way that I need, to be able to give it to the user.
But behind the curtain, it's, you know, pay no attention to the man behind the curtain, kind of thing, right?
It's like you create these airtight abstractions.
But under the hood, you sometimes have to do some dirty hacks to do that.
And in that case, you just kind of test it like crazy.
But you want to create a quality abstraction, and sometimes you have to get a little bit clever to do that behind the scenes.
So, what I'm gathering from what you said before is that there is a need to get inspiration from other tools and other solutions, and to just have a broad knowledge of what is available to you to create your API, because when you design an API, there are so many choices that you can make.
If you look at Elm's HTML library, you have the list of attributes.
Those are all just functions that return an attribute, and you put them into a list.
That is one kind of API.
Then you have records.
Oh, like elm-ui's.
Maybe... I don't know if elm-ui has that.
Maybe the elm-ui button has a record with an onClick, because that is mandatory.
And same with images.
Images have the alt text and the URL, because it's trying to say: images need to have explicit alt text.
This is mandatory.
So you have the list.
You have the records.
You have the builder pattern, style things like that.
You have custom types, opaque types, non-opaque types.
You have functions that expose custom types under the hood.
And then you've got other tools
like code generation.
You've got compiler checks.
And you need to have a good overview of all that is possible, and how you can mix those together.
And then you see whether they
make sense for your use case.
You list the pain points, you
list the benefits.
And understand the use cases
intimately, build things with it,
get feedback from actual pain
points from users.
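To make a few of those options concrete, here is a hedged sketch of the same button expressed in three of the styles just listed (the elm/html and elm-ui calls reflect those libraries as I understand them; the builder-style API is entirely hypothetical):

```elm
import Element exposing (Element)
import Element.Input as Input
import Html exposing (Html)
import Html.Attributes exposing (class)
import Html.Events exposing (onClick)


type Msg
    = Save


-- 1. List of attributes (elm/html): every attribute is optional,
--    and you can pass as many or as few as you like.
listStyle : Html Msg
listStyle =
    Html.button [ onClick Save, class "primary" ] [ Html.text "Save" ]


-- 2. Record (elm-ui style): the record makes onPress and label
--    mandatory, so you cannot forget them.
recordStyle : Element Msg
recordStyle =
    Input.button []
        { onPress = Just Save
        , label = Element.text "Save"
        }


-- 3. Builder pattern (hypothetical API): required data up front,
--    optional settings chained afterwards.
--
--     button { label = "Save", onClick = Save }
--         |> withClass "primary"
--         |> toHtml
```

Each style trades off differently: lists are flexible but enforce nothing, records enforce required fields, and builders separate required data from optional configuration.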
What I'm getting from your one-year exploration is that you had most of it right, and then you added one or two new tools to your arsenal, or your toolkit, that just made the last few pain points go away.
And it opened up a new
architecture fundamentally.
And I went through a lot of
different design ideas too for,
you know, I went through a
design idea of instead of doing
a file based router, doing a sort
of URL parser style router.
And there are some interesting
ideas there, but there are some
real problems.
And I thought long and hard about that, and had some really interesting discussions about it.
And I just had to sort of pick one.
There were a couple of problems that came up with the URL-parser-like API that didn't quite line up.
But I just had to explore it and
pay attention to the design
And sometimes you need to let
something sit there for a while
like that template modules beta
that I had.
And I also had a sort of beta
for an Elm pages build step that
didn't use Webpack that did all
the pre rendering and everything
without Webpack.
And that was sitting around for a while, but it served its purpose. I mean, all of elm-radio.com and all of the elm-pages sites that I maintain were using it.
So I was able to get feedback
from my own experiences using it.
Some other people gave me
feedback and it served its
purpose to prove out a concept
and learn from it.
But sometimes an experiment takes
a while and you need to have
patience in API design.
But then you need to know when to be decisive and say: OK, I know the three options that I'm considering, which all have trade-offs.
I've laid out the pros and cons, and now I need to pick one.
I've gathered all the information I'm going to gather.
I've thought about it as much as I'm going to think about it.
And then you have to just go for something.
So we've talked a lot about API
design during this episode.
Do you know when we stop talking about it? When it's good.
I.e. never.
It's never good enough, right?
Could always improve.
Could always improve.
But I mean, when people stop talking about it.
Yeah, that's kind of a sign that it's a good API.
With elm-graphql... elm-graphql has definitely gotten to the point where, when people bring up new ideas or new things, like, hey, I wonder if this could be improved, or, this was a little awkward to use, this part of the API or something, it's harder and harder to come up with obvious wins.
And it's more like, OK, well,
we could try these different
approaches, but they would
actually make things worse in
this way and better in this way.
And not that the API is perfect, but it is getting to the point of diminishing returns in a lot of cases when there are new ideas.
It's not as much of a wow, this is just an obvious win in every way.
This is like so much nicer to
work with.
So I mean, it depends on how narrowly you define the problem you're solving, too.
If you're building a static analysis tool, then you can always optimize it more.
You could always squeeze a little more performance out of it.
Sure, but that is not API design anymore.
Yeah, or probably.
Well, elm-review is an API design space where you could, I mean, you could reinvent an AST to be custom-tailored to give very useful static analysis information.
So you could have, instead of identifiers, a type that describes what something actually is and what it points to, uniquely.
A reference, yeah.
Yeah, concrete references and
you could have type information
associated with certain parts of
the AST.
You could have a typed AST.
Don't get me started on that.
But at the end of the day, you do need to sort of pick the problem you're solving.
And I mean, there is a balance, isn't there, between being ambitious and choosing a narrow goal.
There's a push and pull there.
But I've noticed this is a common thread in many things: there needs to be a clear purpose.
There needs to be a clear message, a clear goal.
For the tool or library?
Yes, or for an internal API, for a module.
It needs to have a clear purpose,
a clear thing that it solves.
If it tries to be everything,
then it's not going to be
anything effectively.
Like, I've noticed this sometimes when I'm watching a movie or a TV show: when something really hits home, it often has some sort of clear theme or takeaway or message or lesson or idea it's exploring.
If it's too scattered, and there are too many lessons in a movie or too many themes that it's exploring, then it feels muddy and unsatisfying.
And I think the same thing is true in anything: in writing and storytelling, in APIs for external packages, and for internal API design.
It needs to have a clear story, a clear message.
So what you're saying is that
you don't like my input module
and my button module?
You don't like those?
They don't have clear purposes?
Is that what you're saying?
That could be a clear purpose.
It needs a raison d'être.
I like the say cheese better.
That one was actually very good.
Well, any parting API lessons
that we want to share here?
One that you bring up a lot, Jeroen, is this idea of taking responsibility for the user experience.
Oh, it's not mine.
It's not mine.
It's Richard's.
That's Richard's.
And Evan's got that in his Elm
philosophy as well.
Yeah, what Richard told me was: the author is responsible for the experience of the tool or library.
And that is something that I took to heart when I was working on elm-review, or elm-lint at the time, before it was released.
And that changed a lot of things for the tool, for the future of the tool.
And all the things that I tried to make impossible, including the things that I talked about before, about trying to remove all the unexpected behaviors...
If I didn't get that advice, I
wouldn't have done it.
I would have released it.
It would have worked, but it would not have worked in some cases, or not as expected.
So, yeah, when you're working on API design, do what you think will be the best for your users, and the results will show, and people's responses will show it, I think.
Yeah, there's a classic scene in
the show Silicon Valley where
they're doing this user feedback
session behind a one way mirror
or something.
These users are trying out the
product and giving feedback.
And the founder who created the tool is behind the mirror looking at them, yelling: no, you're using it wrong!
And he storms into the room and shows everybody how to use it properly.
haven't actually been talking
about API design and we've been
talking about problem solving?
And I think, really, that's it: API design is a narrow thing.
Problem solving is this broad
thing that requires a lot of
different skills that you bring
to bear and taking a lot of
things into consideration and
seeing the big picture.
And that's really what it comes down to: we're solving problems.
Using APIs.
And tools, which you can see
kind of as an API as well.
But don't forget that you're
solving real problems.
Because an API in search of a problem...
I mean, you can solve problems that no one has.
You can create a foobar generator that generates foobars.
Just find someone on the street, give them $5 just to say the words: I have a problem where I need a foobar generator. And then design an API for it.
There you go.
Someone has a problem, you
become a problem solver.
And you lost $5.
Well, I think that won't be the
last you'll be hearing from us
about API design, but hopefully
it was interesting.
Let us know what you thought.
If there's something else you want to hear about, submit a question, and let a friend know about Elm Radio.
Review us on Apple Podcasts.
And Jeroen, until next time.