JSON Decoders

We discuss the basics of JSON Decoders, benefits compared to JSON in JavaScript, best practices, and how to get started learning.
April 27, 2020


Ports and Flags

Here's an Ellie example that shows the error you get when the implicit type your flags decode to is incorrect.

Sorry Dillon... Jeroen won the trivia challenge this time 😉 It turns out that ports will throw exceptions when they are given a bad value from JavaScript, but it doesn't bring down your Elm app. Elm continues to run with an unhandled exception. Here's an Ellie example.

Flags and ports will never throw a runtime exception in your Elm app if you always use Json.Decode.Value and Json.Encode.Value for them and handle the unexpected cases. Flags and ports are the one place where Elm lets you make unsafe assumptions about JSON data.
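A minimal sketch of that safe pattern, with a hypothetical userName flag (the field name and fallback value are illustrative, not from the episode):

```elm
import Json.Decode as Decode exposing (Value)

type alias Model =
    { userName : String }

-- Annotating flags as a raw Value means Elm makes no implicit
-- (and potentially exception-throwing) assumptions about them.
init : Value -> ( Model, Cmd msg )
init flags =
    case Decode.decodeValue (Decode.field "userName" Decode.string) flags of
        Ok name ->
            ( { userName = name }, Cmd.none )

        Err _ ->
            -- Handle bad data explicitly instead of failing at startup
            ( { userName = "anonymous" }, Cmd.none )
```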

Benefits of Elm's Approach to JSON

  • Bad data won't end up deep in your application logic where it's hard to debug (and discover in the first place)
  • Decouples serialization format from your Elm data types
  • You can do data transformations locally as you build up your decoder, rather than passing your giant data structure through a single transform function
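For instance, the last point might look like this sketch (the field name and conversion are hypothetical):

```elm
import Json.Decode as Decode exposing (Decoder)

-- Hypothetical payload: {"priceInCents": 1999}
-- The cents-to-float transformation happens locally, inside the
-- decoder, instead of in a separate pass over the decoded data.
price : Decoder Float
price =
    Decode.field "priceInCents" Decode.int
        |> Decode.map (\cents -> toFloat cents / 100)
```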

Decoding Into Ideal Types

A note about Decode.maybe: it can be unsafe to use this function because it can cover up failures you may not have intended to ignore. For example, if an API returns a float, we would suddenly get Nothing back, but we probably want a decoding failure here:

```elm
import Json.Decode as Decode

"""{"temperatureInF": 86}"""
    |> Decode.decodeString (Decode.maybe (Decode.field "temperatureInF" Decode.int))
--> Ok (Just 86)

"""{"temperatureInF": 86.14}"""
    |> Decode.decodeString (Decode.maybe (Decode.field "temperatureInF" Decode.int))
--> Ok Nothing
```

Json.Decode.Extra.optionalNullableField might have more intuitive and desirable behavior for these cases.
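A sketch of how that helper can be used, assuming the elm-community/json-extra package:

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Extra as Extra

-- Nothing if the field is absent or null, Just for a valid int,
-- and a decoding failure (rather than Nothing) for a wrong type.
temperature : Decoder (Maybe Int)
temperature =
    Extra.optionalNullableField "temperatureInF" Decode.int
```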

Thank you to lydell for the tip! See the discussion in this discourse thread.

Learning Resources

Joël Quenneville’s Blog Posts About JSON Decoders

Guaranteeing that JSON is valid before runtime

Autogenerating JSON decoders

Organizing your decoders

Getting Started

  • Understand
  • Understand record type aliases: the constructor function that comes from defining type alias Album = { ... }

Submit your question to Elm Radio!


Hello, Jeroen.
How's it going?
How are you doing today?
I'm doing pretty well.
Holed up inside, but doing pretty well.
Yep, same here.
Well, why don't we keep ourselves occupied by talking about some Elm?
What are we talking about today?
Oh, Elm.
I like that.
We're going to talk about JSON decoders today.
Ah, that's a good one.
Yeah, JSON decoders.
So I feel like we're talking about JSON decoders, but this is really like a broader pattern.
So it's extra important to talk about.
Yeah, we should probably talk about what JSON decoders are and what they are for.
Yeah, maybe like imagine that somebody is like brand new to Elm even like what the heck
is a JSON decoder?
First of all, welcome.
So Elm gives you a lot of guarantees about how it will work.
It will produce no runtime errors and all the functions will get the types they expect
and the arguments they expect.
And that is very good.
That's something that we love about Elm.
But what happens when you get data from sources that you do not control?
Data from outside the Elm code.
So for instance, data that comes from HTTP or data that comes from ports or from HTML
Those are pieces of data that Elm cannot control, and it cannot control their types.
So what you do is you validate the JSON that is coming in, if it's JSON, but we're going
to talk about JSON decoders.
You validate the fields that it has and the types that it has.
So if they match, you will get a successful decoding result of the type that you provided.
And otherwise you will get an error saying, hey, this field was not what I expected or
this field was missing.
So from that point on, you do have the data that you want to work with.
So if you wanted a record with this and that field with these types, then now you have it.
You have successfully told Elm: this is the data that I'm going to work with, and the data that you're going to have to play with from here on.
So that is basically the idea about JSON decoders from what I understand them.
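What Jeroen describes might look like this sketch (the field names are illustrative):

```elm
import Json.Decode as Decode exposing (Decoder)

type alias User =
    { name : String
    , age : Int
    }

-- Validates that the JSON has these fields with these types.
userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "name" Decode.string)
        (Decode.field "age" Decode.int)

-- Decode.decodeString userDecoder """{"name": "Jeroen", "age": 30}"""
--     gives Ok { name = "Jeroen", age = 30 }; a missing or mistyped
--     field gives an Err describing exactly what was wrong.
```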
Yes, and now, okay, so this is interesting because I'm realizing that from the description
you gave, this is basically the more specific instance of the general concept that we described
in our opaque types episode.
Because in our opaque types episode, we described this sort of ability to do runtime type checking
by conditionally returning a type when a validation passes, right?
And that's exactly what you just described.
And it turns out that JSON decoders are just a special case of that pattern.
And the sort of opaque type patterns we talked about are a more general tool that you can
use to build that sort of thing yourself.
Yeah, that's a good point.
So one other point I wanted to make is that just to contrast this against what you would
be doing in other languages, let's talk about what that would look like doing this in JavaScript,
So you would say, you know, you have some JSON response from an HTTP response.
And you say that it's JSON, so it's decoded as JSON.
And then you do, you know,
And then maybe it's undefined because you're reaching in and grabbing an object key for
which there's no value.
Or maybe something's blowing up because you're dereferencing something that was that was
not there.
Or, you know, so basically, you'd say that it's possible to have a crash there or undefined behavior.
Yeah, it's possible to have a crash or worse.
The worst case is that you don't have a crash, and you have some bad or unexpected thing happening.
Maybe it's a crash, maybe it's a weird bug.
It's in some ways even harder to debug what's going on if it's not a crash, if it's just
weird behavior, if it's just something that some undefined value gets passed into someplace,
it gets concatenated with some string, which makes it the empty string, and then that goes
in some other place.
And then it tries to look that up as the key for some other HTTP request.
And then the HTTP request 404s and you're like, what?
Yeah, so basically, you will take much longer to find out that there is a problem.
So it's lengthening that feedback loop where you're taking longer to discover what's going on.
And, you know, if you just think about it from having confidence in your code point
of view, how can you be confident that your code is, you know, it's one type of confidence
about your code that you even just grabbed data that actually exists and that like, I
don't know, if you have an API and you're saying, hmm, this would be kind of nice if
we renamed this because we're calling it this thing, but we've kind of discovered some things
about it and we think this would be a better name for this field.
How confident are you going to be doing that if you're grabbing that data from JavaScript?
Not very much.
You're probably just not going to bother doing it, right?
If the find and replace is okay, if there are not too many instances, then let's try it.
Maybe try it, but then you're going to try it and then what are you going to do?
You're going to go through the app.
You're not just going to run the part that makes the HTTP request, but you're going to
run every part that you think uses that payload from that HTTP request to run through every
single code path with every single permutation of all the possible conditions to make sure
you exercise all possible code.
Sounds like fun.
Does that sound fun to you?
How else do you want to pass your afternoon?
This is maybe a good thing if you're new to Elm and listening to this and you're like,
JSON decoders, why can't I just use my data?
That's why.
That's why.
It feels like a pain and it is a hard part when you're starting out, but once you're
in the habit of doing it, I think it's fine.
It really brings a lot of guarantees about the code that you're working with.
You see the value after a while.
Let's put it that way.
It's the kind of thing.
I hear this over and over again from people that they thought it was a nuisance when they
first started and over time they grew to really love it and when they go to other languages,
they even try to replicate it.
There are some libraries out there that try to bring this pattern to TypeScript, for example.
How am I supposed to trust the HTTP response?
I need to validate it, and that's pretty much what we do with JSON decoders, except that it's enforced by the language.
There's no way to tell Elm, just trust me, this is this type and just do your best and
blow up if I'm wrong about this.
Although there is actually one place where you can do that, but it's not a recommended practice: with a port.
You can describe, you got it.
Ports and flags, which you could say is like a special case of a port, but you can just
annotate your port or your flags as saying this takes a record with fields of these types
and Elm will just blow up if those assumptions are invalid.
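The contrast between implicit and explicit decoding on a port might be sketched like this (the port names and payload are hypothetical):

```elm
port module Example exposing (..)

import Json.Decode as Decode exposing (Value)

-- Implicit: Elm checks this shape itself; a mismatched value from
-- JavaScript produces an error you cannot respond to in Elm code.
port implicitUser : ({ name : String } -> msg) -> Sub msg

-- Explicit: receive a raw Value and decode it yourself, so bad
-- data becomes an Err you can handle in your update function.
port explicitUser : (Value -> msg) -> Sub msg

type Msg
    = GotUser (Result Decode.Error String)

subscriptions : model -> Sub Msg
subscriptions _ =
    explicitUser
        (Decode.decodeValue (Decode.field "name" Decode.string) >> GotUser)
```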
And then you will not even get the message.
So that's one difference with implicit decoding and explicit decoding.
Explicit decoding, you will have to handle the error.
With implicit decoding, you will get a runtime error that will be shown in the console, but
you will not notice in the Elm code.
You will not be able to respond to the error.
And when you say runtime error, we should clarify your app dies completely, unrecoverably.
For flags, yes, but not for ports.
For flags.
I'm pretty sure it doesn't crash for ports.
Otherwise leave that in the comments.
Let's look that up and leave it in the comments.
What the result is there.
Yeah, that's a good question.
It's a good one to ponder.
You don't trust me.
You don't trust me.
I trust but verify.
The thing is that I'm actually doubting myself, so I'm not even trusting myself.
So I'm going to have to doubt and verify and then trust myself again.
Like, yes, you were right.
Or I'm just going to listen to this and say, yeah, no, he was wrong all along.
What I'm saying is, like, we should all be a little more like Elm, maybe. Or maybe it's good to be trusting people, but just be like, hey, listen, I trust you, but let's just follow this process so we don't even have to question trusting or not.
Make sure.
Yeah, I agree.
So we kind of talked about this benefit of JSON decoders that it validates your data
and you know once you have this typed data, you can trust it and that's great.
But there's another benefit and I think this is maybe a more subtle one, but it's one that
as you get experience with Elm, I think people really grow to love this part of JSON decoders
as well, which is that it allows you to decouple the way that you represent your data from
the way that you serialize and deserialize your data.
And so, for example, what you find often in JavaScript code is because JSON is this first
class citizen, you have JavaScript object notation and you're in JavaScript.
So hey, I've got this JavaScript object, I can just pull data off of it, I can just reference
these fields and that's quite convenient on the one hand.
But on the other hand, maybe you have a format that isn't the best way to describe something
or maybe you want to have certain data structures that say make impossible states impossible
or better match the rest of your code base.
So let's talk a little bit about how that looks in JSON decoders.
What are the practices that allow you to decouple the serialization format from the representation
of that data in your Elm code?
Basically when you decode something, when you decode JSON, you'd say, hey, I want
this field with this name and this type.
And then what you do is you apply a function to the result of that, which maps from the raw value into something that you control.
So what you usually do is we create a new type, so a type alias, which represents what
you want, but that could have different names or you create opaque types over which you
have much more control and which can give you a lot of guarantees.
And that's it.
So let's maybe get into a concrete example.
So let's say that you've got some way of representing a date time on the server and you have some
way of serializing that into JSON so you can send it from the server to the client.
And then in your Elm code, you have some way to represent that date time.
Well, all three of those may be different.
And so if you're using an ISO 8601 string to represent it in the JSON payload that comes
through to the client, what I see happening in a lot of JavaScript applications is they'll
just store that as the value that they pass around all over the place.
So you're passing around these strings, but what you really want to do is you want to
wrap that into a nice data type that represents the date time as soon as you get the response.
And so like if you're writing a JSON decoder in Elm, let's say you've got some field createdAt and it's an ISO 8601 date time.
Well, you can use a function to parse that from a string and then map that string into an Elm Time.Posix value.
Like Richard Feldman has a library for doing just that.
In fact, he even exposes a decoder that first decodes it as a string and then it maps that
string into an Elm time by running a parser that will extract out that ISO 8601 time format
into something that Elm can interpret as a POSIX time.
And so now the really cool thing is there's not a single point in your Elm code where
you have access to that string value.
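The decoder Dillon mentions might be used like this sketch, assuming rtfeldman/elm-iso8601-date-parser (the field name is illustrative):

```elm
import Iso8601
import Json.Decode as Decode exposing (Decoder)
import Time

-- Iso8601.decoder reads the raw string and maps it straight to a
-- Time.Posix, so the ISO 8601 string never escapes the decoder.
createdAt : Decoder Time.Posix
createdAt =
    Decode.field "createdAt" Iso8601.decoder
```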
Yeah, that's basically the point of: don't model your model based on what you get from
the user or from the HTTP, from the server, but model it based on what you will do with it.
So if the only thing that you're going to do with time is get the number of the year,
the year number, then just get that data.
If you need to display that number as a string, you can already store it as a string.
If you need to do operations on it, then store it as an integer and then whatever fits your
use case the best.
Do not store it as a string, as a POSIX, if it does not make sense to what you're going
to do with it afterwards.
That's interesting.
Yeah, I think there's a certain mindset that you get into.
So for me, test driven development is an example of this mindset.
There's a notion called programming by intention.
And all that term means is that you first sort of express how you want to use something
and then you reverse engineer that to figure out what the implementation looks like.
So you're thinking about how you want to consume something before you actually write the implementation,
which means the implementation details don't guide the high level data modeling or API.
Yeah, so this is kind of the same thing as API driven.
That's a good way to think of it.
Yeah, I mean, it's when you're doing data modeling, you know, I mean, you could even
start by just writing out your data type.
This is what I want to receive from the server.
What would my ideal data be?
This is actually something you can do with JSON decoders.
You can sort of incrementally build something out and start consuming hard coded data.
So a really good trick in the toolkit of JSON decoders is a function called Json.Decode.succeed.
And what that function does, it's extremely useful for a number of reasons.
But one of the most useful use cases for it is to stub out data and just pass in hard
coded data.
So you could say, I need to get the year that this album was released.
There's like some music recording, and I need the year it was released.
And when I look at the Spotify API documentation, it says something about an ISO 8601 date time.
But all I know is I only care about the year something was released because I'm making
an app that takes this Spotify response.
And it lists out a listing of albums for all of these tracks and what year they were released
and orders them by year.
And it doesn't display the date or the month it was released.
It only shows the year.
And that's all I care about.
So ideally, my data structure would just be type alias album is a record.
And it's got a field called release year, which is an int.
And it's got a field called name, which is the album name, something like that.
Okay, let's start with that.
We've got those two fields.
Now we can write a JSON decoder that decodes data into an album.
So what does that look like?
Well, it's quite easy.
You can say JSON dot decode dot succeed 1975.
And now that's a JSON decoder that decodes into an int.
Well now I want to build that up into a JSON decoder that gives me an album, not just an int.
So how do you do that?
Well, you say, now we need to decode it into an object, into a record.
So we can do that by taking the type alias we defined. We can say Json.Decode.map2 and give it our Album constructor, the record constructor defined by our type alias. We use map2 because we're passing in two arguments.
And now we give it our two decoders: a hard-coded Json.Decode.succeed 1975 and a Json.Decode.succeed with some album name.
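Written out, the walkthrough above might look roughly like this (the year and name are the hard-coded placeholders from the discussion):

```elm
import Json.Decode as Decode exposing (Decoder)

type alias Album =
    { releaseYear : Int
    , name : String
    }

-- A stubbed-out decoder: it always succeeds with hard-coded data,
-- so you can build and consume the Album type before wiring up
-- the real field decoders.
albumDecoder : Decoder Album
albumDecoder =
    Decode.map2 Album
        (Decode.succeed 1975)
        (Decode.succeed "Some Album Name")
```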
So that's a lot of code for a podcast, but the point is that... actually, I kind of did something like this in my Elm Europe talk, Incremental Type Driven Development, illustrating this technique of succeeding with hard-coded values and consuming them as quickly as possible.
So you can sort of drive the design of your API by prioritizing things that help you define
the API rather than the implementation.
So you defer the implementation because you want to discover what the ideal API looks
like as fast as possible.
Yeah, I was in the front row for that talk.
I remember you apologizing because you were taking too much time.
That was tough to squeeze into that time slot.
Live coding is hard, especially when you have to keep it short.
Yeah, try not to.
So yeah, that really helps out working with JSON data because you can just do it incrementally.
So as long as you can work with raw data and just display the rest of your album, for instance,
then you can just do that.
And then whenever you will try to make this dynamic, then you can start implementing the
decoder parts.
There was one thing that you mentioned a few minutes ago, which is about the ISO 8601.
Is that it?
Yes, that's the number.
You got it.
I was so impressed when you said it just like that.
I typed it so many times.
It's ingrained in my brain.
So when you use that ISO 8601 something, then you decode it to a string and then you parse it.
Parsing is kind of also the same idea as decoding, isn't it?
Yeah, exactly.
Decoding is kind of a special case of that.
Well it's basically decoding and parsing is the same thing, but you do not work with the
same raw data.
You have the same thing with other formats.
You could have YAML decoders.
I'm pretty sure there is a library with a YAML decoder actually from Thierry.
You could have a JSON decoder that you point at a GitHub repository and it will decode
an Elm repository and give you information about that project.
And you could say, decode primary programming language as a string.
And then you could do Json.Decode.andThen, take that string, and you could do Json.Decode.fail
if the string is not Elm.
So you want to reject any non Elm project, right?
So that's the thing about JSON decoding is it's actually nothing magical.
It's just these building blocks of Json.Decode.succeed and Json.Decode.fail.
It allows you to build a decoder that always succeeds or always fails.
If it always succeeds, it succeeds with the hard coded value you give it.
If it always fails, it always fails with the message you give it.
And so you can say, if the primary programming language does not equal Elm, Json.Decode.fail.
Or if it is Elm, Json.Decode.succeed with some value.
Yeah, the repository name or just something that says Elm rocks.
Yeah, exactly.
So that gives you the building blocks that you need to build something like an ISO 8601
decoder because you can just do whatever runtime checks you want.
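The GitHub-repository example might be sketched like this (the field name and messages are hypothetical):

```elm
import Json.Decode as Decode exposing (Decoder)

-- Reject any repository whose primary language isn't Elm.
elmRepoDecoder : Decoder String
elmRepoDecoder =
    Decode.field "primaryProgrammingLanguage" Decode.string
        |> Decode.andThen
            (\language ->
                if language == "Elm" then
                    Decode.succeed "Elm rocks"

                else
                    Decode.fail ("Not an Elm project: " ++ language)
            )
```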
That library happens to be built with an Elm parser, but that's just an implementation
detail if you built it with a regular expression or whatever, right?
It's just code.
Yeah, decoder is just a fancy name for saying a validator.
There are other things that we probably want to validate in our code, like forms.
I think it was Alexis King who made a post called Parse, Don't Validate, or in the terms that we use now, decode, don't validate.
So it's basically don't have if conditions that say if this field is this, then go ahead
and assume that everything is right, but instead decode form, like get a proper error.
Like in JavaScript, if you had if object has field something, if object has field name,
then take the name and pass it to this thing.
Well, no, you don't want to do that because the problem is if you just do an if condition
like that, then you haven't proven your work to the compiler.
And so the way I interpret this parse don't validate article is the idea is that you want
to be as you're proving certain conditions about your data, you want to show your work
so the compiler can keep track of that and keep refining down more constraints and more
guarantees about the data in a particular point.
And basically you validate each field and at the end you will have a whole form validator
in the form of a decoder.
A form validator in the form of a form decoder.
Yeah, it's like a decode style validator.
And there are a lot of different places that this sort of decoding pattern shows up.
Like I mean, for example, I built a package called Elm CLI options parser.
And what that library does is you can wire it into an Elm worker, a headless Elm application wired in with Node.js, and then build a command line tool that parses command line flags.
So if you say, you know, dash dash help or dash dash output to tell it where to output
your code or whatever.
So like I use that in the Elm GraphQL command line tool and other things.
Yeah, I wish I could have used it in Elm review, but I can't.
Oh, that's sad.
Because I'm not building an Elm application. I don't have an Elm application.
I see.
That might be a topic for another conversation because we'll see.
Maybe I can convince you to use it.
But the really nice thing that you get when you use that is that you have a guarantee.
You do this sort of decoder style pipeline where you say, okay, I have a command line
application and I have a flag called dash dash output.
And I expect it to be in this format and you can, you know, do whatever validations you
need to.
You can say, I expect it to be, you know, an int and I expect this thing to be an int.
I expect this string to be either this value or this value.
As you go and you build that up, you're basically describing the possible ways to invoke your
command line application, right?
And because you've described that, you've not only described it to my Elm CLI options
parser tool, but you've also described it to the Elm compiler, which means the Elm compiler
knows, Oh, based on all these chains that you've done, you added this field of this
type, you added this field of this type, you mapped it into this data structure, this record.
The Elm compiler knows what data type you're going to end up with.
If it succeeded, it knows that it's going to be a result of a given type, a string result
or whatever if it fails.
And my Elm CLI options parser tool can just guarantee you that you're either (a) going to get the data you expected, and you can just run the program for them, or (b) the user gave an invalid set of options based on your definition, and the user is going to see an error message telling them what went wrong, which is pretty cool.
What about when you don't want the code to fail?
Like it is fine if I don't have this flag or if this value is not well formed, but what
do you do then?
Well, you can define certain fields as optional and the baseline in Elm CLI options parser
is just a string.
I mean, ultimately everything in a terminal is just a string.
And so it's up to you to make additional guarantees or to use specific helpers that say, I expect
this to be this type or I expect this to be one of these discrete values.
And it can actually present documentation and it can give you better documentation if
you use these specific helpers.
But you can just say, give me the raw string input and I don't care what it is, just give
me the raw thing.
It's the same thing with JSON decoders.
So it's not necessarily black and white.
You don't have to have the whole thing crash or fail.
If one thing is missing, you can say, Hey, this is optional or this is okay if it doesn't
work, but then just use this default value for instance.
So there's this thing that often I still have to look up the documentation to remember this.
You can say Decode.maybe Decode.string. Decode.maybe, space, Decode.string, I think.
That gives you something that the string may or may not be null, right?
Or you can say, so you could decode a field that is a maybe string, but that field then
needs to be there.
So it would need to be, in the JSON, it would need to be name, colon, null.
Or you could say this field is optional.
So if this field isn't there, that's fine.
Just give me nothing.
Then you'd have to say maybe and then the field decoder with the string.
With the decode.string.
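The distinction that trips people up here might be sketched like this:

```elm
import Json.Decode as Decode exposing (Decoder)

-- Decode.maybe around the whole field: Nothing if the field is
-- missing, null, or even the wrong type.
lenientName : Decoder (Maybe String)
lenientName =
    Decode.maybe (Decode.field "name" Decode.string)

-- Decode.nullable inside the field: the field must be present,
-- but its value is allowed to be null (e.g. "name": null).
requiredButNullableName : Decoder (Maybe String)
requiredButNullableName =
    Decode.field "name" (Decode.nullable Decode.string)
```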
So that one kind of trips me up, but it's the kind of thing.
So it can feel overwhelming at first.
There's a lot to take in and it really hurts your brain.
But the thing is you want to start at the smaller level.
If you try to think about it as a whole, then it's too overwhelming.
But if you start by saying, hey, let's say I have a JSON response that comes back.
That's just a string, right?
That's JSON.
A string is JSON or an int is JSON.
How do I decode that?
Well you say JSON.decode.string and that will take whatever string, hello world in your
JSON response and it will give you hello world.
Or if you pass it a float, then it will say error.
I expected a string.
So start with that.
You know, you start with that and then you say, okay, well can I decode a string and
an int in a JSON object with these field names?
You know, you start with that and you build it up one step at a time.
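The bottom-up progression described here might look like:

```elm
import Json.Decode as Decode

-- Step 1: a bare JSON string.
-- Decode.decodeString Decode.string "\"hello world\""
--     gives Ok "hello world"; a float in its place gives an Err.

-- Step 2: the same string, now inside an object field.
-- Decode.decodeString (Decode.field "greeting" Decode.string)
--     """{"greeting": "hello world"}"""
--     gives Ok "hello world"
```

From there you combine fields with map2, map3, and so on, one small step at a time.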
And actually Brian Hicks has a book called the JSON Survival Kit.
It's a pretty quick read.
I think it's a really good resource if you're getting started with JSON decoders to learn
about this.
And he talks about this process of building it up piece by piece to get more complex ones,
which I think is the way to learn it.
I haven't gone through that one.
I probably should.
Joël Quenneville also has a few blog posts on the subject that we will link to, because I know that he links to them a lot on Slack and they seem to help people a lot.
It's hard to get started, but just, I think that trying to write JSON decoders top down
is kind of overwhelming.
So start bottom up, start with the smallest piece of your JSON decoder and get that working.
And once you have that working, you can even write a unit test for that if you want, or
you can just try working with some smaller piece of data and then build up from there.
There's one thing that I'm not very fond of with JSON decoders, which is that my Elm application won't crash.
It will work exactly as I planned it to.
And that's great because every HTTPS response that I get will be validated.
The thing is, if it's not valid, I will not necessarily know it from a developer's point
of view.
I will have to handle it at runtime and I have no guarantees that the decoding will
work or yeah, that's my biggest issue that I don't know if the decoding will work when
I compile.
What do you do for that?
Well, now that you put it that way, I'm kind of realizing that the main selling point of
two projects I've built is exactly that.
The first one is Elm GraphQL and Elm GraphQL gives you a way to, if you're not familiar
with GraphQL, it's basically a schema for your API.
So it's a schema that describes all of the possible API calls you can get.
And the graph part is that you can sort of traverse these relationships between different
parts of the API.
You can go to a user and you can see their friends or their likes and traverse to these
different objects.
But the part that we're most interested in as Elm developers, that part's great too,
but it gives you a schema.
So you have types describing all the different values you can get from the API.
And so you can build up a response and it's known before you run your query what data
you're going to get back.
So Elm GraphQL generates code that allows you to consume your specific GraphQL API in
a way where you know that the types are going to be correct based on the schema of your
API that GraphQL gives you.
So it's basically taking that knowledge that your GraphQL schema has and bringing it down
so the Elm compiler also has that knowledge.
If you want to get a better picture of what that looks like exactly, I gave a talk at
Elm Conf a couple of years back called Types Without Borders.
That's worth checking out if you want to get an overview of that concept.
So that's one way that you can do that.
The other one, which we did our first episode on Elm Pages and we talked about static HTTP,
actually that's one of the things that I find most exciting about the Elm Pages project
is that it gives you this tool static HTTP, which allows you to run your decoder at build
time and get this data from HTTP APIs.
And if any of your decoders fail, your users don't see it.
You see it in your build pipeline.
Your build tool will fail, your CI build will fail, but your users will not see a failure
and that's guaranteed, which is pretty exciting to reduce the possible failures that could
happen at runtime.
But I'm not sure, Jeroen, did you have any other thoughts on that topic of how you can...
So what happens when you don't have a schema?
That's where things are getting tricky.
Because you don't have a schema to generate something from or to compare against.
So you basically have to kind of unit test and hope the server will match what you expect,
I guess.
And that is where you hit the limits of Elm, I guess.
There's just, conceptually, nothing Elm can do there.
Tools like Elm GraphQL can help.
There are similar things for Haskell, like Haskell Servant, is that what it's called?
That allows you to take the data types you're returning on some Haskell server and it generates
types and decoders for your Elm code.
Yeah, I don't remember the name, but there were some decoding generators and type generators.
I think it makes a lot of sense to...
I mean, that's why I named the talk I gave Types Without Borders, because I think it
does make sense to sort of connect across these different boundaries of these different
languages and runtimes and say like, okay, we're in different runtimes, but that doesn't
mean we can't share information about what type of data you should expect.
I might be going on a tangent here, but you're tying yourself to the response of the backend.
So if the backend changes, the decoders will change, but the old versions of your client side will not work anymore.
So I could easily do a whole episode about this specific topic even, not even just Elm
GraphQL, but the short...
I'll give you the short version, which is that there are two pieces here.
One piece is that there are some practices around building GraphQL APIs where people
try to build them in a nonbreaking way.
This is just a sort of cultural value in the GraphQL community that you try to have nonbreaking
API changes.
So that means that you could, instead of removing a field, you would add a new one and maybe...
Well, see, I have some disagreements with certain parts of this because part of what
that implies is that you make every field nullable, which isn't great.
And so they say, okay, well it's nullable.
So if you stop returning certain fields or if something fails, then your whole assumptions
don't fail because you're treating everything as nullable.
That doesn't feel like a great solution.
But that said, there are certain ways that, okay, if you have a new required argument,
that's a breaking API change.
And so if you're going to do that, then maybe you make a new field and that field has that
required argument.
And then you deprecate the old version.
And there is a way in your GraphQL schema to deprecate certain fields.
So that's one thing.
And you can sort of have a path.
Maybe if you decide to make a breaking API change, maybe you at least give a nice transition
path where you have a deprecation period, and then if you go to a new version or do
a breaking change, you give some time for clients to get updated or whatever.
For REST endpoints, you could duplicate the endpoints.
So you could do it that way.
You don't have a breaking change, just new requirements.
And the second piece, and I know some people who are doing this is you can take a snapshot
of your GraphQL schema.
And if you ever have a breaking change, first of all, there's tooling that can tell you
if you're making a breaking change to your GraphQL schema, which is cool.
Oh, cool.
It could warn you.
And then what you could do is you could kind of take a snapshot every time that happens.
You could increment a number every time there's a breaking change, and the client knows
which number it's on and checks, before it makes a request, whether it's on an outdated
version. And then it has to reload itself before it continues if there's been a breaking change.
Mario Rogic had a talk kind of like that.
On Evergreen.
I really like his thoughts on that.
That's a really cool concept.
We'll link to that.
Yes, definitely.
Another thing we haven't touched on, but we've talked about generating types confidently
with tools like Elm GraphQL or Haskell Servant.
What about auto generating JSON decoders?
There are a lot of tools out there for copy pasting a JSON payload into a window and generating
some Elm decoders or there are some editors.
IntelliJ Elm has some tooling that lets you generate a decoder based on a type annotation,
for example.
What do you think about those?
Are they worth using?
Do you use them?
I don't use them because I usually don't have access to those.
I don't have the need for it.
I think they can be a very good starting point.
The thing is, what do you decode into?
You decode into something that looks like what you have in the backend.
So I think they're useful, but only if you transform it afterwards into something that
is made for the client.
So, what we discussed earlier in the episode: you want something that you will use, not
something that looks like what the backend returns.
So it's like coupling you to the serialization format and it's getting you thinking about
the serialization format first rather than your ideal data structure.
And you don't want that.
You can map over what you got at decode time.
So add another layer of decoding and that will be fine in my book.
But you don't want to let that coupling impact the rest of your application.
So I think it's a very good starting point if you have trouble making them yourself,
but I don't use them myself.
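Jeroen's idea here, mapping the backend's shape into a client-made type at decode time so the coupling stays inside the decoder, might look like this rough sketch in Elm (the `User` type and the snake_case field names are hypothetical):

```elm
import Json.Decode as Decode exposing (Decoder)

-- Hypothetical client-side type: one field, shaped for how we use it.
type alias User =
    { fullName : String }

-- The backend's snake_case fields stay inside the decoder; the rest
-- of the app only ever sees the client-shaped User value.
userDecoder : Decoder User
userDecoder =
    Decode.map2 (\first last -> User (first ++ " " ++ last))
        (Decode.field "first_name" Decode.string)
        (Decode.field "last_name" Decode.string)
```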
That's interesting that you say that it would be fine in your book if I could get into some
nitpicking and maybe explore something where we have a different perspective on things.
Yeah, I don't have a book.
That's nitpick.
That's it.
In your book.
It's okay.
Yeah, yeah, in your, it's okay in your blog.
That's the definitive source of your own opinions.
So I would consider that a smell in my book or in my blog.
And the reason is because, I mean, partially because of what we talked about to sort of
parse don't validate, I want to do it in a single step.
I want to just have this format and deserialize it into exactly what I want.
And we talked about this in the opaque types episode, this notion, I use this term, wrap
early unwrap late.
I want to wrap as early as possible.
In fact, if I can wrap in the appropriate custom types and nice data structures and
everything before I even have access to the data, that's the ideal, right?
So yeah, I totally agree.
Are you just saying like, it's not a big deal, but it's a best practice to decode into your
desired data type or what were your thoughts on that?
If I was understanding correctly, you were saying that it's okay in your book to auto
generate a decoder and then get some sort of data format that represents the serialization
format of the JSON and then pass that to a function and then map that into a different
data structure as a separate step.
You could do it that way.
The thought I had was just generate the decoders for each field and then the function that
will create your custom type or your record, that one should make it look like what you're
going to use.
But you could have one extra step and just get that, have that logic of mapping one to
the other in a decoder.
But yeah, you don't want it to leak out somewhere else.
So it's basically what is good in your book.
Okay, so we're on the same page in our books there, it sounds like.
Well, you also don't have a book.
Well, we'll see about that.
We'll see.
Oh, yeah.
Sneak peek.
Keep your eyes out.
There may be a book.
There may be a book.
So okay.
And one of the other really cool things that comes from this strategy of immediately getting
the data type you want, rather than having this intermediary data format in your Elm
code, is that you can locally reason about how you want to transform something.
Like I remember back in the day working on some angular code.
Oh, man, it was it was fun.
It was lots of fun just going through this data that we were getting from the server
and running all these functions to filter over and change certain bits of data into
a format that we needed to match it against some filters that we were applying and things
like that.
So we needed it into a specific format.
And so there's this one giant function that takes all this data, this deeply nested data
structure, and then it reaches into these pieces and it mutates things and it maybe
it does some sort of functional style mapping of things.
But either way, it's very error prone and it's a pain and it really hurts your brain
a lot more than it needs to.
And so this is one of the best qualities of JSON decoders, I think, is when you can just
say, hmm, well, I've got this big data structure that's coming back from the server.
It's got all of the users who are online right now and it's got some information about those
users like their current status, and then it's got some nested data structures like
which rooms they're a member of or whatever.
And you say, well, we're going to change something about the way that we're dealing with the
rooms that they're in or we want to, you know, add an additional piece of data to that or
put it in a different data structure or whatever.
Well, where do you go?
You could have like one module that deals with that piece of it.
And it would be opaque.
It could even be opaque.
It gives you these guarantees.
You can just have it as a separate concern and you can reason about it locally.
You can have unit tests that say, hey, here's this piece of this giant data structure.
For performance reasons, we get it back as one giant JSON blob from the server.
But in terms of reasoning about it, I want to think about this piece of it as a unit.
And perhaps I even want to reuse that piece when we get a different response from the
API, when we ask for a specific part of it.
And so you can sort of like you can separate these pieces out.
You can use nice opaque types.
You can get nice test cases on them.
You can change it locally without going to this one big function that maps everything.
You have these things colocated: the things that decode into this data, the things that
validate this part of the data, that know about the JSON structure of this data and put
it into the right format.
All of those responsibilities are sort of cohesively together and you can isolate them
from the rest of it.
And then you can snap together these different decoders that you've isolated if you need
to reuse them.
So I find that it's just a really nice way to organize and structure your data and to
think about your code.
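A rough sketch of that colocation idea: a reusable decoder lives next to its type and snaps into any larger decoder that needs it (all type and field names below are made up for illustration):

```elm
import Json.Decode as Decode exposing (Decoder)

type alias Room =
    { name : String }

-- Lives alongside the Room type; this is the only place that knows
-- the JSON shape of a room.
roomDecoder : Decoder Room
roomDecoder =
    Decode.map Room (Decode.field "name" Decode.string)

type alias User =
    { status : String
    , rooms : List Room
    }

-- Snaps the reusable room decoder into a bigger decoder; the same
-- roomDecoder can be reused by any other response that has rooms.
userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "status" Decode.string)
        (Decode.field "rooms" (Decode.list roomDecoder))
```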
Yeah, I think you say that because you're used to modeling your modules by data type.
So there is something that Evan advises in his talk, The Life of a File, which is
very good advice in my opinion, in my book, my blog.
You have one concern around the data type.
What you do is you put that data type into a separate module and everything will be cohesively
grouped together, as you said.
And that is just something that you need to get into your head, into your habits:
grouping things and putting them into a separate module instead of having one giant
file that does everything.
Otherwise you'll get the same problem as you did with your Angular application.
And even then, it can still be pretty hard to reason about if you want it to be.
You can definitely create premature abstractions where you go crazy with it, because you
don't necessarily know what's going to be grouped together naturally.
But the point is that you can, the way that JSON decoders work, you can reason about it
locally because it is this sort of way of transforming data.
And again, this is just like a broader pattern that you can have everything about how you
deserialize this data.
What are the names of the fields?
What are the raw data types you expect?
How do you transform those raw data types?
Those things just belong together and you can sort of hide them in this box and change
them in this one place and think about that as one unit.
Even though it fits into this bigger piece of decoding a giant JSON blob, you can think
about this one small part of it.
So it's just a very successful pattern.
And again, it's a pattern that shows up all over the place in Elm code.
So, you know, start paying attention to that broader pattern when you have a function for
succeed or fail that can stop the validation and say something went wrong.
Here's the error message to show or, Hey, just don't even try running any real validations
or anything like that.
Just give this value or you can have actual decoders or validators or whatever, be on
the lookout for that pattern.
There was another point that I think is interesting on this topic.
We talked about, should you generate your decoders?
And we talked about these different tools for that.
As I've gotten used to writing decoders in Elm, at first I found them intimidating.
And once I got some practice generating, it doesn't seem that useful to me because it's
so easy to write it.
I can just like write the code and it's not that big a deal because you have enough practice
and you become comfortable with those concepts.
But it takes time and it's good to like break things down.
So some people who are new to Elm say, well, why can't we use something like this approach
that Haskell uses where you can automatically generate something that decodes data based
on the data type.
So Evan has a really interesting document on this.
It's called a vision for data interchange in Elm.
And he touches on that and he talks about his experience using these automatically generated
decoders and he says, well, yeah, you can do that.
But in my experience, debugging an error when it's coming from an implicitly generated thing
based on the type signature is a lot harder to debug because I don't know where to look.
I don't know how to change it to make it work.
So that's another piece of it.
The fact that JSON decoders are explicit is very in line with the values of Elm, which
is there are worse things than being explicit.
Like is that really the bottleneck to you writing maintainable, easy to change and not
so error prone code?
Like that's not the problem.
Boilerplate.
That's not the thing keeping you from moving fast in your application.
The thing keeping you from moving fast is something changes on your server the way that
your code is serialized and now your decoders broke, but they were being generated implicitly
based on your type signatures.
And how the heck do I fix it?
Try explaining that to your backend engineers.
You broke it.
They're like, what?
That's your problem, frontend developer.
The JSON decoders get the decoupling really nicely.
It's just a really elegant pattern.
And once you get used to it, you'll learn to love them.
Maybe it's Stockholm syndrome, but we're too far gone at this point.
I feel like I never had much trouble with JSON myself.
That's because my first work on Elm was working on Elm Lint, the previous name for Elm review.
So it was something not frontend related for months, maybe years before I really started
working on frontend work.
And I just kept seeing people say, wow, JSON decoders are really hard.
How do you do this?
How do you handle this case?
And I just read all the Slack conversations and I learned that way.
That was very nice for me.
That makes sense.
I mean, I think one of the things that can be hard to wrap your brain around with JSON
decoders when you're new to Elm is sometimes it feels like, why can't I just have a JSON
decoder built for me?
Why can't I build one up by passing a list of JSON decoders somewhere?
And then it gives me like, oh, here was a JSON decoder for an int and a JSON decoder
for a string.
And just give me a JSON decoder from that list that I passed to you.
But the way that Elm works, you have to understand how these types change as you apply functions.
And if you just pass in a list of things, then those things must have the same type
for one thing.
So you can't just have these different things and have it transform the type signature.
So it's a pretty advanced thing actually that takes some time when you're new to Elm to
get this sense of how calling functions also massages the types little by little.
So you're saying, here's a JSON decode dot string.
But well, I actually want to take this string and I want to extract the year from this ISO
8601 date.
I still get so impressed when you say that.
It feels like magic.
Say it again.
It's just a manifestation of the trauma I've been through dealing with that type too much.
Extract the year, you said?
So if you're trying to extract the year, you have to understand that you take this JSON
decode string and then you apply a JSON decode map function, which is going to take a string
and return an int.
So it's going to take a value of type decoder string and it's going to transform that into
a type decoder int.
And so you can apply that and transform the type of your decoders.
And then you can take, you know, you could take something if you do map two, you could
take something that takes a decoder of type string and a decoder of type int and it decodes
it into a decoder of type album, which is our type alias for the album record type.
So I mean, I don't know, it's the kind of thing that when you're just saying it, it
sounds so simple, but it takes time for your brain to get used to that, to how these types
sort of fold together and how applying these map functions transforms things.
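A sketch of the example being discussed, assuming a hypothetical `releaseDate` field holding an ISO 8601 string; taking the first four characters is a simplification for illustration, not real date parsing:

```elm
import Json.Decode as Decode exposing (Decoder)

-- Decoder String into Decoder Int: pull the year out of an
-- ISO 8601 date string like "2020-04-27".
yearDecoder : Decoder Int
yearDecoder =
    Decode.string
        |> Decode.andThen
            (\iso ->
                case String.toInt (String.left 4 iso) of
                    Just year ->
                        Decode.succeed year

                    Nothing ->
                        Decode.fail ("Not an ISO 8601 date: " ++ iso)
            )

type alias Album =
    { title : String
    , year : Int
    }

-- map2 combines a Decoder String and a Decoder Int into a
-- Decoder Album. The JSON field names are hypothetical.
albumDecoder : Decoder Album
albumDecoder =
    Decode.map2 Album
        (Decode.field "title" Decode.string)
        (Decode.field "releaseDate" yearDecoder)
```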
Well, as you said, it's an advanced topic.
The problem with them is that they are an advanced topic, although they're not that
hard when you get used to it.
So they're advanced, but they happen at a point in your learning that is pretty early on.
So people who are used to doing JavaScript front end development, for instance, they're
used to making HTTP calls.
So they're used to getting data from the server.
And that's one of the first things they will try to do before other things.
I completely agree that you couldn't have said it better.
It's an advanced topic, but one that comes up right away.
Maybe in some cases, for some people, that will be one of the first things they will
try actually.
Don't do that.
So a couple of tips for sort of easing that learning curve for JSON decoders.
At least I found this very helpful.
One thing is: understand the map function.
Play around with that.
And in fact, forget the map7 and map2 functions for now.
Just try map.
Take a string decoder and map that string into an int.
Like decode a JSON string into an integer value or fail if it's not a string that's
wrapping an integer.
Try something like that.
Just to give yourself a sense of how mapping feels, how you can write it, how you can transform
types in your pipeline.
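That exercise might look like the sketch below. Note that the failure case needs `andThen` rather than plain `map`, since the transformation itself can fail:

```elm
import Json.Decode as Decode exposing (Decoder)

-- Succeeds on JSON like "42" (an integer wrapped in a string),
-- fails on any other string.
intInString : Decoder Int
intInString =
    Decode.string
        |> Decode.andThen
            (\s ->
                case String.toInt s of
                    Just n ->
                        Decode.succeed n

                    Nothing ->
                        Decode.fail (s ++ " is not an integer")
            )
```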
That's one thing.
I think a second thing that people get tripped up on that's like a low level building block
of decoders that's really core to how you build decoders is record type aliases.
So one thing they don't teach you, it's sort of an implicit rule that isn't like you just
learn it by seeing it happen a lot.
When I say type alias Album equals some record, it's actually giving me a constructor
function.
If I have a field that's a string and a field that's an int, in that order, it gives me
a function which takes a string and an int, in that order, and returns an Album record.
And that's one of the few implicit things that Elm does for you.
So I usually say Elm has no magic.
And that's pretty much true.
But that one is implicit.
Yes, it's quite handy.
But where do you learn that?
And so we're stating it here, Jeroen.
We're giving people this explicit knowledge: when you define a type alias for a record,
it gives you a constructor function.
Okay, so yeah, pull up an Elm REPL, try defining a type alias for a record, and then just
write the name of that type alias that you defined, capital A Album, and you'll see the
type is a function.
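Trying that in `elm repl` looks roughly like this (the album values are made up, and the output shown is approximately what the REPL prints):

```elm
-- > type alias Album = { title : String, year : Int }
-- > Album
-- <function> : String -> Int -> Album
-- > Album "Abbey Road" 1969
-- { title = "Abbey Road", year = 1969 } : Album
```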
Yeah, just be very wary about the order of fields.
I try not to put two fields with the same type next to each other in the type alias
definition.
Because the problem with this is that you can put the right value in the wrong field.
You can decode things in the wrong position.
So be very wary of that.
Yes, right.
So if I had a JSON decoder that's decoding like a name JSON object, and that name has
first name and last name, and I have a type alias name is a record with first name and
last name.
If I have my JSON decoder working perfectly, and it's getting the right first name and
last name, and I now change the order, and I switch the order in my type alias.
Or in the JSON decoder, either.
Yes, that's right.
If I change the order in either place, it might seem like, oh, I'm just refactoring
my code.
No, you're not.
It's not that simple.
The order matters because you're using a positional argument constructor function that the type
alias gives you when you define a type alias.
It gives you a constructor function and it is order dependent.
So beware of that.
That's the PSA for the day.
The more you know.
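A minimal illustration of that pitfall, with hypothetical firstName/lastName fields:

```elm
import Json.Decode as Decode exposing (Decoder)

-- Both fields are Strings, so the compiler cannot catch a swap.
type alias Name =
    { firstName : String
    , lastName : String
    }

nameDecoder : Decoder Name
nameDecoder =
    Decode.map2 Name
        -- These must match the order of the alias's fields. Swap the
        -- two lines below and it still compiles, but every first name
        -- silently ends up in the lastName field, and vice versa.
        (Decode.field "firstName" Decode.string)
        (Decode.field "lastName" Decode.string)
```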
Anything else that people should know as they're getting started with JSON decoding?
Not on my blog.
All right.
Well, I think that's a good start and looking forward to getting into some other topics.
There are some more nuanced topics to explore here, but hopefully this is a good start
for people.
So we have recorded these episodes, the four episodes before we released Elm Radio to the
public before you even knew this was a thing.
And we would probably like to have some suggestions about topics.
We have plenty of things to talk about.
Many, many, many things that I will not list.
But it might be useful for us at some point if you gave us some topics you would like
us to cover.
Yeah, topics.
And we might do some grab bag episodes where we go through different questions, maybe multiple
questions in one episode.
So submit a question that you have and we'd love to talk about it.
Don't make them questions that you can just get an answer to on Slack.
Yeah, perhaps it's best to give questions where we can sort of get into some interesting
best practices or different ways of looking at a topic rather than just a here's one clear
cut answer that somebody on Slack could probably do a better job just linking you to the right
article quickly.
Yeah, exactly.
Or why is this code not compiling?
Like that's not what we're going to answer.
Well, looking forward to seeing what people submit.
And yeah, thanks a lot.
Jeroen.
I'll talk to you next time.
See you next time.
We'll talk again soon.