elm-pages scripts

We discuss elm-pages BackendTasks and how to run them as scripts with a single command.
January 30, 2023


Hello Jeroen. Hello Dillon. So I think our listeners may be shocked to hear but our episodes
are usually unscripted. But today we're going to do something a little bit different. I'm
never ready for you making a pun. I should be at this point. But I'm not. You knew it
was coming. No I wasn't. What are we talking about today? Today we're going to talk about
Elm Pages scripts. Scripts! That was the pun. Yeah. So we haven't talked about Elm Pages
in a while. Yes. We did talk about it already in the first episode if I remember correctly.
Yep that's right. So it's a project you've been working on for a long time. And we haven't
heard about it much this year except on Elm Radio, because you keep mentioning it, for good
reason. You're still working on it. And yeah, you've been making a lot
of things in Elm Pages. A lot of very cool things that we're going to talk about in future
episodes. But today we're just going to focus on one part of it. And that is going to be
scripts. Yes. Because from the amount of content that I can see in Elm Pages we're not going
to be able to fit that into even hour long episodes. Exactly. So yeah Elm Pages scripts.
Like what is a script? What is Elm Pages? What is this cool new thing that people want
to hear about? Yeah. So I'm really excited to see what people do with Elm Pages scripts.
So yeah, and just to reiterate, we are talking about the Elm Pages V3 release. At the time
of this recording it is a pre-release, ramping up to the release: getting some final API
changes and feedback from the community, and writing and updating some docs. But
yes, so that's what we're talking about here: Elm Pages V3. The scope of what you can do
with Elm Pages V3 is kind of huge. We will talk about it in the future, but Elm Pages
V2, people may think of as a static site generator, which is what it was. So Elm Pages V3 allows
you to do everything you could with V2. You can still build a static site using it. The Elm Pages
site itself is built using Elm Pages V3, actually. It's on a pre-release, and it does that by generating
static assets. But you can also do server-rendered pages. So the scope of Elm Pages V3 has changed
with what you can do quite a bit compared to V2. But the heart of Elm Pages is still
the same throughout all of its permutations. I've always thought of the heart of Elm Pages
as being this sort of engine that's able to like execute things on a back end and give
you back data. In Elm Pages V2 that was called data sources. So you know you could do a data
source to make an HTTP request. So, for example, it uses a lot of local files
with markdown and pulls them in and runs a markdown parser. It's reading files. You know, obviously
with Elm there's not a way to go to your local file system and read files directly. But Elm
Pages gives you ways to do that. So in V2 that tool was called a data source and you
know you can pull in data to your initial page render using a data source. So before
your static page renders you can read from a file. That was called data sources in V2.
In V3, because the scope of what Elm Pages does has changed, the term data source has
been renamed, and the concept has changed a tiny bit. And the reason for that is because
in V2 the model was much more: you try to make an HTTP request, you try to read from
a file, and if anything goes wrong you just stop the build and fail. And then the developer
can read the issue. They can read a nicely formatted error message and say, oh, this API
returned a 404. Let me fix that and then rerun the build and it succeeds. With V3, for
example, if you're server-rendering pages, maybe you get a 404 in an HTTP request. Maybe you're
doing a post and you need to update something and you need to handle that error in a graceful
way. So that's one of the reasons why this concept has changed and become a little more
powerful, and part of the reason why the name has changed. So in V3 the term is no longer
data source; it's now called a BackendTask. And a BackendTask actually looks and feels a lot like Elm core Tasks.
Okay, well that's not going to be very surprising to users then.
Yeah I hope so. In fact a lot of the API looks quite similar. You can do BackendTask.andThen,
and you can do BackendTask.map or BackendTask.mapError. Now if you're
familiar with data sources in Elm Pages V2, the mapError part might be surprising, because
in V2 there was no error. So an Elm Task is a Task with an error type variable and a data
type variable. So if you do an Elm HTTP task it's going to give you a `Task Http.Error a`,
with your decoded data as the data type. And then you do Task.attempt. You have
to do attempt if there is an error that could happen. And then you get a message where
you can deal with that result. So you can, you know, turn it into RemoteData or do
whatever you want with that result. So Elm Pages V3 has these backend tasks. They
can have an error. The reason for that is because in V2 it was much more: if anything
goes wrong, stop the assembly line, stop everything. And so there
is a trade off there because that is very convenient in a lot of ways. It's more convenient
but less powerful. Yeah because for instance you can't handle the fact that an error has
happened. You can't retry an HTTP request when you know that your server is a bit flaky
or not always available. Stuff like that. So now you can handle errors. Exactly. Now
it's completely at your disposal what you do with error handling. So we'll get into
that. And one of the things that I love about backend tasks, as compared to the design in
V2 with data sources, is: with a BackendTask, if there is an error in that type variable,
then it has a possibility of failing. If there is no error there, so if you have, you know,
`BackendTask Never MyData`, then you know it will never fail. And that's something
that you couldn't do with a data source in V2: just look at the types and know whether or not
it's going to fail, because that possible failure gets sort of tucked under the hood. It's not
represented by the types. Right. So it's much more like a Task in that sense, where you
can have a Task where you know that it doesn't have the possibility of failure. So if
you have, like, an HTTP task, which is `Task Http.Error MyData`, then you can do
Task.onError, and that allows you to, you know, just say Task.succeed
in that onError and turn it into some other fallback data, for example. Yeah. So you can
recover from the failure. Exactly. And in some cases that might make sense. In
some cases you might want to do a follow-up task and try something else, whatever it might
be, but it's completely at your disposal. But if you were to do Task.succeed
in the onError, then you would be able to recover from that failure, and then
the types would reflect that it's not possible for it to fail anymore, which is kind
of cool. So you can do the same thing with Elm Pages V3 backend tasks. It's the
same mental model. And if the error type variable is unbound, if you don't have any errors
there, then you know it's not possible for it to have an error. So that's backend tasks.
Backend tasks are the heart of Elm Pages, even more so in the V3 release that is coming
up soon. And they are also the heart of Elm Pages scripts. So let's talk about what
Elm Pages scripts are. I almost forgot about that part. Yeah. If only we had a script that
we could follow, you know. Exactly, we got a little bit off script there. So Elm
Pages scripts are really just a way of defining a backend task and executing it from the
command line. So say you wanted to make an HTTP request and log something to the console.
The minimal setup for Elm Pages scripts actually doesn't
require any of the Elm Pages application folder or anything. You don't need any route modules
defined. You don't need any of the config for Elm Pages. All you need is a folder called
script. So much like for an elm-review project, where you have a folder called review and it's a
regular Elm project with an elm.json, an Elm Pages script is the same thing. So you have
a script folder, and that script folder has to have an elm.json. So it's a little Elm project.
It has to have elm-pages as a dependency in that elm.json. And then what you do is
run `elm-pages run Hello`. And now if you have something in your source-directories,
like `src/Hello.elm`, it's going to go and execute that module. So right. Yeah.
What does it mean to execute an Elm module? That becomes the question. Yeah. So you mentioned
it's basically running a backend task. Exactly. So a backend task is used to fetch data from
HTTP or from files, so you can grab the data. But then that's it, right? You can only grab
data. You can only fetch things. What else can you do with it? Yeah. Exactly. And that
would be much more the mental model in Elm Pages V2. But now there's a little bit more
of a notion of being able to do effectful things using backend tasks in V3, because
if you're doing a server-rendered route and responding to a form submission, which you
can do in V3, which we'll talk about in a future episode, you might want to do something that
has an effect on the world. It's not just about shoveling in data to generate static
pages. So similarly, a backend task is not only about grabbing data. It does let you
pull in data and map that data, with BackendTask.map and BackendTask.andThen. But
you can also perform effects. So for example, there is a backend task in the Elm Pages
API called Script.log. Script.log takes a string and it gives you a backend task with no data.
All right. Yeah. So that looks like `BackendTask error ()`: a lowercase error type variable, because it's not possible
for it to error, and unit, because it doesn't have any data. So the hello world for Elm
Pages script is: you have your script folder, you have an elm.json, you have
`src/Hello.elm`, and in your Hello.elm you expose a function called run. Okay. That is
the main function in a way. Exactly. Run is like the main function for an Elm Pages script.
Run has the type Script, and then you do `Script.withoutCliOptions (Script.log "Hello World!")`.
That's hello world. So that's it. Then you run `elm-pages run Hello` and it
prints hello world. I hope that was clear to listeners. It's always hard to explain
code. All of this was not that much code. Yeah. And that's sort of the thing I hope people
take away from this: to write a hello world script and run it is a very small number
of files that you need to create, code you need to write, and commands you need to run.
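To make that concrete, here's roughly what that minimal Hello.elm looks like, sketched from the API described in this episode (so treat the exact module names as approximate):

```elm
module Hello exposing (run)

import Pages.Script as Script exposing (Script)


-- `run` is the entry point, like `main` for a regular Elm app
run : Script
run =
    Script.withoutCliOptions
        (Script.log "Hello, World!")
```

Then `elm-pages run Hello` compiles and executes it in one step.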
Notice, for example, that we didn't have to say: elm-pages compile this script, then make an
index.js file, and run it with Node, and all of that. And if we had a compiler
error or anything like that in Hello.elm, then running `elm-pages run Hello` would tell
us about that. So it's a very minimal amount of code. We will link to an example
of a Hello.elm, and we'll link to a starter repo that gives you, like, a minimal boilerplate
so you can clone it and play around with it. But it's designed to be a very small
amount of code. And also, the abstraction of a backend task is designed to be minimal
in a way too, because there's no init, there's no update, there's no subscriptions. You
just log. You don't return a model and then a command with a logging thing. And then,
like, if you wanted to make an HTTP request, you just do, you know, `BackendTask.Http.getJson`,
give it a URL, give it a JSON decoder, and then BackendTask.andThen, and
then you can log some data that you decoded. So right. The
abstraction of a backend task lets you do things in a more lightweight way, especially
for this sort of mental model where it's just: execute this thing. So a backend
task maps very nicely to the idea of a script, where it's just execute this thing or fail.
Do this, do this, do that. And we're done. Right. Exactly. It's not updating the model. It's
not responding to user input. So it's just a sequence of tasks that it's performing.
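That "do this, do this, do that" shape might look something like this as a minimal sketch, chaining steps with BackendTask.andThen:

```elm
module Steps exposing (run)

import BackendTask
import Pages.Script as Script exposing (Script)


-- A script is just one BackendTask: each step runs after the previous one,
-- and the script finishes (or fails) when the chain is done.
run : Script
run =
    Script.withoutCliOptions
        (Script.log "Reading input..."
            |> BackendTask.andThen (\() -> Script.log "Transforming data...")
            |> BackendTask.andThen (\() -> Script.log "Done!")
        )
```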
Whereas if you had to listen to user input, if you had to listen to the user typing into
an input field or an onClick, that would sort of break that mental model of just saying:
there's one backend task, it just runs, and it finishes or it fails. Right. So is there
no way to listen to user prompts, or could that be added? That is a very good
question. I have thought about that. Wolfgang Schuster has been doing some experiments with
a sort of Elm Ink project, where you can do things like create interactive
terminal applications in Elm. And I definitely think it could be interesting, potentially,
to have a way to have an init, update, things like that, if you wanted to do something like
that. It would be possible, but it would add complexity. Yeah. But I mean, even without
having an init, update, and all those things, you could probably define a
backend task which succeeds with the user prompt, without having this Elm architecture
lifecycle. Exactly. That's the thing, you can, and you actually can. I actually
haven't tried that specific use case, but you can run arbitrary Node.js code using
BackendTask.Custom. We'll link to the BackendTask.Custom module docs, which explain
how to set that up. But essentially you just give it a JSON encoder and a JSON decoder,
and you can define a custom backend task, and that custom backend task could be an
await-user-input task. You could give it a prompt, and, you know, you could block until you receive
the user input and then return that data. And that would totally work. So yeah, you
can do that, and it supports that simpler mental model of just running a script until you're
done. The thing that I find really fun about backend tasks is that, like, it is
a type, it's data. It's a description of an effect, or of something to achieve and
get out of it. Exactly. And so because it's this sort of declarative description
of an effect and how to respond to subsequent effects, you can use it in a lot of different
places. So like, Elm Pages scripts is one place you could use it. You know, if you wanted to
define some sort of runtime where you say, oh yeah, well, you can turn that into, you know,
not an Elm command but something like an Elm command, or like, you can return
a backend task in a tuple with a model change. It's
possible to do that. And I think that sometimes people underestimate what you can model, for
frameworks to be able to do effectful things, using this pattern of describing effects
as data. I think it's actually a very powerful tool that we can do a lot with,
and as framework designers we can put guardrails so it's very, very clear what it's possible
to do using those data types, and where they can be used and where they cannot be used.
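As a sketch of the prompt idea from a moment ago, here is what a hypothetical custom backend task might look like (the "prompt" function name is made up; it would have to be defined on the JavaScript side, as described in the BackendTask.Custom docs):

```elm
module Greet exposing (run)

import BackendTask
import BackendTask.Custom
import Json.Decode as Decode
import Json.Encode as Encode
import Pages.Script as Script exposing (Script)


-- Calls a user-defined "prompt" function in the custom backend task JS file,
-- which could block on stdin and resolve with the user's answer.
run : Script
run =
    Script.withoutCliOptions
        (BackendTask.Custom.run "prompt"
            (Encode.string "What is your name? ")
            Decode.string
            |> BackendTask.allowFatal
            |> BackendTask.andThen
                (\name -> Script.log ("Hello, " ++ name ++ "!"))
        )
```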
So you know, it's essentially the idea of a managed effect, where creating a backend
task in Elm Pages doesn't do anything. You can create a backend task
just like you can create a command, but when you give it to Elm Pages in a place where
it accepts that type, then it lets the framework do something with it. So the sky's the limit
with how you build things with that. What I really like about this pattern is that, because
there's an abstraction layer, because you make a new API whose internals are hidden
(I mean, I'm guessing they're opaque, right?), well, that decouples you from how it's implemented
under the hood. So however you implement backend tasks under the hood, or how you implement
logging, reading a file, writing to a file, all those things. I'm guessing they're implemented
in Node.js at the moment, but they could be implemented using a Bash script or whatever.
And all those options are available as long as you don't tie it to a specific implementation,
which, because there's an abstraction layer, you don't. So that is something that I
really like: you're free to implement it however you want, you as the framework
author. But I'm guessing if you do it through a port, a user can do so as well. Although
it is going to be executed through JavaScript. But it could be done through other means, maybe.
Absolutely. Well, yeah, as you say, I mean, at the end of the day, Elm Pages is creating,
it's scaffolding up, an application around your application. That's sort of what a framework
is. And so it's, at the end of the day, compiling an Elm application and executing it, in this
case in Node.js. But it could be executing it in other contexts. It could be executing
it with Deno or Cloudflare Workers or with Bun, with different runtimes. But at the end
of the day it is using Elm, whose way of communicating is through ports. And so it's
just building that. It's just like a little application. You know, just like when
we write elm-review and elm-graphql command line tools that you npm install: somewhere
in there we have compiled Elm code, where we take that JS, we import that code and run
it, set up, you know, init the Elm application, subscribe to some ports, send to some ports.
So we're communicating to the Elm app through ports, and that's all that Elm Pages
is doing. But it creates a set of abstractions for that that makes it easier for the user
to basically execute things in a backend, and run a script in a backend context, which
turns out is a very useful thing to do if you're, you know, making a static site, because
you want to read some files and then you want to pull that data into your front end. But that's
also scripting, right? So it does bring up the question: is Elm a good tool for this
type of task, like this kind of backend task? Right. Yeah, exactly. Is Elm a good tool for
writing a script? Is that a good idea? And I mean, of course we're biased. We want to
do everything in Elm, and we write lots and lots of Node.js code so that we can have the
ability to do things only in Elm. But I think it's quite nice to be able to
just operate within the confines of type-safe Elm code, where you can write a JSON decoder
and have things fail and have this explicitness. But you can write to a file. You can log. You
can read files. Which, by the way, writing to a file is a built-in thing in the
Script module that Elm Pages provides. But you can define your own custom backend tasks
as well. So it's just a way of binding Elm code and these backend tasks to a backend.
That's really what a backend task is. OK. So you mentioned doing this in Elm
or doing this in other languages or tools. So yeah, like, does it make sense to write
a script in Elm or in Elm Pages, or is it sometimes better to do it in JavaScript or Bash or Python
or whatever? So you say that at the moment there's logging, there's writing a file. That's
not a lot of things that you can do out of the box, but you can do more through
custom backend tasks using ports. So basically you can probably do anything that a JavaScript
script could do. But what are the gains, what are the benefits that you have when you
do it through Elm Pages, compared to just running a Node.js script, for instance?
Exactly. Yeah. Great question. And that's exactly the right question, I think.
So first of all, a little bit of background on the motivation for Elm Pages scripts. People
might be asking, like, Elm Pages scripts? Why? What does Elm Pages have to do with scripts?
Yeah, the names don't match. Right, at the moment. So Elm Pages scripts were born out of this
use case of generating, like, the scaffolding for a new route. So Elm Pages V2 has an
`elm-pages add` command, so you can say `elm-pages add Blog.Slug_` and it generates
something for a route where it's blog slash some dynamic slug. And I wanted to have a
way to let users customize that. Ryan has created a nice feature in elm-spa where you
can do some templating and create custom commands for scaffolding new pages. I was really
keen on using Matt's elm-codegen tool for that. And so as I was starting to build
that, I'm like, well, it would be really nice if I could use elm-codegen to create scaffolding
for new routes. But I also want to be able to read an environment variable, read some
configuration from a JSON file, maybe get some, like, JSON data from an API, to figure out how
I'm going to generate my new routes. And, well, that's kind of what backend tasks
let you do. So I knew for a long time that I wanted to have the ability to
use backend tasks to scaffold new routes, because I just really like this abstraction
of backend tasks and I want to use it for a lot of things. And then once I've built that,
it's like, well, this is no longer a scaffolding tool.
This is just a way to, like, run backend tasks, by writing a module that defines a backend
task to run. And OK, maybe the special case is it's, like, writing a file in a specific
format, but why not just give it a backend task to write a file, and then you can use that.
And now it's just Elm Pages scripts. So it's really like Ruby on Rails generators. That
was the main motivation. Rails generators are a tool for very quickly building up boilerplate,
for, say, if you want to create a new page with a form. So it's like, you know, you create
a new controller in Rails, and your template is defining a form, and your form has these
fields, and you also want to create, you know, some stuff for working with Active Record to
define this new user model or whatever. And so people are very productive using Rails
generators, where they'll say, like, `rails generate` whatever, and you can build custom workflows.
You can build custom generators. You can even install custom generators. And then they
say that Elm has a lot of boilerplate, and we don't write scripts for that, usually.
Right. Right. But I wanted a way to customize templates for adding new routes.
And also, you know, if you want to create a new page and be super productive, where you
can say, hey, I'm going to make a new form and it has these fields, why not be able to write
a custom generator, a custom Elm Pages script, that lets you just template that? And if you
want to read some configuration from something, or whatever you want to do, why not let users
do that? So that was the motivation. Now, back to the question of, like, what benefit
do you gain by doing this compared to a Bash script or a Node script? If we look at the
pros and cons between writing a script in Elm and writing a script in Bash or
Node.js, we can see some pretty obvious pros and cons on either side. So let's look
at writing a vanilla Elm script. If we were to do it on our own, we would need to,
you know, compile some Elm script, we would need to take
that compiled Elm script and import it into Node.js so we could run it, and do some boilerplate
around that. And obviously that's not great. And Elm Pages scripts take care of that for
you. So we no longer have to worry about that. But what if we just wanted to grab some HTTP
data? Right. If we have to create an init and update to do that, that becomes pretty verbose and
tedious. So backend tasks make that less tedious, because you just do
`BackendTask.Http.getJson url jsonDecoder` and then you can do BackendTask.andThen.
You don't have that boilerplate of init, update, subscriptions. Exactly. Now the other thing
that becomes tedious there is dealing with failure. Right. So in Elm, everything is very
explicit when things can go wrong, and you have to painstakingly handle every possible
error. What if the decoded value does not successfully decode to the format you expected?
What if there's an HTTP error? What if there's a file reading error? All these things, you
have to painstakingly handle every possible failure. So compare that with writing a script
in Bash or Node. The challenge is, well, what things can fail? So it's very easy to
just run something and let it fail. Right. It just throws an exception. The problem is
knowing where it might fail, and what implicit assumptions there are, and what possible runtime
errors are lurking there. So if you want to write a quick and dirty script, and you just
say, I want to hit this API, I want to grab this data, I want to map the data a little
bit, and I want to write some file or something like that, right, then writing a Node.js script
is great for that, because it doesn't get in your way with saying, hey, the errors might
be wrong. You just pull off JSON data. It doesn't get in your way with saying, hey, this
HTTP request might fail. Whereas if you're just writing a vanilla Elm file, you do have
to deal with those cases, and that becomes tedious. Elm Pages backend tasks try
to address that problem. So with data sources in Elm Pages V2, it was more
like we talked about earlier: it was more convenient, because you can just let things fail, but it
was less powerful, because you couldn't handle possible failures or see where it was possible
for something to fail. So it was less safe and less powerful; in V3 it's more safe and
powerful. It's a little more verbose. So Elm Pages V3 provides a new abstraction
called a FatalError. And this is very important for the ergonomics of being able to do a quick
and dirty task, but it tries to achieve a balance between convenience and safety. So the way
it works is, it allows you to just give a FatalError to Elm Pages, and it will just stop
and report the error. So in the case of a script, it will just print out what went wrong.
So if you say `BackendTask.Http.getJson` and it gives you a 404 error, what you
can do is `BackendTask.allowFatal`, and that is going to take your HTTP
backend task and give you a `BackendTask FatalError decodedData`. So that FatalError
contains the information that the Elm Pages framework needs to print an error
message describing what went wrong. So if you do `elm-pages run Hello` and then you hit
your API with allowFatal, it's just going to print an error message saying, hey, I was
running this HTTP backend task, something went wrong, there was a 404 error, because
you opted out of handling and recovering from that error.
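In code, that might look roughly like this (hypothetical URL and decoder, with the types sketched from the discussion above):

```elm
import BackendTask exposing (BackendTask)
import BackendTask.Http
import FatalError exposing (FatalError)
import Json.Decode as Decode


-- If the request 404s, `allowFatal` hands the failure to the framework,
-- which prints a formatted error message and stops the script.
getStars : BackendTask FatalError Int
getStars =
    BackendTask.Http.getJson
        "https://api.github.com/repos/dillonkearns/elm-pages"
        (Decode.field "stargazers_count" Decode.int)
        |> BackendTask.allowFatal
```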
So what happens if you don't write allowFatal?
If you don't write allowFatal, in the case of `BackendTask.Http.getJson`, then
the types just won't line up, because that getJson function returns a backend
task with the error being a record with a fatal FatalError and recoverable error data. So if you wanted to
recover from it, then you can do BackendTask.mapError, and then you can pull off
that recoverable data from that record, which is going to be a nicely structured HTTP error,
which could be your JSON decoding error, it could be BadBody, Timeout. And so if you want
to say, if it's a timeout, try it again, you can do that. You can do BackendTask.onError,
case on `error.recoverable`, Timeout, try again, or if it's, you know, whatever else, and for
all of those different cases, with that structured data, you can choose explicitly how to handle it.
So if you don't write allowFatal, then you have to do something. And what is that something
that you have to do?
If you don't... so, at the end of the day, Elm Pages expects, when you say
Script.withoutCliOptions and you give it a backend task, the type of that backend task
needs to be: the error type can be a FatalError, and the data type needs to be unit.
So at the end of the day you need to give it either no possibility of an error,
or a FatalError if anything. So doing allowFatal just throws away that recoverable error
data that has the nicely structured error. Yeah, allowFatal just grabs
that FatalError and passes it through. But if you do onError, then you can continue with
something else.
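A sketch of that retry-on-timeout idea, assuming the error record shape described above (a `fatal` field and a `recoverable` field):

```elm
import BackendTask exposing (BackendTask)
import BackendTask.Http
import FatalError exposing (FatalError)
import Json.Decode as Decode


-- Retry once on a timeout; pass any other failure through as fatal.
getWithRetry : String -> Decode.Decoder a -> BackendTask FatalError a
getWithRetry url decoder =
    BackendTask.Http.getJson url decoder
        |> BackendTask.onError
            (\error ->
                case error.recoverable of
                    BackendTask.Http.Timeout ->
                        BackendTask.Http.getJson url decoder
                            |> BackendTask.allowFatal

                    _ ->
                        BackendTask.fail error.fatal
            )
```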
OK. So you have to transform the error type in a way that will print an error, or succeed,
I guess. And if you don't, then you have to use allowFatal.
Right. At the end of the day, that's the type of error that you can give it. So you
can't just give any error type to Elm Pages and have it do something, because
Elm doesn't let you have, like, variable return types for something. So it needs
to be returning `BackendTask FatalError ()`. And so you could
define a backend task that has whatever error type, just like, you know, a regular Elm Task can
have an error type. You can map that error type, you can do Task.mapError. You can
also do BackendTask.mapError. It's the same thing. Whatever the error type is
along the way doesn't concern Elm Pages. You can do whatever you want. You can
have whatever structured error data. It's just that if you have the possibility of a
failure, you have to turn that error type into a FatalError at the end of the day. Right.
OK. So basically the FatalError concept in Elm Pages is a way of saying, hey, let's have
a balance between safety and convenience. In the design of this, I could have just had a single type variable for the
data, not had an error type variable, and just let you say, oh yeah, it can fail, or, I want
to recover from the failure. What I wanted was to make it very explicit
where failures happen, and if there's no error type variable, there's no possibility of failure.
So you can tell just by looking at the types if it's possible for something to fail or
not, and how it could fail. Now, the FatalError type is a very generic failure that doesn't
contain any useful information for you. So that's the balance
between the safety and the convenience. You know it can fail, but you can't do anything
meaningful to recover from it at that point, because you have to kind of choose when you
get that data. So the core APIs in Elm Pages, like HTTP, reading from files,
writing to files, things that can fail, they give you these two different bits of data,
where you can choose: I want to either recover, or let the fatal
error through. So the point of that design is that you can have the convenience of just
saying, yeah, just give this message to the framework and let it fail, or you can recover
from it. So it's trying to give you an ergonomic way to easily just say, I don't care about
this error, or a way to recover from it, while knowing explicitly whether failure is possible
just based on the types. Right. And at the end of the script, if it fails, then it's always
going to have some kind of nice error message, or a reasonably nice error message, I'm hoping.
Oh yeah. I mean, that's a major goal of Elm Pages for sure: to, you know, strive for the quality
we expect in the Elm community for error messages. OK. So what I like about this is, as you
say, you can identify what is going to succeed and what can fail.
So once this script is compiled by Elm Pages, if it compiles, then it's either
going to fail in an intended way, or it's going to succeed
in the intended way. Yeah. But it's never going to fail because of how you wrote the
code. So something that happens a lot to people at least to me but I'm guessing to a lot of
people who write scripts is that the script is going to fail because you did something
stupid in your script like for instance you write a Node.js script and you mistyped a
function name. So that's not going to happen anymore. The only reason it is going to fail is because some operation that touched an external system, like the file system, or made an HTTP request, failed for some reason. But it's never going to fail because of how you wrote the code. So that is quite nice. So that is one of the plus
sides that I find in using elm-pages scripts. But do you see other ones? Because you compared it previously with writing a script in Elm without elm-pages, which, yeah, sounds painful. Some people have done it. It's actually not that bad in practice; I have done so myself, obviously. But how does it compare to writing something in JavaScript or in Bash or Perl or Python or whatever? When would you do one of those, or when would
you use elm-pages scripts? Right. Yeah. So there's a tradeoff in that, again, in the context of Bash or Node.js, you don't know where possible failures lurk, because even if you're writing a script in TypeScript, you don't know where you might have gotten some `any` data back that is leaking possibly incorrect type data somewhere. You don't know. So for me, if I'm trying to solve a problem, for example, recently I was writing a script for Elm Radio that automatically applies the right ID3 tags to the MP3s that we publish, which you need to do before publishing: it applies the right album cover image and the right track information, and it pulls in data from the Notion API so it can get the title and the number of the episode and things like that. And writing it in Node.js I found really frustrating, because even though I was using an NPM package for hitting the Notion API, there were all these incorrect assumptions about the format of the JSON data I was getting back, even with this helper package. An NPM helper package? Yeah. And if I was writing it in Elm, it would be JSON decoders, so I would immediately turn it into nicely structured data or an error, and be able to get a shorter feedback cycle as I was working on that script. Instead of just having to run it a little bit further, run it a little bit further, it would just tell me that I have decoding errors until I've gotten the data format as expected, which is my preferred workflow. And also, if you write a script in Node.js and then it succeeds, you're like, okay, well, it's possible for this script to succeed, but you're not necessarily convinced that it will succeed for all cases. Whereas if I write the script in Elm, I would be much more confident that, oh yeah, it's good now, it's handling the expected JSON data. I mean, maybe the API sends slightly different data formats in different cases, but I'm much more confident that I'm done at that point. So that's one thing, that confidence, which I still want when I'm writing a script. I still want to pull API data down and have some sanity around that, being confident that the data format I'm getting is right. I still want to work with nice types while I'm doing that, and know that the types I'm working with are correct, not half correct mixed in with some `any`s that trickled into my system. I mean, you don't have `any`s in Bash. Right. Oh man, working with
the API data responses in bash does not sound fun. I don't even know how you would do that.
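As a sketch of the workflow being described, fetching and decoding API data with a BackendTask might look roughly like this. This assumes the elm-pages v3 `BackendTask.Http` API as best I recall it (exact type signatures may differ), and the URL and `Episode` type are made up for illustration:

```elm
module FetchEpisode exposing (run)

-- Sketch: fetch JSON from an API and decode it into structured data.
-- A decoding failure or HTTP error surfaces as an error immediately,
-- giving the short feedback cycle described above.

import BackendTask exposing (BackendTask)
import BackendTask.Http
import FatalError exposing (FatalError)
import Json.Decode as Decode exposing (Decoder)
import Pages.Script as Script exposing (Script)


type alias Episode =
    { title : String
    , number : Int
    }


episodeDecoder : Decoder Episode
episodeDecoder =
    Decode.map2 Episode
        (Decode.field "title" Decode.string)
        (Decode.field "number" Decode.int)


fetchEpisode : BackendTask FatalError Episode
fetchEpisode =
    BackendTask.Http.getJson
        "https://example.com/api/episodes/latest"
        episodeDecoder
        -- Let HTTP and decoding failures through as fatal errors,
        -- with the framework's error messages.
        |> BackendTask.allowFatal


run : Script
run =
    Script.withoutCliOptions
        (fetchEpisode
            |> BackendTask.andThen (\episode -> Script.log episode.title)
        )
```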
Yeah. I would just curl it and pray that it works. jq or something, I don't know, there are tools, but it's not fun, you know, so it's nice to use an actual programming language for that, not just a Bash script. But yes, the other thing is, if I
want to make it more robust... To run this script, maybe, when you write a quick and dirty script, you want to just allow failures to happen, right? That would be like, in Node.js, you just don't do a try/catch. So with elm-pages BackendTasks you do need to be explicit about where failures are possible. But at the same time, you do your getJson, and a failure is possible, so the types will not fit together unless you use BackendTask.allowFatal. And yes, you do have to write that explicitly, but that's all you do. If you just say, hey, I don't want to deal with any possible errors, I just want to work with the happy path, I expect everything to work, and if anything goes wrong just give me an error message, right, then any time the types tell you to, you just do BackendTask.allowFatal. And what you end up with is, yes, you had to write allowFatal in a handful of places, but for one thing, you can look at it and see where fatal things can happen. Right. Yeah. That's nice. Yeah. And if you
want to recover from it at some point later when you have more time or you want to print
out a nicer error message then you know where to look. Exactly. And you know exactly the
possible failures that can happen. It always feels uncomfortable for me doing a try/catch in Node.js and then just expecting the caught exception to be this thing that has this key, but then it might not be. And do I do a try/catch within my catch, in case my expectations about the properties on that error are incorrect? So if you want to do error handling, error recovery, you have nice types that let you do that. If you don't, it's explicit where you're not doing that. And that's a very intentional
design trade off that it's a little less convenient but it feels more safe. So to me that is a
trade off I'm willing to make: to write allowFatal in a few extra places, and have that explicitness, and know where things can fail. And then if I want to make my script more robust over time and say, okay, this error case, I should really have proper error handling for it; this script that I'm running fails on Sundays because this thing happens, and I should really clean that up and add proper error handling. You can do that. Or if you want to present a nicer error message in one case, instead of just saying I got this HTTP error, you could give a more custom error message. You can do that. Right.
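As a sketch of those two styles, assuming elm-pages v3's `BackendTask.File` API (the exact function names are as best I recall them, and the file name and fallback value are made up):

```elm
-- Sketch: the quick-and-dirty style versus the recovering style.

import BackendTask exposing (BackendTask)
import BackendTask.File
import FatalError exposing (FatalError)


-- Quick and dirty: let any failure become a fatal error, with the
-- framework's default error message.
quickAndDirty : BackendTask FatalError String
quickAndDirty =
    BackendTask.File.rawFile "notes.txt"
        |> BackendTask.allowFatal


-- More robust: recover from the error instead, substituting a
-- fallback value (or you could fail with your own custom error).
withRecovery : BackendTask FatalError String
withRecovery =
    BackendTask.File.rawFile "notes.txt"
        |> BackendTask.onError
            (\_ -> BackendTask.succeed "(no notes yet)")
```

Either way, the types tell you exactly where a failure was possible, which is the explicitness being described.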
So you could, instead of showing the default can't-read-file error message that the core elm-pages APIs give you when you hit that fatal error, turn that into your own custom error with nice feedback, whereas doing that in Node.js is just going to be a lot harder. So the goal of this design is to give you a way to be
productive and build things up with minimal boilerplate. You write your script, hello.elm, you expose run, whose type is Script, you define a BackendTask, and if you want a quick and dirty script, just the happy path, you allowFatal. But as you want to deal with more error cases in a graceful way, it gives you the tools to do that and to really maintain it. So it's trying to strike a balance between convenience and maintainability. So I don't know, would
you use Node.js in some cases instead of using an Elm pages script. I'm sure there are cases
for that, but I think if I have some little scripts, like for helping with the Elm Radio publishing process, I want that in an elm-pages script, because I want that in Elm. Yeah, that makes sense. I mean, you could just start writing a script in Node.js, because you start small, you do one thing because it's a prototype, and then, well, you need one additional thing, and then another additional thing, and two, three, seven additional things, and... Right. And at some point you think you
should rewrite this in another language. Yeah. Let's rewrite this in bash. This makes a lot
more sense. Perfect. Yeah. I would say, for me, when I hit dealing with JSON data in Node.js, that's when I really want to just use Elm for that. And I know there are tools like Zod to help you do it in a more Elm way, where you're writing things in the style of a decoder. But I don't know, for me, I'm going to tend to reach for
Elm to do that type of task. And as you say, you can start something in Node.js: you can create custom BackendTasks, so you can write whole chunks of Node.js code in your custom-backend-task TS file and then just execute that as a BackendTask. Yeah. So you can easily migrate from one to the other, is what you're saying. Yeah, exactly. So the custom BackendTasks, the way you define them: you write your custom-backend-task TS file, or JS, whatever you prefer. It transpiles it using esbuild, and you export async functions, and then you do BackendTask.Custom.run. You give it the name of the function that you exported from that TypeScript file, you encode some JSON data to pass in, you give it a JSON decoder, and then you've got a BackendTask.
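A sketch of that shape, assuming the elm-pages v3 `BackendTask.Custom` module (the `shout` function is hypothetical, and exact signatures may differ):

```elm
-- Sketch: calling an exported async function from custom-backend-task.ts
-- as a BackendTask. On the JavaScript side, the hypothetical function
-- might be:
--
--   export async function shout(input) { return input.toUpperCase(); }

import BackendTask exposing (BackendTask)
import BackendTask.Custom
import FatalError exposing (FatalError)
import Json.Decode as Decode
import Json.Encode as Encode


shout : String -> BackendTask FatalError String
shout input =
    BackendTask.Custom.run "shout"
        -- JSON-encode the value to pass to the async function,
        (Encode.string input)
        -- and decode whatever JSON it returns.
        Decode.string
        |> BackendTask.allowFatal
```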
And does elm-pages make sure that that port exists both in JavaScript and in Elm before you run it? So it doesn't need to make sure a port exists in Elm, because it's not actually defining a port for each of those, it's just calling your async function. But it does make sure that your custom-backend-task TypeScript file compiles. If there is an error in the file, you can recover from that as the recoverable error type for that BackendTask, or if you allowFatal, it'll print it out in a nicely formatted way. Yeah. If you do not have an exported function of the name that you're trying to call, it gives you that as part of the structured error type. And if you export something but it's not a function, it even tells you about that. So all of those possible error variants will be automatically printed for you in a nice format if you allowFatal, and if you want to recover from them, you can do that too; it even has a custom type with all those possible failure cases for you. Yeah. So you can pattern match on it and print out a nice error message or something. Exactly. And also, if you throw an exception in your custom backend task, if you throw JSON data, it will give you that as the error type. OK. So now, another topic: I do find it a little bit weird that
to run a script in Elm, which I would love to do, and I don't mind necessarily writing a scripts folder with an elm.json file and all those boilerplate things, but do I really have to pull in all of elm-pages? Right. Well, I mean, how do I explain it to my co-worker? Like, oh yeah, of course, use elm-pages, the name makes a lot of sense. Why not elm-scripts or... Right. No, I mean, that's fair. So you can create a script folder in any project; it doesn't need any of the elm-pages boilerplate. And could it someday make sense to have maybe a slimmed-down version of the NPM package with a different name? Sure. But right now that's not a priority. Right now it is a tool that's basically Rails generate for elm-pages. So it's a tool for helping elm-pages users be more productive. And it happens to be usable outside of an elm-pages project.
But yeah, it's definitely a little funky. The thing is, the concept of a BackendTask is so tied to elm-pages right now. To the elm-pages implementation, you mean? Because, as you say, it doesn't have anything to do with elm-pages necessarily. It just happens to be code that is in that project, for good reasons, the use case of elm-pages, but it doesn't have to be. Right. Yeah. The problem is, I've had this also for elm-review: where do you draw the line? Does it make sense to have the scripts part in a separate CLI, in a separate Elm package called elm-scripts or whatever, which you would then use in elm-pages? But that adds a lot of complexity about how you make sure that those are in sync, and how you handle some of the underlying things that have to be written in JavaScript. So yeah, it's a bit annoying. Yeah, it feels a little funky, but the fact that you're calling elm-pages run in a project that is not an elm-pages project is the main problem. And in that case, make an alias to solve that problem. But in the future
it definitely could be reasonable to have something called elm-engine or elm-backend-task or elm-backend or something, and have a package for that: have FatalError and BackendTask and the things related to that concept exist there, in that separate thing. And at that point it could actually be cool, because potentially I could make it a standalone thing for resolving a BackendTask, where the code to take something of that BackendTask type and execute it and then give you the resolved data could be split off into something, and then elm-review, or whichever tool, could let you use a BackendTask in some places. So I mean, I am interested in being able to access arbitrary files. Right. Exactly. Maybe not HTTP, but I could make that limitation. So yeah, that could be interesting. It probably wouldn't work that way, but maybe under the hood. Right. So yeah, it's definitely something that
could happen in the future. For now, I'm really keen to see what people build with it and go from there. But yeah, you could definitely imagine a possible future where it's designed to fit into more places, and I would love to see people using it for more types of tasks. Yeah. If you split it off, then the only thing that you gain is ergonomics,
I'm guessing, because it's not going to be necessarily faster. Definitely one use case I see for these scripts is if you want to work with existing data that you have in your elm-pages projects. Right. For instance, it's used on the Elm Radio website to fetch episode data. Right. Well, now if you want to generate something, say a file containing the list of episodes, well, you just reuse those same BackendTasks. So that is really nice, I think. Exactly. Yeah. Like for generating our transcripts: there is a set of episodes that don't have transcripts yet. That could just be an elm-pages script, because right now there's a BackendTask that goes and looks at the file system and decodes a bunch of frontmatter from files, all these things that you can do with elm-pages BackendTasks, and it figures out the list of episodes which exist but don't yet have transcript data. So we could tie that into an elm-pages script: actually just run the script, and it goes and generates the transcripts that you need and moves the files to the appropriate locations and all that. So
you mentioned before that this is mostly used for running scripts on your own computer, right? I know that elm-pages also has support for serverless and all those kinds of things that, I have to admit, I don't understand too much. But would this be usable for serverless things as well, or would that be a different part of elm-pages, in which case we will talk about
it in a later episode. Yeah. So, yes and no. So it wouldn't be a script, but BackendTasks can be resolved in serverless functions or on a server, and that's what server-rendered routes are in elm-pages v3. It uses BackendTasks, you can do the same types of things. But for scripts, there's no reason why you wouldn't use an elm-pages script in your CI. Like, you should absolutely use it outside of your local machine. And again, it's designed to be relatively easy to write a quick and dirty script that only handles the happy path, and then mature into a script with nice error handling and be a really robust script. So I think it's a great tool for writing team scripts
and maintaining them and having them on your CI and making them really robust over time.
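For concreteness, the quick-and-dirty starting point being described might look like this minimal script, assuming the elm-pages v3 `Pages.Script` API (module and function names are as best I recall them, and may differ slightly):

```elm
module HelloWorld exposing (run)

-- A minimal elm-pages script: expose `run` with the type `Script`,
-- and run it with something like:
--
--   elm-pages run script/src/HelloWorld.elm

import Pages.Script as Script exposing (Script)


run : Script
run =
    -- No CLI options; just a BackendTask to execute.
    Script.withoutCliOptions
        (Script.log "Hello, World!")
```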
So one thing that we haven't touched on yet that I want to make sure we mention is CLI options. So elm-pages scripts have the ability to include a CLI options parser. For anyone who hasn't heard the term CLI options, it's just the term for, you know, running elm-review --fix-all; that would be a command line option, specifically a keyword option... or is it? Sorry, that one is called a flag. Yeah, what was I thinking. But yeah, all the things that you can provide are options, I'm guessing, and the things that start with a dash or dash-dash are flags, is that it? So the ones that only have a key but not a value are called flags; the ones that have a key and a value are keyword options. So I looked through a lot of different names for these terms, and based on common conventions, the ones that were widely used and seemed the most intuitive, I came up with labels. We'll link to my elm-cli-options-parser package; this is actually what elm-pages scripts uses to parse command line options. There's a little graphic in there with annotations of what these parts of a command line call are. But yeah, so a flag does not have a value, like if you write log --stat. Yeah, it's a Boolean in a way. Exactly, exactly.
It's going to give you a Boolean. And then you have keyword ones; you can mix up the order of those, it's order independent. You have positional arguments, and you can have optional positional arguments. So elm-cli-options-parser is an Elm package that I built. I've used it for years in elm-graphql, and it turns a command line invocation into structured data, or an error message that tells you what went wrong and why the command was not valid, along with the help options. So in elm-pages scripts, if you want to, you can accept command line arguments. In our hello world we said Script.withoutCliOptions, and that just takes a BackendTask, and that's it: Script.withoutCliOptions, Script.log hello. If you want, you can accept CLI options. That would be Script.withCliOptions. Then you give it your CLI options parser, and you receive that parsed data and return a BackendTask. So you could, based on a flag, do one type of BackendTask or another.
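A sketch of that shape, assuming elm-pages v3's `Script.withCliOptions` together with dillonkearns/elm-cli-options-parser (the exact builder function names may differ, and the `--name`/`--shout` options are made up):

```elm
module Greet exposing (run)

-- Sketch: a script taking a required --name keyword option and an
-- optional --shout flag, e.g.
--
--   elm-pages run script/src/Greet.elm --name World --shout

import Cli.Option as Option
import Cli.OptionsParser as OptionsParser
import Cli.Program as Program
import Pages.Script as Script exposing (Script)


type alias CliOptions =
    { name : String
    , shout : Bool
    }


program : Program.Config CliOptions
program =
    Program.config
        |> Program.add
            (OptionsParser.build CliOptions
                |> OptionsParser.with (Option.requiredKeywordArg "name")
                |> OptionsParser.with (Option.flag "shout")
            )


run : Script
run =
    Script.withCliOptions program
        (\{ name, shout } ->
            -- Choose a BackendTask based on the parsed, typed options.
            Script.log
                (if shout then
                    "HELLO, " ++ String.toUpper name ++ "!"

                 else
                    "Hello, " ++ name
                )
        )
```

Invalid invocations never reach your code; the parser turns them into an error message and usage help instead.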
Yeah, that makes a lot of sense. Yeah. So, essentially, if you want to do a simple scripting workflow, in Bash you can just pull off positional arguments; in Node.js you read a bunch of Stack Overflow questions until you figure out the right incantation, and which array index the actual user arguments start at, and where to get those. Yeah, you mean until you learn which command line tool you have to use, like do you use commander, or minimist, or no, not that one because it's deprecated, or that other one has this problem with duplicate flags... Exactly, that's stage two. But stage one is just like, oh wait, index zero of process.argv or whatever is the command that was called, and then the second one is whatever. And so after you finally figure that out, and you pull a single argument because that's all you need, then you realize, oh, I actually need to parse different types of options, and then you go through and figure out which of the NPM packages is cool for that now. And of course, Elm is really good at turning unstructured data into structured data. Parse, don't validate is what makes Elm awesome to me, I think, among other things. And it really shines there, whereas if you're using minimist or commander or whatever, it's just not as nice to massage these things into nicely structured data. So I find that's a really nice workflow, because in an elm-pages script it comes built in with this tool. You just define your command line options parser, and you know the data you're going to end up with; it's all wired in for you. So it has a baked-in opinion about that. So again, that's the philosophy: trying to remove friction as much as possible while still giving you tools for doing things in a powerful but safe way. And I think this fits in with that, where, I don't know, I feel like it's prohibitively expensive to actually figure out how to build command line options parsing for a quick and dirty script in a lot of cases. But when I'm working with this, I don't feel that. I feel like I should ask, because there are alternatives to running
scripts in Elm. I think the best known one is elm-posix. There's also elm-script, which is a name we've used unknowingly so far; at least I did. So elm-script from Ian Mackenzie, and elm-posix from Albert Dahlin. Have you used those for inspiration? Have you seen limitations of those? Or did it just make sense for elm-pages, and this is an entirely novel approach and API?
Right. Yeah, I've been aware of those tools, but yeah, as you say, it's more the latter: it sort of emerged from wanting to be able to use BackendTasks in different places. And so, rather than looking at what's out there, how would I do it differently, do I like the way it's done, and then designing based on that, it was more just: I want to be able to use BackendTasks, what would that look like? But that said, comparing them: elm-posix, for example, has this IO monad concept, where it makes the tradeoff of having a single type variable for the resulting data that you get, meaning that errors are not represented in the type, which, as we talked about, is a tradeoff of convenience versus explicitness of possible failures. So it chooses convenience, which is a totally reasonable tradeoff for a command line tool. And the elm-posix standard API has a lot more functions in the toolkit designed for helping you do sort of scripty tasks, like reading the flags for a file, making things writable, and things like that. So it's certainly a little bit confusing, but that's not the main purpose of elm-pages scripts. The main purpose of elm-pages scripts, again, is trying to be a Rails-generator-type thing, trying to be a toolkit for helping to manage your project, again, like helping the publishing process for elmradio.com. That's the type of thing it's designed for. You can do whatever you want with custom BackendTasks, but it wouldn't fit well in elm-pages to have a large API for making directories and making files executable and things like that. So that's not what it focuses on.
I don't know, I think it could make sense, at least creating directories. Yeah. If you say this is not what elm-pages scripts is meant for, then people are afraid to use it, in the sense of: oh well, if this is not what it was meant for, then I might use it in a way that was unexpected, and then Dillon is going to pull the plug and remove those features from me. I mean, we've seen this in Elm land, so.
I don't look at it quite like that. To me it's more about what exists in the standard API, because a BackendTask gives you a way to define a custom BackendTask, which is just JSON in, JSON out. You can build anything with that. The difference is just that the standard library in elm-pages does not have a lot of functions for that built in. So maybe in the future that could be an argument for something like you were describing: pulling out a separate thing, and maybe having an extended standard library. It's a difficult challenge, how you package together Elm code and this backend code for doing these Node.js things. But potentially you could have the ability to do these things sort of built in somewhere, but then not expose that set of BackendTask functions by default. There are a number of ways I could imagine that going, but.
So what I'm hearing is: pinky promise, I won't remove things. I mean, a BackendTask is a general purpose tool, it is not very opinionated. It's like, Elm removed the ability to do user-defined custom operators, but ports are there. It's not like, oh, are ports going to stop letting me do whatever; no, a port is a port. It's a general purpose language feature that's core to the design, and it's not going to go away. It's the same with a BackendTask: that's just a core concept in elm-pages, and that's not going to go away, and that's not going to change. You can define your own BackendTasks. Right. Yeah. I mostly wanted people to know what they can rely on, without being afraid of things getting removed. Right. No, it's a great point to set expectations there, and again, the expectation is: it's a totally general purpose building block, and you can run whatever you need to in your BackendTask. But you might not expect the core standard library to expose functions for doing a wide variety of scripting tasks, because that's not the main goal. So that's where you write it yourself, then. Exactly. And that's not going to change. Yeah. All right. OK. Well, we are
at the end of the script, if I may make one final pun. Where can people try this out? Because elm-pages v3 has not been released, but you can already try it. So how can they try it out? How can they help? And what are you looking for? Yeah. So I will link to a starter repo: the elm-pages starter repo for v3, as well as a minimal boilerplate branch on that repo that gives you the minimum setup for an elm-pages script and also has some information about elm-pages scripts and how to run them. I would love to hear about what people do with them. I think it's a pretty general tool, and I think I'll be surprised by some of the use cases people find for it. Other than that, yeah, the elm-pages v3 docs and the elm-pages channel on Slack are great places to ask questions. Yeah. So people will not use elm-pages, they will use elm-pages v3 alpha, was that correct? Yeah, there's a package, elm-pages-v3-beta; in hindsight I probably should have called it something like elm-pages-pre-release. But hopefully it won't be too much longer before there's a stable release, so hopefully that instruction will be irrelevant soon. But definitely keep an eye on the docs; if it says deprecated, this is now a stable release, then keep an eye out for that. I did want to mention one
more quick thing, which is: I think one of the really exciting things that comes from designing these things as data, BackendTasks are just a type of data that you pass to a specific place, a CLI options parser is just a piece of data, one interesting thing that comes from that is, because it's data, you could turn the validator for CLI options into a web interface that presents input fields for all the different flags. So this is one thing on my mind that I think could be a cool project: to have, on the elm-pages dev server, or maybe some separate command, whatever it may be, some way to pop up a web page where you can type in your command and get the validation messages, because elm-cli-options-parser lets you define validations for all of the CLI flags, and it enumerates all the possible ways you could run the CLI: all the subcommands, all of the optional keyword arguments, all of the required keyword arguments. So you could have a cool set of dropdowns that tell you exactly what's required and what's optional, and give you validation messages in real time as you type, so you can see, before you run it, what error message you're going to get, and whether the CLI option parsing will succeed, and then you can just hit execute when you're done, and that running server can actually execute the command. So that is one thing where, again, using these tools that are built around pure functions and data types that describe these things opens up some really cool stuff.
Absolutely. Yeah.
Another thing on my radar for the future: I think it would be really cool to have a command for bundling a script, so you could write a script, run this bundle command, and get a little optimized script that includes all the command line parsing and everything, just a single self-contained Node.js file, and then you could publish that as an NPM package, or include it in your path somewhere, in your bin folder, and run it on your system. So yeah, lots of fun things; this could take different directions in the future.
Yeah I'm excited to hear what people use it for as well.
Could be fun to use it inside of elm-review, but that would be an additional dependency that I don't think people want. Right. Which is why maybe bundling it could become interesting in some use cases. But that's fair.
Yeah. Well, stay tuned, and Jeroen, until next time.
Until next time.