GitHub Actions

We discuss best practices for setting up GitHub Actions so everyone has the same source of truth for checking your Elm code and deploying to production.
April 12, 2021


Hello, Jeroen.
Hello, Dillon.
And what are we talking about today?
Today we're talking about GitHub Actions
and maybe also just CI in general.
Yeah, I'm hoping that I can get into some controversial opinions
and rile you up so we can hear about your thoughts
about this topic.
All right, all right.
I'm looking forward to it.
Let's see if I can do that.
They were running Elm review locally on their machine,
but they weren't running it anywhere else.
It's terrible.
Yeah, I already spoke about that during,
like every time I talk about Elm review,
I just run it in CI.
If nowhere else, in CI.
Yeah, CI.
So like why is that important?
Because it's a team.
So if you run Elm review on your machine, that's great.
But if another teammate doesn't run it on his machine by habit,
then there might be problems.
But as long as CI runs it,
then everyone will agree at some point,
as long as they don't merge in pull requests
that failed tests and failed CI.
Right, right.
Yeah, it's almost like a way of enforcing it.
It makes me think of this recent news story
where I don't remember what the company was,
but some company was blaming some like large security incident
on an intern using an insecure password.
And it was like, come on,
you're blaming this giant security vulnerability on an intern.
Obviously, the problem is that the intern was able
to use an insecure password
and an insecure password was able to lead
to the vulnerability, right?
So it's not the intern's fault.
Just like if somebody commits some code
and they don't use Elm format,
it's not their fault for messing up the code base
with non Elm formatted code or non Elm reviewed code.
It's the fault of the system for allowing it
and you need to address the systemic problem.
Yeah, I heard about the intern problem,
I heard about people blaming things on the intern
and like, you shouldn't blame things on the intern.
That's stupid.
And like, yeah, but why is everyone talking about this?
I didn't get that.
That's what happens when you read Twitter,
but you get the last things first.
Then you have to scroll down a lot.
Isn't that all that Twitter is?
Yeah, I know.
But if you don't scroll enough,
which I usually unfortunately do,
I didn't scroll enough that day, I guess.
I think, yeah, the bottom line is,
if you blame the immediate cause, not the problem,
then it means you're not looking at the root cause.
And I think that this is like a cultural value
in the Elm community and in the Elm language itself.
Sometimes people talk about like,
what, you don't trust developers to write code
with side effects?
I'm glad that Elm has these built in constraints
because it allows me to trust myself more
because I know I'm working within those constraints.
And so CI is an application of that same principle.
And it allows you to put in constraints and guarantees
because your central source of truth
is actually enforcing those things.
And if you're finding that issues are coming up,
just like instead of blaming the intern and saying,
have you heard of the five whys?
Go ahead, repeat them.
Well, it's actually not a specific set of whys,
but the idea of the five whys is that you start with,
it's also sometimes called root cause analysis.
So the idea is you start with the problem,
which is we had a security exploit.
So why did we have that security exploit?
Why did we have the security exploit?
And if you only do one why,
then you end up with,
why did we have a security exploit?
Because the intern used a weak password.
And then you stop there and blame the intern
and call it a day.
Yeah, perfect.
But that's why you do the five whys.
Five whys is just an arbitrary number,
but the point is you keep going until you can't.
Why was the intern able to use a weak password?
And why was a weak password
able to cause a security exploit?
So I think that that's an important process for CI
to ask yourself,
what do we want to make impossible in our code?
And when you run into an issue,
just like in your Elm code,
you run into a problem and you say,
how could the type system have prevented this bug?
That's like a standard part of my workflow
working with Elm code.
If I find a bug,
then my first question is,
could I fix this bug
through better types and constraints?
Yeah, or better abstractions
or whatever technique.
Right, right, exactly.
Maybe an opaque type would have fixed that issue
or some sort of constraint.
So with CI, it's the same thing.
It's those root causes.
So what are GitHub Actions?
So from my understanding,
GitHub Actions is the CI that is rolled out by GitHub.
So I used to use Travis before that.
And GitHub Actions is just another one
that is built into the GitHub platform
and that I find pretty easy to use
for GitHub Actions configurations files.
And that works well.
And we will go into how well that works
and how that works.
Yeah, if you like YAML,
you will love GitHub Actions.
Yeah, yeah.
But YAML aside,
GitHub Actions are great
and it sort of lets you tap into the raw compute power
that GitHub now gives you.
I mean, the free tier is limitless.
At least I haven't run up against the limits of it
and I use them a lot.
I've never looked at the pricing actually.
Is it not limitless?
I think you get like 10,000 minutes a month
for free or something.
Yeah, you get a lot.
How many runs is that per month?
Oh, you get 2,000 per month.
I think you get 10,000 per month maybe
if you have a pro plan.
So I guess I just haven't run...
Oh, you get 3,000 per month with pro.
Well, I haven't run up against the limits of it.
I guess my CI jobs are running fast enough
and I don't have too many of them to cause issues.
Well, for my open source projects,
they're very fast.
I guess you can easily run into those limits
when you run full test suites for giant projects,
work projects usually.
But for open source projects,
you don't have to think about the limits.
It's a no brainer.
And projects like Travis CI
are shutting down their operations, I think,
because they can't compete with that anymore.
If the only thing you're offering is raw compute power,
you can't compete with GitHub Actions.
I think Travis stopped proposing a free tier, actually.
That would make sense.
Yeah, I've noticed that a lot of the Travis CI jobs
in open source Elm packages and tools have started failing.
So I had to migrate some tools over to use GitHub Actions.
But I've been quite happy with the way that GitHub Actions
is set up and it's been a very good experience.
It's very powerful, it's fast.
It's a great way to run CI.
Do we have to do any definitions before that?
Good question.
We defined CI and some of the reasons behind it.
We defined GitHub Actions.
Are we missing anything?
What does CI stand for?
Do you know the etymology of CI?
Continuous integration?
It stands for continuous integration.
And I believe the term was first coined
and actually, this is a bit of a tangent,
but continuous integration was initially a concept
from extreme programming where you're actually
not using long lived branches.
So you don't have your feature branch
that is drifting away from the rest of the code
that's getting merged in for a month
and then the feature is done and then it gets merged in
and you're sort of rapidly iterating with
trunk based development or short lived branches
that don't live for longer than hours or a day.
But the basic idea is it's a feedback mechanism.
And this is one of the other principles
of continuous integration that sometimes gets overlooked
is it is about that source of truth
that you enforce your constraints
and also are speeding up your feedback loop.
So instead of finding out about code not being formatted
or some dead code sitting around a month later
when you next get to it,
you have to address those things
and build in quality as you work.
So it shortens the feedback loop.
That is my understanding from Agile in general.
You can say whatever it is,
but I feel like it's just short feedback loops
of product design and product development.
Right. People get fixated on standup meetings
and estimation or not doing estimation or doing sprints
or not doing sprints and poll based and Kanban.
But it's actually about feedback, about the dailies.
So you do the daily, then you do feedback on the daily,
then you do feedback on the feedback of the daily.
That's how it works, right?
More meetings. More meetings is the answer.
Answer that question.
And then feedback is on the meetings.
The idea of extreme programming is really cool actually.
It's this idea that Kent Beck described
having a knob on a guitar amp or a stereo or something.
And the idea is, you just take all the knobs
and you turn them up to 11.
His idea of you take the things that are working,
short feedback loops and collaborating with people
and those sorts of ideas.
So pair programming is just taking this idea
of collaborating with people and you say,
oh, we like that.
Well, what if we did more of it?
And what if we did the most we could of that?
And like, oh, feedback loops are nice.
Well, what if every piece of code we wrote
started with a failing test?
Every time you checked in code,
it gave you feedback to make sure everything is good.
So mob programming is like pair programming
but dialed from 2 to 11.
So you have 11 people.
Yeah, it's taking something that was dialed up to 10
to get pair programming,
and then dialing that up to 11.
All right.
So I think that's a good definition of CI.
Those are core definitions here.
I think then you also have CD, continuous development,
which is just continuation of that idea, I guess.
Well, there are actually two.
CD is a bit ambiguous.
There's continuous deployment
and there's continuous delivery
and there are all sorts of nuances with those terms.
Oh, I would have expected them to be the same thing.
All right.
There are some subtle distinctions,
but I would say that you're not delaying deployments
until your release is ready.
You're getting code out there
and using techniques like feature flags
to continuously have the latest code out there.
And it's just another turning the dial up to 11
to get better feedback
because you're not delaying
when you find out if there's an issue with your code.
If you deploy a year's worth of code at once,
you're taking on a lot of risk,
because you can end up with very large problems
and problems that you're building on top of.
Whereas if you deploy every 30 minutes,
then you may occasionally deliver a small issue,
but it's a tiny issue compared to that yearly release.
And also, it forces you to build in feedback mechanisms
where you're finding out how you're doing,
with automated tests and tools like that too,
so that you can safely release that frequently
without long manual testing cycles and things like that.
So it gets you to automate quality more.
Yeah, and then you usually also try to shorten
the execution times of your tests and deployments.
If you only deploy once a year,
then I guess it would be fine
to take four hours, maybe three days to prepare.
But if you try to deploy on every commit
on the main branch, that can't take three days.
That has to take five minutes at most.
Yeah, exactly.
So when you do that, you put in the effort
into making a good deployment experience
and a good experience in general, I guess, too.
Yeah, exactly. It's sort of a forcing function.
Do it more often.
Because if you have painful releases,
then you do your yearly releases.
It's full of manual scripts in Google Docs
that you're running through and checklists with team members.
And things fail, and you have to address them.
And you have to tweak the manual process.
Some things aren't documented in the scripts.
But you're doing it once a year.
You tend to want to do it less frequently
when something is painful.
But actually, if you want to address the problem,
doing it more often is going to be a forcing function
that forces you to address those problems.
Yeah, do it more often
so that you will find a way not to do it anymore.
Right, yes.
In that way, in that painful way.
And that's the definition of CD,
whatever it might be.
So shall we talk a little bit about some of the specific syntax
that GitHub Actions give you?
Yeah, sure.
I think some of the origins of GitHub Actions
were kind of having specific hooks for GitHub events.
So if somebody comments on an issue,
or if somebody opens an issue,
then you could have a bot that says,
oh, thanks for opening an issue.
Here are some guidelines.
The functionality still exists.
So you can have event triggers
where you say,
I want to do this on new issue.
Another example would just be run tests
whenever you get a pull request,
something like that.
Yeah, exactly.
And I think that most people
who are familiar with GitHub Actions
are probably going to be most familiar
with just that standard on pull request
on new commit on the main branch.
That is all I know, actually.
And you can get a lot of mileage with just that.
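A workflow using those standard triggers might look something like the following sketch; the action versions and the exact steps are assumptions, on the premise that the Elm tools are installed as npm devDependencies:

```yaml
# .github/workflows/ci.yml
name: CI

# Run on pull requests and on new commits to the main branch
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      # Assumes elm, elm-test, elm-format and elm-review are devDependencies
      - run: npm ci
      - run: npx elm-format --validate src/
      - run: npx elm-review
      - run: npx elm-test
```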
You can also trigger jobs on a schedule.
It's on schedule,
and it just uses cron syntax.
And by the way,
if you're using cron syntax,
there's a site, I think it's
And once you use it,
whenever you're wondering
how to run something every morning at a given time,
instead of finding Stack Overflow answers,
you'll just open that up.
for those who are looking it up right now.
Oh, okay.
Yeah, makes me think of
Also very nice.
So you can run things on a schedule.
Netlify doesn't have that functionality at the moment,
so our Elm Radio episodes are released
with an on schedule trigger from a GitHub Action.
So we have a GitHub action
that calls the API from Netlify to say,
please deploy.
Yeah, it's actually just a web hook.
You just hit a URL,
and it causes a deployment.
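The scheduled deploy described here could be sketched like this; the cron expression and the secret name are placeholders, and a Netlify build hook is just a URL you POST to:

```yaml
# .github/workflows/deploy.yml
name: Scheduled deploy

on:
  schedule:
    # cron fields: minute hour day-of-month month day-of-week (times are UTC)
    - cron: '0 8 * * 2'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Hitting the build hook webhook triggers a new Netlify deploy
      - run: curl -X POST "https://api.netlify.com/build_hooks/${{ secrets.NETLIFY_BUILD_HOOK_ID }}"
```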
And there's also a lesser known feature in GitHub actions
where you can use something called Workflow Dispatch.
It's in the show notes.
And that allows you to click a button in the UI
for GitHub actions and trigger something,
and you can actually use different inputs.
So you can have different input fields
and use those in your GitHub action job.
So I could have a button that says,
run the very long end to end tests right now.
Okay, cool.
Or you could have,
because this sounds kind of cool,
I could imagine having a button that says,
hey, I'm ready for a new release of this Elm package.
And you click a button,
and you could have it actually run Elm bump.
Oh, yeah, I like that.
That could be kind of cool.
So I mean, it's one of those things
where GitHub actions gives you
a lot of these building blocks,
and you sort of get creative
with the building blocks.
But the building blocks,
I mean, just being able to take user input
and run a workflow with those input fields,
there's so much you can do with that.
You can get really creative.
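A workflow_dispatch trigger with an input field might be sketched like this; the input name and the script it feeds are hypothetical:

```yaml
name: Manual end-to-end tests

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to run the end-to-end tests against'
        required: true
        default: 'staging'

jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Inputs are available through the github.event.inputs context
      - run: ./scripts/run-e2e.sh "${{ github.event.inputs.environment }}"
```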
Maybe I shouldn't talk about this right now,
but if we have the button to bump the version
and then publish,
could anyone do that on my repo?
Or only me and the maintainers?
Or can I say, who has access to that button?
I have a great discourse thread
about this on the GitHub discourse.
So you can enforce that.
You can check who triggered the workflow
using that dispatch action.
And what if it's the intern?
Right, if it's the intern,
then don't do it.
No, make your CI process robust enough
to be able to do that.
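One way to build that kind of permission in is to gate the job on the github.actor context, which holds the username that triggered the workflow; the username here is a placeholder:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    # Only proceed when a specific maintainer clicked the button
    if: github.actor == 'some-maintainer'
    steps:
      - run: echo "Triggered by ${{ github.actor }}"
```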
So this gets at one of the core concepts,
I think, which is that
just like you can have your server
enforce certain things.
So you have to validate something
before you write it to the database.
You have to authenticate somebody
before you give them some information
or let them perform some task.
It's the same with a CI.
So you can build in
some of these permissions into the CI.
So elm-publish-action automatically runs elm publish for you,
because there's this sort of dance
that you do when you publish an Elm package.
You have to create a tag,
and you want to do that after your CI passes
because you want elm-review to check out fine,
and you want elm-test and elm-format to say
everything's good.
You wouldn't want to run elm publish
and push and then get a red CI back.
That would be a bummer.
So this is the cool thing,
is that can anybody do an Elm Publish?
Well, it's built into the CI,
so when your main branch gets a commit
that has a bumped version in your elm.json,
it automatically publishes.
So anybody who has permission
to change the version in your elm.json
for a package has permission to publish,
and if somebody has write access,
then they have publish access.
I imagine that in some companies,
like very large companies,
a lot of people can merge into master,
but then a more select group can publish, can release.
So maybe it makes sense for smaller teams
or smaller organizations,
but not for larger ones.
I'm a little hesitant
with giving people special access.
I'm more on making it safe for anybody to do so,
but that's just my inclination.
But people can figure out
whatever sort of permissions schema
that they want to there.
But this is one of the cool things in CI
is that you can build those things into the CI.
So if somebody makes a pull request
that bumps the Elm version,
when you merge that,
then a new version goes out.
But if they don't have permission to merge,
then they can't publish,
so merge access guards that functionality,
and so you have these sort of built in permissions
and things that you can enforce
through your GitHub Action.
So that's kind of a cool thing.
I think that in your GitHub action,
because it's under your name,
you have a specific guard
that this only runs on the main branch, right?
It only runs on the default branch,
whether that's main or master,
or whatever your default branch
is configured to be.
It also only runs after version 1.0.0.
So it requires you to manually say,
hey, I'm ready to make this package public,
because if you set up that action
and it accidentally published a 1.0.0 for you,
that would be unfortunate.
So it will never do that.
But that could be interesting.
You could have a button that says,
do the initial release,
and then remove the button from the GitHub actions,
and then let elm-publish-action
handle the subsequent releases.
The button is not something you can configure
based on a condition, I think.
The button is something you have to build into your...
No, but you can make the action
change the GitHub workflow configuration.
I don't know.
That's starting to get...
If something gets too sophisticated
and clever like that,
I start to worry that...
I start to trust it less.
If you have a new package command
that creates a new project
and adds GitHub action CI,
well, maybe that could be interesting
for those people,
even though it will feel like magic,
like, hey, I had a button.
Where did it go?
But yeah, I agree that it's pushing
the envelope a little bit too far.
It seems like performing side effects
in a pure function there.
I really like the idea, though.
It's fun.
Maybe for Halloween you can enable that,
because it's a little spooky.
We should mention also that
this Elm publish action that we're talking about
is sort of another special feature
that GitHub actions has,
which is that you can package up JavaScript code,
and the interface to it is that
you provide inputs that you pass in through your YAML.
So you wire in this action,
you say it uses
dillonkearns/elm-publish-action
and then at a specific version,
and then you give it,
I think it's a with,
with a list of the inputs,
and you have to pass in things like...
So you can pull in secrets.
Secrets are another feature
that you use to give permission to do certain things,
like elm-publish-action needs to create a git tag.
And then it passes those things in,
and that's published in the GitHub Marketplace.
So people can use it as a GitHub Action
without having to manually npm install it
or anything like that.
You just reference it in your workflow.
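Wiring it in looks roughly like this; the input names are from memory, so check the action's README for the exact interface:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - uses: dillonkearns/elm-publish-action@v1
        with:
          # The token gives the action permission to create the git tag
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Assumes elm is installed as an npm devDependency
          path-to-elm: ./node_modules/.bin/elm
```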
Okay, does it only work for JavaScript packages?
And you can also just have a Docker image.
You can actually run things with Docker.
So you can do anything you could imagine
since you can just use Docker,
but JavaScript is the most common way I see of doing things
with sort of front end focused tools,
like if you're running prettier or something like that.
So, Phil Sparks created this tool
for running elm-review and giving you feedback,
which is pretty cool.
Yeah, so you make a pull request
and then it adds annotations
wherever elm review reports things.
So if you have something that is unused,
it adds an annotation,
so kind of like a comment,
by the elm review action bots,
or I don't know what that is called.
That's incredible.
That's huge.
And that's the kind of thing
that I love about GitHub Actions,
is it's giving you this sort of core platform.
Having elm-review rules
point out the line where there's a problem,
with a comment, is huge.
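Hooking that up is just another step in a workflow, something like the following; the action reference here is from memory, so treat the name and version as assumptions and check the GitHub Marketplace for the exact details:

```yaml
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      # Runs elm-review and annotates the pull request with any reported problems
      - uses: sparksp/elm-review-action@v1
```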
Yeah, and there's also another one
that Phil Sparks made,
which is not published
because it is more of a succession
of different actions,
which is to run elm review, fix all the issues,
and then make a pull request.
Oh, cool, cool, cool, yes.
Well, for the ones that can be fixed, obviously.
Right, the ones that can be auto fixed
and then that may result in a green build
once you merge that.
Hopefully, yeah.
That's the feeling I get for most issues,
is things are just unused
or something that is easy for
the elm-review tool to fix anyway.
Right, and then the intern
doesn't even have to install elm review.
Yeah, exactly.
Whether they install elm and elm-review, who cares?
Right, right.
If there's a compiler error,
maybe it'll use machine learning
to automatically fix the compiler error for them.
Maybe we should tell the intern
not to use Notepad to edit elm code and not to...
No, we love interns.
You're great interns.
Then we put the blame on you.
Yeah, it's a good scapegoat.
That's what we should call them, scapegoats.
Hey, we have a new scapegoat coming on this week.
Let's use the one whys to figure out the problem.
Oh, that's terrible.
So for this elm-review GitHub Action, Jeroen,
would you say that that would replace
the elm-review step in the CI?
Probably would, right?
Yes, as long as the CI stops
whenever there's an elm review error.
If it makes the build be green, then no,
that's not enough.
But I think it does, so yeah.
Yeah, so that gets into an interesting space
that I sometimes wonder about,
which is like, what's the relationship
between the checks that you have
in your local environment versus the CI checks, right?
We say that you want this source of truth
of the CI, right?
But that doesn't mean that you want
to only ever find out about errors there.
But you want to know that if for some reason
something passed through, that it would be caught there.
But you want to catch it sooner if you can.
So you want to run your unit tests
and your end to end tests and all these things in your CI,
and you want to know the source of truth,
and you don't want to go live and do a deployment
if those things are failing.
But what's the relationship between that set of checks
that you do in your CI and the set of checks
that you do locally, and how you do them?
Do you encapsulate them in the same scripts?
I noticed that when you do elm-review new-package,
which is like a little init helper
that creates a folder for you,
it uses npm-run-all, which is a little npm package.
And you basically just run that in the CI,
where the check runs the unit tests,
it runs elm-review, it runs all these checks.
Yeah, just for those who don't know npm run all,
so in npm scripts, usually people chain commands
with the and sign, I don't know, what's the name?
Double ampersand?
You can do that and so forth,
but that doesn't work on Windows,
so that's why I used npm run all
to run all of them together in sequence,
just for it to work on Windows.
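The npm-run-all setup described here might look something like this in package.json; the script names are illustrative, and run-s is npm-run-all's shorthand for running scripts in sequence:

```json
{
  "scripts": {
    "test": "run-s test:format test:review test:unit",
    "test:format": "elm-format --validate src/",
    "test:review": "elm-review",
    "test:unit": "elm-test"
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
```

npm-run-all also accepts glob-style patterns, so the top-level script could instead be run-s 'test:*'.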
Yeah, I think you can even use
a glob style syntax with npm run all.
I think you can say,
test colon star.
Yeah, you don't like the implicitness of it.
And for people who don't know npm run all,
which I think is most people,
even I don't really like it all that much:
if you use that with like test colon star,
you won't know where test colon elm-review will be used.
So I just want to be able to find it.
Right, because you want to run elm review
and test colon elm review.
Yeah, also.
If elm make fails,
you'd prefer to have elm make's error message
take precedence and then fail the rest of the build.
Yeah, it makes sense.
But as to what do I run in CI,
I really like having the CI do the same thing
as what I would do locally.
Or, to say it another way,
whenever I push something to GitHub
and I make a pull request and then it fails,
most of the time I want it to be because I was lazy
and I forgot to run tests.
But if I ran the tests, I want them to succeed in CI.
Right, if you have a surprise in CI,
it means something went wrong.
Yeah, the obvious exception is when for very long tests,
like end to end tests,
if they take a long time, let the CI do it.
Right, because, I mean, but sometimes that is unavoidable
and sometimes, maybe you have some smoke tests
that you need to run before something goes live to production,
and they're just going to be long running tests
and there's really nothing you can do about it.
And in cases like that,
sometimes you do have to make that trade off to say,
we know that any code that goes to production
passes these tests, but they just are expensive tests
that we're not going to run locally every time.
It's up to you to make sure that it often won't break
in end to end tests.
So if you do something wrong,
maybe add an Elm review rule that says you shouldn't do this.
That would prevent something from breaking in the end to end tests
or something.
That's obviously not always possible because end to end tests
have their own use and they're very useful.
But yeah, you want to push things to the fastest
feedback loop possible.
Right, right.
Yeah, the sooner you can find out the better.
Ideally, there are kind of like three different stages
in this actually.
We've talked about like, you know,
the CI running things before deploy.
We've talked about running these things locally,
but we haven't talked about the local iterations
and development.
So there's sort of like a step before you commit and push
where you run things locally.
It's sort of a different workflow because you're running
most of these things in watch tasks.
So you're running, you know,
elm test dash dash watch.
You're running elm review dash dash watch or dash dash fix dash all dash dash watch.
How's that for podcast material, Jeroen?
I love it.
Great radio here.
We haven't managed to go an episode without mentioning elm-review in a while.
So yeah, I mean,
the awkward thing about it is there's no way to just say like,
okay, there's no declarative way where you say,
here are all the things that I test when I run my CI job.
And now do all of those things in their watch mode.
There's no way to do that.
You just sort of like manually keep them in sync,
which, you know, as people who are obsessive about automating everything
and making everything safe, it hurts our brains,
I guess there are git commit hooks, which are kind of in between.
Yeah, but that's not the watch mode, right?
That's like,
well, if you commit often, then kind of.
Those are way too long, I think.
Like if you run your whole test suite every time you commit
and you try to commit small things as we both like to do,
then I don't think the experience is great.
I don't like git commit hooks at all.
Yeah, I don't like git commit hooks.
And I mean, the problem with git commit hooks is that they're not shared,
they're individual, and there are some tools that sort of try to help with this,
but you certainly can't treat it as an assumption that they're run,
whereas the CI you can treat as an assumption.
So they certainly are not a replacement for CI.
Yeah, I actually joined a company which was using git commit hooks pretty heavily
and they didn't enforce ESLint, a linting tool, in CI,
only during the git commit hook phase.
And well, no surprise, there were a lot of issues
and no one was noticing them because no one was looking at them.
It's a bummer because then, you know,
somebody goes and they want to enforce something with a linter or with whatever tool,
and now they're responsible for fixing all the things
before they have a clean slate.
And you know, as opposed to the person who was working on something
and could have addressed it right as they were introducing that issue,
they could just turn around and fix it.
They have the context.
Well, actually, I have to clarify something.
They were running ESLint, but only on the diff between what they did
and what was on master before.
So that's why a lot of things accumulated.
They weren't checking what was there before already.
Because if you're able to get a certain build through, then now you're good,
even if that build was potentially red, right?
They were doing that locally with git commit hooks,
but in the CI, they were not rerunning ESLint at all.
So a lot of circumstances that led to that.
Yeah, I think where I come down with this stuff is ultimately like,
as much as I would love there to be an elegant way
to just sort of keep all these things in sync,
I think you have to sort of treat it as independent, but related things
that you have to maintain independently,
but you have to try to make sure that conceptually,
you're enforcing everything very robustly at the CI level.
You're trying to get as close as possible to knowing that your CI is going to pass
when you run whatever you do pre commit,
whether it's running an NPM script or a bash script or a make file
or a pre commit hook or whatever, however you do that,
you want to get as close as possible to being confident that the CI will pass.
In some cases, not exactly,
because you may have long running CI tasks.
And then for your local watch mode, it's sort of its own thing too,
where there are certain things you want to run in a watch loop.
Maybe you don't want to run your end to end tests in a watch loop.
You probably don't,
but you probably do want to run your unit tests in a watch loop.
Maybe you do.
Even elm-review, I sometimes run it in watch mode when I'm developing,
but I'm not looking at it all that often.
I really like the experience of refactoring with Elm review,
watch fix all in the background,
because you start deleting instances where you're calling a function
and move something over to a new function you're using
or a new way you're writing some code.
And then all the old code just starts disappearing.
Beautiful, isn't it?
It's great.
You can start writing code and then gradually move things over
and then remove the old ones.
And elm-review just gives a great experience for doing that.
But you also want to know when that happens
so you can rename the new one, give it a better name.
I'll duplicate by adding just a 2.
So I have a view function and then I have a view2 function,
because I aim to replace the view function.
It's not fixed yet because you have unused code at first, right?
Well, the view function will be used.
And once I will migrate things, it will become unused.
Did it get you?
Well, isn't some of the code dead?
Oh, yeah.
You mean when you rename things?
Or if you're introducing a new function.
Yeah, I'm duplicating the code.
So once it's duplicated, it's dead,
and you can't run elm-review dash dash fix all
when you create it.
So you don't want to delete it whenever you create it.
So you sort of have to have a custom workflow.
You can't just assume that you're going to run these things.
Even if you do have that fix all enabled in your CI
where it's automatically going to fix things when you push,
it's different in the lifecycle of your local development
where you may want to hold off on some of those things
during your development cycle.
I think you have to treat them as three different things.
At least three, I think.
At least three different things.
But just having that mental model hopefully helps.
So to go back to what we should run in CI,
I usually run npm test and maybe additional things.
So we talked about the Elm review action,
which adds annotations about things that went wrong.
And I would probably run that next to it.
So I'd run Elm review twice.
Just because you want your local scripts
to mirror your CI?
And as long as they're both really fast,
it's not worth optimizing.
Just run it twice.
If you have a very long setup,
then I'm guessing the Elm build step will not be the longest one.
At my work, we have plenty of GitHub jobs,
and the Elm ones are the fastest ones.
The backend takes a lot longer.
So I couldn't imagine just having one additional job running Elm review,
which would take like 30 seconds maybe at most
with the whole setup and stuff.
So it's fine, I'd say.
I would tend to just have separate scripts for that.
But it's, yeah,
I mean, unfortunately, there's no silver bullet here.
You pick your poison a little bit.
Yeah, because in my case,
you would double some of the work,
and in your case, you could have the problem
that they're not in sync anymore.
So let's talk about some of the other cool workflows
that you can do with GitHub actions in an Elm project.
So of course, you should run your tests,
you should run the compiler,
you should run Elm format verify and Elm review.
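For reference, a minimal workflow covering those checks might look something like this sketch (the action versions, node version, and src/Main.elm entry point are assumptions, not from the episode):

```yaml
# .github/workflows/ci.yml — a sketch; versions and paths
# are assumptions, adapt them to your project.
name: CI
on: [push, pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      # Compiler, tests, formatting, and review:
      - run: npx elm make src/Main.elm --output=/dev/null
      - run: npx elm-test
      - run: npx elm-format --validate src/
      - run: npx elm-review
```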
There's a cool thing that's not directly using a GitHub action.
But I wrote a blog post a little while back
about keeping your Elm dependencies current using Dependabot.
And I think this is like maybe a lesser known feature,
but people might know Dependabot as something
that updates your NPM dependencies
with like 10 million pull requests every day.
Flagging vulnerabilities.
It's like you'll never get to zero.
It's just not going to happen.
But it also works with Elm dependencies,
which don't have 10 million
security vulnerability updates per day
that might have zero security vulnerability updates ever.
One reason might also be
because no one actually cares about Elm vulnerabilities.
I mean, there's a lot of people who,
well, I guess it's NPM who checks security issues.
But no one does that for Elm,
at least for now, until we find attack vectors.
That's true.
There could be some sitting out there
that we don't know about.
I think it would be pretty easy to make one.
Like just run one HTTP request
that does something that you don't want it to do.
Yeah, sure.
But as long as that is part of your API
and people use it, then you have a problem.
But that's a really, really rare edge case.
That doesn't really happen at all.
Don't encourage the hackers, Jeroen.
You're just trying to give them a challenge.
Yeah, so that's kind of a cool workflow.
You can actually point Dependabot to your Elm projects.
And it makes a pull request.
It shows you the relevant change log
that you're updating to.
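The Dependabot configuration for that lives in a checked-in file; a sketch along these lines (the elm ecosystem is supported, the schedule here is just an example):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "elm"  # Dependabot understands elm.json
    directory: "/"            # where your elm.json lives
    schedule:
      interval: "weekly"
```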
That's a cool workflow.
So what you mean is you could do that
with GitHub Actions also?
Well, those are the kinds of workflows
you can get into your CI with GitHub.
What are some other cool workflows?
You've got some interesting workflows
that you do for the Elm Review New Package CI.
Yeah, so in Elm Review New Package,
I'm creating boilerplates
for Elm Review rules.
So there's a few things that we can check
with Elm Test and with Elm Review,
and I run those in the default settings.
And there's also some checks that I want to run
only when people are attempting to publish.
So there's a Preview folder,
which people can use to run the Elm Review rules
before they're published.
And there's an Example folder,
which uses the latest published version.
So you can update the Example folder
with whatever is in the Preview folder.
So if you have a new rule,
then your example is now up to date.
And that is a test that I only run
right before publishing,
so right before running Elm Publish Action.
So that way, you bump your version,
you push, it runs that test,
and if you forgot to update the examples,
you have a helpful script to update the examples,
but they need to be run before publishing.
I've tried the publishing experience here and there.
It really makes a difference.
I mean, just kind of having those checks
to make sure everything is polished before it goes out,
like CI is so good for that.
And you can get so creative too.
So I think it's really good to get creative
and ask: what can I enforce using my CI?
What quality checks can I automate?
There's one thing that we may not have talked about too much.
It's like GitHub Actions works with configuration files
that are checked into your repo.
So it's in .github/workflows/whatever.yml.
And the very nice thing with GitHub Actions
is that it works with only those files,
whereas with Travis,
you also needed to go to the Travis website
and configure your repo so that the hooks worked.
Oh, yeah. That's a good point.
So if I provided a Travis setup,
I would still have to tell people,
hey, after you've used Elm Review new-package,
you should also go to Travis,
maybe create an account,
and then you're good to go.
But with GitHub Actions, I'm like,
here are the files, and you're good to go already.
You don't have to do anything more than that.
That's really the killer feature.
Yeah, it's huge.
I mean, there's just so much you can do with them.
You know what would be cool?
So you can send a tweet or call an API
or post to a webhook, right?
You can trigger a deployment.
So what if after you publish a package,
what if you set a Twitter account to automatically tweet
that you published it with the changelog, right?
So you have like a certain mindset
to just getting creative with these little automations.
I wired up, I don't know if you've seen this, Jeroen,
but on my GitHub profile,
I think that's the term they use for it.
If you go to slash Dillon Kerns,
it says, Hi, I'm Dillon,
and it's got some weekly tips that I've written
and Elm Radio podcast episodes.
So it actually gets RSS feeds from those,
and it shows the latest.
You can click in there and you can go to see
the latest Elm Radio episode.
It's a fun little automation,
and all it does is it uses this GitHub actions workflow.
It's called blog post workflow.
I'll put a link in the show notes,
but it's so easy to just wire it in.
You point it to an XML feed.
You give it the name of an HTML comment
that you put in your markup,
and then it puts in a UL with list items
between those comments in your HTML.
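As a rough sketch, wiring that up looks something like this (the action name is real, but the feed URL is a placeholder and the inputs should be checked against the action's README):

```yaml
# .github/workflows/blog-post-workflow.yml — a sketch;
# the feed URL is a placeholder, check the action's README
# for the exact inputs it supports.
name: Latest posts
on:
  schedule:
    - cron: "0 * * * *"  # refresh every hour
  workflow_dispatch:

jobs:
  update-readme:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: gautamkrishnar/blog-post-workflow@v1
        with:
          feed_list: "https://example.com/feed.xml"
          max_post_count: 5
```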
Then you just have to hope,
because you only show the latest ones,
that the latest episodes are better
than the ones that got removed before.
Well, if I wanted to get really fancy,
I pull in data from the Simplecast API,
which is what hosts Elm Radio under the hood.
I could have a special Elm Radio feed
that sorts episodes by download count,
because that's available in the Simplecast API.
I use the Simplecast API to get the data in Elm pages
to generate the feed.
I could say here are some popular Elm Radio episodes.
I wonder if our listeners can guess
which one is the most popular one.
Please tweet us about this.
My guess is the one being released on the day we're recording,
which is the one with Evan.
We'll see.
Elm UI is pretty popular too.
I'm guessing we won't edit this out.
We're rolling with it.
Sorry, Rune.
We'll put a spoiler warning.
Yeah, so there are all sorts
of little clever things you can do.
You can deploy static sites
using GitHub Pages with a GitHub action.
It's worth kind of browsing through the marketplace
for some GitHub actions.
But the really exciting thing to me is just,
I mean, number one, the raw compute power
that you have easily available.
But number two, the fact that you can piece together
these tools and encapsulate them
with these little published actions
and wire together something.
You can grab output from one action
and pass it to another action.
There's a syntax for that.
It's the kind of thing that you have to look up
every time you do it,
but it's not that hard to do.
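That output-passing syntax looks roughly like this sketch (newer runners use the GITHUB_OUTPUT file; older workflows used the ::set-output command for the same thing):

```yaml
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      # One step writes an output under its id...
      - id: build
        run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"
      # ...and a later step reads it back with the
      # steps.<id>.outputs.<name> expression syntax.
      - run: echo "Releasing ${{ steps.build.outputs.version }}"
```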
I think of it like automation.
It's just like this Rube Goldberg machine
that you create these cool things
that one thing happens
and it triggers another thing to happen
to just get creative about what you can enforce.
Run Lighthouse on every build
and then publish those stats someplace
and publish it to a spreadsheet or to Airtable
and keep track of those metrics there
and then build a little web page
that shows charts of those.
Get creative and automate things.
Do you know If This Then That?
Yeah, that's exactly that kind of thing.
I'm going through the explore page right now.
There's one that says,
when I answer a call, pause the Roomba.
Yeah, that makes sense, I guess.
If my CI fails, then pause the Roomba.
If I succeed.
Because I'm going to need to concentrate.
If I make a pull request and the test pass,
hand me a soda or something.
I don't know.
Actually, at a company I worked for
several years ago,
I built this thing.
I thought it was really fun.
Where I got a green lava lamp and a red lava lamp.
Do you see where this is going?
And I would publish our CI status
so it was accessible with some JSON file somewhere or something.
And then I would do a polling thing
or whatever.
And then this little like
Wi Fi power switch would automatically toggle
the lamps on based on whether the CI was failing.
The really cool thing about the lava lamps
is that if you look over and you see the red lava lamp on
and the little bubbles are floating around,
then you know it's been red for a while,
that it's been broken for a long time.
But if the green one is on and completely stable and not moving,
then someone just fixed it.
Whenever I think of lava lamps,
I think of the Cloudflare wall.
Do you know that?
Oh yeah.
They have a wall full of lava lamps
and because the movements inside the lamps are pretty random,
they use that to generate random numbers,
which is like, what the hell?
That's crazy.
What a world we live in.
In a way it's automation, right?
So what did we miss about how GitHub actions work?
You can compose actions together.
There's a lot of syntax for that.
Do this when this happens.
Do that when that does not happen.
Things like that, I'm guessing.
I got the feeling that you can't do this.
You published Elm Publish Action.
That's what it's called, right?
That's the thing about literal naming, right?
Elm package called Elm package, right.
So if I wanted to reuse that one and another one
or maybe some additional script
and I wanted to wrap that itself into one action,
can I do that?
Can I compose actions together like that
or do I need to tell people,
please copy paste this portion of this script
and put it into yours?
I created the Elm publish action code.
It's just JavaScript code.
It's actually TypeScript
and it transpiles down to JavaScript.
You have such a sour face right now.
It's JavaScript crap.
No, it's great.
I think JavaScript is kind of a good tool for the job.
It's got its blemishes,
but I'd rather use that than most things
because it's just a,
it's ultimately just some JavaScript calls.
They call some APIs.
It calls this Octokit thing
and this GitHub Actions core package (@actions/core),
which allows you to write out debug statements
if you want to, or print things to the logs.
And it sort of,
I think it's the same as Travis in this way,
where they use some conventions,
like certain prefixes,
to make sections expandable or to highlight certain things
as errors and things like that.
Just like ANSI color codes in the terminal.
That sort of deal.
But they encapsulate that
using these like JavaScript NPM packages
that you pull into your code
and do it in a higher level way.
But so basically, I don't think there's a way,
because it's not a workflow.
It's just JavaScript code
and you call some code
but it actually doesn't run like a YAML workflow.
So you can't call other people's workflows
and encapsulate those.
You just kind of would have to create your own thing.
But it works pretty well
and it's kind of tricky like testing things out
and making sure everything works.
But it's quite nice
being able to encapsulate these things.
How do you go about testing those actually
and writing one maybe first?
Well, you can start with their starter repos.
And I mean, you can write tests with a bajillion mocks.
I kind of didn't find much value in that.
So to be honest, I kind of,
I mean, I think of it as a script.
And like, so I kind of test out the script
and make sure it works
and build it in such a way that it's robust
and not going to do anything dangerous.
Like for example, you and I discussed this, Jeroen,
when I built this, that I added a dry run feature
to Elm Publish Action.
So you can see if everything checks out
and if it's going to publish.
And I don't accept your GitHub token in that mode,
and I'll actually reject it if you pass it in.
And so that's like a safety measure
where it's like, okay,
you don't need to test that.
Just don't pass your GitHub token.
And you can be confident
that if anything goes wrong,
it would be that it crashes
rather than publishing a package.
So you have some load,
but it's basically a way of nicely encapsulating a script.
And most of these things are scripting tasks
Like Matthieu Pizenberg created this GitHub action
that encapsulates the Elm tooling CLI
for projects that are not using NPM,
because like we talked about in our Elm tooling episode,
you need npx to sort of install
and run these dependencies.
But for projects that don't use that,
he created a GitHub action that encapsulates that.
And like that's another great use case.
Like there's really not a lot of logic to it.
It's just sort of conveniently wiring some things for you.
And so I would say like probably
the Unix tool chain philosophy
is a good strategy for publishing
these little standalone actions
that each do one thing well.
It could send a tweet, install a tool,
cache something, whatever that may be.
And the gluing is done by listing them in the workflow
in the GitHub configuration YAML.
And sometimes it's a bit of a pain to glue them together,
but it's better to have that flexibility
than have one tool that does everything
except for this one use case where it doesn't work.
Also, we haven't mentioned,
we haven't talked about caching,
but it's worth knowing how caching works.
So I recommend looking at the example setup
that Simon created for the Elm Tooling CLI.
He's got a nice workflow, sort of well documented
with a bunch of comments.
So basically the way that it works
is there are these steps in your GitHub Actions workflow,
in this YAML file, and you enumerate these steps,
and you can list a step that uses actions/cache.
And all that's going to do is you give it a cache key
and to get that cache key,
you hash the files that determine
whether it's a cache hit or a cache miss.
So for NPM projects,
that would be the package-lock.json file.
And for Elm projects,
that would be the Elm.json file,
but it would also be the review slash Elm.json file
if you have an Elm review configuration
because that's a separate Elm.json file
with its own Elm dependencies.
But you're caching the Elm home directory
which keeps a cache of the packages installed
throughout that entire project.
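Put together, the caching step described here might look like this sketch (the action version and exact paths are assumptions):

```yaml
# Cache ELM_HOME, keyed on the files that list dependencies;
# a different hash means a cache miss and a fresh download.
- uses: actions/cache@v3
  with:
    path: ~/.elm
    key: elm-${{ hashFiles('elm.json', 'review/elm.json') }}
```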
So what happens if the cache is a miss?
Then it's just not going to download the cache, right?
Yeah, that's my understanding as well.
But then you will still need to do the task
of installing NPM packages or...
The way that it works,
basically there's like a...
I think this is like a general feature of GitHub actions.
I don't use it for Elm publish action,
but I think that anybody can hook into this functionality
that has like setup tasks and tear down tasks.
And so for the caching action,
and again, it's just one of these little
Unix tool chain pieces,
this small thing that does one job well.
And so for the caching action,
the setup is check if it's a cache hit
and then restore the cache if so.
And then for the tear down,
you can cache the cached locations
under a particular, like push it up
to this temporary storage location,
which like GitHub actions has a way of sort of pushing up
tar balls somewhere and then pulling them down
and restoring them.
So that's basically how the caching works.
So it's interesting because it is kind of low level,
but I think it's because it's that
Unix tool chain philosophy.
So you sort of like,
you have a node and then you run the GitHub actions caching
for NPM and then you NPM install.
But I think that's just the philosophy
that they've gone with is to be explicit about these things
and let you configure them.
But so basically, for the tear down to work,
you have to npm install in the appropriate places
and you have to elm make so that things get installed,
and that will be the cache that gets saved.
So there's one thing about caching
that never properly understood.
So the way it works, as we said before,
is it uses the actions slash cache and a version of that.
You give it a path to restore the cache to,
I guess, and a key,
which determines whether there will be a hit or a miss.
But it doesn't say what to do if the cache is a miss.
So that's something that you run in a later step.
But when does it then save that cache?
Does it do that at the end of the workflow?
And if so, only if it succeeds? Do you know that?
I do know that it saves the cache
at the end of the workflow.
That's like that tear down thing I was talking about.
I'm actually not sure if it, well, I mean,
it's going to save the cache no matter what, right?
Whether it's a, well, I guess if it's a cache hit,
then it doesn't save it, right?
It doesn't need to resave it.
Because it restored the previous version.
So it won't save it again if it doesn't need to.
And it will save a fresh cache if it was a cache miss.
Right? So that's what the core thing does.
But it also provides an output.
So these are sort of the basic building blocks
of these actions you piece together.
So you can do your own steps, which are just like running
a bash script or a little bash command,
or these sort of things that are published in the marketplace
as little JavaScript snippets of code,
like Elm Publish Action and the actions/cache.
Either way, you can have inputs to that action
that they depend on and outputs that they pass out.
And the actions cache actually provides an output
for it was a cache hit.
Yeah, so you can do, if there was a cache hit,
then you don't have to rerun npm install.
Yeah, exactly.
Yeah, that's what Simon does in his boilerplate
for Elm tooling CLI.
He skips npm CI if it was a cache hit.
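That pattern looks roughly like this (a sketch, not Simon's exact workflow):

```yaml
- id: cache-node-modules
  uses: actions/cache@v3
  with:
    path: node_modules
    key: node-${{ hashFiles('package-lock.json') }}
# Only install if the cache missed:
- if: steps.cache-node-modules.outputs.cache-hit != 'true'
  run: npm ci
```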
All right.
I feel like we went through everything, right?
I think we've largely exhausted our knowledge on this topic.
I'm certainly not an expert on GitHub actions,
but I use them a lot, and I get a lot of value from them.
So I hope there were some useful tips in there for people.
And there are a lot of resources online too.
I think a lot of GitHub actions is just really getting creative
with what you do with it.
Yeah, obviously learning the basics,
but after that getting creative.
By learning the basics, do you mean Googling
every single time you use it?
Because it's YAML and you can never remember it?
And talk about Norway a lot.
What's that?
In YAML, if you have NO, that means false.
But if you use that as the Norway, like the two letter,
the country code, well, you have trouble.
Yes, that's true.
Also dates.
If you do dates, it will automagically parse them
into these weird date objects.
And you're like, what?
That's not the date I wrote.
Turns out you have to put it between quotes
to have the literal "2021-03-15".
Otherwise it'll parse that into a UTC time.
Okay, so next time you do a date, don't do YAML,
go to a restaurant or something.
That's right.
YAML would be a very bad date idea.
It would be a terrible date night.
You can just, yeah, no.
You feel bad about yourself?
Because you probably should a little bit.
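For the record, the YAML gotchas mentioned above look like this:

```yaml
country: NO            # YAML 1.1 parses NO as the boolean false
country_ok: "NO"       # quoted, it stays the string "NO"
date: 2021-03-15       # many parsers turn this into a timestamp
date_ok: "2021-03-15"  # quoted, it stays a literal string
```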
So how should people get started?
Besides just scouring through the documentation?
What I'm doing is just taking a look
at existing ones.
Mostly Simon Lydell's GitHub actions setup.
And yeah, going through the docs from GitHub.
Do you have good courses or good resources?
There are a couple. Brian Douglas works at GitHub,
and he's been publishing a lot of stuff about it.
So he's got like a series, a blog post series,
and a lot of videos.
It's almost like an Advent style thing, with one thing
he posted every day for a month recently.
So that's useful.
And then Edward Thomson, who was one of the product managers
that developed GitHub Actions as a product,
has an actual GitHub Actions Advent calendar.
So kind of similar things there.
An actual one.
It's useful to just look at lots of examples, as you say.
You can look at a bunch of other Elm open source projects
or look at these cool things people are doing with it
in these blog posts.
Well thanks so much for listening everyone.
And be sure to leave us a review on Apple Podcasts
if you enjoy the podcast.
Tell a friend, if you've got an Elm friend
who doesn't know about us, let them know
and send us a tweet.
Let us know what question you'd like us to talk about
or go to slash question.
And as always, have a good one.
Have a good one.